Theorem Proving Guided Development of Formal Assertions in a Resource-Constrained Scheduler for High-Level Synthesis

Abstract

This paper presents a formal specification and a proof of correctness for the widely-used Force-Directed List Scheduling (FDLS) algorithm for resource-constrained scheduling of data flow graphs in high-level synthesis systems. The proof effort is conducted using a higher-order logic theorem prover. During the proof effort many interesting properties of the FDLS algorithm are discovered. These properties are formally stated and proved in a higher-order logic theorem proving environment. They constitute a detailed set of formal assertions and invariants that should hold at various steps in the FDLS algorithm. They are then inserted as programming assertions in the implementation of the FDLS algorithm in a production-strength high-level synthesis system. When turned on, the programming assertions (1) certify whether a specific run of the FDLS algorithm produced correct schedules and (2) in the event of failure, help discover and isolate programming errors in the FDLS implementation. We present a detailed example and several experiments to demonstrate the effectiveness of these assertions in discovering and isolating errors. Based on this experience, we discuss the role of the formal theorem proving exercise in developing a useful set of assertions for embedding in the scheduler code and argue that in the absence of such a formal proof checking effort, discovering such a useful set of assertions would have been an arduous if not impossible task.

Introduction

... and often expensive ramifications, as they could lead to the synthesis and, ultimately, the fabrication of incorrect designs.
Therefore, reliability and correctness of these tools have become important issues which need to be addressed.
Simulation has traditionally been used to check the correct operation of digital systems. However, with the increase
in system functionality and complexity, simulation is proving to be inadequate due to the computational demands of the
task. In formal verification, correctness is established independent of the input values to the design. Thus,
exhaustive validation is implicit. Formal verification approaches like theorem proving and model checking techniques
are powerful techniques employed to formally verify RTL designs. Both support use of formal specification languages
with rigorous semantics and formal proof methods that support mechanization. An RTL design synthesized from a
real-world specification often comprises a large state space. This makes formal verification of the RTL design as a
post-design activity rather tedious, often resulting in a tremendous strain on the verification tool and the verification
engineer. In a model checking environment, increase in RTL design size results in a combinatorial explosion in the
number of global states. Theorem proving is not limited by design size but tends to be extremely tedious requiring a lot
of time and user interaction. Moreover in the synthesis of an RTL design from high-level synthesis, the information on
how the specification was refined into an implementation is no longer available at the end of synthesis. This compounds
the problems faced by formal verification techniques further.
Several researchers have proposed alternatives to the post-synthesis verification effort. The idea of formal hardware
synthesis was originally proposed by Johnson [19]. He presented a technique [19] for deriving synchronous-system
descriptions from recursive function definitions, using correctness preserving functional algebraic transformations.
Since then several techniques have been proposed that attempt to guarantee "correct by construction" synthesized
designs, eliminating the need for a separate verification stage [4, 7, 11, 20]. These techniques employ formal logic
and require the user to closely interact with the synthesis tool as the specification is refined into an implementation.
Eisenbiegler et al. [1] introduced a general scheme for formally embedding high level synthesis algorithms in HOL [6].
High Level Synthesis is performed by a sequence of logical transformations. The input specification (data flow graph)
is fed both to the synthesis system and to the theorem prover. After each stage in synthesis, the control information
is passed on to the HOL environment, where the corresponding transformation is executed within the logic. If the
control information generated by the external system is faulty, the corresponding transformation cannot be performed
within the logic and an exception is raised. This approach is probably the only reported attempt at formalizing a
conventional synthesis process. However, the methodology requires a tight integration between the synthesis system
and a formal verification environment.
Formal Verification of RTL designs generated in a conventional high-level synthesis environment has long been a
challenge. In HLS [3, 8, 14], a behavioral specification goes through a series of transformations, leading finally to an
RTL design that meets a set of performance goals. An HLS flow usually comprises the following four
main stages:
- The Scheduling Stage: This stage specifies the partial order between the operations in the input specification.
  Each data and control operation is bound to a time step. The operations may also be mapped to functional
  units in a component library during this stage.
- The Register Allocation Stage: Carriers in the specification, and those additionally identified during the scheduling
  phase, are mapped to physical registers.
- The Binding Stage: Interconnection among the different components in the design is established.
- The Control Generation Stage: A controller is generated to sequence the operations in the RTL design.
We propose a Formal Assertions approach to building a formal high-level synthesis system. The approach works under
the premise that if each stage in a synthesis system (scheduling, register optimization, interconnect optimization,
etc.) can be verified to perform correct transformations on its input, then, by compositionality, we can assert that
the resulting RTL design is equivalent to its input specification. This divide-and-conquer approach to verification
is a powerful technique and has been well researched in the area of transformation-based synthesis systems, but the
algorithmic complexities involved in conventional HLS render the verification process in a formal proof system rather
complex. Our approach attempts to bypass this problem. The formal assertions approach is not limited by the state
space of the design since we never attempt to verify the RTL design directly, only the changes made to it during the
process of synthesis.
In this paper, we look closely at an important stage in high-level synthesis, namely the Scheduling stage. We will
illustrate our approach by verifying an implementation of this stage in a conventional HLS system. The formalism
achieved in the context of a theorem proving exercise is embedded within the synthesis domain. An appealing aspect
of our approach is that the verification exercise is conducted within the framework of a conventional synthesis system
thus avoiding the complexities involved in integrating the synthesis system with a verification tool. By seamlessly
integrating with conventional automatic synthesis, the formal assertions approach introduces a formalism into the
synthesis sub-tasks which is transparent to the designer.
Section 2 gives an outline of our verification approach. In Section 3, we will introduce the verification problem
and present the core set of correctness conditions. In Section 4, we will present a well-known scheduling algorithm,
formulate the conditions for our verification approach, and discuss the proof strategy. Section 5 discusses the
applicability of the proof exercise within the context of a high-level synthesis environment. Results are presented in
Section 7, the scope of our verification approach is discussed in Section 8, and conclusions are drawn in Section 9.
Assertions based Verification - An Outline
Our verification approach is based on higher-order logic theorem proving leading to formal assertions in program code.
Each stage in high-level synthesis is well understood and its scope well-defined. The input specification as it passes
through each stage in synthesis, undergoes very specific modifications that bring it closer to the final RTL design. It
is therefore possible to capture the specification of each stage in synthesis in a precise manner.
2.1 Verification Outline
- Characterization: Identify a base specification model for each synthesis task. This model should cover all aspects
  of correctness for the particular synthesis task. With each task in synthesis being well-defined, the base
  specification model is usually a tight set of correctness properties that completely characterizes the synthesis
  task.
- Formalization: The specification model is now formalized as a collection of theorems in a higher-order logic
  theorem proving environment; these form the base formal assertions. An algorithm is chosen to realize the
  corresponding synthesis task and is described in the same formal environment.
- Verification: The formal description of the algorithm is verified against the theorems that characterize the base
  specification model. Inconsistencies in the base model are identified during the verification exercise. Further,
  the model is enhanced with several additional formal assertions derived during verification. The formal assertions
  now represent the invariants in the algorithm.
- Formal Assertions Embedding: Develop a software implementation of the algorithm that was formally verified
  in the previous stage. Embed the much enhanced and formally verified set of formal assertions within this
  software implementation as program assertions. During synthesis, the implementation of each task is continually
  evaluated against the specification model specified by these assertions, and any design error during synthesis can
  be detected. A high-level synthesis system wherein each task is embedded with its formal specification model
  constitutes a Formal Synthesis System that strives to generate error-free RTL designs.
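As an illustration of the embedding step, a synthesis task could be wrapped as follows. This is a minimal sketch with hypothetical names: `schedule_dfg` and the two checker functions are stand-ins, not the paper's actual implementation.

```python
def check_existence(nodes, sched):
    # Base assertion: every operation is mapped to a positive time step.
    return all(sched.get(n, 0) > 0 for n in nodes)

def check_dependencies(edges, sched):
    # Base assertion: every edge (src, dst) is scheduled in order.
    return all(sched[src] < sched[dst] for (src, dst) in edges)

def scheduling_stage(nodes, edges, resources, schedule_dfg):
    sched = schedule_dfg(nodes, edges, resources)
    # Formally derived assertions turned into program assertions: a faulty
    # run is flagged immediately instead of propagating an incorrect design.
    assert check_existence(nodes, sched), "existence violated"
    assert check_dependencies(edges, sched), "dependency violated"
    return sched
```

The point of the pattern is that the checks come from the verified specification model, not from the scheduler's own logic, so a bug in the scheduler cannot silently satisfy them.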
In the rest of the paper, we will explain our verification approach in the context of the scheduling task in high-level
synthesis.
Scheduling Task and Base Specification Model
The Scheduling task is one of the most crucial steps in high-level synthesis since it directly impacts the tradeoff
between design cost and performance. It maps each operation in the specification to a time step given constraints
imposed by the input specification and the user. These time steps correspond to clock cycles in the RTL design.
Scheduling can be done either under resource constraints (design area and component library) or under time constraints
(design speed). In this paper we illustrate our verification technique on a resource-constrained scheduling algorithm.
The scheduling stage views the input specification as a dependency graph. A dependency graph is a directed acyclic
graph (DAG) where each vertex is associated with an operation in the specification and the presence of an edge between
any two vertices in the graph denotes a data dependency or control dependency between the operations associated
with the vertices. Data dependencies capture the operation of the assignment statements in the input specifications
and hence their order of execution. Control dependencies make up the semantics of conditional and loop constructs.
Consider a simple dependency graph as shown in Figure 1. OP1, OP2, OP3 and OP4 denote the four vertices in the
graph. The operation type is specified inside each vertex. The primary inputs feed OP1, OP2 and OP3. The edges
denote data dependencies and specify the order in which the operations should be executed in the final RTL design.
In addition to a dependency graph, the scheduling stage expects a valid module bag of library components that have
enough functionality to fully implement all operations in the specification. This module bag is typically generated in
the module generation stage that usually precedes scheduling in the synthesis flow.
3.1 Base Specification Model for Scheduling Task
Let N be the set of operation nodes and E be the set of dependency edges in the dependency graph. Let R bag denote
the bag of resources available and sched func be the schedule function that maps every operation in the graph to a
positive time step. Given this, the following three correctness conditions capture the base specification model:
1. Existence: The schedule function maps each and every operation in the input specification to a positive time
step.
Figure 1: A Simple Dependency Graph
2. Dependency Preserved: If a directed edge exists between any two operations in the graph, the operation at
the source of the edge will be scheduled at an earlier time step than the operation at its destination.

   For all (n1, n2) in E: sched func(n1) < sched func(n2)

Thus, in Figure 1, OP2 should always be scheduled before OP4.
3. Resource Sufficiency: Operations mapped to the same time step must satisfy resource constraints¹. Let T
be the set of all time steps to which operations in the graph are scheduled. Let OTS be the set of all operator
types in the graph and op map be the function that maps an operator type to the set of operations of that type at time step
n. Let Rmap map an operator type to all the modules in R bag that can implement the operator type.

   For all n in T, for all ot in OTS: card(op map(ot, n)) <= card(Rmap(ot))

The two "+" operations, OP1 and OP2, in the dependency graph shown in Figure 1 can be scheduled in the
same time step provided R bag has at least two adders.
Every correct schedule should satisfy the above three correctness conditions². These properties are far removed from
any algorithmic details and so hold for the entire class of resource-constrained scheduling algorithms.
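The three conditions can be stated as executable predicates. The following Python sketch uses hypothetical names and a simplification in which each operator type has its own module count (`r_bag` maps type to count); it is an illustration of the conditions, not the paper's checker.

```python
from collections import Counter

def existence(nodes, sched):
    # Condition 1: every operation gets a positive time step.
    return all(sched.get(n, 0) >= 1 for n in nodes)

def dependencies_preserved(edges, sched):
    # Condition 2: every edge (u, v) implies sched(u) < sched(v).
    return all(sched[u] < sched[v] for (u, v) in edges)

def resource_sufficient(nodes, sched, op_type, r_bag):
    # Condition 3: per time step and operator type, concurrent operations
    # must not exceed the number of matching modules.
    per_step = Counter((sched[n], op_type[n]) for n in nodes)
    return all(count <= r_bag.get(ot, 0)
               for (_, ot), count in per_step.items())
```

On the Figure 1 example, scheduling OP1 and OP2 (both "+") in the same step passes the third check only if the bag contains at least two adders.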
In our treatment of the scheduling stage in this paper, we have concentrated on the primary function of the scheduler:
time-stamping the operations in the input specification. In most high-level synthesis systems, in addition to this
primary function, the scheduling stage also performs functional unit allocation, either concurrently or in a stepwise
refinement manner. Our methodology can be quite easily extended to reflect any additional tasks performed by the
scheduling stage. For the sake of this paper, we will assume that the scheduling stage does not support multi-cycling
or chaining of operations. Therefore, an operation in the input specification consumes exactly one time step. We will
further assume that the high-level synthesis system generates non-pipelined RTL designs and therefore the scheduling
task does not have to consider structural pipelining or functional pipelining issues. Additional correctness conditions
can be easily included in the base specification model in order to reflect any or all of these extensions to the primary
scheduling task.
¹ We assume that there is at least one resource in R bag that can implement each operation in the graph.
² We can conceivably add a 4th condition that captures a desirable property in a scheduler: a tightness property stating that every time
step between 1 and the maximum time step for the scheduled graph must have at least one operation scheduled. Let Tmax be the maximum
time step to which nodes in the input graph are scheduled. We can state this property as: for all t, 1 <= t <= Tmax, there exists n in N such that sched func(n) = t.
In the following sections, we will discuss our verification strategy by closely looking at a scheduling algorithm used
widely in high-level synthesis: the Force Directed List Scheduling algorithm proposed by Paulin and Knight [13].
Theory - Formal Verification of a Scheduling Algorithm
Force Directed List Scheduling (FDLS) [13] is a resource-constrained scheduling algorithm. It is a popular scheduling
technique, widely used in many synthesis tools in the current literature. The FDLS algorithm, shown in Figure 2,
is based on the classic list scheduling algorithm [16]. It uses a global measure of concurrency throughout the
scheduling process [13].
4.1 Overview of Force Directed List Scheduling Algorithm
All operations in the dependency graph are sorted in topological order based on the control and data dependencies.
In each time step T step, a list of operations that are ready to be scheduled, called the ready list L ready, is formed.
This includes all operations whose predecessor operations have been scheduled. As long as the resource bag R bag is
insufficient to schedule the operations in L ready, the inner while loop (see Figure 2) keeps deferring an operation in
each iteration. In order to select an operation to defer, a deferral force is calculated for each of the ready list operations,
and the least-force operation is picked. When the resources are sufficient, the remaining operations in the ready list
are scheduled in the current time step.
The first step in computing deferral forces is to determine the time frames for every operation by evaluating their ASAP
(as soon as possible) and ALAP (as late as possible) schedules. The next step is to determine the distribution graphs, which are a measure of
the concurrency of similar operations [12]. Finally, the forces for each operation are computed given their time frames
and the distribution graphs. After the forces for all operations have been calculated, the deferral that
produces the lowest force is chosen. This is repeated until resources are sufficient to implement the operations in the
ready list. All operations in the pruned ready list are now assigned to a time step.
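The control flow just described can be sketched in Python. The force computation is stubbed out (`deferral_force` is a hypothetical placeholder passed in as a function), since real FDLS derives forces from time frames and distribution graphs; the resource test is the simplified per-type count.

```python
def fdls_sketch(nodes, edges, op_type, r_bag, deferral_force):
    # Assumes a DAG and (per footnote 1) at least one module per operator type.
    preds = {n: [u for (u, v) in edges if v == n] for n in nodes}

    def insufficient(ops):
        # True if some operator type has more ready operations than modules.
        counts = {}
        for n in ops:
            counts[op_type[n]] = counts.get(op_type[n], 0) + 1
        return any(c > r_bag.get(t, 0) for t, c in counts.items())

    sched, tstep = {}, 1
    while len(sched) < len(nodes):
        # Ready list: unscheduled ops whose predecessors are all scheduled.
        ready = [n for n in nodes if n not in sched
                 and all(p in sched for p in preds[n])]
        # Inner loop: defer the least-force operation until resources suffice.
        while ready and insufficient(ready):
            ready.remove(min(ready, key=deferral_force))
        for n in ready:
            sched[n] = tstep
        tstep += 1
    return sched
```

Deferred operations simply reappear in a later ready list, which is how the algorithm trades schedule length for resource usage.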
4.2 Formalization of Base Specification Model
The base specification model for the scheduling task described earlier in Section 3 is now formalized as theorems in
higher-order logic. The formulation is done in the PVS (Prototype Verification System) [17] specification language
environment. In order to understand the formal specification model better we will first introduce basic type information
about the formal model of the FDLS algorithm. Based on this, we will describe the formalization of the correctness
properties as theorems in higher-order logic.
4.2.1 FDLS Type Specification
The input to the scheduling algorithm is a dependency graph. The operations in this input specification form the
nodes in the graph and the dependencies between operations are represented by directed edges. The type specification
shown in Figure 3 describes the type structure of the input specification. Operations can be arithmetic, logical,
conditional etc. We therefore model the operation node, op node, for any desired set of data values by declaring it
as an uninterpreted type. It thus behaves as a placeholder for an arbitrary set of data values. The + in the type
declaration denotes that op node is a nonempty type and that there has to exist at least one variable of this type to
avoid certain typecheck conditions. dep edge is declared as an interpreted type. It is actually a type name for a tuple
type and defines an ordered pair of op node, thus capturing the semantics of a directed edge in the input dependency
Force Directed List Scheduling(DFG, R bag)
Begin
  Tmax := Critical Path Length in the DFG
  T step := 1
  while (T step <= Tmax)               /* Each iteration corresponds to a T step */
    Evaluate Time Frames
    L ready := { All operations whose time frames intersect with T step }
    while (R bag not sufficient)       /* Need to defer an operation */
      Compute deferral forces
      Op := Operation in L ready with the least force
      L ready := L ready - {Op}        /* Defer the operation */
      if (Op in critical path) then
        Tmax := Tmax + 1
        Evaluate Time Frames
    end while
    for each (operation Op in L ready)
      Schedule Op at T step
    end for
    T step := T step + 1
  end while
End

Figure 2: Force Directed List Scheduling Algorithm
graph. The tuple type op graph captures the semantics of the input dependency graph. It is defined to be an ordered
pair of sets. The first projection is of type op node set, while the second projection is of type dep edge set and is
actually a dependent type defined in terms of the first projection. This ensures that an input dependency graph g is
well-formed, in the sense that its edges are defined only in terms of the nodes in the graph.
In addition to the input dependency graph, the scheduling task expects a bag of resources and an initial schedule
function. In the type declaration shown in Figure 3, module is an uninterpreted nonempty type. A bag of modules is
derived from it to represent the input resource bag. The scheduling function is of type schedule and maps the domain
of op node to the naturals.
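The well-formedness constraint carried by the dependent type can be mirrored in a conventional language. The following Python sketch is an assumed representation, not the PVS model: it enforces at construction time that edges connect only nodes of the graph and have distinct endpoints.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OpGraph:
    nodes: frozenset   # op node set (first projection)
    edges: frozenset   # set of (src, dst) pairs (second projection)

    def __post_init__(self):
        # Dependent-type constraint from the PVS model: edges are defined
        # only over nodes of the graph, and endpoints are distinct.
        for (src, dst) in self.edges:
            assert src in self.nodes and dst in self.nodes, "dangling edge"
            assert src != dst, "self-dependency"
```

Where PVS discharges this obligation statically during typechecking, the runtime assertion catches a malformed graph the moment it is built.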
Figure 4 shows some of the relevant variables used in the description of the correctness theorems. The input graph is
defined in terms of a finite set of nodes N, and a set of edges E. Three function variables of type schedule are declared
and a module bag variable, Rbag is declared. The scheduling algorithm is represented by the fdls function. It is a
recursive function that takes an input graph, (N, E), a bag of resources Rbag, and an initial schedule (just a placeholder
function to initiate recursion). final sched func is the output function of the scheduling algorithm.
4.2.2 Existence Theorem:
The existence theorem is shown in Figure 5. This theorem states that, final sched func, the schedule function
obtained after the execution of the fdls function maps every operation in the input graph to a positive natural:
The fdls function expects three arguments: the dependency graph, a bag of resources and an initial scheduling
function. The output of the fdls function is the final schedule. The above theorem asserts that this final schedule
maps every node in the graph to a positive time step. The existence function accepts a set of nodes and a schedule
function and returns true if all nodes in the node set have been scheduled.
op-node: TYPE+;
dep-edge: TYPE = {e: [op-node, op-node] | PROJ_1(e) /= PROJ_2(e)};
op-graph: TYPE = [ns: finite-set[op-node],
                  {es: finite-set[dep-edge] |
                     FORALL (e: dep-edge): es(e) =>
                       ns(PROJ_1(e)) AND ns(PROJ_2(e))}];
module: TYPE+;
modules: TYPE = bag[module];
schedule: TYPE = [op-node -> nat];

Figure 3: Types for Formal Model of FDLS Algorithm
N: VAR finite-set[op-node];
E: VAR pred[dep-edge];
Rbag: VAR modules;
sched-func: VAR schedule;
init-sched-func: VAR schedule;
final-sched-func: VAR schedule;
Figure 4: Variables in PVS Model for Scheduling
4.2.3 Dependencies Preserved Theorem:
This theorem is shown in Figure 6. It ensures that the final schedule does not violate the dependencies specified in
the input dependency graph.
The dependencies preserved function takes a graph and the final schedule and ensures that the dependencies in the
graph are preserved by the final schedule. This function visits every edge in the input graph and checks if the partial
order is maintained by the schedule.
existence: THEOREM
  FORALL (N, E, Rbag, init-sched-func, final-sched-func):
    final-sched-func = fdls((N, E), Rbag, init-sched-func)
      => existence(N, final-sched-func)

existence(N, sched-func): bool =
  FORALL (n: op-node): N(n) => sched-func(n) > 0

Figure 5: Existence Theorem for Scheduling
dependencies-preserved: THEOREM
  FORALL (N, E, Rbag, init-sched-func, final-sched-func):
    final-sched-func = fdls((N, E), Rbag, init-sched-func)
      => dependencies-preserved((N, E), final-sched-func)

dependencies-preserved(og, sched-func): bool =
  FORALL (e: dep-edge): PROJ_2(og)(e) =>
    sched-func(PROJ_1(e)) < sched-func(PROJ_2(e))

Figure 6: Dependencies Preserved Theorem for Scheduling
resources-sufficient: THEOREM
  FORALL (N, E, Rbag, init-sched-func, final-sched-func):
    final-sched-func = fdls((N, E), Rbag, init-sched-func)
      => constraints-satisfied((N, E), Rbag, final-sched-func)

constraints-satisfied(og, Rbag, sched-func): bool =
  FORALL (tstep: posnat):
    resource-suff?({n: op-node | PROJ_1(og)(n) AND
                      sched-func(n) = tstep}, Rbag)

resource-suff?(ros: finite-set[op-node], Rbag: modules): bool =
  IF (EXISTS opnode-resource-map:
        FORALL (n1, n2: op-node): ros(n1) AND ros(n2) AND n1 /= n2 =>
          member(opnode-resource-map(n1), Rbag) AND
          member(opnode-resource-map(n2), Rbag) AND
          opnode-resource-map(n1) /= opnode-resource-map(n2))
  THEN true
  ELSE false
  ENDIF

Figure 7: Resources Sufficient Theorem for Scheduling
4.2.4 Resources Sufficient Theorem:
This theorem, shown in Figure 7, asserts that a correct scheduling function obeys the resource constraints specified
by the input resource bag. This is specific to a resource-constrained scheduling algorithm, which attempts to minimize
the number of time steps given a limited set of resources.
The schedule satisfies constraints function performs the resource suff? test at each time step. The
resource suff? function returns true if all operations in the given time step, denoted by S, can be executed by the
resources in Rbag. The resource suff? function is specified in the PVS model of the fdls theory and is shown in
Figure 7.
The resource suff? function accepts a set of selected nodes known as the ready set and a bag of resources. We declare
a function, opnode resource map that maps the set of op node to the resource bag Rbag. The resource suff? function
returns true if this mapping is injective and all nodes in ros are mapped to resources in Rbag.
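The injectivity requirement can be phrased as an executable search for a one-to-one assignment of operations to compatible modules. This Python sketch uses illustrative names and a brute-force search, suitable only for small ready sets, to capture the semantics of resource suff?.

```python
from itertools import permutations

def resource_suff(ready_ops, modules, can_implement):
    # modules is a list (a bag, so duplicates allowed);
    # can_implement(op, module) -> bool says which modules fit an operation.
    ops = list(ready_ops)
    if len(ops) > len(modules):
        return False
    # Try every injective assignment of operations to distinct modules.
    for assignment in permutations(range(len(modules)), len(ops)):
        if all(can_implement(op, modules[i])
               for op, i in zip(ops, assignment)):
            return True
    return False
```

A production checker would use bipartite matching instead of enumeration, but the predicate being decided is the same: an injective, compatibility-respecting map must exist.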
The three base theorems specify the functional correctness of a resource constrained scheduling algorithm. They make
no assumptions about the implementation details of the scheduling algorithm and assert properties that should be
satisfied by any correct scheduling task that attempts to time-stamp the input dependency graph.
We had two choices in modeling the FDLS algorithm in PVS: constructively by explicitly defining how the result
of the algorithm is to be constructed, or descriptively, i.e. by stating a set of properties (axioms) the algorithm is
to satisfy. We chose the former style since it was most conducive to our top-down approach to verifying the FDLS
algorithm. Each function in the algorithm is constructively defined, and we use the PVS language mechanisms to
ensure that all functions, and hence the algorithm, are well defined and total. A purely descriptive style could introduce
inconsistent axioms and lemmas, although it would be very useful for under-specification. In fact, a judicious mix of
both styles of specification would best suit our verification strategy. Since our verification exercise concentrates on
developing the correctness properties of the scheduling task, we can axiomatize portions of the formal model of the
FDLS algorithm that pertain purely to optimization issues. For example, in the FDLS algorithm shown in Figure 2,
the Compute deferral forces function decides which operations from the ready set are selected to be scheduled at
the current time step. A set of axioms can quite easily be stated to capture the requirements of this function. The
exact construction of this function is not necessary to conduct our verification exercise.
4.3 Formal Verification of FDLS Algorithm
Given the formal model of the FDLS algorithm and the three theorems as formal assertions that capture the general
correctness criteria for the FDLS algorithm, we next verify that the formal model of this algorithm indeed satisfies
these theorems in the PVS proof checking environment. The PVS verifier employs a sequent calculus [18]. In a sequent,
the conjunction of the antecedents should imply the disjunction of the consequents. A sequent calculus proof can be
viewed as a tree of sequents whose root is a sequent of the form ⊢ A, where A is the property to be proved and the
antecedent is empty. A proof in PVS ends when every path from the root of the tree terminates in a leaf node, where
the implication is indeed true.
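For concreteness, the general shape of a sequent can be written as follows (a generic illustration, not a formula from the FDLS proof):

```latex
% A sequent with antecedents A_1..A_n and consequents B_1..B_m:
A_1, \ldots, A_n \vdash B_1, \ldots, B_m
% It is valid exactly when the conjunction of the antecedents
% implies the disjunction of the consequents:
(A_1 \land \cdots \land A_n) \Rightarrow (B_1 \lor \cdots \lor B_m)
```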
The fdls theory in PVS is shown in Figure 8. The fdls function is defined recursively and captures the semantics of
the FDLS algorithm shown in Figure 2. In the pseudo-code shown in Figure 2, the outermost while loop terminates
when T step exceeds Tmax. But inside the loop, Tmax is conditionally stretched. Modeling the loop structure as a recursive
function with the same terminating condition would lead to an ill-defined function. In a recursive function in PVS, we
are required to specify a MEASURE function to ensure that the recursive function is well-defined. The measure is applied
to the arguments of the recursive call and compared to the measure applied to the original arguments. In the above
case, the MEASURE function specifies that the cardinality of the set of unscheduled op node in the input graph should
reduce with each recursive call. Due to lack of space, the formalization of force and its update using time frames,
distribution graphs, calculation of ASAP and ALAP schedules are not shown in the PVS model in Figure 8. We will
present some insight into our verification approach by partially walking through a portion of the proof exercise for one
of the theorems, namely the existence theorem.
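The recursion structure and its termination measure can be mirrored as a runtime check. In this Python sketch, `schedule_one_step` is a hypothetical stand-in for one FDLS iteration, and the assert is the analogue of the PVS MEASURE obligation: the set of unscheduled nodes must strictly shrink on every recursive call.

```python
def fdls_rec(nodes, edges, sched, schedule_one_step):
    # nodes: frozenset of unscheduled operations; edges: set of (u, v) pairs.
    if not nodes:
        return sched
    newly_scheduled = schedule_one_step(nodes, edges, sched)
    remaining = nodes - newly_scheduled
    # MEASURE obligation: card(remaining) < card(nodes).
    assert len(remaining) < len(nodes), "measure does not decrease"
    next_step = max(sched.values(), default=0) + 1
    return fdls_rec(remaining,
                    # Restrict edges to the remaining (unscheduled) nodes.
                    {(u, v) for (u, v) in edges
                     if u in remaining and v in remaining},
                    {**sched, **{n: next_step for n in newly_scheduled}},
                    schedule_one_step)
```

If a buggy `schedule_one_step` ever schedules nothing, the assertion fires instead of the recursion diverging, which is exactly the failure mode the MEASURE rules out in PVS.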
The existence theorem states a truth about all nodes in the input graph. The proof for this theorem proceeds by
induction on the variable N, the set of op node, using an induction scheme on card(N). This results in a base case
that is easily discharged by grind, a built-in strategy, and an induction case that is displayed as a sequent in
Figure 9.
The proof goal is specified by formula [1] in the consequent. In the antecedent, formula {-1} reiterates the induction
step. By carefully studying the sequent, one can observe that the proof goal is actually embedded within formula [-3].
With appropriate proof steps to isolate the right side of the implication in formula [-3], we can, with proper instantiations,
extract a formula that matches the proof goal shown in formula [1]. In order to isolate the proof goal and
thus prove the theorem, we have to carefully introduce four additional lemmas into the specification. These lemmas
form the first hierarchy of lemmas for the Existence theorem and are categorized as Level 1 lemmas, as shown in
Figure 10. They assert a set of correctness properties that are very specific to the FDLS algorithm. For example, lemma
delete ros card lemma states that the function new unsched nodes always returns a set whose cardinality is always
smaller than the original node set N. In other words, it formalizes the assertion that a nonempty set of operators is
scheduler: THEORY
BEGIN
  IMPORTING fdls-types

  get-max-parent-tstep((R: finite-set[op-node]),
                       (og: op-graph), (sched-func: schedule)):
    ... IN IF empty?(P) THEN 0 ELSE maxrec(P) ENDIF

  get-max-sched((sched-func: schedule)): ...

  updated-sched((ros: non-empty-finite-set[op-node]),
                (sched-func: schedule), (max-sched: nat)):
    (LAMBDA (n: op-node):
       IF ros(n) THEN max-sched ... )

  updated-ros((ros: non-empty-finite-set[op-node]), (mbag: modules)): ...

  get-ros((og: op-graph)): {n: op-node | ...}

  final-ros((og: op-graph), (mbag: modules)): ...

  new-sched-func((og: op-graph), (mbag: modules), (sched-func: schedule)):
    updated-sched(final-ros(og, mbag), sched-func, get-max-sched(sched-func))

  new-edges((og: op-graph), (N: finite-set[op-node])):
    {e: dep-edge | PROJ_2(og)(e) AND N(PROJ_1(e)) AND N(PROJ_2(e))}

  new-unsched-nodes((og: op-graph), (mbag: modules)): ...

  fdls((og: op-graph), (mbag: modules), (sched-func: schedule)):
    RECURSIVE schedule =
      IF empty?(PROJ_1(og)) THEN sched-func
      ELSE fdls((new-unsched-nodes(og, mbag),
                 restrict(new-edges(og, new-unsched-nodes(og, mbag)))),
                mbag, new-sched-func(og, mbag, sched-func))
      ENDIF
    MEASURE card(PROJ_1(og))

END scheduler

Figure 8: Overview of PVS Theory for FDLS Algorithm
  ...
  AND
  final-sched-func!1 =
    fdls((new-unsched-nodes((S!1, restrict(E!1)), Rbag!1),
          restrict(new-edges((S!1, restrict(E!1)),
                             new-unsched-nodes((S!1,
                                                restrict(E!1)),
                                               Rbag!1)))),
         Rbag!1,
         new-sched-func((S!1, restrict(E!1)), Rbag!1, init-sched-func!1))
[-3] ...
  IMPLIES FORALL (E, Rbag, init-sched-func, final-sched-func):
    final-sched-func = fdls((S2, E), Rbag, init-sched-func)
      => existence(S2, final-sched-func)

Figure 9: Induction Step Proof Sequent in FDLS Algorithm Verification
scheduled in each recursive call of the fdls function. The new unsched nodes function in this lemma takes the given
graph and returns a smaller node set by removing from the graph the operators that were scheduled in the current
iteration. The proof steps for the delete ros card lemma lemma are shown in Table 11. The proof for this lemma is
easily discharged with the introduction of three additional lemmas as shown in the table. These lemmas introduced
to prove the delete ros card lemma form the second hierarchy or Level 2 lemmas.
We adopted a top-down approach to simplify the proof exercise. Theorems are proved using lemmas (Level 1 lemmas)
and other appropriate inference rules provided by the proof system. The proofs of the the Level 1 lemmas sometimes
require the introduction of additional lemmas (Level 2 lemmas) and sufficient care is taken to ensure that the lemmas
that are introduced are consistent and relevant to the verification exercise. These lemmas are next proved and in the
course of their proofs, additional lemmas are introduced. This process continues until no more additional lemmas need
to be introduced. A theorem is thus considered proved only when both it and the lemmas in all hierarchies
below it have been successfully proved. The top-down approach results in a well-structured proof exercise. In addition to
making the overall proof effort manageable, it has the added advantage of systematically deriving a large set of formal
correctness properties (lemmas).
In a large and complex task such as scheduling, it is rather difficult to identify the task invariants. This makes the
verification of such algorithms a hard problem. Our approach presents a systematic way to identify these invariants
and generate them in a formal environment. For the Existence theorem alone, a total of 26 invariants were formulated
as a part of the proof exercise. This large set of formally derived invariants provides considerably more insight into the
correctness issues concerning the existence theorem for the FDLS algorithm. A similar proof approach was adopted to
verify the other two theorems concerning dependency preservation and resource sufficiency. Thus, starting with three
base theorems, we were able to formulate a set of 43 lemmas or formal assertions as a consequence of the formal proof
system. These formal assertions were organized into four levels of hierarchy and assert several invariant properties of
the FDLS algorithm. Without introducing this formalism in our specification model, it would be very hard to identify
the task invariants or formal assertions, express them with precision and be assured of their correctness. In the next
section, we will show how we used the set of formal assertions that make up the enhanced specification model for the
delete-ros-card-lemma: LEMMA
  nonempty?(N) =>
    card(new-unsched-nodes((N, E), Rbag)) < card(N)
delete-existence-lemma: LEMMA
  FORALL (E: pred[dep-edge], N: finite-set[op-node],
          final-sched-func: schedule,
          init-sched-func: schedule, Rbag: modules):
    existence(new-unsched-nodes((N, E), Rbag), final-sched-func)
    AND
    existence(difference(N, new-unsched-nodes((N, E), Rbag)),
              new-sched-func((N, E), Rbag, init-sched-func))
    => existence(N, final-sched-func)
ros-construction-lemma: LEMMA
  FORALL (E: pred[dep-edge],
          N: finite-set[op-node], Rbag: modules, n: op-node):
    member(n, N)
    AND
    NOT member(n, new-unsched-nodes((N, E), Rbag))
    => member(n, final-ros((N, E), Rbag))
ros-existence-lemma: LEMMA
  FORALL (E: pred[dep-edge], N: finite-set[op-node],
          init-sched-func: schedule, mbag: modules, n: op-node):
    member(n, final-ros((N, E), mbag))
    => new-sched-func((N, E), mbag, init-sched-func)(n) /= 0
Figure 10: Level 1 Lemmas for Existence Theorem in FDLS Algorithm
FDLS algorithm to verify a C++ implementation of the FDLS algorithm.
Implementation - Formal Assertions Embedding in Program Code
In this section, we will discuss how we used the set of formal assertions that make up the specification model of the
FDLS algorithm to verify the scheduler implementation in an existing high-level synthesis system, DSS [5]. DSS accepts
algorithmic behavioral specifications written in a subset of VHDL and generates an RTL design also expressed in VHDL,
subject to the constraints on clock period, area, schedule length, and power dissipation. The scheduling phase in DSS
is implemented as a variation of the FDLS algorithm, extended to handle VHDL specifications with multiple processes,
signal assignments and wait statements. The FDLS algorithm is further enhanced to perform global process scheduling
such that operations across processes share the same data path resources. In addition to assigning timesteps, this
stage also binds operations to functional units in the target library. The scheduling stage currently does not support
pipelining, chaining or multicycling. The overall structure of the implemented scheduler in DSS is modeled closely on
the FDLS algorithm described in Section 4.
The theorems and lemmas formulated during the theorem proving exercise constitute a set of formal assertions and
invariants that represent the functional specification of the FDLS algorithm. If, during an execution run, the scheduler
is faithful to this formal specification model, we can assert that a correct schedule will be generated.
(ASSERT)
(ASSERT)
(ASSERT)
Figure 11: Proof Steps for delete ros card lemma
Since the formal specification model is formulated in higher-order logic and the implementation (the scheduler in DSS)
is in the C++ software domain, establishing the relationship Imp => Spec is not a straightforward procedure. The
formalized specification model is a set of formal assertions that specifies the invariants for different portions of the FDLS
algorithm. The formal assertions are translated into C++ program assert statements and embedded in portions of the
scheduler implementation that correspond to the spatial locality of the invariants in the formal model of the algorithm.
The scheduler is thus embedded with its formal specification model giving rise to an auto-verifying scheduler. For
the sake of illustration, Figure 12 shows the FDLS algorithm with a small sample of the formally derived program
assertions embedded within it. The three base program assertions correspond to the three base formal assertions stated
originally. We carefully translate these theorems to C++ assert statements and place them outside the body of the
FDLS implementation. Since they express correctness conditions that hold for any scheduling technique, they can be used as a checkpoint
order to ensure that the final state of the scheduler is not in violation of the three universal correctness properties.
Once the scheduler has completed execution and generated a schedule, the base program assertions are executed on
the schedule. If an incorrect schedule is generated, one of these assertions raises an exception. Thus, any schedule
that is generated by the formally embedded scheduler is guaranteed to be error-free. A correct scheduler is completely
specified by the three base assertions and they are capable of detecting any error in the scheduler implementation that
might result in an incorrect schedule.
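As a concrete illustration, the three base checks can be sketched in C++ as follows (the types, function names, and the single-module-type simplification are ours for illustration and are not DSS's actual data structures):

```cpp
#include <map>
#include <set>
#include <utility>

// Illustrative sketch of the three base correctness checks, run once after
// scheduling completes. A schedule maps each operation node to a nonzero
// timestep; an edge (a, b) means a must be scheduled strictly before b.
using Node = int;
using Edge = std::pair<Node, Node>;
using Schedule = std::map<Node, int>;

// existence: every node of the graph has been assigned a nonzero timestep.
bool existence(const std::set<Node>& nodes, const Schedule& s) {
    for (Node n : nodes) {
        auto it = s.find(n);
        if (it == s.end() || it->second == 0) return false;
    }
    return true;
}

// dep_preserved: every dependency edge is respected by the timestep order.
bool dep_preserved(const std::set<Edge>& edges, const Schedule& s) {
    for (const Edge& e : edges)
        if (s.at(e.first) >= s.at(e.second)) return false;
    return true;
}

// res_sufficiency: no timestep is assigned more operations than there are
// modules (simplified here to a single module type).
bool res_sufficiency(const Schedule& s, int modules) {
    std::map<int, int> load;
    for (const auto& kv : s)
        if (++load[kv.second] > modules) return false;
    return true;
}
```

In DSS the resource check must of course distinguish module types; the single-count version above only conveys the shape of the check.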
The base formal assertions, due to their spatial location in the code and their high-level notion of correctness, usually
do not provide any more useful information about an incorrect schedule beyond detecting its presence. The lemmas
and axioms that were systematically formulated as formal assertions and verified as a result of the PVS proof exercise
play an important role in error diagnostics. We will illustrate this by referring to the embedded FDLS algorithm
shown in Figure 12. The formal assertions, introduced at different levels of hierarchy during the proof exercise are
carefully translated into program assertions. The hierarchy is preserved in the organization of the program assertions.
Thus, Level 1 program assertions are narrower in scope, focusing on detecting errors in local areas
of the scheduler implementation. These assertions are embedded in the code as shown just after the inner while
loop and after the for loop and they assert the invariant properties of the loop statement. The delete ros card
assertion states an invariant property that is true at the end of every iteration of the outer while loop. By placing
it in that portion of the code, we ensure the identification of any violations to this assertion in every iteration of the
scheduler. Contrast this with the error showing up in the base assertions after all iterations of the scheduler have
been completed. Level 1 assertions thus offer better diagnostics and the user is promptly made aware of any error
in the implementation. As we proceed down the hierarchy of formal assertions, finer details of the implementation
are subjected to verification. The formal assertions that correspond to these levels are specific to verifying smaller
portions of the scheduler code and thus they can expose errors in the code and pinpoint them accurately. Going back
Force_Directed_List_Scheduling(DFG, R_bag)
Begin
  T_max := Schedule Length in the DFG
  Level 3 Assertions:
    schedule_invariant; ordered_schedule;
    strict_subset_nonempty
  while (T_step <= T_max)
    Evaluate_Time_Frames
    L_ready := { all operations whose time frames intersect with T_step }
    while (R_bag not sufficient)
      Compute_Forces
      Level 2 Assertions:
        ros_invariant; ros_construction;
        ros_nondependence; card_strict_sub
      Op := operation in L_ready with the least force
      L_ready := L_ready - {Op}
      if (Op in critical path) then
        Evaluate_Time_Frames
    end while
    Level 1 Assertions:
      final_ros_sufficiency; edge_dependence;
      final_ros_wellformed; ros_sufficiency
    for each (operation Op in L_ready)
      Schedule Op at T_step
    end for
    Level 1 Assertions:
      delete_ros_card; delete_existence;
      graph_wellformed; edge_wellformed
  end while
  Base Assertions:
    existence; dep_preserved; res_sufficiency
End
Figure 12: FDLS Algorithm with a Sample of Formal Assertions
to our previous example, although the program assertion delete ros card, by virtue of its position in the code, is
able to detect errors within one iteration of the outer while loop in the scheduler, it might still not be good
enough to locate the source of a problem. card strict subset was one of the lemmas used to complete the proof
of the delete ros card property in the theorem proving environment. The corresponding Level 2 formal assertion
verifies the invariance during each iteration of the inner while loop in the algorithm. The assertion concentrates on a
smaller portion of the implementation and can promptly detect any violations to the base assertions. The verification
approach can thus be extended to as many levels of hierarchy as there are in the formal assertion tree until the source
of an error is isolated.
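The level-by-level enabling described here can be pictured with a small helper (a sketch under our own naming; the DSS implementation actually gates its assert statements with conditional compilation):

```cpp
// Illustrative level-gated check (not DSS code): a check tagged with a
// hierarchy level is evaluated only when that level is enabled, mirroring
// how base, Level 1, and Level 2 assertions are switched on selectively.
static int g_assert_level = 0;   // 0 = base assertions only

// Returns true iff the check was evaluated and failed; records the first
// failure name so the user can trace the error to a specific assertion.
bool leveled_assert(int level, bool condition, const char* name,
                    const char** first_failure) {
    if (level > g_assert_level) return false;   // this level is disabled
    if (condition) return false;                // enabled and satisfied
    if (*first_failure == nullptr) *first_failure = name;
    return true;
}
```

Enabling a deeper level thus only adds checks; it never changes the verdict of the levels above it.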
Detection and Localization - An Example
DSS, since its inception, has been used to synthesize a large number of benchmarks and other large-scale specifications.
These design examples were carefully chosen to test a wide range of synthesis issues and ranged from single process
arithmetic-dominated specifications to multiple process specifications with complicated synchronization protocols and
various combinations of control constructs like conditionals and loops. In fact, this effort was part of a concerted
...
assert(dependencies_preserved(fdls_map));
...
Figure 13: Base Program Assertions in Scheduler
attempt to systematically validate DSS using simulation and formal techniques [9, 10, 15]. During the course of this
exercise, sometimes incorrect RTL designs were synthesized. Analysis of these faulty designs eventually led to the
discovery of implementation errors in the HLS system. Notably, some of the errors in the RTL designs were attributed
to conceptual flaws in the scheduler implementation. These errors were identified using systematic simulation methods
and traditional software debugging aids. Although this exercise led to an increased confidence in the reliability of the
synthesis system, given the limited number of test cases involved, one could never be sure of isolating all bugs in the
system. Also, the complexity of the synthesis system itself often rendered the error trace-back quite laborious and time
consuming. With the formal assertions approach, we hope to address both problems in validating synthesized RTL
designs. In particular, since the formal specification model of the scheduler is embedded within its implementation (as
C++ assertions), an incorrect schedule is almost always guaranteed to violate this specification model. This violation
is immediately flagged as an exception and the user is notified. By properly enabling the formally derived program
assertions, the trace back to the source of the bug can be performed almost effortlessly.
To illustrate our approach, we will walk through an error detection exercise in the scheduling stage of the synthesis
system using the formal assertions technique. The formally embedded scheduler was seeded with an error that would
result in the synthesis of an RTL design with an incorrect schedule. We begin by enabling only the base program
assertions during the first run of the scheduler. Since these assertions are checked only once during an execution
run, the overhead introduced by them is minimal. If necessary, we could then systematically enable the levels of
hierarchy in the formal assertions tree to build an error trace that would guide us to the problem area in the scheduler
implementation. The bug in the scheduler code fires the dependencies preserved base program assertion during
the synthesis of an RTL design. As shown in Figure 13, this assertion is situated at the end of the scheduling task
along with the other two base assertions. In the figure, the function fdls graph implements the FDLS algorithm and
returns a final schedule fdls map. The base assertions are placed outside the actual implementation of the scheduler.
The assertion that checks to see if the schedule preserves the dependencies (boldfaced and italicized in the figure) in
the input graph fails due to the error introduced in the code. This tells us that the final schedule somehow violates
the partial order specified by the input graph. In order to get more information about the source of the error, the
Level 1 program assertions are enabled. The scheduler is executed again for the same set of test cases and this time the
assertion failure occurs within the body of the fdls graph function. The portion of the scheduler function that has
the embedded assertion is shown in Figure 14. The assertion edge dependency lemma is placed just at the termination
of the inner while loop. This assertion is a C++ translation of the following PVS lemma.
if (unsched_nodes(p) == ready_list(n)) {
  break;
}
// end inner while loop
assert(edge_dependency_lemma(ready_list,
       unsched_nodes, original_unsched_nodes));
...
assert(delete_ros_card_existence_lemma(unsched_nodes.length(), ...));
...
Figure 14: Level 1 Program Assertions in Scheduler
FORALL (og: op-graph, e: dep-edge):
  member(e, PROJ-2(og))
  AND NOT member(e, new-edges(og, new-unsched-nodes(og, mbag)))
  => member(PROJ-1(e), final-ros(og, mbag))
     AND member(PROJ-2(e), new-unsched-nodes(og, mbag))
The above lemma states that, if an edge is present in the current graph but is not present in the next update of the
graph, one of its nodes must be in the ready set ROS, and the other node must be in the updated graph. Thus the
failure of this formal assertion (shown boldfaced in Figure 14) gives the user some insight into the nature of the error.
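A C++ rendering of such a check might look as follows (a sketch over illustrative set-based types; the real assertion operates on DSS's internal graph structures):

```cpp
#include <set>
#include <utility>

// Sketch of a boolean check in the spirit of the edge dependency lemma:
// for every edge dropped between the current graph and its next update,
// one endpoint must be in the ready set (it was just scheduled) and the
// other endpoint must survive into the updated node set.
using Node = int;
using Edge = std::pair<Node, Node>;

bool edge_dependency_lemma(const std::set<Edge>& old_edges,
                           const std::set<Edge>& new_edges,
                           const std::set<Node>& ready_set,
                           const std::set<Node>& new_nodes) {
    for (const Edge& e : old_edges) {
        if (new_edges.count(e)) continue;   // edge survived: nothing to check
        bool ok = (ready_set.count(e.first) && new_nodes.count(e.second)) ||
                  (ready_set.count(e.second) && new_nodes.count(e.first));
        if (!ok) return false;
    }
    return true;
}
```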
The portion of the scheduler code around this formal assertion was examined closely for errors but no immediate
cause for error was discovered. The Level 2 assertions were now enabled in the hope that they would provide more
information on the cause of the error. The scheduler is executed again and this time one of the assertions in the Level 2
class of program assertions fails. A snapshot of the code is shown in Figure 15. This time the ros nondependence
program assertion (shown boldfaced in Figure 15) fails. It is placed just before the inner while loop that
checks for resource sufficiency in Figure 12. This assertion is translated from the following PVS lemma.
ros-nondependence: LEMMA
  FORALL (og: op-graph, n: op-node, m: op-node):
    member(n, ros) AND member(m, ros)
    => NOT member((n, m), PROJ-2(og)) AND
       NOT member((m, n), PROJ-2(og))
This lemma states a property that all nodes in the ready set, ros, must satisfy. This property asserts that no two
nodes in the ready set can have an edge between them. This means that the ready set is composed of nodes that
have no dependency relations among them. Clearly, this assertion failure indicates that the current ready set somehow
violates this property. So we only need to look at the routine that builds the ready set in order to find the error.
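The violated property itself is straightforward to state as a boolean check (a sketch with illustrative types): no two members of the ready set may be connected by a dependency edge in either direction.

```cpp
#include <set>
#include <utility>
#include <vector>

using Node = int;
using Edge = std::pair<Node, Node>;

// Sketch of the ros_nondependence program assertion: no dependency edge
// may connect two members of the ready set, in either direction.
bool ros_nondependence(const std::vector<Node>& ready,
                       const std::set<Edge>& edges) {
    for (Node n : ready)
        for (Node m : ready)
            if (edges.count({n, m}) || edges.count({m, n}))
                return false;
    return true;
}
```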
// Build the ready list
while (n != ...) {
  // Check if there is any unscheduled parent
  all_parents_are_scheduled = ...
  /* error */ if (p != ...)        // this if should be a while construct
    ...
  if (!all_parents_are_scheduled)
    break;
  if (all_parents_are_scheduled) {
    ...
  }
}
assert(ros_nondependence(ready_list, unsched_nodes));
...
// Defer operations until resource constraints are met
while (...)
Figure 15: Level 2 Program Assertions in Scheduler
Upon examining this portion of the code, which is located just above the failed assertion, it can be easily noticed that
the selection process is the culprit. The selection process erroneously admits a node into the ready set even when one
of its parents has not been scheduled yet. This is caused by the if statement in the code shown in Figure 15. The
selection process should admit a node into the ready set only after all of its parents have been scheduled. This can be
achieved by replacing the erroneous if construct by a while construct.
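The corrected admission test can be sketched as follows (illustrative names and types, not the DSS source); the loop makes explicit that every parent is examined before an operation is admitted into the ready set:

```cpp
#include <set>
#include <utility>
#include <vector>

using Node = int;

// An operation may enter the ready set only after ALL of its parents have
// been scheduled, so the check must iterate over every parent instead of
// testing a single one with an if.
bool all_parents_scheduled(const std::vector<Node>& parents,
                           const std::set<Node>& scheduled) {
    for (Node p : parents)
        if (!scheduled.count(p)) return false;
    return true;
}

// ops: (operation, list of its parents); returns the admissible ready set.
std::set<Node> build_ready_set(
        const std::vector<std::pair<Node, std::vector<Node>>>& ops,
        const std::set<Node>& scheduled) {
    std::set<Node> ready;
    for (const auto& op : ops)
        if (!scheduled.count(op.first) &&
            all_parents_scheduled(op.second, scheduled))
            ready.insert(op.first);
    return ready;
}
```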
By constraining the scheduler implementation to abide by its formal specification at every step of the execution run,
we can ensure a very efficient and reliable error detection and trace mechanism. Errors can be traced back to their
source using a technique of systematically enabling higher levels of formal assertions.
7 Errors Discovered by Formal Assertions
In order to test the effectiveness of the formal assertions approach, the synthesis system was seeded with 6 programming
errors in the formally embedded scheduling stage. These errors represent actual implementation errors that were earlier
discovered over a period of time through traditional validation techniques like simulation and code walk throughs. It
was hoped that our approach would serve two purposes: discover all 6 seeded errors with little or no user
intervention, and provide an error trace to the source of the program errors in the scheduler. We executed the synthesis
system for a number of design examples.
Table 1 shows the details of the experiment. The seven examples range in size from as little as 12 operation nodes up
to as many as 300 operation nodes.

Test    (op nodes)   Assertion levels that detect the errors
...         26       {base,1,2}  pass      {base}  {base,1}  {1}  pass      pass
Test3       50       {base,1,2}  {base,1}  {base}  pass      {1}  pass      pass
Test6      200       {base,1,2}  {base,1}  {base}  pass      {1}  {base,1}  {base}
Table 1: Verification Results for a Formally Embedded Scheduler
Columns 3-7 tabulate the error detection results for each of the five program errors detected during the execution of
the scheduler code. The entries in these columns indicate the levels of formal assertions that were needed to pinpoint
the source of the error. All the pass entries in Table 1 indicate successful execution. For these test cases, the program
errors had no adverse effects and a correct schedule was generated by the implementation. So none of the formal
assertions were triggered. If base formal assertions were sufficient to ascertain the source of an error, then base alone
appears in the corresponding entry of the table. Thus, Error 3 was detected by one of the base formal assertions for
all test cases. Next observe the results for Error 1. This error involved an incorrect way of building the ready set and
was earlier illustrated in the previous section when we walked through an error detection exercise. For one of the test
cases, the error went undetected since none of the nodes had more than one parent, and as a result the error
in the code did not result in an incorrect schedule. For the rest of the test cases, the formal assertions, base through
level 2 detected this error. In all these cases, although the error was detected by all three levels, formal assertions up
to level 2 had to be enabled in order to pinpoint the source of the error.
Error 5 was first discovered with the formal assertions approach. The error had hitherto escaped detection even during
simulation and code inspection. The defer operation routine was the culprit. The available resources were incorrectly
analyzed while building the ready list. As a result, there was a discontinuity in the schedule numbers assigned to the
operators in the input specification. This conceptual error was detected by enabling the Level 1 program assertions.
Error 6 was introduced in the ASAP and ALAP routines. It manifests itself in the last three test cases and in fact
needed formal assertions up to the level 1 hierarchy to be enabled in order to locate the source of the error. Error 6 had
no effect over the schedule for the first four test cases since they had enough resources to schedule their operations
and hence the bug goes undetected. Error 7 represents a program error in the force calculation routine. The scheduler
generated correct schedules for test cases 1, 2, and 3 since there were enough resources to schedule the operations and
hence the defer operation routine was never executed while scheduling these test cases. In the remaining four test
cases, the error was discovered and its source pinpointed by the base assertions.
The overhead of the formal assertions approach is not significant. Table 2 shows timing information for the test cases
presented above. This experiment was conducted on a Sparc 5 workstation with 60Mb resident memory. The second
column represents FDLS algorithm run time with none of the assertions enabled. The entries in the third column
denote the run times for the algorithm with the three base assertions enabled. The increase in run times is hardly
noticeable and this is to be expected since the base assertions are evaluated just once after the scheduling algorithm
assigns the time steps. The fourth column represents run times when all levels of program assertions are enabled in
the algorithm. There is an appreciable increase in run times since the assertions embedded within the algorithm are
now evaluated several times during the execution of the algorithm. It can be quite clearly seen that the overhead
introduced by the formal assertions approach does not pose any serious problems to the performance of the FDLS
algorithm. Typically, base assertions can be switched on during the normal synthesis process. An assertion failure
signals that the synthesis process is at fault somewhere in the implementation. The design can then be re-synthesized
Test No.    FDLS Algorithm run time in seconds
            No assertions   Base assertions   All assertions
...
Table 2: Run-time Overhead due to Formal Assertions in Scheduler
by enabling lower levels of program assertions in order to trace back to the source of a program error in the synthesis
system.
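The re-synthesis loop just described can be sketched as a small driver (hypothetical; `run` stands in for one synthesis run at a given assertion level and returns the name of the first failing assertion, or an empty string when the run passes):

```cpp
#include <functional>
#include <string>

// Hypothetical driver (not part of DSS) illustrating the trace-back loop:
// re-run the scheduler with progressively more assertion levels enabled,
// remembering the most specific (deepest-level) failing assertion name.
std::string localize_error(const std::function<std::string(int)>& run,
                           int max_level) {
    std::string last = "";
    for (int level = 0; level <= max_level; ++level) {
        std::string failed = run(level);
        if (failed.empty()) break;   // no failure at this level: stop
        last = failed;               // a more specific diagnosis
    }
    return last;
}
```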
8 Scope of the Verification Effort
The formal assertions technique ensures the detection of any incorrect schedule: a schedule that directly or
indirectly violates the base formal assertions. Therefore, the validity of the verification approach hinges on the
completeness of the set of theorems that make up our base specification model. The lemmas formulated
during the theorem proving exercise are usually limited to identifying errors that result in violations of any of the base
theorems. In the above experiment, it was observed that except for Error 5, all errors that led to an incorrect schedule
were identified by the base assertions. This reinforced our confidence in the completeness of the set of base theorems.
Error 5 exploited the absence of a base correctness condition that ensured the tightness property discussed earlier in
the second footnote at the end of Section 3.1. There we discussed the possibility of adding a fourth base theorem that
captured the so-called tightness property, which states that there cannot be a discontinuity in the schedule of the input
graph. Since this was not strictly a correctness issue we did not include it in our base theorems. But it so happened
that one of the level 1 lemmas (engendered by the induction strategy in our deductive analysis) used to prove the
base theorems explicitly specified this property. This explains why Error 5 was discovered by level 1 assertions but
slipped past the base assertions. Typically, the lower level assertions and axioms enable error detection at a finer
granularity than is possible with the base assertions; their error-detecting capabilities are more limited in scope, but
more specific, than those of the base formal assertions.
After verification, the formal assertions are translated into program assertions in the synthesis system. Given the
expressive differences between the logic and software domain, the translation process often could get quite complicated.
Ultimately, the correctness of the formal assertions approach hinges on this translation process. Convenient data
structures allow us to conduct the translation process. Sometimes the theorems cannot be
translated directly in the software domain. In such cases, we develop equivalent formal assertions amenable to the
software domain and then formally establish the equivalence relationships to ensure that the translation process is
indeed correct.
Portability issues of the formal assertions approach need to be addressed. Base assertions, typically, can be quite easily
ported across different algorithms that perform the same task in synthesis. Lower level assertions formulated during
the course of the formal proof exercise present limited portability. These formal assertions with some modifications can
be fairly easily ported across implementations that belong to the same class of algorithms. Portability across classes
of algorithms could be restricted and would require additional proof exercises in order to formulate appropriate formal
assertions.
The formal assertions approach verifies a single execution run of the synthesis process and guarantees a correct design
if the specification is not violated in the process of synthesis. It is therefore entirely possible that bugs in the
synthesis system go undetected until they manifest themselves during an execution, as shown in Table 1. The bugs are
exposed as soon as they introduce errors in the RTL designs being synthesized.
9 Conclusions and Future Work
Insertion of assertions and invariants [2] in programs has been known to be an effective technique for establishing the
correctness of the outcome of executing the program and for discovering and isolating errors. However, determination
of an appropriate set of assertions is often a tedious and error-prone task in itself. In this paper, we made use of
mechanical theorem proving to systematically discover a set of sufficiently capable assertions.
We presented a formal approach to verifying RTL designs generated during high-level synthesis. The verification
is conducted through program assertions discovered in a theorem proving environment. In this paper, we focused
on the resource-constrained scheduling task in synthesis. Correctness conditions for resource-constrained scheduling
have been formally specified in higher-order logic and a formal specification of the FDLS algorithm is verified using
deductive techniques. A large set of additional properties is systematically discovered during the verification exercise.
All of these properties are then embedded as program assertions in the implementation of the scheduling algorithm
in a high-level synthesis tool. These assertions act as watchpoints that collectively ensure the detection of any errors
during the synthesis process.
An appealing aspect of this approach is the systematic incorporation of design verification within a traditional high-level
synthesis flow. We conduct the formal verification exercise of the synthesized RTL design in the synthesis
environment as the design is being synthesized, avoiding the need for functional verification of the synthesized design
later using a formal verification tool or a simulator. The time taken for our "on-the-fly" verification approach scales
tolerably with the size of the design being synthesized. This is in contrast with blind post-facto simulation, model
checking or theorem proving based verification approaches that do not use any reasoning based on the properties of
the synthesis algorithms.
One criticism of this approach may concern the care and effort involved in the manual process of converting the
formal assertions in higher-order logic into program assertions in C++. In our experience, this indeed proved to be
a process requiring considerable diligence. Often, we had to express the formal assertions in several different ways
in higher-order logic, each time carefully constructing the necessary data-structures in C++ to enable their implementation
as program assertions. This process had to be repeated until we discovered a form for the formal assertion
that lent itself to straightforward transliteration into C++. We estimate the entire process of FDLS formalization,
verification and embedding of the assertions in the implementation took about 260-300 person hours.
Another criticism of this approach concerns the sufficiency of the assertions to isolate an error. An error cannot
be caught at its source, but only when it first causes an assertion violation. However, this is a problem with all
assertion-based approaches for program correctness. The sufficiency of the base correctness conditions is never formally
established; these conditions represent a formalization of our intuitive understanding of what a scheduler should do.
Effort is currently underway to adapt the verification strategy presented in this paper to formalize all the stages of a
high-level synthesis system. This approach will allow early detection of errors in the synthesis process before the RTL
design is completely generated.
--R
"Implementation Issues about the Embedding of Existing High Level Synthesis Algorithms in HOL"
"The Science of Programming"
"High-Level Synthesis,Introduction to Chip and System Design"
"Integration of Formal Methods with System Design"
"DSS: A Distributed High-Level Synthesis System"
"Introduction to HOL"
"An Engineering Approach to Formal System Design"
"Synthesis and Optimization of Digital Circuits"
"Synchronous Controller Models for Synthesis from Communicating VHDL Processes"
"Validation of Synthesized Register-Transfer Level Designs Using Simulation and Formal Verification"
"From VHDL to Efficient and First-Time Right Designs: A Formal Approach"
"Force Directed Scheduling for the Behavior Synthesis of ASICs"
"Scheduling and Binding Algorithms for High-Level Synthesis"
"High-Level VLSI Synthesis"
"Experiences in Functional Validation of a High Level Synthesis System"
"Some Experiments in Local Microcode Compaction for Horizontal Machines"
"PVS: A Prototype Verification System"
"User Guide for the PVS Specification and Verification System, Language and Proof Checker"
"Synthesis of Digital Designs from Recursion Equations"
"On the Interplay of Synthesis and Verification"
"A Survey of High-Level Synthesis Systems"
| formal assertions;scheduler verification;formal synthesis;formal verification;high-level synthesis;theorem proving |
569032 | Model Checking of Safety Properties. | Of special interest in formal verification are safety properties, which assert that the system always stays within some allowed region. Proof rules for the verification of safety properties have been developed in the proof-based approach to verification, making verification of safety properties simpler than verification of general properties. In this paper we consider model checking of safety properties. A computation that violates a general linear property reaches a bad cycle, which witnesses the violation of the property. Accordingly, current methods and tools for model checking of linear properties are based on a search for bad cycles. A symbolic implementation of such a search involves the calculation of a nested fixed-point expression over the system's state space, and is often infeasible. Every computation that violates a safety property has a finite prefix along which the property is violated. We use this fact in order to base model checking of safety properties on a search for finite bad prefixes. Such a search can be performed using a simple forward or backward symbolic reachability check. A naive methodology that is based on such a search involves a construction of an automaton (or a tableau) that is doubly exponential in the property. We present an analysis of safety properties that enables us to prevent the doubly-exponential blow up and to use the same automaton used for model checking of general properties, replacing the search for bad cycles by a search for bad prefixes. | Introduction
Today's rapid development of complex and safety-critical systems requires reliable verification methods. In formal verification, we verify that a system meets a desired property by checking that a mathematical model of the system meets a formal specification that describes the property. Of special interest are properties asserting that the observed behavior of the system always stays within some allowed set of finite behaviors, in which nothing "bad" happens. For example, we
*This research is supported by BSF grant 9800096.
†Address: School of Computer Science and Engineering, Hebrew University, Jerusalem 91904, Israel. Email: orna@cs.huji.ac.il
‡Address: Department of Computer Science, Rice University, Houston, TX, USA. Supported in part by NSF grant CCR-9700061, and by a grant from the Intel Corporation.
may want to assert that every message received was previously sent. Such properties of systems are called safety properties. Intuitively, a property ψ is a safety property if every violation of ψ occurs after a finite execution of the system. In our example, if in a computation of the system a message is received without previously being sent, this occurs after some finite execution of the system.
In order to formally define what safety properties are, we refer to computations of a nonterminating system as infinite words over an alphabet Σ. Typically, Σ = 2^AP, where AP is the set of the system's atomic propositions. Consider a language L of infinite words over Σ. A finite word x over Σ is a bad prefix for L iff for all infinite words y over Σ, the concatenation x·y of x and y is not in L. Thus, a bad prefix for L is a finite word that cannot be extended to an infinite word in L. A language L is a safety language if every word not in L has a finite bad prefix. For example, if Σ = {0, 1}, then L = {0^ω, 1^ω} is a safety language. To see this, note that every word not in L contains either the sequence 01 or the sequence 10, and a prefix that ends in one of these sequences cannot be extended to a word in L. The definition of safety we consider here is given in [AS85]; it coincides with the definition of limit closure defined in [Eme83], and is different from the definition in [Lam85], which also refers to the property being closed under stuttering.
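As a concrete check of the definition, bad prefixes for the example language L = {0^ω, 1^ω} can be recognized by a single scan: a finite word is a bad prefix exactly when it mixes both letters. A minimal sketch in Python (our own illustration, not part of the paper):

```python
def is_bad_prefix(x: str) -> bool:
    """A finite word over {0,1} is a bad prefix for L = {0^w, 1^w}
    exactly when it contains '01' or '10': once both letters occur,
    no infinite extension can be constant."""
    return "01" in x or "10" in x

assert not is_bad_prefix("0000")   # still extendable to 0^w
assert is_bad_prefix("0001")       # no extension stays in L
```

Note that, as the text observes, every finite extension of a bad prefix remains bad: the mixed pair of letters never disappears.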
Linear properties of nonterminating systems are often specified using Büchi automata on infinite words or linear temporal logic (LTL) formulas. We say that an automaton A is a safety automaton if it recognizes a safety language. Similarly, an LTL formula is a safety formula if the set of computations that satisfy it forms a safety language. Sistla shows that the problem of determining whether a nondeterministic Büchi automaton or an LTL formula is safety is PSPACE-complete [Sis94] (see also [AS87]). On the other hand, when the Büchi automaton is deterministic, the problem can be solved in linear time [MP92]. Sistla also describes sufficient syntactic requirements for safe LTL formulas. For example, a formula (in positive normal form) whose only temporal operators are G (always) and X (next) is a safety formula [Sis94]. Suppose that we want to verify the correctness of a system with respect to a safety property. Can we use the fact that the property is known to be a safety property in order to improve general verification methods? The positive answer to this question is the subject of this paper.
Much previous work on verification of safety properties follows the proof-based approach to verification [Fra92]. In the proof-based approach, the system is annotated with assertions, and proof rules are used to verify the assertions. In particular, Manna and Pnueli consider verification of reactive systems with respect to safety properties in [MP92, MP95]. The definition of safety formulas considered in [MP92, MP95] is syntactic: a safety formula is a formula of the form Gφ, where φ is a past formula. The syntactic definition is equivalent to the definition discussed here [MP92]. Proof-based methods are also known for the verification of liveness properties [OL82], which assert that the system eventually reaches some good set of states. While proof-rule approaches are less sensitive to the size of the state space of the system, they require heavy user support. Our work here considers the state-exploration approach to verification, where automatic model checking [CE81, QS81] is performed in order to verify the correctness of a system with respect to a specification. Previous work on this subject considers special cases of safety and liveness properties, such as invariance checking [GW91, McM92, Val93, MR97], or assumes that a general safety property is given by the set of its bad prefixes [GW91].
General methods for model checking of linear properties are based on a construction of a tableau or an automaton A_¬ψ that accepts exactly all the infinite computations that violate the property ψ [LP85, VW94]. Given a system M and a property ψ, verification of M with respect to ψ is reduced to checking the emptiness of the product of M and A_¬ψ [VW86a]. This check can be performed on-the-fly and symbolically [CVWY92, GPVW95, TBK95]. When ψ is an LTL formula, the size of A_ψ is exponential in the length of ψ, and the complexity of verification that follows is PSPACE, with a matching lower bound [SC85].
Consider a safety property ψ. Let pref(ψ) denote the set of all bad prefixes for ψ. For example, pref(Gp) contains all finite words that have a position in which p does not hold. Recall that every computation that violates ψ has a prefix in pref(ψ). We say that an automaton on finite words is tight for a safety property ψ if it recognizes pref(ψ). Since every system that violates ψ has a computation with a prefix in pref(ψ), an automaton tight for ψ is practically more helpful than A_¬ψ. Indeed, reasoning about automata on finite words is easier than reasoning about automata on infinite words (cf. [HKSV97]). In particular, when the words are finite, we can use backward or forward symbolic reachability analysis [BCM+92]. In addition, using an automaton for bad prefixes, we can return to the user a finite error trace, which is a bad prefix, and which is often more helpful than an infinite error trace.
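For a concrete instance, a two-state deterministic automaton on finite words is tight for Gp: it moves to a trapping accepting state as soon as some position violates p. A sketch (our own illustration; a trace is a list of booleans giving the truth of p at each position):

```python
def accepts_pref_Gp(trace):
    """Deterministic automaton for pref(Gp): enter (and never leave)
    the 'bad' state once a position violates p; 'bad' is accepting,
    so exactly the bad prefixes of Gp are accepted."""
    bad = False
    for p_holds in trace:
        if not p_holds:
            bad = True
    return bad

assert accepts_pref_Gp([True, True, False])   # a bad prefix of Gp
assert not accepts_pref_Gp([True, True])      # still extendable
```

The accepted word itself is the finite error trace mentioned in the text.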
Given a safety property ψ, we construct an automaton tight for ψ. We show that the construction involves an exponential blow-up in the case ψ is given as a nondeterministic Büchi automaton, and involves a doubly-exponential blow-up in the case ψ is given in LTL. These results are surprising, as they indicate that detection of bad prefixes with a nondeterministic automaton has the flavor of determinization. The tight automata we construct are indeed deterministic. Nevertheless, our construction avoids the difficult determinization of the Büchi automaton for ψ (cf. [Saf88]) and just uses a subset construction.
Our construction of tight automata reduces the problem of verification of safety properties to the problem of invariance checking [Fra92, MP92]. Indeed, once we take the product of a tight automaton with the system, we only have to check that we never reach an accepting state of the tight automaton. Invariance checking is amenable to both model checking techniques and deductive verification techniques [BM83, SOR93, MAB+94]. In practice, the verified systems are often very large, and even clever symbolic methods cannot cope with the state-explosion problem that model checking faces. The way we construct tight automata also enables, in case the BDDs constructed during the symbolic reachability test get too large, an analysis of the intermediate data that has been collected. The analysis can lead to a conclusion that the system does not satisfy the property without further traversal of the system.
In view of the discouraging blow-ups described above, we relax the requirement on tight automata and seek, instead, an automaton that need not accept all the bad prefixes, yet must accept at least one bad prefix of every computation that does not satisfy ψ. We say that such an automaton is fine for ψ. For example, an automaton that recognizes p*·(¬p) does not accept all the words in pref(Gp), yet is fine for Gp. In practice, almost all the benefit that one obtains from a tight automaton can also be obtained from a fine automaton. We show that for natural safety formulas ψ, the construction of an automaton fine for ψ is as easy as the construction of A_ψ.
To formalize the notion of "natural safety formulas", consider the safety formula ψ = G(p ∨ (Xq ∧ X¬q)). A single state in which p does not hold is a bad prefix for ψ. Nevertheless, this prefix does not tell the whole story about the violation of ψ. Indeed, the latter depends on the fact that Xq ∧ X¬q is unsatisfiable, which (especially in more complicated examples) may not be trivially noticed by the user. So, while some bad prefixes are informative, namely, they tell the whole violation story, other bad prefixes may not be informative, and some user intelligence is required in order to understand why they are bad prefixes (the formal definition of informative prefixes is similar to the semantics of LTL over finite computations, in which Xtrue does not hold in the final position).
The notion of informative prefixes is the basis for a classification of safety properties into three distinct safety levels. A property ψ is intentionally safe if all its bad prefixes are informative. For example, the formula Gp is intentionally safe. A property ψ is accidentally safe if every computation that violates ψ has an informative bad prefix. For example, the formula G(p ∨ (Xq ∧ X¬q)) is accidentally safe. Finally, a property ψ is pathologically safe if there is a computation that violates ψ and has no informative bad prefix. For example, the formula [G(q ∨ GFp) ∧ G(r ∨ GF¬p)] ∨ Gq ∨ Gr is pathologically safe. While intentionally safe properties are natural, accidentally safe and especially pathologically safe properties contain some redundancy, and we do not expect to see them often in practice. We show that the automaton A_¬ψ, which accepts exactly all infinite computations that violate ψ, can easily (and with no blow-up) be modified into an automaton A^true_¬ψ on finite words, which is tight for ψ that is intentionally safe, and is fine for ψ that is accidentally safe.
We suggest a verification methodology that is based on the above observations. Given a system M and a safety formula ψ, we first construct the automaton A^true_¬ψ, regardless of the type of ψ. If the intersection of M and A^true_¬ψ is not empty, we get an error trace. Since A^true_¬ψ runs on finite words, nonemptiness can be checked using forward reachability symbolic methods. If the product is empty, then, as A^true_¬ψ is tight for intentionally safe formulas and is fine for accidentally safe formulas, there may be two reasons for this. One is that M satisfies ψ; the second is that ψ is pathologically safe. To distinguish between these two cases, we check whether ψ is pathologically safe. This check requires space polynomial in the length of ψ. If ψ is pathologically safe, we turn the user's attention to the fact that his specification is needlessly complicated. According to the user's preference, we then either construct an automaton tight for ψ, proceed with usual LTL verification, or wait for an alternative specification.
So far, we discussed safety properties in the linear paradigm. One can also define safety in the branching paradigm. Then, a property, which describes trees, is a safety property if every tree that violates it has a finite prefix all of whose extensions violate the property as well. We define safety in the branching paradigm and show that the problems of determining whether a CTL or a universal CTL formula is safety are EXPTIME-complete and PSPACE-complete, respectively. Given the linear complexity of CTL model checking, it is not clear yet whether safety is a helpful notion for the branching paradigm. On the other hand, we show that safety is a helpful notion for the assume-guarantee paradigm, where safety of either the assumption or the guarantee is sufficient to improve general verification methods.
2 Preliminaries
2.1 Linear temporal logic
The logic LTL is a linear temporal logic. Formulas of LTL are constructed from a set AP of atomic propositions using the usual Boolean operators and the temporal operators X ("next time"), U ("until"), and V (the dual of "until"). Formally, given a set AP, an LTL formula in positive normal form is defined as follows:
- true, false, p, or ¬p, for p ∈ AP.
- ψ_1 ∧ ψ_2, ψ_1 ∨ ψ_2, Xψ_1, ψ_1 U ψ_2, and ψ_1 V ψ_2, where ψ_1 and ψ_2 are LTL formulas.
For an LTL formula ψ, we use cl(ψ) to denote the closure of ψ, namely, the set of ψ's subformulas. We define the semantics of LTL with respect to a computation π = σ_0, σ_1, σ_2, ..., where for every j ≥ 0, σ_j is a subset of AP, denoting the set of atomic propositions that hold in the j-th position of π. We denote the suffix σ_j, σ_{j+1}, ... of π by π^j. We use π |= ψ to indicate that an LTL formula ψ holds in the path π. The relation |= is inductively defined as follows:
- For all π, we have π |= true and π ⊭ false.
- For an atomic proposition p ∈ AP, we have π |= p iff p ∈ σ_0, and π |= ¬p iff p ∉ σ_0.
- π |= ψ_1 ∧ ψ_2 iff π |= ψ_1 and π |= ψ_2; similarly for ∨.
- π |= Xψ_1 iff π^1 |= ψ_1.
- π |= ψ_1 U ψ_2 iff there is k ≥ 0 such that π^k |= ψ_2 and π^i |= ψ_1 for all 0 ≤ i < k.
- π |= ψ_1 V ψ_2 iff for every k ≥ 0, if π^i ⊭ ψ_1 for all 0 ≤ i < k, then π^k |= ψ_2.
Often, we interpret linear temporal logic formulas over a system with many computations. Formally, a system is M = ⟨W, R, W_0, L⟩, where W is the set of states, R ⊆ W × W is a total transition relation (that is, for every w ∈ W there is at least one w' such that R(w, w')), the set W_0 ⊆ W is a set of initial states, and L : W → 2^AP maps each state to the set of atomic propositions that hold in it. A computation of M is a sequence w_0, w_1, w_2, ..., such that w_0 ∈ W_0 and for all i ≥ 0 we have R(w_i, w_{i+1}).
The model-checking problem for LTL is to determine, given an LTL formula ψ and a system M, whether all the computations of M satisfy ψ. The problem is known to be PSPACE-complete [SC85].
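The inductive semantics above translate directly into a recursive evaluator. The sketch below (our own illustration; formulas are encoded as nested tuples) evaluates a positive-normal-form formula over a finite trace, with a "strong" X that fails at the last position, in the spirit of the finite-computation semantics mentioned later for informative prefixes:

```python
def holds(f, trace, i=0):
    """Evaluate an LTL formula in positive normal form at position i
    of a finite trace (a list of sets of atomic propositions)."""
    op = f[0]
    if op == 'true':  return True
    if op == 'false': return False
    if op == 'ap':    return f[1] in trace[i]       # atomic p
    if op == 'nap':   return f[1] not in trace[i]   # negated atomic
    if op == 'and':   return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == 'or':    return holds(f[1], trace, i) or holds(f[2], trace, i)
    if op == 'X':     # strong next: fails at the last position
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == 'U':     # f[2] eventually, f[1] at every earlier position
        return any(holds(f[2], trace, k) and
                   all(holds(f[1], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    if op == 'V':     # dual of U: f[2] holds until f[1] "releases" it
        return all(holds(f[2], trace, k) or
                   any(holds(f[1], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    raise ValueError(op)

# Gp can be written as false V p: p at every position of the trace
Gp = ('V', ('false',), ('ap', 'p'))
assert holds(Gp, [{'p'}, {'p'}])
assert not holds(Gp, [{'p'}, set()])
```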
2.2 Safety languages and formulas
Consider a language L ⊆ Σ^ω of infinite words over the alphabet Σ. A finite word x ∈ Σ* is a bad prefix for L iff for all y ∈ Σ^ω, we have x·y ∉ L. Thus, a bad prefix is a finite word that cannot be extended to an infinite word in L. Note that if x is a bad prefix, then all the finite extensions of x are also bad prefixes. We say that a bad prefix x is minimal iff all the strict prefixes of x are not bad. A language L is a safety language iff every w ∉ L has a finite bad prefix. For a safety language L, we denote by pref(L) the set of all bad prefixes for L. We say that a set X ⊆ pref(L) is a trap for a safety language L iff every word w ∉ L has at least one bad prefix in X. Thus, while X need not contain all the bad prefixes for L, it must contain sufficiently many prefixes to "trap" all the words not in L. We denote the set of all traps for L by trap(L).
For a language L ⊆ Σ^ω, we use comp(L) to denote the complement of L; i.e., comp(L) = Σ^ω \ L. We say that a language L ⊆ Σ^ω is a co-safety language iff comp(L) is a safety language. (The term used in [MP92] is guarantee language.) Equivalently, L is co-safety iff every w ∈ L has a good prefix x ∈ Σ*, namely one such that for all y ∈ Σ^ω, we have x·y ∈ L. For a co-safety language L, we denote by co-pref(L) the set of good prefixes for L. Note that co-pref(L) = pref(comp(L)).
For an LTL formula ψ over a set AP of atomic propositions, let ‖ψ‖ denote the set of computations in (2^AP)^ω that satisfy ψ. We say that ψ is a safety formula iff ‖ψ‖ is a safety language. Also, ψ is a co-safety formula iff ‖ψ‖ is a co-safety language or, equivalently, ‖¬ψ‖ is a safety language.
2.3 Word automata
Given an alphabet Σ, an infinite word over Σ is an infinite sequence w = σ_0 · σ_1 · σ_2 ··· of letters in Σ. We denote by w^l the suffix σ_l · σ_{l+1} · σ_{l+2} ··· of w. For a given set X, let B⁺(X) be the set of positive Boolean formulas over X (i.e., Boolean formulas built from elements in X using ∧ and ∨), where we also allow the formulas true and false. For Y ⊆ X, we say that Y satisfies a formula θ ∈ B⁺(X) iff the truth assignment that assigns true to the members of Y and assigns false to the members of X \ Y satisfies θ. For example, the sets {q_1, q_3} and {q_2, q_3} both satisfy the formula (q_1 ∨ q_2) ∧ q_3, while the set {q_1, q_2} does not satisfy this formula. The transition function δ of a nondeterministic automaton with state space Q and alphabet Σ can be represented using B⁺(Q). For example, a transition δ(q, σ) = {q_1, q_2} can be written as δ(q, σ) = q_1 ∨ q_2. While transitions of nondeterministic automata correspond to disjunctions, transitions of alternating automata can be arbitrary formulas in B⁺(Q). We can have, for instance, a transition δ(q, σ) = (q_1 ∧ q_2) ∨ (q_3 ∧ q_4), meaning that the automaton accepts from state q a suffix w^l of w, starting by σ, if it accepts w^{l+1} from both q_1 and q_2 or from both q_3 and q_4. Such a transition combines existential and universal choices.
Formally, an alternating automaton on infinite words is A = ⟨Σ, Q, δ, Q_0, F⟩, where Σ is the input alphabet, Q is a finite set of states, δ : Q × Σ → B⁺(Q) is a transition function, Q_0 ⊆ Q is a set of initial states, and F ⊆ Q is a set of accepting states. While a run of a nondeterministic automaton over a word w can be viewed as a function r : IN → Q, a run of an alternating automaton on w is a tree whose nodes are labeled by states in Q. Formally, a tree is a nonempty set T ⊆ IN*, where for every x·c ∈ T with x ∈ IN* and c ∈ IN, we have x ∈ T. The elements of T are called nodes, and the empty word ε is the root of T. For every x ∈ T, the nodes x·c ∈ T are the children of x. A node with no children is a leaf. A path π of a tree T is a set π ⊆ T such that ε ∈ π and for every x ∈ π, either x is a leaf, or there exists a unique c ∈ IN such that x·c ∈ π. Given a finite set Σ, a Σ-labeled tree is a pair ⟨T, V⟩ where T is a tree and V : T → Σ maps each node of T to a letter in Σ. A run of A on an infinite word w = σ_0 · σ_1 ··· is a Q-labeled tree ⟨T_r, r⟩ with T_r ⊆ IN* such that r(ε) ∈ Q_0 and, for every node x ∈ T_r with |x| = l, there is a (possibly empty) set S = {q_1, ..., q_k} such that S satisfies δ(r(x), σ_l) and for all 1 ≤ c ≤ k, we have x·c ∈ T_r and r(x·c) = q_c. For example, if δ(q_in, σ_0) = (q_1 ∨ q_2) ∧ (q_3 ∨ q_4), then possible runs of A on w have a root labeled q_in, have one node in level 1 labeled q_1 or q_2, and have another node in level 1 labeled q_3 or q_4. Note that if for some y the function δ has the value true, then y need not have successors. Also, δ can never have the value false in a run. For a run r and an infinite path π ⊆ T_r, let inf(r|π) denote the set of states that r visits infinitely often along π; that is, inf(r|π) = {q ∈ Q : for infinitely many x ∈ π, we have r(x) = q}. As Q is finite, it is guaranteed that inf(r|π) ≠ ∅. When A is a Büchi automaton on infinite words, the run r is accepting iff inf(r|π) ∩ F ≠ ∅ for all infinite paths π in T_r. That is, iff every path in the run visits at least one state in F infinitely often.
The automaton A can also run on finite words in Σ*. A run of A on a finite word w = σ_0 · σ_1 ··· σ_{n−1} is a Q-labeled tree ⟨T_r, r⟩ with T_r ⊆ IN^{≤n}, where IN^{≤n} is the set of all words of length at most n over the alphabet IN. The run proceeds exactly like a run on an infinite word, only that all the nodes of level n in T_r are leaves. A run ⟨T_r, r⟩ is accepting iff all its nodes of level n visit accepting states; thus, iff for all nodes x ∈ T_r ∩ IN^n, we have r(x) ∈ F.
A word (either finite or infinite) is accepted by A iff there exists an accepting run on it. Note that while conjunctions in the transition function of A are reflected in branches of ⟨T_r, r⟩, disjunctions are reflected in the fact that we can have many runs on the same word. The language of A, denoted L(A), is the set of words that A accepts. As we already mentioned, deterministic and nondeterministic automata can be viewed as special cases of alternating automata. Formally, an alternating automaton is deterministic if for all q and σ, the formula δ(q, σ) is either false or a single state of Q, and it is nondeterministic if δ(q, σ) is always a disjunction over Q.
We define the size of an alternating automaton as the sum of |Q| and |δ|, where |δ| is the sum of the lengths of the formulas in δ. We say that the automaton A over infinite words is a safety (co-safety) automaton iff L(A) is a safety (co-safety) language. We use pref(A), co-pref(A), trap(A), and comp(A) to abbreviate pref(L(A)), co-pref(L(A)), trap(L(A)), and comp(L(A)), respectively. For an automaton A and a set of states S, we denote by A^S the automaton obtained from A by defining the set of initial states to be S. We say that an automaton A over infinite words is universal iff L(A) = Σ^ω. When A runs on finite words, it is universal iff L(A) = Σ*. An automaton is empty iff L(A) = ∅. A state q ∈ Q is nonempty iff A^{q} is nonempty. A set S of states is universal (resp., rejecting) when A^S is universal (resp., empty). Note that the universality problem for nondeterministic automata is known to be PSPACE-complete [MS72, Wol82].
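The satisfaction relation for formulas in B⁺(X) used throughout this section is straightforward to implement. A sketch (our own encoding of formulas as nested tuples):

```python
def satisfies(Y, theta):
    """Does the set Y (its members assigned true, everything else
    false) satisfy the positive Boolean formula theta?"""
    op = theta[0]
    if op == 'true':  return True
    if op == 'false': return False
    if op == 'var':   return theta[1] in Y
    if op == 'and':   return satisfies(Y, theta[1]) and satisfies(Y, theta[2])
    if op == 'or':    return satisfies(Y, theta[1]) or satisfies(Y, theta[2])
    raise ValueError(op)

# (q1 or q2) and q3 -- the example used in the text
theta = ('and', ('or', ('var', 'q1'), ('var', 'q2')), ('var', 'q3'))
assert satisfies({'q1', 'q3'}, theta)
assert satisfies({'q2', 'q3'}, theta)
assert not satisfies({'q1', 'q2'}, theta)
```

A nondeterministic transition is the special case where theta is a pure disjunction of states.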
We can now state the basic result concerning the analysis of safety, which generalizes Sistla's result [Sis94] concerning safety of LTL formulas.
Theorem 2.1 Checking whether an alternating Büchi automaton is a safety (or a co-safety) automaton is PSPACE-complete.
Proof: Let A be a given alternating Büchi automaton. There is an equivalent nondeterministic Büchi automaton N, whose size is at most exponential in the size of A [MH84]. We assume that each state in N accepts at least one word (otherwise, we can remove the state and simplify the transition relation). Let N_loop be the automaton obtained from N by taking all states as accepting states. As shown in [AS87, Sis94], A is a safety automaton iff L(N_loop) is contained in L(A). In order to check the latter, we first construct from A a nondeterministic automaton Ñ such that L(Ñ) = comp(L(A)). To construct Ñ, we first complement A with a quadratic blow-up [KV97], and then translate the result (which is an alternating Büchi automaton) to a nondeterministic Büchi automaton, which involves an exponential blow-up [MH84]. Thus, the size of Ñ is at most exponential in the size of A. Now, L(N_loop) is contained in L(A) iff the intersection L(N_loop) ∩ L(Ñ) is empty. Since the constructions described above can be performed on-the-fly, the emptiness of the intersection can be checked in space polynomial in the size of A. The claim for co-safety follows since, as noted, alternating Büchi automata can be complemented with a quadratic blow-up [KV97]. The lower bound follows from Sistla's lower bound for LTL [Sis94], since LTL formulas can be translated to alternating Büchi automata with a linear blow-up (see Theorem 2.2).
We note that the nonemptiness tests required by the algorithm can be performed using model checking tools (cf. [CVWY92, TBK95]).
2.4 Automata and temporal logic
Given an LTL formula ψ in positive normal form, one can build a nondeterministic Büchi automaton A_ψ such that L(A_ψ) = ‖ψ‖ [VW94]. The size of A_ψ is exponential in |ψ|. It is shown in [KVW00, Var96] that when alternating automata are used, the translation of ψ to A_ψ as above involves only a linear blow-up.¹ The translation of LTL formulas to alternating Büchi automata is going to be useful also for our methodology, and we describe it below.
Theorem 2.2 [KVW00, Var96] Given an LTL formula ψ, we can construct, in linear running time, an alternating Büchi automaton A_ψ such that L(A_ψ) = ‖ψ‖.
Proof: The automaton is A_ψ = ⟨2^AP, cl(ψ), δ, {ψ}, F⟩. The set F of accepting states consists of all the formulas of the form φ_1 V φ_2 in cl(ψ). It remains to define the transition function δ. For all σ ∈ 2^AP, we define:
δ(true, σ) = true and δ(false, σ) = false.
δ(p, σ) = true if p ∈ σ and false otherwise; dually, δ(¬p, σ) = true if p ∉ σ and false otherwise.
δ(φ_1 ∧ φ_2, σ) = δ(φ_1, σ) ∧ δ(φ_2, σ), and δ(φ_1 ∨ φ_2, σ) = δ(φ_1, σ) ∨ δ(φ_2, σ).
δ(Xφ, σ) = φ.
δ(φ_1 U φ_2, σ) = δ(φ_2, σ) ∨ (δ(φ_1, σ) ∧ φ_1 U φ_2).
δ(φ_1 V φ_2, σ) = δ(φ_2, σ) ∧ (δ(φ_1, σ) ∨ φ_1 V φ_2).
Using the translation described in [MH84] from alternating Büchi automata to nondeterministic Büchi automata, we get:
Corollary 2.3 [VW94] Given an LTL formula ψ, we can construct, in exponential running time, a nondeterministic Büchi automaton N_ψ such that L(N_ψ) = ‖ψ‖.
Combining Corollary 2.3 with Theorem 2.1, we get the following algorithm for checking safety of an LTL formula ψ. The algorithm is essentially as described in [Sis94], rephrased somewhat to emphasize the usage of model checking.
1. Construct the nondeterministic Büchi automaton N_ψ.
2. Use a model checker to compute the set of nonempty states, eliminate all other states, and take all remaining states as accepting states. Let N^loop_ψ be the resulting automaton. By Theorem 2.1, ψ is a safety formula iff L(N^loop_ψ) ⊆ L(N_ψ); thus, iff all the computations accepted by N^loop_ψ satisfy ψ.
3. Convert N^loop_ψ into a system M^loop_ψ, where the transition relation is induced by the transitions of N^loop_ψ and the labeling function is such that each state is labeled by the letter of 2^AP read when entering it. Thus, the system M^loop_ψ has exactly all the computations accepted by N^loop_ψ.
4. Use a model checker to verify that M^loop_ψ satisfies ψ.
¹The automaton A_ψ has linearly many states. Since the alphabet of A_ψ is 2^AP, which may be exponential in the formula, a transition function of linear size involves an implicit representation.
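Step 2 needs the set of nonempty states of N_ψ, i.e., states from which some accepting state lying on a cycle is reachable. An explicit-graph sketch (our own illustration; a symbolic model checker would compute this on BDDs instead):

```python
def nonempty_states(states, edges, accepting):
    """States of a Buchi automaton from which some accepting state
    that lies on a cycle is reachable. edges: set of (u, v) pairs."""
    succ = {q: {v for (u, v) in edges if u == q} for q in states}

    def reach(src):
        seen, stack = set(), [src]
        while stack:
            q = stack.pop()
            for v in succ[q]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    # accepting states on a cycle: those that can reach themselves
    live = {f for f in accepting if f in reach(f)}
    return {q for q in states if (reach(q) | {q}) & live}

states = {'a', 'b', 'c'}
edges = {('a', 'b'), ('b', 'b'), ('a', 'c')}   # 'c' is a dead end
assert nonempty_states(states, edges, {'b'}) == {'a', 'b'}
```

This is the textbook characterization of Büchi nonemptiness: a state is nonempty iff it reaches an accepting state that is reachable from itself.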
3 Detecting Bad Prefixes
Linear properties of nonterminating systems are often specified using automata on infinite words or linear temporal logic (LTL) formulas. Given an LTL formula ψ, one can build a nondeterministic Büchi automaton A_ψ that recognizes ‖ψ‖. The size of A_ψ is, in the worst case, exponential in ψ [GPVW95, VW94]. In practice, when given a property that happens to be safe, what we want is an automaton on finite words that detects bad prefixes. As we discuss in the introduction, such an automaton is easier to reason about. In this section we construct, from a given safety property, an automaton for its bad prefixes.
We first study the case where the property is given by a nondeterministic Büchi automaton. When the given automaton A is deterministic, the construction of an automaton A' for pref(A) is straightforward. Indeed, we can obtain A' from A by defining the set of accepting states to be the set of states s for which A^{s} is empty. Theorem 3.1 below shows that when A is a nondeterministic automaton, things are not that simple. While we can avoid a difficult determinization of A (which may also require an acceptance condition that is stronger than Büchi) [Saf88], we cannot avoid an exponential blow-up.
Theorem 3.1 Given a safety nondeterministic Büchi automaton A of size n, the size of an automaton that recognizes pref(A) is 2^Θ(n).
Proof: We start with the upper bound. Let A = ⟨Σ, Q, δ, Q_0, F⟩. Recall that pref(L(A)) contains exactly all prefixes x ∈ Σ* such that for all y ∈ Σ^ω, we have x·y ∉ L(A). Accordingly, the automaton for pref(A) accepts a prefix x iff the set of states that A could be in after reading x is rejecting. Formally, we define the (deterministic) automaton A' = ⟨Σ, 2^Q, δ', {Q_0}, F'⟩, where δ' and F' are as follows.
The transition function δ' follows the subset construction induced by δ; that is, for every S ∈ 2^Q and σ ∈ Σ, we have δ'(S, σ) = ⋃_{s∈S} δ(s, σ).
The set F' of accepting states contains all the rejecting sets of A.
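The subset construction of the proof is mechanical once rejection of a set of states can be decided. A sketch (our own illustration; the `rejecting` callback stands for the Büchi emptiness test of Section 2, which in the simple example below reduces to checking that the set of reachable states is empty):

```python
def bad_prefix_automaton(sigma, delta, init, rejecting):
    """Deterministic automaton for pref(A) by subset construction.
    delta(q, a) -> set of successors; rejecting(S) -> True iff A,
    started from the set of states S, accepts no infinite word."""
    start = frozenset(init)
    states, accepting, trans = {start}, set(), {}
    work = [start]
    while work:
        S = work.pop()
        if rejecting(S):
            accepting.add(S)
        for a in sigma:
            T = frozenset(q for s in S for q in delta(s, a))
            trans[(S, a)] = T
            if T not in states:
                states.add(T)
                work.append(T)
    return start, trans, accepting

def accepts(word, start, trans, accepting):
    S = start
    for a in word:
        S = trans[(S, a)]
    return S in accepting

# A recognizes p^w: one state, loops on 'p', gets stuck on 'n'
delta = lambda q, a: {q} if a == 'p' else set()
start, trans, acc = bad_prefix_automaton({'p', 'n'}, delta, {'q0'},
                                         rejecting=lambda S: not S)
assert accepts('ppn', start, trans, acc)      # a bad prefix
assert not accepts('ppp', start, trans, acc)  # extendable to p^w
```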
We now turn to the lower bound. Essentially, it follows from the fact that pref(A) refers to words that are not accepted by A, and hence, it has the flavor of complementation. Complementing a nondeterministic automaton on finite words involves an exponential blow-up [MF71]. In fact, one can construct a nondeterministic automaton A = ⟨Σ, Q, δ, Q_0, Q⟩, in which all states are accepting, such that the smallest nondeterministic automaton that recognizes comp(A) has 2^Ω(n) states. (To see this, consider the language L_n consisting of all words w such that either |w| < 2n or ….) Given A as above, let A' be A when regarded as a Büchi automaton on infinite words. We claim that pref(A') = comp(L(A)). To see this, note that since all the states in A are accepting, a word w is rejected by A iff all the runs of A on w get stuck while reading it, which, as all the states in A' are accepting, holds iff w is in pref(A').
We note that while constructing the deterministic automaton A', one can apply to it minimization techniques as used in the verification tool Mona [Kla98]. The lower bound in Theorem 3.1 is not surprising, as complementation of nondeterministic automata involves an exponential blow-up, and, as we demonstrate in the lower-bound proof, there is a tight relation between pref(A) and comp(A). We could hope, therefore, that when properties are specified in a negative form (that is, they describe the forbidden behaviors of the system) or are given in LTL, whose formulas can be negated, detection of bad prefixes would not be harder than detection of bad computations. In Theorems 3.2 and 3.3 we refute this hope.
Theorem 3.2 Given a co-safety nondeterministic Büchi automaton A of size n, the size of an automaton that recognizes co-pref(L(A)) is 2^Θ(n).
Proof: The upper bound is similar to the one in Theorem 3.1, only that now we define the set of accepting states in A' as the set of all the universal sets of A. We prove a matching lower bound. For n ≥ 1, let Σ = {1, ..., n, &}. We define L_n as the language of all words w such that w contains at least one & and the letter after the first & is either & or has already appeared somewhere before the first &. The language L_n is a co-safety language. Indeed, each word in L_n has a good prefix (e.g., the one that contains the first & and its successor). We can recognize L_n with a nondeterministic Büchi automaton with O(n) states (the automaton guesses the letter that appears after the first &). Obvious good prefixes for L_n are 12&&, 123&2, etc. We can recognize these prefixes with a nondeterministic automaton with O(n) states. But L_n also has some less obvious good prefixes, like 1·2·3···n·& (a permutation of 1, ..., n followed by &). These prefixes are indeed good, as every suffix we concatenate to them would start with either & or a letter in {1, ..., n} that has appeared before the &. To recognize these prefixes, a nondeterministic automaton needs to keep track of subsets of {1, ..., n}, for which it needs 2^n states. Consequently, a nondeterministic automaton for co-pref(L_n) must have at least 2^n states.
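The case analysis behind the good prefixes of L_n can be made explicit: a finite word that fixes the first & and its successor is good or bad outright, while a word ending exactly at the first & is good iff every letter of {1, ..., n} already appeared, which is where the 2^n subsets come from. A sketch (our own illustration; letters are the integers 1..n and the character '&'):

```python
def good_prefix_Ln(x, n):
    """Is the finite word x a good prefix of L_n, i.e., is every
    infinite extension of x in L_n?"""
    if '&' not in x:
        return False                    # the first & is not fixed yet
    i = x.index('&')
    seen_before = set(x[:i])
    if i + 1 < len(x):                  # the letter after the first &
        nxt = x[i + 1]                  # is already fixed by x
        return nxt == '&' or nxt in seen_before
    # x ends at the first '&': good iff every possible next letter
    # works, i.e., all of 1..n already appeared before the '&'
    return seen_before >= set(range(1, n + 1))

assert good_prefix_Ln([1, 2, '&', '&'], 3)   # an "obvious" good prefix
assert good_prefix_Ln([1, 2, 3, '&'], 3)     # the "less obvious" kind
assert not good_prefix_Ln([1, 2, '&'], 3)    # next letter 3 would escape
```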
We now extend the proof of Theorem 3.2 to get a doubly-exponential lower bound for going from a safety LTL formula to a nondeterministic automaton for its bad prefixes. The idea is similar: while the proof in Theorem 3.2 uses the exponential lower bound for going from nondeterministic to deterministic Büchi automata, the proof for this case is a variant of the doubly-exponential lower bound for going from LTL formulas to deterministic Büchi automata [KV98]. In order to prove the latter, [KV98] define a language L_n ⊆ {0, 1, #, &}^ω as follows. A word w is in L_n iff the suffix of length n that comes after the single & in w appears somewhere before the &. By [CKS81], the smallest deterministic automaton on finite words that accepts L_n has at least 2^(2^n) states (reaching the &, the automaton should remember the set of words in {0, 1}^n that have appeared before). On the other hand, we can specify L_n with an LTL formula of length quadratic in n (we ignore here the technical fact that Büchi automata and LTL formulas describe infinite words).
Theorem 3.3 Given a safety LTL formula ψ of size n, the size of an automaton for pref(ψ) is at most 2^(2^O(n)) and at least 2^(2^Ω(√n)).
Proof: The upper bound follows from the exponential translation of LTL formulas to nondeterministic Büchi automata [VW94] and the exponential upper bound in Theorem 3.1. For the lower bound, we define, for n ≥ 1, the language L'_n of infinite words over {0, 1, #, &} where every word in L'_n contains at least one &, and after the first & either there is a word in {0, 1}^n that has appeared before, or there is no word in {0, 1}^n (that is, there is at least one # or & in the first n positions after the first &). The language L'_n is a co-safety language. As in the proof of Theorem 3.2, a prefix of the form x·& such that x ∈ {0, 1, #}* contains all the words in {0, 1}^n is a good prefix, and a nondeterministic automaton needs 2^(2^n) states to detect such good prefixes. This makes the automaton for co-pref(L'_n) doubly exponential. On the other hand, we can specify L'_n with an LTL formula ψ_n that is quadratic in n. The formula is similar to the one for L_n, only that it is satisfied also by computations in which the first & is not followed by a word in {0, 1}^n. Now, the formula ¬ψ_n is a safety formula of size quadratic in n, and the size of the smallest nondeterministic Büchi automaton for pref(¬ψ_n) is doubly exponential in n.
In order to get the upper bound in Theorem 3.3, we applied the exponential construction in Theorem 3.1 to the exponential Büchi automaton A_ψ for ‖ψ‖. The construction in Theorem 3.1 is based on a subset construction for A_ψ, and it requires a check for the universality of sets of states Q of A_ψ. Such a check corresponds to a validity check for a DNF formula in which each disjunct corresponds to a state in Q. While the size of the formula can be exponential in |ψ|, the number of distinct literals in the formula is at most linear in |ψ|, implying the following lemma.
Lemma 3.4 Consider an LTL formula ψ and its nondeterministic Büchi automaton A_ψ. Let Q be a set of states of A_ψ. The universality problem for Q can be checked using space polynomial in |ψ|.
Proof: Every state in A_ψ is associated with a set of subformulas of ψ. A set Q of states of A_ψ then corresponds to a set {P_1, ..., P_m} of sets of subformulas. Let ψ_Q = ⋁_{1≤j≤m} ⋀_{φ∈P_j} φ. The set Q is universal iff the formula ψ_Q is valid. Though the formula ψ_Q may be exponentially longer than ψ, we can check its validity in PSPACE. To do this, we first negate it and get the formula ¬ψ_Q = ⋀_{1≤j≤m} ⋁_{φ∈P_j} ¬φ. Clearly, ψ_Q is valid iff ¬ψ_Q is not satisfiable. But ¬ψ_Q is satisfiable iff at least one conjunction in the disjunctive normal form of ¬ψ_Q is satisfiable. Thus, to check whether ¬ψ_Q is satisfiable, we have to enumerate all such conjunctions and check whether one of them is satisfiable. Since each such conjunction is of polynomial size, as the number of literals is bounded by |ψ|, the claim follows.
Note that the satisfiability problem for LTL (and thus for all the ¬φ_i's) can be reduced to the nonemptiness problem for nondeterministic Büchi automata, and thus also to model checking [CVWY92, TBK95]. In fact, the nondeterministic Büchi automaton A_ψ constructed in [VW94] contains all sets of subformulas of ψ as states. To run the universality test it suffices to compute the set of states of A_ψ accepting some infinite word. Then a conjunction ¬φ_{1,j_1} ∧ ... ∧ ¬φ_{m,j_m} is satisfiable iff the set {¬φ_{1,j_1}, ..., ¬φ_{m,j_m}} is contained in such a state.
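As an illustration of the enumeration argument in Lemma 3.4, the following sketch checks universality for the purely propositional case, where each automaton state is a conjunction of literals. The encoding (literals as strings, states as frozensets) is a hypothetical simplification for illustration, not the construction of the paper.

```python
from itertools import product

def neg(lit):
    # Negate a literal written as "p" or "!p".
    return lit[1:] if lit.startswith("!") else "!" + lit

def consistent(conjunct):
    # A conjunction of literals is satisfiable iff no complementary pair occurs.
    return not any(neg(l) in conjunct for l in conjunct)

def universal(states):
    # theta_Q is the disjunction, over the states, of the conjunction of their
    # literals.  theta_Q is valid iff every DNF conjunct of its negation
    # (one negated literal picked per state) is inconsistent.
    for choice in product(*states):
        if consistent({neg(l) for l in choice}):
            return False  # a satisfiable conjunct refutes validity
    return True

# {p} alone does not cover all assignments; {p} together with {!p} does.
print(universal([frozenset({"p"})]))                     # False
print(universal([frozenset({"p"}), frozenset({"!p"})]))  # True
```

Each conjunct is linear in the number of states, matching the observation that only polynomial space is needed per check.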
Given a safety formula ψ, we say that a nondeterministic automaton A over finite words is tight for ψ iff A accepts exactly the bad prefixes of ψ, i.e., L(A) = pref(‖ψ‖). In view of the lower bounds proven above, a construction of tight automata may be too expensive. We say that a nondeterministic automaton A over finite words is fine for ψ iff there exists X ∈ trap(‖ψ‖) such that L(A) ⊇ X. Thus, a fine automaton need not accept all the bad prefixes, yet it must accept at least one bad prefix of every computation that does not satisfy ψ. In practice, almost all the benefit that one obtains from a tight automaton can also be obtained from a fine automaton (we will get back to this point in Section 6). It is an open question whether there are feasible constructions of fine automata for general safety formulas. In Section 5 we show that for natural safety formulas ψ, the construction of an automaton fine for ψ is as easy as the construction of an automaton for ψ.
4 Symbolic Verification of Safety Properties
Our construction of tight automata reduces the problem of verification of safety properties to the problem of invariance checking, which is amenable to a large variety of techniques. In particular, backward and forward symbolic reachability analysis have proven to be effective techniques for checking invariant properties of systems with large state spaces [BCM+92]. In practice, however, the verified systems are often very large, and even clever symbolic methods cannot cope with the state-explosion problem that model checking faces. In this section we describe how the way we construct tight automata enables, in case the BDDs constructed during the symbolic reachability test get too big, an analysis of the intermediate data that has been collected. The analysis solves the model-checking problem without further traversal of the system.
Consider a system M, and let fin(M) be an automaton that accepts all finite computations of M. Given a safety formula ψ, let A_¬ψ be the nondeterministic co-safety automaton for ‖¬ψ‖. In the proof of Theorem 3.2, we construct an automaton A' that accepts the bad prefixes of ψ by following the subset construction of A_¬ψ and defining the set of accepting states to be the set of universal sets of states of A_¬ψ. Then, one needs to verify the invariant that the product fin(M) × A' never reaches an accepting state of A'. In addition to forward and backward symbolic reachability analysis, one could use a variety of recent techniques for doing semi-exhaustive reachability analysis [RS95, YSAA97], including standard simulation techniques [LWA98]. Also, one could use bounded model-checking techniques, in which a reduction to the propositional satisfiability problem is used, to check whether there is a path of bounded length from an initial state to an accepting state in fin(M) × A' [BCC+99]. Note, however, that if A' is doubly exponential in |ψ|, the BDD representation of A' will use exponentially (in |ψ|) many Boolean variables. It is conceivable, however, that due to the determinism of A', such a BDD would in practice have a not too large width, and therefore would be of a manageable size (see Section 6.2 for a related discussion).
Another approach is to apply forward reachability analysis to the product M × A_¬ψ of the system M and the automaton A_¬ψ. Formally, let A_¬ψ = ⟨Σ, Q, δ, Q_0, F⟩, and let M, with state space W and initial states W_0, be as above. The product M × A_¬ψ has state space W × Q, and the successors of a state ⟨w, q⟩ are all pairs ⟨w', q'⟩ where w' is a successor of w in M and q' is a corresponding successor of q in A_¬ψ. Forward symbolic methods use the predicate post(S), which, given a set S of states (represented symbolically), returns the successor set of S, that is, the set of all states t such that there is a transition from some state in S to t. Starting from the initial set S_0, forward symbolic methods iteratively construct, for i ≥ 0, the set S_{i+1} = post(S_i). One could therefore say that this construction implements the subset construction dynamically, "on the fly", during model checking, rather than statically, before model checking. The calculation of the S_i's proceeds symbolically, and they are represented by BDDs. Doing so, forward symbolic methods actually follow the subset construction of M × A_¬ψ. Indeed, for each w ∈ W, the set Q_i^w = {q : ⟨w, q⟩ ∈ S_i} is the set of states of A_¬ψ that can be reached via a path of length i in M from a state in W_0 to the state w. Note that this set can be exponentially (in |ψ|) large, resulting possibly in a large BDD; on the other hand, the number of Boolean variables used to represent A_¬ψ is linear in |ψ|. More experimental work is needed to compare the merit of the two approaches (i.e., static vs. dynamic subset construction).
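The dynamic subset construction described above can be illustrated with a small explicit-state sketch, a hypothetical stand-in for the symbolic, BDD-based computation (transition labels are elided): the layers S_i of the product are iterated with post, and for each system state w the set Q_i^w is read off the layer.

```python
from collections import defaultdict

def forward_layers(m_succ, m_init, a_succ, a_init, depth):
    # Compute S_0 .. S_depth of the product M x A by iterating post.
    # Returns, per layer i, a map w -> Q_i^w: the automaton states reachable
    # in lock-step with some length-i path of M from an initial state to w.
    layer = {(w, q) for w in m_init for q in a_init}
    layers = []
    for _ in range(depth + 1):
        q_of_w = defaultdict(set)
        for w, q in layer:
            q_of_w[w].add(q)
        layers.append(dict(q_of_w))
        layer = {(w2, q2)  # post(S_i): step M and A together
                 for w, q in layer
                 for w2 in m_succ.get(w, ())
                 for q2 in a_succ.get(q, ())}
    return layers

# A two-state system and a two-state automaton.
m_succ = {"w0": ["w1"], "w1": ["w0"]}
a_succ = {"q0": ["q0", "q1"], "q1": []}
layers = forward_layers(m_succ, ["w0"], a_succ, ["q0"], 2)
print(sorted(layers[1]["w1"]))  # ['q0', 'q1']
```

In the symbolic setting the layers are BDDs rather than explicit sets, but the Q_i^w sets that Lemma 3.4 analyzes are exactly the per-w slices computed here.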
The discussion above suggests the following technique for the case we encounter space problems. Suppose that at some point the BDD for S_i gets too big. We then check whether there is a state w such that the set Q_i^w is universal. By Lemma 3.4, we can check the universality of Q_i^w in space polynomial in |ψ|. Note that we do not need to enumerate all states w and then check Q_i^w. We can enumerate directly the sets Q_i^w, whose number is at most doubly exponential in |ψ|. (Repeatedly, select a state w ∈ W, analyze Q_i^w, and then remove all states u ∈ W such that Q_i^u = Q_i^w.) By Lemma 4.1, encountering a universal Q_i^w solves the model-checking problem without further traversal of the system.
Lemma 4.1 If ψ is a safety formula, then M × A_¬ψ is nonempty iff Q_i^w is universal for some i ≥ 0 and w ∈ W.
Proof: Suppose Q_i^w is universal. Consider any infinite trace y that starts in w. Since Q_i^w is universal, there is some state q ∈ Q_i^w such that y is accepted by A_¬ψ with initial state q. In addition, by the definition of S_i, there is a finite trace x from some state in W_0 to w such that, reading x, the automaton A_¬ψ reaches q. Hence, the infinite trace x·y does not satisfy ψ.
Suppose now that M × A_¬ψ is nonempty. Then there is an infinite trace y of M that is accepted by A_¬ψ. As ψ is a safety formula, y has a bad prefix x of length i such that δ(Q_0, x) is universal. If x ends in the state w, then Q_i^w is universal.
Let j be the length of the shortest bad prefix for ψ that exists in M. The direction from left to right in Lemma 4.1 can be strengthened, as the nonemptiness of M × A_¬ψ implies that for every i ≥ j, there is w ∈ W for which Q_i^w is universal. While we do not want to (and sometimes we cannot) calculate j in advance, the stronger version gives more information on when and why the method we discuss above is not only sound but also complete.
Note that it is possible to use semi-exhaustive reachability techniques also when analyzing M × A_¬ψ. That is, instead of taking S_{i+1} to be post(S_i), we can take it to be a subset S'_{i+1} of post(S_i). We have to ensure, however, that S'_{i+1} is saturated with respect to the states of A_¬ψ [LWA98]. Informally, we are allowed to drop states of M from S_{i+1}, but we are not allowed to drop states of A_¬ψ. Formally, if ⟨w, q⟩ ∈ S'_{i+1}, then every pair ⟨w, q'⟩ ∈ post(S'_i) is also in S'_{i+1} (in other words, if some pair in which the M-state element is w stays in the subset S'_{i+1} of post(S'_i), then all the pairs in which the M-state element is w should stay). This ensures that if the semi-exhaustive analysis follows a bad prefix of length i in M that ends in the state w, then Q_i^w is universal. In the extreme case, we follow only one trace of M, i.e., we simulate M. In that case, S'_i consists of the pairs ⟨w, q⟩ for a single state w of M. For related approaches see [CES97, ABG+00]. Note that while such a simulation cannot in general be performed for ψ that is not a safety formula, we can use it as a heuristic also for general formulas. We will get back to this point in Remarks 5.3 and 6.1.
5 Classification of Safety Properties
Consider the safety LTL formula Gp. A bad prefix x for Gp must contain a state in which p does not hold. If the user gets x as an error trace, he can immediately understand why Gp is violated. Consider now the formula ψ = G(p ∨ (Xq ∧ X¬q)). The formula is equivalent to Gp and is therefore a safety formula. Moreover, the sets of bad prefixes for ψ and Gp coincide. Nevertheless, a minimal bad prefix for ψ (e.g., a single state in which p does not hold) does not tell the whole story about the violation of ψ. Indeed, the latter depends on the fact that Xq ∧ X¬q is unsatisfiable, which (especially in more complicated examples) may not be trivially noticed by the user. This intuition, of a prefix that "tells the whole story", is the base for a classification of safety properties into three distinct safety levels. We first formalize this intuition in terms of informative prefixes. Recall that we assume that LTL formulas are given in positive normal form, where negation is applied only to propositions (when we write ¬ψ, we refer to its positive normal form).
For an LTL formula ψ and a finite computation π = σ_1·σ_2···σ_n, we say that π is informative for ψ iff there exists a mapping L from {1, ..., n+1} to sets of formulas in cl(¬ψ) such that the following hold:
(1) ¬ψ ∈ L(1).
(2) L(n+1) is empty.
(3) For all 1 ≤ i ≤ n and φ ∈ L(i), the following hold.
If φ is a propositional assertion, it is satisfied by σ_i.
If φ = φ_1 ∨ φ_2, then φ_1 ∈ L(i) or φ_2 ∈ L(i).
If φ = φ_1 ∧ φ_2, then φ_1 ∈ L(i) and φ_2 ∈ L(i).
If φ = Xφ_1, then φ_1 ∈ L(i+1).
If φ = φ_1 U φ_2, then φ_2 ∈ L(i), or both φ_1 ∈ L(i) and φ_1 U φ_2 ∈ L(i+1).
If φ = φ_1 V φ_2, then φ_2 ∈ L(i), and φ_1 ∈ L(i) or φ_1 V φ_2 ∈ L(i+1).
If π is informative for ψ, the mapping L above is called the witness for ¬ψ in π. Note that the emptiness of L(n+1) guarantees that all the requirements imposed by ¬ψ are fulfilled along π. For example, while the finite computation {p}·∅ is informative for Gp (e.g., with a suitable witness mapping L), it is not informative for ψ = G(p ∨ (Xq ∧ X¬q)): an informative prefix for ψ must contain at least one state after the first state in which ¬p holds.
Theorem 5.1 Given an LTL formula ψ and a finite computation π of length n, the problem of deciding whether π is informative for ψ can be solved in time O(n · |ψ|).
Proof: Intuitively, since π has no branches, deciding whether π is informative for ψ can proceed similarly to CTL model checking. Given ψ and π = σ_1···σ_n, we construct a mapping L_max from {1, ..., n+1} to sets of formulas in cl(¬ψ) such that L_max(i) contains exactly all the formulas ¬φ ∈ cl(¬ψ) such that the suffix σ_i···σ_n is informative for φ. Then, π is informative for ψ iff ¬ψ ∈ L_max(1). The construction of L_max proceeds in a bottom-up manner. Initially, L_max(n+1) is empty. Then, for each 1 ≤ i ≤ n, we insert to L_max(i) all the propositional assertions in cl(¬ψ) that are satisfied by σ_i. Then, we proceed by induction on the structure of the formulas, inserting a subformula φ to L_max(i) iff the conditions from item (3) above are satisfied for it. In order to cope with the circular dependency in the conditions for φ of the form φ_1 U φ_2, the insertion of formulas proceeds from L_max(n) to L_max(1). Thus, for example, the formula φ_1 ∨ φ_2 is added to L_max(i) iff φ_1 or φ_2 are in L_max(i), the formula Xφ_1 is added to L_max(i) iff φ_1 ∈ L_max(i+1), and the formula φ_1 U φ_2 is added to L_max(i) iff φ_2 ∈ L_max(i), or φ_1 ∈ L_max(i) and φ_1 U φ_2 ∈ L_max(i+1) (so that we insert φ_1 U φ_2 to L_max(i+1) before we examine the insertion of φ_1 U φ_2 to L_max(i)). We have at most |ψ| subformulas to examine, each of which requires time linear in n, thus the overall complexity is O(n · |ψ|).
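The bottom-up computation in the proof can be sketched for a small fragment of LTL in positive normal form (atoms, their negations, ∧, ∨, X, U); the tuple encoding of formulas is a hypothetical choice for illustration. Memoizing on (formula, position) plays the role of the L_max tables and gives the O(n · |ψ|) bound.

```python
def informative(neg_psi, trace):
    # Decide whether the finite trace is informative for psi, given the
    # positive normal form neg_psi of its negation.  Formulas are nested
    # tuples, e.g. ("until", ("ap", "p"), ("not_ap", "q")); a trace is a
    # list of sets of atoms.  Nothing holds past the end of the trace,
    # mirroring the requirement that L(n+1) is empty.
    n, memo = len(trace), {}
    def holds(f, i):
        if i >= n:
            return False
        if (f, i) not in memo:
            op = f[0]
            if op == "ap":
                memo[(f, i)] = f[1] in trace[i]
            elif op == "not_ap":
                memo[(f, i)] = f[1] not in trace[i]
            elif op == "and":
                memo[(f, i)] = holds(f[1], i) and holds(f[2], i)
            elif op == "or":
                memo[(f, i)] = holds(f[1], i) or holds(f[2], i)
            elif op == "next":
                memo[(f, i)] = holds(f[1], i + 1)
            else:  # "until": the right operand must be discharged in the trace
                memo[(f, i)] = holds(f[2], i) or (holds(f[1], i) and holds(f, i + 1))
        return memo[(f, i)]
    return holds(neg_psi, 0)

# The negation of Gp is (true U not p); "true" is encoded as (p or not p).
true_f = ("or", ("ap", "p"), ("not_ap", "p"))
not_gp = ("until", true_f, ("not_ap", "p"))
print(informative(not_gp, [{"p"}, set()]))  # True: the trace violates Gp
print(informative(not_gp, [{"p"}, {"p"}]))  # False
```

The V operator is omitted here for brevity; handling it adds one more case with the dual condition.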
Remark 5.2 A very similar argument shows that one can check in linear running time whether an infinite computation π, represented as a prefix followed by a cycle, satisfies an LTL formula ψ.
Remark 5.3 Clearly, if an infinite computation π has a prefix informative for ψ, then π does not satisfy ψ. On the other hand, it may be that π does not satisfy ψ and yet all the prefixes of π up to a certain length (say, the length at which the BDDs described in Section 4 explode) are not informative. Hence, in practice, one may want to apply the check in Theorem 5.1 to both ψ and ¬ψ. Then, one would get one of the following answers: fail (a prefix that is informative for ψ exists, hence π does not satisfy ψ), pass (a prefix that is informative for ¬ψ exists, hence π satisfies ψ), and undetermined (neither prefix was found). Note that the above methodology is independent of ψ being a safety property.
We now use the notion of informative prefix in order to distinguish between three types of safety formulas.
A safety formula ψ is intentionally safe iff all the bad prefixes for ψ are informative. For example, the formula Gp is intentionally safe.
A safety formula ψ is accidentally safe iff not all the bad prefixes for ψ are informative, but every computation that violates ψ has an informative bad prefix. For example, the formula G(p ∨ (Xq ∧ X¬q)) discussed above is accidentally safe.
A safety formula ψ is pathologically safe if there is a computation that violates ψ and has no informative bad prefix. For example, the formula [G(q ∨ FGp) ∧ G(r ∨ FG¬p)] ∨ Gq ∨ Gr is pathologically safe.
Sistla has shown that all temporal formulas in positive normal form constructed with the temporal connectives X and V are safety formulas [Sis94]. We call such formulas syntactically safe. The following theorem strengthens Sistla's result.
Theorem 5.4 If ψ is syntactically safe, then ψ is intentionally or accidentally safe.
Proof: Let ψ be a syntactically safe formula. Then, the only temporal operators in ¬ψ are X and U. Consider a computation π that violates ψ. By the semantics of LTL, there is a mapping L for which conditions (1) and (3) for a witness mapping hold (with ¬ψ ∈ L(1)), and there is i ∈ IN such that L(i+1) is empty. The prefix σ_1···σ_i of π is then informative for ψ. It follows that every computation that violates ψ has a prefix informative for ψ, thus ψ is intentionally or accidentally safe.
As described in Section 2.4, given an LTL formula ψ in positive normal form, one can build an alternating Büchi automaton A_ψ that accepts exactly the computations satisfying ψ. Essentially, each state of A_ψ corresponds to a subformula of ψ, and its transitions follow the semantics of LTL. We define the alternating Büchi automaton A_ψ^true by redefining the set of accepting states to be the empty set. So, while in A_ψ a copy of the automaton may accept by either reaching a state from which it proceeds to true or visiting accepting states infinitely often, in A_ψ^true all copies must reach a state from which they proceed to true. Accordingly, A_ψ^true accepts exactly these computations that have a finite prefix that is informative for ¬ψ. To see this, note that such computations can be accepted by a run of A_ψ in which all the copies eventually reach a state that is associated with propositional assertions that are satisfied. Now, let fin(A_ψ^true) be A_ψ^true when regarded as an automaton on finite words.
Theorem 5.5 For every safety formula ψ, the automaton fin(A_¬ψ^true) accepts exactly all the prefixes that are informative for ψ.
Proof: Assume first that π = σ_1···σ_n is a prefix informative for ψ. Then, there is a witness mapping L from {1, ..., n+1} to sets of formulas in cl(¬ψ) for ¬ψ in π. The witness L induces a run r of A_¬ψ^true on π. Formally, the set of states that r visits when it reads the suffix σ_i···σ_n of π coincides with L(i). By the definition of a witness mapping, all the copies of r discharge their requirements by the end of π (as L(n+1) is empty); therefore, r is accepting.
The other direction is similar; thus, every accepting run of fin(A_¬ψ^true) on π induces a witness for ¬ψ in π.
Corollary 5.6 Consider a safety formula ψ.
1. If ψ is intentionally safe, then fin(A_¬ψ^true) is tight for ψ.
2. If ψ is accidentally safe, then fin(A_¬ψ^true) is fine for ψ.
Theorem 5.7 Deciding whether a given formula is pathologically safe is PSPACE-complete.
Proof: Consider a formula ψ. Recall that the automaton A_¬ψ^true accepts exactly these computations that have a finite prefix that is informative for ψ. Hence, ψ is not pathologically safe iff every computation that does not satisfy ψ is accepted by A_¬ψ^true. Accordingly, checking whether ψ is pathologically safe can be reduced to checking the containment of the language of A_¬ψ in the language of A_¬ψ^true. Since the size of A_¬ψ is linear in the length of ψ and containment for alternating Büchi automata can be checked in polynomial space [KV97], we are done.
For the lower bound, we do a reduction from the problem of deciding whether a given formula is a safety formula. Consider a formula ψ, and let p, q, and r be atomic propositions not in ψ. The formula [G(q ∨ FGp) ∧ G(r ∨ FG¬p)] ∨ Gq ∨ Gr from above is pathologically safe. It can be shown that ψ is a safety formula iff the disjunction of ψ with this formula is pathologically safe.
Note that the lower bound in Theorem 5.7 implies that the reverse direction of Theorem 5.4 does not hold.
Theorem 5.8 Deciding whether a given formula is intentionally safe is in EXPSPACE.
Proof: Consider a formula ψ of size n. By Theorem 3.3, we can construct an automaton of size doubly exponential in n for pref(ψ). By Theorem 5.5, fin(A_¬ψ^true) accepts all the prefixes that are informative for ψ. Note that ψ is intentionally safe iff every prefix in pref(ψ) is an informative prefix for ψ. Thus, to check that ψ is intentionally safe, one has to complement fin(A_¬ψ^true) and check that its intersection with the automaton for pref(ψ) is empty. A nondeterministic automaton that complements fin(A_¬ψ^true) is exponential in n [MH84], and its product with the automaton for pref(ψ) is doubly exponential in n. Since emptiness can be checked in nondeterministic logarithmic space, the claim follows.
6 A Methodology
6.1 Exploiting the classification
In Section 5, we partitioned safety formulas into three safety levels and showed that for some formulas, we can circumvent the blow-up involved in constructing a tight automaton for the bad prefixes. In particular, we showed that the automaton fin(A_¬ψ^true), which is linear in the length of ψ, is tight for ψ that is intentionally safe and is fine for ψ that is accidentally safe. In this section we describe a methodology for efficient verification of safety properties that is based on these observations. Consider a system M and a safety LTL formula ψ. Let fin(M) be a nondeterministic automaton on finite words that accepts the prefixes of computations of M, and let U_¬ψ^true be the nondeterministic automaton on finite words equivalent to the alternating automaton fin(A_¬ψ^true). The size of U_¬ψ^true is exponential in the size of fin(A_¬ψ^true); thus, it is exponential in the length of ψ. Given M and ψ, we suggest to proceed as follows (see Figure 1).
Instead of checking the emptiness of M × A_¬ψ, verification starts by checking fin(M) with respect to U_¬ψ^true. Since both automata refer to finite words, this can be done using finite forward reachability analysis 2. If the product fin(M) × U_¬ψ^true is not empty, we return a word w in the intersection, namely, a bad prefix for ψ that is generated by M 3. If the product fin(M) × U_¬ψ^true is empty, then, as U_¬ψ^true is fine for intentionally and accidentally safe formulas, there may be two reasons for this. One is that M satisfies ψ, and the second is that ψ is pathologically safe. Therefore, we next check whether ψ is pathologically safe. (Note that for syntactically safe formulas this check is unnecessary, by Theorem 5.4.) If ψ is not pathologically safe, we conclude that M satisfies ψ. Otherwise, we tell the user that his formula is pathologically safe, indicating that his specification is needlessly complicated (accidentally and pathologically safe formulas contain redundancy). At this point, the user would probably be surprised that his formula was a safety formula (if he had known it is safety, he would have simplified it to an intentionally safe formula 4). If the user wishes to continue with this formula, we give up using the fact that ψ is safety and proceed with usual LTL model checking; thus, we check the emptiness of M × A_¬ψ. (Recall that the symbolic algorithm for emptiness of Büchi automata is in the worst case quadratic [HKSV97, TBK95].) Note that at this point, the error trace that the user gets if M does not satisfy ψ consists of a prefix and a cycle; yet, since the user does not want to change his formula, he probably has no idea why it is a safety formula, and a finite non-informative error trace would not help him. If the user prefers, or if M is very large (making the discovery of bad cycles infeasible), we can build an automaton for pref(ψ), hoping that by learning it, the user would understand how to simplify his formula or that, in spite of the potential blow-up in ψ, finite forward reachability would work better.
2 See Section 6.2 for an alternative approach.
3 Note that since ψ may not be intentionally safe, the automaton U_¬ψ^true may not be tight for ψ; thus, while w is a minimal informative bad prefix, it may not be a minimal bad prefix.
4 An automatic translation of pathologically safe formulas to intentionally safe formulas is an open problem. Such a translation may proceed through the automaton for the formula's bad prefixes, in which case it would be nonelementary.
[Figure 1: Verification of safety formulas. A flowchart: check whether fin(M) × U_¬ψ^true is empty; if not, return an error trace (M is incorrect); if it is empty, check whether ψ is pathologically safe; if not, M is correct; otherwise, consult the user.]
Remark 6.1 In fact, our methodology can be adjusted to formulas that are not (or not known to be) safety formulas, and it can often terminate with a helpful output for such formulas. As with safety formulas, we start by checking the emptiness of fin(M) × U_¬ψ^true (note that U_¬ψ^true is defined also for formulas that are not safety). If the intersection is not empty, it contains an error trace, and M is incorrect. If the intersection is empty, we check whether ψ is an intentionally or accidentally safe formula. If it is, we conclude that M is correct. Otherwise, we consult the user. Note also that by determinizing U_¬ψ^true, we can get a checker that can be used in simulation.
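The decision flow of Section 6.1 (Figure 1) can be sketched as a small driver; the three callables are hypothetical stand-ins for the analyses described in the text.

```python
def verify_safety(bad_prefix_search, pathologically_safe, full_model_check):
    # Hypothetical driver for the flow of Figure 1.
    trace = bad_prefix_search()        # emptiness of fin(M) x U^true
    if trace is not None:
        return ("incorrect", trace)    # an informative bad prefix was found
    if not pathologically_safe():
        return ("correct", None)       # the fine automaton missed nothing
    # Pathologically safe: consult the user / fall back to full LTL checking.
    return full_model_check()

# A run in which the finite-word product is nonempty:
print(verify_safety(lambda: ["w0", "w1"], lambda: False, lambda: None))
# ('incorrect', ['w0', 'w1'])
```

The point of the structure is that the expensive fair-cycle search in full_model_check is reached only for pathologically safe formulas.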
6.2 On going backwards
As detailed above, given a system M and a safety formula ψ, our method starts by checking whether there is a finite prefix of a computation of M (a word accepted by fin(M)) that is an informative bad prefix for ψ (a word accepted by U_¬ψ^true). Since both fin(M) and U_¬ψ^true are automata on finite words, so is their product; thus, a search for a word in their intersection can be done using finite forward reachability analysis. In this section we discuss another approach for checking that no finite prefix of a computation of M is an informative bad prefix.
We say that a nondeterministic automaton U = ⟨Σ, Q, δ, Q_0, F⟩ is reverse deterministic iff for every state q ∈ Q and letter σ ∈ Σ, there is at most one state q' ∈ Q such that q ∈ δ(q', σ). Thus, given the current state of a run of U and the last letter read from the input, one can determine the state visited before the current one. Let δ⁻¹ be the reverse function of δ; thus, δ⁻¹(q, σ) = {q' : q ∈ δ(q', σ)}. By the above, when U is reverse deterministic, all the sets in the range of δ⁻¹ are either empty or singletons. We extend δ⁻¹ to sets in the natural way as follows. For a set Q' ⊆ Q, the set δ⁻¹(Q', σ) contains all the states that may lead to some state in Q' when the letter in the input is σ.
Assume that we have a reverse deterministic fine automaton U_¬ψ for ψ; thus, U_¬ψ accepts exactly all the bad prefixes informative for ψ. Consider the product P of fin(M) and U_¬ψ. Recall that M has a finite prefix of a computation that is an informative bad prefix iff P is nonempty; namely, there is a path in P from some state in S_0 to some state in F. Each state in P is a pair ⟨w, q⟩ of a state w of M and a state q of U_¬ψ. We say that a set S' of states of P is Q-homogeneous if there is some q ∈ Q such that S' ⊆ W × {q}, where W is the set of states of M; that is, all the pairs in S' agree on their second element. For every state ⟨w, q⟩ and letter σ ∈ 2^AP, the set δ⁻¹(⟨w, q⟩, σ) may contain more than one state. Nevertheless, since U_¬ψ is reverse deterministic, the set δ⁻¹(⟨w, q⟩, σ) is Q-homogeneous. Moreover, since U_¬ψ is reverse deterministic, then for every Q-homogeneous set S' and for every σ ∈ 2^AP, the set δ⁻¹(S', σ) is Q-homogeneous as well. Accordingly, if we start with some Q-homogeneous set and traverse P backwards along one word, we need to maintain only Q-homogeneous sets. In practice, it means that instead of maintaining sets in 2^(W × Q) (which is what we need to do in a forward traversal), we only have to maintain sets in 2^W × Q. If we conduct a backwards breadth-first search starting from F, we could hope that the sets maintained during the search, though not necessarily homogeneous, would be smaller due to the reverse determinism. The above suggests that when the fine automaton for ψ is reverse deterministic, it may be useful to check the nonemptiness of P using a backwards search, starting from the fine automaton's accepting states.
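A single backward step of the suggested search can be sketched as follows; because the automaton is reverse deterministic, a Q-homogeneous set, represented as a pair of a set of M-states and one automaton state, steps back to another Q-homogeneous set. The function names are hypothetical.

```python
def pre_homogeneous(w_set, q, letter, m_pred, a_inv):
    # One backward step on the product P.  A Q-homogeneous set is kept as a
    # pair (set of M-states, single automaton state); reverse determinism
    # guarantees the predecessor set is Q-homogeneous too (or does not exist).
    q_pred = a_inv(q, letter)          # at most one predecessor in U
    if q_pred is None:
        return None
    w_pred = set().union(*[m_pred(w, letter) for w in w_set]) if w_set else set()
    return (w_pred, q_pred)

# Hypothetical one-letter example: w1 is preceded by w0, qf by q0.
m_pred = lambda w, a: {"w0"} if w == "w1" else set()
a_inv = lambda q, a: "q0" if q == "qf" else None
print(pre_homogeneous({"w1"}, "qf", "a", m_pred, a_inv))  # ({'w0'}, 'q0')
```

This is why the backward search only needs sets in 2^W × Q: the automaton component is a single state throughout.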
The automaton U_¬ψ^true is defined by means of the alternating word automaton A_¬ψ, and is not reverse deterministic. For example, the automaton may reach the state q from both the states Xp ∨ Xq and Xq ∨ Xr. Below we describe a fine reverse deterministic automaton N_¬ψ for ψ, of size exponential in the length of ψ. The automaton is based on the reverse deterministic automata defined in [VW94] for LTL. As in [VW94], each state of N_¬ψ is associated with a set S of formulas in cl(¬ψ). When the automaton is in a state associated with S, it accepts exactly all infinite words that satisfy all the formulas in S. Unlike the automata in [VW94], a state of N_¬ψ that is associated with S imposes only requirements on the formulas (these in S) that should be satisfied, and imposes no requirements on formulas (these in cl(¬ψ) \ S) that should not be satisfied. This property is crucial for N_¬ψ being fine. When the automaton N_¬ψ visits a state associated with the empty set, it has no requirements. Accordingly, we define {∅} to be the set of accepting states (note that the fact that the set of accepting states is a singleton implies that in the product P we can start with the single Q-homogeneous set W × {∅}).
It is easy to define N_¬ψ formally in terms of its reverse deterministic function δ⁻¹. Consider a state S ⊆ cl(¬ψ) of N_¬ψ and a letter σ ∈ 2^AP. The single state S' in δ⁻¹(S, σ) is the maximal subset of cl(¬ψ) such that if a computation satisfies all the formulas in S' and its first position is labeled by σ, then its suffix from position 1 satisfies all the formulas in S. Formally, S' contains exactly all the propositional assertions in cl(¬ψ) that are satisfied by σ, and all formulas φ in cl(¬ψ) for which the following hold.
If φ = φ_1 ∨ φ_2, then φ_1 ∈ S' or φ_2 ∈ S'.
If φ = φ_1 ∧ φ_2, then φ_1 ∈ S' and φ_2 ∈ S'.
If φ = Xφ_1, then φ_1 ∈ S.
If φ = φ_1 U φ_2, then φ_2 ∈ S', or both φ_1 ∈ S' and φ_1 U φ_2 ∈ S.
If φ = φ_1 V φ_2, then φ_2 ∈ S', and φ_1 ∈ S' or φ_1 V φ_2 ∈ S.
It is easy to see that for every S and σ, there is a single S' that satisfies the above conditions. Also, a sequence of states in N_¬ψ that starts with some S_0 containing ¬ψ and leads via the finite computation π to the accepting state ∅ induces a mapping L showing that π is informative for ψ. It follows that N_¬ψ is a reverse deterministic fine automaton for ψ.
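For the fragment without the V operator, a function in the spirit of the δ⁻¹ described above can be sketched directly from the conditions (a hypothetical encoding; formulas are nested tuples over cl(¬ψ)): processing subformulas before superformulas resolves the dependencies among the conditions.

```python
def size(f):
    # Number of nodes in a formula; used to order subformulas first.
    return 1 if f[0] in ("ap", "not_ap") else 1 + sum(size(g) for g in f[1:])

def delta_inv(S, sigma, closure):
    # Given a successor state S (a set of formulas) and a letter sigma
    # (a set of atoms), return the unique predecessor state S'.
    Sp = set()
    for f in sorted(closure, key=size):  # subformulas before superformulas
        op = f[0]
        if op == "ap":
            ok = f[1] in sigma
        elif op == "not_ap":
            ok = f[1] not in sigma
        elif op == "and":
            ok = f[1] in Sp and f[2] in Sp
        elif op == "or":
            ok = f[1] in Sp or f[2] in Sp
        elif op == "next":
            ok = f[1] in S
        else:  # "until"
            ok = f[2] in Sp or (f[1] in Sp and f in S)
        if ok:
            Sp.add(f)
    return frozenset(Sp)

# Running backwards from the accepting state (the empty set) along the
# trace {p}.{} for the negation of Gp, encoded as (true U not p):
ap_p, not_p = ("ap", "p"), ("not_ap", "p")
true_f = ("or", ap_p, not_p)
not_gp = ("until", true_f, not_p)
cl = [ap_p, not_p, true_f, not_gp]
s1 = delta_inv(frozenset(), set(), cl)   # read the last letter {}
s0 = delta_inv(s1, {"p"}, cl)            # then the first letter {p}
print(not_gp in s0)  # True: the trace is an informative bad prefix for Gp
```

The backward run visiting a state that contains ¬ψ corresponds to the sequence of states described in the text, i.e., to an informative bad prefix.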
6.3 Safety in the assume-guarantee paradigm
Given a system M and two LTL formulas φ_1 and φ_2, the linear assume-guarantee specification ⟨φ_1⟩ M ⟨φ_2⟩ holds iff for every system M' such that the composition M‖M' satisfies (the assumption) φ_1, the composition M‖M' also satisfies (the guarantee) φ_2. Testing assume-guarantee specifications as above can be reduced to LTL model checking. Indeed, it is not hard to see that the system M satisfies ⟨φ_1⟩ M ⟨φ_2⟩ iff the intersection of M × A_{φ_1} with A_{¬φ_2} is empty.
It may be that while φ_2 is not a safety formula, φ_1 is a safety formula. Then, by the proof of Theorem 2.1, the analysis of M × A_{φ_1} can ignore the fairness conditions of A_{φ_1} and can proceed with model checking φ_2. (Note, however, that the system M × A_{φ_1} may not be total, i.e., it may have dead-end states, which need to be eliminated before model checking φ_2.) Suppose now that φ_2 is a safety formula, while φ_1 is not a safety formula. We can then proceed as follows. We first ignore the fairness condition in M × A_{φ_1} and use the techniques above to model check φ_2. Suppose we found a bad prefix that ends with the state ⟨w, q⟩ of M × A_{φ_1}. It remains to check that ⟨w, q⟩ is a fair state, i.e., that there is a fair path starting from ⟨w, q⟩. Instead of performing fair reachability analysis over the entire state space of M × A_{φ_1} or on the reachable state space of M × A_{φ_1}, it suffices to perform this analysis on the set of states that are reachable from ⟨w, q⟩. This could potentially be much easier than doing full fair reachability analysis.
In conclusion, when reasoning about assume-guarantee specifications, it is useful to consider the safety of the assumptions and the guarantees separately.
6.4 Safety in the branching paradigm
Consider a binary tree T. A prefix of T is a nonempty prefix-closed subset of T. For a labeled tree ⟨T, V⟩ and a prefix P of T, a P-extension of ⟨T, V⟩ is a labeled tree ⟨T, V'⟩ in which V and V' agree on the labels of the nodes in P. We say that a branching formula ψ is a safety formula iff, whenever a tree ⟨T, V⟩ violates ψ, there exists a prefix P such that all the P-extensions of ⟨T, V⟩ violate ψ. The logic CTL is a branching temporal logic. In CTL, every temporal operator is preceded by a path quantifier, E ("for some path") or A ("for all paths").
Theorem 6.2 Given a CTL formula ψ, deciding whether ψ is a safety formula is EXPTIME-complete.
Proof: Sistla's algorithm for checking safety of LTL formulas [Sis94] can be adapted to the branching paradigm as follows. Consider a CTL formula ψ. Recall that ψ is not safe iff there is a tree that does not satisfy ψ and all of whose prefixes have at least one extension that satisfies ψ. Without loss of generality, we can assume that the tree has a branching degree bounded by some d. Let A_ψ^d be a nondeterministic automaton for ψ; thus, A_ψ^d accepts exactly these d-ary trees that satisfy ψ. We assume that each state in A_ψ^d accepts at least one tree (otherwise, we can remove it and simplify the transition relation). Let A_ψ^{d,loop} be the automaton obtained from A_ψ^d by defining the set of accepting states as the set of all states. Thus, A_ψ^{d,loop} accepts exactly all d-ary trees all of whose prefixes have at least one extension accepted by A_ψ^d. Hence, ψ is not safety iff the product of the automaton for ¬ψ with A_ψ^{d,loop} is not empty. Since the size of the automata is exponential in ψ and the nonemptiness check is quadratic [VW86b], the EXPTIME upper bound follows.
For the lower bound, we do a reduction from CTL satisfiability. Given a CTL formula ψ, let p be a proposition not in ψ, and let φ = ψ ∧ AFp. We claim that φ is safe iff ψ is not satisfiable. First, if ψ is not satisfiable, then so is φ, which is therefore safe. For the other direction assume, by way of contradiction, that φ is safe and ψ is satisfied by some tree ⟨T, V⟩. The tree ⟨T, V⟩ is labeled only by the propositions appearing in ψ. Let ⟨T, V'⟩ be an extension of ⟨T, V⟩ that refers also to the proposition p and labels T so that ⟨T, V'⟩ does not satisfy AFp. Since φ is safe and ⟨T, V'⟩ violates φ, the tree ⟨T, V'⟩ should have a bad prefix P all of whose extensions violate φ. Consider a P-extension that agrees with V' about the propositions in ψ and has a frontier of p's. Such a P-extension satisfies both ψ and AFp, contradicting the fact that P is a bad prefix.
Using similar arguments, we prove the following theorem, showing that when we disable alternation between universal and existential quantification in the formula, the problem is as complex as in the linear paradigm.
Theorem 6.3 Given an ACTL formula ψ, deciding whether ψ is a safety formula is PSPACE-complete.
Since CTL and ACTL model checking can be completed in linear time [CES86], and can be performed using symbolic methods, a tree automaton of exponential size that detects finite bad prefixes is not of much help. On the other hand, perhaps safety could offer an advantage in the alternating-automata-theoretic framework of [KVW00]. At this point, it is an open question whether safety is a useful notion in the branching-time paradigm.
Acknowledgment
The second author is grateful to Avner Landver for stimulating discussions.
--R
Recognizing safety and liveness.
Symbolic model checking using SAT procedures instead of BDDs.
Symbolic model checking: 10^20 states and beyond.
Design and synthesis of synchronization skeletons using branching time temporal logic.
Automatic verification of finite-state concurrent systems using temporal logic specifications.
Alternation. Journal of the Association for Computing Machinery.
Memory-efficient algorithms for the verification of temporal properties.
Alternative semantics for temporal logics.
Temporal and modal logic.
Program verification.
Simple on-the-fly automatic verification of linear temporal logic.
Using partial orders for the efficient verification of deadlock freedom and safety properties.
A new heuristic for bad cycle detection using BDDs.
Forward model checking techniques oriented to buggy designs.
Weak alternating automata are not that weak.
An automata-theoretic approach to branching-time model checking.
Logical foundation.
Checking that finite state concurrent programs satisfy their linear specification.
Hybrid techniques for fast functional simulation.
The Stanford Temporal Prover.
Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits.
Economy of description by automata.
The Temporal Logic of Reactive and Concurrent Systems: Specification.
The Temporal Logic of Reactive and Concurrent Systems: Safety.
Deadlock checking using net unfoldings.
The equivalence problem for regular expressions with squaring requires exponential time.
Proving liveness properties of concurrent programs.
On the complexity of
The complexity of propositional linear temporal logics.
Safety, liveness and fairness in temporal logic.
The PVS proof checker: A reference manual (beta release).
Testing language containment for ω-automata using BDDs.
An automata-theoretic approach to linear temporal logic.
An automata-theoretic approach to automatic program verification.
Reasoning about infinite computations.
Synthesis of Communicating Processes from Temporal Logic Specifications.
On combining formal and informal verification.
Orna Kupferman , Moshe Y. Vardi, From complementation to certification, Theoretical Computer Science, v.345 n.1, p.83-100, 21 November 2005
Roberto Sebastiani , Eli Singerman , Stefano Tonetta , Moshe Y. Vardi, GSTE is partitioned model checking, Formal Methods in System Design, v.31 n.2, p.177-196, October 2007
Keywords: automata; safety properties; model checking
Recent Advances in Direct Methods for Solving Unsymmetric Sparse Systems of Linear Equations

Abstract: During the past few years, algorithmic improvements alone have reduced the
time required for the direct solution of unsymmetric sparse systems of linear equations
by almost an order of magnitude. This paper compares the performance of some
well-known software packages for solving general sparse systems. In particular, it
demonstrates the consistently high level of performance achieved by WSMP, the most
recent of these solvers. It compares the various algorithmic components of these
solvers and discusses their impact on solver performance. Our experiments show that
the algorithmic choices made in WSMP enable it to run more than twice as fast as the
best among similar solvers and that WSMP can factor some of the largest sparse
matrices available from real applications in only a few seconds on a 4-CPU
workstation. Thus, the combination of advances in hardware and algorithms makes it
possible to quickly and easily solve general sparse linear systems that might have
been considered too large until recently.

1. INTRODUCTION
Developing an efficient parallel, or even serial, direct solver for general unsymmetric
sparse systems of linear equations is a challenging task that has been a subject of
research for the past four decades. Several breakthroughs have been made during
this time. As a result, a number of serial and parallel software packages for solving
such systems are available [Amestoy et al. 2000; Ashcraft and Grimes 1999; Davis
and Duff 1997b; Grund 1998; Gupta 2000; Li and Demmel 1999; Shen et al. 2001;
Schenk et al. 2000].
In this paper, we compare the performance and the main algorithmic features of
some prominent software packages for solving general sparse systems and show that
the algorithmic improvements of the past few years have reduced the time required
to factor general sparse matrices by almost an order of magnitude. Combined with
significant advances in the performance-to-cost ratio of parallel computing hardware
during this period, current sparse solver technology makes it possible to solve,
quickly and easily, problems that might have been considered impractically large
until recently.

Author's address: Anshul Gupta, IBM T. J. Watson Research Center, P. O. Box 218,
Yorktown Heights, NY 10598. Permission to make digital/hard copy of all or part of
this material without fee for personal or classroom use is granted provided that the
copies are not made or distributed for profit or commercial advantage, the ACM
copyright/server notice, the title of the publication, and its date appear, and
notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to
republish, to post on servers, or to redistribute to lists requires prior specific
permission and/or a fee.
(c) ACM Transactions on Mathematical Software, Vol. 28, No. 3, September 2002, Pages 301-324.

We demonstrate the consistently high level of performance
achieved by the Watson Sparse Matrix Package (WSMP) and show that it can factor
some of the largest sparse matrices available from real applications in only a few
seconds on a 4-CPU workstation. The key features of WSMP that contribute to its
performance include a prepermutation of rows to place large entries on the diago-
nal, a symmetric fill-reducing permutation based on the nested dissection ordering
algorithm, and an unsymmetric-pattern multifrontal factorization that is guided by
near-minimal static task- and data-dependency graphs and uses symmetric inter-
supernode threshold pivoting [Gupta 2001b]. Some of these techniques, such as an
unsymmetric pattern multifrontal algorithm based on static near-minimal DAGs,
are new, while others have been used in the past, though not as a combination in
a single sparse solver. In this paper, we will discuss the impact of various algorithmic
components of the sparse general solvers on their performance with particular
emphasis on the contribution of the components of the analysis and numerical factorization
phases of WSMP to its performance.
The paper is organized as follows. In Section 2, we compare the serial time that
some of the prominent general sparse solvers spend in factoring a sparse matrix
A into triangular factors L and U (the most important phase in the solution of a
sparse system of linear equations) and discuss the relative robustness and speed
of these solvers under uniform and realistic test conditions. In Section 3, we list
the main algorithms and strategies that these packages use in their symbolic and
numerical phases, and discuss the effect of these strategies on their respective per-
formance. In Section 4, by means of experimental comparisons, we highlight the
role that various algorithms used in WSMP play in the performance of its LU
factorization. In Section 5, we present a detailed performance comparison in a
practical setting between WSMP and MUMPS, the general-purpose sparse solver
that we show in Section 2 to be the best available at the time of WSMP's release.
Section 6 contains concluding remarks.
2. SERIAL PERFORMANCE OF SOME GENERAL SPARSE SOLVERS
In this section, we compare the performance of some of the well-known software
packages for solving sparse systems of linear equations on a single CPU of an IBM
RS6000 model S80. This is a 600 MHz processor with a 64 KB two-way set-associative
level-1 cache and a peak theoretical speed of 1200 Megaflops, which is representative
of the performance of a typical high-end workstation available before 1999.
Table I lists the test matrices used in this paper, which are some of the largest
publicly available unsymmetric sparse matrices from real applications. The table
also includes the dimension, the number of nonzeros, and the application area of
the origin of each of these matrices.
The sparse solvers compared in this section are UMFPACK Version 2.2 [Davis
and Duff 1997b; 1997a], SuperLU_MT [Demmel et al. 1999], SPOOLES [Ashcraft and
Grimes 1999], SuperLU_dist [Li and Demmel 1998; 1999], MUMPS 4.1.6 [Amestoy
et al. 2001; Amestoy et al. 2000], WSMP [Gupta 2000] and UMFPACK Version 3.2
[Davis 2002]. The solver suite contains two versions each of SuperLU and UMFPACK
because these versions employ very different algorithms in some or all of the
important phases of the solution process (see Section 3 for details). Some other
well-known packages are not featured in this section; however, their comparisons
with one or more of the packages included here are readily available in the
literature and would not alter the inferences that can be drawn from the results
in this section.

Recent Advances in Solution of General Sparse Linear Systems · 303

Table I. Test matrices with their order (N), number of nonzeros (NNZ), and the
application area of origin.

Matrix     N       NNZ      Application
av41092    41092   1683902  Finite element analysis
bbmat      38744   1771722  Fluid dynamics
                            programming
e40r0000   17281   553956   Fluid dynamics
e40r5000   17281   553956   Fluid dynamics
                            simulation
fidap011   16614   1091362  Fluid dynamics
fidapm11   22294   623554   Fluid dynamics
lhr34c     35152   764014   Chemical engineering
mil053     530238  3715330  Structural engineering
mixtank    29957   1995041  Fluid dynamics
nasasrb    54870   2677324  Structural engineering
onetone1   36057   341088   Circuit simulation
onetone2   36057   227628   Circuit simulation
pre2       659033  5959282  Circuit simulation
raefsky3   21200   1488768  Fluid dynamics
raefsky4   19779   1316789  Fluid dynamics
                            simulation
twotone    120750  1224224  Circuit simulation
venkat50   62424   1717792  Fluid dynamics
wang3old   26064   177168   Circuit simulation
wang4      26068   177196   Circuit simulation
Davis and Duff compare UMFPACK with MUPS [Amestoy and Duff 1989] and
MA48 [Duff and Reid 1993]. MUPS is a classical multifrontal code and the predecessor
of MUMPS. MA48 is a sparse unsymmetric factorization code in the HSL package
[HSL 2000] and is based on conventional sparse data structures. Grund [Grund
1998] presents an experimental comparison of GSPAR [Grund 1998] with a few
other solvers, including SuperLU and UMFPACK; however, this comparison is limited
to sparse matrices arising in two very specific applications. MA41 [Amestoy
and Duff 1989; 1993] is a commercial shared-memory parallel version of MUPS
that has been available in HSL since 1990. Amestoy and Puglisi [Amestoy and
Puglisi 2000] have since introduced the unsymmetrization of frontal matrices in
MA41. Since MUMPS is more robust in parallel than MA41, we have chosen to
use MUMPS instead of MA41. A comparison of the S+ package [Shen et al. 2001]
from University of California at Santa Barbara with some others can be found in
[Cosnard and Grigori 2000; Gupta and Muliadi 2001; Shen et al. 2001]. We have
Table II. LU factorization times on a single CPU (in seconds) for UMFPACK Version 2.2,
SuperLU_MT, SPOOLES, SuperLU_dist, MUMPS, WSMP, and UMFPACK Version 3.2,
respectively. The best pre-2000 time is underlined and the overall best time is shown
in boldface. The last row shows the approximate smallest pivoting threshold that
yielded a residual norm close to machine precision after iterative refinement for each
package. FM indicates that a solver ran out of memory, FC indicates an abnormal or no
termination, and FN indicates that the numerical results were inaccurate.

Matrix     UMFP 2.2  SuperLU_MT  SPOOLES  SuperLU_dist  MUMPS  WSMP  UMFP 3.2
bbmat      682.      373.        97.7     256.          72.3   37.1  123.
comp2c     120.      11.1        287.     42.0          23.5   4.45  802.
e40r5000   29.9      17.8        395.     FN            1.18   1.10  7.10
ecl32      FM        961.        562.     201.          116.   37.7  320.
fidap011   168.      FM          12.2     FN            12.7   6.53  22.6
fidapm11   944.      145.        15.1     FN            16.3   10.4  63.2
lhr34c     3.46      FM          FM       11.5          3.53   1.32  5.81
mixtank    FM        FM          346.     198.          86.7   36.5  695.
nasasrb    81.8      FM          25.0     26.7          22.0   11.0  76.1
onetone1   12.2      27.2        113.     10.7          5.79   3.52  7.33
pre2       FM        FC          FM       FM            FM     223.  FM
raefsky3   39.0      34.2        10.0     6.86          6.09   5.04  20.4
raefsky4   109.      FM          157.     28.6          28.5   7.93  36.2
twotone    30.0      FM          724.     637.          79.4   26.6  46.5
venkat50   16.2      33.6        11.6     8.11          8.70   4.39  14.3
wang3old   106.      281.        62.7     36.9          25.3   10.7  63.7
wang4      97.3      223.        16.2     23.7          18.6   10.9  85.2
excluded another recent package, PARDISO [Schenk et al. 2000], because it is designed
for unsymmetric matrices with a symmetric structure, and therefore, would
have failed on many of our test matrices unless they were padded with zero valued
entries to structurally symmetrize them.
Each package is compiled in 32-bit mode with the -O3 optimization option of the
AIX Fortran or C compilers and is linked with IBM's Engineering and Scientific
Subroutine Library (ESSL) for the basic linear algebra subprograms (BLAS) that
are optimized for RS6000 processors. Almost all the floating point operations in
each solver are performed inside the BLAS routines. Using the same compiler and
BLAS library affords each code an equal access to the hardware specific optimiza-
tions. A maximum of 2 GB of memory was available to each code.
Table II shows the LU factorization time taken by each code for the matrices
in our test suite. In addition to each solver's factorization time, the table also
lists the year in which the latest version of each package became available. FM,
FC, or FN entries indicate the failure of a solver to factor a matrix satisfactorily.
Subscripts M, C, and N indicate failure due to running out of memory, abnormal
or no termination, and numerically inaccurate results, respectively.
One of the ground rules for the experiments reported in Table II was that all
input parameters that may influence the behavior of a program were fixed and
were not modified to accommodate the demands of individual matrices. However,
through a series of pre-experiments, we attempted to fix these parameters to values
that yielded the best results on an average on the target machine. For example,
we tested all the packages on all matrices in our test suite for various values of the
pivoting threshold (Pthresh), and for each, chose the smallest value for which all
completed runs yielded a backward error that was close to machine precision after
iterative refinement. The message-passing version of SuperLU (SuperLU_dist) does
not have a provision for partial pivoting; hence the threshold is 0. Other parameters
easily accessible in the software, such as the various block sizes in SuperLU, were
also fixed to values that appeared to be the best on average.
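The threshold-pivoting criterion used in most of these solvers can be made concrete with a small sketch. The helper below is hypothetical (it is not code from any of the packages discussed) and operates on a dense column of candidate values: an entry qualifies as a pivot if its magnitude is at least Pthresh times the largest magnitude in the column.

```python
def acceptable_pivots(column, pthresh):
    """Indices of entries that qualify as threshold pivots: an entry
    qualifies if its magnitude is at least `pthresh` times the largest
    magnitude in the column (pthresh = 1.0 recovers classical partial
    pivoting; pthresh = 0.0 accepts any entry)."""
    cmax = max(abs(v) for v in column)
    if cmax == 0.0:
        return []  # numerically zero column: no acceptable pivot
    return [i for i, v in enumerate(column) if abs(v) >= pthresh * cmax]

# A small threshold keeps a sparsity-preserving entry eligible even
# though it is not the largest entry in the column.
col = [0.2, -4.0, 1.0]
print(acceptable_pivots(col, 0.01))  # -> [0, 1, 2]
print(acceptable_pivots(col, 1.0))   # -> [1]
```

This is why the choice of Pthresh is a trade-off: a smaller threshold admits more pivots that preserve the fill-reducing ordering, at the cost of potentially larger element growth.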
Note that some of the failures in the first four columns of Table II can be fixed by
changing some of the options or parameters in the code. However, as noted above,
the options chosen to run the experiments reported in Table II are such that they are
best for the test suite as a whole. Changing these options may avoid some failures,
but cause many more. We have chosen to report results with a consistent set of
options because we believe that that is representative of the real-world environment
in which these solvers are expected to be used. Moreover, in most cases, even if an
alternative set of options exists that can solve the failed cases reported in Table II,
those options are not known a priori and can only be determined by trial and
error. Some memory allocation failures, however, are the result of static memory
allocation for data structures with variable and unknown sizes. Therefore, such
failures are artifacts of the implementation and neither reflect the actual amount of
memory needed (if it were allocated properly) nor imply that the underlying algorithms
are not robust.
The best factorization time for each matrix using any solver released before the year
2000 is underlined in Table II and the overall best factorization time is shown in
boldface. Several interesting observations can be made from Table II. Perhaps the
most striking observation in the table pertains to the range of times that different
packages available before 1999 would take to factor the same matrix. It is not
uncommon to notice the fastest solver being faster than the slowest by one to
two orders of magnitude. Additionally, none of the pre-1999 solvers yielded a
consistent level of performance. For example, UMFPACK 2.2 is 13 times faster
than SPOOLES on e40r5000 but 14 times slower on fidap011. Also noticeable is
the marked increase in the reliability and ease of use of the software packages released
in 1999 or later. There are 21 failures in the first four columns of Table II and only
two in the last three columns. MUMPS is clearly the fastest and the most robust
amongst the solvers released before 2000. However, WSMP is more than twice as
fast as MUMPS on this machine based on the average ratio of the factorization time
of MUMPS to that of WSMP. WSMP also has the most consistent performance. It
has the smallest factorization time for all but two matrices and is the only solver
that did not fail on any of the test matrices.
306 \Delta Anshul Gupta
3. KEY ALGORITHMIC FEATURES OF THE SOLVERS
In this section, we list the key algorithms and strategies that the solvers listed in
Table II use in the symbolic and numerical phases of the computation of the LU factors of
a general sparse matrix. We then briefly discuss the effect of these choices on the
performance of the solvers. Detailed descriptions of all the algorithms are beyond
the scope of this paper, but are readily available in the citations provided. Many
of the solvers, whose single-CPU performance is compared in Table II, are designed
for shared- or distributed-memory parallel computers. The target architecture of
each of the solvers is also listed.
(1) UMFPACK 2.2 [Davis and Duff 1997b; 1997a]
-Fill reducing ordering: Approximate minimum degree [Amestoy et al. 1996]
on unsymmetric structure, combined with suitable numerical pivot search
during LU factorization.
-Task-dependency graph: Directed acyclic graph.
-Numerical factorization: Unsymmetric-pattern multifrontal.
-Pivoting strategy: Threshold pivoting implemented by row-exchanges.
-Target architecture: Serial.
(2) SuperLU_MT [Demmel et al. 1999]
-Fill reducing ordering: Multiple minimum degree (MMD) [George and Liu
1981] computed on the symmetric structure of A^T A or A + A^T and applied
to the columns of A. (The structure of A^T A was used in the experiments
for Table II.)
-Task-dependency graph: Directed acyclic graph.
-Numerical factorization: Supernodal left-looking.
-Pivoting strategy: Threshold pivoting implemented by row-exchanges.
-Target architecture: Shared-memory parallel.
(3) SPOOLES [Ashcraft and Grimes 1999]
-Fill reducing ordering: Generalized nested dissection/multisection [Ashcraft
and Liu 1996] computed on the symmetric structure of A + A^T and applied
symmetrically to rows and columns of A.
-Task-dependency graph: Tree based on the structure of A + A^T.
-Numerical factorization: Supernodal Crout.
-Pivoting strategy: Threshold rook pivoting that performs row and column
exchanges to control growth in both L and U .
-Target architecture: Serial, shared-memory parallel, and distributed-memory
parallel.
(4) SuperLU_dist [Li and Demmel 1998; 1999]
-Fill reducing ordering: Multiple minimum degree [George and Liu 1981]
computed on the symmetric structure of A + A^T and applied symmetrically
to the rows and columns of A.
-Task-dependency graph: Directed acyclic graph.
-Numerical factorization: Supernodal right-looking.
-Pivoting strategy: No numerical pivoting during factorization. Rows are
preordered to maximize the magnitude of the product of the diagonal entries
[Duff and Koster 1999; 2001].
-Target architecture: Distributed-memory parallel.
(5) MUMPS [Amestoy et al. 2001; Amestoy et al. 2000]
-Fill reducing ordering: Approximate minimum degree [Amestoy et al. 1996]
computed on the symmetric structure of A + A^T and applied symmetrically
to rows and columns of A.
-Task-dependency graph: Tree based on the structure of A + A^T.
-Numerical factorization: Symmetric-pattern multifrontal.
-Pivoting strategy: Preordering rows to maximize the magnitude of the product
of the diagonal entries [Duff and Koster 1999; 2001], followed by unsymmetric
row exchanges within supernodes and symmetric row and column
exchanges between supernodes.
-Target architecture: Distributed-memory parallel.
(6) WSMP [Gupta 2000]
-Fill reducing ordering: Nested dissection [Gupta 1997] computed on the
symmetric structure of A + A^T and applied symmetrically to rows and
columns of A.
-Task-dependency graph: Minimal directed acyclic graph [Gupta 2001b].
-Numerical factorization: Unsymmetric-pattern multifrontal.
-Pivoting strategy: Preordering rows to maximize the magnitude of the product
of the diagonal entries [Gupta and Ying 1999], followed by unsymmetric
partial pivoting within supernodes and symmetric pivoting between supern-
odes. Rook pivoting (which attempts to contain growth in both L and U) is
an option.
-Target architecture: Shared-memory parallel.
(7) UMFPACK 3.2 [Davis 2002]
-Fill reducing ordering: Column approximate minimum degree algorithm
[Davis et al. 2000] to compute a fill-reducing column preordering.
-Task-dependency graph: Tree based on the structure of A^T A.
-Numerical factorization: Unsymmetric-pattern multifrontal.
-Pivoting strategy: Threshold pivoting implemented by row-exchanges.
-Target architecture: Serial.
The performance of the solvers compared in Table II is greatly affected by the algorithmic
features outlined above. We now briefly describe, in order of importance,
the relationship between some of these algorithms and the performance characteristics
of the solvers that employ these algorithms.
3.1 Pivoting Strategy and Application of Fill-Reducing Ordering
The application of the fill-reducing permutation and the pivoting strategy used in
different solvers seem to be the most important factors that distinguish MUMPS
and WSMP from the others and allow these two to deliver consistently good performance.
3.1.1 The Conventional Strategy. In [George and Ng 1985], George and Ng
showed that the fill-in resulting from the LU factorization of an irreducible square
unsymmetric sparse matrix A, irrespective of its row permutation, is a subset of the
fill-in that a symmetric factorization of A^T A would generate. Guided by this
result, many unsymmetric sparse solvers developed in the 1990's adopted variations of
the following ordering and pivoting strategy. An ordering algorithm would seek to
compute a fill-reducing permutation of the columns of A based on their sparsity
pattern, because a column permutation of A is equivalent to a symmetric permutation
of A^T A. The numerical factorization phase of these solvers would then seek
to limit pivot growth via threshold pivoting involving row interchanges.
A problem with the above strategy is that the upper-bound on the fill in the LU
factorization of A predicted by George and Ng's result can be very loose, especially
in the presence of even one relatively dense row in A. As a result, the initial column
ordering could be very ineffective. Moreover, two different column orderings,
both equally effective in reducing fill in the symmetric factorization of A^T A, could
enjoy very different degrees of success in reducing the fill in the LU factorization
of A. There is some evidence of this being a factor in the extreme variations in
the factorization times of different solvers for the same matrices in Table II. The
matrices that have a symmetric structure and require very little pivoting, such as
nasasrb, raefsky3, rma10, venkat50, and wang4 exhibit relatively less variation in
the factorization times of different solvers. On the other hand, consider the performance
of WSMP and UMFPACK 3.2 on matrices comp2c and tib, which contain a
few rows that are much denser than the rest. Both WSMP and UMFPACK 3.2 use
very similar unsymmetric-pattern multifrontal factorization algorithms. However,
since the column ordering in UMFPACK 3.2 seeks to minimize the fill in a symmetric
factorization of A^T A rather than directly in the LU factorization of A, it
is more than two orders of magnitude slower than WSMP on these matrices. Our
experiments (Section 5) have verified that WSMP did not enjoy such a dramatic
advantage over UMFPACK 3.2 for these matrices due to other differences such as
the use of a nested-dissection ordering or a pre-permutation of matrix rows.
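The looseness of the George and Ng bound in the presence of a dense row is easy to reproduce. The sketch below (a hypothetical helper, not code from any of the solvers) builds the nonzero pattern of A^T A from the row patterns of A: columns j and k are coupled in A^T A exactly when some row of A is nonzero in both, so a single dense row couples every pair of columns and makes the pattern completely dense.

```python
def ata_pattern(rows, n):
    """Nonzero pattern of A^T A for an n-column matrix whose rows are
    given as sets of column indices: columns j and k are coupled in
    A^T A exactly when some row of A is nonzero in both."""
    pat = {j: {j} for j in range(n)}
    for r in rows:
        for j in r:
            pat[j] |= r  # every pair of columns sharing a row is coupled
    return pat

def nnz(pat):
    """Total number of nonzeros in a pattern."""
    return sum(len(s) for s in pat.values())

n = 5
sparse_rows = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 0}]  # two nonzeros per row
print(nnz(ata_pattern(sparse_rows, n)))                  # -> 15 (modest coupling)

dense_row = {0, 1, 2, 3, 4}                              # one full row
print(nnz(ata_pattern(sparse_rows + [dense_row], n)))    # -> 25 (fully dense)
```

A column ordering computed on the second pattern has no sparsity left to exploit, even though the LU factors of A itself may remain quite sparse.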
3.1.2 The Strategy Used in MUMPS and WSMP. We now briefly describe the
ordering and pivoting strategy of MUMPS and WSMP in the context of a structurally
symmetric matrix. Note that pivoting in MUMPS would be similar even in
the case of an unsymmetric matrix because it uses a symmetric-pattern multifrontal
algorithm guided by the elimination tree [Liu 1990] corresponding to the symmetric
structure of A + A^T. On the other hand, WSMP uses an unsymmetric-pattern
multifrontal algorithm and an elimination DAG (directed acyclic graph) to guide
the factorization. Therefore, the pivoting is somewhat more complex if the matrix
A to be factored has an unsymmetric structure. However, the basic pivoting idea
and the reason why it is effective remain the same.
Both MUMPS and WSMP start with a symmetric fill-reducing permutation computed
on the structure of A + A^T. Like most modern sparse factorization codes,
MUMPS and WSMP work with supernodes: adjacent groups of rows and columns
with the same or nearly the same structures in the factors L and U. An interchange
amongst the rows and columns of a supernode has no effect on the overall fill-in,
and is the preferred mechanism for finding a suitable pivot. However, there is
no guarantee that the algorithm would always succeed in finding a suitable pivot
within the pivot block; that is, an element whose row as well as column index lies
within the indices of the supernode being currently factored. When the algorithm
reaches a point where it cannot factor an entire supernode based on the prescribed
threshold, it merges the remaining rows and columns of the supernode with its
parent supernode in the elimination tree. This is equivalent to a symmetric permutation
of the failed rows and columns to a location with higher indices within
the matrix. By virtue of the properties of the elimination tree [Liu 1990], the new
location of these failed rows and columns also happens to be their "next best"
location from the perspective of the potential fill-in that these rows and columns
would produce. For example, in the context of a fill-reducing ordering based on
the nested dissection [George and Liu 1978; Lipton et al. 1979] of the graph of the
coefficient matrix, this pivoting strategy is equivalent to moving the vertex corresponding
to a failed pivot from its partition to the immediate separator that created
that partition. Merged with a parent supernode, the unsuccessful portion of the
child supernode has more rows and columns available for potential interchange.
However, should a part of the new supernode remain unfactored due to a lack of
suitable intra-supernode pivots, it can again be merged with its parent supernode,
and so on.
The key point is that, with this strategy, pivot failures increase the fill-in gracefully
rather than arbitrarily. Moreover, the fewer the inter-supernode pivoting
steps, the closer the final fill-in stays to that of the original fill-reducing ordering.
Although, unlike the conventional strategy, there is no proven upper-bound on the
amount of fill-in that can potentially be generated, the empirical evidence clearly
suggests that the extra fill-in due to pivoting stays reasonably well-contained. To
further aid this strategy, it has been shown recently [Amestoy et al. 2001; Duff and
Koster 1999; Li and Demmel 1998] that permuting the rows or the columns of the
matrix prior to factorization so as to maximize the magnitude of its diagonal entries
can often be very effective in reducing the amount of pivoting during factorization.
Both MUMPS and WSMP use this technique to reduce inter-supernode pivoting
and the resulting extra fill-in.
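The effect of the pre-permutation that maximizes the magnitude of the product of the diagonal entries [Duff and Koster 1999] can be illustrated with a brute-force sketch. Real codes solve this as a weighted bipartite matching problem (e.g., the MC64 routine); the exhaustive search below is a hypothetical stand-in that is only viable for tiny matrices.

```python
from itertools import permutations

def max_product_diagonal(a):
    """Brute-force search for the row permutation that maximizes the
    product of the magnitudes of the diagonal entries. perm[j] is the
    row placed in position j, so the permuted diagonal is a[perm[j]][j]."""
    n = len(a)
    best, best_perm = -1.0, None
    for perm in permutations(range(n)):
        p = 1.0
        for j in range(n):
            p *= abs(a[perm[j]][j])
        if p > best:
            best, best_perm = p, perm
    return best_perm

a = [[0.0, 9.0, 1.0],
     [5.0, 0.0, 0.0],
     [0.0, 1.0, 4.0]]
print(max_product_diagonal(a))  # -> (1, 0, 2): the permuted diagonal is 5.0, 9.0, 4.0
```

With the zeros moved off the diagonal and large entries moved onto it, far fewer threshold-pivot failures (and hence far fewer delayed, fill-increasing inter-supernode pivots) occur during factorization.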
Like MUMPS and WSMP, SPOOLES too uses a symmetric fill-reducing permutation
followed by symmetric inter-supernode pivoting. However, SPOOLES employs
a pivoting algorithm known as rook pivoting that seeks to limit pivot growth in
both L and U . Other than SPOOLES, in all solvers discussed in this paper, a pivot
is considered suitable as long as it is not smaller in magnitude than pivot threshold
times the entry with the largest magnitude in that column. The pivoting algorithm
thus seeks to control pivot growth only in L. The more stringent pivot suitability
criterion of SPOOLES causes a large number of pivot failures and the resulting
fill-in overshadows a good initial ordering. Simple threshold partial pivoting yields
a sufficiently accurate factorization for most matrices, including all our test cases.
Therefore, rook pivoting is an option in WSMP, but the default is the standard
threshold pivoting.
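The difference between the two suitability criteria can be sketched as follows. These checks are hypothetical simplifications operating on a small dense matrix, not the supernodal tests used in the actual codes: the standard threshold test compares a candidate only against its column, while the rook-style test also compares it against its row.

```python
def passes_threshold(a, i, j, tau):
    """Standard threshold test: the entry must be within a factor tau of
    the largest magnitude in its column (limits growth in L only)."""
    col_max = max(abs(a[k][j]) for k in range(len(a)))
    return abs(a[i][j]) >= tau * col_max

def passes_rook(a, i, j, tau):
    """Rook-style test: the entry must additionally be within a factor
    tau of the largest magnitude in its row, limiting growth in U too."""
    row_max = max(abs(v) for v in a[i])
    return passes_threshold(a, i, j, tau) and abs(a[i][j]) >= tau * row_max

a = [[4.0, 100.0],
     [1.0,   2.0]]
print(passes_threshold(a, 0, 0, 0.1))  # True: 4.0 dominates its column
print(passes_rook(a, 0, 0, 0.1))       # False: 100.0 sits in the same row
```

The example shows why rook pivoting rejects more candidate pivots: an entry that easily satisfies the column test can still fail the row test, which is consistent with the larger number of pivot failures observed for SPOOLES.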
3.2 Ordering Algorithms
In addition to the decision whether to compute a fill-reducing symmetric ordering
or column ordering, the actual ordering algorithm itself affects the performance
of the solvers. WSMP uses a nested dissection based on a multilevel partitioning
scheme, which is very similar to Metis [Karypis and Kumar 1999]. As explained
in [Gupta 1997], WSMP's ordering scheme adds some heuristics to the
basic multilevel approach to make it more robust for some very unstructured prob-
lems. SPOOLES uses a similar ordering that generates multisections of graphs
instead of bisections [Ashcraft and Liu 1996]. Section 4 of this paper and [Amestoy
et al. 2001] present empirical evidence that graph-partitioning based orderings used in
SPOOLES and WSMP are generally more effective in reducing the fill-in and operation
count in factorization than local heuristics, such as multiple minimum degree
(MMD) [Liu 1985] used in SuperLU_MT and SuperLU_dist, approximate minimum
degree (AMD) [Amestoy et al. 1996] used in MUMPS, a variation of AMD used in
UMFPACK 2.2, and the column approximate minimum degree (COLAMD) [Davis
et al. 2000] algorithm used in UMFPACK 3.2. In most solvers in which ordering is
a separate phase, the users can override the default ordering and can provide their
own permutation vectors.
In their ordering phase, UMFPACK 2.2 and WSMP perform another manipulation
of the sparse coefficient matrix prior to performing any other symbolic or
numerical processing on it. They seek to reduce it into a block triangular form [Duff
et al. 1990], which can be achieved very efficiently [Tarjan 1972]. Solving the original
system then requires analyzing and factoring only the diagonal block matrices.
Some of the symbolic algorithms employed in WSMP [Gupta 2001b] are valid only
for irreducible sparse matrices. The cost of reduction to a block triangular form
is insignificant, but it can offer potentially large savings when it is effective. In
our test suite, however, lhr34c is the only matrix that benefits significantly from
reduction to block triangular form. The others either have only one block or the
size of the largest block is fairly close to the dimension of the overall coefficient
matrix.
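The reduction itself rests on finding strongly connected components: once a zero-free diagonal has been obtained, the diagonal blocks of the block triangular form are the strongly connected components of the directed graph with an edge from j to i for every off-diagonal nonzero a_ij. The sketch below uses a simple recursive formulation of Tarjan's algorithm [Tarjan 1972]; solver codes use the iterative, linear-time version.

```python
def scc_blocks(adj):
    """Strongly connected components of a directed graph given as
    {vertex: set_of_successors}; for a matrix with a zero-free diagonal,
    these components are the diagonal blocks of the block triangular form."""
    index, low, stack, on_stack, comps = {}, {}, [], set(), []

    def visit(v):
        index[v] = low[v] = len(index)
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    for v in adj:
        if v not in index:
            visit(v)
    return comps

# A 4x4 pattern that decouples into two independent 2x2 diagonal blocks:
adj = {0: {1}, 1: {0}, 2: {3}, 3: {2}}
print(sorted(map(sorted, scc_blocks(adj))))  # -> [[0, 1], [2, 3]]
```

Only the two 2x2 blocks need to be analyzed and factored; ordering the components topologically then yields the block triangular permutation.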
3.3 Symbolic Factorization Algorithms
The process of factoring a sparse matrix can be expressed by a directed acyclic task-
dependency graph, or task-DAG in short. The vertices of this DAG correspond
to the tasks of factoring row-column pairs or groups of row-column pairs of the
sparse matrix and the edges correspond to the dependencies between the tasks. A
task is ready for execution if and only if all tasks with incoming edges to it have
completed. In addition to a task-DAG, there is a data-dependency graph or a data-
DAG associated with sparse matrix factorization. The vertex set of the data-DAG
is the same as that of the task-DAG for a given sparse matrix. An edge from a
vertex i to a vertex j in the data-DAG denotes that at least some of the data
produced by task i is required as input by task j. While
the task-DAG is unique to a given sparse matrix, the data-DAG can be a function
of the sparse factorization algorithm. Multifrontal algorithms [Duff and Reid 1984;
Liu 1992; Davis and Duff 1997b] for sparse factorization can work with a minimal
data-DAG (i.e., a data-DAG with the smallest possible number of edges) for a given
matrix.
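The readiness rule just described (a task may execute only when all tasks with incoming edges to it have completed) amounts to a topological traversal of the task-DAG. A minimal sketch follows, with `deps` mapping each task to the set of tasks it depends on; the names are illustrative and not taken from any solver's API:

```python
from collections import deque

def execute_dag(tasks, deps):
    """Run a task-DAG in a valid order: a task becomes ready exactly
    when all tasks with incoming edges to it have completed (Kahn's
    algorithm). deps[t] is the set of tasks t depends on."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    ready = deque(t for t in tasks if indeg[t] == 0)
    done = []
    while ready:
        t = ready.popleft()
        done.append(t)             # "factor" the row-column group here
        for u in dependents[t]:
            indeg[u] -= 1
            if indeg[u] == 0:      # last dependency just finished
                ready.append(u)
    return done
```

In a parallel factorization, every task in the `ready` queue can be executed concurrently; the DAG's width thus bounds the available parallelism.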
The task- and data-dependency graph involved in the factorization of a symmetric
matrix is a tree, known as the elimination tree [Liu 1990]. However, for unsymmetric
matrices, the task- and data-DAGs are general directed acyclic graphs. Moreover,
the edge-set of the minimal data-DAG for unsymmetric sparse factorization can be
a superset of the edge-set of the task-DAG. In [Gilbert and Liu 1993], Gilbert and Liu
describe elimination structures for unsymmetric sparse LU factors and give
Recent Advances in Solution of General Sparse Linear Systems · 311
an algorithm for sparse unsymmetric symbolic factorization. These elimination
structures are two directed acyclic graphs (DAGs) that are transitive reductions
of the graphs of the factor matrices L and U , respectively. The union of these
two directed acyclic graphs is the minimal task-dependency graph of sparse LU
factorization; that is, it is a task-dependency graph in which all edges are necessary.
Using a minimal elimination structure to guide factorization is useful because it
avoids overheads due to redundancy and exposes maximum parallelism. However,
some researchers have argued that computing an exact transitive reduction can be
too expensive [Davis and Duff 1997b; Eisenstat and Liu 1993] and have proposed
using sub-minimal DAGs with more edges than necessary. Traversing or pruning
the redundant edges in the elimination structure during numerical factorizations, as
is done in UMFPACK and SuperLU_MT, can be a source of overhead. Alternatively,
many unsymmetric factorization codes, such as SPOOLES and MUMPS, adopt the
elimination tree corresponding to the symmetric structure of A as the task-
and data-dependency graph to guide the factorization. This adds artificial
dependencies to the elimination structure and can lead to diminished parallelism
and extra fill-in and operations during factorization.
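For the symmetric case, the elimination tree mentioned above can be computed in near-linear time from the matrix pattern using the classical ancestor-compression algorithm from the sparse direct-methods literature. The following is that textbook algorithm, not code from any of the packages discussed here:

```python
import numpy as np
from scipy.sparse import csr_matrix

def elimination_tree(A):
    """Elimination tree of a symmetric sparse matrix A: parent[j] is
    the row index of the first off-diagonal nonzero in column j of the
    Cholesky factor (roots get -1). Uses path compression via an
    'ancestor' array so each walk toward the root stays short."""
    A = csr_matrix(A)
    n = A.shape[0]
    parent = np.full(n, -1)
    ancestor = np.full(n, -1)
    for j in range(n):
        # For symmetric A, row j's pattern equals column j's pattern.
        for i in A.indices[A.indptr[j]:A.indptr[j + 1]]:
            if i >= j:
                continue
            # Walk from i toward the current root, compressing the
            # path, and attach that root under j.
            r = i
            while ancestor[r] != -1 and ancestor[r] != j:
                nxt = ancestor[r]
                ancestor[r] = j
                r = nxt
            if ancestor[r] == -1:
                ancestor[r] = j
                parent[r] = j
    return parent
```

A tridiagonal matrix yields a path (each vertex's parent is its successor), while an arrow matrix with a full last row and column yields a star rooted at the last vertex, the two extremes of available tree parallelism.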
WSMP uses a modified version of the classical unsymmetric symbolic factorization
algorithm [Gilbert and Liu 1993] that detects the supernodes as it processes
the rows and columns of the sparse matrix and enables a fast computation of the
exact transitive reductions of the structures of L and U to yield a minimal task-
dependency graph [Gupta 2001b]. In addition, WSMP uses a fast algorithm for
the derivation of a near-minimal data-dependency DAG from the minimal task-
dependency DAG. The data-dependency graph of WSMP is such that it is valid in
the presence of any amount of inter-supernode pivoting and yet has been empirically
shown to contain only between 0 and 14% (4% on average) more edges than the
minimal task-dependency graph on a suite of large unsymmetric sparse matrices.
The edge-set of this static data-DAG is sufficient to capture all possible dependencies
that may result from row and column permutations due to numerical pivoting
during factorization. The powerful symbolic algorithms used in WSMP enable its
numerical factorization phase to proceed very efficiently spending minimal time on
non-floating-point operations.
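The transitive reductions discussed above are easy to experiment with on toy DAGs. The naive sketch below checks, for each edge (u, v), whether v is reachable through some other successor of u; this is fine for examples but far too slow for a solver's symbolic phase, which is exactly the cost concern raised in the text:

```python
def transitive_reduction(nodes, edges):
    """Smallest DAG with the same reachability as `edges`: an edge
    (u, v) is redundant if some longer path u -> w -> ... -> v exists
    through another successor w of u."""
    succ = {u: {v for (a, v) in edges if a == u} for u in nodes}

    def reaches(a, b):
        # Depth-first search: is there a path a -> ... -> b?
        stack, seen = [a], set()
        while stack:
            x = stack.pop()
            for y in succ[x]:
                if y == b:
                    return True
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    return {(u, v) for (u, v) in edges
            if not any(w != v and reaches(w, v) for w in succ[u])}
```

For instance, in a DAG with edges (0,1), (1,2), and (0,2), the edge (0,2) is implied by the path through 1, so a dependency-driven factorization never needs it.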
3.4 Numerical Factorization Algorithms
The multifrontal method [Duff and Reid 1984; Liu 1992] for solving sparse systems
of linear equations usually offers a significant performance advantage over more
conventional factorization schemes by permitting efficient utilization of parallelism
and memory hierarchy. Our detailed experiments in [Gupta and Muliadi 2001] show
that all three multifrontal solvers (UMFPACK, MUMPS, and WSMP) run at a
much higher Megaflops rate than their non-multifrontal counterparts. The original
multifrontal algorithm proposed by Duff and Reid [Duff and Reid 1984] uses the
symmetric pattern of A + A^T to generate an elimination tree to guide the numerical
factorization, which works on symmetric frontal matrices. This symmetric-pattern
multifrontal algorithm used in MUMPS can incur a substantial overhead for very
unsymmetric matrices due to unnecessary dependencies in the elimination tree and
extra zeros in the artificially symmetrized frontal matrices. Davis and Duff [Davis
and Duff 1997b] and Hadfield [Hadfield 1994] introduced an unsymmetric-pattern
multifrontal algorithm, which is used in UMFPACK and overcomes the shortcomings
of the symmetric-pattern multifrontal algorithm. However, UMFPACK does
not reveal the full potential of the unsymmetric-pattern multifrontal algorithm.
UMFPACK 2.2 used a degree-approximation algorithm similar to the AMD [Amestoy
et al. 1996] fill-reducing ordering algorithm, which has now been shown to be less
effective than nested dissection [Amestoy et al. 2001]. Moreover, the merging of the
ordering and symbolic factorization within numerical factorization in UMFPACK
2.2 slows down the latter and excludes the possibility of using a better ordering while
retaining the factorization code. UMFPACK 3.2 separates the analysis (ordering
and symbolic factorization) from numerical factorization. However, as discussed in
Section 3.1, it suffers from the pitfalls of permuting only the columns based on a
fill-reducing ordering rather than using a symmetric fill-reducing permutation.
The unsymmetric-pattern multifrontal LU factorization in WSMP, described in
detail in [Gupta 2001b], is aided by powerful algorithms in the analysis phase
and uses efficient dynamic data structures to perform potentially multiple steps
of numerical factorization with minimum overhead and maximum parallelism. It
uses a technique similar to the one described in [Hadfield 1994] to efficiently handle
any amount of pivoting and different pivot sequences without repeating the
symbolic phase for each factorization. In [Gupta 2001b], the author defines near-minimal
static task- and data-dependency DAGs that can be computed during
symbolic factorization and describes an unsymmetric-pattern multifrontal factorization
algorithm based on these DAGs. As shown in [Gupta 2001b], an important
property of these DAGs is that even though they are static, they can handle an
arbitrary amount of dynamic pivoting to guarantee numerical stability. In other
unsymmetric multifrontal solvers [Hadfield 1994; Davis and Duff 1997a; 1997b],
the task- and data-DAGs are computed on the fly during numerical factorization.
The use of static pre-computed DAGs contributes significantly to the simplification
and efficiency of WSMP's numerical factorization. Furthermore, the resulting
unsymmetric-pattern multifrontal algorithm is also more amenable to efficient par-
allelization. A description of WSMP's shared-memory parallel LU factorization
algorithm can be found in [Gupta 2001a].
4. ROLE OF WSMP ALGORITHMS IN ITS LU FACTORIZATION PERFORMANCE
The speed and the robustness of WSMP's sparse LU factorization stem from (1)
its overall ordering and pivoting strategy, (2) an unsymmetric-pattern multifrontal
numerical factorization algorithm guided by static near-minimal task- and data-dependency
DAGs, (3) the use of nested-dissection ordering, and (4) permutation
of high-magnitude coefficients to the diagonal. In Section 3, we presented arguments
based on empirical data that a symmetric fill-reducing ordering followed by
a symmetric inter-supernode pivoting is a major feature distinguishing MUMPS
and WSMP from most other sparse unsymmetric solvers. The role that this strategy
plays in the performance of sparse LU factorization is evident from the results
in Table 2. In this section, we present the results of some targeted experiments
on MUMPS and WSMP to highlight the role of each one of the other three key
algorithmic features of WSMP in its LU factorization performance.
Table III. Number of factor nonzeros (nnz_f), operation count (Ops), LU
factorization time on one (T_1) and four (T_4) processors, and speedup (S) of
MUMPS and WSMP run with default options.

                         MUMPS                          WSMP
Matrices    nnz_f   Ops   T_1   T_4   S      nnz_f   Ops   T_1   T_4   S
af23560      8.34  2.56  4.05  2.27  1.8      9.58  3.27  3.96  1.83  2.2
bbmat        46.0  41.4  48.0  20.7  2.3      31.9  20.1  22.9  8.26  2.8
comp2c       7.05  4.22  10.2  7.33  1.3      2.98  0.78  1.64  0.67  2.4
e40r0000     1.72  .172  0.83  0.61  1.3      2.06  .250  0.56  0.28  2.0
fidap011     12.5  7.01  8.73  7.64  1.1      8.69  3.20  3.93  1.78  2.2
fidapm11     14.0  9.67  11.6  7.38  1.6      12.8  5.21  6.50  2.60  2.5
lhr34c       5.58  .641  2.21  1.15  1.9      2.91  .163  0.92  0.93  1.0
mil053       75.9  31.8  42.8  16.6  2.5      58.9  14.4  23.0  10.6  2.2
mixtank      38.5  64.4  64.8  31.0  2.1      23.2  19.5  21.9  8.32  2.6
nasasrb      24.2  9.45  13.1  10.2  1.3      18.9  5.41  6.98  3.37  2.1
onetone2     2.26  .510  1.17  0.82  1.4      1.41  .191  0.72  0.70  1.0
pre2         358.  Fail  Fail  Fail   -       79.2  96.3  127.  55.3  2.3
raefsky3     8.44  2.90  4.56  3.45  1.3      8.09  2.57  3.16  1.40  2.3
raefsky4     15.7  10.9  13.0  8.97  1.4      10.3  4.11  4.91  2.34  2.1
twotone      22.1  29.3  43.5  26.1  1.6      10.8  9.46  13.5  9.05  1.5
venkat50     12.0  2.31  4.87  2.74  1.8      11.4  1.75  2.83  1.13  2.5
wang3old     13.8  13.8  15.1  6.48  2.3      9.66  5.91  6.65  3.50  1.9
wang4        11.6  10.5  11.8  5.84  2.0      9.93  6.09  6.84  3.08  2.2

The experiments described in this section were conducted on one and four
processors of an IBM RS6000 WH-2. Each of its four 375 MHz Power 3 processors
has a peak theoretical speed of 1.5 Gigaflops. The peak theoretical speed of the
workstation is therefore 6 Gigaflops, which is representative of the performance of
a high-end workstation available in 2001. The four CPUs of this workstation share
an 8 MB level-2 cache and have a 64 KB level-1 cache each. 2 GB of memory was
available to each single CPU run and the 4-CPU runs of WSMP. MUMPS, when
run on 4 processors, had a total of 4 GB of memory available to it. MUMPS uses
the message-passing paradigm and MPI processes for parallelism. The distributed-memory
parallel environment for which MUMPS was originally designed may add
some constraints and overheads to it. For example, numerical pivoting is generally
easier to handle in a shared-memory parallel environment than in a distributed-memory
one. However, MUMPS was run in a mode in which MPI was aware of and
took advantage of the fact that the multiple processes were running on the same
machine (by setting the MP SHARED MEMORY environment variable to 'yes'). The
current version of WSMP is designed for the shared-address-space paradigm and
uses the Pthreads library.
Fig. 1. Ratios of the factorization time of MUMPS to that of WSMP with default options in both.
This graph reflects the relative factorization performance of the two packages that users are likely
to observe in their applications.
We give the serial and parallel sparse LU factorization performance of MUMPS
and WSMP, including fill-in and operation count statistics, in Table 4. Figure 1
shows bar graphs corresponding to the factorization time of MUMPS normalized
with respect to the factorization time of WSMP for each matrix. The relative performance
of WSMP improves on the Power 3 machine as it is able to extract a higher
Megaflops rate from it. WSMP factorization is, on average, about 2.3 times
faster than MUMPS on a single CPU and 2.8 times faster on four CPUs.
The relative performance of sparse LU factorization in MUMPS and WSMP
shown in Table 4 and Figure 1 corresponds to what the users are likely to observe
in their applications. However, the factorization times are affected by the preprocessing
of the matrices, which is different in MUMPS and WSMP. WSMP always
uses a row permutation to maximize the product of the magnitudes of the diagonal
entries of the matrix. MUMPS does not use such a row permutation for matrices
whose nonzero pattern has a significant symmetry to avoid destroying the structural
symmetry. Additionally, WSMP uses a nested-dissection-based fill-reducing ordering,
whereas MUMPS uses the approximate minimum degree (AMD) [Amestoy
et al. 1996] algorithm. In order to eliminate the impact of these differences on LU
factorization performance, we ran WSMP with AMD ordering and with a selective
row permutation logic similar to that in MUMPS.
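The row pre-permutation that maximizes the product of the diagonal magnitudes [Duff and Koster 2001] becomes, after taking logarithms, a linear assignment problem. The dense SciPy sketch below illustrates the idea; `max_product_row_perm` is an illustrative name, and MC64-style codes used by the solvers do the same work on the sparse structure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_product_row_perm(A):
    """Row permutation perm maximizing prod_i |A[perm[i], i]|.
    Maximizing the product of magnitudes is equivalent to minimizing
    the sum of -log|a_ij| over a perfect matching of rows to columns
    (dense sketch of the Duff-Koster preprocessing)."""
    with np.errstate(divide='ignore'):
        cost = -np.log(np.abs(A))       # -log|a_ij|; inf where a_ij = 0
    cost[np.isinf(cost)] = 1e12         # large finite penalty forbids zeros
    rows, cols = linear_sum_assignment(cost)
    perm = np.empty(A.shape[0], dtype=int)
    perm[cols] = rows                   # column i gets its matched row
    return perm                         # A[perm] has large diagonal entries
```

For a matrix with a zero diagonal entry, the permutation moves the large off-diagonal entries onto the diagonal, which is precisely the safeguard against excessive pivoting discussed later in this section.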
Figure 2 compares the relative serial and parallel factorization performance of
MUMPS with that of the modified version of WSMP. Although the ratio of MUMPS
factor time to WSMP factor time decreases for most matrices, the overall
averages remain more or less the same due to a significant increase in this ratio
for the matrix twotone. Since both codes are run with similar preprocessing and
use the same BLAS libraries for the floating point operations of the factorization
process, it would be fair to say that Figure 2 captures the advantage of WSMP's
unsymmetric-pattern multifrontal algorithm over MUMPS' symmetric-pattern multifrontal
algorithm. Other than the sparse factorization algorithm, there is only
one minor difference between the way MUMPS and WSMP are used to collect the
performance data for Figure 2. WSMP attempts a decomposition into a block-triangular
form, while MUMPS doesn't. However, other than lhr34c, this does not
play a significant role in determining the factorization performance on the matrices
in our test suite. Amestoy and Puglisi [Amestoy and Puglisi 2000] present a
mechanism to reduce the symmetrization overhead of the conventional tree-guided
multifrontal algorithm used in MUMPS. If incorporated into MUMPS, this mechanism
may reduce the performance gap between MUMPS and WSMP on some
matrices.
Next, we look at the role of the algorithm used to compute a fill-reducing ordering
on the structure of A + A T . Figure 3 compares the LU factorization times of
WSMP with AMD and nested dissection ordering. The bars show the factorization
time with AMD ordering normalized with respect to a unit factorization time with
nested-dissection ordering for each matrix. On the whole, factorization with AMD
ordering is roughly one and a half times slower than factorization with WSMP's
nested-dissection ordering.
Finally, we observe the impact of a row permutation to maximize the product of
the diagonal magnitudes [Duff and Koster 2001; Gupta and Ying 1999] on the factorization
performance of WSMP. Figure 4 shows the factorization times of WSMP
with this row permutation switched off normalized with respect to the factorization
times in the default mode when the row permutation is active. This figure
shows that factorization of about 40% of the matrices is unaffected by the row-
permutation option. These matrices are probably already diagonally-dominant or
close to being diagonally dominant. So the row order does not change and the same
matrix is factored, whether the row-permutation option is switched on or not. In
Fig. 2. Ratios of the factorization time of MUMPS to that of WSMP run with the same ordering
and row-prepermutation option as MUMPS. This graph enables a fair comparison of the numerical
factorization components of the two packages.
a few cases, there is a moderate decline, and in two cases, there is a significant
decline in performance as the row permutation option is switched off. On the other
hand there are a few cases in which there is a moderate advantage, and there is one
matrix, twotone, for which there is a significant advantage in switching off the row
permutation. This can happen when the original structure of the matrix is symmetric
or nearly symmetric and permuting the rows destroys the structural symmetry
(although, twotone is an exception and is quite unsymmetric). The extra fill-in and
Fig. 3. A comparison of the factorization time of WSMP run with AMD ordering and with its
default nested-dissection ordering. The bars correspond to the relative factorization time with
AMD ordering compared to the unit time with nested-dissection ordering. This graph shows the
role of ordering in WSMP.
computation resulting from the disruption of the original symmetric pattern may
more than offset the pivoting advantage, if any, gained by moving large entries to
the diagonal. On the whole, it appears that permuting the rows of the matrix to
maximize the product of the diagonal magnitudes is a useful safeguard against excessive fill-in
due to pivoting, such as in the case of raefsky4 and wang3old.
Fig. 4. A comparison of the factorization time of WSMP run without pre-permuting the rows to
move matrix entries with relatively large magnitudes to the diagonal. The bars correspond to the
relative factorization time without row pre-permutation compared to the unit time for the default
option.
5. A PRACTICAL COMPARISON OF MUMPS AND WSMP
In Section 2, we empirically demonstrated that MUMPS and WSMP contain the
fastest and the most robust sparse LU factorization codes among the currently
available general sparse solvers. In this section, we review the relative performance
of these two packages in further detail from the perspective of their use in a real
application.
In Table 4, we compared the factorization times of MUMPS and WSMP on one
and four CPUs of an RS6000 WH-2 node. A noteworthy observation from this
table (the T_4 column of WSMP) is that out of the 25 test cases, only six require
more than 5 seconds on a mere workstation and all but one of the matrices can be
factored in less than 11 seconds.
In real applications, although factorization time is usually of primary importance,
users are concerned about the total completion time, which includes analysis, fac-
torization, triangular solves, and iterative refinement. In Figure 5, we compare the
total time that MUMPS and WSMP take to solve our test systems of equations
from beginning to end on four CPUs of an RS6000 WH-2 node. For each matrix,
the WSMP completion time is considered to be one unit and all other times of
both the packages are measured relative to it. The analysis, factorization, and
solve times are denoted by bars of different shades. The solve time includes iterative
refinement steps necessary to bring the relative backward error down to the
order of magnitude of the machine precision.
Two new observations can be made from Figure 5. First, the analysis phase
of MUMPS is usually much shorter than that of WSMP. This is not surprising
because the AMD ordering algorithm used in MUMPS is much faster than the
nested-dissection algorithm used in WSMP. In addition, AMD yields a significant
amount of symbolic information about the factors that is available to MUMPS as
a byproduct of ordering. On the other hand, WSMP must perform a full separate
symbolic factorization step to compute the structures of the factors and the task
and data dependency DAGs. Secondly, the solve phase of MUMPS is significantly
slower than that of WSMP. This is mostly because of a slower triangular solve
in MUMPS, but also partly because MUMPS almost always requires two steps
of iterative refinement to reach the desired degree of accuracy, whereas a single
iterative refinement step suffices for WSMP for roughly half the problems in our
test suite.
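The refinement loop referred to above is standard: after the triangular solves, repeat x ← x + solve(b − Ax) until the componentwise relative backward error is at machine-precision level. A dense sketch follows; the single-precision inner solve in the usage example merely stands in for a factorization whose accuracy was compromised (e.g., by relaxed pivoting), and none of this is code from either package:

```python
import numpy as np

def refine(A, b, inner_solve, max_steps=5):
    """Iterative refinement: improve x until the componentwise
    relative backward error |r| / (|A||x| + |b|) reaches the order of
    magnitude of the machine precision."""
    eps = np.finfo(float).eps
    x = inner_solve(b)
    for _ in range(max_steps):
        r = b - A @ x                       # residual in working precision
        berr = np.max(np.abs(r) / (np.abs(A) @ np.abs(x) + np.abs(b)))
        if berr <= 4 * eps:                 # backward error is at eps level
            break
        x = x + inner_solve(r)              # reuse the existing factorization
    return x
```

With a single-precision inner solve, one or two refinement steps typically restore a double-precision backward error, which matches the behavior described above for both packages.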
A majority of applications of sparse solvers require repeated solutions of systems
with gradually changing values of the nonzero coefficients, but the same sparsity
pattern. In Figure 6, we compare the performance of MUMPS and WSMP for this
important practical scenario. We call the analysis routine of each package once,
and then solve 100 systems with the same sparsity pattern. We attempt to emulate
a real application situation as follows. After each iteration, 20% of the coefficients,
chosen randomly, are changed by a random amount between 1 and 20% of their value
from the previous iteration, 4% of the coefficients are similarly altered by at most
200%, and 1.6% of the coefficients are altered by at most 2000%. The total time
that each package spends in the analysis, factor, and solve phases is then used
to construct the bar chart in Figure 6. Since the speed of the factorization and
solve phases is relatively more important than that of the analysis phase, WSMP
performs significantly better, as expected.
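The coefficient-evolution protocol above can be emulated in a few lines. The text does not specify whether each change is an increase or a decrease, so a random sign is assumed here; the function name is illustrative:

```python
import numpy as np

def perturb_coefficients(vals, rng):
    """Emulate the experiment's coefficient evolution: after each
    solve, 20% of randomly chosen coefficients change by 1-20% of
    their value, 4% by at most 200%, and 1.6% by at most 2000%.
    The direction of each change is an assumption (random sign)."""
    v = vals.copy()
    n = len(v)
    for frac, lo, hi in [(0.20, 0.01, 0.20),    # 20% change by 1-20%
                         (0.04, 0.00, 2.00),    # 4% by at most 200%
                         (0.016, 0.00, 20.0)]:  # 1.6% by at most 2000%
        idx = rng.choice(n, size=int(frac * n), replace=False)
        sign = rng.choice([-1.0, 1.0], size=idx.size)
        v[idx] *= 1.0 + sign * rng.uniform(lo, hi, size=idx.size)
    return v
```

Since the three tiers may overlap, at most about a quarter of the coefficients change per iteration, while the rare 2000% alterations are what gradually erode the effectiveness of the value-based row permutation discussed next.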
Recall that both MUMPS (for very unsymmetric matrices only) and WSMP (for
all matrices) permute the rows of the coefficient matrix to maximize the product
of the diagonal entries. This permutation is based on the values of the coefficients,
which are evolving. Therefore, the row-permutation slowly loses its effectiveness as
the iterations proceed. For some matrices, for which the row permutation is not
Fig. 5. A comparison of the total time taken by WSMP and MUMPS to solve a system of equations
once on four CPUs of an RS6000 WH-2 node. All times are normalized with respect to the time
taken by WSMP. Furthermore, the time spent by both packages in the Analysis, Factorization,
and Solve (including iterative refinement) phases is denoted by regions with different shades.
Fig. 6. A comparison of the total time taken by WSMP and MUMPS to solve 100 sparse linear
systems with the same nonzero pattern and evolving coefficient values on a 4-CPU RS6000 WH-2
node.
useful anyway (see Figure 4), this does not affect the factorization time. However,
for others that rely on row permutation to reduce pivoting, the factorization time
may start increasing as the iterations proceed. WSMP internally keeps track of
growth in the time of the numerical phases over the iterations and may automatically
trigger a re-analysis when it is called to factor a coefficient matrix with a new
set of values. The frequency of the re-analysis is determined based on the analysis
time relative to the increase in the time of the numerical phases as the iterations
proceed. The re-analysis is completely transparent to the user. So although the
analysis phase was explicitly called only once in the experiment yielding the data
for Figure 6, the actual time reported includes multiple analyses for many matrices.
As Figure 6 shows, this does not have a detrimental effect on the overall performance
of WSMP because the re-analysis frequency is chosen to optimize the total
execution time. Note that a similar reanalysis strategy can also be implemented in
MUMPS and, due to its small analysis time, may reduce the total time for matrices
with unsymmetric nonzero patterns.
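The trade-off behind the re-analysis strategy can be captured in a toy cost model: re-run the (expensive) analysis once the accumulated slowdown of the numerical phase outweighs the one-time analysis cost. This is only a plausible reading of the strategy described above, not WSMP's actual internal rule, and all names and parameters here are illustrative:

```python
def total_time(n_iters, analyze_cost, base_factor, drift):
    """Toy model of repeated factorizations: factor time grows by
    `drift` per iteration since the last analysis (e.g., because the
    row pre-permutation loses effectiveness); re-analysis resets it.
    Trigger: re-analyze when the current per-iteration slowdown
    exceeds the analysis cost."""
    total, since = analyze_cost, 0        # pay for the initial analysis
    for _ in range(n_iters):
        t = base_factor + since * drift
        if since > 0 and t - base_factor > analyze_cost:
            total += analyze_cost         # transparent re-analysis
            since, t = 0, base_factor
        total += t
        since += 1
    return total
```

Even this crude trigger beats never re-analyzing whenever the drift is appreciable, illustrating why the transparent re-analysis does not hurt WSMP's totals in Figure 6.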
6. CONCLUDING REMARKS
In this paper, we show that recent sparse solvers have significantly improved the
state of the art of the direct solution of general sparse systems. For instance, compare
the first four columns of Table 2 with the second-to-last column of Table 4. This
comparison would readily reveal that a state-of-the-art solver running on today's
single-user workstation is easily an order of magnitude faster than the best solver-
workstation combination available prior to 1999 for solving sparse unsymmetric linear
systems. Moreover, the new solvers offer significant scalability of performance
that can be utilized to solve these problems even faster on parallel supercomputers
[Amestoy et al. 2001]. Therefore, it would be fair to conclude that recent years
have seen some remarkable advances in the general sparse direct solver algorithms
and software. As discussed in the paper, improvements in all phases of the sparse
direct solution process have contributed to these performance gains. These include
the use of a maximum bipartite matching to pre-permute large magnitude elements
to the matrix diagonal, nested-dissection based fill-reducing permutation applied
symmetrically to rows and columns, an unsymmetric pattern multifrontal algorithm
using minimal static task- and data-dependency DAGs, and implementing partial
pivoting using symmetric row and column interchanges.
ACKNOWLEDGMENTS
The author would like to thank Yanto Muliadi for help with conducting the experiments
to gather data for Table 2 and the anonymous referees for their detailed
comments that helped improve the presentation in the paper.
--R
An approximate minimum degree ordering algorithm.
Vectorization of a multiprocessor multifrontal code.
International Journal of Supercomputer Applications
Memory management issues in sparse multifrontal methods on multiprocessors.
A fully asynchronous multifrontal solver using distributed dynamic scheduling.
Multifrontal parallel distributed symmetric and unsymmetric solvers.
Analysis and comparison of two general sparse solvers for distributed memory computers.
An unsymmetrized multifrontal LU factorization.
SPOOLES: An object-oriented sparse matrix library
Robust ordering of sparse matrices using multisection.
Using postordering and static symbolic factorization for parallel sparse LU.
UMFPACK V3.
A combined unifrontal/multifrontal method for unsymmetric sparse matrices.
An unsymmetric-pattern multifrontal method for sparse LU factorization
A column approximate minimum degree ordering algorithm.
An asynchronous parallel supernodal algorithm for sparse Gaussian elimination.
Direct Methods for Sparse Matrices.
The design and use of algorithms for permuting large entries to the diagonal of sparse matrices.
On algorithms for permuting large entries to the diagonal of a sparse matrix.
The multifrontal solution of unsymmetric sets of linear equations.
Exploiting structural symmetry in a sparse partial pivoting code.
Nested dissection of a regular finite element mesh.
Computer Solution of Large Sparse Positive Definite Systems.
An implementation of Gaussian elimination with partial pivoting for sparse systems.
Elimination structures for unsymmetric sparse LU factors.
Direct linear solver for vector and parallel computers.
An experimental comparison of some direct sparse solver packages.
On the LU factorization of sequences of identically structured sparse matrices within a distributed memory environment.
A fast and high quality multilevel scheme for partitioning irregular graphs.
Making sparse Gaussian elimination scalable by static pivoting.
A scalable sparse direct solver using static pivoting.
Generalized nested dissection.
Modification of the minimum degree algorithm by multiple elimination.
The role of elimination trees in sparse factorization.
The multifrontal method for sparse matrix solution: Theory and practice.
Scalable parallel sparse LU factorization with a dynamical supernode pivoting approach in semiconductor device simulation.
PARDISO: A high-performance serial and parallel sparse linear solver in semiconductor device simulation
--TR
Direct methods for sparse matrices
The role of elimination trees in sparse factorization
The multifrontal method for sparse matrix solution
Elimination structures for unsymmetric sparse <italic>LU</italic> factors
Exploiting structural symmetry in a sparse partial pivoting code
Modification of the minimum-degree algorithm by multiple elimination
An Approximate Minimum Degree Ordering Algorithm
Fast and effective algorithms for graph partitioning and sparse-matrix ordering
An Unsymmetric-Pattern Multifrontal Method for Sparse LU Factorization
A combined unifrontal/multifrontal method for unsymmetric sparse matrices
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs
The Design and Use of Algorithms for Permuting Large Entries to the Diagonal of Sparse Matrices
An Asynchronous Parallel Supernodal Algorithm for Sparse Gaussian Elimination
Making sparse Gaussian elimination scalable by static pivoting
PARDISO
Computer Solution of Large Sparse Positive Definite Systems
S+
On Algorithms For Permuting Large Entries to the Diagonal of a Sparse Matrix
A Fully Asynchronous Multifrontal Solver Using Distributed Dynamic Scheduling
An Experimental Comparison of some Direct Sparse Solver Packages
On the lu factorization of sequences of identically structured sparse matrices within a distributed memory environment
--CTR
Marco Morandini , Paolo Mantegazza, Using dense storage to solve small sparse linear systems, ACM Transactions on Mathematical Software (TOMS), v.33 n.1, p.5-es, March 2007
Olaf Schenk , Klaus Gärtner, Solving unsymmetric sparse systems of linear equations with PARDISO, Future Generation Computer Systems, v.20 n.3, p.475-487, April 2004
Nicholas I. M. Gould , Jennifer A. Scott, A numerical evaluation of HSL packages for the direct solution of large sparse, symmetric linear systems of equations, ACM Transactions on Mathematical Software (TOMS), v.30 n.3, p.300-325, September 2004
Andreas Wchter , Chandu Visweswariah , Andrew R. Conn, Large-scale nonlinear optimization in circuit tuning, Future Generation Computer Systems, v.21 n.8, p.1251-1262, October 2005 | Sparse Matrix Factorization;Sparse LU Decomposition;multifrontal method;Parallel Sparse Solvers |
569607 | A Chromatic Symmetric Function in Noncommuting Variables. | Stanley (i>Advances in Math. 111, 1995, 166194) associated with a graph i>G a symmetric function i>XG which reduces to i>G's chromatic polynomial {\cal X}_{G}(n) under a certain specialization of variables. He then proved various theorems generalizing results about {\cal X}_{G}(n), as well as new ones that cannot be interpreted on the level of the chromatic polynomial. Unfortunately, i>XG does not satisfy a Deletion-Contraction Law which makes it difficult to apply the useful technique of induction. We introduce a symmetric function i>YG in noncommuting variables which does have such a law and specializes to i>XG when the variables are allowed to commute. This permits us to further generalize some of Stanley's theorems and prove them in a uniform and straightforward manner. Furthermore, we make some progress on the (3 1)-free Conjecture of Stanley and Stembridge (i>J. Combin Theory (i>A) J. 62, 1993, 261279). | Introduction
Let G be a finite graph with vertex set V = {v_1, v_2, ..., v_d} and edge set
E(G). We permit our graphs to have loops and multiple edges. Let X_G(n) be the
chromatic polynomial of G, i.e., the number of proper colorings κ : V → {1, 2, ..., n}.
(Proper means that vw ∈ E implies κ(v) ≠ κ(w).)
In [12, 13], R. P. Stanley introduced a symmetric function, X_G, which generalizes
X_G(n) as follows. Let x_1, x_2, x_3, ... be a countably infinite set of commuting
indeterminates. Now define

X_G = Σ_κ x_{κ(v_1)} x_{κ(v_2)} ··· x_{κ(v_d)},

where the sum ranges over all proper colorings κ : V → {1, 2, 3, ...} of G. It is clear
from the definition that X_G is a symmetric function, since permuting the colors of
a proper coloring leaves it proper, and is homogeneous of degree d. Also, the
specialization X_G(1^n), obtained by setting x_1 = ··· = x_n = 1 and x_i = 0 for i > n,
recovers the polynomial X_G(n).
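The specialization X_G(1^n) can be checked by brute force on small graphs. The sketch below is only an illustration (the function names are ours, and vertices are 0-indexed); it counts proper colorings directly:

```python
from itertools import product

def proper_colorings(edges, d, n):
    """Yield all proper colorings of vertices 0..d-1 with colors 1..n."""
    for kappa in product(range(1, n + 1), repeat=d):
        if all(kappa[u] != kappa[v] for u, v in edges):
            yield kappa

def chi(edges, d, n):
    """Evaluate the chromatic polynomial X_G(n) by direct enumeration."""
    return sum(1 for _ in proper_colorings(edges, d, n))

# P_3, the path v0 - v1 - v2, has chromatic polynomial n(n-1)^2.
path_edges = [(0, 1), (1, 2)]
assert chi(path_edges, 3, 4) == 4 * 3 * 3  # n = 4 gives 36 proper colorings
```

For the path this recovers n(n − 1)^2, the common chromatic polynomial of all trees on three vertices.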
Stanley used XG to generalize various results about the chromatic polynomial as
well as proving new theorems that only apply to the symmetric function. However,
there is a problem when trying to find a deletion-contraction law for XG . To see what
goes wrong, suppose that for e 2 E we let G n e and G=e denote G with the e deleted
and contracted, respectively. Then XG and XGne are homogeneous of degree d while
X G=e is homogeneous of degree d \Gamma 1 so there can be no linear relation involving all
three. We should note that Noble and Welsh [10] have a deletion contraction method
for computing XG equivalent to [12, Theorem 2.5]. However, it only works in the
larger category of vertex-weighted graphs and only for the expansion of XG in terms
of the power sum symmetric functions. Since we are interested in other bases as well,
we take a different approach.
In this paper we define an analogue, YG , of XG which is a symmetric function
in noncommuting variables. (Note that these symmetric functions are different from
the noncommutative symmetric functions studied by Gelfand and others, see [7] for
example.) The reason for not letting the variables commute is so that we can keep
track of the color which κ assigns to each vertex. This permits us to prove a Deletion-
Contraction Theorem for YG and use it to derive generalizations of results about XG
in a straightforward manner by induction, as well as make progress on a conjecture.
The rest of this paper is organized as follows. In the next section we begin with
some basic background about symmetric functions in noncommuting variables (see
also [5]). In Section 3 we define YG and derive some of its basic properties, including
the Deletion-Contraction Law. Connections with acyclic orientations are explored in
Section 4. The next three sections are devoted to making some progress on the (3+1)-
free Conjecture of Stanley and Stembridge [14]. Finally we end with some comments
and open questions.
2 Symmetric functions in noncommuting variables
Our noncommutative symmetric functions will be indexed by elements of the partition
lattice. We let Π_d denote the lattice of set partitions π of {1, 2, ..., d} ordered
by refinement. We write π = B_1/B_2/···/B_k, where the B_i are the blocks of π. The meet
(greatest lower bound) of the elements π and σ is denoted by π ∧ σ. We use 0̂
to denote the unique minimal element, and 1̂ for the unique maximal element.
For π ∈ Π_d we define λ(π) to be the integer partition of d whose parts are the
block sizes of π. Also, if λ(π) = (λ_1, λ_2, ..., λ_k), we will need the constants
λ(π)! = λ_1! λ_2! ··· λ_k!.
We now introduce the vector space for our symmetric functions. Let {x_1, x_2, x_3, ...}
be a set of noncommuting variables. We define the monomial symmetric functions

m_π = Σ x_{i_1} x_{i_2} ··· x_{i_d},

where the sum is over all sequences i_1, i_2, ..., i_d of positive integers such that
i_j = i_k if and only if j and k are in the same block of π. For example, we get

m_{13/2} = Σ_{i ≠ j} x_i x_j x_i

for the partition π = 13/2.
From the definition it is easy to see that letting the x_i commute transforms m_π
into a multiple of the ordinary monomial symmetric function m_{λ(π)}. The monomial
symmetric functions m_π, where π ∈ Π_d for some d, are linearly independent over ℂ, and we
call their span the algebra of symmetric functions in noncommuting variables.
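Over finitely many variables, m_π can be enumerated directly by representing each noncommutative monomial x_{i_1}···x_{i_d} as the tuple (i_1, ..., i_d). The following is only an illustrative sketch (the encoding and function names are ours):

```python
from itertools import product

def blocks_to_partition(seq):
    """Set partition of positions 0..d-1 induced by equal entries of seq."""
    first, out = {}, []
    for j, v in enumerate(seq):
        if v in first:
            out[first[v]].add(j)
        else:
            first[v] = len(out)
            out.append({j})
    return frozenset(frozenset(b) for b in out)

def m(pi, num_vars):
    """Monomial symmetric function m_pi truncated to variables x_1..x_num_vars.

    pi is a set partition of positions {0,...,d-1}; each noncommutative
    monomial x_{i_1}...x_{i_d} is represented by the tuple (i_1,...,i_d)."""
    d = sum(len(b) for b in pi)
    return {seq for seq in product(range(1, num_vars + 1), repeat=d)
            if blocks_to_partition(seq) == pi}

# pi = 13/2 (positions 0 and 2 equal, position 1 different), two variables:
pi = frozenset({frozenset({0, 2}), frozenset({1})})
assert m(pi, 2) == {(1, 2, 1), (2, 1, 2)}  # x1x2x1 + x2x1x2
```

Sorting each tuple simulates letting the variables commute, which collapses m_π onto a multiple of m_{λ(π)}.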
There are two other bases of this algebra that will interest us. One of them consists
of the power sum symmetric functions given by

p_π = Σ x_{i_1} x_{i_2} ··· x_{i_d},

where the sum is over all sequences i_1, ..., i_d of positive integers such that i_j = i_k
whenever j and k are both in the same block of π. The other basis contains the elementary
symmetric functions defined by

e_π = Σ x_{i_1} x_{i_2} ··· x_{i_d},

where the sum is over all sequences i_1, ..., i_d of positive integers such that
i_j ≠ i_k whenever j and k are both in the same block of π. As an illustration of these definitions,
we see that

p_{13/2} = Σ_{i, j} x_i x_j x_i

and that

e_{13/2} = Σ x_i x_j x_k, the sum over all i, j, k with i ≠ k.

Allowing the variables to commute transforms p_π into p_{λ(π)} and e_π into λ(π)! e_{λ(π)}. We
may also use these definitions to derive the change-of-basis formulae found in the
appendix of Doubilet's paper [3], which express each of these bases in terms of the
others; in them, μ(π, σ) is the Möbius function of Π_n.
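Since the change-of-basis formulae involve the Möbius function of Π_n, it may help to recall that μ is computed recursively from μ(π, π) = 1 and Σ_{π ≤ τ ≤ σ} μ(π, τ) = 0. A small sketch, workable only for small n (the naming is ours, not Doubilet's):

```python
from itertools import combinations

def partitions(elems):
    """All set partitions of the list elems, as frozensets of frozensets."""
    if not elems:
        yield frozenset()
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest) + 1):
        for combo in combinations(rest, k):
            block = frozenset([first, *combo])
            remaining = [x for x in rest if x not in combo]
            for p in partitions(remaining):
                yield p | {block}

def leq(pi, sigma):
    """Refinement order: pi <= sigma iff each block of pi lies inside a block of sigma."""
    return all(any(b <= c for c in sigma) for b in pi)

def mobius(pi, sigma):
    """Mobius function mu(pi, sigma) of the partition lattice, by recursion."""
    if pi == sigma:
        return 1
    if not leq(pi, sigma):
        return 0
    universe = sorted(x for b in sigma for x in b)
    return -sum(mobius(pi, tau) for tau in partitions(universe)
                if tau != sigma and leq(pi, tau) and leq(tau, sigma))

bottom = frozenset(frozenset({i}) for i in range(1, 4))
top = frozenset({frozenset({1, 2, 3})})
# mu(0-hat, 1-hat) in Pi_n is (-1)^(n-1) (n-1)!; for n = 3 this is 2.
assert mobius(bottom, top) == 2
```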
It should be clear that these functions are symmetric in the usual sense, i.e., they
are invariant under the usual symmetric group action on the variables. However, it will
be useful to define a new action of the symmetric group on the symmetric functions
in noncommuting variables which permutes the positions of the variables. For δ ∈ S_d
we define

δ ∘ m_π = m_{δ(π)},

where the action of δ ∈ S_d on a set partition of [d] is the obvious one, acting on
the elements of the blocks. It follows that for any δ this action induces a vector
space isomorphism, since it merely produces a permutation of the basis elements.
Alternatively, we can consider this action to be defined on the monomials, so that δ
permutes the positions of the variables, and extend linearly.

Utilizing the first characterization of this action, it follows straight from definitions
(2) and (3) that δ ∘ p_π = p_{δ(π)} and δ ∘ e_π = e_{δ(π)}.
3 Y_G, the noncommutative version

We begin by defining our main object of study, Y_G.

Definition 3.1 For any graph G with vertices labeled v_1, v_2, ..., v_d in a fixed order,
define

Y_G = Σ_κ x_{κ(v_1)} x_{κ(v_2)} ··· x_{κ(v_d)},

where again the sum is over all proper colorings κ of G, but the x_i are now noncommuting
variables.

As an example, for P_3, the path on three vertices with edge set {v_1v_2, v_2v_3}, we
can calculate

Y_{P_3} = m_{1/2/3} + m_{13/2}.

Note that if G has loops then this sum is empty and Y_G = 0. Also note that Y_G
depends not only on G, but also on the labeling of its vertices.
In this section we will prove some results about the expansion of Y_G in various
bases for the symmetric functions in noncommuting variables and show that it satisfies
a Deletion-Contraction Recursion. To obtain the expansion in terms of monomial
symmetric functions, note that any partition P of V induces a set partition π(P) of
[d] corresponding to the subscripts of the vertices. A partition P of V is stable if any
two adjacent vertices are in different blocks of P. (If G has a loop, there are no stable
partitions.) The next result follows directly from the definitions.
Proposition 3.2 We have

Y_G = Σ_P m_{π(P)},

where the sum is over all stable partitions, P, of V.
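Proposition 3.2 is easy to test computationally on small graphs by enumerating stable partitions. The sketch below (our conventions: vertices 0, ..., d−1) confirms the P_3 example, whose stable partitions are 1/2/3 and 13/2:

```python
from itertools import combinations

def partitions(elems):
    """All set partitions of the list elems, as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest) + 1):
        for combo in combinations(rest, k):
            remaining = [x for x in rest if x not in combo]
            for p in partitions(remaining):
                yield [[first, *combo]] + p

def stable_partitions(edges, d):
    """Partitions of {0,...,d-1} in which no block contains two adjacent vertices."""
    for p in partitions(list(range(d))):
        if all(not (u in b and v in b) for b in p for u, v in edges):
            yield p

# P_3 with edges v0v1 and v1v2: stable partitions are {0}/{1}/{2} and {0,2}/{1},
# matching Y_{P_3} = m_{1/2/3} + m_{13/2}.
path = [(0, 1), (1, 2)]
assert len(list(stable_partitions(path, 3))) == 2
```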
In order to show that Y_G satisfies a Deletion-Contraction Recurrence it is necessary
to have a distinguished edge. Most of the time we will want this edge to be between
the last two vertices in the fixed order, but to permit an arbitrary edge choice we will
define an action of the symmetric group S_d on a graph. For all δ ∈ S_d we let δ act on
the vertices of G by δ(v_i) = v_{δ(i)}. This creates an action on graphs given by
δ(G) = H, where H is just a relabeling of G.

Proposition 3.3 (Relabeling Proposition) For any graph G, we have

Y_{δ(G)} = δ ∘ Y_G,

where the same vertex order is used in both Y_G and Y_{δ(G)}.
Proof. Let δ(G) = H. We note that the action of δ produces a bijection between
the stable partitions of G and H. Utilizing the previous proposition and denoting the
stable partitions of G and H by P_G and P_H, respectively, we obtain the result.

Using the Relabeling Proposition allows us, without loss of generality, to choose a
labeling of G with the distinguished edge for deletion-contraction being e = v_{d−1}v_d. It
is this edge for which we will derive the basic recurrence for Y_G.
Definition 3.4 We define an operation called induction, ↑, on the monomial
x_{i_1} x_{i_2} ··· x_{i_{d−1}} by

x_{i_1} x_{i_2} ··· x_{i_{d−1}} ↑ = x_{i_1} x_{i_2} ··· x_{i_{d−1}} x_{i_{d−1}},

and extend this operation linearly.

Note that this function takes a symmetric function in noncommuting variables which
is homogeneous of degree d − 1 to one which is homogeneous of degree d. Context
will make it clear whether the word induction refers to this operation or to the proof
technique.

Sometimes we will also need to use induction on an edge other than v_{d−1}v_d, so we extend
the definition as follows. For k < l, we define an operation ↑^l_k on symmetric functions in
noncommuting variables which simply repeats the variable in the k-th position again
in the l-th position, and extend linearly.
Provided G has an edge which is not a loop, we will usually start by choosing a
labeling such that e = v_{d−1}v_d ∈ E. We also note here that if there is no such edge, then

Y_G = p_{1/2/···/d} if G = K̄_d, and Y_G = 0 if G has a loop, (8)

where K̄_d is the completely disconnected graph on d vertices. We note that contracting
an edge e can create multiple edges (if there are vertices adjacent to both of e's
endpoints) or loops (if e is part of a multiple edge), while contracting a loop deletes
it.
Proposition 3.5 (Deletion-Contraction Proposition) For e = v_{d−1}v_d,

Y_G = Y_{G\e} − Y_{G/e} ↑,

where in the contraction G/e the merged vertex is labeled v_{d−1}.

Proof. The proof is very similar to that of the Deletion-Contraction Property for
X_G(n). We consider the proper colorings of G \ e, which can be split disjointly into two
types:

1. proper colorings of G \ e with vertices v_{d−1} and v_d different colors;

2. proper colorings of G \ e with vertices v_{d−1} and v_d the same color.

Those of the first type clearly correspond to proper colorings of G. If κ is a coloring
of G \ e of the second type then (since the vertices v_{d−1} and v_d are the same color) we
have

x_{κ(v_1)} ··· x_{κ(v_{d−1})} x_{κ(v_d)} = x_{κ̃(v_1)} ··· x_{κ̃(v_{d−1})} ↑,

where κ̃ is the induced proper coloring of G/e. Thus we have Y_{G\e} = Y_G + Y_{G/e} ↑.

We note that if e is a repeated edge, then the proper colorings of G \ e are exactly
the same as those of G. The fact that there are no proper colorings of the second
type corresponds to the fact that G/e has at least one loop, and so it has no proper
colorings. Also note that because of our conventions for contraction we always have
|V(G/e)| = d − 1.
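The two-way split of colorings in this proof can be checked mechanically: after truncating to n colors, the monomials of Y_{G\e} are exactly the disjoint union of those of Y_G and those of Y_{G/e} ↑. A sketch of this check for a triangle, with our own encoding of monomials as tuples:

```python
from itertools import product

def Y(edges, d, n):
    """Proper colorings with colors 1..n, each recorded as the tuple
    (kappa(v_1),...,kappa(v_d)) -- a noncommutative monomial of Y_G
    truncated to n variables."""
    return {kappa for kappa in product(range(1, n + 1), repeat=d)
            if all(kappa[u] != kappa[v] for u, v in edges)}

def induct(monomials):
    """The induction operator: repeat the last variable, so a monomial
    of degree d-1 becomes one of degree d."""
    return {mono + (mono[-1],) for mono in monomials}

# Triangle with distinguished edge e joining the last two vertices.
G = [(0, 1), (0, 2), (1, 2)]
G_del = [(0, 1), (0, 2)]   # G \ e
G_con = [(0, 1)]           # G / e: the two endpoints of e are merged
n = 3
first = Y(G, 3, n)               # type 1: last two vertices colored differently
second = induct(Y(G_con, 2, n))  # type 2: last two vertices colored the same
assert Y(G_del, 3, n) == first | second and not first & second
```

The disjointness of the two pieces is exactly what makes Y_G = Y_{G\e} − Y_{G/e} ↑ hold coefficient by coefficient.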
It is easy to see how the operation of induction affects the monomial and power
sum symmetric functions. For π ∈ Π_{d−1}, let π + (d) denote the partition π
with d inserted into the block containing d − 1. From equations (1) and (2) it is easy
to see that

m_π ↑ = m_{π+(d)} and p_π ↑ = p_{π+(d)}.

With this notation we can now provide an example of the Deletion-Contraction Proposition
for P_3, where the vertices are labeled sequentially and the distinguished edge is
e = v_2v_3. It is not difficult to compute

Y_{P_3\e} = m_{1/2/3} + m_{13/2} + m_{1/23} and Y_{P_3/e} ↑ = m_{1/2} ↑ = m_{1/23}.

This gives us

Y_{P_3} = Y_{P_3\e} − Y_{P_3/e} ↑ = m_{1/2/3} + m_{13/2},

which agrees with our previous calculation.
We may use the Deletion-Contraction Proposition to provide inductive proofs for
noncommutative analogues of some results of Stanley [12].

Theorem 3.6 For any graph G,

Y_G = Σ_{S ⊆ E} (−1)^{|S|} p_{π(S)},

where π(S) denotes the partition of [d] associated with the vertex partition for the
connected components of the spanning subgraph of G induced by S.

Proof. We induct on the number of non-loops in E. If E consists only of n loops,
for n ≥ 0, then for all S ⊆ E(G) we will have π(S) = 1/2/···/d, so

Σ_{S ⊆ E} (−1)^{|S|} p_{π(S)} = Σ_{k=0}^{n} (−1)^k (n choose k) p_{1/2/···/d},

which is p_{1/2/···/d} if n = 0 and 0 if n > 0. This agrees with equation (8).
If G has edges which are not loops, we use the Relabeling Proposition to
obtain a labeling for G with e = v_{d−1}v_d ∈ E. From the Deletion-Contraction Proposition
we know that Y_G = Y_{G\e} − Y_{G/e} ↑, where by induction
~
It should be clear that
Hence it suffices to show that
~
To do so, we define a map \Theta : f ~
Then, because of our conventions for contraction, \Theta is a bijection. Clearly
-( ~
". Furthermore
and this completes the proof.
By letting the x_i commute, we get Stanley's Theorem 2.5 [12] as a corollary. Another
result which we can obtain by this method is Stanley's generalization of Whitney's
Broken Circuit Theorem.

A circuit is a closed walk with distinct vertices and edges. Note
that since we permit loops and multiple edges, we can have circuits of length 1 or 2. If we fix a
total order on E(G), a broken circuit is a circuit with its largest edge (with respect to
the total order) removed. Let B_G denote the broken circuit complex of G, which is the
set of all S ⊆ E(G) which do not contain a broken circuit. Whitney's Broken Circuit
Theorem states that the chromatic polynomial of a graph can be determined from its
broken circuit complex. Before we prove our version of this theorem, however, we will
need the following lemma, which appeared in the work of Blass and Sagan [1].
Lemma 3.7 For any non-loop e, there is a bijection between B_G and B_{G\e} ∪ B_{G/e},
where we take e to be the first edge of G in the total order on the edges.
Using this lemma, we can now obtain a characterization of Y_G in terms of the
broken circuit complex of G for any fixed total ordering on the edges.

Theorem 3.8 We have

Y_G = Σ_{S ∈ B_G} (−1)^{|S|} p_{π(S)},

where π(S) is as in Theorem 3.6.

Proof. We again induct on the number of non-loops in E(G). If the edge set
consists only of n loops, then for n > 0 every edge is a circuit, and so the empty
set is a broken circuit and B_G is empty. Thus we have

Σ_{S ∈ B_G} (−1)^{|S|} p_{π(S)} = p_{1/2/···/d} if n = 0, and 0 if n > 0,

which matches equation (8).
For e a non-loop, we consider G \ e and G/e and apply induction. From Lemma 3.7
and arguments as in the proof of Theorem 3.6, combining the sums for Y_{G\e} and
Y_{G/e} ↑ gives the result.
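The commutative consequence of Theorem 3.8 is Whitney's theorem: X_G(n) = Σ_{S ∈ B_G} (−1)^{|S|} n^{c(S)}, where c(S) is the number of components of the spanning subgraph (V, S). This is easy to test numerically; the following sketch (our function names, triangle example, 0-indexed vertices) compares it with direct enumeration:

```python
from itertools import combinations, product

def components(d, edge_set):
    """Number of connected components of the spanning subgraph (V, edge_set)."""
    parent = list(range(d))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_set:
        parent[find(u)] = find(v)
    return len({find(x) for x in range(d)})

def whitney_chi(edges, d, broken_circuits, n):
    """X_G(n) summed over subsets containing no broken circuit."""
    total = 0
    for k in range(len(edges) + 1):
        for S in combinations(edges, k):
            if not any(bc <= set(S) for bc in broken_circuits):
                total += (-1) ** k * n ** components(d, S)
    return total

# Triangle, edges ordered e0 < e1 < e2; its unique broken circuit is {e0, e1}.
tri = [(0, 1), (0, 2), (1, 2)]
bcs = [{(0, 1), (0, 2)}]
for n in range(1, 5):
    brute = sum(1 for kappa in product(range(n), repeat=3)
                if all(kappa[u] != kappa[v] for u, v in tri))
    assert whitney_chi(tri, 3, bcs, n) == brute == n * (n - 1) * (n - 2)
```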
4 Acyclic orientations
An orientation of G is a digraph obtained by assigning a direction to each of its
edges. The orientation is acyclic if it contains no directed cycles. A sink of an orientation
is a vertex v_0 such that every edge of G containing it is oriented towards v_0. There
are some interesting results which relate the chromatic polynomial of a graph to the
number of acyclic orientations of the graph and the sinks of these orientations. The
one which is the main motivation for this section is the following theorem of Greene
and Zaslavsky [8]. To state it, we adopt the convention that the coefficient of n^i in
X_G(n) is a_i.

Theorem 4.1 For any fixed vertex v_0, the number of acyclic orientations of G with
a unique sink at v_0 is |a_1|.
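Theorem 4.1 can be verified exhaustively on small graphs. The sketch below (our own helper names) enumerates the orientations of a graph, filters the acyclic ones, and counts those whose unique sink is a chosen v_0:

```python
from itertools import product

def is_acyclic(d, arcs):
    """Acyclicity test by repeatedly deleting source vertices."""
    arcs, verts = set(arcs), set(range(d))
    while verts:
        sources = {v for v in verts if all(t != v for _, t in arcs)}
        if not sources:
            return False
        verts -= sources
        arcs = {(s, t) for s, t in arcs if s in verts}
    return True

def unique_sink_count(edges, d, v0):
    """Number of acyclic orientations of G whose unique sink is v0."""
    count = 0
    for flips in product([False, True], repeat=len(edges)):
        arcs = [(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)]
        if is_acyclic(d, arcs):
            sinks = {v for v in range(d) if all(s != v for s, _ in arcs)}
            if sinks == {v0}:
                count += 1
    return count

# Triangle: X(n) = n^3 - 3n^2 + 2n, so |a_1| = 2 for every choice of v0.
tri = [(0, 1), (0, 2), (1, 2)]
assert [unique_sink_count(tri, 3, v0) for v0 in range(3)] == [2, 2, 2]
```

Note that the count is the same for every choice of v_0, as the theorem demands.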
The original proof of this theorem uses the theory of hyperplane arrangements.
For elementary bijective proofs, see [6]. Stanley [12] has a related result.
Theorem 4.2 If X_G = Σ_λ c_λ e_λ, then the number of acyclic orientations of G with j
sinks is given by Σ_{ℓ(λ)=j} c_λ, where ℓ(λ) denotes the number of parts of λ.

We can prove an analogue of this theorem in the noncommutative setting by using
techniques similar to his, but have not been able to do so using induction. We can,
however, inductively demonstrate a related result which, unlike Theorem 4.2, implies
Theorem 4.1. For this result we need a lemma from [6]. To state it, we denote the set
of acyclic orientations of G by A(G), and the set of acyclic orientations of G with a
unique sink at v 0 by A(G; v 0 ). For completeness, we provide a proof.
Lemma 4.3 For any fixed vertex v_0 and any edge e = uv_0, the map

D ↦ D \ e if D \ e ∈ A(G \ e; v_0), and D ↦ D/e otherwise,

is a bijection between A(G; v_0) and A(G \ e; v_0) ∪ A(G/e; v_0), where the vertex of G/e
formed by contracting e is labeled v_0.
Proof. We must first show that this map is well-defined, i.e., that in both cases we
obtain an acyclic orientation with unique sink at v_0. This is true in the first case by
definition. In case two, where D \ e ∉ A(G \ e; v_0), it
must be true that D \ e has
sinks both at u and at v_0 (since deleting a directed edge of D will neither disturb the
acyclicity of the orientation nor cause the sink at v_0 to be lost). Since u and v_0 are
the only sinks in D \ uv_0, the contraction must have a unique sink at v_0, and there will
be no cycles formed. Thus the orientation D/e will be in A(G/e; v_0) and this map is
well-defined.

To see that this is a bijection, we exhibit the inverse. This is obtained by simply
orienting all edges of G as in D \ uv_0 or D/uv_0, as appropriate, and then adding in the
oriented edge u → v_0. Clearly this map is also well-defined.
We can now apply deletion-contraction to obtain a noncommutative version of
Theorem 4.1.
Theorem 4.4 Let Y_G = Σ_π c_π e_π. Then for any fixed vertex v_0, the number of
acyclic orientations of G with a unique sink at v_0 is (d − 1)! c_{[d]}, where [d] denotes
the partition with the single block {1, 2, ..., d}.

Proof. We again induct on the number of non-loops in G. In the base case, if all the
edges of G are loops, then

Y_G = e_{1/2/···/d} if G has no edges, and Y_G = 0 if G has loops.

In either case c_{[d]} = 0 for d > 1 and there are no acyclic orientations with a unique
sink, while for d = 1 both quantities equal 1.
If G has non-loops, then by the Relabeling Proposition we may let e = v_{d−1}v_d.
We know that Y_G = Y_{G\e} − Y_{G/e} ↑. Since we will only be interested in the leading
coefficient, let

Y_{G\e} = Σ_σ a_σ e_σ and Y_{G/e} = Σ_σ c_σ e_σ.

By induction and Lemma 4.3, it is
enough to show that

(d − 1)! [e_{[d]}] Y_G = (d − 1)! a_{[d]} + (d − 2)! c_{[d−1]},

where [e_{[d]}] Y_G denotes the coefficient of e_{[d]} in Y_G.
Utilizing the change-of-basis formulae (6) and (7), as well as the fact that for π ∈
Π_{d−1} we have p_π ↑ = p_{π+(d)}, we obtain an expansion of Y_{G/e} ↑ in the elementary
basis. With this formula, we compute the coefficient of e_{[d]} in Y_{G/e} ↑. The only term which
contributes comes from c_{[d−1]} e_{[d−1]} ↑, which gives us a coefficient of
−c_{[d−1]}/(d − 1) on e_{[d]}. Thus, from Y_G = Y_{G\e} − Y_{G/e} ↑ we have that

[e_{[d]}] Y_G = a_{[d]} + c_{[d−1]}/(d − 1),

and multiplying by (d − 1)! completes the proof.
This result implies Theorem 4.1 since under the specialization X_G(1^n), each e_λ
becomes a polynomial divisible by n^{ℓ(λ)}, hence divisible by n^2 whenever ℓ(λ) ≥ 2. Thus the
only summand contributing to the linear term of X_G(n) is the one with λ = (d), and in that
case the coefficient has absolute value (d − 1)! c_{[d]}.
The next corollary follows easily from the previous result.

Corollary 4.5 If Y_G = Σ_π c_π e_π, then the number of acyclic orientations of G with
exactly one sink is d! c_{[d]}.
5 Inducing e_π
We now turn our attention to the expansion of Y_G in terms of the elementary symmetric
function basis. We recall that for any fixed π ∈ Π_d we use π + (d + 1) to denote the
partition of [d + 1] formed by inserting the element d + 1 into the block of π which
contains d. We will denote the block of π which contains d by B_π. We also let π/d+1
be the partition of [d + 1] formed by adding the singleton block {d + 1} to π.

It is necessary for us to understand the coefficients arising in e_π ↑ if we want to
understand the coefficients of Y_G which occur in its expansion in terms of the elementary
symmetric function basis. We have seen in equation (10) that the expression for
e_π ↑ is rather complicated. However, if the terms in the expression of e_π ↑ are grouped
properly, the coefficients in many of the groups will sum to zero. Specifically, we need
to combine the coefficients from set partitions which are of the same type (as integer
partitions) and whose blocks containing d + 1 have the same size. Keeping track of the
size of the block containing d + 1 allows us to use deletion-contraction repeatedly.
To do this formally, we introduce a bit of notation. Suppose α = (α_1, α_2, ..., α_m) is
a composition, i.e., an ordered integer partition. Let P(α) be the set of all partitions
π of [d + 1] such that
2.
3.
The proper grouping for the terms of e_π ↑ is given by the following lemma.

Lemma 5.1 For any π ∈ Π_d and any
composition α, we have
Proof. Fix π ∈ Π_d. By equation (10),
Hence we may express
where for any fixed -
We first note that if -=d we have the
simple computation shows that c
Similarly, and we can easily compute
We now fix loss of generality
we can let be the blocks of - which are contained
in B -+(d+1) . For notational convenience, we will also let jB
Finally, let fi denote the partition obtained from - by merging the blocks of
- which contain d and d are in the same block of - .
Replacing oe equation (11), we see that
Now for any B ' [d + 1] we will consider the sets
The nonempty L(B) partition the interval according to the content of
the block containing fd; d + 1g and so we may express
To compute the inner sum, we need to consider the following two cases.
Case 1) For some k ? q strictly contained in a block of -
In this case, we see that each non-empty L(B) forms a non-trivial cross-section of a
product of partition lattices, and so for this case
Thus these - will not contribute to
Case 2) For all k ? q is a block of - 1). So, by abuse of notation,
we can write Also in this case, we can
assume q - 0, since we have already computed this sum when - 1). Then
we will showjBj \Gamma 1
Indeed, it is easy to see that if and so this part
is clear. Also, if
again and
Otherwise, L(B) again forms a non-trivial cross-section
of a product of partition lattices, and again gives us no net contribution to the
sum.
We notice that since fd; d B, the second case in (12) will only occur if
Adding up all these contributing terms gives us
In order to compute the sum over all - 2 P (ff), it will be convenient to consider
all possible orderings for the block of - containing d. So for
The sequence (B the ordered set partition - . We also define
ae
so
Thus we can see that
To obtain the sum over all - 2 P (ff) we need to sum over all P (ff; 2.
However, if we let k r be the number of blocks which have size r,
then in the sum over all P (ff; j), each - 2 P (ff) appears \Pi m+1
times. Combining
all this information, we see that
Hence it suffices to show that
Using the multinomial recurrence we have,
and so we need only show that
However, we may express
6 Some e-positivity results

We wish to use Lemma 5.1 to prove some positivity theorems about Y_G's expansion
in the elementary symmetric function basis. If the coefficients of the elementary
symmetric functions in this expansion are all non-negative, then we say that Y_G is
e-positive. Unfortunately, even for some of the simplest graphs, Y_G is usually not
e-positive. The only graphs which are obviously e-positive are the complete graphs
on n vertices and their complements, for which we have

Y_{K_n} = e_{[n]} and Y_{K̄_n} = e_{1/2/···/n}.

Even paths, with the vertices labeled sequentially, are not e-positive, as one sees
by expanding Y_{P_3} in the e_π basis. However, in this example we can
see that while Y_{P_3} is not e-positive, if we identify all the terms having the same type
and the same size block containing 3, the sum will be non-negative for each of these
sets.
This observation, along with the proof of the previous lemma, inspires us to define
equivalence classes reflecting the sets P(α). If the block of σ containing i is B_{σ,i} and
the block of π containing i is B_{π,i}, we define σ ≡_i π when λ(σ) = λ(π) and
|B_{σ,i}| = |B_{π,i}|, and extend this definition to the e_σ linearly.
We let (π) and e_{(π)} denote the equivalence classes of π and e_π, respectively. Taking
formal sums of these equivalence classes allows us to write expressions such as

Σ_σ c_σ e_σ ≡_i Σ_{(π)} c_{(π)} e_{(π)}, where c_{(π)} = Σ_{σ ∈ (π)} c_σ.

We will refer to this equivalence relation as congruence modulo i.

Using this notation, we obtain a non-negative expansion of Y_{P_3} modulo 3. We
will say that a labeled graph G (and similarly Y_G) is (e)-positive if all the c_{(π)} are
non-negative for some labeling of G and suitably chosen congruence. We notice that
the expansion of Y_G for a labeled graph may have all non-negative amalgamated
coefficients for congruence modulo i, but not for congruence modulo j. However, if
a different labeling for an (e)-positive graph is chosen, then we can always find a
corresponding congruence class to again see (e)-positivity. This should be clear from
the Relabeling Proposition.
We now turn our attention to showing that paths, cycles, and complete graphs
with one edge deleted are all (e)-positive. We begin with a few more preliminary
results about this congruence relation and how it affects our induction of e_π.
We note that in the proof of Lemma 5.1, the roles played by the elements d and
d + 1 are essentially interchangeable. That is, if we let ~P(α) be the set of all partitions
π of [d + 1] such that
2.
3.
and let ~π be the partition π ∈ Π_d with d replaced by d + 1, then the same proof will
show the corresponding formula.
Note that here ~π + (d) is the partition obtained from ~π by inserting the element d
into the block of ~π containing d + 1. This allows us to state a corollary in terms of the
congruence relationship just defined.
Corollary 6.1 For any π ∈ Π_d, we have
and
e (~-=d) \Gammab
The next lemma simply verifies that the induction operation respects the congruence
relation; it follows immediately from equation (10) or the previous corollary.

Lemma 6.2 If e_σ ≡_i e_π, then e_σ ↑ ≡_i e_π ↑.

From this we can extend induction to congruence classes in a well-defined manner.
In order to use induction to prove the (e)-positivity of a graph G, we will usually
try to delete a set of edges which will isolate either a single vertex or a complete graph
from G, in the hope of obtaining a simpler (e)-positive graph. In order to see how this
procedure will affect Y_G, we use the following lemma.

Lemma 6.3 Given a graph G on d vertices, let H = G ∪ K_m, where the vertices in K_m
are labeled v_{d+1}, ..., v_{d+m}. If Y_G = Σ_σ c_σ e_σ, then

Y_H = Σ_σ c_σ e_{σ/d+1,d+2,...,d+m}.

Proof. From the labeling of H we have

Y_H = Y_G · e_{[m]} = Σ_σ c_σ e_σ e_{[m]},

and the result follows.

This result suggests we use the natural notation G ∪ v_{d+1} for the graph G ∪ {v_{d+1}}.
We are now in a position to prove the (e)-positivity of paths.

Proposition 6.4 For all d ≥ 1, Y_{P_d} is (e)-positive.

Proof. We proceed by induction, having labeled P_d so that the edge set is
E(P_d) = {v_1v_2, v_2v_3, ..., v_{d−1}v_d}. For d = 1 and d = 2 the proposition is clearly
true.
So we assume by induction that

Y_{P_d} ≡_d Σ_{(π) ⊆ Π_d} c_{(π)} e_{(π)},

where c_{(π)} ≥ 0 for all (π). From the Deletion-Contraction Recurrence applied to
e = v_d v_{d+1}, together with Corollary 6.1 and Lemma 6.3, we see that the amalgamated
coefficients in the expansion of Y_{P_{d+1}} are non-negative combinations of the c_{(π)}.
Since we know that c_{(π)} ≥ 0 and |B_π| ≥ 1 for all π, this completes the induction
step and the proof.
In the commutative context we will say that the symmetric function X_G is e-positive
if all the coefficients in its expansion in terms of the elementary symmetric functions
are non-negative. Clearly (e)-positivity results for Y_G specialize to e-positivity results
for X_G.

Corollary 6.5 X_{P_d} is e-positive.
One would expect the (e)-expansions for cycles and paths to be related, as is shown
by the next proposition. For labeling purposes, however, we first need a lemma which
follows easily from the Relabeling Proposition.
Lemma 6.6 If
Proposition 6.7 For all d ≥ 1, the (e)-expansion of Y_{C_{d+1}} is determined by that of
Y_{P_d}, where we have labeled the graphs so that E(P_d) = {v_1v_2, ..., v_{d−1}v_d} and
E(C_{d+1}) = {v_1v_2, ..., v_dv_{d+1}, v_1v_{d+1}}.
Proof. We proceed by induction on d. For small d the proposition holds by direct
computation.
For the induction step, we assume that
and also that
Y C d j d
c (-) e
We notice that C_{d+1} \ v_1v_{d+1} does not have the standard labeling for
paths. But if we let γ be a permutation which corrects the labeling, then we can use the
Deletion-Contraction Recurrence to get
However, since d + 1 is a fixed point for γ, Lemma 6.6 allows us to deduce that
Y C d+1
In the proof of Proposition 6.4 we saw that
Combining these two equations gives
Y C d+1
The demonstration of Proposition 6.4 also showed us that
Y P d
c (-)
c (-)
Applying Corollary 6.1 and Lemma 6.3 yields
Y P d
"j d+1
c (-)
c (-)
c (-)
c (-)
and
Y P d =v d+1
c (-)
c (-)
By the induction hypothesis,
Y C d
c (-) e
c (-)
Plugging these expressions for Y_{P_d ∪ v_{d+1}} and Y_{P_d} ↑ into equation (13), grouping
the terms according to type, and simplifying gives
Y C d+1
c (-)
c (-)
This corresponds to the expression in equation (14) for Y_{P_d} in exactly the desired
manner, and so we are done.
From the previous proposition, and the fact that Y_{C_1} = 0 since C_1 is a loop, we
get two immediate corollaries.
Proposition 6.8 For all d ≥ 1, Y_{C_d} is (e)-positive.

Corollary 6.9 For all d ≥ 1, X_{C_d} is e-positive.
We are also able to use our recurrence to show the (e)-positivity of complete graphs
with one edge removed.

Proposition 6.10 For d ≥ 2, Y_{K_d \ e} is (e)-positive.

Proof. Consider the complete graph K_d and apply deletion-contraction to the edge
e = v_{d−1}v_d. Together with Corollary 6.1 this will give us an explicit expansion, and
simplifying gives the result.

This also immediately specializes.

Corollary 6.11 For d ≥ 2, X_{K_d \ e} is e-positive.
7 The (3+1)-free Conjecture

One of our original goals in setting up this inductive machinery was to make progress on
the (3+1)-free Conjecture of Stanley and Stembridge, which we now state. Let a+b
be the poset which is a disjoint union of an a-element chain and a b-element chain. A
poset P is said to be (a+b)-free if it contains no induced subposet isomorphic to a+b.
Let G(P) denote the incomparability graph of P, whose vertices are the elements of P,
with an edge uv whenever u and v are incomparable in P. The (3+1)-free Conjecture
of Stanley and Stembridge [14] states:

Conjecture 7.1 If P is (3+1)-free, then X_{G(P)} is e-positive.

Gasharov [4] has demonstrated the weaker result that X_{G(P)} is s-positive, where s
refers to the Schur functions.
A subset of the (3+1)-free graphs is the class of indifference graphs. They are
characterized [13] as the graphs with vertices v_1, ..., v_d and edges

{v_kv_l : k and l both belong to some [k', l'] ∈ C},

where C is a collection of intervals [k', l'] ⊆ [d]. We note that without
loss of generality, we can assume no interval in the collection is properly contained
in any other. These correspond to incomparability graphs of posets which are both
(3+1)-free and (2+2)-free.

Indifference graphs have a nice inductive structure that should make it possible to
apply our deletion-contraction techniques. Although we have not been able to do this
for the full family, we are able to resolve a special case. For any composition α = (α_1, ..., α_m)
of n, let ~α_j = α_1 + α_2 + ··· + α_j − (j − 1). A K_α-chain is the indifference graph using the
collection of intervals {[1, ~α_1], [~α_1, ~α_2], ..., [~α_{m−1}, ~α_m]}. This is just a string of complete
graphs, whose sizes are given by the parts of α, which are attached to one another
sequentially at single vertices. We notice that the K_α-chain for α = (α_1, ..., α_m)
can be obtained from the K_{α'}-chain for α' = (α_1, ..., α_{m−1}) by attaching the graph K_{α_m}
to its last vertex.
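For concreteness, the interval description above can be implemented in a few lines. The sketch below (our own naming; vertices are 1-indexed as in the text) builds the edge set of the indifference graph of a collection of intervals, and of a K_α-chain:

```python
def indifference_graph(intervals):
    """Vertices 1..d and edges {u,v}: u < v with u, v in a common interval [k, l]."""
    d = max(l for _, l in intervals)
    edges = {(u, v) for k, l in intervals
             for u in range(k, l + 1) for v in range(u + 1, l + 1)}
    return d, sorted(edges)

def k_alpha_chain(alpha):
    """The K_alpha-chain: consecutive complete graphs, of sizes given by the
    composition alpha, glued sequentially at single vertices."""
    intervals, start = [], 1
    for part in alpha:
        intervals.append((start, start + part - 1))
        start += part - 1
    return indifference_graph(intervals)

# alpha = (3, 2): a K_3 and a K_2 sharing the vertex v_3.
d, E = k_alpha_chain((3, 2))
assert d == 4 and E == [(1, 2), (1, 3), (2, 3), (3, 4)]
```

In particular k_alpha_chain((2, 2, ..., 2)) returns a path, matching the observation below that paths arise by repeatedly attaching a single new vertex.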
We will be able to handle this type of attachment for any graph G with vertices
v_1, ..., v_d. Hence, we define G + K_m to be the graph with

V(G + K_m) = V(G) ∪ {v_{d+1}, ..., v_{d+m}}

and

E(G + K_m) = E(G) ∪ {v_kv_l : d ≤ k < l ≤ d + m}.
Using deletion-contraction techniques, we are able to exhibit the relationship between
the (e)-expansion of G + K_m and the (e)-expansion of G. However, we will also need
some more notation. For i ≥ 0, we let π + i denote the partition given by π with
i additional elements added to B_π. This is in contrast to π + (i),
which denotes the partition given by π with the element i inserted into B_π.
We denote the falling factorial by

(b)_i = b(b − 1) ··· (b − i + 1)

and the rising factorial by

⟨b⟩_i = b(b + 1) ··· (b + i − 1).
We begin studying the behavior of Y_{G+K_m} ↑^{d+j}_d with two lemmas, the first of which
follows easily from equation (10).

Lemma 7.2 For any graph G on d vertices and 1 ≤ j ≤ m,
Lemma 7.3 If G is a graph on d vertices with

Y_G ≡ Σ_{(π) ⊆ Π_d} c_{(π)} e_{(π)},

then
\Theta e
(b)
Proof. We prove the lemma by induction on m. The case m = 1 is merely a
restatement of Corollary 6.1. So we may assume this lemma is true for Y_{G+K_m} ↑^{d+m}_d,
and proceed to prove it for Y_{G+K_{m+1}} ↑^{d+m+1}_d.
From Lemma 7.2, it follows that for 1 ≤ j ≤ m, the terms Y_{G+K_m} ↑^{d+j}_d all agree
modulo the congruence. Now, from G + K_{m+1} we may delete the edges v_{d+i}v_{d+m+1}
for 1 ≤ i ≤ m, collecting all the terms Y_G ↑^{d+j}_d, to obtain
d \GammamY G+Km " d+m
d
d \GammamY G+Km " d+m
From this point on, we need only concern ourselves with the clerical details, making
sure that everything matches up properly. We can see from Lemma 5.1, Lemma 6.3,
and the original hypothesis on Y_G, that
c (-)
where
Similarly, the induction hypothesis shows
(b)
where
Simplifying the terms and combining both equations (15) and (16) gives
c (-)
(b)
(b) i+2
Note that modulo d +m
So by shifting indices and simplifying, we obtain
c (-) hmi i
\Theta e
(b)
which completes the induction step and the proof.
This lemma is useful because it will help us to find an explicit formula for Y_{G+K_{m+1}}
in terms of Y_G. Once this formula is in hand, it will be easy to verify that if G is
(e)-positive, then so is G + K_{m+1}. To complete the induction step in establishing this
formula, we will need the following observation, which follows from equation (10).

Lemma 7.4 For any graph G on d vertices and σ ∈ S_d,

We now give the formula for Y_{G+K_{m+1}} in terms of Y_G.
Lemma 7.5 If

Y_G ≡ Σ_{(π) ⊆ Π_d} c_{(π)} e_{(π)},

then
(-)'\Pi d
(b)
\Theta (b \Gamma m+ i)e (-)
Proof. We induct on m. If m = 0, a direct computation of Y_{G+K_1} shows that it has
the stated form, which verifies the base case.
To begin the induction step, we repeatedly utilize the Deletion-Contraction Recurrence
to delete the edges v_{d+i}v_{d+m+1} for 1 ≤ i ≤ m:
d+m \GammaY G+Km+1 " d+m+1
Note that we are able to combine all the terms from Y_{G+K_{m+1}} ↑^{d+m+1}
using Lemma 7.4, since in these cases the necessary permutation exists.
We now expand each of the terms in equation (17). For the first, using Lemma 6.3,
(b)
where
For the second term, using Corollary 6.1, we have
c (-) hmi i+1
(b)
where
And finally, using Lemma 7.3,
c (-) hmi i
(b)
where
Grouping the terms appropriately and shifting indices where needed gives
c (-) hmi i
(b)
This completes the induction step and the proof.
Examining this lemma, we can see that in Y_{G+K_{m+1}} we have the same sign on
all the coefficients as we had in Y_G, with the possible exception of certain terms
where cancellation might occur. This means that in the expression for Y_{G+K_{m+1}} as a sum over
congruence classes modulo d + m, we can combine the coefficients on these terms. And so, upon
simplification, the coefficient on e_{(π+i/d+i+1,...,d+m)} is a signed sum of fractions built
from the factorials (b)_i and the coefficients c_{(π)}, where c_{(π)} is the coefficient on e_{(π)} in Y_G.
Adding these fractions by finding a common denominator, we see that the potentially
negative part is actually zero, which gives us the next result.

Theorem 7.6 If Y_G is (e)-positive, then Y_{G+K_m} is also (e)-positive.
Notice that Proposition 6.4 follows easily from Theorem 7.6 and induction, since
for paths we have P_{d+1} = P_d + K_1. As a more general result we have the following corollary.

Corollary 7.7 If G is a K_α-chain, then Y_G is (e)-positive. Hence X_G is also e-positive.
We can also describe another class of (e)-positive graphs. We define a diamond
to be the indifference graph on the collection of intervals {[1, 3], [2, 4]}. So a diamond
consists of two K_3's sharing a common edge. Then the following holds.
Theorem 7.8 Let D be a diamond. If G is (e)-positive, then so is G+D.
Proof. The proof of this result is analogous to the proof for the case of G +Km ,
and so is omitted.
8 Comments and open questions
We will end with some questions raised by this work. We hope they will stimulate
future research.
(a) Obviously it would be desirable to find a way to use deletion-contraction to
prove that indifference graphs are e-positive (or even demonstrate the full (3+1)-Free
Conjecture). The reason that it becomes difficult to deal with the case where the last
two complete graphs overlap in more than one vertex is because one has to keep track
of all ways the intersection could be distributed over the block sizes of an e_λ. Not only
is the bookkeeping complicated, but it becomes harder to find groups of coefficients
that will sum to zero.
Another possible approach is to note that if G is an indifference graph, then for
the edge e (where [k, d] is the last interval) both G\e and G/e are indifference
graphs. Furthermore G\e is obtained from G/e by attaching a K_{d−k} so that it intersects
in all but one vertex with the final K_{d−k} of G/e. Unfortunately, the relationship
between the coefficients in the (e)-expansion of Y_{G\e} and Y_{G/e} does not seem to be
very simple.
(b) Notice that if T is a tree on d vertices, we have X_T …
Since X_G is a generalization of the chromatic polynomial, it might be reasonable to suppose that
it also is constant on trees with d vertices. This is far from the case! In fact, it has
been verified for trees with up to … vertices that X_T distinguishes them.
This leads to the following question posed by Stanley.
Question 8.1 ([12]) Does X T distinguish among non-isomorphic trees?
We should note that the answer to this question is definitely "yes" for Y T . In fact
more is true.
Proposition 8.2 The function YG distinguishes among all graphs G with no loops or
multiple edges.
Proof. We know from Proposition 3.2 that YG is determined by the stable partitions
P of G. Construct the graph H with vertex set V and edge
set {v_i v_j : there exists a λ(P) such that v_i and v_j are in the same block of λ(P)}.
Since each λ(P) comes from a stable partition P of G, v_i and v_j are in the same block of
some λ(P) if and only if there is no edge v_i v_j in G. Hence the graph H constructed
is the (edge) complement of G and so we can recover G from H.
Of course we can have YG ≠ YH while YG and YH are congruent, so a first step towards answering
Stanley's question might be to see if Y_T still distinguishes trees under congruence. It
seems reasonable to investigate this using our deletion-contraction techniques,
since trees are reconstructible from their leaf-deleted subgraphs [9]. We proceed in the
following manner.
If T_1 and T_2 are non-isomorphic trees, then by the reconstructibility of these
trees there must exist labelings of the trees so that v_d is a leaf of T_1,
ṽ_d is a leaf of T_2, and T_1 − v_d ≇ T_2 − ṽ_d. By induction
we will have Y_{T_1 − v_d} ≢ Y_{T_2 − ṽ_d}. Furthermore, our recurrence gives …
One now needs to investigate what sort of cancelation occurs to see if these two
differences could be equal or not. Concentrating on a term of a particular type could
well be the key.
(c) It would be very interesting to develop a wider theory of symmetric functions
in noncommuting variables. The only relevant paper of which we are aware is
Doubilet's [3] where he talks more generally about functions indexed by set partitions,
but not the noncommutative case per se. His work is dedicated to finding the change
of basis formulae between 5 bases (the three we have mentioned, the complete homogeneous
basis, and the so-called forgotten basis which he introduced). However, there
does not as yet seem to be any connection to representation theory. In particular,
there is no known analog of the Schur functions in this setting.
Note added in proof: Rosas and Sagan have recently come up with a definition of
a Schur function in noncommuting variables and are investigating its properties.
--R
Bijective proofs of two broken circuit theorems
On the Foundations of Combinatorial Theory.
Incomparability graphs of (3
Noncommutative symmetric functions and the chromatic polyno- mial
Sinks in acyclic orientations of graphs
On the interpretation of Whitney numbers through arrangements of hyperplanes
The reconstruction of a tree from its maximal subtrees
A weighted graph polynomial from chromatic invariants of knots
Acyclic orientations of graphs
A symmetric function generalization of the chromatic polynomial of a graph
Graph colorings and related symmetric functions: Ideas and appli- cations: A description of results
On immanants of Jacobi-Trudi matrices and permutations with restricted position
A logical expansion in mathematics
569610 | Orthogonal Matroids. | The notion of matroid has been generalized to Coxeter matroid by Gelfand and Serganova. To each pair (W, P) consisting of a finite irreducible Coxeter group W and parabolic subgroup P is associated a collection of objects called Coxeter matroids. The (ordinary) matroids are the special case where W is the symmetric group (the An case) and P is a maximal parabolic subgroup. This generalization of matroid introduces interesting combinatorial structures corresponding to each of the finite Coxeter groups. Borovik, Gelfand and White began an investigation of the Bn case, called symplectic matroids. This paper initiates the study of the Dn case, called orthogonal matroids. The main result (Theorem 2) gives three characterizations of orthogonal matroid: algebraic, geometric, and combinatorial. This result relies on a combinatorial description of the Bruhat order on Dn (Theorem 1). The representation of orthogonal matroids by way of totally isotropic subspaces of a classical orthogonal space (Theorem 5) justifies the terminology orthogonal matroid. | Introduction.
Matroids, introduced by Hassler Whitney in 1935, are now a fundamental tool
in combinatorics with a wide range of applications ranging from the geometry of
Grassmannians to combinatorial optimization. In 1987 Gelfand and Serganova [9]
[10] generalized the matroid concept to the notion of Coxeter matroid. To each finite
Coxeter group W and parabolic subgroup P is associated a family of objects called
Coxeter matroids. Ordinary matroids correspond to the case where W is the symmetric
group (the An case) and P is a maximal parabolic subgroup.
This generalization of matroid introduces interesting combinatorial structures corresponding
to each of the finite Coxeter groups. Borovik, Gelfand and White [2] began
an investigation of the Bn case, called symplectic matroids. The term "symplectic"
comes from examples constructed from symplectic geometries. This paper initiates
the study of the Dn case, called orthogonal matroids because of examples constructed
from orthogonal geometries.
The first goal of this paper is to give three characterizations of orthogonal matroids:
algebraic, geometric and combinatorial. This is done in Sections 3, 4 and 6 (Theorem 2),
after preliminary results in Section 2 concerning the family Dn of Coxeter groups.
(The work of Neil White on this paper was partially supported at UMIST under funding of EPSRC
Visiting Fellowship GR/M24707.)
The algebraic description is in terms of left cosets of a parabolic subgroup P in
Dn. The Bruhat order on Dn plays a central role in the definition. The geometric
description is in terms of a polytope obtained as the convex hull of a subset of the
orbit of a point in R^n under the action of Dn as a Euclidean reflection group. The
roots of Dn play a central role in the definition. The combinatorial description is
in terms of k-element subsets of a certain set and flags of such subsets. The Gale
order plays a central role in the definition. Section 5 gives a precise description of the
Bruhat order on both Bn and Dn in terms of the Gale order on the corresponding
flags (Theorem 1). A fourth characterization, in terms of oriflammes, holds for an
important special case (Theorem 3 of Section 6).
Section 7 concerns the relationship between symplectic and orthogonal matroids.
Every orthogonal matroid is a symplectic matroid. Necessary and sufficient conditions
are provided for when a Lagrangian symplectic matroid is orthogonal (Theorem 4). More
generally, the question remains open.
Section 8 concerns the representation of orthogonal matroids and, in particular,
justifies the term orthogonal. Just as ordinary matroids arise from subspaces of
projective spaces, symplectic and orthogonal matroids arise from totally isotropic
subspaces of symplectic and orthogonal spaces, respectively (Theorem 5).
2. The Coxeter group Dn
We give three descriptions of the family Dn of Coxeter groups: (1) in terms of
generators and relations; (2) as a permutation group; and (3) as a reflection group in
Euclidean space.
Presentation in terms of generators and relations. A Coxeter group W is
defined in terms of a finite set S of generators with the presentation

W = ⟨ s ∈ S : (ss′)^{m_{ss′}} = 1 ⟩,

where m_{ss′} is the order of ss′, and m_{ss} = 1 (hence each generator is an involution).
The cardinality of S is called the rank of W. The diagram of W is the graph where
each generator is represented by a node, and nodes s and s′ are joined by an edge
labeled m_{ss′} whenever m_{ss′} ≥ 3. By convention, the label is omitted if m_{ss′} = 3. A
Coxeter system is irreducible if its diagram is a connected graph. A reducible Coxeter
group is the direct product of the Coxeter groups corresponding to the connected
components of its diagram. The finite irreducible Coxeter groups have been completely
classified and are usually denoted An (n ≥ 1), Bn (= Cn) (n ≥ 2), Dn (n ≥ 4),
E6, E7, E8, F4, H3, H4 and I2(m), the subscript denoting the
rank. The diagrams of the families An, Bn = Cn and Dn appear in Figure 1, these
being the families of concern in this paper.
Permutation representation. Throughout the paper we will use the notation
[n] = {1, 2, …, n} and [n]* = {1*, 2*, …, n*}. As a permutation group, An is isomorphic
to the symmetric group on the set [n + 1], with the standard generators being
the adjacent transpositions.
Fig. 1. Diagrams of three Coxeter families.
Likewise Bn is isomorphic to the permutation group acting on [n] ∪ [n]* generated by
the involutions

We will use the convention i** = i. Call a subset X ⊆ [n] ∪ [n]*
admissible if X does not contain both i and i* for any i. The
group Bn acts simply transitively on ordered, admissible n-tuples; hence |Bn| = 2^n · n!.
The group Dn is isomorphic to the permutation group acting on [n] ∪ [n]* and
generated by the involutions

Note that Dn is a subgroup of Bn. More precisely, Dn consists of all the even permutations
in Bn; hence |Dn| = 2^{n−1} · n!.
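These counts are easy to confirm by brute force for small n. The sketch below (Python; the encoding and helper names are mine, not the paper's) models an element of Bn by its images of 1, …, n, writing −j for j*, and selects Dn as the even elements:

```python
from itertools import permutations, product

def bn_elements(n):
    """Elements of B_n as signed permutations: the tuple entry at
    position i-1 is the image of i, with -j standing for j*."""
    for perm in permutations(range(1, n + 1)):
        for signs in product((1, -1), repeat=n):
            yield tuple(s * p for s, p in zip(signs, perm))

def in_dn(w):
    """w lies in D_n iff it makes an even number of sign changes,
    i.e. it is an even permutation of the 2n points [n] ∪ [n]*."""
    return sum(1 for x in w if x < 0) % 2 == 0

n = 3
bn = list(bn_elements(n))
dn = [w for w in bn if in_dn(w)]
print(len(bn), len(dn))  # 2^n n! = 48 and 2^(n-1) n! = 24
```

A sign change i ↔ i* is an odd permutation of the 2n points while a transposition (i j)(i* j*) is even, so parity as a permutation of [n] ∪ [n]* agrees with the parity of the number of sign changes.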
Reflection group. A reflection in a Coxeter group W is a conjugate of some
involution in S. Let T denote the set of all reflections in W. Every finite
Coxeter group W can be realized as a reflection group in some Euclidean space E of
dimension equal to the rank of W. In this realization, each element of T corresponds
to the orthogonal reflection through a hyperplane in E containing the origin.
It is not difficult to give an explicit representation of Bn and Dn as reflection
groups. If i ∈ [n], let e_i denote the i-th standard coordinate vector. Moreover, let
e_{i*} = −e_i. Consider Bn, and its subgroup Dn, as a permutation group as given above.
Then for w ∈ Bn, the representation of w as an orthogonal transformation is given
by letting

w(e_i) = e_{w(i)}    (2.1)

for each i ∈ [n] and expanding linearly.
As a reflection group, each finite Coxeter group W acts on its Coxeter complex.
Let H denote the set of all reflecting hyperplanes of W, and let E0 be the complement
in E of the union of these hyperplanes. The connected components of E0 are called
chambers. For any chamber, its closure is a simplicial cone in E. These simplicial
cones and all their faces form a simplicial fan called the Coxeter complex of W. It is
known that W acts simply transitively on the set of chambers of the Coxeter complex.
A flag of an n-dimensional polytope is a nested sequence F_0 ⊂ F_1 ⊂ ⋯ ⊂ F_{n−1}
of faces. A polytope is regular if its symmetry group is flag transitive. Each of the
irreducible Coxeter groups listed above, except Dn, E6, E7 and E8, is the symmetry
group of a regular convex polytope. In particular An is the symmetry group of the
(n−1)-simplex, the permutation representation above being the action on the set of
vertices, each vertex labeled with an element of [n]. The group Bn is the symmetry
group of the n-cube or its dual, the cross polytope (generalized octahedron). For this
reason, the group Bn is referred to as the hyperoctahedral group. The permutation
representation is the action on the set of 2n vertices of the cross polytope, each vertex
labeled with an element of [n] ∪ [n]*, the vertex i* being the vertex antipodal to the
vertex i. Dually, the action is on the set of 2n facets of the n-cube, the facet i* being
the one opposite the facet i. In the cases where the Coxeter group W is the symmetry
group of a regular polytope, the intersection of the Coxeter complex with a
sphere centered at the origin is essentially the barycentric subdivision of the polytope.
The Coxeter group of type Dn also acts on the n-cube Qn, although not quite flag
transitively. It acts transitively on the set of k-dimensional faces of Qn for all k ≥ 1.
However, there are two orbits in its action on the set of vertices of Qn,
and hence two orbits in its action on the set of flags of Qn.
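The two vertex orbits can be seen computationally for small n. In the sketch below (Python; the encodings are mine), a vertex of Qn is a ±1 vector, an element of Dn permutes coordinates and then flips an even number of signs, and the orbits split by the parity of the number of −1 entries:

```python
from itertools import permutations, product

n = 3

def dn_elements(n):
    # (perm, signs) with an even number of -1 entries in signs
    for perm in permutations(range(n)):
        for signs in product((1, -1), repeat=n):
            if signs.count(-1) % 2 == 0:
                yield perm, signs

def act(g, v):
    perm, signs = g
    # permute the coordinates of v, then flip the chosen signs
    return tuple(signs[i] * v[perm[i]] for i in range(len(v)))

vertices = set(product((1, -1), repeat=n))
orbits, seen = [], set()
for v in sorted(vertices):
    if v in seen:
        continue
    orbit = {act(g, v) for g in dn_elements(n)}
    orbits.append(orbit)
    seen |= orbit
print(len(orbits))  # 2: the two orbits split by parity of -1 entries
```

For n = 3 each orbit has four vertices; the full hyperoctahedral group Bn, which allows odd sign flips, would merge the two orbits into one.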
3. Dn matroids
Three definitions of Dn matroid (orthogonal matroid) are now given: (1) algebraic,
(2) geometric, and (3) combinatorial. That these three definitions are equivalent is the
subject of Sections 4, 5 and 6. Three such definitions are also given of An (ordinary)
matroids and Bn (symplectic) matroids.
Algebraic description. We begin with a definition of the Bruhat order on a
Coxeter group W; for equivalent definitions see e.g., [7][12]. We will use the notation
≤ for the Bruhat order. For w ∈ W a factorization w = s_1 s_2 ⋯ s_k into the product of
generators in S is called reduced if it is shortest possible. Let l(w) denote the length
k of a reduced factorization of w.
Definition 1. Define u ≤ v if there exists a sequence u = u_0, u_1, …, u_m = v such
that u_i = t_i u_{i−1} for some reflection t_i, and l(u_{i−1}) < l(u_i), for i = 1, …, m.
Every subset J ⊆ S gives rise to a (standard) parabolic subgroup P_J generated by
J. The Bruhat order can be extended to an ordering on the left coset space W/P for
any parabolic subgroup P of W.
Definition 2. Define Bruhat order on W/P by u ≤ v if there exists a u ∈ u and
a v ∈ v such that u ≤ v.
Associated with each w ∈ W is a shifted version of the Bruhat order on W/P,
which will be called the w-Bruhat order and denoted ≤_w.
Definition 3. Define u ≤_w v in the w-Bruhat order on W/P if w^{−1}u ≤ w^{−1}v.
Definition 4. The set L ⊆ W/P is a Coxeter matroid (for W and P) if, for each
w ∈ W, there is a unique member of L that is maximal in L with respect to ≤_w.
The condition in Definition 4 is referred to as the Bruhat maximality condition. A
Coxeter diagram with a subset of the nodes circled will be referred to as a marked
diagram. The marked diagram G of W/P is the diagram of W with exactly those
nodes circled that do not correspond to generators of P. Likewise, if L is a Coxeter
matroid for W and P, then G is referred to as the marked diagram of L.
Geometric description. Consider the representation of a Coxeter group W as a
reflection group in Euclidean space E as discussed in Section 2. A root of W is a vector
orthogonal to some hyperplane of reflection (a hyperplane in the Coxeter complex).
For Dn, with respect to the same coordinate system used in equation (2.1), the roots
are precisely the vectors

±e_i ± e_j (i ≠ j),

while the roots of Bn are

±e_i ± e_j (i ≠ j) and ±e_i.

For our purposes the norm of the root vector is not relevant.
In the Coxeter complex of W choose a fundamental chamber that is bounded by
the hyperplanes of reflection corresponding to the generators in S. With the Coxeter
complex of Dn as described in Section 2, a fundamental chamber is the convex cone
spanned by the vectors

e_1, e_1 + e_2, …, e_1 + ⋯ + e_{n−2}, e_1 + ⋯ + e_{n−1} − e_n, e_1 + ⋯ + e_{n−1} + e_n.    (3.1)
Let x be any nonzero point in the closure of this fundamental chamber. Denote the
orbit of x by

O_x = { w(x) : w ∈ W }.

If L ⊆ O_x, then the convex hull of L, denoted Δ(L), is a polytope. The following
formulation was originally stated by Gelfand and Serganova [10]; also see [12].
Definition 5. The set L ⊆ O_x is a Coxeter matroid if every edge in Δ(L) is
parallel to a root of W.
The marked diagram G of a point x is the diagram of W with exactly those nodes
circled that correspond to hyperplanes of reflection not containing the point x. Note that
x and y have the same diagram if and only if they have the same stabilizer in W. If
L ⊆ O_x is a Coxeter matroid, then G is referred to as the marked diagram of L. The
polytope Δ(L) is independent of the choice of x in the following sense. The proof
appears in [5].
Lemma 1. If x and y have the same diagram and L_x ⊆ O_x and L_y ⊆ O_y are
corresponding subsets of the orbits (i.e., determined by the same subset of W), then
Δ(L_x) and Δ(L_y) are combinatorially equivalent and corresponding edges of the two
polytopes are parallel.
Because of Lemma 1, there is no loss of generality in taking, for each diagram, one
particular representative point x in the fundamental chamber and the corresponding
orbit O_x. In particular, we can take x in the set Z0 of points
where each of the following quantities equals either 0 or 1: x_i − x_{i+1} for 1 ≤ i ≤ n−2,
and x_{n−1} − |x_n|, and |x_n|. The set Z0 consists essentially of all possible barycenters of
subsets of the vectors in (3.1) that span the fundamental chamber. (Except, however,
that the single vector e_1 + ⋯ + e_{n−1} is used instead of the last two vectors in (3.1) when
both of those vectors are present. Thus, if n = 3, for example, we get (2, 1, 0)
instead of (3, 2, 0).)
Combinatorial description. Our combinatorial description of symplectic and
orthogonal matroids is analogous to the definition of an ordinary matroid in terms
of its collection of bases. Whereas the algebraic and geometric descriptions hold for
any finite irreducible Coxeter group, the definitions in this section are specific to the
An, Bn and Dn cases. For a generalization see [13].
An essential notion in these definitions is Gale ordering. Given a partial ordering
≤ on a finite set X, the corresponding Gale ordering on the collection of k-element
subsets of X is defined as follows: A ≤ B if, for some ordering of the elements of A
and B,

A = {a_1, …, a_k},  B = {b_1, …, b_k},

we have a_i ≤ b_i for all i. Equivalently, we need a bijection φ : B → A so that φ(b) ≤ b
for all b ∈ B. In later proofs when constructing such a bijection, we will refer to b as
dominating φ(b). The following lemma is straightforward.
Lemma 2. Let A = {a_1, …, a_k} and B = {b_1, …, b_k},
and assume the elements have been ordered so that a_i ≤ a_{i+1} and b_i ≤ b_{i+1} for all
i. Then A ≤ B in the Gale order if and only if a_i ≤ b_i for all i.
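For linear orders, Lemma 2 turns the bijection ("domination") definition of the Gale order into a simple sorted componentwise test. The sketch below (Python; the function names are mine) implements both formulations and confirms they agree on all pairs of 3-subsets of {1, …, 5}:

```python
from itertools import combinations, permutations

def gale_leq_sorted(A, B):
    # Lemma 2: sort both sides and compare componentwise
    a, b = sorted(A), sorted(B)
    return all(x <= y for x, y in zip(a, b))

def gale_leq_bijection(A, B):
    # Definition: some bijection from B onto A is dominating,
    # i.e. each b in B is >= the element of A it covers
    A = list(A)
    return any(all(a <= b for a, b in zip(img, B))
               for img in permutations(A))

# the two formulations agree on all pairs of 3-subsets of {1,...,5}
for A in combinations(range(1, 6), 3):
    for B in combinations(range(1, 6), 3):
        assert gale_leq_sorted(A, B) == gale_leq_bijection(A, B)
print("Lemma 2 check passed")
```

For example {1, 3} ≤ {2, 4} in the natural order, while {1, 4} and {2, 3} are incomparable, since neither sorted sequence dominates the other.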
Define a flag F of type k_1 < k_2 < ⋯ < k_m to be a nested
sequence of subsets of X

A_1 ⊆ A_2 ⊆ ⋯ ⊆ A_m

such that |A_i| = k_i. Thus a flag of type k is just a single k-element set. Extend the Gale
ordering on sets to the Gale ordering on flags as follows. If
F_A : A_1 ⊆ ⋯ ⊆ A_m and F_B : B_1 ⊆ ⋯ ⊆ B_m
are two flags, then F_A ≤ F_B if A_i ≤ B_i for all i.
An matroids. Consider flags A_1 ⊆ A_2 ⊆ ⋯ ⊆ A_m with A_i ⊆ [n] for each
i. Let A(k_1, k_2, …, k_m) denote the set of all flags of type k_1 < k_2 < ⋯ < k_m.
Definition 6. A collection L ⊆ A(k_1, k_2, …, k_m) is an An matroid if, for any linear
ordering of [n], L has a unique maximal member in the corresponding Gale order.
If the flag consists of more than one subset, the An matroid is often referred to as
an ordinary flag matroid. In the case of single sets (flags of type k), it is a standard
result in matroid theory [14] that an An matroid is simply an ordinary matroid of
rank k.
Bn matroids. Now consider flags A_1 ⊆ A_2 ⊆ ⋯ ⊆ A_m with A_i ⊆ [n] ∪ [n]*
for each i. Call such a flag admissible if A_m does not contain
both i and i* for any i. Let A(k_1, k_2, …, k_m)
denote the set of all admissible flags of type
k_1 < k_2 < ⋯ < k_m ≤ n.
Define a partial order on [n] ∪ [n]* by

1 < 2 < ⋯ < n < n* < (n−1)* < ⋯ < 1*.

A partial order ≤ on [n] ∪ [n]* is called Bn-admissible if it is a shifted ordering ≤_w for
some w ∈ Bn. By a shifted ordering we mean:

a ≤_w b if and only if w^{−1}(a) ≤ w^{−1}(b).

Note that an ordering ≤ on [n] ∪ [n]* is admissible if and only if (1) ≤ is a linear ordering
and (2) from i ≤ j it follows that j* ≤ i* for any distinct elements i, j.
For example, for n = 2, 1 < 2* < 2 < 1* is an admissible ordering.
Definition 7. A collection L ⊆ A(k_1, k_2, …, k_m) is a Bn matroid (symplectic flag
matroid) if, for any admissible ordering of [n] ∪ [n]*, L has a unique maximal member
in the corresponding Gale ordering.
The marked diagram G of A(k_1, k_2, …, k_m) is the diagram for Bn with the nodes
k_1, k_2, …, k_m circled. If L ⊆ A(k_1, …, k_m) is a Bn matroid, then G is
referred to as the marked diagram of L.
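Definition 7 can be tested exhaustively for small n. In the sketch below (Python; the encoding and helper names are mine, not the paper's), i* is written −i, and each admissible ordering is realized as the w-shift of a fixed admissible order, so it is determined by its ordered bottom half w(1) < ⋯ < w(n):

```python
from itertools import permutations, product

def admissible_orderings(n):
    """Yield each B_n-admissible linear order on {±1,...,±n} as a
    rank map (higher rank = larger).  Such an order is determined
    by its bottom half w(1) < ... < w(n); the top half is the
    reversed, starred mirror image."""
    for perm in permutations(range(1, n + 1)):
        for signs in product((1, -1), repeat=n):
            half = [s * p for s, p in zip(signs, perm)]
            chain = half + [-x for x in reversed(half)]
            yield {x: i for i, x in enumerate(chain)}

def gale_leq(A, B, rank):
    # Lemma 2: sort both sides by rank, compare componentwise
    a = sorted(A, key=rank.get)
    b = sorted(B, key=rank.get)
    return all(rank[x] <= rank[y] for x, y in zip(a, b))

def is_symplectic_matroid(bases):
    """bases: a collection of admissible k-subsets of {±1,...,±n},
    each encoded as a frozenset with -i standing for i*."""
    n = max(abs(x) for S in bases for x in S)
    for rank in admissible_orderings(n):
        maxima = [A for A in bases
                  if all(gale_leq(B, A, rank) for B in bases)]
        if len(maxima) != 1:
            return False
    return True

# {1,2} and {1*,2} are Gale-comparable under every admissible order,
# so the pair satisfies the maximality condition:
print(is_symplectic_matroid([frozenset({1, 2}), frozenset({-1, 2})]))
# {1,2} and {3,2*} are incomparable under a suitable admissible
# order, so this pair has no unique maximum there:
print(is_symplectic_matroid([frozenset({1, 2}), frozenset({3, -2})]))
```

This brute-force check is exponential in n and is meant only to illustrate the definition on toy examples, not as an efficient test.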
Dn matroids. Again consider admissible flags A_1 ⊆ A_2 ⊆ ⋯ ⊆ A_m with
A_i ⊆ [n] ∪ [n]* for each i. Let A(k_1, k_2, …, k_m) denote the same set of admissible
flags of type k_1 < k_2 < ⋯ < k_m
as in the Bn case, except when k_m = n. In that case let
A+(k_1, …, k_m) denote the set of admissible flags of type k_1 < ⋯ < k_m containing an
even number of starred elements, and let A−(k_1, …, k_m)
denote the set of admissible flags of the same type containing an
odd number of starred elements. Note that we do not permit both k_m = n
and k_{m−1} = n−1. In the case k_m = n it should be understood, without explicitly stating it,
that A means either A+ or A−.
Define a partial order on [n] ∪ [n]* by

1 < 2 < ⋯ < n−1 < n < (n−1)* < ⋯ < 1* and 1 < ⋯ < n−1 < n* < (n−1)* < ⋯ < 1*.    (3.2)

Note that the elements n and n* are incomparable in this ordering. A partial order
on [n] ∪ [n]* is Dn-admissible if it is a shifted order ≤_w for some w ∈ Dn:

a ≤_w b if and only if w^{−1}(a) ≤ w^{−1}(b).

Note that an ordering is Dn-admissible if and only if it is of the form

a_1 ≺ a_2 ≺ ⋯ ≺ a_{n−1} ≺ a_n and a_1 ≺ ⋯ ≺ a_{n−1} ≺ a_n*,

with a_n and a_n* incomparable, where {a_1, …, a_n} is admissible and we again use the convention i** = i.
Definition 8. A collection L ⊆ A(k_1, k_2, …, k_m) is a Dn matroid (orthogonal matroid)
if, for each admissible ordering, L has a unique maximal member in the corresponding
Gale order.
The elements of L will be called bases of the matroid. If m > 1, then the matroid
is sometimes referred to as an orthogonal flag matroid. The marked diagram G of
A(k_1, k_2, …, k_m) is obtained from the diagram of Dn by considering three cases:
Case 1. If k_m ≤ n−2, then circle the nodes k_1, k_2, …, k_m.
Case 2. If k_{m−1} ≤ n−2 (or m = 1) and k_m = n, then circle the nodes k_1, …, k_{m−1}
and the node n−1 or n depending on whether the collection of flags is A−(k_1, …, k_m)
or A+(k_1, …, k_m).
Case 3. If k_{m−1} ≤ n−2 (or m = 1) and k_m = n−1, then circle the nodes k_1, …, k_{m−1}
and both nodes n−1 and n.
Note that all possibilities for marked diagrams are realized. If L ⊆ A(k_1, k_2, …, k_m) is a
Dn matroid, then G is referred to as its marked diagram.
4. Bijections
The denitions of Dn matroid in the previous section are cryptomorphic in the sense
of matroid theory; that is, they dene the same object in terms of its various aspects.
Likewise for Bn matroids. For ordinary matroids there are additional denitions in
terms of independent sets,
ats, cycles, the closure operator, etc. In this section we
make the crytomorphisms explicit for orthogonal matroids.
The denitions given for Dn matroid in the previous section are
(1) in terms of the set W=P of cosets (Denition 4),
(2) in terms of the set O x of points in Euclidean space (Denition 5), and
(3) in terms of a collection A (k1 ;k 2 ;:::;k m ) of admissible
ags (Denition 8).
Explicit bijections are now established between W=P; O x and A (k1 ;k 2 ;:::;k m ) , each with
the same marked diagram:
To define f, start with the collection Dn/P of cosets with marked diagram G. Fix
a point x ∈ R^n \ {0} in the fundamental chamber with marked diagram G. In fact,
we can take x to be a point in the set Z0 defined in Section 3. In other words, the
stabilizer of x in W is exactly P. For w ∈ Dn, the point w(x) depends only on the
coset wP. This gives a bijection

f : Dn/P → O_x,  f(wP) = w(x).

To describe the inverse of f, let y ∈ O_x and let w ∈ Dn be such that w(x) = y. If P
is the parabolic subgroup of Dn generated by exactly those reflections that stabilize
x, then f^{−1}(y) = wP.
To define g, again consider the collection Dn/P of cosets with marked diagram G.
Let A(k_1, …, k_m) be the collection of admissible flags with the same marked diagram G
(as described by the three cases in Section 3). If F is the flag A_1 ⊆ A_2 ⊆ ⋯ ⊆ A_m
with A_m = {a_1, …, a_{k_m}}, then the flag will often be denoted

a_1 ⋯ a_{k_1} | a_{k_1+1} ⋯ a_{k_2} | ⋯ | a_{k_{m−1}+1} ⋯ a_{k_m},    (4.1)

where A_i = {a_1, …, a_{k_i}} for each i. For example the flag {1} ⊆ {1, 3*} will be denoted simply 1 | 3*.
Let F_0 be the flag with A_i = {1, 2, …, k_i} for each i (in the case k_m = n,
the last element is taken to be n* or n
depending, respectively, on whether the node n−1 or n is circled). Let F be an
arbitrary flag given in the form (4.1). The action of Dn as a permutation group on
[n] ∪ [n]* as described in Section 2 extends to an action on A(k_1, …, k_m).
For this action of Dn on A(k_1, …, k_m) the stabilizer of F_0 is P. So, for
w ∈ Dn, the flag w(F_0) depends only on the coset wP. Thus a bijection is induced:

g : Dn/P → A(k_1, …, k_m),  g(wP) = w(F_0).
The map g is surjective because, if k_m < n, then there is one orbit, A(k_1, …, k_m), in
the action of Dn on the set of admissible flags of type k_1 < ⋯ < k_m; if k_m = n
there are two orbits, A+(k_1, …, k_m) and A−(k_1, …, k_m),
the one consisting of those admissible flags such that A_m contains an even number of
starred elements, and the other those admissible flags such that A_m contains an odd
number of starred elements.
To describe the inverse of g, let F be an arbitrary flag in A(k_1, …, k_m) and let w ∈ Dn
be such that w(F_0) = F. If P is the parabolic subgroup of Dn that stabilizes F_0, then
g^{−1}(F) = wP.
The third bijection is h = f ∘ g^{−1}. However, it is useful to provide the direct
construction, as follows. Let F be a flag of type k_1 < ⋯ < k_m written in the form (4.1).
Recall that e_{i*} = −e_i, and define

h(F) = μ_1 e_{a_1} + μ_2 e_{a_2} + ⋯ + μ_{k_m} e_{a_{k_m}},

where μ_i is the number of sets in the flag that contain a_i. In
particular let x = h(F_0), with F_0 taken as above
when we are in case A+ or A−,
respectively. Now we have our map

h : A(k_1, …, k_m) → O_x.
There is an alternative way to describe the map h, in terms of the barycentric
subdivision Δ(Q) of the n-cube Q centered at the origin with edges parallel to the
axes. Let the faces of Q be labeled by [n] ∪ [n]*, where i and i* are antipodal
faces. If F ∈ A_k, then h(F) is a vertex of Δ(Q) (appropriately scaled). For a flag
F, the image h(F) is the barycenter of the
simplex of Δ(Q) determined by the vertices h(A_1), …, h(A_m).
To describe the inverse of h, note that each vertex v in Δ(Q) represents a face f_v
of Q. If A ⊆ [n] ∪ [n]* is the set of k facets of Q whose intersection is f_v, then label
v by A. Point x is the barycenter of a simplex of Δ(Q) whose vertices are labeled,
say, A_1, …, A_m with |A_i| = k_i (again, in the case k_m = n with the appropriate choice
of n or n*). Each point y ∈ O_x is the
barycenter of some simplex of Δ(Q) whose vertices are labeled A_1, …, A_m with |A_i| = k_i
for each i. Then h^{−1}(y) is the flag A_1 ⊆ A_2 ⊆ ⋯ ⊆ A_m.
Proposition. With the maps as defined above, h ∘ g = f.
Proof. Let wP ∈ Dn/P and use formula (2.1):

h(g(wP)) = h(w(F_0)) = Σ_{i=1}^{k_m} μ_i e_{w(a_i)} = w( Σ_{i=1}^{k_m} μ_i e_{a_i} ) = w(x) = f(wP).
5. The Bruhat Order on Dn .
Let Dn/P and A(k_1, …, k_m) have the same marked diagram. In this section a combinatorial
description of the Bruhat order on Dn/P is given in terms of A(k_1, …, k_m).
This is also done in the Bn case.
Consider two flags of the same type:

F_A : A_1 ⊆ A_2 ⊆ ⋯ ⊆ A_m and F_B : B_1 ⊆ B_2 ⊆ ⋯ ⊆ B_m.

Alternatively, in the notation (4.1), write A = a_1 ⋯ | ⋯ | ⋯ a_k and B = b_1 ⋯ | ⋯ | ⋯ b_k.
Assume, without loss of generality, that in A and B the elements between consecutive
vertical bars are arranged in descending order with respect to (3.2). This is possible
because, by definition, elements n and n* do not appear together in an admissible
set.
Now we define a partial order on A(k_1, …, k_m) called the weak Dn Gale order. Consider
two distinct flags A and B (denoted as above) where, for each i, either (1) b_i = a_i
or (2) b_i = a_i*. In case (2) also assume that
(a) a_i is unstarred and a_i ≠ n;
(b) a_j (and thus also b_j) is less than a_i in numerical value for all j > i; and
(c) all elements greater than a_i in numerical value appear to the left of a_i in A.
Then B > A in the Gale order for Dn. Call two flags with the above properties close.
For example, … and … are close, but … and … are not close. For any pair of
flags that are close, it is easy to check that one covers the other in the Gale order.
Consider the Hasse diagram of the Dn Gale order on the set A(k_1, …, k_m) of flags.
Remove from the diagram all covering relations that are close. Call the resulting
partial order on A(k_1, …, k_m) the weak Dn Gale order.
The following notation is used in the next lemma, which provides a formula for
the length l(u) of any element u ∈ Dn. Let F_u ∈ A(1, …, n−1). Since all nodes in the
corresponding diagram are circled, the parabolic subgroup in this case is trivial, so we
are justified in using the notation F_u, where u ∈ Dn. Let F′_u be the flag in A(1, …, n)
obtained from F_u by adjoining the missing element at the end so that the number of
starred elements is even. For a flag A in A(1, …, n) define a descent as a pair (a_i, a_j)
of elements such that j > i and a_i > a_j. Let d(A) denote the number of descents in
the flag A. Note that, as a permutation of [n] ∪ [n]*, a reflection t ∈ Dn is an involution
of the form

(a b)(a* b*),    (5.1)

where a, b ∈ [n] ∪ [n]* and b ∉ {a, a*}. A generating reflection is of the form
(i, i+1)(i*, (i+1)*) for 1 ≤ i ≤ n−1, or (n−1, n*)((n−1)*, n).
Lemma 3. Let F_u be the flag corresponding to an element u ∈ Dn. Then

l(u) = d(F′_u) + Σ_{s* ∈ [n]* ∩ F′_u} (n − s).

Proof. The proof is by induction. For a flag F in A(1, …, n), denote the
parameter Σ_{s* ∈ [n]* ∩ F} (n − s) by p(F). Note that applying any generating
reflection s to F changes p(F) by at most 1. Since necessarily l(u) ≥ p(F′_u), …
But we can always arrange it so that p(sF′_u) = p(F′_u) − 1.
Theorem 1.
(1) The correspondence g : Dn/P → A(k_1, …, k_m) is a poset isomorphism between Dn/P
with respect to the Bruhat order and A(k_1, …, k_m) with respect to the weak Dn Gale
order. In other words, for Dn, Bruhat order is weaker than Gale order.
(2) On the other hand, for Bn, Bruhat order on Bn/P is isomorphic to Gale order on
A(k_1, …, k_m).
Proof. In this proof the relation ≤ refers to the Dn weak Gale order. Let F_A and F_B
be two flags in A(k_1, …, k_m) and v and u the corresponding cosets in Dn/P. It must
be shown that F_B ≥ F_A if and only if u ≤ v. According to Definitions 1 and 2 in
Section 3, in Dn/P we have u ≤ v in the Bruhat order if and only if there exists a
sequence u = u_0, u_1, …, u_m = v such that u_i = t_i u_{i−1}, where each t_i is a reflection
and l(u_{i−1}) < l(u_i). Suppose v = tu with l(u) < l(v) and t a reflection, and let
F_v and F_u be the corresponding flags. Note that, according to Lemma 3, u < v if
and only if F_u > F_v.
Below it will be shown that, in A(k_1, …, k_m), we have F_B ≥ F_A in the weak Dn
order if and only if there exists a sequence of flags F_A = F_0, F_1, …, F_m = F_B such
that F_i = t_i F_{i−1} with t_i of the form (5.1), and F_i > F_{i−1} for
i = 1, …, m. This will prove the theorem. Thus it is now sufficient to show that, for
any two flags F_B > F_A, there is an involution t of the form (5.1) such that either
F_B ≥ tF_A > F_A or F_B > tF_B ≥ F_A.
To simplify notation assume that B > A are two flags. Let j be the first index
such that a_j ≠ b_j, and such that, furthermore, if b_j = a_j*,
then assume that either
(i) there is an a_i that is greater than a_j in numerical value, or
(ii) there is an element c greater than a_j in numerical value such that neither c
nor c* lies to the left of a_j in A.
Since A and B are not close, such a j must exist. To simplify notation let a = a_j
and b = b_j. If a > b, then it is not possible that B > A. Also … is
not possible; otherwise there would be an earlier pair a_i ≠ b_i that would
qualify as the first pair such that a_j ≠ b_j. Thus b > a. Let [a, b] = {c : a ≤ c ≤ b}. One of
the following cases must hold:
(1) There is a c ∈ [a, b] such that either c lies to the right of a in A or neither c
nor c* appears in A.
(2) There is a c ∈ [a, b] such that c lies to the right of b in B or neither c nor c*
appears in B.
(3) There is a c ∈ [a, b] such that c* lies to the right of a in A.
In fact, if neither (1) nor (2) holds, then clearly b can play the role of c in (3) unless
b = a*. If b = a* and neither (1) nor (2) holds, then the non-closeness of A and
B is violated.
Case 1. Assume that such a c exists. If such c exists to the right of a in A, let
a_k be the first such element as we move to the right of a in A. (If neither c nor
c* appears in A for all such c, let k = ∞.) The elements a_i ∈ A between a and a_k do
not lie in [a, b].
There are now two subcases. Assume first that … Let C be obtained
from A by applying the involution (a c)(a* c*). Clearly C > A, since c must occur in
a separate block of A from a, since c > a and we assumed elements in a block were
ordered in decreasing order. It remains to show that B ≥ C. We must show that
B_i ≥ C_i for each i, and we may assume that all elements
to the left of b in B_i are used to dominate the elements to the left of a in A_i. The
element b in B_i must be used to dominate an element of A_i that is less than or equal
to a. But then, without loss of generality, we may assume that b is used to dominate
a. Using the same correspondence we have B_i ≥ C_i.
Now assume the other subcase, that b … Arguing as above, it
may be the case that b is used to dominate a_i. By the choice of j satisfying
properties (i) and (ii) above, b is less than a_i in numerical value, and hence it must
be the case that b_i > b. But then, without loss of generality, we can take b_i to
dominate a_i and b to dominate a. Then proceed as in the paragraph above.
Case 2 is proved in an analogous fashion to Case 1, finding a C such that B > C ≥ A.
Case 3. Assume that neither case (1) nor case (2) holds. We have already seen that b ≠ a*. The situation is now divided into two possibilities. Either
(1) a and b are both unstarred, or a is unstarred and b = n*, or
(2) a and b are both starred, or b is starred and a = n.
The two cases are exhaustive. To see this, let a ≠ n be unstarred and b ≠ n* be starred. Either a is greater than b in numerical value or b is greater than a in numerical value. Assume the former; the argument is the same in either case. Then a ∈ [a, b] and a* ∈ [a, b]. One of the following must be true: neither a nor a* appears in B (which is case 2) or one of a or a* appears to the right of b in B (also case 2). (Note that b = n is not possible.)
We consider the case where both a and b are unstarred or a is unstarred and b = n*; the argument in the second case is analogous. We assume that there is a c ∈ [a, b] such that c* lies to the right of a in A. Let c* = a_k be the last such starred element as we move to the right of a in A. In other words, all other elements in [a, b] that lie to the right of a in A lie to the left of c*. Let C be obtained from A by applying the involution (ac)(a*c*). Clearly C > A; it remains to show that B ≤ C. In particular, we must show that B_i ≤ C_i for the relevant indices i between j and k. We can assume, by the same reasoning as in case (1), that all elements to the left of b in B are used to dominate the elements to the left of a in A. Since no elements in [a, b] appear to the right of a in A, there is no loss of generality in assuming that b is used to dominate a. Then, unless i ≥ k, the same correspondence between the elements of A_i and B_i shows that B_i ≤ C_i.
If i ≥ k, then consider the set X of starred elements of A_i that lie to the right of a, and the set Y of starred elements of B_i that lie to the right of b. It is not necessary to consider elements to the left of a or b because the element b_i cannot be used to dominate any element of X in the Gale order B_i > A_i, since all elements in X are less than b_i in numerical value (and hence greater in order). So in the Gale order B_i > A_i the elements of X must be dominated by elements of Y. There may be elements of [a, b] not in B_i, by virtue of appearing to the right of position i. But in that case, even larger elements of Y must be used to dominate them. By letting each element be used to dominate itself, where possible, we see that there is no loss of generality in the choice of dominating elements, and now almost the same correspondence shows that B_i ≤ C_i. Merely make the changes that a* ∈ C_i is dominated as before and c* ∈ B_i (or some larger element) dominates b ∈ C_i, whereas b now dominates c. This completes the proof of statement (1) of Theorem 1.
A similar though considerably simpler proof of statement (2) in Theorem 1 can be given. In fact, a general proof for all Coxeter groups with a linear diagram was given in [13]. This would include the B_n case, but not the D_n case.
6. Cryptomorphisms.
We have seen that for D_n, the bijection is not a poset isomorphism between Bruhat order and Gale order, but rather that Bruhat order is weaker than Gale order. It is therefore somewhat surprising that the Bruhat maximality condition is still equivalent to the Gale maximality condition. We now prove the equivalence of our three definitions of D_n matroid: the algebraic, geometric, and combinatorial descriptions.
Theorem 2. Let L ⊆ D_n/P be a collection of cosets, let Δ(L) be the corresponding polytope, and let F be the corresponding collection of flags. Then the following are equivalent:
(1) L satisfies the Bruhat maximality condition,
(2) Δ(L) satisfies the root condition,
(3) F satisfies the Gale maximality condition.
Proof. This theorem follows from Theorem 3 in [13]. However, that paper considers a much more general setting, and the proof, including that of the prerequisite Theorem 1 of [13], is quite long and involved. The situation simplifies considerably in the current setting. The equivalence of statements (1) and (2) is a special case of the Gelfand-Serganova Theorem and is proved for any finite irreducible Coxeter group in [12, Theorem 5.2]. Now we show the equivalence of (1) and (3).
Assume that L ⊆ D_n/P satisfies the maximality condition with respect to Bruhat order. This means that, for any w ∈ D_n, there is a maximum u_0 ∈ L. By statement (1) of Theorem 1 this implies that the corresponding flag is a maximum in the weak D_n Gale order for all members of the collection F of flags defined in Section 4. So F satisfies the Gale maximality condition.
Conversely, suppose that F satisfies the Gale maximality condition with respect to admissible D_n-orderings. Since every admissible B_n-ordering is a refinement of an admissible D_n-ordering, it is clear that F satisfies the Gale maximality condition with respect to admissible B_n-orderings. By statement (2) of Theorem 1, the corresponding collection of cosets of B_n/P satisfies the Bruhat maximality condition, and hence the corresponding polytope Δ(L) satisfies the root condition for B_n by the above-mentioned equivalence of (1) and (2) for finite irreducible Coxeter groups. Note that if we consider the same set of flags F as both a D_n matroid and a B_n matroid, Δ(L) is the same polytope in both cases. If Δ(L) also satisfies the root condition for D_n, we are done. Therefore we assume, by way of contradiction, that Δ(L) does not satisfy the root condition for D_n, and hence that there is an edge of Δ(L) which is parallel to e_i for some i. Let this edge join the vertices corresponding to flags A and B under the bijection from Section 4. Then A and B differ only in element i. Let f be a linear functional which takes its maximum value on the polytope Δ(L) only on the two vertices corresponding to A and B and, of course, the edge between them. Without loss of generality, we may assume that f is chosen appropriately.
Now choose an admissible D_n-ordering on [n] ∪ [n]* according to the values of f; where ties occur, break the tie arbitrarily, as long as admissibility is attained. Also i and i* remain incomparable. In the Gale order induced by this admissible order, A and B are unrelated. Suppose there exists some flag X > A in the Gale order. Clearly this contradicts the fact that the vertices corresponding to A and B are the unique vertices of Δ(L) on which f is maximized. Thus A, and likewise B, are both maximal in the D_n Gale order, contrary to assumption.
We now consider a fourth equivalent definition of orthogonal matroid in the case that the marked diagram has both nodes n−1 and n circled. In this case the largest member A_m of each flag is an admissible set of cardinality n−1. It is easily seen that the collection of all such (n−1)-sets for all members of L itself constitutes an orthogonal matroid of rank n−1. (This is likewise true for smaller ranks, as well as for all ranks for B_n and A_n matroids.) However, the present case is the only one among all of these in which the parabolic subgroup P is not maximal since, by the way the diagram is defined, the two generators corresponding to n−1 and n are both deleted to get the generators of P. Thus the idea presents itself that such an orthogonal matroid of rank n−1 should be equivalent in some way to a pair of orthogonal matroids of opposite parity, corresponding to the two marked diagrams with either n−1 or n circled. This is indeed the case. Let F_A be a flag whose largest member A_m has cardinality n−1, and let A_m^+ and A_m^− denote the unique extensions of A_m to admissible sets of cardinality n having an even and an odd number of starred elements, respectively. Let us denote the resulting object by A, where the notation is intended to convey that A_{m−1} is a subset of both A_m^+ and A_m^−, whereas the latter two are not to be regarded as occurring in any particular order since they are unrelated by containment. A is sometimes referred to as an oriflamme. Given a D_n admissible order, we will write A ≤ B if A_i ≤ B_i in D_n Gale order for each i < m, together with a corresponding condition on the pair of extensions A_m^+, A_m^− and B_m^+, B_m^−. We will refer to this ordering as modified Gale order.
Theorem 3. For any two flags F_A, F_B of the same type with largest members of cardinality n−1, with corresponding oriflammes A, B, we have F_A ≤ F_B if and only if A ≤ B. Hence a collection of flags is an orthogonal matroid if and only if the corresponding collection of oriflammes satisfies the maximum condition for modified Gale order.
Proof. It suffices to consider the case where A and B are admissible sets of cardinality n−1. We may also write the extensions as A ∪ {x}, A ∪ {x*} and B ∪ {y}, B ∪ {y*}, where x, x* are the unique pair neither of which is in A, and similarly y, y* for B. Note that if x ∈ {y, y*} then the theorem is trivial, so we assume henceforth that this is not the case.
Suppose A ≤ B. If x is below both y and y* in the admissible order, we see immediately that A ≤ B. Similar arguments cover the remaining possible cases of the admissible order restricted to x, x*, y, y*.
To prove the converse, assume that A ≤ B. We must prove the corresponding dominance of the extensions A ∪ {x}, A ∪ {x*} by B ∪ {y}, B ∪ {y*}. The assumption that A ≤ B means that there is a bijection φ from A to B, so that a ≤ φ(a) for all a ∈ A. Note that we can assume without loss of generality that φ maps any element b of A ∩ B to itself, for if not then we can reassign b. Thus we have φ(a) ∈ B∖A if and only if a ∈ A∖B.
Let us assume a fixed configuration of our admissible ordering restricted to x, x*, y, y*. The argument for the remaining cases is similar. (In particular, when x and x* are between y and y*, we just need to reverse the roles of A and B and reverse the ordering to transform the argument to the above case.) Notice that A ∪ {x} ≤ B ∪ {y} and A ∪ {x} ≤ B ∪ {y*} as well. Thus we only need to prove either A ∪ {x*} ≤ B ∪ {y} or A ∪ {x*} ≤ B ∪ {y*}. Since x ∉ {y, y*}, we have two cases:
Case 1: x ∈ B. Write b_1 = x; since φ is a bijection we have a_1 ∈ A with φ(a_1) = b_1. Now if a_1 ≥ y or a_1 ≥ y*, then we are done, for we can define φ′(a_1) = y (or similarly with y*), with the rest of φ′ agreeing with φ, and φ′ establishes the desired (modified) Gale dominance. Hence we may now assume that a_1 is smaller; we then obtain a_2 and b_2, a_3 > a_2, and we write b_3 = φ(a_3), and so on.
We continue in this fashion until either two a_i's coincide (and hence so do the corresponding b_i's), or until some a_i ≥ y or a_i ≥ y* for some even i, one of which must occur eventually since A is finite. In the case of a coincidence, since a_i > b_i for i odd but not for i even, our coincidence is of the form a_i = a_j with i and j of the same parity. Assume that i is the minimal index in such a coincidence. But then b_i = b_j, since φ is a bijection, which if i > 1 means the preceding pair already coincided. By minimality of i, we must have had i = 1, showing that x ∈ A, a contradiction. Thus we could not have two a_i's coincide, so we must instead have a_i ≥ y (or y*) for some even i. Note that for even j ≥ 2, we have a_j ≥ a_{j+1}. We now define φ′ with φ′(a_i) = y. This gives the desired result for Case 1.
Case 2: x* ∈ B. Starting with b_1 = x*, we construct a_i's and b_i's exactly as above, except that now it is the even-numbered a_i which are always larger, and we want to find one odd-numbered a_i which is greater than either y or y*. The rest of the proof is similar to Case 1, with the bijectivity of φ being used to contradict any coincidence among the a_i's.
7. Relation between symplectic and orthogonal matroids.
In this section we view both symplectic matroids and orthogonal matroids in terms of their combinatorial description. Both are defined in terms of admissible k-element subsets of [n] ∪ [n]*. The number k is called the rank of the symplectic or orthogonal matroid.
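As a small illustration of the combinatorial description (a sketch with names of our own choosing; a linear, B_n-type admissible ordering is assumed — for D_n the incomparability of n and n* requires an adapted check), Gale dominance of k-sets can be tested componentwise after sorting:

```python
def gale_leq(A, B, rank):
    # A <= B in the Gale order induced by a linear ordering:
    # sort both k-sets by the ordering and compare componentwise
    # (rank(x) = position of x in the ordering; larger rank = larger element)
    a = sorted(A, key=rank)
    b = sorted(B, key=rank)
    return all(rank(x) <= rank(y) for x, y in zip(a, b))

def has_gale_maximum(collection, rank):
    # the maximality condition for one ordering:
    # some member dominates every member of the collection
    return any(all(gale_leq(X, M, rank) for X in collection)
               for M in collection)
```

The maximality condition defining a matroid then asks for such a maximum member with respect to every admissible ordering, not just one.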
Corollary. Every orthogonal (flag) matroid is a symplectic (flag) matroid.
Proof. This follows directly from Theorem 2, since every admissible set of flags for D_n is admissible for B_n and every root of D_n is a root of B_n.
In general, the converse is false. For example, {12, 1*2*} is a rank 2 symplectic (B_4) matroid, but is not an orthogonal (D_4) matroid. We are not able, in general, to give a simple combinatorial characterization of when a symplectic matroid is orthogonal (the geometric characterization is obvious). Below, however, is a characterization for the special case of a rank n symplectic matroid, called a Lagrangian matroid in [2] or a symmetric matroid in [6].
Theorem 4. A B_n matroid L of rank n is a D_n matroid if and only if L lies either entirely in A_n^+ or entirely in A_n^− (the admissible n-sets with an even and an odd number of starred elements, respectively).
Proof. Assume that L is a symplectic matroid. In one direction the result follows directly from the definition of orthogonal matroid.
For the other direction, assume that L lies either entirely in A_n^+ or entirely in A_n^−. Consider any admissible orthogonal ordering of [n] ∪ [n]*. Without loss of generality, let it be the ordering in (3.3). Use the notation ≤_s for either one of the two admissible symplectic orderings that are linear extensions of it. With respect to ≤_s there is a Gale maximum A = (a_1, ..., a_n), where the a_i can be taken in descending order with respect to ≤_s. Thus for any B = (b_1, ..., b_n) ∈ L in descending order, we have b_i ≤_s a_i for all i. However, the same inequality b_i ≤ a_i also holds for the orthogonal ordering, unless n and n* appear in the same position in A and B, respectively. But that can happen only if A ∩ ([n−1] ∪ [n−1]*) = B ∩ ([n−1] ∪ [n−1]*). This would imply A ∈ A_n^+ and B ∈ A_n^−, or the other way around, a contradiction.
8. Representable orthogonal matroids.
Some symplectic matroids and orthogonal matroids arise naturally from symplectic and orthogonal geometries, respectively, in much the same way that ordinary matroids arise from projective geometry. The representation of symplectic matroids was discussed in [2]; the representation of orthogonal matroids is discussed in this section. However, it is convenient to consider the symplectic and the orthogonal case simultaneously; this leads to a simplified treatment of the symplectic case as well as a proof in the orthogonal case. We will consider only the representation of symplectic and orthogonal matroids of rank k, for some k ≤ n. Flag symplectic and flag orthogonal matroids can similarly be represented using flags of totally isotropic subspaces.
Both a symplectic space and an orthogonal space consist of a pair (V, f), where V is a vector space over a field of characteristic ≠ 2 with basis E = {e_1, ..., e_n, e_1*, ..., e_n*}, and f is a bilinear form, hereafter denoted just (·, ·). The bilinear form is antisymmetric for a symplectic space and symmetric in an orthogonal space. In both cases (e_i, e_j) = (e_i*, e_j*) = 0 and (e_i, e_j*) = δ_ij.
A subspace U of V is totally isotropic if (u, u′) = 0 for all u, u′ ∈ U.
Let U be a totally isotropic subspace of dimension k of either a symplectic or an orthogonal space V. Since U ⊆ U^⊥ and dim U^⊥ = 2n − k, we see that k ≤ n. Now choose a basis {u_1, ..., u_k} of U, and expand each of these vectors in terms of the basis E. Thus we have represented the totally isotropic subspace U as the row-space of a k × 2n matrix C = (A, B), with the columns indexed by [n] ∪ [n]*, specifically, the columns of A by [n] and those of B by [n]*.
Given a totally isotropic subspace U of dimension k, let C = (A, B) be a k × 2n matrix defined as above. If X is any k-element subset of [n] ∪ [n]*, let C_X denote the k × k submatrix formed by taking the j-th column of C for all j ∈ X. Define a collection L_U of k-element subsets of [n] ∪ [n]* by declaring X ∈ L_U if X is an admissible k-element set and det(C_X) ≠ 0. Note that L_U is independent of the choice of the basis of U.
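The collection L_U can be enumerated directly from the matrix C = (A, B). The sketch below uses hypothetical names and a numerical tolerance; columns 0..n−1 stand for [n] and column j+n stands for j*:

```python
import itertools
import numpy as np

def bases_from_isotropic(C, n):
    """Enumerate L_U: C is a k x 2n matrix (A, B) whose row space is
    totally isotropic.  A k-subset X of the columns is admissible when
    it never contains both j and j*; X is kept when det(C_X) != 0."""
    k = C.shape[0]
    bases = []
    for X in itertools.combinations(range(2 * n), k):
        if any(j in X and j + n in X for j in range(n)):
            continue  # not admissible: contains both j and j*
        if abs(np.linalg.det(C[:, X])) > 1e-9:
            bases.append(X)  # det(C_X) != 0
    return bases
```

For instance, the symplectic plane spanned by e_1, e_2 (n = 2) yields the single basis {1, 2}, i.e. the column pair (0, 1).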
Theorem 5. If U is a totally isotropic subspace of a symplectic or orthogonal space, then L_U is the collection of bases of a symplectic or orthogonal matroid, respectively.
Proof. The fact that the row space is totally isotropic implies (u_i, u_l) = 0 for all i, l. In terms of the matrices A and B this is equivalent to
A_i · B_l ± A_l · B_i = 0,   (8.1)
where A_i and B_i denote the respective row vectors and · denotes the usual dot product. The sign is + in the orthogonal case and − in the symplectic case. In the orthogonal case, taking i = l, the equality (8.1) above implies
A_i · B_i = 0.   (8.2)
Let ≤ be any admissible ordering of [n] ∪ [n]*. Order the columns of C in descending order with respect to ≤. (The order of n and n* is arbitrary in the orthogonal case.) The re-ordering may be done by first interchanging pairs of columns indexed by j and j* for some j. In order to maintain (8.1) in the symplectic case (where we still consider A to be the first n columns of C), one of the two interchanged columns must be multiplied by −1. Note that this does not change L_U. Second, do like column permutations on both A and B. Finally, reverse the order of the columns of B. Then (8.1) and (8.2) remain valid, provided we now interpret X · Y to mean the dot product of X with the reverse of Y.
This will keep our notation simpler. We must show that LU contains a maximum
member with respect to the induced Gale ordering. Using the usual row operations,
put C in echelon form so that each row has a leading 1, the leading 1 to the right
of the leading one in the preceeding row, and zeros lling each column containing
a leading 1. Let X 0 be the subset of [n] [ [n] corresponding to the columns with
leading ones. It is now su-cient to show that (1) X 0 is admissible, and (2) X X 0
for any X such that the determinant of the k k minor CX of C corresponding to X
is non-zero.
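The reduction step above is ordinary Gauss-Jordan elimination; a sketch (numerical, with a tolerance in place of exact field arithmetic) that returns the pivot set X_0:

```python
import numpy as np

def pivot_columns(C, tol=1e-9):
    # Gauss-Jordan reduction; returns the list X0 of columns that end up
    # carrying the leading ones of the echelon form of C
    M = C.astype(float).copy()
    rows, cols = M.shape
    pivots = []
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        p = r + int(np.argmax(np.abs(M[r:, c])))  # partial pivoting
        if abs(M[p, c]) < tol:
            continue  # no leading one in this column
        M[[r, p]] = M[[p, r]]
        M[r] /= M[r, c]
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]  # clear the rest of column c
        pivots.append(c)
        r += 1
    return pivots
```

In the matroid setting the columns are assumed already arranged in descending order with respect to the admissible ordering, so the pivot set is the Gale-greedy candidate for the maximum member.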
Concerning (1), assume that X_0 is not admissible. Then both j and j* appear in X_0; let i and l be the rows for which there is a leading 1 at positions j and j*, contradicting equality (8.1).
Concerning (2), if it is not the case that X ≤ X_0, then, for some j, the first j columns of C_X have nonzero entries in fewer than j rows. But such a matrix C_X has determinant 0. The exception to this argument is in the orthogonal case when X and X_0 contain the incomparable elements n and n* in the same position when the elements of X and X_0 are arranged in descending order. However, for this to happen, there must be a row of C, say the j-th, with a nonzero entry in the n column; then by equality (8.2) the corresponding entry in the n* column vanishes. In either case the first j columns of C_X or C_{X_0} again have nonzero entries in fewer than j rows, implying that the corresponding determinant is 0.
--R
The lattice of flats and its underlying flag matroid polytope
Coxeter groups and matroids
An adjacency criterion for Coxeter matroids
Some characterizations of Coxeter groups
On a general de
Combinatorial geometries and torus strata on homogeneous compact manifolds
Geometry of Coxeter Groups
A geometric characterization of Coxeter matroids
The greedy algorithm and Coxeter matroids
Theory of Matroids
Nitsche type mortaring for some elliptic problem with corner singularities

Abstract. The paper deals with Nitsche type mortaring as a finite element method (FEM) for treating non-matching meshes of triangles at the interface of some domain decomposition. The approach is applied to the Poisson equation with Dirichlet boundary conditions (as a model problem) under the aspect that the interface passes re-entrant corners of the domain. For such problems and non-matching meshes with and without local refinement near the re-entrant corner, some properties of the finite element scheme and error estimates are proved. They show that appropriate mesh grading yields convergence rates as known for the classical FEM in presence of regular solutions. Finally, a numerical example illustrates the approach and the theoretical results.

1 Introduction
For the efficient numerical treatment of boundary value problems (BVPs), domain decomposition methods are widely used. They allow to work in parallel: generating the mesh in subdomains, calculating the corresponding parts of the stiffness matrix and of the right-hand side, and solving the system of finite element equations.
There is a particular interest in triangulations which do not match at the interface of the subdomains. Such non-matching meshes arise, for example, if the meshes in different subdomains are generated independently of each other, or if a local mesh with some structure is to be coupled with a global unstructured mesh, or if an adaptive remeshing in some subdomain is of primary interest. This is often caused by extremely different data (material properties or right-hand sides) of the BVP in different subdomains or by a complicated geometry of the domain, which have their response in a solution with singular or anisotropic behaviour. Moreover, non-matching meshes are also applied if different discretization approaches are used in different subdomains.
There are several approaches to work with non-matching meshes. The task to satisfy some continuity requirements on the interface (e.g. of the solution and its conormal derivative) can be done by iterative procedures (e.g. Schwarz's method) or by direct methods like the Lagrange multiplier technique.
There are many papers on the Lagrange multiplier mortar technique, see e.g. [5, 6, 9, 25] and the literature quoted in these papers. There, one has new unknowns (the Lagrange multipliers) and the stability of the problem has to be ensured by satisfying some inf-sup condition (for the actual mortar method) or by stabilization techniques.
Another approach, which is of particular interest here, is related to the classical Nitsche method [16] of treating essential boundary conditions. This approach has been worked out more generally in [23, 20] and transferred to interior continuity conditions by Stenberg [21] (Nitsche type mortaring), cf. also [1]. As shown in [4] and [10], the Nitsche type mortaring can be interpreted as a stabilized variant of the mortar method based on a saddle point problem.
Compared with the classical mortar method, the Nitsche type mortaring has several advantages. Thus, the saddle point problem, the inf-sup condition as well as the calculation of additional variables (the Lagrange multipliers) are circumvented. The method employs only a single variational equation which is, compared with the usual equations (without any mortaring), slightly modified by an interface term. This allows to apply existing software tools with slight modifications. Moreover, the Nitsche type method yields symmetric and positive definite discretization matrices in correspondence to symmetry and ellipticity of the operator of the BVP. Although the approach involves a stabilizing parameter γ, it is not a penalty method, since it is consistent with the solution of the BVP. The parameter γ can be estimated easily (see below). The mortar subdivision of the chosen interface can be done in a more general way than known for the classical mortar method. This can be advantageous for solving the system of finite element equations by iterative domain decomposition methods.
Basic aspects of the Nitsche type mortaring and error estimates for regular solutions u ∈ H²(Ω) on quasi-uniform meshes are published in [21, 4]. Compared with these papers, we extend the application of the Nitsche type mortaring to problems with non-regular solutions and to meshes being locally refined and not quasi-uniform.
We consider the model problem of the Poisson equation with Dirichlet data in the presence of re-entrant corners and admit that the interface with non-matching meshes passes the vertex of such corners. For the appropriate treatment of corner singularities we employ local mesh refinement around the corner by mesh grading in correspondence with the degree of the singularity. Therefore, the Nitsche type mortaring is to be analyzed on more general triangulations. For meshes with and without grading, basic inequalities, stability and boundedness of the bilinear form as well as error estimates in a discrete H¹-norm are proved. The rate of convergence in L₂ is twice that in the H¹-norm. For an appropriate choice of some mesh grading parameter, the rate of convergence is proved to be the same as for regular solutions on quasi-uniform meshes. Finally, some numerical experiments are given which confirm the rates of convergence derived.
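For orientation, mesh grading around a corner is commonly realized by a power law for the radii of the element layers. The sketch below uses the standard grading rule r_i = R(i/n)^{1/μ} with a grading parameter μ ∈ (0, 1]; the rule and the parameter name are the usual choice for corner singularities, not quoted from this paper:

```python
def graded_radii(R, n, mu):
    # radii of n mesh layers within distance R of the corner:
    # r_i = R * (i/n)**(1/mu); mu = 1 gives uniform spacing, while
    # smaller mu concentrates the layers near the corner (r = 0)
    return [R * (i / n) ** (1.0 / mu) for i in range(n + 1)]
```

The parameter μ is typically chosen in relation to the singularity exponent λ of the corner, so that the interpolation error on the graded mesh matches the optimal rate of the regular case.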
2 Analytical preliminaries
In the following, H^s(X), s real (X some domain), denote the usual Sobolev spaces, with the corresponding norms and the usual abbreviations. Constants C or c occurring in inequalities are generic constants.
For simplicity we consider the Poisson equation with homogeneous Dirichlet boundary conditions as a model problem:
−Δu = f in Ω, u = 0 on ∂Ω.   (2.1)
Here, Ω is a bounded polygonal domain in R², with Lipschitz boundary ∂Ω consisting of straight line segments. Suppose further that f ∈ L₂(Ω)
holds. The variational equation of
(2.1) is given as follows: Find u ∈ V₀ := {v ∈ H¹(Ω) : v = 0 on ∂Ω} such that
a(u, v) = f(v) for all v ∈ V₀,   (2.2)
with a(u, v) := ∫_Ω ∇u · ∇v dx, f(v) := ∫_Ω f v dx.
We now decompose the domain Ω into non-overlapping subdomains. For simplicity of notation we consider two subdomains Ω₁ and Ω₂ with interface Γ, where Ω̄ = Ω̄₁ ∪ Ω̄₂ holds (X̄ denotes the closure of the set X). We assume that the boundaries ∂Ω_i of Ω_i are also Lipschitz-continuous and formed by open straight line segments. We distinguish two important types of interfaces Γ:
case I1: the intersection Γ̄ ∩ ∂Ω consists of two points being the endpoints of Γ, and at least one point is the vertex of a re-entrant corner, like in Figure 1,
case I2: Γ does not touch the boundary ∂Ω, like in Figure 2.
Figure 1: the domain Ω with interface Γ for case I1.
Figure 2: the domain Ω with interface Γ for case I2.
For the presentation of the method and error estimates we need the degree of regularity of the solution u. Clearly, the functionals a(·, ·) and f(·) satisfy the standard assumptions of the Lax-Milgram theorem and we have the existence of a solution u ∈ V₀ of problem (2.2) as well as the a priori estimate ‖u‖_{1,Ω} ≤ C ‖f‖_{0,Ω}.
Furthermore, the regularity theory of (2.2) yields u ∈ H²(Ω) and ‖u‖_{2,Ω} ≤ C ‖f‖_{0,Ω} if Ω is convex. If ∂Ω has re-entrant corners, u can be represented by
u = Σ_j a_j η_j r_j^{λ_j} sin(λ_j φ_j) + w,   (2.3)
with a regular remainder w ∈ H²(Ω). Here, (r_j, φ_j) denote the local polar coordinates of a point P with respect to the vertex P_j ∈ ∂Ω; r_{0j} is the radius of some circle neighborhood with center at P_j. Moreover, we have 1/2 < λ_j < 1, a_j is some constant, and η_j is a locally acting (smooth) cut-off function around the vertex P_j.
The solution u satisfies the relations
‖u‖ ≤ C ‖f‖_{0,Ω}   (2.4)
and, owing to (2.3), also u ∈ H^{1+λ−ε}(Ω) with λ := min_j λ_j, for any ε > 0 sufficiently small. For these results, see e.g. [13, 7].
In the context of dividing Ω into subdomains Ω₁, Ω₂, we introduce the restrictions v_i := v|_{Ω_i} of some function v on Ω as well as the vectorized form of v by v = (v₁, v₂), i.e. we have v_i = v|_{Ω_i} for i = 1, 2. It should be noted that we shall use here the same symbol v for denoting the function on Ω as well as the vector (v₁, v₂). This will not lead to confusion, since the meaning will be clear from the context. The one-to-one correspondence between the "field function" v and the "vector function" v is given by v_i = v|_{Ω_i}, i = 1, 2. Moreover, v|_Γ is defined by the trace. We shall keep the notation also in cases where the traces v₁|_Γ and v₂|_Γ on the interface are different (e.g. for interpolants on Ω₁ and Ω₂).
Using this notation, it is obvious that the solution of the BVP (2.1) is equivalent to the solution of the following interface problem: Find u = (u₁, u₂) such that
−Δu_i = f in Ω_i, u_i = 0 on ∂Ω_i ∩ ∂Ω (i = 1, 2), u₁ = u₂ on Γ, ∂u₁/∂n₁ = −∂u₂/∂n₂ on Γ   (2.5)
are satisfied, where n_i denotes the outward normal to ∂Ω_i.
Introducing the spaces V_i given by
case I1: V_i := {v_i ∈ H¹(Ω_i) : v_i = 0 on ∂Ω_i ∩ ∂Ω}, i = 1, 2,
case I2: V_i defined correspondingly,   (2.6)
and the space V := V₁ × V₂, the BVP (2.5) can be formulated in a weak form (see e.g. [2]). Clearly, we have u = (u₁, u₂) ∈ V as well as u_i ∈ H^{1+λ−ε}(Ω_i). The continuity of the solution u and of its normal derivative ∂u_i/∂n_i on Γ is to be required in the sense of H^{1/2}(Γ) and H^{−1/2}(Γ) (the dual space of H^{1/2}(Γ)), respectively.
H^{1/2}(∂Ω_i) is defined by the range of V_i under the trace operator and is provided with the quotient norm, see e.g. [9, 13]. So we use H^{1/2}(Γ) defined via H^{1/2}(∂Ω_i) both in case I1 and in case I2, which means that we identify the corresponding spaces. By ⟨·, ·⟩_{∂Ω_i} we shall denote the duality pairing of H^{−1/2}(∂Ω_i) and H^{1/2}(∂Ω_i).
3 Non-matching mesh finite element discretization
We cover Ω_i (i = 1, 2) by a triangulation T_h^i consisting of triangles. The triangulations T_h^1 and T_h^2 are independent of each other. Moreover, compatibility of the nodes of T_h^1 and T_h^2 on Γ is not required, i.e., non-matching meshes on Γ are admitted. Let h denote the mesh parameter of these triangulations, with 0 < h ≤ h₀ and sufficiently small h₀; take e.g. h := max_T h_T. The traces of T_h^1 and T_h^2 on Γ define corresponding triangulations of Γ.
Assumption 3.1
(i) Ω̄_i = ⋃_{T ∈ T_h^i} T̄ holds for i = 1, 2.
(ii) Two arbitrary triangles T, T′ ∈ T_h^i are either disjoint or have a common vertex, or a common edge.
(iii) The mesh T_h^i in Ω_i is shape regular, i.e., for the diameter h_T of T and the diameter ρ_T of the largest inscribed sphere of T, we have
h_T / ρ_T ≤ C for any T ∈ T_h^i,   (3.2)
where C is independent of T and h.
Clearly, relation (3.2) implies that the angle at any vertex and the length h_F of any side F of the triangle T satisfy uniform inequalities, with constants being independent of h and T. Owing to (3.2), the triangulations do not have to be quasi-uniform in general.
For i = 1, 2 and according to V_i from (2.6), we introduce finite element spaces V_h^i of functions on Ω_i by
V_h^i := {v_{ih} ∈ V_i ∩ C(Ω̄_i) : v_{ih}|_T ∈ P_k(T) for all T ∈ T_h^i, v_{ih} = 0 on ∂Ω_i ∩ ∂Ω},
where P_k(T) denotes the set of all polynomials on T with degree ≤ k. We do not employ different polynomial degrees on Ω₁ and Ω₂, which could also be done. The finite element space V_h of vectorized functions v_h with components v_{ih} on Ω_i is given by V_h := V_h^1 × V_h^2. In general, v_h ∈ V_h is not continuous across Γ.
Consider further some triangulation E_h of Γ by intervals E, with Γ̄ = ⋃_{E ∈ E_h} Ē, where h_E denotes the diameter of E. Furthermore, let γ be some positive constant (to be specified subsequently) and α₁, α₂ real parameters with α₁ + α₂ = 1.
Following [21] we now introduce the bilinear form B_h(·, ·) on V_h × V_h and the linear form F_h(·) on V_h.   (3.6)
(Note that in [4] a similar bilinear form with α₁ = α₂ = 1/2 was employed.) The finite element approximation u_h of u on the non-matching triangulation T_h := T_h^1 ∪ T_h^2 is now defined by u_h ∈ V_h satisfying the equation
B_h(u_h, v_h) = F_h(v_h) for all v_h ∈ V_h.   (3.7)
Here ⟨·, ·⟩ denotes the L₂(Γ)-scalar product where it is defined, and the H^{1/2}(Γ)-duality pairing otherwise. Owing to u ∈ H^{3/2+ε}(Ω), the trace theorem yields ∂u_i/∂n_i ∈ L₂(Γ); this holds also for v_h ∈ V_h. This will be used subsequently for evaluating ⟨·, ·⟩ by the L₂(Γ)-scalar product. A natural choice for the triangulation E_h of Γ is the trace of T_h^1 or of T_h^2 on Γ,   (3.8)
cf. Figure 3.
Figure 3: non-matching meshes and a natural choice of E_h on Γ.
We require the asymptotic behaviour of the triangulations T_h^1, T_h^2 and of E_h to be consistent on Γ in the sense of the following assumption.
Assumption 3.2 For E ∈ E_h and any T ∈ T_h^i (i = 1, 2) touching Γ at E, there are positive constants C₁ and C₂ independent of h_T, h_E and h (0 < h ≤ h₀) such that the condition
C₁ ≤ h_T / h_E ≤ C₂   (3.9)
is satisfied.
Relation (3.9) guarantees that the diameter h_T of the triangle T touching the interface Γ at E is asymptotically equivalent to the diameter h_E of the segment E, i.e., the equivalence of h_T, h_E is required only locally.
4 Properties of the discretization
First we show that the solution u of the BVP (2.1) satisfies the variational equation (3.7), i.e., u is consistent with the approach (3.7).
Theorem 4.1 Let u be the solution of the BVP (2.1). Then u solves (3.7), i.e., we have
B_h(u, v_h) = F_h(v_h) for all v_h ∈ V_h.   (4.1)
Proof. Insert the solution u into B_h(·, v_h). Owing to the properties of u, B_h(u, v_h) is well defined and, since u₁ = u₂ and ∂u₁/∂n₁ = −∂u₂/∂n₂ on Γ hold, cf. (2.5), the interface terms simplify. Taking into account (2.4) and using Green's formula on the domains Ω_i, the relations B_h(u, v_h) = F_h(v_h) are derived for any v_h ∈ V_h. This proves the assertion.
Note that due to (4.1) and (3.7) we also have the B_h-orthogonality of the error u − u_h on V_h:
B_h(u − u_h, v_h) = 0 for all v_h ∈ V_h.   (4.2)
For further results on stability and convergence of the method, the following "weighted discrete trace theorem" will be useful, which describes also an inverse inequality.
Lemma 4.2 Let Assumptions 3.1 and 3.2 be satisfied. Then, for any v_h ∈ V_h the inequality
Σ_{E ∈ E_h} h_E ‖∂v_{ih}/∂n‖²_{0,E} ≤ C_I Σ_F ‖∇v_{ih}‖²_{0,T_F}, i = 1, 2,   (4.3)
holds, where F runs over the faces of triangles T_F ∈ T_h^i touching Γ by F; the constant C_I does not depend on h, h_T, h_E.
Note that extending the norms on the right-hand side of (4.3) to the whole of Ω_i implies
Σ_{E ∈ E_h} h_E ‖∂v_{ih}/∂n‖²_{0,E} ≤ C_I ‖∇v_{ih}‖²_{0,Ω_i}.   (4.4)
For inequalities on quasi-uniform meshes related to (4.4) see [23, 21, 4].
Proof. For v_h ∈ V_h the normal derivative ∂v_{ih}/∂n is piecewise polynomial on Γ. Let h_F denote the length of the side F belonging to the triangle T_F. Since the shape regularity of T is given, the quantities h_F and h_T are asymptotically equivalent. Owing to this and to an inequality   (4.5)
which is derived by means of the trace theorem on T_F and of the inverse inequality, we obtain the assertion, where T_F ∈ T_h^i has the edge F. The constants involved do not depend on h and are uniform in T. Inequality (4.5) combined with the previous inequalities yields (4.3).
The constant C_I in the inequalities (4.3) and (4.4) can be estimated easily if special assumptions on E_h and on the polynomial degree k are made. For example, let us choose E_h according to (3.8) and k = 1. Then, on the triangle T the derivatives of v_{ih} are constants which can be calculated explicitly, together with their L₂-norms on E and on T. Thus, we get
h_E ‖∂v_{ih}/∂n‖²_{0,E} ≤ (2 h_E / H_E) ‖∇v_{ih}‖²_{0,T},   (4.6)
where H_E denotes the height of T over the side E and h_E the length of E. Taking the sum over E ∈ E_h for all inequalities (4.6), we obtain the value of C_I. Thus, for equilateral triangles and isosceles right triangles (see the mesh on the left-hand sides of Figures 6, 7) near Γ, we get C_I = 4/√3 and C_I = 2, respectively.
For deriving the V_h-ellipticity and V_h-boundedness of the discrete bilinear form B_h(., .) from (3.6), we introduce the discrete norm || . ||_{1,h}, cf. [21] and [9, 4] (uniform weights). Then we can prove the following theorem.
Theorem 4.3 Let Assumptions 3.1 and 3.2 for T_h^i be satisfied, and let the constant γ in (3.6) be chosen independently of h and such that γ > C_I holds, with C_I from (4.3). Then
  B_h(v_h, v_h) ≥ μ_1 || v_h ||²_{1,h} for all v_h ∈ V_h (4.8)
holds, with a constant μ_1 > 0 independent of h.
Proof. For γ from (3.6) we have the identity obtained by expanding B_h(v_h, v_h). Using Cauchy's inequality and Young's inequality (2ab ≤ εa² + ε⁻¹b² for ε > 0) on the interface terms, and utilizing inequality (4.3), we obtain (4.8) with a positive constant μ_1 (a minimum of two ε-dependent quantities), provided ε is chosen according to C_I < ε < γ.
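For orientation, the elementary step can be written out; the splitting below is a generic sketch (a stands for the volume contribution, b for the weighted interface flux; the precise terms of B_h are not reproduced here):

```latex
2ab \;\le\; \varepsilon a^{2} + \varepsilon^{-1} b^{2} \qquad (\varepsilon > 0),
\qquad\text{hence}\qquad
ab \;\le\; \tfrac{\varepsilon}{2}\,a^{2} + \tfrac{1}{2\varepsilon}\,b^{2}.
```

Absorbing the interface contribution by means of (4.3) leaves a positive multiple of the norm precisely when ε can be placed strictly between C_I and γ, which explains the condition γ > C_I.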
Besides the V_h-ellipticity of B_h(., .) we also prove the V_h-boundedness.
Theorem 4.4 Let Assumptions 3.1 and 3.2 be satisfied. Then there is a constant μ_2 > 0 such that
  |B_h(v, w)| ≤ μ_2 || v ||_{1,h} || w ||_{1,h} for all v, w ∈ V_h. (4.9)
Proof. We apply Cauchy's inequality several times (also with the distributed weights h_E, h_E^{-1}) together with inequality (4.3) and get relation (4.9) with a constant μ_2 depending on γ and C_I.
5 Error estimates and convergence
Let u be the solution of (2.1) and u_h from (3.7) its finite element approximation. We shall study the error u − u_h in the norm || . ||_{1,h} given in (4.7). For functions v of appropriate regularity we introduce the mesh-dependent norm ||| . |||_{1,h}.
First we bound || u − u_h ||_{1,h} by the norm ||| u − I_h u |||_{1,h} of the interpolation error u − I_h u, where I_h u := (I_h u_1, I_h u_2) ∈ V_h, and I_h u_i denotes the usual Lagrange interpolant of u_i in the space V_h^i.
Lemma 5.1 Let Assumptions 3.1 and 3.2 be satisfied. For u, u_h from (2.1), (3.7), respectively, and γ > C_I, the following estimate holds:
  || u − u_h ||_{1,h} ≤ c_1 ||| u − I_h u |||_{1,h}. (5.2)
Proof. Obviously, I_h u ∈ V_h holds, and the triangle inequality yields (5.3). Owing to I_h u ∈ V_h and to the V_h-ellipticity of B_h(., .), we have (5.4). In relation (5.4) we utilize (4.2) and get (5.5).
For abbreviation we use here w := I_h u − u and v_h := I_h u − u_h. Clearly, v_h ∈ V_h and Δw_i ∈ L_2(Ω_i), because I_h u_i denotes the interpolant of u_i in V_h^i and the u_i possess only limited regularity. Unfortunately, w ∉ V_h holds.
We now apply the same inequalities as used in the proof of Theorem 4.4, with the modification that inequality (4.3) is employed only with respect to the function v_h. This leads to an estimate of B_h(w, v_h), which gives together with (5.5) an inequality for || v_h ||_{1,h}. This inequality combined with (5.3) and with the obvious estimate || I_h u − u ||_{1,h} ≤ c ||| I_h u − u |||_{1,h} confirms assertion (5.2). The positive constant c_1 depends on γ and C_I.
An estimate of the error ||| u − u_h |||_{1,h} for regular solutions u is given in [20] and in [4] by citation of results contained in [23]. Nevertheless, since we consider a more general case, and since we need a great part of the proof for regular solutions also for singular solutions, the following theorem is proved.
Theorem 5.2 Let u ∈ H^l(Ω_1) × H^l(Ω_2) (l ≥ 2) be the solution of (2.1) and u_h its finite element approximation according to (3.7), with γ > C_I. Furthermore, let the mesh from Assumptions 3.1, 3.2 be quasi-uniform, i.e. max_{T ∈ T_h} h_T / min_{T ∈ T_h} h_T ≤ C. Then the following error estimate holds:
  || u − u_h ||_{1,h} ≤ c h^{l−1} (|u_1|_{l,Ω_1} + |u_2|_{l,Ω_2}), (5.6)
with k ≥ l − 1 being the polynomial degree in V_h^i.
Proof. We start from inequality (5.2), which bounds || u − u_h ||_{1,h} by the interpolation error, and in the following tacitly take into account the assumptions on the mesh.
Note that the traces on Γ of the interpolants I_h u_i of u_i in V_h^i do not coincide, in general. First we observe that the weighted squared norms h_E^{-1} || . ||²_{0,E} can be rewritten such that the interpolation estimates involve the edge F of the triangle T_F ∈ T_h^i, for i = 1, 2; this yields the relations (5.7) and (5.8) for I_h u_i − u_i and ∇(I_h u_i − u_i), respectively.
Moreover, we apply the refined trace theorem
  || v ||²_{0,F} ≤ c ( h_F^{-1} || v ||²_{0,T} + h_F |v|²_{1,T} ) for v ∈ H^1(T), (5.9)
which is proved in [24], cf. also [23]. Replacing v by I_h u_i − u_i and by ∂(I_h u_i − u_i)/∂x_s, using (5.9) and some simple estimates, we get
  || I_h u_i − u_i ||²_{0,F} ≤ c ( h_F^{-1} || I_h u_i − u_i ||²_{0,T} + h_F |I_h u_i − u_i|²_{1,T} ), (5.10)
  || ∇(I_h u_i − u_i) ||²_{0,F} ≤ c ( h_F^{-1} |I_h u_i − u_i|²_{1,T} + h_F |I_h u_i − u_i|²_{2,T} ). (5.11)
Taking the well-known interpolation error estimate on triangles T,
  |I_h u_i − u_i|_{m,T} ≤ c h^{l−m} |u_i|_{l,T} (m = 0, 1, 2), (5.12)
see e.g. [8, 11], we derive from the inequalities (5.10) and (5.11) the estimates
  || I_h u_i − u_i ||²_{0,F} ≤ c h^{2l−1} |u_i|²_{l,T},  || ∇(I_h u_i − u_i) ||²_{0,F} ≤ c h^{2l−3} |u_i|²_{l,T}.
Using these estimates and (5.7), (5.8), we realize that
  Σ_{E ∈ E_h} h_E^{-1} || I_h u_i − u_i ||²_{0,E} ≤ c h^{2l−2} |u_i|²_{l,Ω_i} (5.13)
holds. For the interpolation error ∇(I_h u_i − u_i) on Ω_i, the estimate
  || ∇(I_h u_i − u_i) ||²_{0,Ω_i} ≤ c h^{2l−2} |u_i|²_{l,Ω_i} (5.14)
obviously follows from (5.12). Clearly, (5.13) and (5.14) lead via (5.2) to (5.6).
6 Treatment of corner singularities
We now study the finite element approximation with non-matching meshes for the case that Γ has endpoints at vertices of re-entrant corners (case I1). Since the influence region of corner singularities is a local one (around the vertex P_0), it suffices to consider one corner. For basic approaches to treating corner singularities by finite element methods see e.g. [3, 7, 13, 17, 19, 22]. For simplicity, we study solutions u ∉ H^2(Ω) in correspondence with continuous piecewise linear elements, i.e. V_h from (3.3). We shall consider the error u − u_h on quasi-uniform meshes as well as on meshes with appropriate local refinement at the corner.
Let (x_0, y_0) be the coordinates of the vertex P_0 of the corner, and (r, φ) the local polar coordinates with center at P_0, cf. Figure 4.

Figure 4: Local polar coordinates (r, φ) of a point P(x, y) at the corner vertex P_0.

Around P_0 we introduce some circular sector G, with radius r_0 > 0 and angle φ_0; here
  G := {(r, φ) : 0 < r < r_0, 0 < φ < φ_0}. (6.1)
Let ∂G denote the boundary of G. For defining a mesh with grading, we employ the real grading parameter μ (0 < μ ≤ 1), the grading function R_i, a real constant b > 0, and the step size h_i for the mesh associated with the layers [R_{i−1}, R_i], i = 1, ..., n. (6.2)
Here n := n(h) denotes an integer of the order h^{−1}, n := ⌊ν h^{−1}⌋ for some real ν > 0 (⌊·⌋ the integer part). We shall choose the numbers ν, b > 0 such that R_n ≤ r_0 holds, i.e., the mesh grading is located within G from (6.1).
Lemma 6.1 For h and μ, the relations (6.3) hold.
We skip the proof of Lemma 6.1 since it is comparatively simple.
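To make the grading concrete: a common choice (our assumption here, in the spirit of the graded meshes of [3, 17, 19]; the paper's own formula for R_i is not reproduced above) is R_i = b (i/n)^{1/μ}, which a few lines of code illustrate:

```python
def graded_radii(n, b=1.0, mu=0.5):
    """Radii of graded mesh layers around a corner: R_i = b * (i/n)**(1/mu).

    mu = 1 gives quasi-uniform spacing; mu < 1 concentrates the layers
    near the corner (R_0 = 0).
    """
    return [b * (i / n) ** (1.0 / mu) for i in range(n + 1)]

def step_sizes(radii):
    """Local step sizes h_i = R_i - R_{i-1} for each layer."""
    return [r1 - r0 for r0, r1 in zip(radii, radii[1:])]

radii = graded_radii(n=8, b=1.0, mu=0.5)
h = step_sizes(radii)
# Layers shrink towards the corner: the innermost step is the smallest.
assert all(h[i] < h[i + 1] for i in range(len(h) - 1))
```

For μ = 1/2 the radii grow quadratically in i, so the steps h_i increase linearly away from the corner, matching the qualitative picture described in the text.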
Using the step size h_i, we form in the neighbourhood of the vertex P_0 of the corner a mesh with grading, and for the remaining domain we employ a mesh which is quasi-uniform. The triangulation T_h is now characterized by the mesh size h and the grading parameter μ, with 0 < h ≤ h_0 and 0 < μ ≤ 1. We summarize the properties of T_h in the following assumption.
Assumption 6.2 The triangulation T_h satisfies Assumption 3.1 and Assumption 3.2 and is provided with a grading around the vertex P_0 of the corner such that h_T := diam T depends on the distance R_T of T from P_0, R_T := dist(T, P_0), in the following way (6.4): h_T is of the order h^{1/μ} for R_T = 0, of the order h R_T^{1−μ} for 0 < R_T ≤ R_g, and of the order h for R_T > R_g, with constants which are fixed and independent of h.
Here, R_g is the radius of the sector with mesh grading, and we can assume R_g ≤ r_0 (w.l.o.g.). Outside this sector the mesh is quasi-uniform, i.e. a bound of the form max h_T / min h_T ≤ C holds there. In [3, 17, 19] related types of mesh grading are described. In [15] a mesh generator is given which automatically generates a mesh of type (6.4).
For the error analysis we introduce several subsets of the triangulation T_h near the vertex P_0 of the re-entrant corner, viz. the set C_0h of triangles within distance R_n of P_0, with R_n from (6.2). The set C_0h is now decomposed into layers (of triangles) D_jh, j = 0, 1, ..., such that each T ∈ D_jh is associated with the layer [R_{j−1}, R_j].
According to R_n ≤ r_0, the layers D_jh are located in G, G from (6.1). Owing to Assumption 6.2 (cf. also Lemma 6.1), the asymptotic behaviour of h_T is determined by the relations (6.5) (given for the case of one corner), with j as well as n taken from (6.2). Note that the number of all triangles T ∈ T_h (0 < μ ≤ 1) and of nodes of the triangulation is of the order O(h^{−2}). The number n_j of all triangles T ∈ D_jh is bounded by C j (j = 1, ..., n), with C independent of h, cf. [14].
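The counting argument can be checked numerically under a model grading (our assumption: layer radii proportional to (j/n)^{1/μ}, with the count per layer taken as layer circumference over local layer width, constants dropped):

```python
def layer_triangle_counts(n, mu=0.5):
    """Rough triangle count per layer: circumference ~ R_j divided by the
    local layer width R_j - R_{j-1} (constant factors dropped)."""
    R = [(i / n) ** (1.0 / mu) for i in range(n + 1)]
    return [R[j] / (R[j] - R[j - 1]) for j in range(1, n + 1)]

n = 64
counts = layer_triangle_counts(n)
# Each layer holds at most C*j triangles (here C = 1 for mu = 1/2) ...
assert all(c <= j + 1 for j, c in enumerate(counts))
# ... so the total number of triangles is of the order n**2 ~ h**(-2).
assert n ** 2 / 4 < sum(counts) < n ** 2
```

With μ = 1/2 one gets counts j²/(2j − 1) ≈ j/2, so summing over the n ≈ h⁻¹ layers reproduces the O(h⁻²) total stated above.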
First we investigate the interpolation error of a singularity function s from (2.3) in the class of polynomials of degree 1. Employ the restrictions s_i := s|_{Ω_i} and take always into account that s has the form a r^α sin(αφ) near P_0.
Lemma 6.3 Let s = a r^α sin(αφ) (0 < α < 1) be the singularity function with respect to the corner at vertex P_0. Further, let T_h be the triangulation of Ω with mesh grading within G according to Assumption 6.2 (cf. (6.2)-(6.5)). Then, the interpolation error s_i − I_h s_i in the seminorm |.|_{1,Ω_i} can be bounded as follows:
  |s_i − I_h s_i|_{1,Ω_i} ≤ c Φ(h, μ), (6.6)
where Φ(h, μ) is given by
  Φ(h, μ) = h for μ < α,  h |ln h|^{1/2} for μ = α,  h^{α/μ} for α < μ ≤ 1. (6.7)
Proof. According to the mesh layers D_jh (j ≤ n), the norms of the global interpolation error s_i − I_h s_i are represented by the local interpolation errors s_i − I_T s_i (I_T the local P_1-Lagrange interpolation operator), summed over T ∈ D_0h^i and over T ∈ D_jh^i for j ≥ 1.
(i) Case T ∈ D_0h^i. First, we consider triangles T touching the vertex P_0 and employ the estimate
  |s_i − I_T s_i|_{1,T} ≤ |s_i|_{1,T} + |I_T s_i|_{1,T}. (6.8)
Using the explicit representation of s_i and I_T s_i, we calculate the norms on the right-hand side of (6.8) and get the bound (6.9).
(ii) Case T ∈ D_jh^i, j ≥ 1. We now consider triangles T ∈ D_jh^i which do not touch the vertex P_0 (the center of the singularity), i.e. T ∉ D_0h. In this case, s ∈ H^2(T) holds owing to R_T > 0. Hence, the well-known interpolation error estimate
  |s_i − I_T s_i|_{1,T} ≤ c h_T |s_i|_{2,T} (6.10)
can be applied, where c is independent of the triangle T. The norm |s_i|_{2,T} is estimated easily by means of the second derivatives of a r^α sin(αφ), which behave like r^{α−2} for r > 0; this yields (6.11).
Taking into account the relations between h, h_T, R_T, j and μ from Assumption 6.2, cf. also (6.5), we easily find bounds of the right-hand side in (6.11). This leads together with (6.10) to estimates of |s_i − I_T s_i|²_{1,T} for j ≥ 1. Since the number of triangles in the layer D_jh^i grows not faster than with j, we get by summation of the error contributions of the triangles T ∈ C_0h \ D_0h^i the estimate (6.12).
Using monotonicity arguments and the estimation of sums by related integrals, it is not hard to derive the set of inequalities (6.13). Some simple estimates of the right-hand side of (6.12) allow us to apply (6.13) and n ≤ c h^{−1} for getting the inequality (6.14), with Φ(h, μ) given at (6.7).
Finally, combining the estimates (6.8), (6.9) from case (i) and (6.14) from case (ii), we easily confirm (6.6).
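The effect the lemma describes can be reproduced in a one-dimensional analogue (our own illustration, not the lemma's two-dimensional setting): piecewise linear interpolation of the model singularity x^α with α = 2/3 on [0, 1], comparing uniform nodes with graded nodes x_i = (i/n)^{1/μ} for μ = α/2:

```python
import math

def interp_error(nodes, f, samples_per_cell=50):
    """Max |f - I_h f| for piecewise linear interpolation on the given nodes."""
    err = 0.0
    for a, b in zip(nodes, nodes[1:]):
        fa, fb = f(a), f(b)
        for k in range(samples_per_cell + 1):
            x = a + (b - a) * k / samples_per_cell
            ih = fa + (fb - fa) * (x - a) / (b - a)  # linear interpolant
            err = max(err, abs(f(x) - ih))
    return err

alpha = 2.0 / 3.0            # singular exponent of the re-entrant corner
f = lambda x: x ** alpha     # 1D model of the singularity r**alpha
mu = alpha / 2.0             # grading restoring the optimal rate here

def rates(errs):
    return [math.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])]

levels = (16, 32, 64)
uniform = [interp_error([i / n for i in range(n + 1)], f) for n in levels]
graded = [interp_error([(i / n) ** (1 / mu) for i in range(n + 1)], f)
          for n in levels]

# Uniform meshes only give the reduced rate ~alpha; grading recovers ~2.
assert rates(uniform)[-1] < 1.0 < rates(graded)[-1]
```

The observed orders come out near 2/3 on the uniform mesh and near 2 on the graded one, mirroring the role of Φ(h, μ) in (6.6), (6.7).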
We now study the interpolation error s_i − I_h s_i and its first order derivatives in the trace norms.
Lemma 6.4 Under the assumptions of Lemma 6.3 and with Φ(h, μ) from (6.7), the interpolation error estimates (6.15) hold for the singularity function s, i = 1, 2.
Proof. Clearly, due to the assumption on E_h, we have for the trace norms on Γ the inequalities (6.16). Consider now faces F of triangles touching Γ and the local interpolant I_T s_i.
(i) Case T ∈ D_0h^i. Here we use a similar approach as at (6.8) and get, by direct evaluation of the norms of s_i − I_T s_i and ∇(s_i − I_T s_i) on F, the estimates (6.17)-(6.19).
(ii) Case T ∈ D_jh^i, j ≥ 1. For the remaining faces F and adjacent triangles T which do not touch the vertex P_0 of the corner, s ∈ H^2(T) holds. Therefore, inequalities (5.10), (5.11) can be applied. We insert the well-known estimates
  |s_i − I_T s_i|_{l,T} ≤ c h_T^{2−l} |s_i|_{2,T} (l = 0, 1)
for any triangle T with face F and obtain bounds in terms of h_T and |s_i|_{2,T}. Calculating and estimating |s_i|_{2,T} and summing over all triangles T ∈ C_0h \ D_0h touching Γ near the singularity yields, by analogy to (6.14), the estimate (6.20).
Finally, we combine the inequalities (6.16)-(6.20) and get (6.15).
Lemma 6.5 Assume that there is one re-entrant corner and that the triangulation T_h is provided with mesh grading according to Assumption 6.2. Then the following estimate holds for the error u − I_h u of the Lagrange interpolant I_h, with u from (2.3) and Φ(h, μ) from (6.7):
  ||| u − I_h u |||_{1,h} ≤ c Φ(h, μ). (6.21)
Proof. According to (2.3), the solution u of the BVP (2.1) can be represented by u = w + s, with s = a r^α sin(αφ), where w denotes the regular part of the solution and s the singular part. Apply the triangle inequality ||| u − I_h u ||| ≤ ||| w − I_h w ||| + ||| s − I_h s |||.
Since w is regular, the norm ||| w − I_h w ||| has been already estimated in the proof of Theorem 5.2. Thus, using the estimates (5.13) and (5.14) together with (2.4), we get
  || w − I_h w ||_{1,h} ≤ c h || w ||_{2,Ω} ≤ c h || f ||_{0,Ω}. (6.22)
Bounds of the norm ||| s − I_h s ||| can be derived from Lemma 6.3 and Lemma 6.4. The combination of (6.6), (6.15) and (2.4) yields the inequalities
  ||| s − I_h s ||| ≤ c Φ(h, μ). (6.23)
Estimate (6.21) is obvious by (6.22) and (6.23).
The final error estimate is given in the next theorem.
Theorem 6.6 Let u and u_h be the solutions of the BVP (2.1) with one re-entrant corner and of the finite element equation (3.7), respectively. Further, let Assumption 6.2 be satisfied for T_h. Then the error u − u_h in the norm || . ||_{1,h} from (4.7) is bounded by
  || u − u_h ||_{1,h} ≤ c Φ(h, μ), (6.24)
with Φ(h, μ) from (6.7).
Proof. The combination of Lemma 5.1 with Lemma 6.5 immediately yields the assertion.
Remark 6.7 Estimate (6.24) holds also for more than one re-entrant corner, with a slightly modified function Φ(h, μ). For example, if the mortar interface touches the vertices of two re-entrant corners with angles φ_1 and φ_2, two singular exponents α_1, α_2 occur. According to α_1, α_2, we employ meshes with grading parameters μ_1, μ_2; the estimate holds now with Φ(h, μ) = h |ln h|^{1/2} in the critical cases μ_i = α_i.
Remark 6.8 Under the assumptions of Theorem 6.6, the error in the L_2-norm admits the estimate
  || u − u_h ||_{0,Ω} ≤ c h Φ(h, μ). (6.25)
In particular, we have the O(h²) convergence rate for meshes with appropriate grading. Estimate (6.25) is proved by the Nitsche trick with additional ingredients, e.g. including again some interpolant (cf. the proof of Lemma 5.1). For the proof in the conforming case see e.g. [14].
7 Numerical experiments
We shall give some illustration of the Nitsche type mortaring in the presence of a corner singularity. In particular, we investigate the rate of convergence when local mesh refinement is applied. Consider the BVP
  −Δu = f in Ω,  u = 0 on ∂Ω,
where Ω is the L-shaped domain of Figure 5. The right-hand side f is chosen such that the exact solution u is of the form (7.1), involving the singularity function a r^{2/3} sin(2φ/3) at the re-entrant corner.

Figure 5: The L-shaped domain Ω.

Clearly, u|_{∂Ω} = 0 and u contains the typical corner singularity with exponent α = 2/3; therefore, the situation of Section 6 is given. We apply the Nitsche type mortaring method to this BVP and use the initial meshes shown in Figures 6 and 7. The approximate solution u_h is visualized in Figure 9.
Figure 6: Triangulations with mesh ratio 2:3 on Γ: quasi-uniform (left) and with local refinement (right).
Figure 7: Triangulations with mesh ratio 2:5 on Γ: quasi-uniform (left) and with local refinement (right).
The initial mesh is refined globally by dividing each triangle into four equal triangles such that the mesh parameters form a sequence {h, h/2, h/4, ...}. The ratio of the numbers of mesh segments on the mortar interface Γ is given by 2:3 (Figure 6) and 2:5 (Figure 7). Furthermore, the trace of the triangulation T_h^1 of Ω_1 on the interface Γ forms the partition E_h (for Γ see Figure 5). For the examples the choice made for γ proved sufficient to ensure stability. (For numerical experiments with other values of γ and also with regular solutions, cf. [18].) Moreover, we also apply local refinement by grading the mesh around the vertex P_0 of the corner, according to Section 6. The grading parameter μ is chosen as indicated in the tables (e.g. μ = 0.7).
Let u_h denote the finite element approximation according to (3.7) of the exact solution u from (7.1). Then the error estimate in the discrete norm || . ||_{1,h} is given by (6.24). We assume that h is sufficiently small such that
  || u − u_h ||_{1,h} ≈ C h^β (7.2)
holds with some constant C which is approximately the same for two consecutive levels of h, like h and h/2. Then β (the observed value) is derived from (7.2) by β_obs := log_2 q_h, where q_h denotes the quotient of the errors on two consecutive levels. The same is carried out for the L_2-norm, where || u − u_h ||_0 ≈ C h^σ is supposed. The values of β_obs and σ_obs are given in Table 1 and Table 2, respectively.
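The computation of the observed values is elementary; a sketch (the error sequence below is hypothetical, chosen to decay like h²):

```python
import math

def observed_rates(errors):
    """Observed convergence rates log2(e_h / e_{h/2}) for consecutive
    h-levels, assuming errors[i] belongs to mesh size h / 2**i."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# Hypothetical errors on levels h, h/2, h/4, h/8, decaying like h**2:
errs = [0.64, 0.16, 0.04, 0.01]
assert observed_rates(errs) == [2.0, 2.0, 2.0]
```

Each entry of the result is one observed rate for a pair (h_i, h_{i+1}), exactly as reported in the tables.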
Table 1: Observed convergence rates β_obs for different pairs (h_i, h_{i+1}) of h-levels and for different grading parameters μ, in the norm || . ||_{1,h}.

mesh        observed rates for successive pairs (h_i, h_{i+1})
μ = 0.7     2.0093   2.0835   2.2252   2.0863   2

Table 2: Observed convergence rates σ_obs for different pairs (h_i, h_{i+1}) of h-levels and for different grading parameters μ, in the norm || . ||_0.
The numerical experiments show that the observed rates of convergence are approximately
equal to the expected values. Furthermore, it can be seen that local mesh grading is suited
to overcome the loss of accuracy (cf. Figure 9) and the diminishing of the rate of convergence
on non-matching meshes caused by corner singularities.
Figure 8: The error in different norms (including the 1,h-norm; mesh ratios 2:3 and 2:5) versus the number of elements, on quasi-uniform meshes (left) and on meshes with grading (right).
Figure 9: The approximate solution u_h in two different perspectives (top), the local pointwise error on the quasi-uniform mesh (bottom left) and the local pointwise error on the mesh with grading (bottom right).
--R
Discontinuous Galerkin methods for elliptic problems.
Approximation of elliptic boundary value problems.
The Mortar
A new nonconforming approach to domain decomposition: The mortar element method.
A Multigrid Algorithm for the Mortar Finite Element Method.
Stabilization techniques for domain decomposition methods with non-matching grids
Elliptic Partial Di
Elliptic Problems in Nonsmooth Domains.
The Fourier-finite-element method for Poisson's equation in axisymmetric domains with edges
On adaptive grids in multilevel methods.
On some techniques for approximating boundary conditions in the
Mortaring by a Method of Nitsche.
An analysis of the
A Mortar Finite Element Method Using Dual Spaces for the Lagrange Multiplier.
--TR | corner singularities;finite element method;nitsche type mortaring;non-matching meshes;domain decomposition;mesh grading |
569859 | Using Hybrid Automata to Support Human Factors Analysis in a Critical System. | A characteristic that many emerging technologies and interaction techniques have in common is a shift towards tighter coupling between human and computer. In addition to traditional discrete interaction, more continuous interaction techniques, such as gesture recognition, haptic feedback and animation, play an increasingly important role. Additionally, many supervisory control systems (such as flight deck systems) already have a strong continuous element. The complexity of these systems and the need for rigorous analysis of the human factors involved in their operation leads us to examine formal and possibly automated support for their analysis. The fact that these systems have important temporal aspects and potentially involve continuous variables, besides discrete events, motivates the application of hybrid systems modelling, which has the expressive power to encompass these issues. Essentially, we are concerned with human-factors related questions whose answers are dependent on interactions between the user and a complex, dynamic system.In this paper we explore the use of hybrid automata, a formalism for hybrid systems, for the specification and analysis of interactive systems. To illustrate the approach we apply it to the analysis of an existing flight deck instrument for monitoring and controlling the hydraulics subsystem. | Introduction
The distinguishing feature of the modelling and specification of interactive systems is
the need to accommodate the user; for example, to formalise and analyse user require-
ments, and to conduct usability reasoning. The environment can also play a significant
role, as it can impose constraints on both the system and the user, and the communication
paths between them.
In existing approaches, interaction between user and system is assumed to be of a
discrete and sequential nature. Such a view may be inadequate where interaction between
user, system and environment contains continuous as well as discrete elements.
The 'continuous' aspect of the system can take many forms, including, but not restricted
to, continuous input and output devices. Certain types of interactive system have always
had a hybrid element - for example many forms of supervisory control systems, such
as flight deck systems and medical monitoring systems. Additionally, many emerging
technologies, such as virtual reality and haptic input devices, support richer and more
continuous interaction with the user, and hence applications using such techniques can
also be viewed as hybrid systems [18].
If the models we build of such systems are to support reasoning about issues of usability
and user requirements, then building a model of system behaviour is not enough
- we must also have some means of referring to both the user and the environment. It
has been proposed that usability issues can be better understood in terms of the conjoint
behaviour of system and user, and that syndetic models [8, 7, 9], which combine a formal
system model with a representation of human cognition, support such an approach.
In this paper, we do not consider the modelling of human cognition but rather apply
models of user input and observation of system output, which afford the possibility of
reasoning about user behaviour and inference. These models can take the form of constraints
imposed by the limitations of the user or of the environment, or they may take
the form of more explicit models of relevant aspects of the user or environment.
Automata provide a relatively simple formalism, with a convenient graphical repre-
sentation, for specifying the behaviour of systems. Basically, an automaton consists of
a number of locations, and a number of transitions which link these locations. System
specifications typically involve several automata, synchronised on certain transitions.
They include variables on which location invariants and transition guards are based.
Recently, a number of interesting variants of automata, including timed and hybrid au-
tomata, have been developed that allow the specification of processes that evolve in a
continuous rather than a discrete way. Timed automata include real valued clock variables
which increase at a uniform rate [14, 3]. Hybrid automata [11], on which we focus
in this paper, include analog variables which can change at arbitrary rates (both positive
and negative). The continuous change of real valued variables in hybrid automata
is specified by sets of differential equations, as is common practice in for example
physics. The automata based formalisms do not only provide a specification language
with a graphical interpretation, but also allow for automatic verification of properties
by means of reachability analysis provided by several model checking tools [3, 14, 12].
The example we focus on concerns flight deck instrumentation concerning the hydraulics
subsystem of an aircraft. This is based on a case study originally presented by
Fields and Merriam [10], which involves analysis of the support the instrument provides
to the user for the diagnosis of system failures (including issues of representation). Two
goals are identified in relation to this activity; firstly to preserve the safety of the system
(maintaining hydraulic power to the control surfaces), and secondly to discover the
cause of a problem. The user's actions are hence closely tied with the process of reasoning
about the possible faults in the system. As noted in [10], this type of activity is
typical in process control settings, and hence we see the case study as representative of
a class of applications. Through the use of hybrid automata, we have the potential to
expand on the original analysis, not only by modelling timing constraints, but also the
continuous variables representing quantities in the hydraulics subsystem itself.
Case study on aircraft hydraulics
The case study we analyse in this paper is taken from the domain of aircraft hydraulics,
and is based on the description in [10]. The hydraulics system is vital to the safe operation
of the system as it is the means by which inputs from the pilot or autopilot are
conveyed to the control surfaces (eg. rudder and ailerons). Movement of the surfaces
is achieved by servodynes which rely on hydraulic fluid. This fluid is supplied from
reservoirs, and a number of valves determine which reservoir supplies a given servo-
dyne. In [10] a generic model of hydraulic systems is first presented, and a simplified
(but realistic) version used as the basis of analysis. We base our treatment on the simple
model.
In the simple case, only two control surfaces - rudder and aileron - are included. The
operation of these surfaces is powered by servodynes; each surface has a primary and
secondary servodyne. Hydraulic fluid is supplied to the servodynes from two reservoirs,
the primary servodynes of each being connected to the blue reservoir, the secondary to
the green reservoir. The valves between the reservoirs and servodynes are such that each
surface is connected to only one tank at a time (see Fig. 1).
[Figure: the Blue and Green reservoirs connected via valves to the Rudder and Aileron servodynes.]
Fig. 1. Hydraulic System, from [10]
The focus of the model is the diagnosis and minimisation of leaks by the human
operator, the motivation being that a control surface connected to an empty reservoir
cannot be operated and hence is a serious hazard. In the model, each reservoir can leak
independently, as can each of the servodynes. Failures can occur in any combination of
reservoirs and servodynes, although fluid is only lost through a leaky servodyne while
the valve connecting it to one of the reservoirs is open. In reality there are many other
components involved in the system, and many more possible types of system failure,
but the model described here suffices to illustrate our approach.
User operation of the system is by means of two switches, one for each control
surface, which can be set by the user to either blue or green (see Fig. 2(a)). The level of
fluid in each reservoir is presented to the user by means of a pie-chart like display (see
Fig. 2(b)), where the loss of fluid can be observed by the pilot as a gradual decrease
of the 'filled' portion of arc in the display (shaded grey in our diagram).
[Figure: (a) the Rudder and Aileron switches with positions Blue/Green; (b) the Blue and Green reservoir level indicators.]
Fig. 2. Controls and status display, adapted from [10]
The case study is interesting since it combines real-time issues surrounding the diagnosis and correction of leaks with continuous dynamics. There are analog (continuous) variables representing
the fluid levels in the tanks (which can change continuously at various rates), and the
continuous representation of those levels by the level indicators on the flight deck. These
continuous aspects are combined with discrete controls operated by the pilot or flight
engineer.
To illustrate how the diagnosis of leaks is tied to the observation of tank levels in the
different switch configurations, consider the following scenario, where the rudder primary
(R1) and aileron secondary (A2) servodynes are leaking. The sequence of switch
settings, observations, and the 'ideal' user understanding of the system involved in the
diagnosis are illustrated in table 1. Initially the user observes a decrease (L) in the quantity
of the blue reservoir (BR) and no decrease (N) in the level of the green reservoir
(GR). Setting both switches to green, a decrease in the level of the green reservoir is
observed, leading to the conclusion that neither blue nor green reservoir are leaking.
Setting the rudder back to blue leads to loss from both reservoirs, leading to the conclusion
that both the rudder primary and aileron secondary servodynes are leaking, and
toggling both switches leads the user to conclude that neither the rudder secondary nor the aileron primary is leaking, completing the user's knowledge of the state of the system.
Step   Control State       Observation      Possible causes of leak
       Rudder   Aileron    Blue    Green
1      Blue     Blue       L       N        BR, R1 or A1
2      Green    Green      N       L        R1, A1, R2 or A2
3      Blue     Green      L       L        R1 and A2
4      Green    Blue       N       N        R1 and A2 (R2, A1 excluded)
Table 1. Observations and inferences
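The inference pattern of Table 1 rests on a simple observation model, which we can sketch in a few lines (an illustration of ours, not part of the original analysis; the component names follow the abbreviations used above):

```python
def draining(rudder, aileron, leaks):
    """Reservoirs currently losing fluid, given the two switch positions
    ('blue' or 'green') and the set of leaking components."""
    out = set()
    if "BR" in leaks: out.add("blue")    # a leaking reservoir always drains
    if "GR" in leaks: out.add("green")
    # A leaking servodyne drains only while its valve is open:
    if "R1" in leaks and rudder == "blue": out.add("blue")
    if "R2" in leaks and rudder == "green": out.add("green")
    if "A1" in leaks and aileron == "blue": out.add("blue")
    if "A2" in leaks and aileron == "green": out.add("green")
    return out

leaks = {"R1", "A2"}  # the scenario above: rudder primary + aileron secondary
assert draining("blue", "blue", leaks) == {"blue"}            # step 1
assert draining("green", "green", leaks) == {"green"}         # step 2
assert draining("blue", "green", leaks) == {"blue", "green"}  # step 3
assert draining("green", "blue", leaks) == set()              # step 4
```

Running through the four switch configurations reproduces exactly the observation column of the table, which is what lets the 'ideal' user isolate the two faulty servodynes.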
3 Hybrid automata
In this section we give an informal overview of the specification language for hybrid
automata HyTech and the analysis capabilities of the associated tools. For details on
the formalism, the use of the tools and for further references to articles on the theory
underlying HyTech we refer to [12, 11].
3.1 HyTech specification language
A HyTech specification consists of a number of automata, a set of variable declarations
and a number of analysis commands. Each automaton is named, and contains some
initial conditions, a set of locations (with invariants), a set of transitions (which may
include guards and variable assignments), some of which may be labelled as urgent.
Transitions may be synchronised between any number of automata (multi-part syn-
chronisation), and variables are global, i.e. they may be referenced from any automaton.
Four forms of real-valued variable are provided, each of which is distinguished by the
rate of change. Discrete variables are 'ordinary' variables which can be assigned values,
and which have a rate of zero. Clocks are as discrete variables but increase at a constant
rate of 1. Stopwatches may have a rate of either zero or 1. Analog variables form the
general case, and arbitrary linear conditions may be placed on their rate of change. A
state of a HyTech specification consists of a location vector and a valuation. The location
vector consists of one location name of every automaton in the specification. A
valuation is a vector of real numbers, containing a value for each of the global variables
of the specification.
Graphical manipulation of HyTech specifications is possible via the AutoGraph
tool, and a converter which maps state transition diagrams in AutoGraph to the HyTech
textual specification language [13].
3.2 HyTech model checking
Besides a specification language based on automata HyTech provides a tool for reachability
analysis. With the tool a subclass of hybrid automata specifications can be automatically
analysed. This subclass is the set of linear hybrid automata. These are hybrid
automata in which all invariants and conditions are expressed as (finite) conjunctions of
linear predicates. A linear predicate is an (in)equality between linear terms, such as, for example, 3x + 2y ≤ 5z − 7.
The coefficients in the predicates must be rationals. HyTech can compute the set of
states of the parallel composition of linear hybrid automata. Given an initial region (a
subset of the state space is called a region) 1 , it can compute the set of states that is
reachable from the initial region by forward reachability analysis. Conversely, given a
final set of states, it can compute the set of states from which the final region can be
reached by using backward reachability analysis. As a side effect of reachability analysis
a sequence of delay- and transition steps can be generated that shows an example
of how one set of states can be reached starting from another. This can be extremely
helpful for what we could call 'high-level debugging' of a formal model of a system.
A third analysis feature is the use of models that contain parameters. HyTech can
be used to synthesize automatically precise constraints on these parameters which are
necessary to satisfy certain properties. In later sections we show how these kinds of
analysis can be useful for the analysis of human computer interfaces.
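To see what reachability analysis computes, a discrete sketch helps (our own illustration: a plain fixed-point iteration over a finite transition graph, whereas HyTech manipulates polyhedral regions of the continuous state space symbolically):

```python
def forward_reach(initial, transitions):
    """Smallest set containing `initial` and closed under the transition
    relation (a dict mapping each state to its successors)."""
    reached, frontier = set(initial), list(initial)
    while frontier:
        s = frontier.pop()
        for t in transitions.get(s, ()):
            if t not in reached:
                reached.add(t)
                frontier.append(t)
    return reached

# A three-location toy graph:
trans = {"idle": ["leaking"], "leaking": ["empty"], "empty": []}
assert forward_reach({"idle"}, trans) == {"idle", "leaking", "empty"}
assert forward_reach({"empty"}, trans) == {"empty"}
```

Backward reachability of a final region corresponds to the same fixed point over the reversed transition relation; the delay steps of the hybrid setting have no counterpart in this purely discrete sketch.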
3.3 Example
Presented in Fig. 3 is an example of two simple automata, one modelling a reservoir
(Res) and one modelling an observer which monitors the level of liquid in the reservoir
(Observe). The automaton Res models the operation of a reservoir in the hydraulics
1 Note that in the context of HyTech specifications, the term region is used to refer to an arbitrary
subset of the state space; in the underlying theory concerning model checking of automata
specifications, this term has a more specific meaning.
system, and includes three locations encapsulating the possible states of the reservoir,
namely that it is not empty and not leaking (RNL), that it is leaking (RL) and that it is
empty (emptyR). It is assumed that we have defined two variables: the analog variable g, which indicates the level of the liquid in the reservoir, and the discrete variable red, modelling a red lamp that can be on (red=1) or off (red=0).
The automaton Res starts in the initial state indicated by a double circle and with
the start condition, indicated at label Start, that the liquid level g in the reservoir is
3. In the initial state two invariant conditions hold: one states that the reservoir should
contain liquid (g>=0), the other that the reservoir is not leaking, i.e. the
leaking. This is modelled by the transition from RNL to RL. The reservoir enters a state
in which the reservoir is not empty and is leaking with a rate of 1 unit per time-unit
(dg=-1). When the reservoir is empty, the automaton is forced to leave location RL,
because it can only stay there as long as g>=0, and moves to location emptyR by a
transition labeled by emptyr.
The automaton Observe models the detection of the reservoir becoming empty
and signalling this by means of a red light indicator (red'=1). When automaton Res
performs the transition labeled by emptyr, automaton Observe synchronises on this
label and moves to location o2. After this it performs the next transition as soon as
possible because of the special label asap which denotes that it is an urgent transition,
and switches the red light on (red'=1).
[Figure: the automaton Res with initial condition g=3 and locations RNL (dg=0), RL (dg=-1) and emptyR (dg=0), linked by transitions labelled emptyr; and the automaton Observe with initial condition red=0, synchronising on emptyr and setting red'=1 on an urgent (asap) transition.]
Fig. 3. Reservoir Automaton and Detection of Emptiness
4 Reasoning about interactive systems with HyTech
The approach we take is to model not only the system but also aspects of the user and
potentially the environment by means of various automata. For the moment, we consider
user models which consist of models for input, and possibly observation. An input
model could simply capture the possible sequences of operations a user can invoke, or
it can be more focussed towards a specific task, or even a strategy for achieving some
goal.
The potential for applying these specifications to usability reasoning derives from
the way in which the user and system automata execute together, and hence it is worth
considering how the models are composed.
4.1 Linking user and system models
There are a number of ways in which the user and system automata can be linked:
- synchronisation transitions, including urgent transitions. Where transitions are
synchronised (they have the same label), they must occur simultaneously in all
automata which contain the label. Thus by means of a synchronised transition, we
can require that both system and user models make a transition at the same time.
- shared variables. Use of a variable from another part of the model in a guard or
invariant corresponds to an observation of some form. Hence where the user model
references the value of a system variable, this corresponds to the user observing the
variable, presumably in the presentation of the system.
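The first linking mechanism, label synchronisation, can be illustrated with a toy product construction in Python. This is our own simplified sketch of the semantics, not HyTech's full machinery; the helper name and the tiny two-automaton example are assumptions.

```python
# Toy synchronous product: a shared label must fire simultaneously in every
# automaton that contains it; automata that do not know the label stay put.
def sync_step(locs, label, automata):
    """automata: list of dicts mapping (location, label) -> next location."""
    new_locs = []
    for loc, trans in zip(locs, automata):
        labels_known = {lbl for (_, lbl) in trans}
        if label in labels_known:
            if (loc, label) not in trans:
                return None          # an automaton knows the label but cannot fire it
            new_locs.append(trans[(loc, label)])
        else:
            new_locs.append(loc)     # label is local to the other automata
    return tuple(new_locs)

# Fragment of the case study: the user action srg moves both the user model
# and the rudder valve model in lock step.
user  = {("RBAB", "srg"): "RGAB", ("RGAB", "srb"): "RBAB"}
valve = {("RBNN", "srg"): "RGNN", ("RGNN", "srb"): "RBNN"}
state = sync_step(("RBAB", "RBNN"), "srg", [user, valve])
```

Here firing srg from ("RBAB", "RBNN") moves both automata at once, to ("RGAB", "RGNN").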
The specification may also enforce constraints on the reachable states of the automata
which are less obvious, for example mutually exclusive system and user states. While
these relationships must be encoded at some level by means of synchronisations or
guards, they can be subtle and difficult to perceive. When we consider the significance
of such relationships to usability reasoning, we find that there are a number of interesting
semantic issues. For example, what does it mean when one "waits" for the other?
This seems to be an issue of initiative. Also, there can be a distinction between the occurrence
of an event, for example in the system model, and perception of that event,
which might correspond to a transition in the user model. Consider for example a form
of 'polling' behaviour by a user who periodically checks some portion of the system
display, rather than continuously monitoring it (an assumption which would be unreasonable
in many application domains).
The analysis section of the specification may also contain abstractions which can
be seen as modelling aspects of the user - e.g. a region definition may correspond to
observation of the state of the system by the user. A detailed example of this is given in
section 6.3.
4.2 Properties
There are many benefits associated with the construction of a formal specification, but
the one we focus on here is the possibility of formally verifying whether certain properties
hold on the combined specification of system, user, and environment, in this case
via the capabilities of the HyTech model-checker. The basic capabilities of HyTech are
based on reachability analysis, but there are a number of ways in which this can be used
in the context of human factors analysis:
simple reachability: This analysis technique can be used to show that it is possible for
the system to reach desirable states or sets of states (for example those representing
the user's goals), and that it is impossible to reach undesirable states (for example
those which would compromise the safety of the system). Abstractions such as
impact, i.e. 'the effect that an action or sequence of actions has on the safe and
successful operation of the system' [4], can be formalised and checked by means of
such reachability properties.
state/display conformance: Where we have automata representing the presentation
of the system state, or observation of this by the user, we can check conformance
between the automata representing the system state and those representing the presentation
and/or observation. See for example Dix [6] for a discussion of various
forms of state-display conformance.
performance related properties: Since we can derive limits on timed and analog variables
such that certain states are reachable, it is possible to examine
performance-related issues such as latency (see [1] for a treatment of latency-related
issues in a lip synchronisation protocol modelled in UPPAAL, a specification
language for timed automata). Data on latencies can be compared to human factors
and ergonomics data on human response times and perception thresholds, to
ascertain whether system performance is compatible with the user's capabilities.
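The first of these analyses, simple reachability, can be sketched as a breadth-first search over an explicit transition relation. This is a discrete simplification of what HyTech does (HyTech additionally handles the continuous dynamics symbolically), and the tiny fault-scenario model below is hypothetical.

```python
from collections import deque

# Forward reachability over an explicit transition relation (discrete sketch only).
def reachable(init, transitions):
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for (src, dst) in transitions:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Hypothetical mini-model: from 'init' a leak may occur, after which the system
# is either fixed or the reservoir runs empty.
trans = [("init", "leak"), ("leak", "fixed"), ("leak", "empty")]
states = reachable("init", trans)
```

A desirable state (such as "fixed") is checked by membership in the reachable set; an undesirable one is shown unreachable by its absence.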
A review of the use of machine assisted verification (including model checking) in the
analysis of interactive systems can be found in Campos and Harrison [2]. Rushby [17]
describes some preliminary work in using a model checker to check for inconsistencies
between a user's model of system operation and actual system behaviour which lead to
automation surprises in an avionics case study [16].
5 Hydraulics system specification
In this section, we describe a formal specification of the hydraulics system introduced
in section 2. We construct automata to represent both the system and the user.
5.1 System model
The system model comprises two sets of automata, namely:
- those that model the servodyne leakage and valve state, one for each control surface.
- those that model the reservoirs, one for each reservoir.
We define two valve automata - one for each control surface, ValveR for the rudder
and ValveA for the aileron. The valve for each control surface can be set to either the
blue or green reservoir, and correspondingly the primary or secondary servodyne. These
transitions are given synchronisation labels sab, signifying 'set aileron to blue', srg
signifying 'set rudder to green', and so on. The locations with a label containing a B
as the second letter are those where the blue reservoir has been selected, and those
containing a G where the green reservoir has been selected.
Furthermore, each servodyne can leak independently, yielding a total of eight locations
for each automaton. From a given situation, more leaks can occur, and so we have
transitions (unlabelled in Fig. 4) denoting the occurrence of (additional) leaks. These
leaks are indicated in the location names by the last two letters, each of which can be
either N - notleaking or L - leaking, the second last letter representing the state of the
primary (blue) servodyne and the last letter that of the secondary (green) servodyne.
For the aileron, analog variables are introduced that model the leakage of its servodynes:
variables ba and ga model the amount of fluid leaking from the primary and the secondary aileron
servodyne respectively, and their first time derivatives dba and dga the respective rates
of leakage. Note that when the valve connecting a leaky servodyne is closed, the servodyne
does not lose liquid. So, for example, in the location labelled by AGLL, standing
for the aileron switched to the green (secondary) servodyne where both servodynes are
leaking, only the derivative regarding the leak in the secondary servodyne is non zero
(dba=0, dga=1).
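The rate conditions attached to the valve locations follow a simple pattern, which can be sketched as a Python helper. This is our own illustrative encoding of the location-name convention described above; the function name is an assumption.

```python
# Derive (dba, dga) from an aileron location name such as "AGLL" (illustrative).
# Second letter: selected reservoir (B or G); last two letters: leak status (L/N)
# of the primary (blue) and secondary (green) servodyne. A closed, leaky
# servodyne loses no fluid, so a leak contributes only when its servodyne is
# the one currently selected by the valve.
def aileron_rates(location):
    selected = location[1]
    blue_leaks, green_leaks = location[2] == "L", location[3] == "L"
    dba = 1 if blue_leaks and selected == "B" else 0
    dga = 1 if green_leaks and selected == "G" else 0
    return dba, dga

rates = {loc: aileron_rates(loc) for loc in ["ABLN", "AGLL", "ABLL", "AGLN"]}
```

For instance AGLL yields (0, 1): the blue servodyne leak is masked because the valve is set to green, while the green servodyne leak drains fluid.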
[Figure: the aileron valve automaton ValveA (Start: True) with locations ABNN, AGNN, ABLN, AGLN, ABNL, AGNL, ABLL, AGLL, rate conditions on dba and dga, and transitions labelled sab and sag.]
Fig. 4. Aileron Valve Automaton
A similar automaton is constructed for the valves and servodynes of the rudder with
variables br and gr, see Fig. 7.
The reservoir automaton, as presented in Fig. 5, includes three locations indicating
the status of the reservoir, which can be empty, leaking, or not leaking but not
empty. For the green reservoir these have labels emptyG, GRL and GRNL respectively.
Associated with the locations we have both invariants on the continuous variable g (the
level of fluid in the reservoir) and conditions on the rate variable dg, the rate at which
fluid is lost being the sum of that lost from the servodynes connected to the green reservoir
(-dgr-dga) and leakage from the reservoir itself (-1 in the condition for location
GRL). When the reservoir is empty, both the level of fluid and the rate of leakage are
zero.
[Figure: the green reservoir automaton GreenRes with locations GRNL (dg=0-dgr-dga), GRL and emptyG (dg=0).]
Fig. 5. Reservoir Automaton
A similar automaton is constructed for the blue reservoir, see Fig. 7.
5.2 User model
We model user input by means of a single automaton, in which the user can perform
actions to move between the four possible combinations of switch settings; that both
rudder and aileron are set to blue RBAB, that rudder is set to green and aileron to blue
RGAB and so on, as illustrated in Fig. 6. The transitions have synchronisation labels
that establish synchronisation between user actions and the two valves. Each transition
is guarded by a constraint on the level of fluid in the reservoir to which the user wants
to connect a servodyne.
[Figure: the user input automaton User (Start: True) with locations RBAB, RBAG, RGAB, RGAG and transitions labelled srb, srg, sab, sag.]
Fig. 6. User Input Automaton
User observation of the presentation of the system status, i.e. the change in the fluid
level indicators, can also be represented by automata. However for the purpose of the
analysis discussed in this paper it was found simpler and more convenient to model
observation as a set of regions defined in the analysis component of the HyTech specification.
Each region corresponds to a class of states that is observationally equivalent for the user.
Details of this approach are discussed in the next section.
As can be seen in the above diagrams, both valve/servodyne automata and the user
input automaton are synchronised on four events (namely srb, srg, sab, sag), which set
the rudder and aileron valves to the blue or green settings. The complete specification
(without an analysis section) is shown in Fig. 7.
[Figure: the complete specification: the automata User, ValveR, ValveA, GreenRes and BlueRes, synchronised on the labels srb, srg, sab and sag, together with the configuration section:
Config
var g, gr, ga, b, br, ba: analog;
t: clock;
]
Fig. 7. Complete specification
6 Analysis
Having completed our specification of the system, we proceed with a number of analy-
ses. The examples in the current section serve mainly to illustrate the approach and the
possible kinds of analyses that could be performed with the model checker HyTech to
support human factors analysis. In some cases the results could also have been derived
without assistance of a model checker. However, it is easy to imagine extensions of
the case study in which such analyses are much harder to perform accurately without
automatic analysis.
In addition to the reachability and performance related analysis, we illustrate an
original form of task-driven analysis which concerns the possible inferences to be made
by the user in the diagnosis of system faults. We show how the expressiveness of the
analysis language of HyTech can be exploited to model the observations of the liquid
indicators by the pilot when performing the diagnosis activity. This approach makes
it possible to evaluate and compare the efficiency and efficacy of different diagnosis
strategies.
Our aim in this section is to illustrate how real-time and continuous elements can
be included in analyses with a human factors focus. In particular we show how hybrid
automata and associated model checking tools can support analyses driven by concepts
such as user tasks and user inference, which are external to the system specification.
6.1 Safety conditions
Safety conditions can be expressed via reachability conditions, and checked in a straightforward
fashion. Sample traces, which reach a target region from an initial region, including
timing information, can be constructed automatically by HyTech. In the context
of our case study, the first simple analysis is to let HyTech find out whether a fix
can be reached starting from a particular 'leaky' situation. For example, let's suppose
the valves are switched to blue and only the primary servodyne of the rudder and the
secondary servodyne of the aileron are leaking. This situation is defined as the region
variable init_reg.
init_reg := loc[User]=RBAB &
loc[ValveR]=RBLN &
loc[ValveA]=ABNL &
loc[GreenRes]=GRNL &
loc[BlueRes]=BRNL;
The observation of a decrease in the blue and green reservoir fluid quantity can be
expressed as two regions. Leaking of 'green' fluid can be observed when the rudder
is switched to green and the secondary servodyne is leaking, or when the aileron is
switched to green and the secondary servodyne is leaking or when the green reservoir is
leaking. A similar region can be defined for the observation of a decrease in the 'blue'
fluid.
In HyTech these regions are defined in the following way, where | denotes the logical
'or':
green_leak := (loc[ValveR]=RGNL | loc[ValveR]=RGLL |
loc[ValveA]=AGNL | loc[ValveA]=AGLL |
loc[GreenRes]=GRL);
blue_leak := (loc[ValveR]=RBLL | loc[ValveR]=RBLN |
loc[ValveA]=ABLL | loc[ValveA]=ABLN |
loc[BlueRes]=BRL);
We can let HyTech compute the set of states reachable from this initial region, intersected
with those states in which no leaks occur (defined as fin_reg).
reached := reach forward from init_reg endreach;
If the intersection is not empty, we can let HyTech print a (shortest) trace from the
leaking situation to a situation in which the problem is fixed.
if empty (fin_reg & reached)
then prints "No fix possible";
else print fin_reg & reached;
print trace to fin_reg using reached;
The result is shown below:
Generating trace to specified target region ========
VIA: srg
============ End of trace generation ============
Max memory used
Time spent
The first three lines describe the set of states characterised by the intersection of
fin reg and reached. It consists of a location vector that lists the location of each
automaton in the specification in the situation in which a fix for the leaking is realised.
The list gives only the names of the automata locations. They belong respectively to the
automata ValveR, GreenRes, User, BlueRes and ValveA. The order in which
the location vector is reported is the same in each following analysis in this section.
The next two lines give the conditions on the variables that have to be satisfied in
that case.
The second part of the result concerns the trace starting from the initial region to
the final region. In this case the shortest trace to a fix consists of just one action, namely
switching the rudder to 'green', achieved by transition srg. Since we have not specified
that the switching activity performed by the pilot costs time, the transition can be
performed in zero time units. Adding time constraints on the activity of the user could
be an interesting extension of the current case study, although such constraints should
first be studied carefully in order to keep the model as realistic as possible.
In this case study there are 'leaky' situations that obviously cannot be fixed by the
pilot by means of the switches. One such situation is when both reservoirs are leaking.
Forward reachability analysis of such a situation gives us indeed the answer that a fix is
not possible.
A more interesting question is for which 'leaky' situations a fix can be found. This
can be computed by using a backward reachability analysis, starting from a situation
in which no leaks occur, i.e. the above-defined region fin_reg. This leads to a set of 36
states. Three representative ones are shown below.
Inspection of all the reported states shows that the automata that model the reservoirs
are always in the non-leaking location (GRNL and BRNL). Moreover, it never
occurs that both servodynes of the rudder or both servodynes of the aileron are leaking at the
same time. This gives indeed exactly the conditions under which the pilot is able to find
a fix.
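The backward analysis used above can be sketched with the same discrete machinery: starting from the leak-free states, repeatedly add predecessors until a fixed point. This is our own simplified illustration over a hypothetical fragment of the transition relation; HyTech computes the analogous set symbolically over regions.

```python
# Backward reachability: which states can reach a 'fix' via the transitions?
def backward_reach(targets, transitions):
    fixable = set(targets)
    changed = True
    while changed:
        changed = False
        for (src, dst) in transitions:
            if dst in fixable and src not in fixable:
                fixable.add(src)
                changed = True
    return fixable

# Hypothetical fragment: srg isolates a leaking primary rudder servodyne
# (RBLN -> RGLN), but when both servodynes leak (RBLL) switching only moves
# the system to RGLL, which is still leaking.
trans = [("RBLN", "RGLN"),
         ("RBLL", "RGLL")]
fix_states = {"RGLN"}          # rudder on green, the leaky servodyne closed
result = backward_reach(fix_states, trans)
```

As in the HyTech analysis, the fixable set contains the single-leak situation but not the one where both servodynes of the rudder leak.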
6.2 Constraints on timing and continuous variables
Using HyTech, we can derive limits on the variables, including fluid levels and rates
of leakage. To do this, we use a special type of variable - parameters - and get the
system to find the values of the parameters for which a solution can be found (some
final region reached). For example, consider an analysis where we declare the initial
levels of fluid in the tanks to be given by a parameter alpha. We define the target
region to be one where neither tank is empty and the clock t has reached 3. In Fig. 8
below we show the analysis for an initial region where both reservoirs and both rudder
valves are leaking. The hide statement is a form of existential quantification, allowing
us to solve against the variables of interest. When we run the analysis we obtain the
loc[ValveA]=ABNN & loc[GreenRes]=GRL &
loc[BlueRes]=BRL

reached := reach forward from init_reg endreach;
if empty(reached & fin_reg) then
prints "Situation is not reachable";
else
print omit all locations hide non_parameters in reached&fin_reg endhide;

Fig. 8. Analysis to derive constraints
following output, which gives the range of values of the parameter alpha for which the
final region is reachable.
Composing automata *
Number of iterations required for reachability: 8
Examination of more detailed output shows that the traces requiring the minimal
alpha value of 9/2 are those which involve switching the rudder valve from the primary
to the secondary reservoir at t=1.5, hence for each tank, 3 units are lost from
the reservoir leak, and 1.5 units from the rudder servodyne leak.
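The loss figures quoted above can be checked with a line of arithmetic; this is only a restatement of the stated totals, not part of the HyTech output.

```python
from fractions import Fraction

# Per tank: 3 units lost to the reservoir leak plus 1.5 units lost to the
# rudder servodyne leak, which reproduces the minimal alpha value of 9/2.
reservoir_loss = Fraction(3)
servodyne_loss = Fraction(3, 2)
min_alpha = reservoir_loss + servodyne_loss
print(min_alpha)   # 9/2
```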
The analysis above involved deriving a constraint on the analog variables of the
system. We could also solve against clock (and other) variables, for example to derive
the maximum time which can elapse before some action must be taken by the user to
preserve system safety. More interestingly, we could extend the user model to include
delays between user observation of some system event, and input of some response by
the user. As mentioned in section 4.2 above, these response times could be based on
empirical data on human performance, allowing us to analyse the system taking into
account the limitations of both the user and the technology.
6.3 Diagnosis activity
Besides finding a fix for a situation in which leaking of hydraulic fluid occurs, the
goal of the pilot is to discover as soon as possible and as precisely as possible which
components in the hydraulics system are leaking. As we have seen in the description of
the case study in section 2 the pilot is supposed to follow a certain diagnosis strategy
in which she switches the valves in different positions and observes the corresponding
changes in the fluid level indicators. In this strategy, the order in which the different
positions of the switches are examined is important because this may influence the
inferences about the status of the system components that the pilot can make in each
step of the diagnosis. Moreover, the pilot has to remember the observations made in
previous steps of the process, in order to reach a complete assessment of component
failures at the end of the diagnosis.
A first analysis that seems useful is to check whether the diagnosis strategy proposed
works for particular 'leaky' situations. This could be done in several ways. One
straightforward way would be to model the diagnosis behaviour of the user as an automaton.
Given the number of possible situations, user observations and inferences that
can be made, this approach seems rather cumbersome and unlikely to scale up when
more difficult cases are analysed.
A more indirect, but simpler, way to reach the same goal is to use the reachability
analysis language of HyTech. This is the approach we follow in this article.
We start the analysis from a region that characterises the situation in which the
user has switched the valves of the rudder and aileron both to the blue reservoir (i.e.
loc[User]=RBAB). In this situation the user observes a decrease in the blue reservoir
fluid quantity and no decrease in the green reservoir fluid quantity.
Let us assume that the pilot follows the diagnosis strategy indicated in table 1, going
through the various settings from top to bottom. The first situation of the diagnosis can
then be defined as:
By forward reachability we can compute all states that are reachable from this first
situation.
s1 := reach forward from sit1 endreach;
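For readers unfamiliar with the reach operator, the fixed point it computes can be sketched on a purely discrete abstraction; the states and transitions below are invented and ignore all clock and analog constraints.

```python
# Illustrative sketch (not HyTech): forward reachability as a fixed-point
# computation over a discrete transition system.

def reach_forward(init, transitions):
    """Return all states reachable from 'init' via 'transitions'."""
    reached = set(init)
    frontier = list(init)
    while frontier:                      # iterate until no new states appear
        state = frontier.pop()
        for succ in transitions.get(state, ()):
            if succ not in reached:
                reached.add(succ)
                frontier.append(succ)
    return reached

# Toy model: valve settings as states, user switches as transitions.
transitions = {
    "RBAB": ["RGAG"],        # switch both rudder and aileron to green
    "RGAG": ["RGAB"],        # switch the aileron back to blue
    "RGAB": [],
}

s1 = reach_forward({"RBAB"}, transitions)
print(sorted(s1))            # ['RBAB', 'RGAB', 'RGAG']
```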
We are now interested in those reachable states that correspond to the second situation
in which the user has switched both the rudder and the aileron to green and
observes a decrease in the green fluid but not in the blue while the two reservoirs are
still not empty. The second situation can be characterised as:
We perform a second reachability analysis starting from the intersection of the states
reached in the first analysis s1 and the second situation sit2.
s2 := reach forward from s1 & sit2 endreach;
Region s2 in its turn can be intersected with the region describing the next situation
the user may encounter when she switches the aileron back to blue and observes that
both the fluids stopped decreasing. Note that this step corresponds to going from step 2
to step 4 in the diagnosis steps reported in Table 1, skipping the third step listed there.
The regions printed by HyTech for, e.g., s1 & sit2 and s2 & sit3 give us an
indication of the size and complexity of the resulting regions. The resulting
region of s2 & sit3 contains only one element:
This gives exactly one possibility for the locations of the automata as the result
of the diagnosis followed by the user. This unique result indicates that the pilot can
reach only one conclusion about the status of the components of the hydraulics system,
assuming she does not make a mistake in reasoning of course. The conclusion in this
case is that the primary servodyne of the rudder is leaking, but the secondary is not
(RGLN), the green reservoir is not leaking (GRNL), the blue reservoir is not leaking
(BRNL) and the primary servodyne of the aileron is not leaking, but the secondary is
(ABNL). This result corresponds to the combination of the derivations following the
steps of Table 1.
Note that for this diagnosis we have not made any assumptions about the possibility
that hydraulic components may start leaking while the pilot is performing the diagnosis.
The above analysis shows that in that particular diagnosis, with the described observations
by the pilot, a precise and unique assessment of the status of the hydraulic components
can be reached, even without excluding in advance the possibility that a component
starts leaking during the diagnosis.
It is also interesting to take a look at the region s1 & sit2 that gives the states
that can be reached right after the second observation (step 2) by the pilot. It gives
possible combinations of locations. From the detailed output we can derive what the
pilot could have correctly concluded about the status of the components after the second
step in the diagnosis. The detailed results show that the pilot can correctly conclude
that the blue reservoir is not leaking (BRNL in each possible location vector). But, for
example, the pilot can no longer be sure that the green reservoir is not leaking, although
she could conclude this correctly after the first observation. This is explained by the fact
that the full result obtained from HyTech also takes into account that the green reservoir
could have started leaking during the diagnosis.
If we assume that this is not the case, as the pilot is likely to do in order to reduce
the number of possibilities she has to remember, the number of combinations is reduced
to 9. They are given below without the corresponding constraints on the analog variables.
The result now coincides with the derivations reported in [10] (and Table 1), in
which it is stated that the pilot can conclude at this point that at least one of the primary
servodynes is leaking and one of the secondary servodynes, i.e. in none of the
location vectors do the combinations RGN* and AGN*, or RG*N and AG*N, occur, where *
stands for N or L. The result can be obtained automatically by adding to sit2, as an extra
constraint, the pilot's assumption that the green reservoir is still not leaking
(loc[GreenRes]=GRNL). This is also one way to make explicit the underlying assumptions
concerning the validity of the diagnosis strategy.
The above shows that with the support of HyTech complex situations can be analysed
that would be difficult to handle accurately with pencil and paper. The analysis of
diagnosis methods can be performed more systematically and thoroughly.
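The stepwise narrowing performed above by intersecting reached regions with situation regions has a simple set-theoretic core. The sketch below illustrates it on an invented three-component hypothesis space, with no timing or analog behaviour.

```python
from itertools import product

# Illustrative sketch (not HyTech): each diagnosis step intersects the current
# hypothesis set with the hypotheses consistent with a new observation.
# Components and observation predicates are invented; True means "leaking".
components = ("RudderPrimary", "RudderSecondary", "GreenRes")
hypotheses = set(product([False, True], repeat=len(components)))

def narrow(current, observation):
    """Keep only the hypotheses consistent with an observation predicate."""
    return {h for h in current if observation(h)}

# Observation 1: fluid is being lost somewhere on the green circuit.
s1 = narrow(hypotheses, lambda h: h[0] or h[2])
# Observation 2: the green reservoir level itself stays constant.
s2 = narrow(s1, lambda h: not h[2])
print(len(hypotheses), len(s1), len(s2))   # 8 6 2
```

After the second observation only two hypotheses remain, mirroring how each intersection step sharpens the pilot's assessment.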
7 Summary and Discussion
Summary
Starting from our concerns with the specification and analysis of interactive systems
with a continuous or hybrid aspect, we have shown that hybrid automata provide a
useful approach to modelling interactive systems with a real-time or continuous aspect.
We believe that if the models so constructed are to be useful as a basis for usability
reasoning, then they must include user relevant abstractions, and if we are to reason
about the operation of the system within a certain context, also environmental aspects
must be considered. Hence in the approach we have taken, we have modelled both
system and user as automata, with links between them via synchronised transitions and
shared variables.
Once the models are constructed, a number of forms of analysis are possible: not
only can traditional reachability properties be examined, such as those pertaining to
safety, but constraints on the variables of the system can also be automatically derived,
such as time constraints. These forms of analysis are similar to those commonly carried
out in the verification of critical systems, albeit with a more explicit human factors
dimension. Task driven analysis, such as the analysis of inferences involved in human
performed diagnosis of system faults, has long been established in the field of human
computer interaction (see for example [5]). However, the approach to the formal support
of such analysis in the context of critical systems which we have presented is a new
application of hybrid automata.
Discussion
Diagnosing failure in process control settings is an important and critical activity in
which the operator of the system is responsible for the safe operation of the system.
Careful design and analysis of diagnosis strategies is therefore very important. The
diagnosis of failures can often not be left completely to a computer system, for example
due to limitations in sensing and monitoring devices [10].
There are a number of interesting possibilities for further work; consider for example
the issue of context. Contextual assumptions can be specified as automata, just
as environmental factors can. In [10] for example, it is assumed that no new leaks occur
during the diagnosis. If we label all 'new leak' transitions with a synchronisation label
(eg. start leak), and construct an automaton (Assumption in Fig. 9) with a reflexive
transition in the initial state which synchronises on this label, and a transition to
another state which does not synchronise on the label (NoMoreLeaks in the figure),
then by adding the conjunct loc[Assumption] = NoMoreLeaks to our analysis
regions, we preclude the occurrence of new leaks during the diagnosis.
Fig. 9. Assumption automaton (locations Leak and NoMoreLeaks; initial condition Start: True; synchronisation label start_leak)
We see this issue of making explicit any assumptions about the context of an anal-
ysis, whether it concerns environmental factors, the operational context (for example
standard operating procedures), or the capabilities and performance limits of the user,
to be a significant benefit of the type of formalisation and analysis described here. At a
practical level, assumptions in this form can be easily added and removed which makes
it easy to perform analysis of the complete system under various assumptions.
Our modelling of the user has been rather simple so far; another interesting area to
explore would be the derivation of more realistic user models, particularly with a psychological
or ergonomic justification, for example modelling user observation, thinking
and reaction times. Toolkits of certain common user behaviours (eg. observation based
on polling) are also an interesting possibility.
References
Specification and verification of media constraints using uppaal.
Formally verifying interactive systems: A review.
The tool KRONOS.
Using executable interactor specifications to explore the impact of operator interaction error.
Task Analysis for Human-Computer Interaction
Formal Methods for Interactive Systems.
Reasoning about gestural interaction.
Syndetic modelling.
Device models.
Inference and information resources: A design case study.
The theory of hybrid automata.
A model checker for hybrid systems.
Using Autograph to Create Input for Hytech
UPPAAL in a nutshell.
Oops, it didn't arm - a case study of two automation surprises
Using model checking to help discover mode confusions and other automation surprises.
The hybrid world of virtual environments.
Keywords: critical systems, hybrid automata, human factors
A Theoretical Framework for Convex Regularizers in PDE-Based Computation of Image Motion

Abstract. Many differential methods for the recovery of the optic flow field from an image sequence can be expressed in terms of a variational problem where the optic flow minimizes some energy. Typically, these energy functionals consist of two terms: a data term, which requires e.g. that a brightness constancy assumption holds, and a regularizer that encourages global or piecewise smoothness of the flow field. In this paper we present a systematic classification of rotation invariant convex regularizers by exploring their connection to diffusion filters for multichannel images. This taxonomy provides a unifying framework for data-driven and flow-driven, isotropic and anisotropic, as well as spatial and spatio-temporal regularizers. While some of these techniques are classic methods from the literature, others are derived here for the first time. We prove that all these methods are well-posed: they possess a unique solution that depends in a continuous way on the initial data. An interesting structural relation between isotropic and anisotropic flow-driven regularizers is identified, and a design criterion is proposed for constructing anisotropic flow-driven regularizers in a simple and direct way from isotropic ones. Its use is illustrated by several examples.

1 Introduction
Even after two decades of intensive research, robust motion estimation continues to be
a key problem in computer vision. Motion is linked to the notion of optic flow, the
displacement field of corresponding pixels in subsequent frames of an image sequence.
Optic flow provides information that is important for many applications, ranging from
the estimation of motion parameters for robot navigation to the design of second-generation
video coding algorithms. Surveys of the state-of-the-art in motion computation
can be found in papers by Mitiche and Bouthemy [32], and Stiller and Konrad [50]. For
a performance evaluation of some of the most popular algorithms we refer to Barron et
al. [5] and Galvin et al. [20].
Bertero et al. [6] pointed out that, depending on its formulation, optic flow calculations
may be ill-conditioned or even ill-posed. It is therefore common to use implicit or
explicit smoothing steps in order to stabilize or regularize the process.
Implicit smoothing steps appear for instance in the robust calculation of image
derivatives, where one usually applies some amount of spatial or temporal smoothing
(averaging over several frames). It is not rare that these steps are only described as algorithmic
details, yet they are often crucial for the quality of the algorithm.
Thus, it is natural to make the role of smoothing more explicit by incorporating
it already in a continuous problem formulation. This approach was pioneered
by Horn and Schunck [25] and improved by Nagel [34] and many others. Approaches of
this type calculate optic flow as the minimizer of an energy functional, which consists
of a data term and a smoothness term. Formulations in terms of energy functionals allow
a conceptually clear formalism without any hidden model assumptions, and several
evaluations have shown that these methods perform well [5, 20].
The data term in the energy functional involves optic flow constraints such as the assumption
that corresponding pixels in different frames should reveal the same grey value.
The smoothness term usually requires that the optic flow field should vary smoothly
in space [25]. Such a term may be modified in an image-driven way in order to suppress
smoothing at or across image boundaries [1, 34]. As an alternative, flow-driven
modifications have been proposed which reduce smoothing across flow discontinuities
[8, 12, 14, 29, 40, 43, 54]. Most smoothness terms require only spatial smoothness. Spatio-temporal
smoothness terms have been considered to a much smaller extent [7, 33, 36, 56].
Since smoothness terms fill in information from regions where reliable flow estimates exist
to regions where no estimates are possible, they create dense flow fields. In many
applications this is a desirable quality, which distinguishes regularization methods from
other optic flow algorithms. The latter create non-dense flow fields that have to
be postprocessed by interpolation if 100 % density is required.
Modeling the optic flow recovery problem in terms of continuous energy functionals
offers the advantage of having a formulation that is as independent of the pixel grid
as possible. A correct continuous model can be rotation invariant, and the use of well-established
numerical methods shows how this rotation invariance can be approximated
in a mathematically consistent way.
From both a theoretical and practical point of view, it can be attractive to use energy
functionals that are convex. They have a unique minimum, and this global minimum
can be found in a stable way by using standard techniques from convex optimization,
for instance gradient descent methods. Having a unique minimum allows one to use globally
convergent algorithms, where every arbitrary flow initialization leads to the same solution:
the global minimum of the functional. This property is an important quality of
a robust algorithm. Nonconvex energy functionals, on the other hand, may have many
local minima, and it is difficult to find algorithms that are both efficient and converge
to a global minimum. Typical algorithms which converge to a global minimum (such as
simulated annealing [30]) are computationally very expensive, while methods which are
more efficient (such as graduated non-convexity algorithms [9]) may get trapped in a
local minimum.
Minimizing continuous energy functionals leads in a natural way to partial differential
equations (PDEs): applying gradient descent, for instance, yields a system of coupled
diffusion-reaction equations for the two flow components. The rapidly emerging use
of PDE-based image restoration methods [22, 39], such as nonlinear diffusion filtering
and total variation denoising, has motivated many researchers to apply similar ideas
to estimate optic flow [1, 4, 12, 14, 24, 29, 38, 40, 43, 54]. A systematic framework
that links the diffusion and optic flow paradigms, however, has not been studied so
far. Furthermore, from the framework of diffusion filtering it is also well-known that
anisotropic filters with a diffusion tensor have more degrees of freedom than isotropic
ones with scalar-valued diffusivities. These additional degrees of freedom can be used to
obtain better results in specific situations [53]. However, similar nonlinear anisotropic
regularizers have not been considered in the optic flow literature so far.
The goal of the present paper is to address these issues. We present a theoretical
framework for a broad class of regularization methods for optic flow estimation. For
the reasons explained above, we focus on models that allow a formulation in terms of
convex and rotation invariant continuous energy functionals. We consider image-driven
and flow-driven models, isotropic and anisotropic ones, as well as models with spatial
and spatio-temporal smoothing terms. We prove that all these approaches are well-posed
in the sense of Hadamard: they have a unique solution that depends in a continuous
(and therefore predictable) way on the input data.
We shall see that our taxonomy includes not only many existing models, but also
interesting novel ones. In particular, we will derive novel regularization functionals for
optic flow estimation that are flow-driven and anisotropic. They are the optic flow analogues
of anisotropic diffusion filters with a diffusion tensor. Many of the spatio-temporal
methods have not been proposed before either. With the increased computational possibilities
of modern computers it is likely that they will become more important in the
future. In the present paper we also focus on interesting relations between isotropic and
anisotropic flow-driven methods. They allow us to formulate a general design principle
which explains how one can create anisotropic optic flow regularizers from isotropic ones.
Our paper is organized as follows. In Section 2 we first review and classify existing
image-driven and isotropic flow-driven models, before we derive a novel energy functional
leading to anisotropic flow-driven models. Then we show how one has to modify all
models with a spatial smoothness term in order to obtain methods with spatio-temporal
regularization. A unifying energy functional is derived that incorporates the previous
models as well as novel ones. Its well-posedness is established in Section 3. In Section 4
we take advantage of structural similarities between isotropic and anisotropic approaches
in order to formulate a design principle for anisotropic optic flow regularizers. The paper
is concluded with a summary in Section 5.
2 A Framework for Convex Regularizers
2.1 Spatial regularizers
2.1.1 Basic structure
In order to formalize the optic flow estimation problem, let us consider a real-valued
image sequence f(x, y, z), where (x, y) denotes the location within the image domain Ω
and the time parameter z ∈ [0, T] specifies the frame. The optic flow field
u(x, y, z) := (u1, u2)^T describes the displacement between two subsequent frames:
(x, y, z) and (x + u1, y + u2, z + 1) should depict the same image detail.
Frequently it is assumed that image objects keep their grey value over time:

   f(x + u1, y + u2, z + 1) - f(x, y, z) = 0.   (1)

Such a model assumes that illumination changes do not appear, and that occlusions or
disocclusions do not happen. Numerous generalizations to multiple constraint equations
and/or different "conserved quantities" (replacing intensity) exist; see e.g. [18, 51]. However,
since the goal of the present paper is to study different regularizers, we restrict
ourselves to (1). If the spatial and temporal sampling is sufficiently fine, we may replace
(1) by its first order Taylor approximation

   f_x u1 + f_y u2 + f_z = 0,   (2)

where the subscripts x, y and z denote partial derivatives. This so-called optic flow constraint
(OFC) forms the basis of many differential methods for estimating the optic flow.
Evidently such a single equation is not sufficient to determine the two unknown functions
u1(x, y, z) and u2(x, y, z). In order to recover a unique flow field, we need an additional
assumption. Regularization-based optic flow methods use as additional assumption the
requirement that the optic flow field should be smooth (or at least piecewise smooth).
The basic idea is to recover the optic flow as a minimizer of some energy functional of
type

   E(u1, u2) = ∫_Ω ( (f_x u1 + f_y u2 + f_z)^2 + α V(∇u1, ∇u2) ) dx dy,   (3)

where ∇ := (∂_x, ∂_y)^T denotes the spatial nabla operator, and u := (u1, u2)^T. The first
term in the energy functional is a data term requiring that the OFC be fulfilled, while
the second term penalizes deviations from (piecewise) smoothness. The smoothness term
V(∇u1, ∇u2) is called regularizer, and the positive smoothness weight α is the regularization
parameter. One would expect that the specific choice of the regularizer has
a strong influence on the result. Therefore, let us discuss different classes of convex
regularizers next.
2.1.2 Homogeneous regularization
In 1981 Horn and Schunck [25] pioneered the field of regularization methods for optic
flow computations. They used the regularizer

   V(∇u1, ∇u2) = |∇u1|^2 + |∇u2|^2.   (4)

It is a classic result from the calculus of variations [13, 16] that, under mild regularity
conditions, a minimizer of a functional

   E(u1, u2) = ∫_Ω F(x, y, u1, u2, ∇u1, ∇u2) dx dy   (5)

necessarily satisfies the so-called Euler-Lagrange equations

   F_{u1} - ∂_x F_{∂_x u1} - ∂_y F_{∂_y u1} = 0,   (6)
   F_{u2} - ∂_x F_{∂_x u2} - ∂_y F_{∂_y u2} = 0,   (7)

with homogeneous Neumann boundary conditions:

   ∂u1/∂n = 0,   (8)
   ∂u2/∂n = 0.   (9)

Hereby, n is a vector normal to the image boundary ∂Ω.
Applying this framework to the minimization of the Horn and Schunck functional
leads to the PDEs

   Δu1 = (1/α) f_x (f_x u1 + f_y u2 + f_z),   (10)
   Δu2 = (1/α) f_y (f_x u1 + f_y u2 + f_z),   (11)

where Δ := ∂_xx + ∂_yy denotes the Laplace operator. These equations can be regarded as
the steady state (t → ∞) of the diffusion-reaction system

   ∂_t u1 = Δu1 - (1/α) f_x (f_x u1 + f_y u2 + f_z),   (12)
   ∂_t u2 = Δu2 - (1/α) f_y (f_x u1 + f_y u2 + f_z),   (13)

where t denotes an artificial evolution parameter that should not be mixed up with the
time z of the image sequence. These equations also arise when minimizing the Horn and
Schunck functional using steepest descent. Schnorr [41] has established well-posedness
by showing that this functional has a unique minimizer that depends continuously on
the input data f. Recently, Hinterberger [24] proved similar well-posedness results for a
related model with a different data term.
We observe that the underlying diffusion process in the Horn and Schunck approach
is the linear diffusion equation

   ∂_t u_i = Δu_i   (14)

with i ∈ {1, 2}. This equation is well-known for its regularizing properties
and has been extensively used in the context of Gaussian scale-space; see [48] and the
references therein. It smoothes, however, in a completely homogeneous way, since its
diffusivity is g := 1 everywhere. As a consequence, it also blurs across semantically
important flow discontinuities. This is the reason why the Horn and Schunck approach
creates rather blurry optic flow fields. The regularizers described in the sequel are attempts
to overcome this limitation.
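A minimal sketch of how the diffusion-reaction system above is used in practice: approximating the Laplacian by (neighbourhood average minus value) and solving the coupled per-pixel system yields the classic Horn and Schunck update. The single-pixel setting and all numbers below are invented for illustration.

```python
# Sketch of the Horn-Schunck iteration: the Laplacian Delta(u) is replaced by
# (average of neighbours - u), which leads to the well-known update formula
#   u1 <- u1_avg - f_x * (f_x u1_avg + f_y u2_avg + f_z) / (alpha + f_x^2 + f_y^2).
# Toy single-pixel example with fixed, invented derivative values.

alpha = 0.5
fx, fy, fz = 1.0, 0.0, -1.0      # OFC fx*u1 + fy*u2 + fz = 0 is solved by u1 = 1

u1 = u2 = 0.0                    # flow initialization (any value converges)
u1_avg = u2_avg = 0.0            # neighbourhood averages (toy: equal to values)
for _ in range(100):
    t = (fx * u1_avg + fy * u2_avg + fz) / (alpha + fx**2 + fy**2)
    u1 = u1_avg - fx * t
    u2 = u2_avg - fy * t
    u1_avg, u2_avg = u1, u2
print(round(u1, 3), round(u2, 3))   # 1.0 0.0
```

The iteration converges to the flow satisfying the optic flow constraint, illustrating why any initialization reaches the same global minimum for this convex functional.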
2.1.3 Isotropic image-driven regularization
It seems plausible that motion boundaries are a subset of the image boundaries. Thus, a
simple way to prevent smoothing at motion boundaries consists of introducing a weight
function into the Horn and Schunck regularizer that becomes small at image edges.
This modification yields the regularizer

   V(∇u1, ∇u2) = g(|∇f|^2) (|∇u1|^2 + |∇u2|^2),   (15)

where g is a decreasing, strictly positive function. This regularizer has been proposed
and theoretically analysed by Alvarez et al. [1]. The corresponding diffusion-reaction
equations are given by

   ∂_t u1 = div( g(|∇f|^2) ∇u1 ) - (1/α) f_x (f_x u1 + f_y u2 + f_z),   (16)
   ∂_t u2 = div( g(|∇f|^2) ∇u2 ) - (1/α) f_y (f_x u1 + f_y u2 + f_z).   (17)

The underlying diffusion process is

   ∂_t u_i = div( g(|∇f|^2) ∇u_i ).   (18)

It uses a scalar-valued diffusivity g that depends on the image gradient. Such a method
can therefore be classified as inhomogeneous, isotropic and image-driven. Isotropic refers
to the fact that a scalar-valued diffusivity guarantees a direction-independent smoothing
behaviour, while inhomogeneous means that this behaviour may be space-dependent.
Since the diffusivity does not depend on the flow itself, the diffusion process is linear.
For more details on this terminology and diffusion filtering in image processing, we refer
to [53]. Homogeneous regularization arises as a special case of (15) when g(|∇f|^2) := 1
is considered.
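Any decreasing, strictly positive g qualifies; the Perona-Malik-type choice below is just one common example from the diffusion literature, and the contrast parameter lam is invented.

```python
# One possible weight function g for image-driven regularization:
# close to 1 in flat regions (full smoothing), small at strong edges
# (smoothing suppressed). Not prescribed by the text; an example only.
def g(grad_sq, lam=4.0):
    return 1.0 / (1.0 + grad_sq / lam**2)

print(round(g(0.0), 3), round(g(100.0**2), 3))   # 1.0 0.002
```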
2.1.4 Anisotropic image-driven regularization
An early anisotropic modification of the Horn and Schunck functional is due to Nagel
[34]; see also [2, 17, 35, 37, 41, 42, 47]. The basic idea is to reduce smoothing across image
boundaries, while encouraging smoothing along image boundaries. This is achieved by
considering the regularizer

   V(∇u1, ∇u2) = ∇u1^T D(∇f) ∇u1 + ∇u2^T D(∇f) ∇u2,   (19)

where D(∇f) is a regularized projection matrix perpendicular to ∇f:

   D(∇f) = (1 / (|∇f|^2 + 2λ^2)) ( ∇f⊥ ∇f⊥^T + λ^2 I ),   (20)

where ∇f⊥ is the vector orthogonal to ∇f and I denotes the unit matrix. This method leads to the diffusion-reaction equations

   ∂_t u1 = div( D(∇f) ∇u1 ) - (1/α) f_x (f_x u1 + f_y u2 + f_z),   (21)
   ∂_t u2 = div( D(∇f) ∇u2 ) - (1/α) f_y (f_x u1 + f_y u2 + f_z).   (22)

The usage of a diffusion tensor D(∇f) instead of a scalar-valued diffusivity allows a
direction-dependent smoothing behaviour. This method can therefore be classified as
anisotropic. Since the diffusion tensor depends on the image f but not on the unknown
flow, it is a purely image-driven process that is linear in its diffusion part. Well-posedness
for this model has been established by Schnorr [41].
The eigenvectors of D are v1 := ∇f, v2 := ∇f⊥, and the corresponding eigenvalues
are given by

   λ1 = λ^2 / (|∇f|^2 + 2λ^2),   (23)
   λ2 = (|∇f|^2 + λ^2) / (|∇f|^2 + 2λ^2).   (24)

In the interior of objects we have |∇f| → 0, and therefore λ1 → 1/2 and λ2 → 1/2. At
ideal edges where |∇f| → ∞, we obtain λ1 → 0 and λ2 → 1. Thus, we have isotropic
behaviour within regions, and at image boundaries the process smoothes anisotropically
along the edge. This behaviour is very similar to edge-enhancing anisotropic diffusion
filtering [53]. In contrast to edge-enhancing anisotropic diffusion, however, Nagel's optic
flow technique is linear. It is interesting to note that only recently has it been pointed out
that the Nagel method may be regarded as an early predecessor of anisotropic diffusion
filtering [2].
Homogeneous and isotropic image-driven regularizers are special cases of (19), where
D(∇f) := I and D(∇f) := g(|∇f|^2) I are chosen.
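Nagel's regularized projection matrix and its eigenvalue behaviour can be checked numerically. This sketch hard-codes the 2x2 entries; the value of the parameter λ (lam) is invented.

```python
# Regularized projection matrix perpendicular to the image gradient:
# D = (grad_perp grad_perp^T + lam^2 I) / (|grad f|^2 + 2 lam^2),
# with grad_perp = (-fy, fx). Smooths along edges, not across them.
lam = 1.0

def D(fx, fy):
    n = fx * fx + fy * fy + 2.0 * lam**2
    return [[(fy * fy + lam**2) / n, (-fx * fy) / n],
            [(-fx * fy) / n, (fx * fx + lam**2) / n]]

flat = D(0.0, 0.0)          # interior of an object: both eigenvalues 1/2
edge = D(1000.0, 0.0)       # strong vertical edge: eigenvalues near 0 and 1
print(flat[0][0], round(edge[0][0], 6), round(edge[1][1], 6))   # 0.5 1e-06 0.999999
```

For a gradient along the x-axis the matrix is diagonal, so the printed entries are exactly the eigenvalues across and along the edge, matching the limit behaviour described in the text.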
2.1.5 Isotropic
ow-driven regularization
Image-driven regularization methods may create oversegmentations for strongly textured
objects: in this case we have much more image boundaries than motion boundaries. In
order to reduce smoothing only at motion boundaries, one may consider using a purely
ow-driven regularizer. This, however, is at the expense of refraining from quadratic
optimization problems. In earlier work [43, 54], the authors considered regularizers of
type
dierentiable and increasing function that is convex in s, for instance
Regularizers of type (25) lead to the diffusion-reaction system
where Ψ' denotes the derivative of Ψ with respect to its argument. The scalar-valued diffusivity Ψ'(|∇u_1|^2 + |∇u_2|^2) shows that this model is isotropic and flow-driven. In general, the diffusion process is nonlinear now. For the specific regularizer (26), for instance, the diffusivity is given by
Since this nonlinear diffusivity is decreasing in its argument, smoothing at flow discontinuities is inhibited. For the specific choice Ψ(s) := s, homogeneous regularization with constant diffusivity is recovered again.
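Since the specific penalizer (26) is not reproduced here, the following sketch uses the common convex choice Ψ(s) = 2λ²·sqrt(1 + s/λ²) from the flow-driven regularization literature (an assumption, not necessarily the paper's (26)) to illustrate the decreasing diffusivity:

```python
import numpy as np

# Sketch with an assumed penalizer psi(s) = 2*lmb**2*sqrt(1 + s/lmb**2).
# Its derivative gives a diffusivity psi'(s) that decreases in s, so
# smoothing is reduced where the flow gradient magnitude is large.
def diffusivity(s, lmb=1.0):
    return 1.0 / np.sqrt(1.0 + s / lmb**2)  # psi'(s)

s = np.linspace(0.0, 10.0, 50)  # squared flow gradient magnitudes
g = diffusivity(s)
```

The diffusivity equals 1 for s = 0 and decays monotonically, which is exactly the inhibition of smoothing at flow discontinuities described above.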
The preceding diffusion-reaction system uses a common diffusivity for both channels. This avoids that edges are formed at different locations in each channel. The same coupling also appears in isotropic nonlinear diffusion filters for vector-valued images as considered by Gerig et al. [21], and Whitaker and Gerig [57]. Nonlinear flow-driven regularizers with different diffusivities for each channel are discussed in Section 4.
2.1.6 Anisotropic flow-driven regularization
We have seen that there exist isotropic and anisotropic image-driven regularizers as well as isotropic flow-driven ones. Thus, our taxonomy would be incomplete without having discussed anisotropic flow-driven regularizers. In the context of nonlinear diffusion filtering, anisotropic models with a diffusion tensor instead of a scalar-valued diffusivity offer advantages for images with noisy edges or interrupted structures [55].
How can one construct related optic flow methods? Let us first have a look at diffusion filtering of multichannel images. In the nonlinear anisotropic case, Weickert [52, 55] and Kimmel et al. [26] proposed to filter a multichannel image by using a joint diffusion tensor that depends on the gradients of all image channels. Our goal is thus to find an optic flow regularizer that leads to a coupled diffusion-reaction system where the same flux-dependent diffusion tensor is used in each equation.
In order to derive this novel class of regularizers, we have to introduce some definitions first. As in the previous section, we consider an increasing smooth function Ψ(s) that is convex in s. Let us assume that A is some symmetric n × n matrix with orthonormal eigenvectors w_1, ..., w_n and corresponding eigenvalues λ_1, ..., λ_n. Then we may formally extend the scalar-valued function Ψ(z) to a matrix-valued function Ψ(A) by defining Ψ(A) as the matrix with eigenvectors w_1, ..., w_n and eigenvalues Ψ(λ_1), ..., Ψ(λ_n).
This definition can be motivated from the case where Ψ(z) is represented by a power series Σ_k α_k z^k. Then it is easy to see that the corresponding matrix-valued power series Σ_k α_k A^k has the eigenvectors w_1, ..., w_n and eigenvalues Ψ(λ_1), ..., Ψ(λ_n). Another definition that is useful for our considerations below is the trace of a quadratic matrix A = (a_ij). It is the sum of its diagonal elements, or, equivalently, the sum of its eigenvalues:
tr A = Σ_i a_ii = Σ_i λ_i.
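These two definitions can be illustrated with a short numerical sketch (Ψ = exp is only an illustrative choice, not taken from the paper):

```python
import numpy as np

# Sketch of the two definitions above: (i) a scalar function psi extended to
# a symmetric matrix A via its eigendecomposition, keeping the eigenvectors
# and mapping the eigenvalues; (ii) the trace as the sum of the diagonal
# elements, which equals the sum of the eigenvalues.
def matrix_function(psi, A):
    lmb, W = np.linalg.eigh(A)           # eigenvalues, orthonormal eigenvectors
    return W @ np.diag(psi(lmb)) @ W.T   # same eigenvectors, eigenvalues psi(lmb)

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
expA = matrix_function(np.exp, A)
trace_diag = float(np.trace(A))                   # sum of diagonal entries
trace_eig = float(np.sum(np.linalg.eigvalsh(A)))  # sum of eigenvalues
```

For a diagonal matrix the construction simply applies Ψ to the diagonal, which agrees with the power-series motivation above.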
With these notations we consider the regularizer
V(∇u) := tr Ψ(∇u_1 ∇u_1^T + ∇u_2 ∇u_2^T). (32)
Its argument ∇u_1 ∇u_1^T + ∇u_2 ∇u_2^T is a symmetric and positive semidefinite 2 × 2 matrix. Hence, there exist two orthonormal eigenvectors v_1, v_2 with corresponding nonnegative eigenvalues λ_1, λ_2. These eigenvalues specify the contrast of the vector-valued image in the directions v_1 and v_2, respectively. This concept has been introduced by Di Zenzo for edge analysis of multichannel images [15]. It can be regarded as a generalization of the structure tensor [19], and it is related to the first fundamental form in differential geometry [28].
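The Di Zenzo-type matrix and its nonnegative eigenvalues can be sketched as follows (the gradient values are made up for illustration):

```python
import numpy as np

# Sketch: the Di Zenzo-type matrix J = sum_i grad(u_i) grad(u_i)^T for a
# two-channel flow field at a single pixel. J is symmetric positive
# semidefinite, so its eigenvalues are nonnegative; they measure the
# vector-valued contrast in the two eigendirections.
grad_u1 = np.array([1.0, 2.0])   # spatial gradient of flow channel u1 (made-up)
grad_u2 = np.array([0.5, -1.0])  # spatial gradient of flow channel u2 (made-up)
J = np.outer(grad_u1, grad_u1) + np.outer(grad_u2, grad_u2)
eigvals = np.linalg.eigvalsh(J)
```

Note that tr J = |∇u_1|² + |∇u_2|², which is exactly the argument of the isotropic regularizer.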
Our result below states that the regularizer (32) leads to the desired nonlinear anisotropic diffusion-reaction system.
Proposition 1 (Anisotropic Flow-Driven Regularization)
For the energy functional (3) with the regularizer (32), the corresponding steepest descent diffusion-reaction system is given by
where the diffusion tensor satisfies
D = Ψ'(∇u_1 ∇u_1^T + ∇u_2 ∇u_2^T).
Proof. The Euler-Lagrange equations for minimizing the energy
E(u) := ∫ (...) dx dy (37)
are given by the corresponding variational conditions. In order to simplify the evaluation of the first and second summand in both equations, we replace the perturbation (x, y) by the unit vector in the x_i direction. Together with the identity tr(a b^T) = a^T b and the corresponding divergence identity, it follows that the derivative of the smoothness term takes the divergence form stated above. Plugging this result into the Euler-Lagrange equations concludes the proof.
It should be noted that, in general, the eigenvalues of the diffusion tensor are not equal. Therefore, we have a real anisotropic diffusion process with different behaviour in different directions. Homogeneous regularization is a special case of the regularizer (32), if Ψ(s) := s is chosen.
An interesting similarity between the isotropic regularizer (25) and its anisotropic counterpart (32) becomes explicit when writing (25) as
Ψ(tr(∇u_1 ∇u_1^T + ∇u_2 ∇u_2^T)). (44)
This shows that it is sufficient to exchange the role of the trace operator and the penalty function to switch between both regularization techniques. Another structural similarity will be discussed in Section 4.
2.2 A unifying framework
Let us now make a synthesis of all previously discussed models. Table 1 gives an overview of the smoothness terms that we have investigated so far.

Table 1: Classification of regularizers for optic flow models.

                   isotropic                            anisotropic
  image-driven     g(|∇f|^2) (|∇u_1|^2 + |∇u_2|^2)      ∇u_1^T D(∇f) ∇u_1 + ∇u_2^T D(∇f) ∇u_2
  flow-driven      Ψ(|∇u_1|^2 + |∇u_2|^2)               tr Ψ(∇u_1 ∇u_1^T + ∇u_2 ∇u_2^T)
One may regard these regularizers as special cases of two more general models. Using the compact notation ∇u := (∇u_1 | ∇u_2), the first model has the structure
Ψ(tr(∇u^T D(∇f) ∇u)). (45)
For Ψ(s) := s, this model comprises the pure image-driven models, regardless of whether they are isotropic (D(∇f) := g(|∇f|^2) I) or anisotropic. Isotropic flow-driven models arise for D := I. In the general case, the model may be both image-driven and flow-driven.
The second model can be written as
tr Ψ(∇u^T D(∇f) ∇u). (46)
It comprises the anisotropic flow-driven case and its combinations with image-driven approaches. Note the large structural similarities between (45) and (46).
Both models can be assembled to a combined regularizer (47), in which a parameter ranging in [0, 1] determines the anisotropy. This regularizer is embedded into the general optic flow functional
∫ (...) dx dy. (48)
2.3 Spatio-temporal regularizers
All regularizers that we have discussed so far use only spatial smoothness constraints. Thus, it would be natural to impose some amount of (piecewise) temporal smoothness as well. Using our results from the previous section it is straightforward to extend the smoothness constraint into the temporal domain. Instead of calculating the optic flow as the minimizer of the two-dimensional integral (48) for each time frame, we now minimize a single three-dimensional integral whose solution is the optic flow for all frames t ∈ [0, T]:
E(u) := ∫ (...) dx dy dt (49)
where ∇3 denotes the spatio-temporal nabla operator.
The corresponding diffusion-reaction systems of spatio-temporal energy functionals have the same structure as the purely spatial ones that we investigated so far. The only difference is that the spatial nabla operator ∇ has to be replaced by its spatio-temporal analogue ∇3. Thus, one has to solve 3D diffusion-reaction systems instead of 2D ones.
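The replacement of the spatial nabla operator by its spatio-temporal analogue can be sketched with finite differences on an image stack (np.gradient is used here merely as a stand-in for whatever discretization one prefers):

```python
import numpy as np

# Sketch: the spatio-temporal nabla operator applied to an image stack
# f(x, y, t). np.gradient returns the partial derivatives along all three
# axes via central finite differences, i.e. the components of grad_3 f.
f = np.random.default_rng(0).standard_normal((8, 8, 5))  # synthetic stack
fx, fy, ft = np.gradient(f)  # derivatives along the x, y and t axes
```

Everything else in the diffusion-reaction systems stays structurally unchanged; only the gradient acquires a temporal component.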
Not many spatio-temporal regularizers have been studied in the literature so far. To the best of our knowledge, there have been no attempts to investigate rotation invariant spatio-temporal models that use homogeneous, isotropic image-driven, or anisotropic flow-driven regularizers.
Nagel [36] suggested an extension of his anisotropic image-driven smoothness constraint, where the diffusion tensor (20) is replaced by its spatio-temporal analogue D(∇3 f). Its eigenvalues are given by
Isotropic flow-driven spatio-temporal regularizers have been studied by the authors in [56]. They showed that such a regularizer outperforms a corresponding spatial regularizer at low additional computing time, if an entire image stack is to be processed.
It appears that the limited memory of previous computer architectures prevented many researchers from studying approaches with spatio-temporal regularizers, since they require keeping the entire image stack in the computer memory. On contemporary PCs or workstations, however, this is no longer a problem if typical stack sizes are used (e.g. frames with 256 × 256 pixels). It is thus likely that spatio-temporal regularizers will become more important in the future.
3 Well-Posedness Properties
In this section we shall prove that the energy functionals (48) and (49), respectively, admit a unique solution that continuously depends on the initial data. These favourable properties are the consequence of embedding the optic flow constraint (2) into a convex regularization approach.
From the perspective of regularization, Table 1 reveals another useful classification in this context: while image-driven models correspond to the class of quadratic regularizers [6], flow-driven models belong to the more general class of non-quadratic convex regularizers. This latter class has been suggested in [11, 45, 49] for generalizing the well-known quadratic regularization approaches (cf. [6]) used for early computational vision.
3.1 Assumptions
In the following, we do not distinguish between the approaches (48) and (49), since our results hold true for arbitrary n. Furthermore, we assume that the function Ψ(s) is strictly convex with respect to s, and that there exist constants c_1, c_2 such that
We consider only matrices D(∇f) that are symmetric and positive definite. We define as the space of admissible optic flow fields the set H, endowed with the scalar product
⟨u, v⟩ := ∫ (...)
and its induced norm.
In what follows, ⟨f, u⟩ denotes the action of some linear continuous functional f ∈ H*, i.e. some element of the dual space H*, on some vector field u ∈ H.
3.2 Convexity
We wish to show that the functional E(u) is strictly convex over H. To this end, we may disregard linear and constant terms in E(u) and consider the functional F(u) defined by
F(u) := ∫ (...) (57)
where the constant satisfies c := ∫ (...).
Strict convexity is a crucial property for the existence of a unique globally minimizing optical flow field u of E(u), determined as the root of the equation F'(u) = b for any linear functional b ∈ H*. We proceed in several steps. First, we consider the smoothness terms V_1(∇f, ∇u) and V_2(∇f, ∇u) separately. This can be done because the sum of convex functions is again convex. Then we consider all terms together, that is, the functional F(u).
The term V_1(∇f, ∇u) belongs to the class of smoothness terms which were considered in earlier work on isotropic nonlinear diffusion of multichannel images (e.g. [44]). To see this, let
vec(∇u) := (∇u_1^T, ∇u_2^T)^T
denote the vector obtained by stacking the columns of ∇u one upon the other, and let |.|_D denote the norm induced by the scalar product ⟨vec(∇u), vec(∇v)⟩_D. Then tr(∇u^T D(∇f) ∇u) can be rewritten in terms of this norm, and the framework in [44] is applicable.
The second, anisotropic and flow-driven smoothness term V_2(∇f, ∇u) is new in the context of optical flow computation. Note that, by contrast to the term V_1, the function Ψ is matrix-valued here. The strict convexity of V_2 is stated in
Proposition 2 (Matrix-Valued Convexity)
Let Ψ be strictly convex, let A and B be two positive semidefinite symmetric matrices with A ≠ B, and let α ∈ (0, 1). Then
tr Ψ(α A + (1 − α) B) < α tr Ψ(A) + (1 − α) tr Ψ(B).
Proof. Put C := α A + (1 − α) B. Since A, B and C are symmetric, there are orthonormal systems of eigenvectors {u_i}, {v_i}, {w_i} and real-valued eigenvalues {λ_i}, {μ_i}, {ν_i} such that A, B and C admit the corresponding spectral decompositions. Expanding the vectors u_i and v_i with respect to the system {w_i} and comparing the coefficients shows that each ν_j is a convex combination of the {λ_i} and {μ_i}. Since Ψ is strictly convex and A ≠ B, we obtain
tr Ψ(C) < α tr Ψ(A) + (1 − α) tr Ψ(B).
This concludes the convexity proof.
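The inequality of Proposition 2 can be checked numerically for a concrete example (Ψ(z) = z² is only an illustrative strictly convex choice):

```python
import numpy as np

# Sketch: numerically checking the matrix-valued convexity inequality
#   tr psi(a*A + (1-a)*B) < a*tr psi(A) + (1-a)*tr psi(B)
# for a strictly convex example psi and two symmetric psd matrices.
def tr_psi(psi, M):
    return float(np.sum(psi(np.linalg.eigvalsh(M))))  # tr psi(M) = sum psi(lambda_i)

psi = lambda z: z**2                    # strictly convex example
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 1.0], [1.0, 2.0]])
a = 0.3
lhs = tr_psi(psi, a * A + (1 - a) * B)
rhs = a * tr_psi(psi, A) + (1 - a) * tr_psi(psi, B)
```

With these values lhs < rhs strictly, as the proposition predicts for A ≠ B.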
So far we have shown the convexity of the smoothness term V(∇f, ∇u) in (57). To show that F(u) is strictly convex, we may use the equivalent condition (73) that F'(u) is strongly monotone [58]. Note that the smoothness term fulfills this condition because it is convex, as we have just shown. Concerning the remaining first term in (57), we have to cope with the small technical difficulty that the vector field u is multiplied with ∇f, which may vanish in homogeneous image regions. In this context, we refer to [41], where this problem has been dealt with.
3.3 Existence, uniqueness, and continuous dependence on the data
It is a well-established result (see, e.g., [58]) that property (73), together with the Lipschitz continuity of the operator F' (which holds true under mild conditions with respect to the data ∇f, f; cf. [41, 46]), ensures the existence of a unique and globally minimizing optical vector field u that continuously depends on the data. To understand the latter property, suppose we are given two image sequences and corresponding functionals (cf. (57)) and minimizers.
By virtue of (73) we have
Thus,
This estimate states that, for a slight change of the image sequence data, the corresponding optical flow field cannot jump arbitrarily but changes gradually, too. It is therefore an important robustness property.
4 Extensions
All regularizers that we have discussed so far can be motivated from existing nonlinear diffusion methods for multichannel images, where a joint diffusivity or diffusion tensor for all channels is used. As one might expect, this is not the only way to construct useful optic flow regularizers. In particular, there exists a more general design principle for anisotropic flow-driven regularizers, which we will discuss next.
Our key observation for deriving this principle is an interesting relation between anisotropic flow-driven regularizers and isotropic flow-driven ones: the anisotropic regularizer tr Ψ(J) can be expressed by means of the eigenvalues λ_1, λ_2 of J as
tr Ψ(J) = Ψ(λ_1) + Ψ(λ_2), (78)
while its isotropic counterpart Ψ(tr J) can be written as
Ψ(tr J) = Ψ(λ_1 + λ_2). (79)
This observation motivates us to formulate the following design principle for rotationally invariant anisotropic flow-driven regularizers:
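Both identities are easy to verify numerically (Ψ below is an arbitrary convex example, not the paper's specific penalizer):

```python
import numpy as np

# Sketch: the anisotropic regularizer tr psi(J) equals psi(l1) + psi(l2),
# while its isotropic counterpart psi(tr J) equals psi(l1 + l2), where
# l1, l2 are the eigenvalues of the symmetric matrix J.
psi = lambda z: np.sqrt(1.0 + z)  # illustrative convex choice
J = np.array([[3.0, 1.0],
              [1.0, 2.0]])
l1, l2 = np.linalg.eigvalsh(J)
W = np.linalg.eigh(J)[1]
tr_psi_J = float(np.trace(W @ np.diag(psi(np.array([l1, l2]))) @ W.T))
psi_tr_J = float(psi(np.trace(J)))
```

The two quantities coincide only when Ψ is linear; otherwise the anisotropic form penalizes the eigenvalues individually.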
Design Principle (Rotationally Invariant Anisotropic Regularizers)
Assume that we are given some isotropic regularizer Ψ(λ) with a convex penalty function Ψ, and a decomposition of its argument
λ = λ_1 + ... + λ_m, (80)
where the λ_j are rotationally invariant expressions. Then the regularizer
Ψ(λ_1) + ... + Ψ(λ_m)
is rotationally invariant and anisotropic.
Examples
1. The decomposition that has been used in (78) and (79) to transit from an isotropic to an anisotropic model was the trace identity tr J = λ_1 + λ_2, where λ_1 and λ_2 are the eigenvalues of J.
2. Earlier work proposed the regularizer (82), which involves the expression sh u. Applying the design principle, one can derive this expression from an identity given in [27]. Using the regularizer (82) in the functional (3) leads to a highly anisotropic diffusion-reaction system. Note that now the coupling between both equations is more complicated than in the previous cases, where a joint diffusivity or a joint diffusion tensor has been used. We are not aware of similar diffusion filters for multichannel images. Well-posedness properties and experimental results for this optic flow method are presented in [43, 46].
3. Requiring that the λ_j in (80) be rotationally invariant ensures the rotation invariance of the anisotropic regularizer. If we dispense with rotation invariance, the design principle can still be used. As an example, let us study the flow-driven regularization methods that are considered in [4, 12, 14, 29]. They use a regularizer of type
Ψ(|∇u_1|^2) + Ψ(|∇u_2|^2). (86)
According to our design principle, we may regard this regularizer as an anisotropic version of the isotropic regularizer (25). However, the decomposition of its argument |∇u_1|^2 + |∇u_2|^2 into |∇u_1|^2 and |∇u_2|^2 is not rotationally invariant. The corresponding diffusion-reaction system shows that both equations are completely decoupled in their diffusion terms. Thus, flow discontinuities may be created at different locations for each channel. The same decoupling appears also for some other PDE-based optic flow methods such as [40].
While each of the two diffusion processes is isotropic, the overall process reveals some anisotropy: in general, the two diffusivities Ψ'(|∇u_1|^2) and Ψ'(|∇u_2|^2) are not identical. Well-posedness results for this approach with a modified data term have been established by Aubert et al. [4].
There are also a number of related stochastic methods that lead to discrete models which are not consistent approximations of rotation invariant processes [7, 8, 10, 23, 31, 33]. Nonconvex regularizers are typically used in these approaches. Discrete spatio-temporal versions of the regularizer (86) are investigated in [7, 33].
It is a challenging open question whether there exist more useful rotation invariant convex regularizers than the ones we have just discussed. This is one of our current research topics.
5 Summary and Conclusions
The goal of this paper was to derive a diffusion theory for optic flow functionals. Minimizing optic flow functionals by steepest descent leads to a set of two coupled diffusion-reaction systems. Since similar equations appear for diffusion filtering of multi-channel images, the question arises whether there are optic flow analogues to the various kinds of diffusion filters.
We saw that image-driven optic flow regularizers correspond to linear diffusion filters, while flow-driven regularizers create nonlinear diffusion processes. Pure spatial regularizers can be expressed as 2D diffusion-reaction processes, and spatio-temporal regularizers may be regarded as generalizations to the 3D case. This taxonomy helped us not only to classify existing methods within a unifying framework, but also to identify gaps where no models are available in the current literature. We filled these gaps by deriving suitable methods with the specified properties, and we proved well-posedness for the class of convex diffusion-based optic flow regularization methods.
One important novelty along these lines was the derivation of regularizers that can be related to anisotropic diffusion filters with a matrix-valued diffusion tensor. This also enabled us to propose a design principle for anisotropic regularizers, and we discovered an interesting structural similarity between isotropic and anisotropic models: it is sufficient to exchange the role of the trace operator and the penalty function in order to switch between the two models.
We are convinced that these relations are only the starting point for many more fruitful interactions between the theories of diffusion filtering and variational optic flow methods. Diffusion filtering has progressed very much in recent years, and so it appears appealing to incorporate recent results from this area into optic flow methods. Conversely, it is clear that novel optic flow regularizers can also be regarded as energy functionals for suitable diffusion filters.
We hope that our systematic taxonomy provides a unifying platform for algorithms for the entire class of convex variational optic flow methods. Our future plans are to use such a platform for a detailed performance evaluation of the different methods in this paper, and for a systematic comparison of different numerical algorithms. Another point on our agenda is an investigation of alternative rotation-invariant decompositions that can be applied to construct useful anisotropic regularizers.
Acknowledgement. CS completed his doctoral thesis under the supervision of Prof. Nagel in 1991. He is grateful to Prof. Nagel, who introduced him to the field of computer vision.
--R
A computational framework and an algorithm for the measurement of visual motion
Computing optical flow via variational techniques
Performance of optical flow techniques
Robust dynamic motion estimation over time
The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields
Cambridge (Mass.)
Nonlinear variational method for optical flow computation
Methods of mathematical physics
A note on the gradient of a multi-image
Calculus of variations
Investigation of multigrid algorithms for the estimation of optical flow
Computation of component image velocity from local phase information
Recovering motion
Multimodal estimation of discontinuous optical flow using Markov random fields
Generierung eines Films zwischen zwei Bildern mit Hilfe des optischen Flusses
Images as embedded maps and minimal surfaces: movies
Invariant properties of the motion parallax
a curve evolution approach
theory and applications
Computation and analysis of image motion: a synopsis of current problems and methods
Scene segmentation from visual motion using global optimization
Constraints for the estimation of displacement vector fields
On the estimation of optical flow: relations between different approaches
Extending the 'oriented smoothness constraint' into the temporal domain and the estimation of derivatives of optical flow
An investigation of smoothness constraints for the estimation of displacement vector fields
Variational approach to optical flow estimation managing discontinuities
Determination of optical flow and its discontinuities using non-linear diffusion
On the mathematical foundations of smoothness constraints for the determination of optical flow and for surface reconstruction
Gaussian scale-space theory
Discontinuity preserving regularization of inverse visual problems
Estimating motion in image sequences
Robust computation of optical flow in a multi-scale differential framework
Anisotropic diffusion
On discontinuity-preserving optic flow
Nonlinear functional analysis and its applications
--TR
--CTR
El Mostafa Kalmoun, Harald Köstler, Ulrich Rüde, 3D optical flow computation using a parallel variational multigrid scheme with application to cardiac C-arm CT motion, Image and Vision Computing, v.25 n.9, p.1482-1494, September, 2007
Luis Alvarez, Rachid Deriche, Théo Papadopoulo, Javier Sánchez, Symmetrical Dense Optical Flow Estimation with Occlusions Detection, International Journal of Computer Vision, v.75 n.3, p.371-385, December 2007
Joan Condell, Bryan Scotney, Philip Morrow, Adaptive Grid Refinement Procedures for Efficient Optical Flow Computation, International Journal of Computer Vision, v.61 n.1, p.31-54, January 2005
Nils Papenberg, Andrés Bruhn, Thomas Brox, Stephan Didas, Joachim Weickert, Highly Accurate Optic Flow Computation with Theoretically Justified Warping, International Journal of Computer Vision, v.67 n.2, p.141-158, April 2006
Joachim Weickert, Christoph Schnörr, Variational Optic Flow Computation with a Spatio-Temporal Smoothness Constraint, Journal of Mathematical Imaging and Vision, v.14 n.3, p.245-255, May 2001
Daniel Cremers, Stefano Soatto, Motion Competition: A Variational Approach to Piecewise Parametric Motion Segmentation, International Journal of Computer Vision, v.62 n.3, p.249-265, May 2005
Andrés Bruhn, Joachim Weickert, Timo Kohlberger, Christoph Schnörr, A Multigrid Platform for Real-Time Motion Computation with Discontinuity-Preserving Variational Methods, International Journal of Computer Vision, v.70 n.3, p.257-277, December 2006
Andrés Bruhn, Joachim Weickert, Christoph Schnörr, Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods, International Journal of Computer Vision, v.61 n.3, p.211-231, February/March 2005
David Tschumperlé, Rachid Deriche, Orthonormal Vector Sets Regularization with PDE's and Applications, International Journal of Computer Vision, v.50 n.3, p.237-252, December 2002
David Tschumperl , Rachid Deriche, Orthonormal Vector Sets Regularization with PDE's and Applications, International Journal of Computer Vision, v.50 n.3, p.237-252, December 2002 | differential methods;regularization;diffusion filtering;optic flow;well-posedness |
569987 | On the Performance of Connected Components Grouping. | Grouping processes may benefit computationally when simple algorithms are used as part of the grouping process. In this paper we consider a common and extremely fast grouping process based on the connected components algorithm. Relying on a probabilistic model, we focus on analyzing the algorithm's performance. In particular, we derive the expected number of addition errors and the group fragmentation rate. We show that these performance figures depend on a few inherent and intuitive parameters. Furthermore, we show that it is possible to control the grouping process so that the performance may be chosen within the bounds of a given tradeoff. The analytic results are supported by implementing the algorithm and testing it on synthetic and real images. | Introduction
Perceptual grouping is an essential ingredient in various visual processes [WT83, Low85].
A major use of grouping is to accelerate recognition processes [Gri90, CJ91]. Many implementations
of grouping processes, however, take a large computational effort themselves, as
they use computationally intensive algorithms such as dynamic programming [SU88] or even
simulated annealing [HH93]. (Author email: mic@cs.technion.ac.il.) Therefore, such grouping processes may not be effective for
reducing the total complexity of the visual process.
Simpler grouping processes, on the other hand, may require lower computational effort but work satisfactorily only with simple, clean images. When operating on a complex image, such procedures result in hypothesized groups which are either fragmented or corrupted with alien image features. Therefore, it makes sense to use the suitable process for the situation. That is, to use simple procedures for easy tasks and reserve the more complex, computationally expensive processes for difficult tasks. To make the right choice of the grouping process, it is essential to study the benefits and the limitations of the various processes.
This paper is concerned with a simple grouping process based on a connected components graph algorithm. This process, defined in detail in the next section, is very fast and is essentially linear in the number of data features. The paper focuses on analyzing the quality of the groups extracted by this algorithm, and shows that quantitative and meaningful performance measures may be predicted and even, within certain constraints, controlled. In particular, we consider the number of background features which are falsely added to the true groups and the fragmentation of these groups, and show a tradeoff between these two performance measures. An interesting result is that the process behaves according to an intuitively interpreted parameter, which, if larger than a certain threshold, leads to explosion. That is, to a single hypothesized group including almost all the image features.
The main contributions of this paper are:
1. The paper provides, for the first time, an analysis of the connected components grouping
process, capable of predicting the grouping quality in terms of measurable, meaningful
quantities. The results may also be used for design, and specifically for choosing
the parameters controlling the grouping process.
2. The results of the paper, analytic as well as empirical, show that the simple connected components algorithm is useful for many grouping tasks, provided that the appropriate parameters are chosen, and that the expected result (which is predictable) is acceptable. Then, this algorithm may be the algorithm of choice as it is extremely fast.
The rest of the paper is as follows. The next section describes the grouping framework
we consider, the connected components grouping process, and the probabilistic modeling
of the grouping cues. Then, the processes of adding false features and fragmentation are
considered, relative to a random image model, in sections 3 and 4, respectively, and a tradeoff
is quantitatively specified. Experiments with synthetic and real data follow (section 5) and
a discussion of the results concludes the paper (section 6).
2 The grouping framework and the connected components algorithm
In a common formulation of a grouping task, one is given a set of data features (e.g. pixels,
edgels) in an image and is asked to partition it into meaningful subsets, requiring, for example, that, ideally, every set consists of all the features from the same imaged object. We consider
a fairly general set of grouping processes which work by testing small subsets of data features
using some criterion, denoted a grouping cue, and integrating the information into a decision
on the plausible data set partitioning. We follow the approach specified in [AL96] and specify
the grouping algorithm by three main decisions:
The grouping cues - The bits of local grouping information which indicate whether two (or more) data features belong to the same group. Common cues decide whether two edgels are collinear, two pixels have similar intensities, etc., but more complex cues are also used (see a very partial list of examples of various grouping cues in [Low85, SU88, HH93, Jac96]).
The feature subsets which are tested - The grouping cues operate on subsets of data
features and those are specified using different criteria, depending on the validity of
the cues and the computational complexity allowed.
The cue integration method - Given the partial information about the subsets grouping
we may integrate it into a more global decision about the grouping (or partitioning)
of the whole data set. This task is often formulated as a minimization of some cost
function, and is carried out by some optimization method ([SU88, HH93, AL96]).
The question considered in this paper is the ability of very simple algorithms, such as
connected components (CC) algorithm, to provide good grouping results. We shall consider
this question in the context of local grouping cues, which are valid only when the two features
are close. This is the situation in the most common task of grouping the edgels on a smooth
curve.
To make our predictions useful in other grouping domains, we follow [AL96] and characterize
the cue only by its reliability. Here, we shall consider a binary cue (an inherent
choice for the connected component algorithm), which provides a value "1" if its decision is
that the two features belong to the same group and "0" otherwise. In this case, the cue's
reliability is quantified by the two error probabilities ε_fa and ε_miss, giving the probabilities of making a false decision of "1" and "0", respectively.
Following [AL96], we use the following graphs to describe the information available from
the cues. First, we specify an unlabeled graph, denoted Underlying graph, G u , which represents
the data features pairs which are tested by the cues. In this graph the vertices stand
for the feature points themselves and the vertex pairs corresponding to interacting pairs (i.e.
to pairs of features which are tested by the cue) are connected by an arc. Then the results of
the cues are used to specify another graph, denoted a measured graph, Gm , in which an arc
exists only if it exists in G u and if its corresponding cues got the "1" value. In this graph
notation, the grouping process is a partitioning of the graph into subgraphs. In [AL96] we
looked for a partitioning, which maximizes the likelihood of the cue results. Here we shall
use the much simpler CC criterion.
Thus, in this particular paper, we make the following choices:
1. Cue - As described above, for the analysis we shall use an abstract binary cue, characterized
only by its miss and false alarm probabilities. In the experiments we shall
use a common cue, based on a combination of co-circularity, proximity and low curva-
ture, which is commonly used for the task of grouping edgels on a smooth curve and
is described in detail in [AL97].
2. Underlying graph - Recalling the local nature of all cues that are used to group edgels on a smooth curve, we shall restrict the pairs of features which are tested by these cues to those which are R-close (R, denoted the "interaction radius", is a parameter of the grouping process). Thus, the underlying graph, G_u, is locally connected and every vertex is connected to a varying number of vertices corresponding to R-close features. (A similar graph connecting every vertex with a fixed number of its closest neighbors [AL96] could be used as well.)
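A minimal sketch of constructing such an R-close underlying graph (brute-force O(n²) pairwise distances; a spatial index would be used in practice):

```python
import numpy as np

# Sketch: build the arcs of the underlying graph G_u by connecting every
# pair of feature points whose Euclidean distance is at most R.
def r_close_arcs(points, R):
    pts = np.asarray(points, dtype=float)
    arcs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) <= R:
                arcs.append((i, j))
    return arcs

# Example: only the first two points are within radius 2 of each other.
arcs = r_close_arcs([(0, 0), (1, 0), (5, 0)], R=2.0)
```

The grouping cue is then evaluated only on these arcs, which keeps the total number of cue evaluations small for a local cue.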
3. Cue integration by connected components - The integration method is extremely
simple - all vertices in the same connected component of the measured graph, Gm are
hypothesized to be a single group. That is, every two vertices in the same group (and
only these vertex pairs) are connected by a sequence of arcs in Gm .
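The integration step can be sketched with a standard union-find structure (a generic implementation for illustration, not the code of the production system discussed later):

```python
# Sketch of the cue-integration step: vertices joined by a "1"-labelled arc
# in the measured graph G_m end up in the same hypothesized group. A simple
# union-find structure yields the connected components in near-linear time
# in the number of arcs.
def connected_components(n_vertices, arcs):
    parent = list(range(n_vertices))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in arcs:                      # union the endpoints of every arc
        parent[find(a)] = find(b)

    groups = {}
    for v in range(n_vertices):
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())

# Example: arcs (0-1) and (1-2) chain three features; vertex 3 stays alone.
comps = connected_components(4, [(0, 1), (1, 2)])
```

Note how a single chain of arcs suffices to merge features 0 and 2, which is exactly the sensitivity to false alarms discussed next.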
The proposed grouping (or more precisely, cue integration) criterion is definitely not new and was used for, say, clustering, in classical applications of pattern recognition. It may be implemented by an extremely fast algorithm, which is linear in the number of arcs [CLR89]. Note, however, that one path of pairwise grouped cues suffices for grouping two features, and that any other evidence for placing these feature pairs in separate groups is ignored.
Therefore, the grouping result is expected to be sensitive to false alarms. The main analytic
part of this paper considers this sensitivity, and analyses the quality of the grouping results
available from this simple algorithm.
Measuring the quality of grouping is a delicate issue, as in many practical situations it is not straightforward or even possible to specify the "correct" result of a grouping process. (An indirect evaluation of the grouping quality is possible by measuring its effect on higher processes, e.g. recognition; see [Jac88, SB93].) Still, we shall assume here that such a specification exists, in the form of an image partitioning into several subsets. In the simplest form, which we consider first, the image is partitioned into a "figure" subset and a noise "background" subset.
One quantifier of the grouping quality is the number of background points which are erroneously connected to the figure. Such false additions may be eliminated, however, by making the interaction radius R sufficiently small. A grouping algorithm associated with such a small R value cannot be considered satisfactory, however, as it will probably break the true groups into parts (or delete elements from them). Therefore, it is the tradeoff between false additions and fragmentation of the figure which should be evaluated. The next sections are devoted to calculating these quality measures.
3 Estimating the number of false additions
3.1 General
We analyze the CC grouping algorithm relative to a random image model, approximating the
common realistic case of a curve-like "figure" object embedded in uniformly distributed
"background" clutter. We assume, for simplicity, that this curve-like set, or figure, is a
straight line. More specifically, the feature point distribution is modeled on a discrete image:
the figure is a set of collinear pixels (e.g. one row). The probability of indeed having a
feature point in a pixel belonging to the line is ρ_f. The background noise may appear in every
non-figure pixel with probability ρ_bg. See Figure 1a for an instance of this random model. (If
the curve differs from a straight line but is still associated with low curvature, we expect
similar results, and this is indeed revealed in the simulations.) Note that four probability
parameters are involved: ρ_f and ρ_bg, which characterize the image, and ε_fa and ε_miss, which
characterize the cue reliability.
Our ultimate goal in this section is to find an analytic expression for the expected number
of background feature points connected to the figure. It turns out, quite intuitively, that this
number grows with the false alarm probability ε_fa, with the density of background features
ρ_bg, and with the interaction radius. The expressions for the number of false additions reveal that
these are the result of two growth processes: one related to features in the vicinity of the
figure and one related to the other features.
The growth in the number of false additions of this second type may even lead to "explosion":
a connected background feature which has many neighbors within the radius R, together
with a cue whose false alarm rate is not too low, implies that with high probability at least
one neighboring feature will be connected to it. If the same conditions
hold for all background features, then it is likely that almost all the image features will join
the figure in one large group, making the result of the grouping process useless. Note also
that, for the sake of simplicity, we shall consider an infinite image characterized by the above
random model. This way we do not have to consider the effect of image boundaries. The
total number of addition errors and the number of fragments then tend to infinity as well, and
therefore we shall consider their normalized values relative to a unit length of the figure.
To quantify this behavior in a simple analytic way, which gives more insight into the
process and provides simple design parameters, we take a continuous limit approximation,
and show that the behavior of the false addition process is regulated by a single parameter,
N_bg, which determines whether the false addition process converges or not. The critical value
of this parameter is 1. That is, if the process converges then N_bg < 1. Note that this parameter
depends on R, which may be set to satisfy the condition. If R is larger, the process "explodes"
and we get an infinite number of features in the large groups (or, practically, the whole image
becomes one large group). If R is substantially smaller, it is likely that only a few false
additions exist.
3.2 Distance and connectedness layers
Intuitively, the probability of finding a connected point in a particular non-figure (i.e. background)
pixel, ρ_c(L), depends on the distance L from the figure (see Figure 1b): closer
background points are more likely to be erroneously connected to the figure. The expected number
of false additions per unit length (pixel size) of the figure is

false additions = Σ_L ρ_c(L).   (1)
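Eq. (1) is a plain sum over distance layers and can be evaluated numerically once ρ_c is available. A sketch; the truncation bound `L_max` is our assumption:

```python
def expected_false_additions(rho_c, L_max=1000):
    """Eq. (1): expected number of false additions per unit figure length,
    obtained by summing rho_c(L) over background distances L = 1..L_max."""
    return sum(rho_c(L) for L in range(1, L_max + 1))
```

For any ρ_c(L) that decays geometrically with L the sum converges quickly, which is exactly the regime the convergence analysis below characterizes.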
Background points falsely connected to the figure may be directly connected to it, but
may also be connected through other background points which are already connected.
For our analysis we distinguish between these types of connections using the concept of a
connectedness layer (see Figure 2). A background point directly connected to the figure is
in the 1st layer. A background point which is connected to a point in the i-th layer, but
does not belong to any of the j-th layers, j ≤ i, is in the (i+1)-th layer. Let ρ^(i)_c(L)
be the probability of finding a connected point which belongs to the i-th layer in a particular
background pixel at distance L from the figure (see Figure 1b). Clearly,
Figure 1: An instance of the probabilistic model (a). ρ_f is the density of points on the figure (the
probability of finding a figure point at a given point of the line). ρ_bg is the density of background
points (the probability of finding a feature point in a given background image pixel). For
the image in (a), ρ_bg = 0.05. In the analysis we consider the probabilities as
a function of the distance L of the background feature v from the figure (b). (R is the
interaction radius.)
Figure 2: The layers in the measured graph: a layer consists of all the features between two
dotted curves (and some more features on the other side of the figure, which are not shown
here). Note that the many arcs between figure points are not shown here; only the arcs
associated with false alarms are shown.
ρ_c(L) = Σ_i ρ^(i)_c(L)   (2)
Estimating the number of false additions (eq. 1) is the goal of this section, and the
rest of it is devoted to this calculation. The results are demonstrated in Figure 4: if R is
larger than some threshold (eq. 6), then the probability of finding a connected feature point in
a pixel at distance L from the figure does not decrease to zero with L, meaning that connected
features are found over the whole plane. If R is smaller, the process converges.
3.3 Directly connected points
The false addition process begins close to the figure, with the features which are directly
connected to it. More specifically, the probability P^(1)_c(L) of a particular feature point v, at
distance L from the figure, being directly connected to the figure is the probability
that at least one arc connecting this background point to the figure points in the underlying
graph is associated with a false alarm error. Although the values taken by the cues are not
necessarily independent random variables, we shall rely on this pragmatic independence
assumption in our analysis.
For L ≤ R, the number m of figure points which are R-close to v is binomially distributed
with probability ρ_f and the maximal value given in eq. (4).
Therefore, the probability ρ^(1)_c(L) of finding a 1st-layer background point in a
particular background pixel follows (eq. 5).
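Under the independence assumption the direct-connection probability also has a simple exact form before any normal approximation: if m ~ Bin(n(L), ρ_f), then E[(1 − ε_fa)^m] = (1 − ρ_f ε_fa)^{n(L)}. The sketch below uses this identity; the choice n(L) = ⌊2√(R² − L²)⌋ for the maximal number of R-close figure pixels is our geometric assumption, not a formula quoted from the text.

```python
from math import sqrt, floor

def p_direct(L, R, rho_f, eps_fa):
    """P^(1)_c(L): probability that a background point at distance L is
    directly connected to the figure by at least one false-alarm arc.
    Uses E[(1 - eps)^m] = (1 - rho_f*eps)^n for m ~ Bin(n, rho_f)."""
    if L > R:
        return 0.0
    n = floor(2 * sqrt(R * R - L * L))  # assumed max count of R-close figure pixels
    return 1.0 - (1.0 - rho_f * eps_fa) ** n

def rho_direct(L, R, rho_f, rho_bg, eps_fa):
    """rho^(1)_c(L): probability of a 1st-layer point in a given pixel."""
    return rho_bg * p_direct(L, R, rho_f, eps_fa)
```

As expected, the probability decreases with the distance L and vanishes for L > R.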
3.4 Indirectly connected points
As mentioned above, connected background points (e.g., directly connected points) are a
potential source of additional false connections. Consider now a point v which does not

Figure 3: A non-connected feature point, at distance L from the figure, may join the connected
set if it is connected to another, connected background point (which is at distance X from
the figure). (This figure mainly serves to illustrate these distances.)
belong to any of the first i−1 layers. The probability that this point is in the i-th layer
is the probability that it is connected to at least one point of the (i−1)-th layer in its
R-neighborhood (Figure 3). Note that, in contrast to directly connected points (1st layer), we
now refer to background points at any distance from the figure. Let Q_c(X, L) be the
probability of the point v, at distance L from the figure, being connected to at least one point of
the (i−1)-th layer which is X-distant from the figure, and let Q̄_c(X, L) be its complement.
Let m be the number of points X-distant from the figure and R-close to v. This number
is binomially distributed with probability ρ^(i−1)_c(X) and the maximal value
defined in eq. (4). Therefore, the expression for Q̄_c(X, L) follows (eq. 7).
The probability ρ^(i)_c(L) that a given pixel contains a point from the i-th layer is equal to
the product of the probability ρ^(i−1)_free(L) that the pixel contains a feature point which does
not belong to the previous layers, and the probability P^(i)_c(L) that this point, if it indeed
exists in this pixel, belongs to this layer:

ρ^(i)_c(L) = P^(i)_c(L) ρ^(i−1)_free(L),   (8)

where by definition

ρ^(i)_free(L) = ρ_bg − Σ_{j=1}^{i} ρ^(j)_c(L).   (9)

Inserting (6) and (9) into (8) yields the explicit expression for ρ^(i)_c(L) (eq. 10).
Inserting eqs. (7,8) into eq. (10), and then into eq. (2), indeed gives the required expression for
the number of false additions. The result for particular ε_fa, ρ_f, ρ_bg is shown in Figure 4.
When the interaction radius is small, the number of connected points decreases as a function
of distance. When it is even smaller, the total number of false additions remains finite
(and in most such cases relatively small). When the interaction radius becomes
larger, however (R > 5 for the example related to Figure 4), the density of connected points
does not decrease to zero but remains positive at all distances. As the interaction radius
grows larger, the fraction of connected points increases further. For large R, nearly all
background points are connected to the figure.
Note that the false additions may be roughly divided into two major sets: the directly
connected points, whose number is upper bounded and which lie close to the figure, and
the indirectly connected features, which may be anywhere. The process of adding connected
points may diverge, and then the second set is naturally dominant. If the process converges,
however, then the first set is a substantial fraction, if not the dominant part, of the connected
points. See, for example, Figure 5 for a comparison of the number of false additions of the
first type with the total number of false additions. Therefore, we consider both processes
in our analysis, as well as in the continuous approximation considered next.
3.5 A continuous limit expression for the number of false additions
The previous derivation provided the expression for the required number of false additions,
and looking at the graphs may give some intuitive idea about its behavior. In this section,
we strengthen this intuitive understanding by showing that the results essentially
depend on two inherent parameters. The main tool we use is a continuous limit expression,
that is, a substitution of the binomial distributions by normal ones. The same derivation
may be done without this approximation, with strictly rigorous bounds, but it is somewhat
less elegant and is not considered here.
3.5.1 Directly connected points - continuous limit expression
For directly connected background points we may approximate the binomial distribution in
eq. (5) by a normal distribution, and thereby write down the continuous limit expression for
the probability P^(1)_c(L) of a given point v being directly connected to the figure (eq. 11).
This Gaussian integral can be evaluated in closed form (eq. 12).
Note that the resulting probability may be interpreted as a variation of a more intuitive
expression, which is easily obtained by assuming that the
number m of figure points which are R-close to a background feature point is not a random
variable but a deterministic number, equal to the expected value m(L) of that random variable.
The correction term is due to the non-proportional (higher)
contribution of small m values to the integral (eq. 11).
Here m_c denotes the average number of feature points from the figure which are falsely
connected to a background feature lying immediately next to the figure (L = 0). Note that
2Rρ_f is the expected number of R-close feature points, 2Rρ_f ε_fa is the expected number
of these features for which the cue makes a false (positive) decision, and the rest is just the
correction factor implied by the uneven contribution to the integral (eq. 11). We can see that
the expression for P^(1)_c(L) depends only on m_c and the normalized distance L/R.

Figure 4: The probability ρ_c(L) of finding a connected feature point in a pixel at distance L
from the figure.

Figure 5: For low interaction radii, the main contribution to the false additions comes from
the directly connected points. Here, for example, we compare ρ_c(L) and ρ^(1)_c(L) for a
situation in which the number of false additions converges.
3.5.2 Indirectly connected points
The continuous limit treatment of indirectly connected points is similar. Recall that Q̄_c(X, L)
is the probability that a feature point (L-distant from the figure) does not join the i-th layer
through a false positive cue relating it to another background feature point which belongs
to the (i−1)-th layer and is X-distant from the figure. Approximating the binomial
distribution by the normal one, we get an expression involving μ(X, L), where μ(X, L) is the
expected number of background feature points belonging to the (i−1)-th layer which are
X-distant from the figure and R-distant from a feature point which is L-distant from the
figure. Again, this is a Gaussian integral, and a closed-form evaluation is possible (eq. 16).
Again, this probability may be interpreted as a variation of a more intuitive expression,
resulting from the simplifying assumption that the number m of background neighbors is
deterministic and equal to the expected value μ(X, L). The
meaning of the additional term in eq. (16) is exactly the same as in the
case of directly connected points.
Replacing ε_fa by its small-value approximation, and using this expression, we can rewrite
expression (6) for the probability of the given point v at distance L being connected to the
figure in layer i (eq. 18); the last approximation holds because ρ^(i−1)_c is much smaller than
one under normal conditions.
Substituting μ(X, L) into eq. 18, and approximating the sum by an integral, we obtain
eq. 19, in which l denotes the normalized distance. Equation 19 specifies a recursion which
eventually determines the amount of false additions. To make the analysis more intuitive,
and to get rid of some constants, we prefer to use terms related to these numbers of false
additions. Specifically, let N^(i)_c(l) and N^(i)_free(l) be the correspondingly normalized
versions of ρ^(i)_c(l) and ρ^(i)_free(l) (eq. 20).
The meaning of N^(i)_c(l) is roughly the average number of connected points inside a disk of
radius R which are added at stage i. N^(i)_free(l) is the "potential number" of new points
to be added in the following stages. Note that it depends on ε_fa, as more reliable cues reduce
this potential, and that initially it is equal to N_bg.
Multiplying eq. 19 by the corresponding normalization factor yields the following recursive
relation for N^(i)_c(l) (eq. 21).
Note that, naturally, the higher the number of features added in one layer, the higher the
number of features added in the next layer will be, provided there is still a sufficient "potential
number". (The meaning of the exponent term is an average of N^(i)_c(l) over an R-close circle.)
The obtained evolution equation depends only on one intrinsic parameter, N_bg (eq. 20),
and the system evolves according to its value.
Note now that the total number of background points connected to the figure (per unit
length of it) is proportional to ∫_1^∞ ρ_c(L) dL. This integral converges only when ρ_c(l) → 0
as l → ∞. Since ρ_c(l) (= Σ_i ρ^(i)_c(l)) is proportional to N_c (= Σ_i N^(i)_c(l)), we may find
the necessary convergence condition for ρ_c(l) by evaluating this condition for N_c.
3.5.3 A necessary condition for convergence
We shall assume that N^(i)_c(l) is a smooth function and that, for l ≫ 1, its value is at most
linear in l. Under this assumption the integral in (21) may be evaluated (eq. 22).
We are interested in necessary conditions for the convergence of N_c to zero. Therefore,
we may assume that N^(i−1)_c(l) is small, and expand the exponent as a power series in
N^(i−1)_c(l) up to the linear term, leading to eq. (23).
This series always converges, as the number of features at any particular distance from
the curve (or in any finite distance range) is bounded. For the convergence of the sum N_c,
we must ask for a faster convergence, in which a given number of features added in one
layer adds a smaller number in the next layer. If we do not require that, then the process of
adding false additions at increasing distances from the figure does not stop, as more features
are available at these larger distances. Note, however, that if N_c converges, then for large l
only a very small fraction of the features is connected, and thus N^(i)_free ≈ N_bg. Then
relation (23) becomes the geometric progression

N^(i)_c ≈ N_bg N^(i−1)_c   (24)

and the condition for convergence is simply N_bg < 1.
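The limiting geometric progression (24) can be iterated directly to see the threshold at N_bg = 1. A sketch; N_bg is taken here as a given parameter, with its full expression in eq. (20):

```python
def total_indirect(n_bg, n0=1.0, max_layers=10000):
    """Sum the layer contributions under N^(i)_c ~ n_bg * N^(i-1)_c.
    For n_bg < 1 this converges (to n0 / (1 - n_bg)); otherwise it blows up,
    which corresponds to the "explosion" regime of the false addition process."""
    total, n = 0.0, n0
    for _ in range(max_layers):
        total += n
        n *= n_bg
        if n < 1e-12:  # contributions have become negligible
            break
    return total
```

Since N_bg depends on R, the critical value N_bg = 1 translates directly into the upper bound on the interaction radius used in the experiments below.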
Figure 6: The model of fragmentation: to make a single break, some cues must be missed.
4 The fragmentation errors and the implied tradeoff
As mentioned above, quantifying performance by counting only the number of false additions
may lead to absurd results. The trivial grouping algorithm, which does nothing and specifies
every feature as a separate group, is a perfect algorithm according to this criterion, but is of
course useless. Therefore, we add a balancing performance criterion: the number of fragments
into which a true group decomposes.
The probability P_frag of separating the figure between two particular data features on it
is approximately equal to the probability of missing all the arcs passing there (see Figure 6).
This approximation neglects the possibility of connecting the parts of the figure via background
points, which is of much lower probability, as it requires two false alarms. This simplest
approximation of the process implies that breaking the curve requires multiple misses:
one for every pair of feature points on the figure lying on opposite sides of the break
point. The expected number of such pairs gives the approximate probability of fragmentation.
This expression demonstrates that increasing the interaction radius exponentially decreases
the probability of breaking the curve at that point (see the similar analysis in [AL96]).
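A numerical sketch of this fragmentation estimate. The expected-pair expression is garbled in the source, so the default pair count `n_pairs = rho_f**2 * R**2 / 2` is our assumption (each straddling pair needs its arc to be missed independently):

```python
def p_frag(rho_f, eps_miss, R, n_pairs=None):
    """Approximate probability of a break at a given point of the figure:
    every arc straddling the break point must be missed independently.
    n_pairs defaults to rho_f**2 * R**2 / 2 (an assumed rough pair count)."""
    if n_pairs is None:
        n_pairs = rho_f ** 2 * R ** 2 / 2.0
    return eps_miss ** n_pairs
```

Because the pair count grows like R², the break probability falls exponentially fast in R², matching the exponential decrease noted in the text.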
Thus, choosing a small interaction radius reduces the number of false additions (by reducing
N_bg) but increases the fragmentation. This tradeoff is plotted in Figure 7. (Note
the logarithmic scale of the P_frag axis in this figure.) The plot is drawn for a particular set of
scene and cue parameters, typical of real images.

Figure 7: The tradeoff between the expected number of false additions (plotted, normalized
to the length of the figure curve, as the vertical coordinate) and the expected number of
break points per unit length of this figure curve (horizontal coordinate), for ε_miss = 0.2.
The points on the graph correspond to different R values.

For these parameters, we can see that choosing a low interaction radius produces acceptable
fragmentation. Choosing a larger radius gives only a negligible improvement in fragmentation
but significantly increases the number of false additions; therefore, it is not advisable.
Choosing a lower radius, on the other hand, causes significant fragmentation.
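The tradeoff can be explored by sweeping R with simple proxies for the two quality measures. This is a self-contained sketch: the direct-connection count is only a lower-bound proxy for the false additions (indirect layers are ignored), and the straddling-pair count ρ_f²R²/2 is our assumption.

```python
from math import sqrt, floor

def direct_false_additions(R, rho_f, rho_bg, eps_fa):
    """Proxy: 1st-layer false additions per unit figure length (both sides)."""
    total = 0.0
    for L in range(1, int(R) + 1):
        n = floor(2 * sqrt(R * R - L * L))  # assumed max R-close figure pixels
        total += 2 * rho_bg * (1.0 - (1.0 - rho_f * eps_fa) ** n)
    return total

def frag_probability(R, rho_f, eps_miss):
    """Proxy: break probability, with rho_f**2*R**2/2 straddling pairs."""
    return eps_miss ** (rho_f ** 2 * R ** 2 / 2.0)

# Sweeping R exposes the tradeoff: false additions grow, breaks vanish.
trade = [(R, direct_false_additions(R, 0.8, 0.05, 0.1), frag_probability(R, 0.8, 0.2))
         for R in (2, 4, 6, 8)]
```

Plotting one proxy against the other over the R sweep reproduces the qualitative shape of the tradeoff curve in Figure 7.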
5 Experimental Results
To check our model and the implied predictions, we implemented the connected components
algorithm and applied it to several synthetic and real examples. We describe two of these
experiments.
5.1 Synthetic Image
Synthetic images may be created together with ground truth, and may therefore be used
to test the theoretical aspects of the predictions. We used an image containing several
clean "blobs", edge-detected it, and then removed a fraction 0.2 of the edge points at random
locations. Then we added random shot noise to 0.05 of the remaining pixels (see Figure 8a). This
image includes a little less than 1000 figure points and about 3500 background points.
We used a common co-circularity cue, developed in [AL97], which could be tuned by
changing an internal threshold so that the miss and false alarm probabilities could be traded.
We also checked the algorithm with a synthetic cue, relying on the known ground truth, which
made mistakes with controlled probabilities. The latter case, of course, is not a realistic task,
but it helps to understand the algorithm's behavior.
Consider the grouping results obtained with the co-circularity cue for various interaction
radii (Figure 8). The three right images (Figures 8b, 8d, 8f) show the large groups (larger
than 20 edgels each) for three interaction radii (R = 3, 5, 7). Increasing R clearly decreases
the fragmentation and increases the number of false additions. Note that for the two smaller
radii, all the large groups are dominated by figure points. It is clear that CC grouping is
useful as an initial stage of grouping, which separates figure from background, or even as an
independent grouping algorithm by itself.
It is difficult to measure the number of false additions per unit length of the extracted
figure segments directly. Instead, we estimated the lengths of the groups by the number of figure
points inside them, and estimated the false additions measure as the ratio between the
number of background points (the rest of the points) and the number of figure points (known
from the ground truth) in the relatively large hypothesized groups. (Large groups were specified
as larger than either 10 or 20 features.) This false addition measure is plotted as a function of
the interaction radius in Figure 9 (for groups larger than 10). Note that for low R values, the
number of false additions increases slowly, but after some threshold it starts to rise much
more quickly, until it saturates due to the finite size of the image.
The fragmentation is measured indirectly by the number of groups containing more than
10 or 20 points. These numbers are plotted as a function of the radius in Figure 10. For small R
Figure 8: Grouping results for the synthetic example: the input image (a), all groups larger
than 20 for various R values (R = 3, 5, 7), superimposed, and the largest group for R = 7.
Note, for example, that the substantial groups are relatively clean, and that the largest group
is similar to the largest true groups, with still relatively few false additions.

Figure 9: The number of false additions per unit length as a function of the interaction radius,
for the synthetic image and the co-circularity cue. Note the slow growth for low interaction
radii and the saturation for very large radii.
values, increasing the radius leads to a lower number of groups and to lower fragmentation.
For R = 3 the groups consist of almost only figure points, but they are very fragmented,
and only part of the figure points participate in large groups (see Figure 8b). For moderate
R values the fragmentation is moderate, especially when recalling that the figure
originally consists of 8 smooth curves. Then, for larger R values, clutter groups
are created and increase the number of large groups. For such values the increased number
of groups does not reflect fragmentation but rather the additional clutter groups. Finally, for
still larger R values (say 12), all groups merge into just a few (useless) groups, which contain
both the figure and almost all the clutter.
A most interesting issue is the validity of our modeling and the implied predictions.
Qualitatively, we already found that increasing the interaction radius increases the false
additions, as implied also by our model. More quantitative results are shown in Figure 11,
which gives the false addition rates for the co-circularity cue described above and for a
co-circularity cue associated with a different threshold and a lower false alarm rate (the left and
right thick curves in the figure, respectively). The other curves in this figure correspond to
synthetic cues with various false alarm probabilities (ordered left to right).
We estimated the false alarm rate of the experimental cues by simply dividing the number
of arcs in the measured graph by the number of arcs in the underlying graph (excluding
the figure-figure arcs from both counts); the resulting empirical false alarm rates differed
for the two thresholds.

Figure 10: The number of groups larger than 10 and larger than 20 feature points as a function
of the interaction radius. The top curve is the number of groups larger than 10, and the bottom
curve is the number of groups larger than 20.
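The empirical estimate described here can be sketched as follows (the arc lists and `figure_ids` are hypothetical inputs; arcs are unordered vertex pairs):

```python
def empirical_false_alarm(measured_arcs, underlying_arcs, figure_ids):
    """Estimate eps_fa as the fraction of non-figure-figure underlying arcs
    that survive into the measured graph (figure-figure arcs excluded)."""
    fig = set(figure_ids)

    def non_ff(arcs):
        # keep arcs with at least one non-figure endpoint
        return [(i, j) for (i, j) in arcs if not (i in fig and j in fig)]

    denom = len(non_ff(underlying_arcs))
    return len(non_ff(measured_arcs)) / denom if denom else 0.0
```

Excluding the figure-figure arcs matters: those arcs are (mostly) correct detections, and counting them would bias the estimated false alarm rate upward.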
We can see that the synthetic cue provides an upper bound on the real results, which
is relatively close, especially if we compare these results to the proximity-only cue (the
leftmost curve). There are some discrepancies, though, and we attribute them to one
cause: it seems that, in contrast to our assumptions, the cue's behavior suffers from cross-dependence.
This is especially true for clutter features which are close to the figure: if a
feature is connected to one edgel on the figure, it is more likely to be co-circular with, and
thus connected to, others as well. We estimated from our measurements that for R = 5 the
average number of arcs connecting a clutter feature to the figure, given that it is connected,
is 3, which is much more than the expected value of 1 calculated from the empirical false
alarm rate and the average number of figure neighbors (about 5-6). Therefore, the effective
false alarm rate of the cue is much lower than the measured one, roughly 0.16/3 ≈ 0.05.
(Recall that in order to add a feature to the figure, the CC algorithm requires only one
connection.) With this rough approximation we see nice agreement, for small R values,
between the real cue behavior and the behavior of the synthetic cue.
For feature points far from the figure, the cross-dependence exists but is much weaker.
Therefore, the effective false alarm rate is not the one measured empirically, but is also not
as low as when the feature is close to the figure. Indeed, we get a nice match with the curve
associated with a synthetic cue of intermediate false alarm rate. We believe that the cue is
more reliable near the figure because there the feature points are excluded from the location
of the figure itself, where the highest probability of false alarm exists.
Although we get fairly accurate results with our current analysis, we intend to incorporate
the non-independence effects quantitatively in our future predictions, and to develop methods
for measuring both the cross-dependence and the effective false alarm rate of cues.
We found that a necessary condition for convergence is N_bg < 1, where N_bg is specified
in eq. 20. This condition imposes an upper bound on the interaction radius, which is 6.58
(if we take ε_fa = 0.16, as found empirically) or 8.20 (if we take the effective value of ε_fa
discussed above). Indeed, it is possible to see from either Figure 11 or Figure 10 that an
R value of 7-8 is the threshold, and that using a larger interaction radius leads to an excessive
number of groups and a large growth in the number of false additions, which means that
background features aggregate into groups of their own.
Regarding the speed of the algorithm, we distinguish here between the perceptual-information
extraction stage, in which the underlying graph is calculated and the cues
are measured, and the connected components "integration" stage. The first stage is
required in every grouping algorithm, and for fast operation requires an effective means of
finding the closest neighbors (see [AL97]). The connected components algorithm is linear in
the number of cues (or arcs of G_u). In our experiments it always ran in less than a second.
(See Figure 12, where the different curves correspond to the different cases of Figure 11.)
5.2 Real Images
Every algorithm should be tested on real images as well. Here we started with an even
harder task, and applied the CC algorithm to a corrupted real image which was itself not
simple. Specifically, we took a 256 × 226 CT head image and applied a standard

Figure 11: The number of false additions per unit length as a function of the interaction radius.
The darker curves correspond to the co-circularity cue with two different thresholds. The
lighter curves correspond to synthetic cues with various false alarm probabilities (left to right).
The true cue behaves approximately as a random cue with a false alarm probability of 0.05-0.10.
Figure 12: Execution CPU time (net user time, in seconds) of the connected components
algorithm as a function of the interaction radius. All the cases considered for the synthetic
cue are superimposed.
(Khoros) edge detection process to it, kept ρ_f ≈ 0.8 of the edge points, and added
additional clutter in the form of shot noise in 0.05 of the pixels (see Figure 13a). Note
that in fact ρ_bg > 0.05, as the edge detection process itself adds much clutter. The results
for different radii (Figures 13b, d, f) show the same qualitative behaviour as described above
for the synthetic image. Due to the high density of the figure points, the grouping process
collapses to a single large group containing the whole figure already for a relatively low
interaction radius. Note, however, that much of the added clutter is removed even with this
non-optimal choice of R. Note also that with an appropriate choice of R the large groups include
the main parts of the figure (Figure 13d). Furthermore, the groups created in this stage (e.g.
Figure 13c) may serve as the input to a more complex grouping process, which will then run
at a reduced computational cost due to the significantly lower number of features.
It would have been natural to test the algorithm also on the easier case of real images which
are not so corrupted, as these are the normal operating conditions. For technical
reasons, we could not test this case at the time of submission.
6 Conclusions
The connected components approach leads to an extremely fast grouping algorithm, which
unfortunately sometimes provides grouping inferior to that of more complex algorithms.
In this paper we considered this approach and provided, for the first time, an analysis capable
of predicting the grouping quality in terms of measurable, meaningful quantities. These
results may be used to optimize the CC grouping algorithm by choosing the interaction
radius R and, if possible, by tuning the error probabilities of the cue within the given
limitations. They can also help to discriminate between easy grouping tasks, which may be
carried out quickly by the CC algorithm, and more difficult tasks, which require a more
complex, slower approach.
It seems that many grouping tasks may be decomposed into easier and more difficult
sub-tasks. For grouping edgels on smooth boundaries, for example, connecting the edgels
into short segments is relatively easy, but calculating the final image partition, which requires
taking occlusions, junction structure, etc. into account, is a global process which seems more

Figure 13: Grouping results for the corrupted CT image (head).

Figure 14: Grouping results for the CT image (head).
difficult. Thus, one application of the CC grouping approach is as a preprocessor to more
complex grouping algorithms.
This paper describes work in progress, and many directions are open for improving the
basic algorithm and its analysis. Characterizing the cues without the independence
assumption, as mentioned above, is expected to lead to more accurate predictions.
Developing efficient postprocessing cleaning algorithms is of much practical interest as well.
References

Quantitative analysis of grouping processes.
Ground from figure discrimination.
Space and time bounds on indexing 3-D models from 2-D images.
Introduction to Algorithms.
Object Recognition by Computer.
The Use of Grouping in Visual Object Recognition.
Robust and efficient detection of salient convex groups.
Perceptual Organization and Visual Recognition.
Perceptual organization in computer vision: a review and a proposal for a classificatory structure.
Structural saliency: the detection of globally salient structures using a locally connected network.
On the role of structure in vision.
--TR
Introduction to algorithms
Object recognition by computer
Recognizing solid objects by alignment with an image
Space and Time Bounds on Indexing 3D Models from 2D Images
Perceptual Organization for Scene Segmentation and Description
Robust and Efficient Detection of Salient Convex Groups
Random perturbation models for boundary extraction sequence
A Generic Grouping Algorithm and Its Quantitative Analysis
Perceptual organization in computer vision
Use of the Hough transformation to detect lines and curves in pictures
Perceptual Organization and Visual Recognition
Perceptual Organization for Artificial Vision Systems
Figure-Ground Discrimination
B-spline Contour Representation and Symmetry Detection
Keywords: quantitative predictions; perceptual organization; connected components algorithm; grouping performance analysis
Edge Detection by Helmholtz Principle

Desolneux, Moisan and Morel

Abstract. We apply to edge detection a recently introduced method for computing geometric structures in a digital image, without any a priori information. According to a basic principle of perception due to Helmholtz, an observed geometric structure is perceptually meaningful if its number of occurrences would be very small in a random situation: in this context, geometric structures are characterized as large deviations from randomness. This leads us to define and compute edges and boundaries (closed edges) in an image by a parameter-free method. Maximal detectable boundaries and edges are defined, computed, and the results compared with the ones obtained by classical algorithms.

1. Introduction
In statistical methods for image analysis, one of the main problems is the choice of an adequate prior. For example, in the Bayesian model (Geman and Geman, 1984), given an observation "obs", the aim is to find the original "model" by computing the Maximum A Posteriori (MAP) of

P[model | obs] ∝ P[obs | model] · P[model].

The term P[obs | model] represents the degradation (superimposition of a Gaussian noise, for example) and the term P[model] is called the prior. This prior plays the same role as the regularity term in the variational framework. It has to be fixed, and it is generally difficult to find a good prior for a given class of images. It is also probably impossible to give an all-purpose prior!
© 2001 Kluwer Academic Publishers. Printed in the Netherlands.

In (Desolneux et al., 1999) and (Desolneux et al., 2000), we have outlined a different statistical approach, based on phenomenological observations coming from Gestalt theory (Wertheimer, 1923). According to a perception principle which seems to go back to Helmholtz, every large deviation from a "uniform noise" image should be perceptible, provided this large deviation corresponds to an a priori fixed list of geometric structures (lines, curves, closed curves, convex sets, spots, local groups, ...). Thus, there still is an a priori geometric model but, instead of being quantitative, this model is merely qualitative. Let us illustrate how this should work for "grouping" black dots on a white sheet. Assume we have a white image with black dots spread out. If some of them form a cluster, say, in the center of the image, then, in order to decide whether this cluster indeed is a group of points, we compute the expectation of this grouping event happening by chance if the dots were uniformly distributed in the image. If this expectation happens to be very low, we decide that the group in the center is meaningful. Thus, instead of looking for objects as close as possible to a given prior model, we consider a "wrong" and naive model, actually a uniform random distribution, and then define the "objects" as large deviations from this generic model. One can find in (Lowe, 1985) a very close formulation of computer vision problems.
We may call this method Minimal A Posteriori Expectation, where the prior for the image is a uniform random noise model. Indeed, the groups (geometric structures, gestalts¹) are defined as the best counterexamples, i.e. the least expected. Those counterexamples to the uniform noise assumption are taken in a restricted geometric class. Notice that not all such counterexamples are valid: Gestalt theory fixes a list of perceptually relevant geometric structures which are supposedly looked for in the perception process. The computation of their expectation in the uniform noise model validates their detection: the less expected they are in the uniform noise model, the more perceptually meaningful they will be.

¹ We choose to write gestalt(s) instead of the German original Gestalt(en). We maintain the German spelling for "Gestalt theory".

This uniform noise prior is generally easy to define. Consider for example the case of orientations: since we do not have any reason to favour some directions, the prior on the circle S¹ will be the uniform distribution. We applied this method in a previous paper dedicated to the detection of meaningful alignments (Desolneux et al., 1999). In (Desolneux et al., 2000) we generalized the same method to the definition of what we called "maximal meaningful modes" of a histogram. This definition is crucial in the detection of many geometric structures or gestalts, like groups of parallel lines, groups of segments with similar lengths, etc.

It is clear that the above outlined Minimum A Posteriori method will prove its relevance in Computer Vision only if it can be applied to each and all of the gestalt qualities proposed by phenomenology. Actually, we think the method might conversely contribute to a more formal and general mathematical definition of geometric structures than just the
ones coming from the usual plane geometry. Now, for the time being, we wish to validate the approach by matching the results with all of the classically computed structures in image analysis. In this paper, we shall address the comparison of edge and boundary detectors obtained by the Minimum a Posteriori method with the ones obtained by state-of-the-art segmentation methods.
A main claim in favour of the Minimum a Posteriori method is its reduction to a single parameter, the meaningfulness of a geometric event depending only on the difference between the logarithm of the false alarm rate and the logarithm of the image size! We just have to fix this false alarm rate, and the dependence of the outcome is anyway a log-dependence on this rate, so that the results are very insensitive to a change. Our study of edge detection will confirm this result, with slightly different formulas though.
In addition, and although the list of geometric structures looked for is wide (probably more than ten in Gestalt theory), the theoretical construction will make sense if they are all deduced by straightforward adaptations of the same methodology to the different geometric structures. Each case of geometric structure deserves, however, a particular study, inasmuch as we have to fix in each case the "uniform noise" model against which we detect the geometric structure. We do not claim either that what we do is 100% new: many statistical studies on images propose a "background" model against which a detection is tested; in many cases, the background model is a merely uniform noise, as the one we use here. Optimal thresholds have been widely addressed for detection or image thresholding (Abutaleb, 1989; Guy and Medioni, 1992; Pun, 1981; Weszka, 1978). Also, many applied image analysis and engineering methods, in view of some detection, address the computation of a "false alarm rate". Our "meaningfulness" is nothing but such a false alarm rate, but applied to very general geometric objects instead of particular looked-for shapes and events.
As was pointed out to us by David Mumford, our method is also related to statistical hypothesis testing, where the asked question is: does the observation follow the prior law given by the Helmholtz principle? The gestalts will be the "best proofs" (in terms of the a priori fixed geometric structures) that the answer to this question is no. Let us illustrate what is being done in the hypothesis testing language, by taking the case of the detection of alignments.
Let us summarize: not all geometric structures are perceptually relevant; a small list of the relevant ones is given in Gestalt theory; we can "detect" them one by one by the above explained Helmholtz principle, as large deviations from randomness. Now, the outcome is not a global interpretation of the image but rather, for each gestalt quality (alignment, parallelism, edges), a list of the maximal detectable events. The maximality is necessary, as the following example shows, which can be adapted to each other gestalt: assume we have detected a dense cluster of black dots; this means that the expectation of such a big group is very small for a random uniform distribution of dots. Now, very likely, many subgroups of the detected dots and also many larger groups will have a small expectation too. So we can add spurious elements to the group and still have a detectable group. Thus, maximality is very relevant in order to obtain the best detectable group. We say that a group or gestalt is "maximal detectable" if any subgroup and any group containing it are less detectable, that is, have a larger expectation.
We shall address here one of the serpents de mer of Computer Vision, namely "edge" and "boundary" detection. We define an "edge" as a level line along which the contrast of the image is strong. We call "boundary" a closed edge. We shall in the following give a definition of meaningfulness and of optimality for both objects. Then, we shall show experiments and discuss them. A comparison with the classical Mumford-Shah segmentation method will be made, and also with the Canny-Deriche edge detector. We shall give a (very simple in that case) proof of the existence of maximal detectable gestalts, applied to edges. What we do on edges won't be a totally straightforward extension of the method we developed for alignments in (Desolneux et al., 1999). Indeed, we cannot do for edge or boundary strength as for orientation, i.e. we cannot assume that the modulus of the gradient of an image is uniformly distributed.
2. Contrasted Boundaries
We call "contrasted boundary" any closed curve, long enough, with strong enough contrast, which fits well the geometry of the image, namely, is orthogonal to the gradient of the image at each one of its points. We will first define ε-meaningful contrasted boundaries, and then maximal meaningful contrasted boundaries. Notice that this definition depends upon two parameters (long enough, contrasted enough) which will usually be fixed by thresholds in a computer vision algorithm, unless we have something better to say. In addition, most boundary detection methods will, like the snake method (Kass et al., 1987), introduce regularity parameters for the searched-for boundary (Morel and Solimini, 1994). If we remove the condition "long enough", we can have boundaries everywhere, as is patent in the classical Canny filter (Canny, 1986).
The considered geometric event will be: a strong contrast along a level line of an image. Level lines are curves directly provided by the image itself. They are a fast and obvious way to define global, contrast-insensitive candidates for "edges" (Caselles et al., 1996). Actually, it is well acknowledged that edges, whatever their definition might be, are as orthogonal as possible to the gradient (Canny, 1986; Davis, 1975; Duda and Hart, 1973; Martelli, 1972; Rosenfeld and Thurston, 1971). As a consequence, we can claim that level lines are the adequate candidates for following up local edges. The converse statement is false: not all level lines are "edges". The claim that image boundaries (i.e. closed edges) in the senses proposed in the literature (Zucker, 1976; Pavlidis, 1986) also are level lines is a priori wrong. How wrong it is will come out from the experiments, where we compare an edge detector with a boundary detector. Surprisingly enough, we will see that they can give comparable results.
We now proceed to define precisely the geometric event: "at each point of a length-l (counted in independent points) part of a level line, the contrast is larger than μ". Then, we compute the expectation of the number of occurrences of such an event (i.e. the number of false alarms). This will define the thresholds: minimal length of the level line, and also minimal contrast, in order to be meaningful. We will give some examples of typical numerical values for these thresholds in digital images. Then, as we mentioned has been done for other gestalts like alignments and histograms, we will define here a notion of maximality and derive some properties.
2.1. Definitions
Let u be a discrete image of size N × N. We consider the level lines at quantized levels λ1, ..., λk. The quantization step q is chosen in such a way that level lines make a dense covering of the image: if e.g. this quantization step q is 1 and the natural image ranges from 0 to 256, we get such a dense covering of the image. A level line can be computed as a Jordan curve contained in the boundary of a level set with level λ.
Notice that along a level line, the gradient of the image must be everywhere above zero. Otherwise, the level line contains a critical point of the image and is highly dependent upon the image interpolation method. Thus, we consider in the following only level lines along which the gradient is not zero. The interpolation considered in all experiments below is the order-zero interpolation (the image is considered constant on each pixel and the level lines go between the pixels).
Let L be a level line of the image u. We denote by l its length counted in independent points. In the following, we will consider that points at a geodesic distance (along the curve) larger than 2 are independent (i.e. the contrasts at these points are independent random variables). Let x1, ..., xl denote the l considered points of L. For a point x ∈ L, we will denote by c(x) the contrast at x. It is defined by

c(x) = |∇u(x)|,

where ∇u is computed by a standard finite difference on a 2 × 2 neighborhood (Desolneux et al., 2000). For μ ∈ R+, we consider the event: "for all i = 1, ..., l, c(xi) ≥ μ", i.e. each point of L has a contrast larger than μ. From now on, all computations are performed in the Helmholtz framework explained in the introduction: we make all computations as though the contrast observations at the xi were mutually independent. Since the l points are independent, the probability of this event is

P[∀i, c(xi) ≥ μ] = H(μ)^l,

where H(μ) is the probability for a point on any level line to have a contrast larger than μ. An important question here is the choice of H(μ). Shall we consider that H(μ) is given by an a priori probability distribution, or is it given by the image itself (i.e. by the histogram of the gradient norm in the image)? In the case of alignments, we took by Helmholtz principle the orientation at each point of the image to be a random variable, uniformly distributed on [0, 2π]. Here, in the case of contrast, it does not seem sound at all to consider that the contrast is uniformly distributed. In fact, when we observe the histogram of the gradient norm of a natural image (see Figure 1), we notice that most of the points have a "small" contrast (between 0 and 3), and that only a few points are highly contrasted. This is explained by the fact that a natural image contains many flat regions (the so-called "blue sky effect", (Huang and Mumford, 1999)). In the following, we will consider that H(μ) is given by the image itself, which means that

H(μ) = #{x : |∇u(x)| ≥ μ} / M,

where M is the number of pixels of the image where ∇u ≠ 0. In order to define a meaningful event, we have to compute the expectation of the number of occurrences of this event in the observed image. Thus, we first define the number of false alarms.
DEFINITION 1 (Number of false alarms). Let L be a level line with length l, counted in independent points. Let μ be the minimal contrast of the points x1, ..., xl of L. The number of false alarms of this event is defined by

NFA(L) = N_ll × H(μ)^l,

where N_ll is the number of level lines in the image.

Notice that the number N_ll of level lines is provided by the image itself. We now define ε-meaningful level lines. The definition is analogous to the definition of ε-meaningful modes of a histogram or to the definition of alignments: the number of false alarms of the event must be less than ε.
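The empirical tail distribution H(μ) and the number of false alarms of Definition 1 translate directly into a few lines of Python; this is a sketch with our own function names, not the authors' code:

```python
import numpy as np

def empirical_H(grad_norm):
    """Build H: H(mu) = fraction of pixels with nonzero gradient
    whose gradient norm is >= mu (the prior is taken from the image
    itself, as in the text)."""
    vals = np.sort(grad_norm[grad_norm > 0].ravel())
    M = len(vals)
    def H(mu):
        # number of pixels with |grad u| >= mu, divided by M
        return (M - np.searchsorted(vals, mu, side="left")) / M
    return H

def nfa_boundary(H, mu, l, n_level_lines):
    """NFA(L) = N_ll * H(mu)^l for a level line with l independent
    points and minimal contrast mu (Definition 1); the line is an
    eps-meaningful boundary when this value is <= eps."""
    return n_level_lines * H(mu) ** l
```

A longer line, or a higher minimal contrast, can only decrease the NFA; this is the monotonicity exploited in Section 2.2 below.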
DEFINITION 2 (ε-meaningful boundary). A level line L with length l and minimal contrast μ is an ε-meaningful boundary if

NFA(L) = N_ll × H(μ)^l ≤ ε.

The above definition involves two variables: the length l of the level line and its minimal contrast μ. The number of false alarms of an event measures the "meaningfulness" of this event: the smaller it is, the more meaningful the event is.
Let us now proceed to define "edges". We denote by N_llp the number of pieces of level lines in the image.

DEFINITION 3 (ε-meaningful edge). A piece of level line E with length l and minimal contrast μ is an ε-meaningful edge if

NFA(E) = N_llp × H(μ)^l ≤ ε.

Here is how N_llp is computed: we first compute all level lines at uniformly quantized levels (the grey-level quantization step is 1, and the grey level generally ranges from 1 to 255). For each level line L_i with length l_i, we compute its number of pieces, sampled at pixel rate, the length unit being the pixel side. We then have

N_llp = Σ_i (number of pieces of L_i).

This fixes the used number of samples. This number of samples will be fair for a 1-pixel accurate edge detector. Clearly, we do detection and not optimization of the detected edge: in fact, according to Shannon conditions, edges have a width of between two and three pixels. Thus, the question of finding the "best" edge representative among the found ones is not addressed here, but it has been widely addressed in the literature (Canny, 1986; Davis, 1975).
Figure 1. From left to right: 1. original image; 2. histogram of the norm of the gradient; 3. its repartition function (μ ↦ P[|∇u| ≥ μ]).
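The contrast c(x) = |∇u(x)| of Section 2.1 can be computed by finite differences on each 2 × 2 block of pixels. The exact stencil below is our assumption of a standard 2 × 2 scheme, not a transcription of (Desolneux et al., 2000):

```python
import numpy as np

def gradient_norm_2x2(u):
    """Gradient norm on each 2x2 block of pixels: average the two
    horizontal (resp. vertical) differences of the block, then take
    the Euclidean norm."""
    u = np.asarray(u, dtype=float)
    ux = 0.5 * (u[:-1, 1:] + u[1:, 1:] - u[:-1, :-1] - u[1:, :-1])
    uy = 0.5 * (u[1:, :-1] + u[1:, 1:] - u[:-1, :-1] - u[:-1, 1:])
    return np.hypot(ux, uy)
```

Feeding the resulting array to a histogram routine reproduces the kind of heavy-near-zero distribution shown in Figure 1.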
2.2. Thresholds
In the following we will denote by F the function defined by

F(μ, l) = N_ll × H(μ)^l.

Thus, the number of false alarms of a level line of length l and minimal contrast μ is simply F(μ, l). Since the function μ ↦ H(μ) is decreasing, and since for all μ we have H(μ) ≤ 1, we obtain the following elementary properties.
If we fix μ and l ≤ l', then F(μ, l') ≤ F(μ, l), which shows that if two level lines have the same minimal contrast, the more meaningful one is the longer one.
If we fix l and μ ≤ μ', then F(μ', l) ≤ F(μ, l), which shows that if two level lines have the same length, the more meaningful one is the one with the higher contrast.
When the contrast μ is fixed, the minimal length l_min(μ) of an ε-meaningful level line with minimal contrast μ is

l_min(μ) = log(ε / N_ll) / log H(μ).

Conversely, if we fix the length l, the minimal contrast μ_min(l) needed to become ε-meaningful is such that

H(μ_min(l)) = (ε / N_ll)^(1/l).
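The two thresholds above can be sketched in code under the same notations (ε, N_ll, and an empirical H passed in as a function); the helper names are ours:

```python
import math

def min_length(H, mu, n_level_lines, eps=1.0):
    """Smallest integer l with n_level_lines * H(mu)^l <= eps,
    i.e. the ceiling of l_min(mu) = log(eps / N_ll) / log H(mu).
    Requires 0 < H(mu) < 1."""
    return math.ceil(math.log(eps / n_level_lines) / math.log(H(mu)))

def min_contrast(H, l, n_level_lines, candidate_mus, eps=1.0):
    """Smallest candidate contrast mu with H(mu) <= (eps / N_ll)^(1/l),
    scanned over a grid of observed contrast values."""
    target = (eps / n_level_lines) ** (1.0 / l)
    for mu in sorted(candidate_mus):
        if H(mu) <= target:
            return mu
    return None  # no contrast on the grid is high enough
```

For instance, with the (hypothetical) tail H(μ) = 2^(−μ) and N_ll = 1000, a line of minimal contrast 1 needs at least 10 independent points to be 1-meaningful.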
2.3. Maximality
In this subsection, we address two kinds of maximality, for the edges and for the boundaries. Let us start with boundaries. A natural relation between closed level lines is given by their inclusion (Monasse, 1999). If C and C' are two different closed level lines, then C and C' cannot intersect. Let D and D' denote the bounded domains surrounded by C and C'. Either D ∩ D' = ∅, or one domain contains the other (D ⊂ D' or D' ⊂ D). We can consider, as proposed by Monasse, the inclusion tree of all level lines. From now on, we work on the subtree of the detected level curves, that is, the ones for which F(μ, l) ≤ ε, where ε is our a priori fixed expectation of false alarms. (In practice, we take ε = 1 in all experiments.) On this subtree, we can, following Monasse, define what we shall call a maximal monotone level curve interval, that is, a sequence of level curves C_1, ..., C_k such that:
- C_{i+1} is the unique son of C_i in the inclusion tree;
- the interval is maximal (not contained in a longer one);
- the grey levels of the detected curves of the interval are either decreasing from C_1 to C_k, or increasing from C_1 to C_k.
We can see many such maximal monotone intervals of detected curves in the experiments: they roughly correspond to "fat" edges, made of several well-contrasted level lines. The edge detection ideology tends to define an edge by a single curve. This is easily achieved by selecting the best contrasted edge along a series of parallel ones.

DEFINITION 4. We associate with each maximal monotone interval its optimal level curves, that is, the ones for which the false alarms number F(μ, l) is minimal along the interval. We call "optimal boundary map" of an image the set of all optimal level curves.
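Definition 4 amounts to one argmin per monotone interval. A minimal sketch, under the assumption that each curve is summarized by its pair (μ, l) and that the nested intervals have already been extracted from the inclusion tree:

```python
def false_alarms(curve, H, n_level_lines):
    """F(mu, l) = N_ll * H(mu)^l for a curve given as a (mu, l) pair."""
    mu, l = curve
    return n_level_lines * H(mu) ** l

def optimal_boundary_map(monotone_intervals, H, n_level_lines):
    """Keep, in each maximal monotone interval of detected level
    curves, the curve whose number of false alarms is minimal
    (Definition 4)."""
    return [min(interval, key=lambda c: false_alarms(c, H, n_level_lines))
            for interval in monotone_intervals]
```

Each "fat" edge, made of several parallel well-contrasted level lines, is thus reduced to its single best-detected representative.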
This optimal boundary map will be compared in the experiments with classical edge detectors and segmentation algorithms.
We now address the problem of finding optimal edges among the detected ones. We won't be able to proceed as for the boundaries. Although the pieces of level lines inherit the same inclusion structure as the level lines, we cannot compare two of them belonging to different level curves for detectability, since they can have different positions and lengths. We can instead compare two edges belonging to the same level curve. Our main aim is to define on each curve a set of disjoint maximally detectable edges. In the following, we denote by NF(E) = N_llp × H(μ)^l the false alarm number of a given edge E with minimal gradient norm μ and length l.

DEFINITION 5. We call maximal meaningful edge any edge E such that, for any other edge E' on the same level curve with E' ⊂ E or E ⊂ E', we have NF(E) ≤ NF(E').

This definition follows (Desolneux et al., 1999) and (Desolneux et al., 2000), where we applied it to the definition of maximal alignments and maximal modes of a histogram.
PROPOSITION 1. Two maximal edges cannot meet.

Proof: Let E and E' be two maximal, distinct and non-disjoint meaningful edges on a given level curve, and let μ and μ' be the respective minima of the gradient of the image on E and E'. Assume e.g. that μ' ≤ μ. Then E ∪ E' has the same minimal contrast μ' as E' but is longer. Thus, by the remark of the preceding subsection, we have F(μ', l(E ∪ E')) ≤ F(μ', l(E')), which implies that E ∪ E' has a smaller number of false alarms than E'. Thus, E' is not maximal. As a consequence, two maximal edges cannot meet.
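Definition 5 can be checked by brute force on a single curve. The following sketch (quadratic in the curve length; the naming is ours) enumerates all pieces, keeps the ε-meaningful ones, and discards any piece beaten by a nested piece:

```python
def maximal_meaningful_edges(contrasts, H, n_pieces, eps=1.0):
    """On one level curve (list of per-point contrasts), return the
    pieces [i, j) that are eps-meaningful and whose NFA is not beaten
    by any piece containing them or contained in them (Definition 5)."""
    n = len(contrasts)
    pieces = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
    # NFA(E) = N_llp * H(min contrast on E)^length(E)
    nfa = {(i, j): n_pieces * H(min(contrasts[i:j])) ** (j - i)
           for (i, j) in pieces}
    maximal = []
    for (i, j) in pieces:
        if nfa[(i, j)] > eps:
            continue  # not meaningful at all
        nested = [(a, b) for (a, b) in pieces if (a, b) != (i, j)
                  and ((a <= i and j <= b) or (i <= a and b <= j))]
        if all(nfa[(i, j)] <= nfa[p] for p in nested):
            maximal.append((i, j))
    return maximal
```

Consistently with Proposition 1, the pieces returned on a given curve never overlap.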
3. Experiments
INRIA desk image (Figure 2). In this experiment, we compare our method with two other methods: the Mumford and Shah image segmentation and the Canny-Deriche edge detector.
In the Mumford and Shah model (Mumford and Shah, 1985), given an observed image u defined on the domain D, one looks for the piecewise constant approximation v of u that minimizes the functional

E(v) = ∫_D (u − v)² dx + λ · length(K(v)),
Figure 2. First row: left: original image; right: boundaries obtained with the Mumford-Shah model (1000 regions). Second row: edges obtained with the Canny-Deriche edge detector, for two different threshold values (2 and 15). Third row: edges (left) and boundaries (right) obtained with our model. Last row: reconstruction with the Mumford-Shah model (left) and with our model (right). This last reconstruction is easily performed by the following algorithm: attribute to each pixel x the level of the smallest (for inclusion) meaningful level line surrounding x (see (Monasse, 1999)).
where length(K(v)) is the one-dimensional measure of the discontinuity set of v, and λ a parameter. Hence, this energy is a balance between a fidelity term (the approximation error in L² norm) and a regularity term (the total length of the boundaries). The result v, called a segmentation of u, depends upon the parameter λ, which indicates how to weight both terms. As shown in Figure 2, the Mumford-Shah model generally produces reasonable boundaries, except in "flat" zones where spurious boundaries often appear (see the front side of the desk for example). This is easily explained: the a priori model is that the image is piecewise constant with boundaries as short as possible. Now, the image does not fit the model exactly: the desk in the image is smooth but not flat. The detected "wrong" boundary on the desk is necessary to divide the desk into flat regions. The same phenomenon occurs in the sky of the cheetah image (next experiment).
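The balance the functional strikes is easiest to see in one dimension. The sketch below minimizes the piecewise constant Mumford-Shah energy on a 1-D signal by dynamic programming; it is our illustration of the energy, not the 2-D region-merging scheme used to produce Figure 2:

```python
import numpy as np

def mumford_shah_1d(u, lam):
    """Piecewise-constant Mumford-Shah in 1-D: minimize
    (squared approximation error) + lam * (number of jumps)
    by dynamic programming. Returns the sorted jump positions."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    # prefix sums give the squared error of any constant segment in O(1)
    s1 = np.concatenate([[0.0], np.cumsum(u)])
    s2 = np.concatenate([[0.0], np.cumsum(u * u)])

    def sse(i, j):  # best squared error for a constant fit on u[i:j]
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

    best = [0.0] * (n + 1)  # best[j] = optimal energy for u[:j]
    cut = [0] * (n + 1)     # start index of the last segment
    for j in range(1, n + 1):
        costs = [best[i] + sse(i, j) + (lam if i > 0 else 0.0)
                 for i in range(j)]
        cut[j] = int(np.argmin(costs))
        best[j] = costs[cut[j]]
    # backtrack the segment starts; drop the leading 0
    jumps, j = [], n
    while j > 0:
        jumps.append(cut[j])
        j = cut[j]
    return sorted(jumps)[1:]
```

A small λ keeps the true jump; a very large λ suppresses all boundaries, which is exactly the over- and under-segmentation trade-off discussed above.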
The Canny-Deriche filter (Canny, 1986; Deriche, 1987) is an optimization of Canny's well-known edge detector, roughly consisting in the detection of the maxima of the norm of the gradient in the direction of the gradient. Notice that, in contrast with the Mumford-Shah model and with our model, it does not produce a set of boundaries (i.e. one-dimensional structures) but a discrete set of points that still have to be connected. It depends on two parameters: the width of the impulse response, generally set to 1 pixel, and a threshold on the norm of the gradient that selects candidates for edge points. As we can see in Figure 2, the result is very dependent on this threshold. Thus, we can consider the meaningfulness as a way to select the right edges. If Canny's filter were completed to provide us with pieces of curves, our algorithm could a posteriori decide which of them are meaningful. Notice that many Canny edges are found in flat regions of the image, where no perceptual boundary is present. If we increase the threshold, as is done on the right, the detected edges look perceptually more correct, but they are broken.
Cheetah image (Figure 3). This experiment compares our edge detector with the Mumford-Shah model. As before, we observe that the Mumford-Shah model produces some spurious boundaries on the background, due to the inadequacy of the piecewise constant model. This means that a more sophisticated model must be applied if we wish to avoid such spurious boundaries: the general Mumford-Shah model replaces the piecewise constant constraint by a smoothness term (the Dirichlet integral ∫ |∇v|²) on each region. Now, adding this term means using a two-parameter model since, then, the Mumford-Shah functional has three terms whose relative weights must be fixed.
Figure 3. First row: original image (left) and boundaries obtained with the Mumford-Shah model with 1000 regions (right). Second row: edges (left) and boundaries (right) obtained with our method.
DNA image (Figure 4). This experiment illustrates the concept of "optimal boundaries" that we introduced previously. When we compute the boundaries of the original image, each "spot" produces several parallel boundaries due to the important blur. With the definition of maximality we adopted, we select exactly one boundary for each spot.
Figure 4. From top to bottom: 1. original image; 2. boundaries; 3. optimal boundaries.
Figure 5. Up: original image. Down left: boundaries. Down right: optimal boundaries.
Segments image (Figure 5). As in the DNA experiment, the "optimal boundaries" allow us to select exactly one boundary per object (here, hand-drawn segments). In particular, the number of boundaries we find (21) counts exactly the number of segments.
Noise image (Figure 6). This image is obtained as a realization of a Gaussian noise with standard deviation 40. For small values of ε, no boundaries are detected. For larger values of ε, some boundaries begin to be detected (e.g. 148 for one of the values illustrated in Figure 6).
Figure 6. Left: an image of a Gaussian noise with standard deviation 40. Right: the meaningful boundaries found for larger values of ε.
4. Discussion and conclusion
In this discussion, we shall address objections and comments made to us by the anonymous referees and also by Jose-Luis Lisani, Yves Meyer and Alain Trouve. In all that follows, we call respectively "boundary detection algorithm" and "edge detection algorithm" the algorithms we proposed. The other edge or boundary detection algorithms put into the discussion will be called by their authors' names (Mumford-Shah, Canny).
4.1. Eight objections and their answers
Objection 1: the blue sky effect.
If a significant part of a natural image happens to be very flat, because of a "blue sky effect", then most level lines of the image will be detected as meaningful. If (e.g.) one tenth of the image is a black flat region, then the histogram of the gradient has a huge peak near zero. Thus, all gradients slightly above this peak will have a probability (about 9/10) significantly smaller than 1. As a consequence, all sufficiently long level lines will be meaningful. In practice, this means that the image will be plagued with detected level lines with a small contrast. Are these detected level lines not non-edges under any decent criterion?

Answer 1: If the image has a wide "blue sky", then most level lines of the ground are meaningful, because any strong deviation from zero becomes meaningful. This effect can be checked on the cheetah image: the structured and contrasted ground has lots of detected boundaries (and the sky has none). This outcome can be interpreted in the following way: when a flat region is present in the image, it gives, via the gradient histogram, an indirect noise estimate. Every gradient which is above the noise gradient of the flat region becomes meaningful, and this is, we think, correct.
Objection 2: dependence upon windows.
Then the detection of a given edge depends upon the window (containing the edge) on which you apply the algorithm?

Answer 2: Yes, the algorithm is global and is affected by a reframing of the image. If (e.g.) we detect edges on a window essentially containing the sky, we shall detect more boundaries (see Figure 7), and if we compute edges in a window only containing the contrasted boundaries, we shall detect fewer boundaries.

Figure 7. First row: left: original image (chinese landscape); right: maximal meaningful edges. Second row: left: the same algorithm, but run on a subwindow (drawn on the left image); right: the result (in black), with in light grey the edges that were detected in the full image.
Question 3: how to compute edges with multiple windows?
Thus, you can apply your detection algorithm on any window of the image and get more and more edges!

Answer 3: Yes but, first, if the window is too small, no edge will be detected at all. Second, if we apply the algorithm to, say, 100 windows, we must take into account in our computations that the number of tests is increased. Thus, we must decrease accordingly the value of ε in order to avoid false detections. An easy way to do it is this: if we have 100 windows, we can take on each one a value of ε one hundred times smaller, so that the global number of false alarms over all windows remains equal to 1. Thus, a multi-window version of the algorithm is doable and advisable. Indeed, psychophysics and neurophysiology both advocate for a spatially local treatment of the retinal information.
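The ε-splitting of Answer 3 is a Bonferroni-style correction over windows; a one-line sketch (the function name is ours):

```python
def per_window_eps(eps_global, n_windows):
    """Split a global false-alarm budget: running the detector on
    n_windows windows, each at eps_global / n_windows, keeps the
    expected total number of false alarms at eps_global."""
    return eps_global / n_windows
```

With 100 windows and a global budget of 1, each window is thus run at ε = 10⁻².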
Objection 4: synthetic images where everything is meaningful.
If an image has no noise at all (synthetic image), all boundaries contain relevant information. All the same, your algorithm won't detect them all?

Answer 4: Right. If a synthetic binary image is made (e.g.) of a black square on a white background, then all gradients are zero except on the square's boundary. The gradient histogram has one single value, 255. (Remember that zero values are excluded from the gradient histogram.) Thus, H(255) = 1, which means that no line is meaningful. Thus, the square's boundary won't be detected, which is a bit paradoxical! The addition of a tiny noise or of a slight blur would of course restore the detection of this square's boundary. This means that synthetic piecewise constant images fall out of the range of the detection algorithm. Now, in that case, the boundary detection is trivial by any other edge detector, and our algorithm does not need to be applied.
Question 5: class of images to which the algorithm is adapted ?
Is there a class of images for which the Mumford-Shah functional is
better adapted and another class of images where your algorithm is
more adapted ?
Answer 5: Our comparison of both algorithms may be misleading.
We are comparing methods with dierent scopes. The Mumford-Shah
algorithm aims at a global and minimal explanation of the image in
terms of boundaries and regions. As we pointed out in the discussion
of the experiments, this global model is robust but rough, and more
sophisticated models would give a better explanation, provided the
additional parameters can be estimated (but how?).
The detection algorithm does not aim at such a global explanation: it
is a partial detection algorithm and not a global explanation algorithm.
In particular, detected edges can be doubled or tripled or more, since
many level lines follow a given edge. In contrast, the Mumford-Shah
functional and the Canny detector attempt to select the best representative
of each edge. Conversely, the detection algorithm provides a
check tool to accept or reject edges proposed by any other algorithm.
Objection 6: the algorithm depends upon the quantization
step.
Edge Detection by Helmholtz Principle 19
The algorithm depends upon the quantization step q. When q tends to
zero, you will get more and more level lines. Thus N_ll and N_llp (numbers
of level lines and pieces of level lines respectively) will blow up. Thus, you
will get less and less detections when q decreases and, at the end, none!
Answer 6: Right again. The numbers N_ll and N_llp stand for the
number of effectuated tests on the image. When the number of tests
tends to infinity, the number of false alarms of Definition 1 also tends to
infinity. Now, as we mentioned, q must be large enough in order to be
sure that all edges contain at least one level line. Since the quantization
noise is 1 and the standard deviation of noise never goes below 1 or
2, it is not likely to find any edge with contrast smaller than 2. Thus,
q = 1 is large enough, and we cannot miss any detectable edge. If we take q
smaller, we shall get more spatial accuracy at the cost of less detections.
Question 7: accuracy of the edges depends upon the quantization
step.
All the same, if q is not very small, you lose accuracy in the position
detection. Indeed, the quantized levels do not coincide with the optimal
level of the edge, as it would be found by a Canny edge detector.
Answer 7: Right again. The Canny edge detector performs two tasks in
one: detecting and optimizing the edge's position at subpixel accuracy.
The proposed detection algorithm does not find the optimal position
of each edge. The spatial accuracy is roughly q/min|∇u|, where the
min is computed on the detected edge. In the case of the detection
of optimal boundaries, we therefore get this spatial accuracy for the
detected optimal boundaries. Of course, a postprocessing finding, for
each edge, the best position in terms of detectability is possible.
Objection 8: edges are not level lines.
You claim that every edge coincides with some level line. This is simply
not true!
Answer 8: If an edge has contrast kq, where q is the quantization step
(usually equal to 1), then k level lines coincide with the edge, locally. Of
course, one can construct long edges whose contrast is everywhere k but
whose average level varies in such a way that no level line fully coincides
with the edge. Now, long pieces of level lines coincide partially with it.
Thus, detection of this edge by the detection algorithm is possible all
the same, but it will be detected as a union of several more local edges.
Objection 9: values of the gradient on the level lines are not
independent.
You chose as test set the set of all level lines. You claim that the gradient
amplitudes at two different points of every edge are independent. This
is, in most images, not true.
Answer 9: The independence assumption is, indeed, not a realistic
assumption. It is made in order to apply the Helmholtz principle,
according to which every large deviation from uniform randomness
assumption is perceptible. Thus, the independence assumption is not
a model for the image; it is an a contrario assumption against which
the gestalts are detected.
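The role of the independence assumption can be made concrete with a small sketch. This is our illustrative reconstruction (function and variable names are ours): the number of false alarms of a curve piece has the shape NFA = (number of tests) × H(µ)^l, where H(µ) is the empirical probability that the gradient norm exceeds µ at a random point, and l is the number of independent sample points on the piece. The product H(µ)^l is exactly where independence is assumed, a contrario, not as an image model.

```python
def nfa(n_tests: int, tail_prob: float, length: int) -> float:
    """Number of false alarms of a curve piece of `length` sample points,
    each having gradient norm >= mu with a contrario probability
    `tail_prob` = H(mu), under the independence assumption."""
    return n_tests * tail_prob ** length

def is_meaningful(n_tests, tail_prob, length, eps=1.0):
    # An event is eps-meaningful when its expected number of
    # occurrences in the a contrario (noise) model is below eps.
    return nfa(n_tests, tail_prob, length) < eps

# A long curve with a moderately unlikely gradient everywhere is detected...
assert is_meaningful(n_tests=10**6, tail_prob=0.1, length=10)
# ...while a short curve with the same pointwise probability is not.
assert not is_meaningful(n_tests=10**6, tail_prob=0.1, length=3)
```

Large deviations from this uniform-randomness assumption are precisely what the Helmholtz principle declares perceptible.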
Objection 10: A minimal description model would do the job
as well.
A minimum description length (MDL) model can contain very wide classes of
models for which parameters will be estimated by the MDL principle of
shortest description in a fixed language. This fixed language can be the
language of Gestalt theory: explain the image in terms of lines, curves,
edges, regions, etc. Then existence and nonexistence of a given gestalt
would come out from the MDL description: a "detectable" edge would
be an edge which is used by the minimal description. Thus, thresholds
would be implicit in an MDL model, but exist all the same.
Answer 10: An MDL model is global in nature. Until we have constructed
it, we cannot make any comparison. In an MDL model, the
thresholds on edges would depend on all other gestalts. Thus, we would
be in the same situation as with the Mumford-Shah model: we have seen
that a slight error on the region model leads to a false detection for
edges. The main advantage of the proposed method lies in its lack
of ambition: it is a partial gestalt detection algorithm, which does not
require any global explanation model in order to be applied. We may
compare the outcome of the algorithm with the computation in optimization
theory of feasible solutions. Feasible solutions are not optimal.
We provide feasible, i.e. acceptable edges. We do not provide an optimal
set of edges as is aimed at by the other considered methods.
Objection 11: is ε a method parameter?
You claim that the method has no parameter. We have seen in the
course of the discussion not less than three parameters coming out: the
choice of the windows, the choice of q, and finally the choice of ε. So
what?
Answer 11: We always take ε = 1. Indeed, as we proved, the dependence
of detectability upon ε is a log-dependence. We also fix q = 1; here
again, the q-dependence would be a log-dependence, since the number
of level lines varies roughly linearly as a function of q. Finally, it is quite
licit to take as many windows as we wish, provided we take ε = 1/k,
where k is the number of windows. This yields a false alarm rate of
1 over all windows. Again, since the number of windows is necessarily
small (they make a covering of the image and cannot be too small),
we can even keep ε = 1 because of the log-dependence mentioned
above. To summarize, ε = 1 is not a parameter. When we subdivide
our set of tests into subsets on several windows, we must of course divide
this value 1 by the number of sets of subtests. This does not require
any user input.
4.2. Conclusion
In this paper, we have tried to stress the possibility of giving a perceptually
correct check for any boundary or edge proposed by any algorithm.
Our method, based on the Helmholtz principle, computes thresholds of
detectability for any edge. This algorithm can be applied to level lines
or to pieces of level lines and then computes all detectable level lines.
One cannot view the algorithm as a new "edge detector" to be added to
the long list of existing ones; indeed, first, the algorithm does not select
the "best" edge as the other algorithms do. Thus, it is more primitive
and only yields "feasible" candidates to be an edge. Only in the case of
boundary detection can it be claimed to give a final boundary detector.
Even then, this boundary detector may yield multiple boundaries.
On the other hand, the proposed method has the advantage of giving
for any boundary or edge detector a sanity check.
Thus, it can, for any given edge detector, help remove all edges which
are not accepted from the Helmholtz principle viewpoint. As a sanity
check, the Helmholtz principle can hardly be disputed, since it only
rejects any edge which could be observed in white noise.
The number of false alarms gives, in addition, a way to evaluate the
reliability of any edge and we think that the maximality criterion could
also be used in conjunction with any edge detector.
Finally, we can claim that the kind of algorithm and experiments proposed
here advocate for the necessity and usefulness of an intermediate
layer in image analysis algorithms, where feasibility of the sought for
structures is checked before any more global interpretation is attempted
by a variational method.
Simple termination of context-sensitive rewriting

Abstract. Simple termination is the (often indirect) basis of most existing automatic
techniques for proving termination of rule-based programs (e.g., Knuth-Bendix,
polynomial, or recursive path orderings, but also DP-simple termination, etc.). An
interesting framework for such programs is context-sensitive rewriting (CSR) that
provides a bridge between the abstract world of general rewriting and the (more)
applied setting of declarative specification and programming languages (e.g., OBJ*,
CafeOBJ, ELAN, and Maude). In these languages, certain replacement restrictions
can be specified in programs by the so-called strategy annotations. They may
significantly improve the computational properties of programs, especially regarding
their termination behaviour. Context-sensitive rewriting techniques (in particular,
the methods for proving termination of CSR) have been proved useful for analyzing
the properties of these programs. This entails the need to provide implementable
methods for proving termination of CSR. Simplification orderings (corresponding
to the notion of simple termination), which are well-known in term rewriting (and
have nice properties), are, then, natural candidates for such an attempt. In this
paper we introduce and investigate a corresponding notion of simple termination of
CSR. We prove that our notion actually provides a unifying framework for proving
termination of CSR by using standard simplification orderings via the existing
(transformational) methods, and also covers CSRPO, a very recent proposal that
extends the recursive path ordering (RPO) (a well-known simplification ordering)
to context-sensitive terms. We also introduce polynomial orderings for dealing with
(simple) termination of CSR. Finally, we also give criteria for the modularity of
simple termination, for the case of disjoint unions as well as for constructor-sharing
rewrite systems.

1 Introduction
Well-founded orderings (i.e., orderings allowing no infinite decreasing
sequence) provide a suitable basis for proving termination
in a number of programming languages and computational systems.
For instance, in term rewriting systems (TRS's) a proof of termination
can be achieved if we are able to find a (monotone and stable)
well-founded ordering > on terms (i.e., a reduction ordering) such
that l > r for every rule l → r of the rewrite system [10, 40]. In
practice, if we want to implement a tool for proving termination of
a TRS R , we need to make this problem decidable. It is well-known
that termination of TRSs is an undecidable problem, even for TRSs
containing only one rule [7]. Hence, we can only provide effective
approaches (which yield decidable termination criteria) for certain
classes of systems. Simplification orderings are those monotone
and stable orderings > satisfying the following subterm property:
for each term t, t > s for every proper subterm s of t [8]. If termination
of a TRS R can be proved by using a simplification ordering,
then we say that R is simply terminating. Although simple termination
is also undecidable (see [33]) it covers most usual automati-
zable orderings for proving termination of rewriting (e.g., recursive
path orderings (rpo [9]), Knuth-Bendix orderings (kbo [23]), and
polynomial orderings (poly [26]), see [37] for a survey on simplification
orderings). Moreover, simple termination has interesting
properties regarding modularity: in contrast to the general case,
simple termination is modular for disjoint, constructor-sharing, and
(some classes of) hierarchical unions of TRS's [36].
obj EXAMPLE is
sorts Nat LNat .
op cons : Nat LNat -> LNat [strat (1)] .
op from : Nat -> LNat [strat (1 0)] .
op first : Nat LNat -> LNat [strat (1 2 0)] .
vars
Figure 1. Strategy annotations in OBJ
Context-sensitive rewriting (CSR [28]) is a restriction of rewriting
which forbids reductions on selected arguments of functions. A replacement
map µ, satisfying µ(f) ⊆ {1, ..., k} for each
k-ary symbol f of the signature F, discriminates, for each symbol of
the signature, the argument positions on which replacements are allowed
[28]. In this way, the termination behavior of rewriting computations
can be improved, e.g., by pruning all the infinite rewrite
sequences.
In eager programming languages such as OBJ2 [14], OBJ3 [20],
CafeOBJ [15], and Maude [6], it is possible to specify the so-called
strategy annotations for controlling the execution of pro-
grams. For instance, the OBJ3 program in Figure 1 (borrowed from
[2]) specifies an explicit strategy annotation (1) for the list constructor
cons which disables replacements on the second argument (lazy
lists, see Appendix C.5 of [20]). If we collect as µ(f) the positive
indices appearing in the strategy annotation for each symbol
f in a given OBJ program 1 , we can use CSR to provide a framework
for analyzing and ensuring essential computational properties
such as termination, correctness and completeness (regarding
the usual semantics: head-normalization, normalization, functional
evaluation, and infinitary normalization) of programs using strategy
annotations, see [1, 2, 29, 30, 31]. In particular, termination
of (innermost) context-sensitive rewriting has been recently related
to termination of such languages [29, 30]. For instance, we can
prove termination of the OBJ3 program in Figure 1 by using the
techniques for proving termination of CSR, see Examples 5 and 7
below.
Termination of CSR and its applications has been studied in a number
of papers [5, 13, 16, 19, 22, 27, 32, 38, 39]. In most of these
papers (e.g., [13, 16, 19, 27, 38, 39]) termination of CSR is studied
in a transformational setting, namely by transforming a given
context-sensitive rewrite system (CSRS, i.e., a pair (R, µ) consisting
of a TRS R and a replacement map µ) into an ordinary TRS
such that termination of the latter implies termination of the former
as CSRS. Unfortunately, such transformations typically use new
symbols and rules that introduce a loss of structure and intuition,
due to the encoding of the context-sensitive control in the original
system by such new elements. They can unnecessarily complicate
the proofs of termination and make standard techniques for easing
such proofs (e.g., modular approaches) infeasible. Only recently,
some work has been devoted to the direct analysis of termination
and related properties of CSR: regarding the definition of suitable
1 As in [20], by OBJ we mean OBJ2, OBJ3, CafeOBJ, or Maude.
orderings for proving termination of CSR [5], and concerning the
modularity of termination and related properties [22]. Moreover,
the abstract properties of termination of CSR remain almost unexplored
(with the notable exception of Zantema's work [39]). In-
deed, all these lines of work appear to be promising and suggest to
further investigate direct methods for achieving simpler proofs of
termination of CSR.
In the remainder of the paper, we first introduce the necessary background
on CSR. Then we introduce and discuss simple termination
of CSR in Section 3. In Section 4 we show that in all cases where
the transformational approaches yield simple termination, the original
CSRS is simply terminating, too. Therefore, all existing transformations
for proving termination of CSR can also be used for
proving simple termination of CSRS's. Then, in Section 5 we deal
with two direct approaches for proving termination of CSR. We
show that CSRPO-termination [5] in fact yields simple termina-
tion, and we also cover simple termination proofs via polynomial
orderings. Finally, some modularity results concerning simple termination
are presented in Section 6. 2
As a motivating example for our work consider the following.
Example 1. Consider the following one-rule TRS, which is contained
in the famous example of Toyama:
f(a, b, x) → f(x, x, x)
It is well known that this TRS is not simply terminating. However,
by forbidding reductions on the first and second argument, i.e., by
defining µ(f) = {3}, we obtain a 'simply terminating' behavior (regarding
CSR) which can be easily proved by using a polynomial
ordering for CSR, see Example 10 below.
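To make the idea concrete, here is a small numeric check of one candidate polynomial interpretation over the naturals. This is our own sketch (not the paper's Example 10), assuming Toyama's rule f(a, b, x) → f(x, x, x) with µ(f) = {3}: the interpretation only has to be strictly monotone in the third (replacing) argument, so the non-replacing arguments may even be interpreted with a truncated subtraction.

```python
# Candidate mu-monotone interpretation over the naturals (our choice):
#   [a] = 1, [b] = 0, [f](x, y, z) = max(x - y, 0) + z + 1
# Strict monotonicity is only required in argument 3 (the replacing one),
# and '+ z + 1' provides it; max(x - y, 0) may decrease in y, which is
# allowed because positions 1 and 2 are non-replacing.
A = 1
B = 0

def F(x, y, z):
    return max(x - y, 0) + z + 1

# Rule f(a, b, x) -> f(x, x, x): check [lhs] > [rhs] over sampled values.
for x in range(50):
    lhs = F(A, B, x)          # = 1 + x + 1
    rhs = F(x, x, x)          # = 0 + x + 1
    assert lhs > rhs

# mu-subterm property for the only replacing argument: f(x, y, z) > z.
for x in range(5):
    for y in range(5):
        for z in range(5):
            assert F(x, y, z) > z
```

A sampled check like this is of course no proof; here the inequalities (x + 2 > x + 1 and z + 1 > z) clearly hold for all naturals.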
2 Preliminaries
2.1 Basics
Subsequently we will assume in general some familiarity with the
basic theory of term rewriting (cf. e.g. [3], [12]). Given a set A,
P(A) denotes the set of all subsets of A. Given a binary relation R
on a set A, we denote its transitive closure by R⁺ and its reflexive
and transitive closure by R*. We say that R is terminating iff there
is no infinite sequence a₁ R a₂ R a₃ ··· . Throughout the paper, X
denotes a countable set of variables and F denotes a signature, i.e.,
a set of function symbols {f, g, ...}, each having a fixed arity given
by a mapping ar : F → ℕ. The set of terms built from F and
X is T(F, X). A term is said to be linear if it has no multiple
occurrences of a single variable. Terms are viewed as labelled trees
in the usual way. Positions p, q, ... are represented by chains of
positive natural numbers used to address subterms of a term t. We denote
the empty chain by Λ. The set of positions of a term t is Pos(t).
Positions of non-variable symbols in t are denoted as Pos_F(t), and
Pos_X(t) are the positions of variables. The subterm at position p of
t is denoted as t|p, and t[s]p is the term t with the subterm at position
p replaced by s. The symbol labelling the root of t is denoted as
root(t).
A rewrite rule is an ordered pair (l, r), written l → r, with l, r ∈
T(F, X), l ∉ X, and Var(r) ⊆ Var(l). The left-hand side (lhs)
of the rule is l and r is the right-hand side (rhs). A TRS is a pair
R = (F, R) where R is a set of rewrite rules. L(R) denotes the set
2 For the sake of readability, some proofs have been moved to
the appendix.
of lhs's of R. An instance σ(l) of a lhs l of a rule is a redex. The
set of redex positions in t is Pos_R(t). A TRS R is left-linear if
for all l ∈ L(R), l is a linear term. Given TRSs R = (F, R) and
R' = (F', R'), let R ∪ R' be the TRS (F ∪ F', R ∪ R'). A term t
rewrites to s (at position p), written t →p s (or just t → s), if
t|p = σ(l) and s = t[σ(r)]p for some rule l → r ∈ R, position p, and
substitution σ. A TRS is terminating if → is
terminating.
2.2 Context-Sensitive Rewriting
Given a signature F, a mapping µ : F → P(ℕ) is a replacement
map (or F-map) if for all f ∈ F, µ(f) ⊆ {1, ..., ar(f)}. M_F
(or M_R if R determines the considered signature) denotes
the set of all F-maps. For the sake of simplicity, we will apply
a replacement map µ ∈ M_F on symbols f ∈ F' of any signature
F' by assuming that µ(f) ⊆ {1, ..., ar(f)}. The inclusion
ordering on P(ℕ) extends to an ordering ⊑ on M_F: µ ⊑ µ' if for
all f ∈ F, µ(f) ⊆ µ'(f). Thus, µ ⊑ µ' means that µ considers
less positions than µ' (for reduction). We also say that µ is more
restrictive than µ'. The least upper bound (lub) µ ⊔ µ' and greatest
lower bound (glb) µ ⊓ µ' of µ and µ' are
given by (µ ⊔ µ')(f) = µ(f) ∪ µ'(f) and (µ ⊓ µ')(f) = µ(f) ∩ µ'(f).
A replacement map µ specifies
the argument positions which can be reduced for each symbol in
F. Accordingly, the set of µ-replacing positions Pos^µ(t) of t ∈
T(F, X) is Pos^µ(t) = {Λ} if t ∈ X, and Pos^µ(t) = {Λ} ∪
{i.p | i ∈ µ(root(t)), p ∈ Pos^µ(t|i)} otherwise. The set of positions
of replacing redexes in t is Pos^µ_R(t). A
context-sensitive rewrite system (CSRS) is
a pair (R, µ), where R is a TRS and µ is a replacement map. In
context-sensitive rewriting (CSR [28]), we (only) contract replacing
redexes: t µ-rewrites to s, written t ↪µ s (or just t ↪ s), if
t →p s and p ∈ Pos^µ(t). (R, µ) is terminating if ↪µ is
terminating.
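The set of µ-replacing positions admits a direct recursive implementation. The following sketch is ours (terms encoded as nested tuples, positions as tuples of 1-based indices) and mirrors the definition above:

```python
# A term is either a variable (a string) or a tuple (f, t1, ..., tk).
def replacing_positions(t, mu):
    """Pos^mu(t): the root position () plus i.p for every replacing
    argument position i of root(t) and every p in Pos^mu(t|i)."""
    if isinstance(t, str):              # variable
        return {()}
    f, *args = t
    pos = {()}
    for i in mu.get(f, set()):          # i is 1-based, as in the paper
        for p in replacing_positions(args[i - 1], mu):
            pos.add((i,) + p)
    return pos

# Example 1's replacement map mu(f) = {3} on the term f(a, b, f(a, b, x)):
t = ('f', ('a',), ('b',), ('f', ('a',), ('b',), 'x'))
mu = {'f': {3}, 'a': set(), 'b': set()}
assert replacing_positions(t, mu) == {(), (3,), (3, 3)}
```

A context-sensitive rewrite step is then an ordinary rewrite step whose redex position lies in this set.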
3 Simple Termination of CSR
Zantema has provided an algebraic characterization of termination
of CSR [39]. As for standard rewriting, he proves that termination
of CSR is fully captured by the so-called µ-reduction orderings, i.e.,
well-founded, µ-monotonic, stable orderings > on terms: R is
µ-terminating if and only if there is a µ-reduction ordering
> which is compatible with the rules of R, i.e., for all l → r ∈ R, l >
r ([39, Proposition 1]). He also shows that µ-reduction orderings can
be defined by means of (well-founded) µ-monotone F-algebras. An
(ordered) F-algebra is a triple (A, F_A, >), where A is a set, F_A is a
set of mappings f_A : A^k → A for each f ∈ F, where k = ar(f), and
> is a (strict) ordering on A. Given f ∈ F, we say that f_A
is µ-monotone if for all i ∈ µ(f) and a, b ∈ A such that a > b, we have that
f_A(a₁, ..., a_{i-1}, a, a_{i+1}, ..., a_k) > f_A(a₁, ..., a_{i-1}, b, a_{i+1}, ..., a_k)
for all a₁, ..., a_k ∈ A. The algebra (A, F_A,
>) is µ-monotone if for all f_A ∈ F_A, f_A is µ-monotone. An F-
algebra is well-founded if > is well-founded. For
a given valuation mapping α : X → A, the evaluation mapping
[α] : T(F, X) → A is inductively defined by [α](x) = α(x) for x ∈ X
and [α](f(t₁, ..., t_k)) = f_A([α](t₁), ..., [α](t_k)). The induced ordering
on terms, given by t > s if and only if [α](t) > [α](s)
for all α : X → A, is a µ-reduction ordering on terms for
every well-founded µ-monotone F-algebra (A, F_A, >) ([39, Proposition
2]). We use these notions in the following.
Given a signature F, we consider the TRS Emb(F) = {f(x₁, ..., x_k)
→ x_i | f ∈ F, 1 ≤ i ≤ ar(f)}. A TRS R is
simply terminating if R ∪ Emb(F) is terminating. Regarding CSR,
the most obvious extension of this notion (a CSRS (R, µ) is simply
terminating if (R ∪ Emb(F), µ) is terminating) is not appropriate:
Example 2. Consider the TRS R (where a is a constant):
a → c(a)
with µ(c) = ∅. The CSRS (R, µ) is clearly terminating; however,
(R ∪ Emb(F), µ) (where Emb(F) has a single rule c(x) → x) is
not: a ↪ c(a) ↪ a ↪ ··· .
Formally, in contrast to TRSs, where termination of →_R implies
termination of (→_R ∪ ⊳_st) (with ⊳_st being the proper subterm ordering),
this property does not hold for CSRSs any more, i.e., termination
of ↪_R in general does not imply termination of (↪_R ∪
⊳_st).
The problem is that the projections in Emb(F) should not
make those arguments reducible which are not reducible using
the replacement map µ. Therefore, we define Emb_µ(F) = {f(x₁, ..., x_k)
→ x_i | f ∈ F, i ∈ µ(f)} and propose the following
DEFINITION 1. A CSRS (R, µ) is simply terminating if (R ∪
Emb_µ(F), µ) is terminating.
Obviously, if Emb_µ(F) = Emb(F), simple termination as
in Definition 1 and the standard notion of simple termination of
TRSs coincide (as one would expect). Furthermore, any simply
terminating TRS R, viewed as CSRS (R, µ) for arbitrary µ ∈ M_F,
is also simply terminating (since Emb_µ(F) is a subset of Emb(F)
for all µ). Finally, we note that if µ ⊑ µ', then Emb_µ(F) is included
in Emb_µ'(F); hence, simple termination of (R, µ') implies simple
termination of (R, µ) in this case.
Simplification orderings are the algebraic (indeed originating)
counterpart of simple termination. A simplification ordering is
a monotonic, stable ordering on terms which additionally satisfies
the following 'subterm' property: for all t ∈ T(F, X) and p ∈
Pos(t) − {Λ}, t > t|p. It is well-known that simplification orderings
are well-founded (for terms over finite signatures). Thus,
any simplification ordering is a reduction ordering, and hence it is
suitable for (trying to) prove termination of a TRS R by just comparing
the left- and right-hand sides of its rules.
The following natural generalization of simplification orderings immediately
arises.
DEFINITION 2. Let F be a signature and µ ∈ M_F. A µ-
monotonic, stable ordering > is a µ-simplification ordering if, for
all t ∈ T(F, X) and p ∈ Pos^µ(t) − {Λ}, t > t|p.
Unfortunately, in (sharp) contrast with the standard case, µ-
simplification orderings are not well-founded any more (in general).
Example 3. Consider the signature consisting of the constant a and
the unary symbol c. If we choose µ(c) = ∅, then the binary relation
> consisting of the pairs {(c^n(a), c^{n+1}(a)) | n ≥ 0} is a µ-monotone
strict ordering and has the µ-subterm property. However, it admits
an infinite sequence
a > c(a) > ··· > c^n(a) > ··· .
Nevertheless, Definition 2 characterizes the notion of simple termination
of Definition 1 in the following (obvious) sense.
THEOREM 1. A CSRS (R, µ) is simply terminating if and only if
there is a well-founded µ-simplification ordering > such that for
every rule l → r ∈ R, l > r.
PROOF. If (R, µ) is simply terminating, then the CSRS (R ∪
Emb_µ(F), µ) is terminating and ↪⁺ is a µ-
reduction ordering which clearly satisfies the µ-subterm property
(due to the presence of the corresponding rules in Emb_µ(F)), i.e., it
is a µ-simplification ordering. Obviously, we have l ↪⁺ r for every rule l → r ∈ R.
On the other hand, assume that > is a well-founded µ-simplification
ordering such that l > r for every rule l → r ∈ R. Since it is a µ-
simplification ordering, l > r also holds for every rule l → r ∈ Emb_µ(F).
Hence, > is a µ-reduction ordering which is compatible with
all rules from left to right. Hence, (R ∪ Emb_µ(F), µ) is terminating, i.e.,
R is simply µ-terminating.
4 Proving Simple Termination of CSR
Termination of a CSRS (R, µ) is usually proved by demonstrating
termination of a transformed TRS R^µ_Θ obtained from R and µ by
using a transformation 3 Θ [13, 16, 17, 27, 38, 39]. A transformation
Θ is
1. correct (regarding (simple) termination) if (simple) termination
of R^µ_Θ implies (simple)
termination of (R, µ) for all TRSs
R and replacement maps µ ∈ M_R.
2. complete (regarding (simple) termination) if (simple) termination
of (R, µ) implies (simple) termination of R^µ_Θ
for all TRSs
R and replacement maps µ ∈ M_R.
The simplest (and trivial) correct transformation for proving termination
of CSRSs is the identity: if R^µ_Θ = R is terminating, then
(R, µ) is terminating for every replacement map µ.
Here, we are interested in proving simple termination of CSRSs by
using existing transformations. With this goal in mind, we
review the main (non-trivial) correct transformations for proving
termination of CSR regarding their suitability for proving simple
termination of CSR.
4.1 The Contractive Transformation
Let F be a signature and µ ∈ M_F be a replacement map. With the
contractive transformation [27], the non-replacing arguments of
all symbols in F are removed and a new, µ-contracted signature F^µ_L
is obtained (possibly reducing the arity of symbols). The function
τ_µ drops the non-replacing immediate subterms
of a term t ∈ T(F, X) and constructs a 'µ-contracted' term by
joining the (also transformed) replacing arguments below the corresponding
operator of F^µ_L. A CSRS (R, µ), where R = (F, R), is
µ-contracted into R^µ_L = τ_µ(R). (MU-TERM 1.0 is
a tool that implements these transformations.) According
to this definition, it is not difficult to see that R^µ_L ∪ Emb(F^µ_L) =
(R ∪ Emb_µ(F))^µ_L. Thus, we have the following:
THEOREM 2. Let (R, µ) be a CSRS. If R^µ_L is simply terminating,
then (R, µ) is simply terminating.
PROOF. If R^µ_L is simply terminating, then R^µ_L ∪ Emb(F^µ_L) = (R ∪
Emb_µ(F))^µ_L is terminating. Hence, (R ∪ Emb_µ(F), µ) is terminating,
i.e., (R, µ) is simply terminating.
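The contraction underlying this transformation can be sketched directly on terms. This is our own encoding (terms as nested tuples, as before); contracted symbols keep their names here, whereas a real implementation would rename them, since their arities change:

```python
def contract(t, mu):
    """tau_mu: drop the non-replacing immediate subterms of every
    symbol, recursively, keeping only the arguments at positions in mu."""
    if isinstance(t, str):              # variable
        return t
    f, *args = t
    kept = [contract(args[i - 1], mu) for i in sorted(mu.get(f, set()))]
    return (f, *kept)

# With mu(cons) = {1}, the recursive list x : from(s(x)) contracts to a
# unary cons that forgets its (non-replacing) tail.
mu = {'cons': {1}, 'from': {1}, 's': {1}}
t = ('cons', 'x', ('from', ('s', 'x')))
assert contract(t, mu) == ('cons', 'x')
```

The example also makes the conservativity issue below visible: a variable occurring only in a non-replacing argument of the left-hand side disappears under contraction, so it would turn up as an extra variable in the contracted right-hand side.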
Example 4. Consider the following CSRS, which can be used to
obtain (as first(n, terms(1))) the first n terms of the series that approximates
π²/6, with µ(:) = {1} and µ(f) = {1, ..., k} for any other k-ary symbol
f. Then, to prove that R^µ_L
is simply terminating, use an rpo based on the precedence 4
terms ≽_F :, recip, sqr; sqr ≽_F dbl, +; + ≽_F s; and first ≽_F [].
Hence, (R, µ) is simply terminating.
The contractive transformation only works well with µ-conservative
TRSs, i.e., those satisfying Var^µ(r) ⊆ Var^µ(l) for every rule
l → r of R [27]; otherwise, extra variables will appear in a rule of
R^µ_L, which thus would become non-terminating. Let
CoCM_R = {µ ∈ CM_R | ∀ l → r ∈ R : Var^µ(r) ⊆ Var^µ(l)}.
That is, CoCM_R contains the replacement maps µ ∈ CM_R that
make R µ-conservative [32].
THEOREM 3. [32] Let R be a left-linear TRS and µ ∈ CoCM_R. If
(R, µ) is terminating, then R^µ_L is terminating.
Hence, we can use the contractive transformation to fully characterize
simple termination of CSR in restricted cases.
THEOREM 4. Let R = (F, R) be a left-linear TRS and µ ∈
CoCM_R. Then, (R, µ) is simply terminating if and only if R^µ_L is
simply terminating.
4 A precedence ≽_F is a quasi-ordering (i.e., a reflexive and transitive
relation) on the symbols of the signature F.
PROOF. The if part is Theorem 2. For the only if part, assume that
(R, µ) is simply terminating. Then, (R ∪ Emb_µ(F), µ) is terminating
and, by Theorem 3, (R ∪ Emb_µ(F))^µ_L is terminating. Since
(R ∪ Emb_µ(F))^µ_L = R^µ_L ∪ Emb(F^µ_L), R^µ_L ∪ Emb(F^µ_L) is terminating,
hence R^µ_L is simply terminating.
4.2 Zantema's Transformation
Zantema's transformation marks the non-replacing arguments of
function symbols (disregarding their positions within the term)
[39]. Given a CSRS (R, µ), the TRS
R^µ_Z consists of two parts. The first part results
from R by replacing every function symbol f occurring in a left- or
right-hand side with f' (a fresh function symbol of the same arity
as f which, then, is included in F') if it occurs in a non-replacing
argument of the function symbol directly above it. These new function
symbols are used to block further reductions at this position. In
addition, if a variable x occurs in a non-replacing position in the lhs
l of a rewrite rule l → r, then all occurrences of x in r are replaced
by activate(x). Here, activate is a new unary function symbol which
is used to activate blocked function symbols again.
The second part of R^µ_Z consists of rewrite rules that are needed for
blocking and unblocking function symbols:
f'(x₁, ..., x_k) → f(x₁, ..., x_k)
for every f' ∈ F', together with the rule activate(x) → x. Again,
we note that, since the rules of Emb_µ(F) are left unchanged by the
transformation and Emb_µ(F) ⊆ Emb(F ∪ F'), we have that (R ∪
Emb_µ(F))^µ_Z ⊆ R^µ_Z ∪ Emb(F ∪ F'). Thus, we have:
THEOREM 5. Let (R, µ) be a CSRS. If R^µ_Z is simply terminating,
then (R, µ) is simply terminating.
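The symbol-marking step of Zantema's transformation can be sketched as follows. This is our own sketch, assuming the rule from(x) → x : from(s(x)) with µ(:) = {1}; it covers only the marking of symbols in non-replacing arguments, not the activate-wrapping of variables or the unblocking rules:

```python
def mark(t, mu, blocked=False):
    """Rename the root symbol of t to f' when t sits directly in a
    non-replacing argument of the symbol above it (blocked=True).
    Marking is local: each subterm is blocked only by its own parent."""
    if isinstance(t, str):              # variables are never marked here
        return t
    f, *args = t
    name = f + "'" if blocked else f
    new_args = [
        mark(a, mu, blocked=(i + 1) not in mu.get(f, set()))
        for i, a in enumerate(args)
    ]
    return (name, *new_args)

# With mu(cons) = {1}, the rhs x : from(s(x)) becomes x : from'(s(x)):
# only `from` is blocked, not the `s` below it, since marking depends
# only on the function symbol directly above.
mu = {'cons': {1}, 'from': {1}, 's': {1}}
rhs = ('cons', 'x', ('from', ('s', 'x')))
assert mark(rhs, mu) == ('cons', 'x', ("from'", ('s', 'x')))
```

The blocked symbol from' has no defining rules of its own, so reduction below it stops until an unblocking rule from'(x) → from(x) (or an activate step) re-enables it.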
Example 5. The following CSRS
from(x) → x : from(s(x))
sel(0, x : z) → x
sel(s(x), y : z) → sel(x, z)
first(0, z) → []
first(s(x), y : z) → y : first(x, z)
with µ(:) = {1} and µ(f) = {1, ..., k} for any other k-ary symbol
f corresponds to the OBJ3 program in Figure 1 (we use : and []
instead of cons and nil, respectively). Then, R^µ_Z is:
from(x) → x : from'(s(x))
sel(0, x : z) → x
sel(s(x), y : z) → sel(x, activate(z))
first(0, z) → []
first(s(x), y : z) → y : first'(x, activate(z))
from'(x) → from(x)
first'(x, y) → first(x, y)
activate(x) → x
R^µ_Z is simply terminating: use the rpo which is based on the precedence
sel ≽_F activate =_F first ≽_F from, :, first', [] and from ≽_F
:, from', s, and gives sel the usual lexicographic status. Hence,
(R, µ) is simply terminating.
In [13], Ferreira and Ribeiro propose a variant of Zantema's transformation
which has been proved strictly more powerful than Zantema's
one (see [19]). Again, R^μ_FR has two parts. The first part
results from the first part of R^μ_Z by marking all function symbols
(except activate) which occur below an already marked symbol.
Therefore, all function symbols of non-replacing subterms are
marked. The second part consists of the rule activate(x) → x plus
the rules:
relating each f ∈ F for which f′ appears in the first part of R^μ_FR
to its marked version f′, together with similar rules for k-ary symbols
f where f′ does not appear in the first part of
R^μ_FR. However, Giesl and Middeldorp have recently shown that
these rules are not necessary for obtaining a correct transformation
[19]. Since we also have (R ∪ Emb^μ)^μ_FR ⊆ R^μ_FR ∪ Emb, this transformation is similar to Zantema's one regarding
simple termination, i.e., we have the following:
THEOREM 6. Let (R, μ) be a CSRS. If R^μ_FR is simply terminating,
then (R, μ) is simply terminating.
4.3 Giesl and Middeldorp's Transformations
Giesl and Middeldorp introduced a transformation that explicitly
marks the replacing positions of a term (by using a new symbol
active). Given a CSRS (R, μ), the TRS R^μ_GM consists of rules which
propagate the symbol active to the replacing arguments of function
symbols (see [16] for the details). We have the following.
THEOREM 7. Let (R, μ) be a CSRS. If R^μ_GM is simply terminating,
then (R, μ) is simply terminating.
In [16], Giesl and Middeldorp suggest a slightly different presentation
R^μ_mGM of the previous transformation. In this presentation, the
symbol active is not used anymore. However, we have proved that
both transformations are equivalent regarding simple termination,
i.e., R^μ_GM is simply terminating if and only if R^μ_mGM is [32]. Thus,
Theorem 7 also holds for R^μ_mGM.
Giesl and Middeldorp also introduced a transformation R^μ_C which
is complete, i.e., every μ-terminating TRS is transformed into a terminating
TRS. Given a TRS R and a replacement map μ, the TRS R^μ_C
consists of rules defined for the function symbols f ∈ F and constants
c ∈ F (see [16] for a more detailed explanation).
THEOREM 8. Let (R, μ) be a CSRS. If R^μ_C is simply terminating,
then (R, μ) is simply terminating.
We conclude this section by noticing that Toyama's CSRS of Example
1 cannot be proved to be simply terminating by using any of
the previous transformations. Note that R^μ_L is not even terminating;
R^μ_Z (and R^μ_FR which, in this case, coincides with R^μ_Z) is not
simply terminating, as it contains a non-simply terminating TRS
including rules such as a → a′; R^μ_GM is not simply terminating;
and, finally, R^μ_C is not simply terminating either.
Therefore, these examples show that, in general, Theorems 2, 5,
6, 7, and 8 cannot be used in the 'only if' direction. In other
words, this means that in certain cases simple termination cannot
be proved by (the considered) transformational techniques, i.e., all
of them are incomplete regarding simple termination of CSR. Notably,
the transformation C, which is complete for proving termination
of CSR, becomes incomplete for proving simple termination
of CSR (if we want to use simplification orderings for proving termination
of R^μ_C). In the following section, we consider a different
class of methods which are not based on applying transformations
to CSRS's; in contrast, these methods are able to directly address
termination of CSRS's without transforming them.
5 Direct Approaches to Simple Termination of
CSR
5.1 The Context-Sensitive Recursive Path Ordering
(CSRPO)
In [5], the recursive path ordering (RPO), a well-known technique
for automatically proving (simple) termination of TRSs, has been
extended to deal with termination of CSRS's. Thus, a natural question
arises: are the CSRPO-terminating CSRS's simply terminating?
In this section, we positively answer this question.
The definition of CSRPO is akin to that of RPO. First, we recall the
definition of RPO. Given a precedence ≳_F on the set of function
symbols, which is the union of a well-founded ordering >_F and a
compatible equivalence =_F, and a status function stat, the ordering
>_rpo is defined recursively as follows: s = f(s1,…,sn) >_rpo t if and only if
1. s_i ≳_rpo t for some i ∈ {1,…,n}, or
2. t = g(t1,…,tm), f >_F g, and s >_rpo t_j for all j ∈ {1,…,m}, or
3. t = g(t1,…,tm), f =_F g, stat(f) = mul, and {s1,…,sn} (>_rpo)_mul {t1,…,tm}, or
4. t = g(t1,…,tm), f =_F g, stat(f) = lex, (s1,…,sn) (>_rpo)_lex (t1,…,tm), and s >_rpo t_j for all j ∈ {1,…,m};
where ≳_rpo is the union of >_rpo and syntactic equality.
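Since RPO is used both in Example 5 and as the basis of CSRPO, an executable sketch may help. The following is my own minimal implementation of the lexicographic-status instance of the definition above (often called LPO), with terms as nested tuples, variables as strings, and the precedence given as a dict of integer ranks; it is an illustration, not the paper's tool.

```python
def lpo_gt(s, t, prec):
    """s >_lpo t for the lexicographic-status instance of RPO.
    prec maps function symbols to integers (larger = greater)."""
    if s == t:
        return False
    if isinstance(t, str):                       # t is a variable
        return occurs(t, s)
    if isinstance(s, str):                       # a variable is never greater
        return False
    f, *ss = s
    g, *ts = t
    # case 1: some immediate subterm of s is >= t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True
    # case 2: f > g in the precedence, and s dominates every subterm of t
    if prec[f] > prec[g]:
        return all(lpo_gt(s, tj, prec) for tj in ts)
    # case 4 (lex status): equal symbols, lexicographic argument comparison
    if prec[f] == prec[g]:
        return (lex_gt(ss, ts, prec)
                and all(lpo_gt(s, tj, prec) for tj in ts))
    return False

def lex_gt(ss, ts, prec):
    """Lexicographic extension of lpo_gt."""
    for si, ti in zip(ss, ts):
        if si == ti:
            continue
        return lpo_gt(si, ti, prec)
    return False

def occurs(x, s):
    """Does variable x occur in term s?"""
    if isinstance(s, str):
        return x == s
    return any(occurs(x, a) for a in s[1:])
```

For example, with the precedence plus >_F s, the rule plus(s(x), y) → s(plus(x, y)) is oriented left-to-right, while the reverse comparison fails.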
The first idea that comes to mind to extend RPO to context-sensitive
rewriting (CSRPO) is marking the symbols which are in blocked
positions and considering them smaller than the active ones. Therefore,
terms in blocked positions become smaller.
Example 6. Consider the rule from(x) → x : from(s(x))
together with μ(:) = {1}. In order to prove that from(x) is greater
than x : from(s(x)), we take into account the replacement restriction
by comparing from(x) and x : from′(s(x)), where from′ is
a marked version of from (and we set from >_F from′).
However, marking all symbols in non-replacing positions can unnecessarily
weaken the resulting ordering. Thus, in addition to the
usual precedence ≳_F on symbols, a marking map, denoted by m,
which defines for every symbol and every blocked position the set
of symbols that should be marked, is also used. By F′ we denote
the set of marked symbols corresponding to F. Given a symbol
f in F ∪ F′ and a position i, m(f, i) provides the
subset of symbols in F that should be marked, i.e., m(f, i) ⊆ F.
Marking maps are intended to mark only blocked arguments, i.e.,
m(f, i) = ∅ whenever i ∈ μ(f) (this seems reasonable and is also technically
necessary, see Definition 6 below). In this way, we mark only
the necessary symbols (in blocked positions).
However, if we simply apply RPO to the terms after marking the
symbols in blocked positions, the resulting ordering is not stable
under substitutions. The difficult/interesting point of CSRPO is
the treatment of variables, since variables in blocked positions are
somehow smaller than variables in active positions, which is not
taken into account in RPO (see [5] for a thorough discussion of this
issue). This is achieved by appropriately marking the variables of
terms which are compared using CSRPO. Now, by X′ we denote
the set of labeled variables corresponding to X. The variables in X′
are labeled by subsets of F, for instance x^{f,g,h}, and we will ambiguously
use the variables of X to denote variables labeled by the
empty set.
When using the ordering, the marking map tells us whether we have
to mark the top symbol of a term every time we go to an argument
of this symbol. Thus, the marking map does not apply to the
whole term but only to the top symbol of the arguments in the recursive
calls of the definition of CSRPO. Therefore, if we have a term
f(s1,…,sn), we access the arguments using mt(s_i, m(f, i)),
which represents the result of marking the top symbol of the argument
s_i: the conditional marking of top symbols mt(s, W) marks the top
symbol of s whenever it belongs to W ⊆ F, and leaves s unchanged
otherwise. Note that, as said, marked
symbols can only appear at the top of a term. A ground term s is
a term in T(F ∪ F′) that contains no variable.
As remarked above, the treatment of variables is crucial in the definition
of CSRPO. This is because we need to ensure that CSRPO
is a stable ordering. In order to solve this problem, given a term s,
we will provide the set of labeled variables that can be considered
smaller than (or equal to) s without risk of losing stability under
substitutions. Note that we label the variables with the symbols
that should be marked in case of applying a substitution. To ensure
that some labeled variable x^W is in the set of safe (wrt. stability)
labeled variables of a term s, we need x to occur in s and to be sure
that for any substitution σ we have that mt(σ(x), W) is smaller than
σ(s). Therefore, assuming that x occurs in s, the important point
is what happens with the function symbols heading σ(s). Due to
this, we analyze which function symbols are harmless as head symbols.
In all cases, the symbols which are included in the label W
of x are safe; additionally, all function symbols which do not appear in the
label when we reach some occurrence of x in s are safe. Finally,
and more importantly, so are the symbols g that can be proved to be safe
because the head symbol of s (or, recursively, of some subterm
of s containing x) is greater than or equal to g (in the latter case
provided they have multiset status), and g and g′ have the same marking.
DEFINITION 3. ([5]) Let s be a non-variable term in T(F ∪ F′, X ∪ X′)
and x^W a labeled variable. Then x^W ∈ Stable(s) if and only if x ∈ Var(s)
and W ⊆ Safe(s, x). The set Safe(s, x) (for some variable x s.t. x ∈ Var(s), or
some label V) is defined as the smallest subset of F containing:
1. if s = x^V, all symbols of F not occurring in V;
2. if s = f(s1,…,sn),
(a) the union of all Safe(mt(s_i, m(f, i)), x),
and
(b) all g ∈ F such that f =_F g (with multiset status)
and m(g, i) = m(g′, i) for all i ∈ {1,…,ar(g)}, and
(c) all g ∈ F such that f >_F g and m(g, i) = m(g′, i) for all i ∈ {1,…,ar(g)}.
Now we can give the definition of the context-sensitive recursive
path ordering. First we give the definition of the equality relation,
induced by the equality on function symbols, that we will use.
DEFINITION 4. Given two terms s, t ∈ T(F ∪ F′, X ∪ X′), the equality
s = t is defined as the equality induced by =_F on function symbols
and by the equality of variable labels.
We can enlarge the equality relation by considering permutations of
arguments of symbols with multiset status.
DEFINITION 5 (CSRPO [5]). Let s, t ∈ T(F ∪ F′, X ∪ X′). Then s S t if and only if
1. …, or
2. mt(s_i, m(f, i)) ⪰ t for some i, or
3. …, or
4. …, or
5. …;
where ⪰ is the union of S and =, and S_mul and S_lex are respectively
the multiset and lexicographic extension of S wrt. =.
The precedence ≳_F and the marking map m have to satisfy some
conditions to ensure the appropriate properties of S for proving
termination of CSR.
DEFINITION 6. ([5]) Let ≳_F be a precedence, μ a replacement
map and m a marking map. Then (≳_F, m) is a valid marking pair
if
1. …,
2. f >_F f′ for all f ∈ F, and
3. …
obj EXAMPLE-TR is
sorts Nat LNat .
ops
ops
ops nil nil' : -> LNat .
op cons : Nat LNat -> LNat [strat (1)] .
op from : Nat -> LNat [strat (1 0)] .
ops sel sel' : Nat LNat -> Nat [strat (1 2 0)] .
ops first first' : Nat LNat -> LNat [strat (1 2 0)] .
vars
Figure 2. Transformed OBJ program
When valid marking maps are used, CSRPO is a μ-reduction
ordering and can be used for proving termination of CSRS's ([5]):
a CSRS (R, μ) is CSRPO-terminating if there
is a precedence ≳_F on F and a valid marking map m for ≳_F such
that l S r for all l → r ∈ R.
Example 7. Consider the CSRS (R, μ) of Example 5. Termination
of (R, μ) can also be proved using CSRPO: use the marking map
given in Example 7 of [5] and the precedence first >_F …
and sel >_F from >_F {:, s, from′}. We use the lexicographic status
for first and sel and the multiset status for all other symbols (see
Example 7 of [5]).
Example 8. Using rewriting restrictions may cause that some normal
forms of input expressions are unreachable by restricted computation.
For instance, the evaluation of first(s(0),from(0)) using
the program in Figure 1 yields 5
reduce in EXAMPLE : first(s(0),from(0))
rewrites: 2
result LNat: cons(0,first(0,from(s(0))))
Note that cons(0,first(0,from(s(0)))) is not a normal form. However,
cons(0,nil) is a normal form of this term which cannot
be obtained by using the OBJ3 interpreter.
5 We use the version 2.0 of the OBJ3 interpreter (available at
http://www.kindsoftware.com/products/opensource).
This can be solved by using program transformation techniques.
For instance, by applying the program transformation from OBJ
programs to OBJ programs of [2], we obtain the program of Figure
2. In contrast to the program in Figure 1, this new program can be
used to fully evaluate expressions (see [2]). Moreover, we are also
able to prove termination of this new program by using CSRPO
[4]: use the marking map … and the precedence
from >_F {from', cons, s},
quote >_F {0', s'},
quote' >_F {cons', nil'},
quote' >_F quote,
fcons >_F cons,
fcons >_F ….
We use the lexicographic status for all symbols.
We prove that CSRPO-termination of a CSRS implies simple termination
of the CSRS.
THEOREM 9. Let (R, μ) be a CSRS. If (R, μ) is CSRPO-terminating,
then it is simply terminating.
PROOF. If (R, μ) is CSRPO-terminating, then there exists a precedence
≳_F and a valid marking map m with l S r for every
rule l → r of R. We only need to prove that l S r also holds
for every rule l → r in Emb^μ(F).
Note that we have x ∈ Stable(f(x1,…,xn))
if and only if f ∈ Safe(f(x1,…,xn), x). Now, since …,
we have Safe(f(x1,…,xn), x) = F and f ∈ Safe(f(x1,…,xn), x), which means
that the embedding rules are also oriented by S.
For instance, the OBJ3 program of Figure 2 (viewed as a CSRS) is
simply terminating. On the other hand, it is not difficult to see that
termination of the CSRS of Example 1 cannot be proved using the
CSRPO: we would need to prove that the tuple ⟨a, b, x⟩ containing
constant symbols a, b is greater than the tuple ⟨x, x, x⟩ that only contains
variables. According to the definition of CSRPO, this is not
possible.
To summarize what we have done so far, we can say that our notion
of simple termination of CSR plays (almost) the same role
regarding CSR as simple termination for ordinary rewriting. We
have proved that simple termination of R^μ_Q implies simple termination
of (R, μ) for all the existing transformations Q. We have
also proved that CSRPO-terminating CSRS's are simply terminating.
Furthermore, we have given an example of a simply terminating
CSRS which cannot be proved to be so by using the currently
developed (automatic) techniques.
Next we will consider still another method of direct termination
proofs of CSRS's.
5.2 Polynomial Orderings
A monomial in k variables over Z is a function F : Z^k → Z defined
by F(x1,…,xk) = a·x1^{r1}⋯xk^{rk} for some integer a ≠ 0 and some non-negative
integers r1,…,rk. The number a is called the coefficient
of the monomial. If r1 = ⋯ = rk = 0, then the monomial is
called a constant. A polynomial in k variables over Z is the sum of
finitely many monomials in k variables over Z.
Given a signature F and μ ∈ M_F, let (N, F_N, >) be an F-algebra
such that, for all f ∈ F, f_N ∈ F_N is a polynomial in ar(f) variables
satisfying (1) f_N(x1,…,x_{ar(f)}) ∈ N for all x1,…,x_{ar(f)} ∈ N (well-definedness)
and (2) f_N is μ-monotone. Then, (N, F_N, >) is a well-founded
μ-monotone algebra and the corresponding μ-reduction ordering
is denoted >^μ_poly and said to be a polynomial μ-ordering. The
F-algebra A is called a polynomial μ-interpretation for F.
In fact, given a polynomial μ-interpretation (N, F_N, >), the corresponding
μ-reduction ordering >^μ_poly can equivalently be defined as
follows: for t, s ∈ T(F, X), t >^μ_poly s ⇔ t_N(x1,…,xn) > s_N(x1,…,xn)
for all x1,…,xn ∈ N (i.e., we
do not need to make explicit the evaluation mapping anymore).
A positive aspect of polynomial orderings compared with other reduction
orderings is that they ease the proofs of monotonicity. In unrestricted
rewriting, monotonicity of polynomial interpretations is
normally ensured by requiring that all coefficients of all polynomials
associated to function symbols be positive (see Proposition 10
in [40] or [3, Section 5.3]). Of course, we do not want to do this,
as this would actually mean that we are using a reduction ordering,
thus making it useless for proving termination of CSR in the 'interesting'
cases, i.e., when the TRS is not terminating. Even though
there is no simple way to ensure μ-monotonicity of polynomial μ-orderings
by constraining the shape of the coefficients of monomials, we
can still use the following result (where ∂F/∂x_i means the partial
derivative of the function F w.r.t. its i-th argument).
THEOREM 10. Let F : Z^k → Z be a polynomial over Z. Then, F is monotone in
its i-th argument if (∂F/∂x_i)(x1,…,xk) > 0 for all x1,…,xk ≥ 0.
PROOF. Let x, y ∈ Z be such that y > x. Polynomials (viewed as
functions on real numbers) are obviously differentiable in all their
arguments. Then, by the mean value theorem, there is a real
number z with x < z < y such that
F(…, y, …) − F(…, x, …) = (∂F/∂x_i)(…, z, …)·(y − x). Since,
by hypothesis, the derivative is a positive number and y > x, we have
F(…, y, …) > F(…, x, …); hence the conclusion follows.
Theorem 10 does not provide a full characterization of μ-monotony.
For instance, the polynomial … is obviously monotone,
but its partial derivative is not everywhere positive; still, our result can
be used to ensure (full) monotony (i.e., μ-monotony) of polynomials
when more standard conditions (as the aforementioned ones) do
not hold. For instance, the polynomial … contains
negative coefficients (which, as mentioned before, is not allowed in
the usual polynomial interpretations); the monotonicity of F can be
ensured using Theorem 10, since its partial derivatives are positive on
N. We can use polynomial μ-orderings for proving termination
of CSRS's.
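The derivative condition of Theorem 10 is easy to check mechanically on sampled points. The sketch below is my own illustration (not the paper's interpretation): it uses the hypothetical polynomial F(x, y) = xy + x − y + 1, which contains a negative coefficient yet is monotone in its first argument, since ∂F/∂x = y + 1 > 0; sampling a finite grid is only a heuristic check, not a proof.

```python
from itertools import product

# A polynomial in k variables: dict mapping exponent tuples to integer
# coefficients, e.g. F(x, y) = x*y + x - y + 1 is:
F = {(1, 1): 1, (1, 0): 1, (0, 1): -1, (0, 0): 1}

def partial(poly, i):
    """Symbolic partial derivative w.r.t. the i-th variable (0-based)."""
    out = {}
    for exps, c in poly.items():
        if exps[i] > 0:
            e = list(exps)
            e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return out

def evaluate(poly, point):
    """Evaluate the polynomial at an integer point."""
    total = 0
    for exps, c in poly.items():
        term = c
        for x, e in zip(point, exps):
            term *= x ** e
        total += term
    return total

def looks_monotone(poly, i, k, bound=6):
    """Heuristic check of Theorem 10's condition dF/dx_i > 0,
    sampled over the grid {0,...,bound-1}^k (evidence, not a proof)."""
    d = partial(poly, i)
    return all(evaluate(d, p) > 0 for p in product(range(bound), repeat=k))
```

Running the check shows F passing in its first argument and failing in its second (∂F/∂y = x − 1 vanishes at x = 1), so F could only serve as a μ-interpretation for a symbol whose second argument is non-replacing.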
Example 9. Consider the CSRS (R, μ) of Example 1 and the polynomial
interpretation given by f_N(x, y, z) = …, a_N = …
and b_N = 1. Obviously, f_N(x, y, z) ∈ N for all x, y, z ∈ N. Note also
that f_N is μ-monotone: since the partial derivative of f_N w.r.t. its
replacing argument is positive on N, this follows by Theorem
10. We have l >^μ_poly r for the rule of R; hence
(R, μ) is terminating.
Note that the polynomial μ-interpretation used in Example 9 is not
monotonic in the standard case: for instance, comparing the values of f_N at
(1, 0, 0) and (0, 1, 0) shows that f_N is not monotone in its first argument.
Similarly, f_N is not monotone in its second argument either.
As for the unrestricted case, polynomial μ-orderings are well-founded
μ-simplification orderings if we additionally require that
f_N(x1,…,xk) ≥ x_i for all i ∈ μ(f).
THEOREM 11. Let F be a signature containing at least a constant
and (N, F_N, >) be a polynomial μ-interpretation
such that f_N(x1,…,xk) ≥ x_i for all f ∈ F and i ∈ μ(f).
Then >^μ_poly is a well-founded μ-simplification ordering.
COROLLARY 1. Let (R, μ) be a CSRS and >^μ_poly be a polynomial
μ-simplification ordering. If l >^μ_poly r for every rule l → r in R, then
(R, μ) is simply terminating.
Example 10. Continuing Example 9, note that f_N in Example 9
verifies f_N(x, y, z) ≥ z for all x, y, z ∈ N. Hence, (R, μ) is
simply terminating.
6 Modularity of Simple Termination of
CSRS's
We shall now investigate to what extent the notion of simple termination
of CSRS's introduced above behaves in a modular way, i.e.,
whether simple termination of two given CSRS's implies simple
termination of their union. Such a kind of modularity analysis for
general termination of CSRS's has recently been initiated in [22]
with promising first results. Let us recall a few notions and results
from [22] that we need subsequently. For simplicity, for the rest of
the paper we assume that all considered CSRS's are finite. Some of
the results (but not all) do also hold for arbitrary (infinite) systems.
DEFINITION 7. We say that a property P of CSRS's is modular for
disjoint unions if, whenever two disjoint CSRS's have property P ,
then their (disjoint) union also does. 6
This notion of modularity can be generalized in a straightforward
way to other (more general) classes of combinations of CSRS's.
6 The reverse implication usually holds, too, but for simplicity
we don't include it in the definition here.
DEFINITION 8. A rule l → r in a CSRS (R, μ) is non-duplicating
if for every x ∈ Var(l) the multiset of replacing occurrences of x in
r is contained in the multiset of replacing occurrences of x in l, and
duplicating otherwise. (R, μ) is non-duplicating if all its rules are,
and duplicating otherwise. A rule l → r is said to be collapsing if
r is a variable, and non-collapsing otherwise. A CSRS is collapsing
if it has a collapsing rule, and non-collapsing otherwise.
Note that for CSRS's without any replacement restrictions, collapsingness
and non-duplication as above just yield the corresponding
well-known notions for TRS's.
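The multiset condition is easy to make concrete. The sketch below uses my own representation (terms as nested tuples, variables as strings, μ as a dict of 1-based replacing positions); assuming Toyama's rule is f(a, b, x) → f(x, x, x) with μ(f) = {3}, the rule is duplicating as an ordinary TRS but non-duplicating under μ.

```python
from collections import Counter

def replacing_vars(term, mu):
    """Multiset of variable occurrences at replacing positions
    (every edge on the path from the root must be replacing)."""
    if isinstance(term, str):                  # a variable occurrence
        return Counter([term])
    f, *args = term
    out = Counter()
    for i, a in enumerate(args):
        if i + 1 in mu.get(f, set()):          # only descend into replacing args
            out += replacing_vars(a, mu)
    return out

def non_duplicating(l, r, mu):
    """Definition 8: no variable has more replacing occurrences in r than in l."""
    lv, rv = replacing_vars(l, mu), replacing_vars(r, mu)
    return all(rv[x] <= lv[x] for x in rv)
```

With the full replacement map μ(f) = {1, 2, 3} the same rule has three replacing occurrences of x on the right against one on the left, so the check correctly reports duplication.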
Of course, in order to sensibly combine two CSRS's, one should
require some basic compatibility condition regarding the respective
replacement restrictions.
DEFINITION 9. Two CSRS's (R1, μ1) and (R2, μ2) are said to be compatible
if they have the same replacement restrictions for shared
function symbols, i.e., if μ1(f) = μ2(f) for every f ∈ F1 ∩ F2.
The union (R, μ) of two compatible
CSRS's (R1, μ1) and (R2, μ2) is defined componentwise, i.e.,
R = R1 ∪ R2 and μ(f) = μ_i(f) for f ∈ F_i.
Disjoint CSRS's are trivially compatible.
THEOREM 12. ([22]) Let (R1, μ1), (R2, μ2) be two disjoint, terminating
CSRS's, and let (R, μ) be their union. Then the following
hold:
(R, μ) terminates, if both (R1, μ1) and (R2, μ2) are non-collapsing.
(R, μ) terminates, if both (R1, μ1) and (R2, μ2) are non-duplicating.
(R, μ) terminates, if one of the systems is both non-collapsing
and non-duplicating.
DEFINITION 10. A TRS R is said to be terminating under free
projections, FP-terminating for short, if the disjoint union of R
and the TRS ({G}, {G(x, y) → x, G(x, y) → y}) is terminating. A
CSRS (R, μ) is said to be FP-terminating if the disjoint union of (R, μ)
and the CSRS (({G}, {G(x, y) → x, G(x, y) → y}), μ_G), with μ_G(G) =
{1, 2}, is terminating.
THEOREM 13. ([22]) Let (R1, μ1), (R2, μ2) be two disjoint, terminating
CSRS's such that their union (R, μ) is non-terminating.
Then one of the systems is not FP-terminating, and the other system
is collapsing.
As already shown for TRS's, this abstract and powerful structure
result has a lot of direct and indirect consequences and corollaries.
To mention only a few:
DEFINITION 11. A CSRS is non-deterministically collapsing if
there is a term that reduces to two distinct variables (in a finite
number of context-sensitive rewrite steps).
THEOREM 14. Any non-deterministically collapsing, terminating
CSRS is FP-terminating.
THEOREM 15. ([22]) Termination is modular for non-deterministically
collapsing disjoint CSRS's.
Consequently we also get the next result.
THEOREM 16. FP-termination is modular for disjoint CSRS's.
In the case of TRS's it is well-known that simple termination is
modular for disjoint unions ([25]). This can also be shown via the
TRS version of the general Theorem 13 above (cf. [21]). In particular,
we note that for TRS's we have the equivalence: R is
simply terminating iff R ∪ Emb(F) is terminating. Now it is
obvious that if F contains a function symbol of arity at least 2, then
(R ∪ Emb(F), μ) is non-deterministically collapsing, and, if additionally
(R ∪ Emb(F), μ) is terminating, then (R ∪ Emb(F), μ) is
FP-terminating. In fact, even for the case where the TRS
has only function symbols of arity 0 and 1, simple termination of
R implies its FP-termination, as can be easily shown (e.g., by
a minimal counterexample proof). 7
With these preparations, we are ready now to tackle modularity of
simple termination for CSRS's.
THEOREM 17. Let (R, μ) be a CSRS with |μ(f)| ≥ 2
for at least one f ∈ F. Then simple termination of (R, μ)
implies FP-termination of (R, μ).
PROOF. Simple termination of (R, μ) means termination of (R ∪
Emb^μ(F), μ). By assumption, there is an f ∈ F with |μ(f)| ≥
2, i.e., there are i, j ∈ μ(f) with i ≠ j. Thus, f(x1,…,xn) rewrites
to both x_i and x_j (using Emb^μ(F)). But this means that (R ∪
Emb^μ(F), μ) is non-deterministically collapsing; hence, by Theorem
14, (R ∪ Emb^μ(F), μ) and, consequently, also (R, μ) are FP-terminating.
By combining Theorem 17 and Theorem 13 we get now the following
modularity result for simple termination.
THEOREM 18. Let (R1, μ1), (R2, μ2)
be two disjoint CSRS's, and let (R, μ) be their disjoint
union.
Moreover, suppose that there exists an f_i ∈ F_i with |μ(f_i)| ≥ 2, for
i = 1, 2. If (R1, μ1) and (R2, μ2) are simply terminating, then
(R, μ) is simply terminating, too.
Interestingly, for the proof of this result via Theorem 17,
we need the technical assumption that there exists an f_i ∈ F_i with
|μ(f_i)| ≥ 2. Currently, we do not know whether the
statement of Theorem 17 also holds without this condition. If yes,
Theorem 18 would immediately generalize, too, and yield in general
modularity of simple termination for CSRS's.
But note that if this condition above is not satisfied, any proof of
the corresponding statement in Theorem 17 cannot work as in the
TRS case any more, since now we may have arbitrarily complicated
terms.
Finally, let us consider the case of (some) non-disjoint unions of
CSRS's.
6.1 Extension to the Constructor-Sharing
Case
7 Note that in this case the terms over F have a very simple
shape, and essentially are strings.
DEFINITION 12. For a CSRS (R, μ), where R = (F, R), the set of
defined (function) symbols is D = {root(l) | l → r ∈ R}; its set of
constructors is C = F \ D. Let (R1, μ1), (R2, μ2)
be CSRS's with F1, F2 denoting their respective
signatures, C1, C2 their sets of constructors, and D1, D2 their defined function symbols.
(R1, μ1) and (R2, μ2) are said to be (at most) constructor
sharing if D1 ∩ F2 = D2 ∩ F1 = ∅. The set of shared constructors
between them is C = C1 ∩ C2. A rule l → r is said to be
(shared) constructor lifting if root(r) ∈ C; R_i is said
to be (shared) constructor lifting if it has a constructor lifting rule.
A rule l → r is said to be shared symbol lifting if
root(r) is a variable or a shared constructor. R_i is said to be shared
symbol lifting if it is collapsing or has a constructor lifting rule. R_i
is layer preserving if it is not shared symbol lifting.
DEFINITION 13. Let ((F, R), μ) be a CSRS and f ∈ F. We say
that f is fully replacing if μ(f) = {1,…,n}, where n is the arity of
f.
THEOREM 19 ([22], EXTENDS [21, THEOREM 34]). Let
(R1, μ1), (R2, μ2) be two constructor sharing, compatible, terminating
CSRS's with all shared constructors fully replacing, such
that their union (R, μ) is non-terminating. Then one of the systems
is not FP-terminating, and the other system is shared symbol lifting
(i.e., collapsing or constructor lifting). 8
PROOF. We only sketch the proof idea. Analogous to [21], one considers
a minimal counterexample, i.e., a non-terminating derivation
in the union with a minimal number of alternating layers. By using
some sophisticated abstracting transformation, such a counterexample
can be translated into a counterexample that uses only one of the
two systems plus a disjoint system with only two rules of the form
G(x, y) → x and G(x, y) → y (that serve for "extracting relevant information
of the former signature by need"). Note that for this construction
to work properly, we need the assumption that the shared
constructors are fully replacing.
Without the above assumption the statement of the Theorem does
not hold in general.
Example 11. Consider the CSRS's … and … that have a shared constructor
':' which is not fully replacing. Now, both CSRS's are obviously
terminating and also FP-terminating, but their union is not, due to
a cycle involving length(zeros).
As for disjoint unions, from Theorem 19 above many results can
be derived. Especially regarding simple termination, we have the
following.
THEOREM 20. Let (R1, μ1), (R2, μ2) be compatible,
constructor-sharing CSRS's with all shared constructors
fully replacing, and let (R, μ) be their union. Moreover,
suppose that there exists an f_i ∈ F_i with |μ(f_i)| ≥ 2, for i = 1, 2.
If (R1, μ1) and (R2, μ2) are simply terminating, then (R, μ) is simply terminating, too.
8 As for TRS's, this result holds not only for finite CSRS's, but
also for finitely branching ones. But, in contrast to the disjoint
union case, it doesn't hold any more for infinitely branching systems,
cf. [35] for a counterexample.
Note that also other "symmetric" and "asymmetric" results can be
easily obtained from Theorem 19 and also from Theorem 20, analogously
to the case of disjoint unions of CSRS's (and as for TRS's).
A simple (though somewhat artificial) application example is the
following variant of Example 1.
Example 12. Consider the two CSRS's … and … with μ(f) = {3}. The systems are compatible,
constructor-sharing, and all shared constructors are (trivially)
fully replacing. Both are simply terminating (as it is not very difficult
to show; for instance, consider the polynomial interpretation
of Example 9 together with g_N = …); hence the combined
system is also simply terminating by Theorem 20. Note that
the union includes the full version of Toyama's TRS (which is non-terminating
in the absence of replacement restrictions), here proved to
be simply terminating regarding CSR (for the selected replacement
map).
7 Conclusion
We have introduced a definition of simple termination of CSR and
analyzed its suitability as a unifying notion which, similar to simple
termination of rewriting, represents a class of orderings which
are well-suited for automatization. We have proven that all existing
transformations for proving termination of CSR can also be used for
proving simple termination of CSRS's (this is in contrast to other
restricted forms of termination like, e.g., innermost termination of
CSR, see [18, 22]). We have also shown that CSRPO-termination
is actually a method for proving simple termination of CSRS's. We
have analyzed the use of polynomial orderings as a tool for analyzing
termination of CSRS's. After this analysis, our
notion of simple termination appears to be a quite natural one for
the context-sensitive setting. In contrast to the context-free case
where simplification orderings (over finite signatures) are automatically
well-founded by Kruskal's Theorem, in the context-sensitive
case termination needs to be explicitly established if such orderings
are used for termination proofs. Some possible lines for doing so
have also been mentioned. Finally, we have also obtained some
first modularity results concerning simple termination of CSRS's.
These results are quite encouraging and we expect that further similar
results can be obtained, and also be extended to more general
combinations of CSRS's, e.g., to composable ([34]) and to certain
hierarchical ([24], [11]) systems.
Acknowledgements
We thank the anonymous referees for their helpful remarks.
--R
Improving on-demand strategy annotations
Correct and complete (pos- itive) strategy annotations for OBJ
Term Rewriting and All That.
Personal communication
Recursive path orderings can be context-sensitive
Principles of Maude.
Simulation of turing machines by a regular rewrite rule.
A note on simplification orderings.
Orderings for term rewriting systems.
Termination of rewriting.
Hierarchical termination.
Principles of OBJ2.
An overview of CAFE specification environment - an algebraic approach for creating
Transformation techniques for context-sensitive rewrite systems
Transforming context-sensitive rewrite systems
Innermost termination of context-sensitive rewriting
Transformation techniques for context-sensitive rewrite systems
Introducing OBJ.
Generalized sufficient conditions for modular termination of rewriting.
Modular termination of context-sensitive rewriting
Simple word problems in universal algebra
Modular proofs for completeness of hierarchical term rewriting systems.
Modularity of simple termination of term rewriting systems with shared constructors.
On proving term rewriting systems are noetherian.
Termination of context-sensitive rewriting by rewriting
Termination of on-demand rewriting and termination of OBJ programs
Termination of rewriting with strategy annotations.
Termination of (canonical) context-sensitive rewriting
Simple termination is difficult.
Modular Properties of Composable Term Rewriting Systems.
On the modularity of termination of term rewriting systems.
Advanced Topics in Term Rewriting.
Simplification orderings: History of results.
of context-sensitive rewriting
In TeReSe
--TR
Termination of rewriting
Modularity of simple termination of term rewriting systems with shared constructors
Simulation of Turing machines by a regular rewrite rule
On the modularity of termination of term rewriting systems
Modular proofs for completeness of hierarchical term rewriting systems
rewriting and all that
Principles of OBJ2
Advanced topics in term rewriting
Modular termination of context-sensitive rewriting
Context-sensitive rewriting strategies
Termination of Rewriting With Strategy Annotations
Improving On-Demand Strategy Annotations
Termination of Context-Sensitive Rewriting by Rewriting
Context-Sensitive AC-Rewriting
Transforming Context-Sensitive Rewrite Systems
Termination of (Canonical) Context-Sensitive Rewriting
Termination of Context-Sensitive Rewriting
Recursive Path Orderings Can Be Context-Sensitive
Hierachical Termination
Termination of on-demand rewriting and termination of OBJ programs
An overview of CAFE specification environment-an algebraic approach for creating, verifying, and maintaining formal specifications over networks
--CTR
Mirtha-Lina Fernández, Relaxing monotonicity for innermost termination, Information Processing Letters, v.93 n.3, p.117-123, 14 February 2005
Beatriz Alarcón, Raúl Gutiérrez, José Iborra, Salvador Lucas, Proving Termination of Context-Sensitive Rewriting with MU-TERM, Electronic Notes in Theoretical Computer Science (ENTCS), 188, p.105-115, July, 2007
Nao Hirokawa, Aart Middeldorp, Tyrolean termination tool: Techniques and features, Information and Computation, v.205 n.4, p.474-511, April, 2007
Jürgen Giesl, Aart Middeldorp, Transformation techniques for context-sensitive rewrite systems, Journal of Functional Programming, v.14 n.4, p.379-427, July 2004
Salvador Lucas, Proving termination of context-sensitive rewriting by transformation, Information and Computation, v.204 n.12, p.1782-1846, December, 2006 | declarative programming;context-sensitive rewriting;modular program analysis and verification;automatic proofs of termination;evaluation strategies |
570235 | Computing Densities for Markov Chains via Simulation. | We introduce a new class of density estimators, termed look-ahead density estimators, for performance measures associated with a Markov chain. Look-ahead density estimators are given for both transient and steady-state quantities. Look-ahead density estimators converge faster (especially in multidimensional problems) and empirically give visually superior results relative to more standard estimators, such as kernel density estimators. Several numerical examples that demonstrate the potential applicability of look-ahead density estimation are given. | Introduction
Visualization is becoming increasingly popular as a means of enhancing one's understanding of a stochastic
system. In particular, rather than just reporting the mean of a distribution, one often finds that more
useful conclusions may be drawn by seeing the density of the underlying random variable.
We will consider the problem of computing the densities of performance measures associated with
a Markov chain. For chains on a finite state space, this typically amounts to computing or estimating
a finite number of probabilities, and standard methods may be applied easily in this case (see below).
When the chain evolves on a general state space, however, the problem is not so straightforward.
General state-space Markov chains arise naturally in the simulation of discrete-event systems (Henderson and Glynn 1998). As a simple example, consider the customer waiting time in the single-server queue with traffic intensity ρ < 1 (see Section 6). The sequence of customer waiting times forms a Markov chain that evolves on the state space [0, ∞). More generally, many discrete-event systems may be described by a generalized semi-Markov process, and such processes can be viewed as Markov chains on a general state space (see, e.g., Henderson and Glynn 1998). General state-space Markov chains are also prevalent in the theory of control systems; see Chapter 2 of Meyn and Tweedie (1993).
(The research of the second author was supported by the U.S. Army Research Office under Contract No. DAAG55-97-1-0377 and by the National Science Foundation under Grant No. DMS-9704732.)
This paper is an outgrowth of, and considerably extends, Glynn and Henderson (1998), in which
we introduced a new methodology for stationary density estimation. For a general overview of density
estimation from i.i.d. observations, see Prakasa Rao (1983), Devroye (1985) or Devroye (1987). Yakowitz
(1985), (1989) has considered the stationary density estimation problem for Markov chains on state space S ⊆ ℝ^d where the stationary distribution has a density with respect to Lebesgue measure. He showed that under certain conditions, the kernel density estimator at any point x is asymptotically normally distributed with error proportional to (n h_n^d)^{-1/2}, where h_n is the so-called "bandwidth" and n is the simulation runlength. One of the conditions needed to establish this result is that h_n → 0 as n → ∞. Hence, the rate of convergence for kernel density estimators is typically strictly slower than n^{-1/2}, and depends on the dimension d (see Remarks 5 and 7). In contrast, the estimator we propose converges at rate n^{-1/2} independent of the dimension d.
In fact, the estimator that we propose has several appealing features.
1. It is relatively easy to compute (compared, say, to nearest-neighbour or kernel density estimators).
2. No tuning parameters need to be selected (unlike the "bandwidth" for kernel density estimators,
for example).
3. Well-established steady-state simulation output analysis techniques may be applied to analyze the
estimator.
4. The error in the estimator converges to 0 at rate n^{-1/2} independent of the dimension of the state space, where n is the simulation runlength.
5. Under relatively mild assumptions, look-ahead density estimators consistently estimate not only
the density itself, but also the derivatives of the density; see Theorem 9.
6. The estimator can be used to obtain a new quantile estimator. The variance estimator for the
corresponding quantile estimator has a rigorous convergence theory, and converges at rate n^{-1/2}
(Section 5).
7. Empirically, the estimator yields superior representations of stationary densities compared with
other methods (Example 1 of Section 6).
We first introduce the central ideas behind look-ahead density estimation in a familiar context. Although this problem is subsumed by the treatment of Section 3, a separate development should prove helpful in understanding the look-ahead approach. Let X = (X(i) : i ≥ 0) be an irreducible positive recurrent Markov chain on finite state space S, and let π(y) be the stationary probability of a point y ∈ S. Our goal is to estimate the stationary "density" π(·); in the finite state space context, the stationary "density" coincides with the stationary probabilities π(y), for y ∈ S. To estimate π(y), the standard estimator is

π̃_n(y) = (1/n) Σ_{i=0}^{n-1} I(X(i) = y),

where I(·) is the indicator function that is 1 if its argument is true, and 0 otherwise. The estimator π̃_n(y) is simply the proportion of time the Markov chain X spends in the state y.
Notice, however, that one could also estimate π(y) by

π_n(y) = (1/n) Σ_{i=0}^{n-1} P(X(i), y),

where P(·, ·) is the transition matrix of X. The estimator π_n(y) is a (strongly) consistent estimator of π(y), as can be seen by noting that

π_n(y) → Σ_{x ∈ S} π(x) P(x, y) = π(y) a.s.

as n → ∞, by the strong law for positive recurrent Markov chains on discrete state space. Notice that E[I(X(i+1) = y) | X(i)] = P(X(i), y), so that the quantity P(X(i), y) is, in effect, "looking ahead" to see whether the next iterate of the Markov chain will equal y. This is the motivation for the name "look-ahead" density estimator.
In the remainder of this paper we assume a general state space (not necessarily discrete) unless otherwise specified. We refer to the density we are trying to estimate as the target density, and the associated distribution as the target distribution.
In Section 2, look-ahead density estimators are developed for several performance measures associated
with transient simulations, and their pointwise asymptotic behaviour is derived. Steady-state performance
measures are similarly considered in Section 3. In Section 4, we turn to the global convergence behaviour
of look-ahead density estimators. In particular, we give conditions under which the look-ahead density
estimator converges to the target density in an L q sense (Theorem 5), is uniformly convergent (Theorem
7), and is differentiable (Theorem 9).
In Section 5 we consider the computation of several features of the target distribution, including the
mode of the target density, and quantiles of the target distribution. Finally, in Section 6 we give three
examples of look-ahead density estimation.
Computing Densities for Transient Performance Measures
Let X = (X(n) : n ≥ 0) be a Markov chain taking values in a state space S. Since our focus in this
section is on transient performance measures, we will permit our chain to possess transition probabilities
that are non-stationary.
Recall that Q = Q(x, dy) is a transition kernel if Q(x, ·) is a probability measure on S for each x ∈ S, and if Q(·, dy) is suitably measurable. (If S is a discrete state space, Q corresponds to a transition matrix.) By permitting X to have non-stationary transition probabilities, we are asserting that there exists a sequence of transition kernels (P_n : n ≥ 1) such that

P(X(n+1) ∈ dy | X(0), …, X(n)) = P_{n+1}(X(n), dy) a.s.

for n ≥ 0. Our basic assumption is:
A1. There exists a (σ-finite) measure γ on S and a function p_n : S × S → [0, ∞) such that P_n(x, dy) = p_n(x, y) γ(dy) for n ≥ 1.
Remark 1: Assumption A1 is automatically satisfied when S is finite or countably infinite.
Remark 2: Given that this paper is concerned with density estimation, the case where γ is Lebesgue measure and S is a subset of ℝ^d is of the most interest to us. However, it is important to note that A1 does not restrict us to this context. In fact, Example 1 in Section 6 shows that this apparent subtlety can in fact be very useful.
Remark 3: If X has stationary transition probabilities, then P_n = P and p_n = p for all n ≥ 1. In our discussion of steady-state density estimation (see Section 3), we will clearly wish to restrict ourselves to such chains.
We will now describe several different computational settings to which the ideas of this paper apply.
In what follows, we will adopt the generic notation p_Z(·) to denote the γ-density of the r.v. Z. In other words, p_Z(·) is a function with the property that P(Z ∈ dy) = p_Z(y) γ(dy) for all y in the range of Z. Also, for a given initial distribution μ on S, let P_μ(·) be the probability distribution on the path-space of X under which X has initial distribution μ.
Problem 1: Compute the density of X(r).
For r ≥ 1, let p_{X(r)}(·) be the γ-density of X(r). Note that

P_μ(X(r) ∈ dy) = ∫_S P_μ(X(r−1) ∈ dx) P_r(x, dy) = ∫_S P_μ(X(r−1) ∈ dx) p_r(x, y) γ(dy),

so that

p_{X(r)}(y) = ∫_S P_μ(X(r−1) ∈ dx) p_r(x, y) = E_μ p_r(X(r−1), y),

where E_μ is the expectation operator corresponding to P_μ. To compute the density p_{X(r)}(y), simulate n i.i.d. replicates X_1, …, X_n of X under P_μ. Then, A1 and the strong law of large numbers together guarantee that

p_{1n}(y) = (1/n) Σ_{i=1}^n p_r(X_i(r−1), y) → p_{X(r)}(y) a.s.

as n → ∞, so that p_{X(r)}(y) can indeed be computed by our look-ahead estimator p_{1n}(y).
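A minimal sketch of the Problem 1 estimator, under an assumed Gaussian autoregressive chain X(k+1) = a X(k) + Z(k), Z(k) ~ N(0, 1), started at 0 (our example, not the paper's). Its one-step transition density p(x, y) = φ(y − a x) is available in closed form, and the exact density of X(r) is normal, which provides a check.

```python
import math, random

# Sketch of the Problem 1 look-ahead estimator, assuming a Gaussian
# AR(1) chain X(k+1) = a*X(k) + Z(k), Z(k) ~ N(0, 1), X(0) = 0.
# One-step transition density: p(x, y) = phi(y - a*x).
a = 0.5

def phi(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def lookahead_density(r, y, n=50_000, seed=1):
    # average p_r(X_i(r-1), y) over n i.i.d. replicates
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 0.0
        for _ in range(r - 1):          # simulate up to X(r-1)
            x = a * x + rng.gauss(0, 1)
        total += phi(y - a * x)         # "look ahead" one step to time r
    return total / n

r, y = 3, 0.0
var_r = (1 - a ** (2 * r)) / (1 - a ** 2)   # exact variance of X(r)
exact = phi(y / math.sqrt(var_r)) / math.sqrt(var_r)
est = lookahead_density(r, y)
print(est, exact)
```

No bandwidth or binning is involved; the estimator is an average of smooth functions of the replicates, as the text describes.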
Remark 4: Suppose that A1 is weakened so that we assume the existence of a density only for the m-step transition probability distribution, say P(X(n+m) ∈ dy | X(n) = x) = p^{(m)}(x, y) γ(dy). Provided r ≥ m, we can write p_{X(r)}(y) = E_μ p^{(m)}(X(r−m), y), so that p_{X(r)}(y) can again be easily computed via independent replication of X.
For a given subset A ⊆ S, let T = inf{n ≥ 0 : X(n) ∈ A} be the first entrance time to A.
Problem 2: Compute the density of X(T).
Suppose that X starts in A^c under initial distribution μ. Then, for y ∈ A,

P_μ(X(T) ∈ dy) = Σ_{r ≥ 1} P_μ(T = r, X(r) ∈ dy) = Σ_{r ≥ 1} ∫_{A^c} P_μ(T > r−1, X(r−1) ∈ dx) p_r(x, y) γ(dy),

so that for y ∈ A,

p_{X(T)}(y) = E_μ Σ_{j=0}^{T−1} p_{j+1}(X(j), y).

Again, A1 and the strong law of large numbers ensure that

p_{2n}(y) = (1/n) Σ_{i=1}^n Σ_{j=0}^{T_i − 1} p_{j+1}(X_i(j), y) → p_{X(T)}(y) a.s.

as n → ∞, where the X_i's are independent replicates of X under P_μ, and T_i = inf{n ≥ 0 : X_i(n) ∈ A}.
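On a toy finite chain the Problem 2 estimator is easy to exercise. The symmetric random walk on {0, 1, 2, 3} started at 1 and absorbed on A = {0, 3} is our own illustration; gambler's ruin gives P(X(T) = 0) = 2/3 exactly, against which the estimator can be checked.

```python
import random

# Toy illustration of the Problem 2 estimator: symmetric random walk on
# {0, 1, 2, 3}, started at 1, absorbed on A = {0, 3}. By gambler's ruin,
# P(X(T) = 0) = 2/3 exactly.
def P(x, y):
    # one-step transition probability from an interior state
    return 0.5 if abs(x - y) == 1 else 0.0

def lookahead_hitting_mass(y, n=50_000, seed=6):
    # average over replicates of sum_{j < T} P(X(j), y), for y in A
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1
        while x not in (0, 3):
            total += P(x, y)
            x = x + 1 if rng.random() < 0.5 else x - 1
    return total / n

print(lookahead_hitting_mass(0))   # converges to 2/3
```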
An important class of transient performance measures is concerned with cumulative costs. Specifically, let (Γ(n) : n ≥ 1) be a sequence of real-valued r.v.'s in which Γ(n) may be interpreted as the "cost" associated with running X over [n−1, n), so that

C(n) = Σ_{i=1}^n Γ(i)

is the cumulative cost corresponding to the time interval [0, n). We assume that

P(Γ(1) ≤ y_1, …, Γ(n) ≤ y_n | X) = Π_{i=1}^n P(Γ(i) ≤ y_i | X(i−1), X(i)),   (1)

so that, conditional on X, the Γ(i)'s are independent r.v.'s and the conditional distribution of Γ(i) depends on X only through X(i−1) and X(i). An important special case arises when Γ(i) = f(X(i−1)) for some f : S → ℝ. In this case, (1) is automatically satisfied and f(x) may be viewed as the cost associated with spending a unit amount of time in x ∈ S. (We permit the additional generality of (1) because such cost structures are a standard ingredient in the general theory of "additive functionals" for Markov chains, and create no difficulties for our theory.)
Before proceeding to a discussion of the cumulative cost C(n), we note that Problems 1 and 2 have natural analogues here. However, we will need to replace A1 with:
A2. There exists a (σ-finite) measure γ on S and a function p̃_n such that P(Γ(n) ∈ dy | X(n−1), X(n)) = p̃_n(X(n−1), X(n), y) γ(dy) for n ≥ 1.
Problem 3: Compute the density of Γ(r).
For y ∈ ℝ, the density p_{Γ(r)}(y) can be consistently estimated by

p_{3n}(y) = (1/n) Σ_{i=1}^n p̃_r(X_i(r−1), X_i(r), y),

where X_1, …, X_n are independent replicates of X.
Problem 4: Compute the density of Γ(T).
Here, the density p_{Γ(T)}(y) can be consistently estimated via

p_{4n}(y) = (1/n) Σ_{i=1}^n p̃_{T_i}(X_i(T_i − 1), X_i(T_i), y).

As usual, X_1, …, X_n are independent replicates of X under P_μ, and T_i is the first entrance time of X_i to the set A.
In addition to consistency of p_{3n}(y) and p_{4n}(y), A2 permits us to solve a couple of additional computational problems that relate to the density of the cumulative cost r.v. C(·) introduced earlier.
Problem 5: Compute the density of C(r).
We assume here that γ is Lebesgue measure. Then, if r ≥ 1, we may use A2 to write, conditioning on the path of X and on C(r−1),

P(C(r) ≤ y) = E ∫_{-∞}^{y − C(r−1)} p̃_r(X(r−1), X(r), z) dz,

so that

p_{C(r)}(y) = E p̃_r(X(r−1), X(r), y − C(r−1)).

Evidently, A2 and the strong law of large numbers together guarantee that

p_{5n}(y) = (1/n) Σ_{i=1}^n p̃_r(X_i(r−1), X_i(r), y − C_i(r−1)) → p_{C(r)}(y) a.s.

as n → ∞, so that p_{5n}(y) is a consistent estimator of p_{C(r)}(y), the (Lebesgue) density of C(r).
Problem 6: Compute the density of C(T).
As we did earlier, we assume that γ is Lebesgue measure. Arguments analogous to those used above establish the identity

p_{C(T)}(y) = E_μ p̃_T(X(T−1), X(T), y − C(T−1)).

Thus A2 and the strong law prove that

p_{6n}(y) = (1/n) Σ_{i=1}^n p̃_{T_i}(X_i(T_i − 1), X_i(T_i), y − C_i(T_i − 1))

is a consistent estimator for p_{C(T)}(y).
To this point, we have constructed unbiased density estimators for each of the six density computation problems described above. We now turn to the development of asymptotically valid confidence regions for these densities. The key is to recognize that each of the six estimators may be represented as

p_{in}(y) = (1/n) Σ_{j=1}^n η_{ij}(y),

where the η_{ij}(·)'s are i.i.d. in j for each fixed i. (Here, the index set Λ is either S or ℝ, depending on which of the estimators is under consideration.) For y ∈ Λ, let p(y; i) = E_μ η_{i1}(y). For d points y_1, …, y_d in Λ, define p̃_{in}(ỹ) (respectively, p̃(ỹ; i)) to be a d-dimensional vector with jth component p_{in}(y_j) (respectively, p(y_j; i)). A straightforward application of the multivariate central limit theorem (CLT) yields the following result.
Proposition 1 Let y_1, …, y_d be points in Λ, selected so that E_μ η_{i1}(y_k)² < ∞ for 1 ≤ k ≤ d. Then

n^{1/2}(p̃_{in}(ỹ) − p̃(ỹ; i)) ⇒ N(0, Σ_i(ỹ))

as n → ∞, where N(0, Σ_i(ỹ)) is a d-dimensional multivariate normal random vector with mean vector zero and covariance matrix Σ_i(ỹ) having (j, k)th element given by cov(η_{i1}(y_j), η_{i1}(y_k)).
Proposition 1 suggests the approximation

p̃_{in}(ỹ) ≈ p̃(ỹ; i) + n^{-1/2} Σ_i^{1/2}(ỹ) N(0, I)   (2)

for n large, where ≈ denotes the (non-rigorous) relation "has approximately the same distribution as", and Σ_i^{1/2}(ỹ) is a square root (Cholesky factor) of the non-negative definite symmetric matrix Σ_i(ỹ). Since Σ_i(ỹ) is easily estimated consistently from X by the sample covariance matrix, it follows that (2) may be used to construct asymptotically valid confidence regions for p̃(ỹ; i).
Remark 5: Equation (2) implies that the error in the look-ahead density estimator decreases at rate n^{-1/2}. This dimension-independent rate stands in sharp contrast to the heavily dimension-dependent rate exhibited by other density estimators, including kernel density estimators; see Prakasa Rao (1983). The convergence rate for such estimators is typically (n h_n^d)^{-1/2}, where h_n is the bandwidth parameter and d is the dimension of Λ. To minimize mean squared error, the bandwidth h_n is typically chosen to be of the order n^{-1/(d+4)}, and then the error in the kernel density estimators decreases at rate n^{-2/(d+4)}. Even in one dimension, this asymptotic rate is slower than that exhibited by the look-ahead density estimator, and in higher dimensions, the difference is even more apparent; see Example 2 of Section 6.
Computing Densities for Steady-State Performance Measures
We now extend our look-ahead estimation methodology to the steady-state context. In order for the concept of steady-state to be well-defined, we assume that X has stationary transition probabilities, so that the transition kernels (P_n : n ≥ 1) introduced in Section 2 are independent of n. In other words, we assume that there exists a transition kernel P such that P_n = P for all n ≥ 1, with density p = p_n defined as in A1.
Proposition 2 Under A1, any stationary distribution of X possesses a density π with respect to γ.
Proof: Let π be a stationary distribution of X. (Note that we are using π to represent both the stationary distribution and its density with respect to γ. The appropriate interpretation should be clear from the context.) Then, for all (suitably) measurable A ⊆ S,

π(A) = ∫_S π(dx) P(x, A)   (3)

and

∫_S π(dx) P(x, A) = ∫_S π(dx) ∫_A p(x, y) γ(dy) = ∫_A ( ∫_S π(dx) p(x, y) ) γ(dy).   (4)

It follows from (3) and (4) that the stationary distribution π has a γ-density π(·) having value

π(y) = ∫_S π(dx) p(x, y)   (5)

at y ∈ S. □
According to Proposition 2, the density π(y) may be expressed as an expectation, namely

π(y) = E_π p(X(0), y);   (6)

see (5). Relation (6) suggests using the estimator

π_n(y) = (1/n) Σ_{i=0}^{n-1} p(X(i), y)

to compute π(y); π_n(y) requires simulating X up to time n − 1.
To establish laws of large numbers and CLT's for π_n(y), we require that X be positive recurrent in a suitable sense.
A3. Assume that there exists a subset B ⊆ S, positive scalars λ, a and b, an integer m ≥ 1, a probability distribution φ(·) on S, and a (deterministic) function V : S → [1, ∞) such that:
1. P^m(x, ·) ≥ λ φ(·) for all x ∈ B;
2. E[V(X(1)) | X(0) = x] ≤ (1 − a) V(x) + b I(x ∈ B) for all x ∈ S,
where I(x ∈ B) is 1 or 0 depending on whether or not x ∈ B.
In the language of general state-space Markov chain theory, A3 ensures that X is a geometrically ergodic Harris recurrent Markov chain; see Meyn and Tweedie (1993) for details. Condition 1 of A3 is typically satisfied for reasonably behaved Markov chains by choosing B to be a compact set; λ, φ, and m are then determined so that condition 1 is satisfied. Condition 2 is known as a Lyapunov function condition. For many chains, a potential choice for V is suggested by the dynamics of the chain, but a great deal of ingenuity may be necessary in order to construct such a V. See Example 1 of Section 6 for an illustration of the verification of A3. In any case, A3 ensures that X possesses a unique stationary distribution.
Remark 6: Assumption A3 is a stronger condition than is necessary to obtain the laws of large numbers and CLT's below. However, in most applications, A3 is a particularly straightforward sufficient condition to verify, and we offer it in that spirit.
Let ỹ = (y_1, …, y_d) consist of d points selected from S, and let π̃_n(ỹ) (respectively, π̃(ỹ)) be a d-dimensional vector in which the jth component is π_n(y_j) (respectively, π(y_j)).
Theorem 3 Assume A1 and A3, and suppose that for 1 ≤ j ≤ d, p(·, y_j)² ≤ c V(·) for some c < ∞. Then π̃_n(ỹ) → π̃(ỹ) a.s. as n → ∞. Also, there exists a non-negative definite symmetric matrix Σ(ỹ) such that

n^{-1/2} Σ_{i=0}^{⌊nt⌋−1} (p(X(i), ỹ) − π̃(ỹ)) ⇒ Σ^{1/2}(ỹ) B(t)

as n → ∞, where B is a d-dimensional standard Brownian motion and ⇒ denotes weak convergence in D[0, ∞).
Proof: The proof follows directly from results of Meyn and Tweedie (1993). The strong law is a consequence of Theorem 17.0.1. Lemma 17.5.1 and Lemma 17.5.2 together imply the existence of a square-integrable (with respect to π) solution to Poisson's equation. This then enables an application of Theorem 17.4.4 to yield the result. □
Remark 7: Equation (2) implies that the error in the look-ahead density estimator for estimating stationary densities decreases at rate n^{-1/2}. This is the same rate we observed in Remark 5 for the case of independent observations. Furthermore, exactly as in the independent setting, other existing estimators, including kernel density estimators, converge at a slower rate; see Yakowitz (1989). The convergence rate for such estimators is typically (n h_n^d)^{-1/2}, where h_n is the bandwidth parameter, and d is the dimension of the (Euclidean) state space. Since h_n → 0 as n → ∞, this convergence rate is slower than n^{-1/2}.
Yakowitz (1989) does not give the optimal (in terms of minimizing mean squared error) choice of bandwidth h_n. However, an i.i.d. sequence is a special case of a Markov chain, and, as noted in Remark 5, the fastest possible root mean square error convergence rate in that setting is of the order n^{-2/(d+4)}. This rate is heavily dimension dependent, so that in large-dimensional problems, one might expect very slow convergence of kernel density estimators.
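As a steady-state illustration in the spirit of the single-server-queue waiting time mentioned in the introduction (the specific parameters are our own), the Lindley recursion W(k+1) = max(W(k) + S(k) − A(k), 0) for an M/M/1 queue has a known stationary density on (0, ∞), and the time average of the increment density f_V(y − W(k)) is the look-ahead estimator of it.

```python
import math, random

# Sketch (our parameters): look-ahead estimate of the stationary density of
# the M/M/1 waiting time, driven by the Lindley recursion
#   W(k+1) = max(W(k) + S(k) - A(k), 0),  S ~ Exp(mu), A ~ Exp(lam).
lam, mu = 0.5, 1.0          # rho = lam/mu = 0.5 < 1, so the queue is stable

def f_increment(v):
    # density of V = S - A (double-exponential shape)
    c = lam * mu / (lam + mu)
    return c * math.exp(-mu * v) if v >= 0 else c * math.exp(lam * v)

def lookahead_stationary_density(y, n=200_000, seed=2):
    # time-average of the transition density p(W(k), y) = f_increment(y - W(k)), y > 0
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += f_increment(y - w)
        w = max(w + rng.expovariate(mu) - rng.expovariate(lam), 0.0)
    return total / n

y = 1.0
rho = lam / mu
exact = rho * (mu - lam) * math.exp(-(mu - lam) * y)  # known density of W on (0, inf)
est = lookahead_stationary_density(y)
print(est, exact)
```

The stationary law here has an atom at 0 plus a density on (0, ∞); the sketch estimates the density part only, which is where the look-ahead structure applies directly.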
To obtain confidence regions for the density vector π̃(ỹ), several different approaches are possible. If Σ(ỹ) is positive definite and there exists a consistent estimator Σ_n(ỹ) for Σ(ỹ), then Theorem 3 asserts that for D ⊆ ℝ^d,

P(n^{1/2}(π̃_n(ỹ) − π̃(ỹ)) ∈ Σ_n(ỹ)^{1/2} D) ≈ P(N(0, I) ∈ D)   (7)

for large n, where Σ_n(ỹ)^{1/2} D is defined to be the set {x : x = Σ_n(ỹ)^{1/2} w for some w ∈ D}. Asymptotically valid confidence regions for π̃(ỹ) can then be easily obtained from (7). If X enjoys regenerative structure, the regenerative method for steady-state simulation output analysis provides one means of constructing such consistent estimators for Σ(ỹ); see, for example, Bratley, Fox, and Schrage (1987).
An alternative approach exploits the functional CLT provided by Theorem 3 to ensure the asymptotic validity of the method of multivariate batch means; see Muñoz and Glynn (1998) for details.
Remark 8: The discussion of this section generalizes to the computation of the density of Γ(1) under the stationary distribution π. In particular, suppose that X satisfies A2 and A3. Then for y ∈ ℝ, the stationary density of the cost is E_π p̃(X(0), X(1), y), where p̃ = p̃_n is defined as in A2; the methodology of this section then generalizes suitably.
Global Behaviour of the Look-ahead Density Estimator
In the previous sections we focused on the pointwise convergence properties of the look-ahead density estimator. Specifically, we showed that for any finite collection y_1, …, y_d of points in either S or ℝ (depending on the estimator), the look-ahead density estimator converges a.s., and the rate of convergence is described by a CLT in which the rate is dimension independent. In this section, we turn to the estimator's global convergence properties. We assume throughout the remainder of this paper that S is a complete separable metric space. (In particular, this includes any state space that is a "reasonable" subset of ℝ^k.)
Let γ be as in Sections 2 and 3, and let Λ be either S or ℝ (depending on the estimator considered). Then, for any function f : Λ → ℝ, we may define, for q ≥ 1, the L^q-norm

‖f‖_q = ( ∫_Λ |f(y)|^q γ(dy) )^{1/q}.

For any two functions f_1 and f_2, ‖f_1 − f_2‖_q is a measure of the "distance" from f_1 to f_2. We first analyze the look-ahead density estimators introduced in Section 2.
Theorem 4 Suppose that

E_μ |η_{i1}(y) − p(y; i)|^q < ∞ for γ-a.e. y, and ∫_Λ E_μ |η_{i1}(y) − p(y; i)|^q γ(dy) < ∞.   (8)

Then ‖p_{in}(·) − p(·; i)‖_q → 0 in probability as n → ∞.
Proof: Evidently, p_{in}(y) → p(y; i) a.s. for γ-a.e. y. Note that

|p_{in}(y) − p(y; i)|^q ≤ (1/n) Σ_{j=1}^n |η_{ij}(y) − p(y; i)|^q   (9)

due to the convexity of |x|^q. For each y satisfying (8), the right-hand side of (9) converges a.s. and in expectation to E_μ |η_{i1}(y) − p(y; i)|^q. Consequently, the right-hand side of (9) is uniformly integrable. Also, for each such y, the left-hand side of (9) converges to zero a.s. Since the left-hand side is dominated by a uniformly integrable sequence, it follows that

E|p_{in}(y) − p(y; i)|^q → 0

as n → ∞ for γ-a.e. y. Also, taking the expectation of both sides of (9) yields the inequality

E|p_{in}(y) − p(y; i)|^q ≤ E_μ |η_{i1}(y) − p(y; i)|^q;   (10)

the right-hand side is integrable in y, by hypothesis. The Dominated Convergence Theorem, applied to (10), then gives

∫_Λ E|p_{in}(y) − p(y; i)|^q γ(dy) → 0,

and hence

E ‖p_{in}(·) − p(·; i)‖_q^q → 0.

Consequently,

‖p_{in}(·) − p(·; i)‖_q → 0 in probability

as n → ∞, from which the theorem follows. □
We turn next to obtaining the analogous result for the steady-state density estimator π_n(·) of Section 3.
Theorem 5 Suppose that

∫_S |p(x, y) − π(y)|^q γ(dy) ≤ V(x)   (11)

for x ∈ S, with q ≥ 1. If A3 holds and the initial distribution μ has a density with respect to γ, then ‖π_n(·) − π(·)‖_q → 0 in probability as n → ∞.
Proof: Condition (11) guarantees that

∫_S π(dx) ∫_S |p(x, y) − π(y)|^q γ(dy) < ∞;

see Theorem 14.3.7 of Meyn and Tweedie (1993). So π_n(y) → π(y) a.s. for γ-a.e. y, and the proof follows the same pattern as that for Theorem 4. That argument yields the conclusion that for ε > 0, P_x(‖π_n(·) − π(·)‖_q > ε) → 0 as n → ∞ for π-almost every initial state x. Therefore, the Dominated Convergence Theorem allows us to conclude that

P_μ(‖π_n(·) − π(·)‖_q > ε) = ∫_S P_x(‖π_n(·) − π(·)‖_q > ε) u(x) γ(dx) → 0

as n → ∞, where u(·) is the γ-density of μ. This is the desired conclusion. □
Remark 9: Note that the hypotheses of both Theorems 4 and 5 are automatically satisfied when the relevant densities are bounded and γ is a finite measure. Convergence of the estimated density in L^q ensures that for a given runlength n, errors of a given size can only occur in a small (with respect to γ) set.
We now turn to the question of when the look-ahead density estimator converges to its limit uniformly.
Uniform convergence is especially important in a visualization context. If one can guarantee that the
error in the estimator is uniformly small, then graphs of the estimated density will be "close" to the
graph of the limit.
We will focus our attention here on the steady-state density estimator π_n; similar results can be derived for our other density estimators through analogous arguments.
Theorem 6 Suppose that A1 is in force, and p : S × S → [0, ∞) is continuous and bounded. If A3 holds, then for each compact set K ⊆ S,

sup_{y ∈ K} |π_n(y) − π(y)| → 0 a.s.

as n → ∞.
Proof: Fix ε > 0. Since π is tight (see Billingsley 1968), there exists a compact set K(ε) for which π assigns at most ε mass to its complement. Write

π_n(y) = (1/n) Σ_{j=0}^{n-1} p(X(j), y) I(X(j) ∈ K(ε)) + (1/n) Σ_{j=0}^{n-1} p(X(j), y) I(X(j) ∉ K(ε)).   (12)

The second term on the right-hand side of (12) may be bounded by ‖p‖_∞ n^{-1} Σ_{j=0}^{n-1} I(X(j) ∉ K(ε)), which has an a.s. limit supremum of at most ‖p‖_∞ ε. As for the first term, note that if K is compact, then K(ε) × K is compact and p is therefore uniformly continuous there. Because of uniform continuity, there exists δ(ε) such that |p(x, y) − p(x, y')| < ε whenever (x, y) and (x, y') in K(ε) × K are within distance δ(ε) of one another. Since K is compact, we can find a finite collection y_1, …, y_k of points in K such that the open balls of radius δ(ε) centred at y_1, …, y_k cover K. For each y ∈ K, there exists a y_i in our collection such that |p(X(j), y) − p(X(j), y_i)| < ε whenever X(j) ∈ K(ε). So, for y ∈ K, |π_n(y) − π(y)| is bounded by max_{1 ≤ i ≤ k} |π_n(y_i) − π(y_i)| plus a term of order ε, uniformly in y. Letting n → ∞, applying the strong law for Harris chains to n^{-1} Σ_{j=0}^{n-1} p(X(j), y_i), and then sending ε → 0, we obtain the desired conclusion. □
Remark 10: If S is compact, Theorem 6 yields uniform convergence of π_n to π over S, under a continuity hypothesis on p. (The boundedness is automatic in this setting.)
Our next result establishes uniform convergence of π_n to π over all of S, provided that we assume that p(x, ·) "vanishes at infinity".
Theorem 7 Suppose that A1 holds and p : S × S → [0, ∞) is uniformly continuous and bounded. Assume that for each x ∈ S and ε > 0, there exists a compact set K(x, ε) such that p(x, y) < ε whenever y ∉ K(x, ε). If A3 holds, then

sup_{y ∈ S} |π_n(y) − π(y)| → 0 a.s.

as n → ∞.
Proof: Fix ε > 0, and choose δ(ε) so that |p(x, y) − p(x', y')| < ε whenever (x', y') lies within distance δ(ε) of (x, y). Choose K(ε) as in the proof of Theorem 6 and let x_1, …, x_k ∈ K(ε) be a finite collection of points such that the open balls of radius δ(ε) centred at x_1, …, x_k cover K(ε). For each x_i there exists a compact set K_i = K(x_i, ε); let K = K_1 ∪ ⋯ ∪ K_k and note that K is compact. Theorem 6 establishes that

sup_{y ∈ K} |π_n(y) − π(y)| → 0 a.s.

as n → ∞. To deal with y ∉ K, construct the sequence (X'(n) : n ≥ 0), where, whenever X(n) ∈ K(ε), X'(n) is the closest point to X(n) within the collection {x_1, …, x_k}. Then, for y ∉ K and X(n) ∈ K(ε),

p(X(n), y) ≤ p(X'(n), y) + ε ≤ 2ε,

since y lies outside K ⊇ K(X'(n), ε). Sending n → ∞ allows us to conclude that π(y) and the limit points of π_n(y) are both of order ε, uniformly in y ∉ K. The resulting inequality (14) then yields

lim sup_n sup_{y ∉ K} |π_n(y) − π(y)| ≤ cε

for a constant c not depending on ε; this, together with the convergence over K, implies the theorem. □
The following consequence of Theorem 7 improves Theorem 5 from convergence in probability to a.s. convergence when q = 1; it is basically Scheffé's Theorem (see, for example, p. 17 of Serfling 1980).
Corollary 8 Under the conditions of Theorem 7,

∫_S |π_n(y) − π(y)| γ(dy) → 0 a.s.

as n → ∞.
Proof: The result is immediate if γ is a finite measure, since |π_n(·) − π(·)| is uniformly bounded and converges to zero a.s. by Theorem 7. If γ is an infinite measure (like Lebesgue measure), Theorem 7 asserts that π_n(·) → π(·) a.s., so that path-by-path we may argue that

∫_S |π_n(y) − π(y)| γ(dy) = 2 ∫_S (π(y) − π_n(y))^+ γ(dy) → 0

(since the integrand is dominated by π(·), which integrates to one, thereby permitting the application of the Dominated Convergence Theorem path-by-path). □
A very important characteristic of the look-ahead density estimator is that it "smoothly approximates" the density to be computed. To be specific, suppose that either S ⊆ ℝ or that we are considering the density of one of the real-valued r.v.'s associated with the estimators p_{3n}(·), p_{4n}(·), p_{5n}(·) or p_{6n}(·). Since we are then working in a subset of Euclidean space, it is reasonable to measure smoothness in terms of the derivatives of the density.
Without any real loss of generality, assume S ⊆ ℝ, so that y ∈ ℝ. The look-ahead density estimators we have developed take the form

p_n(y) = (1/n) Σ_{j=1}^n G_j(y)

for some sequence of random functions (G_j : j ≥ 1). (Both the estimators of Section 2 and Section 3 admit this representation.) To estimate the kth derivative of the target density to be computed, the natural estimator is therefore

(d^k/dy^k) p_n(y) = (1/n) Σ_{j=1}^n (d^k/dy^k) G_j(y).

Under quite weak conditions on the problem, it can be shown that the above estimator computes the kth derivative of the target density consistently; see below for a discussion. Such a result proves that not only does look-ahead density estimation compute the density, but it also approximates the derivatives of the density in a consistent fashion. In other words, it "smoothly approximates" the target density.
As an illustration of the types of conditions needed in order to ensure that the look-ahead density estimator smoothly approximates the target density, we consider the steady-state density estimator of Section 3. Let p'(x, y) = (∂/∂y) p(x, y).
Theorem 9 Suppose A1 holds, and that p(x, ·) is continuously differentiable with bounded derivative. If A3 holds, then π has a differentiable density π(·), and

sup_{y ∈ K} |π_n'(y) − π'(y)| → 0 a.s.   (16)

as n → ∞ for each compact K ⊆ S. Furthermore, if |p'(x, y)| ≤ V^{1/2}(x) for x ∈ S, then there exists d(y) such that

n^{1/2}(π_n'(y) − π'(y)) ⇒ d(y) N(0, 1)   (17)

as n → ∞.
Proof: Note that

π(y + h) − π(y) = ∫_S π(dx) (p(x, y + h) − p(x, y)) = ∫_S π(dx) p'(x, ξ(h)) h,   (18)

where ξ(h) lies between y and y + h. Because the derivative is assumed to be bounded, the Bounded Convergence Theorem then ensures that the limit in (18) exists as h → 0, so that π'(y) = ∫_S π(dx) p'(x, y). Since p' is bounded and continuous, exactly the same argument as that used in proving Theorem 6 can be used here to obtain (16).
The CLT (17) is an immediate consequence of Theorems 17.0.1 and 17.5.4 of Meyn and Tweedie (1993). □
An important implication of Theorem 9 is that the look-ahead density estimator computes the derivative accurately. In fact, the derivative estimator converges at rate n^{-1/2}, independent of the dimension of the state space and, furthermore, independent of the order of the derivative being estimated.
Remark 11: It is also known that kernel density estimators smoothly approximate the target density; see Prakasa Rao 1983, p. 237, and Scott 1992, p. 131. The choice of bandwidth that minimizes the mean squared error of the kernel density derivative estimator is larger than in the case of estimating the target density itself. The resulting rates of convergence of kernel density derivative estimators are adversely affected by both the order of the derivative and the dimension of the state space. For example, in one dimension, kernel estimators of an rth order derivative converge at best at rate n^{-2/(2r+5)}. This rate is fastest when estimating the first derivative, and even then it is slower than the rate of convergence of the look-ahead density derivative estimator discussed above.
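Differentiating the look-ahead average in y, as described above, gives a derivative estimator with no extra machinery. A sketch on an assumed Gaussian AR(1) chain (our example, not the paper's), whose stationary law N(0, 1/(1 − a²)) provides exact values for both the density and its derivative:

```python
import math, random

# Sketch: the look-ahead sum differentiated in y estimates the derivative of
# the stationary density. Chain assumed (ours): X(k+1) = a*X(k) + Z(k),
# Z ~ N(0, 1), with stationary law N(0, 1/(1 - a**2)).
a = 0.5

def phi(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def lookahead_density_and_derivative(y, n=100_000, seed=3):
    rng = random.Random(seed)
    x, dens, deriv = 0.0, 0.0, 0.0
    for _ in range(n):
        z = y - a * x
        dens += phi(z)
        deriv += -z * phi(z)            # d/dy of p(x, y) = phi(y - a*x)
        x = a * x + rng.gauss(0, 1)
    return dens / n, deriv / n

y = 1.0
s2 = 1 / (1 - a * a)                    # stationary variance, 4/3
exact_dens = phi(y / math.sqrt(s2)) / math.sqrt(s2)
exact_deriv = -y / s2 * exact_dens
dens, deriv = lookahead_density_and_derivative(y)
print((dens, deriv), (exact_dens, exact_deriv))
```

Both estimates converge at the same n^{-1/2} rate, in line with Theorem 9; no bandwidth enlargement is needed for the derivative.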
Computing Special Features of the Target Distribution Using
Look-ahead Density Estimators
As discussed earlier, computation of the density is a useful element in developing visualization tools
for computer simulation. In this section, we focus on the computation of certain features of the target
distribution, to which our look-ahead density estimator can be applied to advantage.
5.1 Computing the Relative Likelihood of Two Points
In Sections 2 and 3, we introduced a number of different look-ahead density estimators, each of which we
can write generically as pn (\Delta). The look-ahead density estimator pn (\Delta) is an estimator for a target density
p(\Delta) say. For each pair of points (y represents the likelihood of the point y 1
relative to that of y 2 .
The joint CLT's developed in Proposition 1 and Theorem 3 can be used to obtain a CLT (suit-
able for construction of large-sample confidence intervals for the relative likelihood) for the estimator
as
as n !1. If the covariance matrix of (N 1 can be consistently estimated (as with, for example, the
regenerative method), then confidence intervals for the relative likelihood (based on (19)) can easily be
obtained. Otherwise, one can turn to the batch means method to produce such confidence intervals; see
Mu~noz and Glynn (1997).
5.2 Computing the Mode of the Density
The mode of the density provides information as to the region within which the random variable of
interest attains its highest likelihood. Given that the target distribution here has density p(\Delta), our goal
is to compute the modal location y and the modal value p(y ). As discussed earlier in this section, we
our look-ahead density estimator generically as pn (\Delta). The obvious estimator of y is, of course, any
y
which maximizes pn (\Delta), and the natural estimator for p(y ) is then pn (y
can and will assume
that the maximizer y
n has been selected to be measurable.) We denote the domain of p(\Delta) by . Because
our analysis involves using a Taylor expansion, we require that 2 IR d .
Theorem 10 Suppose that:
1. p(·) has a unique mode at location y*;
2. sup_{y ∈ Λ} |p_n(y) − p(y)| → 0 a.s. as n → ∞;
3. there exists an ε-neighbourhood of y* with ε > 0 such that p(·) and p_n(·) are twice continuously differentiable there a.s.;
4. sup_{‖y − y*‖ < ε} |∇p_n(y) − ∇p(y)| → 0 a.s. as n → ∞;
5. sup_{‖y − y*‖ < ε} |H_n(y) − H(y)| → 0 a.s. as n → ∞, where H_n(y) and H(y) are the Hessians of p_n(·) and p(·) at y, respectively;
6. H(y*) is negative definite;
7. (n^{1/2}(p_n(y*) − p(y*)), n^{1/2} ∇p_n(y*)) satisfies a joint CLT.
Then, y_n* → y* a.s. as n → ∞, and n^{1/2}(y_n* − y*, p_n(y_n*) − p(y*)) converges jointly in distribution as n → ∞.
Proof: The almost sure convergence of y_n* to y* is an immediate consequence of relations 1 and 2. For the weak convergence statement, observe that ∇p_n(y_n*) = 0 and ∇p(y*) = 0, since y_n* and y* are local maxima of p_n(·) and p(·), respectively. (To be precise, this is valid only for n so large that y_n* lies in the ε-neighbourhood specified by relations 3-5.) So,

∇p_n(y_n*) − ∇p_n(y*) = −∇p_n(y*).   (20)

But the mean value theorem gives

∇p_n(y_n*) − ∇p_n(y*) = H_n(ζ_n)(y_n* − y*),   (21)

where ζ_n lies on the line segment joining y_n* and y*. Since H_n(ζ_n) → H(y*) a.s. and H(y*) is invertible, (20), (21) and relation 7 establish weak convergence of n^{1/2}(y_n* − y*). For the modal value, a Taylor expansion of p_n about y* shows that the first term beyond p_n(y*) converges to zero in probability at the relevant scale. Consequently, (20), (21), relation 7, and the "converging together principle" (see Billingsley 1968, for example) imply the desired joint convergence result. □
Remark 12: The uniform convergence theory of Section 4 for p_n and its derivatives can be easily applied to verify relations 2, 4 and 5.
Remark 13: The CLT established in Theorem 10 shows that the look-ahead estimator of the mode converges at the asymptotic rate n^{-1/2}, independent of the dimension d of the state space. This compares very favourably with the rate of convergence of kernel estimators of the mode. A kernel estimator of the mode converges at rate (n h_n^{d+2})^{-1/2} when the bandwidth h_n is chosen appropriately; see Theorem 4.5.6 of Prakasa Rao 1983, p. 284.
Remark 14: To construct confidence intervals based on Theorem 10, there are again a couple of alternatives. Assume first that for each fixed y ∈ Λ, one can consistently estimate the covariance matrix that arises in the joint CLT of relation 7. (For example, this can be done in the transient context or in the setting of regenerative processes in steady-state simulation.) To estimate the covariance structure at y*, one can compute the corresponding covariance estimate evaluated at the point y_n*. In the transient problems considered in Section 2, it is typically easy to verify that the covariance matrix is continuous in y, so that using the estimator associated with y_n* in place of the covariance at y* is asymptotically valid. In the steady-state context, it is not as straightforward to establish the continuity of the covariance theoretically (although one suspects it is valid in great generality); one potential avenue is to adopt the methods of Glynn and L'Ecuyer (1995). If consistent estimates of the covariance matrix at each fixed y ∈ Λ are not available (as might occur in non-regenerative steady-state simulations), then one can potentially appeal to the method of batch means; see Muñoz (1998).
5.3 Computing Quantiles of the Target Distribution
We focus here on the special case in which the state space is ℝ, so that the target distribution is that
of a real-valued r.v. In this setting, suppose that the target distribution has a density p(·), and let
F(x) = ∫_{−∞}^{x} p(y) dy
be the target distribution function. An important special feature of this distribution is the pth quantile
of F. Specifically, for each p ∈ (0, 1), we define the pth quantile of F as the quantity q = F^{−1}(p).
There is a significant literature on the computation of such quantiles. Iglehart (1976) considered
quantile estimation in the context of regenerative simulation, and proved a central limit theorem for the
standard estimator. Seila (1982) introduced the batch quantile method, again for regenerative processes,
that avoids some of the difficulties associated with the estimation procedure proposed by Iglehart (1976).
The approach suggested by Heidelberger and Lewis (1984) is based on the so-called "maximum transformation"
and mixing assumptions of the underlying process, and does not require regenerative structure.
Hesterberg and Nelson (1998) and the references therein discuss the use of control variates to obtain
variance reduction in quantile estimation. Avramidis and Wilson (1998) obtain variance reduction in
estimating quantiles through the use of antithetic variates and Latin hypercube sampling.
Kappenman (1987) integrated and inverted a kernel density estimator for p(·) in the case when the
observations are i.i.d. Our approach is similar to Kappenman's in that we invert the integrated look-ahead
density estimator. Let p_n(·) be the look-ahead density estimator, and set
F_n(x) = ∫_{−∞}^{x} p_n(y) dy.
The natural estimator for the quantile q is then Q_n = F_n^{−1}(p).
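The integrate-then-invert step can be sketched numerically. The helper below is a generic illustration (an assumption for exposition, not the paper's estimator): it accumulates any gridded density estimate into a distribution function and inverts it at level p, checked here against the Exp(1) density, whose median is ln 2 ≈ 0.693.

```python
# Generic integrate-then-invert helper on a grid (a numerical sketch under
# stated assumptions, not the paper's look-ahead estimator).
import math

def quantile_from_density(grid, density, p):
    """Accumulate the density into a CDF on the grid, then invert it at p."""
    dx = grid[1] - grid[0]
    total, cdf = 0.0, []
    for d in density:
        total += d * dx
        cdf.append(total)
    target = p * cdf[-1]            # renormalise against truncation error
    for x, F in zip(grid, cdf):
        if F >= target:
            return x
    return grid[-1]

# Sanity check against the Exp(1) density e^{-y}: the median is ln 2.
grid = [0.001 * k for k in range(20000)]
q = quantile_from_density(grid, [math.exp(-y) for y in grid], 0.5)
print(q)
```

Any density estimate p_n on a grid can be passed in place of the exponential density used in this check.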
Theorem 11 Suppose that
1. p(q) > 0;
2. p(·) is continuous in an ε-neighbourhood of q;
3. sup_{x:|x−q|≤ε} |p_n(x) − p(x)| → 0 a.s. as n → ∞;
4. n^{1/2}(F_n(q) − F(q)) ⇒ N as n → ∞, for some r.v. N.
Proof: Recall that ‖p_n(·) − p(·)‖₁ ⇒ 0 as n → ∞; see Remark 9. Hence,
sup_x |F_n(x) − F(x)| ⇒ 0
as n → ∞, from which it follows that Q_n ⇒ q as n → ∞. But
F_n(Q_n) = p = F(q) (22)
and, by the mean value theorem,
F_n(Q_n) − F_n(q) = p_n(ξ_n)(Q_n − q), (23)
where the mean value point ξ_n lies between Q_n and q. Relations 2 and 3 imply that p_n(ξ_n) ⇒ p(q) as n → ∞. The result then
follows from (22), (23), and relation 4. □
Remark 15: Similar issues to those discussed in Remark 14 arise in constructing confidence intervals
based on Theorem 11. Once again, it is possible to consistently estimate the variance parameter that
arises in the CLT in relation 4 in either the transient context, or the setting of regenerative steady-state
simulation. To see this, recall that the look-ahead density estimator p_n(·) may be expressed as a sample
mean of some sequence of random functions (G_k : k ≥ 1), so that
F_n(q) = ∫_{−∞}^{q} p_n(y) dy = n^{−1} ∑_{k=1}^{n} ∫_{−∞}^{q} G_k(y) dy. (24)
Evidently, (24) is a sample mean over a sequence of real-valued r.v.'s, and the sequence is either i.i.d.
or regenerative, depending on the context. Therefore, standard methods may be applied to estimate the
variance parameter in the CLT of relation 4.
The comments in Remark 14 related to continuity of the variance parameter apply directly here. In
particular, one must establish that the variance of the r.v. N in relation 4 is continuous as a function of
q, so that estimating the variance at q by an estimate of the variance at Q_n is asymptotically valid.
It is natural to ask how the performance of the look-ahead quantile estimator compares with that of
a more standard quantile estimator. For ease of exposition, in the remainder of this section we specialise
to the case where X is a Markov chain taking values in ℝ. Suppose that A1 and A3 are
in force, and that we are interested in computing q = F^{−1}(p), where F is the distribution
function of the stationary distribution π of X.
A natural approach to estimation of q is to first estimate F by the empirical distribution function F̃_n,
where
F̃_n(x) = n^{−1} ∑_{k=1}^{n} I(X(k) ≤ x),
and then choose the estimator Q̃_n of q as F̃_n^{−1}(p).
Alternatively, using look-ahead methodology, one could estimate q by F_n^{−1}(p).
The proof of the following proposition rests primarily on the observation, recorded as (25), that
the estimators F_n and F̃_n are related through the principle of extended conditional Monte Carlo
(Bratley et al. 1987, p. 71; Glasserman 1993).
Let p be a density of the stationary distribution π with respect to Lebesgue measure, and let var_π
denote the variance operator associated with the path space of X, where X has initial distribution π.
Proposition 12 Suppose that A1 and A3 hold, and conditions 1 and 2 of Theorem 11 are satisfied.
Then,
n^{1/2}(F_n(q) − F(q)) ⇒ σ N₁(0, 1)  and  n^{1/2}(F̃_n(q) − F(q)) ⇒ σ̃ N₂(0, 1)
as n → ∞, where N₁(0, 1) and N₂(0, 1) are
standard normal r.v.'s. In addition, if X is stochastically monotone, then σ² ≤ σ̃².
Proof: Observe that for all w ∈ ℝ, F_n(w) can be written as a conditional expectation of F̃_n(w),
which is relation (25); so Theorem 17.5.3 of Meyn and Tweedie (1993) implies that
n^{1/2}(F_n(q) − F(q)) ⇒ σN₁(0, 1). Similarly,
Theorem 17.5.3 also gives n^{1/2}(F̃_n(q) − F(q)) ⇒ σ̃N₂(0, 1).
If X is stochastically monotone, then in view of (25), we can apply Theorem 12 of Glynn and
Iglehart (1988) to achieve the result. (The required uniform integrability follows from the fact that
the estimators are bounded.) □
Combining the results of Theorem 11 and Proposition 12, we see that under reasonable conditions,
n^{1/2}(Q_n − q) ⇒ (σ/p(q)) N(0, 1)
as n → ∞. It can also be shown, again under reasonable conditions, that
n^{1/2}(Q̃_n − q) ⇒ (σ̃/p(q)) N(0, 1)
as n → ∞; see Henderson and Glynn (1999). Proposition 12 asserts that σ² ≤ σ̃², so that in the
context of steady-state quantile estimation for stochastically monotone Markov chains, the look-ahead
quantile estimator may typically be expected to achieve variance reduction over a more standard quantile
estimator.
Remark 16: It is well-known that the waiting time sequence in the single-server queue is a stochastically
monotone Markov chain, and thus the results of this section may be applied in that context.
6 Examples
We present three examples of the application of look-ahead density estimators. Our first example is an
example of steady-state density estimation and illustrates how to establish A3.
Example 1: It is well-known that the sequence W = (W(n) : n ≥ 0) of customer waiting times
(excluding service) in the FIFO single-server queue is a Markov chain on state space [0, ∞). In
particular, W satisfies the Lindley recursion (p. 181, Asmussen 1987)
W(n + 1) = [W(n) + Y(n + 1)]⁺,
where (Y(n) : n ≥ 1) is an i.i.d. sequence with Y(n + 1) = V(n) − U(n + 1); here
V(n) is the service time of the nth customer, and U(n + 1) is the interarrival time between the nth and
(n + 1)st customer.
To verify A3, we proceed as follows. Define the test function V(x) = e^{αx} for a yet to be determined
constant α > 0, and note the resulting expression (26) for E[V(W(1)) | W(0) = x].
Let us assume that the moment generating function φ(t) := Ee^{tY(1)} of Y(1) exists in a neighbourhood
of zero, so that φ(t) is finite for sufficiently small t. For stability, we must have EY(1) < 0, which
implies that φ′(0) < 0. Hence, there exists an α > 0 such that φ(α) < 1. Choosing K > 0 appropriately,
for all x > K we see from (26) that E[V(W(1)) | W(0) = x] ≤ λV(x),
where λ < 1. From (26) we also see that for x ≤ K, E[V(W(1)) | W(0) = x] is bounded.
Thus, we have verified condition 2 of A3 for the set [0, K].
To verify condition 1, note that EY(1) < 0 implies that there exists β > 0 with P(Y(1) ≤ −β) > 0.
It follows that, conditional on W(0) = x with x ≤ K, the chain reaches 0 in a bounded number of steps
with probability bounded away from zero.
Taking the minorising measure in condition 1 to be a point mass at 0, we see that condition 1 of A3 is
verified. We have therefore established the following result.
We now specialise to the M/M/1 queue with arrival rate λ, service rate μ, and traffic intensity
ρ := λ/μ < 1. The transition kernel for W is then given by a density part p(x, y) together with an atom
at the origin, where δ₀ denotes the probability measure that assigns unit mass
to the origin. Noting that p(·, ·) is bounded, it follows (after possibly scaling the function V)
that the conditions of Theorem 3 are satisfied, and the look-ahead density estimator therefore
converges at rate n^{−1/2} to the stationary density of W.
Defining a suitable kernel density estimator is slightly more problematical, due to the presence of the
point mass at 0 in the stationary distribution and the need to select a kernel and bandwidth. To estimate
the point mass at 0 we use the empirical fraction of visits to 0,
that is, the mean number of visits to 0 in a run of length n. For y > 0, we estimate π(y) using a kernel
estimator in which φ(·) is the density of a standard normal r.v. and the bandwidth h_n is taken
proportional to n^{−1/5}. This choice of h_n (modulo a multiplicative
constant) yields the optimal rate of mean-square convergence in the case where the observations are i.i.d.
(Prakasa Rao 1983, p. 182), and so it seems a reasonable choice in this context.
For this example we chose the rates so that the traffic intensity ρ = 0.5. To remove the
effect of initialization bias (note that both estimators are affected by this), we simulated a stationary
version of W by sampling W(0) from the stationary distribution.
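The driving chain itself is easy to simulate. The sketch below (plain Python, with illustrative rates λ = 0.5 and μ = 1, one of many pairs giving ρ = 0.5) runs the Lindley recursion and checks the empirical mass at zero against the known stationary value 1 − ρ = 0.5; it generates only the chain, not the look-ahead estimator.

```python
# Lindley recursion for M/M/1 waiting times (illustrative rates, assumed here:
# lambda = 0.5, mu = 1, so rho = 0.5). This is the chain only, not the estimator.
import random

def lindley_path(lam, mu, n, seed=0):
    rng = random.Random(seed)
    w, path = 0.0, []
    for _ in range(n):
        path.append(w)
        v = rng.expovariate(mu)          # service time V(n)
        u = rng.expovariate(lam)         # interarrival time U(n+1)
        w = max(w + v - u, 0.0)          # Lindley recursion
    return path

path = lindley_path(0.5, 1.0, 100000)
frac_at_zero = sum(1 for w in path if w == 0.0) / len(path)
print(frac_at_zero)   # close to the stationary mass 1 - rho = 0.5
```

Starting the chain at 0 rather than in stationarity introduces a small initialization bias, which is exactly the issue the text addresses by sampling W(0) from the stationary distribution.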
Figure 1: Density estimates from a run of length 100 (exact, look-ahead, and kernel estimates of the M/M/1 waiting time density).
The density estimates for x > 0, together with the exact density, are plotted for simulation runlengths
of 100 and 1000 in Figures 1 and 2. We observe the following.
1. Visually, the look-ahead density estimate appears to be a far better representation of the true
density than the kernel density estimate.
2. The kernel density estimate has several local modes, and its performance near the origin is particularly
poor, even for the run of length 1000.
The previous example is a one-dimensional density estimation problem. Our results suggest that the
rate of convergence of the look-ahead density estimators is insensitive to the underlying dimension of the
problem. However, the rate of convergence of kernel density estimators is known to be adversely affected
by the dimension; see Remarks 5 and 7. To assess the difference in performance in a multi-dimensional
setting, we provide the following example.
Figure 2: Density estimates from a run of length 1000 (exact, look-ahead, and kernel estimates of the M/M/1 waiting time density).
Example 2: Let (W(n) : n ≥ 1) be a sequence of d-dimensional i.i.d. normal random vectors with
zero mean and covariance matrix the identity matrix I. Define the Markov chain X
inductively by a linear recursion in X(n) and W(n + 1). The Markov chain X is a (very) special case
of the linear state space model defined on p. 9 of Meyn and Tweedie (1993). We chose such a model
for this example so that the steady-state density is easily computed. In particular, the stationary
distribution of X is normal with mean zero and an explicitly computable covariance matrix; thus X
has a Gaussian stationary density of the form c · exp(−x⊤Σ^{−1}x/2).
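The exact recursion did not survive in the text above; as a stand-in, the following sketch assumes one concrete scalar model of the kind described, X(n + 1) = X(n)/2 + W(n + 1), whose stationary law is N(0, 4/3), and checks the stationary variance empirically.

```python
# An assumed linear state space model of the kind described in the text:
# X(n+1) = X(n)/2 + W(n+1), W(n) i.i.d. N(0, 1). The stationary variance
# solves s = s/4 + 1, i.e. s = 4/3.
import random

def simulate(n, seed=0):
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = x / 2.0 + rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = simulate(200000)
stationary_var = sum(v * v for v in xs) / len(xs)
print(stationary_var)   # close to 4/3
```

The coefficients (1/2, unit innovations) are assumptions chosen to make the stationary law explicit, matching the spirit of the example rather than its exact specification.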
We estimate this density at the origin, for several dimensions d, using both a kernel density estimator
and a look-ahead density estimator; the estimators are constructed from simulated
sample paths of length 10, 100 and 1000. We sample X(0) from the stationary distribution to remove
any initialization bias. To estimate the mean squared error (MSE) of the density estimators at the
origin, we repeat the simulations 100 times.
The kernel density estimator we chose uses a multivariate standard normal distribution as the kernel,
and an appropriately chosen bandwidth.
Table 1 reports the root MSE for the two estimators as a percentage of the true density value π(0).
Observe that as the dimension increases, the rate of convergence of the kernel density estimator deteriorates
rapidly. In contrast, the rate of convergence of the look-ahead density estimator remains constant
(for each increase in runlength by a factor of 10, the relative error decreases by a factor of approximately 3),
independent of the dimension of the problem.
Remark 17: It is possible to construct look-ahead density estimators for far more complicated linear
state space models than the one considered here. The critical ingredient is A1, which is easily satisfied,
for example, if the innovation vectors W(k) have a known density with respect to Lebesgue measure.

Table 1: Root MSE of estimators of π(0) as a percentage of π(0). Columns: d, π(0), estimator, runlength.
Our final example is an application to stochastic activity networks (SANs). This example is not easily
captured within our Markov chain framework, and therefore gives some idea of the potential applicability
of look-ahead density estimation methodology.
Example 3: In this example, we estimate the density of the network completion time (the length of the
longest path from the source to the sink) in a simple stochastic activity network taken from Avramidis
and Wilson (1998). Consider the SAN in Figure 3 with independent task durations, source node 1, and
sink node 9. The labels on the arcs give the mean task durations. We assume that tasks (6, 9) and (8, 9)
have densities (with respect to Lebesgue measure), so that the network completion time L has a density
p(·) (with respect to Lebesgue measure).

Figure 3: Stochastic activity network with mean task duration shown beside each task.
Suppose that we sample all task durations except those of tasks (6, 9) and (8, 9), and compute the
lengths L(6) and L(8) of the longest paths from the source node to nodes 6 and 8 respectively. Then
p(y) can be written as an expectation involving the functions f_{69}, F̄_{89}, f_{89} and F̄_{69} evaluated
at y − L(6) and y − L(8), where, for a given task ab, F_ab denotes the task duration distribution function,
F̄_ab(·) = 1 − F_ab(·), and f_ab(·) is the (Lebesgue) density. Then, A1 and the strong law of large numbers
ensure that the look-ahead density estimator obtained by averaging this conditional density over the
sampled paths is a strongly consistent estimator of p(y).
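The conditioning idea can be illustrated on a toy two-path network (an assumption for illustration, not the SAN of Figure 3): path i has duration S_i + T_i, the upstream S_i are sampled, and the final tasks T_i ~ Exp(1) are integrated out analytically, exactly as the text does for the arcs entering the sink.

```python
# Toy look-ahead (conditional Monte Carlo) density estimator for the
# completion time L = max(S1 + T1, S2 + T2): given S1, S2, the conditional
# CDF of L is F(y - S1) * F(y - S2), so the conditional density is the sum
# of the two product-rule terms below. All names here are illustrative.
import math, random

def F(x):   # Exp(1) distribution function
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def f(x):   # Exp(1) density
    return math.exp(-x) if x > 0 else 0.0

def lookahead_density(ys, n, seed=1):
    """Average over replications of the conditional density of L given S1, S2."""
    rng = random.Random(seed)
    est = [0.0] * len(ys)
    for _ in range(n):
        s1, s2 = rng.uniform(0, 1), rng.uniform(0, 2)   # sampled upstream durations
        for i, y in enumerate(ys):
            est[i] += f(y - s1) * F(y - s2) + F(y - s1) * f(y - s2)
    return [e / n for e in est]

ys = [0.01 * k for k in range(1500)]
p_hat = lookahead_density(ys, 200)
total = sum(p_hat) * 0.01
print(total)   # integrates to about 1, as a density should
```

Each replication contributes an exact conditional density, so the estimate is smooth without any kernel or bandwidth choice, which is the point of the look-ahead construction.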
For the purposes of our simulation experiment, we assumed that all task durations were exponentially
distributed with means as indicated on Figure 3. The resulting density estimate is depicted in Figure 4
for a run of length 1000.
Figure 4: Estimate of the network completion time density.
Remark 18: The approach taken in this example clearly generalizes to other SANs where all of the
arcs entering the sink node have densities (with respect to Lebesgue measure).
Remark 19: One need not base a look-ahead density estimator on the arcs that are incident on the
sink. For example, one might instead focus on arcs that leave the source. In the above example, these
arcs correspond to tasks (1, 2) and (1, 3), and one would condition on the longest paths from nodes 2
and 3 to the sink.
Acknowledgments
References
Applied Probability and Queues.
Convergence of Probability Measures.
A Guide to Simulation
Nonparametric Density Estimation: The L 1 View.
A Course in Density Estimation.
Filtered Monte Carlo.
Estimation of Stationary Densities of Markov Chains.
Likelihood ratio gradient estimation for stochastic recursions.
Quantile estimation in dependent sequences.
Regenerative steady-state simulation of discrete-event systems
Asymptotic results for steady-state quantile estimation in Markov chains
Control variates for probability and quantile estimation.
Simulating stable stochastic systems
Improved distribution quantile estimation.
Markov Chains and Stochastic Stability.
A batch means methodology for the estimation of quantiles of the steady-state distribution
Multivariate standardized time series for output analysis in simulation experiments.
Batch means methodology for estimation of a nonlinear function of a steady-state mean
Nonparametric Functional Estimation.
Multivariate Density Estimation: Theory
A batching approach to quantile estimation in regenerative simulations.
Approximation Theorems of Mathematical Statistics.
Nonparametric density estimation
Nonparametric density and regression estimation for Markov sequences without mixing assumptions.
Answer set programming and plan generation

The idea of answer set programming is to represent a given computational problem by a logic program whose answer sets correspond to solutions, and then use an answer set solver, such as SMODELS or DLV, to find an answer set for this program. Applications of this method to planning are related to the line of research on the frame problem that started with the invention of formal nonmonotonic reasoning in 1980.

1 Introduction
Kautz and Selman [19] proposed to approach the problem of plan generation
by reducing it to the problem of finding a satisfying interpretation for a set
of propositional formulas. This method, known as satisfiability planning,
is used now in several planners.¹ In this paper we discuss a related idea,
due to Subrahmanian and Zaniolo [36]: reducing a planning problem to the
problem of finding an answer set ("stable model") for a logic program. The
advantage of this "answer set programming" approach to planning is that the
representation of properties of actions is easier when logic programs are used
instead of axiomatizations in classical logic, in view of the nonmonotonic
character of negation as failure. The two best known answer set solvers (systems
for computing answer sets) available today are smodels² and dlv³. The
results of computational experiments that use smodels for planning are
reported in [4, 30].
¹ … for the latest system of this kind created by the inventors of satisfiability planning.
² http://www.tcs.hut.fi/Software/smodels/
³ http://www.dbai.tuwien.ac.at/proj/dlv/
In this paper, based on earlier reports [22, 23], applications of answer set
programming to planning are discussed from the perspective of the research
on the frame problem and nonmonotonic reasoning done in AI since 1980.
Specifically, we relate them to the line of work that started with the invention
of default logic [33], the nonmonotonic formalism that turned out to
be particularly closely related to logic programming [2, 26, 10]. After the
publication of the "Yale Shooting Scenario" [14] it was widely believed that
the solution to the frame problem outlined in [33] was inadequate. Several
alternatives have been proposed [18, 20, 34, 15, 21, 29, 8]. It turned out,
however, that the approach of [33] is completely satisfactory if the rest of
the default theory is set up correctly [37]. It is, in fact, very general, as
discussed in Section 5.2 below. We will see that descriptions of actions in
the style of [33, 37] can be used as a basis for planning using answer set
solvers.
In the next section, we review the concept of an answer set as defined
in [9, 10, 25] and its relation to default logic. Then we describe some of the
computational possibilities of answer set solvers (Section 3) and illustrate
the answer set programming method [27, 30] by applying it to a graph-theoretic
search problem (Section 4). In Section 5 we turn to the use of
answer set solvers for plan generation. Section 6 describes the relation of
this work to other research on actions and planning.
2 Answer Sets
2.1 Logic Programs
We begin with a set of propositional symbols, called atoms. A literal is an
expression of the form A or ¬A, where A is an atom. (We call ¬ "classical
negation," to distinguish it from the symbol not used for negation
as failure.) A rule element is an expression of the form L or not L, where L
is a literal. A rule is an ordered pair
Head ← Body (1)
where Head and Body are finite sets of rule elements. A rule (1) is a constraint
if Head is empty, and disjunctive if the cardinality of Head is greater
than 1. If
Head = {L₁, …, L_k, not L_{k+1}, …, not L_l}
and
Body = {L_{l+1}, …, L_m, not L_{m+1}, …, not L_n}
then we write (1) as
L₁ ; … ; L_k ; not L_{k+1} ; … ; not L_l ← L_{l+1}, …, L_m, not L_{m+1}, …, not L_n. (2)
We will drop ← in (2) if the body of the rule is empty.
These definitions differ from the traditional description of the syntax of
logic programs in several ways. First, our rules are propositional: atoms are
not assumed to be formed from predicate symbols, constants and variables.
An input file given to an answer set solver does usually contain "schematic
rules" with variables, but such a schematic rule is treated as an abbreviation
for the set of rules obtained from it by grounding. The result of grounding
is a propositional object, just like a set of clauses that would be given as
input to a satisfiability solver.
On the other hand, in some ways (2) is more general than rules found in
traditional logic programs. Each L_i may contain the classical negation symbol
¬; traditional logic programs use only one kind of negation, negation
as failure. The head of (2) may contain several rule elements, or it can be
empty; traditionally, the head of a rule is a single atom. The negation as
failure symbol is allowed to occur in the head of a rule, and not only in
the body as in traditional logic programming. We will see later that the
additional expressivity given by these syntactic features is indeed useful.
2.2 Denition of an Answer Set
The notion of an answer set is dened rst for programs that do not contain
negation as failure (l = k and in every rule (2) of the program). Let
be such a program, and let X be a consistent set of literals. We say that
X is closed under if, for every rule (1) in , Head \ X
Body X. We say that X is an answer set for if X is minimal among
the sets closed under (relative to set inclusion).
For instance, the program
p ; q,
¬r ← p (3)
has two answer sets:
{p, ¬r} and {q}. (4)
If we add the constraint
← q
to (3), we will get a program whose only answer set is the first of sets (4).
On the other hand, if we add the rule ¬q
to (3), we will get a program whose only answer set is {p, ¬q, ¬r}.
To extend the definition of an answer set to programs with negation as
failure, take an arbitrary program Π, and let X be a consistent set of literals.
The reduct Π^X of Π relative to X is the set of rules
L₁ ; … ; L_k ← L_{l+1}, …, L_m
for all rules (2) in Π such that X contains all the literals L_{k+1}, …, L_l but
does not contain any of L_{m+1}, …, L_n. Thus Π^X is a program without
negation as failure. We say that X is an answer set for Π if X is an answer
set for Π^X.
Consider, for instance, the program
p ← not q,
q ← not r,
r ← not s, (5)
and let X be {p, r}. The reduct of (5) relative to this set consists of two
rules, p ← and r ←. Since X is an answer set for this reduct, it is an answer set for (5). It is
easy to check that program (5) has no other answer sets.
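The reduct construction can be prototyped in a few lines. The sketch below is only an illustration of the definition (restricted to ground rules with a single atom in the head and no classical negation, as in program (5)), not a realistic solver:

```python
# Brute-force answer set computation by the reduct definition, for ground
# programs whose rules have a single atom in the head (illustration only).
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body); "p :- q, not r" becomes
# ('p', {'q'}, {'r'}).
def reduct(rules, X):
    """Gelfond-Lifschitz reduct: drop rules blocked by X, strip the 'not' parts."""
    return [(h, pos) for (h, pos, neg) in rules if not (neg & X)]

def minimal_closed_set(pos_rules):
    """Least set closed under a program without negation as failure."""
    X, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if pos <= X and h not in X:
                X.add(h)
                changed = True
    return X

def answer_sets(rules):
    atoms = sorted(set(chain.from_iterable({h} | pos | neg for h, pos, neg in rules)))
    candidates = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(X) for X in map(set, candidates)
            if minimal_closed_set(reduct(rules, X)) == X]

# Program (5):  p :- not q.   q :- not r.   r :- not s.
prog = [('p', set(), {'q'}), ('q', set(), {'r'}), ('r', set(), {'s'})]
result = answer_sets(prog)
print(result)   # the unique answer set {p, r}
```

Real answer set solvers do nothing like this exhaustive enumeration, but the checker makes the definition of the reduct concrete.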
This example illustrates the original motivation for the definition of an
answer set: providing a declarative semantics for negation as failure as implemented
in existing Prolog systems. Given program (5), a Prolog system
will respond yes to a query if and only if that query is p or r, that is to say,
if and only if the query belongs to the answer set for (5). In this sense, the
role of answer sets is similar to the role of the concept of completion [3],
which provides an alternative explanation for the behavior of Prolog (p and
r are entailed by the program's completion).
2.3 Comparison with Default Logic
Let Π be a program such that the head of every rule of Π is a single literal:
L₀ ← L₁, …, L_m, not L_{m+1}, …, not L_n. (6)
We can transform Π into a (propositional) default theory in the sense of [33]
by turning each rule (6) into the default
L₁ ∧ … ∧ L_m : L̄_{m+1}, …, L̄_n / L₀, (7)
where L̄ stands for the literal complementary to L.
There is a simple correspondence between the answer sets for Π and the
extensions for this default theory DT_Π: if X is an answer set for Π then the
deductive closure of X is a consistent extension for DT_Π; conversely, every
consistent extension for DT_Π is the deductive closure of an answer set for Π.
For instance, the default theory corresponding to program (5) is
: ¬q / p,   : ¬r / q,   : ¬s / r.
The only extension for this default theory is the deductive closure of the
program's answer set {p, r}.
Under this correspondence, a rule without negation as failure is represented
by a default without justifications, that is to say, by an inference
rule. A fact, a rule with the empty body, corresponds to a default that
has neither prerequisites nor justifications, that is, an axiom. The normal
default p : q / q is the counterpart of the rule
q ← p, not ¬q. (8)
Logic programs as defined above are more general than defaults in that
their rules may have several elements in the head, and these elements may
include negation as failure. On the other hand, defaults are more general in
that they may contain arbitrary propositional formulas, not just literals or
conjunctions of literals.
In this connection, it is interesting to note that one of the technical issues
related to the "Yale Shooting" controversy is whether the effects of actions
should be described by axioms, such as
loaded(s) ⊃ ¬alive(result(shoot, s)), (9)
or by inference rules, such as
loaded(s) / ¬alive(result(shoot, s)). (10)
According to [37], formulation (10) is a better choice. In the language of
logic programs (10) would be written as
¬alive(result(shoot, s)) ← loaded(s).
Formula (9), on the other hand, does not correspond to any rule in the
sense of logic programming. Paradoxically, limitations of the language of
logic programs play a positive role in this case by eliminating some of the
"bad" representational choices that are available when properties of actions
are described in default logic.
2.4 Generating and Eliminating Answer Sets
From the perspective of answer set programming, two kinds of rules play a
special role: those that generate multiple answer sets and those that can be
used to eliminate some of the answer sets of a program.
One way to write a program with many answer sets is to use the disjunctive
rules
A ; ¬A (11)
for several atoms A. A program that consists of n rules of this form has 2ⁿ
answer sets. For instance, the program consisting of the rules p ; ¬p and q ; ¬q
has 4 answer sets:
{p, q}, {p, ¬q}, {¬p, q}, {¬p, ¬q}.
As observed in [5], rule (11) can be equivalently replaced in any program
by two nondisjunctive rules
A ← not ¬A,
¬A ← not A.
In the notation of default logic, these rules can be written as
: A / A,   : ¬A / ¬A.
Alternatively, a program with many answer sets can be formed using
rules of the form
L ; not L (12)
where L is a literal. This rule has two answer sets: {L} and ∅. A program
that consists of n rules of form (12) has 2ⁿ answer sets, all subsets of the
set of literals occurring in the rules. For instance, the answer sets for the
program
p ; not p,
q ; not q (13)
are the 4 subsets of {p, q}.
The rules that can be used to eliminate "undesirable" answer sets are
constraints, rules with the empty head. We saw in Section 2.2 that appending
the constraint ← q to program (3) eliminates one of its two answer
sets (4). The effect of adding a constraint to a program is always monotonic:
the collection of answer sets of the extended program is a part of the
collection of answer sets of the original program.
More precisely, we say that a set X of literals violates a constraint
← L₁, …, L_m, not L_{m+1}, …, not L_n (14)
if X contains all of L₁, …, L_m but none of L_{m+1}, …, L_n. Let Π′ be the program obtained
from a program Π by adding constraint (14). Then a set X of literals is an
answer set for Π′ iff
X is an answer set for Π, and
X does not violate constraint (14).
For instance, the second of the answer sets (4) for program (3) violates the
constraint ← q, and the first doesn't; accordingly, adding this constraint
to (3) eliminates the second of the program's answer sets.
To see how rules of both kinds, those that generate answer sets and
those that eliminate them, can work together, consider the following translation
from propositional theories to logic programs. Let Γ be a set of
clauses, and let Π_Γ be the program consisting of
the rules (11) for all atoms A occurring in Γ, and
the constraints ← L̄₁, …, L̄_n for all clauses L₁ ∨ … ∨ L_n in Γ.
(By L̄ we denote the literal complementary to L.) The answer sets for Π_Γ
are in a 1-1 correspondence with the truth assignments satisfying Γ, every
truth assignment being represented by the set of literals to which it assigns
the value true.
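This correspondence can be checked directly on a tiny clause set. The helper below (plain Python, an illustration only) enumerates the satisfying assignments, each represented by its set of literals; by the correspondence just stated, these sets are exactly the answer sets of the translated program.

```python
# Enumerate satisfying assignments of a clause set, each written as the set
# of literals it makes true ('-a' stands for the classical negation of a).
from itertools import product

def models(clauses, atoms):
    result = []
    for signs in product([True, False], repeat=len(atoms)):
        lits = {a if s else '-' + a for a, s in zip(atoms, signs)}
        # a clause is satisfied if some literal in it is assigned true
        if all(any(l in lits for l in clause) for clause in clauses):
            result.append(lits)
    return result

# One clause, p or not-q: three of the four assignments satisfy it.
ms = models([['p', '-q']], ['p', 'q'])
print(len(ms))   # 3
```

The generating rules (11) correspond to the free choice over `signs` here, and each constraint rules out exactly the assignments falsifying its clause.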
3 Answer Set Solvers
System dlv computes answer sets for finite programs without negation as
failure in the heads of rules (l = k in every rule (2) of the program). For
instance, given the input file
p v q.
-r :- p.
it will return the answer sets for program (3). Given the input file
p :- not q.
q :- not r.
r :- not s.
it will return the answer set for program (5).
System smodels requires additionally that its input program contain no
disjunctive rules. This limitation is mitigated by two circumstances.
First, the input language of smodels allows us to express any "exclusive
disjunctive rule," that is, a disjunctive rule
accompanied by the constraints
that forbid any two of its head atoms from occurring together; this combination
is represented by a single expression of the input language.
Second, smodels allows us to represent the important disjunctive combination
(12) in the head of a rule by enclosing L in braces:
{ L }.
A list of rules of this form
can be conveniently represented in an smodels input file by one line.
For instance, rules (13) can be written simply as
{p,q}.
Both dlv and smodels allow the user to specify large programs in a
compact fashion, using rules with schematic variables and other abbreviations.
Both systems employ sophisticated grounding algorithms that work
fast and simplify the program in the process of grounding.
joined(X,Y) :- edge(X,Y).
joined(X,Y) :- edge(Y,X).
:- in(X), in(Y), X!=Y, not joined(X,Y),
vertex(X), vertex(Y).
hide.
show in(X).
Figure 1: Search for a large clique.
4 Answer Set Programming
The idea of answer set programming is to represent a given computational
problem by a program whose answer sets correspond to solutions, and then
use an answer set solver to find a solution.
As an example, we will show how this method can be used to find a large
clique, that is, a subset V of the vertices of a given graph such that
every two vertices in V are joined by an edge, and
the cardinality of V is not less than a given constant j.
Figure 1 shows an smodels input file that can be used to find a large
clique or to determine that it does not exist. This file is supposed to be
accompanied by a file that describes the graph and specifies the value of j,
such as the one shown in Figure 2.
The possible values of the variables X, Y in Figure 1 are restricted by
the "domain predicates" vertex and edge. In case of the graph described
in Figure 2, the predicate vertex holds for the numerals 0, …, 5 and the
predicate edge holds for 8 pairs of vertices. Accordingly, the expression
{in(X) : vertex(X)}
at the beginning of Figure 1 ("the set of atoms in(X) for all X such that
vertex(X)") has the same meaning as
{in(0), in(1), in(2), in(3), in(4), in(5)}.
The last expression can be understood as an abbreviation for a set of rules of
form (12), as discussed in Section 3. The answer sets for this set of rules are
arbitrary sets formed from these 6 atoms.

const j=3.
vertex(0..5).
edge(0,1). edge(1,2). edge(2,0). edge(3,4).
edge(4,5). edge(5,3). edge(4,0). edge(2,5).

Figure 2: A test for the clique program.
arbitrary sets formed from these 6 atoms. Symbol j at the beginning of the
rule restricts the answer sets to those whose cardinality is at least j. This
is an instance of the \cardinality" construct available in smodels [31]. It
allows the user to bound, from below and from above, the number of atoms
of a certain form that are included in the answer set. lower bound is
placed to the left of the expression in braces, as in this example; an upper
bound would be placed to the right.)
The main parts of the program in Figure 1 are the two labeled GENERATE
and TEST. The former defines a large collection of answer sets, the "potential
solutions." The latter consists of the constraints that "weed out" the answer
sets that do not correspond to solutions. As discussed above, a potential
solution is any subset of the vertices whose cardinality is at least j; the
constraints eliminate the subsets that are not cliques. This is similar to the
use of generating and eliminating rules in Section 2.4.
The part labeled DEFINE contains the definition of the auxiliary predicate
joined. The part labeled DISPLAY tells smodels which elements of the
answer set should be included in the output: it instructs the system to "hide"
all literals other than those that encode the clique. In case of the problem
shown in Figure 2, the part of the answer set displayed by smodels consists
of the atoms in(X) for the three vertices of a clique.
The discussion of this example in terms of generating a set of potential
solutions and testing its elements illustrates the declarative meaning of the
program, but it should not be understood as a description of what is actually
happening during the operation of an answer set solver. System smodels
does not process the program shown above by producing answer sets for
the GENERATE part and checking whether they satisfy the constraints in the
TEST part, just as a reasonable satisfiability solver does not search for a
model of a given set of clauses by generating all possible truth assignments
and checking for each of them whether the clauses are satisfied. The search
procedures employed in systems smodels and dlv use sophisticated search
strategies somewhat similar to those used in efficient satisfiability solvers.
Answer set programming has found applications to several practically
important computational problems [30, 35, 16]. One of these problems is
planning.
5 Planning
5.1 Example
The code in Figures 3-5 allows us to use smodels to solve planning problems
in the blocks world. We imagine that blocks are moved by a robot with
several grippers, so that a few blocks can be moved simultaneously. However,
the robot is unable to move a block onto a block that is being moved at the
same time. As usual in blocks world planning, we assume that a block can
be moved only if there are no blocks on top of it.
There are three domain predicates in this example: time, block and
location; a location is a block or the table. The constant lasttime is an
upper bound on the lengths of the plans to be considered. (To find the
shortest plan, one can use the minimize feature of smodels, which is not
discussed in this paper.)
The GENERATE section defines a potential solution to be an arbitrary set of
move actions executed prior to lasttime such that, for every T, the number
of actions executed at time T does not exceed the number of grippers.
The rules labeled DEFINE describe the sequence of states corresponding
to the execution of a given potential plan. Each sequence of states is
represented by a complete set of on literals. The DEFINE rules in Figure 5
specify the positive literals describing the initial positions of all
blocks. The first two DEFINE rules in Figure 3 specify the positive
literals describing the positions of all blocks at time T+1 in terms of
their positions at time T. The uniqueness of location rule specifies the
negative on literals to be included in an answer set in terms of the
positive on literals in this answer set.

time(0..lasttime).
location(B) :- block(B).
location(table).

% GENERATE
{move(B,L,T) : block(B) : location(L)} grippers :-
   time(T), T<lasttime.

% DEFINE
% effect of moving a block
on(B,L,T+1) :- move(B,L,T),
   block(B), location(L), time(T), T<lasttime.
on(B,L,T+1) :- on(B,L,T), not -on(B,L,T+1),
   location(L), block(B), time(T), T<lasttime.
% uniqueness of location
-on(B,L1,T) :- on(B,L,T), L != L1,
   block(B), location(L), location(L1), time(T).

Figure 3: Planning in the blocks world, Part 1.

% TEST
% two blocks cannot be on top of the same block
:- 2 {on(B1,B,T) : block(B1)},
   block(B), time(T).
% a block can't be moved unless it is clear
:- move(B,L,T), on(B1,B,T),
   block(B), block(B1), location(L), time(T), T<lasttime.
% a block can't be moved onto a block that is being moved also
:- move(B,B1,T), move(B1,L,T),
   block(B), block(B1), location(L), time(T), T<lasttime.

% DISPLAY
hide.
show move(B,L,T).

Figure 4: Planning in the blocks world, Part 2.

const grippers=2.
const lasttime=3.

block(1..6).

% DEFINE (initial state)          % TEST (goal)
on(1,2,0).                        :- not on(3,2,lasttime).
on(2,table,0).                    :- not on(2,1,lasttime).
on(3,4,0).                        :- not on(1,table,lasttime).
on(4,table,0).                    :- not on(6,5,lasttime).
on(5,6,0).                        :- not on(5,4,lasttime).
on(6,table,0).                    :- not on(4,table,lasttime).

Figure 5: A test for the planning program.
Note that the second DEFINE rule in Figure 3 is the smodels representation
of the normal default

   on(b,l,t) : on(b,l,t+1)
   -----------------------                               (15)
        on(b,l,t+1)

that is, if it is consistent to assume that at time t+1 block b is at the
same location where it was at time t, then it is indeed at that location
(see Section 2.3). This default is interesting to compare with the solution
to the frame problem proposed by Reiter in 1980:

   R(x,s) : R(x,f(x,s))
   --------------------
       R(x,f(x,s))

[33, Section 1.1.4]. If we take relation R to be on, and the tuple of
arguments x to be b,l, this expression will turn into

   on(b,l,s) : on(b,l,f(b,l,s))
   ----------------------------                          (16)
        on(b,l,f(b,l,s))

The only difference between defaults (15) and (16) is that the first
describes change in terms of the passage of time (t becomes t+1), and the
latter in terms of state transitions (s becomes f(b,l,s)).
Consider now the three constraints labeled TEST in Figure 4. The role of
the first constraint is to prohibit, indirectly, the actions that would create
physically impossible configurations of blocks, such as moving two blocks
onto the same block b. The other two constraints express the robot's
limitations mentioned at the beginning of this section.
Adding these constraints to the program eliminates the answer sets corresponding
to the sequences of actions that are not executable in the given
initial state. When we further extend the program by adding the TEST section
of Figure 5, we eliminate, in addition, the sequences of actions that do
not lead to the goal state. The answer sets for the program are now in a
one-to-one correspondence with solutions of the given planning problem.
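To make the role of these constraints concrete, the following Python sketch applies a set of concurrent moves to a state and enforces the GENERATE bound and the three TEST constraints. The state encoding (a dictionary mapping each block to its location) is our own illustration, not part of the smodels program:

```python
def successor(on, moves, grippers):
    """Apply concurrent moves {block: destination} to a state
    {block: location}; return the new state, or None if the moves
    violate the gripper bound or the TEST constraints."""
    # GENERATE bound: at most `grippers` concurrent moves
    if len(moves) > grippers:
        return None
    for b, dest in moves.items():
        # a block can't be moved unless it is clear
        if b in on.values():
            return None
        # a block can't be moved onto a block that is being moved also
        if dest in moves:
            return None
    new_on = dict(on)
    new_on.update(moves)
    # two blocks cannot be on top of the same block
    targets = [loc for loc in new_on.values() if loc != "table"]
    if len(targets) != len(set(targets)):
        return None
    return new_on
```

A plan is executable exactly when every step of its state sequence survives these checks, mirroring how the constraints eliminate answer sets.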
The DISPLAY section instructs smodels to "hide" all literals except
for those that begin with move. The part of the answer set displayed by
smodels is the list of actions included in the plan:
Stable Model: move(3,table,0) move(1,table,0) move(5,4,1)
True
Duration: 0.340
5.2 Discussion
The description of the blocks world domain in Figures 3 and 4 is more
sophisticated, in several ways, than the shooting example [14] that seemed
so difficult to formalize in 1987. First, this version of the blocks world
includes the concurrent execution of actions.
Second, some effects of moving a block are described here indirectly. In
the shooting domain, the effects of all actions are specified explicitly: we
are told how the action load affects the fluent loaded, and how the action
shoot affects the fluent alive. The description of the blocks world given
above is different. When block 1, located on top of block 2, is moved onto
the table, this action affects two fluents: on(1,table) becomes true, and
on(1,2) becomes false. The first of these two effects is described explicitly
by the first DEFINE rule in Figure 3, but the description of the second
effect is indirect: the uniqueness of location rule allows us to conclude
that block 1 is not on top of block 2 anymore from the fact that block 1 is
now on the table. The ramification problem, the problem of describing
indirect effects of actions, is not addressed in the classical action
representation formalisms STRIPS [7] and ADL [32].
Finally, the executability of actions is described in this example
indirectly as well. As discussed above, the impossibility of moving two
blocks b1, b2 onto the same block b is implicit in our description of the
blocks world: executing that action would have created a configuration of
blocks that is prohibited by one of the constraints in Figure 4. In STRIPS
and ADL, the executability of an action has to be described explicitly, by
listing the action's preconditions. The usual description of the blocks
world asserts, for instance, that moving one block on top of another is not
executable if the target location is not clear. This description is not
applicable, however, when several blocks can be moved simultaneously: in the
initial state shown in Figure 5, block 1 can be moved onto block 4 if block
3 is moved at the same time. Fortunately, when the answer set approach to
describing actions is adopted, specifying action preconditions explicitly is
unnecessary.
The usefulness of indirect descriptions of action domains for applications
of AI was demonstrated in a recent report [38] on modelling the Reaction
Control System (RCS) of the Space Shuttle. The system consists of several
fuel tanks, oxidizer tanks, helium tanks, maneuvering jets, pipes, valves,
and other components. How is the behavior of the RCS affected by flipping
one of its switches? According to [38], this action has only one direct
effect, which is trivial: changing the position of a switch causes the
switch to be in the new position. But there is also a postulate asserting
that, if a valve is functional, it is not stuck closed, and the switch
controlling it is in the open (or closed) position, then the valve is open
(or closed). These two facts together tell us that, under certain
conditions, flipping a switch indirectly affects the corresponding valve.
Furthermore, if a helium tank has correct pressure, there is an open path to
a propulsion tank, and there are no paths to a leak, then the propulsion
tank has correct pressure also. Using this postulate we can conclude that,
under certain conditions, flipping a switch affects pressure in a propulsion
tank, and so on. This multi-level approach to describing the effects of
actions leads to a well-structured and easy to understand formal description
of the operation of the RCS. The answer set programming approach handles
such multi-leveled descriptions quite easily.
6 Relation to Action Languages and Satisfiability Planning
Some of the recent work on representing properties of actions is formulated
in terms of "high-level" action languages [12], such as A [11] and C [13].
Descriptions of actions in these languages are more concise than logic
programming representations. For example, the counterparts of the first two
DEFINE rules from Figure 3 in language C are

   move(b,l) causes on(b,l)

and

   inertial on(b,l).
The design of language C is based on the system of causal logic proposed
in [28].
For a large class of action descriptions in C, an equivalent translation
into logic programming notation is defined in [24]. The possibility of such a
translation further illustrates the expressive power of the action representation
method used in this paper.
As noted in the introduction, the answer set programming approach to
planning is related to satisfiability planning. There is, in fact, a formal
connection between the two methods. If a program without classical negation
is "positive-order-consistent," or "tight," then its answer sets can be
characterized by a collection of propositional formulas [6], namely the
formulas obtained by applying the completion process [3] to the program. The
translations from language C described in [24] happen to produce tight
programs. Describing a planning problem by a program like this, then
translating the program into propositional logic, and, finally, invoking a
satisfiability solver to find a plan is a form of satisfiability planning
that can be viewed also as "answer set programming without answer set
solvers" [1]. This is essentially how planning is performed by the Causal
Calculator.⁴
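To illustrate the completion process mentioned above, the following Python sketch computes Clark's completion of a propositional normal program. The program encoding (a dict mapping each atom to the list of its rule bodies, each body a pair of positive and negated atom lists) and the string output format are our own assumptions:

```python
def completion(program):
    """Clark's completion: each atom becomes equivalent to the
    disjunction of its rule bodies; atoms with no rules become false."""
    formulas = []
    for head, bodies in program.items():
        parts = []
        for pos, neg in bodies:
            lits = list(pos) + ["~" + a for a in neg]
            parts.append("(" + " & ".join(lits) + ")" if lits else "true")
        formulas.append(head + " <-> " + (" | ".join(parts) if parts else "false"))
    return formulas
```

For a tight program, the models of the resulting formulas coincide with its answer sets, which is what makes the detour through a satisfiability solver possible.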
7 Conclusion
In answer set programming, solutions to a combinatorial search problem
are represented by answer sets. Plan generation in the domains that involve
actions with indirect effects is a promising application area for this
programming method.
Systems smodels and dlv allow us to solve some nontrivial planning
problems even in the absence of domain-specific control information. For
larger problems, however, such information becomes a necessity. The
possibility of encoding domain-specific control knowledge so that it can be
used by an answer set solver is crucial for progress in this area, just as
the possibility of using control knowledge by propositional solvers is
crucial for further progress in satisfiability planning [17]. This is a
topic for future work.
Acknowledgements
Useful comments on preliminary versions of this paper have been provided
by Maurice Bruynooghe, Marc Denecker, Esra Erdem, Selim Erdogan, Paolo
Ferraris, Michael Gelfond, Joohyung Lee, Nicola Leone, Victor Marek, Norman
McCain, Ilkka Niemelä, Aarati Parmar, Teodor Przymusinski, Mirosław
Truszczyński and Hudson Turner. This work was partially supported by the
National Science Foundation under grant IIS-9732744.
--R
Fages' theorem and answer set programming.
Minimalism subsumes default
Negation as failure.
Encoding planning problems in non-monotonic logic programs
Transformations of logic programs related to causality and planning.
STRIPS: A new approach to the application of theorem proving to problem solving.
Autoepistemic logic and formalization of common-sense reasoning
The stable model semantics for logic programming.
Logic programs with classical negation.
Representing action and change by logic programs.
Action languages.
An action language based on causal explanation: Preliminary report.
Nonmonotonic logic and temporal projection.
Simple causal minimizations for temporal persistence and projection.
Using logic programs with stable model semantics to solve deadlock and reachability problems for 1-safe Petri nets
Control knowledge in planning: benefits and tradeoffs.
The logic of persistence.
Planning as satisfiability.
Pointwise circumscription: Preliminary report.
Formal theories of action (preliminary report).
Action languages
Answer set planning.
Representing transition systems by logic programs.
Answer sets in general nonmonotonic reasoning (preliminary report).
Victor Marek and Mirosław Truszczyński.
Victor Marek and Mirosław Truszczyński.
Causal theories of action and change.
The anomalous extension problem in default reasoning.
Ilkka Niemelä.
Ilkka Niemelä.
ADL: Exploring the middle ground between STRIPS and the situation calculus.
A logic for default reasoning.
Chronological ignorance: Time
Timo Soininen and Ilkka Niemelä.
Relating stable models and AI planning domains.
Representing actions in logic programs and default theories: a situation calculus approach.
An application of action theory to the space shuttle.
--TR
Nonmonotonic logic and temporal projection
The anomalous extension problem in default reasoning
Autoepistemic logic and formalization of commonsense reasoning: preliminary report
Logic programs with classical negation
ADL
Planning as satisfiability
An action language based on causal explanation
Control knowledge in planning
Answer set planning
Extending and implementing the stable model semantics
Logic programs with stable model semantics as a constraint programming paradigm
Developing a Declarative Rule Language for Applications in Product Configuration
An Application of Action Theory to the Space Shuttle
Representing Transition Systems by Logic Programs
Transformations of Logic Programs Related to Causality and Planning
Using Logic Programs with Stable Model Semantics to Solve Deadlock and Reachability Problems for 1-Safe Petri Nets
Encoding Planning Problems in Nonmonotonic Logic Programs
--CTR
Esra Erdem , Vladimir Lifschitz, Tight logic programs, Theory and Practice of Logic Programming, v.3 n.4, p.499-518, July
M. De Vos , D. Vermeir, Extending Answer Sets for Logic Programming Agents, Annals of Mathematics and Artificial Intelligence, v.42 n.1-3, p.103-139, September 2004
Enrico Giunchiglia , Joohyung Lee , Vladimir Lifschitz , Norman McCain , Hudson Turner, Nonmonotonic causal theories, Artificial Intelligence, v.153 n.1-2, p.49-104, March 2004
Josefina Sierra-Santibáñez, Heuristic planning: a declarative approach based on strategies for action selection, Artificial Intelligence, v.153 n.1-2, p.307-337, March 2004
Chiaki Sakama, Induction from answer sets in nonmonotonic logic programs, ACM Transactions on Computational Logic (TOCL), v.6 n.2, p.203-231, April 2005
Ernest Davis , Leora Morgenstern, Introduction: progress in formal commonsense reasoning, Artificial Intelligence, v.153 n.1-2, p.1-12, March 2004
Thomas Eiter , Axel Polleres, Towards automated integration of guess and check programs in answer set programming: a meta-interpreter and applications, Theory and Practice of Logic Programming, v.6 n.1-2, p.23-60, January 2006
Davy Van Nieuwenborgh , Dirk Vermeir, Preferred answer sets for ordered logic programs, Theory and Practice of Logic Programming, v.6 n.1-2, p.107-167, January 2006
Tran Cao Son , Chitta Baral , Nam Tran , Sheila Mcilraith, Domain-dependent knowledge in answer set planning, ACM Transactions on Computational Logic (TOCL), v.7 n.4, p.613-657, October 2006
Pascal Hitzler , Matthias Wendt, A uniform approach to logic programming semantics, Theory and Practice of Logic Programming, v.5 n.1-2, p.93-121, January 2005
Marcello Balduccini , Enrico Pontelli , Omar Elkhatib , Hung Le, Issues in parallel execution of non-monotonic reasoning systems, Parallel Computing, v.31 n.6, p.608-647, June 2005
Chiaki Sakama , Katsumi Inoue, An abductive framework for computing knowledge base updates, Theory and Practice of Logic Programming, v.3 n.6, p.671-715, November
Nicola Leone , Gerald Pfeifer , Wolfgang Faber , Thomas Eiter , Georg Gottlob , Simona Perri , Francesco Scarcello, The DLV system for knowledge representation and reasoning, ACM Transactions on Computational Logic (TOCL), v.7 n.3, p.499-562, July 2006
Logic programming and knowledge representation-the A-prolog perspective, Artificial Intelligence, v.138 n.1-2, p.3-38, June 2002 | answer sets;default logic;planning;frame problem;logic programming |
570392 | Fixed-parameter complexity in AI and nonmonotonic reasoning. | Many relevant intractable problems become tractable if some problem parameter is fixed. However, various problems exhibit very different computational properties, depending on how the runtime required for solving them is related to the fixed parameter chosen. The theory of parameterized complexity deals with such issues, and provides general techniques for identifying fixed-parameter tractable and fixed-parameter intractable problems. We study the parameterized complexity of various problems in AI and nonmonotonic reasoning. We show that a number of relevant parameterized problems in these areas are fixed-parameter tractable. Among these problems are constraint satisfaction problems with bounded treewidth and fixed domain, restricted forms of conjunctive database queries, restricted satisfiability problems, propositional logic programming under the stable model semantics where the parameter is the dimension of a feedback vertex set of the program's dependency graph, and circumscriptive inference from a positive k-CNF restricted to models of bounded size. We also show that circumscriptive inference from a general propositional theory, when the attention is restricted to models of bounded size, is fixed-parameter intractable and is actually complete for a novel fixed-parameter complexity class. | Introduction
Many hard decision or computation problems are known to become tractable if a problem parameter is fixed
or bounded by a fixed value. For example the well-known NP-hard problems of checking whether a graph has
a vertex cover of size at most k, and of computing such a vertex cover if so, become tractable if the integer
k is a fixed constant, rather than being part of the problem instance. Similarly, the NP complete problem
of finding a clique of size k in a graph becomes tractable for every fixed k. Note, however, that there is an
important difference between these problems:
- The vertex cover problem is solvable in linear time for every fixed constant k. Thus the problem is not
only polynomially solvable for each fixed k, but, moreover, in time bounded by a polynomial p_k whose
degree does not depend on k.
- The best known algorithms for finding a clique of size k in a graph are all exponential in k (typically,
they require runtime n^{Ω(k/2)}). Thus, for fixed k, the problem is solvable in time bounded by a
polynomial p_k whose degree depends crucially on k.
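To make the contrast concrete, here is the classic bounded-search-tree decision procedure for vertex cover, sketched in Python (the function name and edge encoding are ours). It exhibits exactly the f(k)·poly(n) behavior described for the first kind of problem:

```python
def vertex_cover(edges, k):
    """Bounded search tree for VERTEX COVER: for an uncovered edge
    (u, v), at least one endpoint must be in the cover, so branch on
    both choices. Runtime roughly O(2^k * |E|): exponential only in
    the parameter k, polynomial of fixed degree in the input size."""
    if not edges:
        return True       # all edges covered
    if k == 0:
        return False      # edges remain but the budget is exhausted
    u, v = edges[0]
    return (vertex_cover([e for e in edges if u not in e], k - 1)
            or vertex_cover([e for e in edges if v not in e], k - 1))
```

No comparably parameter-confined algorithm is known for clique, which is what the W-hierarchy below formalizes.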
Problems of the first type are called fixed-parameter tractable (fp-tractable), while problems of the second
type can be classified as fixed-parameter intractable (fp-intractable) [8]. It is clear that fixed-parameter
tractability is a highly desirable feature.
The theory of parameterized complexity, mainly developed by Downey and Fellows [8, 6, 5], deals with
general techniques for proving that certain problems are fp-tractable, and with the classification of fp-intractable
problems into a hierarchy of fixed-parameter complexity classes.
In this paper we study the fixed-parameter complexity of a number of relevant AI and NMR problems. In
particular, we show that the following problems are all fixed-parameter tractable (the parameters to be fixed
are added in square brackets after the problem description):
- Constraint Satisfiability and computation of the solution to a constraint satisfaction problem (CSP) [fixed
parameters: (cardinality of) domain and treewidth of constraint scopes].
- Satisfiability of CNF [fixed parameter: treewidth of variable connection graph].
- Prime Implicants of a q-CNF [fixed parameters: maximal number q of literals per clause and size of the
prime implicants to be computed].
- Propositional logic programming [fixed parameter: size of a minimal feedback vertex set of the atom
dependency graph].
- Circumscriptive inference from a positive q-CNF [fixed parameters: maximal number q of literals per
clause and size of the models to be considered].
We believe that these results are useful both for a better understanding of the computational nature of the
above problems and for the development of smart parameterized algorithms for the solution of these and
related problems.
We also study the complexity of circumscriptive inference from a general propositional theory when the
attention is restricted to models of size k. This problem, referred to as small model circumscription (SMC),
is easily seen to be fixed-parameter intractable, but it does not seem to be complete for any of the
fp-complexity classes defined by Downey and Fellows. We introduce the new class Σ₂W[SAT] as a
miniaturized version of the class Σ₂^P of the polynomial hierarchy, and prove that SMC is complete for
Σ₂W[SAT]. This seems to be natural, given that the nonparameterized problem corresponding to SMC is
Σ₂^P-complete [9]. Note, however, that completeness results for parameterized classes are more difficult to
obtain. In fact, for obtaining our completeness result we had to resort to the general version of
circumscription (called P,Z-circumscription) where the propositional letters of the theory to be
circumscribed are partitioned into two subsets P and Z, and only the atoms in P are minimized, while those
in Z can float. The restricted problem, where P consists of all atoms and Z is empty, does not seem to be
complete for Σ₂W[SAT], even though its non-parameterized version is still Σ₂^P-complete.
The paper is organized as follows. In Section 2 we state the relevant formal definitions related to fixed
parameter complexity. In Section 3 we deal with constraint satisfaction problems. In Section 4 we study fp-
tractable satisfiability problems. In Section 5 we deal with logic programming. Finally, in Section 6 we study
the problem of circumscriptive inference with small models.
2 Parameterized Complexity
Parameterized complexity [8] deals with parameterized problems, i.e., problems with an associated parameter.
Any instance S of a parameterized problem P can be regarded as consisting of two parts: the "regular"
instance I_S, which is usually the input instance of the classical (non-parameterized) version of P; and the
associated parameter k_S, usually of integer type.
Definition 1. A parameterized problem P is fixed-parameter tractable if there is an algorithm that correctly
decides, for input S, whether S is a yes instance of P in time f(k_S)·O(n^c), where n is the size of I_S
(|I_S| = n), k_S is the parameter, c is a constant, and f is an arbitrary function.
A notion of problem reduction proper to the theory of parameterized complexity has been defined.
Definition 2. A parameterized problem P fp-reduces to a parameterized problem P' by an fp-reduction if
there exist two functions f, f' and a constant c such that we can associate to any instance S of P an
instance S' of P' satisfying the following conditions: (i) the parameter k_{S'} of S' is f(k_S); (ii) the
regular instance I_{S'} is computable from S in time f'(k_S)·O(n^c); and (iii) S is a yes instance of P if
and only if S' is a yes instance of P'.
A parameterized class of problems C is a (possibly infinite) set of parameterized problems. A problem P
is C-complete if P ∈ C and every problem P' ∈ C is fp-reducible to P.
A hierarchy of fp-intractable classes, called the W-hierarchy, has been defined to properly characterize the
degree of fp-intractability associated to different parameterized problems. The relationship among the
classes of problems belonging to the W-hierarchy is given by the following chain of inclusions:

   W[1] ⊆ W[2] ⊆ ... ⊆ W[t] ⊆ ... ⊆ W[SAT] ⊆ W[P]

where, for each natural number t > 0, the definition of the class W[t] is based on the degree t of the
complexity of a suitable family of Boolean circuits.
The most prominent W[1]-complete problem is the parameterized version of clique, where the parameter
is the clique size. W[1] can be characterized as the class of parameterized problems that fp-reduce to
parameterized CLIQUE. Similarly, W[2] can be characterized as the class of parameterized problems that
fp-reduce to parameterized Hitting Set, where the parameter is the size of the hitting set.
A k-truth value assignment for a formula E is a truth value assignment which assigns true to exactly k
propositional variables of E. Consider the following problem Parameterized SAT:
Instance: A Boolean formula E.
Parameter: k.
Question: Does there exist a k-truth value assignment satisfying E?
W[SAT] is the class of parameterized problems that fp-reduce to parameterized SAT. W[SAT] is contained
in W[P], where Boolean circuits are used instead of formulae. It is not known whether any of the
above inclusions is proper, but all of them are conjectured to be proper.
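A brute-force decision procedure for Parameterized SAT shows where the parameter bites: it examines all C(n, k) candidate k-truth value assignments, a bound that is polynomial for each fixed k but whose degree grows with k. The formula interface below (a predicate over the set of variables assigned true) is our own simplification:

```python
from itertools import combinations

def has_k_model(variables, formula, k):
    """Does some assignment setting exactly k variables to true
    satisfy the formula? Tries all C(n, k) candidate k-assignments."""
    return any(formula(set(trues)) for trues in combinations(variables, k))
```

No algorithm avoiding this k-dependent degree is known, which is why the problem is taken as the defining member of W[SAT].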
The AW-hierarchy has been defined in order to deal with some problems that do not fit the W-classes [8].
The AW-hierarchy represents in a sense the parameterized counterpart of PSPACE in the classical complexity
setting. In this paper we are mainly interested in the class AW[SAT]. Consider the following problem
Parameterized QBFSAT:
Instance: A quantified Boolean formula Φ = ∃^{k_1}x_1 ∀^{k_2}x_2 ... E, where E is a Boolean formula and
each x_i is a tuple of its variables.
Parameter: k_1, ..., k_r.
Question: Is Φ valid? (Here, ∃^{k_i}x denotes the choice of some k_i-truth value assignment for the variables
x, and ∀^{k_j}x denotes all choices of k_j-truth value assignments for the variables x.)
AW[SAT] is the class of parameterized problems that fp-reduce to parameterized QBFSAT.
3 Constraint Satisfaction Problems, Bounded Treewidth, and FP-Tractability
In this section we prove that constraint satisfaction problems of bounded treewidth over a fixed domain
are fixed-parameter tractable. To obtain this result we need a number of definitions. In Section 3.1 we
give a very general definition of CSPs; in Section 3.2 we define the treewidth of CSP problems and quote
some recent results; in Section 3.3 we show the main tractability result.
3.1 Definition of CSPs
An instance of a constraint satisfaction problem (CSP) (also called a constraint network) is a triple
I = (Var, U, C), where Var is a finite set of variables, U is a finite domain of values, and
C = {C_1, ..., C_n} is a finite set of constraints. Each constraint C_i is a pair (S_i, r_i), where S_i is
a list of variables of length m_i called the constraint scope, and r_i is an m_i-ary relation over U,
called the constraint relation. (The tuples of r_i indicate the allowed combinations of simultaneous values
for the variables in S_i.) A solution to a CSP instance is a substitution θ : Var → U such that, for each
1 ≤ i ≤ n, S_i θ ∈ r_i. The problem of deciding whether a CSP instance has any solution is called
constraint satisfiability (CS). (This definition is taken almost verbatim from [16].)
To any CSP instance I = (Var, U, C), we associate a hypergraph H(I) = (V, H), where V = Var and
H = {var(S) | (S, r) ∈ C}, and var(S) denotes the set of variables in the scope S of a constraint.
Let H(I) = (V, H) be the constraint hypergraph of a CSP instance I. The primal graph of I is a graph
G(I) = (V, E), having the same set of variables (vertices) as H(I) and an edge connecting any pair of
variables X, Y ∈ V such that {X, Y} ⊆ h for some h ∈ H.
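In Python, the primal graph is obtained by connecting all pairs of variables that share a constraint scope; the encoding of hyperedges as sets of variable names is our own choice for illustration:

```python
from itertools import combinations

def primal_graph(hyperedges):
    """Connect every pair of variables occurring together in some
    constraint scope; edges are returned as frozensets of endpoints."""
    edges = set()
    for h in hyperedges:
        for pair in combinations(sorted(h), 2):
            edges.add(frozenset(pair))
    return edges
```

The treewidth of a CSP instance, defined below, is measured on this graph.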
3.2 Treewidth of CSPs
The treewidth of a graph is a measure of the degree of cyclicity of a graph.
Definition 3 ([19]). A tree decomposition of a graph G = (V, F) is a pair ⟨T, λ⟩, where T = (N, E) is a
tree, and λ is a labeling function associating to each vertex p ∈ N a set of vertices λ(p) ⊆ V, such that
the following conditions are satisfied:
1. for each vertex b of G, there is a p ∈ N such that b ∈ λ(p);
2. for each edge {b, d} ∈ F, there is a p ∈ N such that {b, d} ⊆ λ(p);
3. for each vertex b of G, the set {p ∈ N | b ∈ λ(p)} induces a (connected) subtree of T.
The width of the tree decomposition ⟨T, λ⟩ is max_{p∈N} |λ(p)| - 1. The treewidth of G is the minimum width
over all its tree decompositions.
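The three conditions of Definition 3 can be checked directly. The following Python sketch uses an illustrative encoding (bags as a dict from tree node to a set of graph vertices) and assumes the given tree edges really form a tree:

```python
def is_tree_decomposition(vertices, graph_edges, tree_edges, bags):
    """Check the three conditions of a tree decomposition."""
    nodes = list(bags)
    # condition 1: every vertex of G appears in some bag
    if any(all(v not in bags[p] for p in nodes) for v in vertices):
        return False
    # condition 2: every edge of G is contained in some bag
    if any(all(not set(e) <= bags[p] for p in nodes) for e in graph_edges):
        return False
    # condition 3: the bags containing each vertex induce a connected subtree
    adj = {p: set() for p in nodes}
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for v in vertices:
        occ = {p for p in nodes if v in bags[p]}
        start = next(iter(occ))
        seen, stack = {start}, [start]
        while stack:
            for q in adj[stack.pop()]:
                if q in occ and q not in seen:
                    seen.add(q)
                    stack.append(q)
        if seen != occ:
            return False
    return True

def width(bags):
    # maximum bag size minus one
    return max(len(b) for b in bags.values()) - 1
```

Such a checker verifies a decomposition; actually finding one of minimum width is the hard part, addressed by Bodlaender's algorithm discussed next.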
Bodlaender [2] has shown that, for each fixed k, there is a linear time algorithm for checking whether a
graph G has treewidth bounded by k and, if so, computing a tree decomposition of G having width k at most.
Thus, the problem of computing a tree decomposition of a graph of width k is fp-tractable in the parameter
k.
The treewidth of a CSP instance I is the treewidth of its primal graph G(I). Accordingly, a tree decomposition
of I is a tree decomposition of G(I).
3.3 FP-Tractable CSPs
Constraint satisfaction is easily seen to be NP-complete. Moreover, the parameterized version, where the
parameter is the total size of all constraint scopes, is W[1]-complete, and thus not fp-tractable. This
follows from well-known results on conjunctive query evaluation [7, 18], which is equivalent to constraint
satisfaction (cf. [14]). Therefore, bounded treewidth CSP is also fp-intractable and W[1]-hard. Indeed, the
CSPs having total size of the constraint scopes ≤ k form a subclass of the CSPs having treewidth ≤ k. Note
that, for each fixed k, CSPs of width ≤ k can be evaluated in time O(n^k log n) [15].
In this section we show that, however, if as an additional parameter we fix the size of the domain U , then
bounded treewidth CSP is fixed parameter tractable.
It is worthwhile noting that the general CSP problem remains NP-complete even for constant domain U .
(See, e.g., the 3-SAT problem discussed below.)
Theorem 1. Constraint satisfaction with parameters treewidth k and universe size u = |U| is fp-tractable.
So is the problem of computing a solution of a CSP problem with parameters k and u.
Proof. (Sketch.) Let I = (Var, U, C) be a CSP instance having treewidth k and |U| = u. We exhibit an
fp-transformation from I to an equivalent CSP instance I'. We assume w.l.o.g. that no constraint scope S in
I contains multiple occurrences of variables. (In fact, such occurrences can be easily removed by a simple
preprocessing of the input instance.) Note that, from the bound k on the treewidth, it follows that each
constraint scope contains at most k variables, and thus the constraint relations have arity at most k.
Let ⟨T = (N, E), λ⟩ be a k-width tree decomposition of G(I) such that |N| ≤ c|G(I)|, for a fixed
predetermined constant c. (This is always possible because Bodlaender's algorithm runs in linear time.) For
each vertex p ∈ N, I' has a constraint C_p = (S_p, r_p), where the scope S_p is a list containing the
variables belonging to λ(p), and r_p is the associated relation, computed as described below.
The relations associated to the constraints of I' are computed through the following two steps:
1. For each constraint C_p = (S_p, r_p) of I', initialize r_p := U^{|var(S_p)|}, i.e., the
|var(S_p)|-fold cartesian product of the domain U with itself.
2. For each constraint C = (S, r) of I, choose any constraint C' = (S', r') of I' such that
var(S) ⊆ var(S'). Such a constraint must exist by definition of tree decomposition of the primal graph
G(I). Modify r' as follows: r' := {t ∈ r' | t[var(S)] ∈ r}. (In database terms, r' is semijoin-reduced
by r.)
It is not hard to see that the instance I' is equivalent to I, in that they have exactly the same set of
solutions. Note that the size of I' is ≤ |U|^k (c|G(I)|), and even computing I' from I is feasible in
linear time. Thus the reduction is actually an fp-reduction.
The resulting instance I' is an acyclic constraint satisfaction problem, which is equivalent to an acyclic
conjunctive query over a fixed database [14]. Checking whether such a query has a nonempty result and, in
the positive case, computing a single tuple of the result, is feasible in linear time by Yannakakis'
well-known algorithm [23]. □
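The semijoin reduction used in step 2, which is also the basic operation of Yannakakis' algorithm, can be sketched in Python (relations as sets of tuples, scopes as lists of variable names; an encoding we choose for illustration):

```python
def semijoin(r1, s1, r2, s2):
    """Keep the tuples of r1 (over scope s1) whose projection onto the
    variables shared with s2 matches the projection of some tuple of
    r2 (over scope s2)."""
    shared = [v for v in s1 if v in s2]
    proj = {tuple(t[s2.index(v)] for v in shared) for t in r2}
    return {t for t in r1 if tuple(t[s1.index(v)] for v in shared) in proj}
```

Sweeping such reductions up and then down the join tree of an acyclic instance filters every relation to exactly the tuples that participate in some solution.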
Note that, since CSP is equivalent to conjunctive query evaluation, the above result immediately gives us
a corollary on the program complexity of conjunctive queries, i.e. the complexity of evaluating conjunctive
queries over a fixed database [22]. The following result complements some recent results on fixed-parameter
tractability of database problems by Papadimitriou and Yannakakis [18].
Corollary 1. The evaluation of Boolean conjunctive queries is fp-tractable w.r.t. the treewidth of the query
and the size of the database universe. Moreover, evaluating a nonboolean conjunctive query is fp-tractable
in the input and output size w.r.t. the treewidth of the query and the size of the database universe.
4 FP-Tractable Satisfiability Problems
4.1 Bounded-width CNF Formulae
As an application of our general result on fp-tractable CSPs, we show that a relevant satisfiability problem is
also fp-tractable.
The graph G(F ) of a CNF formula F has as vertices the set of propositional variables occurring in F and
has an edge {x, y} iff the propositional variables x and y occur together in a clause of F. The treewidth of F
is defined to be the treewidth of the associated graph G(F ).
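The graph G(F) is just the primal graph of the clause set and is easy to compute. A small sketch with our own clause encoding (positive integers for variables, negative integers for negated occurrences):

```python
def primal_graph(clauses):
    """Vertices are variables; an edge joins x and y iff they
    co-occur (in either polarity) in some clause."""
    edges = set()
    for c in clauses:
        vs = sorted({abs(l) for l in c})
        for i in range(len(vs)):
            for j in range(i + 1, len(vs)):
                edges.add((vs[i], vs[j]))
    return edges

# (x1 or not x2) and (x2 or x3): edges x1-x2 and x2-x3, a path of treewidth 1
print(primal_graph([[1, -2], [2, 3]]))
```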
Theorem 2. CNF Satisfiability with parameter treewidth k is fp-tractable. So is the problem of computing a
model of a CNF formula with parameter k.
Proof. (Sketch.) We fp-transform a CNF formula F into a CSP instance I(F) = (Var, U, C) defined as
follows. Var contains a variable X_p for each propositional variable p occurring in F; U = {true, false}; for
each clause D of F, I(F) contains a constraint (S, r) where the constraint scope S is the list containing
all variables X_p such that p is a propositional variable occurring in D, and the constraint relation r ⊆ U^|D|
consists of all tuples corresponding to truth value assignments satisfying D.
It is obvious that every model of F corresponds to a solution of I(F) and vice versa. Thus, in particular, F
is satisfiable if and only if I(F ) is a positive CSP instance.
Since G(F) is isomorphic to G(I(F)), both F and I(F) have the same treewidth. Moreover, any CNF
formula F of treewidth k has clauses of cardinality at most k + 1. Therefore, our reduction is feasible in time
bounded by a function of k times the size of F, and is thus an fp-reduction w.r.t. parameter k.
By this fp-reduction, fp-tractability of CNF-SAT with the treewidth parameter follows from the fp-tractability
of CSPs w.r.t. treewidth, as stated in Theorem 1. ut
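The clause-by-clause transformation in this proof can be spelled out directly (our own encoding: a clause is a list of signed integers, and the constraint relation collects the satisfying rows over the clause's variables):

```python
from itertools import product

def clause_to_constraint(clause):
    """Map a clause D to (S, r): S lists the variables of D and
    r holds every truth assignment to S that satisfies D."""
    scope = sorted({abs(l) for l in clause})
    rel = set()
    for row in product((False, True), repeat=len(scope)):
        val = dict(zip(scope, row))
        if any(val[abs(l)] == (l > 0) for l in clause):
            rel.add(row)
    return scope, rel

scope, rel = clause_to_constraint([1, -2])   # x1 or not x2
print(scope)      # [1, 2]
print(len(rel))   # 3 of the 4 rows satisfy the clause
```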
4.2 CNF with Short Prime Implicants
The problem of finding the prime implicants of a CNF formula is relevant to a large number of different areas,
e.g., in diagnosis, knowledge compilation, and many other AI applications.
Clearly, the set of the prime implicants of a CNF formula F can be viewed as a compact representation
of the satisfying truth assignments for F . It is worthwhile noting that the restriction of Parameterized SAT
to CNF formulae is fp-intractable. More precisely, deciding whether a q-CNF formula F has a k-truth value
assignment is W[2]-complete [8]. (We recall that a k-truth value assignment assigns true to exactly k propositional
variables.)
Nevertheless, we identified a very natural parameterized version of satisfiability which is fp-tractable. We
simply take as the parameter the length of the prime implicants of the Boolean formula.
Given a q-CNF formula F, the Short Prime Implicants problem (SPI) is the problem of computing the
prime implicants of F having length ≤ k, with parameters k and q.
Theorem 3. SPI is fixed-parameter tractable.
Proof. (Sketch.) Let F be a q-CNF formula. W.l.o.g., assume that F does not contain tautological clauses. We
generate a set IM k of implicants of F from which it is possible to compute the set of all prime implicants
of F having length ≤ k (this is very similar to the well-known procedure for generating vertex covers of
bounded size, cf. [4, 8]). Pick an arbitrary clause C of F. Clearly, each implicant I of F must contain at least
one literal of C. We construct an edge-labeled tree t whose vertices are clauses in F as follows. The root of
t is C. Each nonleaf vertex D has an edge labeled ℓ to a descendant, for each literal ℓ ∈ D. As child, attach
to this edge any clause E of F which does not intersect the set of all edge-labels from the root to the current
position. A branch is closed if no such clause exists or the length of the path is k.
For each root-leaf branch β of the tree, let I(β) be the set containing the ≤ k literals labeling the edges of
β. Check whether I(β) is a consistent implicant of F and, if so, add I(β) to the set IM_k.
It is easy to see that the size of the tree t is bounded by q^k and that for every prime implicant S of F having
length ≤ k, S ⊆ I holds for some implicant I ∈ IM_k.
Moreover, note that there are at most q^k implicants in IM_k. For each implicant I ∈ IM_k, the set of
all consistent prime implicants of F included in I can be easily obtained in time O(2^k · |F|) from I. It follows
that SPI is fp-tractable w.r.t. parameters q and k. ut
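A brute-force rendering of this branching procedure (our own encoding; literals are signed integers, and, following the paper's assumption that F has no tautological clauses, a consistent literal set is an implicant iff it makes a literal of every clause true):

```python
from itertools import combinations

def is_implicant(clauses, lits):
    # consistent, and every clause contains a literal made true
    return (all(-l not in lits for l in lits)
            and all(any(l in lits for l in c) for c in clauses))

def candidates(clauses, k):
    """Branch on the literals of some uncovered clause, depth <= k."""
    out = set()
    def rec(lits):
        if any(-l in lits for l in lits):
            return                     # inconsistent: close the branch
        open_cs = [c for c in clauses if not any(l in lits for l in c)]
        if not open_cs:
            out.add(frozenset(lits))   # every clause is covered
            return
        if len(lits) == k:
            return                     # branch closed at depth k
        for l in open_cs[0]:
            rec(lits | {l})
    rec(frozenset())
    return out

def short_prime_implicants(clauses, k):
    primes = set()
    for I in candidates(clauses, k):
        for size in range(1, len(I) + 1):
            for sub in map(frozenset, combinations(I, size)):
                if is_implicant(clauses, sub) and not any(p < sub for p in primes):
                    primes.add(sub)
    return {p for p in primes if not any(q < p for q in primes)}

# (x1 or x2) and (not x1 or x2): the only short prime implicant is {x2}
print(short_prime_implicants([[1, 2], [-1, 2]], 2))
```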
5 Logic Programs with Negation
Logic programming with negation under the stable model semantics [13] is a well-studied form of nonmonotonic
reasoning.
A literal L is either an atom A (called positive) or a negated atom ¬A (called negative). Literals A and
¬A are complementary; for any literal L, we denote by ¬.L its complementary literal, and for any set Lit of
literals, ¬.Lit = { ¬.L | L ∈ Lit }.
A normal clause is a rule of the form

  A ← L_1, ..., L_m

where A is an atom and each L_i is a literal. A normal logic program is a finite set of normal clauses.
A normal logic program P is stratified [1] if there is an assignment str(·) of integers 0, 1, ... to the predicates
in P, such that for each clause r in P the following holds: if p is the predicate in the head of r and
q the predicate in an L_i from the body, then str(p) ≥ str(q) if L_i is positive, and str(p) > str(q) if L_i is
negative.
The reduct of a normal logic program P by a Herbrand interpretation I [13], denoted P I , is obtained from
P as follows: first remove every clause r with a negative literal L in the body such that ::L 2 I , and then
remove all negative literals from the remaining rules.
An interpretation I of a normal logic program P is a stable model of P [13], if I is the least Herbrand
model of P I .
In general, a normal logic program P may have zero, one, or multiple (even exponentially many) stable
models. Denote by stabmods(P ) the set of stable models of P .
It is well-known that every stratified logic program has a unique stable model which can be computed in
linear time.
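These definitions translate directly into code (a sketch with our own encoding: atoms are positive integers, a rule is a pair (head, body), and negative integers in a body stand for negated atoms):

```python
def reduct(program, I):
    """Gelfond-Lifschitz reduct P^I: drop rules whose body contains
    'not a' with a in I, then delete the remaining negative literals."""
    return [(h, [l for l in b if l > 0])
            for h, b in program
            if not any(l < 0 and -l in I for l in b)]

def least_model(positive_program):
    """Least Herbrand model of a negation-free program (naive fixpoint)."""
    M, changed = set(), True
    while changed:
        changed = False
        for h, b in positive_program:
            if h not in M and all(x in M for x in b):
                M.add(h)
                changed = True
    return M

def is_stable(program, I):
    """I is a stable model iff it is the least model of its own reduct."""
    return least_model(reduct(program, I)) == set(I)

# p :- not q.   q :- not p.   (atoms: p = 1, q = 2)
P = [(1, [-2]), (2, [-1])]
print(is_stable(P, {1}))      # True: {p} is stable
print(is_stable(P, {1, 2}))   # False: nothing supports both atoms
```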
The following problems are the main decision and search problems in the context of logic programming.
Main logic programming problems. Let P be a logic program.
1. Consistency: Determine whether P admits a stable model.
2. Brave Reasoning: Check whether a given literal is true in a stable model of P .
3. Cautious Reasoning: Check whether a literal is true in every stable model of P .
4. SM Computation: Compute an arbitrary stable model of P .
5. SM Enumeration: Compute the set of all stable models of P .
For a normal logic program P, the dependency graph G(P) is a labeled directed graph (V, A), where V is
the set of atoms occurring in P and A is a set of edges such that (p, q) ∈ A iff there exists a rule r ∈ P having
p in its head and q in its body. Moreover, if q appears negatively in the body, then the edge (p, q) is labeled
with the symbol ¬. The undirected dependency graph Ḡ(P) of P is the undirected version of G(P).
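Under a simple encoding (atoms as positive integers, rules as (head, body) pairs with negative integers for negated body atoms; our own conventions), the labeled dependency graph is immediate:

```python
def dependency_graph(program):
    """Labeled edges (p, q, negated): one edge per body literal of a rule
    with head p, marked when the occurrence of q in the body is negated."""
    return {(h, abs(l), l < 0) for h, b in program for l in b}

# p :- not q.   q :- not p.   r :- p.
print(dependency_graph([(1, [-2]), (2, [-1]), (3, [1])]))
```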
A feedback vertex set of an undirected (directed) graph G is a subset S of the vertices of G such that
any cycle (directed cycle) contains at least one vertex in S. Clearly, if a feedback vertex set is removed from
G, then the resulting graph is acyclic. The feedback width of G is the minimum size over its feedback vertex
sets.
It was shown by Downey and Fellows [8, 4] that determining whether an undirected graph has feedback
width k and, in the positive case, finding a feedback vertex set of size k, is fp-tractable w.r.t. the parameter k.
Let P be a logic program defined over a set U of propositional atoms. A partial truth value assignment
(p.t.a.) for P is a truth value assignment to a subset U' of U. If τ is a p.t.a. for P, denote by P[τ] the program
obtained from P as follows:
- eliminate all rules whose body contains a literal contradicting τ;
- eliminate from every rule body all literals that are made true by τ.
The following lemma is easy to verify.
Lemma 1. Let M be a stable model of some logic program P, and let τ be a p.t.a. consistent with M. Then
M is a stable model of P[τ].
Theorem 4. The logic programming problems (1-5) listed above are all fp-tractable w.r.t. the feedback width
of the dependency graph of the logic program.
Proof. (Sketch.) Given a logic program P whose graph Ḡ(P) has feedback width k, compute in linear time
(see [8]) a feedback vertex set S for Ḡ(P) s.t. |S| ≤ k.
Consider the set T of all the 2^k partial truth value assignments to the atoms in S.
For each p.t.a. τ ∈ T, P[τ] is a stratified program whose unique stable model M_τ can be computed in
linear time. For each τ ∈ T, check whether M_τ is a stable model of P, and let Σ be the set of all M_τ that
pass this check (this latter check can be done in linear time, too, if suitable data structures
are used).
By definition of Σ, Σ ⊆ stabmods(P). For the converse, it suffices to note that every stable model M of
P belongs to Σ. Indeed, let τ be the p.t.a. on S determined by M. By Lemma 1, it follows that M is a stable
model of P[τ] and hence M ∈ Σ.
Thus, P has at most 2^k stable models, whose computation is fp-tractable and actually feasible in linear time.
Therefore, the problem 5 above (Stable Model Enumeration) is fp-tractable. The fp-tractability of all other
problems follows. ut
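The enumeration scheme of this proof can be sketched end to end (our own encoding and names: atoms are positive integers, rules are (head, body) pairs with negative integers for negated atoms, and S is a feedback vertex set of the undirected dependency graph). For each of the 2^|S| assignments τ, the reduced program P[τ] is stratified, so iterating the Gelfond-Lifschitz operator converges to its unique stable model, which is then checked against the full program:

```python
from itertools import product

def least_model(rules):
    M, changed = set(), True
    while changed:
        changed = False
        for h, b in rules:
            if h not in M and all(x in M for x in b):
                M.add(h); changed = True
    return M

def gl(program, I):
    """Gelfond-Lifschitz operator: least model of the reduct by I."""
    return least_model([(h, [l for l in b if l > 0])
                        for h, b in program
                        if not any(l < 0 and -l in I for l in b)])

def reduce_by_pta(program, tau):
    """P[tau]: drop rules contradicting tau, strip literals tau makes true."""
    out = []
    for h, b in program:
        nb, ok = [], True
        for l in b:
            a, pos = abs(l), l > 0
            if a in tau:
                if tau[a] == pos:
                    continue          # literal true under tau: strip it
                ok = False
                break                 # literal contradicts tau: drop rule
            nb.append(l)
        if ok:
            out.append((h, nb))
    return out

def stable_models_via_fvs(program, S):
    models = []
    for bits in product((False, True), repeat=len(S)):
        ptau = reduce_by_pta(program, dict(zip(S, bits)))
        cand, prev = set(), None
        while cand != prev:           # converges because P[tau] is stratified
            prev, cand = cand, gl(ptau, cand)
        if gl(program, cand) == cand and cand not in models:
            models.append(cand)       # keep the genuine stable models of P
    return models

# p :- not q.   q :- not p.   r :- p.     feedback set S = {p}
print(stable_models_via_fvs([(1, [-2]), (2, [-1]), (3, [1])], [1]))
```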
It appears that an overwhelmingly large number of "natural" logic programs have very low feedback width,
so the technique presented here seems to be very useful in practice. Note, however, that the technique does
not apply to some important and rather obvious cases. In fact, the method does not take care of the direction
and the labeling of the arcs in the dependency graph G(P). Hence, positive programs with large feedback
width are not recognized to be tractable, although they are trivially tractable. The same applies, for instance,
to stratified programs having large feedback width, or to programs whose high feedback width is exclusively
due to positive cycles.
Unfortunately, it is not known whether computing feedback vertex sets of size k is fixed-parameter tractable
for directed graphs [8].
Another observation leading to a possible improvement is the following. Call an atom p of a logic program
malignant if it lies on at least one simple cycle of G(P) containing a marked (=negated) edge. Call an
atom benign if it is not malignant. It is easy to see that only malignant atoms can be responsible for a large
number of stable models. In particular, every stratified program contains only benign atoms and has exactly
one stable model. This suggests the following improved procedure:
- Identify the set of benign atoms occurring in P;
- Drop these benign vertices from Ḡ(P), yielding H(P);
- Compute a feedback vertex set S of size ≤ k of H(P);
- For each p.t.a. τ over S compute the unique stable model M_τ of P[τ] and check whether this is actually
a stable model of P, and if so, output M_τ.
It is easy to see that the above procedure correctly computes the stable models of P . Unfortunately, as shown
by the next theorem, it is unlikely that this procedure can run in polynomial time.
Theorem 5. Determining whether an atom of a propositional logic program is benign is NP-complete.
Proof. (Sketch.) This follows by a rather simple reduction from the NP-complete problem of deciding
whether, for two pairs of vertices (x_1, x_2) and (y_1, y_2) of a directed graph G, there are two vertex-disjoint
paths linking x_1 to x_2 and y_1 to y_2 [11]. A detailed explanation will be given in the full paper. ut
We thus propose a related improvement, which is somewhat weaker, but tractable.
An atom p of a logic program P is called weakly malignant if it lies on at least one simple cycle of Ḡ(P)
containing a marked (=negated) edge. An atom is called strongly benign if it is not weakly malignant.
Lemma 2. Determining whether an atom of a propositional logic program is strongly benign or weakly
malignant can be done in polynomial time.
Proof. (Sketch.) It is sufficient to show that determining whether a vertex p of an undirected graph G with
Boolean edge labels lies on a simple cycle containing a marked edge is feasible in polynomial time. This can
be solved by checking, for each marked edge {y_1, y_2} of G and for each pair x_1, x_2 of neighbours of p,
whether the graph G − {p} contains two vertex-disjoint paths linking x_1 to y_1 and x_2 to y_2, respectively.
The latter is feasible in polynomial time by a result of Robertson and Seymour [20]. ut
We next present an improved algorithm for enumerating the stable models of a logic program P based on
the feedback width of a suitable undirected graph associated to P .
Modular Stable Model Enumeration procedure (MSME).
1. Compute the set C of the strongly connected components (s.c.c.) of G(P);
2. For each s.c.c. C ∈ C, let PC be the set of rules of P that "define" atoms belonging to C, i.e., PC contains
any rule r ∈ P whose head belongs to C;
3. Determine the set UC ⊆ C of the strongly connected components of G(P) whose corresponding
program PC is not stratified;
4. For each s.c.c. C ∈ UC, compute the set of strongly benign atoms SB(C) occurring in PC;
5. Let P' be the union of the programs PC over all C ∈ UC;
6. Let H(P') be the subgraph of Ḡ(P') obtained by dropping every vertex p occurring in some set of
strongly benign atoms SB(C) for some C ∈ UC;
7. Compute a feedback vertex set S of size ≤ k of H(P');
8. For each p.t.a. τ over S compute the unique stable model M_τ of P[τ] and check whether this is actually
a stable model of P, and if so, output M_τ.
The feedback width of the graph H(P') is called the weak feedback-width of the dependency graph of P.
The following theorem follows from the fp-tractability of computing feedback vertex sets of size k for
undirected graphs and from well-known modular computation methods for stable model semantics [10].
Theorem 6. The logic programming problems (1-5) listed above are all fp-tractable w.r.t. the weak feedback-
width of the dependency graph of the logic program.
Note that the methods used in this section can be adapted to show fixed-parameter tractability results for
extended versions of logic programming, such as disjunctive logic programming, and for other types of non-monotonic
reasoning. In the case of disjunctive logic programming, it is sufficient to extend the dependency
graph to contain a labeled directed edge between every pair of atoms occurring together in a rule head.
A different perspective to the computation of stable models has been recently considered in [21], where
the size of stable models is taken as the fixed parameter. It turns out that computing large stable models is
fixed-parameter tractable, whereas computing small stable models is fixed-parameter intractable.
6 The Small Model Circumscription Problem
In this section we study the fixed-parameter complexity of a tractable parametric variant of circumscription,
where the attention is restricted to models of small cardinality.
6.1 Definition of Small Model Circumscription
The Small Model Circumscription Problem (SMC) is defined as follows. Given a propositional theory T over
a set of atoms A = P ∪ Z, and given a propositional formula φ over vocabulary A, decide whether φ is
satisfied in a model M of T such that:
- M is of small size, i.e., at most k propositional atoms are true in M (written |M| ≤ k); and
- M is P;Z-minimal w.r.t. all other small models 1 , i.e., for each model M' of T such that |M'| ≤ k,
M' ∩ P is not a proper subset of M ∩ P.
1 In this paper, whenever we speak about P;Z-minimality, we mean minimality as defined here.
This problem appears to be a miniaturization of the classical problem of (brave) reasoning with minimal
models. We believe that SMC is useful, since in many contexts, one has large theories, but is mainly interested
in small models (e.g. in abductive diagnosis).
Clearly, for each fixed k, SMC is tractable. In fact, it suffices to enumerate the |A|^k candidate interpretations in
an outer loop and, for each such interpretation M, check whether M is a small P;Z-minimal model of T satisfying φ.
The latter can be done by an inner loop enumerating all small interpretations and performing some easy
checking tasks.
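This double loop is easy to spell out for small instances (a brute-force sketch with our own interface: the theory and φ are Python predicates over the set of true atoms, P lists the minimized atoms, and atoms outside P float):

```python
from itertools import combinations

def smc_brave(theory, atoms, P, phi, k):
    """Is phi true in some P;Z-minimal model of size <= k?
    M is minimal iff no small model has a strictly smaller P-part."""
    small = [frozenset(s) for r in range(k + 1)
             for s in combinations(atoms, r) if theory(frozenset(s))]
    P = frozenset(P)
    for M in small:
        if any((M2 & P) < (M & P) for M2 in small):
            continue                  # beaten by a strictly smaller P-part
        if phi(M):
            return True
    return False

# Theory: a or b.  Minimize P = {a, b}.  k = 1.
t = lambda M: 'a' in M or 'b' in M
print(smc_brave(t, ['a', 'b'], ['a', 'b'], lambda M: 'a' in M, 1))  # True
print(smc_brave(t, ['a', 'b'], ['a', 'b'], lambda M: 'a' in M and 'b' in M, 1))
```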
It is also not hard to see that SMC is fp-intractable. In fact, the Hitting Set problem, which was shown to be
W[2]-complete [8], can be fp-reduced to SMC, and can actually be regarded as the restricted version of SMC
where the theory consists of a CNF having only positive literals. In Section 6.2 we present an
fp-tractable subclass of this version of SMC, where the maximum clause length in the theory is taken as an
additional parameter. However, in Section 6.3 we show that, as soon as the set Z of floating variables is not
empty, this problem becomes fp-intractable.
Since brave reasoning under minimal models was shown to be Σ₂^P-complete in [9], and is thus one level
above the complexity of classical reasoning, it would be interesting to determine the precise fixed-parameter
complexity of the general version of SMC w.r.t. parameter k. This problem too is tackled in Section 6.3.
6.2 A Tractable Restriction of SMC
We restrict SMC by requiring that the theory T be a q-CNF with no negative literal occurring in it, and
by minimizing over all atoms occurring in the theory. The problem Restricted Small Model Circumscription
(RSMC) is thus defined as SMC except that T is required to be a purely positive q-CNF formula, the "floating"
set Z is empty, and the parameters are the maximum size k of the models to be considered and the maximum
number q of literals in the largest conjunct (=clause) of T.
Theorem 7. RSMC is fixed-parameter tractable.
Proof. (Sketch.) Since T is positive and Z = ∅, the minimal models of T to be considered are exactly
the prime implicants of T having size ≤ k. By Theorem 3, computing these prime implicants for a q-CNF
theory is fp-tractable w.r.t. parameters k and q. Thus, the theorem easily follows. ut
6.3 The Fixed-Parameter Complexity of SMC
We first show that the slight modification of the fp-tractable problem RSMC where Z ≠ ∅ is fp-intractable,
and in fact W[SAT]-hard.
The problem Positive Small Model Circumscription (PSMC) is defined as SMC except that T is required
to be a purely positive q-CNF formula, and the parameters are the maximum size k of the models to be
considered, and the maximum clause length q.
Let us define the Boolean formula count_k(x), where x is a list of variables x_1, ..., x_n. Besides the
variables in x, count_k(x) uses auxiliary variables q^i_j (1 ≤ i ≤ n, 1 ≤ j ≤ k).
Intuitively, in any satisfying truth value assignment for count_k(x), the propositional variable q^i_j gets the
value true iff x_i is the j-th true variable among x_1, ..., x_n. Note that the size of count_k(x) is O(k·n²).
The variables x in the formula are called the external variables of the formula, while all the
other variables occurring in the formula are called private variables.
Whenever a theory T contains a count subformula, we assume w.l.o.g. that the private variables of this
subformula do not occur in T outside the subformula. In particular, if T contains two count subformulas,
then their sets of private variables are disjoint.
Lemma 3. Let F be a formula and x a list of variables occurring in F. Then:
- F ∧ count_k(x) is satisfiable if and only if there exists a truth value assignment σ for F assigning true to
exactly k variables from x.
- Every such assignment σ satisfying F can be extended in a unique way to an assignment σ'
satisfying F ∧ count_k(x).
- Every satisfying truth value assignment for F ∧ count_k(x) assigns true to exactly k private variables of
count_k(x) and true to exactly k variables from x.
Theorem 8. PSMC is W[SAT]-hard. The problem remains hard even for 2-CNF theories.
Proof. (Sketch.) Let Φ be a Boolean formula over propositional variables {x_1, ..., x_n}. We fp-reduce the
W[SAT]-complete problem of deciding whether there exists a k-truth value assignment satisfying Φ to an
instance of PSMC where the maximum model size is 2k and the maximum clause length is 2.
Let Φ' = Φ ∧ count_k(x), and let q_1, ..., q_m be the private variables of the count_k subformula.
Moreover, let T be the following 2-CNF positive theory:
We take g.
Note that a set M is a P;Z-minimal model of T having size ≤ 2k iff
S is any subset of Z such that |M| ≤ 2k.
From Lemma 3, every satisfying truth value assignment for Φ' must make true exactly k variables from
x and exactly k variables from the set of private variables of count_k. It follows that there exists a P;Z-
minimal model M of T such that |M| ≤ 2k if and only if there exists a k-truth value
assignment satisfying Φ. ut
Let us now focus on the general SMC problem, where arbitrary theories are considered and floating
variables are permitted. It does not appear that SMC is contained in W[SAT]. On the other hand, it can
be seen that SMC is contained in AW[SAT], but it does not seem to be hard (and thus complete) for this
class. In fact, AW[SAT] is the miniaturization of PSPACE and not of Σ₂^P. No class corresponding to the
levels of the polynomial hierarchy has been defined so far in the theory of fixed-parameter intractability.
Nonmonotonic reasoning problems, such as SMC, seem to require the definition of such classes. We next
define the exact correspondent of Σ₂^P at the fixed-parameter level.
Definition of the class Σ₂W[SAT].
The class Σ₂W[SAT] is defined similarly to AW[SAT], but the quantifier prefix is restricted to Σ₂.
Parameterized QBF₂SAT.
Instance: A quantified Boolean formula ∃^{k₁}x ∀^{k₂}y E.
Question: Is ∃^{k₁}x ∀^{k₂}y E valid? (Here, ∃^{k₁}x denotes the choice of some k₁-truth value assignment for
the variables x, and ∀^{k₂}y denotes all choices of k₂-truth value assignments for the variables y.)
Definition 4. Σ₂W[SAT] is the set of all problems that fp-reduce to Parameterized QBF₂SAT.
Membership of SMC in Σ₂W[SAT].
Let the problem Parameterized QBF₂SAT^≤ be the variant of Parameterized QBF₂SAT where the quantifiers
∃^{k₁}x and ∀^{k₂}y are replaced by quantifiers ∃^{≤k₁}x and ∀^{≤k₂}y with the following meaning: ∃^{≤k₁}x α
means that there exists a truth value assignment making at most k₁ propositional variables from x true such
that α is valid. Symmetrically, ∀^{≤k₂}y α means that α is valid for every truth value assignment making at most
k₂ propositional variables from y true.
Lemma 4. Parameterized QBF₂SAT^≤ is in Σ₂W[SAT].
Proof. (Sketch.) It suffices to show that Parameterized QBF₂SAT^≤ is fp-reducible to Parameterized QBF₂SAT.
Let Φ = ∃^{≤k₁}x ∀^{≤k₂}y E be an instance of Parameterized
QBF₂SAT^≤. It is easy to see that the following instance Φ' of Parameterized QBF₂SAT is equivalent to Φ:
∃^{k₁} x x' ∀^{k₂} y y' E(x_1 ∨ x'_1, ..., x_m ∨ x'_m),
where the x'_i and y'_i are new variables and E(x_1 ∨ x'_1, ..., x_m ∨ x'_m)
is obtained from E by substituting x_i ∨ x'_i for x_i
(1 ≤ i ≤ m). ut
Theorem 9. SMC is in Σ₂W[SAT].
Proof. (Sketch.) By Lemma 4 it is sufficient to show that every SMC instance S can be fp-reduced to an
equivalent instance Φ(S) of Parameterized QBF₂SAT^≤. Let S = (T, P, Z, φ, k) be an SMC
instance, and let P' and Z'
be two sets of fresh variables, one for each variable of P and Z, respectively. Φ(S) is defined as follows:
∃^{≤k} ...
where T(P', Z') is obtained from T(P, Z) by substituting p'_i for p_i and z'_i for z_i
(1 ≤ i ≤ m).
The first part of Φ(S) guesses a model M of T with at most k atoms among P ∪ Z which satisfies φ. The
second part makes sure that M is P;Z-minimal by checking that each model M' of T is either equivalent
to M over the P variables, or has at least one P variable true whereas the same variable is false in M. Hence
S bravely entails φ under small-model P;Z-circumscription if and only if Φ(S) is valid. ut
Σ₂W[SAT]-hardness of SMC.
Theorem 10. SMC is Σ₂W[SAT]-hard, and thus Σ₂W[SAT]-complete.
Proof. (Sketch.) We show that Parameterized QBF₂SAT is fp-reducible to SMC. Let Φ be the following
instance of Parameterized QBF₂SAT:
∃^{k₁}x ∀^{k₂}y E.
We define a corresponding instance S(Φ) of SMC as follows:
w is a fresh variable, P consists of w and the variables in x, and Z consists of
all the other variables occurring in T, namely, the variables in y and the private variables of the two count
subformulae.
We prove that \Phi is valid if and only if S(\Phi) is a yes instance of SMC.
(Only if part.) Assume Φ is valid. Then, there exists a k₁-truth value assignment σ to the variables x such
that for every k₂-truth value assignment to the variables y, the formula E is satisfied.
Let M be an interpretation for T constructed as follows. M contains the k₁ variables from x which are
made true by σ and the first k₂ variables of y; in addition, M contains w and the private variables which
make true the two count subformulae. This is possible by Lemma 3.
It is easy to see that M is a model for T. We now show that M is a P;Z-minimal model of T. Assume that
M' is a P;Z-smaller model. Due to the count_{k₁}(x) subformula, M' must contain exactly k₁ atoms from x,
and therefore M and M' coincide w.r.t. the x atoms. It follows that w ∉ M'. However, by validity of Φ and
the construction of M, M' must make E true and hence must contain w, a contradiction.
(If part.) Assume there exists a P;Z-minimal model M of T such that M entails w and |M| ≤ k. Note
that, by Lemma 3, it must hold that M contains exactly k₁ true variables from x and exactly k₂ true variables
from y.
Towards a contradiction, assume that Φ is not valid. Then it must hold that for every k₁-truth value assignment
σ to the variables x, there exists a k₂-truth value assignment σ' to the variables y, such that σ ∪ σ'
falsifies E. In particular, for the k₁ variables from x which are true according to M, it is possible to make
true exactly k₂ variables from y such that the formula E is not satisfied. Consider now the interpretation M'
containing these true variables plus the private variables made true by the two count subformulae. M' is a
model of T whose P variables coincide with those of M except for w, which belongs to M but not to M'.
Therefore, M is not P;Z-minimal, a contradiction.
Finally, note that the transformation from Φ to S(Φ) is an fp-reduction. Indeed, it is feasible in polynomial
time and the new parameter is just linear in the parameters of Φ. ut
Corollary 2. Parameterized QBF₂SAT^≤ is Σ₂W[SAT]-complete.
Proof. (Sketch.) Completeness follows from the fact that, as shown in Lemma 4, this problem belongs to
Σ₂W[SAT], and by Theorem 10, which shows that the Σ₂W[SAT]-hard problem SMC is fp-reducible to
Parameterized QBF₂SAT^≤. ut
Downey and Fellows [8] pointed out that completeness proofs for fixed-parameter intractability classes
are generally more involved than classical intractability proofs. Note that this is also the case for the above
proof, where we had to deal with subtle counting issues. A straightforward downscaling of the standard Σ₂^P-
completeness proof for propositional circumscription appears not to be possible.
In particular, observe that we have obtained our completeness result for a very general version of propositional
minimal model reasoning, where there are variables to be minimized (P ) and floating variables (Z).
It is well-known that minimal model reasoning remains Σ₂^P-complete even if all variables of a formula are
minimized (i.e., if Z is empty). This result does not seem to carry over to the setting of fixed-parameter
intractability. Clearly, this problem, being a restricted version of SMC, is in Σ₂W[SAT]. Moreover, it is easy
to see that the problem is hard for W[2] and thus fixed-parameter intractable. However, we were not able to
show that the problem is complete for any class in the range from W[2] to Σ₂W[SAT], and leave this issue
as an open problem.
Open Problem. Determine the fixed-parameter complexity of SMC when all variables of the theory T are to
be minimized.
--R
Towards a Theory of Declarative Knowledge.
A linear-time algorithm for finding tree-decompositions of small treewidth
Optimal Implementation of Conjunctive Queries in relational Databases.
Fixed Parameter Tractability and Completeness.
Fixed Parameter Intractability (Extended Abstract).
Fixed Parameter Tractability and Completeness I: Basic Results.
On the Parametric Complexity of Relational Database Queries and a Sharper Characterization of W
Parameterized Complexity.
Propositional Circumscription and Extended Closed World Reasoning are
Disjunctive Datalog.
The Directed Subgraph Homeomorphism Problem.
Computers and Intractability.
The Stable Model Semantics for Logic Programming.
The Complexity of Acyclic Conjunctive Queries.
A Comparison of Structural CSP Decomposition Methods.
Closure Properties of Constraints.
On the Complexity of Database Queries.
Graph Minors II.
Graph Minors XX.
Computing Large and Small Stable Models.
Complexity of Relational Query Languages.
Algorithms for Acyclic Database Schemes.
| stable models;parameterized complexity;circumscription;prime implicants;logic programming;constraint satisfaction;conjunctive queries;fixed-parameter tractability;nonmonotonic reasoning |
570394 | Preference logic grammars. | The addition of preferences to normal logic programs is a convenient way to represent many aspects of default reasoning. If the derivation of an atom A1 is preferred to that of an atom A2, a preference rule can be defined so that A2 is derived only if A1 is not. Although such situations can be modelled directly using default negation, it is often easier to define preference rules than it is to add negation to the bodies of rules. As first noted by Govindarajan et al. [Proc. Internat. Conf. on Logic Programming, 1995, pp. 731-746], for certain grammars, it may be easier to disambiguate parses using preferences than by enforcing disambiguation in the grammar rules themselves. In this paper we define a general fixed-point semantics for preference logic programs based on an embedding into the well-founded semantics, and discuss its features and relation to previous preference logic semantics. We then study how preference logic grammars are used in data standardization, the commercially important process of extracting useful information from poorly structured textual data. This process includes correcting misspellings and truncations that occur in data, extraction of relevant information via parsing, and correcting inconsistencies in the extracted information. The declarativity of Prolog offers natural advantages for data standardization, and a commercial standardizer has been implemented using Prolog. However, we show that the use of preference logic grammars allow construction of a much more powerful and declarative commercial standardizer, and discuss in detail how the use of the non-monotonic construct of preferences leads to improved commercial software. | Introduction
Context-free grammars (CFGs) have traditionally been used for specifying the syntax of a language,
while logic is generally the formalism of choice for specifying semantics (meaning). Logic grammars
combine these two notations, and can be used for specifying syntactic and semantic constraints in
a variety of parsing applications, from programming languages to natural languages to documents
layouts. Several forms of logic grammars have been researched over the last two decades [AD89].
Basically, they extend context-free grammars in that they generally allow: (i) the use of arguments
with nonterminals to represent semantics, where the arguments may be terms of some logic; (ii)
the use of unification to propagate semantic information; (iii) the use of logical tests to control the
application of production rules.
An important problem in these applications is that of resolving ambiguity, both syntactic and
semantic. An example of syntactic ambiguity is the 'dangling else' problem in programming language
syntax. To illustrate, consider the following CFG rules for if-then and if-then-else statements in
procedural programming languages:
⟨ifstmt⟩ ::= if ⟨cond⟩ then ⟨stmtseq⟩ | if ⟨cond⟩ then ⟨stmtseq⟩ else ⟨stmtseq⟩
Given a sentence of the form if cond1 then if cond2 then assign1 else assign2, there are two possible
parses depending upon which then the else is paired with (parentheses are used below to indicate the grouping):
if cond1 then (if cond2 then assign1 else assign2)
if cond1 then (if cond2 then assign1) else assign2
While one can re-design the language to avoid the ambiguity, this example typifies a form of ambiguity
that arises in other settings as well. For the above language of if-then-else statements, in general it
is not easy to alter the grammar to avoid ambiguity; doing so involves the introduction of several
additional nonterminals, which not only destroys the clarity of the original ambiguous grammar,
but also results in a less efficient parsing scheme. In such ambiguous grammars, we usually have a
preferred parse in mind. In the above example, we would like to pair up each else with the closest
previous unpaired then. We present in this paper a general means of stating such preferences.
We develop our methodology starting from a particular form of logic grammar called definite
clause grammar (DCG). (We also consider a generalization of DCGs called definite clause translation
grammars (DCTGs) [Abr84].) The primary contribution of this paper is in providing a modular and
declarative means of specifying the criteria to be used in resolving ambiguity in a logic grammar.
These criteria are understood using the concept of preference, hence we call the resulting grammars
preference logic grammars (PLG). In essence, a PLG is a logic grammar where some of the nonterminals
have one or more arbiter (preference) rules attached to them. Whenever there are multiple
parses for some sentence derivable from a nonterminal, these arbiter clauses will dictate the most preferred
parse for the sentence. Just as definite clause grammars can be translated into logic programs,
preference logic grammars can be translated into preference logic programs which we introduced
in our earlier work [GJM95, GJM96, Gov97]. The paradigm of preference logic programming was
introduced in order to obtain a logical means of specifying optimization problems. The operational
semantics of preference logic programs constructs the optimal parse by keeping track of alternative
parses and pruning parses that are suboptimal.
We first provide some simple examples to illustrate the use of PLGs for handling ambiguities
in programming-language grammars, and then discuss two applications of preference logic grammars:
optimal parsing and natural language parsing. Each of these topics is a substantial research area in
its own right, and we only give a brief introduction to the issues in this paper. Optimal parsing is an
extension of the standard parsing problem in that costs are associated with the different (ambiguous)
parses of a string, and the preferred parse of the string is the one with least cost. Many applications
such as optimal layout of documents [BMW92], code generation [AGT89], etc., can be viewed as
optimal parsing problems. We show in this paper how the criteria for laying out paragraphs-which
a formatter such as TeX would use [KP81]-can be specified declaratively using a logic
grammar. In the area of ambiguity resolution, we illustrate the use of PLGs in resolving ambiguity in
natural language sentences using the problem of prepositional attachment. We show how preference
clauses provide a simple and elegant way of expressing preferences such as minimal attachment and
right association, and discuss some of the problems associated with ambiguity resolution.
The remainder of this paper is organized as follows: section 2 presents preference logic grammars
and illustrates their use for ambiguity resolution of programming language grammars. Section 3
introduces preference logic programs and shows how preference logic grammars can be translated into
preference logic programs. Section 4 illustrates the use of PLGs in optimal parsing and ambiguity
resolution in natural language sentences. Finally, section 5 contains the conclusions and directions
for further research. We assume that the reader is familiar with basic concepts in logic programming
and definite-clause grammars. A good introductory treatment of logic programming including its
theory can be found in [Llo87].
Preference Logic Grammars
We describe the syntax of preference logic grammars, and illustrate their use with examples from
programming language grammars. Since preference logic grammars are based upon definite clause
grammars (DCGs), we will briefly describe DCGs first.
Definite Clause Grammars. A definite-clause grammar (DCG) is basically an extension of a
context-free grammar wherein each nonterminal is optionally accompanied by one or more arguments.
For the purpose of this paper, we consider two forms of DCG rules:
literal --> [ ].
literal --> literal_1, ..., literal_k.
The first form of DCG rule corresponds to a null production. Each literal is either a terminal symbol
of the form [t] or a nonterminal with arguments of the form n(terms),
where terms is a sequence of zero or more terms, each of which (as usual) is built up of constants, variables,
and constructors. In general, on the right-hand side of a DCG rule it is legal to have any sequence
of Prolog goals enclosed within { and }. Such a goal sequence may occur in any position that a literal
may occur.
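As a concrete illustration of embedded goals (our own example, not from the paper, in standard SWI-Prolog DCG syntax), a braced goal can test a token while parsing; code_type/2 is the SWI-Prolog character-classification predicate:

```prolog
% Hypothetical example: digits//1 parses a nonempty list of digit
% character codes; the braced goal filters which tokens [D] may match.
digits([D|Ds]) --> digit(D), digits(Ds).
digits([D])    --> digit(D).
digit(D)       --> [D], { code_type(D, digit) }.

% ?- phrase(digits(Ds), `42`).   succeeds, binding Ds to the two codes.
```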
To illustrate a definite-clause grammar, we show below how one can encode the BNF of
section 1. Assuming definitions for the nonterminals expr, cond, and var, the ambiguous DCG is as
follows:
stmtseq(S) --> stmt(S).
stmtseq(ss(S,Ss)) --> stmt(S), [;], stmtseq(Ss).
stmt(S) --> assign(S).
stmt(S) --> ifstmt(S).
assign(assign(V,E)) --> var(V), [:=], expr(E).
ifstmt(if(C,T)) --> [if], cond(C), [then], stmtseq(T).
ifstmt(if(C,T,E)) --> [if], cond(C), [then], stmtseq(T), [else], stmtseq(E).
The above DCG also illustrates parse tree construction: the argument of each nonterminal
serves to carry the parse tree for the sentence spanned by the nonterminal. For example, the term
if(C,T) can be thought of as representing the parse tree for an if-then statement, while the term
if(C,T,E) can be thought of as representing the parse tree for an if-then-else statement. Here, C,
T, and E stand for the parse trees for the condition-part, then-part, and else-part respectively.
Definite clause grammars can be translated into definite clause logic programs in a straightforward
way [PW80]: one definite clause is created per DCG rule. Each nonterminal with n arguments
in the DCG rule is translated into a predicate with n+2 arguments. The two extra arguments carry
respectively the input list to be parsed and the remaining list after some initial prefix of the input
list has been parsed. These two extra arguments are made the last two arguments of the predicate.
We illustrate the translated program for the above DCG:
stmtseq(S,I,O) :- stmt(S,I,O).
stmtseq(ss(S,Ss),I,O) :- stmt(S,I,[;|O1]), stmtseq(Ss,O1,O).
stmt(S,I,O) :- assign(S,I,O).
stmt(S,I,O) :- ifstmt(S,I,O).
assign(assign(V,E),I,O) :- var(V,I,[:=|O1]), expr(E,O1,O).
ifstmt(if(C,T),[if|I],O) :- cond(C,I,[then|O1]), stmtseq(T,O1,O).
ifstmt(if(C,T,E),[if|I],O) :- cond(C,I,[then|O1]), stmtseq(T,O1,[else|O2]),
    stmtseq(E,O2,O).
Preference Logic Grammars. A preference logic grammar is a definite clause grammar in which
each nonterminal n is optionally accompanied by one or more preference clauses of the form:
n(t̄) - n(ū) :- L_1, ..., L_k.
where each L_i is a positive or negative atom of the form p(terms), where p is a predicate. This form
of the rule states that when a sentence spanned by nonterminal n has two different parses, the parse
corresponding to t̄ is less preferred than the one corresponding to ū provided the condition given
by L_1, ..., L_k holds. (Later, we will also show how one can extend definite clause
translation grammars with preference clauses.)
To illustrate PLGs, we can extend the above DCG with a preference clause to resolve the
'dangling else' ambiguity. As explained in section 1, to resolve the ambiguity we need to specify the
criterion that each else pairs up with the closest previous unpaired then. We can state this criterion
by the following preference clause:
ifstmt(if(C,if(C1,T),E)) - ifstmt(if(C,if(C1,T,E))).
This rule states that when an if-then-else statement has two different parses, the parse corresponding
to the tree if(C,if(C1,T),E) is less preferred than the one corresponding to the tree
if(C,if(C1,T,E)). Note that the above specification is capable of correctly resolving the
ambiguity for arbitrarily nested if-then-else statements.
In the above example, while it might appear that the preference clause is requiring the
sentence to be scanned twice in order to compute the two parses, this need not happen in an actual
implementation. The above specification actually does not commit to any specific parsing strategy.
By making use of memoization (or a related technique) [War92], we can avoid re-parsing the phrase
[if], cond(C), [then], stmtseq(T) when the second rule for ifstmt is being considered. For
example, a system such as XSB [SSW] would be a very good vehicle for implementing preference
logic grammars because of its memoization capability. However, we do not go into implementation
issues in this paper.
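To make the ambiguity concrete, the following self-contained SWI-Prolog DCG (a simplified sketch with our own token and functor names, and no preference machinery) enumerates both parses of the dangling-else sentence:

```prolog
% Plain DCG for the dangling-else fragment (illustrative; not the
% paper's grammar). Tokens cond1/cond2/assign1/assign2 are our own.
stmt(assign1)   --> [assign1].
stmt(assign2)   --> [assign2].
stmt(if(C,T))   --> [if], cond(C), [then], stmt(T).
stmt(if(C,T,E)) --> [if], cond(C), [then], stmt(T), [else], stmt(E).
cond(cond1)     --> [cond1].
cond(cond2)     --> [cond2].

% ?- findall(S, phrase(stmt(S),
%        [if,cond1,then,if,cond2,then,assign1,else,assign2]), Ps).
% Ps contains both if(cond1,if(cond2,assign1,assign2))  (the preferred
% grouping) and if(cond1,if(cond2,assign1),assign2).
```

A preference clause, as described above, would prune the second answer.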
To illustrate the use of multiple preference clauses, consider the following ambiguous definite
clause grammar for arithmetic expressions:
exp(plus(E1,E2)) --> exp(E1), [+], exp(E2).
exp(times(E1,E2)) --> exp(E1), [*], exp(E2).
exp(id) --> [id].
To resolve the ambiguity, preference clauses may be used to specify the precedence and associativity
of + and * in a modular and declarative manner, i.e., without rewriting the original grammar. The
first two arbiter clauses below specify that * has a higher precedence than +. The third clause
specifies that + is left-associative whereas the fourth clause specifies that * is right-associative.
Readers familiar with Prolog DCGs may be concerned about the use of left-recursion in the above
grammar. Once again, memoization can be used to circumvent the possible nontermination arising from
left-recursion.
3 Translation of Preference Logic Grammars
We present the translation of preference logic grammars into preference logic programs. We begin
with a brief introduction to preference logic programs.
3.1 Preference Logic Programs
A preference logic program (PLP) may be thought of as containing two parts: a first-order theory
and an arbiter. The first-order theory consists of clauses each of which can have one of two forms:
1. Definite clauses, of the form H :- B_1, ..., B_n. Each B_i is of the form p(t̄) where p is a
predicate and t̄ is a sequence of terms. In general, some of the B_i s could be constraints as in
CLP [JL87, JM94].
2. Optimization clauses, of the form H ! C_1, ..., C_m | B_1, ..., B_n. The C_1, ..., C_m are constraints 2
as in CLP [JL87, JM94] that must be satisfied for this clause to be applicable to a goal; they
must be read as antecedents of the implication. The variables that appear only on the RHS of
the ! clause are existentially quantified. The intended meaning of this clause is that the set
of solutions to the head is some subset of the set of solutions to the body.
Moreover, the predicate symbols can be partitioned into three disjoint sets depending on the kinds
of clauses used to define them:
1. C-predicates appear only in the heads of definite clauses and the bodies of these clauses contain
only other C-predicates (C stands for core).
2. O-predicates appear in the heads of only optimization clauses (O stands for optimization). For
each ground instance of an optimization clause, the instance of the O-predicate at the head is
a candidate for the optimal solution provided the corresponding instance of the body of the
clause is true. The constraints that appear before the | in the body of an optimization clause
are referred to as the guard and must be satisfied in order for the head H to be reduced.
3. D-predicates appear in the heads of only definite clauses and any one goal in the body of any of
these clauses is either an O-predicate or a D-predicate. (D stands for derived from O-predicates.)
The arbiter part of a preference logic program, which specifies the optimization criterion for the
O-predicates, has clauses of the form:
p(t̄) - p(ū) :- L_1, ..., L_k.
where p is an O-predicate and each L_i is an atom whose head is a C-predicate or a constraint as in
CLP. In essence this form of the arbiter states that p(t̄) is less preferred than p(ū) if L_1, ..., L_k hold.
We have formalized the semantics of preference logic programs in our earlier work, and we
refer the reader to [GJM96, Gov97] for a detailed exposition. In short, preference logic programs
have a well-defined meaning as long as recursion through the optimization clauses is well-founded,
or locally-stratified. (This is similar in spirit to the semantics of negation-as-failure [Llo87].) We
have given a possible-worlds semantics for a preference logic program; essentially, each world is a
model for the constraints of the program, and an ordering over these worlds is enforced by the arbiter
clauses in the program. We introduce the concept of preferential consequence to refer to truth in the
optimal worlds (in contrast with logical consequence which refers to truth in all worlds). An optimal
2 We do not need the full generality of this feature for the translation of preference logic grammars.
answer to the query G is a substitution ' such that G' is a preferential consequence of the preference
logic program. It is not hard to see that such notions are also relevant to the semantics of preference
logic grammars.
We have also provided a derivation scheme called PTSLD-derivation, which stands for
Pruned Tree SLD-derivation, for efficiently computing the optimal answers to queries [GJM96,
Gov97]. The basic idea is to grow the SLD search tree for a query and apply the arbiter clauses
to prune unproductive search paths, i.e., suboptimal solutions. Since we are computing preferential
consequences as opposed to logical consequences, we do not incur the cost of theorem proving in
a general modal logic. In order to achieve the needed efficiencies and termination properties for
preference logic grammars, we must augment this derivation procedure with memo-tables.
We present two simple examples of preference logic programs, the first being a naive formulation
of the shortest-path problem, and the second is a dynamic-programming formulation of the same
problem. Consider the following CLP clauses for the predicate path(X,Y,C,P), which determines P
as a path (list of edges) with distance C from node X to Y in a directed graph:
path(X,Y,C,[e(X,Y)]) :- edge(X,Y,C).
path(X,Y,C1+C2,[e(X,Z)|P]) :- edge(X,Z,C1), path(Z,Y,C2,P).
Unlike Prolog, in the above definition, the symbol + does indeed stand for the addition operator. A
logical specification of the shortest path problem could be given as follows:
sh_path(X,Y,C,P) ! path(X,Y,C,P).
sh_path(X,Y,C1,P1) - sh_path(X,Y,C2,P2) :- C2 < C1.
The first clause is an example of an optimization clause, and it identifies sh path as an optimization
predicate. The space of feasible solutions for this predicate is some subset of the solutions for path;
hence the use of a ! clause. This clause effectively says that every shortest path is also a path. The
second clause is an example of an arbiter clause and it states the criterion for optimization: given
two solutions for sh path, the one with lesser distance is preferred. Computationally speaking, the
search tree for path is first constructed and those solution paths that are suboptimal according to
the arbiter clause are pruned; the remaining paths form the solutions for sh path.
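This generate-then-prune reading can be approximated in ordinary Prolog. The sketch below is our own (the small acyclic edge/3 database is hypothetical, and negation-as-failure stands in for the arbiter's pruning; a real PLP implementation would prune during the derivation rather than after it):

```prolog
edge(a,b,1).  edge(b,c,2).  edge(a,c,5).   % hypothetical graph

path(X,Y,C,[e(X,Y)])   :- edge(X,Y,C).
path(X,Z,C,[e(X,Y)|P]) :- edge(X,Y,C1), path(Y,Z,C2,P), C is C1 + C2.

% A feasible path survives only if the arbiter (C2 < C1) prefers no rival.
sh_path(X,Y,C,P) :-
    path(X,Y,C,P),
    \+ ( path(X,Y,C2,_), C2 < C ).

% ?- sh_path(a,c,C,P).   gives C = 3, P = [e(a,b),e(b,c)].
```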
The program below is a dynamic-programming formulation of the shortest-path problem.
sh_dist(X,X,N,0).
sh_dist(X,Y,1,C) ! X ≠ Y | edge(X,Y,C).
sh_dist(X,Y,N,C1+C2) ! N > 1, X ≠ Y | sh_dist(X,Z,1,C1), sh_dist(Z,Y,N-1,C2).
sh_dist(X,Y,N,C1) - sh_dist(X,Y,N,C2) :- C2 < C1.
(We show only the specification of the shortest distance; the associated path can be specified with
the aid of an extra argument). This program explicitly expresses the optimal sub-problem property of
a dynamic-programming algorithm. The recursive clause for sh dist encodes the fact that candidate
shortest paths with n edges between any two vertices a and b are obtained by extending the shortest
paths with n - 1 edges. Note that the variable Z that appears only on the right-hand side of the
clause is existentially quantified. It therefore gets bound to some neighbor of the source vertex. In
fact, a candidate shortest path of n edges is generated by composing the shortest path from the
neighboring vertex to the destination with the edge between the source and the neighbor. Note
that this formulation does not state that the shortest path is obtained by only considering the edge
with the least cost emanating from the source to compute the shortest path. Furthermore, by using
memoization, we can avoid recomputing subproblems. In the previous formulation of this problem,
domain knowledge, such as the monotonicity of +, would be necessary to achieve a similar effect. This
example also shows the need for the guard: the conditions X ≠ Y and N > 1, X ≠ Y should be
read as antecedents of the implication.
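In systems with mode-directed tabling (XSB, SWI-Prolog), the combination of memoization and keep-only-the-best answers can be sketched directly; the `min` answer-subsumption mode plays the role of the arbiter. This is our own rendering with hypothetical edge facts, not the paper's PLP notation:

```prolog
:- table sh_dist(_, _, min).   % keep only the minimal cost per (X,Y)

edge(a,b,1).  edge(b,c,2).  edge(a,c,5).

sh_dist(X, Y, C) :- edge(X, Y, C).
sh_dist(X, Y, C) :- edge(X, Z, C1), sh_dist(Z, Y, C2), C is C1 + C2.

% ?- sh_dist(a, c, C).   returns only C = 3; suboptimal answers are
% subsumed in the table, and subproblems are never recomputed.
```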
3.2 From PLGs to PLPs
The translation of a preference logic grammar into a preference logic program proceeds as follows:
1. For each nonterminal N that has no preference clause attached to it, the grammar clauses for
N are translated into definite clauses as in the case of definite-clause grammars.
2. For each nonterminal N that has a preference clause attached to it, we introduce a new nonterminal
pref_N with the same number of arguments as N. There is one optimization clause
associated with pref_N, which is defined as follows:
pref_N(t̄,I,O) ! N(t̄,I,O).
3. For each (grammatical) arbiter clause for N of the form
N(ū) - N(v̄) :- L_1, ..., L_k.
the translated preference clause for pref_N is:
pref_N(ū,I,O) - pref_N(v̄,I,O) :- L_1, ..., L_k.
4. For each occurrence of the predicate N in the body of a PLG rule, in the translated program
we replace N by pref_N.
Nonterminals that have preference rules attached to them give rise to O-predicates (and their associated
definitions) in the target PLP program; and, nonterminals that depend upon other nonterminals
that have one or more preference rules attached to them give rise to D-predicates in the target PLP
program.
We briefly describe how the if-then-else grammar is translated into a PLP.
stmtseq(S,I,O) :- stmt(S,I,O).
stmtseq(ss(S,Ss),I,O) :- stmt(S,I,[;|O1]), stmtseq(Ss,O1,O).
stmt(S,I,O) :- assign(S,I,O).
stmt(S,I,O) :- pref_ifstmt(S,I,O).
assign(assign(V,E),I,O) :- var(V,I,[:=|O1]), expr(E,O1,O).
ifstmt(if(C,T),[if|I],O) :- cond(C,I,[then|O1]), stmtseq(T,O1,O).
ifstmt(if(C,T,E),[if|I],O) :- cond(C,I,[then|O1]), stmtseq(T,O1,[else|O2]),
    stmtseq(E,O2,O).
pref_ifstmt(S,In,Out) ! ifstmt(S,In,Out).
pref_ifstmt(if(C,if(C1,T),E),In,Out) - pref_ifstmt(if(C,if(C1,T,E)),In,Out).
Note the introduction of the new predicate pref_ifstmt. Every occurrence of ifstmt on the right-hand
side of a grammar rule is replaced by the corresponding instance of pref_ifstmt.
4 Applications of Preference Logic Grammars
We now present two major paradigms of preference logic grammars: optimal parsing and natural
language parsing.
4.1 Optimal Parsing
Definite Clause Translation Grammars. We first briefly describe definite clause translation
grammars [Abr84], and then present an example of optimal parsing from a document layout
application. Definite clause translation grammars (DCTGs) can be viewed as a logical counterpart of
attribute grammars [Knu68]. In attribute grammars, the syntax is specified by context-free rules
and semantics are specified by attributes attached to nonterminal nodes in the derivation trees and
by function definitions that define the attributes. Attribute grammars and DCTGs can be readily
translated into constraint logic programs over some appropriate domain [JL87, vH89, JM94]. In
general, we need constraint logic programs rather than ordinary definite-clause programs because we
are interested in interpreting the functions defining the attributes over an appropriate domain.
DCTG rules augment context-free rules in two ways: (i) they provide a mechanism for
associating with nonterminals logical variables which represent subtrees rooted at the nonterminal;
and (ii) they provide a mechanism for specifying the computation of semantic values of properties
of the trees. DCTGs differ from attribute grammars in that the semantics of DCTGs is captured
by associating a set of definite clauses to nonterminal nodes of a derivation tree, and no distinction
is made in DCTGs between inherited and synthesized attributes. The use of logical variables that
unify with parse trees eliminates the need for this distinction. In DCTGs, we associate a logical
variable N with a nonterminal nt by writing nt^^N.
The logical variable N will be eventually instantiated to the subtree corresponding to the nonterminal
nt. To specify the computation of a semantic value X of a property p of the tree N, we write N^^p(X).
The following is a simple DCTG from [AD89] that specifies the syntax and semantics of bitstrings.
The semantics of a bitstring is its decimal equivalent. Note that syntactic rules are specified by ::=
productions while semantic rules are specified by ::- productions.
bit ::= "0"
bit ::= "1"
bitstring ::= []
    <:> length(0).
bitstring ::= bit^^B, bitstring^^B1
number ::= bitstring^^B
There are two productions for the nonterminal bit and two for bitstring. The toplevel nonterminal
is number. The predicates specifying semantic properties are bitval, length, and value. The
translation of this DCTG into a logic program is straightforward and is also explained in [AD89].
Line-Breaking Problem. We show how the problem of laying out paragraphs optimally can be
specified in the formalism. Logically, a paragraph is a sequence of lines where each line is a sequence
of words. This view of a paragraph can be captured by the DCTG given below.
Knuth and Plass [KP81] describe how to lay out a sequence of words forming a paragraph by
computing the badness of the paragraph in terms of the badness of the lines that make up the
paragraph. The badness of a line is determined by the properties of the line such as the total width
of the characters that make up the line, the number of white spaces in the line, the stretchability and
shrinkability of the white spaces, the desired length of the line, etc. [KP81] insists that each line in
the paragraph be such that the ratio of the difference between the actual length and the desired length
to the stretchability or shrinkability (the adjustment ratio) is bounded. This can be captured
by the following DCTG (we take the liberty of extending the syntax to include interpreted function
symbols):
para ::= line^^Line
para ::= line^^Line, para^^Para
line ::= "word"
    difference(Dl-N) ::- desiredlength(Dl), naturallength(N).
    adjustment(D/S) ::- difference(D), D > 0, stretchability(S).
    adjustment(D/S) ::- difference(D), D =< 0, shrinkability(S).
    A >= lineshrinkbound, A =< linestretchbound.
line ::= "word", line^^Line
    difference(Dl-N) ::- desiredlength(Dl), naturallength(N).
    adjustment(D/S) ::- difference(D), D > 0, stretchability(S).
    adjustment(D/S) ::- difference(D), D =< 0, shrinkability(S).
    A >= lineshrinkbound, A =< linestretchbound.
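The case split on the sign of the difference can be read as the following small Prolog predicate (our own rendering, not part of the DCTG; the argument names follow the grammar's vocabulary):

```prolog
% adjustment_ratio(+Desired, +Natural, +Stretch, +Shrink, -R):
% R = D/Stretch when the line must stretch (D > 0),
% R = D/Shrink  when it must shrink (D =< 0),
% mirroring the two adjustment rules attached to line above.
adjustment_ratio(Desired, Natural, Stretch, Shrink, R) :-
    D is Desired - Natural,
    (   D > 0
    ->  R is D / Stretch
    ;   R is D / Shrink
    ).

% ?- adjustment_ratio(80, 74, 3, 2, R).   R = 2.
```

Bounding R between the shrink and stretch bounds is then exactly the validity condition [KP81] imposes per line.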
The grammars specifying document structures, such as the one above, are extremely ambiguous.
Given a description of a document and a sequence of words representing its content, we are
interested in a parse that has a particular property. For instance, we may be interested in the parse
of a sequence of words from ⟨para⟩ that has the least badness. Brown et al. [BMW92] augmented
attribute grammars with minimization directives that specified which attributes had to be minimized
in the preferred parse. Similarly, we extend DCTGs with statements that specify the preferred parse.
For instance, in the line-breaking example above, the preferred parse is specified by augmenting the
definition of para as follows:
In essence, PLGs allow one to specify optimal parsing problems in a succinct manner. Once again, we
note that the above specification does not commit to any particular parsing strategy for obtaining
the optimal parse. The naive method for computing the best parse involves enumerating all the
parses, and this approach can easily lead to an exponential complexity. Note that the line-breaking
algorithm used by TeX [KP81] constructs the optimal parse in polynomial time.
4.2 Natural Language Parsing
A practical use of preference logic grammars lies in ambiguity resolution in natural languages. This is
a substantial research area and we provide a brief glimpse of the issues through an example. Consider
the following simple ambiguous definite-clause grammar.
sentence(sent(N,V)) --> nounphrase(N), verbphrase(V).
nounphrase(np(N,P)) --> nounphrase(N), prepphrase(P).
verbphrase(vp(V,N)) --> verb(V), nounphrase(N).
verbphrase(vp(V,N,P)) --> verb(V), nounphrase(N), prepphrase(P).
prepphrase(pp(P,N)) --> preposition(P), nounphrase(N).
where the details of determiner, noun, verb, and preposition are not shown. Given a sentence,
The boy saw the girl with binoculars,
there are two possible parses:
sent(np(the boy), vp(verb(saw), np(the girl), pp(with binoculars)))
sent(np(the boy), vp(verb(saw), np(np(the girl), pp(with binoculars))))
This is the well-known prepositional attachment problem. In this example, the preferred reading
of the sentence is given by the first parse, which conforms to the principle of minimal attachment
[Kim73], i.e., create the simplest syntactic analysis, where simplicity might be measured by the
number of nodes in the parse tree. On the other hand, minimal attachment does not yield the
preferred reading of the sentence
The boy saw the girl with long hair.
Instead, we prefer here to parse the sentence according to the principle of right association, i.e., a
new constituent is analyzed as being part of the phrase subtree under construction rather than a
part of a higher phrase subtree. Both principles can be expressed by preference clauses, as shown
below (nodes_in(P) counts the number of nodes in parse tree P, while span(P) returns the length
of the sentence spanned by P):
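The measures themselves are straightforward to realize over ground parse trees. A possible definition of nodes_in/2 (our own code, not the paper's; the paper's preference clauses would call such a helper) is:

```prolog
% nodes_in(+Tree, -N): N is the number of nodes in a ground Prolog term
% viewed as a parse tree; leaves (atoms/numbers) count as one node each.
nodes_in(T, 1) :-
    atomic(T).
nodes_in(T, N) :-
    compound(T),
    T =.. [_Functor | Args],        % decompose the node into its children
    nodes_in_list(Args, NA),
    N is NA + 1.

nodes_in_list([], 0).
nodes_in_list([T|Ts], N) :-
    nodes_in(T, NT),
    nodes_in_list(Ts, NR),
    N is NT + NR.

% ?- nodes_in(vp(verb(saw), np(the_girl), pp(with_binoculars)), N).  N = 7.
```

A minimal-attachment arbiter then prefers the parse whose tree has the smaller node count.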
While these two principles are purely syntactic in nature, in general the attachment problem
is more complex, requiring semantic and contextual knowledge for correct resolution. There has been
considerable recent interest in the computational linguistics community in the use of constraints and
preferences. For example, [Sch86] discusses preference tradeoffs in attachment decisions; [Per85]
shows how the principles of minimal attachment and right association can be realized by suitable
parsing actions in a bottom-up parser; [Usz91] presents controlled linguistic deduction as a means
for adding control knowledge to declarative grammars; and [Erb92] discusses the use of preference
values with typed feature structures and resolving ambiguity by choosing feature structures with the
highest preference value. We are at present developing an extension of preference logic grammars
to formulate a principled and practical scheme for using context-sensitive information for ambiguity
resolution in feature-based grammar formalisms.
5 Conclusions and Further Research
We have presented a logic-grammar extension called preference logic grammars, and shown their
use for ambiguity resolution in logic grammars and for specifying optimal parsing problems. The
use of preferential statements in logic grammars is concise, modular, and declarative. Although
this extension was originally motivated by the extension to attribute grammars to specify document
layout [BMW92], this paper shows that the ideas are applicable in a broader setting.
There are many interesting avenues for further research. The first is to develop an implementation
of PLGs incorporating memoization and pruning. For applications such as optimal parsing, it
is desirable to further annotate the grammar with information so that pruning of suboptimal parses
can be done at the earliest opportunity. For this purpose, domain knowledge, such as the additive
property of costs, is important. It may be noted that this kind of information is generally not of
interest in other applications of ambiguity resolution.
The paradigm of preference logic programming was originally proposed for specifying both
optimization and relaxation problems [GJM96, Gov97]. Relaxation is performed when the constraints
specifying the problem do not have any solutions or when the optimality requirements need to
be weakened. In PLP [GJM96], we can encode constraint relaxation regimes such as those given in
[WB93] and also preference relaxation regimes. The latter is achieved through the notion of a
relaxable goal of the following form:
where p is an O-predicate and c is a C-predicate or a constraint as in CLP. The predicate p is said to
be a relaxable predicate and c is said to be the relaxation criterion of the relaxation goal. Essentially
if the optimal solutions to p(t̄) satisfy c(ū), those are the intended solutions. However, if none of the
optimal solutions to p(t̄) satisfies c(ū), then the feasible set of solutions of p is reduced by adding c
as an additional constraint and the arbiter is applied to this constrained set. In applications such
as document processing, this notion of relaxation seems particularly useful. For instance, a page
may consist of many paragraphs but the optimal layout of the page might require that some of the
paragraphs that make up the page are laid out sub-optimally. We are investigating the methods
by which such criteria could be incorporated into a logic grammar formalism. Another possible
application for relaxation is in specifying error-recovery strategies in compilers and code generation.
In the area of natural language processing, we are interested in exploring how preferences can
be used to resolve several semantic ambiguity issues, such as 'quantifier scoping'. For example, the
sentence Every man loves a woman has a quantifier scoping ambiguity, i.e., does every man love
the same woman or can the woman be different for different men? The latter reading is usually the
one intended, and this reflects our preference that the quantifier in the noun-phrase should scope
over the one in the verb-phrase.
Finally, we are also interested in incremental computation strategies for parsing with preference
logic grammars. For example, in code-generation, if a small change were made to the input
program, the compiler should not have to compute the optimal code sequence of the program from
scratch. In the document layout application, the formatter (TeX, LaTeX, etc.) should not have to
reformat the whole document after each change to obtain the new layout.
--R
Definite Clause Translation Grammars.
Logic Grammars.
Code Generation using Tree Matching and Dynamic Programming.
The Declarative Semantics of Document Pro- cessing
Using Preference Values in Typed Feature Structures to Exploit Non-absolute Constraints for Disambiguation
Preference Logic Programming.
Optimization and Relaxation in Constraint Logic Languages.
Optimization and Relaxation in Logic Languages.
Constraint Logic Programming.
Constraint Logic Programming: A Survey.
Seven Principles of Surface Structure Parsing in Natural Language.
Semantics of Context-Free Languages
Breaking Paragraphs into Lines.
Foundations of Logic Programming.
A new Characterization of Attachment Preferences
Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks
Are There Preference Trade-offs in Attachment Decisions?
XSB: An Overview of its Use and Implementation.
Strategies for Adding Control Information to Declarative Grammars.
Constraint Satisfaction in Logic Programming.
Memoing for Logic Programs.
Hierarchical Constraint Logic Programming.
--TR
Foundations of logic programming
Every logic program has a natural stratification and an iterated least fixed point model
The well-founded semantics for general logic programs
Preferred answer sets for extended logic programs
Tabling for non-monotonic programming
Reasoning with Prioritized Defaults
Psychiatric Diagnosis from the Viewpoint of Computational Logic
--CTR
Hai-Feng Guo , Bharat Jayaraman, Mode-directed preferences for logic programs, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Hai-Feng Guo , Bharat Jayaraman , Gopal Gupta , Miao Liu, Optimization with mode-directed preferences, Proceedings of the 7th ACM SIGPLAN international conference on Principles and practice of declarative programming, p.242-251, July 11-13, 2005, Lisbon, Portugal
Torsten Schaub , Kewen Wang, A semantic framework for preference handling in answer set programming, Theory and Practice of Logic Programming, v.3 n.4, p.569-607, July
Logic programming and knowledge representation-the A-prolog perspective, Artificial Intelligence, v.138 n.1-2, p.3-38, June 2002 | non-monotonic reasoning;reasoning with preferences;logic programming;natural language processing;XSB |
570395 | Annotated revision programs. | Revision programming is a formalism to describe and enforce updates of belief sets and databases. That formalism was extended by Fitting who assigned annotations to revision atoms. Annotations provide a way to quantify the confidence (probability) that a revision atom holds. The main goal of our paper is to reexamine the work of Fitting, argue that his semantics does not always provide results consistent with intuition, and to propose an alternative treatment of annotated revision programs. Our approach differs from that proposed by Fitting in two key aspects: we change the notion of a model of a program and we change the notion of a justified revision. We show that under this new approach fundamental properties of justified revisions of standard revision programs extend to the annotated case. | Introduction
Revision programming is a formalism to specify and enforce constraints on
databases, belief sets and, more generally, on arbitrary sets of elements. Revision
programming was introduced and studied in [MT95,MT98]. The formalism was
shown to be closely related to logic programming with stable model semantics
[MT98,PT97]. In [MPT99], a simple correspondence of revision programming
with the general logic programming system of Lifschitz and Woo [LW92] was discovered. Roots of another recent formalism of dynamic programming [ALP] can also be traced back to revision programming.
Revision rules come in two forms, in-rules and out-rules:

in(a) ← in(a_1), …, in(a_m), out(b_1), …, out(b_n)   (1)

and

out(a) ← in(a_1), …, in(a_m), out(b_1), …, out(b_n)   (2)

Expressions in(a) and out(a) are called revision atoms. Informally, the atom in(a) stands for "a is in the current set" and out(a) stands for "a is not in the current set." The rules (1) and (2) have the following imperative, or computational, interpretation: whenever elements a_k, 1 ≤ k ≤ m, belong to the current set (database, belief set) and none of the elements b_1, …, b_n belongs to the current set then, in the case of rule (1), the item a must be added to the set (if it is not there already), and in the case of rule (2), a must be eliminated from the database (if it is there). The rules (1) and (2) have also an obvious declarative
interpretation.
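The imperative reading of rules (1) and (2) can be sketched in a few lines of Python; this is a minimal illustration (not the paper's formalism), and all names here are ours:

```python
# Apply a single revision rule to a current set: the rule fires when all
# the a_k are present and none of the b_j are. head_in=True encodes an
# in-rule in(head) <- ..., head_in=False an out-rule out(head) <- ...
def apply_rule(current, head, head_in, ins, outs):
    if all(a in current for a in ins) and all(b not in current for b in outs):
        return (current | {head}) if head_in else (current - {head})
    return current

db = {"a1", "a2"}
db = apply_rule(db, "c", True, ins={"a1", "a2"}, outs={"b"})   # in-rule fires
assert db == {"a1", "a2", "c"}
db = apply_rule(db, "a1", False, ins={"c"}, outs=set())        # out-rule fires
assert db == {"a2", "c"}
```

The semantics of a whole program is of course subtler than firing rules one at a time; that is exactly what the notion of a justified revision makes precise.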
To provide a precise semantics to revision programs, that is, collections of
revision rules, the concept of a justified revision was introduced in [MT95,MT98].
Informally, given an initial set B I and a revision program P , a justified revision
of B I with respect to P (or, simply, a P -justified revision of B I ) is obtained
from B I by adding some elements to B I and by removing some other elements
from B I so that each change is, in a certain sense, justified by the program.
The formalism of revision programs was extended by Fitting [Fit95] to the
case when revision atoms are assigned annotations. These annotations can be
interpreted as the degree of confidence that a revision atom holds. For instance,
an annotated atom (in(a):0.2) can be regarded as the statement that a is in the set with probability 0.2. In his paper, Fitting described the concept of a justified
revision of an annotated program and studied properties of that notion.
The main goal of our paper is to reexamine the work of Fitting, argue that
his semantics does not always provide results consistent with intuition, and to
propose an alternative treatment of annotated revision programs. Our approach
differs from that proposed by Fitting in two key aspects: we change the notion of
a model of a program and we change the notion of a justified revision. We show
that under this new approach all fundamental properties of justified revisions of
standard revision programs extend to the case of annotated revision programs.
We also show that annotated revision programming can be given a more uniform
treatment if the syntax of revision programs is somewhat modified. The
new syntax yields a formalism that is equivalent to the original formalism of
annotated revision programs. The advantage of the new syntax is that it allows
us to generalize the shifting theorem proved in [MPT99] and used there to establish
the equivalence of revision programming with general logic programming
of Lifschitz and Woo [LW92].
Finally, in the paper we also address briefly the issue of disjunctive annotated
programs and other possible research directions.
2 Preliminaries
Throughout the paper we consider a fixed universe U whose elements are referred to as atoms. Expressions of the form in(a) and out(a), where a ∈ U, are called revision atoms. In the paper we assign annotations to revision atoms. These annotations are members of a complete distributive lattice with the de Morgan complement (an order reversing involution). Throughout the paper this lattice is denoted by T. The partial ordering on T is denoted by ≤ and the corresponding meet and join operations by ∧ and ∨, respectively. The de Morgan complement of α ∈ T is denoted by ᾱ.
An annotated revision atom is an expression of the form (in(a):α) or (out(a):α), where a ∈ U and α ∈ T. An annotated revision rule is an expression of the form p ← q_1, …, q_n, where p, q_1, …, q_n are annotated revision atoms. An annotated revision program is a set of annotated revision rules.
A T-valuation is a mapping from the set of revision atoms to T. A T-valuation v describes our information about the membership of the elements from U in some (possibly unknown) set B ⊆ U. For instance, v(in(a)) = α can be interpreted as saying that a ∈ B with certainty α. A T-valuation v satisfies an annotated revision atom (in(a):α) if v(in(a)) ≥ α. Similarly, v satisfies (out(a):α) if v(out(a)) ≥ α. The T-valuation v satisfies a list or a set of annotated revision atoms if it satisfies each member of the list or the set. A T-valuation satisfies an annotated revision rule if it satisfies the head of the rule whenever it satisfies the body of the rule. Finally, a T-valuation satisfies an annotated revision program (is a model of the program) if it satisfies all rules in the program.
Given a revision program P we can assign to it an operator on the set of all T-valuations. Let t_P(v) be the set of the heads of all rules in P whose bodies are satisfied by v. We define an operator T_P as follows:

T_P(v)(l) = ∨{α : (l:α) ∈ t_P(v)}

(note that ⊥ is the join of an empty set of lattice elements). The operator T_P is a counterpart of the well-known van Emden-Kowalski operator from logic programming and it will play an important role in our paper.
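The operator T_P can be sketched concretely for the unit-interval lattice T_[0,1], where join is max and the empty join is 0; the encoding and helper names below are ours, not the paper's:

```python
# T_P(v)(l) is the join of the annotations of heads (l:alpha) of rules
# whose bodies v satisfies; on [0,1] the join is max, and atoms with no
# applicable rule get the empty join, 0.0.
def t_p(program, v):
    """program: list of (head_atom, head_ann, [(body_atom, body_ann), ...])."""
    out = {}
    for head, ann, body in program:
        # v satisfies (l:alpha) when v(l) >= alpha.
        if all(v.get(b, 0.0) >= a for b, a in body):
            out[head] = max(out.get(head, 0.0), ann)   # join = max on [0,1]
    return out  # atoms absent from the dict implicitly have value 0.0

P = [("in(a)", 0.6, [("in(b)", 0.5)]),
     ("in(a)", 0.3, []),
     ("out(b)", 0.8, [("in(a)", 0.9)])]
v = {"in(b)": 0.7}
assert t_p(P, v) == {"in(a)": 0.6}   # both in(a)-rules fire; max(0.6, 0.3) = 0.6
```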
It is clear that under T-valuations, the information about an element a ∈ U is given by a pair of elements from T that are assigned to revision atoms in(a) and out(a). Thus, in the paper we will also consider an algebraic structure T² with the domain T × T and with an ordering ≤_k defined by:

⟨α_1, β_1⟩ ≤_k ⟨α_2, β_2⟩ if α_1 ≤ α_2 and β_1 ≤ β_2.

If a pair ⟨α_1, β_1⟩ is viewed as a measure of our information about membership of a in some unknown set B, then α_1 ≤ α_2 and β_1 ≤ β_2 imply that the pair ⟨α_2, β_2⟩ represents a higher degree of knowledge about a. Thus, the ordering ≤_k is often referred to as the knowledge or information ordering. Since the lattice T is complete, T² is a complete lattice with respect to the ordering ≤_k (1). The operations of meet, join, top, and bottom under ≤_k are denoted by ⊗, ⊕, ⊤, and ⊥, respectively. In addition, we make use of an additional operation, conflation. Conflation is defined as −⟨α, β⟩ = ⟨β̄, ᾱ⟩. An element A ∈ T² is consistent if A ≤_k −A.
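For the expert lattice T_X these operations are plain set algebra, which the following sketch makes concrete; the function names are ours and the pair (in-evidence, out-evidence) stands for ⟨α, β⟩:

```python
# A small model of T^2 over the powerset lattice T_X, X a set of experts.
X = frozenset({"p", "q"})

def leq_k(A, B):               # knowledge ordering: componentwise subset
    return A[0] <= B[0] and A[1] <= B[1]

def meet(A, B):                # the operation written ⊗ in the text
    return (A[0] & B[0], A[1] & B[1])

def join(A, B):                # the operation written ⊕ in the text
    return (A[0] | B[0], A[1] | B[1])

def conflation(A):             # -<a,b> = <complement(b), complement(a)>
    return (X - A[1], X - A[0])

def consistent(A):             # A is consistent iff A <=_k -A
    return leq_k(A, conflation(A))

assert consistent((frozenset({"q"}), frozenset({"p"})))           # q in, p out
assert not consistent((frozenset({"p", "q"}), frozenset({"p"})))  # p believes both
```

On T_X an element ⟨Y, Z⟩ is thus consistent exactly when no expert occurs in both Y and Z.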
A T²-valuation is a mapping from atoms to elements of T². If B(a) = ⟨α, β⟩ under some T²-valuation B, we say that under B the element a is in a set with certainty α and it is not in the set with certainty β. We say that a T²-valuation is consistent if it assigns a consistent element of T² to every atom in U.
(1) There is another ordering that can be associated with T². We can define ⟨α_1, β_1⟩ ≤_t ⟨α_2, β_2⟩ if α_1 ≤ α_2 and β_2 ≤ β_1. This ordering is often called the truth ordering. Since T is a distributive lattice, T² with both orderings ≤_k and ≤_t forms a bilattice (see [Gin88,Fit99] for a definition). In this paper we will not use the ordering ≤_t nor the fact that T² is a bilattice.
In the paper, T²-valuations will be used to represent current information about sets (databases) as well as change that needs to be enforced. Let B be a T²-valuation representing our knowledge about a certain set and let C be a T²-valuation representing change that needs to be applied to B. We define the revision of B by C as

(B ⊗ −C) ⊕ C.

The intuition is as follows. After the revision, the new valuation must contain at least as much knowledge about atoms being in and out as C. On the other hand, this amount of knowledge must not exceed implicit bounds present in C and expressed by −C, unless C directly implies so (if C(a) = ⟨α, β⟩, the evidence for in(a) must not exceed β̄ and the evidence for out(a) must not exceed ᾱ, unless C directly implies so). Since we prefer explicit evidence of C to implicit evidence expressed by −C, we perform the change by first using −C and then applying C (however, let us note here that the order matters only if C is inconsistent; if C is consistent, (B ⊗ −C) ⊕ C = (B ⊕ C) ⊗ −C). This specification of how a change modeled by a T²-valuation is enforced plays a key role in our definition of justified revisions in Section 4.
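The revision (B ⊗ −C) ⊕ C can be sketched on the expert lattice T_{p,q}; the helper names are ours:

```python
# Revision of B by C, computed as (B meet -C) join C on T_X.
X = frozenset({"p", "q"})

def conflation(A):
    return (X - A[1], X - A[0])

def revise(B, C):
    """Clip B to the implicit bound -C, then add the explicit evidence C."""
    mC = conflation(C)
    clipped = (B[0] & mC[0], B[1] & mC[1])         # B meet -C
    return (clipped[0] | C[0], clipped[1] | C[1])  # ... join C

# Initially both experts hold in(a); the change asserts out(a) for p.
B = (frozenset({"p", "q"}), frozenset())
C = (frozenset(), frozenset({"p"}))
# -C = <{q}, {p,q}> bounds the in(a)-evidence by {q}, so p's in-belief
# is retracted while the new out(a)-evidence for p is added.
assert revise(B, C) == (frozenset({"q"}), frozenset({"p"}))
```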
There is a one-to-one correspondence θ between T-valuations (of revision atoms) and T²-valuations (of atoms). For a T-valuation v, the T²-valuation θ(v) is defined by θ(v)(a) = ⟨v(in(a)), v(out(a))⟩. The inverse mapping of θ is denoted by θ⁻¹. Clearly, using the mapping θ, the notions of satisfaction defined earlier for T-valuations can be extended to T²-valuations. Similarly, the operator T_P gives rise to a related operator T̂_P. The operator T̂_P is defined on the set of all T²-valuations by T̂_P(B) = θ(T_P(θ⁻¹(B))). The key property of the operator T̂_P is its ≤_k-monotonicity.
Theorem 1. Let P be an annotated revision program and let B and B′ be two T²-valuations such that B ≤_k B′. Then T̂_P(B) ≤_k T̂_P(B′).
By the Tarski-Knaster theorem it follows that the operator T̂_P has a least fixpoint in the lattice of T²-valuations [KS92]. This fixpoint is an analogue of the concept of a least Herbrand
model of a Horn program. It represents the set of annotated revision atoms
that are implied by the program and, hence, must be satisfied by any revision
under P of any initial valuation. Given an annotated revision program P we
will refer to the least fixpoint of the operator T̂_P as the necessary change of P
and will denote it by NC(P ). The present concept of the necessary change generalizes
the corresponding notion introduced in [MT95,MT98] for the original
unannotated revision programs.
To illustrate concepts and results of the paper, we will consider two special
lattices. The first of them is the lattice with the domain [0,1] (the interval of reals) and with the standard ordering ≤ and the standard complement operation. We will denote this lattice by T_[0,1]. Intuitively, the annotated revision atom (in(a):x), where x ∈ [0,1], stands for the statement that a is "in" with likelihood (certainty) x.
The second lattice is the Boolean algebra of all subsets of a given set X. It will be denoted by T_X. We will think of elements from X as experts. The annotated revision atom (out(a):Y), where Y ⊆ X, will be understood as saying that a is believed to be "out" by those experts that are in Y (the atom (in(a):Y) has a similar meaning).
3 Models and c-models
The semantics of annotated revision programs will be based on the notion of a
model as defined in the previous section. The following result provides a characterization
of the concept of a model in terms of the operator T̂_P.
Theorem 2. A T²-valuation B of an annotated revision program P is a model of P if and only if T̂_P(B) ≤_k B.
Given an annotated revision program P, its necessary change NC(P) satisfies T̂_P(NC(P)) = NC(P) and, hence, is a model of P.
As we will argue now, not all models are appropriate for describing the meaning
of an annotated revision program. The problem is that T 2 -valuations may
contain inconsistent information about elements from U . When studying the
meaning of an annotated revision program we will be interested only in those models whose inconsistencies are limited by the information explicitly or implicitly present in the program.
Consider the annotated revision program P consisting of the following rule:

(in(a):{q}) ← (out(a):{p})

(the literals are annotated with elements of the lattice T_{p,q}). Some models of this program are consistent (for instance, the T²-valuation that assigns ⟨{q}, {p}⟩ to a). However, P also has inconsistent models. Let us consider first the T²-valuation B_1 such that B_1(a) = ⟨{p,q}, {p}⟩. B_1 is a model of P. Moreover, it is an inconsistent model: the expert p believes both in(a) and out(a). Let us notice though that this inconsistency is not disallowed by the program. The rule (in(a):{q}) ← (out(a):{p}) is applicable with respect to B_1 and, thus, provides explicit evidence that q believes in in(a). This fact implicitly precludes q from believing in out(a). However, this rule does not preclude that expert p believes in out(a). In addition, since no rule in the program provides any information about out(a), it prevents neither p nor q from believing in in(a). To summarize, the program allows for p to have inconsistent beliefs (however, q's beliefs must be consistent).
Next, consider the T²-valuation B_2 such that B_2(a) = ⟨{p,q}, {p,q}⟩. This valuation is also a model of P. In B_2 both p and q are inconsistent in their beliefs. As before, the inconsistent beliefs of p are not disallowed by P. However, reasoning as before we see that the program disallows q to believe in out(a). Thus the inconsistent beliefs of expert q cannot be reconciled with P. In our study of annotated revision programs we will restrict ourselves only to consistent models and to those inconsistent models all of whose inconsistencies are not disallowed by the program.
Speaking more formally, by direct (or explicit) evidence we mean evidence provided by heads of program rules applicable with respect to B. It can be described as T̂_P(B). The implicit bound on allowed annotations is given by a version of the closed world assumption: if the evidence for a revision atom l provided by the program is α then the evidence for the dual revision atom l^D (in(a), if l = out(a), or out(a), otherwise) must not exceed ᾱ (unless explicitly forced by the program). Thus, the implicit upper bound on allowed annotations is given by −T̂_P(B). Hence, a model B of a program P should contain no more evidence than what is implied by P, that is, B ≤_k (−T̂_P(B)) ⊕ T̂_P(B). This discussion leads us to a refinement of the notion of a model of an annotated revision program.
Definition 1. Let P be an annotated revision program and let B be a T²-valuation. We say B is a c-model of P if T̂_P(B) ≤_k B and B ≤_k (−T̂_P(B)) ⊕ T̂_P(B).
Thus, coming back to our example, the T²-valuation B_1 is a c-model of P and B_2 is not.
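The c-model condition can be checked mechanically on the running example; the sketch below hard-codes the single rule (in(a):{q}) ← (out(a):{p}), and all helper names are ours:

```python
# Check the c-model condition T_hat(B) <=_k B <=_k (-T_hat(B)) join T_hat(B)
# for the one-rule program over T_{p,q}; a valuation is just the pair for a.
X = frozenset({"p", "q"})

def conflation(A):
    return (X - A[1], X - A[0])

def leq_k(A, B):
    return A[0] <= B[0] and A[1] <= B[1]

def T_hat(B):
    """Heads of applicable rules: if {p} <= out-evidence, derive (in(a):{q})."""
    if frozenset({"p"}) <= B[1]:
        return (frozenset({"q"}), frozenset())
    return (frozenset(), frozenset())

def is_c_model(B):
    TB = T_hat(B)
    bound = conflation(TB)
    upper = (bound[0] | TB[0], bound[1] | TB[1])   # (-T_hat(B)) join T_hat(B)
    return leq_k(TB, B) and leq_k(B, upper)

B1 = (frozenset({"p", "q"}), frozenset({"p"}))       # only p is inconsistent
B2 = (frozenset({"p", "q"}), frozenset({"p", "q"}))  # q's inconsistency disallowed
assert is_c_model(B1)
assert not is_c_model(B2)
```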
The "c" in the term c-model is to emphasize that c-models are "as consistent
as possible", that is, inconsistencies are limited to those that are not explicitly
or implicitly disallowed by the program. The notion of a c-model will play an important role in our considerations.
Clearly, by Theorem 2, a c-model of P is a model of P. In addition, it is easy to see that the necessary change of an annotated program P is a c-model of P (it follows directly from the fact that NC(P) = T̂_P(NC(P))).
The distinction between models and c-models appears only in the context of
inconsistent information. This observation is formally stated below.
Theorem 3. Let P be an annotated revision program. A consistent T²-valuation B is a c-model of P if and only if B is a model of P.
4 Justified revisions
In this section, we will extend to the case of annotated revision programs the
notion of a justified revision introduced for revision programs in [MT95]. The
reader is referred to [MT95,MT98] for the discussion of motivation and intuitions
behind the concept of a justified revision and of the role of the inertia principle
(a version of the closed world assumption).
There are several properties that one would expect to hold when the notion
of justified revision is extended to the case of programs with annotations.
Clearly, the extended concept should specialize to the original definition when annotations can be dropped. Next, all main properties of justified revisions studied in [MT95,MT98] should have their counterparts in the case of justified revisions of annotated programs. In particular, justified revisions under an annotated revision program should satisfy it. Finally, there is one other requirement that naturally arises in the context of programs with annotations.
Consider two annotated revision rules r and r′ that are exactly the same except that the body of r contains two annotated revision atoms (l:β_1) and (l:β_2), while the body of r′ instead of (l:β_1) and (l:β_2) contains the annotated revision atom (l:β_1 ∨ β_2). It is clear that for any T²-valuation B, B satisfies (l:β_1) and (l:β_2) if and only if B satisfies (l:β_1 ∨ β_2). Consequently, replacing rule r by rule r′ (or vice versa) in an annotated revision program should have no effect on justified revisions. In fact, any reasonable semantics for annotated revision programs should be invariant under such an operation, and we will refer to this property of a semantics of annotated revision programs as invariance under join.
In this section we introduce the notion of the justified revision of an annotated
revision program and contrast it with an earlier proposal by Fitting [Fit95]. In
the following section we show that our concept of a justified revision satisfies all
the requirements listed above.
Let a T²-valuation B_I represent our current knowledge about some subset of the universe U. Let an annotated revision program P describe an update that B_I should be subject to. The goal is to identify a class of T²-valuations that could be viewed as representing updated information about the subset, obtained by revising B_I by P. As argued in [MT95,MT98], each appropriately "revised" valuation B_R must be grounded in P and in B_I, that is, any difference between B_I and the revised T²-valuation B_R must be justified by means of the program and the information available in B_I.
To determine whether B_R is grounded in B_I and P, we use the reduct of P with respect to the two valuations. The construction of the reduct consists of two steps and mirrors the original definition of the reduct of an unannotated revision program [MT98]. In the first step, we eliminate from P all rules whose bodies are not satisfied by B_R (their use does not have an a posteriori justification with respect to B_R). In the second step, we take into account the initial valuation B_I.
How can we use the information about the initial T²-valuation B_I at this stage? Assume that B_I provides evidence α for a revision atom l. Assume also that an annotated revision atom (l:β) appears in the body of a rule r. In order to satisfy this premise of the rule, it is enough to derive, from the program resulting from step 1, an annotated revision atom (l:γ), where α ∨ γ ≥ β. The least such element γ exists (due to the fact that T is complete and distributive). Let us denote it by pcomp(α, β) (2).
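In the powerset lattice T_X the element pcomp(α, β), the least γ with α ∨ γ ≥ β, is simply set difference; a two-line sketch (the function name mirrors the text's notation):

```python
# In T_X, the least gamma with (alpha union gamma) containing beta is
# beta minus alpha: the part of beta not already provided by alpha.
def pcomp(alpha, beta):
    return beta - alpha

assert pcomp(frozenset({"p"}), frozenset({"p", "q"})) == frozenset({"q"})
assert pcomp(frozenset({"p", "q"}), frozenset({"p", "q"})) == frozenset()
```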
Thus, in order to incorporate the information about a revision atom l contained in the initial T²-valuation B_I, which is given by (θ⁻¹(B_I))(l), we proceed as follows. In the bodies of rules of the program obtained after step 1, we replace each annotated revision atom of the form (l:β) by the annotated revision atom (l:pcomp(α, β)), where α = (θ⁻¹(B_I))(l).
(2) The operation pcomp(·, ·) is known in lattice theory as the relative pseudocomplement, see [RS70].
Now we are ready to formally introduce the notion of reduct of an annotated
revision program P with respect to the pair of T 2 -valuations, initial one, B I ,
and a candidate for a revised one, BR .
Definition 2. The reduct P_{B_R}|B_I is obtained from P by
1. removing every rule whose body contains an annotated atom that is not satisfied in B_R,
2. replacing each annotated atom (l:β) in the body of each remaining rule by the annotated atom (l:γ), where γ = pcomp((θ⁻¹(B_I))(l), β).
We now define the concept of a justified revision. Given an annotated revision program P, we first compute the reduct P_{B_R}|B_I of the program P with respect to B_I and B_R. Next, we compute the necessary change for the reduced program. Finally, we apply the change thus computed to the T²-valuation B_I. A T²-valuation B_R is a justified revision of B_I if the result of these three steps is B_R. Thus we have the following definition.
Definition 3. B_R is a P-justified revision of B_I if B_R = (B_I ⊗ −C) ⊕ C, where C = NC(P_{B_R}|B_I) is the necessary change for P_{B_R}|B_I.
We will now contrast the above approach with one proposed by Fitting in
[Fit95]. In order to do so, we recall the definitions introduced in [Fit95]. The key
difference is in the way Fitting defines the reduct of a program. The first step
is the same in both approaches. However, the second steps, in which the initial
valuation is used to simplify the bodies of the rules not eliminated in the first
step of the construction, differ.
Definition 4. Let P be an annotated revision program and let B_I and B_R be T²-valuations. The F-reduct of P with respect to (B_I, B_R) (denoted P^F_{B_R}|B_I) is defined as follows:
1. Remove from P every rule whose body contains an annotated revision atom
that is not satisfied in BR .
2. From the body of each remaining rule delete any annotated revision atom
that is satisfied in B I .
The notion of justified revision as defined by Fitting differs from our notion
only in that the necessary change of the F-reduct is used. We call the revision defined using the notion of F-reduct the F-justified revision.
In the remainder of this section we show that the notion of the F-justified
revision does not in general satisfy some basic requirements that we would like
justified revisions to have. In particular, F-justified revisions under an annotated
revision program P are not always models of P .
Example 1. Consider the lattice T_{p,q}. Let P be a program consisting of the following rules:

(in(a):{p}) ← (in(b):{p,q})   and   (in(b):{q}) ←

and let B_I be an initial valuation such that B_I(a) = ⟨∅, ∅⟩ and B_I(b) = ⟨{p}, ∅⟩. Let B_R be a valuation given by B_R(a) = ⟨∅, ∅⟩ and B_R(b) = ⟨{p,q}, ∅⟩. Then B_R is an F-justified revision of B_I (under P). However, B_R does not satisfy P.
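The failure of B_R to satisfy P can be confirmed with a small model check; the encoding and helper names below are ours (a literal is (atom, which, annotation) with which = 0 for in-evidence and 1 for out-evidence):

```python
# Model check for Example 1: the body of the first rule holds in B_R,
# but its head (in(a):{p}) does not, so B_R is not a model of P.
def satisfied(B, lit):
    atom, which, ann = lit
    return ann <= B[atom][which]

def is_model(B, rules):
    # Every rule whose body B satisfies must have a satisfied head.
    return all(satisfied(B, head) or not all(satisfied(B, b) for b in body)
               for head, body in rules)

pq, p, q, e = (frozenset({"p", "q"}), frozenset({"p"}),
               frozenset({"q"}), frozenset())
P = [(("a", 0, p), [("b", 0, pq)]),    # (in(a):{p}) <- (in(b):{p,q})
     (("b", 0, q), [])]                # (in(b):{q}) <-
B_R = {"a": (e, e), "b": (pq, e)}
assert not is_model(B_R, P)
```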
The semantics of F -justified revisions also fails to satisfy the invariance under
join property.
Example 2. Let P be a revision program consisting of the following rules:

(in(a):{p}) ← (in(b):{p,q})   and   (in(b):{q}) ←

and let P′ consist of

(in(a):{p}) ← (in(b):{p}), (in(b):{q})   and   (in(b):{q}) ←

Let the initial valuation B_I be given by B_I(a) = ⟨∅, ∅⟩ and B_I(b) = ⟨{p}, ∅⟩. The only F-justified revision of B_I (under P) is the T²-valuation B_R, where B_R(a) = ⟨∅, ∅⟩ and B_R(b) = ⟨{p,q}, ∅⟩. The only F-justified revision of B_I (under P′) is the T²-valuation B′_R, where B′_R(a) = ⟨{p}, ∅⟩ and B′_R(b) = ⟨{p,q}, ∅⟩. Thus, replacing in the body of a rule the annotated revision atom (in(b):{p,q}) by (in(b):{p}) and (in(b):{q}) affects F-justified revisions.
However, in some cases the two definitions of justified revision coincide. The
following result provides a complete characterization of those cases.
Theorem 4. F-justified revisions and justified revisions coincide if and only if the lattice T is linear (that is, for any two elements α, β ∈ T, either α ≤ β or β ≤ α).
Theorem 4 explains why the difference between justified revisions and F-justified revisions is not seen when we limit our attention to revision programs as those considered in [MT98]. Namely, the lattice TWO of boolean values is linear. Similarly, the lattice of reals from the segment [0,1] is linear, and there the differences cannot be seen either.
5 Properties of justified revisions
In this section we study basic properties of justified revisions. We show that
key properties of justified revisions in the case of revision programs without
annotations have their counterparts in the case of justified revisions of annotated
revision programs.
First, we will observe that revision programs as defined in [MT95] can be encoded as annotated revision programs (with annotations taken from the lattice TWO of boolean values). Namely, a revision rule p ← q_1, …, q_n (where p and all the q_i are revision atoms) can be encoded as an annotated rule in which every revision atom is annotated with the top element of TWO.
In [Fit95], Fitting argued that under this encoding the semantics of F-justified
revisions generalizes the semantics of justified revisions introduced in [MT95].
Since for lattices whose ordering is linear the approach by Fitting and the approach
presented in this paper coincide, and since the ordering of TWO is linear,
the semantics of justified revisions discussed here extends the semantics of justified
revisions from [MT95].
Next, let us recall that in the case of revision programs without annotations,
justified revisions under a revision program P are models of P . In the case of
annotated revision programs we have a similar result.
Theorem 5. Let P be an annotated revision program and let B_I and B_R be T²-valuations. If B_R is a P-justified revision of B_I then B_R is a c-model of P (and, hence, also a model of P).
In the case of revision programs without annotations, a model of a program
P is its unique P -justified revision. In the case of programs with annotations,
the situation is slightly more complicated. The next result characterizes those
models of an annotated revision program that are their own justified revisions.
Theorem 6. Let a T²-valuation B_I be a model of an annotated revision program P. Then B_I is a P-justified revision of itself if and only if B_I is a c-model of P.
As we observed above, in the case of programs without annotations, models
of a revision program are their own unique justified revisions. This property does
not hold, in general, in the case of annotated revision programs.
Example 3. Consider an annotated revision program P (with annotations belonging to T_{p,q}) consisting of the rules:

(out(a):{q}) ←   and   (in(a):{q}) ← (in(a):{q})

Consider a T²-valuation B_I such that B_I(a) = ⟨{q}, {q}⟩. It is easy to see that B_I is a c-model of P. Hence, B_I is its own justified revision (under P).
However, B_I is not the only P-justified revision of B_I. Consider the T²-valuation B_R such that B_R(a) = ⟨∅, {q}⟩. We have P_{B_R}|B_I = {(out(a):{q}) ←}. Let us denote the corresponding necessary change, NC(P_{B_R}|B_I), by C. Then C(a) = ⟨∅, {q}⟩ and ((B_I ⊗ −C) ⊕ C)(a) = ⟨∅, {q}⟩ = B_R(a). Consequently, B_R is a P-justified revision of B_I.
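Both revisions in Example 3 can be verified computationally; the sketch below hard-codes the reduct of this two-rule program, and all helper names are ours:

```python
# Verify Example 3: both B_I itself and <{}, {q}> are P-justified
# revisions of B_I = <{q}, {q}> over T_{p,q}.
X = frozenset({"p", "q"})

def conflation(A):
    return (X - A[1], X - A[0])

def revise(B, C):                      # (B meet -C) join C
    mC = conflation(C)
    return ((B[0] & mC[0]) | C[0], (B[1] & mC[1]) | C[1])

def necessary_change_of_reduct(B_I, B_R):
    """The fact (out(a):{q}) always survives; the self-supporting rule
    (in(a):{q}) <- (in(a):{q}) survives step 1 exactly when B_R satisfies
    its body, and step 2 then makes that body trivially satisfiable."""
    change_in = frozenset()
    if frozenset({"q"}) <= B_R[0]:     # body (in(a):{q}) holds in B_R
        change_in |= frozenset({"q"})
    return (change_in, frozenset({"q"}))

B_I = (frozenset({"q"}), frozenset({"q"}))
for B_R in [B_I, (frozenset(), frozenset({"q"}))]:
    C = necessary_change_of_reduct(B_I, B_R)
    assert revise(B_I, C) == B_R       # each candidate reproduces itself
```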
The same behavior can be observed in the case of programs annotated with
elements from other lattices.
Example 4. Let P be an annotated revision program (annotations belong to the lattice T_[0,1]) consisting of the rules:

(out(a):1) ←   and   (in(a):0.4) ← (in(a):0.4)

Let B_I be a valuation such that B_I(a) = ⟨0.4, 1⟩. Then B_I is a c-model of P and, hence, it is its own P-justified revision. Consider a valuation B_R such that B_R(a) = ⟨0, 1⟩. We have P_{B_R}|B_I = {(out(a):1) ←}. Let us denote the necessary change NC(P_{B_R}|B_I) by C. Then C(a) = ⟨0, 1⟩. Thus, ((B_I ⊗ −C) ⊕ C)(a) = ⟨0, 1⟩ = B_R(a). That is, B_R is a P-justified revision of B_I.
Note that in both examples the additional justified revision B_R of B_I is smaller than B_I with respect to the ordering ≤_k. This is not coincidental, as demonstrated by our next result.
Theorem 7. Let B_I be a model of an annotated revision program P. Let B_R be a P-justified revision of B_I. Then B_R ≤_k B_I.
Finally, we observe that if a consistent T 2 -valuation is a model (or a c-model;
these notions coincide in the class of consistent valuations) of a program then,
it is its unique justified revision.
Theorem 8. Let B_I be a consistent model of an annotated revision program P. Then B_I is the only P-justified revision of itself.
To summarize, when we consider inconsistent valuations (they appear naturally, especially when we measure beliefs of groups of independent experts), we encounter an interesting phenomenon. An inconsistent valuation B_I, even when
it is a model of a program, may have different justified revisions. However, all
these additional revisions must be less informative than B I . In the case of consistent
models this phenomenon does not occur. If a valuation B is consistent
and satisfies P then it is its unique P -justified revision.
6 An alternative way of describing annotated revision
programs and order-isomorphism theorem
We will now provide an alternative description of annotated revision programs.
Instead of evaluating separately revision atoms (i.e. expressions of the form in(a)
and out(a)) we will evaluate atoms. However, instead of evaluating revision
atoms in T, we will evaluate atoms in T² (i.e., T × T). This alternative presentation
will allow us to obtain a result on the preservation of justified revisions
under order isomorphisms of T 2 . This result is a generalization of the "shift
theorem" of [MPT99].
An expression of the form a⟨α, β⟩, where ⟨α, β⟩ ∈ T², will be called an annotated atom (thus, annotated atoms are not annotated revision atoms). Intuitively, an atom a⟨α, β⟩ stands for both (in(a):α) and (out(a):β). An annotated rule is an expression of the form p ← q_1, …, q_n, where p and all the q_i are annotated atoms. An annotated program is a set of annotated rules.
A T²-valuation B satisfies an annotated atom a⟨α, β⟩ if ⟨α, β⟩ ≤_k B(a). This notion of satisfaction can be extended to annotated rules and annotated programs.
We will now define the notions of reduct, necessary change and justified
revision for the new kind of program. The reduct of a program P with respect
to two valuations B I and BR is defined in a manner similar to Definition 2.
Specifically, we leave only the rules with bodies that are satisfied by BR , and
in the remaining rules we reduce the annotated atoms (except that now the
transformation θ is no longer needed!). Next, we compute the least fixpoint of
the operator associated with the reduced program. Finally, as in Definition 3,
we define the concept of justified revision of a valuation B I with respect to a
revision program P .
It turns out that this new syntax does not lead to a new notion of justified
revision. Since we talk about two different syntaxes, we will use the term "old
syntax" to denote the revision programs as defined in Section 2, and "new syn-
tax" to describe programs introduced in this section. Specifically we now exhibit
two mappings. The first of them, tr_1, assigns to each "old" in-rule

(in(a):α) ← (in(b_1):β_1), …, (in(b_m):β_m), (out(c_1):γ_1), …, (out(c_n):γ_n)

a "new" rule

a⟨α, ⊥⟩ ← b_1⟨β_1, ⊥⟩, …, b_m⟨β_m, ⊥⟩, c_1⟨⊥, γ_1⟩, …, c_n⟨⊥, γ_n⟩

Encoding of an "old" out-rule with the same body is analogous: its head (out(a):α) is translated into a⟨⊥, α⟩.
Translation tr_2, in the other direction, replaces a "new" rule by one in-rule and one out-rule. Specifically, a "new" rule

a⟨α, β⟩ ← a_1⟨α_1, β_1⟩, …, a_n⟨α_n, β_n⟩

is replaced by two "old" rules (with identical bodies but different heads)

(in(a):α) ← (in(a_1):α_1), (out(a_1):β_1), …, (in(a_n):α_n), (out(a_n):β_n)

and

(out(a):β) ← (in(a_1):α_1), (out(a_1):β_1), …, (in(a_n):α_n), (out(a_n):β_n)

The translations tr_1 and tr_2 can be extended to programs. We then have the following theorem.
Theorem 9. Both transformations tr_1 and tr_2 preserve justified revisions. That is, if B_I, B_R are valuations in T² and P is a program in the "old" syntax, then B_R is a P-justified revision of B_I if and only if B_R is a tr_1(P)-justified revision of B_I. Similarly, if B_I, B_R are valuations in T² and P is a program in the "new" syntax, then B_R is a P-justified revision of B_I if and only if B_R is a tr_2(P)-justified revision of B_I.
In the case of unannotated revision programs, the shifting theorem proved in [MPT99] shows that for every revision program P and every two initial databases B and B′ there is a revision program P′ such that there is a one-to-one correspondence between P-justified revisions of B and P′-justified revisions of B′. In particular, it follows that the study of justified revisions (for unannotated programs) can be reduced to the study of justified revisions of empty databases. We will now present a counterpart of this result for annotated revision programs. The situation here is more complex. It is no longer true that a T²-valuation can be "shifted" to any other T²-valuation. However, the shift is possible if the two valuations are related to each other by an order isomorphism of the lattice of all T²-valuations.
There are many examples of order isomorphisms on the lattice of T 2 -valua-
tions. For instance, the mapping defined by /(hff;
is an order isomorphism of T 2 . In the case of a specific lattice
isomorphisms of T 2
are generated by permutations of the set X . An order isomorphism
on T 2 can be extended to annotated atoms, programs and valuations.
The extension to valuations is again an order isomorphism, this time on the
lattice of all T 2 -valuations.
The following result generalizes the shifting theorem of [MPT99].
Theorem 10. Let φ be an order isomorphism on the set of T²-valuations. Then,
B_R is a P-justified revision of B_I if and only if φ(B_R) is a φ(P)-justified revision
of φ(B_I).
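On a small finite lattice, the claim that a concrete map is an order isomorphism can be checked by brute force. The sketch below is ours, under stated assumptions: T is a three-element chain of belief degrees, T² is ordered componentwise, and the coordinate-swap map is the candidate isomorphism.

```python
from itertools import product

# T: a small chain of belief degrees; T2 = T x T, ordered componentwise.
T = [0.0, 0.5, 1.0]
T2 = list(product(T, T))

def leq(p, q):
    """Componentwise order on T2."""
    return p[0] <= q[0] and p[1] <= q[1]

def phi(p):
    """Candidate order isomorphism: swap the two coordinates."""
    return (p[1], p[0])

# phi is a bijection of T2 that both preserves and reflects the order.
assert sorted(map(phi, T2)) == sorted(T2)
assert all(leq(p, q) == leq(phi(p), phi(q)) for p in T2 for q in T2)
```

The same brute-force check applies to any finite candidate map, which makes it a convenient sanity test before invoking the shifting theorem.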
7 Conclusions and further research
The main contribution of our paper is a new definition of the reduct (and hence of
justified revision) for the annotated programs considered by Fitting in [Fit95].
This new definition eliminates some anomalies (specifically, the fact that the
justified revisions of [Fit95] do not have to be models of the program). We also
found that in cases where the intuition of [Fit95] is very clear (for instance,
when annotations are numerical degrees of belief), the two concepts coincide.
Due to the limited space of the extended abstract, some results were not
included. Below we briefly mention two research areas that are not discussed
here but that will be discussed in the full version of the paper.
First, annotated revision programs can be generalized to the disjunctive case, that
is, to programs admitting "nonstandard disjunctions" in the heads of rules. It
turns out that a definition of justified revisions for such programs is
possible, and one can prove that disjunctive revisions for programs whose
heads consist of a single literal reduce to the formalism described above.
Second, one can extend the formalism of annotated revision programs to the
case where the lattice of annotations is not distributive. However, in that case,
only some of the results discussed here still hold.
Acknowledgments
This work was partially supported by the NSF grants CDA-9502645 and IRI-
9619233.
--R
Annotated revision specification programs.
Fixpoint semantics for logic programming - a survey
Multivalued logics: a uniform approach to reasoning in artificial intelligence.
Theory of generalized annotated logic programs and its applications.
Answer sets in general nonmonotonic reasoning.
Revision programming
Revision programming
Revision programming.
Update by means of inference rules.
The Mathematics of metamathematics.
--TR
Quantitative deduction and its fixpoint theory
Theory of generalized annotated logic programming and its applications
Stable semantics for probabilistic deductive databases
Revision programming
The Semantics of Predicate Logic as a Programming Language
Fixpoint semantics for logic programming a survey
Revision Programming, Database Updates and Integrity Constraints
Annotated Revision Specification Programs
Revision Programming = Logic Programming
--CTR
Logic programming and knowledge representation-the A-prolog perspective, Artificial Intelligence, v.138 n.1-2, p.3-38, June 2002 | database updates;annotated programs;belief revision;revision programming;knowledge representation |
570546 | Specifying and Verifying a Broadcast and a Multicast Snooping Cache Coherence Protocol. | In this paper, we develop a specification methodology that documents and specifies a cache coherence protocol in eight tables: the states, events, actions, and transitions of the cache and memory controllers. We then use this methodology to specify a detailed, modern three-state broadcast snooping protocol with an unordered data network and an ordered address network that allows arbitrary skew. We also present a detailed specification of a new protocol called Multicast Snooping and, in doing so, we better illustrate the utility of the table-based specification methodology. Finally, we demonstrate a technique for verification of the Multicast Snooping protocol, through the sketch of a manual proof that the specification satisfies a sequentially consistent memory model. |
1. High-level specification for cache controller
Since, at a high level, cache coherence protocols are simply finite state machines, it would appear at first
glance that it would be easy to specify and verify a common three state (MSI) broadcast snooping protocol.
Unfortunately, at the level of detail required for an actual implementation, even seemingly straightforward
protocols have numerous transient states and possible race conditions that complicate the tasks of specifica-
tion and verification. For example, a single cache controller in a simple MSI protocol that we will specify
in Section 2.1 has 11 states (8 of which are transient), 13 possible events, and 21 actions that it may perform.
The other system components are similarly complicated, and the interactions of all of these components are
difficult to specify and verify.
Why is verification important? Rigorous verification is important, since the complexity of a low-level,
implementable protocol makes it difficult to design without any errors. Many protocol errors can be uncovered
by simulation. Simulation with random testing has been shown to be effective at finding certain classes
of bugs, such as lost protocol messages and some deadlock conditions [27]. However, simulation tends not
to be effective at uncovering subtle bugs, especially those related to the consistency model. Subtle consistency
bugs often occur only under unusual combinations of circumstances, and it is unlikely that un-directed
(or random) simulation will drive the protocol to these situations. Thus, systematic and perhaps more formal
verification techniques are needed to expose these subtle bugs.
Verification requires a detailed, low-level specification. Systematic verification of an implementable
cache coherence protocol requires a low-level, detailed specification of the entire protocol. While there exist
numerous verification techniques, all of these techniques seek to show that an implementable specification
meets certain invariants. Verifying an abstract specification only shows that the abstract protocol is correct.
For example, the verification of a high-level specification which omits transient states may show that invariants
hold for this abstraction of the protocol, but it will not show that an implementable version of this protocol
obeys these invariants.
Current specifications are not sufficient. Specifications that have been published in the literature have not
been sufficiently detailed for implementation purposes, and they are thus not suitable for verification pur-
poses. In academia, protocol specifications tend to be high-level, because a complete low-level specification
may not be necessary for the goal of publishing research [4,7,13]. Moreover, a complete low-level specifica-
tion without a concise format does not lend itself to publication in academia. In industry, low-level, detailed
specifications are necessary and exist, but, to the best of our knowledge, none have been published in the lit-
erature. These specifications often match the hardware too closely, which complicates verification and limits
alternative implementations but eliminates the problem of verifying that the implementation satisfies the
specification.
A new table-based specification technique that is sufficient for verification. To address the need for concise
low-level specifications, we have developed a table-based specification methodology. For each system
component that participates in the coherence protocol, there is a table that specifies the component's behavior
with respect to a given cache block. As an illustrative example, Table 1 shows a specification for a sim-
plified atomic cache controller.
The rows of the table correspond to the states that the component can enter, the columns correspond to the
events that can occur, and the entries themselves are the actions taken and resulting state that occur for that
combination of state and event. The actions are coded with letters which are defined below the table. For
example, the entry a/S denotes that a Load event at the cache controller for a block in state I causes the cache
controller to perform a Get Shared and enter state S.
This simple example, however, does not show the power of our specification methodology, because it does
not include the many transient states possessed by realistic coherence protocols. For simple atomic proto-
cols, the traditional specification approach of drawing up state transition diagrams is tractable. However,
non-atomic transactions cause an explosion in the state space, since events can occur between when a
Table 1. Simplified Atomic Cache Controller Transitions

State | Load | Store | Other GETS | Other GETX
I     | a/S  | c/M   |            |
M     |      |       |            | d/I

a: perform Get-Shared      d: send data to requestor
c: perform Get-Exclusive   m: send data to memory
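Read this way, the table is directly executable: a dictionary keyed by (state, event) yields the actions and the next state. The sketch below is a plausible filling-in for illustration only; the I-row entries a/S and c/M come from the text, while the remaining cells are our assumptions about a transient-free MSI protocol.

```python
# Table-driven interpreter for a simplified atomic MSI controller.
# Only the I-row entries (a/S and c/M) are taken from Table 1; the other
# cells are assumed for illustration.
TRANSITIONS = {
    ("I", "Load"):       ("a",  "S"),  # perform Get-Shared, enter S
    ("I", "Store"):      ("c",  "M"),  # perform Get-Exclusive, enter M
    ("S", "Other GETX"): ("",   "I"),  # silently invalidate (assumed)
    ("M", "Other GETS"): ("dm", "S"),  # send data to requestor and memory (assumed)
    ("M", "Other GETX"): ("d",  "I"),  # send data to requestor, invalidate
}

def step(state, event):
    """Return (action letters, next state); missing cells raise KeyError."""
    return TRANSITIONS[(state, event)]

state = "I"
actions, state = step(state, "Load")   # actions == "a", state == "S"
```

The KeyError on a missing cell corresponds to a state/event combination the designer believes cannot occur, which is exactly the kind of assumption later turned into a runtime assertion.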
request is issued and when it completes, and numerous transient states are used to capture this behavior.
Section 2 illustrates the methodology with a more realistic broadcast snooping protocol and a multicast
snooping protocol [5].
A methodology for proving that table-based specifications are correct. Using our table-based specifica-
tion methodology, we present a methodology for proving that a specification is sequentially consistent, and
we show how this methodology can be used to prove that our multicast protocol satisfies SC. Our method
uses an extension of Lamport's logical clocks [16] to timestamp the load and store operations performed by
the protocol. Timestamps determine how operations should be reordered to witness SC, as intended by the
designer of the protocol. Thus, associated with any execution of the augmented protocol is a sequence of
timestamped operations that witnesses sequential consistency of that execution. Logical clocks and the associated
timestamping actions are, in effect, a conceptual augmentation of the protocol and are specified using
the same table-based transition tables as the protocol itself. We note that the set of all possible operation
traces of the protocol equals that of the augmented protocol, and that the logical clocks are purely conceptual
devices introduced for verification purposes and are never implemented in hardware. We consider the process
of specifying logical clocks and their actions to be intuitive for the designer of the protocol, and indeed
the process is a valuable debugging tool in its own right.
A straightforward invariant of the augmented protocol guarantees that the protocol is sequentially consistent.
Namely, for all executions of the augmented protocol, the associated timestamped sequence of LDs and STs
is consistent with the program order of operations at all processors and the value of each LD equals that of
the most recent ST. To prove this invariant, numerous other support invariants are added as needed. It can
be shown that all executions of the protocol satisfy all invariants by induction on the length of the execution.
This involves a tedious case-by-case analysis of each possible transition of the protocol and each invariant.
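The invariant itself is easy to state operationally. The following Python sketch, using our own names and trace encoding rather than the paper's proof machinery, checks a timestamped trace: reordered by logical timestamp, the trace must respect per-processor program order, and every LD must return the value of the most recent ST to its address (memory assumed zero-initialized, timestamps assumed unique).

```python
# Each operation: (timestamp, processor, seq_in_program, kind, addr, value).
def witnesses_sc(ops):
    trace = sorted(ops)                   # reorder by logical timestamp
    # 1. Timestamp order must respect program order at every processor.
    last_seq = {}
    for ts, proc, seq, kind, addr, val in trace:
        if last_seq.get(proc, -1) >= seq:
            return False
        last_seq[proc] = seq
    # 2. Every LD must return the value of the most recent ST to its address.
    mem = {}
    for ts, proc, seq, kind, addr, val in trace:
        if kind == "ST":
            mem[addr] = val
        elif mem.get(addr, 0) != val:     # memory assumed initialized to 0
            return False
    return True

ops = [(1, 0, 0, "ST", "A", 7), (2, 1, 0, "LD", "A", 7)]
# witnesses_sc(ops) -> True
```

A trace that fails this check is a counterexample to the designer's intended witness, which is why the timestamping discipline doubles as a debugging tool.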
To summarize, the strengths of our methodology are that the process of augmenting the protocol with timestamping
is useful in designing correct protocols, and an easily-stated invariant of the augmented protocol
guarantees sequential consistency. However, our methodology also involves tedious case-by-case proofs that
transitions respect invariants. To our knowledge, no automated approach is known that avoids this type of
case analysis. Because the problem of verifying SC is undecidable, automated approaches have been proved
to work only for a limited class of protocols (such as those in which a finite state observer can reorder operations
in order to find a witness to sequential consistency [14]) that does not include the protocols of this
paper. We will discuss other verifications techniques and compare them to ours in Section 4.
What have we contributed? This paper makes four contributions. First, we develop a new table-based
specification methodology that allows us to concisely describe protocols. Second, we provide a detailed,
low-level specification of a three-state broadcast snooping protocol with an unordered data network and an
address network which allows arbitrary skew. Third, we present a detailed, low-level specification of multicast
snooping [5], and, in doing so, we better illustrate the utility of the table-based specification methodology.
The specification of this more complicated protocol is thorough enough to warrant verification. Fourth,
we demonstrate a technique for verification of the Multicast Snooping protocol, through the sketch of a manual
proof that the specification satisfies a sequentially consistent memory model.
2 Specifying Broadcast and Multicast Snooping Protocols
In this section, we demonstrate our protocol specification methodology by developing two protocols: a
broadcast snooping protocol and a multicast snooping protocol. Both protocols are MSI (Modified, Shared,
Invalid) and use eight tables to document and specify:
the states, events, actions, and transitions of the cache controller
the states, events, actions, and transitions of the memory controller
The controllers are state machines that communicate via queues, and events correspond to messages being
processed from incoming queues. The actions taken when a controller services an incoming queue, including
enqueuing messages on outgoing queues, are considered atomic.
2.1 Specifying a Broadcast Snooping Protocol
In this section, we shall specify the behavior of an MSI broadcast snooping protocol.
2.1.1 System Model and Assumptions
The broadcast snooping system is a collection of processor nodes and memory nodes (possibly collocated)
connected by two logical networks (possibly sharing the same physical network), as shown in Figure 2.
A processor node contains a CPU, cache, and a cache controller which includes logic for implementing the
coherence protocol. It also contains queues between the CPU and the cache controller. The Mandatory queue
contains Loads (LDs) and Stores (STs) requested by the CPU, and they are ordered by program order. LD
and ST entries have addresses, and STs have data. The Optional queue contains Read-Only and Read-Write
Prefetches requested by the CPU, and these entries have addresses. The Load/Store Data queue contains the
LD/ST from the Mandatory queue and its associated data (in the case of a LD). A diagram of a processor
node is also shown in Figure 2.
[Figure 2. Broadcast Snooping System: processor nodes (CPU; Mandatory, Optional, and Load/Store Data FIFO queues; cache and controller; TBEs) and memory nodes connected by a broadcast address network and a point-to-point data network.]
The memory space is partitioned among one or more memory nodes. A memory node is responsible for responding to
coherence requests with data if it is the current owner (i.e., no processor node has the block Modified). It
also receives writebacks from processors and stores this data to memory.
The two logical networks are a totally ordered broadcast network for address messages and an unordered
unicast network for data messages. The address network supports three types of coherence requests: GETS
(Get-Shared), GETX (Get-Exclusive) and PUTX (Dirty-Writeback). Protocol transactions are address messages
that contain a data block address, coherence request type (GETX, GETS, PUTX), and the ID of the
requesting processor. Data messages contain the data and the data block address.
All of the components in the system make transitions based on their current state and current event (e.g., an
incoming request), and we will specify the states, events, and transitions for each component in the rest of
this section. There are many components that make transitions on many blocks of memory, and these transitions
can happen concurrently. We assume, however, that the system appears to behave as if all transitions
occur atomically.
2.1.2 Network Specification
The network consists of two logical networks. The address network is a totally ordered broadcast network.
Total ordering does not, however, imply that all messages are delivered at the same time. For example, in an
asynchronous implementation, the path to one node may take longer than the path to another node. The
address network carries coherence requests. A transition of the address network is modeled as atomically
transferring an address message from the output queue of a node to the input queues of all of the nodes, thus
inserting the message into the total order of address messages.
The data network is an unordered point-to-point network for delivering responses to coherence requests. A
transition of the data network is modeled as atomically transferring a data message from the output queue of
a node to the input queue of the destination node.
All nodes are connected to the networks via queues, and all we assume about these queues is that address
queues from the network to the nodes are served in FIFO order. Data queues and address queues from the
nodes to the network can be served without this restriction. For example, this allows a processor node's
GETX to pass its PUTX for the victim block.
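The address network's total order can be made concrete with a small model. The sketch below uses our own names and is not from the paper: one network transition atomically moves a message from a node's outgoing address queue into every node's incoming address queue, which is what inserts it into the single global order.

```python
from collections import deque

# Model sketch (our names): an address-network transition atomically delivers
# one outgoing address message to EVERY node's incoming queue, inserting it
# into the total order of address messages.
class AddressNetwork:
    def __init__(self, n_nodes):
        self.out_q = [deque() for _ in range(n_nodes)]
        self.in_q = [deque() for _ in range(n_nodes)]
        self.total_order = []            # conceptual: the global order so far

    def transition(self, sender):
        msg = self.out_q[sender].popleft()
        self.total_order.append(msg)
        for q in self.in_q:              # delivered to all nodes, sender included
            q.append(msg)
        return msg

net = AddressNetwork(3)
net.out_q[0].append(("GETS", "B", 0))    # (type, address, requestor)
net.transition(0)
# every incoming queue now holds the GETS in the same global position
```

Skew is captured by the fact that each node drains its incoming queue at its own pace; only the relative order of messages within each queue is fixed.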
2.1.3 CPU Specification
A transition of the CPU occurs when it places a LD or ST in the Mandatory queue, places a Prefetch in the
Optional queue, or removes data from the LD/ST data queue. It can perform these transitions at any time.
2.1.4 Cache Controller Specification
In each transition, a cache controller may inspect the heads of its incoming queues, inject new messages into
its queues, and make appropriate state changes. All we assume about serving incoming queues is that no
queue is starved and that the Address, Mandatory, and Optional queues are served in strict FIFO order. The
actions taken when a queue is served are considered atomic in that they are all done before another queue
(including the same queue) is served. Before any of the actions are taken, however, the cache controller
checks to ensure that resources, such as space in an outgoing queue or an allocated TBE, are available for all
of the actions. If the sum of the resources required for all of the actions is not available, then the cache controller
aborts the transition, performs none of the actions, and waits for resources to become available (where
we define a cache block to be available for a LD/ST if either the referenced block already exists in the cache
or there exists an empty slot which can accommodate the referenced block when it is received from external
sources). The exception to this rule is having an available block in the cache, and this situation is handled by
treating a LD, ST, or Prefetch for which no cache block is available as a Replacement event for the victim
block.
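The all-or-nothing resource rule can be sketched directly. The fragment below is our own formulation for illustration: before any action of a transition runs, the controller sums the resources the actions would consume and aborts the whole transition if anything is unavailable.

```python
# Sketch (our formulation) of the all-or-nothing resource check a cache
# controller performs before executing a transition's actions.
def try_transition(actions, available):
    """actions: list of dicts of resource needs; available: dict of free counts."""
    needed = {}
    for act in actions:
        for res, n in act.items():
            needed[res] = needed.get(res, 0) + n
    if any(available.get(res, 0) < n for res, n in needed.items()):
        return False                      # abort: perform none of the actions
    for res, n in needed.items():         # commit: reserve everything at once
        available[res] -= n
    return True

avail = {"tbe": 1, "addr_out_slot": 1}
ok = try_transition([{"tbe": 1}, {"addr_out_slot": 1}], avail)   # True: commits
ok2 = try_transition([{"tbe": 1}], avail)                        # False: no TBE left
```

Checking the sum up front, rather than per action, is what makes the whole transition atomic with respect to resource exhaustion.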
If the request at the head of the Mandatory or Optional queue cannot be serviced (because the block is not
present with the correct permissions or a transaction for the block is outstanding), then no further requests
from that queue can be serviced. Optional requests can be discarded without affecting correctness.
The cache controller keeps a count of all outstanding coherence transactions issued by that node and, for
each such transaction, one Transaction Buffer Entry (TBE) is reserved. No transactions can be issued if there
is no space in the outgoing address queue or if there is already an outstanding transaction for that block. A
TBE contains the address of the block requested, the current state of the transaction, and any data received.1
1. The data field in the TBE may not be required. An implementation may be able to use the cache's data array to buffer
the data for the block. This modification reduces the size of a TBE and avoids specific actions for transferring data from
the TBE to the cache data array.
Table 2. Broadcast Snooping Cache Controller States

State  Cache State  Description
-- Stable states --
I      I            invalid
S      S            shared
M      M            modified
-- Transient states --
ISAD   busy         invalid, issued GETS, have not seen GETS or data yet
ISA    busy         invalid, issued GETS, have not seen GETS, have seen data
ISD    busy         invalid, issued GETS, have seen GETS, have not seen data yet
IMAD   busy         invalid, issued GETX, have not seen GETX or data yet
IMA    busy         invalid, issued GETX, have not seen GETX, have seen data
IMD    busy         invalid, issued GETX, have seen GETX, have not seen data yet
MIA    I            modified, issued PUTX, have not seen PUTX yet
IIA    I            modified, issued PUTX, have not seen PUTX, then saw other GETS or GETX (reachable from MIA)
The possible block states and descriptions of these states are listed in Table 2. Note that there are two types
of states for a cache block: the stable state and the transient state. The stable state is one of M (Modi-
fied), S (Shared), or I (Invalid), it is recorded in the cache, and it indicates the state of the block before the
latest outstanding transaction for that block (if any) started. The transient state, as shown in Table 2, is
recorded in a TBE, and it indicates the current state of an outstanding transaction for that block (if any).
When future tables refer to the state of a block, it is understood that this state is obtained by returning the
transient state from a TBE (if there is an outstanding transaction for this block) or else (if there is no outstanding
transaction) by accessing the cache to obtain the stable state. Blocks not present in the cache are
assumed to have the stable state of I. Each transient state has an associated cache state, as shown in Table 2,
assuming that the tag matches in the cache. A cache state of busy implies that there is a TBE entry for this
block, and its state is a transient state other than MIA or IIA.
To represent the transient states symbolically, we have developed an encoding of these transient states which
consists of a sequence of two or more stable states (initial, intended, and zero or more pending states), where
the second state has a superscript which denotes which part(s) of the transaction - address (A) and/or data
(D) - are still outstanding. For example, a processor which has block B in state I, sends a GETS into the
Address-Out queue, and sees the data response but has not yet seen the GETS, would have B in state ISA.
When the GETS arrives, the state becomes S.
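This naming scheme makes the transient states easy to track mechanically. The sketch below encodes the GETS path from Table 2 as a transition map; the underscore naming (IS_AD for ISAD) and the event names are our own encoding for readability.

```python
# Sketch of the transient-state naming scheme for the GETS path (states from
# Table 2): the superscript records which parts of the transaction, address (A)
# and/or data (D), are still outstanding.
NEXT = {
    ("IS_AD", "OwnGETS"): "IS_D",   # address observed, data still outstanding
    ("IS_AD", "Data"):    "IS_A",   # data arrived first, address outstanding
    ("IS_A",  "OwnGETS"): "S",      # both parts seen: stable Shared
    ("IS_D",  "Data"):    "S",
}

state = "IS_AD"
state = NEXT[(state, "Data")]       # the scenario in the text: data before GETS
state = NEXT[(state, "OwnGETS")]    # now the GETS arrives: stable S
```

Either interleaving of the address and data arrivals converges to the same stable state, which is exactly what the two-superscript encoding is designed to make obvious.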
Events at the cache controller depend on incoming messages. The events are listed and described in Table 3.
Note that, in the case of Replacements, block B refers to the address of the victim block. The allowed cache
controller actions are listed in Table 4. Cache controller behavior is detailed in Table 5, where each entry
contains a list of <actions / next state> tuples. When the current state of a block corresponds to the row of
Table 3. Broadcast Snooping Cache Controller Events

Event | Description | Block B
Load | LD at head of Mandatory queue | address of LD at head of Mandatory queue
Read-Only Prefetch | Read-Only Prefetch at head of Optional queue | address of Read-Only Prefetch at head of Optional queue
Store | ST at head of Mandatory queue | address of ST at head of Mandatory queue
Read-Write Prefetch | Read-Write Prefetch at head of Optional queue | address of Read-Write Prefetch at head of Optional queue
Mandatory Replacement | LD/ST at head of Mandatory queue for which no cache block is available | address of victim block for LD/ST at head of Mandatory queue
Optional Replacement | Read-Write Prefetch at head of Optional queue for which no cache block is available | address of victim block for Prefetch at head of Optional queue
Own GETS | Occurs when we observe our own GETS request in the global order | address of transaction at head of incoming address queue
Own GETX | Occurs when we observe our own GETX request in the global order | same as above
Own PUTX | Occurs when we observe our own PUTX request in the global order | same as above
Other GETS | Occurs when we observe a GETS request from another processor | same as above
Other GETX | Occurs when we observe a GETX request from another processor | same as above
Other PUTX | Occurs when we observe a PUTX request from another processor | same as above
Data | Data for this block from the data network | address of data message at head of incoming data queue
the entry and the next event corresponds to the column of the entry, then the specified actions are performed
and the state of the block is changed to the specified new state. If only a next state is listed, then no action is
required. All shaded cases are impossible.
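The entry format itself is mechanical enough to interpret in code. The sketch below is our parsing, not the paper's implementation; the single-letter action codes are those of Table 4, and the sample entries are illustrative.

```python
# Sketch of an interpreter for table entries such as "caf/ISAD".
class ImpossibleTransition(Exception):
    """Raised for shaded (impossible) state/event combinations."""

def apply_entry(table, state, event):
    entry = table.get((state, event))
    if entry is None:                      # shaded cell: must never occur
        raise ImpossibleTransition((state, event))
    if entry == "z":                       # z: cannot be handled right now
        return [], state                   # leave the message queued, stall
    if "/" in entry:
        actions, next_state = entry.split("/")
        return list(actions), next_state
    return [], entry                       # only a next state listed: no actions

TABLE = {("I", "Load"): "caf/ISAD", ("ISAD", "Load"): "z"}
assert apply_entry(TABLE, "I", "Load") == (["c", "a", "f"], "ISAD")
```

Raising an exception on a shaded cell is the programmatic analogue of the assertions discussed in the abstract: reaching such a cell in a run means the implementation has diverged from the specification.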
2.1.5 Memory Node Specification
One of the advantages of broadcast snooping protocols is that the memory nodes can be quite simple. The
memory nodes in this system, like those in the Synapse [9], maintain some state about each block for which
this memory node is the home, in order to make decisions about when to send data to requestors. This state
includes the state of the block and the current owner of the block. Memory states are listed in Table 6, events
are in Table 7, actions are in Table 8, and transitions are in Table 9.
2.2 Specifying a Multicast Snooping Protocol
In this section, we will specify an MSI multicast snooping protocol with the same methodology used to
describe the broadcast snooping protocol. Multicast snooping requires less snoop bandwidth and provides
Table 4. Broadcast Snooping Cache Controller Actions

Action  Description
a       Allocate TBE with Address=B.
c       Set cache tag equal to tag of block B.
d       Deallocate TBE.
f       Issue GETS: insert message in outgoing Address queue with Type=GETS, Address=B, Sender=N.
g       Issue GETX: insert message in outgoing Address queue with Type=GETX, Address=B, Sender=N.
h       Service LD/ST (a cache hit) from the cache and (if a LD) enqueue the data on the LD/ST data queue.
i       Pop incoming address queue.
j       Pop incoming data queue.
k       Pop mandatory queue.
l       Pop optional queue.
m       Send data from TBE to memory.
n       Send data from cache to memory.
p       Issue PUTX: insert message in outgoing Address queue with Type=PUTX, Address=B, Sender=N.
q       Copy data from cache to TBE.
r       Send data from the cache to the requestor.
s       Save data in data field of TBE.
u       Service LD from TBE, pop mandatory queue, and enqueue the data on the LD/ST data queue, if the LD at the head of the Mandatory queue is for this block.
v       Service LD/ST from TBE, pop mandatory queue, and (if a LD) enqueue the data on the LD/ST data queue, if the LD/ST at the head of the Mandatory queue is for this block.
w       Write data from data field of TBE into cache.
y       Send data from the TBE to the requestor.
z       Cannot be handled right now.
higher throughput of address transactions, thus enabling larger systems than are possible with broadcast
snooping.
2.2.1 System Model and Assumptions
Multicast snooping, as described by Bilir et al. [5], incorporates features of both broadcast snooping and
directory protocols. It differs from broadcast snooping in that coherence requests use a totally ordered multicast
address network instead of a broadcast network. Multicast masks are predicted by processors, and they
must always include the processor itself and the directory for this block (but not any other directories), yet
Table 5. Broadcast Snooping Cache Controller Transitions

I:    Load caf/ISAD; Read-Only Prefetch caf/ISAD; Store cag/IMAD; Read-Write Prefetch cag/IMAD
S:    Load hk; Read-Only Prefetch l; Store ag/IMAD; Read-Write Prefetch ag/IMAD; Mandatory Replacement I; Optional Replacement I
M:    Load hk; Read-Only Prefetch l; Store hk; Read-Write Prefetch l; Mandatory Replacement aqp/MIA; Optional Replacement aqp/MIA
ISAD: z on Load, Prefetch, Store, and Replacement events; Own GETS i/ISD; Data sj/ISA
IMAD: z on Load, Prefetch, Store, and Replacement events; Own GETX i/IMD; Data sj/IMA
ISA:  z on Load, Prefetch, Store, and Replacement events; Own GETS uwdi/S
IMA:  z on Load, Prefetch, Store, and Replacement events; Own GETX vwdi/M
ISD:  z on Load, Prefetch, Store, and Replacement events; Data suwdj/S
IMD:  z on Load, Prefetch, Store, and Replacement events; Data svwdj/M
MIA:  z on Load, Prefetch, Store, and Replacement events
IIA:  z on Load, Prefetch, Store, and Replacement events

Note: the cache state is changed to I only if the tag matches.
they are allowed to be incorrect. A GETS mask is incorrect if it omits the current owner, and a GETX mask
is incorrect if it omits the current owner or any of the current sharers. This scenario is resolved by a simple
directory which can detect mask mispredictions and retry these requests (with an improved mask) on behalf
of the requestors.
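The mask-correctness rule stated above can be made concrete. The sketch below is our own encoding of it: a GETS mask is incorrect if it omits the current owner, a GETX mask is incorrect if it omits the owner or any current sharer, and the directory builds an improved mask for the retry.

```python
# Sketch (our encoding) of the mask-correctness rule and the directory retry.
def mask_correct(req_type, mask, owner, sharers):
    if owner is not None and owner not in mask:
        return False                       # omitted the current owner
    if req_type == "GETX" and not sharers <= mask:
        return False                       # GETX must also cover all sharers
    return True

def directory_mask(requestor, home, owner, sharers):
    """An improved mask for a retry, issued on behalf of the original requestor."""
    mask = {requestor, home} | sharers     # mask must include requestor and home
    if owner is not None:
        mask.add(owner)
    return mask

assert not mask_correct("GETS", {1, 9}, owner=3, sharers={1, 2})
assert mask_correct("GETX", directory_mask(5, 9, 3, {1, 2}), 3, {1, 2})
```

Because the directory tracks the owner and a superset of the sharers, the mask it builds for the retry is always correct, so a request succeeds after at most one retry in this sketch.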
The multicast snooping protocol described here differs from that specified in Bilir et al. in three significant
ways. First, we specify an MSI protocol here instead of an MOSI protocol. Second, we specify the protocol
here at a lower, more detailed level. Third, the directory in this protocol can retry requests with
incorrect masks on behalf of the original requestor.
A multicast system is shown in Figure 3. The processor nodes are structured like those in the broadcast
snooping protocol. Instead of memory nodes, though, the multicast snooping protocol has directory nodes,
which are memory nodes with extra protocol logic for handling retries, and they are also shown in Figure 3.
In the next two subsections, we will specify the behaviors of processor and directory components in an MSI
multicast snooping protocol.
Table 6. Broadcast Snooping Memory Controller States

State  Description
I/S    Shared or Invalid
M      Modified
MSA    Modified, have not seen GETS/PUTX, have seen data
MSD    Modified, have seen GETS or PUTX, have not seen data
Table 7. Broadcast Snooping Memory Controller Events

Event | Description | Block B
Other Home | A request arrives for a block whose home is not at this memory | address of transaction at head of incoming address queue
GETS | A GETS at head of incoming address queue | same as above
GETX | A GETX at head of incoming address queue | same as above
PUTX (requestor is owner) | A PUTX from owner at head of incoming address queue | same as above
PUTX (requestor is not owner) | A PUTX from non-owner at head of incoming address queue | same as above
Data | Data at head of incoming data queue | address of message at head of incoming data queue
Table 8. Broadcast Snooping Memory Controller Actions

Action  Description
c       Set owner equal to directory.
d       Send data message to requestor.
        Pop incoming address queue.
        Pop incoming data queue.
        Set owner equal to requestor.
w       Write data to memory.
z       Delay transactions to this block.
Table 9. Broadcast Snooping Memory Controller Transitions

State | Other Home | GETS | GETX | PUTX (requestor is owner) | PUTX (requestor is not owner) | Data
M     |            |      |      |                           |                               | wk/MSA
[Figure 3. Multicast Snooping System: processor nodes and directory nodes (per-block directory state plus memory) connected by a multicast address network and a point-to-point data network.]
2.2.2 Network Specification
The data network behaves identically to that of the broadcast snooping protocol, but the address network
behaves slightly differently. As the name implies, the address network uses multicasting instead of broadcasting
and, thus, a transition of the address network consists of taking a message from the outgoing address
queue of a node and placing it in the incoming address queues of the nodes specified in the multicast mask,
as well as the requesting node and the memory node that is the home of the block being requested (if these
nodes are not already part of the mask).
Address messages contain the coherence request type (GETS, GETX, or PUTX), requesting node ID, multicast
mask, block address, and a retry count. Data messages contain the block address, sending node ID, destination
node ID, data message type (DATA or NACK), data block, and the retry count of the request that
triggered this data message.
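A single multicast delivery step can be sketched with these message fields. The fragment below is our own model, not the paper's implementation: the message is delivered to the nodes in the predicted mask plus the requesting node and the home node, if they were left out of the mask.

```python
from collections import deque

# Sketch (our names) of one multicast address-network transition.
def deliver(in_queues, msg, home_of):
    req_type, requestor, mask, addr, retry_count = msg
    # Augment the predicted mask with the requestor and the home node.
    targets = set(mask) | {requestor, home_of(addr)}
    for node in sorted(targets):
        in_queues[node].append(msg)
    return targets

in_queues = {n: deque() for n in range(4)}
msg = ("GETX", 0, frozenset({2}), 0x40, 0)   # requestor 0 predicted only {2}
delivered = deliver(in_queues, msg, home_of=lambda a: 3)
# delivered == {0, 2, 3}: the mask plus requestor 0 and home node 3
```

Note that total ordering of address messages is still required; this sketch only shows which queues one message reaches, not how the global order is arbitrated.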
2.2.3 CPU Specification
The CPU behaves identically to the CPU in the broadcast snooping protocol.
2.2.4 Cache Controller Specification
Cache controllers behave much like they did in the broadcast snooping protocol, except that they must deal
with retried and nacked requests and they are more aggressive in processing incoming requests. This added
complexity leads to additional states, TBE fields, protocol actions, and protocol transitions.
There are additional states in the multicast protocol specified here due to the more aggressive processing of
incoming requests. Instead of buffering incoming requests (with the 'z' action) while in transient states, a
cache controller in this protocol ingests some of these requests, thereby moving into new transient states. An
example is the state IMDI, which occurs when a processor in state IMD ingests an incoming GETX request
from another processor instead of buffering it. The notation signifies that a processor started in I, is waiting
for data to go to M, and will then go to I immediately (except for in cases in which forward progress issues
require the processor to perform a LD or ST before relinquishing the data, as will be discussed below). There
are also three additional states that are necessary to describe situations where a processor sees a nack to a
request that it has not yet seen.
There are four additional fields in the TBE: ForwardProgress, ForwardID, RetryCount, and ForwardIDRetryCount.
The ForwardProgress bit is set when a processor sees its own request that satisfies the head of the
Mandatory queue. This flag is used to determine when a processor must perform a single load or store on the
cache line before relinquishing the block.2 For example, when data arrives in state IMDI, a processor can service
a LD or ST to this block before forwarding the block if and only if ForwardProgress is set. The ForwardID
field records the node to which a processor must send the block in cases such as this. In this example,
ForwardID equals the ID of the node whose GETX caused the processor to go from IMD to IMDI. RetryCount
records the retry number of the most recent message, and ForwardIDRetryCount records the retry
count associated with the block that will be forwarded to the node specified by ForwardID.
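The four fields can be written down as a small record. The field names follow the text; the types, defaults, and the helper function are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TBE:
    """Transaction Buffer Entry, showing only the four fields that the
    multicast protocol adds to the broadcast protocol's TBE."""
    address: int
    forward_progress: bool = False    # ForwardProgress: set when own request
                                      # satisfies the head of the Mandatory queue
    forward_id: Optional[int] = None  # ForwardID: node to forward the block to
    retry_count: int = 0              # RetryCount of the most recent message
    forward_id_retry_count: int = 0   # ForwardIDRetryCount for the forwarded block

def ingest_other_getx(tbe: TBE, requestor: int, retry: int) -> None:
    """Sketch of action 'e' in Table 12: record the requestor and its retry
    number so the block can be forwarded once data arrives (e.g., on the
    IMD -> IMDI move)."""
    tbe.forward_id = requestor
    tbe.forward_id_retry_count = retry
```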
We use the same table-driven methodology as was used to describe the broadcast snooping protocol. Tables
10, 11, 12, and 13 specify the states, events, actions, and transitions, respectively, for processor nodes.
2.2.5 Directory Node Specification
Unlike broadcast snooping, the multicast snooping protocol requires a simplified directory to handle incorrect
masks. A directory node, in addition to its incoming and outgoing queues, maintains state information
for each block of memory that it controls. The state information includes the block state, the ID of the current
owner (if the state is M), and a bit vector that encodes a superset of the sharers (if the state is S). The
possible block states for a directory are listed in Table 14. As before, we refer to M, S and I as stable states
and others as transient states. Initially, for all blocks, the state is set to I, the owner is set to memory and the
bit-vector is set to encode an empty set of sharers. The state notation is the same as for processor nodes,
although the state MXA refers to the situation in which a directory is in M and receives data, but has not seen
the corresponding coherence request yet and therefore does not know (or care) whether it is PUTX data or
data from a processor that is downgrading from M to S in response to another processor's GETS.
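The per-block directory state described above can be sketched as follows. The names are illustrative; the `sharers` bit vector deliberately encodes only a superset of the true sharers:

```python
from dataclasses import dataclass

@dataclass
class DirEntry:
    """Per-block directory state: block state, current owner (meaningful when
    the state is M), and a bit vector encoding a superset of the sharers."""
    state: str = 'I'       # 'I', 'S', 'M', or a transient state such as 'MSD'
    owner: str = 'memory'  # initially memory owns every block
    sharers: int = 0       # bit i set => processor i may be a sharer

    def add_sharer(self, proc: int) -> None:
        """Counterpart of action 's' in Table 16."""
        self.sharers |= 1 << proc

    def clear_sharers(self) -> None:
        """Counterpart of action 'c' in Table 16."""
        self.sharers = 0
```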
A directory node inspects its incoming queues for the address and data networks and removes the message at
the head of a queue (if any). Depending on the incoming message and the current block state, a directory
may inject a new message into an outgoing queue and may change the state of the block. For simplicity, a
directory currently delays all requests for a block for which a PUTX or downgrade is outstanding.3
2. Another viable scheme would be to set this bit when a processor observes its own address request and this request
corresponds to the address of the head of the Mandatory queue. It is also legal to set ForwardProgress when a LD/ST
gets to the head of the Mandatory queue while there is an outstanding transaction for which we have not yet seen the
address request. However, sequential consistency is not preserved by a scheme where ForwardProgress is set when data
returns for a request and the address of the request matches the address at the head of the Mandatory queue.
10. Multicast Snooping Cache Controller States
State | Cache State | Description
I | I | Invalid
S | S | Shared
M | M | Modified
ISAD | busy | invalid, issued GETS, have not seen GETS or data yet
IMAD | busy | invalid, issued GETX, have not seen GETX or data yet
SMAD | busy | shared, issued GETX, have not seen GETX or data yet
ISA | busy | invalid, issued GETS, have not seen GETS, have seen data
IMA | busy | invalid, issued GETX, have not seen GETX, have seen data
SMA | busy | shared, issued GETX, have not seen GETX, have seen data
ISA* | busy | invalid, issued GETS, have not seen GETS, have seen nack
IMA* | busy | invalid, issued GETX, have not seen GETX, have seen nack
SMA* | busy | shared, issued GETX, have not seen GETX, have seen nack
MIA | I | modified, issued PUTX, have not seen PUTX yet
IIA | I | modified, issued PUTX, have not seen PUTX, then saw other GETS or GETX
ISD | busy | invalid, issued GETS, have seen GETS, have not seen data yet
ISDI | busy | invalid, issued GETS, have seen GETS, have not seen data, then saw other GETX
IMD | busy | invalid, issued GETX, have seen GETX, have not seen data yet
IMDS | busy | invalid, issued GETX, have seen GETX, have not seen data yet, then saw other GETS
IMDI | busy | invalid, issued GETX, have seen GETX, have not seen data yet, then saw other GETX
IMDSI | busy | invalid, issued GETX, have seen GETX, have not seen data yet, then saw other GETS, then saw other GETX
SMD | busy | shared, issued GETX, have seen GETX, have not seen data yet
SMDS | busy | shared, issued GETX, have seen GETX, have not seen data yet, then saw other GETS
The directory events, actions, and transitions are listed in Tables 15, 16, and 17, respectively. The
action 'z' (delay transactions to this block) relies on the fact that a directory can delay address messages
for a given block arbitrarily while waiting for a data message. Conceptually, we have one directory per
block. Since there is more than one block per directory, an implementation would have to be able to delay
only those transactions which are for the specific block. Note that consecutive GETS transactions for the
same block could be coalesced.
3. This restriction maintains the invariant that there is at most one data message per block that the directory can receive,
thus eliminating the need for buffers and preserving the sanity of the protocol developers.
11. Multicast Snooping Cache Controller Events
Event | Description | Block B
Load | LD at head of Mandatory queue | address of LD at head of Mandatory queue
Read-Only Prefetch | Read-Only Prefetch at head of Optional queue | address of Read-Only Prefetch at head of Optional queue
Store | ST at head of Mandatory queue | address of ST at head of Mandatory queue
Read-Write Prefetch | Read-Write Prefetch at head of Optional queue | address of Read-Write Prefetch at head of Optional queue
Mandatory Replacement | LD/ST at head of Mandatory queue for which no cache block is available | address of victim block for LD/ST at head of Mandatory queue
Optional Replacement | Read-Write Prefetch at head of Optional queue for which no cache block is available | address of victim block for Prefetch at head of Optional queue
Own GETS | Occurs when we observe our own GETS request in the global order | address of transaction at head of incoming address queue
Own GETX | Occurs when we observe our own GETX request in the global order | same as above
Own GETS (mismatch) | Occurs when we observe our own GETS request in the global order, but the RetryCount of the GETS does not match the RetryCount of the TBE | same as above
Own GETX (mismatch) | Occurs when we observe our own GETX request in the global order, but the RetryCount of the GETX does not match the RetryCount of the TBE | same as above
Own PUTX | Occurs when we observe our own PUTX request in the global order | same as above
Other GETS | Occurs when we observe a GETS request from another processor | same as above
Other GETX | Occurs when we observe a GETX request from another processor | same as above
Other PUTX | Occurs when we observe a PUTX request from another processor | same as above
Data | Data for this block arrives | address of message at head of incoming data queue
Data (mismatch) | Data for this block arrives, but the RetryCount of the data message does not match the RetryCount of the TBE | address of message at head of incoming data queue
3 Verification of Snooping Protocols
In this section, we present a methodology for proving that a specification is sequentially consistent, and we
show how this methodology can be used to prove that our multicast protocol satisfies SC. Our method uses
an extension of Lamport's logical clocks [16] to timestamp the load and store operations performed by the
protocol. Timestamps determine how operations should be reordered to witness SC, as intended by the
12. Multicast Snooping Cache Controller Actions
Action | Description
a | Allocate TBE with Address=B, ForwardID=null, RetryCount=zero, ForwardIDRetryCount=zero, ForwardProgress bit=unset.
b | Set ForwardProgress bit if request at head of address queue satisfies request at head of Mandatory queue.
c | Set cache tag equal to tag of block B.
d | Deallocate TBE.
e | Record ID of requestor in ForwardID and record retry number of transaction in ForwardIDRetryCount.
f | Issue GETS: insert message in outgoing Address queue with Type=GETS, Address=B, Sender=N, RetryCount=zero.
g | Issue GETX: insert message in outgoing Address queue with Type=GETX, Address=B, Sender=N, RetryCount=zero.
h | Service load/store (a cache hit) from the cache and (if a LD) enqueue the data on the LD/ST data queue.
i | Pop incoming address queue.
j | Pop incoming data queue.
l | Pop optional queue.
m | Send data from TBE to memory.
n | Send data from cache to memory.
o | Send data and ForwardIDRetryCount from the TBE to the processor indicated by ForwardID.
p | Issue PUTX: insert message in outgoing Address queue with Type=PUTX, Address=B, Sender=N.
q | Copy data from cache to TBE.
r | Send data from the cache to the requestor.
s | Save data in data field of TBE.
t | Copy retry field from message at head of incoming Data queue to Retry field in TBE, set ForwardID=null, and set ForwardIDRetryCount=zero.
u | Service LD from TBE, pop mandatory queue, and enqueue the data on the LD/ST data queue if the LD at the head of the mandatory queue is for this block.
v | Treat as either h or z (optional cache hit). If it is a cache hit, then pop the mandatory queue.
w | Write data from the TBE into the cache.
x | If (and only if) ForwardProgress bit is set, service LD from TBE, pop mandatory queue, and enqueue the data on the LD/ST data queue.
y | Send data from the TBE to the requestor.
z | Cannot be handled right now. Either wait or discard request (can discard if this request is in the Optional queue).
a | Copy retry field from message at head of incoming address queue to Retry field in TBE, set ForwardID=null, and set ForwardIDRetryCount=zero.
 | Service LD/ST from TBE, pop mandatory queue, and (if a LD) enqueue the data on the LD/ST data queue if the LD/ST at the head of the mandatory queue is for this block. (If ST, store data to TBE.)
l | Optionally service LD/ST from TBE.
d | If (and only if) ForwardProgress bit is set, service LD/ST from TBE, pop mandatory queue, and (if a LD) enqueue the data on the LD/ST data queue.
13. Multicast Snooping Cache Controller Transitions
I
caf/ caf/ cag/ cag/
ISAD ISAD IMAD IMAD
hk l ag/ ag/ I I
hk l hk l aqp/ aqp/
MIA MIA
ISAD
IMAD
z l z z z z
z l z l z z
l z l z z
bi/
ISD
IMAD
sj/ stj/ tj/ tj/
ISA ISA ISA* ISA*
sj/ stj/ tj/ tj/
IMA IMA IMA* IMA*
sj/ stj/ tj/ tj/
SMA SMA SMA* SMA*
ISA*
IMA*
z l z z z z
z l z l z z
l z l z z
ISA
IMA
z l z z z z
z l z l z z
l z l z z
uwdi/
MIA
IIA
l z l z z z
z z z z z z
A
ISD
ISDI
IMD
IMDS
IMDI
IMDSI
z l z z z z
z z z z z z
z l z l z z
z l z z z z
z z z z z z
z z z z z z
z l z l z z
z l z z z z
IMDS IMDI
SI
SMDS IMDI
IMDSI
suwdj/ stj/ISA dj/I tj
sxdj/I stj/ISA dj/I tj
sgwdj/ stj/ dj/I tj
sdom- stj/ dj/I tj
wdj/S IMA
sdodj/ stj/ dj/I tj
I IMA
sdomd stj/ dj/I tj
j/I IMA
sgwdj/ stj/ dj/S tj
sgom- stj/ dj/S tj
wdj/S SMA
State
Load
read-wonrliyteprreefefetctch
Store
Mandatory Replacement
Optional Replacement
Own GETS
Own GETX
Own GETS (mismatch)
Own GETX (mismatch)
Own PUTX
Other GPUETXS
Other GETX
Data
nack
bi/
IMD
bi/
di/I
di/S
gwdi/
gwdi/
mdi/I
di/I
ai/ISD
ai/IMD
ai/IMD
ai/IMD
ai/SMD
Only change the state to I if the tag matches.
14. Multicast Snooping Memory Controller States
State | Description
I | Invalid - all processors are Invalid
S | Shared - at least one processor is Shared
M | Modified - one processor is Modified and the rest are Invalid
MXA | Modified, have not seen GETS/PUTX, have seen data
MSD | Modified, have seen GETS, have not seen data
MID | Modified, have seen PUTX, have not seen data
15. Multicast Snooping Memory Controller Events
Event | Description | Block B
GETS | GETS with successful mask at head of incoming address queue | address of transaction at head of incoming address queue
GETX | GETX with successful mask at head of incoming address queue | same as above
GETS-RETRY | GETS with unsuccessful mask at head of incoming queue; room in outgoing address queue for a retry | same as above
GETS-NACK | GETS with unsuccessful mask at head of incoming queue; no room in outgoing address queue for a retry | same as above
GETX-RETRY | GETX with unsuccessful mask at head of incoming queue; room in outgoing address queue for a retry | same as above
GETX-NACK | GETX with unsuccessful mask at head of incoming queue; no room in outgoing address queue for a retry | same as above
PUTX (requestor is owner) | PUTX from owner at head of incoming address queue | same as above
PUTX (requestor is not owner) | PUTX from non-owner at head of incoming address queue | same as above
Data | Data message at head of incoming data queue | address of message at head of incoming data queue
designer of the protocol. Logical clocks and the associated timestamping actions are a conceptual augmentation
of the protocol and are specified using the same table-based format as the protocol itself. We
note that the set of all possible operation traces of the protocol equals that of the augmented protocol.
The process of developing a timestamping scheme is a valuable debugging tool in its own right. For example,
an early implementation of the multicast protocol did not include a ForwardProgress bit in the TBE,
and, upon receiving the data for a GETX request when in state IMDI, always satisfied an OP at the head of
the mandatory queue before forwarding the data. Attempts to timestamp OP revealed the need for a forward
16. Multicast Snooping Memory Controller Actions
Action | Description
c | Clear set of sharers.
d | Send data message to requestor with RetryCount equal to RetryCount of request.
j | Pop incoming address queue.
k | Pop incoming data queue.
m | Set owner equal to requestor.
n | Send nack to requestor with RetryCount equal to RetryCount of request.
q | Add owner to set of sharers.
r | Retry by re-issuing the request. Before re-issuing, the directory improves the multicast mask and increments the retry field. If the transaction has reached the maximum number of retries, the multicast mask is set to the broadcast mask.
s | Add requestor to set of sharers.
w | Write data to memory.
x | Set owner equal to directory.
z | Delay transactions to this block.
17. Multicast Snooping Memory Controller Transitions
(Transition table: rows are the directory states of Table 14 and columns are the events of Table 15; each entry lists the actions of Table 16 to perform, followed by a slash and the next state when the state changes. For example, a GETS in state I is 'dsj/S' and a GETX in state I is 'dmj/M'; states MSD and MID delay ('z') incoming requests and, when data arrives, write it to memory and move to S or I ('wk/S' and 'wk/I', respectively). The flattened entries of the extracted table are not reproduced here.)
progress bit, roughly to ensure that OP can indeed be timestamped so that it appears to occur just after the
time of the GETX, and that this OP's logical timestamp also respects program order.
In brief, our methodology for proving sequential consistency consists of the following steps.
Augment the system with logical clocks and with associated actions that assign timestamps to LD and
ST operations. The logical clocks are purely conceptual devices introduced for verification purposes
and are never implemented in hardware.
Associate a global history with any execution of the augmented protocol. Roughly, the history includes
the configuration at each node of the system (states, TBEs, cache contents, logical clocks, and queues),
the totally ordered sequence of transactions delivered by the network, and the memory operations serviced
so far, in program order, along with their logical timestamps.
Using invariants, define the notion of a legal global history. The invariants are quite intuitive when
expressed using logical timestamps. It follows immediately from the definition of a legal global history
that the corresponding execution is sequentially consistent.
Finally, prove that the initial history of the system is legal, that each transition of the protocol maps legal
global histories to legal global histories, and that the entries labelled impossible in the protocol specification
tables are indeed impossible. It then follows by induction that the protocol is sequentially consistent.
The first step above, that of augmenting the system with logical clocks, can be done hand in hand with development
of the protocol. Thus, it is, on its own, a valuable debugging tool. The second step is straightforward.
It is also straightforward to select a core set of invariants in the third step that are strong enough to guarantee
that the execution corresponding to any legal global history is sequentially consistent. The final step of the
proof methodology above requires a proof for every transition of the protocol and every invariant, and may
necessitate the addition of further invariants to the definition of legal. This step of the proof, while not
difficult, is certainly tedious.
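The final step can be pictured as a mechanical check over the transition relation. The sketch below is ours, not the paper's proof: the transition and invariant functions stand in for the per-entry hand proofs, and exhaustive exploration only terminates for a finite abstraction of the protocol:

```python
def inductive_legality_check(initial_history, transitions, invariants):
    """Sketch of the proof obligation: the initial history is legal, and every
    enabled transition maps a legal history to a legal history. `transitions`
    is a list of functions returning a successor history (or None when the
    corresponding table entry is impossible); `invariants` define legality."""
    def legal(h):
        return all(inv(h) for inv in invariants)

    assert legal(initial_history), "base case fails"
    frontier = [initial_history]
    seen = {initial_history}
    while frontier:
        h = frontier.pop()
        for step in transitions:
            h2 = step(h)
            if h2 is None:
                continue  # impossible entry: nothing to check
            assert legal(h2), f"inductive step fails at {h2!r}"
            if h2 not in seen:
                seen.add(h2)
                frontier.append(h2)
    return len(seen)  # number of reachable (legal) histories explored
```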
In the rest of this section, we describe the first three steps of this process in more detail, namely how the multicast
protocol is augmented with logical clocks and what global histories and legal global histories are. We
include examples of the cases covered in the final proof step in Appendix A.
3.1 Augmenting the System with Logical Clocks
In this section, we shall describe how we augment the system specified earlier with logical clocks and with
actions that increment clocks and timestamp operations and data. These timestamps will make future definitions
(of global states and legal global states) simpler and more intuitive. These augmentations do not
change the behavior of the system as originally specified.
3.1.1 The Augmented System
The system is augmented with the following counters, all of which are initialized to zero:
One counter (global pulse number) associated with the multicast address network.
Two counters (global and local clocks) associated with each processor node of the system.
One counter (pulse number) added to each data field and to each ForwardID field of each TBE.
One counter (pulse number) field added to each data message.
One counter (global clock) associated with each directory node of the system.
18. Processor clock actions
Action | Description
 | Set global clock equal to pulse of transaction being handled, and set local clock to zero.
 | Increment local clock. The timestamp of the LD/ST is set equal to the associated global and local clock values.
 | pulse equal to transaction pulse.
 | Optionally treat as h.
 | data message pulse equal to TBE ForwardID pulse.
 | data pulse equal to pulse of incoming data message.
 | If first Op in Mandatory queue is a LD for this block, then increment local clock. The timestamp of the LD/ST is set equal to the associated global and local clock values.
 | If first Op in Mandatory queue is a LD/ST for this block, then increment local clock. The timestamp of the LD/ST is set equal to the associated global and local clock values.
x | If ForwardProgress bit is set (i.e., head of Mandatory Queue is a LD for this block), then no clock update; set global timestamp of LD equal to pulse of incoming data message, and set local clock value equal to 1.
y | Set data message pulse equal to transaction pulse.
z | Same as x, but allow a LD or ST for this block.
3.1.2 Behavior of the Augmented System
In the augmented system, the clocks get updated and timestamps (or pulses) are assigned to operations and
data upon transitions of the protocol according to the following rules.
Network: Each new address transaction that is appended to the total order of address transactions by the network
causes the global pulse number to increment by 1. The new value of the pulse number is associated
with the new transaction.
Processor: Tables 18 and 19 describe how the global and local clocks are updated. The TBE counter is used
to record the timestamp of a request that cannot be satisfied until the data arrives. When the data arrives, the
owner sends the data with the timestamp that was saved in the TBE.
Directory: Briefly, upon handling any transaction, the directory updates its clock to equal the global pulse of
that transaction. The pulse attached to any data message is set to be the value of the directory's clock.
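These rules amount to a small clock discipline, sketched below. Timestamps are (pulse, local) pairs compared lexicographically; the class and method names are ours:

```python
class Clocks:
    """Conceptual Lamport-style clocks for the augmented system: one global
    pulse on the address network, plus per-processor global and local clocks.
    None of this state would exist in hardware."""
    def __init__(self, num_procs):
        self.pulse = 0               # global pulse number (address network)
        self.glob = [0] * num_procs  # per-processor global clock
        self.local = [0] * num_procs # per-processor local clock

    def order_transaction(self):
        """The network appends a transaction to the total order: the pulse
        increments and the new value is associated with the transaction."""
        self.pulse += 1
        return self.pulse

    def handle_transaction(self, proc, pulse):
        """A processor handles a transaction: its global clock becomes the
        transaction's pulse and its local clock resets to zero."""
        self.glob[proc] = pulse
        self.local[proc] = 0

    def timestamp_op(self, proc):
        """Each LD/ST increments the local clock; the operation's timestamp
        is the (global, local) pair, ordered lexicographically."""
        self.local[proc] += 1
        return (self.glob[proc], self.local[proc])
```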
3.2 Global Histories
The global history associated with an execution of the protocol is a 3-tuple <TransSeq,Config,Ops>. TransSeq
records information on the sequence of transactions requested to date: the type of transaction, requestor,
address, mask, retry-number, pulse (possibly undefined), and status (successful, unsuccessful, nack, or
undetermined). Config records the configuration of the nodes: state per block, cache contents, queue contents,
TBEs, and logical clock values. Ops records properties of all operations generated by the CPUs to date:
operations along with address, timestamp (possibly undefined), value, and rank in program order.
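The 3-tuple and its components can be written down directly. The types below are illustrative (`Optional` marks the fields the text calls "possibly undefined"):

```python
from typing import NamedTuple, Optional, List, Tuple, FrozenSet, Dict

class Transaction(NamedTuple):
    type: str                     # 'GETS', 'GETX', or 'PUTX'
    requestor: int
    address: int
    mask: FrozenSet[int]
    retry: int
    pulse: Optional[int]          # possibly undefined
    status: str                   # successful/unsuccessful/nack/undetermined

class Op(NamedTuple):
    kind: str                     # 'LD' or 'ST'
    proc: int
    address: int
    timestamp: Optional[Tuple[int, int]]  # (pulse, local), possibly undefined
    value: int
    rank: int                     # rank in program order

class GlobalHistory(NamedTuple):
    trans_seq: List[Transaction]
    config: Dict                  # per-node states, caches, queues, TBEs, clocks
    ops: List[Op]
```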
19. Processor clock updates
(Table: rows are the processor states of Table 10; columns distinguish processor/cache requests (LD, ST, prefetches, replacements), seeing one's own GETS/GETX (with retry match or mismatch), seeing another processor's transaction, and incoming data or nacks; each entry gives the clock actions of Table 18 performed on that transition. The flattened entries of the extracted table are not reproduced here.)
The global history is defined inductively on the sequence of transitions in the execution. In the initial global
history, TransSeq and Ops are empty. In Config, all processors are in state I for all blocks, have empty
queues, no TBEs, and clocks initialized to zero. For all blocks, the directory is in state I, the owner is set to
the directory, and the list of sharers is empty. All incoming queues are empty. Upon each transition, TransSeq,
Ops, and Config are updated according to the actions of that transition.
3.3 Legal Global Configurations and Legal Global Histories
There are several requirements for a global history <TransSeq,Config,Ops> to be legal. Briefly, these are as
follows. The first requirement is sufficient to imply sequential consistency. The remaining four requirements
supply additional invariants that are useful in building up to the proof that the first requirement holds.
Ops is legal with respect to program order. That is, the following should hold:
3.3.1 Ops respects program order. That is, for any two operations O1 and O2 of the same processor, if
O1 has a smaller timestamp than O2 in Ops, then O1 must also appear before O2 in program order.
3.3.2 Every LD returns the value of the most recent ST to the same address in timestamp order.
TransSeq is legal. To describe the type of constraints that TransSeq must satisfy, we introduce the notion
of A-state vectors. The A-state vector corresponding to TransSeq for a given block B records, for each
processor N, whether TransSeq confers Shared (S), Modified (M), or no (I) access to block B to processor
N. For example, in a system with three processors, if TransSeq consists of a successful GETS to
block B by processor 1, followed by an unsuccessful GETX to block B by processor 2, followed by a
successful GETS to block B by processor 3, then the corresponding A-state for block B is (S,I,S). The
constraints on TransSeq require, for example, that a GETX on block B should not be successful if its
mask does not include all processors that, upon completion of the transaction just prior to the GETX,
may have Shared or Modified access to B. That is, if TransSeq consists of TransSeq' followed by a GETX
on block B and A is the A-state for block B corresponding to TransSeq', then the mask of the GETX
should contain all processors whose entries in A are not equal to I. The precise definition of a legal
transaction sequence is included in Appendix A.
Ops is legal with respect to TransSeq. Intuitively, for all operations op in Ops, if op is performed by
processor N at global timestamp t, then the A-state for processor N at logical time t should be either S or
M and should be M if op is a ST.
Config is legal with respect to TransSeq. This involves several constraints, since there are many components
to Config. For example, if processor N is in state ISAD for block B, then a GETS for block B,
requested by N, with timestamp greater than that of N (or undefined) should be in TransSeq.
Config is legal with respect to Ops. That is, for all blocks B and nodes N, the following should hold:
3.3.3 If N is a processor and its state for block B is one of S, M, MIA, SMAD, or SMA, then the
value of block B in N's cache equals that of the most recent ST in Ops, relative to N's clock. By
most recent ST relative to N's clock we mean a ST whose timestamp is less than or equal to
N's clock.
3.3.4 If N is a processor and block B is in one of N's TBEs, then its value equals that of the most
recent ST in Ops, relative to p.0.0, where p is the pulse in the data field of the TBE.
3.3.5 If data for block B is in N's incoming data queue, its value equals the most recent ST in Ops
(relative to the data's timestamp, not N's current time).
3.3.6 If N is the directory node, then for each block B for which N is the owner, the value of B in
memory equals that of the most recent ST in Ops (relative to N's clock).
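Two of these requirements are easy to state operationally. The sketch below computes A-state vectors from a transaction sequence (covering only successful GETS, GETX, and PUTX, with 0-indexed processors) and checks requirement 3.3.2 under the assumption that memory is initially zero; the `Transaction` and `Op` records are illustrative:

```python
from typing import NamedTuple, Tuple

class Transaction(NamedTuple):
    type: str       # 'GETS', 'GETX', or 'PUTX'
    requestor: int
    address: int
    status: str     # 'successful', 'unsuccessful', ...

class Op(NamedTuple):
    kind: str       # 'LD' or 'ST'
    proc: int
    address: int
    timestamp: Tuple[int, int]
    value: int

def a_state(trans_seq, block, num_procs):
    """A-state vector for `block`: the access (I, S, or M) that the successful
    transactions in trans_seq confer on each processor."""
    a = ['I'] * num_procs
    for t in trans_seq:
        if t.address != block or t.status != 'successful':
            continue
        if t.type == 'GETS':
            a = ['S' if s == 'M' else s for s in a]  # owner downgrades to S
            a[t.requestor] = 'S'
        elif t.type == 'GETX':
            a = ['I'] * num_procs                    # everyone else invalidated
            a[t.requestor] = 'M'
        elif t.type == 'PUTX':
            a[t.requestor] = 'I'
    return a

def loads_return_latest_store(ops):
    """Requirement 3.3.2: each LD returns the value of the most recent ST to
    the same address in timestamp order (memory assumed initially zero)."""
    for ld in (o for o in ops if o.kind == 'LD'):
        earlier = [o for o in ops if o.kind == 'ST'
                   and o.address == ld.address and o.timestamp < ld.timestamp]
        latest = max(earlier, key=lambda o: o.timestamp, default=None)
        if ld.value != (latest.value if latest else 0):
            return False
    return True
```

With 0-indexed processors, `a_state` reproduces the text's example: a successful GETS by processor 0, an unsuccessful GETX by processor 1, and a successful GETS by processor 2 yield the vector (S, I, S).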
3.4 Properties of Legal Global Histories
It is not hard to show that the global history of the system is initially legal. The main task of the proof is to
show the following:
Theorem 1: Each protocol transition takes the system from a legal global history to a legal global history.
To illustrate how Theorem 1 is proved, we include in Appendix A the proof of why the transition at each
entry of Table 13 (cache controller transitions) maps a legal global history, <TransSeq,Config,Ops>, to a
new global history, <TransSeq',Config',Ops'>, in which TransSeq' is legal.
20. Classification of Related Work
(Rows distinguish complete methods, which prove that a system implements a particular consistency model, from incomplete methods; columns distinguish manual, semi-automated, and automated techniques.)
Complete methods: lazy caching [2], DASH memory model [12], Lamport Clocks [25, 21, 6, 15], Lamport Clocks (this paper), term rewriting [24]
Incomplete methods: RMO testing [19], Origin2000 coherence [8], FLASH coherence [20], Alpha 21264/21364 [3], HP Runway testing [11, 18]
4 Related Work
We focus on papers that specify and prove a complete protocol correct, rather than on efforts that focus on
describing many alternative protocols and consistency models, such as [1, 10]. There is a large body of literature
on the subject of formal protocol verification4 which we have classified into a taxonomy along two
independent axes: automation and completeness [23]. We distinguish verification methods based on the level
of automation they support: manual, semi-automated or automated. Manual methods involve humans who
read the specification and construct the proofs. Semi-automated methods involve some computer programs
(a model checker or theorem prover) which are guided by humans who understand the specification and provide
the programs with the invariants or lemmas to prove. Automated methods take the human out of the
loop and involve a computer program that reads a specification and produces a correctness proof completely
automatically. We also distinguish techniques that are complete (proof that a system implements a particular
consistency model) from those that are incomplete (proof of coherence or selected invariants). Table 20 provides
a summary of our taxonomy. We discuss each column of the table separately below.
4. Formal methods involve construction of rigorous mathematical proofs of correctness while informal methods include
such techniques as simulation and random testing which do not guarantee correctness. We only consider formal methods
in this review.
Manual techniques: Lazy caching [2] was one of the earliest examples of a formal specification and verification
of a protocol implementing sequential consistency. The authors use I/O automata as
their formal system models and provide a manual proof that a lazy caching system implements SC. Their use
of history variables in the proof is similar to the manner in which we use Lamport Clock timestamps in our
proofs. Gibbons et al. [12] provide a framework for verifying that shared memory systems implement
relaxed memory models. The method involves specifying both the system to be verified as well as an operational
definition of a memory model as I/O automata and then proving that the system automaton implements
the model automaton. As an example, they provide a specification of the Stanford DASH memory
system and manually prove that it implements the Release Consistency memory model. Our table-based
specification methodology is complementary in that it could also be used to describe I/O automata.
Our previous papers [25, 21, 6, 15] specified various shared memory systems (directory and bus protocols)
at a high level, and employed manual proofs using our Lamport Clocks technique to show that these systems
implemented various memory models (SC, TSO, Alpha). This paper is our latest effort which demonstrates
our technique applied to more detailed table-based specifications of snooping protocols. Shen and Arvind
[24] propose using term rewriting systems (TRSs) to both specify and verify memory system protocols.
Their verification technique involves showing that the system under consideration and the operational definition
of a memory model, when expressed as TRSs, can simulate each other. This proof technique is similar
to the I/O automata approach used by Gibbons et al. [12]. Both TRSs and our table-based specification
method can be used in a modular and flexible fashion. A drawback of TRSs is that they lack the visual clarity
of our table-based specification. Although their current proofs are manual, they mention the possibility of
using a model checker to automate tedious parts of the proof.
Semi-automated techniques: Park and Dill [19] provide an executable specification of the Sun RMO memory
model written in the language of the Murphi model checker. This language, which is similar to a typical
imperative programming language, is unambiguous but not necessarily compact. They use this specification
to check the correctness of small synchronization routines. Eiriksson and McMillan [8] describe a methodology
which integrates design and verification where common state machine tables drive a model checker and
generators of simulators and documentation. The protocol specification tables they describe were designed
to be consumed by automated generators rather than by humans, and they do not describe the format of the
text specifications generated from these tables. They use the SMV model checker (which accepts specifications
in temporal logic) to prove the coherence of the protocol used in the SGI Origin 2000. However, the
system verified had only one cache block (which is sufficient to prove coherence, but not consistency). Pong
et al. [22] verify the memory system of the Sun S3.mp multiprocessor using the Murphi and SSM (Symbolic
State Model) model checkers, but again with only one cache block, so their proof likewise cannot establish
that the system satisfies a memory model. Park and Dill [20] express both the definition of the memory
model and the system being verified in the same specification language and then use aggregation to map
the system specification to the model specification (similar to the use of TRSs by Shen and Arvind[24] and
I/O automata by Gibbons et al. [12]). As an example, they specify the Stanford FLASH protocol in the language
of the PVS theorem prover (the language is a typed higher-order logic) and use this aggregation technique
to prove that the Delayed mode of the FLASH memory system is sequentially consistent. Akhiani et
al. [3] summarize their experience with using TLA+ (a form of temporal logic) and a combination of manual
proofs and a TLA+ model checker (TLC) to specify and verify the Compaq Alpha 21264 and 21364 memory
system protocols. Although they did find a bug that would not have been caught by simulation or model
checking, their manual proofs were quite large and only a small portion could be finished even with 4 people
and 7 person-months of effort. The TLA+ specifications are complete and formal, but they are both nearly
two thousand lines long. Nalumasu et al. [11, 18] propose an extension of Collier's ArchTest suite which
provides a collection of programs that test certain properties of a memory model. Their extension creates the
effect of having infinitely long test programs (and thus checking all possible interleavings of test programs)
by abstracting the test programs into non-deterministic finite automata which drive formal specifications of
the system being verified. Both the automata and the implementations were specified in Verilog and the VIS
symbolic model checker was used to verify that various invariants are satisfied by the system when driven by
these automata. The technique is useful in practice and has been applied to commercial systems such as the
HP PA-8000 Runway bus protocol. However, it is incomplete in that the invariants being tested do not imply
SC (they are necessary, but not sufficient).
Automated techniques: Henzinger et al. [14] provide completely automated proofs of a lazy and a certain
snoopy cache coherence protocol using the MOCHA model checker. Their protocol specifications (with the
system being expressed in a language similar to a typical imperative programming language and the proof
requirements expressed in temporal logic) are augmented with a specification of a finite observer which
can reorder protocol transactions in order to produce a witness ordering which satisfies the definition of a
memory model. They provide such observers for the two protocols they specify in the paper. However, the
general problem of verifying sequential consistency is undecidable and such finite observers do not exist for
the protocols we specify in this paper or in the protocols used in modern high-performance shared-memory
multiprocessors.
To the best of our knowledge, there are no published examples of a completely automated proof of correctness
of a system specified at a low level of abstraction.
Conclusions
In this paper, we have developed a specification methodology that documents and specifies a cache coherence
protocol in eight tables: the states, events, actions, and transitions of the cache and memory controllers.
We have used this methodology to specify a detailed, low-level three-state broadcast snooping protocol with
an unordered data network and an ordered address network which allows arbitrary skew. We have also presented
a detailed, low-level specification of the Multicast Snooping protocol [5], and, in doing so, we have
shown the utility of the table-based specification methodology. Lastly, we have demonstrated a technique for
verification of the Multicast Snooping protocol, through the sketch of a manual proof that the specification
satisfies a sequentially consistent memory model.
Acknowledgments
This work is supported in part by the National Science Foundation with grants EIA-9971256, MIPS-
9625558, MIP-9225097, CCR 9257241, and CDA-9623632, a Wisconsin Romnes Fellowship, and donations
from Sun Microsystems and Intel Corporation. Members of the Wisconsin Multifacet Project contributed
significantly to improving the protocols and protocol specification model presented in this paper,
especially Anastassia Ailamaki, Ross Dickson, Charles Fischer, and Carl Mauer.
References
Designing Memory Consistency Models for Shared-Memory Multiprocessors
Parallel Computer Architecture: A Hardware/Software Approach.
Memory Consistency Models for Shared-Memory Multiprocessors
Computer Architecture: A Quantitative Approach.
Cache coherence protocols: evaluation using a multiprocessor simulation model
A class of compatible cache consistency protocols and their support by the IEEE futurebus
Proving sequential consistency of high-performance shared memories (extended abstract)
Lazy caching
Designing memory consistency models for shared-memory multiprocessors
An executable specification, analyzer and verifier for RMO (relaxed memory order)
Verification of FLASH cache coherence protocol by aggregation of distributed transactions
Memory consistency models for shared-memory multiprocessors
Verification techniques for cache coherence protocols
Lamport clocks
Using "test model-checking" to verify the Runway-PA8000 memory model
Design Verification of the S3.mp Cache-Coherent Shared-Memory System
Computer architecture (2nd ed.)
Multicast snooping
A system-level specification framework for I/O architectures
Formal Automatic Verification of Cache Coherence in Multiprocessors with Relaxed Memory Models
Time, clocks, and the ordering of events in a distributed system
Introduction To Automata Theory, Languages, And Computation
Verifying a Multiprocessor Cache Controller Using Random Test Generation
Cache Coherence Verification with TLA+
The ''Test Model-Checking'' Approach to the Verification of Formal Memory Models of Multiprocessors
Verifying Sequential Consistency on Shared-Memory Multiprocessor Systems
Using Formal Verification/Analysis Methods on the Critical Path in System Design
Origin System Design Methodology and Experience
Using Lamport Clocks to Reason About Relaxed Memory Models
570558 | Compact recognizers of episode sequences. | Given two strings X = a_1...a_n and P = b_1...b_m over an alphabet Σ, the problem of testing whether P occurs as a subsequence of X is trivially solved in linear time. It is also known that a simple O(n log |Σ|) time preprocessing of X makes it easy to decide subsequently, for any P and in at most |P| log |Σ| character comparisons, whether P is a subsequence of X. These problems become more complicated if one asks instead whether P occurs as a subsequence of some substring Y of X of bounded length. This paper presents an automaton built on the textstring X and capable of identifying all distinct minimal substrings Y of X having P as a subsequence. By a substring Y being minimal with respect to P, it is meant that P is not a subsequence of any proper substring of Y. For every minimal substring Y, the automaton recognizes the occurrence of P having the lexicographically smallest sequence of symbol positions in Y. It is not difficult to realize such an automaton in time and space O(n²) for a text of n characters. One result of this paper consists of bringing those bounds down to linear or O(n log n), respectively, depending on whether the alphabet is bounded or of arbitrary size, thereby matching the corresponding complexities of automata constructions for offline exact string searching. Having built the automaton, the search for all lexicographically earliest occurrences of P in X is carried out in time O(∑_{i=1}^{m} rocc_i · i) or O(n + ∑_{i=1}^{m} rocc_i · i · log n), depending on whether the alphabet is fixed or arbitrary, where rocc_i is the number of distinct minimal substrings of X having b_1...b_i as a subsequence (note that each such substring may occur many times in X but is counted only once in the bound). All log factors appearing in the above bounds can be further reduced to log log by resorting to known integer-handling data structures. | Introduction
We consider the problem of detecting occurrences of a pattern string as a subsequence of a substring
of bounded length of a larger text string. Variants of this problem arise in numerous applications,
ranging from information retrieval and data mining (see, e.g., [10]) to molecular sequence analysis
(see, e.g., [12]) and intrusion and misuse detection in a computer system (see, e.g., [9]).
Recall that, given a pattern P = b_1 ⋯ b_m and a text X = a_1 ⋯ a_n, we say that P occurs as a subsequence
of X iff there exist indices 1 ≤ i_1 < i_2 < ⋯ < i_m ≤ n such that a_{i_j} = b_j for j = 1, …, m. In
this case we also say that the substring Y = a_{i_1} a_{i_1+1} ⋯ a_{i_m}
of X is a realization of P beginning at position i_1 and ending at position i_m in X. We reserve the
term occurrence for the sequence i_1, …, i_m. It is trivial to compute, in time linear in |X|, whether P
occurs as a subsequence of X. Alternatively, a simple O(n|Σ|) time preprocessing of X makes
it easy to decide subsequently, for any P and in at most |P| character comparisons, whether P
is a subsequence of X. For this, all that is needed is a pointer leading, for every position of X
and every alphabet symbol, to the closest position occupied by that symbol, as exemplified in Fig.
1. Slightly more complicated arrangements, such as developed in [2], can accommodate, within
preprocessing time O(n log |Σ|) and space linear in X, also the case of an arbitrary alphabet size,
though introducing an extra log |Σ| cost factor in the search for P. We refer also to [3] for additional
discussion of subsequence searching.
Figure 1: Recognizer for the subsequences of abaababaabaababa, shown here without explicit labels on forward "skip" links
These problems become more complicated if one asks instead whether X contains a realization
Y of P of bounded length, since the earliest occurrence of P as a subsequence of X is not guaranteed
to be a solution. In this case, one would need to apply the above scheme to all suffixes of X or find
some other way to detect the minimal realizations Y of P in X, where a realization is minimal if no
substring of Y is a realization of P . Algorithms for the so-called episode matching problem, which
consists of finding the earliest occurrences of P in all minimal realizations of P in X have been
given previously in [7]. An occurrence i 1 of P in a realization Y is an earliest occurrence
if the string lexicographically smallest with respect to any other possible occurrence
of P in Y . The algorithms in [7] perform within roughly O(nm) time, without resorting to any
auxiliary structure or index based on the structure of the text.
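A simple index-free baseline in the spirit of the O(nm)-time episode-matching algorithms cited above can be sketched as follows (our own sketch for illustration, not the algorithm of [7]): compute, for each start position s, the earliest end e(s) of an occurrence of P in the suffix starting at s; since e is nondecreasing in s, keeping for each distinct end the largest start that achieves it yields exactly the minimal realizations.

```python
def minimal_realizations(x, p):
    """All minimal substrings x[s:e+1] of x having p as a subsequence,
    reported as (s, e, substring) triples, left to right.  Runs in
    O(n * (|alphabet| + m)) using a next-occurrence table."""
    n, m = len(x), len(p)
    if m == 0:
        return []
    INF = n  # sentinel meaning "symbol absent from here on"
    alphabet = set(x) | set(p)
    nxt = [dict.fromkeys(alphabet, INF) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        nxt[i] = dict(nxt[i + 1])
        nxt[i][x[i]] = i

    def earliest_end(s):
        # least end position of an occurrence of p starting at or after s
        pos = s
        for c in p:
            j = nxt[pos][c]
            if j == INF:
                return INF
            pos = j + 1
        return pos - 1

    windows = []
    for s in range(n):
        e = earliest_end(s)
        if e == INF:
            break  # earliest_end is nondecreasing: later starts fail too
        if windows and windows[-1][1] == e:
            windows[-1] = (s, e)  # same end, larger start: shrink the window
        else:
            windows.append((s, e))
    return [(s, e, x[s:e + 1]) for (s, e) in windows]

print(minimal_realizations("abcab", "ab"))  # [(0, 1, 'ab'), (3, 4, 'ab')]
```

A window [s, e] survives only if e(s) = e and e(s + 1) > e, i.e., dropping either endpoint destroys the realization, which is precisely the minimality condition.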
In some applications of exact string searching, the text string is preprocessed in such a way
that any subsequent query regarding pattern occurrence takes time proportional to the size of the
pattern rather than that of the text. Notable among these constructions are those resulting in
structures such as subword trees and graphs (refer to, e.g., [1, 6]). Notice that the answer to the
typical query is now only whether or not the pattern appears in the text. If one wanted to locate
all the occurrences as well, then the time would become O(|w| + occ), where occ denotes the total
number of occurrences.
pattern, in the sense that preprocessing of the pattern is not allowed, but are off-line in terms of
the ability to preprocess the text. In general, setting up efficient structures of this kind for non-exact
matches seems quite hard: at times one faces a small selection of options, each representing a different
compromise among a few space and time parameters. In [11, 5], the idea of limiting the
search only to distinct substrings of the text is applied to perform approximate string matching
with suffix trees.
This paper addresses the construction of an automaton, based on the textstring X and suitable
for identifying, for any given P, the set of all distinct minimal realizations of P in X. Specifically,
the automaton recognizes, for each such realization Y, the earliest occurrence of P in Y. The
preceding discussion suggests that it is not difficult to realize such an automaton in time and space
O(n²) for a text of n characters. The main result of the paper consists of bringing those bounds
down to linear or O(n log n), depending on the assumption on alphabet size, thus matching the
cost of preprocessing in off-line exact string searching with subword graphs. Our construction can
be used, in particular, in cases in which the symbols of P are affected by individual "expiration
deadlines", expressed, e.g., in terms of the maximum number of positions of X that may elapse before the next symbol
(or, alternatively, the entire occurrence of pattern P) must be matched.
The paper is organized as follows. In the next section, we review the basic structure of Directed
Acyclic Word Graphs and outline an extension of it that constitutes a first, quadratic-space realization
of our automaton. A more compact implementation of the automaton is addressed in the
following section. Such an implementation requires linear space, but only in the case of a finite
alphabet. The case of general alphabets is addressed in the last section, and it results in a trade-off
between search time and space.
2 DAWGs and Skip-edge DAWGs
Our main concern in this section is to show how the text X can be preprocessed in such a way
that a subsequent search for the earliest occurrences in X of all prefixes of any given P is carried
out in time bounded by the size of the output rather than that of the input. Our solution rests on
an adaptation of the partial minimal automaton recognizing all subwords of a word, also known
as the DAWG (Directed Acyclic Word Graph) [4] associated with that word. Let W be the set of
all subwords of the text X, and let P_i = b_1 ⋯ b_i (1 ≤ i ≤ m) be the ith prefix of P. Our modified graph can
be built in time and space quadratic or linear in the length of the input, depending on whether
the size of the input alphabet is arbitrary or bounded by a constant, respectively, and it can be
searched for the earliest occurrences in all rocc_i distinct realizations of P_i (1 ≤ i ≤ m) in time
O(∑_{i=1}^{m} rocc_i · i).
Here a realization of P_i (1 ≤ i ≤ m) is any minimal substring of X having P_i as a subsequence.
Note that a realization of P_i is a substring that may occur many times in X but is counted
only once in our bound.
We begin our discussion by recalling the structure of the DAWG for string X. First,
we consider the following partial deterministic finite automaton recognizing all subwords of X.
Given two words X and Y, the end-set of Y in X is the set endpos_X(Y) = { j : Y = a_{j−|Y|+1} ⋯ a_j for some
j }. Two strings W and Y are equivalent on X if endpos_X(W) = endpos_X(Y).
The equivalence relation instituted in this way is denoted by ≡_X and partitions the set of all strings
over Σ into equivalence classes. It is convenient to assume henceforth that our text string X is
fixed, so that the equivalence class with respect to ≡_X of any word W can be denoted simply by
[W]. Thus, [W] is the set of all strings that have occurrences in X terminating at the same set of
positions as W. Correspondingly, the finite automaton A recognizing all substrings of X will have
one state for each of the equivalence classes of subwords of X under ≡_X. Specifically:
1. The start state of A is [ffl], where ffl is the empty word;
2. For any state [W ] and any symbol a 2 \Sigma, there is a transition edge leading to state [Wa];
3. The state corresponding to all strings that are not substrings of X is the only nonaccepting
state; all other states are accepting states.
Deleting from A above the nonaccepting state and all of its incoming arcs yields the DAWG
associated with X. As an example, the DAWG for abbbaabaa is reported in Figure 2.
Figure 2: An example DAWG
We refer to, e.g., [4, 6], for the construction of a DAWG. Here we recall some basic properties
of this structure. This is clearly a directed acyclic graph with one sink and one source, where every
state lies on a path from the source to the sink. Moreover, the following two properties hold [4, 6].
Property 1 For any word X, the sequence of labels on each distinct path from the source to the
sink of the DAWG of X represents one distinct suffix of X.
Property 2 For any word X with |X| ≥ 3, the DAWG of X has a number of states Q such that |X| + 1 ≤ Q ≤ 2|X| − 1 and
a number of edges E such that |X| ≤ E ≤ 3|X| − 4.
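For concreteness, the DAWG can be built with the standard online (suffix-automaton) construction; the sketch below is background in our own code, not part of this paper's contribution. Here `length` stores the length of the longest word in each equivalence class and `link` the suffix links driving the construction; the linear state and edge counts of Property 2 hold for the result.

```python
class DAWG:
    """Online construction of the DAWG (suffix automaton) of x.
    States correspond to equivalence classes of end-position sets."""

    def __init__(self, x):
        self.trans = [{}]   # trans[q][c] -> state reached from q on symbol c
        self.link = [-1]    # suffix link of each state
        self.length = [0]   # length of the longest word in the class
        last = 0            # state of the whole prefix read so far
        for c in x:
            cur = len(self.trans)
            self.trans.append({})
            self.link.append(-1)
            self.length.append(self.length[last] + 1)
            p = last
            while p != -1 and c not in self.trans[p]:
                self.trans[p][c] = cur
                p = self.link[p]
            if p == -1:
                self.link[cur] = 0
            else:
                q = self.trans[p][c]
                if self.length[q] == self.length[p] + 1:
                    self.link[cur] = q
                else:
                    # split class q: clone keeps q's transitions but
                    # represents only the shorter words of the class
                    clone = len(self.trans)
                    self.trans.append(dict(self.trans[q]))
                    self.link.append(self.link[q])
                    self.length.append(self.length[p] + 1)
                    while p != -1 and self.trans[p].get(c) == q:
                        self.trans[p][c] = clone
                        p = self.link[p]
                    self.link[q] = clone
                    self.link[cur] = clone
            last = cur

    def accepts_substring(self, w):
        q = 0
        for c in w:
            q = self.trans[q].get(c)
            if q is None:
                return False
        return True

d = DAWG("abbbaabaa")
print(d.accepts_substring("bbaab"))  # True
```

Every path from the source spells a distinct substring of x, matching Property 1 when the walk is continued down to the sink.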
Recalling the basic structure of a subsequence detector such as the one of Fig. 1, it is immediate
to see how the DAWG of X may be adapted to recognize all earliest occurrences of any given pattern
P as a subsequence of X. Essentially, we need to endow every node α with a number of "downward
failure links" or skip-edges. Each such link will be associated with a specific alphabet symbol, and
the role of a link leaving α with label a will be to enable the transition to a descendant of α on
a nontrivial (i.e., with at least two original edges) path labeled by a string in which symbol a
occurs only as a suffix. Formally, a skip link labeled a is set from the node α associated with the
equivalence class [W] to any other node β associated with some class [WVa] such that V ≠ ε and
a does not appear in V. Thus, a skip-edge labeled a is issued from α to each one of its closest
descendants, other than children where an original incoming edge labeled a already exists. As an
example, Figure 3 displays a partially augmented version of the DAWG of Figure 2, with skip-edges
added only from the source and its two adjacent nodes. We will use the words full skip-edge DAWG
to refer to the structure that would result from adding skip-edges to all nodes of the DAWG. Clearly,
the role of skip-edges is to serve as shortcuts in the search. However, these edges also introduce
"nondeterminism" in our automaton; in particular, now more than one path from the source may
be labeled with a prefix of P. The following theorem summarizes the discussion.
Theorem 1 For any string X of n symbols, the full skip-edge DAWG of X can be built in O(n²|Σ|)
time and space, and it can be searched for all rocc_i earliest realizations of prefixes of any pattern
of m symbols in time
O(∑_{i=1}^{m} rocc_i · i).
Proof. Having built the DAWG in O(n log |Σ|) time by one of the existing methods, the augmentation
itself is easily carried out in O(n²|Σ|) time and space, e.g., by adaptation of a depth-first visit
of the DAWG, as follows. First, when the sink is first reached, it gets assigned NIL skip-edges for
all alphabet symbols; next, every time we backtrack to a node α from some other node β, the label
of arc (α, β) and the skip-edges defined at β are used to identify and issue (additional) skip-edges
from α. The bound follows from the fact that for every node and symbol, skip-edges might have
to be directed to Θ(n) other nodes.
The time bound on searches is subtended by an immediate consequence of Property 1. Namely,
we have that P occurs as a subsequence of X beginning at some specific position i of X if and only
if the following two conditions hold: (1) there is a path π labeled P from the source to some node α
of the full skip-edge DAWG of X, and (2) it is possible to replace each skip-edge in π with a chain
of original edges in such a way that the resulting path from the source to α is labeled by consecutive
symbols of X beginning with position i. Therefore, the search for P is trivially performed, e.g.,
as a depth-first visit of all longest paths in the full skip-edge DAWG that start at the source and
are labeled by some prefix of P. (Incidentally, the depth of the search may be suitably bounded
by taking into account the length of P and the lengths of the shunted paths.) Each edge is traversed
precisely once, and each time we backtrack from a node corresponds to a prefix of P which cannot
be continued along the path being explored, whence the claimed time bound for searches. □
The search-time bound of Theorem 1 is actually not tight, an even tighter one being represented
by the total number of distinct nodes traversed. In practice, this may be expected to be proportional
to some small power of the length of P. Consideration of symbol durations may also be added to the
construction phase, thereby further reducing the number of skip-edges issued. The main problem,
however, is that storing a full skip-edge DAWG would take an unrealistic Θ(n²) worst-case space
even when the alphabet size is a constant (cf. Fig. 3). The remainder of our discussion is devoted
to improving on this space requirement.
Figure 3: Adding skip-edges from the source and its two adjacent nodes
3 Compact skip-edge DAWGs
Observe that by Property 1 each node of the DAWG of X can be mapped to a position i in X in
such a way that some path (say, to fix the ideas, the longest one) from that node to the sink is
labeled precisely by the suffix a_i a_{i+1} ⋯ a_n of X. As is easy to check, such a mapping assignment can
be carried out during the construction of the DAWG at no extra cost. Observe also that there is
always a path labeled X in the DAWG of X. This path will be called the backbone of the DAWG.
Let the depth of a node ν in the DAWG be the length of the longest word W on a path from the
source to ν. By the definition of a DAWG, every other path from the source to
ν is then labeled by one of the consecutive suffixes of W down to a certain minimum length (these are
the words in the equivalence class [W] that occur in X only as suffixes of W). It also follows that,
considering the set of immediate predecessors of ν on these paths, their depths must be mutually
different and each smaller than |W|. Finally, the depths of the backbone nodes must be given by
the consecutive integers from 0 (for the source) to n (for the sink).
In order to describe how skip links are issued on the DAWG, we resort to a slightly extended
version of a spanning tree of the DAWG (see Fig. 4). The extension consists simply of replicating
the nodes of the DAWG that are adjacent to the leaves of the spanning tree, so as to bring into
the final structure also the edges connecting those nodes (any of these edges would be classified as
either a "cross edge" or a "descendant edge" in the depth-first visit of the DAWG resulting in our
tree). We stipulate that the duplicates of a node ν are created in such a way that ν is left
connected to the immediate predecessor μ of ν with the property that the depth of μ in the DAWG
equals the depth of ν minus 1. Note that such a node μ must exist for each ν except the source.
Note also that our spanning tree must contain a directed path that corresponds precisely to the
backbone of the DAWG.
Let T be the resulting structure. Clearly, T has the same number of edges as the DAWG.
Moreover, each node of the DAWG is represented in T at most once for every incoming edge.
Therefore, the size of T is linear in |X|. We use the more convenient structure of T to describe
how to set skip-edges and other auxiliary links. In actuality, the edges are set on the DAWG.
Since the introduction of a skip-edge for every node and symbol would be too costly, we will
endow only a fraction of the nodes with such edges. Specifically, our policy will result in a linear
number of skip-edges being issued overall. From any node not endowed with a skip-edge on some
desired symbol, the corresponding transition will be performed by first gaining access to a suitable
node where such a skip-edge is available, and then by following that edge. In order to gain access
from any node to its appropriate "surrogate" skip-edge, we resort to two additional families
of auxiliary edges, respectively called deferring edges and back edges. The space overhead brought
about by these auxiliary edges will be only O(n · |Σ|), hence linear when Σ is finite. The new
edges will be labeled, just like skip-edges, but unlike skip-edges their traversal on a given input
symbol does not consume that symbol. Their full structure and management will be explained in
due course.
We now describe the augmentation of the DAWG. With reference to a generic node γ of T, we
distinguish the following cases.

• Case 1: Node γ has outdegree 1. Assume that the edge leaving γ is labeled a, and consider
the original path π from γ to a branching node or leaf of T, whichever comes first. For every
first occurrence on π of an edge (η, β) labeled ā ≠ a, direct a skip-edge labeled ā from γ to
β. For every symbol of the alphabet not encountered on π, set a deferring edge from γ to the
branching node or leaf found at the end of π.

Figure 4: An extended spanning tree T with sample skip-edges
• Case 2: Node γ is a branching node. The auxiliary edges to be possibly issued from γ are
determined as follows (see also Fig. 5). Let η be a descendant, other than a child, of γ in T,
with an incoming edge labeled a, and let π be the longest ascending original path from η such
that no other edge of π is labeled a. If γ is the highest (i.e., closest to the root) branching
node on π, then perform the following two actions. First, direct a skip-edge labeled a from
γ to η. Next, consider the subtree of T rooted at γ. Any path of T in this tree that does
not lead eventually to an arc labeled a (like the arc leading to node η) must end on a leaf.
To every such leaf, direct from γ a deferring edge labeled a. This second action is always
performed in the special case where γ is the root.
• Case 3: Node γ is a leaf. If γ is the sink, nothing is done. Otherwise, let ν be the original
DAWG node of which γ is a replica. Back on T, for each symbol a of the alphabet, follow
every distinct path from ν until encountering a first occurrence of a or the sink. For every
such path with no intervening branching nodes, direct a skip-edge from the leaf γ to the
node at the end of the path. For every path traversing and proceeding past a branching node,
direct a deferring edge from γ to the deepest one among such branching nodes. At the end,
eliminate possible duplicates among the deferring edges that were introduced.
Figures 4 and 5 exemplify skip links for the backbone, the root, and one of its children in the
tree T of our example.
Figure 5: A one-symbol transition from a node α of T to its descendant η is implemented via
three auxiliary-edge traversals: first, through a deferring link to the nearest branch node β; next
from β to γ through a back-edge; finally, through the skip-edge from γ to η. To avoid unnecessarily
cluttering the figure, not all edges are shown. Note that the presence of another a-labeled skip-edge
from γ to δ introduces ambiguity as to which direction to take once the search has reached γ
At this point, and as a result of our construction policy, there may be branching nodes that do
not get assigned any skip- or deferring edge. This may cause a search to stall in the middle of a
downward path for lack of appropriate direction. In order to prevent this problem, back edges are
added to every such branching node, as follows (see Fig. 5). For every branching node β of T and
every alphabet symbol a such that an a-labeled skip-edge from β is not defined, an edge labeled
a is directed from β to the closest ancestor γ of β from which a skip- or deferring edge labeled a
is defined. We refer to γ as the a-backup of β, and we denote it by back_a(β). Note that, by our
construction of Case 2, such a backup node always exists. Clearly, our intent is that the effect of
traversing a skip-edge as described in the previous section can now be achieved by traversing a
small sequence of auxiliary edges. In the example of Fig. 5, the transition from node α to η is
implemented by traversing in succession one deferring edge, one back edge, and one skip-edge. This
complication is compensated by the following advantage.
Lemma 1 The total number of auxiliary edges in T, whence also in the augmented DAWG of X,
is O(|X| · |Σ|).
Proof. There is at most one auxiliary edge per alphabet symbol leaving nodes of outdegree 1,
so that we can concentrate on branching nodes and leaves. As for the skip-edges directed from
branching nodes, observe that for any symbol a and any node β, at most one skip-edge labeled a
may reach β from some such node, due to the conventions made in Case 2. Indeed, if a skip-edge
labeled a is set from some branching node α to another node β, then by construction no branching
node on the path from α to β can be issued an a-labeled skip-edge. Also by construction, either α
is the root or else there must be an edge labeled a on the path from the closest branching ancestor
of α to α itself. Hence, no skip-edge labeled a could possibly be set from a branching ancestor of
α to a node in the subtree of T rooted at α. A similar argument holds for deferring edges directed
towards every leaf of T. Indeed, by the mechanics of Case 2, if a deferring edge labeled a is set
from a branching node α to a leaf β, then there is no original edge labeled a on the path from α
to β. Again, either α is the root or else there must be an original edge labeled a on the path from
the closest branching ancestor of α to α itself, so that no deferring edge labeled a may be issued
from a branching ancestor of α to a node in the subtree of T rooted at α.
Considering now the leaves, observe first that at most one skip-edge per symbol may be directed
from a leaf to a node. To get a global bound on all deferring edges set from leaves, consider the
compact trie of all suffixes of X. This trie has O(n) leaves and arcs, and every branching node of
T can be mapped to a distinct branching node in the trie (indeed, T can be obtained figuratively
by pruning the non-compact version of the trie). We will map each one of the deferring edges set
from a leaf γ into an arc of the trie, in such a way that each arc of the trie is charged at
most one such deferring edge per alphabet symbol. To see this mapping, let W be the word from
the root of T to γ, and let W′ = W V a be the extension of W that completes one of the paths
ending in a first occurrence of a after crossing some branching nodes in the DAWG. The deferring
edge relative to W′ is charged to the edge of the trie where W′ ends. Observe that, for any other
extension W″ = W V′ a of W appearing in X and such that a has no occurrence in V′, V and V′
must diverge. In other words, W″ must have a path in the trie that diverges from that of W′, and
thus will charge an arc of the trie different from the one charged by W′. Moreover, since no prefix
of W ends on a leaf of T, no ancestor of γ can produce the same charges. In conclusion, for
each leaf of T and symbol of Σ a distinct arc of the trie is charged, whence the total number of
auxiliary edges of this kind per symbol is bounded by the length of X. □
Lemma 2 P has a realization Y in X beginning with a_i and ending with a_j if and only if there is
a sequence σ of arcs in T with the following properties: (i) the concatenation of consecutive labels
on the original and skip-edges of σ spells out P; (ii) any original or skip-edge of σ is followed by
a sequence containing at most two deferring edges and at most one back edge; (iii) there is a path
labeled a_i a_{i+1} ⋯ a_j in the DAWG from the source to the node which is
reached by the last arc of σ.
Proof. The proof is by induction on the length m of the pattern. Let m = 1 and assume that P has
a realization Y in X as stated. By the definition of T, there must be an original arc corresponding
to b_1. The node reached by this arc trivially satisfies points (i)–(iii). For m > 1,
assume that the claim holds for all patterns of length less than m and that we have matched
the prefix P_{m-1} of P down to some node α of T while maintaining (i)–(iii). Hence,
for some index f < j, there is a path from the root of T to α labeled Y_f and such
that Y_f is a realization of P_{m-1}.
Given that P has a realization Y, then there must be a path in the DAWG labeled a_{f+1}...a_j
and originating at the node represented in T by α. The path itself has an image in T, perhaps
fragmented in a sequence of segments. Considering this image, the claim is then easily checked if
the last symbol b_m of P is consumed through either an original arc or a defined skip-edge leaving α
and shunting the path labeled a_{f+1}...a_j. Assuming that neither type of edge is defined at α, then
by construction there must be a deferring edge labeled a leading through a path labeled by
a prefix a_{f+1}...a_k of a_{f+1}...a_j either to a branching node, call it β, or to some leaf λ.
In this second case, we have either a defined skip-edge leading to the final node, which would
conclude the argument, or one more deferring edge that will take from λ to a branching node.
Considering such a branching node, it must be the image in T of some node of the DAWG on the
path from α labeled a_{k+1}...a_j. Hence from this moment on we can reset our reasoning and assume
to be in the same case as if we were on a branching node β, except for the fact that we would now
start with the "handicap" of having already consumed up to two deferring edges.
Assume then that we are on a branching node β, possibly having already traversed one or two
deferring edges, and with some suffix of a_{k+1}...a_j still to be matched. Clearly, if β has
a defined a-labeled skip- or original edge, this concludes the argument. Thus, the only remaining
cases of interest occur when no a-labeled skip- or original edge is defined from β. In such an event,
the link to back_a(β) = γ is traversed instead, as depicted in Fig. 5.
Let η be any of the descendants of β such that η is connected to β through a nontrivial original
path π of T in which symbol a appears precisely as a suffix and there is no a-labeled original edge
from β to η. We claim that there is a skip-edge labeled a from γ to η. In fact, by our selection
of π, there is no other edge labeled a on the path from β to the parent node of η. Assuming one
such edge existed on the path from γ to β, then β itself or a branching ancestor of β other than γ
would have a-labeled skip-edges defined, thereby contradicting that back_a(β) = γ. If
we choose η to be the node at the end of the path originating from β and labeled by the suffix of
a_{k+1}...a_j that remains at this point to be consumed, we have that a skip-edge labeled a must exist
from γ to η. Traversal of this edge achieves propagation of points (i)–(iii) from P_{m-1}
to P within the transitions of at most two deferring edges, one back edge and one skip-edge.
The proof of the converse is straightforward and thus is omitted. □
Based on Lemma 2, a search for the realizations of a pattern in the augmented DAWG of X
may be carried out along the lines of a bounded-depth visit of a directed graph. The elementary
downward step in a search consists of following an original edge or a skip-edge, depending on which
one is found. The details are then obvious whenever such an edge actually exists. The problem
that we need to examine in detail is that for any symbol a there may be more than one skip- or
deferring edge labeled a leaving a node like the node γ of Fig. 5, with some such edges leading to
descendants of γ that are not simultaneously descendants of α. In Fig. 5, an instance of such a
situation is represented by node δ.
We assume that all auxiliary (i.e., skip- or deferring) edges leaving a node γ under the same
label a are arranged in a list dedicated to a, sorted, say, according to the left-to-right order
of the descendants of γ. Thus, in particular, any descendants of the node α of our example that are
reachable by an a-labeled auxiliary edge from γ are found as consecutive entries in the auxiliary
edge list of γ associated with symbol a. This list or part of it will be traversed left to right in our
search, as follows naturally from the structure of a depth-first visit of a graph. The convention is
also made that the back-edge from β to γ points directly to the auxiliary edge associated with the
leftmost descendant of β. During a search, the beginning of the sublist at γ relative to descendants
of β is immediately accessed from β in this way, and the skip- and deferring edges in that sublist are
scanned sequentially while the subtree of T rooted at β is being explored. The following theorem
summarizes the discussion.
Theorem 2 The compact skip-edge DAWG associated with X supports the search for all earliest
occurrences of a pattern P in X in time
O(∑_{i=1}^{m} rocc_i),
where m is the length of P and rocc_i is the number of distinct realizations in X of the prefix P_i of P.
As already noted, a realization is a substring that may occur many times in X but is counted
only once in our bound. It is not difficult to modify the DAWG augmentation highlighted in the
previous section so as to build the compact variant described here. Again, the core paradigm
is a bottom-up computation on T , except that this time lists of skip- and deferring edges may
be assigned to branching nodes only on a temporary basis: whenever, climbing back towards the
root from some node β, an ancestor branching node α is encountered before any intervening edges
labeled a, then the a-labeled skip- and deferring edge lists of β are surrendered to α, and β is
simultaneously appended to a list Back of branching nodes awaiting back edges. As soon as
(because of an intervening original edge labeled a or having reached the root) the a-labeled lists are
permanently assigned to some node α, appropriate back-edges are also directed from every node in
the list Back, and Back itself is disposed of. A similar process governs the introduction of skip-
and deferring edges from leaves. Recall that a leaf of T is in fact a replica of the same "confluence
node" of the DAWG. This not only shows that it is possible to compute this class of auxiliary edges
bottom-up, but also suggests that for a group of leaves replicating a same node of the DAWG it
suffices to issue the edges at the node of T that is the image of that DAWG node and let the replicas
simply point to it. Note that, as long as we insist on reasoning in terms of T , these deferring edges
from leaves must be suitably marked, lest they be confused with those issued at branching nodes
and play havoc with the search as described in Lemma 2.
The overall process takes time and space linear in the structure at the outset, which is linear
in |X| for fixed alphabets. Symbol durations may be taken into account both during construction
as well as in the searches, possibly resulting in additional savings. The details are tedious but
straightforward and are left to the reader.
4 Generalizing to unbounded alphabets
When the alphabet size |Σ| is not a constant independent of the length n of X, we face the choice
of implementing the (original) adjacency list of a node of the DAWG as either a linear list or
a balanced tree. The first option leaves space unaffected but introduces slowdown by a linear
multiplicative factor in worst-case searches. The second introduces some linear number of extra
nodes but now the overhead of a search is only a multiplicative factor O(log |Σ|). Below, we assume
this second choice is made. Rather straightforward adaptations to the structure discussed in the
previous section would lead to a statement similar to Theorem 2, except for an O(log |Σ|) factor
in the time bound. Here, however, we are more interested in the fact that when the alphabet size
is no longer a constant Lemma 1 collapses, as the number of auxiliary edges needed in the DAWG
may become quadratic. In this section, we show that a transducer supporting the search
can in fact be built within O(n log n) time and linear space.
The idea is of course to forfeit many skip-edges and other auxiliary edges and pay for this
sparsification with a log |Σ| overhead on some elementary transitions. We explain first how this
can work on the original array in which X is stored. We resort to a global table Close, defined as
follows [2]. The entry of Close at position j contains the smallest position larger
than j where there is an occurrence of s_p, the pth symbol of the alphabet, where s_p is the
symbol cyclically assigned to position j.
Thus, Close is regarded as subdivided into blocks of size |Σ|, the entries of which are cyclically
assigned to the different symbols of the alphabet. It is trivial to compute Close from X, in linear
time. Let now closest(i, p) be the closest instance of s_p to the right of position i (if there is no such
occurrence, set closest(i, p) = ∞). The following property holds.
Lemma 3 Given the table Close and the sorted list of occurrences of s_p, closest(i, p) can be
computed for any p in O(log |Σ|) time.
We refer to [2] for a proof of Lemma 3. The main idea is that two accesses to the table Close
must either identify the desired
occurrence or else will define an interval of at most |Σ| entries in the occurrence list of s_p, within
which the desired occurrence can be found by binary search, hence in O(log |Σ|) time. Note that
the symbols of X can be partitioned into individual symbol-occurrence lists in O(n log |Σ|) overall
time, and that those lists occupy linear space collectively.
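As an illustration, the per-symbol occurrence lists alone already support a closest query by binary search. The sketch below (Python, with hypothetical names) answers closest(i, p) in O(log n) time; the Close table of [2] refines this to O(log |Σ|) per query.

```python
from bisect import bisect_right

def occurrence_lists(X):
    """Partition the positions of X into sorted per-symbol occurrence lists."""
    occ = {}
    for i, s in enumerate(X):
        occ.setdefault(s, []).append(i)  # positions arrive in increasing order
    return occ

def closest(occ, i, s):
    """Smallest position strictly larger than i holding symbol s, or None."""
    positions = occ.get(s, [])
    k = bisect_right(positions, i)
    return positions[k] if k < len(positions) else None
```

For example, with X = "abacaba", closest(occ, 1, 'b') returns 5, and closest(occ, 6, 'a') returns None since no 'a' occurs after position 6.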
The above construction enables us immediately to get rid of all skip-edges issued inside each
chain of unary nodes present in T. A key element in making this latter fact possible is the
circumstance, already remarked, that we can map every path to the sink of the DAWG, hence also
every such maximal chain, to a substring of a suffix (hence, to an interval of positions) of X. In
fact, once such an interval is identified, an application of closest will tell how far down along the
chain one should go. Along these lines, we only need to show how a downward transition on T
is performed following the identification made by closest of the node that we want to reach: we
may either scan the chain sequentially or search through it logarithmically. The first option results
in adding to the overall time complexity a term linear in n, the second requires additional ad-hoc
auxiliary links at the rate of at most 2 log n per node, of which log n point upward and at most as
many point downwards. The overhead introduced by the second option is O(log n) per transition,
which absorbs the O(log |Σ|) possibly charged by closest. The same scheme can be adapted
to get rid of skip-edges directed from the leaves.
We still face a potential of Θ(|Σ|) deferring edges per chain node, and as many backup edges
per branching node. These edges are easy to accommodate: all deferring edges from a node point
to the same branching node and can thus coalesce into a single "downward failure link". As for the
backup edges, recall that by definition, on the path π between β and back_a(β) = γ there can be no
edge labeled a, but such an edge must exist on the path from the closest branching ancestor of γ
to γ. Let trivially each node of T be given as a label the starting position of the earliest suffix of X
whose path passes through that node. Then, we can use the table closest on the array X to find
the position of this arc, climb to it on T using at most log n auxiliary upward links, and
finally reach γ through the downward failure link. Considering now the deferring edges that lead
to leaves, these edges can be entirely suppressed at the cost of visiting the subtrees of T involved
at search time: this introduces work linear overall, since, e.g., in a breadth-first search it suffices to
visit each subtree once.
Finally, we consider the collection of all deferring edges that originate at an arbitrary leaf. Recall
that when a deferring edge labeled a is set from a leaf γ to a branching node β, this is done with the
intent of making accessible during searches a final target node η that is found along a unary chain
connecting β to its closest branching descendant (or leaf) λ. Specifically, η is the node at the end
of the first edge labeled a on the chain connecting β to λ. In analogy with what was discussed earlier,
η can be reached in logarithmic time from λ, through an application of closest. The problem is
thus how to be prepared to reach nodes such as λ, during searches, without dedicating one separate
deferring edge to every such node.
In the terms of the discussion of Lemma 1, the idea could be again to coalesce into the same deferring
edge all of those deferring edges from γ that would be charged to the same arc of the suffix trie of
X, and let closest discern at search time among the individual symbols present on that arc. In the
specific case we are considering, this trie arc would be one that maps, in the DAWG, to the path
connecting β to λ. However, this time this is not enough, since not every symbol of Σ is guaranteed
to appear in every DAWG chain or trie arc. We must go one step further and coalesce all of the at
most |Σ| edges reaching down along a given path of the trie into the deepest one among those edges.
The intent is that, during a search, the table closest will be used to climb back to the appropriate
depth and symbol. We need to show that this is done consistently, i.e., that a connected path
supporting this climb is guaranteed to exist in T.
Let W be the word associated with γ and let W′ = WV be the shortest extension of W such that V
contains at least one instance of every symbol of Σ and W′ ends at a branching node of the suffix
trie. Let W̄ and W̄′ be, respectively, the longest words in the equivalence classes [W] and [W′],
and recall that γ is a replica of the node of T corresponding to W̄. Clearly, there must be a path
in the DAWG connecting the node of [W] to that of [W′] and labeled V. Moreover, the DAWG
node corresponding to [W′] must be a branching node, because such is the corresponding node in
the trie. By our construction of T, such a branching node must exist also in this tree, and it must
be connected to the root through a path labeled W̄′. Since W̄V is a suffix of W̄′, a connected path
labeled V exists in T as claimed.
We conclude by pointing out that all log factors appearing in our claims can be reduced to
log log n at the expense of some additional bookkeeping, by deploying data structures especially
suited for storing integers in a known range [8]. It is also likely that the log n factors could be made
to disappear entirely by resorting to amortized finger searches such as, e.g., in [2].
5 Conclusion
We have described a data structure suitable for reporting the occurrences of a pattern string as a
constrained subsequence of another string. Since the full-fledged data structure would be too bulky
in practical applications, a more compact, "sparse" version was built where space saving is traded
in exchange for some overhead on search time. Both of these parameters are perhaps susceptible of
further improvement. In particular, it is not clear that the bounds attained for fixed alphabet sizes
cannot be extended without penalty to the case of an unrestricted alphabet. Non-trivial estimates
or bounds on the terms rocc i that appear in our complexities may shed more light on the expected
or worst case performance of a search. Finally, little is known about indices that would return, in
time linear in the pattern size, whether or not any given pattern occurs as an episode subsequence
of the textstring.
6 Acknowledgements
We are indebted to the Referees for their thorough scrutiny of the original version of this paper
and for their many valuable comments.
--R
Pattern Matching Algorithms
The Longest Common Subsequence Problem Revisited
The Smallest Automaton Recognizing the Subwords of a Text
Fast Approximate Matchings Using Suffix Trees
Algorithms
A Priority Queue in which Initialization and Queue Operations Take O(log log n) Time
A Pattern-Matching Model for Intrusion Detection
Discovering Frequent Episodes in Sequences
Approximate String Matching with Suffix Trees
Introduction to Computational Biology
--TR
Searching subsequences
Text algorithms
Pattern matching algorithms
Discovery of Frequent Episodes in Event Sequences
Approximate String-Matching over Suffix Trees
Episode Matching
--CTR
Zdeněk Troníček, Episode directed acyclic subsequence graph, Nordic Journal of Computing, v.11 n.1, p.35-40, Spring 2004
Abhijit Chattaraj , Laxmi Parida, An inexact-suffix-tree-based algorithm for detecting extensible patterns, Theoretical Computer Science, v.335 n.1, p.3-14, 20 May 2005
Robert Gwadera , Mikhail J. Atallah , Wojciech Szpankowski, Reliable detection of episodes in event sequences, Knowledge and Information Systems, v.7 n.4, p.415-437, May 2005
Philippe Flajolet , Wojciech Szpankowski , Brigitte Vallée, Hidden word statistics, Journal of the ACM (JACM), v.53 n.1, p.147-183, January 2006 | compact subsequence automaton;DAWG;skip-link;subsequence and episode searching;algorithms;suffix automaton;skip-edge DAWG;forward failure function;pattern matching
570575 | Finite variability interpretation of monadic logic of order. | We consider an interpretation of monadic second-order logic of order in the continuous time structure of finitely variable signals. We provide a characterization of the expressive power of monadic logic. As a by-product of our characterization we show that many fundamental theorems which hold in the discrete time interpretation of monadic logic are still valid in the continuous time interpretation. | Introduction
In recent years, systems whose behavior changes in continuous (real)
time have been extensively investigated. Hybrid and control systems are prominent
examples of real time systems.
A run of a real time system is represented by a function from the non-negative
reals into a set of values - the instantaneous states of the system. Such a function
will be called a signal. Usually, there is a further restriction on the behavior of
continuous time systems. For example, a function that gives value q_0 for the
rationals and value q_1 for the irrationals is not accepted as a 'legal' signal.
A requirement that is often imposed in the literature is that in every bounded
time interval a system can change its state only finitely many times. This
requirement is called a finite variability (or non-Zeno) requirement. A function
from the non-negative reals into a set Σ that satisfies this requirement is called a
finitely variable signal. If in addition such a function x satisfies the requirement
that for every t there is ε > 0 such that x is constant on [t, t+ε), then it is called
a right continuous signal. It is clear that the finite variability and right continuity
requirements are not metric requirements.
Recall that the language L^<_2 of monadic second order logic of order contains
individual variables, second order variables and the binary predicate <. In the
discrete time structure ω (this structure will be defined precisely in Section 3),
the individual variables are interpreted as natural numbers, the second order
variables as monadic predicates (monadic functions from the natural numbers
into the booleans), and < is the standard order on the set of natural numbers.
A monadic formula φ(X) with one free predicate variable X defines the set
of ω-strings over {0, 1} that satisfy φ. There exists a natural one-one correspondence
between the set of ω-strings over the alphabet {0, 1}^n and the set of
n-tuples of monadic predicates over the set of natural numbers. With a formula
φ, the set of ω-strings which satisfy φ
can be associated through this correspondence. Such a set of ω-strings is called the ω-language definable by
φ. So, monadic logic can be considered as a formalism for the specification of
the behavior (set of runs) of discrete time systems. This logic is accepted as a
kind of universal formalism among decidable formalisms for the specification
of discrete time behavior [13].
In this paper we consider interpretations of monadic logic in the continuous
time structures of the finitely variable signals and the right continuous signals.
In these structures the individual variables range over the non-negative real
numbers, the second order variables range over the finitely variable (respectively,
right continuous) boolean signals, and < is the standard order relation on the set
of real numbers. Similar to the discrete case, monadic logic can be considered
as a formalism for the specification of the behavior of continuous time systems.
Note that metric properties of reals cannot be specified in this logic.
We provide (see Theorem 1) a characterization of signal languages definable
in the monadic logic of order. The result is significant due to the fact that many
specification formalisms for reasoning about real time which were considered in
the literature can be effectively embedded in L^<_2. In [10] we illustrated the
expressive power of L^<_2 by providing meaning preserving compositional translations
from Restricted Duration Calculus [4], Propositional Mean Value Calculus
[5] and Temporal Logic of Reals - TLR [3] into the first-order fragment of L^<_2.
We apply Theorem 1 to the analysis of a number of fundamental problems.
First, as an immediate consequence of our main result we obtain the decidability
of L^<_2 under the finitely variable and right continuous interpretations. These decidability
results were obtained in [9, 10] by the method of interpretation [7]. We
reduced in [9, 10] these decidability problems to the decidability problem for L^<_2
under the F_σ interpretation. In the F_σ interpretation the monadic predicate variables
range over the countable unions of closed subsets of the reals. The decidability of
L^<_2 under the F_σ interpretation was established in [7].
Second, we show that under the finitely variable and right continuous interpretations
the existential fragment of L^<_2 is expressively complete, i.e. for every L^<_2
formula φ there is an equivalent formula of the form ∃X_1 ... ∃X_n ψ, where ψ is
a first-order monadic formula.
Then we reconsider two fundamental problems of Classical Automata Theory:
(1) Automata characterization of the L^<_2 definable languages and (2) The
uniformization problem.
The classical result of Büchi provides an automata theoretical characterization
of the L^<_2 definable languages. It says that an ω-language is definable by a
monadic formula (under the discrete interpretation) iff it is accepted by a finite
state automaton.
Let φ(X, Y) be a formula such that ∀X∃Y φ holds. The uniformization
problem for φ is to find a finite state input-output automaton (transducer) such
that the function F computable by the automaton satisfies ∀X. φ(X, F(X)). The
uniformization problem for the monadic second order theory of order of the
structure ω was solved positively by Büchi and Landweber [2].
We check whether these classical results can be extended to continuous time.
In [11] automata that accept finitely variable languages were defined. It was
announced there that a finitely variable language is definable in monadic logic
iff it is accepted by a finite state automaton. Here we show that this theorem is
a consequence of Theorem 1. In [11] it was also announced that the uniformization
problem has a positive solution in continuous time. We found a bug in our
proof. In the last section we show that the failure of uniformization is a
consequence of Theorem 1.
The rest of this paper is organized as follows. In Section 2 terminology and
notations are fixed. In Section 3 the syntax and semantics of monadic second
order logic of order are recalled. Theorem 1 provides reductions between definable
ω-languages and definable signal languages; it is stated in Section 4. In Section
5 we collect some simple lemmas and in Section 6 we prove Theorem 1.
Section 7 gives some important corollaries.
2 Terminology and Notations
N is the set of natural numbers; R is the set of real numbers; R_{≥0} is the set
of non-negative reals; BOOL is the set of booleans and Σ is a finite non-empty
set. We use f ∘ g for the composition of f and g.
A function from N to Σ is called an ω-string over Σ. A function h from the
non-negative reals into a finite set Σ is called a finitely variable signal over Σ if
there exists an unbounded increasing sequence τ_0 = 0 < τ_1 < τ_2 < ...
such that h is constant on every interval (τ_i, τ_{i+1}). Below we will use 'signal'
for 'finitely variable signal'. We say that a signal x is right continuous at t iff
there is t_1 > t such that x is constant on [t, t_1). We
say that a signal is right continuous if it is right continuous at every t.
A set of ω-strings over Σ is called an ω-language over Σ. Similarly, a set of
finitely variable (respectively, right continuous) signals over Σ is called a finitely
variable (respectively, right continuous) Σ-signal language.
3 Monadic Second Order Theory of Order
3.1 Syntax
The language L^<_2 of monadic second order theory of order has individual variables,
monadic second order variables, a binary predicate <, the usual propositional
connectives and first and second order quantifiers ∃^1 and ∃^2. We use
t, v for individual variables and X, Y for second order variables. Often it will
be clear from the context whether a quantifier is first or second order;
in such cases we will drop the superscript. We use the standard abbreviations;
in particular, "∃!" means "there is a unique".
The atomic formulas of L^<_2 are formulas of the form t < v and X(t). The
formulas are constructed from atomic formulas by using logical connectives and
first and second order quantifiers.
We write φ(X, Y, t, v) to indicate that the free variables of a formula φ are
among X, Y, t, v.
3.2 Semantics
A structure K = ⟨A, B, <_K⟩ for L^<_2 consists of a set A partially ordered by <_K
and a set B of monadic functions from A into BOOL. The letters τ and x, y
will range over the elements of A and B respectively. We will not distinguish
between a subset of A and its characteristic function. The satisfiability relation
|= is defined in a standard way.
We sometimes use K
We will be interested in the following structures:
1. The discrete time structure ω = ⟨N, 2^N, <_N⟩, where 2^N is the set of all monadic functions
from N into BOOL.
2. The signal structure Sig is defined as ⟨R_{≥0}, SIG, <⟩, where
SIG is the set of finitely variable boolean signals.
3. The right continuous signal structure Rsig is defined as ⟨R_{≥0}, RSIG, <⟩,
where RSIG is the set of right continuous boolean signals.
3.3 Definability
Let φ(X) be an L^<_2 formula and let K = ⟨A, B, <_K⟩ be a structure. We say that
a set C ⊆ B is definable by φ(X) if x ∈ C if and only if K, x |= φ(X).
Example (Interpretations of Formulas).
1. The formula ∀t. ...
defines the ω-language {(01)^ω, ...} in the structure ω and defines the
set of all signals in the signal and right continuous signal structures.
2. The formula ∃Y. ∀t′. ...
defines in the structure ω the set of
strings in which between any two occurrences of 1 there is an occurrence
of 0. In the signal structure the above formula defines the set of signals
that receive value 1 only at isolated points. The formula defines the empty
language under the right continuous signal interpretation.
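The property in the second example can be checked mechanically on finite prefixes of binary ω-strings: between any two occurrences of 1 there is a 0 exactly when the prefix contains no two adjacent 1s. A minimal sketch (Python, hypothetical function name):

```python
def ones_isolated(prefix):
    """True iff between any two occurrences of '1' in the binary prefix there is a '0'.

    Over the alphabet {0, 1} this is equivalent to: no two adjacent '1's occur.
    """
    return '11' not in prefix
```

For instance, "0101001" satisfies the property while "0110" does not.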
In the above examples, all formulas have one free second order variable and they
define languages over the alphabet {0, 1}. A formula φ(X_1, ..., X_n) with n free
second order variables defines a language over the alphabet {0, 1}^n.
We say that an ω-language (finitely variable or right continuous signal language)
is definable if it is definable by a monadic formula in the structure ω
(respectively, in the structure Sig or Rsig).
4 Characterization of Definable Signal Languages
Recall that a function x from the non-negative reals into a finite set Σ is called a
finitely variable signal over Σ if there exists an unbounded increasing sequence
τ_0 = 0 < τ_1 < τ_2 < ... such that x is constant on every interval (τ_i, τ_{i+1}).
In this case there exists an ω-string ⟨a_0, b_0⟩⟨a_1, b_1⟩... over the
alphabet Σ × Σ such that x(τ_i) = a_i and x(t) = b_i for t ∈ (τ_i, τ_{i+1}). Such an ω-string
is said to represent the finitely variable signal x. We denote by FV(s) the set
of finitely variable signals represented by an ω-string s. For an ω-language L
we use FV(L) for ∪_{s∈L} FV(s). Similarly, an ω-string a_0 a_1 ... a_n ... over the
alphabet Σ represents a right continuous signal x if there is an unbounded
increasing sequence of reals τ_0 = 0 < τ_1 < τ_2 < ... such that
x(t) = a_i for t ∈ [τ_i, τ_{i+1}). It is clear that every right continuous signal over Σ is represented
by an ω-string over Σ. We denote by RC(s) the set of right continuous signals
represented by an ω-string s. For an ω-language L we use RC(L) for ∪_{s∈L} RC(s).
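For a concrete reading of this representation, the following sketch (Python, hypothetical names; a finite list of breakpoints stands in for the unbounded sequence τ_0 < τ_1 < ...) evaluates the right continuous signal represented by a string a_0 a_1 ... over the listed breakpoints:

```python
from bisect import bisect_right

def right_continuous_signal(breakpoints, values):
    """Signal taking values[i] on the interval [breakpoints[i], breakpoints[i+1]).

    breakpoints: increasing reals starting at 0; values: one symbol per interval.
    """
    def x(t):
        i = bisect_right(breakpoints, t) - 1  # index of the interval containing t
        return values[i]
    return x
```

For example, x = right_continuous_signal([0, 1, 2], ['a', 'b', 'c']) gives x(0.5) = 'a' and x(1) = 'b': the value at a breakpoint agrees with the value just after it, which is exactly right continuity.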
Theorem 1
1. A finitely variable signal language S is definable if and only if there is a
definable ω-language L such that S = FV(L).
2. A right continuous signal language S is definable if and only if there is a
definable ω-language L such that S = RC(L).
Our proof of Theorem 1 is constructive. In the proof of the if direction of
Theorem 1(2) we will provide a compositional mapping Tr : L^<_2 → L^<_2 such
that Tr(φ) defines the right continuous signal language RC(L), where L is the
ω-language defined by φ. From the proof of the only-if direction of Theorem
1(2), one can also extract an effective mapping Tr′ : L^<_2 → L^<_2 such that
the right continuous signal language defined by ψ is equal to RC(L), where
L is the ω-language defined by Tr′(ψ). However, Tr′ is not compositional.
It is not difficult to show that there is no n such that the length of Tr′(ψ)
is bounded by exp_n(|ψ|), where exp_m(k) is the m-times iterated exponential
function (e.g. exp_2(k) = 2^{2^k}). We do not know whether there exists a more
efficient translation. Similar remarks hold for the proof of Theorem 1(1).
A natural question is whether Theorem 1 holds if we replace "definable" by
"definable in first-order monadic logic of order". The proof method used in this
paper does not allow us to establish this result directly. However, the question has
a positive answer. The proof is based on Shelah's composition method
and will be given elsewhere.
In the next section some preliminary lemmas are collected. The proof of
Theorem 1 is given in Section 6.
5 Representation of signals by ω-strings
In this section some basic notions and lemmas about the representation of signals
by ω-languages are collected. The proofs of some lemmas are straightforward
and we omit them.
Lemma 2 Suppose that an ω-string ⟨a_0, b_0⟩⟨a_1, b_1⟩...⟨a_n, b_n⟩... represents a
finitely variable signal x. Then x is right continuous if and only if a_i = b_i for
all i.
Lemma 3 Let θ be an order preserving bijection from R_{≥0} to R_{≥0}. An ω-string
s represents x if and only if s represents x ∘ θ.
5.1 Stuttering and Speed-Independence
ω-strings that represent the same right continuous signal are said to be stuttering
equivalent. We will use ≈_1 for stuttering equivalence. It is easy to see that
the stuttering equivalence on ω-strings is the smallest equivalence such that
a_0 a_1 ... a_n a_{n+1} ... is equivalent to a_0 a_1 ... a_n a_n a_{n+1} ....
ω-strings over an alphabet Σ × Σ that represent the same finitely variable
signal are said to be ≈_2-equivalent. It is easy to see that the
≈_2-equivalence on ω-strings over the alphabet Σ × Σ is the smallest equivalence
such that ⟨a_0, b_0⟩⟨a_1, b_1⟩...⟨a_n, b_n⟩⟨a_{n+1}, b_{n+1}⟩... is equivalent to
⟨a_0, b_0⟩⟨a_1, b_1⟩...⟨a_n, b_n⟩⟨b_n, b_n⟩⟨a_{n+1}, b_{n+1}⟩....
An ω-language L is stuttering closed if whenever s ∈ L and s′ ≈_1 s then
s′ ∈ L. We use Stut_1(L) for the stuttering closure of L, i.e. for the ω-language
{s′ | s′ ≈_1 s for some s ∈ L}. The ≈_2-closure Stut_2(L) of languages over Σ × Σ is defined
similarly.
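On finite words, stuttering equivalence can be decided by collapsing adjacent repetitions to a canonical form; the sketch below (Python, hypothetical names) implements this normal form for ≈_1. For ω-strings one must additionally compare the infinite tails, so this applies only prefix-wise.

```python
def stutter_normal(s):
    """Collapse every maximal run of equal adjacent letters to a single letter."""
    out = []
    for a in s:
        if not out or out[-1] != a:
            out.append(a)
    return ''.join(out)

def stutter_equivalent(s, t):
    """Two finite words are stuttering equivalent iff their normal forms coincide."""
    return stutter_normal(s) == stutter_normal(t)
```

For example, "aabbbac" normalizes to "abac", so "abc" and "aabbcc" are stuttering equivalent while "ab" and "ba" are not.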
A finitely variable (right continuous) signal language L is speed-independent
if for every order preserving bijection τ, x ∈ L iff x ∘ τ ∈ L. We use SI(L)
for the speed-independent closure of L, i.e. for the language {x ∘ τ : x ∈
L and τ is an order preserving bijection}. Recall that FV(L) (respectively RC(L))
denotes the set of finitely variable (respectively right continuous) signals represented
by the ω-strings of L. It is clear that FV(L) and RC(L) are speed-independent.
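To make the time-rescaling concrete, here is a small Python sketch of ours (not the paper's): a right continuous, finitely variable signal is encoded as a list of (start_time, value) pieces, and an order preserving bijection on [0, ∞) is applied to the piece boundaries. Rescaling moves the jump times but leaves the sequence of values, which is all that a speed-independent property can depend on, untouched.

```python
def rescale(signal, tau):
    """Apply an order preserving bijection tau on [0, inf) to a signal
    encoded as a list of (start_time, value) pieces (right continuous,
    finitely variable).  Jump times move; the value sequence is kept."""
    return [(tau(t), v) for t, v in signal]

x = [(0.0, "a"), (1.5, "b"), (4.0, "a")]
# Doubling time is order preserving; speed-independent features of x
# (e.g. its sequence of values a, b, a) are unchanged.
y = rescale(x, lambda t: 2 * t)
```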
We say that an ω-language L represents a finitely variable (right continuous)
signal language S if for every x ∈ S there is s ∈ L that represents x and for
every s ∈ L there is x ∈ S represented by s.
The following is straightforward.
Lemma 4 1. If RC(s 1
2. if s 1
3. If FV
4. if s 1
5. A finitely variable (right continuous) signal language S is speed-independent
iff … .
6. An ω-language L represents a finitely variable (respectively, right continuous)
signal language S iff Stut₂(L) (respectively, Stut₁(L)) represents S.
7. An ω-language L represents S iff L represents SI(S).
8. FV induces a bijection between the set of Stut₂-closed ω-languages and the
set of speed-independent finitely variable languages.
9. RC induces a bijection between the set of Stut₁-closed ω-languages and the
set of speed-independent right continuous languages.
Lemma 5 Every definable finitely variable (respectively right continuous) language
is speed-independent.
Proof: Let K be the finitely variable signal structure or the right continuous
signal structure. Let τ : R≥0 → R≥0 be an order preserving bijection.
By structural induction on L_< formulas it is easy to show that
… . This implies the lemma. □
Lemma 6 (1) If L is a definable ω-language, then Stut₁(L) is definable. (2) If
L is a definable ω-language over an alphabet Σ × Σ, then Stut₂(L) is definable.
Moreover, there exists an algorithm that for every formula φ constructs a formula ψ such that the
ω-language definable by ψ is the stuttering closure of the ω-language definable
by φ.
Proof: Lemma 6(1) was proved in [9]. The proof of Lemma 6(2) is almost the
same and is sketched below.
Recall [1] that a set L of ω-strings is L_<-definable iff L is a regular ω-language
(see [13, 12] for a survey of automata on infinite objects). Moreover,
there exist algorithms for translations between ω-regular expressions and
L_< formulas (see [13]). Let h be a regular language substitution defined as
h(⟨a, b⟩) = … ;
the lemma follows from the fact that regular ω-languages are closed under
regular morphisms and the inverse images of regular morphisms.
Actually, the proof of this fact gives an algorithm for constructing an ω-regular
expression for the image (pre-image) of an ω-language L from an ω-regular
expression that defines L and regular expressions that define a morphism.
Hence, there is an algorithm that for every φ constructs ψ such that the
ω-language definable by ψ is the stuttering closure of the ω-language definable
by φ. □
Remark 7 The complexity of the algorithm extracted from the proof is non-elementary,
i.e. there is no n such that for every formula φ, the run time of
the algorithm on φ is bounded by exp_n(|φ|), where exp_m(k) is the m-times iterated
exponential function.
5.2 Set theoretical operations on languages
Let f be a function from a set A into Σ₁ × Σ₂ × … × Σₙ. We use the notation
Proj_Σᵢ(f) for the projection of f onto Σᵢ. When the index is clear from the context
we sometimes will drop the subscript. Similarly, for an ω-string s over Σ₁ × … × Σₙ we
use Proj_Σᵢ(s) for the corresponding projection onto Σᵢ.
Projections are extended pointwise to sets, i.e. for a set F of functions
we use Proj_Σᵢ(F) for the set {Proj_Σᵢ(f) : f ∈ F}. Below we use Σ^ω, FV_Σ and
RC_Σ for the sets of all ω-strings over Σ, finitely variable signals over Σ, and
right continuous signals over Σ, respectively.
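For intuition, the pointwise projection over a product alphabet can be sketched in a few lines of Python (an illustration under our own encoding: a string over Σ₁ × Σ₂ is a list of pairs; the helper names are ours):

```python
def proj(s, i):
    """Project a (finite prefix of an) omega-string over a product
    alphabet -- encoded as a list of tuples -- onto component i."""
    return [letter[i] for letter in s]

def proj_lang(lang, i):
    """Extend the projection pointwise to a set of strings."""
    return {tuple(proj(s, i)) for s in lang}

s = [("a", 0), ("b", 1), ("a", 1)]
assert proj(s, 0) == ["a", "b", "a"]
```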
Lemma 8 (operations on finitely variable signal languages and stuttering)
1. (Union) Let L₁, L₂ be ω-languages over Σ and let S₁, S₂ be finitely
variable signal languages over Σ. If Lᵢ represents Sᵢ (i = 1, 2), then L₁ ∪ L₂
represents S₁ ∪ S₂.
2. (Complementation) Let S be a speed-independent language. If L represents
S then the complementation of Stut₂(L) represents the complementation
of S.
3. (Projection) Let L be an ω-language (over Σ₁ × Σ₂) that represents
a finitely variable signal language S (over the alphabet Σ₁ × Σ₂). Then Proj_Σᵢ(L)
represents Proj_Σᵢ(S).
4. (Cylindrification) Let L be an ω-language that represents a finitely variable
signal language S (over an alphabet Σ₁). Then the language {x ∈ FV_{Σ₁×Σ₂} :
Proj_Σ₁(x) ∈ S} is represented by {s : Proj_Σ₁(s) ∈ Stut₂(L)}.
Lemma 9 (operations on right continuous signal languages and stuttering)
1. (Union) Let L₁, L₂ be ω-languages over Σ and let S₁, S₂ be right continuous
signal languages over Σ. If Lᵢ represents Sᵢ (i = 1, 2), then L₁ ∪ L₂
represents S₁ ∪ S₂.
2. (Complementation) Let S be a speed-independent language. If L represents
S then the complementation of Stut₁(L) represents the complementation
of S.
3. (Projection) Let L be an ω-language that represents a right continuous
signal language S (over an alphabet Σ₁ × Σ₂). Then Proj_Σᵢ(L) represents
Proj_Σᵢ(S).
4. (Cylindrification) Let L be an ω-language that represents a right continuous
signal language S (over an alphabet Σ₁). Then the language {x ∈ RC_{Σ₁×Σ₂} :
Proj_Σ₁(x) ∈ S} is represented by {s : Proj_Σ₁(s) ∈ Stut₁(L)}.
6 Proof of Theorem 1
6.1 The if direction
Let Cont(X, t) be the formula … , and let Jump(X, t)
be defined as ¬Cont(X, t). If x is a finitely variable
signal and τ ∈ R≥0, then Jump(x, τ) holds under the finitely variable interpretation
iff τ = 0 or x is not continuous at τ. Such τ are called jump points of x.
Similarly, if x is right continuous and τ ∈ R≥0, then Jump(x, τ) holds under
the right continuous interpretation iff τ = 0 or x is not continuous at τ.
Let ψ(X₁, …, Xₙ, Scale) be the
formula obtained from φ when the first order quantifiers are relativized to
the jump points of Scale, i.e., when "∃t." and "∀t." are replaced by
"∃t. Jump(Scale, t) ∧ …" and by "∀t. Jump(Scale, t) → …" respectively.
The following lemma is immediate.
Lemma 10 Assume that ψ is obtained from φ as described above. Let s₁, …, sₙ
be ω-strings … . Assume that the set
of jump points of the finitely variable (respectively right continuous) signal scale is
infinite. Let xᵢ be finitely variable (respectively, right continuous) signals such
that xᵢ … .
Let Infjump(X) be ∀t ∃t′. t′ > t ∧ (X(t) ↔ ¬X(t′)). It is clear that under
both the finitely variable and the right continuous interpretations, Infjump(x)
holds iff the set of jump points of x is infinite.
Let ψ be obtained
from φ as above by relativizing the first order quantifiers to the jump
points of Scale. Lemma 10 implies that the right continuous signal language
definable by
∃Scale. Infjump(Scale) ∧ ψ(X₁, …, Xₙ, Scale) ∧ ∀t. Cont(Scale, t) → …
is equal to RC(L), where L is the ω-language definable by φ. This completes
the proof of the if direction of Theorem 1(2).
Now let φ(X₁, …, Xₘ) be a monadic formula, and let the formula ψ be
obtained from φ by relativizing the first order quantifiers to the jump points of
Scale. Lemma 10 implies that the finitely variable language definable by
∃Scale. Infjump(Scale) ∧ ψ ∧ ∀t. Cont(Scale, t) → (Cont(Xⱼ, t) ∧ …) ∧ (¬Jump(Scale, t) → …)
is equal to FV(L), where L is the ω-language definable by φ.
Remark 11 Note that for every formula φ we constructed a formula ψ such
that the ω-language definable by φ represents the right continuous signal language
definable by ψ. In our construction (1) the length of ψ is linear in the
length of φ; (2) if φ has the form Q₁X₁ … QₙXₙ φ′, where φ′ does not have
second order quantifiers and the Qᵢ are second order quantifiers, then ψ
has the form ∃Scale. Q₁X₁ … QₙXₙ ψ′, where ψ′ does not have second order
quantifiers. Hence, we added one existential second order quantifier; (3) An alternative
proof of the if-direction: first, construct an automaton A_φ that accepts
the ω-language L definable by φ and then from A_φ construct a monadic formula
ψ that defines the signal language RC(L). However, the size of ψ extracted from
this proof is proportional to the size of A_φ and is non-elementary in the size of
φ.
Similar remarks hold for the translation to formulas interpreted over the finitely
variable signal structure; however, in this case several existential second order
quantifiers are added.
6.2 Proof of the only-if direction of Theorem 1(1)
Let L be an ω-language. Lemma 4(5) implies that FV(L) (respectively RC(L))
is the unique speed-independent finitely variable (respectively, right continuous)
signal language represented by L. Therefore, by Lemma 5, in order to show the
only-if direction of Theorem 1 it is sufficient to prove that if a finitely variable
(right continuous) language is definable, then it is representable by a definable
ω-language.
It is convenient, instead of the finitely variable signal structure, to consider
the first order structure M = ⟨SIG; Sing, <, ⊆⟩, where SIG is the set of
finitely variable signals, … ,
and ⊆ is interpreted as the usual inclusion relation.
Let φ^M(X₁, …, Xₙ) be the formula in the first order language appropriate
for M, which is obtained from a monadic formula φ(X₁, …, Xₙ) by relativizing
the first order quantifiers to Sing (i.e., through the replacement of "∃t." by
"∃t. Sing(t) ∧"), and by the replacement of "X(t)" by "t ⊆ X". It is easy to see
that the signal language definable in Sig by φ is the same as the signal language
definable in M by φ^M.
Therefore, to establish the only-if direction of Theorem 1 it is sufficient to
show
Proposition 12 A language definable in M is representable by a definable ω-language.
Proof: Let L_M be the first order language appropriate for M. The proof
proceeds by structural induction on the L_M formulas. For every L_M formula
φ we construct a formula ψ such
that the ω-language definable by ψ represents the signal language definable by
φ.
Basis. The formula sing(X) ≡
∀t. ¬X^s(t) ∧ ∃!t. X^j(t) corresponds
to the atomic formula Sing(X). The formula sing(X₁) ∧ …
corresponds to X₁ < X₂. The formula ∀t. X₁^j(t) →
X₂^j(t) corresponds to X₁ ⊆ X₂.
Inductive Step. The inductive step is immediately obtained from Lemma
8 and Lemma 6. Indeed, negation corresponds to complementation, the
existential quantifier corresponds to projection, and disjunction is easily expressible
from union and cylindrification. □
6.3 Proof of the only-if direction of Theorem 1(2)
The proof is obtained by the method of interpretation [7] as follows. Let
φ(X₁, …, Xₙ) be a monadic formula. First, we construct a monadic formula ψ
such that the language definable by ψ under the finitely variable
interpretation coincides with the language definable by φ under the right
continuous interpretation.
Let rsignal(X) be the formula ∀t ∃t′. t′ > t ∧ ∀t″. t < t″ < t′ → (X(t) ↔ X(t″)).
It is clear that a finitely variable signal satisfies rsignal(X) iff it is right
continuous.
Let φ′ be obtained from φ by relativizing all the second order quantifiers to
right continuous signals, i.e. by replacing "∃X." (respectively "∀X.")
by "∃X. rsignal(X) ∧ …" (respectively, "∀X. rsignal(X) → …"). It is easy to see that
a right continuous signal satisfies φ under the right continuous interpretation iff it
satisfies φ′ under the finitely variable interpretation. Hence, the required formula
ψ can be defined as rsignal(X₁) ∧ rsignal(X₂) ∧ … ∧ rsignal(Xₙ) ∧ φ′.
Now, by Theorem 1(1), there exists a formula ψ′(X₁^j, …, Xₙ^j) such that
the ω-language definable by ψ′ represents the language definable by ψ under
the finitely variable interpretation. Lemma 2 and the fact that the language
definable by φ under the right continuous interpretation is the same as the right
continuous language definable by ψ under the finitely variable interpretation,
imply that the ω-language definable by ∃X₁^j … ψ′ represents the language
definable by φ under the right continuous interpretation.
7 Fundamental Corollaries
In this section we re-examine four fundamental theorems that hold in the ω-structure.
Three of these theorems still hold in the finitely variable and the right
continuous structures. Their proofs are easily derivable from Theorem 1. In [11]
we announced that the uniformization problem has a positive solution in the
structures Rsig and Sig. We found a bug in our proof, and here we will show
that, in contrast to the discrete case of the ω-structure, uniformization fails in
the Rsig and Sig structures.
7.1 Decidability
The satisfiability problem of the monadic second order theory of the ω-structure
is decidable (Büchi [1]). As a consequence of the Büchi theorem and the
effectiveness of the proof of Theorem 1 we obtain a new proof of
Theorem 13 The monadic second order theory of the right continuous structure
is decidable. The monadic second order theory of the finitely variable structure
is decidable.
Note that the proofs of Theorem 13 given in [9, 10] are obtained by interpreting the
monadic theories of the right continuous and finitely variable signal structures
in the monadic theory of two successors. No characterizations of definable signal
languages can be extracted from these proofs.
7.2 Completeness of the Existential Fragment of Monadic
Logic
An existential monadic formula is a formula of the form ∃X₁ … ∃Xₙ. φ, where
φ does not contain second order quantifiers. It is well known that every
monadic formula φ is equivalent (in the ω-structure) to an existential formula
ψ. Moreover, ψ can be constructed effectively from φ. This together with
Theorem 1 and Remark 11 imply
Theorem 14 For every monadic formula φ there exists an existential monadic
formula ψ such that φ is equivalent to ψ in Rsig. Moreover, ψ can be constructed
effectively from φ.
Theorem 15 For every monadic formula φ there exists an existential monadic
formula ψ such that φ is equivalent to ψ in Sig. Moreover, ψ can be constructed
effectively from φ.
In the F interpretation of monadic logic, the monadic predicates range over
subsets of the real line. The decidability of monadic logic under the F
interpretation was established by Rabin [7].
As far as we know the following is an open problem.
Open Question: Is the existential fragment of monadic logic complete for the F
interpretation?
Note that the existential fragment is in the first level of the alternation
hierarchy.
Open Question: Does the alternation hierarchy collapse for the F interpretation?
7.3 Failure of Uniformization
The uniformization problem for a theory Th in a language L can be formulated
as follows [6]: Suppose Th ⊨ ∀X ∃Y. φ(X, Y), where φ is an L-formula and
X, Y are tuples of variables. Is there another formula ψ such that
Th ⊨ ∀X ∃!Y. ψ(X, Y) and Th ⊨ ∀X ∀Y. ψ(X, Y) → φ(X, Y)?
Here ∃! means "there is a unique". Hence, ψ defines the graph of a function
which lies inside the set definable by φ.
The uniformization problem for the monadic second order theory of order
of the ω-structure was solved positively by Büchi and Landweber [2]. Below
we show that uniformization fails in both the finitely variable and the right
continuous signal structures.
First observe that if x ∘ τ = x for every order preserving bijection τ on R≥0,
then x is constant on the positive reals. Recall that the languages definable in
Rsig and in Sig are speed-independent (see Lemma 5). Therefore,
Lemma 16 If a singleton language {x} is definable in Rsig or in Sig, then x
is constant on the positive reals.
Remark 17 (Contrast with a discrete case) Note that a singleton ω-language
{x} is definable in the ω-structure iff x is quasiperiodic, i.e., … .
As a consequence we have
Theorem 18 The uniformization fails for the monadic second order theory of
order of the finitely variable structure. The uniformization fails for the monadic
second order theory of order of the right continuous structure.
Proof: Let φ(Y) be the formula ∀t ∃t′. t′ > t ∧ (Y(t) ↔ ¬Y(t′)).
It is clear that Sig ⊨ ∃Y. φ(Y) and that if Sig ⊨ φ(y) then y changes
infinitely often.
Assume that Sig ⊨ ∃!Y. ψ(Y). Then Lemma 16 implies that the unique y
that satisfies ψ(Y) is constant on the positive reals. Therefore, it cannot satisfy
φ. To sum up, there is no ψ(Y) that uniformizes φ(Y).
Hence, the uniformization fails for the monadic second order theory of order
of the finitely variable structure. The proof for the right continuous structure
is the same. □
7.4 Characterization of definable Languages by Automata
In this section we provide an automata theoretic characterization of the finitely
variable and the right continuous signal languages definable in monadic logic.
The characterization is a simple consequence of Theorem 1 and the automata
theoretic characterization of ω-languages definable in monadic logic.
7.4.1 Syntax
A Labeled Transition System T is a triple ⟨Q, Σ, →⟩ that consists of a set Q
of states, a finite alphabet Σ of actions and a transition relation → which is a
subset of Q × Σ × Q; we write q →a q′ when (q, a, q′) ∈ →. If Q is finite we say that
the LTS is finite.
Sometimes the alphabet Σ of T will be the Cartesian product Σ₁ × Σ₂ of
other alphabets; in such a case we will write q →(a,b) q′ for the transition from q to
q′ labeled by the pair (a, b).
An automaton A over Σ is a triple ⟨T, INIT(A), FAIR(A)⟩, where T =
⟨Q, Σ, →⟩ is an LTS over the alphabet Σ, INIT(A) ⊆ Q is the set of initial states
of A, and FAIR(A) is a collection of fairness conditions (subsets of Q).
7.4.2 Semantics
A run of an automaton A is an ω-sequence q₀ a₀ q₁ a₁ … such that qᵢ →aᵢ qᵢ₊₁ for
all i. Such a run meets the initial conditions if q₀ ∈ INIT(A). A run meets
the fairness conditions if the set of states that occur in the run infinitely many
times is a member of FAIR(A).
An ω-string a₀ a₁ … over Σ is accepted by A if there is a run q₀ a₀ q₁ a₁ …
that meets the initial and fairness conditions of A. The ω-language accepted by
A is the set of all ω-strings acceptable by A.
Theorem 19 (Büchi [1]) An ω-language is acceptable by a finite state automaton
iff it is definable by a monadic formula.
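For deterministic automata and ultimately periodic inputs u·v^ω, the fairness condition of Section 7.4.2 can be checked effectively: after reading u, iterate the period v until the state at a period boundary repeats; the states visited in the repeating portion are exactly those occurring infinitely often. A Python sketch under these assumptions (all names are ours; nondeterminism and a set of initial states would need extra bookkeeping):

```python
def accepts(delta, q0, fair, u, v):
    """Check whether the deterministic automaton (delta: (state, letter)
    -> state, initial state q0, fairness condition fair: set of
    frozensets of states) accepts the ultimately periodic word u.v^omega."""
    q = q0
    for a in u:                      # consume the finite prefix u
        q = delta[(q, a)]
    seen, passes = {}, []            # boundary state -> pass index
    while q not in seen:
        seen[q] = len(passes)
        visited = set()
        for a in v:                  # one full pass over the period v
            q = delta[(q, a)]
            visited.add(q)
        passes.append(visited)
    inf = frozenset().union(*passes[seen[q]:])   # states occurring infinitely often
    return inf in fair

# States 0/1 over {a, b}; the state records the last letter read.
delta = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}
fair = {frozenset({0, 1})}           # both letters must recur forever
assert accepts(delta, 0, fair, [], ["a", "b"])
assert not accepts(delta, 0, fair, [], ["a"])
```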
7.4.3 Automata as acceptors of signal languages
A right continuous signal x over Σ is accepted by an automaton A if there are
an ω-string a₀ a₁ … aₙ … over the alphabet Σ acceptable by A and an unbounded
increasing sequence of reals τ₀ = 0 < τ₁ < τ₂ < … such that x(τ) = aᵢ for every τ ∈ [τᵢ, τᵢ₊₁).
A finitely variable signal x over Σ is accepted by an automaton A if
A is an automaton over the alphabet Σ × Σ and there are an ω-string ⟨a₀, b₀⟩⟨a₁, b₁⟩ …
acceptable by A and an unbounded increasing sequence
of reals τ₀ = 0 < τ₁ < τ₂ < … such that x(τᵢ) = aᵢ and x(τ) = bᵢ for every τ ∈ (τᵢ, τᵢ₊₁).
A version of the next theorem was announced in [11]; it is immediately
derived from Theorem 1 and Theorem 19.
Theorem 20 A finitely variable (respectively, right continuous) signal language
is acceptable by a finite state automaton if and only if it is definable by a monadic
formula under the finitely variable (respectively, right continuous) interpretation.
Acknowledgements
I would like to thank the anonymous referee for his very helpful suggestions.
--R
On a decision method in restricted second order arithmetic
Solving sequential conditions by finite-state strategies
A really abstract concurrent model and its fully abstract semantics.
Decidability and undecidability results for Duration Calculus
A mean value calculus of duration.
Rabin's uniformization problem.
Decidability of second order theories and automata on infinite trees.
Decidable theories.
On translation of temporal logic of actions into monadic second order logic.
On the Decidability of Continuous Time Specification Formalisms.
From Finite Automata toward Hybrid Systems.
Automata on Infinite Objects.
Finite Automata
--CTR
B. A. Trakhtenbrot, Understanding Basic Automata Theory in the Continuous Time Setting, Fundamenta Informaticae, v.62 n.1, p.69-121, January 2004 | monadic logic of order;continuous time specification;definability |
570577 | Correspondence and translation for heterogeneous data. | Data integration often requires a clean abstraction of the different formats in which data are stored, and means for specifying the correspondences/relationships between data in different worlds and for translating data from one world to another. For that, we introduce in this paper a middleware data model that serves as a basis for the integration task, and a declarative rules language for specifying the integration. We show that using the language, correspondences between data elements can be computed in polynomial time in many cases, and may require exponential time only when insensitivity to order or duplicates are considered. Furthermore, we show that in most practical cases the correspondence rules can be automatically turned into translation rules to map data from one representation to another. Thus, a complete integration task (derivation of correspondences, transformation of data from one world to the other, incremental integration of a new bulk of data, etc.) can be specified using a single set of declarative rules. | Introduction
A primary motivation for new database technology is to provide support for the broad spectrum
of multimedia data available notably through the network. These data are stored under different
formats: SQL or ODMG (in databases), SGML or LaTex (documents), DX formats (scientific data),
Step (CAD/CAM data), etc. Their integration is a very active field of research and development
(see for instance, for a very small sample, [10, 6, 7, 9, 8, 12, 19, 20]). In this paper, we provide a
formal foundation to facilitate the integration of such heterogeneous data and the maintenance of
heterogeneous replicated data.
A sound solution for a data integration task requires a clean abstraction of the different formats
in which data are stored, and means for specifying the correspondences/relationships between data
in different worlds and for translating data from one world to another. For that we introduce a
middleware data model that serves as a basis for the integration task, and declarative rules for
specifying the integration.
The choice of the middleware data model is clearly essential. One common trend in data integration
over heterogeneous models has always been to use an integrating model that encompasses the
source models. We take an opposite approach here, i.e., our model is minimalist. The data structure
we use consists of ordered labeled trees. We claim that this simple model is general enough to capture
the essence of formats we are interested in. Even though a mapping from a richer data model to
this model may lose some of the original semantics, the data itself is preserved and the integration
with other data models is facilitated. Our model is similar to the one used in [7] and to the OEM
model for unstructured data (see, e.g., [21, 20]). This is not surprising since the data formats that
motivated these works are part of the formats that our framework intends to support. A difference
with the OEM model is that we view the children of each vertex as ordered. This is crucial to
describe lists, an essential component of DX formats. Also, [13] introduces BNF generated trees to
unify hierarchical data models. However, since each rule has a fixed number of children, collections are
represented by left- or right-deep trees that are not suitable for casual users.
A main contribution of the paper is in the declarative specification of correspondences between
data in different worlds. For this we use datalog-style rules, enriched with, as a novel feature, merge
and cons term constructors. The semantics of the rules takes into consideration the fact that some
internal nodes represent collections with specific properties (e.g., sets are insensitive to order and
duplicates). We show that correspondences between data elements can be computed in polynomial
time in many cases, and may require exponential time only when insensitivity to order or duplicates
are considered.
Deriving correspondences within existing data is only one issue in a heterogeneous context. One
would also want to translate data from one representation to another. Interestingly, we show that
in most practical cases, translation rules can be automatically derived from the correspondence
rules. Thus, a complete integration task (derivation of correspondences, transformation of data from
one world to the other, incremental integration of a new bulk of data, etc.) can be specified using a
single declarative set of rules. This is an important result. It saves writing different specifications
for each sub-component of the integration task, and also helps in avoiding inconsistent specifications.
† This author's permanent position is INRIA-Rocquencourt, France. His work was supported by the Air
Force Wright Laboratory Aeronautical Systems Center under ARPA Contract F33615-93-1-1339, and by
the Air Force Rome Laboratories under ARPA Contract F30602-95-C-0119. This work was partially
supported by EC Projects GoodStep and Opal and by the Israeli Ministry of Science.
It should be noted that the language we use to define correspondence rules is very simple. Similar
correspondences could be easily derived using more powerful languages previously proposed (e.g.,
LDL [5] or IQL [4]). But in these languages it would be much more difficult (sometimes impossible)
to derive translation rules from given correspondence rules. Nevertheless, our language is expressive
enough to describe many desired correspondences/translations, and in particular can express all the
powerful document-OODB mappings supported by the structuring schemas mechanism of [2, 3].
As will be seen, correspondence rules have a very simple and intuitive graphical representation.
Indeed, the present work serves as the basis for a system, currently being implemented, where a
specification of integration of heterogeneous data proceeds in two phases. In a first phase, data is
abstracted to yield a tree-like representation that is hiding details unnecessary to the restructuring
(e.g., tags or parsing information). In a second phase, available data is displayed in a graphical
window and starting from that representation, the user can specify correspondences or derive data.
The paper is organized as follows. Section 2 introduces a core data model and Section 3 a core
language for specifying correspondences. In Section 4, we extend the framework to better deal with
collections. Section 5 deals with the translation problem. The last section is a conclusion. More
examples and figures are given in two appendixes.
2 The Data Model
Our goal is to provide a data model that allows declarative specifications of the correspondence
between data stored in different worlds (DX, ODMG, SGML, etc.). We first introduce the model,
then the concept of correspondence. To illustrate things, we use an example below. A simple instance
of an SGML document is given in Figure 1. A tree representation of the document in our middleware
model, together with correspondences between this tree and a forest representation of a reference in
a bibliographical OODB is given in Figure 2.
2.1 Data Forest
We assume the existence of some infinite sets: (i) name of names; (ii) vertex of vertexes; (iii) dom
of data values. A data forest is a forest of ordered labeled trees. An ordered labeled tree is a tree with
a labeling of vertexes and for each vertex, an ordering of its children. The internal vertexes of the
trees have labels from name whereas the leaves have labels from name ∪ dom ∪ vertex. The only
constraint is that if a vertex occurs as a leaf label, it should also occur as a vertex in the forest.
Observe that this is a rather conventional tree structure. It is a data model in the spirit of the complex
value model [17, 1, 11] and many others; it is particularly influenced by models for unstructured
data [21, 20] and the tree model of [7]. A particularity is the ordering of vertexes, which is important
for modeling data formats essentially described by files obeying a certain grammar (e.g., SGML).
Definition 1. A data forest F is a triple (E, G, L), where (E, G) is a finite ordered forest (the
ordering is implicit); E is the set of vertexes; G the set of edges; L (the labeling function) maps some
leaves in E to E ∪ dom, and all other vertexes to name.
For each vertex v in E, the maximal subtree of root v is called the object v. The set of vertexes
E of a forest F is denoted vertex(F ) and the set of data values appearing in F is denoted dom(F ).
Remark. Observe that by definition, we allow a leaf to be mapped to a name. For all purposes, we
may think of such leaves as internal vertexes without children. This will turn out to be useful to represent,
for instance, the empty set or the empty list. In the following, we refer by the word leaf only to vertexes
v such that L(v) is a vertex or is in dom.
We illustrate this notion as well as syntactic representations we use in an example. Consider the
graphical representation of the forest describing the OODB, shown in the lower part of Figure 2. A
tabular representation of part of the same forest is given in Figure 3. Finally, below is the equivalent
textual representation:
reference { &21 key { &211 "ACM96" { } };
… authors { &231 &3 { } … };
abstract { &241 "…" { } } }
To get a more compact representation, we omit brackets when a vertex has a single child or no children, and
omit vertex identifiers when they are irrelevant for the discussion. For example the above reference
tree may be represented by
reference { key "ACM96";
&22 title "Correspondence…";
authors { &3; &4; &5 };
abstract "…" }
Let us now see how various common data sources can be mapped into our middleware model.
We consider here three different types of mappings. The first concerns relational databases, but
also all simple table formats. The second is used for object-oriented databases, but a similar one
will fit most graph formats. Finally, the last will fit any format having a BNF (or similar) grammar
description. Note that the three mappings are invertible and can easily be implemented.
Relations can be represented by a tree whose root label is the relation name and which has as
many children as rows in the relation. At depth 2, nodes represent rows and are labeled by the
label "tuple". At depth 3, 4 and 5, nodes are labeled respectively by attribute names, types and
values.
An object oriented database is usually a cyclic graph. However, using object identifiers one may
easily represent a cyclic graph as a tree [4].
We pick one possible representation but many other ones can be proposed. A class extent is
represented by a tree whose root node is labeled with the class name. This node has as many
children as there are objects in the extent, each of which is labeled by the object type. We
assume that objects appear in the class extent of their most specific class. We now describe the
representation of subtrees according to types.
- A node labeled by an atomic type has a unique child whose label is the appropriate atomic
value.
- A node labeled "tuple" has one child for each attribute. The children are labeled with the
attribute names and each has one child labeled by the appropriate type and having the
relevant structure.
- A node labeled "set" (or "list", "bag", .) has as many children as elements in the collection,
one for each collection member. (For lists the order of elements is preserved). Each child is
labeled by the appropriate type, and has the relevant structure.
- A node labeled by an object type has a unique child labeled by the identifier of the node
that representing the object in the tree of the class extent to which it belongs.
A document can be described by a simplified representation of its parsing tree. The labels of the
internal nodes (resp. leaves) represent the grammar non-terminal symbols (resp. tokens).
SGML and HTML, among other formats, allow references to internal and external data. Parsers
do not interpret these references. They usually consider them as strings. In our context, these
references should be interpreted when possible. As for object databases, the reference can be
replaced by the identifier of the node containing the referred data.
Note that the only identification of data in the middleware model is given by the node identifiers.
This means that it is the responsibility of the data sources to keep relationships between the exported
data and the node identifiers. This relationship is not always needed (e.g., for a translation process),
and may be of a fine or large grain according to the application needs and the data source capacities.
The identification of data in the data sources can take various forms. It can be the key of a row
or some internal address in relational databases. For object databases, it can be the internal oid (for
objects), a query leading to the object/value, or similar ideas as in the relational case. For files it
can be an offset in the file, node in the parse tree, etc.
2.2 Correspondence
We are concerned with establishing/maintaining correspondences between objects. Some objects
may come from one data source with particular forest F1, and others from another forest, say F2.
To simplify, we consider here that we have a single forest (that can be viewed as the union of the
two forests) and look for correspondences within the forest. (If we feel it is essential to distinguish
between the sources, we may assume that the nodes of each tree from a particular data source have
the name of that source, e.g., F1, as part of the label.) We describe correspondences between
objects using predicates.
Example 1. Consider the following forest with the SGML and OODB trees of Figure 2.
article { ..., &12 title "Correspondence...", &13 author "S.Abiteboul",
&14 author "S.Cluet", &15 author "T.Milo", &16 abstract "...", ... }
reference { key "ACM96", &22 title "Correspondence...", authors{ &3, &4, &5 },
abstract "..." }
We may want to have the following correspondences:
Note that there is an essential difference between the two predicates above: is relates objects that
represent the same real world entity, whereas concat is a standard concatenation predicate/function
that is defined externally. The is-relationship is represented on Figure 2.
Definition 2. Let R be a relational schema. An R-correspondence is a pair (F, I) where F is a data
forest and I a relational instance over R with values in vertex(F) ∪ dom(F).
For instance, consider Example 1. Let R consist of a binary relation is and a ternary one concat.
For the forest F and correspondences I as in the example, (F; I) is an R-correspondence. Note that
we do not restrict our attention to 1-1 correspondences. The correspondence predicates may have
arbitrary arity, and also, because of data duplication, some n-m correspondences may be introduced.
3 The Core Language
In this section, we develop the core language. This is in the style of rule-based languages for objects,
e.g., IQL [4], LDL [5], F-logic [15] and more precisely, of MedMaker [19]. The language we present in
this section is tailored to correspondence derivation, and thus in some sense more limited. However,
we will consider in a later section a powerful new feature.
We assume the existence of two infinite sorts: a sort data-var of data variables, and vertex-var
of vertex variables. Data variables start with capitals (to distinguish them from names); and vertex
variables start with the character & followed by a capital letter.
Rules are built from correspondence literals and tree terms. Correspondence literals have the form
C(u1, ..., un), where C is a predicate name and the ui
are data/vertex variables/constants.
Tree terms are of the form &X L, &X L t1, and &X L {t1, ..., tn}, where &X is a vertex
variable/constant, L is a label and the ti are tree terms. The &X and Ls can also be omitted. A
rule is obtained by distinguishing some correspondence literals and tree terms to be in the body,
and some to be in the head. Semantics of rules is given in the sequel. As an example, consider the
following rule that we name r_so. Note again the distinction between concat which is a predicate on
data values and can be thought of as given by extension or computed externally, and the derived is
correspondence predicate.
reference { &X14 ,
authors{ &Y
&X19 abstract X11 }
A rule consists of a body and a head. When a rule has only literals in its head, it is said to be a
correspondence rule. We assume that all variables in the head of a correspondence rule also occur in
the body. We now define the semantics of correspondence rules.
Given an instance (F, I) and some correspondence rule r, a valuation ν over (F, I) is
a mapping over variables in r such that
1. ν maps data variables to dom(F) and object variables to vertex(F).
2. For each term H in the body of r,
(a) H is a correspondence literal and ν(H) is true in I; or
(b) H is a tree term and ν(H) is an object 5 of F.
We say that a correspondence C(&U, &V) is derived from (F, I) using r if C(&U, &V) = ν(H) for
some term H in the head of r, and some valuation ν over (F, I).
Let P be a set of rules. Let I' = I ∪ {C | C is derived from (F, I) using some r in P}. Then, (F, I')
is denoted TP(F, I). If P is recursive, we may be able to apply TP to TP(F, I) to derive new
correspondences. The limit TP^ω(F, I), if it
exists, of the application of TP is denoted P(F, I).
Theorem 4. For each (possibly recursive) finite set P of correspondence-rules and each data forest
F, P(F, I) is well-defined (in particular, the sequence of applications of TP converges in a finite
number of stages). Furthermore, P(F, I) can be computed in ptime.
We represent data forests using a relational database. A relation succ gives a portion of
the successor function over the integers. The number of facts that can be derived is polynomial.
Each step can be computed with a first-order formula, so it is in ptime.
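The bottom-up evaluation behind this result (apply TP, add the derived facts, repeat until nothing new appears) can be sketched as follows. The rule encoding and the names fixpoint and chain_rule are our own simplification, not the paper's syntax:

```python
# Naive fixpoint evaluation of correspondence rules: repeatedly apply every
# rule to the current set of derived facts until no new fact appears.
# A "rule" is modelled as a function from the current fact set to the facts
# it can derive (an illustrative simplification of correspondence rules).
def fixpoint(rules, facts):
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts)
        if new <= facts:          # no new correspondence: the limit is reached
            return facts
        facts |= new

# Toy recursive rule in the spirit of r2l: extend a derived correspondence
# along a successor edge of the forest.
def chain_rule(facts):
    return {("R2L", a, c)
            for (p, a, b) in facts if p == "R2L"
            for (q, b2, c) in facts if q == "succ" and b2 == b}

base = {("R2L", 1, 1), ("succ", 1, 2), ("succ", 2, 3)}
derived = fixpoint([chain_rule], base)
```

Since only polynomially many facts exist over a finite forest, the loop terminates in polynomially many rounds, matching the ptime bound.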
The above rule r_so is an example of a non-recursive correspondence rule. (We assume that the extension
of concat is given in I.) To see an example of a recursive rule, we consider the correspondence
between "left-deep" and "right-deep" trees. For instance, we would like to derive a correspondence
between the right and left deep trees shown in Figure 4. This is achieved using the program r2l
which consists of the following rules:
5 Recall that an object of a forest F is a maximal subtree of F rooted in some vertex of F .
&U right{X, &Y}
Suppose that we start with I = ∅, and the forest F shown on Figure 4. Then we derive the correspondences
between the two trees by successive applications of the r2l rules.
Such deep trees are frequent in data exchange formats, and it is important to be able to handle
them. However, what we have seen above is not quite powerful enough. It will have to be extended
with particular operations on trees and to handle data collections. This is described next.
4 Dealing with Collections
When data sources are mapped into the middleware model, some forest vertexes may represent data
collections. Observe that, in the above rules, the tree terms describe vertexes with a bounded number of
children (where the number depends on the term structure). Data collections may have an arbitrary
number of members, and thus we need to extend our language to deal with vertexes having an arbitrary
number of children. Also observe that ordered trees are perfect to represent ordered data collections
such as lists or arrays. However, if we want to model database constructs such as sets or bags, we
have to consider properties such as insensitivity to order or duplicates. The rules that we developed
so far do not support this. In this section, we address these two issues by extending our framework
to incorporate (i) operators on trees and (ii) special collection properties.
4.1 Tree Constructors
We consider two binary operations on trees. The first, cons(T, T'), takes two objects as input. The
first one is interpreted as an element and the second as the collection of its children vertexes. The
operation adds the element to the collection. The second operator, merge, allows to merge two data
collections into one. (The cons operator can be defined using a merge with a singleton collection.)
More formally, let T, T', T'' be trees where the roots of T' and T'' have the same label l and
children S'1, ..., S'n and S''1, ..., S''m,
respectively. Then
- cons(T, T') is a
tree with root labeled by l and children T, S'1, ...,
S'n, in that order.
- merge(T', T'') is a tree with root labeled by l and children S'1, ..., S'n, S''1, ...,
S''m, in that order.
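Assuming trees encoded as (label, children) pairs, the two operators can be sketched as follows (an illustrative rendering of ours, not the paper's formal notation):

```python
# Trees as (label, children) pairs; leaves are (label, []).
# cons(t, tp): prepend element t to the children of collection tp.
# merge(tp, tpp): concatenate the children of two same-label collections.
def cons(t, tp):
    label, children = tp
    return (label, [t] + children)

def merge(tp, tpp):
    label, children = tp
    label2, children2 = tpp
    assert label == label2, "merge requires collections with the same label"
    return (label, children + children2)

a = ("a", [])
l1 = ("mylist", [("b", [])])
l2 = ("mylist", [("c", [])])
```

As stated above, cons(t, tp) behaves like merge applied to the singleton collection containing t and to tp.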
The cons and merge operators provide alternative representations for collections that are essential
to describe restructuring. The data trees in the forests we consider are all reduced in the sense that
they will not include cons or merge vertexes. But, when using the rules, we are allowed to consider
alternative representations of the forest trees. The vertexes/objects of the trees with cons and merge
are regarded as implicit. So, for instance if we have the data tree &1 mylist{&2, &3}, we can view it
as &1 cons(&2, &v) where the object &v is implicit and has the structure mylist{&3}. Indeed, we
will denote this object &v by mylist(&1, &3) to specify that it is an object with label mylist, that
it is a subcollection of &1, and that it has a single child &3. This motivates the following definition:
Given a forest F, a vertex &v in F with children &v1, ..., &vn and label
l, the expression l(&v, &vi, ..., &vj) is called an implicit object of F for each subsequence 6
&vi, ..., &vj of &v1, ..., &vn.
The set of all implicit objects of F is denoted impl(F).
6 A subsequence &vi, ..., &vj of &v1, ..., &vn is obtained by removing 0 or more elements from the head
and the tail of &v1, ..., &vn.
Observe that vertex(F) can be viewed as a subset of impl(F) if we identify the object
l(&v, &v1, ..., &vn) of the definition with &v. Observe also that the cardinality of impl(F) is polynomial
in the size of F.
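The polynomial bound is easy to see: the implicit objects of a vertex correspond to the contiguous subsequences of its children, so there are O(n^2) of them per vertex. A hypothetical enumeration (our own rendering):

```python
# Implicit objects of a vertex: one per contiguous subsequence of its
# children (a subsequence drops a prefix and a suffix of the child list),
# giving O(n^2) implicit objects per vertex -- polynomially many overall.
def implicit_objects(label, children):
    n = len(children)
    return {(label, tuple(children[i:j]))
            for i in range(n + 1) for j in range(i, n + 1)}

objs = implicit_objects("mylist", ["a", "b", "c"])
```

For three children this yields the empty subsequence plus the six non-empty contiguous ones, i.e. seven implicit objects.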
We can now use cons and merge in rules. The following example uses cons to define a correspondence
between a list structured as a right-deep tree and a list structured as a tree of depth one
(Observe that in the example mylist is not a keyword but only a name with no particular semantics;
cons is a keyword with semantics, the cons operation on trees):
Of course, to use such rules, we have to extend the notion of valuation to allow terms containing
cons. The new valuation may now assign implicit objects to object variables.
The fixpoint TP^ω(F, I) is
computed as before using the new definition of valuation. Observe that TP^ω(F, I)
may now contain correspondences involving vertexes in impl(F) and not only F. Since
we are interested only in correspondences between vertexes in F, we ultimately ignore all other
correspondences. So, P(F, I) is the restriction of TP^ω(F, I)
to objects in F. For instance, consider
the rule tl above applied to a right deep list: starting from I = ∅, the correspondences between
the right deep representation and the depth-one representation are derived by successive
applications of tl.
In the sequel, we call the problem of computing P(F; I), the matching problem.
Theorem 6. The matching problem is in ptime even in the presence of cons and merge.
The number of facts that can be derived is polynomial and each step can be computed
with a first-order formula, so it is polynomial.
4.2 Special Properties
Data models of interest include collections with specific properties: e.g., sets that are insensitive to
order or duplicates, bags that are insensitive to order. In our context this translates to properties
of vertexes with particular labels. We consider here two cases, namely insensitivity to order (called
bag property), and insensitivity to both order and duplicates (called set property). For instance, we
may decide that a particular label, say mybag (resp. myset) denotes a bag (resp. a set). Then, the
system should not distinguish between:
The fact that these should be the same implicit objects is fundamental. (Otherwise the same set
would potentially have an infinite number of representations and computing correspondences would
become undecidable.) In the context of set/bag properties, the definition of implicit objects becomes
a little bit more intricate.
Given a forest F, a vertex &v in F with children &v1, ..., &vn and label
l, implicit objects of vertexes with bag/set properties are obtained as follows:
- l has set property: l(&v, &vi1, ..., &vik)
for each subset {&vi1, ..., &vik} of {&v1, ..., &vn}.
- l has bag property: l(&v, &vi1, ..., &vik)
for each subbag {{&vi1, ..., &vik}} of {{&v1, ..., &vn}}.
The notion of valuation is extended in a straightforward manner to use the above implicit objects
and take into consideration tree equivalence due to insensitivity to order and duplicates, (details
omitted for lack of space). It is important to observe at this point that the number of implicit
objects is now exponential in the size of F .
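The blow-up is concrete: with the set property, each subset of the children yields an implicit object, so a vertex with n distinct children has 2^n of them. A sketch (our own rendering):

```python
from itertools import combinations

# With the set property, implicit objects are insensitive to order and
# duplicates: one implicit object per *subset* of the (distinct) children,
# hence exponentially many (2^n). Illustrative, not the paper's notation.
def set_implicit_objects(label, children):
    elems = sorted(set(children))
    return {(label, frozenset(c))
            for k in range(len(elems) + 1)
            for c in combinations(elems, k)}

objs = set_implicit_objects("myset", ["a", "b", "c"])
```

Compare with the contiguous-subsequence case, which only gives O(n^2) implicit objects per vertex.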
The next example shows how cons and the set property can be used to define a correspondence
between a list and a set containing one copy for each distinct list member:
label myset : set
mylist {}
&V myset {}
Observe the symmetry of the rules between set and list. The only distinction is in the specification
of label myset. Using essentially the same proof as in Theorem 6 and a reduction to 3-sat, one can
prove:
Theorem 8. In the presence of cons, merge, and collections that are insensitive to order/duplicates,
the matching problem can be solved in exptime. Even with insensitivity to order and cons only, the
matching problem becomes np-hard.
Remark. The complexity is data complexity. This may seem a negative result (that should have
been expected because of the matching of commutative collections). But in practice, merging is
rarely achieved based on collections. It is most often key-based and, in some rare cases, based on
the matching of "small collections", e.g., sets of authors.
To conclude the discussion of correspondence rules, and demonstrate the usage of cons and merge,
let us consider the following example where a correspondence between articles and OO references
is defined. Observe that while the correspondence rule r_so presented at the beginning of the paper
handles articles with exactly three authors, articles/references here deal with an arbitrary number of
authors. They are required to have the same title and abstract and the same author list (i.e., the
authors appear in the same order). The definition uses an auxiliary predicate same_list.
The first rule defines a correspondence between authors. The second and third rules define an
auxiliary correspondence between sequences from both worlds. It is used in rule R4, which defines
the correspondence between articles and references. It also defines correspondences between titles and
abstracts from both worlds.
same_list(&X2, &X5)
same_list(&X4, &X11)
We illustrated the language using rather simple examples. Nevertheless, it is quite powerful and
can describe many desired correspondences, and in particular all the document-OODB mappings
supported by the structuring schemas mechanism of [2, 3] (proof omitted).
5 Data Translation
Correspondence rules are used to derive relationships between vertexes. We next consider the problem
of translating data. We first state the general translation problem (that is undecidable). We then
introduce a decidable subcase that captures the practical applications we are interested in. This is
based on translation rules obtained by moving tree terms from the body of correspondence rules to
the head.
We start with a data forest and a set of correspondence rules. For a particular forest object &v
and a correspondence predicate C, we want to know if the forest can be extended in such a way that
&v is in correspondence to some vertex &v'. In some sense, &v' could be seen as the "translation"
of &v. This is what we call the data translation problem.
input: an R-correspondence (F, I), a set P of correspondence rules, a vertex &v of F, and a binary
predicate C.
output: an extension F' of F such that C(&v, &v') holds in P(F', I) for some &v'; or no if no such
extension exists.
For example consider a forest F with the right deep tree &1 f1{f2{f3{}}}. Assume we want to
translate it into a left deep tree format. Recall that the r2l correspondence rules define correspondences
between right deep trees and left deep trees. So, we can give the translation problem the
R-correspondence (F, I), the root vertex &1, and the correspondence predicate R2L. The output
will be a forest F' with some vertex &v' s.t. R2L(&1, &v') holds. The tree rooted at &v' is exactly
the left deep tree we are looking for.
Remark. In the general case: (i) we would like to translate an entire collection of objects; and (ii) the
correspondence may be a predicate of arbitrary arity. To simplify the presentation, we consider the
more restricted problem defined above. The same techniques work for the general case with minor
modifications.
It turns out that data translation is in general very difficult. (The proof is by reduction of the
acceptance problem of Turing machines.)
Proposition 5.1. The translation problem is undecidable, even in the absence of cons, merge, and labels
with set/bag properties.
Although the problem is undecidable in general, we show next that translation is still possible in
many practical cases and can often be performed efficiently. To do this, we impose two restrictions:
1. The first restriction we impose is that we separate data into two categories, input vertexes
and output vertexes. Vertex variables and labels are similarly separated 7 . We assume that the
presence of an output object depends solely on the presence of some input object(s) and possibly
some correspondence conditions. It allows us to focus on essentially one kind of recursion: that
found in the source data structure.
2. The second restriction is more technical and based on a property called body restriction that
is defined in the sequel. It prevents pathological behavior and mostly prevents correspondences
that relate the "inside" of tree terms.
These restrictions typically apply when considering data translation or integration, and in particular
we will see that all the examples above have the appropriate properties.
The basic idea is to use correspondence rules and transform them into translation rules by moving
data tree terms containing output variables from the body of rules to their head. For example,
consider the r2l correspondence rules. To translate a right deep tree into a left deep tree, we move
the terms of the left deep trees to the head of rules, and obtain the following translation rules.
(Primed variables are used to stress the separation between the two worlds.)
Of course we need to extend the semantics of rules. The tree terms in the head, and in particular
those containing variables that do not appear in the body, are used to create new objects. (This
essentially will do the data translation). We use Skolem functions in the style of [9, 14, 16, 18]
to denote new object ids. There are some difficulties in doing so. Consider a valuation ν of the
second rule above. One may be tempted to use the Skolem term r'(ν(&U), ν(X), ν(&Y), ν(&Y'))
to denote the new object. But since &Y' is itself a created object, this may lead to a potentially
non-terminating loop of object creation. To avoid this we choose to create objects only as a function
of input objects and not of new output created objects. (Thus in the above case the created object
is denoted r'&U'(ν(&U), ν(X), ν(&Y)).)
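The creation policy just described (new ids are a function of the rule and of input objects only) can be sketched with a memoized table of Skolem terms; the names skolem and created are our own, not the paper's:

```python
# Skolemized object creation: a new output object's identity is a function
# of the rule name and of *input* objects only, never of other created
# objects, which rules out non-terminating creation loops. Sketch only.
created = {}

def skolem(rule_name, *input_args):
    key = (rule_name, input_args)
    if key not in created:               # same inputs -> same object id
        created[key] = f"&new{len(created) + 1}"
    return created[key]

a = skolem("r_prime", "&1", "x")
b = skolem("r_prime", "&1", "x")   # reuses the same id
c = skolem("r_prime", "&2", "x")   # different inputs -> fresh id
```

Memoizing on the inputs is what makes repeated rule applications converge instead of minting unboundedly many new objects.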
Now, the price for this is that (i) we may be excluding some object creation that could be of
interest; and (ii) this may result in inconsistencies (e.g., the same object with two distinct values).
We accept (i), although we will give a class of programs such that (i) never occurs. For (ii), we rely
on non determinism to choose one value to be assigned to one object. Note that we need some form
of nondeterminism for instance to construct a list representation from a set.
For lack of space we do not give here the refined semantics for rules (it is given in the full paper),
but only state that:
Proposition 9. For each finite set P of translation-rules and each R-correspondence (F, I), each
of the possible sequences of application of TP converges in a finite number of stages. Furthermore,
for rules with no set/bag labels each sequence converges in ptime, and otherwise in exptime.
So far, a program computation can be viewed as purely syntactic. We are guaranteed to terminate,
but we don't know the semantic properties of the constructed new objects and derived correspondences.
It turns out that this evaluation of translation rules allows to solve the translation problem for a
large class of correspondence rules. (Clearly not all since the problem is unsolvable in general.) In
7 Note that vertexes can easily be distinguished using their label.
particular it covers all rules we presented in the previous section, and the translations specified by
the structuring schemas mechanisms of [3] (proof omitted). We next present conditions under which
the technique can be used.
A correspondence rule r is said to be body restricted if in its body (1) all the variables
in correspondence literals are leaves of tree terms, and each such variable has at most one occurrence
in a correspondence literal, and (2) non-leaf variables have at most one occurrence (as non-leaves) in
tree terms, and (3) the only variables that input and output tree terms share are leaf variables.
We are considering correspondences specified with input/output data forests.
Proposition 5.2 Consider an input/output context. Let P be a set of body restricted correspondence
rules where correspondence literals always relate input to output objects. Let (F; I) be an
R-correspondence where F is an input data forest, &v a vertex in F , and C a binary correspondence
predicate. Let P 0 be the translation rules obtained from P by moving all output tree terms to the head
of rules. Then,
- If the translation problem has a solution on input (F, I), P, &v, C that leaves the input forest
unchanged, then each possible computation of P' derives C(&v, &v') for some object &v'.
- If some computation of P' derives C(&v, &v') for some object &v', then the forest F' computed
by this computation is a correct solution to the translation problem.
By Proposition 5.2, to solve the translation problem (with unmodified input) for body restricted
rules, we only need to compute nondeterministically one of the possible outputs of P', and test
whether C(&v, &v') has been derived for some &v'.
6 Conclusion
We presented a specification of the integration of heterogeneous data based on correspondence rules.
We showed how a unique specification can serve many purposes (including two-way translation) assuming
some reasonable restrictions. We claim that the framework and restrictions are acceptable in
practice, and in particular one can show that all the document-OODB correspondences/translations
of [2, 3] are covered. We are currently working on further substantiating this by more experimentation.
When applying the work presented here, a number of issues arise such as the specification of
default values when some information is missing in the translation. A more complex one is the
introduction of some simple constraints in the model, e.g., keys.
Another important implementation issue is to choose between keeping one of the representations
virtual vs. materializing both. In particular, it is conceivable to apply in this larger setting the
optimization techniques developed in an OODB/SGML context for queries [2] and updates [3].
Acknowledgment
We thank Catriel Beeri for his comments on a first draft of the paper.
--R
On the power of languages for the manipulation of complex objects.
Querying and updating the file.
A database interface for files update.
Object identity as a query language primitive.
Sets and negation in a logic database language (LDL1).
A data transformation system for biological data sources.
Programming constructs for unstructured data
Towards heterogeneous multimedia information systems: The Garlic approach.
Using witness generators to support bi-directional update between object-based databases
From structured documents to novel query facilities.
The story of O2
Amalgame: a tool for creating interoperating persistent
A grammar based approach towards unifying hierarchical data models.
ILOG: Declarative creation and manipulation of object-identifiers
F-logic: A higher-order language for reasoning about objects
Logical foundations of object-oriented and frame-based languages
The logical data model.
A logic for objects.
Medmaker: A mediation system based on declarative specifications.
Object exchange across heterogeneous information sources.
Querying semistructured heterogeneous information.
--TR
Sets and negation in a logic data base language (LDL1)
F-logic: a higher-order language for reasoning about objects, inheritance, and scheme
Object identity as a query language primitive
A grammar-based approach towards unifying hierarchical data models
ILOG: declarative creation and manipulation of object identifiers
The SGML handbook
The logical data model
From structured documents to novel query facilities
Logical foundations of object-oriented and frame-based languages
Using witness generators to support bi-directional update between object-based databases (extended abstract)
A database interface for file update
Foundations of Databases
The Story of O2
Object Exchange Across Heterogeneous Information Sources
Correspondence and Translation for Heterogeneous Data
Querying and Updating the File
A Data Transformation System for Biological Data Sources
Amalgame
time algorithm for isomorphism of planar graphs (Preliminary Report)
--CTR
Natalya F. Noy , Mark A. Musen, Promptdiff: a fixed-point algorithm for comparing ontology versions, Eighteenth national conference on Artificial intelligence, p.744-750, July 28-August 01, 2002, Edmonton, Alberta, Canada
Yannis Kalfoglou , Marco Schorlemmer, Ontology mapping: the state of the art, The Knowledge Engineering Review, v.18 n.1, p.1-31, January
Olga Brazhnik , John F. Jones, Anatomy of data integration, Journal of Biomedical Informatics, v.40 n.3, p.252-269, June, 2007 | data integration;middleware model;translation;data correspondence |
570592 | Observational proofs by rewriting. | Observability concepts contribute to a better understanding of software correctness. In order to prove observational properties, the concept of Context Induction has been developed by Hennicker (Hennicker, Formal Aspects of Computing 3(4) (1991) 326-345). We propose in this paper to embed Context Induction in the implicit induction framework of (Bouhoula and Rusinowitch, Journal of Automated Reasoning 14(2) (1995) 189-235). The proof system we obtain applies to conditional specifications. It allows for many rewriting techniques and for the refutation of false observational conjectures. Under reasonable assumptions our method is refutationally complete, i.e. it can refute any conjecture which is not observationally valid. Moreover this proof system is operational: it has been implemented within the Spike prover and interesting computer experiments are reported. | Introduction
Observational concepts are fundamental in formal methods since for proving
the correctness of a program with respect to a specification, it is essential to
be able to abstract away from internal implementation details. Data objects
can be viewed as equal if they cannot be distinguished by experiments with
observable result. The idea that the semantics of a specification must describe
the behaviour of an abstract data type as viewed by an external user, is due
to [14]. Though a lot of work has been devoted to the semantical aspects of
observability (see [4] for a classification), few proof techniques have been studied
[35,7,25,24], and even less have been implemented. More recently there has
been an increasing interest for behavioural/observational proofs with projects
such as CafeOBJ (see e.g. [27]) and the new approach for validation of object-oriented
software that is promoted by B. Jacobs [18,19].
Email addresses: Adel.Bouhoula@supcom.rnu.tn (Adel Bouhoula), rusi@loria.fr (Michael Rusinowitch).
Preprint submitted to Elsevier Science.
In this paper we propose an automatic method for proving observational properties
of conditional specifications. The method relies on computing families of
well chosen contexts, called critical contexts, that "cover" in some sense all observable
ones. These families are applied as induction schemes. Our inference
system basically consists in extending terms by critical contexts and simplifying
the results with a powerful rewriting machinery in order to generate new
subgoals. An advantage of this approach is that it allows also for disproving
false observational conjectures. The method is even refutationally complete
for an interesting class of specifications. From a prototype implementation
on top of the Spike prover [9] computer experiments are reported. The given
examples have been treated in a fully automatic way by the program.
Related works
Hennicker [16] has proposed an induction principle, called context induction,
which is a proof principle for behavioural abstractions. A property is observationally
valid if it is valid for all observable experiments. Such experiments
are represented by observable contexts, which are contexts of observable sort
over the signature of a specification where a distinguished subset of its sorts
is specified as observable. Hence, a property is valid for all observable experiments
if it is valid for all corresponding observable contexts. A context c is
viewed as a particular term containing exactly one variable; therefore, the subterm
ordering defines a noetherian relation on the set of observable contexts.
Consequently, the principle of structural induction induces a proof principle
for properties of contexts of observable sort, which is called context induction.
This approach provides a uniform proof method for the verification of
behavioural properties. It has been implemented in the system ISAR [3]. However,
in concrete examples, this verification is a nontrivial task and requires
human guidance: the system often needs a generalization of the current induction
assertion before each nested context induction in order to achieve the
proof.
Malcolm and Goguen [25] have proposed a proof technique which simplifies
Hennicker proofs. The idea is to split the signature into generators and defined
functions. Proving that two terms are behaviourally equivalent comes down to proving
that they give the same result in all observable contexts built from defined
functions, provided that the generators verify a congruence relation w.r.t.
behavioural equivalence. This proof technique is an efficient optimization of
Hennicker proofs.
Bidoit and Hennicker [6] have investigated how a first-order logic theorem
prover can be used to prove properties in an observational framework. The
method consists in computing automatically some special contexts called crucial
contexts, and in enriching the specification so as to automatically prove
observational properties. But this method was only developed for the proof
of equations and for specifications where only one sort is not observable. Besides,
it fails on several examples (cf. the Stack example), where it is not possible
to compute crucial contexts.
Bidoit and Hennicker [7] have also investigated characterizations of behavioural
theories that allow for proving behavioural theorems with standard proof
techniques for first-order logic. In particular they propose general conditions
under which an infinite axiomatization of the observational equality can be
transformed into a finitary one. However, in general there is no automatic
procedure for generating such a finite axiomatization of the observational equality.
Puel [30] has adapted the Huet-Hullot procedure for proof by consistency w.r.t.
the final model. Lysne [24] extends Bachmair's method for proof by consistency
to the final algebra framework. The proof technique is based on a special
completion procedure whose idea is to consider not only critical pairs emerging
from positioning rewrite rules on equations, but also those emerging from
positioning equations onto rewrite rules. This approach is restricted to equations
and requires the ground convergence property of the axioms in order
to be sound (in our case, ground convergence is needed only for refutational
completeness).
A preliminary version of this paper has been presented in March 1998 [2].
In comparison, the system we study here admits more powerful simplification
techniques. For instance contextual simplifications are now allowed with conditional
rules.
There exist more recent related works [12,26]. For instance the circular coinductive
rewriting approach of Goguen and Rusu [12] is also based on computing
special contexts. However these contexts cannot be used in general for
refutation. Our approach also allows for more simplification techniques since
e.g. each clause which is smaller than the current subgoal can be used as an
induction hypothesis and contextual rewriting is available. Unlike others we
also allow specifications with relations between constructors.
In section 3 we introduce our approach with a simple example. Then we give in section 4 the concepts of algebraic specifications and rewriting that are required in order to describe the observational semantics in section 5, our induction schemes in section 6 and our inference system in section 7. Finally, we report computer experiments with a prototype implementation in section 8. Future extensions of the technique are sketched in the conclusion.
3 An object-oriented example
Observational specification techniques are well adapted to the description of object-oriented systems, where non-observable sorts are used to model the states of objects, and states can be observed only by applying methods on their attributes. Hence observational techniques allow us to describe systems in an abstract way, hiding implementation details. Objects are considered as behaviourally equivalent whenever they produce the same reactions to the same observable experiments (actions, transitions, messages, ...). Consider for instance a simple class of points given with their cartesian coordinates.
class Point
attributes x, y : nat
private distance : nat
methods create, incrx, incry, decrx, decry, getx, gety

We assume that the point instances are initially located at the origin (0, 0) and that they are "moved" by the methods incrx, incry (resp. decrx, decry) for incrementing (resp. decrementing) the coordinates. Two accessors getx, gety allow us to consult the public attributes x, y. A Point instance also comes with a private attribute whose value is the distance it has covered since its creation. The distance is incremented after each call to incrx or incry. Given two fresh instances of the class, P and P', where P is moved and then brought back to the origin, we can prove that P and P' are behaviourally equivalent, although their attributes are not identical. The behavioural equivalence is defined here using observable contexts. A context is a term describing an experiment to be applied to an object. For instance getx(z_point) is an observable context for the point class. Our approach relies on computing families of well-chosen contexts, called critical contexts. These families cover in some sense all observation contexts and are applied as induction schemes. In the Point example the critical contexts are getx(z_point) and gety(z_point). Our inference system basically consists in applying critical contexts to conjectures and simplifying the results by rewriting rules in order to generate new subgoals. For instance, proving the behavioural equivalence A = B of A and B reduces to the proofs of getx(A) = getx(B) and gety(A) = gety(B). Both subgoals can be simplified to tautological equations, and this finishes the proof.
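The behavioural reasoning above can be mimicked in a short program. The following sketch (in Python, purely for illustration; the method names follow the example, while the choice of getx and gety as the only observers and the exact update of the hidden attribute are our reading of the text) checks behavioural equality through the observable experiments only:

```python
# Illustrative model of the Point class: x, y are public attributes,
# _distance is the hidden attribute incremented by incrx and incry.
class Point:
    def __init__(self):             # create: a fresh point at the origin (0, 0)
        self.x, self.y = 0, 0
        self._distance = 0          # private: distance covered since creation

    def incrx(self): self.x += 1; self._distance += 1
    def incry(self): self.y += 1; self._distance += 1
    def decrx(self): self.x -= 1
    def decry(self): self.y -= 1
    def getx(self): return self.x   # the observable experiments
    def gety(self): return self.y

def behaviourally_equal(p, q):
    """Equal under every observable context; here getx and gety suffice."""
    return p.getx() == q.getx() and p.gety() == q.gety()

p, q = Point(), Point()
p.incrx(); p.decrx()                # move p, then bring it back to the origin
assert behaviourally_equal(p, q)    # indistinguishable by observation
assert p._distance != q._distance   # yet the hidden attributes differ
```

The point of the sketch is that equality is decided by the observers alone: the hidden distance attribute differs, exactly as in the behavioural equivalence of P and P' above.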
4 Basic notions
We assume that the reader is familiar with the basic concepts of algebraic specifications [37], term rewriting and equational reasoning. A many-sorted signature is a pair (S, F) where S is a set of sorts and F is a set of function symbols. For short, a many-sorted signature will simply be denoted by F. We assume that we have a partition of F into two subsets: the first one, C, contains the constructor symbols, and the second, D, is the set of defined symbols. Let X be a family of sorted variables and let T(F, X) be the set of sorted terms. var(t) stands for the set of all variables appearing in t. A term is linear if all its variables occur only once in it. If var(t) is empty then t is a ground term. The set of all ground terms is T(F). Let A be an arbitrary non-empty set, and let F_A = {f_A | f ∈ F} be such that if f is of arity n then f_A is a function from A^n to A. The pair (A, F_A) is called an F-algebra, and A the carrier of the algebra. For the sake of simplicity, we will write A to denote the F-algebra when F and F_A are non-ambiguous.
A substitution σ assigns terms of appropriate sorts to variables. The domain of σ is defined by dom(σ) = {x | xσ ≠ x}. If t is a term, then tσ denotes the application of σ to t. If σ maps every variable to a ground term, then σ is a ground substitution. We denote by ≡ the syntactic equivalence between objects. Let N* be the set of sequences of positive integers. For any term t, Pos(t) ⊆ N* denotes its set of positions, and the expression t/u denotes the subterm of t at a position u. We write t[s]_u (resp. t[s]) to indicate that s is a subterm of t at position u (resp. at some position). The top position is written ε. Let t(u) denote the symbol of t at position u. A position u in a term t is said to be a strict position if t(u) = f ∈ F; it is a linear variable position if t/u = x ∈ X and x occurs only once in t; otherwise, u is a non-linear variable position. A linear variable of a term t is a variable that occurs only once in t. The depth of a term t is defined as follows: depth(t) = 0 if t is a constant or a variable; otherwise, depth(f(t_1, ..., t_n)) = 1 + max_i depth(t_i). We denote by ≺ a transitive irreflexive relation on the set of terms that is monotonic (s ≺ t implies w[s]_u ≺ w[t]_u), stable under instantiation (s ≺ t implies sσ ≺ tσ) and satisfies the subterm property (t ≺ f(..., t, ...)). Note that these conditions imply that ≺ is noetherian.
The multiset extension of ≺ will be denoted by ≺≺. An equation is a formula of the form l = r. A conditional equation is a formula of the form a_1 = b_1 ∧ ... ∧ a_n = b_n ⇒ l = r. It will be written ∧_{i=1}^n a_i = b_i ⇒ l → r and called a conditional rule if {rσ, a_1σ, b_1σ, ..., a_nσ, b_nσ} ≺≺ {lσ} for each substitution σ. The precondition of the rule ∧_{i=1}^n a_i = b_i ⇒ l → r is ∧_{i=1}^n a_i = b_i. The term l is the left-hand side of the rule. A rewrite rule c ⇒ l → r is left-linear if l is linear. A set of conditional rules is called a rewrite system. A constructor is free if it is not the root of a left-hand side of a rule. A term t in T(C, X) is called a constructor term. A rewrite system R is left-linear if every rule in R is left-linear. We define |R| as the maximal depth of the strict positions in the left-hand sides of R. Let R be a set of conditional rules, let t be a term and u a position in t. We write t[lσ]_u →_R t[rσ]_u if there are a substitution σ and a conditional rule ∧_{i=1}^n a_i = b_i ⇒ l → r in R such that, for all i ∈ [1..n], there exists c_i such that a_iσ →*_R c_i and b_iσ →*_R c_i. Rewriting is extended to literals and clauses as expected.
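The rewrite relation just defined can be illustrated on the unconditional Stack rules top(push(i,s)) → i, pop(Nil) → Nil, pop(push(i,s)) → s. The sketch below (in Python; the tuple encoding of terms and the innermost normalization strategy are our choices, not the paper's) computes normal forms of ground terms:

```python
# Terms are nested tuples ('f', arg1, ...); rule variables are bare strings.
RULES = [
    (('top', ('push', 'i', 's')), 'i'),
    (('pop', ('nil',)), ('nil',)),
    (('pop', ('push', 'i', 's')), 's'),
]

def match(pat, term, env):
    """Try to extend env so that pat instantiated by env equals term."""
    if isinstance(pat, str):                       # pat is a rule variable
        if pat in env:
            return env if env[pat] == term else None
        new = dict(env)
        new[pat] = term
        return new
    if (not isinstance(term, tuple) or term[0] != pat[0]
            or len(term) != len(pat)):
        return None
    for p, t in zip(pat[1:], term[1:]):
        env = match(p, t, env)
        if env is None:
            return None
    return env

def subst(t, env):
    """Replace rule variables in t by their bindings."""
    if isinstance(t, str):
        return env.get(t, t)
    return (t[0],) + tuple(subst(a, env) for a in t[1:])

def rewrite(t):
    """Innermost normalization; terminates for these size-decreasing rules."""
    if isinstance(t, str):
        return t
    t = (t[0],) + tuple(rewrite(a) for a in t[1:])
    for l, r in RULES:
        env = match(l, t, {})
        if env is not None:
            return rewrite(subst(r, env))
    return t

# top(push(0, Nil)) ->R 0   and   pop(push(0, Nil)) ->R Nil
assert rewrite(('top', ('push', ('0',), ('nil',)))) == ('0',)
assert rewrite(('pop', ('push', ('0',), ('nil',)))) == ('nil',)
```

This only covers the unconditional case; for conditional rules one would additionally check, as in the definition above, that both sides of each condition rewrite to a common term.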
A term t is irreducible (or in normal form) if there is no term s such that t →_R s. A term t is ground reducible iff all its ground instances are reducible. A defined symbol f is completely defined if all ground terms with root f are reducible to terms in T(C). We say that R is sufficiently complete if all symbols in D are completely defined. A clause C is an expression of the form ¬(a_1 = b_1) ∨ ... ∨ ¬(a_n = b_n) ∨ (c_1 = d_1) ∨ ... ∨ (c_m = d_m), also written a_1 = b_1 ∧ ... ∧ a_n = b_n ⇒ c_1 = d_1 ∨ ... ∨ c_m = d_m. The clause C is a Horn clause if m ≤ 1. The clause C is positive if n = 0. A clause is a tautology if either it contains a subclause ¬(a = b) ∨ (a = b) or some positive literal of the form s = s. The clause C is a logical consequence of E if C is valid in any model of E, denoted by E ⊨ C. We say that C is inductively valid in E, and denote it by E ⊨_ind C, if for any ground substitution σ, (for all i, E ⊨_ind a_iσ = b_iσ) implies (there exists j such that E ⊨_ind c_jσ = d_jσ). We say that two terms s and t are joinable, denoted by s ↓ t, if s →*_R v and t →*_R v for some term v. The rewrite system R is ground convergent if the ground terms u and v are joinable whenever u ↔*_R v.
5 Observational semantics
The notion of observation technique (see e.g. [4]) has been introduced as a means for describing what is observed in a given algebra. Various observation techniques have been proposed in the literature: observations based on sorts [36,31,28,16], on operators [1], on terms [34,15,5] or on formulas [33,34,22]. An observational specification is then obtained by adding an observation technique to a standard algebraic specification. Our observation technique is based on sorts but can easily be extended to operators. Our observational semantics is based on a weakening of the satisfaction relation. Informally speaking, behavioural properties of a data type are obtained by forgetting unnecessary information. Then, objects which cannot be distinguished by experiments are considered as observationally equal.
5.1 Contexts
In the framework of algebraic specifications, such experiments can be formally represented by contexts of observable sorts built over the signature of the specification. Thus, for showing that a certain property is valid for all observable experiments, we formally reason about all contexts of observable sorts over the operators. The notion of context we use is close to the one used by Bidoit and Hennicker [6].
Definition 5.1 (context) Let T(F, X) be a term algebra over a signature F.
(1) A context over F (or F-context) is a non-ground term c ∈ T(F, X) with a distinguished linear variable called the context variable of c. To indicate the context variable occurring in c, we often write c[z_s] instead of c, where s is the sort of z_s. A variable z_s of sort s is a context, called the empty context of sort s.
(2) The application of a context c[z_s] to a term t ∈ T(F, X) of sort s is denoted by c[t] and is defined as the result of the replacement of z_s by t in c[z_s]. The context c is then said to be applicable to t. The application of a context to an equation a = b is the equation c[a] = c[b].
(3) By exception, var(c) will denote the set of variables occurring in c other than the context variable of c. A context c is ground if var(c) = ∅.
(4) A subcontext (resp. strict subcontext) of c is a context which is a subterm (resp. strict subterm) of c with the same context variable.
The next lemma gives some properties of contexts.
Lemma 5.2 Let c[z_s] and c'[z'_{s'}] be two contexts such that c' is of sort s, let t be a term of sort s', and let σ be a substitution such that z_s ∉ dom(σ). Then c[c'[t]]σ ≡ cσ[c'σ[tσ]].
The notion of context is generalized to clauses. A clausal context for a clause is a list of contexts which are applied in order to each equation (negated or not) in the clause. The set of contextual variables of a clausal context is the set of context variables of its components.
Definition 5.3 (clausal context) Let S be a set of contexts. Then c is a clausal context w.r.t. S for a clause C = e_1 ∨ ... ∨ e_m if c = {c_1, ..., c_m} is a list of contexts of S such that, for i ∈ [1..m], c_i is applicable to e_i. The application of c to C is denoted by c[C] and is equal to the clause c_1[e_1] ∨ ... ∨ c_m[e_m].
We define the composition of clausal contexts in the same way as for contexts.
Definition 5.4 Let c = {c_1, ..., c_n} and c' = {c'_1, ..., c'_n} be two clausal contexts such that, for i ∈ [1..n], c_i is applicable to c'_i. Then the composition of c and c', denoted by c[c'], is the clausal context {c_1[c'_1], ..., c_n[c'_n]}.
Clausal contexts induce an ordering relation on clauses that we call context subsumption, since it is an extension of the classical subsumption ordering. It can be viewed as a generalization of the functional subsumption rule defined in [32] and is useful for redundancy elimination in first-order theorem proving.
Definition 5.5 (context subsumption) The clause C contextually subsumes C' if there exist a clausal context c and a substitution σ such that C' = c[Cσ].
Note that the strict part of this ordering is well-founded, by the same argument as for standard subsumption. Lemma 5.2 can be extended to clausal contexts in a straightforward way.
Lemma 5.6 Let c and c' be two clausal contexts, let C be a clause and let σ be a substitution such that the contextual variables of c and c' are not in dom(σ). Then c[c'[C]]σ ≡ cσ[c'σ[Cσ]].
5.2 Observational validity
The notion of observational validity is based on the idea that two objects in a given algebra are observationally equal if they cannot be distinguished by computations with observable results. Computations are formalized with contexts. For defining observational specifications we only need to specify the observable sorts. The notion of observational specification has been generalized both in BOBJ's hidden logic [12] and CafeOBJ's coherent hidden algebra logic [26], as well as in Bidoit and Hennicker's observational logic [7], by also allowing non-behavioural operations, and we expect our results to generalize directly to this more general framework.
Definition 5.7 (observational specification) An observational specification SP_obs is a quadruple (S, F, E, S_obs) such that (S, F) is a signature, E is a set of conditional equations, and S_obs ⊆ S is the set of observable sorts.
In the following we assume that an observational specification SP_obs = (S, F, E, S_obs) is given with signature (S, F).
specification: STACK
sorts: nat, stack
observable sorts: nat
constructors:
0 : → nat;
s : nat → nat;
Nil : → stack;
push : nat × stack → stack;
defined functions:
top : stack → nat;
pop : stack → stack;
axioms:
top(push(i,s)) = i
pop(Nil) = Nil
pop(push(i,s)) = s
Fig. 1. Stack specification
Example 5.8 The specification Stack in Figure 1 is an observational specification where S_obs = {nat}.
Definition 5.9 An observable context is a context whose sort belongs to S_obs. An observable clausal context is a clausal context all of whose component contexts are observable. The set of observable contexts is denoted by C_obs. For the sake of simplicity, it will also denote the set of observable clausal contexts.
Example 5.10 Consider the specification Stack in Figure 1. There are infinitely many observable contexts: top(z_stack), top(pop(z_stack)), top(pop(pop(z_stack))), ...
Definition 5.11 The terms a, b are observationally equal if for all c ∈ C_obs, E ⊨_ind c[a] = c[b]. We denote it by E ⊨_obs a = b, or simply a =_obs b.
Example 5.12 Consider the Stack specification in Figure 1. The equation push(top(s), pop(s)) = s is not satisfied in the initial algebra. However, intuitively it is observationally satisfied when we just observe the elements occurring in push(top(s), pop(s)) and s. This can be proved formally by considering all observable contexts.
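Example 5.12 can be played with concretely. In the sketch below (Python; using lists as a model of ground stacks and None as a stand-in for the unspecified value of top(Nil) are both modelling assumptions of ours), push(top(s), pop(s)) and s differ as values when s is empty, yet no experiment top(pop^k(z)) separates them:

```python
def top(s):
    return s[0] if s else None   # None models the unspecified top(Nil)

def pop(s):
    return s[1:]

def push(i, s):
    return [i] + s

def observations(s, depth):
    """Results of the experiments top(pop^k(z)) for k = 0 .. depth-1."""
    out = []
    for _ in range(depth):
        out.append(top(s))
        s = pop(s)
    return out

# push(top(s), pop(s)) and s: different values for s = Nil ...
s = []
assert push(top(s), pop(s)) != s          # [None] versus []
# ... yet indistinguishable under every experiment top(pop^k(z)):
for s in ([], [1], [2, 1], [3, 2, 1]):
    assert observations(push(top(s), pop(s)), 6) == observations(s, 6)
```

The loop only samples finitely many stacks and experiments, of course; the paper's point is precisely that a finite family of critical contexts can stand in for this infinite check.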
Lemma 5.13 The relation =_obs is a congruence on T(F).
PROOF. The relation =_obs is obviously an equivalence relation on T(F). Let us prove that f(a_1, ..., a_n) =_obs f(b_1, ..., b_n) whenever a_i =_obs b_i for all i ∈ [1..n]. It is sufficient to show by induction on j that f(a_1, ..., a_j, b_{j+1}, ..., b_n) =_obs f(b_1, ..., b_n). For j = 0 the claim holds immediately. Let us show that f(a_1, ..., a_{j+1}, b_{j+2}, ..., b_n) =_obs f(b_1, ..., b_n). We consider an arbitrary observable context c[z] of sort s. Then c[f(a_1, ..., a_j, b_{j+1}, ..., b_n)] =_ind c[f(b_1, ..., b_n)] by the induction hypothesis. Since a_{j+1} =_obs b_{j+1} and c[f(a_1, ..., a_j, z, b_{j+2}, ..., b_n)] is an observable context, we have: c[f(a_1, ..., a_{j+1}, b_{j+2}, ..., b_n)] =_ind c[f(a_1, ..., a_j, b_{j+1}, ..., b_n)]. By transitivity of =_ind we have: c[f(a_1, ..., a_{j+1}, b_{j+2}, ..., b_n)] =_ind c[f(b_1, ..., b_n)]. The induction step is then completed. □
Our main goal is to generalize implicit induction proofs to an observational framework. In particular, if all sorts are observable, the theory we obtain reduces to a standard initial one. However, this generalization of initial semantics is not straightforward, since our specifications admit conditional axioms. For instance, let a = b and u = v be two ground equations and assume that E ⊭_ind a = b but E ⊨_obs a = b. We may then have E ⊨_ind (a = b ⇒ u = v) without E ⊨_obs u = v; interpreting the condition a = b observationally would therefore be unsound w.r.t. u = v. For this reason we adopt a semantics that is close to the one defined by P. Padawitz [29].
Definition 5.14 (observational property) Let C = ∧_{i=1}^n a_i = b_i ⇒ ∨_{j=1}^m c_j = d_j. We say that C is an observational property (or is observationally valid), and we denote it by E ⊨_obs C, if for all ground substitutions σ, (for all i, E ⊨_ind a_iσ = b_iσ) implies (there exists j such that E ⊨_obs c_jσ = d_jσ).
Since =_obs is a congruence on T(F), we can define T(F, E), the quotient algebra of T(F) with respect to =_obs. Some properties of T(F, E) are studied in [29]. In particular, it is shown that T(F, E) is final in the class of term-generated and visibly initial algebras. The proof system that we develop in the following sections is dedicated to the derivation of validity in the algebra T(F, E).
Theorem 5.15 Let C be a clause. Then E ⊨_obs C iff T(F, E) ⊨ C.
PROOF. This is a simple consequence of the fact that T(F, E) is the quotient of T(F) by the congruence =_obs. □
6 Induction schemes
Our purpose in this section is to introduce the ingredients allowing us to prove and disprove observational properties. This task amounts in general to checking an infinite number of ground formulas for validity, since an infinite number of instances and an infinite number of contexts have to be considered for building these ground instances. This is where induction comes into play. Test substitutions will provide us with induction schemes for substitutions, and critical contexts will provide us with induction schemes for contexts. In general, it is not possible to consider all the observable contexts. However, cover contexts are sufficient to prove observational theorems by reasoning on the ground irreducible observable contexts rather than on the whole set of observable contexts. In the following, we denote by R a conditional rewriting system.
Definition 6.1 (cover set) A cover set for R, denoted by CS, is a finite set of irreducible terms such that for every ground irreducible term s there exist a term t in CS and a ground substitution σ such that tσ ≡ s.
We now introduce the notion of cover context, which is used to schematize all contexts. Note that a cover context need not be observable (unlike the crucial contexts of [6]). The intuitive idea is to use cover contexts to extend the conjectures at the top in order to create redexes. Then the obtained formulas can be simplified by the axioms and the induction hypotheses.
Definition 6.2 (cover context set) A cover context set CC is a minimal (for inclusion) set of contexts such that, for each ground irreducible observable context c_obs[z_s], there exist a context c ∈ CC and a ground substitution σ with dom(σ) = var(c) such that cσ is a subcontext of c_obs.
A cover context set for the specification Stack is {z_nat, top(z_stack), pop(z_stack)}. The context push(i, z_stack) cannot belong to a cover context set since top(push(i, z_stack)) and pop(push(i, z_stack)) are reducible. Note that usually there are infinitely many possible cover context sets. For instance, {z_nat, top(z_stack), top(pop(z_stack)), pop(pop(z_stack))} is also a cover context set.
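The exclusion of push(i, z_stack) can be checked mechanically. The sketch below (Python; the pattern encoding and the treatment of the context variable as an opaque constant are our assumptions) verifies that top(z) and pop(z) are irreducible at the top, while both observers applied over push(i, z) create a redex, i.e. match a left-hand side of the Stack rules:

```python
# Left-hand sides of the Stack rules; rule variables are bare strings.
LHS = [('top', ('push', 'i', 's')),
       ('pop', ('nil',)),
       ('pop', ('push', 'i', 's'))]

def matches(pat, term):
    """Linear pattern matching: a rule variable matches any term."""
    if isinstance(pat, str):
        return True
    return (isinstance(term, tuple) and term[0] == pat[0]
            and len(term) == len(pat)
            and all(matches(p, t) for p, t in zip(pat[1:], term[1:])))

def reducible_at_top(term):
    return any(matches(l, term) for l in LHS)

Z = ('z',)   # the context variable, treated as an opaque constant
assert not reducible_at_top(('top', Z))                    # top(z): no redex
assert not reducible_at_top(('pop', Z))                    # pop(z): no redex
assert reducible_at_top(('top', ('push', ('i0',), Z)))     # top(push(i, z))
assert reducible_at_top(('pop', ('push', ('i0',), Z)))     # pop(push(i, z))
```

Since every observer placed over push(i, z) immediately reduces away the push, that context contributes nothing new to the observations, which is exactly why it is excluded from a cover context set.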
Similar notions, called complete sets of observers, have been proposed by Hennicker [17]. More recently, another close concept has been introduced by Goguen et al. [12].
Cover sets and cover context sets are fundamental for the correctness of our method. However, they cannot help us to disprove the clauses that are not observationally valid. For this purpose, we introduce a new notion of critical context sets, and we use test sets as defined in [8]. In the following, we refine cover context sets so that we can not only prove behavioural properties but also disprove the non-valid ones. We first need to introduce the following notions.
A context c is quasi ground reducible if, for every ground substitution σ such that dom(σ) = var(c), the context cσ is ground reducible. A term t is strongly irreducible if none of its non-variable subterms matches a left-hand side of a rule in R. A positive clause C_pos = ∨_{i=1}^n a_i = b_i is strongly irreducible if C_pos is not a tautology and the maximal elements of {a_i, b_i | i ∈ [1..n]} w.r.t. ≺ are strongly irreducible by R.
An induction position of f ∈ F is a position p such that there exists in R a rewrite rule whose left-hand side is of the form f(t_1, ..., t_n) and p is the position in f(t_1, ..., t_n) of a function symbol or of a non-linear variable subterm. Given R, the set of induction variables of a term t is the subset of variables of t whose elements occur in a subterm of t of the form f(s_1, ..., s_n), with s_i a constructor term for each i ∈ [1..n], at an induction position of f. The notion of induction variables is extended to clauses as expected.
Test sets and test substitutions are defined simultaneously.
Definition 6.3 (test set, test substitution) A test set is a cover set with the following additional properties: (i) the instance of a ground reducible term by a test substitution matches a left-hand side of R; (ii) if the instance of a positive clause C_pos by a test substitution is strongly irreducible, then C_pos is not inductively valid w.r.t. R. A test substitution for a clause C instantiates all induction variables of C by terms taken from a given test set whose variables are renamed.
The following definition introduces our notions of critical context set and critical clausal context.
Definition 6.4 (critical context set, critical clausal context) A critical context set S is a cover context set such that, for each positive clause C_pos, if c[C_pos σ] is strongly irreducible, where σ is a test substitution of C_pos and c is a clausal context of C_pos w.r.t. S, then C_pos is not observationally valid w.r.t. R.
A critical clausal context w.r.t. S for a clause C is a clausal context for C whose contexts belong to S.
Example 6.5 For the Stack specification of Figure 1, a set of critical contexts of R is {pop(z_stack), z_nat, top(z_stack)}.
Test substitutions and critical context sets allow us to refute false conjectures by constructing a counterexample.
Definition 6.6 (provably inconsistent) We say that the clause ∧_{i=1}^n a_i = b_i ⇒ ∨_{j=1}^m c_j = d_j is provably inconsistent if and only if there exist a test substitution σ and a critical clausal context c such that:
(1) for all i, a_iσ = b_iσ is an inductive theorem w.r.t. R;
(2) c[(∨_{j=1}^m c_jσ = d_jσ)] is strongly irreducible by R.
Provably inconsistent clauses are not observationally valid.
Theorem 6.7 Let R be a ground convergent rewrite system. Let C be a provably inconsistent clause. Then C is not observationally valid.
PROOF. Let C = ∧_{i=1}^n a_i = b_i ⇒ ∨_{j=1}^m c_j = d_j be a provably inconsistent clause. Then there exist a critical context c and a test substitution σ such that:
(i) for all i, R ⊨_ind a_iσ = b_iσ;
(ii) c[(∨_{j=1}^m c_jσ = d_jσ)] is strongly irreducible by R.
By Definition 6.4, ∨_{j=1}^m c_jσ = d_jσ is not observationally valid w.r.t. R. Then R ⊭_obs C by using (i). □
Example 6.8 Consider the Stack specification in Figure 1 and let us check whether the conjecture push(x, pop(s)) = s is observationally valid. We first apply an induction step (extension by the critical contexts); we obtain:
1. top(push(x, pop(s))) = top(s)
2. pop(push(x, pop(s))) = pop(s)
These subgoals can be simplified by R; we obtain:
3. x = top(s)
4. pop(s) = pop(s)
The equation 4 is a tautology. We apply an induction step to equation 3; we obtain:
5. top(Nil) = x
6. top(push(y, z)) = x
The equation 6 is simplified by R into y = x, which is provably inconsistent. Now, since R is ground convergent, we conclude that push(x, pop(s)) = s is not observationally valid.
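A disproof of this kind can be illustrated in a concrete model. The sketch below (Python; lists model ground stacks, the conjecture push(x, pop(s)) = s is our reading of the example, and None stands for the unspecified top of an empty stack) searches the observable experiments top(pop^k(z)) for one that separates the two sides of a ground instance, mirroring how a provably inconsistent clause yields a counterexample:

```python
def top(s):
    return s[0] if s else None

def pop(s):
    return s[1:]

def push(i, s):
    return [i] + s

def distinguishing_context(a, b, depth=5):
    """Search the experiments top(pop^k(z)) for one separating a and b."""
    for k in range(depth):
        if top(a) != top(b):
            return "top(pop^%d(z))" % k
        a, b = pop(a), pop(b)
    return None

# ground instance of the conjecture with x = 1, s = [2, 3]:
lhs, rhs = push(1, pop([2, 3])), [2, 3]       # [1, 3] versus [2, 3]
assert distinguishing_context(lhs, rhs) == "top(pop^0(z))"
# the instance with x = top(s) is not distinguished by any experiment:
assert distinguishing_context(push(2, pop([2, 3])), [2, 3]) is None
```

The experiment top(z) already separates the two sides, which is the concrete counterpart of the provably inconsistent subgoal y = x obtained above.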
6.1 Computation of test sets
The computation of test sets and test substitutions for conditional specifications is decidable if the axioms are sufficiently complete and the constructors are specified by a set of unconditional equations (see [23,21]). Unfortunately, no algorithm exists for the general case of conditional specifications. However, in [8] a procedure is described for computing test sets when the axioms are sufficiently complete over an arbitrary specification of constructors.
6.2 Computation of critical contexts
Let us first introduce the following lemma, which gives a useful characterization of critical context sets.
Lemma 6.9 Let R be a left-linear conditional rewriting system. Let CC be a cover context set such that, for each context c[z_s] ∈ CC, the variables of c[z_s] appear at depth greater than or equal to |R| − 1, and there exists an observable context c_obs such that c_obs[c] is strongly irreducible. Then CC is a critical context set for R.
PROOF. Let C be a positive clause such that c[Cσ] is strongly irreducible, where σ is a test substitution of C and c is a critical clausal context of C. Let us prove that C is not observationally valid. If c ∈ C_obs then, by Definition 6.3, we conclude that c[Cσ] is not an inductive theorem, and therefore C is not observationally valid. Assume now that c ∉ C_obs. By assumption, there exists an observable clausal context c_obs such that c_obs[c] is strongly irreducible. Let us show that c_obs[c[Cσ]] is strongly irreducible. Assume otherwise that there exists a rule with left-hand side l that applies to c_obs[c[Cσ]] at a position p. For every non-variable position p' of l, pp' is a non-variable position of c_obs[c], since the variables of c[z_s] appear at depth greater than or equal to |R| − 1. Since l is linear, we can define a substitution τ such that for every variable x that occurs at position q of l we have xτ = c_obs[c]/pq. We then have c_obs[c]/p = lτ, which contradicts the assumption that c_obs[c] is strongly irreducible. So c_obs[c[Cσ]] is strongly irreducible. Then, by Definition 6.3, c_obs[c[Cσ]] is not inductively valid. Thus, R ⊭_obs C. □

CC_0 := {c | c is an observable context of depth ≤ |R|, c is not quasi ground reducible, and c does not contain any observable strict subcontext}
C := {c | c is not quasi ground reducible, c does not contain any observable subcontext, and all variables (including the context one) in c occur at depth |R|}
repeat
CC_{i+1} := CC_i ∪ {c ∈ C | there exists c' ∈ CC_i such that c'[c] is not quasi ground reducible}
until CC_{i+1} = CC_i
output: CC_i
Fig. 2. Computation of Critical Contexts
Let us present our method for constructing such critical contexts. The idea of our procedure is the following: starting from the non quasi ground reducible observable contexts of depth smaller than or equal to |R|, we construct all contexts that can be embedded into one of those observable contexts to give a non quasi ground reducible and observable context.
Quasi ground reducibility is co-semidecidable for conditional rewrite systems, by the same argument that has been employed for ground reducibility [20]: the procedure amounts to enumerating all the ground instances of a term and checking them for reducibility. It can be proved, by reduction to ground reducibility, that quasi ground reducibility is decidable for equational systems.
Proposition 6.10 Given a set of non-conditional rewrite rules R and a context c[z_s], it is decidable whether c[z_s] is quasi ground reducible.
PROOF. Let us first introduce a new constant symbol d ∉ F. Let Red_l(x) be a unary predicate on T(F ∪ {d}) which is true iff the ground term x contains a subterm that is an instance of l. The context c[z_s] is quasi ground reducible iff all instances of c[d] by substitutions σ satisfying xσ ∈ T(F) for every variable x are reducible by R. We denote by G the set of ground terms in T(F ∪ {d}) that contain one and only one occurrence of d. Note that G is a regular tree language. Hence the quasi ground reducibility of c[z_s] can be expressed by the first-order formula:
∀x ((x ∈ G and x is an instance of c[d]) ⇒ Red_{l_1}(x) ∨ ... ∨ Red_{l_k}(x))
where {l_1, ..., l_k} is the set of left-hand sides of R. Such a formula can be decided thanks to Theorem 4.18 of Caron et al. [10]. □
The following proposition is also useful for testing whether a context is quasi ground reducible.
Proposition 6.11 Let R be an equational rewriting system such that all defined functions are completely defined over free constructors. Consider a context c[z_s] of the form f(t_1, ..., t_n), where f is a completely defined function and, for each i ∈ [1..n], t_i is a constructor term. If z_s does not appear at an induction position of f, then c[z_s] is quasi ground reducible.
PROOF. Assume that there exists a ground instance of c[z_s] of the form f(t_1, ..., t_n)σ which is irreducible by R. Consider the substitution z_s ↦ s, where s is a ground and irreducible constructor term. Let us show that f(t_1, ..., t_n)σ{z_s ↦ s} is ground and irreducible. Assume otherwise that there exists a rule with left-hand side l that applies to f(t_1, ..., t_n)σ{z_s ↦ s}, and let p be the position of z_s in c[z_s]. Then at the position p appears a function symbol of l, and therefore p is an induction position of f, in contradiction with the assumption that z_s does not appear at an induction position of f. So f(t_1, ..., t_n)σ{z_s ↦ s} is ground and irreducible. This contradicts the assumption that f is completely defined. □
Theorem 6.12 Let R be a rewriting system and let CC be the result of the application of the procedure given in Figure 2. Then:
(1) CC is a cover context set for R.
(2) If R is equational and left-linear, then CC is a critical context set for R.
PROOF. It is relatively easy to show that CC is a cover context set for R. Now, assume that R is equational and left-linear and let us prove that CC is also a critical context set for R. By construction, any non-observable context in CC has its variables at depth greater than or equal to |R|. Since R is equational, any non quasi ground reducible context is necessarily strongly irreducible. On the other hand, since R is left-linear and the variables of any non-observable context occur at depth |R|, then for each context c[z_s] ∈ CC we can show that there exists an observable context c_obs such that c_obs[c] is strongly irreducible. The proof is done by induction on the step i at which c is added to CC_i. □
specification: LIST
sorts: nat, bool, list
observable sorts: nat, bool
constructors:
0 : → nat;
s : nat → nat;
Nil : → list;
insert : nat × list → list;
True : → bool;
False : → bool;
defined functions:
union : list × list → list;
in : nat × list → bool;
eq : nat × nat → bool;
axioms:
union(Nil, l) = l
union(insert(x, l1), l2) = insert(x, union(l1, l2))
in(x, Nil) = False
eq(x, y) = True ⇒ in(x, insert(y, l)) = True
eq(x, y) = False ⇒ in(x, insert(y, l)) = in(x, l)
eq(0, 0) = True
eq(0, s(x)) = False
eq(s(x), 0) = False
eq(s(x), s(y)) = eq(x, y)
Fig. 3. List specification
Example 6.13 Consider the Stack specification in Figure 1. We have CC_0 = {z_nat, top(z_stack)} and C = {pop(z_stack), push(i, z_stack)}. The procedure outputs {z_nat, top(z_stack), pop(z_stack)}, which is a critical context set for R.
Example 6.14 Consider the List specification in Figure 3. We have CC_0 = {z_nat, z_bool, in(y, z_list)} and C = {union(z_list, x), insert(x, z_list)}. The procedure outputs {z_nat, z_bool, in(y, z_list), union(z_list, x)}, which is a cover context set for R. In fact, union(x, z_list) is quasi ground reducible, and in(y, union(z_list, x)) is not quasi ground reducible since in(y, union(z_list, Nil)) is irreducible, but in(y, insert(x, z_list)) is quasi ground reducible.
It is possible to compute critical context sets in the case where R is a conditional rewriting system. It is sufficient to apply the procedure given in Figure 2 to compute a cover context set CC, and then to check that for each non-observable context c ∈ CC there exists an observable context c_obs such that c_obs[c] is strongly irreducible. In Example 6.14, in(y, union(z_list, x)) is strongly irreducible; we then conclude that {z_nat, z_bool, in(y, z_list), union(z_list, x)} is a critical context set for R.
7 Inference system
The inference system we use (see Figure 4) is based on a set of transition rules applied to pairs (E, H), where E is the set of conjectures to prove and H is the set of induction hypotheses. The initial set of conditional rules R is oriented with a well-founded ordering. An I-derivation is a sequence of states (E_0, H_0) ⊢ (E_1, H_1) ⊢ ... We say that an I-derivation is fair if the set of persistent clauses (∪_i ∩_{j≥i} E_j) is empty.
Context induction is performed implicitly by the Generation rule: a clause is extended by critical contexts and test sets. These extensions are simplified either by Deletion, by Contextual Simplification or by Case Simplification, and the resulting conjectures are collected in the next set of conjectures. Case Simplification embodies the case reasoning: it simplifies a conjecture with conditional rules. Contextual Simplification can be viewed as a powerful generalization of contextual rewriting [38] that allows us to simplify observational properties. The rule Context Subsumption turned out to be very useful for manipulating non-orientable conjectures.
An I-derivation fails when there exists a conjecture such that no rule can be applied to it. An I-derivation succeeds if all conjectures are proved.
Example 7.1 Let us take the signature and the axioms of the specification in Figure 1, and let us add a new function elem : nat → bool together with defining axioms that involve a relation between the constructors of nat. Assume that the only observable sort is bool. A test set and a critical context set can then be computed as in section 6. We can easily show that an equation on top(y) fails to be an inductive theorem while being observationally valid, since for every ground term t, top(t) reduces either to 0 or to s(0). Note that in this example we have relations between constructors, and that the example cannot be handled by the other related approaches [12,26].
Theorem 7.2 (correctness of successful I-derivations) Let (E_0, H_0) ⊢ (E_1, H_1) ⊢ ... be a fair I-derivation. If it succeeds then R ⊨_obs E_0.
PROOF. Suppose R ⊭_obs E_0 and let M be the set of minimal elements w.r.t. ≺ of {Cσ | C ∈ ∪_i E_i and σ is a ground irreducible substitution such that R ⊭_obs Cσ}. Note that M ≠ ∅ since R ⊭_obs E_0 and ≺ is well-founded. Let C_0 be a clause in M such that C_0 is minimal w.r.t. context subsumption. Then there exist a clause C ∈ ∪_i E_i and an irreducible ground substitution σ such that C_0 = Cσ. We have R ⊭_obs Cσ, so we can consider an observable context c_obs such that R ⊭ c_obs[Cσ]. Without loss of generality we can assume that c_obs is irreducible: otherwise it can be simplified by R to an irreducible one with the same property. Now, we show that no rule can be applied to C. This shows that the derivation fails, since C must not persist in the derivation by the fairness hypothesis.
Hence, let us assume that C ∈ E_j for some j, and consider a rule applied to C. We now discuss the situation according to which rule is applied; in every case we shall derive a contradiction. In order to simplify the notation, we write E for E_j and H for H_j.
Case Simplification: suppose that the rule Case Simplification is applied to C, yielding clauses C_1, ..., C_p. Since R ⊭ c_obs[Cσ], there exists k such that R ⊨_ind the precondition attached to C_k and R ⊭ c_obs[C_kσ]. Then R ⊭_obs C_k. On the other hand, C_kσ ≺ Cσ. Contradiction, since we have proved the existence of an instance of a clause of ∪_i E_i which is not observationally valid and which is smaller than Cσ.
Contextual Simplification: suppose that the rule Contextual Simplification is applied to C, rewriting an instance lτ of the left-hand side of a rule A ⇒ l → r at some subterm s of C and yielding a clause C'. Let c_s be the context built from s by replacing lτ by the context variable z. Since R ⊨_ind Aτ, we have R ⊨ c_obs[c_s[lτ]]σ = c_obs[c_s[rτ]]σ, and hence R ⊭ c_obs[C'σ]. On the other hand, C'σ ≺ Cσ and C' belongs to ∪_i E_i: this contradicts the minimality of Cσ.
Generation: Suppose that the rule Generation is applied to C. Since the
substitution is ground and irreducible, there exists a ground substitution
and a test substitution such that = . Besides, since R 6j= Obs C,
then we can consider an irreducible ground observable context c obs such that
R 6j= c obs [C]. Since c obs is ground and irreducible, then there exists a critical
context c and c 0 2 C obs such that c
If Deletion is applied, then R
If Contextual Simplification or Case Simplification is applied to
c[C], then by following the same reasoning used in the proofs of soundness
of Contextual Simplification and Case Simplification we derive a
contradiction.
Context Subsumption: Since R ⊭_Obs C, C cannot be contextually
subsumed by an axiom of R. If there exists C' ∈ ((E ∪ H) \ {C}) such that
we have R ⊭ c[C'], then c is an empty context,
since C is minimum in E ∪ H w.r.t. the subsumption ordering.
Therefore, C' ∉ (E \ {C}). On the other hand C' ∉ H, otherwise the rule
Generation could also be applied to C, in contradiction with a previous case.
Hence this rule cannot be applied to C.
Deletion: Since R ⊭_Obs C, C is not a tautology and this rule need not be
considered.

Theorem 7.3 (correctness of disproof) Let (E_0, ∅), (E_1, H_1), ... be
an I-derivation. If there exists j such that Disproof is applied to (E_j, H_j),
then R ⊭ E_0.

PROOF. If there exists j such that Disproof is applied to (E_j, H_j), then
by Theorem 6.7 we conclude that R ⊭ E_j. Now, to prove that R ⊭ E_0,
it is sufficient to prove the following claim: let (E_j, H_j), (E_{j+1}, H_{j+1}) be
an I-derivation step; if for all i ≤ j, R ⊨_Obs E_i, then R ⊨_Obs E_{j+1}. If the step is
an application of a simplification rule, then the equations which are used for
simplification occur in some E_i and are therefore observationally
valid in R by assumption. Hence, E_{j+1} is observationally valid in R too. If
the step is an application of Generation on C, every auxiliary equation
which is used for rewriting an instance of C by a critical context c and a test
substitution is either in R or observationally
valid in R. If the step is an application of Subsumption or
Deletion, then E_{j+1} ⊆ E_j and therefore E_{j+1} is observationally valid in R. 2
Now we consider boolean specifications. To be more specific, we assume there
exists a sort bool with two free constructors {true, false}. Every rule in R is
of type: ⋀_{i=1}^{n} p_i = true ⟹ s → t, where the p_i are boolean terms.
Conjectures will be boolean clauses, i.e. clauses whose negative literals are
boolean equations. Let f be a completely defined
symbol in R. Then f is strongly complete [8] w.r.t. R if, for all the rules
whose left-hand sides are identical up to a renaming, we
have R ⊨ the disjunction of their preconditions.
We say that R is strongly complete if for all f ∈ D, f
is strongly complete w.r.t. R.
Theorem 7.4 (refutational completeness) Let R be a conditional rewrite
system. Assume that R is ground convergent and strongly complete. Let E_0 be
a set of boolean clauses. Then R ⊭ E_0 if and only if Disproof is applied in every
fair I-derivation issued from (E_0, ∅).
Case Simplification:
if R
Contextual Simplification:
or A C)
where A
Context Subsumption:
contextually subsumes C
Deletion:
if C is a tautology
Generation:
if for all critical context c and test substitution :
where
" is the application of Deletion, or Contextual or Case Simplication
Disproof
if C is provably inconsistent
Fig. 4. Inference System I
PROOF. (⟹): by Theorem 7.2. (⟸): The only rule that permits us to introduce
negative clauses is Case Simplification. Since the axioms have boolean
preconditions and E_0 only contains boolean clauses, all the clauses generated
in an I-derivation are boolean. If Disproof is applied in an I-derivation, then
there exists a positive clause C such that Generation cannot be applied to
C. Therefore there exist a critical context c and a test substitution such that
R ⊭ c[C]. Moreover c[C] does not match any left-hand side of R; otherwise,
the Contextual Simplification rule or the Case Simplification rule
could be applied to c[C], since R is strongly complete. As a consequence, C is
a provably inconsistent clause and therefore R ⊭ E_0. 2
8 Computer experiments
Our prototype is written in Objective Caml on top of Spike. It is designed
to prove and disprove behavioural properties in conditional theories. The nice
feature of our approach is that it needed only a few modifications of the
implicit induction system Spike to get an operational procedure for observational
deduction. Moreover, most optimisations and strategies available with Spike
can also be applied to the observational proof system.
The first step in a proof session is to compute test sets and critical contexts.
The second step is to check the ground convergence of the set of axioms. If
these steps succeed we can refute false behavioural properties. If the computation
fails, the user can introduce his own cover sets and cover contexts. After
these preliminary tasks the proof starts.
Example 8.1 We proved automatically that push(top(S), pop(S)) = S is a
behavioural property of the stack specification (see Figure 1). Note that this
example fails with the approach of [6], since it is not possible to compute automatically
a set of crucial contexts: if two stacks have the same top they are not
necessarily equal. In the approach of [16], we have to introduce an auxiliary
function iterated_pop : nat × stack → stack such that iterated_pop(n, s) iterates
pop n times. This is easy because pop is unary. The function iterated_pop
is defined by:
iterated_pop(0, s) = s        iterated_pop(n+1, s) = iterated_pop(n, pop(s))
Then, we have to prove the property for all contexts of the form
top(iterated_pop(x, c[z_stack])). However, this schematization of contexts could
be more complicated in the case of a function of arity greater than two. So, this
process seems hard to automatize in general. In the approach of [25],
this problem remains as well.
Let us describe our proof. The prover first computes a test set for R and
the induction positions of functions, which are necessary for inductive proofs.
It also computes a critical context. These computations are done only once,
before the beginning of the proof.
test set of R:
critical contexts of R:
induction positions of functions:
Application of generation on:
it is subsumed Nil of R
Delete
it is subsumed
it is subsumed
Delete
it is subsumed
The initial conjectures are observationally valid in R
Example 8.2 Consider now the specification list in Figure 3. The theorem is
automatically proved.
test set of R:
list
critical contexts of R:
induction positions of functions:
Application of generation on:
Delete
Delete
it is subsumed True of R
Simplification of:
Simplification of:
Delete
Application of generation on:
Delete
Delete
Delete
it is subsumed True of R
Simplification of:
False by R:
Delete
it is subsumed True of H4
The initial conjectures are observationally valid in R
In the same way we have proved the following conjectures:
9 Conclusion
We have presented an automatic procedure for proving observational properties
in conditional specifications. The method relies on the construction of a
set of critical contexts which enables us to prove or disprove conjectures. Under
reasonable hypotheses, we have shown that the procedure is refutationally
complete: each non-observationally-valid conjecture will be detected after a finite
time.
We have shown the potential of our context induction technique for reasoning
about object behaviours and especially for refinement. With our implementation
we proved several examples in a completely automatic way.
A cover context w.r.t. our Definition 6.2 guarantees the soundness of our procedure.
However, cover contexts computed by our procedure may contain unnecessary
contexts, as in Example 3 where union(z_list, x) is useless for observations.
We plan to refine our notion of cover and critical contexts in order to select
only the needed contexts. We also plan to extend the observation technique
to terms and formulas. In the near future we plan to extend our approach to
verify properties of concurrent and distributed object systems.
Acknowledgements
We thank Diane Bahrami and the referees for their helpful
comments.
--R
Proving the correctness of algebraically specified software.
Observational Proofs with Critical Contexts.
Proving the correctness of algebraic implementations by the ISAR system.
Behavioural approaches to algebraic specifications.
Towards an adequate notion of observation.
How to prove observational theorems with LP.
Behavioural theories and the proof of behavioural properties.
Automated theorem proving by test set induction.
Implicit induction in conditional theories.
Encompassment properties and automata with constraints.
Fundamentals of Algebraic Specification.
Rosu and K.
Towards an algebraic semantics for the object paradigm.
The specification and application to programming of abstract data types.
Implementation of parameterized observational specifications.
Context induction: a proof principle for behavioural abstractions and algebraic implementations.
Structured specifications.
Reasoning about Classes in Object-Oriented Languages: Logical Models and Tools.
On the decidability of quasi-reducibility.
Automating inductionless induction using test sets.
Testing for the ground (co-)reducibility property in term-rewriting systems.
Extending Bachmair's method for proof by consistency to the final algebra.
Proving correctness of refinements.
Test Towards Automated Verification of Behavioural Properties.
Verifying Behavioural Specifications in CafeOBJ Environment.
Initial behaviour semantics for algebraic specifications.
Computing in Horn Clause Theories.
Proofs in the
Behavioural validity of conditional equations.
On observational equivalence and algebraic specification.
Toward formal development of programs from algebraic specifications.
Towards formal development of ML programs: foundations and methodology.
Final algebra semantics and data type extensions.
Algebraic specification.
Implementing Contextual Rewriting.
--TR
On observational equivalence and algebraic specification
Toward formal development of programs from algebraic specifications: implementations revisited
Computing in Horn clause theories
Initial behavior semantics for algebraic specifications
Theorem-proving with resolution and superposition
Automating inductionless induction using test sets
Algebraic specification
Towards an adequate notion of observation
Testing for the ground (co-)reducibility property in term-rewriting systems
Extending Bachmair's method for proof by consistency to the final algebra
Behavioural approaches to algebraic specifications
Towards an algebraic semantics for the object paradigm
Behavioural theories and the proof of behavioural properties
Automated theorem proving by test set induction
Fundamentals of Algebraic Specification I
Reasoning about Classes in Object-Oriented Languages
Specifications with Observable Formulae and Observational Satisfaction Relation
Proving the Correctness of Algebraically Specified Software
Proving the Correctness of Algebraic Implementations by the ISAR System
How to Prove Observational Theorems with LP
Behaviour-Refinement of Coalgebraic Specifications with Coinductive Correctness Proofs
Implementation of Parameterized Observational Specifications
Toward Formal Development of ML Programs
Encompassment Properties and Automata with Constraints
Implementing Contextual Rewriting
Verifying Behavioural Specifications in CafeOBJ Environment
Circular Coinductive Rewriting
The specification and application to programming of abstract data types.
--CTR
Abdessamad Imine, Michaël Rusinowitch, Gérald Oster, Pascal Molli, Formal design and verification of operational transformation algorithms for copies convergence, Theoretical Computer Science, v.351 n.2, p.167-183, 21 February 2006
Grigore Rosu, On implementing behavioral rewriting, Proceedings of the 2002 ACM SIGPLAN workshop on Rule-based programming, p.43-52, October 05, 2002, Pittsburgh, Pennsylvania
Manuel A. Martins, Closure properties for the class of behavioral models, Theoretical Computer Science, v.379 n.1-2, p.53-83, June, 2007 | rewriting;induction;automated proofs;observational semantics |
570647 | Zero-interaction authentication. | Laptops are vulnerable to theft, greatly increasing the likelihood of exposing sensitive files. Unfortunately, storing data in a cryptographic file system does not fully address this problem. Such systems ask the user to imbue them with long-term authority for decryption, but that authority can be used by anyone who physically possesses the machine. Forcing the user to frequently reestablish his identity is intrusive, encouraging him to disable encryption. Our solution to this problem is Zero-Interaction Authentication, or ZIA. In ZIA, a user wears a small authentication token that communicates with a laptop over a short-range, wireless link. Whenever the laptop needs decryption authority, it acquires it from the token; authority is retained only as long as necessary. With careful key management, ZIA imposes an overhead of only 9.3% for representative workloads. The largest file cache on our hardware can be re-encrypted within five seconds of the user's departure, and restored in just over six seconds after detecting the user's return. This secures the machine before an attacker can gain physical access, but recovers full performance before a returning user resumes work. | Figure 1: Decrypting File Encrypting Keys
not impose undue usability burdens or noticeably reduce
file system performance.
The main contribution of this paper is not the construction
of a cryptographic file system. Blaze's CFS [1], Zadok's
Cryptfs [32], and Microsoft's EFS [19] all address the architecture,
administration, and cryptographic methods for a
file system. However, none of these combine user authentication
and encryption properly. Some systems, such as EFS,
require the user to reauthenticate after certain events, such
as suspension, hibernation, or long idle periods, in an attempt
to bound the window of vulnerability. The user must
explicitly produce a password when any of these events occur.
This burden, though small, will encourage some users
to disable or work around the mechanism.
2. DESIGN
ZIA's goal is to provide effective file encryption without
reducing performance or usability. All on-disk files are encrypted
for safety, but all cached files are decrypted for performance.
With its limited hardware and networking performance,
the token is not able to encrypt and decrypt file
data without a significant performance penalty. Instead,
file keys are stored on the laptop's disk, encrypted by a
key-encrypting key. Only an authorized token holds the
key-encrypting key, thus the token is required to read files. This
process is illustrated in Figure 1.
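The sealing arrangement of Figure 1 can be sketched as follows. The cipher here is an illustrative SHA-256 counter-mode construction standing in for a real one such as AES, and the helper names (`keystream`, `seal`, `unseal`) are ours, not the paper's:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Illustrative counter-mode keystream built from SHA-256; a real
    # implementation would use a vetted cipher such as AES.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def unseal(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# The file system stores Kk(Ke) on disk; only the token holds Kk.
Kk = secrets.token_bytes(32)          # key-encrypting key (lives on the token)
Ke = secrets.token_bytes(32)          # file key
sealed_Ke = seal(Kk, Ke)              # what the file system stores on disk

# On a file access, the laptop ships sealed_Ke to the token, which
# unseals it and returns Ke over the encrypted link.
assert unseal(Kk, sealed_Ke) == Ke
```

Since the laptop never holds Kk, stealing the disk alone yields only sealed file keys.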
There are two requirements for system security. First, a
user's token cannot provide key decryption services to other
users' laptops. Second, the token cannot send decrypted
file keys over the wireless link in cleartext form. Therefore,
the token and laptop use an authenticated, encrypted link.
Before the first use of a token, the user must unlock it using
a PIN. Then he must bind the token and laptop, ensuring
that his token only answers key requests from his laptop.
Next, ZIA mutually authenticates the identity of the token
and laptop over the wireless link and exchanges a session
encryption key. After authentication, polling ensures that
the token, and thus the user, is still present. When the token
is out of range, ZIA encrypts cached objects for safety. The
cache retains these encrypted pages to minimize recovery
time when the user returns, preserving usability. The overall
process is illustrated in Figure 2. The remainder of this
section presents the detailed design of ZIA, starting with
the trust and threat model.
Use PIN → Bind to Token → Authentication/Session Keys → Poll Token → Secure Laptop → Token Returns
This figure shows the process for authenticating and interacting
with the token. Once an unlocked token is bound
to a laptop, ZIA negotiates session keys and can detect the
departure of the token.
Figure 2: Token Authentication System
2.1 Trust and Threat Model
Our focus is to defend against attacks involving physical
possession of a laptop or proximity to it. Possession enables
a wide range of exploits. If the user leaves his login session
open, attacks are not even necessary; the attacker has all
of the legitimate user's rights. Even without a current login
session, console access admits a variety of well-known
attacks, some resulting in root access. An attacker can also
bypass the operating system entirely. For example, one can
remove and inspect the disk using another machine. A determined
attacker might even probe the physical memory of
a running machine.
ZIA must also defend against exploitation of the wireless
link between the laptop and token: observation, modification,
or insertion of messages. Simple attacks include eavesdropping
in the hopes of obtaining decrypted file keys. A
more sophisticated attacker might record a session between
the token and laptop, and later steal the laptop in the hopes
of decrypting prior traffic. ZIA defeats these attacks through
the use of well-known, secure mechanisms.
We assume that some collection of users and laptops belong
to a single administrative domain, within which data
can be shared. The domain includes at least one trusted authority
to simplify key management and rights revocation.
However, the system must be usable even when the laptop
is disconnected from the rest of the network. The token
and the laptop operating system form the trusted computing
base.
ZIA does not defend against a trusted but malicious user,
who can easily leak sensitive data and potentially extract key
material. ZIA does not provide protection for remote users;
they must be physically present. Attackers that jam the
spectrum used by the laptop-token channel will effectively
deny users access to their files. Our work is orthogonal to the
prevention of network-based exploits such as buffer overflow
attacks.
ZIA's security depends on the limited range of the radio
link between the token and the laptop. Repeaters could
be used to extend this range, though time-based techniques
to defeat such wormhole attacks exist [4, 13]. Similarly,
an attacker with an arbitrarily powerful and sensitive radio
could extend the range, though such attacks are difficult
given the attenuation of high-frequency radios.
2.2 Key-Encrypting Keys
In ZIA, each on-disk object is encrypted by some symmetric
key, Ke. The link connecting the laptop and token is
slow, and the token is much less powerful than the laptop.
Consequently, file decryption must take place on the laptop,
not the token. The file system stores each Ke, encrypted
by some key-encrypting key, Kk; we write this as Kk(Ke).
Only tokens know key-encrypting keys; they are never divulged.
A token with the appropriate Kk can decrypt Ke,
and hence enable reading any file encrypted by Ke.
In our model, the local administrative authority is responsible
for assigning key-encrypting keys. For reliability in the
face of lost or destroyed tokens, the administrative authority
must also hold these keys in escrow. Otherwise, losing a
token is the equivalent of losing all of one's files. Escrowed
keys need not be highly available, eliminating the need for
oblivious escrow [3] or similar approaches.
Laptops are typically "owned" by a particular user; in
many settings, one could provide only a single, unique Kk
to each user. However, ZIA must support shared access
as well, because most installations within a single administrative
domain share notions of identity and privilege. For
example, two colleagues in a department may borrow each
others' machines, and ZIA cannot preclude such uses. To
support sharing, each file key, Ke, can be encrypted by both
a user key, Ku, and some number of group keys, Kg.
The specific semantics of group access and authorization
are left to the file system. For example, one can assign
key-encrypting keys to approximate standard UNIX file protections.
Access to a file in the UNIX model is determined by
dividing the universe of users into three disjoint sets: the
file's owner, members of the file's group, and anyone else
empowered to log in to that machine. We refer to this last
set as the world. Each user has a particular identity, and is
a member of one or more groups. One could assign a user
key, Ku, to each user; a group key, Kg, to each group; and
a world key, Kw, to each machine. A user's token holds her
specific Ku, each applicable Kg, and one Kw per machine on
which she has an account. Each file's encryption key, Ke, is
stored on disk, sealed with its owner's key, Ku. If a file were
readable, writable, or executable by members of its owning
group, Kg(Ke) would also be stored. Finally, Kw(Ke)
would be stored for files that are world-accessible. Note that
the latter is not equivalent to leaving world-accessible files
unencrypted; only those with login authority on a machine
would hold the appropriate Kw.
Group keys have important implications for sharing and
revocation in ZIA. Members of a group have the implicit
ability to share files belonging to that group, since each
member has the corresponding Kg. However, if a user leaves
a group, that group's Kg must be changed to a new Kg'. Furthermore,
the departing user, who is no longer authorized
to view these files, may have access to previously-unsealed
file keys. As a result, re-keying a group requires that the
contents of each file accessible to that group be re-encrypted
with a new Ke', and that new key be re-sealed with the appropriate
key-encrypting keys.
Re-keying can be done incrementally. To distribute a new
group key, the administrative authority must supply a certified
Kg' to the token of each still-authorized user. This
must be done in a secure environment to prevent exposure
of Kg'. Thereafter, a token encountering a laptop with an "old"
Kg(Ke) can continue to use it until it is re-keyed. However,
this policy must be pursued judiciously, since it increases
the amount of data potentially visible to an ejected group
member.
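The re-keying step above can be sketched as follows. XOR with an equal-length random key stands in for real encryption, and the dictionary layout and function names are our illustration rather than ZIA code:

```python
import secrets

# Toy sealing: XOR of equal-length byte strings stands in for a real cipher.
def seal(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(key, data))

unseal = seal  # XOR is its own inverse

def rekey_directory(entry, old_group_key, new_group_key, user_key):
    """Re-encrypt a directory's contents under a fresh Ke' and re-seal it."""
    old_Ke = unseal(old_group_key, entry["Kg(Ke)"])
    plaintext = seal(old_Ke, entry["data"])        # decrypt with the old file key
    new_Ke = secrets.token_bytes(32)
    entry["data"] = seal(new_Ke, plaintext)        # re-encrypt the contents
    entry["Ku(Ke)"] = seal(user_key, new_Ke)       # re-seal for the owner...
    entry["Kg(Ke)"] = seal(new_group_key, new_Ke)  # ...and for the new group key

Ku, Kg, Kg2 = (secrets.token_bytes(32) for _ in range(3))
Ke = secrets.token_bytes(32)
secret = secrets.token_bytes(32)
entry = {"data": seal(Ke, secret), "Ku(Ke)": seal(Ku, Ke), "Kg(Ke)": seal(Kg, Ke)}

rekey_directory(entry, Kg, Kg2, Ku)
# The ejected member's old Kg no longer yields the current file key,
# while holders of the new Kg' can still recover the contents.
assert unseal(Kg2, entry["Kg(Ke)"]) != unseal(Kg, entry["Kg(Ke)"])
assert seal(unseal(Kg2, entry["Kg(Ke)"]), entry["data"]) == secret
```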
2.3 Token Vulnerabilities
Tokens provide higher physical security than laptops, since
they are worn rather than carried. Unfortunately, it is still
possible for a user to lose a token. Token loss is a very serious
threat since tokens hold key-encrypting keys. How can
we limit the damage of such an occurrence?
The most serious vulnerability surrounding token loss is
the extraction of key-encrypting keys. PIN-protected,
tamper-resistant hardware [31] makes this more difficult, as does
storing all Kk encrypted with some password. In either
case, the PIN/password must be known only to the token's
rightful owner. At first glance, this seems to merely shift
the problem of authentication from the laptop to the token.
However, since the token is worn, it is more physically secure
than a laptop; it is reasonable to allow long-lived authentication
between the token and the user, perhaps on the order
of once a day.
Bounding the authentication session between the user and
token also prevents an attacker from profitably stealing a
token, and then later a laptop. After the authentication
period expires, the token will no longer be able to supply
any requested Ke. Such schemes can be further improved
through the use of server-assisted protocols to prevent offline
dictionary attacks [17], with the laptop playing the role of
the server to the token's device.
Even tokens that have not been stolen can act as liabilities.
Suppose an attacker has a stolen laptop but no token,
and is sitting near a legitimate user from the same domain.
This tailgating attacker can force the stolen laptop to generate
decryption requests that could use one of the legitimate
user's key-encrypting keys. If the legitimate token
were to respond, the system would be compromised.
To prevent this, we provide a mechanism that establishes
bindings between tokens and laptops. Before a token will
respond to a particular laptop's request, the user must acknowledge
that he intends to use this token with that laptop.
There are several ways one can accomplish this. For example,
a token with a rudimentary user interface would alert
the user when some new laptop first asks it to decrypt a file
key. The user then chooses to allow or deny that laptop's
current and future requests. As with token authentication,
bindings have relatively long but bounded duration; after
a binding expires, the token/laptop pair must be rebound.
Since a user can use more than one machine, a token may
be bound to more than one laptop. Likewise, a laptop may
have more than one token bound to it.
User-token authentication and token-laptop binding are
necessarily visible to the user, and thus add to the burden
of using the system. However, since they are both long-lived,
they require infrequent user action. In practice, they
are no more intrusive than having to unlock your office door
once daily, without the accompanying threat of forgetting
to re-lock it. The right balance between usability and security
depends on the physical nature of the token, its user
interface capabilities, and the user population.
2.4 Token-Laptop Interaction
The binding process must accomplish two things: mutual
authentication and session key establishment. Mutual
authentication can be provided with public-key cryptography
[22]. In public-key systems, each principal has a pair
of keys, one public and one secret. To be secure, each principal's
public key must be certified, so that it is known to
belong to that principal. Because laptops and tokens fall
under the same administrative domain, that domain is also
responsible for certifying public keys.
ZIA uses the Station-to-Station protocol [9], which combines
public-key authentication and Diffie-Hellman key exchange.
Diffie-Hellman key exchange provides perfect forward
secrecy: session keys cannot be reconstructed, even if
the private keys of both endpoints are known. Once a session
key is established, it is used to encrypt all messages between
the laptop and token. Each message includes a nonce,
a number that uniquely identifies a packet within each session
to prevent replay attacks [5]. In addition, the session
key is used to compute a message authentication code, verifying
that a received packet was neither sent nor modified
by some malicious third party [21].
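A minimal sketch of the key-exchange and per-packet authentication math follows. The Diffie-Hellman parameters are textbook toy values and the Station-to-Station certificate signatures are omitted; a real deployment would use a large standardized group:

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman group (illustration only; never use in practice).
p, g = 23, 5

def dh_keypair():
    x = secrets.randbelow(p - 2) + 1   # ephemeral private exponent
    return x, pow(g, x, p)             # (private, public)

# Laptop and token each pick an ephemeral key pair and exchange publics;
# ephemeral exponents are what provide forward secrecy.
x_laptop, pub_laptop = dh_keypair()
x_token, pub_token = dh_keypair()

shared_l = pow(pub_token, x_laptop, p)
shared_t = pow(pub_laptop, x_token, p)
assert shared_l == shared_t

# Both sides derive the same session key from the shared secret.
session_key = hashlib.sha256(str(shared_l).encode()).digest()

def mac_message(key: bytes, nonce: int, payload: bytes) -> bytes:
    # Each packet carries a per-session nonce plus an HMAC over it, so a
    # replayed or altered packet fails verification.
    body = nonce.to_bytes(8, "big") + payload
    return body + hmac.new(key, body, hashlib.sha256).digest()

def verify(key: bytes, packet: bytes) -> bool:
    body, tag = packet[:-32], packet[-32:]
    return hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest())

pkt = mac_message(session_key, nonce=1, payload=b"decrypt Kk(Ke)")
assert verify(session_key, pkt)
assert not verify(session_key, pkt[:-1] + bytes([pkt[-1] ^ 1]))  # tampered
```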
2.5 Assigning File Keys
What is the right granularity at which to assign file encryption
keys? A small grain size reduces the data exposed
if a file key is revealed, but a larger grain size provides more
opportunity for key caching and re-use.
ZIA hides the latency of key acquisition by overlapping it
with physical disk I/O. Further, it must amortize acquisition
costs by re-using keys when locality suggests that doing so
is beneficial. In light of this, we have chosen to assign file
keys on a per-directory basis.
People tend to put related files together, so files in the
same directory tend to be used at the same time. Therefore,
many file systems place all files in a directory in the same
cylinder group to reduce seek time between them [18]. This
makes it difficult to hide key acquisition costs for per-file
keys. Instead, since each file in a directory shares the same
file key, key acquisition costs are amortized across
intra-directory accesses. Alternatively, one could imagine keeping
per-file keys in one file, and reading them in bulk; however,
maintaining this structure requires an extra seek on each file
creation or deletion.
In our prototype, we store the file key for a directory in
a keyfile within that directory. The keyfile contains two encrypted
copies of the file key: Ku(Ke) and Kg(Ke), where
Ku and Kg correspond to the directory's owner and group.
We have chosen not to implement world keys, but adding
them is straightforward. This borrows from the UNIX protection
model, though it does not replicate it exactly. AFS,
the Andrew File System, makes a similar tradeoff in managing
access control lists on a per-directory basis rather than a
per-file one [28]. However, AFS is motivated by conceptual
simplicity and storage overhead, not efficiency in retrieving
access control list entries.
2.6 Handling Keys Efficiently
Key acquisition time can be a significant expense, so we
overlap key acquisition with disk operations whenever possible.
Since disk layout policies and other optimizations often
reduce the opportunity to hide latency, we cache decrypted
keys obtained from the token.
Disk reads provide opportunities for overlap. When a read
requiring an uncached key commences, ZIA asks the token
to decrypt the key in parallel. Unfortunately, writes do not
offer the same opportunity; the key must be in hand to
encrypt the data before the write commences. However, it
is likely that the decryption key is already in the key cache
for writes. To write a file, one must first open it. This open
requires a lookup in the enclosing directory. If this lookup
is cached, the file key is also likely to be cached. If not,
then key acquisition can be overlapped with any disk I/O
required for lookup.
Neither overlapping nor caching applies to directory creation,
which requires a fresh key. Since this directory is new,
it cannot have a cached key already in place. Since this is
a write, the key must be acquired before the disk operation
initiates. However, ZIA does not need a particular key to associate
with this directory; any key will do. Therefore, ZIA
can prefetch keys from the authentication token, encrypted
with the current user's Ku and Kg, to be used for directories
created later. The initial set of fresh keys is prefetched
when the user binds a token to a laptop. Thereafter, if the
number of fresh keys drops below a threshold, a background
daemon obtains more.
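The fresh-key pool for directory creation might look like the following sketch. The class names, the threshold value, and the token stub are our assumptions, not ZIA's implementation:

```python
import secrets
from collections import deque

LOW_WATER = 4  # assumed refill threshold; the paper only says "a threshold"

class StubToken:
    # Stands in for the real token, which would return keys already sealed
    # under the current user's Ku and Kg.
    def make_fresh_keys(self, n):
        return [secrets.token_bytes(32) for _ in range(n)]

class FreshKeyPool:
    def __init__(self, token):
        self.token = token
        self.pool = deque(token.make_fresh_keys(2 * LOW_WATER))  # at bind time

    def take(self):
        key = self.pool.popleft()          # any fresh key will do
        if len(self.pool) < LOW_WATER:     # a background daemon would refill
            self.pool.extend(self.token.make_fresh_keys(LOW_WATER))
        return key

pool = FreshKeyPool(StubToken())
k = pool.take()   # directory creation never waits on the token
assert isinstance(k, bytes) and len(pool.pool) >= LOW_WATER
```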
Key caching and prefetching greatly reduce the need for
laptop/token interactions. However, frequent assurance that
the token is present is our only defense against intruders. To
provide this assurance, we add a periodic challenge/response
between the laptop and the token. The period must be short
enough that the time to discover an absence plus the time to
secure the machine is less than that required for a physical
attack. It also must be long enough to impose only a light
load on the system. We currently set the interval to be
one second; this is long enough to produce no measurable
load, but shorter than the time to protect the laptop in the
worst case. Thus, it does not contribute substantially to the
window in which an attacker can work.
2.7 Departure and Return
When the token does not respond to key requests or challenges,
the user is declared absent. All file system state must
be protected and all cached file keys flushed. When the user
returns, ZIA must re-fetch file keys and restore the file cache
to its pre-departure state. This process should be transparent
to the user: it should complete before he resumes work.
There are two reasons why a laptop might not receive a
response from the token. The user could truly be away, or
the link may have dropped a packet. ZIA must recover from
the latter to avoid imposing a performance penalty on a
still-present user. To accomplish this, we use the expected
round-trip time between the laptop and the token. Because this
is a single, uncongested network hop, this time is relatively
stable. ZIA retries key requests if responses are not received
within twice the expected round-trip time, with a total of
three attempts. Retries do not employ exponential backoff,
since we expect losses to be due to link noise, not congestion;
congestion from nearby users is unlikely because of the short
range.
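The retry policy just described can be sketched as follows; `send_request` is a hypothetical transport hook, and the stub link below simulates two dropped packets followed by a delivered one:

```python
# Up to three attempts, each waiting twice the expected round-trip time,
# with no exponential backoff (losses are assumed to be link noise).
def poll_token(send_request, expected_rtt: float, attempts: int = 3):
    for _ in range(attempts):
        reply = send_request(timeout=2 * expected_rtt)
        if reply is not None:
            return True        # token (and thus user) still present
    return False               # declare the user absent; secure the cache

# A lossy stub link: drops the first two packets, delivers the third.
losses = iter([None, None, b"pong"])
assert poll_token(lambda timeout: next(losses), expected_rtt=0.01) is True

# A dead link: all three attempts time out, so the user is declared absent.
assert poll_token(lambda timeout: None, expected_rtt=0.01) is False
```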
If there is still no response, the user is declared absent and
the file system must be secured. ZIA first removes all name
mappings from the name cache, forcing any new operations
to block during lookup. ZIA then walks the list of its cached
pages, removing the cleartext versions of the pages. There
are two ways to accomplish this: writing dirty pages to disk
and zeroing the cache, or encrypting all cached pages in
place.
Zeroing the cache has the attractive property that little
work is required to secure the machine. Most pages will be
clean, and do not need to be written to disk. However, when
the user returns, ZIA must recover and decrypt pages that
were in the cache. They are likely to be scattered across the
disk, so this will be expensive.
Instead, ZIA encrypts all of the cached pages in place.
Each page belongs to a file on disk, with a matching file
key. The page descriptor holds a reference to the cached,
decrypted key. Referenced keys may not be evicted; they
are wired in the cache. Without a corresponding key, there
would be no way to encrypt a cached page, and such keys
cannot be obtained from the now-departed token.
The expense of encryption is tolerable given our goal of
foiling a physical attack. For example, the largest file cache
we can observe on our hardware can be encrypted within five
seconds. To be successful, an attacker would have to gain
possession of the machine and extract information within
that time, an unlikely occurrence.
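The encrypt-in-place walk can be sketched as follows. The page structure and XOR keystream are our illustration (a real cipher would replace the XOR), but the flow, encrypting each page under its wired file key and reversing on return, matches the text:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Repeat the key to cover the page; stands in for a real cipher.
    return bytes(a ^ b for a, b in zip(data, key * (len(data) // len(key) + 1)))

class Page:
    def __init__(self, data: bytes, file_key: bytes):
        self.data = data
        self.file_key = file_key   # wired: may not be evicted while referenced
        self.encrypted = False

def secure_cache(pages):
    # Walk the cached pages, replacing each cleartext page in place.
    for p in pages:
        if not p.encrypted:
            p.data = xor(p.data, p.file_key)
            p.encrypted = True

def restore_cache(pages):
    # On the user's return, decrypt in place; no disk I/O is required.
    for p in pages:
        if p.encrypted:
            p.data = xor(p.data, p.file_key)
            p.encrypted = False

key = secrets.token_bytes(32)
cache = [Page(b"sensitive page contents", key)]
secure_cache(cache)
assert cache[0].data != b"sensitive page contents"
restore_cache(cache)
assert cache[0].data == b"sensitive page contents"
```

Keeping the encrypted pages resident is what lets the return path beat a cold re-read from disk.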
While the user is absent, most disk operations block until
the token is once again within range; ZIA then resumes
pending operations. This means that background processes
cannot continue while the user is away. In a physically secure
location, such as an office building, fixed beacons can
provide authentication in lieu of the user. Unfortunately,
such beacons would not prevent intra-office theft and must
be used judiciously. At insecure locations, such as an airport,
the user must not leave unencrypted data exposed, and
background computation should not be enabled. This would
defeat the purpose of the system.
2.8 Laptop Vulnerabilities
What happens when a laptop is stolen or lost? Since ZIA
automatically secures the file system, no data can be extracted
from the disk. Likewise, all file keys and session keys
have been zeroed in memory. However, the laptop's
private key, sd, must remain on the laptop to allow transparent
re-authentication. If the attacker recovers sd, he can
impersonate a valid laptop. To defend against this, the user
must remove the binding between the token and the stolen
device. This capability can be provided through a simple
interface on the token. Use of tamper-resistant hardware in
the laptop would make extracting sd more difficult.
Instead of offline inspection, suppose an attacker modifies
the device and returns it to a user. Now the system may
contain trojans, nullifying all protections afforded by ZIA.
Any device that is stolen, and later recovered, should be
regarded as suspect and not used. Secure booting [6, 14]
can be used to guard against this attack.
[Figure: block diagram of ZIA, showing the VFS, the page cache, the ZIA
kernel module and key cache stacked over the underlying file system, and
the authentication client (keyiod) and server (keyd) connecting the
laptop and the token.]
This figure shows ZIA's design. The kernel module handles
cryptographic file I/O. The authentication client and server
manage key decryption and detect token proximity. A key
cache is included to improve performance.
Figure 3: An overall view of ZIA
3. IMPLEMENTATION
Our implementation of ZIA consists of two parts: an in-kernel
encryption module and a user-level authentication
system. The kernel portion provides cryptographic I/O,
manages file keys, and polls for the token's presence. The
authentication system consists of a client on the user's laptop
and a server on the token, communicating via a secured
channel.
Figure 3 is a block diagram of the ZIA prototype. The
kernel module handles all operations intended for our file
system and forwards key requests to the authentication system.
We used FiST [33], a tool for constructing stackable
file systems [11, 27], to build our kernel-resident code. This
code is integrated with the Linux 2.4.10 kernel.
The authentication system consists of two components.
The client, keyiod, runs on the laptop, and the server, keyd,
runs on the token; both are written in C. The client handles
session establishment and request retransmission. The
server must respond to key decryption and polling requests.
The processing requirements of keyd are small enough that
it can be implemented in a simple, low-power device.
3.1 Kernel Module
In Linux, all file system calls pass through the Virtual File
System (VFS) layer [15]. VFS provides an abstract view of
the file systems supported by the OS. A stackable file system
inserts services between the concrete implementations of an
upper and lower file system. FiST implements a general
mechanism for manipulating page data and file names; this
makes it ideal for constructing cryptographic file services.
The FiST distribution also includes a proof-of-concept cryptographic
file system, Cryptfs.
3.1.1 File and Name Encryption
The kernel module encrypts both file pages and file names
with the Rijndael cipher [8]. We selected Rijndael for two
reasons. First, it has been chosen as NIST's Advanced Encryption
Standard, AES. Second, it has excellent performance,
particularly for key setup, a serious concern in the
face of per-directory keys.
ZIA preserves file sizes under encryption. File pages are
encrypted in cipher block chaining (CBC) mode with a 16-byte
block. We use the inode and page offsets to compute
a different initialization vector for each page of a file. Tail
portions that are not an even 16 bytes are encrypted in
cipher feedback mode (CFB). We chose CFB rather than ciphertext
stealing [7], since we are concerned with preventing
exposure, not providing integrity.
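The paper does not give the exact function used to derive each page's initialization vector from the inode and page offset; the sketch below uses a truncated SHA-256 over both values as one plausible stand-in. The name `page_iv` and the hash-based construction are ours, not taken from the ZIA implementation.

```python
import hashlib
import struct

BLOCK = 16  # Rijndael/AES block size, in bytes

def page_iv(inode: int, page_offset: int) -> bytes:
    """Derive a distinct 16-byte IV for each page of a file.

    ZIA computes a different IV per page from the inode and page
    offset; the exact function is unspecified, so a truncated
    SHA-256 serves here as a plausible stand-in.
    """
    material = struct.pack(">QQ", inode, page_offset)
    return hashlib.sha256(material).digest()[:BLOCK]

# Distinct pages of the same file get distinct IVs...
iv0 = page_iv(inode=712, page_offset=0)
iv1 = page_iv(inode=712, page_offset=4096)
assert iv0 != iv1 and len(iv0) == BLOCK
# ...as do same-offset pages of different files.
assert page_iv(711, 0) != page_iv(712, 0)
```

Because the IV is derived rather than stored, no per-page metadata is needed and file sizes are preserved, matching the design described above.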
ZIA does not preserve the size of file names under encryption;
they are further encoded in Base-64, ensuring that encrypted
filenames use only printable characters. Otherwise,
the underlying file system might reject encrypted file names
as invalid. In exchange, limits on file and path name sizes
are reduced by 25%. Cryptfs made the same decision for the
same reasons [32].
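The 25% figure follows directly from Base-64's 3-bytes-to-4-characters expansion. A small illustration (the 255-character limit below is a typical per-component limit assumed for illustration; the paper does not name a specific value):

```python
import base64

def max_plaintext_name(underlying_limit: int = 255) -> int:
    """Longest cleartext name that still fits after encryption plus Base-64.

    Encryption preserves name length; Base-64 then expands every
    3 bytes into 4 characters, so usable length drops by ~25%.
    """
    return (underlying_limit // 4) * 3

limit = 255  # assumed limit in the lower file system
usable = max_plaintext_name(limit)
assert usable == 189
assert 0.24 < 1 - usable / limit < 0.26  # the ~25% cost described above

# Sanity check: Base-64 output of a maximal name fits under the limit.
encoded = base64.b64encode(b"x" * usable)
assert len(encoded) <= limit
```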
The kernel module performs two additional tasks. First,
the module prefetches fresh file keys to be used during directory
creation. Second, the module manages the storage of
encrypted keys. The underlying file system stores keys in a
keyfile, but keyfiles are not visible within ZIA. This is done
for transparency, not security; on-disk file keys are always
encrypted.
3.1.2 Polling, Disconnection, and Reconnection
ZIA periodically polls the token to ensure that the user
is still present. The polling period must be longer than a
small multiple of the network round-trip time, but shorter than
the time required for an adversary to obtain and inspect the
laptop. This window is between hundreds of milliseconds
and tens of seconds. We chose a period of one second; this
generates unnoticeable traffic, but provides tight control.
Demonstrated knowledge of the session key is sufficient to
prove the token's presence. Therefore, a poll message need
only be an exchange of nonces [5]: the device sends a number,
n, encrypted with the key and the token returns n
encrypted by the same key. The kernel is responsible for
polling; it cannot depend on a user-level process to declare
the token absent, since it must be fail-stop. Similarly, if
the user suspends the laptop, or it suspends itself due to
inactivity, the kernel treats this as equivalent to loss of
communication.
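The nonce exchange above can be sketched as a challenge-response. The Python standard library has no block cipher, so this sketch substitutes HMAC-SHA256 for encryption of the nonce under the session key; either way, a correct reply demonstrates possession of the key. All names here are illustrative, not from keyiod or keyd.

```python
import hashlib
import hmac
import os

session_key = os.urandom(16)  # established at session setup

def token_respond(key: bytes, nonce: bytes) -> bytes:
    # The token proves it holds the session key by transforming
    # the nonce under it (HMAC here, encryption in the paper).
    return hmac.new(key, nonce, hashlib.sha256).digest()

def device_poll(key: bytes, respond) -> bool:
    n = os.urandom(16)  # fresh nonce per poll, foiling replays
    expected = hmac.new(key, n, hashlib.sha256).digest()
    return hmac.compare_digest(expected, respond(key, n))

assert device_poll(session_key, token_respond)  # token in range
wrong_key = os.urandom(16)
assert not device_poll(session_key,
                       lambda _k, n: token_respond(wrong_key, n))
```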
If the kernel declares the user absent, it secures the file
system. Cached data is encrypted, decrypted file keys are
flushed, and both are marked invalid. We added a flag to
the page structure to distinguish encrypted pages from those
that were invalidated through other means. Most I/O in ZIA
blocks during the user's absence; non-blocking operations
return the appropriate error code.
When keyiod reestablishes a secure connection with the
token, two things happen. First, decrypted file keys are re-fetched
from the token. Second, file pages are decrypted
and made valid. As pages are made valid, any operations
blocked on them resume. We considered overlapping key
validation with page decryption to improve restore latency.
However, the simpler scheme is sufficiently fast.
3.2 Authentication System
The authentication system is implemented in user space
for convenience. All laptop-token communication is encrypted
and authenticated by session keys plus nonces. Communication
between the laptop and the token uses UDP rather
than TCP, so that we can provide our own retransmission
mechanism. This enables a more aggressive schedule, since
congestion is not a concern. We declare the user absent after
three dropped messages; this parameter is tunable. The token,
in the form of keyd, holds all of a user's key-encrypting
keys. Since session establishment is the most taxing operation
required of keyd, and it is infrequent, keyd is easily
implemented on low-power hardware.
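The retry policy described here (fixed-interval retransmission with no exponential backoff, absence declared after three dropped polls) can be sketched as follows. The transport is simulated with callables; the real client and server exchange UDP datagrams.

```python
RETRIES = 3  # dropped messages before declaring absence; tunable

def poll_with_retries(send_poll, retries: int = RETRIES) -> bool:
    """Return True if the token answered, False once the user
    must be declared absent (fail-stop behavior)."""
    for _ in range(retries):
        if send_poll():      # one poll attempt; True on a valid reply
            return True      # no backoff: losses are noise, not congestion
    return False

# A lossy link that drops the first two polls but delivers the third:
replies = iter([False, False, True])
assert poll_with_retries(lambda: next(replies))

# A dead link: after three silent polls, the user is absent.
assert not poll_with_retries(lambda: False)
```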
4. EVALUATION
In evaluating ZIA, we set out to answer the following questions:
- What is the cost of key acquisition?
- What overhead does ZIA impose? What contributes to this overhead?
- Can ZIA secure the machine quickly enough to prevent attacks when the user departs?
- Can ZIA recover system state before a returning user resumes work?
To answer these questions, we subjected our prototype to
a variety of benchmarks. For these experiments, the client
machine was an IBM ThinkPad 570, with 128 MB of physical
memory, a 366 MHz Pentium II CPU, and a 6.4 GB
IDE disk drive with a 13 ms average seek time. The token
was a Compaq iPAQ 3650 with 32 MB of RAM. They were
connected by an 802.11 wireless network running in ad hoc
mode at 1 Mb/s. All keys were 128 bits long. The token
is somewhat more powerful than current wearable devices.
However, the rapid advancement of embedded, low-power
devices makes this a realistic token in the near future.
4.1 Key Acquisition
Our first task is to compare the cost of key acquisition
with typical file access times. To do so, we measured the
elapsed time between the kernel's request for key decryption
and the delivery of the key to the kernel. The average
acquisition cost is 13.9 milliseconds, with a standard deviation
of 0.0015. This is similar to the average seek time of
the disk in our laptops, though layout policy and other disk
optimizations will tend to reduce seek costs in the common
case.
4.2 ZIA Overhead
Our second goal is to understand the overhead imposed by
ZIA on typical system operation. Our benchmark is similar
to the Andrew Benchmark [12] in structure. The Andrew
Benchmark consists of copying a source tree, traversing the
tree and its contents, and compiling it. We use the Apache
source tree. It is 7.4 MB in size; when compiled, the
total tree occupies 9.7 MB. We pre-configure the source tree
for each trial of the benchmark, since the configuration step
does not involve appreciable I/O in the test file system.
While the Andrew Benchmark is well known, it does have
several shortcomings; the primary one is a marked dependence
on compiler performance. In light of this, we also
subject ZIA to three I/O-intensive workloads: directory creation,
directory traversal, and tree copying. The first two
highlight the cost of key creation and acquisition. The third
measures the cost of data encryption and decryption.
File System   Time, sec     Over Ext2fs
Ext2fs        52.63 (0.30)  -
Base+         52.76 (0.22)  0.24%
Cryptfs       57.52 (0.18)  9.28%
ZIA           57.54 (0.20)  9.32%
This shows the performance of Ext2fs against five stacked
file systems using a Modified Andrew Benchmark. Standard
deviations are shown in parentheses. ZIA has an overhead
of less than 10% in comparison to an Ext2fs system
and performs similarly to a simple single-key encryption
system, Cryptfs.
Figure 4: Modified Andrew Benchmark
4.2.1 Modified Andrew Benchmark
We compare the performance of Linux's ext2fs against
four stacking file systems: Base+, Cryptfs, ZIA, and ZIA-NPC.
Base+ is a null stacked file system. It transfers file
pages but provides no name translation. Cryptfs adds file
and name encryption; it uses a single, static key for the
file system. Both Base+ and Cryptfs are samples from
the FiST distribution [33]. To provide a fair comparison, we
replaced Blowfish [29] with Rijndael in Cryptfs, improving
its performance. ZIA is as described in this paper. ZIA-NPC
obtains a key on every disk access; it provides neither
caching nor prefetching of keys.
Each experiment consists of 20 runs. Before each set,
we compile the same source in a separate location. This
ensures that the test does not include the effects of loading
the compiler and linker from a separate file system. Each
run uses separate source and destination directories to avoid
caching files and name translations. The results are shown
in Figure 4; standard deviations are shown in parentheses.
The results for ext2fs give baseline performance. The result
for Base+ quantifies the penalty for using a stacking
file system. Cryptfs adds overhead for encrypting and decrypting
file pages and names. ZIA encompasses both of
these penalties, plus any costs due to key retrieval, token
communication and key storage.
For this benchmark, ZIA imposes less than a 10% penalty
over ext2fs. Its performance is statistically indistinguishable
from that of Cryptfs, which uses a single key for all
cryptographic operations. Key caching is critical; without
it, ZIA-NPC is more than four times slower than the base
file system.
To examine the root causes of ZIA's overhead, we instrumented
the 28 major file and inode operations in both ZIA
and Base+. The difference between the two, normalized by
the number of operations, gives the average time ZIA adds
to each. Most operations incur little or no penalty, but five
operations incur measurable overhead. The result is shown
in Figure 5.
Overhead in each operation stems from ZIA's encryption
and key management functions. In Base+, the readpage
and writepage functions merely transfer pages between the
upper and lower file system. Since writepage is asynchronous,
this operation is relatively inexpensive. In ZIA we
must encrypt the page synchronously before writing to the
lower file system. During readpage, we must decrypt the
pages synchronously; this leads to the overheads shown.
ZIA's mkdir must write the keyfile to the disk. This adds
an extra file creation to every mkdir. Finally, filldir and
[Figure: bar chart of per-operation overhead, time (us) per operation,
for Filldir, Mkdir, Readpage, Writepage, and Lookup.]
This shows the per-operation overhead for ZIA compared
to the Base+ file system. Writing and reading directory
keys from disk is an expensive operation, as is encrypting
and decrypting file pages.
Figure 5: Per-Operation Overhead
File System   Time, sec     Over Ext2fs
Ext2fs        9.67 (0.23)   -
Base+         9.66 (0.13)   -0.15%
Cryptfs       9.88 (0.14)   2.17%
ZIA           10.25 (0.09)  5.9%
This table shows the performance for the creation of 1000
directories, each containing one zero-length file. Standard
deviations are shown in parentheses. Although ZIA has
a cache of fresh keys for directory creation, it must write
those keyfiles to disk.
Figure 6: Creating Directories
lookup must encrypt and decrypt file names, and must sometimes
acquire a decrypted file key.
4.2.2 I/O Intensive Benchmarks
Although the Modified Andrew Benchmark shows only
a small overhead, I/O-intensive workloads will incur larger
penalties. We conducted three benchmarks to quantify them.
The first two stress directory operations, and the third measures
the cost of copying data in bulk.
The first experiment measures the time to create 1000
directories, each containing a zero-length file. The results
are shown in Figure 6. Each new directory requires ZIA to
write a new keyfile to the disk, adding an extra disk write to
each operation; the write-behind policy of ext2fs keeps these
overheads manageable. In addition, the filenames must be
encrypted, accounting for the rest of the overhead.
The next benchmark examines ZIA's overhead for reading
1000 directories and a zero-length file in each directory. This
stresses keyfile reads and key acquisition. Note that without
the empty file ZIA does not need the decrypted key and the
token would never be used. We ran a find across the 1000
directories and files created during the previous experiment.
We rebooted the machine between the previous test and
this one to make sure the name cache was not a factor. The
results are shown in Figure 7.
The results show a large overhead for ZIA. This is not
surprising since we have created a file layout with the smallest
degree of directory locality possible. ZIA is forced to
fetch 1000 keys, one for each directory; there is no locality
File System   Time, sec     Over Ext2fs
Base+         15.72 (1.16)  1.04%
Cryptfs       15.41 (1.07)  -0.94%
ZIA           29.76 (3.33)  91.24%
This table shows the performance for reading 1000 directories,
each containing one zero-length file. Standard deviations
are shown in parentheses. In this case, ZIA must
synchronously acquire each file key.
Figure 7: Scanning Directories
File System   Time, sec     Over Ext2fs
Ext2fs        19.68 (0.28)  -
Base+         31.05 (0.68)  57.78%
Cryptfs       42.81 (1.34)  117.57%
ZIA           43.56 (1.13)  121.38%
This table shows the performance for copying a 40 MB
source tree from one directory in the file system to another.
Standard deviations are shown in parentheses. Synchronously
decrypting and encrypting each file page adds
overhead to each page copy. This is true for ZIA as well as
Cryptfs.
Figure 8: Copying Within the File System
for key caching to exploit. This inserts a network round
trip into reading the contents of each directory, accounting
for an extra 14 milliseconds per directory read. Note that
the differences between Base+, Cryptfs and Ext2fs are not
statistically significant.
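As a sanity check, the 13.9 ms mean key-acquisition cost measured in Section 4.1, paid once per directory, almost exactly accounts for the gap between ZIA and Base+ in Figure 7:

```python
# Back-of-the-envelope check: 1000 synchronous key fetches at the
# measured mean of 13.9 ms each should explain ZIA's extra latency
# relative to Base+ when scanning 1000 directories.
num_dirs = 1000
key_acquisition_s = 0.0139             # measured mean, Section 4.1
predicted_overhead = num_dirs * key_acquisition_s   # ~13.9 s
measured_gap = 29.76 - 15.72                        # ZIA minus Base+, ~14.0 s
assert abs(predicted_overhead - measured_gap) < 1.0  # within ~1 s
```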
Each directory read in ZIA requires a keyfile read and a
key acquisition in addition to the work done by the underlying
ext2fs. Interestingly, the amount of unmasked acquisition
time plus the time to read the keyfile was similar to
the measured acquisition costs. To better understand this
phenomenon, we instrumented the internals of the directory
operations. Surprisingly, the directory read completed
in a few tens of microseconds, while the keyfile read was a
typical disk access. We believe that this is because, in our
benchmark, keyfiles and directory pages are always placed
on the same disk track. In this situation, the track buffer
will contain the directory page before it is requested.
It is likely that an aged file system would not show such
consistent behavior [30]. Nevertheless, we are considering
moving keyfiles out of directories and into a separate location
in the lower file system. Since keys are small, one
could read them in batches, in the hopes of prefetching useful
encrypted file keys. When encrypted keys are already in
hand, the directory read would no longer be found in the
track buffer, and would have to go to disk. However, this
time would be overlapped with key acquisition, reducing total
overheads.
The final I/O-intensive experiment is to copy the Pine 4.21
source tree from one part of the file system to another. The
initial files are copied in and then the machine is rebooted to
avoid hitting the page cache. This measures data-intensive
operations. The Pine source is 40.4 MB spread across 47
directories. The results are shown in Figure 8. In light of
the previous experiments, it is clear why Cryptfs and ZIA are
slow in comparison to Base+ and Ext2fs. Each file page is
synchronously decrypted after a read and encrypted before
a write.
[Figure: plot of time (s) versus page cache size (MB) for disconnection
and reconnection.]
This plot shows the disconnection encryption time and reconnection
decryption time. The line shows the time required
to encrypt all the file pages when the token moves
out of range. The blocks show the time required to refetch
all the cached keys and decrypt the cached file pages.
Figure 9: Disconnection and Reconnection
4.3 Departure and Return
In addition to good performance, ZIA must have two additional
properties. For security, all file page data must be encrypted
soon after a user departs. To be usable, ZIA should
restore the machine to the pre-departure state before the
user resumes work. Recall that when the user leaves, the
system encrypts the file pages in place. When the user returns,
ZIA requests decryption of all keys in the key cache
and then decrypts the data in the page cache. To measure
both disconnection and reconnection time, we copied several
source directories of various sizes into ZIA, removed the
token, and then brought it back into range. Figure 9 shows
these results. The line shows the time required to secure
the file system and the points represent the time required
to restore it. The right-most points on the graph represent
the largest file cache we could produce in our test system.
The encryption time depends solely on the amount of data
in the page cache. Unsurprisingly, encryption time is linear
with page cache size. Decryption is also linear, though key
fetching requires a variable amount of time due to the
unknown number of keys in the cache. We believe that a
window of five seconds is too short for a thief to obtain the
laptop and examine the contents of the page cache. Furthermore,
the user should come back to a system with a warm
cache. Once the user is within radio range, he must walk to
the laptop, sit down, and resume work; this is likely to be
more than six seconds.
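Because disconnection time is linear in cache size, a simple linear model predicts the secure window for other cache sizes. The 100 MB cache size and the implied 20 MB/s encryption throughput below are assumptions chosen only to be consistent with the "largest cache in under five seconds" claim; they are not reported measurements.

```python
# Linear model of disconnection time: time = cache_size / throughput.
assumed_cache_mb = 100.0   # hypothetical largest cache on 128 MB of RAM
secure_window_s = 5.0      # the paper's worst-case secure window
throughput_mb_s = assumed_cache_mb / secure_window_s   # 20 MB/s, implied

def time_to_secure(cache_mb: float) -> float:
    """Predicted seconds to encrypt the page cache in place."""
    return cache_mb / throughput_mb_s

assert time_to_secure(assumed_cache_mb) == secure_window_s
assert time_to_secure(40.0) < secure_window_s  # smaller caches secure faster
```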
5. RELATED WORK
To the best of our knowledge, ZIA is the first system to
provide encrypted filing services that defend against physical
attack while imposing negligible usability and performance
burdens on a trusted user. ZIA accomplishes this by separating
the long-term authority to act on the user's behalf
from the entity performing the actions. The actor holds
this authority only over the short term, and refreshes it as
necessary.
There are a number of file systems that provide transparent
encryption; the best known is CFS [1]. CFS is built
as an indirection layer between applications and an arbitrary
underlying file system. This layer is implemented as
a "thin" NFS server that composes encryption atop some
other, locally-available file system. Keys are assigned on a
directory-tree basis. These trees are exposed to the user; the
secure file system consists of a set of one or more top-level
subtrees, each protected by a single key.
When mounting a secure directory tree in CFS, the user
must supply the decryption keys via a pass-phrase. These
remain in force until the user consciously revokes them.
This is an explicit design decision, intended to reduce the
burden on users of the system. In exchange, the security of
the system is weakened by vesting long-term authority with
the laptop. CFS also provides for the use of smart cards
to provide keys [2], but they too are fetched at mount time
rather than periodically. Even if fetched periodically, a user
would be tempted to leave the smart card in the machine
most of the time.
CFS' overhead can be substantial. One way to implement
a cryptographic file system more efficiently is to place
it in the kernel, avoiding cross-domain copies. This task is
simplified by a stackable file system infrastructure [11, 27].
Stackable file systems provide the ability to interpose layers
below, within, or above existing file systems, enabling
incremental construction of services.
FiST [33] is a language and associated compiler for constructing
portable, stackable file system layers. We use FiST
in our own implementation of ZIA, though our use of the virtual
memory and buffer cache mechanisms native to Linux
would require effort to port to other operating systems. We
have found FiST to be a very useful tool in constructing file
system services.
Cryptfs is the most complete prior example of a stacking
implementation of encryption. It was first implemented as
a custom-built, stacked layer [32], and later built as an example
use of FiST. Cryptfs, in both forms, shares many
of the goals and shortcomings of CFS. A user supplies his
key only once; thereafter, the file system is empowered to
decrypt files on the user's behalf. Cryptfs significantly outperforms
CFS, and our benchmarks show Cryptfs in an even
better light. This is primarily due to the replacement of
Blowfish [29] with Rijndael [8].
Microsoft Windows 2000 provides the Encrypting File System
(EFS) [19]. While EFS solves many administrative issues,
it is essentially no different from CFS or Cryptfs. A
single password serves as the key-encrypting key for on-disk,
per-file keys. EFS still depends on screen saver or suspension
locks to revoke this key-encrypting key, rather than
departure of the authorized user. The user may disable the
screen saver or suspension locks after finding them intrusive.
Anecdotally, we have found that many Windows 2000
laptop users have done exactly that.
In addition to file system state, applications often hold
sensitive data in their address spaces. If any of this state is
paged out to disk, it will be available to an attacker much as
an unencrypted file system would be. Provos provides a system
for protecting paging space using per-page encryption
with short lifetimes [26]. ZIA is complementary to this
system; ZIA protects file system state, while Provos' system
protects persistent copies of application address spaces.
Several efforts have used proximity-based hardware tokens
to detect the presence of an authorized user. Landwehr [16]
proposes disabling hardware access to the keyboard and
mouse when the trusted user is away. This system does not
fully defend against physical possession attacks, since the
contents of disk and possibly memory may be inspected at
the attacker's leisure. Similar systems have reached the commercial
world. For example, the XyLoc system [10] could
serve as the hardware platform for ZIA's authentication token.
Rather than use passwords or hardware tokens, one could
instead use biometrics. Biometric authentication schemes
intrude on users in two ways. The first is the false-negative
rate: the chance of rejecting a valid user [25]. For face
recognition, this ranges between 10% and 40%, depending
on the amount of time between training and using the recognition
system. For fingerprints, the false-negative rate can
be as high as 44%, depending on the subject. The second intrusion
stems from physical constraints. For example, a user
must touch a special reader to validate his fingerprint. Such
burdens encourage users to disable or work around biometric
protection. A notable exception is iris recognition. It can
have a low false-negative rate, and can be performed unobtrusively
[23]. However, doing so requires three cameras, an
expensive and bulky proposition for a laptop.
6. CONCLUSION
Because laptops are vulnerable to theft, they require additional
protection against physical attacks. Without such
protection, anyone in possession of a laptop is also in possession
of all of its data. Current cryptographic file systems
do not offer this protection, because the user grants
the file system long-term authority to decrypt on his behalf.
Closing this vulnerability with available mechanisms, such as
passwords, secure hardware, or biometrics, would place unpleasant
burdens on the user, encouraging him to forfeit security
entirely.
This paper presents our solution to this problem: Zero-Interaction
Authentication, or ZIA. In ZIA, a user wears an
authentication token that retains the long-term authority to
act on his behalf. The laptop, connected to the token by a
short-range wireless link, obtains this authority only when it
is needed. Despite the additional communication required,
this scheme imposes an overhead of only 9.3% above the
local file system for representative workloads; this is indistinguishable
from the costs of simple encryption.
If the user leaves, the laptop encrypts any cached file system
data. For the largest buffer cache on our hardware, this
process takes less than five seconds, less time than would be
required for a nearby thief to examine data. Once the user
is back in range, the file system is restored to its pre-departure
state within six seconds. The user never notices a performance
loss on return. ZIA thus prevents physical possession
attacks without imposing any performance or usability burden.
We are currently extending ZIA's model to system services
and applications [24]. By protecting application state
and access to sensitive services, ZIA can protect the entire
machine, not just the file system, from attack.
Acknowledgements
The authors wish to thank Peter Chen, who suggested the
recovery time metric, and Peter Honeyman, for many valuable
conversations about this work. Mary Baker, Landon
Cox, Jason Flinn, Minkyong Kim, Sam King, and James
Mickens provided helpful feedback on earlier drafts.
This work is supported in part by the Intel Corporation;
Novell, Inc.; the National Science Foundation under grant
CCR-0208740; and the Defense Advanced Research Projects Agency
(DARPA) and Air Force Materiel Command, USAF, under
agreement number F30602-00-2-0508. The U.S. Government
is authorized to reproduce and distribute reprints for Governmental
purposes notwithstanding any copyright annotation
thereon. The views and conclusions contained herein
are those of the authors and should not be interpreted as
necessarily representing the official policies or endorsements,
either expressed or implied, of the Intel Corporation; Novell,
Inc.; the National Science Foundation; the Defense Advanced
Research Projects Agency (DARPA); the Air Force
Research Laboratory; or the U.S. Government.
7.
--R
Scale and performance in a distributed file
Wormhole detection in wireless ad hoc networks.
Personal secure booting.
An architecture for multiple file system types in Sun UNIX.
Association Summer Conference
Protecting unattended computers
Computer Security Applications Conference
cryptographic devices resilient to capture.
http://www.
howitworks/security/encrypt.
design for a smart watch with a high resolution
Symposium on Wearable Computers
National Institute of Standards and Technology.
Computer data authentication.
encryption for authentication in large networks of
system for public and personal use.
The case for transient
An introduction to evaluating biometric
Encrypting virtual memory.
of the Ninth USENIX Security Symposium
Evolving the vnode interface.
USENIX Association Conference
Integrating security in a large
distributed system.
Description of a new variable-length key
File system
Measurement and Modeling of Computer Systems
pages 203-13
Secure coprocessors in
A stackable vnode level encryption file system.
--TR
A fast file system for UNIX
Scale and performance in a distributed file system
Integrating security in a large distributed system
A logic of authentication
A cryptographic file system for UNIX
File-system development with stackable layers
Distance-bounding protocols
BITS: a smartcard protected operating system
File system aging: increasing the relevance of file system benchmarks
Using encryption for authentication in large networks of computers
An Introduction to Evaluating Biometric Systems
An Iris Biometric System for Public and Personal Use
Personal Secure Booting
Oblivious Key Escrow
Description of a New Variable-Length Key, 64-bit Block Cipher (Blowfish)
Application Design for a Smart Watch with a High Resolution Display
Protecting unattended computers without software
Networked Cryptographic Devices Resilient to Capture
--CTR
Shwetak N. Patel , Jeffrey S. Pierce , Gregory D. Abowd, A gesture-based authentication scheme for untrusted public terminals, Proceedings of the 17th annual ACM symposium on User interface software and technology, October 24-27, 2004, Santa Fe, NM, USA
Brian D. Noble , Mark D. Corner, The case for transient authentication, Proceedings of the 10th workshop on ACM SIGOPS European workshop: beyond the PC, July 01-01, 2002, Saint-Emilion, France
Kenta Matsumiya , Soko Aoki , Masana Murase , Hideyuki Tokuda, A zero-stop authentication system for sensor-based embedded real-time applications, Journal of Embedded Computing, v.1 n.1, p.119-132, January 2005
Andrew D. Wilson , Raman Sarin, BlueTable: connecting wireless mobile devices on interactive surfaces using vision-based handshaking, Proceedings of Graphics Interface 2007, May 28-30, 2007, Montreal, Canada
Naveen Sastry , Umesh Shankar , David Wagner, Secure verification of location claims, Proceedings of the ACM workshop on Wireless security, September 19-19, 2003, San Diego, CA, USA
Shelley Zhuang , Kevin Lai , Ion Stoica , Randy Katz , Scott Shenker, Host mobility using an internet indirection infrastructure, Wireless Networks, v.11 n.6, p.741-756, November 2005
Mark D. Corner , Brian D. Noble, Protecting applications with transient authentication, Proceedings of the 1st international conference on Mobile systems, applications and services, p.57-70, May 05-08, 2003, San Francisco, California
Yih-Chun Hu , Adrian Perrig, A Survey of Secure Wireless Ad Hoc Routing, IEEE Security and Privacy, v.2 n.3, p.28-39, May 2004
Srdjan Capkun , Jean-Pierre Hubaux, BISS: building secure routing out of an incomplete set of security associations, Proceedings of the ACM workshop on Wireless security, September 19-19, 2003, San Diego, CA, USA
Levente Buttyn , Jean-Pierre Hubaux, Report on a working session on security in wireless ad hoc networks, ACM SIGMOBILE Mobile Computing and Communications Review, v.7 n.1, January
Erez Zadok , Rakesh Iyer , Nikolai Joukov , Gopalan Sivathanu , Charles P. Wright, On incremental file system development, ACM Transactions on Storage (TOS), v.2 n.2, p.161-196, May 2006
Julie Thorpe , P. C. van Oorschot , Anil Somayaji, Pass-thoughts: authenticating with our minds, Proceedings of the 2005 workshop on New security paradigms, September 20-23, 2005, Lake Arrowhead, California
Shelley Zhuang , Kevin Lai , Ion Stoica , Randy Katz , Scott Shenker, Host Mobility Using an Internet Indirection Infrastructure, Proceedings of the 1st international conference on Mobile systems, applications and services, p.129-144, May 05-08, 2003, San Francisco, California | mobile computing;stackable file systems;transient authentication;cryptographic file systems |
570663 | On the interdependence of routing and data compression in multi-hop sensor networks. | We consider a problem of broadcast communication in a multi-hop sensor network, in which samples of a random field are collected at each node of the network, and the goal is for all nodes to obtain an estimate of the entire field within a prescribed distortion value. The main idea we explore in this paper is that of jointly compressing the data generated by different nodes as this information travels over multiple hops, to eliminate correlations in the representation of the sampled field. Our main contributions are: (a) we obtain, using simple network flow concepts, conditions on the rate/distortion function of the random field, so as to guarantee that any node can obtain the measurements collected at every other node in the network, quantized to within any prescribed distortion value; and (b), we construct a large class of physically-motivated stochastic models for sensor data, for which we are able to prove that the joint rate/distortion function of all the data generated by the whole network grows slower than the bounds found in (a). A truly novel aspect of our work is the tight coupling between routing and source coding, explicitly formulated in a simple and analytically tractable model---to the best of our knowledge, this connection had not been studied before. | Appendix
B)), where the joint rate/distortion function is the minimum number of bits that should be transported through the network to solve the transmission problem we are interested in.
C. Contributions and Paper Organization
The main ideas of this paper are: 1) to define a model for the sensor data and derive the resulting scaling laws; and 2) to combine routing and data compression to prevent congestion in highly populated sensor networks. Our main results are:
(a) The proof of the existence of routing algorithms and source codes whose total traffic is essentially the joint rate/distortion function of all the samples in the field. This would prove that, even under decentralization constraints, classical source codes can still achieve optimal compression efficiency. Furthermore, attaining that optimal performance requires a number of transmissions that is sub-linear in the number of nodes in the network.
(b) The proof that this joint rate/distortion function grows slowly with network size, under some mild regularity conditions on the random field. That is, if the average distortion per sample is kept constant, the field generates a bounded amount of information, independently of its size. And if the total distortion is kept constant, the growth is only logarithmic in the number of nodes, well below the total transport capacity of the network, which grows like the square root of the number of nodes (as will be argued in Section I-A, in agreement with the result in [9]).
Our paper shows that performing routing and data compression in a combined fashion can prevent congestion; however, several questions still remain to be answered: Is there an optimal strategy for visiting the nodes so as to minimize the effective data flow? What is the optimum trade-off between routing delay and traffic? These questions, though practically very important, are outside the scope intended for this paper.
The rest of this paper is organized as follows. In Section IV we compute bounds on the transport capacity of our network, and we use these bounds to impose constraints on the total traffic that can flow through the network. Then, in Section V, we propose a model for the generation of sensor data, based on which we prove that indeed the amount of data generated by the network is well below network capacity. Numerical examples and concluding remarks are presented in Sections VII and VIII respectively.
II. INDEPENDENT ENCODERS
To provide the sought conditions, the first thing we need to do is find out how much traffic this network generates. Clearly, the answer depends on the statistics of the sensed data and on the quantization/compression technique used. In standard communication networks that support the transmission of analog sources (voice, images and video), every information source independently encodes and compresses the data, which are then routed through a high-speed core network to the destination (see Fig. 2). Networking issues and physical aspects of the information processing are naturally kept separated in the traditional network structure in Fig. 2. It is natural to ask how the bits generated by the sensors would increase with the network size if the sensors operate as independent encoders, following the traditional structure discussed above.

Fig. 2. Traditional network setting (quantization and compression at the sources, routing in the core network).
For illustration purposes we can consider a simple example: suppose that each sample is uniform in the range $[-A, A]$, that each node uses a scalar quantizer with $R$ bits of resolution (i.e., the quantization step is $\Delta = 2A\,2^{-R}$), and that the distortion is measured in the mean-square sense. A well known result from basic quantization theory states that on this particular source the average distortion achieved by such a quantizer (also called operational distortion-rate function) is [6]:
$$D(R) = \frac{\Delta^2}{12} = \frac{A^2}{3}\,2^{-2R}.$$
The total distortion on the entire vector of $n$ samples is $n\,D(R)$, and hence, solving for $R$, we derive that to maintain a total distortion $D$ over the entire network each sample requires $R = \frac{1}{2}\log_2\!\left(\frac{nA^2}{3D}\right)$ bits. As a result, the total amount of traffic generated by the whole network scales like $\Theta(n \log n)$ in network size. Interestingly, even using optimal vector quantizers at each node, if the compression of the node samples is performed without taking into consideration the statistics of the other sensors' measurements, the scaling law is still $\Theta(n \log n)$: in other words, one could certainly reduce the number of bits generated for a fixed distortion level, but this reduction would only affect constants hidden by the big-oh notation, and the scaling behavior of network traffic would remain unchanged [3] (examples of analyses of this type can be found in [14, Sec. 3] and [19, Sec. 7]). In fact, even if each node utilizes an optimal vector quantizer to compress jointly a sequence of subsequent samples, high-resolution methods [6] show that the operational distortion-rate function still decays as $2^{-2R}$, with a source-specific constant that depends on the differential entropy of the joint density of the samples collected at the node. Hence, the node distortion-rate function for a general source has a source-specific factor replacing the $A^2/3$ of the example discussed above, but the dependence on $R$ is still through the exponential term $2^{-2R}$.
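The scalar-quantizer example above can be checked numerically. The sketch below (with illustrative values $A = 1$ and $R = 8$, our own choices) verifies the $\Delta^2/12$ distortion law empirically and shows the $\tfrac{1}{2}\log_2 n$ growth of the per-node rate needed to keep the total distortion fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
A, R = 1.0, 8                      # illustrative: samples uniform on [-A, A], R bits
delta = 2 * A / 2**R               # quantization step

x = rng.uniform(-A, A, 200_000)
xq = delta * (np.floor(x / delta) + 0.5)   # uniform mid-cell quantizer
mse = np.mean((x - xq) ** 2)
print(mse, delta**2 / 12)          # empirical MSE matches delta^2/12

# Per-node rate keeping the TOTAL distortion over n samples at D_tot:
#   n * (A^2/3) * 2^(-2R) = D_tot  =>  R(n) = 0.5*log2(n*A^2/(3*D_tot))
def rate_per_node(n, D_tot, A=1.0):
    return 0.5 * np.log2(n * A**2 / (3 * D_tot))

for n in (100, 10_000, 1_000_000):
    print(n, rate_per_node(n, D_tot=1e-2))   # grows like (1/2) log2 n
```

Multiplying the per-node rate by $n$ gives the $\Theta(n \log n)$ total traffic of independent encoders.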
Once we have determined how much data our particular coding strategy (independent quantizers at each node) produces, we need to know if the network has enough capacity to transport all that data. For independent encoders the answer is no [9]. To see why, consider a partition of the network as shown in Fig. 3.
Fig. 3. $n$ nodes are spread uniformly over the unit square ($n$ large). Take a differential volume of size $\epsilon \times \epsilon$ ($\epsilon$ small). With high probability, the number of nodes in a differential volume is $\Theta(n\epsilon^2)$, and so the number of nodes in a strip as shown in the figure is $\Theta(n\epsilon)$. Since the total number of nodes is $n$, we must have $\epsilon = \Theta(1/\sqrt{n})$ for each differential volume to contain $\Theta(1)$ nodes (because the area of the unit square is the product of the areas of two strips as shown in the figure, one horizontal and one vertical), and hence the number of nodes in a strip as shown is $\Theta(\sqrt{n})$.
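The counting argument in Fig. 3 is easy to reproduce in simulation (the sizes below are arbitrary illustrative choices): dropping $n$ uniform points in the unit square, the number falling in a central vertical strip of width $1/\sqrt{n}$ concentrates around $\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (1_000, 10_000, 100_000):
    pts = rng.uniform(0.0, 1.0, size=(n, 2))       # n nodes in the unit square
    eps = 1.0 / np.sqrt(n)                          # strip width Theta(1/sqrt(n))
    in_strip = np.abs(pts[:, 0] - 0.5) < eps / 2    # vertical strip about x = 1/2
    print(n, int(in_strip.sum()), round(np.sqrt(n), 1))
```

The expected count is exactly $n \cdot \epsilon = \sqrt{n}$, with fluctuations of order $n^{1/4}$.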
In the sensor broadcast problem of interest in this work, all the nodes in the network must receive information about the measurements collected by all other nodes. As a result, all the traffic generated on the left portion of the network must be carried to the right, and all the traffic generated on the right portion of the network must be carried to the left. That is, according to our calculation above, the $\Theta(\sqrt{n})$ nodes within the strip marked in Fig. 3 must share the load of moving bits across this network cut. But since the links present on the strip have constant capacity, the capacity of this cut cannot be larger than $O(\sqrt{n})$. From the max-flow/min-cut theorem [4, Ch. 27], we know that the value of any flow in this network is upper bounded by the capacity of any network cut, and therefore the total transport capacity of this network cannot be higher than $O(\sqrt{n})$.
And now we see what the problem is: $\Theta(n \log n)$ bits must go across a cut of capacity $O(\sqrt{n})$. That is, even optimal vector quantization strategies cannot compress the data enough so that the network can carry it: the network does not scale up.¹

Fig. 4. Coding with side information (encoder, decoder, and side information $Y$ available at the decoder).
A. Correlated Samples
The scaling analysis for independent encoders presented above ignores one fundamental aspect: increasing correlations
in sensor data as the density of nodes increases. And indeed, if the data is so highly correlated that all sensors
observe essentially the same value, at least intuitively it seems clear that almost no exchange of information at all
is needed for each node to know all other values: knowledge of the local sample and of the global statistics already
provide a fair amount of information about remote samples. And this naturally raises the question: are there other
coding strategies that can compress sensor data enough?
III. DEPENDENT ENCODERS
The idea that coding the sensors' data by exploiting the correlation among the samples can prevent network congestion was first introduced in [13], [20], [14]. Specifically, in [13], [20], [14] the authors proposed to compress the correlated samples separately at each node by means of distributed source coding techniques. In this section we first discuss distributed source coding and then introduce the novel approach we propose, which consists in combining multi-hop routing with data compression.
A. Distributed Source Coding
The idea of distributed source coding was first introduced by Slepian and Wolf [18], who quantified the number of bits that are necessary to encode a source when the receiver has side information on the source (see Fig. 4). In the sensor network broadcast problem, the measurements of the other sensors are themselves side information that the receiver will have available [13], [20], [14]. Hence, each node can quantize the data considering that the side information of the other samples will also be utilized by the decoder.
In [14], for the scenario described in Fig. 5, it was shown that one can reduce the number of bits per node per square meter to a vanishing quantity.
The result in [14] provides the first theoretical evidence that coding techniques that exploit the dependence among the sensors' samples are key to counteract the vanishing throughput of multi-hop networks. In fact, even if the transport capacity per node per square meter is vanishing, so is the number of non-redundant bits that each node generates and, furthermore, the latter vanishes at a faster rate than the throughput does.
¹ Techniques based on flows and cuts to analyze information-theoretic capacity problems in networks have been proposed in [1], [5, Ch. 14.10]. In this context, those techniques provide an alternative interpretation of the Gupta/Kumar results [9].
Fig. 5. The sensor network setting in [14]: N correlated sensors communicate through a multi-hop network of N nodes with a central control.
Even though multi-hop sensor networks appear to be the "killer application" for distributed source coding, there are several reasons why the approach in [13], [20], [14] can be improved: 1) so far, the theoretical evidence that congestion can be avoided through distributed source coding is limited to a restrictive setting: in fact, the bounds in [14] were derived for the situation described in Fig. 1, where the sensors are in a one-dimensional space and the relays are in a two-dimensional area, which suggests that the nodes are physically separated, even if they are exactly in the same area; 2) the approach of distributed source coding requires complex encoders to achieve significant compression gains. For example, the proof developed in [14] involves the use of codes for the problem of rate/distortion with side information [5, Ch. 14.9], which are efficient when all codewords are nearly uniformly distributed, and for highly correlated data this is true only when the vectors have large size. High-dimensional vector quantizers are not practical, and in short-block settings the gains obtained are in general less significant. Last but not least: it is true that the encoding is performed without sharing one single bit of data among the nodes. However, there is no real need to impose such a constraint. The separation of source coding and routing in different layers of the communication system architecture does not reflect a physical separation of functionalities in the multi-hop sensor network setting of Fig. 1. After all, the trademark of multi-hop networks is that power-efficient transmission is achieved when the data travel through several intermediate close-by nodes before reaching their final destination. Hence, from the point of view of an engineer engaged in the design of a practical sensor network, this fact creates an opportunity for using simpler compression techniques that cannot be missed. In fact, if the neighboring nodes jointly compress the data in their queues before forwarding them remotely, then, when the network is dense and the field is smooth, they can drastically reduce the number of bits per sample while transmitting with the same or even greater precision. To accomplish this gain, the nodes can use any of a variety of standard sequence-compression techniques, with no need to resort to highly complex distributed source coding techniques. This is the truly novel and interesting aspect of our paper: the combination of classical source coding methods and routing algorithms which, to the best of our knowledge, has not been explored by other authors.
B. Routing and Source Coding
The scheme we propose is based on the idea described in Fig. 6: we use classical source codes as the samples travel, jointly re-encoding the data in the queues as they hop around the network and removing bits of information that are redundant.
In general, the $i$th node will have to transmit to the $j$th node a set of samples. If both the $i$th and the $j$th node have already received a common set of samples, the encoder at the $i$th node needs to pass to the encoder at the $j$th node a number of bits which is at least the conditional rate/distortion function of the new samples given the common ones. This said, effectively designing the network flow using this rate/distortion lower bound for every data flow becomes tedious, for it requires defining the level of distortion allowed for every set of data exchanged, which is a non-trivial function of the total allowed distortion $D$.

Fig. 6. Routing and source coding.
A much simpler approach is to refer to the joint entropy of the samples, which are discretized by quantizing them finely at every node with a quantizer of cell size $\Delta$. The entropy induced on the codewords by quantization of the source is denoted by $H_\Delta$. Because the shortest codeword length that uniquely represents a set of discrete data is equal to the data entropy (see Appendix B), we can determine the number of bits transferred in every flow by using the joint entropy of the data that ought to be transmitted; $\Delta$ can be chosen so as to satisfy the global distortion constraint. Obviously, this approach is suboptimal because the entropy of the quantized data is larger than the rate/distortion function, since the rate/distortion function is by definition a lower bound on the number of bits necessary to represent the data. However, as we will see in Section VI, in the worst-case scenario, which is the case of Gaussian samples, the entropy grows on the same scale as the rate/distortion function with respect to $n$.
For now, we can assume that the quantized samples are entropy-coded, so that for every flow we transmit the data using the shortest codeword length (see Appendix B), which is equal to the data entropy. In practice, ideal entropy coding is not a viable solution, but a variety of alternatives used in sequence compression are available. In fact, contrary to the complicated vector quantizers required by distributed source coding, when routing and source coding are combined, the processing at each node can be done using any of the standard compression techniques normally used to compress sequences from analog sources (predictive encoding, transform coding such as JPEG, etc.). The difficulty in applying such techniques resides only in implementing the algorithms in a distributed fashion, an interesting subject dealt with elsewhere [15], [16].
In Section IV, we provide an example of a possible network flow which is not by any means optimized, but which allows us to set up a case study where we can formally establish the condition under which the generated traffic can be transported by the network. The condition should be satisfied by the joint entropy of the data (for the suboptimum approach) or by the joint rate/distortion function (when ideal optimum quantization takes place). Then, in Section V, we derive the asymptotic rate/distortion function of the data and prove that, for a large class of sensor data models, the condition found is satisfied in the limit as $n \to \infty$.
IV. TRANSPORT CAPACITY
Using simple network flow concepts, in Section I-A we argued that an upper bound on the transport capacity of a network of size $n$ is $O(\sqrt{n})$. Our goal in this section is to construct one particular flow: from the amount of data that this flow needs to push across the network, and from the upper bound on the capacity of the network, we derive a constraint on the amount of data that the source can generate if it is to be broadcast over the whole network.
A. A Network Flow - The Case of a Regular Grid
We consider first the case of a regular grid, as it naturally precedes the construction for a general random grid. As anticipated, we will construct our conditions on the network flow using the entropies of the quantized data.
We start with the case of four nodes and then generalize it to the entire network with a recursive algorithm. Two examples (among many more possibilities) of transmission schemes are shown in Fig. 7.
[Fig. 7 transmission labels: (a) 1; (a) 3; (b) 2|1; (b) 4|3; (c) 1,2; (d) 3|1,2; (f) 1,2,3.]
Fig. 7. Consider a network with 4 nodes, each of which observes a variable $X_i$ ($i = 1, \dots, 4$), with joint entropy $H(X_1, X_2, X_3, X_4)$. Two possible ways of scheduling transmissions are shown. The notation used in the figure is that (a) is the first transmission, (b) the second, (c) the third, and so on; if two transmissions have the same letter label, they can be performed in parallel; $i|j$ means that the sample of node $i$ is encoded when knowledge of the sample of node $j$ is available. Using chain rules for entropies, we see that the transmission schedule of the left figure generates a total traffic essentially equal to the joint entropy, and takes 8 time slots to complete. In the schedule of the right figure we generate more traffic, but we only require 4 time slots to complete all transmissions.
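The entropy bookkeeping behind Fig. 7 is the chain rule $H(X_1,\dots,X_4) = \sum_i H(X_i \mid X_1,\dots,X_{i-1})$. A sketch with a toy Gaussian source (the covariance below is an arbitrary illustrative choice, not from the paper) shows that the conditional terms telescope to the joint entropy, and that conditioning on an already-received sample strictly reduces the cost of the next transmission:

```python
import numpy as np

rho = 0.9
idx4 = np.arange(4)
Sigma = rho ** np.abs(np.subtract.outer(idx4, idx4))  # toy correlated samples

def h(idx):
    """Differential entropy (nats) of the Gaussian sub-vector X_idx."""
    S = Sigma[np.ix_(idx, idx)]
    return 0.5 * np.log((2 * np.pi * np.e) ** len(idx) * np.linalg.det(S))

def h_cond(i, given):
    """h(X_i | X_given) via the chain rule h(A, B) - h(B)."""
    return h(given + [i]) - h(given)

joint = h([0, 1, 2, 3])
chain = h([0]) + h_cond(1, [0]) + h_cond(2, [0, 1]) + h_cond(3, [0, 1, 2])
print(joint, chain)               # equal: the schedule pays the joint entropy
print(h([1]), h_cond(1, [0]))     # conditioning reduces the per-hop cost
```

The same arithmetic, applied to the two schedules of Fig. 7, yields their respective traffic totals.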
We see from the examples in Fig. 7 that, at the expense of increased transmission delays, we can communicate all samples to all nodes while generating an amount of traffic which is essentially the same as if one node had collected all the samples, encoded them jointly, and broadcast this information to all other nodes. Alternatively, by sacrificing some compression efficiency, it is also possible to incur lower transmission delays. That is, there is an inherent tradeoff between bandwidth use and decoding delay, and these two quantities are linked together by the routing strategy employed.
For the entire network we can construct a flow recursively. For simplicity, assume $n = 4^k$ for some integer $k$. When $k = 0$, we have the trivial case of a network of size 1, in which all nodes know the value of all samples without any transfer of information.
Consider now a partition of the nodes into 4 groups of size $n/4$ each: UL, containing all the nodes in the upper-left corner, and similarly UR, LL and LR for the upper-right, lower-left and lower-right corners; $X_S$ denotes the set of variables observed by the nodes in a set $S$. This partition is illustrated in Fig. 8.

Fig. 8. Partition of a network of size $n$ into four subnets (UL, UR, LL, LR).
We have that UL, UR, LL and LR are all subnets of size $n/4$, and so from our recursion we get that all nodes within each subnet know the values of all the quantized variables of that subnet. But now we have reduced our problem to the problem with four nodes considered in Fig. 7, and we know that exchanging a number of bits essentially equal to the joint entropy of the four subnets, in a total of 8 transmissions across cuts (plus transmissions to spread data within subnets), is enough to ensure that every node in the network of size $n$ knows every quantized value. This construction is illustrated in Fig. 9.
Fig. 9. Since each node on the boundary of a cut has knowledge of all samples within its subnet, each one of them can encode all these samples jointly and send its share of this data across the cut. Then, in additional transmissions, all these pieces can be spread throughout the subnet to reach all nodes.
The idealized system, where all the data are quantized optimally, without error propagation and with the minimum number of bits necessary to represent the network samples, must still generate a total traffic at least on the order of the joint description length of all the samples, and still requires on the order of $n$ transmissions to complete the broadcast.
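To see why the recursive construction stays within a linear number of transmissions, one can solve a cost recurrence of the form $T(m) = 4\,T(m/4) + c\sqrt{m}$, where the $c\sqrt{m}$ term models the per-level cross-cut exchange and flooding cost; the constant $c$ and the exact form of that term are our own simplification, not the paper's. The recurrence is leaf-dominated, so $T(m) = \Theta(m)$:

```python
# Hypothetical cost recurrence for the quartering construction:
#   T(m) = 4*T(m/4) + c*sqrt(m),  T(1) = 0.
# Closed form: T(m) = c*(m - sqrt(m)), hence T(m)/m -> c (linear growth).
def T(m, c=8.0):
    if m <= 1:
        return 0.0
    return 4 * T(m // 4, c) + c * m**0.5

for k in (3, 6, 9):
    m = 4**k
    print(m, T(m) / m)   # ratio approaches c from below
```

By the master theorem ($a = 4$, $b = 4$, $f(m) = \sqrt{m}$, and $\log_b a = 1 > 1/2$) the recursion is dominated by its leaves, which is what the numbers show.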
B. A Network Flow - The Case of a Random Grid
We next address the general case where the sensors are located in random positions. The only difference with the case of a regular grid considered above is that, in the random case, there is a non-zero probability that in the recursive definition of the flow we may encounter an empty subnet. In that case, however, trivial modifications take care of the problem. The basic idea is that we can divide our network into squares of area $\Theta(\log n / n)$; then, with probability that tends to 1 as $n \to \infty$, uniformly over all such squares, the number of points falling into each square is $\Theta(\log n)$ [12, Ch. 2]. Therefore, in almost all networks with $n$ points, the problem with a random grid is trivially reduced to the problem with a regular grid discussed above, plus a local flood within each square involving only about $\Theta(\log n)$ nodes. More details on the capacity of random networks are presented in [11].
C. Constraints for Network Stability
We know the following facts:
Since all nodes must learn all samples, all the bits describing the field must go across the 4-way cut defined by the partition of Fig. 8.
The capacity of the 4-way cut is $O(\sqrt{n})$.
Using the chain rule, we showed that we are capable of broadcasting all the data by transferring a number of bits close to the joint entropy, which is the minimum number of bits necessary to represent the same data (Appendix B).
Therefore, the minimum requirement we need to satisfy is that the total number of bits to be transported grow more slowly than the cut capacity:
$$R(D) = o(\sqrt{n}), \qquad (5)$$
and, as we are going to show in the following sections, this requirement is satisfied with probability one as $n \to \infty$, for the wide class of data models introduced in Section V.
V. A MODEL FOR SENSOR DATA
In Section IV we saw that, by appropriate routing and re-encoding along routes, we can compress all the data generated by the entire network down to essentially its joint description length. Our goal in this section is to verify that, for reasonable models of sensor data, eqn. (5) is satisfied, so that the broadcast problem can be effectively solved.
To be able to talk about "the rate/distortion function of the data generated by the entire network" we need a model for this data. The main idea that we would like to capture in our models for sensor data is that, if this data corresponds to measurements of a random process with some kind of regularity conditions, then these measurements have to become increasingly correlated as the density of nodes becomes large. We propose to this end a fairly general class of such models, under three assumptions: (a) the data are Gaussian random variables; (b) the correlation among samples is an arbitrary spatially homogeneous function; and (c) as we let the number of nodes in the network grow, the correlation matrix converges to a smooth two-dimensional function.
Assumption (a) is a worst-case scenario as far as compression is concerned, as a consequence of Theorem 4 in Appendix B. Spatial stationarity, even though not totally general, is a technical assumption common to many statistical analyses, and it captures well the local properties of random processes.
A. Source Model
This section establishes the basic model upon which we will base our asymptotic analysis. Let $X(t)$ denote the random vector of the samples collected by the sensors at time $t$. Our first assumption is:
(c.I) $X(t) \sim \mathcal{N}(0, \Sigma)$, i.e., $X(t)$ is a spatially correlated Gaussian random vector. The samples are temporally uncorrelated.
The samples are temporally independent if the power spectrum of the process is band-limited in time and the data are sampled at the Nyquist rate. In any case, this is not a restrictive assumption, and further gains in terms of compression could be obtained by exploiting the temporal dependence of the samples. Because of the temporal independence, we will focus on only one vector of samples $X$, and from now on we will drop the time index.
With the mean square error (MSE) as distortion measure, and with the constraint that the total distortion not exceed $D$, we can calculate under assumption (c.I) the rate/distortion function of the network using the reverse water-filling result [5]. Indicating by $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$ the ordered eigenvalues of $\Sigma$, the rate/distortion function is
$$R(D) = \sum_{i=1}^{n} \frac{1}{2}\log\frac{\lambda_i}{D_i}, \qquad D_i = \begin{cases} \theta & \text{if } \theta < \lambda_i \\ \lambda_i & \text{otherwise,} \end{cases} \qquad (7)$$
and $\theta$ is such that $\sum_{i=1}^{n} D_i = D$. For $\theta < \lambda_1$, there exists an index $K$ such that $\lambda_K \geq \theta$ and $\lambda_{K+1} < \theta$, therefore:
$$R(D) = \sum_{i=1}^{K} \frac{1}{2}\log\frac{\lambda_i}{\theta} \qquad (9)$$
and
$$D = K\theta + \sum_{i=K+1}^{n} \lambda_i. \qquad (10)$$
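The reverse water-filling computation is easy to sketch numerically; the eigenvalues and distortion target below are arbitrary illustrative choices:

```python
import numpy as np

def reverse_waterfill(lam, D):
    """Reverse water-filling [5]: find theta with sum(min(lam_i, theta)) = D,
    then R = sum of 0.5*log2(lam_i/theta) over eigenvalues above the level."""
    lo, hi = 0.0, max(lam)
    for _ in range(100):                     # bisection on the water level
        theta = 0.5 * (lo + hi)
        if np.minimum(lam, theta).sum() < D:
            lo = theta
        else:
            hi = theta
    rate = 0.5 * np.log2(np.maximum(lam / theta, 1.0)).sum()
    return theta, rate

lam = np.array([4.0, 2.0, 1.0, 0.25])        # illustrative ordered eigenvalues
theta, R = reverse_waterfill(lam, D=1.0)
print(theta, R)   # components with lam_i <= theta get zero rate
```

Here the water level lands at $\theta = 0.25$, so the smallest component is described with zero rate while the others contribute $\tfrac{1}{2}\log_2(\lambda_i/\theta)$ bits each.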
The rate/distortion function is a function of the eigenvalues of $\Sigma$ only, and $\Sigma$ is formed with the samples of the continuous multivariate function that represents the correlation between the samples taken at two arbitrary points $z_i$, $z_j$ of the network:
$$\Sigma_{ij} = r(z_i, z_j). \qquad (11)$$
The eigenvectors $\phi_k$, with entries $\phi_k(z_i)$, satisfy the following equation:
$$\sum_{j=1}^{n} r(z_i, z_j)\,\phi_k(z_j) = \lambda_k\,\phi_k(z_i). \qquad (12)$$
B. Asymptotic Eigenvalue Distribution
Our derivations in the following try to capture the fact that the process cannot have infinite spatial degrees of freedom. The asymptotic rate/distortion function is obtained using the following two basic steps: 1) we prove that the eigenvalues of the correlation matrix tend to the eigenvalues of the continuous integral equation
$$\int r(z, y)\,\phi(y)\,dy = \lambda\,\phi(z); \qquad (13)$$
2) we provide a model for the kernel of the continuous integral equation (13) which is bandlimited in the spatial frequencies, and this allows us to obtain the asymptotic rate/distortion bound.
As we said, the first step is to rewrite (12) as a quadrature formula which approximates the integral equation (13). For a general integral, there will be quadrature coefficients $w_j$ such that the approximation
$$\int r(z, y)\,\phi(y)\,dy \approx \sum_{j=1}^{n} w_j\, r(z, z_j)\,\phi(z_j) \qquad (14)$$
holds. Since we want to explore the convergence of the eigenvalues of (12), we can set $w_j = 1/n$, in which case the right side of (14) is equivalent to the left side of (12) normalized by $n$. Therefore, when (14) is valid, the left side of (12) normalized by $n$ is approximately equal to the left side of (13), which leads to the following approximation:
$$\lambda_k(\Sigma)/n \approx \lambda_k. \qquad (15)$$
The error in the approximation (14) determines the error in (15). The two errors are related by the following result, derived from [10, Sec. 5.4]:
Lemma 1: Denoting by $\lambda$ an arbitrary eigenvalue of (13) and by $\phi$ the corresponding normalized eigenvector, for sufficiently large $n$ there exists an eigenvalue $\lambda_k(\Sigma)/n$ of $\Sigma/n$ whose distance from $\lambda$ is bounded in terms of the quadrature error (16), the quadrature error being the difference between the two sides of (14).
Assuming that:
(c.II) for any continuous $\phi$ the grid is such that the quadrature error vanishes as $n \to \infty$;
then (16) implies that $\lambda_k(\Sigma)/n \to \lambda$ as $n \to \infty$.
Condition (c.II) in Lemma 1 is the operational condition on the distribution of the sensor nodes: the nodes should be distributed in such a way that the quadrature error vanishes as $n \to \infty$.
There is another interesting and intuitively obvious consequence of Lemma 1 and (c.II), which is summarized in the following corollary (easily proved using the bounds in Lemma 1 and the triangle inequality):
Corollary 1: The eigenvalues of the correlation matrices corresponding to two different grids, both satisfying (c.II), are asymptotically the same.
Corollary 1 implies that we can rely on any grid that has the same asymptotic behavior, such as for example a regular lattice, and extrapolate the asymptotic behavior of the eigenvalues from the latter.
C. A Tractable Case
We assume that:
(c.III) the correlation between points in (11) is spatially homogeneous, i.e., $r(z, y) = r(z - y)$.
The consequent structure of $\Sigma$ on a regular grid is known as doubly Toeplitz, i.e., $\Sigma$ is a block Toeplitz matrix with blocks that are Toeplitz themselves. The useful aspect of this model is that the empirical distribution of the eigenvalues of $\Sigma$ converges, under mild assumptions, to the 2-D Fourier spectrum of the correlation function (18), as we see next. Under assumption (c.III) we can define the spectrum of the correlation function as its two-dimensional Fourier transform (18).
Adopting a regular grid of $n$ nodes covering the unit square, the spacing between the nodes is $1/\sqrt{n}$ along each axis. Szegő's theorem [7] establishes that asymptotically the eigenvalues of a Toeplitz matrix converge to the spectrum of the correlation function. Essentially, Szegő's theorem establishes that the eigenfunctions of a homogeneous kernel tend to the Fourier basis of complex exponentials. The result can be generalized to the two-dimensional case when the matrix is doubly Toeplitz.
Before proceeding, the final modelling assumption is:
(c.IV) the spectrum of $r$ is bandlimited with bandwidth $W$, i.e., it vanishes for spatial frequencies larger than $W$.
With (c.IV), we capture the notion that the limit covariance function varies smoothly in space.
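A one-dimensional analogue of the Szegő statement can be checked numerically; the exponential kernel and correlation length below are our own illustrative choices, not the paper's model. The eigenvalues of the $n \times n$ Toeplitz covariance, normalized by $n$, approach the eigenvalues of its circulant (Fourier) approximation:

```python
import numpy as np

n, corr_len = 256, 0.05
t = np.arange(n) / n
R = np.exp(-np.abs(np.subtract.outer(t, t)) / corr_len)  # Toeplitz kernel r(|x - y|)

eig = np.sort(np.linalg.eigvalsh(R / n))[::-1]           # normalized eigenvalues

# Circulant approximation: eigenvalues are DFT samples of the symmetrized
# first row, i.e. samples of the spectrum of the correlation function.
c = np.exp(-np.minimum(t, 1 - t) / corr_len)
spec = np.sort(np.fft.fft(c).real / n)[::-1]

print(eig[:4])
print(spec[:4])   # leading eigenvalues track the leading spectrum samples
```

Both sequences also share the same sum (the normalized trace), so the whole empirical eigenvalue distribution, not just the top of it, is captured by the spectrum.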
D. Rate Distortion Function
The asymptotic rate/distortion function is obtained by replacing the summations in (9) and (10) with integrals over the spatial frequencies. Because the eigenvalues asymptotically become a continuous function of the spatial frequency, the indices in (7) correspond to the points where the spectrum crosses the threshold $\theta$. Let us denote by one set the frequencies where the spectrum exceeds $\theta$, and by another the complementary set within the band. The rate/distortion function is then obtained by integrating over these sets. Because of (c.IV), the areas of both sets are smaller than the area of the spectral support. Thus, we have the following lower bound (also illustrated in Fig. 10):
Fig. 10. One-dimensional illustration of how the lower bound in eqn. (27) is obtained by bounding the integrals that define the distortion in eqn. (25).
Together with (c.IV), this justifies the following upper bound on the rate/distortion function:
$$R(D) = O\!\left(\log \frac{n}{D}\right). \qquad (28)$$
So we see that the total rate/distortion function over the entire network is logarithmic in $n/D$; since $D/n$ is the average distortion per sample, if that is kept fixed then the total amount of traffic generated by the network is upper bounded by a constant, irrespective of network size. Alternatively, if we keep the total distortion $D$ fixed, by considering increasingly large $n$ we let the average distortion per sample tend to zero. Even though the rate/distortion function is an asymptotic bound, the result is very significant, because we can observe that the amount of traffic generated by the entire network grows only logarithmically in $n$, well below the capacity bound in eqn. (5).
VI. QUANTIZATION AND COMPRESSION
In Section IV we constructed an algorithm for the network flow which is based on the joint entropy of the samples, discretized by quantizing them at every node. The data files can be compressed using universal source coding algorithms, such as Huffman coding or simpler suboptimal alternatives like Lempel-Ziv coding. Previously, in Section III-B, we argued that this approach is suboptimum, but that the growth of the entropy has the same behavior as that of the rate/distortion function, which is what we prove next. High-resolution analysis shows that if the samples are individually quantized with a uniform quantizer with cell size $\Delta$, their joint entropy is [6]:
$$H_\Delta \approx h(X_1, \dots, X_n) - n \log_2 \Delta, \qquad (29)$$
where $h(X_1, \dots, X_n)$ is the joint differential entropy of the spatial samples, whose definition is analogous to (4). For a Gaussian $n$-dimensional multivariate process with full-rank covariance matrix $\Sigma$:
$$h = \frac{1}{2}\log_2\!\left((2\pi e)^n \det \Sigma\right), \qquad (30)$$
where $\det \Sigma$ is the determinant of the covariance matrix. Under conditions (c.I) through (c.IV) we have shown that $\Sigma$ becomes singular as $n \to \infty$, with an increasingly large null space. For an $n$-dimensional Gaussian process with a singular covariance matrix having rank $\kappa < n$, the joint density of the samples can be expressed as the product of the densities of an auxiliary set of independent Gaussian random variables, with variances equal to the non-zero eigenvalues of $\Sigma$ (also called principal components), whose number is equal to $\kappa$, and a set of Dirac functions. Consequently, if we denote by $\tilde{\lambda}$ the product of the non-zero eigenvalues and by $\kappa$ the rank of $\Sigma$, the joint entropy of a Gaussian multivariate density can in general be written as:
$$h = \frac{1}{2}\log_2\!\left((2\pi e)^{\kappa}\, \tilde{\lambda}\right). \qquad (31)$$
It is not dif?cult to prove that the high resolution approximation for the general case is:
To determine the size of we can, in agreement with the high-resolution analysis, consider the quantization
error as nearly independent of the signal and from sample to sample (which is a pessimistic assumption) and with
uniform distribution [6]. Thus, to have a total distortion in the order of , the cell size has to be such that
. Therefore, . For , with arguments similar to those used to prove (28), we
can show that under c.IV , while . Hence, from (32), for
in other words, also grows logarithmically with .
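The logarithmic growth just claimed is easy to check numerically. The sketch below is ours, not the paper's code: it evaluates the high-resolution entropy approximation for a covariance of fixed rank k (the eigenvalues, the target distortion, and all names are illustrative assumptions), with the quantizer cell size chosen so that the total distortion stays near the target as n grows.

```python
import math

def quantized_joint_entropy_bits(n, k, eigvals, eps):
    """High-resolution approximation of the joint entropy (in bits) of n
    samples of a Gaussian field whose covariance has fixed rank k, uniformly
    quantized with a cell size chosen for total distortion ~ eps.
    Illustrative sketch; names and values are ours, not the paper's."""
    # total distortion n * Delta^2 / 12 ~ eps  =>  Delta = sqrt(12*eps/n)
    delta = math.sqrt(12.0 * eps / n)
    # differential-entropy part: (1/2) sum_i log2(2*pi*e*lambda_i)
    h = 0.5 * sum(math.log2(2 * math.pi * math.e * lam) for lam in eigvals)
    # rank-limited quantization part: k * log2(1/Delta)
    return h + k * math.log2(1.0 / delta)

eigs = [2.0, 1.0, 0.5]          # assumed non-zero eigenvalues, rank k = 3
H = [quantized_joint_entropy_bits(n, 3, eigs, eps=1e-3) for n in (10**2, 10**3, 10**4)]
growth = [H[i + 1] - H[i] for i in range(2)]
# per decade of n the entropy grows by (k/2)*log2(10) bits, i.e. logarithmically
print(growth)
```

The two increments are identical: the only n-dependent term is k·log2(1/Δ) = (k/2)·log2(n/12ε), so each tenfold increase of n adds (k/2)·log2(10) bits regardless of the eigenvalues.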
A. A simple compression strategy: down-sampling while routing
A simple-minded compression strategy that the nodes can implement is to appropriately down-sample the sensor
measurements as they are spread through the network. This simple strategy allows us to reach, with some approximations,
the same conclusion as our asymptotic analysis. In fact, even though a spatially bandlimited process requires
infinitely many samples to be reconstructed exactly through interpolation, the Nyquist theorem indicates that if condition c.II is
met, we can sample the field with frequency in the and axes respectively. Because the network area is equal to
one, even if we over-sample to reduce the interpolation error at the borders, we need samples. In fact the Nyquist
criterion strictly requires samples per unit area, but border effects, due to the fact that the area is limited, and quantization
errors, which propagate when the missing data are interpolated, can be reduced by over-sampling. However,
the important conclusion is that the number of samples does not grow with . On the other hand, if we constrain the total
distortion error to be , the average distortion per sample has to be . Let us assume that the variance of
each sample is : the interpolated samples will have distortion greater than or equal to that of the
non-interpolated ones, which implies that the non-interpolated samples have to be quantized at a rate that is at least:
Therefore, the total amount of traffic produced by the network will be in the order of:
q.e.d.
VII. NUMERICAL EXAMPLES
In this section we provide numerical evidence that validates our asymptotic claims.
The first numerical example is aimed at corroborating Corollary 1. We assumed the area of the network is normalized
to one and that the function defined in (18) is:
where and it can be easily verified that condition c.II is met. In Fig. 11 we show samples obtained
over the regular grid that are , and in Fig. 12 we show the eigenvalues of the matrix , whose entries are
where in one case , are on a random grid (red line) and in the other case
they are on a regular lattice (blue line): we can observe that they are nearly identical in both cases and that the support
of the non-zero eigenvalues does not grow with while their values increase proportionally to it. Finally, in Fig. 13,
for the same covariance model in (38) and total distortion , we show the rate-distortion function calculated
numerically using the inverse water-filling in (6). As expected, the growth is clearly logarithmic.
Fig. 11. Samples where is defined in (38), obtained over the regular grid of sensors.
Fig. 12. Eigenvalues of for various values of .
VIII. CONCLUSIONS
In recent work on the transport capacity of large-scale wireless networks, it has been established that the per-node
throughput of these networks vanishes as the number of nodes becomes large. This result poses a serious challenge
for the design of such networks; some have even argued that large networks are not feasible, precisely because of this
reason [9]. Previous work however pointed out that, in the context of sensor networks, the amount of information
Fig. 13. Rate distortion function for versus number of nodes in the network.
generated by each node is not a constant, but instead decays as the density of sensing nodes increases; this was
illustrated with an example based on the transmission of samples of a Brownian process, with arbitrarily low distortion
(even with vanishing per-node throughput), by means of using distributed source coding techniques [14].
In this work we have shown an alternative approach to work around the vanishing per-node throughput constraints
of [9]. This new approach is not based on distributed coding techniques, but instead is based on the use of classical
source codes combined with suitable routing algorithms and re-encoding of data at intermediate relay nodes. To the
best of our knowledge, these are the ?rst results in which interdependencies between routing and data compression
problems are captured in a system model that is also analytically tractable. And a key (and enabling) step in our
derivation was the construction of a family of spatial processes satisfying some fairly mild (and easily justi?able from
a physical point of view) regularity conditions, for which we were able to show that the amount of data generated
by the sensor network is well below its transport capacity. This provides further evidence that large-scale multi-hop
sensor networks are perfectly feasible, even under the network model considered in [9].
I. ENTROPY AND CODING
The entropy of a random variable with probability mass function is defined as:
For multivariate random variables the generalization is straightforward:
Note also that the entropy of a vector can be decomposed according to the so-called chain rule, resulting from the
iterated application of the chain rule for probability :
which was used in Figure 7. The importance of the definition of entropy lies in the fact that it provides a very accurate
answer to the following question regarding discrete data sources (i.e. sources producing symbols drawn from a discrete alphabet):
What is the minimum number of bits necessary to represent the data from a discrete source so that they can be
reconstructed without distortion?
The answer is given by the following theorem [5, Ch. 5]:
Theorem 1: The expected length of any instantaneous D-ary code for a random variable is:
The proof of the theorem is based on the existence of a coding technique, Huffman coding, which is known to achieve
the entropy within one bit if one symbol is encoded at a time and, if multiple symbols are encoded together, the efficiency of
Huffman coding tends to 100% [5]. This theory cannot be directly generalized to handle the case of analog sources
because their entropy is infinite even if the signal is a discrete-time sequence. In fact, the source signal can take any
real value, so that each sample still requires infinite precision to be represented exactly. However, once a certain level
of distortion is accepted, the minimum number of bits necessary to represent the source can be calculated just as
rigorously as in the case of discrete sources, resorting to the parallel theory for analog sources, which is called Rate
Distortion Theory.
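Theorem 1's bound H ≤ L < H + 1 can be illustrated in a few lines. This is our own sketch, not from the paper: it uses the standard fact that the expected Huffman codeword length equals the sum of the merged probabilities over the internal nodes of the Huffman tree, and a dyadic source for which the bound is met with equality.

```python
import heapq
import math

def entropy_bits(p):
    """Entropy of a probability mass function, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def huffman_expected_length(p):
    """Expected binary Huffman codeword length: the sum of the merged
    probabilities created at each internal node of the Huffman tree."""
    heap = list(p)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

p = [0.5, 0.25, 0.125, 0.125]   # dyadic probabilities: Huffman meets the entropy
H, L = entropy_bits(p), huffman_expected_length(p)
print(H, L)                     # both 1.75 bits; in general H <= L < H + 1
```

For non-dyadic sources L exceeds H by less than one bit per symbol, which is why encoding blocks of symbols together drives the efficiency toward 100%.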
II. RATE DISTORTION THEORY IN A NUTSHELL
Rate Distortion Theory is based on the seminal contribution of Claude Shannon, who provided a theoretical framework
for the representation of a continuous source through discrete symbols [17].
Suppose a memoryless continuous source produces a random sample with density : the quantization problem
boils down to representing through discrete values so that, if our measure of the distortion is , our
mapping is such that .
To quantify how many bits are needed to represent , in his 1959 paper Shannon defined the so-called rate
distortion function:
where:
is the average mutual information between and . In the same paper he proved the following two fundamental
theorems:
Theorem 2: The minimum information rate necessary to represent the output of a discrete-time, continuous-
amplitude memoryless Gaussian source with variance based on a mean-square distortion measure per symbol
(single letter distortion measure) is:
if
Theorem 3: There exists an encoding scheme that maps the source output into code words such that for any given
distortion , the minimum rate bits per symbol (sample) is sufficient to reconstruct the source output with an
average distortion that is arbitrarily close to .
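Theorem 2 can be evaluated directly. A minimal sketch (the function name is ours; the formula is the theorem's R(D) in bits):

```python
import math

def gaussian_rd(sigma2, D):
    """Theorem 2: R(D) = (1/2) log2(sigma^2 / D) bits for 0 < D < sigma^2,
    and 0 when the allowed distortion reaches the source variance."""
    return 0.5 * math.log2(sigma2 / D) if D < sigma2 else 0.0

# each extra bit of rate divides the achievable distortion by 4
print(gaussian_rd(1.0, 0.25), gaussian_rd(1.0, 0.0625), gaussian_rd(1.0, 2.0))
```

The last call returns 0: once D ≥ σ², reproducing the source by its mean alone already meets the distortion target, so no bits are needed.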
The implication of the two theorems above is that is the minimum number of bits that can represent within the
prescribed mean-square error if the source is Gaussian. In other words, any discrete code that represents a Gaussian
source with a mean-square distortion has
and the lower bound is asymptotically achievable. The following important theorem, proven by Berger in 1971 [2],
generalizes the results by Shannon:
Theorem 4: The rate-distortion function of a memoryless, continuous-amplitude source with zero mean and finite
variance with respect to the mean-square-error distortion measure is upper-bounded as:
Berger's theorem implies that the Gaussian source is the one that requires the maximum encoding rate, if the distortion
function is the MSE. Hence, if our distortion metric is the mean-square error, the case of a Gaussian source has to be
seen as a worst-case scenario.
The theorems above have been extended to multivariate sources, which are analogous to the ones we consider in
this paper. In particular, the so-called inverse water-filling result used in Section V is the direct generalization of
Theorem 2 to the multivariate Gaussian source.
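The inverse water-filling computation referred to here can be sketched as follows. This is our own implementation outline (the bisection on the water level γ and all variable names are assumptions, not the paper's code): γ is chosen so that the per-component distortions D_i = min(γ, λ_i) sum to the distortion budget D, and each component then contributes (1/2)·log2(λ_i/D_i) bits.

```python
import math

def reverse_waterfilling(eigvals, D):
    """Inverse water-filling for a Gaussian vector source: find gamma with
    sum_i min(gamma, lambda_i) = D, then R = sum_i (1/2) log2(lambda_i / D_i)
    with D_i = min(gamma, lambda_i). Sketch using bisection."""
    assert 0 < D <= sum(eigvals)
    lo, hi = 0.0, max(eigvals)
    for _ in range(200):                     # bisect on the water level gamma
        gamma = 0.5 * (lo + hi)
        if sum(min(gamma, lam) for lam in eigvals) < D:
            lo = gamma
        else:
            hi = gamma
    Di = [min(gamma, lam) for lam in eigvals]
    # components whose eigenvalue sits below the water level get zero rate
    return sum(0.5 * math.log2(lam / d) for lam, d in zip(eigvals, Di) if d < lam)

print(reverse_waterfilling([2.0, 1.0, 0.5], 0.3))
```

With eigenvalues (2, 1, 0.5) and D = 0.3, the water level is γ = 0.1 and the rate is (1/2)·log2(20·10·5) = (1/2)·log2(1000) bits.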
--R
Network Information Flow.
Rate distortion Theory
Sphere Packings
Introduction to Algorithms.
Elements of Information Theory.
Vector Quantization and Signal Compression.
Critical Power for Asymptotic Connectivity in Wireless Networks.
The Capacity of Wireless Networks.
On the Maximum Stable Throughput Problem in Random Networks.
Convergence of Stochastic Processes.
Distributed Source Coding: Symmetric Rates and Applications to Sensor Networks.
Lattice Quantization with Side Information.
Distributed Signal Processing Algorithms for the Sensor Broadcast Problem.
Sensing Lena?
Coding Theorems for a Discrete Source With a Fidelity Criterion. Institute of Radio Engineers.
Noiseless coding of correlated information sources.
Multiple Description Vector Quantization with Lattice Codebooks: Design and Analysis.
Optimal Code Design for Lossless and Near Lossless Source Coding in Multiple Access Networks.
| routing;sensor networks;cross-layer interactions;multi-hop networks;source coding |
570664 | A two-tier data dissemination model for large-scale wireless sensor networks. | Sink mobility brings new challenges to large-scale sensor networking. It suggests that information about each mobile sink's location be continuously propagated through the sensor field to keep all sensor nodes updated with the direction of forwarding future data reports. Unfortunately frequent location updates from multiple sinks can lead to both excessive drain of sensors' limited battery power supply and increased collisions in wireless transmissions. In this paper we describe TTDD, a Two-Tier Data Dissemination approach that provides scalable and efficient data delivery to multiple mobile sinks. Each data source in TTDD proactively builds a grid structure which enables mobile sinks to continuously receive data on the move by flooding queries within a local cell only. TTDD's design exploits the fact that sensor nodes are stationary and location-aware to construct and maintain the grid structures with low overhead. We have evaluated TTDD performance through both analysis and extensive simulation experiments. Our results show that TTDD handles multiple mobile sinks efficiently with performance comparable with that of stationary sinks. | INTRODUCTION
Recent advances in VLSI, microprocessor and wireless communication technologies have enabled the deployment of
large-scale sensor networks where thousands, or even tens of thousands, of small sensors are distributed over a vast
field to obtain fine-grained, high-precision sensing data [9, 10, 15]. These sensor nodes are typically powered by
batteries and communicate through wireless channels.
This paper studies the problem of scalable and efficient data dissemination in a large-scale sensor network from
potentially multiple sources to potentially multiple, mobile sinks. In this work a source is defined as a sensor node
that detects a stimulus, which is a target or an event of interest, and generates data to report the stimulus. A sink
is defined as a user that collects these data reports from the sensor network. Both the number of stimuli and that
of the sinks may vary over time. For example in Figure 1, a group of soldiers collect tank movement information
from a sensor network deployed in a battlefield. The sensor nodes surrounding a tank detect it and collaborate among
themselves to aggregate data, and one of them generates a data report. The soldiers collect these data reports. In this
paper we consider a network made of stationary sensor nodes only, whereas sinks may change their locations
dynamically. In the above example, the soldiers may move around, and must be able to receive data reports continuously.
Sink mobility brings new challenges to large-scale sensor networking. Although several data dissemination protocols
have been developed for sensor networks recently, such as Directed Diffusion [10], Declarative Routing Protocol [5] and
GRAB [20], they all suggest that each mobile sink needs to continuously propagate its location information throughout
the sensor field, so that all sensor nodes get updated with the direction of sending future data reports. However, frequent
location updates from multiple sinks can lead to both increased collisions in wireless transmissions and rapid power
consumption of the sensors' limited battery supply. None of the existing approaches provides a scalable and efficient
solution to this problem.
In this paper, we describe TTDD, a Two-Tier Data Dissemination approach to address the multiple, mobile sink
problem. Instead of propagating query messages from each sink to all the sensors to set up data forwarding information,
TTDD's design uses a grid structure so that only sensors located at grid points need to acquire the forwarding
information. Upon detection of a stimulus, instead of passively waiting for data queries from sinks (the approach
taken by most of the existing work), the data source proactively builds a grid structure throughout the sensor field and
sets up the forwarding information at the sensors closest to grid points (henceforth called dissemination nodes). With
this grid structure in place, a query from a sink traverses two tiers to reach a source. The lower tier is within the
local grid square of the sink's current location (grid squares are henceforth called cells), and the higher tier is made of
the dissemination nodes at grid points. The sink floods its query within a cell.
When the nearest dissemination node for the requested data receives the query, it forwards the query to its upstream
dissemination node toward the source, which in turn further forwards the query, until it reaches either the source or a
dissemination node that is already receiving data from the source (e.g. upon requests from other sinks). This query
forwarding process lays information of the path to the sink, to enable data from the source to traverse the same two tiers
as the query but in the reverse order.
TTDD's design exploits the fact that sensor nodes are both stationary and location-aware. Because sensors are assumed
to know their locations in order to tag sensing data [1, 8, 18], and because sensors' locations are static, TTDD can
use simple greedy geographical forwarding to construct and maintain the grid structure with low overhead. With a grid
structure for each data source, queries from multiple mobile sinks are confined within their local cells only, thus avoiding
excessive energy consumption and network overload from global flooding by multiple sinks. When a sink moves more
than a cell size away from its previous location, it performs another local flooding of the data query, which will reach a new
dissemination node. Along its way toward the source this query will stop at a dissemination node that is already receiving
data from the source. This dissemination node then forwards data downstream and finally to the sink. In this
way, even when sinks move continuously, higher-tier data forwarding changes incrementally and the sinks can receive
data without interruption. Furthermore, because only those sensors on the grid points (serving as dissemination nodes)
of a data source participate in its data dissemination, other sensors are relieved from maintaining states. Thus TTDD
can effectively scale to a large number of sources and sinks.
The rest of this paper is organized as follows. Section 2 describes the main design including grid construction, the two-tier
query and data forwarding, and grid maintenance. Section 3 analyzes the communication overhead and the state
complexity of TTDD, and compares with other sink-oriented data dissemination designs. Simulation results are presented
in Section 4 to evaluate the effectiveness of our design and analyze the impact of important parameters. We discuss
several design issues in Section 5 and compare with the related work in Section 6. Section 7 concludes the paper.
2. TWO-TIER DATA DISSEMINATION
This section presents the basic design of TTDD, which is based on the following assumptions:
A vast field is covered by a large number of homogeneous sensor nodes which communicate with each
other through short-range radios. Long-distance data delivery is accomplished by forwarding data across multiple hops.
Each sensor node is aware of its own location (for example through receiving GPS signals or through techniques
such as [1]). However, mobile sinks may or may not know their own locations.
Once a stimulus appears, the sensors surrounding it collectively process the signal and one of them becomes
the source to generate data reports [20].
Sinks (users) query the network to collect sensing data. There can be multiple sinks moving around in the sensor
field and the number of sinks may vary over time.
The above assumptions are consistent with the models for real sensors being built, such as UCLA WINS NG nodes
[15], SCADDS PC/104 [4], and Berkeley Motes [9].
In addition, TTDD design assumes that the sensor nodes are aware of their missions (e.g., in the form of the signatures
of each potential type of stimulus to watch). Each mission represents a sensing task of the sensor network. In
the example of tank detection of Figure 1, the mission of the sensor network is to collect and return the current locations
of tanks. In scenarios where the sensor network mission changes, the new mission can be flooded through the field
to reach all sensor nodes. In this paper we do not discuss how to manage the missions of sensor networks. However we
do assume that the mission of a sensor network changes only infrequently, thus the overhead of mission dissemination is
negligible compared to that of sensing data delivery.
As soon as a source generates data, it starts preparing for data dissemination by building a grid structure. The
source starts with its own location as one crossing point of the grid, and sends a data announcement message to each of
its four adjacent crossing points. Each data announcement message finally stops at a sensor node that is the closest to
the crossing point specified in the message. The node stores the source information and further forwards the message to
its adjacent crossing points, except the one from which it received the message. This recursive propagation of data
announcement messages notifies those sensors that are closest to the crossing locations to become the dissemination
nodes of the given source.
Once a grid for the specified source is built, a sink can then flood its query within a local cell to receive data. The
query will be received by the nearest dissemination node on the grid, which then propagates the query upstream through
other dissemination nodes toward the source. Requested data will flow down in the reverse path to the sink.
The above seemingly simple TTDD operation poses several research challenges. For example, given that locations
of sensors are random and not necessarily on the crossing points of a grid, how do nearby sensors of a grid point decide
which one should serve as the dissemination node? Once the data stream starts flowing, how can it be made to follow the
movement of a sink to ensure continued delivery? Given that individual sensors are subject to unexpected failures, how
is the grid structure maintained, once it is built? The remainder of this section will address each of these questions
in detail. We start with the grid construction in Section 2.1, and present the two-tier query and data forwarding in
Section 2.2. Grid maintenance is described in Section 2.3.
Figure 2: One source B and one sink S
2.1 Grid Construction
To simplify the presentation, we assume that a sensor field spans a two-dimensional plane. A source divides the plane
into a grid of cells. Each cell is an square. A source itself is at one crossing point of the grid. It propagates
data announcements to reach all other crossings, called dissemination points, on the grid. For a particular source at
location , dissemination points are located at
A source calculates the locations of its four neighboring dissemination points given its location (x, y) and cell
size . For each of the four dissemination points Lp, the source sends a data-announcement message to Lp using simple
greedy geographical forwarding, i.e., it forwards the message to the neighbor node that has the smallest distance to
Lp. The neighbor node continues forwarding the data announcement message in a similar way till the message stops
at a node that is closer to Lp than all its neighbors. If this node's distance to Lp is less than a threshold of half the
cell size, it becomes a dissemination node serving dissemination point Lp for the source. In cases where a data
announcement message stops at a node whose distance to the designated dissemination point is greater than half the
cell size, the node simply drops the message.
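The forwarding rule just described can be sketched in a few lines. Everything numeric below (cell size, radio range, node coordinates) is an invented toy example, not the paper's code; it only illustrates greedy geographic forwarding toward a dissemination point with the half-cell acceptance test.

```python
import math

ALPHA = 100.0                                   # assumed cell size

def dissemination_points(x, y):
    """Four grid crossing points adjacent to the point (x, y)."""
    return [(x + ALPHA, y), (x - ALPHA, y), (x, y + ALPHA), (x, y - ALPHA)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_forward(node, neighbors_of, Lp):
    """Forward toward Lp, always to the neighbor closest to Lp; stop at a node
    closer to Lp than all its neighbors. Returns the stopping node and whether
    it becomes a dissemination node (within half a cell of Lp)."""
    while True:
        nbrs = neighbors_of(node)
        best = min(nbrs, key=lambda n: dist(n, Lp), default=node)
        if not nbrs or dist(best, Lp) >= dist(node, Lp):
            return node, dist(node, Lp) < ALPHA / 2   # else the message is dropped
        node = best

# a toy topology: nodes on a jittered line toward the crossing point (100, 0)
nodes = [(0.0, 0.0), (30.0, 5.0), (60.0, -4.0), (95.0, 3.0)]
def neighbors_of(n):                            # assumed radio range of 40
    return [m for m in nodes if m != n and dist(m, n) <= 40]

Lp = dissemination_points(0.0, 0.0)[0]          # (100.0, 0.0)
stop, is_dissem = greedy_forward(nodes[0], neighbors_of, Lp)
print(stop, is_dissem)
```

Here the message hops 0→30→60→95 along the line and stops at (95, 3), which is within half a cell of (100, 0) and so becomes the dissemination node.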
A dissemination node stores a few pieces of information for the grid structure, including the data announcement
message, the dissemination point Lp it is serving, and the upstream dissemination node's location. It then further propagates
the message to its neighboring dissemination points on the grid, except the upstream one from which it receives
the announcement. The data announcement message is recursively propagated through the whole sensor field so that
each dissemination point on the grid is served by a dissemination node. Duplicate announcement messages from different
neighboring dissemination points are identified by the sequence number carried in the announcement and simply
dropped.
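This recursive propagation with duplicate suppression behaves like a breadth-first traversal of the ideal grid. A toy simulation (the field bounds, cell size, and function names below are our own assumptions):

```python
from collections import deque

ALPHA = 100.0                                     # assumed cell size

def build_grid(source, field_w, field_h):
    """Flood data announcements from the source's crossing point; each
    dissemination point forwards to its grid neighbors, and duplicates of
    the same announcement are dropped (modeled by the `reached` set)."""
    reached = {source}
    q = deque([source])
    while q:
        x, y = q.popleft()
        for p in ((x + ALPHA, y), (x - ALPHA, y), (x, y + ALPHA), (x, y - ALPHA)):
            if not (0 <= p[0] <= field_w and 0 <= p[1] <= field_h):
                continue                          # past the border: dropped
            if p in reached:
                continue                          # duplicate announcement
            reached.add(p)
            q.append(p)
    return reached

grid = build_grid((200.0, 200.0), 500.0, 500.0)
print(len(grid))                                  # 6 x 6 = 36 crossings
```

Because each point forwards to all its other neighbors, the flood also routes around missing regions, which is why void areas do not partition the grid as long as some path exists.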
Figure 2 shows a source B and its virtual grid. The black nodes around each crossing point of the grid are
the dissemination nodes.
2.1.1 Explanation of Grid Construction
Because the above grid construction process does not assume any a priori knowledge of potential positions of sinks,
it builds a uniform grid in which all dissemination points are regularly spaced with distance , in order to distribute data
announcements as evenly as possible. The knowledge of the global topology is not required at any node; each node acts
based on information of its local neighborhood only.
In TTDD, the dissemination point serves as a reference location when selecting a dissemination node. The dissemination
node is to be selected as close to the dissemination point as possible, so that the dissemination nodes are evenly
distributed to form a uniform grid infrastructure. However, the dissemination node is not required to be globally closest
to the dissemination point. Strictly speaking, TTDD ensures that a dissemination node is locally closest, but not
necessarily globally closest, to the dissemination point, due to irregularities in topology. This will not affect the correct
operation of TTDD. The reason is that each dissemination node includes its own location (not that of the dissemination
point) in its further data announcement messages. This way, downstream dissemination nodes will still be able to forward
future queries to this dissemination node, even though the dissemination node is not globally closest to the dissemination
point in the ideal grid. This is to be further discussed in Section 2.2.1.
We set the distance threshold of half the cell size for a node to become a dissemination node in order to stop the grid construction
at the network border. For example, in Figure 3, sensor node B receives a data announcement destined to P, which
is out of the sensor field. Because sensor nodes are not aware of the global sensor field topology, they cannot tell
if a location is out of the network. Comparing the distance with half the cell size provides nodes a simple rule to decide if the propagation
should be terminated.
When a dissemination point falls in a void area without any sensor nodes in it, the data announcement propagation
might stop on the border of the void area. But propagation can continue along other paths of the grid and go around
the void area, since each dissemination node forwards the data announcement to all three other dissemination points.
As long as the grid is not partitioned, data announcements will bypass the void by taking alternative paths.
We choose to build the grid on a per-source basis, so
that dierent sources recruit dierent sets of dissemination
nodes. This design choice enhances scalability and provides
load balancing and better robustness. When there are many
sources, as long as their grids do not overlap, a dissemination
node only has states about one or a few sources. This
allows TTDD to scale to large numbers of sources. We will
analyze the state complexity in Section 3.3. In addition,
the per-source grid effectively distributes data dissemination
load among dierent sensor nodes to avoid bottlenecks.
This is motivated by the fact that each sensor node is energy-constrained
and the sensor nodes' radios usually have limited
bandwidth. The per-source grid construction also leads
to enhanced robustness in the presence of node failures.
Figure 3: Termination on border
The grid cell size is a critical parameter. As we can see
in the next section, the general guideline to set the cell size
is to localize the impact of sink mobility within a single cell,
so that the higher-tier grid forwarding remains stable. The
choice of α affects energy efficiency and state complexity.
It will be further analyzed in Section 3 and evaluated in
Section 4.
2.2 Two-Tier Query and Data Forwarding
2.2.1 Query Forwarding
Our two-tier query and data forwarding is based on the
virtual grid infrastructure to ensure scalability and efficiency.
When a sink needs data, it floods a query within a local
area about the size of a cell to discover nearby dissemination
nodes. The sink specifies a maximum distance in the
query, so the flooding stops at nodes that are more than
the maximum distance away from the sink.
Once the query reaches a local dissemination node, which
is called an immediate dissemination node for the sink, it is
forwarded on the grid to the upstream dissemination node
from which this immediate dissemination node receives data
announcements. The upstream one in turn forwards the
query further upstream toward the source, until finally the
query reaches the source. During the above process, each
dissemination node stores the location of the downstream
dissemination node from which it receives the query. This
state is used to direct data back to the sink later (see Figure
4 for an illustration).
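The upstream forwarding with downstream-state installation can be sketched as follows (class and function names are illustrative, not from the paper):

```python
class DissNode:
    """A dissemination node on the grid."""
    def __init__(self, name, upstream=None, is_source=False):
        self.name = name
        self.upstream = upstream    # learned from data announcements
        self.is_source = is_source
        self.downstream = None      # installed by queries, routes data back

def forward_query(node, came_from=None):
    """Forward a query hop by hop toward the source, recording at each
    dissemination node which downstream node the query came from."""
    node.downstream = came_from
    if node.is_source:
        return node
    return forward_query(node.upstream, came_from=node)
```

After a query from the sink's cell reaches the source, the chain of `downstream` pointers retraces the query path, which is exactly the state used to send data back.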
With the grid infrastructure in place, the query flooding
can be confined within a region of about a single cell size.
This saves a significant amount of energy and bandwidth
compared to flooding the query across the whole sensor field.
Moreover, two levels of query aggregation¹ are employed
during the two-tier forwarding to further reduce the overhead.
Within a cell, an immediate dissemination node that
receives queries for the same data from different sinks aggregates
these queries. It only sends one copy to its upstream
dissemination node, in the form of an upstream update. Similarly,
if a dissemination node on the grid receives multiple
upstream updates from different downstream neighbors, it
forwards only one of them further. For example, in Figure
4, the dissemination node G receives queries from both the
cell where sink S1 is located and the cell where sink S2 is
located, and G sends only one upstream update message
toward the source.
When an upstream update message traverses the grid,
it installs soft-states in dissemination nodes to direct data
¹For simplicity, we do not consider semantic aggregation [10]
here, which can be used to further improve the aggregation
gain for different data resolutions and types.
Figure 4: Two-tier query and data forwarding between
Source A and Sink S1. Sink S1 starts by flooding its
query, with its primary agent PA's location, to its immediate
dissemination node Ds. Ds records PA's location
and forwards the query to its upstream dissemination
node until the query reaches A. The data are returned
to Ds along the way that the query traverses. Ds forwards
the data to PA, and finally to Sink S1. A similar
process applies to Sink S2, except that its query stops
on the grid at dissemination node G.
streams back to the sinks. Unless refreshed, these
states are valid for a certain period only. A dissemination
node sends such messages upstream periodically in order
to receive data continuously; it stops sending such update
messages when it no longer needs the data, such as when
the sink stops sending queries or moves out of the local region.
An upstream dissemination node automatically stops
forwarding data after the soft-state expires. In our current
design, the values of these soft-state timers are chosen to be
somewhat higher than the interval between data
messages. This setting balances the overhead of generating
periodic upstream update messages and that of sending data
to places where it is no longer needed.
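A minimal soft-state table might look like this (an illustrative sketch using a logical clock; the lifetime value is an assumption, not the paper's setting):

```python
class SoftStateTable:
    """Per-dissemination-node forwarding state that expires unless
    refreshed by periodic upstream update messages."""
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.expiry = {}            # downstream node -> expiration time

    def refresh(self, downstream, now):
        """An upstream update renews the entry for its sender."""
        self.expiry[downstream] = now + self.lifetime

    def active_downstreams(self, now):
        """Downstream nodes that should still receive data."""
        return [d for d, t in self.expiry.items() if t > now]
```

An entry refreshed at time 0 with lifetime 5 is still active at time 3 but has silently expired by time 6, so data stop flowing without any explicit teardown message.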
The two-level aggregation provides scalability with the
number of sinks. A dissemination node on the query forwarding
path maintains at most three states about which
neighboring dissemination nodes need data. An immediate
dissemination node maintains in addition only the states of
sinks located within the local region of about a single cell-
size. Sensor nodes that do not participate in query or data
forwarding do not keep any state about sinks or sources. We
analyze the state complexity in detail in Section 3.3.
2.2.2 Data Forwarding
Once a source receives the queries (in the form of upstream
updates) from any one of its neighboring dissemination nodes,
it sends out data to this dissemination node, which in turn
forwards the data to where it received the queries, and so
forth until the data reach each sink's immediate dissemination
node. If a dissemination node has aggregated queries
from different downstream dissemination nodes, it sends a
data copy to each of them. For example, in Figure 4 the dissemination
node G will send data to both S1 and S2. Once
the data arrive at a sink's immediate dissemination node,
trajectory forwarding (see Section 2.2.3) is employed to further
relay the data to the sink which might be in continuous
motion.
Figure 5: Trajectory forwarding from immediate dissemination
node Ds to mobile sink S1 via primary agent
PA and immediate agent IA. Immediate agent IA is one hop
away from S1; it relays data directly to sink S1.
When S1 moves out of the one-hop transmission range
of its current IA, it picks a new IA from its neighboring
nodes. S1 then sends an update to its PA and old IA
to relay data. PA is not changed as long as S1 remains
within some range of PA.
With the two-tier forwarding as described above, queries
and data may take globally suboptimal paths, thus introducing
additional cost compared to forwarding along shortest
paths. For example, in Figure 4, sinks S1 and S2 may
find straight-line paths to the source if they each flooded
their queries across the whole sensor field. However, the
path a message travels between a sink and a source by the
two-tier forwarding is at most √2 times the length of
a straight-line path. We believe that the sub-optimality is well
worth the gain in scalability. A detailed analysis is given in
Section 3.
2.2.3 Trajectory Forwarding
Trajectory forwarding is employed to relay data to a mobile
sink from its immediate dissemination node. In trajectory
forwarding, each sink is associated with two sensor
nodes: a primary agent and an immediate agent. A sink
picks one neighboring sensor node as its primary agent and
includes the location of the primary agent in its queries. Its
immediate dissemination node sends data to the primary
agent, which in turn relays data to the sink. Initially the
primary agent and the immediate agent are the same sensor
node.
When a sink is about to move out of the range of its
current immediate agent, it picks another neighboring sensor
node as its new immediate agent, and sends the location of
the new immediate agent to its primary agent, so that future
data are forwarded to the new immediate agent. To avoid
losing data that have already been sent to the old immediate
agent, the location is also sent to the old immediate agent
(see Figure 5). The selection of a new immediate agent can
be done by broadcasting a solicit message from the sink,
which then chooses the node that replies with the strongest
signal-to-noise ratio.
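The immediate-agent handoff can be sketched as follows (the function names and the `sink_state` dictionary are illustrative assumptions; SNR values come from solicit replies):

```python
def pick_new_immediate_agent(solicit_replies):
    """Pick the neighbor whose solicit reply had the strongest
    signal-to-noise ratio; solicit_replies maps neighbor -> SNR."""
    return max(solicit_replies, key=solicit_replies.get)

def handoff(sink_state, solicit_replies):
    """Replace the immediate agent and notify both the primary agent
    and the old immediate agent, so in-flight data are not lost."""
    new_ia = pick_new_immediate_agent(solicit_replies)
    notifications = [(sink_state["pa"], new_ia),   # future data
                     (sink_state["ia"], new_ia)]   # data already sent
    sink_state["ia"] = new_ia
    return notifications
```

Notifying the old immediate agent as well as the primary agent is the detail that prevents losing data already in flight toward the old agent.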
The primary agent represents the mobile sink at the sink's
immediate dissemination node, so that the sink's mobility is
made transparent to its immediate dissemination node. The
immediate agent represents the sink at the sink's primary
agent, so that the sink can receive data continuously while
in constant movement. Thus a user that does not know his
own location can still collect data from the network.
When the sink moves out of a certain distance, e.g., a cell
size, from its primary agent, it picks a new primary agent
and floods a query locally to discover new dissemination
nodes that might be closer. To avoid receiving duplicate
data from its old primary agent, TTDD lets each primary
agent time out once its timer, which is set approximately to
the duration a mobile sink remains in a cell, expires. The
old immediate agent times out in a similar way, except that
it has a shorter timer which is approximately the duration
a sink remains within the one-hop distance. If a sink's immediate
dissemination node does not have any other sinks
or neighboring downstream dissemination nodes requesting
data for a certain period of time (similar to the timeout
value of the sink's primary agent), it stops sending update
messages to its upstream dissemination node so that data
are no longer forwarded to this cell.
An example is shown in Figure 4: when the soft-state at the
immediate dissemination node Ds expires, Ds stops sending
upstream updates because it does not have any other sinks
or neighboring downstream dissemination nodes requesting
data. After a while, data forwarded at G only go to sink S2 ,
if S2 still needs data. This way, all states built on the grid
and in the old agents by a sink's old query are cleared.
With trajectory forwarding, sink mobility within a small
range, e.g., a cell size, is made transparent to the higher-
tier grid forwarding. Mobility beyond a cell-size distance
that involves new dissemination node discoveries might affect
certain upstream dissemination nodes on grids. Since
the new dissemination nodes that a sink discovers are likely
to be in adjacent cells, the adjustment to grid forwarding
will typically affect only a few nearby dissemination nodes.
2.3 Grid Maintenance
To avoid keeping grid states at dissemination nodes in-
denitely, a source includes a Grid Lifetime in the data announcement
message when sending it out to build the grid.
If the lifetime elapses and the dissemination nodes on the
grid do not receive any further data announcements to up-date
the lifetime, they clear their states and the grid no
longer exists.
Proper grid lifetime values depend on the data availability
period and the mission of the sensor network. In the example
of Figure 1, if the mission is to return the "current"
tank locations, a source can estimate the time period that
the tank will stay around, and use this estimation to set the
grid lifetime. If the tank stays longer than the original
estimation, the source can send out new data announcements
to extend the grid's lifetime.
For any structure, it is important to handle unexpected
component failures for robustness. To conserve the scarce
energy supply of sensor nodes, we do not periodically refresh
the grid during its lifetime. Instead, we employ a mechanism
called information duplication, in which each
dissemination node recruits from its neighborhood several
sensor nodes and replicates in them the location of its upstream
dissemination node. When this dissemination node
fails, the upstream update messages from its downstream
dissemination node that needs data will stop at one of these
recruited nodes. That node then forwards the update message
to the upstream dissemination node according to the stored
information.² When data come from upstream later, a new
dissemination node will emerge following the same rule as
when the source initially built the grid.
²This neighbor can detect the failure of the dissemination
node either through MAC-layer mechanisms such as acknowledgments
when available, or by explicitly soliciting a reply if it
does not overhear the dissemination node.
Since this new dissemination node does not know which
downstream dissemination node neighbors need data, it simply
forwards data to all three other dissemination points.
A downstream dissemination node neighbor that needs data
will continue to send upstream update messages to re-establish
the forwarding state, whereas one that does not need data
drops the data and does not send any upstream update, so
that future data reports will not flow to it. Note that this
mechanism also handles the scenario where multiple dissemination
nodes fail simultaneously along the forwarding path.
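Information duplication can be sketched as a simple lookup with fallback (the data structures here are illustrative assumptions, not the paper's implementation):

```python
def resolve_upstream(node, live_table, backups):
    """Return the upstream dissemination node's location for `node`.
    If `node` has failed, fall back to a recruited neighbor's
    duplicated copy so the upstream update can still be forwarded."""
    if node in live_table:                 # dissemination node alive
        return live_table[node]
    for copy in backups.get(node, []):     # duplicated copies survive
        return copy
    return None                            # no copy: grid broken here

# usage: after D2 fails, its recruited neighbors still know D2's
# upstream dissemination node, so updates keep flowing
live_table = {"D1": "SRC"}                 # live node -> upstream location
backup_copies = {"D2": ["D1"]}             # copies held by D2's neighbors
```

An upstream update that would have stopped at the failed node is thus re-routed from a recruited neighbor, and a replacement dissemination node emerges when data next arrive.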
The failure of the immediate dissemination node is detected
by a timeout at a sink. When a sink stops receiving
data for a certain time, it re-floods a query to locate a new
dissemination node. The failures of primary agents or immediate
agents are detected by similar timeouts and new
ones will be picked.
Our grid maintenance is triggered on-demand by on-going
queries or upstream updates. Compared with periodic grid
refreshing, it trades computational complexity for less consumption
of energy, which is a more critical resource. We
show the performance of our grid maintenance through simulations
in Section 4.4.
3. OVERHEAD ANALYSIS
In this section we analyze the efficiency and scalability of
TTDD. We measure two metrics: the communication overhead
for a number of sinks to retrieve a certain amount of
data from a source, and the complexity of the states that
are maintained in a sensor node for data dissemination. We
study both the stationary and the mobile sink cases.
We compare TTDD with the sink-oriented data dissemination
approach (henceforth called SODD), in which each
sink first floods the whole network to install data-forwarding
state at all the sensor nodes, and then sources react to deliver
data. Directed Diffusion [10], DRP [5] and GRAB [20]
all take this approach, although each employs different optimization
techniques, such as data aggregation and query
aggregation, to reduce the number of messages to be delivered.
Because both aggregation techniques are applicable to
TTDD as well, we do not consider these aggregations when
we compare the communication overhead. Instead, our analysis
will focus on the worst-case communication overhead
of each protocol. We aim at making the analysis simple
and easy to follow while capturing the fundamental differences
between TTDD and other approaches. We will add
the consideration of the aggregations when we analyze the
complexity in sensor state maintenance.
3.1 Model and Notations
We consider a square sensor field of area A in which N
sensor nodes are uniformly distributed, so that on each side
there are approximately √N sensor nodes. There are k sinks
in the sensor field. They move at an average speed v, while
receiving d data packets from a source in a time period of
T. Each data packet has a unit size, and both the query and
data announcement messages have a comparable size l. The
communication overhead to flood an area is proportional to
the number of sensor nodes in it, and to send a message along
a path by greedy geographical forwarding is proportional to
the number of sensor nodes in the path. The average number
of neighbors within a sensor node's wireless communication
range is D.
In TTDD, the source divides the sensor field into cells,
each of area α². There are n = α²N/A sensor nodes in
each cell and √n sensor nodes on each side of a cell. Each
sink traverses m cells, and m is upper bounded by vT/α + 1.
For stationary sinks, m = 1.
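The per-cell population follows directly from the uniform node density N/A (a small sketch with illustrative parameter values, not figures from the paper):

```python
def nodes_per_cell(alpha, N, A):
    """n = alpha^2 * N / A: expected sensor nodes in one grid cell
    when N nodes are uniformly distributed over a field of area A."""
    return alpha**2 * N / A

# e.g. N = 2000 nodes on a 2000 m x 2000 m field with 200 m cells
n = nodes_per_cell(200.0, 2000, 2000.0 * 2000.0)
```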
3.2 Communication Overhead
We first analyze the worst-case communication overhead
of TTDD and SODD. We assume in both TTDD and SODD
a sink updates its location m times, and receives d/m data
packets between two consecutive location updates. In TTDD,
a sink updates its location by flooding a query locally to
reach an immediate dissemination node, from which the
query is further forwarded to the source along the grid. The
overhead for the query to reach the source, without considering
query aggregation, is:

(n + √2·c·√N)·l

where n·l is the local flooding overhead, and c·√N is the
average number of sensor nodes along the straight-line path
from the source to the sink (0 < c ≤ √2). Because a query
in TTDD traverses a grid instead of a straight-line path, the
worst-case path length is increased by a factor of √2.
Similarly, the overhead to deliver d/m data packets from a
source to a sink is √2·c·√N·(d/m). For k mobile sinks, the
overhead to receive d packets in m cells is:

k·m·((n + √2·c·√N)·l + √2·c·√N·(d/m))

Plus the overhead N·l in updating the mission of the sensor
network and (4N/√n)·l in constructing the grid, the total overhead
of TTDD becomes:

CO_TTDD = N·l + (4N/√n)·l + k·m·(n + √2·c·√N)·l + √2·k·c·√N·d    (1)
In SODD, every time a sink floods the whole network, it
receives d/m data packets. Data traverse straight-line path(s)
to the sink. Again, without considering aggregation, the
communication overhead is:

N·l + c·√N·(d/m)

For k mobile sinks, the total worst-case overhead is:

CO_SODD = k·m·N·l + k·c·√N·d

Note that here we do not count the overhead to update
the sensor network mission, because SODD can potentially
update the mission when a sink floods its queries.
To compare TTDD and SODD, we have:

CO_TTDD / CO_SODD = (N·l + (4N/√n)·l + k·m·(n + √2·c·√N)·l + √2·k·c·√N·d) / (k·m·N·l + k·c·√N·d)
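The worst-case overhead expressions for TTDD and SODD can be evaluated numerically; a sketch following the analysis in this section (all parameter values below are illustrative, not from the paper):

```python
from math import sqrt

def co_ttdd(N, n, k, m, c, l, d):
    """Worst-case TTDD overhead: mission update, grid construction,
    per-update local flooding plus grid query and data forwarding."""
    return (N * l + (4 * N / sqrt(n)) * l
            + k * m * (n + sqrt(2) * c * sqrt(N)) * l
            + sqrt(2) * k * c * sqrt(N) * d)

def co_sodd(N, k, m, c, l, d):
    """Worst-case SODD overhead: per-update network-wide flooding
    plus straight-line data delivery."""
    return k * m * N * l + k * c * sqrt(N) * d

# illustrative values: a large network with several mobile sinks
ratio = (co_ttdd(1e6, 100, 8, 10, 1.0, 1.0, 100)
         / co_sodd(1e6, 8, 10, 1.0, 1.0, 100))
```

With these values the ratio comes out well below 1, illustrating the asymptotic advantage as N, k, or m grows.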
Thus, in a large-scale sensor network, TTDD has asymptotically
lower worst-case communication overhead compared
Figure 6: Normalized TTDD communication overhead vs. cell size
with an SODD approach as the sensor network scale (N ),
the number of sinks (k), or the sink mobility (characterized
by m) increases.
For example, consider a sensor network of N sensor nodes
with n sensor nodes in a TTDD grid cell, where each sink
receives d data packets. For the stationary sink case with
four sinks, CO_TTDD/CO_SODD ≈ 0.89. When the sink mobility
increases, CO_TTDD/CO_SODD stays below 1. In this
network setup, TTDD has consistently lower overhead compared
with SODD in both the stationary and mobile sink
scenarios.
Equation (1) shows the impact of the number of sensor
nodes in a cell (n) on TTDD's communication overhead.
For the example above, Figure 6 shows the TTDD communication
overhead as a function of n under different sink
moving speeds. Because the overhead to build the grid decreases
while the local query flooding overhead increases as
the cell size increases, Figure 6 shows the total communication
overhead as a tradeoff between these two competing
components. We can also see from Figure 6 that the
overall overhead is lower with smaller cells when the sink
mobility is significant. The reason is that high sink mobility
leads to frequent in-cell flooding, and a smaller cell size limits
the flooding overhead.
3.3 State Complexity
In TTDD, only dissemination nodes and their neighbors
which duplicate upstream information, sinks' primary agents,
and immediate agents maintain states for data dissemination.
All other sensor nodes do not need to maintain any
state. The state complexities at different sensor nodes are
analyzed as follows:
Dissemination nodes There are in total about N/n dissemination
nodes in a grid (one per grid point); each maintains the location
of its upstream dissemination node for query forwarding.
For those on data forwarding paths, each also maintains the
locations of at most all three other neighboring
dissemination nodes for data forwarding. The state
complexity for a dissemination node is thus O(1). A
dissemination node's neighbor that duplicates the upstream
dissemination node's location also has O(1) state complexity.
Immediate dissemination nodes A dissemination node
maintains states about the primary agents for all the
sinks within a local cell-size area. Assuming there are
k_local sinks within the area, the state complexity for
an immediate dissemination node is thus O(k_local).
Primary and immediate agents A primary agent maintains
its sink's immediate agent's location, and an immediate
agent maintains its sink's information for trajectory
forwarding. Their state complexities are both
O(1).
Sources A source maintains states of its grid size, and locations
of its downstream dissemination nodes that request
data. It has a state complexity of O(1).
We consider data forwarding from s sources to k mobile
sinks. Assume in SODD the total number of sensor nodes on
data forwarding paths from a source to all sinks is P; then
the number of sensor nodes on TTDD's grid forwarding paths
is at most √2·P. The total number of states maintained
for trajectory forwarding in sinks' immediate dissemination
nodes, primary agents, and immediate agents is k(s + 2).
The total state complexity is:

S_TTDD = √2·s·P + s·(1 + b)·(N/n) + k·(s + 2)

where b is the number of sensor nodes around a dissemination
point that store the location of the upstream dissemination
node, a small constant.
In SODD, each sensor node maintains a state pointing to its upstream
sensor node toward the source. In the scenario of
multiple sources, assuming perfect data aggregation, a sensor
node maintains at most per-neighbor states. For those
sensor nodes on forwarding paths, due to query aggregation,
they maintain at most per-neighbor states to direct
data in the presence of multiple sinks. The state complexity
for the whole sensor network is:

S_SODD = (D + 1)·N

The ratio of TTDD and SODD state complexity, for large N, is
approximately:

S_TTDD / S_SODD ≈ s·(1 + b) / ((D + 1)·n)

That is, for large-scale sensor networks, TTDD maintains
around only s(1 + b)/((D + 1)n) of the states of an SODD approach.
For the example of Figure 1, where we have 2 sources and 3
sinks, suppose b = 5, there are 100 sensor nodes within
a TTDD grid cell, and each sensor node has 10 neighbors on
average; then TTDD maintains only 1.1% of the states of
SODD. This is because in TTDD sensor nodes that are
outside the grid forwarding infrastructure generally do not
maintain any state for data dissemination.
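The 1.1% figure can be checked from the example's parameters, assuming the dominant-term ratio s(1+b)/((D+1)n) (an interpretation of the analysis above; a sketch):

```python
def state_ratio(s, b, D, n):
    """Dominant-term ratio of TTDD to SODD state counts:
    s per-source grids, each dissemination node plus its b duplicating
    neighbors, versus roughly (D+1) states per node network-wide."""
    return s * (1 + b) / ((D + 1) * n)

# the Figure 1 example: 2 sources, b = 5, n = 100 nodes/cell, D = 10
r = state_ratio(s=2, b=5, D=10, n=100)
```

This evaluates to about 0.011, i.e. roughly the 1.1% quoted above.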
3.4 Summary
In this section, we analyze the worst-case communication
overhead, and the state complexity of TTDD. Compared
with an SODD approach, TTDD has asymptotically lower
worst-case communication overhead as the sensor network
size, the number of sinks, or the moving speed of a sink
increases. TTDD has a lower state complexity, since sensor
nodes that are not in the grid infrastructure do not need to
maintain states for data dissemination. For a sensor node
that is part of the grid infrastructure, its state complexity is
bounded and independent of the sensor network size or the
number of sources and sinks.
Figure 7: Success rate for different numbers of sinks and sources
Figure 8: Energy for different numbers of sinks and sources
Figure 9: Delay for different numbers of sinks and sources
4. PERFORMANCE EVALUATION
In this section, we evaluate the performance of TTDD
through simulations. We first describe our simulator implementation,
simulation metrics and methodology in Section
4.1. Then we evaluate how environmental factors and control
parameters affect the performance of TTDD in Sections
4.2 to 4.5. The results confirm the efficiency and scalability
of TTDD in delivering data from multiple sources to multiple,
mobile sinks. Section 4.6 shows that TTDD has comparable
performance with Directed Diffusion [10] in stationary-sink
scenarios.
4.1 Metrics and Methodology
We implement the TTDD protocol in ns-2. We use the
basic greedy geographical forwarding with local flooding to
bypass dead ends [6]. To facilitate comparisons with Directed
Diffusion, we use the same energy model as adopted
in its implementation in ns-2.1b8a, and the underlying MAC
is 802.11 DCF. A sensor node's transmitting, receiving and
idling power consumption rates are 0.66 W, 0.395 W and 0.035 W,
respectively.
We use three metrics to evaluate the performance of TTDD.
The energy consumption is defined as the communication
(transmitting and receiving) energy the network consumes;
the idle energy is not counted since it depends largely on
the data generation interval and does not indicate the efficiency
of data delivery. The success rate is the ratio
of the number of successfully received reports at a sink to
the total number of reports generated by a source, averaged
over all source-sink pairs. This metric shows how effective
the data delivery is. The delay is defined as the average
time between the moment a source transmits a packet and
the moment a sink receives the packet, also averaged over
all source-sink pairs. This metric indicates the freshness of
data packets.
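The two per-pair metrics reduce to simple averages; a sketch (function names are illustrative):

```python
def success_rate(received, generated):
    """Per-pair delivery ratio, averaged over all source-sink pairs.
    `received[i]` / `generated[i]` describe the i-th pair."""
    ratios = [r / g for r, g in zip(received, generated)]
    return sum(ratios) / len(ratios)

def average_delay(send_times, recv_times):
    """Mean of per-packet (receive - send) times for one pair."""
    delays = [rx - tx for tx, rx in zip(send_times, recv_times)]
    return sum(delays) / len(delays)
```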
The default simulation setting has 4 sinks and 200 sensor
nodes randomly distributed in a 2000×2000 m² field, of
which 4 nodes are sources. Each simulation run lasts for
200 seconds, and each result is averaged over three random
network topologies. A source generates one data packet per
second. Sinks' mobility follows the standard random waypoint
model. Each query packet has 36 bytes and each data
packet has 64 bytes. The cell size α is set to 600 meters and a
sink's local query flooding range is set to 1.3α; it is larger
than α to handle irregular dissemination node distributions.
4.2 Impact of the numbers of sinks and sources
We first study the impact of the number of sinks and
sources on TTDD's performance. The number of sinks and
sources varies from 1, 2, 4 to 8. Sinks have a maximum
speed of 10 m/s, with a 5-second pause time. Figure 7 shows
the success rates. Each curve is for a specific number of
sources. For each curve, given the fixed number of sources,
the success rate slightly decreases as the number of sinks
increases. For example, in the 2-source case, the success rate
decreases slightly from 0.98 to 0.92 when the number of sinks
reaches 8. For a specific number of sinks, the success rate
decreases more visibly as the number of sources increases. In
the 8-sink case, the success rate decreases from close to 1.0
to about 0.8 as the sources increase to 8. This is because
more sources generate more data packets, which lead to more
contention-induced losses. Despite some fluctuations, the
reasonably high success rates show that TTDD delivers most
data packets successfully from multiple sources to multiple,
mobile sinks, and the delivery quality does not degrade much
as the number of sources or sinks increases.
Figure 8 shows the energy consumption. We make two observations.
First, for each curve, the energy increases as the
number of sinks increases, but the slope tends to decrease
slightly. As the number of sinks doubles, the energy consumed
for queries at the lower-tier flooding typically doubles.
However, in the higher-tier grid forwarding, queries
may be merged into one upstream update message toward
the source and thus lead to energy savings. Therefore, when
the number of sinks doubles, total energy consumption does
not double. This is why the slope tends to decrease. Second,
for a specific number of sinks (e.g., 4 sinks), energy
consumption typically doubles as the number of sources doubles.
This is because the total data packets generated by the
sources increase proportionally and result in a proportional
increase in energy consumption. An exception happens when
the number of sources increases from one to two. This is
because the lower-tier query flooding remains fixed as the
number of sources increases, but it contributes a large portion
of the total energy consumption in the 1-source case.
Figure 9 plots the delay metric, which tends to increase
when there are more sinks or sources. More sources generate
more data packets, and more sinks need more local
query flooding. Both increase the traffic volume and lead
to longer delivery times. Some exception points in the figure
occur because data packets that have been cached in a sink's
immediate dissemination node for some time are included in
the delay calculation.
4.3 Impact of Sinks' Mobility
We next evaluate the impact of the sink's moving speed
on TTDD. In the default simulation setting, we vary the
maximum speed of sinks from 0, 5, 10, to 20 m/s.
Figure 10: Success rate for sinks' mobility
Figure 11: Energy for sinks' mobility
Figure 12: Delay for sinks' mobility
Figure 13: Success rate for sensor node failures
Figure 14: Energy for sensor node failures
Figure 15: Energy consumption with different cell sizes
Figure 10 shows the success rate as the sinks' moving
speed changes. The success rate remains around 0.9 to 1.0
as sinks move faster. This shows that trajectory forwarding
is able to deliver data to mobile sinks without much interruption
even when sinks move at a very high speed of 20 m/s.
Figure 11 shows that the energy consumption increases as
the sinks' moving speed increases. The faster a sink moves,
the more frequently the sink needs a new immediate dissemination
node, and the more frequently the sink floods its
local queries to discover it. However, the slope of the curve
decreases since the higher-tier grid forwarding is much less
affected by the mobility speed. Figure 12 plots the delay for
data delivery, which increases only slightly as the sink moves
faster. This shows that higher-tier grid forwarding effectively
localizes the impact of sink mobility.
4.4 Resilience to Sensor Node Failures
We further study how node failures affect TTDD. In the
default simulation setting of 200 nodes, we allow 5% or 10%
of randomly-chosen nodes to experience sudden, simultaneous
failures at 20 s. A detailed study of the simulation traces
shows that under such scenarios, some dissemination nodes
on the grid fail. Without any repair, the failure of such dissemination
nodes would have stopped data delivery to all
downstream sinks and decreased the success ratio substantially.
However, Figure 13 shows that the success rate drops
only mildly. This confirms that our grid maintenance mechanism
of Section 2.3 works effectively to reduce the impact of node
failures. As node failures become more severe, energy consumption
also decreases due to reduced data packet delivery.
This is shown in Figure 14. Overall, TTDD is quite resilient
to node failures in all simulated scenarios.
4.5 Cell Size
We have explored the impact of various environmental
factors in previous sections. In this section we evaluate how
the cell size affects TTDD. To extend the cell size to larger
values while still having enough cells in the simulated
sensor field, we would have to simulate over 2000
sensor nodes if the node density remained the same. Given
the computing power available to us to run ns-2, we had to
reduce the node density in order to reduce the total number
of simulated sensor nodes. We use 960 sensor nodes in
a 6200×6200 m² field, spaced at regular 200 m distances
to make the simple, greedy geographical forwarding
algorithm still work. There are one source and one sink.
The cell size varies from 400 m to 1800 m in incremental
steps of 200 m. Because of the regular node placement, the
success rate and the delay do not change much, so we focus
on studying the energy consumption trend.
Figure 15 shows that the energy consumption evolves
as predicted in our analysis of Section 3. The energy
first decreases as the cell size increases, because it takes less
energy to build a grid with a larger cell size. Once the cell size
increases to 1000 m, however, the energy starts to increase.
This is because the local query flooding consumes more energy
in large cells. It degrades to global flooding if only one
big cell exists in the entire sensor network.
4.6 Comparison with Directed Diffusion
In this section we compare the performance of TTDD and
Directed Diffusion in the scenario of stationary sinks. We
apply the same scenarios as Section 4.2 (except that sinks are
stationary now) to both TTDD and Directed Diffusion to
study the impact of different numbers of sinks and sources.
All simulations have 200 sensor nodes in a 2000×2000 m²
field. The simulation results are shown in Figures 16-21.
We first look at success rates, shown in Figures 16 and 19.
TTDD and Directed Diffusion have similar success rates,
ranging between 0.8 and 1.0, except that Directed Diffusion
Figure 16: Success rate for TTDD of stationary sinks
Figure 17: Energy for TTDD of stationary sinks
Figure 18: Delay for TTDD of stationary sinks
Figure 19: Success rate for Directed Diffusion
Figure 20: Energy for Directed Diffusion
Figure 21: Delay for Directed Diffusion
has slightly larger fluctuations in some cases.
Figures 17 and 20 plot the energy consumption for TTDD
and Directed Diffusion. For the same number of sinks, energy
consumption nearly doubles as the number of sources
doubles (except for the case of 1 to 2 sources). Given a
specific number of sources, more energy is consumed as the
number of sinks increases, but the slope of each curve tends
to decrease. This shows that both TTDD and Directed Diffusion
scale similarly with the number of sources and stationary
sinks in terms of energy consumption. When the number
of sinks is small (1 or 2 sinks), TTDD consumes less energy
than Directed Diffusion. This is because query flooding in
TTDD is confined to a local cell, while in Directed Diffusion
a query propagates throughout the network field. For a larger
number of sinks (say, 8 sinks), Directed Diffusion aggregates
queries from different sinks more aggressively; therefore, its
energy consumption increases less rapidly.
Figures 18 and 21 plot the delay experienced by TTDD
and Directed Diffusion, respectively. When the number of
sources is 1, 2, or 4, they have comparable delay values.
When the number of sources increases to 8, TTDD experiences
much lower delay. This is because in Directed Diffusion
data forwarding paths from different sources may cross
or overlap with each other anywhere, so there is more
interference when the number of sources is large, whereas
in TTDD each source has its own grid, so data flows on
different grids do not interfere with each other as much.
5. DISCUSSIONS
In this section, we comment on several design issues and
discuss future work.
Knowledge of the cell size Sensor nodes need to know
the cell size so as to build grids once they become sources.
This knowledge can be specified through some external
mechanism. One option is to include it in the mission
statement message, which notifies each sensor of its sensing
task. The mission statement message is flooded to each sensor
at the beginning of the network operation or during a
mission update phase. The sink also needs to specify the
maximum distance a query should be flooded; it can obtain
this value from its neighbors. To deal with irregular local topology,
where dissemination nodes may fall beyond a fixed flooding
scope, the sink may apply an expanding ring search to reach the
dissemination node.
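The expanding ring search mentioned above can be sketched as follows; the `flood` callback and the TTL-doubling policy are illustrative assumptions rather than the paper's message format:

```python
def expanding_ring_search(flood, max_ttl, start_ttl=1):
    """Repeatedly flood a query with a growing TTL until a
    dissemination node replies.

    flood(ttl) abstracts one TTL-limited local flood; it returns
    True if a dissemination node was reached within `ttl` hops.
    Returns the successful TTL, or None if max_ttl is exhausted.
    """
    ttl = start_ttl
    while ttl <= max_ttl:
        if flood(ttl):
            return ttl
        ttl *= 2  # widen the search ring and retry
    return None
```

Doubling the scope keeps the total flooding cost within a constant factor of the cost of the final, successful ring.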
Greedy geographical routing failures Greedy geographical
forwarding may fail in scenarios where no greedy
path exists, that is, where a path requires temporarily forwarding
the packet away from the destination. This problem
has been solved by several approaches such as GPSR [11].
However, due to the complexity of the complete solutions
and the fact that a greedy path almost always exists in
a densely deployed sensor network [12], we use only a very
basic version of greedy forwarding. In the rare cases where
no greedy path exists, that is, the packet is forwarded
to a sensor node without a neighbor that is closer
to the destination, the node locally floods the packet to
get around the dead end [6]. We find that this simple
technique works quite well in TTDD.
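The basic greedy rule and its dead-end condition can be sketched as below; representing nodes by coordinates and an explicit neighbor table is an illustrative assumption:

```python
import math

def dist(a, b):
    # Euclidean distance between two (x, y) positions
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(node, dest, neighbors, positions):
    """Return the neighbor strictly closest to the destination, or None.

    None signals a dead end: no neighbor is closer to the destination
    than the current node, so the protocol falls back to locally
    flooding the packet to get around it.
    """
    here = dist(positions[node], positions[dest])
    closer = [n for n in neighbors[node]
              if dist(positions[n], positions[dest]) < here]
    if not closer:
        return None  # dead end; caller locally floods the packet
    return min(closer, key=lambda n: dist(positions[n], positions[dest]))
```

On the regular 200m placement used in the cell-size experiments a strictly closer neighbor always exists, which is why this basic variant suffices there.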
Mobile stimulus TTDD focuses on handling mobile sinks.
In the scenario of a mobile stimulus, the sources along the
stimulus' trail may each build a grid. To reduce the overhead
of frequent grid construction, a source can reuse a grid already
built by another source. Specifically, before a source
starts to build its grid, it locally floods a "Grid Discovery"
message within a scope of about one cell size to probe for any
existing grid for the same stimulus. A dissemination node
on the existing grid replies to the new source. The source
can then use the existing grid for its data dissemination. We
leave this as future work.
Non-uniform grid layout So far we have assumed no a priori
knowledge of sink locations. For this case, a uniform
grid is constructed to distribute the forwarding state as
evenly as possible. However, this even distribution has the
drawback of wasting a certain amount of resources in
regions where sinks never roam. This problem can be
partially addressed by learning or predicting the sinks'
locations. If the sinks' locations are available, TTDD can
be further optimized to build a globally non-uniform grid
that exists only in regions where sinks currently
reside or are about to move into. The accuracy of the estimates
of current sink locations, or of the predictions of their future
locations, will affect the performance. We intend to further
explore this aspect in the future.
Mobile sensor node This paper considers a sensor network
that consists of stationary sensor nodes only. It is
possible to extend this design to work with sensor nodes of
low mobility; however, the grid state would then have to be handed over
between mobile dissemination nodes. Fully addressing data
dissemination in highly mobile sensor networks requires new
mechanisms and is beyond the scope of this paper.
Sink mobility speed TTDD addresses sink mobility by
localizing the mobility impact on data dissemination within
a single cell and handling the intra-cell mobility through
trajectory forwarding. However, there is a limit to how much
sink mobility our approach can accommodate: the sink cannot
move faster than the local forwarding state can be updated
(within a cell size). The two-tier forwarding is best suited
to "localized" mobility patterns, in which a sink
does not change its primary agent frequently.
Data aggregation We assume that a group of nodes that detect
an object or event of interest can collaboratively
process the sensing data so that only one node generates a report
as a source. Although TTDD would benefit further from
en-route semantic data aggregation [10], we do not evaluate
this performance gain since it is highly dependent on the
specific applications and their semantics.
6. RELATED WORK
Sensor networks have been a very active research field in
recent years. Energy-efficient data dissemination is among
the first set of research issues being addressed. SPIN [7] is
one of the early works that focus on efficient dissemination
of an individual sensor's observations to all the sensors in a
network. SPIN uses meta-data negotiation to eliminate the
transmission of redundant data. More recent work includes
Directed Diffusion [10], the Declarative Routing Protocol (DRP)
[5], and GRAB [20]. Directed Diffusion and DRP are similar
in that they both take the data-centric naming approach
to enable in-network data aggregation. Directed Diffusion
employs the techniques of initial low-rate data flooding and
gradual reinforcement of better paths to accommodate certain
levels of network and sink dynamics. GRAB targets
robust data delivery in an extremely large sensor network
made of highly unreliable nodes. It uses a forwarding
mesh instead of a single path, where the mesh's width can
be adjusted on the fly for each data packet.
While such previous work addresses the issue of delivering
data to stationary or very low-mobility sinks, the TTDD design
targets efficient data dissemination to multiple sinks, both
stationary and mobile, in large sensor networks. TTDD
differs from the previous work in three fundamental ways.
First of all, TTDD demonstrates the feasibility and benefits
of building a virtual grid structure to support efficient data
dissemination in large-scale sensor fields. A grid structure
keeps forwarding state only in the nodes around dissemination
points, and only the nodes between adjacent grid
points forward queries and data. Depending on the chosen
cell size, the number of nodes that keep state or forward
messages can be a small fraction of the total number of sensors
in the field. Second, this grid structure enables mobile
sinks to continuously receive data on the move by flooding
queries within a local cell only. Such local floods minimize
the overall network load and the amount of energy
needed to maintain data-forwarding paths. Third, the TTDD
design incorporates efforts from both sources and sinks to
accomplish efficient data delivery to mobile sinks; sources
in TTDD proactively build the grid structure to enable mobile
sinks to learn about and receive sensed data quickly and
efficiently.
Rumor routing [3] avoids flooding of either queries or data.
A source sends out "agents" which randomly walk through the sensor
network to set up event paths. Queries also randomly
walk through the sensor field until they meet an event path. Although
this approach shares the similar idea of making data
sources play more active roles, rumor routing does not handle
mobile sinks. GEAR [21] makes use of geographical location
information to route queries to specific regions of a
sensor field. If the regions of data sources are known, this
scheme provides energy savings over network flooding approaches
by limiting the flooding to a geographical region;
however, it does not handle the case where the destination
location is not known in advance.
TTDD also bears a certain similarity to studies on self-configuring
ad hoc wireless networks. GAF [19] proposes
to build a geographical grid to turn off nodes for energy
conservation. The GAF grid is pre-defined and synchronized
across the whole sensor field, with the cell size determined by
the communication range of the nodes' radios. The TTDD grid
differs from that of GAF in that the former is constructed
dynamically as needed by data sources, and we use it for the
different purpose of limiting the impact of sink mobility.
There is a rich literature on mobile ad hoc network clustering
algorithms [2, 13, 14, 16]. Although they seem to
share the similar approach of building virtual infrastructures
for scalable and efficient routing, TTDD targets communication
that is data-oriented, not communication based on underlying
network addressing schemes. Moreover, TTDD builds
the grid structure over stationary sensor nodes using location
information, which leads to very low overhead in the
construction and maintenance of the infrastructure. In contrast,
node mobility in a mobile ad hoc network leads to
significantly higher cost in building and maintaining virtual
infrastructures, thus offsetting the benefits.
Perhaps TTDD can be most clearly described by contrasting
its design with that of DVMRP [17]. DVMRP supports
data delivery from multiple sources to multiple receivers and
faces the same challenge as TTDD, that is, how to make
all the sources and sinks meet without prior knowledge
of the locations of either. DVMRP solves the problem
by letting each source flood data periodically over the entire
network, so that all the interested receivers can graft onto
the multicast tree along the paths the data packets come from.
Such a source-flooding approach handles sink mobility well,
but at a very high cost. TTDD inherits the source-proactive
approach with a substantially reduced cost. In TTDD
a data source informs only a small set of sensors of its existence
by propagating the information over a grid structure
instead of notifying all the sensors. Instead of sending data
over the grid, TTDD simply stores the source information;
the data stream is delivered downstream along a specific grid branch or
branches only upon receiving queries from one or more sinks
in that direction or directions.
7. CONCLUSION
In this paper we described TTDD, a two-tier data dissemination
design, which enables efficient data dissemination in large-scale
wireless sensor networks with sink mobility. Instead of
passively waiting for queries from sinks, TTDD exploits the
property that sensors are stationary and location-aware to
let each data source build and maintain a grid structure in an
efficient way. Sources proactively propagate the existence
information of sensing data over the grid structure,
so that each sink's query flooding is confined to a local
grid cell only. Queries are forwarded upstream to data
sources along specific grid branches, pulling sensing data
downstream toward each sink. Our analysis and extensive
simulations have confirmed the effectiveness and efficiency
of the proposed design, demonstrating the feasibility and
benefits of building an infrastructure in stationary sensor
networks.
8. ACKNOWLEDGMENT
We thank the members of the Wireless Networking
Group (WiNG) and the Internet Research Lab (IRL) at UCLA
for their help during the development of this project and
their invaluable comments on many rounds of earlier drafts.
Special thanks to Gary Zhong for his help with the ns-2 simulator,
and to Jesse Cheng for his verification of the simulation
results. We would also like to thank the anonymous reviewers
for their constructive criticisms.
9. REFERENCES
--R
Recursive Position Estimation in Sensor Networks.
Distributed Clustering for Ad Hoc Networks.
Rumor Routing Algorithm for Sensor Networks.
Application Driver for Wireless Communications Technology.
Declarative ad-hoc sensor networking.
Routing and Addressing Problems in Large Metropolitan-scale Internetworks.
Adaptive Protocols for Information Dissemination in Wireless Sensor Networks.
Location Systems for Ubiquitous Computing.
System Architecture Directions for Networked Sensors.
Directed Diffusion.
GPSR: Greedy Perimeter Stateless Routing for Wireless Networks.
A Scalable Location Service for Geographic Ad Hoc Routing.
Adaptive Clustering for Mobile Wireless Networks.
A Mobility-Based Framework for Adaptive Clustering in Wireless Ad-Hoc Networks.
Integrated Network Sensors.
CEDAR: Core Extraction Distributed Ad hoc Routing.
Distance Vector Multicast Routing Protocol.
A New Location Technique for the Active Office.
Geography Informed Energy Conservation for Ad Hoc Routing.
GRAdient Broadcast: A Robust Data Delivery Protocol for Large Scale Sensor Networks.
A Recursive Data Dissemination Protocol for Wireless Sensor Networks.
Xiang-Yang Li , Wen-Zhan Song , Weizhao Wang, A unified energy-efficient topology for unicast and broadcast, Proceedings of the 11th annual international conference on Mobile computing and networking, August 28-September 02, 2005, Cologne, Germany | two-tier model;sensor networks;sink mobility |
570684 | An on-demand secure routing protocol resilient to byzantine failures. | An ad hoc wireless network is an autonomous self-organizing system of mobile nodes connected by wireless links where nodes not in direct range can communicate via intermediate nodes. A common technique used in routing protocols for ad hoc wireless networks is to establish the routing paths on-demand, as opposed to continually maintaining a complete routing table. A significant concern in routing is the ability to function in the presence of byzantine failures which include nodes that drop, modify, or mis-route packets in an attempt to disrupt the routing service. We propose an on-demand routing protocol for ad hoc wireless networks that provides resilience to byzantine failures caused by individual or colluding nodes. Our adaptive probing technique detects a malicious link after log n faults have occurred, where n is the length of the path. These links are then avoided by multiplicatively increasing their weights and by using an on-demand route discovery protocol that finds a least weight path to the destination. |
Ad hoc wireless networks are self-organizing multi-hop
wireless networks where all the hosts (nodes) take part in
the process of forwarding packets. Ad hoc networks can
easily be deployed since they do not require any fixed in-
frastructure, such as base stations or routers. Therefore,
they are highly applicable to emergency deployments, natural
disasters, military battle fields, and search and rescue
missions.
A key component of an ad hoc wireless network is an efficient
routing protocol, since all of the nodes in the network
act as routers. Some of the challenges faced in ad
hoc wireless networks include high mobility and constrained
power resources. Consequently, ad hoc wireless routing protocols
must converge quickly and use battery power efficiently.
Traditional proactive routing protocols (link-state
[1] and distance-vector [1]), which use periodic updates or
beacons to trigger event-based updates, are less suitable
for ad hoc wireless networks because they constantly
consume power throughout the network, regardless of the
presence of network activity, and are not designed to track
topology changes occurring at a high rate.
Many of the security threats to ad hoc wireless routing
protocols are similar to those of wired networks. For example,
a malicious node may advertise false routing information,
try to redirect routes, or perform a denial-of-service
attack by engaging a node in resource-consuming activities
such as routing packets in a loop. Furthermore, due to their
cooperative nature and the broadcast medium, ad hoc wireless
networks are more vulnerable to attacks in practice [4].
Although one might assume that once authenticated, a
node should be trusted, there are many scenarios where this
is not appropriate. For example, when ad hoc networks are
used in a public Internet access system (airports or conferences),
users are authenticated by the Internet service
provider, but this authentication does not imply trust between
the individual users of the service. Also, mobile devices
are easier to compromise because of reduced physical
security, so complete trust should not be assumed.
Our contribution. We focus on providing routing survivability
under an adversarial model where any intermediate
node or group of nodes can perform byzantine attacks such
as creating routing loops, misrouting packets onto non-optimal
paths, or selectively dropping packets (black hole). Only the
source and destination nodes are assumed to be trusted. We
propose an on-demand routing protocol for wireless ad hoc
networks that operates under this strong adversarial model.
It is provably impossible under certain circumstances, for
example when a majority of the nodes are malicious, to attribute
a byzantine fault occurring along a path to a specific
node, even using expensive and complex byzantine agreement.
Our protocol circumvents this obstacle by avoiding
the assignment of "guilt" to individual nodes. Instead, it reduces
the possible fault location to two adjacent nodes along
a path, and attributes the fault to the link between them.
As long as a fault-free path exists between two nodes, they
can communicate reliably even if an overwhelming majority
of the network acts in a byzantine manner.
Our protocol consists of the following phases:
- Route discovery with fault avoidance. Using flooding
and a faulty link weight list, this phase finds a least
weight path from the source to the destination.
- Byzantine fault detection. This phase discovers faulty
links on the path from the source to the destination.
Our adaptive probing technique identifies a faulty link
after log n faults have occurred, where n is the length
of the path.
- Link weight management. This phase maintains a weight
list of links discovered by the fault detection algorithm.
A multiplicative increase scheme is used to penalize
links, which are then rehabilitated over time. This list
is used by the route discovery phase to avoid faulty
paths.
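The log n bound comes from placing probes adaptively: after each observed fault the source halves the segment of the path under suspicion, as in a binary search. The sketch below illustrates that idea together with the multiplicative weight penalty; the `link_ok` probe oracle and the weight table are illustrative abstractions of the protocol's authenticated probe replies, not its actual messages:

```python
def locate_faulty_link(path, link_ok):
    """Binary-search a source-to-destination path for a faulty link.

    path: list of nodes [source, ..., destination].
    link_ok(i): True if probe replies confirm delivery up to path[i]
    (an abstraction of one adaptive probing round).
    Each round halves the suspected segment, so the faulty link is
    pinned down after about log2(len(path)) probing rounds.
    """
    lo, hi = 0, len(path) - 1  # delivery known good at lo, bad at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if link_ok(mid):
            lo = mid
        else:
            hi = mid
    return (path[lo], path[hi])  # fault attributed to this link


def penalize(weights, link, factor=2.0):
    """Multiplicatively raise a detected link's weight so that the
    least-weight route discovery avoids it; decaying the weights over
    time (not shown) rehabilitates links that behave again."""
    weights[link] = weights.get(link, 1.0) * factor
    return weights
```

Because the fault is pinned to a link rather than a node, no byzantine agreement about individual guilt is needed, matching the design rationale above.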
The rest of the paper is organized as follows. Section 2
summarizes related work. We further define the problem
we are addressing and the model we consider in Section 3.
We then present our protocol in Section 4 and provide an
analysis in Section 5. We conclude and suggest directions
for future work in Section 6.
2. RELATED WORK
Secure routing protocols for ad hoc wireless networks is a
fairly new topic. Although routing in ad hoc wireless networks
has unique aspects, many of the security problems
faced in ad hoc routing protocols are similar to those faced
by wired networks. In this section, we review the work done
in securing routing protocols for both ad hoc wireless and
wired networks.
One of the problems addressed by researchers is providing
an effective public key infrastructure in an ad hoc wireless
environment, which by nature is decentralized. Examples
of such work include the following. Hubaux et al. [5] proposed
a completely decentralized public-key distribution system
similar to PGP [6]. Zhou and Haas [7] explored threshold
cryptography methods in a wireless environment. Brown et
al. [8] showed how PGP, enhanced by employing elliptic curve
cryptography, is a viable option for constrained wireless devices.
A more general trust model, where levels of security are defined
for paths carrying specific classes of traffic, is suggested
in [9]. The paper discusses very briefly some of the cryptographic
techniques that can be used to secure on-demand
routing protocols: shared-key encryption associated with a
security level, and digital signatures for data source authentication.
concern in routing than confidentiality. Papadimitratos and
Haas showed in [11] how impersonation and replay attacks
can be prevented for on-demand routing by disabling route
caching and providing end-to-end authentication using an
HMAC [12] primitive which relies on the existence of security
associations between sources and destinations. Dahill
et al.[16] focus on providing hop-by-hop authentication for
the route discovery stage of two well-known on-demand pro-
tocols: AODV [2] and DSR [3], relying on digital signatures.
Other significant works include SEAD [13] and Ariadne [4]
that provide e#cient secure solutions for the DSDV [14] and
DSR [3] routing protocols, respectively. While SEAD uses
one-way hash chains to provide authentication, Ariadne uses
a variant of the Tesla [15] source authentication technique
to achieve similar security goals.
Marti et al.[18] address a problem similar to the one we
consider, survivability of the routing service when nodes selectively
drop packets. They take advantage of the wireless
cards' promiscuous mode and have trusted nodes monitoring
their neighbors. Links with an unreliable history are
avoided in order to achieve robustness. Although the idea of
using the promiscuous mode is interesting, this solution does
not work well in multi-rate wireless networks because nodes
might not hear their neighbors forwarding communication
due to different modulations. In addition, this method is
not robust against collaborating adversaries.
Also, relevant work has been done in the wired network
community. Many researchers focused on securing classes
of routing protocols such as link-state [10, 19, 20, 21] and
distance-vector [22]. Others addressed in detail the security
issues of well-known protocols such as OSPF [23] and
BGP [24]. The problem of source authentication for routing
protocols was explored using digital signatures [23] or
symmetric cryptography based methods: hash chains [10],
chains of one-time signatures [20] or HMAC [21]. Intrusion
detection is another topic that researchers focused on, for
generic link-state [25, 26] or OSPF [27].
Perlman [28] designed the Network-layer Protocol with
Byzantine Robustness (NPBR) which addresses denial of
service at the expense of flooding and digital signatures.
The problem of byzantine nodes that simply drop packets
(black holes) in wired networks is explored in [29, 30]. The
approach in [29] is to use a number of trusted nodes to probe
their neighbors, assuming a limited model and without discussing
how probing packets are disguised from the adversary.
A different technique, flow conservation, is used in [30].
Based on the observation that for a correct node the number
of bytes entering a node should be equal to the number of
bytes exiting the node (within a threshold), the authors suggest
a scheme where nodes monitor the flow in the network.
This is done by requiring each node to have a copy of the
routing tables of its neighbors and to report the incoming
and outgoing data. Although interesting, the scheme does
not work when two or more adversarial nodes collude.
3. PROBLEM DEFINITION AND MODEL
In this section we discuss the network and security assumptions
we make in this paper and present a more precise
description of the problem we are addressing.
3.1 Network Model
This work relies on a few specific network assumptions.
Our protocol requires bi-directional communication on all
links in the network. This is also a requirement of most
wireless MAC protocols, including 802.11 [31] and MACAW
[32]. We focused on providing a secure routing protocol,
which addresses threats to the ISO/OSI network layer. We
do not specifically address attacks against lower layers. For
example, the physical layer can be disrupted by jamming,
and MAC protocols such as 802.11 can be disrupted by attacks
using the special RTS/CTS packets. Though MAC
protocols can detect packet corruption, we do not consider
this a substitute for cryptographic integrity checks [33].
3.2 Security Model and Considered Attacks
In this work we consider only the source and the destination
to be trusted. Nodes that cannot be authenticated do
not participate in the protocol, and are not trusted. Any
intermediate node on the path between the source and destination
can be authenticated and can participate in the
protocol, but may exhibit byzantine behavior. The goal of
our protocol is to detect byzantine behavior and avoid it.
We define byzantine behavior as any action by an authenticated
node that results in disruption or degradation of the
routing service. We assume that an intermediate node can
exhibit such behavior either alone or in collusion with other
nodes. More generally, we use the term fault to refer to
any disruption that causes significant loss or delay in the
network. A fault can be caused by byzantine behavior, external
adversaries, lower layer influences, and certain types
of normal network behavior such as bursting traffic.
An adversary or group of adversaries can intercept, modify,
or fabricate packets, create routing loops, drop packets
selectively (often referred to as a black hole), artificially delay
packets, route packets along non-optimal paths, or make
a path look either longer or shorter than it is. All the above
attacks result in disruption or degradation of the routing
service. In addition, they can induce excess resource consumption,
which is particularly problematic in wireless networks.
There are strong attacks that our protocol cannot prevent.
One of these strong attacks, referred to as a wormhole
[4], is where two attackers establish a path and tunnel packets
from one to another. For example, the attackers can
tunnel route request packets that can arrive faster than the
normal route request flood. This may result in non-optimal
adversarial controlled routing paths. Our protocol addresses
this attack by treating the wormhole as a single link which
will be avoided if it exhibits byzantine behavior, but does not
prevent the wormhole formation. Also, we do not address
traditional denial of service attacks which are characterized
by packet injection with the goal of resource consumption.
Whenever possible, our protocol uses efficient cryptographic
primitives. This requires pairwise shared keys (1) which are
established on-demand. The public-key infrastructure used
for authentication can be either completely distributed (as
described in [5]), or Certificate Authority (CA) based. In
the latter case, a distributed cluster of peer CAs sharing a
common certificate and revocation list can be deployed to
improve the CA's availability.
(1) We discourage group shared keys since this is an invitation
for impersonation in a cooperative environment.
[Figure 1: Secure Routing Protocol Phases. Route discovery
outputs a path, byzantine fault detection outputs a faulty
link, and link weight management outputs the weight list
fed back into route discovery.]
3.3 Problem Definition
The goal of this work is to provide a robust on-demand
ad hoc routing service which is resilient to byzantine behavior
and operates under the network and security models
described in Sections 3.1 and 3.2. We attempt to bound the
amount of damage an adversary or group of adversaries can
cause to the network.
4. SECURE ROUTING PROTOCOL
Our protocol establishes a reliability metric based on past
history and uses it to select the best path. The metric is
represented by a list of link weights where high weights correspond
to low reliability. Each node in the network maintains
its own list, referred to as a weight list, and dynamically
updates that list when it detects faults. Faulty links
are identified using a secure adaptive probing technique that
is embedded in the normal packet stream. These links are
avoided using a secure route discovery protocol that incorporates
the reliability metric.
More specifically, our routing protocol can be separated
into three successive phases, each phase using as input the
output from the previous (see Figure 1):
. Route discovery with fault avoidance. Using flooding,
cryptographic primitives, and the source's weight list
as input, this phase finds and outputs the full least
weight path from the source to the destination.
. Byzantine fault detection. The goal of this phase is
to discover faulty links on the path from the source
to the destination. This phase takes as input the full
path and outputs a faulty link. Our adaptive probing
technique identifies a faulty link after log n faults have
occurred, where n is the length of the path. Cryptographic
primitives and sequence numbers are used to
protect the detection protocol from adversaries.
. Link weight management. This phase maintains a weight
list of links discovered by the fault detection algorithm.
A multiplicative increase scheme is used to
penalize links which are then rehabilitated over time.
The weight list is used by the route discovery phase to
avoid faulty paths.
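The data flow between the three phases forms a feedback loop. The following toy Python sketch (all names hypothetical, not from the paper's implementation; fault detection is abstracted away) illustrates how a convicted link's doubled weight eventually steers route discovery onto a fault-free path:

```python
# Toy sketch of the three-phase feedback loop (hypothetical names).
def link(a, b):
    # Links are undirected, so store them as frozensets of endpoints.
    return frozenset((a, b))

def path_weight(path, weights):
    # Weight of a path: sum of link weights; unlisted links weigh one.
    return sum(weights.get(link(a, b), 1) for a, b in zip(path, path[1:]))

def route_discovery(paths, weights):
    # Phase 1, abstracted: return the least-weight currently known path.
    return min(paths, key=lambda p: path_weight(p, weights))

def update_weights(weights, faulty_link):
    # Phase 3: multiplicative increase doubles a convicted link's weight.
    weights[faulty_link] = 2 * weights.get(faulty_link, 1)

# Two candidate routes from S to D; node M byzantinely drops packets.
paths = [["S", "M", "D"], ["S", "A", "B", "D"]]
weights = {}
chosen = route_discovery(paths, weights)     # the short path via M wins first
while "M" in chosen:                         # phase 2, abstracted: detection
    update_weights(weights, link("S", "M"))  # repeatedly convicts link S-M
    chosen = route_discovery(paths, weights)
print(chosen)
```

After two convictions the penalized path through M costs more than the fault-free alternative, so discovery switches routes; M is never excluded from the network, only avoided.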
4.1 Route Discovery with Fault Avoidance
Our route discovery protocol floods both the route request
and the response in order to ensure that if any fault-free path
exists in the network, a path can be established. However,
there is no guarantee that the established path is free of
adversarial nodes.
[Figure 2: Route Discovery Algorithm (pseudocode; garbled
in extraction). Helper procedures include CreateSignSend,
which creates, signs, and broadcasts a message; VerifySignature;
Find(list, item); InsertList(list, item); UpdateList(list, item);
and LinkWeight, which returns the listed weight of the link
between two nodes, or one if the link is not listed.]
The initial flood is required to guarantee
that the route request reaches the destination. The response
must also be flooded because if it was unicast, a single adversary
could prevent the path from being established. If an
adversary was able to prevent routes from being established,
the fault detection algorithm would be unable to detect and
avoid the faulty link since it requires a path as input in order
to operate.
A digital signature is used to authenticate the source.
This is required to prevent unauthorized nodes from initiating
resource consuming route requests. An unauthorized
route request would fail verification and be dropped by each
of the requesting node's immediate neighbors, preventing
the request from flooding through the network.
At the completion of the route discovery protocol, the
source is provided with the complete path to the destination.
Many on-demand routing protocols use route caching
by intermediate nodes as an optimization; we do not consider
it in this work because of the security implications. We
intend to address route caching optimizations with strong
security semantics in a future work.
Our route discovery protocol uses link weights to avoid
faults. A weight list is provided by the link weight management
phase (Section 4.3). The route discovery protocol
chooses a route that is a minimum weight path between the
source and the destination. This path is found during a
flood by accumulating the cost hop by hop and forwarding
the flood only if the new cost is less than the previously
forwarded cost. The protocol uses digital signatures at each
hop to prevent an adversary from specifying an arbitrary
path. For example, it can stop an adversary from inventing
a short path in an attempt to draw packets into a black
hole. Since the cost associated with signing a message at
each hop is very high, the weights are accumulated as part
of the response flood instead of the request flood in order to
minimize the cost of route requests to unreachable destinations.
If only the source verifies all of the weights and signatures,
then the protocol becomes vulnerable to attacks on
the response flood propagation. The adversaries could block
correct information from reaching the source by propagating
low-cost fabricated responses. The source can ignore non-authentic
responses; however, since intermediate nodes only
re-send lower-cost information, a valid response would never
reach the source. Therefore, each intermediate node must
verify the weights and the signatures carried by a response,
in order to guarantee that a path will be established.
An adversary can still influence the path selection by creating
what we refer to as virtual links. A virtual link is
formed when adversaries form wormholes, as described in
Section 3.2, or any other type of shortcuts in the network.
A virtual link can be created by deleting one or more hops
from the end of the route response. Our detection algorithm
(Section 4.2) can identify and avoid virtual links if they exhibit
byzantine behavior, but our route discovery algorithm
does not prevent their formation. We present a detailed
analysis of the e#ect of virtual links in Section 5.
As part of the route discovery protocol, each node maintains
a list of recent requests and responses that it has already
forwarded. The following five steps comprise the route
discovery protocol (see also Figure 2):
I. Request Initiation. The source creates and signs a request
that includes the destination, the source, a sequence
number, and a weight list (see Line 1, Figure 2). The source
then broadcasts this request to its neighbors. The source's
signature allows the destination and intermediate nodes to
authenticate the request and prevents an adversary from
creating a false route request.
II. Request Propagation. The request propagates to the
destination via flooding which is performed by the intermediate
nodes as follows. When receiving a request, the node
first checks its list of recently seen requests for a matching
request (one with the exact same destination, source, and
request identifiers). If there is no matching request in its list,
and the source's signature is valid, it stores the request in
its list and rebroadcasts the request (see Lines 2-10, Figure
2). If there is a matching request, the node does nothing.
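Step II can be sketched as flooding with duplicate suppression keyed on the (destination, source, sequence) triple. The helper below is hypothetical (not the paper's code); signature checking is abstracted into a `verify_sig` callback:

```python
# Hypothetical sketch of request propagation (step II).
def propagate_request(node_state, req, verify_sig, rebroadcast):
    key = (req["destination"], req["source"], req["sequence"])
    if key in node_state["seen_requests"]:
        return False                      # matching request seen: do nothing
    if not verify_sig(req):
        return False                      # unauthorized request: drop it
    node_state["seen_requests"].add(key)  # remember, then flood onward
    rebroadcast(req)
    return True

# Usage: a node forwards a signed request once, then suppresses repeats.
sent = []
state = {"seen_requests": set()}
req = {"destination": "D", "source": "S", "sequence": 1, "sig": "ok"}
propagate_request(state, req, lambda r: r["sig"] == "ok", sent.append)
propagate_request(state, req, lambda r: r["sig"] == "ok", sent.append)
```

The second call is a no-op, which is what keeps the flood from circulating forever, and an unverifiable request dies at the first honest neighbor.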
III. Request Receipt / Response Initiation. Upon receiving
a new request from a source for the first time, the destination
verifies the authenticity of the request, creates and
signs a response that contains the source, the destination, a
response sequence number and the weight list from the request
packet. The destination then broadcasts this response
(see Lines 2-10, Figure 2).
IV. Response Propagation. When receiving a response,
the node computes the total weight of the path by summing
the weight of all the links on the specified path to this
node (Lines 12-18, Figure 2). If the total weight is less than
any previously forwarded matching response (same source,
destination and response identifiers), the node verifies the
signatures of the response header and every hop listed on
the packet so far (2) (Lines 28-31, Figure 2). If the entire
packet is verified, the node appends its identifier to the end
of the packet, signs the appended packet, and broadcasts
the modified response (Lines 35-36, Figure 2).
V. Response Receipt. When the source receives a response,
it performs the same computation and verification as the
intermediate nodes as described in the response propagation
step. If the path in the response is better than the best path
received so far, the source updates the route used to send
packets to that specific destination (see Line 33, Figure 2).
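Step IV is the heart of the discovery flood. Below is a minimal sketch under stated assumptions: all names are hypothetical, and per-hop digital signatures are simulated with per-node HMACs purely to keep the example self-contained (the protocol itself uses real signatures):

```python
# Sketch of response propagation; HMACs stand in for digital signatures.
import hmac, hashlib

KEYS = {n: n.encode() * 4 for n in "SABD"}    # toy per-node keys

def sign(node, payload):
    # Toy stand-in for a digital signature.
    return hmac.new(KEYS[node], payload, hashlib.sha256).digest()

def hop_payload(header, hops):
    # Each hop signs the response header plus the hop identifiers so far.
    return (header + "|" + ",".join(h[0] for h in hops)).encode()

def verify_response(header, hops):
    # Verify the signature added by every hop listed on the packet so far.
    for i, (node, sig) in enumerate(hops):
        if not hmac.compare_digest(sig, sign(node, hop_payload(header, hops[: i + 1]))):
            return False
    return True

def forward_response(node, header, hops, weights, best_weight):
    # Accumulate link weights along the recorded path, plus this hop.
    path = [h for h, _ in hops] + [node]
    total = sum(weights.get(frozenset(e), 1) for e in zip(path, path[1:]))
    if total >= best_weight or not verify_response(header, hops):
        return None, best_weight          # worse cost or bad signatures: drop
    hops = hops + [(node, b"")]
    hops[-1] = (node, sign(node, hop_payload(header, hops)))
    return hops, total                    # append id, sign, re-broadcast
```

Because every intermediate node re-verifies the whole signature chain before re-broadcasting, a fabricated low-cost response is dropped immediately rather than suppressing the legitimate one.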
4.2 Byzantine Fault Detection
Our detection algorithm is based on using acknowledgments
(acks) of the data packets. If a valid ack is not received
within a timeout, it is assumed that the packet has
been lost. Note that this definition of loss includes both malicious
and non-malicious causes. A loss can be caused by
packet drop due to buffer overflow, packet corruption due to
interference, a malicious attempt to modify the packet con-
tents, or any other event that prevents either the packet or
the ack from being received and verified within the timeout.
A network operating "normally" exhibits some amount of
loss. We define a threshold that sets a bound on what is
considered a tolerable loss rate. In a well behaved network
the loss rate should stay below the threshold. We define a
fault as a loss rate greater than or equal to the threshold.
(2) To maximize the performance of multiple verifications,
we use RSA keys with a low public exponent.
The value of the threshold also specifies the amount of loss
that an adversary can create without being detected. Hence,
the threshold should be chosen as low as possible, while still
greater than the normal loss rate. The threshold value is
determined by the source, and may be varied independently
for each route to accommodate di#erent situations, but this
work uses a fixed threshold.
While this threshold scheme may seem overly "simple",
we would like to emphasize that our protocol provides fault
avoidance and never disconnects nodes from the network.
Thus, the impact of false positives, due to normal events
such as bursting traffic, is drastically reduced. This provides
a much more flexible solution than one where nodes
are declared faulty and excluded from the network. In addition,
this avoidance property allows the threshold to be
set very low, where it may be periodically triggered by false
positives, without severely impacting network performance
or affecting network connectivity.
A substantial advantage of our protocol is that it limits
the overhead to a minimum under normal conditions. Only
the destination is required to send an ack when no faults
have occurred. If losses exceed the threshold, the protocol
attempts to locate the faulty link. This is achieved by requiring
a dynamic set of intermediate nodes, in addition to
the destination node, to send acks to the source.
Normal topology changes occur frequently in ad hoc wireless
networks. Although our detection protocol locates "faulty
links" that are caused by these changes, an optimized mechanism
for detecting them would decrease the overhead and
detection time. Any of the mechanisms described in the
route maintenance section of the DSR protocol [3], for instance
MAC layer notification, can be used as an optimized
topology change detector. When our protocol receives notification
from such a detector, it reacts by creating a route
error message that is propagated along the path back to the
source. The node that generates this message signs it in
order to provide integrity and authentication. Upon receipt
of an authenticated route error message, the source passes
the faulty link to the link weight management phase. Note
that an intermediate node exhibiting byzantine behavior can
always incriminate one of its links, so adding a mechanism
that allows it to explicitly declare one of its links faulty, does
not weaken the security model.
Fault Detection Overview. Our fault detection protocol
requires the destination to return an ack to the source, for
every successfully received data packet. The source keeps
track of the number of recent losses (acks not received over
a window of recent packets). If the number of recent losses
violates the acceptable threshold, the protocol registers a
fault between the source and the destination and starts a
binary search on the path, in order to identify the faulty
link. A simple example is illustrated in Figure 3.
The source controls the search by specifying a list of intermediate
nodes on data packets. Each node in the list, in
addition to the destination, must send an ack for the packet.
We refer to the set of nodes required to send acks as probed
nodes, or probes for short. Since the list of probed nodes
is specified on legitimate tra#c, an adversary is unable to
drop traffic without also dropping the list of probed nodes
and eventually being detected.
The list of probes defines a set of non-overlapping intervals
that cover the whole path, where each interval covers the
sub-path between the two consecutive probes that form its endpoints.
[Figure 3: Byzantine Fault Detection. The figure illustrates
the binary search for a faulty link: trusted end points (source
and destination), intermediate routers, successful and failed
probes, the fault location, and good, faulty, and unknown
intervals, shown across one success and four successive failures.]
When a fault is detected on an interval, the
interval is divided in two by inserting a new probe. This
new probe is added to the list of probes appended to future
packets. The process of sub-division continues until a fault is
detected on an interval that corresponds to a single link. In
this case, the link is identified as being faulty and is passed
as input to the link weight management phase (see Figure
1). The path sub-division process is a binary search that
proceeds one step for each fault detected. This results in the
identification of a faulty link after log n faults are detected,
where n is the length of the path.
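The sub-division process can be sketched as follows. `split_faulty_interval` is a hypothetical helper (not the paper's code); ack collection is abstracted into a `last_acked` index, the furthest node whose ack the source verified:

```python
# Sketch of the binary search over probe intervals described above.
def split_faulty_interval(path, probes, last_acked):
    """path: nodes from source (index 0) to destination; probes: sorted
    indices of probed nodes (the destination is always probed);
    last_acked: index of the furthest node whose ack verified
    (0 when only the source end of the path is known-good)."""
    boundaries = [0] + list(probes)
    nxt = min(b for b in boundaries if b > last_acked)  # interval's far end
    if nxt - last_acked == 1:
        return ("faulty-link", (path[last_acked], path[nxt]))
    mid = (last_acked + nxt) // 2          # insert a probe at the midpoint
    return ("new-probe", mid)

path = ["S", "A", "B", "C", "D"]           # 4 links, n = 4
# Suppose link B-C silently drops packets. Initially only D is probed.
step1 = split_faulty_interval(path, [4], last_acked=0)        # probe B
step2 = split_faulty_interval(path, [2, 4], last_acked=2)     # probe C
step3 = split_faulty_interval(path, [2, 3, 4], last_acked=2)  # found it
```

Each detected fault halves one interval, so the number of faults needed to pin a single link is logarithmic in the path length, as the text states.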
We use shared keys between the source and each probed
node as a basis for our cryptographic primitives in order to
avoid the prohibitively high cost of using public key cryptography
on a per-packet basis. These pairwise shared keys
can be established on-demand via a key exchange protocol
such as Diffie-Hellman [34], authenticated using digital signatures.
The on-demand key exchange must be fully integrated
into the fault detection protocol in order to maintain
the security semantics. The integrated key exchange operates
similarly to the probe and ack specification discussed
below (see also Figure 4), but it is not described in detail in
this work.
Probe Specification. The mechanism for specifying the
probe list on a packet is essential for the correct operation of
the detection protocol. The probes are specified in the list
in the same order as they appear on the path. The list is
"onion" encrypted [17]. Each probe is specified by the identifier
of the node, an HMAC of the packet (not including the
list), and the encrypted remaining list (see Lines 3-6, Figure
4). Both the HMAC and the encrypted remaining list are
computed with the shared key between the source and that
node. An HMAC [12] using a one-way hash function such as
SHA1 [35] and a standard block cipher encryption algorithm
such as AES [36] can be used.
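A sketch of the onion-encrypted probe list follows, under explicit simplifications: node identifiers are single characters, the HMACs use real HMAC-SHA1, but the block cipher (AES in the text) is simulated with an insecure XOR keystream because Python's standard library has no AES. All function names are hypothetical:

```python
# Sketch of the onion-encrypted probe list (illustrative crypto only).
import hmac, hashlib

def keystream_xor(key, data):
    # NOT a secure cipher: a deterministic XOR keystream standing in for
    # a real block cipher such as AES, to keep this sketch stdlib-only.
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def build_probe_list(packet, probes, keys):
    # probes: single-character node ids in path order; keys[n] is the key
    # node n shares with the source. Layers are built innermost-first:
    # each layer = id (1 byte) | HMAC-SHA1(packet) | encrypted remainder.
    blob = b""
    for node in reversed(probes):
        mac = hmac.new(keys[node], packet, hashlib.sha1).digest()
        blob = keystream_xor(keys[node], node.encode() + mac + blob)
    return blob

def peel(node, packet, blob, keys):
    # A probed node decrypts its layer, checks the id and the packet HMAC,
    # then forwards the packet carrying the still-encrypted remainder.
    layer = keystream_xor(keys[node], blob)
    nid, mac, rest = layer[:1], layer[1:21], layer[21:]
    expected = hmac.new(keys[node], packet, hashlib.sha1).digest()
    if nid != node.encode() or not hmac.compare_digest(mac, expected):
        return None                       # not this node's layer, or tampering
    return rest
```

Because each layer only decrypts under the next probe's shared key, the packet must traverse the probes in order, and no adversary can surgically remove an inner probe from the list.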
A node can detect if it is required to send acks by checking
the identifier at the beginning of the list (see Lines 8-12,
Figure 4). If it matches, then it verifies the HMAC of the
packet and replaces the list on the packet with the decrypted
version of the remaining list.
[Figure 4: Probe and Acknowledgement Specification
(pseudocode; garbled in extraction). Helper procedures include
Cat, which concatenates its arguments; Hmac(data, key);
Encrypt; and Report Loss and Return(node), which reports
that a loss was detected on the interval before node and exits.
The code covers the source sending a packet, intermediate and
probed nodes processing packets and acks, the destination's
ack generation, and ack-timer expiration at probes and at the
source.]
This mechanism forces the
packet to traverse the probes in order, which verifies the
route taken. Additionally, it verifies the contents of the
packet at every probe point. The onion encryption prevents
the adversary from incriminating other links by removing
specific nodes from the probe list. Note that the adversary is
able to remove the entire probe list, but this will incriminate
one of its own links.
Acknowledgment Specification. If the adversary can drop
individual acks, it can incriminate any arbitrary link along
the path. In order to prevent this, each probe does not send
its ack immediately, but waits for the ack from the next
probe and combines them into one ack. Each ack consists of
the identifier of the probe, the identifier of the data packet
that is being acknowledged, the ack received from the next
probe encrypted with the key shared by this probe and the
source, and an HMAC of the new combined ack (see Lines
15 and 18-19, Figure 4).
If no ack is received within a timeout, the probe gives up
waiting, and creates and sends its ack (see Line 24, Figure 4).
The timeouts are set up in such a way that if there is a
failure, all the acks before the failure point can be combined
without other timeouts occurring. This is accomplished by
setting the timeout for each probe to be the upper bound of
the round-trip from it to the destination.
Upon receipt of an ack, the source checks the acks from
each probe by successively verifying the HMACs and decrypting
the next ack (see Lines 27-54, Figure 4). The source
either verifies all the acks up through the destination, or discovers
a loss on the interval following the last ack.
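The source-side verification walk can be sketched in the same toy setting as the probe-list example: a hypothetical encoding, real HMAC-SHA1, and an insecure XOR-keystream "cipher" standing in for AES:

```python
# Sketch of peeling a combined ack at the source (illustrative crypto).
import hmac, hashlib

def keystream_xor(key, data):
    # Same illustrative (insecure) stand-in for a block cipher as before.
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def make_ack(node, counter, inner_ack, key):
    # ack = id (1 byte) | counter (1 byte) | enc(inner ack) | HMAC of all.
    body = node.encode() + bytes([counter]) + keystream_xor(key, inner_ack)
    return body + hmac.new(key, body, hashlib.sha1).digest()

def verify_acks(ack, probed_nodes, counter, keys):
    # Peel the combined ack probe by probe (destination last); return the
    # index of the furthest probe whose ack verified, or -1 for none.
    last_ok = -1
    for i, node in enumerate(probed_nodes):
        if len(ack) < 22:                 # too short for id + counter + HMAC
            break
        body, mac = ack[:-20], ack[-20:]
        good = (body[:1] == node.encode() and body[1] == counter and
                hmac.compare_digest(mac, hmac.new(keys[node], body, hashlib.sha1).digest()))
        if not good:
            break                         # loss lies just past probe i
        last_ok = i
        ack = keystream_xor(keys[node], body[2:])   # next probe's ack
    return last_ok
```

Because each probe's ack envelops the next one, stripping an inner ack cannot shift blame: the chain simply stops verifying at the interval where the loss occurred.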
Interval and Probe Management. Let ε be the acceptable
threshold loss rate. By using the above probe and acknowledgment
specifications, it is simple to attribute losses
to individual intervals. A loss is attributed to an interval
between two probes when the source successfully received
and verified an ack from the closer probe, but not from
the further probe. When the loss rate on an interval exceeds
ε, the interval is divided in two.
Maintaining probes adds overhead to our protocol, so it is
desirable to retire probes when they are no longer needed.
The mechanism for deciding when to retire probes is based
on the loss rate ε and the number of lost packets. The goal
is to amortize the cost of the lost packets over enough good
packets, so that the aggregate loss rate is bounded by ε.
Each interval has an associated counter C that specifies
its lifetime. Initially, there is one interval with a counter
of zero (there are initially no losses between the source and
destination). When a fault is detected on an interval with a
counter C, a new probe is inserted which divides the interval.
Each of the two new intervals has its counter initialized
to δ/ε, where δ is the number of losses that caused
the fault. The counters are decremented for every ack that
is successfully received, until they reach zero. When the
counters of both intervals on either side of a probe reach
zero, the probe is retired, joining the two intervals.
In the worst-case scenario, a dynamic adversary can cause
enough loss to trigger a fault, then switch to causing loss just
under ε in order to wait out the additional probe, and then
repeat when the probe is removed. This results in a loss
rate bounded by 2ε. If the adversary attempts to create a
higher loss rate, the algorithm will be able to identify the
faulty link.
4.3 Link Weight Management
An important aspect of our protocol is its ability to avoid
faulty links in the process of route discovery by the use of
link weights. The decision to identify a link as faulty is made
by the detection phase of the protocol. The management
scheme maintains the weight list using the history of faults
that have been detected. When a link is identified as faulty,
we use a multiplicative increase scheme to double its weight.
The technique we use for resetting a link weight is similar
to the one we use for retiring probes (see Section 4.2). The
weight of a link can be reset to half of its previous value
after the counter associated with that link returns to zero. If
δ is the number of packets dropped while identifying a faulty
link, then the link's counter is increased by δ/ε, where ε is
the threshold loss rate. Each non-zero counter is reduced by
1/m for every successfully delivered packet, where m is the
number of links with non-zero counters. This bounds the
aggregate loss rate to 2ε in the worst case.
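The bookkeeping described in this section might look like the following sketch (a hypothetical class, not the paper's code; δ and ε follow the notation above, and only a single halving step of the gradual rehabilitation is shown):

```python
# Hypothetical sketch of link weight management (Section 4.3 notation).
class WeightManager:
    def __init__(self, epsilon):
        self.epsilon = epsilon            # threshold loss rate
        self.weights = {}                 # link -> weight (default 1)
        self.counters = {}                # link -> remaining penalty counter

    def link_faulty(self, link, delta):
        # Multiplicative increase: double the weight of a convicted link;
        # charge a counter of delta/epsilon packets before rehabilitation.
        self.weights[link] = 2 * self.weights.get(link, 1)
        self.counters[link] = self.counters.get(link, 0) + delta / self.epsilon

    def packet_delivered(self):
        # Each non-zero counter decreases by 1/m per delivered packet,
        # where m is the number of links with non-zero counters.
        active = [l for l, c in self.counters.items() if c > 0]
        for l in active:
            self.counters[l] -= 1 / len(active)
            if self.counters[l] <= 0:     # rehabilitated: halve the weight
                self.counters[l] = 0
                self.weights[l] = max(1, self.weights[l] // 2)

# Usage: a link dropping delta = 5 packets at epsilon = 0.25 must see
# 5 / 0.25 = 20 good deliveries before its weight is halved.
wm = WeightManager(epsilon=0.25)
edge = frozenset(("A", "B"))
wm.link_faulty(edge, delta=5)
```

The counter amortizes the δ lost packets over δ/ε delivered ones, which is exactly how the aggregate loss rate stays bounded by 2ε.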
5. ANALYSIS
Our protocol ensures that, even in a highly adversarially
controlled network, as long as there is one fault-free path,
it will be discovered after a bounded number of faults have
occurred. As defined in Section 4.2, a fault means a violation
of the threshold loss rate. We consider a network of n nodes
of which k exhibit adversarial behavior. The adversaries
cooperate and create the maximum number of virtual links
possible in order to slow the convergence of our algorithm.
We provide an analysis of the upper bound for the total
number of packets lost while finding the fault-free path. This
bound is defined by the number of losses that result in an
increase of the costs of all adversarial controlled paths above
the cost of the fault-free path.
Let q⁻ and q⁺ be the total number of lost packets and
successfully transmitted packets, respectively. Ideally,

    q⁻ ≤ ε q⁺,    (1)

where ε is a loss rate slightly higher than the original threshold.
This means the number of lost packets is an ε-fraction of
the number of transmitted packets. While this is not quite
true, it is true "up to an additive constant", i.e. ignoring a
bounded number Δ of packets lost. Specifically, we prove that
there exists an upper bound Δ for the previous expression,
i.e. q⁻ ≤ ε q⁺ + Δ. We show the following.
Assume that there are k adversarial nodes, k < n. We denote
by Ē the set of "virtual links" controlled by adversarial
nodes. The maximum size of Ē is kn.
Consider a faulty link e, convicted jₑ times and rehabilitated
aₑ times. Then, its weight wₑ is at most n, since a weight
of n means that the whole path is adversarial. By the
algorithm, wₑ is given by the formula:

    wₑ = 2^(jₑ − aₑ).    (2)

The number of convictions is at least q⁻/δ, so

    q⁻/δ − Σ_{e∈Ē} jₑ ≤ 0.    (3)

Also, the number of rehabilitations is at most ε q⁺/δ, so

    Σ_{e∈Ē} aₑ ≤ ε q⁺/δ,    (4)

where δ is the number of lost packets that exposes a link.
Thus

    q⁻/δ − ε q⁺/δ ≤ Σ_{e∈Ē} (jₑ − aₑ).    (5)

From Eq. 2 we have jₑ − aₑ = log wₑ. Therefore:

    Σ_{e∈Ē} (jₑ − aₑ) = Σ_{e∈Ē} log wₑ.    (6)

By combining Eq. 5 and 6, and since wₑ ≤ n and |Ē| ≤ kn,
we obtain

    q⁻/δ − ε q⁺/δ ≤ kn log n,    (7)

and, since δ is the number of lost packets per identified fault,
Eq. 7 becomes

    q⁻ ≤ ε q⁺ + δ kn log n.    (8)

Therefore, the amount of disruption a dynamic adversary
can cause to the network is bounded. Note that kn represents
the number of links controlled by an adversary. If
there is no adversarial node (k = 0), Eq. 8 becomes the
ideal case q⁻ ≤ ε q⁺.
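As a purely illustrative computation of the bound in Eq. 8, with made-up parameter values (and taking the logarithm base 2, matching the weight-doubling scheme):

```python
# Illustrative values only (not from the paper), plugged into Eq. 8:
#   q- <= epsilon * q+ + delta * k * n * log(n)
import math

epsilon = 0.05      # threshold loss rate
delta = 10          # lost packets needed to expose one link
k, n = 2, 16        # adversarial nodes and network size
q_plus = 100_000    # successfully delivered packets

adversarial_term = delta * k * n * math.log2(n)   # 10 * 2 * 16 * 4 = 1280
bound = epsilon * q_plus + adversarial_term       # 5000 + 1280
print(round(bound))
```

The adversarial contribution is a fixed additive term, independent of how long the route is used, which is the sense in which the damage is bounded.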
6. CONCLUSIONS AND FUTURE WORK
We presented a secure on-demand routing protocol resilient
to byzantine failures. Our scheme detects malicious
links after log n faults have occurred, where n is the length of the
routing path. These links are then avoided by the route discovery
protocol. Our protocol bounds logarithmically the
total amount of damage that can be caused by an attacker
or group of attackers.
An important aspect of our protocol is the algorithm used
to detect that a fault has occurred. However, it is difficult to
design such a scheme that is resistant to a large number of
adversaries. The method suggested in this paper uses a fixed
threshold scheme. We intend to explore other methods, such
as adaptive threshold or probabilistic schemes which may
provide superior performance and flexibility.
In order to further enhance performance, we would like to
investigate ways of taking advantage of route caching without
breaching our security guarantees.
We also plan to evaluate the overhead of our protocol with
respect to existing protocols, in normal, non-faulty conditions
as well as in adversarial environments. Finally, we
are interested in investigating means of protecting routing
against traditional denial of service attacks.
Acknowledgments
We are grateful to Giuseppe Ateniese, Avi Rubin, Gene
Tsudik and Moti Yung for their comments. We would like to
thank Jonathan Stanton and Ciprian Tutu for helpful feedback
and discussions. We also thank the anonymous referees
for their comments.
We would like to thank the Johns Hopkins University Information
Security Institute for providing the funding that
made this research possible.
Keywords: on-demand routing; ad hoc wireless networks; security; Byzantine failures
A Distributed Monitoring Mechanism for Wireless Sensor Networks

(September 28, 2002, Atlanta, Georgia, USA)

Abstract: In this paper we focus on a large class of wireless sensor networks that are designed and used for monitoring and surveillance. The single most important mechanism underlying such systems is the monitoring of the network itself; that is, the control center needs to be constantly made aware of the existence/health of all the sensors in the network for security reasons. In this study we present plausible alternative communication strategies that can achieve this goal, and then develop and study in more detail a distributed monitoring mechanism that aims at localized decision making and minimizing the propagation of false alarms. Key constraints of such systems include low energy consumption and low complexity. Key performance measures of this mechanism include high detection accuracy (low false alarm probabilities) and high responsiveness (low response latency). We investigate the trade-offs via simulation.

1. INTRODUCTION
The rapid advances in wireless communication technology
and micro-electromagnetic systems (MEMS) technology
have enabled smart, small sensor devices to integrate mi-
crosensing and actuation with on-board processing and wireless
communications capabilities. Due to the low-cost and
small-size nature, a large number of sensors can be deployed
to organize themselves into a multi-hop wireless network for
various purposes. Potential applications include scientific
data gathering, environmental monitoring (air, water, soil,
chemistry), surveillance, smart home, smart office, personal
medical systems and robotics.
In this study, we consider the class of surveillance and
monitoring systems used for various security purposes, e.g.,
battlefield monitoring, fire alarm system in a building, etc.
The most important mechanism common to all such systems
is the detection of anomalies and the propagation of alarms.
In almost all of these applications, the health (or status of
well-functioning) of the sensors and the sensor network have
to be closely monitored and made known to some remote
controller or control center. In other words, even when no
anomalies take place, the control center has to constantly
ensure that the sensors are where they are supposed to be,
are functioning normally, and so on. In [10] a scheme was
proposed to monitor the (approximate) residual energy in
the network. However, to the best of our knowledge, a general
approach to the network health monitoring and alarm
propagation in a wireless sensor network has not been studied
The detection of anomalies and faults can be divided into
two categories: the explicit detection and the implicit detec-
tion. An explicit detection occurs when an event or fault is
directly detected by a sensor, and the sensor is able to send
out an alarm which by default is destined for the control cen-
ter. An example is the detected ground vibration caused by
the intrusion of an enemy tank. In this case the decision to
trigger an alarm or not is usually defined by a threshold. An
implicit detection applies when the event or intrusion disables
a sensor from communication, and thus the occurrence
of this event will have to be inferred from the lack of communication
from this sensor. Following an explicit detection,
an alarm is sent out and the propagation of this alarm is
to a large extent a routing problem, which has been studied
extensively in the literature. For example, [2] proposed
a braided multi-path routing scheme for energy-efficient recovery
from isolated and patterned failures; [4] considered
a cluster-based data dissemination method; [5] proposed an
approach for constructing a greedy aggregation tree to improve
path sharing and routing. Within this context the
accuracy of an alarm depends on the pre-set threshold, the
sensitivity of the sensory system, etc. The responsiveness
of the system depends on the effectiveness of the underlying
routing mechanism used to propagate the alarm.
To accomplish implicit detection, a simple solution is for
the control center to perform active monitoring, which consists
of having sensors continuously send existence/update
(or keep-alive) messages to inform the control center of their
existence. Thus the control center always has an image
about the health of the network. If the control center has
not received the update information from a sensor for a pre-specified
period of time (timeout), it can infer that the sensor
is dead. The problem with this approach is the amount
of traffic it generates and the resulting energy consumption.
This problem can be alleviated by increasing the timeout
period but this will also increase the response time of the
system in the presence of an intrusion. Active monitoring
can be realized more efficiently in various ways. Below we
discuss a few potential solutions.
The most straightforward implementation is to let each
sensor transmit the update messages at a same fixed rate.
However, due to the multi-hop transmission nature and possible
packet losses, this results in sensors far away from the
control center (thus with more hops to travel) achieving a
much lower goodput. This means that although updates
are generated at a constant rate, the control center receives
updates much less frequently from sensors further away. In
order to achieve a balanced reception rate from all sensors
the traffic load has to be kept very low, which means that
the system must have a relatively low update rate. This
approach then is obviously not suitable for systems that require
high update rate (high alertness). Alternatively we
could let each sensor adjust its update sending rate based
on various information. For example, increase the sending
rate if a sensor is further away from the controller (which
would require the sensor to have knowledge on hop-count).
[8] suggested adjustment based on channel environment, i.e.,
let sensors with higher goodput increase their sending rate
and sensors with lower goodput decrease their sending rate.
However, parameter tuning is likely to be very difficult and
it is not clear if this approach is scalable.
A second approach is to use inference and data aggrega-
tion. Suppose the control center knows all the routes, then
receiving an update message from a sensor would allow it to
infer the existence of all intermediate sensors on the route
between that sensor and the control center. Therefore, a
single packet conveys all the necessary update information.
The problem with this approach is the inference of the occurrence
of an attack or fault. A missing packet can be due
to any sensor death on the route; therefore, extra mechanisms
are needed to locate the problem. Alternatively, in
order to reduce the amount of traffic, some form of aggregation
can be used. Aggregation has been commonly proposed
for sensor network applications to reduce traffic volume and
improve energy efficiency. Examples include data naming for
in-network aggregation considered in [3], the greedy aggregation
tree construction in [5] to improve path sharing, and
the abstracted scans of sensor energy in [10] via in-network
aggregation of network state. In our scenario, along a single
route, sensors can concatenate their IDs or addresses into
the update packet they relay, so that when the control center
gets this packet, it can update information regarding all
sensors involved in relaying this packet. However, the packet
size increases due to aggregation, especially if addresses are
not logically related which is often the case. As the size of
the network increases, this aggregation may cease to be effective.
In addition, the same inference problem remains in
the presence of packet losses.
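As a rough illustration of this ID-concatenation scheme (a minimal sketch; the packet layout and function names are our own, not from any particular system):

```python
# Sketch of in-network update aggregation along a route (packet layout and
# function names are illustrative, not from any particular protocol).

def relay_update(packet, my_id):
    """Each relaying sensor concatenates its own ID into the update packet."""
    packet["ids"].append(my_id)
    return packet

def control_center_receive(packet, last_seen, now):
    """One received packet refreshes the status of every sensor on the path."""
    for sensor_id in packet["ids"]:
        last_seen[sensor_id] = now

# An update originated by sensor 7, relayed by sensors 3 and then 1:
pkt = {"ids": [7]}
relay_update(pkt, 3)
relay_update(pkt, 1)

last_seen = {}
control_center_receive(pkt, last_seen, now=42.0)
```

Note that the `ids` list, and hence the packet, grows linearly with the number of hops, which is exactly the scaling concern raised above.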
If the sensor network is organized into clusters, then based
on the cluster size, different approaches can be used. For
example, if clusters are small (e.g., fewer than 10 nodes), the
cluster head can actively probe each sensor in the cluster [7],
or TDMA schedules can be formed within the cluster so that
each sensor can periodically update the cluster head. In this
case the responsibility is on the cluster heads to report any
intrusion in the cluster to a higher-level cluster head or to
the central controller. If the clusters are large, then any of
the aforementioned schemes can potentially be considered by
regarding the cluster head as the local "central controller".
In any of these cases, there has to be extra mechanisms to
handle the death of a cluster head.
Following the above discussion, a distinctive feature of active
monitoring is that decisions are made in a centralized
manner at the control center, and for that matter it becomes
a single point of concentration of data tra#c (same applies
to a cluster head). Subsequently, the amount of bandwidth
and energy consumed a#ects its scalability. In addition, due
to the multi-hop nature and high variance in packet delay, it
will be di#cult to determine a desired timeout value, which
is critical in determining the false alarm probability and responsiveness
of the system as we will discuss in more detail
in the next section. All the above potential solutions could
function well under certain conditions. However, we will
deviate from them in this paper and pursue a different, distributed
approach.
Our approach is related to the concept of passive moni-
toring, where the control center expects nothing from the
sensors unless something is wrong. Obviously this concept
alone does not work if a sensor is disabled from communicating
due to intrusion, tampering or simply battery outage.
However, it does have the appealing feature of low overhead,
i.e., when everything is normal there will be no traffic at all!
Our approach to a distributed monitoring mechanism is thus
to combine the low energy consumption of a passive monitoring
method and high responsiveness and reliability of an
active monitoring method.
Throughout this paper we assume that the MAC used is
not collision free. In particular, we will examine our scheme
with random access and carrier sensing types of MAC. Thus
all packets are subject to collision because of the shared wireless
channel. Collision-free protocols, e.g., TDMA, as well
as reliable point-to-point transmission protocols, e.g. the
DCF RTS-CTS function of IEEE 802.11, may or may not
be available depending on sensor device platforms [8] and
are not considered in this paper. We assume that sensors
have fixed transmission power and transmission range. We
also assume that sensors are awake all the time. The discussion
on integrating our scheme with potential sleep-wake
schedules of sensors is given in Section 5.
The rest of the paper is organized as follows. In Section 2,
we describe an overview of our system and the performance
metrics we consider. Section 3 describes in detail the monitoring
mechanism and some robustness issues. In Section
4, simulation results are presented to validate our approach.
Section 5 discusses implications and possible extensions to
our system. Section 6 concludes with future work.
2. A DISTRIBUTED APPROACH
2.1 Basic Principles
The previous discussions and observations have lead us to
the following principles. Firstly, some level of active monitoring
is necessary simply because it is the only way of detecting
communication-disabling events/attacks. However,
because of the high volume of traffic it involves, active monitoring
has to be done in a localized, distributed fashion,
rather than all controlled by the control center. Secondly,
the more decision a sensor can make, the less decision the
control center has to make, and therefore less information
needs to be delivered to the control center. In other words,
the control center should not be bothered unless there really
is something wrong. Arguably, there are scenarios where
the control center is at a better position to make a decision
with global knowledge, but whenever possible local decisions
should be utilized to reduce traffic. Similar concepts have
been used in, for example, [6], where a sensor advertises to its
neighbors the type of data it has so that a neighbor can decide
if a data transmission is needed or redundant. Thirdly, it
is possible for a sensor to reach a decision with only local
information and with minimum embedded intelligence and
thus should be exploited.
The first principle leads us to the concept of neighbor mon-
itoring, where each sensor sends its update messages only to
its neighbors, and every sensor actively monitors its neigh-
bors. Such monitoring is controlled by a timer associated
with a neighbor, so if a sensor has not heard from a neighbor
within a pre-specified period of time, it will assume that
the neighbor is dead. Note that this neighbor monitoring
works as long as every sensor is reachable from the control
center, i.e., there is no partition in the network that has no
communication path to the control center. Since neighbors
monitor each other, the monitoring effect gets propagated
throughout the network, and the control center only needs
to monitor a potentially very small subset of nodes.
The second and the third principles lead us to the concept
of local decision making. The goal is to allow a sensor
to make some level of decision before communicating with the
control center. We will also allow a sensor to increase its
fidelity or confidence in the alarm it sends out by consulting
with its neighbors. By adopting a simple query-rejection or
query-confirmation procedure and minimal neighbor coordination
we expect to significantly increase the accuracy of an
alarm, and thus, reduce the total amount of tra#c destined
for the control center. To summarize, in our mechanism
the active monitoring is used but only between neighbors;
therefore, the traffic volume is localized and limited. Overall,
network-wide, the mechanism can be seen as a passive
monitoring system in that the control center is not made
aware unless something is believed to be wrong with high
confidence in some localized neighborhood. Within that localized
neighborhood, a decision is made via coordination
among neighbors.
2.2 Performance Metrics
In this study we consider two performance metrics: the
probability of false alarm and the response delay.
Due to the nature of the shared wireless channel, packets
transmitted may collide with each other. We assume perfect
capture and regard (partially) collided packets as packets
lost. The existence/update packets transmitted by neighboring
sensors may collide. As a result a sensor may fail to
receive any one of the packets involved in the collision. If a
sensor does not receive the updates from a neighbor before
its timer expires and subsequently decides that the neighbor
is dead while it is still alive, it will transmit an alarm
back to the control center. We call this type of alarm false
alarm. False alarms are very costly. The transmissions of
false alarms are multi-hop and consume sensor energy. They
may increase the traffic in the network and the possibility
of further collision. Furthermore, a false alarm event may
cause the control center to take unnecessary actions, which
can be very expensive in a surveillance system.
Another important performance metric is responsiveness.
The measure of responsiveness we use is the response delay,
which is defined as the delay between a sensor's death and
the first transmission of the alarm by a neighbor. Strictly
speaking, response delay should be defined as the delay between
a sensor's death and the arrival of this information
at the control center. The total delay can therefore be separated
into the delay in triggering an alarm and the delay
in propagating the alarm. However, as mentioned earlier
the process of propagating an alarm to the control center
is mostly a routing problem and does not depend on our
proposed approach. Therefore, in this study we only focus
on the delay in triggering an alarm and define this as the
response delay.
It is very important to make the response delay as small
as possible in a surveillance system subject to a desired false
alarm probability. An obvious tradeoff exists between the
probability of false alarm and the response delay. In order
to decrease the response delay, the timeout value needs to be
decreased, which leads to a higher probability of false alarm.
Our work in this paper utilizes the distributed monitoring
system to achieve a better tradeoff between the probability
of false alarm and the response delay. We also aim to
reduce the overall traffic and increase energy efficiency.
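As a back-of-the-envelope illustration of this tradeoff, suppose (our assumption, for illustration only) that updates from a live neighbor are received as a Poisson process with mean inter-arrival time T. Then the probability of hearing nothing within a timeout C is exp(-C/T), and the smallest timeout meeting a target false alarm probability follows directly:

```python
import math

def timeout_for_false_alarm(T, p_fa):
    """Smallest timeout C such that the probability of hearing nothing from
    a live neighbor within C is at most p_fa, assuming updates are received
    as a Poisson process with mean inter-arrival time T: exp(-C/T) = p_fa."""
    return T * math.log(1.0 / p_fa)

T = 10.0
c_1pct = timeout_for_false_alarm(T, 0.01)   # timeout for a 1% false alarm target
c_half = timeout_for_false_alarm(T, 0.005)  # halving the target false alarm rate
extra_delay = c_half - c_1pct               # costs an extra T * ln 2 of delay
```

Under this model, each halving of the tolerated false alarm probability adds T * ln 2 to the response delay, which makes the tension between the two metrics explicit.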
3. A TWO-PHASE TIMEOUT SYSTEM
In this section we first present the key idea of our approach,
then describe in more detail the different components
of our approach.
3.1 Key to our approach
With the goal being reducing the probability of false alarm
and the response delay, combining the principles we out-
lined, we propose a timeout control mechanism with two
timers: the neighbor monitoring timer C1(i) and the alarm
query timer C2(i). The idea of C1(i) is the same as an ordinary
neighbor monitoring scheme. During C1(i), a sensor
s collects update packets from sensor i. If sensor s does
not receive any packet from i before C1(i) expires, it enters
the phase of the alarm query timer C2(i). The purpose of
the second timer C2(i) is to reduce the probability of false
alarm and to localize the effect of a false alarm event. In
C2 (i), sensor s consults the death of i with its neighbors. If
a neighbor claims that i is still alive, s will regard its own
C1 (i) expiration as a false alarm and discard this event. If
s does not hear anything before C2(i) expires, it will decide
that i is dead and fire an alarm for i. We will call the
two-phase approach the improved system and the ordinary
neighbor monitoring system with only one timer the basic
system. Fig. 1 shows the difference between the basic system
and the improved system. C1 + C2 is the initial value
of C(i); C1 and C2 are the initial values of C1(i) and C2(i),
respectively.

[Figure 1: Basic system vs. improved system.]
There are several ways to consult neighbors. One approach
is to consult all common neighbors that are both
reachable from i and s; therefore, all common neighbors can
respond. We call this two-phase timer approach the original
improved system. Another approach is to consult only
the neighbor which is assumed dead, i in this case. We will
call this neighbor the target and this approach the variation.
Neighbors can potentially not only claim liveliness of a target
but also confirm its death if their timers also expired.
In this study we will focus on the case where neighbors only
respond when they still have an active timer. In contrast
to our system, an ordinary neighbor monitoring system has
only one timer C(i). If a sensor does not receive packets
from i before C(i) expires, it will trigger an alarm.
Let us take a look at the intuition behind using two timers
instead of one. Let Pr[FA]_basic and Pr[FA]_improved denote
the probabilities of a false alarm event with respect to a
neighboring sensor in the basic system and in the improved
system, respectively. Let f(t) denote the probability that
there is no packet received from the target in time t. We
then have the following relationship:

    Pr[FA]_improved = p * f(C1 + C2) = p * Pr[FA]_basic,  (1)

where p is the probability that the alarm checking in C2(i)
fails. This can be caused by a number of reasons, as shown
later in this section. Note that f(t) in general decreases with
t. Since p is a value between 0 and 1, from Eqn (1) we know
that Pr[FA]_improved is less than Pr[FA]_basic. The response delays
in both systems are approximately the same, assuming
that a neighbor only responds when it has an active timer
C1 (i). However, extra steps can be added in the phase of
C2 (i) to reduce the response delay in the improved system,
e.g., by using aforementioned confirmation, which we will
not study further in this paper.
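The relationship between the two systems can be checked with a small Monte Carlo sketch. The model here is our own simplification: updates have exponential inter-arrival times, each packet is lost independently with probability `loss`, and the alarm check during C2(i) fails independently with probability `p_check_fail`:

```python
import math
import random

def false_alarm_probs(T, loss, C1, C2, p_check_fail, trials=100_000, seed=1):
    """Estimate Pr[false alarm] w.r.t. a *live* neighbor for both systems.
    Updates have exponential inter-arrival times (mean T); each packet is
    lost independently with probability `loss`.  Basic system: false alarm
    if nothing is received within C1+C2.  Improved system: the same event,
    but the alarm-query phase rescues it unless the check fails."""
    rng = random.Random(seed)
    basic = improved = 0
    horizon = C1 + C2
    for _ in range(trials):
        t, heard = 0.0, False
        while True:
            t += rng.expovariate(1.0 / T)
            if t >= horizon:
                break
            if rng.random() > loss:  # packet survives and resets the timer
                heard = True
                break
        if not heard:
            basic += 1
            if rng.random() < p_check_fail:
                improved += 1
    return basic / trials, improved / trials

fa_basic, fa_improved = false_alarm_probs(
    T=1.0, loss=0.3, C1=3.0, C2=1.0, p_check_fail=0.2)
```

Under this model the improved system's estimate comes out close to `p_check_fail` times the basic system's, in line with the discussion above.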
Note that Eqn (1) is only an approximation. The improved
system does not always perform better than the basic
system. By adding alarm checking steps in the improved
system, extra traffic is generated. The extra traffic may collide
with update packets, and thus increase the false alarm
events. However, as long as C1(i) expiration does not happen
too often, we expect the improved system to perform
better than the basic system in a wide range of settings. We
will compare the performance differences between the basic
system and the improved system under different scenarios.
Note that C1(i) is reset upon receipt of any packet from i,
not just the update packet. For the same reason, a sensor
can replace a scheduled update packet with a different
packet (e.g., data) given availability.
3.2 State Transition Diagram and Its Component
In this section we present the state transition diagram
of our approach.

[Figure 2: State diagram that sensor s keeps for neighbor i, with transition
conditions and actions. States: neighbor monitoring, random delay, alarm
checking, suspend, and alarm propagation. Legend: Rec. = receive; Xmit =
transmit; Act. = activate; Deact. = deactivate; packets(i) = packets from i;
Paq(i)/Par(i) = alarm query/alarm reject packet with target i; Pex =
existence packet.]

[Figure 3: State diagram for sensor s itself: upon the scheduled time of Pex,
transmit Pex and schedule the next Pex.]

We will assume that the network is pre-configured,
i.e., each sensor has an ID and that the control
center knows the existence and ID of each sensor. However,
we do not require time synchronization. Note that timers are
updated by the reception of packets. Differences in reception
times due to propagation delays can result in slightly different
expiration times in neighbors.
A sensor keeps a timer for each of its neighbors, and keeps
an instance of the state transition diagram for each of its
neighbors. Fig. 2 shows the state transitions sensor s keeps
regarding neighbor i. Fig. 3 shows the state transition of s
regarding itself. They are described in more detail in the
following.
3.2.1 Neighbor Monitoring
Each sensor broadcasts its existence packet Pex with TTL=1,
at times chosen from an exponential distribution
with rate 1/T, i.e., with average inter-arrival time
T. Different T values represent the alertness of the system as
we will discuss further in later sections. The reason for using
exponential distribution is to obtain a large variance of the
inter-arrival times to randomize transmissions of existence
packets. In Section 5 we will also discuss using constant
inter-arrival times. Each sensor has a neighbor monitoring
timer C1(i) for each of its neighbors i, with an initial value
C1. After sensor s receives Pex or any packet from its neighbor
i, it resets timer C1(i) to the initial value. When C1(i)
goes down to 0, the sensor enters the random delay state for
its neighbor i.
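The randomized update schedule described above can be sketched as follows (the function name is ours):

```python
import random

def next_existence_time(now, T, rng=random):
    """Time of the next Pex broadcast: inter-transmission times are drawn
    from an exponential distribution with rate 1/T (mean T), whose large
    variance helps de-synchronize neighbors' transmissions."""
    return now + rng.expovariate(1.0 / T)

# Sanity check: the empirical mean gap is close to T.
rng = random.Random(0)
T = 5.0
gaps = [next_existence_time(0.0, T, rng) for _ in range(20_000)]
mean_gap = sum(gaps) / len(gaps)
```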
When sensor s receives an alarm query packet Paq with
target i in neighbor monitoring, it broadcasts an alarm
reject packet Par with target i with TTL=1. Par contains
the IDs of s and i, and its remaining timer C1(i) as a reset
value for the sender of this query packet. When sensor s
receives an alarm reject packet Par with target i in this
state, it resets C1(i) to the C1(i) reset value in Par if its
own C1 (i) is of a smaller value.
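The per-neighbor bookkeeping in the neighbor monitoring state might look like the following minimal sketch (class and method names are ours; only the monitoring and random delay states are modeled):

```python
# Minimal sketch of the per-neighbor timer logic in the neighbor
# monitoring state (names are ours, not the paper's).

class NeighborState:
    MONITORING, RANDOM_DELAY = "monitoring", "random_delay"

    def __init__(self, c1_init):
        self.c1_init = c1_init
        self.c1 = c1_init
        self.state = self.MONITORING

    def on_packet(self):
        """Any packet from the neighbor resets C1(i) to its initial value."""
        self.c1 = self.c1_init
        self.state = self.MONITORING

    def on_alarm_reject(self, reset_value):
        """A Par carries the sender's remaining C1(i); adopt it only when
        our own remaining timer is of a smaller value."""
        if self.state == self.MONITORING and reset_value > self.c1:
            self.c1 = reset_value

    def tick(self, dt):
        """Advance time; enter random delay when C1(i) reaches 0."""
        if self.state == self.MONITORING:
            self.c1 -= dt
            if self.c1 <= 0:
                self.state = self.RANDOM_DELAY

ns = NeighborState(c1_init=10.0)
ns.tick(6.0)             # 4.0 remaining
ns.on_alarm_reject(7.0)  # a neighbor's Par extends the timer to 7.0
ns.tick(7.0)             # expires -> random delay
```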
3.2.2 Random Delay
Upon entering the random delay state for its neighbor
i, sensor s schedules the broadcast of an alarm query packet Paq
with TTL=1 and activates an alarm query timer C2 (i) for
neighbor i with initial value C2 . After the random delay
incurred by the MAC protocol is complete, sensor s enters the
alarm checking state by sending Paq, which contains the IDs of s
and i. In this study we focus on random access and carrier
sensing types of MAC protocols. For both protocols, this
random delay is added to avoid synchronized transmissions
from neighbors [8]. Note that if a sensor is dead, timers in a
subset of neighbors expire at approximately the same time
(subject to differences in propagation delays, which can be
very small in this case) with a high probability. The random
delay therefore aims to de-synchronize the transmissions of
Paq . Typically this random delay is smaller than C2 , but
it can reach C2 in which case the sensor enters the alarm
propagation state directly from random delay.
In order to reduce network traffic and the number of alarms
generated, when sensor s receives Paq with target i in the
random delay state, it cancels the scheduled transmission of
Paq with target i and enters the suspend state. This means
that sensor s assumes that the sensor which transmitted Paq
with target i will take the responsibility of checking and firing
an alarm. Sensor s will simply do nothing. Such alarm
aggregation can affect the robustness of our scheme, especially
when a network is not very well connected or is sparse.
The implication of this is discussed further in Section 5.
If sensor s receives any packet from i or Par with target i
in the random delay state, it knows that i is still alive and
goes back to neighbor monitoring. Sensor s also resets its
C1 (i) to C1 if it receives packets from i or to the C1(i) reset
value in Par if it receives Par with target i.
3.2.3 Alarm Checking
When sensor s enters the alarm checking state for neighbor
i, it waits for the responses Par from all its neighbors.
If it receives any packet from i or Par with target i before
C2 (i) expires, it goes back to neighbor monitoring. Sensor
s also resets its C1 (i) to C1 if it receives packets from
i or to the C1(i) reset value in Par if it receives Par with
target i. When timer C2(i) expires, sensor s enters the alarm
propagation state.
3.2.4 Suspend
The purpose of the suspend state is to reduce the traffic
induced by Paq and Par. If sensor s enters suspend for its
neighbor i, it believes that i is dead. However, unlike in
the alarm propagation state, sensor s does not fire
an alarm for i. If sensor s receives any packet from i, it goes
back to neighbor monitoring and resets C1(i) to C1 .
3.2.5 Alarm Propagation
After sensor s enters the alarm propagation state, it
deletes the target sensor i from its neighbor list and transmits
an alarm packet P alarm to the control center via multi-hop
routes. The way such routes are generated is assumed
to be in place and is not discussed here. If sensor s receives
any packet from i, it goes back to the neighbor monitoring
state and resets C1(i) to C1. If sensor s receives packets from
i within a reasonable time after the alarm is fired, extra
mechanisms will be needed to correct the false alarm for
i. On the other hand, a well-designed system should have
very low false alarm probability; thus, this situation should
only happen rarely.
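The per-neighbor transitions of Sections 3.2.1 through 3.2.5 can be summarized as a small table-driven state machine. The following is our own reconstruction of the state transition diagram (Fig. 2) from the prose; the event names are ours:

```python
# Per-neighbor monitoring states (a reconstruction of Fig. 2).
MONITORING, RANDOM_DELAY, ALARM_CHECKING, SUSPEND, ALARM_PROPAGATION = (
    "neighbor_monitoring", "random_delay", "alarm_checking",
    "suspend", "alarm_propagation")

TRANSITIONS = {
    (MONITORING, "c1_expired"): RANDOM_DELAY,           # no update from i
    (RANDOM_DELAY, "delay_done"): ALARM_CHECKING,       # broadcast Paq for i
    (RANDOM_DELAY, "paq_overheard"): SUSPEND,           # another sensor checks i
    (RANDOM_DELAY, "target_alive"): MONITORING,         # packet from i, or Par
    (ALARM_CHECKING, "target_alive"): MONITORING,
    (ALARM_CHECKING, "c2_expired"): ALARM_PROPAGATION,  # fire alarm for i
    (SUSPEND, "target_alive"): MONITORING,
    (ALARM_PROPAGATION, "target_alive"): MONITORING,    # i reappeared: false alarm
}

def step(state, event):
    """Apply one event; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# A dead neighbor, with no Paq overheard, ends in alarm propagation:
state = MONITORING
for event in ("c1_expired", "delay_done", "c2_expired"):
    state = step(state, event)
```

Note how every state returns to neighbor monitoring on evidence that the target is alive, which is what keeps false alarms recoverable.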
3.2.6 Self
In the self state, if sensor s receives Paq with itself as
the target, it broadcasts an alarm reject packet Par with
TTL=1.
In this state, sensor s also schedules the transmissions
of the existence/update packets. In order to reduce redundant
traffic, each sensor checks its transmission queue before
scheduling the next existence packet. After a packet transmission
completes, a sensor checks its transmission queue.
If there is no packet waiting in the queue, it schedules the
next transmission of the existence packet based on the exponential
distribution. If there are packets in the transmission
queue, it defers scheduling until these packets are transmitted.
The reason is that each packet transmitted by a
sensor can be regarded as an existence packet of that sensor.
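The deferred scheduling rule amounts to a single queue check; a minimal sketch with hypothetical names:

```python
import random

def next_pex_time(queue_len, now, T, rng):
    """Schedule the next existence packet only when the transmission queue is
    empty; otherwise defer, since any transmitted packet also serves as a Pex."""
    if queue_len > 0:
        return None  # defer until the queued packets are transmitted
    return now + rng.expovariate(1.0 / T)

rng = random.Random(1)
deferred = next_pex_time(queue_len=3, now=0.0, T=10.0, rng=rng)
scheduled = next_pex_time(queue_len=0, now=5.0, T=10.0, rng=rng)
```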
3.3 Robustness
In this subsection, we consider the robustness of the proposed
distributed monitoring system and show that there will
not be a deadlock.
Proposition 1. Assume all transmissions experience propagation
delays that are proportional to propagation distances
with the same constant of proportionality. A query following expiration of
C1(i) due to the death of sensor i will not be rejected by
a neighbor.
Proposition 2. In the event of an isolated death, the
system illustrated in the state transition diagram Fig. 2 will
generate at least one alarm.
Proposition 3. The response delays in both the basic
and the improved systems are upper bounded by C1
Propositions 1 and 3 are relatively straightforward. Below
we briefly explain proposition 2. An isolated death event is
a death event, e.g., of sensor i, which does not happen in
conjunction with the deaths of i's neighbors. From Fig. 2,
in the event of a death, a neighboring sensor's possible state
transition paths can only lead to two states, suspend or
alarm propagation. When a sensor receives Paq with target
i in the random delay state or the alarm checking state,
it enters the suspend state and does not transmit Paq or an
alarm with target i. However, the fact that it received the
Paq packet means that there exists a sensor in the
alarm checking state, since that is the only state in which
a sensor sends out a Paq packet. Since a sensor cannot send
and receive a Paq packet at the same time, at least one sensor
will remain in the alarm checking state, and will eventually
fire an alarm.
Note that Proposition 2 does not hold if correlated death events
happen, or when massive sensor destruction happens. This
will be discussed in more detail in Section 5.
4. SIMULATION RESULTS
We use Matlab to simulate the distributed monitoring system
and obtain performance results. During a simulation,
the position of each sensor is fixed, i.e., sensors are not
mobile. We create 20 sensors which are randomly deployed
in a square area. The side of this square area is 600 meters.
From the simulation, the average number of neighbors
of each sensor is between 5 and 6; therefore, this is a network
with moderate density. We can vary the side of this
square area to control the average degree of each sensor.
However, for all the results shown here 600 meters is used.
Each sensor runs the same monitoring protocol to detect
sensor death events. A sensor death generator controls the
time when a sensor death happens and to which sensor this
happens. Only one sensor is made dead at a time (thus
we only simulated isolated death events). Before the next
death event happens, the currently dead sensor is recovered.
In this study, we separately measure the two performance
metrics. In measuring probability of false alarm, no death
events are generated. In measuring the response delay, death
events are generated as described above. Although in reality
false alarms and deaths coexist in a network, separate measurements
help us isolate traffic due to different causes
and do not affect the validity of the results presented here.
For the response delay, we measure the delay between
the time of a sensor's death and the time when the first
alarm is fired. In our simulation alarms are not propagated
back to the control center, but we record this time. For the
probability of false alarm, denote the number of false alarms
generated by sensor s for its neighbor i by α_si, and denote the
total number of packets received by s from i by β_si. Pr[FA]
is then estimated by

    Pr[FA] ≈ (Σ_s Σ_i α_si) / (Σ_s Σ_i β_si).

This is because s resets its timer upon every packet received
from i, so each packet arrival marks a possible false alarm
event.
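The estimator is a plain ratio of sums over the per-pair counters; a minimal sketch with illustrative counts (the symbol names follow the definitions above):

```python
def estimate_pr_fa(false_alarms, packets_received):
    """Pr[FA] estimate: total false alarms over total packets received,
    summed over all (sensor, neighbor) pairs.  Each packet arrival from i
    resets C1(i), so it marks one opportunity for a false alarm."""
    total_fa = sum(false_alarms.values())
    total_rx = sum(packets_received.values())
    return total_fa / total_rx if total_rx else 0.0

# Illustrative counts for two (s, i) pairs:
alpha = {("s1", "i1"): 2, ("s2", "i1"): 1}      # false alarms per pair
beta = {("s1", "i1"): 100, ("s2", "i1"): 200}   # packets received per pair
pr_fa = estimate_pr_fa(alpha, beta)             # 3 / 300 = 0.01
```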
We simulate three di#erent monitoring schemes: the basic
system (with one timer), the improved system (with
two timers), and the variation (only the target itself can
respond). To compare the performance differences due to
different MAC protocols, we run the simulation under random
access and carrier sensing. A random period of time is
added to both schemes before transmissions of Paq and Par.
As mentioned before, a sensor waits for a random period
of time to de-synchronize transmissions before transmitting
Paq and Par. This period of time in both random access and
carrier sensing is chosen exponentially, with rate equal to the
product of packet transmission time and the average number
of neighbors a sensor has. The channel bandwidth we use
is 20K bits per second. The packet sizes are approximately
bytes. The radio transmission range is 200 meters.
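The de-synchronization delay can be derived from these parameters. In the sketch below the packet size is a placeholder (the exact value is not given above), and we read the rate rule as giving a mean delay equal to the product of packet transmission time and average neighbor count; both are assumptions:

```python
import random

CHANNEL_BPS = 20_000    # 20 kbit/s channel (stated above)
PACKET_BYTES = 25       # placeholder: the exact packet size is not given here
AVG_NEIGHBORS = 5.5     # moderate density: 5-6 neighbors on average

tx_time = PACKET_BYTES * 8 / CHANNEL_BPS  # packet transmission time, seconds
# Assumed reading of the de-synchronization rule: the random delay is
# exponential with mean (packet transmission time) x (average neighbor count).
mean_delay = tx_time * AVG_NEIGHBORS

rng = random.Random(7)
delay = rng.expovariate(1.0 / mean_delay)
```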
4.1 Heavy Traffic Load Scenario with T=1
Fig. 4 shows the simulation results with varying C1. T
is the average inter-arrival time of update packets in seconds.

Figure 4: The Simulation Results with T=1. (Curves: Random Access and Carrier Sensing, each for the Basic System, the Improved System, and The Variation; metrics: probability of false alarm and response delay.)

The other parameters are fixed. The value of C2 is
chosen to be somewhat larger than the round-trip time
of transmissions of Paq and Par. With T=1 second we
have a very high update rate. This scenario thus represents
a busy and highly alert system. As can be seen in Fig. 4, the
improved systems (both the original one and the variation)
have much lower probabilities of false alarm than the basic
system under the same MAC protocol. The response delays
of the different systems under the same MAC protocol
differ very little (by at most 0.5 seconds). There is no
consistent tendency as to which system results in the highest
or lowest response delay. For a predetermined probability of false alarm level,
the improved systems have much lower response delay than
the basic system. The difference between the original
improved system and the variation is very limited.
In deciding which scheme to use in practice
we need to keep in mind that the variation results in lower
traffic volume and thus possibly lower energy consumption.
From Fig. 4 we can also see that carrier sensing has lower
false alarm than random access under the same system and
parameters. We will see that carrier sensing always has lower
false alarm than random access in subsequent simulation
results. The reason is that carrier sensing can help reduce
the number of packet collisions and thus the number of false
alarm events. However, carrier sensing results in sensing
delay. Thus carrier sensing has larger response delay than
random access under the same system and parameters.
Fig. 5 shows the simulation results with various C2; T
and C1 are fixed. The value of C1 is chosen to be larger
than T. As can be seen in Fig. 5, when C2 increases, false
alarm decreases and the response delay increases. All other
observations are the same as when we vary C1 and keep C2
fixed. However, for the response delay, the systems with
lower false alarm have larger response delays. The reason is
that when C2 is large, a system with lower false alarm usually
has more chances to receive Par and reset C1(i), thus causing
the response delay to increase. The differences between the
response delays of different systems are not significant.
4.2 Moderate Traffic Load Scenario with T=10
Figure 5: The Simulation Results with varying C2. (Curves: Random Access and Carrier Sensing, each for the Basic System, the Improved System, and The Variation; metrics: probability of false alarm and response delay.)
Figure 6: The Simulation Results with T=10. (Curves: Random Access and Carrier Sensing, each for the Basic System, the Improved System, and The Variation; metrics: probability of false alarm and response delay.)
Fig. 6 shows the simulation results with various C1. T = 10
seconds and C2 are fixed. This represents a system
with a lower volume of updating traffic. Compared to Fig. 4,
we observe some interesting differences. First, in Fig. 6,
the response delays at T=10 are larger than the delays at
T=1. This is easy to understand since C1 at T=10 is larger
than C1 at T=1. Second, since the traffic with T=10 is
lighter than the traffic with T=1, we expect the false alarm
rate at T=10 to be smaller than at T=1. However, the probability
of false alarm in the basic system seems not to decrease
when we increase T from 1 to 10. This is because as T
increases, false alarms are more likely to be caused by the
increased variance in the update packet inter-arrival times
than by collisions, the dominant cause when T is small. Since the Pex
intervals are exponentially distributed, in order to achieve
low false alarm probability comparable to results shown in
Fig. 4, C1 needs to be set appropriately.
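This effect can be quantified. Ignoring collisions and losses, a neighbor's timer fires spuriously whenever a single inter-arrival gap exceeds C1, which for an exponential distribution with mean T happens with probability exp(-C1/T); holding that probability fixed therefore requires scaling C1 linearly with T. A sketch with illustrative parameter values:

```python
import math

def p_gap_exceeds(T, C1):
    """P(an exponential inter-arrival with mean T exceeds C1) = exp(-C1/T).
    Ignoring collisions, this is the basic system's chance that one update
    gap outlives the neighbor-monitoring timer."""
    return math.exp(-C1 / T)

p_fast = p_gap_exceeds(T=1.0, C1=7.0)           # T=1 with a generous C1
p_slow = p_gap_exceeds(T=10.0, C1=7.0)          # same C1 at T=10: much worse
p_slow_scaled = p_gap_exceeds(T=10.0, C1=70.0)  # scaling C1 with T restores it
```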
Figure 7: Total Power Consumption with T=10. (Curves: Random Access and Carrier Sensing, each for the Basic System, the Improved System, and The Variation.)
Figure 8: The Simulation Results with varying C2. (Curves: Random Access and Carrier Sensing, each for the Basic System, the Improved System, and The Variation; metrics: probability of false alarm and response delay.)

Although the improved systems achieve small false alarm
probability, the control packets result in extra power con-
sumption. Fig. 7 shows the total power consumption of 20
sensors under the same scenario as Fig. 6. The total power
consumption is calculated by counting the total number of
bits transmitted and received by each sensor and using the
communication core parameters provided in [1]. We do not
consider sensing energy and sensors are assumed to be active
all the time. As can be seen in Fig. 7, the improved systems
have slightly larger total power consumption than the basic
system under the same MAC protocol. Overall the largest
increase does not exceed 1.6%. Thus the improved systems
achieve much better performance at the expense of minimal
energy consumption. Note that here we only consider the
energy consumed in monitoring. In reality, a higher false alarm
probability will also increase the alarm traffic volume in the
network, thus resulting in higher energy consumption. Also
note that the power consumption under different MAC
protocols is not comparable because the channel sensing
power is not included.
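The bit-counting energy accounting behind Fig. 7 can be sketched as follows. The per-bit energy constants are hypothetical stand-ins (the actual values come from the communication-core parameters of reference [1] and are not reproduced here), as are the bit counts:

```python
# Hypothetical per-bit energy costs (assumed, not the paper's actual values).
E_TX_J_PER_BIT = 50e-9   # transmit energy per bit
E_RX_J_PER_BIT = 50e-9   # receive energy per bit

def monitoring_energy(bits_tx, bits_rx):
    """Total monitoring energy in joules, counting bits sent and received
    (sensing energy excluded; sensors assumed active all the time)."""
    return bits_tx * E_TX_J_PER_BIT + bits_rx * E_RX_J_PER_BIT

# Illustrative totals: the improved system moves about 1.6% more bits.
e_basic = monitoring_energy(bits_tx=1_000_000, bits_rx=5_000_000)
e_improved = monitoring_energy(bits_tx=1_016_000, bits_rx=5_080_000)
overhead = e_improved / e_basic - 1.0
```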
Fig. 8 shows the simulation results with various C2; T
and C1 are fixed. As can be seen in Fig. 8, when C2
increases, false alarm decreases and the response delay in-
creases. The improved systems have much lower false alarm
Figure 9: The Simulation Results with T=60. (Curves: Random Access and Carrier Sensing, each for the Basic System, the Improved System, and The Variation; metrics: probability of false alarm and response delay.)
than the basic system. For the response delay, similar to
Fig. 5, the systems with lower false alarm have larger response
delays. Furthermore, we can see that in order to
reduce false alarm significantly we need to increase C2 significantly
for a fixed C1. However, in practice we should
choose to increase C1 rather than C2. This is because
by increasing C1 we can reduce the C1(i) expiration
events and thereby the network traffic, while increasing C2
has no such effect. Increasing C1 has approximately the
same effect on the response delay as increasing C2.
4.3 Light Traffic Load Scenario with T=60
Fig. 9 shows the simulation results with various C1, and
Fig. 10 shows the simulation results with various C2.
All results are consistent with previous observations,
and therefore the discussion is not repeated here.
5. DISCUSSION
In the previous sections we presented a two-phase timer
scheme for a sensor network to monitor itself. Under this
scheme, the lack of update from a neighboring sensor is taken
as a sign of sensor death/fault. We also assumed that connectivity
within the network remains static unless attacks
or faults occur. If connectivity changes due to disruption in
signal propagation, then it becomes more difficult to distinguish
a false alarm from a real alarm. If a sensor does not
lose communication with all its neighbors, then neighbor
consultation can still help in this case. As discussed before,
if a sensor reappears after a period of silence (beyond the
timeout limit), then extra mechanisms are needed to handle
alarm reporting and alarm correction.
All our simulation results are for isolated death events.
In addition we have assumed that sensors are alive all the
time. In this section we will discuss our scheme and possible
extensions under di#erent attacks and sensor scenarios.
5.1 Partition Caused by Death
Low connectivity of the network may result in security
problems under the proposed scheme, e.g., as illustrated in
Figure 10: The Simulation Results with T=60. (Curves: Random Access and Carrier Sensing, each for the Basic System, the Improved System, and The Variation; metrics: probability of false alarm and response delay.)
Figure 11: Partition Caused by Death.
Fig. 11. If sensor A has only one neighbor B and B is
dead, no one can monitor A. One possible solution is to
use location information. Assuming that the control center
knows the locations of all sensors, when the control center
receives an alarm regarding B, it checks if this causes any
partition by using the location information. If a partition
occurs, the control center may assume that the sensors in
the partition are all dead. Thus it will attempt to recover
all sensors in the partition.
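The control center's partition check reduces to graph reachability: remove the reported-dead sensors and test which remaining sensors can no longer reach the control center. A minimal sketch (the topology and node names are illustrative):

```python
from collections import deque

def partitioned_sensors(adjacency, dead, control_center):
    """Return the set of live sensors that can no longer reach the control
    center once the dead sensors are removed (BFS over the residual graph)."""
    live = set(adjacency) - set(dead)
    reached, queue = {control_center}, deque([control_center])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v in live and v not in reached:
                reached.add(v)
                queue.append(v)
    return live - reached

# Fig. 11 scenario: A's only neighbor B dies, cutting A off.
adj = {"CC": ["C"], "C": ["CC", "B"], "B": ["C", "A"], "A": ["B"]}
stranded = partitioned_sensors(adj, dead=["B"], control_center="CC")
```

In this scenario the check returns {"A"}, so the control center would attempt to recover every sensor in the stranded partition.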
5.2 Correlated Attacks
We define a correlated attack as the situation where multiple
sensors in the same neighborhood are destroyed or disabled
simultaneously or at almost the same time. Fig. 12
shows a simple scenario with 7 sensors. Each circle represents
a sensor, and a line between circles represents direct
communication between sensors. If sensor A is disabled, sensors
B, C, and D will detect this event from lack of update
from A. Assume C and D are in the suspend state because they
both received an alarm query from B. Thus, only B is responsible
for transmitting an alarm for A. Suppose now B is
also disabled due to correlated attacks before being able to
transmit the alarm. A's death will not be discovered under
the scheme studied in this paper, since both C and D will
Figure 12: Correlated Attacks.
Figure 13: Alarm Generation Percentage with T=1, varying C1. (Curves: Random Access and Carrier Sensing, each for the Improved System and The Variation.)
have removed A from their neighbor list by then. This problem
can be tackled by considering the concept of suspicious
area, which is defined as the neighborhood of B in this case.
For example, the control center will eventually know B's
death from alarms transmitted by B's neighbors, assuming
no more correlated attacks occur. The entire suspicious area
can then be checked, which includes B and B's neighbors.
As a result A's death can be discovered. A complete solution
is subject to further study.
This example highlights the potential robustness problem
posed by aggregating alarms (i.e., by putting nodes in the
suspend state) in the presence of correlated attacks. The
goal of alarm aggregation is to reduce network tra#c. There
is thus a trade-o# between high robustness and low overhead
tra#c. Intuitively increased network connectivity or node
degree can help alleviate this problem since the chances of
transmitting multiple alarms are increased as a result of increased
number of neighbors. We thus measured the percentage
of neighbors generating an alarm in the event of an
isolated death under the same scenario as in Fig. 4. The
simulation results are shown in Fig. 13. The alarm generation
percentage is the ratio between the number of alarms
transmitted for a sensor's death and the total number of
neighbors of that dead sensor. As can be seen, all the improved
systems have alarm generation percentage greater
than 40%.
These results are for an average node degree of 6. We
also ran the simulation for average node degrees of 3 and
9. There is no significant difference between these results.
Note that this result is derived based on an isolated attack model,
under which we ensure at least one alarm will be fired (see
Proposition 2 in Section 3.3), even when using alarm aggregation.
However, if correlated attacks are a strong possibility,
we do not recommend the aggregation of alarms, thus removing
the suspend state. Further study is needed to see how
well or poorly alarm aggregation performs in the event of
correlated attacks with different levels of connectivity.
5.3 Massive Destruction
Fig. 14 illustrates an example of massive destruction, where
nodes within the big circle are destroyed, including Sensor
A and its neighbors. If this happens simultaneously, the
control center will not be informed of A's death since all
the sensors that were monitoring A are now dead. However,
the nodes right next to the boundary of the destruction area,
i.e., nodes outside of the big circle in this case, are still alive.
They will eventually discover the death events of their corresponding
neighbors and inform the control center.

Figure 14: Massive Destruction. Nodes with dashed
circles are dead sensors. Nodes with solid circles are
healthy sensors.

From
these alarms the control center can derive a dead zone (the
big circle in Fig. 14), which includes all dead sensors. A's
death will be, therefore, discovered.
5.4 Sensor Sleeping Mode
Sensors are highly energy constrained devices, and a very
e#ective approach to conserve sensor energy is to put sensors
in the sleeping mode periodically. Many proposed approaches
use a centralized scheduling protocol, e.g., TDMA,
to perform sleep scheduling. Although our improved mechanism
is not designed for such collision-free protocols, it can
potentially be modified to function in conjunction with the
sensor sleeping mode. A straightforward approach is to put
sensors in the sleeping mode randomly. However, by doing
so a sensor may lose packets from neighbors while it is
asleep, which then increases the false alarm probability. Increased
timer values can be used to reduce false alarms at the
expense of larger response delays.
In [9] a method is introduced to synchronize sensors' sleeping
schedules. Under this approach, sensors broadcast SYNC
packets to coordinate their sleeping schedules with their
neighbors. As a result, sensors in the same neighborhood wake up
at almost the same time and sleep at almost the same time.
Sensors in the same neighborhood contend with neighbors
for channel access during the wake-up time (listen time). In
our scheme studied in this paper, the control packets (Pex ,
Paq , and Par ) can all be regarded as the SYNC packets used
to coordinate sensor sleep schedule. The stability of coordination
as well as the resulting performance need to be
further studied.
5.5 Response Delay
The improved system may have longer response delay
than the basic system. Fig. 15 shows such a scenario. In
the improved system, if sensor A fails to receive the last Pex
from neighbor i, and i dies right after this failure, A sends
Paq to its neighbors upon C1(i) expiration. Its neighbor B,
which successfully received the last Pex from i, responds
with a Par including a C1(i) reset value. Thus, A
resets its expired C1(i) when it receives Par. An alarm for
i will be fired after this new C1(i) and C2(i) expire. However,
in the basic system sensor A fires an alarm upon the
expiration of the original C1(i), which in this scenario occurs
earlier than in the improved system. Note that although the
response delay in the improved system is sometimes longer
Figure 15: Event Schedule of a Potential Problem.
"R" means Pex is received. "L" means Pex is lost.
than in the basic system, the response delay is still bounded
(cf. Proposition 3). Also note that the above scenario can occur in the
opposite direction as well, i.e., the alarm is fired earlier in the
improved system.
5.6 Update Inter-Arrival Time
In our simulation, the existence/update packet inter-arrival
time is exponentially distributed with mean T in order to obtain
a large variance that randomizes transmissions. As shown
in the simulation results, when T is large, the variance of
the inter-arrival times is also large. As a result, a large C1
is needed to achieve a small false alarm probability. An alternative
is to use a fixed inter-arrival time T along with
proper randomization via a random delay before transmission.
By doing this, we can eliminate the false alarms caused
by the large variance of the update inter-arrival time when
the network traffic load is light (T is large).
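The variance argument is easy to check numerically: exponential gaps with mean T have standard deviation T, while fixed gaps with a small uniform random delay have standard deviation of only jitter/sqrt(12). An illustrative comparison (parameter values are ours):

```python
import random
import statistics

rng = random.Random(0)
T, jitter, n = 60.0, 1.0, 5000

exp_gaps = [rng.expovariate(1.0 / T) for _ in range(n)]
fixed_gaps = [T + rng.uniform(0.0, jitter) for _ in range(n)]

exp_sd = statistics.pstdev(exp_gaps)      # about T: long gaps cause false alarms
fixed_sd = statistics.pstdev(fixed_gaps)  # about jitter / sqrt(12): tiny
```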
6. CONCLUSION
In this paper, we proposed and examined a novel distributed
monitoring mechanism for a wireless sensor network
used for surveillance. This mechanism can monitor sensor
health events and transmit alarms back to the control center.
We show via simulation that the proposed two-phase mechanism
(both the original and the variation) achieves much
lower probability of false alarm than the basic system with
only one timer. Equivalently for a given level of false alarm
probability, the improved systems can achieve much lower
response delays than the basic system. These are achieved
with minimal increase in energy consumption. We also show
that carrier sensing performs better than random access and
their performances converge when we increase the average
update period T . Increasing timer values results in lower
false alarm and larger response delays.
There are many interesting problems which need to be further
studied within the context of the proposed mechanism,
including those discussed in Section 5. Different patterns of
attacks and their implications for the effectiveness of the proposed
scheme need to be studied. Necessary modifications
to our current scheme will also be studied in order for it to
operate efficiently in conjunction with the sensor sleeping mode.
7. REFERENCES
Bounding the lifetime of sensor networks via optimal role assignments.
Highly resilient
Building e
Impact of network density on data aggregation in wireless sensor networks.
An architecture for building self-configurable systems
A transmission control schemes for media access in sensor networks.
An energy-e#cient mac protocol for wireless sensor networks
Residual energy scan for monitoring sensor networks.
Hai Liu , Pengjun Wan , Xiaohua Jia, Maximal lifetime scheduling for K to 1 sensor-target surveillance networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.15, p.2839-2854, October 2006 | wireless sensor networks;security;system design;monitor and surveillance |
Using Signal Processing to Analyze Wireless Data Traffic

ABSTRACT
Experts have long recognized that theoretically it was possible to perform traffic analysis on encrypted packet streams by analyzing the timing of packet arrivals (or transmissions). We report on experiments to realize this possibility using basic signal processing techniques taken from acoustics to perform traffic analysis on encrypted transmissions over wireless networks. While the work discussed here is preliminary, we are able to demonstrate two very interesting results. First, we can extract timing information, such as round-trip times of TCP connections, from traces of aggregated data traffic. Second, we can determine how data is routed through a network using coherence analysis. These results show that signal processing techniques may prove to be valuable network analysis tools in the future.

1. INTRODUCTION
Network security experts have long known that examining
even subtle timing information in a traffic stream could, in
theory, be exploited to achieve effective traffic analysis [15].

This work was sponsored by the Defense Advanced Research
Projects Agency (DARPA) under contract No.
MDA972-01-C-0080. Views and conclusions contained in
this document are those of the authors and should not be
interpreted as representing official policies, either expressed
or implied.

Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
September 29, 2002, Atlanta, Georgia, USA.
Consider the packet arrival pattern in a TCP flow. The
pattern is a function of a number of key network parameters,
such as round-trip times, send rates, and various TCP
and MAC layer timeouts, as well as the values for all other
flows that share network links with the flow in question [18].
In theory, therefore, a trace of packet arrivals should be a
possibly noisy composite of all of these patterns.
The problem of extracting characteristics from an otherwise
noisy environment is very similar to the extraction of
features from sonar data. Sonar signals are passed through
sophisticated signal processing filters to identify the signals
that have structure not otherwise visible.
The key idea, then, is to convert packet traces into signals,
and then examine the signals to identify prominent recurring
frequencies and time-periods. With an eective signal
encoding, many well-known frequency analysis techniques
from the signal processing literature can be applied. We use
the frequency analysis techniques to perform traffic analysis:
to reconstruct the network topology or to extract network
traffic parameters.
In this paper, we consider the use of techniques similar to
those employed in acoustics processing to do traffic analysis
in the presence of noise, whether the noise is inherent in the
traffic stream or placed there intentionally to camouflage the
interesting traffic flows. We take packet traces of streams
and convert them into signals suitable for signal processing.
We then show examples of the kind of information that can
be extracted from the signals using two techniques: Lomb
Periodograms and Coherence.
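As a concrete illustration of the first technique, the Lomb periodogram (available as scipy.signal.lombscargle, which expects angular frequencies) recovers a periodicity from unevenly spaced samples, much like the timing samples in a packet trace. The signal below is synthetic, not taken from the experiments:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 500))  # uneven sample times, like packet arrivals
f0 = 0.8                                   # hidden periodicity in Hz
y = np.cos(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(500)

freqs_hz = np.linspace(0.05, 2.0, 400)
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs_hz)  # wants rad/s
peak_hz = freqs_hz[np.argmax(power)]       # should sit near f0
```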
2. DESIRED RESULTS
There is a wide range of questions that one might ask a
traffic analysis system to answer. We, however, had particular
types of results in mind when we began our work with
signal processing techniques.
We assumed an environment in which senders seek to
mask or hide their traffic using techniques such as tunneling,
traffic aggregation, false traffic generation, and data
padding. Tunneling hides the original source and ultimate
destination and uses security gateways as the endpoints as
traffic traverses hostile networks. Traffic aggregation works
with tunneling under the theory of protection in numbers:
many traffic flows all sharing the same tunnel may mask any
one particular flow's characteristics. If there is not enough
aggregated traffic to hide individual flows, false traffic can be
generated to help hide the traffic of interest. Data padding
tries to hide information that can be extracted from the
packet length.
We then sought techniques which answered one or more
of the following questions:
Who is talking to whom? Ideally, we would be able
to identify each individual application endpoint. However,
a very useful result would be to determine, for
instance, how many different sites are sending their
traffic over the same IPsec tunnel.

What path is traffic taking over the network? This
question is of particular interest in wireless networks
(where determining how traffic is routed is difficult),
but may also be useful in multi-tunnel environments
such as Onion Routing [20].

What types of application data are being sent? Are
we seeing interactive applications or file transfer applications
or both?

Can we associate transmissions with a particular flow?
For instance, if we determine that five concurrent flows
are underway over an IPsec tunnel, can we (with high
probability) determine which IPsec packets are associated
with which flow? If we could break aggregate
flows into their components, we could potentially use
additional traffic analysis tools that are tuned to single
flows (e.g., the password inference technology developed
by [22]).
3. RELATED WORK
Signal processing has been used to analyze the nature of
aggregate network traffic, and to develop accurate models
of traffic consisting of an asymptotically large number of flows,
such as the traffic on a large intranet, or on the Internet
backbone [4, 2, 16]. It has been shown that aggregate traffic
on the Internet is self-similar, or shows long-range dependence
[16]. Self-similarity means that no single time-scale
completely captures the rich behavior of the aggregate network
traffic. This observation implies that one needs to
describe the evolution and steady progression of characteristics
(such as the number of active TCP connections or
the distribution of IP packet interarrival times) of aggregate
network traffic across all scales, because no single scale can
describe all of the fluctuations and variations [2, 9]. This
observation has led to work on long-term memory models,
self-similar models and models with fractal features, where
signal processing tools such as the Wavelet transform are
especially applicable because of their ability to capture frequency
responses at various scales simultaneously [12, 3].
Though the work on the nature of aggregate network traffic
is relevant to the material presented in this paper, the
general focus of our work is not to model aggregate traffic,
but rather the inverse problem: to deconstruct the traffic
into individual flows, or sessions.
Another related area is that of network tomography [24, 6].
Network tomography is concerned with identifying network
bandwidth, performance and topology by taking measurements,
either actively from the network nodes, with their
cooperation [7, 8], or passively using measurements from
preexisting traffic [23, 5]. Most network tomography work
has also dealt exclusively with network monitoring and inference
of wired networks such as the Internet ([6]). Moreover,
traditional network tomography relies on the ability of the
measuring agents to be able to participate in the communications,
possibly at the network layer. The participation
may either be in the form of the ability to take measurements,
or even the ability to explicitly transmit packets to
other nodes in the network.

Figure 1: Wireless network with nodes (n1-n7) and
tap (p), showing the range of tap p.
However, in some scenarios, such as in adversarial wireless
networks, we cannot assume that the measuring agents can
participate on the network. Indeed, in many military domains,
the nature of the network protocols deployed on the
adversary's network may not even be known. As a result,
the work in this paper makes far more conservative assumptions
about what a measuring agent may do. Our aim is to
discover network topology purely from the raw transmission
traces.
4. NETWORK AND TAP MODEL
Our goal in this work is to make the traffic analysis techniques
broadly applicable. To that end, we make as few
assumptions about the network and the observed traffic as
possible.
We assume that there is some network over which discrete
pieces of data are transmitted by senders. The transmission
of these pieces of data causes network events. An event
is individually detectable or distinguishable; that is, a listening
device can tell when an event is over and will not
combine concurrent events from multiple senders into one
event. It is important to note that an event need not perfectly
correspond to a data packet. An event may represent
the transmission of part of a packet (e.g., a frequency hop),
or multiple packets (say, two packets contained in a single
wireless burst transmission).
A sender in this model is the device that caused the event.
The sender is not necessarily the device that actually originated
the data that caused the event.
We assume that there are one or more traffic taps within
the network. A tap seeks to observe traffic on as much of the
network as is possible from the tap's location. This broad
definition is chosen to accommodate the difference between
a tap on a wire or fiber, where the tap is restricted to data
placed on the wire, and a wireless tap, which is observing
some (potentially very large) fraction of the wireless spectrum,
and thus may see transmissions from a wide range of
sources. This range is shown in Figure 1.
A tap collects event information in a trace. For most of
the work discussed in this paper, the trace is assumed to
contain only the time the event was seen and the identity of
the sender of the event.
The concept of identity used here is intentionally vague;
the identity could be the IP address of an IPsec gateway, the
location of a radio transmitter, the upstream or downstream
transmitter on a point-to-point link, or simply "the same
sender as the one that sent these other events." The identity
of a sender must be unique among all senders known to
the tap (or set of cooperating taps); we assume the data
collection process is setting identity and maintaining the
uniqueness property.
We assume each tap has access to a clock used to record
when each event was heard. In a wireless network, this
time of detection may be the middle of the transmission due
to propagation or other effects such as frequency hopping.
The granularity of the clock used to record time must be
sufficiently small that two consecutive events on the same
channel will be given different timestamps.
We note that there is no assumption about knowledge of
the length of the event, the destination of the data corresponding
to the event, signal strength, or any insight into
the contents of the event, even though, in many cases, this
and other additional information may be available. How
this additional information might be used is discussed in
later sections.
A tap may not capture all traffic. For instance, reception
on a wireless network may be variable due to environment,
noise, transmission power, or jamming such that a tap is
unable to observe some transmissions. Furthermore, a tap
may occasionally make an error and mistakenly believe it
has seen an event when no event was sent (e.g., due to noise
on the wireless network).
There are some other characteristics of taps worth commenting
on:
Multiple taps: Multiple taps may be used together to
develop a more complete picture of the network traffic.
Resource limitations: A tap (or a network of taps)
must be capable of storing all the transmissions it detects
for a sufficient amount of time for analysis to take place.
For example, the round-trip time of a transport layer flow
cannot be determined if the history that can be stored at
taps is less than one round-trip time. The total volume of
data that must be stored depends on the capacity of the
channel and the maximum round-trip time of flows seen on
the channel.
In the wireless environment, a tap may also be limited
by the amount of spectrum it can examine in any given
time. Indeed the spectrum range covered by the tap may be
different from the spectrum range used by the sender, with
the result that some events are not observed.
Mobility: Nodes may move around the network. Thus
senders may move in and out of the range of one or more
taps. We assume that senders typically dwell in the range
of one or more taps long enough for events to be heard, and
the senders identified and recorded.
Figure 2: Model of Analysis. (The figure shows the
processing pipeline: Network, Tap, Signal Encoding,
Signal Processing, Analysis.)
5. A NOTE ABOUT THE DATA
Even though the techniques described in this paper have
all been tested on real wireless data, the examples presented
here all use simulated network data. We chose to present
simulated data for two reasons.
The first reason is that, so far, we have not had the equipment
to collect the kinds of wireless traces we need. Rather,
we have taken existing traces and attempted to adapt them.
So, for instance, one wireless data set we have used is a tcpdump
trace of the wireless data and lacks the MAC layer
ACK and RTS signals, and has deleted any errored packets.
As a result, some of the key frequency information is
lost. (One paradoxical consequence is that real data actually
makes some results look better than they should because
confusing signals have been edited from the traces).
The second reason is that no real trace, so far, has come
with all the required "ground truth" data needed to cross-check
results. So real data often involves making guesses
about the meaning of results.
Simulation data does not suffer these limitations. We have
all the signals and can present them in all their complexity.
And if we cannot explain a result from a simulation, it represents
a serious challenge in interpretation, not the lack of
the necessary supporting data. So, for the purposes of clear
exposition, we have used simulation data.
6. SIGNAL ENCODING
Figure 2 shows our traffic analysis processing model. Traffic
is captured from the network via taps. The traces from
the taps are encoded into signals. The signals are then processed,
using various signal processing techniques, and the
final result is analyzed. This is precisely the same model of
analysis used in signal processing of acoustic data.
The first step in producing a signal is acquiring the samples.
Signal processing makes a distinction between whether
the samples are gathered by a uniform or non-uniform sampling
process. The type of signal produced must be appropriate
for the target signal processing algorithm. With data
traffic, the major concern is that the sampling frequency
allow the separation of meaningful events. We assume the
sampling process meets this event separation criterion. Given
separation, we can convert a trace into an event stream that
is appropriate for any target signal processing algorithm.
The trace represents a set of discrete events x(n), logged
at times tn, for n = 1, ..., N, where N is the number of
events in the trace. The general approach to producing a
uniformly sampled signal representing the time of arrival of
event x(n) is to pick an appropriate time quantization interval
T, dividing time into increments mT, where m is an
integer, and then place a marker in the bin representing the
nearest time to tn when the event x(n) was detected. That
is, the marker is placed at q(tn), where q is the quantization
function, such as the floor or the ceiling function. The
Nyquist limit provides the means for determining the size of
the time increment; we aim to minimize the number of bins
and yet meet the Nyquist limit. This process is known as
resampling. Due to the errors introduced by quantizing the
time of arrival, some information contained in x(n) may be
lost in the resultant encoding.

Figure 3: Excerpt of trace capturing transmissions
from four nodes of Figure 5. There is an FTP flow
between nodes 0 and 3 and a pair of UDP flows
between nodes 1 and 3. All traffic is routed through
node 2. The Time and Duration of the transmissions,
and the transmitter (T) node id, are captured by
the tap. The extra information within (/*/) is
listed here purely to give the reader an insight into
the trace dynamics, and is not known to or captured
by the tap. The extra info includes the receiver
id (R), the global origin (O) and destination (D)
of the packet contained in this transmission, and a
Description of the packet contents.
To produce a non-uniformly sampled signal representing
the time of arrival of events x(n), markers are placed only
at times tn . Since there is no resampling, no quantization
error is introduced into the encoded signal.
The trace may be rich with information that can be encoded
as a signal. Consider a function g as the encoding
function. For a binary, or impulse, representation of time
of arrival, g(mT) is 1 when an event maps to bin mT, and 0
otherwise. A sign encoding function (+1, -1) can be used to
indicate which end of a wire the signal came from. A weighted
encoding function can represent the transmission duration or
signal strength. Additional parameters for each event can be
represented in the signal by refining the encoding function
g.
When multiple events occur simultaneously (i.e.,
within the same sample period) and would be set to the same
time bin mT, we jitter the times of the conflicting events into
empty adjacent sample times in order to keep data from
being obscured.
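The quantize-and-jitter procedure above can be sketched in a few lines of Python. The function name and the nearest-bin rounding choice are ours, not the paper's; it produces the binary impulse encoding g(mT):

```python
def encode_uniform(event_times, T):
    """Quantize event arrival times t_n into bins of width T.

    Returns a list s where s[m] = 1 if an event was mapped to bin m
    (i.e., to time mT), else 0.  An event whose bin is already occupied
    is jittered into the nearest empty adjacent bin so that concurrent
    events are not obscured.
    """
    n_bins = int(round(max(event_times) / T)) + 1
    signal = [0] * n_bins
    for t in event_times:
        m = int(round(t / T))  # nearest-bin quantization q(t_n)
        if signal[m] == 0:
            signal[m] = 1
        else:  # collision: jitter into an empty neighboring sample time
            for d in (1, -1, 2, -2):
                if 0 <= m + d < n_bins and signal[m + d] == 0:
                    signal[m + d] = 1
                    break
    return signal

# Events at 0.0 s and 0.3 s with T = 0.1 s land three bins apart.
print(encode_uniform([0.0, 0.3], 0.1))  # [1, 0, 0, 1]
```

Quantization error appears here as the gap between t and mT; shrinking T reduces it at the cost of more bins.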
While it is possible to encode the events of multiple senders
into a single signal, better signal processing results usually
come when one generates a separate signal representation
for each sender. Recall that the sender is the most recent
transmitter of the data that caused the event; it is not the
originator of the data. Thus, a single sender's trace may
contain the data of multiple flows (e.g., when the sender is a
router). The idea here is simply to split the traces as much
as possible before processing.
An example of a trace captured by a tap monitoring transmissions
in a wireless network is shown in Figure 3. As discussed
earlier, a duration-weighted sign encoding function
can be used to encode the captured transmissions into a
signal appropriate for signal processing. Figure 4 shows an
encoding of transmissions from nodes 2 and 3, which can be
used to analyze communications that span these nodes.

Figure 4: A non-uniformly sampled signal representation
of the trace in Figure 3 (amplitude vs. time):
f = +duration for transmissions of node 2 and
f = -duration for transmissions from node 3.
7. SIGNAL PROCESSING AND ANALYSIS
Given an encoded signal, we can make use of a wide range
of signal processing algorithms to try to extract traffic information.
In this section, we will describe some signal processing
techniques which we have found useful for trace analysis. 1
Most spectral processing techniques use the standard Discrete
Fourier Transform (DFT) to compute the spectral power
densities. The DFT requires that the signal be uniformly
sampled.
The DFT of a uniformly sampled signal x(n) (with
M = q(tN)/T samples) provides an M-point discrete spectrum

X(k) = DFT{x(n)} = Σ_{n=0}^{M−1} x(n) e^(−j2πkn/M)   (1)

The values of k correspond to M
equally spaced frequency bins of the sampling frequency of
x.
1 Unless otherwise noted, more information about these techniques
can be found in signal processing textbooks such as
The resulting spectrum X(k) is a vector of complex numbers.
The peak values in X(k) correspond to frequencies
of event times of arrival. The magnitudes of the peaks are
proportional to the product of how often the arrival pattern
occurs and the weighting of the data performed by encoding
the signal. The phase of the peaks shows information on the
relative phases between arrival patterns. The Fast Fourier
Transform (FFT) is a computationally efficient decomposition
of Equation 1, made possible when M is a product of
powers of small integers, though powers of two are the most
commonly used.
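As a concrete illustration of Equation 1, the following sketch evaluates the DFT sum directly (a naive O(M²) loop rather than the FFT) on a binary impulse encoding; an arrival pattern with a period of 4 samples concentrates its energy in every 4th frequency bin:

```python
import cmath

def dft(x):
    """Direct evaluation of X(k) = sum_n x(n) * exp(-j*2*pi*k*n/M)."""
    M = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / M) for n in range(M))
            for k in range(M)]

# Binary impulse encoding of events arriving every 4 sample periods.
x = [1 if n % 4 == 0 else 0 for n in range(16)]
mags = [abs(X_k) for X_k in dft(x)]
print([k for k, m in enumerate(mags) if m > 1e-9])  # [0, 4, 8, 12]
```

The peak magnitudes (4.0 here) grow with how often the arrival pattern occurs, matching the proportionality described above.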
If the characteristics of the signal (due to variations in the
traffic flows) vary markedly during the DFT analysis, then
the resulting spectrum can be misleading, since the resolved
peaks may be present for only part of the time in the signal.
Also, it is often the case with signal representations that the
spectral content contains many harmonically related peaks. 2
In these situations, the spectral peaks of interest may not
be readily visible due to the overlap of the various harmonic
peaks, causing the spectra to look like noise. Thus, while the
examination of the spectrum given by the DFT can provide
visualization of flows in the form of characteristic peaks, the
DFT, when used alone, can give spectra that are insufficient
for further detailed analysis. In the remainder of this section
we describe signal processing techniques which address this
deficiency.
Periodograms, or Power Spectral Density (PSD) estimators,
are spectral analysis techniques that are used to compute
and plot the signal power (or spectral density) at various
frequencies. A periodogram can be examined to identify
those frequencies that have high power, that is, power
above a certain predetermined threshold. As a consequence,
periodograms are useful for identifying important or key frequencies,
even in the absence of any prior knowledge about
the nature of the signal.
Another important characteristic of periodogram techniques
is that they work very well even in the presence of noise or
interference. This is fortunate for analyzing network traffic
because a flow of interest is often embedded in an aggregation
of other traffic. In this case, from the perspective
of the flow of interest, all other traffic contributes to the
interference.
When signals are expected to be noisy (i.e., they have
a high degree of randomness associated with them due to
corruption by noise or consisting of random processes themselves),
conventional DFT/FFT processing does not provide
a good unbiased estimate of the signal power spectrum. 3
2 For example, the spectral content of a square pulse is the
fundamental frequency of the pulse, plus all the odd numbered
harmonics.
3 That is, processing larger sets of data does not make the
A better estimate of the signal periodogram, Pxx(k), may
be obtained with the Welch Averaged Periodogram [25, 14],
which utilizes averaging in order to reduce the influence of
noise. It uses windowing to account for the aperiodic nature
of the signal. The periodogram is generated by averaging
the power of K separate spectra X^(r)(k) computed over K
different segments of the data, each of length L:

Pxx(k) = (1 / KLU) Σ_{r=1}^{K} |X^(r)(k)|²   (2)

where X^(r)(k) is the DFT of the windowed data x_r(n) w(n),
the windowed data x_r(n) is the r-th windowed segment
of x(n), w(n) is a windowing function 4 used to reduce
artifacts caused by the abrupt changes at the endpoints of
the window, and U is the normalized window power. The
value of the number of samples L within each segment depends
on the window function, w(n). The result can be
interpreted as a decomposition of the signal into a set of discrete
sinusoids (at frequencies 2πk/M) and an estimation of
the average contribution (or power) of each one. While the
spectrum, X(k), obtained by the DFT was complex valued,
the peaks in Pxx(k) are real valued; they also correspond to
frequencies of event times of arrival. Similar to the DFT, the
power of the peaks is proportional to the product of how often
the arrival pattern occurs and the weighting of the data
performed by encoding the signal. In addition to this similarity
to the DFT, the Welch Averaged Periodogram permits
the computation of confidence bounds on the peaks.
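A minimal, pure-Python sketch of the Welch procedure follows. The Hann window and 50% overlap are our illustrative choices (a production implementation would use an FFT per segment):

```python
import cmath
import math

def welch_psd(x, L, overlap=0.5):
    """Average the windowed power spectra of the K overlapping
    length-L segments of x, each normalized by L*U, where U is the
    mean power of the window function."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (L - 1)) for n in range(L)]
    U = sum(w * w for w in window) / L          # normalized window power
    step = max(1, int(L * (1 - overlap)))
    starts = range(0, len(x) - L + 1, step)
    P = [0.0] * L
    for s in starts:
        xw = [a * w for a, w in zip(x[s:s + L], window)]  # windowed segment
        for k in range(L):
            X_k = sum(xw[n] * cmath.exp(-2j * cmath.pi * k * n / L)
                      for n in range(L))
            P[k] += abs(X_k) ** 2 / (L * U)
    return [p / len(starts) for p in P]          # average over the K segments

# A signal with an 8-sample period concentrates power in bin k = L/8.
x = [math.cos(2 * math.pi * n / 8) for n in range(128)]
P = welch_psd(x, 32)
print(max(range(1, 16), key=lambda k: P[k]))  # 4
```

Averaging across segments is what suppresses the variance of the noise floor, at the cost of frequency resolution (bins of width 1/L rather than 1/N).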
7.1 Flow Analysis using Lomb Periodograms
Recall that DFT-based periodograms require uniform samples,
which requires resampling of the original trace and may
lead to loss of information. In this section, we discuss a technique
which overcomes this hurdle.
Packet arrivals in computer networks are inherently unevenly
spaced, naturally resulting in a signal encoding that
is non-uniformly sampled. Lomb, Scargle, Barning, and Vanicek
[17, 19] developed a spectral analysis technique specifically
designed for data that is non-uniformly sampled. The Lomb
method computes the periodogram by evaluating data only
at the times for which a measurement is available. Although
the Lomb method is computationally more complex than the
O(N log N) FFT, this property makes it an especially appropriate
PSD estimator for examining event arrival traces.
Moreover, since only the event arrivals need to be stored in
the time series (no resampling, as discussed in Section 6, is
required), the Lomb method has an added advantage that
the input data is sparse and consumes less storage memory.
answer converge to a good result.
4 The term windowing or shading refers to the time-wise
multiplication of the data stream x(n) by a smoothing function
w(n). Many typical smoothing functions are used (e.g.,
Hamming, Kaiser-Bessel, Taylor), all of which reduce spectral
background noise and clutter levels at the cost of some
smearing of the peak energies in the frequency domain.
So, at the cost of increased CPU requirements, but decreased
memory requirements, the Lomb method offers all the attractions
of periodograms, such as confidence intervals for
various peaks, with the added advantage of more precise
power density computations for non-uniform time series.
The Lomb method estimates a power spectrum for N
points of data at any arbitrary angular frequencies. The
power density PN at a frequency f Hz, or angular frequency
ω = 2πf, is

PN(ω) = (1 / 2σ²) { [Σ_n (h_n − h̄) cos ω(t_n − τ)]² / Σ_n cos² ω(t_n − τ)
+ [Σ_n (h_n − h̄) sin ω(t_n − τ)]² / Σ_n sin² ω(t_n − τ) }

where the offset τ is defined by tan(2ωτ) = (Σ_n sin 2ωt_n) / (Σ_n cos 2ωt_n),
and h̄ and σ² are the mean and variance of the
data. Also, h_n are the N unevenly spaced
samples of the signal at times t_n. The Lomb periodogram
is equivalent to least-squares fitting a sinusoid of frequency
ω to the given unevenly spaced data. In case the t_n are evenly
spaced (i.e., the signal is uniformly sampled), the Lomb periodogram
reduces to the standard squared Fourier transform.
Note that while analyzing network traces, it may sometimes
be more convenient to work with time periods rather
than angular frequencies. We will see this in the next section,
where we take specific networks and illustrate the use
of the Lomb method. The power density at a time period
X is easily computed, since it is simply the power density
at the frequency 1/X.
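The Lomb estimator can be sketched directly from its definition; this is a naive evaluation over a list of candidate ordinary frequencies (the function name and the test frequencies are our illustrative choices):

```python
import math

def lomb(t, h, freqs):
    """Normalized Lomb periodogram of unevenly sampled data h(t_n),
    evaluated at each ordinary frequency f (Hz) in freqs."""
    N = len(h)
    mean = sum(h) / N
    var = sum((v - mean) ** 2 for v in h) / (N - 1)
    P = []
    for f in freqs:
        w = 2 * math.pi * f
        # The offset tau makes the estimate independent of the time origin.
        tau = math.atan2(sum(math.sin(2 * w * tn) for tn in t),
                         sum(math.cos(2 * w * tn) for tn in t)) / (2 * w)
        c = [math.cos(w * (tn - tau)) for tn in t]
        s = [math.sin(w * (tn - tau)) for tn in t]
        cc = sum((v - mean) * ci for v, ci in zip(h, c)) ** 2
        ss = sum((v - mean) * si for v, si in zip(h, s)) ** 2
        P.append((cc / sum(ci * ci for ci in c) +
                  ss / sum(si * si for si in s)) / (2 * var))
    return P

# Unevenly spaced samples of a 2 Hz sinusoid: the 2 Hz bin dominates.
t = [0.13 * n + 0.02 * math.sin(3 * n) for n in range(100)]
h = [math.cos(2 * math.pi * 2.0 * tn) for tn in t]
freqs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
P = lomb(t, h, freqs)
print(freqs[max(range(len(P)), key=lambda i: P[i])])  # 2.0
```

Note that no resampling of the uneven times t is performed: each frequency is evaluated only at the times for which a measurement exists, matching the property described above.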
7.1.1 Wireless Network Analysis
In wireless networks, we model taps as nodes that can detect
transmissions above a certain signal strength threshold,
and uniquely identify (and tag) each signal reception with
its transmitting node. Consequently, a tap may only hear a
subset of nodes in the network. Moreover, we do not assume
that the taps participate (or, indeed, even know about the
MAC layer) in the network. They only detect the lowest
level physical transmissions.
Consider the four node wireless network in Figure 5. We
simulated this network in ns-2, with an 802.11b MAC layer,
and a 2 Mb/s transmission bandwidth (we used the ns-2 settings
for Lucent WaveLAN). The nodes were deliberately
placed in a configuration so that any traffic from nodes 0 or
1 to node 3 has to be routed through node 2, because node
3 is too far away and cannot directly hear nodes 0 and 1.
Therefore, the wireless link between nodes 2 and 3 is the
bottleneck link. Three flows were set up: one FTP flow
from node 0 → 3, one CBR flow from node 1 → 3, and one
CBR flow from 3 → 1.
We then place the tap p in the network such that it can
only detect transmissions from nodes 0 and 3.

Figure 5: A wireless network with one FTP flow
and two CBR flows. The network is configured to
route traffic from nodes 0 and 1 to node 3 (and vice
versa) via node 2. The tap is placed such that it
only hears transmissions from nodes 0 and 3, and
creates a simple signal encoding.

The tap does
not hear any transmission from nodes 1 and 2 because node
1 is too far away, and node 2 is both far away and has low
signal strength.
A simple signal encoding is created from the trace by assigning
the amplitude +1 to all receptions from node 0, and
−1 to all receptions from node 3. A small snapshot of this
signal is shown in the box in Figure 5.
This simulation was run in ns-2 for 300 seconds using the
Dynamic Source Routing (DSR) protocol [13] to maintain
connectivity in the ad hoc network. The CBR flow from 1 →
3 was configured to send packets of 1024 bytes each, at an
average transmission rate of one packet every 173 ms. The
CBR flow from 3 → 1 was also configured to send packets
of 1024 bytes each, but at a rate of one packet every 75
ms. The statistics reported by ns-2 for the FTP flow show a
round trip time (rtt) of 371 ms, with a mean deviation
(rttvar) of 92.5 ms.
It should be noted that the trace produced by the tap in
this network is complex and noisier than the trace would
be on a wired network. This difference is not simply due
to the transmission media, but to the kinds of support traffic
used in wireless networks. For instance, the events received
at the tap include the DSR routing updates, which do not
correspond to any end-to-end flow. Furthermore, due to
the nature of 802.11b, the packet transmissions are interspersed
with the corresponding RTS, CTS and MAC layer
transmissions [1]. Also, due to the nature of wireless
networks, and the hidden-node problem, there are collisions
which are resolved at the MAC layer, leading to retransmissions.
Finally, there is interference in the signal from
transmissions at node 3 that are not intended for node 1.
We are interested in identifying the characteristics of the
various flows, so after collecting the signal from tap p, we
compute the Lomb periodogram of that signal. Inspection
of the Lomb periodogram plot shown in Figure 6 reveals
that its three most prominent peaks correspond to each of
the three flows.

Figure 6: The Lomb periodogram (spectral power
vs. time period in milliseconds) for the wireless
network of Figure 5 reveals all three flows involving
four nodes, even though the tap only hears nodes
0 and 3. The 0 → 3 FTP flow is identified by the peaks
spread near its RTT (328.85 ms).
Both CBR flows are revealed by the peaks very close to
their transmission rates. The transmission intervals for the
CBR flows 3 → 1 and 1 → 3 from Figure 5 were 75 ms and 173
ms, respectively, whereas the peaks are found at 75.01 ms
and 173.08 ms, respectively.
The FTP flow from 0 → 3 can be identified by the peaks
spread around 328.85 ms, which correspond to the round-trip
time for this TCP flow. This value is well within the
standard deviation of the measured round-trip time (the deviation
and RTT were reported to be 92.5 ms and 371 ms
by ns-2).
Observe that the plot is able to show the effects of both
CBR flows, even though the tap does not receive any signal from
node 1, an end-point for both these flows. The fact that we
can see the CBR flow from 1 → 3 is even more interesting because
not only can the tap not hear the transmissions of node 1
(or node 2), but there is no way for the tap to know when
node 3 receives a packet either. So effectively, the tap never
hears any transmission directly related to this CBR flow, yet
its peak is one of the most prominent peaks. 5
This example is a good illustration of the Lomb periodogram's
utility in extracting useful information for detection
of conversations even in complex wireless networks
where the trace may be quite noisy (due to the routing traffic,
for example), incomplete (due to the limited range of
taps), and complex (due to inherently complex MAC
layer transmissions). In this example, the Lomb method is
able to identify the key timing parameters of the flows, and
thus reveal all three IP flows.
7.1.2 Discussion
This example shows the promise of Lomb's technique for
revealing key flow information, even when the signal did not
explicitly contain data from transmissions related to some of
those flows. Work with other traces, some simulated, some
real, has confirmed this promise.
5 We speculate that this relationship is caused by a form of
imprinting. The CBR flow from 1 → 3 shares part of its
path with the FTP flow, and the interactions between the FTP
data and the CBR flow cause the timing of the CBR flow
to be reflected in the FTP acknowledgements.
At the same time, there are challenges in using Lomb.
The first major challenge is finding ways to explain each
peak in a graph. Even with simulated traffic (where presumably
we know or can find all the time constants), there
are peaks that sometimes elude understanding (such as the
small peaks at 100 and 66 ms in Figure 6). Also, we have
found that the Lomb periodogram technique identifies different
network characteristics for different networks. It is able
to identify the round-trip time of the FTP flow in Figure 6,
but a similar experiment using a wired network highlighted
the transmission intervals rather than the round-trip
time. For our purposes, the Lomb periodogram is not yet a
refined tool.
Finally, the biggest challenge is to scale the Lomb periodogram
method to larger networks. We have applied this
technique to some large publicly available tcpdump traces,
and found that even though there are some prominent peaks,
it is difficult to identify the key timings that they represent.
Moreover, despite the fact that the Lomb periodogram works
well in the presence of noise, we have found that the noise in
large network traces can overwhelm this method by reducing
the confidence in prominent peaks. Developing techniques
to further reduce the effects of noise in large networks is an
important challenge for reducing this approach to practice.
7.2 Tracking Network Dynamics using Time
Varying Spectra
Until now, we have limited ourselves to collecting the entire
trace for the full duration of a flow, and analyzing the
aggregate signal using a one-dimensional (description of the
signal only as a function of the frequency) representation of
its spectra. However, these spectral techniques (e.g., the Lomb
periodogram) are only valid when the underlying process
that generated the signal is wide sense stationary, 6 i.e., its
frequency content does not change with time. These techniques
are still valuable when the signal statistics vary slowly
enough that they are nominally constant over an observation
period which is long enough to generate good estimates.
That is why it was appropriate to use Lomb periodograms
for the analysis of round-trip times or the send
rates of flows on networks whose nodes are static. On these
networks (which include most of the Internet), the RTT and
mean send rates remain stable and relatively
constant over the duration of individual flows.
However, in many scenarios, the network and flows are
more dynamic in nature. For example, in mobile ad hoc
networks, the nodes are mobile and the topology changes
with time. Or, even in a static network, the objective may
be to analyze the evolution of flows over time (to detect TCP
stabilization times, etc.). Such scenarios, where the network
or the flow characteristics dynamically change, require techniques
that can track changes in the spectra with time, or
can develop a time-varying spectral representation of the
signal. Such two-dimensional representations permit a description
of the signal characteristics that involves both time
and frequency, and provide an indication of the specific times
6 Wide sense stationary (WSS) usually requires that the
mean and autocorrelation (and in the case of multiple
streams, cross correlation) functions of the process are constant
with respect to the time and duration of observation.
at which certain spectral components of the signal are observed.
Processes whose spectra change with time are known as
nonstationary processes [10]. Many (linear and quadratic)
techniques have been developed for nonstationary signal processing,
but of special importance for us are two linear techniques:
(1) the Short Term Fourier Transform, or STFT
[11], which is a natural extension of the Fourier transform
that employs shifting temporal windows to divide a nonstationary
signal into components over which stationarity
can be assumed, and (2) the Wavelet Transform [21], which
is more complex than the STFT, but offers better time-frequency
resolution by trading time resolution for frequency
resolution and vice versa.
In this paper, we use temporal windows, similar to those
in the STFT. In Section 7.3, we will use the windowing technique
to track topology changes in a network with mobile
nodes. Our general approach for analyzing dynamic networks
using windowing is as follows.
The tap trace is divided up into temporal windows of
a constant duration and spectral estimates are computed
for each window. Often the windows are overlapped by
a xed percentage to ensure smooth boundary transitions
from one window to the next. The output vector from
spectral analysis (which can be cepstrum, coherences, cross-
spectral-densities, or indeed power spectral densities computed
using Lomb Periodograms) of each window is stacked
together as columns of a two dimensional matrix, forming an
image with time along the horizontal axis and the estimated
parameter (such as amplitude or spectral density) along the
other. This kind of representation is often known as a spec-
trogram. In the simplest form, a spectrogram is simply the
squared modulus of the Short Term Fourier Transform of a
nonstationary signal. Since spectrograms effectively plot the spectrum as it varies in time, they are useful for discovering variations in flow and network characteristics in a dynamically evolving traffic scenario.
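The windowing procedure described above can be sketched in code. This is an illustrative reconstruction: the window length, overlap fraction, Hanning taper, and the use of a plain FFT power spectrum in place of a Lomb periodogram are our assumptions, not parameters taken from the paper.

```python
import numpy as np

def spectrogram(signal, win_len=256, overlap=0.5):
    """Stack per-window power spectra as columns of a 2-D array:
    time along the horizontal axis, frequency bins along the other,
    as in the gram representation described above."""
    hop = int(win_len * (1.0 - overlap))      # fixed-percentage overlap
    window = np.hanning(win_len)              # taper smooths window boundaries
    cols = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * window
        psd = np.abs(np.fft.rfft(seg)) ** 2   # squared modulus of the STFT
        cols.append(psd)
    return np.array(cols).T                   # columns advance in time

# A toy nonstationary signal: the dominant frequency jumps halfway through.
t = np.arange(4096) / 1000.0
sig = np.where(t < 2.048, np.sin(2*np.pi*50*t), np.sin(2*np.pi*120*t))
gram = spectrogram(sig)
```

Plotting `gram` as an image gives the spectrogram: the peak row jumps to a higher frequency bin in the later columns, reflecting the frequency change.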
Recall that the Lomb method, which is relatively new,
permits the analysis of non-uniformly sampled data, at the
cost of increased computational complexity. However, there
are a multitude of classical signal processing techniques that
are applicable to uniformly sampled data only. In order
to exploit these techniques we will use uniformly sampled
signals to analyze the time-varying spectra. 7
7.3 Link and Path Discovery using Coherence
The previous sections focused upon the analysis of one
signal stream. We now move to the analysis of signals from
multiple trace files in order to relate transmissions in one
location with those at another. We will use the windowing
technique to capture variations in these signal relationships.
The idea is to look for relationships between time windows
at different locations or between time windows for traffic from different sources. For instance, if we find a strong relationship
between a time window for source 1 and a slightly
later time window from source 2, we can infer that some
of the traffic from source 1 is being forwarded through or
acknowledged by source 2. Expressed in signal processing
terms, if there is enough periodicity in a trace file to show spectral peaks, and if the transmissions of one source are forwarded or answered by another source at some layer of the network (such as with ACKs in TCP or via the MAC protocols in a wireless network), then we can compute, using a classical signal processing technique called coherence, the degree to which the two different signals are related.

7 We are currently exploring ways to extend Lomb's method to analyze time-varying spectra using windows.
For the rest of this section, we use time-varying windows
and coherence to identify all active (one-hop, or MAC layer)
links between the various nodes in a network. Moreover, we
will now work in a mobile ad hoc wireless network. Such ad
hoc networks require our technique to recognize that links
are transient because the nodes are mobile.
The multiple input extension of the periodogram in Equation 2 is the Cross Spectral Density (CSD), which is essentially the cross spectrum (the spectrum of the cross correlation) Pxy(k) of two random sequences. The formula is

    Pxy(k) = (1 / KU) Σ_{r=1}^{K} X^(r)(k) [Y^(r)(k)]*,

where [Y^(r)(k)]* denotes the complex conjugate, K is the number of windows and U is the window normalization constant. The resulting CSD shows how much the two spectra X(k) and Y(k) have
in common. If two signals are randomly varying together
with components at similar frequencies, and stay in phase for
a statistically signicant amount of time, then their CSD will
show a peak at the appropriate frequencies. Two independent
signals do not give peaks. CSD may be complex valued, so
the magnitude of the CSD is generally used in the same way
the magnitude of the PSD is.
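A Welch-style estimate of the CSD formula above can be sketched as follows; the window shape, segment lengths, and the shared test frequency are illustrative assumptions.

```python
import numpy as np

def csd(x, y, win_len=256, hop=128):
    """Average X(k) * conj(Y(k)) over K shifted temporal windows,
    normalised by K*U as in the CSD formula above."""
    w = np.hanning(win_len)
    U = np.sum(w ** 2)                      # window normalisation constant
    acc = np.zeros(win_len // 2 + 1, dtype=complex)
    K = 0
    for s in range(0, min(len(x), len(y)) - win_len + 1, hop):
        X = np.fft.rfft(x[s:s + win_len] * w)
        Y = np.fft.rfft(y[s:s + win_len] * w)
        acc += X * np.conj(Y)               # stays in phase -> adds up
        K += 1
    return acc / (K * U)

# Two noisy signals sharing an in-phase periodic component show a
# CSD peak at that frequency; the independent noise averages away.
rng = np.random.default_rng(0)
n = np.arange(4096)
shared = np.sin(2 * np.pi * 10 * n / 256.0)   # lands in FFT bin 10
x = shared + 0.2 * rng.standard_normal(n.size)
y = shared + 0.2 * rng.standard_normal(n.size)
Pxy = np.abs(csd(x, y))
```

The magnitude `Pxy` is used the same way the magnitude of the PSD is: its peak bin marks the frequency the two signals have in common.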
One can compute a version of the CSD known as coherence, whose value is mapped between 0 and 1. The formula is

    Cxy(k) = |Pxy(k)|^2 / ( Pxx(k) Pyy(k) ).
This formulation is useful in situations where the typical
dynamic range of spectra would cause scaling problems, such
as in automated detection processing. Since the coherence
is nicely bounded, it allows easier automation. However, as
we lose the absolute levels of Pxy (k), Pxx(k), and Pyy(k), it
should still be used in conjunction with the CSD rather than
as a replacement. CSD and coherence may also be presented
in gram form in a manner identical to that discussed above.
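The coherence computation, with its [0, 1] bound, can be sketched directly from the averaged spectra. All signal parameters below are illustrative; the bound follows from the Cauchy–Schwarz inequality applied to the window averages.

```python
import numpy as np

def coherence(x, y, win_len=256, hop=128):
    """|Pxy|^2 / (Pxx * Pyy) with Welch-averaged spectra; by the
    Cauchy-Schwarz inequality the result lies in [0, 1]."""
    w = np.hanning(win_len)
    Pxx = np.zeros(win_len // 2 + 1)
    Pyy = np.zeros(win_len // 2 + 1)
    Pxy = np.zeros(win_len // 2 + 1, dtype=complex)
    for s in range(0, min(len(x), len(y)) - win_len + 1, hop):
        X = np.fft.rfft(x[s:s + win_len] * w)
        Y = np.fft.rfft(y[s:s + win_len] * w)
        Pxx += np.abs(X) ** 2
        Pyy += np.abs(Y) ** 2
        Pxy += X * np.conj(Y)
    return np.abs(Pxy) ** 2 / (Pxx * Pyy)

# A shared tone gives coherence near 1 at its bin; elsewhere the
# independent noise keeps coherence low, so thresholding is easy.
rng = np.random.default_rng(1)
n = np.arange(8192)
tone = np.sin(2 * np.pi * 20 * n / 256.0)     # FFT bin 20
x = tone + 0.5 * rng.standard_normal(n.size)
y = tone + 0.5 * rng.standard_normal(n.size)
C = coherence(x, y)
```

Because `C` is bounded, a fixed threshold can flag "conversing" node pairs automatically, which is exactly the automation advantage noted above.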
CSD and coherence answer the question: what was the
power of the conversation between any two sources in the
network during a certain time-slice? Furthermore, if we encode
transmission durations to amplitude, then the power
of the peaks would give a sense of the bandwidth of the
communications between the nodes. We have found this
technique quite useful for discovering routing topology in
wireless networks.
First we demonstrate the Coherence technique without
the added complication of mobility. Figure 7 shows the results
of analyzing seconds of trace data for coherence.
The data is taken from a simulated wireless network with a
topology similar to Figure 5. Two simple flows are present: an FTP from 0 → 3 by way of node 2, and a CBR from 1 → 3 by way of node 2. The figure shows one coherence
plot for each pair of nodes in the lower diagonal of the
matrix of nodes. Each coherence plot is labeled Coherencexy
and shows the coherence between nodes x and y. Plots with visible peaks indicate stronger coherence, which suggests two-way transactions (hence a conversation). Furthermore, the shapes of the peaks also provide information which may allow us to differentiate the types of data transfers (FTP vs. CBR, etc.).

[Figure 7: Coherence Between Nodes in the Wireless Network from Figure 5.]
One can see that strong peaks occur between node pairs 2 and 0, 2 and 1, 3 and 0, and 3 and 2. The links 2-0 and 2-3 are carrying the FTP, and links 2-1 and 2-3 are carrying the CBR. The peaks in Coheregram 30 do not correspond to a link, but instead are due to the fact that the FTP transfer between nodes 0 and 3 causes those nodes to interact in a strongly periodic pattern due to the ACK feedback of TCP. There is a lack of coherence between nodes 0 and 1 because they do not share any information. We speculate that the coherence between nodes 3 and 1 is due to the traffic periodicity pattern of the FTP being affected by the UDP transmission, but we have not confirmed this.
Next, we demonstrate our solution to the problem of not
only discovering the topology, but tracking topology and
routing changes, in mobile networks.
Figure 8 shows a coheregram generated by analyzing another seconds of trace data taken from the same wireless network in Figure 5, except that now node 1 moves around node 2 at a constant speed (while it moves), stopping for a short duration first between nodes 0 and 2, and then between nodes 2 and 3. This motion causes rerouting to occur twice, first at 14 seconds into the run, and again at 25.5 seconds. Initially, traffic from 1 → 3 is routed through node 2, until at time 14 seconds, node 1 gets close enough to node 3 to route directly. This continues until 25.5 seconds, when node 1 has circled far enough away from node 3 to resume routing through node 2.
Coherence spectra were computed for each 512 ms interval and displayed as a two-dimensional time-frequency gram where intensity is proportional to power at that time and frequency (white = low level to black = high level). The result is a gram plot for each pair of nodes (laid out exactly as in Figure 7). When the coherence remains similar from one interval to the next, peaks resolve as horizontal lines in the plot. However, when the network reroutes at 14 seconds and node 1 begins to communicate directly with node 3, the coherence peaks change visibly in Coheregram 21 and Coheregram 31. At 25.5 seconds, the coherence peaks change visibly again, and remain so until the network routes resume their old form. Such a change could be detected by automated means.

[Figure 8: Coheregrams Showing Time Varying Coherence Between Nodes in the Wireless Network from Figure 5, due to a mobile node 1. Link/Routing changes are observed at 14 seconds and 25.5 seconds.]
8. CONCLUSIONS
There's something very tantalizing about finding a new way to look at data traffic. For instance, the experience of seeing coherence techniques map the path a flow's traffic took through the network, and of recognizing changing communication patterns in a mobile ad-hoc network, was extremely exciting.
We started this paper with four questions we hoped signal
processing techniques might address.
Clearly the coherence techniques give us insights into who is talking to whom, and the paths traffic takes. We are currently working on refining these techniques for larger and more complex networks.
The Lomb periodogram gives us some insight into determining how many flows are traveling over a particular path: the peaks in the periodogram can be used to reveal features of individual flows. But we are a long way from using that data to determine which particular applications are in use or which individual events correspond to a particular flow.
At the same time the results reported in this paper obviously
raise more questions than they answer. There are a
number of opportunities to substantially refine algorithms,
including:
How best to encode a trace as a signal? Encoding is a key part of the analysis process and yet we've only just begun to explore the issues. It seems likely that different encodings will give different results, and perhaps highlight different aspects of a trace.
How to separate wheat from chaff in the results? The Lomb periodogram is a good example. Even for modest amounts of traffic, it reveals a number of heavily used frequencies. How do we identify the frequencies we most care about?
As mentioned in Section 7.2, network traffic often produces nonstationary processes, which require specialized techniques such as windowing and the Welch Average Periodogram described in Section 7. However, even these techniques work well only if the signal statistics vary slowly enough, at least within the observation time covered by the window. Another alternative (which we are exploring) is to develop techniques which do not require the signal to be wide sense stationary at any time scale. Wavelet analysis is a relatively new tool in signal processing, developed only in the 1980s [21], and it is applicable to completely nonstationary signals. We are exploring the use of such techniques for discovering time varying network properties.
Finally, given that these techniques are beginning to work, what can we do to hide traffic patterns from them? What (possibly new) techniques should we use to make traffic less vulnerable to this sort of traffic analysis?
ACKNOWLEDGMENTS
We are indebted to Steve Kent, Greg Troxel, Chip Elliott,
Alex Snoeren, and Paul Kolodzy for their suggestions for
directions and reviews of early drafts.
9. REFERENCES
--R
IEEE Std 802.11b
Multiscale nature of network traffic
Wavelet analysis of long-range-dependent traffic
Maximum likelihood network topology identification from edge-based unicast measurements
Internet tomography
Trajectory sampling for direct traffic observation
Multicast inference of packet delay variance at interior network links
The changing nature of network traffic: Scaling phenomena
Communication Systems
Linear and quadratic time-frequency signal representations
Dynamic source routing in ad hoc wireless networks
Modern Spectral Estimation: Theory and Application
On the self-similar nature of Ethernet traffic
A duality model of TCP flow controls
Numerical Recipes in C
Anonymous connections and onion routing
IEEE Signal Processing Magazine 8
Timing analysis of keystrokes and timing attacks on SSH
Passive unicast network tomography based on TCP monitoring
Network tomography: estimating source-destination traffic intensities from link data
The use of fast Fourier transform for estimation of power spectra: A method based on time averaging over short, modified periodograms
--TR
On the self-similar nature of Ethernet traffic
The changing nature of network traffic
Trajectory sampling for direct traffic observation
A non-instrusive, wavelet-based approach to detecting network performance problems
Maximum likelihood network topology identification from edge-based unicast measurements
Encryption-based protection for interactive user/computer communication
--CTR
Collaborative detection and filtering of shrew DDoS attacks using spectral analysis, Journal of Parallel and Distributed Computing, v.66 n.9, p.1137-1151, September 2006
Alefiya Hussain , John Heidemann , Christos Papadopoulos, A framework for classifying denial of service attacks, Proceedings of the conference on Applications, technologies, architectures, and protocols for computer communications, August 25-29, 2003, Karlsruhe, Germany
Tadayoshi Kohno , Andre Broido , K. C. Claffy, Remote Physical Device Fingerprinting, IEEE Transactions on Dependable and Secure Computing, v.2 n.2, p.93-108, April 2005
Cherita L. Corbett , Raheem A. Beyah , John A. Copeland, Passive classification of wireless NICs during rate switching, EURASIP Journal on Wireless Communications and Networking, v.2008 n.2, p.1-12, January 2008
Alefiya Hussain , John Heidemann , Christos Papadopoulos, Distinguishing between single and multi-source attacks using signal processing, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.46 n.4, p.479-503, 15 November 2004 | wireless networks;encryption;signal processing;traffic analysis |
570792 | Providing stochastic delay guarantees through channel characteristics based resource reservation in wireless network. | This paper is directed towards providing quality of service guarantees for transmission of multimedia traffic over wireless links. The quality of service guarantees require transmission of packets within prespecified deadlines. Oftentimes, bursty, location dependent channel errors preclude such deadline satisfaction leading to packet drop. Wireless systems need resource reservation to limit such deadline violation related packet drop below acceptable thresholds. The resource reservation depends on the scheduling policy, statistical channel qualities and arrival traffic. We choose Earliest Deadline First as the baseline scheduling policy and design an admission control strategy which provides delay guarantees and limits the packet drop by regulating the number of admitted sessions in accordance with the long term transmission characteristics and arrival traffic of the incoming sessions. We analytically quantify the stochastic packet drop guarantees provided by the framework, and show using simulation that the design results in low packet drop. | INTRODUCTION
Third generation wireless packet networks will have to
support real-time multimedia applications. The real-time
applications need Quality of Service (QoS) guarantees from
the network. One of the most important QoS requirements
is the packet delay, which specifies the upper bound on the
delay experienced by every packet of the application. For
most of the real-time applications, packets delayed beyond a certain time are not useful. Further, the traffic characteristics and the delay requirements of various applications are different. In this heterogeneous environment, the challenge
is to design a methodology that allows us to decide whether
the required QoS can be guaranteed. In this paper, we focus
on designing such a methodology for a single hop wireless
packet network. Obtaining an e#cient solution to the one
hop problem is an impotent preliminary step toward solving
the problem for multi-hop wireless packet networks like
ad-hoc networks.
Real time traffic often associates a service deadline with every packet. If the packet is not served before the deadline, then the corresponding information loses its utility, and the packet must be dropped. It is well known that earliest deadline first based scheduling minimizes delay violations and thereby minimizes packet drops [24]. However, the actual amount of this packet drop depends on the traffic load, bandwidth resources and the transmission conditions, and this amount may become unacceptable if the traffic load is not regulated. In other words, an efficient admission control framework is required to ensure that a sufficient amount of resources is available to meet the packet drop constraints of the admitted sessions. The framework will depend on the scheduling policy, as the resource utilization is different for different scheduling policies. We choose Earliest Deadline First (EDF) as an appropriate scheduling strategy on account of its loss optimality properties, and propose an admission control framework which caters to EDF.
Sessions arrive with specific deadline requirements and estimated channel statistics. The admission control framework admits a session only if the deadline requirements of the incoming session can be met with a high probability. This decision depends on the traffic loads and channel characteristics of the existing sessions, and also those of the incoming session. Thus designing such a framework will involve quantifying the delay of any given session in terms of the arrival process and channel characteristics of all the contending sessions. The key contribution of the paper is to provide such a quantification, and thereafter present an admission control condition which exploits the statistical multiplexing of traffic load and channel errors of the existing sessions so as to reduce the overall packet drops of the admitted sessions to acceptable values. We corroborate our results using
analysis and extensive simulations.
Admission control schemes have been studied extensively in the wireline case [29]. However these schemes do not counter the detrimental effects of location dependent and bursty channel errors. Admission control conditions need to specifically consider channel statistics while taking admission control decisions, and reserve additional resources for guaranteeing the desired QoS in spite of channel errors. Further, location dependent channel errors imply that the channel may be accessible to some users but not to others because of the different fading characteristics, and thus admission control decisions will be different for different sessions, even if the desired delay guarantees and the traffic load are the same.
The existing research in admission control in cellular networks focusses on resource reservation for providing promised QoS to the mobile users during hand-offs [1, 2, 3, 15, 23, 16, 26, 27, 28, 17, 7]. These schemes assume knowledge of the resources to be reserved to provide the desired QoS and then obtain a good balance between call dropping and call blocking via resource reservation. In this paper, we focus on quantifying the resources required to provide packet delay guarantees in the presence of channel errors.
Most of the prior wireless scheduling work [5, 19, 20, 21]
obtains delay guarantees for sessions that do not experience
channel errors. In [19, 20], authors have obtained the
worst case delay bound for the sessions with channel errors.
However, these bounds hold only for Head of Line (HoL)
packets, in other words no bound has been provided for the
overall delay experienced by the packet. Another area extensively
explored is that of fair scheduling [4, 6, 10, 12,
14, 25]. The main objective in this branch of research has
been to distribute available resources fairly among the flows
[5, 19, 21, 22, 20], which is important for transmitting data traffic. Major concerns for the transmission of real time traffic, such as deadline expiry and related packet drop, have not been addressed though.
The main contribution of this paper can be summarized as follows. We propose a channel statistics aware admission
control mechanism for EDF scheduler in wireless case. The
proposed mechanism guarantees delay bounds to the admitted
sessions with a high probability in spite of channel errors.
The deadline violation related packet drops are correspondingly
probabilistically upper bounded.
The paper is arranged as follows. We describe our system model in Section 2. In Section 3 we present the admission control
algorithm for EDF scheduler. In section 4 we present the
simulation results and discussion. We conclude in section 5.
2. SYSTEM MODEL
In this section we define our system model and specify our
assumptions. The system that we consider is shown in Figure 1. We assume that a sender node S is transmitting
packets to several receivers in its communication range
via error-prone wireless channels. The sender node has an
admission control scheme and EDF scheduler. We assume
[Figure 1: Figure shows a sender node S and its receivers R1 to R5. Sessions arrive at the sender from the outside world and seek to transfer data to one of the receivers. We do not preclude a possibility of multiple sessions between the sender and a receiver. The dotted circle represents the transmission range of the sender.]
that the sessions 1 arrive at the sender dynamically and seek
admission. Each arriving session specifies its required delay
guarantee and tra#c characteristics. Further, as we describe
below, each of the receivers periodically communicate
the estimates of long term channel error rate to the sender.
Using these parameters and the information about the available
resources, admission control mechanism at the sender
node decides whether the desired delay can be guaranteed 2
If the required packet delay can be guaranteed in the sys-
tem, then the session is admitted, otherwise it is blocked.
Packets that belong to the admitted sessions are served as
per EDF scheduling. We next describe each component of
our system model in detail.
We assume a slotted time axis, where each time slot is of unit length. Every packet has unit length. We assume leaky bucket constrained sessions [9], i.e., the total data arriving from a typical session i in any duration τ is less than or equal to σi + ρi τ, where σi is the maximum burst size and ρi is the replenishment rate. The wireless channel for a session is
either good or bad. In good channel state, the probability
of correct reception is 1, while in bad state the probability
is 0. There can be many sessions between the sender and
a given receiver. We allow the possibility that different sessions between the sender and a receiver can have different channel characteristics. This scenario arises if different coding and/or power control schemes are used for the different sessions.
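The leaky-bucket envelope above can be sketched as a conformance check over every time interval; the σ and ρ values in the example are illustrative, not taken from the paper.

```python
def conforms(arrivals, sigma, rho):
    """True iff cumulative arrivals over every interval [s, t) stay
    within the (sigma, rho) leaky-bucket envelope sigma + rho*(t - s).
    arrivals[t] is the number of packets arriving in slot t."""
    prefix = [0]
    for a in arrivals:
        prefix.append(prefix[-1] + a)
    n = len(arrivals)
    return all(prefix[t] - prefix[s] <= sigma + rho * (t - s)
               for s in range(n) for t in range(s + 1, n + 1))

# A burst of sigma packets followed by rate-rho arrivals conforms;
# two back-to-back bursts violate the envelope.
assert conforms([3, 1, 1, 1, 1], sigma=3, rho=1)
assert not conforms([3, 3, 0, 0, 0], sigma=3, rho=1)
```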
We note that all the receivers are in the transmission range
of the sender. Hence, each receiver can receive all the transmissions
from the sender (including transmissions for other
receivers). Depending on the quality of reception, a receiver
can estimate its channel state parameters. So, we consider
a scenario where a receiver estimates its long term channel
error rate by observing the quality of receptions. We assume that receivers communicate the estimates of the long term channel error rates to the sender periodically. The sender uses these estimates and the leaky bucket parameters of the arrival process to make the admission control decisions.

1 A session is a stream of packets pertaining to a single application that have to be transmitted from the sender to a specific receiver. We do not preclude a scenario where multiple sessions exist between the sender and a certain receiver.
2 All guarantees are in a stochastic sense, i.e., a required delay must be guaranteed with a high probability. We will omit "with high probability" for brevity.
The EDF scheduler assigns a deadline to a packet as and
when it arrives. The deadline of a packet is the sum of its
arrival time and the required delay guarantee. In wireline
case, EDF schedules packets in the increasing order of their respective deadlines. But in the wireless case, to utilize channel bandwidth efficiently, packets that belong to sessions with good channels should be scheduled. We assume a channel
state aware EDF scheduling where the packet with the earliest
deadline amongst those which experience good channels
is scheduled. The underlying assumption is that the scheduler
knows the channel state for each session in the beginning
of each slot. Such knowledge may not be feasible in real
systems. However channel state may be predicted given the
transmission feedback in the previous slot. We incorporate
a prediction based EDF strategy in our simulations.
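The channel state aware EDF selection rule can be sketched as follows; the per-slot data structures (a deadline min-heap per session and a channel-state map) are our own illustrative choice, not the paper's implementation.

```python
import heapq

def edf_pick(queues, channel_good):
    """Channel state aware EDF: among sessions whose channel is good
    in this slot, pop and transmit the earliest-deadline head packet.
    queues[i] is a min-heap of packet deadlines for session i."""
    best = None
    for i, q in queues.items():
        if q and channel_good.get(i, False):
            if best is None or q[0] < queues[best][0]:
                best = i
    if best is None:
        return None                      # nothing transmittable this slot
    return best, heapq.heappop(queues[best])

# Session 0 holds the earliest deadline overall, but its channel is
# bad, so the scheduler serves session 1 instead.
queues = {0: [4, 9], 1: [5, 7]}
picked = edf_pick(queues, {0: False, 1: True})
assert picked == (1, 5)
```

With perfect channel knowledge this rule reduces to plain EDF whenever all channels are good, which is the wireline behavior the analysis builds on.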
We describe the proposed admission control scheme in the
next section.
3. ADMISSION CONTROL
In this section we present an admission control algorithm for a channel state aware EDF scheduler in the wireless case. There has been extensive work on admission control algorithms for the EDF scheduler in the wireline case [11, 13, 18]. These algorithms do not apply to the wireless case as they do not accommodate channel errors. We intuitively explain how channel errors can be taken into consideration while making admission control decisions. This will enable the generalization of wireline admission control strategies for operation in the wireless case.
Let us consider a case, where V is the maximum number
of erroneous slots in a busy period, where erroneous slot is
a slot in which at least one of the sessions have bad chan-
nel. A busy period is defined as the maximum contiguous
time interval in which packet from at least one session is
waiting for transmission. As shown in Figure 2, in the duration
of channel errors sessions can loose service that they
would have gotten if the channel were good. The system
has to compensate for the lost service. This compensation
causes additional delay in the wireless system. When the
erroneous slots are bounded above by V in any busy period,
the maximum compensation that a system needs to provide
is also bounded above by V . Hence, intuitively the excess
packet delay over the delay in the system with perfect channel
should be no more than the total number of erroneous
slots in a busy period. Further, a system in which channel
is always good for all the sessions is equivalent to the
wireline system. Hence packet delay for a session in wireless
case should be equal to the sum of packet delay in a
wireline case and V . We shall refer to the delay under EDF
in the wireline system as WEDF-delay. Hence, if Di is the required packet delay for the session i, then it suffices to check whether Di − V can be guaranteed in the wireline system under EDF. This observation will allow us to use the admission
control schemes designed for wireline system in wireless
systems. Formally we state the following result.
Proposition 1. Every packet of all the admitted sessions meets its deadline if the total number of erroneous slots in a busy period is bounded above by V. Consequently, there is no packet drop in the system.

[Figure 2: Example to show that the compensation taken by one session affects the delay guarantees of all sessions. Arrows pointing towards and away from the axis represent packet arrivals and departures, respectively. p_j^(i) denotes the j-th packet from the i-th session. V1 indicates a time slot in which session 1 has a bad channel. We note that the scheduling is as per EDF. Consider two sessions 1 and 2. The delay requirements for packets of sessions 1 and 2 are 3 and 2, respectively. We note that these delays can be guaranteed with the perfect channel assumption. In (a), after V1, a packet of session 1 takes immediate compensation and as a result a packet of session 2 experiences an additional delay of 1 unit. In part (b), session 1 does not seek immediate compensation after V1, but it takes it eventually. As a result a packet of session 2 again experiences an additional delay of 1 unit. Hence, as a result of a channel error of one unit for session 1, the guaranteed delay for both sessions is increased by 1 unit.]
The proof for the proposition is given in [8].
In the wireless case, the channel errors are random. Hence, the number of erroneous slots in a busy period is not
bounded in general, and thus the above proposition can
not be applied directly. Further, the length of busy period
and the number of erroneous slots in a busy period are
inter-dependent, i.e. the busy period length depends on the
number of erroneous slots in the busy period and vice versa
(refer to Figure 3). In spite of these issues, now we will show
how the above proposition can be generalized to account for
random channel errors. We observe that Proposition 1 can
be extended simply to obtain the packet drop probability
Pd . To illustrate the point, we fix some value for V . The
choice of V is not trivial and as we will show later appropriate
choice of V allows us to obtain desired balance between
session blocking and packet drop. Now, let us assume that the probability of the number of erroneous slots exceeding the value V in a busy period of length Z is Pd. So, from Proposition 1 it is clear that with probability 1 − Pd all the deadlines will be met and the system will not have packet drop. Now, the question we
need to investigate is "How to compute the drop probability
Pd ?" For doing this, first we investigate a relation between
the number of erroneous slots and the busy period length
Z. We note that if the erroneous slots in a busy period were
bounded by V , then the maximum length of busy period (Z)
[Figure 3: Figure demonstrates the effect of channel errors on the length of the busy periods under EDF. Only the bottom figure experiences channel errors, which lengthen its busy period duration by 7 additional time units (from 12 to 19).]
under EDF scheduling is given by the following equation:

    Z = ( Σ_{i∈C} σi + V ) / ( 1 − Σ_{i∈C} ρi ),   if Σ_{i∈C} ρi < 1,    (1)

where C is the set of admitted sessions. The above expression can be explained as follows from the definition of the busy period. The total work which arrives in the system during a busy period must be served during the same busy period. Hence Z, the maximum length of a busy period, satisfies Σ_{i∈C} (σi + ρi Z) = Z − V, where the left hand side of the equation is the maximum data that can arrive in duration Z and the right hand side is the minimum service that the sessions will obtain in the busy period. Thus equation (1) follows. Now,
given this busy period length, we can obtain the probability
that the total number of erroneous slots will exceed the value
(denoted by Pd ), using the channel parameter estimates
obtained from the receivers. We will show the computations
in the Section 3.1. We note that the value of V can be chosen
so that Pd is small.
Now, the drop probability in the system depends on the
value of V . If V is large(small) then the drop probability
is small(large). But V can not be made arbitrarily large
in order to reduce the drop probability, since increasing V
would increase the session blocking. The packet delay in
wireless system is WEDF-delay plus V , and hence increasing
would require a decrease in WEDF-delay, which can
be guaranteed only if the system has low tra#c load. The
quantity V can be looked upon as additional resource reservation
in anticipation of channel errors. The system does not
have any packet drop as long as the total erroneous slots in
a busy period are not more than the reservation V . Thus,
the choice of V is a policy decision or V can be adjusted
dynamically so as to cater to the system requirement.
Based on the above discussion, we propose the following
admission control algorithm. The proposed admission
control algorithm guarantees the packet drop probability
smaller than the acceptable packet drop probability P . The
pseudo code for the algorithm is given in Figure 4.
The proposed algorithm works as follows. Whenever a
new session arrives, sender node computes the new busy period
length Z and the probability Pd that the total number
of erroneous slots will exceed the fixed value V . If the drop
Procedure Admission_Control_Algo1()
begin
  Fix V;
  When a new session arrives do the following:
    Compute the new busy period length Z using equation (1);
    Using the channel characteristics, compute the probability Pd that
      the total number of erroneous slots exceeds the value V in the
      busy period of length Z;
    if (Pd > P) then
      /* The probability that the total number of channel errors exceeds
         V in a busy period of length Z (Pd) is greater than the required
         packet drop probability P */
      Block the session;
    else
      Check admission control under EDF in the wireline case with Di − V;
      if (Di − V can be guaranteed in the wireline system) then
        Admit the session;
      else
        Block the session;
end

Figure 4: Pseudo code of a general admission control algorithm for an error-prone wireless channel
probability Pd is greater than the permissible value P, then it blocks the session; otherwise it checks if the delay Di − V can be guaranteed under WEDF. If the delay Di − V can be guaranteed, the session is accepted; otherwise it is blocked.
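The admission test of Figure 4 can be sketched as follows. Here `error_prob` and `wedf_ok` are hypothetical helpers standing in for the erroneous-slot probability computation of Section 3.1 and the wireline EDF schedulability test the paper cites; the (σ, ρ, D) session tuples and all numbers are illustrative.

```python
def busy_period(sessions, V):
    """Maximum busy period length Z from equation (1):
    sum(sigma_i + rho_i * Z) = Z - V  =>
    Z = (sum(sigma_i) + V) / (1 - sum(rho_i))."""
    total_sigma = sum(s for s, _, _ in sessions)
    total_rho = sum(r for _, r, _ in sessions)
    assert total_rho < 1.0, "aggregate rate must leave the server slack"
    return (total_sigma + V) / (1.0 - total_rho)

def admit(existing, new, V, P, error_prob, wedf_ok):
    """Admit `new` only if the drop probability stays below P and the
    tightened deadlines D_i - V are schedulable in the wireline system."""
    sessions = existing + [new]
    Z = busy_period(sessions, V)
    if error_prob(Z, V) > P:
        return False                          # channel too lossy: block
    tightened = [(s, r, d - V) for s, r, d in sessions]
    return wedf_ok(tightened)

# Toy stand-ins: a perfect channel and a trivial schedulability check.
ok = admit(existing=[(2, 0.3, 10)], new=(1, 0.2, 8), V=3, P=0.01,
           error_prob=lambda Z, V: 0.0,
           wedf_ok=lambda ss: all(d > 0 for _, _, d in ss))
```

The structure mirrors the pseudocode: the channel test gates the deadline test, so a session can be blocked either by error statistics or by load.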
In the following section we discuss the analytical computation
of the probability Pd that the total number of channel
errors in the duration Z will exceed value V .
3.1 Numerical Computation of the Drop Probability Pd
In this section we compute the probability Pd that the total number of erroneous slots in any duration Z exceeds the value V . Recall that Pd is an upper bound on the packet drop probability. We consider the following analytical
packet drop probability. We consider the following analytical
model. We assume that the system has admitted N
sessions. For each session, we assume a two-state Markovian
channel error model as shown in Figure 5. The two
states are good and bad. Let the transition probability from
good to bad and from bad to good be # and #, respectively
for each session. Furthermore, we assume that the channel
state processes are independent.
The number of sessions whose channel is in the bad state is a Markov process with state space {0, 1, . . . , N}. The transition probabilities for this Markov chain can be given as follows.
Figure 5: The two-state Markov process (with states Good and Bad) for the per-session channel state.
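For N independent, identically parameterized two-state channels, the transition probabilities of the aggregated chain can be computed by conditioning on how many bad channels recover and how many good channels fail. The following sketch illustrates this standard construction; it is an assumption of this rewrite and is not claimed to reproduce the paper's equations (2) and (3) verbatim.

```python
from math import comb

def transition_prob(i, j, N, alpha, beta):
    """P(i bad channels -> j bad channels) for N independent
    two-state channels: each bad channel recovers w.p. beta,
    each good channel fails w.p. alpha (standard construction,
    illustrative only)."""
    p = 0.0
    for g in range(i + 1):               # g bad channels recover
        b = j - i + g                    # then b good channels must fail
        if 0 <= b <= N - i:
            p += (comb(i, g) * beta**g * (1 - beta)**(i - g)
                  * comb(N - i, b) * alpha**b * (1 - alpha)**(N - i - b))
    return p
```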
Figure 6: Packet drop probability, obtained by numerical computation, plotted against the allowed compensation V for various values of the busy period length Z and the channel parameters. The total number of active sessions is assumed to be 10, and each session is assumed to have identical channel parameters.
We observe that the defined Markov chain has (a) a finite state space and is (b) aperiodic and (c) irreducible. Hence, using the transition probabilities given in equations (2) and (3), we can compute the steady-state distribution. In state 0, all the sessions have a good channel. The drop probability Pd is the probability that in Z slots the Markov process visits state zero fewer than Z − V times. This probability can be obtained using standard computational techniques. In Figure 6 we present some numerical results. We note that the numerical results are consistent with intuition. In particular, the packet drop probability decreases as the value of V increases, and increases with the length of the busy period.
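The probability that the chain visits state 0 fewer than Z − V times in Z slots can be evaluated with a small dynamic program over (state, visit count). The following is an illustrative sketch, not the authors' code; the transition matrix P and the initial distribution pi0 are assumed to be supplied.

```python
def drop_prob(P, pi0, Z, V):
    """P_d = Pr[# of visits to state 0 in Z slots < Z - V] for a
    Markov chain with transition matrix P (list of rows), started
    from distribution pi0 (e.g. the steady state). Illustrative."""
    n = len(P)
    need = Z - V                      # required visits to state 0
    if need <= 0:
        return 0.0
    # f[s][k] = Pr[current state s, k visits to state 0 so far];
    # the count is capped at `need`, since >= need means no drop.
    f = [[0.0] * (need + 1) for _ in range(n)]
    for s in range(n):
        f[s][1 if s == 0 else 0] = pi0[s]
    for _ in range(Z - 1):
        g = [[0.0] * (need + 1) for _ in range(n)]
        for s in range(n):
            for k in range(need + 1):
                w = f[s][k]
                if w == 0.0:
                    continue
                for s2 in range(n):
                    k2 = min(k + (1 if s2 == 0 else 0), need)
                    g[s2][k2] += w * P[s][s2]
        f = g
    return sum(f[s][k] for s in range(n) for k in range(need))
```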
In the above discussion, we have assumed that the transition probabilities of all the sessions are identical. If this is not the case, the number of sessions with a bad channel state is not Markovian. The system must then be represented by an N-dimensional vector, where each component denotes the channel state of an individual session. Similarly to the previous case, the Markovian structure can be exploited to obtain Pd numerically.
The numerical computations involve the calculation of the steady-state distribution, which can be obtained by solving the matrix equation. This can become computationally prohibitive. Hence, in the following subsection we propose a simpler approach to providing packet delay guarantees in the presence of channel errors with an EDF scheduler.

Procedure Admission Control Algo2()
begin
  When a new session i arrives do the following:
    Compute the new busy period length Z using equation (1);
    Obtain the new expected number of erroneous slots in the busy period of
    length Z, i.e., its value if session i were admitted;
    if (the expected number of erroneous slots is greater than the maximum
    allowed value V ) then
      Block the session;
    else
      Check admission control under EDF in the wireline case with D_i − V ;
      if (D_i − V can be guaranteed in the wireline system) then
        Admit the session;
      else
        Block the session;
end

Figure 7: Pseudo code of a general admission control algorithm for an error-prone wireless channel
3.2 Simplistic Approach
The computationally simple alternative approach is based on the following observation. If α_i is the long-term channel error rate of session i, then the number of erroneous slots of the session in a sufficiently long interval L is close to L·α_i with high probability. This observation follows from the Strong Law of Large Numbers, or from the ergodicity of Markovian channel error models such as the Gilbert-Elliot model. Each receiver can estimate the required long-term error rate α_i from the previous transmissions of the sender, and communicate this estimate periodically to the sender node.
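A receiver-side estimate of α_i can be as simple as a running error fraction over observed slots. The paper does not prescribe a particular estimator, so the following sketch is purely illustrative.

```python
class ErrorRateEstimator:
    """Running estimate of the long-term channel error rate alpha_i
    from observed transmission outcomes (illustrative sketch; the
    paper only states that the receiver estimates the rate from
    previous transmissions and feeds it back periodically)."""

    def __init__(self):
        self.slots = 0
        self.errors = 0

    def observe(self, slot_in_error):
        """Record one slot; slot_in_error is True for a failed slot."""
        self.slots += 1
        if slot_in_error:
            self.errors += 1

    def rate(self):
        """Current estimate of alpha_i (0.0 before any observation)."""
        return self.errors / self.slots if self.slots else 0.0
```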
If the maximum length of a busy period Z is small, then the total number of erroneous slots must be small and hence the packet drop rate is low. We need to carefully upper-bound the total number of erroneous slots for large values of Z. If Z is sufficiently large, then Z · Σ_i α_i closely approximates the total number of erroneous slots in a busy period with high probability. Using this computationally simple estimate for the number of erroneous slots in the busy period, we propose a new admission control algorithm, which uses the same intuition as the previous one but differs in how it computes the estimate of the packet drop probability. Recall that in the previous algorithm (refer to Figure 4), we explicitly computed the packet drop probability Pd and then ensured that the computed drop probability is smaller than the required value P . In this scheme, we compute the average number of erroneous slots V′ in a busy period of length Z based on the long-term channel error rates. If the computed number of erroneous slots V′ is less than the fixed value V , we assume that the system will not drop packets, with high probability. Pseudo code for the algorithm is given in Figure 7.
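The resulting admission test can be sketched as follows. Names are illustrative, and the busy period length Z as well as the wireline EDF check are assumed to be computed elsewhere.

```python
def admit_session_simple(Z, alphas, alpha_new, V, wireline_edf_admits):
    """Admission test in the spirit of Figure 7 (illustrative sketch):
    admit only if the expected number of erroneous slots
    Z * sum(alpha), including the new session, stays within the
    reservation V, and D_i - V is schedulable under wireline EDF."""
    v_expected = Z * (sum(alphas) + alpha_new)
    if v_expected > V:
        return False                  # expected errors exceed reservation
    return wireline_edf_admits        # fall back to the wireline EDF test
```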
The proposed algorithm works as follows. Whenever a new session arrives, the sender node computes the new busy period length Z and the total long-term channel error rate if the arriving session were admitted. If the expected total number of erroneous slots Z · Σ_i α_i exceeds the reservation V , it blocks the
session, otherwise it checks if the delay D_i − V can be guaranteed under WEDF. If the delay D_i − V can be guaranteed, the session is accepted; otherwise it is blocked. In the following section we present the simulation results.

Figure 8: Session blocking performance of EDF, plotted against the system parameter V , with perfect channel knowledge and with the proposed prediction-based scheme. T denotes the inter-arrival time of the sessions.
4. SIMULATION RESULTS AND DISCUSSION
In this section, we evaluate the packet drop and session blocking of the proposed admission control algorithm for the EDF scheduler. We assume that sessions arrive at a fixed node in a Poisson process with rate λ. An arriving session specifies its leaky bucket parameters. We assume the bucket depth σ and the token replenishment rate ρ to be uniformly distributed between 0-10 packets and 0-0.1 packets per unit time, respectively. The required delay guarantee of a session is assumed to be uniformly distributed between 5-100 time units. We model the error-prone wireless channel as an ON-OFF process. The transition probabilities from good to bad and from bad to good are 0.001 and 0.1, respectively. These channel parameters correspond to a Rayleigh fading channel where the mean fade duration is 10 slots and the channel is good for 99% of the total time [24].
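The ON-OFF channel with these parameters is straightforward to sample. The following sketch is illustrative (not the simulation code used in the paper) and reproduces the roughly 1% bad-slot fraction implied by the transition probabilities 0.001 and 0.1.

```python
import random

def simulate_channel(p_gb, p_bg, slots, seed=1):
    """Sample a two-state ON-OFF (Gilbert-Elliot style) channel
    trace: good->bad w.p. p_gb, bad->good w.p. p_bg per slot.
    Returns the fraction of bad slots (illustrative sketch)."""
    rng = random.Random(seed)
    state_bad = False
    bad_count = 0
    for _ in range(slots):
        if state_bad:
            bad_count += 1
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
    return bad_count / slots
```

With p_gb = 0.001 and p_bg = 0.1, the stationary bad fraction is 0.001/0.101 ≈ 1%, matching the "good for 99% of the total time" figure in the text.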
We have performed simulations for two systems: (a) when the sender has perfect knowledge of the channel state of each session at the beginning of the slot, and (b) when the sender predicts the future channel state based on the outcome of the present transmission. We note that in practice it is difficult to obtain perfect knowledge of the instantaneous channel state for every session, and hence option (b) is better suited for practical applications. In (b), we use a simple two-step prediction model: if the current transmission is successful, the sender assumes that the channel state of the session will be good in the next slot; but if the transmission is not successful, then the sender assumes that the session will have a bad channel for the next 1/β slots. We note that 1/β is the expected number of slots for which a session will have an erroneous channel, given that it has a bad channel in the current slot.
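The two-step predictor can be sketched as follows; the function and return shape are illustrative, and β denotes the bad-to-good transition probability.

```python
def predict_channel(last_tx_ok, beta):
    """Two-step channel predictor (illustrative sketch): after a
    success assume the channel stays good for the next slot; after
    a failure assume it stays bad for the next 1/beta slots, the
    expected residual bad-period length of the two-state model."""
    if last_tx_ok:
        return ("good", 1)
    return ("bad", round(1 / beta))
```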
Figures 8, 9 and 10 show the performance of the designed admission control scheme for the EDF scheduler in the wireless system.

Figure 9: Packet drop performance of EDF, plotted against the system parameter V , when the scheduler has perfect knowledge of the channel state of every session before packet transmission.

Table 1: The table investigates the reduction in packet drop brought about by additional resource reservation, comparing the packet drop with and without resource reservation. The EDF scheduler has instantaneous knowledge of the channel state before packet transmission.
We note that the session blocking curve is cup-shaped (see Figure 8). For small values of V , the system reserves few resources for compensating channel errors, and hence can only accommodate a few sessions with low long-term channel error rates (otherwise Z · Σ_i α_i would exceed V ). When V is high, the session blocking is high again, as the guaranteed delay is the sum of the WEDF delay and V, and hence the former must be small so as to compensate for the large value of the latter. The WEDF delay is small only if a few sessions are admitted.
The packet drop performance is intuitive. When the value of V is small the packet drop is high, and the packet drop goes to zero as V becomes large. We note that the overall
packet drop performance of the system is better than the
calculated packet drop probability (see Figures 9 and 10).
This is because the numerical computations do not account
for the service given to other sessions when one session has
a bad channel, and hence only upper bound the packet drop
rate.
Tables 1 and 2 demonstrate that the packet drop can be substantially reduced by reserving additional resources. We examine the drop performances of two schemes: (a) the scheme we propose, which reserves additional resources for compensating for channel errors, and (b) a scheme which does not reserve any resources and admits sessions as long as the WEDF delay is less than the required guarantee, disabling the verification of the other admission control criteria. The latter has substantially higher packet drop.

Table 2: Comparison of packet drop performances with and without resource reservation for the prediction-based EDF scheduling (e.g., 0.178824 with reservation versus 0.537767 without, for the row labeled 300).

Figure 10: Packet drop performance of EDF, plotted against the system parameter V , with the proposed two-step prediction scheme.
5. CONCLUSIONS AND FUTURE WORK
In this paper, we have proposed connection admission control
algorithms to provide stochastic delay guarantees in a
single hop wireless network. EDF is used as a baseline sched-
uler. We have argued that the wireline admission control solutions
are not suitable for the wireless case as they do not
take into consideration the channel errors which can result
in high packet drop on account of deadline expiry. In the
proposed approach, we consider channel characteristics of
the sessions while taking admission control decisions. As a
result, we can provide the delay guarantees with the desired
packet drop probability. The guarantees can be provided
only if the EDF scheduler uses instantaneous channel states
in the scheduling decision mechanism. The channel states in
the previous slots can be used to predict the instantaneous
channel states if the latter is not known. The analytical
guarantees do not hold in this case, but extensive simulation
results indicate low packet drop.
The proposed admission control algorithms assume EDF
as a baseline scheduler. We are currently looking at admission
control schemes for other important scheduling disciplines
like fair queueing disciplines. Further, in this paper
we have only been able to upper-bound the overall packet
probability. No bound for the packet drop of individual
sessions is obtained. It is entirely possible that excessive
channel errors of one session deteriorate the packet drop rate
of other sessions. A framework which guarantees individual
packet drop rates is a topic for future research. Also, it
has been assumed throughout that the sender receives instantaneous feedback after every transmission. We plan to investigate the effects of delayed feedback.
6. REFERENCES
--R
An Architecture and Methodology for Mobile-executed Handoff in Cellular ATM Networks
QoS Provisioning in Micro-cellular Networks Supporting Multimedia Traffic
A Framework for Call Admission Control and QoS Support in Wireless Environments.
Enhancing Throughput over Wireless LANs Using Channel State Dependent Packet Scheduling.
Fair Queueing in Wireless Networks: Issues and Approaches.
Scheduling Algorithms for Broadband Wireless Networks.
Connection Admission Control for Mobile Multiple-class Personal Communications Networks
Providing Stochastic Delay Guarantees through Channel Characteristics Based Resource Reservation in Wireless Network.
A Calculus for Network Delay
Controlled Multimedia Wireless Link Sharing via Enhanced Class-based Queueing with Channel-state Dependent Packet Scheduling
The Havana Framework for Supporting Application and
A Framework for Design and Evaluation of Admission Control Algorithms in Multi-service Mobile Networks
Qos and Fairness Constrained Convex Optimization of Resource Allocation for Wireless Cellular and Ad hoc Networks.
A Resource Estimation and Call Admission Control Algorithm for Wireless Multimedia Networks Using the Shadow Cluster Concept.
Exact Admission Control for Networks with Bounded Delay Service.
Fair Scheduling in Wireless Packet Networks.
Design and Analysis of an Algorithm for Fair Service in Error-prone Wireless Channels
Packet Fair Queueing Algorithms for Wireless Networks with Location Dependent Errors.
Adapting Fair Queueing Algorithms to Wireless Systems.
Architecture and Algorithms for Scalable Mobile QoS.
Scheduling Real Time Tra
Scheduling Algorithm for a Mixture of Real-time and Non-real-time Data in HDR
Admission Control of Multiple Tra
Capability Based Admission Control for Broadband CDMA Networks.
On Accommodating Mobile Hosts in an Integrated Services Packet Networks.
Service Disciplines of Guaranteed Performance Service in Packet-switching Networks
--TR
Efficient network QoS provisioning based on per node traffic shaping
Exact admission control for networks with a bounded delay service
A resource estimation and call admission algorithm for wireless multimedia networks using the shadow cluster concept
Fair scheduling in wireless packet networks
Adapting packet fair queueing algorithms to wireless networks
Design and analysis of an algorithm for fair service in error-prone wireless channels
On Accommodating Mobile Hosts in an Integrated Services Packet Network
Efficient Admission Control for EDF Schedulers
The Havana Framework for Supporting Application and Channel Dependent QOS in Wireless Networks
QOS provisioning in micro-cellular networks supporting multimedia traffic | wireless networks;stochastic delay guarantees;connection admission control |
570814 | Asymptotically optimal geometric mobile ad-hoc routing. | In this paper we present AFR, a new geometric mobile ad-hoc routing algorithm. The algorithm is completely distributed; nodes need to communicate only with direct neighbors in their transmission range. We show that if a best route has cost c, AFR finds a route and terminates with cost O(c^2) in the worst case. AFR is the first algorithm with cost bounded by a function of the optimal route. We also give a tight lower bound by showing that any geometric routing algorithm has worst-case cost Ω(c^2). Thus AFR is asymptotically optimal. We give a non-geometric algorithm that also matches the lower bound, but needs some memory at each node. This establishes an intriguing trade-off between geometry and memory. | INTRODUCTION
A mobile ad-hoc network consists of mobile nodes equipped
with a wireless radio. We think of mobile nodes as points in
the Euclidean plane. Two nodes can directly communicate
with each other if and only if they are within transmission
range of each other. Throughout this paper we assume that
all nodes have the same transmission range R 1 . Two nodes
with distance greater than R can communicate by relaying
their messages through a series of intermediate nodes; this
process is called multi-hop routing.
In this paper we study so-called geometric routing; in
networks that support geometric routing a) each node is
equipped with a location service, i.e. each node knows its
Euclidean coordinates, b) each node knows all the neighbor
nodes (nodes within transmission range R) and their coordi-
nates, and c) the sender of a message knows the coordinates
of the destination.
In addition to the standard assumptions a), b) and c), we
take for granted that mobile nodes are not arbitrarily close
to each other, i.e. d) there is a positive constant d0 such that
the distance between any pair of nodes is at least d0 . This
is motivated by the fact that there are physical limitations
on how close to each other two mobile nodes can be placed.
Further, distances between neighboring nodes in an ad-hoc
network will typically be in the order of the transmission
range. 2
In this paper we present a new geometric routing algorithm
which borrows from the eminent Face Routing algorithm
by Kranakis, Singh, and Urrutia [14]. As it is the
tradition in the community, we give our algorithm a name:
AFR which stands for Adaptive Face Routing 3 . Our algorithm
is completely local; nodes only exchange messages
with their direct neighbors, i.e. nodes in their transmission
range R. We show that if a best route has cost c, our algorithm
finds a route and terminates with cost O(c 2 ) in the
worst case. This bound holds for many prominent cost models
such as distance, energy, or the link metric. Note that
the distance of the best route (the sum of the distances of
the single hops) can be arbitrarily larger than the Euclidean
distance of source and destination. Our algorithm is the
first algorithm that is bounded by a function of the optimal
route; the original Face Routing algorithm and all other geo-
1 In the technical part of the paper we simplify the presentation by scaling the coordinates of the system such that R = 1.
2 Meanwhile, we have achieved similar results without assumption d) in [15].
3 Is it a coincidence that AFR also reflects our first names?
metric routing algorithms are only bounded by a function
of the number of nodes.
Moreover we show that any geometric routing algorithm has worst-case cost Ω(c^2). This tight lower bound proves that our algorithm is asymptotically optimal 4 . The lower bound also holds for randomized algorithms. Apart from the theoretical
relevance of our results, we feel that our algorithm has
practical potential, especially as a fall-back mechanism for greedy geometric routing algorithms (which are efficient in an average case).
It is surprising that the cost of geometric routing algorithms
is quadratic in the cost of the best route. We show
that this bound can also be achieved by a simple non-geometric
routing algorithm. In exchange for the missing location
service we give the algorithm some extra memory at each
node. We show that this algorithm also has cost O(c 2 ),
which, contrary to intuition, proves that in the worst case a
GPS is about as useful as some extra bits of memory.
The paper is organized as follows. In the next section we
discuss the related work. In Section 3 we formally model
mobile ad-hoc networks and geometric routing algorithms.
In Section 4 we present and analyze our geometric routing
algorithm AFR. We give a matching lower bound in Section
5. Section 6 concludes and discusses the paper.
2. RELATED WORK
Traditionally, multi-hop routing for mobile ad-hoc networks
can be classified into proactive and reactive algo-
rithms. Proactive routing algorithms copycat the behavior
of wireline routing algorithms: Each node in the mobile
ad-hoc network maintains a routing table that lays down
how to forward a message. Mobile nodes locally change the
topology of the network, which in turn provokes updates
to the routing tables throughout the network. Proactive
routing algorithms are efficient only if the ratio of mobility
over communication is low. If the nodes in the network
are reasonably mobile, the overhead of control messages to
update the routing tables becomes unacceptably high. Also
storing large routing tables at cheap mobile nodes might be
prohibitively expensive. Reactive routing algorithms on the
other hand find routes on demand only. The advantage is
that there is no fixed cost for bureaucracy. However, whenever
a node needs to send a message to another node, the
sender needs to flood the network in order to find the receiver
and a route to it. Although there are a myriad of
(often obvious and sometimes helpful) optimization tricks,
the flooding process can still use up a significant amount of
scarce wireless bandwidth. Reviews of routing algorithms in
mobile ad-hoc networks in general can be found in [4] and
[21].
Over a decade ago researchers started to advocate equipping
every node with a location information system [7, 11,
23]; each node knows its geometric coordinates [10]. If the
(approximate) coordinates of the destination are known too,
a message can simply be sent/forwarded to the "best" di-
rection. This approach is called directional, geometric, ge-
ographic, location-, or position-based routing. With the
growing availability of global positioning systems (GPS or
Galileo), it can easily be imagined to have a corresponding
4 The constant between the lower and the upper bound depends
on the cost model, but can generally become quite
large.
receiver at each node [12]. Even if this is not the case, one
can conceive that nodes calculate their position with a local
scheme; a research area that has recently been well studied
[22]. Geometric routing only works if nodes know the
location of the destination. Clearly, the (approximate) location
of the destination changes much less frequently than
the structure of the underlying graph. In this sense it is
certainly less expensive to keep the approximate locations
of the destinations than the whole graph. In the area of
peer-to-peer networking a lot of data structures have been
presented that store this type of information in an efficient
way. It would be possible to use an overlay peer-to-peer net-work
to maintain the position of all destinations [16]. Last
but not least one could imagine that we want to send a message
to any node in a given area, a routing concept that is
known as geocasting [13, 19]. Overviews of geometric routing
algorithms are given in [9, 18, 20].
Figure 1: Greedy routing fails with nodes distributed on the letter "C".
The first geometric routing algorithms were purely greedy:
The message is always forwarded to the neighboring node
that is closest to the destination [7, 11, 23]. It was shown
that even simple location configurations do not guarantee
that a message will reach its destination when forwarded
greedily. For example, consider a network with nodes that are distributed "on" the letter "C" (see Figure 1). Assume that the northernmost node s of "C" wants to send a message to the destination t (the southeastern tip of "C"). With greedy routing, the message is forwarded from the source to the best neighbor, i.e. in the southeastern direction. At the node at the northeastern tip of "C", there is no neighbor node closer to the destination, and the routing algorithm fails. To circumvent the gap of the "C", the source should have sent the message to the west. It has been shown that many other definitions of "best" neighbor (e.g. best angle, a.k.a. Compass Routing in [14]) do not guarantee delivery either.
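Greedy forwarding and its failure at a local minimum can be sketched as follows; this is an illustrative sketch on an abstract graph, not code from the paper.

```python
from math import dist

def greedy_route(nodes, adj, s, t):
    """Greedy geometric forwarding (illustrative sketch): always
    hand the packet to the neighbor closest to t. Returns the path,
    or None when stuck at a local minimum, as on the 'C'-shaped
    network of Figure 1."""
    path = [s]
    while path[-1] != t:
        cur = path[-1]
        best = min(adj[cur], key=lambda v: dist(nodes[v], nodes[t]))
        if dist(nodes[best], nodes[t]) >= dist(nodes[cur], nodes[t]):
            return None          # no neighbor is closer to t: stuck
        path.append(best)
    return path
```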
The first geometric routing algorithm that guarantees delivery
is the Face Routing algorithm, proposed in a seminal
paper by Kranakis, Singh, and Urrutia [14] (in their short
paper they call the algorithm Compass Routing II ). The
Face Routing algorithm is a building block of our routing
algorithm AFR and will therefore be discussed in more detail
later. The Face Routing algorithm guarantees that the
message will arrive at the destination and terminates in O(n)
steps, where n is the number of nodes in the network. This
is not satisfactory, since already a very simple flooding algorithm
will terminate in O(n) steps. In case the source and
the destination are close, we would like to have an algorithm
that terminates earlier. In particular, we are interested in
the competitive ratio of the route found by the algorithm
over the best possible route.
There have been other suggestions for geometric routing
algorithms with guaranteed delivery [3, 5], but in the
worst case (to the best of our knowledge) none of them
is better than the original Face Routing algorithm. Other
(partly non-deterministic) greedy routing algorithms have
been shown to find the destination on special planar graphs,
such as triangulations or convex subdivisions [2], without
any performance guarantees.
It has been shown that the shortest path between two
nodes on a Delaunay triangulation is only a small constant
factor longer than their distance [6]. It has even been shown
that indeed there is a competitive routing algorithm for Delaunay
triangulations [1]. However, nodes can only communicate
within transmission range R: Delaunay triangulation
is not applicable since edges can be arbitrarily long in Delaunay
triangulations. Accordingly, there have been attempts
to approximate the Delaunay triangulation locally [17] but
no better bound on the performance of routing algorithms
can be given for such a construction.
A more detailed discussion of geometric routing can be
found in [25].
3. MODEL
This section introduces the notation and the model we use
throughout the paper. We consider routing algorithms on
Euclidean graphs, i.e. weighted graphs where edge weights
represent Euclidean distances between the adjacent nodes
in a particular embedding in the plane. As usual, a graph
G is defined as a pair G := (V, E) where V denotes the set
of nodes and denotes the set of edges. The number
of nodes is called | and the Euclidean length of an
denoted by cd (e). A path p := v1 , . , vk for
is a list of nodes such that two consecutive nodes
are adjacent in G, i.e. (v i , that edges can
be traversed multiple times when walking along p. Where
convenient, we also denote a path p by the corresponding
list of edges.
As mentioned in the introduction, we consider the standard
model for ad-hoc networks where all nodes have the
same limited transmission ranges. This leads to the definition
of the unit disk graph (UDG).
Definition 1. (Unit Disk Graph) Let V ⊂ R^2 be a set of points in the 2-dimensional plane. The Euclidean graph with edges between all pairs of nodes with distance at most 1 is called the unit disk graph.
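Definition 1 translates directly into code; a small illustrative sketch:

```python
from math import dist

def unit_disk_graph(points):
    """Build the edge set of the unit disk graph of Definition 1:
    an edge joins every pair of points at Euclidean distance at
    most 1 (the normalized transmission range)."""
    n = len(points)
    return [(i, j)
            for i in range(n)
            for j in range(i + 1, n)
            if dist(points[i], points[j]) <= 1.0]
```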
We also make the natural assumption that the distance between
nodes is limited from below.
Definition 2. (Ω(1)-model) If the distance between any two nodes is bounded from below by a term of order Ω(1), i.e. there is a positive constant d_0 such that d_0 is a lower bound on the distance between any two nodes, this is referred to as the Ω(1)-model.
This paper mainly focuses on geometric ad-hoc routing algorithms
which can be defined as follows.
Definition 3. (Geometric Ad-Hoc Routing Algorithm) Let G := (V, E) be a Euclidean graph. The aim of a geometric ad-hoc routing algorithm A is to transmit a message from a source s ∈ V to a destination t ∈ V by sending packets over the edges of G while complying with the following conditions:
. Initially all nodes v ∈ V know their geometric positions as well as the geometric positions of all of their neighbors in G.
. The source s knows the position of the destination t.
. The nodes are not allowed to store anything except for
temporarily storing packets before transmitting them.
. The additional information which can be stored in
a packet is limited by O(log n) bits, i.e. information
about O(1) nodes is allowed.
In the literature geometric ad-hoc routing has been given
various other names, such as O(1)-memory routing algorithm
in [1, 2], local routing algorithm in [14] or position-based
routing. Due to the storage restrictions, geometric
ad-hoc routing algorithms are inherently local.
For our analysis we are interested in three different cost models: the link distance metric (the number of hops), the Euclidean distance metric (the total traversed Euclidean distance), and the energy metric (the total energy used). Each cost model implies an edge weight function. As already defined, the Euclidean length of an edge is denoted by c_d(e). In the link distance metric all edges have weight 1 (c_l(e) := 1), and the energy weight of an edge is defined as the square of the Euclidean length (c_E(e) := c_d^2(e)). The cost of a path p is defined as the sum of the costs of its edges:
c(p) := Σ_{i=1}^{k} c(e_i).
The cost c(A) of an algorithm A is defined analogously as the sum of the costs of all edges which are traversed during the execution of the algorithm on a particular graph G 5 .
Lemma 3.1. In the Ω(1)-model, the Euclidean distance, the link distance, and the energy metric of a path p = (e_1, . . . , e_k) are equal up to a constant factor on the unit disk graph 6 .
Proof. The cost of p in the link distance metric is c_l(p) = k. We have d_0 ≤ c_d(e_i) ≤ 1 for every edge e_i. Therefore, the Euclidean distance and the energy costs of p are upper-bounded by k and lower-bounded by c_d(p) ≥ d_0 · k and c_E(p) ≥ d_0^2 · k, respectively.
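The three metrics of Lemma 3.1 can be computed side by side; a small illustrative sketch:

```python
from math import dist

def path_costs(points):
    """Link, Euclidean-distance and energy cost of the path through
    `points` (illustrative). On a unit disk graph in the Omega(1)-
    model, Lemma 3.1 states that all three agree up to constant
    factors."""
    hops = list(zip(points, points[1:]))
    c_link = len(hops)                                  # c_l: number of hops
    c_dist = sum(dist(u, v) for u, v in hops)           # c_d: total distance
    c_energy = sum(dist(u, v) ** 2 for u, v in hops)    # c_E: sum of squares
    return c_link, c_dist, c_energy
```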
4. AFR: ADAPTIVE FACE ROUTING
In this section, we describe our algorithm AFR, which is asymptotically optimal for unit disk graphs in the Ω(1)-model. Our algorithm is an extension of the Face Routing algorithm introduced by Kranakis et al. [14] (in the original paper the algorithm is called Compass Routing II).
5 For the energy metric it is usually assumed that a node can send a message simultaneously to different neighbors using only the energy corresponding to the farthest of those neighbors. We neglect this because it does not change our results.
6 More generally, all metrics whose edge weight functions are polynomial in the Euclidean distance are equal up to a constant factor on the unit disk graph in the Ω(1)-model. This formulation would include hybrid models as well as energy metrics with exponents other than 2.
Figure 2: The faces of a planar graph (the white region is the infinite outer face).
Face Routing and AFR work on planar graphs. We use
the term planar graph for a specific embedding of a planar
graph, i.e. we consider Euclidean planar graphs. In this
case, the nodes and edges of a planar graph G partition the
Euclidean plane into contiguous regions called the f faces of
G (see
Figure
2 as an illustration). Note that we get f - 1
finite faces in the interior of G and one infinite face around
G.
The main idea of the Face Routing algorithm is to walk
along the faces which are intersected by the line segment st
between the source s and the destination t. For completeness
we describe the algorithm in detail (see Figure 3).
Figure 3: The Face Routing algorithm
Face Routing
0. Start at s and let F be the face which is incident to s
and which is intersected by st in the immediate region
of s.
1. Explore the boundary of F by traversing its edges and
remember the intersection point p of st with the edges
of F which is nearest to t. After traversing all edges, go
back to p. If we reach t while traversing the boundary
of F , we are done.
2. p divides st into two line segments where pt is the not
yet "traversed" part of st. Update F to be the face
which is incident to p and which is intersected by the
line segment pt in the immediate region of p. Go back
to step 1.
In order to simplify the subsequent proofs, we show that
Face Routing terminates in linear time.
Lemma 4.1. The Face Routing algorithm reaches the destination
after traversing at most O(n) edges where n is the
number of nodes.
Proof. First we show that the algorithm terminates. By
the choices of the faces F in step 0 and 2, respectively, we
see that in step 1 we always find a point p which is nearer
to t than the previous p where we start the tour around
F . Therefore we are coming nearer to t with each iteration,
and since there are only finitely many intersections between
st and the edges of G, we reach t in a finite number of
iterations.
For the performance analysis, we see that by choosing p as
the st-"face boundary" intersection which is nearest to t, we
will never traverse the same face twice. Now, we partition
the edges E into two subsets E1 and E2 where E1 are the
edges which are incident to only one face (the same face lies
on both sides of the edge) and E2 are the edges which are
incident to two faces (the edge lies between two different faces). During the exploration of a face F in step 1, an
edge of E2 is traversed at most twice and an edge of E1 is
traversed at most four times. Since the edges of E1 appear
in only one face and the edges of E2 appear in two faces, all
edges of E are traversed at most four times during the whole
algorithm. Each face in a planar connected graph (with at
least 4 nodes) has at least three edges on its boundary. This
together with the Euler polyhedron formula (n - m + f = 2) yields that the number of edges m is bounded by m ≤ 3n - 6, which proves the lemma.
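The edge bound used here follows from Euler's formula by the standard double-counting of face boundaries:

```latex
n - m + f = 2, \qquad 3f \le 2m
\;\Longrightarrow\; n - m + \tfrac{2m}{3} \ge 2
\;\Longrightarrow\; m \le 3n - 6 .
```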
In order to obtain our new algorithm AFR, we are now going
to change Face Routing in two steps. In a first step we
assume that an upper bound c̄_d on the (Euclidean) length c_d(p*) of a shortest route p* from s to t on graph G is known to s at the beginning. We present a geometric ad-hoc routing algorithm which reaches t with link distance cost at most O(c̄_d²).
Bounded Face Routing (BFR[c̄_d]). Let E be the ellipse defined by the locus of all points the sum of whose distances from s and t is c̄_d, i.e. E is an ellipse with foci s and t. By the definition of E, the shortest path (in R²) from s to t via a point q outside E is longer than c̄_d. Therefore, the best path from s to t on G lies completely inside or on E.
We change step 1 of Face Routing such that we always stay inside or on E:
0. Start at s and let F be the face which is incident to s
and which is intersected by st in the immediate region
of s.
Figure 4: Bounded Face Routing (no success: c̄_d is chosen too small)
Figure 5: Successful Bounded Face Routing
1. As before, we explore the face F and remember the
best intersection between st and the edges of F in p.
We start the exploration of F as in Face Routing by
starting to walk into one of the two possible directions.
We continue until we come around the whole face F
as in the normal Face Routing algorithm or until we
would cross the boundary of E . In the latter case, we
turn around and walk back until we get to the boundary
of E again. In any case we are then going back
to p. If the exploration of F does not yield a better point p (i.e. p has the same value as in the previous iteration), Bounded Face Routing does not find a route to t, and we restart BFR to find a route back from p to the source s. Otherwise, proceed with step 2.
2. p divides st into two line segments where pt is the not
yet "traversed" part of st. Update F to be the face
which is incident to p and which is intersected by the
line segment pt in the immediate region of p. Go back
to step 1.
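The bounding condition that BFR adds to Face Routing is a single geometric predicate: a point q may be visited only if the sum of its distances to the foci s and t is at most the upper bound on the optimal route length (written c̄_d here, following the lemma below), i.e. q lies inside or on the ellipse E. A minimal sketch:

```python
import math

def inside_ellipse(q, s, t, c_bar):
    """True iff q lies inside or on the ellipse with foci s and t
    whose boundary points satisfy |sq| + |qt| = c_bar."""
    return math.dist(s, q) + math.dist(q, t) <= c_bar
```

BFR's modified step 1 turns around exactly when the next edge would leave this region.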
Figure 4 shows an example where c̄_d is chosen too small; Figure 5 shows a successful execution of the Bounded Face Routing algorithm.
Lemma 4.2. If the length of an optimal path p* (w.r.t. the Euclidean distance metric) between s and t in graph G in the Ω(1)-model is upper-bounded by c̄_d, Bounded Face Routing finds a path from s to t. If Bounded Face Routing does not succeed in finding a route to t, it does succeed in returning to s. In any case, Bounded Face Routing terminates with link distance cost at most O(c̄_d²).
Proof. We show that whenever there is a path from s to
t which is completely inside or on E , Bounded Face Routing
finds a route from s to t by traversing at most O(c̄_d²) edges. The lemma then follows.
Figure 6: If there is a path from s to t inside E, BFR succeeds in routing from s to t (E is not drawn in the picture).
Suppose that there is a path r from s to t where r lies
inside or on E . First we show that in this case BFR finds
a route from s to t. Consider a point p on st from which
we start to traverse a face F . We have to show that we find
a point p' on st which is nearer to t than p while exploring face F. Assume that F does not completely lie inside the ellipse E, since otherwise we find p' as in the normal Face
Routing algorithm. Let q be the last intersection between
path r and st before p and let q' be the first intersection between r and st after p (see Figure 6 as an illustration).
The part of the path r which is between q and q' and the line segment qq' together define a polygon. We denote the
area which is covered by this polygon by A. To traverse
the boundary of F we can leave p in two possible directions
where one of them points into A. During the traversal we
will in any case take both directions. While walking along
the boundary of F , we cannot cross the path r because the
edges of r are part of the planar graph of which F is a face.
In order to leave A, we therefore have to cross st at a point p', which must be nearer to t than p because otherwise the boundary of F would cross itself.
As a second step we show that each edge inside E is traversed
at most four times during the execution of the BFR
algorithm. In order to prove this, we consider the graph G', which is defined as follows. Prune everything of G which is
outside the ellipse E . At the intersections between edges of
G and E, we introduce new nodes and we take the segments of E between those new nodes as additional "edges" 7. As an illustration, all edges of G' are drawn with thick lines in
Figure 5. Now consider step 1 of BFR as exploring a face F of G' instead of exploring a face of G. Let p be the intersection between F and st where we start the traversal of F, and let p' be the st-"face boundary"-intersection which is closest to t. If there is a path from s to t inside E, there must also be a path between p and p' which is inside E. Assume that this is not the case. The part of the boundary of F which includes p and the part of the boundary of F which includes p' would then only be connected by the additional edges on E in G'. Thus, F would then separate E into two parts, one of which containing s, the other one containing t. Therefore step 1 of our algorithm BFR yields p' as a new point on st, i.e. BFR is in a sense equivalent to Face Routing on G'.
Hence, in an execution of BFR each face of G' is visited
most once. During the exploration of a face F in step 1 of
BFR each edge is traversed at most twice, no matter if we
walk around F as in the normal Face Routing algorithm or
if we hit E and have to turn around (the edges whose sides
belong to the same face can again be traversed four times).
Therefore, we conclude that each edge inside E is traversed
at most four times.
As a last step, we have to prove that there are only O(c̄_d²) edges of G inside E. Since G is a planar graph, we know that the number of edges is linear in the number of nodes (as shown in the proof of Lemma 4.1). We consider the Ω(1)-model, where the Euclidean distance between any pair of nodes is at least d_0. Thus, the circles of radius d_0/2 around all nodes do not intersect each other. Since the length of the semimajor axis a of the ellipse E is c̄_d/2, and since the area of E is smaller than πa², the number of nodes inside E is bounded by πa²/(π(d_0/2)²) + O(a) ∈ O(c̄_d²).
7 We do not take into account that those additional edges are not straight lines. By adding additional new nodes on E and connecting all new nodes by straight line segments, we could also construct G' to be a real Euclidean planar graph.
We have now proven that if there is a path from s to t inside E, BFR finds it after traversing at most O(c̄_d²) edges. The only thing which remains open in order to conclude the proof of Lemma 4.2 is that an unsuccessful execution of BFR also terminates after traversing at most O(c̄_d²) edges. Let F_i, 1 ≤ i ≤ k, be the faces which
are visited during the execution of the algorithm. Fk is the
face where we do not find a better point on st, i.e. Fk is
the face which divides E into two parts. From the above
analysis it is clear that the first k - 1 faces are only visited
once. Fk is explored at most twice, once to find the best accessible
intersection with st and once to see that no further
improvement can be made. Hence, all edges are traversed at
most eight times until we arrive at the point p on st where
we have to turn around 8 . For our way back we know that
there is a path from p to s which lies inside E and therefore
we arrive at s after visiting every edge at most another four
times.
We are now coming to the definition of AFR. The problem
with Bounded Face Routing is that usually no upper-bound
on the length of the best route is known. In AFR we apply
a standard trick to get around this.
AFR (Adaptive Face Routing). We begin by determining an estimate ĉ_d for the unknown value c_d(p*), e.g. ĉ_d := 2|st|.
The algorithm then runs Bounded Face Routing with exponentially growing ĉ_d until eventually the destination t is reached:
1. Execute BFR[ĉ_d].
2. If the BFR execution of step 1 succeeded, we are done; otherwise, we double the estimate ĉ_d for the length of the shortest path and go back to step 1.
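The doubling loop is easy to state in code. Below, BFR is mocked according to Lemma 4.2: it costs λĉ² per attempt (ĉ being the current estimate) and succeeds once the estimate reaches a hypothetical true optimal cost c_opt; both the names and the mock are ours, for illustration:

```python
def afr(bfr, st_dist):
    """Adaptive Face Routing driver: run BFR with exponentially
    growing estimate until it succeeds; return (final estimate,
    accumulated cost over all attempts)."""
    c_hat = 2 * st_dist           # initial estimate c_hat := 2|st|
    total = 0
    while True:
        ok, cost = bfr(c_hat)
        total += cost
        if ok:
            return c_hat, total
        c_hat *= 2                # double the estimate and retry

def make_mock_bfr(c_opt, lam=1.0):
    """BFR stand-in per Lemma 4.2: cost lam * c_hat^2, success iff
    the estimate is at least the optimal route cost c_opt."""
    return lambda c_hat: (c_hat >= c_opt, lam * c_hat * c_hat)
```

Because the attempt costs grow geometrically, the total is dominated by the last attempt, which is the intuition behind the quadratic bound of the lemma that follows.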
Lemma 4.3. Let p* be a shortest path from node s to node t on the planar graph G. Adaptive Face Routing finds a path from s to t while traversing at most O(c_d²(p*)) edges.
Proof. We denote the first estimate ĉ_d on the optimal path length by ĉ_{d,0} and the consecutive estimates by ĉ_{d,i} := 2ĉ_{d,i-1}. Furthermore, we define k such that ĉ_{d,k-1} < c_d(p*) ≤ ĉ_{d,k}. For the cost of BFR[ĉ_d] we have c(BFR[ĉ_d]) ∈ O(ĉ_d²) and therefore c(BFR[ĉ_d]) ≤ λĉ_d² for a constant λ (and sufficiently large ĉ_d). The total cost of algorithm AFR can therefore be bounded by Σ_{i=0}^{k} λĉ_{d,i}² = λĉ_{d,0}² Σ_{i=0}^{k} 4^i ≤ (4/3)λĉ_{d,k}² ≤ (16/3)λc_d²(p*) ∈ O(c_d²(p*)).
For the remainder of this section we show how to apply AFR
to the unit disk graph. We need a planar subgraph of the
unit disk graph, since AFR requires a planar graph. There
8 It is possible to explore face Fk only once as well but for
our asymptotic analysis, we ignore this optimization.
are various suggestions on how to construct a planar subgraph
of the unit disk graph in a distributed way. Often
the intersection between the UDG and the Relative Neighborhood
Graph (RNG [24]) or the Gabriel Graph (GG [8]),
respectively, has been proposed. In the RNG an edge between nodes u and v is present iff no other node w is closer to u and to v than u is to v. In the Gabriel Graph an edge between u and v is present iff no other node w is inside or
on the circle with diameter uv. The Relative Neighborhood
Graph and the Gabriel Graph are easily constructed in a distributed
manner. There have been other suggestions, such
as the intersection between the Delaunay triangulation and
the unit disk graph [17]. All mentioned graphs are connected
provided that the unit disk graph is connected as well. We
use the Gabriel Graph, since it meets all requirements as
shown in the following lemma.
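Before stating the lemma, note that the Gabriel Graph test is directly implementable. A centralized O(n³) sketch of GG ∩ UDG (our own illustration; the point of the distributed constructions cited above is that each node can evaluate the same test locally from its neighbors' positions):

```python
import math
from itertools import combinations

def gg_udg_edges(pts, radius=1.0):
    """Edges of the Gabriel Graph intersected with the unit disk
    graph: uv is kept iff |uv| <= radius and no third node lies
    inside or on the circle with diameter uv."""
    edges = []
    for u, v in combinations(range(len(pts)), 2):
        if math.dist(pts[u], pts[v]) > radius:
            continue                              # not a UDG edge
        mid = ((pts[u][0] + pts[v][0]) / 2,
               (pts[u][1] + pts[v][1]) / 2)
        r = math.dist(pts[u], pts[v]) / 2
        if all(math.dist(pts[w], mid) > r
               for w in range(len(pts)) if w not in (u, v)):
            edges.append((u, v))
    return edges
```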
Lemma 4.4. In the Ω(1)-model the shortest path for any of the considered metrics (Euclidean distance, link distance, and energy) on the Gabriel Graph intersected with the unit disk graph is only by a constant factor longer than the shortest path on the unit disk graph for the respective metric.
Figure 7: The unit disk graph contains an energy optimal path.
Proof. We show that at least one best path with respect
to the energy metric on the UDG is also contained in GG ∩ UDG. Suppose that e = (u, v) is an edge of an energy optimal path p on the UDG. For the sake of contradiction suppose that e is not contained in GG ∩ UDG. Then there is a node w in or on the circle with diameter uv (see Figure 7). The edges e' = (u, w) and e'' = (w, v) are also edges of the UDG, and because w is in the described circle, we have |e'|² + |e''|² ≤ |e|². If w is inside the circle with diameter uv, the energy for the path p' := p \ {e} ∪ {e', e''} is smaller than the energy for p, and p is no energy-optimal path. If w is on the above circle, p' is an energy-optimal path as well and the argument applies recursively. Using Lemma 3.1, we see
that the optimal path costs with respect to the Euclidean
and the link distance metrics are only by a constant factor
greater than the energy cost of p. This concludes the proof.
Lemma 4.4 directly leads to Theorem 4.5.
Theorem 4.5. Let p*_λ for λ ∈ {d, ℓ, E} be an optimal path with respect to the corresponding metric on the unit disk graph in the Ω(1)-model. We have c_λ(AFR) ∈ O(c_λ²(p*_λ)) when applying AFR on GG ∩ UDG in the Ω(1)-model.
Proof. The theorem directly follows from Lemma 3.1,
Lemma 4.3, and Lemma 4.4.
5. LOWER BOUND
In this section we give a constructive lower bound for geometric
ad-hoc routing algorithms.
Figure 8: Lower bound graph
Theorem 5.1. Let the cost of a best route for a given
source destination pair be c. Then any deterministic (randomized) geometric ad-hoc routing algorithm has (expected) cost Ω(c²) for the link, the distance, or the energy metric.
Proof. We construct a family of networks as follows. We
are given a positive integer k and define a Euclidean graph
are given a positive integer k and define a Euclidean graph G (see Figure 8). On a circle we evenly distribute 2k nodes such that the distance between two neighboring points is exactly 1; thus, the circle has radius r ≈ k/π. For every second
node of the circle we construct a chain of #r/2# - 1 nodes.
The nodes of such a chain are arranged on a line pointing
towards the center of the circle; the distance between two
neighboring nodes of a chain is exactly 1. Node w is one
arbitrary circle node with a chain: The chain of w consists
of #r# nodes with distance 1. The last node of the chain of
w is the center node; note that the edge to the center node
does not need to have distance 1.
Please note that the unit disk graph consists of the edges
on the circle and the edges on the chains only. In particular,
there is no edge between two chains because all chains except
the w chain end strictly outside radius r/2. Note that the
graph has k chains with Θ(k) nodes each.
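Under our reading of the construction, the node count can be checked numerically; the quadratic growth in k is what will force the quadratic exploration cost below:

```python
import math

def lower_bound_graph_size(k):
    """Number of nodes in the lower-bound construction: 2k circle
    nodes with unit spacing, a chain of floor(r/2) - 1 nodes at every
    second circle node except w, and w's chain of floor(r) nodes
    ending at the center (our reading of the construction)."""
    r = 1 / (2 * math.sin(math.pi / (2 * k)))  # radius giving spacing 1
    chain = int(r // 2) - 1
    return 2 * k + (k - 1) * chain + int(r)
```

Doubling k roughly quadruples the size, i.e. the graph has Θ(k²) nodes while an optimal route costs only Θ(k).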
We route from an arbitrary node on the circle (the source
s) to the center of the circle (the destination t). An optimal
route between s and t follows the shortest path on the circle
until it hits node w, and then directly follows w's chain to
t with link cost c # k routing
algorithm with routing tables at each node will find this best
route.
A geometric ad-hoc routing algorithm needs to find the
"correct" chain w. Since there is no routing information
stored at the nodes, this can only be done by exploring the
chains. Any deterministic algorithm needs to explore the
chains in a deterministic order until it finds the chain w.
Thus, an adversary can always place w such that w's chain
will be explored as the last one. The algorithm will therefore
explore Θ(k²) (instead of only O(k)) nodes.
The argument is similar for randomized algorithms. By
placing w accordingly (randomly!), an adversary forces the
randomized algorithm to explore Ω(k) chains before chain w with constant probability. Then the expected link cost of the algorithm is Ω(k²).
Because all edges (but one) in our construction have length
1, the costs in the Euclidean distance, the link distance, and
the energy metrics are equal. Thus, the Ω(c²) bound holds for all three metrics.
Note that our lower bound does hold generally, not only for the Ω(1)-model. However, if the graph is not in the Ω(1)-model, there might be a higher (worse) lower bound.
To conclude this section, we present the main theorem of
this paper stating that AFR is asymptotically optimal for
unit disk graphs in the Ω(1)-model.
Theorem 5.2. Let c be the cost of an optimal path for
a given source destination pair on a unit disk graph in the Ω(1)-model. In the worst case the cost for applying AFR to find a route from the source to the destination is Θ(c²).
This is asymptotically optimal.
Proof. Theorem 5.2 is an immediate consequence of Theorem
4.5 and of Theorem 5.1.
6. CONCLUSION
In this paper we proved a lower bound for geometric ad-hoc
routing algorithms on the unit disk graph. Specifically,
we showed that in the worst case the cost of any geometric
ad-hoc routing algorithm is quadratic in the cost of an optimal
path. This result holds for the Euclidean distance, the
link distance, and the energy metric. Furthermore, we gave
an algorithm (AFR) which matches this lower bound and is
therefore optimal.
It is interesting to see that if we allow the nodes to store
O(log n) bits, we can achieve the same results even if the
source does not know anything about the coordinates of
the destination. The lower bound still holds and the upper
bound can be achieved by a simple flooding algorithm.
The source floods the network (we again take GG ∩ UDG) with an initial time to live ttl_0, i.e. all nodes up to depth ttl_0 are reached. The result of the flood (destination reached
or not reached) is then echoed back to the source along the
same paths in the reverse direction. We iterate the process
with exponentially growing time to live until we reach the
destination. All nodes which are reached by flooding with
TTL ttl are in a circle with radius ttl around the source. In this circle there are O(ttl²) nodes and hence also O(ttl²) edges, each of which is traversed at most 4 times (including the echo process). Therefore, the cost of iteration i (with TTL ttl_i) is O(ttl_i²), and the cost of the whole algorithm is
quadratic in the cost of the best path for any of the three
considered metrics. We find it intriguing that a few storage
bits in each node appear to be as good as the geometric
information about the destination.
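The expanding-ring scheme sketched above can be prototyped with plain BFS; the graph representation and message accounting below are ours, for illustration, with every examined adjacency counted as one flood message plus one echo message:

```python
from collections import deque

def ttl_flood(adj, s, t, ttl0=1):
    """Flood from s up to depth ttl, doubling ttl until t is reached.
    Returns the number of message transmissions, counting one flood
    plus one echo per examined adjacency."""
    ttl, cost = ttl0, 0
    while True:
        seen, frontier, found = {s}, deque([(s, 0)]), False
        while frontier:
            u, d = frontier.popleft()
            if u == t:
                found = True
            if d == ttl:
                continue          # TTL exhausted: do not forward
            for v in adj[u]:
                cost += 2         # flood message + its echo
                if v not in seen:
                    seen.add(v)
                    frontier.append((v, d + 1))
        if found:
            return cost
        ttl *= 2                  # destination not reached: double TTL
```

As in AFR, the per-round costs grow geometrically, so the total is dominated by the last (successful) round.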
7. REFERENCES
Online routing in triangulations.
Online routing in convex subdivisions.
Routing with guaranteed delivery in ad hoc wireless networks.
A performance comparison of multi-hop wireless ad hoc network routing protocols.
Internal node and shortcut based routing with guaranteed delivery in wireless networks.
Delaunay graphs are almost as good as complete graphs.
Routing and addressing problems in large metropolitan-scale internetworks.
A new statistical approach to geographic variation analysis.
Position based routing algorithms for ad hoc networks: a taxonomy.
Location systems for ubiquitous computing.
Transmission range control in multihop packet radio networks.
Geocasting in mobile ad hoc networks: location-based multicast algorithms.
Compass routing on geometric networks.
Geometric ad-hoc routing for unit disk graphs and general cost models.
A scalable location service for geographic ad-hoc routing.
Distributed construction of planar spanner and routing for ad hoc wireless networks.
A survey on position-based routing in mobile ad-hoc networks.
A survey of routing techniques for mobile communications networks.
A review of current routing protocols for ad-hoc mobile wireless networks.
Dynamic fine-grained localization in ad-hoc networks of sensors.
Optimal transmission ranges for randomly distributed packet radio terminals.
The relative neighborhood graph of a finite planar set.
Routing with guaranteed delivery in geometric and wireless networks.
--CTR
Qing Fang , Jie Gao , Leonidas J. Guibas, Locating and bypassing holes in sensor networks, Mobile Networks and Applications, v.11 n.2, p.187-200, April 2006
Stefan Funke, Topological hole detection in wireless sensor networks and its applications, Proceedings of the 2005 joint workshop on Foundations of mobile computing, September 02-02, 2005, Cologne, Germany
Roland Flury , Roger Wattenhofer, MLS:: an efficient location service for mobile ad hoc networks, Proceedings of the seventh ACM international symposium on Mobile ad hoc networking and computing, May 22-25, 2006, Florence, Italy
Jongkeun Na , Chong-kwon Kim, GLR: a novel geographic routing scheme for large wireless ad hoc networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.17, p.3434-3448, 5 December 2006
Wen-Zhan Song , Yu Wang , Xiang-Yang Li , Ophir Frieder, Localized algorithms for energy efficient topology in wireless ad hoc networks, Proceedings of the 5th ACM international symposium on Mobile ad hoc networking and computing, May 24-26, 2004, Roppongi Hills, Tokyo, Japan
Minimizing recovery state In geographic ad-hoc routing, Proceedings of the seventh ACM international symposium on Mobile ad hoc networking and computing, May 22-25, 2006, Florence, Italy
Wen-Zhan Song , Yu Wang , Xiang-Yang Li , Ophir Frieder, Localized algorithms for energy efficient topology in wireless ad hoc networks, Mobile Networks and Applications, v.10 n.6, p.911-923, December 2005
Young-Jin Kim , Ramesh Govindan , Brad Karp , Scott Shenker, On the pitfalls of geographic face routing, Proceedings of the 2005 joint workshop on Foundations of mobile computing, September 02-02, 2005, Cologne, Germany
Ittai Abraham , Dahlia Malkhi, Compact routing on euclidian metrics, Proceedings of the twenty-third annual ACM symposium on Principles of distributed computing, July 25-28, 2004, St. John's, Newfoundland, Canada
Wang , Xiang-Yang Li, Localized construction of bounded degree and planar spanner for wireless ad hoc networks, Proceedings of the joint workshop on Foundations of mobile computing, p.59-68, September 19, 2003, San Diego, CA, USA
Vishakha Gupta , Gaurav Mathur , Anil M. Shende, Wireless ad hoc lattice computers (WAdL), Journal of Parallel and Distributed Computing, v.66 n.4, p.531-541, April 2006
Fabian Kuhn , Roger Wattenhofer , Aaron Zollinger, Worst-Case optimal and average-case efficient geometric ad-hoc routing, Proceedings of the 4th ACM international symposium on Mobile ad hoc networking & computing, June 01-03, 2003, Annapolis, Maryland, USA
Wang , Xiang-Yang Li, Localized construction of bounded degree and planar spanner for wireless ad hoc networks, Mobile Networks and Applications, v.11 n.2, p.161-175, April 2006
Leszek Gsieniec , Chang Su , Prudence W. H. Wong , Qin Xin, Routing of single-source and multiple-source queries in static sensor networks, Journal of Discrete Algorithms, v.5 n.1, p.1-11, March, 2007
Fabian Kuhn , Roger Wattenhofer , Yan Zhang , Aaron Zollinger, Geometric ad-hoc routing: of theory and practice, Proceedings of the twenty-second annual symposium on Principles of distributed computing, p.63-72, July 13-16, 2003, Boston, Massachusetts
Ittai Abraham , Danny Dolev , Dahlia Malkhi, LLS: a locality aware location service for mobile ad hoc networks, Proceedings of the 2004 joint workshop on Foundations of mobile computing, October 01-01, 2004, Philadelphia, PA, USA
Fabian Kuhn , Aaron Zollinger, Ad-hoc networks beyond unit disk graphs, Proceedings of the joint workshop on Foundations of mobile computing, p.69-78, September 19, 2003, San Diego, CA, USA
Bharat Bhargava , Xiaoxin Wu , Yi Lu , Weichao Wang, Integrating heterogeneous wireless technologies: a cellular aided mobile Ad Hoc network (CAMA), Mobile Networks and Applications, v.9 n.4, p.393-408, August 2004
Radha Poovendran , Loukas Lazos, A graph theoretic framework for preventing the wormhole attack in wireless ad hoc networks, Wireless Networks, v.13 n.1, p.27-59, January 2007
Xiang-Yang Li , Wen-Zhan Song , Weizhao Wang, A unified energy-efficient topology for unicast and broadcast, Proceedings of the 11th annual international conference on Mobile computing and networking, August 28-September 02, 2005, Cologne, Germany
Gady Kozma , Zvi Lotker , Micha Sharir , Gideon Stupp, Geometrically aware communication in random wireless networks, Proceedings of the twenty-third annual ACM symposium on Principles of distributed computing, July 25-28, 2004, St. John's, Newfoundland, Canada
Fabian Kuhn , Roger Wattenhofer , Aaron Zollinger, An algorithmic approach to geographic routing in ad hoc and sensor networks, IEEE/ACM Transactions on Networking (TON), v.16 n.1, p.51-62, February 2008 | routing;geometric routing;face routing;wireless communication;unit disk graphs;ad-hoc networks |
570818 | Approximation algorithms for the mobile piercing set problem with applications to clustering in ad-hoc networks. | The main contributions of this paper are two-fold. First, we present a simple, general framework for obtaining efficient constant-factor approximation algorithms for the mobile piercing set (MPS) problem on unit-disks for standard metrics in fixed dimension vector spaces. More specifically, we provide low constant approximations for L1- and L∞-norms on a d-dimensional space, for any fixed d > 0, and for the L2-norm on 2- and 3-dimensional spaces. Our framework provides a family of fully-distributed and decentralized algorithms, which adapts (asymptotically) optimally to the mobility of disks, at the expense of a low degradation on the best known approximation factors of the respective centralized algorithms: Our algorithms take O(1) time to update the piercing set maintained, per movement of a disk. We also present a family of fully-distributed algorithms for the MPS problem which either match or improve the best known approximation bounds of centralized algorithms for the respective norms and dimensions. Second, we show how the proposed algorithms can be directly applied to provide theoretical performance analyses for two popular 1-hop clustering algorithms in ad-hoc networks: the lowest-id algorithm and the Least Cluster Change (LCC) algorithm. More specifically, we formally prove that the LCC algorithm adapts in constant time to the mobility of the network nodes, and minimizes (up to low constant factors) the number of 1-hop clusters maintained; we propose an alternative algorithm to the lowest-id algorithm which achieves a better approximation factor without increasing the cost of adapting to changes in the network topology. While there is a vast literature on simulation results for the LCC and the lowest-id algorithms, these had not been formally analysed prior to this work.
We also present an O(log n)-approximation algorithm for the mobile piercing set problem for nonuniform disks (i.e., disks that may have different radii), with constant update time. | Introduction
The mobile piercing set (MPS) problem is a variation of the (classical) piercing set problem that arises in
dynamic distributed scenarios. The MPS problem has many applications outside its main computational
geometry domain, as for example in mobile ad-hoc communication networks, as we will see later.
We start by formalizing some basic definitions. A disk D of radius r with center q in ℝ^d with respect to the L_p norm 1 is given by the set of points {z ∈ ℝ^d : ||z - q||_p ≤ r}. Let q(D) denote the center of a disk D. A piercing set of a given collection of disks D is a set of points P such that for every disk D ∈ D, there exists a point p ∈ P with p ∈ D; we say that P pierces every disk D ∈ D. The (classical) k-piercing set problem seeks to find whether a piercing set P of cardinality k of D exists, and if so, produces it. If the value of k is minimal over all possible cardinalities of piercing sets of D, then the set P is called a minimum piercing set of D. The minimum piercing set problem asks for the minimum piercing set of a given collection D.
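These definitions translate into a direct verification routine; a minimal sketch for L2 unit-diameter disks (r = 1/2), with names of our own choosing:

```python
import math

def pierces(points, centers, r=0.5):
    """True iff every disk (center c, radius r, L2 norm) contains at
    least one point of `points`."""
    return all(any(math.dist(p, c) <= r for p in points)
               for c in centers)
```

A minimum piercing set is then a smallest `points` for which `pierces` holds; the mobile version of the problem, introduced next, asks to maintain such a set as the centers move.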
We consider a dynamic variation of the classical piercing set problem, which arises in mobile and
distributed scenarios, where disks are moving in space. In the mobile piercing set (MPS) problem, we
would like to maintain a dynamic piercing set P of a collection of mobile disks D such that, at any time
t, P is a minimum piercing set of the current configuration of the disks. In other words, P must adapt
to the mobility of the disks. Moreover, we would like to be able to devise a distributed algorithm to
solve this problem, where the individual disks can decide in a distributed fashion (with no centralized
control) where to place the piercing points. In this scenario, we assume that the disks are able to detect
whether they intersect other disks. We can think about a disk as being the communication range of a
given mobile device (node), which resides at the center of the disk: A disk can communicate with all of
its adjacent nodes by a broadcast operation within O(1) time. Below, we will present applications of the
1 The L_p norm, for any fixed p, of a vector z in ℝ^d is given by ||z||_p = (Σ_{i=1}^{d} |z_i|^p)^{1/p}.
mobile piercing set problem in mobile networks.
In this paper, we focus on the case when the disks are all of the same radius r, or equivalently, of the same diameter 2r. Hence, without loss of generality, in the remainder of this paper, unless stated otherwise, we assume that r = 1/2, and therefore that we have a collection of unit-diameter disks, or
unit-disks for short. In Section 5, we address an extension of our algorithms to the nonuniform case,
where the disks may not all have the same radius.
In recent years, the technological advances in wireless communications have led to the realization of
ad-hoc mobile wireless networks, which are self-organizing and which do not rely on any sort of stationary
backbone structure. These networks are expected to significantly grow in size and usage in the next few
years. For scalability, specially in order to be able to handle updates due to the constant changes in
network topology, clustering becomes mandatory.
As hinted above, mobile unit-disks can be used to model an ad-hoc network where all mobile wireless
nodes have the same range of communication. Each mobile node's communication range is represented
by a disk in ℝ² (or ℝ³) centered at the node with radius equal to 1; a mobile node A can communicate
with mobile node B if and only if B is within A's communication range. The ad-hoc network scenario is a
direct application scenario for the unit-disk MPS problem, since an ad-hoc network is fully decentralized
and any algorithm running on such a network must adapt to mobility in an efficient way.
If all disks are of the same size, then the k-piercing set problem is equivalent to the decision version
of a well-known problem: the geometric k-center problem [2]. The k-center problem under L p metric
is defined as follows: Given a set S of n demand points in ℝ^d, find a set P of k supply points so that
the maximum L p distance between a demand point and its nearest supply point in P is minimized. The
corresponding decision problem is to determine, for a given radius r, whether S can be covered by the
union of k L p -disks of radius r, or in other words, to determine whether there exists a set of k points that
pierces the set of n L p -disks of radius r centered at the points of S. In some applications, P is required
to be a subset of S, in which case the problem is referred to as the discrete k-center problem. When
we choose the L 2 metric, the problem is called the Euclidean k-center problem, while for L1 metric the
problem is called the rectilinear k-center problem. Since the Euclidean and rectilinear k-center problems
in ℝ² are NP-complete (see e.g. [26, 29]) when k is part of the input, the planar unit-disk k-piercing set problem in ℝ² under these norms is also NP-complete. Unfortunately, an approximation
algorithm for the k-center problem does not translate directly into an approximation algorithm for the
unit-disk piercing set problem (and vice-versa), since an algorithm for the former problem will give an
approximation on the radius of the covering disks, while for the latter problem we need an approximation
on the number of piercing points. Still, the two approximation factors are highly related [2].
The remainder of this paper is organized as follows. In Section 1.1, we state our main contributions in
this work. In Section 2, we discuss more related work in the literature. Section 3 proves some geometric
properties of the piercing set problem. We use the results in Section 3 to develop the approximation
algorithms presented in Sections 4 and 5: The algorithm introduced in Section 4 leads to lower approximation
factors, for the norms and dimensions considered, while the one in Section 5 adapts optimally
to the movement of disks. In Section 6, we relate the algorithms presented for the MPS problem to
clustering in ad-hoc networks. Finally, we present some future work directions in Section 7.
1.1 Our results
In this paper we propose fully distributed (decentralized) approximation algorithms for the unit-disk MPS problem for some fixed norms and dimensions. All of the approximation factors presented in this paper are with respect to the number of points in a minimum piercing set.
For each algorithm, we are interested in computing the cost associated with building an initial approximate piercing set for the given initial configuration of the collection of disks (which we call the setup cost of the algorithm) and the cost associated with updating the current piercing set due to the movement of a disk (which we call the update cost of the algorithm). Actually, we charge the update costs per event, as we explain below. We assume that all the costs that do not involve communication between disks are negligible when compared to the cost of a disk communicating with its neighbors (through a broadcast operation). Therefore we will only consider communication costs when evaluating the algorithms considered.
In order to maintain an optimal or approximate piercing set of the disks, there are two situations which mandate an update of the current piercing set. The first situation is when the movement of a disk D results in having at least one disk D' of D unpierced (note that D' may be D itself). The second situation is when some piercing points in the set maintained become "redundant", and we may need to remove them from the set. Thus, we say that an (update) event is triggered (or happens) whenever one of the two situations just described occurs.
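Both event conditions can be read off directly from the disk positions. The sketch below is our illustration, not the paper's code: it assumes unit-disks of radius 1/2 under the L_2 norm, and reports disks left unpierced together with piercing points that pierce no disk at all (one simple form of redundancy).

```python
import math

def events(centers, piercing, r=0.5):
    """Detect the two update events for unit-disks (radius r, L2 norm):
    (1) disks left unpierced by the current piercing set, and
    (2) piercing points that pierce no disk (a simple form of redundancy)."""
    unpierced = [c for c in centers
                 if not any(math.dist(c, p) <= r for p in piercing)]
    redundant = [p for p in piercing
                 if not any(math.dist(c, p) <= r for c in centers)]
    return unpierced, redundant
```

In a decentralized setting each disk would evaluate only its own neighborhood rather than scan the whole collection as this sequential sketch does.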
The main contributions of this paper are two-fold. First, we present a family of constant-factor approximation algorithms, represented by the M-algorithm, for the unit-disk MPS problem with (asymptotically) optimal setup and update costs, for all the norms and space dimensions considered. Moreover, we achieve this without a significant increase in the approximation factor over the corresponding best known approximation algorithms for the classical piercing set problem. Let P be a minimum piercing set. More specifically, in d dimensions, we devise a 2^d-approximation algorithm under L_1 or L_∞. For the L_2 norm, we devise a seven-approximation algorithm in R^2, and a 21-approximation algorithm in R^3. All these algorithms have O(|P|) setup cost and O(1) update cost. Note that any dynamic algorithm that approximates the minimum piercing set of a collection of mobile disks has setup cost Ω(|P|) and update cost Ω(1). These algorithms are the first constant-approximation algorithms for the unit-disk MPS problem with asymptotically optimal setup and update costs. We summarize these results in Table 1.²
We also present a second family of fully distributed algorithms, represented by the A-algorithm, for the L_1 or L_∞ norms in any space R^d, and for the L_2 norm in R^2 and R^3. These algorithms achieve the same, or better, constant approximation factors as the best known centralized algorithms for the corresponding norm and space dimension, but have a poorer update cost of O(|P|). These algorithms are, to the best of our knowledge, the first fully distributed (decentralized) approximation algorithms which achieve the same approximation factors as their centralized counterparts. These algorithms are of interest since, for example, they provide an alternative to the lowest-id clustering algorithm in ad-hoc networks, which would achieve a four- (resp., 11-) approximation factor in R^2 (resp., R^3) without an increase in setup and update costs. We summarize these results in Table 2.²
The simple framework presented for the M-algorithm, which can handle mobility efficiently in a dynamic scenario, is an important contribution of this work on its own. It avoids the use of involved data structures, which in general cannot avoid some sort of "centralization" (even if implicit). In order to apply the given framework to a particular norm and dimension, one only needs to be able to compute a set of piercing points which are guaranteed to pierce the immediate neighborhood of any disk D: The number of such points will be used in bounding the approximation factor of the algorithms proposed.
² All the results are for unit-disks; L_p is equivalent to L_1 for any p in one dimension.
The second main contribution of this work is the application of the algorithms developed for the MPS problem to the problem of finding an efficient self-organizing one-hop underlying clustering structure for (wireless and mobile) ad-hoc networks, as seen in Section 6. In fact, one can use the algorithms developed for the MPS problem to derive the first theoretical performance analyses of the popular Least Cluster Change (LCC) algorithm proposed by Chiang et al. [7], and of the lowest-id algorithm (discussed by Gerla and Tsai in [15]), both in terms of the number of one-hop clusters maintained and in terms of update and setup costs, thus providing a deeper understanding of these two algorithms and validating the existing simulation results for the same. No previous formal analysis of either algorithm exists in the literature. Namely, we show that the LCC algorithm has the same approximation factor, setup and update costs as the M-algorithm for L_2 in R^2 (or R^3), and that the lowest-id algorithm also maintains the same approximation factor as the M-algorithm, while incurring higher update costs.
Another contribution of our work addresses the MPS problem on disks of nonuniform radii. If the ratio between the maximum and minimum disk radii is bounded by a polynomial in n, we present a fully-distributed O(log n)-approximation algorithm for this problem, with constant update cost.
2 Related Work
The k-center and k-piercing problems have been extensively studied. In d dimensions, a brute-force approach leads to an exact algorithm for the k-center problem with running time O(n^{dk+2}). For the planar case of the Euclidean k-center problem, Hwang et al. [24] gave an n^{O(√k)}-time algorithm, improving Drezner's [9] solution, which runs in time O(n^{2k+1}). An algorithm with the same running time was presented by Hwang et al. [23] for the planar discrete Euclidean k-center problem. Recently, Agarwal and Procopiuc [1] extended and simplified the technique of Hwang et al. [24] to obtain an n^{O(k^{1-1/d})}-time algorithm for computing the Euclidean k-center problem in d dimensions.
Sharir and Welzl [32] explain a reduction from the rectilinear k-center problem to the k-piercing set problem (under the L_∞ metric), using a sorted matrix searching technique developed by Frederickson and Johnson [13]. Ko et al. [26] proved the hardness of the planar version of the rectilinear k-center problem and presented an O(n log n) time 2-approximation (on the covering radius) algorithm. (In fact, Ko et al. [26] proved that, unless P = NP, the best approximation factor that can be achieved in polynomial time for the rectilinear k-center problem is 2.) Several approximation results (on the radii of the disks) have been obtained in [11, 17, 20, 21]. For more results on the k-center problem, please refer to [2].
Regarding the k-piercing set problem in R^d, Fowler et al. [12] proved the NP-completeness of finding the minimum value of k for a given set of n disks. Hochbaum and Maass [19] gave an O(l^d · n^{2l^d + 1}) polynomial time algorithm for the minimum piercing set problem with approximation factor (1 + 1/l)^d for any fixed integer l ≥ 1. Thus, for l = 1, it yields an O(n^3) algorithm with performance ratio 2^d. For the one-dimensional case, Katz et al. [25] presented an algorithm that maintains the exact piercing set of points for a collection of n intervals in O(|P| log n) time, where P is a minimum piercing set. Their solution can be adapted to obtain an algorithm with distributed running time O(|P|) for computing a minimum piercing set of n intervals. Nielsen [30] proposed a 2^{d-1}-approximation algorithm that works in d-dimensional space under the L_∞ metric in O(dn log c) time, where c is the size of the piercing set found. This algorithm is based on the divide-and-conquer paradigm.
Although not stated explicitly, the approximation on the radius for the k-center problem in [1] implies a four-approximation algorithm for the minimum piercing set problem for R^2 and L_2. Efrat et al. [10] introduced a dynamic data structure based on segment trees which can be used for the piercing set problem. They presented a sequential algorithm which gives a constant factor approximation for the minimum piercing set problem for "fat" objects with polynomial setup and update time. See [10] for the definition of "fatness" and more details.
A large number of clustering algorithms have been proposed and evaluated through simulations in the ad-hoc network domain, as for example in [3, 4, 15, 27, 28, 31]. Gerla and Tsai [15] considered two distributed clustering algorithms, the lowest-id algorithm and the highest-degree algorithm, which select respectively the lowest-id mobile or the highest-degree mobile in a one-hop neighborhood as the clusterhead. A weight-oriented clustering algorithm, more suitable to "quasi-static" networks, was introduced by Basagni [4], where one-hop clusters are formed according to a weight-based criterion that allows the choice of the nodes that coordinate the clustering process based on node mobility-related parameters. In [27], Lin and Gerla described a non-overlapping clustering algorithm where clusters can be dynamically reconfigured.
The LCC algorithm proposed by Chiang et al. [7] aims to maintain a one-hop clustering of a mobile network with the least number of changes in the clustering structure, where clusters are broken and re-clustered only when necessary. In fact, our algorithm for the MPS problem, when translated to a clustering algorithm in the ad-hoc scenario, is essentially the LCC algorithm, as discussed in Section 6.
Recently, researchers have investigated using geometric centers as clusterheads in order to minimize the maximum communication cost between a clusterhead and the cluster members. Bespamyatnikh et al. [6] discussed how to compute and maintain the one-center and the one-median for a given set of n moving points on the plane (the one-median is a point that minimizes the sum of all distances to the input points). Their algorithm can be used to select clusterheads if mobiles are already partitioned into clusters.
Gao et al. [14] proposed a randomized algorithm for maintaining a set of clusters based on geometric centers, for a fixed radius, among moving points on the plane. Their algorithms have an expected approximation factor on the optimal number of centers (or, equivalently, of clusters) of c_1 log n for intervals and of c_2 √n for squares³, for some constants c_1 and c_2. The probability that there are more than c times the optimal number of centers is 1/n for the case of intervals; for squares, the probability that there are more than c times the optimal number of centers is also 1/n. An extension of this basic algorithm led to a hierarchical algorithm, also presented in [14], based on kinetic data structures [5]. The hierarchical algorithm admits an expected constant approximation factor on the number of discrete centers, where the approximation factor also depends linearly on the constants c_1 and c_2. The dependency of the approximation factor and the probability that the algorithm chooses more than a constant times the optimal number of centers are similar to those of the non-hierarchical algorithm for the squares case. The constants c_1 and c_2, which have not been explicitly determined in [14], can be shown to be very large (certainly more than an order of magnitude larger than the corresponding approximation constants presented in this paper), even if we allow the probability of deviating from the expected constant approximation on the number of centers (which depends linearly on c_1 and c_2) not to be close to one. Their algorithm has an expected update time of O(log^{3.6} n) (while the update cost is constant in our algorithm), and the number of levels used in the hierarchy is O(log log n), with O(n log n log log n) total space.
³ Disks in 1D correspond to intervals on the line; in 2D, disks under L_1 or L_∞ are called squares.
Har-Peled [18] found a scheme for determining centers in advance, if the degree of motion of the nodes is known: More specifically, if in the optimal solution the number of centers is k and r is the optimal radius for points moving with degree of motion φ, then his scheme guarantees a 2^{φ+1}-approximation (of the radius) with k^{φ+1} centers chosen from the set of input points before the points start to move.
3 Geometry for the Piercing Set Problem
In this section, we prove some geometric properties of the minimum piercing set problem. More specifically, we solve the minimum piercing set problem on the neighborhood of a disk, which will provide the basic building block for the approximation algorithms presented in the following sections. The main step of the approximation algorithms is to select an unpierced unit-disk and pierce all of its neighbors. By repeating this procedure, we will eventually pierce all unit-disks and form a piercing set. The approximation factors are determined by the number of piercing points chosen for each selected unpierced unit-disk.
If two disks D and D' intersect, we say that D is a neighbor of D'. The neighborhood of a disk D, denoted by N(D), is defined as the collection of all disks that intersect D, N(D) = {D' ∈ D : D' ∩ D ≠ ∅}. Note that D ∈ N(D).
We are interested in the minimum number of points that pierce all disks in the neighborhood of a given disk. However, this number may vary, depending on the distribution of the disks in the particular neighborhood under consideration. Thus, we compute the minimum number (along with the fixed positions) of points needed to pierce any possible neighborhood of a disk. This number is called the neighborhood piercing number. The neighborhood piercing number is tight in the sense that for any set of points with smaller cardinality, we can find some configuration of the neighborhood of a disk which has an unpierced disk. The corresponding piercing points are called the neighborhood piercing points. Clearly, the piercing number is a function of both the dimension d and the norm index p. Hence, we denote the neighborhood piercing number for dimension d and norm index p as N(d, p), and we use PN(D, d, p) to denote a corresponding set of neighborhood piercing points of a unit-disk D.⁴ We prove in this section that N(d, 1) = N(d, ∞) = 2^d for all d ≥ 1, and that N(2, 2) = 7. We also place an upper bound of 21 on N(3, 2). For each of the norms and dimensions considered, we give a corresponding set of neighborhood piercing points.
⁴ In general, we omit the parameters p, d, or D whenever they are clear from the context.
First we reduce the minimum piercing set problem to an equivalent disk covering problem. Let D be a collection of unit-disks and P be a set of points. Let P' be the set of centers of all disks in D, and D' be a collection of unit-disks centered at the points in P. Then P pierces D if and only if D' covers P'. Moreover, P is a minimum piercing set for D if and only if D' is a minimum set of disks (with respect to cardinality) that covers P'. We define the unit-disk covering problem to be the problem of finding the minimum k such that there are k unit-disks whose union covers a given point set.
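The equivalence rests on the symmetry of the distance condition: a point p pierces the disk centered at c exactly when the disk centered at p contains c. The following sketch (our illustration, with unit-disks of radius 1/2 under L_2) checks both directions of the reduction on the same data.

```python
import math

R = 0.5  # unit-disk radius

def pierces(P, D_centers):
    """Does the point set P pierce every disk in D (given by its centers)?"""
    return all(any(math.dist(c, p) <= R for p in P) for c in D_centers)

def covers(Dp_centers, Pp):
    """Does the disk collection D' (given by its centers) cover the points P'?"""
    return all(any(math.dist(q, c) <= R for c in Dp_centers) for q in Pp)

# Reduction from the text: with P' = centers of D, and D' = disks centered
# at P, "P pierces D" holds exactly when "D' covers P'".
def equivalent(P, D_centers):
    return pierces(P, D_centers) == covers(P, D_centers)
```

Both predicates reduce to the same pairwise distance test, which is precisely why piercing and covering exchange roles under this reduction.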
We now reduce the problem of finding the neighborhood piercing number to a unit-disk covering problem as follows. For a unit-disk D, all the centers of unit-disks in N(D) are located in the region G = {x : |x − q|_p ≤ 1}, where q is the center of D. Conversely, a disk centered at any point in G must intersect D. Therefore, we seek the minimum number of unit-disks that cover the region G. The centers of those disks serve as the set of neighborhood piercing points PN(D). The tightness of N can be seen from the fact that, in the disk covering problem, we cannot cover the entire region G with fewer than N disks, as proven in the following lemma. Note that the region G is a disk of radius 1, and that all of the disks that we use to cover G are unit-disks (i.e., of radius 1/2, so that two unit-disks intersect exactly when their centers are within distance 1).
Lemma 1 The neighborhood piercing number is equal to 2^d for a d-dimensional space under the L_1 or L_∞ norm. The neighborhood piercing number for two dimensions and L_2 is equal to seven.
Proof: For any L_p norm, in a d-dimensional space, the ratio of the area of G to the area of a unit-disk is 2^d. Thus, we need at least 2^d disks to cover G, i.e., N ≥ 2^d for any dimension d ≥ 1 and any norm index p. The lower bound of 2^d is in fact tight for L_1 and L_∞, since in any dimension d, the unit-disk D has 2^d "corners" under these norms, and the set of unit-disks centered at those "corners" covers the entire region G.
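The "corners" argument for L_∞ can be checked numerically. In the sketch below (our illustration; unit-disks of radius 1/2, so neighbor centers lie in the radius-1 L_∞ ball G), the 2^d corner points of D pierce every disk whose center is sampled from G: for any center c, the corner with matching coordinate signs is within L_∞ distance 1/2 of c.

```python
import itertools
import random

def corner_points(d, r=0.5):
    """The 2^d 'corners' of a radius-r L-infinity unit-disk at the origin."""
    return list(itertools.product((-r, r), repeat=d))

def linf(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def corners_pierce_neighborhood(d, trials=2000, r=0.5):
    """Sample neighbor centers from G (the radius-2r L-infinity ball) and
    check that each corresponding unit-disk contains some corner point."""
    corners = corner_points(d, r)
    return all(
        any(linf(c, s) <= r for s in corners)
        for c in (tuple(random.uniform(-2 * r, 2 * r) for _ in range(d))
                  for _ in range(trials)))
```

The same corner construction works for L_1 after the usual 45-degree correspondence between L_1 and L_∞ balls in the plane.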
The case p = 2 is more involved, since we cannot pack "spheres" as tightly as "hypercubes" without leaving uncovered points in the region G if no intersection of disks is allowed. Without loss of generality, assume that we are covering the neighborhood of a disk D centered at the origin. Any given point p can be represented by (φ, θ), where 0 ≤ φ ≤ 1 and 0 ≤ θ ≤ 2π are the radius and angle of p in polar coordinates, respectively. A set PN(D) is given by the point (0, 0) together with the six points with polar coordinates (√3/2, (2k+1)π/6) for k = 0, ..., 5, as shown. (If we assume that point q represents the origin in Figure 1-(a), then PN(D) is given by the points q, r, s, t, u, v and w.) Consider the sector 0 ≤ θ ≤ π/3. For the other sectors, analogous arguments apply after a rotation. Let p = (φ, θ), φ ≤ 1, be a point in G such that 0 ≤ θ ≤ π/3. If φ ≤ 1/2, then p is covered by D. The boundary of the unit-disk centered at (√3/2, π/6) intersects the boundary of region G at the points (1, 0) and (1, π/3). It also intersects the boundary of D at the points (1/2, 0) and (1/2, π/3). Thus every point p with 1/2 ≤ φ ≤ 1 and 0 ≤ θ ≤ π/3 is located in the unit-disk centered at (√3/2, π/6).
The perimeter of the boundary of region G is 2π, and one unit-disk can cover at most π/3 of this perimeter. Thus we need at least six unit-disks to cover the boundary of G, and hence seven is the minimum number of unit-disks covering the entire region G (a disk covering a π/3 arc of the boundary cannot also contain the center). Hence N(2, 2) = 7.
Figure 1-(a) shows an optimal seven-disk covering with disks centered at q, r, s, t, u, v and w, for the region G under the L_2 norm in R^2. If q = (x, y) is the center of the unit-disk D, the Cartesian coordinates of the six other points are (x ± 3/4, y ± √3/4) and (x, y ± √3/2).
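The seven-disk covering can be verified numerically. The sketch below (ours, not the paper's) places the six outer piercing points at distance √3/2 from the origin, one per π/3 sector, matching the Cartesian coordinates above, and samples the radius-1 region G on a polar grid; a small epsilon absorbs sample points that lie exactly on a covering disk's boundary.

```python
import math

R3 = math.sqrt(3) / 2
# Center point plus six points at radius sqrt(3)/2, angles (2k+1)*pi/6.
PN = [(0.0, 0.0)] + [(R3 * math.cos((2 * k + 1) * math.pi / 6),
                      R3 * math.sin((2 * k + 1) * math.pi / 6))
                     for k in range(6)]

def covered(q, r=0.5, eps=1e-9):
    """Is point q within distance r of one of the seven piercing points?"""
    return any(math.dist(q, p) <= r + eps for p in PN)

def seven_disks_cover_G(radial=60, angular=360):
    """Sample the radius-1 region G on a polar grid and check coverage."""
    return all(
        covered((rho * math.cos(t), rho * math.sin(t)))
        for rho in (i / radial for i in range(radial + 1))
        for t in (2 * math.pi * j / angular for j in range(angular)))
```

The extreme cases are tight: a point at radius 1 on a sector boundary is at distance exactly 1/2 from the nearest outer piercing point, which is why no smaller configuration works.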
For the L_2 norm in R^3, we were only able to place an upper bound on the number of unit-disks needed to cover a disk of diameter 2, hence placing an upper bound on N(3, 2). A simple argument [22] suffices to verify that 20 unit-disks centered at some evenly spaced points on the surface of G, plus a unit-disk centered at the origin, cover a disk G of diameter two also centered at the origin. Hence we have N(3, 2) ≤ 21. It remains an open problem to compute the exact value of N(3, 2). The neighborhood piercing number for L_2 is closely related to the sphere packing and sphere covering problems described in [8].
When compared to the results in the literature, the approximation factors based on the neighborhood piercing points are not always the best known. For example, we have shown that N(1, p) = 2, which leads to a two-approximation algorithm for piercing unit-intervals on the line (see Section 5). In [16, p. 193] (see also [25]), an exact solution (i.e., a one-approximation) for piercing unit-intervals is proposed. The idea there, shown in Figure 2, is to start from the rightmost interval D, where only one endpoint of D, the left endpoint l, is enough for piercing all neighbors of D (since D has no neighbor to its right). In order to be able to extend and generalize this idea to other norms and higher dimensions, we need to define the halfspace of a disk D with orientation n, denoted by H_D(n): H_D(n) = {x : (x − q(D)) · n ≥ 0}, where q(D) is the center of D.
For the one-dimensional case, all of the centers of the neighboring disks of the rightmost interval D are located in the halfspace H_D(−1) (to the "left" of D), and only half of the neighborhood piercing points (i.e., only N(1)/2 points) are enough for piercing N(D). More generally, in any d-dimensional space, there exists an orientation n such that we need roughly half of the neighborhood piercing points to pierce all the neighbors of disk D located in H_D(n). The minimum number of piercing points needed for the halfspace H_D(n), over all possible orientations n, is called the halfspace neighborhood piercing number, and is denoted by N*. The set of corresponding piercing points are called the halfspace neighborhood piercing points of D and are denoted by PN*(D).
If PN(D) is symmetric with respect to the center of the unit-disk D, then N* = ⌈N/2⌉ if the center of D does not belong to PN, or N* = ⌈(N−1)/2⌉ + 1 otherwise. Note that this is the case for PN(d, 1), PN(d, ∞) and PN(2, 2). The set of piercing points which corresponds to the upper bound of 21 for N(3, 2) is not symmetric, but we can still find an orientation such that 11 points are enough to pierce the halfspace neighborhood of a disk with respect to that orientation. Figure 1-(b) illustrates the halfspace neighborhood piercing points (points q, r, s and t) for R^2 under the L_2 norm. The orientation considered is n = (0, 1).
Table 3 summarizes some values of the neighborhood piercing number, and of its halfspace counterpart, for the lower dimensions and the norms L_1, L_∞ and L_2, where we denote the minimum of N(n) as N* and the corresponding PN(n) as PN*. It follows from the upper bound on N(3, 2) that N* ≤ 11 for L_2 and R^3. The corresponding halfspace neighborhood piercing points are also a subset of the points used for establishing the upper bound on N(3, 2). It also remains an open question to determine the exact value of N* for L_2 and R^3.
For an orientation n, if we order all unit-disks D in D according to the values q(D) · n, then a unit-disk D bearing the smallest q(D) · n value satisfies the property that all its neighbors are located in the halfspace H_D(n). Thus, by carefully choosing the order in which we consider the neighborhoods of disks to be pierced, we can use the halfspace neighborhood piercing points as the basis of the fully-distributed algorithms for the MPS problem presented in Section 4, which match or improve the best known approximation factors of the respective centralized algorithms.
The problem of computing N* for other L_p metrics is more involved and may not have many practical applications. A method to estimate an upper bound on N* and compute the corresponding set of neighborhood piercing points for arbitrary L_p metrics is discussed in [22] for completeness.
4 Better Approximation Factors
In this section we present a family of constant-factor fully-distributed (decentralized) approximation algorithms for the piercing set problem, which at least match the best known approximation factors of centralized algorithms for the respective norms and dimensions. This algorithm introduces some basic concepts which will be useful when developing the algorithms in Section 5. Also, the algorithm presented in this section directly translates into an alternative to the lowest-id clustering algorithm for ad-hoc networks (discussed in Section 6), which achieves a better approximation factor on the number of clusters maintained. The algorithms in this section all follow a general algorithmic framework, which we call the A-algorithm (for having better approximation factors), in contrast with the slightly looser approximation factors of the other family of algorithms presented in Section 5 (represented by the M-algorithm), which can better handle mobility.
Consider a set of unit-disks in a d-dimensional space under the norm L_p. As shown in Section 3, we need at most N* piercing points to pierce the neighborhood of a unit-disk D bearing the smallest q(D) · n among the (unpierced) disks in its neighborhood, where n is an orientation that gives N*. We call such a disk D a top disk. Thus, at each step of the algorithm, each top unpierced disk D elects itself as a piercing disk and selects the points in PN*(D) as piercing points. Since all the unpierced disks in N(D) are now pierced by PN*(D), we mark all the unpierced disks in N(D) as pierced, and repeat the procedure above. After repeating this step at most |P| times, all the unit-disks in D are pierced and a piercing set with cardinality at most N* times |P| is produced, as shown in Theorem 1. Provided that broadcasting has O(1) cost, the running time of the distributed A-algorithm is O(|P|). Theorem 1 states the main properties of the A-algorithm. This theorem actually extends the results in [25] and in [30], for the L_1 and L_∞ norms in d-dimensional spaces, to a more general distributed scenario, and also to the L_2 norm in two- and three-dimensional space.
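As a concrete special case, in one dimension the A-algorithm degenerates to the exact sweep of [16, 25]: with n = −1 the top disk is the rightmost unprocessed interval, and N* = 1 point (its left endpoint) pierces its entire halfspace neighborhood. A minimal sketch of this special case (our illustration; unit intervals of radius 1/2):

```python
def a_algorithm_1d(centers, r=0.5):
    """A-algorithm for unit intervals [c - r, c + r] on the line.
    Process disks by increasing q(D) . n for n = -1 (i.e. rightmost
    center first); each top unpierced interval contributes its left
    endpoint, which pierces all of its (left-lying) neighbors."""
    points = []
    for c in sorted(centers, reverse=True):
        if not any(abs(c - p) <= r for p in points):  # still unpierced?
            points.append(c - r)  # left endpoint of the top interval
    return points
```

For centers 0, 0.4 and 0.9 the sweep returns a single point, the left endpoint of the rightmost interval, which pierces all three intervals; here N* = 1, so the result is optimal.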
We re-invoke the A-algorithm to maintain the piercing set every time an event (as defined in Section 1.1) happens. In a distributed scenario, this can be done by flooding a reset message to unpierce all disks. Thus the update cost of the A-algorithm is also O(|P|).
Theorem 1 The approximation factor of the distributed A-algorithm is N*, and its setup and update costs are both O(|P|).
Proof: For each piercing unit-disk D, we need at least one point in the minimum piercing set P to pierce D. For any two distinct piercing unit-disks D and E, the point in P that pierces D cannot pierce E, since no two (distinct) piercing disks intersect. Thus we have at most |P| piercing unit-disks. For each piercing unit-disk, we select N* piercing points. Hence the approximation factor follows. It takes constant time to pierce the neighborhood of each piercing unit-disk using a broadcast operation. Hence the running time for both setup and update operations is O(|P|).
5 Better Handling of Mobility
We now present the M-algorithm, a fully distributed constant approximation algorithm for the mobile
piercing set problem that adapts optimally to the mobility of disks: The update cost of the M-algorithm is
O(1). We break the M-algorithm into two parts: the M-Setup algorithm, which builds an initial piercing
set, and the M-Update algorithm, which is in charge of adapting the piercing set maintained in response to
the mobility of disks (we will see later that the M-Update algorithm may initiate a local call to M-Setup
as a subroutine at some of the disks). The M-algorithm is more suitable for highly dynamic ad-hoc mobile
network scenarios.
The key idea behind the M-algorithm is to break the sequential running fashion of the A-algorithm. In the A-algorithm, an ordering of the unit-disks is mandatory (even if implicit). As shown in Figure 3, in the worst case the movement of one disk (the rightmost one in the figure) could lead to a global update of all selected piercing disks, while the cardinality of the minimum piercing set does not change. In order to maintain a relatively stable piercing set, the desired algorithm needs to be able to sever this "cascading effect", i.e., the algorithm needs to be able to keep the updates local. Lemma 2 shows that the cardinality of an optimal piercing set cannot change by much due to the movement of a single disk. This property suggests that an update can be kept local. The proof of this lemma, while trivial, is presented here for completeness.
Lemma 2 If at one time only one unit-disk moves, then ||P| − |P'|| ≤ 1, where P denotes a minimum piercing set before the movement, and P' denotes a minimum piercing set after the movement.
Proof: If the cardinality of the minimum piercing set changes, then it can either increase or decrease. Since the reverse of a movement that increases the cardinality of the minimum piercing set is a movement that decreases it, we only need to show that the cardinality of the minimum piercing set cannot be increased by more than 1. Let D be the moving disk. Since only D moves, D is the only disk which may become unpierced. Then P together with one additional point that pierces D is a piercing set after the movement. Hence, letting P' be a minimum piercing set after the movement of D, |P'| ≤ |P| + 1, and ||P| − |P'|| ≤ 1.
In the M-Setup algorithm, instead of choosing a disk with respect to the ordering given by a direction n, we select arbitrary unpierced disks as piercing disks in each step, and then pierce the neighborhood of each selected disk D using the points in PN(D). By repeating this procedure O(|P|) times, we will generate a piercing set for D. Since we now use N points to pierce the neighborhood of each selected piercing disk, the approximation factor is roughly doubled compared to that of the A-algorithm. However, this small degradation in the approximation factor pays for an optimal update strategy, as will be shown later.
In order to implement the above idea in a distributed fashion, we repeat the following procedure. Each disk D first checks if there are any piercing disks in its neighborhood. If so, then D marks itself as pierced. Otherwise, each unpierced disk tries to become a piercing disk itself. In order to guarantee that only one disk becomes a piercing disk in an unpierced disk's neighborhood (this is a key property for proving the approximation factor of this algorithm), a mechanism such as "lowest labeled neighbor wins" (assuming that each disk has a unique identification label) is required. Note that, unlike the A-algorithm, in the M-Setup algorithm disks do not need to know the disks' coordinates (since no comparisons of the q(D) · n values are required), which may be desirable in an ad-hoc network scenario. The proof of Theorem 2 is analogous to that of Theorem 1, and is therefore omitted.
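The election procedure above can be sketched as a sequential simulation over the intersection graph of the disks; the explicit neighbor map, the integer labels, and the round loop are our assumptions (the actual algorithm runs at the disks themselves, via broadcasts).

```python
def m_setup(neighbors):
    """M-Setup sketch. `neighbors[v]` is the set of labels of disks
    intersecting disk v (including v itself). A disk with a piercing
    neighbor marks itself pierced; otherwise the lowest-labeled
    unpierced disk in a neighborhood elects itself a piercing disk."""
    state = {v: 'unpierced' for v in neighbors}
    while any(s == 'unpierced' for s in state.values()):
        for v in sorted(v for v, s in state.items() if s == 'unpierced'):
            if state[v] != 'unpierced':
                continue  # resolved earlier in this round
            if any(state[u] == 'piercing' for u in neighbors[v]):
                state[v] = 'pierced'
            elif all(v <= u or state[u] != 'unpierced' for u in neighbors[v]):
                state[v] = 'piercing'  # 'lowest labeled neighbor wins'
                for u in neighbors[v]:
                    if state[u] == 'unpierced':
                        state[u] = 'pierced'
    return sorted(v for v, s in state.items() if s == 'piercing')
```

On a chain of five disks where consecutive disks intersect, the procedure elects disks 1, 3 and 5; no two elected piercing disks are neighbors, which is the property the approximation argument relies on.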
Theorem 2 The M-Setup algorithm generates a piercing set of cardinality within a factor of N of |P| in O(|P|) time.
As disks start moving in space, each disk needs to be able to trigger an update procedure whenever an update is necessary. To facilitate the following discussion, we call a disk that is not a piercing disk a normal disk. When a disk moves, the following events may make the current piercing set invalid and trigger an update: (i) the boundaries of two piercing disks D and E meet (thus D may become a redundant piercing disk); (ii) the boundaries of one piercing disk D and some normal disk D' pierced by D separate (thus at least one of the disks becomes unpierced). An M-Update procedure is initiated at disk D in events of type (i), or at disks D and D' for events of type (ii). The M-Update procedure can be divided into two phases: In the first phase, we mark some of the disks as now being unpierced; in the second phase, we select piercing disks for those unpierced disks. The second phase is executed by a local call to M-Setup initiated at each unpierced disk.
The details of the M-Update procedure are as follows. If we have an event of type (i), M-Update will degrade disk D to a normal disk and unpierce all disks that were currently pierced by D (including D itself). Otherwise, if case (ii) applies, M-Update will simply unpierce disk D'. Each node that is marked unpierced by the M-Update procedure will invoke M-Setup locally. The M-Setup procedure invoked at an unpierced disk F will first check if any of its neighbors is a piercing disk. If so, it marks itself pierced. Otherwise, if F has the lowest label among its unpierced neighbors, it elects itself as a piercing disk and marks all its unpierced neighbors as pierced. The M-Setup and M-Update algorithms are shown in Figure 4.
As proven in Theorem 3, all unit-disks will be pierced at the end of the calls to M-Setup, and the approximation factor on the size of the piercing set maintained is still guaranteed to be N.
Theorem 3 The M-Update procedure maintains an N-approximation of the MPS, with an update cost of O(1) per event.
Proof: First we show that the running time of M-Update is constant per event. Assume that at one time only one event occurs. All the disks possibly affected by the event are located in the neighborhood of a disk D. Thus the operation of marking disks as unpierced (in the first phase) takes constant time. Since all nodes that invoked a call to M-Setup were neighbors of a former piercing disk D, it follows that the calls to M-Setup will have at most a constant number, N, of rounds of "lowest labeled neighbor wins" until a valid set of piercing disks is restored. Therefore each of the invoked M-Setup calls also takes constant time. If several events occur at the same time, then the final effect is the same as if a sequence of events occurred in a row, and the update cost per event remains the same.
Now we show that the approximation factor maintained is equal to N. Clearly the resulting piercing
set is determined by the collection of selected piercing unit-disks. We will show that the updated collection
of piercing disks produced by the M-Update procedure could have been the initial collection of piercing
disks produced by the M-Setup algorithm (for a given ordering of the labels of the disks), thus proving
the claimed approximation factor. Let E be the collection of selected piercing
unit-disks before the call to M-Update is invoked, and assume E is an N-approximation of the MPS. Let E′ be the
collection of selected piercing unit-disks after the call to M-Update is completed at all nodes (which may
involve calling the M-Setup algorithm locally). One of the following four cases may occur:
Case 1. A normal unit-disk D′ moves and, after the movement, it is still pierced by some piercing unit-disk
in E. In this case, the M-Update procedure never invokes M-Setup at a node, and E′ = E. Since we
still need at least one piercing point to pierce each of the selected piercing disks (no two piercing disks
overlap) and since E was an N-approximation of the MPS, the approximation factor still holds.
Case 2. A normal unit-disk D′ moves and, after the movement, D′ is no longer pierced by a piercing
unit-disk in E. In this case, the M-Setup procedure invoked by the call to M-Update will upgrade D′ to
a piercing disk. Thus E′ = E ∪ {D′}.
We prove the bound on the size of the piercing set maintained by showing that E′ could have been
obtained by a general call to the M-Setup algorithm on the current configuration (placement in space) of
the disks if all disks were currently unpierced, for a given assignment of labels to the disks. Suppose that
the labels of the disks in E′ are smaller than the labels of all other disks in D, and that label(D1) < label(D2) < · · · < label(Dm) < label(D′), where E = {D1, . . . , Dm}.
Thus, on or before step i ≤ m, disk Di will be selected by M-Setup to become
a piercing disk (since all the Di's were piercing disks in E initially, and no two piercing disks intersect).
After all disks Di are selected, only disk D′ is not pierced. Thus M-Setup must select D′ to be
a piercing disk. Hence E′ is obtained, proving the N-approximation factor on the cardinality of the set
of piercing points produced.
Case 3. A piercing unit-disk D moves and, after the movement, D is pierced by some other piercing
unit-disk in E. The M-Update will degrade D to a normal disk and
unpierce all unit-disks previously pierced by D. The M-Update procedure then invokes local calls to
M-Setup at all unpierced disks. For each unit-disk D′ previously pierced by D, M-Setup will first check
if there is another piercing disk that pierces D′. If so, D′ will be marked pierced. Otherwise, if there
are neighbors of D which still remain unpierced, then the M-Setup algorithm will upgrade some normal
disks to piercing disks. Let E″ be the collection of those upgraded piercing disks.
Then we have E′ = (E \ {D}) ∪ E″ as the new set of piercing disks. As in Case 2, by an appropriate
ordering of the labels, smaller for the disks in E′ than for the disks in E not in E′, the M-Setup
algorithm, when applied to the current configuration of the disks in D, assuming all disks are unpierced
at the start, will produce E′ as the resulting set of piercing disks. Thus the N-approximation factor follows.
Case 4. A piercing unit-disk D moves and, after the movement, D is not pierced by any other piercing
unit-disk. This case is essentially the same as Case 3, but for the fact that we do not degrade D to a normal
disk.
A simple extension of the M-algorithm provides a polylog approximation algorithm for the nonuniform
case. If the collection contains disks of various radii, then we can guarantee an N-approximation if at
each step we find the unpierced disk of smallest radius in the collection and pierce all of its neighborhood.
However, we cannot guarantee having O(1) update cost in this case. Without loss of generality, assume
that the minimum radius of a disk is equal to 1. If the largest disk radius is bounded by a polynomial in n,
then we have the following corollary:
Corollary 1 By grouping the disks into O(log n) classes such that each class contains disks of radii in
[2^i, 2^{i+1}), we have an O(log n) approximation for the MPS problem on nonuniform disks with distributed
update cost of O(1).
Proof: In each class, as we show below, N² points are enough to pierce an arbitrary neighborhood.
Since we have O(log n) classes, and the piercing set for each class is an N²-approximation of the overall
minimum piercing set, the approximation factor is bounded by O(log n). Once a disk moves, it only
affects the piercing set selected for one class, thus the update cost is still constant. We now show that
N² points are in fact enough for covering a disk of diameter 2^{i+2}, using disks of diameter in [2^i, 2^{i+1}). In
the worst case, we need to cover a region of diameter 2^{i+2} with disks of diameter 2^i. We can do this in
two phases. First we cover the region using N disks of diameter 2^{i+1}. Then for each disk D of diameter
2^{i+1}, we cover D using N disks of diameter 2^i.
6 Applications to Clustering in Mobile Networks
For the ad-hoc network scenario described in the introduction, where all nodes have equal range of
communication, the algorithms proposed for the mobile piercing set problem can be directly applied in
order to obtain a one-hop clustering of the network. A clustering of a network G is a partition of the
nodes of G into subsets (clusters) Ci, where for each Ci we elect a node v ∈ Ci as its clusterhead. A
one-hop clustering of G is a clustering of G such that every node in the network can communicate in
one hop with the clusterhead of the cluster it belongs to. We can view the network G as a collection of
unit-disks in ℝ² (resp., ℝ³) under the L₂ norm (as discussed in the introduction).
The algorithm in Section 5 can be used to obtain an almost optimal (with respect to the number of clusters)
one-hop clustering of a wireless network where all nodes have equal communication range. We have that
|P|/7 (resp., |P|/21), for P a minimum piercing set, is a lower bound on the minimum number of 1-hop
clusters (and therefore on the number of selected clusterheads) needed to cover the entire network, since
we need at least one clusterhead for each neighborhood of a piercing disk (the clusterhead centered at
the center of the respective piercing disk can communicate with all disks in the neighborhood), and since
we use at most seven (resp., 21) piercing points for each of these neighborhoods in a minimum piercing
set in ℝ² (resp., ℝ³). The number of piercing disks selected by the algorithm in Section 5 is at most |P|.
Since each of these piercing disks D corresponds uniquely to a one-hop cluster C in the network (given
by all the disks pierced by D), and since the union of all these clusters covers the entire network, we have
that the number of clusters is at most |P|, which is a seven-approximation (resp., 21-approximation) on
the minimum number of one-hop clusters needed in ℝ² (resp., ℝ³). This algorithm is also suitable for
maintaining such an optimal structure as nodes start moving in space, with optimal update costs. The
algorithm tends to keep the number of changes in the set of selected clusterheads low.
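The disk-to-cluster correspondence used above can be sketched as follows; the graph representation and the minimum-id tie-break are our own illustrative choices, not the paper's:

```python
# Hypothetical sketch: given clusterheads that dominate the unit-disk graph
# (every node is a head or adjacent to one), assign each node a one-hop head.

def one_hop_clustering(nodes, adj, heads):
    """adj: dict node -> set of one-hop neighbors; heads: chosen clusterheads.
    Returns dict node -> its clusterhead (deterministic min-id tie-break)."""
    cluster = {}
    for v in nodes:
        cluster[v] = v if v in heads else min(adj[v] & heads)
    return cluster
```

Every node ends up one hop from its clusterhead, so the number of clusters equals the number of heads, as in the approximation argument above.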
In fact, the algorithm presented in Section 5, when translated to a clustering algorithm on ad-hoc
networks, is essentially the same as the Least Cluster Change (LCC) algorithm presented by Chiang et
al. [7]. Therefore, in this paper we provide a theoretical analysis of the performance of this popular
clustering algorithm, validating the simulation results that showed that the clusters maintained by this
algorithm are relatively stable. More specifically, we have proved that this algorithm sustains a seven-approximation
on the number of one-hop clusters maintained, while incurring optimal setup and update
costs.
A closer look at the lowest-id algorithm, investigated by Gerla and Tsai in [15], shows that this
algorithm corresponds to several applications of the M-Setup procedure of Section 5. Every time a disk
becomes unpierced, or two piercing disks intersect, the lowest-id algorithm starts updating the clustering
cover maintained in a fashion that may correspond to an application of the M-Setup algorithm on the
current configuration of the disks if all disks were unpierced; in the worst case, the lowest-id algorithm
may generate a "cascading effect" which corresponds to an application of the M-Setup algorithm on a
collection of all unpierced disks, if the disk labels are given by the node ids. Thus the setup and the
worst-case update costs of the lowest-id algorithm are both O(|P|), and the approximation factor on the number
of clusters maintained is equal to seven and 21, for ℝ² and ℝ³ respectively.
7 Future work
There are many natural extensions of the work in this paper. We would like to extend the one-hop
clustering structure to a full network clustering hierarchy. One idea would be to apply the same algorithm
presented to construct O(log n) clustering covers of the network: clustering i would be obtained by
assuming that all disks have radius equal to 2^i, for i = 0, 1, . . . , O(log n). One problem with
this strategy is that by artificially increasing the communication ranges of the nodes in the network (radii
of the disks), a resulting cluster in the hierarchy may not even be connected. Other directions for future
work are (i) to develop constant approximation algorithms for piercing a collection of disks of different
radii; (ii) to extend any results on nonuniform radius disks to ad-hoc network clustering — note that if
we have nonuniform radius disks, we can no longer guarantee symmetric communication between nodes
in the network; and (iii) to determine the exact neighborhood piercing number for the L₂ norm in three- (or
higher) dimensional spaces.
Acknowledgment
We would like to express our thanks to Martin Ziegler for valuable discussions on estimating N(3; 2).
--R
Exact and approximation algorithms for clustering.
Distributed and mobility-adaptive clustering for multimedia support in multi-hop wireless networks
Distributed clustering for ad-hoc networks
Data structures for mobile data.
Mobile facility location.
Routing in clustered multihop
Sphere packings
Dynamic data structures for fat objects and their applications.
Optimal algorithms for approximate clustering.
Optimal packing and covering in the plane are NP-complete
Generalized selection and ranking: sorted matrices.
Discrete mobile centers.
Multicluster mobile multimedia radio networks.
Algorithmic Graph Theory.
Covering a set of points in multidimensional space.
Clustering motion.
Approximation schemes for covering and packing problems in image processing and vlsi.
A best possible heuristic for the k-center problem
Approximation algorithms for the mobile piercing set problem with applications to clustering.
The generalized searching over separators strategy to solve some np-hard problems in subexponential time
The slab dividing approach to solve the euclidean p-center problem
Maintenance of a piercing set for intervals with applications.
An optimal approximation algorithm for the rectilinear m-center problem
Adaptive clustering for mobile wireless networks.
A mobility-based framework for adaptive clustering in wireless ad-hoc networks
On the complexity of some common geometric location problems.
Fast stabbing of boxes in high dimensions.
Hierarchically-organized, multihop mobile wireless for quality-of-service support
--TR
A unified approach to approximation algorithms for bottleneck problems
Sphere-packings, lattices, and groups
Optimal algorithms for approximate clustering
generalized selection and ranking: sorted matrices
Approximation algorithms for hitting objects with straight lines
Approximation schemes for covering and packing problems in image processing and VLSI
Rectilinear and polygonal p-piercing and p-center problems
Multicluster, mobile, multimedia radio network
Hierarchically-organized, multihop mobile wireless networks for quality-of-service support
Efficient algorithms for geometric optimization
Data structures for mobile data
Exact and approximation algorithms for clustering
Mobile facility location (extended abstract)
Discrete mobile centers
Dynamic Data Structures for Fat Objects and Their Applications
Maintenance of a Percing Set for Intervals with Applications
Fast Stabbing of Boxes in High Dimensions
Distributed Clustering for Ad Hoc Networks
Clustering Motion
--CTR
Fabian Kuhn, Aaron Zollinger, Ad-hoc networks beyond unit disk graphs, Proceedings of the joint workshop on Foundations of mobile computing, p.69-78, September 19, 2003, San Diego, CA, USA | clustering;mobile ad-hoc networks;approximation algorithms;piercing set;distributed protocols
570970 | The complexity of propositional linear temporal logics in simple cases. | It is well known that model checking and satisfiability for PLTL are PSPACE-complete. By contrast, very little is known about whether there exist some interesting fragments of PLTL with a lower worst-case complexity. Such results would help understand why PLTL model checkers are successfully used in practice. In this article we investigate this issue and consider model checking and satisfiability for all fragments of PLTL obtainable by restricting (1) the temporal connectives allowed, (2) the number of atomic propositions, and (3) the temporal height. 2002 Elsevier Science (USA). | INTRODUCTION
Background. PLTL is the standard linear-time propositional temporal logic used in the specification and
automated verification of reactive systems [MP92, Eme90]. It is well known that model checking and
satisfiability for PLTL are PSPACE-complete [SC85, HR83, Wol83]. This did not deter some research
groups from implementing PLTL model checkers or provers, and using them successfully in practice
[BBC Hol97]. The fundamental question this raises is "what makes PLTL feasible in practice?".
To this question, the common answer starts with the observation that the PSPACE complexity only
applies to the formula part of the problem [LP85], and it is only a worst-case complexity. Then, it is
often argued that the PLTL formulae used in actual practical situations are not very complex, have a
low temporal height
This article is a completed version of [DS98].
S. DEMRI AND PH. SCHNOEBELEN
(number of nested temporal connectives) and are mainly boolean combinations of
simple eventuality, safety, responsiveness, fairness, . . . properties.
Certainly the question calls for a systematic theoretical study, aiming at turning
the above answers into formal theorems and helping understand the issue at hand.
If we consider for example SAT, the famous Boolean Satisfiability problem, there
are current in-depth investigations of tractable subproblems (e.g., [Dal96, FV98]).
Regarding PLTL, we know of no systematic study of this kind in the literature.
This is all the more surprising when considering the wide use of PLTL model
checkers for reactive systems.
Our objectives. In this article, we develop a systematic study, looking for natural subclasses of PLTL
formulae for which complexity decreases. The potential results are (1) a better understanding of what
makes the problem PSPACE-hard, (2) the formal identification of classes of temporal formulae with
lower complexity, called simple cases, and (3) the discovery of more efficient algorithms for such simple
cases. Furthermore, since PLTL is the most basic temporal logic, simple cases for PLTL often have
corollaries for other logics.
As a starting point, we revisit the complexity questions from [SC85] when there is a bound on the
number of propositions and/or on the temporal height of formulae. More precisely, let us write
H1, H2, . . . for an arbitrary set of linear-time combinators among {U, F, X, . . .} and let L^k_n(H1, . . .)
denote the fragment of PLTL restricted to formulae (1) only using combinators H1, . . ., (2) of temporal
height at most k, and (3) with at most n distinct atomic propositions. In this article we measure the
complexity of model checking and satisfiability for all these fragments.
The choice of this starting point is very natural, and it is relevant for our original
motivations:
. For the propositional calculus and for several modal logics (K45, KD45, S5, von Wright's logic of
elsewhere, . . . ), satisfiability becomes linear-time when at most n propositions can be used (see [Hal95,
Dem96]). By contrast, satisfiability for K remains PSPACE-complete even when only one proposition
is allowed. What about PLTL?
. In practical applications, the temporal height often turns out to be at most 2 (or 3 when fairness is
involved) even when the specification is quite large and combines a large number of temporal constraints.
This bounded height is often invoked as a reason why PLTL model checking is feasible in practice. Can
this be made formal?
Our contribution.
1. Our first contribution is an evaluation of the computational complexity of model checking and
satisfiability for all L^k_n(H1, . . .) fragments. A table in Section 8 summarizes this.
2. We also identify new simple cases for which the complexity is lowered (only NP-complete). For
these we give (non-deterministic) algorithms. We think it is worth investigating whether the ideas
underlying these algorithms could help develop deterministic algorithms that perform measurably better
(on the relevant simple cases) than the usual methods. These results also have implications beyond
PLTL: e.g., NP-completeness of PLTL without temporal nesting (Prop. 7.4) leads to a Δ^p_2 model
checking algorithm for CTL+ and FCTL [LMS01].
3. A third contribution is the proof techniques we develop: we show how a few logspace reductions
allow us to compute almost all the complexity measures we need (only a few remaining ones are solved
with ad-hoc methods). These reductions lead to a few rules of thumb (summarized in Section 8) that
can be used as guidelines. Additionally, some of our reductions transform well-known problems (SAT
or QBF) into model checking problems for formulae with a simple structure (e.g., low temporal height)
and can be used in other contexts. The second author used them for very restricted fragments of
CTL+Past [LS00].
We believe that these constructions are interesting in their own right and think that the scarcity of
available proofs and exercises suitable for a classroom framework is unfortunate now that PLTL model
checking is widely taught in computer science curricula.
Related work. It is common to find papers considering extensions of earlier
temporal logics. The search for fragments with lower complexity is less
common (especially works considering model checking). [EES90] investigates
(very restricted) fragments of CTL (a branching-time logic) where satisfiability
is polynomial-time. [KV98] studies particular PLTL formulae for which there is
a linear-sized equivalent CTL formula: one of the aims is to understand when
and why PLTL model checking often behaves computationally well in practice.
[BK98] tries to understand why Mona performs well in practice and isolates a fragment
of WS1S where the usual non-elementary blowup does not occur. [Hal95]
investigates, in a systematic way, the complexity of satisfiability (not model check-
ing) for various multimodal logics when the modal height or the number of atomic
propositions is restricted: in fact PLTL is quite different from the more standard
multimodal logics and we found it behaves differently when syntactic restrictions
are enforced. In [Hem00], the complexity of fragments of modal logics is also
studied by restricting the set of logical (boolean and temporal) operators. These
fragments are mainly relevant for description logics (see, e.g., [DLNN97]).
As far as PLTL is concerned, some complexity results for particular restricted fragments of PLTL can
be found in [EL87, CL93, Spa93, DFR00], but these are not systematic studies sharing our objectives.
proof, based on a general reduction from tiling problems into modal logics, that
satisfiability for L(F, X) is PSPACE-hard. In fact, the same proof (or the proofs
from [Spa93, DFR00]) shows that PSPACE-hardness is already obtained with
temporal height 2.
Finally, there is a special situation with L(F) and L(X). These two very limited
fragments of PLTL actually coincide (semantically) with, respectively, the
modal logics S4.3Dum (also called S4.3.1 or D) [Bul65, Seg71, Gor94] and
NP-completeness of S4.3Dum satisfiability has been first proved
in [ON80] and generalized in [Spa93] to any modal logic extending the modal logic
S4.3. The complexity of L(X) satisfiability is also studied in [SR99].
Plan of the article. Section 2 recalls various definitions we need throughout
the article. Sections 3 and 4 study the complexity of PLTL fragments when the
number of atomic propositions is bounded. Logspace transformations from QBF
into model checking can be found in Section 5 and Section 6. Section 7 studies the
complexity of PLTL fragments when the temporal height is bounded. Section 8
contains concluding remarks and provides a table summarizing the complete picture
we have established about complexity for PLTL fragments.
2. BASIC DEFINITIONS AND RESULTS
Computational complexity. We assume that the reader understands what is meant by complexity classes
such as L (deterministic logspace), NL (non-deterministic logspace), P (polynomial-time), NP and
PSPACE; see e.g. [Pap94]. Given two decision problems P1 and P2, we write P1 ≤L P2 iff there exists a
logspace transformation (many-one reduction) from P1 into P2. In the rest of the article, all the
reductions are logspace, and by "C-hardness" we mean "logspace hardness in the complexity class C".
Temporal logic. We follow notations and definitions from [Eme90]: PLTL is a propositional linear-time
temporal logic based on a countably infinite set Prop of propositional variables, the classical boolean
connectives, and the temporal operators X (next), U (until), F (sometimes). The set of formulae is
defined in the standard way, with the remaining boolean connectives and G (always) introduced as
abbreviations with their standard meaning. We let |φ| denote the length (or size) of the string φ,
assuming a reasonably succinct encoding.
Following the usual notations (see, e.g., [SC85, Eme90]), we let L(H1, H2, . . .) denote the fragment of
PLTL for which only the temporal operators H1, H2, . . . are allowed 3. For instance L(U) is "PLTL
without X", as used in [Lam83]. Prop(φ) denotes the set of propositional variables occurring in φ. The
temporal height of φ, written th(φ), is the maximum number of nested temporal operators in φ. We
write L^k_n(H1, . . .) to denote the fragment of L(H1, . . .) where at most n ≥ 1 propositions are used,
and at most temporal height k ≥ 0 is allowed. We write nothing for n and/or k (or we use ∞) when
no bound is imposed: L(H1, . . .) is L^∞_∞(H1, . . .). For example, for φ given as F(A ∧ XB), we have
th(φ) = 2 and Prop(φ) = {A, B}, so φ belongs to L^2_2(F, X).
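Temporal height is easy to compute by structural recursion; the tuple-based formula encoding here is our own, not the paper's:

```python
# th(f): maximum nesting of temporal operators, with formulae encoded as
# tuples: ('p', name), ('not', f), ('and', f, g), ('X', f), ('F', f), ('U', f, g).

def temporal_height(f):
    op = f[0]
    if op == 'p':
        return 0
    if op == 'not':
        return temporal_height(f[1])
    if op in ('and', 'or'):
        return max(temporal_height(f[1]), temporal_height(f[2]))
    if op in ('X', 'F', 'G'):
        return 1 + temporal_height(f[1])
    if op == 'U':
        return 1 + max(temporal_height(f[1]), temporal_height(f[2]))
    raise ValueError('unknown connective: %r' % op)
```

For instance, F(A ∧ XB) has temporal height 2, while a boolean combination of propositions has height 0.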
Flat Until. We say a formula φ, of the form ψUψ′, uses flat Until when
the left-hand side, ψ, does not contain any temporal combinator (i.e., ψ is a boolean
combination of propositional variables), and we write ψU⁻ψ′ when we want to
stress that this occurrence of U is flat. E.g., we sometimes write (AU⁻B)UC for
(AUB)UC.
To the best of our knowledge, Dams was the first to explicitly isolate and name
this restricted use of Until 4 and prove that U⁻ is less expressive than U [Dam99].
He argued that flat Until is often sufficiently expressive in practice, and hoped
model checking and satisfiability would be simpler for U⁻ than for U. In the
following, we treat U⁻ as if it were one more PLTL combinator, more expressive
than F but less than U.
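Flatness of an Until occurrence is a purely syntactic check on its left-hand side; the tuple encoding below is our own:

```python
# An occurrence psi U psi' is flat when psi contains no temporal combinator.

def temporal_free(f):
    op = f[0]
    if op == 'p':
        return True
    if op == 'not':
        return temporal_free(f[1])
    if op in ('and', 'or'):
        return temporal_free(f[1]) and temporal_free(f[2])
    return False                      # 'X', 'F', 'G', 'U', ... are temporal

def is_flat_until(f):
    return f[0] == 'U' and temporal_free(f[1])
```

Note that flatness restricts only the left argument: (A ∧ ¬B) U⁻ (FC) is flat even though its right-hand side is temporal.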
Semantics. A linear-time structure (also called a model) is a pair (S, π) of an ω-sequence s0, s1, . . . of
states, with a mapping π labeling each state si with the set of propositions that hold in si. We often only
write S for a structure, and use the fact that a structure S can be viewed as an infinite string of subsets
of Prop. Let S be a structure, i ∈ N a position, and φ a formula. The satisfiability relation |= is
inductively defined as follows (we omit the usual conditions for the propositional connectives):
. S, i |= A iff A ∈ π(si);
. S, i |= Xφ iff S, i + 1 |= φ;
. S, i |= Fφ iff there is a j ≥ i such that S, j |= φ;
. S, i |= φUψ iff there is a j ≥ i such that S, j |= ψ and, for all i ≤ k < j, S, k |= φ.
We write S |= φ for S, 0 |= φ.
3 Negations are allowed. For instance, L(F) and L(G) denote the same fragment.
4 But flat fragments of temporal logics have been used in many places, e.g. [MC85, DG99, CC00].
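These clauses can be evaluated directly on ultimately periodic models, represented as a finite list of labelings whose last position loops back to an index `loop`; this lasso representation and the tuple formula syntax are our own sketch, not the paper's:

```python
# sat(model, i, f): do the satisfaction clauses above hold at position i?
# model = (states, loop): states is a list of label sets; the position after
# len(states)-1 is `loop`. Formulae: ('p', A), ('not', f), ('and', f, g),
# ('X', f), ('F', f), ('U', f, g).

def sat(model, i, f):
    states, loop = model
    succ = lambda j: j + 1 if j + 1 < len(states) else loop
    op = f[0]
    if op == 'p':
        return f[1] in states[i]
    if op == 'not':
        return not sat(model, i, f[1])
    if op == 'and':
        return sat(model, i, f[1]) and sat(model, i, f[2])
    if op == 'X':
        return sat(model, succ(i), f[1])
    if op in ('F', 'U'):              # F g behaves like (true U g)
        phi = f[1] if op == 'U' else None
        psi = f[-1]
        j, seen = i, set()
        while j not in seen:          # each reachable position visited once
            seen.add(j)
            if sat(model, j, psi):
                return True
            if phi is not None and not sat(model, j, phi):
                return False
            j = succ(j)
        return False
    raise ValueError(op)
```

The F/U loop terminates because a lasso has finitely many reachable positions, each visited once.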
Satisfiability. We say that a formula φ is satisfiable iff S |= φ for some S.
The satisfiability problem for a fragment L(. . .), written SAT(L(. . .)), is the set
of all satisfiable formulae in L(. . .).
Model checking. A Kripke structure T = (N, R, π) is a triple such that N is a non-empty set of states,
R ⊆ N × N is a total 5 next-state relation, and π labels each state s with the (finite) set of propositions
that hold in s. A path in T is an ω-sequence s0, s1, . . . of states of N such that si R si+1 for all i.
(Hence a path in T is a linear-time structure and a linear-time structure is a possibly infinite Kripke
structure where R is a total function.) We follow [Eme90, SC85] and write T, s |= φ when there exists
in T a path S starting from s such that S |= φ 6. The model checking problem for a fragment L(. . .),
written MC(L(. . .)), is the set of all (T, s, φ) such that T, s |= φ where T is finite and φ is in L(. . .).
For the definition of |T|, the size of T, we use a reasonably succinct encoding of N, R and π. In
practice, it is convenient to pretend that |T| is |N| + |R|.
Complexity of PLTL. As far as computational complexity is concerned, we make substantial use of the
already known upper bounds:
Theorem 2.1. [ON80, HR83, SC85]
SAT(L(F)) and MC(L(F)) are NP-complete.
SAT(L(F, X)), MC(L(F, X)), SAT(L(U)) and MC(L(U)) are PSPACE-complete.
As a consequence, most of our proofs establish lower bounds.
Stuttering equivalence. Two models are equivalent modulo stuttering, written S ≈ S′, iff
they display the same sequence of subsets of Prop when repeated
(consecutive) elements are seen as one element only (see [Lam83, BCG88] for a
5 Only considering Kripke structures with total relations is a common technical simplification. Usually
it has no impact on the complexity of temporal logic problems. However the "total R" assumption
implies that any two states satisfy the same temporal formulae written without propositions, a fragment
for which satisfiability is trivial. In "non-total R" frameworks there is a branching-time formula that
behaves as a propositional variable. This can impact complexity: satisfiability for the fragment of K
with no propositions is PSPACE-complete in a "non-total R" framework [Hem00], and is in L in a
"total R" framework.
6 This existential formulation is well suited to complexity studies because it makes model checking
closer to satisfiability. It is the dual of the definition used in verification ("all paths from s satisfy φ"),
so that all complexity results for model checking can be easily translated, modulo duality, between the
two formulations.
formal definition). Lamport argued that one should not distinguish between stutter-equivalent models,
and he advocated prohibiting X in high-level specifications, since X is the only PLTL combinator that
can distinguish between stutter-equivalent models:
Theorem 2.2. [Lam83] S ≈ S′ iff S and S′ satisfy the same L(U) formulae.
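On finite prefixes, the stuttering quotient is just run-length collapsing; this helper (ours, not the paper's) makes the definition concrete — a full treatment of infinite models would also normalize the periodic part:

```python
# Collapse consecutive repeated labelings, so that two stutter-equivalent
# prefixes yield the same collapsed sequence.

def destutter(word):
    out = []
    for letter in word:
        if not out or out[-1] != letter:
            out.append(letter)
    return out
```

Two finite prefixes are stutter-equivalent exactly when their `destutter` images coincide.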
3. BOUNDING THE NUMBER OF ATOMIC PROPOSITIONS
In this section we evaluate the complexity of satisfiability and model checking
when the number of propositions is bounded, i.e., for fragments Ln(H1, . . .).
When the number of propositions is bounded, satisfiability can be reduced to model checking:
Proposition 3.1. Let H1, . . . be a non-empty set of PLTL temporal combinators. Then for any n ∈ N,
SAT(Ln(H1, . . .)) ≤L MC(Ln(H1, . . .)).
Proof. Take φ ∈ Ln(H1, . . .) such that Prop(φ) ⊆ {A1, . . . , An}. Let T = (N, R, π) be the Kripke
structure where N is the set of all 2^n subsets of {A1, . . . , An}, R relates any two states, and for all
s ∈ N, s is its own label. One can see that φ is satisfiable iff there is an s ∈ N s.t. T, s |= φ. For a
many-one reduction, we pick any s0 ∈ N and use an instance (T, s0, φ′) in which φ′ prefixes φ with one
of the available temporal combinators, so that the labeling of s0 itself is irrelevant. The reduction is
logspace since n, and then |T|, are constants.
Prop. 3.1 is used extensively in the rest of the article. Note that the reduction does not work for an
empty set of combinators, as could be expected since SAT(L()) is NP-complete while MC(L()) amounts
to evaluating a boolean expression and is in L [Lyn77]. Also, Prop. 3.1 holds when n is bounded and
should not be confused with the reductions from model checking into satisfiability where one uses
additional propositions to encode the structure of T into a temporal formula (used in, e.g., [SC85,
Eme90]).
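The structure T used in this proof — all 2^n labelings as states, a complete transition relation, each state labeled by itself — can be built directly (a sketch in our own representation):

```python
# Build the fixed Kripke structure of Prop. 3.1: states are the 2^n subsets of
# the propositions, R relates any two states, and each state is its own label.

def complete_structure(props):
    states = [frozenset(p for i, p in enumerate(props) if mask >> i & 1)
              for mask in range(2 ** len(props))]
    R = {(s, t) for s in states for t in states}
    labels = {s: s for s in states}
    return states, R, labels
```

For fixed n this structure is a constant, which is what makes the reduction logspace.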
3.1. PSPACE-hardness with few propositions
The next two propositions show that, for model checking problems, n propositional
variables can be encoded into only two if U is allowed, and into only one
if F and X are allowed.
Proposition 3.2. MC(L(H1, . . .)) ≤L MC(L2(U)) for any set H1, . . . of
temporal operators.
Proof. With a Kripke structure T = (N, R, π) and a formula φ ∈ L(H1, . . .) such that
Prop(φ) ⊆ {P1, . . . , Pn}, we associate a Kripke structure Dn(T) whose states are pairs ⟨s, i⟩: each
state s of T is expanded into a chain of states, labeled with {A}, {A, B} or {}, that simulates the
transitions sRs′ of T.
Fig. 1 displays an example. Here alternations between A and ¬A in Dn(T) define visible "slots",
the ⟨s, 2j + 2⟩'s, that are used to encode the truth value of the propositional variables: B in the i-th
slot encodes that Pi holds.
FIG. 1. T and D3(T) - An example
Let formulae AtD and Alt0, Alt1, . . . be given by an inductive definition: AtD is satisfied in Dn(T)
at all ⟨s, 1⟩'s and only there, while Altk expresses the fact that there remain k "A-¬A" alternations
before the next state satisfying AtD.
We now translate formulae over T into formulae over Dn(T) via an inductive definition Dn(·). This
gives the reduction we need since, for any s ∈ N, T, s |= φ iff Dn(T), ⟨s, 1⟩ |= Dn(φ).
Clearly the construction of Dn(T) can be done in space O(log(|T| + |φ|)) and the
construction of Dn(φ) can be done in space O(log |φ|).
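Purely as an illustration of the slot idea — the exact layout and labels of Dn(T) are the paper's, and the chain below is a hypothetical variant of ours, not the precise construction:

```python
# Expand one state s of T into a chain over {A, B}: a marker prefix, then one
# slot per proposition P_i, with B present in the i-th slot iff P_i holds at s.

def expand_state(label, props):
    """label: set of propositions true at s; props: [P1, ..., Pn]."""
    chain = [{'A'}, {'A', 'B'}]                  # hypothetical start-of-s marker
    for p in props:
        chain.append(set())                      # a ~A state separates slots
        chain.append({'A', 'B'} if p in label else {'A'})
    return chain
```

The point is only that n propositions can be read off a word over two propositions by counting A-alternations, which is what the Altk formulae do.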
Observe that Dn(φ) ∈ L2(U⁻) when φ ∈ L(F, X). Combining with Theorem 2.1 we obtain
Corollary 3.1. MC(L2(U⁻)) is PSPACE-complete.
Proposition 3.3. MC(L(H1, . . .)) ≤L MC(L1(X, H1, . . .)) for any set H1, . . . of PLTL temporal
operators.
Proof. With a Kripke structure T = (N, R, π) and a formula φ ∈ L(H1, . . .) such that
Prop(φ) ⊆ {P1, . . . , Pn}, we associate a Kripke structure Cn(T) whose states are pairs ⟨s, j⟩ forming,
for each state s of T, a chain labeled with subsets of {A} only.
Fig. 2 displays an example.
FIG. 2. T and C3(T) - An example
The idea is to use ¬A.A (resp. ¬A.¬A) in the i-th slot after an A.A to encode that Pi holds (resp.
does not hold). The A.A is a marker for the beginning of some s, and the ¬A in a ⟨s, 2j + 1⟩ is there
to distinguish slots starting a new s from slots for a Pi. We now translate formulae over T into
formulae over Cn(T) via an inductive definition Cn(·), built with a marker formula AtC over A and
¬A. Clearly, AtC is satisfied in Cn(T) at all ⟨s, 1⟩'s and only there. For any s ∈ N, we have
T, s |= φ iff Cn(T), ⟨s, 1⟩ |= Cn(φ). Finally, the construction of Cn(T) can be done in space
O(log(|T| + |φ|)) and the construction of Cn(φ) can be done in space O(log |φ|).
Combining with Theorem 2.1 we obtain
Corollary 3.2. MC(L 1 (F, X)) is PSPACE-complete.
Similar results exist for satisfiability problems:
Proposition 3.4. For H1, . . . a set of PLTL temporal operators, SAT(Ln(H1, . . .)) ≤L
SAT(L2(U, H1, . . .)) and SAT(Ln(H1, . . .)) ≤L SAT(L1(X, H1, . . .)).
Proof. (1) Let φ ∈ L(H1, . . .) be such that Prop(φ) ⊆ {P1, . . . , Pn}. Let φn be the formula, built
from AtD, A, B, Altn and flat Until, that describes the shape of models that have the form of some
Dn(S). More formally, one can show that for any model S, Dn(S) |= φn, and for any S′ over {A, B},
if S′ |= φn then there exists a (unique) S such that S′ ≈ Dn(S). Then an Ln(H1, . . .) formula φ is
satisfiable iff the L2(U, H1, . . .) formula φn ∧ Dn(φ) is satisfiable. We already know that Dn(φ) can
be built in space O(log |φ|). Moreover, φn can also be built in space O(log |φ|) since we already know
that Altn can be built in space O(log n), which is a fortiori in space O(log |φ|).
(2) Let φ ∈ L(H1, . . .) be such that Prop(φ) ⊆ {P1, . . . , Pn}, and let φ′n be the formula that
describes the shape of models of the form Cn(S): for any model S, Cn(S) |= φ′n, and for any S′ over
{A}, if S′ |= φ′n then there exists a (unique) S such that S′ is (isomorphic to) Cn(S). Then the
Ln(H1, . . .) formula φ is satisfiable iff the L1(X, H1, . . .) formula φ′n ∧ Cn(φ) is satisfiable. We
already know that Cn(φ) can be built in space O(log |φ|). Moreover, φ′n can also be built in space
O(log |φ|) since we need to count until n, which requires space in O(log n). So, computing
φ′n ∧ Cn(φ) requires space in O(log |φ|).
The proof of Proposition 3.4 also shows that
Combining with Theorem 2.1, we get
Corollary 3.3. SAT (L 2 (U - are PSPACE-complete.
3.2. NP-hardness with few propositions
We now show that MC(L 2 are NP-hard using Prop. 3.1
and
Proposition 3.5. SAT (L 0
Proof. We consider structures on P Say S has n A-alternations
iff there exist positions
that k. Hence S contains an alternation of
2n consecutive non-empty segments: A holds in the first and all odd-numbered
segments, and does not hold in the even-numbered segments. Then there is an infinite
suffix where A holds continually.
Let us define the following formulae:
. # 0
. #
. #
One can check that # n [# 0 that a structure has n # A-alternations
for some n # n. Thus
is a formula with size in O(n), stating that the model S has exactly n A-alternations.
An A-alternation is a segment composed of an A-segment followed by an -A-
segment. For l # {A, -A}, an l-segment is a (non-empty) finite sequence of states
where l holds true. Generally, # n [#] expresses that there is n # n such that #
holds at some state belonging to the n # th A-alternation in which -A also holds.
When S has exactly n A-alternations, we can view it as the encoding of a
valuation v S of {P 1 , . , P n } by saying that P k holds iff both B and -B can
be found in the k-th -A-segment in S. Formally, v S
# iff there exist
We now encode a propositional formula # over {P 1 , . , P n } into f n (#), an
L(F)-formula with
and the obvious homomorphic rules for # and -. One can see that, for S with n
A-alternations, v S |= # iff S |= f n (#), so that # is satisfiable iff f n (#) # # n is
satisfiable. The proof is completed by checking that f n (#) # # n is an L #
2 (F)-formula that can
be computed from # in space O(log |#|).
The transformation from 3SAT into MC(L(F)) in [SC85] only uses formulae
of temporal height 1. Here we provide a logspace transformation from 3SAT into
using only formulae with two different propositional variables.
Proposition 3.6. 3SAT #L MC(L # 2 (F)).
Proof. Consider an instance I of 3SAT. I is a conjunction V m
i=1 C i of clauses,
where each C i is some disjunction W 3
j=1 l i,j of literals, where each l i,j is a propositional
variable x r(i,j) or the negation -x r(i,j) of a propositional variable from
X = {x 1 , . . . , x n }. W.l.o.g. we assume that n # 3 · m and that, for any i, the
r(i, j) are all distinct.
We consider the structure T n labeled with propositions A and B as in Figure 3.
Observe that T n only depends on n, the number of different boolean variables
occuring in I.
FIG. 3. The structure T n
With a path S from s 0 , we associate a valuation v S
=#). Symmetrically, any valuation
v is v S for a unique path S in T n .
For
, an L(F) formula stating that v S does not satisfy
clause C i . This is done in several steps: define
and, for
l i,j is x r or its negation.
Because it involves alternations between -(A#B) and A#B, # r
i cannot be satisfied
starting from s n-r # for r # > r. Thus, if S |= # 0
i , the rth positive occurrence of A
or B or A # B is necessarily satisfied in t r or u r . Hence
Now define # I
I is satisfiable. Finally, both
I can be computed in space O(log |I|).
Corollary 3.4. MC(L 2 are NP-complete.
4. FRAGMENTS WITH ONLY ONE PROPOSITION
In this section, we give a polynomial-time algorithm for L #
1 (U) that relies on
linear-sized Büchi automata.
Recall that the standard approach for PLTL satisfiability and model checking
computes, for a given PLTL formula #, a Büchi automaton 7
A# recognizing
exactly the models of # (the alphabet of the Büchi automaton is the set of possible
valuations for the propositional variables from #).
Satisfiability of # is non-emptiness of A# . Checking whether a path in some T
satisfies # is done by computing a synchronous product of T and A# and checking
for non-emptiness of the resulting system (a larger Büchi automaton). This method
was first presented in [WVS83], where a first algorithm for computing A# was
given.
The complexity of this approach comes from the fact that A# can have exponential
size. Indeed, once we have A# the rest is easy:
Lemma 4.1. [Var94] It is possible, given a Büchi automaton A recognizing
the models of formula #, and a Kripke structure T , to say in non deterministic
space O(log |T | + log |A|) whether there is a computation in T accepted by A.
From these remarks, it easily follows that fragments of PLTL will have low
complexity if the corresponding A# are small.
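To make the automata-based approach concrete, here is a minimal sketch (not the paper's construction; the data-structure conventions are our own): the Kripke structure is a successor map plus a labeling, the Büchi automaton reads the label of the current state, and non-emptiness of the synchronous product amounts to finding an accepting product state that is reachable from the start and lies on a cycle.

```python
def product_nonempty(kripke_succ, kripke_label, s0, aut_delta, q0, accepting):
    """Does some infinite path of the Kripke structure from s0 yield a word
    accepted by the Buchi automaton?  Build the synchronous product and look
    for an accepting product state reachable from the start and on a cycle."""
    def succs(st):
        s, q = st
        for q2 in aut_delta.get((q, kripke_label[s]), ()):
            for s2 in kripke_succ[s]:
                yield (s2, q2)

    def reachable(start):  # product states reachable in one step or more
        seen, stack = set(), [start]
        while stack:
            for y in succs(stack.pop()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    reach = reachable((s0, q0)) | {(s0, q0)}
    return any(st[1] in accepting and st in reachable(st) for st in reach)
```

For example, with the one-state structure s0 -> s0 labeled A, an automaton accepting "A forever" gives a non-empty product, while an automaton that must eventually read a non-A state gives an empty one.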
4.1. The fragment L # 1 (U)
Here we consider a single proposition: P = {A}. Every linear model is
equivalent, modulo stuttering, to one of the following: for n # N
-A # , S ndef
3 and S n
6 do not depend on n.
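The underlying observation can be sketched as follows (our own encoding, not the paper's definitions, which are damaged above): stutter-reducing a one-proposition word leaves a strictly alternating word, so an ultimately periodic model u.v^omega is captured, up to stuttering, by its initial value together with either its finite number of value changes or a single "alternates forever" mark.

```python
def stutter_class(prefix, loop):
    """Classify the ultimately periodic one-proposition model u.v^omega up
    to stuttering.  Dropping consecutive repetitions leaves a strictly
    alternating word, so the class is the initial value plus either the
    finite number of value changes, or the mark 'alt' when the loop keeps
    changing value forever."""
    first = (prefix + loop)[0]
    if len(set(loop)) == 1:
        word = prefix + [loop[0]]        # constant tail: fold it in
        changes = sum(a != b for a, b in zip(word, word[1:]))
        return (first, changes)
    return (first, 'alt')                # infinitely many alternations
```

Note that only two classes carry the 'alt' mark (one per initial value), which is consistent with the remark that two of the six families do not depend on n.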
Now a satisfiable L #
Lemma 4.2. For any
# iff S n
7 or a Muller automaton, or an alternating Büchi automaton, or . . .
Proof. By structural induction on # and using the fact that the first suffix of a S n
is a S n #
e.g. the first suffix of S n
1 is S n-1
and the
first suffix of S n
2 is S n
1 .
Recognizing the S n
i 's is easy:
Lemma 4.3. For any 1 # i # 6 and n # N, there exists a Büchi automaton
A =n
i and a Büchi automaton A #n i s.t. A =n
i accepts a model S iff S # S n i ,
and A #n i accepts a model S iff S # S m i for some m # n.
Furthermore, the A =n
i 's and A #n i 's have O(n) states and can be generated uniformly
using log n space.
Proof. We only show A =2
3 and A =n
3 as examples (see Fig. 4).
FIG. 4. Büchi automata for Lemma 4.3
Combining lemmas 4.1 and 4.3, we see that the problem of deciding, given T
with s 0 a state, given n # N and 1 # i # 6, whether there is a path S in T that starts
from s 0 and s.t. S # S n
i , can be solved in non deterministic space O(log(n · |T |))
or in deterministic time O(n · |T |). Similarly, the problem of deciding whether
there is a path S and a m # n s.t. S # S m
i , can be solved with the same complexity.
Theorem 4.1. Model checking for L #
1 (U) is in P.
Proof. Consider a Kripke structure #) and some state s 0 # N .
If there is a path S from s 0 satisfying # L #
and some
Conversely, if S n
and there is a path
i starting from s 0 , then T , s 0 |= #.
It is possible to check whether T contains such a path in polynomial-time: We
consider all
seen in time O(k.|#|), we check
in time O(k.|T |), whether, from s 0 , T admits a path S # S k
i . We also consider all
know that S k+m
so that it is correct to check whether there is a m such that T admits a path S #
. Because k # |#|, the complete algorithm only needs O(|T | · |#| 2 ) time.
Remark 4.1. We do not know whether MC(L #
1 (U)) is P-hard. We only know
it is NL-hard 8 . The same open question applies to SAT (L #
1 (U)).
Looking at the algorithm used in the proof of Theorem 4.1, it appears 9 that this
open question is linked to an important open problem that remained unnoticed for
many years:
Open Problem 4.1. What is the complexity of model checking a path?
Here a "path" is a finitely presented linear-time structure. It can be given by a
deterministic Kripke structure (i.e., where any state has exactly one successor) or
by an #-regular expression u.v # where u and v are finite sequences of valuations.
Model checking a path is clearly in P but it is not known whether it is P-hard or
in NL or somewhere in between.
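That path model checking is in P can at least be seen by direct evaluation; here is a sketch (our own tuple encoding of formulae) for an ultimately periodic path u.v^omega and the L(F, X) connectives, using the fact that only the len(u)+len(v) distinct suffixes matter.

```python
def eval_path(phi, u, v, i=0):
    """Evaluate an L(F, X) formula (tuple-encoded) at position i of the
    ultimately periodic path u.v^omega, where u and v are lists of sets of
    true propositions.  Only the len(u)+len(v) distinct suffixes matter."""
    L = len(u) + len(v)
    succ = i + 1 if i + 1 < L else len(u)      # the loop wraps back into v
    reach = range(min(i, len(u)), L)           # suffixes reachable from i
    op = phi[0]
    if op == 'atom': return phi[1] in (u + v)[i]
    if op == 'not':  return not eval_path(phi[1], u, v, i)
    if op == 'and':  return eval_path(phi[1], u, v, i) and eval_path(phi[2], u, v, i)
    if op == 'X':    return eval_path(phi[1], u, v, succ)
    if op == 'F':    return any(eval_path(phi[1], u, v, j) for j in reach)
    if op == 'G':    return all(eval_path(phi[1], u, v, j) for j in reach)
    raise ValueError(op)
```

Whether this evaluation can be done in NL, or is P-hard, is exactly the open problem above.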
4.2. The fragment L #
1 (X)
Proposition 4.2. SAT (L #
1 (X)) and MC(L #
1 (X)) are NP-complete.
Proof. Satisfiability for L(X) is in NP because, for # L(X) with temporal
height k, it is enough to guess the first k states of a witness S. Model checking
also is in NP for the same reason.
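The guess-the-first-states argument can be made concrete by (exponential) brute force; a sketch with tuple-encoded formulas and our own conventions, using k+1 valuations for a formula of temporal height k:

```python
from itertools import combinations, product

def sat_LX(phi, props):
    """Satisfiability for L(X) (booleans + X only): a formula of temporal
    height k is satisfied, if at all, by some sequence of k+1 valuations,
    so brute-force all (2^|props|)^(k+1) prefixes."""
    def height(p):
        if p[0] == 'atom':
            return 0
        if p[0] == 'X':
            return 1 + height(p[1])
        return max(height(q) for q in p[1:])
    def ev(p, w, i):
        if p[0] == 'atom': return p[1] in w[i]
        if p[0] == 'not':  return not ev(p[1], w, i)
        if p[0] == 'and':  return ev(p[1], w, i) and ev(p[2], w, i)
        if p[0] == 'X':    return ev(p[1], w, i + 1)
        raise ValueError(p[0])
    k = height(phi)
    vals = [frozenset(c) for r in range(len(props) + 1)
            for c in combinations(props, r)]
    return any(ev(phi, w, 0) for w in product(vals, repeat=k + 1))
```

The 3SAT reduction discussed next can then be read as squeezing everything onto one proposition, each P i becoming X i A.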
NP-hardness of SAT (L #
1 (X)) can be shown by a reduction from 3SAT: consider
a boolean formula # with propositional variables P 1 , . , P n and replace the
A's: the resulting L #
is. Then, by Prop. 3.1,
1 (X)) is NP-hard too.
Proposition 4.3. For any k, n < #, SAT (L k
n (U, X)) is in L.
Proof. Here the key observation is that there are only a finite number of
essentially distinct formulae in a given fragment L k
n (U, X). Given n and k, one
can compute once and for all a finite subset J k
n = {# 1 , . . . , # N } such that
8 One easily shows that already MC(L 1
1 ) is NL-hard by a reduction from GAP, the graph accessibility
problem of [Jon75].
9 M. Y. Vardi pointed out the connection to us.
1. any # # L k
n is equivalent to a # i # J k
n (we say # i is the canonical
representative for #);
2. for i #= j, i and j are not equivalent. Then a given # is satisfiable iff its
canonical representative is not the canonical representative of #.
Any J k
n is finite and, more precisely, |J 0
and |J k+1
n | is in 2 2 O(|J k
We assume n and k are fixed and we consider the problem, given #, of computing
its canonical representative (or equivalently its index 1 # i # N ). This can be
done in a compositional way: if # # # i and # # # j then the representative # k of
#U# (say) is the representative of # i U# j , so that we just need to compute once
and for all a finite table t U : (i, j) #-> k, and similarly for the other
operators, temporal or boolean.
Once we have these tables, computing the canonical representative of any #
amounts to evaluating an expression over a fixed finite domain, which can
be done in logspace (see [Lyn77]).
Proposition 4.4. For any k, n < #, MC(L k
n (U, X)) is in NL.
Proof. As in the proof of Prop. 4.3, for # # L k
n (U, X) one computes in logspace
a canonical representative # i
. By Lemma 4.1, checking whether T , s |= # i
can be done in non deterministic space O(log |T | + log |A i |). Since n and k are
fixed, |A i | is a constant, so that MC(L k
n (U, X)) is in
NL.
Since MC(L 1
1 ) is NL-hard (Remark 4.1), we get
Corollary 4.1. For any 1 # k < # and 1 # n < #, for any set H 1 , . of
temporal operators, MC(L k
By contrast, by [Lyn77], MC(L 0
# (U, X)) is in L.
This concludes the study of all fragments with a bounded number of propositions. In the remainder of the article, this bound is removed.
5. FROM QBF TO MC(L(U))
In this section, we offer a logspace transformation from validity of Quantified
Boolean Formulae (QBF) into model checking for L(U) that involves rather simple
constructions of models and formulae. This reduction can be adapted to various
fragments and, apart from the fact that it offers a simple means to get PSPACE-
hardness, we obtain a new master reduction from a well-known logical problem.
As a side-effect, we establish that MC(L 2
is PSPACE-hard, which is not
subsumed by any reduction from the literature.
Consider an instance I of QBF. It has the form
I # Q 1 x 1 Q 2 x 2 . . . Q n x n I 0 ,
where every Q r (1 # r # n) is a universal, #, or existential, #, quantifier. I 0
is a propositional formula without any quantifier. Here we consider w.l.o.g. that
I 0 is a conjunction of clauses, i.e. every l i,j is a propositional variable x r(i,j) or
the negation -x r(i,j) of a propositional variable from
question is to decide whether I is valid or not. Recall that
Lemma 5.1. I is valid iff there exists a non-empty set V #} X of
valuations such that
correctness:
closure: for all v # V , for all r such that Q #, there is a v # V such that
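The statement of Lemma 5.1 is cut off above. Assuming the closure clause ends "there is a v' # V such that v' agrees with v on x 1 , . . . , x r-1 and v'(x r ) #= v(x r )" (our reading, flagged as an assumption), the characterization can be cross-checked by brute force against the textbook recursive definition of QBF validity:

```python
from itertools import product

def valid_rec(quants, matrix, partial=()):
    """Textbook recursive QBF validity.  quants is a string over {'A','E'}
    (universal/existential); matrix maps a full valuation (tuple of bools)
    to a bool."""
    r = len(partial)
    if r == len(quants):
        return matrix(partial)
    branches = [valid_rec(quants, matrix, partial + (b,)) for b in (False, True)]
    return all(branches) if quants[r] == 'A' else any(branches)

def valid_char(quants, matrix):
    """Lemma 5.1-style characterization under the assumed closure clause:
    valid iff some non-empty set V of valuations is correct (every v in V
    satisfies the matrix) and closed (for every v in V and universal r,
    some w in V agrees with v before position r and differs at r).
    Correctness forces V to contain only matrix-satisfying valuations,
    so only subsets of those need to be tried."""
    n = len(quants)
    good = [v for v in product((False, True), repeat=n) if matrix(v)]
    for mask in range(1, 2 ** len(good)):
        V = [v for i, v in enumerate(good) if mask >> i & 1]
        if all(any(w[:r] == v[:r] and w[r] != v[r] for w in V)
               for v in V for r in range(n) if quants[r] == 'A'):
            return True
    return False
```

On small instances the two notions agree, e.g. "forall x1 exists x2 (x1 <-> x2)" is valid while "forall x1 forall x2 (x1 <-> x2)" is not.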
With I we associate the Kripke structure T I as given in Figure 5, using labels
1 , . . .
FIG. 5. The structure T I associated with I # Q 1 x 1 . . . Q n x n I 0
Assume S is an infinite path starting
from s 0 . Between s 0 and s n , it picks a boolean valuation for all variables in X ,
then reaches wm and goes back to some B r -labeled state (1 # r # n) where
(possibly distinct) valuations for x r , x r+1 , . , x n are picked.
In S, at any position lying between a s n and the next wm , we have a notion of
current valuation which associates # or # with any x r depending on the latest u r
or t r node we visited. With S we associate the set V(S) of all valuations that are
current at positions where S visits s n (there are infinitely many such positions).
Now consider some r with Q r = # and assume that whenever S visits s r-1
then it visits both t r and u r before any further visit to s r-1 . In L(U), this can be
Let # clo
clo , then V(S) is closed in the sense
of Lemma 5.1.
Now, whenever S visits a L j
-state, we say it agrees with the current valuation v
if v |= l i,j . This too can be written in L(U), using the fact that the current valuation
for x r cannot be changed without first visiting the B r -state. For
then V(S) is correct in the sense
of Lemma 5.1.
Lemma 5.2. Let # I
I is valid.
Proof. If S |= # I , then V(S) is non-empty, closed and correct for I so that I
is valid. Conversely, if I is valid, there exists a validating V (Lemma 5.1). From V
one can build an infinite path S starting from s 0 such that
from a lexicographical enumeration of V , S is easily constructed so that S |= clo .
Then, to ensure S |= corr , between any visit to s n and to the next wm , S only visits
-states validated by the current valuation v, which is possible because v |= I 0 .
It is worth observing that # I belongs to L 2
I and
# I can be computed from I in logspace, and because th(# I using
Prop. 3.2), we get
Corollary 5.1. QBF #L MC(L 2
Corollary 5.2. MC(L 2
are PSPACE-hard.
6. FROM QBF TO MC(L(F, X))
As in section 5, we consider an instance I # Q
l i,j of
QBF . With I we associate the Kripke structure T #
I given in Figure 6. Here, any
path S starting from s 0 can be seen as an infinite succession of segments of length
Each segment directly yields a valuation for X: they form an
infinite sequence v 1 , v 2 , . (necessarily with repetitions) and we let V(S) denote
the associated set.
FIG. 6. The structure T #
I associated with I # Q 1 x 1 . . . Q n x n I 0
Using F and X, it is easy to state that any segment in S visits the L j
-states
in a way that agrees with the corresponding valuation. For
Now
implies that V(S) is correct in the sense of Lemma 5.1.
There remains to enforce closure of V(S). For this, we require that the valuations
are visited according to the lexicographical ordering, and then cycling.
This means that the successive choices of truth values for universally quantified
propositional variables behave as the successive binary digits of counting modulo
2 n # (assuming there are n # universal quantifiers in Q 1 , . , Q n ). As usual, the
existentially quantified variables are free to vary when an earlier variable varied.
Assume When moving from a valuation v t to its successor v t+1 , we
require that v t remains unchanged iff for some r # > r with Q r # we have
This is written with subformulae of the form
"v(x r ) does not change",
for r # { r | Q r = # }, so that the valuation, restricted to the universally quantified variables,
behaves like counting modulo 2 n # .
Assume now that Q r #. When moving from v t to its successor, v t
not change unless v t #, or equivalently
for the latest r < r # with Q (thanks to our assumption
about counting). Equivalently, this means that if for a universally quantified x r ,
does not change, then for any following existentially quantified x r # , v t
does not change either. By "following" we mean that there is no other # between
Q r and Q r # , i.e. that r # sc(r) with
This behaviour can be written with subformulae
"if v(x r ) does not change"
and, for r # # sc(r),
"then v(x r # ) does not change".
Now we define
I
Lemma 6.1. T #
I , s 0 |= # I iff I is valid.
Proof. If S |= # I then V(S) validates I as we explained. Conversely, if some V
validates I, then, enumerating V in lexicographical order, it is easy to build a S such
that S |= # I .
Since T #
I and # I can be computed from I in logspace (and using
Prop. 3.3), we get
Corollary 6.1. QBF #L MC(L(F,X)) #L MC(L #
Corollary 6.2. MC(L #
7. BOUNDING THE TEMPORAL HEIGHT
In this section we investigate the complexity of satisfiability and model checking
when the temporal height is bounded. From Section 5, we already know that
We first consider ways of reducing the temporal height (sections 7.1 and 7.2).
Then we show how to improve the upper bounds when temporal height is below 2
(sections 7.3 and 7.4).
7.1. Elimination of X for model checking
Assume T is a Kripke structure and k # N. It is possible to partially unfold T
into a Kripke structure T k where a state s (in T k ) codes for a state s 0 in T with the
k next states s 1 , . , s k already chosen. In T k , s is labeled with new propositions
encoding the fact that some s i 's satisfy some A j 's.
Formally, let k # N and P First let P rop k def
defined as the Kripke
structure
.
R and for any j # {1, . , k},
This peculiar unraveling is also called bulldozing (see e.g. [Seg71]). Fig. 7 contains
a simple example. Observe that |T k
| is in O(|T | k+1 ) and T k can be computed in
space O(log(k · |T |)).
FIG. 7. An example of bulldozing: T and T 2 side by side
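The unfolding itself can be sketched as follows (our own data conventions, computing T k explicitly rather than in logspace): states of T k are the length-(k+1) paths of T.

```python
def bulldoze(succ, k):
    """Partial unfolding T^k: states are the paths (s0, ..., sk) of length
    k+1 in T, and (s0, ..., sk) steps to (s1, ..., sk, s') for every
    successor s' of sk.  The number of states is at most |T|^(k+1)."""
    paths = [(s,) for s in succ]
    for _ in range(k):
        paths = [p + (s2,) for p in paths for s2 in succ[p[-1]]]
    return {p: [p[1:] + (s2,) for s2 in succ[p[-1]]] for p in paths}
```

Labeling a state (s 0 , . . . , s k ) with a fresh proposition A i j whenever A j holds at s i is then immediate from the structure's original labeling.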
Say a formula # has inner-nexts if all occurrences of X are in subformulae of
the form XX . . . XA (where A is a propositional variable).
If now # has inner-nexts, with at most k nested X, and if we replace all X i A j in
# by propositions A i
j , we obtain a new formula, denoted # k , such that
T , s |= # iff T k , s |= # k for any state s of T k
starting with s. (1)
Both T k and # k can be computed in space O(log(|T |
Not all formulae have inner-nexts but, using the following equivalences
as left-to-right rewrite-rules, it is possible to translate any PLTL formula into an
equivalent one with inner-nexts. This translation may involve a quadratic blow-up
in size but it does not modify the number of propositional variables or the temporal
height of the formula 10 .
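The list of rewrite rules is omitted above; the standard X-distribution equivalences (X-p = -Xp, X(p # q) = Xp # Xq, X(pUq) = (Xp)U(Xq), XFp = FXp, XGp = GXp) suffice. A sketch of the resulting rewriter, with tuple-encoded formulas and our own conventions, pushing each collected X down onto the atoms:

```python
def push_X(phi, k=0):
    """Rewrite into inner-nexts form by distributing X inwards over every
    connective; k counts the X's collected so far on the way down."""
    op = phi[0]
    if op == 'atom':
        out = phi
        for _ in range(k):
            out = ('X', out)      # deposit the collected X's on the atom
        return out
    if op == 'X':
        return push_X(phi[1], k + 1)
    return (op,) + tuple(push_X(q, k) for q in phi[1:])
```

As the text notes, this can blow the formula up quadratically but changes neither the set of propositions nor the temporal height.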
Corollary 7.1. For any k # N and set H 1 , . of PLTL temporal combinators, MC(L k
Proof. Given # in L k
# (X, .), and some T , we transform # into some equivalent
# with inner-nexts and then evaluate # k on T k .
Corollary 7.2. MC(L k
# (X)) is in L and MC(L k
# (F, X)) is in NP for any
fixed k # 0.
# NP-hard as can be seen from the proof of NP-hardness of
# in [SC85]. Hence for k # 1, MC(L k
7.2. Elimination of X for satisfiability
Elimination of X for satisfiability relies on the same ideas. If # is satisfiable,
then, thanks to (1), # k is. The converse is not true: consider # given as GA#G-XA,
clearly not satisfiable. Here # 1 is GA 0
#G-A 1 which is satisfiable. This is because
if # k is satisfiable, then it may be satisfiable in a model that is not a S k for some
S. But, using an L 2
express the fact that a given model is
a S k , so that
# is satisfiable iff # k
Actually, this approach based on standard renaming techniques can get us further. We write #{# := A} to denote the formula obtained by replacing all occurrences
of # with A inside #. If A does not occur in #, then
# is satisfiable iff #{# := A} # G(A # #) is.
10 These rules may introduce X's in the right-hand side of U - 's but this will be repaired when we
later replace the X i A j 's with the A i
j 's.
By using this repeatedly and systematically, we can remove (by renaming) all
and, (2) there exists at least one occurrence of #
in # that is under the scope of two temporal combinators (or in the left-hand side
of a U). For example, F(AU(FGB # GB)) is replaced by F(AU(FA 1
new #
new
new # GB) in turn replaced by
new # A 1
new
new # GB) #
new # FA 1
new ).
Starting from some #, this repetitive construction eventually halts (when no such #
can be found); the resulting formula has temporal height at most 2, uses flat
until, and is satisfiable iff # is. It can be computed in logspace, so that
Proposition 7.1. For any set H 1 , . of PLTL temporal combinators,
Corollary 7.3. SAT (L 2
are PSPACE-hard.
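A single renaming step of the construction above can be sketched as follows (tuple-encoded formulas; 'iff' stands for the biconditional; the names are ours):

```python
def rename(phi, target, fresh):
    """One renaming step: replace every occurrence of the subformula
    `target` in phi by the fresh atom and conjoin G(fresh <-> target);
    the result is equisatisfiable with phi when `fresh` is really fresh."""
    def sub(p):
        if p == target:
            return ('atom', fresh)
        if p[0] == 'atom':
            return p
        return (p[0],) + tuple(sub(q) for q in p[1:])
    return ('and', sub(phi), ('G', ('iff', ('atom', fresh), target)))
```

For instance, renaming GB inside F(GB) with the fresh atom A1 yields FA1 # G(A1 <-> GB), pulling the nested temporal operator out to height 2.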
7.3. Satisfiability without temporal nesting
We now consider formulae in L 1
# (U, X), i.e. without nesting of temporal operators. The main result is
Proposition 7.2. Assume # L 1
# (U, X). If # is satisfiable then it is satisfiable
in a model S . such that for any i, j #(s i
Such a S # can be guessed and checked in polynomial time, hence
Corollary 7.4. For any set H 1 , . of PLTL temporal combinators,
is in NP, and hence is NP-complete.
We now proceed with the proof of Prop. 7.2. Our main tool is a notion of
extracted structure:
Definition 7.1. An extraction pattern is an infinite sequence n 0 < n 1 <
n 2 < . . . of increasing natural numbers. Given an extraction pattern (n i ) i#N and a
structure S, the extraction from S along (n i ) i#N is the structure s # 0 , s # 1 , . . . where,
for every i, s # i is a copy of s n i .
Now consider a formula # L 1
# (U, X). Since # has temporal height 1, it is a
boolean combination of atomic propositions and of temporal subformulae of the
form X# or #U# where # and # have temporal height 0. For example, with #
given as
the temporal subformulae of # are
Definition 7.2. From any structure S and any # # L 1
# (U, X), we
extract a set of positions, called the witnesses for # in S. The rules are that 0 is
always a witness, and that each temporal subformula of # may require one witness:
1. for a temporal subformula X#, 1 is the witness,
2. for a temporal subformula #U# , we have three cases
(i) if S |= #U# and i is the smallest position such that S, i |= # , then i is
the witness. (Observe that for all j < i, S, j |= # .)
(ii) if S #|= F# , then no witness is needed.
(iii) otherwise S #|= #U# and S |= F# . Let i be the smallest position such
that S, i #|= #; then i is the witness. (Observe that S, i #|= # and for all j # i, S, j #|= # .)
Clearly, if {n 0 , . . . , n k } are the witnesses for #, then k < |#|.
We continue our earlier example: let S be the structure
S : . . .
where C never holds. Here S |= #. Indeed, S |=
S |= XA and S #|= AUC. The witness for XA is 1. The witness for
is 6 since we are in case (i) from Definition 7.2, and s 6 is the first position where
it holds. No witness is needed for AUC since we are in case (ii). The witness for
AUB is 4 since we are in case (iii) and s 4 is the first position where A does not
hold. Finally, the witnesses for # are {0, 1, 4, 6}.
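The case analysis of Definition 7.2 for an until-subformula can be sketched over a finite prefix assumed long enough to cover the relevant behaviour (our own encoding; truth values of the two height-0 subformulae are given as boolean lists):

```python
def until_witness(phi_vals, psi_vals):
    """Witness of Definition 7.2 for `phi U psi`, given the truth values of
    phi and psi along a finite prefix of S assumed to cover the relevant
    behaviour.  Returns a position, or None when no witness is needed."""
    if any(p and all(phi_vals[:i]) for i, p in enumerate(psi_vals)):
        return psi_vals.index(True)      # case (i): smallest psi-position
    if not any(psi_vals):
        return None                      # case (ii): F psi fails
    return phi_vals.index(False)         # case (iii): first non-phi position
```

On the example above this reproduces witness 4 for AUB (A first fails at position 4 while B only appears later) and no witness for AUC.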
Lemma 7.1. Let # L 1
# (U, X) and S be a structure. Let (n i ) i#N be an
extraction pattern containing all witnesses for # in S. Let S # be the structure
extracted from S along (n i ) i#N . Then for any subformula # of #, S |= # iff S # |= #.
Proof. By induction on the structure of #. Since all other cases are obvious, we
only need deal with the case # = # 1 U# 2 and show that S # |= # 1 U# 2 iff S |= # 1 U# 2 .
Assume S |= # 1 U# 2 . Let i be the witness for # 1 U# 2 . So S, i |= # 2 and, for any
j < i, S, j |= # 1 . Position i appears in S # as some s # n and all s # n # for n # < n
are (copies of) s j 's for j < i, hence S # |= # 1 U# 2 (remember that # 1 and # 2 have
no temporal operator.)
Now assume S # |= # 1 U# 2 . If # 1 U# 2 has no witness, then no s
and therefore no s is the witness for # 1 U# 2 , then
appears as s # n in S # : we
have
We may now conclude the proof of Prop. 7.2: Consider now a satisfiable # # L 1
# (U, X) and assume S |= #. Let {n 0 , . . . , n k } be the witnesses for # in S.
We turn these into an extraction pattern by considering the sequence n 0 <
. . . < n k prolonged by some n k+1 < n k+2 < . . . where the n k+i are positions
of states carrying the same valuation (there must be at least one valuation appearing
infinitely often). The extracted S # has the form required for Prop. 7.2.
Continuing our previous example, and assuming the valuation of s 6 appears
infinitely often, the resulting S # is made out of s 0 , s 1 , s 4 and s 6 , and it satisfies #.
7.4. Model checking without temporal nesting
We now consider model checking of formulae where the temporal height is at
most 1.
Proposition 7.3. MC(L 1
# (U, X)) is in NP.
Proof. Consider # L 1
# (U, X) and assume T , s |= #. Then there is a path S
in T starting from s such that S, s |= #.
The witnesses for # in S are some W = {n 0 , . . . , n k }. We consider an extraction
pattern containing all witnesses of W and such that the extracted S # is a path
in T : this may require retaining some positions from S, between a n i # W and the
following n i+1 , to ensure connectivity in T . In any case, it is possible to find an
extraction pattern where n k appears as some position l # k · |T |.
Therefore, if T , s |= # then this can be seen along a path S of the form
|. Guessing this path and
checking it can be done in non deterministic polynomial-time.
#
Corollary 7.5. For any set H 1 , . of PLTL temporal combinators,
8. CONCLUDING REMARKS
In this article we have measured the complexity of model checking and satisfiability
for all fragments of PLTL obtained by bounding (1) the number of atomic
propositions, (2) the temporal height, and (3) restricting the temporal operators
one allows. Table 1 provides a complete summary.
In this table we use U ? to denote any of U and U - since one outcome of our study
is that all the problems we considered have the same computational complexity
when "Until" is replaced by the weaker "flat Until", thereby ruining some hopes
of [Dam99].
Some general conclusions can be read in the table. In most cases no reduction in
complexity occurs when two propositions are allowed, or with temporal height two.
Moreover, in most cases, for equal fragments, satisfiability and model checking
belong to the same complexity class. Still the table displays some exceptions, two
of which deserve comments :
1. Model checking and satisfiability for L #
1 (U) (only one proposition) are in
P. Admittedly this fragment is not very relevant when it comes to, say, protocol
verification. Moreover, it is open whether those problems are P-hard or in NL, to
quote a few possibilities.
2. Model checking for L k
This shows that F+X
can be simpler than U. Because NP-hardness is already intractable, this result
does not immediately suggest improved deterministic algorithms. However, the
isolated fragment is very relevant.
Another way to see our results is to focus on the general techniques that we
developed: we provided a simple transformation from QBF into model checking
problems, and we formalized a number of logspace transformations leading to a
few basic rules of thumb:
(1) when the number of propositions is fixed, satisfiability can be transformed
into model checking,
TABLE 1
A complete summary of complexity measures
(columns: Model checking / Satisfiability; e.g. L(F): NP-complete [SC85] / NP-complete [ON80])
(2) propositional variables can be encoded into
(2.1) only one if F (sometimes) and X (next) are allowed,
(2.2) only two if U (until) is allowed,
(3) when arbitrarily many propositions are allowed, temporal height can be
reduced to 2 if F is allowed, and
(4) model checking for logics with X can be transformed into model checking
without X.
Besides, when the formula # has temporal height at most 1, knowing whether
S |= # only depends on O(|#|) places in S.
Most of the time, these techniques are used to strengthen earlier hardness results,
showing that they also apply to specific fragments. In some cases we develop specific
arguments showing that the complexity really decreases under the identified
threshold values.
The general situation in our study is that lower bounds are preserved when
fragments are taken into account. Hence our investigations do not give a formal
justification of the alleged simplicity of "simple practical applications". Rather,
we show that several natural suggestions are not sufficient.
Understanding and taming the complexity of linear temporal logics remains an
important issue and the present work can be seen as some additional contribution. The ground is open for further investigations. We think future work could consider:
- different, finer definitions of fragments (witness [EES90]) that can be inspired
by practical examples, or that aim at defeating one of our hardness proofs, e.g.
forbidding the renaming technique we use in sections 7.1 and 7.2,
- restrictions on the models rather than the formulae,
- other complexity measures: e.g. average complexity, or separated complexity
measures for models and formulae, or analysis of hard and easy distributions.
Additionally, it must be noted that we only considered satisfiability and model
checking, and ignored other problems that are important for verification: module
checking, semantic entailment, . . .
--R
Automata based symbolic reasoning in hardware verification.
An algebraic study of Diodorean modal systems.
Flatness is not a weakness.
Another look at LTL model checking.
The computational complexity of satisfiability of temporal Horn formulas in propositional linear-time temporal logic
The complexity of theorem-proving procedures
An almost quadratic class of satisfiability problems.
Flat fragments of CTL and CTL
A simple tableau system for the logic of elsewhere.
Execution and proof in a Horn-clause temporal logic
An expressively complete temporal logic without past tense operators for Mazurkiewicz traces.
The complexity of concept languages.
Information and Computation
The complexity of propositional linear temporal logics in simple cases (extended abstract).
On the limits of efficient temporal decidability.
Modalities for model checking: Branching time logic strikes back.
Temporal and modal logic.
A perspective on certain polynomial time solvable classes of Satisfiability.
The effect of bounding the number of primitive propositions and the depth of nesting on the complexity of modal logic.
Recurring dominos: Making the highly undecidable highly understandable.
The complexity of Poor Man's logic.
The model checker Spin.
The propositional dynamic logic of deterministic
Relating linear and branching model checking.
What good is temporal logic?
Model checking CTL
Checking that finite state concurrent programs satisfy their linear specification.
Specification in CTL
Log space recognition and translation of parenthesis languages.
Hierarchical verification of asynchronous circuits using temporal logic.
The Temporal Logic of Reactive and Concurrent Systems: Specification.
On the size of refutation Kripke models for some linear modal and tense logics.
Computational Complexity.
The complexity of propositional linear temporal logics.
An essay in classical modal logic (three vols.
Complexity of Modal Logics.
"initially"
Temporal logic can be more expressive.
Reasoning about infinite computation paths (extended abstract).
| model checking;computational complexity;temporal logic;logic in computer science;verification |
571159 | Towards a primitive higher order calculus of broadcasting systems. | Ethernet-style broadcast is a pervasive style of computer communication. In this style, the medium is a single nameless channel. Previous work on modelling such systems proposed a first order process calculus called CBS. In this paper, we propose a fundamentally different calculus called HOBS. Compared to CBS, HOBS 1) is higher order rather than first order, 2) supports dynamic subsystem encapsulation rather than static, and 3) does not require an "underlying language" to be Turing-complete. Moving to a higher order calculus is key to increasing the expressivity of the primitive calculus and alleviating the need for an underlying language. The move, however, raises the need for significantly more machinery to establish the basic properties of the new calculus. This paper develops the basic theory for HOBS and presents two example programs that illustrate programming in this language. The key technical underpinning is an adaptation of Howe's method to HOBS to prove that bisimulation is a congruence. From this result, HOBS is shown to embed the lazy λ-calculus. | INTRODUCTION
Ethernet-style broadcast is a pervasive style of computer
communication. The bare medium provided by the Ethernet
is a single nameless channel. Typically, more sophisticated
programming idioms such as point-to-point communication
or named channels are built on top of the Ethernet. But
using the Ethernet as is can allow the programmer to make
better use of bandwidth, and exploit broadcast as a powerful
and natural programming primitive. This paper proposes
a primitive higher order calculus of broadcasting systems
(HOBS) that models many of the important features of the
bare Ethernet, and develops some of its basic operational
properties.
1.1 Basic Characteristics of the Ethernet
The basic abstractions of HOBS are inspired by the Ethernet
protocol:
- The medium is a single nameless channel.
- Any node can broadcast a message, and it is instantaneously delivered to all the other nodes.
- Messages need not specify either transmitter or receiver.
- The transmitter of a message decides what is transmitted and when.
- Any receiver has to consume whatever is on the net, at any time.
- Only one message can be transmitted at any time.
- Collision detection and resolution are provided by the protocol (for HOBS, the operational semantics), so the abstract view is that if two nodes are trying to transmit simultaneously, one is chosen arbitrarily to do so.
- All nodes are treated equally; their position on the net does not matter.
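The characteristics above can be made concrete with a small simulation. The following Python sketch is our own illustration, not part of HOBS or CBS: a single nameless medium delivers a transmitted message instantaneously to every node except the speaker, and when several nodes contend, one is chosen arbitrarily. All class and function names are hypothetical.

```python
import random

class Node:
    def __init__(self, name, to_say=()):
        self.name = name
        self.outbox = list(to_say)   # messages this node wants to transmit
        self.heard = []              # messages it has received
    def hear(self, msg):
        # A receiver must consume whatever is on the net, at any time.
        self.heard.append(msg)

class Medium:
    """A single nameless channel: one speaker at a time, everyone else hears."""
    def __init__(self, nodes):
        self.nodes = nodes
    def step(self):
        ready = [n for n in self.nodes if n.outbox]
        if not ready:
            return None
        speaker = random.choice(ready)   # collision resolution: arbitrary choice
        msg = speaker.outbox.pop(0)
        for n in self.nodes:             # instantaneous delivery to all others
            if n is not speaker:
                n.hear(msg)
        return (speaker.name, msg)

nodes = [Node("a", ["hello"]), Node("b"), Node("c")]
net = Medium(nodes)
net.step()
print(nodes[1].heard, nodes[2].heard)  # ['hello'] ['hello']
```

Note that the bare medium delivers only to the other nodes; the loopback to the sender used by associative broadcast in Section 7 is built on top of this.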
HOBS refines and extends a previously proposed system called the calculus of broadcasting systems (CBS) [16]. Although both HOBS and CBS are models of the Ethernet, the two systems take fundamentally different approaches to subsystem encapsulation. To illustrate these differences, we take a closer look at how the Ethernet addresses these issues.
1.2 Modelling Ethernet-Style Encapsulation
Whenever the basic mode of communication is broadcast,
encapsulating subsystems is an absolute necessity. In the
Ethernet, bridges are used to regulate communication between
Ethernet subsystems. A bridge can stop or translate
messages crossing it, but local transmission on either side is
unaffected by the bridge. Either side of a bridge can be seen as a subsystem of the other.
CBS models bridges by pairs of functions that filter and translate messages going in each direction across the bridge.
While this is an intuitively appealing model suitable for
many applications, it has limitations:
1. CBS relies on a completely separate "underlying language". In particular, CBS is a first order process calculus, meaning that messages are distinct from processes. A separate, computationally rich language is needed to express the function pairs.
2. CBS only provides a static model of Ethernet architectures. But, for example, real bridges can change their routing behaviour. CBS provides neither for such a change nor for mobile systems that might cross bridges.
3. Any broadcast that has to cross a bridge in CBS does so instantly. This is unrealistic; real bridges usually buffer messages.
HOBS addresses these limitations by:
- Supporting first-class, transmittable processes, and
- Providing novel encapsulation primitives.
Combined, these features of HOBS yield a Turing-complete language sufficient for expressing translators (limitation 1 above), and allow us to model dynamic architectures (limitation 2). The new encapsulation primitives allow us to model the buffering of messages that cannot be consumed immediately (limitation 3).
1.3 Problem
Working with a higher order calculus comes at a cost to developing the theory of HOBS. In particular, whereas the definition of behavioural equivalence in a first order language can require an exact match between the transmitted messages, such a definition is too discriminating when messages involve processes (non-first order values). Thus, behavioural equivalence must only require the transmission of equivalent messages.
Unfortunately, this syntactically small change to the notion of equivalence introduces significant complexity to the proofs of the basic properties of the calculus. In particular, the standard technique for proving that bisimulation is a congruence [11] does not go through. A key difficulty seems to be that the standard technique cannot be used directly to show that the substitution of equivalent processes for variables preserves equivalence. (This problem is detailed in Section 4.1.)
1.4 Contributions and Organisation
The main contributions of this paper are the design of HOBS and the formal verification of its properties.
After presenting the syntax and semantics of HOBS (Section 2), we give the formal definition of applicative equivalence for HOBS (Section 3).
A key step in the technical development is the use of Howe's method [10] to establish that applicative equivalence for HOBS is a congruence (Section 4). As is typical in the treatment of concurrent calculi, we also introduce a notion of weak equivalence. Essentially the same development is used to develop this notion of equivalence (Section 5).
As an application of these results, in Section 6 we use them to show that HOBS embeds the lazy λ-calculus [1], which in turn allows us to define various basic datatypes. This encoding relies on the fact that HOBS is higher order. Possible encodings of CBS and the π-calculus [12] are discussed briefly (more details can be found in the extended version of the paper [14]). Two examples, one dealing with associative broadcast [2] and the other dealing with database consistency [2], are presented in Section 7.
These results lay the groundwork for further applied and theoretical investigation of HOBS. Future work includes developing implementations that compile into Ethernet or Internet protocols, and comparing the expressive power of different primitive calculi of concurrency. We conclude the paper by discussing related calculi and future work (Section 8).
Remarks
No knowledge is assumed of CBS or any other process calculus. The formal development in this paper is self-contained.
For readers familiar with CCS [11], CHOCS [18] and CBS: roughly, HOBS is to CBS what CHOCS is to CCS.
The extended version of the paper (available online [14]) gives details of all definitions, proofs and some additional results.
2. SYNTAX AND SEMANTICS
HOBS has eleven process constructs, formally defined in the next subsection. Their informal meaning is as follows:
- 0 is a process that says nothing and ignores everything it hears.
- x is a variable name. Scoping of variables and substitution is essentially as in the λ-calculus.
- x?p1 receives any message q and becomes p1[q/x].
- p1!p2 can say p1 and become p2. It ignores everything it hears.
- hx?p1 + p2!p3i can say p2 and become p3, except if it hears something, say q, whereupon it becomes p1[q/x].
- p1|p2 is the parallel composition of p1 and p2. They can interact with each other and with the environment; p1|p2 interacts as if it were one process.
Figure 1: Syntax. The figure gives the grammar of ground terms, guarded choice, composition, feed buffers f ∈ F ::= p / p, messages, actions a ∈ A ::= m! | m?, and contexts, with the syntax indexed by a set L of free variables drawn from a countable set of names.
- The in-filter construct behaves as p1, except that all incoming messages are filtered through p2. This in-filter is asymmetric: p2 hears the environment and p1 speaks to it. A nameless private channel connects p2 to p1. This construct represents an in-filter that is waiting for an incoming message. Process p1 can progress independently.
- The busy in-filter represents an in-filter in a busy state. Process p1 is suspended while p2 processes an input message. Later p2 sends the processed message to p1, and p1 is resumed.
- The out-filter construct behaves as p2, except that all outgoing messages from p2 are filtered through p1. This out-filter is also asymmetric: p2 hears the environment and p1 speaks to it. A nameless private channel connects p2 to p1. This filtering construct represents an out-filter in a passive state, waiting for p2 to produce an output message. Process p2 can progress independently.
- The busy out-filter represents an out-filter in a busy state. Process p3 is suspended while p1 processes an output message. After processing, p1 sends the message to the environment, and p3 is resumed. In case process p1 fails to process the message before the environment sends some other message, the out-filter will "roll back" to its previous state, represented by process p2.
- p1 / p2 is the feed construct, and consists of p1 being "fed" p2 for later consumption as an incoming message. It cannot speak to the environment until p1 has consumed p2.
Thus, HOBS is built around the same syntax and semantics as CBS [16], but without a need for an underlying language. Instead, the required expressivity is achieved by adding the ability to communicate processes (higher-orderness), as well as the feed and filtering constructs.
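To make the informal reading concrete, here is a toy interpreter for a small HOBS-like fragment: nil, input, output and parallel composition only, with filters, choice and feed omitted. The Python representation and all names are our own sketch rather than the paper's formal semantics; it illustrates how a transmission by one side of a parallel composition is heard by the other.

```python
# A toy interpreter for a core HOBS-like fragment (nil, input, output, parallel).
# A Python function stands in for the binder of the input construct.

def NIL():        return ("nil",)
def OUT(m, p):    return ("out", m, p)     # m!p : say m, become p
def IN(f):        return ("in", f)         # x?p : hear q, become f(q) ~ p[q/x]
def PAR(p, q):    return ("par", p, q)

def receive(p, m):
    """Every process can hear m; reception is deterministic and input-enabled."""
    tag = p[0]
    if tag == "in":
        return p[1](m)                     # x?p --m?--> p[m/x]
    if tag == "par":
        return PAR(receive(p[1], m), receive(p[2], m))
    return p                               # 0 and m!p ignore what they hear

def transmissions(p):
    """All transitions p --m!--> p' (nondeterministic in general)."""
    tag = p[0]
    if tag == "out":
        yield p[1], p[2]
    elif tag == "par":
        # A transmission on one side is heard by the other (broadcast).
        for m, l in transmissions(p[1]):
            yield m, PAR(l, receive(p[2], m))
        for m, r in transmissions(p[2]):
            yield m, PAR(receive(p[1], m), r)

listener = IN(lambda q: OUT(q, NIL()))     # echo once: x?(x!0)
system = PAR(OUT("ping", NIL()), listener)
[(m1, s1)] = list(transmissions(system))   # only the left side can speak
print(m1)                                  # ping
[(m2, _)] = list(transmissions(s1))
print(m2)                                  # ping (echoed by the listener)
```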
2.1 Formal Syntax and Semantics of HOBS
The syntax of HOBS is presented in Figure 1. Terms are treated as equivalence classes of α-convertible terms. The set PL is the set of process terms with free variables in the set L; for example, P∅ is the set of all closed terms.
Remark 1 (Notation). The names ranging over each syntactic category are given in Figure 1. Thus process terms are always p, q or r. Natural number subscripts indicate subterms, so p1 is a subterm of p. Primes mark the result of a transition (or transitions), as in p --q?--> p′.
Figure 2 defines the semantics in terms of a transition relation → ⊆ P∅ × A × P∅. Free variables, substitution and context filling are defined as usual (see [14] for details). The semantics is given by labelled transitions, the labels being of the form p! or p?, where p is a process. The former are broadcast, speech, or transmission; the latter are hearing or reception. Transmission can consistently be interpreted as an autonomous action, and reception as controlled by the environment. This is because processes are always ready to hear anything, transmission absorbs reception in parallel composition, and encapsulation (filtering)
Figure 2: Semantics. The transition rules are Receive, Transmit, Silence, Nil, Input, Output, Choice, Compose, Out-Filter and Internal.
can stop reception but only hide transmission. For more discussion, see [16].
The filter constructs required careful design. One difficulty is that filters should be able to hide messages. Technically, this means that filters should be able to produce silent messages as a result. But a "silence" message is not a process construct. Therefore, each filter produces a "messenger" process as a result, and this messenger process then sends the actual result of the filtration and is then discarded. This way the messenger process can produce any message (that is, any process or silence).
The transition relation fails to be a function (in the first two arguments) because the composition rule allows arbitrary exchange of messages between sub-processes. The choice construct does not introduce non-determinism by itself, since any broadcast collision can be resolved by allowing the left sub-process of a parallel composition to broadcast.
However, the calculus is deterministic on input, and is input enabled. That is, for every p and m there is exactly one p′ such that p --m?--> p′. This is easily shown by induction on the derivation of p --m?--> p′.
For further discussion of these and other design decisions we refer the reader to the extended version of the paper [14].
3. APPLICATIVE BISIMULATION
There are no surprises in the notions of simulation and bisimulation for HOBS, and the development uses the same techniques as for channel-based calculi [11].
Because the transition relation carries processes in labels, and because notions of higher order simulation and bisimulation have to account for the structure of these processes, we use the following notion of message extension for convenience.
Definition 1 (Message Extension). Let R be a relation on process terms. Its message extension R^M ⊆ M × M is defined by the rules TauExt and MsgExt: the silent message is related to itself, and transmitted messages are related when the transmitted processes are related by R.
Thomsen's notion of applicative higher order simulation [18] is suitable for strong simulation in HOBS, because we have to take the non-grounded (higher order) nature of the messages into account.
Definition 2 (Applicative Simulation). A relation R ⊆ P∅ × P∅ on closed process terms is a (strong, higher order) applicative simulation, written S(R), when for all (p, q) ∈ R:
1. whenever p --m?--> p′, there exists q′ such that q --m?--> q′ and p′ R q′;
2. whenever p --m!--> p′, there exist n and q′ such that q --n!--> q′, m R^M n and p′ R q′.
We use a standard notion of bisimulation:
Definition 3 (Applicative Bisimulation). A relation R ⊆ P∅ × P∅ on closed process terms is an applicative bisimulation, written B(R), when both S(R) and S(R⁻¹) hold.
Using standard techniques, we can show that the identity relation on closed processes is a simulation and a bisimulation. The property of being a simulation, respectively a bisimulation, is preserved by relational composition and union. Also, the bisimulation property is preserved by converse.
Two closed processes p and q are equivalent, written p ∼ q, if there exists a bisimulation relation R such that (p, q) ∈ R. In other words, applicative equivalence is the union of all bisimulation relations:
Definition 4 (Applicative Equivalence). Applicative equivalence is the relation ∼ ⊆ P∅ × P∅ defined as the union of all bisimulation relations. That is, ∼ = ∪ { R ⊆ P∅ × P∅ | B(R) }.
Proposition 1.
1. B(∼); that is, ∼ is a bisimulation.
2. ∼ is an equivalence; that is, ∼ is reflexive, symmetric and transitive.
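On a finite, first-order labelled transition system, the greatest bisimulation can be computed as a fixpoint: start from the full relation and repeatedly discard pairs that violate the simulation clauses in both directions. The Python sketch below is only a first-order illustration of Definitions 2-4: it compares labels for equality and ignores the message extension, which in HOBS compares the transmitted processes themselves; the encoding is hypothetical.

```python
def bisimilarity(states, trans):
    """Greatest bisimulation of a finite labelled transition system.
    trans maps each state to a set of (action, successor) pairs."""
    def simulates(rel, p, q):
        # every move of p is matched by an equally labelled move of q
        return all(any(a2 == a and (p2, q2) in rel for (a2, q2) in trans[q])
                   for (a, p2) in trans[p])
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for pair in list(rel):
            p, q = pair
            if not (simulates(rel, p, q) and simulates(rel, q, p)):
                rel.discard(pair)
                changed = True
    return rel

# p!0 | 0 versus p!0: one transmission each, hence bisimilar;
# u0 has an extra q! transition, so it is distinguished.
trans = {
    "s0": {("p!", "s1")}, "s1": set(),
    "t0": {("p!", "t1")}, "t1": set(),
    "u0": {("p!", "u1"), ("q!", "u1")}, "u1": set(),
}
rel = bisimilarity(trans.keys(), trans)
print(("s0", "t0") in rel, ("s0", "u0") in rel)  # True False
```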
The calculus enjoys the following basic properties:
Proposition 2. Let p, p1, p2, p3, p4 be closed processes.
Input
1. x?0 ∼ 0
Parallel Composition
1. p|0 ∼ p
2. p1|p2 ∼ p2|p1
3. p1|(p2|p3) ∼ (p1|p2)|p3
Filters
1.
2. (p1 - p2
3. x?p -0 0
4. hx?p1
Choice
1.
2. hx?p1 + p2!p3i | x?p4 ∼ hx?(p1|p4) + p2!(p3|p4[p2/x])i
4. EQUIVALENCE AS A CONGRUENCE
In this section we use Howe's method [10] to show that the applicative equivalence relation is a congruence. To motivate the need for Howe's proof method, we start by showing the difficulties with the standard proof technique [11]. Then, we present an adaptation of Howe's basic development for HOBS. We conclude the section by applying this adaptation to applicative equivalence.
An equivalence is a congruence when two equivalent terms are not distinguishable in any context:
Definition 5 (Congruence). Let R be an equivalence relation on process terms. R is a congruence when p R q implies C[p] R C[q] for every context C.
Figure 3: Compatible Refinement. The rules are Comp Nil, Comp Var, Comp In, Comp Out, Comp Choice, Comp Comp, Comp Feed, Comp InFilter, Comp InFilterB, Comp OutFilter and Comp OutFilterI; each concludes that compound terms built with the same constructor from pairwise R-related subterms are related.
4.1 Difficulty with the Standard Proof Method
The notion of compatible refinement R̂ allows us to concisely express case analysis on the outermost syntactic constructor.
Definition 6 (Compatible Refinement). Let R ⊆ P × P be a relation on process terms. Its compatible refinement R̂ is defined by the rules in Figure 3.
The standard congruence proof method is to show, by induction on the syntax, that equivalence contains its compatible refinement. The standard method for proving congruence centers around proving the following lemma:
Lemma 1. Let R ⊆ P × P be an equivalence relation on process terms. Then R is a congruence iff R̂ ⊆ R.
The standard proof proceeds by case analysis. Several cases are simple (nil 0, variable x and output !). The case of feed / is slightly more complicated. All the other cases are problematic (especially composition |), since they require substitutivity of equivalence, where substitutivity is defined (in the usual way) as:
Definition 7 (Substitutivity). Let R ⊆ P × P be a relation on process terms. R is called substitutive when the following rule (Rel Subst) holds: if p1 R q1 and p2 R q2, then p1[p2/x] R q1[q2/x].
In HOBS, the standard inductive proof of substitutivity of equivalence requires equivalence to be a congruence. And we are stuck. Attempting to prove substitutivity and congruence simultaneously does not work either, since a term's size can increase under substitution, and this makes the use of induction on syntax impossible. Similar problems seem to be common for higher order calculi (see for example [18, 7, 1]).
4.2 Howe's Basic Development
Howe [10] proposed a general method for proving that certain equivalences based on bisimulation are congruences. Following similar adaptations of Howe's method [9, 7], we present the adaptation to HOBS along with the necessary technical lemmas. We use the standard definition of the restriction R∅ of a relation R to closed processes (cf. [14]). The extension of a relation to open terms is also the standard one:
Definition 8 (Open Extension). Let R ⊆ P × P be a relation on process terms. Its open extension R◦ is defined by the following rule: p R◦ q when pσ R qσ for every substitution σ that closes both p and q.
The key part of Howe's method is the definition of the candidate relation R*:
Definition 9 (Candidate Relation). Let R ⊆ P × P be a relation on process terms. Then the candidate relation R* is defined as the least relation that satisfies the rule Cand: if p R̂* r and r R◦ q, then p R* q.
The definition of the candidate relation R* facilitates simultaneous inductive proof on syntax and on reductions. Note that the definition of the compatible refinement R̂ involves only case analysis over syntax, and inlining the compatible refinement R̂ in the definition of the candidate relation would reveal the inductive use of the candidate relation R*.
The relevant properties of the candidate relation R* are summed up below:
Lemma 2. Let R ⊆ P∅ × P∅ be a preorder (reflexive, transitive relation) on closed process terms. Then the following rules are valid:
- Cand Ref: R* is reflexive;
- Cand Sim: R◦ ⊆ R*;
- Cand Cong: R̂* ⊆ R*;
- Cand Right: if p R* r and r R◦ q, then p R* q;
- Cand Subst: if p1 R* q1 and p2 R* q2, then p1[p2/x] R* q1[q2/x].
Corollary 1. We have R* ⊆ (R*∅)◦, as an immediate consequence of rule Cand Subst and rule Cand Ref.
Figure 4: Transmit Lemma.
Lemma 3. Let R ⊆ P × P be an equivalence relation. Then R* is symmetric.
The next lemma says that if two candidate-related processes are closed terms, then there is a derivation which involves only closed terms in the last derivation step.
Lemma 4 (Closed Middle). If p, q ∈ P∅ and p R* q, then there exists a closed process r such that p R̂* r and r R◦ q.
4.3 Congruence of the Equivalence Relation
Our goal is now twofold: first, to show that the candidate relation coincides with the open extension ∼◦, that is ∼* = ∼◦; and second, to use this fact to complete the congruence proof.
First, we fix the underlying relation R to be ∼. We already know ∼◦ ⊆ ∼* from Lemma 2 (rule Cand Sim). To show the converse, we begin by proving that the closed restriction ∼*∅ of the candidate relation is a simulation. This requires showing that the two simulation conditions of Definition 2 hold.
We split the proof into two lemmas: Lemma 5 (Receive) and Lemma 6 (Transmit), which prove the respective conditions. Similarly to the standard proof, the parallel composition case is the most difficult and actually requires a stronger receive condition to hold. The first lemma (Receive) below proves a restriction of the first condition. Then the second lemma (Transmit) below proves the second condition and makes use of the Receive Lemma.
Lemma 5 (Receive). Let p, q ∈ P∅ be two closed processes with p ∼* q. Then for all messages m and n related by the message extension of ∼*, if p --m?--> p′ then there exists q′ such that q --n?--> q′ and p′ ∼* q′.
Proof. A relatively straightforward proof by induction on the height of the inference of the transition. The only interesting case is that of the rule x?p1 --q?--> p1[q/x], which makes use of the substitutivity of the candidate relation ∼*.
Lemma 6 (Transmit). Let p, q ∈ P∅ be two closed processes with p ∼* q. Then for all m, if p --m!--> p′ then there exist n and q′ such that q --n!--> q′, m and n are related by the message extension of ∼*, and p′ ∼* q′. (3)
Proof. First note that from Lemma 4 (Closed Middle) we obtain a closed r with p related to r by the compatible refinement of ∼* and r ∼◦ q. (4)
Also, from the definition of equivalence and the fact that r and q are closed processes, we know that every transmission of r is matched by q. It remains only to prove that the corresponding property holds between p and r. (5)
Joining statements (5) and (4), and using Lemma 2 (rule Cand Right) to infer p′ ∼* q′ and that the transmitted messages are related by the message extension, gives us the result. Figure 4 shows the idea pictorially (normal lines, respectively dotted lines, are universally, respectively existentially, quantified).
To prove statement (5) we proceed by induction on the height of the inference of the transition. We only describe the most interesting case, parallel composition.
Compose. There are two parallel composition rules. Since the rules are symmetric, we only show the proof for one of them. We know that p = p1|p2, and we have four related sub-processes: p1 ∼* r1 and p2 ∼* r2. Now suppose that m is a process, and consider the transition p made. Since the candidate relation contains its compatible refinement, it is enough to show that each sub-process of r can mimic the corresponding sub-process of p. For r1, using the induction hypothesis we get that r1 --l!--> r1′ with p1′ ∼* r1′, and m related to l by the message extension. For r2, if we used only the simulation condition, we would get a matching reception of m itself, and this would not allow us to show that r has a corresponding transition, since the inference of a transition requires both labels to be the same. At this point, we can use the stronger receive condition of Lemma 5, and we get precisely what we need: a matching reception of l. The case when m is the silent message is similar, but simpler, since it does not require the use of Lemma 5 (Receive).
With the two lemmas above we have established that the closed restriction ∼*∅ of the candidate relation is a simulation. Also, using Lemma 3, we get that ∼*∅ is symmetric, which means that it is a bisimulation. To conclude this section, we are now ready to state and prove the main proposition.
Proposition 3. ∼◦ is a congruence.
Proof. First we show that ∼* = ∼◦. By Lemma 2 (rule Cand Sim) we know ∼◦ ⊆ ∼*. From the two lemmas above and Lemma 3 we know that ∼*∅ is a bisimulation. This implies ∼*∅ ⊆ ∼. Since open extension is monotone, we have (∼*∅)◦ ⊆ ∼◦. By Corollary 1 we get ∼* ⊆ (∼*∅)◦, and so ∼* ⊆ ∼◦.
As ∼◦ is an equivalence, and it is equal to the candidate relation, it contains its compatible refinement (by Lemma 2, rule Cand Cong). By Lemma 1 this implies that ∼◦ is a congruence.
5. WEAK BISIMULATION
For many purposes, strong applicative equivalence is too fine, as it is sensitive to the number of silent transitions performed by process terms. Silent transitions represent local computation, and in many cases it is desirable to analyse only the communication behaviour of process terms, ignoring intermediate local computations. For example, strong applicative equivalence distinguishes two terms that have the same communication behaviour but differ only in the number of silent transitions they perform.
Equipped with the weak transition relation (Definition 10), we define weak simulation and weak bisimulation in the standard way [11]. Weak equivalence ≈ is also defined in the standard way, as the union of all weak bisimulation relations. Just as for the strong equivalence, we prove that ≈ is a weak bisimulation and that it is an equivalence (Proposition 4). Moreover, the technique used in proving that strong equivalence is a congruence also works for the weak equivalence (Proposition 5).
Definition 10 (Weak Transition). Let ⟹ be the reflexive transitive closure of the silent transition relation. Then the weak transition =a⇒ is defined as ⟹ --a--> ⟹ for a visible action a, and as ⟹ for the silent action.
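Over a finite transition system, the weak transition relation can be computed by saturating with silent steps on both sides of a visible action. The sketch below is our own finite-state illustration (with a hypothetical encoding of the silent action as the string "tau"), not HOBS itself:

```python
TAU = "tau"   # our own encoding of the silent action

def tau_closure(trans, start):
    """States reachable from start by silent steps only (start ==> q)."""
    seen, todo = {start}, [start]
    while todo:
        s = todo.pop()
        for a, t in trans[s]:
            if a == TAU and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def weak_successors(trans, p, a):
    """All q with p =a=> q, i.e. p ==> -a-> ==> q (just p ==> q when a is silent)."""
    pre = tau_closure(trans, p)
    if a == TAU:
        return pre
    mid = {t for s in pre for b, t in trans[s] if b == a}
    return {u for m in mid for u in tau_closure(trans, m)}

trans = {0: {(TAU, 1)}, 1: {("m!", 2)}, 2: {(TAU, 3)}, 3: set()}
print(weak_successors(trans, 0, "m!"))  # states 2 and 3
```

Weak bisimulation then replaces the strong matching of a transition by matching against these weak successors.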
Proposition 4.
1. ≈ is a weak bisimulation.
2. ≈ is an equivalence.
Proposition 5. ≈◦ is a congruence.
Proof. The proof follows that of Proposition 3. An interesting difference arises where the induction hypothesis is used (for example, in the case of parallel composition). When using the induction hypothesis, we get that the appropriate subterms can perform a weak transition =a⇒. Then we have to interleave the silent transitions before possibly performing the action a. This interleaving is possible since every process can hear a silent transmission without any change to the process itself (see rule Silence in Figure 2).
Figure 5: Syntax, semantics and embedding of the lazy λ-calculus.
6. EMBEDDINGS AND ENCODINGS
While a driving criterion in the design of HOBS is simplicity and resemblance to the Ethernet, a long term technical goal is to use this simple and uniform calculus to interpret other, more sophisticated proposals for broadcasting calculi, such as the bπ-calculus [5]. Having HOBS interpretations of hand-shake calculi such as the π-calculus, the π-calculus with groups [3], and CCS could also provide a means for studying the expressive power of these calculi.
In this section, we present the embedding of the lazy λ-calculus and its consequences, and briefly discuss possible encodings of CBS and the π-calculus.
6.1 Embedding of the λ-calculus
The syntax and semantics of the lazy λ-calculus, along with a direct translation into HOBS, are presented in Figure 5. The relation → ⊆ E × E is the small-step semantics for the language.
Using weak equivalence we can prove that the analogue of β-equivalence in HOBS holds. A simple proof of this proposition, using the weak bisimulation up-to technique, can be found in the extended version of the paper [14].
Proposition 6.
Proposition 7 (Soundness). Let ≃ be the standard λ-calculus notion of observation equivalence. Then
Being able to embed the λ-calculus justifies not having an explicit construct in HOBS for either recursion or datatypes. Just as in the λ-calculus there is a recursive Y combinator, HOBS can express recursion by the derived construct rec, defined on the left below. This recursive construct has the expected behaviour, as stated on the right below.
rec x.p = Wx(p) / Wx(p)        rec x.p ≈ p[rec x.p/x]
From the point of view of the λ-calculus, it is significant that this embedding is possible. In particular, broadcasting can be viewed as an impurity, or more technically as a computational effect [13]. Yet the presence of this effect does not invalidate the β-rule. We view this as a positive indicator about the design of HOBS.
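The way rec is obtained from self-application of Wx(p) mirrors the usual derivation of recursion from self-application in the λ-calculus. As an illustrative analogue only (Python rather than HOBS; the eta-expanded argument plays the delaying role that feeding a process for later consumption plays in the calculus):

```python
# Recursion derived from self-application, as with rec x.p = Wx(p) / Wx(p).
# Z is the strict-language fixed-point combinator: the inner lambda delays
# the self-application so the unfolding happens only on demand.
Z = lambda f: (lambda w: w(w))(lambda w: f(lambda v: w(w)(v)))

fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```

Each call to `self` unfolds the definition once, which is exactly the behaviour rec x.p ≈ p[rec x.p/x] states up to weak equivalence.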
6.2 Encodings of Concurrent Calculi
Having CBS as a precursor in the development of HOBS, it is natural to ask whether HOBS can interpret CBS. First, CBS assumes an underlying language with data types, which HOBS provides in the form of the embedded lazy λ-calculus, using the standard Church data encodings. Second, the CBS translator is the only construct that is not present in HOBS. To interpret it, we use a special parametrised queue construct. The queue construct together with its parameter (the translating function) is used as a one-way translator. Linking two of these to a process (via the filter constructs) gives a CBS-style translator with decoupled translation.
Using Church numerals to encode channel names, we can easily interpret the π-calculus without the new operator. Devising a sound encoding of the full π-calculus is more challenging, since there are several technical difficulties, for example explicit α-conversion, that have to be solved.
For further discussion and proposed solutions, see the extended version of the paper [14]. The soundness proof for these encodings is ongoing work.
7. EXAMPLES
HOBS is equipped with relatively high-level abstractions for broadcast communication. Because HOBS includes the lazy λ-calculus, we can extend it to a full functional language. This gives us a tool for experimenting with broadcasting algorithms. As the theory develops, we hope that HOBS will also be a tool for formal reasoning about these algorithms.
In this section we present an implementation of a coordination formalism called associative broadcast [2], together with an implementation of a database consistency algorithm expressed in this formalism. Compared to previous implementations of this algorithm (see for example [6]), the HOBS implementation is generic, retains the full expressive power of associative broadcast, and allows a straightforward representation of associative broadcast algorithms.
Our HOBS interpreter is implemented in OCaml. In addition, we use an OCaml-like syntax for HOBS functional terms and datatypes, and use juxtaposition instead of the feed construct symbol. For the purposes of this paper, the reader can treat the typing information as simply comments.
7.1 Associative Broadcast
Bayerdorffer [2] describes a coordination formalism called associative broadcast. This formalism uses broadcasting as a communication primitive. In this formalism, each object that participates in the communication has a profile. A profile can be seen as a set of attribute-value pairs. A special subset of these pairs contains values that are functions which modify the profile. This subset is divided into functions that are used in broadcasting, and those that are used locally. Since associative broadcast is a coordination formalism, all objects are locally connected to some external system, and they can invoke operations on that system. Conversely, the external system can invoke the local functions of the object.
Communication between objects proceeds as follows: each broadcast message contains a specification of the set of recipients, and an operation that the recipients should execute. The specification is a first-order formula which examines a profile. An operation is a call to one of the functions that modify the profile. When a message is broadcast by an object, it is received by all objects, including the sender. Each object then evaluates the formula on its own profile to determine whether it should execute the operation. The operation is executed only if the profile satisfies the formula.
The generic part of associative broadcast is represented by the so-called RTS (run time system), which takes care of the communication protocol. The following are the basic definitions needed for an implementation of the RTS in HOBS:
(* a selector tests a profile *)
type 'a selector = 'a -> bool;;
type 'a operation = 'a -> ('a -> 'a) -> 'a;;
type 'a message = Message of 'a selector * 'a operation
                | Internal of 'a operation;;
type 'a tag = In of 'a
            | Out of 'a;;
let rec x?match x with
Out(p) -> (p!0)!p1
| In(p) -> (0 0)!p1
and
and x?match x with
Out(p) -> (0 0)!p3
| In(p) -> (p!0)!p3
and
Out(p) -> (In(p))!m
| In(p) -> m;;
let rec obj profile =
  x?match x with
    Message(sel, op) ->
      if sel profile then op profile obj
      else obj profile
  | Internal(op) -> op profile obj;;
Recipient specification formulas have type 'a selector; operations have type 'a operation, which should be viewed as a function that takes a profile (type 'a) and a continuation, and returns a profile. Messages that an object can receive have type 'a message. The key component of the RTS is the representation of an object. In HOBS an object can be implemented as shown above, where the process obj executes the main object loop that runs the protocol. The filtering processes p1, p2, p3, p4 and the "mirroring" process m take care of the broadcast message loopback, that is, routing each message to all the other objects and to the object itself. Loopback routing uses a simple, intuitive tagging of messages.
Having implemented the associative broadcast RTS, we can implement any associative broadcast algorithm by creating the profile required by the algorithm. To run the algorithm we only need to create a parallel composition of the appropriate number of objects with their profiles.
7.2 Database Consistency
In a distributed database there may be several copies of
the same data entity, and ensuring the consistency of these
various copies becomes a concern. Inconsistency can arise if
a transaction occurs while a connection between two nodes
is broken. If we know that concurrent modifications are rare
or that downtime is short, we can employ the following optimistic
protocol [2]: When a failure occurs, network nodes
are broken into partitions. While the network is down, all
nodes log their transactions. After discovering a network reconnection,
we construct a global precedence graph from the
log of all transactions. If this graph does not contain cycles,
then the database is consistent. A transaction t1 precedes a
transaction t2 if:
- both t1 and t2 happened in one partition, and t2 read
data previously written by t1;
- both t1 and t2 happened in one partition, and t1 read
data later written by t2; or
- t1 and t2 happened in different partitions, and t1 read
data written by t2.
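As a sequential illustration of this consistency check (ours, not part of the HOBS code; the pair-list graph encoding is an assumption), the global precedence graph can be tested for cycles with a standard depth-first search, which is what the token propagation described next computes in a distributed fashion:

```python
from collections import defaultdict

def has_cycle(edges):
    """Return True iff the directed precedence graph, given as (t1, t2)
    pairs meaning 'transaction t1 precedes t2', contains a cycle, i.e.
    the database state reached after reconnection is inconsistent."""
    graph = defaultdict(list)
    nodes = set()
    for a, b in edges:
        graph[a].append(b)
        nodes.update((a, b))
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = dict.fromkeys(nodes, WHITE)

    def dfs(u):
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == GRAY:          # a "token" returned to an ancestor
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

# A consistent history (t1 -> t2 -> t3) versus a conflicting one.
assert not has_cycle([("t1", "t2"), ("t2", "t3")])
assert has_cycle([("t1", "t2"), ("t2", "t3"), ("t3", "t1")])
```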
The algorithm represents each item (table, row, etc., depending
on the locking scheme) by an RTS coordination
object. This object will keep the log of all local transactions,
and so each object will hold a part of the precedence
graph. To connect these parts into the full graph, each object
will broadcast a token to the other objects along the precedence
graph edges. When an object receives a token, it will
propagate the token along the precedence edges the object
already maintains. If a token returns to its originating object,
then we have found an inconsistency. In parallel with
token propagation, each object also sends a merge message
to actually merge (or update) the values of the item in different
partitions. And if an object that modified its item receives a
merge message, it also declares inconsistency.
In what follows we present the key definitions in our implementation.
The code for the full implementation can be
found in Figure 6 on the last page. The profile used by each
object is defined as follows:
type profile =
  { oid: int;
    mutable item_name: string;
    mutable item_value: int;
    mutable reads: transaction list;
    mutable written: bool;
    mutable partition: int;
    mutable merged: bool;
    mutable mcount: int;
    mutable tw: transaction;
    propagate: int -> int -> profile operation;
    merge: int -> bool -> profile operation;
    upon_partitioning: unit -> profile operation;
    upon_commiting_transaction:
      int -> profile operation;
    upon_detecting_reconnection:
      profile operation };;
Each object keeps a unique identifier oid, the name of the
item it monitors, the value of that item, a set of transactions
reads, a flag to signal if the item was written 1, and a partition
number. It also keeps a set of local attributes: a merged flag to
check whether the item values are already merged between
partitions, the number of merge messages received mcount, and
the last logged write transaction tw.
1 When an item is written it is also considered to be read.
Each object has two broadcasting operations: propagate,
to propagate a token along the precedence edges it contains,
and merge, to possibly update the item to a new value. Each
object has three local operations: upon_partitioning to
record the local partition number, upon_commiting_transaction
to record committed transactions, and upon_detecting_reconnection,
which starts the graph construction.
The functions that the external system is assumed to
provide are: local_partition_id to get a partition identifier;
log to log transactions; modifies to check whether
a transaction modifies an item; precedes to check (using
the local log) whether a transaction precedes another transaction;
declare_inconsistency to declare database inconsistency;
delay_transactions to pause the running transactions;
count_objects to get the number of all objects;
count_local_objects to get the number of objects in a partition;
and write_locked to check whether an item is locked
for writing.
8. RELATED WORK
In this section we review works related to our basic design
choices and the central proof technique used in the paper.
8.1 Alternative Approaches to Modelling Dynamic Connectivity
One approach to modelling dynamic broadcast architectures
is to support special messages that can change bridge
behaviour. This corresponds to the transmission of channel
names in the π-calculus [12]. Another approach is to allow
processes to be transmitted, so that copies can be run elsewhere
in the system. This makes the calculus higher order,
like CHOCS [18]. This is the approach taken in this paper.
A preliminary variant of HOBS sketched in [15] retains the
underlying language of messages and functions. The resulting
calculus seems to be unnecessarily complex, and having
the underlying language seems redundant.
In HOBS, processes are the only entities, and they are
used to characterise bridges. Since processes can be broadcast,
it will be interesting to see if HOBS can model some
features of the π-calculus. Because arrival at a bridge and
delivery across it happen in sequence, HOBS avoids CBS's
insistence that these actions be simultaneous. This comes
at the cost of having less powerful synchronisation between
subsystems.
8.2 Related Calculi
The bπ-calculus [5] can be seen as a version of the π-calculus
with broadcast communication instead of point-to-point.
In particular, the bπ-calculus borrows the whole channel
name machinery of the π-calculus, including the operator
for creation of new names. Thus the bπ-calculus does not
model the Ethernet directly, and is not obviously a mobile
version of CBS. Reusing ideas from the sketched π-calculus
encoding can yield a simple bπ-calculus encoding. Using
λ-terms to model scopes of new names seems promising. Moreover,
such an encoding might be compositional. We expect
that with an appropriate type system we can achieve a fully
abstract encoding of the bπ-calculus. The type system would
be a mixture of Hindley/Milner and polymorphic π-calculus
systems.
The Ambient calculus [4] is a calculus of mobile processes
with computation based on a notion of movement. It is
equipped with intra-ambient asynchronous communication
similar to the asynchronous π-calculus. Since the choice of
communication mechanism is independent of the mobility
primitives, it may be interesting to study a broadcasting
version of the Ambient calculus. Also, a broadcasting Ambient
calculus might have a simple encoding in HOBS.
Both HOBS and the join calculus [8] can be viewed as extensions
of the λ-calculus. HOBS adds parallel composition
and broadcast communication on top of the bare λ-calculus.
The join calculus adds parallel composition and a parallel
pattern on top of the λ-calculus with explicit let. The relationship
between these two calculi remains to be studied.
The feed operator / is foreshadowed in implementations
of CBS, see [16], that achieve apparent synchrony while allowing
subsystems to fall behind.
8.3 Other Congruence Proofs
Ferreira, Hennessy and Jeffrey [7] use Howe's proof to
show that weak bisimulation is a congruence for CML. They
use late bisimulation. They leave open the question whether
Howe's method can be applied to early bisimulation; this
paper does not directly answer their question since late and
early semantics and bisimulations coincide for HOBS. A
proof for late semantics for HOBS is more elegant than the
one here, and can be found in the extended version of the
paper [14].
Thomsen proves congruence for CHOCS [18] by adapting
the standard proof, but with non-well-founded induction.
That proof is in effect similar to Howe's technique,
but tailored specifically to CHOCS.
Sangiorgi abandons higher-order bisimulation for reasons
specific to point-to-point communication with private
channels, and uses context bisimulation, where he adapts the
standard proof. His proof is of similar difficulty to the proof
presented here, especially in that the case of process application,
involving substitution, is difficult.
9. ACKNOWLEDGEMENTS
We would like to thank Dave Sands for bringing Howe's
method to our attention, Jörgen Gustavsson, Martin Weichert
and Gordon Pace for many discussions, and the anonymous
referees for constructive comments.
10.
--R
The lazy lambda calculus.
Bryan Bayerdorffer.
Secrecy and group creation.
Mobile ambients.
Expressiveness of point-to-point versus broadcast communications
A broadcast-based calculus for communicating systems
Alan Jeffrey.
Bisimilarity as a theory of functional programming: mini-course
Proving congruence of bisimulation in functional programming languages.
Communication and Concurrency.
Communicating and Mobile Systems: the
Notions of computation and monads.
Karol Ostrovský.
Status report on ongoing work: Higher order broadcasting systems and reasoning about broadcasts.
A calculus of broadcasting systems.
Expressing Mobility in Process Algebras: First-Order and Higher-Order Paradigms
Plain CHOCS: A second generation calculus for higher order processes.
The Polymorphic Pi-Calculus: Theory and Implementation
--TR
Communication and concurrency
Notions of computation and monads
The lazy lambda calculus
A calculus of broadcasting systems
Proving congruence of bisimulation in functional programming languages
The reflexive CHAM and the join-calculus
Communicating and mobile systems
Mobile ambients
Secrecy and Group Creation
Expressiveness of Point-to-Point versus Broadcast Communications
--CTR
Massimo Merro, An Observational Theory for Mobile Ad Hoc Networks, Electronic Notes in Theoretical Computer Science (ENTCS), 173, p.275-293, April, 2007
Patrick Eugster, Type-based publish/subscribe: Concepts and experiences, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.1, p.6-es, January 2007 | semantics;ethernet;concurrency;broadcasting;calculi;programming languages |
571163 | Modular termination of context-sensitive rewriting. | Context-sensitive rewriting (CSR) has recently emerged as an interesting and flexible paradigm that provides a bridge between the abstract world of general rewriting and the (more) applied setting of declarative specification and programming languages such as OBJ*, CafeOBJ, ELAN, and Maude. A natural approach to study properties of programs written in these languages is to model them as context-sensitive rewriting systems. Here we are especially interested in proving termination of such systems, and thereby providing methods to establish termination of e.g. OBJ* programs. For proving termination of context-sensitive re-writing, there exist a few transformation methods, that reduce the problem to termination of a transformed ordinary term rewriting system (TRS). These transformations, however, have some serious drawbacks. In particular, most of them do not seem to support a modular analysis of the termination problem. In this paper we will show that a substantial part of the well-known theory of modular term rewriting can be extended to CSR, via a thorough analysis of the additional complications arising from context-sensitivity. More precisely, we will mainly concentrate on termination (properties). The obtained modularity results correspond nicely to the fact that in the above languages the modular design of programs and specifications is explicitly promoted, since it can now also be complemented by modular analysis techniques. | INTRODUCTION
Programmers usually organize the programs into components
or modules. Components of a program are easier to de-
velop, analyze, debug, and test. Eventually, the programmer
wants that interesting computational properties like termination
hold for the whole program if they could be proved for
the individual components of the program. Roughly speak-
ing, this is what being a modular property means.
Context-sensitive rewriting (CSR [26]) is a restriction of
rewriting which forbids reductions on selected arguments of
functions. In this way, the termination behavior of rewriting
computations can be improved, by pruning (all) infinite
rewrite sequences. Several methods have been developed
to formally prove termination of CSR [8, 12, 13, 25, 43,
47]. Termination of (innermost) context-sensitive rewriting
has been recently related to termination of declarative languages
such as OBJ*, CafeOBJ, and Maude [27, 28]. These
languages exhibit a strong orientation towards the modular
design of programs and specifications. In this setting,
achieving modular proofs of termination is desirable. For
instance, borrowing Appendix C.5 of [16], in Figure 1 we
show a specification of a program using lazy lists. Modules
TRUTH-VALUE and NAT introduce (sorts and) the constructors
for boolean values and natural numbers. Module ID-NAT provides
a specialization of the (built-in, syntactical) identity
operator 1 '===' of OBJ (see module IDENTICAL in Appendix
1 The definition of the binary predicate '===' is meaningful provided
that the rules are attempted from top to bottom. This
is quite a reasonable assumption from the (OBJ) implementation
point of view. Nevertheless, our discussion on termination
of the program does not depend on this fact in any
way.
Figure 1: Modular specification in OBJ3. The figure defines the
modules TRUTH-VALUE, NAT, ID-NAT, LAZYLIST, INF, TAKE, and
LENGTH; in LAZYLIST the list constructor is declared with
attributes [assoc idr: nil strat (0)], in INF the operator
inf : Nat -> List is declared with [strat (1 0)], and in TAKE the
operator take : Nat List -> List with [strat (1 2 0)].
D.3 of [16]). Module INF specifies a function inf which is
able to build an infinite object: the list of all natural numbers
following a number n. Module TAKE specifies a
function take which is able to select the first n components
of a (lazy) list given as the second argument of take. Finally,
module LENGTH introduces a function for computing
the length of a (finite) list. Here, the use of the strategy annotation
strat (0) for the list constructor cons (in module
LAZYLIST) is intended both (1) to allow for a truly terminating
behavior of the program, by disallowing the recursive
call inf(s(n)) in the definition of inf, and (2) to avoid useless
reductions on the first component of a list when computing
its length. For instance, it is possible to obtain the
value s(s(0)) of length(take(s(0),inf(s(0)))) without
any risk of nontermination 2.
2 This claim can be justified by using the results in [31, 30].
Although very simple, the program of Figure 1 provides an
interesting application of our modularity results. For instance,
whereas it is not possible to prove termination of the program
using automatic techniques for proving termination
such as Knuth-Bendix, polynomial, or recursive path orderings
(see [1, 5]), it is pretty simple to separately prove termination
of modules ID-NAT, INF, TAKE, and LENGTH. Then,
our modularity results permit a formal proof of termination
which ultimately relies on the use of purely automatic techniques
such as the recursive path ordering (see Example 9
below). Moreover, automatic tools for proving termination
such as the CiME 2.0 system 3 can also be used to prove
termination of the corresponding modules. In this way, the
user is allowed to ignore the details of termination proofs/
techniques by (only) relying on the use of software tools.
Before going into details, let us mention that there exists
already abundant literature on rewriting with context-sensitive
and other related strategies, cf. e.g. [3], [7], [9], [10],
[11], [40].
2. PRELIMINARIES
2.1 Basics
Subsequently we will assume in general some familiarity with
the basic theory of term rewriting (cf. e.g. [1], [5]). Given
a set A, P(A) denotes the set of all subsets of A. Given a
binary relation R on a set A, we denote the reflexive closure
of R by R=, its transitive closure by R+, and its reflexive and
transitive closure by R*. An element a ∈ A is an R-normal
form if there exists no b such that a R b; NF_R is the set of R-normal
forms. We say that b is an R-normal form of a if b is
an R-normal form and a R* b. We say that R is terminating
iff there is no infinite sequence a1 R a2 R a3 ···. We say
that R is locally confluent if, for every a, b, c ∈ A, whenever
a R b and a R c, there exists d ∈ A such that b R* d and
c R* d. We say that R is confluent if, for every a, b, c ∈ A,
whenever a R* b and a R* c, there exists d ∈ A such that
b R* d and c R* d. If R is confluent and terminating, then we
say that R is complete. Throughout the paper, X denotes a
countable set of variables and F denotes a signature, i.e., a
set of function symbols {f, g, ...}, each having a fixed arity
given by a mapping ar : F → N. The set of terms built
from F and X is T(F, X). A term t is said to be linear if it
has no multiple occurrences of a single variable. Terms are
viewed as labelled trees in the usual way. Positions p, q, ...
are represented by chains of positive natural numbers used
to address subterms of t. Given positions p, q, we denote their
concatenation as p.q. Positions are ordered by the standard
prefix ordering ≤. Given a set of positions P, maximal≤(P)
is the set of maximal positions of P w.r.t. ≤. If p is a
position and Q is a set of positions, then p.Q = {p.q | q ∈ Q}.
We denote the empty chain by Λ. The set of positions of
a term t is Pos(t). Positions of non-variable symbols in t
are denoted as Pos_F(t), and Pos_X(t) are the positions
of variables. The subterm at position p of t is denoted as
t|_p, and t[s]_p is the term t with the subterm at position p
replaced by s. The symbol labelling the root of t is denoted
as root(t).
A rewrite rule is an ordered pair (l, r), written l → r, with
l, r ∈ T(F, X), l ∉ X, and Var(r) ⊆ Var(l). The left-hand
side (lhs) of the rule is l and r is the right-hand side (rhs). A
TRS R is a set of rewrite rules.
3 Available at http://cime.lri.fr
L(R) denotes the set of lhs's of R. An instance σ(l) of a
lhs l of a rule is a redex. The set of redex positions in t is
Pos_R(t). A TRS R is left-linear if, for all l ∈ L(R), l is a
linear term. Given TRS's R and R', we let R ∪ R' be the TRS
consisting of all rules of R and R'. A term t rewrites to s
(at position p), written t →_R s (or just t → s), if t|_p = σ(l)
and s = t[σ(r)]_p for some rule l → r ∈ R, p ∈ Pos(t), and
substitution σ. A TRS is terminating if → is terminating. We
say that t innermost rewrites to s, written t →_i s, if t →_R s
at position p and there is no q > p with q ∈ Pos_R(t). A TRS is
innermost terminating if →_i is terminating.
2.2 Context-Sensitive Rewriting
Given a signature F, a mapping µ : F → P(N) is a replacement
map (or F-map) if, for all f ∈ F, µ(f) ⊆ {1, ..., ar(f)}
[26]. Let M_F (or M_R if R determines the considered
symbols) be the set of all F-maps. For the sake of simplicity,
we will apply a replacement map µ ∈ M_F on symbols
of any signature F' ⊇ F by assuming that it is unchanged
whenever f ∈ F. A replacement map µ specifies the argument
positions which can be reduced for each symbol in
F. Accordingly, the set of µ-replacing positions Pos^µ(t) of
a term t is given by Pos^µ(t) = {Λ} if t is a variable, and
Pos^µ(f(t1, ..., tk)) = {Λ} ∪ {i.p | i ∈ µ(f), p ∈ Pos^µ(t_i)}.
The set of positions of replacing redexes in t is
Pos^µ_R(t) = Pos^µ(t) ∩ Pos_R(t). A context-sensitive rewrite
system (CSRS) is a pair (R, µ), where R is a TRS and µ
is a replacement map. In context-sensitive rewriting (CSR
[26]), we (only) contract replacing redexes: t µ-rewrites to s,
written t ↪ s, if t →_R s at position p and p ∈ Pos^µ(t).
Example 1. Consider the TRS R corresponding to the
OBJ3 program of Figure 1 (with cons replaced by : here; see
[27] for further details about this correspondence), with
µ(:) = ∅, µ(inf) = {1}, and µ(take) = {1, 2}. Then,
we have, e.g.:
take(s(0), inf(0)) ↪ take(s(0), 0:inf(s(0)))
Since 2.2 ∉ Pos^µ(take(s(0),0:inf(s(0)))), the redex inf(s(0))
cannot be µ-rewritten.
The ↪-normal forms are called (R, µ)-normal forms. A
CSRS (R, µ) is terminating (resp. locally confluent, confluent,
complete) if ↪ is terminating (resp. locally confluent,
confluent, complete). Slightly abusing terminology, we shall
also sometimes say that the TRS R is µ-terminating if the
CSRS (R, µ) is terminating. With innermost CSR, ↪_i, we
only contract maximal positions of replacing redexes: t ↪_i s
if t →_R s at position p and p ∈ maximal≤(Pos^µ_R(t)). We say that (R, µ)
is innermost terminating if ↪_i is terminating.
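To make the notion of µ-replacing positions concrete, the following small Python sketch (ours, not from the paper; terms are encoded as nested tuples, variables as strings, and positions as tuples of integers with () for the root) computes Pos^µ(t):

```python
def replacing_positions(term, mu):
    """Compute the set of mu-replacing positions of a term.
    A term is a variable (a string) or a tuple (f, t1, ..., tk);
    mu maps each function symbol to the set of its replacing
    argument indices (1-based). The root position is ()."""
    positions = {()}
    if isinstance(term, tuple):
        f, *args = term
        for i in mu.get(f, set()):
            for p in replacing_positions(args[i - 1], mu):
                positions.add((i,) + p)
    return positions

# take(s(0), 0:inf(s(0))) with mu(take) = {1, 2} and mu(':') = {}.
t = ("take", ("s", ("0",)), (":", ("0",), ("inf", ("s", ("0",)))))
mu = {"take": {1, 2}, "s": {1}, "inf": {1}, ":": set(), "0": set()}
pos = replacing_positions(t, mu)
assert (2, 2) not in pos   # the redex inf(s(0)) at position 2.2 is blocked
assert (1,) in pos and (2,) in pos
```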
3. MODULAR PROOFS OF TERMINATION OF CSR BY TRANSFORMATION
Subsequently we assume some basic familiarity with the
usual notions, notations and terminology in modularity of
rewriting (cf. e.g. [45], [33], [17], [38, 39] 4). We say that
some property of CSRS's is modular (under disjoint, constructor-sharing,
composable unions) if, whenever two (resp.
disjoint, constructor-sharing, composable) CSRS's (R1, µ1)
and (R2, µ2) have the property, then the union (R1 ∪ R2, µ1 ∪ µ2)
has the property. 5 Two CSRS's (R1, µ1) and (R2, µ2)
are disjoint if the signatures of R1 and R2 are disjoint.
CSRS's (R1, µ1) and (R2, µ2) are constructor-sharing
if they only share constructor symbols (see Definition 7 for
details). Finally, CSRS's (R1, µ1) and (R2, µ2) are composable
if they may share defined symbols provided that they
share all of their defining rules, too (cf. [39]). Note that disjoint
TRS's are constructor-sharing, and constructor-sharing
TRS's are composable.
Termination of a CSRS (R, µ) is usually proved by demonstrating
termination of a transformed TRS R^µ_Θ obtained
from R and µ by using a transformation 6 Θ [8, 12, 13, 25,
43, 47]. The simplest (and trivial) correct transformation for
proving termination of CSRS's is the identity: if R
is terminating, then (R, µ) is terminating for every replacement
map µ.
When considering the interaction between modularity and
transformations for proving termination of a CSRS (R, µ),
we can imagine two main scenarios:
. First modularize, next transform (M&T): first, given
(R, µ), we look for (or have) some decomposition R = S ∪ T
satisfying the preconditions M(S, T) of a criterion
for modularity (e.g., disjointness, constructor-sharing,
composability, etc.). Then, we prove termination of
both S^µ_Θ and T^µ_Θ (for the same transformation Θ).
. First transform, next modularize (T&M): first, we transform
(R, µ) into R^µ_Θ; then, we look for a suitable decomposition
R^µ_Θ = S ∪ T such that termination of
S and T ensures termination of R^µ_Θ (hence, that of
(R, µ)).
The second approach (T&M) is actually a standard problem
of (being able to achieve) a modular proof of termination of
R^µ_Θ.
The first approach (M&T) succeeds if we have
1. (S ∪ T)^µ_Θ ⊆ S^µ_Θ ∪ T^µ_Θ (in this way, termination of S^µ_Θ ∪ T^µ_Θ
implies termination of (S ∪ T)^µ_Θ, which entails termination
of (S ∪ T, µ)), and
2. the transformation is 'compatible' with M, i.e., M(S, T)
implies M'(S^µ_Θ, T^µ_Θ) for some (possibly the
same) modularity criterion M' (in this way, termination
of S^µ_Θ and T^µ_Θ would imply termination of S^µ_Θ ∪ T^µ_Θ).
4 Further relevant works on modularity not mentioned elsewhere
include (this list is highly incomplete): [22, 23, 24],
5 Typically, the inverse implication is trivial.
6 See http://www.dsic.upv.es/users/elp/slucas/muterm
for a tool, mu-term 1.0, that implements these transformations
Indeed, the first requirement (S ∪ T)^µ_Θ ⊆ S^µ_Θ ∪ T^µ_Θ is satisfied
by the transformations reported in the literature, as
they have, in fact, a 'modular' definition (based on each individual
rule or symbol in the signature, disregarding any
'interaction'), i.e., we actually have (S ∪ T)^µ_Θ = S^µ_Θ ∪ T^µ_Θ for
all these transformations. On the other hand, the second requirement
is not fulfilled by many of these transformations
(in general).
According to this, we review the main (nontrivial) correct
transformations for proving termination of CSR regarding
their suitability for modular proofs of termination.
3.1 The Contractive Transformation
Let F be a signature and µ ∈ M_F be a replacement
map. With the contractive transformation [25], the non-µ-replacing
arguments of all symbols in F are removed and
a new, µ-contracted signature F^µ_L is obtained (possibly reducing
the arity of symbols). The transformation
drops the non-replacing immediate subterms of
a term t ∈ T(F, X) and constructs a 'µ-contracted' term
by joining the (also transformed) replacing arguments below
the corresponding operator of F^µ_L. A CSRS (R, µ) is
µ-contracted into R^µ_L.
Example 2. Consider the CSRS (R, µ) of Example 1.
The µ-contracted TRS R^µ_L is not terminating.
According to this definition, it is not difficult to see that,
given TRS's S and T, (S ∪ T)^µ_L = S^µ_L ∪ T^µ_L. It is also
clear that M(S, T) implies M(S^µ_L, T^µ_L) for M ∈ {Disjoint,
ConstructorSharing, Composable}, i.e., the usual criteria for modularity
are preserved (as they are) by the transformation.
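As a rough illustration of the term-level contraction (a Python sketch under our tuple encoding of terms; the rule-level definition in [25] additionally deals with variables that occur only below dropped arguments, which we omit here):

```python
def contract(term, mu):
    """Drop the non-mu-replacing immediate subterms of a term, recursively.
    Terms are variables (strings) or tuples (f, t1, ..., tk); mu maps a
    function symbol to the set of its replacing argument indices (1-based)."""
    if not isinstance(term, tuple):
        return term                      # variables stay as they are
    f, *args = term
    kept = [contract(args[i - 1], mu) for i in sorted(mu.get(f, set()))]
    return (f, *kept)

# With mu(':') = {} both arguments of ':' are dropped, so a right-hand
# side such as x : inf(s(x)) contracts to the constant ':'.
mu = {":": set(), "inf": {1}, "s": {1}}
assert contract((":", "x", ("inf", ("s", "x"))), mu) == (":",)
assert contract(("inf", ("s", "x")), mu) == ("inf", ("s", "x"))
```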
3.2 Zantema's Transformation
Zantema's transformation marks the non-replacing arguments
of function symbols (disregarding their positions within
the term) [47]. Given R = (F, R), R^µ_Z consists of two parts.
The first part results from R by replacing every function
symbol f occurring in a left- or right-hand side with f' (a
fresh function symbol of the same arity as f which, then, is
included in F') if it occurs in a non-replacing argument of
the function symbol directly above it. These new function
symbols are used to block further reductions at this position.
In addition, if a variable x occurs in a non-replacing
position in the lhs l of a rewrite rule l → r, then all occurrences
of x in r are replaced by activate(x). Here, activate
is a new unary function symbol which is used to activate
blocked function symbols again.
The second part of R^µ_Z consists of rewrite rules that are
needed for blocking and unblocking function symbols:
activate(f'(x1, ..., xk)) → f(x1, ..., xk)
for every f' ∈ F', together with the rule activate(x) → x.
The problem is that activate is a new defined symbol having
a defining rule for each new 'blocked' symbol appearing in
the signature. This means that, starting from composable
modules S and T, S^µ_Z and T^µ_Z are composable only if the blocked
symbols in both S and T are the same.
Example 3. Consider the TRS's S and T
that correspond to modules INF and ID-NAT in Figure 1.
Viewed as modules, they are constructor-sharing. Let µ be
as in Example 1. Now we have that S^µ_Z and T^µ_Z
are not composable. The problem is that S^µ_Z has a blocked
symbol which is not present in T^µ_Z.
Note that, since the rule activate(x) → x is present in every
transformed system, composability is the best that we
can achieve after applying the transformation. For instance,
disjoint TRS's S and T lose disjointness after applying the
transformation.
In [8], Ferreira and Ribeiro propose a variant of Zantema's
transformation which has been proved strictly more powerful
than Zantema's one (see [13]). This transformation has
the same problems regarding modularity.
3.3 Giesl and Middeldorp's Transformations
Giesl and Middeldorp introduced a transformation that
explicitly marks the replacing positions of a term (by using a
new symbol active). Given a TRS R = (F, R) and µ ∈ M_F,
the TRS R^µ_GM over the signature F ∪ {active, mark} consists of the
following rules (for all l → r ∈ R and f ∈ F):
active(l) → mark(r)
mark(f(x1, ..., xk)) → active(f([x1]_f, ..., [xk]_f))
active(x) → x
where [xi]_f = mark(xi) if i ∈ µ(f) and [xi]_f = xi otherwise
[12].
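To see the shape of the transformed systems, here is a small Python sketch generating the marking rules from a set of rules and a signature (terms as nested tuples; the rule schemes follow our reading of [12], so treat this as an approximation rather than the authoritative definition):

```python
def gm_transform(rules, signature, mu):
    """Sketch of the Giesl-Middeldorp-style transformation: every rule
    l -> r becomes active(l) -> mark(r); for every function symbol f,
    'mark' is pushed to the mu-replacing arguments of f; finally,
    active(x) -> x. Rules are (lhs, rhs) pairs of tuple-encoded terms."""
    out = [(("active", l), ("mark", r)) for l, r in rules]
    for f, arity in signature.items():
        xs = [f"x{i}" for i in range(1, arity + 1)]
        wrapped = [("mark", x) if i in mu.get(f, set()) else x
                   for i, x in enumerate(xs, start=1)]
        out.append((("mark", (f, *xs)), ("active", (f, *wrapped))))
    out.append((("active", "x"), "x"))
    return out

# g(x) -> x with mu(g) = {}: the argument of g is not marked again.
out = gm_transform([(("g", "x"), "x")], {"g": 1}, {"g": set()})
assert (("active", ("g", "x")), ("mark", "x")) in out
assert (("mark", ("g", "x1")), ("active", ("g", "x1"))) in out
assert (("active", "x"), "x") in out
```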
Unfortunately, unless R = S, this transformation will
never yield a pair of composable TRS's. Note that two different
composable systems R and S cannot share all symbols:
if they have the same defined symbols, then
all rules must coincide too (up to renaming of variables).
Hence R ≠ S implies that they differ (at least) in a constructor
symbol. However, if, e.g., f ∈ F_R − F_S, then a
new rule mark(f(x1, ..., xk)) → active(f([x1]_f, ..., [xk]_f))
is in R^µ_GM but not in S^µ_GM. Since mark is a defined
symbol, R^µ_GM and S^µ_GM are not composable. Thus, we have proven
the following:
Theorem 1. Let (R, µ) and (S, µ) be different composable
CSRS's. Then, R^µ_GM and S^µ_GM are not composable.
Note that, since disjoint TRS's are constructor-sharing, and
constructor-sharing TRS's are composable, it follows that
Giesl and Middeldorp's transformation does not provide any
possibility for a M&T-analysis of termination of CSRS's (at
least regarding the modularity criteria considered here).
In [12], Giesl and Middeldorp suggest a slightly different
presentation R^µ_mGM of the previous transformation. In this
presentation, the symbol active is not used anymore. However,
since, regarding modularity, the conflicts are due to the use
of the symbol mark, this new transformation has exactly the
same problem.
Giesl and Middeldorp also introduced a transformation
which is complete, i.e., every terminating CSRS (R, µ) is
transformed into a terminating TRS R^µ_C [12].
Given a TRS R = (F, R) and a replacement map µ, the
TRS R^µ_C over the signature F ∪ {active, mark, ok,
proper, top} consists of the following rules (see [12] for
a more detailed explanation): for all l → r ∈ R, constants
c ∈ F, and f ∈ F such that i ∈ µ(f):
active(l) → mark(r)
active(f(x1, ..., xi, ..., xk)) → f(x1, ..., active(xi), ..., xk)
f(x1, ..., mark(xi), ..., xk) → mark(f(x1, ..., xk))
proper(c) → ok(c)
proper(f(x1, ..., xk)) → f(proper(x1), ..., proper(xk))
f(ok(x1), ..., ok(xk)) → ok(f(x1, ..., xk))
top(mark(x)) → top(proper(x))
top(ok(x)) → top(active(x))
Unfortunately, it is not difficult to see that, regarding a
M&T-modular analysis of termination (and due to the rules
defining proper), we have the following.
Theorem 2. Let (R, µ) and (S, µ) be different composable
CSRS's. Then, R^µ_C and S^µ_C are not composable.
3.4 A Non-Transformational Approach to Modularity
The previous discussion shows that only the contractive
transformation seems to be a suitable choice for performing
a modular analysis of termination of CSRS's. However,
consider again the OBJ program of Figure 1 (represented by
the CSRS (R, µ) of Example 1). Note that a direct proof
of termination of (R, µ) is not possible with the contractive
transformation (as shown in Example 2, R^µ_L is not terminating).
Of course, in this setting, modularity is not useful
either. On the other hand, we note that S^µ_Z (resp. T^µ_Z) in
Example 3 is not kbo-terminating (resp. rpo-terminating).
Therefore, R^µ_Z (which contains both S^µ_Z and T^µ_Z) is neither
kbo- nor rpo-terminating. Moreover, our attempts to
prove termination of R^µ_Z by using CiME failed for every
considered combination of proof criteria (including techniques
that potentially deal with non-simply terminating TRS's, like
the use of dependency pairs together with polynomial orderings).
Similarly, termination of R^µ_GM or R^µ_C cannot
be proved either using kbo or rpo (see [2] for a formal
justification of this claim). In fact, we could even prove
that they are not simply terminating (see [31]). On the
other hand, our results in this section show that the M&T and
T&M approaches are not able to provide a (simpler) proof
of termination of (R, µ). Hence, termination of (R, µ) remains
difficult to (automatically) prove. The following section
shows that this situation dramatically changes using a
direct modular analysis of termination of CSR.
4. MODULAR TERMINATION OF CSR
In this main section we investigate whether, and if so, how
known modularity results from ordinary term rewriting (cf. e.g. the
early pioneering work of Toyama [45, 44] and later surveys
on the state of the art about modularity in rewriting like
[33], [37], [19]) extend to context-sensitive rewriting. Since
any TRS R can be viewed as the CSRS (R, µ⊤), where µ⊤
imposes no replacement restrictions, all modularity
results about TRS's cover a very specific case of CSRS's,
namely, the one with no replacement restrictions at all. Yet, the
interesting case, of course, arises when there are proper replacement
restrictions. In this paper we only concentrate on
termination properties. First we study how to obtain criteria
for the modularity of termination of CSRS's. Later on
we will also consider weak termination properties, which surprisingly
may help to guarantee full termination (and confluence)
under some conditions. Then we generalize the setting
and consider also to some extent certain non-disjoint
combinations. As for ordinary term rewriting, a deep understanding
of the disjoint union case appears to be indispensable
for properly treating non-disjoint unions. For that
reason we mainly focus here on the case of disjoint unions.
For practical purposes it is obvious that non-disjoint combinations
are much more interesting. Yet, the lessons learned
from (the nowadays fairly well understood) modularity analysis
in term rewriting suggest to be extremely careful with
seemingly plausible conjectures and "obvious facts".
4.1 Modularity of Termination in Disjoint Unions
In this section, we investigate modularity of termination
of disjoint unions of CSRS's. For simplicity, as a kind of
global assumption we assume that all considered CSRS's
are finite. Most of the results (but not all) do also hold for
systems with arbitrarily many rules.
The very first positive results on modularity of termination,
after the negative ones in [45, 44], were given by Rusinowitch
[41], who showed that the absence of collapsing rules
or the absence of duplicating rules suffices for the termination
of the disjoint union of two terminating TRS's. Later, Middeldorp
[32] refined and generalized this criterion by showing
that it suffices that one system doesn't have any collapsing
or duplicating rules. A careful inspection of the proofs actually
shows that these results do also hold for CSRS's. Even
more interestingly, there is an additional source for generalization.
Consider e.g. the following variant of Toyama's
basic counterexample.
Example 4. The systems R1 = {f(0, 1, x) → f(x, x, x)}
and R2 = {g(x, y) → x, g(x, y) → y}
with µ(f) = {3} are both terminating CSRS's, as is
their disjoint union (the latter is a consequence of Theorem
3 below). In Toyama's original version, where the union is
non-terminating, there are no replacement restrictions, and the
combination is both collapsing and duplicating.
A careful inspection of these two CSRS's and of their interaction
shows that the duplicating R-rule is not a problem
any more regarding non-termination, because the first two
occurrences of x in the rhs of the R1 -rule become blocked
after applying the rule. In particular, any further extraction
of subterms at these two positions, that is crucial for
Toyama's counterexample to work, is prohibited by
{3}. This observation naturally leads to the conjecture that
blocked/inactive variables in rhs's shouldn't count for duplication
Definition 1. A rule l → r in a CSRS (R, μ) is non-duplicating if, for every x ∈ Var(l), the multiset of replacing occurrences of x in r is contained in the multiset of replacing occurrences of x in l. (R, μ) is non-duplicating if all its rules are.
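Definition 1 is easy to check mechanically. The sketch below is illustrative code under an assumed encoding (variables as strings, applications as nested tuples, and the replacement map `mu` as a dict from symbols to sets of 1-based active argument positions); on the rule f(0,1,x) → f(x,x,x) it confirms the observation of Example 4 that the rule is duplicating as a TRS but non-duplicating under μ(f) = {3}.

```python
from collections import Counter

def active_occurrences(term, mu, cnt=None):
    """Multiset of variables at active (replacing) positions of a term;
    mu maps symbols to their sets of active argument positions (1-based)."""
    cnt = Counter() if cnt is None else cnt
    if isinstance(term, str):
        cnt[term] += 1
        return cnt
    for i, arg in enumerate(term[1:], start=1):
        if i in mu.get(term[0], set()):
            active_occurrences(arg, mu, cnt)
    return cnt

def non_duplicating(rules, mu):
    """Definition 1: for each rule and variable, the multiset of active
    occurrences in the rhs is contained in that of the lhs."""
    for lhs, rhs in rules:
        l = active_occurrences(lhs, mu)
        r = active_occurrences(rhs, mu)
        if any(r[x] > l[x] for x in r):
            return False
    return True

# A rule in the flavour of Example 4: f(0, 1, x) -> f(x, x, x).
RULE = [(('f', ('0',), ('1',), 'x'), ('f', 'x', 'x', 'x'))]
```

With μ(f) = {3} only one occurrence of x on each side is active, so the containment holds; with all positions active the rhs has three active occurrences against one in the lhs.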
Of course, in order to sensibly combine two CSRS's, one
should require some basic compatibility condition regarding
the respective replacement restrictions.
Definition 2. Two CSRS's (R1, μ1) and (R2, μ2) are said to be compatible if they have the same replacement restrictions for shared function symbols, i.e., if μ1(f) = μ2(f) for every shared function symbol f. Disjoint CSRS's are trivially compatible.
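Compatibility in the sense of Definition 2 is a purely syntactic check on the two replacement maps. A minimal sketch, under the assumed encoding of signatures as sets of symbols and replacement maps as dicts (illustrative code, not from the paper):

```python
def compatible(mu1, mu2, sig1, sig2):
    """Definition 2: equal replacement restrictions on all shared symbols."""
    shared = set(sig1) & set(sig2)
    return all(mu1.get(f, set()) == mu2.get(f, set()) for f in shared)
```

Disjoint signatures yield an empty intersection, so disjoint CSRS's are trivially compatible, exactly as the definition notes.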
Theorem 3. Let (R1, μ1) and (R2, μ2) be two disjoint, terminating CSRS's, and let (R, μ) be their union. Then the following hold:
(i) (R, μ) terminates, if both R1 and R2 are non-collapsing.
(ii) (R, μ) terminates, if both R1 and R2 are non-duplicating.
(iii) (R, μ) terminates, if one of the systems is both non-collapsing and non-duplicating.
Proof. We sketch the proof idea and point out the differences to the TRS case. All three properties follow immediately from the following observations. For any infinite (R, μ)-derivation D : s1 → s2 → · · · of minimal rank (i.e., in any minimal counterexample) we have:
(a) There are infinitely many outer reduction steps in D.
(b) There are infinitely many inner reduction steps in D which are destructive at level 2.
(c) There are infinitely many duplicating outer reduction steps in D.
(a) and (b) are proved as for TRS's, cf. e.g. the minimal counterexample approach of [17], and (c) is proved as in [38], but with a small adaptation: instead of the well-founded measure there, which is shown to be decreasing, namely the multiset of ranks of all special subterms of level 2 of s, we only take the multiset of ranks of those special subterms of level 2 that occur at active positions in s, i.e., we only count special subterms at active positions. With this modification the proof goes through as before.
With this result, we're able to explain termination of
Example 4 without having to use any sophisticated proof
method for the combined system.
In fact, in the case of TRS's, the above syntactical conditions
(non-collapsingness and non-duplication) had turned
out to be very special cases (more precisely, consequences)
of an abstract structure theorem that characterizes minimal counterexamples (cf. [17]7). For CSRS's this powerful result also holds. To show this, we first need another definition (where R ⊎ S means R ∪ S, provided that R and S are disjoint).
Definition 3. A TRS R is said to be terminating under free projections,8 FP-terminating for short, if R ⊎ {G(x, y) → x, G(x, y) → y} is terminating, where G is a fresh binary function symbol. A CSRS (R, μ) is said to be FP-terminating if (R ⊎ {G(x, y) → x, G(x, y) → y}, μ) is terminating, where μ(G) = {1, 2}.
Theorem 4 (extends [17, Theorem 7]).
Let (R1, μ1) and (R2, μ2) be two disjoint, terminating CSRS's such that their union (R, μ) is non-terminating. Then one of the systems is not FP-terminating and the other system is collapsing.
Proof. The quite non-trivial proof is practically the same as for TRS's, as a careful inspection of [17] reveals. All the abstracting constructions, and the resulting transformation of a minimal counterexample in the disjoint union into a counterexample in one of the systems extended by the free projection rules for some fresh binary operator, work as before.
As already shown for TRS's, this abstract and powerful
structure result has a lot of - more or less straightforward -
direct and indirect consequences and corollaries. To mention
only two:
Corollary 1. Termination is modular for non-deterministically
collapsing 9 disjoint CSRS's.
Corollary 2. FP-termination is modular for disjoint
CSRS's.
Next we will have a look at (the modularity of) weak
termination properties.
7 For the construction in [17] the involved TRS's must be
finitely branching, which in practice is always satisfied. The
case of infinitely branching systems is handled in [38] by a
similar but more involved abstraction function, based on the
same underlying idea of extracting all relevant information
from deeper alien subterms.
8 This important property was later called CE-termination in [38], which however isn't really telling or natural. A slightly different property in [17] had been called termination preserving under non-deterministic collapses, which is precise but rather lengthy. Here we prefer to use FP-termination, since it naturally expresses that a rewrite system remains terminating under the addition of the projection rules for a free (i.e., fresh) function symbol.
9 A CSRS is non-deterministically collapsing if there is a
term that reduces to two distinct variables (in a finite number
of context-sensitive rewrite steps).
4.2 Modularity of Weak Termination Properties
Weak termination properties are clearly interesting on
their own, since full termination may be unrealistic or need
not really correspond to the computational process being
modelled. Certain processes or programs are inherently non-
terminating. But still, one may wish to compute normal
forms for certain inputs (and guarantee their existence).
On the other hand, interestingly, for TRS's it has turned
out that weak termination properties can be very helpful
in order to obtain in a modular way the full termination
property under certain assumptions.
Definition 4. Let (R, μ) be a CSRS. (R, μ) (and →) is said to be weakly terminating (WN) if → is weakly terminating. (R, μ) (and →) is weakly innermost terminating (WIN) if the innermost context-sensitive rewrite relation →i is weakly terminating, and (strongly) innermost terminating (SIN) if the innermost context-sensitive rewrite relation →i is (strongly) terminating.
For ordinary TRS's it is well-known (and not difficult to prove) that weak termination, weak innermost termination and (strong) innermost termination are all modular properties w.r.t. disjoint unions (cf. e.g. [6], [17, 18]). Surprisingly,
this does not hold in general for CSRS's as shown by the following
counterexample.
Example 5. Consider two disjoint CSRS's R1 and R2 which are both innermost terminating, in fact even terminating, but whose union (R, μ) is neither terminating nor innermost terminating. We have, however, that (R, μ) is weakly innermost terminating, hence also weakly terminating: f(a, b, G(a, b)) →i f(G(a, b), G(a, b), …), and the latter term is in (R, μ)-normal form.
But even WIN and WN are not modular in general for
CSRS's as we next illustrate by a modified version of Example
5.
Example 6. Consider disjoint CSRS's R1 and R2, where R1 contains the rule f(b, a, x) → f(x, x, x). Both systems are WIN (and WN), but not SIN, let alone SN. Their union (R, μ) is neither WIN nor WN: consider the first innermost (R, μ)-step issuing from f(a, b, G(a, b)). Then we can innermost reduce the first argument of f and then the second one, or vice versa, but the subsequent innermost step must use one of the first four rules, necessarily yielding a cycle. Therefore, f(a, b, G(a, b)) doesn't have an innermost (R, μ)-normal form, and also no (R, μ)-normal form at all.
A careful inspection of what goes wrong here, as well as an
inspection of the corresponding proofs for the context-free
case shows that the problem comes from (innermost) redexes
which, at some point, are blocked (inactive) because they
are below some forbidden position, but become unblocked
(active) again later on. The condition that is needed to
prevent this phenomenon is the following:
Definition 5 (conservatively blocking). A CSRS
(R, -) is said to be conservatively blocking, (CB for short),
if the following holds: For every rule l # r # R, for every
variable x occurring in l at an inactive position, all occurrences
of x in r are inactive, too. 10
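The CB condition of Definition 5 can be decided by a simple syntactic scan of the rules. The sketch below is illustrative code under the same assumed tuple encoding of terms (with `mu` mapping each symbol to its set of 1-based active argument positions): it collects the variables occurring at inactive positions of the lhs and checks that none of them reappears at an active position of the rhs.

```python
def vars_at(term, mu, want_active, is_active=True):
    """Variables occurring at active (want_active=True) or inactive
    (want_active=False) positions of a tuple-encoded term."""
    out = set()
    if isinstance(term, str):
        if is_active == want_active:
            out.add(term)
        return out
    for i, arg in enumerate(term[1:], start=1):
        child_active = is_active and i in mu.get(term[0], set())
        out |= vars_at(arg, mu, want_active, child_active)
    return out

def conservatively_blocking(rules, mu):
    """Definition 5 (CB): a variable occurring at an inactive position of
    the lhs must not occur at an active position of the rhs."""
    for l, r in rules:
        if vars_at(l, mu, False) & vars_at(r, mu, True):
            return False
    return True
```

For instance, a (hypothetical) rule f(x) → g(x) with μ(f) = ∅ and μ(g) = {1} unblocks x and hence violates CB, whereas it satisfies CB when x is active on both sides.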
Under this condition now, the properties WIN, WN, and
SIN turn out to be indeed modular for CSRS's.
Theorem 5 (modul. crit. for WIN, WN, SIN).
(a) WIN is modular for disjoint CSRS's satisfying CB.
(b) WN is modular for disjoint CSRS's satisfying CB.
(c) SIN is modular for disjoint CSRS's satisfying CB.
Proof. (a), (b) and (c) are all proved by structural induction
as in the case of TRS's, cf. e.g. [18, 19]. Condition
CB ensures that the innermost term rewrite derivation that
is constructed in these proofs in the induction step, is also
still innermost, i.e., that the proof goes through for CSRS's
as well.
4.3 Relating Innermost Termination and Termination
For TRS's a powerful criterion is known under which SIN
implies, hence is equivalent to, termination (SN). Namely,
this equivalence holds for locally confluent overlay TRS's
[18]. Via the modularity of SIN for TRS's this gave rise to
immediate new modularity results for termination (and com-
pleteness) in the case of context-free rewriting (cf. [18]). For-
tunately, the above equivalence criterion between SIN and
SN also extends to CSRS's, but not directly. The non-trivial
proof requires a careful analysis and a subtle additional assumption
(which is vacuously satisfied for TRS's). 11
Definition 6. A CSRS (R, μ) is said to be a (context-sensitive) overlay system, or overlay CSRS, or overlaying, if there are no critical pairs12 arising from an overlap at an active non-root position of a left-hand side. (R, μ) has left-homogeneous replacing variables (LHRV for short) if, for every rule l → r, for every replacing variable x in l, all occurrences of x in both l and r are replacing.13
10 Formally: for every rule l → r ∈ R and every variable x, if x occurs at some non-replacing position of l, then x has no occurrence at a replacing position of r.
11 A similar claim was recently made in [14] without proof, but it remains unclear whether this claim is true without our condition LHRV.
12 A critical pair arises from rules l1 → r1 and l2 → r2 having no variable in common and a non-variable position π in l1 such that l1|π and l2 are unifiable with most general unifier σ. Observe that by definition any overlay TRS is an overlay CSRS, but not vice versa (when considering a CSRS (R, μ) as a TRS R)!
13 Formally: for every l → r ∈ R and every x ∈ Varμ(l), all occurrences of x in l and in r are at replacing positions.
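The LHRV property admits the same kind of syntactic check as CB: a rule violates LHRV iff some variable with an active occurrence in the lhs also has an inactive occurrence in the lhs or in the rhs. A sketch under the same assumed tuple encoding of terms (illustrative code, not from the paper):

```python
def vars_at(term, mu, want_active, is_active=True):
    """Variables occurring at active (want_active=True) or inactive
    (want_active=False) positions of a tuple-encoded term."""
    out = set()
    if isinstance(term, str):
        if is_active == want_active:
            out.add(term)
        return out
    for i, arg in enumerate(term[1:], start=1):
        child_active = is_active and i in mu.get(term[0], set())
        out |= vars_at(arg, mu, want_active, child_active)
    return out

def lhrv(rules, mu):
    """Definition 6 (LHRV): every variable with a replacing occurrence in
    the lhs has only replacing occurrences in the lhs and the rhs."""
    for l, r in rules:
        replacing_in_l = vars_at(l, mu, True)
        inactive_somewhere = vars_at(l, mu, False) | vars_at(r, mu, False)
        if replacing_in_l & inactive_somewhere:
            return False
    return True
```

For a (hypothetical) non-left-linear rule f(x, x) → g(x), LHRV fails when μ(f) = {1} (one occurrence of x active, the other inactive) but holds when all positions of f and g are active.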
Theorem 6 (local completeness criterion).
Let (R, μ) be a locally confluent overlay CSRS satisfying LHRV and let t be a term. If t is innermost terminating, then t is terminating.
Proof. Since the proof uses essentially the same approach and construction as the one in [18] for TRS's, we only focus on the differences and problems arising due to context-sensitivity. Basically, the proof is by minimal counterexample. Suppose there is an infinite (R, μ)-derivation issuing from s. Then it is not difficult to see that one can construct an infinite minimal derivation in which s is non-terminating but all proper subterms are terminating, hence complete. Since reduction steps strictly below the considered redex positions p1p2 · · · pi in si are only finitely often possible, eventually there must again be a root reduction step. Now the
idea is to transform the infinite derivation D into an infinite innermost derivation D′ in such a way that the reduction steps at the positions p1 · · · pi are still proper (and innermost) reduction steps using the same rules, whereas the other reductions need not be preserved. Technically, this was achieved for TRS's by a transformation Φ which (uniquely) normalizes all complete subterms of a given term, but doesn't touch the top parts of a given non-terminating term. Formally:14 Φ(t) = t[t1↓, …, tn↓], where t1, …, tn are all the maximal terminating (hence complete) subterms of t. Now the crucial (and subtle) issue is to make sure that Φ(si) rewrites to Φ(si+1), and to guarantee that the rule li → ri applied in si → si+1 is still applicable to Φ(si) at the same position p1 · · · pi, i.e., that the pattern of li is not destroyed by Φ. In the case of TRS's this was guaranteed by the overlay property combined with completeness. In the (more general) case
of overlay CSRS's there may be (context-free) rewrite steps
in the pattern of l (strictly below the root) which would
invalidate this argument. To account for this problem, we
slightly modify the definition of Φ as follows: Φ(t) = t[t1↓, …, tn↓], where t1, …, tn are all the maximal terminating (hence complete) subterms at active positions of t (i.e., maximal complete subterms at inactive positions of t are left untouched!). Now we are almost done.
However, there is still a problem, namely with the "variable parts" of the lhs li of the rule li → ri. In the TRS case we did get Φ(σ(li)) as an instance of li; for CSRS's we may have the problem that (for non-left-linear rules) "synchronization of normalization within variable parts" becomes impossible, because e.g. one occurrence of x is active, while another one is inactive. Consequently, the resulting transformed term Φ(σ(li)) would not be an instance of li any more, and li → ri would not be applicable. To avoid this, we need the additional requirement LHRV, which also
14 Note that normalization is performed with # and not with
#.
accounts for enabling "synchronization of non-linear variables". With these adaptations and modifications the proof finally goes through as in the TRS case.15
Clearly, this stronger local version directly implies a global
completeness criterion as corollary.
Theorem 7 (global completeness criterion).
Let (R, -) be a locally confluent overlay CSRS satisfying
LHRV. If (R, -) is innermost terminating, then it is also
terminating (hence complete).
4.4 Modularity of Completeness
Combining previous results, we get another new criterion
for the modularity of termination of CSRS's, in fact also for
the modularity of completeness.
Theorem 8 (modularity crit. for completeness).
Let (R1, μ1) and (R2, μ2) be two disjoint, terminating CSRS's satisfying LHRV and CB. Suppose both (R1, μ1) and (R2, μ2) are locally confluent and overlaying. Then their disjoint union (R, μ) is also overlaying, terminating and confluent, hence complete.
Proof. Termination of (Ri, μi) clearly implies innermost termination of (Ri, μi), for i = 1, 2. Theorem 5 thus yields innermost termination of (R, μ). (R, μ) is an overlay CSRS, too, by definition of this notion. Similarly, LHRV and CB also hold for (R, μ), since these syntactical properties are purely rule-based. Now, the only assumption missing that we need to apply Theorem 7 is local confluence. But this is indeed guaranteed by the critical pair lemma of [26, Theorem 4, pp. 25] for CSRS's (which in turn crucially relies on the condition LHRV). Hence, applying Theorem 7 yields termination of (R, μ), which together with local confluence shows (via Newman's Lemma) confluence and completeness of (R, μ).
4.5 Extensions to the Constructor-Sharing Case
As in the case of TRS's there is justified hope that many
modularity results that hold for disjoint unions can also be
extended to more general combinations. The natural next
step are constructor sharing unions. Here we will concentrate
on the case of (at most) constructor sharing CSRS's.
The slightly more general setting of unions of composable CSRS's is beyond the scope of the present paper and will only be touched upon. But we expect our approach and analysis to be applicable to this setting as well (which, already for TRS's, is technically rather complicated).
Definition 7. For a CSRS ((F, R), μ), the set of defined (function) symbols is D = {root(l) | l → r ∈ R}, and its set of constructors is C = F \ D. Consider two CSRS's (R1, μ1) and (R2, μ2), with F1, F2, C1, C2, and D1, D2 denoting their respective signatures, sets of constructors, and defined function symbols. Then (R1, μ1) and (R2, μ2) are said to be (at most) constructor sharing if D1 ∩ F2 = D2 ∩ F1 = ∅. The set of shared constructors between them is C = C1 ∩ C2. A rule l → r ∈ Ri is said to be (shared) constructor lifting if root(r) ∈ C, for i = 1, 2. Ri is said to be (shared) constructor lifting if it has a constructor lifting rule (i = 1, 2). A rule l → r ∈ Ri is said to be shared lifting if root(r) is a variable or a shared constructor. Ri is said to be shared symbol lifting if it is collapsing or has a constructor lifting rule. Ri is layer preserving if it is not shared symbol lifting.
15 Currently, we have no counterexample to the statement of Theorem 6 without the LHRV assumption. But the current proof doesn't seem to work without it. It remains to be investigated whether imposing linearity restrictions would help. Observe also that the LHRV property plays a crucial role in known (local and global) confluence criteria for CSRS's. Very little is known about how to prove (local) confluence of CSRS's without LHRV, cf. [26].
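The notions of Definition 7 are all computable directly from the rules. The following sketch is illustrative code (rules as pairs of tuple-encoded terms, variables as strings; the concrete rules are hypothetical) deriving defined symbols, constructors, the constructor-sharing condition, and the layer-preserving test used later in Theorem 10.

```python
def funsyms(term, out=None):
    """Collect all function symbols occurring in a tuple-encoded term."""
    out = set() if out is None else out
    if isinstance(term, tuple):
        out.add(term[0])
        for a in term[1:]:
            funsyms(a, out)
    return out

def signature(rules):
    """All function symbols occurring in the rules."""
    sig = set()
    for l, r in rules:
        funsyms(l, sig)
        funsyms(r, sig)
    return sig

def defined(rules):
    """Defined symbols: the roots of the left-hand sides (Definition 7)."""
    return {l[0] for l, _ in rules}

def constructors(rules):
    """Constructors: signature symbols that are not defined."""
    return signature(rules) - defined(rules)

def constructor_sharing(rules1, rules2):
    """(At most) constructor sharing: D1 and F2, and D2 and F1, are disjoint."""
    return (not (defined(rules1) & signature(rules2))
            and not (defined(rules2) & signature(rules1)))

def layer_preserving(rules, shared_constructors):
    """No collapsing rule and no (shared) constructor lifting rule."""
    for _, r in rules:
        if isinstance(r, str):              # collapsing: rhs is a variable
            return False
        if r[0] in shared_constructors:     # (shared) constructor lifting
            return False
    return True

R1 = [(('h', 'x'), ('c', 'x'))]   # h defined, c a constructor
R2 = [(('k', 'x'), ('c', 'x'))]   # shares only the constructor c with R1
```

Here R1 and R2 are constructor sharing, and both fail to be layer preserving because each has a constructor lifting rule with root c.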
For TRS's the main problems in disjoint unions arise from
the additional interference between the two systems in
"mixed" terms. This interference stems from (a) non-left-
linearity, and (b) from rewrite steps that destroy the "lay-
ered structure" of mixed terms thereby potentially enabling
new rewrite steps that have not been possible before. Now,
(a) is usually not a severe problem and can be dealt with
by synchronizing steps. However, (b) is a serious issue and
the main source of (almost) all problems. In disjoint unions
such "destructive" steps are only possible via collapsing rules
(cf. e.g. Theorems 3, 4). In constructor sharing unions, interference
and fusion of previously separated layers is also
possible via (shared) constructor lifting rules. The basic example
in term rewriting is the following variant of Toyama's
counterexample.
Example 7. The two constructor sharing TRS's are terminating, but their union admits a cycle. Observe how the application of the two constructor lifting rules enables the application of a rule that previously was not possible.
Taking this additional source of interference into account,
namely, besides collapsing rules also constructor lifting rules,
some results for disjoint unions also extend to the constructor
sharing case.
First let us look at an illuminating example.
Example 8. Consider two constructor sharing CSRS's R1 and R2 with shared constructor c. Both systems are obviously terminating, but their union admits a cycle. Observe that the CSRS (R1, μ1) is not shared symbol lifting, and is non-duplicating as a TRS but duplicating as a CSRS, whereas (R2, μ2) is constructor lifting and non-duplicating (as a CSRS).
For the next general result we need an additional definition.
Definition 8. Let ((F, R), μ) be a CSRS and f ∈ F. We say that f is fully replacing if μ(f) = {1, …, n}, where n is the arity of f.
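Checking whether a symbol is fully replacing in the sense of Definition 8 is a one-liner over the replacement map (assumed here, as in the earlier sketches, to be a dict from symbols to sets of 1-based active argument positions):

```python
def fully_replacing(sym, arity, mu):
    """Definition 8: f is fully replacing if mu(f) = {1, ..., n}, i.e.,
    every one of its n argument positions is active."""
    return mu.get(sym, set()) == set(range(1, arity + 1))
```

Theorem 9 below requires this property of all shared constructors; Example 8 shows that the theorem can fail when a shared constructor is not fully replacing.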
Now we are ready to generalize Theorem 4 to the constructor
sharing case (cf. [17, Theorem 34]).
Theorem 9 (Theorem 4 extended).
Let (R1, μ1) and (R2, μ2) be two constructor sharing, compatible, terminating CSRS's with all shared constructors fully replacing, such that their union (R, μ) is non-terminating. Then one of the systems is not FP-terminating and the other system is shared symbol lifting (i.e., collapsing or constructor lifting).
Proof. We just sketch the proof idea. The proof is very
similar to the one in the TRS case, i.e., as for Theorem 34
in [17]. Given a minimal counterexample in the union, i.e.,
an infinite (ground) derivation D of minimal rank, let's say,
with the top layer from F1, an abstracting transformation Φ is defined, which abstracts from the concrete syntactical form of inner parts of the terms but retains all relevant syntactical F1-information that may eventually pop up and fuse with the topmost F1-layer. The only difference is that in the recursive definition of this abstraction function Φ we use the context-sensitive rewrite relation instead of the unrestricted one as in [17]. The abstracted F1-information is collected and brought into a unique syntactical form via a fresh binary function symbol G with μ(G) = {1, 2} (and a fresh constant A). With these preparations it is not very difficult to show:
(a) D contains infinitely many outer reduction steps.
(b) Any outer step in D translates into a corresponding outer ((R1, μ1)-)step in Φ(D) (using the same rule at the same position).
(c) D contains infinitely many inner reduction steps that are destructive at level 2 (hence these must be (R2, μ2)-steps).
(d) Any inner step in D which is destructive at level 2 translates into a (non-empty) sequence of rewrite steps in Φ(D) using (only) the projection rules for G, i.e., G(x, y) → x and G(x, y) → y.
(e) Any inner step which is not destructive at level 2 (hence it can be an (R1, μ1)- or an (R2, μ2)-step) translates into a (possibly empty) sequence of rewrite steps in Φ(D) using (R1 ⊎ {G(x, y) → x, G(x, y) → y}, μ).
Observe that without the assumption that all shared constructors are fully replacing, the above properties (b), (d) and (e) need not hold any more in general.17 Now, (c) implies that (R2, μ2) is shared lifting. And from (a), (b), (d) and (e) we obtain the infinite (R1 ⊎ {G(x, y) → x, G(x, y) → y}, μ)-derivation Φ(D), which means that (R1, μ1) is not FP-terminating.
Note that, as in the case of TRS's, this result holds not
only for finite CSRS's, but also for finitely branching ones.
But, in contrast to the disjoint union case, it doesn't hold
any more for infinitely branching systems, cf. [38] for a corresponding
counterexample of infinitely branching constructor
sharing TRS's.
Roughly speaking, this failure is due to the fact that, for
non-fully replacing constructors, context-sensitivity makes
the abstracting transformation interfere with reduction
steps in a "non-monotonic" way.
17 Without the above assumption the statement of the theorem does not hold in general, as is witnessed by Example 8 above. Clearly, both CSRS's R1 and R2 in this example are FP-terminating, but their union is not even terminating. Note that the (only) shared constructor c here is not fully replacing.
Next let us consider the extension of the syntactical modularity
criteria of Theorem 3 to constructor sharing unions.
Theorem 10. Let (R1, μ1) and (R2, μ2) be two constructor sharing, compatible, terminating CSRS's, and let (R, μ) be their union. Then the following hold:
(i) (R, μ) terminates, if both R1 and R2 are layer-preserving.
(ii) (R, μ) terminates, if both R1 and R2 are non-duplicating.
(iii) (R, μ) terminates, if one of the systems is both layer-preserving and non-duplicating.
Proof. The proof is essentially analogous to the one of Theorem 3 for disjoint CSRS's.
Example 9. Now we are ready to give a modular proof of termination of the OBJ program of Figure 1: consider the CSRS (R, μ) of Example 1 as the (constructor sharing, compatible) union of two systems S and T. Note that S is rpo-terminating (use the precedence === > true, false). Hence, (S, μ) is terminating. On the other hand, T is rpo-terminating too: use the precedence take > nil, :; inf > :; and length > 0, s. Hence, (T, μ) is terminating. (Alternatively, polynomial termination can easily be proved using the CiME 2.0 system.) Since (S, μ) is layer-preserving and non-duplicating, by Theorem 10 we conclude termination of (R, μ). According to [27], this implies termination of the OBJ program.
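The role of the replacement restriction on ':' in this kind of example can be illustrated with a small context-sensitive normalizer. The sketch below is hypothetical illustration code, not the paper's Example 1: it uses the standard inf/take/length rules from the context-sensitive rewriting literature, with terms as nested tuples and ':' written as the symbol 'cons'. Rewriting is attempted only at active positions; freezing the second argument of cons makes inf terminate, while the unrestricted version diverges.

```python
def match(pat, term, sub=None):
    """Match a pattern (variables are strings) against a term."""
    sub = dict(sub or {})
    if isinstance(pat, str):
        if pat in sub:
            return sub if sub[pat] == term else None
        sub[pat] = term
        return sub
    if not isinstance(term, tuple) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        sub = match(p, t, sub)
        if sub is None:
            return None
    return sub

def apply_sub(term, sub):
    if isinstance(term, str):
        return sub.get(term, term)
    return (term[0],) + tuple(apply_sub(a, sub) for a in term[1:])

def csr_step(term, rules, mu):
    """One context-sensitive step (root first, then active arguments);
    returns the rewritten term, or None if term is a mu-normal form."""
    for l, r in rules:
        s = match(l, term)
        if s is not None:
            return apply_sub(r, s)
    if isinstance(term, tuple):
        for i in range(1, len(term)):
            if i in mu.get(term[0], set()):       # only active positions
                new = csr_step(term[i], rules, mu)
                if new is not None:
                    return term[:i] + (new,) + term[i + 1:]
    return None

def normalize(term, rules, mu, limit=200):
    for _ in range(limit):
        nxt = csr_step(term, rules, mu)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError("step limit exceeded (suspected non-termination)")

RULES = [
    (('inf', 'x'), ('cons', 'x', ('inf', ('s', 'x')))),
    (('take', ('0',), 'xs'), ('nil',)),
    (('take', ('s', 'n'), ('cons', 'x', 'xs')), ('cons', 'x', ('take', 'n', 'xs'))),
    (('length', ('nil',)), ('0',)),
    (('length', ('cons', 'x', 'xs')), ('s', ('length', 'xs'))),
]
MU = {'cons': {1}, 'take': {1, 2}, 'inf': {1}, 's': {1}, 'length': {1}}

# With mu(cons) = {1} the tail of a list cell is frozen, so inf stops
# after producing a single cell:
nf = normalize(('inf', ('0',)), RULES, MU)
```

With μ(cons) = {1}, inf(0) reaches the μ-normal form cons(0, inf(s(0))) in one step, and take(s(0), inf(0)) normalizes to cons(0, take(0, inf(s(0)))); lifting the restriction to μ(cons) = {1, 2} makes normalization of inf(0) run forever.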
Example 10. Consider the two constructor sharing TRS's R1 and R2, with c → a among the rules; their termination can be shown e.g. by using some well-founded polynomial ordering. In [12] Giesl and Middeldorp show that termination of the union cannot be proved by any existing transformation (except the complete one, see Section 3.3). However, no proof of termination of (R1 ⊎ R2, μ) has been reported in the literature yet. Now we are able to give a very simple (modular) proof: note that R1 and R2 are terminating, hence (R1, μ1) and (R2, μ2) are terminating. Since (R1, μ1) is layer-preserving and non-duplicating, termination of (R1 ⊎ R2, μ) follows by Theorem 10.
As for TRS's, Theorem 9 has a whole number of direct or
indirect corollaries stating more concrete modularity criteria
for termination in the case of constructor sharing unions.
We will not detail this here, but rather focus on some other
results along the lines of Sections 4.2 and 4.4.
Of course, the negative counterexamples of Section 4.2,
Examples 5 and 6, immediately extend to the constructor
sharing case, too. Yet, the positive results regarding
modularity of the "weak termination properties" WIN, WN,
and SIN extend from disjoint CSRS's to constructor sharing
ones.
Theorem 11 (Theorem 5 extended).
(a) WIN is preserved under unions of constructor sharing
CSRS's satisfying CB.
(b) WN is preserved under unions of constructor sharing
CSRS's satisfying CB.
(c) SIN is preserved under unions of constructor sharing
CSRS's satisfying CB.
Proof. The proofs of (a), (b) and (c) are essentially the
same as for Theorem 5 above, namely by structural induction
and case analysis. The only difference now is that in
the case of a shared constructor at the root, we may have
both top-white and top-black principal subterms below (in
the common modularity terminology, cf. e.g. [17]) such a
constructor symbol at the root. But this doesn't disturb the
reasoning in the proofs. Again condition CB ensures that
the innermost term rewrite derivation that is constructed in
these proofs in the induction step, is also still innermost,
i.e., that the proofs go through for CSRS's as well.
Similarly we obtain
Theorem 12 (Theorem 8 extended).
Let (R1, μ1) and (R2, μ2) be two constructor sharing, compatible, terminating CSRS's satisfying LHRV and CB. Suppose both (R1, μ1) and (R2, μ2) are locally confluent and overlaying. Then their (constructor sharing) union (R, μ) is also
overlaying, terminating and confluent, hence complete.
Proof. Analogous to the proof of Theorem 8 using Theorem
11 instead of Theorem 5.
5. RELATED WORK
As far as we know our results are the first to deal with
the analysis of modular properties of CSRS's. Some properties
of CSRS's have by now been fairly well investigated,
especially regarding termination proof techniques, but also
concerning other properties and verification criteria (cf. e.g.
[25, 26, 28, 27, 29, 31, 30], [47], [8], [12, 13, 15, 14],
Recent interesting developments include in particular the
approach of Giesl & Middeldorp for proving innermost termination
of CSRS's via transformations to ordinary TRS's
along the lines of [12], as well as the rpo-style approach of
[2] for directly proving termination of CSRS's without any
intermediate transformations and without recurring to ordinary
TRS's. A comparison of our results and approach with
the latter ones mentioned remains to be done.
5.1 Perspectives and Open Problems
In this paper we have started to systematically investigate
modular aspects of context-sensitive rewriting. We have almost
exclusively focussed on termination (properties). Of
course, this is only the beginning of more research to be
done. We have shown that, taking the additional complications
arising from context-sensitivity carefully into account,
it is indeed possible to extend a couple of fundamental modularity
results for TRS's to the more general case of CSRS's.
In this sense, the obtained results are quite encouraging.
They also seem to indicate that a considerable amount of
the structural knowledge about modularity in term rewriting
can be taken over to context-sensitive rewriting. However, it has also turned out that there are a couple of new phenomena and ugly properties that crucially interfere with the traditional approach for TRS's. In particular, it turns out that the syntactical restrictions CB and LHRV on the replacement map μ play a crucial role. These conditions are certainly a considerable restriction in practice, and hence should also be more thoroughly investigated. Apart from the disjoint
union case, we have also shown that the obtained results for
disjoint unions extend nicely to the case of shared construc-
tors. On the other hand, of course, modularity results do
not always help. A simple example is the following.
Example 11. The two CSRS's R1 and R2 are constructor sharing and both terminating, and their union R is terminating, too! However, none of our modularity results is applicable here, as the reader is invited to verify. Intuitively, the reason for this non-applicability of (generic) modularity results lies in the fact that any termination proof of R must somehow exploit internal (termination) arguments about the systems involved. A bit more precisely, the decrease in the first argument of the second nth-rule "lexicographically dominates" what happens in the second argument. To make this more explicit, consider also
the following, semantically meaningless, variant R3 of the second system, with μ as before. Clearly, R3 is also terminating. However, now the union of the constructor sharing CSRS's R1 and R3 becomes non-terminating: there is an infinite derivation issuing from a term of the form nth(s(x), …). Here the failure of our modularity criteria becomes comprehensible. Namely, R1 and R2 have the same syntactical modularity structure as R1 combined with R3. And in the former case we got termination, in the latter one non-termination. Thus it is
unrealistic to expect the applicability of a general modularity
result in this particular example. Yet, if we now consider
still another system R4, with μ(:) as above, and consider the union of the terminating composable CSRS's, then we might
wish to conclude termination of the combination by some
modularity criterion. This does not seem to be hopeless.
In other words, we expect that many results that hold for
constructor sharing CSRS's also extend to unions of composable
CSRS's which, additionally, may share defined function
symbols provided they then share all their defining rules, too
(cf. [39]), and to some extent also for hierarchical combinations
of CSRS's (cf. e.g. [21], [4]). However, since modularity
is known to be a very error-prone domain, any concrete such
claim has to be carefully verified. This will be the subject
of future work.
6. CONCLUSION
We have presented some first steps of a thorough modularity
analysis in context-sensitive rewriting. In the paper we
have mainly focussed on termination properties. The results
obtained by now are very encouraging. But there remains a
lot of work to be done.
7.
--R
Term Rewriting and All That.
Recursive path orderings can be context-sensitive
Principles of Maude.
Hierarchical termination.
rewriting with operator evaluation strategies.
Termination of rewriting with local strategies.
Principles of OBJ2.
An overview of CAFE specification environment - an algebraic approach for creating
Transformation techniques for context-sensitive rewrite systems
Transforming context-sensitive rewrite systems
Innermost termination of context-sensitive rewriting
Transformation techniques for context-sensitive rewrite systems
Introducing OBJ.
Generalized sufficient conditions for modular termination of rewriting.
Abstract relations between restricted termination and confluence properties of rewrite systems.
Termination and Confluence Properties of Structured Rewrite Systems.
Modularity of confluence: A simplified proof.
Modular proofs for completeness of hierarchical term rewriting systems.
Modularity of simple termination of term rewriting systems with shared constructors.
Termination of combination of composable term rewriting systems.
Modularity in noncopying term rewriting.
Termination of context-sensitive rewriting by rewriting
Termination of on-demand rewriting and termination of OBJ programs
Termination of rewriting with strategy annotations.
Transfinite rewriting semantics for term rewriting systems.
Termination of (canonical) context-sensitive rewriting
Modular Properties of Term Rewriting Systems.
Modular properties of conditional term rewriting systems.
Completeness of combinations of conditional constructor systems.
Completeness of combinations of constructor systems.
Modular Properties of Composable Term Rewriting Systems.
On the modularity of termination of term rewriting systems.
Modular properties of composable term rewriting systems.
On termination of the direct sum of term rewriting systems.
Counterexamples to termination for the direct sum of term rewriting systems.
On the Church-Rosser property for the direct sum of term rewriting systems
Termination for direct sums of left-linear complete term rewriting systems
Termination of context-sensitive rewriting
--TR
On the Church-Rosser property for the direct sum of term rewriting systems
Counterexamples to termination for the direct sum of term rewriting systems
On termination of the direct sum of term-rewriting systems
A sufficient condition for the termination of the direct sum of term rewriting systems
Modularity of simple termination of term rewriting systems with shared constructors
Completeness of combinations of constructor systems
Modularity of confluence
Modular properties of conditional term rewriting systems
Completeness of combinations of conditional constructor systems
On the modularity of termination of term rewriting systems
Modular proofs for completeness of hierarchical term rewriting systems
Modularity in noncopying term rewriting
Modular termination of <italic>r</italic>-consistent and left-linear term rewriting systems
Modular properties of composable term rewriting systems
Termination for direct sums of left-linear complete term rewriting systems
rewriting and all that
Principles of OBJ2
Context-sensitive rewriting strategies
Termination of Rewriting With Strategy Annotations
Termination of Context-Sensitive Rewriting by Rewriting
Context-Sensitive AC-Rewriting
Transforming Context-Sensitive Rewrite Systems
Transfinite Rewriting Semantics for Term Rewriting Systems
Termination of (Canonical) Context-Sensitive Rewriting
Termination of Context-Sensitive Rewriting
Recursive Path Orderings Can Be Context-Sensitive
Hierarchical Termination
Termination of on-demand rewriting and termination of OBJ programs
An overview of the CAFE specification environment: an algebraic approach for creating, verifying, and maintaining formal specifications over networks
--CTR
Beatriz Alarcón, Raúl Gutiérrez, José Iborra, Salvador Lucas, Proving Termination of Context-Sensitive Rewriting with MU-TERM, Electronic Notes in Theoretical Computer Science (ENTCS), 188, p.105-115, July, 2007
Bernhard Gramlich, Salvador Lucas, Simple termination of context-sensitive rewriting, Proceedings of the 2002 ACM SIGPLAN workshop on Rule-based programming, p.29-42, October 05, 2002, Pittsburgh, Pennsylvania
Jürgen Giesl, Aart Middeldorp, Transformation techniques for context-sensitive rewrite systems, Journal of Functional Programming, v.14 n.4, p.379-427, July 2004
Salvador Lucas, Context-sensitive rewriting strategies, Information and Computation, v.178 n.1, p.294-343, October 10, 2002
Salvador Lucas, Proving termination of context-sensitive rewriting by transformation, Information and Computation, v.204 n.12, p.1782-1846, December, 2006 | modular proofs of termination;declarative programming;context-sensitive rewriting;program verification;evaluation strategies;modular analysis and construction of programs |
571169 | Constraint-based mode analysis of Mercury. | Recent logic programming languages, such as Mercury and HAL, require type, mode and determinism declarations for predicates. This information allows the generation of efficient target code and the detection of many errors at compile-time. Unfortunately, mode checking in such languages is difficult. One of the main reasons is that, for each predicate mode declaration, the compiler is required to decide which parts of the procedure bind which variables, and how conjuncts in the predicate definition should be re-ordered to enforce this behaviour. Current mode checking systems limit the possible modes that may be used because they do not keep track of aliasing information, and have only a limited ability to infer modes, since inference does not perform reordering. In this paper we develop a mode inference system for Mercury based on mapping each predicate to a system of Boolean constraints that describe where its variables can be produced. This allows us to handle programs that are not supported by the existing system. | INTRODUCTION
Logic programming languages have traditionally been untyped
and unmoded. In recent years, languages such as Mercury [19]
[PPDP'02, October 6-8, 2002, Pittsburgh, Pennsylvania, USA]
have shown that strong type and mode systems can
provide many advantages. The type, mode and determinism
declarations of a Mercury program not only provide useful
documentation for developers, they also enable compilers to
generate very efficient code, improve program robustness,
and facilitate integration with foreign languages. Information
gained from these declarations is also very useful in
many program analyses and optimisations.
The Mercury mode system, as described in the language
reference manual, is very expressive, allowing the programmer
to describe very complex patterns of dataflow. How-
ever, the implementation of the mode analysis system in
the current release of the Melbourne Mercury compiler has
some limitations which remove much of this expressiveness
(see [8]). In particular, while the current mode analyser
allows the construction of partially instantiated data struc-
tures, in most cases it does not allow them to be filled in.
Another limitation is that mode inference prevents reordering
in order to limit the number of possibilities examined.
The effect of these limitations is that it is hard to write
Mercury programs that use anything other than the most
basic modes and that it is just not possible to write programs
with certain types of data flow.
As well as being limited in the ways described above, the
current mode analysis algorithm is also very complicated. It
combines several conceptually distinct stages of mode analysis
into a single pass. This makes modifications to the algorithm
(e.g. to include the missing functionality) quite difficult.
The algorithm is also quite inefficient when analysing
code involving complex modes.
In this paper, we present a new approach to mode analysis
of Mercury programs which attempts to solve some of these
problems of the current system. We separate mode checking
into several distinct stages and use a constraint based
approach to naturally express the constraints that arise during
mode analysis. We believe that this approach makes it
easier to implement an extensible mode analysis system for
Mercury that can overcome the limitations of the current
system.
We associate with each program variable a set of "posi-
tions", which correspond to nodes in its type graph. The key
idea to the new mode system is to identify, for each position
in the type of a variable, where that position is produced,
i.e. which goal binds that part of the variable to a function
symbol. We associate to each node in the type graph, and
each goal in which the program variable occurs, a Boolean
variable that indicates whether the node in the graph of
that program variable is bound within the goal. Given these
Boolean variables we can describe the constraints that arise
from correct moding in terms of Boolean constraints. We
can then represent and manipulate these descriptions using
standard data structures such as reduced ordered binary
decisions diagrams (ROBDDs). By allowing constraints between
individual positions in different data structures, we
obtain a much more precise analysis than the current Mercury
mode system.
Starting with [11] there has been considerable research
into mode inference and checking. However, most of this
work is based on assumptions that differ from ours in (at
least) one of two significant respects: types and reordering.
Almost all work on mode analysis in logic programming
has focused on untyped languages, mainly Prolog. As a
consequence, most papers use very simple analysis domains,
such as {ground, free, unknown}. One can use patterns from
the code to derive more detailed program-specific domains,
as in e.g. [3, 10, 12], but such analyses must sacrifice too
much precision to achieve acceptable analysis times. In [18],
we proposed fixing this problem by requiring type information
and using the types of variables as the domains of mode
analysis. Several papers since then (e.g. [14, 17]) have been
based on similar ideas. Like other papers on mode infer-
ence, these also assume that the program is to be analyzed
as is, without reordering. They therefore use modes to describe
program executions, whereas we are interested in using
modes to prescribe program execution order, and insist
that the compiler must have exact information about instantiation
states. The only other mode analysis systems that
do likewise work with much simpler domains (for example,
Ground Prolog [9] recognizes only two instantiation states).
Other related work has been on mode checking for concurrent
logic programming languages and for logic programming
languages with coroutining [1, 5, 7]: there the emphasis
has been on detecting communication patterns and
possible deadlocks. The only other constraint-based mode
analysis we are aware of is that of Moded Flat GHC [4].
Moded Flat GHC relies on position in the clause (in the
head or guard versus in the body) to determine if a unification
is allowed to bind any variables, which significantly
simplifies the problem. The constraints generated are equa-
tional, and rely on delaying the complex cases where there
are three or more occurrences of a variable in a goal.
There has been other work on constraint-based analysis
of Mercury. In particular, the work of [20] on a constraint-based
binding-time analysis is notable. It has a similar basic
approach to ours of using constraints between the "po-
sitions" in the type trees of variables to express data flow
dependencies. However, binding-time analysis has different
objectives to mode analysis and, in fact, their analysis requires
the results of mode analysis to be available.
In Section 2 we give the background information the rest
of the paper depends on. In Section 3 we briefly outline the
current approach and some of its weaknesses. In Section 4
we give a simplified example of our constraint-based system
before presenting the full system in Section 5. In Section 6
we show how the results of the analysis are used to select an
execution order for goals in a conjunction. In Section 7 we
give some preliminary experimental results.
2. BACKGROUND
Mercury is a purely declarative logic programming language
designed for the construction of large, reliable and
efficient software systems by teams of programmers [19].
Mercury's syntax is similar to the syntax of Prolog, but Mercury
also has strong module, type, mode and determinism
systems, which catch a large fraction of programmer errors
and enable the compiler to generate fast code. The rest of
this section explains the main features of Mercury that are
relevant for this paper.
2.1 Programs
The definition of a predicate in Mercury is a goal containing
atoms, conjunctions, disjunctions, negations and if-
then-elses. To simplify its algorithms, the compiler converts
the definition of each predicate into what we call superho-
mogeneous form [19]. In this form, each predicate is defined
by one clause, all variables appearing in a given atom (in-
cluding the clause head) are distinct, and all atoms are one
of the following three forms: X = Y, X = f(Y1, . . . , Yn), or
p(X1, . . . , Xn).
In this paper, we further assume that in all unifications,
neither side is a variable that does not appear outside the
unification itself. (Unifications that do not meet this requirement
cannot influence the execution of the program
and can thus be deleted.) For simplicity, we also assume
that all negations have been replaced by if-then-elses (one
can replace ¬G with (G -> fail ; true)). The abstract syntax
of the language we deal with can therefore be written as
atom ::= X = Y | X = f(Y1, . . . , Yn) | p(X1, . . . , Xn)
goal ::= atom | (G1, . . . , Gn) | (G1 ; . . . ; Gn) | (Gc -> Gt ; Ge)
rule ::= p(X1, . . . , Xn) :- G
In order to describe where a variable becomes bound, our
algorithms need to be able to uniquely identify each subgoal
of a predicate body. The code of a subgoal itself cannot
serve as its identifier, since a piece of code may appear more
than once in a predicate definition. We therefore use goal
paths for this purpose. A goal path consists of a sequence of
path components. We use ε to represent the path with zero
components, which denotes the entire procedure body.
. If the goal at path p is a conjunction, then the goal
path p.cn denotes its nth conjunct.
. If the goal at path p is a disjunction, then the goal
path p.dn denotes its nth disjunct.
. If the goal at path p is an if-then-else, then the goal
path p.c denotes its condition, p.t denotes its then-
part, and p.e denotes its else-part.
Definition 1. The parent of goal path p.cn , p.dn , p.c, p.t,
or p.e, is goal path p. The function parent(p) maps a goal
path to its parent.
Definition 2. Let vars(G) be the set of variables that occur
within a goal G. Let nonlocal(G) ⊆ vars(G) be the set of
variables that are nonlocal to goal G, i.e. occur both inside
and outside G. For convenience, we also define vars and
nonlocal for a set of goals S, as the union over the goals in S.
Since each goal path uniquely identifies a goal, we sometimes
apply operations on goals to goal paths.
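The path representation and Definition 1 are simple enough to state directly in code. The following sketch (Python, with names of our own choosing; the paper itself gives no implementation) models goal paths as tuples of components:

```python
# Goal paths as tuples of components: () is the empty path (the whole
# procedure body); "c1", "c2", ... name conjuncts, "d1", "d2", ...
# disjuncts, and "c", "t", "e" the parts of an if-then-else.
EMPTY = ()  # the paper writes this path as epsilon

def child(path, component):
    """Extend a goal path with one component, e.g. "c2" or "d1"."""
    return path + (component,)

def parent(path):
    """Definition 1: the parent of p.cn, p.dn, p.c, p.t or p.e is p."""
    assert path != EMPTY, "the empty path has no parent"
    return path[:-1]

# The third conjunct of the second disjunct of a body is the path d2.c3:
p = child(child(EMPTY, "d2"), "c3")
```

Because each path uniquely identifies a subgoal, a mapping from paths to goals lets operations on goals be applied to paths, as the text notes.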
2.2 Deterministic Regular Tree Grammars
In order to be able to express all the di#erent useful modes
on a program variable, we must be able to talk about each
of the individual parts of the terms which that program
variable will be able to take as values. To do so in a finite
manner, we use regular trees, expressed as tree grammars.
A signature Σ is a set of pairs f/n where f is a function
symbol and n ≥ 0 is the integer arity of f. A function symbol
with 0 arity is called a constant. Given a signature Σ, the
set of all trees (the Herbrand Universe), denoted T(Σ), is
defined as the least set satisfying:
T(Σ) = {f(t1, . . . , tn) | f/n ∈ Σ, {t1, . . . , tn} ⊆ T(Σ)}.
For simplicity, we assume that Σ contains at least one constant.
Let V be a set of symbols called variables. The set of all
terms over Σ and V, denoted T(Σ, V), is similarly defined
as the least set satisfying:
T(Σ, V) = V ∪ {f(t1, . . . , tn) | f/n ∈ Σ, {t1, . . . , tn} ⊆ T(Σ, V)}.
A tree grammar r over signature Σ and non-terminal set
NT is a finite sequence of production rules of the form x → t
where x ∈ NT and t is of the form f(x1, . . . , xn) where
f/n ∈ Σ and {x1, . . . , xn} ⊆ NT. A tree grammar is deterministic
regular if for each x ∈ NT and f/n ∈ Σ, there can
be at most one rule of the form x → f(x1, . . . , xn).
For brevity we shall often write tree grammars in a more
compressed form. We use x → t1 ; t2 ; · · · ; tk
for the sequence of production rules x → t1, x → t2, . . . , x → tk.
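The determinism condition is mechanically checkable. As an illustration (Python, with a rule representation we have invented for the purpose: triples of left-hand nonterminal, functor, and argument nonterminals), one pass over the rules suffices:

```python
def is_deterministic(rules):
    """A tree grammar is deterministic regular if for each nonterminal x
    and functor f/n there is at most one rule x -> f(x1, ..., xn)."""
    seen = set()
    for lhs, functor, args in rules:
        key = (lhs, functor, len(args))  # len(args) is the arity n of f
        if key in seen:
            return False
        seen.add(key)
    return True

# A small list grammar over elements a, b, c; "[|]" is the cons functor.
grammar = [
    ("list(abc)", "[]", []),
    ("list(abc)", "[|]", ["abc", "list(abc)"]),
    ("abc", "a", []), ("abc", "b", []), ("abc", "c", []),
]
```

Adding a second rule with the same left-hand nonterminal and functor/arity pair makes the check fail.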
2.3 Types
Types in Mercury are polymorphic Hindley-Milner types.
Type expressions (or types) are terms in the language
T(Σtype, Vtype), where Σtype are the type constructors and the
variables Vtype are type parameters. Each type constructor
f/n ∈ Σtype must have a definition.
Definition 3. A type definition for f/n is of the form
:- type f(v1, . . . , vn) ---> f1(t11, . . . , t1m1) ; · · · ; fk(tk1, . . . , tkmk).
where v1, . . . , vn are distinct type parameters,
{f1/m1, . . . , fk/mk} ⊆ Σtree are distinct tree constructor/arity
pairs, and the tij are type expressions
involving at most parameters v1, . . . , vn.
Clearly, we can view the type definition for f as simply a
sequence of production rules over signature Σtree and non-terminal
set T(Σtype, Vtype).
In order to avoid type expressions that depend on an infinite
number of types, we restrict the type definitions to
be regular [13]. (Essentially regularity ensures that for any
type t, grammar(t), defined below, is finite.)
Example 1. Type definitions for lists, and a simple type
abc which includes constants a, b and c, are:
:- type list(T) ---> [] ; [T | list(T)].
:- type abc ---> a ; b ; c.
We can associate with each (non-parameter) type expression
the production rules that define the topmost symbol of
the type. Let t be a type expression of the form f(t1, . . . , tn)
and let f/n have a type definition of the form in Definition 3.
We define rules(t) to be the production rules:
t → f1(θ(t11), . . . , θ(t1m1)) ; · · · ; fk(θ(tk1), . . . , θ(tkmk))
where θ = {v1 ↦ t1, . . . , vn ↦ tn}. If t ∈ Vtype we define
rules(t) to be the empty sequence.
We can extend this notation to associate a tree grammar
with a type expression. Let grammar(t) be the sequence of
production rules recursively defined by
grammar(t) = rules(t) ++ grammar(s1) ++ · · · ++ grammar(sj)
where s1, . . . , sj are the type expressions appearing on the
right-hand sides of rules(t), and the ++ operation concatenates
sequences of production rules, removing second and later
occurrences of duplicate production rules.
We call each nonterminal in a set of production rules a po-
sition, since we will use them to describe positions in terms;
each position is the root of one of the term's subterms. We
also sometimes call positions nodes, since they correspond
to nodes in type trees.
Example 2. Consider the type list(abc); the corresponding
grammar is
list(abc) → [] ; [abc | list(abc)]
abc → a ; b ; c
There are two nonterminals and thus two positions in the
grammar: list(abc) and abc.
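The recursive definition of grammar(t) amounts to a worklist traversal that collects rules(t') for every type t' reached, dropping duplicate rules. A sketch (Python; the dict of pre-instantiated type definitions and all names are our own illustration, with parameter substitution elided by working with already-instantiated types such as list(abc)):

```python
def grammar(t, rules_of):
    """grammar(t) = rules(t) ++ grammar(s1) ++ ..., where ++ concatenates
    production sequences while dropping duplicate rules (Section 2.3)."""
    out, worklist, done = [], [t], set()
    while worklist:
        ty = worklist.pop(0)
        if ty in done:
            continue
        done.add(ty)
        for rule in rules_of.get(ty, []):  # rules(ty); empty for parameters
            if rule not in out:
                out.append(rule)
            _, _, args = rule
            worklist.extend(args)  # argument positions are themselves types
    return out

# Example 2's grammar: positions list(abc) and abc, five rules in total.
defs = {
    "list(abc)": [("list(abc)", "[]", []),
                  ("list(abc)", "[|]", ["abc", "list(abc)"])],
    "abc": [("abc", "a", []), ("abc", "b", []), ("abc", "c", [])],
}
g = grammar("list(abc)", defs)
```

The `done` set plays the role of the regularity restriction: it guarantees the traversal terminates with a finite grammar.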
Mode inference and checking takes place after type check-
ing, so we assume that we know the type of every variable
appearing in the program.
2.4 Instantiations and Modes
An instantiation describes the binding pattern of a variable
at a particular point in the execution of the program. A
mode is a mapping from one instantiation to another which
describes how the instantiation of a variable changes over
the execution of a goal.
Instantiations are also defined using tree grammars. The
differences are that no instantiation associated with a predicate
involves instantiation parameters (no polymorphic
modes, although these are a possible extension), and there
are two base instantiations: free and ground, representing
completely uninitialized variables and completely bound
terms. Instantiation expressions are terms in T(Σinst ∪ {free, ground}, Vinst).
Definition 4. An instantiation definition for g ∈ Σinst is
of the form:
inst g(w1, . . . , wn) == bound(g1(i11, . . . , i1m1) ; · · · ; gk(ik1, . . . , ikmk)).
where w1, . . . , wn are distinct instantiation parameters,
{g1/m1, . . . , gk/mk} ⊆ Σtree are distinct tree constructors,
and the iuv are instantiation expressions in T(Σinst ∪
{free, ground}, Vinst).
We can associate a set of production rules rules(i) with
an instantiation expression i just as we do for type expressions.
For the base instantiations free and ground, we define
rules to return the empty sequence.
Example 3. For example the instantiation definition
inst list_skel(I) == bound([] ; [I | list_skel(I)]).
defines an instantiation list_skel(I). A variable with this instantiation
must be bound, either to an empty list ([]), or
to a cons cell whose first argument has the instantiation
given by instantiation variable I and whose tail also has the
instantiation list_skel(I). For example list_skel(free) describes
a list in which the elements are all free.
Each instantiation is usually intended to be used with a
specific type, e.g. list skel(I) with list(T), and it normally
lists all the function symbols that variables of that type can
be bound to. Instantiations that do not do so, such as
inst non_empty_skel(I) == bound([I | list_skel(I)]).
represent a kind of subtyping; a variable whose instantiation
is non_empty_skel(free) cannot be bound to [].
Definition 5. A mode i >> f is a mapping from an initial
instantiation i to a final instantiation f . Common modes
have shorthand expressions, e.g. in = ground >> ground and
out = free >> ground. A goal that changes the instantiation
state of a position from free to ground is said to produce or
bind that position; a goal that requires the initial instantiation
state of a position to be ground is said to consume that
position.
3. CURRENT MODE ANALYSIS SYSTEM
The mode analysis algorithm currently used by the Mercury
compiler is based on abstract interpretation. The abstract
domain maps each program variable to an instantia-
tion. Before mode analysis, the compiler creates a separate
procedure for each mode of usage of each predicate. It then
analyses each procedure separately.
Starting with the initial instantiation states for each argument
given by the mode declaration, the analysis traverses
the procedure body goal determining the instantiation state
of each variable at each point in the goal. When traversing
conjunctions, if a conjunct is not able to be scheduled
because it needs as input a variable that is not sufficiently
bound, it is delayed and tried again later. Once the goal
has been analysed, if there are no unschedulable subgoals
and the computed final instantiation states of the arguments
match their final instantiation states in the mode declaration,
the procedure is determined to be mode correct. See [8]
for more details (although those papers talk about other
languages, the approach in Mercury is similar).
The system is also able to do mode inference for predicates
which are not exported from their defining module, using
a top down traversal of the module. However, to prevent
combinatorial explosion, the mode analysis algorithm does
not reorder conjunctions when performing inference. When
it arrives at a call, it assumes that the called predicate is
supposed to be able to run given only the variables that
have been bound to its left.
The mode analysis system has several tasks that it does
all at once. It must (1) determine the producer and the
consumers of each variable; (2) reorder conjunctions to ensure
that all consumers of a variable are executed after its
producer; and (3) ensure that sub-typing constraints are
met. This leads to a very complicated implementation.
One of the aims of our constraint-based approach is to simplify
the analysis by splitting these tasks up into distinct
phases which can be done separately.
3.1 Limitations
There are two main problems with the above approach.
The first is that it does not keep track of aliasing informa-
tion. This has two consequences. First, without may-alias
information about bound nodes we cannot handle unique
data structures in a nontrivial manner; in particular, we
cannot implement compile-time garbage collection. Second,
without must-alias information about free nodes, we cannot
do accurate mode analysis of code that manipulates partially
instantiated data structures.
Partially instantiated data structures are data structures
that contain free variables. They are useful when one wants
different parts of a data structure to be filled in by different
parts of a program.
Example 4. Consider the following small program.
:- pred length(list(int), int).
:- mode length(free >> list_skel(free), in) is det.
length(L, N) :- ...
:- pred iota(list(int), int).
:- mode iota(list_skel(free) >> ground, in) is det.
In the goal length(L, 10), iota(L, 3), length/2 constructs
the skeleton of a list with a specified length and
iota/2 fills in the elements of the list. The current system
is unable to verify the mode correctness of the second disjunct
of iota/2. One problem is that this disjunct sets up
an alias between the variable H and the first element of L
(which are both initially free variables), and then instantiates
H by unifying it with the second argument. Without
information about the aliasing between H and the first element
of L, the mode checker is unable to determine that this
also instantiates the first element of L.
The second problem is that the absence of reordering during
mode inference prevents many correct modes from being
detected.
Example 5. Consider mode inference for the predicate
:- pred append3(list(T), list(T), list(T), list(T)).
Inference will find only the mode append3(in,in,in,out); it
will not find the mode append3(out,out,out,in).
The reordering restriction cannot simply be lifted, because
without it, the current mode inference algorithm can explore
arbitrarily large numbers of modes, which will in fact be
useless, since it does not "look ahead" to see if the modes
inferred for a called predicate will be useful in constructing
the desired mode for the current predicate.
4. SIMPLIFIED EXAMPLE
The motivation for our constraint based mode analysis
system is to avoid the problems in the current system. In
order to do so, we break the mode analysis problem into
phases. The first phase determines which subgoals produce
which variables, while the second uses that information to
determine an execution order for the procedure. For now,
we will focus on the first task; we will return to the second
in Section 6.
For ease of explanation, we will first show a simplified form
of our approach. This simplified form requires variables to
be instantiated all at once (i.e. the only two instantiation
states it recognizes are free and ground) and requires all variables
to eventually reach the ground state. This avoids the
complexities that arise when different parts of variables are
bound at di#erent times, or some parts are left unbound.
We will address those complexities when we give the full
algorithm in Section 5.
4.1 Constraint Generation
The algorithm associates several constraint variables with
each program variable. With every program variable V , we
associate a family of constraint variables of the form Vp ; Vp
is true i# V is produced by the goal at path p in the predicate
body.
We explain the algorithm using append/3. The code below
is shown after transformation by the compiler into superho-
mogeneous form as described in Section 2.1. We also ensure
that each variable appears in at most one argument of one
functor by adding extra unifications if necessary.
:- pred append(list(T), list(T), list(T)).
append(A, B, C) :-
    ( A = [], B = C
    ; A = [AH | AT], C = [CH | CT], AH = CH, append(AT, B, CT)
    ).
We examine the body and generate constraints from it.
The body is a disjunction, so the constraints we get simply
specify, for each variable nonlocal to the disjunction, that if
the disjunction produces a variable, then all disjuncts must
produce that variable, while if the disjunction does not produce
a variable, then no disjunct may produce that variable.
For append, this is expressed by the constraints:
(A_ε ↔ A_d1) ∧ (A_ε ↔ A_d2)
(B_ε ↔ B_d1) ∧ (B_ε ↔ B_d2)
(C_ε ↔ C_d1) ∧ (C_ε ↔ C_d2)
We then process the disjuncts one after another. Both disjuncts
are conjunctions. When processing a conjunction, our
algorithm considers each variable occurring in the conjunction
that has more than one potential producer. If a variable
is nonlocal to the conjunction, then it may be produced either
inside or outside the conjunction; if a variable is shared
by two or more conjuncts, then it may be produced by any
one of those conjuncts. The algorithm generates constraints
that make sure that each variable has exactly one producer.
If the variable is local, the constraints say that exactly one
conjunct must produce the variable. If the variable is non-
local, the constraints say that at most one conjunct may
produce the variable.
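The two constraint shapes used here, "exactly one producer" for local variables and "at most one producer" for nonlocals, are simple cardinality conditions over the candidate producers. A sketch (Python; our own illustration, not compiler code):

```python
from itertools import product

def at_most_one(flags):
    """At most one candidate goal produces the variable (nonlocal case)."""
    return sum(flags) <= 1

def exactly_one(flags):
    """Exactly one candidate goal produces the variable (local case)."""
    return sum(flags) == 1

# For a local variable shared by three conjuncts there are exactly three
# admissible producer assignments; for a nonlocal there are four (the
# variable may also be produced outside the conjunction).
local_solutions = [f for f in product([False, True], repeat=3)
                   if exactly_one(f)]
nonlocal_solutions = [f for f in product([False, True], repeat=3)
                      if at_most_one(f)]
```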
In the first disjunct, there are no variables shared among
the conjuncts, so the only constraints we get are those ones
that say that each nonlocal is produced by the conjunction
iff it is produced by the only conjunct in which it appears:
A_d1 ↔ A_d1.c1,  B_d1 ↔ B_d1.c2,  C_d1 ↔ C_d1.c2
The first conjunct in the first disjunct yields no nontrivial
constraints. Intuitively, the lack of constraints from this goal
reflects the fact that A = [] can be used both to produce A
and to test its value.
The second conjunct in the first disjunct yields one con-
straint, which says that the goal B = C can be used to produce
at most one of B and C:
¬(B_d1.c2 ∧ C_d1.c2)
For the second disjunct, we generate constraints analogous
to those for the first disjunct for the nonlocal variables. But
this disjunct, unlike the first, contains some shared local
variables: AH, CH, AT and CT, each of which appears in two
conjuncts. Constraints for these variables state that each
of these variables must be produced by exactly one of the
conjuncts in which it appears.
The first conjunct in the second disjunct shows how we handle
unifications of the form X = f(Y1, . . . , Yn).
The key to understanding the behavior of our algorithm in
this case is knowing that it is trying to decide between only
two alternatives: either this unification takes all the Yi's as
input and produces X, or it takes X as input and produces
all the Y i s. This is contrary to most people's experience,
because in real programs, unifications of this form can also
be used in ways that make no bindings, or that produce
only a subset of the Yi. However, using this unification in
a way that requires both X and some or all of the Yi to
be input is possible only if those Yi have producers outside
this unification. When we transform the program into su-
perhomogeneous form, we make sure that each unification
of this form has fresh variables on the right hand side. So
if a Yi could have such a producer, it would be replaced on
the right hand side of the unification with a new variable
Yi', with the addition of a new unification Yi = Yi'. That
is, we convert unifications that take both X and some or all
of the Yi to be input into unifications that take only X as
input, and produce all the variables on the right hand side.
If some of the variables on the right hand side appear only
once then those variables must be unbound, and using this
unification to produce X would create a nonground term.
Since the simplified approach does not consider nonground
terms, in such cases it generates an extra constraint that
requires X to be input to this goal.
In the case of A = [AH | AT], both AH and AT appear
elsewhere so we get the constraints:
AH_d2.c1 ↔ AT_d2.c1
¬(A_d2.c1 ∧ AH_d2.c1)
The first says that either this goal produces all the variables
on the right hand side, or it produces none of them. In
conjunction with the first, the second constraint says that
the goal cannot produce both the variable on the left hand
side, and all the variables on the right hand side.
The constraints we get for C = [CH | CT] are analogous:
CH_d2.c2 ↔ CT_d2.c2
¬(C_d2.c2 ∧ CH_d2.c2)
The equation AH = CH acts just as the equation B = C in the first
disjunct, generating:
¬(AH_d2.c3 ∧ CH_d2.c3)
The last conjunct is a call, in this case a recursive call. We
assume that all calls to a predicate in the same SCC as the
caller have the same mode.¹ This means that the call produces
its ith argument iff the predicate body produces its ith
argument. This leads to one constraint for each argument
position:
AT_d2.c4 ↔ A_ε,  B_d2.c4 ↔ B_ε,  CT_d2.c4 ↔ C_ε
This concludes the set of constraints generated by our algorithm
for append.
4.2 Inference and Checking
The constraints we generate for a predicate can be used
to infer its modes. Projecting onto the head variables, the
constraint set we just built up has five different solutions, so
append has five modes:
¬A_ε ∧ ¬B_ε ∧ ¬C_ε   append(in, in, in)
¬A_ε ∧ B_ε ∧ ¬C_ε   append(in, out, in)
A_ε ∧ ¬B_ε ∧ ¬C_ε   append(out, in, in)
A_ε ∧ B_ε ∧ ¬C_ε   append(out, out, in)
¬A_ε ∧ ¬B_ε ∧ C_ε   append(in, in, out)
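These five solutions can be recovered by brute-force enumeration, a workable stand-in for the ROBDD machinery when the variable count is tiny. In the sketch below (Python; the single-formula projection C_ε → ¬A_ε ∧ ¬B_ε is our own simplification of the constraint set, chosen because it has exactly the five models listed above):

```python
from itertools import product

def mode(produced):
    """Render one solution over (A, B, C) as a mode: a produced
    argument is 'out', an unproduced one is 'in'."""
    return tuple("out" if p else "in" for p in produced)

# Projection of append's constraints onto the head variables:
# C may be produced inside append only if A and B are not.
constraint = lambda a, b, c: (not c) or (not a and not b)

solutions = [mode((a, b, c))
             for a, b, c in product([False, True], repeat=3)
             if constraint(a, b, c)]
```

Enumerating all eight assignments and filtering leaves exactly the five modes of the table.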
Of these five modes, two (append(in, in, out) and
append(out, out, in)) are what we call principal modes.
The other three are implied modes, because their existence
is implied by the existence of the principal modes; changing
the mode of an argument from out to in makes the
job of a predicate strictly easier. In the rest of the pa-
per, we assume that every predicate's set of modes is downward
closed, which means that if it contains a mode pm,
it also contains all the modes implied by pm. In prac-
tice, the compiler generates code for a mode only if it is
declared or it is a principal mode, and synthesizes the other
modes in the caller by renaming variables and inserting extra
unifications. This synthesis does the superhomogeneous
form equivalent of replacing append([1], [2], [3]) with
append([1], [2], X), X = [3].
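Downward closure is directly computable: every implied mode is obtained by independently flipping out arguments to in. A small sketch (Python; our own illustration, not compiler code) recovers all five modes of append from its two principal modes:

```python
from itertools import product

def implied(mode):
    """All modes implied by `mode`: each 'out' may independently
    become 'in' (making the predicate's job strictly easier)."""
    options = [("in",) if arg == "in" else ("in", "out") for arg in mode]
    return set(product(*options))

def downward_closure(modes):
    """Union of the implied-mode sets of every mode in `modes`."""
    closure = set()
    for m in modes:
        closure |= implied(m)
    return closure

principal = {("in", "in", "out"), ("out", "out", "in")}
all_modes = downward_closure(principal)
```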
Each solution also implicitly assigns modes to the primitive
goals in the body, by specifying where each variable is
produced. For example, the solution that assigns true to the
constraint variables C_ε, C_d1, C_d1.c2, C_d2 and C_d2.c2, and
false to all others, which corresponds
to the mode append(in,in,out), also shows that
A = [AH | AT] is a deconstruction (i.e. uses the fields of A to
define AH and AT) while C = [CH | CT] is a construction (i.e.
it binds C to a new heap cell).
In most cases, the values of the constraint variables of
the head variables uniquely determine the values of all the
other constraint variables. Sometimes, there is more than
one set of value assignments to the constraint variables in
the body that is consistent with a given value assignment
for the constraint variables in the head. In such cases, the
compiler can choose the assignment it prefers.
5. FULL MODE INFERENCE
5.1 Expanded Grammars
We now consider the problem of handling programs in
which different parts of a variable may be instantiated by
different goals. [Footnote 1: This assumption somewhat restricts the set of
allowable well-moded programs. However, we have not found this to
be an unreasonable restriction in practice. We have not
come across any cases in typical Mercury programs where
one would want to make a recursive call in a different mode.]
We need to ensure that if two distinct positions
in a variable may have different instantiation behaviour,
then we have a way of referring to each separately.
Hence we need to expand the type grammar associated with
that variable.
We begin with an empty grammar and with the original
code of the predicate expressed in superhomogeneous normal
form. We modify both the grammar and the predicate body
during the first stage of mode analysis.
If the unification X = f(A1, . . . , An) appears in the definition
of the predicate, then
. if X has no grammar rule for functor f/n, add a rule
X → f(A1, . . . , An), and for each Ai which already
occurs in a grammar rule or in the head of the clause,
replace each occurrence of Ai in the program by A'i
and add an equation A'i = Ai;
. if X has a grammar rule X → f(B1, . . . , Bn), replace
the equation by X = f(B1, . . . , Bn) and add the
equations Ai = Bi for each i.
This process may add equations of the form X = Y where
one of X and Y occurs nowhere else. Such equations can be
safely removed.
After processing all such unifications, add a copy of the
rules in rules(t) for each grammar variable V of type t which
does not have them all.
Example 6. The superhomogeneous form of the usual source code for append has (some variant of)
A=[AH | AT], C=[AH | CT], append(AT,B,CT)
as its second clause, which our algorithm replaces with
A=[AH | AT], C=[CH | CT], AH = CH, append(AT,B,CT)
yielding the form we have shown in Section 4. The expanded grammar I computed for append is
A → [] | [AH | AT]    AT → [] | [AE | AT]
B → [] | [BE | B]
C → [] | [CH | CT]    CT → [] | [CE | CT]
The nonterminals of this grammar constitute the set of positions
for which we will create constraint variables when we
generate constraints from the predicate body, so from now
we will use "nonterminal" and "position" (as well as "node")
interchangeably. Note that there is a nonterminal denoting the top-level functor of every variable, and that some variables (e.g. A) have other nonterminals denoting some of their non-top-level functors as well. Note also that a nonterminal
can fulfill both these functions, when one variable is strictly
part of another. For example, the nonterminal AH stands
both for the variable AH and the first element in any list
bound to variable A, but the variables B and C, which are
unified on some computation paths, each have their own
nonterminal.
A predicate needs three Boolean variables for each position V: (a) V_in is true if the position is produced outside the predicate, (b) V_Δ is true if the position is produced inside the predicate, and (c) V_out is true if the position is produced somewhere (either inside or outside the predicate). Note that V_out ↔ V_in ∨ V_Δ.
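Treating each position's three propositions as a tiny Boolean constraint makes this relationship easy to check. The following sketch is our own illustration (not from the paper): it enumerates the assignments a single position can take under the invariant above together with the structural restriction, introduced below, that a position cannot be both bound at call and produced inside the predicate.

```python
from itertools import product

# Each position V gets three Boolean constraint variables:
#   v_in  - V is produced outside the predicate (bound at call)
#   v_dlt - V is produced inside the predicate
#   v_out - V is produced somewhere (bound at return)
def consistent(v_in, v_dlt, v_out):
    # invariant: V_out <-> V_in \/ V_delta
    # structural: a position cannot be bound at call AND produced inside
    return (v_out == (v_in or v_dlt)) and not (v_in and v_dlt)

# Enumerate the assignments allowed for a single position.
allowed = [triple for triple in product([False, True], repeat=3)
           if consistent(*triple)]
```

Only three of the eight assignments survive: unbound everywhere, bound by the caller, or bound inside the predicate.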
Definition 6. Let args(p/n) be the tuple ⟨H1, ..., Hn⟩ of head variables (i.e. formal parameters) for predicate p/n.
Definition 7. For an expanded grammar I and position X, we define the immediate descendants of X as
d_I(X) = { Yi | (X → f(Y1, ..., Yn)) ∈ I }
and the set of positions reachable from X, written δ_I(X), as the least set containing X that is closed under d_I.
When we are generating constraints between two variables
(which will always be of the same type) we will need to be
able to refer to positions within the two variables which
correspond to each other, as e.g. AH and CH denote corresponding
positions inside A and C. The notion of correspondence
which we use allows for the two variables to have been expanded to different extents in the expanded grammar. For example, A has more descendant nonterminals in append's expanded grammar than B, even though they have the same type. In the unification A = B, the nonterminal B would correspond to AT as well as A.
Definition 8. For expanded grammar I and positions X and Y, we define the set of pairs of corresponding nodes in X and Y, written χ_I(X, Y), as the set of pairs ⟨V, W⟩ such that V and W are reachable from X and Y respectively via the same sequence of functor argument selections. For convenience, we also define χ for a pair of n-tuples:
χ_I(⟨X1, ..., Xn⟩, ⟨Y1, ..., Yn⟩) = ⋃_{1≤i≤n} χ_I(Xi, Yi)
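The lockstep-walk reading of this definition can be sketched as follows. This is our own illustration (the function name and grammar encoding are invented, and the real definition also carries functor information): walking both grammars in step naturally lets a less-expanded variable such as B correspond to several nodes of a more-expanded one such as A.

```python
# rules maps a nonterminal to the child nonterminals of its grammar rule;
# pairs are collected by walking both grammars in lockstep.
def corresponding(rules, x, y):
    pairs, todo = set(), [(x, y)]
    while todo:
        v, w = todo.pop()
        if (v, w) in pairs:
            continue
        pairs.add((v, w))
        for child_pair in zip(rules.get(v, []), rules.get(w, [])):
            todo.append(child_pair)
    return pairs

# append's expanded grammar, list-skeleton rules only
rules = {"A": ["AH", "AT"], "AT": ["AE", "AT"], "B": ["BE", "B"]}
pairs = corresponding(rules, "A", "B")
```

As the text notes, B ends up corresponding to both A and AT, because B's grammar rule is recursive while A's expansion introduces the separate nonterminal AT.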
Definition 9. Given an expanded grammar I and a rule (X → f(Y1, ..., Yn)) ∈ I, we say that X is the parent node of each of the nodes Y1, ..., Yn.
5.2 Mode Inference Constraints
We ensure that no variable occurs in more than one predicate, renaming them as necessary. We construct an expanded grammar I for P, the program module being compiled. We next group the predicates in the module into strongly-connected components (SCCs), and process these SCCs bottom up, creating a function C_SCC for each SCC. This represents the Boolean constraints that we generate for the predicates in that SCC. The remainder of this section defines C_SCC.
The constraint function C_SCC(I, S) for an SCC S is the conjunction of the constraint functions C_Pred(I, p/n) we generate for all predicates p/n in that SCC, i.e.
C_SCC(I, S) = ⋀_{p/n ∈ S} C_Pred(I, p/n)
The constraint function we infer for a predicate p/n is the constraint function of its SCC, i.e. C_Inf(I, p/n) = C_SCC(I, S) for each p/n ∈ S. C_Inf(I, p/n) may be stricter than C_Pred(I, p/n) if p/n is not alone in its SCC. For predicates defined in other modules, we derive their C_Inf from their mode declarations using the mechanism we will describe in Section 5.3.
C_Pred itself is the conjunction of two functions: C_Struct, the structural constraints relating in and out variables, and C_Goal, the constraints for the predicate body goal:
C_Pred(I, (p(H1, ..., Hn) :- G)) = C_Struct(I, {H1, ..., Hn}, G) ∧ C_Goal(I, ε, G)
We define C_Struct and C_Goal below.
5.2.1 Structural Constraints
V_in is the proposition that V is bound at call. V_out is the proposition that V is bound at return. V_Δ is the proposition that V is bound by this predicate. These constraints capture the relationships between the above variables, and the relationships of boundedness at different times.
If a node is not reachable from one of the predicate's argument
variables, then it cannot be bound at call.
A node is bound at return if it is bound at call or it is
produced within the predicate body. A node may not be
both bound at call and produced in the predicate body.
If a node is bound at call then its parent node must also
be bound at call. Similarly, if a node is bound at return
then its parent node must also be bound at return.
C_Struct(I, Args, G) =
⋀_{V ∉ δ_I(Args)} ¬V_in
∧ ⋀_V ( (V_out ↔ V_in ∨ V_Δ) ∧ ¬(V_in ∧ V_Δ)
∧ ⋀_{D ∈ d_I(V)} ((D_in → V_in) ∧ (D_out → V_out)) )
Example 7. For append, the structural constraints are:
A_out ↔ A_in ∨ A_Δ, ¬(A_in ∧ A_Δ),
AH_out ↔ AH_in ∨ AH_Δ, ¬(AH_in ∧ AH_Δ),
AT_out ↔ AT_in ∨ AT_Δ, ¬(AT_in ∧ AT_Δ),
AE_out ↔ AE_in ∨ AE_Δ, ¬(AE_in ∧ AE_Δ),
B_out ↔ B_in ∨ B_Δ, ¬(B_in ∧ B_Δ),
BE_out ↔ BE_in ∨ BE_Δ, ¬(BE_in ∧ BE_Δ),
C_out ↔ C_in ∨ C_Δ, ¬(C_in ∧ C_Δ),
CH_out ↔ CH_in ∨ CH_Δ, ¬(CH_in ∧ CH_Δ),
CT_out ↔ CT_in ∨ CT_Δ, ¬(CT_in ∧ CT_Δ),
CE_out ↔ CE_in ∨ CE_Δ, ¬(CE_in ∧ CE_Δ),
AH_in → A_in, AH_out → A_out,
AT_in → A_in, AT_out → A_out,
AE_in → AT_in, AE_out → AT_out,
BE_in → B_in, BE_out → B_out,
CH_in → C_in, CH_out → C_out,
CT_in → C_in, CT_out → C_out,
CE_in → CT_in, CE_out → CT_out
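The structural constraints can be generated mechanically from the parent/child edges of the expanded grammar. The following small checker is our own illustration (the three propositions of each position are modelled as a Python tuple, and the model shown for the (in, in, out) mode is ours):

```python
# parent -> immediate descendants in append's expanded grammar
children = {"A": ["AH", "AT"], "AT": ["AE", "AT"], "B": ["BE", "B"],
            "C": ["CH", "CT"], "CT": ["CE", "CT"]}
nodes = ["A", "AH", "AT", "AE", "B", "BE", "C", "CH", "CT", "CE"]

def structural_ok(m):
    """m maps node -> (v_in, v_dlt, v_out); checks the structural rules."""
    for v in nodes:
        v_in, v_dlt, v_out = m[v]
        # V_out <-> V_in \/ V_delta, and not (V_in /\ V_delta)
        if v_out != (v_in or v_dlt) or (v_in and v_dlt):
            return False
    for parent, kids in children.items():
        for kid in set(kids):
            # D_in -> V_in and D_out -> V_out for each child D of V
            if m[kid][0] and not m[parent][0]:
                return False
            if m[kid][2] and not m[parent][2]:
                return False
    return True

# (in, in, out) mode of append: A and B bound at call, C produced inside.
mode = {}
for v in ["A", "AH", "AT", "AE", "B", "BE"]:
    mode[v] = (True, False, True)
for v in ["C", "CH", "CT", "CE"]:
    mode[v] = (False, True, True)
```

Flipping any single proposition in this model, for example marking AH as both bound at call and produced inside, makes the checker reject it.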
5.2.2 Goal Constraints
There is a Boolean variable V_p for each goal path p and each position V such that the goal at p contains a program variable X with V ∈ δ_I(X). This variable represents the proposition that position V is produced in the goal referred to by the path p.
The constraints we generate for each goal fall into two categories: general constraints that apply to all goal types (C_Gen), and constraints specific to each goal type (C_Goal). The complete set of constraints for a goal (C_Comp) is the conjunction of these two sets.
The general constraints have two components. The first says that any node reachable from a variable that is local to a goal will be bound at return iff it is produced by that goal. The second, C_Ext, says that a node reachable from a variable V that is external to the goal G (i.e. does not occur in G) cannot be produced by G. The conjunction in the definition of C_Ext could be over all the variables in the predicate that do not occur in G. However, if a variable V does not occur in G's parent goal, then the parent's C_Goal constraints won't mention V, so there is no point in creating
5.2.3 Compound Goals
The constraints we generate for each kind of compound goal (conjunction, disjunction and if-then-else) are shown in Figure 1. In each case, the goal-type-specific constraints are conjoined with the complete set of constraints from all the subgoals.
In each conjunctive goal a position can be produced by at
most one conjunct.
In each disjunctive goal a node either must be produced
in each disjunct or not produced in each disjunct.
For an if-then-else goal a node is produced by the if-then-else if and only if it is produced in either the condition, the then branch or the else branch. A node may not be produced
by both the condition and the then branch. Nodes reachable
from variables that are nonlocal to the if-then-else must not
be produced by the condition. If a node reachable from a
nonlocal variable is produced by the then branch then it
must also be produced by the else branch, and vice versa.
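The conjunction and disjunction rules can be phrased as checks on the produced-sets of the subgoals. The following sketch is our own illustration, not the compiler's representation; the sample sets for append's second clause are illustrative (the set for the recursive call follows Example 12 later in the paper).

```python
def conj_ok(produced_by_conjunct):
    """A position may be produced by at most one conjunct."""
    seen = set()
    for prod in produced_by_conjunct:
        if seen & prod:
            return False
        seen |= prod
    return True

def disj_ok(produced_by_disjunct):
    """Each position must be produced in every disjunct or in none."""
    sets = [frozenset(p) for p in produced_by_disjunct]
    return len(set(sets)) <= 1

# Conjunction (A=[AH|AT], C=[CH|CT], AH=CH, append(AT,B,CT)) in the
# (in, in, out) mode: distinct conjuncts produce distinct positions.
conjuncts = [set(), {"C"}, {"CH"}, {"CT", "CE"}]
```

Both checks are local to one compound goal; in the real analysis they are Boolean constraints over the V_p variables rather than explicit sets.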
5.2.4 Atomic Goals
Due to space considerations, we leave out discussion of
higher-order terms which may be handled by a simple extension
to the mode-checking algorithm.
We consider three kinds of atomic goals:
1. Unifications of the form X = Y.
2. Unifications of the form X = f(Y1, ..., Yn).
3. Calls of the form q(Y1, ..., Yn).
A unification of the form X = Y may produce at most one of each pair of corresponding nodes. Mercury does not allow aliases to exist between unbound nodes, so each node reachable from a variable involved in a unification must be produced somewhere. 2 For a unification X = Y at goal path p, the constraints C_Goal(I, p, X = Y) are, for each pair ⟨V, W⟩ of corresponding nodes in X and Y:
¬(V_p ∧ W_p) ∧ V_out ∧ W_out
Example 8. For the unification B = C in append at goal path d1.c2 the constraints generated are:
2 During the goal scheduling phase, we further require that
a node must be produced before it is aliased to another
node. These two restrictions together disallow most uses of
partially instantiated data structures. In the future, when
the Mercury implementation can handle the consequences,
we would like to lift both restrictions.
A unification of the form X = f(Y1, ..., Yn) at path p does not produce any of the arguments Y1, ..., Yn. X must be produced somewhere (either at p or somewhere else). The constraints C_Goal(I, p, X = f(Y1, ..., Yn)) are
X_out ∧ ⋀_{1≤i≤n} ¬(Yi)_p
Example 9. For the unification A = [AH | AT] in append at goal path d2.c1 the constraints generated are:
A call q(Y1, ..., Yn) will constrain the nodes reachable from the arguments of the call. For predicates in the current SCC, we only allow recursive calls that are in the same mode as the caller. The constraints C_Goal(I, p, q(Y1, ..., Yn)) are, for each pair ⟨V, W⟩ of corresponding nodes in the head variables ⟨H1, ..., Hn⟩ of q/n and the actual parameters ⟨Y1, ..., Yn⟩:
(V_Δ ↔ W_p) ∧ (V_in → W_out)
The first part ensures that the call produces the position if the position is produced by the predicate in the SCC. The second part ensures that call variable W is produced somewhere if it is required to be bound at call to the call (V_in). Since V_Δ will not be true when V_in is true, we can't mistakenly use this call site to produce W.
Example 10. For the recursive call append(AT,B,CT) at
goal path d2 .c4 in append the constraints generated on the
first argument are:
For calls to predicates in lower SCCs the constraints are similar, but we must existentially quantify the constraint variables of the callee's head variables, so that it is possible to call the predicate in different modes from different places within the current SCC:
C_Goal(I, p, q(Y1, ..., Yn)) = ∃ ( C_Inf(I, q/n) ∧ ⋀_{⟨V,W⟩} ((V_Δ ↔ W_p) ∧ (V_in → W_out)) )
where the conjunction ranges over the pairs of corresponding nodes in the head variables of q/n and ⟨Y1, ..., Yn⟩, and the quantification is over the constraint variables of q/n.
5.3 Mode Declaration Constraints
For any predicate which has modes declared, the mode
analysis system should check these declarations against the
inferred mode information. This involves generating a set
of constraints for the mode declarations and ensuring that
they are consistent with the constraints generated from the
predicate body.
The declaration constraint C_Decls(I, D) for a predicate with a set of mode declarations D is the disjunction of the constraints C_Decl(I, d) for each mode declaration d:
C_Decls(I, D) = ⋁_{d ∈ D} C_Decl(I, d)
The constraint C_Decl(I, d) for a mode declaration p(m1, ..., mn) for a predicate p(H1, ..., Hn) is the conjunction of the constraints C_Arg(I, m, H) for each argument mode m with corresponding head variable H:
C_Decl(I, p(m1, ..., mn)) = ⋀_{1≤i≤n} C_Arg(I, mi, Hi) ∧ C_Struct(I, {H1, ..., Hn}, ∅)
The structural constraints are used to determine the H_Δ variables from H_in and H_out.
The constraint C_Arg(I, m, H) for an argument mode (i >> f) of head variable H is the conjunction of the constraint C_Init(I, i, H) for the initial instantiation state i, and the constraint C_Fin(I, f, H) for the final instantiation state f:
C_Arg(I, i >> f, H) = C_Init(I, i, H) ∧ C_Fin(I, f, H)

Figure 1: Constraints for conjunctions, disjunctions and if-then-elses.
The constraint C_Init(I, i, H) for an initial instantiation state i of a head variable H is given below:
C_Init(I, free, H) = ⋀_W ¬W_in
C_Init(I, ground, H) = ⋀_W W_in
The constraint C_Fin(I, f, H) for a final instantiation state f of a head variable H is given below:
C_Fin(I, free, H) = ⋀_W ¬W_out
C_Fin(I, ground, H) = ⋀_W W_out
In each case W ranges over the positions reachable from H. Structured instantiation states such as listskel(free) are handled by constraining the skeleton positions to be bound and applying the constraint recursively to the element positions.
Mode checking is simply determining whether the declared modes are at least as strong as the inferred modes. For each declared mode d of predicate p/n we check whether the implication
C_Decl(I, d) → C_Inf(I, p/n)
holds or not. If it doesn't, the declared mode is incorrect. If we are given declared modes D for a predicate p/n, they can be used to short-circuit the calculation of SCCs, since we can use C_Decls(I, D) in mode inference for predicates q/m that call p/n.
Example 11. Given the mode definition:
:- mode lsg == (listskel(free) >> ground).
the mode declaration d1 for append gives C_Decl(I, d1) (ignoring V_out variables):
A_in ∧ ¬AH_in ∧ AT_in ∧ ¬AE_in ∧ B_in ∧ ¬BE_in ∧
C_in ∧ CH_in ∧ CT_in ∧ CE_in ∧
¬A_Δ ∧ AH_Δ ∧ ¬AT_Δ ∧ AE_Δ ∧ ¬B_Δ ∧ BE_Δ ∧
¬C_Δ ∧ ¬CH_Δ ∧ ¬CT_Δ ∧ ¬CE_Δ
We can show that C_Decl(I, d1) → C_Inf(I, append/3).
6. SELECTING PROCEDURES AND EXECUTION ORDER
Once we have generated the constraints for an SCC, we can solve those constraints. If the constraints have no solution, then some position has consumers but no producer, so we report a mode error. If the constraints have some solutions, then each solution gives a mode for each predicate in the SCC; the set of solutions thus defines the set of modes of each predicate. We then need to find a feasible execution order for each mode of each predicate in the SCC. The algorithm for finding feasible execution orders takes a solution as its input. If a given mode of a predicate corresponds to several solutions, it is sufficient for one of them to have a feasible ordering.
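The role of the solution set can be seen in miniature by brute-force enumeration. This is our own sketch (the paper solves these constraints with ROBDDs, not enumeration): for a toy single-argument predicate whose argument must be bound at return, the two models are exactly the in and out modes.

```python
from itertools import product

def models(nodes, constraint):
    """Enumerate all assignments satisfying `constraint`.
    Each assignment maps node -> (v_in, v_dlt, v_out)."""
    sols = []
    for vals in product(product([False, True], repeat=3), repeat=len(nodes)):
        m = dict(zip(nodes, vals))
        if constraint(m):
            sols.append(m)
    return sols

# Toy predicate p(X): X must be bound at return (X_out must hold).
def c_inf(m):
    x_in, x_dlt, x_out = m["X"]
    return x_out == (x_in or x_dlt) and not (x_in and x_dlt) and x_out

sols = models(["X"], c_inf)
# Two models: X bound at call (mode `in`) or produced inside (mode `out`).
```

Each model then feeds the ordering algorithm described next.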
The main problem in finding a feasible schedule is that the mode analyser and the code generator have distinct views of what it means to produce a (position in a) variable. In the grammar we generate for append, for example, the nonterminal AH represents both the value of the variable AH and the value of the first element of the variable A. In the forward mode of append, AH_in is true, so the mode analyser considers AH to have been produced by the caller even before execution enters append. However, as far as the code generator is concerned, the producer of AH is the unification A = [AH | AT]. It is to cater for these divergent views that we separate the notion of a variable being produced from the notion of a variable being visible.
Definition 10. Given an expanded grammar I, an assignment
M of boolean values to the constraint variables of a
predicate p/n that makes the constraint C Inf (I, p/n) true is
a model of C Inf (I, p/n). We write M |= C Inf (I, p/n).
Definition 11. For a given model M |= C_Inf(I, p/n) we define the set of nodes produced at a goal path p by
produced_M(I, p) = { V | M(V_p) = true }
Definition 12. For a given model M |= C_Inf(I, p/n) we define the set of nodes consumed by the goal G at a goal path p by the formula shown in Figure 2.
For a unification of the form X = Y we say that a node on one side of the equation is consumed iff a corresponding node on the other side is produced. (Due to the symmetric nature of this relationship, if V1 and V2 both correspond to W, then V1 is consumed iff V2 is consumed, and V1 is produced iff V2 is produced.) It is also possible for a pair of corresponding nodes to be neither produced nor consumed by the unification. This can mean one of two things. If the subterms of X and Y at that node are already bound, then the unification will test the equality of those subterms; if they are still free, then it will create an alias between them. Note that if the unification produces either of the top level nodes X or Y, then we call it an assignment unification.
For a unification of the form X = f(Y1, ..., Yn), we say that the node X is consumed iff it is not produced, and that no other nodes are ever consumed. The reason for the latter half of that rule is that our grammar will use the same nonterminal for e.g. Y1 as for the first subterm of X. Since this unification merely creates aliases between the Yi and the corresponding subterms of X, the nonterminals of the Yi cannot be produced by this unification; if they are produced at all, they have to be produced elsewhere. Note that if the unification produces X, then we call it a construction unification; if it consumes X, then we call it a deconstruction unification.

Figure 2: Calculating which nodes are "consumed" at which positions.
For a call to a predicate q, we know which nodes of the actual parameters of the call the model M of the predicate we are analyzing says should be produced by the call. We need to find a model M′ of the constraints of q that causes the corresponding nodes in the actual parameters of q to be output. Since the first stage of the analysis succeeded, we know such a model M′ exists. The consumed nodes of the call are then the nodes of the actual parameters that correspond to the nodes of the formal parameters of q that M′ requires to be input.
For compound goals, the consumed nodes are the union of the consumed nodes of the subgoals, minus the nodes that are produced within the compound goal.
Example 12. In the (in, in, out) mode of append, the produced and consumed sets of the conjuncts include:
Path    produced    consumed
d2.c4   {CT, CE}    {AT, AE, B, BE}
Neither disjunct produces any position that it also consumes. Therefore, if our ordering algorithm only required a node to be produced before it is consumed, it would find any order acceptable. On the other hand, the code generator is more fussy; for example, before it can emit code for the recursive call, it needs to know where variables AH and AT are stored, even if they have not been bound yet. This is why we need the concept of visibility.
Definition 13. A variable is visible at a goal path p if the
variable is a head variable or has appeared in the predicate
body somewhere to the left of p. The functions make visible
and need visible defined in Figure 3 respectively determine
whether a goal makes a variable visible or requires it to be
visible.
Example 13. Given the (in, in, out) mode of append
the make visible and need visible sets of the conjuncts are:
Path make visible need visible
Our algorithm needs to find, in each conjunction in the body, an ordering of the conjuncts such that the producer of each node comes before any of its consumers, and each variable is made visible before any point where it needs to be visible. We do this by traversing the predicate body top down. At each conjunction, we construct a directed graph whose nodes are the conjuncts. The initial graph has an edge from c_i to c_j iff c_i produces a node that c_j consumes. If this graph is cyclic, then mode ordering fails. If it isn't, we try to add more edges while keeping the graph acyclic.
We sort the variables that need to be visible anywhere in the conjunction, and that are also made visible in the conjunction, into two classes: those where it is clear which conjunct should make them visible and those where it isn't. A variable falls into the first class iff it is made visible in only one conjunct, or if a conjunct that makes it visible is also the producer of its top level node. (In the forward mode of append, all variables fall into the first class; the only variable that is made visible in more than one conjunct, CH, does not need to be visible in any conjunct in that conjunction.) For each of these variables, we add an edge from the conjunct that makes the variable visible to all the conjuncts c_j that need it to be visible. If the graph is still acyclic, we then start searching the space of mappings that map each variable in the second class to a conjunct that makes that variable visible, looking for a map that results in an acyclic graph when we add links from the selected make visible conjunct of each variable to all the corresponding need visible conjuncts.
It can also happen that some of the conjuncts need a variable visible that none of the goals in the conjunction make visible. If this variable is made visible by a goal to the left of the whole conjunction, or by another conjunction that encloses this one, then everything is fine. If it isn't, then the ordering of the enclosing conjunction would have already failed, because if no conjunct makes the variable visible, then the conjunction as a whole needs it visible.
If no mapping yields an acyclic graph, the procedure has
a mode error. If some mappings do, then the algorithm in
general has two choices to make: it can pick any acyclic
graph, and it can pick any order for the conjuncts that is
consistent with that graph.
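A sketch of the ordering step follows. This is our own simplification (every variable in the example is in the "first class", so each has a unique maker and no search over mappings is needed); the produced/consumed and visibility sets below are illustrative, not taken from the paper's tables.

```python
from graphlib import CycleError, TopologicalSorter

def order_conjuncts(produced, consumed, make_vis, need_vis):
    """Return a feasible order of conjuncts 0..n-1, or None if cyclic."""
    n = len(produced)
    ts = TopologicalSorter({i: set() for i in range(n)})
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # c_i before c_j if c_i produces a node c_j consumes,
            # or c_i makes a variable visible that c_j needs visible
            if produced[i] & consumed[j] or make_vis[i] & need_vis[j]:
                ts.add(j, i)
    try:
        return list(ts.static_order())
    except CycleError:
        return None

# Second disjunct of append in its forward mode:
# 0: A = [AH|AT]   1: C = [CH|CT] (needs AH)   2: append(AT,B,CT)
order = order_conjuncts(
    produced=[set(), {"C", "CH"}, {"CT", "CE"}],
    consumed=[set(), set(), set()],
    make_vis=[{"A", "AH", "AT"}, {"C", "CH", "CT"}, set()],
    need_vis=[set(), {"AH"}, {"AT", "B"}])
```

Note that conjunct 2 also needs B visible, but B is a head variable made visible by the enclosing scope, so no edge is added for it.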
Figure 3: Calculating make visible and need visible.
All the nodes the forward mode of append consumes are input to the predicate, so there are no ordering constraints between producers and consumers. The first disjunct has no visibility constraints either, so it can be given any order. In the second disjunct, visibility requirements dictate that A = [AH | AT] must occur before both C = [CH | CT] and append(AT, B, CT), to make AH and AT visible where required. This leaves the compiler with this graph:
A = [AH | AT] → C = [CH | CT]
A = [AH | AT] → append(AT, B, CT)
This graph does not completely fix the order of the conjuncts. A parallel implementation may choose to execute several conjuncts in parallel, although in this case that would not be worthwhile. More likely, an implementation may choose to schedule the recursive call last to ensure tail recursion. (With the old mode analyser, we needed a program transformation separate from mode analysis [15] to introduce tail recursion in predicates like this.)
7. EXPERIMENTAL EVALUATION
Our analysis is implemented within the Melbourne Mercury compiler. We represent the Boolean constraints as reduced ordered binary decision diagrams (ROBDDs) [2] using a highly-optimised implementation by Schachte [16], who has shown that ROBDDs provide a very efficient representation for other logic program analyses based on Boolean domains. ROBDDs are directed acyclic graphs with common subexpressions merged. They provide an efficient, canonical representation for Boolean functions.
In the worst case, the size of an ROBDD can be exponential in the number of variables. In practice, however, with a bit of care this worst-case behaviour can usually be avoided. We use a number of techniques to keep the ROBDDs as small and efficient as possible.
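For readers unfamiliar with the data structure, the two defining reductions (hash-consing of nodes and elimination of redundant tests) fit in a few lines. This toy is our own and is nothing like as engineered as the implementation used here; variables are ordered by integer index and the terminals are the ints 0 and 1.

```python
# A toy ROBDD: internal nodes are hash-consed (var, low, high) triples.
_table = {}

def mk(v, low, high):
    if low == high:              # eliminate redundant tests
        return low
    return _table.setdefault((v, low, high), (v, low, high))

def bddvar(v):
    return mk(v, 0, 1)

def apply_op(op, f, g):
    """Combine two ROBDDs with a binary Boolean operator."""
    if isinstance(f, int) and isinstance(g, int):
        return op(f, g)
    fv = f[0] if isinstance(f, tuple) else float("inf")
    gv = g[0] if isinstance(g, tuple) else float("inf")
    v = min(fv, gv)
    f0, f1 = (f[1], f[2]) if fv == v else (f, f)
    g0, g1 = (g[1], g[2]) if gv == v else (g, g)
    return mk(v, apply_op(op, f0, g0), apply_op(op, f1, g1))

OR = lambda a, b: a | b

# The Boolean function V_in \/ V_delta over variables 0 and 1:
f = apply_op(OR, bddvar(0), bddvar(1))
```

Because the representation is canonical, building the same function from its operands in either order yields structurally identical diagrams, which is what makes equivalence and entailment checks cheap.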
We now present some preliminary results to show the feasibility
of our analysis. The timings are all taken from tests
run on a Gateway Select 950 PC with a 950MHz AMD
Athlon CPU, 256KB of L2 cache and 256MB of memory,
running Linux kernel 2.4.16.
Table 1 compares times for mode checking some simple benchmarks. The column labelled "simple" is the time for the simple constraint-based system for ground variables presented in Section 4. The column labelled "full" is for the full constraint-based system presented in Section 5. The column labelled "old" is the time for mode checking in the current Mercury mode checker. The final two columns show the ratios between the new and old systems. All times are in milliseconds and are averaged over 10 runs.

Table 1: Mode checking: ground.
         simple   full  old  simple/old  full/old
cqueens     407    405   17          23        23
crypt      1032   1335   38          27        35
deriv     13166  32541   59         223       551
poly       1348   5245   63          21        83
primes
qsort       847   1084  112           7         9
queens      386    381    9          42        42
query       270    282   11          24        25
tak         204    201    2         102       100
The constraint-based analyses are significantly slower than the current system. This is partly because they are obtaining much more information about the program and thus doing a lot more work. For example, the current system selects a fixed sequential order for conjunctions during the mode analysis (an order that disallows partially instantiated data structures) whereas the constraint-based approaches allow all possible orderings to be considered while building up the constraints. The most appropriate scheduling can then be selected based on the execution model, considering, for example, argument passing conventions (e.g. the possibility of introducing tail calls) and whether the execution is sequential or parallel.
Profiling shows that much of the execution time is spent in building and manipulating the ROBDDs. It may be worth investigating different constraint solvers, such as propagation-based solvers. Another possible method for improving overall analysis time would be to run the old mode analysis first and only use the new analysis for predicates for which the old analysis fails.
It is interesting to observe the differences between the simple constraint system and the full system. None of these benchmarks require partially instantiated data structures, so they are all able to be analysed by the simple system. In some cases, the simple system is not very different from the full system, but in others (particularly in the bigger benchmarks) it is significantly faster. We speculate that this is because the bigger benchmarks benefit more from the reduced number of constraint variables in the simple analysis.

Table 2: Mode checking: partially instantiated.
          check  infer  infer/check
iota        384    472         1.22
append
copytree    150   6174        41.16
Table 2 shows some timings for programs that make use of partially instantiated modes, which the current Mercury system (and the simple constraint-based system) is unable to analyse. Again the times are in milliseconds, averaged over 10 runs.
The "iota" benchmark is the program from Example 4.
The "append" benchmark is the classic append/3 (however,
the checking version has all valid combinations of in, out
and lsg modes declared). The "copytree" benchmark is
a small program that makes a structural copy of a binary
tree skeleton, with all elements in the copy being new free
variables.
The times in the "check" columns are for checking programs
with mode declarations whereas the "infer" column
shows times for doing mode inference with all mode declarations
removed. It is interesting to note the saving in analysis
time achieved by adding mode declarations. This is particularly
notable for the "copytree" benchmark where mode
inference is able to infer many more modes than the one we
have declared. (We can similarly declare only the (in, in,
out) mode of append and reduce the analysis time for that
to 210ms.)
8. CONCLUSION
We have defined a constraint-based approach to mode analysis of Mercury. While it is not as efficient as the current system for mode checking, it is able to check and infer more complex modes than the current system, and de-couples reordering of conjuncts from determining producers. Although not described here, the implementation handles all Mercury constructs, such as higher-order terms.
The constraint-based mode analysis does not yet handle subtyping or unique modes. We plan to extend it to handle these features as well as explore more advanced mode systems: complicated uniqueness modes, where unique objects are stored in and recovered from data structures; polymorphic modes, where Boolean variables represent a pattern of mode usage; and the circular modes needed by client-server programs, where client and server processes (modelled as recursive loops) cooperate to instantiate different parts of a data structure in a coroutining manner.
We would like to thank the Australian Research Council
for their support.
9. REFERENCES
--R
Directional types and the annotation method.
Experimental evaluation of a generic abstract interpretation algorithm for Prolog.
Diagnosing non-well-moded concurrent logic programs
Abstract interpretation of concurrent logic languages.
Static inference of modes and data dependencies in logic programs.
Layered Modes.
Type synthesis for ground Prolog.
Deriving descriptions of possible value of program variables by means of abstract interpretation.
The automatic derivation of mode declarations for Prolog programs.
On the practicality of abstract equation systems.
A polymorphic type system for Prolog.
Typed static analysis: Application to groundness analysis of Prolog and lambda-Prolog
Making Mercury programs tail recursive.
Mode analysis domains for typed logic programs.
A system of precise modes for logic programs.
The execution algorithm of Mercury
--TR
A polymorphic type system for PROLOG.
Graph-based algorithms for Boolean function manipulation
Static inference of modes and data dependencies in logic programs
Abstract interpretation for concurrent logic languages
Deriving descriptions of possible values of program variables by means of abstract interpretation
Experimental evaluation of a generic abstract interpretation algorithm for PROLOG
Typed Static Analysis
Mode Analysis Domains for Typed Logic Programs
Making Mercury Programs Tail Recursive
Model Checking in HAL
--CTR
Lee Naish, Approximating the success set of logic programs using constrained regular types, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.61-67, February 01, 2003, Adelaide, Australia | mode analysis;modes;boolean constraints |
571170 | Using the heap to eliminate stack accesses. | The value of a variable is often given by a field of a heap cell, and frequently the program will pick up the values of several variables from different fields of the same heap cell. By keeping some of these variables out of the stack frame, and accessing them in their original locations on the heap instead, we can reduce the number of loads from and stores to the stack at the cost of introducing a smaller number of loads from the heap. We present an algorithm that finds the optimal set of variables to access via a heap cell instead of a stack slot, and transforms the code of the program accordingly. We have implemented this optimization in the Mercury compiler, and our measurements show that it can reduce program runtimes by up to 12% while at the same time reducing program size. The optimization is straightforward to apply to Mercury and to other languages with immutable data structures; its adaptation to languages with destructive assignment would require the compiler to perform mutability analysis. | INTRODUCTION
PPDP'02, October 6-8, 2002, Pittsburgh, Pennsylvania, USA.
Most compilers try to keep the values of variables in (perhaps virtual) registers whenever possible. However, procedure calls can (in the general case) modify the contents of all the registers. The standard solution to this problem is
to allocate a slot in the stack frame to every variable that
is live after the call, and to copy all these variables to the
stack before the call.
In this paper we investigate the possibility of avoiding some of these allocations and the associated copies, exploiting the fact that some of the variable values we must save may already be available in memory locations that cannot be updated by the call. Suppose we obtained the value of a variable a from a field in an immutable cell on the heap pointed to by variable b. (From now on, we assume that all cells are immutable, unless we specifically say otherwise.) This allows us to avoid storing a on the stack, provided we can find its location on the heap everywhere we need to. In the case of a procedure in a single assignment language, or of a procedure in an imperative language when expressed in single-assignment form, this will be all places later in the procedure that refer to a.
Storing the value of b instead of the value of a on the
stack may not look promising. In the worst case, it requires
the same number of stack slots (one), the same number of
stores to the stack before the call (one), and additional instructions
to load b from the stack at all the places later
in the procedure where we want to access a. However, it
may happen that the program needs b itself after the call, in
which case it needs a stack slot and the store instruction to
ll that stack slot anyway, so accessing a via b needs no additional
store. It may also happen that in every basic block
in which the program accesses a it also accesses b, in which
case accessing a via b requires no additional loads either. If
both these things happen, then accessing a via b eliminates
one stack slot and one store operation to ll in that stack
slot. If a is not used between its denition and the rst call
after the denition, then accessing a via b also saves the load
of a from the cell we would need before storing a in its stack
slot. Overall, accessing a via b may be better than storing
a in its own stack slot.
Of course, it may be that between two calls, the procedure
accesses a but not b. In that case, accessing a via b incurs the
cost of an additional load. However, it often happens that
between two calls, a procedure does not access a variable
pointing to a cell (e.g. b) but accesses more than one variable
whose values came from the fields of that cell. If a single load
of b gives us access to a1, a2 and a3, then the cost of the load
of b needs to be divided up among a1, a2 and a3. This means
that the cost of accessing a1 via b instead of storing a1 in
its own stack slot depends on which of the other variables
reachable from b we access via b. Indeed, it may happen
that accessing just a1 via b is not worthwhile, accessing just
a2 via b is not worthwhile, but accessing both a1 and a2 via
b is worthwhile.
This interdependence of the decisions we need to make
for the different variables reachable from the same cell
significantly complicates the task of finding the optimal partition
of the variables stored in the cell between those which
should have their own stack slots and those which should be
accessed via the cell. Compared to the partition in which
all variables are stored in stack slots, the optimal partition
may reduce the total number of stack accesses (loads and
stores) performed by the program, it may reduce the number
of stack slots required, or it may do both. Of course, it
may also do neither, but then no optimization can guarantee
a speedup.
Trying all partitions is the obvious algorithm for computing
the best partition. Unfortunately this algorithm is not
feasible, because some cells contain dozens of fields, which
can require trying millions of partitions. In this paper we
describe a much more efficient algorithm based on maximal
matching in bipartite graphs. We have implemented the algorithm
in the Mercury compiler, and our experiments show
that it gives real benefits on real programs (e.g. the Mercury
compiler itself).
In the next section we give a very brief introduction to
Mercury, concentrating on the features that are relevant to
our optimization. In section 3 we give a worked example
explaining our technique. In section 4 we define the conditions
on the applicability of the optimization, and show how
we collect the information we need to implement the optimization.
Section 5 gives our algorithm for computing optimal
partitions, while section 6 explains the source-to-source
transformation that we use to exploit optimal partitions. In
Section 7 we give the results of a preliminary experimental
evaluation of our optimization.
2. MERCURY
We assume familiarity with the basic concepts of logic
programming, and with Prolog syntax, which is also Mercury syntax.
Mercury is a purely declarative logic programming language
designed for the construction of large, reliable and
efficient software systems by teams of programmers [3, 6].
Mercury's syntax is similar to the syntax of Prolog, but Mercury
also has strong module, type, mode and determinism
systems, which catch a large fraction of programmer errors
and enable the compiler to generate fast code. The main
features of Mercury that are relevant for this paper are the
following.
Every predicate has one or more modes, which say, for
every argument, whether it is input or output. (The mode
system is more sophisticated than that, but that is all that
matters for this paper.) We call each mode of a predicate
a procedure. The mode system insists on being able to find
out at compile time what the state of instantiation of every
variable is at every point in every procedure, and on being
able to reorder conjunctions so that all goals that use the
value of a variable come after the goal that generates the
value of that variable. If it cannot perform either task, it
rejects the program.
Mercury associates with every procedure a determinism,
which expresses upper and lower bounds on the number of
solutions the procedure can have. Procedures that are guaranteed
to have exactly one solution have determinism det.
Procedures that may have one solution for one set of inputs
and zero solutions for other inputs have determinism
semidet. Procedures that may have any number of solutions
have determinism nondet.
The definition of a predicate in Mercury is a body made
up of atoms, conjunctions, negations, disjunctions and if-then-elses.
To simplify its algorithms, the compiler converts
each clause body so that the only forms of atoms appearing
are unifications of the forms b = a and b = f(a1, ..., an),
where b, a and a1, ..., an are distinct variables.
During mode analysis, the compiler classifies all unifications
into one of five types: copies b = a, where one of a
and b is input and the other is output; tests a = b, where
both are input; deconstructions b = f(a1, ..., an), where
b is input and each of a1, ..., an is output; constructions
b = f(a1, ..., an), where each of a1, ..., an is input and b
is output; and complex unifications otherwise. We use the
respective notations b := a, b == a, b => f(a1, ..., an) and
b <= f(a1, ..., an) to indicate the mode of
each unification.
In this paper, we shall be mainly concerned with deconstructions.
The deconstruction b => f(a1, ..., an) tests
whether the principal functor of b is f if this hasn't been
previously established, and, if n > 0, assigns the fields of
the cell pointed to by b to the various ai. Note that for field
variables ai that are not used later (in particular anonymous
variables) the compiler does not access the field.
The compiler also detects disjunctions in which each disjunct
deconstructs the same input variable with different
function symbols. In such disjunctions, the value of that
variable on entry to the disjunction determines which disjunct,
if any, can succeed; the others cannot succeed because
they try to unify the variable with a function symbol it is
not bound to. The compiler converts such disjunctions into
switches (they resemble the switch construct of C).
The Mercury compiler has several backends, which translate
Mercury into different target languages. The backend
that we work with in this paper is the compiler's original
backend, which translates Mercury into low level C code.
This backend uses the execution algorithm described in [6].
This execution algorithm uses a virtual machine whose data
structures consist of a heap, two stacks, a set of general purpose
registers used for argument passing, and a set of special
purpose registers such as heap and stack pointers. The differences
between the two stacks are irrelevant for the purposes
of this paper.
The Mercury compiler is responsible for parameter passing,
stack frame management, heap allocation, control flow
(including the management of backtracking) and almost every
other aspect of execution; the only significant tasks it
leaves for the C compiler are instruction selection, instruction
scheduling and register allocation within basic blocks.
In effect, we use the C compiler as a high level, optimizing
assembler. Besides achieving portability, this approach
allows the Mercury compiler to perform optimizations that
exploit semantic properties of Mercury programs (such as
the immutability of ground terms) that cannot be conveyed
to the C compiler.
The Mercury compiler assumes that every call can clobber
every virtual machine register, so at every call site it flushes
all live variables to their slots in the current procedure's
(a)
switch on T0 % (A)
T0 => empty,
load K0, V0, L0, R0
store K, V, K0, V0, L0, R0
compare(Result, K, K0), % (B)
switch on Result
load K, V, L0
load K0, V0, R0
load V, K0, L0, R0
load K, V, R0
load K0, V0, L0
(b)
switch on T0 % (A)
T0 => empty,
load K0
store K, V, T0
compare(Result, K, K0), % (B)
switch on Result
load K, V, T0, L0_BC
load T0, K0_CE, V0_CE, R0_CE
load V, T0, K0_BE, L0_BE, R0_BE
load K, V, T0, R0_BD
update(R0_BD, K, V, R), % (D)
load T0, K0_DE, V0_DE, L0_DE
Figure 1: update predicate in original form (a), and modified by our transformation (b).
stack frame. Similarly, at the starts of nondeterministic disjunctions,
it flushes to the stack all the variables that are live
at the start of the second or later disjuncts, since the starts
of these disjuncts can be reached by backtracking long after
further execution has clobbered all the virtual machine registers.
An if-then-else is treated similarly (the else branch
corresponds to a second disjunct).
These are the only situations in which the Mercury execution
algorithm requires all live variables to be flushed to
the stack. The Mercury compiler therefore has a pass that
figures out, for each of these flush points, which variables
need to be flushed to the stack at that point. Variables
which need to exist on the stack simultaneously need to be
stored in different stack slots, but one may be able to use
the same stack slot to store different variables at different
flush points. The compiler uses a standard graph colouring
approach (see e.g. [2]) to assign variables to stack slots.
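The slot-assignment idea can be sketched as a greedy colouring of an interference graph. The following Python fragment is an illustrative model only, not the compiler's actual pass; the interference computation and the degree-based ordering heuristic are simplifying assumptions:

```python
def assign_slots(flush_sets):
    """Assign stack slots to variables by greedy graph colouring.

    flush_sets: for each flush point, the set of variables that must be
    on the stack there.  Two variables interfere if they must exist on
    the stack simultaneously, i.e. appear at the same flush point.
    """
    interference = {}
    for flushed in flush_sets:
        for v in flushed:
            interference.setdefault(v, set()).update(flushed - {v})

    slots = {}
    # Colour highest-degree variables first (a common greedy heuristic).
    for v in sorted(interference, key=lambda v: -len(interference[v])):
        taken = {slots[u] for u in interference[v] if u in slots}
        slot = 0
        while slot in taken:
            slot += 1
        slots[v] = slot
    return slots

# X and Z are never flushed at the same point, so they can share a slot.
slots = assign_slots([{"X", "Y"}, {"Y", "Z"}])
```

The point of the colouring is visible in the example: Y interferes with both X and Z, but X and Z do not interfere with each other, so two slots suffice for three variables.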
3. MOTIVATING EXAMPLE
Consider the Mercury predicate illustrated in Figure 1(a),
which updates a binary search tree (T0) containing key-value
pairs so that the new version of the tree (T) maps key K to
value V. (Text after the symbol % is a comment.) Such predicates
(predicates that search in and update data structures of
various kinds) are fairly important in many Mercury programs,
because the declarative nature of Mercury encourages programmers
to use them to represent dictionaries instead of arrays (which
must be updated destructively).
In the original form of the predicate, all of the variables
(K0, V0, L0, R0) whose values are produced in the deconstruction
are live after the call to compare that immediately
follows the deconstruction. The compiler therefore allocates
a stack slot to each of these variables and saves their values
in those stack slots before the call. Saving the value of each
of these variables requires that it be loaded into a register
first from its original location in the memory cell pointed to
by T0. K and V are also live after the call, but they have
already been put into registers by update's caller.
Execution can follow one of three paths after compare returns.
If K and K0 are equal, then execution takes the second
arm of the switch; the code there uses K0, V, L0 and R0 as
inputs, so those four variables must be loaded from stack
slots. If K is less than K0, then execution takes the first arm
of the switch, which contains a call. Making the call requires
L0, K and V to be loaded into registers from their stack slots.
The call returns L in a register, so after the call we need to
load into registers only K0, V0 and R0. The third arm of the
switch is analogous to the first.
We have added comments to indicate which variables the
code stores to the stack, and which variables it loads from
the stack or from the heap. We can count the loads and stores of
the variables involved in the deconstruction (the cell variable
T0 and the field variables K0, V0, L0, R0) that are required to
make the values of those variables available along each path
of execution that involves that deconstruction. If execution
takes the first arm of the switch on Result, then we execute
four loads and four stores involving those variables between
program points A and B, three loads between B and C, and
three loads between C and E, for a total of ten loads and
four stores. If execution takes the third arm of the switch
on Result, then for similar reasons we also execute a total
of ten loads and four stores. If execution takes the second
arm of the switch on Result, then we execute four loads and
four stores between A and B, and three loads between B and
E, for a total of seven loads and four stores.
The key idea of this paper is the realization that the loads
and stores between A and B are a significant cost, and that
we can avoid this cost if we are willing to insert clones of
the deconstruction later in the procedure body. These clones
incur an extra cost, the load of T0, but as long as we choose
to perform this transformation only if the initial saving is at
least as big as the extra cost on all paths of execution, we
have achieved a speedup.
Figure 1(b) shows the same predicate after our transformation.
It has five clones of the original deconstruction, one
for each region after the first that uses the field variables.
If execution takes the first arm of the switch on Result,
then the transformed predicate executes one load and one
store between A and B, two loads between B and C (loading
T0 from its stack slot to a register, and then loading L0_BC
from the cell T0 points to) and four loads between C and E
(loading T0 from its stack slot to a register, and then loading
K0_CE, V0_CE and R0_CE from the cell T0 points to) for a
total of seven loads and one store. If execution takes the
third arm of the switch on Result, the analysis is analogous
and the total cost is again seven loads and one store. If
execution takes the second arm of the switch on Result,
then we execute one load and one store between A and B,
and four loads between B and E, for a total of five loads and
one store.
Overall, the transformation reduces the costs of the paths
through the first and third arms from ten loads and four
stores to seven loads and one store, and the cost of the path
through the second arm from seven loads and four stores to
five loads and one store. The transformation also reduces the
number of stack slots required. The original code needed six
stack slots for variables, one for each of K, V, K0, V0, L0 and R0.
The transformed code needs only three stack slots for variables,
one for each of K, V and T0.
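The per-region accounting above can be tallied mechanically. This small Python sketch encodes the (loads, stores) counts for T0 and the field variables read off the discussion of Figure 1, and sums them along each execution path:

```python
# (loads, stores) of T0 and the field variables in each region of Figure 1.
original    = {"AB": (4, 4), "BC": (3, 0), "CE": (3, 0),
               "BE": (3, 0), "BD": (3, 0), "DE": (3, 0)}
transformed = {"AB": (1, 1), "BC": (2, 0), "CE": (4, 0),
               "BE": (4, 0), "BD": (2, 0), "DE": (4, 0)}

# The three execution paths through the arms of the switch on Result.
paths = [["AB", "BC", "CE"], ["AB", "BE"], ["AB", "BD", "DE"]]

def totals(counts, path):
    """Sum loads and stores along one execution path."""
    loads = sum(counts[region][0] for region in path)
    stores = sum(counts[region][1] for region in path)
    return loads, stores

before = [totals(original, p) for p in paths]     # (10,4), (7,4), (10,4)
after  = [totals(transformed, p) for p in paths]  # (7,1),  (5,1), (7,1)
```

The totals reproduce the figures quoted in the text: ten loads and four stores become seven loads and one store on the first and third arms, and seven loads and four stores become five loads and one store on the second.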
The source of the speedup is that we only added one or
two extra loads of T0 into each path of execution, but we
replaced four loads and four stores between A and B with
one load and one store. The extra cost is always in the form
of extra loads of the cell variable (T0) after the first stack
flush after the deconstruction and possibly, as in this case,
an extra store of the cell variable before that stack flush.
The saving is always in the form of eliminated stores of the
field variables before that stack flush, and the eliminated
loads of those field variables that are not needed before the
stack flush. In this case, we must keep the load of K0 but
can eliminate the loads as well as the stores of V0, L0 and
R0.
The reason for the reduction in stack slot requirements is
that saving T0 on the stack preserves the values of all the
field variables in T0's heap cell across calls, and since the
number of field variables in that cell is greater than one, we
are using one stack slot to save the value of more than one
variable across calls.
4. DETECTING OPPORTUNITIES FOR
OPTIMIZATION
Before we can describe our algorithm for performing the
transformation shown in the example above, we need to introduce
some background information and definitions.
The body of a Mercury procedure is a goal. A goal may be
an atomic goal or a compound goal. An atomic goal may be
a unification, a builtin operation (e.g. arithmetic) or a call.
(For the purposes of this paper, there is no distinction between
first order calls, higher order calls and method calls.)
A compound goal may be a conjunction, a disjunction, a
switch, an if-then-else, a negation or an existential quantifier.
In the rest of this paper, we will restrict our attention to
the first four of those compound goal types. Our algorithms
treat negation as a special case of if-then-else (not(Goal) is
equivalent to (Goal -> fail ; true)), and they treat an
existentially quantified goal as the goal itself. We call disjunctions,
switches and if-then-elses branched control structures
or branched goals.
Definition 1. A flush point is a point in the body of a
procedure at which the code generator is required to store
some variables in stack slots or in registers. In Mercury,
there are four kinds of flush points.
When execution reaches a call, the code generator
must flush all variables that are live after the call to
the stack. This is because, like most compilers, the Mercury
compiler assumes that all calls can clobber all
registers.
When execution reaches the start of an if-then-else,
the code generator must flush all variables that are
live at the start of the else case. If the else case can be
reached after a register may have been clobbered (e.g.
by a call inside the condition), then the code generator
must flush all these variables to the stack; otherwise,
it can flush variables to registers as well as stack slots.
When execution reaches the start of a disjunction, the
code generator must flush all the variables that are live
at the start of the second disjunct or a later disjunct.
If a non-first disjunct can be reached via deep backtracking
(i.e. after the failure of a call inside a previous
disjunct or after the failure of a goal following the disjunction
as a whole), then the code generator must
flush all these variables to the stack; otherwise, it can
flush variables to registers as well as stack slots.
When execution reaches the end of a branched control
structure, the code generator must store each variable
that is live afterwards in a specific stack slot or in a
specific register; the exact location is determined by a
pre-pass of the code generator. This ensures that all
branches leave those variables in the same place.
Definition 2. An anchor is one of the following:
the start of the procedure body
a call site
the start of a branched control structure
the end of the condition of an if-then-else
the end of a branched control structure
the end of the procedure body
All flush points are anchors, but not all anchors are flush
points. In the example of Figure 1(a) the program points
A, B, C, D, and E (which represent the start of the outer
switch, the call to compare, the two calls to update, and
the end of the inner switch respectively) are all anchors,
and all but A are also flush points. The code fragment also
contains two other anchors: the start of the inner switch
and the end of the outer switch. Our example did not distinguish
between the two anchors each at program points B
and E.
Definition 3. An interval is a sequence of atomic goals
delimited by a left-right pair of anchors, satisfying the property
that if forward execution starts at the left anchor and
continues without encountering failure (which would initiate
backtracking, i.e. backward execution), the next anchor it
reaches is the right anchor of the pair. We consider a call
to be part of the atomic goals of the interval only if the call
site is the right anchor of the interval, not the left anchor.
Definition 4. A segment is a maximal sequence of one or
more intervals such that the right anchor of each interval
in the sequence, except the last, is the left anchor of the
interval following it in the sequence. The sequence must
also satisfy the property that execution can get from the
left anchor of the rst interval to the right anchor of the
last interval without the code generator throwing away its
current record of the values of the live variables.
Most segments contain just one interval. However, if the
right anchor of an interval is the start of an if-then-else,
then the interval before the if-then-else and the interval at
the start of the condition can belong to the same segment. If
the right anchor of an interval is the start of a disjunction,
then the interval before the disjunction and the interval
at the start of the first disjunct can belong to the same
segment. If the right anchor of an interval is the start of
a switch, then the interval before the switch and the
interval at the start of any arm of the switch can belong
to the same segment. Intervals whose right anchor is the
start of a switch are the only intervals that can be part of
more than one segment.
In the example of Figure 1(a), AB, BC, CE, BE,
BD and DE are all segments. There is an empty interval
(containing no atomic goals) between the end of the call to
compare and the start of the following switch, which is part
of all the segments starting at B.
Our transformation algorithm has three phases. In the
first phase, we find all the intervals in the procedure body,
and for each interval, record its left and right anchors and
the set of variables needed as inputs in the interval. These
include the inputs of the atomic goals of the interval, and,
for intervals whose right anchor is the start of a switch, the
variable being switched on. The set of variables needed in a
segment s, which we denote vars(s), is the union of the sets
of variables needed by the component intervals of segment
s. We also record, for each interval, the leftmost anchor of
the segment(s) to which the interval belongs.
In the second phase, we traverse the procedure
body backwards, looking at each deconstruction
b => f(a1, ..., an). We call the ai the field variables,
as opposed to b, which is the cell variable. When
we find a deconstruction, we try to find out which field
variables we can avoid storing in stack slots, loading them
from the heap cell pointed to by the cell variable instead
wherever they are needed. We can access a field variable via the
cell variable only if all the following conditions hold:
The memory cell pointed to by the cell variable must
be immutable; if it isn't, then the values of the fields
of the cell may change between the original deconstruction
and the copies of that deconstruction that
the transformation inserts elsewhere in the procedure
body. In Mercury, a cell is mutable only if the instantiation
state of the cell variable at the point of
the deconstruction states that it is the only pointer to
the cell that is live at that point (so the compiler may
choose to destructively update the cell).
All the program points at which the value of the field
variable is needed (as the input of an atomic goal, as a
live variable to be flushed, or as an output argument of
the procedure as a whole) must be within the effective
scope of the deconstruction.
Consider the variable A in the following program:
( Y => f(A, B)
; A <= a
),
r(X, A, Z).
We need to determine a single location for A, for use
in r/3, hence it must be stored in a stack slot.
Every interval which needs the value of a field variable
but not the value of the cell variable must be reachable
at most once for each time execution reaches the
deconstruction.
Consider the variable A in the following program:
Y => f(A,B),
q(X,A,B),
r(X,A),
s(X,Y,A,Z).
If A were accessed indirectly through Y, then we would need
to add a load of Y to the segment between q/3 and
r/2. If q/3 can succeed multiple times, then this load
is executed once per success, and is not guaranteed to be
compensated for by the removal of the store of A before
the call to q/3.
The value of the field variable must be required in a segment
after the deconstruction; otherwise there is no point in
investigating whether it would be worthwhile to access
it via the cell variable in other segments.
For deconstructions in which all four conditions hold for at
least some of the field variables (these form the set of candidate
field variables), we then partition the set of candidates
into two subsets: those we should access via the cell variable,
and those we should nevertheless store in and access
via stack slots. To do this, we first find the set of maximal
paths that execution can take in the procedure body from
the point of the deconstruction to the point at which the
deconstruction goes out of scope.
Definition 5. A path is a sequence of segments starting
at the segment containing the deconstruction, in which segment
j can follow segment i if (a) the left anchor of the first
interval in segment j is the right anchor of the last interval
in segment i, or (b) execution can resume at the left anchor
of the first interval in segment j after backtracking initiated
within segment i. A path is maximal if it isn't contained
within another path.
A maximal path through a disjunction includes a maximal
path through the first disjunct, then a maximal path
through the second disjunct, then a maximal path through
the third, etc. A maximal path through an if-then-else is
either a maximal path through the condition followed by a
maximal path through the then part, or a maximal path
through the condition followed by a maximal path through
the else part. A maximal path through a switch is a maximal
path through one of the arms of the switch.
For the program of Figure 1(a), the maximal paths are
[AB,BC,CE], [AB,BE] and [AB,BD,DE], since there is no
backtracking in the switch.
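The recursive structure of maximal paths can be sketched directly. The following Python fragment is an illustrative model (the tuple-based goal representation is an assumption of the sketch, not the compiler's data structure) that enumerates the maximal paths of a goal built from segments, sequences, switches and disjunctions:

```python
def maximal_paths(goal):
    """Enumerate maximal paths (lists of segment names) through a goal.

    A goal is ('seg', name), ('seq', [goals]), ('switch', [arms]) or
    ('disj', [disjuncts]).
    """
    kind, body = goal
    if kind == "seg":
        return [[body]]
    if kind == "seq":
        # Concatenate: every combination of sub-paths, in order.
        paths = [[]]
        for sub in body:
            paths = [p + q for p in paths for q in maximal_paths(sub)]
        return paths
    if kind == "switch":
        # A maximal path goes through exactly one arm of a switch.
        return [p for arm in body for p in maximal_paths(arm)]
    if kind == "disj":
        # A maximal path goes through every disjunct in turn
        # (backtracking visits the later disjuncts after the earlier ones).
        paths = [[]]
        for disjunct in body:
            paths = [p + q for p in paths for q in maximal_paths(disjunct)]
        return paths
    raise ValueError(kind)

# The control structure of Figure 1(a): segment AB, then a three-arm switch.
fig1 = ("seq", [("seg", "AB"),
                ("switch", [("seq", [("seg", "BC"), ("seg", "CE")]),
                            ("seg", "BE"),
                            ("seq", [("seg", "BD"), ("seg", "DE")])])])
```

Applied to the Figure 1(a) structure, this yields exactly the three maximal paths listed above.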
For each maximal path starting at a given deconstruction
which has a non-empty set of candidate field variables, we
invoke the algorithm described in the next section to partition
the candidate variables into the set that, from the point
of view of an execution that takes that particular maximal
path through the procedure body, it is better to access via
the cell variable and the set that it is better to store in stack
slots.
5. DECIDING WHICH VARIABLES TO
LOAD FROM CELLS
5.1 Introduction to Maximal Matching
The algorithm we use for deciding which variables to load
from cells makes use of maximal matching algorithms for bipartite
graphs. In this section we introduce terminology
and examples.
Definition 6. A bipartite graph G is made up of two disjoint
sets of vertices B and C, and a set of edges E, where each
edge connects a vertex in B with a vertex in C. In our application, the sets of
vertices represent benefits and costs.
A matching of a bipartite graph G is a set of edges M ⊆ E
such that each vertex occurs at most once in M.
A maximal matching M of G is a matching such that for
each other matching M' of G, |M'| ≤ |M|.
There are efficient algorithms for maximal matching of
bipartite graphs. They are all based on searching for augmenting
paths.
Definition 7. Given a bipartite graph G and matching M,
an alternating path is a path whose edges are alternately
in E − M and M.
Define the set reachable(u, M) as the set of nodes reachable
from u by an alternating path.
Given a bipartite graph G and matching M, an augmenting
path p is an alternating path where the first and last
vertices are free, i.e. do not occur in M.
Given a bipartite graph G, matching M and augmenting
path p, the symmetric difference M ⊕ p
is a matching where |M ⊕ p| = |M| + 1.
[Figure 2 (bipartite graph, not reproducible in text): the cost nodes
appear in the top row and the benefit nodes in the bottom row, with the
edges of the maximal matching drawn as solid arcs.]
Figure 2: The stack optimization graph for the program
of Figure 3 together with a maximal matching.
An important property of maximal matchings is their relationship
with augmenting paths.
Property 1. A matching M for G is maximal iff there
exist no augmenting paths p for M and G.
A straightforward algorithm for bipartite maximal matching
based on searching for augmenting paths using breadth-first
search is O(|U| |E|), while more sophisticated algorithms
are O(√|V| |E|).
Example 1. Figure 2 shows a bipartite graph, with the
matching M illustrated by the solid arcs. The matching
is M = {(store(V0), load(T0,3)), (store(L0), store(T0)),
(store(R0), load(T0,2)), (store(K0), load(T0,5))}. An example
alternating path is the one defined by the
edges {(store(K0), load(T0,5)), (store(K0), store(T0)),
(store(L0), store(T0)), (store(L0), load(T0,2)), (store(R0), load(T0,2))}. It is not
an augmenting path as both endpoints are matched. Indeed
the matching M is a maximal matching.
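An augmenting-path matcher along these lines can be sketched in a few lines of Python. This is an illustrative sketch (Kuhn's algorithm, using depth-first rather than breadth-first search); the adjacency below encodes the stack optimization graph of Figure 2, with each benefit node store(f) adjacent to the cost nodes in cost(f):

```python
def maximal_matching(adjacency):
    """Maximum-cardinality bipartite matching by augmenting paths
    (Kuhn's algorithm, depth-first).  adjacency maps each benefit node
    to the list of cost nodes it is connected to.  Returns a dict
    mapping matched cost nodes to benefit nodes."""
    match = {}  # cost node -> benefit node

    def augment(b, visited):
        for c in adjacency[b]:
            if c in visited:
                continue
            visited.add(c)
            # c is free, or c's current partner can be re-matched elsewhere.
            if c not in match or augment(match[c], visited):
                match[c] = b
                return True
        return False

    for b in adjacency:
        augment(b, set())
    return match

graph = {
    "store(K0)": ["store(T0)", "load(T0,3)", "load(T0,4)", "load(T0,5)"],
    "store(V0)": ["store(T0)", "load(T0,3)"],
    "store(L0)": ["store(T0)", "load(T0,2)"],
    "store(R0)": ["store(T0)", "load(T0,2)"],
}
M = maximal_matching(graph)
```

On this graph every benefit node can be matched, so the matching has size four and exactly one of the five cost nodes remains unmatched.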
5.2 Minimizing Stack Operations
Our aim is to find, for each deconstruction unification
b => f(a1, ..., an), the set of variables involved which should
be stored on the stack in order to minimize the number
of stack operations required. Let F = {a1, ..., an} be the
candidate field variables.
For each maximal path from the deconstruction we assume
we are given a list of segments 1, ..., m and a function vars(i)
which determines the variables whose values are required
in program segment i. We determine for each maximal
path independently the set of variables that require
their own stack slot.
Definition 8. The costs that must be incurred if we access
candidate variable f via the cell variable b instead of via the
stack are:
load(b, i): we need to add a load of b in every segment
i > 1 in which f is needed, i.e. with f ∈ vars(i);
store(b): we need to add a store of b in the first segment,
if b is not live after the initial segment. (If b is
live after the initial segment, then even the original
program would need to store b on the stack, so this store is
not an extra cost incurred by accessing a field variable via b.)
We call this set cost(f).
The benefits that are gained if we access candidate variable
f via the cell variable b instead of via the stack are:
store(f): we avoid storing f in the initial segment;
and
load(f, 1): we avoid loading f in the initial segment if
f is not otherwise needed there, i.e. f ∉ vars(1).
We call this set benefit(f).
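Definition 8 can be computed directly from the vars sets. The following Python sketch is illustrative only; the vars data used here is the one for the Figure 3 example discussed below:

```python
def costs_and_benefits(cell, fields, vars_by_seg, cell_live_after=False):
    """Compute cost(f) and benefit(f) per Definition 8.

    vars_by_seg[i] holds vars(i+1): the variables needed in segment i+1.
    cell_live_after says whether the cell variable is live after the
    initial segment (if so, its store is not an extra cost)."""
    cost, benefit = {}, {}
    for f in fields:
        # A load of the cell in every later segment where f is needed.
        c = {f"load({cell},{i})"
             for i, vs in enumerate(vars_by_seg[1:], start=2) if f in vs}
        if not cell_live_after:
            c.add(f"store({cell})")
        cost[f] = c
        # We avoid storing f; we also avoid loading it if it is not
        # needed in the initial segment anyway.
        b = {f"store({f})"}
        if f not in vars_by_seg[0]:
            b.add(f"load({f},1)")
        benefit[f] = b
    return cost, benefit

# The vars sets of the Figure 3(a) example.
vars_by_seg = [{"K0", "V0", "L0", "R0"}, {"L0", "R0"},
               {"K0", "V0"}, {"K0"}, {"K0"}]
cost, benefit = costs_and_benefits("T0", ["K0", "V0", "L0", "R0"], vars_by_seg)
```

Since every field variable is needed in the initial segment here, each benefit set contains only the avoided store, matching the observation made in Example 2.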
We can use these to model the total costs and benefits of
choosing to access a given subset V of F via the cell variable
instead of storing them on the stack.
The total set of costs we incur if we choose to access a
given subset V of F via the cell variable instead of storing
them on the stack is cost(V) = ∪f∈V cost(f), while the total
benefit of that choice is benefit(V) = ∪f∈V benefit(f).
Note that, while the benefits for each f are independent, the
costs are not, since the cost of the load of b in a given segment
is incurred only once, even if it is used to access more
than one candidate variable. We therefore cannot decide
for each candidate variable individually whether it should
be stored on the stack or accessed via the cell variable, but
must consider a set of candidates at a time.
We need an efficient algorithm for finding a set V ⊆ F
such that |benefit(V)| is greater than or equal to |cost(V)|.
For the time being we will assume that the cost of each
load and store operation is equal. We discuss relaxing this
assumption in Section 7.
Hence we are searching for a set V ⊆ F such that
|benefit(V)| − |cost(V)| is maximized; if
we have two choices V1 and V2 with equal net benefit, where
V2 ⊂ V1, we prefer V1 since it requires fewer stack slots.
Our algorithm reduces most of the stack optimization
problem to the maximal matching problem for bipartite
graphs, for which efficient algorithms are known.
Definition 9. The stack optimization graph for a deconstruction
b => f(a1, ..., an) is given by the bipartite graph G whose vertex sets are
B = ∪f∈F benefit(f) and C = ∪f∈F cost(f), and whose edges are defined by
E = {(x, y) | x ∈ benefit(f) and y ∈ cost(f) for some f ∈ F}.
Each node in the graph represents a load or store instruction,
and the edges represent the benefits one can gain if one
is willing to incur a given set of costs. In our diagrams, the
cost nodes are at the top and the benefit nodes are at the
bottom.
Example 2. Consider the program shown in Figure 3(a).
The default compilation requires 14 loads and stores. The
deconstruction T0 => tree(K0,V0,L0,R0) has a single maximal
path (the entire procedure). All the field variables are
candidates. The segments are anchored by the end of each
call. The vars information is given by:
vars(1) = {K0, V0, L0, R0}    vars(2) = {L0, R0}
vars(3) = {K0, V0}    vars(4) = {K0}    vars(5) = {K0}
The costs and benefits for each of the field variables are
given by:
        cost                                    benefit
K0      {store(T0), load(T0,3),                 {store(K0)}
         load(T0,4), load(T0,5)}
V0      {store(T0), load(T0,3)}                 {store(V0)}
L0      {store(T0), load(T0,2)}                 {store(L0)}
R0      {store(T0), load(T0,2)}                 {store(R0)}
Note that since T0 is not required after the deconstruction,
its store is a cost for each candidate, and since each candidate is
required in the initial segment there are no load benefits.
The stack optimization graph for the deconstruction is
shown in Figure 2.
The algorithm starts by finding a maximal matching M in
the stack optimization graph. (Figure 2 shows the edges in
the maximal matching in solid lines.) It then marks each
unmatched cost node and each node reachable from these
nodes using an alternating path with respect to M. The
cost nodes this marks represent costs which are not "paid
for" by corresponding benefits. The benefit nodes which are
not marked are those where the benefit equals or outweighs
the corresponding costs. The algorithm partitions the candidates
into those whose benefits include marked nodes, and
those whose benefits do not include marked nodes. The result
V of variables we want to access via the cell variable is
the latter set.
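The marking step can be sketched as follows. This Python fragment is an illustrative model; the matching hard-coded below is the one shown in Figure 2 and listed in Example 1:

```python
def partition(benefit_of, cost_of, matching):
    """Partition candidate variables given a maximal matching.

    benefit_of / cost_of map each candidate to its benefit and cost
    nodes; matching maps matched cost nodes to benefit nodes.  Returns
    the set V of candidates to access via the cell variable."""
    all_costs = {c for cs in cost_of.values() for c in cs}
    # Start from the unmatched cost nodes and mark everything reachable
    # by an alternating path with respect to the matching.
    marked = set(all_costs - matching.keys())
    frontier = list(marked)
    while frontier:
        c = frontier.pop()
        for f, bs in benefit_of.items():
            if c not in cost_of[f]:
                continue
            for b in bs:
                if b in marked or matching.get(c) == b:
                    continue  # leave a cost node only via non-matching edges
                marked.add(b)
                # From a benefit node, follow its matching edge back to C.
                for c2, b2 in matching.items():
                    if b2 == b and c2 not in marked:
                        marked.add(c2)
                        frontier.append(c2)
    # Keep the candidates none of whose benefit nodes are marked.
    return {f for f, bs in benefit_of.items() if not (set(bs) & marked)}

benefit_of = {"K0": {"store(K0)"}, "V0": {"store(V0)"},
              "L0": {"store(L0)"}, "R0": {"store(R0)"}}
cost_of = {"K0": {"store(T0)", "load(T0,3)", "load(T0,4)", "load(T0,5)"},
           "V0": {"store(T0)", "load(T0,3)"},
           "L0": {"store(T0)", "load(T0,2)"},
           "R0": {"store(T0)", "load(T0,2)"}}
matching = {"load(T0,5)": "store(K0)", "load(T0,3)": "store(V0)",
            "store(T0)": "store(L0)", "load(T0,2)": "store(R0)"}
V = partition(benefit_of, cost_of, matching)
```

On the example graph the only unmatched cost node is load(T0,4); the marking propagates to store(K0) and load(T0,5), so K0 is kept on the stack and V = {V0, L0, R0}, as in Example 3 below.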
In fact the benefit nodes for each candidate variable will
either be all marked or all unmarked. This is a consequence
of the following lemma.
Lemma 1. Let G be the stack optimization graph, where
b1, b2 ∈ B are adjacent to the same subset A ⊆ C,
and M is a maximal matching of G. Let
MC = {c ∈ C | (b, c) ∈ M for some b} be the matched nodes in C. Let
R = ∪c∈C−MC reachable(c, M) be the nodes reachable by an
alternating path from an unmatched node in C. Then b1 ∈ R
if and only if b2 ∈ R.
Proof. Suppose to the contrary that, w.l.o.g., b1 ∈ R and
b2 ∉ R. Then there is an alternating path from some
c ∈ C − MC to b1. Hence there is an alternating path from c to
some a ∈ A. Since c ∉ MC, the first edge in this path cannot
be in M; since the path must have an even number of edges,
the last edge must be in M. Now b2 is also adjacent to a. If
(b2, a) ∉ M, then we can extend the alternating path from
c to a to reach b2, which is a contradiction. Alternatively
(b2, a) ∈ M; but since the alternating path from c
to a ends in a matched edge, the path from c to a must use this edge,
and hence there is an alternating path from c to b2, which
is again a contradiction.
Example 3. Figure 2 shows the stack optimization graph
for the program of Figure 3(a), together with a maximal
matching. We mark (with an 'm') all the nodes reachable
by an alternating path starting from an unmatched node in
C (in this case, the only such node is load(T0,4)). In Figure
2, the marked nodes are load(T0,4), store(K0) and
load(T0,5). The set V defined by the matching is the set of
candidate variables all of whose benefit nodes are unmarked.
In this case V = {V0, L0, R0}. The resulting optimized
program is shown in Figure 3(c) and requires 14 loads and
stores. Note that accessing all field variables through the
cell results in the program in Figure 3(b), which requires 15
loads and stores.
We can show not only that the choice V is no worse than
the default of accessing every candidate through a stack slot,
but that the choice is optimal.
Theorem 1. Let G be the stack optimization graph, and
M a maximal matching of G. Let MC be the matched nodes
in C and C \ MC the unmatched nodes in C, let R be the set
of nodes reachable by an alternating path from an unmatched
node in C, and let V = {f | benefit(f) ∩ R = ∅}. Then net(V) is
maximal.

(a) load K0, V0, L0, R0
    store K0, V0, L0, R0
    dodgy(K0, V0, L0, R0),
    load L0, R0
    load K0, V0
    check(K0, V0, C1),
    load K0
    check(K0, C1, C2),
    load K0
    check(K0, C2, C3).

(b) load K0, V0, L0, R0
    store T0
    dodgy(K0, V0, L0, R0),
    load T0, L0_2, R0_2
    load T0, K0_3, V0_3
    check(K0_3, V0_3, C1),
    load T0, K0_4
    check(K0_4, C1, C2),
    load T0, K0_5
    check(K0_5, C2, C3).

(c) store T0, K0
    dodgy(K0, V0, L0, R0),
    load T0, L0_2, R0_2
    load K0, T0, V0_3
    check(K0, V0_3, C1),
    load K0
    check(K0, C1, C2),
    load K0
    check(K0, C2, C3).

Figure 3: The (a) original program, (b) transformed program for maximal stack space savings, and
(c) optimal transformed program.
Proof. Let NC = C \ R be the nodes of C not reachable by an
alternating path from an unmatched node in C, and let NB = B \ R.

Each node c ∈ NC is matched, otherwise it would itself be
unmatched and hence in R, and it must be matched to a node in
NB: suppose to the contrary that c is matched with b ∈ B ∩ R.
Then there is an alternating path from some unmatched node c'
of C to b which ends in an unmatched edge (since it starts with
an unmatched edge from C to B). Hence we can extend this path
using the matching edge (b, c), so c ∈ R. Contradiction.

Symmetrically, each matched node b ∈ NB is matched with a node
in NC: if b were matched with some c ∈ C ∩ R, the alternating
path reaching c would do so via its matching edge and would
therefore pass through b, putting b in R. Moreover, each
unmatched node b ∈ B is in NB, otherwise there would be an
augmenting path from some unmatched node of C to b,
contradicting the maximality of M. Hence each node in NB either
matches a node in NC or is unmatched.

By definition benefit(V) ⊆ NB, since V contains exactly
the f such that benefit(f) ∩ R = ∅; in fact benefit(V) = NB,
since by Lemma 1 a candidate with one benefit node in NB has all
its benefit nodes in NB.

We now show that cost(V) = NC. Now cost(V) is all the
nodes in C adjacent to a node in benefit(V) = NB. Since
each node in NC is matched with a node in NB, it must be
adjacent to a node in NB, thus NC ⊆ cost(V).
Suppose to the contrary there is a node c ∉ NC adjacent
to some b ∈ NB. Then c ∈ C ∩ R, hence there is an alternating
path from some unmatched node of C to c which ends in an edge
in M. But then the alternating path can be extended to b, since
(b, c) is not in M, and hence b ∈ R, which is a contradiction.
Thus cost(V) ⊆ NC.

Now net(V) = |benefit(V)| − |cost(V)| = |NB| − |NC|, which
equals the number of unmatched nodes of B, since the nodes of
NC are matched with distinct nodes of NB and the remaining
nodes of NB are exactly the unmatched nodes of B.

Finally, consider any set of variables V' ⊆ F. Each node in
benefit(V') is either unmatched or matched with an adjacent
node, which lies in cost(V'). Hence |benefit(V')| ≤ |cost(V')| + u,
where u is the number of unmatched nodes of B, and so
net(V') ≤ u = net(V).
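Theorem 1 can be sanity-checked by brute force on the deconstruction example above. The following sketch, with our own encoding of candidates as cost/benefit node sets and net(V') = |benefit(V')| − |cost(V')|, enumerates every subset of the candidates and confirms that V = {V0, L0, R0} attains the maximum net value.

```python
from itertools import combinations

def net(subset, candidates):
    # net(V') = |benefit(V')| - |cost(V')| over the unions of node sets.
    cost, benefit = set(), set()
    for f in subset:
        c, b = candidates[f]
        cost |= c
        benefit |= b
    return len(benefit) - len(cost)

candidates = {
    "K0": ({"store(T0)", "load(T0,3)", "load(T0,4)", "load(T0,5)"},
           {"store(K0)"}),
    "V0": ({"store(T0)", "load(T0,3)"}, {"store(V0)"}),
    "L0": ({"store(T0)", "load(T0,2)"}, {"store(L0)"}),
    "R0": ({"store(T0)", "load(T0,2)"}, {"store(R0)"}),
}
names = sorted(candidates)
best = max(net(set(s), candidates)
           for r in range(len(names) + 1)
           for s in combinations(names, r))
chosen = net({"V0", "L0", "R0"}, candidates)
```

Here the maximum net value over all sixteen subsets equals that of the set chosen by the matching.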
5.3 Merging results from different maximal
paths

Example 4. In the program in Figure 1(a), there are
3 maximal paths following T0 => tree(K0,V0,L0,R0):
[AB,BC,CE], [AB,BD,DE], and [AB,BE]. The stack optimization
graphs for each maximal path are shown in Figure 4.
None of the maximal matchings leave unmatched cost nodes,
and we get the same result V = {K0, V0, L0, R0}
along each maximal path. We will therefore access all the
variables in {K0, V0, L0, R0} via T0 along every maximal
path. The resulting optimized program is shown in Figure
1(b).
However, in general we may compute different sets V
along different maximal paths. If the value of V computed
along a given maximal path does not include a given field
variable, then accessing that field variable via the heap cell
along that maximal path may lead to a slowdown when execution
takes that maximal path. Accessing a field variable
via the cell on some maximal paths and via a stack slot on
other maximal paths doesn't make sense: if we need to store
that field variable in a stack slot in the first interval for use
on the maximal paths in which it is accessed via the stack
slot, we gain nothing and may lose something if we access
it via the cell along other maximal paths. Since we try
to make sure that our optimization never slows down the
program ("first, do no harm"), we therefore access a field
variable via the cell only if all maximal paths prefer to access
that field variable via the cell, i.e. if the field variable
is in the sets V computed for all maximal paths.
The value of V we compute along a given maximal path
guarantees that accessing the variables in V via the cell instead
of via stack slots will not slow the program down.
However, there is no similar guarantee about subsets of V:
accessing a subset of the variables in V via the cell instead of
via stack slots can slow the program down. It would therefore
not be a good idea simply to take the intersection of
Figure 4: The stack optimization graphs for maximal
paths [AB,BC,CE], [AB,BD,DE], and [AB,BE] of
the program in Figure 1(a). The benefit nodes of each
graph are load(L0,1), store(L0), load(R0,1), store(R0),
store(K0), load(V0,1) and store(V0).
the sets V computed along the different maximal paths and
access the variables in the intersection via the cell.
What we should do instead is restrict the candidate set
by removing from it all the variables that are not in the
intersection, restart the analysis from the beginning with
this new candidate set, and keep doing this until we get the
same set V for all maximal paths. Each time we restart the
analysis we remove at least one variable from the candidate
set. The size of the initial candidate set thus puts an upper
bound on the number of times we need to perform the
analysis.
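The restart loop can be sketched as follows; analyse_path is a hypothetical interface standing in for the per-path matching analysis, returning the set V for one maximal path under the current candidate set.

```python
def agree_across_paths(analyse_path, paths, candidates):
    # Compute V for every maximal path (assumes at least one path).
    # If all paths agree, we are done; otherwise restrict the
    # candidates to the intersection of the per-path results and
    # restart.  Each restart removes at least one candidate, so the
    # initial candidate set bounds the number of iterations.
    candidates = set(candidates)
    while True:
        results = [analyse_path(p, candidates) for p in paths]
        if all(v == results[0] for v in results):
            return results[0]
        candidates = set.intersection(*results)

# A toy analysis for illustration: each path vetoes one variable.
veto = {"p1": {"x"}, "p2": {"y"}}

def toy_analyse(path, cand):
    return cand - veto[path]

stable = agree_across_paths(toy_analyse, ["p1", "p2"], {"x", "y", "z"})
```

With the toy analysis the first round disagrees ({y, z} versus {x, z}), the candidate set shrinks to {z}, and the second round agrees on {z}.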
5.4 Cost of operations

Until now, we have assumed that all loads and stores cost
the same. While this is reasonably close to the truth, it is
not the whole truth. Our optimization deals with two kinds
of stores and four kinds of loads. The kinds of stores we deal
with are (1) the stores of field variables to the stack, and (2)
the stores of cell variables to the stack. The kinds of loads
we deal with are (1) loading field variables into registers
so we can store them on the stack (in the initial segment);
(2) loading a cell variable from the stack into a register (in
later segments) so that we can use that register as a base
for loading a field variable from that cell; (3) loading a field
variable from a cell; and (4) loading a variable from a stack
slot.
Our transformation adds type 2 loads and possibly a type
2 store while removing type 1 stores and possibly some type
1 loads; as a side effect, it also turns some type 4 loads into
type 3 loads. The stores involved on either side of the ledger
go to the current stack frame, which means that they are likely
to be cache hits. Type 1 loads are clustered, which means
that they are also likely to be cache hits. For example,
if a unification deconstructs a cell with five arguments on
a machine on which each cache block contains four words,
then the five type 1 loads required to load all the arguments
in the cell into registers will have at most two cache misses.
Type 2 loads, occurring one per cell per segment, are not
clustered at all, and are therefore much more likely to be
cache misses. Type 3 loads are also more likely to be cache
misses.
Loads of type 1 will typically be followed within a few
instructions by a store of the loaded value. Loads of type
2 will typically be followed within a few instructions by a
load of type 3 using the loaded value as the cell's address.
Our optimization can turn a type 4 load into a type 3 load,
but when it does so, it does nothing to change the distance
between the load instruction and the next instruction that
needs the loaded value.
Both types of stores have the property that the value being
stored is not likely to be accessed in the next few in-
structions, making a pipeline stall from a data hazard most
unlikely. Type 1 and 2 loads, on the other hand, have a
significant chance of causing a data hazard that results in a
stall. What this chance is and what the cost of the resulting
stall will be depends on what other, independent instructions
can be scheduled (by the compiler or by the hardware)
to execute between the load and the rst instruction that
uses the loaded value. This means that the probability and
cost (and thus the average cost) of such stalls is dependent
on the program and its input data.
Since the relative costs of the different types of loads and
stores depend on the average number and length of the
cache misses and stalls they generate, their relative costs are
program-dependent, and to a lesser extent data-dependent
as well. We have therefore extended our optimization with
four parameters that give the relative costs of type 1 and 2
loads and type 1 and 2 stores. (The cost parameter for
type 2 loads is also supposed to account for the associated
cost of turning some type 4 loads into type 3 loads.) The
parameters are in the form of small integers. Our extension
consists of replicating each node in the stack optimization
graph c times, where c is the cost parameter of the type of
operation represented by the node. All replicas of a given
original node have the same connectivity, so (according to
Lemma 1) we retain the property that all copies of the node
will either be marked or not, and hence the set V remains
well defined, and the theorem continues to hold. However,
the matching algorithm will now generate a solution whose
net effect is e.g. the addition of n type 2 (cell variable) loads
and the removal of m type 1 (field variable) stores only if
n × CellVarLoadCost ≤ m × FieldVarStoreCost. In our
experiments, we set CellVarLoadCost to 3, and the other
three parameters (CellVarStoreCost, FieldVarLoadCost
and FieldVarStoreCost) to 1.
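The replication extension can be sketched over the same kind of candidate encoding used earlier in this section's examples (our own names; weight stands in for the small-integer cost parameter of a node's operation type):

```python
def replicate_nodes(candidates, weight):
    # Copy each node weight(node) times; all copies of a node share
    # the original node's adjacency, so by Lemma 1 they are marked or
    # unmarked together and the set V remains well defined.
    return {
        f: ({(c, i) for c in cost for i in range(weight(c))},
            {(b, i) for b in benefit for i in range(weight(b))})
        for f, (cost, benefit) in candidates.items()
    }

# Example weights: cell variable loads cost 3, everything else 1.
def example_weight(node):
    return 3 if node.startswith("load(T0") else 1

replicated = replicate_nodes(
    {"V0": ({"store(T0)", "load(T0,3)"}, {"store(V0)"})}, example_weight)
```

With these weights, V0's single cell load triples, so its replicated graph has four cost copies against one benefit copy.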
6. TRANSFORMING THE CODE

Once we have determined the set V of field variables that
should be accessed through the cell variable from a deconstruction,
we transform the program by adding clones of the deconstruction.
We perform a forward traversal of the procedure body
starting from the deconstruction, applying a current substitution
as we go. Initially the current substitution is
the identity substitution. When we reach the beginning of a
segment i where V ∩ vars(i) ≠ ∅, we add a clone of the
deconstruction, with each field variable f replaced by a new
variable f'. We construct a substitution
which has the effect of replacing each variable we will access
through the cell by its copy in the clone deconstruction.
The remaining new variables in the clone deconstruction will
never be used. We then proceed applying the substitution
until we reach the end of the segment.
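For straight-line segments this traversal can be sketched as below. This is a simplification with our own data representation: a goal is just a list of variable occurrences, and fresh copies are named f_i for segment i.

```python
def insert_clones(segments, cell, fields, via_cell):
    # segments: list of (segment id, list of goals).  At the start of
    # each segment mentioning a variable in via_cell, emit a clone of
    # the deconstruction with fresh copies of every field variable,
    # then substitute the copies of the via_cell variables for the
    # rest of the segment.
    out = []
    for seg_id, goals in segments:
        new_goals = []
        subst = {}
        if any(v in via_cell for g in goals for v in g):
            fresh = {f: "%s_%d" % (f, seg_id) for f in fields}
            new_goals.append((cell, "=>", [fresh[f] for f in fields]))
            subst = {f: fresh[f] for f in via_cell}
        for g in goals:
            new_goals.append([subst.get(v, v) for v in g])
        out.append((seg_id, new_goals))
    return out

# Two later segments shaped like those in the example that follows.
transformed = insert_clones(
    [(2, [["L0", "R0"]]), (3, [["K0", "V0", "C1"]])],
    "T0", ["K0", "V0", "L0", "R0"], {"V0", "L0", "R0"})
```

Each segment that touches a via-cell variable gains a clone deconstruction, and variables outside V (here K0) keep their stack-slot copies.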
Example 5. For the program given in Figure 3(a), we have
V = {V0, L0, R0}. Traversing forward from the deconstruction,
we reach segment 2, the segment following the
call to dodgy/4. Since vars(2) ∩ V is not empty, we
add a clone of the deconstruction T0 => f(K0_2, V0_2,
L0_2, R0_2) and construct the substitution
{V0 → V0_2, L0 → L0_2, R0 → R0_2}. Continuing the traversal,
we apply the substitution to balanced(L0, R0), replacing
it with balanced(L0_2, R0_2). Note that K0_2 and V0_2 are
never used. On reaching the end of segment 2 and start
of segment 3, we insert a new clone deconstruction, T0 =>
f(K0_3, V0_3, L0_3, R0_3), and construct a new current
substitution {V0 → V0_3, L0 → L0_3, R0 → R0_3}.
The processing of the later segments is similar.
For segments that share an interval (which must be an interval
ending at the start of a switch), this transformation
inserts the clone unification after the (unique) anchor that
starts all those segments. In Figure 1(a), this would mean
inserting a single clone deconstruction immediately after the
call to compare instead of the three clone deconstructions at
the starts of the three switch arms we show in Figure 1(b).
However, the Mercury code generator does not load variables
from cells until it needs to. The code generated by the
transformation therefore has exactly the same effect as the
code in Figure 1(b).
The optimization extends in a straightforward manner to
cases where a cell variable b is itself a field variable of
another deconstruction, e.g. c => g(b, b2, ..., bk). Simply
applying the optimization first to the deconstruction b =>
f(a1, ..., an), and then applying it to c => g(b, b2, ..., bk),
achieves the desired effect.
Our implementation of the optimization makes two passes
over the procedure body. The first pass is a backwards
traversal. It builds up data structures describing intervals
and segments as it goes along. When it reaches a deconstruction
unification, it uses those data structures to find
the candidate variables, applies the matching algorithm to
find which candidates should be accessed via the cell variable,
and then updates the data structures to reflect what
the results of the associated transformation would be, but
does not apply the transformation yet. Instead the transformations
required by all the optimizable deconstructions
are performed all at once by the second, forward traversal
of the procedure.
7. PERFORMANCE EVALUATION

We have implemented the optimization we have described
in this paper in the Melbourne Mercury compiler. In our
initial testing, we have found it necessary to add two tuning
parameters.

- If the one-path node ratio threshold has a value OPR,
then we accept the results of the matching algorithm
on a given path only if the ratio between the number
of benefit nodes and the number of cost nodes in the
computed matching is at least OPR%.

- If the all-paths node ratio threshold has a value APR,
then we accept the results of the matching algorithm
only if the ratio between the total number of benefit
nodes and the total number of cost nodes on all paths is
at least APR%.

To be accepted, a result of the matching algorithm must
pass both thresholds. If it fails one or both thresholds, the
algorithm will not use that cell as an access path to its field
variables, and will store all its field variables on the stack.
Example 6. Consider the program in Figure 3. This contains
only one path, whose matching is shown in Figure 2.
The ratio of the (unmarked) benefit nodes to the (unmarked)
cost nodes (which correspond to the benefits and costs of
accessing V0, L0 and R0 via the cell) is 3/3 = 100%. If
OPR > 100, this optimization will be rejected and each of
K0, V0, L0 and R0 will be stored in its own stack slot. Since there
is only one path, the all-paths ratio is the same.
Example 7. The program in Figure 1(a) has three paths:
[AB,BC,CE], [AB,BD,DE] and [AB,BE]. The matchings are
shown in Figure 4. The ratio of the (unmarked) benefit
nodes to the (unmarked) cost nodes is 233.33% for [AB,BC,CE]
and for [AB,BD,DE], while the ratio for [AB,BE] is
350%, so the one-path node threshold will not reject
the transformation leading to the code in Figure 1(b) unless
OPR > 233. While the three paths share all their benefit
nodes, they share only one cost node, so when one looks at
all paths, the benefit node to cost node ratio is only
116.66%. Hence the all-path node threshold will reject the
transformation if APR > 116.
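The two threshold tests amount to simple ratio checks over unmarked-node counts. The sketch below uses our own names and integer arithmetic to avoid rounding; the counts are chosen to be consistent with the ratios discussed in Example 7.

```python
def passes_thresholds(path_counts, total_benefit, total_cost, opr, apr):
    # path_counts: (benefit, cost) unmarked-node counts per maximal
    # path.  Every individual path must reach OPR%, and the totals of
    # distinct nodes over all paths must reach APR%.
    one_path_ok = all(100 * b >= opr * c for b, c in path_counts)
    all_paths_ok = 100 * total_benefit >= apr * total_cost
    return one_path_ok and all_paths_ok

# Three paths sharing 7 benefit nodes and only one cost node, with
# 3, 3 and 2 cost nodes respectively (6 distinct cost nodes in all).
paths = [(7, 3), (7, 3), (7, 2)]
```

With these counts the 150/100 setting accepts the transformation, 150/125 rejects it on the all-paths test, and the one-path test flips between OPR values of 233 and 234.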
Increasing the one-path node ratio threshold beyond
100% has the same kind of effect as increasing the numbers
of nodes allocated to cell variable loads and stores relative
to the numbers of nodes allocated to field variable loads
and stores. Its advantage is that setting this threshold to
(say) 125% is significantly cheaper in compilation time than
running the matching algorithm on graphs which have five
copies of each cost node and four copies of each benefit node.
The all-path node ratio threshold poses a different test to
the one-path node ratio threshold, because different paths
share their benefits (the elimination of field variable stores
and maybe loads) but not the principal component of their
costs (the insertion of cell variable loads into segments).
The all-path node ratio threshold is useful in controlling
the impact of the optimization on executable size.
If all operations have the same number of nodes, then setting
this parameter to 100% virtually guarantees that the
optimization will not increase the size of the executable; if
the cost operations have more nodes than the benefit operations,
then setting this parameter to 100% virtually guarantees
that any application of the transformation will strictly
decrease the size of the executable. (One cannot make a concrete
guarantee because it is the C compiler, not the Mercury
compiler, that has the final say on executable size.)
Table 1: Performance evaluation

program    lines    no opt   opt: 100/125    opt: 133/133    opt: 150/100    opt: 150/125
mmc       262844    50.31    44.84  89.1%    44.05  87.6%    44.94  89.3%    45.55  90.5%
compress     689    15.66    15.67 100.0%    15.66 100.0%    15.65 100.0%    15.66 100.0%
ray         2102    13.42    13.31  99.2%    13.29  99.0%    13.30  99.1%    13.29  99.0%

There are two reasons why we found these thresholds necessary.
First, the impacts of the pipeline effects and cache
effects we discussed in Section 5.4 vary depending on the
circumstances. Sometimes these variations make the transformed
program faster than the original; sometimes they
make it slower. The thresholds allow us to filter out the
applications of the transformation that have the highest
chance of slowing down the program, leaving only the applications
that are very likely to yield speedups. Second, even
if the original program and the transformed program have
the same performance with respect to cache and pipeline
effects, we have reason to prefer the original program. This
reason concerns what happens when a cell variable becomes
dead while some but not all of its field variables are still
alive. With the original program, the garbage collector may
be able to reclaim the storage occupied by the cell's dead
field variables, since there may be no live roots pointing
to them. With the transformed program, such reclamation
will not be possible, because there will be a live root from
which they are reachable: the cell variable, whose lifetime
the transformation artificially extends.
We have therefore tested each of our test programs with
several sets of parameter values. (Unfortunately, the whole
parameter space is so big that searching it even close to
exhaustively is not really feasible.) Due to space limitations,
we cannot present results for all of these parameter sets.
However, the four we have chosen are representative of the
results we have; the comments we make are still true when
one looks at all our results to date.
All four sets of parameter values had the cost of a cell
variable load set to three while all other operations had cost
one, since our preliminary investigations suggested these as
roughly right. The sets differed in the values of the one-path
and all-path node ratio thresholds. The four combinations
of these parameters' values we report on are 100/125,
133/133, 150/100 and 150/125 (one-path/all-path).
Our test programs are the following. The mmc test case is
the Melbourne Mercury compiler compiling six of the largest
modules in its own code. compress is a Mercury version
of the 129.compress benchmark from the SPECint95 suite.
The next two entries involve our group's entries in recent
ICFP programming contests. The 2000 entry is a ray tracer
that generates .ppm files from a structural description of a
scene, while the 2001 entry is a source-to-source compression
program for a hypothetical markup language. nuc is a Mercury
version of the pseudoknot benchmark, executed 1000
times. ray is a ray tracing program generating a picture of
a helix and a dodecahedron. The benchmark machine was
a Dell PC (1.6 GHz Pentium IV, 512 MB, Linux 2.4.16).
Table 1 shows the results. The first column identifies the
benchmark program, and the second gives its size in source
lines of code (measured by the word count program wc). The
third column gives the time taken by the program when it
is compiled without stack slot optimization, while the following
four groups of two columns give both the time it
takes when compiled with stack slot optimization and the
indicated set of parameter values, and the ratio of this time
to the unoptimized time. All these times were derived by
executing each benchmark program eight times, discarding
the highest and lowest times, and averaging the remaining
times.
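The timing reduction just described is a simple trimmed mean (sketch; the helper name is ours):

```python
def trimmed_mean(times):
    # Discard the single highest and lowest measurements, then
    # average the rest, as in the benchmarking procedure above.
    runs = sorted(times)
    trimmed = runs[1:-1]
    return sum(trimmed) / len(trimmed)

example = trimmed_mean([5, 1, 2, 3, 4, 0])   # drops 0 and 5
```

Dropping the extremes makes the average robust against a single outlier run on either side.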
The table shows a wide range of behaviors. On compress,
the optimization has no eect; compress simply doesn't contain
the kind of code that our optimization applies to. On
two programs, icfp2001 and ray, stack slot optimization
consistently gives a speedup of about 1%. On icfp2000,
stack slot optimization consistently gives a slowdown of
around 1%. On nuc, stack slot optimization gives a speedup
of a bit more than 2% for some sets of parameter values
and a slowdown of a bit more than 2% for other sets of
parameter values, which clearly indicates that some of the
transformations performed by stack slot optimization are
beneficial while others are harmful, and that different parameter
values admit different proportions of the two kinds.
In general, raising the threshold reduces the probability of a
slowdown but also reduces the amount of speedup available;
one cannot guarantee that a given threshold value excludes
undesirable applications of the transformation without also
guaranteeing that it also excludes the desirable ones. On
icfp2000, the parameter values we have explored (which
were all in the range 100 to 150) admitted too many of
the wrong kind; using parameter values above 150 or using
higher costs for cell loads and stores may yield speedups for
icfp2000 also.
On mmc, stack slot optimization achieves speedups in the
9% to 12% range. When one considers that one doesn't expect
any program to spend much more than 30% of its time
doing stack accesses (it also has to allocate memory cells,
fill them in, make decisions, perform calls and returns, do
arithmetic and collect garbage), and these results show the
elimination of maybe one quarter of stack accesses, these
results look very impressive. Interestingly, previous benchmark
runs (which may be more typical) with slightly earlier
versions of the compiler yielded somewhat smaller speedups,
around 5-9%, but these are still pretty good results.
We think there is a reason why it is the largest and most
complex program in our benchmark suite that gives by far
the best results: it is because it is also by far the program
that makes the most use of complex data structures. Small
programs tend to use relatively simple data structures, because
the code to traverse a complex data structure is also
complex and therefore big. There is therefore reason to believe
that the performance of the stack slot optimization on
other large programs is more likely to resemble its behavior
on mmc than on the other benchmark programs. However,
the fact that stack slot optimization can sometimes lead to
slowdowns means that it may not be a good idea for compilers
to turn it on automatically; it is probably better to
let the programmer do so after testing its eect.
Our benchmarking also shows that stack slot optimization
usually reduces the sizes of executables, usually by 0.1% to
0.5%; in the few cases it increases them by a tiny amount
(less than 0.1%). This is nice, given that many optimizations
improve speed only at the cost of additional memory.
Enabling stack slot optimization slows down compilation
only slightly. In most cases we have seen, the impact on
compilation time is in the 0%-2% range. In a few cases, we
have seen it go up to 6.5%; we haven't seen any higher than
that.
We have also looked at the eect of our optimization on
the sizes of stack frames. The tested version of the Mercury
compiler has 8815 predicates whose implementations
need stack frames. With the parameter values we explored,
stack slot optimization was able to reduce the sizes of the stack
frames of up to 1331 of those predicates, or 15.1%. It also
reduced the average size of a stack frame, averaged over all
8815 predicates, by up to 13.3%, from 5.50 words to 4.77
words. Oddly enough, the optimization leads to only a trivial
reduction (less than 0.1%) in the stack space required to
execute the compiler; at the point where the compiler requires
maximum stack space, virtually all of the frames in
the stack are from predicates whose stack frame sizes are
not affected by our optimization.
8. CONCLUSION AND RELATED WORK

The optimization we have described replaces a number
of stack accesses with an equivalent or smaller number of
heap accesses. In many cases, the optimization will reduce
the number of memory accesses required to execute the program.
The optimization can also reduce the sizes of procedures'
stack frames, which improves locality and makes
caches more effective. The optimization can lead to significant
performance improvements in programs manipulating
complex data structures. A default usage of this optimization
needs to be conservative in order to avoid slowdowns;
the parameter values 150/125 would appear to be suitable
for this. To gain maximum benefit from this optimization,
the programmer may need to explore the parameter space,
since conservative thresholds can restrict the benefit of the
optimization.
The optimization we define is somewhat similar to rematerialization
for register allocation (e.g. [1, 2]), where sometimes
it is easier to recompute the value of a variable than
to keep it in a register. Both rematerialization and our optimization
effectively split the lifetime of a variable in order
to reduce pressure on registers or stack slots. The key difference
is that the rematerialization of a variable is independent
of other variables, whereas the complexity of our problem
arises from the interdependence of choices for stack slots.
In fact, the Mercury compiler has long had an optimization
that took variable definitions of the form b <= f where f
is a constant, and cloned them in the segments where b is
needed in order to avoid storing b on the stack, substituting
different variables for b in each segment. Unlike the optimization
presented in this paper, that optimization requires
no analysis at all.
Also somewhat related is partial dead code elimination [5].
The savings on loads in the initial segment effectively result
from "sinking" the calculation of a field variable to a point
later in the program code. We restrict the candidate field
variables to ensure this sinking does not add overhead.
It is worth discussing how the same optimization can be
applied to other languages. The optimization is applicable
to Prolog even without mode information, since even though
the first occurrence of a unification may be
neither a construct nor a deconstruct, after its execution all
copies added by our algorithm will be deconstructs. A good
Prolog compiler can take advantage of this information to
execute them efficiently. Of course, without information on
determinism, the optimization is more dangerous, but simply
assuming code is deterministic is reasonable for quite
a large proportion of Prolog code. Unfortunately any advantage
is not likely to be visible in WAM-based compilers,
since the "registers" of the WAM are themselves in memory.
For strict functional languages such as ML the optimization
is straightforwardly applicable, since mode information is
syntactically available and the code is always deterministic.
The optimization will be available in the next major
release of the Mercury system, and should also be
available in release-of-the-day of the Mercury system at
<http://www.cs.mu.oz.au/mercury/> in the near future.
9. REFERENCES

- Register allocation & spilling via graph coloring.
- Rematerialization.
- Partial dead code elimination.
- The execution algorithm of Mercury.

Keywords: maximal matching; stack accesses; stack frames; heap cells
Transforming the .NET intermediate language using path logic programming

Abstract. Path logic programming is a modest extension of Prolog for the specification of program transformations. We give an informal introduction to this extension, and we show how it can be used in coding standard compiler optimisations, and also a number of obfuscating transformations. The object language is the Microsoft .NET intermediate language (IL).

1. INTRODUCTION
Optimisers, obfuscators and refactoring tools all need to
apply program transformations automatically. Furthermore,
for each of these applications it is desirable that one can
easily experiment with transformations, varying the applicability
conditions, and also the strategy by which transformations
are applied. This paper introduces a variation
of logic programming, called path logic programming, for
specifying program transformations in a declarative yet executable
manner. A separate language of strategies is used
for controlling the application order.
PPDP'02, October 6-8, 2002, Pittsburgh, Pennsylvania, USA.
We illustrate the ideas by considering the .NET intermediate
language (IL), which is a typed representation used by
the backends of compilers for many different programming
languages [15, 22]. IL is quite close to some high-level
languages, in particular to C# [2, 17], and because of the ease
by which one can convert from IL to C#, obfuscation of IL
is important [8]. Our main examples are therefore drawn
from the literature on obfuscation, but we also consider a
few standard compiler optimisations.
The structure of this paper is as follows. First we provide
a brief introduction to IL, and a variant of IL that is useful
when applying program transformations (or indeed when
writing a decompiler). Next, we introduce the main ideas of
path logic programming, as extensions of standard Prolog,
and explain how we can use these ideas to transform IL pro-
grams. After these preliminaries, we present some concrete
examples, first a few simple optimisations, and then some
more complex obfuscating transformations.
2. .NET
The core of Microsoft's .NET platform is an intermediate
language to which a variety of different source languages can
be compiled. It is similar to Java bytecode, although rather
more complex because it has been specifically designed to
be a convenient compilation target for multiple source lan-
guages. Programs are distributed in this intermediate language
and just-in-time compiled to native code on the target
platform. In this paper we shall be concerned with a relatively
small subset of this language; it is on-going work to
expand the scope of our transformation system.
IL is a stack-based language. The fragment we shall be
considering has instructions to load literal values onto the
stack (ldc), to create new arrays allocated from the heap
(newarr ), and to load and store values between local variables
or method arguments and the stack (ldloc, ldarg, stloc
and starg). The standard arithmetic, boolean and comparison
operations (which all have intuitive names) operate
solely on stack values. Finally, there are instructions to do
nothing, to branch conditionally and unconditionally, and to
return from a method (nop, brfalse, brtrue, br and ret). All
programs must be verifiable; for our purposes, this means
that it should be possible to establish statically what type
each item on the stack will be for each position in the instruction
sequence. This also means that for any particular
position, the stack must be the same height each time the
flow of control reaches that position.
The stack-based nature of IL makes it difficult to formulate
even quite simple conditions on IL code. For example,
the assignment y := (x + 1) - z might be represented by the
IL sequence

ldloc x
ldc 1
add
ldloc z
sub
stloc y
As a result, a condition that recognised the above assignment
would need to be quite long; this problem becomes
much worse with more complicated expressions, especially if
branches in control-flow occur while values are on the stack.
Therefore, the first step we take is to convert from IL to
expression IL (EIL). This language abstracts from the IL
stack by replacing stack-based computations with expres-
sions, introducing new local variables to hold values that
would previously have been stored on the stack. It is the
fact that we only deal with verifiable IL that makes this
translation possible.
The first stage simply introduces one extra local variable
for each stack location and replaces each IL instruction with
an appropriate assignment; thus the above would become
something like:
s0 := a
s1 := b
s0 := s0 + s1
s1 := c
s0 := s0 - s1
y := s0
It is left to the transformations described later to merge
these assignments together to give back the original
assignment. EIL is analogous to both the Jimple
and Grimp languages from the SOOT framework [29, 30]
- the initial translation produces code similar to the three-address
code of Jimple, and assignment merging leaves us
with proper expressions like those of Grimp.
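The first translation stage can be sketched as follows; the instruction encoding and the temporary names s0, s1, ... are our own illustrative choices, and the real system works on flow graphs rather than straight-line instruction lists:

```python
# A minimal sketch of the IL-to-EIL first stage: one fresh local per stack
# slot, one assignment per instruction. The tuple encoding of instructions
# is invented for illustration.
def to_eil(instrs):
    """instrs: list like ("ldloc","a") / ("ldc",5) / ("add",) / ("stloc","y")."""
    out, h = [], 0  # h = current stack height
    for ins in instrs:
        op = ins[0]
        if op in ("ldloc", "ldc"):
            out.append(f"s{h} := {ins[1]}")   # push: assign to next slot
            h += 1
        elif op in ("add", "sub"):
            sym = "+" if op == "add" else "-"
            out.append(f"s{h-2} := s{h-2} {sym} s{h-1}")  # pop two, push one
            h -= 1
        elif op == "stloc":
            h -= 1
            out.append(f"{ins[1]} := s{h}")   # pop into the named local
    return out

print(to_eil([("ldloc", "a"), ("ldloc", "b"), ("add",),
              ("ldloc", "c"), ("sub",), ("stloc", "y")]))
```

Assignment merging, described later, collapses these single-use temporaries back into one proper expression.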
The above concrete syntax omits many significant details
of EIL; for example, all expressions are typed and arithmetic
operators have multiple versions with different overflow
handling. We shall return to the structure of EIL in Section 4.
The fragment of EIL that we consider here enables us to
make many simplifying assumptions that would be invalid
for the whole language. In particular, we ignore aliasing
problems in this paper.
3. PATH LOGIC PROGRAMMING
A simple optimisation that can be performed is atomic
propagation. In our case, an atomic value is taken to be a
constant, a local variable or a parameter that was passed by
value to the method. The intention is that if a local variable
is assigned an atomic value (and neither the variable nor
the value are redefined) then a use of this variable can be
replaced by the atomic value.
In essence, atomic propagation is just a rewrite rule:
S (X ) => S (V )
Here S (X ) stands for a statement that contains the variable
X , and S (V ) is the same statement with X replaced by
V . Naturally the validity of this rule requires that X is a
variable that holds the value of V whenever statement S (X )
is reached. In the above formulation, X and V are meta-variables
that will be instantiated to appropriate fragments
of the program we wish to transform. We only wish to apply
the propagation if V is an atomic value, so that we do not
introduce repeated computations.
It is easy to implement rewriting in a logic programming
language, and doing so makes it particularly easy to keep
track of bindings to meta-variables, see e.g. [16, 28]. In-
deed, this observation has prompted De Volder to propose
the use of Prolog as an appropriate framework for programming
transformations [13]. Here we go one step further, and
extend Prolog with new primitives that help us to express
the side conditions that are found in transformations such
as atomic propagation.
The Prolog program will be interpreted relative to the
flow graph of the object program that is being transformed.
The new primitive predicate
all Q (N , M )
holds true if N and M are nodes in the flow graph, and
all paths from N to M are of the form specified by the
pattern Q . Furthermore, there should be at least one path
that satisfies the pattern Q (we shall justify this slightly
non-intuitive condition later). Such a pattern is a regular
expression, whose alphabet consists of logic programming
goals.
To illustrate, let us return to atomic propagation. The
predicate propagate(N , X , V ) holds true if this transformation
is applicable at node N in the flow graph, with appropriate
bindings for X and V . For example, when this
predicate is evaluated relative to the flow graph in Figure 1,
all the possible bindings can be read off the graph.
The definition of propagate is in terms of the new primitive
all:
propagate(N , X , V ) :-
  all (
    { }* ;
    { 'set(X , V ), atomic(V ),
      local(X ) } ;
    { not('def (X )), not('def (V )) }* ;
    { 'use(X ) })
  (entry , N ).
Figure 1: An example flow graph

This definition says that all paths from program entry to
node N should satisfy a particular pattern. A path is a
sequence of edges in the flow graph. The path pattern is
a regular expression, and in this example it consists of four
components:
. A path first starts with zero or more edges that we do
not particularly care about: this is indicated by { }*.
As we shall see shortly, the interpretation of { } is a
predicate that holds true always.
. Next we encounter an edge whose target node is an
assignment of the form X := V where X is a local
variable and V is atomic, so it is worth attempting to
do propagation.
. Next we have zero or more edges to nodes that do not
re-define X or the value of V .
. Finally, we reach an edge pointing to a use of the variable
X .
This pattern should be satisfied, for one particular binding of
X and V , on all paths from entry to N . The fragments between
curly brackets are ordinary logic programming goals,
except for the use of the tick mark ( ' ) in front of some predicates.
Such predicates are properties of edges. For example,
set(X , E) is a predicate that takes two arguments: a variable
X and an edge E , and it holds true when the edge E
points at a node where X is assigned. Similarly, use(X , E)
is true when the target of E is a node that is labelled with a
statement that makes use of X . When we place a tick mark
in front of a predicate inside a path pattern, the current edge
is added as a final parameter when the predicate is called.
We now return to our requirement that there should be
at least one path between the nodes N and M for the predicate
all Q (N , M ) to hold. Suppose we did not insist on this
restriction, and we had some node N to which there did not
exist a path from the entry node. Then propagate(N , X , V )
should succeed for any value of X and V , which would lead
to a nonsensical situation when we tried to apply the trans-
formation. We could of course specifically add a check to
the definition of propagate to avoid this, but this would be
required for many other predicates, and we have not found
any where our requirement for all is a hindrance. There are
also pragmatic reasons for this requirement - the implementation
of our primitives demands it [12].
It may seem rather unnatural to represent paths as sequences
of edges rather than sequences of nodes, given that
patterns will usually examine the target node of an edge
rather than the edge itself. However, using edges rather
than nodes gives us slightly more power - in particular, it
allows us to specify that a path goes down the "then" or
"else" branches of an if statement. Although thus far we
have not made use of this extra power, we did not wish to
rule out the possibility for the future.
3.1 Syntax and Semantics
Figure 2 summarises the syntax of our extended version
of Prolog. There are two new forms of predicate, namely
all and exists. Each of these takes a path pattern and two
terms. Both terms are meant to designate nodes, say A and
B . The predicate all P (A, B) holds true if all paths from
A to B are of the form indicated by P , and there exists at
least one path of that form. The predicate exists P (A, B)
simply requires that there exists one path of this form.
A pattern is a regular expression whose alphabet is given
by temporal goals - the operator ; represents sequential
composition, + represents choice, * is zero or more occurrences
and ε an empty path. A temporal goal is a list of temporal
predicates, enclosed in curly brackets. A temporal predicate
is either an ordinary predicate (like atomic in the example
we just examined), or a ticked predicate (like use).
We can think of these patterns in the usual way as au-
tomata, where the edges are labelled with temporal goals.
In turn, a temporal goal is interpreted as a property of an
edge in the flow graph. The temporal goal {p1 , . . . , pn }
holds at edge e if each of the pi holds at edge e.
To check whether a ticked predicate holds at e, we simply
add e as a parameter to the given predicate. Non-ticked
predicates ignore e. We shall write g [e] for the interpretation
of a temporal goal g at edge e.
We can now be more precise about the meaning of exists:
exists Q (S , T )
means
∃ path e1 , . . . , en from S to T . ∃ g1 , . . . , gn accepted by Q .
g1 [e1 ] ∧ . . . ∧ gn [en ]
In words, there exists a path in the flow graph from S to T ,
and a sequence of goals in the pattern (which leads from an
initial state to a final state in the automaton Q) such that
each goal holds at the corresponding edge.
Universal path patterns are similarly defined, except that
we require that at least one path satisfies the given pattern.
To wit,
all Q (S , T )
means
exists Q (S , T ) ∧ (∀ paths e1 , . . . , en from S to T .
∃ g1 , . . . , gn accepted by Q . g1 [e1 ] ∧ . . . ∧ gn [en ])
In words, there exists a path between S and T of the desired
form, and additionally all other paths between S and T are
of this form too.
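To make these semantics concrete, here is a small executable model (our own encoding, not the system's implementation): a pattern is a list of (predicate, starred) pairs over edges, a path is a sequence of edges, and all additionally demands that at least one matching path exists. It assumes finite, acyclic graphs:

```python
# Illustrative model of exists/all path patterns over a flow graph.
# Edges are (source, target, label) triples; predicates inspect an edge.
def paths(edges, src, dst):
    """All edge-sequences from src to dst (acyclic edge lists only)."""
    if src == dst:
        yield []
    for e in edges:
        if e[0] == src:
            for rest in paths(edges, e[1], dst):
                yield [e] + rest

def matches(path, pattern):
    """Does the edge sequence match the list of (predicate, starred) items?"""
    if not pattern:
        return not path
    (pred, star), rest = pattern[0], pattern[1:]
    if star:  # either skip the starred item, or consume one matching edge
        return matches(path, rest) or (
            bool(path) and pred(path[0]) and matches(path[1:], pattern))
    return bool(path) and pred(path[0]) and matches(path[1:], rest)

def exists_p(edges, pattern, src, dst):
    return any(matches(p, pattern) for p in paths(edges, src, dst))

def all_p(edges, pattern, src, dst):
    ps = list(paths(edges, src, dst))
    # "all" also requires at least one path, as the text explains
    return bool(ps) and all(matches(p, pattern) for p in ps)

E = [("entry", "n1", "set x"), ("n1", "n2", "use x"), ("entry", "n2", "use x")]
anything = (lambda e: True, True)            # corresponds to { }*
use_x = (lambda e: e[2] == "use x", False)   # corresponds to { 'use(x) }
print(all_p(E, [anything, use_x], "entry", "n2"))  # True
```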
predicate ::= all pattern (term, term)
| exists pattern (term, term)
| predsym(term, . . . , term)
| not(predicate, . . . , predicate)
pattern ::= {tpred , . . . , tpred}
| pattern ; pattern
| pattern + pattern
| pattern*
| ε
tpred ::= predicate
| ' predsym(term, . . . , term)

Figure 2: Syntax of path logic programming
At this point, it is worth mentioning that our proposal
to add temporal features to Prolog is by no means a new
idea [25]. The application of such features to the specification
of program transformations does however appear to be
novel.
4. TRANSFORMING EIL GRAPHS
4.1 Logic terms for EIL
As we remarked earlier, the abstract syntax of EIL carries
quite a lot of detailed information about expressions. This
is reflected in the representation of these expressions as logic
terms; thus, the integer literal 5 becomes the logic term
expr type(applyatom(ldc(int(true, b32), 5)), int(true, b32)).
The expr type constructor reflects the fact that all expressions
are typed - its first parameter is the expression and the
second the type. The type int(true, b32) is a 32-bit signed
integer (the true would become false if we wanted an unsigned
one). To construct a constant literal, the constructor
ldc is used - it takes a type parameter, which is redundant
but simplifies the processing of EIL in other parts of our
transformation system, and the literal value.
For a slightly more complicated example, the expression
x + 5 (where x is a local variable) is represented by
expr type(
  applyatom(add(false, true),
    expr type(localvar(sname("x ")), int(true, b32)),
    expr type(applyatom(ldc(int(true, b32), 5)),
      int(true, b32))),
  int(true, b32)).
The term localvar(sname("x ")) refers to the local variable
x - the seemingly redundant constructor sname reflects the
fact that it is also possible to use a different constructor
to refer to local variables by their position in the method's
declaration list, although this facility is not currently used.
The constructor applyatom exists to simplify the relationship
between IL and EIL - the term add(false, true) directly
corresponds to the IL instruction add , which adds the top
two items on the stack as signed values without overflow.
Thus, the meaning of applyatom can be summarised as: "ap-
ply the IL instruction in the first parameter to the rest of
the parameters, as if they were on the stack".
Finally, it remains to explain how EIL instructions are
defined. It is these that shall be used to label the edges and
nodes of our flow graphs. An instruction is either an expres-
sion, a branch or a return statement, combined with a list of
labels for that statement using the constructor instr label .
For example, the following defines a conditional branch to
the label target :
instr label(Ls, branch(cond(C ), target))
Note that we borrow the notation for lists from functional
programming, writing X : Xs in lieu of [X |Xs]. If the current
instruction is an expression, then exp enclosing an expression
would be used in place of branch, and similarly
return is used in the case of a return statement.
Other EIL constructors shall be introduced as we encounter
them.
4.2 Defining conditions on EIL terms
The nodes of the graph are labelled with the logic term
corresponding to the EIL instruction at that node. In ad-
dition, as described earlier, each edge is labelled with the
term of the EIL instruction at the node that the edge points
to; it is these labels that are used to solve the existential
and universal queries (we anticipate that in future versions
of the system, more complex analysis information will be
stored at nodes and edges, and that the information will
differ between nodes and edges).
Our logic language provides primitives to access the relevant
label given a node or an edge - @elabel(E , I ) holds if
I is the instruction at edge E , and @vlabel(V , I ) holds if I
is the instruction at node V (we use a convention of giving
primitives names beginning with @).
Thus, we can define the set predicate used in Section 3 as
follows:
set(X , V , E) :- @elabel(E , instr label( Ls,
  exp(expr type(. . . , Type)))).
Here and elsewhere, we adopt the Prolog convention that
singleton variables are named by an identifier that starts
with an underscore. It is then straightforward to define def
in terms of set :
def (X , E) :- set(X , V , E).
We also need to define use. This is based on the predicate
occurs(R, X ), which checks whether X occurs in R (by
the obvious recursive traversal). In defining use(X , E ), we
want to distinguish uses of X from definitions of X , whilst
still finding the uses of the variable x in expressions such as
a[x ] := y:
use(X , E) :- @elabel(E , S ), occurs(S , X ), not(def (X , E)).
use(X , E) :- set( L, R, E), occurs(R, X ).
4.3 Modifying the graph
Although the logic language we have described makes it
convenient to define side conditions for program transfor-
mations, it would be rather difficult to use this language to
actually apply these transformations, since that would require
the program flow graph to be represented as a logic
term. The approach we take is that a successful logic query
should also bind its parameter to a list of symbolic "actions"
which define a correct transformation on the flow graph. A
high-level strategy language is responsible for directing in
what order logic queries should be tried and for applying the
resulting transformations. The strategy language is similar
to those found in the literature on rewriting [5, 31], and we
shall not discuss it further here.
An action is just a term, which can be either of the form
replace vertex(V , W ) or new local(T , N ). The former replaces
the vertex V with the vertex W , while the latter
introduces a new local variable named N of type T .
Thus, the overall propagation rewrite can be defined as
follows:
propagate rewrite(replace vertex (N , M )) :-
  propagate(N , X , V ),
  build(N , V , X , M ).
The predicate build(N , V , X , M ) creates a new vertex M ,
by copying the old vertex N , replacing uses of X with V :
build(N , V , X , M ) :-
  @vlabel(N , Old),
  replace uses(X , V , Old , New),
  listof (E , source(N , E), Es),
  @new vertex (New , Es, M ).
We have already discussed the primitive @vlabel . The predicate
replace uses(X , V , Old , New) constructs the term New from
Old , replacing uses of X with V . As with use, it is defined
so as not to apply this to definitions of X - if we are replacing
x with 0 in x := x + 1 we want to end up with x := 0 + 1,
not 0 := 0 + 1.
New vertices are constructed by using @new vertex . This
primitive takes a vertex label and a list of outgoing edges
and binds the new vertex to its final parameter. In this case,
we use the same list of edges as the old vertex, since all we
wish to do is to replace the label.
The predicate source(N , E) is true if the vertex N is the
source of the edge E , whilst the listof predicate is the standard
Prolog predicate which takes three parameters: a term
T , a predicate involving the free variables of T , and a third
parameter which will be bound to a list of all instantiations
of T that solve the predicate. Thus the overall effect
of listof (E , source(N , E), Es) is to bind Es to the outgoing
edges from node N , as required.
5. OPTIMISATIONS
The remainder of this paper consists of examples, and
these are intended to evaluate the design sketched above.
We shall first examine a number of typical compiler optimi-
sations. In the present context, these transformations are
used to clean up code that results either from the translation
of IL to EIL, or from our obfuscations. In partic-
ular, we examine dead assignment elimination, a form of
constant folding, and dead branch elimination. These were
chosen because they are representative; this list is however
not exhaustive, and it is essential that they are applied in
conjunction with other transformations.
Before we embark on detailed coding, therefore, we summarise
one of these other transformations that is particularly
important. As we have discussed in Section 2, the transformation
of IL to EIL creates many extra local variables. For
example, IL instructions such as:
ldc 42
stloc x
are translated to something of the form:
l0 := 42
x := l0
It can be easily seen that the above assignments can be
combined to give:
x := 42
Newly created local variables are generally only used once
- the exception is when the original IL code has the control
flow split at a program point where there are values
on the stack. If a local variable is used only once, an assignment
to it can be propagated regardless of whether the
value being assigned is atomic or not (so long as the value is
not changed in the meantime). The transformation to carry
out this propagation follows a similar pattern to the atomic
propagation defined in Section 3 and we shall not spell out
the details.
5.1 Dead Assignment Elimination
After propagation of atomic values and the removal of
unique variable uses as described above, there will be places
in the code where variables are assigned a value which is not
used afterwards. Such assignments can be replaced by nop
- a subsequent transformation can remove nops completely.
It is more convenient to do this in two phases because there
are many transformations which remove code, and in each
we would otherwise have to be careful to preserve the labels
of the removed instructions in case any jumps to those labels
existed.
Let us look at the conditions needed for this transformation:
pure assignment(X , V , N ) :-
  exists (
    { }* ;
    { 'set(X , V ),
      local(X ), pure(V ) })
  (entry , N ).
It should be noted that although we have used an exists
query here, we did not really need to - in this case we are
looking for a single node that satisfies certain conditions,
not an entire path. However, it turns out to be convenient
to express the query in this way because predicates such as
use are already defined with respect to edges.
The first part of the condition states we need an expression
of the form X := V at node N . We also require that X is
local to the method and that V is "pure", i.e. it has no side
e#ects (each of these conditions can be defined as standard
Prolog predicates).
For the second part of the condition, we require that after
node N , X is no longer used (except at node N itself) until
another definition of X , or the exit of the method body, is
reached.
We first define a predicate unused other than at to capture
the first half of this requirement. Note that it is permissible
to use negation, since X will already be bound to
a ground term.
unused other than at(N , X , E) :- isnode(N , E).
unused other than at(N , X , E) :- not(use(X , E)).
We can now define the path condition:
unused(N , X ) :-
  all (
    { 'unused other than at(N , X ) }* ;
    (ε + { 'def (X ) } ; { }*))
  (N , exit).
In other words, all paths from node N to the exit do not use
X other than at node N , unless they first redefine X .
For this transformation, it is not appropriate to use build
to produce a new vertex, since the entire assignment needs
to be replaced by a nop. Instead, the vertex is created manually:
dead code(replace vertex (N , NopVert)) :-
  pure assignment(X , V , N ),
  unused(N , X ),
  listof (E , source(N , E), Es),
  @vlabel(N , instr label(L, S )),
  @new vertex (
    instr label(
      L,
      exp(expr type(applyatom(nop), void))),
    Es,
    NopVert).
5.2 Evaluation
After the elimination of unique uses of a variable and
atomic value propagation have been performed, we are often
left with expressions involving only constants, which could
be evaluated. For example:
z := x + 2
would be transformed to:
z := 50 + 2
We would like the right hand side of the assignment to be
replaced by 52. So, we will need a predicate that tries to
evaluate any arithmetic operations that have constant expressions
as parameters: eval(I , J ) will try to evaluate I and bind the
resulting integer to J . The base case states that the value
of a constant is just that constant:
eval(applyatom(ldc(int(true, b32), N )), N ).
Here is another clause of eval , for evaluating an addition:
eval(applyatom(add(Ov , S ),
    expr type(L, T1 ),
    expr type(R, T2 )), V ) :-
  eval(L, V1 ),
  eval(R, V2 ),
  V is V1 + V2 .
It should be noted that we are not inspecting the overflow
and sign bits (Ov and S ) here, while the semantics dictate
that we should. Future versions of our implementation will
employ reflection to ensure that the semantics of compile-time
evaluation is exactly the same as run-time evaluation.
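As an illustration of what overflow-respecting compile-time evaluation involves, the following sketch (with helper names of our own, not part of the system) folds add and sub while wrapping results to 32-bit signed integers, matching IL's non-overflow-checking arithmetic:

```python
# Hedged sketch of constant folding with 32-bit signed wraparound.
def wrap32(v):
    """Reduce v to a 32-bit signed integer, as IL's unchecked add does."""
    v &= 0xFFFFFFFF
    return v - 0x100000000 if v >= 0x80000000 else v

def fold(expr):
    """expr: int literal, or (op, left, right) with op in {'add', 'sub'}."""
    if isinstance(expr, int):
        return wrap32(expr)
    op, l, r = expr
    l, r = fold(l), fold(r)
    return wrap32(l + r if op == "add" else l - r)

print(fold(("add", 50, 2)))          # 52
print(fold(("add", 2147483647, 1)))  # -2147483648 (wraps around)
```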
Using eval on an EIL expression leaves us with an integer
(if it succeeds), which we then convert back to an EIL constant
of the form expr type(applyatom(ldc(int(true, b32), J )),
int(true, b32)).
To apply evaluation to a node, we look for a non-atomic
expression at that node (if we allowed atomic expressions
to be evaluated then constant values would be repeatedly
transformed into themselves!), and replace the expression
by its value if eval succeeds:
evaluate(replace vertex(N , M )) :-
  exists (
    { }* ;
    { 'use(F ), not(atomic(F )), eval(F , J ) })
  (entry , N ),
  CE = expr type(applyatom(ldc(int(true, b32), J )),
    int(true, b32)),
  build(N , CE , F , M ).
As before, we use build to replace the original expression F
with CE .
5.3 Dead Branch Elimination
One of our obfuscations (see Section 6.2.2) adds conditional
branches to the program. After evaluation of the conditions
in such branches (or elsewhere), we may find we have
a constant condition that is therefore redundant. In keeping
with the specification of IL, we assume that "true" is defined
to be a non-zero integer.
First, we need to find a suitable conditional branch. We
specify a predicate that will test whether a vertex has a
conditional branch instruction:
cond branch(Cond , Labels, E) :-
  @elabel(E , instr label(Labels,
    branch(cond(Cond), Target))).
To use this to find a true branch we look for constant
conditions whose value is non-zero, and then replace the
branch vertex with a nop pointing to the "true" branch.
As with dead assignment elimination, this is simpler overall
than just replacing the branch statement with the "true"
vertex.
dead branch(replace vertex (BranchVert , NopVert)) :-
  exists(
    { }* ;
    { 'cond branch(
        expr type(
          applyatom(ldc(int(true, b32), N )),
          int(true, b32)),
        Labels),
      not(N = 0) })
  (entry , BranchVert),
  listof (Edge,
    source(BranchVert , Edge),
    [TrueEdge | Rest ]),
  @new vertex(
    instr label(
      Labels,
      exp(expr type(applyatom(nop), void))),
    [TrueEdge],
    NopVert).
We use listof as discussed earlier to obtain a list of the
outgoing edges from BranchVert . Our graph representation
guarantees that the edges will be ordered with the "true"
branch first.
For a false branch, we repeat the same definition, but require
that N equals 0 in the condition, and replace TrueEdge
by FalseEdge in the construction of NopVert .
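The essence of this transformation can be sketched outside the path-logic setting; the node encoding below is invented for illustration, keeping only the rule that a constant condition counts as "true" exactly when it is non-zero:

```python
# Illustrative model of dead branch elimination: a conditional branch on a
# constant is replaced by a nop with a single successor - the branch that
# would have been taken.
def fold_branch(node):
    """node: ('brcond', const, true_target, false_target) -> ('nop', target)."""
    kind, cond, t, f = node
    assert kind == "brcond" and isinstance(cond, int)
    return ("nop", t if cond != 0 else f)  # non-zero means "true", as in IL

print(fold_branch(("brcond", 1, "loop", "exit")))  # ('nop', 'loop')
print(fold_branch(("brcond", 0, "loop", "exit")))  # ('nop', 'exit')
```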
6. OBFUSCATIONS
It is relatively easy to decompile IL code back to a high-level
language such as C#. Therefore, software distributors
who wish to avoid their source code being revealed, for example
to prevent tampering or to protect a secret algorithm,
need to take steps to make this harder. One possibility is to
obfuscate the IL code that they distribute. Although preventing
decompilation completely is likely to be impossible,
especially in the case of verifiable IL code, applying transformations
that make the source code that results from decompilation
di#cult to understand might be an acceptable
alternative.
In this section, we show how path logic programming can
be used to give formal, executable specifications to two representative
examples from Collberg's taxonomy of obfuscating
transformations [8]: variable transformation and array
splitting.
6.1 Variable Transformations
The idea of variable transformation is to pick a variable i
which is local to a method and to replace all occurrences of
i in that method with a new variable j , which is related to
i by j = f (i).
For this, we need a function f which is bijective with domain
Z (or some subset of Z if the potential values of i are
known) and range Z (again, we could have some subset).
We also need f -1 , the inverse of f (which exists as f
is bijective).
To transform i , we need to perform two types of replacements:
. Any expressions of the form i := E (a definition of i)
are transformed to j := f (E ).
. Any uses of i (i.e. not a definition of i)
are replaced by f -1 (j ).
6.1.1 An example
Let us take f (i) = 2i. The program
brif (i < 15) loop
should be transformed to
brif ((j /2) < 15) loop
We could also define transformations to conduct algebraic
simplification, which would turn the above into:
brif (j < 30) loop
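Under an invented tuple encoding of expressions, the two replacement rules for f(i) = 2i can be sketched as follows; this illustrates the idea only and is not the EIL implementation:

```python
# Sketch of the variable transformation: definitions i := E become
# j := f(E) and uses of i become f_inverse(j) = j div 2.
f = lambda e: ("mul", e, 2)          # build the expression E * 2
f_inv = lambda var: ("div", var, 2)  # build the expression j / 2

def transform(stmt):
    """stmt: ('assign', var, expr) or ('use', expr); expr is nested tuples."""
    def in_expr(e):
        if e == "i":
            return f_inv("j")  # a use of i becomes j / 2
        if isinstance(e, tuple):
            return (e[0],) + tuple(in_expr(x) for x in e[1:])
        return e
    if stmt[0] == "assign" and stmt[1] == "i":
        # a definition: rewrite its right-hand side first, then apply f
        return ("assign", "j", f(in_expr(stmt[2])))
    return (stmt[0],) + tuple(in_expr(x) for x in stmt[1:])

print(transform(("use", ("lt", "i", 15))))
# ('use', ('lt', ('div', 'j', 2), 15))
print(transform(("assign", "i", ("add", "i", 1))))
# ('assign', 'j', ('mul', ('add', ('div', 'j', 2), 1), 2))
```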
6.1.2 Implementing the transformation
The initial phase of the transformation is to find a suitable
variable. All we require is that the variable is assigned
to somewhere and that it is local. After we choose our
variable (OldVar ), we generate a fresh variable name using
@fresh name, which takes a type as a parameter so that the
name generated can reflect this type.
As well as producing an action that adds this new local
variable to the method, the following introduce local predicate
returns the old and new local variables - these are
needed for the next phase, which is to actually transform
the uses and definitions of the old variable.
introduce local(OldVar ,
    NewVar ,
    new local(int(true, b32), NewVarName)) :-
  exists (
    { }* ;
    { 'set(OldVar , V ),
      local(OldVar ) })
  (entry , OldVarVert),
  @fresh name(int(true, b32),
    NewVarName),
  NewVar = expr type(localvar(sname(NewVarName)),
    int(true, b32)).
Once the new local has been introduced, we can carry out
the obfuscation by exhaustively replacing uses and definitions
of the old variable as appropriate. We first specify
predicates which build representations of the functions f and
f -1 outlined in Section 6.1.1. The predicate use fn(A, B)
binds B to a representation of f -1 (A):
use fn (A,
  expr type(
    applyatom(cdiv(true),
      A,
      expr type(applyatom(ldc(int(true, b32), 2)),
        int(true, b32))),
    int(true, b32))).
Similarly, we can define assign fn(C , D) which binds D to
a representation of f (C ).
It is now simple to replace uses of OldVar :
replace use(OldVar ,
    NewVar ,
    replace vertex (OldVert , NewVert)) :-
  exists (
    { }* ;
    { 'use(OldVar) })
  (entry , OldVert),
  use fn(NewVar , NewUse),
  build(OldVert , NewUse, OldVar , NewVert).
Similarly, we can replace assignments to OldVar .
6.2 Array Transformations
If we have a method which uses one or more arrays, we
can rearrange some or all of those arrays, for example by
permuting the elements of one, or by merging two of the
arrays into one, or by splitting one into two separate arrays.
In essence, array transformations are just a type of variable
transformation. The key point is that each access (use
or definition) to an element of one of the original arrays
can be translated into an access to an element of one of the
transformed arrays. If one of the original arrays is used in
its entirety (for example by being assigned to another array
variable), this would need to be replaced with code to
dynamically apply the transformation to the entire array,
which could have a major impact on the runtime performance
of the program. Therefore, we avoid applying array
transformations in situations where this would be necessary.
We consider that an array-typed local variable can have
an array transformation applied to it if every path through
the method reaches a particular initialisation of that variable
to a newly created array (using the IL instruction newarr ),
and that all occurrences of that array variable which can be
reached from that initialisation are accesses to an element
of the array, rather than a use or definition of the array
variable itself.
6.2.1 Array splitting
The obfuscation that we are going to specify is an array
split. The idea of array splitting is to take an array and
place the elements in two (or more) new arrays. To do this,
it is necessary to define functions which determine where
the elements of the original array are mapped to (i.e. which
array and the position in that array). Let us look at a simple
example.
Suppose that we have an array A of size n and we want
to split it into two new arrays B1 and B2 . We want B1
and B2 to have the same size (possibly differing by one
element), so let B1 have size ((n + 1) div 2) and B2 have size
(n div 2). The relationship between A, B1 and B2 is given
by the following rule:
A[i ] = B1 [i div 2] if i is even
A[i ] = B2 [(i - 1) div 2] if i is odd
The program:
int [] a := new int[20]
int i := 0
loop: a[i ] := i
i := i + 1
brif (i < 20) loop
should be transformed to:
int [] b1 := new int[10]
int [] b2 := new int[10]
int i := 0
loop: if (i%2 == 0) b1[i /2] := i
else b2[(i - 1)/2] := i
i := i + 1
brif (i < 20) loop
(The if . . . else . . . is not strictly part of EIL, but its implementation
in terms of branches is obvious.)
6.2.2 Specifying the transformation
In general, suppose we have an array A with size n. Then
to define an array split of A into two other arrays, we need
three functions c, f 1 and f 2 and two new arrays B1 and B2
of sizes m1 and m2 respectively (where m1 +m2 > n). The
types of the functions are as follows:
c : [0..n) -> {1, 2}
f 1 : [0..n) -> [0..m1 )
f 2 : [0..n) -> [0..m2 )
The relationship between A, B1 and B2 is given by the following
rule:
A[i ] = B1 [f 1 (i)] if c(i) = 1
A[i ] = B2 [f 2 (i)] if c(i) = 2
To ensure that there are no index clashes with the new ar-
rays, we require that f 1 and f 2 are injective. We should also
make sure that the elements of A are fairly distributed between
B1 and B2 . This means that c should partition [0.n)
into (approximately) equal pieces.
6.2.3 Finding a suitable array
Now, we show how to implement an array split using the
transformation outlined in the example in Section 6.2.1. Let
us look at the conditions necessary for the transformation.
First, we need to find a place where an array is initialised:
array initialise(InitVert ,
    OldArray ,
    Type,
    Size) :-
  exists (
    { }* ;
    { 'set(OldArray ,
        expr type(
          applyatom(
            newarr(T ),
            expr type(Size, int(true, b32))),
          array(Type))),
      local(OldArray ),
      ctype from type spec(Type, T ) })
  (entry , InitVert).
This condition states that at InitVert , we have an instruction
of the form:
OldArray := newarr(Type)[Size]
where OldArray is a local variable of array type.
The standard representation of types in IL that we have
been using so far is known as a ctype; however the type
parameter to newarr takes a slightly di#erent form known
as a type spec. We use the predicate ctype from type spec
to compare the two and thus make sure that we are dealing
with a well-typed array initialisation.
The next step is to check that every path through the
method reaches the initialisation, and after that point the
array variable is not used except to access an array element.
The predicate unindexed(OldArray , E ) holds if OldArray is
used without an index at the node pointed to by edge E .
This gives us the following condition:
ok to transform(InitVert , OldArray) :-
  all (
    { }* ;
    { 'isnode(InitVert) } ;
    { not( 'unindexed(OldArray)) }* )
  (entry , exit).
6.2.4 Initialising the new arrays
Next, we need to create two new arrays with the correct
sizes. If the size of the old array is n, the sizes of the new
arrays should be (n + 1)/2 and n/2. We define predicates
div two and plus one div two that will construct the appropriate
expressions, given the original array size. Since
the expression that computes the original array size might
have side e#ects or be quite complicated, we first introduce
a new local variable to hold the value of this expression
so we do not repeat its computation. Thus, we need to
construct three new vertices to replace InitVert - one to
initialise the local variable, and two to initialise the new ar-
rays. We omit the details - the only minor difficulty is that
because @new vertex requires a list of outgoing edges, and
constructing new edges requires the target vertex, we must
construct the vertices in the reverse of the order they will
appear in the graph.
6.2.5 Replacing the old array
The next step is to exhaustively replace occurrences of
the old array. For each occurrence that follows the newly
inserted initialisations, we need to insert a dynamic test on
the index that the old array was accessed with, and access
one of the two new arrays depending on the result of that
dynamic test.
Finding occurrences is straightforward - we just define a
predicate array occurs(A, I , E) which looks for any occurrence
of the form A[I ] at E , and then search for nodes occurring
after the initialisation of the second new array (whose
vertex is bound to SecondInitVert):
find old array(OldArray , SecondInitVert ,
    OccursVert , Index ) :-
  exists (
    { }* ;
    { 'array occurs(OldArray , Index ) })
  (SecondInitVert , OccursVert).
Using the newly bound value for Index , we can then construct
the necessary nodes with which to replace OccursVert;
again, we omit the details.
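The net effect of the whole transformation can be illustrated with a small executable sketch (Python here purely for illustration; the paper's transformation rewrites flow graphs rather than running code). The parity-based dynamic test on the index is an assumption consistent with the stated sizes (n + 1)/2 and n/2:

```python
# Illustrative sketch of the array-splitting transformation's runtime
# semantics: an array of size n is replaced by two arrays of sizes
# (n + 1)//2 and n//2, and each access A[i] becomes a dynamic test on i.

def split_array(old):
    n = len(old)
    first = [old[i] for i in range(0, n, 2)]   # size (n + 1)//2, even indices
    second = [old[i] for i in range(1, n, 2)]  # size n//2, odd indices
    return first, second

def access(first, second, i):
    # The inserted dynamic test (an assumed encoding; the paper omits
    # these details): even indices go to the first new array, odd to
    # the second.
    if i % 2 == 0:
        return first[i // 2]
    return second[i // 2]

a = [10, 11, 12, 13, 14]
f, s = split_array(a)
assert len(f) == (len(a) + 1) // 2 and len(s) == len(a) // 2
assert all(access(f, s, i) == a[i] for i in range(len(a)))
```

Every original access is preserved, which is exactly the obligation the side conditions above are checking.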
7. RELATED WORK
This paper is a contribution to the extensive literature on
specifying program analyses and transformations in a declarative
style. In this section, we shall survey only the most
closely related work, which directly prompted the results
presented here.
The APTS system of Paige [26] has been a major source
of inspiration for the work presented here. In APTS, program
transformations are expressed as rewrite rules, with
side conditions expressed as boolean functions on the abstract
syntax tree, and data obtained by program analyses.
These functions are in some ways similar to those presented
here, but we have gone a little further in embedding them in
a variant of Prolog. By contrast, in APTS the analyses are
coded by hand, and this is the norm in similar systems that
have been constructed in the high-performance computing
community, such as MT1 [4], which is a tool for restructuring
Fortran programs. Another difference is that both these
systems transform the tree structure rather than the flow
graph, as we do.
We learnt about the idea of taking graph rewriting as
the basis of a transformation toolkit from the work of Assmann
[3] and of Whitfield and Soffa [32]. Assmann's work
is based on a very general notion of graph rewriting, and it
is of a highly declarative nature. It is a little difficult, however,
to express properties of program paths. Whitfield and
Soffa's system does allow the expression of such properties,
through a number of primitives that are all specialisations of
our predicate all P (S , T ). Our main contribution relative
to their work is that generalisation, and its embedding in
Prolog.
The use of path patterns (for examining a graph struc-
ture) that contain logical variables was borrowed from the
literature on querying semi-structured data [6]. There, only
existential path patterns are considered. So relative to that
work, our contribution is to explore the utility of universal
path patterns as well.
Liu and Yu [20] have derived algorithms for solving regular
path queries using both predicate logic and language
inclusion, although their patterns do not (yet) support free
variables, which for our purposes are essential for applying
transformations.
The case for using Prolog as a vehicle for expressing program
analyses was forcefully made in [11]. The same research
team went on to embed model checkers in Prolog [10],
and indeed that was our inspiration for implementing the
all and exists primitives through tabling. Kris De Volder's
work taught us about the use of Prolog for applying program
transformations [13].
8. DISCUSSION
We have presented a modest extension of Prolog, namely
path logic programming. The extension is modest both in
syntactic and semantic terms: we only introduced two new
forms of predicate, and it is easy to compile these to standard
Prolog primitives (the resulting program uses tabling
to conduct a depth-first search of the product automaton of
the determinised pattern and the flow graph [12]). We could
have chosen to give our specifications in terms of modal
logic instead of regular expressions, and indeed such modal
logic specifications were the starting point of this line of research
[19]. We believe that regular expressions are a little
more convenient, but this may be due to familiarity, and a
formal comparison of the expressivity of the alternate styles
would be valuable.
8.1 Shortcomings
There are a number of problems that need to be overcome
before the ideas presented here can be applied in practice:
- Our transformations need to take account of aliasing.
We have recently updated our implementation to add
Prolog primitives which conduct an alias analysis on
demand; an alternative approach would be to annotate
the graph with alias information.
- For efficiency, it would be desirable that the Prolog
interpreter proceeds incrementally, so that work done
before a transformation can be re-used afterwards. Our
current implementation is oblivious of previous work,
and therefore very inefficient. We are currently developing
an algorithm that should address this concern
and hope to be able to report results soon.
- To deal with mutually recursive procedures, it would
be more accurate to describe paths in the program
through a context-free grammar, instead of a traditional
flow graph [23].
We are now engaged in the implementation of a system that
addresses all three of these problems. The main idea is to
compile logical goals of the form all Q (S , T ) to a standard
data flow analysis [1]. This allows a smooth integration
with standard data flow analyses, and also a re-use of
previous work on incremental computation of such analyses.
It does however require us to make the step from standard
logic programming to Prolog with inequality constraints.
As with all programming languages, path logic programming
would greatly benefit from a static typing discipline. In
our experience, most programming errors come from building
up syntactically wrong EIL terms; this could easily be
caught with a type system such as that of Mercury [27]. In
fact, as Mercury already has a .NET implementation [14],
there is an argument for integrating our new features into
that language rather than standard Prolog. Our current
implementation language is Standard ML.
In transformations that involve scope information, it is
tricky to keep track of the exact binding information, and
the possibility of variable capture. The framework should
provide support for handling these difficulties automatically.
This difficulty has been solved by higher-order logic programming
[24], and we hope to integrate those ideas with
our own work at a later stage.
Transformation side-conditions can quickly become quite
complex, and the readability of the resulting description in
Prolog is something of a concern. We hope that judicious
modularisation of the Prolog programs and appropriate use
of our strategy language to separate different transformations
will address this issue, but more experience will be required
to determine whether this actually works in practice
with large sets of complex transformations.
8.2 Other applications
Path logic programming might have other applications beyond
those presented here. For example, in aspect-oriented
programming one needs to specify sets of points in the dynamic
call graph [18]. Current language proposals for the
specification of such pointcuts are somewhat ad hoc, and
path logic programming could provide a principled alternative.
Furthermore, we could then use the transformation
technology presented here to achieve the compile-time improvement
of aspect-oriented programs [21].
Another possible application of path logic programming
is static detection of bugs: the path queries could be used
to look for suspect patterns in code. Indeed, the nature of
the patterns in Dawson Engler's work on bug detection [7]
is very similar to those presented here. Again we are not the
first to note that a variant of logic programming would be
well-suited to this application area: Roger Crew designed
the language ASTLOG for inspection of annotated abstract
syntax trees [9]. Our suggestion is that his work be extended
to the inspection of program paths, thus making the
connection with the extensive literature on software model
checking.
8.3 Acknowledgements
We would like to thank our colleagues in the Programming
Tools Research Group at Oxford for many enjoyable
discussions on the topic of this paper. We would also like
to thank David Lacey, Krzysztof Apt, Hidehiko Masuhara,
Paul Kwiatkowski and the three anonymous PPDP reviewers
for their many helpful comments. Stephen Drape and
Ganesh Sittampalam are supported by a research grant from
Microsoft Research.
9.
--R
Inside C#
How to specify program analysis and transformation with graph rewrite systems.
Transformation mechanisms in MT1.
A query language and algebra for semistructured data based on structural recursion.
Checking system rules using system-specific
A taxonomy of obfuscating transformations.
ASTLOG: a language for examining abstract syntax trees.
Logic programming and model checking.
Practical program analysis using general purpose logic programming systems - a case study
Universal regular path queries.
Compiling Mercury to the
A logic programming approach to implementing higher-order term rewriting
A programmer's introduction to C#
Imperative program transformation by rewriting.
Solving regular path queries.
Compilation semantics of aspect-oriented programs
A technical overview of the common language infrastructure.
Interconvertibility of a class of set constraints and context-free language reachability
An overview of temporal and modal logic programming.
Viewing a program transformation system at work.
Logic Programming for Programmers.
Optimizing Java bytecode using the Soot framework: Is it feasible?
A language for program transformation based on rewriting strategies.
An approach for exploring code improving transformations.
--TR
Compilers: principles, techniques, and tools
A logic programming approach to implementing higher-order term rewriting
Practical program analysis using general purpose logic programming systems - a case study
An approach for exploring code improving transformations
Interconvertibility of a class of set constraints and context-free-language reachability
A programmer's introduction to C# (2nd ed.)
Inside C#
Universal Regular Path Queries
An Overview of Temporal and Modal Logic Programming
Logic Programming and Model Checking
How to Uniformly Specify Program Analysis and Transformation with Graph Rewrite Systems
Optimizing Java Bytecode Using the Soot Framework
Imperative Program Transformation by Rewriting
Solving Regular Path Queries
Viewing A Program Transformation System At Work
UnQL: a query language and algebra for semistructured data based on structural recursion
Soot - a Java bytecode optimization framework
| compiler optimisations;program analysis;obfuscation;logic programming;program transformation;meta programming |
571639 | Neural methods for dynamic branch prediction. | This article presents a new and highly accurate method for branch prediction. The key idea is to use one of the simplest possible neural methods, the perceptron, as an alternative to the commonly used two-bit counters. The source of our predictor's accuracy is its ability to use long history lengths, because the hardware resources for our method scale linearly, rather than exponentially, with the history length. We describe two versions of perceptron predictors, and we evaluate these predictors with respect to five well-known predictors. We show that for a 4 KB hardware budget, a simple version of our method that uses a global history achieves a misprediction rate of 4.6% on the SPEC 2000 integer benchmarks, an improvement of 26% over gshare. We also introduce a global/local version of our predictor that is 14% more accurate than the McFarling-style hybrid predictor of the Alpha 21264. We show that for hardware budgets of up to 256 KB, this global/local perceptron predictor is more accurate than Evers' multicomponent predictor, so we conclude that ours is the most accurate dynamic predictor currently available. To explore the feasibility of our ideas, we provide a circuit-level design of the perceptron predictor and describe techniques that allow our complex predictor to operate quickly. Finally, we show how the relatively complex perceptron predictor can be used in modern CPUs by having it override a simpler, quicker Smith predictor, providing IPC improvements of 15.8% over gshare and 5.7% over the McFarling hybrid predictor. | Figure
1: Perceptron Model. The input values x_1, ..., x_n are propagated through the weighted connections by
taking their respective products with the weights w_1, ..., w_n. These products are summed, along with the bias
weight w_0, to produce the output value y.
4.1 How Perceptrons Work
The perceptron was introduced in 1962 [24] as a way to study brain function. We consider the simplest
of many types of perceptrons [2], a single-layer perceptron consisting of one artificial neuron connecting
several input units by weighted edges to one output unit. A perceptron learns a target Boolean function
t(x_1, ..., x_n) of its n inputs. In our case, the x_i are the bits of a global branch history shift register, and the
target function predicts whether a particular branch will be taken. Intuitively, a perceptron keeps track
of positive and negative correlations between branch outcomes in the global history and the branch being
predicted.
Figure
1 shows a graphical model of a perceptron. A perceptron is represented by a vector whose
elements are the weights w_0, ..., w_n. For our purposes, the weights are signed integers. The output is the dot product
of the weights vector, w, and the input vector, x (x_0 is always set to 1, providing a "bias" input).
The output y of a perceptron is computed as

    y = w_0 + sum_{i=1..n} x_i * w_i

The inputs to our perceptrons are bipolar, i.e., each x_i is either -1, meaning not taken, or 1, meaning
taken. A negative output is interpreted as predict not taken. A non-negative output is interpreted as predict
taken.
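As a concrete illustration, the output computation can be sketched as follows (Python used purely for illustration; the weight and history values are made up):

```python
def perceptron_output(weights, history):
    """Dot product of the weights and the bipolar inputs; weights[0] is
    the bias weight, whose input x_0 is always 1."""
    y = weights[0]
    for w, x in zip(weights[1:], history):
        y += w * x          # each x is -1 (not taken) or 1 (taken)
    return y

# Predict taken iff the output is non-negative.
w = [1, 2, -3, 0]           # bias weight plus three history weights
h = [1, -1, 1]              # bipolar global history, made-up values
y = perceptron_output(w, h)
assert y == 6               # 1 + 2*1 + (-3)*(-1) + 0*1, so predict taken
```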
4.2 Training Perceptrons
Once the perceptron output y has been computed, the following algorithm is used to train the perceptron.
Let t be -1 if the branch was not taken, or 1 if it was taken, and let theta be the threshold, a parameter to the
training algorithm used to decide when enough training has been done.

    if sign(y) != t or |y| <= theta then
        for i := 0 to n do
            w_i := w_i + t * x_i
        end for
    end if
Since t and x_i are always either -1 or 1, this algorithm increments the i-th weight when the branch
outcome agrees with x_i, and decrements the weight when it disagrees. Intuitively, when there is mostly
agreement, i.e., positive correlation, the weight becomes large. When there is mostly disagreement, i.e.,
negative correlation, the weight becomes negative with large magnitude. In both cases, the weight has
a large influence on the prediction. When there is weak correlation, the weight remains close to 0 and
contributes little to the output of the perceptron.
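The training rule, together with the output computation, can be sketched in Python (illustrative only; the threshold and example values are made up):

```python
THETA = 10  # illustrative threshold (a tuned parameter in practice)

def output(weights, history):
    # weights[0] is the bias weight; its input x_0 is always 1
    return weights[0] + sum(w * x for w, x in zip(weights[1:], history))

def train(weights, history, t):
    """t is 1 if the branch was taken, -1 if not. Update only on a
    misprediction or while |y| has not yet exceeded the threshold."""
    y = output(weights, history)
    if (y >= 0) != (t == 1) or abs(y) <= THETA:
        weights[0] += t
        for i, x in enumerate(history):
            weights[i + 1] += t * x  # increment on agreement, decrement otherwise

w = [0, 0, 0]
for _ in range(3):
    train(w, [1, -1], 1)   # outcome "taken" correlates with the history bits
assert w == [3, 3, -3]     # positive and negative correlations both grow
```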
4.3 Linear Separability
A limitation of perceptrons is that they are only capable of learning linearly separable functions [11].
Imagine the set of all possible inputs to a perceptron as an n-dimensional space. The solution to the
equation

    w_0 + sum_{i=1..n} x_i * w_i = 0

is a hyperplane (e.g., a line, if n = 2) dividing the space into the set of inputs for which the perceptron will
respond false and the set for which the perceptron will respond true [11]. A Boolean function over variables
x_1, ..., x_n is linearly separable if and only if there exist values for w_0, ..., w_n such that all of the true instances
can be separated from all of the false instances by that hyperplane. Since the output of a perceptron is
decided by the above equation, only linearly separable functions can be learned perfectly by perceptrons.
For instance, a perceptron can learn the logical AND of two inputs, but not the exclusive-OR, since there
is no line separating true instances of the exclusive-OR function from false ones on the Boolean plane.
As we will show later, many of the functions describing the behavior of branches in programs are
linearly separable. Also, since we allow the perceptron to learn over time, it can adapt to the non-linearity
introduced by phase transitions in program behavior. A perceptron can still give good predictions when
learning a linearly inseparable function, but it will not achieve 100% accuracy. By contrast, two-level
PHT schemes like gshare can learn any Boolean function if given enough training time.
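This behaviour is easy to demonstrate: training a bipolar perceptron on AND converges to perfect accuracy, while on exclusive-OR it cannot. A small illustrative sketch (the epoch count and threshold are arbitrary choices, not values from the predictor):

```python
def run(cases, epochs=100, theta=4):
    """Train a bipolar perceptron on (inputs, target) pairs and report
    its final accuracy; a toy illustration, not the predictor itself."""
    n = len(cases[0][0])
    w = [0] * (n + 1)
    for _ in range(epochs):
        for xs, t in cases:
            y = w[0] + sum(wi * xi for wi, xi in zip(w[1:], xs))
            if (y >= 0) != (t == 1) or abs(y) <= theta:
                w[0] += t
                for i, xi in enumerate(xs):
                    w[i + 1] += t * xi
    correct = sum(((w[0] + sum(wi * xi for wi, xi in zip(w[1:], xs))) >= 0)
                  == (t == 1) for xs, t in cases)
    return correct / len(cases)

AND = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
XOR = [((-1, -1), -1), ((-1, 1), 1), ((1, -1), 1), ((1, 1), -1)]
assert run(AND) == 1.0   # AND is linearly separable: learned perfectly
assert run(XOR) < 1.0    # XOR is not: no weight vector classifies all four
```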
4.4 Branch Prediction with Perceptrons
We can use a perceptron to learn correlations between particular branch outcomes in the global history
and the behavior of the current branch. These correlations are represented by the weights. The larger the
weight, the stronger the correlation, and the more that particular branch in the global history contributes
to the prediction of the current branch. The input to the bias weight is always 1, so instead of learning a
correlation with a previous branch outcome, the bias weight, w_0, learns the bias of the branch, independent
of the history.
The processor keeps a table of N perceptrons in fast SRAM, similar to the table of two-bit counters
in other branch prediction schemes. The number of perceptrons, N, is dictated by the hardware budget
and the number of weights, which itself is determined by the amount of branch history we keep. Special
circuitry computes the value of y and performs the training. We discuss this circuitry in Section 5. When
the processor encounters a branch in the fetch stage, the following steps are conceptually taken:
1. The branch address is hashed to produce an index i into the table of perceptrons.
2. The i-th perceptron is fetched from the table into a vector register, P, of weights.
3. The value of y is computed as the dot product of P and the global history register.
4. The branch is predicted not taken when y is negative, or taken otherwise.
5. Once the actual outcome of the branch becomes known, the training algorithm uses this outcome
and the value of y to update the weights in P.
6. P is written back to the i-th entry in the table.
It may appear that prediction is slow because many computations and SRAM transactions take place
in steps 1 through 5. However, Section 5 shows that a number of arithmetic and microarchitectural tricks
enable a prediction in a single cycle, even for long history lengths.
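The six steps can be sketched behaviourally (not at the hardware level) as follows; the modulo hash and the parameter values are illustrative assumptions:

```python
# Behavioural sketch of the six prediction steps, assuming a simple
# modulo hash of the branch address and made-up parameter values.
N, HLEN, THETA = 128, 8, 16   # table size, history length, threshold
table = [[0] * (HLEN + 1) for _ in range(N)]
ghr = [-1] * HLEN             # global history register, -1 = not taken

def predict_and_update(addr, outcome_taken):
    i = addr % N                                  # step 1: hash into the table
    w = table[i]                                  # step 2: fetch perceptron i
    y = w[0] + sum(wi * xi for wi, xi in zip(w[1:], ghr))  # step 3: dot product
    prediction = y >= 0                           # step 4: taken iff y >= 0
    t = 1 if outcome_taken else -1                # step 5: train on the outcome
    if prediction != outcome_taken or abs(y) <= THETA:
        w[0] += t
        for j, x in enumerate(ghr):
            w[j + 1] += t * x
    table[i] = w                                  # step 6: write back
    ghr.pop(0)
    ghr.append(t)                                 # shift outcome into the history
    return prediction

# A branch at one address that is always taken is soon predicted correctly.
hits = [predict_and_update(0x40ABCD, True) for _ in range(50)]
assert all(hits[10:])
```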
5 Implementation
This section explores the design space for perceptron predictors and discusses details of a circuit-level
implementation.
5.1 Design Space
Given a fixed hardware budget, three parameters need to be tuned to achieve the best performance: the
history length, the number of bits used to represent the weights, and the threshold.
History length. Long history lengths can yield more accurate predictions [9] but also reduce the number
of table entries, thereby increasing aliasing. In our experiments, the best history lengths ranged from 4 to
66, depending on the hardware budget. The perceptron predictor can use more than one kind of history.
We have used both purely global history as well as a combination of global and per-branch history.
Representation of weights. The weights for the perceptron predictor are signed integers. Although
many neural networks have floating-point weights, we found that integers are sufficient for our perceptrons,
and they simplify the design. We find that using 8-bit weights provides the best trade-off between
accuracy and hardware budget.
Threshold. The threshold is a parameter to the perceptron training algorithm that is used to decide
whether the predictor needs more training.
5.2 Circuit-Level Implementation
Here, we discuss general techniques that will allow us to implement a quick perceptron predictor. We then
give more detailed results of a transistor-level simulation.
Computing the Perceptron Output. The critical path for making a branch prediction includes the computation
of the perceptron output. Thus, the circuit that evaluates the perceptron should be as fast as
possible. Several properties of the problem allow us to make a fast prediction. Since -1 and 1 are the
only possible input values to the perceptron, multiplication is not needed to compute the dot product.
Instead, we simply add when the input bit is 1 and subtract (add the two's-complement) when the input
bit is -1. In practice, we have found that adding the one's-complement, which is a good estimate for the
two's-complement, works just as well and lets us avoid the delay of a small carry-propagate adder. This
computation is similar to that performed by multiplication circuits, which must nd the sum of partial
products that are each a function of an integer and a single bit. Furthermore, only the sign bit of the result
is needed to make a prediction, so the other bits of the output can be computed more slowly without having
to wait for a prediction. In this paper, we report only results that simulate this complementation idea.
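The effect of the complementation idea can be checked numerically: adding the one's-complement ~w = -w - 1 instead of the exact negation -w makes each subtracted term off by exactly one. A sketch (the weights and history bits are made up):

```python
# Sketch of the complementation trick: to "subtract" a weight when the
# input is -1, add its one's-complement ~w = -w - 1 instead of the exact
# two's-complement -w, avoiding a carry-propagate add per weight.

def dot_exact(weights, bits):
    return sum(w if b else -w for w, b in zip(weights, bits))

def dot_ones_complement(weights, bits):
    return sum(w if b else ~w for w, b in zip(weights, bits))

weights = [13, -7, 2, 90, -41]   # made-up 8-bit-range weights
bits = [1, 0, 0, 1, 0]           # 1 = taken, 0 = not taken
exact = dot_exact(weights, bits)
approx = dot_ones_complement(weights, bits)
# Each complemented term is off by exactly 1, so the total error is at
# most the number of "not taken" bits, rarely enough to flip the sign.
assert exact - approx == bits.count(0)
assert (approx >= 0) == (exact >= 0)
```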
Training. The training algorithm of Section 4.2 can be implemented efficiently in hardware. Since there
are no dependences between loop iterations, all iterations can execute in parallel. Since in our case both
t and x_i can only be -1 or 1, the loop body can be restated as "increment w_i by 1 if t = x_i, and decrement
it otherwise," a quick arithmetic operation since the w_i are 8-bit numbers:

    for each bit i do in parallel
        if t = x_i then
            w_i := w_i + 1
        else
            w_i := w_i - 1
Circuit-Level Simulation. Using a custom logic design program and the HSPICE and CACTI 2.0 simulators,
we designed and simulated a hardware implementation of the elements of the critical path for the
perceptron predictor for several table sizes and history lengths. We used CACTI, a cache modeling tool,
to estimate the amount of time taken to read the table of perceptrons, and we used HSPICE to measure the
latency of our perceptron output circuit.
The perceptron output circuit accepts input signals from the weights array and from the history register.
As weights are read, they are bitwise exclusive-ORed with the corresponding bits of the history register. If
the i-th history bit is set, then this operation has the effect of taking the one's-complement of the i-th weight;
otherwise, the weight is passed unchanged. After the weights are processed, their sum is found using a
Wallace-tree of 3-to-2 carry-save adders [5], which reduces the problem of finding the sum of n numbers to
the problem of finding the sum of two numbers. The final two numbers are summed with a carry-lookahead
adder. The Wallace-tree has depth O(log n), and the carry-lookahead adder has depth O(log n), so the
computation is relatively quick. The sign of the sum is inverted and taken as the prediction.
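The carry-save reduction can be sketched bit by bit (an illustrative model of the arithmetic, not of the circuit; the word width is an arbitrary choice):

```python
# One 3-to-2 carry-save step: three addends are reduced to a sum word
# and a carry word with no carry propagation; repeating this in a tree
# of depth O(log n) leaves just two numbers for the final adder.

def csa(a, b, c, width=16):
    """Full adder applied bitwise: returns (s, carry) with a+b+c == s+carry."""
    s = carry = 0
    for i in range(width):
        x, y, z = (a >> i) & 1, (b >> i) & 1, (c >> i) & 1
        s |= (x ^ y ^ z) << i                                # per-bit sum
        carry |= ((x & y) | (x & z) | (y & z)) << (i + 1)    # per-bit carry-out
    return s, carry

def wallace_sum(nums):
    nums = list(nums)
    while len(nums) > 2:
        a, b, c = nums.pop(), nums.pop(), nums.pop()
        nums.extend(csa(a, b, c))        # each step turns 3 numbers into 2
    return sum(nums)                     # final carry-lookahead-style add

assert wallace_sum([5, 9, 12, 7, 30]) == 63
```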
Table
1 shows the delay of the perceptron predictor for several hardware budgets and history lengths,
simulated with HSPICE and CACTI for 180nm process technology. We obtain these delay estimates by
selecting inputs designed to elicit the worst-case gate delay. We measure the time from when one of the
input signals crosses half of Vdd until the time the perceptron predictor yields a steady, usable signal.
For a 4KB hardware budget and history length of 24, the total time taken for a perceptron prediction is
2.4 nanoseconds. This works out to slightly less than 2 clock cycles for a CPU with a clock rate of 833
MHz, the clock rate of the fastest 180 nm Alpha 21264 processor as of this writing. The Alpha 21264
branch predictor itself takes 2 clock cycles to deliver a prediction, so our predictor is within the bounds
of existing technology. Note that a perceptron predictor with a history of 23 instead of 24 takes only 2.2
nanoseconds; it is about 10% faster because a predictor with 24 weights (23 for history plus 1 for bias)
can be organized more efficiently than a predictor with 25 weights, for reasons specific to our Wallace-tree
design.
History Length | Table Size (bytes) | Perceptron Delay (ps) | Table Delay (ps) | Total Delay (ps) | Cycles @ 833 MHz | Cycles @ 1.76 GHz
9 | 512 | 725 | 432 | 1157 | 1.0 | 2.0
Table
1: Perceptron Predictor Delay. This table shows the simulated delay of the perceptron predictor at several
table sizes and history length configurations. The delay of computing the output and fetching the perceptron from
the table are shown separately, as well as in total.
Compensating for Delay. Ideally, a branch predictor operates in a single processor clock cycle. Jiménez
et al. study a number of techniques for reducing the impact of delay on branch predictors [16]. For
example, a cascading perceptron predictor would use a simple predictor to anticipate the address of the
next branch to be fetched, and it would use a perceptron to begin predicting the anticipated address. If the
branch were to arrive before the perceptron predictor were finished, or if the anticipated branch address
were found to be incorrect, a small gshare table would be consulted for a quick prediction. The study
shows that a similar predictor, using two gshare tables, is able to use the larger table 47% of the time.
An overriding perceptron predictor would use a quick gshare predictor to get an immediate prediction,
starting a perceptron prediction at the same time. The gshare prediction is acted upon by the fetch engine.
Once the perceptron prediction completes, both predictions are compared. If they differ, the actions taken
by the fetch engine are rolled back and restarted with the new prediction, incurring a small penalty. The
Alpha 21264 uses this kind of branch predictor, with a slower hybrid branch predictor overriding a less
accurate but faster line predictor [18]. When a line prediction is overridden, the Alpha predictor incurs
a single-cycle penalty, which is small compared to the 7-cycle penalty for a branch misprediction. By
pipelining the perceptron predictor, or using the hierarchical techniques mentioned above, the perceptron
predictor can be used successfully in future microprocessors. The overriding strategy seems particularly
appropriate since, as pipelines continue to become deeper, the cost of overriding a less accurate predictor
decreases as a percentage of the cost of a full misprediction. We present a detailed analysis of the
overriding scheme in Section 6.4.
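A toy cost model makes the trade-off concrete; the 1- and 7-cycle penalties are the Alpha 21264 figures quoted above, and the model is an illustration rather than the simulator used in Section 6.4:

```python
# Toy cost model for the overriding scheme: the fast predictor's answer
# is acted on immediately; when the slower perceptron disagrees, fetch
# is rolled back for a small penalty, versus the full misprediction
# penalty when the final prediction is still wrong.

OVERRIDE_PENALTY, MISPREDICT_PENALTY = 1, 7   # Alpha 21264 figures

def override_cost(fast_pred, slow_pred, outcome):
    cycles = 0
    if slow_pred != fast_pred:
        cycles += OVERRIDE_PENALTY       # restart fetch with the new prediction
    if slow_pred != outcome:
        cycles += MISPREDICT_PENALTY     # the final prediction was still wrong
    return cycles

assert override_cost(True, True, True) == 0    # both agree and are right
assert override_cost(False, True, True) == 1   # override saves a misprediction
assert override_cost(True, False, True) == 8   # harmful override: both penalties
```

The scheme pays off whenever the slower predictor's extra accuracy saves more full mispredictions than the overrides cost.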
6 Results and Analysis
To evaluate the perceptron predictor, we use simulation to compare it against well-known techniques
from the literature. We first compare the accuracy of two versions of the perceptron predictor against 5
predictors. We then evaluate performance using IPC as the metric, comparing an overriding perceptron
predictor against an overriding McFarling-style predictor at two different clock rates. Finally, we present
analysis to explain why the perceptron predictor performs well.
6.1 Methodology
Here we describe our experimental methodology. We discuss the other predictors simulated, the benchmarks
used, the tuning of the predictors, and other issues.
Predictors simulated. We compare our new predictor against gshare [22], bi-mode [20], and a
combination gshare/PAg McFarling-style hybrid predictor [22] similar to that of the Alpha 21264,
with all tables scaled exponentially for increasing hardware budgets. For the perceptron predictor, we
simulate both a purely global predictor, as well as a predictor that uses both global and local history.
This global/local predictor takes some input to the perceptron from the global history register, and other
input from a set of per-branch histories. For the global/local perceptron predictor, the extra state used
by the table of local histories was constrained to be within 35% of the hardware budget for the rest of
the predictor, reflecting the design of the Alpha 21264 hybrid predictor. For gshare and the perceptron
predictors, we also simulate the agree mechanism [28], which predicts whether a branch outcome will
agree with a bias bit set in the branch instruction. The agree mechanism turns destructive aliasing into
constructive aliasing, increasing accuracy at small hardware budgets.
Our methodology differs from our previous work on the perceptron predictor [17], which used traces
from x86 executables of SPEC2000 and only explored global versions of the perceptron predictor. We find
that the perceptron predictor achieves a larger improvement over other predictors for the Alpha instruction
set than for the x86 instruction set. We believe that this difference stems from the Alpha's RISC instruction
set, which requires more dynamic branches to accomplish the same work, and which thus requires longer
histories for accurate prediction. Because the perceptron predictor can make use of longer histories than
other predictors, it performs better for RISC instruction sets.
Gathering traces. We use SimpleScalar/Alpha [3] to gather traces. Each time the simulator executes a
conditional branch, it records the branch address and outcome in a trace file. Branches in libraries are not
profiled. The traces are fed to a program that simulates the different branch prediction techniques.
Benchmarks simulated. We use the 12 SPEC 2000 integer benchmarks. We allow each benchmark to
execute 300 million branches, which causes each benchmark to execute at least one billion instructions.
We skip past the first 50 million branches in the trace to measure only the steady state prediction accuracy,
without effects from the benchmarks' initializations. For tuning the predictors, we use the SPEC train
inputs. For reporting misprediction rates, we test the predictors on the ref inputs.
Tuning the predictors. We tune each predictor for history length using traces gathered from each
of the 12 benchmarks and the train inputs. We exhaustively test every possible history length at each
hardware budget for each predictor, keeping the history length yielding the lowest harmonic mean misprediction
rate. For the global/local perceptron predictor, we exhaustively test each pair of history lengths
such that the sum of global and local history length is at most 50. For the agree mechanism, we set bias
bits in the branch instructions using branch biases learned from the train inputs.
For the global perceptron predictor, we find, for each history length, the best value of the threshold by
using an intelligent search of the space of values, pruning areas of the space that give poor performance.
We re-use the same thresholds for the global/local and agree perceptron predictors.
Table
2 shows the results of the history length tuning. We find an interesting relationship between
history length h and threshold theta: the best threshold for a given history length is always exactly
theta = 1.93h + 14. This is because adding another weight to a perceptron increases its average output by some
constant, so the threshold must be increased by a constant, yielding a linear relationship between history
length and threshold. Through experimentation, we determine that using 8 bits for the perceptron weights
yields the best results.
6.2 Impact of History Length on Accuracy
One of the strengths of the perceptron predictor is its ability to consider much longer history lengths
than traditional two-level schemes, which helps because highly correlated branches can occur at a large
distance from each other [9]. Any global branch prediction technique that uses a fixed amount of history
information will have an optimal history length for a given set of benchmarks. As we can see from Table 2,
the perceptron predictor works best with much longer histories than the other two predictors. For example,
with a 4K byte hardware budget, gshare works best with a history length of 14, the maximum possible
length for gshare. At the same hardware budget, the global perceptron predictor works best with a history
length of 24.
hardware budget (bytes) | gshare: history length, # entries | global perceptron: history length, # entries | global/local perceptron: global/local history, # entries

Table
2: Best History Lengths. This table shows the best amount of global history to keep for gshare and two
versions of the perceptron predictor, as well as the number of table entries for each predictor.
[Figure: misprediction rate (percent) vs. hardware budget (bytes) for gshare, gshare with agree, bi-mode, global/local hybrid, global perceptron, global/local perceptron, and global/local perceptron with agree.]
Figure
2: Hardware Budget vs. Prediction Rate on SPEC 2000. This graph shows the misprediction rates of various
predictors as a function of the hardware budget.
6.3 Misprediction Rates
Figure
2 shows the harmonic mean of misprediction rates achieved with increasing hardware budgets
on the SPEC 2000 benchmarks. At a 4K byte hardware budget, the global perceptron predictor has a
misprediction rate of 1.94%, an improvement of 53% over gshare at 4.13% and 31% over a 6K byte bi-mode
at 2.82%. When both global and local history information is used, the perceptron predictor still
has superior accuracy. A global/local hybrid predictor with the same configuration as the Alpha 21264
predictor using 3712 bytes has a misprediction rate of 2.67%. A global/local perceptron predictor with
3315 bytes of state has a misprediction rate of 1.71%, representing a 36% decrease in misprediction rate
over the Alpha hybrid. The agree mechanism improves accuracy, especially at small hardware budgets.
With a small budget of only 750 bytes, the global/local perceptron predictor achieves a misprediction
rate of 2.89%, which is less than the misprediction rate of a gshare predictor with 11 times the hardware
budget, and less than the misprediction rate of a gshare/agree predictor with a 2K byte budget. Figure 3
shows the misprediction rates of two PHT-based methods and two perceptron predictors on the SPEC 2000
benchmarks for hardware budgets of 4K and 16K bytes.
Figure 3: Misprediction Rates for Individual Benchmarks. These charts show the per-benchmark misprediction
rates of the gshare, bi-mode, global perceptron, and global/local perceptron predictors at hardware budgets of
4 KB (gshare 4KB, bi-mode 3KB, global perceptron 4KB, global/local perceptron 3.3KB) and 16 KB (gshare
16KB, bi-mode 12KB, global perceptron 16KB, global/local perceptron 13KB).
6.3.1 Large Hardware Budgets
As Moore's Law continues to provide more and more transistors in the same area, it makes sense to explore
much larger hardware budgets for branch predictors. Evers' thesis [10] explores the design space for multi-component
hybrid predictors using large hardware budgets, from KB to 368 KB. To our knowledge,
the multi-component predictors presented in Evers' thesis are the most accurate fully dynamic branch
predictors known in previous work. This predictor uses a McFarling-style chooser to choose between two
other McFarling-style hybrid predictors. The first hybrid component joins a gshare with a short history
to a gshare with a long history. The other hybrid component consists of a PAs hybridized with a loop
predictor, which is capable of recognizing regular looping behavior even for loops with long trip counts.
We simulate Evers' multi-component predictors using the same configuration parameters given in his
thesis. At the same set of hardware budgets, we simulate a global/local version of the perceptron predictor.
The tuning of this large perceptron predictor is not as exhaustive as for the smaller hardware budgets, due
to the huge design space. We tune for the best global history length on the SPEC train inputs, and then
for the best fraction of global versus local history at a single hardware budget, extrapolating this fraction to
the entire set of hardware budgets. As with our previous global/local perceptron experiments, we allocate
35% of the hardware budgets to storing the table of local histories. The configuration of the perceptron
predictor is given in Table 3.
Table 3: Configurations for Large Budget Perceptron Predictors. Columns: size; global history length; local
history length; number of perceptrons; number of local histories.
Figure
4 shows the harmonic mean misprediction rates of Evers' multi-component predictor and the
global/local perceptron predictor on the SPEC 2000 integer benchmarks. The perceptron predictor outperforms
the multi-component predictor at every hardware budget, with the misprediction rates getting closer
to one another as the hardware budget is increased. Both predictors are capable of reaching amazingly low
misprediction rates at the 368 KB hardware budget, with the perceptron at 0.85% and the multi-component
predictor at 0.93%.
We claim that these results are evidence that the perceptron predictor is the most accurate fully dynamic
branch predictor known. We must point out that we have not exhaustively tuned either the multi-component
or the perceptron predictors because of the huge computational challenge. Nevertheless, there
is a clear separation between the misprediction rates of the multi-component and perceptron predictors,
and between the perceptron and all other predictors we have examined at lower hardware budgets; thus,
we are confident that our claim can be verified by independent researchers.
6.4 IPC
We have seen that the perceptron predictor is highly accurate but has a multi-cycle delay associated with it.
If the delay is too large, overall performance may suffer as the processor stalls waiting for predictions. We
Figure 4: Hardware Budget vs. Misprediction Rate for Large Predictors (multi-component hybrid vs. global/local
perceptron).
now evaluate the perceptron predictor in terms of overall processor performance, measured in IPC, and
taking into account predictor access delay. In particular, we compare an overriding perceptron predictor
against the overriding hybrid predictor of the Alpha 21264, and we consider two processor configurations.
One configuration uses a moderate clock rate that matches the latest Alpha processor, while the other
approximates the more aggressive clock rate and deeper pipeline of the Intel Pentium 4.
The remainder of this section describes configurations of the overriding perceptron predictor for these
two clock rates and reports on simulated IPC for the SPEC 2000 benchmarks.
6.4.1 Moderate Clock Rate Simulations
Currently, the fastest Alpha processor in 180 nm technology is clocked at a rate of 833 MHz. At this clock
rate, both the perceptron predictor and Alpha hybrid predictor deliver a prediction in two clock cycles.
Using SimpleScalar/Alpha, we simulate two-level overriding predictors at 833 MHz. The first level
is a 256-entry Smith predictor [27], i.e., a simple one-level table of two-bit saturating counters indexed
by branch address. This predictor roughly simulates the line predictor of the overriding Alpha predictor.
Our Smith predictor achieves a harmonic mean accuracy of 85.2%, which is the same accuracy quoted
for the Alpha line predictor [18]. For the second level predictor, we simulate both the perceptron predictor
and the Alpha hybrid predictor. The perceptron predictor consists of 133 perceptrons, each with 24
weights. Although the 25 weight perceptron predictor was the best choice at this hardware budget in our
simulations, the 24 weight version has much the same accuracy but is 10% faster. We have observed that
the ideal ratio of per-branch history bits to total history bits is roughly 20%, so we use 19 bits of global
history and 4 bits of per-branch history from a table of 1024 histories. The total state required for this predictor
is 3704 bytes, approximately the same as the Alpha hybrid predictor, which uses 3712 bytes. Both
the Alpha hybrid predictor and the perceptron predictor incur a single-cycle penalty when they override
the Smith predictor. We also simulate a 2048-entry non-overriding gshare predictor for reference. This
gshare uses less state since it operates in a single cycle; note that this is the amount of state allocated to
the branch predictor in the HP-PA/RISC 8500 [21], which uses a clock rate similar to that of the Alpha.
We again simulate the 12 SPEC int 2000 benchmarks, this time allowing each benchmark to execute 2
billion instructions. We simulate the 7-cycle misprediction penalty of the Alpha 21264.
When a branch is encountered, there are four possibilities with the overriding predictor:
The first and second level predictions agree and are correct. In this case, there is no penalty.
The first and second level predictions disagree, and the second one is correct. In this case, the
second predictor overrides the first, with a small penalty.
The first and second level predictions disagree, and the second one is incorrect. In this case, there
is a penalty equal to the overriding penalty from the previous case as well as the penalty of a
full misprediction. Fortunately, the second predictor is more accurate than the first, so this case is
unlikely to occur.
The first and second level predictions agree and are both incorrect. In this case, there is no overriding,
but the prediction is wrong, so a full misprediction penalty is incurred.
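The four cases above determine the average per-branch penalty of an overriding scheme. A small sketch, with illustrative (not measured) per-case probabilities:

```python
# Expected per-branch penalty for a two-level overriding predictor,
# following the four cases in the text. The probabilities passed in the
# example are illustrative placeholders, not measured values.

def expected_penalty(p_agree_correct, p_override_correct,
                     p_override_wrong, p_agree_wrong,
                     override_penalty=1, mispredict_penalty=7):
    assert abs(p_agree_correct + p_override_correct +
               p_override_wrong + p_agree_wrong - 1.0) < 1e-9
    return (p_agree_correct * 0                                  # case 1
            + p_override_correct * override_penalty              # case 2
            + p_override_wrong * (override_penalty
                                  + mispredict_penalty)          # case 3
            + p_agree_wrong * mispredict_penalty)                # case 4

# e.g. a second-level predictor that is right far more often than wrong:
print(expected_penalty(0.83, 0.13, 0.01, 0.03))
```

Because case 3 is rare when the second-level predictor is accurate, the small overriding penalty is easily amortized.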
Figure
5 shows the instructions per cycle (IPC) for each of the predictors. Even though there is a
penalty when the overriding Alpha and perceptron predictors override the Smith predictor, their increased
accuracies more than compensate for this effect, achieving higher IPCs than a single-cycle gshare. The
perceptron predictor yields a harmonic mean IPC of 1.65, which is higher than the overriding predictor at
1.59, which itself is higher than gshare at 1.53.
6.4.2 Aggressive Clock Rate Simulations
The current trend in microarchitecture is to deeply pipeline microprocessors, sacrificing some IPC for
the ability to use much higher clock rates. For instance, the Intel Pentium 4 uses a 20-stage integer
pipeline at a clock rate of 1.76 GHz. In this situation, one might expect the perceptron predictor to yield
poor performance, since it requires so much time to make a prediction relative to the short clock period.
Nevertheless, we will show that the perceptron predictor can improve performance even more than in the
previous case, because the benefits of low misprediction rates are greater.
At a 1.76 GHz clock rate, the perceptron predictor described above would take four clock cycles: one to
read the table of perceptrons and three to propagate signals to compute the perceptron output. Pipelining
the perceptron predictor will allow us to get one prediction each cycle, so that branches that come close
together don't have to wait until the predictor is finished predicting the previous branch. The Wallace-tree
for this perceptron has 7 levels. With a small cost in latch delay, we can pipeline the Wallace-tree in four
stages: one to read the perceptron from the table, another for the first three levels of the tree, another for
the second three levels, and a fourth for the final level and the carry-lookahead adder at the root of the
tree. The new perceptron predictor operates as follows:
1. When a branch is encountered, it is immediately predicted with a small Smith predictor. Execution
continues along the predicted path.
2. Simultaneously, the local history table and perceptron tables are accessed using the branch address
as an index.
3. The circuit that computes the perceptron output takes its input from the global and local history
registers and the perceptron weights.
4. Four cycles after the initial prediction, the perceptron prediction is available. If it differs from the
initial prediction, instructions executed since that prediction are squashed and execution continues
along the other path.
5. When the branch executes, the corresponding perceptron is quickly trained and stored back to the
table of perceptrons.
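The override-and-squash behavior in steps 1-4 can be modeled compactly. This is a minimal sketch of the resolution logic, not the hardware; the function name and the squash accounting are illustrative assumptions.

```python
# Minimal model of the overriding flow described above: a fast
# first-level (Smith) prediction is used immediately; the pipelined
# perceptron delivers its prediction PERCEPTRON_DELAY cycles later and
# overrides on disagreement, squashing the wrong-path work.

PERCEPTRON_DELAY = 4  # cycles at the aggressive clock rate in the text

def resolve(quick_pred, perceptron_pred, actual):
    squashed_cycles = 0
    final = quick_pred
    if perceptron_pred != quick_pred:
        # wrong-path instructions fetched since the quick prediction
        # are squashed when the perceptron overrides it
        squashed_cycles = PERCEPTRON_DELAY
        final = perceptron_pred
    correct = (final == actual)
    return final, squashed_cycles, correct
```

For example, `resolve(True, False, False)` models step 4's override: the quick prediction is discarded at a four-cycle cost, but the final prediction is correct and the full misprediction penalty is avoided.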
Figure
6 shows the result of simulating predictors in a microarchitecture with characteristics of the
Pentium 4. The misprediction penalty is 20 cycles, which simulates the long pipeline of the Pentium 4. The
Alpha overriding hybrid predictor is conservatively scaled to take 3 clock cycles, while the overriding
perceptron predictor takes 4 clock cycles. The 2048-entry gshare predictor is unmodified. Even though
the perceptron predictor takes longer to make a prediction, it still yields the highest IPC in all benchmarks
because of its superior accuracy. The perceptron predictor yields an IPC of 1.48, which is 5.7% higher
than that of the hybrid predictor at 1.40.
Figure 5: IPC for overriding perceptron and hybrid predictors. This chart shows the IPCs yielded by a 1-cycle
gshare, an Alpha-like 2-cycle overriding hybrid, and a 2-cycle overriding global/local perceptron predictor given
a 7-cycle misprediction penalty. The hybrid and perceptron predictors are used as overriding predictors with a
small Smith predictor.
6.5 Training Times
To compare the training speeds of the perceptron predictor with PHT methods, we examine the first 100
times each branch in each of the SPEC 2000 benchmarks is executed (for those branches executing at
least 100 times). Figure 7 shows the average accuracy of each of the 100 predictions for each of the static
branches with a 4KB hardware budget. The average is weighted by the relative frequencies of each branch.
The perceptron method learns more quickly than gshare or bi-mode. For the perceptron predictor, training
time is independent of history length. For techniques such as gshare that index a table of counters,
training time depends on the amount of history considered; a longer history may lead to a larger working
set of two-bit counters that must be initialized when the predictor is first learning the branch. This effect
has a negative impact on prediction rates, and at a certain point, longer histories begin to hurt performance
for these schemes [23]. As we will see in the next section, the perceptron predictor does not have this
weakness, as it always does better with a longer history length.
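The measurement just described, averaging the correctness of each branch's i-th prediction across all static branches, weighted by execution frequency, can be sketched directly. The input format here is an assumption for illustration.

```python
# Sketch of the training-time measurement described in the text: for
# each static branch, record whether each of its first n predictions was
# correct, then average position-by-position across branches, weighted
# by how often each branch executes.

def training_curve(per_branch_correct, per_branch_freq, n=100):
    """per_branch_correct: {branch: list of bools, length >= n};
    per_branch_freq: {branch: execution count}."""
    total = sum(per_branch_freq[b] for b in per_branch_correct)
    curve = []
    for i in range(n):
        acc = sum(per_branch_freq[b] * per_branch_correct[b][i]
                  for b in per_branch_correct) / total
        curve.append(acc)
    return curve
```

Plotting this curve for each predictor reproduces the kind of comparison shown in Figure 7: a faster-learning predictor's curve rises earlier.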
Figure 6: IPC for overriding perceptron and hybrid predictors with long pipelines. This chart shows the IPCs
yielded by a 1-cycle gshare, an Alpha-like 3-cycle overriding hybrid predictor, and a 4-cycle overriding global/local
perceptron predictor with a large misprediction penalty and high clock rate.
6.6 Advantages of the Perceptron Predictor
We hypothesize that the main advantage of the perceptron predictor is its ability to make use of longer
history lengths. Schemes like gshare that use the history register as an index into a table require space
exponential in the history length, while the perceptron predictor requires space linear in the history length.
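The space argument can be made concrete: a gshare table needs 2^h two-bit counters for an h-bit history, while a perceptron table needs h+1 multi-bit weights per entry. A small sketch (entry counts here are illustrative):

```python
# Storage scaling sketch: gshare state is exponential in history length
# h, perceptron state is linear in h.

def gshare_bytes(h):
    # 2^h entries of 2-bit counters
    return (2 ** h * 2) // 8

def perceptron_bytes(h, n_perceptrons, weight_bits=8):
    # h history weights plus one bias weight per perceptron
    return (n_perceptrons * (h + 1) * weight_bits) // 8

print(gshare_bytes(14))            # 4096 bytes at h = 14
print(perceptron_bytes(24, 133))   # 3325 bytes at h = 24, 133 entries
```

At roughly the same 4KB-class budget, gshare is capped at 14 history bits while a table of 133 perceptrons can afford 24, which is the asymmetry exploited throughout this section.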
To provide experimental support for our hypothesis, we simulate gshare and the perceptron predictor
at a 64K hardware budget, where the perceptron predictor normally outperforms gshare. However, by
only allowing the perceptron predictor to use as many history bits as gshare (18 bits), we find that gshare
performs better, with a misprediction rate of 1.86% compared with 1.96% for the perceptron predictor.
The inferior performance of this crippled predictor has two likely causes: there is more destructive aliasing
with perceptrons because they are larger, and thus fewer, than gshare's two-bit counters, and perceptrons
are capable of learning only linearly separable functions of their input, while gshare can potentially learn
any Boolean function.
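The linear-separability limitation mentioned above is easy to demonstrate with a generic perceptron (this is a standalone illustration, not the branch predictor itself): online training learns AND, which is linearly separable, but can never reach perfect accuracy on XOR.

```python
# A single perceptron learns linearly separable functions (AND) but not
# inseparable ones (XOR). Inputs and targets are encoded as -1/+1.

def train_perceptron(samples, epochs=100):
    w = [0, 0, 0]  # bias + two input weights
    for _ in range(epochs):
        for x1, x2, t in samples:
            y = w[0] + w[1] * x1 + w[2] * x2
            out = 1 if y >= 0 else -1
            if out != t:               # classic mistake-driven update
                w[0] += t
                w[1] += t * x1
                w[2] += t * x2
    return w

def accuracy(w, samples):
    ok = 0
    for x1, x2, t in samples:
        out = 1 if w[0] + w[1] * x1 + w[2] * x2 >= 0 else -1
        ok += (out == t)
    return ok / len(samples)

AND = [(-1, -1, -1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
XOR = [(-1, -1, -1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
```

No weight vector separates XOR's taken and not-taken cases, so its accuracy is capped at 3/4 regardless of training time, which is the predictor-level limitation Section 6.7 quantifies.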
Figure
8 shows the result of simulating gshare and the perceptron predictor with varying history lengths
on the SPEC 2000 benchmarks. Here, we use a 4M byte hardware budget is used to allow gshare to consider
longer history lengths than usual. As we allow each predictor to consider longer histories, each
becomes more accurate until gshare becomes worse and then runs out of bits (since gshare requires resources
exponential in the number of history bits), while the perceptron predictor continues to improve.
With this unrealistically huge hardware budget, gshare performs best with a history length of 23, where it
achieves a misprediction rate of 1.55%. The perceptron predictor is best at a history length of 66, where it
achieves a misprediction rate of 1.09%.
6.7 Impact of Linearly Inseparable Branches
In Section 4.3 we pointed out a fundamental limitation of perceptrons that perform offline training: they
cannot learn linearly inseparable functions. We now explore the impact of this limitation on branch prediction.
To relate linear separability to branch prediction, we define the notion of linearly separable branches.
Figure 7: Average Training Times for SPEC 2000 benchmarks. The x-axis is the number of times a branch has
been executed. The y-axis is the average, over all branches in the program, of 1 if the branch was mispredicted,
0 otherwise. Over time, this statistic tracks how quickly each predictor learns. The perceptron predictor achieves
greater accuracy earlier than the other two methods.
Let h be the most recent k bits of global branch history. For a static branch b, there exists a Boolean
function f(h) that best predicts b's behavior. It is this function, f, that all branch predictors strive to
learn. If f is linearly separable, we say that branch b is a linearly separable branch; otherwise, b is a
linearly inseparable branch.
Theoretically, offline perceptrons cannot predict linearly inseparable branches with complete accuracy,
while PHT-based predictors have no such limitation when given enough training time. Does gshare predict
linearly inseparable functions better than the perceptron predictor? To answer this question, we compute
this best predicting function for each static branch in our benchmark suite and test whether the functions
are linearly separable.
Figure
9 shows the misprediction rates for each benchmark for a 4K budget, as well as the percentage
of dynamically executed branches that is linearly inseparable. For each benchmark, the bar on the left
shows the misprediction rate of gshare, while the bar on the right shows the misprediction rate of a global
perceptron predictor. Each bar also shows, using shading, the portion of mispredictions due to linearly
inseparable branches and linearly separable branches. We observe two interesting features of this chart.
First, most mispredicted branches are linearly inseparable, so linear inseparability correlates highly with
unpredictability in general. Second, while it is difficult to determine whether the perceptron predictor
performs worse than gshare on linearly inseparable branches, we do see that the perceptron predictor
outperforms gshare in all cases except for 186.crafty, the benchmark with the highest fraction of
linearly inseparable branches.
Some branches require longer histories than others for accurate prediction, and the perceptron predictor
often has an advantage for these branches. Figure 10 shows the relationship between this advantage and the
required history length, with one curve for linearly separable branches and one for inseparable branches.
The y-axis represents the advantage of our predictor, computed by subtracting the misprediction rate of the
perceptron predictor from that of gshare. We sorted all static branches according to their "best" history
Figure 8: History Length vs. Performance. This graph shows how accuracy for gshare and the perceptron predictor
improves as history length is increased. The perceptron predictor is able to consider much longer histories with the
same hardware budget.
length, which is represented on the x-axis. Each data point represents the average misprediction rate of
static branches (without regard to execution frequency) that have a given best history length. We use the
perceptron predictor in our methodology for finding these best lengths: Using a perceptron trained for
each branch, we find the most distant of the three weights with the greatest magnitude. This methodology
is motivated by the work of Evers et al., who show that most branches can be predicted by looking at
three previous branches [9]. As the best history length increases, the advantage of the perceptron predictor
generally increases as well. We also see that our predictor is more accurate for linearly separable branches.
For linearly inseparable branches, our predictor performs generally better when the branches require long
histories, while gshare sometimes performs better when branches require short histories.
Linearly inseparable branches requiring longer histories, as well as all linearly separable branches, are
always predicted better with the perceptron predictor. Linearly inseparable branches requiring fewer bits
of history are predicted better by gshare. Thus, the longer the history required, the better the performance
of the perceptron predictor, even on the linearly inseparable branches.
We found this history length by finding the most distant of the three weights with the greatest magnitude
in a perceptron trained for each branch, an application of the perceptron predictor for analyzing branch
behavior.
6.8 Additional Advantages of the Perceptron Predictor
Assigning confidence to decisions. Our predictor can provide a confidence level in its predictions that
can be useful in guiding hardware speculation. The output, y, of the perceptron predictor is not a Boolean
value, but a number that we interpret as taken if y >= 0. The value of y provides important information
about the branch since the distance of y from 0 is proportional to the certainty that the branch will be taken
[15]. This confidence can be used, for example, to allow a microarchitecture to speculatively execute both
branch paths when confidence is low, and to execute only the predicted path when confidence is high.
Some branch prediction schemes explicitly compute a confidence in their predictions [14], but in our
predictor this information comes for free. We have observed experimentally that the probability that a
Figure 9: Linear Separability vs. Accuracy at a 4KB budget. For each benchmark, the leftmost bar shows the
number of linearly separable dynamic branches in the benchmark, the middle bar shows the misprediction rate of
gshare at a 4KB hardware budget, and the right bar shows the misprediction rate of the perceptron predictor at the
same hardware budget.
branch will be taken can be accurately estimated as a linear function of the output of the perceptron
predictor.
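Since the taken probability is well approximated by a linear function of the output y, a confidence estimate falls out for free. A sketch, where the slope, intercept, and clamping range are illustrative assumptions rather than the measured fit:

```python
# Sketch of deriving a confidence estimate from the perceptron output y.
# The text observes that P(taken) is approximately linear in y; the
# particular slope/intercept and the [-64, 64] range here are assumed
# placeholders, not the measured regression.

def taken_probability(y, slope=0.5 / 64, intercept=0.5):
    # map y in roughly [-64, 64] linearly onto [0, 1], clamped
    return min(1.0, max(0.0, intercept + slope * y))

def confidence(y):
    # distance of P(taken) from 0.5, rescaled to [0, 1]
    p = taken_probability(y)
    return abs(p - 0.5) * 2
```

A microarchitecture could then, for instance, fetch down both paths whenever `confidence(y)` falls below some cutoff and follow only the predicted path otherwise.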
Analyzing branch behavior with perceptrons. Perceptrons can be used to analyze correlations among
branches. The perceptron predictor assigns each bit in the branch history a weight. When a particular
bit is strongly correlated with a particular branch outcome, the magnitude of the weight is higher than
when there is less or no correlation. Thus, the perceptron predictor learns to recognize the bits in the
history of a particular branch that are important for prediction, and it learns to ignore the unimportant bits.
This property of the perceptron predictor can be used with profiling to provide feedback for other branch
prediction schemes. For example, the methodology that we use in Section 6.7 could be used with a profiler
to provide path length information to the variable length path predictor [29].
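The weight-based analysis described above amounts to ranking history positions by weight magnitude. A small sketch (the example weight vector is hypothetical):

```python
# Sketch of the analysis described in the text: after training, the
# history positions whose weights have the largest magnitude are the
# ones most correlated with the branch outcome. The "best history
# length" used in Section 6.7 is the most distant of the top three.

def important_positions(weights, k=3):
    """weights[0] is the bias; positions are 1-based history indices."""
    ranked = sorted(range(1, len(weights)),
                    key=lambda i: abs(weights[i]), reverse=True)
    return sorted(ranked[:k])

def best_history_length(weights):
    return max(important_positions(weights, k=3))

w = [0, 90, -3, 5, -80, 2, 60]   # hypothetical trained weights
print(important_positions(w))     # positions 1, 4, 6
print(best_history_length(w))     # 6
```

A profiler could feed such per-branch history lengths to a variable length path predictor, as the text suggests.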
Conclusions
In this paper we have introduced a new branch predictor that uses neural learning techniques (the perceptron
in particular) as the basic prediction mechanism. Perceptrons are attractive because they can use
long history lengths without requiring exponential resources. A potential weakness of perceptrons is their
increased computational complexity when compared with two-bit counters, but we have shown how a
perceptron predictor can be implemented efficiently with respect to both area and delay. In particular, we
believe that the most feasible implementation is the overriding perceptron predictor, which uses a simpler
Smith predictor to provide a quick prediction that may be later overridden. For the SPEC 2000 integer
benchmarks, this overriding predictor results in 36% fewer mispredictions than a McFarling-style hybrid
predictor. Another weakness of perceptrons is their inability to learn linearly inseparable functions, but
we have shown that this is a limitation of existing branch predictors as well.
Figure 10: Classifying the Advantage of the Perceptron Predictor. Each data point is the average difference in
misprediction rates of the perceptron predictor and gshare (on the y-axis, as gshare % mispredicted minus perceptron
% mispredicted) for the best history length of those branches (on the x-axis). Above the axis, the perceptron
predictor is better on average. Below the axis, gshare is better on average.
For linearly separable branches, our predictor is on average more accurate than gshare. For inseparable branches,
our predictor is sometimes less accurate for branches that require short histories, and it is more accurate on average
for branches that require long histories.
We have shown that there is benefit to considering history lengths longer than those previously considered.
Variable length path branch prediction considers history lengths of up to 23 [29], and a study of the
effects of long branch histories on branch prediction only considers lengths up to 32 [9]. We have found
that additional performance gains can be found for branch history lengths of up to 66.
We have also shown why the perceptron predictor is accurate. PHT techniques provide a general
mechanism that does not scale well with history length. Our predictor instead performs particularly well
on two classes of branches, those that are linearly separable and those that require long history lengths,
which together represent a large number of dynamic branches.
Perceptrons have interesting characteristics that open up new avenues for future work. As noted in
Section 6.8, perceptrons can also be used to guide speculation based on branch prediction confidence
levels, and perceptron predictors can be used in recognizing important bits in the history of a particular
branch.
Acknowledgments
We thank Steve Keckler and Kathryn McKinley for many stimulating discussions
on this topic, and we thank Steve, Kathryn, and Ibrahim Hur for their comments on earlier versions of
this paper. This research was supported in part by DARPA Contract #F30602-97-1-0150 from the US Air
Force Research Laboratory, NSF CAREERS grant ACI-9984660, and by ONR grant N00014-99-1-0402.
--R
Improving Branch Prediction by Understanding Branch Behavior.
Fundamentals of Neural Networks: Architectures
A neuroevolution method for dynamic resource allocation on a chip multiprocessor.
Computer Architecture: A Quantitative Approach
Assigning confidence to conditional branch predictions.
Dynamically weighted ensemble neural networks for classification.
The impact of delay on the design of branch predictors
Dynamic branch prediction with perceptrons.
The Alpha 21264 microprocessor.
Artificial Neural Networks for Image Understanding.
Combining branch predictors.
Trading conflict and capacity aliasing in conditional branch predictors.
Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms.
Correlation and aliasing in dynamic branch predictors.
Understanding neural networks via rule extraction.
A study of branch prediction strategies.
The Agree predictor: A mechanism for reducing negative branch history interference.
Variable length path branch prediction.
Highly accurate data value prediction using hybrid predictors.
--TR
Introduction to algorithms
Two-level adaptive training branch prediction
Branch prediction for free
Fundamentals of neural networks
Correlation and aliasing in dynamic branch predictors
Evidence-based static branch prediction using machine learning
Assigning confidence to conditional branch predictions
The agree predictor
Trading conflict and capacity aliasing in conditional branch predictors
A language for describing predictors and its application to automatic synthesis
The bi-mode branch predictor
Highly accurate data value prediction using hybrid predictors
An analysis of correlation and predictability
Computer architecture (2nd ed.)
The YAGS branch prediction scheme
Variable length path branch prediction
The impact of delay on the design of branch predictors
Artificial Neural Networks for Image Understanding
The Alpha 21264 Microprocessor
A study of branch prediction strategies
Dynamic Branch Prediction with Perceptrons
Improving branch prediction by understanding branch behavior
--CTR
Veerle Desmet , Hans Vandierendonck , Koen De Bosschere, Clustered indexing for branch predictors, Microprocessors & Microsystems, v.31 n.3, p.168-177, May, 2007
Kaveh Aasaraai , Amirali Baniasadi , Ehsan Atoofian, Computational and storage power optimizations for the O-GEHL branch predictor, Proceedings of the 4th international conference on Computing frontiers, May 07-09, 2007, Ischia, Italy
Andre Seznec, Analysis of the O-GEometric History Length Branch Predictor, ACM SIGARCH Computer Architecture News, v.33 n.2, p.394-405, May 2005
Kreahling , Stephen Hines , David Whalley , Gary Tyson, Reducing the cost of conditional transfers of control by using comparison specifications, ACM SIGPLAN Notices, v.41 n.7, July 2006
Abhas Kumar , Nisheet Jain , Mainak Chaudhuri, Long-latency branches: how much do they matter?, ACM SIGARCH Computer Architecture News, v.34 n.3, p.9-15, June 2006
David Tarjan , Kevin Skadron, Merging path and gshare indexing in perceptron branch prediction, ACM Transactions on Architecture and Code Optimization (TACO), v.2 n.3, p.280-300, September 2005
Daniel A. Jiménez, Fast Path-Based Neural Branch Prediction, Proceedings of the 36th annual IEEE/ACM International Symposium on Microarchitecture, p.243, December 03-05,
Daniel A. Jimenez, Piecewise Linear Branch Prediction, ACM SIGARCH Computer Architecture News, v.33 n.2, p.382-393, May 2005
Daniel A. Jiménez, Improved latency and accuracy for neural branch prediction, ACM Transactions on Computer Systems (TOCS), v.23 n.2, p.197-218, May 2005
Ayose Falcon , Jared Stark , Alex Ramirez , Konrad Lai , Mateo Valero, Better Branch Prediction Through Prophet/Critic Hybrids, IEEE Micro, v.25 n.1, p.80-89, January 2005
Ayose Falcon , Jared Stark , Alex Ramirez , Konrad Lai , Mateo Valero, Prophet/Critic Hybrid Branch Prediction, ACM SIGARCH Computer Architecture News, v.32 n.2, p.250, March 2004
Veerle Desmet , Lieven Eeckhout , Koen De Bosschere, Improved composite confidence mechanisms for a perceptron branch predictor, Journal of Systems Architecture: the EUROMICRO Journal, v.52 n.3, p.143-151, March 2006
Alan Fern , Robert Givan , Babak Falsafi , T. N. Vijaykumar, Dynamic feature selection for hardware prediction, Journal of Systems Architecture: the EUROMICRO Journal, v.52 n.4, p.213-234, April 2006
Javier Verdú , Jorge García , Mario Nemirovsky , Mateo Valero, Architectural impact of stateful networking applications, Proceedings of the 2005 symposium on Architecture for networking and communications systems, October 26-28, 2005, Princeton, NJ, USA | neural networks;branch prediction |
571650 | Modelling with implicit surfaces that interpolate. | We introduce new techniques for modelling with interpolating implicit surfaces. This form of implicit surface was first used for problems of surface reconstruction and shape transformation, but the emphasis of our work is on model creation. These implicit surfaces are described by specifying locations in 3D through which the surface should pass, and also identifying locations that are interior or exterior to the surface. A 3D implicit function is created from these constraints using a variational scattered data interpolation approach, and the iso-surface of this function describes a surface. Like other implicit surface descriptions, these surfaces can be used for CSG and interference detection, may be interactively manipulated, are readily approximated by polygonal tilings, and are easy to ray trace. A key strength for model creation is that interpolating implicit surfaces allow the direct specification of both the location of points on the surface and the surface normals. These are two important manipulation techniques that are difficult to achieve using other implicit surface representations such as sums of spherical or ellipsoidal Gaussian functions ("blobbies"). We show that these properties make this form of implicit surface particularly attractive for interactive sculpting using the particle sampling technique introduced by Witkin and Heckbert. Our formulation also yields a simple method for converting a polygonal model to a smooth implicit model, as well as a new way to form blends between objects. | We can create surfaces in 3D in exactly the same way as the 2D
curves in Figure 1. Zero-valued constraints are defined by the modeler at 3D locations, and positive values are specified at one or more places that are to be interior to the surface. A variational interpolation technique is then invoked that creates a scalar-valued function over a 3D domain. The desired surface is simply the set of all points at which this scalar function takes on the value zero. Figure 2 (left) shows a surface that was created in this fashion by placing four zero-valued constraints at the vertices of a regular tetrahedron and placing a single positive constraint in the center of the tetrahedron. The result is a nearly spherical surface. More complex surfaces such as the branching shape in Figure 2 (right) can be defined simply by specifying more constraints. Figure 3 shows an example of an interpolating implicit surface created from polygonal data.
The remainder of this paper is organized as follows. In Section
2 we examine related work, including implicit surfaces and
thin-plate interpolation techniques. We describe in Section 3 the
mathematical framework for solving variational problems using radial
basis functions. Section 4 presents three strategies that may
be used together with variational methods to create implicit sur-
faces. These strategies differ in where they place the non-zero con-
straints. In Section 5 we show that interpolating implicit surfaces
are well suited for interactive sculpting. In Section 6 we present a
new method of creating soft blends between objects, based on interpolating
implicits. Section 7 describes two rendering techniques,
one that relies on polygonal tiling and another based on ray tracing.
In Section 8 we compare interpolating implicit surfaces with traditional
thin-plate surface modeling and with implicit functions that
are created using ellipsoidal Gaussian functions. Finally, Section 9
indicates potential applications and directions for future research.
Figure 1: Curves defined using interpolating implicit functions. The curve on the left is defined by four zero-valued and one positive constraint. This curve is refined by adding three new zero-valued constraints (shown in red at right).
Figure 2: Surfaces defined by interpolating implicit functions. The left surface is defined by zero-valued constraints at the corners of a tetrahedron and one positive constraint in its center. The branching surface at the right was created using constraints from the vertices of the inset polygonal object.
2 Background and Related Work

Interpolating implicit surfaces draw upon two areas of modelling: implicit surfaces and thin-plate interpolation. In this section we briefly review work in these two sub-areas. Interpolating implicit surfaces are not new to graphics, and at the close of this section we describe earlier published methods of creating interpolating implicit surfaces.

2.1 Implicit Surfaces

An implicit surface is defined by an implicit function, a continuous scalar-valued function over the domain R³. The implicit surface of such a function is the locus of points at which the function takes on the value zero. For example, a unit sphere may be defined using the implicit function f(x) = 1 − |x|, for points x ∈ R³. Points on the sphere are those locations at which f(x) = 0. This implicit function takes on positive values inside the sphere and is negative outside the surface, as will be the convention in this paper.

An important class of implicit surfaces are the blobby or metaball surfaces [2, 20]. The implicit functions of these surfaces are the sum of radially symmetric functions that have a Gaussian profile. Here is the general form of such an implicit function:

    f(x) = −t + Σ_{i=1..n} g_i(x)                                  (1)

In the above equation, a single function g_i describes the profile of a "blobby sphere" (a Gaussian function) that has a particular center and standard deviation. The bold letter x represents a point in the domain of our implicit function, and in this paper we will use bold letters to represent such points, both in 2D and 3D. The value t is the iso-surface threshold, and it specifies one particular surface from a family of nested surfaces that are defined by the sum of Gaussians. When the centers of two blobby spheres are close enough to one another, the implicit surface appears as though the two spheres have melted together. A typical form for a blobby sphere function g_i is the following:

    g_i(x) = e^(−|x − c_i|² / (2 s_i²))                            (2)

In this equation, the constant s_i specifies the standard deviation of the Gaussian function, and thus is the control over the radius of a blobby sphere. The center of a blobby sphere is given by c_i. Evaluating an exponential function is computationally expensive, so some authors have used piecewise polynomial expressions instead of exponentials to define these blobby sphere functions [20, 33]. A greater variety of shapes can be created with the blobby approach by using ellipsoidal rather than spherical functions.

Another important class of implicit surfaces are the algebraic surfaces. These are surfaces that are described by polynomial expressions in x, y and z. If a surface is simple enough, it may be described by a single polynomial expression. A good deal of attention has been devoted to this approach, and we recommend Gabriel Taubin [28] and Keren and Gotsman [16] as starting points in this area. Much of the work on this method has been devoted to fitting an algebraic surface to a given collection of points. Usually it is not possible to interpolate all of the data points, so error minimizing techniques are sought. Surfaces may also be described by piecing together many separate algebraic surface patches, and here again there is a large literature on the subject. Good introductions to these surfaces may be found in the chapter by Chandrajit Bajaj and the chapter by Alyn Rockwood in [5]. It is easier to create complex surfaces using a collection of algebraic patches rather than using a single algebraic surface. The tradeoff, however, is that a good deal of machinery is required to create smooth joins across patch boundaries.

We have only described some of the implicit surface representations that are most closely related to our own work. There are many other topics within the broad area of implicit surfaces, and we refer the interested reader to the excellent book by Bloomenthal and his co-authors [5].
Figure 3: Polygonal surface of a human fist with 750 vertices (left) and an interpolating implicit surface created from the polygons (right).
2.2 Thin-Plate Interpolation

Thin-plate spline surfaces are a class of height fields that are closely related to the interpolating implicit surfaces of this paper. Thin-plate interpolation is one approach to solving the scattered data interpolation problem. The two-dimensional version of this problem can be stated as follows: Given a collection of k constraint points c_1, c_2, ..., c_k that are scattered in the xy-plane, together with scalar height values at each of these points h_1, h_2, ..., h_k, construct a "smooth" surface that matches each of these heights at the given locations. We can think of this solution surface as a scalar-valued function f(x) so that f(c_i) = h_i, for 1 ≤ i ≤ k. If we define the word smooth in a particular way, there is a unique solution to such a problem, and this solution is the thin-plate interpolation of the points. Consider the energy function E that measures the smoothness of a function f:

    E = ∫_Ω f_xx²(x) + 2 f_xy²(x) + f_yy²(x) dx                    (3)

The notation f_xx means the second partial derivative in the x direction, and the other two terms are similar partial derivatives, one of them mixed. This energy function is basically a measure of the aggregate curvature of f(x) over the region of interest Ω (a portion of the plane). Any creases or pinches in a surface will result in a larger value of E. A smooth function that has no such regions of high curvature will have a lower value of E. Note that because there are only squared terms in the integral, the value for E can never be negative. The thin-plate solution to an interpolation problem is the function f(x) that satisfies all of the constraints and that has the smallest possible value of E. Note that thin-plate surfaces are height fields, and thus they are in fact parametric surfaces.

This interpolation method gets its name because it is much like taking a thin sheet of metal, laying it horizontally and bending it so that it just touches the tips of a collection of vertical poles that are set at the positions and heights given by the constraints of the interpolation problem. The metal plate resists bending so that it smoothly changes its height in the positions between the poles. This springy resistance is mimicked by the energy function E.

Thin-plate interpolation is often used in the computer vision domain, where there are often sparse surface constraints [12, 29]. The above curvature minimization process is sometimes referred to as regularization, and can be thought of as an additional constraint that selects a unique surface out of an infinite number of surfaces that match a set of given height constraints. Solving such constrained problems draws from a branch of mathematics called the variational calculus, thus thin-plate techniques are sometimes referred to as variational methods.

The scattered data interpolation problem can be formulated in any number of dimensions. When the given points c_i are positions in n dimensions rather than in 2D, this is called the n-dimensional scattered data interpolation problem. There are appropriate generalizations to the energy function and to thin-plate interpolation for any dimension. In this paper we will make use of variational interpolation in two and three dimensions.

2.3 Related Work on Implicit Surfaces
The first publication on interpolating implicits of which we are aware is that of Savchenko et al. [24]. We consider this to be a pioneering paper in implicit surfaces, and feel it deserves to be known more widely than it is at present. Their research was on the creation of implicit surfaces from measured data such as range data or contours. Their work did not, however, describe techniques for modelling. Their approach to implicit function creation is similar to our method in the present paper in that both solve a linear system to get the weights for radial basis functions. The work of [24] differs from our own in that they use a carrier solid to suggest what part of space should be interior to the surface that is being created. We believe that the three methods that we describe for defining the interior of a surface in Section 4 of this paper give more user control than a carrier solid and are thus more appropriate for modelling.

The implicit surface creation methods described in this paper are an outgrowth of earlier work in shape transformation by Turk and O'Brien [30]. They created implicit functions in n + 1 dimensions to interpolate between pairs of n-dimensional shapes. These implicit functions were created using the normal constraint formulation
of interpolating implicit surfaces, as described in Section 4.3 of this paper. The present paper differs from that of [30] in its introduction of several techniques for defining interpolating implicit surfaces that are especially useful for model creation.

Recently, techniques have been developed that allow the methods discussed above to be applied to systems with a large number of constraints [19, 6]. The work of Morse et al. [19] uses Gaussian-like compactly supported radial basis functions to accelerate the surface building process, and they are able to create surfaces that have tens of thousands of constraints. Carr et al. use fast evaluation methods to reconstruct surfaces using up to half a million basis functions [6]. They use the radial basis function φ(x) = |x|, the biharmonic basis function. Both of these improvements for creating surfaces with many constraints are complementary to the work of the present paper, and the new techniques that we describe in Sections 4, 5 and 6 should work gracefully with the methods in both of these papers.

3 Variational Methods and Radial Basis Functions

In this section we review the necessary mathematical background for thin-plate interpolation. This will provide the tools that we will then use in Section 4 to create interpolating implicit surfaces.

The variational problem we must solve is the task of finding the interpolation function f that will minimize equation 3 subject to the interpolation constraints f(c_i) = h_i. There are several numerical methods that can be used to solve this type of problem. Two commonly used methods, finite elements and finite differencing, discretize the region of interest, Ω, into a set of cells or elements, and define local basis functions over these elements. The function f(x) can then be expressed as a linear combination of the basis functions so that a solution can be found, or approximated, by determining suitable weights for each of the basis functions. This approach has been widely used for height-field interpolation and deformable models, and examples of its use can be found in [29, 27, 7, 31]. While finite elements and finite differencing techniques have proven useful for many problems, the fact that they rely on discretization of the function's domain is not always ideal. Problems that can arise due to discretization include visibly stair-stepped surfaces and the inability to represent fine details. In addition, the cost of using such methods grows cubically as the desired resolution grows.

An alternate approach is to express the solution in terms of radial basis functions centered at the constraint locations. Radial basis functions are radially symmetric about a single point, or center, and they have been widely used for function approximation. Remarkably, it is possible to choose these radial functions in such a way that they will automatically solve differential equations, such as the one required to minimize equation 3, subject to constraints located at their centers. For the 2D interpolation problem, equation 3 can be solved using the biharmonic radial basis function:

    φ(x) = |x|² log(|x|)                                           (4)

This is commonly known as the thin-plate radial basis function. For 3D interpolation, one commonly used radial basis function is φ(x) = |x|³, and this is the basis function that we use. We note that Carr et al. [6] used the basis function φ(x) = |x|. Duchon did much of the early work on variational interpolation [8], and the report by Girosi, Jones and Poggio is a good entry point into the mathematics of variational interpolation [11].

Using the appropriate radial basis functions, we can write the interpolation function in this form:

    f(x) = Σ_{j=1..k} w_j φ(x − c_j) + P(x)                        (5)

In the above equation, the c_j are the locations of the constraints, the w_j are the weights, and P(x) is a degree one polynomial that accounts for the linear and constant portions of f. Solving for the weights w_j and the coefficients of P(x) subject to the given constraints yields a function that both interpolates the constraints and minimizes equation 3. The resulting function exactly interpolates the constraints (if we ignore numerical precision issues), and is not subject to approximation or discretization errors. Also, the number of weights to be determined does not grow with the size of the region of interest Ω. Rather, it is only dependent on the number of constraints.

To solve for the set of w_j that will satisfy the interpolation constraints, we begin with the criterion that the surface must interpolate our constraints:

    h_i = f(c_i)                                                   (6)

We now substitute the right side of equation 5 for f(c_i) to give us:

    h_i = Σ_{j=1..k} w_j φ(c_i − c_j) + P(c_i)                     (7)

Since the above equation is linear with respect to the unknowns, the w_j and the coefficients of P(x), it can be formulated as a linear system. For interpolation in 3D, let c_i = (c_i^x, c_i^y, c_i^z) and let φ_ij = φ(c_i − c_j). Then this linear system can be written as follows:

    [ φ11 φ12 ... φ1k  1  c1x c1y c1z ] [ w1 ]   [ h1 ]
    [ φ21 φ22 ... φ2k  1  c2x c2y c2z ] [ w2 ]   [ h2 ]
    [  :   :        :  :   :   :   :  ] [  : ]   [  : ]
    [ φk1 φk2 ... φkk  1  ckx cky ckz ] [ wk ] = [ hk ]            (8)
    [  1   1  ...  1   0   0   0   0  ] [ p0 ]   [  0 ]
    [ c1x c2x ... ckx  0   0   0   0  ] [ p1 ]   [  0 ]
    [ c1y c2y ... cky  0   0   0   0  ] [ p2 ]   [  0 ]
    [ c1z c2z ... ckz  0   0   0   0  ] [ p3 ]   [  0 ]

The sub-matrix in equation 8 consisting of the φ_ij's is conditionally positive-definite on the subspace of vectors that are orthogonal to the last four rows of the full matrix, so equation 8 is guaranteed to have a solution. We used symmetric LU decomposition to solve this system of equations for all of the examples shown in this paper. Our implementation to set up the system, call the LU decomposition routine and evaluate the interpolating function of equation 5 for a given x consists of about 100 lines of commented C++ code. This code plus the public-domain polygonalization routine described in Section 7.1 is all that is needed to create interpolating implicit surfaces.

Two concerns that arise with such matrix systems are computation times and ill-conditioned systems. For systems with up to a few thousand centers, including all of the examples in this paper, direct solution techniques such as LU decomposition and SVD are practical. However, as the system becomes larger, the amount of work required to solve the system grows as O(k³). We have used direct solution methods for systems with up to roughly 3,000 constraints. LU decomposition becomes impractical for more constraints than this. We are pleased that other researchers, notably the authors of [19, 6], have begun to address this issue of computational complexity.

As the number of constraints grows, the condition number of the matrix in equation 8 is also likely to grow, leading to instability for some solution methods. For the systems we have worked with, ill-conditioning has not been a problem. If problems arose for larger systems, variational interpolation is such a well-studied problem that methods exist for improving the conditioning of the system of equations [10].
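As a concrete sketch of this solve (our own illustration, not the authors' C++ implementation; it uses a plain dense solve rather than the symmetric LU they mention), the following NumPy code assembles equation 8 with φ(r) = |r|³ and evaluates equation 5:

```python
import numpy as np

# Sketch of the variational interpolation solve of Section 3, equations 5-8,
# with the 3D basis function phi(r) = |r|^3 and P(x) = p0 + p1*x + p2*y + p3*z.

def solve_weights(c, h):
    """c: (k,3) constraint locations, h: (k,) prescribed values.
    Returns RBF weights w (k,) and polynomial coefficients p (4,)."""
    c = np.asarray(c, dtype=float)
    h = np.asarray(h, dtype=float)
    k = len(c)
    r = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)  # pairwise |c_i - c_j|
    A = np.zeros((k + 4, k + 4))
    A[:k, :k] = r ** 3        # phi_ij block
    A[:k, k] = 1.0            # constant term of P
    A[:k, k + 1:] = c         # linear terms of P
    A[k, :k] = 1.0            # orthogonality conditions (last four rows)
    A[k + 1:, :k] = c.T
    b = np.concatenate([h, np.zeros(4)])
    sol = np.linalg.solve(A, b)
    return sol[:k], sol[k:]

def f(x, c, w, p):
    """Evaluate the interpolating implicit function of equation 5 at x."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(np.asarray(c, dtype=float) - x, axis=1)
    return w @ r ** 3 + p[0] + p[1:] @ x

# Tetrahedron example from Section 4.1: four zero-valued boundary constraints
# plus one positive interior constraint give a nearly spherical iso-surface.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
c = np.vstack([verts, [[0, 0, 0]]])
h = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
w, p = solve_weights(c, h)
print(abs(f(verts[0], c, w, p)) < 1e-8)  # boundary constraints are interpolated
print(f((0, 0, 0), c, w, p) > 0)         # positive in the interior
```

For the small systems in this sketch a general dense solve is ample; exploiting the symmetry of equation 8, as the text describes, roughly halves the factorization cost.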
4 Creating Interpolating Implicit Surfaces
With tools for solving the scattered data interpolation problem in hand, we now turn our attention to creating implicit functions. In this section we will examine three ways in which to define an interpolating implicit surface. Common to all three approaches is the specification of zero-valued constraints through which the surface must pass. The three methods differ in specifying where the implicit function takes on positive and negative values. These methods are based on using three different kinds of constraints: interior, exterior, and normal constraints. We will look at creating both 2D interpolating implicit curves and 3D interpolating implicit surfaces. The 2D curve examples are for illustrative purposes, and our actual goal is the creation of 3D surfaces.
4.1 Interior Constraints
The left portion of Figure 1 (earlier in this paper) shows the first method of describing an interpolating implicit curve. Four zero-valued constraints have been placed in the plane. We call such zero-valued constraints boundary constraints because these points will be on the boundary between the interior and exterior of the shape that is being defined. In addition to the four boundary constraints, a single constraint with a value of one is placed at the location marked with a plus sign. We use the term interior constraint when referring to such a positive-valued constraint that helps to determine the interior of the surface. We construct an implicit function from these five constraints simply by invoking the 2D variational interpolation technique described in earlier sections. The interpolation method returns a set of scalar coefficients w_i that weight a collection of radially symmetric functions φ that are centered at the constraint positions. The implicit curve shown in the figure is given by those locations at which the variationally-defined function takes on the value zero. The function takes on positive values inside the curve and is negative at locations outside the curve. Figure 1 (right) shows a refinement of the curve that is made by adding three more boundary constraints to the original set of constraints in the left portion of the figure.

Why does an interior constraint surrounded by zero-valued constraints yield a function that is negative beyond the boundary constraints? The key is that the energy function is larger for functions that take on positive values on both sides of a zero-valued constraint. Each boundary constraint acts much like a see-saw: pull the surface up on one side of a boundary constraint (using an interior constraint) and the other side tends to move down.
Creating surfaces in 3D is accomplished in exactly the same way as the 2D case. Zero-valued constraints are specified by the modeler as those 3D points through which the surface should pass, and positive values are specified at one or more places that are to be interior to the surface. Variational interpolation is then invoked to create a scalar-valued function over R³. The desired surface is simply the set of all points at which this scalar function takes on the value zero. Figure 2 (left) shows a surface that was created in this fashion by placing four zero-valued constraints at the vertices of a regular tetrahedron and placing a single interior constraint in the center of the tetrahedron. The resulting implicit surface is nearly spherical.
Figure 2 (right) shows a recursive branching object that is an interpolating implicit surface. The basic building block of this object is a triangular prism. Each of the six vertices of a large prism specified the location of a zero-valued constraint, and a single interior constraint was placed in the center of this prism. Next, three smaller and slightly tilted prisms were placed atop the first large prism. Each of these smaller prisms, like the large one, contributes boundary constraints at its vertices and has a single interior constraint placed at its center. Each of the three smaller prisms has even smaller prisms placed on top of it, and so on.
Why does this method of creating an implicit function create a smooth surface? We are creating the scalar-valued function in 3D that matches our constraints and that minimizes a 3D energy functional similar to equation 3. This energy functional selects a smoothly changing implicit function that matches the constraints. The iso-surface that we extract from such a smoothly changing function will almost always be smooth as well. It is not the case in general, however, that this iso-surface is also the minimum of a curvature-based functional over surfaces. Satisfying the 3D energy functional does not give any guarantee about the smoothness of the resulting 2D surface.

Placing one or more positive-valued constraints on the interior of a shape is an effective method of defining interpolating implicit surfaces when the shape one wishes to create is well-defined. We have found, however, that there is another approach that is even more flexible for interactive free-form surface sculpting.
4.2 Exterior Constraints
Figure 4 illustrates a second approach to creating interpolating implicit functions. Instead of placing positive-valued constraints inside a shape, negative-valued constraints can be placed on the exterior of the shape that is being created. We call each such negative-valued constraint an exterior constraint. As before, zero-valued constraints specify locations through which the implicit curve will pass. In Figure 4 (left), eight exterior constraints surround the region at which a curve is being created. As with positive-valued constraints, the magnitude of the values is unimportant, and we use the value negative one. These exterior constraints, coupled with the curvature-minimizing nature of the variational method, induce the interpolation function to take on positive values interior to the shape outlined by the zero-valued constraints. Even specifying just two boundary constraints defines a reasonable closed curve, as shown by the ellipse-like curve at the left in Figure 4. More boundary constraints result in a more complex curve, as shown on the right in Figure 4.

We have found that creating a circle or sphere of negative-valued constraints is the approach that is best suited to interactive free-form design of curves and surfaces. Once these exterior constraints are defined, the user is free to place boundary constraints at any location interior to this cage of exterior constraints. Section 5 describes the use of exterior constraints for interactive sculpting.
Figure 4: Curves defined using surrounding exterior constraints. Just two zero-valued constraints yield an ellipse-like curve (on the left). More constraints create a more complex curve (at right).
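The exterior-constraint "cage" just described can be sketched as a small helper; `sculpting_cage` is a hypothetical name, and the cube-corner cage (half-width 3, matching the cube of side width six used in the Section 5 sculpting sessions) is one possible choice:

```python
import numpy as np

# Illustrative constraint set for free-form design with exterior constraints:
# zero-valued boundary constraints may be placed anywhere inside a surrounding
# cage of negative-valued (value -1) exterior constraints at cube corners.

def sculpting_cage(boundary_points, cage_half_width=3.0):
    """Return (locations, values): the boundary points (value 0) plus eight
    exterior constraints (value -1) at the corners of a surrounding cube."""
    b = np.asarray(boundary_points, dtype=float)
    corners = cage_half_width * np.array(
        [[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
        dtype=float)
    locs = np.vstack([b, corners])
    vals = np.concatenate([np.zeros(len(b)), -np.ones(8)])
    return locs, vals
```

The resulting (locations, values) pairs are exactly the inputs the Section 3 linear system expects, so the cage can be regenerated cheaply whenever the user adds or moves a boundary constraint.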
Figure 5: A polygonal surface (left) and the interpolating implicit surface defined by the 800 vertices and their normals (right).
4.3 Normal Constraints
For some applications we may have detailed knowledge about the shape that is to be modeled. In particular, we may know approximate surface normals at many locations on the surface to be created. In this case there is a third method of defining an interpolating implicit function that may be preferred over the two methods described above, and this method was originally described in [30]. Rather than placing positive or negative values far from the boundary constraints, we can create constraints very close to the boundary constraints. Figure 6 shows this method in the plane. In the left portion of this figure, there are six boundary constraints and in addition there are six normal constraints. These normal constraints are positive-valued constraints that are placed very near the boundary constraints, and they are positioned towards the center of the shape that is being created. A normal constraint is created by placing a positive constraint a small distance in the direction −n, where n is an approximate normal to the shape that we are creating. (Alternatively, we could choose to place negative-valued constraints in the outward-pointing direction.) A normal constraint is always paired with a boundary constraint, although not every boundary constraint requires a normal constraint. The right part of Figure 6 shows that a normal constraint can be used to bend a curve at a given point.

Figure 6: Two curves defined using nearly identical boundary and normal constraints. By moving just a single normal constraint (the north-west one, shown in red), the curve on the left is changed to that shown on the right.
There are at least two ways in which a normal constraint might be defined. One way is to allow a user to hand-specify the surface normals of a shape that is being created. A second way allows us to create smooth surfaces based on polyhedral models. If we wish to create an interpolating implicit surface from a polyhedral model, we simply need to create one boundary constraint and one normal constraint for each vertex in the polyhedron. The location of a boundary constraint is given by the position of the vertex, and the location of a normal constraint is given by moving a short distance in a direction opposite to the surface normal at the vertex. We place normal constraints 0.01 units from the corresponding boundary constraints for objects that fit within a unit cube. Figure 5 (right) shows an interpolating implicit surface created in the manner just described from the polyhedral model in Figure 5 (left). This is a simple yet effective way to create an everywhere smooth analytically defined surface. This stands in contrast to the complications of patch stitching inherent in most parametric surface modeling approaches. Figure 3 is another example of converting polygons (a fist) to an implicit surface.
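The polygon-to-implicit recipe above amounts to a few lines of bookkeeping; `constraints_from_mesh` is our own illustrative helper, with the 0.01 offset taken from the text:

```python
import numpy as np

# Each mesh vertex contributes a zero-valued boundary constraint at the vertex
# and a positive normal constraint a short distance opposite the outward normal.

def constraints_from_mesh(vertices, normals, offset=0.01):
    """vertices, normals: (n,3) arrays. Returns (locations, values)."""
    v = np.asarray(vertices, dtype=float)
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)  # unit outward normals
    boundary = v                   # f = 0 on the surface
    interior = v - offset * n      # f = 1 just inside the surface
    locs = np.vstack([boundary, interior])
    vals = np.concatenate([np.zeros(len(v)), np.ones(len(v))])
    return locs, vals
```

Feeding the returned pairs to the Section 3 linear system yields the smooth implicit counterpart of the mesh; the 0.01 offset assumes the model fits in a unit cube, so it should be scaled with the model's bounding box.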
4.4 Review of Constraint Types
In this section we have seen three methods of creating interpolating implicit functions. These methods are in no way mutually exclusive, and a user of an interactive sculpting program could well use a mixture of these three techniques to define a single surface. Table 1 lists each of the three kinds of constraints, when we believe each is appropriate to use, and which figures in this paper were created using each of the methods.
Figure 7: Interactive sculpting of interpolating implicit surfaces. The left image shows an initial configuration with four boundary constraints (the red markers). The right surface is a sculpted torus.
Constraint Type        When to Use                   2D Figure   3D Figure
Interior constraints   Planned model construction    Figure 1    Figure 2
Exterior constraints   Interactive modelling         Figure 4    Figures 7,
Normal constraints     Conversion from polygons      Figure 6    Figures 3, 5, 9

Table 1: Constraint Types
5 Interactive Model Building
Interpolating implicit surfaces seem ready-made for interactive 3D
sculpting. In this section we will describe how they can be gracefully
incorporated into an interactive modeling program.
In 1994, Andrew Witkin and Paul Heckbert presented an elegant method for interactive manipulation of implicit surfaces [32]. Their method uses two types of oriented particles that lie on the surface of an implicitly defined object. One class of particles, the floaters, are passive elements that are attracted to the surface of the shape that is being sculpted. Floaters repel one another in order to evenly cover the surface. Even during large changes to the surface, a nearly constant density of floaters is maintained by particle fissioning and particle death. A second type of particle, the control point, is the method by which a user interactively shapes an implicit surface. Control points provide the user with direct control of the surface that is being created. A control point tracks a 3D cursor position that is moved by the user, and the free parameters of the implicit function are adjusted so that the surface always passes exactly through the control point. The mathematical machinery needed to implement floaters and control points is presented clearly in Witkin and Heckbert's paper, and the interested reader should consult it for details.

The implicit surfaces used in Witkin and Heckbert's modeling program are blobby spheres and blobby cylinders. We have created an interactive sculpting program based on their particle sampling techniques, but we use interpolating implicit surfaces instead of blobbies as the underlying shape description. Our implementation of floaters is an almost verbatim transcription of their equations into code. The only change needed was to represent the implicit function as a sum of φ(x) = |x|³ radial basis functions and to provide an evaluation routine for this function and its gradient. Floater repulsion, fissioning and death work for interpolating implicits just as well as when using blobby implicit functions. As in the original system, the floaters provide a means of interactively viewing an object during editing that may even change the topology of the surface.
The main difference between our sculpting system and Witkin
and Heckbert's is that we use an entirely different mechanism for
direct interaction with a surface. Witkin/Heckbert control points
provide an indirect link between a 3D cursor and the free parameters
of a blobby implicit function. We do not make use of Witkin
and Heckbert's control particles in our interactive modelling pro-
gram. Instead, we simply allow users to create and move the boundary
constraints of an interpolating implicit surface. This provides a
direct way to manipulate the surface.
We initialize a sculpting session with a simple interpolating implicit
surface that is nearly spherical, and this is shown at the left
in
Figure
7. It is described by four boundary constraints at the
vertices of a unit tetrahedron (the thick red disks) and with eight
exterior (negative) constraints surrounding these at the corners of
a cube with a side width of six. (The exterior constraints are not
drawn.) A user is free to drag any of the boundary constraint locations
using a 3D cursor, and the surface follows. The user may also
create any number of new boundary constraints on the surface. The
location of a new boundary constraint is found by intersecting the
surface with a ray that passes through the camera position and the
cursor. After a user creates or moves a boundary constraint, the matrix
equation from Section 3 is solved anew. The floaters are then
moved and displayed. The right portion of Figure 7 shows a toroidal
surface that was created using this interactive sculpting paradigm.
The interactive program repeatedly executes the following steps:
1. Create or move constraints based on user interaction.
2. Solve the new variational matrix equation.
3. Adjust floater positions (with floater birth and death).
4. Render the floaters.
Figure 8: Changing a normal constraint. Left image shows original surface, and right image shows the same surface after changing a normal constraint (shown as a red spike).
An important consequence of the matrix formulation given by
equation 8 is that adding a new boundary constraint on the existing
surface does not affect the surface shape at all. This is because the
implicit function already takes on the value of zero at the surface,
so adding a new zero-valued constraint on the surface will not alter
the surface. Only when such a new boundary constraint is moved
does it begin to affect the shape of the surface. This ability to retain
the exact shape of a surface while adding new boundary constraints
is similar in spirit to knot insertion for polynomial spline curves
and surfaces. We do not know of any similar capability for blobby
implicit surfaces.
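This knot-insertion-like property follows from the matrix formulation: if the current interpolant already satisfies a new zero-valued constraint, re-solving the enlarged system reproduces the same function and gives the new center a zero weight. A small 2D numerical sketch, with the standard RBF system layout assumed (not taken from the paper's code):

```python
import numpy as np

def solve_rbf(centers, values):
    """Solve [Phi P; P^T 0][w; a] = [v; 0] for
    f(x) = sum_j w_j |x - c_j|^3 + a0 + a.x (2D sketch, layout assumed)."""
    c = np.asarray(centers, float)
    n, dim = c.shape
    Phi = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2) ** 3
    P = np.hstack([np.ones((n, 1)), c])
    A = np.block([[Phi, P], [P.T, np.zeros((dim + 1, dim + 1))]])
    rhs = np.concatenate([values, np.zeros(dim + 1)])
    sol = np.linalg.solve(A, rhs)
    w, a = sol[:n], sol[n:]

    def f(x):
        x = np.asarray(x, float)
        return float(w @ np.linalg.norm(x - c, axis=1) ** 3 + a[0] + a[1:] @ x)

    return f, w

# Four zero-valued boundary constraints on the unit circle plus one
# positive interior constraint at the origin.
cons = [([0.0, 1.0], 0.0), ([1.0, 0.0], 0.0), ([0.0, -1.0], 0.0),
        ([-1.0, 0.0], 0.0), ([0.0, 0.0], 1.0)]
f0, w0 = solve_rbf([c for c, v in cons], [v for c, v in cons])

# Locate a point on the surface by bisection along the diagonal,
# away from any existing constraint position.
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f0(mid) > 0 else (lo, mid)
p = (lo + hi) / 2

# Re-solve with the new on-surface zero constraint appended.
f1, w1 = solve_rbf([c for c, v in cons] + [p.tolist()],
                   [v for c, v in cons] + [0.0])
```

The assertions below confirm the property: the new center's weight is essentially zero and the function is unchanged.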
In addition to control of boundary constraints, we also allow a
user to create and move normal constraints. By default, no normal
constraint is provided for a newly created boundary constraint. At
the user's request, a normal constraint can be created at any specified boundary constraint. The initial direction of the normal constraint
is given by the gradient of the current implicit function. The
value for such a constraint is given by the implicit function's value
at the constraint location. A normal constraint is drawn as a spike
that is fixed at one end to the disk of its corresponding boundary
point. The user may drag the free end of this spike to adjust the
normal to the surface, and the surface follows this new constraint.
Figure
8 shows an example of changing a normal constraint during
an interactive modelling session.
What has been gained by using interpolating implicit functions
instead of blobby spheres and cylinders? First, the interpolating
implicit approach is easier to implement because the optimization
machinery for control points of blobby implicits is not needed. Second, the user has control over the surface normal as well as the
surface position. Finally, the user does not need to specify which
implicit parameters are to be fixed and which are to be free at different
times during the editing session. Using the blobby formulation,
the user must choose at any given time which parameters, such as
sphere centers, radii of influence and cylinder endpoints, may be altered
by moving a control point. With the variational formulation,
the user is always changing the position of just a single boundary or normal constraint. We believe that this direct control of the parameters
of the implicit function is more natural and intuitive. Witkin
and Heckbert state the following [32]:
Another result of this work is that we have discovered
that implicit surfaces are slippery: when you attempt
to move them using control points they often slip out of
your grasp.
(emphasis from the original paper)
In contrast to blobby implicits, we have found that interpolating
implicit surfaces are not at all slippery. Users easily grasp and reshape
these surfaces with no thought to the underlying parameters
of the model.
6 Object Blending
A blend is a portion of a surface that smoothly joins two sub-parts
of an object. One of the more useful attributes of implicit surfaces is
the ease with which they allow two objects to be blended together.
Simply summing together the implicit functions for two objects often
gives quite reasonable results for some applications. In some
instances, however, traditional implicit surface methods have been
found to be problematic when creating certain kinds of blends. For
example, it is difficult to get satisfactory results when summing together
the implicit functions for two branches and a trunk of a tree.
The problem is that the surface will bulge at the location where the
trunk and the two branches join. Bulges occur because the contribution
of multiple implicit functions causes their sum to take on
large values in the blend region, and this results in the new function
reaching the iso-surface threshold in locations further away from
the blend than is desirable. Several solutions have been proposed
for this problem of bulges in blends, but these methods are either
computationally expensive or are fairly limited in the geometry for
which they can be used. For an excellent description of various
blending methods, see Chapter 7 of [5].
Figure
9: Three polygonal tori (left), and the soft union created with interpolating implicits (right).
Interpolating implicit surfaces provide a new way in which to
create blends between objects. Objects that are blended using this
new approach are free of the bulging problems found using some
other methods. Our approach to blending together surfaces is to
form one large collection of constraints by collecting together the constraints
that define all of the surfaces to be blended. The new
blended surface is the surface defined by this new collection of
constraints. It is important to note that simply using all of the constraints
from the original surfaces will usually produce poor results.
The key to the success of this approach is to throw out those constraints
that would cause problems.
Consider the task of blending together two shapes A and B. If
we used all of the constraints from both shapes, the resulting surface
is not likely to be what we wish. The task of selecting which
constraints to keep is simple. Let f_A(x) and f_B(x) be the implicit
functions for shapes A and B respectively. We will retain those constraints
from object A that are outside of B. That is, a constraint
from A with position c_i will be kept if f_B(c_i) < 0. All other constraints
from A will be discarded. Likewise, we will keep only those
constraints from object B that are outside of object A. To create a
blended shape, we collect together all of the constraints that pass
these two tests and form a new surface based on these constraints.
This approach can be used to blend together any number of objects.
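The constraint-selection rule above can be written down directly. In this sketch, `f_a` and `f_b` are toy implicit functions that are positive inside and negative outside (matching the exterior-negative convention used earlier), standing in for the actual interpolants:

```python
import numpy as np

def blend_constraints(cons_a, f_a, cons_b, f_b):
    """Keep constraints of A outside B (f_B(c) < 0) and constraints of B
    outside A, then pool them to define the blended surface."""
    keep_a = [(c, v) for (c, v) in cons_a if f_b(np.asarray(c)) < 0]
    keep_b = [(c, v) for (c, v) in cons_b if f_a(np.asarray(c)) < 0]
    return keep_a + keep_b

# Toy implicits: two overlapping unit circles (positive inside, zero on
# the boundary, negative outside).
f_a = lambda x: 1.0 - np.linalg.norm(x)
f_b = lambda x: 1.0 - np.linalg.norm(x - np.array([1.5, 0.0]))

cons_a = [((1.0, 0.0), 0.0), ((-1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]
cons_b = [((0.5, 0.0), 0.0), ((2.5, 0.0), 0.0)]
pooled = blend_constraints(cons_a, f_a, cons_b, f_b)
```

Constraints of A that fall inside B (such as the one at (1, 0)) are discarded, which is exactly what prevents the blended surface from being pinned to the buried parts of the original boundaries.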
Figure
9 (left) shows three polygonal tori that overlap one another
in 3D. To blend these objects together, we first create a set of boundary
and normal constraints for each object, using the approach described
in Section 4.3. We then keep only those constraints from
each object that are outside of each of the other two objects, as
determined by their implicit functions. Finally, we create a single
implicit function using all of the constraints from the three objects
that were retained. Figure 9 (right) shows the result of this procedure. Notice that there are no bulges in the locations where the tori join.
7 Rendering
In this section we examine two traditional approaches for rendering
implicit surfaces that both perform well for interpolating implicits.
7.1 Conversion to Polygons
One way to render an implicit surface is to create a set of polygons
that approximate the surface and then render these polygons. The
topic of iso-surface extraction is well-studied, especially for regularly
sampled volumetric data. Perhaps the best known approach
of this type is the Marching Cubes algorithm [17], but a number
of variants of this method have been described since the time of its
publication.
We use a method of iso-surface extraction known as a continuation
approach [3] for many of the figures in this paper. The models
in Figure 2 and in the right images of Figures 5 and 9 are collections
of polygons that were created using the continuation method. This
method first locates any position that is on the surface to be tiled.
This first point can be thought of as a single corner of a cube that is
one of an infinite number of cubes in a regular lattice. The continuation
method then examines the values of the implicit function at
neighboring points on the cubic lattice and creates polygons within
each cube that the surface must pass through. The neighboring vertices
of these cubes are examined in turn, and the process eventually
crawls over the entire surface defined by the implicit function. We
use the implementation of this method from [4] that is described in
detail by Bloomenthal in [3].
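A minimal 2D analogue of the continuation idea is sketched below. The real method of [3] works on a 3D cubic lattice and emits polygons; this toy version only crawls outward from a seed cell and collects the grid cells the zero set passes through:

```python
import numpy as np
from collections import deque

def crossing(f, i, j, h):
    """Does the zero set pass through grid cell (i, j)?  True when the
    implicit function changes sign among the cell's four corners."""
    vals = [f(np.array([(i + di) * h, (j + dj) * h]))
            for di in (0, 1) for dj in (0, 1)]
    return min(vals) < 0.0 < max(vals)

def continuation(f, seed, h):
    """Breadth-first crawl from one surface-crossing cell, visiting
    neighbouring cells that the surface also passes through."""
    found, queue = {seed}, deque([seed])
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                cell = (i + di, j + dj)
                if cell not in found and crossing(f, *cell, h):
                    found.add(cell)
                    queue.append(cell)
    return found

circle = lambda p: 0.55 - np.linalg.norm(p)  # zero set: circle of radius 0.55
h = 0.1
cells = continuation(circle, (5, 0), h)      # cell (5, 0) straddles x = 0.55
```

Because the crawl only ever examines cells near the surface, it never touches the vast majority of the lattice, which is the practical appeal of continuation over exhaustively scanning a volume.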
7.2 Ray Tracing
There are a number of techniques that may be used to ray trace
implicit surfaces, and a review of these techniques can be found
in [13]. We have produced ray traced images of interpolating implicit
surfaces using a particular technique introduced by Hart that
is known as sphere tracing [14]. Sphere tracing is based on the idea
that we can find the intersection of a ray with a surface by traveling
Figure 10: Ray tracing of interpolating implicit surfaces. The left image shows reflection and shadows of two implicit surfaces, and the right
image illustrates constructive solid geometry.
along the ray in steps that are small enough to avoid passing through
the surface. At each step along the ray the method conservatively
estimates the radius of a sphere that will not intersect the surface.
We declare that we are near enough to the surface when the value of
f(x) falls below some tolerance ε. We currently use a heuristic to
determine the radius of the spheres during ray tracing. We sample
the space in and around our implicit surface at 2000 positions, and
we use the maximum gradient magnitude over all of these locations
as the Lipschitz constant for sphere tracing. For extremely pathological
surfaces this heuristic may fail, although it has worked well
for all of our images. Coming up with a sphere radius that is guaranteed
not to intersect the surface is a good area for future research.
We think it is likely that other ray tracing techniques can also be
successfully applied to ray tracing of interpolating implicits, such
as the LG-surfaces approach of Kalra and Barr [15].
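The stepping rule and the sampled-gradient Lipschitz heuristic can be sketched as follows. A simple signed function is used here for testing; the paper's interpolating implicits are not signed distance fields, which is exactly why a Lipschitz bound is needed to size the safe spheres:

```python
import numpy as np

def estimate_lipschitz(f, lo, hi, n=2000, seed=1):
    """Heuristic from the paper: sample gradient magnitudes at many
    points and use the maximum as the Lipschitz constant."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n, 3))
    h = 1e-4
    gmax = 0.0
    for p in pts:
        g = np.array([(f(p + h * e) - f(p - h * e)) / (2 * h)
                      for e in np.eye(3)])
        gmax = max(gmax, np.linalg.norm(g))
    return gmax

def sphere_trace(f, origin, direction, L, eps=1e-6, t_max=10.0):
    """Step along the ray by |f|/L, the radius of a sphere that cannot
    cross the surface if L really bounds the gradient magnitude."""
    d = direction / np.linalg.norm(direction)
    t = 0.0
    while t < t_max:
        v = f(origin + t * d)
        if abs(v) < eps:
            return t
        t += abs(v) / L
    return None

# Unit sphere at the origin, negative outside (exterior-negative convention).
f = lambda p: 1.0 - np.linalg.norm(p)
L = estimate_lipschitz(f, -2.0, 2.0)
t = sphere_trace(f, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), L)
```

For this test function the gradient magnitude is 1 everywhere, so the heuristic recovers L ≈ 1 and the ray from z = -3 hits the sphere at t ≈ 2, as the assertions check.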
Figure 10 (left) is an image of two interpolating implicit surfaces
that were ray traced using sphere tracing. Note that this figure
includes shadows and reflections. Figure 10 (right) illustrates constructive
solid geometry with interpolating implicit surfaces. The
figure shows (from left to right) intersection and subtraction of two
implicit surfaces. This ?gure was created using standard ray tracing
CSG techniques as described in [23].
The rendering techniques of this section highlight a key point:
interpolating implicit surfaces may be used in almost all of the
contexts in which other implicit formulations have been used. This
new representation may provide fruitful alternatives for a number
of problems that use implicit surfaces.
8 Comparison to Related Methods
At this point it is useful to compare interpolating implicit surfaces
to other representations of surface geometry. Although they share
similarities with existing techniques, interpolating implicits are distinct
from other forms of surface modeling. Because interpolating
implicits are not yet well known, we provide a comparison of them to two more well-known modelling techniques.
8.1 Thin-Plate Surface Reconstruction
The scientific and engineering literature abounds with surface reconstruction
based on thin-plate interpolation. Aren't interpolating
implicits just a slight variant on thin-plate techniques? The most
important difference is that traditional thin-plate reconstruction creates
a height field in order to fit a given set of data points. The
use of a height field is a barrier towards creating closed surfaces
and surfaces of arbitrary topology. For example, a height field cannot
even represent a simple sphere-like object such as the surface
shown in Figure 2 (left). Complex surfaces can be constructed using
thin-plate techniques only if a number of height fields are stitched
together to form a parametric quilt over the surface. This also presupposes
that the topology of the shape to be modelled is already
known. Interpolating implicit surfaces, on the other hand, do not require
multiple patches in order to represent a complex model. Both
methods create a function based on variational methods, but they
differ in the dimension of the scalar function that they create. Traditional
thin-plate surfaces use a function with a 2D domain to create
a parametric surface, whereas the interpolating implicit method
uses a function with a 3D domain to specify the location of an implicit surface.
8.2 Sums of Implicit Primitives
Section 3 shows that an interpolating implicit function is in fact a
sum of a number of functions that have radial symmetry (based on
the |x|^3 function). Isn't this similar to constructing an implicit function
by summing a number of spherical Gaussian functions (blobby
spheres or meta-balls)? Let us consider the process of modeling a
particular shape using blobby spheres. The unit of construction is
the single sphere, and two decisions must be made when we add
a new sphere to a model: the sphere's center and its radius. We cannot
place the center of the sphere where we want the surface to be;
we must displace it towards the object's center and adjust its radius
to compensate for this displacement. What we are doing is
much like guessing the location of the medial axis of the object that
we are modeling. (The medial axis is the locus of points that are
equally distant from two or more places on an object's boundary.)
In fact, the task is more difficult than this because summing multiple
blobby spheres is not the same as calculating the union of the
spheres. The interactive method of Witkin and Heckbert relieves
the user from some of this complexity, but still requires the user
to select which blobby primitives are being moved and which are
fixed. These issues never come up when modeling using interpolating
implicit surfaces because we can directly specify locations that
the surface must pass through.
Fitting blobby spheres to a surface is an art, and indeed many
beautiful objects have been sculpted in this manner. Can this process
be entirely automated? Shigeru Muraki demonstrated a way
in which a given range image may be approximated by blobby
spheres [18]. The method begins with a single blobby sphere that
is positioned to match the data. Then the method repeatedly selects
one blobby sphere and splits it into two new spheres, invoking an
optimization procedure to determine the position and radii of the
two spheres that best approximate the given surface. Calculating
a model composed of 243 blobby spheres "took a few days on a
UNIX workstation (Stardent TITAN3000 2 CPU)." Similar blobby
sphere data approximation by Eric Bittar and co-workers was limited
to roughly 50 blobby spheres [1]. In contrast to these methods,
the bunny in Figure 5 (right) is an interpolating implicit surface with
800 boundary and 800 normal constraints. It required 1 minute
43 seconds to solve the matrix equation for this surface, and the
iso-surface extraction required 7 minutes 43 seconds. Calculations
were performed on an SGI O2 with a 195 MHz R10000 processor.
9 Conclusion and Future Work
In this paper we have introduced new approaches for model creation
using interpolating implicit surfaces. Specific advantages of this
method include:
Direct specification of points on the implicit surface
Specification of surface normals
Conversion of polygon models to smooth implicit forms
Intuitive controls for interactive sculpting
Addition of new control points that leave the surface unchanged (like knot insertion)
A new approach to blending objects
A number of techniques have been developed for working with
implicit surfaces. Many of these techniques could be directly applied
to interpolating implicits, indicating several directions for future
work. The critical point analysis of Stander and Hart could
be used to guarantee topologically correct tessellation of such surfaces
[26]. Interval techniques, explored by Duff, Snyder and others, might be applied to tiling and ray tracing of interpolating implicits [9, 25]. The interactive texture placement methods of Pedersen
should be directly applicable to interpolating implicit surfaces
[21, 22]. Finally, many marvelous animations have been produced
using blobby implicit surfaces [2, 33]. We anticipate that the
interpolating properties of these implicit surfaces may provide animators
with an even greater degree of control over implicit surfaces.
Beyond extending existing techniques for this new form of implicit
surface, there are also research directions that are suggested
by issues that are specific to our technique. Like blobby sphere implicits, interpolating implicit surfaces are everywhere smooth. Perhaps
there are ways in which sharp features such as edges and corners
can be incorporated into an interpolating implicit model. We
have shown how gradients of the implicit function may be specified indirectly using positive constraints that are near zero constraints, but it may be possible to modify the approach to allow the
exact speci?cation of the gradient.
Another direction for future research is to find higher-level interactive
modelling techniques for creating these implicit surfaces.
Perhaps several new constraints could be created simultaneously,
maybe arranged in a line or in a circle for greater surface control. It
might also make sense to be able to move the positions of more than
one constraint at a time. Another modelling issue is the creation of
surfaces with boundaries. Perhaps a second implicit function could
specify the presence or absence of a surface. Another issue related
to interactivity is the possibility of displaying the surface with polygons
rather than with floaters. With sufficient processor power, creating
and displaying a polygonal isosurface of the implicit function
could be done at interactive rates.
Acknowledgments
This work was funded under ONR grant N00014-97-0223. We
thank the members of the Georgia Tech Geometry Group for their
ideas and enthusiasm. Thanks also goes to Victor Zordan for helping
with video.
--R
Automatic reconstruction of unstructured 3d data: Combining a medial axis and implicit surfaces.
A generalization of algebraic surface drawing.
Polygonization of implicit surfaces.
An implicit surface polygonizer.
Introduction to Implicit Surfaces.
Reconstruction and representation of 3d objects with radial basis functions.
Deformable curve and surface finite-elements for free-form shape design.
Splines minimizing rotation-invariant semi-norms in Sobolev spaces.
Interval arithmetic and recursive subdivision for implicit functions and constructive solid geometry.
Interpolation of scattered data by radial basis functions.
Surface consistency constraints in vision.
Ray tracing implicit surfaces.
Sphere tracing: a geometric method for the antialiased ray tracing of implicit surfaces.
Guaranteed ray intersections with implicit surfaces.
Marching cubes: A high resolution 3D surface construction algorithm.
Volumetric shape description of range data using 'blobby model'.
Interpolating implicit surfaces from scattered surface data using compactly supported radial basis functions.
Object modeling by distribution function and a method of image generation.
Decorating implicit surfaces.
A framework for interactive texturing on curved surfaces.
Ray casting as a method for solid modeling.
Function representation of solids reconstructed from scattered surface points and contours.
Interval analysis for computer graphics.
Guaranteeing the topology of an implicit surface polygonization for interactive modeling.
--TR
Marching cubes: A high resolution 3D surface construction algorithm
The Computation of Visible-Surface Representations
Polygonization of implicit surfaces
Guaranteed ray intersections with implicit surfaces
Fast Surface Interpolation Using Hierarchical Basis Functions
Volumetric shape description of range data using "Blobby Model"
Deformable curve and surface finite-elements for free-form shape design
Interval analysis for computer graphics
Interval arithmetic recursive subdivision for implicit functions and constructive solid geometry
An implicit surface polygonizer
Free-form shape design using triangulated surfaces
Using particles to sample and control implicit surfaces
Decorating implicit surfaces
A framework for interactive texturing on curved surfaces
Guaranteeing the topology of an implicit surface polygonization for interactive modeling
Shape transformation using variational implicit functions
A Generalization of Algebraic Surface Drawing
Reconstruction and representation of 3D objects with radial basis functions
Introduction to Implicit Surfaces
Interpolating Implicit Surfaces from Scattered Surface Data Using Compactly Supported Radial Basis Functions
Priors Stabilizers and Basis Functions: From Regularization to Radial, Tensor and Additive Splines
| thin-plate techniques;modeling;implicit surfaces;function interpolation |
571723 | A game-theoretic approach towards congestion control in communication networks. | Most of the end-to-end congestion control schemes are "voluntary" in nature and critically depend on end-user cooperation. We show that in the presence of selfish users, all such schemes will inevitably lead to a congestion collapse. Router and switch mechanisms such as service disciplines and buffer management policies determine the sharing of resources during congestion. We show, using a game-theoretic approach, that all currently proposed mechanisms either encourage the behaviour that leads to congestion or are oblivious to it. We propose a class of service disciplines called the Diminishing Weight Schedulers (DWS) that punish misbehaving users and reward congestion avoiding well behaved users. We also propose a sample service discipline called the Rate Inverse Scheduling (RIS) from the class of DWS schedulers. With DWS schedulers deployed in the network, max-min fair rates constitute a unique Nash and Stackelberg Equilibrium. We show that RIS solves the problems of excessive congestion due to unresponsive flows, aggressive versions of TCP, multiple parallel connections and is also fair to TCP.
Most of the end to end congestion control schemes [23,
17, 32, 26, 16] are voluntary in nature and critically depend
on end-user cooperation. The TCP congestion control algorithms
[23, 5, 20, 21, 19, 9] voluntarily reduce the sending
rate upon receiving a congestion signal such as ECN [25],
packet loss [14, 10, 18] or source quench [24]. Such congestion
control schemes are successful because all the end-users
cooperate and volunteer to reduce their sending rates using
IBM India Research Lab, Hauz Khas, New Delhi - 110016, INDIA.
† Indian Institute of Technology, Delhi, Hauz Khas, New Delhi, INDIA.
‡ Network Appliance, CA, USA.
similar algorithms, upon detection of congestion.
As the Internet grows from a small experimental network
to a large commercial network, the assumptions about cooperative end-user behaviour may not remain valid. Factors
such as diversity, commercialization and growth may lead to
non-cooperative and competitive behaviours [12] that aim
to derive better individual utility out of the shared Internet
resources.
If an end-user does not reduce its sending rate upon congestion
detection, it can get a better share of the network
bandwidth. The
ows of such users are called unresponsive
ows [8, 7]. Even responsive
ows that react to congestion
signal can get unfair share of network bandwidth by being
more conservative in reducing their rates and more aggressive
in increasing their rates. Such
ows are termed as
TCP-incompatible
ows [8, 7]. Even TCP-compatible
ows
such as dierent variants of TCP give dierent performance
[21] under dierent conditions. Such behaviours though currently
not prevalent, are present in the Internet, and pose
a serious threat to Internet stability [12, 8]. If widespread,
such behaviours may lead to a congestion collapse of the Internet
(see Section 2). Therefore it is important to have an
approach towards congestion control that is not dependent
on cooperative end users voluntarily following an end-user
behaviour from a class of predefined behaviours.
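To see why voluntary schemes are fragile, consider a simple fluid model (an assumption for illustration, not taken from this paper) in which a FIFO drop-tail link gives each flow its proportional share of the capacity when overloaded. Each user's delivered rate is then strictly increasing in its own sending rate, so a selfish user always gains by sending faster, and mutual escalation follows:

```python
def fifo_share(rates, capacity):
    """Fluid model of a FIFO drop-tail link (an illustrative assumption):
    when overloaded, each flow's output rate is its proportional share,
    x_i = r_i * C / sum(r)."""
    total = sum(rates)
    if total <= capacity:
        return list(rates)
    return [r * capacity / total for r in rates]

def best_response_gain(rates, i, capacity, delta=1.0):
    """How much extra output rate does user i get by unilaterally
    sending delta faster?"""
    before = fifo_share(rates, capacity)[i]
    bumped = list(rates)
    bumped[i] += delta
    after = fifo_share(bumped, capacity)[i]
    return after - before
```

Since x_i = r_i C / (r_i + R) has a strictly positive derivative in r_i, no sending rate is a best response: every user is pushed to escalate, which is the incentive structure behind the congestion-collapse argument.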
In this paper we propose a game-theoretic approach towards
congestion control. The crux of the approach is to
deploy schedulers and/or buffer management policies at intermediate
switches and routers that punish misbehaving flows by cutting down their rates, thereby encouraging well
behaved flows. There have been discussions [29] on punishing
punishments in a game-theoretic framework. We propose a
class of scheduling algorithms called the Diminishing Weight
Scheduling (DWS) that punish misbehaving flows in such
a way that the resulting game-theoretic equilibrium (Nash
Equilibrium) results in fair resource allocations. We show
that with DWS scheduling deployed in the network, max-min
fair rates [4] constitute a Nash as well as a Stackelberg
Equilibrium. Thus, with DWS scheduling deployed,
the "best selfish behaviour" for each user is to estimate its fair rate and send traffic at that rate.
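The max-min fair rates referred to above can be computed, for the single-link case, by progressive filling: repeatedly give every unsatisfied user an equal share of the leftover capacity. This is a sketch of that textbook procedure; the network-wide notion in [4] is more involved:

```python
def max_min_fair(capacity, demands):
    """Single-link max-min fair allocation by progressive filling.
    demands maps each user to its desired rate."""
    alloc = {}
    remaining = dict(demands)
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        # Users demanding no more than the equal share are fully satisfied.
        low = {u: d for u, d in remaining.items() if d <= share}
        if not low:
            for u in remaining:
                alloc[u] = share
            return alloc
        for u, d in low.items():
            alloc[u] = d
            cap -= d
            del remaining[u]
    return alloc
```

For example, on a link of capacity 10 with demands 1, 4 and 8, the small demands are met in full and the leftover goes to the largest, giving rates 1, 4 and 5.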
Our game-theoretic approach is very similar to that proposed
by Shenker [27] to analyze switch service disciplines.
However, Shenker uses a discrete queueing theoretic model
of input tra-c which does not accurately model the tra-c
in today's data networks. Moreover, Shenker's analysis is
restricted to a single switch/router and does not extend to
an arbitrary network in a straight forward manner. We use a
continuous
uid-
ow based input tra-c model which is more
realistic and amenable to analysis in an arbitrary network.
This makes our approach more practical and applicable to
networks such as the Internet.
The steepness of the diminishing weight function determines
the amount of penalty imposed on a misbehaving flow.
Steeper weight functions impose stricter penalties on misbehaving
flows. As the weight function becomes flat, DWS
approaches WFQ scheduling, which imposes no penalty on
a misbehaving flow. We also present a sample service discipline
called Rate Inverse Scheduling (RIS) where the diminishing
weight function is the inverse of the input rate. By
using different weight functions in DWS, switch designers
and ISPs can choose from a variety of reward-penalty
profiles to meet their policy requirements.
With DWS deployed, we show that it is in the interest
of each individual end-user to follow a TCP-style
increase/decrease algorithm. Using simulations we show that
end-users using different versions of TCP are actually able
to converge to their fair rates, even in the presence of misbehaving
users.
In Section 2 we present our game-theoretic formulation
and show that in the presence of selfish users, the current
resource management policies will lead to a congestion collapse.
In Section 3 we present the DWS scheduling algorithm
and discuss its properties in Section 4. We present
some preliminary simulation results in Section 5. We conclude
in Section 6. The proofs are provided in the Appendix.
2. A GAME-THEORETIC MODEL OF A NETWORK
Consider a link of capacity C shared by N users. There
is a sufficiently large shared buffer, a buffer management
policy, and a service discipline to partition the link capacity
among the users. Assume that user i sends a constant-rate
traffic flow at a rate r_i (the input rate). Some of this traffic
may be dropped due to buffer overflows. Assume that,
in steady state, the traffic of user i is delivered at the destination
with an average output rate r̄_i. The
output rate is a function of the sending rates of all the N users,
the switch service discipline S, and the buffer management
policy B. Mathematically, this is written as

r̄ = F_SB(r), (1)

where r = (r_1, ..., r_N) denotes the N-dimensional vector
of input rates, r̄ = (r̄_1, ..., r̄_N) denotes the N-dimensional
vector of output rates, and F_SB(·) is the function
(called the resource management function), dependent
on the scheduling discipline S and buffer management policy B,
mapping the vector of input rates to the vector of output
rates.
Consider a network comprising multiple nodes and links.
Assume that the traffic of user i traverses links l_1, l_2, ..., l_K
in sequence. Let the resource management function at link l_j be F^j(·),
and let r^j denote the vector of input rates of all users at link l_j. The
input rate of user i at link l_1 is r^1_i = r_i. Since we assume
that all users send traffic at a constant rate, the output rate
of user i at link l_1 is also a constant, given by the i-th component of F^1(r^1).
Therefore, the input rate of user i at link l_2 will also be a
constant, given by r^2_i = r̄^1_i.
Similarly, the input rate at each subsequent link equals the
output rate at the preceding link.
Figure 1: Uncongested link (output rate vs. input rate, all other flows sending at 1 Mbps; FCFS, RED, FRED, LQD, RIS).

Figure 2: Congested link (output rate vs. input rate, all other flows sending at 4 Mbps; FCFS, RED, FRED, LQD, RIS).
The final output rate of the user is then given by composing the resource management functions of the links along its path.
In general, a user's utility (or satisfaction) depends on
its output rate, loss rate and end-to-end delay. However,
for a majority of applications the output rate is the most
important factor determining the user's utility. For instance,
"fire-hose applications" described in [29] are completely loss
tolerant. For streaming media applications, loss tolerance
can be obtained using forward error correction techniques
[2]. For bulk transfer applications, loss tolerance can be
achieved using selective retransmissions [4]. Therefore, for
simplicity, we assume that a user's utility is an increasing
function of its output rate only. The class of such utility
functions, U, is formally defined as follows:

1. U ∈ U maps a user's output rate r̄ to a real-valued
non-negative utility;

2. U(r̄) is a monotonically increasing function of r̄.
If user i was to act in a selfish manner, it would choose a
sending rate r_i which would maximize its utility (and hence
its output rate), irrespective of the amount of inconvenience
caused (loss of utility) to other users. Consider what will
happen in such a scenario with different packet service disciplines
and buffer management policies.
Consider a link of a given capacity shared by five
users sending traffic at a constant rate. Figures 1 and 2
Footnote 1: Later, in Section 4, we also consider the class of utility
functions U^l where a user's utility is also dependent on its loss
rate.
Table 1: Notations for schedulers and buffer management policies

Legend | Scheduling discipline                  | Buffer management policy
FCFS   | First come first serve                 | Shared buffers, drop tail
WF2Q   | Worst-case fair weighted fair queueing | Per-flow buffers, drop tail [3]
LQD    | First come first serve                 | Longest queue tail drop [30]
DT     | First come first serve                 | Dynamic threshold [6]
RED    | First come first serve                 | Random early drop [10]
FRED   | First come first serve                 | Flow RED [18]
RIS    | Rate inverse scheduling                | Per-flow buffers, drop tail
Figure 3: Congestion collapse in a sample network (link speeds: L, M, N: 1 Mbps; access links: 10 Mbps).
show the variation in a user's output rate as a function of
its input rate for different scheduling disciplines and buffer
management policies. Refer to Table 1 for notations.
For FCFS, the output rate of a user (and hence the user's
utility) always increases as its input rate increases. However,
the slope of the graph depends on the input rate of
other users. In such a case, there is an incentive for each
user to increase its sending (input) rate, irrespective of what
other users are doing. If each user was to act selfishly, to
maximize its own utility, the link will end up becoming heavily
congested, with each user sending traffic at its maximum
possible rate and receiving only a tiny fraction of the traffic
sent. This is characterized by the concept of Nash Equilibrium
[11].

Definition 1. Let U_i(s_i, s_-i) denote the utility of player i
when the player adopts the strategy s_i and all the other
players adopt the strategy profile s_-i. A strategy
profile s* is a Nash Equilibrium if, for every player i,
U_i(s*_i, s*_-i) >= U_i(s_i, s*_-i) for all s_i in S_i,
where S_i is the set of strategies that user i can adopt.

In other words, a Nash Equilibrium is a strategy profile
where no user has an incentive to deviate unilaterally from
its strategy. If all the users are selfish and non-cooperating,
they would eventually adopt strategies that constitute a
Nash Equilibrium.
Here, the vector of input rates r constitutes a strategy
profile, and each user's utility U(r̄_i) is a monotonically increasing
function of its output rate r̄_i. For FCFS, RED, and
DT resource management policies, the only Nash Equilibrium
is when the input rates approach infinity.
Therefore, it is appropriate to say that FCFS encourages
behaviour that leads to congestion. In a network comprising
multiple nodes and links, selfish user behaviour will lead to
worse disasters [7, 8] (similar to the congestion collapse),
where the input rate of each user will approach its maximum
possible value and output rates will approach zero. To see this,
consider the network and flows shown in Figure 3. Assume
that the FCFS policy is deployed at every link. Every user will
send traffic at the rate of the access link, 10 Mbps, and
will get a net output rate of less than 100 Kbps at the final
destination.
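To make the FCFS incentive concrete, here is a toy numerical sketch (an illustrative assumption, not the paper's exact model): an overloaded FCFS link with a shared drop-tail buffer delivers each flow roughly in proportion to its share of the aggregate input, so a flow's output rate keeps rising with its own input rate no matter what the others do.

```python
def fcfs_output_rates(capacity, rates):
    """Toy FCFS/shared-buffer model: an overloaded link delivers each
    flow roughly in proportion to its share of the aggregate input."""
    total = sum(rates)
    if total <= capacity:          # uncongested: everything gets through
        return list(rates)
    return [capacity * r / total for r in rates]

# Four well-behaved flows send 1 Mbps each; one flow keeps raising its rate.
for r in [1.0, 4.0, 16.0, 64.0]:
    out = fcfs_output_rates(10.0, [1.0, 1.0, 1.0, 1.0, r])
    print(r, round(out[-1], 3))    # the selfish flow's output keeps growing
```

Because the selfish flow's output rate is monotone in its own input rate, unilaterally increasing the rate is always profitable, which is exactly the incentive structure behind the congestion-collapse equilibrium.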
Now consider WF2Q. The output rate of a user remains
equal to its input rate as long as it is less than or equal
to its fair rate. When the input rate becomes larger than
the fair rate, the output rate remains constant at the fair rate
(C/N). The above holds irrespective of the input rates of other
users. In such a scenario, a selfish user will increase its input
rate up to the fair rate. However, a user has no incentive
to increase its input rate beyond the fair rate, nor does it
have any incentive to keep its input rate down to the fair
rate. Therefore, in a network comprising multiple nodes and
links, when a selfish user neither knows its fair rate nor the
resource management policies employed (FCFS or WFQ),
it may simply find it convenient to keep on increasing its
input rate much beyond the fair rate. For WF2Q, LQD,
and FRED, it seems that any vector of input rates where
each user's input rate is more than C/N constitutes a Nash
Equilibrium. We say that such policies are oblivious to congestion-causing
behaviour.
Observe from Figures 1 and 2 that all the resource management
policies (except RIS, which will be described in the following
sections) either encourage behaviour leading to congestion
or are oblivious to it.
For end-to-end congestion control schemes to be effective
in the presence of selfish users (and in the absence of other
incentives such as usage-based charging, congestion pricing,
etc.), a resource management mechanism in the interior of
the network (i.e., the traffic police) is needed that punishes
misbehaving users and rewards well-behaved users, while
what is present in today's networks is just the opposite. In
the following section we describe a class of service disciplines
that achieve the purpose of rewarding the well-behaved users
and punishing the misbehaving users.
3. DIMINISHING WEIGHT SCHEDULERS
The class of Diminishing Weight Schedulers (DWS) is defined
for the idealized fluid-flow traffic model. It is derived
from the Generalized Processor Sharing (GPS) scheduling
algorithm [22]. Consider a link of capacity C shared by
N users sending traffic as N distinct flows. Let A_i(t) be
the amount of traffic of flow i entering the scheduler buffer
in the interval (0, t] and S_i(t) be the amount of traffic of
flow i served by the scheduler in the interval (0, t]. Define
the backlog of flow i at time t > 0 as B_i(t) = A_i(t) - S_i(t) and the total
system backlog at time t as B(t) = sum_i B_i(t). Define the
input rate of a flow at the link at time t as r_i(t) = dA_i(t)/dt,
and define the output rate of flow i at the link at time t as
r̄_i(t) = dS_i(t)/dt.
A GPS scheduler [22] on a link may be defined as the
unique scheduler satisfying the following properties:

Flow Conservation: S_i(t) <= A_i(t) for all i and all t >= 0. (2)

Work Conservation: If B(t) > 0, then sum_i r̄_i(t) = C. (3)

GPS Fairness: r̄_i(t)/φ_i >= r̄_j(t)/φ_j for every flow i backlogged at time t and every flow j, (4)

where φ_i is the GPS weight assigned to flow i.
The flow conservation property implies that, for a flow, the
traffic served cannot be more than the traffic arrived. The
work conservation property implies that if there is a non-zero
backlog in the system, then the link is not kept idle. The
fairness property implies that the output (service) rates of all
the backlogged flows will be proportional to their respective
GPS weights, while the output rates of non-backlogged flows
will be smaller.
GPS assigns constant weights to all the flows. DWS differs
from GPS in this regard. In DWS, each bit gets a GPS
weight that is a decreasing function of that bit's arrival (input)
rate. If the bit at the head of the queue of flow i at time
t arrived at time τ_i(t), then in DWS, φ_i(t) = W(r_i(τ_i(t))),
where W(r) is the diminishing weight function, which is a
continuous and strictly decreasing function of r. The class
of DWS schedulers is formally defined as the schedulers satisfying
the following properties:
Flow Conservation: S_i(t) <= A_i(t) for all i and all t >= 0. (5)

Work Conservation: If B(t) > 0, then sum_i r̄_i(t) = C. (6)

DWS Fairness: r̄_i(t)/φ_i(t) >= r̄_j(t)/φ_j(t) for every flow i backlogged at time t and every flow j, (7)

where φ_i(t) is the DWS weight for flow i. Thus, DWS rewards
flows with small rates by assigning them large GPS weights
and punishes flows with large rates by assigning them small
GPS weights. The amount of punishment depends upon the
steepness of the diminishing weight function. If it is a flat
function such as the 1/log(r) function, then DWS resembles
GPS scheduling. If the diminishing weight function is
steep, then strict penalties are enforced on misbehaving users.
The DWS weights may be set in accordance with pricing,
resource sharing or other administrative policies, in the
same way as GPS weights are set when GPS-based
schedulers are deployed.
The Rate Inverse Scheduler (RIS) is a special case of the
DWS scheduler where the diminishing weight function is
the inverse function (W(r) = 1/r). Thus the DWS fairness
condition for RIS reduces to the following:

RIS Fairness: r_i(t) · r̄_i(t) >= r_j(t) · r̄_j(t) for every flow i backlogged at time t and every flow j. (8)

Assume, for simplicity, that all DWS weights are set to 1
and all users send traffic at a constant rate. Thus, all output
rates will also be constant. From the flow conservation
property it follows that r̄_i <= r_i. The DWS fairness condition
can be simplified as follows:

r̄_i/W(r_i) >= r̄_j/W(r_j) for every backlogged flow i and every flow j. (9)

It follows that if two flows i and j are both backlogged, then
r̄_i/W(r_i) = r̄_j/W(r_j); for RIS, r_i · r̄_i = r_j · r̄_j.
We now prove some important properties of DWS scheduling.
Define the congestion characteristic function G(x, r) as:

G(x, r) = sum_i min(r_i, x · W(r_i)). (10)

Define the rate constant K for a link with a vector of input
rates r as the value satisfying:

G(K, r) = min(C, sum_i r_i). (11)

Theorem 3.1 (Rate Constant). The rate constant as
defined in Eq. 11 is unique, and the output rate of flow i is
uniquely given by r̄_i = min(r_i, K · W(r_i)).

The proof is provided in the Appendix. Hence, given the
input rates of the flows, using the rate constant it is possible
to uniquely determine the output rate of any flow. We define
the fair rate f as the unique rate satisfying:

f = K · W(f), i.e., f/W(f) = K. (12)

Lemma 3.1. The fair rate f as defined in Eq. 12 is unique.

The proof is provided in the Appendix. The output rate
can be represented in terms of the fair rate as follows:

r̄_i = min(r_i, f · W(r_i)/W(f)). (13)
We now show that if the input rate of a flow is less than or
equal to the fair rate, then the flow will get all its bits transmitted
without loss; otherwise it will suffer a loss according
to the diminishing weight function W(·).

Lemma 3.2. If r_i <= f then r̄_i = r_i; otherwise r̄_i = f · W(r_i)/W(f) < f.

Proof. Case r_i <= f: Then W(r_i) >= W(f), since W(·) is a strictly decreasing function. Hence we
get r_i/W(r_i) <= f/W(f). Using Eq. 12 we get r_i <= K · W(r_i).
Therefore, r̄_i = r_i.
Case r_i > f: In this case we get r_i/W(r_i) > f/W(f),
since W(·) is a strictly decreasing function. Use Eq. 12 to get
r_i > K · W(r_i), so r̄_i = K · W(r_i) = f · W(r_i)/W(f) < f.
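Numerically, Theorem 3.1 suggests a direct way to compute steady-state DWS output rates: bisect on the rate constant K so that the outputs min(r_i, K·W(r_i)) sum to min(C, sum of inputs). The sketch below follows that characterization (function names are ours):

```python
def dws_output_rates(capacity, rates, W, iters=100):
    """Find the rate constant K with sum_i min(r_i, K*W(r_i)) equal to
    min(C, sum_i r_i), then return the per-flow output rates
    min(r_i, K*W(r_i)) (cf. Theorem 3.1)."""
    target = min(capacity, sum(rates))
    G = lambda x: sum(min(r, x * W(r)) for r in rates)  # cf. Eq. 10
    lo, hi = 0.0, 1.0
    while G(hi) < target:          # grow the bracket until it covers target
        hi *= 2.0
    for _ in range(iters):         # bisection: G is nondecreasing in x
        mid = (lo + hi) / 2.0
        if G(mid) < target:
            lo = mid
        else:
            hi = mid
    K = (lo + hi) / 2.0
    return [min(r, K * W(r)) for r in rates]

# RIS (W(r) = 1/r) on a 10 Mbps link: four flows at 4 Mbps, one at 7 Mbps.
out = dws_output_rates(10.0, [4, 4, 4, 4, 7], lambda r: 1.0 / r)
print([round(x, 3) for x in out])  # the 7 Mbps flow gets the least
```

With these inputs K works out to about 8.75, so each 4 Mbps flow receives roughly 2.19 Mbps while the 7 Mbps flow is cut to about 1.25 Mbps: sending above the fair rate strictly hurts, as Lemma 3.2 predicts.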
The above behaviour is also evident from Figures 2 and
7. We say that a flow i contributes to a link's congestion
if its input rate exceeds the fair rate. With DWS scheduling,
the output rate of a flow remains equal to its input rate as long as the flow does
not contribute to congestion. However, as soon as the flow
contributes to congestion, its output rate begins to decline
according to the diminishing weight function W(·). In DWS,
different weight functions can be chosen to meet specific
policy requirements. Observe from Figure 7 that the flat
weight function imposes a very small penalty on misbehaving
flows and is very similar to WFQ, whereas the
steep weight function imposes very strict penalties.
The following result follows from Lemma 3.2.

Corollary 3.1. The output rate for any flow i is less
than or equal to the fair rate (r̄_i <= f).

We now establish the relationship between the fair rate f,
the link capacity C and the number of users N. It also
suggests that if all the users are equal (with equal DWS
weights), then the fair rate is indeed fair.

Lemma 3.3. The fair rate f is greater than or equal to C/N.

The proof is provided in the Appendix. From Lemmas
3.2 and 3.3, observe that if a user i's input rate is C/N, its
output rate will also be equal to C/N (since r_i = C/N <= f),
and hence DWS results in a fair allocation of resources.
Lemma 3.4. If a flow i is experiencing losses (r̄_i < r_i),
then decreasing the flow's input rate by a sufficiently small
amount will either increase its output rate or leave it unchanged
(i.e., there exists a delta > 0 such that if r'_i is in (r_i - delta, r_i), then r̄'_i >= r̄_i).

The proof is provided in the Appendix.

Lemma 3.5. If a flow i is not experiencing losses, then increasing
the flow's input rate by a sufficiently small amount
will either increase its output rate or leave it unchanged.

The proof is provided in the Appendix.
The above two lemmas establish that a flow experiencing
losses may have an incentive to reduce its input
rate, whereas a flow experiencing no losses may have an incentive
to increase its input rate. This is very similar to
the behaviour of TCP's increase/decrease algorithms, which
increase the input rate when there are no losses and decrease
the input rate as soon as losses are observed. Later, in Section
5, using simulations we show that different versions of
TCP actually do converge close to their fair rate when DWS
schedulers are deployed.
3.1 Packetized Diminishing Weight Schedulers (PDWS)

In a network, traffic does not flow as a fluid. Instead, it
flows as packets containing chunks of data that arrive at
discrete time boundaries. Therefore, a scheduler is needed
that works with discrete packets.

Figure 4: Hypothetical model for DWS with discrete packet boundaries (input packet streams feed a packet collector, which feeds the RIS scheduler buffers and the output link).

Packetized DWS is derived
from the DWS scheduler in the same way as packetized
GPS is derived from the GPS scheduler. Therefore, the implementation
details of PDWS are very similar to those of
PGPS, except for minor changes in the equations computing the
timestamps. It should be straightforward to adapt PDWS
to the simplifications of PGPS like Virtual Clock [34], Self-Clocked
Fair Queueing [13], WF2Q+ [33], and Frame-based Fair
Queueing.
Denote the arrival time of the k-th packet of flow i as a_i^k and
the length of the k-th packet of flow i as L_i^k. We model the
k-th arrival of flow i as if it were fluid flowing at a rate r_i^k
in the interval (a_i^{k-1}, a_i^k]. The rate of arrival
of all the bits of the packet is given by r_i^k = L_i^k/(a_i^k - a_i^{k-1}), and therefore this
packet gets a GPS weight given by φ_i^k = W(r_i^k) (RIS, being
a special case of DWS, has φ_i^k = 1/r_i^k).
A packet becomes eligible for service by the scheduler
only after its last bit has arrived, i.e., at time a_i^k. We assume
that there is a hypothetical packet collector before the DWS
scheduler which collects all the bits of a packet and gives
them to DWS only when they become eligible (see Figure 4).
Now, we define the finish time of a packet as the time
when the last bit of the packet gets serviced in a hypothetical
DWS scheduler with a packet collector as shown in Figure 4.
PDWS is defined as the scheduler that schedules packets in
increasing order of their finish times.
Along the lines of the PGPS implementation [31], PDWS is
based on the concept of system virtual time and virtual finish
times of packets. The scheduler maintains a virtual time
function v(t). Upon a packet arrival, each packet is tagged
with a virtual finish time as follows:

F_i^k = max(F_i^{k-1}, v(a_i^k)) + L_i^k/φ_i^k. (15)

The packets are serviced in increasing order of their virtual
finish times. To compute the virtual time at any instant,
an emulation of the hypothetical DWS system of Figure 4
is maintained, which is similar to most PGPS implementations.
When a packet k of flow i arrives, it is tagged with
a GPS weight of φ_i^k and is also given to the DWS emulation.
The emulation computes the virtual finish time of the
packet by Eq. 15. The rate of change of virtual time with
real time is given by dv(t)/dt = C / sum_i φ_i(t), where φ_i(t) is
the GPS weight corresponding to the packet of flow i which
is currently in service in the DWS emulation.
In real practice, traffic arrivals are bursty. Therefore, it is
better to use a smoothed arrival rate, instead of the instantaneous
arrival rate, for GPS weight computation. The scheduler
PDWS with smoothing sets φ_i^k using an exponentially smoothed
estimate of the arrival rate in place of r_i^k. The value of the
smoothing parameter α is taken such that
the half-life of the smoothing is of the order of one round trip
time (R) when packets of size Lmax are used to send traffic.
This gives the setting of Eq. 16, where f is the fair rate of the flow.
4. PROPERTIES OF DWS

We now describe some desirable game-theoretic properties
of DWS schedulers. In this section, for all results, the number
of users is N >= 2 unless otherwise specified.

4.1 Single Link

With DWS scheduling, the best strategy for each individual
user is to send traffic at its fair rate. This is formally
illustrated in the following theorem.

Theorem 4.1. Consider a link of capacity C, shared by
N users, using DWS scheduling with unit DWS weights.
The vector of input rates (C/N, ..., C/N) is the unique Nash Equilibrium for the
system.
The proof is provided in the Appendix. Naive selfish
users will converge to a Nash Equilibrium. However, in case
a user (say a leader) had information about the other users' utility
functions or behaviours, or the scheduling and/or queue management
policies at the gateways, it could choose a value for
its input rate and the other users would equilibrate to a Nash
Equilibrium in the N - 1 user subsystem. The leading user
can then choose its input rate based on which of these N - 1
user subsystem Nash Equilibria maximizes the leading user's utility.
This is formally called a Stackelberg Equilibrium.

Definition 2. A strategy profile (s_1, s*_2, ..., s*_N) is a Stackelberg Equilibrium with user 1 leading if:
1. it is a Nash Equilibrium for users 2, ..., N when user 1 adopts the strategy s_1;
2. the leader's utility is maximized, i.e., U_1(s_1, s*) >= U_1(s'_1, s') for every s'_1 in S_1 and every s' that is
a Nash Equilibrium for users 2, ..., N when user 1
adopts the strategy s'_1,
where S_i is the set of strategies that user i can follow.
The leader's utility at a Stackelberg Equilibrium is never
less than that in any other Nash Equilibrium. So a user with
more information may try to drive the system towards one
of its Stackelberg Equilibria. This can be avoided if the Nash
and Stackelberg Equilibria coincide. We now show this to
be true for a single link with DWS scheduling.

Theorem 4.2. Consider a link of capacity C, shared by
N users, using DWS scheduling with unit DWS weights.
The vector of input rates (C/N, ..., C/N) is the unique Stackelberg Equilibrium for
the system.

The proof is provided in the Appendix. Since the unique
Nash and Stackelberg Equilibria coincide, a user will benefit
most by sending at its fair rate. Any user sending at a rate
higher than its fair rate will be penalized, and the other users
can then receive a better output rate. This is characterized
by the concept of the Nash rate.
Definition 3. Given a user with input rate r, define the
Nash rate ξ(r) for the remaining users as given by Eq. 17.

We now discuss some properties of the Nash rate.

Lemma 4.1. The Nash rate ξ(r) is a strictly increasing
function of r in the range (C/N, ∞).

Proof. For r in (C/N, ∞), we rewrite Definition 17 in implicit form.
Since W(·) is strictly decreasing, r increases as x increases
and vice-versa. Also note that the value of x satisfying the
equation for a given value of r is unique.

Lemma 4.2. The Nash rate is greater than or equal to
C/N, i.e., ξ(r) >= C/N.

Proof. For r <= C/N, we see from the definition of the Nash
rate (Eq. 17) that ξ(r) >= C/N.
For r > C/N, note from Lemma 4.1 that ξ(r) is increasing
in (C/N, ∞). Also note that ξ(r) is continuous at C/N
(Eq. 17), and ξ(C/N) = C/N. Hence, ξ(r) >= C/N.
When a user sends at rate r, the best strategy for the other users
is to send traffic at their Nash rate ξ(r). This is formally
stated in the following theorem.

Theorem 4.3. If a user (say user 1) sends at r_1, then
(ξ(r_1), ..., ξ(r_1)) is the unique Nash Equilibrium for the
remaining N - 1 users.

The proof is provided in the Appendix. Observe that
if a user sends at r_1 <= C/N, thereby not contributing to
congestion, then the spare capacity is divided equally among
the others (ξ(r_1) = (C - r_1)/(N - 1)). If a user misbehaves
and sends traffic at a rate r_1 > C/N, while the other users
remain well behaved, then the other users can safely increase
their rates up to ξ(r_1), whereas the misbehaving user gets
penalized down to its residual Nash rate R(r), defined as follows.

Definition 4. Given a user with input rate r, define its
residual Nash rate as:

R(r) = C - (N - 1) · ξ(r). (18)
The following two lemmas illustrate that the more a user
contributes to congestion, the more penalty it incurs.

Lemma 4.3. The residual Nash rate is less than or equal
to C/N, i.e., R(r) <= C/N.

This immediately follows from Eq. 18 and Lemma 4.2.

Lemma 4.4. R(r) is a strictly decreasing function of r in
the range (C/N, ∞).

The proof immediately follows from Eq. 18 and Lemma 4.1.

A steep weight function results in a more severe punishment for
a user contributing to congestion and a larger equilibrium output
rate for well-behaved users. This is illustrated in Figure
5, which plots the Nash rate and the residual Nash rate for different
diminishing weight functions. As can easily be observed,
a steeper weight function results in a
larger penalty as compared to a less steep weight function.
Figure 5: Nash rate and residual Nash rate vs. input rate r for different diminishing weight functions (e.g., W(r) = 1/r).
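The closed form of Eq. 17 can be emulated numerically. Based on the surrounding proofs (this fixed-point form is our assumption, not the paper's printed equation), for r > C/N the Nash rate x = ξ(r) of the remaining users satisfies the work-conservation balance (N-1)·x + x·W(r)/W(x) = C, with ξ(r) = (C-r)/(N-1) for r <= C/N; the misbehaver's residual rate is then x·W(r)/W(x). A bisection sketch:

```python
def nash_rate(C, N, r, W, iters=100):
    """Rate for the N-1 well-behaved users when one user sends at r.
    For r > C/N, solve (N-1)*x + x*W(r)/W(x) = C for x by bisection
    (the left-hand side is increasing in x). This fixed point is an
    assumed reconstruction of Eq. 17, consistent with Lemmas 4.1-4.3."""
    if r <= C / N:
        return (C - r) / (N - 1)   # spare capacity split equally
    lhs = lambda x: (N - 1) * x + x * W(r) / W(x)
    lo, hi = 0.0, C
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if lhs(mid) < C:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# RIS on a 10 Mbps link with 5 users; one misbehaves at 7 Mbps.
C, N, r = 10.0, 5, 7.0
x = nash_rate(C, N, r, lambda v: 1.0 / v)
residual = x * x / r               # x*W(r)/W(x) = x**2/r for RIS
print(round(x, 3), round(residual, 3))   # x > C/N = 2, residual < 2
```

For these numbers x comes out near 2.31 Mbps and the residual near 0.76 Mbps, matching the qualitative shape of Figure 5: the well-behaved users end up above C/N and the misbehaving user well below it.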
4.2 Arbitrary Network of Links

Consider an arbitrary network servicing N flows. Assume
that DWS scheduling is deployed at every link in the network.
Assume that the input rate of each flow is constant.
Therefore, the input and output rates of users at every other
link will also be constant. Now, the input and output rate
of each flow at each link can be computed using Eqs. 1, 12
and 13.
The following theorem establishes that even in an arbitrary
network of links, max-min fair input rates constitute
a Nash as well as a Stackelberg Equilibrium if DWS schedulers
are deployed at each link. Max-min fairness [4] is a well-known
notion of fairness in an arbitrary network. Denote
by M the 1 × N vector of max-min fair rates of these flows
through this network.

Theorem 4.4. Consider N users sending their traffic as
N distinct flows through an arbitrary network with independent
DWS scheduling at each link. The max-min fair rates
M constitute a Nash as well as a Stackelberg Equilibrium
for the users.

The proof is provided in the Appendix. Furthermore, it
is not necessary that the same weight function be used at
each link. This makes it easier to adopt DWS in a heterogeneous
environment with different administrative domains and
policies.
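The max-min fair rates themselves can be computed by the standard progressive-filling (water-filling) procedure of [4]; a compact sketch (the topology encoding is ours):

```python
def max_min_fair(link_caps, flow_paths):
    """Progressive filling: raise all unfrozen flows' rates together until
    some link saturates, then freeze the flows crossing that link."""
    rates = {f: 0.0 for f in flow_paths}
    frozen = set()
    while len(frozen) < len(flow_paths):
        best = None                 # (smallest feasible increment, flows)
        for link, cap in link_caps.items():
            active = [f for f, p in flow_paths.items()
                      if link in p and f not in frozen]
            if not active:
                continue
            used = sum(rates[f] for f, p in flow_paths.items() if link in p)
            inc = (cap - used) / len(active)
            if best is None or inc < best[0]:
                best = (inc, active)
        if best is None:            # remaining flows cross no listed link
            break
        inc, to_freeze = best
        for f in rates:
            if f not in frozen:
                rates[f] += inc
        frozen.update(to_freeze)
    return rates

# Two links: flow "a" uses only link 1, "b" only link 2, "c" uses both.
caps = {1: 10.0, 2: 4.0}
paths = {"a": [1], "b": [2], "c": [1, 2]}
print(max_min_fair(caps, paths))   # "b" and "c" share link 2; "a" gets the rest
```

In the example, flows "b" and "c" are bottlenecked at 2 Mbps each on link 2, and flow "a" picks up the remaining 8 Mbps of link 1, which is exactly the rate vector that Theorem 4.4 says selfish users should settle on.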
Besides M, there may be other equilibria as well, and users
may try to affect which equilibrium is reached. In such a
case, it can be shown that at least one user will experience
losses in any other Nash Equilibrium. This is illustrated in
the following lemma.

Lemma 4.5. Consider N users sending their traffic as N
distinct flows through an arbitrary network with independent
DWS scheduling at each link. The max-min fair rates
M constitute the unique Nash as well as Stackelberg
Equilibrium in which there are no losses in the system.

The proof is provided in the Appendix. In general, a user's
utility may also depend on its loss rate in addition to its
output rate. Out of many Nash Equilibria giving the same
output rates, users will generally prefer the one with smaller
losses. We formally define this class of utility functions (U^l)
as follows:
Figure 6: Simulation scenario (bottleneck link: 10 Mbps, 2 ms; five access links: 100 Mbps, 30 ms).
1. U ∈ U^l maps a user's output rate r̄ and loss rate l to
a real-valued non-negative utility;

2. U(r̄, l) is a monotonically increasing function of the output rate r̄ for a fixed loss rate l;

3. U(r̄, l) is a monotonically decreasing function of the loss rate l for a fixed output rate r̄.
If all users have such utility functions, it turns out that
M is the unique Nash as well as Stackelberg Equilibrium.
This is illustrated in the following theorem, which is similar
to Theorem 4.1 and Theorem 4.2 for a single link.

Theorem 4.5. Consider N users sending their traffic as
N distinct flows through an arbitrary network with independent
DWS scheduling at each link. Let U_i be the utility function
of user i. If U_i ∈ U^l for all i, then the max-min fair rates
M constitute the unique Nash and Stackelberg Equilibrium.

The proof is provided in the Appendix. Therefore, the
"best selfish behaviour" for a user is to send traffic at its
max-min fair rate.
5. SIMULATION RESULTS

The results in the previous section imply that the "best
selfish behaviour" for a user in the presence of other similar
users is to send traffic at its max-min fair rate. However, the
max-min fair rate depends on (a) the link capacities, (b) the
number of flows through each link, (c) the input rates of other
flows and (d) the path of each flow. A user will not know
these parameters in general and thus will not be able to know
its max-min fair rate. However, from Lemmas 3.4 and 3.5 it
does seem that, in the case of a single link with DWS scheduling,
each iteration of a TCP-style increase/decrease algorithm
with suitable parameters will bring the input rates closer to the
fair rates. Therefore, if a single link with DWS scheduling
is modeled as a game, then TCP-like end-user algorithms
seem to be reasonable strategies to play the game.
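As a sanity check of that intuition, one can iterate a crude AIMD loop (the increase and decrease parameters are arbitrary assumptions, not TCP's) against the steady-state RIS link model: each user additively increases while lossless and multiplicatively decreases upon loss, and all rates drift into a band around the fair rate C/N:

```python
def ris_outputs(C, rates, iters=200):
    """Output rates on a single RIS link: min(r_i, K/r_i) with K chosen so
    the outputs sum to min(C, total input) (cf. Theorem 3.1)."""
    target = min(C, sum(rates))
    g = lambda K: sum(min(r, K / r) for r in rates if r > 0)
    lo, hi = 0.0, 1.0
    while g(hi) < target:
        hi *= 2.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if g(mid) < target else (lo, mid)
    K = (lo + hi) / 2.0
    return [min(r, K / r) if r > 0 else 0.0 for r in rates]

C, N = 10.0, 5
rates = [0.5, 1.0, 2.0, 6.0, 3.0]      # arbitrary starting rates
for _ in range(500):
    outs = ris_outputs(C, rates)
    for i in range(N):
        if outs[i] < rates[i] - 1e-9:  # losses observed: back off
            rates[i] *= 0.9
        else:                          # no losses: probe for more
            rates[i] += 0.05
print([round(r, 2) for r in rates])    # all rates hover near C/N = 2
```

Each user only observes whether its own traffic got through, yet the Lemma 3.4/3.5 incentives steer every rate toward the fair share without any user knowing C, N, or the others' rates.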
In this section we illustrate through simulations that
different versions of TCP algorithms are indeed able to converge
to their max-min fair rates (and to Nash rates in the
presence of misbehaving users) when DWS schedulers are
deployed in the network. The convergence to Nash rates is
also shown for different diminishing weight functions. Moreover,
specific versions of TCP and the round trip times of
individual flows have little impact on the average output
rate of a flow. Therefore, DWS scheduling solves most of
the problems of congestion control in the presence of misbehaving
users [7, 8].
Figure 7: DWS performance in the presence of CBR flows (output rate vs. input rate, all other flows sending at 4 Mbps; FCFS, RED, and DWS with w(r) = 1/log(r), w(r) = 1/r (RIS), w(r) = 1/r^2, w(r) = 1/r^4).
5.1 Simulation Scenario

The simulation scenario is shown in Figure 6. The bottleneck
link has a capacity of 10 Mbps and a propagation delay
of 2 ms. There are five access links of capacity 100 Mbps
and propagation delay of 30 ms.
The buffer size for a flow at each link was set to its round trip
bandwidth-delay product. PDWS with smoothing,
per-flow buffers and tail drop was used. NS [1] was used to
carry out all the simulations.
5.2 CBR flows

The bottleneck link is shared by five CBR flows, four of
which send traffic at a constant rate of 4 Mbps. The rate
of the fifth flow is varied from 0 Mbps to 7.2 Mbps. A plot
of its output rate vs. its input rate for various scheduling
algorithms and buffer management policies is shown in
Figure 7.
This is a scenario of heavy congestion. Note from Figure 7
that with DWS scheduling the flow is able to receive its fair
rate as long as it does not cause congestion. The flow starts
receiving a penalty when its input rate exceeds the fair rate.
The amount of penalty depends on the weight function W(·).
Note that the penalties are higher with the steeper weight
functions such as w(r) = 1/r^4 and lower with w(r) = 1/log(r).
5.3 TCP with unresponsive flows

A flow that does not change its input rate during congestion
is referred to as an unresponsive flow [7]. Responsive
flows back off by reducing their sending rate upon detecting
congestion, while unresponsive flows continue to inject
packets into the network, thereby grabbing a larger share of
the bandwidth. As a result, the presence of unresponsive
flows gives rise to unfairness in bandwidth allocation. With
PDWS scheduling deployed, we show that TCP-style AIMD
algorithms can estimate and send traffic at their max-min
fair rate (or Nash rate) even in the presence of misbehaving
flows.
The bottleneck link is shared by 5 users, 4 using (responsive)
TCP Tahoe and one unresponsive constant bit
rate (CBR) source. Responsive TCP flows back off during
congestion while unresponsive CBR flows continue to inject
packets into the network, thus attempting to grab a larger
share of the bandwidth. Figure 8 shows the average output
rates for a representative TCP flow as the input rate of the
CBR flow is varied. Each point in the graph represents a
simulation of 20 seconds. However, the output rates correspond
to the average rate over the last 10 seconds of the
simulation, when they have stabilized.

Figure 8: DWS performance in the presence of TCP and CBR flows (output rate vs. input rate of the CBR flow; 4 TCP and 1 CBR; CBR shown for w(r) = 1/r and w(r) = 1/r^2).
Figure 8 shows that the TCP flows are able to get close to
their Nash rate (shown in Figure 5) and hence to their Nash
Equilibrium (according to Theorem 4.3).
Note that with w(r) = 1/log(r) the output rate of the CBR
flow is greater than that of the TCP flow. This is because
the inverse log weight function gives very little penalty to
the misbehaving CBR flow and is very similar to WFQ. The
bandwidth left by TCP because of timeouts and retransmits
is grabbed by the CBR flow despite its (slightly) smaller GPS
weight. As the CBR flow increases its rate further, the penalty slowly
increases, allowing TCP to grab a larger share.
5.4 TCP Versions

Different versions of TCP like Tahoe, Reno [21], Vegas [5],
and Sack [20] are known to perform differently [21]. We show
that with DWS scheduling there is very little difference in
the output rates achieved by these versions. The simulation
scenario is shown in Figure 6, with different versions of TCP
at n2, n3, n4 and n5. The packetized rate inverse scheduler (PRIS)
was used at the bottleneck link. Figure 9 shows
the total bytes of a flow transferred as a function of time.
The output rate is given by the slope of the graph. Since
the slopes are almost identical, we see that all versions get
identical rates.
5.5 Multiple vs Single Connection

Opening multiple simultaneous connections is a very simple
way to grab more bandwidth from simple FCFS (drop-tail)
gateways. The simulation scenario is shown in Figure 6
with FCFS employed at node n0. The users at nodes n2, n3,
n4, n5 and n6 open 1, 2, 3, 4 and 5 TCP Reno connections
to node n1, respectively.
Typically, a user opening more simultaneous connections
is able to grab more bandwidth. However, this is not the
case with DWS scheduling when all the TCP connections of
a user are treated as a single flow. Figure 10 shows a plot of
bytes transferred vs. time when PRIS is deployed at node
n0. We see that all users get an almost identical bandwidth.
Figure 9: PRIS performance in the presence of different TCP versions (Sack, Reno, Vegas, Tahoe; total data received vs. time)
Figure 10: PRIS performance in the presence of multiple connections (total data received vs. time)
5.6 TCP with different Round Trip Times
The Round Trip Time (RTT) of a TCP connection determines how fast it adapts itself to the current state of the network. A connection with a smaller RTT is able to infer the status of the network earlier than a connection with a larger RTT. Therefore, large-RTT connections typically achieve lower output rates.
The simulation scenario is the same as shown in Figure 6 except that the propagation delays of links (n2-n0), (n3-n0), (n4-n0), (n5-n0) and (n6-n0) are set to 5, 10, 20, 50, 100 ms respectively. The PRIS scheduler was used and the value of the parameter was taken to be 0.9, which corresponds to that of the minimum-RTT flow according to Eq. 16. Figure 11 shows that when PRIS is deployed, after a few initial transients, all flows are able to achieve almost identical rates.
5.7 Network
In this section we show that with DWS scheduling deployed in a network, adaptive flows like TCP are able to estimate and converge to their max-min fair rates. The simulation scenario is shown in Figure 12. There are 6 TCP Reno flows. The paths of flows 0, 1, 2, 3, 4, and 5 are (n0-n2-n1), (n4-n3-n5), (n0-n2-n3-n4), (n1-n2-n3-n5), (n0-…), and (n1-n2-n3-n4) respectively. The buffer size of each flow on each gateway was taken to be 300 packets, which is the bandwidth-delay product of the flow with the largest RTT. For this scenario the max-min fair rate [4] for flows 0 and 1 is 5 Mbps and for flows 2, 3, 4, and 5 is 2.5 Mbps.
Figure 11: PRIS performance in the presence of flows with varying RTTs (14, 24, 44, 104, and 204 ms)
Figure 12: Simulation scenario for a network (link capacities 10 Mbps)
Figure 13: Output rates of flows in a network with PRIS
Figure 13 shows a plot of output rate vs. time for all 6 flows. We see that after some initial transients all flows converge to their max-min fair rates.
6. CONCLUSIONS
Using the techniques of game theory, we showed that the current resource sharing mechanisms in the Internet either encourage congestion-causing behaviour, or are oblivious to it. While these mechanisms may be adequate currently, their applicability in the future remains questionable. With the growth, heterogeneity and commercialization of the Internet, the assumption of end-users being cooperative might not remain valid. This may lead to a congestion collapse of the Internet due to selfish behaviour of the end-users.
We proposed a class of switch scheduling algorithms by the name Diminishing Weight Schedulers (DWS) and showed that they encourage congestion-avoiding behaviour and punish behaviours that lead to congestion. We showed that for a single link with DWS scheduling, fair rates constitute the unique Nash and Stackelberg Equilibrium. We also showed that for an arbitrary network with DWS scheduling at every link, the max-min fair rates constitute a Nash as well as a Stackelberg Equilibrium. Therefore, when DWS schedulers are deployed, even selfish users will try to estimate their max-min fair rate and send traffic only at that rate.
It is possible to set different DWS weights for different users (or traffic classes). This should lead (in a game-theoretic manner) to weighted fair sharing in the case of a single link and weighted max-min fair sharing in the case of a network. These weights may be set in accordance with pricing or other resource sharing policies.
We defined the concept of Nash rate and showed how the choice of different weight functions can affect the reward-penalty profile of DWS. With the 1/r^2 diminishing weight function, the penalty imposed is large, whereas with the 1/log(r) diminishing weight function, the behaviour of DWS is only marginally different from that of GPS, which imposes no penalty. Therefore DWS may also be viewed as a generalization of GPS scheduling with suitable game-theoretic properties. DWS does not require different nodes to use the same weight function. Therefore, it is well suited for a heterogeneous environment consisting of different administrative domains, where each domain may independently choose a diminishing weight function according to its administrative policies.
Although the max-min rates constitute a Nash and a Stackelberg Equilibrium, it is not clear how users can estimate their max-min fair rates. For this, a decentralized distributed scheme such as the one proposed in [15] is required. Moreover, one needs to establish that such a distributed scheme will be stable and will indeed converge to the max-min fair rates when DWS schedulers are deployed in the network. This is a topic under investigation. Our current paper does not address this issue in a theoretical framework. Also, in this paper we assumed that the input rate of every user is constant. With a distributed scheme to estimate the max-min fair rate, this assumption will not remain valid. Analyzing DWS scheduling with dynamically changing rates is another open problem.
We believe it should be possible to design distributed algorithms that are stable and converge to the max-min fair rates in the presence of DWS scheduling. It seems that additive-increase, multiplicative-decrease algorithms (such as the one followed by TCP), with proper engineering, may perform well with DWS scheduling.
Using simulations we showed that in a network with DWS scheduling, most of the TCP variants are able to estimate their max-min fair rate reasonably well, irrespective of their versions and round trip times (RTT). We also showed that with DWS, the TCP users indeed get rewarded according to their Nash rates in the presence of unresponsive, misbehaving CBR flows, which get punished.
Our proposed model requires per-flow queueing and scheduling in the core routers, which may not be very easy to implement in a realistic situation. However, this work presents a significantly different view of resource sharing and congestion control in communication networks and gives a class of scheduling algorithms that can be used to solve the problem in a game-theoretic framework. Based on this work, one may be able to design "core stateless" policies [29] with similar properties.
7. ACKNOWLEDGEMENTS
We thank the computer communication review editors
and the anonymous reviewers for their helpful comments
on this paper.
8. REFERENCES
--R
Priority encoding transmission.
Data Networks.
Vegas: New techniques for congestion detection and avoidance.
Dynamic queue length thresholds in a shared memory ATM switch.
Congestion control principles.
Promoting the use of end-to-end congestion control in the internet
The NewReno modification to TCP's fast recovery algorithm.
Random early detection gateways for congestion avoidance.
Game Theory.
Eliciting cooperation from selfish users.
Congestion avoidance and control.
ERICA switch algorithm: A complete description.
Credit update protocol for flow-controlled ATM networks: Statistical multiplexing and adaptive credit allocation.
Dynamics of random early detection.
Forward acknowledgement: Refining TCP congestion control.
TCP selective acknowledgement options.
Analysis and comparison of TCP Reno and Vegas.
A generalized processor sharing approach to flow control - the single node case.
Transmission control protocol.
Something a host could do with source quench: The source quench introduced delay (SQuID).
A proposal to add explicit congestion notification (ECN) to IP.
A binary feedback scheme for congestion avoidance in computer networks
Making greed work in networks: A game-theoretic analysis of switch service disciplines
Design and analysis of frame-based fair queuing: A new traffic scheduling algorithm for packet switched networks.
Hardware implementation of fair queueing algorithms for asynchronous transfer mode networks.
A taxonomy for congestion control algorithms in packet switching networks.
Hierarchical packet fair queueing algorithms.
Virtual clock
--TR
Data networks
Congestion avoidance and control
A binary feedback scheme for congestion avoidance in computer networks
VirtualClock
Random early detection gateways for congestion avoidance
Making greed work in networks
Credit-based flow control for ATM networks
Design and analysis of frame-based fair queueing
Hierarchical packet fair queueing algorithms
Forward acknowledgement
Dynamics of random early detection
Core-stateless fair queueing
Promoting the use of end-to-end congestion control in the Internet
--CTR
Luis López, Gemma del Rey Almansa, Stéphane Paquelet, Antonio Fernández, A mathematical model for the TCP tragedy of the commons, Theoretical Computer Science, v.343 n.1-2, p.4-26, 10 October 2005
Petteri Nurmi, Modeling energy constrained routing in selfish ad hoc networks, Proceeding from the 2006 workshop on Game theory for communications and networks, October 14-14, 2006, Pisa, Italy
D. S. Menasché, D. R. Figueiredo, E. de Souza e Silva, An evolutionary game-theoretic approach to congestion control, Performance Evaluation, v.62 n.1-4, p.295-312, October 2005
Xiaojie Gao, Leonard J. Schulman, Feedback control for router congestion resolution, Proceedings of the twenty-fourth annual ACM symposium on Principles of distributed computing, July 17-20, 2005, Las Vegas, NV, USA
Altman, T. Boulogne, R. El-Azouzi, T. Jiménez, L. Wynter, A survey on networking games in telecommunications, Computers and Operations Research, v.33 n.2, p.286-311, February 2006
Luis Lpez , Antonio Fernndez , Vicent Cholvi, A game theoretic comparison of TCP and digital fountain based protocols, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.12, p.3413-3426, August, 2007 | game theory;nash equilibrium;fairness;DWS;GPS;congestion control;stackelberg equilibrium;generalized processor sharing;RIS;scheduling;TCP |
571841 | Detectable byzantine agreement secure against faulty majorities. | It is well-known that n players, connected only by pairwise secure channels, can achieve Byzantine agreement only if the number t of cheaters satisfies t < n/3, even with respect to computational security. However, for many applications it is sufficient to achieve detectable broadcast. With this primitive, broadcast is only guaranteed when all players are non-faulty ("honest"), but all non-faulty players always reach agreement on whether broadcast was achieved or not. We show that detectable broadcast can be achieved regardless of the number of faulty players (i.e., for all t < n). We give a protocol which is unconditionally secure, as well as two more efficient protocols which are secure with respect to computational assumptions, and the existence of quantum channels, respectively.These protocols allow for secure multi-party computation tolerating any t < n, assuming only pairwise authenticated channels. Moreover, they allow for the setup of public-key infrastructures that are consistent among all participants --- using neither a trusted party nor broadcast channels.Finally, we show that it is not even necessary for players to begin the protocol at the same time step. We give a "detectable Firing Squad" protocol which can be initiated by a single user at any time and such that either all honest players end up with synchronized clocks, or all honest players abort. | INTRODUCTION
Broadcast (a.k.a. Byzantine agreement) is an important primitive in the design of distributed protocols. A protocol with a designated sender s achieves broadcast if it acts as a megaphone for s: all other players will receive the message s sends, and moreover if an honest player receives a message, then he knows that all other honest players received the same message; it is impossible even for a cheating s to force inconsistency in the outputs.
Lamport, Shostak, and Pease [24, 22] showed that if players share no initial setup information beyond pairwise authenticated channels, then in fact broadcast is possible if and only if t < n/3, where n is the number of players and t is the number of actively corrupted players to be tolerated by the protocol. By the impossibility proofs in [20, 10, 11], even additional resources (e.g., secret channels, private random coins, quantum channels and computers) cannot help to improve this bound unless some setup shared among more than just pairs of players is involved.
On the other hand, the picture changes dramatically if some previous setup is allowed. If secure signature schemes exist and the adversary is limited to polynomial time, then having pre-agreement on a public verification key for every player allows for efficient broadcast for any t < n [22, 8]. Pfitzmann and Waidner [25] showed that broadcast among n players during a precomputation phase allows for later broadcast that is efficient and unconditionally secure, also for any t < n. Those two works will be key pieces for our constructions.
Surprisingly, very strong agreement protocols are still achievable without previous setup. Fitzi, Gisin, Maurer, and von Rotz [13, 12] showed that a weaker variant of broadcast, detectable broadcast, can be achieved for any t < n/2. In a detectable broadcast, cheaters can force the protocol to abort, but in that case all honest players agree that it has aborted. This is ideal for settings in which robust tolerance of errors is not necessary, and detection suffices.
1.1 Contributions
We show that detectable broadcast is possible for any t < n, in three different models. The first protocol requires only pairwise authenticated channels, but assumes a polynomial-time adversary and the existence of secure signature schemes. The second protocol requires pairwise secure channels, but is secure against unbounded adversaries. The third protocol requires authenticated classical channels and (insecure) quantum channels, but also tolerates unbounded adversaries.
Theorem 1. Detectable broadcast is achievable based on (1): a network of pairwise authenticated channels and assuming a secure signature scheme; or based on (2): a network of pairwise secure channels with no computational assumptions; or based on (3): a network of pairwise authenticated channels and insecure quantum channels with no computational assumptions.
The protocol for (1) requires t + 3 rounds and O(n^3 (log|D| + k)) message bits to be sent by correct players, where k is the length of a signature. The protocol for (2) requires t + … rounds and roughly O(n^8 (log …)) message bits to be sent, where k is the security parameter. The protocol for (3) requires … rounds and roughly O(k n^4 log n) bits (qubits) of communication. The message complexities above are stated with respect to message domains of constant size. The exact complexities, with dependencies on the domain size, are given later.
In particular, our results show that the impossibility of weak
broadcast for deterministic protocols, due to Lamport [21], does
not extend to randomized ones (Lamport's proof does apply to
protocols with only public coins, but fails when players are allowed
to have private random inputs).
Combined with results from the previous literature, our results yield protocols for "detectable" versions of multi-party computation (mpc) with resilience t < n, in which the adversary may force the protocol to abort, assuming only pairwise authenticated channels and the existence of trapdoor permutations. An mpc protocol allows players with inputs x1, ..., xn to evaluate some function f(x1, ..., xn) such that the adversary can neither corrupt the output nor learn any information beyond the value of f. We give two ways to apply our detectable broadcast protocol to the generic construction of [15, 2, 14] to remove the assumption of a broadcast channel.
In independent work, Goldwasser and Lindell [17] give a different, more general transformation, which also eliminates the use of a broadcast channel. Their transformation achieves a weaker notion of agreement than ours: honest players may not always agree on whether or not the protocol terminated successfully (see their work for a more precise definition). On the other hand, that transformation is more efficient in round complexity and satisfies partial fairness (which is not satisfied by the more efficient of our transformations). Additionally, they analyze the behaviour of their transformation with respect to arbitrary mpc protocols, not only that of [15, 2, 14], and with respect to concurrent composition.
Finally, it can be observed that, in order to achieve detectable broadcast or multi-party computation, no prior agreement among the players is necessary. This implies that such a protocol can be spontaneously initiated by any player at any time. It also implies that players in a synchronous network can achieve "detectable clock synchronization": either all honest players end up with synchronized clocks, or all honest players abort (not necessarily at exactly the same time).
1.2 Models and Definitions
Models: We consider a synchronous network in which every pair of players is connected by an unjammable, authenticated channel. That is, for every pair i, j, player p_i can always send messages directly to p_j. The adversary can neither prevent those messages from being delivered nor introduce new messages on the channel. By synchronous, we mean that all players run on a common clock, and messages are always delivered within some bounded time. Our protocols are secure even if the adversary can rush messages, i.e., even if messages from honest players are delivered before corrupted players send their messages.
Our protocols are secure against Byzantine (or "active") adversaries, that select up to t players and corrupt them in a coordinated, arbitrarily malicious way. The corruptions may be adaptive, that is, the adversary grows the set of corrupted players on the fly, based on the execution so far.
In this framework, we consider three models, denoted Mauth, Msec, and Mq:
1. Mauth: Authenticated channels, computational security. The adversary may read all communication in the network, even among honest players, but is limited to polynomial-time computations (and may not tamper with the channels). Here, the only information not available to the adversary is the internal state and random coins of the honest players.
2. Msec: Secure channels, unconditional security (a.k.a. "information-theoretic security"). The channels between honest players are unreadable, but the adversary is computationally unbounded.
3. Mq: Quantum channels, unconditional security. All pairs of players share an authenticated classical channel and an insecure quantum channel (which the adversary can tamper with). The adversary is computationally unbounded.
If one is only interested in feasibility results, then one only needs to consider the second model, Msec. By (carefully) encrypting communication over authenticated channels, one can implement secure channels in Mauth [5], so in fact any protocol for Msec is also a protocol for Mauth. However, protocols designed specifically for Mauth can use computational cryptographic tools for greater simplicity and efficiency. Similarly, any protocol for Msec leads to a protocol for Mq (by implementing secure channels using quantum key distribution), but protocols designed specifically for Mq may be more efficient.
Definitions: In the definitions below, D may be any finite domain (say …). We require that the conditions for each task hold except with probability exponentially small in the security parameter (super-polynomially small in the case of computational security). Protocols should have complexity polynomial in both n and k.
Definition 1 (Broadcast). A protocol among n players, where player s (called the sender) holds an input value xs ∈ D and every player p_i (i ∈ [n]) finally decides on an output value y_i ∈ D, achieves broadcast if it satisfies:
Validity: If the sender is honest then all honest players p_i decide on the sender's input value, y_i = xs.
Consistency: All honest players compute the same output value, y_i = y_j.
Definition 2 (Detectable Broadcast). A protocol among n players achieves detectable broadcast if it satisfies:
Correctness: All honest players commonly accept or commonly reject the protocol. If all honest players accept then the protocol achieves broadcast.
Completeness: If no player is corrupted during the protocol then all players accept.
Fairness: If any honest player rejects the protocol then the adversary gets no information about the sender's input.
It turns out that our protocols achieve a stronger notion, namely they "detectably" establish the setup needed to perform strong broadcast using the protocols of [8, 25].
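The definitions above translate directly into predicates over a protocol run's outputs. The following checker is our own illustration (the run encoding and all names are assumptions of this sketch, not from the paper):

```python
def achieves_broadcast(sender, inputs, outputs, honest):
    """Validity and Consistency from the broadcast definition.

    inputs[i] / outputs[i] are player i's input and output;
    honest is the set of indices of honest players."""
    honest_out = [outputs[i] for i in honest]
    consistency = len(set(honest_out)) <= 1
    validity = (sender not in honest) or all(
        outputs[i] == inputs[sender] for i in honest)
    return consistency and validity

def detectable_ok(sender, inputs, outputs, accepted, honest, anyone_corrupted):
    """Correctness and Completeness for detectable broadcast."""
    flags = [accepted[i] for i in honest]
    if len(set(flags)) > 1:     # honest players must agree on accept/reject
        return False
    if flags and flags[0]:      # all honest players accepted ...
        return achieves_broadcast(sender, inputs, outputs, honest)
    return anyone_corrupted     # ... and all-honest runs must not reject
```

Such predicates are useful as test oracles when simulating the protocols that follow.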
Definition 3 (Detectable Precomputation). A protocol among n players achieves detectable precomputation for broadcast (or detectable precomputation, for short) if it satisfies:
Correctness: All honest players commonly accept or commonly reject the protocol. If all honest players accept then strong broadcast will be achievable.
Completeness: If no player is corrupted during the protocol then all honest players accept.
Independence: An honest player's intended input value for any precomputed broadcast need not be known at the time of the precomputation.
Independence implies two important properties: first, the precomputation may be done long before the actual broadcasts it will be used for; and second, the adversary gets no information about any future inputs by honest senders (i.e., fairness as defined for detectable broadcast is guaranteed). In particular, this means that detectable precomputation implies detectable broadcast.
(Footnote: Protocols designed specifically for Mauth may also use potentially weaker computational assumptions. For example, our protocols require only the existence of one-way functions, while the general reduction from Msec to Mauth requires semantically secure encryption.)
As opposed to detectable broadcast, the advantage of detectable precomputation for broadcast is that the preparation is separated from the actual execution of the broadcasts, i.e., only the precomputation must be detectable. As soon as the precomputation has been successfully completed, strong broadcast is possible, secure against any number of corrupted players.
2. GENERIC PROTOCOL FOR DETECTABLE PRECOMPUTATION
Along the lines of [13], we give constructions of protocols for detectable precomputation, which implies detectable broadcast. We present protocols for three models: a network of authenticated channels with computational security (Mauth), secure channels with unconditional security (Msec), and quantum channels (Mq). The protocols for models Mauth and Mq are more efficient, while the one for model Msec is ultimately more general (since it can be used, with small modifications, in all three models).
Note that, although stated differently, the known results in [22, 8] and those in [25] can both be viewed in the same light: in models Mauth and Msec, a temporary phase wherein broadcast is achievable (for some reason) allows running a precomputation such that future broadcast will be achievable without any additional assumptions. In Mauth, this precomputation can simply consist of having every player p_i generate a secret-key/public-key pair and broadcast his public key. In Msec, more involved methods must be applied, but still, the principle of the precomputation is similar: the players broadcast some information that allows all players to consistently compute keys for a pseudo-signature scheme among the players.
Our construction for detectable precomputation is generic in the sense that any "reasonable" precomputation protocol exploiting temporary broadcast in order to allow for future broadcast can be transformed into a protocol for detectable precomputation. This transformation is based on an implementation of conditional gradecast, defined below, which is a variant of graded broadcast [9]. The independent work of Goldwasser and Lindell [17] calls this task "broadcast with designated abort."
Definition 4 (Conditional Gradecast). A protocol among n players, where player s (called the sender) holds an input value xs ∈ D and every player p_i (i ∈ [n]) finally decides on an output value y_i ∈ D and a grade g_i ∈ {0, 1}, achieves conditional gradecast if it satisfies:
Value validity: If the sender is honest then all honest players decide on the sender's input value, y_i = xs.
Conditional grade validity: If all players are honest then all players p_i decide on grade g_i = 1.
Consistency: If any honest player p_i gets grade g_i = 1 then all honest players p_j decide on the same output value, y_j = y_i.
Assume a set P = {p_1, ..., p_n} of players in model M ∈ {Mauth, Msec}. Let Π be a precomputation protocol for model M where, additionally, players are assumed to be able to broadcast messages; and let B = {B_1, ..., B_n} be a set of protocols for model M where each protocol B_i achieves broadcast with sender p_i when based on the information exchanged during an execution of Π. Furthermore, assume that Π satisfies the independence property of Definition 3 with respect to the protocols in B.
If protocol Π always efficiently terminates even if all involved broadcast invocations fail, and all protocols B_i always efficiently terminate even when based on arbitrary precomputed information, then a protocol Π' for detectable precomputation can be achieved from Π and B as follows:
1. Run protocol Π wherein each invocation of broadcast is replaced by an invocation of conditional gradecast with the same sender.
2. Each player p_i computes the logical AND over the grades he got during all invocations of conditional gradecast in (the modified) protocol Π, obtaining a bit G_i.
3. For each player p_j, an invocation of protocol B_j is run where p_j inputs G_j.
4. Each player p_i accepts if and only if G_i = 1 and all values received during Step 3 equal 1.
Note that the protocols B_i in Step 3 do not necessarily achieve broadcast, since the invocation of Π during Step 1 might have failed. However, they will always efficiently terminate by assumption. We now informally argue that protocol Π' achieves detectable precomputation.
Correctness. Suppose p_j and p_k are honest players and suppose that p_j accepts. Then G_j = 1, i.e., all invocations of conditional gradecast during protocol Π achieved broadcast (when neglecting the grade outputs) and hence all protocols in B indeed achieve broadcast. Since p_j accepts, all players p_i broadcasted G_i = 1 during Step 3, in particular all honest ones, and hence all honest players accept at the end.
Completeness. If no player is corrupted during the invocation of protocol Π' then no player p_i ever computes a grade of 0, and all players accept.
Independence. Independence directly follows from the assumed independence property of Π.
Before giving a more detailed view of our concrete protocols for the models Mauth and Msec, we first describe a protocol that achieves conditional gradecast in both models Mauth and Msec.
Protocol CondGradecast: [with respect to sender s]
1. Sender s sends his input xs to every player; player p_i receives y_i.
2. Every player p_i redistributes y_i to every other player and computes grade g_i = 1 if the value y_i was reconfirmed by everybody during Step 2, and g_i = 0 otherwise.
Lemma 1. Protocol CondGradecast achieves conditional gradecast for t < n.
Proof. Both validity conditions are trivially satisfied. On the other hand, suppose that p_i is honest and that g_i = 1. Then every honest player p_j sent y_j = y_i during Step 2, and hence consistency is satisfied.
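The two rounds of Protocol CondGradecast are simple enough to simulate directly. The sketch below is our own illustration; the `cheat` map, which lets corrupted players deliver forged values in either round, is an assumption of the simulation, not part of the protocol.

```python
def cond_gradecast(n, sender, x, cheat=None):
    """Simulate Protocol CondGradecast among n players.

    cheat maps (round, src, dst) -> forged value sent by a corrupted
    src to dst; honest players follow the protocol faithfully.
    Returns (received values, grades)."""
    cheat = cheat or {}
    # Round 1: sender s sends his input x to every player.
    y = [cheat.get((1, sender, i), x) for i in range(n)]
    # Round 2: every player j redistributes the value he received.
    echo = [[cheat.get((2, j, i), y[j]) for j in range(n)] for i in range(n)]
    # Grades: g_i = 1 iff everybody reconfirmed y_i during Round 2.
    g = [1 if all(e == y[i] for e in echo[i]) else 0 for i in range(n)]
    return y, g
```

With an honest sender all grades are 1; if the sender equivocates, every honest player ends up with grade 0, consistent with the grade-consistency condition of Definition 4.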
3. COMPUTATIONAL SECURITY
Let (G, S, V) be a signature scheme, i.e., one secure under adaptive chosen-message attack. Here G is a key generation algorithm, S is the signing algorithm and V is the verification algorithm. All algorithms take a unary security parameter 1^k as input. For simplicity, we assume that signatures are k bits long.
The Dolev-Strong protocol [8] achieves (strong) broadcast when a consistent public-key infrastructure has been set up:
Definition 5 (Consistent Public-Key Infrastructure (PKI)). We say that a group of n players have a consistent public-key infrastructure for (G, S, V) if every player p_i has a verification key PK_i which is known to all other players and was chosen by p_i (in particular, the keys belonging to honest players will have been chosen correctly according to G, using private randomness, and the honest players will know their signing keys). Note that the cheaters' keys may (and will, in general) depend on the keys of honest players.
Proposition 2 (Dolev-Strong [8]). Given a consistent PKI and pairwise authenticated channels, (strong) broadcast tolerating any t < n cheaters is achievable using t + 1 rounds and an overall message complexity of O(n^2 log|D| + n^3 k) bits, where D is the message domain (including possible padding for session IDs, etc). An adversary which makes the protocol fail with probability ε can be used to forge a signature in an adaptive chosen-message attack with probability ε/n.
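The Dolev-Strong idea can be sketched with "ideal" signatures: in this toy simulation of ours, a signature chain is just a tuple of signer indices, unforgeable by construction, and only a faulty sender (delivering different per-recipient values) is modeled — a simplification, not the full protocol.

```python
def dolev_strong(n, t, sender, sent):
    """Toy Dolev-Strong broadcast; sent[i] is the sender-signed value
    dispatched to player i in the first round.  Honest players relay
    for t+1 rounds; a player outputs the unique properly signed value
    he extracted, or a default (None) otherwise."""
    extracted = [set() for _ in range(n)]
    pending = [{(sent[i], (sender,))} for i in range(n)]
    for r in range(1, t + 2):                    # rounds 1 .. t+1
        nxt = [set() for _ in range(n)]
        for i in range(n):
            for (v, chain) in pending[i]:
                # Accept only chains of r distinct signatures that
                # start with the sender's.
                if (len(chain) == r and len(set(chain)) == r
                        and chain[0] == sender and v not in extracted[i]):
                    extracted[i].add(v)
                    for j in range(n):           # relay, adding own signature
                        nxt[j].add((v, chain + (i,)))
        pending = nxt
    return [next(iter(e)) if len(e) == 1 else None for e in extracted]
```

An honest sender's value reaches everyone unchanged; an equivocating sender causes every honest player to extract more than one value and fall back to the same default, so honest outputs still agree.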
Let DSBroadcast denote the Dolev-Strong broadcast protocol. A precomputation "protocol" among n players allowing for future broadcast is to set up a consistent infrastructure. Given broadcast channels this is very simple: have every player generate a signing/verification key pair (SK_i, PK_i) and broadcast the verification key. Hence, by applying the generic transformation described in the previous section we get the following protocol for detectable precomputation:
Protocol DetPrecomp: [Protocol for Model Mauth]
1. Every player p_i generates a signing/verification key pair (SK_i, PK_i) according to G. For every player p_j, protocol CondGradecast is invoked where p_j inputs his public key PK_j as a sender. Every player p_i stores all received public keys PK_j^(i) and grades g_j^(i).
2. Every player p_i computes G_i as the logical AND of all his grades g_j^(i).
3. For every player p_j, an instance of DSBroadcast is invoked where p_j inputs G_j as a sender. Every player p_i stores the received values G^(j) (j ∈ {1, ..., n} \ {i}) and G^(i) = G_i.
4. All players p_i accept if G^(j) = 1 for all j, and reject otherwise.
Theorem 2. Protocol DetPrecomp achieves detectable precomputation among n players for model Mauth tolerating any t < n corrupted players. An adversary who can make the protocol fail with probability ε can be used to forge a signature with probability ε/n.
Proof. Completeness and independence are trivially satisfied. It remains to prove that the correctness condition is satisfied. Assume p_j and p_k to be two honest players and assume that p_j accepts. We show that hence p_k also accepts: Since p_j accepts, it holds that G^(i) = 1 for all i, and in particular G_j = 1. By the definition of conditional gradecast, this implies that the players have set up a consistent PKI and that hence any invocation of Protocol DSBroadcast achieves broadcast. Since p_j accepts, all players broadcasted G_i = 1 during Step 3, and hence all honest players decide to accept at the end of the protocol.
Note that Protocols DetPrecomp and DSBroadcast have another important property, i.e., that they keep the players synchronized: if the players accept then they terminate these protocols during the same communication round. This can be easily verified.
As Protocol DSBroadcast requires t + 1 rounds of communication and an overall message complexity of O(n^2 log|D| + n^3 k) bits to be sent by honest players, Protocol DetPrecomp requires t + 3 rounds and an overall message complexity of O(n^3 log|D| + n^4 k) bits. Any later broadcasts are conventional calls to DSBroadcast.
The message complexity of Protocol DetPrecomp can be reduced to O(n^3 log|D|) bits overall by replacing the n parallel invocations of DSBroadcast during Step 3 by a consensus-like protocol with default value 1. For this, the n DSBroadcast protocols are run in parallel in a slightly modified way. In the first round, a sender p_s who accepts simply sends bit 1 without a signature (or no message at all), and only if he rejects sends 0 together with a signature on it. As soon as, during some round r, a player accepts the value 0 from one or more senders p_s because he received valid signatures from r different players (including p_s) on value 0 with respect to the protocol instance with sender p_s, then, for exactly one arbitrary such sender p_s, he adds his own signature for 0 with respect to this protocol instance and, during the next round, relays all the signatures to every other player, decides on 0, and terminates. If a player never accepts value 0 from any sender p_s, then he decides on 1 after round t + 1. If all players p_s are honest and send value 1, then clearly all players decide on 1 at the end. On the other hand, if any correct player decides 0, then all correct players do so. Finally, during this protocol, no player distributes more signatures than during a single invocation of Protocol DSBroadcast.
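The acceptance rule of this consensus-like protocol can be sketched as a minimal toy model (not the paper's implementation; signatures are abstracted as sets of distinct signer identities, and all function names are invented):

```python
def accepts_zero(signers_on_zero, round_no):
    # A player accepts value 0 in round r iff it holds valid signatures
    # on 0 from at least r distinct players for some sender's instance.
    return len(signers_on_zero) >= round_no

def decide(signers_by_round, t):
    # signers_by_round[r]: distinct signers on 0 known at round r (1-indexed).
    # Decide 0 on first acceptance; otherwise fall back to the default
    # value 1 after round t + 1.
    for r in range(1, t + 2):
        if accepts_zero(signers_by_round.get(r, set()), r):
            return 0
    return 1
```

As in the Dolev-Strong protocol, deciding 0 requires a growing chain of signatures; a player that never assembles such a chain defaults to 1.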
4. UNCONDITIONAL SECURITY WITH
PSEUDO-SIGNATURES
We now consider applying the same framework to the setting of pairwise secure channels, but requiring information-theoretic security. The basic procedures come from [25], which is itself a modified version of the Dolev-Strong protocol, with signatures replaced by information-theoretically secure "pseudosignatures".
Proposition 3 (Pfitzmann-Waidner, [25]). There exist
protocols PWPrecomp and PWBroadcast such that if PWPrecomp is
run with access to a broadcast channel, then subsequent executions
of PWBroadcast (based on the output of PWPrecomp) achieve
strong broadcast secure against an unbounded adversary and tolerating
any t < n.
The total communication is polynomial in n, log|D|, k, and log b, where k is the security parameter (failure probability < 2^{-k}) and b is the number of future broadcasts to be performed. The number of rounds is at most 2n^2.
The generic transformation described in Section 2 can be directly applied to the PW-precomputation protocol, resulting in a protocol for detectable broadcast with at most r = 7 communication rounds and an overall message complexity of a number of bits polynomial in n and the security parameter k. However, most of the PW-precomputation protocol consists of fault localization, i.e., subprotocols that allow the players to identify parties that have been misbehaving. These steps are not required in our context, since we are only interested in finding out whether faults occurred or not. We now give a protocol for the precomputation of one single broadcast with sender s in which these steps are stripped off.
Thereby, the signature and key sizes are chosen as in [25].
Protocol SimpPWPrecomp: [sender s]
1. For (j = s, A), and (j, A), (j, B) (j ∈ {1, …, n} \ {s}) in parallel:
2. For
3. If i ≠ j: select a random authentication key
4. For
5. With every p_h (h ≠ i), agree on a pairwise key K^(ℓ)
6. Broadcast 2ℓ − 1
7. Depending on the outcome, broadcast "accept" or "reject";
8. Decide to accept (h_i := 1) if and only if all signers p_j sent the message "accept" with respect to (j, A) and (j, B); otherwise reject (h_i := 0);
The proof of the following lemma follows from [25]:
Lemma 4. Given a broadcast channel, Protocol SimpPWPrecomp achieves detectable precomputation for broadcast (with respect to Protocol PWBroadcast). It requires three rounds, two of which use the broadcast channel.
Furthermore, for our purpose, Step 7 does not require broadcast
but can be done by simple multi-sending, since invocations
of precomputed broadcast will follow anyway (see Protocol
DetPrecomp in Section 4).
Protocol DetPrecomp: [Model Msec, for b later broadcasts]
1. Execute Protocol SimpPWPrecomp for the b later broadcasts, wherein each invocation of broadcast is replaced by an invocation of Protocol CondGradecast.
2. Every player p_i computes G_i as the AND of h_i and all grades received during the invocations of conditional gradecast during Step 1, where h_i is the bit indicating whether p_i accepted at the end of Step 1.
3. For every player p_j, an instance of PWBroadcast is invoked with p_j as the sender. Every player p_i stores the received values G^(j) (j ∈ {1, …, n} \ {i}) and G^(i) := G_i.
4. All players p_i accept if G^(j) = 1 for all j, and reject otherwise.
Again, as in the protocol for model Mauth of the previous section, the parallel invocations of Protocol PWBroadcast during Step 3 can be replaced by a consensus-like protocol, saving a factor of n in the bit-complexity of the whole protocol.
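For illustration, the grade logic of Steps 2-4 of Protocol DetPrecomp can be sketched as a toy model, under the simplifying assumption that gradecast grades and the bit h_i take values in {0, 1} with 1 meaning success (the function names are invented):

```python
def local_grade(h_i, gradecast_grades):
    # Step 2: G_i is the AND of h_i and all grades received in Step 1.
    return int(h_i == 1 and all(g == 1 for g in gradecast_grades))

def final_decision(broadcast_grades):
    # Step 4: accept (True) iff every player's broadcast value G^(j) is 1.
    return all(g == 1 for g in broadcast_grades)
```

A single failed gradecast or a single rejecting player thus forces every honest player to reject, which is exactly the detectability property.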
Theorem 3. Protocol DetPrecomp achieves detectable precomputation
for b future broadcasts among n players for model Msec
tolerating any number t < n of corrupted players with the following
property: If, for the underlying PWPrecomp, a security parameter of at least k0 is chosen, then the overall error probability, i.e., the probability that either Protocol DetPrecomp or any one of the b broadcasts prepared for fails, is at most 2^{-k}.
Proof. The proof proceeds along the lines of the proof of
Theorem 2. The error probability follows from the analysis
in [25] for one single precomputation/broadcast pair. Invoking
PWPrecomp with security parameter k0 hence implies an overall
error probability of at most 2^{-k}.
Protocol SimpPWPrecomp requires 4 rounds of message exchange and an overall message complexity of O(n^7 log log|D| (k0 + log n + log log log |D|)) bits, where k0 is the security parameter of the underlying PWPrecomp and D is the domain of future messages to be broadcast (including possible padding for session IDs, etc.). Protocol PWBroadcast requires t + 1 rounds of message exchange and an overall message complexity of O(n^2 log|D| (k0 + log n)^2) bits. Since Protocol DetPrecomp precomputing for b later broadcasts invokes Protocol SimpPWPrecomp n times in parallel and Protocol PWBroadcast n times in parallel, it requires overall t + 5 communication rounds and an overall message complexity of O(n^3 log|D| (k0 + log n)^2 + n^8 log log|D| (k0 + log n + log log log |D|)) bits to be sent by correct players. Again, as for the protocols presented in the previous section, if the players accept then they terminate Protocols DetPrecomp and PWBroadcast during the same communication round.
Furthermore, as follows from Proposition 3, using the recycling techniques in [25], the bit complexity of Protocol DetPrecomp can be reduced to polylogarithmic in the number b of later broadcasts to be precomputed for, i.e., to polynomial in n, log|D|, k, and log b.
5. UNCONDITIONAL SECURITY WITH
QUANTUM SIGNATURES
In this section we consider a third network model Mq , in which
participants are connected by pairwise authenticated channels
as well as pairwise (unauthenticated) quantum channels. As for
Msec , we require unconditional security with a small probability
of failure.
Now one can always construct secure channels on top of this
model by using a quantum key distribution protocol (e.g. Bennett-
Brassard [3]). This requires adding two rounds at the beginning
of the protocol. For noiseless quantum channels, agreeing on
a key of ℓ bits requires sending O(ℓ log ℓ) qubits and classical bits [1]. Note that the key distribution
protocol may fail if the adversary intervenes, but in such a case
the concerned players can set their grades to 0 in the later agreement
protocol and all honest players will abort. All in all, this
yields protocols with similar complexity to those of the previous
section.
One can improve the complexity of the protocols significantly by tailoring the pre-computation protocol to the quantum model, and using the quantum signatures of Gottesman and Chuang [19] instead of the pseudosignatures of [25]. The idea is to apply the distributed swap test from [19] to ensure consistency of the distributed quantum keys. As in [25], one gets a broadcast protocol by replacing classical signatures in the Dolev-Strong protocol [8] with quantum signatures. Note that quantum communication is required only in the very first round of the computation, during which (possibly corrupted) EPR pairs are exchanged. Any further quantum transmissions can be done using quantum teleportation. Authentication of the initial EPR transmissions can be done with the protocols of Barnum et al. [1].
Theorem 4. There is a protocol which achieves detectable
precomputation for b future broadcasts among n players for model
Mq tolerating any number t < n of corrupted players. The protocol requires O(k0 n^5 b0) bits (qubits) of communication, where k0 is the security parameter.
6. SECURE MULTI-PARTY COMPUTATION
The results of the previous sections suggest two general techniques for transforming a protocol π which assumes a broadcast channel into a "detectable" protocol π′ which only assumes pairwise communication channels, but which may abort. Suppose that there is an upper bound r on the number of rounds of interaction required by π (such a protocol is called "fixed-round"). The first transformation is straightforward, and is also used in [13]: First, run a protocol for detectable precomputation for broadcast. If it is successful, then run π, replacing calls to the broadcast channel with executions of an authenticated broadcast protocol². The resulting protocol takes O(r · t) rounds.
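The first transformation just described can be sketched as a toy model (invented names; the detectable precomputation and the protocol run are stubbed by a flag and a callback):

```python
def transform_first(run_protocol, precomp_succeeds):
    # Step 1: detectable precomputation for broadcast (stubbed as a flag).
    if not precomp_succeeds:
        return "abort"           # all honest players abort consistently
    # Step 2: run the original protocol, with every broadcast-channel call
    # replaced by an authenticated broadcast over the precomputed PKI.
    return run_protocol()
```

The adversary can force the abort branch only by disrupting the precomputation, i.e., before the protocol proper has begun.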
The second transformation is suggested by the constructions of the previous sections. First run π, replacing all calls to the broadcast channel with executions of CondGradecast. Next, run a detectable broadcast protocol to attempt to agree on whether or not all the executions of CondGradecast were successful (i.e., each player uses the protocol to broadcast the logical AND of his grades from the executions of CondGradecast). If all of the detectable broadcasts complete successfully with the message 1, then accept the result of π; otherwise, abort. The resulting protocol takes O(r + t) rounds. A similar transformation is also discussed by Goldwasser and Lindell [17] (see below).
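The second transformation can be sketched similarly (again a toy model with invented names; grades are taken to be bits, with 1 meaning the gradecast achieved broadcast, and the final detectable broadcast phase is stubbed by a flag):

```python
def transform_second(grades_per_player, detectable_bcast_ok, result):
    # Each player ANDs the grades it collected across the CondGradecast
    # executions of the protocol run.
    and_bits = [int(all(g == 1 for g in grades))
                for grades in grades_per_player]
    # Accept the run only if the detectable broadcast phase succeeded and
    # every player reported AND-of-grades equal to 1.
    if detectable_bcast_ok and all(b == 1 for b in and_bits):
        return result
    return "abort"
```

Here a single low grade anywhere in the run, or a failed final detectable broadcast, makes all players abort.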
Remarks: If protocol π achieves unconditional security, then secure channels are required for both these transformations. For computational security, authenticated channels are sufficient. Moreover, when applying the first transformation in the computational setting, no bound is needed ahead of time on the number of rounds of interaction (though it should nonetheless be polynomial in the security parameter).
At an intuitive level, the first transformation preserves any security properties of π, except for robustness and zero error: robustness is lost since the adversary can force the protocol to fail by interfering with the initial precomputation protocol, and zero error is lost since detectable broadcast must have some small
2 Note that when using an authenticated broadcast protocol several
times, sequence numbers need to be added to the broadcast
messages to avoid replay attacks [18, 23].
probability of error when t ≥ n/3 (by the result of Lamport [21]).
In the following section, we formalize this intuition for the case
of secure multi-party computation (mpc).
The second transformation is more problematic: if the protocol fails partway through, the adversary may learn information he is not supposed to. Moreover, if the protocol has side effects, such as the use of an external resource, then some of those side effects may occur even though the protocol aborts. Nonetheless, in the case of multi-party computing, the transformation may be modified to avoid some of these problems.
Multi-party Computation for t < n
Informally, an mpc protocol allows players with inputs x1, …, xn to collectively evaluate a function f(x1, …, xn) such that cheaters can neither affect the output (except by their choice of inputs) nor learn anything about honest players' inputs except what can be gleaned from the output. For simplicity, we consider deterministic, single-output functions (this incurs no loss of generality). Also, for this section of the paper, we restrict our attention to static adversaries (i.e., the set of corrupted players is decided before the protocol begins).
The security of multi-party computation is usually defined via an ideal model for the computation and a simulator for the protocol. In the ideal model, a trusted third party (TTP) assists the players, and the simulator transforms any adversary for the real protocol into one for the ideal model which produces almost the same output³. The standard ideal model for mpc when t ≥ n/2 essentially operates as follows [14]: The players hand their inputs to the TTP, who computes the output (which is a special symbol ⊥ if any of the corrupted parties refuse to cooperate). If Player 1 is honest, then all parties get the output. If Player 1 is corrupted, then the adversary A first sees the output and then decides whether or not to abort the protocol. If A decides to abort, then the TTP sets the output to ⊥. Finally, the TTP hands this possibly aborted output to the honest parties. The notion of security corresponding to this ideal model is called "the second malicious model" in [14], and "secure computation with abort" in [17].
Given a broadcast channel, there is a protocol for this definition of mpc tolerating any t < n static cheaters. This comes essentially from Goldreich, Micali and Wigderson [15] and Beaver-Goldwasser [2], though a more careful statement of the definition, protocol and proof of security appears in Goldreich [14]. That work also points out that replacing calls to the broadcast channel with an authenticated broadcast protocol -- given a signature infrastructure -- does not affect the security of the protocol⁴. Applying the first transformation to this protocol, we obtain:
Theorem 5. Suppose that trapdoor one-way permutations exist. Then there is a secure mpc protocol (following the definition of [14]) for any efficiently computable function in the model of authenticated channels, which tolerates any t < n.
3 The output here is the joint distribution of the adversary's view and the outputs of the honest players.
4 One point which [14] does not discuss explicitly is that sequence numbers are needed to ensure the independence of the various executions of the authenticated broadcast protocol. This is not a problem since the network is synchronous, and so the sequence numbers can be derived from the round number.
Proof. (sketch) The only difference between our protocol and that proved correct in Goldreich [14] is the initial detectable precomputation phase. This allows the adversary to abort the protocol before he gets any information on the honest parties' inputs. However, he already has that power in the standard ideal model (he can refuse to provide input to the TTP).
To construct a simulator S′ for π′, we can modify the simulator S constructed in [14] to add an additional initial phase in which S′ simulates the detectable precomputation protocol, using the signature keys of the (real) honest parties as the inputs for the simulated ones. If the protocol aborts, then S′ sends the input ⊥ on behalf of all cheating parties to the TTP. Otherwise, S′ runs the simulator S, using the output of the precomputation phase as the signature infrastructure. The correctness of this simulation follows from the correctness of the original simulator S.
Round-efficient Multi-party Computation
The protocol above is not very efficient -- the first transformation multiplies the round complexity of the original protocol by t. Instead, consider applying the second transformation to an mpc protocol: replace every call to the broadcast channel with an invocation of Conditional Gradecast, and agree at the end of the protocol on whether or not all the broadcasts were successful by running the detectable broadcast of Section 3.
As mentioned above, this transformation must be made carefully. In multi-party computing, it is important that the transformed protocol not leak any more information than did the original protocol. If honest players continue to run the protocol after an inconsistent Conditional Gradecast has occurred, the adversary could exploit inconsistencies to learn secret values (say, by seeing both possible answers in some kind of cut-and-choose proof). To ensure that this is not a problem, once an honest player has computed a grade indicating failure in one of the executions of CondGradecast, he no longer continues running the original protocol π. Instead, he only resumes participation during the final detectable broadcast phase, in which players agree on whether to accept or reject.
As pointed out by Goldwasser and Lindell [17], the resulting protocol π′ achieves some sort of secure computation, but it does not meet the definition of [14]. In particular, that definition implies partial fairness, which π′ does not satisfy.
A protocol is fair if the adversary can never learn more about the input than do the honest players. Fairness is in fact unachievable in the setting of t ≥ n/2, since it is unachievable in the 2-party setting (Cleve [6], Goldwasser and Levin [16]). A protocol is partially fair if there is some designated player P1 such that when P1 is honest, the protocol is fair. π′ is not even partially fair, since the adversary may wait until the end of the computation, learn the output, and still force the honest players to abort.
Goldwasser and Lindell [17] give a transformation similar to this second one, in which there is no detectable broadcast phase at the end. They provide a rigorous analysis, and prove that it achieves a definition of secure computation in which players need not agree on whether or not the protocol aborted. They additionally show how that construction can be modified to achieve partial fairness. One can think of the protocol π′ as adding a detectable broadcast phase to the initial protocol of [17], to ensure agreement on the computation's result.
7. NON-UNISON STARTING AND PKI
For all previous \detectable protocols" it was implicitly assumed
that all players start the protocol in the same communication
round. This requires agreement among the players on which
protocol is to be run and on a point of time when the protocol
is to be started. We now argue that this assumption is unnecessary -- not even agreement on the player set among which the protocol will be run is needed.
Coan, Dolev, Dwork, and Stockmeyer [7] gave a solution for the "Byzantine agreement with non-unison start" problem, a problem related to the firing squad problem introduced by Burns and Lynch [4]. Given the setup of a consistent PKI, their protocol achieves broadcast for t < n even when not all honest players start the protocol during the same round, i.e., the broadcast can be initiated by any player on the fly. It turns out that this idea is also applicable to detectable broadcast. This allows a player pI who shares authenticated (or secure) channels with all members of a player set P′ to (unexpectedly) initiate a protocol among the players in P′ for a detectable precomputation.
Let such a player pI be called the initiator and the players in P′ be called the initiees of the protocol. The following protocol description is split into the initiator's part and the initiees' part.
Protocol InitAdHocComp: [Initiator pI]
1. Send to P′ an initiation message containing a unique session identifier id, the specification of the player set P′, and of a multi-party computation protocol π among the player set P′ ∪ {pI}.
2. Perform protocol DetPrecomp among P′ to precompute for all broadcast invocations required by protocol π. In the final "broadcast round", instead of broadcasting the value G_I to indicate whether all conditional gradecast protocols achieved broadcast, the value G_I ∧ S_I is broadcast, where S_I indicates whether all players in P′ synchronously entered the precomputation protocol and always used session identifier id.
3. Accept and execute protocol π if and only if G_I ∧ S_I = 1 and all players in P′ broadcasted their acceptance at the end.
Protocol AdHocComp: [Initiee p_i]
1. Upon receipt of an initiation message by an initiator pI, decide whether you are interested in executing an instance of π among player set P′ ∪ {pI};
check that the specified id is not being used in any concurrent invocation;
check whether p_i ∈ P′;
check whether there are authenticated (or secure) channels between p_i and all other players in P′ ∪ {pI} as required by protocol π.
2. If all checks in Step 1 were positive, then perform protocol DetPrecomp among P′ to precompute for all broadcast invocations required by protocol π. In the final "broadcast round", instead of broadcasting the value G_i to indicate whether all conditional gradecast protocols achieved broadcast, the value G_i ∧ S_i is broadcast, where S_i indicates whether all players in P′ synchronously entered the precomputation protocol and always used session identifier id.
3. Accept and execute protocol π if and only if G_i ∧ S_i = 1 and all players in P′ broadcasted their acceptance at the end.
Note that the first check in Step 1 of Protocol AdHocComp implicitly prevents the adversary from "spamming" players with initiations in order to overwhelm a player with workload. A player can simply ignore initiations without affecting consistency among the correct players.
Theorem 6. Suppose there is a player set P′ and a player pI such that pI shares authenticated (or secure) channels with every player in P′ (whereas no additional channels are assumed between the players in P′). Then pI can initiate a protocol among player set P = P′ ∪ {pI} that achieves the following properties for t < n:
All honest players in P either commonly accept or reject the protocol (instead of rejecting, it is also possible that a player ignores the protocol, which is an implicit rejection).
If they accept, then broadcast or multi-party computation will be achievable among P (with everybody knowing this).
If all players in P are honest and all players are connected by pairwise authenticated (or secure) channels, then all players accept.
In particular, such a protocol can be used in order to detectably
set up a consistent PKI without the need for a trusted party.
8. ACKNOWLEDGEMENTS
We thank Shafi Goldwasser and Yehuda Lindell for discussions which led to a substantial improvement of the treatment of the results of Section 6, as well as for pointing out errors in earlier versions of this work. We also thank Jon Katz, Idit Keidar, and Rafi Ostrovsky for helpful discussions. The work of Adam Smith was supported by U.S. Army Research Office Grant DAAD19-00-1-0177.
9. REFERENCES
Multiparty computation with faulty majority.
An update on quantum cryptography.
The byzantine
Adaptively secure multi-party computation
Limits on the security of coin ips when half the processors are faulty (extended abstract).
The distributed
Authenticated algorithms for Byzantine agreement.
An optimal probabilistic protocol for synchronous Byzantine agreement.
Easy impossibility proofs for distributed consensus problems.
Minimal complete primitives for unconditional multi-party computation
Quantum solution to the Byzantine agreement problem.
Unconditional Byzantine agreement and multi-party computation secure against dishonest minorities from scratch
Secure multi-party computation
How to play any mental game
Fair computation of general functions in presence of immoral majority.
Secure computation without a broadcast channel.
Byzantine agreement with authentication: Observations and applications in tolerating hybrid and link faults.
Quantum digital signatures.
The weak Byzantine generals problem.
The Byzantine generals problem.
On the composition of authenticated byzantine agreement.
Reaching agreement in the presence of faults.
Keywords: byzantine agreement; multi-party computation; public-key infrastructure; broadcast; quantum signatures
Self-similarity in the Web

Abstract: Algorithmic tools for searching and mining the Web are becoming increasingly sophisticated and vital. In this context, algorithms that use and exploit structural information about the Web perform better than generic methods in both efficiency and reliability. We present an extensive characterization of the graph structure of the Web, with a view to enabling high-performance applications that make use of this structure. In particular, we show that the Web emerges as the outcome of a number of essentially independent stochastic processes that evolve at various scales. A striking consequence of this scale invariance is that the structure of the Web is "fractal" -- cohesive subregions display the same characteristics as the Web at large. An understanding of this underlying fractal nature is therefore applicable to designing data services across multiple domains and scales. We describe potential applications of this line of research to optimized algorithm design for Web-scale data analysis.

1. Introduction
As the size of the web grows exponentially, data services on the web are becoming increasingly complex
and challenging tasks. These include both basic services such as searching and finding related pages, and
advanced applications such as web-scale data mining, community extraction, and construction of indices, taxonomies, and vertical portals. Applications are beginning to emerge that are required to operate at various
points on the "petabyte curve" - billions of web pages that each have megabytes of data, tens of millions of
users in a peer-to-peer setting each with several gigabytes of data, etc. The upshot of the rate and diversity
of this growth is that data service applications for collections of hyperlinked documents need to be efficient
and effective at several scales of operation. As we will show, a form of "scale invariance" exists on the web
that allows simplification of this multi-scale data service design problem.
The first natural approach to the wide range of analysis problems emerging in this new domain is to
develop a general query language to the web. There have been a number of proposals along these lines [34,
6, 43]. Further, various advanced mining operations have been developed in this model using a web-specific
query language like those described above, or a traditional database encapsulating some domain knowledge
into table layout and careful construction of SQL programs [18, 42, 8].
However, these applications are particularly successful precisely when they take advantage of the special
structure of the document collections and the hyperlink references among them. An early example of this
phenomenon in the marketplace is the paradigm shift witnessed in search applications - ranking schemes
for web pages that were based on link analysis [26, 12] proved to be vastly superior to the more traditional
text-based ones.
The success of these specialized approaches naturally led researchers to seek a finer understanding of the
hyperlinked structure of the web. Broadly, there are two (very related) lines of research that have emerged.
The first one is more theoretical and is concerned with proposing stochastic models that explain the hyperlink
structure of the web [27, 7, 1]. The second line of research [13, 7, 3, 28] is more empirical; new experiments
are conducted that either validate or refine existing models.
There are several driving applications that motivate (and are motivated by) a better understanding of
the neighborhood structure on the web. In particular, the "second generation" of data service applications
on the web - including advanced search applications [16, 17, 10], browsing and information foraging
[14, 39, 15, 40, 19], community extraction [28], taxonomy construction [30, 29] - have all taken tremendous
advantage of knowledge about the hyperlink structure of the web. As just one example, let us mention
the community extraction algorithm of [28]. In this algorithm, a characterization of degree sequences within
web-page neighborhoods allowed the development and analysis of efficient pruning algorithms for a sub-graph
enumeration problem that is in general intractable.
Even more recently, new algorithms have been developed to benefit from structural information about
the web. Arasu et al. [5] have shown how to take advantage of the macroscopic "bow-tie" structure of
the web [13] to design an efficient algorithmic partitioning method for certain eigenvector computations;
these are the key to the successful search algorithms of [26, 12], and to popular database indexing methods
such as latent semantic indexing [20, 36]. Adler and Mitzenmacher [4] have shown how the random graph
characterizations of the web given in [27] can be used to construct very effective strategies to compress the
web graph.
1.1 Our results
In this paper, we present a much more refined characterization of the structure of the web. Specifically, we
present evidence that the web emerges as the outcome of a number of essentially independent stochastic
processes that evolve at various scales, all roughly following the model of [27]. A striking consequence of
this is that the web is a "fractal" - each thematically unified region displays the same characteristics as the
web at large. This implies the following useful corollary:
To design efficient algorithms for data services at various scales on the web (vertical portals pertaining
to a theme, corporate intranets, etc.), it is sufficient (and perhaps necessary) to understand the structure that
emerges from one fairly simple stochastic process.
We believe that this is a significant step in web algorithmics. For example, it shows that the sophisticated
algorithms of [5, 4] are only the beginning, and the prospects are, in fact, much wider. We fully expect future
data applications on the web to leverage this understanding.
Our characterization is based on two findings we report in this paper. Our first finding is an experimental
result. We show that self-similarity holds for many different parameters, and also for many different
approaches to defining varying scales of analysis. Our second finding is an interpretation of the experimental
data. We show that, at various different scales, cohesive collections of web pages (for instance, pages on
a site, or pages about a topic) mirror the structure of the web at large. Furthermore, if the web is decomposed
into these cohesive collections, for a wide range of definitions of "cohesive," the resulting collections
are tightly and robustly connected via a navigational backbone that affords strong connectivity between the
collections. This backbone ties together the collections of pages, but also ties together the many different
and overlapping decompositions into cohesive collections, suggesting that committing to a single taxonomic
breakdown of the web is neither necessary nor desirable. We now describe these two findings in more detail.
First, self-similarity in the web is pervasive and robust - it applies to a number of essentially independent
measurements and regardless of the particular method used to extract a slice of the web. Second, we
present a graph-theoretic interpretation of the first set of observations, which leads to a natural hierarchical
characterization of the structure of the web interpreted as a graph. In our characterization, collections of web
pages that share a common attribute (for instance, all the pages on a site, or all the pages about a particular
topic) are structurally similar to the whole web. Furthermore, there is a navigational backbone to the web
that provides tight and robust connections between these focused collections of pages.
1. Experimental findings. Our first finding, that self-similarity in the web is pervasive and appears in
many unrelated contexts, is an experimental result. We explore a number of graph-theoretic and syntactic
parameters. The set of parameters we consider is the following: indegree and outdegree distributions;
strongly- and weakly- connected component sizes; bow-tie structure and community structure on the web
graph; and population statistics for trees representing the URL namespace. We define these parameters
formally below. We also consider a number of methods for decomposing the web into interesting subgraphs.
The set of subgraphs we consider is the following: a large internet crawl; various subgraphs consisting of
about 10% of the sites in the original crawl; 100 websites from the crawl each containing at least 10,000
pages; ten graphs, each consisting of every page containing a set of keywords (in which the ten keyword sets
represent five broad topics and five sub-topics of the broad topics); a set of pages containing geographical
references (e.g., phone numbers, zip codes, city names, etc.) to locations in the western United States; a
graph representing the connectivity of web sites (rather than web pages); and a crawl of the IBM intranet.
We then consider each of the parameters described above, first for the entire collection, and then for each
decomposition of the web into sub-collections. Self-similarity is manifest in the resulting measurements in
two flavors. First, when we fix a collection or sub-collection and focus on the distribution of any parameter
(such as the number of hyperlinks, number of connected components, etc.), we observe a Zipfian self-similarity
within the pageset. 1 Namely, for any parameter x with distribution X, there is a constant c such
that for all t > 0 and a >= 1, Pr[X >= at | X >= t] = a^(-c), independent of t. Second, the phenomena (whether distributional or
structural) that are manifest within a sub-collection are also observed (with essentially the same constants)
in the entire collection, and more generally, in all sub-collections at all scales - from local websites to the
web as a whole.
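This distributional scale-invariance is easy to check numerically. The sketch below (illustrative, not from the paper's experiments) draws a sample whose tail follows Pr[X >= t] = t^(-c) and verifies that the conditional tail probability Pr[X >= a*t | X >= t] is approximately a^(-c), regardless of the threshold t:

```python
import random

def pareto_sample(c, n, seed=0):
    # Inverse-transform sampling: (1-U)^(-1/c) has tail Pr[X >= t] = t^(-c), t >= 1.
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / c) for _ in range(n)]

def cond_tail(xs, a, t):
    # Empirical Pr[X >= a*t | X >= t].
    above = [x for x in xs if x >= t]
    return sum(1 for x in above if x >= a * t) / len(above)

xs = pareto_sample(c=2.1, n=200_000)
p1 = cond_tail(xs, a=2.0, t=1.0)
p2 = cond_tail(xs, a=2.0, t=4.0)
# Both are close to 2.0 ** -2.1 (about 0.233), independent of the cutoff t.
```

The exponent 2.1 is chosen to match the indegree exponent reported later in the paper.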
2. Interpretations. Our second finding is an interpretation of the experimental data. As mentioned above,
the sub-collections we study are created to be cohesive clusters, rather than simply random sets of web
pages. We will refer to them as thematically unified clusters, or simply TUCs. Each TUC has structure
similar to the web as a whole. In particular, it has a Zipfian distribution over the parameters we study, strong
navigability properties, and significant community and bow-tie structure (in a sense to be made explicit
below).
Furthermore, we observe unexpectedly that the central regions of different TUCs are tightly and robustly
connected together. These tight and robust inter-cluster linking patterns provide a navigational backbone for
the web. By analogy, consider the problem of navigating from one physical address to another. A user might
take a cab to the airport, take a flight to the appropriate destination city, and take a cab to the destination
address. Analogously, navigation between TUCs is accomplished by traveling to the central core of a TUC,
following the navigational backbone to the central core of the destination TUC, and finally navigating within
the destination TUC to the correct page. We show that the self-similarity of the web graph, and its local and
global structure, are alternate and equivalent ways of viewing this phenomenon.
1.2 Related prior work
Zipf-Pareto-Yule and Power laws.
Distributions with an inverse polynomial tail have been observed in a number of contexts. The earliest
observations are due to Pareto [38] in the context of economic models. Subsequently, these statistical behaviors
have been observed in the context of literary vocabulary [45], sociological models [46], and even
oligonucleotide sequences [33], among others. Our focus is on the closely related power law distributions,
defined on the positive integers, with the probability of the value i being proportional to i^(-k) for a small positive
number k. Perhaps the first rigorous effort to define and analyze a model for power law distributions is
due to Herbert Simon [41].
Recent work [30, 7] suggests that both the in- and the outdegrees of nodes on the web graph have power
laws. The difference in scope in these two experiments is noteworthy. The first [30] examines a web crawl
from 1997 due to Alexa, Inc., with a total of over 40 million nodes. The second [7] examines web pages
from the University of Notre Dame domain *.nd.edu as well as a portion of the web reachable from 3
other URLs. This collection of findings already leads us to suspect the fractal-like structure of the web.
1 For more about the connection between Zipfian distributions and self-similarity, see Section 2.2 and [31].
2 For example, the fraction of web pages that have k incoming hyperlinks is proportional to k^(-2.1).
Graph-theoretic methods.
Much recent work has addressed the web as a graph and applied algorithmic methods from graph theory
in addressing a slew of search, retrieval, and mining problems on the web. The efficacy of these methods
was already evident even in early local expansion techniques [14]. Since then, increasingly sophisticated
techniques have been used; the incorporation of graph-theoretical methods with both classical and new
methods that examine both context and content, and richer browsing paradigms have enhanced and validated
the study and use of such methods. Following Botafogo and Shneiderman [14], the view that connected and
strongly-connected components represent meaningful entities has become widely accepted.
Power laws and browsing behavior.
The power law phenomenon is not restricted to the web graph. For instance, [21] report very similar
observations about the physical topology of the internet. Moreover, the power law characterizes not only the
structure and organization of information and resources on the web, but also the way people use the web.
Two lines of work are of particular interest to us here. (1) Web page access statistics, which can be easily
obtained from server logs (but for caching effects) [22, 25, 2]. (2) User behavior: the number of times
users at a single site access particular pages also follows a power law, as verified by instrumenting and
inspecting logs from web caches, proxies, and clients [9, 32].
There is no direct evidence that browsing behavior and linkage statistics on the web graph are related
in any fundamental way. However, making the assumption that linkage statistics directly determine the
statistics of browsing has several interesting consequences. The Google search algorithm is one example.
Indeed, the view of PageRank put forth in [12] is that it puts a probability value on how
easy (or difficult) it is to find particular pages by a browsing-like activity. Moreover, it is generally true (for
instance, in the case of random graphs) that this probability value is closely related to the indegree of the
page. In addition there is recent theoretical evidence [27, 41] suggesting that this relationship is deeper. In
particular, if one assumes that the ease of finding a page is proportional to its graph-theoretic indegree, and
that otherwise the process of evolution of the web as a graph is a random one, then power law distributions
are a direct consequence. The resulting models, known as copying models for generating random graphs
seem to correctly predict several other properties of the web graph as well.
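The copying intuition is easy to simulate. The sketch below is a deliberately simplified toy version of a copying process (not the precise model of [27]); the parameters m, alpha, and the bootstrap node are illustrative assumptions. Each new node either links to a uniformly random earlier node or copies a link from a randomly chosen prototype, and a heavy-tailed indegree distribution emerges:

```python
import random

def copying_model(n, m=3, alpha=0.5, seed=0):
    # Simplified copying process: each new node emits m links; with
    # probability alpha a link goes to a uniformly random earlier node,
    # otherwise it copies the corresponding link of a random prototype.
    rng = random.Random(seed)
    links = [[0] * m]                     # bootstrap: node 0 links to itself
    for u in range(1, n):
        proto = rng.randrange(u)
        out = [rng.randrange(u) if rng.random() < alpha else links[proto][i]
               for i in range(m)]
        links.append(out)
    indeg = [0] * n
    for out in links:
        for v in out:
            indeg[v] += 1
    return indeg

indeg = copying_model(20_000)
# Copying concentrates in-links: a few nodes attract a large share of links,
# unlike the nearly uniform indegrees a pure random-choice process would give.
```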
2 Preliminaries
In this section we formalize our view of the web as a graph; here we ignore the text and other content in
pages, focusing instead on the links between pages. In the terminology of graph theory [23], we refer to
pages as nodes, and to links as arcs. In this framework, the web is a large graph containing over a billion
nodes, and a few billion arcs.
2.1 Graphs and terminology
A directed graph consists of a set of nodes, denoted V, and a set of arcs, denoted E. Each arc is an ordered
pair of nodes (u, v) representing a directed connection from u to v. The outdegree of a node u is the number
of distinct arcs (u, v1), ..., (u, vk) (i.e., the number of links from u), and the indegree is the number of
distinct arcs (v1, u), ..., (vk, u) (i.e., the number of links to u). A path from node u to node v is a sequence
of arcs (u, u1), (u1, u2), ..., (uk, v). One can follow such a sequence of arcs to "walk" through the graph
from u to v. Note that a path from u to v does not imply a path from v to u. The distance from u to v
is one more than the smallest k for which such a path exists. If no path exists, the distance from u to v
is defined to be infinity. If (u, v) is an arc, then the distance from u to v is 1. Given a graph (V, E) and
a subset V' of the node set V, the node-induced subgraph (V', E') is defined by taking E' to be the set of
arcs in E both of whose endpoints lie in V'; i.e., the node-induced subgraph corresponding to some subset
V' of the nodes contains only arcs that lie entirely within V'.
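The node-induced subgraph operation is straightforward to implement; a minimal sketch (the arc list and node names are illustrative):

```python
def induced_subgraph(arcs, nodes):
    """Node-induced subgraph: keep only arcs with both endpoints in `nodes`."""
    nodes = set(nodes)
    return [(u, v) for (u, v) in arcs if u in nodes and v in nodes]

E = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
print(induced_subgraph(E, {"a", "b", "c"}))
# [('a', 'b'), ('b', 'c'), ('c', 'a')]  -- the arc ('c', 'd') is dropped
```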
Given a directed graph, a strongly connected component of this graph is a set of nodes such that for any
pair of nodes u and v in the set there is a path from u to v. In general, a directed graph may have one or many
strong components. Any graph can be partitioned into a disjoint union of strong components. Given two
strongly connected components C1 and C2, either there is a path from C1 to C2, or a path from C2 to C1,
or neither, but not both. Let us denote the largest strongly connected component by SCC. Then, all other
components can be classified with respect to the SCC in terms of whether they can reach, be reached from,
or are independent of, the SCC. Following [13], we denote these components IN, OUT, and OTHER
respectively. The SCC, flanked by the IN and OUT, figuratively forms a "bow-tie."
A weakly connected component (WCC) of a graph is a set of nodes such that for any pair of nodes u
and v in the set, there is a path from u to v if we disregard the directions of the arcs. Similar to strongly
connected components, the graph can be partitioned into a disjoint union of weakly connected components.
We denote the largest weakly connected component by WCC.
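The definitions above translate directly into code. The sketch below (an illustrative implementation, not the paper's) computes strongly connected components with Kosaraju's algorithm and classifies the remaining nodes into the IN, OUT, and OTHER sets of the bow-tie:

```python
from collections import defaultdict, deque

def sccs(nodes, arcs):
    # Kosaraju's algorithm (iterative): list of strongly connected components.
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        fwd[u].append(v)
        rev[v].append(u)
    seen, order = set(), []
    for s in nodes:                       # pass 1: DFS finishing order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(fwd[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(fwd[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comp, comps = {}, []
    for s in reversed(order):             # pass 2: collect on reversed graph
        if s in comp:
            continue
        comps.append([])
        comp[s] = len(comps) - 1
        stack = [s]
        while stack:
            u = stack.pop()
            comps[-1].append(u)
            for v in rev[u]:
                if v not in comp:
                    comp[v] = len(comps) - 1
                    stack.append(v)
    return comps

def reachable(starts, adj):
    # All nodes reachable from `starts` by BFS along `adj`.
    seen, q = set(starts), deque(starts)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def bow_tie(nodes, arcs):
    # Classify nodes as SCC / IN / OUT / OTHER relative to the largest SCC.
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        fwd[u].append(v)
        rev[v].append(u)
    core = set(max(sccs(nodes, arcs), key=len))
    out = reachable(core, fwd) - core     # reached from the SCC
    inn = reachable(core, rev) - core     # can reach the SCC
    other = set(nodes) - core - inn - out
    return core, inn, out, other

nodes = ["i", "c1", "c2", "o", "x"]
arcs = [("i", "c1"), ("c1", "c2"), ("c2", "c1"), ("c1", "o")]
core, inn, out, other = bow_tie(nodes, arcs)
# core={'c1','c2'}, inn={'i'}, out={'o'}, other={'x'}
```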
2.2 Zipf distributions and power laws
The power law distribution with parameter a > 1 is a distribution over the positive integers. Let X be a
power law distributed random variable with parameter a. Then, the probability that X = i is proportional to
i^(-a). The Zipf distribution is an interesting variant on the power law. The Zipf distribution is defined over
any categorical-valued attribute (for instance, words of the English language). In the Zipf distribution, the
probability of the i-th most likely attribute value is proportional to i^(-a). Thus, the main distinction between
these is in the nature of the domain from which the r.v. takes its values. A classic general technique for
computing the parameter a characterizing the power law is due to Hill [24]. We will use Hill's estimator as
the quantitative measure of self-similarity.
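Hill's estimator itself is simple: sort the sample and invert the mean log-excess of the k largest observations over the (k+1)-st largest. A sketch, tested on a synthetic Pareto sample (sample size and cutoff k are illustrative choices):

```python
import math
import random

def hill_estimator(xs, k):
    # Hill's estimator of the tail exponent a: 1 over the mean of
    # log(X_(i) / X_(k+1)) for the k largest order statistics X_(i).
    xs = sorted(xs, reverse=True)
    h = sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
    return 1.0 / h

rng = random.Random(1)
# Synthetic sample with tail Pr[X >= t] = t^(-2.1), t >= 1:
xs = [(1.0 - rng.random()) ** (-1.0 / 2.1) for _ in range(100_000)]
a_hat = hill_estimator(xs, k=5_000)       # recovers roughly 2.1
```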
While a variety of socio-economic phenomena have been observed to obey Zipf's law, there is only a
handful of stochastic models for these phenomena of which satisfying Zipf's law is a consequence. Simon
was perhaps the first to propose a class of stochastic processes whose distribution functions follow the
Zipf law [31]. Recently, new models have been proposed for modeling the evolution of the web graph [27].
These models predict that several interesting parameters of the web graph obey the Zipf law.
3 Experimental setup
3.1 Random subsets and TUCs
Since the average degree of the web graph is small, one should expect subgraphs induced by (even fairly
large) random subsets of the nodes to be almost empty. Consider for instance a random sample of 1 million
web pages (say out of a possible 1 billion pages). Consider now an arbitrary arc, say (a; b). The probability
that both endpoints of the arc are chosen in the random sample is about 1 in a million (1/1000 * 1/1000).
Thus, the total expected number of arcs in the induced subgraph of these million nodes is about 8000,
assuming an average degree of 8 for the web as a whole. Thus, it would be unreasonable to expect random
subgraphs of the web to contain any graph-theoretic structure. However, if the subgraphs chosen are not
random, the situation could be (and is) different. In order to highlight this dichotomy, we introduce the
notion of a thematically unified cluster (TUC). A TUC is a cluster of webpages that share a common trait.
In all instances we consider, these thematically unified clusters share a fairly syntactic trait. However, we
do not wish to restrict our definition only to such instances. For instance, one could consider linkage-based
concepts [43, 39] as well. We now detail several instances of TUCs.
(1) By content: The premise that web content on any particular topic is also "local" in a graph-theoretic
context has motivated some interesting earlier work [26, 30]. Thus, one should expect web pages that
share subject matter to be more densely linked than random subsets of the web. If so, these graphs should
display interesting morphological structure. Moreover, it is reasonable to expect this structure to represent
interesting ways of further segmenting the topic.
The most naive method for judging content correlation is to simply look at a collection of webpages
which share a small set of common keywords. To this end, we have generated 10 slices of the web, denoted
henceforth as KEYWORD1, ..., KEYWORD10. To determine whether a page belongs to a keyword set,
we simply look for the keyword in the body of the document after simple pre-processing (removing tags and
javascript, transforming to lower case, etc.). The particular keyword sets we consider are shown in Tables 3 and
4 below. The terms in the first table correspond to mesoscopic subsets and the corresponding terms in the
second table are microscopic subsets of the earlier ones.
(2) By location: Websites and intranets are logically consistent ways of partitioning the web. Thus,
they are obvious candidates for TUCs. We look at intranets and particular websites to see what structures
are represented at this level. We are interested in what features, if any, distinguish these two cases from
each other and indeed from the web at large. Our observations here would help determine what special
processing, if any, would be relevant in the context of an intranet. To this end, we have created TUCs
consisting of the IBM intranet, denoted INTRANET henceforth, and 100 websites denoted SUBDOMAIN1,
..., SUBDOMAIN100, each containing at least 10K pages.
(3) By geographic location: Geography is becoming increasingly evident in the web, with the growth in
the number of local and small businesses represented on the web (restaurants, shows, housing information,
and other local services) as well as local information websites such as sidewalk.com. We expect the
recurrence of similar information structures at this level. We hope to understand more detail about overlaying
geospatial information on top of the web. We have created a subset of the web based on geographic cues,
denoted GEO henceforth. The subset contains pages that have geographical references (addresses, telephone
numbers, and ZIP codes) to locations in the western United States. This was constructed through the use of
databases for latitude-longitude information for telephone number area codes, prefixes, and postal zipcodes.
Any page that contained a zipcode or telephone number was included if the reference was within a region
bounded by Denver (Colorado) on the east and Nikolski (Alaska) on the west, Vancouver (British Columbia)
on the north, and Brownsville (Texas) on the south.
To complete our study, we also define some additional graphs derived from the web. Strictly speaking,
these are not TUCs. However, they can be derived from the web in a fairly straightforward manner. As it
turns out, some of our most interesting observations about the web relate to the interplay between structure
at the level of the TUCs and structure at the following levels. We define them now:
Random collections of websites: We look at all the nodes that belong in a random collection of
websites. We do this in order to understand the fine grained structure of the SCC, which is the navigational
backbone of the web. Unlike random subgraphs of the web, random collections of websites exhibit interesting
behaviors. First, the local arcs within a website ensure that there is fairly tight connectivity within
each website. This allows the small number of additional intersite arcs to be far more useful than would be
the case in a random subgraph. We have generated 7 such disjoint subsets. We denote these STREAM1,
..., STREAM7.
The hostgraph contains a single node corresponding to each website (for instance
www.ibm.com is represented by a single node), and has an arc between two nodes, whenever there is
a page in the first website that points to a page in the second. The hostgraph is not a subgraph of the web
graph, but it can be derived from it in a fairly straightforward manner, and more importantly, is relevant to
understanding the structure of linkage at levels higher than that of a web page. In the following discussion,
this graph is denoted by HOSTGRAPH.
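Deriving a hostgraph from a page-level arc list is mechanical; a minimal sketch (the example URLs, and the decision to drop intra-site self-loops, are illustrative assumptions):

```python
from urllib.parse import urlparse

def hostgraph(page_arcs):
    """Collapse a page-level graph to a host-level graph: one node per website,
    an arc (h1, h2) whenever some page on h1 links to some page on h2."""
    arcs = set()
    for src, dst in page_arcs:
        h1, h2 = urlparse(src).netloc, urlparse(dst).netloc
        if h1 != h2:                      # drop intra-site self-loops
            arcs.add((h1, h2))
    return arcs

pages = [
    ("http://www.ibm.com/a", "http://www.ibm.com/b"),
    ("http://www.ibm.com/a", "http://example.org/x"),
    ("http://example.org/y", "http://www.ibm.com/c"),
]
print(hostgraph(pages))
# {('www.ibm.com', 'example.org'), ('example.org', 'www.ibm.com')}
```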
3.2 Parameters studied
We study the following parameters:
(1) Indegree distributions: Recall that the indegree of a node is the number of arcs whose destination is
that node. We consider the distribution of indegree over all nodes in a particular graph, and consider properties
of that distribution. A sequence of papers [7, 3, 28, 13] has provided convincing evidence that indegree
distributions follow the power law, and that the parameter a (called indegree exponent) is reliably around
2.1 (with little variation). We study the indegree distributions for the TUCs and the random collections.
(2) Outdegree distributions: Outdegree distributions seem to not follow the power law at small values.
However, larger values do seem to follow such a distribution, resulting in a "drooping head" of the log-log
plot as observed in earlier work. A good characterization of outdegrees for the web graph has not yet been
offered, especially one that would satisfactorily explain the drooping head.
(3) Connected component sizes: (cf. Section 2) We consider the size of the largest strongly-connected
component, the second-largest, third-largest and so forth as a distribution, for each graph of interest. We
consider similar statistics for the sizes of weakly-connected components. Specifically, we will show that
they obey power laws at all scales, and study the exponents of the power law (called SCC/WCC exponent).
We also report the ratio of the size of the largest strongly-connected component to the size of the largest
weakly-connected component. For the significance of these parameters, we refer the reader to [13], and
note that the location of a web page in the connected component decomposition crucially determines the
reachability of this page (often related to its popularity).
(4) Bipartite cores: Bipartite cores are graph-theoretic signatures of community structure on the web.
A K_{i,j} bipartite core is a set of i + j pages such that each of the first i pages contains a hyperlink to
each of the remaining j pages. We pick representative values of i and j, and focus on K_{5,7}'s, which are
sets of 5 "fan" nodes, each of which points to the same set of 7 "center" nodes. Since computing the exact
number of K_{5,7}'s is a complex subgraph enumeration problem that is intractable using known techniques,
we instead estimate the number of node-disjoint K_{5,7}'s for each graph of interest. To perform this
estimation, we use the techniques of [28, 29]. The number of communities (cores) is an estimate of the
community structure within the TUC. The K_{5,7} factor is the ratio of the size of the TUC to the number
of its nodes that participate in K_{5,7}'s. The higher the factor, the less one can view the TUC as a single
well-defined community.
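For intuition, here is a brute-force version of the core-counting problem on a toy graph. The paper's actual estimation uses the techniques of [28, 29]; this naive enumeration is exponential and viable only on tiny graphs, and the adjacency data is illustrative:

```python
from itertools import combinations

def is_core(fans, centers, adj):
    """True iff every fan links to every center (a K_{|fans|,|centers|} core)."""
    return all(c in adj.get(f, set()) for f in fans for c in centers)

def count_cores(adj, i, j):
    # Brute-force enumeration of K_{i,j} bipartite cores.
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    return sum(
        1
        for fans in combinations(sorted(nodes), i)
        for centers in combinations(sorted(nodes - set(fans)), j)
        if is_core(fans, centers, adj)
    )

adj = {"f1": {"c1", "c2"}, "f2": {"c1", "c2"}, "f3": {"c2"}}
print(count_cores(adj, 2, 2))  # 1: fans {f1, f2} -> centers {c1, c2}
```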
(5) URL compressibility and namespace utilization: The URL namespace can be viewed as a tree,
with the root node being represented by the null string. Each node of the tree corresponds to a
URL prefix (say www.foo.com), with all URLs that share that prefix (e.g., www.foo.com/bar and
www.foo.com/rab) being in the subtree subtended at that node. For each subgraph and each value d
of the depth, we study the following distribution: for each s, the number of depth-d nodes whose subtrees
have s nodes. We will see that these follow the power law. Following conventional source coding theory, it
follows that this skew in the population distributions of the URL namespace can be used to design improved
compression algorithms for URLs. The details of this analysis are beyond the scope of the present paper.
3.3 Experimental infrastructure
We performed these experiments on a small cluster of Linux machines with about 1TB of disk space. We
created a number of data sets from two original sets of pages. The first set consists of about 500K pages
from the IBM intranet. We treat this data as a single entity, mainly for purposes of comparison with the
external web. The second set consists of 60M pages from the web at large, crawled in Oct. 2000. These
pages represent approximately 750GB of content. The crawling algorithm obeyed all politeness rules,
crawling no site more often than once per second. Therefore, while we had collected 750GB of content
(crawling about 1.3M sites) no more than 12K pages had been crawled from any one site.
4 Results and interpretation
Our results are shown in the following tables and figures. Though we have an enormous amount of data,
we try to present as little as possible, while conveying the main thoughts. All the graphs here refer to node-
induced subgraphs and the arcs refer to the arcs in the induced subgraph. Our tables show the parameters
in terms of the graphs while our figures show the consistency of the parameters across different graphs,
indicating a fractal nature.
Table 1 shows all the parameters for STREAM1 through STREAM7. The additional parameter, expansion
factor, refers to the ratio of the number of hyperlinks that point to nodes in the same collection to the total
number of hyperlinks. As we can see, the numbers are quite consistent with earlier work. For instance, the indegree
exponent is -2.1, the SCC exponent is around -2.15, and the WCC exponent is around -2.3. As we can see,
the ratios of IN, OUT, SCC with respect to WCC are also consistent with earlier work.
Table 2 shows the results for the three special graphs: INTRANET, HOSTGRAPH, and GEO. The expansion
factor for the INTRANET is 2.158 while the indegree exponent is very different from that of other
graphs. The WCC exponent for HOSTGRAPH is not meaningful since there is a single component that is
99.4% of the entire graph.
Table 3 shows the results for single keyword queries. The graphs in this category have a few hundred
thousand nodes each. Table 4 shows the results for double keyword graphs. The graphs in this category
have a few tens of thousands of nodes each. A specific interesting case is the large K_{5,7} factor for the
keyword MATH, which probably arises because the set of pages containing the term MATH is not really a
TUC, the term being far too general.
Table 5 shows the averaged results for the 100 sites SUBDOMAIN1, ..., SUBDOMAIN100.
Next, we point out the consistency of the parameters across various graphs. For ease of presentation, we
picked a small set of TUCs and plotted the distribution of indegree, outdegree, SCC, WCC on a log-log scale
(see Figures in Appendix). Figure 2 shows the indegree and outdegree distributions for five of the TUCs.
As we see, the shapes of the plots are strikingly alike. As observed in earlier studies, a drooping initial
segment is observed in the case of outdegree. Figure 3 shows the component distributions for the graphs.
Again, the similarity of shapes is striking. Figure 4 shows the URL tree size distribution. The figures show
remarkable self-similarity that exists both across graphs and, within each graph, across different depths.
4.1 Discussion
We now mention four interesting observations based on the experimental results. Following [13] (see also
Section 2), we say that a slice of the web graph has the bow-tie structure if the SCC, IN, and OUT, each
accounts for a large constant fraction of the nodes in the slice.
(1) Almost all nodes (82%) of the HOSTGRAPH are contained in a giant SCC (Table 2). This is not
surprising, since one would expect most websites to have at least one page that belongs to the SCC.
(2) The (microscopic) local graphs of SUBDOMAIN1, ..., SUBDOMAIN100 look surprisingly like the
web graph (see Table 5). Each has an SCC flanked by IN and OUT sets that, for the most part, have sizes
proportional to their size on the web as a whole, about 40% for the SCC, for instance. Large websites
seemed to have a more clearly defined bow-tie structure than the smaller, less developed ones.
(3) Keyword-based TUCs corresponding to KEYWORD1, ..., KEYWORD10 (see Tables 3 and 4) display
similar phenomena; the differences often owe to the extent to which a community has a well-established
presence on the web. For example, it appears from our results that GOLF is a well-established web
community, while RESTAURANT is a newer, developing community on the web. While the mathematics
community had a clearly defined bow-tie structure, the less developed geometry community lacked one.
(4) Considering STREAM1, ..., STREAM7, we find the surprising fact (Table 1) that the union of a
random collection of TUCs contains a large SCC. This shows that the SCC of the web is very resilient
to node deletion and does not depend on the existence of large taxonomies (such as yahoo.com) for its
connectivity. Indeed, as we remarked earlier, each of these streams contains very few arcs that are not
entirely local to a website. However, the bow-tie structure of each website allows the few intersite arcs to
be far more valuable than one would expect.
4.2 Analysis and summary
The foregoing observation about the SCC of the streams, while surprising, is actually a direct consequence
of the following theorem about random edges in graphs with large strongly connected components.
Theorem 1. Consider the union of n/k graphs on k nodes each, where each graph has a strongly connected
component of size ek. Suppose we add dn arcs whose heads and tails are uniformly distributed among the
nodes. Then, provided that d is at least of the order 1/(ek), with high probability we will have a strongly
connected component of size of the order of en in the n-node union of the n/k graphs.
The proof of Theorem 1 is fairly straightforward. On the web, n is about 1 billion; k, the size of each TUC,
is about 1 million (in reality, there are more than 1K TUCs that overlap, which only makes the connectivity
stronger); and e is about 1/4. Theorem 1 suggests that the addition of a mere few thousand arcs scattered
uniformly throughout the billion nodes will result in very strong connectivity properties of the web graph!
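Theorem 1 is easy to corroborate by simulation. The sketch below uses parameters chosen purely for illustration: 100 clusters of k = 1000 nodes, each with a directed cycle of ek = 250 nodes as its local SCC, plus a small number of uniformly random arcs. The giant SCC then covers roughly an e fraction of all nodes:

```python
import random
from collections import defaultdict

def largest_scc_size(n, arcs):
    # Kosaraju's algorithm (iterative); returns the size of the largest SCC.
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        fwd[u].append(v)
        rev[v].append(u)
    seen, order = [False] * n, []
    for s in range(n):                    # pass 1: record finishing order
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(fwd[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(fwd[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    best, done = 0, [False] * n
    for s in reversed(order):             # pass 2: collect on reversed graph
        if done[s]:
            continue
        size, stack, done[s] = 0, [s], True
        while stack:
            u = stack.pop()
            size += 1
            for v in rev[u]:
                if not done[v]:
                    done[v] = True
                    stack.append(v)
        best = max(best, size)
    return best

rng = random.Random(0)
k, eps = 1000, 0.25
n = 100 * k                               # union of n/k = 100 cluster graphs
core = int(eps * k)                       # each cluster's local SCC: a cycle
arcs = [(c * k + i, c * k + (i + 1) % core)
        for c in range(n // k) for i in range(core)]
d = 40 / (eps * k)                        # d of the order 1/(eps * k)
arcs += [(rng.randrange(n), rng.randrange(n)) for _ in range(int(d * n))]
frac = largest_scc_size(n, arcs) / n      # roughly eps: the cores merge
```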
Indeed, the evolving copying models for the web graph proposed in [27] incorporate a uniformly random
component together with a copying stochastic process. Our observation above, in fact, lends consid-
Nodes Arcs Expansion Indeg. Outdeg. SCC WCC #WCC SCC/WCC IN/WCC OUT/WCC K_{5,7}
6.38 48.1 2.05 -2.06 -2.15 -2.15 -2.24 4.47 0.24 0.20 0.23 49.5
6.84 50.0 2.04 -2.12 -2.30 -2.14 -2.27 4.86 0.23 0.21 0.23 43.5
6.83 48.2 2.06 -2.08 -2.27 -2.11 -2.29 4.90 0.24 0.20 0.23 45.4
6.77 49.3 2.01 -2.10 -2.32 -2.11 -2.25 4.78 0.23 0.20 0.24 45.3
6.23 43.5 2.03 -2.13 -2.19 -2.15 -2.27 4.31 0.22 0.19 0.23 46.9
Table 1: Results for STREAM1 through STREAM7.
Subgraph Nodes Arcs Indeg. SCC WCC #WCC SCC/WCC IN/WCC OUT/WCC K_{5,7}
INTRANET 285.5 1910.7 -2.31 -2.53 -2.83 207.7 0.20 0.48 0.17 56.13
GEO 410.7 1477.9 -2.51 -2.69 -2.27 2.1 0.87 0.03 0.10 139.9
Table 2: Results for graphs: INTRANET, HOSTGRAPH, and GEO.
Subgraph Nodes Arcs Indeg. SCC WCC #WCC SCC/WCC K_{5,7}
GOLF 696.8 8512.8 -2.06 -2.06 -2.18 47.3 0.15 44.48
MATH 831.7 3787.8 -2.85 -2.66 -2.73 50.2 0.28 148.7
MP3 497.3 7233.2 -2.20 -2.39 -2.20 47.6 0.28 57.18
Table 3: Results for single keyword query graphs KEYWORD1 through KEYWORD5.
Subgraph Nodes Arcs #WCC SCC/WCC K_{5,7}
GOLF TIGER WOODS 14.9 62.8 1501 0.20 83.02
MATH GEOMETRY 44.0 86.9 1903 0.27 407.52
RESTAURANT SUSHI 7.4 23.7 167 0.72 132.14
Table 4: Results for double keyword query graphs KEYWORD6 through KEYWORD10.
Nodes Arcs #WCC SCC/WCC K_{5,7}
7.17 108.42 7.08 0.42 22.97
Table 5: Averaged results for SUBDOMAIN1 through SUBDOMAIN100.
erable support to the legitimacy of this model. These observations, together with Theorem 1, imply a very
interesting detailed structure for the SCC of the web graph.
The web comprises several thematically unified clusters (TUCs). The common theme within
a TUC is one of many diverse possibilities. Each TUC has a bow-tie structure that consists
of a large strongly connected component (SCC). The SCCs corresponding to the TUCs are
integrated, via the navigational backbone, into a global SCC for the entire web. The extent to
which each TUC exhibits the bow-tie structure and the extent to which its SCC is integrated
into the web as a whole indicate how well-established the corresponding community is.
An illustration of this characterization of the web is shown in Figure 1.
Figure 1: TUCs connected by the navigational backbone inside the SCC of the web graph.
5 Conclusions
In this paper, we have examined the structure of the web in greater detail than earlier efforts. The primary
contribution is two-fold. First, the web exhibits self-similarity in several senses, at several scales. The
self-similarity is pervasive, in that it holds for a number of parameters. It is also robust, in that it holds
irrespective of which particular method is used to carve out small subgraphs of the web. Second, these
smaller thematically unified subgraphs are organized into the web graph in an interesting manner. In par-
ticular, the local strongly connected components are integrated into the global SCC. The connectivity of the
global SCC is very resilient to random and large scale deletion of websites. This indicates a great degree of
fault-tolerance on the web, in that there are several alternate paths between nodes in the SCC.
While our understanding of the web as a graph is greater now than ever before, there are many lacunae
in our current understanding of the graph-theoretic structure of the web. One of the principal holes deals
with developing stochastic models for the evolution of the web graph (extending [27]) that are rich enough
to explain the fractal behavior of the web in such amazingly diverse ways and contexts.
Acknowledgments
Thanks to Raymie Stata and Janet Wiener (Compaq SRC) for some of the code. The second author thanks
Xin Guo for her encouragement of this project.
References
A random graph model for massive graphs.
The nature of markets on the world wide web.
Scaling behavior on the world wide web.
Towards compressing web graphs.
PageRank Computation and the Structure of the Web: Experiments and Algorithms.
The Lorel Query Language for Semistructured Data.
Emergence of scaling in random networks.
Changes in web client access patterns: Characteristics and caching implications.
Improved algorithms for topic distillation in hyperlinked environments.
Random Graphs.
The anatomy of a large scale hypertextual web search engine.
Graph Structure in the web.
Identifying aggregates in hypertext structures.
Searching and visualizing the Web through connectivity.
Automatic resource compilation by analyzing hyperlink structure and associated text.
Experiments in topic distillation.
Surfing the web backwards.
Indexing by latent semantic analysis.
On power law relationships of the internet topology.
A caching relay for the world wide web.
Graph Theory.
A Simple method for inferring the tail behavior of distributions.
Strong regularities in world wide web surfing.
Authoritative sources in a hyperlinked environment.
Random graph models for the web graph.
Trawling the web for cyber communities.
Extracting large scale knowledge bases from the web.
Human effort in semi-automated taxonomy construction
http://linkage.
Surfing as a real option.
Oligonucleotide frequencies in DNA follow a Yule distribution.
Querying the world wide web.
Finding regular simple paths in graph databases.
Latent Semantic Indexing: A Probabilistic Analysis.
Graphical Evolution.
Cours d'economie politique.
Silk from a sow's ear: Extracting usable structures from the web.
On a class of skew distribution functions.
Mining Structural Information on the web.
In Ann.
Statistical Study of Literary Vocabulary.
Human Behavior and the Principle of Least Effort.
--TR
aggregates in hypertext structures
A caching relay for the World Wide Web
Finding Regular Simple Paths in Graph Databases
Silk from a sow's ear
Life, death, and lawfulness on the electronic frontier
ParaSite
WebQuery
Applications of a Web query language
Surfing as a real option
Improved algorithms for topic distillation in a hyperlinked environment
Automatic resource compilation by analyzing hyperlink structure and associated text
The anatomy of a large-scale hypertextual Web search engine
Trawling the Web for emerging cyber-communities
Focused crawling
Surfing the Web backwards
On power-law relationships of the Internet topology
Authoritative sources in a hyperlinked environment
A random graph model for massive graphs
Graph structure in the Web
Latent semantic indexing
An adaptive model for optimizing performance of an incremental web crawler
Changes in Web client access patterns
Extracting Large-Scale Knowledge Bases from the Web
Stochastic models for the Web graph
Towards Compressing Web Graphs
--CTR
Ricardo Baeza-Yates , Carlos Castillo, Relationship between web links and trade, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Christoph Lindemann , Lars Littig, Coarse-grained classification of web sites by their structural properties, Proceedings of the eighth ACM international workshop on Web information and data management, November 10-10, 2006, Arlington, Virginia, USA
Trevor Fenner , Mark Levene , George Loizou, A stochastic model for the evolution of the Web allowing link deletion, ACM Transactions on Internet Technology (TOIT), v.6 n.2, p.117-130, May 2006
Josiane Xavier Parreira , Debora Donato , Carlos Castillo , Gerhard Weikum, Computing trusted authority scores in peer-to-peer web search networks, Proceedings of the 3rd international workshop on Adversarial information retrieval on the web, May 08-08, 2007, Banff, Alberta, Canada
Einat Amitay , David Carmel , Adam Darlow , Ronny Lempel , Aya Soffer, The connectivity sonar: detecting site functionality by structural patterns, Proceedings of the fourteenth ACM conference on Hypertext and hypermedia, August 26-30, 2003, Nottingham, UK
Jui-Pin Yang, Self-Configured Fair Queueing, Simulation, v.83 n.2, p.189-198, February 2007
Ricardo Baeza-Yates , Carlos Castillo , Efthimis N. Efthimiadis, Characterization of national Web domains, ACM Transactions on Internet Technology (TOIT), v.7 n.2, p.9-es, May 2007
Ricardo Baeza-Yates , Carlos Castillo , Mauricio Marin , Andrea Rodriguez, Crawling a country: better strategies than breadth-first for web page ordering, Special interest tracks and posters of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan | World-Wide-Web;graph structure;fractal;online information services;web-based services;self-similarity |
581484 | Contracts for higher-order functions. | Assertions play an important role in the construction of robust software. Their use in programming languages dates back to the 1970s. Eiffel, an object-oriented programming language, wholeheartedly adopted assertions and developed the "Design by Contract" philosophy. Indeed, the entire object-oriented community recognizes the value of assertion-based contracts on methods.In contrast, languages with higher-order functions do not support assertion-based contracts. Because predicates on functions are, in general, undecidable, specifying such predicates appears to be meaningless. Instead, the functional languages community developed type systems that statically approximate interesting predicates.In this paper, we show how to support higher-order function contracts in a theoretically well-founded and practically viable manner. Specifically, we introduce con, a typed lambda calculus with assertions for higher-order functions. The calculus models the assertion monitoring system that we employ in DrScheme. We establish basic properties of the model (type soundness, etc.) and illustrate the usefulness of contract checking with examples from DrScheme's code base.We believe that the development of an assertion system for higher-order functions serves two purposes. On one hand, the system has strong practical potential because existing type systems simply cannot express many assertions that programmers would like to state. On the other hand, an inspection of a large base of invariants may provide inspiration for the direction of practical future type system research. | In this paper, we show how to support higher-order function contracts
in a theoretically well-founded and practically viable manner.
Specifically, we introduce λCON, a typed lambda calculus with
assertions for higher-order functions. The calculus models the assertion
monitoring system that we employ in DrScheme. We establish
basic properties of the model (type soundness, etc.) and
illustrate the usefulness of contract checking with examples from
DrScheme's code base.
We believe that the development of an assertion system for higher-order
functions serves two purposes. On one hand, the system has
strong practical potential because existing type systems simply cannot
express many assertions that programmers would like to state.
On the other hand, an inspection of a large base of invariants may
provide inspiration for the direction of practical future type system
research.
Categories & Subject Descriptors: D.3.3, D.2.1; General Terms: Design,
Languages, Reliability; Keywords: Contracts, Higher-order Functions,
Behavioral Specifications, Predicate Typing, Software Reliability

Work partly conducted at Rice University, Houston TX. Address as of
9/2002: University of Chicago; 1100 E 58th Street; Chicago, IL 60637
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. To copy otherwise, to republish, to post on servers or to redistribute
to lists, requires prior specific permission and/or a fee.
ICFP'02, October 4-6, 2002, Pittsburgh, Pennsylvania, USA.
Dynamically enforced pre- and post-condition contracts have been
widely used in procedural and object-oriented languages [11, 14,
17, 20, 21, 22, 25, 31]. As Rosenblum [27] has shown, for example,
these contracts have great practical value in improving the robustness
of systems in procedural languages. Eiffel [22] even developed
an entire philosophy of system design based on contracts (Design
by Contract). Although Java [12] does not support contracts, it is
one of the most requested extensions.¹
With one exception, higher-order languages have mostly ignored
assertion-style contracts. The exception is Bigloo Scheme [28],
where programmers can write down first-order, type-like constraints
on procedures. These constraints are used to generate more
efficient code when the compiler can prove they are correct and are
turned into runtime checks when the compiler cannot prove them
correct.
First-order procedural contracts have a simple interpretation. Consider
this contract, written in an ML-like syntax:
val rec
It states that the argument to f must be an int greater than 9 and
that f produces an int between 0 and 99. To enforce this contract, a
contract compiler inserts code to check that x is in the proper range
when f is called and that f's result is in the proper range when f
returns. If x is not in the proper range, f's caller is blamed for
a contractual violation. Symmetrically, if f's result is not in the
proper range, the blame falls on f itself. In this world, detecting
contractual violations and assigning blame merely means checking
appropriate predicates at well-defined points in the program's evaluation.
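To make the two checkpoints concrete, here is a small illustrative sketch in Python (our own rendering, not the paper's ML-like notation; the names `contract` and `ContractViolation` are invented): the wrapper checks the argument when f is called and the result when f returns, and blames the appropriate party.

```python
# Illustrative sketch only: first-order contract checking for
# f : {x | x > 9} -> {y | 0 <= y <= 99}, with blame assignment.

class ContractViolation(Exception):
    def __init__(self, blamed):
        super().__init__("contract violated; blame: " + blamed)
        self.blamed = blamed

def contract(pre, post, f, name="f"):
    def wrapped(x):
        if not pre(x):
            # bad argument: the caller supplied the value, so blame it
            raise ContractViolation("caller of " + name)
        y = f(x)
        if not post(y):
            # bad result: f produced the value, so blame f itself
            raise ContractViolation(name)
        return y
    return wrapped

f = contract(lambda x: x > 9, lambda y: 0 <= y <= 99,
             lambda x: x - 10, name="f")

print(f(15))   # 5: both checks pass
```

Calling `f(3)` would blame the caller of f; a result outside [0, 99] would blame f itself.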
This simple mechanism for checking contracts does not generalize
to languages with higher-order functions. Consider this contract:
val rec
The contract's domain states that g accepts int int functions and
must apply them to ints larger than 9. In turn, these functions must
produce ints between 0 and 99. The contract's range obliges g to
produce ints between 0 and 99.

¹ http://developer.java.sun.com/developer/bugParade/top25rfes.html
Although g may be given f, whose contract matches g's domain
contract, g should also accept functions with stricter contracts:
val rec
g(h),
functions without explicit contracts:
g(λx. 50),
functions that process external data:
val rec read_num = λn. … read the nth entry from a file …
g(read_num),
and functions whose behavior depends on the context:
val rec dual_purpose = λx.
  if … predicate on some global state …
  then 50
  else 5000,
as long as the context is properly established when g applies its
argument.
Clearly, there is no algorithm to statically determine whether proc
matches its contract, and it is not even possible to dynamically
check the contract when g is applied. Even worse, it is not enough
to monitor applications of proc that occur in g's body, because g
may pass proc to another function or store it in a global variable.
Additionally, higher-order functions complicate blame assignment.
With first-order functions, blame assignment is directly linked to
pre- and post-condition violations. A pre-condition violation is the
fault of the caller and a post-condition violation is the fault of the
callee. In a higher-order world, however, promises and obligations
are tangled in a more complex manner, mostly due to function-valued
arguments.
In this paper, we present a contract system for a higher-order world.
The key observation is that a contract checker cannot ensure that g's
argument meets its contract when g is called. Instead, it must wait
until proc is applied. At that point, it can ensure that proc's argument
is greater than 9. Similarly, when proc returns, it can ensure
that proc's result is in the range from 0 to 99. Enforcing contracts in
this manner ensures that the contract violation is signaled as soon as
the contract checker can establish that the contract has indeed been
violated. The contract checker provides a first-order value as a witness
to the contract violation. Additionally, the witness enables the
contract checker to properly assign blame for the contract violation
to the guilty party.
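The delayed-checking idea can be sketched as follows (an illustration in Python, not the paper's system; `wrap` and the blame labels are our inventions): g's functional argument is wrapped on the way in, and the domain check fires only when the wrapped function is actually applied.

```python
# Sketch of delayed higher-order contract checking: wrapping records
# who to blame for each direction of the flow of values.

class ContractViolation(Exception):
    def __init__(self, blamed):
        super().__init__("blame: " + blamed)
        self.blamed = blamed

def wrap(fn, pre, post, result_party, argument_party):
    # result_party is blamed for fn's results,
    # argument_party for the values fed to fn
    def monitored(x):
        if not pre(x):
            raise ContractViolation(argument_party)
        y = fn(x)
        if not post(y):
            raise ContractViolation(result_party)
        return y
    return monitored

def g(proc):
    return proc(0)            # g violates proc's domain contract

proc = wrap(lambda x: 50,
            pre=lambda x: x > 9, post=lambda y: 0 <= y <= 99,
            result_party="g's caller", argument_party="g")
# g(proc) raises ContractViolation blaming "g" -- the witness value 0
# is discovered only at the application inside g's body.
```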
The next section introduces the subtleties of assigning blame for
higher-order contract violations through a series of examples in
Scheme [8, 16]. Section 3 presents λCON, a typed, higher-order
functional programming language with contracts. Section 4 speci-
fies the meaning of λCON, and section 5 provides an implementation
of it. Section 6 contains a type soundness result and proves that the
implementation in section 5 matches the calculus. Section 7 shows
how to extend the calculus with function contracts whose range depends
on the input to the function, and section 8 discusses the interactions
between contracts and tail recursion.
2 Example Contracts
We begin our presentation with a series of Scheme examples that
explain how contracts are written, why they are useful, and how to
check them. The first few examples illustrate the syntax and the basic
principles of contract checking. Sections 2.2 and 2.3 discuss the
problems of contract checking in a higher-order world. Section 2.4
explains why it is important for contracts to be first-class values.
Section 2.5 demonstrates how contracts can help with callbacks,
the most common use of higher-order functions in a stateful world.
To illustrate these points, each section also includes examples from
the DrScheme [5] code base.
2.1 Contracts: A First Look
The first example is the sqrt function:
;; sqrt : number → number
(define/contract sqrt
  ((λ (x) (>= x 0)) -> (λ (x) (>= x 0)))
  (λ (x) …))
Following the tradition of How to Design Programs [3], the sqrt
function is preceded by an ML-like [23] type specification (in a
comment). Like Scheme's define, a define/contract expression
consists of a variable and an expression for its initial value, a function
in this case. In addition, the second subterm of define/contract
specifies a contract for the variable.
Contracts are either simple predicates or function contracts. Function
contracts, in turn, consist of a pair of contracts (each either a
predicate or another function contract), one for the domain of the
function and one for the range of the function:
The domain portion of sqrt's contract requires that it always receives
a non-negative number. Similarly, the range portion of the
contract guarantees that the result is non-negative. The example
also illustrates that, in general, contracts check only certain aspects
of a function's behavior, rather than the complete semantics of the
function.
The contract position of a definition can be an arbitrary expression
that evaluates to a contract. This allows us to clarify the contract
on sqrt by defining a bigger-than-zero? predicate and using it in the
definition of sqrt's contract:
(define bigger-than-zero? (λ (x) (>= x 0)))

;; sqrt : number → number
(define/contract sqrt
  (bigger-than-zero? -> bigger-than-zero?)
  (λ (x) …))
The contract on sqrt can be strengthened by relating sqrt's result to
its argument. The dependent function contract constructor allows
the programmer to specify range contracts that depend on the value
of the function's argument. This constructor is similar to -, except
that the range position of the contract is not simply a contract.
Instead, it is a function that accepts the argument to the original
function and returns a contract:
->d
(module preferences scheme/contract
  (provide add-panel open-dialog)
  (define/contract add-panel
    ((any ->
      (λ (new-child)
        (let ([children (send (send new-child get-parent)
                              get-children)])
          (eq? (car children) new-child))))
     -> any)
    (λ (make-panel)
      (set! make-panels (cons make-panel make-panels))))
  (define make-panels null)
  (define open-dialog
    (λ ()
      (let ([d (instantiate dialog% () …)]
            [sp (instantiate single-panel% () (parent d))]
            [children (map (call-make-panel sp) make-panels)])
        …)))
  (define call-make-panel
    (λ (sp)
      (λ (make-panel)
        (make-panel sp)))))

Figure 1. Contract Specified with add-panel
Here is an example of a dependent contract for sqrt:
;; sqrt : number → number
(define/contract sqrt
  (bigger-than-zero? ->d
   (λ (x)
     (λ (res)
       (and (bigger-than-zero? res)
            (<= (abs (- x (* res res))) 0.01)))))
  (λ (x) …))
This contract, in addition to stating that the result of sqrt is positive,
also guarantees that the square of the result is within 0.01 of the
argument.
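The same dependent contract can be sketched in Python (our illustration, not DrScheme's API; `dependent_contract` and `ContractViolation` are invented names): the range check is manufactured from the argument x, so it can relate the result back to x.

```python
# Sketch of a dependent contract: the range predicate is built from the
# argument, here requiring res*res to be within 0.01 of x.

class ContractViolation(Exception):
    pass

def dependent_contract(dom, rng_maker, f, name="sqrt"):
    def wrapped(x):
        if not dom(x):
            raise ContractViolation("blame caller of " + name)
        range_ok = rng_maker(x)     # range contract depends on x
        res = f(x)
        if not range_ok(res):
            raise ContractViolation("blame " + name)
        return res
    return wrapped

bigger_than_zero = lambda v: v >= 0

sqrt = dependent_contract(
    bigger_than_zero,
    lambda x: lambda res: bigger_than_zero(res)
                          and abs(x - res * res) <= 0.01,
    lambda x: x ** 0.5)

print(sqrt(4.0))   # 2.0, and 2.0 * 2.0 is within 0.01 of 4.0
```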
2.2 Enforcement at First-Order Types
The key to checking higher-order assertion contracts is to postpone
contract enforcement until some function receives a first-order
value as an argument or produces a first-order value as a result.
This section demonstrates why these delays are necessary and discusses
some ramifications of delaying the contracts. Consider this
toy module:
(module delayed scheme/contract
  (provide save use)
  (define saved (λ (x) 50))
  (define/contract save
    ((bigger-than-zero? -> bigger-than-zero?) -> any)
    (λ (f) (set! saved f)))
  ;; use : integer → integer
  (define/contract use
(module preferences scheme
  (provide add-panel open-dialog)
  (define add-panel
    (λ (make-panel)
      (set! make-panels (cons make-panel make-panels))))
  (define make-panels null)
  (define open-dialog
    (λ ()
      (let ([d (instantiate dialog% () …)]
            [sp (instantiate single-panel% () (parent d))]
            [children (map (call-make-panel sp) make-panels)])
        …)))
  (define call-make-panel
    (λ (sp)
      (λ (make-panel)
        (let* ([new-child (make-panel sp)]
               [children (send (send new-child get-parent)
                               get-children)])
          (unless (eq? (car children) new-child)
            (contract-error make-panel))
          new-child)))))

Figure 2. Contract Manually Distributed
    (bigger-than-zero? -> bigger-than-zero?)
    (λ (n) (saved n))))
The module [8, 9] declaration consists of a name for the module,
the language in which the module is written, a provide declaration
and a series of definitions. This module provides save and use. The
variable saved holds a function that should map positive numbers
to positive numbers. Since it is not exported from the module, it
has no contract. The getter (use) and setter (save) are the two visible
accessors of saved. The function save stores a new function
and use invokes the saved function. Naturally, it is impossible for
save to detect if the value of saved is always applied to positive
numbers since it cannot determine every argument to use. Worse,
save cannot guarantee that each time saved's value is applied that
it will return a positive result. Thus, the contract checker delays the
enforcement of save's contract until save's argument is actually applied
and returns. Accordingly, violations of save's contract might
not be detected until use is called.
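A Python rendering of the scenario (our sketch, with invented names) shows the delay: save wraps the stored function, and a violation by the function passed to save surfaces only when use finally applies it.

```python
# Sketch of the saved/use scenario: save's contract on its functional
# argument can only be checked when the saved function is applied.

class ContractViolation(Exception):
    pass

_saved = lambda x: 50

def save(f):
    global _saved
    def monitored(x):
        if not x > 0:
            raise ContractViolation("blame caller of use")
        y = f(x)
        if not y > 0:
            raise ContractViolation("blame caller of save")
        return y
    _saved = monitored          # checks fire later, at application time

def use(n):
    return _saved(n)

save(lambda x: -1)   # accepted without complaint...
# use(5)             # ...only here would the bad function be found out
```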
In general, a higher-order contract checker must be able to track
contracts during evaluation from the point where the contract is established
(the call site for save) to the discovery of the contract
violation (the return site for use), potentially much later in the eval-
uation. To assign blame, the contract checker must also be able to
report both where the violation was discovered and where the contract
was established.
The toy example is clearly contrived. The underlying phe-
nomenon, however, is common. For a practical example, consider
DrScheme's preferences panel. DrScheme's plugins can add additional
panels to the preferences dialog. To this end, plugins register
callbacks that add new panels containing GUI controls (buttons,
list-boxes, pop-up menus, etc.) to the preferences dialog.
;; make/c : (a a → bool) → (a → (a → bool))
(define (make/c op) (λ (x) (λ (y) (op y x))))

;; <=/c, >=/c : number → (number → bool)
(define <=/c (make/c <=))
(define >=/c (make/c >=))

;; eq/c, equal/c : any → (any → bool)
(define eq/c (make/c eq?))
(define equal/c (make/c equal?))

;; any : any → bool
(define any (λ (x) #t))

Figure 3. Abstraction for Predicate Contracts
Every GUI control needs two values: a parent, and a callback that is
invoked when the control is manipulated. Some GUI controls need
additional control-specific values, such as a label or a list of choices.
In order to add new preference panels, extensions define a function
that accepts a parent panel, creates a sub-panel of the parent panel,
fills the sub-panel with controls that configure the extension, and
returns the sub-panel. These functions are then registered by calling
add-panel. Each time the user opens DrScheme's preferences
dialog, DrScheme constructs the preferences dialog from the registered
functions.
Figure 1 shows the definition of add-panel and its contract (boxed
in the figure). The contract requires that add-panel's arguments are
functions that accept a single argument. In addition, the contract
guarantees that the result of each call to add-panel's argument is a
panel and is the first child in its parent panel. Together, these checks
ensure that the order of the panels in the preferences dialog matches
the order of the calls to add-panel.
The body of add-panel saves the panel making function in a list.
Later, when the user opens the preferences dialog, the open-dialog
function is called, which calls the make-panel functions, and the
contracts are checked. The dialog% and single-panel% classes are
part of the primitive GUI library and instantiate creates instances
of them.
In comparison, figure 2 contains the checking code, written as if
there were no higher-order contract checking. The boxed portion of
the figure, excluding the inner box, is the contract checking code.
The code that enforces the contracts is co-mingled with the code
that implements the preferences dialog. Co-mingling these two decreases
the readability of both the contract and call-make-panel,
since client programmers now need to determine which portion of
the code concerns the contract checking and which performs the
function's work. In addition, the author of the preferences module
must find every call-site for each higher-order function. Finding
these sites in general is impossible, and in practice the call sites are
often in collaborators' code, whose source might not be available.
2.3 Blame and Contravariance
Assigning blame for contractual violations in the world of first-class
functions is complex. The boundaries between cooperating components
are more obscure than in the world with only first-order func-
tions. In addition to invoking a component's exported functions,
one component may invoke a function passed to it from another
component. Applying such first-class functions corresponds to a
flow of values between components. Accordingly, the blame for a
corresponding contract violation must lie with the supplier of the
bad value, no matter if the bad value was passed by directly applying
an exported function or by applying a first-class function.
As with first-order function contract checking, two parties are involved
for each contract: the function and its caller. Unlike first-
order function contract checking, a more general rule applies for
blame assignment. The rule is based on the number of times that
each base contract appears to the left of an arrow in the higher-order
contract. If the base contract appears an even number of times, the
function itself is responsible for establishing the contract. If it appears
an odd number of times, the function's caller is responsible.
This even-odd rule captures which party supplies the values and
corresponds to the standard notions of covariance (even positions)
and contravariance (odd positions).
Consider the abstract example from the introduction again, but with
a little more detail. Imagine that the body of g is a call to f with 0:
(define/contract g
  …
  (λ (f) (f 0)))
At the point when g invokes f, the greater-than-nine? portion of
g's contract fails. According to the even-odd rule, this must be g's
fault. In fact, g does supply the bad value, so g must be blamed.
Imagine a variation of the above example where g applies f to 10
instead of 0. Further, imagine that f returns -10. This is a violation
of the result portion of g's argument's contract and, following the
even-odd rule, the fault lies with g's caller. Accordingly, the contract
enforcement mechanism must track the even and odd positions
of a contract to determine the guilty party for contract violations.
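The even-odd rule falls out mechanically from a monitor that swaps its two blame labels whenever it descends into the domain side of a function contract. The following Python sketch (ours; `guard` and the tuple encoding of arrow contracts are invented) reproduces both cases above.

```python
# Sketch of contract monitoring with blame swapping: descending into
# the domain of an arrow contract exchanges the two parties, which is
# exactly contravariance / the even-odd rule.

class ContractViolation(Exception):
    def __init__(self, blamed):
        super().__init__("blame: " + blamed)
        self.blamed = blamed

def guard(ctc, val, pos, neg):
    if not isinstance(ctc, tuple):          # flat contract: a predicate
        if not ctc(val):
            raise ContractViolation(pos)    # producer of val is at fault
        return val
    dom, rng = ctc
    def wrapped(x):
        # swap pos and neg for the domain (contravariant position)
        return guard(rng, val(guard(dom, x, neg, pos)), pos, neg)
    return wrapped

gt9  = lambda x: x > 9
in99 = lambda y: 0 <= y <= 99

g_ctc = ((gt9, in99), in99)   # ((int>9 -> int in [0,99]) -> int in [0,99])

def g(f):
    return f(0)               # g applies its argument to 0

g_mon = guard(g_ctc, g, pos="g", neg="main")
# g_mon(lambda x: 50) raises ContractViolation blaming "g": gt9 sits to
# the left of two arrows (even), so g itself owes that obligation.
```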
This problem of assigning blame naturally appears in contracts
from DrScheme's implementation. For example, DrScheme creates
a separate thread to evaluate user's programs. Typically, extensions
to DrScheme need to initialize thread-specific hidden state before
the user's program is run. The accessors and mutators for this state
implicitly accept the current thread as a parameter, so the code that
initializes the state must run on the user's thread.²
To enable DrScheme's extensions to run code on the user's thread,
DrScheme provides the primitive run-on-user-thread. It accepts a
thunk, queues the thunk to be run on the user's thread and returns.
It has a contract that promises that when the argument thunk is ap-
plied, the current thread is the user's thread:
(define/contract run-on-user-thread
  ((… -> any) -> any)
  (λ (thunk) …))
This contract is a higher-order function contract. It only has one
interesting aspect: the pre-condition of the function passed to run-
on-user-thread. This is a covariant (even) position of the function
contract which, according to the rule for blame assignment, means
that run-on-user-thread is responsible for establishing this contract.

² This state is not available to the user's program because the accessors and
mutators are not lexically available to the user's program.
(module preferences scheme/contract
  (provide add-panel …)
  ;; preferences:add-panel : (panel → panel) → void
  (define/contract add-panel
    ((any ->d
      (λ (sp)
        (let ([pre-children (copy-spine (send sp get-children))])
          (λ (new-child)
            (let ([post-children (send sp get-children)])
              (and (= (length post-children)
                      (add1 (length pre-children)))
                   (andmap eq?
                           (cdr post-children)
                           pre-children)
                   (eq? (car post-children) new-child)))))))
     -> any)
    (λ (make-panel)
      (set! make-panels (cons make-panel make-panels))))
  (define (copy-spine l) (map (λ (x) x) l)))

Figure 4. Preferences Panel Contract, Protecting the Panel
Therefore, run-on-user-thread contractually promises clients of this
function that the thunks they supply are applied on the user's thread
and that these thunks can initialize the user's thread's state.
2.4 First-class Contracts
Experience with DrScheme has shown that certain patterns of contracts
recur frequently. To abstract over these patterns, contracts
must be values that can be passed to and from functions. For exam-
ple, curried comparison operators are common (see figure 3).
More interestingly, patterns of higher-order function contracts are
also common. For example, DrScheme's code manipulates mixins
[7, 10] as values. These mixins are functions that accept a class
and return a class derived from the argument. Since extensions of
DrScheme supply mixins to DrScheme, it is important to verify that
the mixin's result truly is derived from its input. Since this contract
is so common, it is defined in DrScheme's contract library:
;; mixin-contract : contract
(define mixin-contract
  (class? ->d
   (λ (arg) (λ (res) (subclass? res arg)))))
This contract is a dependent contract. It states that the input to the
function is a class and its result is a subclass of the input.
Further, it is common for the contracts on these mixins to guarantee
that the base class passed to the mixin is not just any class,
but a class that implements a particular interface. To support these
contracts, DrScheme's contract library provides this function that
constructs a contract:
;; mixin-contract/intf : interface → ((class → class) contract)
(define mixin-contract/intf
  (λ (interface)
    (… ->d
     (λ (arg) (λ (res) (subclass? res arg))))))
The mixin-contract/intf function accepts an interface as an argument
and produces a contract similar to mixin-contract, except that
the contract guarantees that input to the function is a class that implements
the given interface.
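A rough Python analogue of mixin-contract (our sketch; `checked_mixin` and `ContractViolation` are invented names, and Python classes stand in for Scheme's class values) checks the dependent range condition with `issubclass`.

```python
# Sketch of a mixin contract: the class a mixin returns must be
# derived from the class it was given (a dependent range check).

class ContractViolation(Exception):
    pass

def checked_mixin(mixin):
    def monitored(base):
        if not isinstance(base, type):
            raise ContractViolation("blame caller: argument is not a class")
        derived = mixin(base)
        # range depends on the argument: result must subclass `base`
        if not (isinstance(derived, type) and issubclass(derived, base)):
            raise ContractViolation(
                "blame mixin: result does not derive from its input")
        return derived
    return monitored

@checked_mixin
def add_greeting(base):
    class Greeter(base):
        def greet(self):
            return "hello"
    return Greeter

class Widget:
    pass

GreetingWidget = add_greeting(Widget)
print(issubclass(GreetingWidget, Widget))   # True
```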
Although the mixin contract is, in principle, checkable by a type
system, no such type system is currently implemented. OCaml [18,
19, 26] and OML [26] are rich enough to express mixins, but type-checking
fails for any interesting use of mixins [7], since the type
system does not allow subsumption for imported classes. This contract
is an example where the expressiveness of contracts leads to
an opportunity to improve existing type systems. Hopefully this
example will encourage type system designers to build richer type
systems that support practical mixins.
2.5 Callbacks and Stateful Contracts
Callbacks are notorious for causing problems in preserving invari-
ants. Szyperski [32] shows why callbacks are important and how
they cause problems. In short, code that invokes the callback must
guarantee that certain state is not modified during the dynamic extent
of the callback. Typically, this invariant is maintained by examining
the state before the callback is invoked and comparing it to
the state after the callback returns.³
Consider this simple library for registering and invoking callbacks.
(module callbacks scheme/contract
  (provide register-callback invoke-callback)
  (define/contract register-callback
    ((any ->d
      (λ (arg)
        (let ([old-state … save the relevant state …])
          (λ (res)
            … compare the new state to the old state …))))
     -> any)
    (λ (c)
      (set! callback c)))
  (define invoke-callback
    (λ () (callback)))
  (define callback (λ () (void))))
The function register-callback accepts a callback function and registers
it as the current callback. The invoke-callback function calls
the callback. The contract on register-callback makes use of the
dependent contract constructor in a new way. The contract checker
applies the dependent contract to the original function's arguments
before the function itself is applied. Therefore, the range portion
of a dependent contract can determine key aspects of the state and
save them in the closure of the resulting predicate. When that predicate
is called with the result of the function, it can compare the
current version of the state with the original version of the state,
thus ensuring that the callback is well-behaved.
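In Python, the technique might be sketched like this (our illustration; the prefix invariant over a list is a stand-in for the panel-children check, and all names are invented): the monitor snapshots the state before the callback runs and compares afterwards.

```python
# Sketch of a stateful dependent contract: capture the relevant state
# before the callback runs, compare it when the callback returns.

class ContractViolation(Exception):
    pass

children = ["a", "b"]          # stand-in for the panel's child list
_callback = lambda: None

def register_callback(cb):
    global _callback
    def monitored():
        old = list(children)               # save the relevant state
        result = cb()
        # compare: existing children must survive, in order, as a prefix
        if children[:len(old)] != old:
            raise ContractViolation(
                "blame callback: it disturbed existing state")
        return result
    _callback = monitored

def invoke_callback():
    return _callback()

register_callback(lambda: children.append("c"))
invoke_callback()              # fine: "a" and "b" are still intact
register_callback(lambda: children.clear())
# invoke_callback()            # would be flagged as a violation
```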
This technique is useful in the contract for DrScheme's preferences
panel, whose contract we have already considered. Consider the
revision of add-panel's contract in figure 4. The revision does more

³ In practice, lock variables are often used for this; the technique presented
here adapts to a lock-variable based solution to the callback problem.
core syntax
  e ::= x | λx.e | e e | fix x.e
      | n | e aop e | e rop e
      | e::e | [] | hd(e) | tl(e) | mt(e)
      | if e then e else e | true | false | str
      | e -> e | contract(e)
      | flatp(e) | pred(e) | dom(e) | rng(e) | blame(e)
  str ::= … | "aa" | "ab" | …

types
  t ::= int | bool | string | t list | t contract | t -> t

evaluation contexts
  E ::= [] | E e | V E | if E then e else e
      | dom(E) | rng(E) | pred(E) | flatp(E) | blame(E) | …

values
  V ::= …

Figure 5. λCON Syntax, Types, Evaluation Contexts, and Values
than just ensure that the new child is the first child. In addition, it
guarantees that the original children of the preferences panel remain
in the panel in the same order, thus preventing an extension from
removing the other preference panels.
3 Contract Calculus
Although contracts can guarantee stronger properties than types
about program execution, their guarantees hold only for particular
program executions. In contrast, the type checker's weaker guarantees
hold for all program executions. As such, contracts and types
play synergistic roles in program development and maintenance so
practical programming languages must support both. In that spirit,
this calculus contains both types and contracts to show how they
interact.
Figure 5 contains the syntax for the contract calculus. Each program
consists of a series of definitions, followed by a single expres-
sion. Each definition consists of a variable, a contract expression
and an expression for initializing the variable. All of the variables
bound by val rec in a single program must be distinct. All of the
n1 aop n2 → n           where n = n1 aop n2
n1 rop n2 → true        if n1 rop n2 holds; false otherwise
(λx.e) V → e[x / V]
fix x.e → e[x / fix x.e]
x → e                   where P contains val rec x : … = e
if true then e1 else e2 → e1
if false then e1 else e2 → e2
flatp(contract(V)) → true
flatp(V1 -> V2) → false
pred(contract(V)) → V
dom(V1 -> V2) → V1
rng(V1 -> V2) → V2

Figure 6. Reduction Semantics of λCON
definitions are mutually recursive, except that the contract positions
may only refer to defined variables that appear earlier in a program.
Expressions (e) include abstractions, applications, variables, fix
points, numbers and numeric primitives, lists and list primitives,
if expressions, booleans, and strings. The final expression forms
specify contracts. The contract(e) and e - e expressions construct
flat and function contracts, respectively. A flatp expression
returns true if its argument is a flat contract and false if its argument
is a function contract. The pred, dom, and rng expressions select
the fields of a contract. The blame primitive is used to assign blame
to a definition that violates its contract. It aborts the program. This
first model omits dependent contracts; we return to them later.
The types for λCON are those of core ML (without polymorphism),
plus types for contract expressions. The typing rules for contracts
are given in figure 7. The first typing rule is for complete programs.
A program's type is a record of types, written:
⟨t1, …, tn, t⟩
where the first types are the types of the definitions and the last type
is the type of the final expression.
Contracts on flat values are tagged by the contract value constructor
and must be predicates that operate on the appropriate type.
Contracts for functions consist of two contracts, one for the domain
Figure 7. λCON Type Rules
and one for the range of the function. The typing rule for defini-
tions ensures that the type of the contract matches the type of the
definition. The rest of the typing rules are standard.
Consider this definition of the sqrt function:
val rec sqrt : contract(λx. x > 0) -> contract(λx. x > 0)
             = λn. …
The body of the sqrt function has been elided. The contract on sqrt
must be an -> contract because the type of sqrt is a function type.
Further, the domain and range portions of the contract are predicates
on integers because sqrt consumes and produces integers.⁴
More succinctly, the predicates in this contract augment the sqrt's
type, indicating that the domain and range must be positive.
Figures 5 and 6 define a conventional reduction semantics for the
base language without contracts [4].
4 Contract Monitoring
As explained earlier, the contract monitor must perform two tasks.
First, it must track higher-order functions to discover contract vio-
lations. Second, it must properly assign blame for contract viola-
tions. To this end, it must track higher-order functions through the
program's evaluation and the covariant and contravariant portions
of each contract.
To monitor contracts, we add a new form of expression, some new
values, evaluation contexts and reduction rules. Figure 8 contains
the new expression form, representing an obligation:
e^{e, x, x}
The first superscript is a contract expression that the base expression
is obliged to meet. The last two are variables. (Footnote 4: Technically, sqrt should consume and produce any number, but since
λCON only contains integers and the precise details of sqrt are unimportant,
we consider a restricted form of sqrt that operates on integers.) The variables enable
the contract monitoring system to assign blame properly. The first
variable names the party responsible for values that are produced by
the expression under the superscript and the second variable names
the party responsible for values that it consumes.
An implementation would add a fourth superscript, representing the
source location where the contract is established. This superscript
would be carried along during evaluation until a contract violation
is discovered, at which point it would be reported as part of the error
message.
In this model, each definition is treated as if it were written by a
different programmer. Thus, each definition is considered to be a
separate entity for the purpose of assigning blame. In an implemen-
tation, this is too fine-grained. Blame should instead be assigned to
a coarser construct, e.g., Modula's modules, ML's structures and
functors, or Java's packages. In DrScheme, we blame modules [9].
Programmers do not write obligation expressions. Instead, contracts
are extracted from the definitions and turned into obligations.
To enforce this, we define the judgment p ok that holds when there
are no obligation expressions in p.
Obligations are placed on each reference to a val rec-defined vari-
able. The first part of the obligation is the definition's contract ex-
pression. The first variable is initially the name of the referenced
definition. The second variable is initially the name of the defini-
tion where the reference occurs (or main if the reference occurs in
the last expression). The function I (defined in the accompanying
technical report [6]) specifies precisely how to insert the obligation
expressions.
The introduction of obligation expressions induces the extension of
the set of evaluation contexts, as shown in figure 8. They specify
that the value of the superscript in an obligation expression is
determined before the base value. Additionally, the obligation expression
induces a new type rule. The type rule guarantees that the
obligation is an appropriate contract for the base expression.
[Figure 8. Monitoring Contracts in λCON — obligation expressions, the obligation type rule, obligation evaluation contexts, obligation values, and obligation reductions; the rule layout was lost in extraction.]
Finally, we add the class of labeled values. The labels are function
obligations (see figure 8). Although the grammar allows any value
to be labeled with a function contract, the type soundness theorem
coupled with the type rule for obligation expressions guarantees
that the delayed values are always functions, or functions wrapped
with additional obligations.
For the reductions in figure 6, superscripted evaluation proceeds
just like the original evaluation, except that the superscript is carried
from the instruction to its result. There are two additional re-
ductions. First, when a predicate contract reaches a flat value, the
predicate on that flat value is checked. If the predicate holds, the
contract is discarded and evaluation continues. If the predicate fails,
execution halts and the definition named by the variable in the positive
position of the superscript is blamed.
The final reduction of figure 8 is the key to contract checking for
higher-order functions (the hoc above the arrow stands for higher-
order contract). At an application of a superscripted procedure,
the domain and range portion of the function position's superscript
are moved to the argument expression and the entire application.
Thus, the obligation to maintain the contract is distributed to the
argument and the result of the application. As the obligation moves
to the argument position of the application, the value producer and
the value consumer exchange roles. That is, values that are being
provided to the function are being provided from the argument and
vice versa. Accordingly, the last two superscripts of the obligation
expression must be reversed, which ensures that blame is properly
assigned, according to the even-odd rule.
For example, consider the definition of sqrt with a single use in
the main expression. The reduction sequence for the application
of sqrt is shown on the left in figure 10. For brevity, references
to variables defined by val rec are treated as values, even though
they would actually reduce to the variable's current values. The
first reduction is an example of how obligations are distributed on
an application. The domain portion of the superscript contract is
moved to the argument of the procedure and the range portion is
moved to the application. The second reduction and the second
let rec wrap : contract → t → string → string → t =
  λct. λx. λp. λn.
    if flatp(ct) then
      (if (pred(ct)) x then x else blame(p))
    else
      λy. wrap (rng(ct)) (x (wrap (dom(ct)) y n p)) p n
in wrap

Figure 9. Contract Compiler Wrapping Function
to last reduction are examples of how flat contracts are checked.
In this case, each predicate holds for each value. If, however, the
predicate had failed in the second reduction step, main would be
blamed, since main supplied the value to sqrt. If the predicate had
failed in the second to last reduction step, sqrt would be blamed
since sqrt produced the result.
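The first-order blame behavior just described can be mimicked with a small Python sketch (check_flat and wrap_fn are our names, not the paper's): the domain check blames the caller ("main") and the range check blames the definition ("sqrt").

```python
class ContractViolation(Exception):
    def __init__(self, blamed):
        super().__init__("blame " + blamed)
        self.blamed = blamed

def check_flat(pred, v, positive):
    # flat-contract check: blame the positive party if the predicate fails
    if pred(v):
        return v
    raise ContractViolation(positive)

def wrap_fn(dom, rng, f, pos, neg):
    # function contract: check the argument with blame swapped, then the result
    def wrapped(x):
        return check_flat(rng, f(check_flat(dom, x, neg)), pos)
    return wrapped

nonneg = lambda x: x >= 0
sqrt = wrap_fn(nonneg, nonneg, lambda n: round(n ** 0.5), "sqrt", "main")

assert sqrt(4) == 2

try:
    sqrt(-1)                     # main supplies a negative argument
except ContractViolation as e:
    assert e.blamed == "main"

bad_sqrt = wrap_fn(nonneg, nonneg, lambda n: -1, "sqrt", "main")
try:
    bad_sqrt(4)                  # sqrt returns a negative result
except ContractViolation as e:
    assert e.blamed == "sqrt"
```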
For a second example, recall the higher-order program from the
introduction (translated to the calculus):
val rec gt9 = λx. x > 9
val rec bet0_99 = λx. 0 ≤ x ≤ 99
val rec g : (contract(gt9) → contract(bet0_99)) → contract(bet0_99) =
  λf. f 0
(g (λx. 25))
The definitions of gt9 and bet0_99 are merely helper functions for
defining contracts and, as such, do not need contracts. Although the
calculus does not allow such definitions, it is a simple extension to
add them; the contract checker would simply ignore them.
Accordingly, the variable g in the body of the main expression is
the only reference to a definition with a contract. Thus, it is the
only variable that is compiled into an obligation. The contract for
the obligation is g's contract. If an even position of the contract is
not met, g is blamed and if an odd position of the contract is not
met, main is blamed. Here is the reduction sequence:
[reduction sequence layout lost in extraction; the obligations carry the superscripts ⟨bet0_99, main, g⟩ and ⟨bet0_99, g, main⟩, and the steps are described below]
In the first reduction step, the obligation on g is distributed to g's
argument and to the result of the application. Additionally, the variables
indicating blame are swapped in (l x. 25)'s obligation. The
second step substitutes l x. 25 in the body of g, resulting in an application
of l x. 25 to 0. The third step distributes the contract on l
x. 25 to 0 and to the result of the application. In addition, the variables
for even and odd blame switch positions again in 0's contract.
The fourth step reduces the flat contract on 0 to an if test that determines
if the contract holds. The final reduction steps assign blame
to g for supplying 0 to its argument, since it promised to supply a
number greater than 9.
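A Python transliteration of this example (our own encoding: flat contracts as predicates, function contracts as (dom, rng) pairs) reproduces the blame assignment, with the domain wrap swapping the positive and negative labels on each distribution:

```python
class Violation(Exception):
    def __init__(self, blamed):
        super().__init__("blame " + blamed)
        self.blamed = blamed

def blame(who):
    raise Violation(who)

def wrap(ct, v, p, n):
    # function contracts are (dom, rng) pairs; flat contracts are predicates
    if isinstance(ct, tuple):
        dom, rng = ct
        # blame labels p and n are swapped for the domain check
        return lambda y: wrap(rng, v(wrap(dom, y, n, p)), p, n)
    return v if ct(v) else blame(p)

gt9 = lambda x: x > 9
bet0_99 = lambda x: 0 <= x <= 99
g_contract = ((gt9, bet0_99), bet0_99)      # (gt9 -> bet0_99) -> bet0_99

g = wrap(g_contract, lambda f: f(0), "g", "main")
try:
    g(lambda x: 25)
except Violation as e:
    assert e.blamed == "g"                  # g fed 0 to its own argument
```

The double swap mirrors the calculus: g's argument flips to ⟨…, main, g⟩, and the argument supplied to (λx. 25) flips back to ⟨gt9, g, main⟩, so the failed check on 0 blames g.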
val rec sqrt : contract(λx. x ≥ 0) → contract(λx. x ≥ 0) =
  λn. ⟨body intentionally elided⟩
(sqrt 4)

REDUCTIONS IN λCON
[reduction steps lost in extraction; the flat contract contract(λx. x ≥ 0) is tested against the argument 4, with blame(main) on failure, and against the result, with blame(sqrt) on failure]

REDUCTIONS OF THE COMPILED EXPRESSION
[reduction steps lost in extraction; the first step applies wrap to the function contract, sqrt, and the strings "sqrt" and "main". For the next few steps, the reductions of wrap's argument are shown before the reduction of wrap, for clarity; the flat checks reduce to if tests, with blame("main") and blame("sqrt") in the else branches]

Figure 10. Reducing sqrt in λCON and with wrap
This example shows that higher-order functions and first-order
functions are treated uniformly in the calculus. Higher-order functions
merely require more distribution reductions than first-order
functions. In fact, each nested arrow contract expression induces a
distribution reduction for a corresponding application. For simplic-
ity, we focus on our sqrt example for the remainder of the paper.
5 Contract Implementation
To implement λCON, we must compile away obligation expressions.
The key to the compilation is the wrapper function in figure 9. The
wrapper function is defined in the calculus (the let expression is
short-hand for inline applications of l-expressions, and is used for
clarity). It accepts a contract, a value to test, and two strings. These
strings correspond to the variables in the superscripts. We write
wrap as a meta-variable to stand for the program text in figure 9,
not a program variable.
Compiling the obligations is merely a matter of replacing an obligation
expression with an application of wrap. The first argument
is the contract of the referenced variable. The second argument is
the expression under the obligation and the final two arguments are
string versions of the variables in the obligation. Accordingly, we
define a compiler (C) that maps from programs to programs. It
replaces each obligation expression with the corresponding application
of wrap. The formal definition is given in the accompanying
technical report [6].
The function wrap is defined case-wise, with one case for each kind
of contract. The first case handles flat contracts; it merely tests if
the value matches the contract and blames the positive position if
the test fails. The second case of wrap deals with function con-
tracts. It builds a wrapper function that tests the original function's
argument and its result by recursive calls to wrap. Textually, the
first recursive call to wrap corresponds to the post-condition check-
ing. It applies the range portion of the contract to the result of the
original application. The second recursive call to wrap corresponds
to the pre-condition checking. It applies the domain portion of the
contract to the argument of the wrapper function. This call to wrap
has the positive and negative blame positions reversed as befits the
domain checking for a function.
The right-hand side of figure 10 shows how the compiled version
of the sqrt program reduces. It begins with one call to wrap from
the one obligation expression in the original program. The first
reduction applies wrap. Since the contract in this case is a function
contract, wrap takes the second case in its definition and returns a
l expression. Next, the l expression is applied to 4. At this point,
the function contract has been distributed to sqrt's argument and to
the result of sqrt's application, just like the distribution reduction in
λCON (as shown on the left side of figure 10). The next reduction
step is another call to wrap, in the argument to sqrt. This contract is
flat, so the first case in the definition of wrap applies and the result is
an if test. If that test had failed, the else branch would have assigned
blame to main for supplying a bad value to sqrt. The test passes,
however, and the if expression returns 4 in the next reduction step.
[Figure 11. Evaluator Functions — the definitions of eval_fh and eval_fw in terms of I(p) and the →fh and →fw relations were lost in extraction]
After that, sqrt returns 2. Now we arrive at the final call to wrap.
As before, the contract is a flat predicate, so wrap reduces to an if
expression. This time, however, if the if test had failed, sqrt would
have been blamed for returning a bad result. In the final reduction,
the if test succeeds and the result of the entire program is 2.
6 Correctness
DEFINITION 6.1 (DIVERGENCE). A program p diverges under →
if for any p1 such that p →* p1, there exists a p2 such that p1 →
p2.
Although the definition of divergence refers only to →, we use it
for each of the reduction relations.
The following type soundness theorem for λCON is standard [34].
THEOREM 6.2 (TYPE SOUNDNESS FOR λCON). For any program
p such that ⊢ p : t according to the type judgments in figure 7,
exactly one of the following holds: p reduces to a value; p reduces
to blame(x), where x is a val rec-defined variable in p; p aborts in
a misapplication of /, hd, tl, pred, dom, or rng; or p diverges under →.
PROOF. Combine the preservation and progress lemmas for
λCON.
LEMMA 6.3 (PRESERVATION FOR λCON). If ∅ ⊢ e : t and e → e′,
then ∅ ⊢ e′ : t.
LEMMA 6.4 (PROGRESS FOR λCON). If ∅ ⊢ e : t, then either e is a
value or there exists an e′ such that e → e′.
The remainder of this section formulates and proves a theorem that
relates the evaluation of programs in the instrumented semantics
from section 4 and the contract compiled programs from section 5.
To relate these two semantics, we introduce a new semantics and
show how it bridges the gap between them. The new semantics
is an extension of the semantics given in figures 5 and 6. In
addition to those expressions it contains obligation expressions,
evaluation contexts, and the →flat reduction from figure 8 (but not the
new values or the →hoc reduction in figure 8), and the →wrap reduction:
D[((λx. e)^{(V1 → V2), p, n})] →wrap
D[λy. ((λx. e) y^{V1, n, p})^{V2, p, n}]
where y is not free in e.
DEFINITION 6.5 (EVALUATORS). Define →fh to be the transitive
closure of (→flat ∪ →hoc) and define →fw to be the transitive
closure of (→flat ∪ →wrap).
The evaluator functions (shown in figure 11) are defined on programs
p such that p ok and ⊢ p : t. As a short-hand
notation, we write that a program is equal to a value V
when the main expression of the program is equal to V.
LEMMA 6.6. The evaluators are partial functions.
PROOF. From an inspection of the evaluation contexts, we can
prove that there is a unique decomposition of each program into
an evaluation context and an instruction, unless it is a value. From
this, it follows that the evaluators are (partial) functions.
THEOREM 6.7 (COMPILER CORRECTNESS).
PROOF. Combine lemma 6.8 with lemma 6.9.
LEMMA 6.8.
PROOF SKETCH. This proof is a straightforward examination of
the evaluation sequences of eval_fh and eval_fw. Each reduction of an
application of wrap corresponds directly to either a →flat or a →wrap reduction,
and otherwise the evaluators proceed in lock-step.
The full proof is given in an accompanying technical report [6].
LEMMA 6.9.
PROOF SKETCH. This proof establishes a simulation between eval_fh
and eval_fw. The simulation is preserved by each reduction step and it
relates values to themselves and errors to themselves.
The full proof is given in an accompanying technical report [6].
7 Dependent Contracts
Adding dependent contracts to the calculus is straightforward. The
reduction relation for dependent function contracts naturally extends
the reduction relation for normal function contracts. The
reduction for distributing contracts at applications is the only dif-
ference. Instead of placing the range portion of the contract into
the obligation, an application of the range portion to the function's
original argument is placed in the obligation, as in figure 12.
[Figure 12. Dependent Function Contracts for λCON — dependent contract expressions, the dependent contract type rule, evaluation contexts, and reductions; the rule layout was lost in extraction.]
The evaluation contexts given in figure 8 dictate that an obligation's
superscript is reduced to a value before its base expression. In par-
ticular, this order of evaluation means that the superscripted application
resulting from the dependent contract reduction in figure 12
is reduced before the base expression. Therefore, the procedure in
the dependent contract can examine the state (of the world) before
the function proper is applied. This order of evaluation is critical
for the callback examples from section 2.5.
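In a Python sketch (our own names, not the calculus), a dependent contract can be modeled by computing the range predicate from the argument before the function is called, matching the evaluation order just described:

```python
class Violation(Exception):
    def __init__(self, blamed):
        super().__init__("blame " + blamed)
        self.blamed = blamed

def wrap_dep(dom, rng_of, f, pos, neg):
    # dependent function contract: the range contract is built from the
    # argument *before* f runs, so it can inspect the pre-call state
    def wrapped(x):
        rng = rng_of(x)                  # evaluated first
        if not dom(x):
            raise Violation(neg)
        result = f(x)
        if not rng(result):
            raise Violation(pos)
        return result
    return wrapped

# hypothetical contract: the result of sqrt, squared, must not exceed the input
dep_sqrt = wrap_dep(lambda x: x >= 0,
                    lambda x: (lambda r: r * r <= x),
                    lambda n: int(n ** 0.5),
                    "sqrt", "main")
assert dep_sqrt(9) == 3
```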
8 Tail Recursion
Since the contract compiler described in section 5 checks post-
conditions, it does not preserve tail recursion [2, 30] for procedures
with post-conditions. Typically, determining if a procedure
call is tail recursive is a simple syntactic test. In the presence of
higher-order contracts, however, understanding exactly which calls
are tail-calls is a complex task. For example, consider this program:
val rec f : (… → contract(λx. x > 0)) → … =
  λg. g 3
(f (λx. x + 1))
The body of f is in tail position with respect to a conventional inter-
preter. Hence, a tail-call optimizing compiler should optimize the
call to g and not allocate any additional stack space. But, due to the
contract that g's result must be larger than 0, the call to g cannot be
optimized, according to the semantics of contract checking.5
Even worse, since functions with contracts and functions without
contracts can co-mingle during evaluation, sometimes a call to a
function is a tail-call but at other times a call to the same function
call is not a tail-call. For instance, imagine that the argument to f
was a locally defined recursive function. The recursive calls would
be tail-calls, since they would not be associated with any top-level
variable, and thus no contract would be enforced.
Contracts are most effective at module boundaries, where they serve
the programmer by improving the opportunities for modular rea-
soning. That is, with well-written contracts, a programmer can
study a single module in isolation when adding functionality or
fixing defects. In addition, if the programmer changes a contract,
the changed contract immediately indicates which other source files
must change. (Footnote 5: At a minimum, compiling it as a tail-call becomes much more difficult.)
Since experience has shown that module boundaries are typically
not involved in tight loops, we conjecture that losing tail recursion
for contract checking is not a problem in practice. In particular,
adding these contracts to key interfaces in DrScheme has had no
noticeable effect on its performance. Removing the tail-call optimization
entirely, however, would render DrScheme useless.
The author of [29] presents further evidence for this conjecture about tail recursion.
His compiler does not preserve tail recursion for any cross-module
procedure call, not just those with contracts. Still, he has
not found this to be a problem in practice [29, section 3.4.1].
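The loss of tail recursion can be made concrete with a Python sketch (our own instrumentation; countdown and the counters are hypothetical): a range-contract wrapper must run its check after the call returns, so each syntactic "tail" call leaves a pending post-condition frame behind.

```python
pending = 0        # post-condition checks currently waiting for a result
max_pending = 0

def with_range_check(pred, f):
    def wrapped(n):
        global pending, max_pending
        pending += 1
        max_pending = max(max_pending, pending)
        result = f(n)            # no longer a tail call: pred runs afterwards
        assert pred(result), "blame the definition"
        pending -= 1
        return result
    return wrapped

def countdown(n):
    # syntactically a tail call, but the wrapper defers a check past it
    return 0 if n == 0 else checked(n - 1)

checked = with_range_check(lambda r: r >= 0, countdown)
checked(100)
assert max_pending == 101        # one deferred check per recursive call
```

A genuinely tail-recursive loop would run in constant space; here the deferred checks grow linearly with the recursion depth.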
9 Conclusion
Higher-order, typed programming language implementations [1,
12, 15, 19, 33] have a static type discipline that prevents certain
abuses of the language's primitive operations. For example, programs
that might apply non-functions, add non-numbers, or invoke
methods of non-objects are all statically rejected. Yet these languages
go further. Their run-time systems dynamically prevent additional
abuses of the language primitives. For example, the primitive
array indexing operation aborts if it receives an out of bounds
index, and the division operation aborts if it receives zero as a divi-
sor. Together these two techniques dramatically improve the quality
of software built in these languages.
With the advent of module languages that support type abstraction
[13, 18, 24], programmers are empowered to enforce their own
abstractions at the type level. These abstractions have the same
expressive power that the language designer used when specifying
the language's primitives. The dynamic part of the invariant en-
forcement, however, has become a second-class citizen. The programmer
must manually insert dynamic checks and blame is not
assigned automatically when these checks fail. Even worse, as discussed
in section 2, it is not always possible for the programmer
to insert these checks manually because the call sites may be in
unavailable modules.
This paper presents the first assertion-based contract checker for
languages with higher-order functions. Our contract checker enables
programmers to refine the type-specifications of their abstractions
with additional, dynamically enforced invariants. We illustrate
the complexities of higher-order contract checking with a series
of examples chosen from DrScheme's code-base. These examples
serve two purposes. First, they illustrate the subtleties of contract
checking for languages with higher-order functions. Second,
they demonstrate that current static checking techniques are not expressive
enough to support the contracts underlying DrScheme.
We believe that experience with assertions will reveal which contracts
have the biggest impact on software quality. We hope that this
information, in turn, helps focus type-system research in practical
directions.
Acknowledgments
Thanks to Thomas Herchenroder, Michael Vanier, and the anonymous
ICFP reviewers for their comments on this paper.
We would like to send a special thanks to ICFP reviewer #3, whose
careful analysis and insightful comments on this paper have renewed
our faith in the conference reviewing process.
--R
AT&T Bell Laboratories.
Proper tail recursion and space efficiency.
How to Design Programs.
The revised report on the syntactic theories of sequential control and state.
Contracts for higher-order functions.
Modular object-oriented programming with units and mixins.
PLT MzScheme: Language manual.
You want it when?
A programmer's reduction semantics for classes and mixins.
A Language Manual for Sather 1.1.
The Java(tm) Language Specification.
The Turing programming language.
Manifest types, modules, and separate compilation.
The Objective Caml system.
Programming with specifications.
Monographs in Computer Science.
An overview of Anna, a specification language for Ada.
Eiffel: The Language.
The Definition of Standard ML.
Abstract types have existential type.
A technique for software module specification with examples. Communications of the ACM.
Objective ML: an object-oriented extension of ML.
Principles of Programming Languages.
A practical approach to programming with assertions. IEEE Transactions on Software Engineering.
Bigloo: A practical Scheme compiler.
Bee: an integrated development environment for the Scheme programming language.
Debunking the "expensive procedure call" myth. MIT Artificial Intelligence Laboratory.
An Introduction.
Component Software.
The GHC Team.
A syntactic approach to type soundness. First appeared as Technical Report TR160.
| predicate typing;higher-order functions;software reliability;contracts;behavioral specifications |
581487 | A demand-driven adaptive type analysis. | Compilers for dynamically and statically typed languages ensure safe execution by verifying that all operations are performed on appropriate values. An operation as simple as car in Scheme and hd in SML will include a run time check unless the compiler can prove that the argument is always a non-empty list using some type analysis. We present a demand-driven type analysis that can adapt the precision of the analysis to various parts of the program being compiled. This approach has the advantage that the analysis effort can be spent where it is justified by the possibility of removing a run time check, and where added precision is needed to accurately analyze complex parts of the program. Like the k-cfa our approach is based on abstract interpretation but it can analyze some important programs more accurately than the k-cfa for any value of k. We have built a prototype of our type analysis and tested it on various programs with higher order functions. It can remove all run time type checks in some nontrivial programs which use map and the Y combinator. | Introduction
Optimizing compilers typically consist of two components: a program
analyzer and a program transformer.

Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. To copy otherwise, to republish, to post on servers or to redistribute
to lists, requires prior specific permission and/or a fee.
ICFP'02, October 4-6, 2002, Pittsburgh, Pennsylvania, USA.

(let ((f (lambda (a b) (cons^1 (car^2 a) b)))
      (i (lambda (c) c)))
  (let ((j (lambda (d) (^3 i d))))
    (car^4 (f (f (cons^5 5 '())
                 (cons^6 6 '()))
              …))))
Figure 1. A Scheme program under analysis

The goal of the analyzer is to determine various attributes of the program so that the
transformer can decide which optimizations are possible and worthwhile.
To avoid missing optimization opportunities the analyzer
typically computes a very large set of attributes to a predetermined
level of detail. This wastes time because the transformer only uses
a small subset of these attributes and some attributes are more detailed
than required. Moreover the transformer may require a level
of detail for some attributes which is higher than what was determined
by the analyzer.
Consider a compiler for Scheme that optimizes calls to car by removing
the run time type check when the argument is known to be
a pair. The compiler could use the 0-cfa analysis [8, 9] to compute
for every variable of a program the (conservative) set of allocation
points in the program that create a value (pair, function, number,
etc) that can be bound to an instance of that variable. In the program
fragment shown in Figure 1 the 0-cfa analysis computes that
only pairs, created by cons^1 and cons^5, can be bound to instances
of the variable a and consequently the transformer can safely remove
the run time type check in the call to car^2.
Note that the 0-cfa analysis wasted time computing the properties
of variable b which are not needed by the transformer. Had there
been a call (car b) in f's body it would take the more complex
1-cfa analysis to discover that only a pair created by cons^6 and cons^8
can be bound to an instance of variable b; the 0-cfa does not exclude
that the empty list can be bound to b because the empty list can be
bound to c and returned by function i. The 1-cfa analysis achieves
this higher precision by using an abstract execution model which
partitions the instances of a particular variable on the basis of the
call sites that create these instances. Consequently it distinguishes
the instances of variable c created by the call (^7 i (cons …))
and those created by the call (^9 i '()), allowing it to narrow the
type returned by (^7 i (cons …)) to pairs only. If these two
calls to i are replaced by calls to j then the 2-cfa analysis would
be needed to fully remove all type checks on calls to car. By using
an abstract execution model that keeps track of call chains up to a
length of 2 the 2-cfa analysis distinguishes the instances of variable
c created by the call chain (^7 j (cons …)) → (^3 i d) and the
call chain (^9 j '()) → (^3 i d). The compiler implementer (or
user) is faced with the difficult task of finding for each program
e ::= x^l                        x ∈ Var, l ∈ Lab
    | (λ^l x. e1)
    | (^l e1 e2)
    | (if^l e1 e2 e3)
    | (cons^l e1 e2)
    | (car^l e1)
    | (cdr^l e1)
    | (pair?^l e1)
Figure 2. Syntax of the Source Language
an acceptable trade-off between the extent of optimization and the
value of k and compile time.
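The effect of context sensitivity on the example can be sketched with plain Python sets (a toy model of the abstract values, not an actual cfa implementation): the monovariant 0-cfa-like view pools every value reaching i's parameter c, while a call-site-indexed 1-cfa-like view keeps them apart.

```python
# Abstract values are allocation-site tags in this toy model.
PAIR, NIL = "pair", "nil"

# 0-cfa-like: one abstract binding for i's parameter c, shared by all calls
c_mono = set()
c_mono |= {PAIR}            # flows in from (^7 i (cons ...))
c_mono |= {NIL}             # flows in from (^9 i '())
result_at_7_mono = c_mono   # i returns whatever flowed into c

# 1-cfa-like: instances of c are partitioned by the creating call site
c_by_site = {7: {PAIR}, 9: {NIL}}
result_at_7_poly = c_by_site[7]

assert result_at_7_mono == {PAIR, NIL}   # a car on this result needs a check
assert result_at_7_poly == {PAIR}        # the run time check can be removed
```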
The analysis approach presented in this paper is a demand-driven
type analysis that adapts the analysis to the source program. The
work performed by the analyzer is driven by the need to determine
which run time type checks can be safely removed. By being
demand-driven the analyzer avoids performing useless analysis
work and performs deeper analysis for specific parts of the program
when it may result in the removal of a run time type check. This
is achieved by changing the abstract execution model dynamically
to increase the precision where it appears to be beneficial. Like the
k-cfa our analysis is based on abstract interpretation. As explained
in Section 4, our models use lexical contours instead of call chains.
Some important programs analyzed with our approach are more accurately
analyzed than with the k-cfa for any value of k (see Section
6). In particular, some programs with higher order functions,
including uses of map and the Y combinator, are analyzed precisely.
Our demand-driven analysis does not place a priori limits on the
precision of the analysis. This has the advantage that the analysis
effort can be varied according to the complexity of the source program
and in different parts of the same program. On the other hand,
the analysis may not terminate for programs where it is difficult or
impossible to prove that a particular type check can be removed.
We take the pragmatic point of view that it is up to the user to decide
what is the maximal optimization effort (limit on the time or on
some other resource) the compiler should expend. The type checks
that could not be removed within this time are simply kept in the
generated code. We think this is better than giving the user the
choice of an "optimization level" (such as the k to use in a k-cfa)
because there is a more direct link with compilation time.
Although our motivation is the efficient compilation of Scheme, the
analysis is also applicable to languages such as SML and Haskell
for the removal of run time pattern-matching checks. Indeed the
previous example can be translated directly in these statically typed
languages, where the run time type checks are in the calls to hd.
After a brief description of the source language we explain the ana-
lyzer, the abstract execution models and the processing of demands.
Experimental results obtained with a prototype of our analyzer are
then presented.
2 Source Language

The source language of the analysis is a purely functional language
similar to Scheme, with only three data types: the false value,
pairs, and one-argument functions. Each expression is uniquely labeled
to allow easy identification in the source program. The syntax
is given in Figure 2.
[Figure 3 is not reproduced here; it defines the Boolean and pair domains, environments Env := Var → Val, the evaluation function, and the apply function.]
Figure 3. Semantics of the Source Language
There is no built-in letrec special form. The Y combinator must
be written explicitly when defining recursive functions. Note also
that cons, car, cdr and pair? are treated as special forms.
The semantics of the language is given in Figure 3. A notable departure
from the Scheme semantics is that pair? returns its argument
when it is a pair. The only operations that may require a run
time type check are car and cdr (the argument must be a pair) and
function call (the function position must be a function).
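As a rough illustration, the value domain and the two checked operations can be sketched in Python (the class and function names here are my own, not the paper's):

```python
# Sketch of the paper's concrete value domain: the false value, pairs,
# and one-argument functions. Representation is an assumption.

class Pair:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def pair_p(v):
    # Departure from Scheme noted in the text: pair? returns its
    # argument (a true value) when it is a pair, not #t.
    return v if isinstance(v, Pair) else False

def car(v):
    # car/cdr and function call are the only operations that may
    # require a run time type check.
    if not isinstance(v, Pair):
        raise TypeError("car: not a pair")
    return v.car
```

Because `pair?` returns the pair itself, its result can flow directly into a conditional test without a second inspection.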
3 Analysis Framework
To be able to modify the abstract evaluation model during the analysis
of the program, we use an analysis framework. The framework
is a parameterized analysis general enough to be used for type anal-
ysis, as we do here, as well as a variety of other program analyses.
When the specifications of an abstract evaluation model are fed to
the framework an analysis instance is obtained which can then be
used to analyze the program.
The analysis instance is composed of a set of evaluation constraints
that is produced from the framework parameters and the program.
These constraints represent an abstract interpretation of the pro-
gram. The analysis of the program amounts to solving the set of
constraints. The solution is the analysis results. From the program
and the framework parameters can also be produced the safety
constraints which indicate at which program points run time type
checks may be needed. It is by confronting the analysis results with
the safety constraints that redundant type checks are identified. If
all the safety constraints are satisfied, all the type checks can be
removed by the optimizer. A detailed description of the analysis
1 The ⊎ operator is the disjoint union, i.e., the sets to combine must be disjoint.
[Figure 4 is not reproduced here; it lists the instantiation parameters: the abstract Booleans V al B , abstract closures V al C , abstract pairs V al P , the contours Cont , the main contour k 0 , and the functions cc (abstract closure creation), pc (abstract pair creation), and call (contour selection), subject to |V al | < ∞ and |Cont | < ∞.]
Figure 4. Instantiation parameters of the analysis framework
[Figure 5 is not reproduced here; it defines the seven matrices: a l,k (value of e l in k), b x,k (contents of x in k), g c,k (return value of c with its body in k), d l,k (flag indicating evaluation of e l in k), and the logs c c , p p , and k k (creation circumstances of closures, pairs, and contours, respectively).]
Figure 5. Matrices containing the results of an analysis
framework and its implementation is given in [4]. Here we only
give an overview of the framework.
3.1 Framework Parameters
Figure
4 presents the framework parameters that specify the abstract
evaluation model. The interface is simple and flexible. Four
abstract domains, the main contour, and three abstract evaluation
functions have to be provided to the framework.
V al B , V al C , and V al P are the abstract domains for the Booleans,
closures, and pairs. They must be non-empty and mutually disjoint.
V al is the union of these three domains. Cont is the abstract domain
of contours. Contours are abstract versions of the evaluation
contexts in which expressions of the program get concretely evalu-
ated. The part of the evaluation contexts that is abstractly modeled
by the contours may be the lexical environment, the continuation,
or a combination of both. The main contour k 0 indicates in which
abstract contour the main expression of the program is to be evaluated.
The abstract evaluation functions cc, pc, and call specify closure
creation, pair creation, and how the contour is selected when a function
call occurs. cc(l,k) returns the abstract closure created when
the l-expression e l is evaluated in contour k. pc(l, v 1 , v 2 , k) returns
the abstract pair created by the cons-expression labeled l evaluated
in contour k with arguments v 1 and v 2 . Finally, call(l, c,v,k) indicates
the contour in which the body of closure c is evaluated when c
is called from the call-expression e l in contour k and with argument
v.
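As a sketch, the parameter bundle might look as follows in Python; the `Model` record and the coarse instantiation are my assumptions, not the paper's code:

```python
# Hypothetical bundle of the framework parameters of Figure 4: four
# finite abstract domains, the main contour k0, and the three model
# functions cc, pc, and call.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Model:
    val_b: frozenset          # abstract Booleans
    val_c: frozenset          # abstract closures
    val_p: frozenset          # abstract pairs
    cont: frozenset           # abstract contours
    k0: Any                   # main contour
    cc: Callable              # cc(l, k) -> abstract closure
    pc: Callable              # pc(l, v1, v2, k) -> abstract pair
    call: Callable            # call(l, c, v, k) -> contour

# A deliberately coarse instantiation: one abstract value per type and a
# single contour. (Coarser than the paper's initial model, which keeps
# one abstract closure per l-expression.)
coarse = Model(
    val_b=frozenset({"#f"}),
    val_c=frozenset({"clos"}),
    val_p=frozenset({"pair"}),
    cont=frozenset({()}),
    k0=(),
    cc=lambda l, k: "clos",
    pc=lambda l, v1, v2, k: "pair",
    call=lambda l, c, v, k: (),
)
```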
Any group of modeling parameters that satisfies the constraints
given in Figure 4 is a valid abstract evaluation model for the framework.
[Figure 6 is not reproduced here; it gives the grammar of the modeling patterns (#mPat# and #mkPat#) and the split patterns (#sPat# and #skPat#), built from #, #f, l # , l l k, pair patterns (P, P), and, for split patterns, a single split point.]
Figure 6. Syntax of patterns
3.2 Analysis Results
The analysis results are returned in the seven abstract matrices
shown in Figure 5. Matrices a, b, and g indicate respectively the
value of the expressions, the value of the variables, and the return
value of closures. The value b x,k is defined as follows. Assume that
closure c was created by l-expression (l l x. e l ). Then if c is called
and the call function prescribes contour k for the evaluation of c's
body, then parameter x will be bound to the abstract value b x,k .
d l,k indicates whether or not expression e l is evaluated in contour k:
e l is evaluated in contour k if and only if d l,k is non-empty. At first glance, d l,k
should have been defined as a Boolean instead of a set. However,
the use of sets makes the implementation of the analysis framework
simpler (see [4]).
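One plausible in-memory shape for these matrices (an assumption, not the paper's implementation) maps each index to a set of abstract values, which also shows why d is a set rather than a Boolean:

```python
# Result matrices as dictionaries of sets: a[l, k] holds the abstract
# values of expression e_l in contour k, and e_l counts as evaluated
# in k exactly when d[l, k] is non-empty.
from collections import defaultdict

a = defaultdict(set)   # value of e_l in k
b = defaultdict(set)   # contents of variable x in k
g = defaultdict(set)   # return value of closure c with its body in k
d = defaultdict(set)   # evaluation flag for e_l in k (a set, per the text)

a[(1, ())].add("pair")
d[(1, ())].add("reached")

def evaluated(l, k):
    return bool(d[(l, k)])
```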
Matrices c, p, and k are logs keeping the circumstances prevailing
when the different closures, pairs, and contours, respectively, are
created. For example, if during the abstract interpretation of the
program a pair p is created at expression e l with values v 1 and v 2
and in contour k, then this creation of p is logged into p p . That is,
(l, v 1 , v 2 , k) is inserted into p p . Most of the time, the circumstances
logged into the log variables are much fewer than what they could
theoretically be; in other words, p p usually contains fewer entries than
pc could produce. Similarly, when closure c is created by the evaluation of e l
in k, (l, k) is inserted in c c . And when contour k # is
selected to be the contour in which the body of f is to be evaluated
when f gets invoked on v at e l in k, (l, f , v, k) is inserted in k k # .
4 Pattern-Based Models
In the demand-driven type analysis we use patterns and pattern-
matchers to implement abstract evaluation models. Patterns constitute
the abstract values (V al ) and the abstract contours (Cont ).
Abstract values are shallow versions of the concrete values and abstract
contours are shallow versions of the lexical environments.
These lexical contours are one of the features distinguishing our
analysis from most type and control-flow analyses which typically
use call chains. A call chain is a string of the labels of the k nearest
enclosing dynamic calls. Although the use of call chains guarantees
polynomial-time analyses, it can also be fooled easily. We believe
that lexical contours provide a much more robust way to abstract
concrete evaluation contexts.
Figure
6 gives the syntax of patterns. There are two kinds of pat-
terns: modeling patterns (#mPat# and #mkPat#) and split patterns
(#sPat# and #skPat#). For both modeling patterns and split patterns,
there is a value variant (#mPat# and #sPat#) and a contour variant
(#mkPat# and #skPat#). Split patterns contain a single split point
that is designated by #. They are used in the demands that drive
the analysis (in split demands, more precisely). Modeling patterns
contain no split point. They form the representation of the abstract
values and contours.
[Figure 7 is not reproduced here; it defines # ("is abstracted by") by cases: #f abstracts #f, pair patterns abstract pairs componentwise, closures are abstracted via their definition environment r (valid at label l), and a contour abstracts an environment r by listing an abstraction for each variable in Dom(r), innermost first.]
Figure 7. Formal definition of relation "is abstracted by"
4.1 Meaning of Patterns
Modeling patterns represent abstract values, which in turn can be
seen as sets of concrete values. Pattern # abstracts any value, pattern
#f abstracts the Boolean value #f, pattern l # abstracts any closure,
pattern l l k abstracts any closure coming from the l-expression
labeled l and having a definition environment that can be abstracted
by k #mkPat#, and pattern (P 1 , P 2 ) abstracts any pair whose components
can be abstracted by P 1 and P 2 , respectively. The difference
between abstract values and concrete values is that an abstract value
can be made imprecise by having parts of it cut off using # and l # .
Modeling contour patterns appear in the modeling patterns of clo-
sures. To simplify, we use the term contour to mean modeling contour
pattern. Contours abstract lexical environments. A contour is
a list with an abstract value for each variable visible from a certain
label (from the innermost variable to the outermost). For example,
the contour (l #, (#, #)) indicates that the innermost variable (say
y) is a closure and the other (say x) is a pair. It could abstract, for
instance, any concrete environment binding y to a closure and x to a pair. 5
A formal definition of what concrete values are abstracted by what
abstract values is given in Figure 7. The relation # over V al × #mPat#
relates concrete and abstract values such that v # P means that v
is abstracted by P. We mention (without proof) that any concrete
value obtained during execution of the program can be abstracted
by a modeling pattern that is perfectly accurate. That is, the latter
abstracts only one concrete value, which is the former.
The split patterns and split contour patterns are used to express
split demands that increase the precision of the abstract evaluation
model. Their structure is similar to that of the modeling patterns but
they include one and only one split point (#) that indicates exactly
where in an abstract value an improvement in the precision of the
model is requested. Their utility will be made clearer in Section 5.
Operations on split patterns are explained next.
2 r is valid at label l if its domain is exactly the set of variables
that are visible from e l .
3 Dom( f ) denotes the domain of function f .
4 '#' denotes an undefined value. Consequently, r [x → #] is the
same environment as r but without the binding to x.
5 The empty concrete environment contains no bindings.
[Figure 8 is not reproduced here; it defines the partial intersection of two patterns by cases: the wildcard # intersected with P yields P, identical atoms intersect to themselves, and closure and pair patterns intersect componentwise. Incompatible patterns have no defined intersection.]
Figure 8. Algorithm for computing the intersection between two patterns
4.2 Pattern Intersection
Although the # relation provides a formal definition of when a
concrete value is abstracted by an abstract value, and, by extension,
when an abstract value is abstracted by another, it is not necessarily
expressed as an algorithm. Moreover, the demand-driven analysis
does not manipulate concrete values, only patterns of all kinds. So
we present a method to test whether an abstract value is abstracted
by another. More generally, we want to be able to test whether a
(modeling or split) pattern intersects with another. Similarly for
both kinds of contour patterns.
The intersection between patterns is defined in Figure 8. It is partially
defined, because two patterns may be incompatible, in the
sense that they have no intersection and, as such, their empty
intersection cannot be represented using patterns, or because the
intersection of two split patterns may create something having two
split points. The equations in the figure should be seen as cases to try in
order from the first to the last until, possibly, a case applies.
A pattern P intersects with another pattern P # if the intersection
function is defined when applied to P and P # . Moreover, when P
intersects with P # , the resulting intersection abstracts exactly those
concrete values abstracted by both P and P # . 6
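A toy rendering of the intersection (far simpler than Figure 8, and using an assumed encoding: `"top"` for the wildcard, `"#f"`, and `("pair", P1, P2)` tuples) makes the partiality visible:

```python
# Partial pattern intersection: defined cases are tried in order, and
# incompatible patterns raise, mirroring the undefined cases of Figure 8.
TOP = "top"

def inter(p, q):
    if p == TOP:
        return q
    if q == TOP:
        return p
    if p == "#f" and q == "#f":
        return "#f"
    if isinstance(p, tuple) and isinstance(q, tuple):
        return ("pair", inter(p[1], q[1]), inter(p[2], q[2]))
    raise ValueError("incompatible patterns: no representable intersection")
```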
4.3 Spreading on Split Patterns
Another relation that is needed to perform the demand-driven analysis
is the spreading test. It is useful in determining if a given split
pattern will increase the precision of the model if it is used in a
split demand. Spreading can occur between a set of abstract values
(modeling patterns) and a split pattern. A split pattern can be
thought of as denoting a sub-division: the set of its abstracted concrete
value is partitioned into a number of sets corresponding to the
different possibilities seen at the split point. Each of those sets is
called a bucket. For example, the pattern # abstracts all values, that
is, Val. It sub-divides Val into three buckets: ValB, ValC, and ValP.
Spreading occurs between the set of abstract values V and the split
6 Provided that we consider '#' and `l # ' to abstract all concrete
values and all concrete closures, respectively.
[Figure 9 is not reproduced here; it defines the relation by cases, tried in order: for example, S is spread on # if #f # S and S\{#f} is non-empty, or if S contains closures with two different labels; pair split patterns recurse into the component carrying the split point.]
Figure 9. Algorithm for the relation "is spread on"
pattern P if some two values (or refinements of values) in V that are
abstracted by P fall into different buckets. We say that V is spread
on split pattern P and denote it with V #P. Figure 9 gives a formal
definition of #. As with the # operator, cases should be tried in
order.
Mathematically, the relation # has the following meaning. The set
of abstract values S is spread on the split pattern P, denoted S # P,
if S intersects at least two of the modeling patterns obtained by replacing
'#' in P by #f, l # , and (#), respectively.
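For the simplest split pattern (a bare split point), the test reduces to checking whether two members of the set fall into different type buckets; a sketch under an assumed value encoding:

```python
# Spread test for the top-level split pattern: values are bucketed by
# type (Boolean / pair / closure), and the set is spread when it meets
# more than one bucket.
def bucket(v):
    if v == "#f":
        return "bool"
    if isinstance(v, tuple):      # e.g. ("pair", P1, P2)
        return "pair"
    return "closure"              # e.g. a closure label such as "lam3"

def spread_on_top_split(values):
    return len({bucket(v) for v in values}) > 1
```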
4.4 Model Implementation
An abstract value can be viewed as a concrete value that has gone
through a projection. Similarly, a contour can be viewed as a lexical
environment that has gone through a projection. If one arranges for
the image of the projection to be finite, then one obtains the desired
abstract domains V al B , V al C , V al P , and Cont .
But which projection should be used? The # relation is not of
much help since, generally, for a concrete value v, there may be
more than one abstract value -
v such that v # -
v. So a projection
based on # would be ill-defined.
The projection we use is based on an exhaustive non-redundant
pattern-matcher. That is, the pattern-matcher implementing the projection
of the values is a finite set of modeling patterns. For any
concrete value v, there will exist one and only one modeling pattern
v̄ in the set such that v # v̄. Such a pattern-matcher describes a
finite partition of Val.
finite partition of Val.
For example, the simplest projection for the values is: 7
{#f, l # , (#)}
It is finite, exhaustive and non-redundant.
7 This is not exactly true. The simplest pattern-matcher would
be the trivial one, {#}, but it would not implement a legal model
for the framework since an abstract model must at least distinguish
the Booleans, the closures, and the pairs.
As for the projection of contours, we use one pattern-matcher per
l-expression. For a given l-expression e l , the lexical environment in
which its body is evaluated can be projected by the pattern-matcher
M l . The empty lexical environment is always projected onto the list
of length 0, as the empty list is the only contour that abstracts the
empty environment.
The simplest contour pattern-matcher M l for expression (l l x. e l # )
is {(#, . , #)}: a single list having as many '#' entries as there are
visible variables in the environment in which e l # is evaluated.
Having a pattern-matcher M v that projects values and a family
of pattern-matchers {M i | .} that project lexical environments,
and assuming that M v projects closures coming from different l-
expressions to different abstract closures, it is easy to create an abstract
model, i.e. to define the parameters of the analysis framework,
as follows.
. V al B is {#f}
. V al C is {l l k # M v }
. V al P is {(v 1 , v 2 ) # M v }
. Cont is () together with the contours of the pattern-matchers M i
. k 0 is ()
. cc(l, k) is the projection of l l k by M v
. pc(l, v 1 , v 2 , k) is the projection of (v 1 , v 2 ) by M v
. call(l, l l (w 1 . w n ), v, k) is the projection of (v w 1 . w n ) by M l
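The induced parameters can be sketched as follows, where `project_value` and `project_env` stand in for the pattern-matchers M v and M l (the encodings are my assumptions):

```python
# Deriving the framework's model functions from projection functions,
# following the recipe above: closures and pairs are projected by the
# value matcher, and a call's contour is the projection of the argument
# prepended to the closure's captured environment.
def make_model(project_value, project_env):
    # project_value: modeling pattern -> abstract value
    # project_env: (label, env-tuple) -> contour
    return {
        "cc":   lambda l, k: project_value(("clos", l, k)),
        "pc":   lambda l, v1, v2, k: project_value(("pair", v1, v2)),
        "call": lambda l, c, v, k: project_env(c[1], (v,) + c[2]),
    }
```

With identity projections the shapes are easy to inspect, which is how the test below exercises it.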
4.5 Maintaining Model Consistency
One remaining problem that requires special attention is consis-
tency. During the demand-driven analysis, pattern-matchers are not
used to project concrete values, but abstract values. If one of the
abstract values is not precise enough the projection operation may
become ill-defined. In general, abstract values abstract a set of concrete
values. Suppose that v̄ 1 is such an imprecise abstract value. Let
v̄ 2 be a modeling pattern that contains v̄ 1 as a sub-pattern.
We want to project v̄ 2 in order to obtain the resulting abstract value.
A sensible definition for the projection of v̄ 2 consists in choosing a
modeling pattern w̄ in the pattern-matcher M such that all concrete
values abstracted by v̄ 2 are abstracted by w̄. Unfortunately, such a
w̄ may not exist, as it may take the union of many modeling patterns
of M to properly abstract all the concrete values abstracted by v̄ 2 .
Here is an example to help clarify this notion. The following
pattern-matcher M, intended for the projection of values, is inconsistent:
{ #f, (#, #f), (#, (#f, #)),
l # , (#, l # ), (#, (l # , #)),
(#, ((#), #)) }
Note that the pattern-matcher is finite, exhaustive, and non-redundant
but nevertheless inconsistent. Before explaining why, let us
see how it models the values. First, it distinguishes the values by
their (top-level) type. Second, it distinguishes the pairs by the type
of the value in the CDR-field. Finally, the pairs containing a sub-
pair in the CDR-field are distinguished by the type of the value in
the CAR-field of the sub-pair. Note that the CAR-field of the sub-
pairs is more precisely described than the CAR-field of the pairs
themselves. This is the inconsistency. Problems occur when we try
to make a pair with another pair in the CDR-field. Let us try to make
[Figure 10 is not reproduced here; it defines the pattern-matcher data structures: a PM is an O-node (switching on a value's type), a C-node (switching on a closure's label), or a leaf holding the resulting #mPat# or #mkPat#.]
Figure 10. Implementation of the pattern-matchers
[Figure 11 is not reproduced here; it defines pm, the breadth-first projection algorithm: a queue of sub-patterns is threaded through the decision tree, O-nodes dispatch on the type of the dequeued pattern (enqueuing a pair's components), C-nodes dispatch on closure labels, and leaves return the projection.]
Figure 11. Pattern-matching algorithm
a pair with the values #f and (#, (#f, #)). We obtain the modeling
pattern v̄ = (#f, (#, (#f, #))) and we have to project it using M. It is
clear that we cannot non-ambiguously choose one of the modeling
patterns of M as an abstraction of all the values abstracted by v̄.
In order to avoid inconsistencies, each time an entity is refined in
one of the pattern-matchers, we must ensure that the abstract values
and the contours on which the refined entity depends are sufficiently
precise. If not, cascaded refinements are propagated to the
dependencies of the entity. This cascade terminates since, for each
propagation, the depth at which the extra details are required decreases.
4.6 Pattern-Matcher Implementation
Our implementation of the pattern-matchers is quite simple. A
pattern-matcher is basically a decision tree doing a breadth-first
inspection of the modeling pattern or modeling contour pattern to
project. An internal node of the decision tree is either an O-node
(object) or a C-node (closure). A leaf contains an abstract value
or a contour which is the result of the projection. Each O-node
is either a three-way switch that depends on the type of the object
to inspect or is a one-way catch-all that ignores the object and
continues with its single child. Each C-node is either a multi-way
switch that depends on the label of the closure to inspect or is a one-way
catch-all that ignores the closure and continues with its single
child.
Figure 10 presents the data structures used to implement the
pattern-matchers.
[Figure 12 (partly garbled in this copy) defines the syntax of demands: show a # B bound demands (a #a-var#, B #bound#), split s P demands (s #splittee#, P #sPat#), show-empty demands on #d-var#, and bad-call l f v k demands, together with the abstract-variable syntaxes #a-var# (a l,k ), #b-var#, #g-var# (g c,k ), and #d-var# (d l,k ), each indexed by a label and a contour.]
Figure 12. Syntax of demands
The pattern-matching algorithm is presented in Figure 11. The
breadth-first traversal is done using a queue. The contents of the
queue always remain synchronized with the position in the decision
tree. That is, when a C-node is reached, a closure is next on
the queue, and when a leaf is reached, the queue is empty. The initial
queue for an abstract value projection contains only the abstract
value itself. The initial queue for a contour projection contains all
the abstract values contained in the contour, with the first abstract
value of the contour being the first to be extracted from the queue.
To keep the notation terse, we use the view operation # both to enqueue
and dequeue values. When enqueuing, the queue is on the
left of #. When dequeuing, the queue is on the right of #. The
empty queue is denoted by [ ].
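A stripped-down version of this traversal (with an assumed node encoding, not the structures of Figure 10) looks like:

```python
# Breadth-first projection through a decision tree. O-nodes dispatch on
# the type of the dequeued pattern, enqueuing a pair's components for
# deeper nodes; leaves hold the projection result.
from collections import deque

def project(tree, pattern):
    q = deque([pattern])
    node = tree
    while node[0] != "leaf":
        v = q.popleft()
        branches = node[1]
        if v == "#f":
            node = branches["bool"]
        elif isinstance(v, tuple):            # ("pair", P1, P2)
            q.append(v[1])
            q.append(v[2])
            node = branches["pair"]
        else:                                 # closure pattern
            node = branches["closure"]
    return node[1]

# The simplest legal value matcher: distinguish only the three types.
simple = ("onode", {"bool":    ("leaf", "#f"),
                    "pair":    ("leaf", "(#, #)"),
                    "closure": ("leaf", "l#")})
```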
The pattern-matchers used in the initial abstract model are the fol-
lowing. Note that we describe them in terms of set theory and not
in terms of the actual data structures. The value pattern-matcher
contains one abstract Boolean, one abstract pair, and one abstract
closure for each l-expression. For each l-expression, its corresponding
contour pattern-matcher is the trivial one. Note that they
are consistent as the pattern-matchers are almost blind to any
detail. The only inspection that is performed is the switch on the label
when projecting a closure. However, the projection of closures
always involves closures with explicit labels since it only occurs
through the use of the abstract model function cc.
We do not give a detailed description of the process of refining a
pattern-matcher because it would be lengthy and it is not conceptually
difficult.
5 Demand Processing
Figure
12 presents the syntax of demands. The syntax of the demands
builds on the syntax of the patterns. There are show demands,
split demands, and bad call demands.
5.1 Meaning of Demands
A show demand asks for the demonstration of a certain property.
For example, it might ask for demonstration that a particular abstract
variable must only contain pairs, meaning that a certain expression,
in a certain evaluation context, must only evaluate to pairs.
Or it might ask for the demonstration that a particular abstract variable
must be empty, meaning that a certain expression, in a certain
evaluation context, must not get evaluated. Note that the bound
V al T rues represents the values acting as true in the conditionals.
That is, V al T rues = V al C ∪ V al P .
A bad call demand asks for the demonstration that a particular function
call cannot happen. It specifies where and in which contour
the bad call currently happens, which function is called, and which
value is passed as an argument. Of course, except for the label, the
parameters of the demand are abstract.
A split demand asks that proper modifications be done on the model
in such a way that the splittee is no longer spread on the pattern.
Take this demand for example: split a l,k #. It asks that the abstract
values contained in a l,k be distinguished by their type (because of
the pattern #). If the variable a l,k currently contains abstract values
of different types, then these values are said to be spread on the
pattern #. Then the model ought to be modified in such a way that
the contour k has been subdivided into a number of sub-contours
k 1 , . , k n , such that each a l,k i contains only abstract values of a
single type. In case of success, one might observe that a l,k 1 contains
only pairs, a l,k 2 only closures, a l,k 3 nothing, a l,k 4 only #f, etc.
That is, the value of expression e l in contour k would have
been split according to the type.
In a split demand, the splittee can be an aspect of the abstract model
(when it is V al C or V al P ) or an abstract variable from one of
the a, b, or g matrices. A splittee in #b-var# does not denote an
ordinary entry in the b matrix. It does indicate the name of the
source variable but it also gives a label and a contour where this
variable is referenced (not bound).
Only the values that intersect with the pattern are concerned by the
split. For example, if the demand is split a l,k (#) and a l,k =
{#f, (#f, #f), (#f, l # )}, the only thing that matters is that the two
abstract pairs must be separated. What happens with the Boolean is
not important because it does not intersect with the pattern (#).
Normally, a show demand is emitted because the analysis has determined
that, if the specified property was false, then a type error will
most plausibly happen in the real program. Similarly for a bad call
demand. Unfortunately, split demands do not have such a natural
interpretation. They are a purely artificial creation necessary for the
demand-driven analysis to perform its task. Moreover, during the
concrete evaluation of the program, an expression, in a particular
evaluation context, evaluates to exactly one value. So splitting in
the concrete evaluation is meaningless.
5.2 Demand-Driven Analysis Algorithm
The main algorithm of the demand-driven analysis is relatively simple.
It is sketched in Figure 13. Basically, it is an analysis/model-
update cycle. The analysis phase analyses the program using
the framework parameterized by the current abstract model. The
model-update phase computes, when possible, a model-updating
demand based on the current analysis results and applies it to the
model. Note that the successive updates of the abstract model
make it increasingly refined and the analysis results that it helps to
produce improve monotonically. Consequently, any run time type
check that is proved to be redundant at some point remains as such
for the rest of the cycle.
The steps performed during the model-update phase are: the initial
demands are gathered; demand processing (of the demands that
do not modify the model) and call monitoring occur until no new
demands can be generated; if there are model-updating demands,
the best one is selected and applied on the model. The model-
modifying demands are the split demands in which the splittee is
V al C , V al P , or a member of #b-var#.
create initial model
analyze program with model
while there is time left
set demand pool to initial demands
make the set of modifying demands empty
repeat
monitor call sites (l, k) that are marked
while there is time left and
there are new demands in the pool do
pick a new demand D in the pool
if D is a modifying demand then
insert D in the modifying demands set
else
process D
add the returned demands to the pool
until there is no time left or
there are no call sites to monitor
if the modifying demands set is empty then
exit
else
pick the best modifying demand D
modify model with D
re-analyze program with new model
Figure
13. Main demand-driven analysis algorithm
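The loop of Figure 13 can be condensed into a Python sketch; `analyze`, `initial_demands`, `process`, and `apply_update` are stand-ins for the pieces described in the surrounding sections, and call-site monitoring is omitted for brevity:

```python
# Analysis / model-update cycle: process demands under a time budget,
# collect model-modifying demands, apply one, and re-analyze. The
# results improve monotonically across iterations.
def drive(model, budget, analyze, initial_demands, process, apply_update):
    results = analyze(model)
    while budget > 0:
        pool = list(initial_demands(results))
        modifying, seen = [], set()
        while budget > 0 and pool:
            dem = pool.pop()
            if dem in seen:
                continue
            seen.add(dem)
            budget -= 1                        # one time unit per demand
            if dem[0] == "split":              # model-modifying demand
                modifying.append(dem)
            else:
                pool.extend(process(dem, results))
        if not modifying:
            break                              # all checks proved, or stuck
        model = apply_update(model, modifying[0])   # pick the "best" demand
        results = analyze(model)
    return model, results
```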
The initial demands are those that we obtain by responding to the
needs of the optimizer and not by demand processing. That is, if
non-closures may be called or non-pairs may go through a strictly
"pairwise" operation, bound demands asking a demonstration that
these violations do not really occur are generated. More precisely,
for a call ( l e l # e l ## ) and for k # Cont , if a l # ,k is not within V al C ,
then the initial demand show a l # ,k # V al C is generated. And for a
pair-access expression (car l e l # ) or (cdr l e l # ) and for k # Cont , if
a l # ,k is not within V al P , then the initial demand show a l # ,k # V al P
is generated.
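As a sketch (with assumed data shapes for the sites and results), initial-demand generation walks the call and pair-access sites and compares the analysis results against the required domain:

```python
# Generate initial bound demands: one per (site, contour) whose operand
# values are not already confined to the required abstract domain.
def initial_demands(sites, a, val_c, val_p, contours):
    demands = []
    for kind, operand_label in sites:          # kind: "call", "car" or "cdr"
        need = val_c if kind == "call" else val_p
        name = "Val_C" if kind == "call" else "Val_P"
        for k in contours:
            if not a.get((operand_label, k), set()) <= need:
                demands.append(("show", ("a", operand_label, k), name))
    return demands
```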
The criterion used to select a good model-updating demand in our
implementation is described in Section 6.
The analysis/model-update cycle continues until there is no more
time left or no model updates have been proposed in the model-
update phase. Indeed, it is the user of a compiler including our
demand-driven analysis who determines the bound on the computational
effort invested in the analysis of the program. The time is
not necessarily wall clock time. It may be any measure. In our
implementation, a unit of time allows the algorithm to process a
demand. Two reasons may cause the algorithm to stop by lack of
model-updating demands. One is that there are no more initial
demands. That means that all the run time type checks of the program
have been shown to be redundant. The other is that there remain
initial demands but the current analysis results are mixed in such a
way that the demand processing does not lead to the generation of
a model-updating demand.
5.3 Demand Processing
5.3.1 Show In Demands
Let us now present the processing of demands. We begin with the
processing of show #a-var# # #bound# demands. Let us consider
the demand show a l,k # B. There are 3 cases. First case, if the values
in a l,k all lie inside of the bound B, then the demand is trivially
successful. Nothing has to be done in order to obtain the desired
demonstration.
if a l,k # B: nothing more to do.
Second case, if the values in a l,k all lie outside of the bound B, then
it must be shown that the expression e l does not get evaluated in
the abstract contour k. This is a sufficient and necessary condition
because, if e l is evaluated in contour k, any value it returns is outside
of the bound, causing the original demand to fail. And if e l does not
get evaluated in contour k, then we can conclude that any value in
a l,k lies inside the bound.
if a l,k # B = /0:
# show d l,k = /0
Last case, some values in a l,k lie inside of B and some do not.
The only sensible thing to do is to first split the contour k into sub-contours
in such a way that it becomes clear whether the values all
lie inside of B or they all lie outside of B. Since the bounds are all
simple, splitting on the type of the objects is sufficient. Once (we
would better say "if") the split demand is successful, the original
demand can be processed again.
otherwise:
# split a l,k #
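The three cases can be written as one dispatch function returning the follow-up demands (a sketch of this section's rules, not the paper's code):

```python
# Processing "show a_{l,k} within B": success, a show-empty demand, or
# a split demand, depending on how a_{l,k} sits relative to the bound B.
def process_show_in(a_lk, bound, l, k):
    if a_lk <= bound:                 # all values inside: trivially done
        return []
    if not (a_lk & bound):            # all values outside: e_l must be dead in k
        return [("show-empty", ("d", l, k))]
    return [("split", ("a", l, k), "#")]   # mixed: split the contour by type
```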
5.3.2 Show Empty Demands
We continue with the processing of show #d-var# = /0 demands. Let
us consider the demand show d l,k = /0. There are many cases in
its processing. First, if the variable d l,k is already empty, then the
demand is trivially successful.
if d l,k = /0: nothing more to do.
Otherwise, the fact that e l does get evaluated or not in contour k
depends a lot on its parent expression, if it has one at all. If it does
not have a parent expression, it means that e l is the main expression
of the program and, consequently, there is no possibility to prove
that e l does not get evaluated in contour k. 8
if e l is the main expression: the demand fails.
In case e l does have a parent expression, let e l # be that expression.
Let us consider the case where e l # is a l-expression. It implies that
e l is the body of e l # . Note that the evaluation of e l in contour k has
no direct connection with the evaluation of e l # in contour k. In fact,
e l gets evaluated in contour k if a closure c, resulting from the evaluation
of e l in some contour, gets called somewhere (at expression
e l # ) in some (other) contour k # on a certain argument v in such a
way that the resulting contour call(l # , c,v,k # ) in which the body of
c must be evaluated is k. So the processing of the demand consists
in emitting a bad call demand for each such abstract call. Note how
the log matrices k and c are used to recover the circumstances under
which the contours and closures were created.
8 In fact, it is a little more complicated than that. We suppose
here that the abstract variables contain the minimal solution for
the evaluation constraints generated by the analysis framework. In
these conditions, for l being the label of the program main
expression, d l,k is non-empty if and only if k is the main abstract contour.
For any other contour k # , d l,k # is empty.
if e l # is a l-expression: # a bad-call demand for each abstract call leading to k
Next, let us consider the case where e l # is a conditional. A conditional
has three sub-expressions, so we first consider the case where
e l is the then-branch of e l # . Clearly, it is sufficient to show that e l #
is not evaluated at all in contour k. However, such a requirement is
abusive. The sufficient and necessary condition for a then-branch
to be evaluated (or not to be evaluated) is for the test to return (not
to return, resp.) some true values.
if e l is the then-branch of conditional e l # with test e l ## :
# show a l ## ,k # V al B
The case where e l is the else-branch of the conditional is analogous.
The else-branch cannot get evaluated if the test always returns true
values.
if e l is the else-branch of conditional e l # with test e l ## :
# show a l ## ,k # V al T rues
The case where e l is the test of the conditional can be treated as a
default case. The default case concerns all situations not explicitly
treated above. In the default case, to prove that e l does not get
evaluated in contour k requires a demonstration that e l does not get
evaluated in contour k either. This is obvious since the evaluation
of a call, cons, car, cdr, or pair? expression necessarily involves
the evaluation of all its sub-expressions. Similarly for the test sub-expression
in a conditional.
otherwise:
# show d l # ,k = /0
5.3.3 Bad Call Demands
We next describe how the bad call demands are processed. Let
us consider this demand: bad-call l f v k. The expression e l is
necessarily a call. There are two cases: either
the specified call does not occur, or it does. If the call does not
occur, then the demand is trivially successful. 9
In the other case, the specified call is noted in the bad call log.
Another note is kept in order to later take care of all the bad calls
at e^l in contour k. We call this operation monitoring e^l in contour
k. More than one bad call may concern the same expression and
the same contour. Because the monitoring is a crucial operation, it
should have access to bad call information that is as accurate as
possible. So, it is preferable to postpone the monitoring as much as
possible.
otherwise:
in the bad call log.
Flag (l, k) as a candidate for monitoring.
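The bad-call bookkeeping just described can be pictured with a small sketch (Python, with illustrative names; the paper's implementation is in Scheme and certainly differs in detail): bad closure-argument pairs are logged per call site and contour, and the site is merely flagged, so that monitoring can be postponed until the log is as complete as possible.

```python
# Illustrative sketch only: the names and data layout are assumptions,
# not the paper's actual Scheme implementation.

bad_call_log = {}   # (label, contour) -> set of bad (closure, argument) pairs
to_monitor = set()  # call sites flagged as candidates for later monitoring

def record_bad_call(label, contour, closure, argument):
    """Note a bad call and flag its site; monitoring itself is postponed
    so that it sees bad-call information that is as accurate as possible."""
    bad_call_log.setdefault((label, contour), set()).add((closure, argument))
    to_monitor.add((label, contour))

# Two bad pairs at the same site and contour accumulate in one log entry.
record_bad_call(7, "k0", "c1", "v2")
record_bad_call(7, "k0", "c2", "v1")
```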
9 Actually, in the current implementation, this case cannot occur.
The demand is generated precisely because the specified call was
found in the k matrix. However, previous implementations differed
in the way demands were generated and bad call demands could be
emitted that were later proved to be trivially successful.
5.3.4 Split Demands
Direct Model Split
Let us now present the processing of the split demands. The processing
differs considerably depending on the splittee. We start by
describing the processing of the following demands: split V al C P
and split V al P P. These are easy to process because they explicitly
prescribe a modification to the abstract model. The modification
can always be accomplished successfully.
Update M v with P
a-variables
The most involved part of the demand processing is the processing
of the split ⟨a-var⟩ ⟨sPat⟩ demands. Such a demand asks for a
splitting of the value of an expression in a certain contour, so that
there is no more spreading of the values on the specified pattern. Let
us consider the demand split a l,k P. The first possibility is that there
is actually no spreading. Then the demand is trivially successful.
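The "spreading" test used throughout this section can be made concrete with a small sketch (Python; the representation and names are ours, purely for illustration): a split pattern is modeled as a function assigning each abstract value to a bucket, and a value set is spread on the pattern exactly when it meets more than one bucket.

```python
def buckets(pattern, values):
    """Group abstract values by the bucket the pattern assigns them to.
    Here a pattern is modeled as a function from value to bucket label."""
    out = {}
    for v in values:
        out.setdefault(pattern(v), set()).add(v)
    return out

def is_spread(pattern, values):
    """A value set is spread on a pattern iff it meets >= 2 buckets."""
    return len(buckets(pattern, values)) > 1

# A pattern splitting by type tag: pairs (modeled as tuples) vs. Booleans.
type_pattern = lambda v: "pair" if isinstance(v, tuple) else "bool"

print(is_spread(type_pattern, {True, False}))            # False: one bucket
print(is_spread(type_pattern, {True, (False, False)}))   # True: two buckets
```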
However, if there is spreading, then expression e^l has to be inspected,
as the nature of the computations varies greatly between the
different kinds of expressions. Let us examine each kind of expression, one by
one. First, we consider the false constant. Note that this expression
can only evaluate to #f. So its value cannot be spread on P, no
matter which split pattern P is. For completeness, we mention the
processing of the demand nevertheless.
Second, e l may be a variable reference. Processing this demand is
straightforward and it translates into a split demand onto a #b-var#.
Third, e l may be a call. Clearly, this case is the most difficult to
deal with. This is because of the way a call expression is abstractly
evaluated. Potentially many closures are present in the caller position
and many values are present in the argument position. It follows
that a Cartesian product of all possible invocations must be
done. In turn, each invocation produces a set that potentially contains
many return values. So, in order to succeed with the split,
each set of return values that is spread on the pattern must be split.
And the sub-expressions of the call must be split in such a way that
no invocation producing non-spread return values can occur in the
same contour as another invocation producing incompatible non-spread
return values. This second task is done with the help of the
function SC (Split Couples) that prescribes split patterns that separate
all the incompatible couples. An example follows the formal
description of the processing of the split demand on a call.
split g c,k # P c # a l ,k #V al C #
# split a l ,k
# split a l # ,k
The following example illustrates the processing of the demand.
Suppose that we want to process the demand split a_{l,k} P; that two
closures, c_1 and c_2, may result from the evaluation of the operator
sub-expression; and that two values, v_1 and v_2, may be passed as
arguments, with k_{ij} denoting the contour created when c_i is called
on v_j. Closure c_1, when called on v_2, and closure c_2, when called
on v_1, both return values that are spread on P. It follows that their
return values in those circumstances must be split. So, g_{c_1,k_{12}}
and g_{c_2,k_{21}} must be split by the pattern P. It is necessary for
these two splits to succeed in order to make our original demand
succeed. It is not sufficient, however.
We cannot allow c_1 to be called on v_1 and c_2 to be called on v_2
under the same contour k, because the union of their return
values is spread on P. They are incompatible. This is where the SC
function comes into play and its use:
returns either ({l # }, /
0,{#}). In either case, a split according
to the prescribed pattern, if successful, would make the two incompatible
calls occur in different contours. If we suppose that the first
case happens, the result of processing the original demand is:
split
split a l # ,k l #
Fourth, e^l may be a λ-expression. The processing of this demand is
simple as it reduces to a split on the abstract model of closures.
Fifth, let us consider the case where e l is a conditional. Two cases
are possible: the first case is that at least one of the branches is
spread on the pattern; the second is that each branch causes no
spreading on the pattern but they are incompatible and the test sub-expression
evaluates to both true and false values. In the first case,
a conservative approach consists in splitting the branches that cause
the spreading.
# split a l (n) ,k P l (n)
In the second case, it is sufficient to split on the type of the test
sub-expression, as determining the type of the test sub-expression
allows one to determine which of the two branches is taken, and
consequently to know that the value of the conditional is equal to
that of one of the two branches.
# split a l # ,k #
Sixth, our expression e^l may be a pair construction. The fact that
the value of e^l is spread on the pattern implies, first, that the pattern
has the form (P′ . P″) and, second, that the value of one of the two
sub-expressions of e^l is spread on its corresponding sub-pattern (P′
or P″). In either case, the demand is processed by splitting the
appropriate sub-expression by the appropriate sub-pattern.
(cons l e l e l #
# split a l # ,k P
(cons l e l e l #
# split a l # ,k P #
Seventh, e l may be a car-expression. In order to split the value
of e l on P, the sub-expression has to be split on (P, #). However,
there is the possibility that the abstract model of the pairs is not
precise enough to abstract the pairs up to the level of detail required
by (P, #). If not, the model of the pairs has to be split first. If it is,
the split on the sub-expression can proceed as planned.
al P is precise enough for (P, #):
# split a l # ,k (P, #)
al P (P, #)
Eighth, if e l is a cdr-expression, the processing is similar to that of
a car-expression.
al P is precise enough for (#, P):
# split a l # ,k (#, P)
Ninth, e^l must be a pair?-expression. Processing the demand simply
consists in doing the same split on the sub-expression. To
see why, it is important to recall that, if this case is currently being
considered, it is because a_{l,k} is spread on P. The type of the
sub-expression must be found in order to find the type of the
expression, so the same split is required on the sub-expression,
since all the pairs of the pair?-expression come from its
sub-expression. P cannot be λ_{l′} or λ_{l′} k′, for l′ ∈ Lab, k′ ∈ ⟨mkPat⟩,
because e^l can only evaluate to Booleans and pairs.
# split a l # ,k P
b-variables
The next kind of split demands have a #b-var# as a splittee. Recall
that a #b-var# indicates the name of a program variable and the
label and contour where a reference to that variable occurs. Let
us consider this particular demand: split b x,k,l P. Recall also that
the contour k is a modeling contour pattern which consists in a list
of modeling patterns, one per variable in the lexical environment
visible from the expression e^l. Each modeling pattern represents a
kind of bound in which the value of the corresponding variable is
guaranteed to lie. The first modeling pattern corresponds to the
innermost variable; the last corresponds to the outermost.
Note that the analysis framework does not compute the value of
variable references using these bounds. As far as the framework is
concerned, the whole contour is just a name for a particular evaluation
context. In the framework, a reference to a variable x is
computed either by inspecting the abstract variable b_{x,k}, if x is the
innermost variable, or by translating it into a reference to x from the
label l′ of the λ-expression immediately surrounding e^l and the contour
k′ in which that λ-expression got evaluated, creating a closure that
later got invoked, leading to the evaluation of its body in contour
k. For the details on variable references in the analysis framework,
see [4]. Nonetheless, because of the way we implement the abstract
model, a reference to a variable x from a label l, and in a contour k,
always produces values that lie inside of the bound corresponding
to x in k.
Consequently, a split on a program variable involves a certain number
of splits on the abstract models of call and cc. Moreover, consistency
between abstract values also prescribes multiple splits on
the abstract model. For example, if contour k results from the call
of closure λ_l k′ on a value v at label l″ in contour k′, then k
cannot be more precise than k′
about the program variable bounds it shares with contour k′. In
turn, if closure λ_l k′ results from the evaluation of e^l in contour k″,
then k′ cannot be more precise
than k″ about the program variable bounds it shares with contour
k″. It follows that a split on a program variable, which can be seen
as a refining of its bound in the local contour, requires the refining
of a chain of contours and closure environments until a point is
reached where the contour to refine does not share the variable with
the closure leading to its creation.
Now, if we come back to the processing of split b x,k,l P, the first
thing that must be verified is whether a reference to x from e l in
contour k produces values that are spread on pattern P. We denote
such a variable reference by ref(x, k, l). If no spreading occurs, 10
the demand is trivially successful; otherwise, modifications to the
model must be made.
otherwise:
with
Update M v with l l m+1
Update M l m+1 with (P m+1 P #
Update M v with l l n
Update M l n with (P n . P m+1 P #
where
(l l m x. (l l m+1 y m+1 . (l l n y n . x l .)
is the λ-expression binding x
g-variables
The last kind of demand is the split demand with a ⟨g-var⟩ as a
splittee. The processing of such a demand is straightforward, since
the return value of a closure is the result of the evaluation of its
body. Let us consider this particular demand: split g_{c,k} P. In case
the return value is not spread on the pattern, the demand is trivially
successful.
10 Once again, this case cannot occur in the current implementation.
otherwise:
# split a l # ,k P
5.3.5 Call Site Monitoring
The processing rules have been given for all the demands. However,
we add here the description of the monitoring of call sites. The
monitoring of call sites is pretty similar to the processing of the
demand split a l,k P where e l is a call. The difference comes from
the fact that, with the monitoring, effort is made in order to prove
that the bad calls do not occur. Let us consider the monitoring of
call expression (^l e^{l′} e^{l″}) in contour k. Let L_BC denote the bad call
log. Potentially many closures may result from the evaluation of e^{l′},
and potentially many values may result from the evaluation of e^{l″}.
Among all the possible closure-argument pairs, a certain number
may be marked as bad in the bad call log and the others not. If
no pair is marked as bad, then the monitoring of e l in k is trivially
successful.
if # (a l ,k #V al C)-a l # ,k #L BC (l,
0:
On the contrary, if all the pairs are marked as bad calls, then a demand
is emitted asking to show that the call does not get evaluated
at all.
# show d
But in the general case, there are marked pairs and non-marked
pairs occurring at the call site. It is tempting to emit a demand D
asking for a proof that the call does not get evaluated at all. That would
be simple, but it would not be a good idea. The non-marked pairs
may abstract actual computations in the concrete evaluation of the
program and, consequently, there would be no hope of ever making
D successful. 11 What has to be done is to separate, using splits, the
pairs that are marked and the pairs that are not. The (overloaded)
SC function is used once again.
otherwise:
# split a l # ,k
5.3.6 The Split Couples Function
We conclude this section with a short description of the SC function.
SC is used for two different tasks: splitting closure-argument pairs
11 This is because an analysis done using the framework is conservative
(see [4]). That is, the computations made in the abstract
interpretation abstract at least all the computations made in the concrete
interpretation. So, it is impossible to prove that an abstract
invocation does not occur if it has a concrete counterpart occurring
in the concrete interpretation.
according to the bucket in which the return values fall relative to
a split pattern, and splitting closure-argument pairs depending on
whether they are considered bad calls or not. In fact, those two
tasks are very similar. In both cases, the set of pairs is partitioned
into equivalence classes that are given either by the split pattern
bucket or by the badness of the call. In order to separate two pairs
belonging to different classes, say (v_1, v_2) and (w_1, w_2), it is sufficient
to provide a split that separates v_1 from w_1 or a split that separates
v_2 from w_2. So, what SC has to do is to prescribe a set of splits
to perform only on the first component of the pairs and another
set of splits to perform only on the second component such that
any two pairs from different classes would be separated. This is
clearly possible since prescribing splits intended to separate any
first component from any other is a simple task. Similarly for the
second components. This way, any pair would be separated from
all the others. Doing so would be overly aggressive, however, as
there are usually much smaller sets of splits that are sufficient to
separate the pairs.
Our implementation of SC proceeds as follows. It first computes the
equivalence classes. Next, each pair is converted into a genuine
abstract pair (a modeling pattern). Then, by doing a breadth-first
traversal of all the pairs simultaneously, splitting strategies are elaborated
and compared. At the end, the strategy requiring the smallest
number of splits is obtained. Prescribing as few splits as possible
is important because each of the proposed splits will have to be applied
to one of the two sub-expressions of a call expression, and
these sub-expressions may themselves be expressions that are hard
to split (such as calls).
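The partitioning step of SC can be sketched as follows (Python; a deliberately naive version under our own assumptions, without the breadth-first search over modeling patterns or the minimization described above): for every two pairs in different classes, prescribe a split on whichever component already distinguishes them.

```python
from itertools import combinations

def sc(classes):
    """classes: list of sets of (closure, argument) pairs, one set per
    equivalence class.  Returns (first_splits, second_splits): the pairs
    of component values that must be told apart, preferring splits on
    the first (caller) component."""
    first, second = set(), set()
    for cls_a, cls_b in combinations(classes, 2):
        for (c1, v1) in cls_a:
            for (c2, v2) in cls_b:
                if c1 != c2:
                    first.add(frozenset((c1, c2)))
                elif v1 != v2:
                    second.add(frozenset((v1, v2)))
    return first, second

# Two incompatible invocations with distinct closures: a single split on
# the caller position separates them; no split on the argument is needed.
first, second = sc([{("c1", "v1")}, {("c2", "v2")}])
print(len(first), len(second))   # 1 0
```

A real SC would additionally search for the cheapest strategy, since this naive version may prescribe far more splits than necessary.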
6 Experimental Results
6.1 Current Implementation
Our current implementation of the demand-driven analysis is merely
a prototype written in Scheme to experiment with the analysis
approach. No effort has been put into making it fast or space-efficient.
For instance, abstract values are implemented with lists
and symbols and closely resemble the syntax we gave for the modeling
patterns. Each re-analysis phase uses these data without converting
them into numbers or bit-vectors. And a projection
using the pattern-matchers is done for each use of the cc, pc, and
call functions.
Aside from the way demands are processed, many variants of the
main algorithm have been tried. The variant that we present in Section
5 is the first method that provided interesting results. Previous
variants tried to be more clever by doing model changes
concurrently with demand processing. This led to many complications:
demands could contain values and contours expressed in
terms of an older model; a re-analysis was periodically done but
not necessarily following each model update, which caused some
demands not to see the benefits of a split on the model that had
just been done; a complex system of success and failure propagation,
sequencing of processing, and periodic processing resuming
was necessary; etc. The strength of the current variant is that, after
each model update, a re-analysis is done and the whole demand
propagation is restarted from scratch, greatly benefiting from the
new analysis results.
In the current variant, we tried different approaches in the way the
best model-updating demand is selected to be applied on the model.
At first, we applied all the model-updating demands that were proposed
by the demand processing phase. This led to an exaggerated
Figure 14. Source of the map-hard benchmark
refining of the model, leading to massive space use. So we decided
to make a selection of one of the demands according to a certain
criterion. The first criterion was to measure how much the abstract
model increases in size if a particular demand is selected. While
it helped in controlling the increase in size of the model, it did not
choose very wisely with respect to obtaining informative analysis
results. That is, the new results were expressed with finer values,
but the knowledge about the program data flow was not always
increased. Moreover, it did not necessarily help in controlling the
increase in size of the analysis results. The second criterion, which
we use now, measures how much the abstract model plus the analysis
results increase in size. This criterion really makes a difference,
although the demand selection step involves re-analyzing the program
for all candidate demands.
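The selection criterion can be sketched as a greedy loop (Python; `apply_and_reanalyze` and `size` are stand-ins of our own for the real machinery, which re-analyzes the program for each candidate demand):

```python
def select_demand(model, demands, apply_and_reanalyze, size):
    """Tentatively apply each candidate model-updating demand, re-analyze,
    and keep the demand whose combined growth of abstract model plus
    analysis results is smallest."""
    best, best_growth = None, None
    for d in demands:
        new_model, new_results = apply_and_reanalyze(model, d)
        growth = size(new_model) + size(new_results)
        if best is None or growth < best_growth:
            best, best_growth = d, growth
    return best

# Toy instantiation: demands are integers; applying demand d grows the
# model by d and the results by 2*d, so the cheapest demand wins.
pick = select_demand(
    model=0,
    demands=[5, 2, 9],
    apply_and_reanalyze=lambda m, d: (m + d, 2 * d),
    size=lambda x: x,
)
print(pick)   # 2
```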
6.2 Benchmarks
We experimented with a few small benchmark programs. Most of
the benchmarks involve numeric computations using naturals. Two
important remarks must be made. First, our mini-language does
not include letrec-expressions. This means that recursive functions
must be created using the Y combinator. Note that we wrote
our benchmarks in an extended language with let- and letrec-
expressions, and used a translator to reduce them into the base lan-
guage. We included two kinds of letrec translations: one in which
Y is defined once globally and all recursive functions are created
using it; one in which a private Y combinator is generated for each
letrec-expression. The first kind of translation really makes the
programs more intricate as all recursive functions are closures created
by Y. The second kind of translation loosely corresponds to
making the analysis able to handle letrec-expressions as a special
form. We made tests using both translation modes. Our second
remark concerns numbers. Our mini-language does not include
integers. Another translation step replaces integers and simple numeric
operators by lists of Booleans and functions, respectively. Thus, integers
are represented in unary as Peano numbers and operations on
the numbers proceed accordingly. This adds another level of difficulty
on top of the letrec-expression translation. For an example
of translation, see Appendix A.
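The number translation can be sketched as follows (Python, using tuples for cons cells and False for #f; the function names are ours, and the real translator of course emits mini-language terms rather than host-language functions):

```python
def to_peano(n):
    """Represent n in unary: #f for 0, one cons cell (#f . rest) per unit."""
    v = False
    for _ in range(n):
        v = (False, v)
    return v

def from_peano(p):
    """Count the cons cells back into a host integer."""
    n = 0
    while isinstance(p, tuple):
        n, p = n + 1, p[1]
    return n

def minus(x, y):
    """Unary subtraction: take the cdr of x once per unit of y."""
    while isinstance(y, tuple):
        x, y = x[1], y[1]
    return x

# The 2-1 benchmark, computed on the unary representation.
print(from_peano(minus(to_peano(2), to_peano(1))))   # 1
```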
Our benchmarks are the following. Cdr-safe contains the definition
of a function which checks its argument to verify that it is a pair
before doing the access. It can be analyzed perfectly well by a 1-cfa,
but not by a 0-cfa. Loop is an infinite loop. 2-1 computes the value
of (- 2 1). Map-easy uses the 'map' function on a short list of
pairs using two different operators. Map-hard repetitively uses the
'map' function on two different lists using two different operators.
The lists that are passed are growing longer and longer. This use of
'map' is mentioned in [7] as being impossible to analyze perfectly
well by any k-cfa. The source code of this benchmark is shown in
Figure
14. Fib, gcd, tak, and ack are classical numerical computa-
tions. N-queens counts the number of solutions for 4 queens. SKI
is an interpreter of expressions written with the well known S, K,
and I combinators. The interpreter runs an SKI program doing an
infinite loop. The combinators and the calls are encoded using pairs
and Booleans.
6.3 Results
Figure
15 presents the results of running our analysis on the bench-
marks. Each benchmark was analyzed when reduced with each
translation method (global and private Y). A time limit of 10000
"work units" was allowed for the analysis of each benchmark.
The machine running the benchmarks is a PC with a 1.2 GHz
Athlon CPU, 1 GByte RAM, and running RH Linux kernel 2.4.2.
Gambit-C 4.0 was used to compile the demand-driven analysis.
The column labeled "Y" indicates whether the Y combinator is
Global or Private. The next column indicates the size of the translated
benchmark in terms of the number of basic expressions. The
columns labeled "total", "pre", "during", and "post" indicate the
number of run time type checks still required in the program at
those moments, respectively: before any analysis is done, after
the analysis with the initial model is done, during, and after the
demand-driven analysis. Finally, the computation effort invested in
the analysis is measured both in terms of work units and CPU time.
The measure in column "total" is a purely syntactic one; it basically
counts the number of call-, car-, and cdr-expressions in the
program. The measure in "pre" is useful as a comparison between
the 0-cfa and our analysis. Indeed, the initial abstract model used in
our approach is quite similar to that implicitly used in the 0-cfa. An
entry like 2@23 in column "during" indicates that 2 run time type
checks are still required after having invested 23 work units in the
demand-driven analysis (this gives an idea of the convergence rate
of the analysis).
When we look at Figure 15, the most striking aspect of the results
is the small improvement that the full demand-driven analysis
obtains over the results of the 0-cfa. Two reasons
explain this fact. First, many run time type checks are completely
trivial to remove. For instance, every let-expression, once translated,
introduces an expression of the form ((λx. e_1) e_2). In turn,
the translation of each letrec-expression introduces 2 or 3 let-expressions,
depending on the translation method. It is so easy to
optimize such an expression that even a purely syntactic detection
would suffice. Second, type checks are not all equally difficult to
remove. The checks that are removed by the 0-cfa are removed because
it is "easy" to do so. The additional checks that are removed
by the demand-driven phase are more difficult ones. In fact, the
difficulty of the type checks seems to grow very rapidly as we come
close to the 100% mark. This statement is supported by the numbers
presented in [2], where a linear-time analysis, the sub-0-cfa, obtains
analysis results that are almost as useful to the optimizer as those
from the 0-cfa, despite its patent negligence in the manipulation of
the abstract values.
Note how translating with a private Y per letrec helps both the
0-cfa and the demand-driven analysis. In fact, except for the
n-queens benchmark, the demand-driven analysis is able to remove
all type checks when private Y combinators are used. The
success of the analysis varies considerably between benchmarks.
Y size total pre during post units time(s)
loop G
map-easy G
map-hard G 96 33 9 6@38 5@254 3@305 1@520 0 1399 76.26
n-queens G 372 121 51 51 10000 15899.39
ack G 162 7@473 6@543 5@1474 4@3584
Figure 15. Experimental results
unrolling
units 176 280 532 1276 3724
Figure 16. The effect of the size of a program on the analysis work
Moreover, it is not closely related to the size of the program. It
is more influenced by the style of the code. In order to evaluate
the performance of the analysis on similar programs, we conducted
experiments on a family of such programs. We modified the ack
benchmark by unrolling the recursion a certain number of times.
Translation with private Y is used. Figure 16 shows the results for
a range of unrolling levels. For each unrolling level i, the total
number of type checks in the resulting program is 43
optimization is done, 3 checks are still required after the program
is analyzed with the initial model, and all the checks are eliminated
when the demand-driven analysis finishes. We observe a somewhat
quadratic increase in the analysis times. This is certainly better than
the exponential behavior expected for a type analysis using
lexical-environment contours.
Conclusions
The type analysis presented in this paper produces high quality
results through the use of an adaptable abstract model. During
the analysis, the abstract model can be updated in response to the
specifics of the program while considering the needs of the optimizer.
This adaptivity is obtained by the processing of demands
that express, directly or indirectly, the needs of the optimizer. That
is, the model updates are demand-driven by the optimizer. Moreover,
the processing rules for the demands make our approach more
robust to differences in coding style.
The approach includes a flexible analysis framework that generates
analyses when provided with modeling parameters. We proposed
a modeling of the data that is based on patterns and described a
method to automatically compute useful modifications on the abstract
model. We gave a set of demands and processing rules for
them to compute useful model updates. Finally, we demonstrated
the power of the approach with some experiments, showing that it
analyzes precisely (and in relatively short time) a program that is
known to be impossible to analyze with the k-cfa. A complete
presentation of our contribution can be found in [3]. An in-depth
presentation of all the concepts and algorithms, along with the proofs
behind the most important theoretical results, is also found there.
Except for the ideas of abstract interpretation and flexible analyses,
the remainder of the presented work is, to the best of our
knowledge, original. Abstract interpretation is frequently used in the field
of static analysis (see [2, 7, 8, 9]). The k-cfa family of analyses
(see [8, 9]) can, to some extent, be considered as flexible. The
configurable analysis presented in [2] by Ashley and Dybvig can
produce an extended family of analyses, but at compiler implementation
time. Our analysis framework (see [4]) allows for more subtlety
and can be modified during the analysis.
We can think of many ways to continue research on this subject: extended
experiments comparing our approach to many other
analyses; improving the speed and memory consumption of the analysis;
incremental re-analysis (that is, if analysis results R_1 were obtained
by using model M_1, and model M_2 is a refinement of model M_1,
then compute new results R_2 efficiently); and better selection of the
model-updating demands. Moreover, language extensions should
be considered to handle a larger part of Scheme, and our
demand-driven approach should be extended to other analyses. There are also more
theoretical questions. We know that analyzing with the analysis
framework and adequate modeling parameters is always at least as
powerful as the k-cfa (or many other analyses). However, it requires
the parameters to be given by an oracle. What we do not know is
whether our current demand-driven approach is always at least as
powerful as the k-cfa family. We think it is not, but do not yet have
a proof.
Figure 17. The ack benchmark, before expansion
Figure 18. The ack benchmark, after expansion
Other researchers have worked on demand-driven analysis but in a
substantially different way (see the work of Duesterwald et al. [5],
Agrawal [1], and Heintze and Tardieu [6]). These approaches do
not have an abstract execution model that changes to suit the program.
Their goal is to adapt well-known analysis algorithms into
variants with which one can perform what amounts to a lazy evaluation
of the analysis results.
Acknowledgments
The authors thank the anonymous referees for their careful review
and the numerous constructive comments.
This work was supported in part by a grant from the Natural Sciences
and Engineering Research Council of Canada.
References
Simultaneous demand-driven data-flow and call graph analysis
A practical and flexible flow analysis for higher-order languages
A unified treatment of flow analysis in higher-order languages
Control flow analysis in Scheme
The semantics of Scheme control-flow analysis
Demand-driven computation of interprocedural data flow
Demand-driven pointer analysis
Simultaneous Demand-Driven Data-Flow and Call Graph Analysis | demand-driven analysis;type analysis;static analysis |
Meta-programming with names and necessity

Abstract. Meta-programming languages provide infrastructure to generate and execute object programs at run-time. In a typed setting, they contain a modal type constructor which classifies object code. These code types generally come in two flavors: closed and open. Closed code expressions can be invoked at run-time, but the computations over them are more rigid, and typically produce less efficient residual object programs. Open code provides better inlining and partial evaluation of object programs but, once constructed, expressions of this type cannot in general be evaluated. Recent work in this area has focused on combining the two notions into a sound system. We present a novel way to achieve this. It is based on adding the notion of names from the work on Nominal Logic and FreshML to the λ□-calculus of proof terms for the necessity fragment of modal logic S4. The resulting language provides more fine-grained control over free variables of object programs when compared to the existing languages for meta-programming. In addition, this approach lends itself well to the addition of intensional code analysis, i.e., the ability of meta programs to inspect and destruct object programs at run-time in a type-safe manner, which we also undertake.

Introduction
Meta-programming can be broadly defined as a discipline of algorithmic manipulation
of programs written in a certain object language, through a program written
in another (or meta) language. The operations on object programs that the meta
program may describe can be very diverse, and may include, among others:
generation, inspection, specialization, and, of course, execution of object programs at
run-time.
To illustrate the concept we present the following scenario, and refer to (Sheard,
2001) for a more comprehensive treatment. For example, rather than using one
general procedure to solve many different instances of a problem, a program can
generate specialized (and hence more efficient) subroutines for each particular case.
If the language is capable of executing thus-generated procedures, the program
can choose dynamically, depending on the run-time value of a certain variable or
expression, which one is most suitable to invoke. This is the idea behind the work on
run-time code generation (Lee & Leone, 1996; Wickline et al., 1998b; Wickline et al.,
1998a) and the functional programming concept of staged computation (Ershov,
1977; Glück & Jørgensen, 1995; Davies & Pfenning, 2001).
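The scenario can be illustrated with a small sketch (in Python, with string-based code generation and `eval` standing in for the typed code constructors discussed below; the function names are ours, purely for illustration):

```python
def power_gen(n):
    """Generate a residual program computing x**n with the recursion on n
    specialized away, then "execute" the generated object program."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return eval(f"lambda x: {body}")

# Choosing and invoking a specialized subroutine at run-time.
cube = power_gen(3)      # residual program: lambda x: x * x * x
print(cube(2))           # 8
```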
A. Nanevski, F. Pfenning
Languages in which object programs can not only be composed and executed
but also have their structure inspected add further advantages. In particular, efficiency
may benefit from various optimizations that can be performed knowing the
structure of the code. For example, (Griewank, 1989) reports on a way to reuse
common subexpressions of a numerical function in order to compute its value at a
certain point and the value of its n-dimensional gradient, but in such a way that the
complexity of both evaluations performed together does not grow with n. There are
other applications as well which seem to call for the capability to execute a certain
function and also inspect its structure: see (Rozas, 1993) for examples in computer
graphics and numerical analysis, and (Ramsey & Pfeffer, 2002) for an example in
machine learning and probabilistic modeling.
In this paper, we are concerned with typed functional languages for meta-programming;
even more precisely, we limit the considerations to only homogeneous
meta-programming, which is the especially simple case when the object and the
meta language are the same. Recent developments in this direction have been centered
around two particular modal lambda calculi: λ□ and λ○. The λ□-calculus
is the proof-term language for the modal logic S4, whose necessity constructor □
annotates valid propositions (Davies & Pfenning, 2001; Pfenning & Davies, 2001).
The type □A has been used in run-time code generation to classify generators of
code of type A (Wickline et al., 1998b; Wickline et al., 1998a). The λ○-calculus is
the proof-term language for discrete linear-time temporal logic, and the type ○A
classifies terms associated with the subsequent time moment. The intended application
of λ○ is in partial evaluation, because the typing annotation of a λ○-program
can be seen as a binding-time specification (Davies, 1996). Both calculi provide a
distinction between levels (or stages) of terms, and this explains their use in meta-
programming. The lowest level is the meta language, which is used to manipulate
the terms at the next level (terms of type □A in λ□ and ○A in λ○), which is
the meta language for the subsequent level containing another stratum of boxed or
circled types, etc.
For purposes of meta-programming, the type □A is also associated with closed
code - it classifies closed object terms of type A. On the other hand, the type ○A
is the type of postponed code, because it classifies object terms of type A which
are associated with the subsequent time moment. The operational semantics of λ○
allows reduction under object-level λ-binders, and that is why the postponed
code of λ○ is frequently conflated with the notion of open code.
This dichotomy between closed and open code has inspired most of the recent type
systems for meta-programming. The abstract concept of open code (not necessarily
that of λ○) is more general than closed code. In a specific programming environment
(as already observed by (Davies, 1996)), working with open code is more flexible
and results in better and more optimized residual object programs. However, we
also want to run the generated object programs when they are closed, and thus we
need a type system which integrates modal types for both closed and open code.
There have been several proposed type systems providing this expressiveness,
most notable being MetaML (Moggi et al., 1999; Taha, 1999; Calcagno et al., 2000;
Calcagno et al., 2001). MetaML defines its notion of open code to be that of the
postponed code of λ○ and then introduces closed code as a refinement - as open
code which happens to contain no free variables.
The approach in our calculus (which we call ν□) is the opposite. Rather than refining
the notion of postponed code of λ○, we relax the notion of closed code of λ□. We
start with the system of λ□, but provide the additional expressiveness by allowing
the code to contain specified object variables as free (and rudiments of this idea have
already been considered in (Nielsen, 2001)). If a given code expression depends on a
set of free variables, it will be reflected in its type. The object variables themselves
are represented by a separate semantic category of names (also called symbols or
atoms), which admits equality. The treatment of names is inspired by the work on
Nominal Logic and FreshML (Gabbay & Pitts, 2002; Pitts & Gabbay, 2000; Pitts,
2001; Gabbay, 2000). This design choice leads to a logically motivated and easily
extendable type system. For example, we describe in (Nanevski, 2002) an extension
with intensional code analysis which allows object expressions to be compared for
structural equality and destructed via pattern-matching, much in the same way as
one would work with any abstract syntax tree.
This paper is organized as follows: Section 2 is a brief exposition of prior work
on λ□. The type system of ν□ and its properties are described in Section 3, while
Section 4 describes parametric polymorphism in sets of names. We illustrate the
type system with example programs, before discussing the related work in Section 5.
2 Modal λ□-calculus
This section reviews the previous work on the modal λ□-calculus and its use in
meta-programming to separate, through the mechanism of types, the realms of
meta-level programs and object-level programs. The λ□-calculus is the proof-term
calculus for the necessitation fragment of modal logic S4 (Pfenning & Davies, 2001;
Davies & Pfenning, 2001). Chronologically, it came to be considered in functional
programming in the context of specialization for purposes of run-time code generation
(Wickline et al., 1998b; Wickline et al., 1998a). For example, consider the
exponentiation function, presented below in ML-like notation.

fun exp1 (n : int) (x : int) : int =
    if n = 0 then 1
    else x * exp1 (n - 1) x
The function exp1 : int -> int -> int is written in curried form so that it
can be applied when only a part of its input is known. For example, if an actual
parameter for n is available, exp1(n) returns a function for computing the n-th
power of its argument. In a practical implementation of this scenario, however, the
outcome of the partial instantiation will be a closure waiting to receive an actual
parameter for x before it proceeds with evaluation. Thus, one can argue that the
following reformulation of exp1 is preferable.
fun exp2 (n : int) : int -> int =
    if n = 0 then λx:int. 1
    else
        let val u = exp2 (n - 1)
        in
            λx:int. x * u(x)
        end
Indeed, when only n is provided, but not x, the expression exp2(n) performs computation
steps based on the value of n to produce a residual function specialized
for computing the n-th power of its argument. In particular, the obtained residual
function will not perform any operations or take decisions at run-time based on
the value of n; in fact, it does not even depend on n - all the computation steps
dependent on n have been taken during the specialization.
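The generator/residual distinction can be mimicked in any language with first-class functions. The following Python sketch is an illustrative analogue, not the paper's ML code; the names exp1 and exp2 simply mirror the functions above. The staged version performs all n-dependent work up front and returns a residual function of x alone:

```python
def exp1(n, x):
    # Unstaged version: re-examines n on every call.
    return 1 if n == 0 else x * exp1(n - 1, x)

def exp2(n):
    # Staged version: the recursion on n happens now, once.
    if n == 0:
        return lambda x: 1
    u = exp2(n - 1)              # specialize the recursive call
    return lambda x: x * u(x)    # residual code; no longer mentions n

square = exp2(2)   # generation stage
print(square(5))   # execution stage: 25
```

Note that the residual closure still contains the nested-call structure, the closure analogue of the variable-for-variable redexes discussed later for exp3.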
A useful intuition for understanding the programming idiom of the above example
is to view exp2 as a program generator; once supplied with n, it generates
the specialized function for computing n-th powers. This immediately suggests a
distinction in the calculus between two stages (or levels): the meta and the object
stage. The object stage of an expression encodes λ-terms that are to be viewed as
data - as results of a process of code generation. In the exp2 function, such terms
would be (λx:int. 1) and (λx:int. x * u(x)). The meta stage describes the specific
operations to be performed over the expressions from the object stage. This is
why the above-illustrated programming style is referred to as staged computation.
The idea behind the type system of λ□ is to make explicit the distinction between
meta and object stages. It allows the programmer to specify the intended staging of
a term by annotating object-level subterms of the program. Then the type system
can check whether the written code conforms to the staging specifications, making
staging errors into type errors. The syntax of λ□ is presented below; we use b to
stand for a predetermined set of base types, and c for constants of those types.
Types                        A ::= b | A1 -> A2 | □A
Terms                        e ::= c | x | u | λx:A. e | e1 e2 | box e | let box u = e1 in e2
Value variable contexts      Γ ::= · | Γ, x:A
Expression variable contexts Δ ::= · | Δ, u:A
Values                       v ::= c | λx:A. e | box e
There are several distinctive features of the calculus, arising from the desire to
differentiate between the stages. The most important is the new type constructor □.
It is usually referred to as modal necessity, as on the logic side it is a necessitation
modifier on propositions (Pfenning & Davies, 2001). In our meta-programming
application, it is used to classify object-level terms. Its introduction and elimination
forms are the term constructors box and let box, respectively. As Figure 1 shows, if
e is an object term of type A, then box e would be a meta term of type □A. The box
term constructor wraps the object term e so that it can be accessed and manipulated
by the meta part of the program. The elimination form let box u = e1 in e2 does
the opposite; it takes the object term enclosed in e1 and binds it to the variable u
to be used in e2.
The type system of λ□ distinguishes between two kinds of variables, and consequently
has two variable contexts: Γ for variables bound to meta terms, and Δ
for variables bound to object terms. We implicitly assume that exchange holds for
both; that is, that the order of variables in the contexts is immaterial.
Figure 2 presents the small-step operational semantics of λ□. We have decided on
a call-by-value strategy which, in addition, prohibits reductions on the object level.
Thus, if an expression is boxed, its evaluation will be suspended. Boxed expressions
themselves are considered values. This choice is by no means canonical, but is
necessary for the applications in this paper.
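The evaluation discipline described here - boxed expressions are values, and let box forces them - can be sketched with a toy interpreter. Everything below (the AST encoding, string-named expression variables) is a hypothetical illustration in Python, not the paper's formal semantics:

```python
from dataclasses import dataclass

@dataclass
class Box:          # box e: the wrapped term is a value, left unevaluated
    body: object

@dataclass
class LetBox:       # let box u = e1 in e2
    u: str
    e1: object
    e2: object

def subst(term, u, e):
    # Substitute object term e for expression variable u. Unlike evaluation,
    # substitution of expression variables does reach under box.
    if term == u:
        return e
    if isinstance(term, Box):
        return Box(subst(term.body, u, e))
    if isinstance(term, LetBox):
        return LetBox(term.u, subst(term.e1, u, e), subst(term.e2, u, e))
    return term

def eval_(term):
    if isinstance(term, Box):
        return term                      # boxed code is a value: no reduction inside
    if isinstance(term, LetBox):
        b = eval_(term.e1)               # must evaluate to a Box
        assert isinstance(b, Box)
        return eval_(subst(term.e2, term.u, b.body))
    return term                          # constants evaluate to themselves

prog = LetBox("u", Box(42), "u")         # let box u = box 42 in u
print(eval_(prog))                       # 42
```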
We can now use the type system of λ□ to make explicit the staging of exp2.
fun exp3 (n : int) : □(int -> int) =
    if n = 0 then box (λx:int. 1)
    else
        let box u = exp3 (n - 1)
        in
            box (λx:int. x * u(x))
        end
Application of exp3 at argument 2 produces an object-level function for squaring.

val sqbox = exp3 2
val sqbox = box (λx:int. x *
                 ((λy:int. y *
                   ((λz:int. 1) y)) x)) : □(int -> int)
In the elimination form let box u = e1 in e2, the bound variable u belongs to the
context Δ of object-level variables, but it can be used in e2 in both object positions
(i.e., under a box) and meta positions. This way the calculus is not only capable
of composing object programs, but can also explicitly force their evaluation. For
example, we can use the generated function sqbox in the following way.

val sq = let box u = sqbox in u end
val n = sq 3
This example demonstrates that object expressions of # can be reflected; that
is, coerced from the object level into the meta level. The opposite coercion, which
is referred to as reification, however, is not possible. This suggests that λ□ should
be given a more specific model in which reflection naturally exists, but reification
does not. A possible interpretation exhibiting this behavior considers object terms
as actual syntactic expressions, or abstract syntax trees of source programs of the
calculus, while the meta terms are compiled executables. Because λ□ is typed, in
this scenario the object terms represent not only syntax, but higher-order syntax
(Pfenning & Elliott, 1988) as well. The operation of reflection corresponds to the
Fig. 1. Typing rules for the λ□-calculus.

Fig. 2. Operational semantics of the λ□-calculus.
natural process of compiling source code into an executable. The opposite operation
of reconstructing source code out of its compiled equivalent is not usually feasible,
so this interpretation does not support reification, just as required.
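This compiled-executable reading of the meta level can be illustrated in Python, where compile/eval play the role of reflection, and recovering source from a compiled function generally fails. This is only an informal analogy, not the paper's model:

```python
import inspect

# Reflection: source -> executable. The 'object level' is code as syntax
# (here just a string); compiling it yields a meta-level value.
src = "lambda x: x * x"
f = eval(compile(src, "<obj>", "eval"))
print(f(7))                        # 49

# Reification: executable -> source. A function compiled from a string
# does not, in general, remember its source text.
g = eval("lambda y: y + 1")
try:
    inspect.getsource(g)           # reification attempt
except OSError:
    print("no source available")
```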
3 Modal calculus of names
3.1 Motivation, syntax and overview
If we adhere to the interpretation of object terms as higher-order syntax, then
the λ□ staging of exp3 is rather unsatisfactory. The problem is that the residual
object programs produced by exp3 (e.g., sqbox), contain unnecessary variable-for-
variable redexes, and hence are not as optimal as one would want. This may not
be a serious criticism from the perspective of run-time code generation; indeed,
variable-for-variable redexes can easily be eliminated by a compiler. But if object
terms are viewed as higher-order syntax (and, as we argued in the previous section,
this is a very natural model for the λ□-calculus), the limitation is severe. It exhibits
that λ□ is too restrictive to allow for arbitrary composition of higher-order syntax
trees. The reason for the deficiency is in the requirement that boxed object terms
must always be closed. In that sense, the type □A is a type of closed syntactic
expressions of type A. As can be observed from the typing rules in Figure 1, the
□-introduction rule erases all the meta variables before typechecking the argument
term. It allows for object-level variables, but at run-time they are always substituted
by other closed object expressions to produce a closed object expression at the end.
Unfortunately, if we only have a type of closed syntactic expressions at our disposal,
we can't ever type the body of an object-level λ-abstraction in isolation from the
λ-binder itself - subterms of a closed term are not necessarily closed themselves.
Thus, it would be impossible to ever inspect, destruct or recurse over object-level
expressions with binding structure.
The solution should be to extend the notion of object level to include not only
closed syntactic expressions, but also expressions with free variables. This need has
long been recognized in the meta-programming community, and Section 5 discusses
several di#erent meta-programming systems and their solutions to the problem.
The technique predominantly used in these solutions goes back to Davies' λ○-
calculus (Davies, 1996). The type constructor ○ of this calculus corresponds to
the discrete temporal logic modality for propositions true at the subsequent time
moment. In the meta-programming setup, the modal type ○A stands for an open object
expression of type A, where the free variables of the object expression are modeled
by meta-variables from the subsequent time moment, bound somewhere outside of
the expression.
Our ν□-calculus adopts a different approach. It seems that for purposes of higher-order
syntax, one cannot equate bound meta-variables with free variables of object
expressions. For, imagine recursing over two syntax trees with binding structure
to compare them for syntactic equality modulo #-conversion. Whenever a
#-abstraction is encountered in both expressions, we need to introduce a new entity
to stand for the bound variable of that #-abstraction, and then recursively proceed
comparing the bodies of the abstractions. But then, introducing this new entity
standing for the #-bound variable must not change the type of the surrounding
term. In other words, free variables of object expressions cannot be introduced into
the computation by a type introduction form, like λ-abstraction, as is the case
in λ○ and other languages based on it.
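The recursion sketched above - generating a fresh entity at each pair of binders - can be made concrete in a small Python routine for α-equivalence of binding trees. The AST encoding and the gensym helper are hypothetical illustrations, not the ν□ formalism:

```python
import itertools

_fresh = itertools.count()

def gensym():
    # Names drawn from a countably infinite universe.
    return f"#n{next(_fresh)}"

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
def alpha_eq(t1, t2, env1=None, env2=None):
    env1, env2 = env1 or {}, env2 or {}
    if t1[0] == "var" and t2[0] == "var":
        return env1.get(t1[1], t1[1]) == env2.get(t2[1], t2[1])
    if t1[0] == "lam" and t2[0] == "lam":
        n = gensym()   # one fresh name stands for both bound variables;
                       # introducing it does not change any surrounding type
        return alpha_eq(t1[2], t2[2],
                        {**env1, t1[1]: n}, {**env2, t2[1]: n})
    if t1[0] == "app" and t2[0] == "app":
        return (alpha_eq(t1[1], t2[1], env1, env2) and
                alpha_eq(t1[2], t2[2], env1, env2))
    return False

# λx. x x  vs  λy. y y
print(alpha_eq(("lam", "x", ("app", ("var", "x"), ("var", "x"))),
               ("lam", "y", ("app", ("var", "y"), ("var", "y")))))
```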
Thus, we start with the λ□-calculus, and introduce a separate semantic category
of names, motivated by (Pitts & Gabbay, 2000; Gabbay & Pitts, 2002), and also
(Odersky, 1994). Just as before, object and meta stages are separated through the
□-modality, but now object terms can use names to encode the variables of abstract
syntax trees. The names appearing in an object term will be apparent from
its type. In addition, the type system must be instrumented to keep track of the
occurrences of names, so that the names are prevented from slipping through the
scope of their introduction form.
Informally, a term depends on a certain name if that name appears in the meta-level
part of the term. The set of names that a term depends on is called the support
of the term. The situation is analogous to that in polynomial algebra, where one is
given a base structure S and a set of indeterminates (or generators) I and then freely
adjoins S with I into a structure of polynomials. In our setup, the indeterminates are
the names, and we build "polynomials" over the base structure of λ□ expressions.
For example, assuming for a moment that X and Y are names of type int, and that
the usual operations of addition, multiplication and exponentiation of integers are
primitive in ν□, a term such as

    e1 = X * X + Y

would have type int and support set {X, Y}. The names X and Y appear in e1 at
the meta level, and indeed, notice that in order to evaluate e1 to an integer, we first
need to provide definitions for X and Y. On the other hand, if we box the term e1,
we obtain

    box e1

which has the type □_{X,Y} int, but its support is the empty set, as the names X and
Y only appear at the object level (i.e., under a box). Thus, the support of a term
(in this case e1) becomes part of the type once the term itself is boxed. This way,
the types maintain the information about the support of subterms at all stages. For
example, assuming that our language has pairs, the term

    λx:int. snd (X + x, box (Y * Y))

would have the type int -> □_Y int with support {X}.
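The support computation can be sketched as a recursive traversal that stops at box, so only meta-level occurrences of names count. The term constructors below are a toy Python encoding for illustration, not the paper's syntax:

```python
def support(term):
    kind = term[0]
    if kind == "name":
        return {term[1]}
    if kind == "box":
        return set()                 # names under box do not contribute
    if kind == "binop":              # e.g. + or *
        return support(term[2]) | support(term[3])
    return set()                     # constants and ordinary variables

e1 = ("binop", "+",
      ("name", "X"),
      ("binop", "*", ("name", "Y"), ("const", 2)))

print(support(e1))           # names at the meta level: {'X', 'Y'}
print(support(("box", e1)))  # boxing hides them: set()
```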
We are also interested in compiling and evaluating syntactic entities in ν□ when
they have empty support (i.e., when they are closed). Thus, we need a mechanism
to eliminate a name from a given expression's support, eventually turning
non-executable expressions into executable ones. For that purpose, we use explicit
substitutions. An explicit substitution provides definitions for names which appear
at a meta-level in a certain expression. Note the emphasis on the meta-level; explicit
substitutions do not substitute under boxes, as names appearing at the object level
of a term do not contribute to the term's support. This way, explicit substitutions
provide extensions (i.e., definitions) for names, while still allowing names under
boxes to be used for the intensional information of their identity (which we utilize
in a related development described in (Nanevski, 2002)).
We next present the syntax of the ν□-calculus and discuss each of its constructors.
Names                        X ∈ N
Types                        A ::= b | A1 -> A2 | A1 ↛ A2 | □_C A
Explicit substitutions       Θ ::= · | (X -> e), Θ
Terms                        e ::= c | X | x | ⟨Θ⟩u | λx:A. e | e1 e2 | box e |
                                   let box u = e1 in e2 | νX:A. e | choose e
Value variable contexts      Γ ::= · | Γ, x:A
Expression variable contexts Δ ::= · | Δ, u:A[C]
Name contexts                Σ ::= · | Σ, X:A
Just as λ□, our calculus makes a distinction between meta and object levels, which
here too are interpreted as the level of compiled code and the level of source code
(or abstract syntax expressions), respectively. The two levels are separated by a
modal type constructor □, except that now we have a whole family of modal type
constructors - one for each finite set of names C. In that sense, values of the type
□_C A are the abstract syntax trees of the calculus freely generated over the set of
names C. We refer to the finite set C as a support set of such syntax trees. All the
names are drawn from a countably infinite universe of names N .
As before, the distinction in levels forces a split in the variable contexts. We
have a context Γ for meta-level variables (we will also call them value variables),
and a context Δ for object-level variables (which we also call syntactic expression
variables, or just expression variables). The context Δ must keep track not only of
the typing of a given variable, but also of its support set.
The set of terms includes the syntax of the λ□-calculus from Section 2. However,
there are two important distinctions in ν□. First, we can now explicitly refer
to names on the level of terms. Second, it is required that all the references
to expression variables that a certain term makes are always prefixed by some
explicit substitution. For example, if u is an expression variable bound by some
let box u = e1 in e2, then u can only appear in e2 prefixed by an explicit
substitution Θ (and different occurrences of u can have different substitutions associated
with them). The explicit substitution is supposed to provide definitions for
names in the expression bound to u. When the reference to the variable u is prefixed
by an empty substitution, instead of ⟨⟩u we will simply write u. The explicit
substitutions used in the ν□-calculus are simultaneous substitutions. We assume that
the syntactic presentation of a substitution never defines a denotation for the same
name twice.
Example 1 Assuming that X and Y are names of type int, the code segment below
creates a polynomial over X and Y and then evaluates it at the point (1, 2).

let box u = box (X * X + Y)
in
    ⟨X -> 1, Y -> 2⟩ u
end

val it = 3 : int
The terms νX:A. e and choose e are the introduction and elimination forms for
the type constructor A ↛ B. The term νX:A. e binds a name X of type A that can
subsequently be used in e. The term choose picks a fresh name of type A, substitutes
it for the name bound in the argument ν-abstraction of type A ↛ B, and proceeds
to evaluate the body of the abstraction. To prevent the bound name in νX:A. e
from escaping the scope of its definition and thus creating an observable effect, the
type system must enforce a discipline on the use of X in e. An occurrence of X at
a certain position in e will be allowed only if the type system can establish that
that occurrence of X will not be encountered during evaluation. Such possibilities
arise in two ways: if X is eventually substituted away by an explicit substitution,
or if X appears in a computationally irrelevant (i.e., dead-code) part of the term.
Needless to say, deciding these questions in a practical language is impossible. Our
type system provides a conservative approximation using a fairly simple analysis
based on propagation of names encountered during typechecking.
Finally, enlarging an appropriate context by a new variable or a name is subject to
the usual variable conventions: the new variables and names are assumed distinct, or
are renamed in order not to clash with already existing ones. Terms that di#er only
in the syntactic representation of their bound variables and names are considered
equal. The binding forms in the language are λx:A. e, let box u = e1 in e2, and
νX:A. e. As usual, capture-avoiding substitution [e1/x]e2 of expression e1 for the
variable x in the expression e 2 is defined to rename bound variables and names
when descending into their scope. Given a term e, we denote by fv(e) and fn(e) the
set of free variables of e and the set of names appearing in e at the meta-level. In
addition, we overload the function fn so that given a type A and a support set C,
fn(A[C]) is the set of names appearing in A or C.
Example 2 To illustrate our new constructors, we present a version of the staged
exponentiation function that we can write in the ν□-calculus. In this and in other
examples we resort to concrete syntax in ML fashion, and assume the presence of
the base type of integers, recursive functions and let-definitions.
fun exp (n : int) : □(int -> int) =
    choose (νX:int.
        let fun exp' (m : int) : □_X int =
                if m = 0 then box 1
                else
                    let box u = exp' (m - 1)
                    in
                        box (X * u)
                    end
        in
            let box v = exp' n
            in
                box (λx:int. ⟨X -> x⟩ v)
            end
        end)

val sq = exp 2
The function exp takes an integer n and generates a fresh name X of integer type.
Then it calls the helper function exp' to build the expression

    X * (X * ( ... * (X * 1)))

of type int and support {X}. Finally, it turns the expression v into a function by
explicitly substituting the name X in v with a newly introduced bound variable x.
Notice that the generated residual code for sq does not contain any unnecessary
redexes, in contrast to the λ□ version of the program from Section 2.
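The shape of this generator - build syntax over a placeholder name, then substitute a real bound variable at the end - can be imitated in Python with strings standing in for object code. This is a loose analogue of exp, not the ν□ program itself:

```python
def exp(n):
    X = "X"                      # a name standing for the future argument
    def exp_prime(m):
        # Build the object expression X * (X * ... * 1) as syntax.
        return "1" if m == 0 else f"{X} * {exp_prime(m - 1)}"
    body = exp_prime(n)
    # Explicit substitution <X -> x>: replace the name by a bound variable.
    return "lambda x: " + body.replace(X, "x")

sq_src = exp(2)
print(sq_src)                    # lambda x: x * x * 1
sq = eval(sq_src)
print(sq(5))                     # 25
```

The residual syntax contains no redexes: the placeholder is replaced textually rather than applied away.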
3.2 Explicit substitutions
In this section we formally introduce the concept of explicit substitution over names
and define related operations. As already outlined before, substitutions serve to
provide definitions for names, thus effectively removing the substituting names from
the support of the term in which they appear. Once the term has empty support,
it can be compiled and evaluated.
Definition 1 (Explicit substitution, its domain and range)
An explicit substitution Θ is a function from the set of names to the set of terms.
Given a substitution Θ, its domain dom(Θ) is the set of names that the substitution
does not fix. In other words,

    dom(Θ) = {X ∈ N | Θ(X) ≠ X}

The range of a substitution Θ is the image of dom(Θ) under Θ:

    range(Θ) = {Θ(X) | X ∈ dom(Θ)}
For the purposes of this work, we only consider substitutions with finite domains.
A substitution Θ with a finite domain has a finitary syntactic representation as a
set of ordered pairs X -> e, relating a name X from dom(Θ) with its substituting
expression e. The opposite also holds - any finite and functional set of ordered pairs
of names and expressions determines a unique substitution. We will frequently
equate a substitution and the set that represents it when this does not result in
ambiguities. Just as customary, we denote by fv(Θ) the set of free variables in the
terms from range(Θ). The set of names appearing either in dom(Θ) or range(Θ) is
denoted by fn(Θ).
Each substitution can be uniquely extended to a function over arbitrary terms
in the following way.
Definition 2 (Substitution application)
Given a substitution Θ and a term e, the operation {Θ}e of applying Θ to the meta
level of e is defined recursively on the structure of e as given below. Substitution
application is capture-avoiding.

    {Θ} c = c
    {Θ} X = Θ(X)
    {Θ} x = x
    {Θ} (⟨Θ'⟩u) = ⟨Θ ∘ Θ'⟩u
    {Θ} (λx:A. e) = λx:A. {Θ}e
    {Θ} (e1 e2) = ({Θ}e1) ({Θ}e2)
    {Θ} (box e) = box e
    {Θ} (let box u = e1 in e2) = let box u = {Θ}e1 in {Θ}e2
    {Θ} (νX:A. e) = νX:A. {Θ}e
    {Θ} (choose e) = choose {Θ}e
The most important aspect of the above definition is that substitution application
does not recursively descend under box. This property is of utmost importance for
the soundness of our calculus as it preserves the distinction between the meta and
the object levels. It is also justified, as explicit substitutions are intended to only
remove names which are in the support of a term, and names appearing under box
do not contribute to the support.
The operation of substitution application depends upon the operation of substitution
composition, which we define next.

Definition 3 (Composition of substitutions)
Given two substitutions Θ1 and Θ2 with finite domains, their composition Θ1 ∘ Θ2
is the substitution defined as

    (Θ1 ∘ Θ2)(X) = {Θ1}(Θ2(X))
The composition of two substitutions with finite domains is well-defined, as the
resulting mapping from names to terms is finite. Indeed, every name outside of
dom(Θ1) ∪ dom(Θ2), which is a finite set, is fixed by the composition, and the syntactic
representation of the composition can easily be computed as the set

    Θ1 ∘ Θ2 = {X -> {Θ1}e | (X -> e) ∈ Θ2} ∪ {(X -> e) ∈ Θ1 | X ∉ dom(Θ2)}

It will occasionally be beneficial to represent this set as a disjoint union of the two
smaller sets, defined as:

    {X -> {Θ1}e | (X -> e) ∈ Θ2}    and    {(X -> e) ∈ Θ1 | X ∉ dom(Θ2)}
It is important to notice that, though the definitions of substitution application
and substitution composition are mutually recursive, both operations are terminating.
Substitution application is defined inductively over the structure of its
argument, so the size of terms on which it operates is always decreasing. Composing
substitutions with finite domains also terminates, because Θ1 ∘ Θ2 requires only
applying Θ1 to the defining terms in Θ2.
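Both operations can be sketched over a toy term representation: application leaves boxed subterms untouched, and composition is computed exactly as the finite set described above. The Python encoding is illustrative, not the paper's notation:

```python
def apply_subst(theta, term):
    # {Theta}e: apply a substitution to the meta level of a term.
    kind = term[0]
    if kind == "name":
        return theta.get(term[1], term)      # names not in dom(Theta) are fixed
    if kind == "box":
        return term                          # never substitute under box
    if kind == "binop":
        return ("binop", term[1],
                apply_subst(theta, term[2]), apply_subst(theta, term[3]))
    return term                              # constants, variables

def compose(t1, t2):
    # {X -> {t1}e | (X -> e) in t2}  union  {(X -> e) in t1 | X not in dom(t2)}
    out = {x: apply_subst(t1, e) for x, e in t2.items()}
    out.update({x: e for x, e in t1.items() if x not in t2})
    return out

e = ("binop", "+", ("name", "X"), ("box", ("name", "X")))
theta = {"X": ("const", 1)}
# Only the meta-level occurrence of X is replaced; the boxed one survives:
print(apply_subst(theta, e))
```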
3.3 Type system
The type system of the ν□-calculus consists of two mutually recursive judgments:

    Σ; Δ; Γ ⊢ e : A [C]        and        Σ; Δ; Γ ⊢ Θ : [C] -> [D]
Both of them are hypothetical and work with three contexts: a context of names Σ,
a context of expression variables Δ, and a context of value variables Γ (the syntactic
structure of all three contexts is given in Section 3.1). The first judgment is the
typing judgment for expressions. Given an expression e it checks whether e has type
A, and is generated by the support set C. The second judgment types the explicit
substitutions. Given a substitution Θ and two support sets C and D, the substitution
has the type [C] -> [D] if it maps expressions of support C to expressions of
support D. This intuition will be proved in Section 3.4.
The contexts deserve a few more words. Because the types of the ν□-calculus depend
on names, and types of names can depend on other names as well, we must impose
some conditions on well-formedness of contexts. Henceforth, variable contexts Δ
and Γ will be well-formed relative to Σ if Σ declares all the names that appear in
the types of Δ and Γ. A name context Σ is well-formed if every type in Σ uses
only names declared to the left of it. Further, we will often abuse the notation and
write Σ \ X to define the context obtained after removing the name X from
the context Σ. Obviously, Σ \ X does not have to be a well-formed context, as types in
it may depend on X, but we will always transform Σ \ X into a well-formed context
before using it again. Thus, we will always take care, and also implicitly assume,
that all the contexts in the judgments are well-formed. The same holds for all the
types and support sets that we use in the rules.
The typing rules of ν□ are presented in Figure 3. A pervasive characteristic of
the type system is support weakening. Namely, if a term is in the set of expressions
of type A freely generated by a support set C, then it certainly is among the
expressions freely generated by some support set D ⊇ C. We make this property
admissible in both judgments of the type system, and it will be proved as a lemma
in Section 3.4.
Explicit substitutions. A substitution with empty syntactic representation is the
identity substitution. When an identity substitution is applied to a term containing
names from C, the resulting term obviously contains names from C. But the support
of the resulting term can be extended by support weakening to a superset D, as
discussed above, so we bake this property into the side condition C ⊆ D for the
identity substitution rule. We implicitly require that both the sets are well-formed;
that is, they both contain only names already declared in the name context Σ.
The rule for non-empty substitutions recursively checks each of its component
terms for being well typed in the given contexts and support. It is worth noticing
however, that a substitution Θ can be given a type [C] -> [D] where the "domain"
support set C is completely unrelated to the set dom(Θ). In other words, the
substitution can provide definitions for more names or for fewer names than the typing
judgment actually expresses. For example, the substitution

    Θ = (X -> 1, Y -> 2)

has domain dom(Θ) = {X, Y}, but it can be given (among others) the typings
[ ] -> [ ] and [X] -> [ ], as well as [X, Y, Z] -> [Z]. And indeed, Θ does map a term of
support [ ] into another term with support [ ], a term of support [X] into a term
with support [ ], and a term with support [X, Y, Z] into a term with support [Z].

Fig. 3. Typing rules of the ν□-calculus.
Hypothesis rules. Because there are three kinds of variable contexts, we have three
hypothesis rules. First is the rule for names. A name X can be used provided it has
been declared in Σ and is accounted for in the supplied support set. The implicit
assumption is that the support set C is well-formed; that is, C ⊆ dom(Σ). The
rule for value variables is straightforward. The typing x:A can be inferred if x:A
is declared in Γ. The actual support of such a term can be any support set C as
long as it is well-formed, which is implicitly assumed. Expression variables always
occur in a term prefixed with an explicit substitution. The rule for expression
variables has to check if the expression variable is declared in the context Δ and if
its corresponding substitution has the appropriate type.
λ-calculus fragment. The rule for λ-abstraction is quite standard. Its implicit assumption
is that the argument type A is well-formed in the name context Σ before
it is introduced into the variable context Γ. The application rule checks both the
function and the application argument against the same support set.
Modal fragment. Just as in the λ□-calculus, the meaning of the rule for □-introduction
is to ensure that the boxed expression e represents an abstract syntax tree. It checks
e for having a given type in a context without value variables. The support that
e has to match is supplied as an index to the □ constructor. On the other hand,
the support for the whole expression box e is empty, as the expression obviously
does not contain any names at the meta level. Thus, the support can be arbitrarily
weakened to any well-formed support set D. The □-elimination rule is also a
straightforward extension of the corresponding λ□ rule. The only difference is that
the bound expression variable u from the context Δ now has to be stored with its
support annotation.
Names fragment. The introduction form for names is νX:A. e, with its corresponding
type A ↛ B. It introduces an "irrelevant" name X:A into the computation
determined by e. It is assumed that the type A is well-formed relative to the context
Σ. The term constructor choose is the elimination form for A ↛ B. It picks
a fresh name and substitutes it for the bound name in the ν-abstraction. In other
words, the operational semantics of the redex choose (νX:A. e) (formalized in a
later section) proceeds with the evaluation of e in a run-time context in which a fresh
name has been picked for X. It is justified to do so because X is bound by ν and,
by convention, can be renamed with a fresh name. The irrelevancy of X in the
above example means that X will never be encountered during the evaluation of e
in a computationally significant position. Thus, (1) it is not necessary to specify its
run-time behavior, and (2) it can never escape the scope of its introducing ν in any
observable way. The side condition on ν-introduction serves exactly to enforce this
irrelevancy. It effectively limits X to appear only in "dead-code" subterms of e or
in subterms from which it will eventually be removed by some explicit substitution.
For example, consider the following term

νX:int. νY:int.
  box (let box … in … end)

It contains a substituted occurrence of X and a dead-code occurrence of Y, and is
therefore well-typed (of type int ↛ int ↛ □int).
One may wonder what is the use of entities like names which are supposed to
appear only in computationally insignificant positions in the computation. The
fact is, however, that names are not insignificant at all. Their import lies in their
identity. For example, in a related development on intensional analysis of syntax
(Nanevski, 2002), we compare names for equality, something that cannot be done
with ordinary variables. Ordinary variables are just placeholders for some values;
we cannot compare the variables themselves for equality, but only the values that the
variables stand for. In this sense we can say that λ-abstraction is parametric, while
ν-abstraction is deliberately designed not to be.
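The identity-based nature of names can be illustrated with a small sketch in ordinary Python (our own illustration, not part of the calculus): a name is modeled as a generative token whose only observable property is its identity.

```python
class Name:
    """A fresh name: it compares equal only to itself (default object
    identity), unlike a variable, which is a placeholder for a value."""
    def __init__(self, typ):
        self.typ = typ   # the type ascribed at introduction, e.g. "int"

X = Name("int")
Y = Name("int")
assert X == X        # a name has an identity ...
assert X != Y        # ... distinct from that of every other name
```

Two introductions of a name, even at the same type, thus produce observably different entities, which is exactly what parametric λ-bound variables cannot provide.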
Names appear irrelevant only because we have to force a certain discipline
upon their usage. In particular, before leaving the local scope of some name
X, as determined by its introducing ν, we have to "close up" the resulting expression
if it depends significantly on X. This closure can be achieved by turning the
expression into a λ-abstraction by means of explicit substitutions. Otherwise, the
introduction of the new name will be an observable effect. To paraphrase, when
leaving the scope of X, we have to turn the "polynomials" depending on X into
functions. An illustration of this technique is the program already presented in
Example 2.
The previous version of this work (Nanevski, 2002) did not use the constructors
ν and choose, but rather combined them into a single constructor new. This is also
the case in (Pitts & Gabbay, 2000). The decomposition is given by the equation

new X:A in e ≜ choose (νX:A. e)

We have decided on this reformulation in order to make the types of the language
follow more closely the intended meaning of the terms, and thus provide a stronger
logical foundation for the calculus.
3.4 Structural properties
This section explores the basic theoretical properties of our type system. The lemmas
developed here will be used to justify the operational semantics that we ascribe
to the ν□-calculus in Section 3.5, and will ultimately lead to the proofs of type
preservation (Theorem 12) and progress (Theorem 13).
Lemma 4 (Structural properties of contexts)
1. Weakening. Let Γ ⊆ Γ′ and Δ ⊆ Δ′. Then
(a) if Σ; Δ; Γ ⊢ e : A [C], then Σ; Δ′; Γ′ ⊢ e : A [C];
(b) if Σ; Δ; Γ ⊢ ⟨Θ⟩ : [C1] ⇒ [C], then Σ; Δ′; Γ′ ⊢ ⟨Θ⟩ : [C1] ⇒ [C].
2. Contraction on variables.
(a) if Σ; Δ; (Γ, x:A, y:A) ⊢ e : B [C], then Σ; Δ; (Γ, x:A) ⊢ [x/y]e : B [C];
(b) if Σ; Δ; (Γ, x:A, y:A) ⊢ ⟨Θ⟩ : [C1] ⇒ [C], then
Σ; Δ; (Γ, x:A) ⊢ ⟨[x/y]Θ⟩ : [C1] ⇒ [C];
(c) if Σ; (Δ, u:A[D], v:A[D]); Γ ⊢ e : B [C], then
Σ; (Δ, u:A[D]); Γ ⊢ [u/v]e : B [C];
(d) if Σ; (Δ, u:A[D], v:A[D]); Γ ⊢ ⟨Θ⟩ : [C1] ⇒ [C], then
Σ; (Δ, u:A[D]); Γ ⊢ ⟨[u/v]Θ⟩ : [C1] ⇒ [C].
Proof
By straightforward induction on the structure of the typing derivations.
Contraction on names does not hold in ν□. Indeed, identifying two different
names in a term may make the term syntactically ill-formed. Typical examples
are explicit substitutions, which are in one-to-one correspondence with their syntactic
representations. Identifying two names may make a syntactic representation assign
two different images to the same name, which would break the correspondence with
substitutions.
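The failure of contraction can be sketched in Python (an illustration of ours, not the paper's formalism): an explicit substitution denotes a finite map, while its syntactic representation is a list of (name, image) pairs; identifying two names can give one name two images, so the list no longer denotes a map.

```python
def as_map(pairs):
    """Interpret a syntactic substitution (a list of (name, term) pairs)
    as a finite map; fail if some name receives two distinct images."""
    m = {}
    for name, term in pairs:
        if name in m and m[name] != term:
            raise ValueError(f"{name} is given two images: {m[name]}, {term}")
        m[name] = term
    return m

subst = [("X", 1), ("Y", 2)]          # well-formed: X -> 1, Y -> 2
assert as_map(subst) == {"X": 1, "Y": 2}

# Contracting Y into X yields a representation with no corresponding map:
merged = [("X", 1), ("X", 2)]
try:
    as_map(merged)
    contraction_ok = True
except ValueError:
    contraction_ok = False
assert not contraction_ok
```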
The next series of lemmas establishes the admissibility of support weakening, as
discussed in Section 3.3.
Lemma 5 (Support weakening)
Support weakening is covariant on the right-hand side and contravariant on the
left-hand side of the judgments. More formally, let C ⊆ C′ ⊆ dom(Σ) and D ⊆
dom(Σ) be well-formed support sets. Then the following holds:
1. if Σ; Δ; Γ ⊢ e : A [C], then Σ; Δ; Γ ⊢ e : A [C′];
2. if Σ; Δ; Γ ⊢ ⟨Θ⟩ : [D] ⇒ [C], then Σ; Δ; Γ ⊢ ⟨Θ⟩ : [D] ⇒ [C′];
3. if Σ; (Δ, u:A[C′]); Γ ⊢ e : B [D], then Σ; (Δ, u:A[C]); Γ ⊢ e : B [D];
4. if Σ; (Δ, u:A[C′]); Γ ⊢ ⟨Θ⟩ : [D1] ⇒ [D], then
Σ; (Δ, u:A[C]); Γ ⊢ ⟨Θ⟩ : [D1] ⇒ [D].
Proof
The first two statements are proved by straightforward simultaneous induction on
the given derivations. The third and fourth parts are proved by induction on the
structure of their respective derivations.
Lemma 6 (Support extension)
Let D ⊆ dom(Σ) be a well-formed support set. Then the following holds:
1. if Σ; (Δ, u:A[C1]); Γ ⊢ e : B [C], then Σ; (Δ, u:A[C1 ∪ D]); Γ ⊢ e : B [C ∪ D];
2. the analogous statement holds for the typing judgment of explicit substitutions.
Proof
By induction on the structure of the derivations.
Lemma 7 (Substitution merge)
If Σ; Δ; Γ ⊢ ⟨Θ1⟩ : [C1] ⇒ [D] and Σ; Δ; Γ ⊢ ⟨Θ2⟩ : [C2] ⇒ [D], where
dom(Θ1) ∩ dom(Θ2) = ∅, then Σ; Δ; Γ ⊢ ⟨Θ1, Θ2⟩ : [C1 ∪ C2] ⇒ [D].
Proof
By induction on the structure of Θ2.
The following lemma shows that the intuition behind the typing judgment for
explicit substitutions explained in Section 3.3 is indeed valid.
Lemma 8 (Explicit substitution principle)
Let Σ; Δ; Γ ⊢ ⟨Θ⟩ : [C] ⇒ [D]. Then the following holds:
1. if Σ; Δ; Γ ⊢ e : A [C], then Σ; Δ; Γ ⊢ {Θ} e : A [D];
2. if Σ; Δ; Γ ⊢ ⟨Θ′⟩ : [C1] ⇒ [C], then Σ; Δ; Γ ⊢ ⟨Θ ∘ Θ′⟩ : [C1] ⇒ [D].
Proof
By simultaneous induction on the structure of the derivations. We just present the
proof of the second statement.
Given the substitutions Θ and Θ′, we split the representation of Θ ∘ Θ′ into two
disjoint sets: the names in C1 \ dom(Θ′), which Θ′ fixes, and the names in dom(Θ′),
and set out to show that
(a) Σ; Δ; Γ ⊢ ⟨Θ⟩ : [C1 \ dom(Θ′)] ⇒ [D], and
(b) Σ; Δ; Γ ⊢ ⟨{Θ} ∘ Θ′⟩ : [C1 ∩ dom(Θ′)] ⇒ [D].
These two typings imply the result by the substitution merge lemma (Lemma 7).
To establish (a), observe that from the typing of Θ′ it is clear that the names in
C1 \ dom(Θ′) are fixed by Θ′. Thus, for such a name X, either X does not appear
in the syntactic representation of Θ′, or the syntactic representation of Θ′ contains
a sequence of mappings X1 → X2, . . . , Xn → X. In the second case, X is the
substituting term for Xn, and thus X ∈ C. In the first case, X ∈ C by inductively
appealing to the typing rules for substitutions until the empty substitution is
reached. Either way, C1 \ dom(Θ′) ⊆ C, and furthermore C1 \ dom(Θ′) ⊆
C \ dom(Θ′). Now the result follows by support weakening (Lemma 5.4).
To establish (b), observe that if X ∈ dom(Θ′) and X:A ∈ Σ, then Σ; Δ; Γ ⊢
Θ′(X) : A [C]. By the first induction hypothesis, Σ; Δ; Γ ⊢ {Θ}(Θ′(X)) : A [D]. The
typing (b) is now obtained by inductively applying the typing rules for substitutions,
for each X ∈ (C1 ∩ dom(Θ′)).
The following lemma establishes the hypothetical nature of the two typing judgments
with respect to the ordinary value variables.
Lemma 9 (Value substitution principle)
Let Σ; Δ; Γ ⊢ e1 : A [C]. Then the following holds:
1. if Σ; Δ; (Γ, x:A) ⊢ e2 : B [C], then Σ; Δ; Γ ⊢ [e1/x]e2 : B [C];
2. if Σ; Δ; (Γ, x:A) ⊢ ⟨Θ⟩ : [C1] ⇒ [C], then Σ; Δ; Γ ⊢ ⟨[e1/x]Θ⟩ : [C1] ⇒ [C].
Proof
By simultaneous induction on the two derivations.
The situation is not that simple with expression variables. A simple substitution
of an expression for some expression variable will not result in a syntactically well-formed
term. The reason is, as discussed before, that occurrences of expression
variables are always prefixed by an explicit substitution, to form a kind of closure.
But explicit substitutions in the ν□-calculus can occur only as part of closures, and
cannot be freely applied to arbitrary terms 1 . Hence, if a substitution of expression e
for expression variable u is to produce a syntactically valid term, we need to follow
it up with applications over e of the explicit name substitutions that were paired up
with u. This operation also gives us control over not only the extensional, but also
the intensional form of boxed expressions. The definition below generalizes capture-avoiding
substitution of expression variables in order to handle this problem.
1 Albeit this extension does not seem particularly hard, we omit it for simplicity.
Definition 10 (Substitution of expression variables)
The capture-avoiding substitution [[e/u]](−) of e for an expression variable u is
defined recursively as follows:

[[e/u]](⟨Θ⟩u) = {[[e/u]]Θ} e
[[e/u]](choose e1) = choose ([[e/u]]e1)
… (and homomorphically in the remaining term constructors)

Note that in the first clause ⟨Θ⟩u of the above definition, the resulting expression is
obtained by carrying out the explicit substitution.
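The first clause can be sketched on a hypothetical miniature AST in Python (the tuple encoding, constructors, and helper names below are our own illustration, not the paper's): occurrences of u carry a pending explicit substitution, which is carried out on e once e is substituted for u.

```python
# Terms: ("const", n) | ("name", X) | ("plus", t1, t2) | ("evar", u, theta)
# where theta is a dict mapping names to terms (the explicit substitution).

def apply_subst(theta, t):
    """Carry out an explicit substitution on a term: {theta} t."""
    tag = t[0]
    if tag == "name":
        return theta.get(t[1], t)       # replace the name if theta maps it
    if tag == "plus":
        return ("plus", apply_subst(theta, t[1]), apply_subst(theta, t[2]))
    return t                            # constants are unchanged

def subst_evar(u, e, t):
    """[[e/u]] t: substitute e for u, then carry out the pending explicit
    substitution attached to each occurrence of u."""
    tag = t[0]
    if tag == "evar" and t[1] == u:
        theta = {X: subst_evar(u, e, img) for X, img in t[2].items()}
        return apply_subst(theta, e)    # the first clause of Definition 10
    if tag == "plus":
        return ("plus", subst_evar(u, e, t[1]), subst_evar(u, e, t[2]))
    return t

# <X -> 3> u  with  e = X + 1  becomes  3 + 1
t = ("evar", "u", {"X": ("const", 3)})
e = ("plus", ("name", "X"), ("const", 1))
assert subst_evar("u", e, t) == ("plus", ("const", 3), ("const", 1))
```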
Lemma 11 (Expression substitution principle)
Let e1 be an expression without free value variables, such that Σ; Δ; · ⊢ e1 : A [C].
Then the following holds:
1. if Σ; (Δ, u:A[C]); Γ ⊢ e2 : B [D], then Σ; Δ; Γ ⊢ [[e1/u]]e2 : B [D];
2. if Σ; (Δ, u:A[C]); Γ ⊢ ⟨Θ⟩ : [D1] ⇒ [D], then Σ; Δ; Γ ⊢ ⟨[[e1/u]]Θ⟩ : [D1] ⇒ [D].
Proof
By simultaneous induction on the two derivations. We just present one case from
the proof of the first statement.
case e2 = ⟨Θ⟩u:
1. by derivation, Σ; (Δ, u:A[C]); Γ ⊢ ⟨Θ⟩ : [C] ⇒ [D];
2. by the second induction hypothesis, Σ; Δ; Γ ⊢ ⟨[[e1/u]]Θ⟩ : [C] ⇒ [D];
3. by explicit substitution (Lemma 8.1), Σ; Δ; Γ ⊢ {[[e1/u]]Θ} e1 : A [D];
4. but this is exactly equal to [[e1/u]](⟨Θ⟩u) : A [D].
3.5 Operational semantics
We define the small-step call-by-value operational semantics of the ν□-calculus
through the judgment

Σ, e ↦ Σ′, e′

      Σ, e1 ↦ Σ′, e1′
  -----------------------------
  Σ, (e1 e2) ↦ Σ′, (e1′ e2)

      Σ, e2 ↦ Σ′, e2′
  -----------------------------
  Σ, (v1 e2) ↦ Σ′, (v1 e2′)

  Σ, (λx:A. e) v ↦ Σ, [v/x]e

      Σ, e1 ↦ Σ′, e1′
  ---------------------------------------------------------
  Σ, (let box u = e1 in e2) ↦ Σ′, (let box u = e1′ in e2)

  Σ, (let box u = box e1 in e2) ↦ Σ, [[e1/u]]e2

      Σ, e ↦ Σ′, e′
  -----------------------------
  Σ, choose e ↦ Σ′, choose e′

  Σ, choose (νX:A. e) ↦ (Σ, X:A), e

Fig. 4. Structured operational semantics of the ν□-calculus.
which relates an expression e with its one-step reduct e′. The relation is defined
on expressions with no free variables. An expression can contain free names, but
it must have empty support. In other words, we only consider for evaluation those
terms whose names appear exclusively at the object level, or in computationally
irrelevant positions, or are removed by some explicit substitution. Because free
names are allowed, the operational semantics has to account for them by keeping
track of the run-time name contexts. The rules of the judgment are given in Figure 4,
and the values of the language are generated by the grammar below.

Values v ::= c | λx:A. e | box e | νX:A. e

The rules are standard, and the only important observation is that the ν-redex
for the type constructor ↛ extends the run-time context with a fresh name before
proceeding. This extension is needed for soundness purposes. Because the freshly introduced
name may appear in computationally insignificant positions in the reduct,
we must keep the name and its typing in the run-time context.
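The ν-redex can be sketched in Python (a naive model of ours, not an actual implementation of the calculus): evaluating choose (νX:A. e) generates a globally fresh name, records its type in the run-time name context, and continues with the body applied to that name.

```python
import itertools

_fresh = itertools.count()          # global source of fresh names

def step_choose(sigma, typ, body):
    """One step of the nu-redex. sigma is the run-time name context,
    a dict mapping names to types; body models the nu-abstraction as a
    function expecting the fresh name."""
    x = f"X{next(_fresh)}"          # pick a name never used before
    sigma2 = {**sigma, x: typ}      # the context only ever grows
    return sigma2, body(x)

sigma, e = step_choose({}, "int", lambda x: ("name", x))
assert e[0] == "name" and e[1] in sigma and sigma[e[1]] == "int"
```

Keeping the name in sigma after the step is what the soundness argument relies on: the reduct may still mention the name in dead-code positions.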
The evaluation relation is sound with respect to typing, and it never gets stuck,
as the following theorems establish.
Theorem 12 (Type preservation)
If Σ; ·; · ⊢ e : A [ ] and Σ, e ↦ Σ′, e′, then Σ′ extends Σ, and Σ′; ·; · ⊢ e′ : A [ ].
Proof
By a straightforward induction on the structure of e, using the substitution principles.
Theorem 13 (Progress)
If Σ; ·; · ⊢ e : A [ ], then either
1. e is a value, or
2. there exist a term e′ and a context Σ′, such that Σ, e ↦ Σ′, e′.
Proof
By a straightforward induction on the structure of e.
The progress theorem does not indicate that the reduct e′ and the context Σ′
are unique for each given e and Σ. In fact, they are not, as fresh names may be
introduced during the course of the computation, and two different evaluations of
one and the same term may choose the fresh names differently. The determinacy
theorem below shows that the choice of fresh names accounts for all the differences
between two reductions of the same term. As customary, we denote by ↦ⁿ the
n-step reduction relation.
Theorem 14 (Determinacy)
If Σ, e ↦ⁿ Σ1, e1 and Σ, e ↦ⁿ Σ2, e2, then there exists a permutation of names π,
fixing dom(Σ), such that Σ2 = π(Σ1) and e2 = π(e1).
Proof
By induction on the length of the reductions, using the property that if Σ, e ↦ⁿ
Σ′, e′ and π is a permutation on names, then π(Σ), π(e) ↦ⁿ π(Σ′), π(e′). The only
interesting case is when e = choose (νX:A. e1). In that case, it must
be Σ1 = (Σ, X1:A) and Σ2 = (Σ, X2:A), where X1 and X2 are fresh. Obviously,
the involution exchanging these two names has the required properties.
4 Support polymorphism
It is frequently necessary to write programs which are polymorphic in the support
of their syntactic object-level arguments, because they are intended to manipulate
abstract syntax trees whose support is not known at compile time. A typical
example would be a function which recurses over some syntax tree with binding
structure. When it encounters a λ-abstraction, it has to place a fresh name instead
of the bound variable, and recursively continue scanning the body of the
λ-abstraction, which is itself a syntactic expression, but depending on this newly
introduced name. 2 For such uses, we extend the ν□-calculus with a notion of explicit
support polymorphism in the style of Girard and Reynolds (Girard, 1986;
Reynolds, 1983).
The addition of support polymorphism to the simple ν□-calculus starts with
syntactic changes that we summarize below.

Support variables  p, q ∈ S
Support sets       C, D ∈ P(N ∪ S)
Types              A ::= . . . | ∀p. A
Terms              e ::= . . . | Λp. e | e [C]
Name contexts      Σ ::= . . . | Σ, p
Values             v ::= . . . | Λp. e

2 The calculus described here cannot support this scenario in full generality yet, because it lacks
type polymorphism and type-polymorphic recursion, but support polymorphism is a necessary
step in that direction.
We introduce a new syntactic category of support variables, which are intended
to stand for unknown support sets. In addition, the support sets themselves are
now allowed to contain these support variables, to express the situation in which
only a portion of a support set is unknown. Consequently, the function fn(−) must
be updated to now return the set of names and support variables appearing in
its argument. The language of types is extended with the type ∀p. A, expressing
universal support quantification. Its introduction form is Λp. e, which abstracts
an unknown support set p in the expression e. This Λ-abstraction will also be a
value in the extended operational semantics. The corresponding elimination form
is the application e [C], whose meaning is to instantiate the unknown support set
abstracted in e with the provided support set C. Because now the types can depend
on names as well as on support variables, the name contexts must declare both. We
assume the same convention on well-formedness of the name context as before.
The typing judgment has to be instrumented with new rules for typing support-polymorphic
abstraction and application.
The ∀-introduction rule requires that the bound variable p does not escape the
scope of the constructors Λ and ∀ which bind it. In particular, it must be that p ∉ C.
The convention also assumes implicitly that p ∉ dom(Σ), before it can be added. The
rule for ∀-elimination substitutes the argument support set D into the type A. It
assumes that D is well-formed relative to the context Σ; that is, D ⊆ dom(Σ). The
operational semantics for the new constructs is also not surprising.
      Σ, e ↦ Σ′, e′
  -----------------------------
  Σ, e [C] ↦ Σ′, e′ [C]

  Σ, (Λp. e) [C] ↦ Σ, [C/p]e
The extended language satisfies the following substitution principle.
Lemma 15 (Support substitution principle)
Let D ⊆ dom(Σ) be a well-formed support set, and denote by [D/p](−) the operation
of substituting D for p. Then the following holds.
1. if Σ, p; Δ; Γ ⊢ e : A [C], then Σ; [D/p]Δ; [D/p]Γ ⊢ [D/p]e : [D/p]A [[D/p]C];
2. if Σ, p; Δ; Γ ⊢ ⟨Θ⟩ : [C1] ⇒ [C], then
Σ; [D/p]Δ; [D/p]Γ ⊢ ⟨[D/p]Θ⟩ : [[D/p]C1] ⇒ [[D/p]C].
Proof
By simultaneous induction on the two derivations. We present one case from the
proof of the second statement.
case ⟨Θ⟩ = ⟨X → e1, Θ′⟩:
1. by derivation, Σ, p; Δ; Γ ⊢ e1 : A [C] and Σ, p; Δ; Γ ⊢ ⟨Θ′⟩ : [C1 \ {X}] ⇒ [C];
2. by the first induction hypothesis, Σ; [D/p]Δ; [D/p]Γ ⊢ [D/p]e1 : [D/p]A [[D/p]C];
3. by the second induction hypothesis,
Σ; [D/p]Δ; [D/p]Γ ⊢ ⟨[D/p]Θ′⟩ : [[D/p](C1 \ {X})] ⇒ [[D/p]C];
4. because [D/p](C1 \ {X}) ⊆ ([D/p]C1) \ {X}, by support weakening (Lemma 5.4),
Σ; [D/p]Δ; [D/p]Γ ⊢ ⟨[D/p]Θ′⟩ : [([D/p]C1) \ {X}] ⇒ [[D/p]C];
5. the result follows from (2) and (4) by the typing rule for non-empty substitutions.
The structural properties presented in Section 3.4 readily extend to the new
language with support polymorphism. The same is true of type preservation
(Theorem 12) and progress (Theorem 13), whose additional cases, involving support
abstraction and application, are handled using Lemma 15 above.
Example 3 In the support-polymorphic ν□-calculus we can slightly generalize the
program from Example 2 by pulling out the helper function exp' and parametrizing
it over the exponentiating expression. In the following program, we use [p] in a
function definition as concrete syntax for Λ-abstraction of a support variable p.

fun exp' [p] (e : □_p int) (n : int) : □_p int =
    if n = 0 then box 1
    else let box u = e
             box w = exp' [p] e (n - 1)
         in box (u * w) end

fun exp (n : int) : □(int -> int) =
    choose (νX : int.
        let box w = exp' [X] (box X) n
        in
            box (λx:int. ⟨X → x⟩ w)
        end)

val …
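The staging discipline of Examples 2 and 3 can be imitated in ordinary Python (a loose analogue of ours, with eval of a generated source string standing in for box and explicit substitution): the recursion on the exponent runs once, at specialization time, and the residual code only multiplies.

```python
def spower(n):
    """Stage the power function: traverse n now, emit residual code."""
    body = "1" if n == 0 else " * ".join(["x"] * n)
    return eval(f"lambda x: {body}")   # e.g. n=3 emits: lambda x: x * x * x

cube = spower(3)       # specialization happens here, exactly once
assert cube(2) == 8
assert spower(0)(5) == 1
```

The variable x in the generated source plays the role that the name X plays in the calculus: it stands for the not-yet-supplied argument, and wrapping the residual body in a lambda is the analogue of closing up with the explicit substitution ⟨X → x⟩.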
Example 4 As an example of a more realistic program, we present the regular expression
matcher from (Davies & Pfenning, 2001) and (Davies, 1996). The example
assumes the declaration of the datatype of regular expressions:

datatype regexp
  = Empty
  | Plus of regexp * regexp
  | Times of regexp * regexp
  | Star of regexp
  | Const of char
(*
 * val acc1 : regexp -> (char list -> bool) ->
 *            char list -> bool
 *)
fun acc1 (Empty) k s = k s
  | acc1 (Plus (e1, e2)) k s =
      acc1 e1 k s orelse acc1 e2 k s
  | acc1 (Times (e1, e2)) k s =
      (acc1 e1 (acc1 e2 k)) s
  | acc1 (Star e) k s =
      k s orelse
      acc1 e (fn s' => if s = s' then false
                       else acc1 (Star e) k s') s
  | acc1 (Const c) k s =
      case s
        of nil => false
         | (x::l) => (x = c) andalso k l

(*
 * val accept1 : regexp -> char list -> bool
 *)
fun accept1 e s = acc1 e null s

Fig. 5. Unstaged regular expression matcher.
We also assume a primitive predicate null : char list -> bool, testing if the
input string is empty. Figure 5 presents an ordinary ML implementation of the
matcher; λ□ and λ○ versions can be found in (Davies & Pfenning, 2001; Davies,
1996).
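For readers who want to run the matcher, here is a direct Python transcription of the continuation-passing algorithm of Figure 5 (the tuple encoding of regexps and the use of strings instead of char lists are ours): k receives the unconsumed remainder of the input, and the Star case recurses only on strictly shorter input so that matching terminates.

```python
def acc1(e, k, s):
    """Continuation-passing matcher: does a prefix of s match e, with the
    rest accepted by the continuation k?"""
    tag = e[0]
    if tag == "Empty":
        return k(s)
    if tag == "Plus":
        return acc1(e[1], k, s) or acc1(e[2], k, s)
    if tag == "Times":
        return acc1(e[1], lambda s2: acc1(e[2], k, s2), s)
    if tag == "Star":
        # try zero iterations first; iterate only on strictly shorter
        # input, which guarantees termination
        return k(s) or acc1(e[1],
                            lambda s2: len(s2) < len(s) and acc1(e, k, s2),
                            s)
    if tag == "Const":
        return bool(s) and s[0] == e[1] and k(s[1:])

def accept1(e, s):
    return acc1(e, lambda s2: s2 == "", s)   # final k: input exhausted

r = ("Times", ("Star", ("Const", "a")), ("Const", "b"))   # a* b
assert accept1(r, "aaab")
assert not accept1(r, "aba")
```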
We would now like to use the ν□-calculus to stage the program from Figure 5,
so that it can be specialized with respect to a given regular expression. For that
purpose, it is useful to view the helper function acc (called acc1 in Figure 5) as a
code generator. It takes a regular expression e and emits code for parsing according
to e, and at the end, it appends k to the generated code. This is the main idea behind
the program in Figure 6. Here, for simplicity, we use the name S for the input string
to be parsed by the code that acc generates. We also want to allow the continuation
code k to contain further names standing for yet unbound variables, and hence the
support-polymorphic typing acc : regexp -> ∀p. (□_{S,p} bool -> □_{S,p} bool). The
support polymorphism pays off when generating code for alternation Plus(e1, e2)
and iteration Star(e). Indeed, observe in the alternation case that the generated
code does not duplicate the continuation k. Rather, k is emitted as a separate
function which is a joining point for the computation branches corresponding to
e1 and e2. Similarly, in the case of iteration, we set up a loop in the output code
that would attempt zero or more matchings against e. The support polymorphism
of acc enables us to produce code in chunks, without knowing the exact identity
of the above-mentioned joining or looping points. Once all the parts of the output
code are generated, we just stitch them together by means of explicit substitutions.
(*
 * val accept : regexp -> □(char list -> bool)
 *)
fun accept e =
  choose (νS : char list.
    (*
     * val acc : regexp -> ∀p. (□_{S,p} bool -> □_{S,p} bool)
     *)
    let fun acc (Empty) [p] k = k
          | acc (Plus (e1, e2)) [p] k =
              choose (νJOIN : char list -> bool.
                let box … = acc e1 [JOIN] (box (JOIN S))
                    box … = acc e2 [JOIN] (box (JOIN S))
                in
                  box (let fun join …
                       in … orelse … end)
                end)
          | acc (Times (e1, e2)) [p] k =
              acc e1 [p] (acc e2 [p] k)
          | acc (Star e) [p] k =
              choose (νT : char list.
              choose (νLOOP : char list -> bool.
                let box … = acc e [T, LOOP] (box (… else LOOP S))
                in
                  box (let fun loop … orelse …
                       in loop S end)
                end))
          | acc (Const c) [p] k =
              let box …
              in
                box (case S
                       of (x::xs) => …
                        | nil => false)
              end
        …
    in
      box (λs:char list. ⟨S → s⟩ code)
    end)

Fig. 6. Regular expression matcher staged in the ν□-calculus.
At this point, it may be illustrative to trace the execution of the program on
a concrete input. Figure 7 presents the function calls and the intermediate results
that occur when the ν□-staged matcher is applied to the regular expression
Star(Empty). Note that the resulting specialized program does not contain variable-for-variable
redexes, but it does perform unnecessary boolean tests. It is possible
to improve the matching algorithm to avoid emitting this extraneous code. The
improvement involves a further examination and preprocessing of the input regular
expression, but a thorough description is beyond the scope of this paper. We refer
to (Harper, 1999) for an insightful analysis.
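The effect of the staged matcher of Figure 6 can be approximated in Python with closure generation instead of the ν□ constructs (this transcription is ours, not the paper's): compile_re traverses the regular expression exactly once, and the matcher it returns no longer mentions the regexp. The shared continuation closure plays the role of the JOIN point, and the local loop function the role of LOOP.

```python
def compile_re(e):
    """Specialize a regexp into a continuation transformer, once."""
    tag = e[0]
    if tag == "Empty":
        return lambda k: k
    if tag == "Plus":
        c1, c2 = compile_re(e[1]), compile_re(e[2])
        # k is shared, not duplicated: the analogue of the JOIN point
        return lambda k: (lambda s: c1(k)(s) or c2(k)(s))
    if tag == "Times":
        c1, c2 = compile_re(e[1]), compile_re(e[2])
        return lambda k: c1(c2(k))
    if tag == "Star":
        c = compile_re(e[1])
        def star(k):
            def loop(s):   # the analogue of the emitted LOOP function
                return k(s) or c(lambda s2: len(s2) < len(s) and loop(s2))(s)
            return loop
        return star
    if tag == "Const":
        ch = e[1]
        return lambda k: (lambda s: bool(s) and s[0] == ch and k(s[1:]))

def accept(e):
    """Compile e once; the returned matcher can be reused on many inputs."""
    return compile_re(e)(lambda s: s == "")

m = accept(("Times", ("Star", ("Const", "a")), ("Const", "b")))   # a* b
assert m("ab") and m("b") and not m("ba")
```

Note the design point mirrored from the figure: specialization (the traversal of e) and matching (the calls to m) are cleanly separated into two stages.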
…
↦* box (λs. let fun loop t = null (t) orelse …
            in loop s end)

Fig. 7. Example execution trace for the regular expression matcher in ν□. Function
calls and the corresponding return results are shown aligned.

5 Related work

The work presented in this paper lies in the intersection of several related areas:
staged computation and partial evaluation, run-time code generation, meta-programming,
modal logic and higher-order abstract syntax.

An early reference to staged computation is (Ershov, 1977), which introduces
staged computation under the name of "generating extensions". Generating extensions
for purposes of partial evaluation were also foreseen by (Futamura, 1971), and
the concept is later explored and eventually expanded into multi-level generating
extensions by (Jones et al., 1985; Glück & Jørgensen, 1995; Glück & Jørgensen,
1997). Most of this work is done in an untyped setting.
The typed calculus that provided the direct motivation and foundation for our
system is the λ□-calculus. It evolved as a type-theoretic explanation of staged
computation (Davies & Pfenning, 2001; Wickline et al., 1998a) and run-time code
generation (Lee & Leone, 1996; Wickline et al., 1998b), and we described it in
Section 2.
Another important typed calculus for meta-programming is λ○. Formulated by
(Davies, 1996), it is the proof-term calculus for discrete temporal logic, and it
provides a notion of open object expression, where the free variables of the object
expression are represented by meta variables on a subsequent temporal level. The
original motivation of λ○ was to develop a type system for binding-time analysis in
the setup of partial evaluation, but it was quickly adopted for meta-programming
through the development of MetaML (Moggi et al., 1999; Taha, 1999; Taha, 2000).
MetaML adopts the "open code" type constructor of λ○ and generalizes the
language with several features. The most important one is the addition of a type
refinement for "closed code". Values classified by these "closed code" types are those
"open code" expressions which happen to not depend on any free meta variables.
It might be of interest here to point out a certain relationship between our concept
of names and a phenomenon which occurs in the extension of MetaML with
references (Calcagno et al., 2000; Calcagno et al., 2001). A reference in MetaML
must not be assigned an open code expression. Indeed, in such a case an eventual free
variable from the expression may escape the scope of the λ-binder that introduced
it. For technical reasons, however, this actually cannot be prohibited, so the authors
resort to a hygienic handling of scope extrusion, by annotating a term with the list of
free variables that it is allowed to contain in dead-code positions. These dead-code
annotations are not a type constructor in MetaML, and the dead-code variables
belong to the same syntactic category as ordinary variables, but they nevertheless
compare very much to our names and ν-abstraction.
Another interesting calculus for meta-programming is Nielsen's λ[ ], described in
(Nielsen, 2001). It is based on the same idea as our ν□-calculus: instead of defining
the notion of closed code as a refinement of the open code of λ○ or MetaML, it relaxes
the notion of closed code of λ□. Where we use names to stand for the free variables
of object expressions, λ[ ] uses variables introduced by box (which thus becomes a
binding construct). Variables bound by box have the same treatment as λ-bound
variables. The type constructor □ is updated to reflect the types (but not the names)
of the variables that its corresponding box binds. This property makes it unclear
whether λ[ ] can be extended with a concept corresponding to our support polymorphism.
Nielsen and Taha present another system for combining closed and open code in
(Nielsen & Taha, 2003). It is based on λ○, but it can explicitly name the object
stages of computation through the notion of environment classifiers. Because the
stages are explicitly named, each stage can be revisited multiple times, and variables
declared in previous visits can be reused. This feature provides the functionality of
open code. The environment classifiers are related to our support variables in several
respects: both are bound by universal quantifiers, and both abstract over
sets. Indeed, our support polymorphism explicitly abstracts over sets of names,
while environment classifiers are used to name parts of the variable context, and
thus implicitly abstract over sets of variables.
Coming from the direction of higher-order abstract syntax, probably the first
work pointing to the importance of a non-parametric binder like our ν-abstraction
is (Miller, 1990). The connection of higher-order abstract syntax to modal logic
has been recognized by Despeyroux, Pfenning and Schürmann in the system presented
in (Despeyroux et al., 1997), which was later simplified into a two-level
system in Schürmann's dissertation (Schürmann, 2000). There is also (Hofmann,
1999), which discusses various presheaf models for higher-order abstract syntax,
then (Fiore et al., 1999), which explores untyped abstract syntax in a categorical
setup, and an extension to arbitrary types (Fiore, 2002).
However, the work that explicitly motivated our developments is the series of
papers on Nominal Logic and FreshML (Gabbay & Pitts, 2002; Pitts & Gabbay,
2000; Pitts, 2001; Gabbay, 2000). The names of Nominal Logic are introduced
as the urelements of Fraenkel-Mostowski set theory. FreshML is a language for
manipulation of object syntax with binding structure based on this model. Its
primitive notion is that of swapping of two names, which is then used to define the
operations of name abstraction (producing an α-equivalence class with respect to
the abstracted name) and name concretion (providing a specific representative of
an α-equivalence class). The earlier version of our paper (Nanevski, 2002) contained
these two operations, which were almost orthogonal additions. Name abstraction was
used to encode abstract syntax trees which depend on a name whose identity is not
known.
Unlike our calculus, FreshML does not keep track of the support of a term, but
rather of its complement. FreshML introduces names in a computation by a construct
new X in e, which can roughly be interpreted in the ν□-calculus as

new X in e ≜ choose (νX. e)

Except in dead-code positions, the name X can appear in e only in the scope of an
abstraction which hides X. One of the main differences between FreshML and ν□ is
that names in FreshML are run-time values: it is possible in FreshML to evaluate a
term with a non-empty support. On the other hand, while our names can have arbitrary
types, FreshML names must be of a single type atm (though this can be generalized
to an arbitrary family of types disjoint from the types of the other values of the
language). Our calculus allows the general typing for names thanks to the modal
distinction between meta and object levels. For example, without the modality, but
with names of arbitrary types, a function defined on integers would always have to
perform run-time checks to test if its argument is a valid integer (in which case the
function is applied), or if its argument is a name (in which case the evaluation is
suspended, and the whole expression becomes a syntactic entity). An added bonus
is that ν□ can support explicit name substitution as a primitive, while substitution
must be user-defined in FreshML.
On the logic side, the direct motivation for this paper comes from (Pfenning &
Davies, 2001), which presents a natural deduction formulation of propositional S4.
But in general, the interaction between modalities, syntax and names has been of
interest to logicians for quite some time. For example, logics that can encode their
own syntax are the topic of Gödel's incompleteness theorems, and some references in
that direction are (Montague, 1963) and (Smoryński, 1985). Viewpoints of (Attardi
& Simi) and contexts of (McCarthy, 1993) are similar to our notion of support,
and are used to express relativized truth. Finally, the names from ν□ resemble the
non-rigid designators of (Fitting & Mendelsohn, 1999), the names of (Kripke, 1980),
and the virtual individuals of (Scott, 1970), and also touch on the issues of existence
and identity explored in (Scott, 1979). All this classical work seems to indicate that
meta-programming and higher-order syntax are but a concrete instance of a
much broader abstract phenomenon. We hope to draw on the cited work for future
developments.
6 Conclusions and future work
This paper presents the ν□-calculus, a typed functional language for meta-programming,
employing a novel way to define a modal type of syntactic object
programs with free variables. The system combines the λ□-calculus (Pfenning &
Davies, 2001) with the notion of names inspired by developments in FreshML and
Nominal Logic (Pitts & Gabbay, 2000; Gabbay & Pitts, 2002; Pitts, 2001; Gabbay,
2000). The motivation for combining λ□ with names comes from the long-recognized
need of meta-programming to handle object programs with free variables (Davies,
1996; Taha, 1999; Moggi et al., 1999). In our setup, the λ□-calculus provides a
way to encode closed syntactic code expressions, and names serve to stand for the
eventual free variables. Taken together, they provide a way to encode open syntactic
program expressions, and also to compose, evaluate, inspect and destruct them. Names
can operationally be thought of as locations which are tracked by the type system,
so that names cannot escape the scope of their introduction form. The set of names
appearing at the meta level of a term is called the support of the term. The support
of a term is reflected in its typing, and a term can be evaluated only if its support
is empty. We also considered constructs for support polymorphism.
The ν□-calculus is a reformulation of the calculus presented in (Nanevski, 2002).
Some of the adopted changes involve simplification of the operational semantics and
of the constructs for handling names. Furthermore, we decomposed the name introduction
form new into two constructors, ν and choose, which are now the introduction
and elimination forms for a new type constructor A ↛ B. This design choice gives a
stronger logical foundation to the calculus, as now the level of types follows much
more closely the behavior of the terms of the language. We hope to further investigate
these logical properties. Some immediate future work in this direction would
include the embedding of discrete-time temporal logic and monotone discrete temporal
logic into the logic of types of ν□, and also considering the proof-irrelevance
modality of (Pfenning, 2001) and (Awodey & Bauer, 2001) to classify terms of
unknown support.
Another important direction for exploration concerns the implementation of ν□.
The calculus presented in this paper was developed with a particular semantic
interpretation in mind, of object-level expressions as abstract syntax trees representing
templates for source programs. But this need not be the only interpretation. It
is quite possible that boxed expressions of the ν□-calculus with support polymorphism
can be stored at run time in some intermediate or even compiled form, which might
benefit the efficiency of programs. It remains important future work to explore
these implementation issues.
Acknowledgment
We would like to thank Dana Scott, Bob Harper, Peter Lee and Andrew Pitts for
their helpful comments on earlier versions of the paper, and Robert Glück for
pointing out some missing references.
References
A formalization of viewpoints.
Propositions as …
Closed types as a simple approach to safe imperative multi-stage programming.
Closed types for a safe imperative MetaML.
A temporal logic approach to binding-time analysis.
A modal analysis of staged computation. Journal of the ACM.
Primitive recursion for higher-order abstract syntax.
On the partial computation principle.
Semantic analysis of normalization by evaluation for typed lambda calculus.
Abstract syntax and variable binding.
Partial evaluation of computation process: an approach to a compiler-compiler.
A new approach to abstract syntax with variable binding.
The system F of variable types.
An automatic program generator for multi-level specialization.
On automatic di…
Semantical analysis of higher-order abstract syntax.
An experiment in partial evaluation: the generation of a compiler generator.
Naming and necessity.
Optimizing ML with run-time code generation. Pages 137-148 of: Conference on Programming Language Design and Implementation.
An extension to ML to handle bound variables in data structures. Pages 323-335 of: Proceedings of the first ESPRIT BRA workshop on logical frameworks.
Syntactical treatment of modalities.
Combining closed and open code.
A functional theory of local names.
A judgmental reconstruction of modal logic. Mathematical Structures in Computer Science.
Nominal logic: A first-order theory of names and binding. Pages 219-242 of: Kobayashi …
A metalanguage for programming with bound names modulo renaming.
Translucent procedures.
Advice on modal logic.
Identity and existence in intuitionistic logic.
Accomplishments and research challenges in meta-programming. Pages 2-44 of …
A sound reduction semantics for untyped CBN multi-stage computation.
--TR
A functional theory of local names
A modal analysis of staged computation
Run-time code generation and modal-ML
Modal types as staging specifications for run-time code generation
First-order modal logic
A sound reduction semantics for untyped CBN mutli-stage computation. Or, the theory of MetaML is non-trival (extended abstract)
Stochastic lambda calculus and monads of probability distributions
Accomplishments and Research Challenges in Meta-programming
Nominal Logic
Primitive Recursion for Higher-Order Abstract Syntax
A Metalanguage for Programming with Bound Names Modulo Renaming
An Idealized MetaML
A temporal-logic approach to binding-time analysis
Translucent Procedures, Abstraction without Opacity
Multistage programming
--CTR
Marcos Viera , Alberto Pardo, A multi-stage language with intensional analysis, Proceedings of the 5th international conference on Generative programming and component engineering, October 22-26, 2006, Portland, Oregon, USA
Chiyan Chen , Hongwei Xi, Implementing typeful program transformations, ACM SIGPLAN Notices, v.38 n.10, p.20-28, October
Kevin Donnelly , Hongwei Xi, Combining higher-order abstract syntax with first-order abstract syntax in ATS, Proceedings of the 3rd ACM SIGPLAN workshop on Mechanized reasoning about languages with variable binding, p.58-63, September 30-30, 2005, Tallinn, Estonia
Ik-Soon Kim , Kwangkeun Yi , Cristiano Calcagno, A polymorphic modal type system for lisp-like multi-staged languages, ACM SIGPLAN Notices, v.41 n.1, p.257-268, January 2006
Geoffrey Washburn , Stephanie Weirich, Boxes go bananas: encoding higher-order abstract syntax with parametric polymorphism, ACM SIGPLAN Notices, v.38 n.9, p.249-262, September
Chiyan Chen , Rui Shi , Hongwei Xi, Implementing Typeful Program Transformations, Fundamenta Informaticae, v.69 n.1-2, p.103-121, January 2006
Yosihiro Yuse , Atsushi Igarashi, A modal type system for multi-level generating extensions with persistent code, Proceedings of the 8th ACM SIGPLAN symposium on Principles and practice of declarative programming, July 10-12, 2006, Venice, Italy
Mark R. Shinwell , Andrew M. Pitts , Murdoch J. Gabbay, FreshML: programming with binders made simple, ACM SIGPLAN Notices, v.38 n.9, p.263-274, September
Chiyan Chen , Hongwei Xi, Meta-programming through typeful code representation, ACM SIGPLAN Notices, v.38 n.9, p.275-286, September
Derek Dreyer, A type system for well-founded recursion, ACM SIGPLAN Notices, v.39 n.1, p.293-305, January 2004
Chiyan Chen , Hongwei Xi, Meta-programming through typeful code representation, Journal of Functional Programming, v.15 n.6, p.797-835, November 2005
Aleksandar Nanevski , Frank Pfenning, Staged computation with names and necessity, Journal of Functional Programming, v.15 n.6, p.893-939, November 2005
Walid Taha , Michael Florentin Nielsen, Environment classifiers, ACM SIGPLAN Notices, v.38 n.1, p.26-37, January | modal lambda-calculus;higher-order abstract syntax |
Tagless staged interpreters for typed languages

Abstract: Multi-stage programming languages provide a convenient notation for explicitly staging programs. Staging a definitional interpreter for a domain specific language is one way of deriving an implementation that is both readable and efficient. In an untyped setting, staging an interpreter "removes a complete layer of interpretive overhead", just like partial evaluation. In a typed setting, however, Hindley-Milner type systems do not allow us to exploit typing information in the language being interpreted. In practice, this can mean a slowdown cost by a factor of three or more. Previously, both type specialization and tag elimination were applied to this problem. In this paper we propose an alternative approach, namely, expressing the definitional interpreter in a dependently typed programming language. We report on our experience with the issues that arise in writing such an interpreter and in designing such a language. To demonstrate the soundness of combining staging and dependent types in a general sense, we formalize our language (called Meta-D) and prove its type safety. To formalize Meta-D, we extend Shao, Saha, Trifonov and Papaspyrou's λH language to a multi-level setting. Building on λH allows us to demonstrate type safety in a setting where the type language contains all of the calculus of inductive constructions, but without having to repeat the work needed for establishing the soundness of that system.

Introduction
In recent years, substantial effort has been invested in the development
of both the theory and tools for the rapid implementation
of domain specific languages (DSLs) [4, 22, 40, 47, 45, 23]. DSLs
are formalisms that provide their users with a notation appropriate
for a specific family of tasks. A promising approach to implementing
domain specific languages is to write a definitional interpreter
[42] for the DSL in some meta-language, and then to stage
this interpreter either manually, by adding explicit staging annotations
(multi-stage programming [55, 30, 45, 50]), or by applying an
automatic binding-time analysis (off-line partial evaluation [25]).
The result of either of these steps is a staged interpreter. A staged
interpreter is essentially a translation from a subject-language (the
DSL) to a target-language. If there is already a compiler for the
target-language, the approach yields a simple compiler for the DSL.
In addition to the performance benefit of a compiler over an inter-
preter, the compiler obtained by this process often retains a close
syntactic connection with the original interpreter, inspiring greater
confidence in its correctness.
This paper is concerned with a subtle but costly problem which
can arise when both the subject- and the meta-language are statically
typed. In particular, when the meta-language is typed, there
is generally a need to introduce a "universal datatype" to represent
values uniformly (see [48] for a detailed discussion). Having such
a universal datatype means that we have to perform tagging and
untagging operations at run time. When the subject-language is un-
typed, as it would be when writing an ML interpreter for Scheme,
the checks are really necessary. But, when the subject-language is
also statically typed, as it would be when writing an ML interpreter
for ML, the extra tags are not really needed. They are only necessary
to statically type check the interpreter. When this interpreter is
staged, it inherits [29] this weakness, and generates programs that
contain superfluous tagging and untagging operations. Early estimates
of the cost of tags suggested that it produces up to a 2.6 times
slowdown in the SML/NJ system [54]. More extensive studies in
the MetaOCaml system show that slowdown due to tags can be as
high as 10 times [21].
How can we remove the tagging overhead inherent in the use of
universal types?
One recently proposed possibility is tag elimination [54, 53, 26],
a transformation that was designed to remove the superfluous tags
in a post-processing phase. Under this scheme, DSL implementation
is divided into three distinct stages (rather than the traditional
two). The extra stage, tag elimination, is distinctly different
from the traditional partial evaluation (or specialization) stage. In
essence, tag elimination allows us to type check the subject program after it has been transformed. (Note that staging in a multi-stage language usually implies that the meta-language and the target-language are the same language.) If it checks, superfluous tags are simply erased from the interpretation. If not, a "semantically equivalent" interface is added around the interpretation. Tag elim-
ination, however, does not statically guarantee that all tags will be
erased. We must run the tag elimination at runtime (in a multi-stage
language).
In this paper, we study an alternative approach that does provide
such a guarantee. In fact, the user never introduces these tags in the
first place, because the type system of the meta-language is strong
enough to avoid any need for them.
In what follows, we describe the details of the superfluous-tags problem.
1.1 Untyped Interpreters
We begin by reviewing how one writes a simple interpreter in an
untyped language. 2 For notational parsimony, we will use ML syntax
but disregard types. An interpreter for a small lambda language
can be defined as follows:
datatype exp = I of int | V of string
             | L of string * exp | A of exp * exp

fun eval e env =
  case e of
    I i => i
  | V s => env s
  | L (s,e) => fn v => eval e (ext env s v)
  | A (f,e) => (eval f env) (eval e env)
This provides a simple implementation of subject programs represented
in the datatype exp. The function eval evaluates exps
in an environment env that binds the free variables in the term to
values.
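As a point of comparison, the same untyped interpreter can be sketched in Python (the encoding of expressions as tagged tuples, and the dict-based environments, are ours, not the paper's):

```python
# Expressions as tagged tuples: ("I", n) integer literals, ("V", name)
# variables, ("L", name, body) lambdas, ("A", fun, arg) applications.

def ext(env, s, v):
    """Extend environment env (a dict) with the binding s -> v."""
    out = dict(env)
    out[s] = v
    return out

def eval_exp(e, env):
    tag = e[0]
    if tag == "I":
        return e[1]
    if tag == "V":
        return env[e[1]]
    if tag == "L":
        _, s, body = e
        # The returned closure still contains a call to eval_exp: the
        # "unexpanded recursive call" responsible for interpretive overhead.
        return lambda v: eval_exp(body, ext(env, s, v))
    if tag == "A":
        _, f, arg = e
        return eval_exp(f, env)(eval_exp(arg, env))

identity = eval_exp(("L", "x", ("V", "x")), {})
```

Calling identity(42) returns 42, but only by re-entering eval_exp on the body each time; that re-entry is exactly what staging removes.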
This implementation suffers from a severe performance limitation. In particular, if we were able to inspect the result of an interpretation, such as (eval (L("x",V "x")) env0), we would find that it is equivalent to

fn v => eval (V "x") (ext env0 "x" v)

This term will compute the correct result, but it contains an unexpanded
recursive call to eval. This problem arises in both call-by-
value and call-by-name languages, and is one of the main reasons
for what is called the "layer of interpretive overhead" that degrades
performance. Fortunately, this problem can be eliminated through
the use of staging annotations [48].
1.2 Untyped Staged Interpreters
Staging annotations partition the program into stages. Brackets .<_>. surrounding an expression lift it to the next stage (building code). Escape .~_ drops its surrounded expression to a previous stage (splicing already constructed code into larger pieces of code), and should only appear within brackets. Staging annotations change the evaluation order of programs, even evaluating under lambda abstractions, and force the unfolding of the eval function at code-generation time. Thus, just by adding staging annotations to the eval function, we can change its behavior to achieve the desired operational semantics:
fun eval' e env =
  case e of
    I i => .<i>.
  | V s => env s
  | L (s,e) => .<fn v => .~(eval' e (ext env s .<v>.))>.
  | A (f,e) => .<.~(eval' f env) .~(eval' e env)>.

2 Discussing the issue of how to prove the adequacy of representations or correctness of implementations of interpreters is beyond the scope of this paper. Examples of how this can be done can be found elsewhere [54].
Computing the application eval' (L("x",V "x")) env0 directly
yields a term .<fn v => v>.
Now there are no leftover recursive calls to eval. Multi-stage
languages come with a run annotation .!_ that allows us to execute
such a code fragment. A staged interpreter can therefore be viewed
as a user-directed way of reflecting a subject program into a meta-
program, which then can be handed over in a type safe way to the
compiler of the meta-language.
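The effect of staging can be imitated in Python by generating target-language source at stage one; this is only a loose analogue of MetaML brackets and escape, with all names our own:

```python
# Stage one: traverse the expression and emit Python source text.  The
# environment maps variable names to source fragments, mimicking how the
# staged interpreter binds variables to code values rather than run-time values.

def gen(e, env):
    tag = e[0]
    if tag == "I":
        return str(e[1])
    if tag == "V":
        return env[e[1]]
    if tag == "L":
        _, s, body = e
        # The recursive call to gen happens NOW, at generation time, so no
        # call to gen survives in the residual program.
        return "(lambda %s: %s)" % (s, gen(body, dict(env, **{s: s})))
    if tag == "A":
        _, f, arg = e
        return "(%s)(%s)" % (gen(f, env), gen(arg, env))

src = gen(("L", "x", ("V", "x")), {})   # the residual program, as text
compiled = eval(src)                    # a crude analogue of run (.!_)
```

Here src is "(lambda x: x)": the residual program no longer mentions gen at all, mirroring how the staged interpreter leaves no recursive calls behind.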
1.3 Hindley-Milner Staged Interpreters
In programming languages, such as Haskell or ML, which use
a Hindley-Milner type system, the above eval function (staged or
unstaged) is not well-typed [48]. Each branch of the case statement
has a different type, and these types cannot be reconciled.
Within a Hindley-Milner system, we can circumvent this problem
by using a universal type. A universal type is a type that is rich
enough to encode values of all the types that appear in the result of
a function like eval. In the case above, this includes function as
well as integer values. A typical definition of a universal type for
this example might be:
datatype V = I of int | F of V -> V
The interpreter can then be rewritten as a well-typed program:

fun unF (F f) = f

fun eval e env =
  case e of
    I i => I i
  | V s => env s
  | L (s,e) => F (fn v => eval e (ext env s v))
  | A (f,e) => (unF (eval f env)) (eval e env)

Now, when we compute (eval (L("x",V "x")) env0) we get back the value F (fn v => eval (V "x") (ext env0 "x" v)).
Just as we did for the untyped eval, we can stage this version of eval. Now computing (eval (L("x",V "x")) env0) yields:

.<F (fn v => v)>.
1.4 Problem: Superfluous Tags
Unfortunately, the result above still contains the tag F. While this
may seem like a minor issue in a small program like this one, the effect in a larger program will be a profusion of tagging and untagging
operations. Such tags would indeed be necessary if the subject-
language was untyped. But if we know that the subject-language
is statically typed (for example, as a simply-typed lambda calcu-
lus) the tagging and untagging operations are really not needed.
Benchmarks indicate that these tags add a 2-3 times overhead [54], sometimes as large as 3-10 times [21].
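The origin of the overhead can be made concrete in Python by instrumenting a tagged interpreter. This toy counter only illustrates where tag operations occur; the 2-10x figures above come from the cited ML benchmarks, not from this sketch:

```python
# Values carry explicit tags, as forced by a universal type:
# ("I", n) for integers and ("F", fun) for functions.
TAG_OPS = {"count": 0}

def tag(t, v):
    TAG_OPS["count"] += 1          # one tagging operation
    return (t, v)

def unF(v):
    TAG_OPS["count"] += 1          # one untagging operation
    assert v[0] == "F", "applied a non-function"
    return v[1]

def ext(env, s, v):
    out = dict(env)
    out[s] = v
    return out

def eval_exp(e, env):
    k = e[0]
    if k == "I":
        return tag("I", e[1])
    if k == "V":
        return env[e[1]]
    if k == "L":
        _, s, body = e
        return tag("F", lambda v: eval_exp(body, ext(env, s, v)))
    if k == "A":
        _, f, arg = e
        return unF(eval_exp(f, env))(eval_exp(arg, env))

# ((fn x => x) 3): even this tiny program pays three tag operations.
result = eval_exp(("A", ("L", "x", ("V", "x")), ("I", 3)), {})
```

Every lambda pays one tagging step, every application one untagging step, and every literal one tagging step; in a large residual program these costs accumulate.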
There are a number of approaches for dealing with this problem. None of these approaches, however, guarantees (at the time of writing the staged interpreter) that the tags will be eliminated before
runtime. Even tag elimination, which guarantees the elimination
of tags for these particular examples, requires a separate metatheoretic
proof for each subject language to obtain such a guarantee
[54].
1.5 Contributions
In this paper we propose an alternative solution to the superfluous
tags problem. Our solution is based on the use of a dependently
typed multi-stage language. This work was inspired by work on
writing dependently typed interpreters in Cayenne [2]. To illustrate the viability of combining dependent types with staging, we have designed and implemented a prototype language we call Meta-D. We use this language as a vehicle to investigate the issues that arise when taking this approach. We built a compiler from an interpreter, from beginning to end, in Meta-D. We also report on the issues that
arise in trying to develop a dependently typed programming language
(as opposed to a type theory).
Meta-D has the following features:
- Basic staging operators
- Dependent types (with help for avoiding redundant typing annotations)
- Dependently typed inductive families (dependent datatypes)
- Separation between values and types (ensuring decidable type checking)
- A treatment of equality and representation types using an equality-type-like mechanism
The technical contribution of this paper is in formalizing a multi-stage language and proving its safety under a sophisticated dependent type system. We do this by capitalizing on the recent work of Shao, Saha, Trifonov, and Papaspyrou on the λH system [44], which in turn builds on a number of recent works on typed intermediate languages [20, 7, 59, 43, 9, 57, 44].
1.6 Organization of this Paper
Section 2 shows how to take our motivating example and turn it
into a tagless staged interpreter in a dependently typed setting. First,
we present the syntax and semantics of a simple typed language and
show how these can be implemented in a direct fashion in Meta-D.
The first part of this (writing the unstaged interpreter) is similar to
what has been done in Cayenne [2], but is simplified by the presence
of dependent datatypes in Meta-D (see Related Work). The key observation
here is that the interpreter needs to be defined over typing
derivations rather than expressions. Dependently typed datatypes
are needed to represent such typing derivations accurately. Next,
we show how this interpreter can be easily staged. This step is exactly
the same as in the untyped and in the Hindley-Milner setting.
In Section 3 we point out and address some basic practical problems
that arise in the implementation of interpreters in a dependently
typed programming language. First, we show how to construct
the typing judgments that are consumed by the tagless inter-
preter. Then, we review why it is important to have a clear separation
between the computational language and the type language.
This motivates the need for representation types, and has an effect
on the code for the tagless staged interpreter.
Section 4 presents a formalization of a core subset of Meta-D, and a formal proof of its type safety. The original work used this system to type a computational language that includes basic effects, such as non-termination [44]. In this paper, we develop a multi-stage computational language, and show how essentially the same techniques can be used to verify its soundness. The key technical modifications needed are the addition of levels to typing judgments, and addressing evaluation under type binders.
Section 5 discusses related work, and Section 6 outlines directions
for future work and concludes.
An extended version of the paper is available on-line as a technical
report [36].
2 Staged Interpreter
In this section we show how the example discussed in the introduction
can be redeveloped in a dependently typed setting. We
begin by considering a definition of the syntax and semantics (of a
simply typed version) of the subject language.
2.1 Subject-Language Syntax and Semantics
Figure 1 defines the syntax, type system, and semantics of an example subject language we shall call SL. For simplicity of the
development, we use de Bruijn indices for variables and binders.
The semantics defines how the types of SL are mapped to their intended meanings. For example, the meaning of the type N is the set of natural numbers, while the meaning of the arrow type t -> t' is the function space [[t]] -> [[t']]. Furthermore, we map the meaning of a type assignment G into a product of the sets denoting the finite number of types in the assignment. Note that the semantics of programs is defined on typing judgments, and maps them to elements of the meanings of their types. This is the standard way of defining the semantics of typed languages [56, 18, 39], and the implementation in the next section will be a direct codification of this definition.
2.2 Interpreters in Meta-D
An interpreter for SL can be simply an implementation of the
definition in Figure 1. We begin by defining the datatypes that will
be used to interpret the basic types (and typing environments) of
SL. To define datatypes Meta-D uses an alternative notation to SML
or Haskell datatype definitions. For example, to define the set of
natural numbers, instead of writing
datatype
we write
inductive
The inductive notation is more convenient when we are defining
dependent datatypes and when we wish to define not only new types
but new kinds (meaning "types of types"). Now type expression and
type assignments are represented as follows:
inductive
inductive
inductive
The *1 in these definitions means that we are defining a new type.
To implement the type judgment of SL we need a dependently typed
datatype indexed by three parameters: a type assignment Env, an
expression Exp, and a type Typ. We can define such a datatype as
shown in Figure 2. 3 Each constructor in this datatype corresponds
to one of the rules in the type system for our object language. For
example, consider the rule for lambda abstraction (Lam) from Figure 1. The basic idea is to use the "judgments as types" principle [19], and so we can view the type rule as a constant combinator on judgments. This combinator takes hypothesis judgments (and their parameters) and returns the conclusion judgment. In this case the rule requires an environment G, two types t and t', and a body e of the lambda abstraction; given a judgment for e at type t' under G extended with t, it returns a judgment assigning the lambda abstraction the arrow type from t to t'. This rule is codified directly by the constructor JL, whose result type is

J(e1, EL t1 s2, ArrowT t1 t2).
In the definition of J we see the differences between traditional datatype definitions and inductive datatypes: each of the constructors can have dependently typed arguments, and a range type J indexed by different indices. It is through this variability in the return

3 For practical reasons that we will discuss in the next section, this datatype is not legal in Meta-D. We will use it in this section to explain the basic ideas before we discuss the need for so-called representation types.
Figure 1. Semantics of SL

Figure 2. The typing judgment J (without representation types)
type of the constructors that dependent datatypes can provide more
information about their values.
2.2.1 Interpreters of Types and Judgments
After defining judgments, we are ready to implement the interpretations. Note, however, that the type of the result of the interpretation of judgments depends on the interpretation of SL types. This dependency is captured in the interpretation function typEval. Figure 3 presents the implementation of the interpretation of types, typEval; the mapping of type assignments into Meta-D types, envEval; and the interpretation of judgments, eval.
The function eval is defined by case analysis on typing judgments. Computationally, this function is not significantly different from the one presented in Section 1.2. Differences include additional typing annotations, and the case analysis over typing judgments. Most importantly, writing it does not require that we use tags on the result values, because the type system allows us to specify that the return type of this function is typEval t. Tags are no longer needed to help us discriminate what type of value we are getting back at runtime: the type system now tells us, statically.
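The shape of such an evaluator over typing derivations can be previewed in Python; since Python cannot express the dependent typing, this sketch (with a derivation encoding that is our own) shows only the structure, not the static guarantee:

```python
# Typing derivations as tagged tuples, mirroring the constructors of J:
#   ("JN", n)            a numeral
#   ("JV",)              the most recently bound variable
#   ("JW", j)            weakening: skip the most recent binding
#   ("JL", j_body)       lambda abstraction over derivation j_body
#   ("JA", j_fun, j_arg) application
# Value assignments rho are nested pairs (rest, value), like (#1, #2).

def eval_j(j, rho):
    k = j[0]
    if k == "JN":
        return j[1]            # no value tag: the derivation fixes the type
    if k == "JV":
        return rho[1]
    if k == "JW":
        return eval_j(j[1], rho[0])
    if k == "JL":
        return lambda v: eval_j(j[1], (rho, v))
    if k == "JA":
        return eval_j(j[1], rho)(eval_j(j[2], rho))

# (fn x => fn y => x) 1 2 evaluates to 1, with no value tags anywhere.
konst = ("JL", ("JL", ("JW", ("JV",))))
prog = ("JA", ("JA", konst, ("JN", 1)), ("JN", 2))
```

Because the evaluator walks the derivation rather than the raw term, each branch already "knows" the type of the value it handles; no universal-type wrapper appears.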
2.3 Staged Interpreters in Meta-D
Figure 4 shows a staged version of eval. As with Hindley-Milner types, staging is not complicated by dependent types. The staged interpreter, evalS, returns a value of type (code (typEval t)). Note that the type of value assignments is also changed (see envEvalS in Figure 4): rather than carrying runtime values for SL, it carries pieces of code representing the values in the variable assignment. Executing this program produces the tagless code fragments that we are interested in.
Even though the eval function never performs tagging and untagging, the interpretive overhead from traversing its input is still considerable. Judgments must be deconstructed by eval at runtime. This may require even more work than deconstructing tagged values. With staging, all these overheads are incurred in the first stage, and an overhead-free term is generated for execution in a later stage.
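A Python sketch of this staging step: traversing the derivation at generation time and emitting residual code that contains no trace of the traversal (encoding and names are ours, not Meta-D's):

```python
# Stage one walks a typing derivation (tuples tagged JN/JV/JW/JL/JA, in a
# de Bruijn style) and emits residual Python source.  The environment rho
# is a nested pair of source fragments for the bound variables.

counter = [0]

def fresh():
    """Generate a fresh residual-program variable name."""
    counter[0] += 1
    return "v%d" % counter[0]

def gen_j(j, rho):
    k = j[0]
    if k == "JN":
        return str(j[1])
    if k == "JV":
        return rho[1]
    if k == "JW":
        return gen_j(j[1], rho[0])
    if k == "JL":
        v = fresh()
        # All derivation traversal happens here, at generation time.
        return "(lambda %s: %s)" % (v, gen_j(j[1], (rho, v)))
    if k == "JA":
        return "(%s)(%s)" % (gen_j(j[1], rho), gen_j(j[2], rho))

src = gen_j(("JL", ("JV",)), ())   # derivation of fn x => x
f = eval(src)                      # the overhead-free residual program
```

The residual text contains neither derivation constructors nor calls to gen_j: the judgment-deconstruction cost was paid once, in the first stage.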
Staging violations are prevented in a standard way by Meta-D's
type system (See technical report [36]). The staging constructs are
those of Davies [10] with the addition of cross-stage persistence
[55]. We refer the reader to these references for further details on
the nature of staging violations. Adding a run construct along the
lines of previous works [51, 30] was not considered here.
Now we turn to addressing some practical questions that are unique to the dependently typed setting, including how the above-mentioned judgments are constructed.
3 Practical Concerns
Building type judgments amounts to implementing either type-checking
or type inference for the language we are interpreting.
Another practical concern is that types that depend on values can
lead to either undecidable or unsound type checking. This happens
when values contain diverging or side-effecting computations. In
this section we discuss how both of these concerns are addressed in
the context of Meta-D.
3.1 Constructing Typing Judgments
Requiring the user of a DSL to supply a typing judgment for each
program to be interpreted is not likely to be acceptable (although it
can depend on the situation). The user should be able to use the
implementation by supplying only the plain text of the subject pro-
gram. Therefore, the implementation needs to include at least a type
checking function. This function takes a representation of a type-
annotated program and produces the appropriate typing judgment,
if it exists. We might even want to implement type inference, which
does not require type annotations on the input. Figure 4 presents a
function typeCheck. This function is useful for illustrating a number
of features of Meta-D:
The type of the result of typeCheck is a dependent sum, [t : Typ] J(e,s,t). This means that the result of typeCheck consists of an SL type and a typing judgment that proves that the argument expression has that particular type under a given type assignment.
Since judgments are built from sub-judgments, a case (strong dependent sum elimination) construct is needed to deconstruct

4 In a pure setting (that is, with no computational effects whatsoever) the result of typeCheck should be option ([t : Typ] (J (e,s,t))), since a particular term given to typeCheck may not be well-typed. In the function given in this paper, we omit the option to save space (and rely on incomplete case expressions instead).
fun typEval t =
  case t of NatT => Nat | ArrowT t1 t2 => (typEval t1) -> (typEval t2)

fun envEval e =
  case e of EmptyE => unit | ExtE e2 t => (envEval e2, typEval t)

fun eval e rho s t j =
  case j of
    JN e1 n1 => n1
  | JV e1 t1 => #2(rho)
  | JW e1 t1 t2 i j1 => eval e1 (#1(rho)) (EV i) t1 j1
  | JL ee1 et1 et2 es2 ej1 => fn v => eval (ExtE ee1 et1) (rho, v) es2 et2 ej1
  | JA e s1 s2 t1 t2 j1 j2 => (eval e rho s1 (ArrowT t1 t2) j1) (eval e rho s2 t1 j2)

Figure 3. Dependently typed tagless interpreter (without representation types)
fun envEvalS e =
  case e of EmptyE => unit | ExtE e2 t => (envEvalS e2, code (typEval t))

fun evalS e rho s t j =
  case j of
    JN e1 n1 => .<n1>.
  | JV e1 t1 => #2(rho)
  | JW e1 t1 t2 i j1 => evalS e1 (#1(rho)) (EV i) t1 j1
  | JL ee1 et1 et2 es2 ej1 => .<fn v : (typEval et1) => .~(evalS (ExtE ee1 et1) (rho, .<v>.) es2 et2 ej1)>.
  | JA e s1 s2 t1 t2 j1 j2 => .<.~(evalS e rho s1 (ArrowT t1 t2) j1) (.~(evalS e rho s2 t1 j2))>.

fun typeCheck e s =
  case s of
    EV nn =>
      (case nn of
         Z => (case e of ExtE ee t2 => (t2, JV ee t2))
       | S n => (case e of ExtE e2 t2 =>
           (fn x => case x of [rx : Typ] j2 => (rx, JW e2 rx t2 n j2))
             (typeCheck e2 (EV n))))
  | EL targ s2 =>
      (fn x => case x of [rt : Typ] j2 => (ArrowT targ rt, JL e targ rt s2 j2))
        (typeCheck (ExtE e targ) s2)
  | EA s1 s2 =>
      (fn x1 => case x1 of [rt1 : Typ] j1 =>
        (fn x2 => case x2 of [rt2 : Typ] j2 =>
           (case rt1 of ArrowT tdom tcod =>
              (tcod, JA e s1 s2 tdom tcod j1
                       (cast [assert rt2=tdom, J(e,s2,tdom), j2]))))
          (typeCheck e s2))
        (typeCheck e s1)

fun interp s =
  (fn x => case x of [t1 : Typ] j =>
     (case t1 of NatT => eval EmptyE () s NatT j
               | ArrowT t2 t3 => Z))
    (typeCheck EmptyE s)

Figure 4. Staged tagless interpreter and the function typeCheck (without representation types)
the results of recursive calls to typeCheck.
The case for constructing application judgments illustrates an interesting point. Building a judgment for the expression (EA s1 s2) involves first computing the judgments for the sub-terms s1 and s2. These judgments assign types (ArrowT tdom tcod) and rt2 to their respective expressions. However, by definition of the inductive family J, in order to build the larger application judgment, tdom and rt2 must be the same SL type (i.e., their Typ values must be equal).
We introduce two language constructs to Meta-D to express this sort of constraint between values. First, an expression of the form assert e1=e2 introduces an equality judgment, ID e1 e2, between values of equality types. 5

An elimination construct, cast [e1, T, e2], is used to cast the expression e2 from some type T[v1] to T[v2], where e1 is an equality judgment of type ID v1 v2. The type checker is allowed to use Leibniz-style equality to prove the cast correct, since e1 is an equality judgment stating that v1 and v2 are equal.
Operationally, the expression assert e1=e2 evaluates its two subexpressions and compares them for equality. If they are indeed equal, computation proceeds. If, however, the two values are not equal, the program raises an exception and terminates. The cast construct makes sure that the equality judgment introduced by assert is evaluated at runtime, and if the equality check succeeds, it simply proceeds to evaluate its argument expression.
An alternative to using assert/cast is to include equality judgments between types as part of typing judgments, and to build equality proofs as part of the typeCheck function. 6 This approach, while possible, proves to be verbose, and is omitted in this paper. The assert/cast constructs, however, can serve as a convenient programming shortcut, relieving the user of the effort of formalizing equality at the type level and manipulating equality types.
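A dynamically checked Python analogue of typeCheck, including the assert-style equality test in the application case (the pair-of-type-and-derivation encoding is our invention, not Meta-D syntax):

```python
# SL types: ("N",) or ("->", t1, t2).  Expressions with de Bruijn indices:
# ("EN", n), ("EV", i), ("EL", t_arg, body), ("EA", fun, arg).
# typecheck returns (type, derivation); derivations use tags JN/JV/JW/JL/JA.

def typecheck(env, e):
    """env is a list of types, innermost binding first."""
    k = e[0]
    if k == "EN":
        return ("N",), ("JN", e[1])
    if k == "EV":
        i = e[1]
        j = ("JV",)
        for _ in range(i):          # one weakening step per skipped binding
            j = ("JW", j)
        return env[i], j
    if k == "EL":
        _, t_arg, body = e
        t_body, j_body = typecheck([t_arg] + env, body)
        return ("->", t_arg, t_body), ("JL", j_body)
    if k == "EA":
        _, f, a = e
        t_f, j_f = typecheck(env, f)
        t_a, j_a = typecheck(env, a)
        # The analogue of the `assert rt2=tdom` guarding the cast:
        assert t_f[0] == "->" and t_f[1] == t_a, "argument type mismatch"
        return t_f[2], ("JA", j_f, j_a)

t, j = typecheck([], ("EA", ("EL", ("N",), ("EV", 0)), ("EN", 7)))
```

As in the paper, the application case must check that the inferred argument type equals the function's domain; in Meta-D that check produces an equality judgment consumed by cast, while here it is just a runtime assertion.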
3.2 Representation Types
Combining effects with dependent types requires care. For example, the typeCheck function is partial, because there are many input terms which are simply not well typed in SL. Such inputs to typeCheck would cause runtime pattern-match failures, or an equality assertion exception.
to have side-effects such as non-termination and exceptions. At
the same time, dependently typed languages perform computations
during type checking (to determine the equality of types). If we
allow effectful computations to leak into the computations that are
done during type checking, then we risk non-termination, or even
unsoundness, at type-checking time. This goal is often described as
"preserving the phase distinction" between compile time and run-time
[5].
The basic approach to dealing with this problem is to allow types
to only depend on other types, and not values. Disallowing any kind
of such dependency, however, would not allow us to express our
type checking function, as it produces a term whose type depends
5 This feature is restricted to ground types whose value can be
shown equal at runtime.
6 Due to space limitation we omit this approach here, but define
an alternative type-checking function in the accompanying technical
report [36].
on the value of its argument. A standard solution to is to introduce a
mechanism that allows only a limited kind of dependency between
values and types. This limited dependency uses so-called singleton
or representation types [60, 7, 9, 57]. The basic idea is to allow
bijections on ground terms between the value and type world.
Now, we can rewrite our interpreter so that its type does not depend
on runtime values, which may introduce effects into the type-checking
phase. Any computation in the type checking phase can
now be guaranteed to be completely effect-free. The run-time values
are now forced to have representation types that reflect, in the
world of values, the values of inductive kinds. In Meta-D, a special
type constructor R is used to express this kind of dependency. For
example, we can define an inductive kind Nat:

inductive Nat : *2 = Z : Nat | S : Nat -> Nat

Note that this definition is exactly the same as the one we had for the type Nat, except that it is classified by *2 instead of *1. Once this definition is encountered, we have introduced not only the constructors for this kind, but also the possibility of using the special type constructor R. Now we can write R(S(S Z)) to refer to a type that has a unique inhabitant, which we also call rep (S(S Z)).
Figure 5 presents the implementation with representation types. Introducing this restriction on the type system requires us to turn the definitions of Exp, Env, and Typ into definitions of kinds (again, this is just a change of one character in each definition). Because these terms are now kinds, we cannot use general recursion in defining their interpretations. Therefore, we use the special primitive recursion constructs provided by the type language to define these interpretations. Judgments, however, remain a type. But now, they are a type indexed by other types, not by values.
For the most part, the definitions of judgments and of the interpretation function do not change. We need to change judgments in the case of natural numbers by augmenting them with a representation for the value of that number. The constructor JN now takes an additional argument of representation type (it appears as JN e1 n1 rn1 in Figure 5), and the definition of eval is changed accordingly. The modified eval uses a helper function to convert a representation of a natural type to a natural number. 7
The definition of the typeCheck function requires more substantial changes (Figure 5). In particular, this function now requires carrying out case analysis on types [20, 7, 59, 43, 9]. For this purpose Meta-D provides a special case construct:

tycase x by y of C_n x_n => e_n

A pattern (C_n x_n) matches against a value x of type K, where K is some inductive kind, only if we have provided a representation value y of type R(x). Pattern matching over inductive kinds cannot be performed without the presence of a corresponding runtime value of the appropriate representation type. Inside the body of each branch (e_n), the expression rep x_n provides a representation value for the part of the inductive constructor that x_n is bound to.
4 Formal Development
In this section we report our main technical result, which is type
safety for a formalized core subset of Meta-D. This result shows
that multi-stage programming constructs can be safely used, even
when integrated with a sophisticated dependent type system such
as that of TL. We follow the same approach used by the developers
of TL, and build a computation language λH° that uses TL
as its type language. Integrating our formalization into the TL
7 In practice, we see no fundamental reason to distinguish the
two. Identifying them, however, requires the addition of some special
support for syntactic sugar for this particular representation
type.
Figure 5. Tagless interpreter with representation types in MetaD.
Figure 6. The definition of the types of λH°.
Figure 7. Syntax of λH°.
Figure 8. Type system of λH°.
framework gave us significant practical advantages in the formal
development of λH°:
Important meta-theoretic properties of the type language we
use, TL, have already been proven [44]. Since we do not
change anything about the type language itself, all these results
(e.g., the Church-Rosser property of the type language and
decidable equality on type terms) are easily reused in our
proofs.
λH° is based on the computational language λH [44]. We
have tried to make the difference between these two languages
as small as possible. As a result, the proof of type safety of
λH° is very similar to the type safety proof for λH. Again, we
were able to reuse certain lemmata and techniques developed
for λH in our own proof.
A detailed proof of the type safety of λH° is presented in an extended
technical report [36].
Figure 6 defines the λH° computational types, and is the first step
needed to integrate λH° into the TL framework. The syntax of
the computational language λH° is given in Figure 7. The language
λH° contains recursion and staging constructs. It contains
two predefined representation types: naturals and booleans. The
if construct, as in λH, provides for propagating proof information
into branches (analogous to the tycase construct of MetaD); a full
implementation of inductive datatypes in the style of MetaD is left
for future work. Since arbitrary dependent types are prohibited in
λH°, we use universal and existential quantification to express dependencies
of values on types and kinds. For example, the identity
function on naturals is expressed in λH° using such quantification.
In λH°, we also formalize the assert/cast construct, which
requires extending the language of computational types with equality
judgment types. Similarly, we add the appropriate constructs to
the syntax of λH°.
To be able to define the small-step semantics for a staged language,
we had to define the syntax of λH° in terms of level-indexed
families of expressions and values [48]. The typing judgment (Figure
8), as well as the type assignments, of λH° has also been appropriately
extended with level annotations [55]. A level-annotation
erasure function is used to convert λH° typing assignments
into a form required by the typing judgment of TL [44]. This interface
then allows us to reuse the original typing judgment.
Due to lack of space we do not show all the definitions for the
small-step semantics of λH°. These, together with proofs of the
relevant theorems, are included in a companion technical report
[36]. Here, we list the most important theorems.
The proof of progress (Lemma 1) is by structural induction on
eⁿ ∈ Eⁿ, and then by examination of cases of the typing judgment.
The proof of subject reduction (Lemma 2) is by cases of the
possible reductions e → e′.
The proof of type safety uses the subject reduction (Lemma 2)
and progress (Lemma 1) lemmas and follows Wright and Felleisen's
syntactic technique [58].
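The statements of these lemmata did not survive extraction; the following LaTeX sketch gives them in the standard Wright-Felleisen form. This is our reconstruction, consistent with the proof descriptions above, not the paper's exact level-indexed formulations (which appear in the technical report [36]):

```latex
% Reconstructed statements in the usual progress/preservation style;
% e^n ranges over level-n expressions, V^n over level-n values.
\textbf{Lemma 1 (Progress).}\quad
  \vdash e^n : \tau \implies
  e^n \in V^n \;\lor\; \exists e'.\; e^n \rightarrow e' \\[4pt]
\textbf{Lemma 2 (Subject Reduction).}\quad
  \vdash e^n : \tau \;\land\; e^n \rightarrow e'
  \implies \vdash e' : \tau \\[4pt]
\textbf{Theorem (Type Safety).}\quad
  \text{a well-typed term never reduces to a stuck}
  \text{ (non-value, irreducible) term}
```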
5 Related Work
Barendregt [3] is a good high-level introduction to the theory of
dependent type systems. There are a number of other references
to (strictly terminating) functional programming in dependent type
theory literature [32, 31, 6].
Cayenne is a dependently typed programming language [1]. In
essence, it is a direct combination of a dependent type theory with
(potentially) non-terminating recursion. It has in fact been used to
implement an (unstaged) interpreter similar to the one discussed
in this paper [2]. The work presented here extends the work done
in Cayenne in three respects: First, Cayenne allows types to depend
on values, and thus does not ensure that type checking terminates.
Second, Cayenne does not support dependent datatypes (like
J(e,s,t)), and so writing an interpreter involves the use of a separate
proof object to encode the information carried by J(e,s,t),
which is mostly just threaded through the program. The number of
parameters passed to both the Meta-D and Cayenne implementation
of the eval function is the same, but using dependent datatypes in
Meta-D allows direct analogy with the standard definition of the
semantics over typing judgments rather than raw terms. Third,
Cayenne does not provide explicit support for staging, an essential
component for achieving the performance results that can be
achieved using tagless staged interpreters.
Xi and Pfenning study a number of different practical approaches
to introducing dependent types into programming languages [59,
60]. Their work concentrates on limiting the expressivity of the
dependent types, and thus limiting the constraints that need to be
solved to Presburger arithmetic problems. Singleton types seem to
have been first used by Xi in the context of DML [60]. The idea
was later used in a number of works that further developed the idea
of representation types and intensional type analysis.
Logical frameworks [19, 37] use dependent types as a basis for
proof systems. While this is related to our work, logical frameworks
alone are not sufficient for our purposes, as we are interested
in computational programming languages that have effects
such as non-termination. It is only with the recent work of Shao,
Saha, Trifonov and Papaspyrou that we have a generic framework
for safely integrating a computational base language with a rich dependent
type system, without losing decidability (or soundness) of
type-checking.
Dybjer extensively studies the semantics of inductive sets and
families [11, 12, 13, 14, 16] and simultaneous inductive-recursive
definitions [15]. TL uses only the former (at the type level), and
we also use them at the value level (J(e,s,t)). The Coq proof
assistant provides fairly extensive support for both kinds of definitions
[17, 34, 35]. In the future, it will be interesting to explore
the integration of the second of these techniques into programming
languages.
One interesting problem is whether self-interpretation is possible
in a given programming language. This is possible with simply-typed
languages [54]. It is not clear, however, that it can be done
in a dependently typed language [38]. Exploring this problem is
interesting future work.
Finally, staged type inference [46] can also be used as a means
of obtaining programs without tags. Of the techniques discussed
in this paper, it is probably closest in spirit to tag elimination. In
fact, in a multi-stage setting tag elimination is applied at runtime
and is nothing but a non-standard type analysis. Key differences
are that in the staged type inference system the code type that is
used does not reflect any type information, and type information
can only be determined by dynamic type checking. More importantly,
the success and failure of staged type inference can depend
on whether the value in the code type has undergone simplification,
and it is easy to return a value that tells us (at runtime, in the
language) whether this dynamic inference succeeded or not. Tag
elimination, on the other hand, works on code that has an explicit
static type. Additionally, by using carefully crafted "fall-back plan"
projection/embedding pairs, runtime tag elimination is guaranteed
to always have the same denotational semantics (but certainly not
the same operational semantics) independently of the text of the code being
analyzed and any simplifications that may be done to the subject
program [54].
6 Conclusions and Future Work
In this paper we have shown how a dependently typed programming
language can be used to express a staged interpreter that completely
circumvents the need for runtime tagging and untagging operations
associated with universal datatypes. In doing so we have
highlighted two key practical issues that arise when trying to develop
staged interpreters in a dependently typed language. First,
the need for functions that build the representations of typing judgments
that the interpretation function should be defined over. And
second, the need for representation types to avoid polluting the type
language with the impure terms of the computational language. To
demonstrate that staging constructs and dependent types can be
safely combined, we formalize our language as a multi-stage computational
language typed by Shao, Saha, Trifonov, and Papaspyrou's
TL system. This allows us to prove type safety in a fairly
straightforward manner, and without having to duplicate the work
done for that system.
A practical concern about using dependent types for writing interpreters
is that such systems do not have decidable type inference,
which some view as a highly-valued feature for any typed language.
We did not find that the annotations were a burden, and some simple
tricks in the implementation were enough to avoid the need for
redundant annotations.
In carrying out this work we developed a deeper appreciation
for the subtleties involved in both dependently typed programming
and in the implementation of type checkers for dependently typed
languages. Our current implementation is a prototype system that
we have made available online [27]. Our next step is to study the
integration of such a dependently typed language into a practical
implementation of multi-stage programming, such as MetaOCaml
[28]. We have also found that there are a lot of opportunities in the context
of dependently typed languages that we would like to explore
in the future. Examples include syntactically lighter support for
representation types and formalizing some simple tricks that we have
used in our implementation to help alleviate the need for redundant
type annotations. We are also interested in exploring the use of dependent
types to reflect the resource needs of generated programs
[8, 24, 52].
--R
An exercise in dependent types: A well-typed interpreter
Lambda calculi with types.
Little languages.
Phase distinctions in type theory.
Type theory and programming.
Flexible type analysis.
Resource bound certification
Intensional polymorphism in type-erasure semantics
A modal analysis of staged computation.
Inductively defined sets in Martin-Löf's set theory
Inductive sets and families in Martin-Löf's type theory and their set-theoretic semantics
Inductive sets and families in Martin-Löf's type theory and their set-theoretic semantics
Inductive families.
A general formulation of simultaneous inductive-recursive definitions in type theory
Finite axiomatizations of inductive and inductive-recursive definitions
A tutorial on recursive types in Coq.
Semantics of Programming Languages.
A framework for defining logics
Compiling polymorphism using intensional type analysis.
A practical implementation of tag elimination.
Building domain specific embedded languages.
Modular domain specific languages and tools.
Proving the correctness of reactive systems using sized types.
Partial Evaluation and Automatic Program Generation.
On Jones-optimal specialization for strongly typed languages
MetaOCaml: A compiled
Inherited limits.
An idealized MetaML: Simpler
Programming in constructive set theory: Some examples
Programming in Martin-Löf's Type Theory
Oregon Graduate Institute Technical Reports.
Inductive definitions in the system Coq: Rules and properties.
Inductive definitions in the system Coq: Rules and properties.
Emir Pašalić
Logic programming in the LF logical framework
LEAP: A language with eval and polymorphism.
Basic Category Theory for Computer Scientists.
Microlanguages for operating system specialization
Definitional interpreters for higher-order programming languages
Definitional interpreters for higher-order programming languages
Nikolaos Papaspyrou
Benaissa, and Emir Pašalić
Peyton Jones.
A transformation library for data structures.
A sound reduction semantics for untyped CBN multi-stage computation
Directions in functional programming for real(-time) applications
Tag elimination - or - type specialisation is a type-indexed effect
Tag elimination and Jones-optimality
Semantics of Programming Languages.
Fully reflexive intensional type analysis.
A syntactic approach to type soundness.
Eliminating array bound checking through dependent types.
Dependent types in practical programming.
--TR
Basic category theory for computer scientists
Logic programming in the LF logical framework
Inductive sets and families in Martin-Löf's type theory and their set-theoretic semantics
Semantics of programming languages
Partial evaluation and automatic program generation
A syntactic approach to type soundness
Compiling polymorphism using intensional type analysis
A type-based compiler for standard ML
A modal analysis of staged computation
Proving the correctness of reactive systems using sized types
Building domain-specific embedded languages
Multi-stage programming with explicit annotations
Dynamic typing as staged type inference
Eliminating array bound checking through dependent types
Intensional polymorphism in type-erasure semantics
Dependent types in practical programming
Flexible type analysis
Resource bound certification
A sound reduction semantics for untyped CBN multi-stage computation. Or, the theory of MetaML is non-trivial (extended abstract)
DSL implementation using staging and monads
Fully reflexive intensional type analysis
A type system for certified binaries
Principles of Programming Languages
Semantics, Applications, and Implementation of Program Generation
Tag Elimination and Jones-Optimality
On Jones-Optimal Specialization for Strongly Typed Languages
Inductive Definitions in the system Coq - Rules and Properties
Multi-Stage Programming
Directions in Functional Programming for Real(-Time) Applications
An Idealized MetaML
Inherited Limits
Definitional interpreters for higher-order programming languages
Programming in Constructive Set Theory
Modular Domain Specific Languages and Tools
Multistage programming
--CTR
Chiyan Chen , Hongwei Xi, Implementing typeful program transformations, ACM SIGPLAN Notices, v.38 n.10, p.20-28, October
Jason Eckhardt , Roumen Kaiabachev , Emir Pasalic , Kedar Swadi , Walid Taha, Implicitly heterogeneous multi-stage programming, New Generation Computing, v.25 n.3, p.305-336, January 2007
Manuel Fähndrich , Michael Carbin , James R. Larus, Reflective program generation with patterns, Proceedings of the 5th international conference on Generative programming and component engineering, October 22-26, 2006, Portland, Oregon, USA
Adam Chlipala, A certified type-preserving compiler from lambda calculus to assembly language, ACM SIGPLAN Notices, v.42 n.6, June 2007
Seth Fogarty , Emir Pasalic , Jeremy Siek , Walid Taha, Concoqtion: indexed types now!, Proceedings of the 2007 ACM SIGPLAN symposium on Partial evaluation and semantics-based program manipulation, January 15-16, 2007, Nice, France
Oleg Kiselyov , Chung-chieh Shan, Lightweight Static Capabilities, Electronic Notes in Theoretical Computer Science (ENTCS), v.174 n.7, p.79-104, June, 2007
Chiyan Chen , Rui Shi , Hongwei Xi, Implementing Typeful Program Transformations, Fundamenta Informaticae, v.69 n.1-2, p.103-121, January 2006
Oleg Kiselyov , Kedar N. Swadi , Walid Taha, A methodology for generating verified combinatorial circuits, Proceedings of the 4th ACM international conference on Embedded software, September 27-29, 2004, Pisa, Italy
Tim Sheard, Languages of the future, ACM SIGPLAN Notices, v.39 n.12, December 2004
Edwin Brady , Kevin Hammond, A verified staged interpreter is a verified compiler, Proceedings of the 5th international conference on Generative programming and component engineering, October 22-26, 2006, Portland, Oregon, USA
Jim Grundy , Tom Melham , John O'leary, A reflective functional language for hardware design and theorem proving, Journal of Functional Programming, v.16 n.2, p.157-196, March 2006
Walid Taha , Michael Florentin Nielsen, Environment classifiers, ACM SIGPLAN Notices, v.38 n.1, p.26-37, January 2003 | calculus of constructions;definitional interpreters;multi-stage programming;domain-specific languages |
581642 | Bit section instruction set extension of ARM for embedded applications. | Programs that manipulate data at subword level, i.e. bit sections within a word, are commonplace in the embedded domain. Examples of such applications include media processing as well as network processing codes. These applications spend significant amounts of time packing and unpacking narrow width data into memory words. The execution time and memory overhead of packing and unpacking operations can be greatly reduced by providing direct instruction set support for manipulating bit sections. In this paper we present the Bit Section eXtension (BSX) to the ARM instruction set. We selected the ARM processor for this research because it is one of the most popular embedded processors and is also being used as the basis of building many commercial network processing architectures. We present the design of BSX instructions and their encoding into the ARM instruction set. We have incorporated the implementation of BSX into the Simplescalar ARM simulator from Michigan. Results of experiments with programs from various benchmark suites show that by using BSX instructions the total number of instructions executed at runtime by many transformed functions is reduced by 4.26% to 27.27% and their code sizes are reduced by 1.27% to 21.05%. |
INTRODUCTION
Programs for embedded applications frequently manipulate data
represented by bit sections within a single word. The need to operate
upon bit sections arises because such applications often involve
data which is smaller than a word, or even a byte. Moreover it is
also the characteristic of many such applications that at some point
the data has to be maintained in packed form, that is, multiple data
items must be packed together into a single word of memory. In
fact in most cases the input or the output of an application consists
of packed data. If the input consists of packed data, the application
typically unpacks it for further processing. If the output is required
to be in packed form, the application computes the results and explicitly
packs it before generating the output. Since packing and
unpacking of data is a characteristic of the application domain, it
is reflected in the source program itself. In this work we assume
that the programs are written in the C language as it is a widely
used language in the embedded domain. In C programs packing
and unpacking of data involves performing many bitwise logical
operations and shift operations.
Important applications that manipulate subword data include media
processing applications that manipulate packed narrow width
media data and network processing applications that manipulate
packets. Typically such embedded applications receive media data
or data packets over a transmission medium. Therefore, in order to
make best use of the communication bandwidth, it is desirable that
each individual subword data item be expressed in its natural size
and not expanded into a 32 bit entity for convenience. However,
when this data is deposited into memory, either upon its arrival as
an input or prior to its transmission as an output, it clearly exists in
packed form.
The processing of packed data that typically involves unpacking
of data, or generation of packed data that typically involves packing
of data, both require execution of additional instructions that
carry out shift and logical bitwise operations. These instructions
cost cycles and also increase the code size. The examples given below
are taken from adpcm (audio) and gsm (speech) applications
respectively. The first example is an illustration of an unpacking
operation which extracts a 4 bit entity from inputbuffer. The
second example illustrates the packing of a 5 bit entity taken from
LARc[2] with a 3 bit entity taken from LARc[3].
Unpacking:
Packing:
In addition to the generation of extra instructions for packing
and unpacking data, there are other consequences of packing and
unpacking. Additional memory locations and registers are required
to hold values in packed and unpacked form. The resulting increase in register
pressure can further increase memory requirements
and cache activity. Finally, all of the above factors influence the
total energy consumption, which can be of vital concern.
In this paper we present the Bit Section eXtension (BSX) to the
ARM processor's instruction set. Bit sections are the subword entities
that are manipulated by the programs. We selected the ARM
processor for this research because it is one of the most popular
embedded processors and is also being used by many commercial
network processing architectures being built today. We present
the design of BSX instructions and their encoding into the ARM
instruction set. The newly designed instructions allow us to specify
register operands that are bit sections of 32-bit values contained
within registers. As a result, data stored in packed form can be
directly accessed and manipulated, and thus the need for performing
explicit unpacking operations is eliminated. Similarly, computed results
can be stored directly in packed form, which eliminates the
need for explicit packing operations.
We have incorporated the implementation of BSX in the Simplescalar
ARM simulator from Michigan. Results of experiments
with programs from various benchmark suites show that by using
BSX instructions the number of instructions executed by these programs
can be significantly reduced. For the functions in which BSX
instructions were used we observed a reduction in dynamic instruction
counts ranging from 4.26% to 27.27%. The code sizes of
these functions were reduced by 1.27% to 21.05%.
The remainder of the paper is organized as follows. In Section
2 we describe the design of the bit section specification methods and
their incorporation into various types of instructions. We also show
how these new instructions are encoded using the unused encoding
space of the ARM instruction set. Section 3 describes our approach
to generating code that makes use of BSX instructions. Section 4
describes our experimental setup and the results of experiments. Related
work on instruction set design and compiler techniques for
taking advantage of these instructions is discussed in Section 5.
Concluding remarks are given in Section 6.
2. BIT SECTION EXTENSIONS (BSX)
2.1 Bit Section Descriptors
Subword level data entities are called bit sections. A bit section
is a sequence of consecutive bits within a word. A bit section can
vary from 1 bit long to 32 bits long. We specify bit sections through
use of bit section descriptors (BSDs).
To specify a bit section within a word, we have two options. One
way is to specify the starting bit position and the ending bit position
within the word. Another way is to specify the starting bit position
and bit section length. Either way it takes 10 bits to specify a single
bit section: 5 bits for the starting position and 5 bits for the length
or ending position.
We use the form which specifies the length of the bit section. By analyzing
the MediaBench and CommBench programs we found that
many operations involve multiple bit section operands of the same
size. Therefore when one instruction involves multiple bit section
operands, they can share the same bit section length specification.
While the lengths of multiple bit sections used by an instruction are
often the same and can be specified once, the starting bit positions of
these bit sections often differ, and thus, unlike the length,
the starting position specification cannot be shared.
2.2 Bit Section Addressing Modes
There are two different addressing modes through which bit section
descriptors can be specified. While the position of many bit
sections within the word boundary can be determined at compile
time, the position of some bit sections can only be determined at
run time. Therefore we need two addressing modes for specifying
bit sections: a bit section descriptor can be specified as an immediate
value encoded within the instruction, or it can be supplied
in a register when it cannot be expressed as an immediate constant.
The number of bit section operands that are used by various instructions
can vary from one to three.
2.2.1 Immediate Bit Section Descriptors
An immediate bit section descriptor is encoded as part of the instruction.
Let us assume that R is a register operand of the instruction
which is specified using 4 bits, as ARM contains 16 registers
(R0-R15). If the operand is a bit section within R whose position
within R is known to be fixed, then an immediate bit section
descriptor is associated with the register as follows:
R[#start, #len] refers to bits [#start .. #start + #len - 1] of R.
The constant #start is 5 bits as the starting position of the bit
section may vary from bit 0 to bit 31, and #len is also 5 bits as
the number of bits in the bit section can at most include all the bits
(0-31) of the register. Note that for valid bit section descriptors
#start + #len - 1 is never greater than 31.
Immediate bit section descriptors are used if the instruction
has either one or two bit section operands. When two bit section
descriptors need to be specified, the #len
specification is the same and hence shared by the two descriptors
as shown below.
R1[#start1], R2[#start2], #len refers to:
bits [#start1 .. #start1 + #len - 1] of R1, and
bits [#start2 .. #start2 + #len - 1] of R2.
2.2.2 Register Bit Section Descriptors
When both the operands of an instruction as well as its result are
bit sections, then three bit section descriptors need to be specified.
Even though all three bit sections share the same length, it is not
possible to specify all three bit sections as immediates because not
enough bits are available in an instruction to carry out this task.
Therefore in such cases the specification of the bit section descriptors
is stored in a register rather than as an immediate value in the
instruction itself.
There is another reason for specifying bit section descriptors in
registers. In some situations the positions and lengths of the bit
sections within a register are not fixed but rather determined at run-time
by the program. In this case the bit section descriptor is not an
immediate value specified as part of the instruction but rather the
descriptor is computed into the register which is then specified as
part of the instruction. The register which specifies the bit section
descriptor may specify one, two, or three bit sections in one, two,
or three (possibly different) registers, where register R contains
the bit section descriptors for the appropriate
operand registers R1 to R3. The contents of R are
organized as shown in Figures 1, 2 and 3.
Figure 1: Bit Section Descriptor for 1 Bit Section.
Figure 2: Bit Section Descriptor for 2 Bit Sections.
Figure 3: Bit Section Descriptor for 3 Bit Sections.
2.3 Bit Section Instructions & their Encoding
Next we describe the ARM instructions that are allowed the use
of bit section operands. While in principle it is possible to allow
any existing ARM instruction with register operands to access bit
sections within the register as operands, we cannot allow all instructions
this flexibility, as there would be too many new variations
of instructions and there is not enough space in the encoding
of ARM instructions to accommodate these new instructions.
Therefore we choose a selected subset of instructions which are
most likely to be involved in bit section operations and developed
variations for them. In the benchmarks we studied, most of the candidate
operations are related to data processing. Therefore eight data
processing instructions are chosen from version 5T of the ARM
instruction set: six ALU instructions (ADD, SUB,
AND, EOR, ORR, and RSB) as well as the compare and move (CMP
and MOV) instructions. The selection of these instructions was
based on studying a number of multimedia benchmarks and determining
the types of instructions that are most commonly needed.
Figure 4 shows the percentage of total executed instructions that
fall into the categories of instruction types selected for supporting
bit section operands. As we can see, the selected instructions
account for a significant percentage of the dynamic instruction counts.
Figure 4: Dynamic frequency of selected instructions (benchmarks: adpcm.decoder, adpcm.encoder, jpeg.cjpeg, g721.decode, g721.encode, cast.decoder, cast.encoder, frag, thres, bilint, histogram, convolve, softfloat, dh).
2.3.1 Instructions with Immediate BSDs
For each of the above instructions we provide three variations
when immediate bit section operands are used. In version 5T of
the arm instruction set the encoding space with prefix 11110 is
undefined. We use the remaining 27 bits of space of this undefined
instruction to deploy the new instructions. Of these 27 bits three
bits are used to distinguish between the eight operations that are
involved.
Let us discuss the three variations of each of the ALU instructions.
In the first variation (FV) of the above ALU instructions, the
corresponding instructions have two bit section operands. Therefore
one of the operands acts both as a source operand and as the
destination. The variants of the CMP and MOV instructions are slightly
different as they require only two operands, unlike the ALU instructions
which require three operands. For CMP the two bit section
operands are both source operands, and for MOV one operand is
the source and the other is the destination. We cannot allow all three
operands to be bit section operands at the same time because
three bit section operands would need at least 32 bits to specify.
The encoding of these instructions is shown below. The prefix
11110 in bits 31 to 27 indicates the presence of a BSX instruction.
The three bits that encode the eight operations are bits 24 to 26. Bit
23 is 0, which indicates that this is the first variation of the instruction.
The remaining bits encode the two bit section descriptors:
Rd[Rds, len] and Rm[Rms, len].
Figure 5: First Variation: ALU Instructions.
The second variation (SV) of instructions has three operands.
One is a destination register (not a bit section), one is a source
register (not a bit section), and the third operand is a bit section
operand. In this variation the operation is done as if the bit section
is zero extended. To specify this variation bit 23 must be 1 and bit
14 must be 0. The instruction format and encoding is shown below.
Figure 6: Second Variation: ALU Instructions.
CMP and MOV are again slightly different as they need only
two operands. Bit 15 is a flag to indicate whether the bit section
is to be treated as an unsigned or signed entity. If it is 0, then it is
unsigned and then zero extended before the operation. If it is 1, the
bit section is signed, and therefore the first bit in the bit section is
extended before the operation.
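The effect of the signed/unsigned flag can be modeled in C. This is our own sketch (the helper name bs_extend is ours, and it assumes 0 < len < 32): the section is pulled out of the register and either zero extended or has its top bit replicated, as the text describes.

```c
#include <stdint.h>

/* Zero- or sign-extend a bit section of 'len' bits starting at 'start'
   to a full 32-bit value, modeling the unsigned/signed flag described
   in the text. Assumes 0 < len < 32. */
static uint32_t bs_extend(uint32_t reg, unsigned start, unsigned len,
                          int is_signed) {
    uint32_t v = (reg >> start) & ((1u << len) - 1);
    if (is_signed && (v & (1u << (len - 1))))
        v |= ~((1u << len) - 1);   /* replicate the section's top bit */
    return v;
}
```

With the flag clear, bs_extend(0xF0, 4, 4, 0) yields 0xF; with the flag set, the same section sign-extends to 0xFFFFFFFF.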
Figure 7: CMP and MOV Instructions.
Figure 8: Third Variation: ALU Instructions.
The third variation (TV) has one 8-bit immediate value which is
one of the operands and one bit section descriptor which represents
the second operand. The latter bit section also serves as the destination
operand. To specify this variation, bit 23 must be 1 and bit
14 must be 1. The instruction format and encoding is shown above.
2.3.2 Instructions with Register BSDs
For each of the above instructions we have three variations when
register bit section operands are used. These variations differ in
the number of bit section operands. We found another undefined
instruction space with prefix 11111111 to encode these instructions
into version 5T of the arm instruction set. The encoding of the
instructions is as follows. Bits 19 to 21 contain the opcode while
bits 17 and 18 stand for the number of bit section operands in
the instruction. Therefore 01, 10, and 11 correspond to the presence
of 1, 2, and 3 bit section operands, respectively. The S bit specifies whether
the bit section contains an unsigned or signed integer. The format and
encoding of the instructions is given below.
Figure 9: ALU Instructions: Register BSDs.
Figure 10: CMP and MOV Instructions: Register BSDs.
Figure 11: Setup Instruction.
Figure 12: Setup Specifier.
Instructions CMP and MOV are a little different: they can
have at most two bit section operands. Therefore bits 17 and 18
can only be 01 or 10 and bits 8 to 11 are not specified.
The bit section descriptor itself contains several bit sections,
so the setup costs of a bit section descriptor in a register can be
high. Therefore we introduce new instructions with opcode setup
to set up the bit section descriptors efficiently. These instructions
can set multiple values in a bit section descriptor simultaneously. The
format and encoding of these instructions are given in Figures 11
and 12.
The instruction setup Rd, Rns, Rms, len can set up the
values of the Rns, Rms and len fields in the bit section descriptor held in Rd
simultaneously. A 6-bit setup specifier describes how a field is set
up. In each setup specifier, if bit 5 is 1, then bits 0 to 4 represent
an immediate value. The field is set up by copying this immediate
value. If bit 5 is 0 and bit 4 is 0, then bits 0 to 3 are used to specify
a register. The field is set up by copying the last five bits of the
register. For the Rns specifier, if bit 5 is 0 and bit 4 is 1, then Rns is not
a valid bit section specifier and must be ignored. In general, since
all three values (Rns, Rms, and len) can be in registers, we need
to read these registers to implement the instruction in one cycle.
However, in practice we never encountered a situation where there
was a need to read three registers.
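One way to read the 6-bit setup specifier rules above is the following C sketch. The function and variable names (decode_setup_spec, decode_demo, demo_regs) are ours, and the demo register contents are arbitrary; the decode logic follows the bit-5/bit-4 cases described in the text.

```c
#include <stdint.h>

/* Decode one 6-bit setup specifier. Returns the 5-bit value to place in
   the descriptor field, or -1 when the specifier marks the field as
   ignored (bit 5 = 0, bit 4 = 1). */
static int decode_setup_spec(unsigned spec, const uint32_t regs[16]) {
    spec &= 0x3F;
    if (spec & 0x20)                 /* bit 5 = 1: bits 0..4 are an immediate */
        return (int)(spec & 0x1F);
    if (spec & 0x10)                 /* bit 5 = 0, bit 4 = 1: field ignored   */
        return -1;
    /* bit 5 = 0, bit 4 = 0: bits 0..3 name a register;
       copy the last five bits of that register. */
    return (int)(regs[spec & 0xF] & 0x1F);
}

/* Small demo register file; values are arbitrary (r3 = 0x78). */
static uint32_t demo_regs[16] = { 0, 0, 0, 0x78 };
static int decode_demo(unsigned spec) {
    return decode_setup_spec(spec, demo_regs);
}
```

Under this reading, specifier 0x2A is the immediate 0x0A, specifier 0x10 marks the field as ignored, and specifier 0x03 copies the low five bits of r3 (0x18).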
2.4 BSX Implementation
To implement the BSX instructions two approaches are possi-
ble. One approach involves redesign of the register file. The bit
section can be directly supplied to the register file during a read or
write operation and logic inside the register file ensures that only
the appropriate bits of a register are read or written.
An alternative approach which does not require any modification
to the register file reads or writes an entire register. During a read,
the entire register is read, and then logic is provided so that the relevant
bit section can be selected to generate the bit section operand for
an instruction. Similarly during a write to update only some of the
bits in a register, in the cycle immediately before the cycle in which
the write back operation is to occur, the contents of the register to
be partially overwritten are read. The value read is made available
to the instruction during the write back stage where the relevant bit
section is first updated and then written to the register file. An extra
dedicated read port should be provided to perform the extra read
associated with each write operation.
The advantage of the first approach is that it is more energy effi-
cient. Even though it requires the redesign of the register file, it is
also quite simple. The second approach is not as energy efficient,
as it requires a greater number of register reads, and it is also somewhat
more complex to implement.
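The merge performed at write back in the second approach can be sketched as a read-modify-write in C. This is our own model (the name writeback_merge is ours): old stands for the register value obtained through the extra dedicated read port, and result is the bit-section value produced by the instruction.

```c
#include <stdint.h>

/* Second implementation option: a bit-section write is a read-modify-write
   of the full register. Only the 'len' bits starting at 'start' are
   replaced; all other bits of the old register value are preserved.
   Assumes 0 < len < 32. */
static uint32_t writeback_merge(uint32_t old, uint32_t result,
                                unsigned start, unsigned len) {
    uint32_t mask = ((1u << len) - 1) << start;
    return (old & ~mask) | ((result << start) & mask);
}
```

For example, writing 0xF into bits 8 to 11 of 0xAABBCCDD yields 0xAABBCFDD; the remaining 28 bits come from the pre-read value.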
3. GENERATING BSX arm CODE
Our approach to generating code that uses the BSX instructions
is to take existing arm code generated for programs using the unmodified
compiler and then, in a postpass, selectively replace the
use of arm instructions by BSX instructions to generate the optimized
code. The optimizations are aimed at packing and unpacking
operations in the context of bit sections with compile-time fixed and dynamically
varying positions.
3.1 Fixed Unpacking
An unpacking operation involves merely extracting a bit section
from a register that contains packed data and placing the bit section
by itself in the lower order bits of another register. The example
below illustrates unpacking which extracts bits 4 to 7
from inputbuffer and places them in the lower order bits of delta
(the higher order bits of delta are 0). As shown below, the arm
code requires two instructions, a shift and an and instruction.
However, a single BSX instruction which takes bits 4 to 7, zero
extends them, and places them in a register is sufficient to perform
unpacking.
arm code
mov r3, r8, asr #4
and r12, r3, #15 ; 0xf
BSX arm code
mov r12, r8[#4,#4]
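In C terms, the two listings above compute the same value. The sketch below is ours (the function names are hypothetical); it models the arm asr with a logical shift, which is equivalent here because the and masks away all but the low four bits.

```c
#include <stdint.h>

/* C-level view of the unpacking example: the two-instruction arm
   sequence and the single BSX mov compute the same value. */
static uint32_t unpack_arm(uint32_t inputbuffer) {
    uint32_t r3 = inputbuffer >> 4;     /* mov r3, r8, asr #4  */
    return r3 & 0xF;                    /* and r12, r3, #15    */
}

static uint32_t unpack_bsx(uint32_t inputbuffer) {
    /* mov r12, r8[#4,#4]: take bits 4..7, zero extend */
    return (inputbuffer >> 4) & ((1u << 4) - 1);
}
```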
The general transformation that optimizes the unpacking operation
takes the following form. In the arm code an and instruction
extracts bits from register ri and places them in register rj.
Then the extracted bit section placed in rj is used possibly multiple
times. In the transformed code, the and instruction is eliminated
and each use of rj is replaced by a direct use of bit section in
ri. This transformation also eliminates the temporary use of register
rj. Therefore, for this transformation to be legal, the compiler
must ensure that register rj is indeed temporarily used, that is, the
value in register rj is not referenced following the code fragment.
Before Transformation
and rj, ri, #mask(#s,#l)
inst1 use rj
instn use rj
Precondition
the bit section in ri remains unchanged
until instn and rj is dead after instn.
After Transformation
inst1 use ri[#s,#l]
instn use ri[#s,#l]
3.2 Fixed Packing
In arm code when a bit section is extracted from a data word
we must perform shift and and operations. Such operations can
be eliminated as a BSX instruction can be used to directly reference
the bit section. This situation is illustrated by the example
given below. The C code takes bits 0 to 4 of LARc[2] and concatenates
them with bits 2 to 4 of LARc[3]. The first two instructions
of the arm code extract the relevant bits from LARc[3], the third
instruction extracts the relevant bits from LARc[2], and the last instruction
concatenates the bits from LARc[2] and LARc[3]. As we can
see, the BSX arm code only has two instructions. The first instruction
extracts bits from LARc[3], zero extends them, and stores
them in register r0. The second instruction moves the relevant bits
of LARc[2] from register r1 and places them in proper position in
register r0.
arm code
mov r0, r0, lsr #2
and r0, r0, #7
and r2, r1, #31
orr r0, r0, r2, asl #3
BSX arm code
mov r0, r0[#2,#3]
mov r0[#3,#5], r1[#0,#5]
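The equivalence of the two listings can be checked with a small C model. This sketch is ours; LARc3 and LARc2 are one-word stand-ins for LARc[3] (in r0) and LARc[2] (in r1) from the example.

```c
#include <stdint.h>

/* Four-instruction arm version of the packing example. */
static uint32_t pack_arm(uint32_t r0, uint32_t r1) {
    r0 = r0 >> 2;               /* mov r0, r0, lsr #2        */
    r0 = r0 & 0x7;              /* and r0, r0, #7            */
    uint32_t r2 = r1 & 0x1F;    /* and r2, r1, #31           */
    return r0 | (r2 << 3);      /* orr r0, r0, r2, asl #3    */
}

/* Two-instruction BSX version: extract bits 2..4 of LARc3, then place
   bits 0..4 of LARc2 into bits 3..7 of the result. */
static uint32_t pack_bsx(uint32_t LARc3, uint32_t LARc2) {
    uint32_t r0 = (LARc3 >> 2) & 0x7;                  /* mov r0, r0[#2,#3]        */
    r0 = (r0 & ~(0x1Fu << 3)) | ((LARc2 & 0x1F) << 3); /* mov r0[#3,#5], r1[#0,#5] */
    return r0;
}
```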
In general the transformation for eliminating packing operations
can be characterized as follows. An instruction defines a bit section
and places it into a temporary register ri. The need to place the bit
section by itself into a temporary register ri arises because the bit
section is possibly used multiple times. Eventually the bit section
is packed into another register rj using an orr instruction. In the
optimized code, when the bit section is defined, it can be directly
computed into the position it is placed by the packing operation,
that is, into rj. All uses of the bit section can directly reference
the bit section from rj. Therefore the need for temporary register
ri is eliminated and the packing orr instruction is eliminated.
For this transformation to be legal, the compiler must ensure that
register ri is indeed temporarily used, that is, the value in ri is
not referenced after the code fragment.
Before Transformation
ri ;bit section definition in a whole register
inst1 use ri ;use register
instn use ri ;use register
orr rj, rj, ri ;pack bit section
Precondition
the bit sections in ri and rj remain unchanged
until orr and ri is dead after orr.
After Transformation
;define and pack
inst1 use rj ;use bit section
instn use rj ; use bit section
3.3 Dynamic Unpacking
There are situations in which, while extraction of bit sections is
to be carried out, the position of the bit section is determined at run-
time. In the example below, a number of lower order bits, where
the number equals the value of variable size, are extracted from
put buffer, zero extended, and placed back into put buffer.
Since the value of size is not known at compile time, an immediate
value cannot be used to specify the bit section descriptor. Instead
the first three arm instructions shown below are used to dynamically
construct the mask which is then used by the and instruction
to extract the required value from put buffer. In the optimized
code the bit section descriptor is setup in register r3 and then used
by the mov instruction to extract the required bits and place them by
themselves in r7.
arm code
mov r3, #1
mov r3, r3, lsl r5
sub r3, r3, #1
and r7, r7, r3
BSX arm code
setup r3, , #0, r5
mov r7, r7[r3]
The general form of this transformation is shown below. The
arm instructions that construct the mask are replaced by a single
setup instruction. The and instruction can be replaced by a mov
of a bit section whose descriptor can be found in the register set up
by the setup instruction.
arm code
mov ri, #1
mov ri, ri, lsl rj
sub ri, ri, #1
and rd, rn, ri
Precondition
value in ri should be dead
after and instruction.
BSX arm code
setup ri, rj, rj
mov rd, rn[ri]
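A C model of the dynamic unpacking transformation follows. The sketch and its function names are ours; size plays the role of the run-time value in r5/rj, and the BSX variant models a register bit section descriptor with start 0 and length size.

```c
#include <stdint.h>

/* arm version: build the mask (1 << size) - 1 at run time, then and. */
static uint32_t dyn_unpack_arm(uint32_t put_buffer, unsigned size) {
    uint32_t r3 = 1u;           /* mov r3, #1          */
    r3 <<= size;                /* mov r3, r3, lsl r5  */
    r3 -= 1u;                   /* sub r3, r3, #1      */
    return put_buffer & r3;     /* and r7, r7, r3      */
}

/* BSX version: the bit section descriptor [start = 0, len = size] set up
   by the setup instruction replaces the mask construction. */
static uint32_t dyn_unpack_bsx(uint32_t put_buffer, unsigned size) {
    return put_buffer & ((1u << size) - 1);   /* mov r7, r7[r3] */
}
```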
3.4 Dynamic Packing
Packing together bit sections whose sizes are not known until
runtime can cost several instructions. The C code given below extracts
the lower order p bits from m and the higher order 16 - p bits from
n and packs them together into o. The arm code for this operation
involves many instructions because first the required masks for m
and n are generated. Next the relevant bits are extracted using the
masks and finally they are packed together using the orr instruc-
tion. In contrast the BSX arm code uses far fewer instructions.
Since p's value is not known at compile time, we must use register
bit section descriptors for m and n.
arm code
mov r12, #1
and r1, r1, r2, lsl r3 ; n & ((1 << (16 - p)) - 1)
and r0, r0, r12 ; m & ((1 << p) - 1)
BSX arm code
setup r12, , #0, r3 ; descriptor for m's bit section
rsb r2, r3, #16
setup r2, , r3, r2 ; descriptor for n's bit section
relevant bits in r0
relevant bits in r0
In general the transformation for optimizing dynamic packing
operations can be described as follows. Two or more bit sections,
whose positions and lengths are unknown at compile time, are extracted
from registers where they currently reside and put into separate
registers respectively. A mask is constructed and an and instruction
is used to perform the extraction. Finally they are packed
together into one register using an orr instruction. In the optimized
code, for each bit section, we setup a register bit section descriptor
first, and then move the bit section into the final register with
the bit section descriptor directly. As a result, the orr instruction is
removed. By using the setup instruction to simultaneously setup
several fields in the bit section descriptor, we reduce the number
of instructions in comparison to the instruction sequence used to
create the masks in the original code. Different types of instruction
sequences can be used to create a mask and thus it is not always
possible to identify such sequences. Our current implementation
can only handle some commonly encountered sequences.
arm code
instruction sequence to create mask1
and ra, rb, mask1
instruction sequence to create mask2
and rc, rd, mask2
orr rm, ra, rc
BSX arm code
setup register bit section descriptor 1
move bit section 1 to rm using bit section descriptor 1
setup register bit section descriptor 2
move bit section 2 to rm using bit section descriptor 2
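The dynamic packing operation can be modeled in C as follows. This is our sketch and our reading of the 16-bit example: m contributes its low p bits and n contributes bits p through 15 in place; the function name is hypothetical.

```c
#include <stdint.h>

/* Pack the low p bits of m with bits p..15 of n into one 16-bit result.
   In the arm code each section is isolated with a run-time mask; in the
   BSX code each is moved with a register bit section descriptor, and the
   final orr disappears. Assumes 0 < p < 16. */
static uint32_t dyn_pack(uint32_t m, uint32_t n, unsigned p) {
    uint32_t lo = m & ((1u << p) - 1);                 /* m's bit section         */
    uint32_t hi = n & (((1u << (16 - p)) - 1) << p);   /* n's bits p..15, in place */
    return lo | hi;                                    /* packed result (o)       */
}
```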
4. EXPERIMENTAL EVALUATION
4.1 Experimental Setup
Before we present the results of the experiments, we describe
our experimental setup which includes a simulator for arm, an
optimizing compiler, and a set of relevant benchmarks.
Processor Simulator
We started out with a port of the cycle level simulator Simplescalar
[1] to arm available from the University of Michigan. This version
simulates the five stage pipeline described in the preceding
section, which is Intel's SA-1 StrongARM pipeline [8] found,
for example, in the SA-110. The I-Cache configuration for this
processor is: 16KB cache size, 32-byte line size, 32-way associativity,
and a miss penalty of 64 cycles (a miss requires going off-chip).
The timing of the model has been validated against a Rebel
NetWinder Developer workstation [16] by the developers of the
system at Michigan.
We have extended the above simulator in a number of important
ways for this research. First we modified Simplescalar to use
the system call conventions followed by the Newlib C library instead
of glibc which it currently uses. We made this modification
because Newlib has been developed for use by embedded systems
[10]. Second we incorporated the implementation of the BSX
instructions for the purpose of their evaluation. In addition, we
have also incorporated the Thumb instruction set into Simplescalar.
However, this feature is not relevant for this paper.
Optimizing Compiler
The compiler we used in this work is the gcc compiler which was
built to create a version that supports generation of arm, Thumb
as well as mixed arm and Thumb code. Specifically we use the
xscale-elf-gcc compiler version 2.9-xscale. All programs
were compiled at -O2 level of optimization. We did not use
-O3 because at that level of optimization function inlining and loop
unrolling is enabled. Clearly since the code size is an important
concern for embedded systems, we did not want to enable function
inlining and loop unrolling.
The translation of arm code into optimized BSX arm code
was carried out by an optimization postpass. Only the frequently
executed functions in each program that involve packing, unpack-
ing, and use of bit section data were translated into BSX arm
code. The remainder of the program was not modified. As we have
seen from the transformations of the preceding section, temporary
registers are freed by the optimizations. While it may be possible
to improve the code quality by making use of these registers, we do
not do so at this time due to the limitations of our implementation.
Representative Benchmarks
The benchmarks we use are taken from the Mediabench [12],
Commbench [21], Netbench [14], and Bitwise [18] suites as
they are representative of a class of applications important for the
embedded domain. We also added an image processing application
thres. The following programs are used:
Mediabench: adpcm - decoder and encoder; g721 - decode and
encode; and jpeg - cjpeg.
Commbench: frag; and cast - decoder and encoder.
Image Processing: thres.
Bitwise: bilint, histogram, convolve, and softfloat.
Netbench: dh.
4.2 Results
Next we present the results of experiments that measure the improvements
in code quality due to the use of BSX instructions. We
measured the reductions in both the instruction counts and cycle
counts for BSX arm code in comparison to pure arm code. The
results are given in Tables 1 and 2. In these results we provide
the percentage improvements for each of the functions that were
modified as well as the improvements in the total counts for the entire
program. The reduction in instruction counts for the modified
functions varies between 4.26% and 27.27%. The net instruction
count reductions for the entire programs are lower and range from
0.45% to 8.79%. This is to be expected because only a subset of
functions in the programs can make significant use of the BSX in-
struction. The reductions in cycle counts for the modified functions
varies between 0.66% and 27.27%. The net cycle count reductions
for the entire programs range from 0.39% to 8.67%. In Table 5 the
reductions in code size of functions that were transformed to make
use of BSX instructions are given. The code size reductions range
from 1.27% to 21.05%.
Finally we also studied the usage of BSX instructions and transformations
used by the benchmarks. In Table 3 we show the types
of BSX instructions that were used by each of the benchmarks. In
particular, we indicate whether fixed BSDs were used in the instructions
or dynamic BSDs were used. For fixed BSDs we also indicate
which of the three variations of bit section referencing instructions
were used by the benchmark. For dynamic BSDs
we also indicate the use of setup instruction. As we can see, fixed
BSDs are more commonly used and situations involving the three
variations of bit section operands arise. In Table 4 we show the kind
of transformations that were found to be applicable to each of the
benchmarks - packing and unpacking involving fixed or dynamic
BSDs. As we can see, each optimization and every BSX instruction
was used in some program. The results of Tables 3 and 4 indicate
that the fixed BSD instructions that we have included in BSX are
appropriate and useful. The results for register BSDs are negative.
While we found instances where the positions of the BSDs vary at
runtime, we were not able to develop the appropriate compiler transformations
to effectively take advantage of these situations using
our instructions.
One of the benefits of using BSX instructions is that often the
number of registers required is reduced. This is because multiple
subword data items can now simultaneously reside in a single
register and it is no longer necessary to separate them and hold them in different
registers. The performance data presented above is based upon
BSX arm code that does not take advantage of the additional registers
that may become available. Once the registers are used one
can expect additional performance gains. While the problem of
global register allocation for subword data is beyond the scope of
this paper, in a related paper [19] we have shown that register requirements
can be reduced by 12% to 50% for functions that can
take advantage of BSX instructions.
5. RELATED WORK
A wide variety of instruction set support has been developed to
support multimedia and network processing applications. Most of
these extensions have to do with exploiting subword [5] and super-
word [11] parallelism. The instruction set extensions proposed by
Yang and Lee [22] focus on permuting subword data that is packed
together in registers. The network processor described in [15] also
supports bit section referencing. In this paper we carefully designed
an extension consisting of a small subset of flexible bit-section referencing
instructions and showed how they can be easily incorporated
in a popular embedded arm processor.
Compiler research on subword data can be divided into two categories.
First, work is being done to automatically identify narrow
width data. Second, techniques to automatically pack narrow width
data and perform register allocation, instruction selection, and generation
of SIMD parallel instructions are being carried out.
There are several complementary techniques for identifying subword
data. Stephenson et al. [18] proposed bitwidth analysis to
discover narrow width data by performing value range analysis.
Budiu et al. [2] propose an analysis for inferring individual bit
values which can be used to narrow the width of data. Tallam and
Gupta [19] propose a new type of dead bits analysis for narrowing
the width of data. The analysis by Zhang et al. [7] is aimed
at automatic discovery of multiple data items packed into program
variables.
The works on packing narrow width data after its discovery include
the following. Davidson and Jinturkar [3] were first to propose
a compiler optimization that exploits narrow width data. They
proposed memory coalescing for improving the cache performance
of a program. Zhang and Gupta [23] have proposed techniques for
compressing narrow width and pointer data for improving cache
performance. Both of these techniques were explored in context
of general purpose processors and both change the data layout in
memory through packing. Aggressive packing of scalar variables
into registers is studied in [19]. As mentioned earlier, this register
allocation technique, when combined with the work in this paper,
can further improve performance. Another work on register allocation
in the presence of bit section referencing is by Wagner and Leupers [20].
Table 1: Reduction in Dynamic Instruction Counts.
Benchmark Instruction Count Savings
Function arm BSX arm [%]
adpcm.decoder
adpcm decoder 6124744 5755944 6.02%
Total 6156561 5787760 5.99%
adpcm.encoder
adpcm encoder 7097316 6654756 6.24%
Total 7129778 6687534 6.20%
jpeg.cjpeg
emit bits 634233 586291 7.56%
Total 15765616 15694887 0.45%
g721.decode
fmult 47162982 43282495 8.23%
predictor zero 9293760 8408640 9.52%
step size 1468377 1320857 10.05%
reconstruct 2628342 2480822 5.61%
Total 258536428 253180667 2.07%
g721.encode
fmult 48750464 44367638 8.99%
predictor zero 9293760 8408640 9.52%
step size 2372877 2225357 6.22%
reconstruct 2645593 2498073 5.58%
Total 264021499 258163419 2.22%
cast.decoder
CAST encrypt 41942016 37850112 9.76%
Total 109091228 103209100 5.40%
cast.encoder
CAST encrypt 41942016 37850112 9.76%
Total 105378485 99496358 5.58%
frag
in cksum 26991150 25494952 5.54%
Total 37506531 36010318 3.99%
threshold
coalesce 3012608 2602208 13.62%
memo 3223563 2814963 12.68%
blocked memo 2941542 2531826 13.93%
Total
bilint
main 87 79 9.20%
Total 496 488 1.61%
histogram
main 317466 301082 5.16%
Total 327311 310857 5.03%
convolve
main 30496 30240 0.84%
Total 30799 30542 0.83%
softfloat
float32 signals nan 132 96 27.27%
addFloat32Sigs 29 23 20.70%
subFloat32Sigs 29 23 20.70%
float32 mul
float32 div
rem 28 23 17.86%
Total 898 819 8.79%
dh
NN DigitMult 153713163 141768387 7.77%
Total 432372762 419604191 2.95%
Table 2: Reduction in Dynamic Cycle Counts.
Benchmark Cycle Count Savings
Function arm BSX arm [%]
adpcm.decoder
adpcm decoder 6424241 6202961 3.44%
Total 6499880 6278786 3.40%
adpcm.encoder
adpcm encoder 7958088 7515456 5.56%
Total 8035001 7592761 5.50%
jpeg.cjpeg
emit bits 1047235 999163 4.59%
Total 19611965 19535002 0.39%
g721.decode
fmult 63914793 60034237 6.07%
predictor zero 12834446 11949382 6.90%
step size 1564728 1269752 18.85%
reconstruct 2601534 2454014 5.67%
Total 347037906 341531879 1.59%
g721.encode
fmult 65798336 61415518 6.66%
predictor zero 12834447 11949327 6.90%
step size 2630082 2335106 11.22%
reconstruct 2636030 2488439 5.60%
Total 353610636 347605462 1.70%
cast.decoder
CAST encrypt 46557053 40674664 12.63%
Total 141113081 133440304 5.44%
cast.encoder
CAST encrypt 46557174 40674817 12.63%
Total 135572465 127900147 5.66%
frag
in cksum 32698919 31205099 4.57%
Total 57745393 56207197 2.66%
threshold
coalesce 4355796 3937458 9.60%
memo 4725060 4307735 8.83%
blocked memo 22092904 21683166 1.85%
Total 181425566 180186381 0.68%
bilint
main 887 808 8.91%
Total 5957 5878 1.32%
histogram
main 481531 462532 3.95%
Total 496650 477807 3.79%
convolve
main 40215 39949 0.66%
Total 44945 44803 0.32%
softfloat
float32 signals nan 132 96 27.27%
addFloat32Sigs 324 247 23.77%
subFloat32Sigs 675 595 11.85%
float32 mul 577 513 11.09%
float32 div 397 380 4.28%
rem 453 314 30.68%
Total 10255 9366 8.67%
dh
NN DigitMult 236874768 224929644 5.04%
Total 578187905 565434086 2.21%
Table 3: BSX Instruction Usage.
Benchmark Fixed BSDs Dynamic Setup
adpcm.decoder yes yes
adpcm.encoder yes yes
jpeg.cjpeg yes yes yes
g721.decode yes yes
g721.encode yes yes
cast.decoder yes yes
cast.encoder yes yes
frag yes
thres yes
bilint yes
histogram yes yes
convolve yes
softfloat yes yes yes
dh yes yes
Table 4: Transformations Applied.
Benchmark Fixed BSDs Dynamic BSDs
Pack Unpack Pack Unpack
adpcm.decoder yes
adpcm.encoder yes yes
jpeg.cjpeg yes
g721.decode yes
g721.encode yes
cast.decoder yes yes
cast.encoder yes yes
frag yes
thres yes
bilint yes
histogram yes
convolve yes
softfloat yes yes
dh yes
Table 5: Reduction in Code Size.
Benchmark Code Size Reduction
Function arm BSX arm [%]
adpcm.decoder
adpcm decoder 260 248 4.62%
adpcm.encoder
adpcm encoder 300 284 5.33%
jpeg.cjpeg
emit bits 228 216 5.26%
g721.decode and g721.encode
fmult 196 176 10.2%
predictor zero 92 84 8.7%
step size 76 72 5.26%
reconstruct 96 92 4.17%
cast.decoder and cast.encoder
CAST encrypt 1328 1200 9.64%
frag
in cksum 108 88 18.52%
threshold
coalesce 148 136 8.11%
memo 296 284 4.05%
blocked memo 212 200 5.66%
bilint
main 352 320 9.09%
histogram
main 316 312 1.27%
convolve
main 652 648 0.61%
softfloat
addFloat32Sigs 348 324 6.90%
subFloat32Sigs 396 372 6.06%
float32 mul 428 400 6.54%
float32 div 544 520 4.41%
rem 648 628 3.09%
float32 sqrt 484 464 4.13%
dh
NN DigitMult 112 104 7.14%
[20]. Their work exploits bit section referencing in the context of variables
that already contain packed data. They do not carry out any
additional variable packing. Compiler techniques for carrying out
SIMD operations on narrow width data packed in registers can be
found in [4, 11].
6. CONCLUSIONS
We presented the design of the Bit Section eXtension (BSX) to
the arm processor which can be easily encoded into the free encoding
space of the arm instruction set. We found that bit sections
are frequently manipulated by multimedia and network data
processing codes. Therefore BSX instructions can be used quite
effectively to improve the performance of these benchmarks. In
addition, reductions in code size and register requirements also result
when BSX instructions are used. We have incorporated the
implementation of BSX in the Simplescalar arm simulator from
Michigan. Results of experiments with programs from the various
benchmark suites show that by using BSX instructions the number
of instructions executed by these programs can be significantly re-
duced. Our future work will focus on integrating the use of BSX instructions
with register allocation techniques that aggressively pack
subword variables into single registers.
Acknowledgements
This work is supported by DARPA award F29601-00-1-0183 and
National Science Foundation grants CCR-0220334, CCR-0208756,
CCR-0105355, and EIA-0080123 to the University of Arizona.
7. REFERENCES
"The Simplescalar Tool Set, Version 2.0,"
"BitValue Inference: Detecting and Exploiting Narrow Width Computations,"
"Memory Access Coalescing : A Technique for Eliminating Redundant Memory Accesses,"
"Compiling for SIMD within Register,"
"Data Alignment for Sub-Word Parallelism in DSP,"
"ARM system Architecture,"
"A Representation for Bit Section Based Analysis and Optimization,"
"SA-110 Microprocessor Technical Reference Manual,"
"The Intel XScale Microarchitecture Technical Summary,"
"Exploiting Superword Level Parallelism with Multimedia Instruction Sets,"
A Tool for Evaluating and Synthesizing Multimedia and Communications Systems,"
"A 160-MHz, 32-b, 0.5-W CMOS RISC Microprocessor,"
Benchmarking Suite for Network Processors,"
"A New Network Processor Architecture for High Speed Communications,"
http://www.
"ARM Architecture Reference Manual,"
"Bitwidth Analysis with Application to Silicon Compilation,"
"Bitwidth Aware Global Register Allocation,"
"C Compiler Design for an Industrial Network Processor,"
"Commbench - A Telecommunications Benchmark for Network Processors,"
"Fast Subword Permutation Instructions Using Omega and Flip Network Stages,"
"Data Compression Transformations for Dynamically Allocated Data Structures,"
Ranjit Jhala , Rupak Majumdar, Bit level types for high level reasoning, Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering, November 05-11, 2006, Portland, Oregon, USA | bit section operations;multimedia data;network processing |
581647 | Increasing power efficiency of multi-core network processors through data filtering. | We propose and evaluate a data filtering method to reduce the power consumption of high-end processors with multiple execution cores. Although the proposed method can be applied to a wide variety of multi-processor systems including MPPs, SMPs and any type of single-chip multiprocessor, we concentrate on Network Processors. The proposed method uses an execution unit called Data Filtering Engine that processes data with low temporal locality before it is placed on the system bus. The execution cores use locality to decide which load instructions have low temporal locality and which portion of the surrounding code should be off-loaded to the data filtering engine.Our technique reduces the power consumption, because a) the low temporal data is processed on the data filtering engine before it is placed onto the high capacitance system bus, and b) the conflict misses caused by low temporal data are reduced resulting in fewer accesses to the L2 cache. Specifically, we show that our technique reduces the bus accesses in representative applications by as much as 46.8% (26.5% on average) and reduces the overall power by as much as 15.6% (8.6% on average) on a single-core processor. It also improves the performance by as much as 76.7% (29.7% on average) for a processor with execution cores. | INTRODUCTION
Power has been traditionally a limited resource and one of the most
important design criteria for mobile processors and embedded
systems. Due in part to increased logic density, power dissipation in
high performance processors is also becoming a major design factor.
The increased number of transistors causes processors to dissipate
more heat, which in turn reduces processor performance and
reliability.
Network Processors (NPUs) are processors optimized for networking
applications. Until recently, processing elements in the networks
were either general-purpose processors or ASIC designs. Since
general-purpose processors are software programmable, they are very
flexible in implementing different networking tasks. ASICs, on the
other hand, typically have better performance. However, if there is a
change in the protocol or the application, it is hard to reflect the
change in the design. With the increasing number of new protocols
and increasing link speeds, there is a need for processing elements
that satisfy the processing and flexibility requirements of the modern
networks. NPUs fill this gap by combining network specific
processing elements with software programmability.
In this paper, we present an architectural technique to reduce the
power consumption of multi-core processors. Specifically, we:
a) present simulation results showing that most of the misses in
networking applications are caused by only a few instructions,
b) devise a power reduction technique, which is a combination of a
locality detection mechanism and an execution engine,
c) discuss a fine-grain technique to offload code segments to the
engine, and
d) conduct simulation experiments to evaluate the effectiveness of
our technique.
Although our technique can be efficiently employed by a variety of
multi-processor systems, we concentrate on NPUs, because most of
the NPUs follow the single-chip multiprocessor design methodology
[18] (e.g. Intel IXP 1200 [11] (7 execution cores) and IBM PowerNP
[12] (17 execution cores)). In addition, these chips consume
significant power: IBM PowerNP [12] consumes 20W during typical
operations, whereas C-Port C-5 [7] consumes 15W. The multiple
execution cores are often connected by a global system bus, as shown in Figure 1. The capacitive load on the processor's input/output
drivers is usually much larger (by orders of magnitude) than that on
the internal nodes of the processor [23]. Consequently, a significant
portion of the power is consumed by the bus.
Figure 1. A generic Network Processor design and the location of the proposed DFE
CASES 2002, October 8-11, 2002, Grenoble, France.
Our technique uses two structures to achieve desired reduction in
power consumption. First, the shared global memory is connected to
the system bus through an execution unit named Data Filtering
Engine (DFE). If not activated, the DFE passes the data to the bus,
and hence has no effect on the execution of the processors. If the DFE
is activated by an execution core, it processes the data and places the
results on the bus. The goal is to process the low-temporal data
within the DFE so that the number of bus accesses is reduced. By
reducing the bus accesses and the cache misses caused by these low-
temporal accesses, our technique achieves significant power
reduction in the processor. Second, the low temporal accesses are
determined by the execution cores using a locality prediction table
(LPT). This table stores information from prior loads. The LPT stores
the program counter (PC) of the instruction as well as the miss/hit
behavior in the recent executions of the instruction. The LPT is also
used to determine which section of the code should be offloaded to
the DFE. The details of the LPT will be explained in Section 2.1. In
the following subsection, we present simulation numbers motivating
the use of data filtering for power reduction.
Figure 2. Number of instructions causing the DL1 misses (bars show, per application, the fraction of DL1 misses caused by the top 1, 5, 25, 125, and 625 instructions)
1.1 Motivation
In many applications, the majority of misses in the L1 data cache are
caused by a few instructions [1]. We have performed a set of
simulations to see the cache miss distribution for different
instructions in networking applications. We have simulated several
applications from the NetBench suite [20]. The simulations are
performed using the SimpleScalar simulator [17] and the processor
configuration explained in Section 4.1. The results are presented in Figure 2, which gives the percentage of the data misses caused by different numbers of instructions. For example, in the CRC
application, approximately 30% of the misses are caused by a single
instruction and 80% of the misses are caused by five instructions
with the highest number of misses. On average, 58% of the misses
are caused by only five instructions and 87% of the misses are caused
by 25 instructions. In addition, we have performed another set of
experiments to see what type of data access causes the misses. The simulations reveal that 66% of the misses occur when the processor is reading packet data. Since a few instructions cause the majority of the misses, offloading these instructions to the DFE can significantly reduce the number of cache misses and L2 accesses (and hence bus accesses).
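The per-PC miss accounting behind these numbers can be reproduced with a small table indexed by load PC. The sketch below is our own illustration (the table size, function names, and the linear lookup are assumptions, not part of the simulator):

```c
#include <stdlib.h>

/* Illustrative per-PC miss counting.  A simulator would call
   record_miss(pc) on every DL1 miss; top_k_coverage(k) then returns
   the fraction of all misses caused by the k load instructions with
   the highest miss counts (the quantity plotted in Figure 2). */

#define MAX_PCS 4096  /* assumed table size */

typedef struct { unsigned pc; unsigned long misses; } MissEntry;

MissEntry miss_table[MAX_PCS];
int n_entries = 0;
unsigned long total_misses = 0;

void record_miss(unsigned pc)
{
    for (int i = 0; i < n_entries; i++) {
        if (miss_table[i].pc == pc) {
            miss_table[i].misses++;
            total_misses++;
            return;
        }
    }
    if (n_entries < MAX_PCS) {
        miss_table[n_entries].pc = pc;
        miss_table[n_entries].misses = 1;
        n_entries++;
        total_misses++;
    }
}

int cmp_desc(const void *a, const void *b)
{
    const MissEntry *x = a, *y = b;
    return (x->misses < y->misses) - (x->misses > y->misses);
}

/* Fraction (0..1) of misses caused by the k worst instructions. */
double top_k_coverage(int k)
{
    if (total_misses == 0) return 0.0;
    qsort(miss_table, n_entries, sizeof(MissEntry), cmp_desc);
    unsigned long covered = 0;
    for (int i = 0; i < k && i < n_entries; i++)
        covered += miss_table[i].misses;
    return (double)covered / (double)total_misses;
}
```

With a skewed miss stream (e.g. eight misses from one PC and one each from two others), `top_k_coverage(1)` already exceeds the CRC-style single-instruction share reported above.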
In the next section, we present results from combined split cache /
cache bypassing mechanism and discuss the disadvantages of such
locality enhancement techniques. Section 2.1 explains the details of
the LPT. In Section 3, we discuss the design of the DFE and show
how the design options can be varied. Section 4 presents the
experimental results. In Section 5, we give an overview of the related
work and Section 6 concludes the paper with a summary.
2. REDUCING L1 CACHE ACCESSES
There are several cache locality-enhancing mechanisms proposed in
the literature [9, 13, 14, 25, 26]. The power implications of some of
these mechanisms have been examined by Bahar et al. [4]. These
techniques try to improve the performance by reducing the L1 cache
misses. Since the L1 misses are reduced, intuitively, these techniques
should also reduce the power consumption due to less L1 and L2
cache activity.
In this section, we first examine the power implications of a
representative cache locality enhancing mechanism, the combined
split cache / cache bypassing mechanism proposed by Gonzalez et al.
[9]. We study split cache technique because our proposed mechanism
uses an advanced version of this mechanism to detect the code
segments to be offloaded. In this mechanism, the L1 data cache is
split into two structures, one storing data with temporal locality
(temporal cache) and the other storing data with spatial locality
(spatial cache). The processor uses a locality prediction table to
categorize the instructions into: a) accessing data with temporal
locality (temporal instructions), b) accessing data with spatial
locality (spatial instructions), or c) accessing data with no locality
(bypass instructions). The detailed simulation results will be
explained in Section 4. Although the technique reduces the number
of execution cycles in most applications, it does not have the same
impact on the overall power consumption. In most applications, the
technique increases the power consumption of the data caches. The
reasons for this increase are twofold: an LPT structure has to be
accessed with every data access and two smaller caches (one with
larger linesize) have to be accessed in parallel instead of a single
cache. Nevertheless, the overall power consumption is reduced by the
technique due to the significant reduction in the bus switching
activity.
In the following, we first explain the LPT as presented by Gonzalez
et al. [9] and then discuss the enhancements required to implement
our proposed technique.
2.1 Locality Prediction Table
The LPT is a small table that the processor accesses on every data access, indexed by the PC of the instruction. It stores information about the past behavior of the access: the last address accessed, the size of the access, the miss/hit history, and the prediction made using the other fields of the LPT. By considering the
stride and the past behavior, a decision is made about whether the
data accessed by the instruction should be placed into the temporal
cache, spatial cache or should be uncached.
We have modified the original LPT to accommodate the information
required by the DFE. We have added three more fields to the LPT:
start address of the code to be offloaded to the DFE (sadd field), end
address of the code to be offloaded (eadd field) and the variables
(registers) required to execute the offloaded code (lreg field).
Assuming 32-bit address and a register file of size 32, these three
fields require an additional 128 bits¹ for each line in the LPT.
functions of these fields will be explained in the following section,
where we discuss the design of the DFE.
¹ The lreg field has 2 bits for each register, as explained in Section 3.1. We have assumed 32 registers in the execution cores; hence, 64 bits are required by the lreg field.
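A minimal sketch of one LPT line with the three added fields, together with an illustrative classification rule (the thresholds and the predictor logic below are our assumptions; the actual predictor of Gonzalez et al. [9] combines stride and hit/miss history differently):

```c
#include <stdint.h>

enum locality { TEMPORAL, SPATIAL, BYPASS };

typedef struct {
    uint32_t pc;          /* tag: PC of the load                      */
    uint32_t last_addr;   /* last address accessed by this load       */
    uint8_t  size;        /* access size                              */
    uint8_t  history;     /* recent hit(1)/miss(0) bits, newest = LSB */
    uint8_t  prediction;  /* cached locality decision                 */
    /* fields added for the DFE (32 + 32 + 64 = 128 bits per line): */
    uint32_t sadd;        /* start address of offloadable segment     */
    uint32_t eadd;        /* end address of offloadable segment       */
    uint64_t lreg;        /* 2 bits per register: required/generated  */
} LptLine;

/* Illustrative classifier: mostly-hitting loads are temporal;
   mostly-missing loads with a small constant stride are spatial;
   everything else bypasses the cache. */
enum locality classify(const LptLine *e, uint32_t addr)
{
    int hits = __builtin_popcount(e->history & 0xFF);
    uint32_t stride = addr - e->last_addr;
    if (hits >= 6) return TEMPORAL;
    if (stride != 0 && stride <= 64) return SPATIAL;
    return BYPASS;
}
```

Note that the three DFE fields add exactly the 128 bits per line stated above.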
3. DATA FILTERING ENGINE
Figure 4 presents the DFE design. The DFE is an execution core with
additional features to control the passing of the memory data to the
bus. If the memory request has originated from an execution core, the
pass gate is opened and the request transfers to the bus as usual. If
the request has originated from the DFE, the controller closes the
pass gate and forwards the data to either DFE data cache or DFE
instruction cache (in the experiments explained in Section 4, the DFE
is equipped with 2 KB data and instruction caches, compared to the 4
KB of data and instruction caches in the execution cores). There are
two more differences between the general-purpose execution cores
and the DFE. First, the DFE controller is equipped with additional
logic to check whether the code executed requests a register value not
generated by DFE or communicated to it from the execution cores. In
such a case, the DFE communicates to the master to create an
interrupt for the execution core that offloaded the code segment to the
DFE. Second, the DFE has a code-management unit (CMU) to keep
track of the origin of the code executed.
for (i = 0; i < N; i++)
    sum += array[i];
Figure 3. A code segment showing the effectiveness of DFE
Figure
4. DFE Design.
The DFE is located between the on-chip L2 cache and the execution
cores. It is activated/deactivated by the execution cores. When active,
it executes a code segment that is determined by the core and
communicates the necessary results back to the core. Consider the
code segment in Figure 3. If the code segment is executed in one of
cores, the core has to read the complete array, which means that the
system bus has to be accessed several times. If the array structure is
not going to be used again, these accesses will result in unnecessary
power consumption in the processor. If this code segment is executed
on the DFE, on the other hand, the bus will be accessed only to
initiate the execution and to get the result (sum) from the DFE. Besides reducing the number of bus accesses, offloading this segment can also have a positive impact on the execution core's cache, because the replacements due to accessing the array will be prevented. Note that
the code segments to be executed on the DFE are not limited to
loops. We discuss which code segments should be offloaded to the
DFE in Section 3.1.
3.1 Determining the DFE code
When an execution core detects an instruction that accesses low-
temporal data (instructions categorized as spatial or bypass
instructions by the LPT), it starts gathering information about the
code segment containing the instruction. In the first run, the loop that
contains the candidate PC is detected. After executing the load, the
execution core checks for branch instructions that jump over the
examined PC (the destination of this branch is the start of the
containing loop) or procedure call/return instructions. The PC of this
instruction is stored in the eadd field of the LPT and corresponds to
the last instruction in the code to be offloaded. If the core found a
containing loop, then the destination of the jump is stored in the sadd
field, which corresponds to the start of the code to be offloaded. If
there is no containing loop (the core detected a procedure call/return
instruction) then the examined PC is stored in the sadd field. Once
the start and end addresses are detected, the execution core gathers
information about the register values required by the code to be
offloaded. The source registers for each instruction are marked as
required in the LPT if they are not generated within the code.
Destination register is marked as generated if it was not marked
required. This marking is performed with the help of the lreg field in
the LPT². Once a code segment is completely processed, it is stored
in the LPT. If the execution reaches the start address again, the code
is directly offloaded to the DFE. To achieve this, we use a
comparator that stores the five last offloaded code segments. If the
PC becomes equal to one of these entries, the corresponding code is
automatically offloaded. Note that the process of looking for code to
offload can be turned off after a segment is executed several times.
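The lreg bookkeeping described above can be sketched as follows (the encoding values and helper names are our own; the 2-bit-per-register layout matches the footnote):

```c
#include <stdint.h>

/* 2-bit state per register, packed into the 64-bit lreg field. */
#define LREG_REQUIRED  1u   /* must be shipped to the DFE          */
#define LREG_GENERATED 2u   /* produced inside the segment itself  */

unsigned lreg_get(uint64_t lreg, int reg)
{
    return (unsigned)((lreg >> (2 * reg)) & 3u);
}

uint64_t lreg_set(uint64_t lreg, int reg, unsigned bits)
{
    return lreg | ((uint64_t)bits << (2 * reg));
}

/* Process one instruction "rd = f(rs1, rs2)" of the candidate
   segment: sources not generated within the segment are required;
   the destination is generated if it was not already required. */
uint64_t lreg_mark(uint64_t lreg, int rd, int rs1, int rs2)
{
    if (!(lreg_get(lreg, rs1) & LREG_GENERATED))
        lreg = lreg_set(lreg, rs1, LREG_REQUIRED);
    if (!(lreg_get(lreg, rs2) & LREG_GENERATED))
        lreg = lreg_set(lreg, rs2, LREG_REQUIRED);
    if (!(lreg_get(lreg, rd) & LREG_REQUIRED))
        lreg = lreg_set(lreg, rd, LREG_GENERATED);
    return lreg;
}

/* Number of register values that must be transferred to the DFE;
   this is the quantity compared against register_limit. */
int lreg_required_count(uint64_t lreg)
{
    int n = 0;
    for (int r = 0; r < 32; r++)
        if (lreg_get(lreg, r) & LREG_REQUIRED) n++;
    return n;
}
```

For a two-instruction segment `r3 = r1 + r2; r4 = r3 + r1`, only r1 and r2 end up marked required, so only two values need to be shipped to the DFE.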
For the example code segment in Figure 3, the method first
determines the load accesses to the array as candidates. Then, the
loop containing the load (in this case this is the whole for loop) is
captured by observing the branches back to the start of the loop.
Finally, the whole loop is migrated to the DFE in the third iteration
of the loop.
There are limitations to which code can be offloaded to the DFE. We
do not offload segments that contain store instructions to avoid a
cache coherency problem. In addition, the offloaded code must be
contained within a single procedure (this is achieved by checking
procedure call/return instructions). There are also some limitations
related to the performance of the offloading. We offload a segment
only if the number of required registers is below a certain
register_limit. When a code segment is offloaded to the DFE, it is
going to access the L2 cache for the code segment, so it might not be
beneficial to offload large code segments. Therefore, we do not
offload code segments that are larger than a codesize_limit. By
changing these parameters, we modify the aggressiveness of the
technique.
3.2 Executing the DFE code
When the execution core decides to offload a code segment, it sends
the start and end addresses of the segment to the DFE along with the
values of the required registers. This communication is achieved
through extension to the ISA with the instructions that initiate the
communication between the cores and the DFE. Therefore, no
additional ports are required on the register file. After receiving a
request, DFE first accesses the L2 cache to retrieve the instructions
and then starts the execution. If during the execution of the segment,
the DFE uses a register neither generated by the segment nor
² The lreg field uses 2 bits per register to represent the states: required, generated, required and generated, and invalid (or not used).
communicated to the DFE, it generates an interrupt to the core that
offloaded the segment to access the necessary register values. Such
an exception is possible because the register values required by the DFE are determined by the previous executions of the code segment.
Hence, with different input data, it is possible to execute a segment
that was not executed during the determination of the DFE code.
When the execution ends (the PC moves below the sadd or above the eadd), the DFE communicates the necessary register values to the core (i.e., the values generated by the DFE).
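A behavioral sketch of this hand-off, using hypothetical structures for the task and the DFE-side register file (the interrupt on a register value that was neither shipped nor generated follows the text; everything else is illustrative):

```c
#include <stdint.h>

typedef struct {
    uint32_t sadd, eadd;   /* segment boundaries                  */
    uint32_t valid;        /* bit r set: DFE holds a value for r  */
    uint32_t regs[32];     /* register values inside the DFE      */
} DfeTask;

/* Core side: package a task from the LPT line and the core's
   register file, shipping only the "required" registers. */
DfeTask dfe_offload(uint32_t sadd, uint32_t eadd,
                    uint64_t lreg, const uint32_t core_regs[32])
{
    DfeTask t = { sadd, eadd, 0, {0} };
    for (int r = 0; r < 32; r++)
        if ((lreg >> (2 * r)) & 1u) {     /* "required" bit */
            t.regs[r] = core_regs[r];
            t.valid |= 1u << r;
        }
    return t;
}

/* DFE side: read a source register; returns 0 and raises an
   interrupt to the offloading core if the value was neither
   shipped nor generated so far. */
int dfe_read_reg(DfeTask *t, int r, uint32_t *val, int *interrupt)
{
    if (!(t->valid & (1u << r))) { *interrupt = 1; return 0; }
    *val = t->regs[r];
    return 1;
}

/* DFE side: writing a register makes it available to later reads
   and is one of the values returned to the core at the end. */
void dfe_write_reg(DfeTask *t, int r, uint32_t val)
{
    t->regs[r] = val;
    t->valid |= 1u << r;
}
```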
Table 1. NetBench applications and their properties: arguments are the execution arguments, # inst is the number of instructions executed, # cycle is the number of cycles required, # il1 (dl1) acc is the number of accesses to the level 1 instruction (data) cache, # l2 acc is the number of level 2 cache accesses.
Application Arguments # inst [M] # cycle [M] # IL1 acc [M] # DL1 acc [M] # L2 acc [M]
crc crc 10000 145.8 262.0 219.0 59.8 0.6
dh dh 5 64 778.3 1663.1 1009.1 364.7 38.4
drr drr 128 10000 12.9 33.5 22.8 7.9 1.1
drr-l drr 1024 10000 34.7 80.2 60.1 23.3 5.0
ipchains ipchains 10 10000 61.7 160.2 103.9 26.2 3.6
nat nat 128 10000 11.4 26.7 17.3 5.6 1.2
nat-l nat 1024 10000 33.2 74.2 55.0 21.1 5.1
rou route 128 10000 14.2 32.0 23.3 7.1 0.9
rou-l route 1024 10000 36.8 81.7 62.6 22.8 5.0
snort-l snort -r defcon -n 10000 -dev -l ./log -b 343.0 925.6 515.0 132.2 33.4
snort-n snort -r defcon -n 10000 -v -l ./log -c sn.cnf 545.9 1654.1 893.7 219.7 56.2
ssl-w opensll NetBench weak 10000 329.0 832.1 441.1 152.0 31.8
tl tl 128 10000 6.9 15.7 11.8 3.9 0.7
tl-l tl 1024 10000 30.3 67.1 52.2 19.9 4.7
url url small_inputs 10000 497.0 956.7 768.9 249.1 10.0
average 193.1 458.7 284.5 86.8 13.0
Table 2. DFE code information: number of different DFE code segments (dfe codes), average number of instructions in segments (size), average number of register values transferred for each segment (reg req), total number of register values transferred to/from the DFE (trans), fraction of instructions executed in the DFE (ratio).
App. dfe codes size reg req trans ratio
dh
drr 96 28.9 2.2 32.2 5.88
drr-l 96 28.9 2.2 182.5 5.71
ipchains 113 77.8 1.8 12.3 1.86
nat-l 67 20.1 2.3 180.1 5.77
nat 67 20.1 2.3 28.9 6.20
rou-l
rou
snort-l 146 19.4 3.4 53.1 4.35
snort-n 161 16.5 3.1 383.3 9.62
ssl-w 95 32.7 1.6 90.5 1.66
tl-l 24 43.1 2.2 179.9 5.89
tl 24 43.1 2.2 28.0 7.03
url
average
The execution core also sends a task_id when the code is activated.
Task_id consists of core_id, which is a unique identification number
of the execution core and segment_id, which is the id of the process
offloading the code segment. Task_id uniquely determines the
process that has offloaded the segment. When the task is complete,
DFE uses the task_id (stored in CMU) to send the results back to the
execution core. For the cores, the process is similar to accessing data
from L2 cache. There might be context switches in the execution core
after it sends a request, but the results will be propagated to the
correct process eventually. After execution of each segment, the DFE
data cache is flushed to prevent possible coherence problems.
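The task_id composition can be sketched as a simple bit-packing; the field widths below are assumptions, since the paper does not give them:

```c
#include <stdint.h>

/* Illustrative task_id = (core_id, segment_id) encoding; the
   segment field width is an assumed parameter. */
#define SEGMENT_BITS 12u

uint32_t make_task_id(uint32_t core_id, uint32_t segment_id)
{
    return (core_id << SEGMENT_BITS) |
           (segment_id & ((1u << SEGMENT_BITS) - 1u));
}

/* CMU side: recover the fields to route results back. */
uint32_t task_core(uint32_t task_id)
{
    return task_id >> SEGMENT_BITS;
}

uint32_t task_segment(uint32_t task_id)
{
    return task_id & ((1u << SEGMENT_BITS) - 1u);
}
```

Because the task_id travels with the request and is stored in the CMU, the results reach the correct process even after a context switch on the core.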
4. EXPERIMENTS
We have performed several experiments to measure the effectiveness
of our technique. In all the simulations, the SimpleScalar/ARM [17]
simulator is used. The necessary modifications to the simulator are
implemented to measure the effects of the LPT and the DFE. Due to
the limitations of the simulation environment, the execution core
becomes idle when it offloads a code segment. In an actual
implementation, the core might start executing other processes,
which will increase the system utilization. We simulate the
applications in the NetBench suite [20]. Important characteristics of
the applications are explained in Table 1.
We report the reduction in the execution cycles, bus accesses and the
power consumption of the caches, and overall power consumption.
The DFE is able to reduce the power consumption due to the smaller
number of bus accesses and the reduction in the cache accesses.
Each will be discussed in the following.
4.1 Simulation Parameters
We first report the results for a single-core processor. The base
processor is similar to StrongARM 110 with 4 KB, direct-mapped L1
data and instruction caches with 32-byte linesize, and a 128 KB, 4-
way set-associative unified L2 cache with a 128-byte linesize. The
LPT in both split cache experiments and our technique is a 2-way, 64
entry cache (0.5K). For split cache technique, the temporal cache is
4K with 32-byte lines and the spatial cache is 2K with 128-byte lines.
In our technique, the execution core has a single cache equal to the
temporal cache (4K with 32-byte lines) and DFE data cache is the
similar to the spatial cache (2K with 64-byte lines). The DFE also
has a level 1 instruction cache of size 2KB (32-byte lines). The
latency for all L1 caches is set to 1 cycle, and all the L2 cache
latencies are set to 10 cycles. The DFE is activated using L2 calls in
the simulator; hence, the delay is 10 cycles to start the DFE
execution.
We calculate the power consumption of the caches using the
simulation numbers and cache power consumption values obtained
from the CACTI tool [27]. We modified the SimpleScalar to gather
information about the switching activity in the bus. The bus power
consumption is calculated by using this information and the model
developed by Zhang and Irwin [28]. To find the total power
consumption of the processor, we use the power consumption of each
section of the StrongARM presented by Montanaro et al. [21]. The
power consumptions of the caches are modified to represent the
caches in our simulations. The overall power consumption is the sum
of the power of the execution cores and the shared resources. For the
DFE experiments, the register limit is set to four and the maximum
offloadable code size is 200 instructions.
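The bus switching activity that drives the power model can be counted as the Hamming distance between consecutive words on the bus; the sketch below is illustrative (the 0.5·C·V² per-toggle term is the standard dynamic-power expression, not the exact Zhang-Irwin model):

```c
#include <stdint.h>

uint32_t prev_word = 0;          /* last word seen on the bus */
unsigned long bit_switches = 0;  /* running toggle count      */

/* Call on every word driven onto the system bus; counts the
   bits that toggle relative to the previous word. */
void bus_drive(uint32_t word)
{
    bit_switches += (unsigned)__builtin_popcount(word ^ prev_word);
    prev_word = word;
}

/* Dynamic bus energy: 0.5 * C * V^2 joules per toggled bit,
   where C is the capacitive load of the bus line. */
double bus_energy_joules(double c_farads, double v_volts)
{
    return 0.5 * c_farads * v_volts * v_volts * (double)bit_switches;
}
```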
4.2 Single-Core Results
For the first set of experiments, we compare the performance of four
systems to the base processor explained in Section 4.1:
System 1) the same processor with a 4 KB, 2-way set-associative level 1 data cache (2-way),
System 2) the same processor with a direct-mapped 8 KB level 1 data cache (8KB),
System 3) a processor with the split cache (LPT), and
System 4) a processor using the proposed DFE method (DFE).
Table 2 gives the number of different code segments offloaded, the
size of each segment, the number of registers required by each call,
the total number of register values transferred to/from the DFE to
execute the offloaded code and the fraction of instructions executed
by the DFE. Note that the code size is the number of instructions
between the first instruction in the segment and the last instruction in
the segment and does not correspond to the number of instructions
executed.
Figure 5 summarizes the results for execution cycles. It presents the
relative performance of all systems with respect to the base
processor. The proposed method increases the execution cycles by
0.29% on average. None of the systems have a positive impact on the
performance of the DH application, because there are only a few data
cache misses in this application (the L1 data cache miss rate is
almost 0%), hence it is not affected by any of these techniques. We
see that the DFE approach can increase the execution cycles by up to
1.6%. This is mainly due to the imperfect decision making about
which code segments to offload. Specifically, the DFE might be
activated for a code segment that contains loads to data that is used
again (i.e. data with temporal locality). In such a case, the execution
cycles will be improved because the data will be accessed by the
cores and the DFE generating redundant accesses. If the data
processed by a DFE segment is used again, then the performance
might be degraded.
Figure 6 presents the effects of the techniques on the total power
consumption of the data caches. For the proposed technique this
corresponds to the power consumption of the level 1 data cache and
the LPT structures on the execution core, level 1 data cache of the
DFE and the level 2 cache. Our proposed technique is able to reduce
power consumption by 0.98% on average.
The reduction in the number of bit switches on the bus is presented in Figure 7. We see that the proposed mechanism can
reduce the switching activity by as much as 46.8% and by 26.5% on
average. The reduction in bus activity for the LPT mechanism is
18.6% on average. Similarly, the 8 KB and 2-way level 1 caches
reduce the bus activity by 9.46% and 12.1%, respectively.
The energy-delay product [10] is presented in Figure 8. The DFE
technique improves the energy-delay product by as much as 15.5%
and by 8.3% on average. The split cache mechanism, on the other
hand, has a 5.4% lower energy-delay product than the base processor.
We see that in almost every category the split cache technique
performs better than a system with 8 KB cache or the 2-way
associative 4 KB cache because the networking applications exhibit a
mixture of accesses with spatial and temporal locality. Hence using a
split cache achieves significant improvement in the cache
performance. This class of applications is also amenable to our
proposed structure, because spatial accesses are efficiently utilized by
the DFE.
Figure 5. Reduction in execution cycles
Figure 6. Reduction in power consumption of data caches (sum of level 1, level 2 and DFE data caches when applicable)
Figure 7. Reduction in number of bit switches on the system bus
Figure 8. Reduction in energy-delay product
Figure 9. Reduction in execution cycles for a processor with 4 execution cores
Figure 10. Reduction in execution cycles for a processor with 16 execution cores
4.3 Multiple-Core Results
Two important issues need to be considered regarding use of the
DFE in a multi-core design. First, since the DFE is a shared
resource, it might become a performance bottleneck if several
cores are contending for it. On the other hand, if the DFE reduces
the bus accesses significantly (which is another shared resource),
it might improve performance due to less contention for the bus.
Note that the system bus is one of the most important bottlenecks
in chip multiprocessors. Although all the techniques have a small
effect on the performance of a single-core processor, the
performance improvement can be much more dramatic for multi-
core systems due to the reduction in the bus accesses, as shown in Figure 7.
To see the effects of multiple execution cores, we have first
designed a trace-driven multi-core simulator (MCS). The
simulator processes event traces generated by the
SimpleScalar/ARM. In this framework, SimpleScalar is used to
generate the events for a single-core, which are then processed by
the MCS to account for the effects of global events (bus and the
accesses). The MCS finds the new execution time using the
number of cores employed in the system and the events in the
traces. Each execution core runs a single application, hence there
is no communication between execution cores. Although this
does not represent the workload in some multi-core systems,
where a single application has control over all the execution
cores, this is a realistic workload for most of the NPUs. In our
experiments, we simulate the same application in each execution
core. Each application has a random startup time and they are not
aware of the other cores in the system, hence the simulations
realistically measure the activity in shared resources. We report
results for processors with 4 execution cores and 16 execution
cores. Note that the cores do not execute any code and wait for
the results if they performed a DFE call.
The improvement in the execution cycles for a processor with 4 execution cores is presented in Figure 9. The proposed DFE
mechanism reduces the number of execution cycles by as much
as 50.8% and 20.1% on average. The split cache technique reduces the execution cycles by 17.2% on average.
Figure 10 presents the results for a processor with 16 execution cores. The DFE is able to improve the performance by as much
as 76.7% and 29.7% on average. The technique increases the
execution cycles for snort and url applications. For the url
application, the execution cores utilize the DFE (5.13%), but are
not able to reduce the bus activity. Therefore, as the number of
execution cores is increased, the cores have to stall for the bus, and the contention results in an increase in the execution cycles.
For the snort application, on the other hand, the DFE usage is very
high (9.62%). Hence, although the bus contention is reduced, the
contention for DFE increases the execution cycles. Note that we
can overcome the contention problem by using an adaptive
offloading mechanism: if the DFE is occupied, the execution core
continues with the execution itself, hence the code is offloaded
only if the DFE is idle. If the DFE becomes overloaded, the only
overhead in such an approach would be the checking of the DFE
state by the execution cores. Another solution to the contention
problem might be to use multiple DFEs. However, such
techniques are out of the scope of this paper.
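The adaptive policy suggested above can be sketched in a few lines (the structures and names are hypothetical):

```c
#include <stdbool.h>

/* Illustrative adaptive offloading: the core offloads only when the
   shared DFE is idle; otherwise it executes the segment itself, so
   the only overhead is checking the DFE state. */
typedef struct { bool busy; } Dfe;

/* Returns true if the segment was handed to the DFE. */
bool try_offload(Dfe *dfe)
{
    if (dfe->busy) return false;  /* core keeps the segment */
    dfe->busy = true;             /* DFE accepts the task   */
    return true;
}

/* Called when the DFE finishes a segment. */
void dfe_complete(Dfe *dfe)
{
    dfe->busy = false;
}
```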
We also measured the energy consumption for multiple cores.
However, increasing the number of execution cores did not have
any significant effect on the overall power consumption, because
each additional execution core increases the bus, the DFE, and
the level 2 cache activity linearly; hence, the ratios for the different systems did not change significantly.
5. RELATED WORK
Streaming data, i.e., data accessed with a known, fixed
displacement between successive elements, has been studied in
the literature. McKee et al. [19] propose using a special stream
buffer unit (SBU) to store the stream accesses and a special
scheduling unit for reordering accesses to stream data. Benitez
and Davidson [6] present a compiler framework to detect
streaming data. Our proposed architecture does not require any
compiler support for its tasks. In addition, the displacement of
accesses in some networking applications is not fixed due to the
unknown nature of the packet distribution in the memory.
Several techniques have been proposed to reduce the power
consumption of high-performance processors [2, 5, 15, 24]. Some
of these techniques use small energy-efficient structures to
capture a portion of the working set thereby filtering the accesses
to larger structures. Others concentrate on restructuring the
caches [2]. In the context of multiple processor systems,
Moshovos et al. [22] propose a filtering technique for snoop
accesses in the SMP servers.
There is a plethora of techniques to improve cache locality [9, 13,
14, 25, 26]. These techniques reduce L1 data cache misses by
either intelligently placing the data into it or using external
structures. In our study, however, we change the location of the
computation of the low-temporal data accesses. Most of these
techniques could be used to detect the low temporal data for our
proposed method.
Active or smart memories have been extensively studied [3, 8,
16]. However, such techniques concentrate on off-chip active
memory, in contrast to our method, which improves the
performance of on-chip caches. Therefore the fine-grain
offloading is not feasible for such systems.
6. CONCLUSION
Network Processors are powerful computing engines that usually
combine several execution cores and consume a significant amount
of power. Hence, they are prone to performance limitations due
to excessive power dissipation. We have proposed a technique
for reducing power in multi-core Network Processors. Our
technique reduces the power consumption in the bus and the
caches, which are the most power-consuming entities in high-performance
processors. We have shown that in networking
applications most of the L1 data cache misses are caused by only
a few instructions, which motivates the specific technique we
have proposed. Our technique uses a locality prediction table to
detect load accesses with low temporal locality and a novel data
filtering engine that processes the code segment surrounding the
low temporal accesses. We have presented simulation numbers
showing the effectiveness of our technique. Specifically, our technique is able to reduce the overall power consumption of the processor by 8.6% for a single-core processor. It is able to reduce the energy-delay product by 8.3% and 35.8% on average for a single-core processor and for a processor with 16 execution cores, respectively.
We are currently investigating compiler techniques to determine
the code segments to be offloaded to the DFE. Static techniques
have the advantage of determining exact communication
requirements between the code remaining in the execution cores
and the segments offloaded to the DFE. However, determining
locality dynamically has advantages. Therefore, we believe that
better optimizations will be possible by utilizing the static and
dynamic techniques in an integrated approach.
--R
Predictability of Load/Store Instruction Latencies.
Selective cache ways.
Towards a Programming Environment for a Computer with Intelligent Memory.
Power and Performance Tradeoffs using Various Caching Strategies.
Architectural and compiler support for energy reduction in the memory hierarchy of high performance processors.
Code Generation for Streaming: An Access/Execute Mechanism.
A Case for Intelligent RAM: IRAM
A Data Cache with Multiple Caching Strategies Tuned to Different Types of Locality.
Energy dissipation in general purpose microprocessors.
Intel Network Processor Targets Routers
Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache Prefetch Buffers
The Filter Cache: an energy efficient memory structure.
Combined DRAM and Logic Chip for Massively Parallel Applications.
SimpleScalar Home Page.
Improving Bandwidth for Streamed References
A Benchmarking Suite for Network Processors.
JETTY: Snoop filtering for reduced power in SMP servers.
Reducing Address Bus Transitions for Low Power Memory Mapping.
A circuit technique to reduce leakage in cache memories.
Reducing conflicts in direct-mapped caches with a temporality-based design
Managing Data Caches Using Selective Cache Line Replacement.
An enhanced access and cycle time model for on-chip caches
--TR
Code generation for streaming: an access/execute mechanism
A data cache with multiple caching strategies tuned to different types of locality
Predictability of load/store instruction latencies
Managing data caches using selective cache line replacement
Run-time adaptive cache hierarchy management via reference analysis
The filter cache
Architectural and compiler support for energy reduction in the memory hierarchy of high performance microprocessors
Improving direct-mapped cache performance by the addition of a small fully-associative cache prefetch buffers
Power and performance tradeoffs using various caching strategies
Selective cache ways
Gated-Vdd
NetBench
Smarter Memory
A Case for Intelligent RAM
Towards a Programming Environment for a Computer with Intelligent Memory
Combined DRAM and logic chip for massively parallel systems
Reducing Address Bus Transitions for Low Power Memory Mapping
Energy-Delay Analysis for On-Chip Interconnect at the System Level
--CTR
Mary Jane Irwin, Compiler-directed proactive power management for networks, Proceedings of the 2005 international conference on Compilers, architectures and synthesis for embedded systems, September 24-27, 2005, San Francisco, California, USA
Yan Luo , Jia Yu , Jun Yang , Laxmi N. Bhuyan, Conserving network processor power consumption by exploiting traffic variability, ACM Transactions on Architecture and Code Optimization (TACO), v.4 n.1, p.4-es, March 2007 | remote procedure call;chip multiprocessors;data locality;power reduction;network processors |
581691 | Template meta-programming for Haskell. | We propose a new extension to the purely functional programming language Haskell that supports compile-time meta-programming. The purpose of the system is to support the algorithmic construction of programs at compile-time.The ability to generate code at compile time allows the programmer to implement such features as polytypic programs, macro-like expansion, user directed optimization (such as inlining), and the generation of supporting data structures and functions from existing data structures and functions.Our design is being implemented in the Glasgow Haskell Compiler, ghc. | Introduction
"Compile-time program optimizations are similar to po-
etry: more are written than are actually published in
commercial compilers. Hard economic reality is that
many interesting optimizations have too narrow an audience
to justify their cost. An alternative is to allow
programmers to define their own compile-time op-
timizations. This has already happened accidentally for
C++, albeit imperfectly. [It is] obvious to functional
programmers what the committee did not realize until
later: [C++] templates are a functional language evaluated
at compile time." [12].
Haskell Workshop, Oct 2002, Pittsburgh
Robinson's provocative paper identifies C++ templates as a major,
albeit accidental, success of the C++ language design. Despite
the extremely baroque nature of template meta-programming,
templates are used in fascinating ways that extend beyond the
wildest dreams of the language designers [1]. Perhaps surprisingly,
in view of the fact that templates are functional programs, functional
programmers have been slow to capitalize on C++'s success;
while there has been a recent flurry of work on run-time meta-
programming, much less has been done on compile-time meta-
programming. The Scheme community is a notable exception, as
we discuss in Section 10.
In this paper, therefore, we present the design of a compile-time
meta-programming extension of Haskell, a strongly-typed, purely-
functional language. The purpose of the extension is to allow programmers
to compute some parts of their program rather than write
them, and to do so seamlessly and conveniently. The extension can
be viewed both as a template system for Haskell (à la C++), as well
as a type-safe macro system. We make the following new contributions:
. We describe how a quasi-quotation mechanism for a language
with binders can be precisely described by a translation into
a monadic computation. This allows the use of a gensym-
like operator even in a purely functional language like Haskell
(Sections 6.1 and 9).
. A staged type-checking algorithm co-routines between type
checking and compile-time computations. This staging is use-
ful, because it supports code generators, which if written as
ordinary programs, would need to be given dependent types.
The language is therefore expressive and simple (no dependent
types), but still secure, because all run-time computations
(either hand-written or computed) are always type-checked
before they are executed (Section 7).
. Reification of programmer-written components is supported,
so that computed parts of the program can analyze the structure
of user-written parts. This is particularly useful for building
"boilerplate" code derived from data type declarations
(Sections 5 and 8.1).
In addition to these original contributions, we have synthesized previous
work into a coherent system that provides new capabilities.
These include
. The representation of code by an ordinary algebraic datatype
makes it possible to use Haskell's existing mechanisms (case
analysis) to observe the structure of code, thereby allowing
the programmer to write code manipulation programs, as well
as code generation programs (Sections 6.2 and 9.3).
. This is augmented by a quotation monad, that encapsulates
meta-programming features such as fresh name generation,
program reification, and error reporting. A monadic library of
syntax operators is built on top of the algebraic datatypes and
the quotation monad. It provides an easy-to-use interface to
the meta-programming parts of the system (Sections 4, 6, 6.3,
and Section 8).
. A quasi-quote mechanism is built on top of the monadic li-
brary. Template Haskell extends the meta-level operations of
static scoping and static type-checking into the object-level
code fragments built using its quasi-quote mechanism (Sec-
tions 9 and 7.1). Static scoping and type-checking do not automatically
extend to code fragments built using the algebraic
datatype representation; they would have to be "programmed"
by the user (Sections 9 and 9.3).
. The reification facilities of the quotation monad allows the
programmer (at compile-time) to query the compiler's internal
data structures, asking questions such as "What is the line
number in the source-file of the current position?" (useful for
error reporting), or "What is the kind of this type construc-
tor?" (Section 8.2).
. A meta-program can produce a group of declarations, including
data type, class, or instance declarations, as well as an
expression (Section 5.1).
2 The basic idea
We begin with an example to illustrate what we mean by meta-
programming. Consider writing a C-like printf function in
Haskell. We would like to write something like:
printf "Error: %s on line %d." msg line
One cannot define printf in Haskell, because printf's type
depends, in a complicated way, on the value of its first argument (but
see [5] for an ingenious alternative). In Template Haskell, though,
we can define printf so that it is type-safe (i.e. report an error at
compile-time if msg and line do not have type String and Int
respectively), efficient (the control string is interpreted at compile
time), and user-definable (no fixed number of compiler extensions
will ever be enough). Here is how we write the call in Template
Haskell:
$(printf "Error: %s on line %d") msg line
The "$" says "evaluate at compile time"; the call to printf returns
a Haskell expression that is spliced in place of the call, after which
compilation of the original expression can proceed. We will often
use the term "splice" for . The splice $(printf .) returns
the following code:
(\ s0 -> \ n1 ->
show
This lambda abstraction is then type-checked and applied to msg
and line. Here is an example interactive session to illustrate:
prompt> $(printf "Error: %s at line %d") "Bad var" 123
"Error: Bad var at line 123"
Note that in Template Haskell, $ followed by an open
parenthesis or an alphabetic character is a special syntactic form.
x $y means "x applied to splice y", whereas x $ y means the
ordinary infix application of the function $ just as it does in ordinary
Haskell. The situation is very similar to that of ".", where A.b
means something different from A . b.
The function printf, which is executed at compile time, is a program
that produces a program as its result: it is a meta-program. In
Template Haskell the user can define printf thus:
printf :: String -> Expr
printf s = gen (parse s)
The type of printf says that it transforms a format string into a
Haskell expression, of type Expr. The auxiliary function parse
breaks up the format string into a more tractable list of format specifiers
data Format = D | S | L String
parse :: String -> [Format]
For example,
parse "%d is %s" returns [D, L " is ", S]
Even though parse is executed at compile time, it is a perfectly
ordinary Haskell function; we leave its definition as an exercise.
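As an indication of how the exercise might go, here is one possible definition (our sketch, not the paper's; the Format declaration is repeated so the fragment is self-contained, and a lone or unrecognized "%" is simply kept as literal text):

```haskell
-- D and S mark %d and %s specifiers; L holds a literal chunk.
data Format = D | S | L String deriving (Show, Eq)

parse :: String -> [Format]
parse []             = []
parse ('%':'d':rest) = D : parse rest
parse ('%':'s':rest) = S : parse rest
parse ('%':rest)     = L "%" : parse rest   -- lone/unknown '%': keep literally
parse s              = L lit : parse rest
  where (lit, rest) = break (== '%') s
```

On the paper's own example, parse "%d is %s" yields [D, L " is ", S].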
The function gen is much more interesting. We first give the code
for gen assuming exactly one format specifier:
gen :: [Format] -> Expr
gen [D]   = [| \n -> show n |]
gen [S]   = [| \s -> s |]
gen [L s] = lift s
The results of gen are constructed using the quasi-quote notation
- the "templates" of Template Haskell. Quasi-quotations
are the user's interface to representing Haskell programs, and
are constructed by placing quasi-quote brackets [| _ |] around
ordinary Haskell concrete syntax fragments. The function
lift :: String -> Expr "lifts" a string into the Expr type, producing
an Expr which, if executed, would evaluate to lifts's ar-
gument. We have more to say about lift in Section 9.1
Matters become more interesting when we want to make gen recur-
sive, so that it can deal with an arbitrary list of format specifiers. To
do so, we have to give it an auxiliary parameter, namely an expression
representing the string to prefix to the result, and adjust the call
in printf accordingly:
printf :: String -> Expr
printf s = gen (parse s) [| "" |]

gen :: [Format] -> Expr -> Expr
gen []         x = x
gen (D : xs)   x = [| \n -> $(gen xs [| $x ++ show n |]) |]
gen (S : xs)   x = [| \s -> $(gen xs [| $x ++ s |]) |]
gen (L s : xs) x = gen xs [| $x ++ $(lift s) |]
Inside quotations, the splice annotation ($) still means "evaluate
when the quasi-quoted code is constructed"; that is, when gen is
called. The recursive calls to gen are therefore run at compile time,
and the result is spliced into the enclosing quasi-quoted expression.
The argument of $ should, as before, be of type Expr.
The second argument to the recursive call to gen (its accumulating
parameter) is of type Expr, and hence is another quasi-quoted ex-
pression. Notice the arguments to the recursive calls to gen refer
to object-variables (n, and s), bound in outer quasi-quotes. These
occurrences are within the static scope of their binding occurrence:
static scoping extends across the template mechanism.
3 Why templates?
We write programs in high-level languages because they make our
programs shorter and more concise, easier to maintain, and easier
to think about. Many low level details (such as data layout and
memory allocation) are abstracted over by the compiler, and the
programmer no longer concerns himself with these details. Most of
the time this is good, since expert knowledge has been embedded
into the compiler, and the compiler does the job in a manner superior
to what most users could manage. But sometimes the programmer
knows more about some particular details than the compiler does.
It's not that the compiler couldn't deal with these details, but that for
economic reasons it just doesn't [12]. There is a limit to the number
of features any compiler writer can put into any one compiler. The
solution is to construct the compiler in a manner in which ordinary
users can teach it new tricks.
This is the rationale behind Template Haskell: to make it easy for
programmers to teach the compiler a certain class of tricks. What do
compilers do? They manipulate programs! Making it easy for users
to manipulate their own programs, and also easy to interlace their
manipulations with the compiler's manipulations, creates a powerful
new tool.
We envision that Template Haskell will be used by programmers to
do many things.
. Conditional compilation is extremely useful for compiling a
single program for different platforms, or with different debugging
options, or with a different configuration. A crude
approach is to use a preprocessor like cpp - indeed several
compilers for Haskell support this directly - but a mechanism
that is part of the programming language would work
much better.
. Program reification enables programs to inspect their own
structure. For example, generate a function to serialise a data
structure, based on the data type declaration for that structure.
. Algorithmic program construction allows the programmer to
construct programs where the algorithm that describes how
to construct the program is simpler than the program itself.
Generic functions like map or show are prime examples, as
are compile-time specialized programs like printf, where
the code compiled is specialized to compile-time constants.
. Abstractions that transcend the abstraction mechanisms accessible
in the language. Examples include: introducing
higher-order operators in a first-order language using
compile-time macros; or implementing integer indexed functions
(like zip1, zip2, ..., zipn) in a strongly typed language.
. Optimizations may teach the compiler about domain-specific
optimizations, such as algebraic laws, and in-lining opportunities.
In Template Haskell, functions that execute at compile time are
written in the same language as functions that execute at run time,
namely Haskell. This choice is in sharp contrast with many existing
systems; for example, cpp has its own language (#if, #define
etc.), and template meta-programs in C++ are written entirely in
the type system. A big advantage of our approach is that existing
libraries and programming skills can be used directly; arguably, a
disadvantage is that explicit annotations ("$" and "[| |]") are necessary
to specify which bits of code should execute when. Another
consequence is that the programmer may erroneously write a non-terminating
function that executes at compile time. In that case, the
compiler will fail to terminate; we regard that as a programming
error that is no more avoidable than divergence at run time.
In the rest of the paper we flesh out the details of our design. As
we shall see in the following sections, it turns out that the simple
quasi-quote and splice notation we have introduced so far is not
enough.
4 More flexible construction
Once one starts to use Template Haskell, it is not long before one
discovers that quasi-quote and splice cannot express anything like
the full range of meta-programming opportunities that we want.
Haskell has built-in functions for selecting the components from a
pair, namely fst and snd. But if we want to select the first component
of a triple, we have to write it by hand:
case x of (a,b,c) -> a
In Template Haskell we can instead write:
$(sel 1 3) x
Or at least we would like to. But how can we write sel?
sel :: Int -> Int -> Expr
sel i n = [| \x -> case x of ... |]
Uh oh! We can't write the ``...'' in ordinary Haskell, because the
pattern for the case expression depends on n. The quasi-quote notation
has broken down; instead, we need some way to construct
Haskell syntax trees more directly, like this:
sel :: Int -> Int -> Expr
sel i n = lam [pvar "x"] (caseE (var "x") [alt])
  where alt :: Match
        alt = simpleM pat rhs

        pat :: Patt
        pat = ptup (map pvar as)

        rhs :: Expr
        rhs = var (as !! (i-1))   -- the i'th component

        as :: [String]
        as = ["a" ++ show j | j <- [1..n]]
In this code we use syntax-construction functions which construct
expressions and patterns. We list a few of these, their types, and
some concrete examples for reference.
-- Syntax for Patterns
pvar   :: String -> Patt            -- x
ptup   :: [Patt] -> Patt            -- (x,y,z)
pcon   :: String -> [Patt] -> Patt  -- (Fork x y)
pwild  :: Patt                      -- _

-- Syntax for Expressions
var     :: String -> Expr           -- x
app     :: Expr -> Expr -> Expr     -- f x
lam     :: [Patt] -> Expr -> Expr   -- \ x y -> 5
caseE   :: Expr -> [Match] -> Expr  -- case x of ...
simpleM :: Patt -> Expr -> Match    -- x:xs -> 2
The code for sel is more verbose than that for printf because it
uses explicit constructors for expressions rather than implicit ones.
In exchange, code construction is fundamentally more flexible, as
sel shows. Template Haskell provides a full family of syntax-
construction functions, such as lam and pvar above, which are
documented in Appendix A.
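To make the flavor of these combinators concrete, here is a runnable toy model: cut-down, non-monadic Patt/Expr datatypes of our own (the real library's Expr is monadic, as later sections explain) with just enough constructors to build sel's output:

```haskell
-- Toy, self-contained model of the syntax types (not the real library).
data Patt = Pvar String | Ptup [Patt] deriving (Show, Eq)
data Expr = Var String | Lam [Patt] Expr | CaseE Expr [(Patt, Expr)]
  deriving (Show, Eq)

-- sel i n builds the expression  \x -> case x of (a1,...,an) -> ai
sel :: Int -> Int -> Expr
sel i n = Lam [Pvar "x"] (CaseE (Var "x") [(pat, rhs)])
  where as  = ["a" ++ show k | k <- [1 .. n]]
        pat = Ptup (map Pvar as)
        rhs = Var (as !! (i - 1))
```

For instance, sel 1 3 is the tree for \x -> case x of (a1,a2,a3) -> a1.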
The two styles can be mixed freely. For example, we could also
write sel like this:
sel :: Int -> Int -> Expr
sel i n = [| \x -> $(caseE [| x |] [alt]) |]
  where alt = simpleM pat rhs
        pat = ptup (map pvar as)
        rhs = var (as !! (i-1))
        as  = ["a" ++ show j | j <- [1..n]]
To illustrate the idea further, suppose we want an n-ary zip func-
tion, whose call might look like this:
$(zipN 3) as bs cs
where as, bs, and cs are lists, and zipN :: Int -> Expr generates
the code for an n-ary zip. Let's start to write zipN:
zipN :: Int -> Expr
The meta-function zipN generates a local let binding like
(let ... in zip3). The body of the binding (the dots
...) is generated by the auxiliary meta-function mkZip defined
below. The function defined in the let (zip3 in the example in this
paragraph) will be recursive. The name of this function doesn't really
matter, since it is used only once in the result of the let, and
never escapes the scope of the let. It is the whole let expression that
is returned. The name of this function must be passed to mkZip so
that when mkZip generates the body, the let will be properly scoped.
The size of the zipping function, n, is also a parameter to mkZip.
It's useful to see what mkZip generates for a particular n in understanding
how it works. When applied to 3, and the object variable
(var "ff") it generates a value in the Expr type. Pretty-printing
that value as concrete syntax we get:
\ y1 y2 y3 ->
  case (y1,y2,y3) of
    (x1:xs1, x2:xs2, x3:xs3) -> (x1,x2,x3) : ff xs1 xs2 xs3
    (_,_,_) -> []
Note how the parameter (var "ff") ends up as a function in one
of the arms of the case. When the user level function zipN (as
opposed to the auxiliary function mkZip) is applied to 3 we obtain
the full let. Note how the name of the bound variable zp0, which is
passed as a parameter to mkZip ends up in a recursive call.
let zp0 = \ y1 y2 y3 ->
      case (y1,y2,y3) of
        (x1:xs1, x2:xs2, x3:xs3) -> (x1,x2,x3) : zp0 xs1 xs2 xs3
        (_,_,_) -> []
in zp0
The function mkZip operates by generating a bunch of patterns (e.g.
y1, y2, y3 and (x1:xs1,x2:xs2,x3:xs3)), and a bunch of expressions
using the variables bound by those patterns. Generating
several patterns (each a pattern-variable), and associated expressions
(each an expression-variable) is so common we abstract it into
a function:
genPE :: String -> Int -> ([Patt],[Expr])
genPE s n = let ns = [ s ++ show i | i <- [1..n] ]
            in (map pvar ns, map var ns)
-- genPE "x" 2 --> ([pvar "x1",pvar "x2"],[var "x1",var "x2"])
In mkZip we use this function to construct three lists of matching
patterns and expressions. Then we assemble these pieces into the
lambda abstraction whose body is a case analysis over the lambda
abstracted variables.
mkZip :: Int -> Expr -> Expr
mkZip n name = lam pYs (caseE (tup eYs) [m1,m2])
  where (pXs,  eXs)  = genPE "x"  n
        (pYs,  eYs)  = genPE "y"  n
        (pXSs, eXSs) = genPE "xs" n
        pcons x xs = [p| $x : $xs |]
        b  = [| $(tup eXs) : $(apps (name : eXSs)) |]
        m1 = simpleM (ptup (zipWith pcons pXs pXSs)) b
        m2 = simpleM (ptup (replicate n pwild)) [| [] |]
Here we use the quasi-quotation mechanism for patterns [p| _ |]
and the function apps, another idiom worth abstracting into a function
- the application of a function to multiple arguments.
apps :: [Expr] -> Expr
apps [x]      = x
apps (x:y:zs) = apps ([| $x $y |] : zs)
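Stripped of the quotation layer, apps is just a left-nested fold of application. Over a toy plain Exp datatype of our own (for illustration; not the real library's) the same idiom reads:

```haskell
-- Toy expression type; App is plain application.
data Exp = Var String | App Exp Exp deriving (Show, Eq)

-- apps [f, x, y] builds (f x) y: a left-nested application chain.
apps :: [Exp] -> Exp
apps = foldl1 App
```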
The message of this section is this. Where it works, the quasi-quote
notation is simple, convenient, and secure (it understands Haskell's
static scoping and type rules). However, quasi-quote alone is not
enough, usually when we want to generate code with sequences of
indeterminate length. Template Haskell's syntax-construction functions
(app, lam, caseE, etc.) allow the programmer to drop down
to a less convenient but more expressive notation where (and only
where) necessary.
5 Declarations and reification
In Haskell one may add a "deriving" clause to a data type declaration
data T a = Tip a | Fork (T a) (T a) deriving( Eq )
The deriving( Eq ) clause instructs the compiler to generate
"boilerplate" code to allow values of type T to be compared for
equality. However, this mechanism only works for a handful of
built-in type classes (Eq, Ord, Ix and so on); if you want instances
for other classes, you have to write them by hand. So tiresome is
this that Winstanley wrote DrIFT, a pre-processor for Haskell that
allows the programmer to specify the code-generation algorithm
once, and then use the algorithm to generate boilerplate code for
many data types [17]. Much work has also been done on polytypic
algorithms, whose execution is specified, once and for all, based on
the structure of the type [9, 6].
Template Haskell works like a fully-integrated version of DrIFT.
Here is an example:
data T a = Tip a | Fork (T a) (T a)
splice (genEq (reifyDecl T))
This code shows two new features we have not seen before: reification
and declaration splicing. Reification involves making the internal
representation of T available as a data structure to compile-time
computations. Reification is covered in more detail in Section 8.1.
5.1 Declaration splicing
The construct splice (...) may appear where a declaration
group is needed, whereas up to now we have only seen $(...)
where an expression is expected. As with $, a splice instructs the
compiler to run the enclosed code at compile-time, and splice in the
resulting declaration group in place of the splice call. (An aside
about syntax: we use "splice" rather than "$" only because the
latter seems rather terse for a declaration context.)
Splicing can generate one or more declarations. In our example,
genEq generated a single instance declaration (which is essential
for the particular application to deriving), but in general it could
also generate one or more class, data, type, or value declarations.
Generating declarations, rather than expressions, is useful for purposes
other than deriving code from data types. Consider again the
n-ary zip function we discussed in Section 4. Every time we write
$(zipN 3) as bs cs, a fresh copy of a 3-way zip will be generated.
That may be precisely what the programmer wants to say, but
he may also want to generate a single top-level zip function, which
he can do like this:
zip3 = $(zipN 3)
But he might want to generate all the zip functions up to 10, or 20,
or whatever. For that we can write
splice (genZips 20)
with the understanding that zip1, zip2, ..., zip20 are brought into
scope.
6 Quasi-quotes, Scoping, and the Quotation
Monad
Ordinary Haskell is statically scoped, and so is Template Haskell.
For example consider the meta-function cross2a below.
cross2a :: Expr -> Expr -> Expr
cross2a f g = [| \ (x,y) -> ($f x, $g y) |]
Executing cross2a (var "x") (var "y") we expect that the
(var "x") and the (var "y") would not be inadvertently captured
by the local object-variables x and y inside the quasi-quotes in
cross2a's definition. Indeed, this is the case.
prompt> cross2a (var "x") (var "y")
Displaying top-level term of type: Expr
\ (x0,y1) -> (x x0, y y1)
The quasi-quote notation renames x and y, and we get the expected
result. This is how static scoping works in ordinary Haskell, and the
quasi-quotes lift this behavior to the object-level as well. Unfortu-
nately, the syntax construction functions lam, var, tup, etc. do not
behave this way. Consider
cross2b f g =
  lam [ptup [pvar "x", pvar "y"]]
      (tup [app f (var "x"), app g (var "y")])
Applying cross2b to x and y results in inadvertent capture.
prompt> cross2b (var "x") (var "y")
Displaying top-level term of type: Expr
\ (x,y) -> (x x, y y)
Since some program generators cannot be written using the quasi-quote
notation alone, and the syntax construction
functions are inadequate for expressing static scoping, it appears
that we are in trouble: we need some way to generate fresh names.
That is what we turn to next.
6.1 Secrets Revealed
Here, then, is one correct rendering of cross in Template Haskell,
without using quasi-quote:
cross2c :: Expr -> Expr -> Expr
cross2c f g =
  do { x  <- gensym "x"
     ; y  <- gensym "y"
     ; ft <- f
     ; gt <- g
     ; return (Lam [Ptup [Pvar x, Pvar y]]
                   (Tup [App ft (Var x), App gt (Var y)])) }
In this example we reveal three secrets:
. The type Expr is a synonym for a monadic type, Q Exp. Indeed,
the same is true of declarations:
type Expr = Q Exp
type Decl = Q Dec
. The code returned by cross2c is represented by ordinary
Haskell algebraic datatypes. In fact there are two algebraic
data types in this example: Exp (expressions) with constructors
Lam, Tup, App, etc; and Pat (patterns), with constructors
Pvar, Ptup, etc.
. The monad, Q, is the quotation monad. It supports the
usual monadic operations (bind, return, fail) and the do-
notation, as well as the gensym operation:
gensym :: String -> Q String
We generate the Expr returned by cross2c using Haskell's
monadic do-notation. First we generate a fresh name for x and
y using a monadic gensym, and then build the expression to return.
Notice that (tiresomely) we also have to "perform" f and g in the
monad, giving ft and gt of type Exp, because f and g have type
and might do some internal gensyms. We will see how to
avoid this pain in Section 6.3.
To summarize, in Template Haskell there are three "layers" to the
representation of object-programs, in order of increasing convenience
and decreasing power:
. The bottom layer has two parts. First, ordinary algebraic data
types represent Haskell program fragments (Section 6.2).
Second, the quotation monad, Q, encapsulates the notion of
generating fresh names, as well as failure and input/output
(Section 8).
. A library of syntax-construction functions, such as tup and
app, lift the corresponding algebraic data type constructors,
such as Tup and App, to the quotation-monad level, providing
a convenient way to access the bottom layer (Section 6.3).
. The quasi-quote notation, introduced in Section 2, is most
convenient but, as we have seen, there are important meta-programs
that it cannot express. We will revisit the quasiquote
notation in Section 9, where we show how it is built on
top of the previous layers.
The programmer can freely mix the three layers, because the latter
two are simply convenient interfaces to the first. We now discuss in
more detail the first two layers of code representation. We leave a
detailed discussion of quasi-quotes to Section 9.
6.2 Datatypes for code
Since object-programs are data, and Haskell represents data structures
using algebraic datatypes, it is natural for Template Haskell to
represent Haskell object-programs using an algebraic datatype.
The particular data types used for Template Haskell are given in
Appendix
B. The highlights include algebraic datatypes to represent
expressions (Exp), declarations (Dec), patterns (Pat), and
types (Typ). Additional data types are used to represent other syntactic
elements of Haskell, such as guarded definitions (Body), do
expressions and comprehensions (Statement), and arithmetic sequences
(DotDot). We have used comments freely in Appendix B
to illustrate the algebraic datatypes with concrete syntax examples.
We have tried to make these data types complete yet simple. They
are modelled after Haskell's concrete surface syntax, so if you can
write Haskell programs, you should be able to use the algebraic
constructor functions to represent them.
An advantage of the algebraic approach is that object-program representations
are just ordinary data; in particular, they can be analysed
using Haskell's case expression and pattern matching.
Disadvantages of this approach are verbosity (to construct the representation
of a program requires considerably more effort than that
required to construct the program itself), and little or no support for
semantic features of the object language, such as scoping and typing.
6.3 The syntax-construction functions
The syntax-construction functions of Section 4 stand revealed as
the monadic variants of the corresponding data type constructor.
For example, here are the types of the data type constructor App,
and its monadic counterpart (remember that Expr = Q Exp):
App :: Exp -> Exp -> Exp
app :: Expr -> Expr -> Expr
The arguments of app are computations, whereas the arguments of
App are data values. However, app is no more than a convenience
function, which simply performs the argument computations before
building the result:
app :: Expr -> Expr -> Expr
app x y = do { a <- x; b <- y; return (App a b) }
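The same lifting pattern works for every constructor. As a self-contained illustration, here is a toy Q modelled as a name-supply state monad (our sketch; ghc's real Q is much richer), with gensym, var, and app on a minimal Exp type:

```haskell
-- Toy quotation monad: a name-supply counter threaded through.
newtype Q a = Q { runQ :: Int -> (a, Int) }

instance Functor Q where
  fmap f (Q g) = Q (\s -> let (a, s') = g s in (f a, s'))
instance Applicative Q where
  pure a = Q (\s -> (a, s))
  Q f <*> Q g = Q (\s -> let { (h, s1) = f s; (a, s2) = g s1 } in (h a, s2))
instance Monad Q where
  Q g >>= k = Q (\s -> let (a, s') = g s in runQ (k a) s')

gensym :: String -> Q String          -- fresh name: root ++ counter
gensym root = Q (\s -> (root ++ show s, s + 1))

data Exp = Var String | App Exp Exp deriving (Show, Eq)
type Expr = Q Exp

var :: String -> Expr
var = pure . Var

-- app performs its argument computations, then applies App.
app :: Expr -> Expr -> Expr
app x y = do { a <- x; b <- y; return (App a b) }
```

Running app (var "f") (var "x") from counter 0 yields the plain tree App (Var "f") (Var "x"); successive gensym "x" calls yield "x0", "x1", and so on.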
This convenience is very worth while. For example, here is yet
another version of cross:
cross2d :: Expr -> Expr -> Expr
cross2d f g =
  do { x <- gensym "x"
     ; y <- gensym "y"
     ; lam [ptup [pvar x, pvar y]]
           (tup [app f (var x), app g (var y)]) }
We use the monadic versions of the constructors to build the result,
and thereby avoid having to bind ft and gt "by hand" as we did in
cross2c. Instead, lam, app, and tup, will do that for us.
In general, we use the following nomenclature:
. A four-character type name (e.g. Expr) is the monadic version
of its three-character algebraic data type (e.g. Exp).
. A lower-cased function (e.g. app) is the monadic version of
its upper-cased data constructor (e.g. App).
While Expr and Decl are monadic (computational) versions of
the underlying concrete type, the corresponding types for patterns
(Patt) and types (Type) are simply synonyms for the underlying
data type:
type Patt = Pat
type Type = Typ
(For constructors whose lower-case name would clash with
Haskell keywords, like Let, Case, Do, Data, Class, and Instance,
we use the convention of suffixing those lower-case names with the
initial letter of their type: letE, caseE, doE, dataD, classD, and
instanceD.)
Reason: we do not need to gensym when constructing patterns or
types. Look again at cross2d above. There would be no point in
gensym'ing x or y inside the pattern, because these variables must
scope over the body of the lambda as well.
Nevertheless, we provide type synonyms Patt and Type, together
with their lower-case constructors (pvar, ptup etc.) so that programmers
can use a consistent set - lower-case when working in
the computational setting (even though only the formation of Exp
and Dec are computational), and upper-case when working in the
algebraic datatype setting.
The syntax-construction functions are no more than an ordinary
Haskell library, and one that is readily extended by the program-
mer. We have seen one example of that, in the definition of apps
at the end of Section 4, but many others are possible. For example,
consider this very common pattern: we wish to generate some code
that will be in the scope of some newly-generated pattern; we don't
care what the names of the variables in the pattern are, only that
they don't clash with existing names. One approach is to gensym
some new variables, and then construct both the pattern and the expression
by hand, as we did in cross2d. But an alternative is to
"clone" the whole pattern in one fell swoop, rather than generate
each new variable one at a time:
do { (vf,p) <- genpat (ptup [pvar "x", pvar "y"])
   ; lam [p] (tup [app f (vf "x"), app g (vf "y")]) }
The function genpat :: Patt -> Q (String->Expr, Patt)
alpha-renames a whole pattern. It returns a new pattern, and a function
which maps the names of the variables in the original pattern to
Exprs with the names of the variables in the alpha-renamed pattern.
It is easy to write by recursion over the pattern. Such a scheme can
even be mixed with the quasi-quote notation.
do { (vf,p) <- genpat [p| (x,y) |]
   ; lam [p] (tup [app f (vf "x"), app g (vf "y")]) }
This uses the quasi-quote notation for patterns: [p| _ |] that we
mentioned in passing in Section 4. We also supply a quasi-quote
notation for declarations [d| _ |] and types [t| _ |]. Of course
all this renaming happens automatically with the quasi-quotation.
We explain that in detail in Section 9.
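The text says genpat "is easy to write by recursion over the pattern"; here is one minimal sketch of that recursion. The Q monad is replaced by an explicitly threaded Int name supply, and the returned function maps original names to fresh names (the real genpat returns String -> Expr); both simplifications are this sketch's assumptions, not the paper's API.

```haskell
-- A sketch of genpat by recursion over the pattern. The name supply is
-- an Int counter threaded by hand, standing in for the Q monad.
data Pat = Pvar String | Ptup [Pat] deriving (Show, Eq)

genpat :: Pat -> Int -> ((String -> String, Pat), Int)
genpat (Pvar s) n =
  let s' = s ++ "'" ++ show n                     -- gensym a fresh name
  in ((\x -> if x == s then s' else x, Pvar s'), n + 1)
genpat (Ptup ps) n0 = go ps n0 [] id
  where
    -- fold over the sub-patterns, accumulating renamed patterns and
    -- composing the renaming functions
    go []      n acc vf = ((vf, Ptup (reverse acc)), n)
    go (p:ps') n acc vf =
      let ((vf', p'), n') = genpat p n
      in go ps' n' (p' : acc)
            (\x -> let y = vf x in if y /= x then y else vf' x)
```

Running it on the pattern (x,y) alpha-renames both variables and returns a lookup function that maps "x" and "y" to their fresh names, leaving other strings alone.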
7 Typing Template Haskell
Template Haskell is strongly typed in the Milner sense: a well-typed
program cannot "go wrong" at run-time. Traditionally, a
strongly typed program is first type-checked, then compiled, and
then executed - but the situation for Template Haskell is a little
more complicated. For example consider again our very first example
$(printf "Error: %s on line %d") "urk" 341
It cannot readily be type-checked in this form, because the type of
the spliced expression depends, in a complicated way, on the value
of its string argument. So in Template Haskell type checking takes
place in stages:
. First type check the body of the splice; in this case it is
(printf "Error: %s on line %d") :: Expr.
. Next, compile it, execute it, and splice the result in place of
the call. In our example, the program now becomes:
(\ s0 -> \ n1 -> "Error: " ++ s0 ++ " on line " ++ show n1)
    "urk" 341
. Now type-check the resulting program, just as if the programmer
had written that program in the first place.
Hence, type checking is intimately interleaved with (compile-time)
execution.
Template Haskell is a compile-time only meta-system. The meta-level
operators (brackets, splices, reification) should not appear in
the code being generated. For example, [| f [| 3 |] |] is illegal.
There are other restrictions as well. For example, this definition
is illegal (unless it is inside a quotation):

f x = $(zipN x)
Why? Because the "$" says "evaluate at compile time and splice",
but the value of x is not known until f is called. This is a common
staging error.
To enforce restrictions like these, we break the static-checking part
of the compiling process into three states. Compiling (C) is the state
of normal compilation. Without the meta-operators the compiler
would always be in this state. The compiler enters the state Bracket
compiling code inside quasi-quotes. The compiler enters
the state Splicing (S) when it encounters an expression escape inside
quasi-quoting brackets. For example, consider:

f :: Int -> Expr
f x = [| foo $(zipN x) |]

The definition of f is statically checked in state C, the call to foo is
typed in state B, but the call to zipN is typed in state S.
In addition to the states, we count levels, by starting in state 0, incrementing
when processing under quasi-quotes, and decrementing
when processing inside $ or splice. The levels are used to distinguish
a top-level splice from a splice inside quasi-quotes. For
example:

g x = $(h [| x*2 |])

The call to h is statically checked in state S at level -1, while the
x*2 is checked in state B at level 0. These three states and their
legal transitions are reflected in Figure 1. Transitions not in the diagram
indicate error transitions. It is tempting to think that some of
the states can be merged together, but this is not the case. Transitions
on $ from state C imply compile-time computation, and thus
require more complicated static checking (including the computation
itself!) than transitions on $ from the other states.
The rules of the diagram are enforced by weaving them into the
type checker. The formal typing judgments of the type checker are
given in Figure 2; they embody the transition diagram by supplying
cases only for legal states. We now study the rules in more detail.
7.1 Expressions
We begin with the rules for expressions, because they are simpler;
indeed, they are just simplifications of the well-established rules
for MetaML [16]. The typing judgments for expressions take
the conventional form

    G |- e : t   (checked in state s, at level n)
where G is an environment mapping variables to their types and
binding states, e is an expression, t is a type. The state s describes
the state of the type checker, and n is the level, as described above.
[Figure omitted: a state-transition diagram over the states C, B and S,
with transitions labelled [| |], $, splice, and reify.]
Figure 1. Typing states for Template Haskell
Rule BRACKET says that when in one of the states C or S, the expression
[|e|] has type Q Exp, regardless of the type of e. How-
ever, notice that e is still type-checked, but in a new state B, and we
increment the level. This reflects the legal transitions from Figure
1, and emphasizes that we can only use the BRACKET typing rule
when in one of the listed states.
Type checking the term e detects any internal type inconsistencies
right away; for example, a quotation whose body contains an internal
type error would be rejected
immediately. This represents an interesting design compromise:
meta-functions, including the code fragments that they generate,
are statically checked, but that does not guarantee that the meta-function
can produce only well-typed code, so completed splices
are re-checked. We believe this is a new approach to typing meta-
programs. This approach catches many errors as early as possible,
avoids the need for using dependent types, yet is still completely
type-safe.
Notice, too, that there is no rule for quasi-quotes in state B -
quasi-quotes cannot be nested, unlike multi-stage languages such
as MetaML.
Rule ESCB explains how to type check a splice $e inside quasi-
quotes (state B). The type of e must be Q Exp, but that tells us nothing
about the type of the expression that e will evaluate to; hence the
use of an unspecified t. There is no problem about soundness, how-
ever: the expression in which the splice sits will be type-checked
later.
Indeed, that is precisely what happens in Rule ESCS, which deals
with splicing when in state C. The expression e is type checked, and
then evaluated, to give a new expression e # . This expression is then
type checked from scratch (in state C), just as if the programmer
had written it in the first place.
Rules LAM and VAR deal with staging. The environment G contains
assumptions of the form x :(m) t, which record not only x's type
but also the level m at which it was bound (rule LAM). We think of
this environment as a finite function. Then, when a variable x is
used at level n, we check that n is later than (>=) its binding level, m
(rule VAR).
7.2 Declarations
Figure 2 also gives the rules for typing declarations, whose
judgments are of the form:

    G |- d : G'   (checked in state s, at level n, where s is one of C, B, S)

[Figure omitted: the typing rules, including REIFYDECL, FUN, and SPLICE.]
Figure 2. Typing rules for Template Haskell
Here, G is the environment in which the declarations should be
checked, while G # is a mini-environment that gives the types of the
variables bound by decl 4 .
Most rules are quite conventional; for example, Rule FUN explains
how to type function definitions. The rule for splicing is the interesting
one, and it follows the same pattern as for splicing expres-
sions. First type-check the spliced expression e, then run it, then
typecheck the declarations it returns.
The ability to generate a group of declarations seems to be of fundamental
usefulness, but it raises an interesting complication: we
cannot even resolve the lexical scoping of the program, let alone
the types, until splicing has been done.
For example, is this program valid?

splice (genZips 20)
foo = zip3 "fee" "fie" "fum"
Well, it is valid if the splice brings zip3 into scope (as we expect
it to do) and not if it doesn't. Similar remarks naturally apply to
the instance declaration produced by the genEq function of Section
5.1. If the module contains several splices, it may not be at
all obvious in which order to expand them.
We tackle this complication by assuming that the programmer intends
the splices to be expanded top-to-bottom. More precisely,
4 A single Haskell declaration can bind many variables.
to type-check a group of declarations [d_1, ..., d_N], we follow the
following procedure:
. Group the declarations as follows:
    [d_1, ..., d_a]
    splice e_a
    [d_{a+2}, ..., d_b]
    splice e_b
    ...
    splice e_z
    [d_{z+2}, ..., d_N]
where the only splice declarations are the ones indicated
explicitly, so that each group [d_1, ..., d_a], etc, are all ordinary
Haskell declarations.
. Perform conventional dependency analysis, followed by type
checking, on the first group. All its free variables should be in
scope.
. In the environment thus established, type-check and expand
the first splice.
. Type-check the result of expanding the first splice.
. In the augmented environment thus established, type-check
the next ordinary group,
. And so on.
It is this algorithm that implements the judgment for declaration
lists that we used in the rule SPLICE.
7.3 Restrictions on declaration splicing
Notice that the rule for SPLICE assumes that we are in state C at
level 0. We do not permit a declaration splice in any other state.
For example, we do not permit this:

f x = let
        splice (h x)
      in (p,q)
where h :: Int -> Decl. When type-checking f we cannot run
the computation (h x) because x is not known yet; but until we
have run (h x) we do not know what the let binds, and so we
cannot sensibly type-check the body of the let, namely (p,q). It
would be possible to give up on type-checking the body since, after
all, the result of every call to f will itself be type-checked, but the
logical conclusion of that line of thought would be give up on type-checking
the body of any quasi-quote expression. Doing so would
be sound, but it would defer many type errors from the definition
site of the meta-function to its call site(s). Our choice, pending
further experience, is to err on the side of earlier error detection.
If you want the effect of the f above, you can still get it by dropping
down to a lower level and building the let with the syntax-construction
functions directly.
In fact, we currently restrict splice further: it must be a top-level
declaration, like Haskell's data, class, and instance declara-
tions. The reason for this restriction concerns usability rather than
technical complexity. Since declaration splices introduce unspecified
new bindings, it may not be clear where a variable that occurs in
the original program is bound. The situation is similar for Haskell's
existing import statements: they bring into scope an unspecified
collection of bindings. By restricting splice to top level we make
a worthwhile gain: given an occurrence of x, if we can see a lexically
enclosing binding for x, that is indeed x's binding. A top
level splice cannot hide another top-level binding (or import) for
x because Haskell does not permit two definitions of the same value
at top level. (In contrast, a nested splice could hide the enclosing
binding for x.) Indeed, one can think of a top-level splice as a
kind of programmable import statement.
8 The quotation monad revisited
So far we have used the quotation monad only to generate fresh
names. It has other useful purposes too, as we discuss in this section.
8.1 Reification
Reification is Template Haskell's way of allowing the programmer
to query the state of the compiler's internal (symbol) tables. For
example, the programmer may write:
module M where
  data T a = Tip a | Fork (T a) (T a)

  repT :: Decl
  repT = reifyDecl T

  lengthType :: Type
  lengthType = reifyType length

  percentFixity :: Q Int
  percentFixity = reifyFixity (%)

  here :: Q String
  here = reifyLocn
First, the construct reifyDecl T returns a computation of type
Decl (i.e. Q Dec), representing the type declaration of T. If we
performed the computation repT (perhaps by writing $repT) we
would obtain the Dec:
Data "M:T" ["a"]
[Constr "M:Tip" [Tvar "a"],
Constr "M:Fork"
[Tapp (Tcon (Name "M:T")) (Tvar "a"),
Tapp (Tcon (Name "M:T")) (Tvar
We write "M:T" to mean unambiguously "the T that is defined in
module M" - we say that M:T is its original name. Original names
are not part of the syntax of Haskell, but they are necessary if we
are to describe (and indeed implement) the meta-programming cor-
rectly. We will say more about original names in Section 9.1.
In a similar way, reifyDecl f, gives a data structure that represents
the value declaration for f; and similarly for classes. In-
deed, reification provides a general way to get at compile-time in-
formation. The construct reifyType length returns a computation
of type Type (i.e. Q Typ) representing the compiler's knowledge
about the type of the library function length. Similarly
reifyFixity tells the fixity of its argument, which is useful when
figuring out how to print something. Finally, reifyLocn, returns
a computation with type Q String, which represents the location
in the source file where the reifyLocn occurred. Reify always
returns a computation, which can be combined with other computations
at compile-time. Reification is a language construct, not a function;
one cannot say (map reifyType xs), for example.
It is important that reification returns a result in the quotation
monad. For example, consider this definition of an assertion function:

assert :: Expr  -- Expr of type (Bool -> a -> a)
assert = do { locn <- reifyLocn
            ; [| \ b r -> if b then r else
                   error ("Assert fail at " ++ $(lift locn)) |] }
(Notice the comment giving the type of the expression generated
by assert; here is where the more static type system of MetaML
would be nicer.) One might invoke assert like this:
find xs n = $assert (n < 10) (xs !! n)

When the $assert splice is expanded, we get:

find xs n = (\ b r -> if b then r else
                error ("Assert fail at " ++
                       "line 22 of Foo.hs"))
            (n < 10) (xs !! n)
It is vital, of course, that the reifyLocn captures the location of
the splice site of assert, rather than its definition site - and that
is precisely what we achieve by making reifyLocn return a com-
putation. One can take the same idea further, by making assert's
behaviour depend on a command-line argument, analogous to cpp's
command mechanism for defining symbols -Dfoo:
cassert :: Expr  -- Expr of type (Bool -> a -> a)
cassert = do { mb <- reifyOpt "DEBUG"
             ; if isNothing mb then [| \ b r -> r |]
               else assert }
Here we assume another reification function
reifyOpt :: String -> Q (Maybe String), which returns
Nothing if there is no -D command line option for the specified
string, and the defined value if there is one.
One could go on. It is not yet clear how much reification can or
should be allowed. For example, it might be useful to restrict the
use of reifyDecl to type constructors, classes, or variables (e.g.
function) declared at the top level in the current module, or perhaps
to just type constructors declared in data declarations in imported
modules. It may also be useful to support additional kinds of reification
making other compiler symbol table information available.
8.2 Failure
A compile-time meta-program may fail, because the programmer
made some error. For example, we would expect $(zipN (-1))
to fail, because it does not make sense to produce an n-ary zip
function for a negative number of arguments. Errors of this sort are
due to inappropriate
use, rather than bogus implementation of the meta-program,
so the meta-programmer needs a way to cleanly report the error.
This is another place where the quotation monad is useful. In the
case of zipN we can write:
zipN :: Int -> Expr
zipN n | n < 2     = fail "Arg to zipN must be >= 2"
       | otherwise = ...
The fail is the standard monadic fail operator, from class Monad,
whose type (in this instance) is
fail :: String -> Q a
The compiler can "catch" errors reported via fail, and gracefully
report where they occurred.
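The error-reporting behaviour of fail can be modelled with a toy Q. In this sketch (my simplification, not the real library) Q is just Either String, failure is Left, and zipN's real code-generating body is replaced by a placeholder string:

```haskell
-- A sketch of error reporting through a Q-like monad, modelling Q as
-- Either String. Left carries a reported meta-program error.
type Q' a = Either String a

failQ :: String -> Q' a
failQ = Left

zipN :: Int -> Q' String
zipN n
  | n < 2     = failQ "Arg to zipN must be >= 2"
  | otherwise = Right ("zip" ++ show n)   -- stand-in for generated code
```

A caller (here, the compiler) can then pattern-match on the result and report the message together with the splice location.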
8.3 Input/output
A meta-program may require access to input/output facilities. For
example, we may want to write:
splice (genXML "foo.xml")
to generate a Haskell data type declaration corresponding to the
XML schema stored in the file "foo.xml", together with some
boilerplate Haskell functions to work over that data type.
To this end, we can easily provide a way of performing arbitrary
input/output from the quotation monad:
qIO :: IO a -> Q a
Naturally, this power is open to abuse; merely compiling a malicious
program might delete your entire file store. Many compromise
positions are possible, including ruling out I/O altogether, or
allowing a limited set of benign operations (such as file reading
only). This is a policy choice, not a technical one, and we do not
consider it further here.
8.4 Printing code
So far we have only produced code in order to splice it into the
module being compiled. Sometimes we want to write programs that
generate a Haskell program, and put it in a file (rather than compiling
it). The Happy parser generator is an example of an existing
program that follows this paradigm. Indeed, for pedagogic reasons,
it is extremely convenient to display the code we have generated,
rather than just compile it.
To this end, libraries are provided that make Exp, Dec, etc instances
of class Show.
instance Show Exp
instance Show Dec
...etc.
To display code constructed in the computational framework we
supply the function runQ :: Q a -> IO a. Thus, if we compile
and run the program
main = do { e <- runQ (sel 1 3); putStr (show e) }

the output "\x -> case x of (a,b,c) -> a" will be produced.
Notice the absence of the splicing $! (sel was defined
in Section 4.)
8.5 Implementing Q
So far we have treated the Q monad abstractly, but it is easy to im-
plement. It is just the IO monad augmented with an environment:
newtype Q a = Q (Env -> IO a)
The environment contains:
. A mutable location to serve as a name supply for gensym.
. The source location of the top-level splice that invoked the
evaluation, for reifyLocn.
. The compiler's symbol table, to support the implementation
of reifyDecl, reifyFixity, reifyType.
. Command-line switches, to support reifyOpt.
Other things could, of course, readily be added.
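The description above can be made concrete with a small self-contained sketch. This is not GHC's implementation: Env is cut down to just a name supply and a splice location, and runQ fabricates a fresh environment (the hard-wired location string is an assumption of this sketch).

```haskell
import Data.IORef

-- A sketch of Q as described: the IO monad augmented with an environment.
data Env = Env { nameSupply :: IORef Int   -- for gensym
               , spliceLocn :: String }    -- for reifyLocn

newtype Q a = Q { unQ :: Env -> IO a }

instance Functor Q where
  fmap f (Q m) = Q (fmap f . m)
instance Applicative Q where
  pure x      = Q (\_ -> pure x)
  Q f <*> Q x = Q (\e -> f e <*> x e)
instance Monad Q where
  Q m >>= k = Q (\e -> do { a <- m e; unQ (k a) e })

gensym :: String -> Q String
gensym s = Q $ \env -> do
  n <- readIORef (nameSupply env)
  writeIORef (nameSupply env) (n + 1)
  return (s ++ "'" ++ show n)

reifyLocn :: Q String
reifyLocn = Q (return . spliceLocn)

qIO :: IO a -> Q a          -- Section 8.3's escape hatch is trivial here
qIO io = Q (\_ -> io)

runQ :: Q a -> IO a
runQ (Q m) = do
  supply <- newIORef 0
  m (Env supply "line 1 of Example.hs")
```

The compiler's symbol table and command-line switches would be two more fields of Env, consulted by reifyDecl, reifyType, reifyFixity and reifyOpt.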
9 Quasi-quotes and Lexical Scoping
We have introduced the quasi-quote notation informally, and it is
time to pay it direct attention.
The quasi-quote notation is a convenient shorthand for representing
Haskell programs, and as such it is lexically scoped. More precisely
every occurrence of a variable is bound to the value that
is lexically in scope at the occurrence site in the original
source program, before any template expansion.
This obvious-sounding property is what the Lisp community calls
hygienic macros [10]. In a meta-programming setting it is not
nearly as easy to implement as one might think.
The quasi-quote notation is implemented on top of the quotation
monad (Section 6), and we saw there that variables bound inside
quasi-quotes must be renamed to avoid inadvertent capture (the
cross2a example). But that is not all; what about variables bound
outside the quasi-quotes?
9.1 Cross-stage Persistence
It is possible for a splice to expand to an expression that contains
names that are not in scope where the splice occurs, and we need to
take care when this happens. Consider this rather contrived example
module T( genSwap ) where
  swap (x,y) = (y,x)
  genSwap x  = [| swap x |]

Now consider a call of genSwap in another module:
module Foo where
import T( genSwap )
swap = True
What does the splice $(genSwap (4,5)) expand to? It cannot expand
to (swap (4,5)) because, in module Foo, plain "swap" would
bind to the boolean value defined in Foo, rather than the swap defined
in module T. Nor can the splice expand to (T.swap (4,5)),
using Haskell's qualified-name notation, because ``T.swap'' is not
in scope in Foo: only genSwap is imported into Foo's name space
by import T( genSwap ).
Instead, we expand the splice to (T:swap (4,5)), using the original
name T:swap. Original names were first discussed in Section
8.1 in the context of representations returned by reify. They solve
a similar problem here. They are part of code representations that
must unambiguously refer to (global, top-level) variables that may
be hidden in scopes where the representations may be used. They
are an extension to Haskell that Template Haskell uses to implement
static scoping across the meta-programming extensions, and
are not accessible in the ordinary part of Haskell. For example, one
cannot write M:map f [1,2,3].
The ability to include in generated code the value of a variable that
exists at compile-time has a special name - cross-stage persistence
- and it requires some care to implement correctly. We have
just seen what happens for top-level variables, such as swap, but
nested variables require different treatment. In particular, consider
the status of the variable x, which is free in the quotation [| swap x |].
Unlike swap, x is not a top-level binding in the module T. Indeed,
nothing other than x's type is known when the module T is com-
piled. There is no way to give it an original name, since its value
will vary with every call to genSwap.
Cross-stage persistence for this kind of variable is qualitatively dif-
ferent: it requires turning arbitrary values into code. For example,
when the compiler executes the call $(genSwap (4,5)), it passes
the value (4,5) to genSwap, but the latter must return a data structure
of type Exp:
App (Var "T:swap") (Tup [Lit (Int 4), Lit (Int 5)])
Somehow, the code for genSwap has to "lift" a value into an Exp.
To show how this happens, here is what genSwap becomes when
the quasi-quotes are translated away:

genSwap x = do { t <- lift x
               ; return (App (Var "T:swap") t) }
Here, we take advantage of Haskell's existing type-class mecha-
nism. lift is an overloaded function defined by the type class
class Lift t where
lift :: t -> Expr
Instances of Lift allow the programmer to explain how to lift types
of his choice into an Expr. For example, these ones are provided as
part of Template Haskell:
instance Lift Int where
  lift n = lit (Int n)

instance (Lift a, Lift b) => Lift (a,b) where
  lift (a,b) = tup [lift a, lift b]
Taking advantage of type classes in this way requires a slight
change to the typing judgment VAR of Figure 2. When the stage
s is B - that is, when inside quasi-quotes - and the variable x is
bound outside the quasi quotes but not at top level, then the type
checker must inject a type constraint Lift t, where x has type t.
(We have omitted all mention of type constraints from Figure 2 but
in the real system they are there, of course.)
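The whole cross-stage persistence story for nested variables can be exercised with a toy rendering. Here Exp and the Lift class are cut-down stand-ins for the real types, and genSwap is the translated version from above with the Q monad dropped for simplicity; all of these simplifications are assumptions of the sketch.

```haskell
-- Toy cross-stage persistence: lifting run-time values into code.
data Lt  = Int Int deriving (Show, Eq)
data Exp = Var String | App Exp Exp | Tup [Exp] | Lit Lt
  deriving (Show, Eq)

class Lift t where
  lift :: t -> Exp

instance Lift Int where
  lift n = Lit (Int n)

instance (Lift a, Lift b) => Lift (a, b) where
  lift (a, b) = Tup [lift a, lift b]

-- The translated genSwap, minus the Q monad: the Lift constraint is
-- exactly the one the type checker injects for the nested variable x.
genSwap :: Lift a => a -> Exp
genSwap x = App (Var "T:swap") (lift x)
```

Evaluating genSwap (4,5) yields precisely the Exp data structure shown earlier for the splice $(genSwap (4,5)).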
To summarize, lexical scoping means that the free variables (such
as swap and x) of a top-level quasi-quote (such as the right hand
side of the definition of genSwap) are statically bound to the closure.
They do not need to be in scope at the application site (inside
module Foo in this case); indeed some quite different value of the
same name may be in scope. There is nothing terribly surprising
about this - it is simply lexical scoping in action, and is precisely
the behaviour we would expect if genSwap were an ordinary function.
9.2 Dynamic scoping
Occasionally, the programmer may instead want a dynamic scoping
strategy in generated code. In Template Haskell we can express
dynamic scoping too, like this:

genSwapDyn x = [| $(var "swap") x |]
Now a splice site $(genSwapDyn (4,5)) will expand to
(swap (4,5)), and this swap will bind to whatever swap is in
scope at the splice site, regardless of what was in scope at the definition
of genSwapDyn. Such behaviour is sometimes useful, but in
Template Haskell it is clearly flagged by the use of a string-quoted
variable name, as in (var "swap"). All un-quoted variables are
lexically scoped.
It is an open question whether this power is desirable. If not, it is
easily removed, by making var take, and gensym return, an abstract
type instead of a String.
9.3 Implementing quasi-quote
The quasi-quote notation can be explained in terms of original
names, the syntax constructor functions, and the use of gensym,
do and return, and the lift operation. One can think of this as
a translation process, from the term within the quasi-quotes to another
term. Figure 3 makes this translation precise by expressing
the translation as an ordinary Haskell function. In this skeleton we
handle enough of the constructors of Pat and Exp to illustrate the
process, but omit many others in the interest of brevity.
The main function, trE, translates an expression inside quasi-quotes:

trE :: VEnv -> Exp -> Exp

The first argument is an environment of type VEnv; we ignore it
for a couple more paragraphs. Given a term t :: Exp, the call
(trE cl t) should construct another term t' :: Exp, such that
t' evaluates to t. In our genSwap example, the compiler translates
genSwap's body, [| swap x |], by executing the translation
function trE on the arguments:
trE cl (App (Var "swap") (Var "x"))
The result of the call is the Exp:
(App (App (Var "app")
(App (Var "var") (str "T:swap")))
(App (Var "lift") (Var "x")))
which when printed as concrete syntax is:
app (var "T:swap") (lift x)
which is what we'd expect the quasi-quoted [| swap x |] to expand
into after the quasi-quotes are translated out.
(It is the environment cl that tells trE to treat "swap" and "x"
differently.)
Capturing this translation process as a Haskell function, we write:
trE cl (App a b)    = App (App (Var "app") (trE cl a)) (trE cl b)
trE cl (Cond x y z) = App (App (App (Var "cond") (trE cl x))
                               (trE cl y))
                          (trE cl z)
trE cl ...

There is a simple pattern we can capture here:

trE cl (App a b)    = rep "app"  (trEs cl [a,b])
trE cl (Cond x y z) = rep "cond" (trEs cl [x,y,z])
trEs :: VEnv -> [Exp] -> [Exp]
trEs cl es = map (trE cl) es
rep :: String -> [Exp] -> Exp
rep f es = apps (Var f) es
  where apps f []     = f
        apps f (e:es) = apps (App f e) es
Now we return to the environment, cl :: VEnv. In Section 9.1 we
discovered that variables need to be treated differently depending
on how they are bound. The environment records this information,
and is used by trE to decide how to translate variable occurrences:
type VEnv = String -> VarClass
data VarClass = Orig ModName | Lifted | Bound
The VarClass for a variable v is as follows:
trE :: VEnv -> Exp -> Exp
trE cl (Var s) =
  case cl s of
    Bound    -> rep "var"  [Var s]
    Lifted   -> rep "lift" [Var s]
    Orig mod -> rep "var"  [str (mod ++ ":" ++ s)]
trE cl e@(Lit (Int i)) = rep "lit" [e]
trE cl (App f x)       = rep "app" (trEs cl [f,x])
trE cl (Tup es)        = rep "tup" [listE (trEs cl es)]
trE cl (Lam ps e)      = let (ss1, xs) = trPs ps
                         in Do (ss1 ++ [NoBindSt (rep "lam" [listE xs, trE cl e])])
trE cl (Esc e)         = copy cl e
trE cl (Br e)          = error "Nested Brackets not allowed"

trEs :: VEnv -> [Exp] -> [Exp]
trEs cl es = map (trE cl) es

copy :: VEnv -> Exp -> Exp
copy cl (Var s)    = Var s
copy cl (Lit c)    = Lit c
copy cl (App f x)  = App (copy cl f) (copy cl x)
copy cl (Lam ps e) = Lam ps (copy cl e)
copy cl (Br e)     = trE cl e

trP :: Pat -> ([Statement Pat Exp Dec], Pat)
trP (p @ (Pvar s)) = ([BindSt p (rep "gensym" [str s])]
                     , rep "pvar" [Var s])
trP (Ptup ps)      = let (ss, ps') = trPs ps
                     in (ss, rep "ptup" [listE ps'])

trPs :: [Pat] -> ([Statement Pat Exp Dec], [Pat])
Figure 3. The quasi-quote translation function trE.
. Orig m means that the v is bound at the top level of module
m, so that m : v is its original name.
. Lifted means that v is bound outside the quasi-quotes, but
not at top level. The translation function will generate a call
to lift, while the type checker will later ensure that the type
of v is in class Lift.
. Bound means that v is bound inside the quasi-quotes, and
should be alpha-renamed.
These three cases are reflected directly in the case for Var in trE
(Figure 3).
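The Var case is small enough to run in isolation. The sketch below uses toy Exp and VarClass types (Str is this sketch's stand-in for whatever string-literal constructor the real syntax has, and rep is restated for self-containment):

```haskell
-- A runnable miniature of trE's Var case.
data Exp = Var String | App Exp Exp | Str String
  deriving (Show, Eq)

data VarClass = Orig String | Lifted | Bound
type VEnv = String -> VarClass

str :: String -> Exp
str = Str

rep :: String -> [Exp] -> Exp
rep f es = foldl App (Var f) es

trVar :: VEnv -> String -> Exp
trVar cl s = case cl s of
  Bound  -> rep "var"  [Var s]                 -- alpha-renamed later
  Lifted -> rep "lift" [Var s]                 -- needs a Lift instance
  Orig m -> rep "var"  [str (m ++ ":" ++ s)]   -- original name
```

For instance, with an environment that classifies "swap" as Orig "T", trVar produces the representation of (var "T:swap"), exactly as in the genSwap example.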
We need an auxiliary function trP to translate patterns
trP :: Pat -> ([Statement Pat Exp Dec],Pat)
The first part of the pair returned by trP is a list of Statements
(representing the gensym bindings generated by the translation).
The second part of the pair is a Pat representing the alpha-renamed
pattern. For example, when translating a pattern-variable (such as
x), we get one binding statement (x <- gensym "x"), and a result
(pvar x).
With trP in hand, we can look at the Lam case for trE. For a lambda
expression (such as \ f x -> f x) we wish to generate a local do
binding which preserves the scope of the quoted lambda:

do { f <- gensym "f"
   ; x <- gensym "x"
   ; lam [Pvar f, Pvar x] (app (var f) (var x)) }
The bindings (f <- gensym "f"; x <- gensym "x") and re-named
patterns [Pvar f,Pvar x] are bound to the meta-variables
ss1 and xs by the call trPs ps, and these are assembled with the
body (app (var f) (var x)) generated by the recursive call to
trE into the new do expression which is returned.
The last interesting case is the Esc case. Consider, for example, the
quotation

[| ( \ f -> f, \ f (x,y) -> f y $(w a) ) |]

The translation trE translates this as follows:

tup [ do { f <- gensym "f"
         ; lam [Pvar f] (var f) }
    , do { f <- gensym "f"
         ; x <- gensym "x"
         ; y <- gensym "y"
         ; lam [Pvar f, Ptup [Pvar x, Pvar y]]
               (app (app (var f) (var y)) (w a)) } ]
Notice that the body of the splice $(w a) should be transcribed
literally into the translated code as (w a). That is what the copy
function does.
Looking now at copy, the interesting case is when we reach
a nested quasi-quotation; then we just resort back to trE.
For example, given a code transformer f :: Expr -> Expr,
the quasi-quoted term with nested quotations within an escape

[| \ x -> ( $(f [| x |]), 5 ) |]

translates to

do { x <- gensym "x"
   ; lam [Pvar x] (tup [f (var x), lit (Int 5)]) }
10 Related work
10.1 C++ templates
C++ has an elaborate meta-programming facility known as templates
[1]. The basic idea is that static, or compile-time, computation
takes place entirely in the type system of C++. A template
class can be considered as a function whose arguments can be either
types or integers, thus: Factorial<7>. It returns a type; one
can extract an integer result by returning a struct and selecting a
conventionally-named member, thus: Factorial<7>::RET.
The type system is rich enough that one can construct and manipulate
arbitrary data structures (lists, trees, etc) in the type system, and
use these computations to control what object-level code is gen-
erated. It is (now) widely recognized that this type-system computation
language is simply an extraordinarily baroque functional
language, full of ad hoc coding tricks and conventions. The fact
that C++ templates are so widely used is very strong evidence of
the need for such a thing: the barriers to their use are considerable.
We believe that Template Haskell takes a more principled approach
to the same task. In particular, the static computation language is
the same as the dynamic language, so no new programming idiom
is required. We are not the first to think of this idea, of course: the
Lisp community has been doing this for years, as we discuss next.
10.2 Scheme macros
The Lisp community has taken template meta-programming seriously
for over twenty years [11], and modern Scheme systems
support elaborate towers of language extensions based entirely on
macros. Early designs suffered badly from the name-capture prob-
lem, but this problem was solved by the evolution of "hygienic"
macros [10, 4]; Dybvig, Hieb and Bruggeman's paper is an excel-
lent, self-contained summary of the state of the art [7].
The differences of vocabulary and world-view, combined with the
subtlety of the material, make it quite difficult to give a clear picture
of the differences between the Scheme approach and ours. An
immediately-obvious difference is that Template Haskell is statically
typed, both before expansion, and again afterwards. Scheme
macro expanders do have a sort of static type system, however,
which reports staging errors. Beyond that, there are three pervasive
ways in which the Scheme system is both more powerful and
less tractable than ours.
. Scheme admits new binding forms. A suitably-defined macro
foo might require its first argument to be a variable name,
which then scopes over the second argument, so that a call
(foo v body) expands to code that binds v in body.
Much of the complexity of Scheme macros arises from the
ability to define new binding forms in this way. Template
Haskell does this too, but much more clumsily.
On the other hand, at least the explicit (var "k") makes clear that
such an occurrence is not lexically scoped in the source program.
A declaration splice (splice e) does bind variables, but declaration
splices can only occur at top level (outside quasi-
quotes), so the situation is more tractable.
. Scheme macros have a special binding form
(define-syntax) but the call site has no syntactic baggage.
Instead a macro call is identified by observing that the token
in the function position is bound by define-syntax. In
Template Haskell, there is no special syntax at the definition
site - template functions are just ordinary Haskell functions
- but a splice ($) is required at the call site.
There is an interesting trade-off here. Template Haskell
"macros" are completely higher-order and first class, like any
other function: they can be passed as arguments, returned as
results, partially applied, constructed with anonymous lamb-
das, and so on. Scheme macros are pretty much first order:
they must be called by name. (Bawden discussed first-class
macros [2].)
. Scheme admits side effects, which complicates everything.
When is a mutable value instantiated? Can it move from
compile-time to run-time? When is it shared? And so on.
Haskell is free of these complications.
10.3 MetaML and its derivatives
The goals of MetaML [16, 14, 13] and Template Haskell differ sig-
nificantly, but many of the lessons learned from building MetaML
have influenced the Template Haskell design. Important features
that have migrated from MetaML to Template Haskell include:
. The use of a template (or Quasi-quote notation) as a means of
constructing object programs.
. Type-safety. No program fragment is ever executed in a context
before it is type-checked, and all type checking of constructed
program fragments happens at compile-time.
. Static scoping of object-variables, including alpha renaming
of bound object-variables to avoid inadvertent capture.
. Cross-stage persistence. Free object-variables representing
run-time functions can be mentioned in object-code fragments
and will be correctly bound in the scope where code is created,
not where it is used.
10.3.1 MetaML
But there are also significant differences between Template Haskell
and MetaML. Most of these differences follow from different assumptions
about how meta-programming systems are used. The
following assumptions, used to design Template Haskell, differ
strongly from MetaML's.
. Users can compute portions of their program rather than writing
them and should pay no run-time overhead. Hence the
assumption that there are exactly two stages: Compile-time,
and Run-time. In MetaML, code can be built and executed,
even at run-time. In Template Haskell, code is meant to be
compiled, and all meta-computation happens at compile-time.
. Code is represented by an algebraic datatype, and is hence
amenable to inspection and case analysis. This appears at
first to be at odds with the static-scoping and quasi-quotation
mechanisms, but as we have shown it can be accomplished in a
rather interesting way using monads.
. Everything is statically type-checked, but checking is delayed
until the last possible moment using a strategy of just-in-time
type checking. This allows more powerful meta-programs to
be written without resorting to dependent types.
. Hand-written code is reifiable, i.e., the data representing it
can be obtained for further manipulation. Any run-time function
or data type definition can be reified - i.e. a data structure
of its representation can be obtained and inspected by the
compile-time functions.
Quasi-quotes in MetaML indicate the boundary between stages
of execution. Brackets and run in MetaML are akin to quote and
eval in Scheme. In Template Haskell, brackets indicate the boundary
between compile-time execution and run-time execution.
One of the main breakthroughs in the type system of MetaML was
the introduction of quasi-quotes which respect both scoping and
typing. If a MetaML code generating program is type-correct then
so are all the programs it generates [16]. This property is crucial,
because the generation step happens at run-time, and that is too late
to start reporting type errors.
However, this security comes at a price: MetaML cannot express
many useful programs. For example, the printf example of Section
2 cannot be typed by MetaML, because the type of the call to
printf depends on the value of its string argument. One way to
address this problem is using a dependent type system, but that approach
has distinct disadvantages here. For a start, the programmer
would have the burden of writing the function that transforms the
format string to a type; and the type system itself becomes much
more complicated to explain.
In Template Haskell, the second stage may give rise to type errors,
but they still occur at compile time, so the situation is much less
serious than with run-time code generation.
A contribution of the current work is the development of a semantics
for quasi-quotes as monadic computations. This allows quasi-
quotes to exist in a pure language without side effects. The process
of generating fresh names is encapsulated in the monad, and hence
quasi-quotes are referentially transparent.
10.3.2 MetaO'Caml
MetaO'Caml [3] is a staged ML implementation built on top of the
O'Caml system. Like MetaML it is a run-time code generation sys-
tem. Unlike MetaML it is a compiler rather than an interpreter, generating
compiled byte-code at run-time. It has demonstrated some
impressive performance gains for staged programs over their non-
staged counterparts. The translation of quasi-quotes in a manner
that preserves the scoping-structure of the quoted expression was
first implemented in MetaO'Caml.
10.3.3 MacroML
MacroML [8] is a proposal to add compile-time macros to an ML
language. MacroML demonstrates that even macros which implement
new binding constructs can be given precise semantics as
staged programs, and that macros can be strongly typed. MacroML
allows the introduction of new hygienic local binders. MacroML
supports only generative macros. Macros are limited to constructing
new code and combining code fragments; they cannot analyze
code fragments.
10.3.4 Dynamic Typing
The approach of just-in-time type-checking has its roots in an earlier
study [15] of dynamic typing as staged type-inference. In that
work, as well as in Template Haskell, typing of code fragments is
split into stages. In Template Haskell, code is finally type-checked
only at top-level splice points (splice and $ in state C). In that
work, code is type checked at all splice points. In addition, code
construction and splice point type-checking were run-time activi-
ties, and significant effort was placed in reducing the run-time overhead
of the type-checking.
Implementation
We have a small prototype that can read Template Haskell and
perform compile-time execution. We are in the throes of scaling
this prototype up to a full implementation, by embodying Template
Haskell as an extension to the Glasgow Haskell Compiler, ghc.
The ghc implementation fully supports separate compilation. Indeed,
when compiling a module M, only functions defined in modules
compiled earlier than M can be executed at compile time. (Reason:
to execute a function defined in M itself, the compiler would
need to compile that function - and all the functions it calls - all
the way through to executable code before even type-checking other
parts of M.) When a compile-time function is invoked, the compiler
finds its previously-compiled executable and dynamically links it
(and all the modules and packages it imports) into the running com-
piler. A module consisting completely of meta-functions need not
be linked into the executable built by the final link step (although
ghc -make is not yet clever enough to figure this out).
Further work
Our design represents work in progress. Our hope is that, once we
can provide a working implementation, further work can be driven
directly by the experiences of real users. Meanwhile there are many
avenues that we already know we want to work on.
With the (very important) exception of reifying data type defini-
tions, we have said little about user-defined code manipulation or
optimization, which is one of our advertised goals; we'll get to that.
We do not yet know how confusing the error messages from Template
Haskell will be, given that they may arise from code that the
programmer does not see. At the least, it should be possible to display
this code.
We have already found that one often wants to get earlier type security
and additional documentation by saying "this is an Expr whose
type will be Int", like MetaML's type . We expect to add parameterised
code types, such as Expr Int, using Expr * (or some
such) to indicate that the type is not statically known.
C++ templates and Scheme macros have a lighter-weight syntax for
calling a macro than we do; indeed, the programmer may not need
to be aware that a macro is involved at all. This is an interesting
trade-off, as we discussed briefly in Section 10.2. There is a lot to
be said for reducing syntactic baggage at the call site, and we have
a few speculative ideas for inferring splice annotations.
Acknowledgments
We would like particularly to thank Matthew Flatt for several long
conversations in which we explored the relationship between Template
Haskell and Scheme macros - but any errors on our comparison
between the two remain ours. We also thank Magnus Carlsson,
Fergus Henderson, Tony Hoare, Dick Kieburtz, Simon Marlow,
Emir Pasalic, and the Haskell workshop referees, for their helpful
comments on drafts of our work.
We would also like to thank the students of the class CSE583 -
Fundamentals of Staged Computation, in the winter of 2002, who
participated in many lively discussions about the uses of staging,
especially Bill Howe, whose final project motivated Tim Sheard to
begin this work.
The work described here was supported by NSF Grant CCR-
0098126, the M.J. Murdock Charitable Trust, and the Department
of Defense.
--R
Modern C
A bytecode-compiled
Macros that work.
Functional unparsing.
A technical overview of Generic Haskell.
Syntactic abstraction in Scheme.
Macros as multi-stage computations: Type-safe
Derivable type classes.
Hygienic macro expansion.
Special forms in Lisp.
Impact of economics on compiler optimization
Accomplishments and research challenges in meta-programming
Introduction to Multistage Programming Using MetaML.
--TR
Macros that work
Syntactic abstraction in Scheme
Multi-stage programming with explicit annotations
Dynamic typing as staged type inference
Hygienic macro expansion
First-class macros have types
Impact of economics on compiler optimization
Macros as multi-stage computations
Accomplishments and Research Challenges in Meta-programming
Special forms in Lisp
--CTR
Andy Gill, Introducing the Haskell equational reasoning assistant, Proceedings of the 2006 ACM SIGPLAN workshop on Haskell, September 17-17, 2006, Portland, Oregon, USA
Björn Bringert , Anders Höckersten , Conny Andersson , Martin Andersson , Mary Bergman , Victor Blomqvist , Torbjörn Martin, Student paper: HaskellDB improved, Proceedings of the 2004 ACM SIGPLAN workshop on Haskell, p.108-115, September 22-22, 2004, Snowbird, Utah, USA
Walid Taha , Patricia Johann, Staged notational definitions, Proceedings of the second international conference on Generative programming and component engineering, p.97-116, September 22-25, 2003, Erfurt, Germany
Sava Krstić , John Matthews, Semantics of the reFLect language, Proceedings of the 6th ACM SIGPLAN international conference on Principles and practice of declarative programming, p.32-42, August 24-26, 2004, Verona, Italy
Gregory Neverov , Paul Roe, Towards a fully-reflective meta-programming language, Proceedings of the Twenty-eighth Australasian conference on Computer Science, p.151-158, January 01, 2005, Newcastle, Australia
Tim Sheard, Languages of the future, Companion to the 19th annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications, October 24-28, 2004, Vancouver, BC, CANADA
Martin Sulzmann , Meng Wang, Aspect-oriented programming with type classes, Proceedings of the 6th workshop on Foundations of aspect-oriented languages, p.65-74, March 13-13, 2007, Vancouver, British Columbia, Canada
Ralf Lämmel , Simon Peyton Jones, Scrap your boilerplate with class: extensible generic functions, ACM SIGPLAN Notices, v.40 n.9, September 2005
Louis-Julien Guillemette , Stefan Monnier, Type-Safe Code Transformations in Haskell, Electronic Notes in Theoretical Computer Science (ENTCS), v.174 n.7, p.23-39, June, 2007
Björn Bringert , Aarne Ranta, A pattern for almost compositional functions, ACM SIGPLAN Notices, v.41 n.9, September 2006
Marcos Viera , Alberto Pardo, A multi-stage language with intensional analysis, Proceedings of the 5th international conference on Generative programming and component engineering, October 22-26, 2006, Portland, Oregon, USA
Ralf Lämmel, Scrap your boilerplate with XPath-like combinators, ACM SIGPLAN Notices, v.42 n.1, January 2007
Lloyd Allison, A programming paradigm for machine learning, with a case study of Bayesian networks, Proceedings of the 29th Australasian Computer Science Conference, p.103-111, January 16-19, 2006, Hobart, Australia
Tim Sheard , Emir Pasalic, Two-level types and parameterized modules, Journal of Functional Programming, v.14 n.5, p.547-587, September 2004
Arthur I. Baars , S. Doaitse Swierstra, Type-safe, self inspecting code, Proceedings of the 2004 ACM SIGPLAN workshop on Haskell, September 22-22, 2004, Snowbird, Utah, USA
Don Syme, Leveraging .NET meta-programming components from F#: integrated queries and interoperable heterogeneous execution, Proceedings of the 2006 workshop on ML, September 16-16, 2006, Portland, Oregon, USA
Amr Sabry, Modeling quantum computing in Haskell, Proceedings of the ACM SIGPLAN workshop on Haskell, p.39-49, August 28-28, 2003, Uppsala, Sweden
Martin Erwig , Zhe Fu, Software reuse for scientific computing through program generation, ACM Transactions on Software Engineering and Methodology (TOSEM), v.14 n.2, p.168-198, April 2005
Jerzy Karczmarczuk, Structure and interpretation of quantum mechanics: a functional framework, Proceedings of the ACM SIGPLAN workshop on Haskell, p.50-61, August 28-28, 2003, Uppsala, Sweden
James Cheney, Scrap your nameplate: (functional pearl), ACM SIGPLAN Notices, v.40 n.9, September 2005
Ralf Lämmel , Simon Peyton Jones, Scrap your boilerplate: a practical design pattern for generic programming, ACM SIGPLAN Notices, v.38 n.3, March
Stephanie Weirich, RepLib: a library for derivable type classes, Proceedings of the 2006 ACM SIGPLAN workshop on Haskell, September 17-17, 2006, Portland, Oregon, USA
Edwin Brady , Kevin Hammond, A verified staged interpreter is a verified compiler, Proceedings of the 5th international conference on Generative programming and component engineering, October 22-26, 2006, Portland, Oregon, USA
Murdoch J. Gabbay, A new calculus of contexts, Proceedings of the 7th ACM SIGPLAN international conference on Principles and practice of declarative programming, p.94-105, July 11-13, 2005, Lisbon, Portugal
Maribel Fernández , Fabien Fleutot, A historic functional and object-oriented calculus, Proceedings of the 8th ACM SIGPLAN symposium on Principles and practice of declarative programming, July 10-12, 2006, Venice, Italy
Chiyan Chen , Hongwei Xi, Meta-programming through typeful code representation, Journal of Functional Programming, v.15 n.6, p.797-835, November 2005
Tim Sheard, Languages of the future, ACM SIGPLAN Notices, v.39 n.12, December 2004
Ralf Hinze, Generics for the masses, Journal of Functional Programming, v.16 n.4-5, p.451-483, July 2006
Martin Erwig , Deling Ren, An update calculus for expressing type-safe program updates, Science of Computer Programming, v.67 n.2-3, p.199-222, July, 2007
Jim Grundy , Tom Melham , John O'leary, A reflective functional language for hardware design and theorem proving, Journal of Functional Programming, v.16 n.2, p.157-196, March 2006
Judith Bayard Cushing , Nalini Nadkarni , Michael Finch , Anne Fiala , Emerson Murphy-Hill , Lois Delcambre , David Maier, Component-based end-user database design for ecologists, Journal of Intelligent Information Systems, v.29 n.1, p.7-24, August 2007
Rita Loogen , Yolanda Ortega-Mallén , Ricardo Peña-Marí, Parallel functional programming in Eden, Journal of Functional Programming, v.15 n.3, p.431-475, May 2005
Paul Hudak , John Hughes , Simon Peyton Jones , Philip Wadler, A history of Haskell: being lazy with class, Proceedings of the third ACM SIGPLAN conference on History of programming languages, p.12-1-12-55, June 09-10, 2007, San Diego, California | meta programming;templates |
581774 | On the complexity analysis of static analyses. | This paper argues that for many algorithms, and static analysis algorithms in particular, bottom-up logic program presentations are clearer and simpler to analyze, for both correctness and complexity, than classical pseudo-code presentations. The main technical contribution consists of two theorems which allow, in many cases, the asymptotic running time of a bottom-up logic program to be determined by inspection. It is well known that a datalog program runs in O(nk) time where k is the largest number of free variables in any single rule. The theorems given here are significantly more refined. A variety of algorithms are presented and analyzed as examples. | Introduction
This paper argues that for many algorithms, and static analysis algorithms in
particular, bottom-up logic program presentations are clearer and simpler to an-
alyze, for both correctness and complexity, than classical pseudo-code presenta-
tions. Most static analysis algorithms have natural representations as bottom-up
logic programs, i.e., as inference rules with a forward-chaining procedural inter-
pretation. The technical content of this paper consists of two meta-complexity
theorems which allow, in many cases, the running time of a bottom-up logic
logic program to be determined by inspection. This paper presents and analyzes
a variety of static analysis algorithms which have natural presentations as
bottom-up logic programs. For these examples the running time of the bottom-up
presentation, as determined by the meta-complexity theorems given here, is
either the best known or within a polylog factor of the best known.
We use the term "inference rule" to mean first order Horn clause, i.e., a first
order formula of the form A_1 ∧ ... ∧ A_n → C where C and each A_i is a first order
atom, i.e., a predicate applied to first order terms. First order Horn clauses form
a Turing complete model of computation and can be used in practice as a general
purpose programming language. The atoms A_1, ..., A_n are called the antecedents
of the rule and the atom C is called the conclusion. When using inference rules as
a programming language one represents arbitrary data structures as first order
terms. For example, one can represent terms of the lambda calculus or arbitrary
formulas of first order logic as first order terms in the underlying programming
language. The restriction to first order terms in no way rules out the construction
of rules defining static analyses for higher order languages.
There are two basic ways to view a set of inference rules as an algorithm - the
backward chaining approach taken in traditional Prolog interpreters [13, 4] and
the forward chaining, or bottom-up approach common in deductive databases
[23, 22, 18]. Meta-complexity analysis derives from the bottom-up approach. As
a simple example consider the rule P(x, y) ∧ P(y, z) → P(x, z), which states that
the binary predicate P is transitive. Let D be a set of assertions of the form
P(c, d) where c and d are constant symbols. More generally we will use the term
assertion to mean a ground atom, i.e., an atom not containing variables, and use
the term database to mean a set of assertions. For any set R of inference rules
and any database D we let R(D) denote the set of assertions that can be proved
in the obvious way from assertions in D using rules in R. If R consists of the
above rule for transitivity, and D consists of assertions of the form P (c; d), then
R(D) is simply the transitive closure of D. In the bottom-up view a rule set R
is taken to be an algorithm for computing output R(D) from input D.
Here we are interested in methods for quickly determining the running time
of a rule set R, i.e., the time required to compute R(D) from D. For exam-
ple, consider the following "algorithm" for computing the transitive closure of
a predicate EDGE defined by the bottom-up rules EDGE(x, y) → PATH(x, y) and
EDGE(x, y) ∧ PATH(y, z) → PATH(x, z). If the input graph contains e edges and n
vertices this algorithm runs in O(en) time - significantly better than O(n^3) for
sparse graphs.
Note that the O(en) running time can not be derived by simply counting the
number of variables in any single rule. Section 4 gives a meta-complexity theorem
which applies to arbitrary rule sets and which allows the O(en) running time of
this algorithm to be determined by inspection. For this simple rule set the O(en)
running time may seem obvious, but examples are given throughout the paper
where the meta-complexity theorem can be used in cases where a completely
rigorous treatment of the running time of a rule set would be otherwise tedious.
The meta-theorem proved in section 4 states that R(D) can be computed in time
proportional to the number of "prefix firings" of the rules in R - the number
of derivable ground instances of prefixes of rule antecedents. This theorem holds
for arbitrary rule sets, no matter how complex the antecedents or how many
antecedents rules have, provided that every variable in the conclusion of a rule
appears in some antecedent of that rule.
Before presenting the first significant meta-complexity theorem in section 4,
section 3 reviews a known meta-complexity theorem based on counting the number
of variables in a single rule. This can be used for "syntactically local" rule
sets - ones in which every term in the conclusion of a rule appears in some
antecedent. Some other basic properties of syntactically local rule sets are also
mentioned briefly in section 3 such as the fact that syntactically local rule sets
can express all and only polynomial time decidable term languages.
Section 4 gives the first significant meta-complexity theorem and some basic
examples, including the CKY algorithm for context-free parsing. Although this
paper focuses on static analysis algorithms, a variety of parsing algorithms, such
as Eisner and Satta's recent algorithm for bilexical grammars [6], have simple
complexity analyses based on the meta-complexity theorem given in section 4.
Section 5 gives a series of examples of program analysis algorithms expressed
as bottom-up logic programs. The first example is basic data flow. This algorithm
computes a "dynamic transitive closure" - a transitive closure operation
in which new edges are continually added to the underlying graph as the computation
proceeds. Many such dynamic transitive closure algorithms can be shown
to be 2NPDA-complete [11, 17]. 2NPDA is the class of languages that can be
recognized by a two-way nondeterministic pushdown automaton. A problem is
2NPDA-complete if it is in the class 2NPDA and furthermore has the property
that if it can be solved in sub-cubic time then any problem in 2NPDA can also be
solved in sub-cubic time. No 2NPDA-complete problem is known to be solvable
in sub-cubic time. Section 5 also presents a linear time sub-transitive data flow
algorithm which can be applied to programs typable with non-recursive data
types of bounded size and a combined control and data flow analysis algorithm
for the λ-calculus. In all these examples the meta-complexity theorem of section
4 allows the running time of the algorithm to be determined by inspection
of the rule set.
Section 6 presents the second main result of this paper - a meta-complexity
theorem for an extended bottom-up programming language incorporating the
union-find algorithm. Three basic applications of this meta-complexity theorem
for union-find rules are presented in section 7 - a unification algorithm, a congruence
closure algorithm, and a type inference algorithm for the simply typed
λ-calculus. Section 8 presents Henglein's quadratic time algorithm for typability
in a version of the Abadi-Cardelli object calculus [12]. This example is interesting
for two reasons. First, the algorithm is not obvious - the first published
algorithm for this problem used an O(n^3) dynamic transitive closure algorithm
[19]. Second, Henglein's presentation of the quadratic algorithm uses classical
pseudo-code and is fairly complex. Here we show that the algorithm can be presented
naturally as a small set of inference rules whose O(n^2) running time is
easily derived from the union-find meta-complexity theorem.
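As background for the union-find extension discussed in sections 6 through 8, the following is a minimal sketch of the standard union-find data structure with path compression and union by rank. It is a generic illustration in Python (class and method names are mine), not the paper's rule-language formulation.

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by rank."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def find(self, x):
        # Create a singleton set on first sight of x.
        if x not in self.parent:
            self.parent[x] = x
            self.rank[x] = 0
        # Locate the root of x's tree.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # Path compression: point every node on the path directly at the root.
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return rx
        # Union by rank keeps the forest shallow.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return rx
```

With both heuristics, a sequence of m operations takes nearly linear time, which is what makes the amortized accounting in the union-find meta-complexity theorem possible.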
Assumptions
As mentioned in the introduction, we will use the term assertion to mean a
ground atom, i.e., an atom not containing variables, and use the term database to
mean a set of assertions. Also as mentioned in the introduction, for any rule set
R and database D we let R(D) be the set of ground assertions derivable from
D using rules in R. We write D ⊢_R Φ as an alternative notation for Φ ∈ R(D).
We use |D| for the number of assertions in D and ||D|| for the number of distinct
ground terms appearing either as arguments to predicates in assertions in D or
as subterms of such arguments.
A ground substitution is a mapping from a finite set of variables to ground
terms. In this paper we consider only ground substitutions. If σ is a ground
substitution defined on all the variables occurring in a term t, then σ(t) is defined in
the standard way as the result of replacing each variable by its image under σ.
We also assume that all expressions - both terms and atoms - are represented
as interned dag data structures. This means that the same term is always represented
by the same pointer to memory so that equality testing is a unit time
operation. Furthermore, we assume that hash table operations take unit time
so that for any substitution σ defined (only) on x and y we can compute (the
pointer representing) σ(f(x, y)) in unit time. Note that interned expressions
support indexing. For example, given a binary predicate P we can index all assertions
of the form P(t, w) so that the data structure representing t points to a
list of all terms w such that P(t, w) has been asserted and, conversely, all terms
w point to a list of all terms t such that P(t, w) has been asserted.
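The interning assumption can be made concrete with a hash-consing constructor: structurally equal terms are built exactly once, so equality becomes a pointer (identity) comparison. The sketch below is my own Python illustration of the idea, not part of the paper; it uses a dictionary as the hash table assumed in the text.

```python
# Hash-consing table: maps a structural key to the unique interned term.
_intern_table = {}

def mk(op, *args):
    """Interning constructor: structurally equal terms share one object.

    Arguments must themselves be interned terms, so keying on their
    identities (id) is sound and takes unit time per argument.
    """
    key = (op,) + tuple(id(a) for a in args)
    t = _intern_table.get(key)
    if t is None:
        t = (op,) + args          # the unique representative of this term
        _intern_table[key] = t
    return t
```

Under this representation, two terms are equal exactly when they are the same object, so the unit-time equality and hashing assumptions of this section hold.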
We are concerned here with rules which are written with the intention of
defining bottom-up algorithms. Intuitively, in a bottom-up logic program any
variable in the conclusion that does not appear in any antecedent is "unbound" -
it will not have any assigned value when the rule runs. Although unbound
variables in conclusions do have a well defined semantics, when writing rules to
be used in a bottom-up way it is always possible to avoid such variables. A rule
in which all variables in the conclusion appear in some antecedent will be called
bottom-up bound. In this paper we consider only bottom-up bound inference
rules.
A datalog rule is one that does not contain terms other than variables. A
syntactically local rule is one in which every term in the conclusion appears
in some antecedent - either as an argument to a predicate or as a subterm
of such an argument. Every syntactically local rule is bottom-up bound and
every bottom-up bound datalog rule is syntactically local. However, a rule
such as P(x) → Q(f(x)) is bottom-up bound but not syntactically local, since
the term f(x) appears in the conclusion but not in any antecedent.
3 A First Meta-Complexity Theorem
Before giving the main results of this paper, which apply to arbitrary rule sets,
we give a first "naive" meta-complexity theorem. This theorem applies only to
syntactically local rule sets. Because every term in the conclusion of a syntactically
local rule appears in some antecedent, it follows that a syntactically local
rule can never introduce a new term. This implies that if R is syntactically local
then for any database D we have that R(D) is finite. More precisely, we have
the following.
Theorem 1. If R is syntactically local then R(D) can be computed in O(|D| +
||D||^k) time, where k is the largest number of variables occurring in any single rule.
To prove the theorem one simply notes that it suffices to consider the set of
ground Horn clauses consisting of the assertions in D (as unit clauses) plus all
instances of the rules in R in which all terms appear in D. There are O(||D||^k)
such instances. Computing the inferential closure of a set of ground clauses can
be done in linear time [5].
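The linear-time closure of ground Horn clauses appealed to here is usually implemented with a counting scheme in the style of Dowling and Gallier: each rule tracks how many of its antecedents remain underived, and firing is triggered when the count hits zero. The following is an illustrative Python sketch under my own data representation (atoms as strings, rules as antecedent-list/conclusion pairs), not the paper's.

```python
from collections import defaultdict

def horn_closure(facts, rules):
    """facts: iterable of atoms; rules: list of (antecedents, conclusion).

    Returns the set of all derivable atoms.  Each atom is processed once
    and each rule occurrence is decremented once, so the running time is
    linear in the total size of the clause set.
    """
    missing = [len(ants) for ants, _ in rules]   # unsatisfied antecedents per rule
    watch = defaultdict(list)                    # atom -> [(rule index, occurrence count)]
    for i, (ants, _) in enumerate(rules):
        for a in set(ants):
            watch[a].append((i, ants.count(a)))
    derived = set()
    # Seed with the given facts and with rules that have no antecedents.
    agenda = list(facts) + [c for ants, c in rules if not ants]
    while agenda:
        a = agenda.pop()
        if a in derived:
            continue
        derived.add(a)
        for i, cnt in watch[a]:
            missing[i] -= cnt
            if missing[i] == 0:                  # all antecedents now derived
                agenda.append(rules[i][1])
    return derived
```

Applying this to the O(||D||^k) ground instances described in the proof yields the stated bound.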
As the O(en) transitive closure example in the introduction shows, theorem 1
provides only a crude upper bound on the running time of inference rules. Before
presenting the second meta-complexity theorem, however, we briefly mention
some addition properties of local rule sets that are not used in the remainder of
the paper but are included here for the sake of completeness. The first property
is that syntactically local rule sets capture the complexity class P. We say that
a rule set R accepts a term t if INPUT(t) ⊢_R ACCEPT(t). The above theorem
implies that the language accepted by a syntactically local rule set is polynomial
time decidable. The following less trivial theorem is proved in [8]. It states the
converse - any polynomial time property of first-order terms can be encoded
as a syntactically local rule set.
Theorem 2 (Givan & McAllester). If L is a polynomial time decidable term
language then there exists a syntactically local rule set which accepts exactly the
terms in L.
The second subject we mention briefly is what we will call here semantic
locality. A rule set R will be called semantically local if whenever D ⊢_R Φ
there exists a derivation of Φ from assertions in D using rules in R such that
every term in that derivation appears in D. Every syntactically local rule set
is semantically local. By the same reasoning used to prove theorem 1, if R is
semantically local then R(D) can be computed in O(|D| + ||D||^k) time, where k
is the largest number of variables in any single rule. In many cases it is possible
to mechanically show that a given rule set is semantically local even though
it is not syntactically local [15, 2]. However, semantic locality is in general an
undecidable property of rule sets [8].
4 A Second Meta-Complexity Theorem
We now prove our second meta-complexity theorem. We will say that a database
E is closed under a rule set R if R(E) = E. It would seem that determining
closedness would be easier than computing the closure in cases where we are not
yet closed. The meta-complexity theorem states, in essence, that the closure can
be computed quickly - it can be computed in the time needed to merely check
the closedness of the final result. Consider a rule A_1 ∧ ... ∧ A_n → C. To check that
a database E is closed under this rule one can compute all ground substitutions
σ such that σ(A_1), ..., σ(A_n) are all in E and then check that σ(C) is also
in E. To find all such substitutions we can first match the pattern A_1 against
assertions in the database to get all substitutions σ_1 such that σ_1(A_1) is in E;
given σ_i such that σ_i(A_1), ..., σ_i(A_i) are all in E we can match σ_i(A_{i+1}) against
the assertions in the database to get all extensions σ_{i+1} such that σ_{i+1}(A_1), ...,
σ_{i+1}(A_{i+1}) are in E. Each substitution σ_i determines a "prefix firing" of the rule
as defined below.
Definition 1. We define a prefix firing of a rule A_1 ∧ ... ∧ A_n → C in a rule set
R under database E to be a ground instance σ(A_1), ..., σ(A_i) of an initial sequence
A_1, ..., A_i of the antecedents such that σ(A_1), ..., σ(A_i) are all contained in E.
We let P_R(E) be the set of all prefix firings of rules in R for database E.
Note that a rule with three antecedents might have a large
number of firings for the first two antecedents while having no firings of all three
antecedents. The simple algorithm outlined above for checking that E is closed
under R requires at least |P_R(E)| steps of computation. As outlined above, the
closure check algorithm would actually require more time because each step of
extending σ_i to σ_{i+1} involves iterating over the entire database. The following
theorem states that we can compute R(D) in time proportional to |D| plus
|P_R(R(D))|.
Theorem 3. For any set R of bottom-up bound inference rules there exists an
algorithm for mapping D to R(D) which runs in O(|D| + |P_R(R(D))|) time.
Before proving theorem 3 we consider some simple applications. Consider
the transitive closure algorithm defined by the inference rules EDGE(x, y) →
PATH(x, y) and EDGE(x, y) ∧ PATH(y, z) → PATH(x, z). If R consists of these two
rules and D consists of e assertions of the form EDGE(c, d) involving n constants
then we immediately have that |P_R(R(D))| is O(en). So theorem 3 immediately
implies that the algorithm runs in O(en) time.
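The O(en) bound can be realized directly with a worklist that indexes EDGE facts by their target vertex, so that each newly derived PATH fact is joined only with its matching edges. The following Python sketch (function name and representation are mine, not the paper's) implements exactly the two rules above.

```python
from collections import defaultdict

def transitive_closure(edges):
    """Bottom-up evaluation of EDGE(x,y) -> PATH(x,y) and
    EDGE(x,y) & PATH(y,z) -> PATH(x,z).

    Each PATH fact is joined with the incoming EDGE facts of its source
    vertex, so total work is proportional to the number of prefix
    firings, i.e. O(e*n) for e edges over n vertices.
    """
    preds = defaultdict(set)          # index: y -> {x | EDGE(x, y)}
    for x, y in edges:
        preds[y].add(x)
    path = set()
    agenda = list(edges)              # rule 1: EDGE(x,y) -> PATH(x,y)
    while agenda:
        fact = agenda.pop()
        if fact in path:
            continue
        path.add(fact)
        y, z = fact
        # Rule 2: join the new PATH(y,z) with every EDGE(x,y).
        for x in preds[y]:
            if (x, z) not in path:
                agenda.append((x, z))
    return path
```

Every fact is taken off the agenda a constant number of times per derivation, matching the prefix-firing count of theorem 3.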
Fig. 1. The Cocke-Kasami-Younger (CKY) parsing algorithm. PARSES(u, i, j) means
that the substring from i to j parses as nonterminal u.
As a second example, consider the algorithm for context free parsing shown
in figure 1. The grammar is given in Chomsky normal form and consists of a set
of assertions of the form X → a and X → Y Z. The input string is represented as
a "lisp list" of the form CONS(a_1, CONS(a_2, ... CONS(a_n, NIL))) and the input
string is specified by an assertion of the form INPUT(s). Let g be the number
of productions in the grammar and let n be the length of the input string.
Theorem 3 immediately implies that this algorithm runs in O(gn^3) time. Note
that there is a rule with six variables - three string index variables and three
nonterminal variables.
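The inference rules of figure 1 are not reproduced above, but their bottom-up reading - derive PARSES(X, i, i+1) from a production X → a, and PARSES(X, i, j) from X → Y Z together with PARSES(Y, i, k) and PARSES(Z, k, j) - can be sketched directly. The encoding below is our own (string indices rather than the lisp-list input representation):

```python
def cky(grammar_unary, grammar_binary, s):
    """PARSES(u, i, j): s[i:j] derives nonterminal u.
    grammar_unary:  set of (X, a)   for productions X -> a
    grammar_binary: set of (X, Y, Z) for productions X -> Y Z"""
    n = len(s)
    parses = set()
    for i, a in enumerate(s):               # X -> a
        for X, t in grammar_unary:
            if t == a:
                parses.add((X, i, i + 1))
    for width in range(2, n + 1):           # X -> Y Z, shortest spans first
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for X, Y, Z in grammar_binary:
                    if (Y, i, k) in parses and (Z, k, j) in parses:
                        parses.add((X, i, j))
    return parses
```

The three nested index loops and the iteration over binary productions make the O(gn³) firing count of the text directly visible.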
We now give a proof of theorem 3. The proof is based on a source to source
transformation of the given program. We note that each of the following source to
source transformations on inference rules preserves the quantity |D| + |P_R(R(D))|
(as a function of D) up to a multiplicative constant. In the second transformation
note that there must be at least one element of D or P_R(R(D)) for each assertion
in R(D). Hence adding any rule with only a single antecedent and with a fresh
predicate in the conclusion at most doubles the value of |D| + |P_R(R(D))|. The
second transformation can then be done in two steps - first we add the new rule
and then replace the antecedent in the existing rule. A similar analysis holds for
the third transformation.
In the first transformation, the variables x₁, …, xₙ are all free variables in A₁ and A₂.
In the second, at least one of the tᵢ is a non-variable and x₁, …, xₙ are all the free variables
in the antecedent being replaced. The third transformation distinguishes
those variables among the xᵢ which are not among the yᵢ,
those variables that occur both among the xᵢ and the yᵢ, and
those variables among the yᵢ that are not among the xᵢ.
These transformations allow us to assume without loss of generality that the
only multiple antecedent rules are of the form P(x, y) ∧ Q(y, z) → R(x, z).
For each such multiple antecedent rule we create an index such that for each
y we can enumerate the values of x such that P(x, y) has been asserted and
also enumerate the values of z such that Q(y, z) has been asserted. When a
new assertion of the form P(x, y) or Q(y, z) is derived we can now iterate over
the possible values of the missing variable in time proportional to the number
of such values.
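The indexing scheme just described can be sketched for a single rule P(x, y) ∧ Q(y, z) → R(x, z): both relations are indexed on the shared variable y, so each conclusion is produced by a local lookup rather than a scan of the whole database (illustrative code, not from the paper):

```python
from collections import defaultdict

def join_on_middle(p_facts, q_facts):
    """Compute {R(x,z)} for the rule P(x,y) & Q(y,z) -> R(x,z)
    by indexing both relations on the shared variable y."""
    p_by_y, q_by_y = defaultdict(set), defaultdict(set)
    for x, y in p_facts:
        p_by_y[y].add(x)
    for y, z in q_facts:
        q_by_y[y].add(z)
    # Each (x, z) pair found here corresponds to one full firing;
    # the work done is proportional to the number of firings.
    return {(x, z) for y in p_by_y if y in q_by_y
                   for x in p_by_y[y] for z in q_by_y[y]}
```

In an incremental implementation, a newly asserted P(x, y) would consult only q_by_y[y], and a new Q(y, z) only p_by_y[y], exactly as described in the proof.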
5 Basic Examples
Figure 2 gives a simple first-order data flow analysis algorithm. The algorithm
takes as input a set of assignment statements of the form ASSIGN(x, e) where
x is a program variable and e is either a "constant expression" of the form
CONSTANT(n), a tuple expression of the form ⟨y, z⟩ where y and z are program
variables, or a projection expression of the form Π₁(y) or Π₂(y) where y is a
program variable. Consider a database D containing e assignment assertions
involving n program variables and pair expressions. Clearly the first rule (upper
left corner) has at most e firings. The transitivity rule has at most n³ firings.
The other two rules have at most en firings. Since e is O(n²), theorem 3 implies
that the algorithm given in figure 2 runs in O(n³) time.
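Figure 2 itself is not reproduced here; the following is one plausible reading of such first-order flow rules, in which constant and pair expressions flow into variables and projections extract the components of pairs that have reached a variable. The encoding, the 'var' copy case, and the naive fixpoint loop are our own simplifications, not the paper's indexed algorithm:

```python
from collections import defaultdict

def reaching_values(assigns):
    """Naive fixpoint over hypothetical flow rules.
    assigns: list of (x, e) for ASSIGN(x, e), where e is
    ('const', n), ('pair', y, z), ('proj', j, y) with j in {1, 2},
    or ('var', y) -- a plain copy, added here for illustration."""
    reach = defaultdict(set)   # variable -> abstract values reaching it
    changed = True
    while changed:
        changed = False

        def add(x, vals):
            nonlocal changed
            new = vals - reach[x]
            if new:
                reach[x] |= new
                changed = True

        for x, e in assigns:
            if e[0] in ('const', 'pair'):
                add(x, {e})               # the expression itself reaches x
            elif e[0] == 'var':
                add(x, set(reach[e[1]]))  # everything reaching y reaches x
            elif e[0] == 'proj':
                j, y = e[1], e[2]
                for v in set(reach[y]):
                    if v[0] == 'pair':    # v = ('pair', y1, y2)
                        add(x, set(reach[v[j]]))
    return reach
```

The repeated passes stand in for the transitivity rule; a production implementation would use the worklist-and-index technique of theorem 3 instead.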
It is possible to show that determining whether a given value can reach a
given variable, as defined by the rules in figure 2, is 2NPDA complete [11, 17].
2NPDA is the class of languages recognizable by a two-way nondeterministic
pushdown automaton. A language L will be called 2NPDA-hard if any problem
in 2NPDA can be reduced to L in n polylog n time. We say that a problem can be
solved in sub-cubic time if it can be solved in O(nᵏ) time for some k < 3. If a 2NPDA-
hard problem can be solved in sub-cubic time then all problems in 2NPDA can
be solved in sub-cubic time. The data flow problem is 2NPDA-complete in the
sense that it is in the class 2NPDA and is 2NPDA-hard.
Cubic time is impractical for many applications. If the problem is changed
slightly so as to require that the assignment statements are well typed using
types of a bounded size, then the problem of determining if a given value can
reach a given variable can be solved in linear time. This can be done with
sub-transitive data flow analysis [10]. In the first-order setting of the rules in figure 2
we use the types defined by the following grammar.
τ ::= ι | ⟨τ₁, τ₂⟩ (where ι ranges over atomic types)
Note that this grammar does not allow for recursive types. The linear time
analysis can be extended to handle list types and recursive types, but this gives
an analysis weaker than that of figure 2. For simplicity we will avoid recursive
types here. We now consider a database containing assignment statements such
as those described above but subject to the constraint that it must be possible
to assign every variable a type such that every assignment is well typed. For
example, if the database contains ASSIGN(x, ⟨y, z⟩) then x must have type ⟨τ, σ⟩
where τ and σ are the types of y and z respectively. Similarly, if the database
contains ASSIGN(y, Π₁(x)) then x must have a type of the form ⟨τ, σ⟩ where
y has type τ. Under these assumptions we can use the inference rules given in
figure 3.
Note that the rules in figure 3 are not syntactically local. The inference rule
at the lower right contains a term in the conclusion, namely Π_j(e₂), which is
not contained in any antecedent. This rule does introduce new terms. However,
it is not difficult to see that the rules maintain the invariant that for every
derived assertion of the form e₁ ⇝ e₂ we have that e₁ and e₂ have the same
Fig. 2. A data flow analysis algorithm. The rule involving \Pi j is an abbreviation for
two rules - one with \Pi 1 and one with \Pi 2 .
Fig. 3. Sub-transitive data flow analysis. A rule with multiple conclusions represents
multiple rules - one for each conclusion.
Fig. 4. Determining the existence of a path from a given source.
Fig. 5. A flow analysis algorithm for the λ-calculus with pairing. The rules are intended
to be applied to an initial database containing a single assertion of the form INPUT(e)
where e is a closed λ-calculus term which has been α-renamed so that distinct bound
variables have distinct names. Note that the rules are syntactically local - every
term in a conclusion appears in some antecedent. Hence all terms in derived assertions
are subterms of the input term. The rules compute a directed graph on the subterms
of the input.
type. This implies that every newly introduced term must be well typed. For
example, if the rules construct the expression Π₁(Π₂(x)) then x must have a
type of the form ⟨τ, ⟨σ, ρ⟩⟩. Since the type expressions are finite, there are only
finitely many such well typed terms. So the inference process must terminate.
In fact if no variable has a type involving more than b syntax nodes then the
inference process terminates in linear time. To see this it suffices to observe that
the rules maintain the invariant that every derived assertion e₁ ⇝ e₂ in which e₂ is
of the form Π_j(e) was
derived directly from an assignment using one of the rules on the left hand side
of the figure. If the type of x has only b syntax nodes then an input assignment of
the form ASSIGN(x, e) can lead to at most b derived ⇝ assertions. So if there are n
assignments in the input database then there are at most bn derived assertions
involving ⇝. It is now easy to check that each inference rule has at most bn
firings. So by theorem 3 we have that the algorithm runs in O(bn) time.
It is possible to show that these rules construct a directed graph whose transitive
closure includes the graph constructed by the rules in figure 2. So to determine
if a given source value flows to a given variable we need simply determine
if there is a path from the source to the variable. It is well known that one can
determine in linear time whether a path exists from a given source node to any
other node in a directed graph. However, we can also note that this computation
can be done with the algorithm shown in figure 4. The fact that the algorithm
in figure 4 runs in linear time is guaranteed by Theorem 3.
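The single-source reachability computation of figure 4 amounts to the rules SOURCE(s) → REACHES(s) and REACHES(x) ∧ EDGE(x, y) → REACHES(y), run with a worklist. A sketch (our encoding, not the figure's exact rules):

```python
from collections import defaultdict

def reachable(source, edges):
    """Worklist saturation of REACHES(source) and
    REACHES(x) & EDGE(x, y) -> REACHES(y)."""
    succ = defaultdict(set)
    for x, y in edges:
        succ[x].add(y)
    seen, stack = {source}, [source]
    while stack:
        x = stack.pop()
        for y in succ[x]:
            if y not in seen:   # each node is enqueued at most once
                seen.add(y)
                stack.append(y)
    return seen
```

Each edge is examined at most once from its source node, giving the linear-time bound guaranteed by theorem 3.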
As another example, figure 5 gives an algorithm for both control and data
flow in the λ-calculus extended with pairing and projection operations. These
rules implement a form of set based analysis [1, 9]. The rules can also be used
to determine if the given term is typable by recursive types with function,
pairing, and union types [16] using arguments similar to those relating control flow
analysis to partial types [14, 20]. A detailed discussion of the precise relationship
between the rules in figure 5, set based analysis, and recursive types is beyond
the scope of this paper. Here we are primarily concerned with the complexity
analysis of the algorithm. All rules other than the transitivity rule have at most
O(n²) prefix firings and the transitivity rule has at most n³ firings. Hence theorem 3
implies that the algorithm runs in O(n³) time.
It is possible to give a sub-transitive flow algorithm analogous to the rules
in figure 5 which runs in linear time under the assumption that the input
expression is well typed and that every type expression has bounded size [10].
However, the sub-transitive version of figure 5 is beyond the scope of this paper.
6 Algorithms Based on Union-Find
A variety of program analysis algorithms exploit equality. Perhaps the most
fundamental use of equality in program analysis is the use of unification in type
inference for simple types. Other examples include the nearly linear time flow
analysis algorithm of Bondorf and Jorgensen [3], the quadratic type inference
algorithm for an Abadi-Cardelli object calculus given by Henglein [12], and the
dramatic improvement in empirical performance due to equality reported by
Fahndrich et al. in [7]. Here we formulate a general approach to the incorporation
of union-find methods into algorithms defined by bottom-up inference rules. In
this section we give a general meta-complexity theorem for such union find rule
sets.
We let UNION, FIND, and MERGE be three distinguished binary predicate sym-
bols. The predicate UNION can appear in rule conclusions but not in rule an-
tecedents. The predicates FIND and MERGE can appear in rule antecedents but
not in rule conclusions. A bottom-up bound rule set satisfying these conventions
will be called a union-find rule set. Intuitively, an assertion of the form
UNION(u, w) in the conclusion of a rule means that u and w should be made
equivalent. An assertion of the form MERGE(u, w) means that at some point a
union operation was applied to u and w and, at the time of that union operation,
u and w were not equivalent. An assertion FIND(u, f) means that at some point
the find of u was the value f.
For any given database we define the merge graph to be the undirected graph
containing an edge between s and w if either MERGE(s, w) or MERGE(w, s) is in
the database. If there is a path from s to w in the merge graph then we say
that s and w are equivalent. We say that a database is union-find consistent if
for every term s whose equivalence class contains at least two members there
exists a unique term f such that for every term w in the equivalence class of s the
database contains FIND(w, f). This unique term is called the find of s. Note that
a database not containing any MERGE or FIND assertions is union-find consistent.
We now define the result of performing a union operation on the terms s and t in
a union-find consistent database. If s and t are already equivalent then the union
operation has no effect. If s and t are not equivalent then the union operation
adds the assertion MERGE(s, t) plus all assertions of the form FIND(w, f) where
w is equivalent to either s or t and f is the find of the larger equivalence class
if either equivalence class contains more than one member - otherwise f is the
term t. The fact that the find value is the second argument if both equivalence
classes are singleton is significant for the complexity analysis of the unification
and congruence-closure algorithms. Note that if either class contains more than
one member, and w is in the larger class, then the assertion FIND(w, f) does not
need to be added. With appropriate indexing the union operation can be run in
time proportional to the number of new assertions added, i.e., the size of the smaller
equivalence class. Also note that whenever the find value of a term changes the
size of the equivalence class of that term at least doubles. This implies that for a
given term s the number of terms f such that E contains FIND(s, f) is at most
log (base 2) of the size of the equivalence class of s.
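The size-based union operation and its logarithmic bound on FIND assertions can be checked concretely. The sketch below uses a data layout of our own devising: finds[s] logs every find value s has ever had, mirroring the accumulated FIND(s, f) assertions, and merging eight singletons pairwise gives each term at most log₂ 8 = 3 recorded find values:

```python
from collections import defaultdict

class UnionFind:
    """Union by size; finds[s] records every find value s has had."""
    def __init__(self):
        self.find_of = {}               # current find value, if any
        self.members = {}               # find value -> equivalence class
        self.finds = defaultdict(list)  # term -> history of find values

    def _class(self, s):
        return self.members[self.find_of[s]] if s in self.find_of else {s}

    def union(self, s, t):
        cs, ct = self._class(s), self._class(t)
        if cs == ct:
            return                      # already equivalent: no effect
        # The find of the larger class wins; t wins when both are singletons.
        big = cs if len(cs) > len(ct) else ct
        f = self.find_of.get(next(iter(big)), t)
        merged = cs | ct
        self.members[f] = merged
        for w in merged:
            if self.find_of.get(w) != f:   # the find value of w changes
                self.find_of[w] = f
                self.finds[w].append(f)
```

Every time finds[w] grows, w's equivalence class has at least doubled, which is exactly the doubling argument in the text.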
Of course in practice one should erase obsolete FIND assertions so that for any
term s there is at most one assertion of the form FIND(s; f). However, because
FIND assertions can generate conclusions before they are erased, the erasure
process does not improve the bound given in theorem 4 below. In fact, such
erasure makes the theorem more difficult to state. In order to allow for a relatively
simple meta-complexity theorem we do not erase obsolete FIND assertions.
We define a clean database to be one not containing MERGE or FIND
assertions. Given a union-find rule set R and a clean database D we say that a
database E is an R-closure of D if E can be derived from D by repeatedly applying
rules in R - including rules that result in union operations - and no
further application of a rule in R changes E. Unlike the case of traditional inference
rules, a union-find rule set can have many possible closures - the set of
derived assertions depends on the order in which the rules are used. For example,
if we derive the three union operations UNION(u, w), UNION(s, w), and UNION(u, s)
then the merge graph will contain only two arcs and the graph depends on the
order in which the union operations are done. If rules are used to derive other
assertions from the MERGE assertions then arbitrary relations can depend on the
order of inference. For most algorithms, however, the correctness analysis and
running time analysis can be done independently of the order in which the rules
are run. We now present a general meta-complexity theorem for union-find rule
sets.
Theorem 4. For any union-find rule set R there exists an algorithm mapping D
to an R-closure of D, denoted as R(D), that runs in time O(|D| + |P_R(R(D))| +
|F(R(D))|) where F(R(D)) is the set of FIND assertions in R(D).
The proof is essentially identical to the proof of theorem 3. The same
source-to-source transformation is applied to R to show that without loss of generality
we need only consider single antecedent rules plus rules of the form
P(x, y) ∧ Q(y, z) → R(x, z), where x, y, and z are variables and P, Q, and
R are predicates other than UNION, FIND, or MERGE. For all the rules that do not
have a UNION assertion in their conclusion the argument is the same as before.
Rules with union operations in the conclusion are handled using the union operation,
which has unit cost for each prefix firing leading to a redundant union
operation and where the cost of a non-redundant operation is proportional to
the number of new FIND assertions added.
7 Basic Union-Find Examples
Figure 6 gives a unification algorithm. The essence of the unification problem
is that if a pair ⟨s, t⟩ is unified with ⟨u, w⟩ then one must recursively unify s
with u and t with w. The rules guarantee that if ⟨s, t⟩ is equivalent to ⟨u, w⟩
then s and u are both equivalent to the term Π₁(f) where f is the common
find of the two pairs. Similarly, t and w must also be equivalent. So the rules
compute the appropriate equivalence relation for unification. However, the rules
do not detect clashes or occurs-check failures. This can be done by performing
appropriate linear-time computations on the final find map.
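The recursive effect of the rules - unifying two pairs forces unification of their components - can be sketched with an explicit union-find over terms. This is a conventional Huet-style presentation in Python, not the rule set of figure 6 itself, and it likewise omits clash and occurs-check detection:

```python
def unify(equations):
    """equations: list of (s, t) pairs of 'simple terms', encoded as
    ('var', v), ('const', c), or ('pair', s, t) tuples (hashable).
    Returns the find function; clashes and occurs-check failures
    would be detected by a separate pass over the result."""
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    work = list(equations)
    while work:
        s, t = work.pop()
        rs, rt = find(s), find(t)
        if rs == rt:
            continue
        if rt[0] == 'var' and rs[0] != 'var':
            rs, rt = rt, rs                # prefer a non-variable find value
        parent[rs] = rt
        if rs[0] == 'pair' and rt[0] == 'pair':
            # unifying <s1, s2> with <t1, t2> forces s1 = t1 and s2 = t2
            work.append((rs[1], rt[1]))
            work.append((rs[2], rt[2]))
    return find
```

As in the rule set, the algorithm only builds the equivalence relation; reading back a most general unifier, or rejecting on clash or cycle, is a post-processing step.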
To analyze the running time of the rules in figure 6 we first note that the rules
maintain the invariant that all find values are terms appearing in the input
problem. This implies that every union operation is either of the form UNION(s, w)
or UNION(Π_i(w), s) where s and w appear in the input problem. Let n be the number
of distinct terms appearing in the input. We now have that there are only
Fig. 6. A unification algorithm. The algorithm operates on "simple terms" defined
to be either a constant, a variable, or a pair of simple terms. The input database is
assumed to be a set of assertions of the form EQUATE(s, w) where s and w are simple
terms. The rules generate the appropriate equivalence relation for unification but do
not generate clashes or occurs-check failures (see the text). Because UNION(x, y) selects
y as the find value when both arguments have singleton equivalence classes, these rules
maintain the invariant that all find values are terms in the original input.
Fig. 7. A congruence closure algorithm. The input database is assumed to consist of
a set of assertions of the form EQUATE(s, w) and INPUT(s) where s and w are simple
terms (as defined in the caption for figure 6). As in figure 6, all find values are terms
in the original input.
Fig. 8. Type inference for simple types. The input database is assumed to consist of a
single assertion of the form INPUT(e) where e is a closed term of the pure λ-calculus and
where distinct bound variables have been α-renamed to have distinct names. As in the
case of the unification algorithm, these rules only construct the appropriate equivalence
relation on types. An occurs-check on the resulting equivalence relation must be done
elsewhere.
O(n) terms involved in the equivalence relation defined by the merge graph.
For a given term s the number of assertions of the form FIND(s, f) is at most
the log (base 2) of the size of the equivalence class of s. So we now have that
there are only O(n log n) FIND assertions in the closure. This implies that there
are only O(n log n) prefix firings. Theorem 4 now implies that the closure can
be computed in O(n log n) time. The best known unification algorithm runs in
linear time [21] and the best on-line unification algorithm runs in O(n α(n)) time
where α is the inverse of Ackermann's function. The application of theorem 4 to
the rules of figure 6 yields a slightly worse running time for what is, perhaps, a
simpler presentation.
Now we consider the congruence closure algorithm given in figure 7. First
we consider its correctness. The fundamental property of congruence closure is
that if s is equivalent to s′ and t is equivalent to t′ and the pairs ⟨s, t⟩ and ⟨s′, t′⟩
appear in the input, then ⟨s, t⟩ should be equivalent to ⟨s′, t′⟩.
This fundamental property is guaranteed by the lower right hand rule in figure 7.
This rule guarantees that if ⟨s, t⟩ and ⟨s′, t′⟩ occur in the input and s is
equivalent to s′ and t to t′ then both ⟨s, t⟩ and ⟨s′, t′⟩ are equivalent to ⟨f₁, f₂⟩
where f₁ is the common find of s and s′ and f₂ is the common find of t and t′.
So the algorithm computes the congruence closure equivalence relation.
To analyze the complexity of the rules in figure 7 we first note that, as in
the case of unification, the rules maintain the invariant that every find value is
an input term. Given this, one can see that all terms involved in the equivalence
relation are either input terms or pairs of input terms. This implies that there
are at most O(n²) terms involved in the equivalence relation where n is the
number of distinct terms in the input. So we have that for any given term s
the number of assertions of the form FIND(s, f) is O(log n). So the number of
firings of the congruence rule is O(n log² n). But this implies that the number of
terms involved in the equivalence relation is actually only O(n log² n). Since each
such term can appear in the left hand side of at most O(log n) FIND assertions,
there can be at most O(n log³ n) FIND assertions. Theorem 4 now implies that
the closure can be computed in O(n log³ n) time. It is possible to show that by
erasing obsolete FIND assertions the algorithm can be made to run in O(n log n)
time - the best known running time for congruence closure.
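The congruence rule can also be run to fixpoint naively; the quadratic sketch below re-checks every pair of input pair-terms after each merge, which demonstrates the computed relation but not the O(n log³ n) indexed bound (the encoding is ours):

```python
def congruence_closure(equations, pair_terms):
    """equations: list of (s, t) to equate; pair_terms: the pair terms
    ('pair', s, t) occurring in the input. Returns the find function."""
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    def union(s, t):
        rs, rt = find(s), find(t)
        if rs != rt:
            parent[rs] = rt

    for s, t in equations:
        union(s, t)
    changed = True
    while changed:                 # s ~ s', t ~ t'  =>  <s,t> ~ <s',t'>
        changed = False
        for p in pair_terms:
            for q in pair_terms:
                if (find(p) != find(q)
                        and find(p[1]) == find(q[1])
                        and find(p[2]) == find(q[2])):
                    union(p, q)
                    changed = True
    return find
```

Restricting the congruence check to pair terms that occur in the input reflects the invariant above that all merged terms are input terms or pairs of input terms.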
We leave it to the reader to verify that the inference rules in figure 8 define
the appropriate equivalence relation on the types of the program expressions and
that the types can be constructed in linear time from the find relation output
by the procedure. It is clear that the inference rules generate only O(n) union
operations and hence the closure can be computed in O(n log n) time.
8 Henglein's Quadratic Algorithm
We now consider Henglein's quadratic time algorithm for determining typability
in a variant of the Abadi-Cardelli object calculus [12]. This algorithm is
interesting because the first algorithm published for the problem was a classical
dynamic transitive closure algorithm requiring O(n³) time. Henglein's
presentation of the quadratic algorithm is given in classical pseudo-code
and is fairly complex.
Fig. 9. Henglein's type inference algorithm.
A simple union-find rule set for Henglein's algorithm is given in figure 9. First
we define type expressions with the grammar σ ::= α | [ℓ₁: σ₁, …, ℓₙ: σₙ]
where α represents a type variable and the ℓᵢ are slot names. Intuitively, an object
o has type [ℓ₁: σ₁, …, ℓₙ: σₙ] if o provides a slot (or field) for each slot name ℓᵢ, and
for each such slot name we have that the slot value o.ℓᵢ of o
has type σᵢ. The algorithm takes as input a set of assertions (type constraints)
of the form σ₁ ≤ σ₂ where σ₁ and σ₂ are type expressions. We take [] to be
the type of the object with no slots. Note that, given this "null type" as a base
type, there are infinitely many closed type expressions, i.e., type expressions
not containing variables. The algorithm is to decide whether there exists an
interpretation γ mapping each type variable to a closed type expression such that
for each constraint σ₁ ≤ σ₂ we have that γ(σ₁) is a subtype of γ(σ₂). The subtype
relation is taken to be "invariant", i.e., a closed type [ℓ₁: σ₁, …, ℓₙ: σₙ] is a
subtype of a closed type [m₁: τ₁, …, mₖ: τₖ] if every mⱼ is equal to some ℓᵢ
with σᵢ equal to τⱼ.
The rules in figure 9 assume that the input has been preprocessed so that
for each type expression [ℓ₁: σ₁, …, ℓₙ: σₙ] appearing in the input (either at
the top level or as a subexpression of a top level type expression) the database
also includes all assertions of the form ACCEPTS([ℓ₁: σ₁, …, ℓₙ: σₙ], ℓᵢ)
for 1 ≤ i ≤ n. Note that this pre-processing
can be done in linear time. The invariance property of the subtype
relation justifies the final rule (lower right) in figure 9. A system of constraints is
rejected if the equivalence relation forces a type to be a subexpression of itself,
i.e., an occurs-check on type expressions fails, or the final database requires a slot
ℓ of some type τ but does not contain ACCEPTS(τ, ℓ).
To analyze the complexity of the algorithm in figure 9 note that all terms
involved in the equivalence relation are type expressions appearing in the processed
input - each such expression is either a type expression of the original
unprocessed input or of the form σ.ℓ where σ is in the original input and ℓ is a
slot name appearing at the top level of σ. Let n be the number of assertions in the
processed input. Note that the preprocessing guarantees that there is at least
one input assertion for each type expression so the number of type expressions
appearing in the input is also O(n). Since there are O(n) terms involved in the
equivalence relation the rules can generate at most O(n) MERGE assertions. This
implies that the rules generate only O(n) assertions of the form σ ⇒ τ. This
implies that the number of prefix firings is O(n²). Since there are O(n) terms
involved in the equivalence relation there are O(n log n) FIND assertions in the
closure. Theorem 4 now implies that the running time is O(n² + n log n), i.e., O(n²).
9 Conclusions
This paper has argued that many algorithms have natural presentations as
bottom-up logic programs and that such presentations are clearer and simpler to
analyze, both for correctness and for complexity, than classical pseudo-code pre-
sentations. A variety of examples have been given and analyzed. These examples
suggest a variety of directions for further work.
In the case of unification and Henglein's algorithm, final checks were performed
by a post-processing pass. It is possible to extend the logic programming
language in ways that allow more algorithms to be fully expressed as rules. Stratified
negation by failure would allow a natural way of inferring NOT(ACCEPTS(σ, ℓ))
in Henglein's algorithm while preserving the truth of theorems 3 and 4. This
would allow the acceptability check to be done with rules. A simple extension
of the union-find formalism would allow the detection of an equivalence between
distinct "constants" and hence allow the rules for unification to detect clashes.
It might also be possible to extend the language to improve the running time for
cycle detection and strongly connected component analysis for directed graphs.
Another direction for further work involves aggregation. It would be nice
to have language features and meta-complexity theorems allowing natural and
efficient renderings of Dijkstra's shortest path algorithm and the inside algorithm
for computing the probability of a given string in a probabilistic context free
grammar.
--R
Soft typing with conditional types.
Automated complexity analysis based on ordered resolution.
Efficient analysis for realistic off-line partial evaluation.
Logic programming schemes and their implementations.
Linear-time algorithms for testing the satisfiability of propositional Horn formulae.
Efficient parsing for bilexical context-free grammars and head automaton grammars.
Partial online cycle elimination in inclusion constraint graphs.
New results on local inference relations.
Set-based analysis of ML programs.
Linear time subtransitive control flow analysis.
On the cubic bottleneck in subtyping and flow analysis.
Breaking through the n³ barrier: Faster object type inference.
Predicate logic as a programming language.
Efficient inference of partial types.
Automatic recognition of tractability in inference relations.
Inferring recursive types.
Interconvertibility of set constraints and context free language reachability.
Efficient inference of object types.
A type system equivalent to flow analysis.
Linear unification.
Complexity of relational query languages.
Mark-Jan Nederhof , Giorgio Satta, The language intersection problem for non-recursive context-free grammars, Information and Computation, v.192 n.2, p.172-184, August 1, 2004 | models of computation;algorithms;program analysis;logic programming;programming languages;complexity analysis |
581775 | Formal verification of standards for distance vector routing protocols. | We show how to use an interactive theorem prover, HOL, together with a model checker, SPIN, to prove key properties of distance vector routing protocols. We do three case studies: correctness of the RIP standard, a sharp real-time bound on RIP stability, and preservation of loop-freedom in AODV, a distance vector protocol for wireless networks. We develop verification techniques suited to routing protocols generally. These case studies show significant benefits from automated support in reduced verification workload and assistance in finding new insights and gaps for standard specifications. | Introduction
The aim of this paper is to study how methods of automated reasoning can
be used to prove properties of network routing protocols. We carry out three
case studies based on distance vector routing. In each such study we provide
a proof that is automated and formal in the sense that a computer aided the
construction and checking of the proof using formal mathematical logic. We
are able to show that automated verification of key properties is feasible based
on the IETF standard or draft specifications, and that efforts to achieve automated
proofs can aid the discovery of useful properties and direct attention
to potentially troublesome boundary cases. Automated proofs can also supplement
other means of assurance like manual mathematical proofs and testing by
easing the workload of tedious checking of cases and providing more thorough
analyses of certain kinds of conditions.
1.1 The Case Studies
The first case study proves the correctness of the asynchronous distributed
Bellman-Ford protocol as specified in the IETF RIP standard ([7, 12]). The
classic proof of a 'pure' form of the protocol is given in [2]. Our result covers
additional features included in the standard to improve realtime response times
(e.g. split horizons and poison reverse). These features add additional cases
to be considered in the proof, but the automated support reduces the impact
of this complexity. Adding these extensions makes the theory better match the
standard, and hence also its implementations. Our proof also uses a different
technique from the one in [2], providing some noteworthy properties about
network stability.
Our second case study provides a sharp realtime convergence bound on RIP
in terms of the radius of the network around its nodes. In the worst case, the
Bellman-Ford protocol has a convergence time as bad as the number of nodes
in the network. However, if the maximum number of hops any source needs
to traverse to reach a destination is k (the radius around the destination) and
there are no link changes, then RIP will converge in k timeout intervals for this
destination. It is easy to see that convergence occurs within
but the proof of the sharp bound of k is complicated by the number of cases
that need to be checked: we show how to use automated support to do this
verification, based on the approach developed in the previous case study. Thus,
if a network has a maximum radius of 5 for each of its destinations, then it will
converge in at most 5 intervals, even if the network has 100 nodes. Assuming
the timing intervals in the RIP standard, such a network will converge within 15
minutes if there are no link changes. We did not find a statement of this result
in the literature, but it may be folklore knowledge. Our main point is to show
how automated support can cover realtime properties of routing protocols.
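The sharp bound can be illustrated by a synchronous, idealized simulation of distance-vector rounds. This ignores split horizon, timers, and asynchrony, so it is only a sanity check of the k-interval claim, not a model of the RIP standard:

```python
def bellman_ford_rounds(links, dest):
    """Synchronous distance-vector iteration toward one destination.
    Returns the number of rounds until no estimate changes; the claim
    checked here is that this is at most the radius around dest
    (the maximum hop count from any node)."""
    INF = float('inf')
    nodes = {n for link in links for n in link}
    dist = {n: (0 if n == dest else INF) for n in nodes}
    rounds = 0
    while True:
        new = dict(dist)
        for a, b in links:            # each link carries updates both ways
            new[a] = min(new[a], dist[b] + 1)
            new[b] = min(new[b], dist[a] + 1)
        if new == dist:
            return rounds
        dist, rounds = new, rounds + 1
```

On a 5-node chain (radius 4 from the end node) this stabilizes in 4 rounds; on a star (radius 1) in a single round, regardless of the number of spokes.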
Our third case study is intended to explore how automated support can
assist new protocol development efforts. We consider a distance vector routing
protocol arising from work at MANET, the IETF work group for mobile ad
hoc networks. The specific choice is the Ad-Hoc On-Demand Distance Vector
(AODV) protocol of Perkins and Royer [18], as specified in the second version of
the IETF Internet Draft [17]. This protocol uses sequence numbers to protect
against the formation of loops, a widely noted shortcoming of RIP. A proof that
loops cannot form is given in [18]. We show how to derive this property from a
general invariant for the paths formed by AODV, essentially recasting the proof
by contradiction in [18] as a positive result, and use this invariant to analyze
some conditions concerning failures that are not fully specified in [17] but could
affect preservation of the key invariant if not treated properly. Our primary
conclusion is that the automated verification tools can aid analysis of emerging
protocol specifications on acceptable scales of effort and 'time-to-market'.
1.2 Verification of Networking Standards
Automated logical reasoning about computer systems, widely known as formal
methods, has been successful in a number of domains. Proving properties of
computer instruction sets is perhaps the most established application and many
of the major hardware vendors have programs to do modeling and verification
of their systems using formal methods. Another area of success is safety critical
devices. For instance, [6] studies invariants of a weapons control panel for
submarines modeled from the contractor design documents. The study led to
a good simulator for the panel and located some serious safety violations. The
application of formal methods to software has been a slower process, but there
has been noteworthy success with avionic systems, air traffic control systems,
and others. One key impediment in applying formal methods to non-safety-
critical systems concerns the existence of a specification of the software system:
it is necessary to know what the software is intended to satisfy before a verification
is possible. For many software systems, no technical specification exists, so
the verification of documented properties means checking invariants from inline
code comments or examples from user manuals.
An exception to this lack of documentation is software in the telecommunications
area, where researchers have a penchant for detailed technical specifica-
tions. RIP offers a case study in motivation. Early implementations of distance
vector routing were incompatible, so all of the routers running RIP in a domain
needed to use the same implementation. Users and implementors were led to
correct this problem by providing a specification that would define precise protocols
and packet formats. We find below that the resulting standard ([7, 12])
is precise enough to support, without significant supplementation, a very detailed
proof of correctness in terms of invariants referenced in the specification.
The proved properties are guaranteed to hold of any conformant implementation
and of any network of conformant routers. RIP is perhaps better than the
average in this respect, since (1) the standard seeks to bind itself closely to its
underlying theory, (2) distance vector routing is simpler than some alternative
routing approaches, and (3) at this stage, RIP is a highly seasoned standard
whose shortcomings have been identified through substantial experience. This
is not to say that RIP was already verified by its referenced theory. There are
substantial gaps between ([7, 12]) and the asynchronous distributed protocol
proved correct in [2]: the algorithm is different in several non-trivial ways, the
model is different, and the state maintained is different. Our analysis narrows
this gap and extends the results of the theory as applied to the standard version
of the protocol.
It is natural to expect that newer protocols, possibly specified in a sequence
of draft standards, will have more gaps and will be more likely to evolve. Useful
application of formal methods to such projects must 'track' this instability,
locating errors or gaps quickly and leveraging other activities like revision of
the draft standard and the development of simulations and implementations.
To test this agility for our tools and methods we have extended our analysis
of RIP to newer applications of distance vector routing in the emerging area
of mobile ad hoc networks. Ad hoc networks are networks formed from mobile
computers without the use of a centralized authority. A variety of protocols are
under development for such networks [21], including many based on distance
vector routing ([16, 3, 15, 18]). Requirements for a routing protocol for ad hoc
networks are quite different from those of other kinds of networks because of
considerations like highly variable connectivity and low bandwidth links. Given
the rapid rate of evolution in this area and the sheer number of new ideas, it
seems like an appropriate area as a test case for formal methods as part of a
protocol design effort.
1.3 Verification Attributes of Routing Protocols
There have been a variety of successful studies of communication protocols. For
instance, [13] provides a proof of some key properties of SSL 3.0 handshake
protocol [4]. However, most of the studies to date have focused on endpoint
protocols like SSL using models that involve two or three processes (representing
the endpoints and an adversary, for instance). Studies of routing protocols must
have a different flavor since a proof that works for two or three routers is not
interesting unless it can be generalized. Routing protocols generally have the
following attributes which influence the way formal verification techniques can
be applied:
1. An (essentially) unbounded number of replicated, simple processes execute
concurrently.
2. Dynamic connectivity is assumed and fault tolerance is required.
3. Processes are reactive systems with a discrete interface of modest complexity.
4. Real time is important and many actions are carried out with some timeout
limit or in response to a timeout.
Most routing protocols have other attributes such as latencies of information
flow (limiting, for example, the feasibility of a global concept of time) and
the need to protect network resources. These attributes sometimes make the
protocols more complex. For instance, the asynchronous version of the Bellman-Ford
protocol is much harder to prove correct than the synchronous version [2],
and the RIP standard is still harder to prove correct because of the addition of
complicating optimizations intended to reduce latencies.
In this paper we verify protocols using tools that are very general (HOL)
or tuned for the verification of communication protocols (SPIN). The tools will
be described in Section 2, and the rest of the paper focuses on applications
in the case studies. A key technique is replica generalization. This technique
consists of considering first a router r connected by interfaces to a network N
that satisfies a property A. We then prove that r satisfies a property B. In the
next step we attempt to prove a property C of r by assuming that properties A
and B′ hold of N, where B′ is the property that all of the routers in N satisfy
B. A proof is organized as a sequence of replica generalizations. Using this
technique and others, we provide a proof of the correctness of RIP in Section 3,
proof of a sharp realtime bound on convergence of RIP in Section 4, and proof of
path invariants for AODV in Section 5. We offer some conclusions and statistics
in the final section.
2 Approaches to Formal Verification
Computer protocols have long been the chosen targets of verification and validation
efforts. This is primarily because protocol design often introduces subtle
bugs which remain hidden in all but a few runs of the protocol. These bugs can
nevertheless prove fatal for the system if they occur. In this section, we discuss
the complexities involved in verifying network protocols and propose automated
tool support for this task. As an example, we consider a simple protocol for
leader-election in a network. A variant of this protocol is used for discovering
spanning trees in an extended LAN ([19, 20]).
The network consists of n connected nodes. Each node has a unique integer
id. The node with the least id is called the leader. The aim of the protocol
is for every node to discover the id of the leader. To accomplish this, each
node maintains a leader-id: its own estimate of who the leader is, based on
the information it has so far. Initially, the node believes itself to be the leader.
Every p seconds, each node sends an advertisement containing its leader-id to all
its neighbors. On receiving such an advertisement, a node updates its leader-id
if it has received a lower id in the message.
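The update rule above can be sketched as executable code. The following Python simulation (the ring topology, node ids, and round count are our own illustrative choices, not part of the protocol) models each p-second period as one round of advertisements:

```python
import random

def elect_leader(adjacency, node_ids, rounds):
    """Simulate the advertisement protocol: each round, every node sends
    its current leader-id estimate to all neighbors, and a receiver
    keeps the smaller id.  One iteration = one p-second period."""
    leader_id = dict(node_ids)  # initially each node believes it is the leader
    for _ in range(rounds):
        # snapshot this round's advertisements before applying them,
        # modeling nodes that all send before processing receipts
        adverts = list(leader_id.items())
        random.shuffle(adverts)  # delivery order is arbitrary
        for sender, estimate in adverts:
            for neighbor in adjacency[sender]:
                if estimate < leader_id[neighbor]:
                    leader_id[neighbor] = estimate
    return leader_id

# A 4-node ring with arbitrary distinct ids; 4 rounds suffice here.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
ids = {0: 7, 1: 3, 2: 9, 3: 5}
final = elect_leader(ring, ids, rounds=4)
print(final)  # every node's estimate ends at min(ids.values()) == 3
```

Because the minimum id can only spread and never be displaced, the simulation converges regardless of the shuffled delivery order.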
The above protocol involves n processes that react to incoming messages.
The state of the system consists of the (integer) leader-ids at each process; the
only events that can occur are message transmissions initiated by the processes
themselves. However, due to the asynchronous nature of the processes, the
message transmissions could occur in any order. This means that in any period
of p seconds, there could be more than n! possible sequences of events that the
system needs to react to. It is easy to see that manual enumeration of the
protocol event or state sequences becomes impossible as n is increased. For
more complex protocols, manually tracing the path of the protocol for even a
single sample trace becomes tedious and error-prone. Automated support for
this kind of analysis is clearly required.
A well-known design tool for protocol analysis is simulation. However, to
simulate the election protocol, we would first have to fix the network size and
topology, then specify the length of the simulation. Finally, we can run the
protocol and look at its trace for a given initial state and a single sequence of
events. This simulation process, although informative, does not provide a complete
verification. A verification should provide guarantees about the behavior
of the protocol on all networks, over all lengths of time, under all possible initial
states and for every sequence of events that can occur.
We discuss two automated tools that can help provide these guarantees.
First, we describe the model-checker SPIN, which can be used to simulate and
possibly verify the protocol for a given network (and initial state). We then
describe the interactive theorem-prover HOL, which, with more manual effort,
can be used to verify general mathematical properties of the protocol in an
arbitrary network.
2.1 Model Checking Using SPIN
The SPIN model-checking system ([9, 10]) has been widely used to verify communication
protocols. The SPIN system has three main components: (1) the
Promela protocol specification language, (2) a protocol simulator that can perform
random and guided simulations, and (3) a model-checker that performs an
exhaustive state-space search to verify that a property holds under all possible
simulations of the system.
To verify the leader-election protocol using SPIN, we first model the protocol
in Promela. A Promela model consists of processes that communicate
by message-passing along buffered channels. Processes can modify local and
global state as a result of an event. The Promela process modeling the leader-
election protocol at a single node is as given in Table 1. We then hard-code a
Table
1: Leader Election in Promela
#define NODES 3
#define BUF_SIZE 1
chan link[NODES] = [BUF_SIZE] of { int, int };
chan broadcast = [0] of { int, int };
int leader_id[NODES];
proctype Node (int me; int myid) {
  int advert;
  leader_id[me] = myid;
  do
  :: link[me] ? _, advert ->
       if
       :: advert < leader_id[me] -> leader_id[me] = advert
       :: else -> skip
       fi
  :: true -> broadcast ! me, leader_id[me]
  od
}
network into the broadcast mechanism and simulate the protocol using SPIN.
SPIN simulates the behavior of the protocol over a random sequence of events.
Viewing the values of the leader-ids over the period of the simulation provides
valuable debugging information as well as intuitions about possible invariants
of the system.
Finally, we use the SPIN verifier to prove that the election protocol succeeds
in a 3-node network. This involves specifying the correctness property in Linear
Temporal Logic (LTL). In our case, the specification simply insists that the
leader-id at each node eventually stabilizes at the correct id. The verifier then
carries out an exhaustive search to ensure that the property is true for every
possible simulation of the system. If it fails for any allowed event sequence,
the verifier indicates the failure along with the counter-example, which can be
subsequently re-simulated to discover a possible bug.
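To make the flavor of such an exhaustive check concrete, the following Python sketch (a crude, illustrative stand-in for SPIN, with a hard-coded fully connected 3-node network; it is not how SPIN itself works internally) enumerates every delivery order of the pairwise advertisements and confirms that each one drives all leader-ids to the correct value:

```python
from itertools import permutations

def stabilizes(ids):
    """Exhaustively check, over every ordering of the pairwise
    advertisement deliveries, that a fully connected network ends with
    every leader-id equal to the minimum id.  A brute-force stand-in
    for SPIN's exhaustive state-space search."""
    nodes = list(ids)
    correct = min(ids.values())
    deliveries = [(s, r) for s in nodes for r in nodes if s != r]
    for order in permutations(deliveries):
        leader = dict(ids)
        for s, r in order:
            # r hears s's current estimate and keeps the smaller id
            if leader[s] < leader[r]:
                leader[r] = leader[s]
        if any(v != correct for v in leader.values()):
            return False  # a counter-example schedule was found
    return True

print(stabilizes({0: 12, 1: 4, 2: 8}))  # True: all 720 schedules converge
```

The check passes because every node eventually hears the minimum-id node directly; a real model checker explores the same space of interleavings, but symbolically and with counter-example traces.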
2.2 Interactive Theorem Proving Using HOL
The HOL Theorem Proving System ([5, 8]) is a widely used general-purpose
verification environment. The main components of the HOL system are (1)
a functional programming language used for specifying functions, (2) Higher-Order
Logic used to specify properties about functions, and (3) a proof assistant
that allows the user to construct proofs of such properties by using inbuilt and
user-defined proof techniques. Both the programming model and the proof
environment are very general, capable of proving any mathematical theorem.
Designing the proof strategy is the user's responsibility.
In order to model the leader-election protocol in HOL, we need to model
processes and message-passing in a functional framework. We take our cue from
the reactive nature of the protocol. The input to the protocol is a potentially
infinite sequence of messages. The processes can then be considered as functions
that take a message as input and describe how the system state is modified. The
resulting model is essentially the function in Table 2. Note that the generality
Table
2: State Update Function
Update state (sender, mesg, receiver) node =
  if node = receiver then
    (if mesg < state(receiver) then mesg else state(receiver))
  else state(node)
of the programming platform allows us to define the protocol for an arbitrary
network in a uniform way.
We then specify the property that we desire from the protocol as a theorem
that we wish to prove in HOL.
Theorem 1 Eventually, every node's leader-id is the minimum of all the node
ids in the network.
In order to prove this property, we specify some lemmas that must be true of
the protocol as well, all of which can be easily encoded in Higher-Order Logic.
Lemma 2 At each node, the leader-id can only decrease over time.
Lemma 3 If the state of the network is unchanged by a message from node n 1
to node n 2 as well as a message from n 2 to n 1 , the leader-ids at n 1 and n 2 must
be the same.
Lemma 4 Once a node's leader-id becomes correct, it stays correct.
Finally, we construct a proof of the desired theorem. The proof assistant organizes
the proof and ensures that the proofs are complete and bug-free. We first
prove the lemmas by case analysis on the states and the possible messages at
each point in time. Then, Lemmas 2 and 3 are used to prove that the state of the
network must 'progress' until all the nodes have the same leader-id. Moreover,
since the leader node's leader-id never changes (Lemma 4), all nodes must end
up with the correct leader-id. These proofs are carried out in a simple deductive
style managed by the proof assistant.
The above proof is just one of many different proofs that could be developed
in the HOL system. For example, if instead of correctness, we were interested
in proving how long the protocol takes to elect a leader, we could prove the
following lemma. Recall that p is the interval for advertisements.
Lemma 5 If all nodes within a distance k of the leader become correct after
t seconds, then all nodes within a distance k + 1 must become correct within
t + p seconds.
In conjunction with Lemma 4, this provides a more informative inductive proof
of the Theorem.
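The inductive timing argument can be sanity-checked by simulation. The following Python sketch (the line topology and ids are our own illustrative assumptions) runs synchronous advertisement rounds, one per period p, and checks that after round k every node within distance k of the leader already holds the correct id:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src by breadth-first search."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def timing_bound_holds(adj, ids):
    """Check that after round k (one round per advertisement period p)
    every node within distance k of the leader holds the correct id."""
    leader = min(ids, key=ids.get)
    dist = bfs_dist(adj, leader)
    estimate = dict(ids)
    for k in range(1, max(dist.values()) + 1):
        adverts = dict(estimate)        # all nodes send simultaneously
        for n in adj:
            for m in adj[n]:
                estimate[m] = min(estimate[m], adverts[n])
        if any(estimate[n] != ids[leader]
               for n, d in dist.items() if d <= k):
            return False
    return True

line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(timing_bound_holds(line, {0: 2, 1: 9, 2: 7, 3: 5}))  # True
```

A simulation like this checks only one topology, of course; the point of the HOL proof is that the bound holds for all of them.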
2.3 Model Checking Vs Interactive Theorem Proving
We have described how two systems can address a common protocol verification
problem. The two systems clearly have different pay-offs. SPIN offers
comprehensive infrastructure for easily modeling and simulating communication
protocols and has fixed verification strategies for that domain. On the
other hand, HOL offers a more powerful mathematical infrastructure, allowing
the user to develop more general proofs. SPIN verifications are generally bound
by memory and expressiveness. HOL verifications are bound by man-months.
Our technique is to code the protocol first in SPIN and use HOL to address
limits in the expressiveness of SPIN. This is achieved by using HOL to prove
abstractions, showing properties like: if property P holds for two routers, then
it will hold for arbitrarily many routers. Or: advertisements of distances can
be assumed to be equal to k or k + 1. Also, abstraction proofs in HOL were
used to reduce the memory demands of SPIN proofs and assure that the SPIN
implementation properly reflected the standard. We give examples of these
tradeoffs in the case studies and summarize with some statistical data in the
conclusions.
3 Stability of RIP
We will assume that the reader is already familiar with the RIP protocol. Its
specification is given in ([7, 12]) and a good exposition can be found in [11].
3.1 Formal Terminology
We model the universe U as a bipartite connected graph whose nodes are partitioned
into networks and routers, such that each router is connected to at least
two networks. The goal of the protocol is to compute a table at each router
providing, for each network n, the length of the shortest path to n and the next
hop along one such path. The protocol is viewed as inappropriate for networks
that have more than 15 hops between a router and a destination network, so
the hop count is limited to a maximum of 16. A network that is marked as 16
hops away is considered to be unreachable.
Our proof shows that, for each destination d, the routers will all eventually
obtain a correct shortest path to d. An entry for d at a router r consists of three
parameters:
• metric(r): the current estimate of the distance metric to d (an integer
between 1 and 16 inclusive).
• nextN(r): the next network on the route to d.
• nextR(r): the next router on the route to d.
Both r and nextR(r) must be connected to nextN(r). We say that r points
to nextR(r). Initially, routers connected to d must have their metric set to 1,
while others must have it set to values strictly greater than 1. Two routers
are neighbors if they are connected to the same network. The universe changes
its state (i.e. routing tables) as a reaction to update messages being sent between
neighboring routers. Each update message can be represented as a triple
(snd, net, rcv), meaning that the router snd sends its current distance estimate
through the network net to the router rcv. In some cases this will cause the
receiving router to update its own routing entry. An infinite sequence of such
messages (snd_i, net_i, rcv_i) is said to be fair if every pair of neighboring routers
s and r exchanges messages infinitely often.
This property simply assures that each router will communicate its routing
information to all of its neighbors. Distance to d is defined as dist(r) = 1 if
r is connected to d, and dist(r) = 1 + min{ dist(s) | s is a neighbor of r }
otherwise. For k ≥ 1, the k-circle around d is the set of routers
C_k = { r | dist(r) ≤ k }.
For 1 ≤ k ≤ 15, we say that the universe is k-stable if the following properties
(S1) and (S2) both hold:
(S1) Every router r ∈ C_k has its metric set to the actual distance, that is,
metric(r) = dist(r), and, if r is not connected to d, it has its next
network and next router set to the first network and router on some shortest
path to d.
(S2) For every router r ∉ C_k, metric(r) > k.
Given a k-stable universe, we say that a router r at distance k + 1 from
d is (k + 1)-stable if it has an optimal route, that is, metric(r) = k + 1 and
nextR(r) ∈ C_k.
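The distance and k-circle definitions are easily made concrete. The following Python sketch (the function names and the toy three-router universe are our own) computes dist(r) by breadth-first search over the bipartite universe and collects C_k:

```python
from collections import deque

def router_distances(routers, networks, dest):
    """dist(r): 1 if router r is attached to the destination network
    dest, else 1 + the minimum over neighboring routers, computed as a
    BFS on the bipartite universe of routers and networks."""
    dist, frontier = {}, deque()
    for r, nets in routers.items():
        if dest in nets:
            dist[r] = 1
            frontier.append(r)
    while frontier:
        r = frontier.popleft()
        for net in routers[r]:
            for s in networks[net]:
                if s not in dist:
                    dist[s] = dist[r] + 1
                    frontier.append(s)
    return dist

def k_circle(dist, k):
    """C_k: the routers whose distance to the destination is at most k."""
    return {r for r, d in dist.items() if d <= k}

# Toy universe: a chain of routers r1, r2, r3 behind destination network d.
routers = {"r1": {"d", "n1"}, "r2": {"n1", "n2"}, "r3": {"n2", "n3"}}
networks = {"d": {"r1"}, "n1": {"r1", "r2"},
            "n2": {"r2", "r3"}, "n3": {"r3"}}
dist = router_distances(routers, networks, "d")
print(dist)                               # {'r1': 1, 'r2': 2, 'r3': 3}
print(k_circle(dist, 2) == {"r1", "r2"})  # True
```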
3.2 Proof results
Our first goal is to show that RIP indeed eventually discovers all the shortest
paths of length less than 16:
Theorem 6 (Correctness of RIP) For any k < 16, starting from an arbitrary
state of the universe U, for any fair sequence of update messages, there is
a time t_k such that U is k-stable at all times t ≥ t_k.
In particular, 15-stability will be achieved, which is our original goal. Notice
that the result applies to an arbitrary initial state. This is critical for the fault
tolerance aspect of RIP, since it assures convergence even in the presence of
topology changes. As long as the changes are not too frequent, we can apply
the theorem to the periods in between them.
Our proof, which we call the radius proof, differs from the one described
in [2] for the asynchronous Bellman-Ford algorithm. Rather than inducting on
estimates for upper and lower bounds for distances, we induct on the radius
of the k-stable region around d. The proof has two attributes of interest:
1. It states a property about the RIP protocol, rather than the asynchronous
distributed Bellman-Ford algorithm. Closer analysis reveals subtle but
substantial differences between the two. In the case of Bellman-Ford, routers
keep all of their neighbors' most recently advertised metric estimates,
whereas RIP keeps only the best value. Furthermore, the Bellman-Ford
metric ranges over the set of all positive integers, while the RIP metric
saturates at 16, which is regarded as infinity. Finally, RIP includes certain
engineering optimizations, such as split horizon with poison reverse, that
do not exist in the Bellman-Ford algorithm.
2. The radius proof is more informative. It shows that correctness is achieved
quickly close to the destination, and more slowly further away. We exploit
this in the next section to show a realtime bound on convergence.
Theorem 6 is proved by induction on k. There are four parts to it:
Lemma 7 The universe U is initially 1-stable.
Lemma 8 (Preservation of stability) For any k < 16, if the universe is
k-stable at some time t, then it is k-stable at any time t′ ≥ t.
Lemma 9 For any k < 15 and router r such that dist(r) = k + 1, if the universe
is k-stable at some time t_k, then there is a time t_{r,k} ≥ t_k such that r is (k + 1)-
stable at all times t ≥ t_{r,k}.
Lemma 10 For any k < 15, if the universe U is k-stable at some
time t_k, then there is a time t_{k+1} ≥ t_k such that U is (k + 1)-stable at all times
t ≥ t_{k+1}.
Lemma 7 is easily proved by HOL and serves as the basis of the overall
induction. Lemma 8 is the fundamental safety property, which is also proved
in HOL. Parts that can be proved in either tool are typically done in SPIN,
since it provides more automation. Lemma 9, the main progress property in
the proof, is proved with SPIN, but SPIN can be used only for verification
of models for which there is a constant upper bound on the number of states
whereas the model from Lemma 9 can, in principle, have an arbitrarily large
state space. This problem is solved by finding a finitary property-preserving
abstraction of the system and checking the desired property of the abstracted
system using SPIN. Proof that the abstraction is indeed property-preserving
is done in HOL. The proof as a whole illustrates well how the two systems
can prove properties as a team. Interestingly enough, this argument uses the
previously-proved lemma about preservation of stability. This is an instance of
replica generalization, where proving one invariant allowed us to further simplify
(i.e. abstract) the system; this, in turn, facilitated the derivation of yet another
property. Lemma 10 is the inductive step, which is derived in HOL as an easy
generalization of Lemma 9, considering the fact that the number of routers is
finite.
4 Timing Bounds for RIP Stability
In the previous section we proved convergence for RIP conditioned on the fact
that the topology stays unchanged for some period of time. We now calculate
how big that period of time must be. To do this, we need to have some knowledge
about the times at which protocol events must occur. In the case of RIP, we
use the following:
Fundamental Timing Assumption There is a value \Delta, such that during
every topology-stable time interval of the length \Delta, each router gets at
least one update message from each of its neighbors.
This is the only assumption we make about timing of update messages. RIP
routers normally try to exchange messages every 30 seconds; a failure to receive
an update within 180 seconds is treated as a link failure. Thus Δ = 180 seconds
satisfies the Fundamental Timing Assumption for RIP.
As in the previous section, we will concentrate on a particular destination
network d. Our timing analysis is based on the notion of weak k-stability. For
2 ≤ k ≤ 15, we say that the universe U is weakly k-stable if conditions
(WS1)-(WS3) hold; in particular, (WS1) requires that U is (k−1)-stable, and
(WS2) requires that every router r at distance k is either k-stable or has
hops(r) > k.
Weak k-stability is stronger than (k−1)-stability, but weaker than k-stability.
The disjunction in (WS2) (which distinguishes weak stability from the ordinary
stability) will typically introduce additional complexity in case analyses arising
from reasoning about weak stability.
As with k-stability, we have the following:
Lemma 11 (Preservation of weak stability) For any 2 ≤ k ≤ 15, if the
universe is weakly k-stable at some time t, then it is weakly k-stable at any time
t′ ≥ t.
We must also show that the initial state inevitably becomes weakly 2-stable
after messages have been exchanged between every pair of neighbors:
Lemma 12 (Initial progress) If the topology does not change, the universe
becomes weakly 2-stable after Δ time.
The main progress property says that it takes one update interval to get from
a weakly k-stable state to a weakly (k + 1)-stable state. This property is shown
in two steps: first we show that condition (WS1) for weak (k + 1)-stability holds
after Δ:
Lemma 13 For any 2 ≤ k ≤ 15, if the universe is weakly k-stable at some time
t, then it is k-stable at time t + Δ.
and then we show the same for conditions (WS2) and (WS3). The following
puts both steps together:
Lemma 14 (Progress) For any 2 ≤ k < 15, if the universe is weakly k-stable
at some time t, then it is weakly (k + 1)-stable at time t + Δ.
The radius of the universe (with respect to d) is the maximum distance from
d:
R = max{ dist(r) | r is a router }.
The main theorem describes convergence time for a destination in terms of its
radius:
Theorem 15 (RIP convergence time) A universe of radius R becomes 15-
stable within max{15, R} · Δ time, assuming that there were no topology changes
during that time interval.
The theorem is an easy corollary of the preceding lemmas. Consider a universe
of radius R ≤ 15. To show that it converges in R · Δ time, observe what happens
during each Δ-interval of time:
after Δ: weakly 2-stable (by Lemma 12)
after 2 · Δ: weakly 3-stable (by Lemma 14)
after 3 · Δ: weakly 4-stable (by Lemma 14)
...
after (R − 1) · Δ: weakly R-stable (by Lemma 14)
after R · Δ: R-stable (by Lemma 13)
R-stability means that all the routers that are not more than R hops away from
d will have shortest routes to d. Since the radius of the universe is R, this
includes all routers.
An interesting observation is that progress from (ordinary) k-stability to
(ordinary) (k + 1)-stability is not guaranteed to happen in less than 2 · Δ time (we
leave this to the reader). Consequently, had we chosen to calculate convergence
time using stability, rather than weak stability, we would get a worse upper
bound of 2 · R · Δ. In fact, our upper bound is sharp: in a linear topology,
update messages can be interleaved in such a way that convergence time becomes
as bad as R · Δ. Figure 1 shows an example that consists of k routers and has
Figure 1: Maximum Convergence Time (a linear topology d, r1, r2, r3, ..., rk)
the radius k with respect to d. Router r1 is connected to d and has the correct
metric. Router r2 also has the correct metric, but points in the wrong direction.
Other routers have no route to d. In this state, r2 will ignore a message from r1,
because that route is no better than what r2 (thinks it) already has. However,
after receiving a message from r3, to which it points, r2 will update its metric to
16 and lose the route. Suppose that, from this point on, messages are interleaved
in such a way that during every update interval, all routers first send their
update messages and then receive update messages from their neighbors. This
will cause exactly one new router to discover the shortest route during every
update interval. Router r2 will have the route after the second interval, r3 after
the third, and rk after the k-th. This shows that our upper bound of k · Δ
is reachable.
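This adversarial schedule can be replayed in code. The following Python sketch (a deliberate simplification that keeps only the metrics and the send-all-then-receive-all schedule, not the full RIP machinery) confirms that on the line topology exactly one router recovers per interval, so convergence takes k intervals:

```python
INF = 16  # RIP's "infinity": metric 16 means unreachable

def worst_case_intervals(k):
    """Replay the adversarial schedule on the line d - r1 - ... - rk.
    In interval 1, r2 hears the router it points to (r3) advertise no
    route and loses its own route, leaving only r1 correct.  In every
    later interval all routers send their current metric, then all
    receive, adopting only a strictly better offer.  Returns the number
    of intervals until every metric is correct."""
    metric = [1] + [INF] * (k - 1)  # state after interval 1
    intervals = 1
    while metric != list(range(1, k + 1)):
        intervals += 1
        snapshot = list(metric)  # "all routers first send"
        for i in range(1, k):
            offers = [snapshot[i - 1] + 1]
            if i + 1 < k:
                offers.append(snapshot[i + 1] + 1)
            best = min(min(offers), INF)
            if best < metric[i]:  # adopt only strict improvements
                metric[i] = best
    return intervals

print(worst_case_intervals(6))  # 6: the line of radius 6 needs 6 intervals
```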
4.1 Proof methodology
Lemmas 11, 12, and 14 are proved in SPIN (Lemma 13 is a consequence of
Lemma 14). Theorem 15 is then derived as a corollary in HOL. SPIN turned
out to be extremely helpful for proving properties such as Lemma 14, which
involve tedious case analysis. To illustrate this, assuming weak k-stability at
time t, let us look at what it takes to show that condition (WS2) for weak
(k + 1)-stability holds after Δ time. ((WS1) will hold because of Lemma 13,
but further effort is required for (WS3).)
To prove (WS2), let r be a router with dist(r) = k + 1. Because of weak
k-stability at the time t, there are two possibilities for r: (1) r has a k-stable
neighbor, or (2) all of the neighbors of r have hops > k. To show that r will
eventually progress into either a (k+1)-stable state or a state with hops > k+1,
we need to further break the case (2) into three subcases with respect to the
properties of the router that r points to: (2a) r points to s ∈ C_k (the k-circle),
which is the only neighbor of r from C_k; or (2b) r points to s ∈ C_k, but r has
another neighbor t ∈ C_k such that t ≠ s; or (2c) r points to s ∉ C_k. Each of
these cases branches into several further subcases based on the relative ordering
in which r, s, and possibly t send and receive update messages.
Doing such proofs by hand is difficult and prone to errors. Essentially, the
proof is a deeply-nested case analysis in which the final cases are straightforward
to prove, an ideal job for a computer to do! Our SPIN verification is divided
into four parts accounting for differences in possible topologies. Each part has
a distinguished process representing r and other processes modeling the environment
for r. An environment is an abstraction of the 'rest of the universe'.
It generates all message sequences that could possibly be observed by r. Sometimes
a model can be simplified by allowing the environments to generate some
sequences that are not possible in reality. This does not affect the confidence
in positive verification answers, since any invariant that holds in a less constrained
environment also holds in a more constrained one. SPIN considered
more cases than a manual proof would have required, 21,487 of them altogether
for Lemma 14, but it checked these in only 1.7 seconds of CPU time. Even
counting set-up time for this verification, this was a significant time-saver. The
resulting proof is probably also more reliable than a manual one. We summarize
similar analyses for our other results in the conclusions.
5 Verifying Properties of AODV
The mobile, ad hoc network model consists of a number of mobile nodes capable
of communicating with their neighbors via broadcast or point-to-point links. The
mobility of the nodes and the nature of the broadcast media make these links
temporary, low bandwidth, and highly lossy. Nodes can be used as routers
to forward packets. At any point in time, the nodes can be thought of as
forming a connected ad hoc network with multi-hop paths between nodes. In
this environment, the goal of a routing protocol is to discover and maintain
routes between nodes. The essential requirements of such a protocol would be
to (1) react quickly to topology changes and discover new routes, (2) send a
minimal amount of control information, and (3) maintain 'good' routes so that
data packet throughput is maximized.
Distance-vector protocols like RIP are probably satisfactory for requirements
(2) and (3). However, the behavior of these protocols is drastically altered in
the presence of topology changes. For example, in RIP, if a link goes down,
the network could take 'infinite' time (15 update periods) to adapt to the new
topology. This is because of the presence of routing loops in the network, resulting
in the so-called counting-to-infinity problem. If routing loops could be
avoided, distance-vector protocols would be strong candidates for routing in
mobile, ad-hoc networks. There have been a variety of proposals for distance
vector protocols that avoid loop formation. We consider AODV [18], a recently-
introduced protocol that aims to minimize the flow of control information by
establishing routes only on demand. Our analysis is based on the version 2 draft
specification [17].
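The counting-to-infinity behavior is easy to reproduce in a toy setting. The following Python sketch (our own two-router illustration, not part of the AODV or RIP specifications) shows two routers bouncing ever-growing metrics off each other after a link failure until both saturate at RIP's infinity of 16:

```python
INF = 16  # metric 16 is RIP's unreachable value

def count_to_infinity():
    """Two routers after a link failure: a was directly attached to d
    but lost the link; b still has a stale 2-hop route through a.
    Router a adopts b's route, b (which routes via a) must then accept
    a's worse metric, and the pair count upward until both hit 16.
    Returns the number of advertisement exchanges needed."""
    a, b = INF, 2
    exchanges = 0
    while a < INF or b < INF:
        exchanges += 1
        a = min(b + 1, INF)  # a takes the only route on offer, via b
        b = min(a + 1, INF)  # b routes via a, so it must accept a's metric
    return exchanges

print(count_to_infinity())  # 8 exchanges before both metrics saturate at 16
```

With a larger infinity the loop would run correspondingly longer, which is exactly why distance vector protocols cap the metric.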
In a network running AODV, every node n maintains two counters: a sequence
number (seqno(n)) and a broadcast id (broadcast id(n)). The sequence
number is incremented when the node discovers a local topology change, while
the broadcast id is incremented every time the node makes a route-discovery
broadcast. The node also maintains a routing table containing routes for each
destination d that it needs to send packets to. The route contains the hop-count
(hops d
(n)) to d and the next node (next d
(n)) on the route. It also contains the
last known sequence number (seqno d
(n)) of the destination, which is a measure
of the freshness of the route.
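The per-node state just described can be captured in a small sketch (the class and field names are ours, not AODV's):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Route:
    hops: int        # hops_d(n): hop count to the destination
    next_hop: str    # next_d(n): next node on the route
    seqno: int       # seqno_d(n): last known destination sequence number

@dataclass
class Node:
    name: str
    seqno: int = 0          # incremented on local topology changes
    broadcast_id: int = 0   # incremented on every route-discovery broadcast
    table: Dict[str, Route] = field(default_factory=dict)  # destination -> Route

n = Node("n")
n.table["d"] = Route(hops=3, next_hop="m", seqno=7)  # a route to destination d
```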
When a source node s needs to discover a route to a destination d, it broadcasts
a route-request (RREQ) packet to all its neighbors. The RREQ packet
contains the following fields:
⟨s, seqno(s), hop_cnt, d, seqno_d(s), broadcast_id(s)⟩.
The RREQ is thought of as requesting a route from s to d at least as fresh as
the last known destination sequence number (seqno_d(s)). At the same time, the
packet informs the recipient node that it is hop_cnt hops away from the node s,
whose current sequence number is seqno(s). Consequently, a node that receives
a new RREQ re-broadcasts it with an incremented hop_cnt if it cannot satisfy
the request. Using the source information in the packet, every such intermediate
node also sets up a reverse-route to s through the node that sent it the RREQ.
The broadcast id is used to identify (and discard) multiple copies of an RREQ
received at a node.
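The intermediate-node RREQ processing described above might be sketched as follows (a hedged model; the packet representation, dictionary encoding, and function name are ours):

```python
def handle_rreq(node, rreq, sender):
    """Process one RREQ at an intermediate node.
    rreq fields: src, src_seqno, hop_cnt, dst, dst_seqno, broadcast_id."""
    # Discard duplicate copies of an RREQ, identified by (source, broadcast id).
    key = (rreq["src"], rreq["broadcast_id"])
    if key in node["seen"]:
        return None
    node["seen"].add(key)

    hop_cnt = rreq["hop_cnt"] + 1
    # Reverse route to the source, through the neighbor that sent us the RREQ.
    node["table"][rreq["src"]] = {"next": sender, "hops": hop_cnt,
                                  "seqno": rreq["src_seqno"]}

    route = node["table"].get(rreq["dst"])
    if route is not None and route["seqno"] >= rreq["dst_seqno"]:
        # A fresh enough route is known: answer with a route reply (RREP).
        return ("RREP", {"dst": rreq["dst"], "seqno": route["seqno"],
                         "hop_cnt": route["hops"]})
    # Otherwise re-broadcast the request with an incremented hop count.
    return ("RREQ", dict(rreq, hop_cnt=hop_cnt))
```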
When the RREQ reaches a node that has a fresh enough route to d, the node
unicasts a route-reply (RREP) packet back to s. Remember that the nodes that
forwarded the RREQ have set up a reverse route that can be used for this
unicast. The RREP packet contains the following fields:
⟨s, d, seqno(d), hop_cnt, lifetime⟩.
As in the RREQ case, this message provides information (hop_cnt, seqno(d))
that a recipient node can use to set up a route to d. The route is assumed to be
valid for lifetime milliseconds. Any node receiving the RREP updates its route
to d and passes it along towards the requesting source node s after incrementing
the hop_cnt. In case a receiving node already has a route to d, it uses the route
with the maximum seqno(d) and then, among those, the one with the least hop_cnt.
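The route-preference rule just stated (maximum seqno(d), breaking ties by least hop count) can be written as a small comparison (a sketch; the function and field names are ours):

```python
def prefer_new(new, old):
    """Return True iff the new route should replace the old one:
    higher destination sequence number wins; on a tie, fewer hops wins."""
    if old is None:
        return True                         # no existing route: take the new one
    if new["seqno"] != old["seqno"]:
        return new["seqno"] > old["seqno"]  # fresher route wins
    return new["hops"] < old["hops"]        # same freshness: shorter route wins
```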
The above process can be used to establish routes when the network is
stable. However, when a link along a route goes down, the source node needs to
recognize this change and look for an alternative route. To achieve this, the node
immediately upstream of the link sends RREP messages to all its neighbors that
have been actively using the route. These RREP messages have the hop_cnt set
to INFINITY (255) and seqno(d) set to one plus the previously known sequence
number. This forces the neighbors to recognize that the route is unavailable,
and they then forward the message to their active neighbors until all relevant
nodes know about the route failure. Note, however, that this process depends
on the nodes directly connected to a link being able to detect its unavailability.
Link failure detection is assumed to be handled by some link-layer mechanism.
If not, the nodes can run the hello protocol, which requires neighbors to send
each other periodic 'hello' messages indicating their availability.
5.1 Path Invariants
As mentioned before, an essential property of AODV for handling topological
changes is that it is loop-free. A hand proof by contradiction is given in [18].
We provide an automated proof of this fact as a corollary of the preservation of
the key path invariant of the protocol. The invariant is also used to prove route
validity.
First, we model AODV in SPIN by Promela processes for each node. As described
earlier, the process needs to maintain state in the form of a broadcast-id,
a sequence number and a routing table. In the following, we write seqno_d(d) to
denote d's own sequence number (seqno(d)). The process needs to react to several
events, possibly updating this state. The main events are neighbour discovery,
data or control (RREP/RREQ) packet arrival and timeout events like link failure
detection and route expiration. It is relatively straightforward to generate
the Promela processes from [17].
Then, we prove that the following is an invariant (over time) of the AODV
process at a node n, for every destination d.
Theorem 16 If next_d(n) = n', then
1. seqno_d(n) <= seqno_d(n'), and
2. seqno_d(n) = seqno_d(n') implies hops_d(n) > hops_d(n').
This invariant has two important consequences:
1. (Loop-Freedom) Consider the network at any instant and look at all the
routing-table entries for a destination d. Any data packet traveling towards
d would have to move along the path defined by the next_d pointers.
However, we know from Theorem 16 that at each hop along this path, either
the sequence number must increase or the hop-count must decrease.
In particular, a node cannot occur at two points on the path. This guarantees
loop-freedom for AODV.
2. (Route Validity) Loop-freedom in a finite network guarantees that data
paths are finite. This does not guarantee that the path ends at d. However,
if all the sequence numbers along a path are the same, hop-counts must
strictly decrease (by Theorem 16). In particular, the last node n_l on the
path cannot have hop-count INFINITY (no route). But the path can only end
at a node that either has no route to d or is d itself; hence n_l must be
equal to d.
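The argument above can be made concrete with a checker for the invariant of Theorem 16 (as we read it: along each next_d hop, the sequence number must not decrease and, when equal, the hop count must strictly decrease) over a snapshot of all routing tables, plus a walk along the next_d pointers. The table encoding is our assumption:

```python
def invariant_holds(tables, d):
    """tables[n] = {'next': n2, 'seqno': ..., 'hops': ...} for each node n != d.
    Check the Theorem-16-style condition at every hop."""
    for n, e in tables.items():
        nxt = e["next"]
        if nxt is None or nxt == d:
            continue
        e2 = tables[nxt]
        if e["seqno"] > e2["seqno"]:
            return False                 # sequence number decreased along a hop
        if e["seqno"] == e2["seqno"] and e["hops"] <= e2["hops"]:
            return False                 # equal seqno but hop count not decreasing
    return True

def follow(tables, start, d, limit=100):
    """Walk the next_d pointers from start; the limit guards against loops."""
    path, n = [], start
    while n != d and len(path) < limit:
        path.append(n)
        n = tables[n]["next"]
    return path
```

On a snapshot satisfying the invariant, the walk visits each node at most once; a two-node loop necessarily violates the invariant.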
To prove Theorem 16, we first prove the following properties about the
routing table at each node n, now considered as a function of time.
Lemma 17 seqno_d(n) is nondecreasing with time.

Lemma 18 While seqno_d(n) remains unchanged, hops_d(n) is nonincreasing.

Suppose next_d(n)(t) = n'. Then we define lut (last update time) to be the
last time before t when next_d(n) changed to n'. We claim that the following
lemma holds for times t and lut:

Lemma 19 If next_d(n)(t) = n', then
1. seqno_d(n)(t) <= seqno_d(n')(lut), and
2. seqno_d(n)(t) = seqno_d(n')(lut) implies hops_d(n)(t) > hops_d(n')(lut).
The lemmas are proved in SPIN using the Promela model described earlier.
Lemmas 17 and 18 can be proved without restrictions on the events produced
by the environment. Lemma 19 is trickier and requires the model to record
the incoming seqno_d and hops_d values at the moment the protocol decides to
change next_d(n) to n'. This is easily done by the addition of two variables.
Subsequently, Lemma 19 is also verified by SPIN.
The proof that the three lemmas together imply Theorem 16 involves standard
deductive reasoning which we did in HOL.
5.2 Failure Conditions
In the previous section, we described a Promela model for AODV and claimed
that it is straightforward to develop the protocol model from the standard
specification in [17]. While it is clear what the possible events and corresponding
actions are, the Internet Draft does leave some boundary conditions unspecified.
In particular, it does not say what the initial state at a node must be.
We assume a 'reasonable' initial state, namely one in which the node's sequence
number and broadcast-id are 0 and the routing table is empty. The
proofs in the previous section were carried out with this initial-state assumption.
However, it is unclear whether this assumption is necessary: could an AODV
implementation that chose to have some default route and sequence number fail
to satisfy some crucial properties?
We approach this problem by identifying the key invariants that must be
satisfied by any strategy that the node may use on reboot. We choose the
invariant in Theorem 16 as a target of our efforts, since violating this invariant
may result in breaking the loop-freedom property. A newly-initialized node n_i
can break the invariant in exactly two ways:
1. next_d(n_i) = n'. This means that the node has initialized with a route for d
that goes through n'.
2. next_d(n) = n_i. Some other node n has a route through n_i even though n_i
has just come up. This implies that n_i had been previously active before it
failed, and the node failure was not noticed by n.
The key choices left to the implementor are:
1. whether node failures are detected by all neighbors,
2. whether nodes can initialize with a (default) route, and
3. whether nodes can initialize with an arbitrary sequence number.
However, any choice that the programmer makes must comply with the invariant
proved in Theorem 16. Keeping this in mind, we analyze all the possible choices.
First, we find that failure-detection (1) is necessary. Otherwise, it is possible
that next_d(n) = n_i while n_i has no route to d. So, Part (1) of the invariant in
Theorem 16 is immediately broken and loops may be formed. For instance, n_i
may soon look for a route to d and accept the route that n has to offer, thus
forming a loop.
Assuming failure detection, the safest answer to (2) is to disallow any initial
routes. This ensures that the invariant cannot be violated and therefore loop-
freedom will be preserved. On the other hand, if initial routes are allowed,
multiple failures would make the invariant impossible to guarantee.
Choice (3) comes into play only at the destination d. If d itself is initialized,
we assume that it does not have a route for itself. But then both choices for (3)
obey the invariant in the presence of failure detection (1). Moreover, d can never
be a member of the routing loop. This means that in the choice of (2) above,
we would be safe if next_d(n_i) = d, irrespective of what the invariant suggests.

Table 3: Protocol Verification Effort

Task                   | HOL                             | SPIN
-----------------------|---------------------------------|--------------------------
Modeling RIP           | 495 lines, 19 defs, 20 lemmas   | 141 lines
Proving Lemma 8 Once   | 9 lemmas, 119 cases, 903 steps  |
Proving Lemma 8 Again  | 29 lemmas, 102 cases, 565 steps | 207 lines, 439 states
Proving Lemma 9        | Reuse Lemma 8 Abstractions      | 285 lines, 7116 states
Proving Lemma 11       | Reuse Lemma 8 Abstractions      | 216 lines, 1019 states
Proving Lemma 12       | Reuse Lemma 8 Abstractions      | 221 lines, 1139 states
Proving Lemma 14       | Reuse Lemma 8 Abstractions      | 342 lines, 21804 states
Modeling AODV          | 95 lines, 6 defs                | 302 lines
Proving Lemma 17       |                                 | 173 lines, 5106 states
Proving Lemma 18       |                                 | lines, 5106 states
Proving Lemma 19       |                                 | 157 lines, 721668 states
Proving Theorem 16     | 4 lemmas, 2 cases, 5 steps      |
This analysis leads to the following:
Theorem 20 For AODV to be loop-free, it is sufficient that
1. node failures are detected by all neighbors, and
2. in the initial state of a node n_i,
(a) there is no route for n_i itself, and
(b) for any other destination d, either n_i has no route for d, or next_d(n_i) = d.
Any implementation of AODV should conform to the above Theorem. For
example, a simple way to guarantee node-failure detection is to insist that nodes
remain silent for a fixed time after coming up, until they can be sure that their
previous absence has been detected. Ensuring the rest of the conditions is
straightforward.
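A conformance check for the initial-state conditions of Theorem 20 could look like the following sketch (the table encoding and function name are ours):

```python
def initial_state_ok(node, table):
    """Theorem 20, condition 2: on reboot, the node must have no route for
    itself, and every other route (if any) must point directly at its
    destination, i.e. next_d(n_i) = d."""
    for dest, entry in table.items():
        if dest == node:
            return False            # 2(a): no route for n_i itself
        if entry["next"] != dest:
            return False            # 2(b): only direct routes are allowed
    return True
```

An empty routing table trivially satisfies both conditions, which matches the 'reasonable' initial state assumed in the proofs.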
6 Conclusion
This paper provides the most extensive automated mathematical analysis of a class
of routing protocols to date. Our results show that it is possible to provide formal
analysis of correctness for routing protocols from IETF standards and drafts with
reasonable effort and speed, thus demonstrating that these techniques can effectively
supplement other means of improving assurance such as manual proof, simulation, and
testing. Specific technical contributions include: the first proof of the correctness of
the RIP standard, statement and automated proof of a sharp realtime bound on the
convergence of RIP, and an automated proof of loop-freedom for AODV.
Table 3 summarizes some of our experience with the complexity of the proofs in
terms of our automated support tools. The complexity of an HOL verification for the
human verifier is described with the following statistics measuring things written by a
human: the number of lines of HOL code, the number of lemmas and definitions, and
the number of proof steps. Proof steps were measured as the number of instances of the
HOL construct THEN. The HOL automated contribution is measured by the number of
cases discovered and managed by HOL. This is measured by the number of THENL's,
weighted by the number of elements in their argument lists. The complexity of SPIN
verification for the human verifier is measured by the number of lines of Promela
code written. The SPIN automated contribution is measured by the number of states
examined and the amount of memory used in the verification. As we mentioned
before, SPIN is memory bound; each of the verifications took less than a minute and
the time is generally proportional to the memory. Most of the lemmas consumed the
SPIN-minimum of 2.54MB of memory; Lemma 19 required 22.8MB. The figures were
collected for runs on a lightly-loaded Sun Ultra Enterprise with 1016MB of memory
and 4 CPU's running SunOS 5.5.1. The tool versions used were HOL90.10 and SPIN-
3.24. We carried out parallel proofs of Lemma 8, the Stability Preservation Lemma,
using HOL only and HOL together with SPIN.
Extensions of the results in this paper are possible in several areas: additional pro-
tocols, better tool support, and techniques for proving other kinds of properties. We
could prove correctness properties similar to the ones in this paper for other routing
protocols, including other approaches like link state routing [1]. It would be challenging
to prove OSPF [14] because of the complexity of the protocol specification, but the
techniques used here would apply for much of what needs to be done. We do intend to
pursue better tool support. In particular, we are interested in integration with simulation
and implementation code. This may allow us to leverage several related activities
and also improve the conformance between the key artifacts: the standard, the SPIN
program, the HOL invariants and environment model, the simulation code, and, of
course, the implementation itself. One program analysis technology of particular interest
is slicing, since it is important to know which parts of a program might affect the
values in messages. We are also interested in how to prove additional kinds of properties
such as security and quality of service (including reservation assurances). Security
is particularly challenging because of the difficulty in modeling secrecy precisely.
Acknowledgments
We would like to thank the following people for their assistance and encouragement:
Roch Guerin, Elsa L. Gunter, Luke Hornof, Sampath Kannan, Insup Lee, Charles
Perkins, and Jonathan Smith. This research was supported by NSF Contract CCR-
9505469, and DARPA Contract F30602-98-2-0198.
--R
A reliable
Data Networks.
Routing in clustered multihop
Secure Socket Layer.
Introduction to HOL: A theorem proving environment for higher order logic.
Applying the SCR requirements method to a weapons control panel: An experience report.
Routing information protocol.
Home page for the HOL interactive theorem proving system.
Design and Validation of Computer Protocols.
The SPIN model checker.
Routing in the Internet.
Carrying Additional Information.
OSPF version 2.
Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers
Ad hoc on demand distance vector (AODV) routing.
An algorithm for distributed computation of spanning trees in an extended LAN.
Interconnections: Bridges and Routers.
A review of current routing protocols for ad hoc mobile wireless networks.
Alma L. Juarez Dominguez , Nancy A. Day, Compositional reasoning for port-based distributed systems, Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, November 07-11, 2005, Long Beach, CA, USA | network standards;formal verification;HOL;AODV;interactive theorem proving;routing protocols;SPIN;model checking;RIP;distance vector routing |
581851 | Hop integrity in computer networks. | A computer network is said to provide hop integrity iff when any router p in the network receives a message m supposedly from an adjacent router q, then p can check that m was indeed sent by q, was not modified after it was sent, and was not a replay of an old message sent from q to p. In this paper, we describe three protocols that can be added to the routers in a computer network so that the network can provide hop integrity, and thus overcome most denial-of-service attacks. These three protocols are a secret exchange protocol, a weak integrity protocol, and a strong integrity protocol. All three protocols are stateless, require small overhead, and do not constrain the network protocol in the routers in any way. | Introduction
Most computer networks suffer from the following
security problem: in a typical network, an adversary that
has access to the network can insert new messages,
modify current messages, or replay old messages in the
network. In many cases, the inserted, modified, or
replayed messages can go undetected for some time until
they cause severe damage to the network. More
importantly, the physical location in the network where
the adversary inserts new messages, modifies current
messages, or replays old messages may never be
determined.
Two well-known examples of such attacks in networks
that support the Internet Protocol (or IP, for short) and the
Transmission Control Protocol (or TCP, for short) are as
follows.
* This work is supported in part by the grant ARP-003658-320 from
the Advanced Research Program in the Texas Higher Education
Coordinating Board.
i. Smurf Attack:
In an IP network, any computer can send a "ping"
message to any other computer which replies by sending
back a "pong" message to the first computer as required
by Internet Control Message Protocol (or ICMP, for short)
[14]. The ultimate destination in the pong message is the
same as the original source in the ping message. An
adversary can utilize these messages to attack a computer
d in such a network as follows. First, the adversary inserts
into the network a ping message whose original source is
computer d and whose ultimate destination is a multicast
address for every computer in the network. Second, a
copy of the inserted ping message is sent to every
computer in the network. Third, every computer in the
network replies to its ping message by sending a pong
message to computer d. Thus, computer d is flooded by
pong messages that it did not requested.
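The amplification at the heart of this attack can be sketched in a few lines (a schematic model of the message flow, not real ICMP):

```python
def smurf(hosts, victim):
    """One forged ping whose original source is the victim and whose
    destination is a broadcast address; every host replies with a pong
    addressed to the ping's original source, i.e. the victim."""
    ping = {"src": victim, "dst": "broadcast"}
    # Every host except the victim answers the broadcast ping with a pong.
    return [{"src": h, "dst": ping["src"]} for h in hosts if h != victim]

# One forged packet yields one unsolicited pong per host in the network.
pongs = smurf([f"h{i}" for i in range(100)] + ["d"], victim="d")
```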
ii. SYN Attack:
To establish a TCP connection between two computers
c and d, one of the two computers c sends a "SYN"
message to the other computer d. When d receives the
SYN message, it reserves some of its resources for the
expected connection and sends a "SYN-ACK" message to
c. When c receives the SYN-ACK message, it replies by
sending back an "ACK" message to d. If d receives the
ACK message, the connection is fully established and the
two computers can start exchanging their data messages
over the established connection. On the other hand, if d
does not receive the ACK message for a specified time
period of T seconds after it has sent the SYN-ACK
message, d discards the partially established connection
and releases all the resources reserved for that connection.
The net effect of this scenario is that computer d has lost
some of its resources for T seconds. An adversary can
take advantage of such a scenario to attack computer d as
follows [1, 18]. First, the adversary inserts into the
network successive waves of SYN messages whose
original sources are different (so that these messages
cannot be easily detected and filtered out from the
network) and whose ultimate destination is d. Second, d
receives the SYN messages, reserves its resources for the
expected connections, replies by sending SYN-ACK
messages, then waits for the corresponding ACK
messages which will never arrive. Third, the net effect of
each wave of inserted SYN messages is that computer d
loses all its resources for T seconds.
In these (and other [7]) types of attacks, an adversary
inserts into the network messages with wrong original
sources. These messages are accepted by unsuspecting
routers and routed toward the computer under attack. To
counter these attacks, each router p in the network should
route a received message m only after it checks that the original
source in m is a computer adjacent to p or m is forwarded
to p by an adjacent router q. Performing the first check is
straightforward, whereas performing the second check
requires special protocols between adjacent routers. In this
paper, we present a suite of protocols that provide hop
integrity between adjacent routers: whenever a router p
receives a message m from an adjacent router q, p can
detect whether m was indeed sent by q or it was modified
or replayed by an adversary that operates between p and q.
It is instructive to compare hop integrity with secure
routing [2, 11, 17], ingress filtering [4], and IPsec [8]. In
secure routing, for example [2], [11], and [17], the routing
update messages that routers exchange are authenticated.
This authentication ensures that every routing update
message, that is modified or replayed, is detected and
discarded. By contrast, hop integrity ensures that all
messages (whether data or routing update messages), that
are modified or replayed, are detected and discarded.
Using ingress filtering [4], each router on the network
boundary checks whether the recorded source in each
received message is consistent with where the router
received the message from. If the message source is
consistent, the router forwards the message as usual.
Otherwise, the router discards the message. Thus, ingress
filtering detects messages whose recorded sources are
modified (to hide the true sources of these messages),
provided that these modifications occur at the network
boundary. Messages whose recorded sources are modified
between adjacent routers in the middle of the network will
not be detected by ingress filtering, but will be detected
and discarded by hop integrity.
The hop integrity protocol suite in this paper and the
IPsec protocol suite presented in [8], [9], [10], [12], and
[13] are both intended to provide security at the IP layer.
Nevertheless, these two protocol suites provide different,
and somewhat complementary, services. On one hand, the
hop integrity protocols are to be executed at all routers in
a network, and they provide a minimum level of security
for all communications between adjacent routers in that
network. On the other hand, the IPsec protocols are to be
executed at selected pairs of computers in the network,
and they provide sophisticated levels of security for the
communications between these selected computer pairs.
Clearly, one can envision networks where the hop
integrity protocol suite and the IPsec protocol suite are
both supported.
Next, we describe the concept of hop integrity in some
detail.
2. Hop Integrity Protocols
A network consists of computers connected to
subnetworks. (Examples of subnetworks are local area
networks, telephone lines, and satellite links.) Two
computers in a network are called adjacent iff both
computers are connected to the same subnetwork. Two
adjacent computers in a network can exchange messages
over any common subnetwork to which they are both
connected.
The computers in a network are classified into hosts
and routers. For simplicity, we assume that each host in a
network is connected to one subnetwork, and each router
is connected to two or more subnetworks. A message m is
transmitted from a computer s to a faraway computer d in
the same network as follows. First, message m is
transmitted in one hop from computer s to a router r.1
adjacent to s. Second, message m is transmitted in one hop
from router r.1 to router r.2 adjacent to r.1, and so on.
Finally, message m is transmitted in one hop from a router
r.n that is adjacent to computer d to computer d.
A network is said to provide hop integrity iff the
following two conditions hold for every pair of adjacent
routers p and q in the network.
i. Detection of Message Modification:
Whenever router p receives a message m over the
subnetwork connecting routers p and q, p can
determine correctly whether message m was modified
by an adversary after it was sent by q and before it
was received by p.
ii. Detection of Message Replay:
Whenever router p receives a message m over the
subnetwork connecting routers p and q, and
determines that message m was not modified, then p
can determine correctly whether message m is
another copy of a message that is received earlier by
p.
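Both conditions above can be met by attaching to every message a keyed integrity check over the payload together with a monotonically increasing sequence number. The sketch below uses HMAC-SHA256 and a hard-coded key; in the protocols of this paper the key would come from the secret exchange layer, so treat the key, names, and framing as our assumptions rather than the paper's wire format:

```python
import hashlib
import hmac

SECRET = b"secret shared by routers p and q"  # stand-in for an exchanged secret

def protect(payload: bytes, seq: int) -> dict:
    """Sender side: bind the payload to a sequence number under the secret."""
    data = seq.to_bytes(8, "big") + payload
    return {"seq": seq, "payload": payload,
            "mac": hmac.new(SECRET, data, hashlib.sha256).digest()}

class Receiver:
    def __init__(self):
        self.last_seq = -1  # highest sequence number accepted so far

    def accept(self, m: dict) -> bool:
        data = m["seq"].to_bytes(8, "big") + m["payload"]
        unmodified = hmac.compare_digest(
            m["mac"], hmac.new(SECRET, data, hashlib.sha256).digest())
        fresh = m["seq"] > self.last_seq       # rejects replays of old messages
        if unmodified and fresh:
            self.last_seq = m["seq"]
        return unmodified and fresh
```

A modified message fails the MAC check (condition i), and a replayed copy fails the freshness check (condition ii).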
For a network to provide hop integrity, two "thin"
protocol layers need to be added to the protocol stack in
each router in the network. As discussed in [3] and [16],
the protocol stack of each router (or host) in a network
consists of four protocol layers; they are (from bottom to
top) the subnetwork layer, the network layer, the transport
layer, and the application layer. The two thin layers that
need to be added to this protocol stack are the secret
exchange layer and the integrity check layer. The secret
exchange layer is added above the network layer (and
below the transport layer), and the integrity check layer is
placed below the network layer (and above the
subnetwork layer).
The function of the secret exchange layer is to allow
adjacent routers to periodically generate and exchange
(and so share) new secrets. The exchanged secrets are
made available to the integrity check layer which uses
them to compute and verify the integrity check for every
data message transmitted between the adjacent routers.
Figure
1 shows the protocol stacks in two adjacent
routers p and q. The secret exchange layer consists of the
two processes pe and qe in routers p and q, respectively.
The integrity check layer has two versions: weak and
strong. The weak version consists of the two processes pw
and qw in routers p and q, respectively. This version can
detect message modification, but not message replay. The
strong version of the integrity check layer consists of the
two processes ps and qs in routers p and q, respectively.
This version can detect both message modification and
message replay.
Next, we explain how hop integrity, along with ingress
filtering, can be used to prevent smurf and SYN attacks
(which are described in the Introduction). Recall that in
smurf and SYN attacks, an adversary inserts into the
network ping and SYN messages with wrong original
sources. These forged messages can be inserted either
through a boundary router or between two routers in the
middle of the network. Ingress filtering (which is usually
installed in boundary routers [4]) will detect the forged
messages if they are inserted through a boundary router
because the recorded sources in these messages would be
inconsistent with the hosts from which these messages are
received. However, ingress filtering may fail in detecting
forged messages if these messages are inserted between
two routers in the middle of the network. For example, an
adversary can log into any host located between two
routers p and q, and use this host to insert forged
messages toward router p, pretending that these messages
are sent by router q. The real source of these messages cannot
be determined by router p, because p cannot
decide whether these messages are sent by router q or by
some host between p and q. However, if hop integrity is
installed between the two routers p and q, then the (weak
or strong) integrity check layer in router p concludes that
the forged messages have been modified after being sent
by router q (although they are actually inserted by the
adversary and not sent by router q), and so it discards
them.
Smurf and SYN attacks can also be launched by
replaying old messages. For example, the adversary can
log into any host located between two routers p and q.
When the adversary spots some passing legitimate ping or
SYN message being sent from q to p, it keeps a copy of
the passing message. At a later time, the adversary can
replay these copied messages over and over to launch a
smurf or SYN attack. Hop integrity can defeat this attack
as follows. If hop integrity is installed between the two
routers p and q, then the strong integrity check layer in
[Figure 1 here depicts the protocol stacks of two adjacent routers p and q.
Each stack consists, from top to bottom, of the applications, transport,
secret exchange (pe in router p, qe in router q), network, integrity check
(pw or ps in p, qw or qs in q), and subnetwork layers.]

Figure 1. Protocol stack for achieving hop integrity.
router p can detect the replayed messages and discard
them.
In the next three sections, we describe in some detail
the protocols in the secret exchange layer and in the two
versions of the integrity check layer. The first protocol
between processes pe and qe is discussed in Section 3.
The second protocol between processes pw and qw is
discussed in Section 4. The third protocol between
processes ps and qs is discussed in Section 5.
These three protocols are described using a variation of
the Abstract Protocol Notation presented in [5]. In this
notation, each process in a protocol is defined by a set of
inputs, a set of variables, and a set of actions. For
example, in a protocol consisting of processes px and qx,
process px can be defined as follows.
process px
inp <name of input> : <type of input>
    ...
    <name of input> : <type of input>
var <name of variable> : <type of variable>
    ...
    <name of variable> : <type of variable>
begin
    <action>
 [] <action>
    ...
 [] <action>
end
Comments can be added anywhere in a process
definition; each comment is placed between the two
brackets { and }.
The inputs of process px can be read but not updated
by the actions of process px. Thus, the value of each input
of px is either fixed or is updated by another process
outside the protocol consisting of px and qx. The variables
of process px can be read and updated by the actions of
process px. Each action of process px is of the form:

<guard> -> <statement>

The guard of an action of px is either a <boolean
expression> or a receive statement of the form:

rcv <message> from qx

The statement of an action of px is a sequence of
skip, assignment, send, or selection statements.
An assignment statement is of the form:

<variable of px> := <expression>

A send statement is of the form:

send <message> to qx

A selection statement is of the form:

if <boolean expression> -> <statement>
[] <boolean expression> -> <statement>
   ...
[] <boolean expression> -> <statement>
fi
Executing an action consists of executing the statement
of this action. Executing the actions (of different
processes) in a protocol proceeds according to the
following three rules. First, an action is executed only
when its guard is true. Second, the actions in a protocol
are executed one at a time. Third, an action whose guard
is continuously true is eventually executed.
Executing an action of process px can cause a message
to be sent to process qx. There are two channels between
the two processes: one is from px to qx, and the other is
from qx to px. Each sent message from px to qx remains
in the channel from px to qx until it is eventually received
by process qx or is lost. Messages that reside
simultaneously in a channel form a sequence <m.1; m.2;
...; m.n> in accordance with the order in which they have
been sent. The head message in the sequence, m.1, is the
earliest sent, and the tail message in the sequence, m.n, is
the latest sent. The messages are to be received in the
same order in which they were sent.
We assume that an adversary exists between processes
px and qx, and that this adversary can perform the
following three types of actions to disrupt the
communications between px and qx. First, the adversary
can perform a message loss action where it discards the
head message from one of the two channels between px
and qx. Second, the adversary can perform a message
modification action where it arbitrarily modifies the
contents of the head message in one of the two channels
between px and qx. Third, the adversary can perform a
message replay action where it replaces the head message
in one of the two channels by a message that was sent
previously. For simplicity, we assume that each head
message in one of the two channels between px and qx is
affected by at most one adversary action.
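The channel and adversary model above can be sketched as a small simulation; the class name Channel, the FIFO list, and the method names are illustrative assumptions rather than part of the protocol notation.

```python
import random

class Channel:
    """FIFO channel between two processes, subject to an adversary."""
    def __init__(self):
        self.msgs = []      # msgs[0] is the head (earliest sent) message
        self.history = []   # every message ever sent (for replay)

    def send(self, m):
        self.msgs.append(m)
        self.history.append(m)

    def recv(self):
        return self.msgs.pop(0) if self.msgs else None

    # The three adversary actions; each affects only the head message.
    def lose_head(self):
        if self.msgs:
            self.msgs.pop(0)

    def modify_head(self, contents):
        if self.msgs:
            self.msgs[0] = contents

    def replay_head(self):
        if self.msgs and self.history:
            self.msgs[0] = random.choice(self.history)
```

The integrity protocols in Sections 3 to 5 are designed so that a receiver can detect each of these three actions on a head message.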
3. The Secret Exchange Protocol
In the secret exchange protocol, the two processes pe
and qe maintain two shared secrets sp and sq. Secret sp is
used by router p to compute the integrity check for each
data message sent by p to router q, and it is also used by
router q to verify the integrity check for each data
message received by q from router p. Similarly, secret sq
is used by q to compute the integrity checks for data
messages sent to p, and it is used by p to verify the
integrity checks for data messages received from q.
As part of maintaining the two secrets sp and sq,
processes pe and qe need to change these secrets
periodically, say every te hours, for some chosen value te.
Process pe is to initiate the change of secret sq, and
process qe is to initiate the change of secret sp. Processes
pe and qe each has a public key and a private key that they
use to encrypt and decrypt the messages that carry the new
secrets between pe and qe. A public key is known to all
processes (in the same layer), whereas a private key is
known only to its owner process. The public and private
keys of process pe are named B p and R p respectively;
similarly the public and private keys of process qe are
named B q and R q respectively.
For process pe to change secret sq, the following four
steps need to be performed. First, pe generates a new sq,
and encrypts the concatenation of the old sq and the new
sq using qe's public key B q , and sends the result in a rqst
message to qe. Second, when qe receives the rqst
message, it decrypts the message contents using its private
key R q and obtains the old sq and the new sq. Then, qe
checks that its current sq equals the old sq from the rqst
message, and installs the new sq as its current sq, and
sends a rply message containing the encryption of the new
sq using pe's public key B p . Third, pe waits until it
receives a rply message from qe containing the new sq
encrypted using B p . Receiving this rply message indicates
that qe has received the rqst message and has accepted the
new sq. Fourth, if pe sends the rqst message to qe but does
not receive the rply message from qe for some tr seconds,
indicating that either the rqst message or the rply message
was lost before it was received, then pe resends the rqst
message to qe. Thus tr is an upper bound on the round trip
time between pe and qe.
Note that the old secret (along with the new secret) is
included in each rqst message and the new secret is
included in each rply message to ensure that if an
adversary modifies or replays rqst or rply messages, then
each of these messages is detected and discarded by its
receiving process (whether pe or qe).
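The four-step exchange can be sketched end to end; a toy reversible encoding stands in for the public-key functions NCR and DCR, and the names Initiator and Responder are assumptions for illustration only.

```python
# Sketch of the four-step secret exchange. A toy tagged tuple stands in
# for real public-key encryption; only the message flow is faithful.
import itertools

_fresh = itertools.count(1)

def NEWSCR():
    """Return a fresh secret, different from any returned before."""
    return next(_fresh)

def NCR(pub_key, item):
    # toy "encryption": tag the item with the key owner's name
    return ("enc", pub_key[1], item)

def DCR(priv_key, blob):
    tag, owner, item = blob
    assert tag == "enc" and owner == priv_key[1]
    return item

class Initiator:
    """Plays the role of pe changing the secret sq."""
    def __init__(self, peer_pub):
        self.sq = [0, 0]            # sq[0] = old secret, sq[1] = new
        self.peer_pub = peer_pub

    def make_rqst(self):            # step 1: generate and send new sq
        self.sq[1] = NEWSCR()
        return NCR(self.peer_pub, (self.sq[0], self.sq[1]))

    def on_rply(self, rply, own_priv):   # step 3: confirm new sq
        if DCR(own_priv, rply) == self.sq[1]:
            self.sq[0] = self.sq[1]

class Responder:
    """Plays the role of qe."""
    def __init__(self, sq, own_priv, peer_pub):
        self.sq, self.own_priv, self.peer_pub = sq, own_priv, peer_pub

    def on_rqst(self, rqst):        # step 2: check old secret, install new
        old, new = DCR(self.own_priv, rqst)
        if old != self.sq:          # modified or replayed rqst
            return None             # discard, report adversary
        self.sq = new
        return NCR(self.peer_pub, new)
```

Step 4 (the tr-second retransmission of the rqst message) is a timer around make_rqst and is omitted here.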
Process pe has two variables sp and sq declared as
follows.
var sp : integer,
    sq : array [0 .. 1] of integer
Similarly, process qe has an integer variable sq and an
array variable sp.
In process pe, variable sp is used for storing the secret
sp, variable sq[0] is used for storing the old sq, and
variable sq[1] is used for storing the new sq. The assertion
sq[0] ≠ sq[1] indicates that process pe has generated and
sent the new secret sq, and that qe may not have received
it yet. The assertion sq[0] = sq[1] indicates that qe has
already received and accepted the new secret sq. Initially,
sq[0] in pe = sq[1] in pe = sq in qe, and
sp[0] in qe = sp[1] in qe = sp in pe.
Process pe can be defined as follows. (Process qe can
be defined in the same way except that each occurrence of
R p in pe is replaced by an occurrence of R q in qe, each
occurrence of B q in pe is replaced by an occurrence of B p
in qe, each occurrence of sp in pe is replaced by an
occurrence of sq in qe, and each occurrence of sq[0] or
sq[1] in pe is replaced by an occurrence of sp[0] or sp[1],
respectively, in qe.)
process pe
inp R p : integer,               {private key of pe}
    B q : integer,               {public key of qe}
    te  : integer,               {secret exchange period}
    tr  : integer                {round trip time}
var sp  : integer,               {initially sp = sp[0] = sp[1] in qe}
    sq  : array [0 .. 1] of integer,
                                 {initially sq[0] = sq[1] = sq}
                                 {in qe}
    d, e : integer
begin
    timeout
    (te hours passed since rqst message sent last) ->
        sq[1] := NEWSCR;
        e := NCR(B q , (sq[0]; sq[1]));
        send rqst(e) to qe
[]  rcv rqst(e) from qe ->
        (d; e) := DCR(R p , e);
        if d = sp -> sp := e;
                     e := NCR(B q , sp);
                     send rply(e) to qe
        [] d ≠ sp -> {detect adversary} skip
        fi
[]  rcv rply(e) from qe ->
        d := DCR(R p , e);
        if d = sq[1] -> sq[0] := sq[1]
        [] d ≠ sq[1] -> {detect adversary} skip
        fi
[]  timeout
    (sq[0] ≠ sq[1] and
     tr seconds passed since rqst message sent last) ->
        send rqst(e) to qe
end
The four actions of process pe use three functions
NEWSCR, NCR, and DCR defined as follows. Function
NEWSCR takes no arguments, and when invoked, it
returns a fresh secret that is different from any secret that
was returned in the past. Function NCR is an encryption
function that takes two arguments, a key and a data item,
and returns the encryption of the data item using the key.
For example, execution of the statement
e := NCR(B q , (sq[0]; sq[1]))
causes the concatenation of sq[0] and sq[1] to be
encrypted using the public key B q , and the result to be
stored in variable e. Function DCR is a decryption
function that takes two arguments, a key and an encrypted
data item, and returns the decryption of the data item
using the key. For example, execution of the statement
d := DCR(R p , e)
causes the (encrypted) data item e to be decrypted using
the private key R p , and the result to be stored in variable
d. As another example, consider the statement
(d; e) := DCR(R p , e)
This statement indicates that the value of e is the
encryption of the concatenation of two values (v 0 ; v 1 )
using B p . Thus, executing this statement causes e to be
decrypted using key R p , and the resulting first value v 0 to
be stored in variable d, and the resulting second value v 1
to be stored in variable e.
A proof of the correctness of the secret exchange
protocol is presented in the full version of the paper [6].
4. The Weak Integrity Protocol
The main idea of the weak integrity protocol is simple.
Consider the case where a data(t) message, with t being
the message text, is generated at a source src then
transmitted through a sequence of adjacent routers r.1, r.2,
..., r.n to a destination dst. When data(t) reaches the first
router r.1, r.1 computes a digest d for the message as
follows:
d := MD(t; scr)
where MD is the message digest function, (t; scr) is the
concatenation of the message text t and the shared secret
scr between r.1 and r.2 (provided by the secret exchange
protocol in r.1). Then, r.1 adds d to the message before
transmitting the resulting data(t, d) message to router r.2.
When the second router r.2 receives the data(t, d)
message, r.2 computes the message digest using the secret
shared between r.1 and r.2 (provided by the secret
exchange process in r.2), and checks whether the result
equals d. If they are unequal, then r.2 concludes that the
received message has been modified, discards it, and
reports an adversary. If they are equal, then r.2 concludes
that the received message has not been modified and
proceeds to prepare the message for transmission to the
next router r.3. Preparing the message for transmission to
r.3 consists of computing d using the shared secret
between r.2 and r.3 and storing the result in field d of the
data(t, d) message.
When the last router r.n receives the data(t, d) message,
it computes the message digest using the shared secret
between r.(n-1) and r.n and checks whether the result
equals d. If they are unequal, r.n discards the message and
reports an adversary. Otherwise, r.n sends the data(t)
message to its destination dst.
Note that this protocol detects and discards every
modified message. More importantly, it also determines
the location where each message modification has
occurred.
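One hop of this digest chain can be sketched in Python; the MD5-truncated-to-4-bytes digest follows Section 6, while the function name forward and the ";" concatenation separator are illustrative assumptions.

```python
import hashlib

def MD(text: bytes, secret: bytes) -> bytes:
    """Digest of (text; secret): MD5 truncated to 4 bytes (Section 6)."""
    return hashlib.md5(text + b";" + secret).digest()[:4]

def forward(text, digest, secret_in, secret_out):
    """One router's step: verify the incoming digest with the secret
    shared with the previous hop, then re-digest for the next hop.
    Returns the outgoing digest, or None if modification is detected."""
    if digest is not None and MD(text, secret_in) != digest:
        return None                 # discard message, report adversary
    return MD(text, secret_out)
```

Because each router verifies with the secret of its incoming hop, a failed check pinpoints the hop on which the modification occurred.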
Process pw in the weak integrity protocol has two
inputs sp and sq that pw reads but never updates. These
two inputs in process pw are also variables in process pe,
and pe updates them periodically, as discussed in the
previous section. Process pw can be defined as follows.
(Process qw is defined in the same way except that each
occurrence of p, q, pw, qw, sp, and sq is replaced by an
occurrence of q, p, qw, pw, sq, and sp, respectively.)
process pw
inp sp : integer,
    sq : array [0 .. 1] of integer
var t, d : integer
begin
    rcv data(t, d) from qw ->
        if MD(t; sq[0]) = d or MD(t; sq[1]) = d ->
            {accept message} RTMSG {defined later}
        [] MD(t; sq[0]) ≠ d and MD(t; sq[1]) ≠ d ->
            {report adversary} skip
        fi
[]  true ->
        {p receives data(t, d) from router other than q}
        {and checks that its message digest is correct}
        RTMSG
[]  true ->
        {either p receives data(t) from an adjacent}
        {host or p generates the text t for the next}
        {data message}
        RTMSG
end
In the first action of process pw, if pw receives a data(t,
d) message from qw while sq[0] ≠ sq[1], then pw cannot
determine beforehand whether qw computed d using sq[0]
or using sq[1]. In this case, pw needs to compute two
message digests using both sq[0] and sq[1] respectively,
and compare the two digests with d. If either digest equals
d, then pw accepts the message. Otherwise, pw discards
the message and reports the detection of an adversary.
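This dual-secret check can be sketched as follows; the function name accept and the ";" separator are assumptions for illustration.

```python
import hashlib

def MD(text: bytes, secret: bytes) -> bytes:
    # MD5 truncated to 4 bytes, as specified in Section 6
    return hashlib.md5(text + b";" + secret).digest()[:4]

def accept(text, digest, sq):
    """While a secret change is in flight (sq[0] != sq[1]), a digest
    computed with either the old or the new secret must be accepted."""
    return digest in (MD(text, sq[0]), MD(text, sq[1]))
```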
The three actions of process pw use two functions
named MD and NXT, and one statement named RTMSG.
Function MD takes one argument, namely the
concatenation of the text of a message and the appropriate
secret, and computes a digest for that argument. Function
NXT takes one argument, namely the text of a message
(which we assume includes the message header), and
computes the next router to which the message should be
forwarded. Statement RTMSG is defined as follows.
if NXT(t) = q -> d := MD(t; sp);
                 send data(t, d) to qw
[] NXT(t) ≠ q -> {compute d as the message digest of}
                 {the concatenation of t and the secret}
                 {for sending data to NXT(t); forward}
                 {data(t, d) to router NXT(t)} skip
fi
A proof of the correctness of the weak integrity
protocol is presented in the full version of the paper [6].
5. The Strong Integrity Protocol
The weak integrity protocol in the previous section can
detect message modification but not message replay. In
this section, we discuss how to strengthen this protocol to
make it detect message replay as well. We present the
strong integrity protocol in two steps. First, we present a
protocol that uses "soft sequence numbers" to detect and
discard replayed data messages. Second, we show how to
combine this protocol with the weak integrity protocol (in
the previous section) to form the strong integrity protocol.
Consider a protocol that consists of two processes u
and v. Process u continuously sends data messages to
process v. Assume that there is an adversary that attempts
to disrupt the communication between u and v by inserting
(i.e. replaying) old messages in the message stream from u
to v. In order to overcome this adversary, process u
attaches an integer sequence number s to every data
message sent to process v. To keep track of the sequence
numbers, process u maintains a variable nxt that stores the
sequence number of the next data message to be sent by u
and process v maintains a variable exp that stores the
sequence number of the next data message to be received
by v.
To send the next data(s) message, process u assigns s
the current value of variable nxt, then increments nxt by
one. When process v receives a data(s) message, v
compares its variable exp with s. If exp ≤ s, then v accepts
the received data(s) message and assigns exp the value
s + 1. Otherwise (exp > s), v discards the data(s) message.
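A minimal sketch of this basic sequence-number exchange, assuming simple Sender and Receiver classes for processes u and v:

```python
class Sender:                    # process u
    def __init__(self):
        self.nxt = 0             # sequence number of the next sent message

    def send(self):
        s = self.nxt
        self.nxt += 1
        return s

class Receiver:                  # process v
    def __init__(self):
        self.exp = 0             # sequence number of the next expected msg

    def accept(self, s):
        if self.exp <= s:        # fresh message: accept, resynchronize
            self.exp = s + 1
            return True
        return False             # old (replayed) message: discard
```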
Correctness of this protocol is based on the observation
that the predicate exp ≤ nxt holds at each (reachable) state
of the protocol. However, if due to some fault (for
example an accidental resetting of the values of variable
nxt) the value of exp becomes much larger than the value of
nxt, then all the data messages that u sends from this point
on will be wrongly discarded by v until nxt becomes equal
to exp. Next, we describe how to modify this protocol
such that the number of data(s) messages, that can be
wrongly discarded when the synchronization between u
and v is lost due to some fault, is at most N, for some
chosen integer N that is much larger than one.
The modification consists of adding to process v two
variables c and cmax, whose values are in the range
0..N-1. When process v receives a data(s) message, v compares
the values of c and cmax. If c < cmax, then process v
increments c by one (mod N) and proceeds as before
(namely either accepts the data(s) message if exp ≤ s, or
discards the message if exp > s). Otherwise, v accepts the
message, assigns c the value 0, and assigns cmax a
random integer in the range 0..N-1.
This modification achieves two objectives. First, it
guarantees that process v never discards more than N data
messages when the synchronization between u and v is
lost due to some fault. Second, it ensures that the
adversary cannot predict the instants when process v is
willing to accept any received data message, and so
cannot exploit such predictions by sending replayed data
messages at those instants.
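The modified receiver can be sketched as follows; the exact placement of the counter update is an assumption based on the reconstruction above, and the class name SoftReceiver is illustrative.

```python
import random

class SoftReceiver:
    """Receiver with the c/cmax modification: after at most N
    consecutive tests the receiver accepts unconditionally and
    resynchronizes, at an instant an adversary cannot predict."""
    def __init__(self, N):
        self.N = N
        self.exp = 0
        self.c = 0
        self.cmax = random.randrange(N)

    def accept(self, s):
        if self.c < self.cmax:
            self.c = (self.c + 1) % self.N
            if self.exp <= s:            # normal soft-sequence test
                self.exp = s + 1
                return True
            return False                 # exp > s: discard
        # c = cmax: accept unconditionally and resynchronize
        self.exp = s + 1
        self.c = 0
        self.cmax = random.randrange(self.N)
        return True
```

Even after a fault leaves exp far ahead of the sender's numbers, at most N messages are wrongly discarded before the receiver resynchronizes.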
Formally, process u and v in this protocol can be
defined as follows.
process u
var nxt : integer                {sequence number of}
                                 {next sent message}
begin
    true -> send data(nxt) to v;
            nxt := nxt + 1
end

process v
inp N : integer
var s : integer,                 {sequence number of}
                                 {received message}
    exp : integer,               {sequence number of}
                                 {next expected message}
    c, cmax : 0 .. N-1
begin
    rcv data(s) from u ->
        if c < cmax ->
            c := (c + 1) mod N;
            if exp > s ->
                {reject message; report an adversary}
                skip
            [] exp ≤ s ->
                {accept message}
                exp := s + 1
            fi
        [] c = cmax ->
            {accept message}
            exp := s + 1;
            c := 0;
            cmax := RANDOM(0, N-1)
        fi
end
Processes u and v of the soft sequence number protocol
can be combined with process pw of the weak integrity
protocol to construct process ps of the strong integrity
protocol. A main difference between processes pw and ps
is that pw exchanges messages of the form data(t, d),
whereas ps exchanges messages of the form data(s, t, d),
where s is the message sequence number computed
according to the soft sequence number protocol, t is the
message text, and d is the message digest computed over
the concatenation (s; t; scr) of s, t, and the shared secret
scr. Process ps in the strong integrity protocol can be
defined as follows. (Process qs can be defined in the same
way.)
process ps
inp sp : integer,
    sq : array [0 .. 1] of integer,
    N  : integer
var s, t, d : integer,
    nxt, exp : integer,
    c, cmax : 0 .. N-1
begin
    rcv data(s, t, d) from qs ->
        if MD(s; t; sq[0]) = d or MD(s; t; sq[1]) = d ->
            if c < cmax ->
                c := (c + 1) mod N;
                if exp > s ->
                    {reject message and}
                    {report an adversary}
                    skip
                [] exp ≤ s ->
                    {accept message}
                    exp := s + 1;
                    RTMSG
                fi
            [] c = cmax ->
                {accept message}
                exp := s + 1;
                c := 0;
                cmax := RANDOM(0, N-1);
                RTMSG
            fi
        [] MD(s; t; sq[0]) ≠ d and MD(s; t; sq[1]) ≠ d ->
            {report an adversary} skip
        fi
[]  true ->
        {p receives a data(s, t, d) from a router}
        {other than q and checks that its encryption}
        {is correct and its sequence number is}
        {within range}
        RTMSG
[]  true ->
        {either p receives a data(t) from adjacent host}
        {or p generates the text t for the next data}
        {message}
        RTMSG
end
The first and second actions of process ps have a
statement RTMSG that is defined as follows.
if NXT(t) = q -> d := MD(nxt; t; sp);
                 send data(nxt, t, d) to qs;
                 nxt := nxt + 1
[] NXT(t) ≠ q -> {compute next soft sequence number s;}
                 {compute d as the message digest of the}
                 {concatenation of s, t and the secret}
                 {for sending data to NXT(t); forward}
                 {data(s, t, d) to router NXT(t)} skip
fi
A proof of the correctness of the strong integrity
protocol is presented in the full version of the paper [6].
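Putting the digest and sequence checks together, the receiving side of ps can be sketched as follows; the helper class Seq and the byte-level encoding of the concatenation (s; t; scr) are illustrative assumptions.

```python
import hashlib

def MD(s: int, text: bytes, secret: bytes) -> bytes:
    # digest over the concatenation (s; t; scr), truncated per Section 6
    return hashlib.md5(b"%d;%s;%s" % (s, text, secret)).digest()[:4]

class Seq:
    """Minimal stand-in for the soft-sequence-number state of ps."""
    def __init__(self):
        self.exp = 0

    def accept(self, s):
        if self.exp <= s:
            self.exp = s + 1
            return True
        return False

def receive(s, text, d, sq, seq):
    """ps's first action: a modified message fails the digest check
    (with both secrets); a replayed one fails the sequence check."""
    if d not in (MD(s, text, sq[0]), MD(s, text, sq[1])):
        return False             # modified: report an adversary
    return seq.accept(s)         # replayed: rejected here
```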
6. Implementation Considerations
In this section, we discuss several issues concerning the
implementation of hop integrity protocols presented in the
last three sections. In particular, we discuss acceptable
values for the inputs of each of these protocols.
There are four inputs in the secret exchange protocol in
Section 3. They are R p , B q , te and tr. Input R p is a private
key for router p, and input B q is a public key for router q.
These are long-term keys that remain fixed for long
periods of time (say one to three months), and can be
changed only off-line and only by the system
administrators of the two routers. Thus, these keys should
consist of a relatively large number of bytes, say 1024
bytes each. There are no special requirements for the
encryption and decryption functions that use these keys in
the secret exchange protocol.
Input te is the time period between two successive
secret exchanges between pe and qe. This time period
should be small so that an adversary does not have enough
time to deduce the secrets sp and sq used in computing the
integrity checks of data messages. It should also be large
so that the overhead that results from the secret exchanges
is reduced. An acceptable value for te is around 4 hours.
Input tr is the time-out period for resending a rqst
message when the last rqst message or the corresponding
rply message was lost. The value of tr should be an upper
bound on the round-trip delay between the two adjacent
routers. If the two routers are connected by a high speed
Ethernet, then an acceptable value of tr is around 4
seconds.
Next, we consider the two inputs sp and sq and
function MD used in the integrity protocols in Sections 4
and 5. Inputs sp and sq are short-lived secrets that are
updated every 4 hours. Thus, each of these secrets should consist of
relatively small number of bytes, say 8 bytes. Function
MD is used to compute the digest of a data message.
Function MD is computed in two steps as follows. First,
the standard function MD5 [15] is used to compute a 16-
byte digest of the data message. Second, the first 4 bytes
from this digest constitute our computed message digest.
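The two-step computation can be written directly with Python's hashlib:

```python
import hashlib

def MD(arg: bytes) -> bytes:
    """Two-step digest of Section 6: full 16-byte MD5 digest,
    then the first 4 bytes as the computed message digest."""
    full = hashlib.md5(arg).digest()    # step 1: 16-byte MD5 digest
    return full[:4]                     # step 2: keep the first 4 bytes
```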
As discussed in Section 5, input N needs to be much
larger than 1. For example, N can be chosen to be 200. In this
case, the maximum number of messages that can be
discarded wrongly whenever synchronization between two
adjacent routers is lost is 200, and the probability that an
adversary who replays an old message will be detected is
about 99 percent.
The message overhead of the strong integrity protocol
is about 8 bytes per data message: 4 bytes for storing the
message digest, and 4 bytes for storing the soft sequence
number of the message.
7. Concluding Remarks
In this paper, we introduced the concept of hop
integrity in computer networks. A network is said to
provide hop integrity iff whenever a router p receives a
message supposedly from an adjacent router q, router p
can check whether the received message was indeed sent
by q or was modified or replayed by an adversary that
operates between p and q.
We also presented three protocols that can be used to
make any computer network provide hop integrity. These
three protocols are a secret exchange protocol (in Section
3), a weak integrity protocol (in Section 4), and a strong
integrity protocol (in Section 5).
These three protocols have several novel features that
make them correct and efficient. First, whenever the secret
exchange protocol attempts to change a secret, it keeps
both the old secret and the new secret until it is certain
that the integrity check of any future message will not be
computed using the old secret. Second, the integrity
protocol computes a digest at every router along the
message route so that the location of any occurrence of
message modification can be determined. Third, the
strong integrity protocol uses soft sequence numbers to
make the protocol tolerate any loss of synchronization.
All three protocols are stateless, require small overhead
at each hop, and do not constrain the network protocol in
any way. Thus, we believe that they are compatible with
IP in the Internet, and it remains to estimate or measure
the performance of IP when augmented with these
protocols.
--R
Flooding and IP Spoofing Attacks"
"An Efficient Message Authentication Scheme for Link State Routing"
Internetworking with TCP/IP: Vol.
"Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing"
Elements of Network Protocol Design
"Hop Integrity in Computer Networks"
"A Simple Active Attack Against TCP"
"Security Architecture for the Internet Protocol"
"IP Authentication Header"
"IP Encapsulating Security Payload (ESP)"
"Digital Signature Protection of the OSPF Routing Protocol"
"Internet Security Association and Key Management Protocol (ISAKMP)"
"The OAKLEY Key Determination Protocol"
"Internet Control Message Protocol"
"The MD5 Message-Digest Algorithm"
TCP/IP Illustrated
"Securing Distance Vector Routing Protocols"
"Internet Security Attacks at the Basic Levels"
--TR
Internetworking with TCP/IP: principles, protocols, and architecture
TCP/IP illustrated (vol. 1)
Elements of network protocol design
Hash-based IP traceback
Network support for IP traceback
Internet security attacks at the basic levels
Digital signature protection of the OSPF routing protocol
Securing Distance-Vector Routing Protocols
Hop integrity in computer networks
An efficient message authentication scheme for link state routing
--CTR
Haining Wang , Abhijit Bose , Mohamed El-Gendy , Kang G. Shin, IP Easy-pass: a light-weight network-edge resource access control, IEEE/ACM Transactions on Networking (TON), v.13 n.6, p.1247-1260, December 2005
Yingfei Dong , Changho Choi , Zhi-Li Zhang, LIPS: a lightweight permit system for packet source origin accountability, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.18, p.3622-3641, 21 December 2006
Chris Karlof , Naveen Sastry , David Wagner, TinySec: a link layer security architecture for wireless sensor networks, Proceedings of the 2nd international conference on Embedded networked sensor systems, November 03-05, 2004, Baltimore, MD, USA
Miao Ma, Tabu marking scheme to speedup IP traceback, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.18, p.3536-3549, 21 December 2006

Keywords: internet; router; network protocol; smurf attack; SYN attack; authentication; message replay; security; denial-of-service attack; message modification
On using SCALEA for performance analysis of distributed and parallel programs

Abstract: In this paper we give an overview of SCALEA, which is a new performance analysis tool for OpenMP, MPI, HPF, and mixed parallel/distributed programs. SCALEA instruments, executes and measures programs and computes a variety of performance overheads based on a novel overhead classification. Source code and HW-profiling is combined in a single system which significantly extends the scope of possible overheads that can be measured and examined, ranging from HW-counters, such as the number of cache misses or floating point operations, to more complex performance metrics, such as control or loss of parallelism. Moreover, SCALEA uses a new representation of code regions, called the dynamic code region call graph, which enables detailed overhead analysis for arbitrary code regions. An instrumentation description file is used to relate performance information to code regions of the input program and to reduce instrumentation overhead. Several experiments with realistic codes that cover MPI, OpenMP, HPF, and mixed OpenMP/MPI codes demonstrate the usefulness of SCALEA.

1 Introduction
As hybrid architectures (e.g., SMP clusters) become the
mainstay of distributed and parallel processing in the
market, the computing community is busily developing
languages and software tools for such machines. Besides
OpenMP [27], MPI [13], and HPF [15], mixed programming
paradigms such as OpenMP/MPI are increasingly
being evaluated.
In this paper we introduce a new performance analysis
system, SCALEA, for distributed and parallel programs
that covers all of the above mentioned programming
paradigms. SCALEA is based on a novel classification
of performance overheads for shared and distributed
memory parallel programs which includes data movement,
synchronization, control of parallelism, additional
computation, loss of parallelism, and unidentified overheads.
SCALEA is among the first performance analysis
tools that combine source code and HW profiling
in a single system, significantly extending the scope of
possible overheads that can be measured and examined.
These range from the use of HW counters for cache analysis
to more complex performance metrics such as control or
loss of parallelism. Specific instrumentation and performance
analysis is conducted to determine each category
of overhead for individual code regions. Instrumentation
can be done fully automatically or user-controlled
through directives. Post-execution performance analysis
is done based on performance trace files and a novel
representation for code regions named dynamic code region
call graph (DRG). The DRG reflects the dynamic
relationship between code regions and its subregions and
enables a detailed overhead analysis for every code re-
gion. The DRG is not restricted to function calls but
also covers loops, I/O and communication statements,
etc. Moreover, it allows arbitrary code regions to be an-
alyzed. These code regions can vary from a single statement
to an entire program unit. This is in contrast to existing
approaches that frequently use a call graph which
considers only function calls.
A prototype of SCALEA has been implemented. We
will present several experiments with realistic programs
including a molecular dynamics application (OpenMP
version), a financial modeling code (HPF and OpenMP/MPI
versions) and a material science code (MPI version) that
demonstrate the usefulness of SCALEA.
The rest of this paper is structured as follows: Section
2 describes an overview of SCALEA [24]. In Section
3 we present a novel classification of performance overheads
based on which SCALEA instruments a code and
analyses its performance. The dynamic code region call
graph is described in the next section. Experiments are
shown in Section 5. Related work is outlined in Section 6.
Conclusions and future work are discussed in Section 7.
2 SCALEA Overview
SCALEA is a post-execution performance tool that instruments,
measures, and analyses the performance behavior
of distributed memory, shared memory, and mixed
parallel programs.
Figure
1 shows the architecture of SCALEA which consists
of two main components: SCALEA instrumentation
system (SIS) and a post execution performance analysis
tool set. SIS is integrated with VFC [3] which is a
compiler that translates Fortran programs
(MPI, OpenMP, HPF, and mixed programs) into For-
tran90/MPI or mixed OpenMP/MPI programs. The
input programs of SCALEA are processed by the compiler
front-end which generates an abstract syntax tree
(AST). SIS enables the user to select (by directives or
command-line options) code regions of interest. Based
on pre-selected code regions, SIS automatically inserts
probes in the code which will collect all relevant performance
information in a set of prole/trace les during
execution of the program on a target architecture. SIS
also generates an instrumentation description le (see
Section 2.2) that enables all gathered performance data
to be related back to the input program and to reduce
instrumentation overhead.
SIS [25] targets a performance measurement system
based on the TAU performance framework. TAU is an
integrated toolkit for performance instrumentation, mea-
surement, and analysis for parallel, multi-threaded pro-
grams. The TAU measurement library provides portable
profiling and tracing capabilities, and supports access to
hardware counters. SIS automatically instruments parallel
programs under VFC by using the TAU instrumentation
library and builds on the abstract syntax tree of
VFC and on the TAU measurement system to create the
dynamic code region call graph (see Section 4). The main
functionality of SIS is given as follows:
Automatic instrumentation of pre-defined code regions
(loops, procedures, I/O statements, HPF INDEPENDENT
loops, OpenMP PARALLEL loops,
OpenMP SECTIONS, MPI send/receive, etc.) for
various performance overheads by using command-line
options.
Manual instrumentation through SIS directives
which are inserted in the program. These directives
also allow to define user-defined code regions for instrumentation
and to control the instrumentation
overhead and the size of performance data gathered
during execution of the program.
[Figure 1 shows the architecture of SCALEA: the input program (Fortran MPI, OpenMP, or hybrid) is instrumented automatically with SIS or manually, then compiled, linked, and executed on the target machine with the SIS runtime system (SISPROFILING, TAUSIS, PAPI). The resulting profile/trace files and raw performance data are pre-processed into the dynamic code region call graph and, together with the instrumentation description file, stored in the performance data repository (intermediate, training sets, and performance databases). Post-execution analysis with sisprofile, sisoverhead, pprof, racy, and vampir computes timing results, hardware counter results, and load imbalance, which can be visualized with racy and vampir.]

Figure 1: Architecture of SCALEA
Manual instrumentation to turn on/off profiling for
a given code region.
A pre-processing phase of SCALEA filters and extracts
all relevant performance information from profiles/trace
files which yields filtered performance data and the dynamic
code region call graph (DRG). The DRG reflects
the dynamic relationship between code regions and its
subregions and is used for a precise overhead analysis
for every individual sub-region. This is in contrast to
existing approaches that are based on the conventional
call graph which considers only function calls but not
other code regions. Post-execution performance analysis
also employs a training set method to determine specific
information (e.g. time penalty for every cache miss over-
head, overhead of probes, time to access a lock, etc.)
for every target machine of interest. In the following we
describe the SCALEA instrumentation system and the
instrumentation description file. More details about SIS
and SCALEA's post-execution performance analysis can
be found in [24, 25].
2.1 SCALEA Instrumentation System
Based on user-provided command-line options or di-
rectives, SIS inserts instrumentation code in the program
which will collect all performance data of inter-
est. SIS supports the programmer to control profiling/tracing
and to generate performance data through
selective instrumentation of specific code region types
(loops, procedures, I/O statements, HPF INDEPENDENT
loops, OpenMP PARALLEL loops, OpenMP
SECTIONS, OpenMP CRITICAL, MPI barrier state-
ments, etc. SIS also enables instrumentation of arbitrary
code regions. Finally, instrumentation can be
turned on and off by a specific instrumentation directive.
In order to measure arbitrary code regions SIS provides
the following instrumentation:
!SIS$ CR BEGIN
code region
!SIS$ CR END
The directives !SIS$ CR BEGIN and !SIS$ CR END
must be inserted by the programmer, respectively, before
the region starts and after it finishes. Note that
there can be several entry and exit nodes for a code re-
gion. Appropriate directives must be inserted by the
IDF Entry          Description
id                 code region identifier
type               code region type
file               source file identifier
unit               program unit identifier that encloses this region
line start         line number where this region starts
column start       column number where this region starts
line end           line number where this region ends
column end         column number where this region ends
performance data   performance data collected or computed for this region
aux                auxiliary information
Table 1: Contents of the instrumentation description file (IDF)
programmer in every entry and exit node of a given code
region. Alternatively, compiler analysis can be used to
automatically determine these entry and exit nodes.
Furthermore, SIS provides specific directives in order
to control tracing/profiling. The directives MEASURE
ENABLE and MEASURE DISABLE allow the
programmer to turn on and off tracing/profiling of a
program code region.
For instance, the following example instruments a portion
of an OpenMP pricing code version (see Section 5.2),
where for the sake of demonstration, the call to function
RANDOM PATH is not measured by using the facilities
to control proling/tracing as mentioned above.
Note that SIS directives are inserted by the programmer; based on these, SCALEA automatically instruments the code.
2.2 Instrumentation Description File
A crucial aspect of performance analysis is to relate performance
information back to the original input program.
During instrumentation of a program, SIS generates an instrumentation description file (IDF) which correlates profiling, trace, and overhead information with the corresponding code regions. The IDF maintains a variety of information for every instrumented code region (see Table 1).
A code region type describes the type of the code region, for instance, entire program, outermost loop, read statement, OpenMP SECTION, OpenMP parallel loop, MPI barrier, etc. The program unit corresponds to a subroutine or function which encloses the code region. The IDF entry for performance data is actually a link to a separate repository that stores this information. Note that the information stored in the IDF can actually be made a runtime data structure to compute performance overheads or properties during execution of the program. The IDF also helps to keep instrumentation code minimal: for every probe we insert only a single identifier that relates the associated probe timer or counter to the corresponding code region.
3 Classification of Temporal Overheads
According to Amdahl's law [1], the theoretically best sequential algorithm takes time Ts to finish the program, and Tp is the time required to execute the parallel version with p processors. The temporal overhead of a parallel program is defined by T = Tp - Ts/p; T reflects the difference between achieved and optimal parallelization. T can be divided into Ti and Tu such that T = Ti + Tu, where Ti is the overhead that can be identified and Tu is the overhead fraction which could not be analyzed in detail. In theory T can never be negative, which implies that the speedup Ts/Tp can never exceed p [16]. However, in practice it occurs that temporal overhead can become negative due to super-linear speedup of applications. This effect is commonly caused by an increased available cache size. In Figure 2 we give a classification of temporal overheads based on which the performance analysis of SCALEA is conducted:
Figure 2: Temporal overheads classification
  Data movement: local memory access (level 2 to level 1, level 3 to level 2, ..., level n to level n-1); remote memory access (send/receive, put/get)
  Synchronization: barriers, locks, conditional variable
  Control of parallelism: scheduling, inspector/executor, fork/join
  Additional computation: algorithm change, compiler change, front-end normalization
  Loss of parallelism: unparallelized code, replicated code, partially parallelized code
  Unidentified
Data movement corresponds to any data transfer
within a single address space of a process (local
memory access) or between processes (remote memory
access).
Synchronization (e.g. barriers and locks) is used
to coordinate processes and threads when accessing
data, maintaining consistent computations and
data, etc.
Control of parallelism (e.g. fork/join operations and
loop scheduling) is used to control and manage the
parallelism of a program and can be caused by run-time
library, user, and compiler operations.
Additional computation reflects any change of the original sequential program, including algorithmic or compiler changes to increase parallelism (e.g., by eliminating data dependences) or data locality (e.g., through changing data access patterns).
Loss of parallelism is due to imperfect parallelization of a program, which can be further classified as follows: unparallelized code (executed by only one processor), replicated code (executed by all processors), and partially parallelized code (executed by more than one but not all processors).
Unidentied overhead corresponds to the overhead
that is not covered by the above categories.
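To make this accounting concrete, here is a minimal sketch (a hypothetical helper, not part of SCALEA) that splits the total temporal overhead T = Tp - Ts/p into the identified part Ti (the sum of measured overhead categories) and the unidentified remainder Tu:

```python
def temporal_overhead(t_seq, t_par, p, identified):
    """Split the temporal overhead T = Tp - Ts/p of a parallel run into
    the identified part Ti (sum of measured overhead categories, e.g.
    data movement, synchronization) and the remainder Tu = T - Ti."""
    t = t_par - t_seq / p            # total temporal overhead
    t_i = sum(identified.values())   # overhead attributed to known categories
    t_u = t - t_i                    # fraction that could not be analyzed
    return t, t_i, t_u

# Hypothetical measurements: Ts = 100 s sequential, Tp = 30 s on 4 CPUs.
t, t_i, t_u = temporal_overhead(100.0, 30.0, 4, {"sync": 2.0, "ctrl": 1.0})
```

A negative t here would correspond to the super-linear speedup case discussed above.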
Note that the above-mentioned classification has been stimulated by [6] but differs in several respects. In [6], synchronization is part of information movement, load imbalance is a separate overhead, local and remote memory are merged into a single overhead class, loss of parallelism is split into two classes, and unidentified overhead is not considered at all. Load imbalance, in our opinion, is not an overhead but represents a performance property that is caused by one or more overheads.
4 Dynamic Code Region Call Graph
Every program consists of a set of code regions which can
range from a single statement to the entire program unit.
A code region can be entered and exited through multiple entry and exit control flow points, respectively (see Figure 3). In most cases, however, code regions are single-entry single-exit code regions.
In order to measure the execution behavior of a code
region, the instrumentation system has to detect all entry
and exit nodes of a code region and insert probes
at these nodes. Basically, this task can be done with the
support of a compiler or guided through manual insertion
of directives. Figure 3 shows an example of a code region
with its entry and exit nodes. To select an arbitrary code region, the user marks two statements as the entry and exit statements (which are at the same time entry and exit nodes) of the code region, e.g., by using SIS directives [25]. Through compiler analysis, SIS then automatically tries to determine all other entry and exit nodes of the code region; each node represents a statement in the program. The instrumentation tries to detect all these nodes and automatically inserts probes before and after all entry and exit nodes, respectively.
Code regions can overlap. SCALEA currently does not support instrumentation of overlapping code regions. The current implementation of SCALEA mainly supports instrumentation of single-entry multiple-exit code regions. We are about to enhance SIS to also support multiple-entry multiple-exit code regions.
4.1 Dynamic Code Region Call Graph
SCALEA has a set of predefined code regions which are classified into common (e.g., program, procedure, loop, function call, statement) and programming-paradigm-specific code regions (MPI calls, HPF INDEPENDENT loops, OpenMP parallel regions, loops, and sections, etc.). Moreover, SIS provides directives to define arbitrary code regions (see Section 2.1) in the input program.
Based on code regions we can define a new data structure called the dynamic code region call graph (DRG):

A dynamic code region call graph (DRG) of a program Q is defined by a directed flow graph with a set of nodes R, a set of edges E, and a start node s. A node r in R represents a code region which is executed at least once during the runtime of Q. An edge (r1, r2) in E indicates that a code region r2 is called inside of r1 during execution of Q; r2 is then a dynamic sub-region of r1. The first code region executed during execution of Q is defined by s.
The DRG is used as a key data structure to conduct a detailed performance overhead analysis under SCALEA. Notice that the timing overhead of a code region r with explicitly instrumented sub-regions r1, ..., rn is given by

  T(r) = T(Start_r) + T(r1) + ... + T(rn) + T(Remain) + T(End_r)

where T(ri) is the timing overhead for an explicitly instrumented code region ri (1 <= i <= n). T(Start_r) and T(End_r) correspond to the overhead at the beginning (e.g., fork threads, redistribute data, etc.) and at the end (join threads, barrier synchronization, process reduction operation, etc.) of r. T(Remain) corresponds to the code regions that have not been explicitly instrumented.
Figure 3: A code region with several entry and exit points (statements, entry points, exit points, and the control flow between them)
However, we can easily compute T (Remain) as region r
is instrumented as well.
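As a small illustration (a sketch with hypothetical variable names, not SCALEA code), T(Remain) follows directly from the region's measured times:

```python
def t_remain(t_region, t_start, t_end, t_subregions):
    """T(Remain) for a region r: the region's own measured time minus
    the start/end overheads and the explicitly instrumented sub-regions
    r_1..r_n, i.e. the relation
    T(r) = T(Start_r) + sum(T(r_i)) + T(Remain) + T(End_r)
    solved for T(Remain)."""
    return t_region - t_start - t_end - sum(t_subregions)

# Hypothetical timings (seconds): region took 10 s, 1 s fork, 0.5 s join,
# two instrumented sub-regions of 2 s and 3 s.
remain = t_remain(10.0, 1.0, 0.5, [2.0, 3.0])
```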
Figure 4 shows an excerpt of an OpenMP code together with its associated DRG.
Call graph techniques have been widely used in performance analysis. Tools such as Vampir [20], gprof [11, 10], and CXperf [14] support a call graph which shows how much time was spent in each function and its children. In [7] a call graph is used to improve the search strategy for automated performance diagnosis. However, nodes of the call graph in these tools represent function calls [10, 14]. In contrast, our DRG defines a node as an arbitrary code region (e.g., function, function call, loop, statement, etc.).
Figure 4: OpenMP code excerpt with DRG (a small Fortran program instrumented with SIS probe calls such as SISF_START(5) and SISF_STOP(5); the associated DRG contains nodes such as R4 and R5)
4.2 Generating and Building the Dynamic Code Region Call Graph
Calling a code region r2 inside a code region r1 during execution of a program establishes a parent-child relationship between r1 and r2. The instrumentation library will capture these relationships and maintain them during the execution of the program. If code region r2 is called inside r1, then a data entry representing the relationship between r1 and r2 is generated and stored in appropriate profile/trace files. If a code region r is encountered that is not a child of any other code region (e.g., the code region that is executed first), an abstract code region is assigned as its parent. Every code region has a unique identifier which is included in the probe inserted by SIS and stored in the instrumentation description file.
The DRG data structure maintains the information of
code regions that are instrumented and executed. Every
thread of each process will build and maintain its own
sub-DRG when executing.
In the pre-processing phase (cf. Figure 1), the DRG of the application is built based on the individual sub-DRGs of all threads. A sub-DRG of each thread is computed by processing the profile/trace files that contain the performance data of this thread. The algorithm for generating DRGs is described in detail in [24].
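The per-thread bookkeeping behind a sub-DRG can be sketched with a region stack; the class below is a simplification with hypothetical names (the actual algorithm is the one described in [24]):

```python
class SubDRG:
    """Minimal per-thread dynamic code region call graph: nodes are the
    regions executed at least once, edges are (parent, child) pairs
    recording that the child is a dynamic sub-region of the parent."""
    def __init__(self, root="abstract-root"):
        self.nodes = set()
        self.edges = set()       # (parent, child) relationships
        self._stack = [root]     # currently active (entered) regions

    def enter(self, region_id):
        parent = self._stack[-1]
        self.nodes.add(region_id)
        self.edges.add((parent, region_id))
        self._stack.append(region_id)

    def exit(self, region_id):
        assert self._stack[-1] == region_id, "unmatched region exit"
        self._stack.pop()

# A region R5 entered inside R4, mirroring the nodes of Figure 4:
g = SubDRG()
g.enter("R4"); g.enter("R5"); g.exit("R5"); g.exit("R4")
```

A region with no enclosing region gets the abstract root as its parent, as described above.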
Figure 5: Execution time(s) of the MD application
5 Experiments
We have implemented a prototype of SCALEA which is controlled by command-line options and user directives. Code regions, including arbitrary code regions, can be selected through specific SIS directives that are inserted in the input program. Temporal performance overheads according to the classification shown in Figure 2 can be selected through command-line options. Our visualization capabilities are currently restricted to textual output. We plan to build a graphical user interface by the end of 2001. The graphical output, except for tables, of the following experiments has all been generated manually. More information on how to use SIS and post-execution analysis can be found in [25, 24].
Overhead                 2CPUs    3CPUs    4CPUs
Loss of parallelism      0.025    0.059    0.066
Control of parallelism   1.013    0.676    0.517
Synchronization          1.572    1.27     0.942
Total execution time   146.754   98.438   74.079

Table 2: Overheads (sec) of the MD application. Ti, Tu, and T are the identified, unidentified, and total overhead, respectively.
In this section, we present several experiments to demonstrate the usefulness of SCALEA. Our experiments have been conducted on Gescher [23], an SMP cluster with 6 SMP nodes (connected by Fast Ethernet), each of which comprises 4 Intel Pentium III Xeon 700 MHz CPUs with 1 MB full-speed L2 cache, 2 GByte ECC RAM, Intel Pro/100+ Fast Ethernet, and an Ultra160 36 GB hard disk. The cluster runs Linux 2.2.18-SMP patched with perfctr for hardware performance counters. We use MPICH [12] and the pgf90 compiler version 3.3 from the Portland Group Inc.
5.1 Molecular Dynamics (MD) Application
The MD program implements a simple molecular dynamics simulation in continuous real space. This program, obtained from [27], has been implemented as an OpenMP program written by Bill Magro of Kuck and Associates, Inc. (KAI).
The performance of the MD application has been measured on a single SMP node of Gescher. Figure 5 and Table 2 show the execution time behavior and the measured overheads, respectively. The results demonstrate good (nearly linear) speedup behavior. As we can see from Table 2, the total overhead is very small and large portions of the temporal overhead can be identified.
Figure 6: The L2 cache misses/cache accesses ratio of OMP DO regions in the MD application

The time of the sequential (unparallelized) code regions does not change, as they are always executed by only one processor. Loss of parallelism for an unparallelized code region r in a program q is defined as t_r * (p-1)/p, where p processors are used to execute q and t_r is the sequential execution time of r. By increasing p it can easily be shown that the loss of parallelism increases as well, which is also confirmed by the measurements shown in Table 2.
Control of parallelism (mostly caused by loop scheduling) actually decreases with increasing number of processors. A possible explanation for this effect is that for a larger number of processors the master thread processes fewer loop scheduling phases than for a smaller number of processors. The load balancing improves by increasing the number of processors/threads in one SMP node, which at the same time decreases synchronization time.
We then examine the cache miss ratio (defined by the number of L2 cache misses divided by the number of L2 cache accesses) of the two most important OMP DO code regions, namely OMP DO COMPUTE and OMP DO UPDATE, as shown in Figure 6. This ratio is nearly zero when using only a single processor, which implies very good cache behavior for the sequential execution of this code. All data seem to fit in the L2 cache for this case. However, in a parallel version, the cache miss ratio increases substantially, as all threads process data of global arrays that are kept in private L2 caches. The cache coherency protocol causes many cache lines to be exchanged between these private caches, which induces cache misses. It is unclear, however, why the master thread has a considerably higher cache miss ratio than all other threads. Overall, the cache behavior has very little impact on the speedup of this code.

Figure 7: Execution times of the HPF+ and OpenMP/MPI version for the backward pricing application
5.2 Backward Pricing Application
The backward pricing code [8] implements the backward induction algorithm to compute the price of an interest-rate-dependent financial product, such as a variable coupon bond. Two parallel code versions have been created: first, an HPF+ version that exploits only data parallelism and is compiled to an MPI program, and second, a mixed version that combines HPF+ with OpenMP. For the latter version, VFC generates an OpenMP/MPI program. HPF+ directives are used to distribute data onto a set of SMP nodes. Within each node an OpenMP program is executed. Communication among SMP nodes is realized by MPI calls.
The execution times for both versions are shown in Figure 7. The term "all" in the legend denotes the entire program, whereas "loop" refers to the main computational loops (HPF INDEPENDENT loop and OpenMP parallel loop for versions 1 and 2, respectively). The HPF+ version performs worse than the OpenMP/MPI version, which shows almost linear speedup for up to 2 nodes (overall 8 processors). Tables 3 and 5 display the overheads for the HPF+ and mixed OpenMP/MPI version, respectively. In both cases the largest overhead is caused by the control of parallelism overhead, which rises significantly for the HPF+ version with increasing number of nodes. This effect is less severe for the OpenMP/MPI version. In order to find the cause for the high control of parallelism overhead, we use SCALEA to determine the individual components of this overhead (see Tables 4 and 6). Two routines (Update HALO and MPI Init) are mainly responsible for the high control of parallelism overhead of the HPF+ version. Update HALO updates the overlap areas of distributed arrays, which causes communication if one process requires data that is owned by another process in a different node. MPI Init initializes the MPI runtime system, which also involves communication. The HPF+ version implies a much higher overhead for these two routines compared to the OpenMP/MPI version because it employs a separate process on every CPU of each SMP node, whereas the OpenMP/MPI version uses one process per node.
5.3 LAPW0
LAPW0 [4] is a material science program that calculates the effective potential of the Kohn-Sham eigenvalue problem. LAPW0 has been implemented as a Fortran MPI code which can be run across several SMP nodes. The pgf90 compiler takes care of exchanging data between processors both within and across SMP nodes. We used SCALEA to localize the most important code regions of LAPW0, which can be further subdivided into
sequentialized code regions: FFT REAN0, FFT REAN3, FFT REAN4
parallelized code regions: Interstitial Potential, Loop 50, ENERGY, OUTPUT
The execution time behavior and speedups (based on the sequential execution time of each code region) for each of these code regions are shown in Figures 8 and 9, respectively. LAPW0 has been examined for a problem size of 36 atoms which are distributed onto the processors of a set of SMP nodes. Clearly, when using 8, 16, and 24 processors we cannot reach optimal load balance, whereas other processor counts display a much better load balance. This effect is confirmed by SCALEA (see Figure 9) for the most computationally intensive routines of LAPW0 (Interstitial Potential and Loop 50). LAPW0 scales poorly due to load imbalances and large overheads due to loss of parallelism, data movement, and synchronization; see Table 7. LAPW0 uses many BLAS and SCALAPACK library calls that are currently not instrumented by SCALEA, which is the reason for the large fraction of unidentified overhead (see Table 7). The main source of the control of parallelism overhead is MPI Init (see Figure 10). SCALEA also discovered the main subroutines that cause the loss of parallelism overhead: FFT REAN0, FFT REAN3, and FFT REAN4, all of which are sequentialized.
6 Related Work
Paraver [26] is a performance analysis tool for OpenMP/MPI programs which dynamically instruments binary codes and determines various performance parameters. This tool does not cover the same range of performance overheads supported by SCALEA. Moreover, tools that use dynamic interception mechanisms commonly have problems relating performance data back to the input program.
Ovaltine [2] measures and analyses a variety of performance overheads for Fortran77 OpenMP programs.
Paradyn [18] is an automatic performance analysis tool that uses dynamic instrumentation and searches for performance bottlenecks based on a specification language. A function call graph is employed to improve performance tuning [7].
Recent work on an OpenMP performance interface [19] based on directive rewriting has similarities to the SIS instrumentation approach in SCALEA and to SCALA [9], the predecessor system of SCALEA. Implementation of the interface (e.g., in a performance measurement library such as TAU) allows profiling and tracing to be performed. Conceivably, such an interface could be used to generate performance data that the rest of the SCALEA system could analyze.
The TAU [22, 17] performance framework is an integrated toolkit for performance instrumentation, measurement, and analysis for parallel, multithreaded programs. SCALEA uses the TAU instrumentation library as one of its tracing libraries.
PAPI [5] specifies a standard API for accessing hardware performance counters available on most modern microprocessors. SCALEA uses the PAPI library for measuring hardware counters.
gprof [11, 10] is a compiler-based profiling framework that mostly analyses the execution behavior and counts of functions and function calls.
VAMPIR [20] is a performance analysis tool that processes trace files generated by VAMPIRtrace [21]. It supports various performance displays, including timelines and statistics that are visualized together with call graphs and the source code.
7 Conclusions and Future Work
In this paper, we described SCALEA, which is a performance analysis system for distributed and parallel programs. SCALEA currently supports performance analysis for OpenMP, MPI, HPF, and mixed parallel programs (e.g., OpenMP/MPI).
SCALEA is based on a novel classification of performance overheads for shared and distributed memory parallel programs. SCALEA is among the first performance analysis tools that combine source code and hardware profiling in a single system, which significantly extends the scope of possible overheads that can be measured and examined. Specific instrumentation and performance analysis is conducted to determine each category of overhead for individual code regions. Instrumentation can be done fully automatically or user-controlled through directives. Post-execution performance analysis is done based on performance trace files and a novel representation for code regions named the dynamic code region call graph (DRG). The DRG reflects the dynamic relationship between code regions and their subregions and enables a detailed overhead analysis for every code region. The DRG is not restricted to function calls but also covers loops, I/O and communication statements, etc. Moreover, it allows analysis of arbitrary code regions that can vary from a single statement to an entire program unit.
Processors              Sequential  1N,1P     1N,4P     2N,4P     3N,4P     4N,4P     5N,4P     6N,4P
Data movement
Control of parallelism     0        0.244258  6.59928   17.2419   28.9781   41.4966   56.4554   70.7302
Tu                                  3.139742  1.726465  1.835957  2.047059  2.3549    2.99739   2.5173
T                                   3.384     8.33775   19.10787  31.0459   43.8749   59.48315  73.2829
Total execution time     316.417    319.801   87.442    58.66     57.414    63.651    75.304    86.467

Table 3: Overheads of the HPF+ version for the backward pricing application. Ti, Tu, T are the identified, unidentified, and total overhead, respectively. 1N, 4P means 1 SMP node with 4 processors.
Processors          1N,1P     1N,4P   2N,4P   3N,4P    4N,4P    5N,4P    6N,4P
Inspector
Work distribution   0.000258  0.000285
Update HALO         0.149     3.114   9.110   16.170   24.060   33.868   43.830
MPI Init            0.005     3.462   8.113   12.784   17.420   22.568   26.860
Other

Table 4: Control of parallelism overheads of the HPF+ version for the backward pricing application.
This is in contrast to existing approaches that frequently use a call graph which considers only function calls.
Based on a prototype implementation of SCALEA, we presented several experiments for realistic codes implemented in MPI, HPF, and mixed OpenMP/MPI. These experiments demonstrated the usefulness of SCALEA in finding performance problems and their causes.
We are currently integrating SCALEA with a database to store all derived performance data. Moreover, we plan to enhance SCALEA with a performance specification language in order to support automatic performance bottleneck analysis.
The SISPROFILING measurement library for DRG and overhead profiling is an extension of TAU's profiling capabilities. Our hope is to integrate these features into future releases of the TAU performance system so that they can be offered more portably and other instrumentation tools can have access to the API.
--R
Validity of the single processor approach to achieving large scale computing capabilities
Automatic overheads profiling
VFC: The Vienna Fortran Compiler.
A scalable cross-platform infrastructure for application performance tuning using hardware counters
A hierarchical classification of overheads in parallel programs
A callgraph-based search strategy for automated performance diagnosis
Pricing Constant Maturity Floaters with Embedded Options Using Monte Carlo Simulation.
GNU gprof.
A call graph execution profiler
The MPI standard for message passing
CXperf User's Guide
High Performance Fortran Forum.
Introduction to Parallel Computing: design and analysis of parallel algorithms
Performance technology for complex parallel and distributed systems
Towards a performance tool interface for openmp: An approach based on directive rewriting.
VAMPIR: Visualization and analysis of MPI resources.
Vampirtrace 2.0 Installation and User's Guide
Portable profiling and tracing for parallel, scientific applications using C++
Gescher system.
Scalea version 1.0: User's guide.
--TR
Introduction to parallel computing
A high-performance, portable implementation of the MPI message passing interface standard
Portable profiling and tracing for parallel, scientific applications using C++
Execution-driven performance analysis for distributed and parallel systems
Performance technology for complex parallel and distributed systems
A scalable cross-platform infrastructure for application performance tuning using hardware counters
The Paradyn Parallel Performance Measurement Tool
A Callgraph-Based Search Strategy for Automated Performance Diagnosis (Distinguished Paper)
A hierarchical classification of overheads in parallel programs
The MPI Standard for Message Passing
Gprof
--CTR
Thomas Fahringer, Clovis Seragiotto Junior, Modeling and detecting performance problems for distributed and parallel programs with JavaPSL, Proceedings of the 2001 ACM/IEEE conference on Supercomputing (CDROM), p.35-35, November 10-16, 2001, Denver, Colorado
Ming Wu , Xian-He Sun, Grid harvest service: a performance system of grid computing, Journal of Parallel and Distributed Computing, v.66 n.10, p.1322-1337, October 2006 | distributed and parallel systems;performance analysis;performance overhead classification |
582418 | Cumulated gain-based evaluation of IR techniques. | Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance IR techniques and allow interpretation, for example, from the user point of view. | 2. CUMULATED GAIN -BASED MEASUREMENTS
2.1 Direct Cumulated Gain
When examining the ranked result list of a query, it is obvious that:
highly relevant documents are more valuable than marginally relevant documents,
and
the greater the ranked position of a relevant document, the less valuable it is for the
user, because the less likely it is that the user will ever examine the document.
The first point leads to comparison of IR techniques through test queries by their cumulated gain by document rank. In this evaluation, the relevance score of each document is used as a gained value measure for its ranked position in the result, and the gain is summed progressively from ranked position 1 to n. Thus the ranked document lists (of some determined length) are turned into gained value lists by replacing document IDs with their relevance scores. Assume that the relevance scores 0 - 3 are used (3 denoting high value, 0 no value). Turning document lists up to rank 200 into corresponding value lists gives vectors of 200 components, each having the value 0, 1, 2 or 3. For example:

  G' = <3, 2, 3, 0, 0, 1, 2, 2, 3, 0, ...>

1 For a discussion of the degree of relevance and the probability of relevance, see Robertson and Belkin [1978].
The cumulated gain at ranked position i is computed by summing from position 1 to i when i ranges from 1 to 200. Formally, let us denote position i in the gain vector G by G[i]. Now the cumulated gain vector CG is defined recursively as the vector CG where:

  CG[i] = G[1],            if i = 1
          CG[i-1] + G[i],  otherwise        (1)

For example, from G' we obtain CG' = <3, 5, 8, 8, 8, 9, 11, 13, 16, 16, ...>. The cumulated gain at any rank may be read directly; for example, at rank 7 it is 11.
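The CG recursion is simply a running sum; a short sketch, using the first ten components of the example gain vector from the text:

```python
def cumulated_gain(g):
    """CG[1] = G[1]; CG[i] = CG[i-1] + G[i] for i > 1 (Eq. 1)."""
    cg, total = [], 0
    for score in g:
        total += score
        cg.append(total)
    return cg

G = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]   # first ten components of G'
CG = cumulated_gain(G)               # [3, 5, 8, 8, 8, 9, 11, 13, 16, 16]
```

Reading the list at index 6 (rank 7) gives 11, matching the text.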
2.2 Discounted Cumulated Gain
The second point above stated that the greater the ranked position of a relevant document,
the less valuable it is for the user, because the less likely it is that the user will ever examine
the document due to time, effort, and cumulated information from documents already
seen. This leads to comparison of IR techniques through test queries by their cumulated
gain based on document rank with a rank-based discount factor. The greater the rank, the
smaller share of the document score is added to the cumulated gain.
A discounting function is needed which progressively reduces the document score as its rank increases, but not too steeply (e.g., as division by rank), to allow for user persistence in examining further documents. A simple way of discounting with this requirement is to divide the document score by the log of its rank. For example, log_2 1024 = 10; thus a document at position 1024 would still get one tenth of its face value. By selecting the base of the logarithm, sharper or smoother discounts can be computed to model varying user behavior. Formally, if b denotes the base of the logarithm, the cumulated gain vector with discount DCG is defined recursively as the vector DCG where:

  DCG[i] = CG[i],                        if i < b
           DCG[i-1] + G[i] / log_b(i),   if i >= b        (2)
Note that we must not apply the logarithm-based discount at rank 1, because log_b(1) = 0. Moreover, we do not apply the discount for ranks less than the logarithm base (it would give them a boost). This is also realistic, since the higher the base, the lower the discount and the more likely the searcher is to examine the results at least up to the base rank (say 10).
For example, let b = 2. From G' given in the preceding section we obtain DCG' = <3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61, ...>.
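The discounted variant can be sketched the same way, reproducing the b = 2 example from the text (values rounded to two decimals):

```python
import math

def discounted_cumulated_gain(g, b=2):
    """DCG[i] = CG[i] for ranks i < b (no discount below the log base);
    DCG[i] = DCG[i-1] + G[i] / log_b(i) for i >= b (Eq. 2)."""
    dcg = []
    for i, score in enumerate(g, start=1):
        gain = score if i < b else score / math.log(i, b)
        dcg.append(gain if i == 1 else dcg[-1] + gain)
    return dcg

G = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]
DCG = [round(x, 2) for x in discounted_cumulated_gain(G)]
# DCG == [3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61]
```

At rank b itself log_b(b) = 1, so the first discounted rank keeps its full score, which is why DCG and CG agree at ranks 1 and 2 for b = 2.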
The (lack of) ability of a query to rank highly relevant documents toward the top of
the result list should show on both the cumulated gain by document rank (CG) and the
cumulated gain with discount by document rank (DCG) vectors. By averaging over a set
of test queries, the average performance of a particular IR technique can be analyzed.
Averaged vectors have the same length as the individual ones and each component i gives
the average of the ith component in the individual vectors. The averaged vectors can directly
be visualized as gain-by-rank graphs (Section 3).
To compute the averaged vectors, we need a vector sum operation and vector multiplication by a constant. Let V = <v1, v2, ..., vk> and W = <w1, w2, ..., wk> be two vectors. Their sum is the vector V + W = <v1+w1, v2+w2, ..., vk+wk>. For a set of vectors {V1, V2, ..., Vn}, each of k components, the sum vector is generalised as V1 + V2 + ... + Vn. The multiplication of a vector by a constant r is the vector r*V = <r*v1, r*v2, ..., r*vk>. The average vector AV based on the vectors V = {V1, V2, ..., Vn} is given by the function avg-vect(V):

  avg-vect(V) = 1/n * (V1 + V2 + ... + Vn)        (3)
Now the average CG and DCG vectors for vector sets CG and DCG, over a set of test
queries, are computed by avg-vect(CG) and avg-vect(DCG).
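Component-wise averaging per avg-vect can be sketched directly (hypothetical query vectors used for illustration):

```python
def avg_vect(vectors):
    """AV = (1/n) * (V1 + V2 + ... + Vn), computed component-wise (Eq. 3).
    All vectors are assumed to have the same length."""
    n = len(vectors)
    return [sum(components) / n for components in zip(*vectors)]

# Average CG vector over two hypothetical queries:
avg_cg = avg_vect([[3, 5, 8], [1, 3, 3]])   # [2.0, 4.0, 5.5]
```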
The actual CG and DCG vectors by a particular IR method may also be compared to
the theoretically best possible. The latter vectors are constructed as follows. Let there be
k, l, and m relevant documents at the relevance levels 1, 2 and 3 (respectively) for a given
request. First fill the vector positions 1 ... m with the value 3, then the positions m+1 ...
m+l with the value 2, then the positions m+l+1 ... m+l+k with the value 1, and finally the
remaining positions with the value 0. More formally, the theoretically best possible score
vector BV for a request with k, l, and m relevant documents at the relevance levels 1, 2 and
3 is constructed as follows:
BV[i] = 3, if 1 <= i <= m,
        2, if m < i <= m+l,
        1, if m+l < i <= m+l+k,
        0, otherwise
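The construction rule for BV can be sketched as follows; the vector length n is an assumed parameter for how far the evaluation extends:

```python
def ideal_gain_vector(k, l, m, n):
    """Best possible gain vector: m threes, then l twos, then k ones, padded with zeros to length n."""
    bv = [3] * m + [2] * l + [1] * k
    return (bv + [0] * n)[:n]
```

For example, `ideal_gain_vector(k=2, l=5, m=3, n=12)` gives `[3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 0, 0]`.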
A sample ideal gain vector could be I' = <3, 3, 3, 2, 2, 2, 2, 2, 1, 1, ...>.
The ideal CG and DCG vectors, as well as the average ideal CG and DCG vectors and
curves, are computed as above. Note that the curves turn horizontal when no more relevant
documents (of any level) can be found (Section 3 gives examples). They do not unrealistically
assume as a baseline that all retrieved documents could be maximally relevant.
The vertical distance between an actual (average) (D)CG curve and the theoretically
best possible (average) curve shows the effort wasted on less-than-perfect documents due
to a particular IR method. Based on the sample ideal gain vector I', we obtain the ideal
CG and DCG (b = 2) vectors:
CGI' = <3, 6, 9, 11, 13, 15, 17, 19, 20, 21, ...>
DCGI' = <3, 6, 7.89, 8.89, 9.75, 10.53, 11.24, 11.91, 12.22, 12.52, ...>
Note that the ideal vector is based on the recall base of the search topic rather than on
the result of some IR technique. This is an important difference with respect to some related
measures, e.g. the sliding ratio and satisfaction measure [Korfhage 1997].
2.3. Relative to the Ideal Measure - the Normalized (D)CG-measure
Are two IR techniques significantly different in effectiveness from each other when
evaluated through (D)CG curves? In the case of P-R performance, we may use the average
of interpolated precision figures at standard points of operation, e.g., eleven recall
levels or DCV points, and then perform a statistical significance test. The practical significance
may be judged by the Sparck Jones [1974] criteria, for example, differences
less than 5% are marginal and differences over 10% are essential. P-R performance is
also relative to the ideal performance: 100% precision over all recall levels. The (D)CG
curves are not relative to an ideal. Therefore it is difficult to assess the magnitude of the
difference of two (D)CG curves and there is no obvious significance test for the difference
of two (or more) IR techniques either. One needs to be constructed.
The (D)CG vectors for each IR technique can be normalized by dividing them by the
corresponding ideal (D)CG vectors, component by component. In this way, for any vector
position, the normalized value 1 represents ideal performance, and values in the range [0,
1) the share of ideal performance cumulated by each technique. Given an (average)
(D)CG vector V = <v1, v2, ..., vk> of an IR technique, and the (average) (D)CG vector I =
<i1, i2, ..., ik> of ideal performance, the normalized performance vector n(D)CG is obtained
by the function norm-vect(V, I) = <v1/i1, v2/i2, ..., vk/ik>.
For example, based on CG' and CGI' from above, we obtain the normalized CG vector
nCG' = <1, 0.83, 0.89, 0.73, 0.62, 0.6, 0.65, 0.68, 0.8, 0.76, ...>.
The normalized DCG vector nDCG' is obtained in a similar way from DCG' and
DCGI'. Note that, as a special case, the normalized ideal (D)CG vector is always
norm-vect(I, I) = <1, 1, ..., 1> when I is the ideal vector.
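The normalization is a component-wise division, sketched here in Python (assuming no zero components in the ideal vector over the evaluated ranks):

```python
def norm_vect(v, ideal):
    """norm-vect(V, I): divide an actual (D)CG vector by the ideal one, component by component."""
    return [a / b for a, b in zip(v, ideal)]
```

For example, `[round(x, 2) for x in norm_vect([3, 5, 8], [3, 6, 9])]` gives `[1.0, 0.83, 0.89]`.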
The area between the normalized ideal (D)CG vector and the normalized (D)CG vector
represents the quality of the IR technique. Normalized (D)CG vectors for two or more
techniques also have a normalized difference. These can be compared in the same way
as P-R curves for IR techniques. The average of a (D)CG vector (or its normalized variation),
up to a given ranked position, summarizes the vector (or performance) and is analogous
to the non-interpolated average precision of a DCV curve up to the same given
ranked position. The average of a (n)(D)CG vector V up to the position k is given by:
avg-pos(V, k) = (v1 + v2 + ... + vk) / k
These vector averages can be used in statistical significance tests in the same way as
average precision over standard points of operation, for example, eleven recall levels or
DCV points.
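As a sketch, this average up to position k is just the mean of the first k components:

```python
def avg_pos(v, k):
    """Average of a (n)(D)CG vector over its first k components."""
    return sum(v[:k]) / k
```

For example, `avg_pos([1, 2, 3, 4], 2)` gives `1.5`.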
2.4. Comparison to Earlier Measures
The novel measures have several advantages when compared with several previous and
related measures. The average search length (ASL) measure [Losee 1998] estimates the
average position of a relevant document in the retrieved list. The expected search length
(ESL) measure [Korfhage 1997; Cooper 1968] is the average number of documents that
must be examined to retrieve a given number of relevant documents. Both are dichotomous:
they do not take the degree of document relevance into account. The former also is
heavily dependent on outliers (relevant documents found late in the ranked order).
The normalized recall measure (NR for short; Rocchio [1966] and Salton and McGill
[1983]), the sliding ratio measure (SR for short; Pollack [1968] and Korfhage [1997]),
and the satisfaction - frustration - total measure (SFT for short; Myaeng and Korfhage
[1990] and Korfhage [1997]) all seek to take into account the order in which documents
are presented to the user. The NR measure compares the actual performance of an IR
technique to the ideal one (when all relevant documents are retrieved first). Basically it
measures the area between the ideal and the actual curves. NR does not take the degree of
document relevance into account and is highly sensitive to the last relevant document
found late in the ranked order.
The SR measure takes the degree of document relevance into account and actually
computes the cumulated gain and normalizes this by the ideal cumulated gain for the
same retrieval result. The result thus is quite similar to our nCG vectors. However, SR is
heavily dependent on the retrieved list size: with a longer list the ideal cumulated gain
may change essentially and this affects all normalized SR ratios from rank one onwards.
Because our nCG is based on the recall base of the search topic, the first ranks of the ideal
vector are not affected at all by extension of the evaluation to further ranks. Improving on
normalized recall, SR is not dependent on outliers, but it is too sensitive to the actual retrieved
set size. SR does not have the discount feature of our (n)DCG measure.
The SFT measure consists of three components similar to the SR measure. The satisfaction
measure only considers the retrieved relevant documents, the frustration measure
only the irrelevant documents, and the total measure is a weighted combination of the
two. Like SR, SFT also assumes the same retrieved list of documents, which is obtained
in different orders by the IR techniques to be compared. This is an unrealistic assumption
for comparison since for any retrieved list size n, when n << N (the database size), different
techniques may retrieve quite different documents - that is the whole idea (!). A
strong feature of SFT comes from its capability of punishing an IR technique for retrieving
irrelevant documents while rewarding for the relevant ones. SFT does not have the
discount feature of our nDCG measure.
The relative relevance and ranked half life measures [Borlund and Ingwersen 1998;
Borlund 2000] were developed for interactive IR evaluation. The relative relevance (RR
for short) measure is based on comparing the match between the system-dependent probability
of relevance and the user-assessed degree of relevance, the latter by the real person-in-need
or a panel of assessors. The match is computed by the cosine coefficient
[Borlund 2000] when the same ranked IR technique output is considered as vectors of
relevance weights as estimated by the technique, by the user, or by the panel. RR is
(intended as) an association measure between types of relevance judgments, and is not directly
a performance measure. Of course, if the cosine between the IR technique scores
and the user relevance judgments is low, the technique cannot perform well from the user
point of view. The ranked order of documents is not taken into account.
The ranked half life (RHL for short) measure gives the median point of accumulated
relevance for a given query result. It thus improves on ASL by taking the degree of
document relevance into account. Like ASL, RHL is dependent on outliers. The RHL
may also be the same for quite differently performing queries. RHL does not have the
discount feature of DCG.
The strengths of the proposed CG, DCG, nCG and nDCG measures can now be
summarized as follows:
- They combine the degree of relevance of documents and their rank (affected by their
probability of relevance) in a coherent way.
- At any number of retrieved documents examined (rank), CG and DCG give an estimate
of the cumulated gain as a single measure, no matter what the recall base size is.
- They are not heavily dependent on outliers (relevant documents found late in the
ranked order) since they focus on the gain cumulated from the beginning of the result
up to any point of interest.
- They are easy to interpret: they are more direct than P-R curves, explicitly giving
the number of documents for which each n(D)CG value holds. P-R curves do not
make the number of documents explicit for a given performance level and may therefore
mask bad performance [Losee 1998].
In addition, the DCG measure has the following further advantages:
- It realistically weights down the gain received through documents found later in the
ranked results.
- It allows modeling user persistence in examining long ranked result lists by adjusting
the discounting factor.
Furthermore, the normalized nCG and nDCG measures support evaluation:
- They represent performance as relative to the ideal based on a known (possibly
large) recall base of graded relevance judgments.
- The performance differences between IR techniques are also normalized in relation
to the ideal, thereby supporting the analysis of performance differences.
Järvelin and Kekäläinen have earlier proposed recall and precision based evaluation
measures to work with graded relevance judgments [Järvelin and Kekäläinen 2000;
Kekäläinen and Järvelin 2002a]. They first propose the use of each relevance level separately
in recall and precision calculation. Thus different P-R curves are drawn for each
level. Performance differences at different relevance levels between IR techniques may
thus be analyzed. Furthermore, they generalize recall and precision calculation to directly
utilize graded document relevance scores. They consider precision as a function of recall
and demonstrate that the relative effectiveness of IR techniques, and the statistical significance
of their performance differences, may vary according to the relevance scales used.
The proposed measures are similar to standard IR measures while taking document relevance
scores into account. They do not have the discount feature of our (n)DCG measure.
The measures proposed in this article are directly user-oriented in calculating the gain
cumulated by consulting an explicit number of documents. P-R curves tend to hide this
information. The generalized P-R approach extends to DCV (Document Cut-off Value)
based recall and precision as well, however.
The limitations of the measures are considered in Chapter 4.
3. CASE STUDY: COMPARISON OF SOME TREC-7 RESULTS AT DIFFERENT RELEVANCE LEVELS
We demonstrate the use of the proposed measures in a case study testing runs from
TREC-7 ad hoc track with binary and non-binary relevance judgments. We give the results
as CG and DCG curves, which exploit the degrees of relevance. We further show
the results as normalized nCG and nDCG curves, and present the results of a statistical
test based on the averages of n(D)CG vectors.
3.1 TREC-7 Data
The seventh Text Retrieval Conference (TREC-7) had an ad hoc track in which the participants
produced queries from topic statements - altogether 50 - and ran those queries
against the TREC text document collection. The collection includes about 528,000
documents, or 1.9 GB data. Participants returned lists of the best 1000 documents retrieved
for each topic. These lists were evaluated against binary relevance judgments
provided by the TREC organizers (National Institute of Standards and Technology,
NIST). Participants were allowed to submit up to three different runs, which could be
based on different queries or different retrieval methods. [Voorhees and Harman 1999.]
The ad hoc task had two subtracks, automatic and manual, with different query construction
techniques. An automatic technique means deriving a query from a topic statement
without manual intervention; a manual technique is anything else. [Voorhees and Harman
1999.]
In the case study, we used result lists for 20 topics by five participants from TREC-7
ad hoc manual track. These topics were selected because of the availability of non-binary
relevance judgments for them (see Sormunen [2002]).2
3.2 Relevance Judgments
The non-binary relevance judgments were obtained by re-judging documents judged
relevant by NIST assessors and about 5 % of irrelevant documents for each topic. The
new judgments were made by six Master's students of Information Studies, all of them
fluent in English though not native speakers. The relevant and irrelevant documents were
pooled, and the judges did not know the number of documents previously judged relevant
or irrelevant in the pool. [Sormunen 2002.]
The assumption about relevance in the re-judgment process was topicality. This
agrees with the TREC judgments for the ad hoc track: documents are judged one by one;
general information within the limitations given in the topic's narrative is searched, not details
in the sense of question answering. The new judgments were made on a four-point scale:
1. Irrelevant document. The document does not contain any information about the
topic.
2. Marginally relevant document. The document only points to the topic. It does not
contain more or other information than the topic statement.
3. Fairly relevant document. The document contains more information than the topic
statement but the presentation is not exhaustive. In case of a multi-faceted topic, only
some of the sub-themes are covered.
4. Highly relevant document. The document discusses the themes of the topic exhaustively.
In case of multi-faceted topics, all or most sub-themes are covered.
2 The numbers of the topics are: 351, 353, 355, 358, 360, 362, 364, 365, 372, 373, 377, 378, 384, 385, 387, 392, 393, 396, 399, 400. For details see http://trec.nist.gov/data/topics_eng/topics.351-400.gz.
Altogether 20 topics from TREC-7 and topics from TREC-8 were re-assessed. In Table 1
the results of the re-judgment are shown with respect to the original TREC judgments.
It is obvious that almost all originally irrelevant documents were also assessed
irrelevant in re-judgment (93.8 %). Of the TREC relevant documents 75 % were judged
relevant at some level, and 25 % irrelevant. This seems to indicate that the re-assessors
have been somewhat stricter than the original judges. The great overlap in the irrelevant
documents indicates that the new judgments are reliable. However, in the case study we are
not interested in comparing results based on different judgments but in showing the effects of
utilizing non-binary relevance judgments in evaluation. Thus we do not use the original
TREC judgments in any phase of the case study.
Level of relevance    TREC relevant        TREC irrelevant      Total
                      # of docs   %        # of docs   %        # of docs   %
0 (irrelevant)        691         25.0%    2780        93.8%    3471        60.5%
1 (marginal)          1004        36.2%    134         4.5%     1138        19.8%
Total                 2772        100.0%   2965        100.0%   5737        100.0%

Table 1. Distribution of new relevance judgments in relation to the original TREC judgments.
In the subset of 20 topics, among all relevant documents the share of
highly relevant documents was 20.1%, the share of fairly relevant documents was 30.5%,
and that of marginal documents was 49.4%.
3.3 The Application of the Evaluation Measures
We ran the TREC-7 result lists of five participating groups against the new, graded relevance
judgments. For the cumulated gain evaluations, we varied the logarithm base and the
weighting of the relevance levels as parameters, as follows.
1. We tested different relevance weights at different relevance levels. First, we replaced
document relevance levels 0, 1, 2, 3 with binary weights, i.e. we gave documents at
level 0 weight 0, and documents at levels 1-3 weight 1 (weighting scheme 0-1-1-1
for the four point scale). Then, we replaced the relevance levels with weights 0, 0, 0,
1, to test the other extreme where only the highly relevant documents are valued. The
last weighting scheme, 0-1-10-100, is between the extremes; the highly relevant
documents are valued a hundred times more than marginally relevant documents, and
ten times more than fairly relevant ones.
documents may affect the relative effectiveness of IR techniques as also pointed out
by Voorhees [2001]. The first and last weighting schemes only are shown in graphs
because the 0-0-0-1 scheme is very similar to the last one in appearance.
2. The logarithm bases 2 and 10 were tested for the DCG vectors. The base 2 models
impatient users, base 10 persistent ones. Since the results do not vary
markedly with the logarithm base, we show only those for the logarithm base 2.
We also prefer the stricter test condition that the smaller logarithm base provides.
3. The average actual CG and DCG vectors were compared to the ideal average vectors.
4. The average actual CG and DCG vectors were normalized by dividing them with the
ideal average vectors.
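Step 1 above amounts to re-mapping the judged relevance levels through a weighting scheme before computing the gain vectors. A minimal sketch (the scheme tuples are the ones named in the text; the function name is my own):

```python
def apply_weights(levels, scheme):
    """Map judged relevance levels 0-3 to gain values under a weighting scheme."""
    return [scheme[level] for level in levels]

BINARY    = (0, 1, 1, 1)      # scheme 0-1-1-1
HIGH_ONLY = (0, 0, 0, 1)      # scheme 0-0-0-1
STEEP     = (0, 1, 10, 100)   # scheme 0-1-10-100
```

For example, `apply_weights([0, 1, 2, 3, 2], STEEP)` gives `[0, 1, 10, 100, 10]`.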
3.4 Cumulated Gain
Figures 1(a) and 1(b) present the CG vector curves for the five runs at ranks 1 - 100, and
the ideal curves. Figure 1(a) shows the weighting scheme 0-1-1-1, and 1(b) the scheme
0-1-10-100. In the ranked result list, highly relevant documents add either 1 or 100 points to
the cumulated gain; fairly relevant documents add either 1 or 10 points; marginally relevant
documents add 1 point; and irrelevant documents add 0 points to the gain.
Figure 1(a). Cumulated gain (CG) curves, binary weighting (0-1-1-1; x-axis: rank, y-axis: CG).
Figure 1(b). Cumulated gain (CG) curves, non-binary weighting (0-1-10-100; x-axis: rank, y-axis: CG).
The different weighting schemes change the position of the curves compared to each
other. For example, in Figure 1(a) - the binary weighting scheme - the performance of
(run) D is close to that of C; when highly relevant documents are given more weight, D is
more similar to B, and C and E are close in performance. Note that the graphs have different
scales because of the weighting schemes.
In Figure 1(a) the best possible curve starts to level off at the rank 100, reflecting the
fact that at the rank 100 practically all relevant documents have been found. In Figure
1(b) it can be observed that the ideal curve has already found most of the fairly and highly
relevant documents by the rank 50. This, of course, reflects the sizes of the recall bases - the
average number of documents at relevance levels 2 and 3 per topic is 29.9. The best system
hangs below the ideal by 0 - 39 points with binary weights (1(a)), and 70 - 894
points with non-binary weights (1(b)). Note that the differences are not greatest at rank
100 but often earlier. The other runs remain further below by 0 - 6 points with binary
weights (1(a)), and 0 - 197 points with non-binary weights (1(b)). The differences between
the ideal and all actual curves are all bound to diminish when the ideal curve levels
off.
The curves can also be interpreted in another way: In Figure 1(a) one has to retrieve
30 documents by the best run, and 90 by the worst run, in order to gain the benefit that
could theoretically be gained by retrieving only 10 documents (the ideal curve). In this
respect the best run is three times as effective as the worst run. In Figure 1(b) one has to
retrieve far more documents by the best run to get the benefit theoretically obtainable at
rank 5; the worst run does not provide the same benefit even at rank 100.
3.5 Discounted Cumulated Gain
Figures 2(a) and 2(b) show the DCG vector curves for the five runs at ranks 1 - 100, and
the ideal curve. The log2 of the document rank is used as the discounting factor. Discounting
alone seems to narrow the differences between the systems (compare 1(a) to 2(a), and
1(b) to 2(b)). Discounting and non-binary weighting change the performance
order of the systems: in Figure 2(b), run A seems to lose and run C to benefit.
In Figure 2(a), the ideal curve levels off around the rank 100. The best run hangs below
the ideal by a small margin. The other runs remain further below by 0.25 - 1 points. Thus,
with the discounting factor and binary weighting, the runs seem to perform equally. In Figure 2(b),
the ideal curve levels off around the rank 50. The best run hangs below by 71 - 408 points.
The other runs remain further below by 13 - 40 points. All the actual curves still grow at
the rank 100, but beyond that the differences between the best possible and the other
curves gradually become stable.
Figure 2(a). Discounted cumulated gain (DCG) curves, binary weighting (0-1-1-1).
Figure 2(b). Discounted cumulated gain (DCG) curves, non-binary weighting (0-1-10-100).
These graphs can also be interpreted in another way: In Figure 2(a) one has to expect
the user to examine 40 documents by the best run in order to gain the (discounted) benefit
that could theoretically be gained by retrieving only 5 documents. The worst run reaches
that gain around rank 95. In Figure 2(b), none of the runs gives the gain that would theoretically
be obtainable at rank 5. Given the worst run, the user has to examine 50 documents
in order to get the (discounted) benefit that is obtained with the best run at rank 10. In that
respect the difference in the effectiveness of the runs is essential.
One might argue that if the user goes down to, say, 50 documents, she gets the real
value, not the discounted one, and that therefore the DCG data should not be used for effectiveness
comparison. Although this may hold for the user situation, the DCG-based comparison
is valuable for the system designer. Because the user is less and less likely to scan further,
documents placed late do not retain their full relevance value; a retrieval technique
placing relevant documents later in the ranked results should not be credited as
much as another technique ranking them earlier.
3.6 Normalized (D)CG Vectors and Statistical Testing
Figures 3(a) and 3(b) show the curves for CG vectors normalized by the ideal vectors.
The curve for the normalized ideal CG vector has value 1 at all ranks. The actual normalized
CG vectors reach it in due course when all relevant documents have been found.
Differences at early ranks are easier to observe than in Figure 1. The nCG curves readily
show the differences between methods to be compared because of the same scale but they
lack the straightforward interpretation of the gain at each rank given by CG curves. In
Figure 3(b) the curves start lower than in Figure 3(a); it is obvious that highly relevant
documents are more difficult to retrieve.
Figure 3(a). Normalized cumulated gain (nCG) curves, binary weighting (0-1-1-1).
Figure 3(b). Normalized cumulated gain (nCG) curves, non-binary weighting (0-1-10-100).
Figures 4(a) and 4(b) display the normalized curves for DCG vectors. The curve for
the normalized ideal DCG vector has value 1 at all ranks. The actual normalized DCG
vectors never reach it; they start to level off around the rank 100. The effect of discounting
can be seen by comparing Figures 3 and 4, e.g. the order of the runs changes. The effect
of normalization can be detected by comparing Figure 2 and Figure 4: the differences
between the IR techniques are easier to detect and comparable.
Figure 4(a). Normalized discounted cumulated gain (nDCG) curves, binary weighting (0-1-1-1).
Figure 4(b). Normalized discounted cumulated gain (nDCG) curves, non-binary weighting (0-1-10-100).
Statistical testing of differences between query types was based on normalized average
n(D)CG vectors. These vector averages can be used in statistical significance tests in
the same way as average precision over document cut-off values. The classification we
used to label the relevance levels through numbers 0 - 3 is on an ordinal scale. Holding to
the ordinal scale suggests non-parametric statistical tests, such as the Friedman test (see
Conover [1980]). However, we have based our calculations on class weights to represent
their relative differences. The weights 0, 1, 10 and 100 denote differences on a ratio scale.
This suggests the use of parametric tests such as ANOVA provided that its assumptions
on sampling and measurement distributions are met. Next we give the grand averages of
the vectors of length 200, and the results of the Friedman test; ANOVA did not prove any
differences significant.
Table 2. n(D)CG averages over the topics and statistical significance of the results for five TREC-7 runs (Friedman test).
In Table 2, the average is first calculated for each topic, then an average is taken over
the topics. If the averages had been taken over vectors of different lengths, the results
of the statistical tests might have changed. Also, the number of topics (20) is rather small
to provide reliable results. However, even these data illuminate the behavior of the
(n)(D)CG measures.
4. DISCUSSION
The proposed measures are based on several parameters: the last rank considered, the
gain values to employ, and discounting factors to apply. An experimenter needs to know
which parameter values and combinations to use. In practice, the evaluation context and
scenario should suggest these values. Alternatively, several values and/or combinations
may be used to obtain a richer picture of IR system effectiveness under different conditions.
Below we consider the effects of the parameters. Thereafter we discuss statistical
testing, relevance judgments and limitations of the measures.
Last Rank Considered
Gain vectors of various lengths from 1 to n may be used for computing the proposed
measures and curves. If one analyzes the curves alone, the last rank does not matter.
Possible differences between the IR methods are observable in any rank region. The
gain difference at any point (or region) of the curves may be measured directly.
If one is interested in differences in average gain up to a given last rank, then the last
rank matters, particularly for nCG measurements. Suppose IR method A is somewhat
better than method B at early ranks (say, down to rank 10) but beyond them method B
starts catching up so that they are on a par at rank 50, with all relevant documents
found. If one now evaluates the methods by nCG, they might be statistically significantly
different for the ranks 1 - 10, but there probably would be no significant difference for
the ranks 1 - 100 (or down to lower positions).
If one uses nDCG in the previous case the difference earned by the method A would
be preserved due to discounting low ranked relevant documents. In this case the difference
between the methods may be statistically significant also for the ranks 1 - 100 (or
down to lower positions).
The measures themselves cannot tell how they should be applied - down to which
rank? This depends on the evaluation scenario and the sizes of recall bases. It makes
sense to produce the n(D)CG curves liberally, i.e., down to quite low ranks. The significance
of differences between IR methods, when present, can be tested for selected regions
(top n) when justified by the scenario. Our test data also demonstrate that one run may be
significantly better than another if just the top ranks are considered, while being similarly
effective if low ranks are also included (say, up to 100; see e.g. runs C and D in Figure 3).
Gain Values
Justifying different gain values for documents relevant to different degrees is inherently
quite arbitrary. It is often easy to say that one document is more relevant than another, but
the quantification of this difference still remains arbitrary. However, determining such
documents as equally relevant is another arbitrary decision, and less justified in the light
of the evidence from relevance studies [Tang, Shaw and Vevea 1999; Sormunen 2002].
Since graded relevance judgments can be provided reliably, the sensitivity of the
evaluation results to different gain quantifications can easily be tested. Sensitivity testing
is also typical in cost-benefit studies, so this is no new idea. Even if the evaluation scenario
does not advise us on the gain quantifications, evaluation through several flat-to-steep
quantifications informs us about the relative performance of IR methods better than a
single one. Voorhees [2001] used this approach in the TREC Web Track evaluation, when
she weighted highly relevant documents by factors 1-1000 in relation to marginal documents.
Varying the weighting affected the relative effectiveness order of IR systems in her test.
Our present illustrative findings based on TREC data also show that weighting affects the
relative effectiveness order of IR systems. We can observe in Figures 4(a) and (b) (Sec-
tion 3.6) that by changing from weighting 0-1-1-1, that is, flat TREC-type weights, to
weights 0-1-10-100 for the irrelevant to highly relevant documents, run D appears more
effective than the others.
Tang, Shaw and Vevea [1999] proposed seven as the optimal number of relevance
levels in relevance judgments. Although our findings are for four levels, the proposed
measures are not tightly coupled with any particular number of levels.
Discounting Factor
The choice between (n)CG and (n)DCG measures in evaluation is essential: discounting
the gain of documents retrieved late affects the order of effectiveness of runs, as we saw in
Sections 3.4 and 3.5 (Figures 1(b) and 2(b)). It is, however, again somewhat arbitrary to
apply any specific form of discounting. Consider the discounting case of the DCG function,
DCG[i] = DCG[i-1] + G[i]/df(i), where df is the discounting factor and i the current
ranked position. There are three cases of interest:
If df(i) = 1, no discounting is performed - all documents, at whatever
rank retrieved, retain their relevance score.
If df(i) = i, we have a very sharp discount - only the first documents would really
matter, which hardly is desirable or realistic for evaluation.
If df(i) = blog i, then we have a smooth discounting factor, the smoothness of which can
be adjusted by the choice of the base b. A relatively small base (b = 2) models an
impatient searcher for whom the value of late documents drops rapidly. A relatively
high base (b > 10) models a patient searcher for whom even late documents are valu-
able. A very high base (b >100) yields a very marginal discount from the practical IR
evaluation point of view.
We propose the use of the logarithmic discounting factor. However, the choice of the
base is again somewhat arbitrary. Either the evaluation scenario should advise the evaluator
on the base, or a range of bases could be tried out. Note that in the DCG function the
choice of the base alone would not affect the order of
effectiveness of IR methods, because blog i = blog a * alog i for any pair of bases a and b,
and blog a is a constant. This is the reason for applying the discounting case of DCG
only after the rank indicated by the logarithm base. This is also the point where discounting
begins, because blog b = 1. In the rank region from 2 to b-1, discounting would be replaced by
boosting (blog i < 1 there).
There are two borderline cases for the logarithm base. When the base b (b > 1) approaches
1, discounting becomes very aggressive and finally only the first document
would matter - hardly realistic. On the other hand, if b approaches infinity, then DCG
approaches CG - neither is realistic. We believe that the base range 2 to 10 serves most
evaluation scenarios well.
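The effect of the base can be checked numerically; the sketch below repeats the DCG recurrence given earlier on an assumed gain vector and confirms that a higher base (a more patient searcher) discounts less, leaving the DCG closer to the plain CG:

```python
import math

def dcg(gains, b):
    """DCG with base b (b >= 2): no discount below rank b, then G[i] / log_b(i)."""
    out = []
    for i, g in enumerate(gains, start=1):
        if i < b:
            out.append((out[-1] if out else 0) + g)
        else:
            out.append(out[-1] + g / math.log(i, b))
    return out

gains = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]  # assumed example gain vector
assert dcg(gains, b=10)[-1] >= dcg(gains, b=2)[-1]  # milder discount with the higher base
```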
Practical Methodological Problems
The discussion above leaves open the proper parameter combinations to use in evaluation.
This is unfortunate but also unavoidable. The mathematics work for whatever parameter
combinations and cannot advise us on which to choose. Such advice must come
from the evaluation context in the form of realistic evaluation scenarios. In research campaigns
such as TREC, the scenario(s) should be selected.
If one is evaluating IR methods for very busy users who are only willing to examine a
few best answers for their queries, it makes sense to evaluate down to shallow ranks only
(say, 30), use fairly sharp gain quantifications (say, 0-1-10-100) and a low base for the
discounting factor (say, 2). On the other hand, if one is evaluating IR methods for patient
users who are willing to dig down in the low ranked and marginal answers for their que-
ries, it makes sense to evaluate down to deep ranks (say, 200), use moderate gain quantifications
(say, 0-1-2-3) and a high base for the discounting factor (say, 10). It makes
sense to try out both scenarios in order to see whether some IR methods are superior in
one scenario only.
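The effect of the two scenarios can be tried out numerically. The sketch below is illustrative only: the two runs and the exact DCG formulation are our own assumptions, not data from the text. It shows how the sharp (0-1-10-100) and the flat (0-1-2-3) gain quantifications can reverse the effectiveness order of two runs.

```python
import math

def dcg(gains, base=2.0):
    # Gain at rank i (1-based) divided by log_base(i), clamped so that
    # ranks up to the base are not discounted.
    return sum(g / max(1.0, math.log(i, base))
               for i, g in enumerate(gains, start=1))

# Graded relevance (0-3) of the top 5 documents of two hypothetical runs:
run_a = [0, 0, 0, 0, 3]   # one highly relevant document, ranked late
run_b = [1, 1, 1, 1, 0]   # several marginal documents, ranked early

flat  = {0: 0, 1: 1, 2: 2, 3: 3}      # patient-user scenario (0-1-2-3)
sharp = {0: 0, 1: 1, 2: 10, 3: 100}   # busy-user scenario (0-1-10-100)

for name, w in [("flat", flat), ("sharp", sharp)]:
    a = dcg([w[g] for g in run_a])
    b = dcg([w[g] for g in run_b])
    print(name, round(a, 2), round(b, 2))
# Under the flat weights run B wins; under the sharp weights run A wins.
```

This is exactly the kind of scenario-dependent reversal that motivates trying out both quantifications.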
When such scenarios are argued out, they can be critically assessed and defended for the choices involved. If this is not done, an arbitrary choice is committed, perhaps unconsciously. For example, precision averaged over 11 recall points with binary relevance gains models well only very patient users willing to dig deep down into the low-ranked answers, no matter how relevant vs. marginal the answers are. Clearly this is not the only scenario one should look at.
The normalized measures nCG and nDCG we propose are normalized by the best possible
behavior for each query on a rank-by-rank basis. Therefore the averages of the normalized
vectors are also less prone to the problems of recall base size variation which
plague the precision-recall measurements, whether they are based on DCVs or precision
as function of recall.
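The rank-by-rank normalization can be sketched as follows. This is an illustrative reading (the function names and example vectors are our own): the ideal vector is obtained by sorting the gains of the recall base in decreasing order, and nDCG at each rank is the ratio of the actual to the ideal DCG at that rank.

```python
import math

def dcg_vector(gains, base=2.0):
    """Cumulated DCG value at each rank (1-based), clamped log discount."""
    out, total = [], 0.0
    for i, g in enumerate(gains, start=1):
        total += g / max(1.0, math.log(i, base))
        out.append(total)
    return out

def ndcg_vector(gains, recall_base, base=2.0):
    """Normalize rank by rank against the ideal ordering of the recall
    base (all judged gains for the topic, best first)."""
    ideal = sorted(recall_base, reverse=True)[:len(gains)]
    actual = dcg_vector(gains, base)
    best = dcg_vector(ideal, base)
    return [a / b if b > 0 else 0.0 for a, b in zip(actual, best)]

recall_base = [3, 3, 2, 1, 1, 1, 0, 0]
run = [2, 3, 0, 1, 3]
print([round(v, 3) for v in ndcg_vector(run, recall_base)])
# An ideal run scores 1.0 at every rank:
print(ndcg_vector([3, 3, 2, 1, 1], recall_base))
```

Because the denominator is the per-topic ideal, topics with small and large recall bases contribute comparably to the averaged vectors.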
The cumulated gain curves illustrate the value the user actually gets, but discounted cumulated gain curves can be used to forecast the system performance with regard to a user's patience in examining the result list. For example, if the CG and DCG curves are analyzed horizontally in the case study, we may conclude that a system designer would have to expect the users to examine 100 to 500% more documents with the worse query types to collect the same gain collected with the best query types. While it is possible that persistent users go way down the result list (e.g., from 30 to 60 documents), it often is unlikely to happen, and a system requiring such behavior is, in practice, much worse than a system yielding the same gain within 50% of the documents.
Relevance Judgments
Kekäläinen and Järvelin [2002a] argue on the basis of several theoretical, laboratory and field studies that the degree of document relevance varies and that document users can distinguish between the degrees. Some documents are far more relevant than others are. Furthermore, in many studies on information seeking and retrieval, multiple-degree relevance scales have been found pertinent, while the number of degrees employed varies. It is difficult to determine, in general, how many degrees there should be. This depends on the study setting and the user scenarios. When multiple-degree approaches are justified, evaluation methods should utilize and support them.
TREC has been based on binary relevance judgments with a very low threshold for
accepting a document as relevant for a topical request - the document needs to have at
least one sentence pertaining to the request to count as relevant [TREC 2001]. This is a
very special evaluation scenario and there are obvious alternatives. In many scenarios, at
that level of contribution one would count the document at most as marginal unless the
request is factual - in which case a short factual response should be regarded as highly relevant and one not giving the facts as marginal if not irrelevant. This is completely compatible
with the proposed measures. If the share of marginal documents were high in the
test collection, then utilizing TREC-like liberal binary relevance judgments would lead
to difficulties in identifying the better techniques as such. In our data sample, about 50%
of the relevant documents were marginally relevant. Possible differences between IR
techniques in retrieving highly relevant documents might be evened up by their possible
indifference in retrieving marginal documents. The net differences might seem practically
marginal and statistically insignificant.
Statistical Testing
Holding to the ordinal scale of relevance judgments suggests non-parametric statistical
tests, such as the Wilcoxon test or the Friedman test. However, when weights are used, the scale of measurement becomes an interval or ratio scale. This suggests the use of parametric tests such as ANOVA or the t-test, provided that their assumptions on sampling and measurement distributions are met. For example, Zobel [1998] used parametric tests
when analyzing the reliability of IR experiment results. Also Hull [1993] argues that with
sufficient data parametric tests may be used. In our test case ANOVA gave a result different
from Friedman - an effect of the magnitude of the differences between the IR runs
considered. However, the data set used in the demonstration was fairly small.
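As a concrete illustration of the non-parametric option, the Friedman statistic can be computed directly from a topics-by-systems table of effectiveness scores. This is a minimal pure-Python sketch of the classic chi-square form of the statistic (without the tie-correction term); in practice a statistics package would be used, and the names below are our own.

```python
def average_ranks(scores):
    """1-based ranks of the scores, ties sharing the average rank."""
    order = sorted(range(len(scores)), key=lambda j: scores[j])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def friedman_statistic(table):
    """table[t][s] = effectiveness of system s on topic t.
    Chi-square form: 12 / (n*k*(k+1)) * sum(R_s^2) - 3*n*(k+1),
    where R_s is the rank sum of system s over the n topics."""
    n, k = len(table), len(table[0])
    rank_sums = [0.0] * k
    for row in table:
        for s, r in enumerate(average_ranks(row)):
            rank_sums[s] += r
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical nDCG scores for 4 topics x 3 systems; system 3 always best:
table = [[0.2, 0.5, 0.9], [0.1, 0.4, 0.8], [0.3, 0.6, 0.7], [0.2, 0.3, 0.9]]
print(friedman_statistic(table))  # 8.0, the maximum for n=4, k=3
```

Because only within-topic ranks enter the statistic, the test stays valid for ordinal relevance scales where the weighted gain values themselves would not support parametric analysis.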
Empirical Findings Based on the Proposed Measures
The DCG measure has been applied in the TREC Web Track 2001 [Voorhees 2001] and
in a text summarization experiment by Sakai and Sparck Jones [2001]. Voorhees' findings
are based on a three-point relevance scale. She examined the effect of incorporating
highly relevant documents (HRDs) into IR system evaluation and weighting them more
or less sharply in a DCG-based evaluation. She found that the relative effectiveness
of IR systems is affected when evaluated by HRDs. Voorhees pointed out that moderately
sharp weighting of HRDs in DCG measurement supports evaluation for HRDs but avoids
problems caused by instability due to small recall bases of HRDs in test collections. Sakai
and Sparck Jones first assigned the weight 2 to each highly relevant document, and
the weight 1 to each partially relevant document. They also experimented with other
valuations, e.g., zero for the partially relevant documents. Sakai and Sparck Jones used
log base 2 as the discounting factor to model the user's (lack of) persistence. The DCG
measure served to test the hypotheses in the summarization study. Our present demonstrative
findings based on TREC-7 data also show that weighting affects the relative effectiveness
order of IR systems. These results exemplify the usability of the cumulated
gain-based approach to IR evaluation.
Limitations
The measures considered in this paper, both the old and the new ones, have weaknesses
in three areas. Firstly, none of them take into account order effects on relevance judg-
ments, or document overlap (or redundancy). In the TREC interactive track [Over 1999],
instance recall is employed to handle this. The user-system pairs are rewarded for retrieving
distinct instances of answers rather than multiple overlapping documents. In princi-
ple, the n(D)CG measures may be used for such evaluation. Secondly, the measures considered
in Section 2.4 all deal with relevance as a single dimension while it really is multidimensional
[Vakkari and Hakala 2000]. In principle, such multidimensionality may be
accounted for in the construction of recall bases for search topics but leads to complexity
in the recall bases and in the evaluation measures. Nevertheless, such added complexity
may be worth pursuing because so much effort is invested in IR evaluation.
Thirdly, any measure based on static relevance judgments is unable to handle dynamic
changes in real relevance judgments. However, when changes in the user's relevance criteria lead to a reformulated query, an IR system should retrieve the best documents for the reformulated query. Kekäläinen and Järvelin [2002b] argue that complex dynamic interaction is a sequence of simple topical interactions and thus good one-shot performance by a retrieval system should be rewarded in evaluation. Changes in the user's information need and relevance criteria affect consequent requests and queries. While this is likely to happen, it has not been shown that this should affect the design of the retrieval techniques. Neither
has it been shown that this would invalidate the proposed or the traditional evaluation
measures.
It may be argued that IR systems should not rank just highly relevant documents to
top ranks. Consequently, they should not be rewarded in evaluation for doing so. Spink,
Greisdorf and Bateman [1998] have argued that partially relevant documents are important
for users at the early stages of their information seeking process. Therefore one might
require that IR systems be rewarded for retrieving partially relevant documents in the top
ranks.
For about 40 years IR systems have been compared on the basis of their ability to provide
relevant - or useful - documents for their users. To us it seems plausible that highly relevant documents are those people find useful. The findings by Spink, Greisdorf and Bateman do not really disqualify this belief; they rather state that students in the early stages of their information seeking tend to change their relevance criteria and problem definition, and that the number of partially relevant documents correlates with these changes.
However, if it should turn out that for some purposes IR systems should rank partially relevant documents higher than, say, highly relevant documents, our measures suit such comparisons perfectly: the documents should just be weighted accordingly. We do not intend to say how or on what criteria the relevance judgments should be made; we only propose measures that take into account differences in relevance.
While the limitations of the proposed measures are similar to those of the traditional measures, the proposed measures offer the benefits of taking the degree of document relevance into account and of modeling user persistence.
5. CONCLUSIONS
We have argued that in modern large database environments, the development and
evaluation of IR methods should be based on their ability to retrieve highly relevant
documents. This is often desirable from the user viewpoint and presents a not too liberal
test for IR techniques.
We then developed novel methods for IR technique evaluation, which aim at taking
the document relevance degrees into account. These are the CG and the DCG measures,
which give the (discounted) cumulated gain up to any given document rank in the retrieval
results, and their normalized variants nCG and nDCG, based on the ideal retrieval
performance. They are related to some traditional measures like average search length
(ASL; Losee [1998]), expected search length (ESL; Cooper [1968]), normalized recall
(NR; Rocchio [1966] and Salton and McGill [1983]), sliding ratio (SR; Pollack [1968]
and Korfhage [1997]), and satisfaction - frustration - total measure (SFT; Myaeng and
Korfhage [1990]), and ranked half-life (RHL; Borlund and Ingwersen [1998]).
The benefits of the proposed novel measures are many: They systematically combine
document rank and degree of relevance. At any number of retrieved documents examined
(rank), CG and DCG give an estimate of the cumulated gain as a single measure, no matter what the recall base size is. Performance is determined on the basis of recall bases for
search topics and thus does not vary in an uncontrollable way, which is true of measures
based on the retrieved lists only. The novel measures are not heavily dependent on outliers
since they focus on the gain cumulated from the beginning of the result up to any
point of interest. They are obvious to interpret, and do not mask bad performance. They
are directly user-oriented in calculating the gain cumulated by consulting an explicit
number of documents. P-R curves tend to hide this information. In addition, the DCG
measure realistically down-weights the gain received through documents found later in
the ranked results and allows modeling user persistence in examining long ranked result
lists by adjusting the discounting factor. Furthermore, the normalized nCG and nDCG
measures support evaluation by representing performance as relative to the ideal based on
a known (possibly large) recall base of graded relevance judgments. The performance
differences between IR techniques are also normalized in relation to the ideal thereby
supporting the analysis of performance differences.
An essential feature of the proposed measures is the weighting of documents at different
levels of relevance. What is the value of a highly relevant document compared to the
value of fairly and marginally relevant documents? There can be no absolute value because
this is a subjective matter that also depends on the information seeking situation. It
may be difficult to justify any particular weighting scheme. If the evaluation scenario
does not suggest otherwise, several weight values may be used to obtain a richer picture of IR system effectiveness under different conditions. Regarding all at least somewhat
relevant documents as equally relevant is also an arbitrary (albeit traditional) decision,
and also counter-intuitive.
It may be argued that IR systems should not rank just highly relevant documents to
top ranks. One might require that IR systems be rewarded for retrieving partially relevant
documents in the top ranks. However, our measures suit such comparisons perfectly: the documents should just be weighted accordingly. The traditional measures do
not allow this.
The CG and DCG measures complement P-R based measures [Järvelin and Kekäläinen 2000; Kekäläinen and Järvelin 2002a]. Precision over fixed recall levels hides
the user's effort up to a given recall level. The DCV-based precision - recall graphs are
better but still do not make the value gained by ranked position explicit. The CG and
DCG graphs provide this directly. The distance to the theoretically best possible curve
shows the effort wasted on less-than-perfect or useless documents. The normalized CG
and DCG graphs show explicitly the share of ideal performance given by an IR technique
and make statistical comparisons possible. The advantage of the P-R based measures is
that they treat requests with different numbers of relevant documents equally, and from the
system's point of view the precision at each recall level is comparable. In contrast, CG
and DCG curves show the user's point of view as the number of documents needed to
achieve a certain gain. Together with the theoretically best possible curve they also provide
a stopping rule, that is, when the best possible curve turns horizontal, there is nothing
to be gained by retrieving or examining further documents.
Generally, the proposed evaluation measures and the case further demonstrate that
graded relevance judgments are applicable in IR experiments. The dichotomous and liberal
relevance judgments generally applied may be too permissive, and, consequently, too
easily give credit to IR system performance. We believe that, in modern large environments, the proposed novel measures should be used whenever possible, because they provide richer information for evaluation.
ACKNOWLEDGEMENTS
We thank the FIRE group at the University of Tampere for helpful comments, and the IR Lab for programming.
--R
An evaluation of retrieval effectiveness for a full-text document-retrieval system
Evaluation of Interactive Information Retrieval Systems.
Measures of relative relevance and ranked half-life: Performance indicators for interactive IR
Practical Nonparametric Statistics (2nd ed.)
Expected search length: A single measure of retrieval effectiveness based on weak ordering action of retrieval systems.
An evaluation of interactive Boolean and natural language searching with an online medical textbook.
Using statistical testing in the evaluation of retrieval experiments.
Journal of the American Society for Information Science and Technology 53
Libraries Unlimited: Greenwood Village
Information storage and retrieval.
Text retrieval and filtering: Analytic models of performance.
Integration of user profiles: Models and experiments in information retrieval.
Measures for the comparison of information retrieval systems.
Ranking in principle.
Document retrieval systems - Optimization and evaluation
Introduction to modern information retrieval.
A Method for Measuring Wide Range Performance of Boolean Queries in Full-Text Databases [On-line]
Extensions to the STAIRS Study - Empirical Evidence for the Hypothesised Ineffectiveness of Boolean Queries in Large Full-Text Databases
Liberal relevance criteria of TREC - Counting on negligible documents? In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Automatic indexing.
From highly relevant to non relevant: examining different regions of relevance.
Changes in relevance criteria and problem stages in task performance.
Evaluation by highly relevant documents.
Overview of the Seventh Text REtrieval Conference (TREC-7) [On-line]
How reliable are the results of large-scale information retrieval experiments? In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
--TR
An evaluation of retrieval effectiveness for a full-text document-retrieval system
Integration of user profiles: models and experiments in information retrieval
Using statistical testing in the evaluation of retrieval experiments
An evaluation of interactive Boolean and natural language searching with an online medical textbook
Information storage and retrieval
The impact of query structure and query expansion on retrieval performance
How reliable are the results of large-scale information retrieval experiments?
Measures of relative relevance and ranked half-life
Text retrieval and filtering
Towards the identification of the optimal number of relevance categories
From highly relevant to not relevant
IR evaluation methods for retrieving highly relevant documents
Evaluation by highly relevant documents
Generic summaries for indexing in information retrieval
Liberal relevance criteria of TREC -
Introduction to Modern Information Retrieval
The Co-Effects of Query Structure and Expansion on Retrieval Performance in Probabilistic Text Retrieval
Extensions to the STAIRS Study - Empirical Evidence for the Hypothesised Ineffectiveness of Boolean Queries in Large Full-Text Databases
Using graded relevance assessments in IR evaluation
--CTR
Mounia Lalmas , Gabriella Kazai, Report on the ad-hoc track of the INEX 2005 workshop, ACM SIGIR Forum, v.40 n.1, June 2006
Paul Ogilvie , Mounia Lalmas, Investigating the exhaustivity dimension in content-oriented XML element retrieval evaluation, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Mette Skov , Birger Larsen , Peter Ingwersen, Inter and intra-document contexts applied in polyrepresentation, Proceedings of the 1st international conference on Information interaction in context, October 18-20, 2006, Copenhagen, Denmark
Tetsuya Sakai, On the reliability of factoid question answering evaluation, ACM Transactions on Asian Language Information Processing (TALIP), v.6 n.1, p.3-es, April 2007
Crestan , Claude de Loupy, Natural language processing for browse help, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Tetsuya Sakai, Evaluating evaluation metrics based on the bootstrap, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Crestan , Claude de Loupy, Browsing help for faster document retrieval, Proceedings of the 20th international conference on Computational Linguistics, p.576-es, August 23-27, 2004, Geneva, Switzerland
Tetsuya Sakai, On the reliability of information retrieval metrics based on graded relevance, Information Processing and Management: an International Journal, v.43 n.2, p.531-548, March 2007
Egidio Terra , Robert Warren, Poison pills: harmful relevant documents in feedback, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Gabriella Kazai , Mounia Lalmas , Arjen P. de Vries, The overlap problem in content-oriented XML retrieval evaluation, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Robert M. Losee, Percent perfect performance (PPP), Information Processing and Management: an International Journal, v.43 n.4, p.1020-1029, July, 2007
Charles L. A. Clarke, Controlling overlap in content-oriented XML retrieval, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Mounia Lalmas , Anastasios Tombros, Evaluating XML retrieval effectiveness at INEX, ACM SIGIR Forum, v.41 n.1, p.40-57, June 2007
Jaana Kekäläinen, Binary and graded relevance in IR evaluations: comparison of the effects on ranking of IR systems, Information Processing and Management: an International Journal, v.41 n.5, p.1019-1033, September 2005
Per Ahlgren , Jaana Kekäläinen, Indexing strategies for Swedish full text retrieval under different user scenarios, Information Processing and Management: an International Journal, v.43 n.1, p.81-102, January 2007
Jorge R. Herskovic , M. Sriram Iyengar , Elmer V. Bernstam, Using hit curves to compare search algorithm performance, Journal of Biomedical Informatics, v.40 n.2, p.93-99, April, 2007
Yu-Ting Liu , Tie-Yan Liu , Tao Qin , Zhi-Ming Ma , Hang Li, Supervised rank aggregation, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Thanh Tin Tang , Nick Craswell , David Hawking , Kathy Griffiths , Helen Christensen, Quality and relevance of domain-specific search: A case study in mental health, Information Retrieval, v.9 n.2, p.207-225, March 2006
Ryen W. White, Using searcher simulations to redesign a polyrepresentative implicit feedback interface, Information Processing and Management: an International Journal, v.42 n.5, p.1185-1202, September 2006
Hongyuan Zha , Zhaohui Zheng , Haoying Fu , Gordon Sun, Incorporating query difference for learning retrieval functions in world wide web search, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Vincenzo Della Mea , Stefano Mizzaro, Measuring retrieval effectiveness: a new proposal and a first experimental validation, Journal of the American Society for Information Science and Technology, v.55 n.6, p.530-543, April 2004
Gabriella Kazai , Mounia Lalmas, eXtended cumulated gain measures for the evaluation of content-oriented XML retrieval, ACM Transactions on Information Systems (TOIS), v.24 n.4, p.503-542, October 2006 | cumulated gain;graded relevance judgments |
582473 | The Roma personal metadata service. | People now have available to them a diversity of digital storage facilities, including laptops, cell phone address books, handheld devices, desktop computers and web-based storage services. Unfortunately, as the number of personal data repositories increases, so does the management problem of ensuring that the most up-to-date version of any document in a user's personal file space is available to him on the storage facility he is currently using. We introduce the Roma personal metadata service to make it easier to locate current versions of personal files and ensure their availability across different repositories. This centralized service stores information about each of a user's files, such as name, location, timestamp and keywords, on behalf of mobility-aware applications. Separating out these metadata from the data repositories makes it practical to keep the metadata store on a highly available, portable device. In this paper we describe the design requirements, architecture and current prototype implementation of Roma. | Introduction
As people come to rely more heavily on digital devices
to work and communicate, they keep more of their personal
files-including email messages, notes, presentations,
address lists, financial records, news clippings, music and
photographs-in a variety of data repositories. Since people
are free to switch among multiple heterogeneous de-
vices, they can squirrel away information on any device they
happen to be using at the moment as well as on an ever-
broadening array of web-based storage services. For ex-
ample, a businessperson wishing to record a travel expense
could type it into his laptop, scribble it into his personal
digital assistant, or record it in various web-based expense
tracking services.
One might expect this plethora of storage options to be
a catalyst for personal mobility[9], enabling people to access
and use their personal files wherever and whenever
they want, while using whatever device is most convenient
to them. Instead, it has made it harder for mobile people to
ensure that up-to-date versions of files they need are available
on the current storage option of choice. This is because
contemporary file management tools are poor at handling
multiple data repositories in the face of intermittent connec-
tivity. There is no easy way for a user to determine whether
a file on the device he is currently using will be accessible
later on another device, or whether the various copies of that
file across all devices are up-to-date. As a result, the user
may end up with many out-of-date or differently-updated
copies of the same file scattered on different devices.
Previous work has attempted to handle multiple data
repositories at the application level and at the file system
level. At the application level, some efforts have focused
on using only existing system services to do peer-to-peer
synchronization. Unfortunately, tools that use high-level
file metadata provided by the system[15], such as the file's
name or date of last modification, are unreliable; they can
only infer relationships between file copies from information
not intended for such use. For example, if the user
changes the name of one copy of a file, its relationship to
other copies may be broken. Other file synchronization
tools[14] that employ application-specific metadata to synchronize
files are useful only for the set of applications they
explicitly support.
Distributed file systems such as Coda[7] provide access
to multiple data repositories by emulating existing file system
semantics, redirecting local file system calls to a remote
repository or a local cache. Since they operate at the
file system level rather than the application level, they can
reliably track modifications made while disconnected from
the network, transparently store them in a log and apply
them to another copy upon reconnection. Synchronization
across multiple end devices is performed indirectly, through
a logically centralized repository that stores the master copy
of a user's files. Unfortunately, it is often the case that
two portable devices will have better connectivity with each
other than with a centralized data repository located on a stationary network server. Until fast, cheap wide-area network connectivity becomes widespread, this approach will
remain impractical. Keeping the repository on a portable
device, on the other hand, will be feasible only when a tiny,
low-power device becomes capable of storing and serving up potentially huge amounts of data over a fast local network.
The ideal solution would offer the flexibility of peer-to-
peer synchronization tools along with the reliability of centralized
file systems. Users should be free to copy files to
any device to ensure that they will be available there later (personal financial records on the home PC, digital audio files in the car, phone numbers on the cell phone) without having to remember which copies reside on which devices
and what copy was modified when.
Our system, Roma, provides an available, centralized
repository of metadata, or information about a single user's
files. The metadata format includes sufficient information
to enable tracking each file across multiple file stores, such
as a name, timestamp, and URI or other data identifier. A
user's metadata repository may reside on a device that the
user carries along with him (metadata records are typically
compact enough that they can be stored on a highly portable
device), thus ensuring that metadata are available to the
user's local devices even when wide-area network connectivity
is intermittent. To maintain compatibility with existing
applications, synchronization agents periodically scan
data stores for changes made by legacy applications and
propagate them to the metadata repository.
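As a sketch of what such per-file metadata might look like, the following is a minimal illustration. Roma's actual record format and interfaces are only described abstractly here, so all class, field and method names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicaRecord:
    """Metadata about one copy of a file; no file content is stored,
    so records stay compact enough for a portable device."""
    name: str          # user-visible file name (may differ per replica)
    uri: str           # where this copy lives, e.g. file://laptop/...
    timestamp: float   # modification time reported by the data store
    attributes: dict = field(default_factory=dict)  # extensible: keywords, etc.

class MetadataRepository:
    """Tracks all replicas of each logical file, keyed by a file id."""
    def __init__(self):
        self._replicas = {}  # file_id -> list of ReplicaRecord

    def report(self, file_id, record):
        """Called by a synchronization agent after scanning a data store."""
        self._replicas.setdefault(file_id, []).append(record)

    def latest(self, file_id):
        """Which replica is most recent, regardless of device or name?"""
        return max(self._replicas[file_id], key=lambda r: r.timestamp)

repo = MetadataRepository()
repo.report("exp-report",
            ReplicaRecord("expenses.xls", "file://work-pc/docs/expenses.xls", 1000.0))
repo.report("exp-report",
            ReplicaRecord("expenses-v2.xls", "file://laptop/expenses-v2.xls", 2000.0))
print(repo.latest("exp-report").uri)  # the laptop copy is newer
```

Note that the lookup works even though the two replicas carry different names: the relationship between copies is kept in the metadata, not inferred from filenames.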
Related to the problem of managing versions of files
across data repositories is the problem of locating files
across different repositories. Most file management tools
offer hierarchical naming as the only facility for organizing
large collections of files. Users must invent unique, memorable
names for their files, so that they can find them in
the future; and must arrange those files into hierarchies, so
that related files are grouped together. Having to come up
with a descriptive name on the spot is an onerous task, given
that the name is often the only means by which the file can
later be found[11]. Arranging files into hierarchical folders
is cumbersome enough that many users do not even bother,
and instead end up with a single "Documents" folder listing
hundreds of cryptically named, uncategorized files. This
problem is compounded when files need to be organized
across multiple repositories.
Roma metadata include fully-extensible attributes that
can be used as a platform for supporting these methods of
organizing and locating files. While our current prototype
does not take advantage of such attributes, several projects
have explored the use of attribute-based naming to locate
files in either single or multiple repositories[2, 4].
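To illustrate how extensible attributes could support such lookups, the sketch below runs a keyword query over a flat collection of metadata records drawn from several repositories. All names and records are illustrative assumptions; the text does not specify Roma's query interface at this level of detail.

```python
# Per-file metadata records, as a synchronization agent might have
# collected them from different devices and web repositories:
records = [
    {"name": "BFWidgetSpec.doc", "uri": "file://work-pc/specs/BFWidgetSpec.doc",
     "keywords": {"blue", "fuzzy", "widget", "specification"}},
    {"name": "q3-expenses.xls", "uri": "file://laptop/q3-expenses.xls",
     "keywords": {"expenses", "travel", "finance"}},
    {"name": "prices.html", "uri": "http://division.example.com/prices.html",
     "keywords": {"blue", "fuzzy", "widget", "prices"}},
]

def find(records, *keywords):
    """Return URIs of files whose keyword attributes contain all the
    query terms, regardless of which repository holds them."""
    wanted = set(keywords)
    return [r["uri"] for r in records if wanted <= r["keywords"]]

print(find(records, "widget", "blue"))  # spec and price list, on two stores
print(find(records, "travel"))          # the expense sheet on the laptop
```

Because the query runs against the metadata store rather than the devices themselves, it works even for files on repositories that are currently disconnected.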
The rest of this paper describes Roma in detail. We begin
by outlining the requirements motivating our design; in
subsequent sections we detail the architecture and current
prototype implementation of Roma, as well as some key issues
that became apparent while designing the system; these
sections are followed by a survey of related work and a discussion
of some possible future directions for this work.
2. Motivation and design requirements
To motivate this work, consider the problems faced by
Jane Mobile, techno-savvy manager at ABC Widget Com-
pany, who uses several computing devices on a regular ba-
sis. She uses a PC at work and another at home for editing
documents and managing her finances, a palmtop organizer
for storing her calendar, a laptop for working on the road,
and a cell phone for keeping in touch. In addition, she keeps
a copy of her calendar on a web site so it is always available
both to herself and to her co-workers, and she frequently
downloads the latest stock prices into her personal finance
software.
Before dashing out the door for a business trip to New
York, Jane wants to make sure she has everything she will
need to be productive on the road. Odds are she will forget
something, because there is a lot to remember:
. I promised my client I'd bring along the specifications
document for blue fuzzy widgets-I think it's called
BFWidgetSpec.doc, or is it SpecBluFuzWid.doc? If
Jane could do a keyword search over all her documents
(regardless of which applications she used to create
them) and over all her devices at once, she would not
have to remember what the file is called, which directory
contains it, or on which device it is stored.
. I also need to bring the latest blue fuzzy widget price
list, which is probably somewhere on my division's web
site or on the group file server. Even though the file
server and the web site are completely outside her con-
trol, Jane would like to use the same search tools that
she uses to locate documents on her own storage devices.
. I have to make some changes to that presentation I was
working on yesterday. Did I leave the latest copy on my
PC at work or on the one at home? If Jane copies an
outdated version to her laptop, she may cause a write
conflict that will be difficult to resolve when she gets
back. She just wants to grab the presentation without
having to check both PCs to figure out which version
is the more recent one.
. I want to work on my expense report on the plane, so
I'll need to bring along my financial files. Like most
people, Jane does not have the time or patience to arrange
all her documents into neatly labeled directories,
so it's hard for her to find groups of related files when
she really needs them. More likely, she has to pore
over a directory containing dozens or hundreds of files,
and guess which ones might have something to do with
her travel expenses.
To summarize, the issues illustrated by this example are the
dependence on filenames for locating files, the lack of integration
between search tools for web documents and search
tools on local devices, the lack of support for managing
multiple copies of a file across different devices, and the
dependence on directories for grouping files together.
These issues lead us to a set of architectural requirements
for Roma. Our solution should be able to
1. Make information about the user's personal files always
available to applications and to the user.
2. Associate with each file (or file copy) a set of standard
attributes, including version numbers or timestamps to
help synchronize file replicas and avoid many write
conflicts.
3. Allow the attribute set to be extended by applications
and users, to include such attributes as keywords to
enable searching, categories to allow browsing related
files, digests or thumbnails to enable previewing file
content, and parent directories to support traditional
hierarchical naming (where desired). This information
can be used to develop more intuitive methods for organizing
and locating files.
4. Track files stored on data repositories outside the
user's control. A user may consider a certain file as
part of his personal file space even if he did not directly
create or maintain the data. For example, even though
the user's bank account balances are available on a web
site controlled and maintained by the bank, he should
be able to organize, search and track changes to these
data just like any other file in his personal space.
5. Track files stored on disconnected repositories and offline
storage media. Metadata can be valuable even if
the data they describe are unavailable. For example,
the user may be working on a disconnected laptop on
which resides a copy of the document that he wants to
edit. Version information lets him figure out whether
this copy is the latest, and if not, where to find the
most recent copy upon reconnection. Alternatively, if
the laptop is connected on a slow network, he can use
metadata (which are often smaller in size than their associated
file) to find which large piece of data needs to
be pulled over the network.
Figure 1. The Roma architecture. Applications
are connected to the metadata server,
and possibly connected to a number of data
stores. Agents track changes to third-party
data stores, such as the web server in this
diagram, and make appropriate updates to the
metadata server.
3. Architecture
At the core of the Roma architecture (illustrated in Figure
1) is the metadata server, a centralized, potentially
portable service that stores information about a user's personal
files. The files themselves are stored on autonomous
data repositories, such as traditional file systems, web
servers and any other device with storage capability. Roma-
aware applications query the metadata server for file infor-
mation, and send updates to the server when the information
changes. Applications obtain file data directly from data
repositories. Agents monitor data stores for changes made
by Roma-unaware applications, and update file information
in the metadata server when appropriate.
Roma supports a decentralized replication model where
all repositories store "first-class" file replicas-that is, all
copies of a file can be manipulated by the user and by ap-
plications. To increase availability and performance, a user
can copy a file to local storage from another device, or an
application can do so on the user's behalf. Roma helps applications
maintain the connection between these logically
related copies, or instances, of the file by assigning a unique
file identifier (UID) that is common to all of its instances.
The file identifier can be read and modified by applications
but is not normally exposed to the user.
Figure 2. A typical metadata record, in XML. (The record shown
carries the title "My Blue Fuzzy Widget", an http data location
/projects/bluestuff/mbfw13.ps, and an "author" attribute with the
value "Jane Mobile".)
Once the file is copied, the contents and attributes of
each instance can diverge. Thus Roma keeps one metadata
record for each file instance. A metadata record is a tuple
composed of the UID, one or more data locations, a version
number and optional, domain-specific attributes. Figure 2
shows a typical metadata record.
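As a concrete illustration of the tuple just described, the record might be modeled as follows. This is a hypothetical sketch; the class and field names are our own, not taken from the Roma prototype.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a Roma metadata record: the UID shared by all instances
// of a file, one or more data locations (URIs), a version counter,
// and optional name/value attributes.
public class MetadataRecord {
    final String uid;                                  // common to all instances
    final List<String> locations = new ArrayList<>();  // where this instance lives
    int version = 0;                                   // simple change counter
    final Map<String, String> attributes = new HashMap<>();

    MetadataRecord(String uid, String location) {
        this.uid = uid;
        locations.add(location);
    }

    // Any change to the file instance bumps its version number.
    void recordChange() {
        version++;
    }
}
```

A copied instance would receive a new record carrying the same uid, while a logically new file would receive a fresh uid.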
The data location specifies the location of a file instance
as a Universal Resource Identifier (URI). Files residing on
the most common types of data repositories can be identified
using existing URI schemes, such as http: and ftp:
for network-accessible servers and file: for local file sys-
tems. When naming removable storage media, such as a
CD-ROM or a Zip disk, it is important to present a human-understandable
name to the user (possibly separate from the
media's native unique identifier, such as a floppy serial num-
ber).
The version number is a simple counter. Whenever a
change is made to a file instance, its version number is incremented.
Roma-aware applications can supplement metadata
records with a set of optional attributes, stored as
name/value pairs, including generic attributes such as the
size of a file or its type, and domain-specific attributes like
categories, thumbnails, outlines or song titles.
These optional attributes enable application user interfaces
to support new modes of interaction with the user's
file space, such as query-based interfaces and browsers. Autonomous
agents can automatically scan files in the user's
space and add attributes to the metadata server based on the
files' contents. Section 6 briefly describes Presto, a system
developed by the Placeless Documents group at Xerox
PARC that allows users to organize their documents in
terms of user-defined attributes. The user interaction mechanisms
developed for Presto would mesh well with the cen-
tralized, personal metadata repository provided by Roma.
3.1. Metadata server
The metadata server is a logically centralized entity that
keeps metadata information about all copies of a user's data.
Keeping this metadata information centralized and separate
from the data stores has many advantages:
. Centralization helps avoid write conflicts, since a single
entity has knowledge of all versions of the data in
existence. Some potential conflicts can be prevented
before they happen (before the user starts editing an
out-of-date instance of a file) rather than being caught
later, when the files themselves are being synchronized.
. Centralization allows easier searching over all of a
user's metadata because applications only have to
search at a single entity. The completeness of a
search is not dependent on the reachability of the data
stores. In contrast, if metadata were distributed across
many data stores, a search would have to be performed
at each data store. While this is acceptable for
highly available data repositories connected via a high-bandwidth
network, it is cumbersome for data stores
on devices that need to be powered on, plugged in, or
dug out of a shoebox to be made available.
. Separation of the metadata from the data store allows
easier integration of autonomous data stores, including
legacy and third-party data stores over which the
user has limited control. Storing metadata on a server
under the user's control, rather than on the data stores
with the data, eliminates the need for data stores to be
"Roma-compliant." This greatly eases the deployability
of Roma.
. Separation also provides the ability to impose a personalized
namespace over third-party or shared data.
A user can organize his data in a manner independent
of the organization of the data on the third-party data
store.
. Separation enables applications to have some knowledge
about data they cannot access, either because the
data store is off-line, or because it speaks a foreign
protocol. In essence, applications can now "know what
they don't know."
The main challenge in designing a centralized metadata
server is ensuring that it is always available despite intermittent
network connectivity. Section 5.2 describes one solution
to this problem, which is to host the metadata server
on a portable device. Since metadata tend to be significantly
smaller than the data they describe, it is feasible for users to
take their metadata server along with them when they disconnect
from the network.
3.2. Data stores
A data store is any information repository whose contents
can somehow be identified and retrieved by an appli-
cation. Roma-compatible data stores include not only traditional
file and web servers, but also laptops, personal digital
assistants (PDAs), cell phones, and wristwatches-devices
that have storage but cannot be left running and network-accessible
at all times due to power constraints, network
costs, and security concerns-as well as "offline" storage
media like compact discs and magnetic tapes. Information
in a data store can be dynamically generated (for example,
current weather conditions or bank account balances). Our
architecture supports
. data stores that are not under the user's control.
. heterogeneous protocols (local file systems, HTTP,
FTP, etc.). There are no a priori restrictions on the
protocols supported by a data store.
. data stores with naming and hierarchy schemes independent
of both the user's personal namespace and
other data stores.
In keeping with our goal to support legacy and third-party
data stores, data stores do not have to be Roma-aware.
There is no need for direct communication between data
stores and the metadata server. This feature is key to increasing
the deployability of Roma.
3.3. Applications
In Roma, applications are any programs used by people
to view, search and modify their personal data. These
include traditional programs, such as text editors, as well
as handheld-based personal information managers (PIMs),
web-based applications, and special-purpose Internet appli-
ances. Applications can be co-located with data sources;
for example, applications running on a desktop computer
are co-located with the computer's local file system.
Roma-aware applications have two primary responsibil-
ities. The first is to take advantage of metadata information
already in the repository, either by explicitly presenting useful
metadata to the user or by automatically using metadata
to make decisions. For example, an application can automatically
choose to access the "nearest" or latest copy of a
file.
The application's second responsibility is to inform the
metadata server when changes made to the data affect the
metadata. At the very least, this means informing the metadata
server when a change has been made (for synchronization
purposes), but can also include updating domain-specific
metadata. We are investigating how often updates
need to be sent to the metadata server to balance correctness
and performance concerns.
While applications should be connected to the metadata
server while in use, they are not necessarily well-connected
to all data stores; they may be connected weakly or not at
all. For example, an application might not speak the protocol
of a data store, and thus might be effectively disconnected
from it. Also, a data store itself may be disconnected
from the network.
3.4. Synchronization agents
Roma synchronization agents are software programs that
run on behalf of the user, without requiring the user's atten-
tion. Agents can do many tasks, including
. providing background synchronization on behalf of the
user.
. hoarding of files on various devices in preparation for
disconnected operation.
. making timely backups of information across data
stores.
. tracking third-party updates (on autonomous data
stores, or data shared between users).
Agents can be run anywhere: on a user's personal computers
or on cooperating infrastructure. The only limitation
on an agent's execution location is that the agent must be
able to access relevant data stores and the metadata server.
Note that the use of a portable metadata server precludes
agents from running while the metadata server is disconnected
from the rest of the network; Section 5.2 describes
an alternative approach.
3.5. Examples
To illustrate how Roma supports a user working with
files replicated across several storage devices, let us revisit
Jane Mobile, and consider what a Roma-aware application
does in response to Jane's actions.
The action of copying a file actually has two different
results, depending on her intent, and the application should
provide a way for her to distinguish between the two:
. She makes a file instance available on a different
repository (in preparation for disconnected operation,
for example). The application contacts the metadata
server, creates a new metadata record with the same
file identifier, copies all attributes, and sets the data location
to point to the new copy of the file.
. She copies a file to create a new, logically distinct file
based on the original. The application contacts the
metadata server, creates a new metadata record with
a new file identifier, copies all attributes, and sets the
data location to point to the new copy of the file.
Other actions Jane may take:
. She opens a file for updating. The application contacts
the metadata server, and checks the version number of
this instance. If another instance has a higher version
number, the application warns Jane that she is about
to modify an old version, and asks her if she wants to
access the latest version or synchronize the old one (if
possible).
. She saves the modified file. The application contacts
the server, increments the version number of this in-
stance, and updates any attributes, such as the file's
size. As described in Section 5.1, a write conflict may
be detected at this point if the version number of another
instance has already been incremented.
. She brings a file instance up to date by synchronizing
it with the newest instance. The application contacts
the server, finds the metadata record with the highest
version number for this file, and copies all attributes
(except the data location) to the current instance.
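The version-number logic behind the open and save actions above can be sketched as two small predicates. This is our paraphrase of the protocol as described, not code from the prototype.

```java
import java.util.Collection;

// Sketch of the version-number checks used when opening and saving
// a file instance.
public class VersionCheck {
    // On open: the instance is stale if any other instance of the
    // same file has a higher version number.
    static boolean isStale(int thisVersion, Collection<Integer> otherVersions) {
        for (int v : otherVersions) {
            if (v > thisVersion) return true;
        }
        return false;
    }

    // On save: after incrementing this instance's version, a write
    // conflict exists if another instance has already reached the
    // same (or a higher) version number.
    static boolean conflictAfterSave(int savedVersion, Collection<Integer> otherVersions) {
        for (int v : otherVersions) {
            if (v >= savedVersion) return true;
        }
        return false;
    }
}
```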
3.6. Limitations
This architecture meets our requirements only to the extent
that (1) the metadata store is available to the user's applications
and to third-party synchronization agents, and (2)
applications take advantage of the metadata store to aid the
user in synchronizing and locating files. These issues are
discussed in Sections 5.2 and 5.3, respectively.
4. Implementation
In this section we describe the current status of our prototype
Roma implementation. The prototype is still in
its early stages and does not yet support synchronization
agents.
4.1. Metadata server
We have implemented a prototype metadata server that
supports updates and simple queries, including queries on
optional attributes. It is written in Java as a service running
on Ninja[5], a toolkit for developing highly available
network services. Metadata are stored in an XML format,
and we use XSet, a high performance, lightweight XML
database, for query processing and persistence[17].
We have also implemented a proof-of-concept portable
metadata server. Though the metadata server itself requires
a full Java environment to operate, we have implemented
a simple mechanism to migrate a metadata repository between
otherwise disconnected computers using a PDA as a
transfer medium. As a user finishes working on one com-
puter, the metadata repository is transferred onto his PDA.
The next time he begins using a computer, the metadata
repository is retrieved from the PDA. In this way, though
the metadata server itself is not traveling, the user's meta-data
are always accessible, regardless of the connectivity
between the user's computer and the rest of the world.
4.2. Data stores
Currently, the data stores we support are limited to those
addressable through URIs. Our applications can currently
access data stores using HTTP and FTP, as well as files accessible
via a standard file system interface such as local file
systems, NFS[12] and AFS[6].
4.3. Applications
We have implemented three Roma-aware applications.
These applications allow users to view and manipulate their
metadata and data from a variety of devices.
The first is a web-based metadata browser that provides
hierarchical browsing of a user's personal data. The browser
displays the names of data files, their version information,
and the deduced MIME type of the file. In addition, if the
file is accessible, the browser will present a link to the file
itself. We have also written a proxy to enable "web clipping"
of arbitrary web content into the user's personal file
space, as displayed in Figure 3.
Our second application is a set of command-line tools.
We have written Roma-aware ls and locate commands
to query a metadata server, a get command to retrieve the
latest version of a file from remote data stores, and
import, a utility to create metadata entries for files on a local
data store.
We have also implemented a proof-of-concept PDA ap-
plication. Built using a Waba VM and RMILite[16, 1], our
PDA application can query and view the contents of a meta-data
server. Currently, the PDA application does not access
the actual contents of any file.
Figure
3. A screenshot of the web-clipper proxy. As the user browses the web, the proxy adds links
on the fly, allowing the user to browse the metadata server and to add pages to his personal file
space.
Our applications have added a metadata attribute to describe
the data format of files. If available, our command-line
tools use the Unix magic command to determine the
data format. Our web clipper determines the data format
based on the MIME type of the file.
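When the Unix magic command is unavailable, a filename-based guess is a cheap fallback for the data-format attribute. The sketch below uses the JDK's built-in content-type table; this is our substitution for illustration, not what the Roma tools actually do.

```java
import java.net.URLConnection;

// Guess a file's data format from its name, falling back to a
// generic type when the extension is unknown to the JDK table.
public class FormatGuess {
    static String guess(String filename) {
        String type = URLConnection.guessContentTypeFromName(filename);
        return (type != null) ? type : "application/octet-stream";
    }
}
```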
5. Design issues and future work
In this section we describe some of the issues and design
decisions encountered so far in our work with Roma, along
with some of the work that remains for us to do.
5.1. Why "personal"?
One important design issue in Roma is the scope of the
types of data it supports. There are several reasons behind
our choice to support only personal files, rather than
to tackle collaboration among different users as well, or to
attempt to simplify system administration by handling distribution
of application binaries and packages.
First, restricting ourselves to personal files gives us
the option of migrating the metadata server to a personal,
portable device that the user carries everywhere, to increase
its availability. This option is described in more detail in the
next section.
Second, it avoids a potential source of write conflicts-
those due to concurrent modifications by different users on
separate instances of the same file. Such conflicts are often
difficult to resolve without discussion between the two
users.
With a single user, conflicts can still result from modifications
by third parties working on his behalf, such as an
email transfer agent appending a new message to the user's
inbox while the user deletes an old one. However, these
conflicts can often be resolved automatically using knowledge
about the application, such as the fact that an email
file consists of a sequence of independent messages. A single
user may also create conflicts himself by concurrently
executing applications that access the same document, but
avoiding this behavior is usually within the control of the
user, and any resulting conflicts do not require communication
between multiple users for resolution. We are investigating
the use of version vectors to store more complete and
flexible versioning information[10].
Third, it lets us exploit the fact that users are much better
at predicting their future needs for their personal files than
for other kinds of files[3].
Fourth, it lets us support categories, annotations and
other metadata that are most meaningful to a single person
rather than a group.
Finally, we believe there is a trend toward specialized
applications tailored for managing other types of files:
. Groupware systems like the Concurrent Versioning
System (CVS), ClearCase, Lotus Notes and Microsoft
Outlook impose necessary structure and order on access
to shared data with multiple writers. Email is often
sufficient for informal collaboration within a small
group.
. Tools like the RedHat Package Manager (RPM)
and Windows Update are well-suited for distributing
system-oriented data such as application packages, operating
system components, and code libraries. These
tools simplify system administration by grouping related
files into packages, enforcing dependencies, and
automatically notifying the user of bug fixes and new
versions of software.
. The web has become the best choice for distributing
shared data with many readers.
Since these applications handle system data, collaborative
projects and shared read-mostly data, we believe that the
remaining important category of data is personal data. We
thus focus on handling this category of data in Roma.
5.2. Ensuring availability of metadata
Since our overarching goal is to ensure that information
about the user's files is always available to the user, we need
to make the system robust in the face of intermittent or weak
network connectivity-the very situations that underscore
the need for a metadata repository in the first place.
Our approach is to allow the user to keep the metadata
server in close physical proximity, preferably on a highly
portable device that he can always carry like a keychain,
watch, or necklace. Wireless network technologies like
Bluetooth will soon make "personal-area networks" a re-
ality. It is not hard to imagine a server embedded in a cell
phone or a PDA, with higher availability and better performance
than a remote server in many situations.
The main difficulty with storing metadata on a portable
server is making it available to third-party agents that act
on behalf of the user and modify data in the user's personal
file space. If the network is partitioned and the only copy of
the metadata is with the user, how does such an agent read
or modify the metadata? In other words, we need to ensure
availability to third parties as well.
One solution is to cache metadata in multiple locations.
If the main copy currently resides on the user's PDA, another
copy on a stationary, network-connected server can
provide access to third parties. This naturally raises the issues
of synchronizing the copies and handling update conflicts
between the metadata replicas.
However, our hypothesis is that updates made to the
metadata by third parties rarely conflict with user updates.
For example, a bank's web server updates a file containing
the user's account balances, but the user himself rarely updates
this file. Testing this hypothesis is part of our future
work in evaluating Roma.
5.3. Making applications Roma-aware
Making applications Roma-aware is the biggest challenge
in realizing Roma's benefits of synchronization and
file organization across multiple data stores. To gain
the most benefit, application user interfaces and file in-
put/output routines must be adapted to use and update information
in the metadata store. We have several options
for extending existing applications to use Roma or incorporating
Roma support into new applications.
Our first option is to use application-specific extension
mechanisms to add Roma-awareness to legacy applications.
For example, we implemented a Roma-aware proxy to integrate
existing web browsers into our architecture. Roma
add-in modules could be written for other applications, such
as Microsoft Outlook, that have extension APIs, or for
open-source applications that can be modified directly.
Our second option is to layer Roma-aware software beneath
the legacy application. Possibilities include modifying
the C library used by applications to access files, or writing
a Roma-aware file system. This option does nothing to
adapt the application's user interface, but can provide some
functionality enhancements such as intelligent retrieval of
updated copies of files.
A third option is to use agents to monitor data edited
by legacy applications in the same way we monitor data
repositories not under the user's control. This option neither
presents metadata to the user, nor enhances the functionality
of the application. It can, however, ensure that the meta-data
at the server are kept up-to-date with changes made by
legacy applications.
Beyond choosing the most appropriate method to extend
an application to use Roma, the bulk of the programming
effort is in modifying the application's user interface and
communicating with the metadata store. Our current prototype
provides a simple, generic Java RMI interface to
the metadata store, through which applications pass XML-formatted
objects. Platform- or domain-specific Roma libraries
could offer much richer support to application de-
velopers, including both user interface and file I/O compo-
nents, to help minimize the programming effort. For ex-
ample, a Roma library for Windows could offer a drop-in
replacement for the standard "file explorer" components, so
that adapting a typical productivity application would involve
making a few library API calls rather than developing
an entirely new user interface.
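To make the shape of such a metadata-store interface concrete, here is a minimal in-memory stand-in with a put/query surface. The method names and structure are assumptions for illustration, not the prototype's actual RMI API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory stand-in for the metadata store: records are
// keyed by UID and carry name/value attributes; queries match on
// attribute equality.
public class TinyMetadataServer {
    private final Map<String, Map<String, String>> records = new HashMap<>();

    public void put(String uid, Map<String, String> attrs) {
        records.computeIfAbsent(uid, k -> new HashMap<>()).putAll(attrs);
    }

    // Return the UIDs of all records whose attribute equals the value.
    public List<String> query(String attr, String value) {
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, Map<String, String>> e : records.entrySet()) {
            if (value.equals(e.getValue().get(attr))) {
                hits.add(e.getKey());
            }
        }
        return hits;
    }
}
```

A richer library would add typed attributes and asynchronous update notification, but even this surface is enough for search-style applications.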
5.4. Addressing personal data
Our current Roma implementation uses a URI to identify
the file instance corresponding to a particular metadata
record. Unfortunately this is an imperfect solution since the
relationship between URIs and file instances is often not
one-to-one. In fact, it is rarely so.
On many systems, a file instance can be identified by
more than one URI, due to aliases and links in the underlying
file system or multiple network servers providing
access to the same files. For example, the file identified
by ftp://gunpowder/pub/paper.ps can also be
identified as ftp://gunpowder/pub/./paper.ps
(because . is an alias for the current directory) and
http://gunpowder/pub/ftp/pub/paper.ps
(since the public FTP directory is also exported by an
HTTP server).
The problem stems from the fact that URIs are defined
simply as a string that refers to a resource and not as a
unique resource identifier. Currently we rely on applications
and agents to detect and handle cases where multiple
URIs refer to the same file, but if an application fails to do
this, it could cause the user to delete the only copy of a file
because he was led to believe that a backup copy still ex-
isted. In the future, Roma must address this problem more
systematically.
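Syntactic normalization handles the first of these aliases. The sketch below collapses "." segments using java.net.URI; note that it cannot detect server-level aliases like the FTP/HTTP overlap above, which is why applications and agents must still intervene.

```java
import java.net.URI;

// One partial step toward canonical data locations: syntactic URI
// normalization, which removes "." (current-directory) segments.
// It cannot unify URIs exposed by different protocols or servers.
public class UriNormalize {
    static String canonical(String uri) {
        return URI.create(uri).normalize().toString();
    }
}
```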
6. Related work
Helping users access data on distributed storage repositories
is an active area of research. The primary characteristic
distinguishing our work from distributed file systems,
such as NFS[12], AFS[6], and Coda[7], is our emphasis on
unifying a wide variety of existing data repositories to help
users manage their personal files.
Like Roma, the Coda distributed file system seeks to
allow users to remain productive during periods of weak
or no network connectivity. While Roma makes metadata
available during these times, Coda caches file data in a
"hoard" according to user preferences in anticipation of periods
of disconnection or weak connectivity. However, unlike
Roma, users must store their files on centralized Coda
file servers to benefit fully from Coda, which is impractical
for people who use a variety of devices between which
there may be better connectivity than exists to a centralized
server. Even when users do not prefer to maintain more than
one data repository, they may be obliged to if, for instance,
their company does not permit them to mount company file
systems on their home computers. We note, however, that it
may be appropriate to use Coda for synchronization of our
centralized metadata repository.
The architecture of OceanStore[8] is similar to that of
Coda, but in place of a logically single, trusted server is a
global data utility comprised of a set of untrusted servers
whose owners earn a fee for offering persistent storage to
other users. Weakly connected client devices can read from
and write to the closest available server; the infrastructure
takes care of replicating and migrating data and resolving
conflicts. As with Coda, users benefit fully from
OceanStore only if all their data repositories-from the
server at work to the toaster oven at home-are part of the
same OceanStore system.
The Bayou system[10] supports a decentralized model
where users can store and modify their files in many
repositories which communicate peer-to-peer to propagate
changes. However, users cannot easily integrate data from
Bayou-unaware data stores like third-party web services
into their personal file space.
The Presto system[2] focuses on enabling users to organize
their files more effectively. The Presto designers have
built a solution similar to Roma that associates with each of
a user's documents a set of properties that can be used to or-
ganize, search and retrieve files. This work does not specifically
address tracking and synchronizing multiple copies
of documents across storage repositories, nor does it ensure
that properties are available even when their associated
documents are inaccessible. However, the applications they
have developed could be adapted to use the Roma metadata
server as property storage.
Both Presto and the Semantic File System[4] enable
legacy applications to access attribute-based storage repositories
by mapping database queries onto a hierarchical
namespace. Presto achieves this using a virtual NFS server,
while the Semantic File System integrates this functionality
into the file system layer. Either mechanism could be used
with Roma to provide access to the metadata server from
Roma-unaware applications.
The Elephant file system[13] employs a sophisticated
technique for tracking files across both changes in name and
changes in inode number.
7. Conclusions
We have described a system that helps fulfill the promise
of personal mobility, allowing people to switch among multiple
heterogeneous devices and access their personal files
without dealing with nitty-gritty file management details
such as tracking file versions across devices. This goal is
achieved through the use of a centralized metadata repository
that contains information about all the user's files,
whether they are stored on devices that the user himself
manages, on remote servers administered by a third party,
or on passive storage media like compact discs. The meta-data
can include version information, keywords, categories,
digests and thumbnails, and is completely extensible. We
have implemented a prototype metadata repository, designing
it as a service that can be integrated easily with appli-
cations. The service can be run on a highly available server
or migrated to a handheld device so that the user's metadata
are always accessible.
8. Acknowledgements
The authors thank Doug Terry for his helpful advice
throughout the project. We also thank Andy Huang, Kevin
Lai, Petros Maniatis, Mema Roussopoulos, and Doug Terry
for their detailed review and comments on the paper. This
work has been supported by a generous gift from NTT Mobile
Communications Network, Inc. (NTT DoCoMo).
--R
"Jini/RMI/TSpace for Small Devices."
"Uniform Document Interactions Using Document Properties."
"Translucent Cache Management for Mobile Computing,"
"The MultiSpace: an Evolutionary Platform for Infrastructural Services."
"Synchronization and Caching Issues in the Andrew File System."
"Disconnected Operation in the Coda File System."
"OceanStore: An Architecture for Global-Scale Persistent Storage."
"The Mobile People Architecture."
"Flexible Update Propagation for Weakly Consistent Replication."
The Humane Interface.
"Design and Implementation of the Sun Network File System."
"Deciding When to Forget in the Elephant File System."
"Extending Your Desktop with Pilot."
"The rsync Algorithm."
"Wabasoft: Product Overview."
"XSet: A Lightweight Database for Internet Applications."
--TR
Semantic file systems
Measurements of a distributed file system
Disconnected operation in the Coda file system
Flexible update propagation for weakly consistent replication
What is a file synchronizer?
Deciding when to forget in the Elephant file system
The mobile people architecture
OceanStore
Integrating Information Appliances into an Interactive Workspace
Using Dynamic Mediation to Integrate COTS Entities in a Ubiquitous Computing Environment
Translucent cache management for mobile computing
--CTR
Alexandros Karypidis , Spyros Lalis, Automated context aggregation and file annotation for PAN-based computing, Personal and Ubiquitous Computing, v.11 n.1, p.33-44, October 2006 | personal systems;metadata;distributed databases;mobile computing;data synchronization;distributed data storage |
583819 | Elimination of Java array bounds checks in the presence of indirection. | The Java language specification states that every access to an array needs to be within the bounds of that array; i.e. between 0 and array length - 1. Different techniques for different programming languages have been proposed to eliminate explicit bounds checks. Some of these techniques are implemented in off-the-shelf Java Virtual Machines (JVMs). The underlying principle of these techniques is that bounds checks can be removed when a JVM/compiler has enough information to guarantee that a sequence of accesses (e.g. inside a for-loop) is safe (within the bounds). Most of the techniques for the elimination of array bounds checks have been developed for programming languages that do not support multi-threading and/or enable dynamic class loading. These two characteristics make most of these techniques unsuitable for Java. Techniques developed specifically for Java have not addressed the elimination of array bounds checks in the presence of indirection, that is, when the index is stored in another array (indirection array). With the objective of optimising applications with array indirection, this paper proposes and evaluates three implementation strategies, each implemented as a Java class. The classes provide the functionality of Java arrays of type int so that objects of the classes can be used instead of indirection arrays. Each strategy enables JVMs, when examining only one of these classes at a time, to obtain enough information to remove array bounds checks. | Introduction
The Java language [JSGB00] was specified to avoid the most common errors made by software
developers. Arguably, these are memory leaks, array indices out-of-bounds and type
inconsistencies. These features are the basic pillars that make Java an attractive language for software
development. On these basic pillars Java builds a rich API for distributed (network) and graphical
programming, and has built-in threads. All these features are even more important when
considering the significant effort devoted to make Java programs portable.
The cost of these enhanced language features is/was performance. The first generation of
Java Virtual Machines (JVMs) [LY99] were bytecode interpreters that concentrated on providing
the functionality and not performance. Much of the reputation of Java as a slow language comes
from early performance benchmarks with these immature JVMs.
Nowadays, JVMs are in their third generation and are characterised by:
mixed execution of interpreted bytecodes and machine code generated just-in-time,
profiling of application execution, and
selective application of compiler transformations to time-consuming sections of the application.
The performance of modern JVMs is increasing steadily and getting closer to that of C and C++
compilers (see for example the Scimark benchmark [PM] and the Java Grande Forum Benchmark
Suite [BSPF01, EPC]), especially on commodity processors/operating systems. Despite
the remarkable improvement of JVMs, some standard compiler transformations (e.g. loop
reordering) are still not included, and some overheads intrinsic to the language
specification remain.
The Java Grande Forum, a group of researchers in High-Performance Computing, was constituted
to explore the potential benefits that Java can bring to their research areas. From this
perspective the forum has observed its limitations and noted solutions [Jav98, Jav99, PBG
BMPP01, Thi02].
The overhead intrinsic in the language specification that this paper addresses is array
bounds checking. The Java language specification requires that:
all array accesses are checked at run time, and
any access outside the bounds (less than zero or greater than or equal to the length) of
an array throws an ArrayIndexOutOfBoundsException.
The specific case under consideration is the elimination of array bounds checks in the
presence of indirection. For example, consider the array access foo[ indices[6] ], where
indices and foo are both Java one-dimensional arrays of type int. The array foo is accessed
through the indirection array indices. Two checks are performed at run time. The first checks
the access to indices and it is out of the scope of this paper; readers interested in techniques
implemented in JVMs/compilers for eliminating checks of this first kind can consult the related work in Section 3.
The second checks the access to foo, and it is this kind of array bounds check that is the subject
of this paper.
When an array bounds check is eliminated from a Java program, two things are accomplished
from a performance point of view. The rst, direct, reward is the elimination of the check itself
(at least two integer comparisons). The second, indirect, reward is the possibility for other
compiler transformations. Due to the strict exception model specied by Java, instructions
capable of causing exceptions are inhibitors of compiler transformations. When an exception
arises in a Java program, the user-visible state of the program has to look as if every
preceding instruction has been executed and no succeeding instruction has been executed.
This exception model prevents, in general, instruction reordering.
The following section motivates the work and provides a description of the problem to be
solved. Section 3 describes related work. Section 4 presents the three implementation strategies
that enable JVMs to examine only one class at a time to decide whether the array bounds checks
can be removed. The solutions are described as classes to be included in the Multiarray package
(JSR-083 [JSR]). The strategies were generated in the process of understanding the performance
delivered by our Java library OoLaLa [Luj99, LFG00, LFG02a, LFG02b], independently of the
JSR. Section 5 evaluates the performance gained by eliminating array bounds checks in kernels
with array indirection; it also determines the overhead generated by introducing classes instead
of indirection arrays. Section 6 summarises the advantages and disadvantages of the strategies.
Conclusions and future work are presented in Section 7.
2 Motivation and Description of the Problem
Array indirection is ubiquitous in implementations of sparse matrix operations. A matrix is
considered sparse when most elements of the matrix are zero; less than 10% of the elements are
nonzero. This kind of matrix arises frequently in Computational Science and Engineering
(CS&E) applications where physical phenomena are modeled by differential equations. The
combination of differential equations and state-of-the-art solution techniques produces sparse
matrices.
Efficient storage formats (data structures) for sparse matrices rely on storing only the
nonzero elements in arrays. For example, the coordinate (COO) storage format is a data
structure for a sparse matrix that consists of three arrays of the same size; two of type int
(indx and jndx) and the other (value) of any floating point data type. Given an index k, the
elements in the k-th position in indx and jndx represent the row and column indices,
respectively. The element at the k-th position in the array value is the corresponding matrix element.
Those elements that are not stored have value zero. Figure 1 presents the kernel of a sparse
matrix-vector multiplication where the matrix is stored in COO. This figure illustrates an example
of array indirection and the kernel described is commonly used in the iterative solution
of systems of linear equations. "It has been estimated that about 75% of all CS&E applications
require the solution of a system of linear equations at one stage or another [Joh82]."
public class ExampleSparseBLAS {
public static void mvmCOO ( int indx[], int jndx[],
double value[], double vectorY[], double vectorX[] ) {
for (int k = 0; k < value.length; k++)
vectorY[ indx[k] ] += value[k] * vectorX[ jndx[k] ];
}
}
Figure 1: Sparse matrix-vector multiplication using coordinate storage format.
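To make the storage format concrete, the following self-contained sketch (the matrix values are illustrative, not taken from the paper) builds the three COO arrays for a small 3x3 sparse matrix and runs a kernel of the kind Figure 1 describes:

```java
public class CooDemo {
    // COO kernel: y[indx[k]] += value[k] * x[jndx[k]]
    static void mvmCOO(int[] indx, int[] jndx, double[] value,
                       double[] vectorY, double[] vectorX) {
        for (int k = 0; k < value.length; k++)
            vectorY[indx[k]] += value[k] * vectorX[jndx[k]];
    }

    public static void main(String[] args) {
        // 3x3 matrix with nonzeros A[0][0]=2, A[1][2]=3, A[2][1]=4
        int[] indx = {0, 1, 2};        // row index of each nonzero
        int[] jndx = {0, 2, 1};        // column index of each nonzero
        double[] value = {2.0, 3.0, 4.0};
        double[] x = {1.0, 1.0, 1.0};
        double[] y = new double[3];
        mvmCOO(indx, jndx, value, y, x);
        System.out.println(y[0] + " " + y[1] + " " + y[2]);
    }
}
```

Note that vectorY is indexed through indx and vectorX through jndx: both accesses are array indirection of exactly the kind this paper targets.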
Array indirection can also occur in the implementation of irregular sections of multi-dimensional
arrays [MMG99] and in the solution of non-sparse systems of linear equations with pivoting
[GvL96].
Consider the set of statements presented in Figure 2. The statement with comment Access
A can be executed without problems. On the other hand, the statement with comment Access
B would throw an ArrayIndexOutOfBoundsException exception because it tries to access the
position -4 in the array foo.
int indx[] = { 6, -4 };
double foo[] = new double[10];
... foo[ indx[0] ] ... // Access A
... foo[ indx[1] ] ... // Access B
Figure 2: Java example of array indirection.
The difficulty of Java, compared with other mainstream programming languages, is that
several threads can be running in parallel and more than one can access the array indx. Thus,
it is possible for the elements of indx to be modified before the statements with the comments
are executed. Even if a JVM could check all the classes loaded to make sure that no other
thread could access indx, new classes could be loaded and invalidate such analysis.
3 Related Work
Related work can be divided into (a) techniques to eliminate array bounds checks, (b) escape
analysis, (c) field analysis and related field analysis, and (d) the precursor of the present work
(IBM's Ninja compiler and multi-dimensional array package). A survey of other techniques to
improve the performance of Java applications can be found in [KCSL00].
Elimination of Array Bounds Checks - To the best of our knowledge, none of the
published techniques, per se, and existing compilers-JVMs can (i) optimise array bounds checks
in the presence of indirection, (ii) be suitable for adaptive just-in-time compilation, or (iii) support
multi-threading and (iv) dynamic class loading. Techniques based on theorem provers [SI77,
NL98, XMR00] are too heavyweight. Algorithms based on data-flow analysis have been published
extensively [MCM82, Gup90, Gup93, KW95, Asu92, CG96], but for languages that do not
provide multi-threading. Another technique is based on type analysis and has its application
in functional languages [XP98]. Linear program solvers have also been proposed [RR00].
Bodik, Gupta and Sarkar developed the ABCD (Array Bounds Check on Demand)
algorithm [BGS00] for Java. It is designed to fit the time constraints of analysing the program and
applying the transformation at run time. ABCD targets business-kind applications, but its
sparse representation of the program does not include information about multi-threading. Thus
the algorithm, although the most interesting for our purposes, cannot handle indirection since
the origin of the problem is multi-threading. The strategies proposed in Section 4 can cheaply
provide that extra information to enable ABCD to eliminate checks in the presence of indirection.
Escape Analysis - In 1999, four papers were published in the same conference describing
escape analysis algorithms and the possibilities to optimise Java programs by the optimal allocation
of objects (heap vs. stack) and the removal of synchronization [CGS
WR99]. Escape analysis tries to determine whether an object that has been created by a method,
a thread, or another object escapes the scope of its creator. Escape means that another object
can get a reference to the object and, thus, make it live (not garbage collected) beyond the
execution of the method, the thread, or the object that created it. The three strategies presented
in Section 4 require a simple escape analysis. The strategies can only provide information
(class invariants) if every instance variable does not escape the object. Otherwise, a different
thread can get access to instance variables and update them, possibly breaking the desired class
invariant.
Field Analysis and Related Field Analysis - Both field analysis [GRS00] and related
field analysis [AR01] are techniques that look at one class at a time and try to extract as
much information as possible. This information can be used, for example, for the resolution of
method calls or object inlining. Related field analysis looks for relations among the instance
variables of objects. Aggarwal and Randall [AR01] demonstrate how related field analysis can
be used to eliminate array bounds checks for a class following the iterator pattern [GHJV95].
The strategies presented in the following section make use of the concept of field analysis.
The objective is to provide a meaningful class invariant and this is represented in the instance
variables (fields). However, the actual algorithms to test the relations have not been used in
previous work on field analysis. The demonstration of eliminating array bounds checks given
in [AR01] cannot be applied in the presence of indirection.
IBM Ninja project and multi-dimensional array package - IBM's Ninja group
has focused on numerically intensive applications based on arrays in Java.
Midkiff, Moreira and Snir [MMS98] developed a loop
versioning algorithm so that iteration spaces of nested loops are partitioned into regions
for which it can be proved that no exceptions (null checks and array bounds checks) and no
synchronisations can occur. Having found these exception-free and synchronisation-free regions,
traditional loop reordering transformations can be applied without violating the strict exception
model.
The Ninja group designed and implemented a multi-dimensional array package [MMG99] to
replace Java arrays so that the discovery of safe regions becomes easier. To eliminate the
overhead introduced by using classes they developed semantic expansion [WMMG98]. Semantic
expansion is a compilation strategy by which selected classes are treated as language primitives
by the compiler. In their prototype static compiler, known as Ninja, they successfully
implemented the elimination of array bounds checks, together with semantic expansion for their
multi-dimensional array package and other loop reordering transformations.
The Ninja compiler is not compatible with the specification of Java since it does not support
dynamic class loading. The semantic expansion technique only ensures that computations that
use the multi-dimensional array package directly do not suffer overhead. Although the compiler
is not compatible with Java, this does not mean that the techniques that they developed could
not be incorporated into a JVM. These techniques would be especially attractive for quasi-static
compilers [SBMG00].
This paper extends the Ninja group's work by tackling the problem of the elimination of
array bounds checks in the presence of indirection. The strategies, described in Section 4,
generate classes that are incorporated into a multi-dimensional array package proposed in the
Java Specification Request (JSR) 083 [JSR]. If accepted, this JSR will define the standard Java
multi-dimensional array package; it is a direct consequence of the Ninja group's work.
4 Implementation Strategies
The strategies that are described in this section have a common goal: to produce a class for
which JVMs can discover their invariants simply by examining the class and, thereby, derive
information to eliminate array bounds checks. Each class provides the functionality of Java
arrays of type int and, thus, can substitute for them. Objects of these classes would be used
instead of indirection arrays. The three strategies generate three different classes that naturally
fit into a multi-dimensional array library such as the one described in the JSR-083 [JSR]. Figure
3 describes the public methods that the three classes implement; the three classes extend the
abstract class IntIndirectionMultiarray1D.
package multiarray;
public class RuntimeMultiarrayException extends
RuntimeException {.}
public class UncheckedMultiarrayException extends
RuntimeMultiarrayException {.}
public class MultiarrayIndexOutOfBoundsException
extends RuntimeMultiarrayException {.}
public abstract class Multiarray {.}
public abstract class Multiarray1D extends Multiarray {.}
public abstract class Multiarray2D extends Multiarray {.}
public abstract class Multiarray D extends Multiarray {.}
public final class BooleanMultiarray D extends
Multiarray D {.}
public final class Multiarray D extends
Multiarray {.}
public abstract class IntMultiarray1D extends Multiarray1D {
public abstract int get ( int i );
public abstract void set ( int i, int value );
public abstract int length ();
}
public abstract class IntIndirectionMultiarray1D
extends IntMultiarray1D {
public abstract int getMin ();
public abstract int getMax ();
}
Figure 3: Public interface of classes that substitute Java arrays of int and a multi-dimensional array package, multiarray.
Part of the class invariant that JVMs should discover is common to the three strategies.
This is that the values returned by the methods getMin and getMax are always lower bounds
and upper bounds, respectively, of the elements stored. This common part of the invariant
can be computed, for example, using the algorithm for constraint propagation proposed in the
ABCD algorithm, suitable for just-in-time dynamic compilation [BGS00].
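To illustrate how that invariant pays off, the sketch below uses a hypothetical class Ind (an illustrative stand-in for an IntIndirectionMultiarray1D implementation, not the paper's code) to guard a loop with getMin/getMax. Inside the guard every access through the indirection object is provably within bounds, which is exactly the information a JVM holding the class invariant needs in order to drop the per-access checks:

```java
// Minimal stand-in: immutable wrapper that records min and max at construction.
final class Ind {
    private final int[] d;
    private final int min, max;
    Ind(int[] v) {
        d = v.clone();
        int lo = v[0], hi = v[0];
        for (int x : v) { lo = Math.min(lo, x); hi = Math.max(hi, x); }
        min = lo; max = hi;
    }
    int get(int i)  { return d[i]; }
    int length()    { return d.length; }
    int getMin()    { return min; }
    int getMax()    { return max; }
}

public class HoistDemo {
    public static void main(String[] args) {
        double[] foo = {1, 2, 3, 4};
        Ind indices = new Ind(new int[]{3, 0, 2});
        double sum = 0;
        // One guard hoists all the checks: inside it, every
        // foo[indices.get(k)] is provably in bounds.
        if (indices.getMin() >= 0 && indices.getMax() < foo.length) {
            for (int k = 0; k < indices.length(); k++)
                sum += foo[indices.get(k)];
        }
        System.out.println(sum);
    }
}
```

The two comparisons in the guard replace two comparisons per loop iteration, and the now exception-free loop body becomes eligible for reordering transformations.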
The reflection mechanism provided by Java can interfere with the three strategies. For
example, a program using reflection can access instance variables and methods without knowing
their names. Even private instance variables and methods can be accessed! In this way a
program can read from or write to instance variables and, thereby, violate the visibility rules.
The circumstances under which this can happen depend on the security policy in place for each
JVM. In order to avoid this interference, hereafter the security policy is assumed to:
1. have a security manager in place,
2. not allow programs to create a new security manager or replace the existing one (i.e. permissions
setSecurityManager and createSecurityManager are not granted; see java.lang.RuntimePermission),
3. not allow programs to change the current security policy (i.e. permissions getPolicy,
setPolicy, insertProvider, removeProvider, and write access to security policy files are not granted;
see java.security.SecurityPermission and java.io.FilePermission), and
4. not allow programs to gain access to private instance variables and methods (i.e. permission
suppressAccessChecks is not granted; see java.lang.reflect.ReflectPermission).
JVMs can test these assumptions in linear time by invoking specific methods on the
java.lang.SecurityManager and java.security.AccessController objects at start-up. These
assumptions do not imply any loss of generality since, to the authors' knowledge, CS&E applications
do not require reflection for accessing private instance variables or methods. In
addition, the described security policy assumptions represent good security management practice
for general purpose Java programs. For a more authoritative and detailed description of
Java security see [Gon98].
The first strategy is the simplest. Given that the problem arises from the parallel execution
of multiple threads, a trivial situation occurs when no thread can write to the indirection
array. In other words, part of the problem disappears when the indirection array is immutable:
the ImmutableIntMultiarray1D class.
The second strategy uses the synchronization mechanisms defined in Java. The objects
of this class, namely MutableImmutableStateIntMultiarray1D, are in either of two states.
The default state is the mutable state and it allows writes and reads. The other state is the
immutable state and it allows only reads. This second strategy can be thought of as a way to
reduce the general case to the trivial case proposed in the first strategy.
The third and final strategy takes a different approach and does not seek immutability.
Only an index that is outside the bounds of an array can generate an ArrayIndexOutOfBounds-
Exception; this is why JVMs need to include explicit bounds checks. The number of threads accessing
(writing/reading) an indirection array simultaneously is irrelevant as long as every element in
the indirection array is within the bounds of the arrays accessed through indirection. The third
class, ValueBoundedIntMultiarray1D, enforces that every element stored in an object of this
class is within the range of zero to a given parameter. The parameter must be greater than or
equal to zero, cannot be modified and is passed in with a constructor.
4.1 ImmutableIntMultiarray1D
The methods of a given class can be divided into constructors, queries and commands [Mey97].
Constructors are those methods of a class that once executed (without anomalies) create a new
object of that class. In addition, in Java a constructor must have the same name as its class.
Queries are those methods that return information about the state (instance variables) of an
object. These methods do not modify the state of any object, and the returned information can
depend on an expression of several instance variables. Commands are those methods that change the state of an object
(modify the instance variables).
The class ImmutableIntMultiarray1D follows the simple idea of making its objects
immutable. Consider the abstract class IntIndirectionMultiarray1D (see Figure 3): the methods
get, length, getMin and getMax are query methods. The method set is a command
method and, because the class is abstract, it does not declare constructors.
Figure 4 presents a simplified implementation of the class ImmutableIntMultiarray1D.
In order to make ImmutableIntMultiarray1D objects immutable, the command methods are
implemented simply by throwing an UncheckedMultiarrayException.¹ Obviously, the query
methods do not modify any instance variable. The instance variables (data,² length, min and
max) are declared as final instance variables and every instance variable is initialised by each
constructor.
public final class ImmutableIntMultiarray1D extends
IntIndirectionMultiarray1D {
private final int data[];
private final int length;
private final int min;
private final int max;
public ImmutableIntMultiarray1D ( int values[] ) {
int temp, auxMin, auxMax;
length = values.length;
data = new int [length];
auxMin = auxMax = values[0];
for (int i = 0; i < length; i++) {
temp = values[i];
data[i] = temp;
if (temp < auxMin) auxMin = temp;
if (temp > auxMax) auxMax = temp;
}
min = auxMin;
max = auxMax;
}
public int get ( int i ) {
if ( i >= 0 && i < length )
return data[i];
else throw new MultiarrayIndexOutOfBoundsException();
}
public void set ( int i , int value ) {
throw new UncheckedMultiarrayException();
}
public int length () { return length; }
public int getMin () { return min; }
public int getMax () { return max; }
}
Figure 4: Simplified implementation of class ImmutableIntMultiarray1D.
Note that the only statements (bytecodes in terms of JVMs) that write to the instance
variables occur in constructors and that the instance variables are private and final. These
two conditions are almost enough to derive that every object is immutable.
Another condition is that those instance variables whose type is not primitive do not escape
the scope of the class: escape analysis. In other words, these instance variables are created by
any method of the class, none of the methods returns a reference to any instance variable and
they are declared as private. The life span of these instance variables does not exceed that of
their creator object.
¹ UncheckedMultiarrayException inherits from RuntimeException - it is an unchecked exception - and, thus,
methods need neither include it in their signature nor provide try-catch clauses.
² The declaration of an instance variable of type array as final indicates that once an array has been assigned
to the instance variable then no other assignment can be applied to that instance variable. However, the elements
of the assigned array can be modified without restriction.
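The point of footnote 2 - final protects the reference, not the elements - is easy to demonstrate in a few lines:

```java
public class FinalArrayDemo {
    public static void main(String[] args) {
        final int[] data = {1, 2, 3};
        data[0] = 9;            // legal: final only forbids reassigning the reference
        // data = new int[3];   // would be a compile-time error
        System.out.println(data[0]);
    }
}
```

This is why declaring data as final is necessary but not sufficient for immutability; the escape condition below closes the remaining hole.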
For example, the instance variable data escapes the scope of the class if a constructor is
implemented as shown in Figure 5. It also escapes the scope of the class if the non-private
method getArray (see Figure 5) is included in the class. In both cases, any number of threads
can get a reference to the instance variable data and modify its content (see Figure 6).
public final class ImmutableIntMultiarray1D extends
IntIndirectionMultiarray1D {
public ImmutableIntMultiarray1D ( int values[] ) {
int temp, auxMin, auxMax;
length = values.length;
data = values; // the caller keeps a reference to data
auxMin = auxMax = values[0];
for (int i = 0; i < length; i++) {
temp = values[i];
if (temp < auxMin) auxMin = temp;
if (temp > auxMax) auxMax = temp;
}
min = auxMin;
max = auxMax;
}
public Object getArray () { return data; }
}
Figure 5: Methods that enable the instance variable data to escape the scope of the class ImmutableIntMultiarray1D.
public class ExampleBreakImmutability {
public static void main ( String args[] ) {
int values[] = { 6, -4 };
ImmutableIntMultiarray1D indx =
new ImmutableIntMultiarray1D( values );
int brokenData[] = (int[]) indx.getArray();
brokenData[0] = -1; // modifies the "immutable" contents
}
}
Figure 6: An example program that modifies the content of the instance variable data, a member
of every object of class ImmutableIntMultiarray1D, using the method and the constructor
implemented in Figure 5.
Consider an algorithm that checks:
that only bytecodes in constructors write to any instance variable,
that every instance variable is private and final, and
that any instance variable whose type is not primitive does not escape the class.
Such an algorithm can determine whether any class produces immutable objects and it is
of O(#b · #iv) complexity, where #b is the number of bytecodes and #iv is the number of
instance variables. Hence, the algorithm is suitable for just-in-time compilers. Further, with
JSR-083 as the Java standard multidimensional array package, JVMs can recognise the class
and produce the invariant without any checking.
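A source-level analogue of part of that check can be sketched with reflection (the paper's algorithm inspects bytecodes inside a JVM; this sketch only verifies the private/final condition, on a hypothetical stand-in class, not the real ImmutableIntMultiarray1D):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class FieldCheck {
    // Stand-in with the field shape of ImmutableIntMultiarray1D.
    static final class Sample {
        private final int[] data = new int[1];
        private final int length = 1;
    }

    public static void main(String[] args) {
        boolean ok = true;
        for (Field f : Sample.class.getDeclaredFields()) {
            int m = f.getModifiers();
            // every instance variable must be private and final
            ok &= Modifier.isPrivate(m) && Modifier.isFinal(m);
        }
        System.out.println(ok ? "private/final conditions hold"
                              : "conditions violated");
    }
}
```

The remaining two conditions (writes only in constructors, no escaping references) require scanning the bytecodes, which is where the O(#b · #iv) bound comes from.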
The constructor provided in the class ImmutableIntMultiarray1D is inefficient in terms of
memory requirements. This constructor implies that at some point during execution a program
would consume double the necessary memory space to hold the elements of an indirection array.
This constructor is included mainly for the sake of clarity. A more complete implementation of
this class provides other constructors that read the elements from files or input streams.
The implementation of the method get includes a specific test for the parameter i before
accessing the instance variable data. The test ensures that accesses to data are not out of
bounds. Tests of this kind are implemented in every method set and get in the subsequent
classes. Hereafter, and for clarity, the tests are omitted in the implementations of subsequent
classes.
4.2 MutableImmutableStateIntMultiarray1D
Figure 7 presents a simplified and non-optimised implementation of the second strategy. The
idea behind this strategy is to ensure that objects of class MutableImmutableStateIntMultiarray1D
can be in only two states:
Mutable state - Default state. The elements stored in an object of the class can be modified
and read (at the user's own risk).
Immutable state - The elements stored in an object of the class can be read but not modified.
The strategy relies on the synchronization mechanisms provided by Java to implement the
class. Every object in Java has an associated lock. The execution of a synchronized method³ or
synchronized block⁴ is a critical section. Given an object with a synchronized method, before
any thread can start the execution of the method it must first acquire the lock of that object.
Upon completion the thread releases the lock. The same applies to synchronized blocks. At any
point in time at most one thread can be executing a synchronized method or a synchronized
block for the given object.
The Java syntax and the standard Java API do not provide the concept of acquiring and
releasing an object's lock. Thus, a Java application does not contain special keywords or invoke
a method of the standard Java API to access the lock of an object. These concepts are part of
the specification for the execution of Java applications. Further details about multi-threading
in Java can be found in [JSGB00, Lea99].
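The lock semantics just described can be exercised with a small runnable sketch (the class and its names are illustrative, not from the paper): four threads increment a shared counter through a synchronized method, and a synchronized block on the same object uses the same lock, so the increments are serialised and the final count is exact.

```java
public class SyncDemo {
    private int n = 0;

    // synchronized method: acquires this object's lock for the whole body
    public synchronized void inc() { n++; }

    // synchronized block: same lock, taken explicitly
    public int get() {
        synchronized (this) { return n; }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncDemo d = new SyncDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int k = 0; k < 1000; k++) d.inc(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        // at most one thread at a time inside inc(), so no update is lost
        System.out.println(d.get());
    }
}
```

Without the synchronized keyword on inc(), the n++ read-modify-write could interleave and the final count would be unpredictable; this is the same mechanism MutableImmutableStateIntMultiarray1D relies on for its state transitions.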
Consider an object indx of class MutableImmutableStateIntMultiarray1D. The implementation
of this class (see Figure 7) enforces that indx starts in the mutable state. The state
is stored in the boolean instance variable isMutable and its value is kept equivalent to the
boolean expression (reader == null). In the mutable state, the implementations of the methods
get and set are as expected.
The object indx can only change its state when a thread invokes its synchronized method
passToImmutable. When the state is mutable, indx changes its instance variable isMutable to
false and stores the thread that executed the method in the instance variable reader. When
³ A synchronized method is a method whose declaration contains the keyword synchronized.
⁴ A synchronized block is a set of consecutive statements in the body of a method not declared as synchronized
which are surrounded by the clause synchronized (object) {...} .
public final class MutableImmutableStateIntMultiarray1D
extends IntIndirectionMultiarray1D {
private Thread reader;
private final int data[];
private final int length;
private int min;
private int max;
private boolean isMutable;
public MutableImmutableStateIntMultiarray1D ( int length ) {
this.length = length;
data = new int [length];
isMutable = true;
reader = null;
}
public int get ( int i ) { return data[i]; }
public synchronized void set ( int i , int value ) {
while ( !isMutable ) {
try { wait(); }
catch (InterruptedException e) {
throw new UncheckedMultiarrayException();
}
}
data[i] = value;
}
public int length () { return length; }
public synchronized int getMin () { return min; }
public synchronized int getMax () { return max; }
public synchronized void passToImmutable () {
while ( !isMutable ) {
try { wait(); }
catch (InterruptedException e) {
throw new UncheckedMultiarrayException();
}
}
// record the bounds of the stored elements before freezing
min = data[0]; max = data[0];
for (int k = 1; k < length; k++) {
if (data[k] < min) min = data[k];
if (data[k] > max) max = data[k];
}
isMutable = false;
reader = Thread.currentThread();
}
public synchronized void returnToMutable () {
if ( reader == Thread.currentThread() ) {
reader = null;
isMutable = true;
notifyAll();
}
else throw new UncheckedMultiarrayException();
}
}
Figure 7: Simplified implementation of class MutableImmutableStateIntMultiarray1D.
the state is immutable, the thread executing the method stops⁵ until the state becomes mutable
and then proceeds as explained for the mutable state.
⁵ An explanation of the wait/notify thread communication mechanism clarifies what stops means in this
sentence. The wait and notify methods are part of the standard Java API and both are members of the class
java.lang.Object. In Java every class is a subclass (directly or indirectly) of java.lang.Object and, thus, every
object inherits the methods wait and notify. Both methods are part of a communication mechanism
for Java threads.
For example, consider the thread executing the method passToImmutable when indx is in the immutable state.
The thread starts executing the synchronized method after acquiring the lock of indx. After checking the state, it needs to
wait until the state of indx is mutable. The thread, itself, cannot force the state transition; it needs to wait
for another thread to provoke that transition. The first thread stops execution by invoking the method wait
on indx. This method makes the first thread release the lock of indx, wait until a second thread invokes the
method notify on indx, and then reacquire the lock prior to returning from wait. Several threads can be waiting on
indx (i.e. have invoked wait), but only one thread is notified when the second thread invokes notify on indx. Further
details about wait and notify can be found in [JSGB00, Lea99].
Once indx is in the immutable state, the get method is implemented as expected while the set method cannot modify the elements of data until indx returns to the mutable state. The object indx returns to the mutable state when the same thread that successfully provoked the transition mutable-to-immutable invokes in indx the synchronized method returnToMutable. When the transition is completed, this thread notifies other threads waiting in indx of the state transition.
Given the complexity of matching the locking-wait-notify logic with the statements (bytecodes) that write to the instance variables, the authors consider that JVMs will not incorporate tests for this kind of class invariant. Thus, one possibility is that JVMs recognise the class as being part of the standard Java API and automatically provide the class invariant.
Note that, again, a requirement for proving the class invariant is that the instance variables of the class MutableImmutableStateIntMultiarray1D do not escape the scope of the class.
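As a runnable illustration of this protocol, the sketch below re-implements the essentials of Figure 7 (the class body and the demo driver are reconstructions for illustration, not the authors' code) and shows a second thread blocking in passToImmutable until the first thread invokes returnToMutable:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of the mutable/immutable-state protocol described above.
// Names mirror Figure 7, but the class body is a reconstruction.
public class MutableImmutableDemo {
    static final class StateIntArray {
        private final int[] data;
        private boolean isMutable = true;   // current state
        private Thread reader = null;       // thread holding the immutable state

        StateIntArray(int[] data) { this.data = data.clone(); }

        synchronized void passToImmutable() {
            while (!isMutable) {            // another thread holds the immutable state
                try { wait(); }
                catch (InterruptedException e) { throw new RuntimeException(e); }
            }
            isMutable = false;
            reader = Thread.currentThread();
        }

        synchronized void returnToMutable() {
            if (reader != Thread.currentThread()) throw new IllegalStateException();
            isMutable = true;
            reader = null;
            notifyAll();                    // wake threads blocked in passToImmutable
        }

        int get(int i) { return data[i]; }
    }

    static List<String> run() {
        List<String> events = Collections.synchronizedList(new ArrayList<>());
        StateIntArray indx = new StateIntArray(new int[]{2, 0, 1});
        indx.passToImmutable();             // main thread takes the immutable state
        Thread t = new Thread(() -> {
            indx.passToImmutable();         // blocks until main returns to mutable
            events.add("t-acquired");
            indx.returnToMutable();
        });
        t.start();
        try { Thread.sleep(100); }          // give t time to block
        catch (InterruptedException e) { throw new RuntimeException(e); }
        events.add("main-released");
        indx.returnToMutable();             // wakes t
        try { t.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return events;
    }

    public static void main(String[] args) {
        System.out.println(run());          // [main-released, t-acquired]
    }
}
```

The ordering of the two events is deterministic: the blocked thread cannot proceed until notifyAll is issued inside returnToMutable.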
4.3 ValueBoundedIntMultiarray1D
Figure 8 presents a simplified implementation of the third strategy. The implementation of this class, ValueBoundedIntMultiarray1D, ensures that its objects can only store elements greater than or equal to zero and less than or equal to a parameter. This parameter, upperBound, is passed to every object by the constructor and cannot be modified.
public final class ValueBoundedIntMultiarray1D extends
{
    private final int data[];
    private final int length;
    private final int upperBound;
    private final int lowerBound = 0;
    public ValueBoundedIntMultiarray1D ( int length,
                                         int upperBound ) {
        this.length = length;
        this.upperBound = upperBound;
        data = new int [length];
    }
    public int get ( int i ) { return data[i]; }
    public void set ( int i , int value ) {
        if ( value >= lowerBound && value <= upperBound )
            data[i] = value;
        else throw new UncheckedMultiarrayException();
    }
    public int length () { return length; }
    public int getMin () { return lowerBound; }
    public int getMax () { return upperBound; }
}
Figure 8: Simplified implementation of class ValueBoundedIntMultiarray1D.
The implementation of the method get is the same as in the previous strategies. The implementation of the method set includes a test that ensures that only elements in the range are stored. The methods getMin and getMax, in contrast with the previous classes, do not return the actual minimum and maximum stored elements, but the lower (zero) and upper bounds.
Further information about threads in Java and the wait/notify mechanism can be found in [JSGB00, Lea99].
The tests that JVMs need to perform to extract the class invariant include the escape analysis for the instance variables of the class (described in Section 4.1) and, as mentioned in the introduction of Section 4 with respect to the common part of the invariant, the construction of constraints using dataflow analysis for the modification of the instance variables. As with the previous classes, JVMs can also recognise the class and produce the invariant without any checking.
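To make the class invariant concrete, the following self-contained sketch (a reduced, hypothetical re-implementation, not the paper's class) shows how an object constructed with upperBound == vector.length - 1 makes every indirected access provably within bounds:

```java
// Minimal reconstruction of the value-bounded strategy: every stored
// element is forced into [0, upperBound], so an access like
// vector[v.get(i)] can never be out of bounds when
// upperBound == vector.length - 1.
public class ValueBoundedDemo {
    static final class ValueBoundedIntArray {
        private final int[] data;
        private final int upperBound;   // fixed at construction, never modified

        ValueBoundedIntArray(int length, int upperBound) {
            this.data = new int[length];
            this.upperBound = upperBound;
        }
        int get(int i) { return data[i]; }
        void set(int i, int value) {
            if (value >= 0 && value <= upperBound) data[i] = value;
            else throw new IllegalArgumentException("out of range: " + value);
        }
        int getMin() { return 0; }
        int getMax() { return upperBound; }
    }

    static double sumThroughIndirection(double[] vector, ValueBoundedIntArray v, int n) {
        double s = 0.0;
        // Because of the class invariant, no per-access bounds check on
        // vector is needed here once getMax() < vector.length is known.
        for (int i = 0; i < n; i++) s += vector[v.get(i)];
        return s;
    }

    public static void main(String[] args) {
        double[] vector = {10.0, 20.0, 30.0};
        ValueBoundedIntArray v = new ValueBoundedIntArray(4, vector.length - 1);
        v.set(0, 2); v.set(1, 0); v.set(2, 2); v.set(3, 1);
        System.out.println(sumThroughIndirection(vector, v, 4)); // 90.0
        try { v.set(0, 3); }                 // rejected: 3 > upperBound
        catch (IllegalArgumentException e) { System.out.println("rejected"); }
    }
}
```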
4.4 Usage of the Classes
This section revisits the matrix-vector multiplication kernel where the matrix is sparse and stored in COO format (see Figure 1) in order to illustrate how the three different classes can be used.
Figure 10 presents the same kernel, but following the recommendations of the Ninja group to facilitate loop versioning (the statements inside the for-loop do not produce NullPointerExceptions nor ArrayIndexOutOfBoundsExceptions and do not use synchronization mechanisms). This new implementation checks that the parameters are not null and that the accesses to indx and jndx are within the bounds. These checks are made prior to the execution of the for-loop. If the checks fail then none of the statements in the for-loop is executed and the method terminates by throwing an exception. The implementation, for the sake of clarity, omits the checks needed to generate loops free of aliases. For example, the variables fib and aux are aliases of the same array according to their declaration in Figure 9.
int fib[] = new int[16];
int aux[] = fib;
System.out.println("Are they aliases? " + ( fib == aux ));
// Output: Are they aliases? true
Figure 9: An example of array aliases.
Note that checks to ensure that accesses to vectorY and vectorX are within bounds will require completely traversing local copies of the arrays indx and jndx. Local copies of the arrays are necessary, since both indx and jndx escape the scope of this method. This makes it possible, for example, for another thread to modify their contents after the checks but before the execution of the for-loop. The creation of local copies is a memory-inefficient approach, and the overhead of copying and checking element by element for the maximum and minimum is similar to (or greater than) the overhead of the explicit checks.
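As an illustration of why this alternative is costly, the following sketch (illustrative code, not from the paper) performs the copy and the element-by-element min/max scan before the check-free loop:

```java
// Sketch of the memory-inefficient alternative: snapshot the indirection
// arrays so no other thread can change them after the check, then scan
// the snapshot once for min/max before entering the main loop.
public class CopyAndScanDemo {
    static double[] mvmCOOWithCopies(int[] indx, int[] jndx,
                                     double[] value, double[] vectorY,
                                     double[] vectorX) {
        int[] localIndx = indx.clone();   // local copies: indx/jndx escape the
        int[] localJndx = jndx.clone();   // method, so the originals may change
        int minI = 0, maxI = 0, minJ = 0, maxJ = 0;
        for (int i = 0; i < value.length; i++) {
            if (localIndx[i] > maxI) maxI = localIndx[i];
            if (localIndx[i] < minI) minI = localIndx[i];
            if (localJndx[i] > maxJ) maxJ = localJndx[i];
            if (localJndx[i] < minJ) minJ = localJndx[i];
        }
        if (minI < 0 || maxI >= vectorY.length || minJ < 0 || maxJ >= vectorX.length)
            throw new IndexOutOfBoundsException();
        // After the scan, this loop is known to be free of
        // ArrayIndexOutOfBoundsException on vectorY and vectorX.
        for (int i = 0; i < value.length; i++)
            vectorY[localIndx[i]] += value[i] * vectorX[localJndx[i]];
        return vectorY;
    }

    public static void main(String[] args) {
        int[] indx = {0, 1, 0};
        int[] jndx = {0, 0, 1};
        double[] value = {1.0, 2.0, 3.0};
        double[] y = mvmCOOWithCopies(indx, jndx, value,
                                      new double[2], new double[]{4.0, 5.0});
        System.out.println(java.util.Arrays.toString(y)); // [19.0, 8.0]
    }
}
```

The two extra array allocations and the scan loop are exactly the overheads the text compares against the explicit per-access checks.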
Figure 11 presents the implementations of mvmCOO using the three classes described in the previous sections. The checks for nullity are replaced with a line comment indicating where the checks would be placed. Only the implementation of mvmCOO for ImmutableIntMultiarray1D is complete. The others include comments where the statements of the complete implementation would appear. Due to the class invariants, the implementations include checks for accesses to vectorY and vectorX. When the checks are passed, JVMs find a loop free of ArrayIndexOutOfBoundsException and NullPointerException.
The implementations of mvmCOO for ImmutableIntMultiarray1D and ValueBoundedIntMultiarray1D are identical; the same statements but a different method signature. The implementation for MutableImmutableStateIntMultiarray1D builds on the previous implementations, but it first needs to ensure that the objects indx and jndx are in the immutable state. The implementation also requires that these objects are returned to the mutable state upon completion or abnormal interruption.
public class ExampleSparseBLASWithNinja {
    public static void mvmCOO ( int indx[], int jndx[],
                                double value[], double vectorY[],
                                double vectorX[] ) throws SparseBLASException {
        if ( indx != null && jndx != null && value != null &&
             vectorY != null && vectorX != null &&
             indx.length >= value.length &&
             jndx.length >= value.length ) {
            for (int i = 0; i < value.length; i++)
                vectorY[indx[i]] += value[i]*vectorX[jndx[i]];
        }
        else throw new SparseBLASException();
    }
}
Figure 10: Sparse matrix-vector multiplication using coordinate storage format and the Ninja group's recommendations.
5 Experiments
This section reports two sets of experiments. The first set determines the overhead of array bounds checks for a CS&E application with array indirection. The second set, also for a CS&E application, determines the overhead of using the classes proposed in Section 4 instead of directly accessing Java arrays. In other words, the experiments seek an experimental lower bound for the performance improvement that can be achieved when array bounds checks are eliminated and an upper bound for the overhead due to using the classes introduced in Section 4. A lower bound, because array bounds checks, together with the strict exception model specified by Java, are inhibitors of other optimising transformations (see the Ninja project [MMG 01]) that can further improve the performance. An upper bound, because techniques such as semantic expansion [WMMG98], or those proposed by the authors for optimising OoLaLa [LFG02b, LFG02a], can remove the source of overhead of using a class instead of Java arrays.
The considered example is matrix-vector multiplication where the matrix is in COO format (mvmCOO). This matrix operation is the kernel of iterative methods [GvL96] for solving systems of linear equations or determining the eigenvalues of a matrix. The iterative nature of these methods implies that the operation is executed repeatedly until sufficient accuracy is achieved or an upper bound on the number of iterations is reached. The experiments consider an iterative method that executes mvmCOO 100 times.
Results are reported for four different implementations (see Figures 10 and 11) of mvmCOO. The four implementations of mvmCOO are derived using only Java arrays (JA implementation), or objects of class ImmutableIntMultiarray1D (I-MA implementation), class MutableImmutableStateIntMultiarray1D (MI-MA implementation), and class ValueBoundedIntMultiarray1D (VB-MA implementation).
The experiments consider three different square sparse matrices from the Matrix Market collection [BPR 97]. The three matrices are: utm5940 (size 5940 and 83842 nonzero elements), s3rmt3m3 (size 5357, 106526 entries and symmetric), and s3dkt3m2 (size 90449, 1921955 entries and symmetric). The implementations do not take advantage of symmetry and, thus, s3rmt3m3 and s3dkt3m2 are stored and operated on using all their nonzero elements. The vectors vectorX and vectorY (see implementations in Figures 10 and 11) are initialised with random numbers
public class KernelSparseBLAS {
    public static void mvmCOO ( ImmutableIntMultiarray1D indx,
            ImmutableIntMultiarray1D jndx, double value[],
            double vectorY[], double vectorX[] )
            throws SparseBLASException {
        if ( // nullChecks &&
             indx.length() >= value.length && jndx.length() >= value.length &&
             indx.getMin() >= 0 && indx.getMax() < vectorY.length &&
             jndx.getMin() >= 0 && jndx.getMax() < vectorX.length ) {
            for (int i = 0; i < value.length; i++)
                vectorY[indx.get(i)] += value[i]*vectorX[jndx.get(i)];
        }
        else throw new SparseBLASException();
    }
    public static void mvmCOO ( MutableImmutableStateIntMultiarray1D indx,
            MutableImmutableStateIntMultiarray1D jndx,
            double value[], double vectorY[],
            double vectorX[] ) throws SparseBLASException {
        // idem implementation of mvmCOO for ImmutableIntMultiarray1D
        // but before the throw statement include returnToMutable for
        // indx and/or jndx
        else throw new SparseBLASException();
    }
    public static void mvmCOO ( ValueBoundedIntMultiarray1D indx,
            ValueBoundedIntMultiarray1D jndx,
            double value[], double vectorY[],
            double vectorX[] ) throws SparseBLASException {
        // idem mvmCOO for ImmutableIntMultiarray1D
    }
}
Figure 11: Sparse matrix-vector multiplication using coordinate storage format, the Ninja group's recommendations and the classes described in Figures 4, 7 and 8.
according to a uniform distribution with values ranging between 0 and 5, and with zeros,
respectively.
The experiments are performed on a 1 GHz Pentium III with 256 Mb running Windows 2000 service pack 2, Java 2 SDK 1.3.1_02 Standard Edition and Jove (a static Java compiler) version 2.0 associated with the Java 2 Runtime Environment Standard Edition 1.3.0. The programs are compiled with the flag -O and (the Hotspot JVM is) executed with the flags -server -Xms128Mb -Xmx256Mb. Jove is used with the following two configurations. The first, hereafter referred to as Jove, creates an executable Windows program using default optimisation parameters and optimisation level 1, the lowest level. The second configuration, namely Jove-nochecks, is the same configuration plus a flag that eliminates every array bounds check.
Each experiment is run 20 times and the results shown are the average execution times in seconds. The timers are accurate to the millisecond. For each run, the total execution time of mvmCOO is recorded. Recall that the execution of one experiment implies a program with a loop that executes mvmCOO 100 times.
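The measurement procedure described above can be sketched with a generic harness (an assumption of how such a driver might look, not the authors' actual code): each experiment calls the kernel 100 times, the wall-clock time is taken with a millisecond timer, and 20 such runs are averaged.

```java
// Generic timing harness following the stated methodology: several runs,
// each run executes the kernel a fixed number of times, results averaged
// in seconds using a millisecond-accurate timer.
public class TimingHarness {
    static double averageSeconds(Runnable kernel, int runs, int iterationsPerRun) {
        double total = 0.0;
        for (int r = 0; r < runs; r++) {
            long start = System.currentTimeMillis();   // millisecond accuracy
            for (int i = 0; i < iterationsPerRun; i++) kernel.run();
            total += (System.currentTimeMillis() - start) / 1000.0;
        }
        return total / runs;                           // average time per run
    }

    public static void main(String[] args) {
        double[] x = new double[1000];
        // Stand-in kernel; in the experiments this would be one mvmCOO call.
        double avg = averageSeconds(() -> {
            for (int i = 0; i < x.length; i++) x[i] += 1.0;
        }, 20, 100);
        System.out.println(avg >= 0.0);
    }
}
```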
Table 1 presents the execution times for the JA implementation compiled with the two configurations described for the Jove compiler. The fourth column gives the overhead induced by array bounds checks. This overhead is between 9.03 and 9.83 per cent of the execution time.
Matrix Jove (s.) Jove-nochecks (s.) Overhead (%)
s3rmt3m3 1.700 1.538 9.51
Table 1: Average times in seconds for the JA implementation of mvmCOO.
Table 2 presents the execution times for the four implementations of mvmCOO. In this case, the JVM is the Hotspot server, part of the described Java SDK environment. Table 3 gives the overheads (%) for each of the implementations proposed in Section 4 with respect to the JA implementation. The I-MA implementation produces the greatest overheads for all the matrices. In the middle sits the MI-MA implementation, while the VB-MA implementation always produces the smallest overhead.
Matrix JA (s.) I-MA (s.) MI-MA (s.) VB-MA (s.)
Table 2: Average times in seconds for the four different implementations of mvmCOO.
Matrix I-MA (%) MI-MA (%) VB-MA (%)
Table 3: Overhead (%) for the implementations of mvmCOO using the proposed classes with respect to the JA implementation.
Although this section omits them, the second set of experiments was also run on Linux/Pentium and Solaris/Sparc platforms with the equivalent Java environments. These omitted results are consistent with those presented above.
6 Discussion
Thus far, the development has assumed that the three classes can be incorporated in the multi-dimensional array package proposed by JSR-083. The following paragraphs try to determine whether one of the classes is the "best" or whether the package needs more than one class.
Consider first the class MutableImmutableStateIntMultiarray1D and an object indx of this class. In order to obtain the benefit of array bounds checks elimination when using indx as an indirection array, programs need to follow these steps:
1. to change the state of indx to immutable,
2. to execute any other actions (normally inside loops) that access the elements stored in indx, and
3. to change the state of indx back to mutable.
An example of these steps is the implementation of matrix-vector multiplication using class
MutableImmutableStateIntMultiarray1D (see Figure 11).
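A defensive way to code the three steps (a sketch with hypothetical names; the paper's figures do not use try/finally explicitly) is to make step 3 unconditional, so the array is returned to the mutable state even on abnormal exit:

```java
// Sketch: the three-step protocol with an unconditional step 3.
// StateArray and safeSum are illustrative names, not from the paper.
public class ProtocolDemo {
    interface StateArray {            // the relevant slice of the class interface
        void passToImmutable();
        void returnToMutable();
        int get(int i);
    }

    static int safeSum(StateArray indx, int n) {
        indx.passToImmutable();       // step 1: freeze the indirection array
        try {
            int s = 0;
            for (int i = 0; i < n; i++) s += indx.get(i);  // step 2
            return s;
        } finally {
            indx.returnToMutable();   // step 3: runs even on abnormal exit
        }
    }

    public static void main(String[] args) {
        final int[] data = {1, 2, 3};
        StateArray a = new StateArray() {
            private boolean mutable = true;
            public void passToImmutable() { mutable = false; }
            public void returnToMutable() { mutable = true; }
            public int get(int i) { return data[i]; }
        };
        System.out.println(safeSum(a, 3)); // 6
    }
}
```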
If the third step is omitted, possibly accidentally, the indirection array indx becomes useless for the purpose of array bounds checks elimination. Other threads (or a thread that abandoned the execution without adequate clean up) would be stopped waiting indefinitely for the notification that indx has returned back to the mutable state.
Another undesirable situation can arise when several threads are executing in parallel and at least two threads need indx and jndx (another object of the same class) at the same time. Depending on the order of execution, or if both are aliases of the same object, a deadlock can occur.
One might think that these two undesirable situations could be overcome by modifying the implementation of the class so that it adequately maintains a list of thread readers instead of just one thread reader. However, omission of the third step now leads to starvation of writers.
Given that these undesirable situations are not inherent in the other two strategies (classes), and that the experiments of Section 5 do not show a performance benefit in favor of class MutableImmutableStateIntMultiarray1D, the decision is to disregard this class.
The discussion now looks at scenarios in CS&E applications where the two remaining classes can be used. In other words, the functionality of classes ImmutableIntMultiarray1D and ValueBoundedIntMultiarray1D is contrasted with the functionality requirements of CS&E applications.
Remember that both classes, although they have the same public interface, provide different functionality. As the name suggests, class ImmutableIntMultiarray1D implements the method set without modifying any instance variable; it simply throws an unchecked exception (see Figure 4). On the other hand, class ValueBoundedIntMultiarray1D provides the expected functionality for this method as long as the parameter value of the method is greater than or equal to zero and less than or equal to the instance variable upperBound (see Figure 8).
In various scenarios of CS&E applications, such as LU-factorisation of dense and banded matrices with pivoting strategies, and Gauss elimination for sparse matrices with fill-in, applications need to update the content of indirection arrays [GvL96]. For example, fill-in means that new nonzero elements are created where zero elements existed before. Thus, the Gauss elimination algorithm creates new nonzero matrix elements progressively. Assuming that the matrix is stored in COO format, the indirection arrays indx and jndx need to be updated with the row and column indices, respectively, for each new nonzero element.
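Under COO storage, recording one fill-in element reduces to appending its coordinates; a minimal sketch follows (the helper name, the nnz counter and the plain-array form are assumptions for illustration):

```java
// Sketch of COO fill-in: a new nonzero a(row, col) is appended by writing
// its coordinates into the indirection arrays. With the value-bounded
// class, set() itself would enforce that the new indices stay in range.
public class FillInDemo {
    static int addNonzero(int[] indx, int[] jndx, double[] value, int nnz,
                          int row, int col, double v) {
        indx[nnz] = row;     // with ValueBoundedIntMultiarray1D this would be
        jndx[nnz] = col;     // indx.set(nnz, row) / jndx.set(nnz, col)
        value[nnz] = v;
        return nnz + 1;      // new number of stored nonzero elements
    }

    public static void main(String[] args) {
        int[] indx = new int[4];
        int[] jndx = new int[4];
        double[] value = new double[4];
        int nnz = addNonzero(indx, jndx, value, 0, 2, 1, 7.5);
        System.out.println(nnz + " " + indx[0] + " " + jndx[0] + " " + value[0]);
        // 1 2 1 7.5
    }
}
```

This update pattern is exactly what class ImmutableIntMultiarray1D cannot support, since its set method always throws.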
Given that in other CS&E applications indirection arrays remain unaltered after initialisation, the only reason for including class ImmutableIntMultiarray1D would be performance. However, the performance evaluation reveals that this is not the case. Therefore, the conclusion is to incorporate only the class ValueBoundedIntMultiarray1D into the multi-dimensional array package.
7 Conclusions
Array indirection is ubiquitous in CS&E applications. With the specification of array accesses in Java and with current techniques for eliminating array bounds checks, applications with array indirection suffer the overhead of explicitly checking each access through an indirection array.
Building on previous work of IBM's Ninja and Jalapeño groups, three new strategies to eliminate array bounds checks in the presence of indirection are presented. Each strategy is implemented as a Java class that can replace Java indirection arrays of type int. The aim of the three strategies is to provide extra information to JVMs so that array bounds checks in the presence of indirection can be removed. For normal Java arrays of type int this extra information would require access to the whole application (i.e. no dynamic loading) or be heavyweight. The algorithm to remove the array bounds checks is a combination of loop versioning (as used by the Ninja group [MMS98, MMG00a, MMG]) and construction of constraints based on dataflow analysis (the ABCD algorithm [BGS00]).
The experiments have evaluated the performance benefit of eliminating array bounds checks in the presence of indirection. The overhead has been estimated at between 9.03 and 9.83 per cent of the execution time. The experiments have also evaluated the overhead of using a Java class to replace Java arrays on an off-the-shelf JVM. This overhead varies for each class, but it is between 6.13 and 14.72 per cent of the execution time.
The evaluation of the three strategies also includes a discussion of their advantages and disadvantages. Overall, the third strategy, class ValueBoundedIntMultiarray1D, is the best. It takes a different approach by not seeking the immutability of objects. The number of threads accessing an indirection array at the same time is irrelevant as long as every element in the indirection array is within the bounds of the arrays accessed through indirection. The class enforces that every element stored in an object of the class is between zero and a given parameter. The parameter must be greater than or equal to zero, cannot be modified, and is passed in with a constructor.
The remaining problem is the overhead for programs using the class ValueBoundedIntMultiarray1D instead of Java arrays. The authors have proposed a set of transformations [LFG02a, LFG02b] for their Java library OoLaLa [Luj99, LFG00]. A paper that demonstrates
that the same set of transformations designed for OoLaLa is enough to optimise away the
overhead of using the class ValueBoundedIntMultiarray1D is in preparation. Future work
concentrates on developing an algorithm suitable for just-in-time dynamic compilation to determine
when to apply the set of transformations.
--R
The Jalapeño virtual machine.
Optimization of array subscript range checks.
ABCD: Eliminating array bounds checks on demand.
Escape analysis for object-oriented languages.
The Matrix Market: A web resource for test matrix collections.
Benchmarking Java against C and Fortran for scientific applications.
A reexamination of "Optimization of array subscript range checks".
Sreedhar, and Samuel Midkiff.
Practicing JUDO: Java under dynamic optimizations.
Marmot: an optimizing compiler for Java.
Design Patterns: Elements of Reusable Object Oriented Software.
Field analysis: Getting useful and low-cost interprocedural information.
A fresh look at optimizing array bound checking.
Optimizing array bound checks using flow analysis.
Matrix Computations.
Making Java Work for High-End Computing.
Numerical Methods: A Software Approach.
Jove: Optimizing native compiler for Java technology.
The Java Language Specification.
Elimination of redundant array subscript range checks.
Concurrent Programming in Java: Design Principles and Patterns.
Mikel Luján.
Mikel Luján.
Mikel Luján.
Mikel Luján.
The Java Virtual Machine Specification.
Optimization of range checking.
Object Oriented Software Construction.
Harissa: A hybrid approach to Java execution.
The design and implementation of a certifying compiler.
Scimark 2.0.
Java for applications: a way ahead of time (WAT) compiler.
Symbolic bounds analysis of pointers, array indices, and accessed memory regions.
Implementation of an array bound checker.
Overview of the IBM Java just-in-time compiler.
Java at middle age: Enabling Java for computational science.
Compositional pointer and escape analysis for Java programs.
Safety checking of machine code.
Eliminating array bound checking through dependent types.
Keywords: array bounds check; array indirection
584517 | Combinations of Modal Logics. | There is increasing use of combinations of modal logics in both foundational and applied research areas. This article provides an introduction to both the principles of such combinations and to the variety of techniques that have been developed for them. In addition, the article outlines many key research problems yet to be tackled within this challenging area of work. | Introduction
Combining logics for modelling purposes has become a rapidly expanding enterprise that
is inspired mainly by concerns about modularity and the wish to join together different
kinds of information. As any interesting real world system is a complex, composite entity,
decomposing its descriptive requirements (for design, verification, or maintenance pur-
poses) into simpler, more restricted, reasoning tasks is not only appealing but is often the
only plausible way forward. It would be an exaggeration to claim that we currently have a
thorough understanding of 'combined methods.' However, a core body of notions, questions
and results has emerged for an important class of combined logics, and we are beginning to
understand how this core theory behaves when it is applied outside this particular class.
In this paper we will consider the combination of modal (including temporal) logics,
identifying leading edge research that we, and others, have carried out. Such combined
systems have a wide variety of applications that we will describe, but also have significant
problems, often concerning interactions that occur between the separate modal
dimensions. However, we begin by reviewing why we might want to use modal logics at
all.
z Corresponding author. Current address: Department of Computer Science, University of Liverpool,
Liverpool L69 7ZF, UK.
This document was developed from a panel session in the 1999 UK Automated Reasoning Workshop,
which was held as part of the AISB symposium in Edinburgh. All the authors were panelists, except for
Michael Fisher who chaired the panel and edited this document.
© 2000 Kluwer Academic Publishers. Printed in the Netherlands.
2. Why Use Modal Logics?
2.1. MOTIVATION
Over the past decades our perception of computers and computer programs has changed
several times in quite dramatic ways, with consequences reaching far into our societies.
With the rise of the personal computer we began to view the computer as an extension of
our office desks and computer programs replaced traditional office tools such as typewriters, calculators, and filing cabinets with word processors, spreadsheets, and databases.
Now, the advent of the electronic information age changes our view again: Computers and
their programs turn into ubiquitous digital assistants (or digital agents). Digital agents
have become necessary due to the vast extent and scattered nature of the information
landscape. In addition, today's average computer user is neither able nor willing to learn
how to navigate through the information landscape with the help of more traditional tools.
Digital agents have now become possible for almost the same reason. For the first time
there is sufficient free information and a sufficient number of services available which can
be accessed and manipulated by a computer program without direct intervention by the
human computer user.
Like the personal computer, digital agents will have a substantial impact on our econ-
omy. But do they also have an impact on research in computer science? One should note
that computer hardware design is still generally based on the von Neumann architecture
and that computer programs are still Turing-machine equivalent. However, are techniques
and results already available in computer science research that could have an impact on
the way digital agents (both current and future) are developed and implemented?
Modal logics (Chellas, 1980) or, more precisely, combinations of modal logics, are good
candidates for a formal theory that can be helpful for the specification, development, and
even the execution of digital agents. Modal logics can be used for modelling both digital
agents and (aspects of) their human users. A digital agent should have an understanding
of its own abilities, knowledge, and beliefs. It should also have a representation of the
knowledge, beliefs, and goals of its user and of other digital agents with whom it might
have to cooperate in order to achieve its goals.
Modal logics seem to be perfectly suited as a representation formalism in this setting.
However, there are also some obstacles for the use of the well-studied propositional modal
logics:
propositional logic is often insufficient for more complex real world situations - a
first-order, or even higher-order, language might be necessary;
a monotonic logic might not be sufficient - in many situations our knowledge about
the world is incomplete and much of our knowledge is actually only a default or only
holds with a certain probability; hence, to come to useful conclusions we might have
to rely on a nonmonotonic or probabilistic logic.
Thus, an appropriate representation formalism for digital agents may use combinations of
(propositional or first-order) modal logics.
2.2. REPRESENTING AGENTS
Agent-based systems are a rapidly growing area of interest, in both industry and academia,
(Wooldridge and Jennings, 1995). In particular, the characterisation of complex distributed
components as intelligent or rational agents allows the system designer to analyse applications
at a much higher level of abstraction. In order to reason about such agents, a
number of theories of rational agency have been developed, such as the BDI (Rao and
Georgeff, 1991) and KARO (van Linder et al., 1996) frameworks. These frameworks are
usually represented as combined modal logics. In addition to their use in agent theories,
where the basic representation of agency and rationality is explored, these logics form
the basis for agent-based formal methods. The leading agent theories and formal methods
generally share similar logical properties. In particular, the logics used have:
an informational component, such as being able to represent an agent's beliefs or
knowledge,
a dynamic component, allowing the representation of dynamic activity, and,
a motivational component, often representing the agent's desires, intentions or goals.
These aspects are typically represented as follows:
Information - modal logic of belief (KD45) or knowledge (S5);
Dynamism - temporal or dynamic logic;
Motivation - modal logic of intention (KD) or desire (KD).
Thus, the predominant approaches use relevant combinations. For example: Moore (Moore, ) combines propositional dynamic logic and a modal logic of knowledge (S5); the BDI framework (Rao and Georgeff, 1991; Rao, 1995) uses linear or branching temporal logic, together with modal logics of belief (KD45) or knowledge (S5), desire (KD), and intention (KD); Halpern et al. (Halpern and Vardi, 1989; Fagin et al., 1996) use linear and branching-time temporal logics combined with a multi-modal (S5) logic of knowledge; and the KARO framework (van Linder et al., 1996; van der Hoek et al., 1997) uses propositional dynamic logic, together with modal logics of belief (KD45) and wishes (KD).
If we assume that combinations of modal logics play an important part in modelling
digital agents, it is an obvious step to consider the question whether we are able to verify
specified requirements or properties of an agent using formal methods. Unfortunately,
many of these combinations, particularly those using dynamic logic, become too complex
(not only undecidable, but incomplete) to use in practical situations. Thus, much
current research activity concerning agent theories centres around developing simpler
combinations of logics that can express many of the same properties as the more complex
combinations, yet are simpler to mechanise. For example, some of our work in this area
has involved developing a simpler logical basis for BDI-like agents (Fisher, 1997b).
2.3. SPATIAL LOGICS
While the traditional uses of modal logics are for representing interacting propositional
attitudes such as belief, knowledge, intention, etc., recent work has investigated the representation
of spatial information in combined modal logics. For example, in (Bennett,
1996; Bennett, 1997), Bennett uses the topological interpretation of the S4 modality as an
interior operator, in combination with an S5 modality in order to encode a large class of
topological relations.
2.4. DESCRIPTION LOGICS
Although not originally characterised in this way, one of the most successful uses of combinations
of modal logics has been the development of expressive Description Logics (Sattler, 1996; De Giacomo and Massacci, 1996; Horrocks, 1998b). Description logics have
found many practical applications, for example in reasoning about database schemata
and queries (Calvanese et al., 1998a; Calvanese et al., 1998b). Since description logics
have been shown to correspond directly to certain combinations of modal logics, such
combinations are also useful.
The application concerning schema and query reasoning that was described above is
very promising, as are ontological engineering applications (Rector et al., 1997; Baker
et al., 1998). In this, and other, contexts description logics that combine transitive, non-transitive
and inverse roles (Horrocks and Sattler, 2000) have proved particularly useful
as they enable many common conceptual data modelling formalisms (including Entity-Relationship
models) to be captured while still allowing for empirically tractable implementations
(Horrocks, 2000).
Another successful combination of modal logics within a description logic framework is
motivated by the attempt to add a temporal dimension to the knowledge representation
language (Artale and Franconi, 2000; Artale and Franconi, 2001). Typical applications of
such temporally extended description logics have been the representation of actions and
plans in Artificial Intelligence (Artale and Franconi, 1998; Artale and Franconi, 1999a),
and reasoning with conceptual models of (federated) temporal databases (Artale and Franconi, 1999b). A temporal description logic can be obtained by combining the description
logic with a standard point-based tense logic (Schild, 1993; Wolter and Zakharyaschev,
1998) or with a variant of the HS interval-based propositional temporal logic (Halpern
and Shoham, 1991).
Given that combinations of modal logics have a number of real and potential uses, it is
important to remember the general technical problems that can occur with such combina-
tions; this we do in the next section.
3. Problems with Combinations
Let L1 and L2 be two logics - typically, these are special-purpose logics with limited
expressive power, as it often does not make sense to put together logics with universal
expressive power. Let P be a property that logics may have, say decidability or axiomatic
completeness. The transfer problem is this: if L1 and L2 enjoy the property P, does their
combination have P as well? Transfer problems are among the main mathematical
questions that logicians have been concerned with in the area of combining logics.
When, and for which properties, do we have transfer or failure of transfer? As a rule of
thumb, in the absence of interaction between the component logics, we do have transfer;
here, absence of interaction means that the component languages do not share any symbols,
except maybe the booleans and atomic symbols (we will say more about interactions
in Section 5). Properties that do transfer in this restricted case include the finite model
property, decidability, and (under suitable restrictions on the classes of models and the
complexity class) complexity upper bounds.
The positive proofs in the area are usually based on two key intuitions: divide and
conquer and hide and unpack. That is: try to split problems and delegate sub-problems to
the component logics; and when working inside one of the component logics view information
relating to other component logics as alien information and 'hide' it - don't unpack
the hidden information until we have reduced a given problem to a sub-problem in the
relevant component logic. Neither of these key intuitions continues to work in the presence
of interaction. For instance, consider two modal languages L1 and L2 with modal operators
□1 and □2, respectively; there are logics L1 and L2 in L1 and L2 whose satisfiability
problems are in NP, while the satisfiability problem for the combined language together
with an interaction principle relating the two operators can be of much higher complexity,
or even undecidable.
4. Reasoning Methods
So, we have seen that combinations of modal logics are, or at least have the potential
to be, very useful. However, we must also be careful in combining such logics. In all the
application areas discussed in Section 2, the notion of proof is important. In agent theories,
for example, a proof allows us to examine properties of the overall theory and, in some
cases, to characterise computation within that theory. In agent-based formal methods, a
proof is clearly important in developing verification techniques.
But how do we go about developing proof techniques for combined logics? Since there are
a wide variety of reasoning systems for individual modal logics, it is natural to ask: "does
combining work for actual reasoning systems?", i.e., can existing tools for each component
logic be put together to obtain tools for combined logics? Obviously, the re-use of tools and
procedures is one of the key motivations underlying the field.
Unfortunately, one cannot put together proof procedures for two logics in a uniform way.
First, 'proving' can have different meanings in different logics: (semi-)deciding satisfiability
or validity, computing an instantiation, or generating a model. Second, it is not clear
where to "plug in" the proof procedure for a logic L1 into that for a second logic L2; a proof
procedure may have different notions of valuations, or of proof goals.
So what can one do? One way out is to impose special conditions on the calculi that
one wants to combine (Beckert and Gabbay, 1998). Another possibility, in the case of
modal logics, is to use a translation-based approach to theorem proving, by mapping all
component logics into a common background logic (see below).
There are quite a few successful particular instances of combined logics where we have
no problems whatsoever in putting together tools; see (Aiello et al., 1999; Kurtonina and
de Rijke, 2000). By and large, however, we don't have a good understanding of how to
proceed. Further experiments are needed, both locally, and network based, so that at some
stage we will be able to plug together tools without having to be the designer or engineer
of the systems.
In the following sections we will consider the work that we are involved with concerning
the development of proof techniques for combined logics.
4.1. TABLEAUX BASED REASONING
Most commonly, reasoning methods for modal logics are presented as tableau-style proce-
dures. These semantically-based approaches are well suited to implementing proof procedures
for interacting modalities because the usually explicit presence of models in the
data-structures used by the algorithm helps the system to represent the interactions
between modalities. Thus, tableaux systems have the advantages that:
they have a (fairly) direct and intuitively obvious relationship with the Kripke structures
of the underlying logic;
algorithms are easy to design and extend;
the simplicity of the algorithms facilitates optimised implementation.
We have developed a range of tableaux-based systems for description logics, for example
(Horrocks, 1998a; Horrocks, 2000), and for combinations of linear-time temporal logic
with modal logics S5 or KD45 (Wooldridge et al., 1998).
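To make the "direct relationship with Kripke structures" concrete, here is a minimal tableau-style satisfiability test for the basic modal logic K. This is only an illustrative sketch (the tuple encoding and names are ours, not taken from any of the systems cited above); note how the diamond rule explicitly builds successor worlds, mirroring the semantics:

```python
# A minimal tableau-style satisfiability test for the basic modal logic K.
# Formulas are nested tuples in negation normal form:
#   ('lit', 'p') or ('lit', '~p')  -- (negated) propositions
#   ('and', f, g), ('or', f, g)    -- conjunction, disjunction
#   ('box', f), ('dia', f)         -- necessity, possibility

def sat(todo, lits=frozenset(), boxes=(), dias=()):
    """Return True iff the formulas in `todo` are jointly K-satisfiable."""
    if not todo:
        # clash test: a proposition together with its negation
        if any(l.startswith('~') and l[1:] in lits for l in lits):
            return False
        # each diamond demands a successor world satisfying its body
        # plus the bodies of all box formulas
        return all(sat([d] + list(boxes)) for d in dias)
    f, rest = todo[0], todo[1:]
    tag = f[0]
    if tag == 'lit':
        return sat(rest, lits | {f[1]}, boxes, dias)
    if tag == 'and':
        return sat([f[1], f[2]] + rest, lits, boxes, dias)
    if tag == 'or':                      # branch on the disjuncts
        return (sat([f[1]] + rest, lits, boxes, dias)
                or sat([f[2]] + rest, lits, boxes, dias))
    if tag == 'box':
        return sat(rest, lits, boxes + (f[1],), dias)
    if tag == 'dia':
        return sat(rest, lits, boxes, dias + (f[1],))

# Box p together with Dia ~p is unsatisfiable; Dia p alone is satisfiable.
print(sat([('box', ('lit', 'p')), ('dia', ('lit', '~p'))]))  # False
print(sat([('dia', ('lit', 'p'))]))                          # True
```

Extending such an algorithm to a second modality or to extra frame conditions amounts to adding rules of the same shape, which is why tableau systems are comparatively easy to extend.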
4.2. RESOLUTION BASED REASONING
An alternative approach to reasoning in combined modal logics is to use direct resolution
techniques, which should, in the long term, match at least the performance of corresponding
tableaux-based systems.
Our work in this area has focused on extending the resolution methods developed for
linear-time temporal logics (Fisher, 1991; Fisher et al., 2001) to particular combinations
of the logics considered above. This clausal resolution method centres round three main
steps:
translation to a simple normal form (involving renaming of complex subformulae and
reduction to a core set of operators);
classical resolution between formulae that occur at the same moment in time; and
resolution between sets of formulae that together force some formula φ to be always
true and constraints that ensure φ is false at some point in the future.
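The first of these steps, structure-preserving renaming, can be sketched in a few lines. This is a toy version under our own tuple encoding of formulae (operator names are illustrative, not the cited implementation): every complex argument of a temporal operator is replaced by a fresh proposition together with a defining equivalence.

```python
# Renaming complex subformulae during translation to a normal form:
# each non-atomic argument of a temporal operator is replaced by a fresh
# proposition, and a defining equivalence is recorded.
import itertools

def rename(f, defs, fresh):
    if isinstance(f, str):                    # atomic proposition
        return f
    op, *args = f
    args = [rename(a, defs, fresh) for a in args]
    if op in ('next', 'always', 'sometime'):
        (a,) = args
        if not isinstance(a, str):            # complex subformula: name it
            t = next(fresh)
            defs.append((t, 'iff', a))        # defining clause  t <-> a
            a = t
        return (op, a)
    return (op, *args)

fresh = (f"t{i}" for i in itertools.count())
defs = []
nf = rename(('always', ('or', 'p', ('next', ('and', 'q', 'r')))), defs, fresh)
# nf == ('always', 't1'), with definitions t0 <-> (q and r)
# and t1 <-> (p or next t0)
```

After renaming, every temporal operator is applied only to (named) propositions, which is what makes the subsequent resolution steps applicable.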
In (Dixon et al., 1998), we extended this method to linear-time temporal logic combined
with S5 modal logic. During translation to the normal form, temporal formulae are separated
from modal formulae (using renaming). Reasoning in the temporal and modal
components is then carried out separately and information is transferred between the two
components via classical propositional logic.
Other important direct resolution methods for modal logics include (Mints, 1990) (on
which the above approach was based) and our work on prefixed resolution (Areces et al.,
1999).
4.3. TRANSLATION BASED REASONING
Translation between different formal languages and different problem classes is one of the
most fundamental principles in computer science. For example, a compiler for a programming
language is just (the implementation of) a translation from one formal language into
another formal language. In the case of logical reasoning, such a translation approach is
based on the idea that given a (sound, complete, and possibly terminating) theorem prover
for a logic L2, inference in a logic L1 can be carried out by translating formulae of L1 into
L2. There are minimal requirements imposed on the translation from L1 into L2, namely
that the translation preserves satisfiability equivalence and that it can be computed in
polynomial time.
In the case of modal logics, the most straightforward translation mapping, the relational
translation, is based on the Kripke semantics of modal logics (van Benthem, 1976). But
just as there are many compilers for a single programming language, there are a number
of alternative translation mappings from modal logics into subclasses of first order logic
which satisfy the mentioned minimal requirements. These include the functional translation
(Auffray and Enjalbert, 1992; Fariñas del Cerro and Herzig, 1988; Ohlbach, 1991),
the optimised functional translation (Ohlbach and Schmidt, 1997; Schmidt, 1997), and the
semi-functional translation (Nonnengart, 1995).
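For concreteness, the relational translation itself can be sketched in a few lines; propositions become unary predicates over worlds, and box and diamond become quantification over the accessibility relation. The encoding below is a toy uni-modal version of ours, not any cited implementation:

```python
# The standard relational translation: a modal formula at world term w is
# mapped to a first-order formula; box/dia quantify over R-successors.
def st(f, w, c):
    """Translate modal formula f at world term w; c is a fresh-variable counter."""
    if isinstance(f, str):                        # proposition p  ->  p(w)
        return f"{f}({w})"
    op, *args = f
    if op == 'not':
        return f"~{st(args[0], w, c)}"
    if op in ('and', 'or'):
        sep = ' & ' if op == 'and' else ' | '
        return '(' + sep.join(st(a, w, c) for a in args) + ')'
    c[0] += 1
    v = f"w{c[0]}"
    if op == 'box':                # [] f  ->  forall v (R(w,v) -> f at v)
        return f"forall {v}. (R({w},{v}) -> {st(args[0], v, c)})"
    if op == 'dia':                # <> f  ->  exists v (R(w,v) & f at v)
        return f"exists {v}. (R({w},{v}) & {st(args[0], v, c)})"

print(st(('box', ('dia', 'p')), 'w0', [0]))
# forall w1. (R(w0,w1) -> exists w2. (R(w1,w2) & p(w2)))
```

The functional and semi-functional translations mentioned above differ in how they encode the accessibility relation, but share this overall compositional shape.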
The advantages of the translation approach include the following. First, by enabling the
use of a variety of first-order and propositional theorem provers the translation approach
provides access to efficient, reliable and general methods of modal theorem proving. Sec-
ond, decision procedures can be obtained by suitably instantiating parameters in existing
theorem proving frameworks (Bachmair and Ganzinger, 1997). For the optimised functional
translation this has been demonstrated in (Schmidt, 1997; Schmidt, 1999), while
the relational and semi-functional translation have been considered in (Ganzinger et al.,
1999a; Hustadt, 1999; Hustadt and Schmidt, 1999b; Hustadt and Schmidt, 1998). Third,
there are general guidelines for choosing these parameters in the right way to enforce
termination; the hard part is to prove that termination is indeed guaranteed.
The most common approach uses ordering refinements of resolution to ensure termi-
nation. Ordering refinements are very natural as they provide decision procedures for a
wide range of solvable first-order fragments (Fermüller et al., 2000; Hustadt and Schmidt,
1999a).
An interesting alternative is selection refinements (Hustadt and Schmidt, 1998). Selection
refinements are closely related to hyper-resolution, a well-known and commonly used
refinement in first-order theorem provers. It can be shown that selection refinements plus
splitting are able to polynomially simulate proof search in standard tableaux calculi for
modal logic (Hustadt and Schmidt, 1999b; Hustadt and Schmidt, 2000). Another alternative
is based on the fact that the optimised functional translation can be used to translate
modal formulae into a subclass of the Bernays-Schönfinkel class (Schmidt, 1997). The
number of ground instances of clauses obtained from formulae of the Bernays-Schönfinkel
class is always finite. Thus, it is possible to use propositional decision procedures to test
the satisfiability of the sets of ground clauses obtained in this way.
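The finiteness underlying this last observation is easy to see in code: function-free clauses over a fixed set of constants admit only finitely many ground instances. The following is a toy grounder under our own clause encoding, purely to illustrate the principle:

```python
# Grounding a function-free (Bernays-Schönfinkel) clause set: with finitely
# many constants and no function symbols, the set of ground instances is
# finite, so propositional decision procedures can be applied to it.
from itertools import product

def ground(clauses, constants):
    """clauses: list of (variables, clause); a clause is a frozenset of
    literals (sign, predicate, argument-tuple)."""
    out = set()
    for variables, clause in clauses:
        for values in product(constants, repeat=len(variables)):
            env = dict(zip(variables, values))
            out.add(frozenset((sign, pred, tuple(env.get(a, a) for a in args))
                              for sign, pred, args in clause))
    return out

# ~R(x,y) | P(x): four ground instances over the constants {a, b}
cs = [(('x', 'y'), frozenset({(False, 'R', ('x', 'y')), (True, 'P', ('x',))}))]
print(len(ground(cs, ['a', 'b'])))  # 4
```

In general the number of ground instances is exponential in the number of variables per clause, which is why this route trades termination guarantees for potential blow-up.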
While there are a number of other promising approaches, for example the use of equational
methods (e.g. rewriting), based on the algebraic interpretation of modal logics, or
constraint satisfaction techniques, it seems likely that many of the techniques developed
will share a broad similarity (Hustadt and Schmidt, 2000). In this sense, there are few
reasoning methods that are "inappropriate" (Ohlbach et al., 2000).
5. Interactions
Once we combine modal (and temporal) logics, we must decide whether we are going to
support interactions between the modal dimensions. Although there may be some cases of
combined modal logics without interactions that can be useful, to exploit the full power
of this combination technique, interactions must be handled. For example, interactions
typically involve commutative modalities - i.e., pairs of modalities □1 and □2 satisfying
the commutation schema □1□2φ ↔ □2□1φ.
But this apparently simple schema is surprisingly hard to incorporate into modal reasoning
algorithms. Indeed, (Kracht, 1995) has shown that the logic containing three S5
modalities, each pair of which commutes, is undecidable, while it is well-known that S5
itself is NP-complete. See Section 8 later for further discussion of decidability issues.
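To see that commutation is a genuine constraint rather than a validity, one can evaluate the schema in a small two-relation Kripke model. The model checker below is a toy sketch with our own encoding (it is not drawn from any cited system):

```python
# Evaluating formulas over a Kripke model with two accessibility
# relations R1, R2; V is the set of worlds where proposition p holds.
def holds(f, w, R1, R2, V):
    op = f[0]
    if op == 'p':
        return w in V
    if op == 'imp':
        return (not holds(f[1], w, R1, R2, V)) or holds(f[2], w, R1, R2, V)
    if op == 'box1':
        return all(holds(f[1], v, R1, R2, V) for v in R1.get(w, ()))
    if op == 'box2':
        return all(holds(f[1], v, R1, R2, V) for v in R2.get(w, ()))

# A model where [2][1]p holds at world 0 (vacuously) but [1][2]p fails:
R1, R2, V = {0: [1]}, {1: [2]}, set()       # p true nowhere
schema = ('imp', ('box2', ('box1', ('p',))), ('box1', ('box2', ('p',))))
print(holds(schema, 0, R1, R2, V))  # False: the schema can fail
```

Imposing commutation thus cuts down the class of admissible frames, and it is exactly this kind of frame condition that breaks the divide-and-conquer strategies discussed in Section 3.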
In the following, we will re-examine some of the approaches considered in Section 4,
particularly concerning how they cope with such interactions.
5.1. DESCRIPTION LOGICS
In the context of description logics, interactions between different kinds of role (modality)
are an inherent and essential part of the work on expressive description logics. We can
think of basic description logics as a syntactic variant of the normal multi-modal logic K,
or even of a propositional dynamic logic. More expressive description logics are obtained
by adding new roles with specific properties - for example they may be transitive or
functional (deterministic); in this case, there is no interaction between modalities in the
combined logic. However, the interesting cases are when converse modalities, or implications
between modalities, e.g., □1φ → □2φ, are introduced. The latter axiom schema in
a basic description logic combined with multi-modal S4 is important since it allows us to
encode the universal modality. As there are no description logics in which interactions such
as the commutation schema □1□2φ ↔ □2□1φ are required, the above approach leads to practical reasoning systems for
quite complex description logics.
In the context of temporal description logics, interactions between temporal and non-temporal
modalities are usually bad. For example, the ability to define a global role - i.e.,
invariant over time - makes the logic undecidable. However, we have identified a special
case, still very useful in practice, where the combination of a basic description logic with a
temporal component is decidable (Artale and Franconi, 1998).
5.2. TEMPORAL LOGICS OF KNOWLEDGE
Particular interactions between temporal logics and modal logics of knowledge (S5) have
been analysed in (Halpern and Vardi, 1986; Halpern and Vardi, 1988b; Halpern and Vardi,
1988a; Halpern and Vardi, 1989). Notions such as perfect recall, no learning, unique initial
state and synchrony are defined. The basic temporal logics of knowledge are then restricted
to those where certain of the above notions hold. Fagin et al. (Fagin et al., 1996) consider
the complexity of the validity problem of these logics for one or more agents, linear or
branching-time temporal logic and with or without common knowledge. In general, the
complexity of the validity problem is higher where interactions are involved, with some
combinations of interactions leading to undecidability.
In (Dixon and Fisher, 2000), we consider resolution systems with synchrony and perfect
recall by adding extra clauses to the clause-set to account for the application of the
synchrony and perfect recall axiom to particular clauses. The former can be axiomatised
by the axioms of linear-time temporal logic, plus the axioms of the modal logic S5 (rep-
resented by the modal operator 'K') with the additional interaction axiom (Halpern and
Vardi, 1988b; Halpern et al., 2000): K◯φ → ◯Kφ,
meaning informally that if an agent knows that in the next moment φ will hold, then in
the next moment the agent will know that φ holds. Essentially, in systems with perfect
recall, the number of timelines (or possible futures) that an agent considers possible stays
the same or decreases over time. The axiom for synchrony and no learning is the converse
of the above axiom.
We are also interested in looking at what interactions are actually used in application
areas. For example, the interaction K_i [do_i(α)]φ → [do_i(α)] K_i φ (where [do_i(α)] is effectively a dynamic logic operator),
meaning informally that if agent i knows that doing action α results in φ, then agent i doing
the action results in agent i knowing φ, has been highlighted as desirable for the KARO
framework (van der Hoek et al., 1997). This is very similar to the synchrony and perfect
recall axiom noted above.
5.3. TRANSLATION
From the view of the translation approach, interactions between logics in some combination
of modal logics are no different from the more traditional axiom schemata which are
usually provided to characterise extensions of the basic modal logic K, for example, the
modal logics KD, S4 and S5. To accommodate these axiom schemata in the translation approach
we have to find a satisfiability equivalence preserving first-order characterisation
for them.
For example, consider a combination of two basic modal logics with modal operators □1
and □2. A very simple interaction between the two modal logics is given by the axiom
schema
□1φ → □2φ.    (1)
Using the relational translation approach with predicate symbols R1 and R2 corresponding
to the accessibility relation of each logic, axiom schema (1) can be characterised by
the first-order formula
∀x ∀y (R2(x, y) → R1(x, y)).    (2)
In some cases, the first-order formulae characterising interactions are already covered by
existing decidability results for subclasses of first-order logic corresponding to the modal
logics. For example, formula (2) belongs to the Skolem class, the class of DL-clauses (Fermüller
et al., 2000), and various other classes. In these cases the translation approach provides us
with sound, complete, and terminating decision procedures for combinations of interacting
modal logics without any additional effort.
It is even possible to obtain general characterisations of the boundaries of decidability of
combinations of interacting modal logics in this way. For example, results from (Ganzinger
et al., 1999b) imply that if we have two modal logics which satisfy the 4 axiom (that is,
whose accessibility relations are transitive) and the modal logics interact, for example, by
the axiom schema (1), then the combined logic is undecidable.
6. General Frameworks
The considerations of previous sections were intended to highlight some of the potentials
of a representational framework based on combinations of modal logics. However, it was
also pointed out that providing such a representational framework together with the accompanying
tools is a non-trivial problem. It is therefore very likely that whatever our
first approach to this problem might be, it will have shortcomings which are too serious
to provide a workable solution. So, research in this direction will proceed by considering
a number of combinations of modal logics and assessing both their appropriateness and
usefulness.
Whether or not a particular combination of modal logics provides a suitable foundation
for a representational framework can only be decided if we are able to make practical
use of it, for example for the purpose of verifying and executing agents. Therefore,
the assessment necessarily requires the availability of implemented theorem provers for
combinations of modal logics.
It is, of course, possible to develop a suitable calculus from scratch, as we described in
Section 4, proving its soundness, completeness, and possibly its termination for the combination
of logics under consideration, and finally to implement the calculus accompanied
with the required data structures, heuristic functions, and optimisation techniques in a
theorem prover. However, bearing in mind that we have to make this effort for a number
of different combinations of modal logics, it is necessary to find a more general approach.
Thus, we are interested in general principles and a general framework for combining
modal logics.
One approach is to use standard methods for modal logics. There are various levels at
which one could give a generalised account of combinations of modal logics. Semantically,
both the Kripke and Tarski style semantics generalise easily to combined modal systems.
Although algebraic semantics is more general and supports equational and constraint-based
methods, Kripke semantics is better known, more intuitive, and is well-suited to
developing tableau methods. The correspondences between these two semantic approaches
provide different perspectives on the interpretation, each of which has its own advantages;
see (Blackburn et al., 1999) for details on the correspondences.
The proof theory of modal logics was originally developed in terms of axiom schemata
(Hilbert systems). This approach can be applied equally well to combined modal systems.
Axiomatic presentations are concise and the meanings of the axioms are (if not too com-
plex) usually readily understandable. Within such a system, proofs can be carried out
simply by substituting formulae into the axiom schemata and applying the modus ponens
rule. However, these non-analytic rules do not provide practical inference systems. Rule-based
presentations seem to be better suited for the development of inference algorithms,
especially where they are purely analytic.
While the use of a traditional approach might be helpful in certain cases, there is
clearly a need for a framework specifically designed to handle the combination of modal
logics. Examples for such frameworks are fibring (Gabbay, 1999), the translation approach
(see above), and the SNF approach (Dixon et al., 1998). Briefly, the SNF approach involves
extending the normal form (called SNF) developed for representing temporal logics
(Fisher, 1997a) to other modal and temporal logics, such as branching-time temporal
logics (Bolotov and Fisher, 1999), modal logics (Dixon et al., 1998) and even -calculus
formulae (Fisher and Bolotov, 2000). The basic approach is to keep rules (clauses) separate
that deal with different logics, re-use proof rules for the relevant logics and to make sure
enough information is passed between each part (Dixon et al., 1998; Fisher and Ghidini,
1999).
An approach that we have been interested in more recently is itself a combination, namely,
a combination of the SNF and the translation approaches. For example, when combining a
temporal logic with a modal logic, the temporal aspects remain as SNF clauses, while the
modal clauses are translated to classical logic (Hustadt et al., 2000).
7. Tools
Given that combinations of modal logics are often quite complex, what are the prospects for
having practical tools that will allow us to reason about combined modal logics incorporating
interactions? One answer is that, for some particular instances of combined modal logics,
namely description logics, powerful and efficient systems already exist. Good examples
of these are FaCT (Horrocks, 1998a), iFaCT (Horrocks, 2000) and DLP (Patel-Schneider,
1998).
Implementation effort is also under way to support some of the reasoning methods
described in Section 4, for example clausal resolution based upon the SNF approach. Here,
a resolution based theorem prover for linear-time temporal logics plus S5 modal logic
based on (Dixon et al., 1998) is being developed. This is to be extended with work on
strategies for efficient application, and guidance, of proof rules (Dixon, 1996; Dixon, 1998;
Dixon, 1999) developed for temporal logics in order to help deal with interactions.
However, if combinations of modal logics are to function as practical vehicles for reasoning,
we are likely to have to face the issues of complexity. There are various complexity
results counting both in favour of and against the feasibility of combined modal reasoning. As
we have seen earlier, Kracht (Kracht, 1995) (amongst others) has shown that some simple
combinations of modalities together with very simple interaction axioms yield undecidable
systems; on the positive side, there are a number of examples of quite expressive fragments
of multi-modal languages, whose decision procedures are polynomial, for example (Renz
and Nebel, 1997). Thus, the viability of reasoning with combined modal logics depends
very much on the particular combination of modalities and interaction axioms.
An obvious way of reducing the complexity of a logical language is to restrict its syntax.
We consider such an approach, called layering (Finger and Gabbay, 1992), below.
7.1. LAYERED MODAL LOGICS
A layered modal logic is a special kind of combined modal logic in which restrictions are
placed on the nesting order of the different modalities in the language. A typical layering
restriction would be to require that temporal modalities lie outside the scope of spatial
modalities. This would allow one to represent tensed spatial constraints, but not specify
temporal constraints which vary over space. Whereas tensed spatial relations are very
common in natural language, the latter kind of constraint is not so common.
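Such a layering restriction is purely syntactic and trivial to check. The sketch below uses illustrative operator names of our own (not those of any cited spatio-temporal logic):

```python
# Checking the layering restriction that temporal operators may not occur
# within the scope of spatial operators.
TEMPORAL = {'next', 'always', 'sometime'}
SPATIAL = {'interior', 'closure'}

def layered(f, inside_spatial=False):
    if isinstance(f, str):                    # atomic proposition
        return True
    op, *args = f
    if op in TEMPORAL and inside_spatial:
        return False                          # temporal inside spatial: illegal
    inside = inside_spatial or op in SPATIAL
    return all(layered(a, inside) for a in args)

print(layered(('always', ('interior', 'p'))))   # True: a tensed spatial constraint
print(layered(('interior', ('always', 'p'))))   # False: time varying over space
```

Because the restriction is enforced at the level of the grammar, a reasoner for the layered language never has to consider the problematic nestings at all.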
7.2. EXPRESSIVE POWER OF RESTRICTED FORMALISMS
One might expect that restricting combined modal formalisms to be layered would reduce
their expressive power to the extent that they would lose much of their usefulness. How-
ever, the S5(S4) hybrid described in Section 2.3 is strictly layered (which is one way to
account for its nice computational properties) and is also capable of representing a wide
range of topological relations.
In the case of a spatio-temporal language (as above) one might restrict temporal operators
to lie outside the scope of spatial operators. This would result in an intuitively natural
language of tensed spatial constraints, in which we could, for example, make statements
of the form "x will overlap y" but not of the form "every part of x will at some time satisfy
property ."
8. Decidability Problems
The result by (Ganzinger et al., 1999b) and related results by (Kracht, 1995) show that
undecidability of combinations of modal logics is imminent as soon as we allow interactions
between the logics. Consequently, one of the most attractive features of modal logics is lost
in these cases. Furthermore, one of the important arguments for the effort invested in the
development of tableaux-based theorem provers for modal logics is lost, namely, that they
provide rather straightforward decision procedures while the translation approach does
not ensure termination without the use of the appropriate refinements of resolution.
This raises several questions:
1. Should we concentrate our research on identifying combinations of interacting modal
logics which are still decidable? Or should we abandon the consideration of these
rather simple logics, since we have to expect that most combinations of modal logic
interesting for real-world applications will be undecidable?
2. If we acknowledge that we have to deal with undecidable combinations of modal logics,
does the use of modal logics still make sense? Is there still an advantage compared to
the use of first-order logic or some other logic for the tasks under consideration?
Note that the translation approach partly diminishes the importance of these questions as
far as the development of theorem provers is concerned. Refinements of resolution which
ensure termination on the translation of modal formulae are still sound and complete
calculi for full first-order logic. Thus, in the translation approach we can always fall back
on full first-order logic whenever we have the feeling that combinations of modal logics are
too much of a strait-jacket.
In the work on description logics, the practical use of these formalisms has meant that
retaining decidability is essential. Thus, one of the key aspects of research in this area has
been the extension of expressive power while (just) retaining decidability.
In contrast, in the work on agent theories, there is research attempting to characterise
which interactions do lead to undecidability. For decidable cases, it is important to assess
how useful the interactions are in relevant application areas; if they are of use we can
study decision procedures for these interactions and one of the key strategies here has
been to structure information, and to separate the various reasoning tasks.
Finally, the development of heuristics and strategies to guide the proofs is essential
regardless of decidability as, even if decidable fragments exist, their complexity is likely
to be high if we include interactions.
9. Summary
Does the idea of combining logics actually offer anything new? Some of the possible objections
can be justified. Logical combination is a relatively new idea: it has not yet been systematically
explored, and there is no established body of results or techniques. Nonethe-
less, there is a growing body of logic-oriented work in the field, and there are explorations
of their uses in AI, computational linguistics, automated deduction, and computer science.
An overly critical reaction seems misguided.
In order to receive more attention from the wider community of researchers interested
in knowledge representation and reasoning, the capabilities of combined modal logics
need to become more accessible; and their superiority over other formalisms (such as the
direct use of first-order logic) needs to be decisively demonstrated for some significant
applications. A survey of the expressive power, computational properties and potential
applications of a large class of combined modal logics would be very useful to AI system
designers.
References
Spatial Reasoning for Image Retrieval.
Prefixed Resolution: A resolution method for modal and description logics.
A Temporal Description Logic for Reasoning about Actions and Plans.
Representing a Robotic Domain using Temporal Description Logics.
Temporal Entity-Relationship Modeling with Description Logics
A Survey of Temporal Extensions of Description Logics.
Temporal Description Logics.
A theory of resolution.
Transparent access to multiple bioinformatics information sources: an overview.
Fibring semantic tableaux.
Modal logics for qualitative spatial reasoning.
Logical Representations for Automated Reasoning about Spatial Relationships.
Modal Logic.
A Resolution Method for CTL Branching-Time Temporal Logic
Description Logic Framework for Information Integration.
Modal Logic
Tableaux and algorithms for propositional dynamic logic with converse.
Clausal Resolution for Logics of Time and Knowledge with Synchrony and Perfect Recall.
Resolution for Temporal Logics of Knowledge.
Search Strategies for Resolution in Temporal Logics.
Temporal Resolution using a Breadth-First Search Algorithm
Removing Irrelevant Information in Temporal Resolution Proofs.
Adding a Temporal Dimension to a Logic System.
uk/RESEARCH/LoCo/mures.
Clausal Temporal Resolution.
A Resolution Method for Temporal Logic.
A Normal Form for Temporal Logic and its Application in Theorem-Proving and Execution
Implementing BDI-like Systems by Direct Execution
Programming Resource-Bounded Deliberative Agents
Reasoning About Knowledge.
Fibring Logics
A resolution-based decision procedure for extensions of K4
The two-variable guarded fragment with transitive relations
IEEE Computer Society Press
The Complexity of Reasoning about Knowledge and Time: Extended Abstract.
Reasoning about Knowledge and Time in Asynchronous Systems.
The Complexity of Reasoning about Knowledge and Time: Synchronous Systems.
The Complexity of Reasoning about Knowledge and Time.
Journal of Computer and System Sciences
Complete axiomatizations for reasoning about knowledge and time (submitted).
A Propositional Modal Logic of Time Intervals.
Complexity transfer for modal logic.
The FaCT system.
Using an expressive description logic: FaCT or fiction?
FaCT and iFaCT.
Journal of Logic and Computation
Normal Forms and Proofs in Combined Modal and Temporal Logics.
Issues of decidability for description logics in the framework of resolution.
In Automated Deduction in Classical and Non-Classical Logics: Selected Papers
Using resolution for testing modal satisfiability and building models.
Maslov's class K revisited.
On the relation of resolution and tableaux proof systems for description logics.
Hybrid logics and linguistic inference.
Highway to the danger zone.
Simulation and transfer results in modal logic.
Formalising Motivational Attitudes of Agents: On Preferences
A Formal Theory of Knowledge and Action.
A Resolution-Based Calculus For Temporal Logics
Encoding two-valued non-classical logics in classical logic
Functional translation and second-order frame properties of modal logics
Journal of Logic and Computation
DLP system description.
Modeling Agents within a BDI-Architecture
Decision Procedures for Propositional Linear-Time Belief-Desire-Intention Logics
The GRAIL concept modelling language for medical terminology.
A concept language extended with different kinds of transitive roles.
Optimised Modal Translation and Resolution.
Decidability by resolution for propositional modal logics.
Combining Terminological Logics with Tense Logic.
Modal Correspondence Theory.
An integrated modal approach to rational agents.
Technical Report UU-CS-1997-06
Temporalizing Description Logics.
A Tableau-Based Proof Method for Temporal Logics of Knowledge and Belief
Intelligent agents: Theory and practice.
--CTR
Daniel Oberle , Steffen Staab , Rudi Studer , Raphael Volz, Supporting application development in the semantic web, ACM Transactions on Internet Technology (TOIT), v.5 n.2, p.328-358, May 2005
Clare Dixon , Michael Fisher , Alexander Bolotov, Clausal resolution in a logic of rational agency, Artificial Intelligence, v.139 n.1, p.47-89, July 2002 | modal logics;knowledge representation;logical reasoning |
584532 | Interior-Point Algorithms for Semidefinite Programming Based on a Nonlinear Formulation. | Recently in Burer et al. (Mathematical Programming A, submitted), the authors of this paper introduced a nonlinear transformation to convert the positive definiteness constraint on an n × n matrix-valued function of a certain form into the positivity constraint on n scalar variables while keeping the number of variables unchanged. Based on this transformation, they proposed a first-order interior-point algorithm for solving a special class of linear semidefinite programs. In this paper, we extend this approach and apply the transformation to general linear semidefinite programs, producing nonlinear programs that have not only the n positivity constraints, but also n additional nonlinear inequality constraints. Despite this complication, the transformed problems still retain most of the desirable properties. We propose first-order and second-order interior-point algorithms for this type of nonlinear program and establish their global convergence. Computational results demonstrating the effectiveness of the first-order method are also presented. | Introduction
Semidefinite programming (SDP) is a generalization of linear programming in which a linear function of a matrix variable X is maximized or minimized over an affine subspace of symmetric matrices subject to the constraint that X be positive semidefinite. Due to its nice theoretical properties and numerous applications, SDP has received considerable attention in recent years. Among its theoretical properties is that semidefinite programs can be solved to a prescribed accuracy in polynomial time. In fact, polynomial-time interior-point algorithms for SDP have been extensively studied, and these algorithms, especially the primal-dual path-following algorithms, have proven to be efficient and robust in practice on small- to medium-sized problems.

(Footnotes: Computational results reported in this paper were obtained on an SGI Origin2000 computer at Rice University acquired in part with support from NSF Grant DMS-9872009. † School of Mathematics, Georgia Tech., Atlanta, Georgia 30332, USA; supported in part by NSF Grants INT-9910084 and CCR-9902010; e-mail: burer@math.gatech.edu. ‡ School of ISyE, Georgia Tech., Atlanta, Georgia 30332, USA; supported in part by NSF Grants INT-9600343, INT-9910084 and CCR-9902010; e-mail: monteiro@isye.gatech.edu. § Department of Computational and Applied Mathematics, Rice University, Houston, Texas 77005, USA; supported in part by DOE Grant DE-FG03-97ER25331, DOE/LANL Contract 03891-99-23 and NSF Grant DMS-9973339; e-mail: zhang@caam.rice.edu.)
Even though primal-dual path-following algorithms can in theory solve semidefinite
programs very efficiently, they are unsuitable for solving large-scale problems in practice
because of their high demand for storage and computation. In [1], Benson et al. have
proposed another type of interior-point algorithm - a polynomial-time potential-reduction
dual-scaling method - that can better take advantage of the special structure of the SDP
relaxations of certain combinatorial optimization problems. Moreover, the efficiency of the
algorithm has been demonstrated on several specific large-scale applications (see [2, 6]). A
drawback to this method, however, is that its operation count per iteration can still be
quite prohibitive due to its reliance on Newton's method as its computational engine. This
drawback is especially critical when the number of constraints of the SDP is significantly
greater than the size of the matrix variable.
In addition to - but in contrast with - Benson et al.'s algorithm, several methods have
recently been proposed to solve specially structured large-scale SDP problems, and common
to each of these algorithms is the use of gradient-based, nonlinear programming techniques.
In [8], Helmberg and Rendl have introduced a first-order bundle method to solve a special
class of SDP problems in which the trace of the primal matrix X is fixed. This subclass
includes the SDP relaxations of many combinatorial optimization problems, e.g., Goemans
and Williamson's maxcut SDP relaxation and Lovász's theta number SDP. The primary
tool for their spectral bundle method is the replacement of the positive-semidefiniteness
constraint on the dual slack matrix S with the equivalent requirement that the minimum
eigenvalue of S be nonnegative. The maxcut SDP relaxation has received further attention
from Homer and Peinado in [9]. Using the original form of the Goemans-Williamson relaxation, i.e., the change of variables $X = V V^T$, where $V \in \mathbb{R}^{n \times n}$ is the original variable, they show how the maxcut SDP can be reformulated as an unconstrained maximization problem for which a standard steepest ascent method can be used. Burer and
Monteiro [3] improved upon the idea of Homer and Peinado by simply noting that, without
loss of generality, V can be required to be lower triangular. More recently, Vavasis [14] has
shown that the gradient of the classical log-barrier function of the dual maxcut SDP can
be computed in time and space proportional to the time and space needed for computing
the Cholesky factor of the dual slack matrix S. Since S is sparse whenever the underlying
graph is sparse, Vavasis's observation may potentially lead to efficient, gradient-based
implementations of the classical log-barrier method that can exploit sparsity.
In our recent paper [5], we showed how a class of linear and nonlinear SDPs can be reformulated into nonlinear optimization problems over very simple feasible sets of the form $\mathbb{R}^n_{++} \times \mathbb{R}^m$, where $n$ is the size of the matrix variable $X$, $m$ is a problem-dependent, nonnegative integer, and $\mathbb{R}^n_{++}$ is the positive orthant of $\mathbb{R}^n$. The reformulation is based on the idea of eliminating the positive definiteness constraint on $X$ by first applying the substitution $X = L L^T$ done in [3], where $L$ is a lower triangular matrix, and then using a novel elimination scheme to reduce the number of variables and constraints. We also showed how to compute the gradient of the resulting nonlinear objective function efficiently, hence enabling the application of existing nonlinear programming techniques to many SDP problems.
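A toy illustration of the substitution used in [3] and [5] (a sketch only, not the elimination scheme itself; the matrix below is arbitrary): once $X$ is written as $L L^T$ with $L$ lower triangular and positive diagonal, positive definiteness comes for free, since $v^T X v = \|L^T v\|^2 > 0$ for every nonzero $v$.

```python
def lower_times_transpose(L):
    """Form X = L L^T for a square lower triangular matrix L."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

def quadratic_form(X, v):
    """Compute v^T X v."""
    n = len(X)
    return sum(v[i] * X[i][j] * v[j] for i in range(n) for j in range(n))

# Lower triangular with positive diagonal -> X = L L^T is positive definite.
L = [[1.5, 0.0, 0.0],
     [-2.0, 0.7, 0.0],
     [3.0, -1.0, 0.2]]
X = lower_times_transpose(L)

# Spot-check positive definiteness on a few nonzero test vectors:
# v^T X v = ||L^T v||^2 > 0 whenever v != 0 and L is nonsingular.
for v in [(1, 0, 0), (1, -1, 2), (-3, 0.5, 1)]:
    print(quadratic_form(X, list(v)) > 0)
```

The point of the parameterization is exactly this: the semidefiniteness constraint on $X$ is traded for positivity of the $n$ diagonal entries of $L$.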
In [5], we also specialized the above approach to the subclass of linear SDPs in which
the diagonal of the primal matrix X is fixed. By reformulating the dual SDP and working
directly in the space of the transformed problem, we devised a globally convergent, gradient-based nonlinear interior-point algorithm that simultaneously solves the original primal and dual SDPs. We remark that the class of fixed-diagonal SDPs includes most of the known
SDP relaxations of combinatorial optimization problems.
More recently, Vanderbei and Benson [13] have shown how the positive semidefinite constraint on $X$ can be replaced by the $n$ nonlinear, concave constraints $[D(X)]_{ii} \ge 0$, $i = 1, \ldots, n$, where $D(X)$ is the unique diagonal matrix $D$ appearing in the standard factorization $X = L D L^T$ (with $L$ unit lower triangular) of a positive semidefinite matrix $X$. Moreover, they show how these concave constraints can be utilized in the solution of any linear SDP using an interior-point algorithm for general convex, nonlinear programs. Since the discussion of Vanderbei and Benson's method is mainly of a theoretical nature in [13], the question of whether or not this method offers practical advantages on large-scale SDPs is yet to be determined.
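To make the quantity $[D(X)]_{ii}$ concrete, here is a minimal pure-Python sketch (illustrative only; it is not the algorithm of [13] and omits pivoting, so it assumes nonzero pivots) that computes the diagonal $D$ of the factorization $X = L D L^T$ with $L$ unit lower triangular:

```python
def ldlt_diagonal(X):
    """Diagonal D of X = L D L^T with L unit lower triangular (no pivoting)."""
    n = len(X)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = X[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (X[i][j]
                       - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    return D

# A positive definite matrix: all D_ii come out positive.
X_pd = [[4.0, 2.0, 0.0],
        [2.0, 5.0, 1.0],
        [0.0, 1.0, 3.0]]
print(all(d > 0 for d in ldlt_diagonal(X_pd)))

# An indefinite matrix: some D_ii is negative, flagging the violated constraint.
X_indef = [[1.0, 2.0],
           [2.0, 1.0]]
print(any(d < 0 for d in ldlt_diagonal(X_indef)))
```

The $n$ scalar signs of $D$ thus play the role of the semidefiniteness constraint in the nonlinear formulation.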
In this paper, we extend the ideas of [5] to solve general linear SDPs. More specifically, in [5] we showed that if the diagonal of the primal matrix variable $X$ was constrained to equal a vector $d$, then the dual SDP could be transformed to a nonlinear programming problem over the simple feasible set $\mathbb{R}^n_{++} \times \mathbb{R}^m$, where $n$ is the size of the matrix variable and $m$ is the number of additional primal constraints. The general case described here (that is, the case in which the diagonal of $X$ is not necessarily constrained as above) is based on similar ideas but requires that the feasible points of the new nonlinear problem satisfy $n$ nonlinear inequality constraints in addition to lying in the set $\mathbb{R}^n_{++} \times \mathbb{R}^m$. These new inequality constraints, however, can be handled effectively from an algorithmic standpoint using ideas from interior-point methods.
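The interior-point treatment of inequality constraints alluded to here can be sketched on a one-variable toy problem (entirely illustrative and unrelated to the SDP data): minimize $x$ subject to $x \ge 1$ by minimizing the barrier function $x - \mu \log(x - 1)$ for a decreasing sequence of $\mu$. The barrier minimizer $x_\mu = 1 + \mu$ stays strictly feasible and tends to the constrained optimum.

```python
import math

def barrier_min(mu, x0=2.0, iters=60):
    """Minimize x - mu*log(x - 1) over x > 1 by one-dimensional Newton steps."""
    x = x0
    for _ in range(iters):
        grad = 1.0 - mu / (x - 1.0)       # d/dx [x - mu*log(x - 1)]
        hess = mu / (x - 1.0) ** 2        # second derivative, always > 0
        x = x - grad / hess
        x = max(x, 1.0 + 1e-12)           # safeguard: stay strictly feasible
    return x

# Path-following: minimize the barrier for a decreasing sequence of mu,
# warm-starting each solve from the previous minimizer.
x = 2.0
for mu in [1.0, 0.1, 0.01, 0.001]:
    x = barrier_min(mu, x0=x)

# The barrier minimizer is x_mu = 1 + mu, approaching the optimum x* = 1.
print(abs(x - 1.001) < 1e-6)
```

The algorithms of Sections 3 and 4 follow the same pattern, with the barrier terms applied to $w > 0$ and $z(w, y) < 0$.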
We propose two interior-point algorithms for solving general linear SDPs based on the
above ideas. The first is a generalization of the first-order (or gradient-based) log-barrier
algorithm presented in [5], whereas the second is a potential reduction algorithm that employs second-derivative information via Newton's method. We believe that the
first algorithm is a strong candidate for solving large, sparse SDPs in general form and that
the second algorithm will also have relevance for solving small- to medium-sized SDPs, even
though our current perspective is mainly theoretical.
This paper is organized as follows. In Section 2, we introduce the SDP problem studied
in this paper along with our corresponding assumptions, and we briefly summarize the main
results of our previous paper. We then reformulate the SDP into the nonlinear programming
problem mentioned in the previous subsection and introduce and analyze a certain
Lagrangian function which will play an important role in the algorithms developed in this
paper. In Sections 3 and 4, respectively, we develop and prove the convergence of the
two aforementioned algorithms - one being a first-order log-barrier algorithm, another a
second-order potential reduction algorithm - for solving the SDP. In Section 5, we present
computational results that show the performance of the first-order log-barrier algorithm on
a set of SDPs that compute the so-called Lovász theta numbers of graphs from the literature. Also in Section 5, we discuss some of the advantages and disadvantages of the two
algorithms presented in the paper. In the last section, we conclude the paper with a few
final comments.
1.1 Notation and terminology
We use $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{n \times n}$ to denote the space of real numbers, real $n$-dimensional column vectors, and real $n \times n$ matrices, respectively, and $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ to denote those subsets of $\mathbb{R}^n$ consisting of the entry-wise nonnegative and positive vectors, respectively. By $\mathcal{S}^n$ we denote the space of real $n \times n$ symmetric matrices, and we define $\mathcal{S}^n_+$ and $\mathcal{S}^n_{++}$ to be the subsets of $\mathcal{S}^n$ consisting of the positive semidefinite and positive definite matrices, respectively. We write $A \succeq 0$ and $A \succ 0$ to indicate that $A \in \mathcal{S}^n_+$ and $A \in \mathcal{S}^n_{++}$, respectively. We will also make use of the fact that each $A \in \mathcal{S}^n_+$ has a unique matrix square root $A^{1/2}$ that satisfies $A^{1/2} A^{1/2} = A$. $\mathcal{L}^n$ denotes the space of real $n \times n$ lower triangular matrices, and $\mathcal{L}^n_+$ and $\mathcal{L}^n_{++}$ are the subsets of $\mathcal{L}^n$ consisting of those matrices with nonnegative and positive diagonal entries, respectively. In addition, we define $\mathcal{L}^n_0 \subset \mathcal{L}^n$ to be the set of all $n \times n$ strictly lower triangular matrices.

We let $\operatorname{tr}(A)$ denote the trace of a matrix $A \in \mathbb{R}^{n \times n}$, namely the sum of the diagonal elements of $A$. Moreover, for $A, B \in \mathbb{R}^{n \times n}$, we define $A \bullet B = \operatorname{tr}(A^T B)$. In addition, for $u, v \in \mathbb{R}^n$, $u \circ v$ denotes the Hadamard product of $u$ and $v$, i.e., the entry-wise multiplication of $u$ and $v$, and if $u \in \mathbb{R}^n_{++}$, we define $u^{-1}$ to be the unique vector satisfying $u \circ u^{-1} = e$, where $e$ is the vector of all ones. We also define $\operatorname{Diag} : \mathbb{R}^n \to \mathbb{R}^{n \times n}$ by $\operatorname{Diag}(u) = U$, where $U$ is the diagonal matrix having $U_{ii} = u_i$, and $\operatorname{diag}$ is defined to be the adjoint of $\operatorname{Diag}$, i.e., $[\operatorname{diag}(A)]_i = A_{ii}$. We use the notation $e_i$ to denote the $i$-th coordinate vector that has a 1 in position $i$ and zeros elsewhere.

We will use $\|\cdot\|$ to denote both the Euclidean norm for vectors and its induced operator norm, unless otherwise specified. The Frobenius norm of a matrix $A$ is defined as $\|A\|_F = (A \bullet A)^{1/2}$.
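The notation above can be exercised in a few lines of pure Python (the particular values are arbitrary); in particular, $\operatorname{diag}$ being the adjoint of $\operatorname{Diag}$ means $\operatorname{Diag}(u) \bullet A = u^T \operatorname{diag}(A)$ for every $u$ and every square $A$.

```python
def trace(A):
    """tr(A): sum of the diagonal elements."""
    return sum(A[i][i] for i in range(len(A)))

def dot(A, B):
    """A . B = tr(A^T B), the matrix inner product."""
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

def Diag(u):
    """Diag(u): the diagonal matrix with entries u_i."""
    n = len(u)
    return [[u[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def diag(A):
    """diag(A): the vector of diagonal entries (the adjoint of Diag)."""
    return [A[i][i] for i in range(len(A))]

def hadamard(u, v):
    """u o v: entry-wise product of two vectors."""
    return [ui * vi for ui, vi in zip(u, v)]

u = [1.0, -2.0, 4.0]
A = [[2.0, 5.0, 1.0],
     [0.0, 4.0, -1.0],
     [7.0, 2.0, 6.0]]

# Adjoint identity: <Diag(u), A> = <u, diag(A)>.
print(dot(Diag(u), A) == sum(x * y for x, y in zip(u, diag(A))))

# u o u^{-1} = e for u with nonzero entries.
u_inv = [1.0 / x for x in u]
print(hadamard(u, u_inv) == [1.0, 1.0, 1.0])
```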
2 The SDP Problem and Preliminary Results
In this section, we introduce a general-form SDP problem, state two standard assumptions
on the problem, and discuss the optimality conditions and the central path for the SDP. We
then describe the transformation that converts the dual SDP into a constrained nonlinear
optimization problem. The consideration of optimality conditions for the new problem
leads us to introduce a certain Lagrangian function, for which we then develop derivative
formulas and several important results. We end the section with a detailed description of
the properties of a certain "primal estimate" associated with the Lagrangian function; these
properties will prove crucial for the development of the algorithms in Sections 3 and 4.
2.1 The SDP problem and corresponding assumptions
In this paper, we study the following slight variation of the standard form SDP problem:
$$(P)\qquad \max\ \big\{\, C \bullet X \;:\; \mathcal{A}(X) = b,\ \operatorname{diag}(X) \ge d,\ X \succeq 0 \,\big\},$$
where $X \in \mathcal{S}^n$ is the matrix variable, and the data of the problem is given by the matrix $C \in \mathcal{S}^n$, the vectors $b \in \mathbb{R}^m$ and $d \in \mathbb{R}^n$, and the linear function $\mathcal{A} : \mathcal{S}^n \to \mathbb{R}^m$, which is defined by $[\mathcal{A}(X)]_k = A_k \bullet X$ for a given set of matrices $\{A_k\}_{k=1}^m \subset \mathcal{S}^n$. We remark that $(P)$ differs from the usual standard form SDP only by the additional inequality $\operatorname{diag}(X) \ge d$, but we also note that every standard form SDP can be written in the form of $(P)$ by simply adding the redundant constraint $\operatorname{diag}(X) \ge d$ for any nonpositive vector $d \in \mathbb{R}^n$. So, in fact, the form $(P)$ is as general as the usual standard form SDP.

The need for considering the general form $(P)$ rather than the usual standard form SDP arises from the requirement that, in order to apply the transformation alluded to in the introduction, the dual SDP must possess a special structure. The dual SDP of $(P)$ is
$$(D)\qquad \min\ \big\{\, b^T y + d^T z \;:\; \operatorname{Diag}(z) + \mathcal{A}^*(y) - S = C,\ z \le 0,\ S \succeq 0 \,\big\},$$
where $S \in \mathcal{S}^n$ is the matrix variable, $z \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ are the vector variables, and $\mathcal{A}^*$ is the adjoint of the operator $\mathcal{A}$, defined by $\mathcal{A}^*(y) = \sum_{k=1}^m y_k A_k$. The term $\operatorname{Diag}(z)$ found in the equality constraint of $(D)$ is precisely the "special structure" which our transformation will exploit. We will describe the transformation in more detail in the following subsection.
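As a sanity check, weak duality between $(P)$ and $(D)$ can be exercised on a scalar instance; the data below are hypothetical, and the roles of $C$, $\mathcal{A}$, $b$, $d$ collapse to scalars $c$, $a$, $b$, $d$:

```python
# Scalar instance of (P)/(D): n = m = 1, with hypothetical data.
a, b, c, d = 1.0, 2.0, 3.0, -1.0

# Primal feasible point: a*x == b, x >= d, x >= 0.
x = 2.0
assert a * x == b and x >= d and x >= 0
primal_value = c * x                     # plays the role of C . X

# Dual feasible point: z + a*y - s == c, z <= 0, s >= 0.
y, z = 4.0, -0.5
s = z + a * y - c                        # slack forced by the equality constraint
assert s >= 0 and z <= 0
dual_value = b * y + d * z

# Weak duality: every primal (max) value is below every dual (min) value.
print(primal_value <= dual_value)
```

Here the gap is $S \bullet X + z^T(d - \operatorname{diag}(X))$, a sum of nonnegative terms, which is the scalar shadow of the strong-duality discussion below.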
We denote by $\mathcal{F}^0(P)$ and $\mathcal{F}^0(D)$ the sets of strictly feasible solutions for problems $(P)$ and $(D)$, respectively, i.e.,
$$\mathcal{F}^0(P) = \{X \in \mathcal{S}^n : \mathcal{A}(X) = b,\ \operatorname{diag}(X) > d,\ X \succ 0\},$$
$$\mathcal{F}^0(D) = \{(z, y, S) : \operatorname{Diag}(z) + \mathcal{A}^*(y) - S = C,\ z < 0,\ S \succ 0\},$$
and we make the following assumptions throughout our presentation.

Assumption 1: The set $\mathcal{F}^0(P) \times \mathcal{F}^0(D)$ is nonempty.

Assumption 2: The matrices $\{A_k\}_{k=1}^m$ are linearly independent.

Note that, when a standard form SDP is converted to the form $(P)$ by the addition of the redundant constraint $\operatorname{diag}(X) \ge d$, for any nonpositive $d \in \mathbb{R}^n$, the strict inequality $\operatorname{diag}(X) > d$ in the definition of $\mathcal{F}^0(P)$ is redundant, i.e., for each feasible $X \succeq 0$, the inequality $\operatorname{diag}(X) > d$ is automatically satisfied. Hence, $\mathcal{F}^0(P)$ equals the usual set of strictly feasible solutions defined by $\{X \in \mathcal{S}^n : \mathcal{A}(X) = b,\ X \succ 0\}$. In particular, if we assume that the usual set of interior solutions is nonempty, then $\mathcal{F}^0(P)$ is also nonempty. In addition, it is not difficult to see that the set $\{(y, S) : \mathcal{A}^*(y) - S = C,\ S \succ 0\}$ of dual strictly feasible solutions for the usual standard form SDP is nonempty if and only if the set $\mathcal{F}^0(D)$ is nonempty. In total, we conclude that, when a standard form SDP is put in the form $(P)$, Assumption 1 is equivalent to the usual assumption that the original primal and dual SDPs both have strictly feasible solutions.
Under Assumption 1, it is well-known that problems $(P)$ and $(D)$ both have optimal solutions $X^*$ and $(z^*, y^*, S^*)$, respectively, such that $C \bullet X^* = b^T y^* + d^T z^*$. This last condition, called strong duality, can alternatively be expressed as the requirement that $X^* \bullet S^* = 0$ and $z^{*T}(\operatorname{diag}(X^*) - d) = 0$, or equivalently that $X^* S^* = 0$ and $z^* \circ (\operatorname{diag}(X^*) - d) = 0$.

In addition, under Assumptions 1 and 2, it is well-known that, for each $\mu > 0$, the problems
$$(P_\mu)\quad \max\ \Big\{\, C \bullet X + \mu \log\det X + \mu \sum_{i=1}^n \log([\operatorname{diag}(X) - d]_i) \;:\; \mathcal{A}(X) = b \,\Big\},$$
$$(D_\mu)\quad \min\ \Big\{\, b^T y + d^T z - \mu \log\det S - \mu \sum_{i=1}^n \log(-z_i) \;:\; \operatorname{Diag}(z) + \mathcal{A}^*(y) - S = C \,\Big\}$$
have unique solutions $X_\mu$ and $(z_\mu, y_\mu, S_\mu)$, respectively, such that
$$X_\mu S_\mu = \mu I, \qquad z_\mu \circ (\operatorname{diag}(X_\mu) - d) = -\mu e, \tag{1}$$
where $I \in \mathbb{R}^{n \times n}$ is the identity matrix and $e \in \mathbb{R}^n$ is the vector of all ones. The set of solutions $\{(X_\mu, z_\mu, y_\mu, S_\mu) : \mu > 0\}$ is known as the primal-dual central path for problems $(P)$ and $(D)$. In the upcoming sections, this central path will play an important role in the development of algorithms for solving problems $(P)$ and $(D)$.
2.2 The transformation
In this subsection, we present the primary result of [5] which allows us to transform problem
(D) into an equivalent nonlinear program with n nonlinear inequality constraints. We then
introduce a certain Lagrangian function associated with the new nonlinear program and
prove some key results regarding this function.
Recall that $\mathcal{L}^n_0 \subset \mathcal{L}^n$ denotes the set of all $n \times n$ strictly lower triangular matrices. The following result is stated and proved in [5] (see theorem 4 of section 6 therein).

Theorem 2.1 The following statements hold:

(a) for each $(w, y) \in \mathbb{R}^n_{++} \times \mathbb{R}^m$, there exists a unique $(\tilde L, z) \in \mathcal{L}^n_0 \times \mathbb{R}^n$ such that
$$\operatorname{Diag}(z) + \mathcal{A}^*(y) - C = (\operatorname{Diag}(w) + \tilde L)(\operatorname{Diag}(w) + \tilde L)^T; \tag{2}$$

(b) the functions $\tilde L(w, y)$ and $z(w, y)$ defined according to (2) are each infinitely differentiable and analytic on their domain $\mathbb{R}^n_{++} \times \mathbb{R}^m$;

(c) the spaces $\mathbb{R}^n_{++} \times \mathbb{R}^m$ and
$$\{(z, y, S) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathcal{S}^n_{++} : \operatorname{Diag}(z) + \mathcal{A}^*(y) - S = C\} \tag{3}$$
are in bijective correspondence according to the assignment $(w, y) \mapsto (z(w, y), y, S(w, y))$, where $S(w, y) \equiv (\operatorname{Diag}(w) + \tilde L(w, y))(\operatorname{Diag}(w) + \tilde L(w, y))^T$.

It is important to note that the set in (3) differs from the strictly feasible set $\mathcal{F}^0(D)$ in that the inequality $z < 0$ is not enforced.

An immediate consequence of Theorem 2.1 is that the dual SDP $(D)$ can be recast as the nonlinear program
$$(NLD)\qquad \min\ \big\{\, d^T z(w, y) + b^T y \;:\; z(w, y) < 0,\ (w, y) \in \mathbb{R}^n_{++} \times \mathbb{R}^m \,\big\},$$
where $w \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ are the vector variables. A few remarks are in order concerning $(NLD)$ and its relationship to $(D)$. Firstly, the functions $\tilde L(w, y)$ and $z(w, y)$ introduced in the above theorem cannot be uniquely extended to the boundary of $\mathbb{R}^n_{++} \times \mathbb{R}^m$, so it is necessary that $w$ be strictly positive in $(NLD)$. Secondly, the vector constraint $z(w, y) < 0$, which arises directly from the corresponding constraint of $(D)$, could equivalently be replaced by $z(w, y) \le 0$. We have chosen the strict inequality because, with $z(w, y) < 0$, there is a bijective correspondence between $\mathcal{F}^0(D)$ and the feasible set of $(NLD)$. Finally,
because the elements of w are not allowed to take the value zero, (NLD) does not in general
have an optimal solution. In fact, only when (d; does (NLD) have an optimal
solution set.
Even though the feasible set of $(NLD)$ does not include its boundary points, we may still consider the hypothetical situation in which the inequalities $w > 0$ and $z(w, y) < 0$ are relaxed to $w \ge 0$ and $z(w, y) \le 0$. In particular, we can investigate the first-order optimality conditions of the resulting hypothetical, constrained nonlinear program using the Lagrangian function $\phi$ defined by
$$\phi(w, y, \lambda) = d^T z(w, y) + b^T y + \lambda^T z(w, y) = (d + \lambda)^T z(w, y) + b^T y. \tag{4}$$
If additional issues such as regularity are ignored in this hypothetical discussion, then the first-order necessary conditions for optimality could be stated as follows: if $(w, y)$ is a local minimum of the function $d^T z(w, y) + b^T y$ subject to the constraint that $z(w, y) \le 0$, then there exists $\lambda \in \mathbb{R}^n_+$ such that
$$\nabla_w \phi(w, y, \lambda) = 0, \qquad \nabla_y \phi(w, y, \lambda) = 0, \qquad \lambda \circ z(w, y) = 0.$$
One may suspect that these optimality conditions are of little use since they are based on the hypothetical assumption that $z(w, y)$ and $\tilde L(w, y)$ are defined on the boundary of $\mathbb{R}^n_{++} \times \mathbb{R}^m$. In the following sections, however, we show that these are precisely the conditions which guarantee optimality when satisfied "in the limit" by a sequence of points $\{(w^k, y^k)\} \subset \mathbb{R}^n_{++} \times \mathbb{R}^m$.
2.3 The first and second derivatives of the Lagrangian
In this subsection, we establish some key derivative results for the Lagrangian function $\phi$ introduced in the last subsection. In addition to the definition (4) of the Lagrangian, we define the functions $L$ and $S$, respectively mapping the set $\mathbb{R}^n_{++} \times \mathbb{R}^m$ into $\mathcal{L}^n_{++}$ and $\mathcal{S}^n_{++}$, by the formulas
$$L(w, y) = \operatorname{Diag}(w) + \tilde L(w, y), \qquad S(w, y) = L(w, y)\, L(w, y)^T.$$
We note that $S(w, y)$ and $L(w, y)$ are the positive definite slack matrix and its Cholesky factor, respectively, which are associated with $(w, y)$ via the bijective correspondence of Theorem 2.1.

By (4), it is evident that the derivatives of the Lagrangian are closely related to the derivatives of the function $h_v : \mathbb{R}^n_{++} \times \mathbb{R}^m \to \mathbb{R}$ defined as
$$h_v(w, y) = v^T z(w, y) \tag{8}$$
for all $(w, y) \in \mathbb{R}^n_{++} \times \mathbb{R}^m$, where $v \in \mathbb{R}^n$ is an arbitrary, fixed vector. Theorems 2.3 and 2.4 below establish the derivatives of $h_v$ based on an auxiliary matrix $X$ that is defined in the following proposition. Since Proposition 2.2 is an immediate consequence of lemma 3 of [5], we omit its proof. (See also proposition 7 in [5].)

Proposition 2.2 Let $L \in \mathcal{L}^n_{++}$ and $v \in \mathbb{R}^n$ be given. Then the system of linear equations
$$\operatorname{diag}(X) = v, \qquad (X L)_{ij} = 0 \quad \text{for all } i > j, \tag{9}$$
has a unique solution $X$ in $\mathcal{S}^n$.
We remark that the proof of the following theorem is basically identical to the one of
theorem 2 in [5]. It is included here for the sake of completeness and for paving the way
towards the derivation of the second derivatives of h v in Theorem 2.4.
Theorem 2.3 Let $(w, y) \in \mathbb{R}^n_{++} \times \mathbb{R}^m$ and $v \in \mathbb{R}^n$ be given, and let $X \in \mathcal{S}^n$ denote the unique solution of (9) with $L \equiv L(w, y)$. Then:

(a) $\nabla_w h_v(w, y) = 2 \operatorname{diag}(X L)$;

(b) $\nabla_y h_v(w, y) = -\mathcal{A}(X)$.
Proof. To prove (a), it suffices to show that (@h v =@w i )(w;
Differentiating (8) with respect to w i , we obtain
(w; y)
(w; y)
(w; y)
where the last equality follows from the fact that differentiating (2) with
respect to w i , we obtain
Diag
(w; y)
(w; y)
(w; y)
Taking the inner product of both sides of this equation with X and using the fact that X
is symmetric, we obtain
(w; y)
@ ~
where the second equality follows from the fact that (@ ~
strictly lower triangular
and XL is upper triangular in view of (9). Combining (10) and (12), we conclude
that (a) holds.
Differentiating (8) with respect to y k for a fixed k 2
(w; y)
(w; y)
where the second equality is due to the fact that Differentiating (2) with
respect to y k , we obtain
Diag
(w; y)
@ ~
(w; y)
@ ~
(w; y)
Taking the inner product of both sides of this equation with X and using arguments similar
to the ones above, we conclude that
(w; y)
@ ~
(w; y)
Statement (b) now follows from (13), the last identity, and the definition of A.
Theorem 2.4 Let (w;
denote the unique
solution of (9) in S n , where L j L(w; y). Then, for every ng and k; l 2
@y k @y l
where
(w;
Proof. We will only prove (15) since the proofs of equations (16) and (17) follow by similar
arguments. Note also that the proof of (15) is similar to the proofs of Theorems 2.3(a)
and 2.3(b), and so the proof has been somewhat abridged. Indeed, differentiating (8) with
respect to w i and then with respect to w j , we obtain
(w; y)
Now differentiating (11) with respect to w j , we obtain
Diag
(w; y)
(w; y)
(w; y)
which immediately implies
(w; y)
Combining (19) and (20), we conclude that (15) holds.
Now assume that X - 0. Using (15), (16) and (17), it is straightforward to see that
F
This proves the final statement of the theorem.
Before giving the first and second derivatives of the Lagrangian as corollaries to the
Theorems 2.3 and 2.4, we establish another technical result that will be used later in Section
4.
Lemma 2.5 Let $(w, y) \in \mathbb{R}^n_{++} \times \mathbb{R}^m$ and $v \in \mathbb{R}^n$ be given, and let $X \in \mathcal{S}^n$ denote the unique solution of (9), where $L \equiv L(w, y)$. Suppose also that $X \succeq 0$ and that $q \in \mathbb{R}^{n+m}$ satisfies $q^T \nabla^2 h_v(w, y)\, q = 0$. Then $\mathcal{A}^*(q_y)$ is a diagonal matrix, where $q_y$ is the vector of the last $m$ components of $q$.
Proof. and recall from the proof of Theorem 2.4 that the
positive semidefiniteness of X implies
for all q 2 ! n+m , where R is given by (21). Now, using the hypotheses of the lemma, it
is straightforward to see from (22) that Using the
definition of R, (11), (14), and (18), we have
where D i is defined as Diag((@z=@w i )(w; y)) and D k is defined similarly. From the above
equation, it is thus evident that
diagonal matrix.
We remark that the final statement of Theorem 2.4 can be strengthened using Lemma 2.5
if the linear independence of the matrices fe i e T
k=1 is assumed. In particular, it
can be shown that r 2 h v (w; y) - 0 whenever X - 0 and the above collection of the
data matrices is linearly independent. Such an assumption, however, is stronger than our
Assumption 2, and since we intend that the results in this paper be directly applicable
to SDPs that satisfy the usual assumptions, we only assume the linear independence of
k=1 .
Theorems 2.3 and 2.4 and Lemma 2.5 have immediate consequences for the derivatives of the Lagrangian $\phi$, detailed in the following definition and corollary. Note that, in the result below, we define $\nabla^2_{wy} \phi$ to be the $(n + m) \times (n + m)$ leading principal block of the Hessian $\nabla^2 \phi(w, y, \lambda)$ of the Lagrangian function.

Definition 2.6 For any $(w, y, \lambda) \in \mathbb{R}^n_{++} \times \mathbb{R}^m \times \mathbb{R}^n$, let $X(w, y, \lambda)$ denote the unique solution of (9) in $\mathcal{S}^n$ with $v \equiv \lambda + d$ and $L \equiv L(w, y)$. We refer to $X(w, y, \lambda)$ as the primal estimate for $(P)$ associated with $(w, y, \lambda)$.
Corollary 2.7 Let $(w, y, \lambda) \in \mathbb{R}^n_{++} \times \mathbb{R}^m \times \mathbb{R}^n$ be given, and define $L \equiv L(w, y)$ and $X \equiv X(w, y, \lambda)$. Then:

(a) $\nabla_w \phi(w, y, \lambda) = 2 \operatorname{diag}(X L)$;

(b) $\nabla_y \phi(w, y, \lambda) = b - \mathcal{A}(X)$;

(c) $\nabla_\lambda \phi(w, y, \lambda) = z(w, y)$;

(d) if $X \succeq 0$ and $q \in \mathbb{R}^{n+m}$ satisfies $q^T \nabla^2_{wy} \phi(w, y, \lambda)\, q = 0$, then $\mathcal{A}^*(q_y)$ is a diagonal matrix, where $q_y$ is the vector of the last $m$ components of $q$.
We again mention that had we assumed linear independence of the matrices fe i e T
k=1 , we would also be able to claim that -
However, with the
weaker Assumption 2, the claim does not necessarily hold.
2.4 Properties of the primal estimate
This subsection establishes several important properties of the primal estimate X(w;
given by Definition 2.6. The following proposition is the analogue of lemma 5 of [5].
Lemma 2.8 Let $(w, y, \lambda) \in \mathbb{R}^n_{++} \times \mathbb{R}^m \times \mathbb{R}^n$ be given, and define $L \equiv L(w, y)$, $X \equiv X(w, y, \lambda)$, and $\phi \equiv \phi(w, y, \lambda)$. Then:

(a) $XL$ is upper triangular, or equivalently, $L^T X L$ is diagonal;

(b) $X \succeq 0$ if and only if $\nabla_w \phi \ge 0$; in addition, $X \succ 0$ if and only if $\nabla_w \phi > 0$;

(c) $w \circ \nabla_w \phi = 2 \operatorname{diag}(L^T X L)$.
Proof. The upper triangularity of XL follows directly from (9). Since L T and XL are both
upper triangular, so is the product L T XL which is also symmetric. Hence L T XL must be
diagonal. On the other hand, if L T XL is diagonal, say it equals D, then
upper triangular. So, (a) follows.
To prove the first part of (b), we note that the nonsingularity of L implies that X - 0
if and only if L T XL - 0, but since L T XL is diagonal by (a), L T XL - 0 if and only
if diag(L T XL) - 0. Given that both L T and XL are upper triangular matrices, it is
easy to see that diag(L T XL) is the Hadamard product of diag(L T ) and diag(XL). Since
only if diag(XL) - 0. The first
statement of (b) now follows from the sequence of implications just derived and the fact
that rw by Corollary 2.7(a).
The second part of (b) can be proved by an argument similar to the one given in the
previous paragraph; we need only replace the inequalities by strict inequalities.
Statement (c) follows from (7), Corollary 2.7(a), and the simple observation that the diagonal of $L^T X L$ is the Hadamard product of $\operatorname{diag}(L^T)$ and $\operatorname{diag}(XL)$, since both $L^T$ and $XL$ are upper triangular.
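The triangular observation invoked in the proof above, that the diagonal of a product of two upper triangular matrices is the Hadamard product of their diagonals, is easy to verify numerically (the matrices below are arbitrary):

```python
def matmul(A, B):
    """Plain dense matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two arbitrary upper triangular matrices.
U1 = [[2.0, 3.0, -1.0],
      [0.0, 5.0, 4.0],
      [0.0, 0.0, -2.0]]
U2 = [[7.0, -2.0, 1.0],
      [0.0, 3.0, 6.0],
      [0.0, 0.0, 4.0]]

P = matmul(U1, U2)
diag_P = [P[i][i] for i in range(3)]
diag_hadamard = [U1[i][i] * U2[i][i] for i in range(3)]

# diag(U1 U2) equals the entry-wise product of the diagonals.
print(diag_P == diag_hadamard)
```

The same identity applied with $U_1 = L^T$ and $U_2 = XL$, together with $\operatorname{diag}(L^T) = w$, is what yields statement (c).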
The following proposition establishes that the matrix X(w; plays the role of a
(possibly primal estimate for any (w;
justification to its name in Definition 2.6. In particular, it gives necessary and sufficient
conditions for X(w; y; -) to be a feasible or strictly feasible solution of (P ). It is interesting
to note that these conditions are based entirely on the gradient of the Lagrangian function
Proposition 2.9 Let $(w, y, \lambda) \in \mathbb{R}^n_{++} \times \mathbb{R}^m \times \mathbb{R}^n_+$ be given, and define $X \equiv X(w, y, \lambda)$ and $\phi \equiv \phi(w, y, \lambda)$. Then:

(a) $X$ is feasible for $(P)$ if and only if $\nabla_w \phi \ge 0$ and $\nabla_y \phi = 0$;

(b) $X$ is strictly feasible for $(P)$ if and only if $\nabla_w \phi > 0$ and $\nabla_y \phi = 0$.

Proof. By the definition of $X$, we have $X \in \mathcal{S}^n$ and $\operatorname{diag}(X) = \lambda + d \ge d$. The proposition is an easy consequence of Corollary 2.7(b) and Lemma 2.8(b).
The following proposition provides a measure of the duality gap, or closeness to optimality, of points $(w, y, \lambda)$ and $X(w, y, \lambda)$ that are feasible for $(NLD)$ and $(P)$, respectively.
Proposition 2.10 Let (w;
is feasible for (D), and
Proof. The feasibility of X follows from Proposition 2.9, and that of (z; y; S) from the
definitions of z and S, and the assumption z - 0. The above equality follows from the
substitutions as well as from Lemma 2.8(c).
3 A Log-Barrier Algorithm
It is well known that under a homeomorphic transformation $\Phi$, any path in the domain of $\Phi$ is mapped into a path in the range of $\Phi$, and vice versa. Furthermore, given any continuous function $f$ from the range of $\Phi$ to $\mathbb{R}$, the extremizers of $f$ in the range of $\Phi$ are mapped into corresponding extremizers of the composite function $f(\Phi(\cdot))$ in the domain of $\Phi$. In particular, if $f$ has a unique minimizer in the range of $\Phi$, then this minimizer is mapped into the unique minimizer of $f(\Phi(\cdot))$ in the domain of $\Phi$.
In view of these observations, it is easy to see that, under the transformation introduced
in Section 2, the central path of the SDP problem (D) becomes a new "central path" in
the transformed space. Furthermore, since the points on the original central path are the
unique minimizers of a defining log-barrier function corresponding to different parameter
values, the points on the transformed central path are therefore unique minimizers of the
transformed log-barrier function corresponding to different parameter values. In general,
however, it is possible that extraneous, non-extreme stationary points could be introduced
to the transformed log-barrier function by the nonlinear transformations applied. In this
section, we show that the transformed log-barrier functions in fact have no such non-extreme
stationary points, and we use this fact to establish a globally convergent log-barrier algorithm
for solving the primal and dual SDP.
In the first subsection, we describe the central path in the transformed space, and then
some technical results that ensure the convergence of a sequence of primal-dual points are
given in the second subsection. Finally, the precise statement of the log-barrier algorithm
as well as its convergence are presented in the last subsection.
3.1 The central path in the transformed space
Given the strict inequality constraints of $(NLD)$, a natural problem to consider is the following log-barrier problem associated with $(NLD)$, which depends on the choice of a parameter $\mu > 0$:
$$(NLD_\mu)\qquad \min\ \Big\{\, d^T z(w, y) + b^T y - 2\mu \sum_{i=1}^n \log w_i - \mu \sum_{i=1}^n \log(-z_i(w, y)) \,\Big\},$$
where the minimization is over all $(w, y) \in \mathbb{R}^n_{++} \times \mathbb{R}^m$ with $z(w, y) < 0$, and where $z_i(w, y)$ is the $i$-th coordinate function of $z(w, y)$. (The reason for the factor 2 will become apparent shortly.) Not surprisingly, $(NLD_\mu)$ is nothing but the standard dual log-barrier problem $(D_\mu)$ introduced in Section 2.1 under the transformation given by Theorem 2.1, i.e., $(D_\mu)$ is equivalent to the nonlinear program
$$\min\ \Big\{\, d^T z(w, y) + b^T y - \mu \log\det S(w, y) - \mu \sum_{i=1}^n \log(-z_i(w, y)) \,\Big\},$$
which is exactly $(NLD_\mu)$ after the simplification
$$\log\det S(w, y) = 2 \log\det L(w, y) = 2 \sum_{i=1}^n \log w_i,$$
where the first equality follows from $S(w, y) = L(w, y) L(w, y)^T$ and the last equality follows from the fact that the determinant of a triangular matrix is the product of its diagonal entries (the diagonal of $L(w, y)$ being $w$).
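A quick numerical check of the identity $\log\det(L L^T) = 2 \sum_i \log w_i$ used here (pure Python; the particular $L$ is arbitrary apart from its positive diagonal $w$):

```python
import math

# A lower triangular L whose diagonal is w > 0.
w = [2.0, 0.5, 3.0]
L = [[w[0], 0.0, 0.0],
     [1.0, w[1], 0.0],
     [-4.0, 2.0, w[2]]]

# S = L L^T.
S = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)]
     for i in range(3)]

# Direct 3x3 determinant of S by cofactor expansion.
det_S = (S[0][0] * (S[1][1] * S[2][2] - S[1][2] * S[2][1])
         - S[0][1] * (S[1][0] * S[2][2] - S[1][2] * S[2][0])
         + S[0][2] * (S[1][0] * S[2][1] - S[1][1] * S[2][0]))

# log det S = 2 * sum_i log(w_i), since det L = prod_i w_i.
print(abs(math.log(det_S) - 2.0 * sum(math.log(wi) for wi in w)) < 1e-9)
```

In the barrier function, this is what turns the $\log\det$ term into the $n$ cheap scalar terms $\log w_i$.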
Recall from the discussion in Section 2.1 that the primal and dual log-barrier problems
(D - ) and (P - ) each have unique solutions (z respectively, such that (1)
holds. One can ask whether (NLD - ) also has a unique solution, and if so, how this unique
solution relates to (z . The following theorem establishes that (NLD - )
does in fact have a unique stationary point (w which is simply the inverse image of
the point (z under the bijective correspondence given in Theorem 2.1.
Theorem 3.1 For each $\mu > 0$, the problem $(NLD_\mu)$ has a unique minimum point, which is also its unique stationary point. This minimum $(w_\mu, y_\mu)$ is equal to the inverse image of the point $(z_\mu, y_\mu, S_\mu)$ under the bijective correspondence of Theorem 2.1. In particular, $z(w_\mu, y_\mu) = z_\mu$ and $S(w_\mu, y_\mu) = S_\mu$.
Proof. Let (w, y) be a stationary point of (NLD_μ), and define λ ≡ …, ∇_w z ≡ ∇_w z(w, y) and ∇_y z ≡ ∇_y z(w, y). Since (w, y) is a stationary point, it satisfies the first-order optimality conditions of (NLD_μ). (Recall that ∇_w z is an n × n matrix and that ∇_y z is an m × n matrix.) Using the definitions of f and ℓ, we easily see that [∇z]λ = … . Using this relation and the definition of λ, we easily see that the above optimality conditions are equivalent to the system (24), where e is the vector of all ones. It is now clear from (24) and Proposition 2.9 that X ≡ X(w, y) is a strictly feasible solution of (P).

Corollary 2.7(a) implies that the first equation of (24) is equivalent to diag(XL) = …, and so …, which in turn implies that diag(Lᵀ XL) = …, since XL is upper triangular by the definition of X. Since Lᵀ XL is diagonal, it follows that Lᵀ XL = … and hence that … . Note also that, by the definitions of λ and X(w, y), … satisfy the conditions of (1), and this clearly implies … .

We conclude that (NLD_μ) has a unique stationary point satisfying all the conditions stated in the theorem. That this stationary point is also a global minimum follows from the fact that (z_μ, y_μ) is the global minimum of (D_μ).
3.2 Sufficient conditions for convergence

In accordance with the discussion in the last paragraph of Section 2.2, we now consider the Lagrangian function ℓ(w, y, λ) only on the open set Ω. Given a sequence of points {(w^k, y^k, λ^k)} ⊆ Ω, we write ℓ^k ≡ ℓ(w^k, y^k, λ^k), z^k ≡ z(w^k, y^k), X^k ≡ X(w^k, y^k), and so on. The following result gives sufficient conditions for the sequences {(z^k, y^k)} and {X^k} to be bounded.

Lemma 3.2 Let {(w^k, y^k, λ^k)} ⊆ Ω be a sequence of points such that
• …, and
• the sequences {(w^k)ᵀ ∇_w ℓ^k} and {(λ^k)ᵀ z^k} are both bounded.
Then the sequences {(z^k, y^k)} and {X^k} are bounded.
Proof. By Assumption 1, there exists a point X̄ ∈ F⁰(P); by the definition of F⁰(P), we have A(X̄) = b and X̄ ≻ 0. Let N₀ ≡ … . Clearly, N₀ is a bounded open set containing X̄. Hence, by the linearity of A, A(N₀) is an open set containing A(X̄) = b. Combining Corollary 2.7(b) and the assumption that …, we conclude that lim_{k→∞} A(X^k) = b and hence that A(X^k) ∈ A(N₀) for all k sufficiently large, say k ≥ k₀. Hence, there exists X̃^k ∈ N₀ such that A(X̃^k) = A(X^k) for all k ≥ k₀. We define X̃^k in this way for each k ≥ k₀, and since X̃^k ∈ N₀, it follows that {X̃^k} is a bounded sequence.

Now let (z̄, ȳ) be a point in F⁰(D), that is, a feasible solution of (D) such that … . For each k ≥ k₀, we combine the information from the previous paragraph, the fact that diag and A are the adjoints of Diag and A*, respectively, and the inequalities … to obtain the following inequality:

… .

Using this inequality and the fact that X̃^k ∈ N₀, we obtain … for all k ≥ k₀, where the last inequality follows from the fact that X^k ⪰ 0, which itself follows from Proposition 2.9(b) and the assumption that ∇_w ℓ^k ≥ 0. By Proposition 2.8, the assumption that {(w^k)ᵀ ∇_w ℓ^k} is bounded implies that {X^k • S^k} is bounded which, together with the fact that {(λ^k)ᵀ z^k} and {X̃^k} are bounded, implies that the left-hand side of the above inequality is bounded for all k ≥ k₀. It thus follows from the positive definiteness of S̄ … that both {X^k} and {S^k} are bounded.

The boundedness of {S^k} clearly implies the boundedness of {L^k} and hence the boundedness of {w^k}. In addition, since λ^k = …, the boundedness of {X^k} implies that {λ^k} is bounded which, together with the boundedness of {(λ^k)ᵀ z^k}, implies that {z^k} is bounded. Now, using the boundedness of {S^k} and {z^k} along with Assumption 2, we easily see that {y^k} is bounded.
In Section 2.2, we used a hypothetical discussion of the optimality conditions of (NLD) to motivate the use of the Lagrangian function ℓ. In the following theorem, we see that the hypothetical optimality conditions (5) do in fact have relevance to the solution of (NLD). In particular, the theorem shows that if {(w^k, y^k, λ^k)} is a sequence of points satisfying (5a) for each k ≥ 0 and if (5b), (5c) and (5d) are satisfied in the limit, then any accumulation points of the corresponding sequences {X^k} and {(z^k, y^k)} are optimal solutions of (P) and (D), respectively.

Theorem 3.3 Let {(w^k, y^k, λ^k)} ⊆ Ω be a sequence of points such that z^k < 0 and ∇_w ℓ^k ≥ 0 for all k, and such that

lim_{k→∞} … = 0.

Then:
(a) the sequences {X^k} and {(z^k, y^k)} are bounded; and
(b) any accumulation points of {X^k} and {(z^k, y^k)} are optimal solutions of (P) and (D), respectively.
Proof. The proof of statement (a) follows immediately from Lemma 3.2. To prove (b), let X^∞, (z^∞, y^∞) and λ^∞ be accumulation points of the sequences {X^k}, {(z^k, y^k)} and {λ^k}, respectively, where the boundedness of {λ^k} also follows from Lemma 3.2. The assumptions and Proposition 2.9 imply that

lim_{k→∞} … = 0.

This clearly implies that … , that is, X^∞ is a feasible solution of (P). Since each (z^k, y^k) is a feasible solution of (D), it follows that (z^∞, y^∞) is a feasible solution of (D). Moreover, by Proposition 2.8, we have that (w^k)ᵀ ∇_w ℓ^k → 0, from which it follows that X^∞ • S^∞ = 0, and also that [diag(X^k) …] → 0, from which it follows that … . We have thus shown that X^∞ and (z^∞, y^∞) are optimal solutions of (P) and (D).
3.3 A globally convergent log-barrier algorithm

In this short subsection, we introduce a straightforward log-barrier algorithm for solving (NLD). The convergence of the algorithm is a simple consequence of Theorem 3.3.

Let constants Γ > 0 and θ ∈ (0, 1) be given, and for each μ > 0, define N(μ) ⊆ ℝⁿ × ℝᵐ to be the set of all points (w, y) satisfying
• ‖… − μe‖ ≤ Γμ,
• ‖∇_y ℓ‖ ≤ Γμ,
where e is the vector of all ones. Note that each N(μ) is nonempty and that the unique minimizer (w_μ, y_μ) of (NLD_μ) is in N(μ). (See the proof of Theorem 3.1 and equation (24) in particular.) We propose the following algorithm:

Log-Barrier Algorithm:
For k = 0, 1, 2, …
1. Use an unconstrained minimization method to solve (NLD_{μ_k}) approximately, obtaining a point (w^k, y^k) ∈ N(μ_k).
2. Set μ_{k+1} = θ μ_k, increment k by 1, and return to step 1.
End

We stress that since (NLD_{μ_k}) has a unique stationary point for all μ_k > 0, which is also the unique minimum, step 1 of the algorithm will succeed using any reasonable unconstrained minimization method. Specifically, any convergent, gradient-based method will eventually produce a point in the set N(μ_k).

If we define λ^k ≡ …, then, based on the definition of N(μ) and Proposition 2.8(b), the algorithm clearly produces a sequence of points {(w^k, y^k, λ^k)} that satisfies the hypotheses of Theorem 3.3. Hence, the log-barrier algorithm converges in the sense of the theorem.
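For illustration only (this is our sketch, not the paper's code), the two-step structure of the algorithm can be written as a simple driver loop. Here `solve_subproblem` is a hypothetical callback standing in for any convergent gradient-based method applied to the barrier subproblem, and `theta` plays the role of the shrink factor in step 2:

```python
def log_barrier(solve_subproblem, w0, y0, mu0=1.0, theta=0.1, mu_min=1e-6):
    """Outer loop of the log-barrier framework: approximately minimize the
    barrier subproblem for the current mu, shrink mu, and repeat."""
    w, y, mu = w0, y0, mu0
    while mu > mu_min:
        w, y = solve_subproblem(w, y, mu)  # step 1: reach the set N(mu)
        mu *= theta                        # step 2: decrease the parameter
    return w, y
```

For instance, on the toy barrier problem min_w { w − μ log w } (whose exact minimizer is w = μ), passing `solve_subproblem = lambda w, y, mu: (mu, y)` drives w to zero together with μ.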
4 A Potential Reduction Algorithm

In this section, we describe and prove the convergence of a potential reduction interior-point algorithm for solving (NLD). The basic idea is to produce a sequence of points {(w^k, y^k, λ^k)} satisfying the hypotheses of Theorem 3.3 via the minimization of a special merit function M defined below. This minimization is performed using an Armijo line search along the Newton direction of a related equality system.

Throughout this section, we assume that a point (w⁰, y⁰, λ⁰) ∈ Ω is given that satisfies … .

4.1 Definitions, technical results and the algorithm

We define f⁺ ≡ … and

Ξ ≡ {(w, y, λ) ∈ Ω : …}.

Note that (w⁰, y⁰, λ⁰) ∈ Ξ. The potential reduction algorithm, which we state explicitly at the end of this subsection, will be initialized with the point (w⁰, y⁰, λ⁰) and will subsequently produce a sequence of points {(w^k, y^k, λ^k)} ⊆ Ξ. The requirement that ∇_w ℓ(w^k, y^k, λ^k) be nonnegative for all k is reasonable in light of our goal of producing a sequence satisfying the hypotheses of Theorem 3.3. The third requirement, that f(w^k, y^k) be less than f⁺ for all k ≥ 0, is technical and will be used to prove special properties of the sequence produced by the algorithm.

We also define F : Ξ → ℝ^{2n+m} by

F(w, y, λ) ≡ [ w ∘ ∇_w ℓ(w, y, λ) ; ∇_y ℓ(w, y, λ) ; −λ ∘ z(w, y) ] ≡ … ,

where e is the vector of all ones. For i = 1, …, 2n + m, we let F_i denote the i-th coordinate function of F. Note that {F_i : i = 1, …, n} are the n scalar functions corresponding to the elements of w ∘ ∇_w ℓ(w, y, λ); similar correspondences hold between {F_i : i = n + 1, …, n + m} and ∇_y ℓ(w, y, λ), as well as between {F_i : i = n + m + 1, …, 2n + m} and −λ ∘ z(w, y). In addition, we define N ≡ {1, …, n} ∪ {n + m + 1, …, 2n + m}.

With the definition of F, our goal of producing a sequence of points satisfying the hypotheses of Theorem 3.3 can be stated more simply as the goal of producing a sequence {(w^k, y^k, λ^k)} ⊆ Ξ such that

lim_{k→∞} F(w^k, y^k, λ^k) = 0.   (27)

The primary tool which allows us to accomplish (27) is the merit function M defined by

M(w, y, λ) ≡ η log ‖F(w, y, λ)‖² − Σ_{i∈N} log F_i(w, y, λ),

where η is an arbitrary constant satisfying η > n. The usefulness of this merit function comes from the fact that (27) can be accomplished via the iterative minimization of M by taking a step along the Newton direction of the system F(w, y, λ) = 0 at the current point. In what follows, we investigate this minimization scheme.
Since we will apply Newton's method to the nonlinear system F(w, y, λ) = 0, we need to study the nonsingularity of the Jacobian F′(w, y, λ). To simplify our notation, for all (w, y, λ) ∈ Ω we define … . Recall that … is the leading principal block of ∇²ℓ as defined in (23). Straightforward differentiation of F(w, y, λ) yields

F′(w, y, λ) = … .

Now, multiplying F′(w, y, λ) by the diagonal matrix Diag([w⁻¹; …]) gives the matrix P(w, y, λ) ≡ … . For (w, y, λ) ∈ Ξ, the Newton equation F′(w, y, λ)[Δw; Δy; Δλ] = −F(w, y, λ) is then equivalent to

P(w, y, λ) [ Δw ; Δy ; Δλ ] = … .

Note that P(w, y, λ) is a (2n + m) × (2n + m) matrix which is in general asymmetric. We will use the matrix P(w, y, λ) to help establish the nonsingularity of F′(w, y, λ) in the following lemma.
Lemma 4.1 For (w, y, λ) ∈ Ξ, the matrix P(w, y, λ) is positive definite and, consequently, the Jacobian F′(w, y, λ) is nonsingular.

Proof. Since F′(w, y, λ) is the product of P(w, y, λ) with a positive diagonal matrix, it suffices to prove the first part of the lemma. Combining the fact that (w, y, λ) ∈ Ξ with Lemma 2.8(b) and Corollary 2.7(d), we see that … is positive semidefinite. (However, it is not necessarily positive definite even though X ≻ 0; see the discussion after Corollary 2.7.) Moreover, we have

… .

Hence, we conclude from (29) that P(w, y, λ) is the sum of two positive semidefinite matrices and one skew-symmetric matrix. It follows that P is positive semidefinite.

It remains to show that P is invertible, or equivalently that (Δw, Δy, Δλ) = (0, 0, 0) is the unique solution to the system

P(w, y, λ) [ Δw ; Δy ; Δλ ] = [ 0 ; 0 ; 0 ],   (32)

where the sizes of the zero-vectors on the right-hand side should be clear from the context. Let (Δw, Δy, Δλ) be a solution to (32). Pre-multiplying both sides of (32) by the row vector [Δwᵀ, Δyᵀ, Δλᵀ] and using (29), we obtain a sum of three terms corresponding to the three matrices in (29) that add to zero. By skew-symmetry of the third matrix in (29), the corresponding term is zero; and by positive semidefiniteness of the first two matrices, the first two terms are both nonnegative and thus each of them must be zero. The term corresponding to the first matrix in (29) leads to

… = 0,

which, together with (31), implies that (Δw, Δλ) = (0, 0). Rewriting (32) to reflect this information, we obtain the equations

… ,   (33)

where again the sizes of the zero-vectors should be clear from the context.

Since (w, y, λ) ∈ Ξ, we see from Lemma 2.8(b) that X(w, y) ≻ 0. This fact together with the first equation of (33) implies that the hypotheses of Corollary 2.7(e) hold with … . It follows that A*(Δy) is a diagonal matrix. Let v ≡ diag A*(Δy), and let X ∈ Sⁿ denote the unique solution of … with respect to v and L ≡ L(w, y). Using the second equation of (33), the fact that … = v, and Theorem 2.3(b), we obtain

Δyᵀ [∇_y …] = … ,

where the fifth equality is due to the identity … and the sixth is due to the fact that diag(X) = v. Hence, we conclude that … . By Assumption 2, this implies that Δy = 0, thus completing the proof that P is positive definite.

We remark that, had we assumed linear independence of the entire collection {…}, the proof that P(w, y, λ) is positive definite would have been trivial due to the fact that X(w, y) ≻ 0 (see the discussion after Corollary 2.7). In any case, even though the proof was more difficult, our weaker Assumption 2 still suffices to establish the nonsingularity of F′(w, y, λ).
A direct consequence of Lemma 4.1 is that, for each (w, y, λ) ∈ Ξ, the Newton direction (Δw, Δy, Δλ) for the system F(w, y, λ) = 0 exists at (w, y, λ). Stated differently, Lemma 4.1 shows that the system

F′(w, y, λ) [ Δw ; Δy ; Δλ ] = −F(w, y, λ)   (34)

has a unique solution for all (w, y, λ) ∈ Ξ. The following lemmas show that this Newton direction is a descent direction for f (when (Δw, Δy) is used as the direction) and also for M.

Lemma 4.2 Let (Δw, Δy, Δλ) be the Newton direction at (w, y, λ) ∈ Ξ given by (34). Then (Δw, Δy) is a descent direction for f at (w, y).

Proof. Let P̄ be the (n + m) × (n + m) leading minor of P(w, y, λ), and let ∇̄ℓ consist of the first n + m components of ∇ℓ. Note that P̄ is positive definite since P(w, y, λ) is positive definite by Lemma 4.1.

Equation (34) implies that (30) holds. Using (26), (29), and (35), it is easy to see that (30) can be rewritten as the following two equations:

… ,
… .

Solving for Δλ in the second equation and substituting the result in the first equation, we obtain

… [ Δw ; Δy ] = … .

Multiplying the above equation on the left by the row vector [Δwᵀ, Δyᵀ], noting that …, and using the positive definiteness of P̄, we have

… < 0.

The fact that −z⁻¹ ∘ λ > 0 clearly implies that the matrix ∇z Diag(−z⁻¹ ∘ λ) ∇zᵀ is positive semidefinite. This, combined with the above inequality, proves that (Δw, Δy) is a descent direction for f at (w, y).
Lemma 4.3 Let (Δw, Δy, Δλ) be the Newton direction at (w, y, λ) ∈ Ξ given by (34). Then (Δw, Δy, Δλ) is a descent direction for M at (w, y, λ).

Proof. We first state a few simple results which we then combine to prove the lemma. Let … . Then equation (34) implies that

… .   (36)

We have from Lemma 4.2 that

… < 0.   (37)

In addition, using (28), we have that

∇M = … ,   (38)

where we note that ∇_λ f(w, y) = 0. Then, using (36), (37), (38) and the inequalities η > n and f(w, y) < f⁺, we obtain

∇M(w, y, λ)ᵀ [ Δw ; Δy ; Δλ ] < 0,   (39)

which proves that (Δw, Δy, Δλ) is a descent direction for M at (w, y, λ).
Given (w, y, λ) ∈ Ξ, define (w(α), y(α), λ(α)) ≡ (w, y, λ) + α(Δw, Δy, Δλ), where (Δw, Δy, Δλ) is the Newton direction given by (34). An important step in the potential reduction algorithm is the Armijo line search: the line search selects a step-size α > 0 such that (w(α), y(α), λ(α)) ∈ Ξ and

M(w(α), y(α), λ(α)) ≤ M(w, y, λ) + σ α ∇M(w, y, λ)ᵀ [ Δw ; Δy ; Δλ ]   (40)

for a given constant σ ∈ (0, 1). Due to the fact that Ξ is an open set and also due to Lemma 4.3, such an α can be found in a finite number of steps.
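As a concrete illustration (ours, not the paper's code), a backtracking form of this line search can be sketched as follows. Here `in_domain` is a hypothetical membership test for the open set Ξ, and `slope` is the directional derivative ∇Mᵀ(Δw, Δy, Δλ), which is negative by Lemma 4.3; the constants `sigma` and `rho` are generic choices:

```python
def armijo(M, x, d, slope, in_domain, sigma=1e-4, rho=0.5, max_tries=60):
    """Backtracking Armijo search: try alpha = 1, rho, rho**2, ... until the
    trial point lies in the (open) domain AND satisfies the sufficient
    decrease condition (40)."""
    M0 = M(x)
    alpha = 1.0
    for _ in range(max_tries):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        if in_domain(trial) and M(trial) <= M0 + sigma * alpha * slope:
            return alpha, trial
        alpha *= rho
    raise RuntimeError("no acceptable step-size found")
```

Because the domain is open and the direction is a descent direction, the loop terminates after finitely many backtracks, mirroring the argument in the text.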
We are now ready to state the potential reduction algorithm.

Potential Reduction (PR) Algorithm:
For k = 0, 1, 2, …
1. Solve system (34) at (w, y, λ) = (w^k, y^k, λ^k) to obtain the Newton direction (Δw^k, Δy^k, Δλ^k).
2. Let j_k be the smallest nonnegative integer j such that (w^k, y^k, λ^k) + ρ^j (Δw^k, Δy^k, Δλ^k) ∈ Ξ and such that (40) holds with α = ρ^j.
3. Set (w^{k+1}, y^{k+1}, λ^{k+1}) = (w^k, y^k, λ^k) + ρ^{j_k} (Δw^k, Δy^k, Δλ^k), increment k by 1, and return to step 1.
End

We remark that, due to Lemma 4.3, Algorithm PR monotonically decreases the merit function M.
4.2 Convergence of Algorithm PR

In this subsection, we prove the convergence of the potential reduction algorithm given in the previous subsection. A key component of the analysis is the boundedness of the sequence produced by the algorithm, which is established in Lemmas 4.5 and 4.7.

Let {(w^k, y^k, λ^k)}_{k≥0} be the sequence produced by the potential reduction algorithm, and define F^k ≡ F(w^k, y^k, λ^k).

Lemma 4.4 The sequence {F^k} is bounded. As a result, the sequences {X^k • S^k} and {A(X^k)} are also bounded.

Proof. Consider the function p defined by

p(…) ≡ η log … − Σ log … ,

where η is the same constant appearing in (28). It is not difficult to verify (see Monteiro and Pang [12], for example) that p is coercive, i.e., p tends to infinity along every unbounded sequence of points. This property implies that the level set

… is compact for all … .

The definition of M implies that, for all (w, y, λ) ∈ Ξ,

… .

By the assumption that both the primal SDP (P) and the dual SDP (D) are feasible, duality implies that there exists a constant f⁻ such that the dual objective value … , which implies that

… ,

for all (w, y, λ) ∈ Ξ. Combining (42) with the fact that the potential reduction algorithm decreases the merit function M in each iteration, we see that

… ,

for all k ≥ 0. This in turn shows that … . We conclude that {F^k} is contained in a compact set and hence is bounded.

The boundedness of {X^k • S^k} and {A(X^k)} now follows immediately from (26), Lemma 2.8(c) and Corollary 2.7(b).
Lemma 4.5 The sequences {(z^k, y^k)}, {w^k}, {L^k} and {S^k} are bounded.

Proof. It suffices to show that the sequences {S^k} and {z^k} are bounded since, as in the proof of Lemma 3.2, the boundedness of {S^k} and {z^k} immediately implies the boundedness of {y^k}, {L^k} and {w^k}. Let … ; note that, for each k ≥ 0,

… .

It follows from the inequalities … that {z^k} is bounded. In addition, since … , the above relation and the boundedness of {z^k} imply that {S^k} is bounded.

Lemma 4.6 The sequence {C • X^k} is bounded.

Proof. By Lemma 4.4, there exists … . This implies the following relation, which holds for all k ≥ 0:

… .

Since … is bounded and since {A(X^k)} and {y^k} are bounded by Lemmas 4.4 and 4.5, we conclude from the above relation that {C • X^k} is bounded.

Lemma 4.7 The sequences {X^k} and {λ^k} are bounded.

Proof. Let (z̄, ȳ) be … ; note that … . Consider the following relation, which holds for all k ≥ 0:

… .

From this relation, Lemmas 4.4 and 4.6, and the fact that … , we conclude that {X^k} is bounded. In addition, since diag(X^k) … , we conclude that {λ^k} is bounded.
The following theorem proves the convergence of the potential reduction algorithm. We remark that the key result is the convergence of {F^k} to zero, which is stated in part (a) of the theorem. Part (b) is already implied by Lemmas 4.5 and 4.7, and once (a) has been established, part (c) follows immediately from Theorem 3.3.

Theorem 4.8 Let {(w^k, y^k, λ^k)} be the sequence produced by Algorithm PR. Then:
(a) lim_{k→∞} F(w^k, y^k, λ^k) = 0;
(b) the sequences {X^k} and {(z^k, y^k)} are bounded;
(c) any accumulation points of {X^k} and {(z^k, y^k)} are optimal solutions of (P) and (D), respectively.
Proof. To prove (a), assume for contradiction that (a) does not hold. Then Lemma 4.4 implies that there exists a convergent subsequence {F^k}_{k∈K} such that F^∞ ≡ lim_{k∈K} F^k ≠ 0. By Lemmas 4.5 and 4.7, we may also assume that the sequence {(w^k, y^k, λ^k)}_{k∈K} converges to a point (w^∞, y^∞, λ^∞). Since {F^k} is bounded, and due to the weak duality between (P) and (D), there exist constants … such that

… , … , and … .

These three inequalities together imply that

lim inf_{k∈K} min_{i∈N} F_i^k > 0,

since otherwise M(w^k, y^k, λ^k) would tend towards infinity, an impossibility since the algorithm has produced a sequence which monotonically decreases the merit function M. Hence, we conclude that F^∞ ∈ … , and so we clearly have that (w^∞, y^∞, λ^∞) ∈ Ξ. It follows that the Newton direction (Δw^∞, Δy^∞, Δλ^∞) exists at (w^∞, y^∞, λ^∞), and in addition, the sequence {(Δw^k, Δy^k, Δλ^k)}_{k∈K} of Newton directions converges to (Δw^∞, Δy^∞, Δλ^∞). Moreover, by the inequality (39) found in the proof of Lemma 4.3, we have that

∇M(w^∞, y^∞, λ^∞)ᵀ [ Δw^∞ ; Δy^∞ ; Δλ^∞ ] < 0.   (45)

Since {(w^k, y^k, λ^k)}_{k∈K} converges in Ξ and since M is continuous on Ξ, it follows that {M(w^k, y^k, λ^k)}_{k∈K} converges. Using the relation

M(w^{k+1}, y^{k+1}, λ^{k+1}) − M(w^k, y^k, λ^k) ≤ σ ρ^{j_k} ∇M(w^k, y^k, λ^k)ᵀ [ Δw^k ; Δy^k ; Δλ^k ] < 0,

where the first and second inequalities follow from (40) and (39), respectively, we clearly see that lim_{k∈K} ρ^{j_k} = 0, since the left-hand side tends to zero as k ∈ K tends to infinity. This implies that lim_{k∈K} j_k = ∞; since j_k tends to infinity as k ∈ K tends to infinity, we conclude that the Armijo line search requires more and more trial step-sizes as k ∈ K increases. Recall that the line search has two simultaneous objectives: given (w, y, λ) ∈ Ξ, the line search finds a step-size α > 0 such that (w(α), y(α), λ(α)) ∈ Ξ and such that relation (40) is satisfied. Since {(w^k, y^k, λ^k)}_{k∈K} and the Newton directions converge to (w^∞, y^∞, λ^∞) and (Δw^∞, Δy^∞, Δλ^∞), respectively, where Ξ is an open set, it is straightforward to see that there exist j̃ ≥ 0 and k̃ ∈ K such that

(w^k, y^k, λ^k) + ρ^j (Δw^k, Δy^k, Δλ^k) ∈ Ξ   (46)

for all j ≥ j̃ and all k ∈ K such that k ≥ k̃. Hence, due to the fact that lim_{k∈K} j_k = ∞, there exists k̄ such that, for all k ≥ k̄, we have j_k − 1 ≥ j̃, which implies that (46) holds with j = j_k − 1 but (40) is not satisfied for the step-size ρ^{j_k − 1}, i.e.,

M((w^k, y^k, λ^k) + ρ^{j_k−1} (Δw^k, Δy^k, Δλ^k)) − M(w^k, y^k, λ^k) > σ ρ^{j_k−1} ∇M(w^k, y^k, λ^k)ᵀ [ Δw^k ; Δy^k ; Δλ^k ].

Letting k ∈ K tend to infinity in the above expression, we obtain

∇M(w^∞, y^∞, λ^∞)ᵀ [ Δw^∞ ; Δy^∞ ; Δλ^∞ ] ≥ σ ∇M(w^∞, y^∞, λ^∞)ᵀ [ Δw^∞ ; Δy^∞ ; Δλ^∞ ],

which contradicts (45) and the fact that σ ∈ (0, 1). Hence, we conclude that statement (a) does in fact hold.

Statements (b) and (c) hold as discussed prior to the statement of the theorem.
5 Computational Results and Discussion
In this section, we discuss some of the advantages and disadvantages of the two algorithms
presented in Sections 3 and 4, and we also present some computational results for the
first-order log-barrier algorithm.
5.1 First-order versus second-order
It is a well-known phenomenon in nonlinear programming that first-order (f-o) methods,
i.e., those methods that use only gradient information to calculate their search directions,
typically require a large number of iterations for convergence to a high accuracy, while
second-order (s-o) methods, i.e., those that also use Hessian information, converge to the
same accuracy in far fewer iterations. The benefit of f-o methods over s-o methods, on the
other hand, is that gradient information is typically much less expensive to obtain than
Hessian information, and so f-o iterations are typically much faster than s-o iterations.
For many problems, s-o approaches are favored over f-o approaches since the small
number of expensive iterations produces an overall solution time that is better than the
f-o method's large number of inexpensive iterations. For other problems, the reverse is
true. Clearly, the relative advantages and disadvantages must be decided on a case-by-case
analysis.
For semidefinite programming, the current s-o interior-point methods (either primal-dual
or dual-scaling) have proven to be very robust for solving small- to medium-sized problems
to high accuracy, but their performance on large-sized problems (with large n and/or m)
has been mostly discouraging because the cost per iteration increases dramatically with
the problem size. In fact, these methods are often inappropriate for obtaining solutions of
even low accuracy. This void has been filled by f-o methods, which have proven capable
of obtaining moderate accuracy in a reasonable amount of time (see the discussion in the
introduction).
It is useful to consider the two algorithms presented in this paper in light of the above
comments. We feel that the f-o log-barrier algorithm will have its greatest use for the
solution of large SDPs. In fact, in the next section we give some computational results indicating
this is the case when n is of moderate size and m is large. The s-o potential reduction
method, however, will most likely not have an immediate impact except possibly on small-
to medium-sized problems. In addition, there may be advantages of the potential-reduction
algorithm over the conventional s-o interior-point methods. For example, the search direction
computation may be less expensive in the (w, y)-space, whether one solves the Newton
system directly or approximates its solution using the conjugate gradient method. (This is
a current topic of investigation.) Overall, the value of the potential reduction method is
two-fold: (i) it demonstrates that the convexity of the Lagrangian in the neighborhood \Xi
allows one to develop s-o methods for the transformed problem; and (ii) such s-o methods
may have practical advantages for solving small- to medium-sized SDPs.
5.2 Log-barrier computational results

Given a graph G with vertex set {1, …, n} and edge set E, the Lovász theta number ϑ of G (see [11]) can be computed as the optimal value of the following primal-dual SDP pair, stated here in its dual form:

min t   subject to   t I + Σ_{(j,k)∈E} y_{jk} (e_j e_kᵀ + e_k e_jᵀ) ⪰ eeᵀ,

where t ∈ ℝ and y_{jk}, (j, k) ∈ E, are the variables, I is the n × n identity matrix, e ∈ ℝⁿ is the vector of all ones, and e_k ∈ ℝⁿ is the k-th coordinate vector. Note that both the primal and dual problems have strictly feasible solutions.

We ran the log-barrier algorithm on nineteen graphs for which n was of small to moderate size but m was large. In particular, the size of m makes the solution of most of these Lovász theta problems difficult for second-order interior-point methods. The first nine graphs were randomly generated graphs on 100 vertices varying in edge density from 10% to 90%, while the last ten graphs are the complements of test graphs used in the Second DIMACS Challenge on the Maximum Clique Problem [10]. (Note that, for these graphs, the Lovász theta number gives an upper bound on the size of a maximum clique.)
We initialized the log-barrier algorithm with the specific w > 0 corresponding to z = −e. (Such a w was found by a direct Cholesky factorization.) In this way, we were able to begin the algorithm with a feasible point. The initial value of μ was set to 1, and after each log-barrier subproblem was solved, μ was decreased by a factor of 10. Moreover, the criterion for considering a subproblem solved was slightly altered from the theoretical condition described in Section 3.3. For the computational results, we found it more efficient to consider a subproblem solved once the norm of the gradient of the barrier function became less than 10⁻³. The overall algorithm was terminated once μ reached the value 10⁻⁶.
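These experimental settings amount to the following driver loop (our sketch, not the authors' ANSI C code; `minimize_barrier` is a hypothetical stand-in for the inner gradient method, run until the barrier-gradient norm falls below the stated tolerance):

```python
def run_log_barrier(minimize_barrier, w, y):
    """Continuation schedule used in the experiments: mu starts at 1, is
    cut by a factor of 10 after each subproblem, each subproblem counts as
    solved once ||gradient of the barrier function|| < 1e-3, and the run
    stops when mu reaches 1e-6."""
    mu = 1.0
    mus = []                       # record of barrier parameters visited
    while mu >= 1e-6:
        w, y = minimize_barrier(w, y, mu, grad_tol=1e-3)
        mus.append(mu)
        mu /= 10.0
    return w, y, mus
```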
Our computer code was programmed in ANSI C and run on an SGI Origin2000 with … Gigabytes of RAM at Rice University, although we stress that our code is not parallel. In Table 1, we give the results of the log-barrier algorithm on the nineteen test problems. In the first four columns, information regarding the problems is listed, including the problem name, the sizes of n and m, and a lower bound on the optimal value of the SDP; the lower bounds were computed with the primal SDP code described in [4]. In the next four columns, we give the objective value obtained by our code, the relative accuracy of the final solution with respect to the given lower bound, the time in seconds taken by the method, and the number of iterations performed by the method. From the table, we can see that the method can obtain a nearly optimal solution (as evidenced by the good relative accuracies) in a small amount of time even though m can be quite large. We also see that the number of iterations is quite large, which is not surprising since the method is a first-order algorithm.
6 Concluding Remarks
Conventional interior-point algorithms based on Newton's method are generally too costly
for solving large-scale semidefinite programs. In search of alternatives, some recent papers
have focused on formulations that facilitate in one way or another the application of
gradient-based algorithms. The present paper is one of the efforts in this direction.
In this paper, we apply the nonlinear transformation derived in [5] to a general linear SDP problem to obtain a nonlinear program with positivity constraints on some of the variables as well as additional inequality constraints. Under the standard assumptions of primal-dual strict feasibility and linear independence of the constraint matrices, we establish global convergence for a log-barrier algorithmic framework and for a potential-reduction algorithm.
Our initial computational experiments indicate that the log-barrier approach based on
our transformation is promising for solving at least some classes of large-scale SDP problems
including, in particular, problems where the number of constraints is far greater than the
size of the matrix variables. The potential reduction algorithm is also interesting from a
theoretical standpoint and for the advantages it may provide for solving small- to medium-scale
problems. We believe both methods are worth more investigation and improvement.
Table 1: Performance of the Log-Barrier Algorithm on Lovász Theta Graphs

problem             n      m     low bd    obj val     acc      time    iter
rand2             100    992    22.1225   22.1234   4.2e-05     528   18877
rand3             100   1487    17.0210   17.0221   6.4e-05     629   21264
rand4             100   1982    13.1337   13.1355   1.4e-04     682   22560
rand5             100   2477    10.4669   10.4678   8.6e-05     696   22537
rand6             100   2972     8.3801    8.3814   1.5e-04     829   24539
rand7             100   3467     7.0000    7.0001   2.1e-05     137    4106
rand8             100   3962     5.0000    5.0000   9.5e-06     176    5218
rand9             100   4457     4.0000    4.0000   4.5e-06     118    3612
brock200-1.co     200   5068    27.4540   27.4585   1.6e-04    3605   16083
brock200-4.co     200   6813    21.2902   21.2946   2.1e-04    4544   20092
c-fat200-1.co     200  18368    12.0000   12.0029   2.5e-04    2560    9337
johnson08-4-4.co   70    562    14.0000   14.0004   3.1e-05      28    2519
san200-0.7-1.co   200   5972    30.0000   30.0002   5.5e-06     273     973
--R
Solving large-scale sparse semidefinite programs for combinatorial optimization
Approximating Maximum Stable Set and Minimum Graph Coloring Problems with the Positive Semidefinite Relaxation.
A Projected Gradient Algorithm for Solving the Max-cut SDP Relaxation
A Nonlinear Programming Algorithm for Solving Semidefinite Programs via Low-rank Factorization
Solving a Class of Semidefinite Programs via Nonlinear Programming.
Application of Semidefinite Programming to Circuit Partitioning.
Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming.
A spectral bundle method for semidefinite programming.
Design and performance of parallel and distributed approximation algorithms for maxcut.
On the Shannon Capacity of a graph.
A potential reduction Newton method for constrained equations.
On Formulating Semidefinite Programming Problems as Smooth Convex Nonlinear Optimization Problems.
A Note on Efficient Computation of the Gradient in Semidefinite Programming
| nonlinear programming;semidefinite program;interior-point methods;semidefinite relaxation
584535 | Parallel Variable Distribution for Constrained Optimization. | In the parallel variable distribution framework for solving optimization problems (PVD), the variables are distributed among parallel processors with each processor having the primary responsibility for updating its block of variables while allowing the remaining secondary variables to change in a restricted fashion along some easily computable directions. For constrained nonlinear programs, convergence theory for PVD algorithms was previously available only for the case of a convex feasible set. Additionally, one either had to assume that constraints are block-separable, or to use exact projected gradient directions for the change of secondary variables. In this paper, we propose two new variants of PVD for the constrained case. Without assuming convexity of constraints, but assuming block-separable structure, we show that PVD subproblems can be solved inexactly by solving their quadratic programming approximations. This extends PVD to nonconvex (separable) feasible sets, and provides a constructive practical way of solving the parallel subproblems. For inseparable constraints, but assuming convexity, we develop a PVD method based on suitable approximate projected gradient directions. The approximation criterion is based on a certain error bound result, and it is readily implementable. Using such approximate directions may be especially useful when the projection operation is computationally expensive. | AMS subject classifications: 90C30, 49D27
1. Introduction and Motivation
We consider parallel algorithms for solving constrained optimization problems

min f(x)   subject to   x ∈ C,

Research of the first author is supported by CNPq and FAPERJ. The second author is supported in part by CNPq Grant 300734/95-6, by PRONEX-Optimization, and by FAPERJ.
© 2001 Kluwer Academic Publishers. Printed in the Netherlands.
Sagastizabal and Solodov
where C is a nonempty closed set in ℝⁿ and f : ℝⁿ → ℝ is a continuously differentiable function. Our approach consists of partitioning the problem variables x ∈ ℝⁿ into p blocks x_1, …, x_p, with x_l ∈ ℝ^{n_l} and n_1 + ⋯ + n_p = n, and distributing them among p parallel processors. Note that in this notation we shall not explicitly account for possible re-arranging of variables.
In most parallel algorithms, an iteration usually consists of two steps: parallelization and synchronization [2]. Consider for the moment the unconstrained case, corresponding to C = ℝⁿ. Typically, the synchronization step aims at guaranteeing a sufficient decrease in the objective function, while the parallel step produces candidate points (or candidate directions) by simultaneously solving certain subproblems (P_l), l = 1, …, p. Each (P_l) is defined on a subspace of dimension smaller than n, in such a way that these p subspaces "span" the whole variable space ℝⁿ. Most methods, such as Block-Jacobi [2], updated conjugate subspaces [10], coordinate descent [21], and parallel gradient distribution [14], define (P_l) as a minimization problem on the l-th block of variables, i.e., in ℝ^{n_l}. More recently, Parallel Variable Distribution (PVD) algorithms, introduced in [6] and further studied and extended in [19, 20, 7], advocated subproblems (P_l) of slightly higher dimension than n_l. In the l-th subproblem, in addition to the associated "primary" optimization variables x_l, there are p − 1 scalar variables representing in a condensed form all the remaining n − n_l problem variables (see Algorithm 1 below). Since those remaining n − n_l variables are allowed to change only in a restricted fashion (in a subspace of dimension p − 1), the additional computational burden of solving such enlarged subproblems is not big. The idea is that the "forget-me-not" terms add an extra degree of freedom, yielding algorithms with better robustness and faster convergence. We refer the reader to [6, 22, 12] for numerical validation of PVD-type methods. We note that not having any variables in the parallel subproblems completely fixed can be especially important in the constrained case, i.e., when C ≠ ℝⁿ; without this feature, the method can simply fail. We shall return to this issue later on.
1.1. General PVD Framework
To formalize the notion of primary and secondary variables, we need to
introduce some notation. Let \bar{l} denote the complement of l in the index
set {1, ..., p}, where p is the number of parallel processors. Given some
Parallel Variable Distribution 3
direction d^i ∈ R^n, which we shall call a PVD-direction, we denote by D^i_{\bar{l}}
the n_{\bar{l}} × (p - 1) block-diagonal matrix formed by placing the blocks
d^i_j, j ∈ \bar{l}, of the chosen direction d^i along
its block diagonal as follows:

D^i_{\bar{l}} := diag( d^i_1, ..., d^i_{l-1}, d^i_{l+1}, ..., d^i_p ).

Note that if I denotes the identity matrix of appropriate dimension,
the linear transformation

A^i_l := [ I_{n_l × n_l}  0 ;  0  D^i_{\bar{l}} ]        (1.2)

maps R^{n_l} × R^{p-1} into R^n. In the unconstrained
case, more general transformations can be used, which give rise to a
fairly broad parallel variable transformation framework discussed in [7].
However, the theory of [7] does not appear to extend to the constrained
case, which is the focus of the present paper.
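The construction of the block-diagonal matrix from the blocks of a PVD-direction can be sketched as follows. This is our own plain-Python illustration with dense lists of lists; `block_diagonal` is a hypothetical helper name:

```python
# Build the block-diagonal matrix formed from the blocks of direction
# d for all indices j != l: the remaining blocks are placed down the
# block diagonal, so each secondary block moves along its own block of
# d, scaled by a single scalar (illustrative, not from the paper).
def block_diagonal(d_blocks, l):
    others = [b for j, b in enumerate(d_blocks) if j != l]
    rows = sum(len(b) for b in others)
    cols = len(others)
    D = [[0.0] * cols for _ in range(rows)]
    r = 0
    for c, b in enumerate(others):
        for v in b:
            D[r][c] = v
            r += 1
    return D

d = [[1.0, 2.0], [3.0], [4.0, 5.0]]   # a direction split into 3 blocks
D = block_diagonal(d, 1)              # drop block 1, keep blocks 0 and 2
```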
We describe next the basic PVD algorithm [6, 19, 20].

ALGORITHM 1. (PVD) Start with any x^0 ∈ C. Choose a PVD-direction d^0; set i := 0.
Having x^i, check a stopping criterion. If it is not satisfied, compute
x^{i+1} as follows.
Parallelization. For each processor l ∈ {1, ..., p}, compute a (possibly inexact)
solution (y^i_l, λ^i_l) of

(P_l)   min f(x_l, x^i_{\bar{l}} + D^i_{\bar{l}} λ_l)  subject to  (x_l, x^i_{\bar{l}} + D^i_{\bar{l}} λ_l) ∈ C.

Synchronization. Compute x^{i+1} such that

f(x^{i+1}) ≤ min_{l ∈ {1,...,p}} f(y^i_l, x^i_{\bar{l}} + D^i_{\bar{l}} λ^i_l);

choose a new PVD-direction d^{i+1}; set i := i + 1 and repeat.
The idea of the PVD algorithm is to balance the reduction in the number
of variables for each (P_l) with allowing just enough freedom for
the change of the other (secondary) variables. Due to this, the parallel
subproblems better approximate the original problem to be
solved. Note that because the secondary variables can change only
along chosen fixed directions, their inclusion does not significantly increase
the dimensionality of the parallel subproblems (P_l). Indeed, the
number of variables in (P_l) is only p - 1 more than if the
secondary variables were excluded. Of course, in the context of parallel
computing it is reasonable to assume that p is small relative to n_l.
The synchronization step in Algorithm 1 may consist of minimizing
the objective function in the affine hull, subject to feasibility, of all the
points computed in parallel by the p processors. This would require
solving a p-dimensional problem, which is again small compared to
the original one. If (1.1) is a convex program, one can alternatively
define x^{i+1} as a convex combination of the candidate points. In principle,
for convergence purposes, any point with an objective function
value at least as good as the smallest computed by all the processors
is acceptable. As for PVD-directions, they are typically some easily
computable feasible descent directions for the objective function f at
the current iterate x^i, e.g., quasi-Newton or steepest descent directions
in the unconstrained case.
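The weakest admissible synchronization rule described above, accepting any point with an objective value at least as good as the best candidate, can be sketched as follows (our own illustration; here we simply return the best candidate itself):

```python
# Simplest admissible synchronization step: among the candidate points
# returned by the parallel subproblems, keep the one with the smallest
# objective value (illustrative helper, not from the paper).
def synchronize(f, candidates):
    return min(candidates, key=f)

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
cands = [[0.0, 0.0], [1.0, -2.0], [2.0, 1.0]]
x_next = synchronize(f, cands)   # the candidate minimizing f
```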
Algorithm 1 is a rather general framework, which has to be further
refined and specialized to obtain implementable/practical versions. In
this respect, the two principal questions are:
- how to set up the parallel subproblems, e.g., how to choose PVD-directions, and
- how to solve each parallel subproblem, including some criteria for inexact resolution.
When (1.1) is unconstrained, some of these issues have been addressed
in [19], which contains improved convergence results (compared
to [6]), including linear rate of convergence. These results, as well
as useful generalizations such as algorithms with inexact subproblem
solution and a certain degree of asynchronization, were obtained by
imposing natural restrictions (of sufficient descent type) on the PVD-directions.
An even more general framework for the unconstrained case
was developed later in [7], where subproblems are obtained via certain
nondegenerate transformations of the original variable space. These
transformations can be very general, and there need not even be a
distinction between primary and secondary variables. However, this
approach does not seem to extend to the constrained case. It seems
also that intuitively justifiable transformations do have some kind of
primary-secondary variable structure. For those reasons, in the present
paper we shall restrict our consideration to specific transformations of
the form (1.2).
When (1.1) is a constrained optimization problem, many questions
are still open, especially for a nonconvex feasible set C. When C is
convex with block-separable structure (i.e., C is a Cartesian product
of closed convex sets), it was shown in [6] that every accumulation
point of the PVD iterates satisfies the first-order necessary optimality
conditions for problem (1.1). It was further stated that in the case
of inseparable convex constraints, the PVD approach may fail. This
conclusion was supported by a counter-example, which we reproduce
below:
This strongly convex quadratic program has a unique global solution x̄.
Consider any feasible point x̂ ≠ x̄, and observe that the parallel
subproblems admit no progress from x̂ when the secondary variables are
kept fixed. Therefore, if we apply Algorithm 1 using x̂ as its starting point and
fixing the secondary variables, then this PVD variant will stay at this
same nonoptimal point, thus failing to solve the original problem.
This shows that in the constrained case, special care should be taken
in setting up the parallel subproblems.
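The failure mode can be illustrated on a toy instance of our own (this is not the counter-example of the paper): with the coupled constraint x_1 + x_2 = 2 and objective x_1^2 + x_2^2, fixing the secondary variable pins the primary one, so the iteration cannot leave a nonoptimal feasible point:

```python
# Toy illustration of PVD failure under an inseparable constraint
# x1 + x2 = 2 with f(x) = x1^2 + x2^2 (solution (1, 1)). Minimizing
# over x1 with x2 fixed forces x1 = 2 - x2, and vice versa, so the
# iteration is stuck at any feasible starting point.
def coordinate_step(x):
    x1 = 2.0 - x[1]   # feasibility pins x1 when x2 is fixed
    x2 = 2.0 - x1     # feasibility pins x2 when x1 is fixed
    return [x1, x2]

x = [1.5, 0.5]                  # feasible, but not the solution (1, 1)
x_new = coordinate_step(x)      # equals x: no progress is possible
```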
For a general convex feasible set, it was shown in [20] that using the
projected gradient direction d(x) := x - P_C[x - ∇f(x)] for the secondary
variables does the job (here P_C[·] stands for the orthogonal projection
map onto the closed convex set C). Specifically, it was established that
setting d^i := d(x^i) ensures convergence of PVD methods for
problems with general (inseparable) convex constraints. Some criteria
for inexact subproblem solution were also given in [20]. When C is
a polyhedral set, computing the projected gradient direction requires
solving at every synchronization step of Algorithm 1 a single quadratic
programming problem. For this, a wealth of fast and reliable algorithms
is available [11, 5]. It should be noted that in the case of nonlinear
constraints, the task of computing the projected gradient directions
is considerably more computationally expensive. Actually, even in the
affine case, computing those directions exactly (or very accurately) may
turn out to be rather wasteful, especially when far from the solution of the
original problem. Therefore, improvements are necessary.
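For the special case where C is a box, the projection is coordinate-wise clipping, and the direction d(x) = x - P_C[x - ∇f(x)] can be computed directly. This is a sketch of our own; for a general polyhedral set a quadratic program would be solved instead, as noted above:

```python
# Projected gradient direction d(x) = x - P_C[x - grad_f(x)] for a box
# constraint C = [lo, hi], where the projection reduces to clipping
# (illustrative special case, not the paper's general setting).
def project_box(y, lo, hi):
    return [min(max(v, a), b) for v, a, b in zip(y, lo, hi)]

def pg_direction(x, grad, lo, hi):
    y = [xi - gi for xi, gi in zip(x, grad(x))]
    z = project_box(y, lo, hi)
    return [xi - zi for xi, zi in zip(x, z)]

grad = lambda x: [2.0 * x[0], 2.0 * x[1]]   # gradient of x1^2 + x2^2
d = pg_direction([1.0, 0.5], grad, [0.0, 0.0], [2.0, 2.0])
```

Note that d(x) = 0 exactly when x satisfies the first-order optimality condition, which is why this direction can drive the secondary variables.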
In this paper, we propose two new versions of PVD for the nonlinearly
constrained case. The first one applies to problems with block-separable
nonconvex feasible sets. It is based on the use of sequential
quadratic programming (SQP) techniques. Our second proposal is for
inseparable convex feasible sets. We introduce a computable approximation
criterion which allows one to employ inexact projected gradient
directions. This criterion is based on an error bound result, which is of
independent interest. We emphasize that it is readily implementable
and preserves global convergence of PVD methods based on exact
directions.
Our notation is fairly standard. The usual inner product of two vectors
x, y ∈ R^n is denoted by ⟨x, y⟩, and the associated norm is given by |x| := ⟨x, x⟩^{1/2}. When
using other norms, we shall specify them explicitly.
Analogous notation will be used for subspaces of any other dimensions;
for example, for the reduced subspaces R^{n_l}. For the sake of simplicity,
we sometimes use a compact (transposed) notation when referring to
composite vectors. For instance, (x_1, ..., x_p) stands for the column vector
composed of the blocks x_l. For a differentiable function f : R^n → R, ∇f(x) will denote the
n-dimensional column vector of partial derivatives of f at the point
x. For a differentiable vector-function c : R^n → R^m, c'(x) will
denote the m × n Jacobian matrix whose rows are the transposed gradients
of the components of c. For a function h, we
write h ∈ C^{1,1}_L(X) if its partial derivatives are Lipschitz-continuous
on the set X with modulus L > 0.
2. Nonconvex Separable Constraints
Suppose the feasible set C in (1.1) is described by a system of inequality
constraints:

C := {x ∈ R^n : c(x) ≤ 0},        (2.3)

where c : R^n → R^m. In this section, we assume that C has block-separable
structure. Specifically,

C = C_1 × ... × C_p,  C_l := {x_l ∈ R^{n_l} : c_l(x_l) ≤ 0},  c_l : R^{n_l} → R^{m_l},  l = 1, ..., p.        (2.4)

Our proposal is to solve the parallel subproblems (P_l) of Algorithm 1 inexactly,
by making one step of the sequential quadratic programming
method (SQP) [3, Ch. 13]. Since SQP methods are local by nature,
we modify the synchronization step in Algorithm 1 by introducing a
suitable line-search based on the following exact penalty function [9]:

θ_σ(y) := f(y) + σ |c(y)^+|_1,        (2.5)

where σ is a positive parameter, and y^+ := max(y, 0) is
taken componentwise. Given x^i, the l-th block of ∇f(x^i) will be denoted
by ∇_l f(x^i).
Our algorithm is the following.

ALGORITHM 2. Start with any x^0 ∈ C. Choose parameters σ̄ > 0
and β, γ ∈ (0, 1), and positive definite n_l × n_l matrices M^0_l, l = 1, ..., p. Set i := 0.
Having x^i, check a stopping criterion. If it is not satisfied, compute
x^{i+1} as follows.
Parallelization. For each processor l ∈ {1, ..., p}, compute (δ^i_l, μ^i_l) as a KKT point of

(QP_l)   min ⟨∇_l f(x^i), δ_l⟩ + (1/2)⟨M^i_l δ_l, δ_l⟩  subject to  c_l(x^i_l) + c'_l(x^i_l) δ_l ≤ 0.

Synchronization. Define δ^i := (δ^i_1, ..., δ^i_p).
Line-search. Choose σ^i ≥ σ̄.
Using the merit function (2.5), find m^i, the smallest nonnegative integer
m, such that the sufficient-decrease condition (2.6) holds for the stepsize γ^m.
Set t^i := γ^{m^i}, x^{i+1} := x^i + t^i δ^i, choose matrices M^{i+1}_l, set i := i + 1, and
repeat.
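The merit function (2.5) and the backtracking search can be sketched as follows. Note that the acceptance test used here is a generic Armijo condition on the merit function with a supplied model decrease `delta`; it stands in for the paper's condition (2.6), which we do not transcribe:

```python
# l1 exact penalty merit function f(y) + sigma * |c(y)^+|_1 and a
# backtracking line-search on it. The acceptance test is a generic
# Armijo-type condition, not the paper's condition (2.6).
def merit(f, c, sigma, y):
    return f(y) + sigma * sum(max(ci, 0.0) for ci in c(y))

def backtrack(f, c, sigma, x, direction, delta, beta=0.5, max_m=30):
    base = merit(f, c, sigma, x)
    for m in range(max_m):
        t = beta ** m
        y = [xi + t * di for xi, di in zip(x, direction)]
        if merit(f, c, sigma, y) <= base - t * delta:
            return t, y
    raise RuntimeError("line search failed")

f = lambda y: y[0] ** 2
c = lambda y: [y[0] - 2.0]           # constraint y0 - 2 <= 0
t, y = backtrack(f, c, 1.0, [1.0], [-1.0], delta=0.5)
```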
The above proposal has to be compared to the original PVD algorithm
[6]. Two remarks are in order. First, Algorithm 2 can be viewed as
a PVD method with inexact solution of the parallel subproblems. Indeed,
the "hard" nonlinear PVD subproblems (P_l) of Algorithm 1 are approximated
here by "easy" quadratic programming subproblems (QP_l).
Second, the PVD approach is extended to the case of nonconvex
constraints. Note that the inclusion of forget-me-not terms does not
seem to be crucial in the block-separable case (for example, such terms
were not specified in the analysis of [6]). Thus, we drop here the secondary
variables in (P_l), by taking null PVD-directions. However, the use of
secondary variables deserves further study for feasible sets with inseparable
constraints. We also note that Algorithm 2 can be thought of as
a distributed parallel implementation of SQP, where a block-diagonal
matrix is chosen to generate quadratic approximations of the objective
function. We remark that the contribution of Algorithm 2 is meant
primarily for the PVD framework rather than general SQP methods.
In Algorithm 2, we assume that the subproblems (QP_l) have nonempty
feasible sets at every iteration (this is guaranteed, for example, if for
each l the Jacobian c'_l maps R^{n_l} onto R^{m_l}, or if c_l is convex). Alternatively,
it is known that feasibility of the subproblems can be forced by
introducing an extra "slack" variable [1, p. 377]. It is sometimes argued
that SQP methods are not convenient for solving large-scale (with n
and m large) nonlinear programs when there are inequality constraints
present. More precisely, it is argued that the combinatorial aspect introduced
by the complementarity condition in the KKT system associated
to each QP makes the resolution of the subproblems relatively costly. On the
other hand, SQP techniques are known to be very efficient for small to
medium size problems, and they are often the method of choice in that
setting. In relation to Algorithm 2, it is important to note that each
subproblem (QP_l) is a quadratic programming problem of relatively
small dimension (n_l and m_l are presumed to be small compared to
n and m). There exist a number of very fast and reliable algorithms
for solving such problems (see, e.g., [5]). In Section 4, we report some
numerical results for Algorithm 2, obtained via simulation on a serial
computer.
For further reference, we state the optimality conditions for (QP_l):
the pair (δ^i_l, μ^i_l) solves the KKT system

∇_l f(x^i) + M^i_l δ^i_l + c'_l(x^i_l)^T μ^i_l = 0,        (2.7a)
c_l(x^i_l) + c'_l(x^i_l) δ^i_l ≤ 0,        (2.7b)
μ^i_l ≥ 0  and  ⟨μ^i_l, c_l(x^i_l) + c'_l(x^i_l) δ^i_l⟩ = 0.        (2.7c)
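A numerical check of the three KKT ingredients (stationarity, feasibility of the linearized constraints, complementarity) can be sketched as follows. This is our own generic helper, not a transcription of the displayed system:

```python
# Generic KKT residual: the largest violation among stationarity,
# inequality feasibility (c <= 0), and complementarity <u, c> = 0,
# with multiplier nonnegativity enforced (illustrative helper).
def kkt_residual(stationarity, constraint_vals, multipliers):
    if any(u < 0.0 for u in multipliers):
        return float("inf")
    r_stat = max(abs(s) for s in stationarity)
    r_feas = max(max(ci, 0.0) for ci in constraint_vals)
    r_comp = abs(sum(u * ci for u, ci in zip(multipliers, constraint_vals)))
    return max(r_stat, r_feas, r_comp)

r = kkt_residual([0.0], [-1.0, 0.0], [0.0, 2.0])   # 0.0: KKT holds
```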
Denoting by θ'_σ(x; d) the usual directional derivative of the merit function
θ_σ at x ∈ R^n in the direction d ∈ R^n, we next state that δ^i is
a descent direction for this function at the point x^i. This in turn will
imply that the line-search procedure in Algorithm 2 is well-defined.
LEMMA 1. If f, c ∈ C^{1,1}_L(R^n), then

θ'_{σ^i}(x^i; δ^i) ≤ - Σ_{l=1}^{p} ⟨δ^i_l, M^i_l δ^i_l⟩,        (2.8)

where δ^i_l, M^i_l and σ^i are defined in Algorithm 2.
Proof. Defining the index-sets of active and violated linearized constraints,
we have the usual expression for the directional derivative of the
penalty term (e.g., see [9, p. 301]). Using (2.7b) componentwise then
yields the intermediate bound (2.9) on θ'_{σ^i}(x^i; δ^i).
To obtain the right-most inequality in (2.8) we expand ⟨∇f(x^i), δ^i⟩
block by block, where the second equality is by (2.7a), the fourth is by (2.7c), and the
inequality follows from (2.7c) and the fact that c_l(x^i_l)^+ ≥ c_l(x^i_l).
Combining the latter relation with (2.9), and using the Cauchy-Schwarz
inequality together with the choice of σ^i, we obtain (2.8). This completes the proof.
For Algorithm 2 to be globally convergent, we assume that the
penalization parameter σ^i is kept bounded above, which essentially
means that the multipliers μ^i_l stay bounded. From the practical point
of view, the latter assumption is natural. In what follows, we show
convergence of Algorithm 2 to Karush-Kuhn-Tucker (KKT) points of
(1.1), i.e., pairs (x̄, μ̄) satisfying the system (2.10).

THEOREM 1. Suppose that f, c ∈ C^{1,1}_L(R^n), and let the feasible
set C have block-separable structure given by (2.4). Let {(x^i, μ^i)} be a
sequence generated by Algorithm 2. Assume there exists an iteration
index i_0 such that σ^i = σ^{i_0} for all i ≥ i_0, and that the matrices M^i_l are
uniformly positive definite and bounded over all
iteration indices i. Then either the sequence of merit function values
is unbounded below, or every accumulation
point (x̄, μ̄) of the sequence {(x^i, μ^i)} is a KKT point of the problem,
i.e., it satisfies (2.10).
Proof. If for some iteration index i it happens that

δ^i = 0,        (2.11)

then (2.7a)-(2.7c) reduce to

∇_l f(x^i) + c'_l(x^i_l)^T μ^i_l = 0,  c_l(x^i_l) ≤ 0,  μ^i_l ≥ 0  and  ⟨μ^i_l, c_l(x^i_l)⟩ = 0,  l = 1, ..., p.

Now taking into account the separability of the constraints, it follows that
(x^i, μ^i) satisfies the KKT system (2.10).
Suppose now that (2.11) does not hold for any i. By (2.8) in Lemma 1,
δ^i is then a direction of descent for the merit function θ_{σ^i} at x^i.
By a standard argument, the line-search procedure is well-defined and
terminates finitely with some stepsize t^i > 0. The entire method is
then well-defined and generates an infinite sequence of iterates.
We prove first that the sequence of stepsizes {t^i} is bounded away
from 0. Take any t ∈ (0, 1]. Since f ∈ C^{1,1}_L(R^n), we have the standard
quadratic upper estimate (2.12) for f(x^i + tδ^i). Similarly, since c ∈ C^{1,1}_L(R^n),
and using the equivalence of the norms in the finite-dimensional setting,
for some R_1 > 0 we can estimate |c_l(x^i_l + tδ^i_l)^+|_1: an equality
is obtained by adding and subtracting t c_l(x^i_l), a second inequality
follows from |a|_1 ≤ |a - b|_1 + |b|_1, and the Lipschitz-continuity of the
derivatives is used in the last inequality. Re-writing the resulting relation,
and using the convexity of |(·)^+|_1 together with (2.7b), we obtain a
corresponding estimate for the penalty term.
Together with (2.12), the last inequality yields (2.13), where R_2
is a fixed constant depending on R_1 and L.
By a direct comparison of the latter relation with (2.6), we conclude that (2.6) is
guaranteed to be satisfied once m is large enough so that γ^m falls
within the set of stepsizes t satisfying the resulting bound. In particular,
since the line-search procedure did not accept the stepsize t^i/γ,
it follows that either t^i = 1 or t^i is bounded below by a positive
quantity proportional to the predicted decrease.
By (2.8) in Lemma 1 and the uniform positive definiteness of the matrices
M^i_l, there exists R_3 > 0 such that θ'_{σ^i}(x^i; δ^i) ≤ -R_3 |δ^i|²,
and using (2.13), we conclude that the stepsizes are bounded away from
zero: t^i ≥ t̄ > 0 for all i.
By the assumption that σ^i = σ^{i_0} for all i ≥ i_0, from (2.6) it follows that
the sequence {θ_{σ^{i_0}}(x^i)} is nonincreasing. Hence, it is either unbounded
below, or it converges. In the latter case, (2.6) implies that t^i θ'_{σ^i}(x^i; δ^i) → 0,
and since t^i ≥ t̄ > 0, it holds that θ'_{σ^i}(x^i; δ^i) → 0.
By the definition of δ^i and the uniform positive definiteness of the matrices
M^i_l, we conclude that δ^i → 0. Finally,
passing onto the limit in (2.7a)-(2.7c) as i → ∞, and taking into
account the boundedness of the matrices M^i_l, we obtain the assertions
of the theorem.
3. Convex Inseparable Constraints
Suppose now that the feasible set C is defined by a system of convex
inequalities, i.e., c in (2.3) has convex components c_j, j = 1, ..., m.
We emphasize that in this section we do not assume
separability of the constraints. In this setting, it appears that the only currently
known way to ensure convergence of the PVD algorithm is to use
the projected gradient directions for the change of the secondary variables
[20]. Omitting the iteration index i, if x is the current iterate, to
compute these directions one has to solve a subproblem of the following
structure:

min (1/2) |z - (x - ∇f(x))|²  subject to  c(z) ≤ 0.        (3.14)

This is done by some iterative algorithm. As already discussed in Section 1,
solving this problem in the general nonlinear case can be quite
costly. Moreover, when far from a solution of the original problem,
exact (or even very accurate) projection directions are perhaps unnecessary.
This suggests developing a stopping rule for solving (3.14) or,
equivalently, an approximation criterion for projection directions to be
used in the PVD scheme. For algorithmic purposes, it is important to
make this criterion constructive and implementable.
Assuming some constraint qualification condition [13], we have that
z̄ solves (3.14), i.e., z̄ = P_C[x - ∇f(x)],
if, and only if, the pair (z̄, ū) satisfies, for some ū ∈ R^m,
the KKT system

∇_z L(z̄, ū) = 0,  c(z̄) ≤ 0,  ū ≥ 0  and  ⟨ū, c(z̄)⟩ = 0,        (3.15)

where

L(z, u) := (1/2) |z - (x - ∇f(x))|² + ⟨u, c(z)⟩        (3.16)

is the standard Lagrangian for problem (3.14).
Suppose z ∈ C and u ≥ 0 form some current approximation to a
primal-dual optimal solution of (3.14), generated by an iterative algorithm
applied to solve this problem. Lemma 2 below establishes an error
bound for the distance from z to P_C[x - ∇f(x)] in terms of violations
of the KKT conditions (3.15) by the pair (z, u). A nice feature of this
estimate is that, unlike some (perhaps, most) error bound results in
the literature (see [18] for a survey), it does not involve any expressions
which are not readily computable or any other quantities which are
not observable. Furthermore, this error bound holds globally, i.e., not
just in some neighbourhood of the solution point. Thus it can be easily
employed for algorithmic purposes.
Lemma 2 is related to error bounds for strongly convex programs
obtained in [15]. However, Theorem 2.2 in [15] involves certain constants
which are in general not computable, while Corollary 2.4 in [15]
assumes not only that z is primal feasible, but also that (z, u) is dual
feasible, which here means that ∇_z L(z, u) = 0 in addition to u ≥ 0. In Lemma 2
we only assume that z is primal feasible and that the approximate
multiplier u is nonnegative. Our assumptions are not only weaker but
they also appear to be more suitable in an algorithmic framework, as
they will be satisfied at each iteration of many standard optimization
methods. Dual feasibility, in contrast, is unlikely to be satisfied along
the iterations of typical algorithms, except in the limit.
LEMMA 2. Let c : R^n → R^m be convex and differentiable, and suppose
that the set C given by (2.3) satisfies some constraint qualification.
Then for any z ∈ C and u ≥ 0, it
holds that |z - P_C[x - ∇f(x)]| ≤ ε(z, u),
where

ε(z, u) := 2^{-1} ( |∇_z L(z, u)| + ( |∇_z L(z, u)|² - 4⟨u, c(z)⟩ )^{1/2} ).        (3.17)
Proof. Let z̄ := P_C[x - ∇f(x)] and let ū be the associated multiplier, so
that the pair (z̄, ū) satisfies (3.15). Take any z ∈ C and u ≥ 0,
with L(·, ·) defined by (3.16). By the strong convexity (with modulus one)
of L(·, u), we have

|z - z̄|² ≤ ⟨∇_z L(z, u), z - z̄⟩ - ⟨u, c(z)⟩,        (3.18)

where the inequality follows from u ≥ 0, ū ≥ 0, and the facts that, by
the convexity of c(·), c(z̄) - c(z) - c'(z)(z̄ - z) ≥ 0 and c(z) - c(z̄) - c'(z̄)(z - z̄) ≥ 0.
On the other hand,

⟨∇_z L(z, u), z - z̄⟩ ≤ |∇_z L(z, u)| |z - z̄|,        (3.19)

where the equality steps follow from the KKT conditions (3.15), and the
inequality follows from the facts that ū ≥ 0,
c(z) ≤ 0, and the Cauchy-Schwarz inequality. Denoting t := |z - z̄|, α :=
|∇_z L(z, u)| ≥ 0 and β := -⟨u, c(z)⟩ ≥ 0, and combining (3.18) with
(3.19), we obtain the following quadratic inequality in t:

t² - αt - β ≤ 0.

Resolving this inequality, we obtain that t ≤ 2^{-1}(α + (α² + 4β)^{1/2}).
Recalling the definitions of the quantities involved, we conclude that

|z - z̄| ≤ 2^{-1} ( |∇_z L(z, u)| + ( |∇_z L(z, u)|² - 4⟨u, c(z)⟩ )^{1/2} ).
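The residual ε(z, u) of Lemma 2 is directly computable from observable quantities; a sketch (with the leading 1/2 factor as reconstructed in (3.17)):

```python
import math

# Computable error bound eps(z, u) of Lemma 2:
# eps = ( |grad_z L| + sqrt(|grad_z L|^2 - 4 <u, c(z)>) ) / 2.
# For z feasible (c(z) <= 0) and u >= 0 we have <u, c(z)> <= 0, so the
# radicand is nonnegative (illustrative helper, not the paper's code).
def eps(grad_L, u, c_z):
    g = math.sqrt(sum(gi * gi for gi in grad_L))
    comp = sum(ui * ci for ui, ci in zip(u, c_z))   # <u, c(z)> <= 0
    return 0.5 * (g + math.sqrt(g * g - 4.0 * comp))

# At an exact KKT pair, grad_z L = 0 and <u, c(z)> = 0, so eps vanishes:
e = eps([0.0, 0.0], [1.0, 0.0], [0.0, -3.0])
```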
We propose the following PVD algorithm based on approximations of
the projected gradient directions.

ALGORITHM 3. Choose parameters σ_1 ∈ (0, 1) and σ_2 > 0, with σ_2
suitably restricted in terms of σ_1. Start with any x^0 ∈ C. Set i := 0.
Having x^i, check a stopping criterion. If it is not satisfied, proceed as
follows.
PVD-direction choice. Compute z^i ∈ C
and the associated approximate Lagrange multiplier u^i ≥ 0
for problem (3.14) satisfying the tolerance criterion (3.20).
Compute the direction d^i := x^i - z^i.
Parallelization. For each processor l ∈ {1, ..., p}, compute a solution
(y^i_l, λ^i_l) of the subproblem (P_l) defined in Algorithm 1.
Synchronization. Compute x^{i+1} with an objective value at least as good
as the best of the candidate points f(y^i_l, x^i_{\bar{l}} + D^i_{\bar{l}} λ^i_l), l ∈ {1, ..., p};
set i := i + 1 and repeat.
Note that Algorithm 3 is a general-purpose method for problems
with no special structure. The example presented in the Introduction
shows that for such problems computing a meaningful direction
for the secondary variables is indispensable. Compared to any alternative,
computing an approximate projected gradient direction seems to
be quite favorable in terms of cost, and perhaps even the best one
can do. Regarding the tolerance criterion (3.20), we note that if z^i is
the exact projection point then ε(z^i, u^i) = 0, while the equation d(x^i) = 0
is precisely the first-order necessary optimality condition for (1.1).
From this, it is easy to see that the projection
problem (3.14) always has inexact solutions satisfying (3.20). Hence,
the method is well-defined.
As for the convergence properties of Algorithm 3, our result is the
following.

THEOREM 2. Let c : R^n → R^m be convex and differentiable, and
let f ∈ C^{1,1}_L(C). Suppose {x^i} is a sequence generated by Algorithm 3.
Then either f is unbounded from below on C, or the sequence {f(x^i)}
converges, and every accumulation point of the sequence {x^i} satisfies
the first-order necessary optimality condition.
Proof. Consider any iteration index i, a processor l ∈ {1, ..., p},
and the point x^i - t d^i, with t ∈ (0, 1). We first show that this point is feasible
for the corresponding subproblem (P_l) of minimizing the function
f(x_l, x^i_{\bar{l}} + D^i_{\bar{l}} λ_l) over the feasible pairs (x_l, λ_l).
Indeed, using the notation (1.2), x^i - t d^i = (1 - t) x^i + t z^i can be written
in the form (x_l, x^i_{\bar{l}} + D^i_{\bar{l}} λ_l),
where the first equality follows from the structure of D^i_{\bar{l}}, and the inclusion
in C is by the facts that x^i ∈ C, z^i ∈ C, t ∈ (0, 1), and the convexity of the
set C. As a result, we have that

f(y^i_l, x^i_{\bar{l}} + D^i_{\bar{l}} λ^i_l) ≤ f(x^i - t d^i) ≤ f(x^i) - t ⟨∇f(x^i), d^i⟩ + (t² L / 2) |d^i|²,

where the last inequality follows from f ∈ C^{1,1}_L(C) (see [1, Proposition A.24]).
By the construction of the algorithm and Lemma 2,
|z^i - P_C[x^i - ∇f(x^i)]| ≤ ε(z^i, u^i).
Hence, there exists some ξ^i ∈ R^n with |ξ^i| ≤ ε(z^i, u^i) such that
z^i = P_C[x^i - ∇f(x^i)] + ξ^i.
Define the (continuous) residual function R(x) := |x - P_C[x - ∇f(x)]|.
With this notation, d^i = d(x^i) - ξ^i.
By properties of the projection operator (e.g., see [1, Proposition B.11]),
since x^i ∈ C we have that ⟨∇f(x^i), d(x^i)⟩ ≥ |d(x^i)|².
Hence, ⟨∇f(x^i), d^i⟩ ≥ R(x^i)² - ⟨∇f(x^i), ξ^i⟩.
Combining the latter relation with (3.21) and (3.22), and taking into
account the synchronization step of Algorithm 3, we further obtain a
sufficient-decrease estimate (3.23) for f(x^{i+1}).
Observe that |⟨∇f(x^i), ξ^i⟩| can be bounded in terms of ε(z^i, u^i),
where the first relation is by the Cauchy-Schwarz inequality, the second
follows from (3.22), and the last from (3.20). Hence,
using (3.23), the latter inequality, and again (3.20), we arrive at the
decrease estimate (3.24).
By the choice of the parameters σ_1 and σ_2, the
sequence {f(x^i)} is nonincreasing. If f(·) is bounded below on C, then
{f(x^i)} is bounded below, and hence it converges. In the latter case,
the right-hand side of (3.24) tends to zero. On the other
hand, |d^i| ≥ R(x^i) - ε(z^i, u^i), so that
{R(x^i)} also tends to zero. By the continuity of R(·),
it then follows that for every accumulation point x̄ of the sequence
{x^i} we have R(x̄) = 0. It is well known that the latter is equivalent to the
minimum principle necessary optimality condition
⟨∇f(x̄), x - x̄⟩ ≥ 0 for all x ∈ C.
4. Preliminary Numerical Experience
To get some insight into the computational properties of our approach
in Section 2, we considered test problems taken from the study of Sphere
Packing Problems [4]. In particular, the problems chosen are the same
as the ones used in [16]. Not having access to a parallel computer, we
have carried out a simulation, i.e., the subproblems are solved serially
on a serial machine. Even though this is admittedly a rather crude
experiment, it nevertheless gives some idea of what one might expect
in an actual parallel implementation.
Given p spheres, with all center coordinates collected in the vector x, the
problems are as follows.
Problem 1.
min
x (i 1)+t x (j 1)+t
s.t.
(i 1)+t
Problem 2. Given an integer 1,
min
s.t.
Problem 3.
min
s.t.
We have implemented in Matlab our Algorithm 2, the serial SQP
method, and the general PVD Algorithm 1. All codes were run under
Matlab version 6.0.0.88 (Release 12) on a Sun UltraSPARCstation. Details
of the implementation are as follows. Algorithm 1 is implemented
using the function constr.m from the Matlab Optimization Toolbox
for solving the subproblems (P_l). Algorithm 2 and the serial SQP
method are quite sophisticated quasi-Newton implementations with
line-search based on the merit function (2.5). Each (QP_l) is solved by
the Null-Space Method [8]. The penalty parameter σ^i is updated using a
modification of the Mayne and Polak rule [17] that allows σ^i to decrease,
if warranted. The stepsize t^i is computed with an Armijo rule that uses
safeguarded quadratic interpolation. In order to prevent the Maratos
effect (i.e., to ensure that the unit stepsize is asymptotically accepted
if possible), a second-order correction is added to the search direction
when necessary [3, Ch. 13.4]. Finally, the quasi-Newton matrices
are updated by the BFGS formula, using also the Powell correction
and the Oren-Luenberger scaling. The methods stop when the relative
error, measured by the sum of the norm of the reduced gradient and
the norm of the constraints, is less than 10^{-5}.
We first compare our Algorithm 2 with the general PVD Algorithm 1.
Since this comparison turned out rather obvious and one-sided
(which is certainly not surprising), we do not report exhaustive testing
for this part. In Table II we report the number of iterations and the
running times for Algorithms 1 and 2 on Problem 1 (results for the other
problems follow the same trend and do not yield any further insight).
Note that the running times reported are serial, without any regard to
the parallel nature of the algorithms. Because the iterative structure of
the two algorithms is the same, this seems to be a meaningful and fair
comparison. It is already clear from intuition that solving exactly the
general nonlinear subproblems (P_l) in Algorithm 1 is very costly, and
is unlikely to yield a competitive algorithm. This was easily confirmed
by our experience. Of course, one could use heuristic considerations
to (somehow) adjust dynamically the tolerance in solving the subproblems
(in our experiment, all subproblems are solved to within the
Table I. Results for Algorithm 2 and the SQP method on sphere packing problems.
Starting points are generated randomly, with coordinates in [-10, 10]. Times for
solving QPs are measured (in seconds) using the intrinsic Matlab function cputime.
Problem p QP iter calls to QP \speedup" iter calls to
26 28
28 59
Prob.3
Table II. Results for Algorithms 1 and 2 on Problem 1. Starting points are feasible,
generated randomly. Times are measured (in seconds) using the intrinsic Matlab
function cputime.
Algorithm 1 Algorithm 2
Name p iter time iter time
tolerance of 10^{-5}, the same as for the original problem). However, there is no
convergence analysis to support such a strategy within Algorithm 1 in
the constrained case (strictly speaking, there is not even a convergence
proof for Algorithm 1 in the case under consideration, since the feasible
set is nonconvex!). As discussed above, one of the motivations for our
Algorithm 2 is precisely to provide a constructive, implementable way of
solving the subproblems (P_l) inexactly, by solving their quadratic approximations.
The results in Table II confirm that the proposed approach
certainly makes sense.
Our next set of experiments concerns the comparison between
Algorithm 2 and serial SQP. It is not easy to come up with a meaningful
comparison of a serial method and a parallel method on a serial
machine. For example, the overall running time of a "parallel" method
obtained on a serial machine is considered notoriously unreliable for
predicting the time that would be required by its parallel implementation
(i.e., just dividing the time by the number of "processors" is not a
good measure). We therefore focus on indicators which we feel should
still be meaningful for an actual parallel implementation, as discussed
below. To evaluate the gain in computing the search directions, we
report the total time spent solving quadratic programming subproblems
within the SQP method and within the serial implementation of Algorithm 2.
Since no communication between the processors would be needed
within this phase, the "expected" time for solving QPs by the parallel
implementation of Algorithm 2 can indeed be obtained by dividing by the
number of processors. It is somewhat surprising that even for smaller
problems, see Table I, the serial time for solving QPs in Algorithm 2 is
already considerably smaller than in standard SQP, with the difference
becoming drastic for larger problems. If we approximate the "speedup"
efficiency for computing directions in parallel by

(time spent solving QPs in SQP) / (time spent solving QPs in serial Algorithm 2) × 100,

then these efficiencies vary from acceptable (around 80%) to very high
(over 10000%), and grow fast with the size of the problem. This confirms
our motivation as discussed in Section 2, i.e., smaller QPs are significantly
easier to solve. Obviously, the possible price to pay in Algorithm 2
is a "deterioration" of the directions compared to a good implementation
of the full SQP algorithm. This issue is addressed by reporting the
numbers of iterations and calls to the oracle evaluating the function
and derivative values, see Table I. We note that the numbers of iterations
and function/derivative evaluations are usually slightly higher
for Algorithm 2, but not always. Overall, those numbers for the two
methods are quite similar. Given the impressive gain that Algorithm 2
exhibits in solving the QP subproblems, this indicates that a parallel
implementation of this algorithm should indeed be efficient, at least
when computing the function and derivatives is cheap relative to solving
the QPs. In particular, this should be the case for large-scale problems
where the functions and their derivatives are given explicitly.
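The "speedup" figure reported in Table I is a simple ratio; for concreteness (our own helper, with made-up illustrative inputs):

```python
# "Speedup" efficiency for computing directions in parallel:
# 100 * (QP time in serial SQP) / (QP time in serial Algorithm 2),
# as defined in the text (illustrative helper).
def speedup(t_sqp, t_alg2):
    return 100.0 * t_sqp / t_alg2

s = speedup(12.0, 3.0)   # QPs solved four times faster in total
```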
5. Concluding remarks
Two new parallel constrained optimization algorithms based on parallel
variable distribution (PVD) have been presented. The first one consists
of a parallel sequential quadratic programming approach for the
case of block-separable constraints. This is the first PVD-type method
whose convergence has been established for nonconvex feasible sets. The
second proposed algorithm employs approximate projected gradient
directions for the case of general (inseparable) convex constraints. The
use of inexact directions is of particular relevance when the projection
operation is computationally costly.
--R
Nonlinear programming.
Parallel and Distributed Computation.
Sphere Packings
The Linear Complementarity Problem.
Nonlinear Programming.
--TR
Iterative methods for large convex quadratic programs: a survey
Sphere-packings, lattices, and groups
Parallel and distributed computation: numerical methods
Dual coordinate ascent methods for non-strictly convex minimization
Parallel Gradient Distribution in Unconstrained Optimization
New Inexact Parallel Variable Distribution Algorithms
Error bounds in mathematical programming
Two-phase model algorithm with global convergence for nonlinear programming
Testing Parallel Variable Transformation
Parallel Synchronous and Asynchronous Space-Decomposition Algorithms for Large-Scale Minimization Problems
On the Convergence of Constrained Parallel Variable Distribution Algorithms
Parallel Variable Transformation in Unconstrained Optimization | projected gradient;constrained optimization;sequential quadratic programming;parallel optimization;variable distribution |
584552 | Language-Based Caching of Dynamically Generated HTML. | Increasingly, HTML documents are dynamically generated by interactive Web services. To ensure that the client is presented with the newest versions of such documents it is customary to disable client caching causing a seemingly inevitable performance penalty. In the <bigwig> system, dynamic HTML documents are composed of higher-order templates that are plugged together to construct complete documents. We show how to exploit this feature to provide an automatic fine-grained caching of document templates, based on the service source code. A <bigwig> service transmits not the full HTML document but instead a compact JavaScript recipe for a client-side construction of the document based on a static collection of fragments that can be cached by the browser in the usual manner. We compare our approach with related techniques and demonstrate on a number of realistic benchmarks that the size of the transmitted data and the latency may be reduced significantly. | Introduction
One central aspect of the development of the World Wide Web during the last decade
is the increasing use of dynamically generated documents, that is, HTML documents
generated using e.g. CGI, ASP, or PHP by a server at the time of the request from
a client [21, 2]. Originally, hypertext documents on the Web were considered to be
principally static, which has influenced the design of protocols and implementations.
For instance, an important technique for saving bandwidth, time, and clock-cycles is
to cache documents on the client-side. Using the original HTTP protocol, a document
that never or rarely changes can be associated with an "expiration time" telling the browsers
and proxy servers that there should be no need to reload the document from the server
before that time. However, for dynamically generated documents that change on every
request, this feature must be disabled-the expiration time is always set to "now",
voiding the benefits of caching.
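Concretely, disabling caching for a dynamic reply is done with response headers along the following lines (a hypothetical exchange, shown for illustration; the exact headers vary between servers and HTTP versions):

```
HTTP/1.1 200 OK
Content-Type: text/html
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache
Pragma: no-cache
```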
Even though most caching schemes consider all dynamically generated documents
"non-cachable" [19, 3], a few proposals for attacking the problem have emerged [23,
16, 7, 11, 6, 8]. However, as described below, these proposals are typically not applicable
for highly dynamic documents. They are often based on the assumptions that
although a document is dynamically generated, 1) its construction on the server often
does not have side-effects, for instance because the request is essentially a database
lookup operation, 2) it is likely that many clients provide the same arguments for the
request, or 3) the dynamics is limited to e.g. rotating banner ads. We take the next step
by considering complex services where essentially every single document shown to a
client is unique and its construction has side-effects on the server. A typical example of
such a service is a Web-board where current discussion threads are displayed according
to the preferences of each user. What we propose is not a whole new caching scheme
requiring intrusive modifications to the Web architecture, but rather a technique for
exploiting the caches already existing on the client-side in browsers, resembling the
suggestions for future work in [21].
Though caching does not work for whole dynamically constructed HTML documents,
most Web services construct HTML documents using some sort of constant
templates that ideally ought to be cached, as also observed in [8, 20]. In Figure 1,
we show a condensed view of five typical HTML pages generated by different <bigwig>
Web services [4]. Each column depicts the dynamically generated raw HTML
text output produced from interaction with each of our five benchmark Web services.
Each non-space character has been colored either grey or black. The grey sections,
which appear to constitute a significant part, are characters that originate from a large
number of small, constant HTML templates in the source code; the black sections are
dynamically computed strings of character data, specific to the particular interaction.
The lycos example simulates a search engine giving 10 results from the query
"caching dynamic objects"; the bachelor service will, based on a course roster, generate
a list of menus that students use to plan their studies; the jaoo service is part of
a conference administration system and generates a graphical schedule of events; the
webboard service generates a hierarchical list of active discussion threads; and the
dmodlog service generates lists of participants in a course. Apart from the first
simulation, all these examples are sampled from running services and use real data. The
dmodlog example is dominated by string data dynamically retrieved from a database,
as seen in Figure 1, and is thus included as a worst-case scenario for our technique.
For the remaining four, the figure suggests a substantial potential gain from caching
the grey parts.
The main idea of this paper is-automatically, based on the source code of Web
services-to exploit this division into constant and dynamic parts in order to enable
caching of the constant parts and provide an efficient transfer of the dynamic parts
from the server to the client.
Using a technique based on JavaScript for shifting the actual HTML document
construction from the server to the client, our contributions in this paper are:
- an automatic characterization, based on the source code, of document fragments as cachable or dynamic, permitting the standard browser caches to have significant effect even on dynamically generated documents;
- a compact representation of the information sent to the client for constructing the HTML documents; and
- a generalization allowing a whole group of documents, called a document cluster, to be sent to the client in a single interaction and cached efficiently.

Figure 1: Benchmark services: cachable (grey) vs. dynamic (black) parts. (a) lycos, (b) bachelor, (c) jaoo, (d) webboard, (e) dmodlog.
All this is possible and feasible due to the unique approach for dynamically constructing
HTML documents used in the <bigwig> language [17, 4], which we use as a
foundation. Our technique is non-intrusive in the sense that it builds only on preexisting
technologies, such as HTTP and JavaScript-no special browser plug-ins, cache
proxies, or server modules are employed, and no extra effort is required by the service
programmer.
As a result, we obtain a simple and practically useful technique for saving network
bandwidth and reviving the cache mechanism present in all modern Web browsers.
Outline
Section 2 covers relevant related work. In Section 3, we describe the approach
to dynamic generation of Web documents in a high-level language using HTML
templates. Section 4 describes how the actual document construction is shifted from
server-side to client-side. In Section 5, we evaluate our technique by experimenting
with five Web services. Finally, Section 6 contains plans and ideas for
further improvements.
2 Related Work
Caching of dynamic contents has received increasing attention in recent years, since it
has become evident that traditional caching techniques were becoming insufficient. In the
following we present a brief survey of existing techniques that are related to the one
we suggest.
Most existing techniques labeled "dynamic document caching" are either server-
based, e.g. [16, 7, 11, 23], or proxy-based, e.g. [6, 18]. Ours is client-based, as e.g. the
HPP language [8].
The primary goal for server-based caching techniques is not to lower the network
load or end-to-end latency as we aim for, but to relieve the server by memoizing the
generated documents in order to avoid redundant computations. Such techniques are
orthogonal to the one we propose. The server-based techniques work well for services
where many documents have been computed before, while our technique works
well for services where every document is unique. Presumably, many services are
a mixture of the two kinds, so these different approaches might support each other
well-however, we do not examine that claim in this paper.
In [16], the service programmer specifies simple cache invalidation rules instructing
a server caching module that the request of some dynamic document will make
other cached responses stale. The approach in [23] is a variant of this with a more expressive
invalidation rule language, allowing classes of documents to be specified based
on arguments, cookies, client IP address, etc. The technique in [11] instead provides
a complete API for adding and removing documents from the cache. That efficient
but rather low-level approach is in [7] extended with object dependency graphs, representing
data dependencies between dynamic documents and underlying data. This
allows cached documents to be invalidated automatically whenever certain parts of
some database are modified. These graphs also allow fragments of
documents to be represented, as our technique does, but caching is not on the client-
side. A related approach for caching in the Weave Web site specification system is
described in [22].
In [18], a protocol for proxy-based caching is described. It resembles many of
the server-based techniques by exploiting equivalences between requests. A notion of
partial request equivalence allows similar but non-identical documents to be identified,
such that the client can quickly be given an approximate response while the real
response is being generated.
Active Cache [6] is a powerful technique for pushing computation to proxies, away
from the server and closer to the client. Each document can be associated with a cache
applet, a piece of code that can be executed by the proxy. This applet is able to determine
whether the document is stale and if so, how to refresh it. A document can be refreshed
either the traditional way by asking the server or, in the other extreme, completely by
the proxy without involving the server, or by some combination. This allows tailor-made
caching policies to be made, and-compared to the server-side approaches-it
saves network bandwidth. The drawbacks of this approach are: 1) it requires installation
of new proxy servers which can be a serious impediment to wide-spread practical
use, and 2) since there is no general automatic mechanism for characterizing document
fragments as cachable or dynamic, it requires tedious and error-prone programming of
the cache applets whenever non-standard caching policies are desired.
Common to the techniques from the literature mentioned above is that truly dynamic
documents, whose construction on the server often has side-effects and which are
essentially always unique (but contain common constant fragments), either cannot be
cached at all or require a costly extra effort by the programmer for explicitly programming
the cache. Furthermore, the techniques either are inherently server-based, and
hence do not decrease network load, or require installation of proxy servers.
Delta encoding [14] is based on the observation that most dynamically constructed
documents have many fragments in common with earlier versions. Instead of transferring
the complete document, a delta is computed representing the changes compared
to some common base. Using a cache proxy, the full document is regenerated near the
client. Compared to Active Cache, this approach is automatic. A drawback is-in addition
to requiring specialized proxies-that it necessitates protocols for management
of past versions. Such intrusions can obviously limit widespread use. Furthermore,
it does not help with repetitions within a single document. Such repetitions occur
naturally when dynamically generating lists and tables whose sizes are not statically
known, which is common to many Web services that produce HTML from the contents
of a database. Repetitions may involve both dynamic data from the database and static
markup of the lists and tables.
The HPP language [8] is closely related to our approach. Both are based on the
observation that dynamically constructed documents usually contain common constant
fragments. HPP is an HTML extension which allows an explicit separation between
static and dynamic parts of a dynamically generated document. The static parts of a
document are collected in a template file while the dynamic parameters are in a separate
binding file. The template file can contain simple instructions, akin to embedded
scripting languages such as ASP, PHP, or JSP, specifying how to assemble the complete
document. According to [8], this assembly and the caching of the templates can
be done either using cache proxies or in the browser with Java applets or plug-ins, but
it should be possible to use JavaScript instead, as we do.
An essential difference between HPP and our approach is that the HPP solution
is not integrated with the programming language used to make the Web service. With
some work it should be possible to combine HPP with popular embedded scripting
languages, but the effort of explicitly programming the document construction remains.
Our approach is based on the source language, meaning that all caching specifications
are automatically extracted from the Web service source code by the compiler and the
programmer is not required to be aware of caching aspects. Regarding cachability,
HPP has the advantage that the instructions describing the structure of the resulting
document are located in the template file which is cached, while in our solution the
equivalent information is in the dynamic file. However, in HPP the constant fragments
constituting a document are collected in a single template. This means that HTML
fragments that are common to different document templates cannot be reused by the
cache. Our solution is more fine-grained since it caches the individual fragments
separately. Also, HPP templates are highly specialized and hence more difficult to modify
and reuse for the programmer. Being fully automatic, our approach guarantees cache
soundness. Analogously to optimizing compilers, we claim that the <bigwig> compiler
generates caching code that is competitive with what a human HPP programmer
could achieve. This claim is substantiated by the experiments in Section 5. Moreover,
we claim that <bigwig> provides a more flexible, safe, and hence easier-to-use
template mechanism than does HPP or any other embedded scripting language. The
<bigwig> notion of higher-order templates is summarized in Section 3.

Figure 2: The plug operator.

A thorough
comparison between various mechanisms supporting document templates can be found
in [4].
As mentioned, we use compact JavaScript code to combine the cached and the dynamic
fragments on the client-side. Alternatively, similar effects could be obtained
using browser plug-ins or proxies, but implementation and installation would become
more difficult. The HTTP 1.1 protocol [9] introduces automatic compression using
general-purpose algorithms such as gzip, byte-range requests, and advanced
cache-control directives. The compression features are essentially orthogonal to what
we propose, as shown in Section 5. The byte-range and caching directives provide features
reminiscent of our JavaScript code, but it would require special proxy servers
or browser extensions to apply them to caching of dynamically constructed
documents. Finally, we could have chosen Java instead of JavaScript, but JavaScript is
more lightweight and is sufficient for our purposes.
3 Dynamic Documents in <bigwig>
The part of the <bigwig> Web service programming language that deals with dynamic
construction of HTML documents is called DynDoc [17]. It is based on a notion
of templates which are HTML fragments that may contain gaps. These gaps can at
runtime be filled with other templates or text strings, yielding a highly flexible
mechanism. A <bigwig> service consists of a number of sessions, which are essentially entry
points with a sequential action that may be invoked by a client. When invoked, a
session thread with its own local state is started for controlling the interactions with
the client. Two built-in operations, plug and show, form the core of DynDoc. The plug
operation is used for building documents. As illustrated in Figure 2, this operator takes
two templates, x and y, and a gap name g and returns a copy of x where a copy of y
has been inserted into every g gap. A template without gaps is considered a complete
document. The show operation is used for interacting with the client, transmitting a
given document to the client's browser. Execution of the client's session thread is
suspended on the server until the client submits a reply. If the document contains input
fields, the show statement must have a receive part for receiving the field values
into program variables.
As in Mawl [12, 1], the use of templates permits programmer and designer tasks to
be completely separated. However, our templates are first-class values in that they can
be passed around and stored in variables as any other data type. Also they are higher-order
in that templates can be plugged into templates. In contrast, Mawl templates cannot
be stored in variables and only strings can be inserted into gaps. The higher-order
nature of our mechanism makes it more flexible and expressive without compromising
runtime safety because of two compile-time program analyses: a gap-and-field analysis
[17] and an HTML validation analysis [5]. The former analysis guarantees that at
every plug, the designated gap is actually present at runtime in the given template and
at every show, there is always a valid correspondence between the input fields in the
document being shown and the values being received. The latter analysis will guarantee
that every document being shown is valid according to the HTML specification.
The following variant of a well-known example illustrates the DynDoc concepts:
service {
  html ask = <html>What is your name? <input name="what"></html>;
  html hello = <html>Hello <[thing]>!</html>;
  session HelloWorld() {
    string s;
    show ask receive [s=what];
    show hello<[thing=s];
  }
}
Two HTML variables, ask and hello, are initialized with constant HTML templates,
and a session HelloWorld is declared. The entities <html> and </html> are
merely lexical delimiters and are not part of the actual templates. When invoked, the
session first shows the ask template as a complete document to the client. All documents
are implicitly wrapped into an element and a form with a default
"continue" button before being shown. The client fills out the what input field and
submits a reply. The session resumes execution by storing the field value in the s
variable. It then plugs that value into the thing gap of the hello template and sends the
resulting document to the client. The following more elaborate example will be used
throughout the remainder of the paper:
service {
  html cover = <html><head><title>Welcome</title></head>
               <body bgcolor=[color]><[contents]></body></html>;
  html greeting = <html>Hello <[who]>, welcome to <[what]>.</html>;
  html person = <html>(anonymous)</html>;
  html brics = <html>BRICS</html>;
  session welcome() {
    html h;
    h = cover<[color="#9966ff",
               contents=greeting<[who=person]];
    show h<[what=brics];
  }
}
It builds a "welcome to BRICS" document by plugging together four constant templates
and a single text string, shows it to the client, and terminates. The higher-order
template mechanism does not require documents to be assembled bottom-up: gaps may
occur non-locally as for instance the what gap in h in the show statement that comes
from the greeting template being plugged into the cover template in the preceding
statement. Its existence is statically guaranteed by the gap-and-field analysis.
We will now illustrate how our higher-order templates are more expressive and provide
better cachability compared to first-order template mechanisms.

Figure 3: webboard.

First note that
ASP, PHP, and JSP also fit the first-order category as they
conceptually correspond to having one single first-order
template whose special code fragments are evaluated on
the server and implicitly plugged into the template. Consider
now the unbounded hierarchical list of messages in
a typical Web bulletin board. This is easily expressed recursively
using a small collection of DynDoc templates.
However, it can never be captured by any first-order solution
without casting from templates to strings and hence
losing type safety. Of course, if one is willing to fix the
length of the list explicitly in the template at compile-time,
it can be expressed, but not with unbounded lengths. In
either case, sharing of repetitions in the HTML output is
sacrificed, substantially cutting down the potential benefits
of caching. Figure 3 shows the webboard benchmark
as it would appear if it had been generated entirely using
first-order templates: only the outermost template remains
and the message list is produced by one big dynamic area.
Thus, nearly everything is dynamic (black) compared to the higher-order version displayed
in Figure 1(d).
Languages without a template mechanism, such as Perl and C, that simply generate
documents using low-level print-like commands, generally provide too little structure
in the output to be exploited for caching purposes.
All in all, with the plug-and-show mechanism in <bigwig> we have successfully
transferred many of the advantages known from static documents to a dynamic context.
The next step, of course, is caching.

Figure 4: DynDocDag representation constituents. (a) Leaf: greeting; (b) Node: strplug(d,g,s); (c) Node: plug(d1,g,d2).
3.1 Dynamic Document Representation
Dynamic documents in <bigwig> are at runtime represented by the DynDocDag data
structure supporting four operations: constructing constant templates, constant(c);
string plugging, strplug(d,g,s); template plugging, plug(d1,g,d2); and showing
documents, show(d). This data structure represents a dynamic document as a
binary DAG (Directed Acyclic Graph), where the leaves are either HTML templates
or strings that have been plugged into the document and where the nodes represent
pluggings that have constructed the document.
A constant template is represented as an ordered sequence of its text and gap
constituents. For instance, the greeting template from the BRICS example service is
represented as displayed in Figure 4(a) as a sequence containing two gap entries, who
and what, and three text entries for the text around and between the gaps. A constant
template is represented only once in memory and is shared among the documents it has
been plugged into, causing the data structure to be a DAG in general and not a tree.
The string plug operation, strplug, combines a DAG and a constant string by
adding a new string plug root node with the name of the gap, as illustrated in Figure
4(b). Analogously, the plug operation combines two DAGs as shown in Figure
4(c). For both operations, the left branch is the document containing the gap being
plugged and the right branch is the value being plugged into the gap. Thus, the data
structure merely records plug operations and defers the actual document construction
to subsequent show operations.
Conceptually, the show operation comprises two phases: a gap linking phase
that will insert a stack of links from gaps to templates and a print traversal phase that
performs the actual printing by traversing all the gap links. The need for stacks comes
from the template sharing.
The strplug(d,g,s), plug(d1,g,d2), and show(d) operations have optimal
complexities, O(1), O(1), and O(|d|), respectively, where |d| is the lexical size
of the d document.
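To make the data structure concrete, the four operations can be sketched in JavaScript as follows. The node layout, field names, and the "Stranger" example value are our own illustration, not the actual <bigwig> runtime; in particular, show pops a gap's topmost binding while expanding it, so that later pluggings apply inside the plugged value, mimicking the stack of gap links described above.

```javascript
// Sketch of the DynDocDag operations; the representation is illustrative only.
function constant(parts) { return { kind: "const", parts }; } // parts: strings or {gap: name}
function strplug(d, g, s) { return { kind: "plug", gap: g, left: d, right: { kind: "str", s } }; }
function plug(d1, g, d2)  { return { kind: "plug", gap: g, left: d1, right: d2 }; }

function show(d) {
  const stacks = {};                       // gap name -> stack of plugged values
  function expand(n) {
    if (n.kind === "str") return n.s;
    if (n.kind === "plug") {               // record the plug, expand the left branch
      (stacks[n.gap] = stacks[n.gap] || []).push(n.right);
      const out = expand(n.left);
      stacks[n.gap].pop();
      return out;
    }
    return n.parts.map(p => {              // constant template: emit text, fill gaps
      if (typeof p === "string") return p;
      const st = stacks[p.gap] || [];
      if (st.length === 0) return "";      // unfilled gap
      const fill = st.pop();               // later plugs apply inside the fill
      const out = expand(fill);
      st.push(fill);
      return out;
    }).join("");
  }
  return expand(d);
}

// The greeting document from the running example:
const greeting = constant(["Hello ", { gap: "who" }, ", welcome to ", { gap: "what" }, "."]);
const doc = strplug(plug(greeting, "who", constant(["Stranger"])), "what", "BRICS");
show(doc);   // "Hello Stranger, welcome to BRICS."
```

Because constant templates are shared rather than copied, plugging the same template node twice yields a DAG whose expansion repeats the template text.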
Figure 5: DynDocDag representation of the document shown in the BRICS example service (leaves: the templates cover, greeting, person, and brics, and the string "#9966ff"; gaps: color, contents, who, and what).

Figure 5 shows the representation of the document shown in the BRICS example
service. In this simple example, the DAG is a tree since each constant template is used
only once. Note that for some documents, the representation is exponentially more
succinct than the expanded document. This is for instance the case with the following
recursive function:
html list = <html>(<[gap]>,<[gap]>)</html>;

html tree(int n) {
  if (n==0) return <html>foo</html>;
  return list<[gap=tree(n-1)];
}
which, given n, in O(n) time and space will produce a document of lexical size O(2^n).
This shows that regarding network load, it can be highly beneficial to transmit the DAG
across the network instead of the resulting document, even if ignoring cache aspects.
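The claimed blow-up is easy to check numerically. Assuming, for illustration, a 3-character leaf and a list template that surrounds its two gap occurrences with 3 characters of fixed text, the DAG grows by one node per level while the expanded document doubles:

```javascript
// DAG size vs. expanded size for the recursive tree(n) construction.
let nodes = 1;   // the shared leaf template "foo"
let len = 3;     // its lexical size
for (let n = 1; n <= 20; n++) {
  nodes += 1;          // each level adds a single plug node to the DAG
  len = 2 * len + 3;   // the gap occurs twice, plus 3 chars of fixed text
}
// After 20 levels: 21 DAG nodes, but an expansion of 6,291,453 characters.
```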
4 Caching
In this section we will show how to cache reoccurring parts of dynamically generated
HTML documents and how to store the documents in a compact representation. The
first step in this direction is to move the unfolding of the DynDocDag data structure
from the server to the client. Instead of transmitting the unfolded HTML document, the
server will now transmit a DynDocDag representation of the document in JavaScript
along with a link to a file containing some generic JavaScript code that will interpret
the representation and unfold the document on the client. Caching is then obtained by
placing the constant templates in separate files that can be cached by the browser as
any other files.
Figure 6: Separation into cachable and dynamic parts: (a) the dynamic document structure reply file, containing the document structure, the string pool, and references to the template files d1_2.js, d2_3.js, d3_3.js, and d4_1.js; (b) the cachable template files.
As we shall see in Section 5, both the caching and the compact representation
substantially reduce the number of bytes transmitted from the server to the client. The
compromise is of course the use of client clock cycles for the unfolding, but in a context
of fast client machines and comparatively slow networks this is a sensible tradeoff. As
explained earlier, the client-side unfolding is not a computationally expensive task, so
the clients should not be too strained from this extra work, even with an interpreted
language like JavaScript.
One drawback of our approach is that extra TCP connections are required for downloading
the template files the first time, unless using the "keep connection alive" feature
in HTTP 1.1. However, this is no worse than downloading a document with many
images. Our experiments show that the number of transmissions per interaction is limited,
so this does not appear to be a practical problem.
4.1 Caching
The DynDocDag representation has a useful property: it explicitly maintains a separation
of the constant templates occurring in a document, the strings that are plugged
into the document, and the structure describing how to assemble the document. In Figure
5, these constituents are depicted as framed rectangles, oval rectangles, and circles,
respectively.
Experiments suggest that templates tend to occur again and again in documents
shown to a client across the lifetime of a service, either because 1) they occur
many times in the same document, 2) in many different documents, or 3) simply in
documents that are shown many times. The strings and the structure parts, however,
are typically dynamically generated and thus change with each document.
The templates account for a large portion of the expanded documents. This is substantiated
by Figure 1, as earlier explained. Consequently, it would be useful to somehow
cache the templates in the browser and to transmit only the dynamic parts, namely
the strings and the structure at each show statement. This separation of cachable and
dynamic parts is for the BRICS example illustrated in Figure 6.
As already mentioned, the solution is to place each template in its own file and
include a link to it in the document sent to the client. This way, the caching mechanism
in the browser will ensure that templates already seen are not retransmitted.
The first time a service shows a document to a client, the browser will obviously
not have cached any of the JavaScript template files, but as more and more documents
are shown, the client will download fewer and fewer of these files. With enough
interactions, the client reaches a point of asymptotic caching where all constant templates
have been cached and thus only the dynamic parts are downloaded.
Since the templates are statically known at compile-time, the compiler enumerates
the templates and for each of them generates a file containing the corresponding
JavaScript code. By postfixing template numbers with version numbers, caching can
be enabled across recompilations where only some templates have been modified.
In contrast to HPP, our approach is entirely automatic. The distinction between
static and dynamic parts and the DynDocDag structure are identified by the compiler,
so the programmer gets the benefits of client-side caching without tedious
and error-prone manual programming of bindings describing the dynamics.
4.2 Compact Representation
In the following we show how to encode the cachable template files and the reply documents
containing the document representation. Since the reply documents are transmitted
at each show statement, their sizes should be small. Decompression has to be
conducted by JavaScript interpreted in browsers, so we do not apply general purpose
compression techniques. Instead we exploit the inherent structure of the reply documents
to obtain a lightweight solution: a simple yet compact JavaScript representation
of the string and structure parts that can be encoded and decoded efficiently.
Constant Templates
A constant template is placed in its own file for caching and is encoded as a call to a
JavaScript constructor function, F, that takes the number and version of the template
followed by an array of text and gap constituents respectively constructed via calls to
the JavaScript constructor functions T and G. For instance, the greeting template
from the BRICS example gets encoded as follows:
F(T('Hello '),G(3),T(', welcome to '),G(4),T('.'));
Assuming this is version 3 of template number 2, it is placed in a file called d2 3.js.
The gap identifiers who and what have been replaced by the numbers 3 and 4,
respectively, abstracting away the identifier names. Note that such a file need only ever
be downloaded once by a given client, and it can be reused every time this template occurs
in a document.
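To illustrate the decoding side, here is a sketch of the three constructor functions as they might be implemented. The registration scheme (a global map keyed by a current variable set before each template file is evaluated) and the expand helper are our own assumptions; the paper does not show the library internals.

```javascript
// Illustrative sketch of the constructor functions used in cached template files.
const templates = {};          // "number_version" -> array of constituents
let current = null;            // set before evaluating a template file

function T(text) { return { text }; }             // text constituent
function G(gap)  { return { gap };  }             // numbered gap constituent
function F(...parts) { templates[current] = parts; }

// Simulating the browser loading and evaluating d2_3.js:
current = "2_3";
F(T("Hello "), G(3), T(", welcome to "), G(4), T("."));

// Expanding a cached template once the dynamic parts are known:
function expand(key, gaps) {
  return templates[key]
    .map(p => "text" in p ? p.text : gaps[p.gap])
    .join("");
}
expand("2_3", { 3: "Stranger", 4: "BRICS" });   // "Hello Stranger, welcome to BRICS."
```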
Dynamics
The JavaScript reply files transmitted at each show contain three document specific
parts: include directives for loading the cachable JavaScript template files, the dynamic
structure showing how to assemble the document, and a string pool containing the
strings used in the document.
The structure part of the representation is encoded as a JavaScript string constant,
by a uuencode-like scheme which is tuned to the kinds of DAGs that occur in the
observed benchmarks.
Empirical analyses have exposed three interesting characteristics of the strings used
in a document: 1) they are all relatively short, 2) some occur many times, and 3)
many seem to be URLs and have common prefixes. Since the strings are quite short,
placing them in individual files to be cached would drown in transmission overhead.
For reasons of security, we do not want to bundle up all the strings in cachable string
pool files. This along with the multiple occurrences suggests that we collect the strings
from a given document in a string pool which is inlined in the reply file sent to the
client. String occurrences within the document are thus designated by their offsets into
this pool. Finally, the common prefix sharing suggests that we collect all strings in a
trie which precisely yields sharing of common prefixes. As an example, the following
four strings:
"foo",
"http://www.brics.dk/bigwig/",
"http://www.brics.dk/bigwig/misc/gifs/bg.gif",
"http://www.brics.dk/bigwig/misc/gifs/bigwig.gif"
are linearized and represented as follows:
"foo|http://www.brics.dk/bigwig/[misc/gifs/b(igwig.gif|g.gif)]"
When applying the trie encoding to the string data of the benchmarks, we observe
reductions ranging from 1780 bytes down to 1212 bytes (on bachelor) and from
27728 bytes down to 10421 bytes (on dmodlog).
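The linearized format can be decoded with a small recursive-descent parser. The grammar below is inferred from the single example above ('|' separates siblings, a prefix followed by '[...]' is emitted both alone and joined with each inner string, and a prefix followed by '(...)' is only joined), so it should be read as an illustration rather than the actual <bigwig> encoding:

```javascript
// Decode a linearized string-pool trie (format inferred from the example).
function decodePool(s) {
  let i = 0;
  function parseList(closer) {
    const out = [];
    while (i < s.length && !closer.includes(s[i])) {
      let text = "";
      while (i < s.length && !"|[]()".includes(s[i])) text += s[i++];
      if (s[i] === "[" || s[i] === "(") {
        const emitPrefix = s[i] === "[";            // '[...]' also emits the prefix alone
        const close = s[i] === "[" ? "]" : ")";
        i++;                                        // skip opener
        const inner = parseList(close);
        i++;                                        // skip closer
        if (emitPrefix) out.push(text);
        for (const t of inner) out.push(text + t);  // join prefix with each inner string
      } else {
        out.push(text);
      }
      if (s[i] === "|") i++;                        // skip sibling separator
    }
    return out;
  }
  return parseList("");
}

const pool = decodePool(
  "foo|http://www.brics.dk/bigwig/[misc/gifs/b(igwig.gif|g.gif)]"
);
// pool now holds the four original strings
```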
The reply document transmitted to the client at the show statement in the BRICS
example looks like:
<script src="http://www.brics.dk/bigwig/dyndoc.js"></script>
<script>
I(...);
</script>
<script>
S(...);
D(...);
</script>
<body onload="E();">
The document starts by including a generic 15K JavaScript library, dyndoc.js, for
unfolding the DynDocDag representation. This file is shared among all services and is
thus only ever downloaded once by each client as it is cached after the first service in-
teraction. For this reason, we have not put effort into writing it compactly. The include
directives are encoded as calls to the function I whose argument is an array designating
the template files that are to be included in the document along with their version
numbers. The S constructor function reconstructs the string trie which in our example
contains the only string plugged into the document, namely "#9966ff". As expected,
the document structure part, which is reconstructed by the D constructor function, is not
humanly readable as it uses the extended ASCII set to encode the dynamic structure.
The last three arguments to D record how many bytes are used in the encoding of a
node, the number of templates plus plug nodes, and the number of gaps, respectively.
The last line of the document calls the JavaScript function E that will interpret all constituents
to expand the document. After this, the document has been fully replaced by
the expansion. Note that three script sections are required to ensure that processing
occurs in distinct phases and dependencies are resolved correctly. Viewing the HTML
source in the browser will display the resulting HTML document, not our encodings.
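The actual unfolding is performed by the JavaScript I, S, and D constructors described above; as a language-neutral sketch of the underlying idea (cached templates with named gaps, plus a small per-request plug structure), one might write, with invented template and gap names:

```python
# Hypothetical sketch of template-with-gaps expansion (not the paper's
# actual DynDocDag encoding): templates are cached constants, and each
# reply carries only a plug structure mapping gaps to strings or
# (template, sub-plugs) pairs for nested documents.

TEMPLATES = {  # cached once per client, like the paper's template files
    "page": "<body bgcolor='<[color]>'><[contents]></body>",
    "greet": "Hello, <[who]>!",
}

def expand(template_name, plugs):
    """Recursively replace each gap <[g]> by its plugged value."""
    doc = TEMPLATES[template_name]
    for gap, value in plugs.items():
        if isinstance(value, tuple):      # (template, sub-plugs): a sub-document
            value = expand(value[0], value[1])
        doc = doc.replace("<[%s]>" % gap, value)
    return doc

# Per-request dynamic data: only strings and structure, no template text.
reply = {"color": "#9966ff", "contents": ("greet", {"who": "world"})}
print(expand("page", reply))  # <body bgcolor='#9966ff'>Hello, world!</body>
```

Only the `reply` structure changes between requests; the `TEMPLATES` table plays the role of the cached template files.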
Our compact representation makes no attempts at actual compression such as gzip
or XML compression [13], but is highly efficient to encode on the server and to decode
in JavaScript on the client. Compression is essentially orthogonal in the sense that
our representation works independently of whether or not the transmission protocol compresses the documents sent across the network, as shown in Section 5. However, the
benefit factor of our scheme is of course reduced when compression is added.
4.3 Clustering
In <bigwig>, the show operation is not restricted to transmit a single document.
It can be a collection of interconnected documents, called a cluster. For instance, a
document with input fields can be combined in a cluster with a separate document with
help information about the fields.
A hypertext reference to another document in the same cluster may be created using
the notation &x to refer to the document held in the HTML variable x at the time the
cluster is shown. When showing a document containing such references, the client
can browse through the individual documents without involving the service code. The
control-flow in the service code becomes clearer, since the interconnections can be
set up as if the cluster were a single document and the references were internal links
within it.
The following example shows how to set up a cluster of two documents, input
and help, that are cyclically connected with input being the main document:
service {
  html input = <html>
    Please enter your name: <input name="name">
    Click <a href=[help]>here for help.</a>
  </html>;
  html help = <html>
    You can enter your given name, family name, or nickname.
    <a href=[back]>Back to the form.</a>
  </html>;
  session cluster_example() {
    html h;
    string s;
    show input receive [s=name];
    show output<[name=s];
  }
}
The cluster mechanism gives us a unique opportunity for further reducing network
traffic. We can encode the entire cluster as a single JavaScript document, containing
all the documents of the cluster along with their interconnections. Wherever there is
a document reference in the original cluster, we generate JavaScript code to overwrite
the current document in the browser with the referenced document of the cluster. Of
course, we also need to add some code to save and restore entered form data when the
client leaves and re-enters pages with forms. In this way, everything takes place in the
client's browser and the server is not involved until the client leaves the cluster.
5 Experiments
Figure 7 recounts the experiments we have performed. We have applied our caching
technique to the five Web service benchmarks mentioned in the introduction.
In Figure 7(b) we show the sizes of the data transmitted to the client. The grey
columns show the original document sizes, ranging between 20 and 90 KB. The white
columns show the sizes of the total data that is transmitted using our technique, none
of which exceeds 20 KB. Of ultimate interest is the black column which shows the
asymptotic sizes of the transmitted data, when the templates have been cached by the
client. In this case, we see reductions of factors between 4 and 37 compared to the
original document size.
The lycos benchmark is similar to one presented for HPP [8], except that our
reconstruction is of course in <bigwig>. It is seen that the size of our residual dynamic
data (from 20,183 to 3,344 bytes) is virtually identical to that obtained by HPP
(from 18,000 to 3,250 bytes). However, in that solution all caching aspects are hand-coded
with the benefit of human insight, while ours is automatically generated by the
compiler. The other four benchmarks would be more challenging for HPP.
In Figure 7(c) we repeat the comparisons from Figure 7(b) but under the assumption
that the data is transmitted compressed using gzip. Of course, this drastically
reduces the benefits of our caching technique. However, we still see asymptotic reduction
factors between 1.3 and 2.9 suggesting that our approach remains worthwhile even
in these circumstances. Clearly, there are documents for which the asymptotic reduction
factors will be arbitrarily large, since large constant text fragments count for zero
on our side of the scales while gzip can only compress them to a certain size. Hence
we feel justified in claiming that compression is orthogonal to our approach. When the
HTTP protocol supports compression, we represent the string pool in a naive fashion
rather than as a trie, since gzip does a better job on plain string data. Note that in
some cases our uncompressed residual dynamic data is smaller than the compressed
version of the original document.
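The claimed orthogonality of compression can be checked with a rough sketch; the template and dynamic strings below are synthetic stand-ins, not the benchmark documents:

```python
import gzip

# Synthetic stand-ins for a large constant template and a small dynamic
# part; the actual benchmark documents are not reproduced here.
template = "<tr><td>item</td><td>some description text here</td></tr>\n" * 400
dynamic = "user=alice;last=2001-05-17;msgs=12"

gz_full = len(gzip.compress((template + dynamic).encode()))
gz_dynamic = len(gzip.compress(dynamic.encode()))

# gzip shrinks the repetitive full document a lot, but shipping only the
# dynamic residue still wins once the template is cached on the client.
print(gz_dynamic < gz_full)  # True
```

This mirrors the asymptotic argument above: a cached template costs (close to) zero bytes, while gzip can only compress it down to some positive size.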
In Figure 7(d) and 7(e) we quantify the end-to-end latency for our technique. The
total download and rendering times for the five services are shown for both the standard
documents and our cached versions. The client is Internet Explorer 5 running on
an 800 MHz Pentium III Windows PC connected to the server via either a 28.8K modem
or a 128K ISDN modem. These are still realistic configurations, since by August
2000 the vast majority of Internet subscribers used dial-up connections [10] and this
situation will not change significantly within the next couple of years [15]. The times
are averaged over several downloads (plus renderings) with browser caching disabled.
As expected, this yields dramatic reduction factors between 2.1 and 9.7 for the 28.8K
modem. For the 128K ISDN modem, these factors reduce to 1.4 and 3.9. Even our
"worst-case example", dmodlog, benefits in this setup. For higher bandwidth dimen-
sions, the results will of course be less impressive.
In Figure 7(f) we focus on the pure rendering times, which are obtained by averaging
several document accesses (plus renderings) following an initial download, caching
it on the browser. For the first three benchmarks, our times are in fact a bit faster than
for the original HTML documents. Thus, generating a large document is sometimes
faster than reading it from the memory cache. For the last two benchmarks, they are
somewhat slower. These figures are of course highly dependent on the quality of the
JavaScript interpreter that is available in the browser. Compared to the download latencies, the rendering times are negligible. This is why we have not visualized them in Figure 7(d) and 7(e).
6 Future Work
In the following, we describe a few ideas for further cutting down the number of bytes
and files transmitted between the server and the client.
In many services, certain templates often occur together in all show statements.
Such templates could be grouped in the same file for caching, thereby lowering the
transmission overhead. In , the HTML validation analysis [5] already approximates
a graph from which we can readily derive the set of templates that can
reach a given show statement. These sets could then be analyzed for tightly connected
templates using various heuristics. However, there are certain security concerns that
need to be taken into consideration. It might not be a good idea to indirectly disclose a
template in a cache bundle if the show statement does not directly include it.
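One possible heuristic for finding tightly connected templates, sketched here with invented template sets (this is not the analysis of [5], only an illustration of the grouping idea):

```python
from itertools import combinations
from collections import Counter

# Hypothetical per-show template sets, e.g. derived from a reachability
# analysis such as the one mentioned above.
shows = [
    {"header", "menu", "news"},
    {"header", "menu", "form"},
    {"header", "menu", "form", "errors"},
]

pair_counts = Counter()
seen = Counter()
for s in shows:
    seen.update(s)
    pair_counts.update(combinations(sorted(s), 2))

# Bundle two templates only if they co-occur in every show statement in
# which either of them appears, so no template is disclosed indirectly.
bundles = [p for p, c in pair_counts.items()
           if c == seen[p[0]] and c == seen[p[1]]]
print(bundles)  # [('header', 'menu')]
```

The security concern above corresponds to the strict co-occurrence condition: a pair that appears together only sometimes is never bundled.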
Finally, it is possible to also introduce language-based server-side caching which
is complementary to the client-side caching presented here. The idea is to exploit the
structure of <bigwig> programs to automatically cache and invalidate the documents
being generated. This resembles the server-side caching techniques mentioned in Sec-
Figure 7: Experiments with the template representation. [Chart data not reproduced; over the benchmarks lycos, bachelor, jaoo, webboard, and dmodlog the panels show: (b) transmitted data sizes in KB for the original and cached representations, (c) the corresponding gzip-compressed sizes, (d) download plus rendering times in seconds on a 28.8K modem, (e) the same on a 128K ISDN modem, and (f) pure rendering times in msec.]
7 Conclusion
We have presented a technique to revive the existing client-side caching mechanisms in
the context of dynamically generated Web pages. With our approach, the programmer
need not be aware of caching issues since the decomposition of pages into cachable
and dynamic parts is performed automatically by the compiler. The resulting caching
policy is guaranteed to be sound, and experiments show that it results in significantly
smaller transmissions and reduced latency. Our technique requires no extensions to
existing protocols, clients, servers, or proxies. We only exploit that the browser can
interpret JavaScript code. These results lend further support to the unique design of
dynamic documents in <bigwig>.
--R
Mawl: a domain-specific language for form-based services
Changes in web client access patterns: Characteristics and caching implications.
World Wide Web caching: Trends and techniques
The <bigwig> project.
Static validation of dynamically generated HTML.
Active cache: Caching dynamic contents on the Web.
A scalable system for consistently caching dynamic web data.
HPP: HTML macro-preprocessing to support dynamic document caching
Improving web server performance by caching dynamic data.
Programming the web: An application-oriented language for hypermedia services
XMill: an efficient compressor for XML data.
Balachander Krishnamurthy
Designing Web Usability: The Practice of Simplicity.
A simple and effective caching scheme for dynamic content.
A type system for dynamic Web documents.
Exploiting result equivalence in caching dynamic web content.
A survey of web caching schemes for the Internet.
Studying the impact of more complete server information on web caching.
Characterizing web workloads to improve performance
Caching strategies for data-intensive web sites
--TR
Potential benefits of delta encoding and data compression for HTTP
A type system for dynamic Web documents
XMill
Static validation of dynamically generated HTML
A survey of web caching schemes for the Internet
The <bigwig> project
Designing Web Usability
Changes in Web client access patterns
Caching Strategies for Data-Intensive Web Sites
--CTR
Peter Thiemann, XML templates and caching in WASH, Proceedings of the ACM SIGPLAN workshop on Haskell, p.19-26, August 28-28, 2003, Uppsala, Sweden
Chun Yuan , Zhigang Hua , Zheng Zhang, PROXY+: simple proxy augmentation for dynamic content processing, Web content caching and distribution: proceedings of the 8th international workshop, Kluwer Academic Publishers, Norwell, MA, 2004
Chi-Hung Chi , HongGuang Wang, A generalized model for characterizing content modification dynamics of web objects, Web content caching and distribution: proceedings of the 8th international workshop, Kluwer Academic Publishers, Norwell, MA, 2004
Michael Rabinovich , Zhen Xiao , Fred Douglis , Chuck Kalmanek, Moving edge-side includes to the real edge: the clients, Proceedings of the 4th conference on USENIX Symposium on Internet Technologies and Systems, p.12-12, March 26-28, 2003, Seattle, WA
Anindya Datta , Kaushik Dutta , Helen Thomas , Debra VanderMeer , Krithi Ramamritham, Accelerating Dynamic Web Content Generation, IEEE Internet Computing, v.6 n.5, p.27-36, September 2002 | HTML;caching;web services |
584575 | A multi-temperature multiphase flow model. | In this paper we formulate a multiphase model with nonequilibrated temperatures but with equal velocities and pressures for each species. Turbulent mixing is driven by diffusion in these equations. The closure equations are defined in part by reference to a more exact chunk mix model developed by the authors and coworkers which has separate pressures, temperatures, and velocities for each species. There are two main results in this paper. The first is to identify a thermodynamic constraint, in the form of a process dependence, for pressure equilibrated models. The second is to determine one of the diffusion coefficients needed for the closure of the equilibrated pressure multiphase flow equations, in the incompressible case. The diffusion coefficients depend on entrainment times derived from the chunk mix model. These entrainment times are determined here first via general formulas and then explicitly for Rayleigh-Taylor and Richtmyer-Meshkov large time asymptotic flows. We also determine volume fractions for these flows, using the chunk mix model. | Introduction
Requirements of thermodynamic and mathematical consistency impose limits
on possible multiphase flow models. The length scales on which the mixing
occurs impose further restrictions, through limits on the range of physical
validity of these models. The models are distinguished by the dependent variables
allowed in the description of each fluid species. The thermodynamic,
mathematical and physical restrictions act to limit the freedom in the choices
of these dependent variables, or the flow regimes to which the models apply.
Two models for compressible mixing which meet the standards of thermodynamic
and mathematical consistency are (a) chunk mix and (b) molecular
mix. In the former, each fluid has distinct velocities and a full thermodynamic
description. In the latter, all fluid and thermodynamic variables are
shared, with a common value for all species. Only a volume or mass fraction
gives individual identity to the species.
We are mainly interested in models which describe fluid mixing layers. A
layer, by definition, has a small longitudinal dimension relative to its transverse
dimensions, and is thus quasi two dimensional in the large. For this
reason, there is an inverse cascade of enstrophy, with growth of large scale
structures. This growth is dominated by the merger of modes to form larger
modes. The number of mode doublings in a typical experiment [14, 17, 6]
is difficult to quantify precisely since the initial disturbance is not well char-
acterized, but a range of 3 to 5 doublings might be expected. Over this
range, i.e., within the experiments, a steady increase in structure size, and
adherence to resulting scaling growth laws for the mixing zone thickness are
observed. To the extent that scaling laws and the regular doubling of structure
size persist, the chunk model appears to be an accurate description of
the mixing process.
Scaling laws and large scale structures go hand in hand. Among the events
which could disrupt these structures, we mention turbulence and shock waves.
If the large structures shatter, so that the structure size is fine scale and
somewhat uniformly dispersed, the flow regime will change in character to
one dominated by form drag and thermal conductivity. This regime will thus
have equilibrated pressures and temperatures, and will be well characterized
by the molecular mix models.
Data by Dimonte et al. [6] shows two dimensional slices through a mixing layer, based on laser illumination with index-matched fluids. These experiments show a wealth of small scale structures in a period for which scaling laws still hold. However, in this data the structures at the edge of the mixing zone have not broken up, an apparently important fact in explaining the persistence of scaling laws.
The appeal of intermediate models, for example with common pressures
but distinct temperatures, lies in a hope for an optimal balance between
simplicity and fidelity to details of physical modeling. It has long been understood that common pressures require common velocities, or a high degree of regularization of the velocities, to ensure mathematical stability (avoidance of complex eigenvalues for the time propagation) [16].
The purpose of this paper is twofold. Our first main result is to identify
a thermodynamic constraint on equal pressure mix models. Equal pressure
models require the specification of a thermodynamic process or path along
which pressure equilibration is achieved or maintained. In practice, specification
of this path appears limited to certain relatively simple flow regimes.
One such case is that of incompressible flow and another is the case in which
all but one of the phases are restricted to be isentropic.
A consequence of the equal velocities required of equal pressure models
is that dispersive mixing must be modeled by a turbulent diffusion term.
Our second main result is to derive the required diffusion term, particularly
the diffusion coefficient, from the closed form solution of the more complete
chunk mix model, in the incompressible case.
The chunk mix model, with distinct pressures and velocities for each
phase, and with distinct temperatures in the compressible case, has been
studied in a series of papers [7, 9, 8, 10, 4]. Other authors have also considered
two pressure, two temperature two phase flow models [16, 11, 13]. Our
work goes beyond these references in several respects, including (a) a zero
parameter closure in agreement with Rayleigh-Taylor incompressible mixing
data and (b) closed form solutions, also for the incompressible case. Two
pressure closures based on drag are discussed in [18, 19].
Diffusion in the context of fluid mixing has been considered by many
authors [12]. Work of Youngs [18, 19] can be discussed in relation to the
acceleration driven mixing we consider here. The diffusion in [18, 19] derives
from a second order closure (k-$\epsilon$ model) and requires new equations (k
and l equations) and parameters. Cranfill [5], in the same spirit, considers
a diffusion tensor, rather than scalar diffusion. This tensor is related to a
phenomenological drag frequency (defined by a k-$\epsilon$ model) and the disordered
Reynolds and viscous stresses, and again requires additional equations
and parameters. Cranfill distinguishes ordered from disordered flow in his
approach. Shvarts et al. [1] propose diffusion as a model for velocity fluctuation
induced mixing. Our derivation of the dispersion closure differs from
[1] in several respects.
The chunk mix model and the diffusion model proposed here have as input
the velocities or trajectories of the edges of the mixing zone. Beyond this the
chunk mix model has zero (incompressible) or one (compressible) adjustable
parameter. We regard it as more accurate than the two temperature diffusive
mixing model proposed here and, for this reason, we use the chunk mix
model as a reference solution from which closure relations for the diffusive
two temperature model are derived.
Section 2 will develop a multi-temperature, multi-species thermodynamics
with equilibrated pressures, on the basis of an assumed EOS for each
species and no energy of mixing. The critical role of a thermodynamic process for pressure equilibration, needed to complete the closure of these equations, will be stressed. Section 3 will develop model equations and closure expressions.
Section 4 will determine the coefficients in the diffusion closure relations.
Section 5 determines the Reynolds stress tensor. Conclusions are stated in
Section 6.
2 Thermodynamics
We assume that an equation of state $\epsilon_k$ for the specific internal energy of each species, $k = 1, \ldots, n$, is given. We seek a composite EOS for the $n$-species mixture which has achieved mechanical equilibrium ($p_k = p$ for all $k$), but not thermal equilibrium. We assume no energy of mixing, so that the total system specific internal energy density $\epsilon$ satisfies
$$\rho \epsilon = \sum_k \beta_k \rho_k \epsilon_k \quad \text{and} \quad \sum_k \beta_k = 1 .$$
A microscopic physical picture to describe this set of assumptions would be a container consisting of $n$ chambers separated by $n - 1$ thermally insulating, frictionless, moving partitions. Each chamber exerts pressure
forces on its two neighbors through the partition, and at equilibrium, each
chamber has expanded or contracted to achieve equal pressures. We argue
that this picture is an approximate description of the thermodynamics of
pressure equilibrated chunk mix.
Under the chunk mix assumption, we postulate chunks large enough that
the bulk energies and other thermodynamic quantities dominate surface ef-
fects, so that infinite volume thermodynamics applies within each chunk.
Thus we start with a system defined by $3n - 1$ independent thermodynamic variables, consisting of $n - 1$ volume fractions $\beta_k$ and two independent thermodynamic variables per species. Within this space, we define a mechanical equilibrium subspace, defined by equal pressures ($n - 1$ constraints), and thus parameterized by the volume fractions, a common pressure, and one additional thermodynamic variable per species (e.g., $S_k$, $T_k$, or $\rho_k$). All thermodynamic variables for the $k$th species are determined from the common pressure and this one variable, using the $k$ species EOS. The role of the $\beta_k$ is to give relative species concentration. Other, equivalent, thermodynamic variables with this role include the specific mass density $\mu_k = \beta_k \rho_k$, the mass fraction, the species number density $n_k$, and the chemical potential. Any of the above, in combination with the $k$ species EOS and $k$ species thermodynamic state, determines $\beta_k$, and any can be used in place of $\beta_k$ as independent thermodynamic variables.
In summary, we describe the mixture at mechanical equilibrium with $n - 1$ concentration variables, $n$ species dependent thermodynamic variables, and 1 global thermodynamic variable, or in total $2n$ thermodynamic variables.
From the point of view of pressure equilibration, or of the definition of pressure as a function of thermodynamically independent variables, in an incomplete EOS, the $2n$ thermodynamic variables are not all on an equal footing. The variables $\beta_k \rho_k$ represent the mass of species $k$, and must be conserved in any thermodynamic process. Likewise the total energy $\epsilon$ is a conserved quantity. The remaining $n - 1$ variables are less fundamental. In effect, any specific choice for these $n - 1$ variables, to be held fixed during pressure equilibration, i.e., to serve as independent variables for the definition of the common equilibrated pressure, is equivalent to the specification of a thermodynamic process for the equilibration.
The $n + 1$ variables which are conserved for all thermodynamic processes, and the process dependent choice of the remaining $n - 1$, constitute the basic conserved dependent variables for the multiphase hydro equations of Section 3. The domain of validity of these equations is thus limited to processes for which pressure equilibration is maintained with preservation of these $n - 1$ variables.
As an example, consider $n = 2$. One variable can be constrained to vary adiabatically. Starkenberg [15] adopts this point of view in modeling the initiation of a detonation wave, with the distinguished species being the (unburned) reactants.
As a second example, consider the nearly incompressible case. Then the volume fractions $\beta_k$ are (approximately) conserved. Choosing these as the remaining $n - 1$ process dependent variables completes the description of the pressure equilibration process.
We outline an algorithm for the determination of the pressure $p$ as a function of the $2n$ independent variables, based on the nearly incompressible closure of the pressure equilibration process. Assume we are given the $\beta_k$, $\rho_k$, and $\epsilon$. The problem is to find the individual $\epsilon_k$ defined by these variables, subject to a given total $\epsilon$, and then to apply the individual species EOS to determine $p$. We start with an initial guess for the values of the $\epsilon_k$. The guess $\epsilon_k$ together with the given $\rho_k$ determines a (non-equilibrated) $p_k = p_k(\rho_k, \epsilon_k)$. Each $p_k$ is monotone in $\epsilon_k$. Thus we increase the specific internal energy of the species with the smallest pressure, as based on this guess, and decrease the specific internal energy of the species with the largest pressure, while preserving the species mass $\beta_k \rho_k$ and density $\rho_k$. In other words, we heat the low pressure species and cool the high pressure species under constant volume, total energy preservation conditions. This process terminates only when all pressures have equilibrated, $p_k = p$ for all $k$, thus defining the common pressure $p$.
The above algorithm determines a computational method for evaluation of the incomplete EOS needed for solution of the multiphase fluid equations (4), (7), (9), and (14), subject to an assumption of approximate incompressibility. For other pressure equilibration processes, a different set of $n - 1$ process dependent conserved variables should be chosen.
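The equilibration loop above can be sketched numerically. The following assumes a gamma-law EOS $p_k = (\gamma_k - 1) \rho_k \epsilon_k$ (an assumption for illustration; the paper allows general equations of state) and replaces the incremental heat/cool iteration by a bisection on the energy of species 0, which is equivalent since each $p_k$ is monotone in $\epsilon_k$:

```python
def pressure(gamma, rho, eps):
    """Assumed gamma-law EOS: p = (gamma - 1) * rho * eps."""
    return (gamma - 1.0) * rho * eps

def equilibrate(beta, rho, gamma, eps):
    """Constant-volume pressure equilibration for n = 2 species.

    Holds each species mass beta[k]*rho[k], each density rho[k], and the
    total energy E = sum_k beta[k]*rho[k]*eps[k] fixed, and shifts
    internal energy between species until the pressures agree.
    """
    m0, m1 = beta[0] * rho[0], beta[1] * rho[1]   # species masses per volume
    E = m0 * eps[0] + m1 * eps[1]                 # conserved total energy
    lo, hi = 0.0, E / m0                          # admissible range for eps0
    for _ in range(100):                          # bisection: p_k monotone in eps_k
        e0 = 0.5 * (lo + hi)
        e1 = (E - m0 * e0) / m1                   # energy conservation
        if pressure(gamma[0], rho[0], e0) < pressure(gamma[1], rho[1], e1):
            lo = e0                               # heat the low pressure species
        else:
            hi = e0                               # cool the high pressure species
    return e0, e1

e0, e1 = equilibrate([0.4, 0.6], [1.0, 2.0], [1.4, 1.67], [2.5, 1.0])
print(abs(pressure(1.4, 1.0, e0) - pressure(1.67, 2.0, e1)) < 1e-9)  # True
```

The input numbers are illustrative; the same bisection structure applies to any EOS with $p_k$ monotone in $\epsilon_k$ at fixed $\rho_k$.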
3 Model Equations and Closure
As microphysical equations, we consider the multifluid compressible Euler
equations. Let $X_k(x, t)$ be the characteristic function of the region occupied by the fluid species $k$ at time $t$. Let $\langle \cdot \rangle$ denote an ensemble average, and let $\beta_k = \langle X_k \rangle$ be the volume fraction of fluid $k$.
Before considering the averaged equations in detail, we discuss some principles
which guide their derivation. As usual, we expand the variables in
terms of means and fluctuations. Specifically, for the velocity, $v = \bar{v} + v'$ with $\bar{v} = \langle v \rangle$. The fluctuation $v'$ plays an important role, because the individual species velocities $v_k = \langle X_k v \rangle / \beta_k$ which drive the mixing in the chunk mix model [8] do not occur in the diffusion model proposed here. While we expect $\sum_k \beta_k v_k$ of the chunk mix model to correspond to the $v$ of the diffusion model, $v'$ must play the role of $v_k - v$, for all $k$. $v_k - v$ enters the diffusion model only indirectly, through the correlations which define the diffusivity. Since the velocity diffusion of the species volumes $\beta_k$ is the important phenomenon driving the mixing process, the fluctuations resulting from the $\langle v' X_k \rangle$ correlations are important and must be retained in our analysis of closure hypotheses.
We also consider the mass weighted velocity $\tilde{v} = \langle \rho v \rangle / \langle \rho \rangle$ and the fluctuation $v'' = v - \tilde{v}$, with $\rho_k = \langle X_k \rho \rangle / \beta_k$ the mass per unit volume available to species $k$. We have previously used [2] the approximation $v_k \approx \tilde{v}_k$, justified by the smallness of the density fluctuations within a single species relative to those between two species. Here we use the approximation $v' \approx v''$. Our basis for this approximation is that the difference between these quantities is not captured within the phenomenological model of fluctuations via a diffusion process as developed here.
Fluctuations are neglected for species dependent thermodynamic quantities such as $\rho_k$ and the species specific internal energy $\epsilon_k \equiv \langle X_k \epsilon \rangle / \beta_k$ or $\tilde{\epsilon}_k \equiv \langle X_k \rho \epsilon \rangle / ( \beta_k \rho_k )$. These quantities enter into the conservation laws either directly, as dependent variables ($\rho_k$), or are available as thermodynamic functions of the conserved dependent variables ($\epsilon_k$). For this reason fluctuations in these quantities are of less importance and a strictly first order closure in these quantities is sufficient. Any correlation which is linear in fluctuating
quantities will vanish. Fluctuation correlations arise when averaging non-linear
terms in the equations. In particular, the Reynolds stress R, defined
in terms of velocity-velocity fluctuation correlations, will play an important
role in the momentum equation. We assume here that the pressure has also
equilibrated, and thus the individual phase pressures p k
are also missing from
our model. On this basis, one might expect the pressure fluctuations $p'$ to play an important role also. But pressure, in contrast to velocity, enters the momentum equation linearly, so that fluctuations do not arise. However, pressure fluctuations could appear in a pressure-velocity fluctuation correlation arising in the closure of the averaged energy equation.
The microphysical equation for $X_k$ is
$$\frac{\partial X_k}{\partial t} + v \cdot \nabla X_k = 0 , \qquad (1)$$
which upon averaging yields
$$\frac{\partial \beta_k}{\partial t} + \langle v \cdot \nabla X_k \rangle = 0 , \qquad (2)$$
or
$$\frac{\partial \beta_k}{\partial t} + \bar{v} \cdot \nabla \beta_k + \langle v' \cdot \nabla X_k \rangle = 0 . \qquad (3)$$
Closure requires a model for the fluctuation term. As is common in models of turbulent transport, we adopt a diffusion model, so that
$$\frac{\partial \beta_k}{\partial t} + \bar{v} \cdot \nabla \beta_k = \nabla \cdot ( D_k \cdot \nabla \beta_k ) . \qquad (4)$$
Following conventional ideas [12] (see Sect. 4), the diffusivity tensor $D_k$ is defined as a velocity-velocity fluctuation correlation.
Next consider the microphysical continuity equation, multiplied by $X_k$,
$$X_k \left[ \frac{\partial \rho}{\partial t} + \nabla \cdot ( \rho v ) \right] = 0 . \qquad (5)$$
We multiply (1) by $\rho$ and add the result to (5), to obtain
$$\frac{\partial ( X_k \rho )}{\partial t} + \nabla \cdot ( X_k \rho v ) = 0 , \qquad (6)$$
which implies
$$\frac{\partial ( \beta_k \rho_k )}{\partial t} + \nabla \cdot ( \beta_k \rho_k \tilde{v} ) = \nabla \cdot ( D_k \cdot \nabla ( \beta_k \rho_k ) ) , \qquad (7)$$
including closure as in (4). Within the approximations of Section 4, the diffusion constants in (4) and (7) coincide.
Next we consider the microphysical momentum equation,
$$\frac{\partial ( \rho v )}{\partial t} + \nabla \cdot ( \rho v v ) + \nabla p = \rho g . \qquad (8)$$
On averaging, this gives
$$\frac{\partial ( \bar{\rho} \tilde{v} )}{\partial t} + \nabla \cdot ( \bar{\rho} \tilde{v} \tilde{v} ) + \nabla \cdot R + \nabla \bar{p} = \bar{\rho} g , \qquad (9)$$
with $R$ a Reynolds stress tensor, analyzed in Section 5 and defined in terms of the velocity fluctuations $v''$.
Finally we consider the microphysical energy equation, in internal energy form,
$$\frac{\partial ( \rho \epsilon )}{\partial t} + \nabla \cdot ( \rho \epsilon v ) + p \nabla \cdot v = 0 . \qquad (10)$$
Our thermodynamic picture is that the dynamics will propagate within each species with conservation of species energy, and then (on a slower time scale) equilibrate adiabatically between species to a uniform pressure state with conservation of total system energy. Thus we start by deriving conservation equations for the energy of the individual species; for convenience these are written nonconservatively in terms of the internal energy alone. Thus we multiply (10) by $X_k$ and add $\rho \epsilon$ times (1) to obtain
$$\frac{\partial ( X_k \rho \epsilon )}{\partial t} + \nabla \cdot ( X_k \rho \epsilon v ) + X_k p \nabla \cdot v = 0 . \qquad (11)$$
Upon averaging, (11) becomes
$$\frac{\partial ( \beta_k \rho_k \tilde{\epsilon}_k )}{\partial t} + \nabla \cdot ( \beta_k \rho_k \tilde{\epsilon}_k \tilde{v} ) + \langle X_k p \nabla \cdot v \rangle + \nabla \cdot \langle X_k \rho \epsilon v'' \rangle = 0 . \qquad (12)$$
As above, this equation is closed with a diffusion term to give
$$\frac{\partial ( \beta_k \rho_k \tilde{\epsilon}_k )}{\partial t} + \nabla \cdot ( \beta_k \rho_k \tilde{\epsilon}_k \tilde{v} ) + \beta_k \bar{p} \nabla \cdot \tilde{v} = \nabla \cdot ( D_k \cdot \nabla ( \beta_k \rho_k \tilde{\epsilon}_k ) ) . \qquad (13)$$
We use these equations to determine the total internal energy $\bar{\rho} \tilde{\epsilon}$, which in combination with the volume fractions $\beta_k$ and species densities $\rho_k$ is a complete thermodynamic description of the system constrained to equal pressure in each species. Thus we sum (13) and obtain
$$\frac{\partial ( \bar{\rho} \tilde{\epsilon} )}{\partial t} + \nabla \cdot ( \bar{\rho} \tilde{\epsilon} \tilde{v} ) + \bar{p} \nabla \cdot \tilde{v} = \nabla \cdot ( D \cdot \nabla \bar{\rho} \tilde{\epsilon} ) , \qquad (14)$$
with neglect of $(p, v)$ fluctuation correlations, where $D \cdot \nabla \bar{\rho} \tilde{\epsilon} \equiv \sum_k D_k \cdot \nabla ( \beta_k \rho_k \tilde{\epsilon}_k )$.
In the derivation of the model equations, we have applied the closure
$$\langle X_k v' \rangle = - D_k \cdot \nabla \beta_k .$$
With this closure, the model equations (4), (7), (9), and (14), together with the evaluation of $D$ given in Section 4, of $R$ given in Section 5, and the EOS, are expressed in a closed form and provide a complete description of the system. Moreover, the model equations are symmetric regarding different materials, so that this diffusion model can be applied to a system with any number of materials.
For 1-D flow dominated by mixing induced by velocity dispersion along a single coordinate direction, $R$ reduces to a single longitudinal component $R_{hh}$, and the diffusivity tensor reduces to a scalar $D_k$. For radially dominated dispersion, the 1-D diffusion model equations simplify to
$$\frac{\partial \beta_k}{\partial t} + \bar{v} \frac{\partial \beta_k}{\partial h} = \frac{1}{h^{\nu}} \frac{\partial}{\partial h} \left( D_k h^{\nu} \frac{\partial \beta_k}{\partial h} \right) , \qquad (15)$$
$$\frac{\partial ( \beta_k \rho_k )}{\partial t} + \frac{1}{h^{\nu}} \frac{\partial}{\partial h} \left( h^{\nu} \beta_k \rho_k \bar{v} \right) = \frac{1}{h^{\nu}} \frac{\partial}{\partial h} \left( D_k h^{\nu} \frac{\partial ( \beta_k \rho_k )}{\partial h} \right) , \qquad (16)$$
$$\frac{\partial ( \bar{\rho} \bar{v} )}{\partial t} + \frac{1}{h^{\nu}} \frac{\partial}{\partial h} \left( h^{\nu} \bar{\rho} \bar{v} \bar{v} \right) + \frac{\partial ( \bar{p} + R_{hh} )}{\partial h} = \bar{\rho} g , \qquad (17)$$
$$\frac{\partial ( \beta_k \rho_k \tilde{\epsilon}_k )}{\partial t} + \frac{1}{h^{\nu}} \frac{\partial}{\partial h} \left( h^{\nu} \beta_k \rho_k \tilde{\epsilon}_k \bar{v} \right) + \beta_k \bar{p} \frac{1}{h^{\nu}} \frac{\partial ( h^{\nu} \bar{v} )}{\partial h} = \frac{1}{h^{\nu}} \frac{\partial}{\partial h} \left( D_k h^{\nu} \frac{\partial ( \beta_k \rho_k \tilde{\epsilon}_k )}{\partial h} \right) , \qquad (18)$$
where $h$ is the spatial coordinate along the longitudinal axis, and $\nu = 0, 1, 2$ respectively for planar, cylindrical, and spherical geometries.
If the pressure equilibration closure is based on an adiabatic process in $n - 1$ species, then the volume fraction equations are dropped from the finite difference equations, and equations for entropy advection in these $n - 1$ species are added in their place.
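As an illustration of how the volume fraction equation can be integrated numerically, here is a minimal explicit finite-difference sketch for planar geometry ($\nu = 0$) with zero mean velocity; the grid, the constant diffusivity, and the boundary treatment are illustrative assumptions, not choices made in the paper:

```python
# Explicit update for d(beta)/dt = D d2(beta)/dh2: planar geometry,
# zero mean velocity, constant diffusivity D (all illustrative choices).
N, D, dh = 101, 0.1, 0.01
dt = 0.25 * dh * dh / D        # well below the explicit stability limit
beta = [1.0 if i < N // 2 else 0.0 for i in range(N)]  # sharp interface

for _ in range(200):
    new = beta[:]
    for i in range(1, N - 1):
        new[i] = beta[i] + dt * D * (beta[i + 1] - 2.0 * beta[i] + beta[i - 1]) / (dh * dh)
    beta = new                 # edge values held fixed at 1 and 0

# The interface diffuses into a smooth monotone profile with 0 <= beta <= 1.
print(min(beta) >= 0.0 and max(beta) <= 1.0)  # True
```

With the chosen time step the update is a convex combination of neighboring values, so the scheme preserves both the bounds and the monotonicity of the volume fraction profile.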
4 Evaluation of Diffusion Coefficients
4.1 Lagrangian Formulas Evaluated within Chunk Mix
Model
Diffusive modeling of the velocity - volume fraction fluctuating correlation $\langle v' X_k \rangle$ is a consequence of the low order of closure in the model proposed here. Such an eddy diffusivity closure is justified on physical grounds by assuming a mixing process dominated by concentration gradients, but it is also required by a model in which the velocity fluctuations do not appear directly. Thus we adopt the general form of the closed equations given in Section 3. It remains to evaluate the diffusion constant, $D$. Here we appeal to a Lagrangian framework. Since our problem is strongly time dependent, the conventional definition of the time averaged diffusivity,
$$D = \lim_{t \to \infty} \frac{\langle X(t)^2 \rangle}{2 t} , \qquad (19)$$
where $X(t)$ is the Lagrangian particle displacement, must be replaced by the incrementally defined diffusivity,
$$D(t) = \frac{1}{2} \frac{d \langle X(t)^2 \rangle}{d t} . \qquad (20)$$
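The incrementally defined diffusivity can be estimated directly from an ensemble of Lagrangian trajectories. The sketch below uses a plain Gaussian random walk of known diffusivity as a stand-in for chunk mix trajectories (an assumption purely for illustration), so the recovered value should be close to the nominal one:

```python
import random

random.seed(1)
M, steps, dt, D_true = 20000, 50, 1e-3, 0.5
sigma = (2.0 * D_true * dt) ** 0.5      # step size giving diffusivity D_true

X = [0.0] * M                            # ensemble of particle displacements
msd = [0.0]                              # <X(t)^2> at each time level
for _ in range(steps):
    X = [x + random.gauss(0.0, sigma) for x in X]
    msd.append(sum(x * x for x in X) / M)

# Incrementally defined diffusivity, eq. (20), estimated over the last
# ten steps to damp sampling noise:
D_est = 0.5 * (msd[-1] - msd[-11]) / (10 * dt)
print(0.4 < D_est < 0.6)  # True: close to D_true = 0.5
```

In the strongly time-dependent flows considered here, the step statistics would themselves depend on time, which is exactly why (20) rather than (19) is the appropriate definition.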
Normally the Lagrangian framework is difficult to use with Eulerian field
quantities. To justify use of this framework, we refer to the special feature,
or physical assumption of flow regimes the chunk mix model is intended
to describe, and then use those assumptions in the evaluation of (20). In
the chunk mix model, the major uncertainty, or fluctuation, is the choice of
distinct species, or label, to which a Lagrangian particle belongs.
We have neglected within-species fluctuations, and closed the system at first order in terms of mean flow quantities only. Thus any flow quantity with a subscript k, such as ρ_k or v_k, can be regarded as deterministic, and applies equally to a Lagrangian diffusive mix description and an Eulerian chunk mix description of a species-k fluid element. It follows that the expectation in (20), when evaluated in terms of chunk mix flow fields, is simply a weighted sum over the index k.
We now restrict to the case of two materials. As a fundamental modeling or closure assumption, we use the known analytic solutions of the chunk mix model for incompressible flow to generate expressions for these correlations. Continuing with the evaluation of (20), for 1-D flow, we have
dt
and so
\Theta
For incompressible flow,
can be further simplified to
Here z(t) is the Lagrangian path of a fluid element of species k which enters
the mixing zone at the edge at some time s k - t and moves with the
chunk mix [8] velocity
to the point z at time t.
Here k′ is the complementary index to k, and V_k is the velocity of the edge of the mixing zone. Observe that D vanishes at both edges of the mixing zone: β_k vanishes at Z_k, and the displacement z(t) − Z_{k′}(s_k) vanishes at Z_{k′}.
We write s_k = s_k(β_k, t). The function s_k is the entrainment time for a fluid parcel of species k located at the position z(β_k, t) (volume fraction β_k) at time t. Since species k is entrained through the edge Z_{k′}, the entrainment location is Z_{k′}(s_k). For s_k = t, a particle of species k at the edge Z_{k′}(t) is in the process of being entrained.
From the structure of (22), we see that separate evaluation of the species
dependent diffusion constants D k is possible. For Richtmyer-Meshkov mix-
ing, the individual D k have distinct asymptotics, which can be determined
by the methods developed here.
The main difficulty to be overcome in this section is the determination
of the entrainment time s k in large time asymptotic Rayleigh-Taylor (RT)
and Richtmyer-Meshkov (RM) flows. We believe this information is of independent
interest, beyond its contribution to the computation of D. For
RT mixing, the time of entrainment for a fluid parcel of species k, located at volume fraction β_k and time t, is proportional to t with a β-dependent coefficient which we determine in closed form. For large Atwood number RT mixing and especially for RM mixing, the entrainment times are significantly nonlinear, in fact sublinear in β. For RM mixing, most of the entrained material is deposited into the mixing zone early in the mixing
process. This difference between RT and RM mixing reflects a decrease in the rate of loss of memory of initial conditions for RM mixing as opposed to RT mixing. In both the RT and RM mixing cases, we thus determine
residence times for entrained material in the mixing zone.
We determine s k as a function of fi k and t by integrating the evolution
equation
ds k
@z
ds k
@z
from together with condition Here we have used
expressions dz k given in the chunk mix
model, and
which leads to
The volume fraction β_k(z, t) is determined implicitly by

    z(β_k, t) = ∫^t v(s; β_k) ds.  (28)
Thus we have
@z
Z t@v
ds
Z tds
We consider two distinct flow regimes for the determination of D. The first is self-similar Rayleigh-Taylor (RT) mixing, and the second is the time asymptotic Richtmyer-Meshkov (RM) mixing. We assume that the RM mixing is governed by unequal power laws for the two edge velocities, Z_k ∼ t^(θ_k).
4.2 Diffusive Modeling of Self-Similar Rayleigh-Taylor Mixing
Now we specialize to RT mixing. For self-similar flow,
Substituting
@z
Ags 2
Substitution into (25) and (27) yields
ds k
Integrating this separable equation finally generates the entrainment times
ff
and
ff
This is an exact consequence of chunk model closure. The fact that the entrainment times grow linearly with t reflects, and makes precise, the idea that RT mixing forgets its initial data, and thus is universal in the large time asymptotic limit. The distribution of s_k/t within the mixing layer is shown in Fig. 1.
For
and s k
required by the definition of entrainment
time. For indicating that material in the minority
phase at the edge of the mixing zone was entrained at time
similar flow.
Incorporating the solutions (33) into (23) and noting that z(fi 1
we obtain an explicit expression for the diffusion coefficient
ff
ff
within the mixing zone or interval [Z_1(t), Z_2(t)], and vanishes outside this interval. Fig. 2a and Fig. 2b respectively display the distributions of D across the mixing layer, for Atwood numbers including A = 0.1807, at various times. From Fig. 2b we see that for large A, the diffusivity is more prominent on the bubble side than on the spike side.
Self-similar RT mixing takes place in an expanding domain [Z_1(t), Z_2(t)], which can be transformed into a fixed domain [Z_1(t)/t², Z_2(t)/t²] by rescaling with t. Then (4) becomes
(D
and mixing occurs on a log time
scale in this model, as it should. In the time asymptotic regime, the solution becomes asymptotically independent of t_0. It is thus approximately
a solution of the time independent equation
@
4.3 Large Time Asymptotic RM Mixing in the Chunk Mix Model
We obtain a closed form solution for RM mixing for z(β) and the velocities in the large time limit. This is a complete solution of the chunk model with the exception of the pressure field. Our first objective is to determine z and dz/dβ_k as functions of β_k. For the RM case, we assume Z_k ∼ t^(θ_k).
We introduce the variables
and
We have
\Theta
\Theta
2:
Substituting (39) and (41) into (28) and (29) for
@z
ts ds
In terms of - , (43) and (44) can be rewritten as
Z --
@z
The integrals in (45) and (46) can be expressed through the hypergeometric function F(α, β; γ; u) by use of the formula

    ∫_0^u x^(β−1) (1 − x)^(−α) dx = (u^β / β) F(α, β; β + 1; u),

where F(α, β; γ; u) = Σ_{n≥0} [(α)_n (β)_n / (γ)_n] u^n / n! is called a generalized hypergeometric function. This leads to
and
@z
where we have used (37) and (38). The numerical solutions for z(β, t) are respectively shown in Fig. 3a and Fig. 3b. From the figures, we see that the nonlinearity of the mixing front in RM mainly depends on the ratio θ_2/θ_1. In order to understand the dominant behaviour
of (48), we use the hypergeometric function transformation formulae
and rewrite the F as
Employing this expression, (48) is reduced to
@z
The transformed F has a series expansion which converges uniformly in the unit circle |u| ≤ 1.
4.4 Diffusive Modeling of Large Time Asymptotic RM Mixing
To obtain the diffusion coefficient D, we need to solve for s k . Substituting
(42), (38) and (50) into (25) gives the evolution equations
ds 1
@z
and
ds 2
@z
for β_k. We obtain the solutions s_k by integrating the resulting equation for s_k from β_k(t). The numerical solutions for s_k at different times are displayed in Fig. 4a and Fig. 4b. The diffusion
coefficient D in RM mixing is evaluated as
In Fig. 5a and Fig. 5b, we show the distribution of D across the RM mixing layer at different times, for Atwood numbers including A = 0.96. From the figures, we
see features also demonstrated in RT mixing. The diffusivity on the bubble
side is more prominent than on the spike side. With the exception of the Lagrangian
time of entrainment s k , the formula (54) is a closed form evaluation
of D, which is exact within the chunk model assumptions and the large time
RM asymptotic assumptions. To proceed, we require a deeper understanding
of the RM mixing zone asymptotics to evaluate s k approximately.
4.5 Scaling Law Behavior of the RM Mixing Zone
Since the relationship between z(β_k, t) and β_k is significantly nonlinear, we now determine approximately the principal features of β_k(z, t). In units of β, most of the mixing occurs at a speed V_1, near the bubble interface Z_1,
while in units of z, most of the mixing zone is occupied nearly exclusively by
light material, and trace amounts of heavy material involved in the mixing
move with speed close to V 2 . We believe this heavy material motion has the
form of widely isolated jets, each containing a high concentration of heavy
material. Thus we do not expect that the trace amounts of heavy material
will be uniformly distributed in the ambient light material.
To substantiate these statements, observe that v - ff
a small range of fi 1 near the spike interface, i.e., for
the same reason, the coefficient of -
2dominates that of
more than compensates for the larger size of Ż_2. Thus, except for a small range of β near the spike interface, most of the transport is light material moving at a speed close to V_1 into the heavy material.
With two distinct power laws assumed for the two edges of a RM mixing
zone, the solution within the mixing zone displays a continuous range of
power law exponents. We introduce two asymptotic regions:
For large and fixed t, the relations τ ≫ 1 and τ/(1 − τ) ≫ 1 are satisfied for most of the mixing zone as measured in β values, except for the neighborhood of the spike interface (β_2 ≈ 0).
In the case of region I, F, which solely depends on τ for given θ_k, converges uniformly. Thus the equations (52) and (53) are separable and have the form
ds k
\Theta
2:
In the case of region II, τ ≪ 1, and the equations (52) and (53) become
ds k
\Theta
2:
We introduce the curve τ = 1 as the boundary between the two asymptotic regions. A more accurate analysis, which we forego here, would include a third region, τ ≈ 1, intermediate between regions I and II. In this intermediate region, use of the variables β_k and τ in place of β_k and s_k still gives a separable
pair of equations, which after approximation of F by a finite power series,
can be integrated in closed form.
Let s_k* and β_k* respectively represent the time and the β_k value along the solutions to (57)-(58) at the region boundary τ = 1, k = 1, 2. (59)
Consider a point in region I. The particle path defined by the solution of the β_2 equation remains in region I and so it can be integrated to yield s_2. We integrate β_2 from 1 to β_2(t) and s_2 from s_2 to t in (57) and obtain
As t.
The path for the solution of the β_1 equation must terminate in region II. Thus its solution is attained by a matched asymptotic expansion. We first integrate β_1 from β_1(t) to β_1* and s_1 from t to s_1* in region I, and we have
Next consider a point in region II. Region II includes the mixing zone edge β_1 = 1. Thus the particle path defined by the solution of the β_1
equation remains in region II and so it can be integrated to yield s 1 . We
integrate the equation (58) for region II, and obtain the entrainment time s 1
for this region:
As
The path for the solution of the β_2 equation must terminate in region I. Thus its solution is attained by the same treatment as β_1 in region I. We integrate β_2 from β_2(t) to β_2* and s_2 from t to s_2* in region II, and obtain
To complete the integrations for paths which cross both regions, we need to integrate β_1 from β_1* to 1 and s_1 from s_1* to s_1 in region II, and to integrate β_2 from β_2* to 1 and s_2 from s_2* to s_2 in region I. The former gives
and the latter generates
which are consistent respectively with (62) and (60) when these are evaluated
at
Combining (59) and (61), we obtain
and thus
s
Analogously, incorporating (59) and (63) yields
which leads to
Substituting the starred variables in (64) and (65), we finally obtain the entrainment time s_k in RM mixing as
and
Incorporating (70) and (71) into (54), and considering the facts that τ ≫ 1 in region I and τ ≪ 1 in region II, we obtain the approximate expressions for D in these two asymptotic regions:
ae
oe
and
ae
oe
As β_k → 0, D approaches zero, as it should.
5 The Reynolds Stress Tensor
By definition the Reynolds stress tensor is R = ⟨ρ v′v′⟩.
For mixing induced by velocity dispersion in a single (z) coordinate direction, and with only two materials, the Reynolds stress tensor reduces to its single nontrivial component R = R_zz = ⟨ρ v′_z v′_z⟩.
This correlation is conventionally simplified to a two variable model, e.g.,
describing the total turbulent kinetic energy and the rate of dissipation of
turbulent kinetic energy (k-ffl models). The new variables acquire their own
dynamic equations. Such an approach is problematic for several reasons. The
new equations, which thus define a second order closure, contain additional
unknown parameters of uncertain value. The k-ffl models refer to isotropic
turbulence, and only model the trace of R. Setting parameter values for
nonisotropic turbulence closures is still less well understood. The expanded
equation set has two new equations in contrast to the one new equation for
the chunk mix model. The uncertain coefficients in the k-ffl model are in
contrast to the nearly parameter-free status of the chunk mix model. Thus
the use of a k-ffl or related model for turbulent diffusion will surely be more
complex than the more exact chunk mix model we are trying to simplify.
Here we derive an expression for the Reynolds stress from the chunk
model in the case of two materials. This expression has two advantages:
lack of unknown parameters and use of a first order closure, so that no new
dependent variables and equations are introduced. In the chunk model, the
Reynolds stress is represented as a sum of three terms. The first two are
the Reynolds stresses associated with the fluids strictly within each species,
and the third term is the Reynolds stress associated with the two species
mixture. We have assumed for the chunk model that the first two terms
can be neglected, and we retain that assumption here. Thus we evaluate the
Reynolds stress as equal to the third term alone, following [3]. The resulting expression (75) can be evaluated in terms of β_k, ρ_k, and V_k(t), i.e., variables defined within the diffusion model. Thus we see that R behaves like a potential well: it accelerates a fluid element going into the mixing layer, and decelerates it upon exit.
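As a check on the structure of this term, a reconstruction (our own, assuming a mass-weighted mean velocity $\tilde v$, an assumption not stated explicitly above) reduces the two-species contribution to a single algebraic expression:

```latex
\[
\tilde v = \frac{\beta_1\rho_1 v_1 + \beta_2\rho_2 v_2}{\bar\rho},
\qquad \bar\rho = \beta_1\rho_1 + \beta_2\rho_2,
\]
\[
R \;=\; \sum_{k=1}^{2}\beta_k\rho_k\,\bigl(v_k-\tilde v\bigr)^2
\;=\; \frac{\beta_1\rho_1\,\beta_2\rho_2}{\bar\rho}\,\bigl(v_1-v_2\bigr)^2 ,
\]
```

since $v_1-\tilde v = (\beta_2\rho_2/\bar\rho)(v_1-v_2)$ and $v_2-\tilde v = -(\beta_1\rho_1/\bar\rho)(v_1-v_2)$. The potential-well behavior noted above follows because the factor $\beta_1\beta_2$ vanishes at both edges of the mixing zone.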
To determine the scaling behavior of the Reynolds stress term ∇R, we specialize to RT and RM mixing, as in Section 4. In the RT case, velocities are O(t) and ∇ is O(1/z). In the RM case, we have two scalings, or a continuum range of scalings, but in all cases ∇R scales like a power of t.
6 Conclusions
For equilibrated pressure multiphase models, we have identified a thermodynamic constraint, or process dependence, which relates the choice of the conserved variables of the model to the domain of physical validity
of the model. For incompressible multiphase flow, we have determined the
coefficient of turbulent diffusivity and the form of the Reynolds stress tensor.
References
A renormalization group scaling analysis for compressible two-phase flow
Boundary conditions for a two pressure two phase flow model.
A new multifluid turbulent-mixing model
Turbulent Rayleigh-Taylor instability experiments with variable acceleration
Renormalization group solution of two-phase flow equations for Rayleigh-Taylor mixing
Statistical evolution of chaotic fluid mixing.
Multipressure regularization for multi-phase flow
The Physics of Fluid Turbulence.
Hyperbolic two-pressure models for two-phase flow
Experimental investigation of turbulent mixing by Rayleigh-Taylor instability
Private communication
Numerical simulation of turbulent mixing by Rayleigh-Taylor instability
Modeling turbulent mixing by Rayleigh-Taylor instability
Numerical simulation of mixing by Rayleigh-Taylor and Richtmyer-Meshkov instabilities
Renormalization group analysis of turbulence I. Basic theory
Modelling turbulent mixing by Rayleigh-Taylor instability
Transport by random stationary flows
Maximum Likelihood Estimation of Mixture Densities for Binned and Truncated Multivariate Data

Abstract. Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44:2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.

1 Introduction
In this paper we address the problem of fitting mixture densities to multivariate
binned and truncated data. The problem is motivated by an application in medical
diagnosis where blood samples are taken from subjects. Typically each sample
contains about 40,000 different red blood cells. The volume and hemoglobin concentration
of the red blood cells are measured by a cytometric blood cell counter.
It then produces as output a bivariate histogram on a 100 \Theta 100 grid in volume
and hemoglobin concentration space (e.g., Figure 1). Each bin contains a count of
the number of red blood cells whose volume and hemoglobin concentration fall into
that bin. It is known that the data can be truncated, i.e., that the range of machine
measurement is less than the actual possible range of volume and hemoglobin
concentration values.
We present a general solution to the problem of fitting a multivariate mixture density
model to binned and truncated data. Binned and truncated data arise frequently
in a variety of application settings. Binning can occur systematically when a measuring
instrument has finite resolution, e.g., a digital camera with finite precision for
pixel intensity. Binning also may occur intentionally when real-valued variables are
quantized to simplify data collection, e.g., binning of a person's age into the ranges 0-10, 10-20, and so forth. Truncation can also easily occur in a practical data collection
context, whether due to fundamental limitations on the range of the measurement
process or intentionally for other reasons.
For both binning and truncation, one can think of the original "raw" measurements
as being masked by the binning and truncation processes, i.e., we do not know
the exact location of data points within the bins or how many data points fall outside
the measuring range. It is natural to think of this problem as one involving missing
data and the Expectation-Maximization (EM) algorithm is an obvious candidate for
model fitting in a probabilistic context.
The theory for fitting finite mixture models to univariate binned and truncated
data by maximum likelihood via the EM algorithm was developed in McLachlan and
Jones (1988). The problem in somewhat simpler form was addressed earlier by Dempster, Laird, and Rubin (1977) when the EM algorithm was originally introduced. The
univariate theory of McLachlan and Jones (1988) can be extended in a straightforward
manner to cover multivariate data. However, the implementation is subject to exponential
time complexity and numerical instability. This requires careful consideration
and is the focus of this present paper. In Section 2 we extend the McLachlan and
Jones results on univariate mixture estimation to the multivariate case. In Section 3
we present a detailed discussion of the computational and numerical considerations
necessary to make the algorithm work in practice. Section 4 discusses experimental
results on both simulation data and the afore-mentioned red blood cell data.
2 Basic Theory of EM with Bins and Truncation
We begin with a brief review of the EM algorithm. In the most general form, the
EM algorithm is a general procedure for finding maximum likelihood model parameters
if some part of the data is missing. For a finite mixture model the underlying
assumption (the generative model) is that each data point comes from one of g component
distributions. However, this information is hidden in that the identity of the
component which generated each point is unknown. If we knew this information,
the estimation of the parameters by maximum likelihood would be direct at least for
a single normal population; estimate the mean and covariance parameters for each
component separately using the data points identified as being from that component.
Further, the relative count of data points in each population would be the maximum
likelihood estimate of the weight of the components in the mixture model regardless
of the component distributions.
We can think of two types of data, the observed data and the missing data. Ac-
cordingly, we have the observed likelihood (the one we want to maximize), and the full
likelihood (the one that includes missing data and is typically easier to maximize).
The EM algorithm provides a theoretical framework that enables us to iteratively
maximize the observed likelihood by maximizing the expected value of the full likeli-
hood. For fitting Gaussian mixtures, the EM iterations are quite straightforward and
well-known (see McLachlan and Basford (1988) and Bishop (1995) for tutorial treatments
of EM for Gaussian mixtures and see Little and Rubin (1987) or McLachlan
and Krishnan (1997) for a discussion of EM in a more general context). With binning
and truncation we have two additional sources of hidden information in addition to
the hidden component identities for each data point.
McLachlan and Jones (1988) show how to use the EM algorithm for this type of
problem. The underlying finite mixture model can be written as:

    f(x; Φ) = Σ_{i=1}^{g} π_i f_i(x; θ),

where the π_i's are weights for the individual components, the f_i's are the component density functions of the mixture model parametrized by θ, and Φ is the set of all mixture model parameters, Φ = {π, θ}. The overall sample space H is divided into v disjoint subspaces H_j (bins), of which only the counts on the first r bins are observed, while the counts on the last v − r bins are missing. The (observed) likelihood associated
with this model (up to irrelevant constant terms) is given by (Jones and McLachlan):

    L(Φ) = Π_{j=1}^{r} [ P_j(Φ) / Σ_{l=1}^{r} P_l(Φ) ]^{n_j},  (1)

where n is the total observed count:

    n = Σ_{j=1}^{r} n_j,

and the P's represent integrals of the probability density function (PDF) over bins:

    P_j(Φ) = ∫_{H_j} f(x; Φ) dx,  j = 1, ..., v.
The form of the likelihood function above corresponds to a multinomial distributional
assumption on bin occupancy.
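For intuition, when the components are Gaussian and the bins are intervals (or axis-aligned boxes, via per-dimension products), the P_j and the binned log-likelihood above can be computed from error functions rather than generic quadrature. A minimal univariate sketch (illustrative code, not from the paper; the function names are ours):

```python
import math

def norm_cdf(x, mu, sigma):
    """Gaussian CDF at x for mean mu and standard deviation sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bin_probs(edges, weights, mus, sigmas):
    """P_j: integral of the mixture PDF over each bin [edges[j], edges[j+1])."""
    probs = []
    for a, b in zip(edges[:-1], edges[1:]):
        probs.append(sum(w * (norm_cdf(b, m, s) - norm_cdf(a, m, s))
                         for w, m, s in zip(weights, mus, sigmas)))
    return probs

def binned_loglik(counts, probs):
    """Truncated-multinomial log-likelihood: sum_j n_j log(P_j / sum_l P_l)."""
    total = sum(probs)
    return sum(n * math.log(p / total) for n, p in zip(counts, probs) if n > 0)
```

For a general (non-Gaussian or correlated) mixture no closed-form CDF is available and each P_j must be approximated numerically, which is exactly the situation analyzed in Section 3.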
To invoke the EM machinery we first define several quantities at the p-th iteration.
Φ^(p) and θ^(p) represent current estimates of model parameters. E_j^(p)[·] denotes conditional expectation given that the random variable belongs to the j-th bin, using the current value of the unknown parameter vector (i.e., expectation with respect to the normalized current PDF f(x; Φ^(p))/P_j(Φ^(p))). Specifically, for any function g(x):

    E_j^(p)[g(x)] = (1/P_j(Φ^(p))) ∫_{H_j} g(x) f(x; Φ^(p)) dx.  (2)

We also define:
    m_j^(p) = n_j for j ≤ r,   m_j^(p) = n P_j(Φ^(p)) / Σ_{l=1}^{r} P_l(Φ^(p)) for j > r,

    τ_i^(p)(x) = π_i^(p) f_i(x; θ^(p)) / f(x; Φ^(p)),

    c_i^(p) = Σ_{j=1}^{v} m_j^(p) E_j^(p)[τ_i^(p)(x)],
where all the quantities on the left-hand side (with superscript (p)) depend on the
current parameter estimates Φ^(p) and/or θ^(p). Each term has an intuitive interpretation. For example, the m_j's represent a generalization of the bin counts to unobserved data. They are either equal to the actual count in the observed bins (i.e., for j ≤ r) or they represent the conditional expected counts for unobserved bins (i.e., j > r).
The conditional expected count formalizes the notion that if there is (say) 1% of the
PDF mass in the unobserved bins, then we should assign them 1% of the total data
points. τ_i(x) is the posterior probability of membership of the i-th component of the mixture model given x being observed on the individual (it represents the relative weight of each mixture component i at point x). Intuitively it is the probability of data point x "belonging" to component i. c_i is a measure of the overall relative weight of component i. Note that in order to calculate c_i, the local relative weight τ_i(x) is averaged over each bin, weighted by the count in the bin, and summed over all bins. This way, each data point within each bin contributes to c_i the average local weight for that bin (i.e., E_j[τ_i(x)]). Compare this to the non-binned case, where each data point contributes to c_i the actual local weight τ_i(x) evaluated at the data point.
Next, we use the quantities defined above to define the E-step and express the closed form solution for the M-step at iteration (p + 1):

    π_i^(p+1) = c_i^(p) / Σ_{l=1}^{g} c_l^(p),  (6)

    μ_i^(p+1) = (1/c_i^(p)) Σ_{j=1}^{v} m_j^(p) E_j^(p)[x τ_i^(p)(x)],  (7)

    (σ_i^(p+1))² = (1/c_i^(p)) Σ_{j=1}^{v} m_j^(p) E_j^(p)[(x − μ_i^(p+1))² τ_i^(p)(x)].  (8)
These equations specify how the component weights (i.e., π's), component means (i.e., μ's) and component standard deviations (i.e., σ's) are updated at each EM step. Note that the main difference here from the standard version of EM (for non-binned data) comes from the fact that we are taking expected values over the bins (i.e., E_j^(p)[·]).
Here, each data point within each bin contributes the corresponding value averaged
over the bin, whereas in the non-binned case each point contributes the same value
but evaluated at the data point.
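To make the univariate updates concrete, here is an illustrative single EM iteration for binned data (our sketch, not the authors' code), implementing the binned analogues of Equations (6)-(8). Bin expectations E_j^(p)[·] are approximated by midpoint quadrature, standing in for the Romberg scheme discussed in Section 3, and the truncated (unobserved) bins are omitted for brevity:

```python
import math

def npdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_step(edges, counts, weights, mus, sigmas, npts=20):
    """One EM iteration for a univariate Gaussian mixture fit to binned counts.
    Per-bin expectations use npts-point midpoint quadrature."""
    g = len(weights)
    c = [0.0] * g            # c_i: expected total weight of component i
    sx = [0.0] * g           # running sums for first moments
    sxx = [0.0] * g          # running sums for second moments
    for (a, b), n in zip(zip(edges[:-1], edges[1:]), counts):
        if n == 0:
            continue
        h = (b - a) / npts
        xs = [a + (k + 0.5) * h for k in range(npts)]
        # component masses pi_i * f_i at the quadrature points
        comp = [[weights[i] * npdf(x, mus[i], sigmas[i]) for x in xs]
                for i in range(g)]
        pj = sum(h * sum(row) for row in comp)      # quadrature estimate of P_j
        for i in range(g):
            m0 = h * sum(comp[i])                                  # int tau_i f
            m1 = h * sum(v * x for v, x in zip(comp[i], xs))       # int x tau_i f
            m2 = h * sum(v * x * x for v, x in zip(comp[i], xs))   # int x^2 tau_i f
            c[i] += n * m0 / pj
            sx[i] += n * m1 / pj
            sxx[i] += n * m2 / pj
    total = sum(c)
    new_w = [ci / total for ci in c]
    new_mu = [sxi / ci for sxi, ci in zip(sx, c)]
    new_sig = [math.sqrt(max(sxxi / ci - m * m, 1e-12))
               for sxxi, ci, m in zip(sxx, c, new_mu)]
    return new_w, new_mu, new_sig
```

Iterating em_step until the parameters stabilize gives the maximum likelihood fit for fully observed bins; handling truncation additionally requires the complement integrals of Section 3.2.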
To generalize to the multivariate case, in theory all we need do is generalize Equations (6)-(8) to the vector/covariance case:

    π_i^(p+1) = c_i^(p) / Σ_{l=1}^{g} c_l^(p),  (9)

    μ_i^(p+1) = (1/c_i^(p)) Σ_{j=1}^{v} m_j^(p) E_j^(p)[x τ_i^(p)(x)],  (10)

    Σ_i^(p+1) = (1/c_i^(p)) Σ_{j=1}^{v} m_j^(p) E_j^(p)[(x − μ_i^(p+1))(x − μ_i^(p+1))^T τ_i^(p)(x)].  (11)
While the multivariate theory is a straightforward extension of the univariate case, the
practical implementation of this theory is considerably more complex due to the fact
that the approximation of multi-dimensional integrals is considerably more complex
than the univariate case.
Note that the approach above is guaranteed to maximize likelihood as defined by
equation (1), irrespective of the form of the selected conditional probability model
for missing data given observed data. Different choices of this conditional probability
model only lead to different paths in parameter space, but the overall maximum
likelihood parameters will be the same. This makes the approach quite general as no
additional assumptions about the distribution of the data are required.
3 Computational and Numerical Issues
In this section we discuss our approach to two separate problems that arise in the
multivariate case: 1) how to perform a single iteration of the EM algorithm; 2) how
to setup a full algorithm that will be both exact and time efficient. The main difficulty
in handling binned data (as opposed to having standard, non-binned data) is
the evaluation of the different expected values (i.e., E_j^(p)[·]) at each EM iteration. As
defined by Equation (2), each expected value in equations (9)-(11) requires integration
of some function over each of the v bins. These integrals cannot be evaluated
analytically for most mixture models (even for Gaussian mixture models). Thus, they
have to be evaluated numerically at each EM iteration, considerably complicating the
implementation of the EM procedure, especially for multivariate data. To summarize
we present some of the difficulties:
- If there are m bins in the univariate space, there are now O(m^d) bins in the d-dimensional space (consider each dimension as having O(m) bins), which represents exponential growth in the number of bins.
- If in the univariate space each numerical integration requires O(i) function evaluations, in multivariate space it will require at least O(i^d) function evaluations for comparable accuracy of the integral. Combined with the exponential growth in the number of bins, this leads to an exponential growth in the number of function evaluations. While the underlying exponential complexity cannot be avoided, the overall execution time can greatly benefit from carefully optimized integration schemes.
- The geometry of multivariate space is more complex than the geometry of univariate space. Univariate histograms have natural end-points where the truncation occurs and the unobserved regions have a simple shape. Multivariate histograms typically represent hypercubes, and the unobserved regions, while still "rectangular," are no longer of a simple shape. For example, for a 2-dimensional histogram there are four sides from which the unobserved regions extend to infinity, but there are also four "wedges" in between these regions.
- For fixed sample size, multivariate histograms are much sparser than their univariate counterparts in terms of counts per bin (i.e., marginals). This sparseness can be leveraged for the purposes of efficient numerical integration.
3.1 Numerical Integration at each EM Iteration
The E step of the EM algorithm consists of finding the expected value of the complete-data
log likelihood with respect to the distribution of missing data, while the M step
consists of maximizing this expected value with respect to the model parameters \Phi.
Equations (9)-(11) define both steps for a single iteration of the EM algorithm.
If there were no expected values in the equations (i.e., no E_j^(p)[·] terms), they would
represent a closed form solution for parameter updates. With binned and truncated
data, they are almost a closed form solution, but additional integration is still re-
quired. One could use any of a variety of Monte Carlo integration techniques for this
integration problem. However, the slow convergence of Monte Carlo is undesirable for
this problem. Since the functions we are integrating are typically quite smooth across
the bins, relatively straightforward numerical integration techniques can be expected
to give solutions with a high degree of accuracy.
Multidimensional numerical integration consists of repeated 1-dimensional inte-
grations. For the results in this paper we use Romberg integration (see Thisted
(1988) or Press et al., (1992) for details). An important aspect of Romberg integration
is selection of the order of integration. Lower-order schemes use relatively few
function evaluations in the initialization phase, but may converge slowly. Higher-order
schemes may take longer at the initialization phase, but converge faster. Thus,
order selection can substantially affect the computation time of numerical integration
(we will return to this point later). Note that the order only affects the path to
convergence of the integration; the final solution is the same for any order given the
same pre-specified degree of accuracy.
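A compact sketch of Romberg integration with an explicit order cap illustrates this trade-off (our illustrative implementation; the paper's code and its exact order-selection rule are not shown here):

```python
def romberg(f, a, b, order=4, tol=1e-8, max_iter=20):
    """Romberg integration of f on [a, b].
    `order` caps the depth of Richardson extrapolation applied at each step;
    iteration stops when successive estimates agree to relative tolerance tol."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]   # coarsest trapezoid estimate
    h = b - a
    for i in range(1, max_iter):
        h *= 0.5
        # refined trapezoid rule: only the new midpoints are evaluated
        s = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        row = [0.5 * R[i - 1][0] + h * s]
        # Richardson extrapolation up to the configured order
        for j in range(1, min(i, order) + 1):
            row.append(row[j - 1] + (row[j - 1] - R[i - 1][j - 1]) / (4 ** j - 1))
        R.append(row)
        if abs(row[-1] - R[i - 1][-1]) <= tol * (abs(row[-1]) + 1e-300):
            return row[-1]
    return R[-1][-1]
```

A low `order` spends little on extrapolation per step but may need more halvings; a high `order` converges in fewer steps at a higher per-step cost, which is the knob discussed above.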
3.2 Handling Truncated Regions
The next problem that arises in practice concerns the truncated regions (i.e., regions
outside the measured grid). If we want to use a mixture model that is naturally
defined on the whole space we must define bins to cover regions extending from grid boundaries to ±∞. In the 1-dimensional case it suffices to define 2 additional bins: one extending from the last bin to +∞, and the other extending from −∞ to the first bin.
In the multivariate case it is more natural to define a single bin

    H_{r+1} = H − ∪_{j=1}^{r} H_j

that covers everything but the data grid than to explicitly describe the out-of-grid
regions. The reason is that we can calculate all the expected values over the whole
space H without actually doing any integration. With this in mind, we readily write
for the integrals over the truncated regions:
Z
r
Z
r
Z
Z
r
Z
Note that no extra work is required to obtain the integrals on the right-hand side of the
equations above. The basic EM Equations (9)-(11) require the calculation of expected
values similar to those defined in Equation (2) for each bin. Note, however, that the
only difference between those expected values and integrals on the right-hand side of
Equations (12)-(14) is the normalizing constant 1=P j (\Phi). Because the normalizing
constant does not affect the integration, it suffices to separately record normalized and
unnormalized values of integrals for each bin. The normalized values are later used
in equations (9)-(11), while the unnormalized values are used in Equations (12)-(14).
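As an illustration of this bookkeeping (our sketch, assuming a Gaussian component so that the whole-space values on the right-hand side are analytic), the mass and first moment over the truncated region can be obtained purely by complementing the grid-bin integrals:

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * SQRT2PI)

def truncated_region_moments(edges, mu, sigma):
    """Mass and first moment of a Gaussian over the region OUTSIDE the grid,
    obtained by complementing grid-bin integrals against whole-space values,
    in the spirit of Equations (12)-(13)."""
    grid_mass = 0.0
    grid_first = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        p = norm_cdf(b, mu, sigma) - norm_cdf(a, mu, sigma)
        grid_mass += p
        # closed form: int_a^b x phi(x) dx = mu*p + sigma^2*(phi(a) - phi(b))
        grid_first += mu * p + sigma ** 2 * (norm_pdf(a, mu, sigma)
                                             - norm_pdf(b, mu, sigma))
    # whole-space mass is 1 and whole-space first moment is mu
    return 1.0 - grid_mass, mu - grid_first
```

No integration over the unbounded region is ever performed; only the grid-bin quantities (already needed for the E-step) and the analytic whole-space values enter.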
For computational efficiency we take advantage of the sparseness of the bin counts.
Assume that we want to integrate some function (i.e., the PDF) over the whole
grid. Further assume that we require some prespecified accuracy of integration δ. This means that if the relative change of the value of the integral in two consecutive iterations falls below δ we consider the integral to have converged. δ is a small number, typically of the order of 10^(−5) or less. Assume further that we perform integration by
integrating over each bin on the grid and by adding up the results. Intuitively, the
contribution from some bins will be large (i.e., from the bins with significant PDF
mass in them), while the contribution from others will be negligible (i.e., from the
bins that contain near zero PDF mass). If the data are sparse, there will be many bins
with negligible contributions. The goal is to avoid wasting time integrating
over numerous nearly empty bins that contribute significantly neither to the integral nor to the
accuracy of the integration.
To see how this influences the overall accuracy, consider the following simplified
analysis. Let the size of the bins be proportional to H and let the mean height of the
PDF be approximately F. Let there be of the order of pN bins with relevant PDF mass
in them, where N is the total number of bins. A rough estimate of the
integral over all bins is given by I ≈ FHpN. Since the accuracy of integration is of
order δ, we are tolerating an absolute error in integration of order δI. On the other hand,
assume that in the irrelevant bins the value of the PDF has height on the order of
εF, where ε is some small number. The estimated contribution of the irrelevant bins
to the value of the integral is I′ ≈ εFH(1 − p)N, which is approximately I′ ≈ (ε/p)I
for sparse data (i.e., p is small compared to 1). The estimated contribution of the
irrelevant bins to the absolute error of integration is δ′I′, where δ′ is the accuracy
of integration within irrelevant bins. Since any integration is as accurate as its least
accurate part, in an optimal scheme the contributions to the error of integration from
the irrelevant and relevant bins are comparable. In other words, it is suboptimal
(as we also confirm experimentally in the result section) to choose δ′ any smaller
than required by δ′ε/p ≈ δ. This means that integration within any bin with low
probability mass (i.e., height of order εF) need not be carried out more accurately than δ′ ≈ (p/ε)δ.
Note that as ε → 0 we can integrate less and less accurately within each bin
without hurting the overall integral over the full grid.

Figure 2: Execution time of a single EM step as a function of the threshold ε for several
different values k of the Romberg integration order. For k = 2 the values were off-scale.
Results based on fitting a two-component mixture to 40,000 red blood cell measurements
in two dimensions on 100 × 100 bins.

Figure 3: Quality of solution (measured by log-likelihood, within a multiplicative
constant) as a function of time for different variations on the algorithm.

Note also that as ε → 0 and δ′ becomes o(1), we can start using a single iteration of the simplest possible integration
scheme and still stay within the allowed limit of δ′. To summarize, given a value for ε,
the algorithm estimates the average height F of the PDF and for all the bins with PDF
values less than εF uses a single iteration of a simple and fast integrator. The original
behavior is recovered by setting ε = 0 (i.e., no bins are integrated "quickly"). This
general idea provides a large computational gain with virtually no loss of accuracy
(note that δ controls overall accuracy, while ε adds only a small correction to δ).
For example, we have found that the variability in parameter estimates from using
different small values of ε is much smaller than the bin size and/or the variability in
parameter estimates from different (random) initial conditions.
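The thresholding idea can be sketched in a few lines of code. The following is an illustrative reconstruction rather than the paper's implementation: a composite midpoint rule stands in for Romberg integration, the grid is 1-dimensional for brevity, and all names (`integrate_pdf_over_grid`, `eps`, `fine_n`) are our own.

```python
import math

def integrate_pdf_over_grid(pdf, edges, eps=1e-3, fine_n=32):
    """Integrate a 1-D pdf over a grid of bins, spending little effort on
    bins whose pdf height falls below eps times the average height F.
    Cheap bins get one midpoint evaluation; the rest get a fine composite
    midpoint rule (standing in here for Romberg integration)."""
    bins = list(zip(edges[:-1], edges[1:]))
    heights = [pdf((a + b) / 2.0) for a, b in bins]
    F = sum(heights) / len(heights)  # rough average pdf height over the grid
    total = 0.0
    for (a, b), h in zip(bins, heights):
        width = b - a
        if h < eps * F:
            total += h * width  # single cheap evaluation for a near-empty bin
        else:
            step = width / fine_n  # fine composite midpoint rule
            total += step * sum(pdf(a + (k + 0.5) * step) for k in range(fine_n))
    return total
```

Running this on a standard normal over a [-5, 5] grid shows that treating the near-empty tail bins with a single evaluation changes the result only negligibly, as the accuracy analysis above predicts.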
Figure 2 shows the time required to complete a single EM step for different values
of k (the Romberg integration order) and ε. The time is minimized for the different values
of ε by using k = 4, and is greatest for k = 2 (off-scale in Figure 2):
choosing either too low or too high an integration order is quite computationally
inefficient.
3.3 The Full EM Algorithm
After fine tuning each single EM iteration step above we are able to significantly
cut down on the execution time. However, since each step is still computationally
intensive, it is desirable to have EM converge as quickly as possible (i.e., to have as
few iterations as possible).
With this in mind we use the following additional heuristic. We take a random
sample of binned points and randomize the coordinates of each point around the
corresponding bin center (we use the uniform distribution within each bin). The
EM algorithm for this non-binned and non-truncated data is relatively fast as a
closed form solution exists for each EM step (without any integration). Once the EM
algorithm converges to a solution in parameter space on this initial data set, we use
these parameters as initial starting points for the EM algorithm on the full set of
binned and truncated data. This second application of EM (using the methodology
described earlier in this paper) refines the initial guesses to a final solution, typically
taking just a few iterations. Note that this initialization scheme cannot affect the
accuracy of the results, as the log-likelihood on the full set of binned and truncated
data is used as the final criterion for convergence.
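The sampling-and-jitter step can be sketched as follows (1-dimensional for brevity; the function name and interface are our own, not the paper's):

```python
import random

def sample_from_histogram(counts, edges, k, rng=random):
    """Draw k synthetic points from a 1-D histogram: pick a bin with
    probability proportional to its count, then jitter the point uniformly
    within that bin, as in the initialization heuristic described above."""
    total = float(sum(counts))
    cum, acc = [], 0.0
    for c in counts:
        acc += c
        cum.append(acc)
    points = []
    for _ in range(k):
        u = rng.random() * total
        # first bin whose cumulative count exceeds u
        i = next(j for j, c in enumerate(cum) if u < c)
        a, b = edges[i], edges[i + 1]
        points.append(a + rng.random() * (b - a))  # uniform jitter inside the bin
    return points
```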
Figure 3 illustrates the various computational gains. The y axis is the log-likelihood
(within a multiplicative constant) of the data and the x axis is computation
time. Here we are fitting a two-component mixture on a two-dimensional grid with
100 × 100 bins of red blood cell counts. k is the order of Romberg integration and ε is
the threshold for declaring a bin to be small enough for "fast integration" as described
earlier. All parameter choices (k, ε) result in the same quality of final solution (i.e., all
asymptote to the same log-likelihood eventually). Using no approximation (ε = 0) is
two orders of magnitude slower than using non-zero ε values. Increasing ε from 0.001
to 0.1 results in no loss in likelihood but results in faster convergence. Comparing
the curves where k-means is used to initialize the binned algorithm
versus the randomized initialization method described earlier shows about a factor
of two gain in convergence time for the randomized initialization.
To summarize, the overall algorithm for fitting mixture models to multivariate
binned, truncated data consists of the following steps:
1. Treat the multivariate histogram as a PDF and draw a small number of data
points from it (add some counts to all the bins to prevent 0 probabilities in
empty bins).
2. Fit a standard mixture model to this sample using the usual EM algorithm for
non-binned, non-truncated data.
3. Use the parameter estimates from Step 2, and refine them using the EM algorithm
on the full set of binned and truncated data. This consists of iteratively
applying Equations (9)-(11) for the bins within the grid and applying Equations
(12)-(14) for the single bin outside the grid until convergence as measured by
Equation (1).
4 Experimental Results
4.1 EM Methodology
In the experiments below we use the following methods and parameters in the implementation
of EM.
- The standard EM algorithm is initialized by running k-means from 10 different
initial starting points and choosing the EM solution with the highest likelihood
(to avoid poor local maxima).
- K points are randomly drawn from the binned histogram, where K is chosen to
be 10% of the number of total data points or 100 points, whichever is greater.
Points are drawn using the uniform sampling distribution.
- The binned EM algorithm is initialized by running the standard EM algorithm
with 5 random restarts on the K randomly drawn data points.
- To avoid poor local maxima, the binned EM algorithm chooses the solution with
the highest likelihood out of solutions from 10 different random initializations.
- Convergence of the standard and binned/truncated EM is judged by a change
of less than 0.01% in the log-likelihood, or after a maximum of 20 EM iterations,
whichever comes first.
- The order of the Romberg integration is set to 3 and ε is set to 10^−4.
- The default accuracy of the integration is set to
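The convergence rule in the settings above can be stated directly in code; the function name and interface are illustrative:

```python
def em_converged(loglik_history, rel_tol=1e-4, max_iter=20):
    """Stopping rule sketched from the settings above: stop when the relative
    change in log-likelihood drops below 0.01% (rel_tol = 1e-4) or after a
    maximum of 20 EM iterations, whichever comes first."""
    if len(loglik_history) >= max_iter:
        return True
    if len(loglik_history) < 2:
        return False
    prev, cur = loglik_history[-2], loglik_history[-1]
    return abs(cur - prev) <= rel_tol * abs(prev)
```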
4.2 Simulation Experiments
We simulated data from a two-dimensional mixture of two Gaussians, centered at
(-1.5,0) and (1.5,0) with unit covariance matrices. We then varied the number of
data points N in steps of 10 from N = 10 to N = 1000, and drew 10 random
samples of size N from the bivariate mixture. In addition we varied the number of
bins per dimension in steps of 5 from B = 5 to B = 100, so that the original unbinned
samples were quantized into B^2 bins. The range of the grid extended from (-5,-5) to
(5,5) so that truncation was relatively rare.
On the original unbinned samples we ran the standard EM algorithm, and on
the binned data we ran the binned version of EM (using for both versions of the
EM the parameters and settings described earlier). The purpose of the simulation
was to observe the effect of binning and sample size on the quality of the solution.
Note that the standard algorithm is typically given much more information
about the data (i.e., the exact locations of the data points) and, thus, on average we
expect it to perform better than any algorithm which only has binned data to learn
from. To measure solution quality we calculated the Kullback-Leibler (K-L) (or
cross-entropy) distance between each estimated density and the true known density.

Figure 4: Average KL distance between the estimated density (estimated using the
procedure described in this paper) and the true density, as a function of the number
of bins and the number of data points.

Figure 5: Average KL distance (log-scale) between the estimated densities and the
true density as a function of the number of bins, for different sample sizes (curves for
binned and standard EM at 100, 300, and 1000 data points per component), compared
to standard EM on the unbinned data.

The K-L distance is non-negative and is zero if and only if two densities are identical. We
calculated the average K-L distance over the 10 samples for each value of N and B, for
both the binned and the standard EM algorithms. In total, each of the standard and
binned algorithms were run 20,000 different times to generate the reported results.
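For reference, a discretized K-L distance of this kind can be approximated by evaluating both densities at grid cell centers and normalizing; this is a generic sketch, not necessarily the authors' exact numerical procedure:

```python
import math

def kl_on_grid(p_true, p_est, cells):
    """Approximate KL(p_true || p_est) by evaluating both densities at grid
    cell centers and normalizing the resulting discrete distributions."""
    pt = [p_true(c) for c in cells]
    pe = [p_est(c) for c in cells]
    st, se = sum(pt), sum(pe)
    kl = 0.0
    for a, b in zip(pt, pe):
        a, b = a / st, b / se
        if a > 0.0:  # terms with zero true mass contribute nothing
            kl += a * math.log(a / b)
    return kl
```

The quantity is zero when the two densities agree on the grid and strictly positive otherwise, matching the property noted in the text.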
Figure 4 shows a plot of the average KL-distance for the binned EM algorithm, as a
function of the number of bins and the number of data points. One can clearly
see a "plateau" effect in that the KL-distance between the solution and the generating
true density (a measure of quality of the solution) is relatively close to zero when
the number of bins is above 20 and the number of data points is above 500. As a
function of N, the number of data points, one sees the typical exponentially decreasing
"learning curve," i.e., the KL-distance decreases roughly in proportion to N^−α for
some constant α. As a function of the number of bins B, there appears to be more of a threshold
effect: with more than 20 bins the solution quality is again relatively flat as a function
of the number of bins. Below this threshold the solutions rapidly decrease in quality (e.g.,
for the smallest numbers of bins there is a significant degradation).
In Figure 5 we plot the KL distance (log-scale) as a function of the number of bins, for specific
values of N (100, 300, and 1000 data points per component), comparing both the standard and binned versions
of EM. For each of the 3 values of N, the curves have the same qualitative shape:
a rapid improvement in quality as B increases up to about 20, followed by relatively
flat performance (i.e., no sensitivity to B) above 20. For each of the 3 values of
N, the binned EM "tracks" the performance of the standard EM quite closely: the
difference between the two becomes smaller as N increases.

Figure 6: Average KL distance (log-scale) between the estimated densities and the
true density as a function of sample size, for different numbers of bins per dimension,
compared to standard EM on the unbinned data.

The variability in the curves
is due to the variability in the 10 randomly sampled data sets for each particular value
of B and N. Note that for B ≥ 20 the difference between the binned and standard
versions of EM is smaller than the "natural" variability due to random sampling
effects.
Figure
6 plots the average KL distance (log-scale) as a function of N , the number
of data points per dimension, for specific numbers of bins B. Again we compare the
binned algorithm (for various B values) with the standard unbinned algorithm. Overall
we see the characteristic exponential decay (linear on a log-log plot) for learning
curves as a function of sample size. Again, for B ≥ 20 the binned EM tracks the
standard EM quite closely.
The results suggest (on this particular problem at least) that the EM algorithm
for binned data is more sensitive to the number of bins than it is to the number of
data points, in terms of comparative performance to EM on unbinned data. Above
a certain threshold number of bins (here 20), the binned version of EM appears
to be able to recover the true shape of the densities almost as well as the version of
EM which sees the original unbinned data.
Figure 7: Contour plots from estimated density estimates (volume versus hemoglobin
concentration) for three typical control patients (Control #1-#3) and three typical iron
deficient anemia patients (Iron Deficient #1-#3). The lowest 10% of the probability
contours are plotted to emphasize the systematic difference between the two groups.
4.3 Application to Red Blood Cell Data
As mentioned at the beginning of the paper this work was motivated by a real-world
application in medical diagnosis based on two-dimensional histograms characterizing
red blood cell volume and hemoglobin measurements (see Figure 1).
McLaren (1996) summarizes prior work on this problem: the one-dimensional
mixture-fitting algorithm of McLachlan and Jones (1988) was used to fit mixture
models to one-dimensional red blood cell volume histograms. Mixture models are
particularly useful in this context as a generative model since it is plausible that different
components in the model correspond to blood cells in different states. In Cadez
et al (1999) we generalized the earlier work of McLaren et al (1991) and McLaren
(1996) on one-dimensional volume data to the analysis of two-dimensional volume-
hemoglobin histograms. Mixture densities were fit to histograms from 97 control
subjects and 83 subjects with iron deficient anemia, using the binned/truncated EM
procedure described in the present paper. Figure 3 demonstrated the improvement
in computation time which is achievable; the data in Figure 3 are for a 2-component
mixture model fit to a control subject with a 2-dimensional histogram of 40,000 red
blood cells.
Figure
7 shows contour probability plots of fitted mixture densities for 3 control
and 3 iron deficient subjects, where we plot only the lower 10% of the probability
density function (since the differences between the two populations are more obvious
in the tails). One can clearly see systematic variability within the control and the
iron deficient groups, as well as between the two groups. Since the number of bins
is relatively large (100 in each dimension), as is the number of data points
(40,000), the simulation results from the previous section would tend to suggest that
these density estimates are likely to be relatively accurate (compared to running EM
on unbinned data).
In Cadez et al (1999) we used the parameters of the estimated mixture densities
as the basis for supervised classification of subjects into the two groups, with a resulting
error rate of about 1.5% in cross-validated experiments. This compares with
a cross-validated error rate of about 4% on the same subjects using algorithms such
as CART or C5.0 directly on features from the histogram such as univariate means
and standard deviations (i.e., using no mixture modeling). Thus, the ability to fit
mixture densities to binned and truncated data played a significant role in improved
classification performance on this particular problem.
Conclusions
The problem of fitting mixture densities to multivariate binned and truncated data
was addressed using a generalization of McLachlan and Jones' (1988) EM procedure
for the one-dimensional problem. The multivariate EM algorithm requires multivariate
numerical integration at each EM iteration. We described a variety of computational
and numerical implementation issues which need careful consideration in
this context. Simulation results indicate that high quality solutions can be obtained
compared to running EM on the "raw" unbinned data, unless the number of bins is
relatively small.
Acknowledgements
The contributions of IC and PS to this paper have been supported in part by the
National Science Foundation under Grant IRI-9703120. The contribution of CMcL
has been supported in part by grants from the National Institutes of Health
(R43-HL46037 and R15-HL48349) and a Wellcome Research Travel Grant awarded by the
Burroughs Wellcome Fund. We thank Thomas H. Cavanagh for providing laboratory
facilities. We are grateful to Dr. Albert Greenbaum for technical assistance.
--R
Neural Networks for Pattern Recognition
'Hierarchical models for screening of iron deficiency anemia,' submitted to ICML-99
'Maximum likelihood from incomplete data via the EM algorithm,' J.
Statistical Analysis with Missing Data
'Fitting mixture models to grouped and truncated data via the EM algorithm,' Biometrics
The EM Algorithm and Extensions
'Mixture models in haematology: a series of case studies,' Statistical Methods in Medical Research
Numerical Recipes in C: the Art of Scientific Computing
Elements of Statistical Computing
--TR
Statistical analysis with missing data
Elements of statistical computing
Color indexing
Numerical recipes in C (2nd ed.)
Intelligent multimedia information retrieval
Histogram-based estimation techniques in database systems
Wavelet-based histograms for selectivity estimation
Multi-dimensional selectivity estimation using compressed histogram information
Neural Networks for Pattern Recognition
Query by Image and Video Content
Hierarchical Models for Screening of Iron Deficiency Anemia
--CTR
Nizar Bouguila , Djemel Ziou, Unsupervised learning of a finite discrete mixture: Applications to texture modeling and image databases summarization, Journal of Visual Communication and Image Representation, v.18 n.4, p.295-309, August, 2007 | binned;iron deficiency anemia;KL-distance;truncated;mixture model;histogram |
584651 | Learning Recursive Bayesian Multinets for Data Clustering by Means of Constructive Induction. | This paper introduces and evaluates a new class of knowledge model, the recursive Bayesian multinet (RBMN), which encodes the joint probability distribution of a given database. RBMNs extend Bayesian networks (BNs) as well as partitional clustering systems. Briefly, a RBMN is a decision tree with component BNs at the leaves. A RBMN is learnt using a greedy, heuristic approach akin to that used by many supervised decision tree learners, but where BNs are learnt at leaves using constructive induction. A key idea is to treat expected data as real data. This allows us to complete the database and to take advantage of a closed form for the marginal likelihood of the expected complete data that factorizes into separate marginal likelihoods for each family (a node and its parents). Our approach is evaluated on synthetic and real-world databases. | Introduction
One of the main problems that arises in a great variety of fields, including pattern
recognition, machine learning and statistics, is the so-called data clustering problem
[1, 3, 7, 14, 15, 22, 25]. Data clustering can be viewed as a data-partitioning problem,
where we partition data into different clusters based on a quality or similarity criterion
(e.g., as in K-Means [30]). Alternatively, data clustering is one way of representing the
joint probability distribution of a database. We assume that, in addition to the observed
or predictive attributes, there is a hidden variable. This unobserved variable reflects
the cluster membership for every case in the database. Therefore, the data clustering
problem is also an example of learning from incomplete data due to the existence of such
a hidden variable. Incomplete data represents a special case of missing data, where all
the missing entries are concentrated in a single (hidden) variable. That is, we refer to a
given database as incomplete when the classification is not given. Parameter estimation
and model comparison in classical and Bayesian statistics provide a solution to the
data clustering problem. The most frequently used approaches include mixture density
models (e.g., Gaussian mixture models [3]) and Bayesian networks (e.g., AutoClass [8]).
We aim to automatically recover the joint probability distribution from a given
incomplete database by learning recursive Bayesian multinets (RBMNs). Roughly, a
recursive Bayesian multinet is a decision tree [4, 44] where each decision path (i.e., a
conjunction of predictive attribute-value pairs) ends in an alternate component Bayesian
network (BN) [6, 24, 29, 38].
RBMNs are a natural extension of BNs. While the conditional (in)dependencies
encoded by a BN are context-non-specific conditional (in)dependencies, RBMNs allow
us to work with context-specific conditional (in)dependencies [21, 49], which differ from
decision path to decision path.
Our heuristic approach to the learning of RBMNs requires the learning of its component
BNs from incomplete data. In the last few years, several methods for learning
BNs have arisen [5, 12, 23, 37, 48], some of them that learn from incomplete data
[9, 17, 33, 39, 40, 49]. We describe how the Bayesian heuristic algorithm for the learning
of BNs for data clustering developed by Peña et al. [39] is extended to learn RBMNs.
A key step in the Bayesian approach to learning graphical models in general and BNs
in particular is the computation of the marginal likelihood of data given the model. This
quantity is the ordinary likelihood of data averaged over the parameters with respect
to their prior distribution. When dealing with incomplete data, the exact calculation
of the marginal likelihood is typically intractable [12], thus, such a computation has to
be approximated [11]. The existing methods are rather ine-cient for our purpose of
eliciting a RBMN from an incomplete database, since they do not factorize into scores
for families (i.e., nodes and their parents). Hence, we would have to recompute the
score for the whole structure from anew, although only the factors of some families had
changed.
To avoid this problem, we use the algorithm developed in [39] based upon the work
done in [49]. We search for parameter values for the initial structure by means of the EM
algorithm [13, 31], or by means of the BC+EM method [40]. This allows us to complete
the database by using the current model, that is, by treating expected data as real data,
which results in the possibility of using a score criterion that is both in closed form and
factorable into scores for families.
The remainder of this paper is organized as follows. In Section 2, we describe BNs,
Bayesian multinets (BMNs) and RBMNs for data clustering. Section 3 is dedicated to
the heuristic algorithm for the learning of component BNs from incomplete data. In
Section 4, we describe the algorithm for the learning of RBMNs for data clustering.
Finally, in Section 5 we present some experimental results. The paper finishes with
Section 6 where we draw some conclusions and outline some lines of further research.
2 BNs, BMNs and RBMNs for data clustering
2.1 Notation
We follow the usual convention of denoting variables by upper-case letters and their
states by the same letters in lower-case. We use a letter or letters in bold-face upper-case
to designate a set of variables and the same bold-face lower-case letter or letters to
denote an assignment of state to each variable in a given set. |X| is used to refer to the
number of states of the variable X. We use p(x | y) to denote the probability that
X = x given Y = y. We also use p(x | y) to denote the conditional probability distribution
(mass function, as we restrict our discussion to the case where all the variables are
discrete) for X given y. Whether p(x | y) refers to a probability or a conditional
probability distribution should be clear from the context.
As we mentioned, when facing a data clustering problem we assume the existence
of the n-dimensional random variable X that is partitioned as X = (Y, C) into an
(n − 1)-dimensional random variable Y (predictive attributes), and a unidimensional
hidden variable C (cluster variable).
2.2 BNs for data clustering
Given an n-dimensional variable X = (Y, C), a BN [6, 24, 29, 38]
for X is a graphical factorization of the joint probability distribution for X. A BN
is defined by a directed acyclic graph b (model structure) determining the conditional
(in)dependencies among the variables of X and a set of local probability distributions.
When b contains an arc from the variable X_j to the variable X_i, X_j is referred to as a
parent of X_i. We denote by Pa(b)_i the set of all the parents that the variable X_i has in
b. The structure lends itself to a factorization of the joint probability distribution for
X as follows:

p(x) = ∏_{i=1}^{n} p(x_i | pa(b)_i)        (1)

where pa(b)_i denotes the state of the parents of X_i, Pa(b)_i, consistent with x. The local
probability distributions of the BN are those in Equation 1 and we assume that they
depend on a finite set of parameters θ_b ∈ Θ_b. Therefore, Equation 1 can be rewritten
as follows:

p(x | θ_b) = ∏_{i=1}^{n} p(x_i | pa(b)_i, θ_b)        (2)
If b^h denotes the hypothesis that the conditional (in)dependence assertions implied
by b hold in the true joint probability distribution for X, then we obtain from Equation 2:

p(x | θ_b, b^h) = ∏_{i=1}^{n} p(x_i | pa(b)_i, θ_b, b^h)        (3)

According to the partition of X as X = (Y, C), Equation 3 can be rewritten as
follows:

p(x | θ_b, b^h) = p(c | θ_b, b^h) ∏_{i=1}^{n−1} p(y_i | c, prepa(b)_i, θ_b, b^h)        (4)

where prepa(b)_i denotes the state of those parents of Y_i that correspond to predictive
attributes, consistent with y.
Thus, a BN is completely defined by a pair (b, θ_b). The first of the two components
is the model structure, and the second component is the set of parameters for the local
probability distributions corresponding to b. See Figure 1 for an example of a BN
structure for data clustering with five predictive attributes.
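Evaluating the factorization of Equation 1 for a discrete BN is mechanical; the data-structure choices below (dicts of parent lists and CPT callables) are our own illustrative ones:

```python
def bn_joint_probability(assignment, parents, cpts):
    """Evaluate Equation 1, p(x) = prod_i p(x_i | pa(b)_i), for a discrete BN.
    `parents` maps each variable to its (ordered) parent list; `cpts` maps
    each variable to a function (value, parent_values) -> probability."""
    p = 1.0
    for var, pa in parents.items():
        pa_vals = tuple(assignment[q] for q in pa)  # pa(b)_i consistent with x
        p *= cpts[var](assignment[var], pa_vals)
    return p
```

For a naive-Bayes-like fragment C → Y_1 with p(C = 0) = 0.5 and p(Y_1 = 0 | C = 0) = 0.8, the joint p(C = 0, Y_1 = 0) evaluates to 0.4.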
2.3 BMNs for data clustering
The conditional (in)dependencies determined by the structure of a BN are called
context-non-specific conditional (in)dependencies [49], also known as symmetric conditional
(in)dependencies [21]. That is, if the structure implies that two sets of variables are
independent given some configuration (or state) of a third set of variables, then the
two first sets are also independent given every other configuration of this third set
Figure 1: Example of the structure of a BN for data clustering for X = (Y_1, ..., Y_5, C).
It follows from the figure that the joint probability distribution factorizes according to
Equation 4.
of variables. A BMN [21] is a generalization of the BN model that is able to encode
context-specific conditional (in)dependencies [49], also known as asymmetric conditional
(in)dependencies [21]. Therefore, a BMN structure may imply that two sets of variables
are independent given some configuration of a third set, and dependent given another
configuration of this third set. Formally, a BMN for X = (Y, C) and
distinguished variable G ∈ Y is a graphical factorization of the joint probability distribution
for X. A BMN is defined by a probability distribution for G and a set of
component BNs for X∖{G}, each of which encodes the joint probability distribution for
X∖{G} given a state of G. Because the structure of each component BN may vary, a
BMN can encode context-specific conditional (in)dependence assertions. In this paper,
we limit the distinguished variable G to be one of the original predictive attributes.
However, [49] allows the distinguished variable to be either one of the predictive attributes
or the hidden cluster variable C. When the latter happens, each leaf represents
a single cluster. These models are called mixtures of BNs according to [49]. Figure 2
shows the structure of a BMN for data clustering when the distinguished variable G has
two values.
Let s and θ_s denote the structure and parameters of a BMN for X and distinguished
variable G. In addition, let us suppose that b_g and θ_g denote the structure and parameters
of the g-th component BN of the BMN. Also, let s^h denote the hypothesis that the
context-specific conditional (in)dependencies implied by s hold in the true joint probability
distribution for X and distinguished variable G. Therefore, the joint probability
distribution for X encoded by the BMN is given by:

p(x | θ_s, s^h) = p(g | θ_s, s^h) p(x∖{g} | θ_g, b_g^h)        (5)

where θ_s denotes the parameters of the BMN, and b_g^h is a short-hand
for the conjunction of s^h and G = g. The last term of
the previous equation can be further factorized according to the structure of the g-th
component BN of the BMN (Equation 4).
Thus, a BMN is completely defined by a pair (s, θ_s). The first of the two components
is the structure of the BMN, and the second component is the set of parameters.
We may see a BMN as a depth-one decision tree [4, 44], where the distinguished
variable is the root and there is a branch for each of its states. At the end of each of
Figure 2: Example of the structure of a BMN for data clustering for X = (Y_1, ..., Y_5, C)
and distinguished variable G = Y_5. There are two component BNs,
as the distinguished variable is dichotomic (|Y_5| = 2). Dotted lines correspond to the
distinguished variable Y_5.
these branches is a leaf which is a component BN. Thus, it is helpful to see the dotted
lines of Figure 2 as forming a decision tree with component BNs as leaves.
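Under this decision-tree view, evaluating the BMN's joint probability is a lookup on the distinguished variable followed by a component-BN evaluation. A sketch with an illustrative interface, where component BNs are abstracted as callables:

```python
def bmn_joint_probability(case, g_var, p_g, components):
    """Evaluate a BMN joint probability in the spirit of Equation 5:
    multiply p(g) by the g-th component BN's joint probability over the
    remaining variables. `components` maps each value of the distinguished
    variable to a callable scoring the case without that variable."""
    g = case[g_var]
    rest = {k: v for k, v in case.items() if k != g_var}  # x without g
    return p_g[g] * components[g](rest)
```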
2.4 RBMNs for data clustering
Let us continue with the view of a BMN as a depth-one decision tree where leaves are component
BNs. We propose to use deeper decision trees where leaves are still component
BNs. By definition, every component of a BMN is limited to be a BN. A RBMN allows
every component to be either a BN (at a leaf) or, recursively, a RBMN.
RBMNs extend BNs and BMNs, but RBMNs also extend partitional clustering systems
[16]. RBMNs can be considered as extensions of BNs because, like BMNs and mixtures
of BNs, RBMNs allow us to encode context-specific conditional (in)dependencies.
Thus, they constitute a more flexible tool than BNs and provide the user with structured
and specialized domain knowledge, as alternative component BNs are learnt for every
decision path. Moreover, RBMNs generalize the idea behind BMNs by offering the possibility
of having decision paths with conjunctions of as many predictive attribute-value
pairs as we want. The only constraint is that these decision paths must be represented
by a decision tree.
Additionally, RBMNs extend traditional partitional clustering systems. A previous
work with the same aim is [16], where Fisher and Hapanyengwi propose to perform
data clustering based upon a decision tree. The measure used to select the divisive
attribute at each node during the decision tree construction consists of the computation
of the sum of information gains over all attributes, while in the supervised paradigm the
measure is limited to the information gain over a single specified class attribute. This is
a natural generalization of the work on supervised learning, where the performance task
comprises the prediction of only one attribute from the knowledge of many, whereas the
generic performance task in unsupervised learning is the prediction of many attributes
from the knowledge of many.

Figure 3: Example of the structure of a 2-level RBMN for data clustering with
distinguished decision tree T. This RBMN has two
component BMNs, each of them with two component BNs (assuming that the variables
in the distinguished decision tree are all dichotomic). Dotted lines correspond to the
distinguished decision tree T.

Thus, RBMNs and the work by Fisher and Hapanyengwi
aim to learn a decision tree with knowledge at leaves sufficient for making inference
along many attributes. This implies that both paradigms are considered extensions
of traditional partitional clustering systems as they are concerned with characterizing
clusters of observations rather than partitioning them.
We define a RBMN according to the intuitive idea of a decision tree with component
BNs as leaves. Let T be a decision tree, here referred to as the distinguished decision
tree, where (i) every internal node in T represents a variable of Y, (ii) every internal
node has as many children or branches coming out from it as states of the variable
represented by the node, (iii) all the leaves are at the same level, and (iv) if T(root; l) is
the set of variables that are in the decision path between the root and the leaf l of the
distinguished decision tree, then there are no repeated variables in T(root; l). Condition
(iii) is imposed to simplify the understanding of RBMNs and their learning, but such a
constraint can be removed in practice. Let us define X \ T(root; l) as the set of all the
variables in X except those that are in the decision path between the root and the leaf
l of the distinguished decision tree T. Thus, a RBMN for X
and distinguished decision tree T is a graphical factorization of the joint probability
distribution for X. A RBMN is defined by a probability distribution for the leaves of
T and a set of component BNs, each of which encodes the joint probability distribution
for X \ T(root; l) given the l-th leaf of T. Thus, the component BN at every leaf of
the distinguished decision tree does not consider attributes involved in the tests on the
decision path leading to the leaf.
Obviously, BMNs are a special case of RBMNs in which T is a distinguished decision
tree with only one internal node, the distinguished variable. Moreover, we could assume
that BNs are also a special case of RBMNs in which the distinguished decision tree
contains no internal nodes.
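To make the definition above concrete, the following minimal sketch (an illustration under our own naming conventions, not the paper's implementation) routes a complete case down the distinguished decision tree and evaluates the joint probability as the leaf probability times the component BN's probability for the remaining variables:

```python
class Leaf:
    def __init__(self, weight, component_bn):
        self.weight = weight              # probability of this leaf in the leaf distribution
        self.component_bn = component_bn  # callable: case -> p(remaining variables | leaf)

class Node:
    def __init__(self, var, children):
        self.var = var            # predictive attribute tested at this internal node
        self.children = children  # dict: state -> Node or Leaf

def rbmn_probability(tree, case):
    """p(x) = p(leaf consistent with x) * p(remaining variables | leaf)."""
    node = tree
    while isinstance(node, Node):
        node = node.children[case[node.var]]  # follow the case's decision path
    return node.weight * node.component_bn(case)

# Toy 1-level tree over a binary attribute Y1; the component BNs are stubbed
# out as constant functions purely for illustration.
tree = Node("Y1", {0: Leaf(0.5, lambda c: 0.2),
                   1: Leaf(0.5, lambda c: 0.4)})
print(rbmn_probability(tree, {"Y1": 0}))  # 0.5 * 0.2 = 0.1
```

Because the tested variables are observed predictive attributes, exactly one leaf is consistent with any complete case, so no sum over components is needed.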
Figure 3 helps us to illustrate the structure of a RBMN for data clustering as a
decision tree where each internal node is a predictive attribute and the branches from
that node are the states of the variable. Every leaf l is a component BN that does not
consider the attributes in T(root; l), so the induction of the component BNs is simplified.
Since every internal node of T is a predictive attribute, the hidden variable C appears
in every component BN. This fact implies that the component BN at each leaf of T
does not represent only one cluster, as Fisher and Hapanyengwi propose in [16], but a
context-specific data clustering. That is, the data clustering encoded by each component
BN is totally unrelated to the data clusterings encoded by the rest. This means that the
probabilistic clusters identified by each component BN are not in correspondence with
those identified by the rest of the component BNs. This is due to the fact that C acts
as a context-specific or local hidden cluster variable for every component BN. To be
exact, every variable of each component BN is a context-specific variable that does not
interact with the variables of any other component BN, since the elicitation of every
component BN is totally independent of the rest. This is not explicitly reflected in the
notation, as every branch identifies each component BN and its variables unambiguously;
additionally, this avoids an overly complex notation. This reasoning should also be applied
to BMNs as they are a special case of RBMNs.
Let s and θ_s denote the structure and parameters of a RBMN for X and distinguished
decision tree T. In addition, let us suppose that b_l and θ_l denote the structure and
parameters of the l-th component BN of the RBMN. Also, let s^h denote the hypothesis
that the context-specific conditional (in)dependencies implied by s hold in the true joint
probability distribution for X and distinguished decision tree T. Therefore, the joint
probability distribution for X encoded by the RBMN is given by:

p(x | θ_s, s^h) = p(t(root; l) | θ_s, s^h) p(x_{X \ T(root; l)} | t(root; l), θ_l, b_l^h)

where the leaf l is the only one that makes x consistent with t(root; l), θ_s = (θ, θ_1, ..., θ_L)
denotes the parameters of the RBMN, L is the number of leaves in T, and b_l^h is a
shorthand for the conjunction of s^h and t(root; l).
The last term of the previous equation can be further factorized
according to the structure of the l-th component BN of the RBMN (Equation 4).
Thus, a RBMN is completely defined by a pair (s, θ_s). The first of the two components
is the structure of the RBMN, and the second component is the set of parameters.
In this paper, we limit our discussion to the case in which the component BNs
are defined by multinomial distributions. That is, all the variables are finite discrete
variables and the local distributions at each variable in the component BNs consist of a
set of multinomial distributions, one for each configuration of the parents. In addition,
we assume that the proportions (probabilities) of data covered by the leaves of T also
follow a multinomial distribution.
As stated, RBMNs extend BNs due to their ability to encode context-specific conditional
(in)dependencies, which increases the expressive power of RBMNs over BNs. A
decision tree effectively identifies subsets of the original database where different component
BNs provide a better, more flexible fit to the data.
Other works in supervised induction identify instance subspaces through local or
component models. Kohavi [27] links Naive Bayes (NB) classifiers and decision tree
learning. On the other hand, the work done by Zheng and Webb [50] combines the
previous work by Kohavi with a lazy learning algorithm to build Bayesian rules where
the antecedent is a conjunction of predictive attribute-value pairs, and the consequent is
a NB classifier. Thus, both works share the fact that they use conjunctions of predictive
attribute-value pairs to define instance subspaces described by NB classifiers. Zheng
and Webb [50] give an extensive experimental comparison between these two and other
approaches for supervised learning in some well-known domains.
Langley [28] proposes to identify instance subspaces where the independence assumptions
made by the NB classifier hold. His work is based upon the recursive split of the
original database by using decision trees where nodes are NB classifiers and leaves are
sets of cases belonging to only one class.
To illustrate how RBMNs structure a clustering for a given database, we use a real-world
domain where data clustering was successfully performed by means of probabilistic
graphical models [41], with the aim of improving knowledge on the geographical distribution
of malignant tumors. A geographical clustering of the towns of the Autonomous
Community of the Basque Country (north of Spain) was performed. Every town was
described by the age-standardized cancer incidence rates of the six most frequent cancer
types for patients of each sex between 1986 and 1994. The authors obtained a geographical
clustering for male patients and a geographical clustering for female patients, as the
differences in the geographical patterns of malignant tumors for patients of each
sex are well-known to the experts. Each clustering was achieved by means
of the learning of a BN. The final clusterings were presented by using colored maps to
partition the towns in such a way that each town was assigned to the most probable
cluster according to the learnt BN, i.e., each town was assigned to the cluster with the
highest posterior probability.
Due to the different geographical patterns for male and female patients, it seems
quite reasonable to assume that a RBMN would be an effective and automatic tool to
tackle this real-world problem without relying on human expertise. That is, the
learning of a RBMN would be able to automatically identify that the instance subspace
for male patients encodes an underlying model different from the one encoded by the
instance subspace for female patients. However, the authors relied on human expertise
to divide the original database and treat male and female cases separately.
Figure 4 shows a RBMN that, ideally, would be learnt, and the structured clustering
obtained from this model. It is easy to see that the clusterings obtained for male and
female patients are different as well as context-specific. Furthermore, [41] reports that
the characterization of each cluster was completely different for male and female patients.
These differences in the geographical patterns can not be captured when learning BNs
from the original joint database. In this example, Figure 4 is also a BMN since the
distinguished decision tree contains only one predictive attribute. However, it is easy
to see that a RBMN might represent a more complex decision tree encoding a more
specialized clustering. For example, we might expect different component BNs for
each of the four conjunctions of two predictive attribute-value pairs; this
example could be encoded by a RBMN with a 2-levels distinguished decision tree.
A key idea in our approach to the learning of a RBMN for data clustering is to
decompose the problem into learning its component BNs from incomplete data. The
component BN corresponding to each leaf l is learnt from an incomplete database that
is a subset of the original incomplete database. This subset contains all the cases of
the original database that are consistent with t(root; l). Therefore, there still exists
a hidden variable when learning every component BN. That is why the problem of
learning a RBMN for data clustering is largely a problem of learning component BNs
from incomplete data. Thus, in the following section, we present a heuristic algorithm
for the learning of a BN from an incomplete database.
Figure 4: Scheme of the structure of the RBMN that, ideally, would be learnt for the
real-world domain described in [41] (the branches SEX=male and SEX=female lead to
the component BNs BN_male and BN_female). Additionally, the clusterings encoded
by the component BNs are shown as colored maps. White towns were excluded from
the study.
3 Learning BNs from incomplete data through constructive induction
In this section, we describe a heuristic algorithm to elicit the component BNs from
incomplete data. We use this heuristic algorithm as part of the algorithm for the learning
of RBMNs for data clustering that we present in the following section.
3.1 Component BN structure
Due to the difficulty involved in learning densely connected BNs and the painfully slow
probabilistic inference when working with them, it is desirable to develop methods for
learning the simplest BNs that fit the data adequately. Some examples of this trade-off
between the cost of the learning process and the quality of the learnt models are
NB models [14, 43], Extended Naive Bayes (ENB) models [36, 37, 39, 42], and Tree
Augmented Naive Bayes models [18, 19, 20, 26, 32, 40]. Despite the wide recognition
that these models are a weaker representation of some domains than more general BNs,
the expressive power of these models is often acceptable. Moreover, these models appeal
to human intuition and can be learnt relatively quickly.
For the sake of brevity, the class of compromise BNs that we propose to learn as
component BNs will be referred to as ENB [42]. ENB models were introduced by Pazzani
[36, 37] as Bayesian classifiers and later used by Peña et al. [39, 42] for data clustering.
ENB models can be considered as occupying an intermediate place between NB models
and models with all the predictive attributes fully correlated (see Figure 5). Thus,
they keep the main features of both extremes: simplicity from NB models and a better
performance from fully correlated models.

Figure 5: Component BN (ENB model) structure that we propose to learn, seen as
having a place between NB models and fully correlated models, when applied to the
data clustering problem.
ENB models are very similar to NB models since all the attributes are independent
given the cluster variable. The only difference from NB models is that the number of
nodes in the structure of an ENB model can be smaller than the original number of
attributes in the database. The reasons are that (i) a selection of the attributes to
be included in the models can be performed, and (ii) some attributes can be grouped
together under the same node as fully correlated attributes (we refer to such nodes as
supernodes^1). Therefore, the class of ENB models ensures a better performance than NB
models while it maintains their simplicity. As we consider all the attributes relevant for
the data clustering task, we do not perform attribute selection as proposed by Pazzani.
The structure of an ENB model for data clustering lends itself to a factorization of
the joint probability distribution for X as follows:

p(x | θ_s, s^h) = p(c | θ_s, s^h) ∏_{i=1}^{r} p(z_i | c, θ_s, s^h)

where (z_1, ..., z_r) is a partition of y, and r is the number of nodes (including the
special nodes referred to as supernodes). Each z_i is the set of values in y for the original
predictive attributes that are grouped together under a supernode Z_i, or it is the value
in y for a predictive attribute Z_i.
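The factorization above can be sketched as follows (a hedged illustration with our own representation choices: nodes are tuples of attribute names, and conditional probability tables are plain dictionaries):

```python
# Each node Z_i is either a single attribute or a supernode (a tuple of grouped
# attributes), and p(x) = p(c) * prod_i p(z_i | c).

def enb_probability(c, case, prior, nodes, cpts):
    """prior: p(c); nodes: list of attribute tuples (supernodes of size 1 are
    plain attributes); cpts: per-node dict mapping (c, z_i) -> probability."""
    p = prior[c]
    for node, cpt in zip(nodes, cpts):
        z_i = tuple(case[a] for a in node)  # value of the (super)node
        p *= cpt[(c, z_i)]
    return p

# Two nodes: attribute Y1 alone, and a supernode grouping Y2 and Y3.
nodes = [("Y1",), ("Y2", "Y3")]
cpts = [{(0, (1,)): 0.5}, {(0, (0, 1)): 0.4}]
case = {"Y1": 1, "Y2": 0, "Y3": 1}
print(enb_probability(0, case, {0: 0.3}, nodes, cpts))  # 0.3 * 0.5 * 0.4
```

Grouping Y2 and Y3 under one supernode is exactly what makes them fully correlated given the cluster: their joint configuration, not each value separately, indexes the table.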
3.2 Algorithm for learning ENB models from incomplete data
The log marginal likelihood is often used as the Bayesian criterion to guide the search
for the best model structure. An important feature of the log marginal likelihood is
that, under some reasonable assumptions, it factorizes into scores for families. When a
criterion is factorable, search is more efficient since we need not reevaluate the criterion
for the whole structure when only the factors of some families have changed. This is
an important feature when working with some heuristic search algorithms, because they
iteratively transform the model structure by choosing the transformation that improves
the score the most and, usually, this transformation does not affect all the families.
1 In the remainder of this paper, we refer to the set of nodes and supernodes of an ENB model simply
as nodes.
1. Choose initial structure and initial set of parameter values
   for the initial structure
2. Parameter search step
3. Probabilistic inference to complete the database
4. Calculate sufficient statistics to compute the log p(d | b^h)
5. Structure search step
6. Reestimate parameter values for the new structure
7. IF no change in the structure has been done
   THEN stop
   ELSE IF interleaving parameter search step
   THEN go to 2
   ELSE go to 3

Figure 6: A schematic of the algorithm for the learning of component BNs (ENB models)
from incomplete data.
When the variable that we want to classify is hidden, the exact calculation of the
log marginal likelihood is typically intractable [12]; thus, we have to approximate such
a computation [11]. However, the existing methods for doing this are rather inefficient
for eliciting the component BNs (ENB models) from incomplete databases, as they do
not factorize into scores for families.
To avoid this problem, we use the heuristic algorithm presented in [39], which is
shown in Figure 6. First, the algorithm chooses an initial structure and parameter
values. Then, it performs a parameter search step to improve the set of parameters
for the current model structure. These parameter values are used to complete the
database, because the key idea in this approach is to treat expected data as real data
(hidden variable completion by means of probabilistic inference with the current model).
Hence, the log marginal likelihood of the expected complete data, log p(d | b^h), can be
calculated in closed form [12]. Furthermore, the factorability of the log marginal
likelihood into scores for families allows an efficient structure search
step. After structure search, the algorithm reestimates the parameters for the new
structure that it nds to be the maximum likelihood parameters given the complete
database. Finally, the probabilistic inference process to complete the database and the
structure search are iterated until there is no change in the structure. Figure 6 shows the
possibility of interleaving the parameter search step or not after each structural change,
though we will not interleave parameter and structure search in the experiments to
follow for reasons of cost.
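The loop of Figure 6 can be written as a generic skeleton in which the domain-specific steps (inference-based completion, scoring, structure search, parameter reestimation) are passed in as callables. All names here are placeholders of ours, not the paper's code:

```python
def learn_component_bn(init, complete_db, score, structure_step,
                       reestimate, max_iters=100):
    """Skeleton of the Figure 6 loop: complete the database with the current
    model, search for a better structure on the completed data, reestimate
    parameters, and stop when the structure no longer changes."""
    structure, params = init()
    for _ in range(max_iters):
        expected_db = complete_db(structure, params)   # probabilistic inference
        new_structure = structure_step(structure, expected_db, score)
        if new_structure == structure:                 # no structural change
            break
        structure = new_structure
        params = reestimate(structure, expected_db)    # ML params on completed data
    return structure, params
```

The `score` argument stands for the factorable log marginal likelihood of the expected complete data; because it factorizes into family scores, `structure_step` only needs to rescore the families a candidate change touches.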
Another key point is that a penalty term is built into the log marginal likelihood to
guard against overly complex models. In [33] a similar use of this built-in penalty term
can be found.
In the remainder of this section, we describe the parameter search step and the
structure search step in more detail.
3.2.1 Parameter search
As seen in Figure 6, the heuristic algorithm that we use considers the possibility of
interleaving parameter and structure search steps. Concretely, this interleaving process
1. FOR every case y in the database DO
   a. Calculate the posterior probability distribution p(c | y)
   b. Let p_max be the maximum of p(c | y), which is reached for c_max
   c. IF p_max > fixing probability threshold
      THEN assign the case y to the cluster c_max
2. Run the BC method
   a. Bound
   b. Collapse
3. Set the parameter values for the current BN to be the BC's
   output parameter values
4. Run the EM algorithm until convergence
5. IF BC+EM convergence
   THEN stop
   ELSE go to 1

Figure 7: A schematic of the BC+EM method.
is done, at least, in the first iteration of the algorithm. By doing so, we ensure a good
set of initial parameter values. For the remaining iterations we can then decide whether
to interleave parameter and structure search steps or not. Although any parameter
search procedure can be considered to perform the parameter search step, currently,
we propose two alternative techniques: the well-known EM algorithm [13, 31], and the
BC+EM method [40].

According to [40], the BC+EM method exhibits a faster convergence rate and more
effective and robust behavior than the EM algorithm. That is why the BC+EM method
is used in our experimental evaluation of RBMNs. Basically, the BC+EM method
alternates between the Bound and Collapse (BC) method [45, 46] and the EM algorithm.
The BC method is a deterministic method to estimate conditional probabilities from
databases with missing entries. It bounds the set of possible estimates consistent with
the available information by computing the minimum and the maximum estimate that
would be obtained from all possible completions of the database. These bounds are
then collapsed into a unique value via a convex combination of the extreme points with
weights depending on the assumed pattern of missing data. This method presents all the
advantages of a deterministic method and a dramatic gain in efficiency when compared
with the EM algorithm [47].
The BC method is designed for use in the presence of missing data, but it is not useful
when there is a hidden variable, as in the data clustering problem. The reason is that the
probability intervals returned by the BC method would be too wide to be informative
about the missing entries of the single hidden variable. The BC+EM method overcomes this
problem by performing a partial completion of the database at each step. See Figure 7
for a schematic of the BC+EM method.
For every case y in the database, the BC+EM method uses the current parameter
values to evaluate the posterior probability distribution for the cluster variable C given
y. Then, it assigns the case y to the cluster with the highest posterior probability
only if this posterior probability is greater than a threshold, the fixing probability threshold,
which the user must determine. The case remains incomplete if there is no cluster with
1. Consider joining each pair of attributes
2. IF there is an improvement in the log p(d | b^h)
   THEN make the join that improves the score the most
   ELSE return the current case representation

Figure 8: A template for the forward structure search step.
1. Consider splitting each attribute at each possible point
2. IF there is an improvement in the log p(d | b^h)
   THEN make the split that improves the score the most
   ELSE return the current case representation

Figure 9: A template for the backward structure search step.
posterior probability greater than the threshold. As some of the entries of the hidden
variable have been completed during this process, we hope to have more informative
probability intervals when running the BC method. The EM algorithm is then executed
to improve the parameter values that the BC method has returned. The process is
repeated until convergence.
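The partial-completion step described above can be sketched as follows (an illustration under our own names; the posterior computation is passed in because it depends on the current model):

```python
def partially_complete(cases, posterior, threshold=0.51):
    """A case is assigned to its most probable cluster only when that
    posterior exceeds the fixing probability threshold; otherwise it
    stays incomplete. posterior(y) -> dict cluster -> p(c | y)."""
    completed = []
    for y in cases:
        post = posterior(y)
        c_max = max(post, key=post.get)
        p_max = post[c_max]
        completed.append({**y, "C": c_max} if p_max > threshold else dict(y))
    return completed

# Toy posterior: the first case is decisively clustered, the second is not.
posterior = lambda y: {0: 0.8, 1: 0.2} if y["Y1"] == 0 else {0: 0.5, 1: 0.5}
out = partially_complete([{"Y1": 0}, {"Y1": 1}], posterior)
print(out)  # [{'Y1': 0, 'C': 0}, {'Y1': 1}]
```

Only the decisively clustered cases feed the BC method with fixed entries, which is what tightens its probability intervals on the next pass.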
3.2.2 Structure search
In [10], Chickering shows that finding the BN structure with the highest log marginal
likelihood from the set of all the BN structures in which each node has no more than k
parents is NP-hard for k > 1. Therefore, it is clear that heuristic methods are needed.
Our particular choice is based upon the work done by Pazzani [36, 37]. Pazzani presents
algorithms for learning augmented NB classifiers (ENB models) by searching for dependencies
among attributes: the Backward Sequential Elimination and Joining (BSEJ)
algorithm and the Forward Sequential Selection and Joining (FSSJ) algorithm. To find
attribute dependencies, these algorithms perform constructive induction [2, 35], which
is the process of changing the representation of the cases in the database by creating
new attributes (supernodes) from existing attributes. As a result, some violations of the
conditional independence assumptions made by NB models are detected and dependencies
among predictive attributes are included in the model. Ideally, a better performance is
reached while the model that we obtain after the constructive induction process maintains
the simplicity of NB models. Pazzani uses the term joining to refer to the process of
creating a new attribute whose values are the Cartesian product of two other attributes.
To carry out this change in the representation of the database, Pazzani proposes a
hill-climbing search combined with two operators: replacing two existing attributes with a
new attribute that is the Cartesian product of the two, and either deleting an
irrelevant attribute (resulting in BSEJ) or adding a relevant attribute (resulting in
FSSJ).
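The joining operator can be sketched directly (a hedged illustration of the change of representation, with names of our choosing):

```python
def join_attributes(cases, a, b, joined_name=None):
    """Replace attributes a and b in every case by a single joined attribute
    whose value is the pair (value of a, value of b), i.e. an element of the
    Cartesian product of the two state spaces."""
    joined_name = joined_name or f"{a}*{b}"
    out = []
    for case in cases:
        new_case = {k: v for k, v in case.items() if k not in (a, b)}
        new_case[joined_name] = (case[a], case[b])
        out.append(new_case)
    return out

cases = [{"Y1": 0, "Y2": 1, "Y3": 2}]
print(join_attributes(cases, "Y1", "Y2"))  # [{'Y3': 2, 'Y1*Y2': (0, 1)}]
```

After the join, the new attribute behaves as a supernode: its joint configurations, rather than Y1 and Y2 separately, are conditioned on the cluster variable.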
The algorithm for the learning of component BNs that we use (Figure 6) starts
from one of two possible initial structures: a NB model or a model with all
the variables fully correlated. When considering a NB model as the initial structure,
the heuristic algorithm performs a forward search step (see Figure 8). On the other
hand, when starting from a fully correlated model, the heuristic algorithm performs a
backward search step (see Figure 9).
Note that the algorithm of Figure 6 has completed the database
before the structure search step is performed. Consequently, the log marginal likelihood
of the expected complete data has a factorable closed form. The algorithm uses the
factorability of the log marginal likelihood to score every possible change in the model
structure efficiently.
4 Learning RBMNs for data clustering
In this section, we present our heuristic algorithm for the learning of RBMNs for data
clustering. This algorithm performs model selection using the log marginal likelihood of
the expected complete data to guide the search. The section starts by deriving a factorable
closed form for the marginal likelihood of data for RBMNs.
4.1 Marginal likelihood criterion for RBMNs
Under the assumptions that (i) the variables in the database are discrete, (ii) cases
occur independently, (iii) the database is complete, and (iv) the prior distribution for
the parameters given a structure is uniform, the marginal likelihood of data has a closed
form for BNs that allows us to compute it efficiently. In particular:

p(d | b^h) = ∏_{i=1}^{n} ∏_{j=1}^{q_i} [(r_i − 1)! / (N_ij + r_i − 1)!] ∏_{k=1}^{r_i} N_ijk!

where n is the number of variables, r_i is the number of states of the variable X_i, q_i
is the number of states of the parent set of X_i, N_ijk is the number of cases in the
database where X_i has its k-th value and the parent set of X_i has its j-th value, and
N_ij = Σ_{k=1}^{r_i} N_ijk (see [12] for a derivation).
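The closed form above is usually evaluated in log space to avoid overflow, using lgamma(n + 1) = log n!. The following sketch (our own representation: the sufficient statistics are precomputed nested lists counts[i][j][k] = N_ijk) illustrates the computation:

```python
from math import lgamma

def log_marginal_likelihood(counts, r):
    """counts[i][j][k] = N_ijk; r[i] = number of states of X_i.
    Returns log p(d | b^h) under the uniform-prior closed form."""
    total = 0.0
    for i, families in enumerate(counts):
        for parent_config in families:              # one j per parent configuration
            n_ij = sum(parent_config)
            # log [(r_i - 1)! / (N_ij + r_i - 1)!], since lgamma(m) = log (m-1)!
            total += lgamma(r[i]) - lgamma(n_ij + r[i])
            for n_ijk in parent_config:
                total += lgamma(n_ijk + 1)          # log N_ijk!
    return total

# Single binary variable with no parents and counts (2, 1):
# p(d) = 1! * 2! * 1! / 4! = 2/24 = 1/12.
print(log_marginal_likelihood([[[2, 1]]], [2]))  # log(1/12) ≈ -2.485
```

Because the outer sum runs independently over variables and parent configurations, a structure change only requires recomputing the terms of the affected families, which is the factorability exploited by the search.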
This important result is extended to BMNs in [49] as follows: let θ_ig denote the set of
parameter variables associated with the local probability distribution of the i-th variable
belonging to X \ {G} in the g-th component BN. Also, let θ denote the set of parameter
variables corresponding to the weights of the mixture of component BNs.
If (i) the parameter variables θ and θ_ig (for all i and g) are mutually
independent given s^h (parameter independence), (ii) the parameter priors p(θ_ig | s^h) are
conjugate for all i and g, and (iii) the data d is complete, then the marginal likelihood
of data has a factorable closed form for BMNs. In particular:

log p(d | s^h) = log p(d_G) + Σ_g log p(d_{X,g} | b_g^h)

where d_G is the data restricted to the distinguished variable G, and d_{X,g} is the data
restricted to the variables X \ {G} and to those cases in which G = g. The term p(d_G)
is the marginal likelihood of a trivial BN having only a single node G. The terms in the
sum are log marginal likelihoods for the component BNs of the BMN.
Furthermore, this observation extends to RBMNs as follows: let θ_il denote the set of
parameter variables associated with the local probability distribution of the i-th variable
belonging to X \ T(root; l) in the l-th component BN. Also, let L denote the number of
leaves in T and m denote the number of levels (depth) of T. Let θ designate the set
1. Start from an empty tree l
2. WHILE stopping condition == FALSE DO
   search_leaf(l)

where search_leaf(l) is

1. IF l is an empty tree or l is a leaf
   THEN extension(l)
   ELSE FOR every child ch of l DO
        search_leaf(ch)

and extension(l) is as follows

1. FOR every variable Y_i in Y \ T(root; l) DO
   a. Let ext be the set of variables X \ (T(root; l) ∪ {Y_i})
   b. FOR every state y_ik of Y_i DO
      i.  Let d_ext,k be the database restricted to the variables
          in ext, and to those cases in the database consistent
          with t(root; l) and y_ik
      ii. Learn a component BN from d_ext,k for the variables
          in ext by means of constructive induction
   c. Score the candidate BMN
2. Choose as extension the candidate BMN with the highest score

Figure 10: A schematic of the algorithm for the learning of RBMNs for data clustering.
of parameter variables corresponding to the weights of the mixture of component
BNs. If (i) the parameter variables θ and θ_il (for all i and l)
are mutually independent given s^h (parameter independence), (ii) the parameter priors
are conjugate for all i and l, and (iii) the data d is complete, then the marginal
likelihood of data has a factorable closed form for RBMNs. In particular:
log p(d | s^h) = Σ_{l=1}^{L} [log p(d_{t(root;l)}) + log p(d_{X,t(root;l)} | b_l^h)]

where d_{t(root;l)} is the database restricted to the variables in T(root; l) and to those
cases consistent with t(root; l), and d_{X,t(root;l)} is the database restricted to the variables in
X \ T(root; l) and to those cases consistent with t(root; l). The sum of the first terms
can be easily calculated as the log marginal likelihood of a trivial BN with a single node
with as many states as leaves in the distinguished decision tree T. The second terms in
the sum are log marginal likelihoods for the component BNs of the RBMN. Thus, under
the assumptions referred to above, there is a factorable closed form to calculate them [12].
Therefore, the log marginal likelihood of data has a closed form for RBMNs, and it can
be calculated from the log marginal likelihoods of the component BNs. This fact allows
us to decompose the problem of learning a RBMN into learning its component BNs.
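The decomposition above can be sketched as a data-partitioning step followed by a sum of per-leaf scores (an illustration with our own names; the scoring functions are stand-ins for the closed forms discussed above):

```python
def restrict(cases, decision_path):
    """Cases consistent with a decision path {var: state, ...}, with the
    tested variables dropped (they are constant within the subset)."""
    return [{k: v for k, v in c.items() if k not in decision_path}
            for c in cases
            if all(c[var] == state for var, state in decision_path.items())]

def rbmn_score(cases, leaf_paths, leaf_term, component_score):
    """leaf_term stands for the trivial-BN score over leaf membership;
    component_score scores one component BN on its restricted database."""
    return leaf_term + sum(component_score(restrict(cases, p))
                           for p in leaf_paths)

cases = [{"Y1": 0, "Y2": 0}, {"Y1": 1, "Y2": 1}, {"Y1": 0, "Y2": 1}]
print(restrict(cases, {"Y1": 0}))  # [{'Y2': 0}, {'Y2': 1}]
```

Since each case is consistent with exactly one leaf, the subsets partition the database, and each component BN can be learnt and scored independently on its own subset.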
Figure 11: Example of the structure of a 2-levels RBMN for data clustering for
distinguished decision tree T. Dotted lines correspond to the distinguished decision
tree T. The component BN at the leaf l is obtained as a result of improving by
constructive induction the NB model for the variables X \ T(root; l).
4.2 Algorithm for learning RBMNs from incomplete data
The heuristic algorithm that we present in this section performs data clustering by
learning, from incomplete data, RBMNs as they were defined in Section 2.4.
The algorithm starts from an empty distinguished decision tree and, at each iteration,
it enlarges the tree by one level until a stopping condition is verified. Stopping
might occur at some user-specified depth, or when no further improvement in the log
marginal likelihood of the expected complete data for the current model (Equation 10) is
observed. To enlarge the current tree, every leaf (component BN) should be extended.
The extension of each leaf l consists of learning the best BMN for X \ T(root; l) and
distinguished variable Y_i, where Y_i ∈ Y \ T(root; l). This BMN replaces the leaf l. For
learning each component BN of the BMN, we use the algorithm presented in Figure 6.
Figure 11 shows an example of a 2-levels RBMN structure that could be the output of
the algorithm that we present in Figure 10.
In this last figure, we can see that the learning algorithm replaces every leaf l by
the best BMN for X \ T(root; l) and distinguished variable Y_i.
This is done as follows: let Y_i be a variable of Y \ T(root; l); for every state y_ik of Y_i,
the algorithm learns a component BN, b_k, for the variables in ext
from an incomplete database d_ext,k (the cluster variable is still hidden), where
ext = X \ (T(root; l) ∪ {Y_i}). This learning is carried out by the heuristic algorithm that we
have presented in Figure 6. The database d_ext,k is a subset of the original database
(an instance subspace); in fact, it is the original database d restricted to the variables in
ext, and to those cases consistent with the decision path t(root; l) and y_ik. After this
process, we have a candidate BMN with distinguished variable Y_i as a possible extension
for the leaf l. Given that Equation 9 provides us with a closed form for the log marginal
likelihood for BMNs, we can use it to score the candidate BMN as follows:

log p(d_{X,t(root;l)} | s^h) = log p(d_{Y_i,t(root;l)}) + Σ_k log p(d_ext,k | b_k^h)

where d_{X,t(root;l)} is as defined before, d_{Y_i,t(root;l)} is the database restricted to the
predictive attribute Y_i and to those cases consistent with t(root; l), d_ext,k is as defined
above, and b_k^h is the k-th component of the BMN. The first term can be calculated as
the log marginal likelihood of a trivial BN having only a single node Y_i, and the terms
in the sum are calculated using Equation 8. Once all the possible candidate BMNs for
extending the leaf l have been scored, the algorithm performs the extension with the
highest score.
5 Experimental results
This section is devoted to the experimental evaluation of the algorithm for the learning of
RBMNs for data clustering using both synthetic and real-world data. All the variables in
the domains that we considered were discrete, and all the local probability distributions
were multinomial distributions. In all the experiments, we assumed that the real number
of clusters was known, thus, we did not perform a search to identify the number of
clusters in the databases.
As we have already mentioned, our algorithm for learning RBMNs from
incomplete data currently considers two alternative techniques to perform the parameter search for
the component BNs: the EM algorithm and the BC+EM method. According to [40],
the BC+EM method exhibits more desirable behavior than the EM algorithm: a faster
convergence rate, and more effective and robust behavior. Thus, the BC+EM method
was the one used in our experimental evaluation, although we are aware that alternative
techniques exist.
The convergence criterion for the BC+EM method was satisfied when either the
relative difference between successive values of the log marginal likelihood for the model
structure was less than 10^-6 or 150 iterations were reached. Following [40], we used a
fixing probability threshold equal to 0.51.
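The stated stopping rule can be sketched directly (a minimal illustration with names of our choosing):

```python
def converged(prev, curr, iteration=0, tol=1e-6, max_iters=150):
    """Stop when the relative difference between successive log marginal
    likelihood values drops below tol, or when max_iters is reached."""
    if iteration >= max_iters:
        return True
    return abs(curr - prev) / abs(prev) < tol

print(converged(-1000.0, -1000.0005))  # True (relative difference 5e-7 < 1e-6)
```

Using the relative rather than absolute difference makes the test insensitive to the scale of the log marginal likelihood, which grows with the database size.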
As shown, the algorithm for the learning of RBMNs runs the algorithm for the learning
of component BNs a large number of times. That is why the runtime of the latter
algorithm should be kept as short as possible. Thus, throughout the experimental evaluation
we did not consider interleaving the parameter search step after each structural
change (Figure 6), though it is an open question whether interleaving parameter
and structure search would yield better results. Prior experiments [39] suggest, however, that
interleaved search does not yield better results in our domains. For the same
reason, we only considered the forward structure search step (Figure 8); thus, the initial
structure for each component BN was always a NB model. These decisions were made
based upon the results of the work done in [39].
5.1 Performance criteria
In this section, we describe the criteria of Table 1 that we use to compare the learnt
models and to evaluate the learning algorithm. The log marginal likelihood criterion
was used to select the best model structure. We use this score to compare the learnt
models as well. In addition to this, we consider the runtime as valuable information. We
also pay special attention to the performance of the learnt models in predictive tasks
expression          comment
sc initial ± s_n    mean ± standard deviation of the log marginal likelihood of the initial model
sc final ± s_n      mean ± standard deviation of the log marginal likelihood of the learnt model
10CV ± s_n          mean ± standard deviation of the predictive ability of the learnt model (10-fold cross-validation)
time ± s_n          mean ± standard deviation of the runtime of the learning process (in seconds)

Table 1: Performance criteria.
(predictive ability). Predictive ability is measured by setting aside a test set. Following
learning, the log likelihood of the test set is measured given the learnt model.
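The cross-validated predictive ability described here can be sketched as below; `learn` is a hypothetical function that trains a model on the training folds and returns a per-case log-probability function (this interface is an assumption, not the paper's code):

```python
def ten_fold_cv(cases, learn, k=10):
    """k-fold cross-validated log likelihood: learn on k-1 folds,
    score the held-out fold, and sum over all folds."""
    folds = [cases[i::k] for i in range(k)]
    total = 0.0
    for i, test in enumerate(folds):
        train = [c for j, f in enumerate(folds) if j != i for c in f]
        model_logprob = learn(train)  # returns a function: case -> log p(case)
        total += sum(model_logprob(c) for c in test)
    return total
```

Because the held-out log likelihood never sees the training data, it measures generalization rather than fit.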
All the experiments were run on a Pentium 366 MHz computer. All the results
reported for the performance criteria are averages over 5 independent runs.
5.2 Results on synthetic data
In this section, we describe our experimental results on synthetic data. Of course, one
of the disadvantages of using synthetic databases is that the comparisons may not be
realistic. However, since the original or gold-standard models are known, they allow
us to show the reliability of the algorithm for the learning of RBMNs from incomplete
data and the improvement achieved by RBMNs over the results scored by BNs.
We constructed 4 synthetic databases (d 1 , d 2 , d 3 , and d 4 ) as follows. In d 1 and
d 2 , there were 11 predictive attributes involved and one 4-valued hidden cluster variable.
9 out of the 11 predictive attributes were 3-valued, and the 2 remaining were binary
attributes. To obtain d 1 and d 2 , we simulated 2 1-level RBMNs. Both models had a
distinguished decision tree with only 1 binary predictive attribute. Thus, there were 2
component BNs in each original model. At each of these component BNs several supernodes
were randomly created. The parameters for each local probability distribution of
the component BNs were randomly generated, provided that they defined a local multinomial
distribution. Moreover, the weights of the mixture of component BNs were equal,
that is, the leaves followed a uniform probability distribution. From each of these 2
RBMNs we sampled 8000 cases resulting in d 1 and d 2 , respectively.
On the other hand, in d 3 and d 4 , there were 12 predictive attributes involved and
one 4-valued hidden cluster variable. 9 out of the 12 predictive attributes were 3-valued,
and the 3 remaining were binary attributes. To obtain d 3 and d 4 , we simulated 2
2-levels RBMNs. Both models had a distinguished decision tree with 3 binary predictive
attributes. Thus, there were 4 component BNs in each original model. At each of these
component BNs several supernodes were randomly created. The parameters for each
local probability distribution of the component BNs were randomly generated, provided
that they defined a local multinomial distribution. Moreover, the weights of the mixture
of component BNs were equal, that is, the leaves followed a uniform probability
distribution. From each of these 2 RBMNs we sampled 16000 cases resulting in d 3 and
d 4 , respectively. Appendix A shows the structures of the 4 original RBMNs sampled.
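The sampling procedure for the synthetic databases can be sketched as follows. The representation is a deliberate simplification and the names are hypothetical: each leaf of the distinguished decision tree is reduced to a sampler function, and the uniform leaf weights justify picking a component uniformly. Each sampler is expected to emit the values of the splitting attributes of its own instance subspace as well.

```python
import random

def sample_rbmn(component_samplers, n_cases, seed=0):
    """Sample a database from an RBMN whose leaves (component BNs) carry
    equal mixture weights: pick a leaf uniformly at random, then draw a
    full case from that component BN."""
    rng = random.Random(seed)
    cases = []
    for _ in range(n_cases):
        sampler = rng.choice(component_samplers)  # uniform leaf weights
        cases.append(sampler(rng))
    return cases
```

A real component-BN sampler would perform ancestral sampling over its network; here any callable drawing one case suffices.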
Obviously, we discarded all the entries corresponding to the cluster variable for the
4 synthetic databases. Finally, every entry corresponding to a supernode was replaced
with as many entries as original predictive attributes that were grouped together under
this supernode. That is, we "decoded" the Cartesian product of original predictive
attributes for every entry in the database corresponding to a supernode.

database  sc initial ± s_n  depth  sc final ± s_n  10CV ± s_n  time ± s_n

Table 2: Performance achieved when learning RBMNs for data clustering from the 4
synthetic databases. All the results are averages over 5 independent runs.
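This decoding step admits a simple mixed-radix sketch, assuming a supernode entry is stored as an index into the Cartesian product of its grouped attributes (first attribute most significant; the concrete encoding used by the authors is not specified):

```python
def decode_supernode(value_index, cardinalities):
    """Decode a supernode entry (an index into the Cartesian product of
    its grouped attributes) back into one value per original attribute.
    cardinalities[i] is the number of values of the i-th grouped attribute."""
    values = []
    for card in reversed(cardinalities):
        values.append(value_index % card)   # peel off least-significant digit
        value_index //= card
    return list(reversed(values))
```

For two 3-valued attributes grouped in one supernode, index 7 decodes to the pair (2, 1), since 7 = 2*3 + 1 under this convention.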
Table 2 compares the performance of the learnt RBMNs for different values of the
column depth, which represents the depth of the distinguished decision trees. Remember
that BNs were assumed to be a special case of RBMNs where the depth of the
distinguished decision trees was equal to 0. It follows from the table that the algorithm
for learning RBMNs from incomplete data is able to discover the complexity of the underlying
model: in the databases d 1 and d 2 , the models with the highest log marginal
likelihood are those with a 1-level distinguished decision tree, whereas, in the databases
d 3 and d 4 , the learnt RBMNs with the highest log marginal likelihood are those with
a 2-levels distinguished decision tree. Thus, the log marginal likelihood of the expected
complete data appears to behave effectively when used to guide the search, and when
considered as the stopping condition.
The detailed analysis of the RBMN learnt in each of the 5 runs for the 4 synthetic
databases considered suggests that, in general, the variables used to split the original
databases in several instance subspaces (internal nodes of the distinguished decision
trees of the RBMNs sampled) are discovered in most of the runs. For instance, all the
runs on d 1 identify Y 1 as the root of the distinguished decision tree. Then, the learnt
RBMNs recover on average 100 % of the true instance subspaces. On the other hand, 3
out of the 5 runs on d 2 discover the true attribute that splits the domain in 2 instance
subspaces, which results in an average of 60 % of true instance subspaces discovered. For
d 3 , 3 out of the 5 runs provide us with an RBMN with Y 12 as the root of the distinguished
decision tree. Moreover, 2 of these 3 runs also identify the rest of true internal nodes
of the original 2-levels RBMN. The third of these 3 runs only discovers 1 of the 2 true
internal nodes of the second level of the distinguished decision tree. Additionally, the
other 2 runs of the 5 on d 3 identify the 3 internal nodes of the distinguished decision
tree of the original RBMN (Y 12 , Y 1 and Y 2 ) but Y 2 appears as the root and, Y 12 and
Y 1 in the second level of the distinguished decision tree. Then, only 2 of the 4 instance
subspaces are effectively discovered in these 2 runs. As a result, the learnt models for
d 3 discover on average 60 % of the 2 main true instance subspaces and 70 % of the 4
more specific true subspaces. For d 4 , 3 out of the 5 runs provide us with an RBMN that
splits the original data into the 4 true instance subspaces. The remaining 2 runs provide
us with RBMNs that have Y 12 as the root of the distinguished decision trees and Y 2 as
1 of the other 2 internal nodes. However, they fail to identify Y 1 as the second node of
the second level of the original distinguished decision tree. Thus, the learnt models for
d 4 discover on average 100 % of the 2 main true instance subspaces and 80 % of the 4
more specific true subspaces.
From the point of view of the predictive task (measured in the 10CV column), we
can report that, in general, the learnt RBMNs outperform BNs. For the databases d 1
and d 2 , the biggest difference in the predictive ability is reached between the learnt BNs
and the learnt RBMNs with depth equal to 1. Remember that the underlying original
models for these databases were 1-level RBMNs. Furthermore, the learnt 1-level RBMNs
received the highest sc final . Exactly the same is observed for the synthetic databases d 3
and d 4 , where the biggest increase in the 10CV is reached between the learnt RBMNs
with depth equal to 1 and the learnt RBMNs with 2-levels distinguished decision trees.
Again, note that these 2-levels RBMNs were the models scored with the highest sc final ,
and that the underlying original models were 2-levels RBMNs.
As the learnt RBMNs have more complex distinguished decision trees, the improvement
of their predictive ability decreases. However, as a general rule, the more complex
the models are, the higher the predictive ability is. This fact is well-known because
10-fold cross-validation scores the log likelihood of the test database, which does not
penalize the complexity of the model, as does the log marginal likelihood. In addition,
as the complexity of the distinguished decision tree increases, the instance subspaces
on which the component BNs are learnt shrink and, thus, the uncertainty decreases. In order
to avoid very complex models, our results show that the log marginal likelihood is
a suitable score to guide the search for the best RBMN.
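As an illustration of a complexity-penalizing score of this kind, the log marginal likelihood is commonly approximated in the large-sample limit by the BIC score, sketched below (the paper itself uses the log marginal likelihood of the expected complete data, not BIC):

```python
import math

def bic_score(loglik, n_params, n_cases):
    """BIC approximation to the log marginal likelihood: the raw fit
    minus a penalty that grows with the number of free parameters."""
    return loglik - 0.5 * n_params * math.log(n_cases)
```

Under such a score, a more complex model with a slightly better raw fit can still lose to a simpler one, which is exactly the behavior needed to avoid overly deep distinguished decision trees.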
From the point of view of the efficiency (measured as the runtime of the learning
process), our experimental results show that the learning of RBMNs implies a considerable
computational expense when compared with the learning of BNs. However, this expense
appears justified by the empirical evidence that RBMNs behave more effectively in these
synthetic domains, in addition to their outlined advantages (context-specific conditional
(in)dependencies, structured clustering, flexibility, etc.).
5.3 Results on real data
Another source of data for our evaluation consisted of 2 real-world databases from
the UCI machine learning repository [34]: the tic-tac-toe database and the nursery
database. The past usage of the tic-tac-toe database helps to classify it as a paradigmatic
domain for testing constructive induction methods. Despite being used for supervised
classification due to the presence of the cluster variable, we considered this a good
domain to evaluate the performance of our approach once the cluster entries were hidden.
Furthermore, the past usage of the nursery database shows its suitability for testing
constructive induction methods. In addition to this fact, the presence of 5 clusters and
the large number of cases made this database very interesting for our purpose once the
cluster entries were hidden.
The tic-tac-toe database contains 958 cases, each of which represents a legal tic-
tac-toe endgame board. Each case has 9 3-valued predictive attributes and there are
2 clusters. The nursery database consists of 12960 cases, each of them representing
an application for admission in the public school system. Each case has 8 predictive
attributes, which have between 2 and 5 possible values. There are 5 clusters. Obviously,
database  sc initial ± s_n  depth  sc final ± s_n  10CV ± s_n  time ± s_n
nursery -57026120 0 -53910709 -6453126 306

Table 3: Performance achieved when learning RBMNs for data clustering from the 2
real-world databases. All the results are averages over 5 independent runs.
for both databases we deleted all the cluster entries.
Table 3 reports on the results achieved when learning RBMNs of different depth for
the distinguished decision tree from the 2 real-world databases. For both databases, the
learnt RBMNs outperform the learnt BNs in terms of both log marginal likelihood for
the learnt models and predictive ability. The learnt 1-level RBMNs obtain the highest
score for the log marginal likelihood for both domains. Moreover, these learnt RBMNs
with 1-level distinguished decision trees appear to be more predictive than more complex
models, such as the learnt RBMNs with 2-levels distinguished decision trees.
6 Conclusions and future research
We have proposed a new approach to perform data clustering based on a new class
of knowledge models: recursive Bayesian multinets (RBMNs). These models may be
learnt to represent the joint probability distribution from a given, complete or incom-
plete, database. RBMNs are a generalization of BNs and BMNs, as well as extensions
to classical partitional systems. Additionally, we have described a heuristic algorithm
for learning RBMNs for data clustering which simplifies the learning to the elicitation of
the component BNs from incomplete data. Also, we have presented some of the advantages
derived from the use of RBMNs such as codification of context-specific conditional
(in)dependencies, structured and specialized domain knowledge, alternate clusterings
able to capture different patterns for different instance subspaces, and flexibility.
Our experimental results in both synthetic and real-world domains have shown that
the learnt RBMNs outperformed the learnt BNs in terms of log marginal likelihood and
predictive ability for the learnt model. Moreover, in the synthetic domains, the score to
guide the structural search, the log marginal likelihood of the expected complete data,
has exhibited a suitable behavior as the instance subspaces implied by the underlying
original models have been effectively discovered.
To achieve such a gain there is an obvious increase in the runtime of the learning
process for RBMNs when compared with the learning of BNs. Our current research
aims, by means of a simple data preprocessing step, to reduce the set of predictive
attributes that are considered for placement in the distinguished decision tree. This
reduction of the search space would imply huge savings in runtime. Since our primary
aim was to introduce a new knowledge paradigm to perform data clustering, we did not
focus on exploiting all its possibilities. For instance, the definition of RBMNs introduced
in Section 2.4 limits the modelling power of RBMNs since all the leaves had to be at
the same level. This constraint was imposed for the sake of understandability of the
new model but it can be removed in practice resulting in the possibility of obtaining
more natural data clusterings. A limitation of the presented heuristic algorithm for the
learning of RBMNs is its monothetic nature, that is, only single attributes are considered
at each extension of a distinguished decision tree. We are currently considering the
possibility of learning polythetic decision paths in order to enrich the modelling power.
Another line of research that we are investigating is the extension of RBMNs to perform
data clustering in continuous domains. In this case, component BNs would have
to be able to deal with continuous attributes; thus, they would be conditional Gaussian
networks [29, 41, 42]. However, this approach would require searching for the best discretization
of the attributes to be considered in the decision paths. [41] is an example
of a real-world continuous domain where these mentioned extensions of RBMNs to continuous
data could be considered to perform data clustering, as different patterns are
observed for different instance subspaces of the original data. This extension of RBMNs
to continuous domains would decrease the disrupting effects due to the discretization of
the original data that would be necessary to apply RBMNs as defined in this paper to
the problem domain presented in [41].
Acknowledgments
José Manuel Peña wishes to thank Dr. Dag Wedelin for his interest in this work. He
made the visit at Chalmers University of Technology at Gothenburg (Sweden) possible.
Technical support for this work was kindly provided by the Department of Computer
Science at Chalmers University of Technology.
Also, the authors would like to thank Prof. Douglas H. Fisher in addition to the two
anonymous referees for their useful comments and for suggesting interesting readings
related to this work.
This work was supported by the Spanish Ministerio de Educación y Cultura under
grant AP97 44673053.
Appendix A. Structures of the original RBMNs sampled in order to obtain
the synthetic databases
Structures of the original 1-level and 2-levels RBMNs sampled to obtain the synthetic
databases. The first two model structures correspond to the RBMNs sampled to generate
the synthetic databases d 1 (top) and d 2 (bottom), whereas the last two model structures
correspond to the RBMNs sampled to get the synthetic databases d 3 (top) and d 4
(bottom). Dotted lines correspond to the distinguished decision trees. All the predictive
attributes were 3-valued except Y 1 , Y 2 and Y 12 which were binary. The cluster variable
C was 4-valued.
--R
Cluster Analysis for Applications.
Constructive Induction: The Key to Design Creativity.
Operations for learning with graphical models.
Analyse Typologique.
AutoClass: A Bayesian Classification System.
Bayesian classification (AutoClass): Theory and results.
Learning Bayesian networks is NP-complete.
A Bayesian Method for the Induction of Probabilistic Networks from Data.
Maximum Likelihood from Incomplete Data via the EM Algorithm.
Pattern Classification and Scene Analysis.
Knowledge Acquisition via Incremental Conceptual Clustering.
Database Management and Analysis Tools of Machine Induction.
The Bayesian structural EM algorithm.
Bayesian network classifiers.
Building Classifiers using Bayesian Networks.
Bayesian network classification with continuous attributes.
Knowledge representation and inference in similarity networks and Bayesian multinets.
Clustering Algorithms.
Learning Bayesian networks: The combination of knowledge and statistical data.
An introduction to Bayesian networks.
Finding Groups in Data.
Learning Augmented Bayesian Classifiers.
Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid.
Induction of recursive Bayesian classifiers.
Graphical Models.
Some Methods for Classification and Analysis of Multivariate Observations.
The EM Algorithm and Extensions.
UCI repository of machine learning databases.
Pattern Recognition as Knowledge-Guided Computer Induction.
Constructive Induction of Cartesian Product Attributes.
Searching for dependencies in Bayesian classifiers.
Probabilistic Reasoning in Intelligent Systems.
Learning Bayesian networks for clustering by means of constructive induction.
An improved Bayesian structural EM algorithm for learning Bayesian networks for clustering.
Geometric Implications of the Naive Bayes Assumption.
Learning Bayesian Networks from Incomplete Databases.
Parameter Estimation in Bayesian Networks from Incomplete Databases.
Learning Conditional Probabilities from Incomplete Data: An Experimental Comparison.
Bayesian analysis in expert systems.
Learning Mixtures of DAG Models.
Lazy Learning of Bayesian Rules.
--TR
Probabilistic reasoning in intelligent systems: networks of plausible inference
A Bayesian Method for the Induction of Probabilistic Networks from Data
C4.5: programs for machine learning
Learning Bayesian Networks
Knowledge representation and inference in similarity networks and Bayesian multinets
Bayesian classification (AutoClass)
Bayesian Network Classifiers
Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables
Learning Bayesian networks for clustering by means of constructive induction
An improved Bayesian structural EM algorithm for learning Bayesian networks for clustering
Lazy Learning of Bayesian Rules
Clustering Algorithms
Introduction to Bayesian Networks
Expert Systems and Probabilistic Network Models
Knowledge Acquisition Via Incremental Conceptual Clustering
Induction of Recursive Bayesian Classifiers
Bayesian Network Classification with Continuous Attributes
--CTR
J. M. Peña , J. A. Lozano , P. Larrañaga, Unsupervised learning of Bayesian networks via estimation of distribution algorithms: an application to gene expression data clustering, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, v.12 n.SUPPLEMENT, p.63-82, January 2004
Radu Stefan Niculescu , Tom M. Mitchell , R. Bharat Rao, Bayesian Network Learning with Parameter Constraints, The Journal of Machine Learning Research, 7, p.1357-1383, 12/1/2006
J. M. Peña , J. A. Lozano , P. Larrañaga, Globally Multimodal Problem Optimization Via an Estimation of Distribution Algorithm Based on Unsupervised Learning of Bayesian Networks, Evolutionary Computation, v.13 n.1, p.43-66, January 2005 | bayesian networks;constructive induction;BC+EM method;EM algorithm;bayesian multinets;data clustering |
584669 | A Simple, Object-Based View of Multiprogramming. | Object-based sequential programming has had a major impact on software engineering. However, object-based concurrent programming remains elusive as an effective programming tool. The class of applications that will be implemented on future high-bandwidth networks of processors will be significantly more ambitious than the current applications (which are mostly involved with transmissions of digital data and images), and object-based concurrent programming has the potential to simplify designs of such applications. Many of the programming concepts developed for databases, object-oriented programming and designs of reactive systems can be unified into a compact model of concurrent programs that can serve as the foundation for designing these future applications. We propose a model of multiprograms and a discipline of programming that addresses the issues of reasoning (e.g., understanding) and efficient implementation. The major point of departure is the disentanglement of sequential and multiprogramming features. We propose a sparse model of multiprograms that distinguishes these two forms of computations and allows their disciplined interactions. | Introduction
Object-based sequential programming has had a major impact on software engineering.
However, object-based concurrent programming remains elusive as an effective
programming tool. The class of applications that will be implemented on future
high-bandwidth networks of processors will be significantly more ambitious than
the current applications (which are mostly involved with transmissions of digital
data and images), and object-based concurrent programming has the potential to
simplify designs of such applications. Many of the programming concepts developed
for databases, object-oriented programming and designs of reactive systems
can be unified into a compact model of concurrent programs that can serve as the
foundation for designing these future applications.
1.1. Motivation
Research in multiprogramming has, traditionally, attempted to reconcile two apparently
contradictory goals: (1) it should be possible to understand a module (e.g., a
process or a data object) in isolation, without considerations of interference by the
other modules, and (2) it should be possible to implement concurrent threads at a
fine level of granularity so that no process is ever locked out of accessing common
data for long periods of time. The goals are in conflict because fine granularity,
in general, implies considerable interference. The earliest multiprograms (see, for
instance, the solution to the mutual exclusion problem in Dijkstra [8]) were trivially
small and impossibly difficult to understand, because the behaviors of the individual
processes could not be understood in isolation, and all possible interactions among
the processes had to be analyzed explicitly. Since then, much effort has gone into
limiting or even eliminating interference among processes by employing a variety
of synchronization mechanisms: locks or semaphores, critical regions, monitors and
message communications.
Constraining the programming model to a specific protocol (binary semaphores
or message communication over bounded channels, for instance) will prove to be
short-sighted in designing complex applications. More general mechanisms for interactions
among modules, that include these specific protocols, are required. Further,
for the distributed applications of the future, it is essential to devise a model
in which the distinction between computation and communication is removed; in
particular, the methods for designing and reasoning about the interfaces should be
no different from those employed for the computations at the nodes of the network.
1.2. Seuss
We have developed a model of multiprogramming, called Seuss. Seuss fosters a
discipline of programming that makes it possible to understand a program execution
as a single thread of control, yet it permits program implementation through
multiple threads. As a consequence, it is possible to reason about the properties of
a program from its single execution thread, whereas an implementation on a specific
platform (e.g., shared memory or message communicating system) may exploit
the inherent concurrency appropriately. A central theorem establishes that multiple
execution threads implement single execution threads, i.e., for any interleaved
execution of some actions there exists a non-interleaved execution of those actions
that establishes an identical final state starting from the same initial state.
A major point of departure in Seuss is that there is no built-in concurrency and no
commitment to either shared memory or message-passing style of implementation.
No communication or synchronization mechanism, except the procedure
call, is built into the model. In particular, the notions of input/output and their
complementary nature in rendezvous-based communication [9, 17] is outside this
model. There is no distinction between computation and communication; process
specifications and interface specifications are not distinguished. Consequently, we
do not have many of the traditional multiprogramming concepts, such as processes,
locking, rendezvous, waiting, interference and deadlock, as basic concepts in our
model. Yet, typical multiprograms employing message passing over bounded or
unbounded channels can be encoded in Seuss by declaring the processes and channels
as the components of a program; similarly, shared memory multiprograms can
be encoded by having processes and memories as components. Seuss permits a
mixture of either style of programming, and a variety of different interaction mechanisms
(semaphore, critical region, 4-phase handshake, etc.) can be encoded as
components.
Seuss proposes a complete disentanglement of the sequential and concurrent aspects
of programming. We expect large sections of code to be written, understood
and reasoned-about as sequential programs. We view multiprogramming as a way
to orchestrate the executions of these sequential programs, by specifying the conditions
under which each program is to be executed. Typically, several sequential
programs will execute simultaneously; yet, we can guarantee that their executions
would be non-interfering, and hence, each program may be regarded as atomic. We
propose an ecient implementation scheme that can, under user directives, interleave
the individual sequential programs with ne granularity without causing any
interference.
2. Seuss Programming Model
The Seuss programming model is sparse: a program is built out of cats (cat is short
for category) and boxes, and a cat is built out of procedures. A cat is similar in many
ways to a process/class/monitor type; a cat denotes a type and a box is an instance
of a cat. A box has a local state and it includes procedures by which its local state
can be accessed and updated. Procedures in a box may call upon procedures of other
boxes. Cats are used to encode processes as well as the communication protocols
for process interactions; therefore, it is necessary only to develop the methodology
for programming and understanding cats and their component procedures.
We propose two distinct kinds of procedures, to model terminating and potentially
non-terminating computations, representing computations of wait-free programs
and multiprograms, respectively. The former can be assigned a semantics with pre- and
post-conditions, i.e., based on its possible inputs and corresponding outputs
without considerations of interference with its environment. Multiprograms, however,
cannot be given a pre- and post-condition semantics because on-going interaction
with the environment is of the essence. We distinguish between these two types
of computations by using two different kinds of procedures: a total procedure never
waits (for an unbounded amount of time) to interact with its environment whereas
a partial procedure may wait, possibly forever, for such interactions. In this view, a
P operation on a semaphore is a partial procedure (because it may never terminate)
whereas a V operation is a total procedure. A total procedure models wait-free,
or transformational, aspects of programming and a partial procedure models
concurrent, or reactive, aspects of programming [15]. Our programming model does not
include waiting as a fundamental concept; therefore, a (partial) procedure does not
wait, but it rejects the call, thus preserving the program state. We next elaborate
on the main concepts: total and partial procedure, cat and program.
2.1. Total procedure
A total procedure can be assigned a meaning based only on its inputs and outputs;
if the procedure is started in a state that satisfies the input specification then it
terminates eventually in a state that satisfies the output specification. Procedures
to sort a list, find a minimum spanning tree in a graph or send a job to an unbounded
print-queue are examples of total procedures. A total procedure need not be
deterministic; e.g., any minimum spanning tree could be returned by the procedure.
Furthermore, a total procedure need not be implemented on a single processor, e.g.,
the list may be sorted by a sorting network [1], for instance. Data parallel programs
and other synchronous computation schemes are usually total procedures. A total
procedure may even be a multiprogram in our model admitting of asynchronous
execution, provided it is guaranteed to terminate, and its effect can be understood
only through its inputs and outputs; therefore, such a procedure never waits to
receive input, for instance. An example of a total procedure that interacts with its
environment is one that sends jobs to a print-queue (without waiting) and the jobs
may be processed by the environment while the procedure continues its execution.
Almost all total procedures shown in this manuscript are sequential programs.
A total procedure may call only total procedures. When a total procedure is called
(with certain parameters, in a given state) it may (1) terminate normally, (2) fail,
or (3) execute forever. A failure is caused by a programming error; it occurs when
the procedure is invoked in a state in which it should not be invoked, for instance,
if the computation requires a number to be divided by 0 or a natural number to
be reduced below 0. Failure is a general programming issue, not just an issue in
Seuss or multiprogramming. We interpret failure to mean that the resulting state
is arbitrary; any step taken in a failed state results in a failed state. Typically, a
hardware or software trap terminates the program when a failure occurs.
Non-termination of a total procedure is also the result of a programming error.
We expect the programmer to establish that the procedure is invoked only in those
states where its execution is finite.
2.2. Partial procedure
We first consider a simple form for a partial procedure, g, in which p is the precondition,
h is the preprocedure and S is the body of the procedure.
The precondition is a predicate on the state of the box to which g belongs. The
preprocedure is the name of a partial procedure in another box; the preprocedure is
optional. The body, S, consists of local computations aecting the state of g's box
and calls on total procedures of other boxes. A partial procedure can call another
partial procedure only as a preprocedure.
A partial procedure accepts or rejects each call made upon it. A partial procedure
of the form p → S, where the preprocedure is absent, accepts a call whenever
p holds. A partial procedure, g, that has a preprocedure accepts a call if its
precondition, p, holds and its preprocedure, h, accepts the call made by g (we will
impose additional restrictions on the program structure so that this definition is
well-founded). When a call is accepted, the body of the procedure, S, is executed,
potentially changing the state of the box and returning computed values in the
parameters. When a call is rejected, the procedure body is not executed and the
state does not change. The caller is made aware of the outcome of the call in both
cases; if the call made by g to h is rejected then the caller, g, also rejects the call
made upon it, and if the call to h is accepted then g accepts the call and executes its
body, S. In this sense, partial procedures differ fundamentally from the total ones
because all calls are accepted in the latter case. Examples of partial procedures are
the P operation on a semaphore and a get operation on a print-queue performed by a
printer; the call upon P is accepted only if the semaphore value is non-zero, and the
call upon get only if the print-queue is non-empty. Observe that whenever a call is rejected,
the caller's state does not change, and whenever a call is accepted by a procedure
that has the form p; h → S, the body of the preprocedure, h, is executed before
the execution of the procedure body, S.
We require that the execution of the body, S, terminate whenever the partial
procedure, g, accepts a call.
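The accept/reject protocol described above can be made concrete with a small executable model. The following Python sketch is our own illustration, not part of Seuss: a partial procedure is represented by a precondition, an optional preprocedure, and a body, and a rejected call leaves the state unchanged.

```python
class PartialProcedure:
    """Sketch of a Seuss partial procedure of the form p; h -> S."""
    def __init__(self, precondition, body, preprocedure=None):
        self.precondition = precondition  # predicate over the box state
        self.preprocedure = preprocedure  # another PartialProcedure, or None
        self.body = body                  # state-transforming function

    def call(self, state):
        """Return (accepted, new_state); a rejected call never changes state."""
        if not self.precondition(state):
            return False, state            # precondition fails: reject
        if self.preprocedure is not None:
            ok, state = self.preprocedure.call(state)
            if not ok:
                return False, state        # preprocedure rejected: reject too
        return True, self.body(state)      # accepted: execute the body

# A counting-semaphore P operation: accept only when the count is positive.
P = PartialProcedure(lambda s: s["n"] > 0, lambda s: {**s, "n": s["n"] - 1})
```

Calling `P` in a state with a positive count is accepted and decrements the count; calling it with a zero count is rejected and the state is returned unchanged, matching the requirement that a rejected call not change the caller's state.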
Alternative Now, we introduce a generalization: the body of a partial procedure
consists of one or more alternatives, where each alternative is of the form described
previously for partial procedures. Each alternative is positive or negative: the first
alternative is positive, and each subsequent alternative is marked positive or negative
by the separator that precedes it. The precondition of at most one alternative of a
partial procedure holds in any state, i.e., the preconditions are pairwise disjoint.
The rule for execution of a partial method with alternatives is as follows. A
partial method accepts or rejects each call; it accepts a call if and only if one of
284 JAYADEV MISRA
its positive alternatives accepts the call, and it rejects the call otherwise. An
alternative, positive or negative, accepts a call in a given state as follows. An
alternative of the form p → S accepts the call if p holds; then its body, S, is
executed and control is returned to its caller. An alternative of the form p; h → S
accepts a call provided p holds and h accepts the call made by this procedure
(using the same rules, since h is also a partial procedure); upon completion of the
execution of h, the body S is executed, and control is returned to the caller. Thus,
an alternative rejects a call if the precondition does not hold, or if the preprocedure,
if present, rejects the call. Note that, since the precondition of at most
one alternative of a partial procedure holds in a given state, at most one alternative
will accept a call (if no alternative accepts the call, the call is rejected). It follows
that the state of the caller's box is unchanged whenever a call is rejected, though
the state of the called box may be changed because a negative alternative may have
accepted the call.
Alternatives are essential for programming concurrent systems; negative alternatives
are especially useful in coding strong semaphores, for instance.
2.3. Method and action
A procedure is either a method or an action. An action is executed autonomously
an infinite number of times during a (tight) program execution; see section 2.4.1.
A method is not executed autonomously but only by being called from another
procedure. The declaration of a procedure indicates whether it is partial or total,
and whether it is an action or a method.
Example (Semaphore) A ubiquitous concept in multiprogramming is a semaphore.
cat semaphore
var n: nat init 1 {initially, the semaphore value is 1}
partial method P :: n > 0 → n := n − 1
total method V :: n := n + 1
A binary semaphore may be encoded similarly, except that V fails if n ≠ 0
prior to its execution. Next, we show a small example employing semaphores. Let
s, t be two instances of semaphore.
The cat user, shown below, executes its critical section only if it holds both s and t,
and it releases both semaphores upon completion of its critical section. The code
for user dealing with accesses to s and t is shown below. Boolean variables hs and
ht are true only when the user holds the semaphores s and t, respectively.
cat user
var hs, ht: boolean init false
partial action s-acquire :: ¬hs; s.P → hs := true
partial action t-acquire :: ¬ht; t.P → ht := true
partial action execute :: hs ∧ ht → critical section; s.V ; t.V ; hs, ht := false, false
This solution permits acquiring s and t in arbitrary order. If it is necessary to
acquire them in a specific order, say, first s and then t, the precondition of action
t-acquire should be changed to hs ∧ ¬ht.
2.4. Program
A program consists of a finite set of boxes (cat instances). We restrict the manner
in which a procedure calls other procedures: all the procedures executing at any
time belong to different boxes. We impose a condition, Partial Order on Boxes,
below that ensures this restriction.
Definition: For procedures p, q, we write p calls q to mean that in some execution
of p a call is made to q. Let calls⁺ be the transitive closure of calls, and calls* the
reflexive transitive closure of calls. Define a relation calls_p over procedures where
x calls_p y holds if p calls* x and x calls y.
In operational terms, x calls_p y means procedure x calls procedure y in some execution
of procedure p. Each program is required to satisfy the following condition.
Partial Order on Boxes Every procedure p imposes a partial order ≥_p over the
boxes; during the execution of p, a procedure of box b can call a procedure of box b′
only if b >_p b′. Thus, calls are made from the procedures
of a higher box to that of a lower box.
Note: Observe that ≥_p is reflexive and >_p is irreflexive.
Observation 1:
It follows from Observation 1 that all the procedures that are part of a call-chain
belong to different boxes.
Observation 2: calls⁺ is an acyclic (i.e., irreflexive, asymmetric and transitive)
relation over the procedures.
The definition of a program is in contrast to the usual views of process networks
in which the processes communicate by messages or by sharing a common memory.
Typically, such a network is not regarded as being partially ordered. For
instance, suppose that process P sends messages over a channel chp to process Q
and Q sends over chq to P. The processes are viewed as nodes in a cycle where
the edges (channels), chp and chq, are directed from P to Q and from Q to P,
respectively, representing the direction of message flow. Similar remarks apply to
processes communicating through shared memory. We view communication media
(message channels and memory) as boxes. Therefore, we would represent the system
described above as a set of four boxes: P, Q, chp and chq, with the procedures
in chp, chq being called from P and Q, respectively. The direction of message flow
is immaterial in this hierarchy; what matters is that P, Q call upon chp and chq
(though chp and chq do not call upon P, Q). A partial order is extremely useful
in deducing properties by induction on the "levels" of the procedures.
The restriction that procedure calls are made along a partial order implies that a
partial procedure at a lowest level consists of one or more alternatives of the form
p → S; the preprocedure is absent and the body S contains no procedure
calls. A total procedure at a lowest level contains no procedure calls.
2.4.1. Program Execution We prescribe an execution style for programs, called
tight execution. A tight execution consists of an infinite number of steps; in each
step, an action of a box is chosen and executed. If that action calls upon a
preprocedure that accepts the call, then the preprocedure is first executed, followed
by the execution of the action body. If the action calls upon a preprocedure that
rejects the call, then the state of the caller does not change. The choice of the action
to execute in a step is arbitrary except for the following fairness constraint: each
action of each box is chosen eventually.
A tight execution is easy to understand because execution of an action is completed
before another action is started. Each procedure, total or partial, may be
understood from its text alone, given the meanings of the procedures that it calls,
without consideration of interference by other procedures. A simple temporal logic,
such as UNITY-logic [19, 18], is suitable for deducing properties of a program in this
execution model. Later, we show how a program may be implemented on multiple
asynchronous processors with a fine grain of interleaving of actions that preserves
the semantics of tight execution.
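A tight execution can be modeled directly in code. The following Python sketch (the function name and action encoding are ours, not Seuss) runs one whole action per step, with round-robin selection supplying the fairness constraint that every action is chosen eventually; a step whose guard fails models a rejected call and leaves the state unchanged.

```python
def tight_execution(state, actions, steps):
    """Tight-execution sketch: one complete action per step, round-robin
    fairness. Each action is a (guard, body) pair over a shared dict state;
    a step whose guard is false changes nothing (a rejected call)."""
    n = len(actions)
    for i in range(steps):
        guard, body = actions[i % n]   # round-robin: every action chosen eventually
        if guard(state):
            body(state)                # runs to completion before the next step
    return state

# Two actions alternating via a turn variable; one of them counts.
actions = [
    (lambda s: s["turn"] == 0, lambda s: s.update(count=s["count"] + 1, turn=1)),
    (lambda s: s["turn"] == 1, lambda s: s.update(turn=0)),
]
final = tight_execution({"count": 0, "turn": 0}, actions, 10)
```

Because each action completes before the next begins, the final state is the same as that of the corresponding serial schedule, which is exactly what makes tight executions easy to reason about.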
3. Small Examples
A number of small examples are treated in this section. The goal is to show that typical
multiprogramming examples from the literature have succinct representations
in Seuss and, additionally, that the small number of features of Seuss is adequate for
solving many well-known problems: communication over bounded and unbounded
channels, mutual exclusion and synchronization. We show a number of variations
of some of these examples, implementing various progress guarantees, for instance.
For operational arguments about program behavior, we use tight executions of
programs as defined in section 2.4.1.
3.1. Channels
Unbounded Channels An unbounded fifo channel is a cat that has two methods:
put (i.e., send) is a total method that appends an element to the end of the message
sequence, and get (i.e., receive) is a partial method that removes and returns
the head element of the message sequence, provided it is non-empty. We define a
polymorphic version of the channel where the message type is left arbitrary. In the
method put, we use : in the assignment to denote concatenation.
cat FifoChannel of type
var r: seq of type init ⟨⟩ {initially r is empty}
partial method get(x: type):: r ≠ ⟨⟩ → x := head(r); r := tail(r)
total method put(x: type):: r := r : x
end {FifoChannel of type}
An instance of this cat may be interposed between a set of senders and a set of
receivers.
Unordered Channels The fifo channel guarantees that the order of delivery of
messages is the same as the order in which they arrived. Next, we consider an
unordered channel that returns any message from the channel in response to a call
on get when the channel is non-empty. The channel is implemented as a bag and
get is implemented as a non-deterministic operation. We write x :∈ b to denote that
x is assigned any value from bag b (provided b is non-empty). The usual notations
for set operations are used for bags in the following example.
cat uch of type
var b: bag of type init {} {initially b is empty}
partial method get(x: type):: b ≠ {} → x :∈ b; b := b − {x}
total method put(x: type):: b := b ∪ {x}
end {uch of type}
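The bag-based channel can be sketched in Python as follows (class and method names are ours); `Counter` plays the role of the bag, and `get` returns an accept/reject flag in the Seuss style.

```python
import random
from collections import Counter

class UnorderedChannel:
    """Sketch of cat uch: put is total; get is partial and returns an
    arbitrary message from the bag, accepting only when the bag is non-empty."""
    def __init__(self):
        self.bag = Counter()                 # a multiset (bag) of messages

    def put(self, x):                        # total method: always accepted
        self.bag[x] += 1

    def get(self):
        """Return (accepted, message); reject on an empty bag."""
        if not self.bag:
            return False, None
        x = random.choice(list(self.bag.elements()))  # nondeterministic x :∈ b
        self.bag[x] -= 1
        if self.bag[x] == 0:
            del self.bag[x]                  # keep the bag free of zero counts
        return True, x
```

The nondeterministic choice in `get` is what forfeits the delivery guarantee discussed next: nothing prevents one message from being passed over forever.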
This channel does not guarantee that every message will eventually be delivered,
even if messages are removed from the bag an unbounded number of times. Such
a guarantee is, of course, established by the fifo channel. We propose a solution
below that implements this additional guarantee. In this solution every message is
assigned an index, a natural number, and the variable t is less than or equal to the
smallest index. A message is assigned an index strictly exceeding t whenever it is
put in the channel. The indices need not be distinct. The get method removes any
message with the smallest index and updates t.
cat nch of type
var b: bag of (index: nat, msg: type) init {} {initially b is empty},
t: nat init 0, s: nat, m: type
partial method get(x: type)::
remove any pair (s, m) with minimum index, s, from b;
t, x := s, m
total method put(x: type)::
choose s, a natural number strictly exceeding t; b := b ∪ {(s, x)}
end {nch of type}
We now show that every message is eventually removed, given that there are an
unbounded number of calls on get. For a message with index i we show that the pair
(i − t, p), where p is the number of messages with index t, decreases lexicographically
with each execution of get, and it never increases. Hence, eventually the message
with index i is removed. An execution of put does not affect
i, t or p, because the added message receives an index higher than t; thus, (i − t, p)
does not change. A get either increases t, thus decreasing i − t, or it keeps t the
same and decreases p, thus decreasing (i − t, p).
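The indexed channel can be sketched executably as follows (the Python names are ours). Every put tags its message with an index strictly exceeding t, and get removes a minimum-index message and raises t to that index, which is what the lexicographic measure above exploits.

```python
class FairChannel:
    """Sketch of cat nch: each message receives an index strictly exceeding t;
    get removes a message of minimum index and sets t to that index, so every
    message is delivered after boundedly many gets."""
    def __init__(self):
        self.b = []        # bag of (index, msg) pairs
        self.t = 0         # at most the smallest index present in b

    def put(self, x):
        self.b.append((self.t + 1, x))   # any index strictly exceeding t would do

    def get(self):
        """Return (accepted, message); reject on an empty bag."""
        if not self.b:
            return False, None
        s, m = min(self.b)               # a pair with minimum index
        self.b.remove((s, m))
        self.t = s
        return True, m
```

Draining the channel delivers every message that was put, in contrast with the plain unordered channel.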
3.2. Broadcast
We show a cat that implements broadcast-style message communication. Processes,
called writers, attempt to broadcast a sequence of values to a set of N processes,
called readers. We introduce a cat, broadcast, into which a writer writes the next
value and from which a reader reads. The structure of the cat is as follows.
Internally, the value to be broadcast is stored in variable v; and n counts the
number of readers that have read v. Both read and write are partial methods. The
precondition for write is that the counter n equals N , i.e., all readers have read the
current value. The precondition for read is that this particular reader has not read
the current value of v. To implement the precondition for reading, we associate a
sequence number with the value stored in v. It is sufficient to have a 1-bit sequence
number, a boolean variable t, as in the Alternating Bit Protocol for communication
over a faulty channel [21]. A read operation has a boolean argument, s, that is
the last sequence number read by this reader. If s and t match then the reader
has already read this value and, hence, the call upon read is rejected. If s and t
differ then the reader is allowed to read the value and both s and n are updated.
The binary sequence number, t, is reversed whenever a new value is written to v.
It is easy to show that n equals the number of readers whose s-value equals the
cat's t-value. Initially, the local variable s for each reader is true. In the following
denition, N is a parameter of the cat.
cat broadcast of data
var v: data, n: nat init N , t: boolean init true
partial method read(s: boolean, x: data):: s ≠ t → x, s, n := v, t, n + 1
partial method write(x: data):: n = N → v, t, n := x, ¬t, 0
end {broadcast of data}
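The 1-bit sequence-number scheme can be checked with a small executable model. The following Python sketch is our own rendering of the semantics described in the prose (accept/reject returned explicitly, since Python has no call-rejection mechanism).

```python
class Broadcast:
    """Sketch of cat broadcast: a writer may write only after all N readers
    have read; a reader whose sequence bit s matches t has already read the
    current value, so its call is rejected."""
    def __init__(self, n_readers):
        self.N = n_readers
        self.v = None
        self.t = True        # 1-bit sequence number of the stored value
        self.n = n_readers   # readers that have read v; N enables the first write

    def write(self, x):
        if self.n != self.N:
            return False                  # some reader has not read yet: reject
        self.v, self.t, self.n = x, not self.t, 0
        return True

    def read(self, s):
        """s is the reader's last sequence bit; return (accepted, value, new_s)."""
        if s == self.t:
            return False, None, s         # already read the current value: reject
        self.n += 1
        return True, self.v, self.t       # deliver v and the new sequence bit
```

With two readers, a write is rejected until both readers have read, and a reader holding the current sequence bit is rejected until the next write, just as described above.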
3.3. Barrier Synchronization
The problem and solution in this section are due to Rajeev Joshi [10]. In barrier
synchronization, each process in a group of concurrently executing processes performs
its computation in a sequence of stages. It is required that no process begin
computing its (k + 1)th stage until all processes have completed their kth stage,
k ≥ 0. We propose a cat that includes a partial method, sync, that is to be called
by each process in order to start computation of its next stage; the call is accepted
only if all processes have completed the stage that this process has completed, and
then the caller may advance to the next stage.
From the problem description, we see that at any point during the execution, all
users have completed execution up to stage k, and some users may be executing (or
may have completed) stage k + 1, for some k, k ≥ 0. As in the
problem of Broadcast, each user has a boolean s, and barrier has a boolean t. We
maintain the invariant that, for any user, s = t means that the user has not yet
entered stage k + 1, and n is the number of users whose s equals t.
box user
var s: boolean init true
partial action ::
do next phase
box barrier
var n: nat init 0, t: boolean init true
partial method sync(s: boolean)::
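Since the Seuss text of the barrier is abbreviated above, the intended behavior can be illustrated with an executable model of our own. In this Python sketch, a rejected sync call still records the caller's completion, playing the role of a negative alternative, and the call is accepted only once no process lags behind the caller.

```python
class Barrier:
    """Sketch of barrier synchronization: sync(pid) is accepted, letting pid
    start its next stage, only when every process has completed the stage pid
    has completed. Completion is recorded even on a rejected call, mirroring
    a Seuss negative alternative."""
    def __init__(self, n_procs):
        self.entered = [1] * n_procs    # stage each process is currently executing
        self.completed = [0] * n_procs  # number of stages each process has completed

    def sync(self, pid):
        """Called by pid after finishing its current stage; returns accepted."""
        k = self.entered[pid]
        self.completed[pid] = k          # record completion (idempotent on retry)
        if min(self.completed) < k:
            return False                 # some process is still in stage k: reject
        self.entered[pid] = k + 1        # barrier passed: pid may start stage k + 1
        return True
```

With two processes, a process that finishes first is rejected (and keeps retrying) until the other finishes too; then both advance, and neither can get a full stage ahead.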
3.4. Readers and Writers
We consider the classic Readers Writers Problem [7] in which a common resource
(say, a file) is shared among a set of reader processes and writer processes. Any
number of readers may have simultaneous access to the file, whereas a writer needs
exclusive access. The following solution includes two partial methods, StartRead
and StartWrite, by which a reader and a writer gain access to the resource, respec-
tively. Upon completion of their accesses, a reader releases the lock by calling the
total method EndRead, and a writer by calling EndWrite. We assume throughout
that read and write operations are finite, i.e., each accepted StartRead is eventually
followed by an EndRead and each accepted StartWrite by an EndWrite.
We employ a parameter N in our solution that indicates the maximum number
of readers permitted to have simultaneous access to the resource; N may be set arbitrarily
high to permit simultaneous access for all readers. The following solution,
based upon one in section 6.10 of [5], uses a pool of tokens. Initially, there are N
tokens. A reader needs 1 token and a writer N tokens to proceed. It follows that
many (up to N) readers could be active simultaneously, whereas at most one writer
will have access to the resource at any time. Upon completion of their accesses,
the readers and the writers return all tokens they hold, 1 for a reader and N for a
writer, to the pool. In the following program n is the number of available tokens.
cat ReaderWriter
var n: nat init N
partial method StartRead :: n > 0 → n := n − 1
partial method StartWrite :: n = N → n := 0
total method EndRead :: n := n + 1
total method EndWrite :: n := N
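The token-pool scheme is easy to check executably. The following Python sketch (names ours) mirrors the cat above: a reader takes one token, a writer takes all N, and partial methods return an accept/reject flag.

```python
class ReaderWriter:
    """Sketch of cat ReaderWriter: a pool of N tokens; a reader needs 1 token
    and a writer all N, so up to N readers or exactly one writer is active."""
    def __init__(self, n_tokens):
        self.N = n_tokens
        self.n = n_tokens          # available tokens

    def start_read(self):          # partial: n > 0 -> n := n - 1
        if self.n > 0:
            self.n -= 1
            return True
        return False

    def start_write(self):         # partial: n = N -> n := 0
        if self.n == self.N:
            self.n = 0
            return True
        return False

    def end_read(self):            # total: n := n + 1
        self.n += 1

    def end_write(self):           # total: n := N
        self.n = self.N
```

A writer is rejected while any reader holds a token, and readers are rejected while a writer holds all of them, which is exactly the mutual-exclusion property claimed for the token pool.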
The solution given above can make no guarantee of progress for either the readers
or the writers. Our next solution guarantees that readers will not permanently
overtake writers: if there is a waiting writer then some writer gains access to the
resource eventually. The strategy is as follows: a boolean variable, WriteAttempt,
is set true, using a negative alternative, if a call upon StartWrite is rejected.
Once WriteAttempt holds, calls on StartRead are rejected; thus no new readers
are allowed to start reading. All readers will eventually stop reading, establishing
n = N, and the next call on StartWrite will succeed.
cat ReaderWriter1
var n: nat init N , WriteAttempt: boolean init false
partial method StartRead :: n > 0 ∧ ¬WriteAttempt → n := n − 1
partial method StartWrite :: n = N → n := 0
| n ≠ N → WriteAttempt := true {negative alternative}
total method EndRead :: n := n + 1
total method EndWrite :: n := N ; WriteAttempt := false
The next solution guarantees progress for both readers and writers; it is similar to
the previous solution: we introduce a boolean variable, ReadAttempt, analogous
to WriteAttempt. However, the analysis is considerably more complicated in this
case. We outline an operational argument for the progress guarantees.
cat ReaderWriter2
partial method StartRead ::
partial method StartWrite ::
total method EndRead :: n := n + 1
total method EndWrite :: n := N ; WriteAttempt := false
We show that if WriteAttempt is ever true it will eventually be falsified, asserting
that a write operation will complete eventually, i.e., EndWrite will be called.
Similarly, if ReadAttempt is ever true it will eventually be falsified. To prove the
first result, consider the state in which WriteAttempt is set true (note that initially
WriteAttempt is false). Since n ≠ N is a precondition for such an assignment, either
a read or a write operation is underway. In the latter case, the write
will eventually be completed by calling EndWrite, thus setting WriteAttempt to
false. If WriteAttempt is set when a read is underway then no further call on
StartRead will be accepted, and successive calls on EndRead will eventually establish
n = N. No method other than StartWrite will execute in this state: none of the
alternatives of StartRead will accept; no call upon EndRead or EndWrite will be
made because no read or write operation is underway. Therefore, a
call upon StartWrite will be accepted, which will be later followed by a call upon
EndWrite.
The argument for eventual falsification of ReadAttempt is similar. The precondition
of the assignment ReadAttempt := true implies that either
N readers are reading or a write operation is underway. In the former case,
no more readers will be allowed to join, and upon completion of reading (by
any reader) ReadAttempt will be set false. In the latter case, upon completion
of writing EndWrite will be called, and its execution will establish n = N and
falsify WriteAttempt. No method other than StartRead will execute
in this state, and any reader that succeeds in executing StartRead will eventually
execute EndRead, thus falsifying ReadAttempt.
3.5. Semaphore
A binary semaphore, often called a lock, is typically associated with a resource. A
process has exclusive access to a resource only when it holds the corresponding
semaphore. A process acquires a semaphore by completing a P operation and it
releases the semaphore by executing a V. We regard P as a partial method and V
as a total method.
Traditionally, a semaphore is weak or strong depending on the guarantees made
about the eventual success (i.e., acceptance) of the individual calls on P. For a weak
semaphore no guarantee can be made about the success of a particular process no
matter how many times it attempts a P, though it can be asserted that some call
on P is accepted if the semaphore is available. Thus, a specific process may be
starved: it is never granted the semaphore even though another process may hold it
arbitrarily many times. A strong semaphore avoids individual (process) starvation:
if the semaphore is available infinitely often then it is eventually acquired by each
process attempting a P operation. We discuss both types of semaphores and show
some variations.
We restrict ourselves to binary semaphores in all cases; extensions to general
semaphores are straightforward.
3.5.1. Weak Semaphore The following cat describes a weak binary semaphore.
cat semaphore
var avail: boolean init true {initially the semaphore is available}
partial method P :: avail ! avail := false
total method V :: avail := true
A typical calling pattern on such a semaphore is shown below.
box user
partial action :: c; s.P → use the resource associated with s; s.V
{other actions of the box}
Usually, once the precondition c becomes true, it remains true until the
process acquires the semaphore. There is no requirement in Seuss, however, that c
remain true as described.
3.5.2. Strong Semaphore A strong semaphore guarantees absence of individual
starvation; in Seuss terminology, if a cat contains a partial action of the form
c; s.P → ..., where the precondition c remains true as long as s.P is not accepted
and s is a strong semaphore, then s.P will eventually be accepted. The following
cat implements a strong semaphore. The call upon P includes the process id as
a parameter (pid is the type of process id). Procedure P adds the caller id to a
queue, q, if the id is not in q, and it grants the semaphore to a caller provided
the semaphore is available and the caller id is at the head of the queue.
cat StrongSemaphore
var q: seq of pid init ⟨⟩, avail: boolean init true
{initially the semaphore is available}
partial method P(i : pid) ::
total method V :: avail := true
Observe the way a negative alternative is employed to record a caller's id while
rejecting the call. The sequence q may be replaced by a fair bag, as was done for
the unordered channel, nch.
Note: A call upon P is rejected even when the queue is empty and the semaphore
is available. It is straightforward to add an alternative to grant the semaphore in
this case.
A process requesting a semaphore is a persistent caller if it calls the P operation
infinitely often as long as it has not acquired the semaphore; otherwise it is a
transient caller. Our solution for the strong semaphore works only if all callers are
persistent. If there is a transient caller, it will block all other callers from acquiring
the semaphore. Unfortunately, there exists no solution for this case: there can
be no guarantee that every persistent caller will eventually acquire the semaphore
(given that every holder of the semaphore eventually releases it) in the presence
of transient callers [11]. A reasonable compromise is to add a new total method to
the strong semaphore cat, which a transient caller may call to remove its process
id from the queue of callers.
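The queue-based grant discipline can be sketched executably as follows (the Python names are ours). A rejected caller's id is enqueued, playing the role of the negative alternative, and the semaphore is granted only to the caller at the head of the queue, so persistent callers are served in arrival order.

```python
from collections import deque

class StrongSemaphore:
    """Sketch of cat StrongSemaphore: a rejected caller's id is queued (the
    negative alternative), and the semaphore is granted only to the caller at
    the head of the queue, so no persistent caller starves."""
    def __init__(self):
        self.q = deque()       # waiting caller ids, in arrival order
        self.avail = True

    def P(self, pid):
        """Return True if pid is granted the semaphore, False otherwise."""
        if self.avail and self.q and self.q[0] == pid:
            self.q.popleft()
            self.avail = False
            return True        # accepted: pid now holds the semaphore
        if pid not in self.q:
            self.q.append(pid) # negative alternative: record the caller
        return False           # rejected

    def V(self):
        self.avail = True
```

As noted below for the Seuss version, a call is rejected even when the queue is empty and the semaphore is available; the first call merely enqueues the caller, and a retry succeeds.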
3.5.3. Snoopy Semaphore Traditionally, a semaphore associated with a resource
is first acquired by a process executing a P, the resource is used, and then the
semaphore is released by executing a V. We consider a variation of this traditional
model in which the resource is not released unless there are outstanding requests
for the resource by the other processes. This is an appropriate strategy if there is
low contention for the resource, because a process may use the resource as long as
it is not required by the others. We describe a new kind of semaphore, called a
SnoopySemaphore, and show how it can be used to solve this problem. In a later
section, we employ the snoopy semaphore to solve a multiple resource allocation
problem in a starvation-free fashion.
We adopt the strategy that a process that has used a resource snoops to see if
there is demand for it, from time to time. If there is demand, then it releases the
semaphore; otherwise, it may continue to access the resource.
A weak snoopy semaphore is shown below. We add a new method, S (for snoop),
to the semaphore cat. Thus, a SnoopySemaphore has three methods: P , V , and S.
Methods P and V have the same meaning as for traditional semaphores: a process
attempts to acquire the semaphore by calling the partial method P , and releases it
by calling V . The partial method S accepts if the last call upon P by some process
has been rejected. A process typically calls S after using the resource at least once,
and it releases the semaphore if S accepts. In the following solution, a boolean
variable b is set false whenever a call on P is accepted, and set true whenever a call
on P is rejected. Thus, b is false when a process acquires the semaphore and if it
subsequently detects that b is true then the semaphore is in demand.
cat SnoopySemaphore1
var b: boolean init false, avail: boolean init true
{initially the semaphore is available}
partial method P :: avail → avail, b := false, false
| ¬avail → b := true {negative alternative}
total method V :: avail := true
partial method S :: b → skip
The proposed solution implements a weak snoopy semaphore; there is no guarantee
that a specific process will ever acquire the semaphore. Our next solution
is similar to StrongSemaphore. Since that solution already maintains a queue of
process ids (whose calls on P were rejected), we can implement S very simply.
cat StrongSnoopySemaphore
var q: seq of pid init ⟨⟩, avail: boolean init true
{initially the semaphore is available}
partial method P(i : pid) ::
total method V :: avail := true
partial method S :: q ≠ ⟨⟩ → skip
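The snoop discipline of the weak variant can be sketched executably as follows (Python names ours): the flag b records a rejected P, and S accepts only when some P has been rejected since the semaphore was last granted, telling the holder that the resource is in demand.

```python
class SnoopySemaphore:
    """Sketch of cat SnoopySemaphore1: b is set true when a P is rejected and
    cleared when a P is granted; the snoop method S accepts only when there is
    recorded demand, signalling the holder to release the semaphore."""
    def __init__(self):
        self.avail = True
        self.b = False

    def P(self):
        """Return True if the semaphore is granted, False otherwise."""
        if self.avail:
            self.avail, self.b = False, False  # granted: clear the demand flag
            return True
        self.b = True                          # negative alternative: note demand
        return False

    def V(self):
        self.avail = True

    def S(self):
        """Snoop: accepted (True) only if some P was rejected, i.e., demand exists."""
        return self.b
```

A holder that snoops before any rival has called P keeps the resource; once a rival's P is rejected, the next snoop accepts and the holder should call V.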
4. Distributed Implementation
We have thus far considered program executions where each action completes before
another one is started. In section 2.4.1, we defined a tight execution of a Seuss
program to be an infinite sequence of steps where each step consists of executing
an action of a box. The choice of actions is arbitrary except that each action of
each box is chosen eventually. This model of execution was chosen because it makes
programming easier. Now, we consider another execution model, loose execution,
where the executions of actions may be interleaved. A loose execution exploits the
available concurrency. We restrict loose executions in such a manner that any loose
execution may be simulated by a tight execution.
Crucial to loose executions is the notion of compatibility among actions: if a set
of actions are pairwise compatible then their executions are non-interfering, and
their concurrent execution is equivalent to some serial execution of these actions.
The precise definition of compatibility and the central theorem that establishes the
correspondence between loose and tight executions are treated in section 4.4. We
note that compatibility is a weaker notion than commutativity, and it holds for
put, get over channels (see section 4.4), and for operations on semaphores, for
instance.
First, we describe a multiprocessor implementation in which the scheduler may
initiate several compatible actions for concurrent execution. We also describe a
"most general" scheduling strategy for this problem and implementations of the
scheduling strategy on uniprocessors as well as multiprocessors. Then we define
the notion of compatibility and state the fundamental Reduction Theorem that
establishes a correspondence between loose and tight executions.
4.1. Outline of the Implementation Strategy
The implementation consists of (1) a scheduler that decides which action may next
be scheduled for execution, and (2) processors that carry out the actual executions
of the actions. The boxes of a program are partitioned among the processors. Each
processor thus manages a set of boxes and is responsible for executions of the
actions of those boxes. The criterion for partitioning boxes among processors is
arbitrary, though heuristics may be employed to minimize message transmissions
among processors.
The scheduler repeatedly chooses some action for execution. The choice is constrained
by the requirement that only compatible procedures may be executed
concurrently and by the fairness requirement. The scheduler sends a message
to the corresponding processor to start execution of this action.
A processor starts executing an action upon receiving a message from the scheduler.
It may call upon methods of other processors by sending messages and
waiting for responses. Each call includes values of procedure parameters, if any,
as part of the message. It is guaranteed that each call elicits a response, which is
either an accept or a reject. The accept response is sent when the call is accepted
(which is always the case for calls upon total methods), and parameter values,
if any, are returned with the response. A reject response is possible only for
calls upon partial methods; no parameter values accompany such a response.
4.2. Design of the Scheduler
The following abstraction captures the essence of the scheduling problem. Given
is a finite undirected graph; the graph need not be connected. Each vertex in
the graph is black or white; all vertices are initially white. In this abstraction, a
vertex denotes an action and a black vertex an executing action. Two vertices are
neighbors if they are incompatible. We are given that
(E) Every black vertex becomes white eventually (by the steps taken by an
environment over which we have no control).
It is required to devise a coloring (scheduling) strategy so that
(S1) No two neighbors are simultaneously black (i.e., only compatible actions
may be executed simultaneously).
(S2) Every vertex becomes black infinitely often (thus ensuring fairness).
Note that the scheduler can only blacken vertices; it may not whiten a vertex.
A simple scheduling strategy is to blacken a single vertex, wait until the environment
whitens it, and then blacken another vertex. Such a strategy implements
(S1) trivially because there is at most one black vertex at any time. (S2) may
be ensured by blackening the vertices in some fixed, round-robin order. Such a
protocol, however, defeats the goal of concurrent execution. So, we impose the
additional requirement that the scheduling strategy be maximal: it should allow
all valid concurrent executions of the actions; that is, any infinite sequence that
satisfies (E, S1, S2) is a possible execution of our scheduler. A maximal scheduler is
a most general scheduler, because any execution of another scheduler is a possible
execution of the maximal scheduler. By suitable refinement of our maximal scheduler,
we derive a centralized scheduler and a distributed scheduler. See [12] for a
formal definition of the maximality condition.
A Scheduling Strategy Assign a natural number, called height, to each vertex; let
x.h denote the height of vertex x. We will maintain the invariant that neighbors
have different heights:
Invariant D: (∀x, y : x, y are neighbors : x.h ≠ y.h)
For vertex x, x.low holds if the height of x is smaller than that of all of its neighbors,
i.e., x.low ≡ (∀y : x, y are neighbors : x.h < y.h). We write v.black to denote that
v is black in a given state. The scheduling strategy is:
(C1) Consider each vertex, v, for blackening eventually; if ¬v.black ∧ v.low holds
then blacken v.
(C2) Simultaneously with the whitening of a vertex v (by the environment), increase
v.h (while preserving the invariant D).
It is shown in [12] that this scheduling strategy satisfies (S1, S2) and, further, it
is maximal in the sense described previously.
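Rules (C1) and (C2) can be exercised on a small graph with the following Python sketch (function names and the height-raising rule are ours; any increase preserving invariant D would do). Since heights of neighbors are distinct, no two neighbors can both be low, so a pass that blackens every low white vertex maintains (S1).

```python
def blacken_low_white(heights, black, neighbors):
    """One scheduler pass (C1): blacken every white vertex whose height is
    smaller than the heights of all of its neighbors."""
    for v in heights:
        if not black[v] and all(heights[v] < heights[u] for u in neighbors[v]):
            black[v] = True

def whiten(v, heights, black, neighbors):
    """(C2): the environment whitens v; raise v's height past its neighbors,
    which preserves invariant D (neighbors have different heights)."""
    black[v] = False
    heights[v] = 1 + max(heights[u] for u in neighbors[v])

# Three mutually incompatible actions a, b, c (a triangle).
neighbors = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
heights = {"a": 0, "b": 1, "c": 2}
black = {"a": False, "b": False, "c": False}

blacken_low_white(heights, black, neighbors)   # only the lowest vertex runs
```

In the triangle only one vertex can be low, so exactly one action executes at a time; whitening it raises its height above its neighbors, and the next pass blackens the new lowest vertex, rotating execution fairly among the three.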
4.3. Implementation of the Scheduling Strategy
4.3.1. Central scheduler A central scheduler that implements the given strategy
may operate as follows. The scheduler scans through the vertices and blackens a
vertex v for which ¬v.black ∧ v.low holds. The effect of blackening is to send a
message to the appropriate processor specifying that the selected action may be
executed. Upon termination of the execution of the action, a message is sent to the
scheduler; the scheduler whitens the corresponding vertex and increases its height,
ensuring that no two neighbors have the same height. The scheduler may scan the
vertices in any order, but every vertex must be considered eventually, as required
in (C1).
This implementation may be improved by maintaining a set, L, of vertices that
are both white and low, i.e., L contains all vertices v for which ¬v.black ∧ v.low
holds. The scheduler blackens a vertex of L and removes it from L. Whenever a
vertex x is whitened and its height increased, the scheduler checks x and all of its
neighbors to determine if any of these vertices qualify for inclusion in L; if some
vertex, y, qualifies then y is added to L. It has to be guaranteed that every vertex
in L is eventually scanned and removed; one way is to keep L as a list in which
additions are done at the rear and deletions from the front. Observe that once a
vertex is in L it remains white and low until it is blackened.
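As a sketch (class and method names are ours), the worklist version can be coded directly: L is a FIFO so every entry is eventually scanned, and only the whitened vertex and its neighbors are re-examined after each whitening:

```python
from collections import deque

class CentralScheduler:
    def __init__(self, adj, heights):
        # adj: vertex -> set of neighbors; heights satisfy invariant D
        self.adj, self.h = adj, dict(heights)
        self.black = set()
        # L holds exactly the white-and-low vertices
        self.L = deque(v for v in adj if self._low(v))

    def _low(self, v):
        return all(self.h[v] < self.h[u] for u in self.adj[v])

    def grant(self):
        # Blacken the front of L; in the full system this would send a
        # "go" message to the processor owning the action
        if not self.L:
            return None
        v = self.L.popleft()
        self.black.add(v)
        return v

    def done(self, v):
        # Action finished: whiten v, raise its height above all neighbors
        # (one way to keep neighbor heights distinct), then re-check only
        # v and its neighbors for inclusion in L
        self.black.discard(v)
        self.h[v] = 1 + max((self.h[u] for u in self.adj[v]), default=self.h[v])
        for w in (v, *self.adj[v]):
            if w not in self.black and self._low(w) and w not in self.L:
                self.L.append(w)
```

Additions at the rear and deletions from the front of the deque give the fairness required by (C1).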
4.3.2. Distributed scheduler  The proposed scheduling strategy can be distributed
so that each vertex blackens itself eventually if it is white and low. The vertices
communicate by messages of a special form, called tokens. Associated with each
edge (x, y) is a token. Each token has a value which is a positive integer; the value
of token (x, y) is |x.h - y.h|. This token is held by either x or y, whichever has the
smaller height.
It follows from the description above that a vertex that holds all incident tokens
has a height that is smaller than that of all of its neighbors; if such a vertex is white, it may
color itself black. A vertex, upon becoming white, increases its height by d, d > 0,
effectively reducing the value of each incident token by d (note that such a vertex
holds all its incident tokens, and, hence, it can alter their values). The quantity d
should be different from all token values so that neighbors will not have the same
height, i.e., no token value becomes zero, after a vertex's height is increased. If
token (x, y)'s value becomes negative as a result of reducing it by d, indicating that
the holder x now has a greater height than y, then x resets the token value to its
absolute value and sends the token to y.
Observe that the vertices need not query each other for their heights, because a
token is eventually sent to a vertex of a lower height. Also, since the token value
is the difference in heights between neighbors, it is possible to bound the token
values whereas the vertex heights are unbounded over the course of the computation.
Initially, token values have to be computed and the tokens have to be placed
appropriately based on the heights of the vertices. There is no need to keep the
vertex heights explicitly from then on.
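A per-vertex sketch of this token scheme follows; all names are ours, and `Network` stands in for reliable point-to-point message delivery. A vertex holding all incident tokens may blacken; on whitening it picks an increment d distinct from every held token value, decrements its held tokens, and forwards any token that goes negative as its absolute value:

```python
class Network(dict):
    # Maps vertex name -> TokenVertex; delivers a token immediately
    def send(self, to, frm, value):
        self[to].receive(frm, value)

class TokenVertex:
    def __init__(self, name, neighbors, held):
        self.name = name
        self.neighbors = set(neighbors)
        self.held = dict(held)   # neighbor -> positive token value |x.h - y.h|
        self.black = False

    def can_blacken(self):
        # Holding every incident token means this vertex is lower than
        # all of its neighbors
        return not self.black and set(self.held) == self.neighbors

    def blacken(self):
        assert self.can_blacken()
        self.black = True

    def whiten(self, d, network):
        # d must differ from every held value so no token value hits zero
        assert self.black and d > 0 and d not in self.held.values()
        self.black = False
        for nb in list(self.held):
            v = self.held[nb] - d
            if v < 0:
                # The neighbor is now the lower endpoint: reset to |v|
                # and hand the token over
                network.send(nb, self.name, -v)
                del self.held[nb]
            else:
                self.held[nb] = v

    def receive(self, frm, value):
        self.held[frm] = value
```

For two neighbors with heights 0 and 5, the token value is 5 and the lower vertex holds it; after that vertex whitens with d = 7, its (implicit) height is 7, the token value becomes |7 - 5| = 2, and the token crosses to the other endpoint.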
We have left open the question of how a vertex's height is to be increased when
it is whitened. The only requirement is that neighbors should never have the same
height. A particularly interesting scheme is to increase a vertex's height beyond all
its neighbors' heights whenever it is whitened; this amounts to sending all incident
tokens to the neighbors when a vertex is whitened. Under this strategy, the token
values are immaterial: a white vertex is blackened if it holds all incident tokens, and
upon being whitened, a vertex sends all incident tokens to the neighbors. Assuming
that each edge (x, y) is directed from the token-holder x to y, the graph is initially
acyclic, and each blackening and whitening move preserves the acyclicity. This
is the strategy that was employed in solving the distributed dining philosophers
problem in Chandy and Misra [4]; a black vertex is eating and a white vertex is
hungry; the constraint (S1) amounts to the well-known requirement that neighboring
philosophers do not eat simultaneously. Our current problem has no counterpart
of the thinking state, which added a slight complication to the solution in [4]. The
tokens are called forks in that solution.
As described in Section 4.1, the actions (vertices) are partitioned among a group of
processors. The distributed scheduling strategy has to be modified slightly, because
the steps we have prescribed for the vertices are to be taken by the processors on
behalf of their constituent actions. Message transmissions among the vertices at a
processor can be simulated by simple manipulations of the data structures of that
processor.
4.4. Compatibility
A loose execution of a program allows only compatible actions to be executed simultaneously.
In this section, we give a definition of compatibility and state the
Reduction Theorem, which says, in effect, that a loose execution may be simulated
by a tight execution (in which executions of different actions are not interleaved).
We expect the user to specify the compatibility relation for procedures within each
box; then the compatibility relation among all procedures can be computed efficiently
from the definition given below.
The states of a box are given by the values of its variables; the state of a program
is given by its box states. With each procedure (partial and total) we associate
a binary relation over program states. Informally, (u, v) ∈ p, for program states
u and v, denotes that there is a tight execution of p that moves the
state of the system from u to v. In the following, concatenation of procedure names
corresponds to their relational product. For strings x, y, we write x ⊆ y to denote
that the relation corresponding to x is a subset of the relation corresponding to y.
Procedures p, q are compatible, denoted by p ∼ q, if all of the following conditions
hold. Observe that ∼ is a symmetric relation.
C1. If p, q are in the same box,
(p is total ⇒ qp ⊆ pq), and
(q is total ⇒ pq ⊆ qp).
C2. If p, q are in different boxes, the transitive closure of the relation
is a partial order over the boxes.
Condition C0 requires that procedures that are called by compatible procedures
be compatible; this condition is well-grounded because the "calls" relation among
procedures is well-founded. Condition C1 says that for p, q in the same box, the effect of executing a
partial procedure and then a total procedure can be simulated by executing them
in the reverse order. Condition C2 says that compatible procedures impose similar
(i.e., non-conflicting) partial orders on boxes.
Notes:
(1) For procedures with parameters, compatibility is checked with all possible values
of parameters.
(2) Partial procedures of the same box are always compatible.
(3) Total procedures p, q of the same box are compatible provided pq = qp (both
implications of C1 apply).
Example of Compatibility  Consider the unbounded fifo channel of Section 3.1. We
show that get ∼ put, i.e., for any x, y, get(x) put(y) ⊆ put(y) get(x). Note that
the pair of states (u, v), where u represents the empty channel, does not belong to
the relation get(x).
The final states, given by the values of x and r, are identical.
The preceding argument shows that two procedures from different boxes that call
put and get (i.e., a sender and a receiver) may execute concurrently. Further, since
get ∼ get by definition, multiple receivers may also execute concurrently. However,
it is not the case that put ∼ put; that is, put(x) put(y) and put(y) put(x) differ,
because a fifo channel is a sequence, and appending a pair of items in different
orders results in different sequences. Therefore, multiple senders may not execute
concurrently.
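The argument can be checked on concrete channel states. In the sketch below (function names are ours), a channel state is a tuple; `get` is partial (it rejects the empty channel), `put` is total:

```python
def put(state, x):
    # Total procedure: appending always succeeds
    return state + (x,)

def get(state):
    # Partial procedure: rejects (returns None) on the empty channel
    if not state:
        return None
    return state[1:], state[0]

def get_then_put(state, y):
    r = get(state)
    if r is None:
        return None            # get rejected: the pair is not in the relation
    rest, item = r
    return put(rest, y), item

def put_then_get(state, y):
    rest = put(state, y)       # put is total, and the channel is now non-empty
    return rest[1:], rest[0]
```

From a non-empty state both orders reach the same final state and return the same item; from the empty channel only put-then-get succeeds, which is why the relationship is an inclusion rather than an equality. And put(put(s, x), y) differs from put(put(s, y), x), so senders conflict.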
300 JAYADEV MISRA
Lemma 1: Let p ∼ q, where p is total (p, q may not belong to the same box).
Then, qp ⊆ pq.
This is the crucial lemma in establishing the Reduction Theorem, given below.
The lemma permits a total procedure p to be moved left over any other procedure
with which it is compatible. This strategy can be employed to bring all the
components of a single procedure together, thereby converting a loose execution
into a tight execution. Observe that the resulting tight execution establishes an identical
final state starting from the same initial state as the original loose execution.
Therefore, properties of loose executions may be derived from those of the tight
executions. For a proof of the following theorem, see chapter 10 of [20].
Reduction Theorem: Let E denote a finite loose execution of some set of actions.
There exists a tight execution, F, of those actions such that E ⊆ F.
5. Concluding Remarks
Traditionally, multiprograms consist of processes that execute autonomously. A
typical process receives requests from the other processes, and it may call upon other
processes for data communication or synchronization. The interaction mechanism
(shared memory, message passing, broadcast, etc.) defines the platform on which
it is most suitable to implement a specific multiprogram.
In the Seuss model, we view a multiprogram as a set of actions where each
action deals with one aspect of the system functionality, and execution of an action
is wait-free. Additionally, we specify the conditions under which an action is to
be executed. Typical actions in an operating system include garbage collection,
responding to a device failure by posting appropriate warnings, and initiating
communication after receiving a request. Process
control systems, such as avionics and telephony, may contain actions for processing
of received data, updates of internal data structures, and outputs for display and
archival recordings. The Seuss view that all multiprogramming can be regarded
as (1) coding of the action-bodies, and (2) specifying the conditions under which
each action-body is to be executed, differs markedly from the conventional view;
we consider and justify some of these differences below.
First, Seuss insists that a program execution be understood as a single thread of
control, avoiding interleaved executions of the action-bodies, because it is simpler
to understand a single thread and formalize this understanding within a logic.
An implementation, however, need not be restricted to a single thread as long
as it achieves the same effect as a single-thread execution. We will show how
implementations may exploit the structure of Seuss programs (and user-supplied
directives) to run concurrent threads. A consequence of having a single thread
is that the notion of waiting has to be abandoned, because a thread can afford
to wait only if there is another thread whose execution can terminate its waiting;
rendezvous-based interactions [9, 17], which require at least two threads of control to
be meaningful, have to be abandoned in this model of execution. We have replaced
waiting by the refusal of a procedure to execute. For instance, a call upon a P
operation on a semaphore (which could cause the caller to wait) is now replaced by
the call being rejected if the semaphore is not in the appropriate state; the caller
then attempts the call repeatedly during the ensuing execution.
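A minimal sketch of this rejection discipline (names are ours): P is a partial procedure that refuses rather than blocks, and a refused caller simply retries on a later scheduling round:

```python
class Semaphore:
    def __init__(self, n):
        self.n = n

    def P(self):
        # Partial procedure: commits only when the count is positive;
        # otherwise the call is rejected and no state changes
        if self.n == 0:
            return False
        self.n -= 1
        return True

    def V(self):
        # Total procedure: always accepted
        self.n += 1
```

A caller whose P is rejected does not wait; it attempts the call again during subsequent executions of its action.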
Second, a cat is a mechanism for grouping related actions. It is not a process,
though traditional processes may be encoded as cats (as we have done for the
multiplex and the database). A cat can be used to encode protocols for communi-
cation, synchronization and mutual exclusion, and it can be used to encode objects
as in object-oriented programming. The only method of communication among the
cats is through procedure calls, much like the programming methodology based on
remote procedure calls. The minimality of the model makes it possible to develop
a simple theory of programming.
Third, Seuss divides the multiprogramming world into (1) programming of action-bodies
whose executions are wait-free, and (2) specifying the conditions for orchestrating
the executions of the action-bodies. Different theories and programming
methodologies are appropriate for these two tasks. In particular, if the action-bodies
are sequential programs then traditional sequential programming methodologies
may be adopted for their development. The orchestration of the actions
has to employ some multiprogramming theory, but it is largely independent of the
action-bodies. Seuss addresses only the design aspects of multiprograms, i.e., how
to combine actions, and not the designs of the action-bodies. Separation of sequential
and multiprogramming features has also been advocated in Browne et al.
[3].
Fourth, Seuss severely restricts the amount of control available to the programmer
at the multiprogramming level. The component actions of a program can be
executed through infinite repetitions only. In particular, sequencing of two actions
has to be implemented explicitly. Such loss of flexibility is to be expected when
controlling larger abstractions. For an analogy, observe that machine language offers
complete control over all aspects of a machine operation: the instructions may
be treated as data, data types may be ignored entirely, and control flow may be
altered arbitrarily. Such flexibility is appropriate when a piece of code is very short;
then the human eye can follow arbitrary jumps, and "mistreatment" of data can
be explained away in a comment. Flow charts are particularly useful in unraveling
intent in a short and tangled piece of code. At higher levels, control structures for sequential
programs are typically limited to sequential composition, alternation, and
repetition; arbitrary jumps have nearly vanished from all high-level programming.
Flow charts are of limited value at this level of programming, because intricate
manipulations are dangerous when attempted at a higher level, and prudent programmers
limit themselves to appropriate programming methodologies in order to
avoid such dangers. We expect that the rules of combination have to become even
simpler at the multiprogramming level. That is why we propose that the component
actions of a multiprogram be executed using a form of repeated non-deterministic
selection only.
Our work incorporates ideas from serializability and atomicity in databases [2],
notions of objects and inheritance [16], Communicating Sequential Processes [9], i/o
automata [14], and the Temporal Logic of Actions [13]. A partial procedure is similar to
a database (nested) transaction that may commit or abort; the procedure commits
(to execute) if its precondition holds and its preprocedure commits, and it aborts
otherwise. A typical abort of a database transaction requires a rollback to a valid
state. In Seuss, a partial procedure does not change the program state until it
commits, and therefore, there is no need for a rollback. The form of a partial
procedure is inspired by Communicating Sequential Processes [9]. Our model may
be viewed as a special case of CSP because we disallow nested partial procedures.
Seuss is an outgrowth of our earlier work on UNITY [5]. A UNITY program
consists of statements, each of which may change the program state. A program
execution starts in a specified initial state. Statements of the program are chosen
for execution in a non-deterministic fashion, subject only to the (fairness) rule that
each statement be chosen eventually. The UNITY statements were particularly simple
(assignments to program variables) and the model allowed few programming
abstractions besides asynchronous compositions of programs. Seuss is an attempt
to build a compositional model of multiprogramming, retaining some of the advantages
of UNITY. An action is similar to a statement, though we expect actions
to be much larger in size. We have added more structure to UNITY, by distinguishing
between total and partial procedures, and imposing a hierarchy over the
cats. Executing actions as indivisible units would extract a heavy penalty in
performance; therefore, we have developed the theory that permits interleaved executions
of the actions. Programs in UNITY interact by operating on a shared data space;
Seuss cats, however, have no shared data and they interact through procedure calls
only. In a sense, cats may only share cats. As in UNITY, the issues of deadlock,
starvation, progress (liveness), etc., can be treated by making assertions about the
sequence of states in every execution. Also, as in UNITY, program termination is
not a basic concept. A program has reached a fixed point when the preconditions of
all actions are false; further execution of the program does not change its state
then, and an implementation may terminate a program execution that reaches a
fixed point. We have developed a simple logic for UNITY (for some recent
developments, see [19], [18], [6]) that is applicable to Seuss as well.
Acknowledgments
I am profoundly grateful to Rajeev Joshi who has provided
a number of ideas leading up to the formulation of the concept of compatibility
and the Reduction Theorem. Many of the ideas in this paper were developed after
discussions with Lorenzo Alvisi, Will Adams and Calvin Lin. I am also indebted to
the participants at the Marktoberdorf Summer School of 1998, particularly Tony
Hoare, for interactions.
References
Sorting networks and their applications.
Concurrency Control and Recovery in Database Systems.
A language for speci
The drinking philosophers problem.
Parallel Program Design: A Foundation.
Towards Compositional Speci
Concurrent control with readers and writers.
Solution of a problem in concurrent programming control.
Communicating Sequential Processes.
Personal communication.
On the impossibility of robust solutions for fair resource allocation.
Maximally concurrent programs.
The temporal logic of actions.
An introduction to input/output automata.
The Temporal Logic of Reactive and Concurrent Systems
Communication and Concurrency.
A logic for concurrent programming: Progress.
A logic for concurrent programming: Safety.
A Discipline of Multiprogramming.
A note on reliable full-duplex transmission over half-duplex links
Keywords: multiprogramming; concurrency; communicating processes; object-based programming; semaphore; distributed implementation
584673
Deriving Efficient Cache Coherence Protocols Through Refinement.

Abstract: We address the problem of developing efficient cache coherence protocols for use in distributed systems implementing distributed shared memory (DSM) using message passing. A serious drawback of traditional approaches to this problem is that the users are required to state the desired coherence protocol at the level of asynchronous message interactions involving request, acknowledge, and negative acknowledge messages, and handle unexpected messages by introducing intermediate states. Proofs of correctness of protocols described in terms of low-level asynchronous messages are very involved. Often the proofs hold only for specific configurations and buffer allocations. We propose a method in which the users state the desired protocol directly in terms of the desired high-level effect, namely synchronization and coordination, using the synchronous rendezvous construct. These descriptions are much easier to understand and computationally more efficient to verify than asynchronous protocols due to their small state spaces. The rendezvous protocol can also be synthesized into efficient asynchronous protocols. In this paper, we present our protocol refinement procedure, prove its soundness, and provide examples of its efficiency. Our synthesis procedure applies to large classes of DSM protocols.

Introduction
With the growing complexity of concurrent systems, automated procedures for
developing protocols are growing in importance. In this paper, we are interested
in protocol refinement procedures, which we define to be those that accept high-level
specifications of protocols, and apply provably correct transformations on
them to yield detailed implementations of protocols that run efficiently and have
modest buffer resource requirements. Such procedures enable correctness proofs
of protocols to be carried out with respect to high-level specifications, which can
considerably reduce the proof effort. Once the refinement rules are shown to be
sound, the detailed protocol implementations need not be verified.
In this paper, we address the problem of producing correct and efficient cache
coherence protocols used in distributed shared memory (DSM) systems. DSM
systems have been widely researched as the next logical step in parallel processing
[2, 4, 11, 13], a confirmation of the growing importance of DSM. A central
problem in DSM systems is the design and implementation of distributed coherence
protocols for shared cache lines using message passing [8]. The present-day
approach to this problem consists of specifying the detailed interactions possible
between the nodes in terms of low-level requests, acknowledges, and negative
acknowledges, and dealing with "unexpected" messages. The difficulty of designing these protocols
is compounded by the fact that verifying such low-level descriptions invites
state explosion (when done using model-checking [5, 6]) or tedium (when done
using theorem-proving [17]) even for simple configurations. Often these low-level
descriptions are model-checked for specific resource allocations (e.g. buffer sizes);
? Supported in part by ARPA Order #B990 under SPAWAR Contract #N0039-95-C-
(Avalanche), DARPA under contract #DABT6396C0094 (UV).
it is often not known what would happen when these allocations are changed.
Protocol refinement can help alleviate this situation considerably. Our contribution
in this paper is a protocol refinement procedure which can be applied to
derive a large class of DSM cache protocols.
Most of the problems in designing DSM cache coherence protocols are attributable
to the apparent lack of atomicity in the implementation behaviors.
Although some designers of these protocols may begin with a simple atomic-
transaction view of the desired interactions, such a description is seldom written
down. Instead, what gets written down as the "highest level" specification is a
detailed protocol implementation which was arrived at through ad hoc reasoning
of the situations that can arise. In this paper, we choose CSP [9] as our specification
language to allow the designers to capture their initial atomic-transaction
view (rendezvous protocol). The rendezvous protocol is then subjected to syntax-directed
translation rules to modify the rendezvous communication primitives
of CSP into asynchronous communication primitives yielding an efficient detailed
implementation (asynchronous protocol). We empirically show that the
rendezvous protocols are several orders of magnitude more efficient to model-check
than their corresponding detailed implementations. In addition, we also
show that in the context of a state of the art DSM machine project called the
Avalanche [2], our procedure can automatically produce protocol implementations
that are comparable in quality to hand-designed asynchronous protocols, where
quality is measured in terms of (1) the number of request, acknowledge, and negative
acknowledge (nack) messages needed for carrying out the rendezvous specified
in the given specification, and (2) the buffering requirements to guarantee a precisely
defined and practically acceptable progress criterion.
The rest of the paper is organized as follows. In the remainder of this section, we review related
past work. Section 2 presents the structure of typical DSM protocols in distributed
systems. Section 3 presents our syntax-directed translation rules, along with an
important optimization called request/reply. Section 4 presents an informal argument
that the refinement rules we present always produce correct result, and also
points to a formal proof of correctness done using PVS [16]. Section 5 presents
an example protocol developed using the refinement rules, and the efficiency of
model-checking the rendezvous protocol compared to the efficiency of model-checking
the asynchronous protocol. Finally, Section 6 presents a discussion of
buffering requirements and concludes the paper.
Related Work
Chandra et al [3] use a model based on continuations to help reduce the complexity
of specifying the coherency protocols. The specification can then be model
checked and compiled into an efficient object code. In this approach, the protocol
is still specified at a low-level; though rendezvous communication can be
modeled, it is not very useful as the transient states introduced by their compiler
cannot adequately handle unexpected messages. In contrast, in our approach, user
writes the rendezvous protocol using only the rendezvous primitive, verifies the
protocol at this level with great efficiency and compiles it into an efficient asynchronous
protocol or object code. Our work closely resembles that of Buckley and
Silberschatz [1]. Buckley and Silberschatz consider the problem of implementing
rendezvous using message passing when the processes use generalized input/output guards,
with the rendezvous to be implemented in software. Their solution is too expensive for DSM protocol
implementations. In contrast, we focus on a star configuration of processes with
suitable syntactic restrictions on the high-level specification language, so that an
efficient asynchronous protocol can be automatically generated.
Gribomont [7] explored protocols in which the rendezvous communication
can be simply replaced by asynchronous communication without affecting the
processes in any other way. In contrast, we show how to change the processes
when the rendezvous communication is replaced by asynchronous communica-
tion. Lamport and Schneider [12] have explored the theoretical foundations of
comparing atomic transactions (e.g., rendezvous communication) and split transactions
(e.g., asynchronous communication), based on left and right movers [14],
but have not considered specific refinement rules.
Cache Coherency in Distributed Systems
In directory based cache coherent multiprocessor systems, the coherency of each
line of shared memory is managed by a CPU node, called home node, or simply
home 1 . All nodes that may access the shared line are called remote nodes. The
home node is responsible for managing access to the shared line by all nodes
without violating the coherency policy of the system. A simple protocol used
in Avalanche, called migratory, is shown in Figure 2. The remote nodes and
home node engage in the following activity. Whenever a remote node R wishes to
access the information in a shared line, it first checks if the data is available (with
required access permissions) in its local cache. If so, R uses the data from the
cache. If not, it sends a request for permissions to the home node of the line. The
home node may then contact some other remote nodes to revoke their permissions
in order to grant the required permissions to R. Finally, the home node grants
the permissions (along with any required data) to R. As can be seen from this
description, a remote node interacts only with the home node, while the home
node interacts with all the remote nodes. This suggests that we can restrict the
communication topology of interest to a star configuration, with the home node
as the hub, without losing any descriptive power. This decision helps synthesize
more efficient asynchronous protocols, as we shall see later.
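In the atomic-transaction view that this paper advocates as the starting point, the migratory behavior can be sketched as below (illustrative code, not the Avalanche implementation; all names are ours). The home revokes the current owner before granting, so exactly one node holds the line at any time:

```python
class Home:
    # Home node for one cache line
    def __init__(self):
        self.owner = None

    def request(self, node):
        # Atomic view: revoke-then-grant appears as one indivisible step
        if self.owner is not None and self.owner is not node:
            self.owner.revoke()
        self.owner = node
        return "granted"

class Remote:
    def __init__(self, home, name):
        self.home, self.name = home, name
        self.valid = False           # is the line present in the local cache?

    def access(self):
        if not self.valid:           # miss: ask the home node
            self.home.request(self)
            self.valid = True
        return self.name             # hit: use the cached data

    def revoke(self):
        self.valid = False           # permissions recalled by the home
```

The remote node interacts only with the home, and the home with all remotes, matching the star configuration above.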
2.1 Complexity of Protocol Design
As already pointed out, most of the problems in the design of DSM protocols can
be traced to lack of atomicity. For example, consider the following situation. A
shared line is being read by a number of remote nodes. A remote node, say R1,
wishes to modify the data, hence sends a request to the home node for write per-
mission. The home node then contacts all other remote nodes that are currently
accessing the data to revoke their read permissions, and then grants the write
permission to R1. Unfortunately, it is incorrect to abstract the entire sequence of
1 The home for different cache lines can be different. We will derive protocols focusing
on one cache line, as is usually done.
actions consisting of contacting all other remote nodes to revoke permissions and
granting permissions to R1 as an atomic action. This is because when the home
node is in the process of revoking permissions, a different remote node, say R2,
may wish to obtain read permissions. In this case, the request from R2 must be either
nacked or buffered for later processing. To handle such unexpected messages,
the designers introduce intermediate states, also called transient states, leading
to the complexity of the protocols. On the other hand, as we will show in the
rest of the paper, if the designer is allowed to state the desired interactions using
an atomic view, it is possible to refine such a description using a refinement procedure
that introduces transient states appropriately to handle such unexpected
messages.
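The scenario above can be sketched as a small home-node state machine (state and message names are ours, chosen for illustration). While the home sits in the transient REVOKING state on behalf of R1's write request, a read request from another node is unexpected and gets nacked:

```python
class HomeFSM:
    def __init__(self, readers):
        self.state = "SHARED"        # current readers hold the line
        self.readers = set(readers)
        self.pending = None

    def write_request(self, node):
        if self.state != "SHARED":
            return "nack"
        self.state = "REVOKING"      # transient state: revocation in flight
        self.pending = node
        return "revoking"

    def read_request(self, node):
        if self.state != "SHARED":
            return "nack"            # unexpected message mid-transaction
        self.readers.add(node)
        return "grant_read"

    def revoke_ack(self, node):
        self.readers.discard(node)
        if not self.readers:         # all read permissions revoked
            self.state = "EXCLUSIVE"
            return ("grant_write", self.pending)
        return None
```

Hand-written protocols introduce such transient states by ad hoc reasoning; the refinement procedure in this paper introduces them mechanically.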
2.2 Communication Model
We assume that the network that connects the nodes in the systems provides
reliable, point-to-point in-order delivery of messages. This assumption is justified
in many machines, e.g., DASH [13], and Avalanche [2]. We also assume that the
network has infinite buffering, in the sense that the network can always accept new
messages to be delivered. Without this assumption, the asynchronous protocol
generated may deadlock. If the assumption is not satisfied, then the solution
proposed by Hennessy and Patterson in [8] can be used as a post-processing step
of the refined protocol. They divide the messages into two categories: request
and acknowledge. A request message may cause the recipient to generate more
messages in order to complete the transactions, while an acknowledge message
does not. The authors argue that if the network always accepts acknowledge
messages (as opposed to all messages in the case of a network with infinite buffer),
such deadlocks are broken. As we shall see in Section 3, the asynchronous protocol has
two acknowledge messages: ack and nack. Guaranteeing that the network always
accepts these two acknowledge messages is beyond the scope of this paper.
We use rendezvous communication primitives of CSP [9] to specify the home
node and the remote nodes to simplify the DSM protocol design. In particular,
we use direct addressing scheme of CSP, where every input statement in process Q
is of the form P?msg(v) or P?msg, where P is the identity of the process that sent
the message, msg is an enumerated constant ("message type") and v is a variable
(local variable of Q) which would be set to the contents of the message, and every
output statement in Q is of the form P!msg(e) or P!msg where e is an expression
involving constants and/or local variables of Q. When P and Q rendezvous by P
executing Q!m(e) and Q executing P?m(v), we say that P is an active process
and Q is a passive process in the rendezvous.
The rendezvous protocol written using this notation is verified using either a
theorem prover or a model checker for desired properties, and then refined using
the rules presented in Section 3 to obtain an efficient asynchronous protocol that
can be implemented directly, for example in microcode.
2.3 Process Structure
We divide the states of processes in the rendezvous protocol into two classes:
internal and communication. When a process is in an internal state, it cannot
(a) Home  (b) Remote  (c) Remote
Fig. 1. Examples of communication states in the home node and remote nodes
participate in rendezvous with any other process. However, we assume that such
a process will eventually enter a communication state where rendezvous actions
are offered (this assumption can be syntactically checked). The refinement process
introduces transient states where all unexpected messages are handled. We
denote the i th remote node by r i and the home node by h. For simplicity, we
assume that all the remote nodes follow the same protocol and that the only
form of communication between processes (in both asynchronous and rendezvous
protocols) is through messages, i.e., other forms of communication such as global
variables are not available.
As discussed before, we restrict the communication topology to a star. Since
the home node can communicate with all the remote nodes and behaves like a
server of remote-node requests, it is natural to allow generalized input/output
guards in the home node protocols (e.g., Figure 1(a)). In contrast, we restrict
the remote nodes to contain only input non-determinism, i.e., a remote node can
either specify that it wishes to be an active participant of a single rendezvous with
the home node (e.g., Figure 1(b)) or it may specify that it is willing to be a passive
participant of a rendezvous on a number of messages (e.g., Figure 1(c)). Also, as in
Figure 1(c), we allow τ-guards in the remote node to model autonomous decisions
such as cache evictions. These decisions, empirically validated on a number of real
DSM protocols, help synthesize more efficient protocols. Finally, we assume that
no fairness conditions are placed on the non-deterministic communication options
available from a communication state, with the exception of the forward progress
restriction imposed on the entire system (described below).
2.4 Forward Progress
Assuming that there are no τ-loops in the home node and remote nodes, the
refinement process guarantees that at least one of the refined remote nodes makes
forward progress, if forward progress is possible in the rendezvous protocol. Notice
that forward progress is guaranteed for some remote node, not for every remote
node. This is because assuring forward progress for each remote node requires
allocating too much buffer space at the home node. If there are n remote nodes,
to assure that every remote node makes progress, the home node needs a buffer
that can hold n requests. This is both impractical and non-scalable as n in DSM
machines can be as high as a few thousands. If we were to guarantee progress
only for some remote node, a buffer that can hold 2 messages suffices, as we will
see in Section 3.
3 The Refinement Procedures
We systematically refine the communication actions in h and r i by inspecting the
syntactic structure of the processes. The technique is to split each rendezvous into
Row  State                     Buffer contents  Action
C1   Communication (Active)    empty            (a) Request for rendezvous
                                                (b) goto transient state
C2   Communication (Active)    request          (a) delete the request
                                                (b) Request home for rendezvous
                                                (c) goto transient state
C3   Communication (Passive)   request          Ack/nack the request
T1   Transient                 ack              Successful rendezvous
T2   Transient                 nack             Go back to the communication state
T3   Transient                 request          Ignore the request
Table 1. The actions taken by the remote node when it enters a communication
state or a transient state. After each action, the message in the buffer is removed.
two halves: a request for the rendezvous and an acknowledgment (ack) or negative
acknowledgment (nack) to indicate the success or failure of the rendezvous. At any
given time, a refined process is in one of three states: internal, communication, and
transient. Internal and communication states of the refined process are same as in
the corresponding unrefined process in the rendezvous protocol. Transient states
are introduced by the refinement process in the following manner. Whenever a
process P has Q!m(e) as one of the guards in a communication state, P sends
a request to Q and awaits in a transient state for an ack/nack or a request for
rendezvous from Q. In the transient state, P behaves as follows:
R1: If P receives an ack from Q, the rendezvous is successful, and P changes its
state appropriately.
R2: If P receives a nack from Q, the rendezvous has failed. P goes back to the
communication state and tries the same rendezvous or a different rendezvous.
R3: If P receives a request from Q, the action taken depends on whether P is
the home node or a remote node. If P is a remote node (and Q is then the home
node), P simply ignores the message. (This is because, as discussed in the next
sentence, P "knows" that Q will get P's request, which is tantamount to a nack
of Q's own request.) If P is the home node, it goes back to the communication
state as though it had received a nack ("implicit nack"), and processes Q's request
in the communication state. The rules R1-R3 govern how the remote node and
home node are refined, as will now be detailed.
3.1 Refining the Remote Node
Every remote node has a buffer to store one message from the home node. When
the remote node receives a request from the home node, the request would be
held in the buffer. When a remote node is at a communication or transient state,
its actions are shown in Table 1. The rows of the table are explained below.
C1: When the remote node is in a communication state, and it wishes to be an
active participant of the rendezvous, and no request from home node is pending
in the buffer, the remote node sends a request for rendezvous to home, goes to a
transient state and waits for an ack/nack or a request for rendezvous from the home
node.
C2: This row is similar to C1, except that a request from home is pending
in the buffer. In this case too, the remote node sends a request to home and goes to a
transient state. In addition, the request in the buffer is deleted. As explained in
rule R3, when the home receives the remote's request, it acts as though a nack is
received (implicit nack) for the deleted request.
C3: When the remote node is in a communication state, and it is passive in
the rendezvous, it waits for a request for rendezvous from home. If the request
satisfies any guards of the communication state, it sends an ack to the home
and changes state to reflect a successful rendezvous. If not, it sends a nack to
home and continues to wait for a matching request. In both cases, the request is
removed from the buffer.
T1, T2: If the remote node receives an ack, the rendezvous is successful, and
the state of the process is appropriately changed to reflect the completion of the
rendezvous. If the remote node receives a nack from the home, it is because the
home node does not have sufficient buffers to hold the request. In this case, the
remote node goes back to communication state and retransmits the request, and
reenters the transient state.
T3: As explained in the rule R3, if the remote node receives a request from home,
it simply deletes the request from buffer, and continues to wait for an ack/nack
from home.
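The behaviour in Table 1 can be condensed into a single step function. The following sketch uses our own state, role, and message names and simplifies C2 (the buffered request is simply dropped before the new request is sent), so it is illustrative only:

```python
# Sketch of the refined remote node's reaction on entering a communication
# or transient state (Table 1). States, roles, and messages are illustrative.

def remote_step(mode, role, buffered):
    """mode: 'communication' | 'transient'; role: 'active' | 'passive';
    buffered: None or the pending home-node message ('request'/'ack'/'nack').
    Returns (action, next_mode); any buffered message is consumed."""
    if mode == 'communication':
        if role == 'active':              # C1 (empty buffer) and C2 (drop it)
            return ('send_request_to_home', 'transient')
        if buffered == 'request':         # C3: ack/nack home's request
            return ('ack_or_nack_request', 'communication')
    elif mode == 'transient':
        if buffered == 'ack':             # T1: rendezvous succeeded
            return ('complete_rendezvous', 'communication')
        if buffered == 'nack':            # T2: retry from communication state
            return ('retry', 'communication')
        if buffered == 'request':         # T3: ignore home's request
            return ('ignore', 'transient')
    return ('wait', mode)                 # nothing to do yet

assert remote_step('transient', 'active', 'request') == ('ignore', 'transient')
```

Note that the remote side never buffers more than one home message, which is what makes this flat dispatch sufficient.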
3.2 Refining the Home Node
The home node has a buffer of capacity k messages (k ≥ 2). All incoming messages
are entered into the buffer when there is space, with the following exception. The
last buffer location (called the progress buffer) is reserved for an incoming request
for rendezvous that is known to complete a rendezvous in the current state of the
home. If no such reservation is made, a livelock can result. For example, consider
the situation when the buffer is full and none of the requests in the buffer can
enable a guard in the home node. Due to lack of buffer space, any new requests
for rendezvous must be nacked, thus the home node can no longer make progress.
In addition, when the home node is in a transient state expecting an ack/nack
from r i , an additional buffer needs to be reserved so that a message (ack, nack, or
request for rendezvous) from r i can be held. We refer to this buffer as ack buffer.
When the home is in a communication or transient state, the actions taken are
shown in Table 2. The rows of this table are explained below.
C1: When the home is in a communication state, and it can accept one or more
requests pending in the buffer, the home finishes rendezvous by arbitrarily picking
one of these messages.
C2: If no requests pending in the buffer can satisfy any guard of the communication
state, and one of the guards of the communication state is r i !m i , home node
sends a request for rendezvous to r i , and enters a transient state. As described
above, before sending the message, it also reserves the ack buffer, to ensure that
messages from r i can be held. This step may require the home to generate a nack for one
of the requests in the buffer in order to free the buffer location. Also note that
condition (c) states that no request from r i is pending in the buffer. The rationale
behind this condition is that, if there is a request from r i pending, then r i is at
a communication state with r i being the active participant of the rendezvous.
Due to the syntactic restrictions placed on the description of the remote nodes,
Row  State          Condition                                 Action
C1   Communication  buffer contains a request from r i        (a) an ack is sent to r i
                    that satisfies a rendezvous               (b) delete request from buffer
C2   Communication  (a) no request in the buffer              (a) ack buffer is allocated
                    satisfies any required rendezvous         (if not enough buffer space
                    (b) home node can be active               a nack may be generated)
                    in a rendezvous with r i on m i           (b) a request for rendezvous
                    (i.e. r i !m i is a guard in this state)  is sent to r i
                    (c) no request from r i is pending        (c) goto transient state
                    in buffer
T1   Transient      ack from r i                              rendezvous is completed
T2   Transient      nack from r i                             rendezvous failed.
                                                              Go back to the communication
                                                              state and send next request. If
                                                              no more requests left, repeat
                                                              starting with the first guard.
T3   Transient      (a) request from r i                      treat the request as a
                    (b) waiting for ack/nack from r i         nack plus a request
T4   Transient      (a) request from r j ≠ r i has arrived    enter the request into buffer
                    (b) waiting for ack/nack from r i
                    (c) buffer has space beyond the
                    progress buffer
T5   Transient      (a) request from r j ≠ r i has arrived    enter the request into
                    (b) waiting for ack/nack from r i         progress buffer
                    (c) buffer has only the progress
                    buffer free
                    (d) the request can satisfy a
                    guard in the communication state
Table 2. Home node actions when it enters a communication or transient state.
T6   Transient      request from r j has arrived              nack the request
                    (all cases not covered above)
r i will not accept any requests for rendezvous in this communication state. Hence
it is wasteful to send any request to r i in this case.
T1: A reception of an ack while in transient state indicates completion of a
pending rendezvous.
T2: A reception of a nack while in transient state indicates failure to complete
a rendezvous. Hence the home goes back to the communication state. From that
state, it checks if any new request in the buffer can satisfy any guard of the
communication state. If so, an ack is generated corresponding to that request,
and that rendezvous is completed. If not, the home tries the next output guard
of the communication state. If there are no more output guards, it starts all
over again with the first output guard. The reason for this is that, even though
a previous attempt to rendezvous has failed, it may now succeed, because the
remote node in question might have changed its state through a - guard in its
communication state.
T3: When the home is expecting an ack/nack from r i , if it receives a request
from r i instead, it uses the implicit nack rule, R3. It first assumes that a nack
is received, hence it goes to the communication state, where all the requests,
including the request from r i , are processed as in row T2.
T4: If the home receives a request from r j while expecting an ack/nack from a
different remote r i , and there is sufficient room in the buffer, the request is added
to the buffer.
T5: When the home is in a transient state, and has only two buffer spaces, if it
receives a message from r j , it adds the request to buffer according to the buffer
reservation scheme, i.e., the request is entered into the progress buffer iff the
request can satisfy one of the guards of the communication state. If the request
can't satisfy any guards, it would be handled by row T6.
T6: When a request for rendezvous from r j is received, and there is insufficient
buffer space (all cases not covered by T4 and T5), home nacks r j , and r j
retransmits the message.
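The admission policy behind rows T4–T6 and the progress buffer amounts to a small test on the number of free slots. The function below is our own sketch, not the paper's code:

```python
# Illustrative buffer-admission test for the home node (capacity k >= 2).
# The last free slot (the "progress buffer") is granted only to a request
# that can satisfy a guard of the current communication state; otherwise
# the request is nacked and the sender retransmits.

def admit(free_slots, satisfies_guard):
    if free_slots >= 2:                   # T4: room beyond the progress buffer
        return 'enter_buffer'
    if free_slots == 1 and satisfies_guard:
        return 'enter_progress_buffer'    # T5: reserved slot, guard enabled
    return 'nack'                         # T6: all remaining cases

# Reserving the last slot avoids the livelock described in Section 3.2:
assert admit(1, satisfies_guard=False) == 'nack'
assert admit(1, satisfies_guard=True) == 'enter_progress_buffer'
```

Without the `satisfies_guard` condition on the last slot, the buffer could fill with requests that enable no guard, and the home would nack everything thereafter.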
3.3 Request/Reply Communication
The generic scheme outlined above replaces each rendezvous action with two
messages: a request and an ack. In some cases, it is possible to avoid the ack
message. An example is when two messages, say req and repl, are used in the
following manner: req is sent from the remote node to the home node for some service.
The home node, after receiving the req message, performs some internal actions
and/or communications with other remote nodes and sends a repl message to
the remote node. In this case, it is possible to avoid exchanging ack for both
req and repl. If statements h!req(e) and h?repl(v) always appear together as
h!req(e); h?repl(v) in remote node, and r i !repl always appears after r i ?req
in the home node, then the acks can be dropped. This is because whenever the
home node sends a repl message, the remote node is always ready to receive the
message, hence the home node doesn't have to wait for an ack. In addition, a
reception of repl by the remote node also acts as an ack for req. Of course, if the
remote node receives a nack instead of repl, the remote node would retransmit
the request for rendezvous. This scheme can also be used when req is sent by the
home node and the remote node responds with a repl.
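In terms of message counts (our own accounting, assuming no nacks or retransmissions): the generic refinement spends a request plus an ack per rendezvous, so a req/repl service interaction costs four messages, while the optimized scheme costs two:

```python
# Messages per successful req/repl interaction (no nacks assumed).

def messages_per_interaction(optimized):
    if optimized:
        return 2      # req, then repl (repl doubles as the ack for req)
    return 2 + 2      # (req + ack) + (repl + ack) under the generic scheme

assert messages_per_interaction(False) - messages_per_interaction(True) == 2
```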
4 Correctness of the Refinement
We argue that the refinement is correct by analyzing the different scenarios that
can arise during the execution of the asynchronous protocol. The argument is divided
into two parts: (a) all rendezvous that happen in the asynchronous protocol
are allowed by the rendezvous protocol, and (b) forward progress is assured for
at least one remote node.
A rendezvous is finished in the asynchronous protocol when the remote node
executes rows C1, C3, or T1 of Table 1 and the home node executes rows C1 or
T1 of Table 2. To see that all the rendezvous are in accordance with the rendezvous
protocol, consider what happens when a remote node is the active participant in
the rendezvous (the case when the home node is the active participant is similar).
The remote node r i sends out a request for rendezvous to the home h and starts
waiting for an ack/nack. There are three cases to consider. (1) h does not have
sufficient buffer space. In this case the request is nacked, and no rendezvous
takes place. (2) h has sufficient buffer space, and it is in either an internal state
or a transient state where it is expecting an ack/nack from a different remote
node, r j . In this case, the message is entered into h's buffer. When h enters
a communication state where it can accept the request, it sends an ack to r i ,
completing the rendezvous. Clearly, this rendezvous is allowed by the rendezvous
protocol. If h sends a nack to r i later to make some space in buffer (row C2), r i
would retransmit the request, in which case no rendezvous has taken place. (3) h
has sent a request for rendezvous to r i and is waiting for an ack/nack from r i in
a transient state (this corresponds to the rule R3). In this case, r i simply ignores
the request from h. h knows that its request would be dropped. Hence it treats
the request from r i as a combination of nack for the request it already sent and a
request for rendezvous. Thus, this case becomes exactly like one of the two cases
above, and h generates an ack/nack accordingly; hence if an ack is generated it
would be allowed by the rendezvous protocol. An ack is generated only in case 2,
and in this case the rendezvous is allowed by the rendezvous protocol.
The above informal argument was formalized with the help of PVS [16]: we
proved that the refinement rules are safety preserving, i.e., we showed that if
a transition is taken in the refined protocol, then it is allowed in the original
rendezvous protocol. The PVS theory files and proofs can be obtained from the
first author's WWW home page.
Proof of forward progress
To see that at least one of the remote nodes makes forward progress, we observe
that when the home node h makes forward progress, one of the remote nodes
also makes forward progress. Since we disallow any process to stay in internal
states forever, from every internal state, h eventually enters a communication
state from which it may go to a transient state. Note that because of the same
restriction, when h sends a request to a remote node, the remote would eventually
respond with an ack, nack, or a request for rendezvous. If any forward progress
is possible in the rendezvous protocol, we show that h would eventually leave the
communication or the transient state by the following case analysis.
1. h is in a communication state, and it completes a rendezvous by row C1 of
Table 2. Clearly, progress is being made.
2. h is in a communication state, and the conditions for rows C1 and C2 of Table 2
are not enabled. h continues to wait for a request for rendezvous that would
enable a guard in it. Since a buffer location is used as progress buffer, if
progress is possible in the rendezvous protocol, at least one such request
would be entered into the buffer, which enables C1.
3. h is in a communication state, row C2 of Table 2 is enabled. In this case, h
sends a request for rendezvous, and goes to transient state. Cases below argue
that it eventually makes progress.
4. h is in a transient state, and receives an ack. By row T1 of Table 2, the
rendezvous is completed, hence progress is made.
5. h is in a transient state, and receives a nack (row T2 of Table 2) or an implicit
nack (row T3 of Table 2). In response to the nack, the home goes back to
[State diagrams omitted: (a) Home node, (b) Remote node]
Fig. 2. Rendezvous migratory protocol
the communication state. In this case, the progress argument is based on the
requests for rendezvous that h has received while it was in the transient state,
and the buffer reservation scheme. If one or more requests received enable a
guard in the communication state, at least one such request is entered into
the buffer by rows T4 or T5. Hence an ack is sent in response to one such
request when h goes back to the communication state (row C1), thus making
progress. If no such requests are received, h sends request for rendezvous
corresponding to another output guard (row C2) and reenters the transient
state. This process is repeated until h makes progress by taking actions in C1
or T1. If any progress is possible, eventually either T1 would be enabled, since
h keeps trying all output guards repeatedly, or C1 would be enabled, since h
repeatedly enters the communication state from T2 or T3 and checks
for incoming requests for rendezvous. So, unless the rendezvous protocol is
deadlocked, the asynchronous protocol makes progress.
5 Example Protocol
We take the rendezvous specification of the migratory protocol of Avalanche and
show how the protocol can be refined using the refinement rules described above.
(The architectural team of Avalanche had previously developed the asynchronous
migratory protocol without using the refinement rules described in this paper.)
The protocol followed by the home and remote nodes is shown in Figure 2. Initially
the home node starts in state F (free) indicating that no remote node has access
permissions to the line. When a remote node r i needs to read/write the shared
line, it sends a req message to the home node. The home node then sends a gr
(grant) message to r i along with data. In addition, the home node also records
the identity of r i in a variable o (owner) for later use. Then the home node goes to
state E (exclusive). When the owner no longer needs the data, it may relinquish
the line (LR message). As a result of receiving the LR message, the home node goes
back to F. When the home node is in E, if it receives a req from another remote
node, the home node revokes the permissions from the current owner and then
grants the line to the new requester. To revoke the permissions, it either sends
an inv (invalidate) message to the current owner o and waits for the new value
of data (obtained through ID (invalid done) message), or waits for a LR message
from o. After revoking the permissions from the current owner, a gr message is
sent to the new requester, and the variable o is modified to reflect the new owner.
The remote node initially starts in state I (invalid). When the CPU tries to
read or write (shown as rw in the figure), a req is sent to the home node for
permissions. Once a gr message arrives, the remote node changes the state to V
(valid) where the CPU can read or write a local copy of the line. When the line
is evicted (for capacity reasons, for example), a LR is sent to the home node. Or,
when another remote node attempts to access the line, the home node may send
an inv. In response to inv, an ID (invalid done) is sent to the home node and
the line reverts back to the state I.
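The rendezvous migratory protocol described above can be summarized as two transition tables. The encoding below is our own sketch, and the home's revocation sequence (inv followed by ID, or waiting for LR) is compressed into single entries:

```python
# Transition sketch of the rendezvous migratory protocol. State and message
# names follow the text (F/E for home, I/V for remote); the table form is ours.

HOME = {                                  # (state, message in) -> (state, message out)
    ('F', 'req'): ('E', 'gr'),            # grant the line, record requester in o
    ('E', 'LR'):  ('F', None),            # owner relinquishes the line
    ('E', 'req'): ('E', 'inv'),           # revoke from owner before re-granting
    ('E', 'ID'):  ('E', 'gr'),            # invalidation done: grant to new owner
}

REMOTE = {
    ('I', 'rw'):    ('I', 'req'),         # CPU read/write: ask for permissions
    ('I', 'gr'):    ('V', None),          # grant arrives: line becomes valid
    ('V', 'evict'): ('I', 'LR'),          # capacity eviction: relinquish
    ('V', 'inv'):   ('I', 'ID'),          # revocation: send data back, invalidate
}

# A remote read on an invalid line triggers req, and the free home grants it:
assert REMOTE[('I', 'rw')] == ('I', 'req')
assert HOME[('F', 'req')] == ('E', 'gr')
```

This rendezvous-level table is what the SPIN experiments reported later verify directly; the refined asynchronous version adds transient states and ack/nack traffic on top of it.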
To refine the migratory protocol, we note that the messages req and gr can
be refined using the request/reply strategy, where the remote node sends req
and the home node sends gr in response. Similarly, the messages inv and ID can
be refined using request/reply, except that in this case inv is sent by the home
node, and the remote node responds with an ID. By following the request/reply
strategy, a pair of consecutive rendezvous such as r i ?req; r i !gr or r i !inv; r i ?ID
(data) takes only two messages, as shown in Figure 3.
The refined home and remote nodes are shown in Figure 3. In these figures, we
use "??" and "!!" instead of "?" and "!" to emphasize that the communication is
asynchronous. In both these figures, transient states are shown as dotted circles
(the dotted arrows are explained later). As discussed in Section 3.2, when the
refined home node is in a transient state, if it receives a request from the process
from which it is expecting an ack/nack, it would be treated as a combination
of a nack and a request. We write [nack] to imply that the home node has
received the nack as either an explicit nack message or an implicit nack. Again,
as discussed in Section 3.2, when the home node does not have a sufficient number
of empty buffers, it nacks the requests, irrespective of whether the node is in an
internal, transient, or communication state. For the sake of clarity, we left out
all such nacks other than the one on the transient state (labeled r(x)??msg/nack).
As explained in Section 3.1, when the remote node is in a transient state, if it
receives a message from the home node, the remote node ignores the message; no
ack/nack is ever generated in response to this request. In Figure 3, we show
this as a self-loop on the transient states of the remote node with label h??*.
The asynchronous protocol designed by the Avalanche design team differs
from the protocol shown in Figure 3 in that in their protocol the dotted lines
are - actions, i.e., no ack is exchanged after an LR message. We believe that the
loss of efficiency due to the extra ack is small. We are currently in the process of
quantifying the efficiency of the asynchronous protocol designed by hand and the
asynchronous protocol obtained by the refinement procedure.
Verification: As can be expected, verification of the rendezvous protocols is
much simpler than verification of the asynchronous protocols. We model-checked
the rendezvous and asynchronous versions of the migratory protocol above and
of invalidate, another DSM protocol used in Avalanche, using SPIN [10]. The
number of states visited and the time taken in seconds for these two protocols is
shown in Figure 3(c). The complexity of verifying the hand-designed migratory or
invalidate protocol is comparable to that of verifying the refined asynchronous
protocol. As can be seen, verifying the rendezvous protocol generates far fewer
states and takes much
[State diagrams omitted: (a) Home node, (b) Remote node]
(c) Model-checking efficiency (states visited / time in seconds):
Protocol    N  Asynchronous protocol  Rendezvous protocol
Migratory   2  23163/2.84             54/0.1
            -  Unfinished             235/0.4
            8  Unfinished             965/0.5
Invalidate  2  193389/19.23           546/0.6
            -  Unfinished             18686/2.3
            6  Unfinished             228334/18.4
Fig. 3. Refined remote node of the Migratory protocol
less run time than verifying the asynchronous protocol. In fact, the rendezvous
migratory protocol could be model checked for up to 64 nodes in 32MB of mem-
ory, while the asynchronous protocol can be model checked for only two nodes
in 64MB. Currently we are developing a simulation environment to evaluate the
performance of the various asynchronous protocols.
6 Conclusions
We presented a framework to specify the DSM protocols at a high-level using
rendezvous communication. These rendezvous protocols can be efficiently veri-
fied, for example using a model-checker. The protocol can be translated into an
efficient asynchronous protocol using the refinement rules presented in this pa-
per. The refinement rules add transient states to handle unexpected messages.
The rules also address buffering considerations. To assure that the refinement
process generates an efficient asynchronous protocol, some syntactic restrictions
are placed on the processes. These restrictions, namely enforcing a star configuration
and restricting the use of generalized guards, are inspired by domain-specific
considerations. We are currently studying letting two remote nodes communicate
directly in the asynchronous protocol so as to obtain better efficiency. Relaxing the star
configuration requirement for the rendezvous protocol does not add much descriptive
power; relaxing this constraint for the asynchronous protocol, however, can
improve efficiency.
The refinement rules presented also guarantee forward progress per line,
but not per remote node. Forward progress per node can be guaranteed with
modest buffer as follows. Every home manages the buffer pool as a shared resource
between all the cache lines. However, instead of using a progress buffer per line,
a progress buffer per node is used. A request from a node is entered into the
shared buffer pool if there is buffer space, or into the progress buffer of that node
if it satisfies a progress criterion. This strategy guarantees forward progress per
node, but not per line. However, virtually all modern processors have a bounded
instruction issue window. Using this property, and the fact that the protocol actions of a
line do not interfere with those of another line [15], one can show that forward progress
is guaranteed per line as well as per remote node.
--R
An effective implementation for the generalized input-output construct of CSP
A comparison of software and hardware synchronization mechanisms for distributed shared memory multiprocessors.
Language support for writing memory coherency protocols.
Cray Research
Protocol verification as a hardware design aid.
Using formal verification/analysis methods on the critical path in system design: A case study.
Computer Architecture: A Quantitative Approach
Communicating sequential processes.
The state of spin.
The Stanford FLASH multiprocessor.
Pretending atomicity.
The Stanford DASH multiprocessor.
A method of proving properties of parallel programs.
The S3.
PVS: Combining specification
Protocol verification by aggregation of distributed transactions
--TR
Design and validation of computer protocols
The temporal logic of reactive and concurrent systems
The Stanford Dash Multiprocessor
The Stanford FLASH multiprocessor
Teapot
Computer architecture (2nd ed.)
An Effective Implementation for the Generalized Input-Output Construct of CSP
Communicating sequential processes
Reduction
From Synchronous to Asynchronous Communication
Protocol Verification as a Hardware Design Aid
Exploiting Parallelism in Cache Coherency Protocol Engines
Using Formal Verification/Analysis Methods on the Critical Path in System Design
Protocol Verification by Aggregation of Distributed Transactions
PVS | DSM protocols;communication protocols;refinement |
584693 | Efficient generation of rotating workforce schedules. | Generating high-quality schedules for a rotating workforce is a critical task in all situations where a certain staffing level must be guaranteed, such as in industrial plants or police departments. Results from ergonomics (BEST, Guidelines for shiftworkers, Bulletin of European Time Studies No. 3, European Foundation for the Improvement of Living and Working Conditions, 1991) indicate that rotating workforce schedules have a profound impact on the health and satisfaction of employees as well as on their performance at work. Moreover, rotating workforce schedules must satisfy legal requirements and should also meet the objectives of the employing organization. In this paper, our description of a solution to this problem is being stated. One of the basic design decisions was to aim at high-quality schedules for realistically sized problems obtained rather quickly, while maintaining human control. The interaction between the decision-maker and the algorithm therefore consists of four steps: (1) choosing a set of lengths of work blocks (a work block is a sequence of consecutive days of work), (2) choosing a particular sequence of blocks of work and days-off blocks amongst these that have optimal weekend characteristics, (3) enumerating possible shift sequences for the chosen work blocks subject to shift change constraints and bounds on sequences of shifts, and (4) assignment of shift sequences to work blocks while fulfilling the staffing requirements. The combination of constraint satisfaction and problem-oriented intelligent backtracking algorithms in each of the four steps allows for finding good solutions for real-world problems in acceptable time. Computational results from a benchmark example found in the literature confirmed the viability of our approach. The algorithms have been implemented in commercial shift scheduling software. | Introduction
Workforce scheduling is the assignment of employees to shifts or days-off for a given period of
time. There exist two main variants of this problem: rotating (or cyclic) workforce schedules and
noncyclic workforce schedules. In a rotating workforce schedule-at least during the planning
stage-all employees have the same basic schedule but start with different offsets. Therefore, while
individual preferences of the employees cannot be taken into account, the aim is to find a schedule
that is optimal for all employees on the average. In noncyclic workforce schedules individual
preferences of employees can be taken into consideration and the aim is to achieve schedules that
fulfill the preferences of most employees. In both variants of workforce schedules other constraints
such as the minimum needed number of employees in each shift have to be satisfied. Both variants
of the problem are NP-complete [8] and thus hard to solve in general, which is consistent with
the prohibitively large search spaces and conflicting constraints usually encountered. For these
reasons, unless it is absolutely required to find an optimal schedule, generation of good feasible
schedules in a reasonable amount of time is very important. Because of the complexity of the
problem and the relatively high number of constraints that must be satisfied, and, in case of
soft constraints, optimized, generating a schedule without the help of a computer in a short time is almost
impossible even for small instances of the problem. Therefore, computerized workforce scheduling
has been the subject of interest of researchers for more than years. Tien and Kamiyama [11]
give a good survey of algorithms used for workforce scheduling. Different approaches were used
to solve problems of workforce scheduling. Examples for the use of exhaustive enumeration are [5]
and [3]. Glover and McMillan [4] rely on integration of techniques from management sciences and
artificial intelligence to solve general shift scheduling problems. Balakrishnan and Wong [1] solved
a problem of rotating workforce scheduling by modeling it as a network flow problem. Smith
and Bennett [10] combine constraint satisfaction and local improvement algorithms to develop
schedules for anesthetists. Schaerf and Meisels [9] proposed general local search for employee
timetabling problems.
In this paper we focus on the rotating workforce scheduling problem. The main contribution
of this paper is to provide a new framework to solve the problem of rotating workforce scheduling,
including efficient backtracking algorithms for each step of the framework. Constraint satisfaction
is divided into four steps such that for each step the search space is reduced to make possible the
use of backtracking algorithms. Computational results show that our approach is efficient for real-
sized problems. The main characteristic of our approach is the possibility to generate high-quality
schedules in short time interactively with the human decision maker.
The paper is organized as follows. In Section 2 we give a detailed definition of the problem
that we consider. In Section 3 we present our new framework and our algorithms used in this
framework. In Section 4 we discuss the computational results for two real-world problems and for
problem instances taken from the literature. Section 5 concludes and describes work that remains
to be done.
2 Definition of problem
In this section we describe the problem that we consider in this paper. The problem is a restricted
case of a general workforce scheduling problem. General definitions of workforce scheduling
problems can be found in [4, 9, 6]. The definition of the problem that we consider here is given
below:
INSTANCE:
Number of employees: n.
Set A of m shifts (activities): a_1, a_2, ..., a_m, where shift a_m represents a day off.
w: length of schedule. The total length of a planning period is n × w because of the
cyclicity of the schedules. Usually, n × w will be a multiple of 7, making it possible to take
weekends into consideration even when the schedule is reused for more than one
planning period.
A cyclic schedule is represented by an n × w matrix S ∈ A^{n×w}. Each element s_{i,j} of
matrix S corresponds to one shift. Element s_{i,j} shows in which shift employee i works
during day j, or whether the employee has a day off. In a cyclic schedule, the schedule of
one employee consists of the sequence of all rows of the matrix S. The last element of a
row is adjacent to the first element of the next row, and the last element of the matrix
is adjacent to its first element.
Temporal requirements: an (m − 1) × w matrix R, where each element r_{i,j} of matrix R
shows the required number of employees in shift i during day j.
Constraints:
Sequences of shifts allowed to be assigned to employees (the complement of
these are the forbidden sequences): an m × m × m matrix C; if element
c_{i,j,k} of matrix C is 1 then the sequence of shifts (a_i, a_j, a_k) is allowed, otherwise
not. Note that the algorithms we describe in Section 3 can easily be extended
to allow longer allowed/forbidden sequences. Also note that Lau [8] could show
that the problem is NP-complete even if we restrict forbidden sequences to length
two.
Maximum and minimum length of periods of successive shifts: vectors MAXS and
MINS, where each element shows the maximum, respectively minimum,
allowed length of a period of successive identical shifts.
Maximum and minimum length of work days blocks: MAXW , MINW
PROBLEM:
Find as many non-isomorphic cyclic schedules (assignments of shifts to employees) as
possible that satisfy the requirement matrix and all constraints, and that are optimal in terms
of free weekends (weekends off).
The requirement matrix R is satisfied if, for every shift k ∈ {1, ..., m − 1} and every day
j ∈ {1, ..., w}, the number of employees assigned to shift a_k on day j equals r_{k,j}:
|{ i : s_{i,j} = a_k }| = r_{k,j}
The other constraints are satisfied if:
For the shift change matrix C: for all i ∈ {1, ..., n} and j ∈ {1, ..., w}, the sequence
(s_{i,j}, next(s_{i,j}), next^2(s_{i,j})) must be allowed by C,
where
next(s_{i,j}) = s_{i,j+1} if j < w,
next(s_{i,j}) = s_{i+1,1} if j = w and i < n,
next(s_{i,j}) = s_{1,1} otherwise,
and next^k denotes the k-fold application of next.
Maximum length of periods of successive shifts: for all k ∈ {1, ..., m − 1}, i ∈ {1, ..., n}
and j ∈ {1, ..., w}, it must not be the case that s_{i,j} = a_k and next^b(s_{i,j}) = a_k for all
b ∈ {1, ..., MAXS_k}; that is, shift a_k never occurs more than MAXS_k times in succession.
Minimum length of periods of successive shifts: for all k ∈ {1, ..., m − 1}, i ∈ {1, ..., n}
and j ∈ {1, ..., w}, if a period of shift a_k begins right after position (i, j) (that is,
s_{i,j} ≠ a_k and next(s_{i,j}) = a_k), then next^b(s_{i,j}) = a_k for all b ∈ {1, ..., MINS_k};
that is, every period of shift a_k lasts at least MINS_k days.
Maximum length of work blocks: no employee works more than MAXW consecutive days,
i.e., shifts different from a_m never occur more than MAXW times in succession.
Minimum length of work blocks: every maximal sequence of shifts different from a_m
is at least MINW days long.
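Viewed concretely, the successor function next just walks the cyclic schedule matrix row by row and wraps around at the end. A minimal sketch in Python with 1-based indices (the function name is mine, not the paper's):

```python
def next_pos(i, j, n, w):
    """Successor of position (i, j) in the cyclic n-by-w schedule matrix.

    The last day of a row is adjacent to the first day of the next row,
    and the last element of the matrix wraps around to position (1, 1).
    Indices are 1-based, matching the matrix notation s_{i,j}.
    """
    if j < w:
        return (i, j + 1)   # next day in the same row
    if i < n:
        return (i + 1, 1)   # first day of the next row
    return (1, 1)           # end of matrix: wrap to the start
```

Applying it k times yields next^k, which the successive-shift constraints range over.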
The rationale behind trying to obtain more than one schedule will be made clear in Section 3.
Optimality of free weekends is most of the time in conflict with the solutions selected for
work blocks.
3 Four step framework
Tien and Kamiyama [11] proposed a five-stage framework for workforce scheduling algorithms.
This framework consists of these stages: determination of temporal manpower requirements, total
manpower requirement, recreation blocks, recreation/work schedule, and assignment of shifts (shift
schedule). The first two stages can be seen as an allocation problem, and the last three stages are
days-off scheduling and assignment of shifts. All stages are related to each other and can be
solved sequentially, but there also exist algorithms which solve two or more stages simultaneously.
In our problem formulation we assume that the temporal requirements and total requirements
are already given. Temporal requirements are given through the requirement matrix and determine
the number of employees needed during each day in each shift. Total requirements are represented
in our problem through the number of employees n. We propose a new framework for solving the
problem of assigning days-off and shifts to the employees. This framework consists of the following
four steps:
1. choosing a set of lengths of work blocks (a work block is a sequence of consecutive days of
work shifts),
2. choosing a particular sequence of work and days-off blocks among those that have optimal
weekend characteristics,
3. enumerating possible shift sequences for the chosen work blocks subject to shift change
constraints and bounds on sequences of shifts, and
4. assignment of shift sequences to work blocks while fulfilling the staffing requirements.
First we give our motivation for using this framework. Our approach is focused on the interaction
with the decision maker. Thus, the process of generating schedules is only half automatic.
When our system generates possible candidate sets of lengths of work blocks in step 1 the decision
maker will select one of the solutions that best reflects his preferences. This way we satisfy
two goals: On the one hand, an additional soft constraint concerning the lengths of work blocks
can be taken into account through this interaction, and on the other hand the search space for step
2 is significantly reduced. Thus we will be able to solve step 2 much more effectively. In step 2
our main concern is to find a best solution for weekends off. The user selection in step 1 can
impact the features of weekends off versus the length of work blocks, since these two constraints are the
ones that in practice are most often in conflict. The decision maker can decide whether he wishes optimal
lengths of work blocks or better features for weekends off. With step 3 we satisfy two more
goals. First, because of the shift change constraints and the bounds on the number of successive
shifts in a sequence, each work block has only few legal shift sequences (terms), and thus in step 4
backtracking algorithms will very quickly find assignments of terms to the work blocks such that
the requirements are fulfilled (if shift change constraints with days-off exist, their satisfaction is
checked at this stage). Second, a new soft constraint is introduced. Indeed, as we generate a number
of shift plans, they will contain different terms. The user then has the possibility to eliminate some
undesired terms, thus eliminating the solutions that contain these terms. Terms can have an impact on
Table
1: A possible schedule with work blocks in the order (4 6 5 4 6 5 5 5)
Employee/day Mon Tue Wen Thu Fri Sat Sun
3 D D N N N
4 A A A A
5 D D D D D
6 D D D D N
8 N N A A
9 A N N
fatigue and sleepiness of the employees and as such are very important when high-quality plans
are sought.
3.1 Determination of lengths of work blocks
A work block is a sequence of work days between two days-off blocks. An employee has a work day
during day j if he/she is assigned a shift different from the days-off shift a_m. In this step the
only feature of a work block in which we are interested is its length. Other features of work blocks
(e.g., the shifts of which the work block is made, the begin and end of the block, etc.) are not known at
this time. Because the schedule is cyclic, each of the employees has the same schedule and thus the
same work blocks during the whole planning period.
Example: The week schedule for 9 employees given in Table 1 consists of two work blocks of
length 6, four work blocks of length 5 and two of length 4 in the order (4 6 5 4 6 5 5 5). By
rearranging the order of the blocks, other schedules can be constructed, for example the schedule
with the order of work blocks (5 5 6 5 4 4 5 6). We will represent schedules with the same work
blocks but a different order of work blocks through unique solutions called class solutions, where the
blocks are given in decreasing order. The class solution of the above example thus will be {6 6 5 5 5 5 4 4}.
It is clear that even for small instances of problems there exist many class solutions. Our
main concern in this step is to generate all possible class solutions, or as many as possible for large
instances of the problem.
One class solution is nothing else than an integer partition of the sum of all working days that
one employee has during the whole planning period. To find all possible class solutions in this step
we have to deal with the following two problems:
Generation of restricted partitions, and
Elimination of those partitions for which no schedule can be created.
Because the elements of a partition represent lengths of work blocks, and because constraints
on the maximum and minimum length of such work blocks exist, not all partitions need to be
generated. The maximum and minimum lengths of days-off blocks also bound the maximum and
minimum allowed number of elements in one partition, since between two work blocks there always
is a days-off block, or recreation block. In summary, partitions that fulfill the following
criteria have to be generated:
Maximum and minimum value of elements in a partition: these two parameters are, respectively,
the maximum and minimum allowed length of work blocks (MAXW and MINW),
Minimum number of elements in a partition: ⌈DaysOffSum / MAXB⌉,
Maximum number of elements in a partition: ⌊DaysOffSum / MINB⌋,
where MAXB and MINB are the maximum and minimum allowed lengths of days-off blocks and
DaysOffSum is the sum of all days off that one employee has during the whole planning period.
The set of partitions which fulfill above criteria is a subset of the set of all possible partitions. One
can first generate the full set of all possible partitions and then eliminate those that do not fulfill
the constraints given by the criteria. However, this approach is inefficient for large instances of
the problem. Our idea was to use the restrictions for pruning while the partitions are generated. We
implemented a procedure based on this idea for the generation of restricted partitions. Pseudo code
is given below. The set P contains the elements of a partition of N.
Initialize N, MAXB, MINB, MAXW, MINW
'Value of arguments for the first procedure call: RestrictedPartitions(1, MAXW)
'Recursive procedure
RestrictedPartitions(Pos, MaxValue)
  i = MINW
  Do While (i <= MaxValue AND NOT PartitionIsCompleted)
    Add to the set P element i
    If the sum of elements in the set P equals N and the number
    of elements in P is within the allowed bounds then
      Store partition (set P)
    ElseIf a partition fulfilling the criteria can still be completed then 'Pruning
      RestrictedPartitions(Pos + 1, i) 'Recursive call
    EndIf
    Remove last element from set P
    i = i + 1
  Loop
End
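Step 1 can be illustrated with a small sketch of restricted-partition generation (hypothetical Python names; the pruning rule follows the block-count bounds described above):

```python
def restricted_partitions(n, min_part, max_part, min_len, max_len):
    """Generate integer partitions of n into work-block lengths.

    Parts are emitted in decreasing order (class-solution form).
    min_part/max_part: bounds on the length of a work block (MINW/MAXW).
    min_len/max_len:   bounds on the number of blocks, derived from the
                       days-off sum and the days-off block bounds.
    Pruning happens during generation, not by post-filtering.
    """
    results = []

    def extend(partial, remaining, max_value):
        if remaining == 0:
            if min_len <= len(partial) <= max_len:
                results.append(list(partial))
            return
        # Pruning: even using the largest allowed parts, the remaining
        # days need at least ceil(remaining / max_value) more blocks.
        blocks_left_min = -(-remaining // max_value)
        if len(partial) + blocks_left_min > max_len:
            return
        for part in range(min(max_value, remaining), min_part - 1, -1):
            partial.append(part)
            extend(partial, remaining - part, part)
            partial.pop()

    extend([], n, max_part)
    return results
```

For example, partitions of 10 into parts between 2 and 5 with 2 to 4 blocks yield six class solutions, among them [5, 5] and [3, 3, 2, 2].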
Not all restricted partitions can produce a legal schedule that fulfills the requirements of work
force per day (in this step we test if we have the desired number of employees for a whole day,
not for each shift). As we want to have only class solutions that will bring us to a legal shift plan,
we eliminate all restricted partitions that cannot fulfill the work force per day requirements. A
restricted partition will be legal if at least one distribution of days-off exists that fulfills the work
force per day requirements. In the worst case all distributions of days-off have to be tested if we
want to be sure that a restricted partition has no legal days-off distribution. One can first generate
all days-off distributions and then test each permutation of restricted partitions if at least one
satisfying days-off distribution can be found. This approach is rather ineffective when all class
solutions have to be generated, because many of them will not have a legal days-off distribution and
thus the process of testing takes too long for large instances of typical problems. We implemented
a backtracking algorithm for testing the restricted partitions. Additionally, as we want to obtain
the first class solutions as early as possible, we implemented a three-stage, time-restricted test.
In this manner we do not lose time, at the beginning of the test, on restricted partitions which have
no legal days-off distribution. The algorithm for testing the restricted partitions is given
below (without the time restriction).
INPUT: Restricted partition, possible days-off blocks.
Initialize vectors W (NumberOfUniqueWorkBlocks) and
F (NumberOfUniqueDaysOffBlocks) with the unique work blocks, respectively the unique days-off
blocks (for example, the unique work blocks of class solution {5 5 4 4 3} are the blocks 5, 4 and 3).
'i represents one work or days-off block. It takes values from 1 to NumberOfWorkBlocks * 2 (after
each work block comes a days-off block and our aim is to find the first schedule that fulfills the
requirements per day). For the first procedure call, i = 1.
'Recursive procedure
PartitionTest(i)
  If i is odd 'block i is a work block
    k = 1
    Do While (k <= NumberOfUniqueWorkBlocks)
      Assign block i with work block W(k)
      'Only the partial schedule up to block i is tested
      Req = test that for each day the number of employees does not
            get larger than the requirement and that the number of work
            blocks of type W(k) does not get larger than the number
            of blocks of type W(k) in the class solution
      'Pruning
      If Req = true then
        PartitionTest(i + 1)
      EndIf
      k = k + 1
    Loop
  Else 'block i is a days-off block
    k = 1
    Do While (k <= NumberOfUniqueDaysOffBlocks)
      Assign block i with days-off block F(k)
      If i = NumberOfWorkBlocks * 2 'the schedule is complete
        SumTest = test if the sum of all days off is as required
        If SumTest = true then
          'The class solution has at least one days-off distribution;
          'stop the test
        EndIf
      Else
        'Only the partial schedule up to block i is tested
        FreeTest = test that not more employees than required have
                   a day off (the test is done for each day)
        'Pruning
        If FreeTest = true then
          PartitionTest(i + 1)
        EndIf
      EndIf
      k = k + 1
    Loop
  EndIf
End
3.2 Determination of distribution of work blocks and days-off blocks that
have optimal weekend characteristics
Once the class solution is known, different shift plans can be produced subject to the order of work
blocks and to the distribution of days-off. For each order of blocks of the class solution there
may exist many distributions of days-off. We introduce here a new soft constraint. This constraint
concerns weekends off. Our goal here is to find for each order of work blocks the best solution (or
more solutions if they are not dominated) for weekends off. The goal is to maximize the number of
weekends off, maximize the number of long weekends off (the weekend plus Monday or Friday is
free) and to find solutions that have a "better" distribution of weekends off. The distribution of weekends
off is evaluated with the following method: every time two weekends off appear directly after
each other, the distribution gets a negative point. One distribution of weekends off is better than
another one if it has fewer negative points. Priority is given to the number of weekends off, followed
by the distribution of weekends off and finally the number of long weekends off is considered only
if the others are equal. Possible candidates are all permutations of the work blocks found in a
class solution. Each permutation may or may not have days-off distributions. If the permutation
has at least one days-off distribution our goal is to find the best solutions for weekends off. The
best solutions are those that cannot be dominated by another solution. We say that solution Solut 1
dominates solution Solut 2 in the following cases:
Solut1 has the same number of weekends off as Solut2, the evaluation of the weekends
distribution of Solut1 is equal to the one of Solut2, and Solut1 has more long weekends off
than Solut2;
Solut1 has the same number of weekends off as Solut2 and the evaluation of the weekends
distribution of Solut1 is better than the one of Solut2;
Solut1 has more weekends off than Solut2.
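These dominance cases can be written down directly. In the sketch below, a solution's weekend characteristics are summarized as a tuple (weekends off, negative points of the distribution, long weekends off); the tuple encoding is my own, not the FCS data structure:

```python
def dominates(s1, s2):
    """Return True if solution s1 dominates s2 on weekend characteristics.

    Each solution is a tuple (weekends_off, neg_points, long_weekends_off):
    weekends_off and long_weekends_off are maximized, while neg_points
    (number of adjacent pairs of weekends off) is minimized.
    """
    w1, p1, l1 = s1
    w2, p2, l2 = s2
    if w1 > w2:
        return True                       # strictly more weekends off
    if w1 == w2 and p1 < p2:
        return True                       # same weekends, better spread
    if w1 == w2 and p1 == p2 and l1 > l2:
        return True                       # tie-break on long weekends
    return False
```

The non-dominated solutions among a set of candidates are exactly those kept as "best" in step 2.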
Two comments have to be made here. First, because some of the permutations of the class
solutions may not have any days-off distribution, we use time restrictions for finding days-off
distributions. In other words, if the first days-off distribution is not found in a predetermined
time, the next permutation is tested. Second, for large instances of problems, too many days-off
distributions may exist and this may impede the search for the best solution. Interrupting the test
can be done manually depending on the size of the problem.
For large instances of the problem it is impossible to generate all permutations of class solutions
and for each permutation the best days-off distributions. In these cases our main concern is to
enumerate as many solutions as possible which have the best day-off distribution and can be found
in a predetermined time. The solutions found are sorted based on weekend attributes, so that the user
can more easily decide which distribution of days-off and work days he wants to continue with. The user
may select one of the solutions based solely on weekends, but sometimes the order of work blocks
may also decide; one may, for example, prefer the order of work blocks (7 6 3 7 6 3 7 6) to another order.
For finding legal days-off distributions for each permutation of a class solution we use a backtracking
procedure similar to the one for testing the restricted partitions in step 1, except that the
distribution of work blocks is now fixed. After the days-off distributions for a given order of work
blocks are found, selecting the best solutions based on weekends is a comparatively trivial task and
does not take very long.
Selected solutions in step 2 have a fixed distribution of work blocks and days-off blocks, and
in the final step the assignment of shifts to the work blocks has to be done.
3.3 Generating allowed shift sequences for each work block
In step 2 work and days-off blocks have been fixed. It remains to assign shifts to the employees.
We again use a backtracking algorithm, but to make this algorithm more efficient we introduce
another interaction step. The basic idea of this step is this: For each work block construct the
possible sequences of shifts subject to the shift change constraints and the upper and lower bounds
on the length of sequences of successive same shifts. Because of these constraints, the number of
such sequences (we will call them terms) is not too large and thus backtracking algorithms will
be much more efficient compared to classical backtracking algorithms where for each position of
work blocks all shift possibilities would have to be tried and the test for shift change constraints
would have to be done in a much more time-consuming manner, thus resulting in a much slower
search for solutions.
Example: Suppose that the solution selected by the user in step 2 has the distribution of work
blocks (6 4 4 6 5 4 5).
Shifts: Day (D), Afternoon (A) and Night (N)
Forbidden shift changes: (N D), (N A),
Maximum and minimum lengths of successive shifts: D: 2-6, A: 2-5, N: 2-4
Our task is to construct legal terms for work blocks of length 6, 5, and 4.
For a work block of length 6 the following terms exist:
DDDDDD, DDDDAA, DDDDNN, DDDAAA, DDDNNN, DDAAAA, DDNNNN, DDAANN,
AAAANN, AAANNN, AANNNN
Block of length 5:
DDDDD, DDDAA, DDDNN, DDAAA, DDNNN, AAAAA, AAANN, AANNN
Block of length 4:
DDDD, DDAA, DDNN, AAAA, AANN, NNNN
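The term construction can be sketched as a small recursive enumerator (Python); the third forbidden shift change is assumed to be (A, D), which is consistent with the terms listed above:

```python
def legal_terms(block_len, bounds, forbidden):
    """Enumerate the legal shift sequences (terms) for one work block.

    bounds:    dict shift -> (min_run, max_run), the allowed lengths of
               periods of successive identical shifts
    forbidden: set of forbidden shift changes (from_shift, to_shift)
    """
    terms = []

    def extend(seq, last_shift):
        if len(seq) == block_len:
            terms.append("".join(seq))
            return
        for shift, (lo, hi) in bounds.items():
            if shift == last_shift:
                continue  # each run of a shift is appended in one step
            if last_shift is not None and (last_shift, shift) in forbidden:
                continue  # forbidden shift change
            for run in range(lo, hi + 1):
                if len(seq) + run <= block_len:
                    extend(seq + [shift] * run, shift)

    extend([], None)
    return terms
```

With bounds D: 2-6, A: 2-5, N: 2-4 and the forbidden changes above, this reproduces the 6, 8, and 11 terms listed for block lengths 4, 5, and 6.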
This approach is very appropriate when the number of shifts is not too large. When the number
of shifts is large we group shifts with similar characteristics in so called shift types. For example
if there exists a separate day shift for Saturday which begins later than the normal day shift, these
two shifts can be grouped together. Such grouping of similar shifts in shift types allows us to have
a smaller number of terms per work block and therefore reduces the overall search space. At the
end, a transformation from shift types back to the substituted shifts has to be done. A similar approach
has been applied by Weil and Heus [12]. They group different days-off shifts in one shift type and
thus reduce the search space. Different days-off shifts can be grouped in one shift only if they are
interchangeable (the substitution has no impact in constraints or evaluation).
The process of constructing the terms usually does not take long, given that the length of
work blocks is usually less than 9 and some basic shift change constraints always exist because of
legal working-time restrictions.
3.4 Assignment of shift sequences to work blocks
Once we know the terms we can use a backtracking algorithm to find legal solutions that satisfy the
requirements in every shift during every day. The size of the search space that should be searched
with this algorithm is

  ∏_{i=1}^{b} N_t(i),

where b is the number of work blocks and N_t(i) is the number of legal terms for block i.
If we would not use terms, the search space would be of size

  (m − 1)^(sum of all work days),

since each work day could be assigned any of the m − 1 working shifts.
Of course in this latter case we would have more constraints, for instance the shift change
constraints, but the corresponding algorithm would be much slower because the constraints would be
tested over and over, not only one time as when we construct the terms.
Pseudo code for the backtracking algorithm based on terms is given below. Let us observe that
the test of terms for shift change constraints is done without consideration of shift a_m (days off). If
there exist shift change constraints that include days off, then the test of the solution has to be done
later for these sequences.
INPUT: distribution of work and days-off blocks
Generate all legal shift sequences (terms) for each work block
'Value of argument for the first call of the procedure: ShiftAssignment(1)
'Recursive procedure
ShiftAssignment(i)
  NumberOfSequences = number of shift sequences of block i
  k = 1
  While (k <= NumberOfSequences)
    Assign block i with sequence number k
    If i is the last work block 'the schedule is complete
      Test if the requirements are fulfilled and the shift change constraints
      are not violated (in this stage we test for forbidden shift
      sequences that include days off)
      If the test succeeds then
        Store the schedule
      EndIf
    Else
      'Only the partial schedule up to block i is tested
      PTest = test for each shift that not more than the needed employees
              are assigned to it
      'Pruning
      If PTest = true then
        ShiftAssignment(i + 1)
      EndIf
    EndIf
    k = k + 1
  Loop
End
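The step-4 backtracking can be sketched as follows; the toy representation (blocks as day positions plus term lists, a requirement map keyed by shift and day) is a simplification for illustration, not the FCS implementation:

```python
from collections import Counter

def assign_terms(blocks, requirement):
    """Backtracking assignment of terms to work blocks (step 4 sketch).

    blocks:      list of (day_positions, terms); term[t] is the shift
                 worked on day day_positions[t]
    requirement: dict (shift, day) -> required number of employees
    Returns every complete assignment that meets the requirement exactly.
    """
    solutions = []
    count = Counter()
    chosen = {}

    def feasible_partial():
        # Pruning: no (shift, day) may exceed its requirement.
        return all(count[key] <= requirement.get(key, 0) for key in count)

    def backtrack(i):
        if i == len(blocks):
            # A complete schedule must meet every requirement exactly.
            if all(count[key] == req for key, req in requirement.items()):
                solutions.append([chosen[j] for j in range(len(blocks))])
            return
        days, terms = blocks[i]
        for term in terms:
            for day, shift in zip(days, term):
                count[(shift, day)] += 1
            chosen[i] = term
            if feasible_partial():
                backtrack(i + 1)
            for day, shift in zip(days, term):
                count[(shift, day)] -= 1

    backtrack(0)
    return solutions
```

Because every term already satisfies the within-block constraints, the search only has to check staffing counts, which is what makes this formulation fast.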
There exist rare cases in which, even though a work and days-off distribution exists, no assignment
of shifts that fulfills the temporal requirements for every shift on every day can be found, because
of the shift change constraints. In these cases the constraints on the minimum and maximum length of
periods of successive shifts must be relaxed to obtain solutions.
4 Computational results
In this section we report on computational results obtained with our approach. We implemented
our four step framework in a software package called First Class Schedule (FCS) which is part of
a shift scheduling package called Shift-Plan-Assistant (SPA) of XIMES 1 Corp. All our results in
this section have been obtained on an Intel P2 330 MHz. Our first two examples are real-world-sized
problems and are typical of the kind of problems for which FCS was designed.
After that, we give our results for three benchmark examples from the literature and compare
them with the results from a paper of Balakrishnan and Wong [1], who solved problems of rotating
workforce scheduling by modeling them as network flow problems. Their algorithms were
implemented in Fortran on an IBM 3081 computer.
Problem 1: An organization operates one 8-hour shift: Day shift (D). From Monday to Saturday
4 employees are needed, whereas on Sunday no employees are needed. These requirements are
fulfilled with 5 employees who work on average 38.4 hours per week. A rotating week schedule
has to be constructed which fulfills the following constraints:
1. Length of periods of successive shifts should be: D: 2-6
2. Length of work blocks should be between 2 and 6 days and length of days-off blocks should
be between 1 and 4
3. Features for weekends off should be as good as possible
We note that days-off blocks of length 1 are not preferred, but otherwise no class solution for
this problem would exist.
Using FCS in step 1, all class solutions are generated in 4.5 seconds: {6 4 4 3 3 2 2}, {6 6 3 3
3 …}. We select the class solution {6 6 …}
to proceed to the next step. In step 2 FCS generates 6 solutions after 0.6 seconds. Each
of them has one long weekend off. We select a solution with the distribution of work blocks (6 4 4
…) to proceed in the next steps. Steps 3 and 4 are solved automatically. We obtain the first and
only existing schedule after 0.02 seconds. This solution is shown in Table 2.
The quality of this schedule stems from the fact that there are at most 8 consecutive work
days with only a single day-off in between them. This constraint is very important when single
days-off are allowed. This example showed a small instance of a problem with only one shift;
nevertheless, even for such instances it is relatively difficult to find high-quality solutions subject
to this constraint.
Table
2: First Class Schedule solution for problem 1
Employee/day Mon Tue Wed Thu Fri Sat Sun
3 D D D D
4 D D D D D D
5 D D D D
Let us note here that the same schedule can be applied for a multiple of 5 employees (the
duties are also multiplied) if the employees are grouped in teams. For example, if there are 30
employees they can be grouped in 5 teams, each of which will have 6 employees.
Problem 2: An organization operates three 8-hour shifts: Day shift (D), Afternoon shift (A),
and Night shift (N). From Monday to Friday three employees are needed during each shift, whereas
on Saturday and Sunday two employees suffice. These requirements are fulfilled with 12 employees
who work on average 38 hours per week. A rotating week schedule has to be constructed
which fulfills the following constraints:
1. Sequences of shifts not allowed to be assigned to employees are:
2. Length of periods of successive shifts should be: D: 2-7, A: 2-6, N: 2-5
3. Length of work blocks should be between 4 and 7 days and length of days-off blocks should
be between 2 and 4
4. Features for weekends off should be as good as possible
Using FCS in step 1, the first class solution is generated after 0.07 seconds and we interrupt the
process of generation of class solutions after 1.6 seconds, when 7 class solutions have already been
generated out of many others: {6 6 5 …
4}. The first solution has the highest number of most optimal blocks, namely those with length 5,
but entails weak features for weekends off. For this reason we select the class solution {7 …}
to proceed to the next step. In step 2 the optimal solution for the distribution of blocks, with
… weekends off of which 3 are long, is found in less than 2 seconds.
The first 11 solutions are generated in 11 seconds, where we have one solution with 6 weekends
off, 4 of which are long, and a distribution of weekends off that is acceptable. This solution has
the order of work blocks as follows: (7 7 6 5 5 5 7 5 5 5). We select this solution to proceed in
the next steps. Steps 3 and 4 are solved automatically. We obtain a first schedule, which is given
in Table 3, after 0.17 seconds and the first 50 schedules after 4 seconds. The decision maker can
Table
3: First Class Schedule solution for problem 2
Employee/day Mon Tue Wed Thu Fri Sat Sun
3 D D D D D
4 A A A N N
5 N N N
6 N N A A A
7 A A N N
8 A A A A A
9 N N N N N
eliminate some undesired terms. Besides these solutions there also exists a large number of other
solutions which differ from each other in their terms. If a better distribution of weekends off had
been sought, it could have been found through another class solution, for example {…
5 5} found in step 1 after 16 seconds, at the cost of longer work sequences.
Problem 3: The first problem from literature for which we discuss computational results for First
Class Schedule is a problem solved by Butler [3] for the Edmonton police department in Alberta,
Canada. Properties of this problem are:
Number of employees: 9
Shifts: 1 (Day), 2 (Evening), 3 (Night)
Temporal requirements: a 3 × 7 requirement matrix R
Constraints:
Length of work periods should be between 4 and 7 days
Only shift 1 can precede the rest period preceding a shift 3 work period
Before and after weekends off, only shift 3 or shift 2 work periods are allowed
At least two consecutive days must be assigned to the same shift
Table
4: Solution of Balakrishnan and Wong [1] for the problem from [3]
Employee/day Mon Tue Wed Thu Fri Sat Sun
1 A A A A A A
3 A A A A A
4 D D D D D
5 D D N N N
6 N A A A A
7 A A D D D
8 D D D D N
9 N N N N
No more than two 7-day work periods are allowed and these work periods should not be
consecutive
Balakrishnan and Wong [1] solve this problem using a network model; they needed 73.54
seconds to identify an optimal solution of the problem. This solution is given in Table 4. We use D
for shift 1, A for shift 2, N for shift 3; an empty matrix element means the employee has a day off.
Before we give our computational results, some observations should be made. First, constraints
two and three cannot be represented in our framework. Let us note here that in all three examples
given, we cannot model the problem exactly (the same was true for Balakrishnan and Wong's [1]
approach to the original problems), which is to a high degree due to the different legal requirements
found in the U.S./Canadian versus those found in the European context, but we tried to
mimic the constraints as closely as possible or to replace them by similar constraints that appeared
more meaningful in the European context. Having said this, let us proceed as follows: The other
constraints can be applied in our model and are left like in the original problem. As mentioned, we
include additional constraints about maximum length of successive shifts and minimum and maximum
length of days-off blocks. In summary, the additional constraints used for First Class Schedule
are:
Not allowed shift changes: (N D), (N A),
Length of days-off periods should be between 2 and 4
Vector MAXS
In our model we first generate class solutions. Class solutions that exist for the given problem
and given constraints are:
Table
5: First Class Schedule solution for the problem from [3]
Employee/day Mon Tue Wed Thu Fri Sat Sun
3 A N N N N
4 A A A A A
5 D D N N N
6 A A A A A A
7 D D N N
9 N D D D D
The first solution is generated in 0.14 seconds and all solutions in 4.38 seconds. We select in
this step the class solution with the highest number of optimal blocks: {7 7 6 5 5 5 5 5}.
In step 2 the distributions of work and days-off periods that give the best results for weekends are
found. We select the best solution offered in this step by First Class Schedule, which has the order
of work blocks (7 5 7 5 5 6 5 5). The computations of the system took 0.73 seconds.
Steps 3 and 4 are solved automatically. The first solution, given in Table 5, is generated after 0.39
seconds and the first 50 solutions in 4.29 seconds.
There also exist many other solutions that differ only in the terms they contain. Undesired
solutions can then be eliminated by eliminating unwanted terms.
Problem 4 (Laporte et al. [7]): There exist three non-overlapping shifts D, A, and N, 9 employees,
and the requirements are 2 employees in each shift on every day. A week schedule has to be
constructed that fulfills these constraints:
1. Rest periods should consist of at least two days off
2. Work periods must be between 2 and 7 days long if work is done in shift D or A and between
4 and 7 if work is done in shift N
3. Shift changes can occur only after a day-off
4. Schedules should contain as many weekends as possible
5. Weekends off should be distributed throughout the schedule as evenly as possible
6. Long (short) work periods should be followed by long (short) rest periods
7. Work periods of 7 days are preferred in shift N
Table 6: Solution of Balakrishnan and Wong [1] of the problem from [7]
Employee/day Mon Tue Wed Thu Fri Sat Sun
3 D D D D D
4 A A A A A
5 N N N N N
6 N N A A A
7 A A D D D
8 D N N N N
9 N N N
Balakrishnan and Wong [1] need 310.84 seconds to obtain the first optimal solution. The
solution is given in Table 6. The authors also report another solution with three weekends
off, found with a different cost structure for weekends off.
In FCS constraint 1 is straightforward. Constraint 2 can be approximated if we take the minimum
length of work blocks to be 4. Constraint 3 can also be modeled if we take the minimum length
of blocks of successive shifts to be 4. For the maximum length of successive shifts we take 7 for each shift.
Constraints 4 and 5 are incorporated in step 2, constraint 6 cannot be modeled, and constraint 7 is
modeled by selecting appropriate terms in step 3.
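The hard constraints above can be checked mechanically. The following sketch is our own illustration, not part of the FCS system; the encoding of a rotating schedule as one cyclic sequence of daily codes ('D', 'A', 'N', or '-' for a day off) is an assumption made here for brevity:

```python
# Sketch (assumed encoding): validate a cyclic rotating schedule against
# Problem 4's hard constraints: days-off blocks of length >= 2, work blocks
# of length 2-7 (4-7 if they contain N), and no shift change without a day off.

def blocks(seq):
    """Split a cyclic sequence into maximal work / days-off runs."""
    n = len(seq)
    # rotate so the sequence starts at a work/off status change
    # (assumes the schedule contains at least one change)
    start = next(i for i in range(n)
                 if (seq[i - 1] == '-') != (seq[i] == '-'))
    rot = seq[start:] + seq[:start]
    runs, cur = [], [rot[0]]
    for s in rot[1:]:
        if (s == '-') == (cur[-1] == '-'):
            cur.append(s)
        else:
            runs.append(cur)
            cur = [s]
    runs.append(cur)
    return runs

def check(seq):
    ok = True
    for run in blocks(seq):
        if run[0] == '-':                  # days-off block
            ok &= len(run) >= 2            # constraint 1
        else:                              # work block
            lo = 4 if 'N' in run else 2    # constraint 2 (D/A vs N)
            ok &= lo <= len(run) <= 7
            ok &= len(set(run)) == 1       # constraint 3: no change mid-block
    return ok

week = list('DDDDD--') * 3 + list('NNNN---')   # 4 employee-weeks, cyclic
print(check(week))
```

The checker treats the schedule cyclically, matching the rotating interpretation in which each employee moves to the next row every week.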
With the given parameters for this problem there exist 23 class solutions, which are generated
in 5 seconds. For each class solution there exists at least one distribution of days-off, but it can
happen that no assignment of shifts to the work blocks exists because the admissible range of blocks of
successive shifts is too narrow. Because in this problem the range of lengths of blocks of
successive shifts is from 4 to 7, for many class solutions no assignment of shifts can be found. Some
class solutions give solutions with three free weekends, but the weekends come one after another.
Class solution {7 7 7 7 5 5 4} gives a better distribution of weekends. If we select this class solution
in step 1, our system will generate 5 solutions in step 2 in 1.69 seconds. We selected a solution
with the order of work blocks (7 7 4 7 5 7 5). Steps 3 and 4 are solved simultaneously, and the
first solution was arrived at after 0.08 seconds; further solutions were found after 0.5
seconds. One of the solutions is shown in Table 7.
With class solution {7 7 7 7 7 7} the same distribution of weekends off can be found as in [7].
As we see, we can arrive at the solutions much faster than Balakrishnan and Wong [1], though
with the interaction of the human decision maker. Because each step is very fast, the overall process of
constructing an optimal solution still does not take very long.
Problem 5: This problem is a larger problem first reported in [5]. Characteristics of this problem
are:
Number of employees is 17 (length of planning period is 17 weeks).
Table 7: First Class Schedule solution of the problem from [7]
Employee/day Mon Tue Wed Thu Fri Sat Sun
3 D N N N N
4 A A A
5 A A A A
6 N N N N N
7 A A A A A
8 A A N N
9 N N N D D
Three non-overlapping shifts.
Temporal requirements are given by a matrix R_{3,7}.
Constraints:
Rest-period lengths must be between 2 and 7 days
Work-period lengths must be between 3 and 8 days
A shift cannot be assigned to more than 4 consecutive weeks in a row
Shift changes are allowed only after a rest period that includes a Sunday or a Monday or
both
The only allowable shift changes are 1 to 3, 2 to 1, and 3 to 2
Balakrishnan and Wong [1] need 457.98 seconds to arrive at the optimal solution, which is given
in Table 8.
With First Class Schedule we cannot model constraints 3, 4, and 5 in their original form. We
allow changes within the same block, and for this reason we have other shift change constraints. In
our case the following shift changes are not allowed: 2 to 1, 3 to 1, and 3 to 2. Additionally,
we limit the rest period length to between 2 and 4 and the work period length to between 4 and 7. Maximum and
minimum lengths of blocks of successive shifts are given with vectors MAXS
With these conditions the first class solution {6 6 5 5 5 5 5 5 5 5 5 5 5 5 5 5} is found after
0.22 seconds and the first 9 solutions after 14.2 seconds. Of course there exist many more class solutions,
Table 8: Solution of Balakrishnan and Wong [1] to the problem from [5]
Employee/day Mon Tue Wed Thu Fri Sat Sun
3 D D D
4 D D D D D D
5 N N N N
6 N N N N N
7 N N N
8 A A A A A A A
9 A A A A A
13 N N N N
14 N N N N
15 A A A A A A
but finding all class solutions would take too much time for this large problem. If we choose the
first solution with the most optimal blocks, we obtain solutions with 5 weekends off, even
though the weekends come one after another. We arrive at a better solution with the following
class solution: {7 6 5 5 5 5 5 5 5 5 5 5 5 5 5 4}. In step 2 we stop the process of generating
distributions of work and days-off blocks after 20 seconds and obtain 3 solutions. From these
solutions we select the solution with the order of blocks (7 6 5 5 5 5 5 5 5 5 5 5 5 4 5 5). The
first solution (steps 3 and 4) is generated after 0.73 seconds and the first 50 solutions after 4.67
seconds. The first solution is given in Table 9.
As one can see, this solution has a much worse distribution of weekends than the solution from
[1], but it has no blocks of length 8 and many optimal blocks (of length 5). Our
solutions also have no more than 5 successive night shifts (seven night shifts are considered too
many).
A much better distribution of weekends off can be found with FCS if the maximum length of
work blocks is increased to 8. In this case step 2 of FCS takes longer because its running time depends
directly on the number of blocks.
One disadvantage of FCS is that the user has to try many class solutions to find an optimal
solution. However, the time to generate solutions in each step is so short that interactive use is
possible. Another advantage of interactively solving these scheduling problems is the possibility
of including the user in the decision process. For example, one may prefer longer work blocks with
a better distribution of weekends to shorter work blocks with a worse distribution of weekends.
Table 9: First Class Schedule solution to the problem from [5]
Employee/day Mon Tue Wed Thu Fri Sat Sun
3 D D D D D
4 D D D D A
5 A A A A A
6 A A A A N
8 A A A A
9 A A A N N
13 D D D N N
14 A A A N N
17 A A A N N
Conclusions
In this paper we proposed a new framework for solving the rotating workforce scheduling problem.
We showed that this framework is very powerful for solving real problems. The main features of
this framework are the possibility to generate high quality schedules through the interaction with
the human decision maker and to solve real cases in a reasonable amount of time. Besides making
sure that the generated schedules fulfill all hard constraints, it also allows one to incorporate
preferences of the human decision maker regarding soft constraints that are otherwise more difficult to
assess and to model. In step 1 an enhanced view of possible solutions subject to the length of work
blocks is given. In step 2 preferred sequences of work blocks in connection with weekends off
features can be selected. In step 4 bounds for successive shifts and shift change constraints can
be specified with much more precision because the decision maker has a complete view on terms
(shift sequences) that are used to build the schedules. Step 2 of our framework can be solved very
efficiently because the search space has already been much reduced in step 1. Furthermore, in step
4 we showed that the assignment of shifts to the employees can be done very efficiently with backtracking
algorithms, even for large instances, if sequences of shifts for the work blocks are generated
first. When the number of employees is very large, they can be grouped into teams, and this
framework can again be used. Even though the framework is appropriate for most real cases, for
large problem instances an optimal solution for weekends off cannot always be guaranteed because
of the size of the search space. One possibility to solve this problem more efficiently could
be to stop backtracking when a solution that has the maximum, or almost the maximum, number of weekends
off is found (for a given problem we always know the maximum number of weekends off
from the temporal requirements). Once we have a solution with the most weekends off, other search
techniques like local search can be used to improve the distribution of weekends off. Finally, this
framework can be extended by introducing new constraints.
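The backtracking assignment of shifts mentioned above can be illustrated with a small sketch. This is our own simplified illustration, not the paper's implementation: it assigns a single shift to each work block of a class solution, and the particular forbidden-transition and length tables are assumptions chosen to resemble the constraints discussed earlier:

```python
# Sketch (assumed encoding, not the authors' implementation): backtracking
# assignment of one shift per work block, subject to forbidden transitions
# between consecutive blocks and per-shift block-length bounds.

FORBIDDEN = {('N', 'D'), ('N', 'A')}              # disallowed shift changes
LENGTH = {'D': (2, 7), 'A': (2, 7), 'N': (4, 7)}  # allowed block lengths

def assign(blocks, partial=()):
    """Enumerate all feasible shift assignments by depth-first backtracking."""
    if len(partial) == len(blocks):
        yield partial
        return
    i = len(partial)
    for shift in 'DAN':
        lo, hi = LENGTH[shift]
        if not (lo <= blocks[i] <= hi):
            continue                               # block too short/long for shift
        if partial and (partial[-1], shift) in FORBIDDEN:
            continue                               # illegal transition
        yield from assign(blocks, partial + (shift,))

# all feasible shift assignments for the class solution {7 7 6 5 5}
for sol in assign((7, 7, 6, 5, 5)):
    print(sol)
```

Because infeasible partial assignments are pruned as soon as a length bound or transition rule is violated, the search visits far fewer nodes than the full 3^k assignment space.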
--R
A network model for the rotating workforce scheduling problem.
Computerized manpower scheduling.
The general employee scheduling problem: An integration of MS and AI.
Computerized scheduling of police manpower.
Employee timetabling.
Rotating schedules.
On the complexity of manpower scheduling.
Solving employee timetabling problems by generalized local search.
Combining constraint satisfaction and local improvement algorithms to construct anaesthetist's rotas.
On manpower scheduling algorithms.
Eliminating interchangeable values in the nurse scheduling problem formulated as a constraint satisfaction problem.
584756 | Numerical study of quantum resonances in chaotic scattering. | This paper presents numerical evidence that in quantum systems with chaotic classical dynamics, the number of scattering resonances near an energy E scales like h^{-(D(K_E)+1)/2} as h → 0. Here, K_E denotes the subset of the energy surface bounded for all time under the flow generated by the classical Hamiltonian H, and D(K_E) denotes its fractal dimension. Since the number of bound states in a quantum system with n degrees of freedom scales like h^{-n}, this suggests that the quantity (D(K_E)+1)/2 represents the effective number of degrees of freedom in chaotic scattering problems. The calculations were performed using a recursive refinement technique for estimating the dimension of fractal repellors in classical Hamiltonian scattering, in conjunction with tools from modern quantum chemistry and numerical linear algebra. | Introduction
Quantum mechanics identifies the energies of stationary
states in an isolated physical system with
the eigenvalues of its Hamiltonian operator. Because
of this, eigenvalues play a central role in
the study of bound states, such as those describing
the electronic structures of atoms and molecules. 1
When the corresponding classical system allows
escape to infinity, resonances replace eigenvalues
as fundamental quantities: The presence of a resonance
at
, with E real and
> 0, gives
rise to a dissipative metastable state with energy E
and decay rate
, as described in [37]. Such states
are essential in scattering theory. 2
Department of Mathematics, University of California,
Berkeley, CA 94720. E-mail: kkylin@math.berkeley.edu.
The author is supported by a fellowship from the Fannie and
John Hertz Foundation.
1 For examples, see [5].
2 Systems which are not effectively isolated but interact only
weakly with their environment can also exhibit resonant behavior. For example, electronic states of an "isolated" hydrogen
atom are eigenfunctions of a self-adjoint operator, but coupling
the electron to the radiation field turns those eigenstates into
metastable states with finite lifetimes. This paper does not deal
with dissipative systems and is only concerned with scattering.
An important property of eigenvalues is that one
can count them using only the classical Hamiltonian
function H and Planck's constant h: the number N_eig of eigenvalues in [E_0, E_1] satisfies

N_eig ≈ (2πh)^{-n} vol({(x, p) : E_0 ≤ H(x, p) ≤ E_1}),   (1)

where n denotes the number of degrees of freedom
and vol(·) phase space volume. This result, known as the Weyl law, expresses the density
of quantum states using the classical Hamiltonian. 3 No generalization to resonances
is currently known.
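The Weyl law can be checked numerically on the simplest bounded system. The sketch below is our own illustration (not from the paper): for a 1D harmonic oscillator H = (p² + x²)/2, the exact eigenvalues h(k + 1/2) are counted and compared against the phase-space-volume prediction, which here is vol({H ≤ E})/(2πh) = 2πE/(2πh) = E/h:

```python
# Sketch: the Weyl law for a 1D harmonic oscillator H = (p^2 + x^2)/2.
# Eigenvalues are h*(k + 1/2); the region {H <= E} is a disc of area 2*pi*E,
# so the Weyl count is 2*pi*E / (2*pi*h) = E/h.

import math

def n_eig(E, h):
    """Exact count of eigenvalues h*(k + 1/2) at or below E."""
    return sum(1 for k in range(int(2 * E / h) + 1) if h * (k + 0.5) <= E)

def weyl(E, h):
    """Weyl-law prediction: phase-space volume / (2*pi*h)."""
    return 2 * math.pi * E / (2 * math.pi * h)

E = 1.0
for h in (0.1, 0.01, 0.001):
    print(h, n_eig(E, h), weyl(E, h))
```

As h shrinks, the exact count and the phase-space prediction agree to relative error O(h), illustrating the h^{-n} scaling (here n = 1).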
In this paper, numerical evidence for a Weyl-like
power law is presented for resonances in a
two-dimensional model with three symmetrically-placed
gaussian potentials. A conjecture, based on
the work of Sjostrand [27] and Zworski [35], states
that the number of resonances λ with Re(λ) ∈ [E_0, E_1] and |Im(λ)| ≤ Γ_0 h
asymptotically lies between C_1 h^{-(D(K_E)+1)/2} and C_2 h^{-(D(K_E)+1)/2}, where K_E denotes the
subset of the energy surface {H = E} that remains bounded for all time under the
flow generated by H:

K_E = {(x, p) : H(x, p) = E and the trajectory through (x, p) stays bounded for all time}.   (2)

If D(K_E) depends continuously on E and E_1 − E_0 is
sufficiently small, then D(K_{E_0}) ≈ D(K_{E_1}),
and the number of resonances in such a
region is comparable to h^{-(D(K_E)+1)/2}
for any E ∈ [E_0, E_1].
The sets K and K_E are trapped sets and consist
of initial conditions which generate trajectories
that stay bounded forever. In systems where
3 For a beautiful exposition of early work on this and related
themes, see [16]. For recent work in the semiclassical context,
see [7].
{H ≤ E} is bounded for all E, the conjecture reduces
to the Weyl asymptotic h^{-n}. Note that the
conjecture implies that

effective number of degrees of freedom for metastable states = (D(K_E) + 1)/2

for both quantum and classical chaotic scattering.
The notion of dimension requires some comment: the "triple gaussian" model considered
here has very few trapped trajectories, and K and
K_E (for any energy E) have vanishing Lebesgue
measures. Thus, D(K) is strictly less than 2n = 4
and D(K_E) < 2n − 1 = 3. In fact, the sets K and
chaotic scattering problems. Also, in this paper,
the term "chaotic" always means hyperbolic; see
Sjostrand [27] or Gaspard [12] for definitions.
This paper is organized as follows: First, the
model system is defined. This is followed by
mathematical background information, as well as
a heuristic argument for the conjecture. Then,
numerical methods for computing resonances and
fractal dimensions are developed, and numerical
results are presented and compared with known
theoretical predictions.
Notation. In this paper, H denotes the Hamiltonian
function H(x, p) = |p|²/2 + V(x) and Ĥ the corresponding
Hamiltonian operator −(h²/2)Δ + V, where Δ
is the usual Laplacian and V
acts by multiplication.
2 Triple Gaussian Model
The model system has n = 2 degrees of freedom;
its phase space is R^4, whose points are denoted by
(x, y, p_x, p_y). First, it is convenient to define the gaussian

G_{x_0}(x) = exp(−(x − x_0)²)

in one dimension. Similarly, put

G_{(x_0, y_0)}(x, y) = G_{x_0}(x) G_{y_0}(y)

in two dimensions. We then define H by

H(x, y, p_x, p_y) = (p_x² + p_y²)/2 + V_m(x, y),

Figure 1: Triple gaussian potential

where the potential V_m is given by

V_m(x, y) = Σ_{k=0}^{m−1} G_{(R cos(2kπ/m), R sin(2kπ/m))}(x, y).

That is, it consists of m gaussian "bumps" placed
at the vertices of a regular m-gon centered at the
origin, at a distance R > 0 from the origin. This
paper focuses on the case m = 3 because it is
the simplest case that exhibits nontrivial dynamics
in two dimensions. However, the case m = 2 is
also relevant because it is well-understood: see
Miller [21] for early heuristic results and Gerard
and Sjostrand [13] for a rigorous treatment. Thus,
double gaussian scattering serves as a useful test
case for the techniques described here.
The quantized Hamiltonian Ĥ is similarly defined;
see Figure 1.
3 Background
This section provides a general discussion of resonances
and motivates the conjecture in the context
of the triple gaussian model. However, the notation
reflects the fact that most of the definitions
and arguments here carry over to more general
systems with n degrees of freedom. The reader
should keep in mind that for the triple gaussian
model.
There exists an extensive literature on resonances
and semiclassical asymptotics in other settings. For example, see [9, 10, 11, 34] for detailed
studies of the classical and quantum mechanics of
hard disc scattering.
3.1 Resonances
Resonances can be defined mathematically as follows:
let R(z) = (Ĥ − zI)^{-1} for non-real z, where I
is the identity operator. This one-parameter family
of operators R(z) is the resolvent and is meromorphic,
with suitable modifications of its domain
and range. The poles of its continuation into the
complex plane are, by definition, the resonances
of Ĥ. 4
Less abstractly, resonances are generalized
eigenvalues of Ĥ. Thus, we should solve the time-independent
Schrodinger equation

Ĥψ = λψ   (10)

to obtain the resonance λ and its generalized
eigenfunction ψ. In bound state computations,
one approximates ψ as a finite linear combination
of basis functions and solves a finite-dimensional
version of the equation above. To carry out similar
calculations for resonances, it is necessary that ψ
lie in a function space which facilitates such approximations,
for example L².
Let λ and ψ solve (10). Then e^{-iλt/h}ψ solves
the time-dependent Schrodinger equation

ih ∂ψ/∂t = Ĥψ.

It follows that Im(λ) must be negative because
metastable states decay in time. Now suppose,
for simplicity, that solutions ψ of
(10) with energy E behave like e^{i√(2E)x/h} for
large x > 0. 5 Substituting λ = E − iΓ/2
yields e^{i√(2λ)x/h}, which grows exponentially because
Im(λ) < 0. Thus, finite rank approximations
of Ĥ cannot capture such generalized
eigenfunctions. However, if we make the formal
substitution x ↦ xe^{iθ}, then the wave function
becomes exp(i√(2λ)xe^{iθ}/h). Choosing
2θ > tan^{-1}(Γ/2E) forces ψ to decay exponentially.
This procedure, called complex scaling, transforms
the Hamiltonian operator Ĥ into the scaled
operator Ĥ_θ. It also maps metastable states
with decay rates Γ ≤ 2E tan(2θ) to genuine L²
4 For more details and some references, see [37].
5 The analysis in higher dimensions requires some care, but
the essential result is the same.
Figure
2: Illustration of complex scaling: The
three lines indicate the location of the rotated continuous
spectrum for different values of , while
the box at the top of the figure is the region in
which resonances are counted. Eigenvalues which
belong to different values of are marked with
different styles of points. As explained later, only
eigenvalues near the region of interest are com-
puted. This results in a seemingly empty plot.
eigenfunctions of Ĥ_θ. The corresponding resonance
becomes a genuine eigenvalue: Ĥ_θψ = λψ.
Furthermore, resonances of Ĥ_θ will be invariant
under small perturbations in θ, whereas
other eigenvalues of Ĥ_θ will not. The condition
above implies that, for small θ
and fixed E, the method will capture a resonance
λ = E − iΓ/2 if and only if Γ ≤ 2E tan(2θ). We
can perform complex scaling in higher dimensions
by substituting r ↦ re^{iθ} in polar coordinates.
In algorithmic terms, this means we can compute
eigenvalues of Ĥ_θ for a few different values
of θ and look for invariant values, as demonstrated
in Figure 2. In addition to its accuracy and
flexibility, this is one of the advantages of complex
scaling: the invariance of resonances under
perturbations in θ provides an easy way to check
the accuracy of calculations, mitigating some of
the uncertainties inherent in computational work. 6
Note that the scaled operator Ĥ_θ is no longer self-adjoint,
which results in non-hermitian finite-rank
approximations and complex eigenvalues.
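A minimal illustration of complex scaling follows. This sketch is our own, not the paper's code: it applies the substitution x ↦ xe^{iθ} to a 1D Hamiltonian with a single gaussian potential, discretized by central finite differences; the grid and parameter values are illustrative assumptions:

```python
# Sketch: complex scaling x -> x e^{i theta} for a 1D Hamiltonian
# H = -(h^2/2) d^2/dx^2 + exp(-x^2), on a finite-difference grid.
# The kinetic term picks up a factor e^{-2 i theta}, and the potential
# is evaluated at the rotated coordinate x e^{i theta}.

import numpy as np

def scaled_hamiltonian(theta, h=0.1, L=10.0, n=200):
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    # second-derivative matrix (central differences)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx**2
    V = np.exp(-(x * np.exp(1j * theta))**2)       # V(x e^{i theta})
    return -0.5 * h**2 * np.exp(-2j * theta) * lap + np.diag(V)

H1 = scaled_hamiltonian(0.2)
ev = np.linalg.eigvals(H1)
# the scaled operator is non-hermitian; its continuous spectrum rotates
# into the lower half plane, so eigenvalues acquire negative imaginary parts
print(np.allclose(H1, H1.conj().T), ev.imag.min() < 0)
```

Repeating the computation for a few values of θ and looking for eigenvalues that stay put is exactly the invariance test described in the text.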
This method, first introduced for theoretical
purposes by Aguilar and Combes [1] and Balslev
6 For a different approach to computing resonances, see [33]
and the references there.
and Combes [3], was further developed by B. Simon
in [26]. It has since become one of the main
tools for computing resonances in physical chemistry
[22, 31, 32, 24]. For recent mathematical
progress, see [18, 27, 28] and references therein.
For reference, the scaled triple-gaussian operator
Ĥ_θ is

Ĥ_θ = −(h² e^{-2iθ}/2) Δ + V_θ,

where V_θ(x, y) = V_3(e^{iθ}x, e^{iθ}y). Note that these expressions only make sense because
G_{x_0}(x) is analytic in x, x_0, and θ.
3.2 Fractal Dimension
Recall that the Minkowski dimension of a given set
U ⊂ R^d is defined in terms of the volumes of its ε-neighborhoods
U_ε = {y : dist(y, U) < ε}, which scale like

vol(U_ε) ≈ ε^{d − D(U)}.

A simple calculation yields

D(U) = d + lim_{ε→0} log(vol(U_ε)) / log(1/ε)   (14)

if the limit exists.
Texts on the theory of dimensions typically
begin with the Hausdorff dimension because it
has many desirable properties. In contrast, the
Minkowski dimension can be somewhat awkward: for example, a countable union of
zero-dimensional sets (points) can have positive
Minkowski dimension. But, the Minkowski dimension
is sometimes easier to manipulate and almost
always easier to compute. It also arises in the
heuristic argument given below.
For a detailed treatment of different definitions
of dimension and their applications in the study of
dynamical systems, see [8, 23].
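A concrete example, our own illustration rather than the paper's recursive refinement technique, estimates the dimension of a set whose answer is known in closed form. For the middle-thirds Cantor set, level k of the construction yields 2^k covering intervals of width 3^{-k}, so the box-counting estimate converges to log 2 / log 3:

```python
# Sketch: box-counting estimate of the Minkowski dimension of the
# middle-thirds Cantor set. At level k there are 2^k covering intervals
# of width 3^{-k}, so log(N)/log(1/eps) -> log 2 / log 3.

import math

def cantor_boxes(level):
    """Number of intervals of size 3^{-level} covering the Cantor set."""
    intervals = [(0.0, 1.0)]
    for _ in range(level):
        intervals = [iv for (a, b) in intervals
                     for iv in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return len(intervals)

def dim_estimate(level):
    n = cantor_boxes(level)
    eps = 3.0 ** (-level)
    return math.log(n) / math.log(1.0 / eps)

print(dim_estimate(10), math.log(2) / math.log(3))
```

For fractal trapped sets in phase space the covering boxes must be tracked under the flow, but the dimension estimate itself has the same log-ratio form.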
3.3 Generalizing the Weyl Law
The Weyl formula (1) makes no sense in scattering problems because the
volume on the right hand side is infinite for most
choices of E 0 and E 1 , and this seems to mean that
there is no generalization of the Weyl law in the
setting of scattering theory. However, the following
heuristic argument suggests otherwise:
As mentioned before, a metastable state corresponding
to a resonance λ = E − iΓ/2 has
a time-dependent factor of the form e^{-iEt/h − Γt/2h}. A wave packet whose dynamics
is dominated by λ (and other resonances near it)
would therefore exhibit temporal oscillations of
frequency O(E/h) and lifetime O(h/Γ). Heuristically,
then, the number of times the particle
"bounces" in the "trapping region" 7 before escaping
should be comparable to E/Γ.
In the semiclassical limit, the dynamics of the
wave packet should be well-approximated by a
classical trajectory. Let T(x, y, p_x, p_y) denote the
time for the particle to escape the system starting
at position (x, y) with momentum (p_x, p_y). The
diameter of the trapping region is O(R), and typical
velocities in the energy surface are O(√E)
(mass set to unity), so the number of times
a classical particle bounces before escaping should
be O(T√E/R). This suggests that, in the limit h → 0,

T√E/R ≈ E/Γ,

and consequently

Γ ≈ R√E/T.   (17)
Fix Γ_0 > 0 and suppose Γ ≤ Γ_0 h
for fixed energies E_0 and E_1. Equation (17) implies
that T ≥ R√E/(Γ_0 h). In analogy with the
Weyl law,

N_res ≈ (2πh)^{-n} vol({E_0 ≤ H ≤ E_1, T ≥ R√E/(Γ_0 h)})

follows as an approximation for the number of
quantum states with the specified energies and decay
rates.
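The escape time T can be computed by direct integration of the classical equations of motion. The sketch below is our own illustration with assumed parameters (gaussian width 1, R = 1.4, a simple symplectic Euler integrator, and arbitrary escape radius and time cap):

```python
# Sketch (illustrative parameters): escape time T for a classical trajectory
# in the triple gaussian potential V = sum_k exp(-|r - c_k|^2), integrating
# Hamilton's equations with a fixed-step symplectic Euler scheme.

import math

R = 1.4
CENTERS = [(R * math.cos(2 * math.pi * k / 3), R * math.sin(2 * math.pi * k / 3))
           for k in range(3)]

def grad_V(x, y):
    gx = gy = 0.0
    for (cx, cy) in CENTERS:
        g = math.exp(-((x - cx)**2 + (y - cy)**2))
        gx += -2.0 * (x - cx) * g
        gy += -2.0 * (y - cy) * g
    return gx, gy

def escape_time(x, y, px, py, dt=1e-3, r_max=10.0, t_max=100.0):
    t = 0.0
    while t < t_max:
        if x * x + y * y > r_max**2:
            return t
        gx, gy = grad_V(x, y)     # symplectic Euler: update p, then q
        px -= dt * gx
        py -= dt * gy
        x += dt * px
        y += dt * py
        t += dt
    return math.inf               # numerically trapped up to t_max

print(escape_time(0.0, 0.0, 1.0, 0.0))
```

Sampling escape_time over a grid of initial conditions on the energy surface is one way to probe the neighborhood structure of the trapped set, since 1/T vanishes exactly on trapped orbits.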
Now, the function 1/T is nonnegative for all
(x, y, p_x, p_y) and vanishes on K_{[E_0,E_1]} = K ∩ {E_0 ≤ H ≤ E_1}. Assuming that 1/T is sufficiently
regular, 8 this suggests

1/T(x, y, p_x, p_y) ≈ dist((x, y, p_x, p_y), K_{[E_0,E_1]})²,

where dist(·, K_{[E_0,E_1]}) denotes distance to K_{[E_0,E_1]}. It
follows that N_res should scale like

h^{-n} vol({dist(·, K_{[E_0,E_1]}) ≤ (Γ_0 h)^{1/2}}).

For small Γ_0, this becomes

C h^{-n} (Γ_0 h)^{(2n − D(K_{[E_0,E_1]}))/2}

for some constant C, by (14). Choosing Γ_0 fixed
and assuming that D(K_E) decreases monotonically
with increasing E (as is the case in Figure
22), so that D(K_{[E_0,E_1]}) = D(K_{E_0}) + 1, we obtain

N_res ≤ C′ h^{-(D(K_{E_0})+1)/2}.

If E_1 − E_0 is sufficiently small, then D(K_{E_0}) ≈ D(K_{E_1})
and

N_res ∼ h^{-(D(K_E)+1)/2}.   (24)
In [27], Sjostrand proved the following rigorous
upper bound: for Γ ≤ h/C, the number of resonances λ with Re(λ) ∈ [E_0, E_1] and |Im(λ)| ≤ Γ
is at most C_δ h^{-(D(K_{[E_0,E_1]}) + δ)/2}, and this
holds for all δ > 0. When the trapped set is of pure
dimension, that is when the infimum in the definition of the Minkowski dimension
is achieved, one can take δ = 0. Setting
Γ = h gives an upper bound of the form (24).
In his proof, Sjostrand used the semiclassical argument
above with escape functions and the Weyl
inequality for singular values. Zworski continued
this work in [35], where he proved a similar result
for scattering on convex co-compact hyperbolic
surfaces with no cusps. His work was motivated
by the availability of a large class of examples
with hyperbolic flows, easily computable dimensions, and the hope that the Selberg trace formula
could help obtain lower bounds. But, these
hopes remain unfulfilled so far [14], and that partly
motivates this work.
8 In fact, this is numerically self-consistent: assume that
1/T vanishes to order α (with α not necessarily equal to 2) on
K, and assume the conjecture. Then the number of resonances
would scale like h^{-n + (2n − D(K))/α}, from which one can solve
for α. With the numerical data we have, this indeed turns out
to be 2 (but with significant fluctuations).
Also, if 1/T does not vanish quadratically everywhere on
K, variations in its regularity may affect the correspondence
between classical trapping and the distribution of resonances.
4 Computing Resonances
Complex scaling reduces the problem of calculating
resonances to one of computing eigenvalues.
What remains is to approximate the operator Ĥ_θ
by a rank N operator Ĥ_{N,θ} and to develop appropriate
numerical methods. For comparison, see
[22, 31, 32, 24] for applications of complex scaling
to problems in physical chemistry.
4.1 Choice of Scaling Angle.
One important consideration in resonance computation
is the choice of the scaling angle θ: since
we are interested in counting resonances in a box
[E_0, E_1] − i[0, Γ_0 h], it is necessary to choose
2θ > tan^{-1}(Γ_0 h / (2E_0))
so that the continuous spectrum of
Ĥ_θ is shifted out of the box (see Figure 2).
In fact, the resonance calculation uses θ comparable to h.
This choice of θ helps avoid the pseudospectrum
[30, 36]:
Let A be an N × N matrix, and let R(z) be
the resolvent (A − zI)^{-1}. It is well known that
when A is normal, that is when A commutes with
its adjoint A*, the spectral theorem applies and the
inequality

||R(z)|| ≤ 1/dist(z, σ(A))

holds (σ(A) denotes the spectrum of A). When
A is not normal, no such inequality holds and
||R(z)|| can become very large for z far from
σ(A). This leads one to define the ε-pseudospectrum

Λ_ε(A) = {z ∈ C : ||R(z)|| ≥ 1/ε}.

Using the fact that A is a matrix, one can show that
Λ_ε(A) is equal to the set

{z ∈ C : z ∈ σ(A + B) for some matrix B with ||B|| ≤ ε}.

That is, the ε-pseudospectrum of A consists of
those complex numbers z which are eigenvalues
of an ε-perturbation of A.
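Since ||R(z)|| = 1/σ_min(A − zI), membership in the ε-pseudospectrum can be tested with a singular value computation. The sketch below is our own illustration using a Jordan block, a standard example of a nonnormal matrix whose pseudospectrum extends far from its spectrum:

```python
# Sketch: testing z in the epsilon-pseudospectrum via
# sigma_min(A - zI) <= epsilon. For a nonnormal matrix (a Jordan block),
# points far from the spectrum still lie in tiny-epsilon pseudospectra.

import numpy as np

def sigma_min(A, z):
    """Smallest singular value of A - zI (the reciprocal of ||R(z)||)."""
    return np.linalg.svd(A - z * np.eye(A.shape[0]), compute_uv=False)[-1]

n = 30
jordan = np.diag(np.ones(n - 1), 1)   # nilpotent Jordan block: spectrum {0}
normal = np.zeros((n, n))             # also spectrum {0}, but normal

z = 0.5
print(sigma_min(jordan, z), sigma_min(normal, z))
```

At z = 0.5, both matrices have dist(z, σ(A)) = 0.5; for the normal matrix σ_min equals that distance, while for the Jordan block it is many orders of magnitude smaller, exactly the "false eigenvalue" danger described in the text.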
The idea of pseudospectrum can be extended to
general linear operators. In [30], it is emphasized
that for non-normal operators, the pseudospectrum
can create "false eigenvalues" which make the accurate
numerical computation of eigenvalues difficult. In [36], this phenomenon is explained using
semiclassical asymptotics. Roughly speaking, the
pseudospectrum of the scaled operator Ĥ_θ is given
by the closure of the range of its symbol H_θ, which is the scaled Hamiltonian
function

H_θ(x, y, p_x, p_y) = e^{-2iθ}(p_x² + p_y²)/2 + V_3(e^{iθ}x, e^{iθ}y)

in this case. Choosing θ to be comparable to h ensures
that the imaginary part of H_θ is also comparable
to h, which keeps the pseudospectrum away
from the counting box [E_0, E_1] − i[0, Γ_0 h]. A larger θ
would contribute a larger term to the imaginary
part of H_θ and enlarge the pseudospectrum.
As one can see in Figures 31 - 34, the invariance
of resonances under perturbations in also helps
filter out pseudospectral effects.
This consideration also points out the necessity
of choosing the counting box height comparable to h: to avoid pseudospectral
effects, θ must be O(h). On the other
hand, with θ of size O(h), the finite-rank approximations
may fail to capture resonances farther below the real axis than the region
of interest.
4.2 Eigenvalue Computation
Suppose that we have constructed Ĥ_{N,θ}. In
the case of eigenvalues, the Weyl law states that
N_eig ∼ h^{-2} as h → 0, since our system has
n = 2 degrees of freedom. Thus, in order to capture
a sufficient number of eigenvalues, the rank N
of the matrix approximation must scale like h^{-2}.
In the absence of more detailed information on the
density of resonances, the resonance computation
requires a similar assumption to ensure sufficient
numerical resolution.
Thus, for moderately small h, the matrix has
O(h^{-4}) entries, and storing it rapidly becomes prohibitive
on most computers available today. Furthermore,
even if one does not store the entire matrix,
numerical packages like LAPACK [2] require
considerable auxiliary storage, again making practical
calculations impossible.
Instead of solving the eigenvalue problem
Ĥ_{N,θ}v = λv directly, one solves the equivalent
eigenvalue problem

(Ĥ_{N,θ} − λ_0)^{-1} v = μv.

Efficient implementations of the Arnoldi algorithm
[19] can solve for the largest few eigenvalues
μ of (Ĥ_{N,θ} − λ_0)^{-1}. But μ = 1/(λ − λ_0), so
this method allows one to compute a subset of the
spectrum of Ĥ_{N,θ} near a given λ_0.
Such algorithms require a method for applying
the matrix (Ĥ_{N,θ} − λ_0)^{-1} to a given vector v
at each iteration step. In the resonance computation,
this is done by solving (Ĥ_{N,θ} − λ_0)w = v
for w by applying conjugate gradient to the normal
equations (see [4]). 9 The resonance program,
therefore, consists of two nested iterative methods:
an outer Arnoldi loop and an inner iterative linear
solver for (Ĥ_{N,θ} − λ_0)w = v. This computation
uses a package which provides a flexible and efficient
implementation of the Arnoldi method. 10
To compute resonances near a given energy E,
the program uses λ_0 = E − ia with a > 0 instead of
λ_0 = E. This helps control the condition number
of Ĥ_{N,θ} − λ_0 and gives better error estimates and
convergence criteria. 11
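The core of the shift-invert idea can be shown in a few lines. The sketch below is our own illustration: for brevity it uses plain power iteration on (A − σI)^{-1} rather than a full Arnoldi loop (which builds a Krylov subspace from the same solves), and a dense direct solve stands in for the inner iterative solver:

```python
# Sketch: shift-invert iteration. Repeatedly solving (A - sigma I) w = v
# amplifies the eigenvector whose eigenvalue of A lies closest to sigma;
# the Arnoldi method builds a Krylov space from these same solves.

import numpy as np

def nearest_eigenvalue(A, sigma, iters=200, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    M = A - sigma * np.eye(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)       # apply (A - sigma I)^{-1}
        v /= np.linalg.norm(v)
    return v @ A @ v                    # Rayleigh quotient: eigenvalue of A

# 1D discrete Laplacian with known eigenvalues 2 - 2 cos(k*pi/(n+1))
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
exact = 2 - 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
target = 1.0
print(nearest_eigenvalue(A, target), exact[np.argmin(np.abs(exact - target))])
```

The transformation μ = 1/(λ − σ) turns eigenvalues clustered near σ into well-separated dominant eigenvalues, which is why convergence is fast even deep inside a dense spectrum.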
4.3 Matrix Representations
4.3.1 Choice of Basis
While one can discretize the differential operator
Ĥ_θ via finite differences, in practice it is better to
represent the operator using a basis for a subspace
of the Hilbert space: one can then better represent
the properties of wave functions near infinity
and obtain smaller (but more dense) matrices.
Common basis choices in the chemical literature
include so-called "phase space gaussian" [6]
and "distributed gaussian" bases [15]. These bases
are not orthogonal with respect to the usual L² inner
product, so one must explicitly orthonormalize
the basis before computing the matrix representation
of Ĥ_θ. In addition to the computational
cost, this also requires storing the entire matrix
and severely limits the size of the problem one can
solve. Instead, this computation uses a discrete-variable
representation (DVR) basis [20]:
9 That is, instead of solving Aw = v directly, one applies conjugate gradient
to the normal equations A*Aw = A*v. This is necessary because
Ĥ_{N,θ} is non-hermitian, and conjugate gradient only works for positive definite matrices.
This is not the best numerical method for non-hermitian problems,
but it is easy to implement and suffices in this case.
10 See the package documentation for details, as well as an overview
of Krylov subspace methods.
11 Most of the error in solving the matrix equation (Ĥ_{N,θ} − λ_0)w = v
concentrates on eigenspaces of (Ĥ_{N,θ} − λ_0)^{-1}
with large eigenvalues. These are precisely the desired eigenvalues,
so in principle one can tolerate inaccurate solutions.
However, the calculation requires convergence criteria and error
estimates for the linear solver, and using a > 0
turns out to ensure a relative error of about 10^{-6} after about
17-20 iterations of the conjugate gradient solver. Since we only
wanted to count eigenvalues, a more accurate (and expensive)
computation of resonances was not necessary.
Figure 3: A sinc function.
Consider, for the moment, the one-dimensional
problem of finding a basis for a "good" subspace
of L²(R). Fix a constant Δx > 0, and for each
integer m, define

φ_{m,Δx}(x) = (1/√Δx) · sin(π(x − mΔx)/Δx) / (π(x − mΔx)/Δx).

(This is known as a "sinc" function in the engineering
literature [25]. See Figure 3.) The Fourier transform
of φ_{m,Δx} is supported in [−π/Δx, π/Δx], where it equals a
multiple of e^{-imΔxξ}.
One can easily verify that {φ_{m,Δx}} forms an orthonormal
basis for the closed subspace of L² of
functions whose Fourier transforms are supported
in [−π/Δx, π/Δx].
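The orthonormality claim is easy to verify numerically. This sketch is our own check (window size and grid spacing are illustrative; sinc functions decay slowly, so the quadrature window must be wide):

```python
# Sketch: sinc basis functions phi_{m,dx} and a quadrature check of their
# orthonormality. np.sinc(u) = sin(pi*u)/(pi*u), so phi below matches
# (1/sqrt(dx)) * sin(pi*(x - m*dx)/dx) / (pi*(x - m*dx)/dx).

import numpy as np

def phi(m, dx, x):
    return np.sinc((x - m * dx) / dx) / np.sqrt(dx)

dx = 1.0
x = np.linspace(-400, 400, 160001)   # wide window, fine spacing
w = x[1] - x[0]
g = {m: phi(m, dx, x) for m in (0, 1, 3)}
print(round(np.sum(g[0] * g[0]) * w, 3),   # <phi_0, phi_0> ~ 1
      round(np.sum(g[0] * g[1]) * w, 3))   # <phi_0, phi_1> ~ 0
```

The slow 1/x decay of the tails is the price of exact band-limiting; the truncation error here scales like the inverse of the window size.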
To find a basis for the corresponding space of band-limited
functions in L²(R²), simply form the tensor
products

φ_{mn}(x, y) = φ_{m,Δx}(x) φ_{n,Δx}(y).

The basis has a natural one-to-one correspondence
with the points (mΔx, nΔx) of a regular
lattice of grid points in a box [X_0, X_1] × [Y_0, Y_1]
covering the spatial region of interest. (See Figure
4.) Using this basis, it is easy to compute matrix
elements of Ĥ_θ.
Figure 4: Illustration of resonance program parameters
in configuration space: the lower-left
corner of the mesh is (X_0, Y_0), while the upper-right
corner is (X_1, Y_1). The mesh contains
N_x × N_y grid points, and a basis function φ_{mn} is
placed at each grid point. Stars mark the centers
of the potentials, and the circles have radius R_0.
Parameters for the classical computation
are depicted in Figure 5.
4.3.2 Tensor Product Structure
An additional improvement comes from the separability
of the Hamiltonian: each term in the
scaled Hamiltonian Ĥ_θ splits into a tensor product,
e.g.

Δ = (d²/dx²) ⊗ I_y + I_x ⊗ (d²/dy²),   (36)

G_{(x_0,y_0)} = G_{x_0} ⊗ G_{y_0},   (37)

where I_x and I_y denote identity operators on
copies of L²(R). Since the basis {φ_{mn}} consists
of tensor products of one-dimensional bases, Ĥ_{N,θ}
is also a short sum of tensor products. Thus, if
we let N_x denote the number of grid points in
the x direction and N_y the number of
grid points in the y direction, then N = N_x N_y
and Ĥ_{N,θ} is a sum of five matrices of the form
A_x ⊗ A_y, where A_x is N_x × N_x and A_y is N_y × N_y.
Such tensor products of matrices can be applied
to arbitrary vectors efficiently using the outer
product representation. 12 Since the rank of Ĥ_{N,θ}
is N = N_x N_y and N_x ≈ N_y ≈ √N in these computations,
we can store the tensor factors of the matrix
using O(N) storage instead of O(N²), and
apply Ĥ_{N,θ} to a vector in O(N^{3/2}) operations instead
of O(N²). The resulting matrix is not sparse,
as one can see from the matrix elements for the
Laplacian below.
Note that this basis fails to take advantage of the
discrete rotational symmetry of the triple gaussian
Hamiltonian. Nevertheless, the tensor decomposition
provides sufficient compression of information
to facilitate efficient computation.
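The outer product trick amounts to one reshape and two matrix multiplications. This sketch is our own illustration of the technique, checked against the explicitly formed Kronecker product:

```python
# Sketch: applying a Kronecker product (A kron B) to a vector by reshaping,
# in O(Nx^2*Ny + Nx*Ny^2) operations, without forming the N x N matrix.

import numpy as np

def kron_apply(A, B, v):
    """Compute (A kron B) @ v using the outer product representation."""
    nx, ny = A.shape[0], B.shape[0]
    V = v.reshape(nx, ny)              # v indexed as v[i*ny + j]
    return (A @ V @ B.T).reshape(nx * ny)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 40))
B = rng.standard_normal((30, 30))
v = rng.standard_normal(40 * 30)
print(np.allclose(kron_apply(A, B, v), np.kron(A, B) @ v))
```

For a sum of five such terms, as in the scaled Hamiltonian, one simply accumulates five calls to kron_apply; only the small factors A_x and A_y are ever stored.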
4.3.3 Matrix Elements
It is straightforward to calculate matrix elements for the Laplacian in this basis. There is no closed-form expression for the matrix elements of the potential, but it is easy to perform numerical quadrature with these functions. For example, to compute

V_mn = ∫ G(x) φ_m(x) φ_n(x) dx   (40)

one can use an equally spaced quadrature sum whose stepsize should satisfy […] ≤ Δx/2. It is easy to show that the error is bounded by the sum of a term […], which controls the aliasing error, and a term […] exp(−[…]), which controls the truncation error.
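Equally spaced quadrature converges extremely fast for integrands with Gaussian decay, since all derivatives vanish at the (truncated) endpoints. A small self-contained illustration, using a generic Gaussian integrand rather than the paper's basis functions:

```python
import math

def quadrature_sum(f, a, b, n):
    """Midpoint-rule approximation of the integral of f over [a, b]
    with n equally spaced subintervals of width h = (b - a) / n."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# For exp(-x^2) the exact integral over the real line is sqrt(pi);
# once [a, b] covers the support, the equally spaced sum is accurate
# far beyond the usual O(h^2) midpoint estimate.
approx = quadrature_sum(lambda x: math.exp(-x * x), -8.0, 8.0, 400)
assert abs(approx - math.sqrt(math.pi)) < 1e-12
```

The two error sources mirror the text: truncating the infinite domain to [a, b] (truncation error) and sampling at spacing h (aliasing error).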
4.3.4 Other Program Parameters
The grid spacing Δx implies a limit on the maximum possible momentum in a wave packet formed by this basis. In order to obtain a finite-rank operator, it is also necessary to limit the number of basis functions.
The resonance computation used the following parameters:
1. The box dimensions are chosen to cover the region of the configuration space for which […].
2. Let L […] denote the dimensions of the computational domain. The resonance calculation uses […] basis functions, with N_x […] and N_y […].
3. This gives […], which limits the maximum momentum in a wave packet to |p_x| ≤ […] and |p_y| ≤ √(2E).
5 Trapped Set Structure
5.1 Poincaré Section
Because the phase space for the triple gaussian model is R⁴ and its flow is chaotic, a direct computation of the trapped set dimension is difficult. Instead, we try to compute its intersection with a Poincaré section:
Let E be a fixed energy, and recall that R is the distance from each gaussian bump to the origin. Choose R_0 < R so that the circles C_k of radius R_0 centered at each potential, for k = 0, 1, 2, do not intersect. The angular momentum p_θ with respect to the kth potential center is defined by p_θ = […]. Let P_k be the submanifold […] (see Figure 5), where the coordinates (θ, p_θ) in the submanifold P_k are related to the ambient phase space coordinates (x, y, p_x, p_y) by […], and the radial momentum p_r is determined by energy conservation, p_r = √(2(E − V) − p_θ²/R_0²).
Figure 5: A typical trajectory. Stars mark the potential centers; in this case, R = 1.4. The circles drawn in the figure have radius 1, and the disjoint union of their cotangent bundles forms the Poincaré section. Trajectories start on the circle centered at bump 0 (the bumps are, counterclockwise, 0, 1, and 2) with some given angle θ and angular momentum p_θ. This trajectory generates the finite sequence (0̇, […], 1); symbolic sequences are discussed later in the paper. The dashed line is the time-reversed trajectory with the same initial conditions, generating the sequence (1, 2, 0, 2, 0̇).
Note that this implicitly embeds P into the energy surface, and the radial momentum p_r is always positive: the vector (p_x, p_y) points away from the center of C_k.
The trapped set is naturally partitioned into two
subsets: The first consists of trajectories which
visit all three bumps, the second of trajectories
which bounce between two bumps. The second set
forms a one-dimensional subspace of KE , so the
finite stability of the Minkowski dimension 13 implies
that the second set does not contribute to the
dimension of the trapped set. More importantly,
most trajectories which visit all three bumps will
also cut through P .
One can thus reduce the dimension of the problem
by restricting the flow to KE \ P , as follows:
Take any point (θ, p_θ) in P_k, and form the corresponding phase space point via Equation (45). Follow along the trajectory Φ_t(x, y, p_x, p_y). If the trajectory does not escape, eventually it must encounter one of the other circles, say C_k'. Generically, trajectories cross C_k' twice at each encounter, and we denote the coordinates (θ', p_θ') (in P_k') of the outgoing intersection by the image of (θ, p_θ) under the section map. If a trajectory escapes from the trapping region, we symbolically assign it the value ∞.
The section map then generates stroboscopic recordings of the flow Φ_t on the submanifold P, and the corresponding discrete dynamical system has trapped set K_E ∩ P. So, instead of computing Φ_t on R⁴, one only needs to compute the section map on P. By symmetry, it will suffice to compute the dimension of K̃_E, the part of the trapped set lying in P_0. Pushing K̃_E forward along the flow Φ_t adds one dimension, so D(K_E) = D(K̃_E) + 1. Being a subset of the two-dimensional space P_0, K̃_E is easier to work with.
Readers interested in a more detailed discussion
of Poincare sections and their use in dynamics are
referred to [29]. For an application to the similar
but simpler setting of hard disc scattering, see [9,
12]. Also, Knauf has applied some of these ideas
in a theoretical investigation of classical scattering
by Coulombic potentials [17].
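The reduction to a section map can be sketched numerically. The following is a minimal sketch under stated assumptions, not the paper's actual code: it takes a unit-mass Hamiltonian H = (p_x² + p_y²)/2 + V with three unit-height Gaussian bumps at distance R = 1.4 from the origin and section circles of radius R_0 = 1 (these parameter values are hypothetical, chosen to match Figure 5), integrates the flow with fixed-step RK4, and detects outgoing crossings of the circles C_k:

```python
import math

# Hypothetical parameters matching Figure 5; the paper's exact values differ.
R, R0 = 1.4, 1.0
CENTERS = [(R * math.cos(2 * math.pi * k / 3),
            R * math.sin(2 * math.pi * k / 3)) for k in range(3)]

def energy(s):
    """H = |p|^2 / 2 + V with V(x, y) = sum_k exp(-|r - c_k|^2)."""
    x, y, px, py = s
    V = sum(math.exp(-((x - cx) ** 2 + (y - cy) ** 2)) for cx, cy in CENTERS)
    return 0.5 * (px * px + py * py) + V

def grad_V(x, y):
    gx = gy = 0.0
    for cx, cy in CENTERS:
        e = math.exp(-((x - cx) ** 2 + (y - cy) ** 2))
        gx += -2.0 * (x - cx) * e
        gy += -2.0 * (y - cy) * e
    return gx, gy

def rk4_step(s, dt):
    """One RK4 step of Hamilton's equations x' = p, p' = -grad V."""
    def f(s):
        x, y, px, py = s
        gx, gy = grad_V(x, y)
        return (px, py, -gx, -gy)
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def next_crossing(s, dt=1e-3, t_max=50.0):
    """Integrate until the trajectory crosses a circle C_k from inside to
    outside; return (k, state), or (None, state) if it escapes the budget."""
    def inside(s):
        return [math.hypot(s[0] - cx, s[1] - cy) < R0 for cx, cy in CENTERS]
    was_in, t = inside(s), 0.0
    while t < t_max:
        s = rk4_step(s, dt)
        t += dt
        now_in = inside(s)
        for k in range(3):
            if was_in[k] and not now_in[k]:
                return k, s           # outgoing intersection with C_k
        was_in = now_in
    return None, s
```

A real implementation would also interpolate the crossing point and record (θ, p_θ) there; this sketch only locates which circle is hit next.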
5.2 Self-Similarity
Much is known about the self-similar structure of the trapped set for hard disc scattering [9, 12];

13 That is, the Minkowski dimension of a finite union of sets is the maximum of their dimensions; see [8].
Figure 6: Points in P_0 which do not go to ∞ after one iteration of the section map. The horizontal axis is θ and the vertical axis is p_θ.
less is known about "soft scatterers" like the triple gaussian system. However, computational results and analogy with hard disc scattering give strong support to the idea that K (and hence K̃_E) is self-similar. 14 Consider Figures 6 - 12: they show clearly that K̃_E is self-similar. (In these images, […].) However, it is also clear that, unlike objects such as the Cantor set or the Sierpinski gasket, K̃_E is not exactly self-similar.
5.3 Symbolic Dynamics
The computation of D(K̃_E) uses symbolic sequences, which require a brief explanation: for any point (θ, p_θ), let s_i denote the index k of the circle C_k that the trajectory intersects at the ith iteration of the section map (or the |i|th iteration of its inverse, for negative i), with ∞ occurring only at the ends. Let us call sequences satisfying these conditions valid.
For example, the trajectory in Figure 5 generates the valid sequence (0̇, […], 1), where the dot over 0 indicates that the initial point (θ, p_θ) of the trajectory belongs to P_0. Thus, we can label collections of trajectories using valid sequences, and label points in P with "dotted" sequences.
14 More precisely, self-affine.
Figure 7: Points in P_0 which do not go to ∞ after one iteration of the inverse section map. The horizontal axis is θ and the vertical axis is p_θ.
Figure 8: The intersection of the sets in Figures 6 and 7. These points correspond to symmetric sequences of length 3.
Figure 9: The lower-right "island" in Figure 8, magnified. The white cut-out in the middle is the subset corresponding to symmetric sequences of length 5.
Figure 10: The cut-out part of Figure 9, magnified. Recall that these correspond to symmetric sequences of length 5; compare with Figure 8.
Figure 11: The upper-right island in Figure 8. The white cut-out in the middle is, again, the subset corresponding to symmetric sequences of length 5.
Figure 12: The cut-out part of Figure 11, magnified. Recall that these correspond to symmetric sequences of length 5. Compare with Figures 8 and 10.
Clearly, trapped trajectories generate bi-infinite sequences. 15
The islands in Figures 8 - 11 correspond to symmetric sequences centered at 0, of the form […]. By keeping track of the symbolic sequences generated by each trajectory, one can easily label and isolate each island. This is a useful property from the computational point of view.
5.4 Dimension Estimates
To compute the Minkowski dimension using Equation (15), we need to determine when a given point is within ε of K̃_E. This is generally impossible: the best one can do is to generate longer and longer trajectories which stay trapped for increasing (but finite) amounts of time.
Instead, one can estimate a closely related quantity, the information dimension, in the following way: let K̃_E^(k) denote the set of all points in P_0 corresponding to symmetric sequences of length 2k + 1 centered at 0. That is, K̃_E^(k) consists of all points in P_0 which generate trajectories (both forwards and backwards in time) that bounce at least k times before escaping. The sets K̃_E^(k) decrease monotonically to K̃_E.
One can then estimate the information dimension using the following algorithm:
1. Initialization: Cover P_0 with a mesh L_0 with N_0 × N_0 points and mesh size ε_0.
2. Recursion: Begin with K̃_E^(1), which consists of four islands corresponding to symmetric sequences of length 3 (see Figure 8). Magnify each of these islands and compute the sub-islands corresponding to symmetric sequences of length 5 (see Figures 9 and 11). Repeat this procedure to recursively compute the islands of K̃_E^(k+1) from those of K̃_E^(k), until k is sufficiently large that each island of K̃_E^(k) has diameter smaller than the mesh size ε_0 of L_0.
3. Estimation: Using the islands of K̃_E^(k) at the final recursion depth,

15 In hard disc scattering, the converse holds for sufficiently large R: to each bi-infinite valid sequence there exists a trapped trajectory generating that sequence. This may not hold in the triple gaussian model, and in any case it is not necessary for the computation.
Figure 13: This figure illustrates the recursive step in the dimension estimation algorithm: the dashed lines represent L_0, while the solid lines represent a smaller mesh centered on one of the islands. The N_0 × N_0 mesh L_0 remains fixed throughout the computation, but the smaller N_1 × N_1 meshes are constructed for each island of K̃_E^(k), according to the value of k specified by the algorithm.
estimate the probability

p_ij = vol(K̃_E^(k) ∩ cell_ij) / vol(K̃_E^(k))

for the (ij)th cell of L_0. We can then compute the dimension via

D(K̃_E) ≈ (Σ_ij p_ij log p_ij) / log ε_0,

which reduces to (15) when the distribution is uniform, because ε_0 ∝ 1/N_0.
Under suitable conditions (as is assumed to be the case here), the information dimension agrees with both the Hausdorff and the Minkowski dimensions. 16
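The counting idea behind these estimates can be illustrated on a set whose dimension is known exactly. This sketch (not from the paper) estimates the Minkowski dimension of the middle-thirds Cantor set as the least-squares slope of log N(ε) against log(1/ε):

```python
import math

def cantor_midpoints(depth):
    """Midpoints of the middle-thirds Cantor construction at a given depth."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        intervals = [piece for a, b in intervals
                     for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return [a + (b - a) / 2 for a, b in intervals]

def box_counting_dimension(points, scales):
    """Least-squares slope of log N(eps) versus log(1/eps), where N(eps)
    is the number of boxes of size eps containing at least one point."""
    logs = [(math.log(1.0 / eps),
             math.log(len({math.floor(p / eps) for p in points})))
            for eps in scales]
    n = len(logs)
    sx = sum(x for x, _ in logs)
    sy = sum(y for _, y in logs)
    sxx = sum(x * x for x, _ in logs)
    sxy = sum(x * y for x, y in logs)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

pts = cantor_midpoints(10)
dim = box_counting_dimension(pts, [3.0 ** -k for k in range(2, 9)])
# The exact answer is log 2 / log 3 ≈ 0.6309.
assert abs(dim - math.log(2) / math.log(3)) < 0.02
```

The algorithm in the text refines this idea by weighting each occupied cell with a probability p_ij instead of counting cells with equal weight.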
The algorithm begins with the lattice L_0 with which one wishes to compute the dimension. It then recursively computes K̃_E^(k) for increasing values of k, until it closely approximates K̃_E relative to the mesh size of L_0. It is easy to keep track of points belonging to each island in this computation, since each island corresponds uniquely

16 See [23] for a discussion of the relationship between these dimensions, as well as their use in multifractal theory.
Figure 14: These are the eigenvalues of Ĥ_N,θ for θ ∈ {0.0624, 0.0799, 0.0973}. This calculation used a 102 × 108 grid, and 90 out of […] eigenvalues were computed.
to a finite symmetric sequence. Note that while the large mesh L_0 remains fixed throughout the computation, the recursive steps require smaller meshes around each island of K̃_E^(k), according to the value of k specified by the algorithm. See Figure 13.
6 Numerical Results
6.1 Resonance Counting
As an illustration of complex scaling, Figures 14 - 18 show eigenvalues of Ĥ_N,θ for h ranging over […, 0.025]. Eigenvalues of Ĥ_N,θ for different values of θ are marked by different styles of points, and the box has depth h and width 0.2, with […]. These plots may seem somewhat empty because only those eigenvalues of Ĥ_N,θ in regions of interest were computed. Notice the cluster of eigenvalues near the bottom edge of the plots: these are not resonances, because they vary under perturbations in θ. Instead, they belong to an approximation of the (scaled) continuous spectrum.
It is more interesting to see log(N_res) as a function of log(h) and R. This is shown in Figures 19 and 20. Using least-squares regression, we can extract approximate slopes for the curves in Figure 19; these are shown in Table 2 and plotted in Figure 21.
Figure 15: Eigenvalues for […], using a 112 × 119 grid and 98 out of […] eigenvalues.
Figure 16: Eigenvalues for […], using a 123 × 131 grid and 107 out of […] eigenvalues.
Figure 17: Eigenvalues for […], using a 135 × 144 grid and 116 out of […] eigenvalues.
Figure 18: Eigenvalues for […], using a […] grid and […] out of […] eigenvalues.
Figure 19: log(N_res) as a function of log(h), for h varying from 0.017 to 0.025 and R = 1.4 + 0.05k, with 0 ≤ k ≤ 6. (The lowest curve corresponds to R = 1.4, while the highest curve corresponds to R = 1.7.)
Figure 20: log(N_res) as a function of R, for different values of log(h): the highest curve corresponds to […], while the lowest curve corresponds to […].
Figure 21: The slopes extracted from Figure 19, as a function of R. The dotted curve is a least-squares regression of the "noisy" curve.
Table 1: Estimates of D(K_E) + 1 as a function of R, for the three energies of Figure 22.

R   | E = 0.4 | E = 0.5 | E = 0.6
1.6 | 1.2986  | 1.2725  | 1.2511
1.7 | 1.2893  | 1.2636  | 1.2524
6.2 Trapped Set Dimension
For comparison, D(K_E) + 1 is plotted as a function of R in Figure 22. The figure contains curves corresponding to different energies E: the top curve corresponds to E = 0.4, the middle curve to E = 0.5, and the bottom curve to E = 0.6. It also contains curves corresponding to different program parameters, to test the numerical convergence of the dimension estimates. These curves were computed with recursion depth k_0 = 6 (corresponding to symmetric sequences of length 2k_0 + 1 = 13); the caption contains the values of N_0 and N_1 for each curve. For reference, Table 1 contains the dimension estimates shown in the graph. It is important to note that, while the dimension does depend on
Figure 22: This figure shows D(K_E) + 1 as a function of R: the top group of curves has E = 0.4, the middle E = 0.5, and the bottom E = 0.6. Solid curves marked with circles represent computations with […]. Dashed curves marked with X's represent computations where N […] = 14142, whereas dashed curves marked with triangles represent computations where N […] and 71. The recursion depth k_0 in all these figures is 6. The curves do not appear to have completely converged, but suffice for our purpose here.
Table 2: This table shows the slopes extracted from Figure 19, as well as the scaling exponents one would expect if the conjecture were true, computed at […]; relative errors are also shown.

R   | slope  | D(K_E)+1 | relative error
1.6 | 1.3055 | 1.2725   | 0.025256
E and R, it only does so weakly: relative to its value, D(K_E) + 1 is very roughly constant across the range of R and E computed here.
6.3 Discussion
Table 2 contains a comparison of D(K_E) + 1 (for E = 0.5) as a function of R, versus the scaling exponents from Figure 21. Figure 23 is a graphical representation of similar information. This figure shows that even though the scaling curve in Figure 21 is noisy, its trend nevertheless agrees with the conjecture. Furthermore, the relative size of the fluctuations is small. At the present time, the source of the fluctuation is not known, but it is possibly due to the fact that the range of h explored here is simply too large to exhibit the asymptotic behavior clearly. 17
Figures 24 - 30 show plots of log(N_res) versus log(h), for various values of R. Along with the numerical data, the least-squares linear fit and the scaling law predicted by the conjecture are also plotted. In contrast with Figure 23, these show clear agreement between the asymptotic distribution of resonances and the scaling exponent predicted by the conjecture. 18
6.4 Double Gaussian Scattering
Finally, we compute resonances for the double gaussian model (setting […] in (8)). This case is

17 But see Footnote 8.
18 The conjecture only supplies the exponents for power laws, not the constant factors. In the context of these logarithmic plots, this means the conjecture gives us only the slopes, not the vertical shifts. It was thus necessary to compute a y-intercept for each "prediction" curve (for the scaling law predicted by the conjecture) using least squares.
Figure 23: Lines with circles represent D(K_E) + 1 as functions of R, for E ∈ {0.4, 0.5, 0.6}. The dashed curve with triangles is the scaling exponent curve from Figure 21, while the solid curve with stars is the linear regression curve from that figure. Relative to the value of the dimension, the fluctuations are actually fairly small: see Table 2 for a quantitative comparison.
Figure 24: log(N_res) versus -log(hbar) for R = 1.4. […] mark the numerical data, circles the least-squares regression, and stars the slope predicted by the conjecture. h ranges from 0.025 down to 0.017.
Figure 25: The same for R = […]; h ranges from 0.025 to 0.017.
Figures 26 - 30: Analogous plots of log(N_res) versus -log(hbar) for the remaining values of R (R = 1.6 among them); h ranges from 0.025 to 0.017.
Figure 31: Resonances for two-bump scattering with […].

interesting for two reasons: first, there exist rigorous results [13, 21] against which we can check the correctness of our results. Second, it helps determine the validity of semiclassical arguments for the values of h used in computing resonances for the triple gaussian model.
The resonances are shown in Figures 31 - 37: in these plots, h ranges from 0.035 to 0.015. One can observe apparent pseudospectral effects in the first few figures [30, 36]; this is most likely because the scaling angle used here is twice as large as suggested in Section 4.1, to exhibit the structure of resonances farther away from the real axis.
To compare this information with known results [13, 21], we need some definitions: for a given energy E, define

C(E) = ∫_{x_0(E)}^{x_1(E)} […] dx,

where the limits of integration are […]. Let Λ(E) denote the larger (in absolute value) eigenvalue of the linearized section map at (0, 0); log Λ(E) is the Lyapunov exponent of the section map, and is easy to compute numerically in this case. Note that for two-bump scattering, each energy E determines a unique periodic trapped trajectory, and C(E) is the classical action computed along that trajectory.
Figure 32: Resonances for two-bump scattering with […].
Figure 33: Resonances for two-bump scattering with […].
Figure 34: Resonances for two-bump scattering with […].
Figure 35: Resonances for two-bump scattering with […].
Figure 36: Resonances for two-bump scattering with […].
Figure 37: Resonances for two-bump scattering with […].
Figure 38: Lattice points for […].
Since these expressions are analytic, they have continuations to a neighborhood of the real line; C(E) becomes a contour integral. In [13], it was shown that any resonance λ must satisfy […] + O(h²), where m and n are nonnegative integers. (The 1/2 in m + 1/2 comes from the Maslov index associated with the classical turning points.) This suggests that we define the map F(λ) = (F_1(λ), F_2(λ)), where F_1 is built from Re(C(λ)) and F_2 from Im(C(λ)) and h log Λ(Re(λ)). F should map resonances to points on the square integer lattice, and this is indeed the case: Figures 38 - 44 show images of resonances under F, with circles marking the nearest lattice points. The agreement is quite good, in view of the fact that we neglected terms of order h² in Equation (52).
7 Conclusions
Using standard numerical techniques, one can compute a sufficiently large number of resonances for the triple gaussian system to verify their asymptotic distribution in the semiclassical limit h → 0.
Figure 39: Lattice points for […].
Figure 40: Lattice points for […].
Figure 41: Lattice points for […].
Figure 42: Lattice points for […].
Figure 43: Lattice points for […].
Figure 44: Lattice points for […].
This, combined with effective estimates of the fractal dimension of the classical trapped set, gives strong evidence that the number of resonances N_res in a box [E − …, E + …] satisfies, for sufficiently small h,

N_res ∼ h^{−(D(K_E)+1)},

as one can see in Figure 23 and Table 2. Furthermore, the same techniques, when applied to double gaussian scattering, produce results which agree with rigorous semiclassical results. This supports the correctness of our algorithms and the validity of semiclassical arguments for the range of h explored in the triple gaussian model. The computation also hints at more detailed structures in the distribution of resonances: in Figures 14 - 18, one can clearly see gaps and strips in the distribution of resonances. A complete understanding of this structure requires further investigation.
While we do not have rigorous error bounds for the dimension estimates, the numerical results are convincing. It seems, then, that the primary cause of our failure to observe the conjecture in a "clean" way is the size of h: if one could study resonances at much smaller values of h, the asymptotics might become clearer.
Acknowledgments
Many thanks to J. Demmel and B. Parlett for crucial
help with matrix computations, and to X. S.
Li and C. Yang for ARPACK help. Thanks are
also due to R. Littlejohn and M. Cargo for their
help with bases and matrix elements, and to F.
Bonetto and C. Pugh for suggesting a practical
method for computing fractal dimensions. Z. Bai,
W. H. Miller, and J. Harrison also provided many
helpful comments. Finally, the author owes much
to M. Zworski for inspiring most of this work.
KL was supported by the Fannie and John Hertz
Foundation. This work was supported in part by
the Applied Mathematical Sciences subprogram
of the Office of Energy Research of the U. S. Department
of Energy under Contract DE-AC03-76-
SF00098. Computational resources were provided
by the National Energy Research Scientific Computing
Center (NERSC), the Mathematics Department
at Lawrence Berkeley National Laboratory,
and the Millennium Project at U. C. Berkeley.
References
"A class of analytic perturbations for one body Schrödinger Hamiltonians,"
LAPACK User's Guide
"Spectral properties of many-body Schrödinger operators with dilation analytic interactions,"
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
Quantum Mechanics of One- and Two-Electron Atoms
"Semiclassical Gaussian basis set method for molecular vibrational wave functions,"
Spectral Asymptotics in the Semi-Classical Limit
Fractal Geometry: Mathematical Foundations and Applications.
"Scattering from a classically chaotic repellor,"
"Semiclassical quantization of the scattering from a classically chaotic repellor,"
"Exact quantization of the scattering from a classically chaotic repellor,"
"Semiclassical resonances generated by a closed trajectory of hyperbolic type,"
"Wave trace for Riemann surfaces,"
"On distributed Gaussian bases for simple model multidimensional vibrational problems,"
"Can one hear the shape of a drum?"
"The n-Centre Problem of Celestial Mechanics"
ARPACK User's Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods
"Generalized discrete variable approximation in quantum mechanics,"
"Classical-limit Green's function (fixed-energy propagator) and classical quantization of nonseparable systems,"
"Tunneling and state specificity in unimolecular reactions,"
Dimension Theory in Dynamical Systems: Contemporary Views and Applications
"Collisional breakup in a quantum system of three charged particles,"
"The definition of molecular resonance curves by the method of exterior complex scaling,"
"Geometric bounds on the density of resonances for semi-classical problems,"
"Complex scaling and the distribution of scattering poles,"
Structure and Interpretation of Classical Mechanics.
"Pseudospectra of Linear Operators,"
"Model studies of mode specificity in unimolecular reaction dynamics,"
"Mode specificity in unimolecular reaction dynamics: the Henon-Heiles potential energy surface,"
"Numerical Computation of the Scattering Frequencies for Acoustic Wave Equations,"
"Quantum mechanics and semiclassics of hyperbolic n-disk scattering systems,"
"Dimension of the limit set and the density of resonances for convex co-compact hyperbolic surfaces,"
"Numerical linear algebra and solvability of partial differential equations"
"Resonances in Geometry and Physics,"
Keywords: scattering resonances; semiclassical asymptotics; chaotic trapping; fractal dimension
Topic-oriented collaborative crawling

Abstract: A major concern in the implementation of a distributed Web crawler is the choice of a strategy for partitioning the Web among the nodes in the system. Our goal in selecting this strategy is to minimize the overlap between the activities of individual nodes. We propose a topic-oriented approach, in which the Web is partitioned into general subject areas with a crawler assigned to each. We examine design alternatives for a topic-oriented distributed crawler, including the creation of a Web page classifier for use in this context. The approach is compared experimentally with a hash-based partitioning, in which crawler assignments are determined by hash functions computed over URLs and page contents. The experimental evaluation demonstrates the feasibility of the approach, addressing issues of communication overhead, duplicate content detection, and page quality assessment.

1. INTRODUCTION
A crawler is a program that gathers resources from the
Web. Web crawlers are widely used to gather pages for
indexing by Web search engines [12, 23], but may also be
used to gather information for Web data mining [16, 20],
for question answering [14, 28], and for locating pages with
specific content [1, 9].
A crawler operates by maintaining a pending queue of
URLs that the crawler intends to visit. At each stage of
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
CIKM'02, November 4-9, 2002, McLean, Virginia, USA.
the crawling process a URL is removed from the pending
queue, the corresponding page is retrieved, URLs are extracted
from this page, and some or all of these URLs are
inserted back into the pending queue for future processing.
For performance, crawlers often use asynchronous I/O to allow
multiple pages to be downloaded simultaneously [4,6] or
are structured as multithreaded programs, with each thread
executing the basic steps of the crawling process concurrently
with the others [23].
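The basic crawling loop just described can be sketched as follows. This is an illustration, not any particular crawler's code: `fetch` is a stand-in for real HTTP retrieval and link extraction, so the example runs against an in-memory link graph.

```python
from collections import deque

def crawl(seeds, fetch, max_pages=100):
    """Minimal single-threaded crawler: a pending queue of URLs, a visited
    set, and a user-supplied fetch(url) -> (content, outlinks) function.
    A FIFO pending queue yields a breadth-first ordering of the crawl."""
    pending = deque(seeds)
    visited, pages = set(), {}
    while pending and len(pages) < max_pages:
        url = pending.popleft()          # remove a URL from the queue
        if url in visited:
            continue
        visited.add(url)
        content, outlinks = fetch(url)   # network I/O in a real crawler
        pages[url] = content
        # insert extracted URLs back into the pending queue
        pending.extend(u for u in outlinks if u not in visited)
    return pages

# Toy "Web": an in-memory link graph standing in for real HTTP fetches.
web = {"a": ("page a", ["b", "c"]),
       "b": ("page b", ["a"]),
       "c": ("page c", [])}
pages = crawl(["a"], lambda u: web[u])
assert set(pages) == {"a", "b", "c"}
```

A multithreaded or asynchronous crawler runs many instances of the loop body concurrently against shared versions of `pending` and `visited`.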
Executing the crawler threads in parallel on a multiprocessor
or distributed system further improves performance [4,
10, 23]. Implementation on a distributed system has the
potential for allowing wide-scale geographical distribution
of the crawling nodes, minimizing the competition for local
network bandwidth. A distributed crawler may be structured
as a group of n local crawlers, with a single local
crawler running on each node of an n-node distributed sys-
tem. Each local crawler may itself be a multithreaded pro-
gram, maintaining its own pending queue and other data
structures, which are shared between the locally executing
threads. A local crawler may even be a parallel program executing
on a cluster of workstations, with centralized control
and a global pending queue. In this paper, we use the term
"distributed crawler" only in reference to systems in which
the data structures and system control are fully distributed.
Local crawlers cannot operate entirely independently. Collaboration
is necessary to avoid duplicated effort in several
respects. The local crawlers must collaborate to reduce or
eliminate the number of resources visited by more than one
local crawler. In an extreme case, with no collaboration,
all local crawlers might execute identical crawls, with each
visiting the same resources in the same order. Overall, the
distributed crawler must partition the Web between the local
crawlers so that each focuses on a subset of the Web
resources that are targeted by the crawl.
Collaboration is also needed to identify and deal with duplicated
content [5, 13, 23]. Often multiple URLs can be
used to reference the same site (money.cnn.com, cnnfn.com,
cnnfn.cnn.com). In some cases, large sets of inter-related
pages will be encountered repeatedly during a crawl, and
the crawler should avoid visiting each copy in its entirety.
For example, many copies of the Sun Java JDK documentation
can be found on the Web. If the results of a crawl
are used in a Web search system, the user might only be
presented with the most authoritative copy of a resource,
such as the copy of Java documentation on the Sun Web-
site. Similarly, Web-based question answering [14] depends
on independence assumptions that duplicated content in-
validates. To recognize duplicated content, a local crawler
might compute a hash function over contents of each Web
page [13, 23]. In a distributed crawler, sufficient information
must be exchanged between the local crawlers to allow
duplicates to be handled correctly.
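Content hashing of this kind can be sketched in a few lines; the URLs below are illustrative only, and in a distributed setting the local crawlers would exchange digests rather than full pages:

```python
import hashlib

def content_digest(page: bytes) -> str:
    """Hash of page contents: pages fetched under different URLs but with
    identical bytes share a digest, which flags them as duplicates."""
    return hashlib.sha256(page).hexdigest()

seen = {}  # digest -> first URL observed with that content

def is_duplicate(url: str, page: bytes) -> bool:
    digest = content_digest(page)
    first = seen.setdefault(digest, url)
    return first != url

assert not is_duplicate("http://money.cnn.com/", b"<html>markets</html>")
assert is_duplicate("http://cnnfn.com/", b"<html>markets</html>")
```

Exact hashing only catches byte-identical copies; near-duplicate detection (e.g. shingling) requires more machinery but follows the same "summarize then compare" pattern.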
Collaboration may be necessary to effectively implement
URL ordering algorithms. These algorithms attempt to order
the URLs in the pending queue so that the most desired
pages are retrieved first. The resulting order might reflect
the expected quality of the pages or the need to refresh a
local copy of important pages whose contents are known to
change rapidly. Cho et al. [12] compare several approaches
for ordering URLs within the pending queue. They introduce
an ordering based on the pagerank measure [4], which
is a measure of a page's expected quality, and compare it
with a breadth-first ordering, a random ordering, and an
ordering based on a simple backlink count. Their reported
experiments demonstrate that the pagerank ordering is more
likely to place important pages earlier in the pending queue.
Given a page P, its pagerank R(P) may be determined from the pageranks of the pages T_1, ..., T_n that link to it:

R(P) = (1 - d) + d (R(T_1)/c_1 + ... + R(T_n)/c_n),

where d is a damping factor whose value is usually in the range 0.8 to 0.9, and where c_i is the outdegree of T_i - the total number of pages with links from T_i. Given a set of
pages, their pagerank values may be computed by assigning
each page an initial value of 1 and then iteratively applying
the formula above. In the context of a crawler, pagerank
values may be estimated for the URLs in the pending queue
from the backlink information provided by pages that have
already been retrieved. Because of the dependence on backlink
information, local crawlers may need to exchange this
information in order to accurately estimate pagerank values
within their local pending queues [10].
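The iterative computation described above can be sketched directly from the formula; the two-page graph at the end is a toy example, not data from the paper:

```python
def pagerank(links, d=0.85, iters=50):
    """Iterate R(P) = (1 - d) + d * sum(R(T_i) / c_i) over the pages T_i
    linking to P, starting from R = 1 everywhere.
    `links` maps each page to the list of pages it links to."""
    rank = {p: 1.0 for p in links}
    for _ in range(iters):
        new = {p: 1.0 - d for p in links}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)  # split rank over outlinks
                for q in outs:
                    new[q] += share
        rank = new
    return rank

# Two pages linking to each other: by symmetry the fixed-point rank is 1.
r = pagerank({"a": ["b"], "b": ["a"]})
assert abs(r["a"] - 1.0) < 1e-9 and abs(r["b"] - 1.0) < 1e-9
```

A crawler estimating pagerank for its pending queue runs the same iteration over the partial link graph seen so far, which is why missing backlink information from remote crawlers degrades the estimate.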
Ideally, the local crawlers would operate nearly indepen-
dently, despite the need for collaboration. The amount
of data exchanged between local crawlers should be mini-
mized, preserving network bandwidth for the actual down-load
of the targeted Web resources. For scalability, the total
amount of data sent and received by a local crawler should
be no more than a small constant factor larger than the
amount of received data that is actually associated with the
crawler's partition. Synchronization between local crawlers,
in which the operation of one local crawler may be delayed
by the need to communicate with another local crawler,
should be avoided. Avoiding synchronization is especially
important when local crawlers are geographically distributed,
and network or node failures may disrupt communication.
Finally, it is desirable for the output of each local crawler
to be usable as an independent subcollection. For example,
if the crawl is being generated for use by a distributed appli-
cation, such as a distributed search engine, it may be possible
to transfer data directly from the source nodes in the
distributed crawler to destination nodes in the distributed
application, without ever centralizing the data. If the output
of a local crawler is to be used as an independent sub-collection
it should exhibit internal "cohesion", with a link
graph containing densely connected components, allowing
pagerank and other quality metrics to be accurately estimated
[2, 4, 27].
In this paper we present X4, a topic-oriented distributed
crawling system. In X4, the Web is partitioned by content
and each local crawler is assigned a broad content category
as the target of its crawl. As pages are downloaded, a pre-trained
classifier is applied to each page, which determines
a unique content category for it. Each local crawler operates
independently unless it encounters a boundary page,
a page not associated with its assigned category. Boundary
pages are queued for transfer to their associated local
crawlers on remote nodes. As boundary pages arrive from
remote crawlers, they are treated as if they were directly
downloaded by the local crawler.
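The boundary-page routing just described can be sketched as follows. This is a simplified illustration, not X4's implementation: the keyword-counting classifier and the category names are hypothetical stand-ins for X4's pre-trained classifier and its broad content categories.

```python
from collections import deque

# Hypothetical broad categories, one per local crawler node.
CATEGORIES = ["sports", "science", "business"]

def classify(text: str) -> str:
    """Toy classifier: pick the category whose name occurs most often.
    X4 uses a pre-trained Web page classifier instead."""
    return max(CATEGORIES, key=lambda c: text.count(c))

def route(page_text, my_category, local_queue, boundary_queues):
    """Keep pages in the local crawler's category; queue boundary pages
    for transfer to the remote crawler that owns their category."""
    category = classify(page_text)
    if category == my_category:
        local_queue.append(page_text)
    else:
        boundary_queues[category].append(page_text)

local = deque()
boundary = {c: deque() for c in CATEGORIES}
route("science science news", "sports", local, boundary)
route("sports scores today sports", "sports", local, boundary)
assert len(local) == 1 and len(boundary["science"]) == 1
```

Pages arriving on a node's incoming boundary queue are then processed exactly as if the local crawler had downloaded them itself.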
The next section of the paper provides a review of related
work. Section 3 discusses an approach to collaborative
crawling that we view as the primary alternative to our
topic-oriented approach. In this alternative, the Web is partitioned
by hashing URLs and page contents. Sections 4
and 5 then examine issues in the design of the X4 crawler.
In section 6, X4 is evaluated experimentally, including a
comparison with hash-based partitioning.
2. RELATED RESEARCH
The most comprehensive study of Web crawlers and their
design is Cho's 2001 Stanford Ph.D. thesis [10]. In par-
ticular, Chapter 3 of the thesis, along with a subsequent
paper [11], represents the only study (we are aware of) that
attempts to map and explore a full design space for parallel
and distributed crawlers. Sharing many of our own
goals, the work addresses issues of communication band-
width, page quality and the division of work between local
crawlers. As part of the work, Cho proposes and evaluates
a hash-based approach similar to that discussed in the next
section and suggests an occasional exchange of link information
between nodes to improve the accuracy of locally
computed page quality estimates. Cho's thesis does not directly
address the issue of duplicate content detection in a
distributed context.
Apart from Cho's work, most designs for parallel crawlers
implement some form of global control, with the pending
queue and related data structures maintained centrally and
the actual download of pages taking place on worker nodes [4,
6]. One example is the WebFountain crawler [20], which is
based on a cluster-of-workstations architecture. The software
consists of three major components: ants, duplicate
detectors, and a single system controller. The controller
manages the system as a whole, maintaining global data
structures and monitoring system performance. Ants are
responsible for the actual retrieval of Web resources. The
controller assigns each site to an ant for crawling, which
retrieves all URLs from that site. The duplicate detectors
recognize identical or near-identical content. A major feature
of the WebFountain crawler is its maintenance of up-
to-date copies of page contents by identifying those pages
that change frequently and reloading them as needed.
The technical details of the crawlers used by commercial
Web search services are naturally regarded as trade secrets,
and there are few published details about their structure
and implementation. One significant exception is Merca-
tor, the crawler now used by the AltaVista search service
(replacing the older Scooter crawler). Heydon and Najork
[23, 24] describe in detail the problems associated with
creating a commercial-quality Web crawler and the solutions
used to address these problems in Mercator. Written
in Java, Mercator achieves both scalability and extensibility,
largely through careful software engineering.
Research on focused crawlers is closely connected to the
work described in the present paper. In essence, focused
crawlers attempt to order the pending queue so that pages
concerning a specific topic are placed earlier in the queue,
with the goal of creating a specialized collection about this
target topic. Chakrabarti et al. [9] introduce the notion of a
focused crawler and experimentally evaluate an implementation
of the concept using a set of fairly narrow topics such
as "cycling", "mutual funds", and "HIV/AIDS". In order
to determine the next URL to access, their implementation
uses both a hypertext classifier [8], which determines
the probability that a page is relevant to the topic, and a
distiller, which identifies hub pages pointing to many topic-related
pages [27].
Mukherjea [33] presents WTMS, a system for gathering
and analyzing collections of related Web pages, and describes
a focused crawler that forms a part of the system.
The WTMS crawler uses a vector space similarity measure
to compare downloaded pages with a target vector representing
the desired topic; URLs from pages with higher similarity
scores are placed earlier in the pending queue. Both
McCallum et al. [30], and Diligenti et al. [18] recognize that
the target pages of a focused crawl do not necessarily link
directly to one another and describe focused crawlers that
learn to identify apparently off-topic pages that reliably lead
to on-topic pages. Menczer et al. [31] consider the problem
of evaluating and comparing the effectiveness of the strategies
used by focused crawlers.
The intelligent crawler proposed by Aggarwal et al. [1]
generalizes the idea of a focused crawler, encompassing much
of the prior research in this area. Their work assumes the
existence of a predicate that determines membership in the
target group. Starting at an arbitrary point, the crawler
adaptively learns linkage structures in the Web that lead
to target pages by considering a combination of features,
including page content, patterns matched in the URL, and
the ratios of linking and sibling pages that also satisfy the
predicate.
3. HASH-BASED COLLABORATION
One approach to implementing a distributed crawler partitions
the Web by computing hash functions over both
URLs and page contents. When a local crawler extracts
a URL from a retrieved page, its representation is first normalized
by converting it to an absolute URL (if necessary)
and then translating any escape sequences into their ASCII
values. A hash function is then computed over the normalized
URL, which assigns it to one of the n local crawlers. If
the assigned local crawler is located on a remote node, the
URL is transferred to that node. Once the URL is present
on the correct node, it may be added to the node's pending
queue. Similarly, each local crawler computes a hash
function over the contents of each page that it downloads.
This page-content hash function assigns the contents to one
of the local crawlers, where duplicate detection and other
post-processing takes place.
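The normalization-and-hashing step described above can be sketched as follows. This is an illustrative reconstruction in Python; the function names and the choice of MD5 are our assumptions, not details from the paper:

```python
import hashlib
from urllib.parse import urljoin, unquote

def normalize_url(url, base=None):
    """Normalize a URL: resolve it against the page it was extracted
    from (if relative) and translate %-escape sequences into their
    character values, as described in the text."""
    if base is not None:
        url = urljoin(base, url)   # make the URL absolute
    return unquote(url)

def assign_node(url, n):
    """Map a normalized URL to one of n local crawlers via a uniform
    hash; MD5 here is only a convenient stand-in for 'a hash function'."""
    digest = hashlib.md5(url.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % n

# A URL extracted on one node may be assigned to any of the n crawlers:
url = normalize_url("/about%20us", base="http://example.com/index.html")
node = assign_node(url, 16)
```

If the selected node is remote, the URL would then be transferred there before being added to that node's pending queue.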
Under this hash-based scheme for collaboration, up to
three local crawlers may be involved in processing each URL
encountered: 1) the local crawler where it is encountered, 2)
the node where it is assigned by the URL hash function, and
3) the node where the contents are assigned by the page-content
hash function. Although many transfers between
local crawlers cannot be avoided, local crawlers should maintain
local tables of URLs and page contents that have been
previously seen, to prevent unnecessary transfers.
The URL hash function need not take the entire URL into
account. For example, a URL hash function based only on
the hostname ensures that all URLs from a given server are
assigned to the same local crawler for download [10], allowing
the load placed on the server to be better controlled.
Similarly, the page-content hash function may be based on normalized
content, allowing near-duplicates to be assigned to
the same node [5, 13].
The use of a page-content hash function may force considerable
data transfer between local crawlers. With n > 2,
a uniform hash function will map most retrieved pages to
remote nodes. A downloaded page that is not mapped to
the local node must be exported to its assigned node. In an
n-node distributed crawler, the expected ratio of exported
data to total data downloaded is (n - 1)/n. As data is exported
to remote nodes, data is imported from these nodes
into the local crawler. If all local crawlers download data
at the same rate, the expected ratio of imported data to
total data downloaded is also (n - 1)/n. In the limit, as
the number of nodes is increased, the amount of data transferred
between local crawlers is twice the amount of data
downloaded from the Web.
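As a quick check of these ratios, assuming n equally loaded local crawlers and a uniform page-content hash:

```python
def transfer_ratios(n):
    """Expected data transfer under hash-based partitioning: a
    downloaded page stays on its local node with probability 1/n,
    so the fraction exported (and, symmetrically, imported) is
    (n - 1) / n of all data downloaded."""
    exported = (n - 1) / n
    imported = (n - 1) / n
    return exported, imported, exported + imported

for n in (2, 4, 16):
    exp, imp, total = transfer_ratios(n)
    print(f"n={n:2d}: exported {exp:.3f}, imported {imp:.3f}, "
          f"total transfer = {total:.3f} x downloaded data")
```

As n grows, the total approaches 2, matching the observation that inter-crawler transfer tends toward twice the downloaded volume.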
A uniform hash function over the contents of a page will
map it to a local crawler independently of the locations of
pages that reference it. As the number of local crawlers
increases, the probability that a referenced page will be assigned
to the same node as its referencing page decreases
proportionally. This property may have a negative impact
on quality heuristics used to order the pending queue and, in
turn, on the e#ectiveness of the crawl. Page quality heuristics
that use backlink information may not have the information
locally available to accurately estimate ordering metrics.
A solution proposed by Cho [10] is to have local crawlers
periodically exchange backlink information. Cho demonstrates
that a relatively small number of exchanges substantially
improves local page quality estimates, but the
approach increases the complexity of communication between
local crawlers. To implement the approach, each local
crawler must either selectively query the others for backlink
information or transfer all backlink information to all other
local crawlers.
An important parameter of a collaborative crawler is the
probability p_l that a linked page and its linking page will be
assigned to the same node. In a hash-based collaborative
crawler using a uniform hash function, p_l = 1/n. One goal
of topic-oriented collaboration is to reduce the dependence
of p_l on the number of nodes n.
4. TOPIC-ORIENTED COLLABORATION
In the previous section we assumed that the hash value
for the contents of a specific page is independent of the hash
value for pages that reference it. In this section we outline
the design of a topic-oriented collaborative crawler that
uses a text classifier to assign pages to nodes. Given the
contents of a Web page, the classifier assigns the page to
one of n distinct subject categories. Each subject category
is associated with a local crawler. When the classifier assigns
a page to a remote node, the local crawler transfers it
to its assigned node for further processing. A topic-oriented
collaborative crawler may be viewed as a set of broad-topic
focused crawlers that partition the Web between them.
The breadth of the subject categories depends on the value
of n. For n in the range 10-20, two of the subject categories
might be BUSINESS and SPORTS. For larger n, the subject
categories will be narrower, such as INVESTING, FOOTBALL
and HOCKEY. The implementation of the classifier
used in X4 will be discussed in the next section.
A potential advantage of replacing a simple page-content
hash function with a text classifier is an increased likelihood
that a linked page will be mapped to the same node as its
linking page. For example, a link on a page classified as
SPORTS may be more likely to reference another SPORTS
page than a BUSINESS page. Many of the potential benefits
of topic-oriented collaborative crawling derive from this
assumption of topic locality, that pages tend to reference
pages on the same general topic [17]. One immediate benefit
of topic locality is a reduction in the bandwidth required
to transfer pages from one local crawler to another. Only
boundary pages, which are not assigned to the nodes that
retrieved them, will be transferred. In addition, page quality
metrics that depend on backlink information might be more
accurately estimated when pages are grouped by assigned
category, since more complete linkage information may be
present.
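The routing behaviour of a local crawler under topic-oriented collaboration might be sketched as follows. The class and the keyword-based classifier below are illustrative stand-ins, not X4's implementation:

```python
from collections import deque

class TopicCrawler:
    """Sketch of a local crawler's routing step: pages classified into
    this crawler's assigned category are kept locally; boundary pages
    are queued for transfer to the crawler responsible for their
    category. `classify` stands in for the pre-trained classifier."""

    def __init__(self, my_category, classify):
        self.my_category = my_category
        self.classify = classify
        self.local_pages = []    # pages retained in this subcollection
        self.outbound = {}       # category -> queue of boundary pages

    def process(self, page):
        category = self.classify(page)
        if category == self.my_category:
            self.local_pages.append(page)
        else:
            # boundary page: queue for transfer to its assigned node
            self.outbound.setdefault(category, deque()).append(page)

    def receive(self, page):
        """A boundary page arriving from a remote crawler is treated
        as if it had been downloaded locally."""
        self.local_pages.append(page)

# Toy keyword classifier, for illustration only:
classify = lambda p: "SPORTS" if "hockey" in p else "BUSINESS"
crawler = TopicCrawler("SPORTS", classify)
crawler.process("hockey scores tonight")      # kept locally
crawler.process("quarterly earnings report")  # queued for BUSINESS
```

Under topic locality, most processed pages take the first branch, so the outbound queues (and hence inter-crawler bandwidth) stay small.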
As an additional advantage of topic-oriented collabora-
tion, the output of each local crawler can be meaningfully
regarded as an independent subcollection. Information brokers
[21, 22] that weight the output of different search systems
according to their expected performance on different
query types may be able to take advantage of the topic focus
of each subcollection.
The topic-oriented approach to collaborative crawling has
the disadvantage that the same URL may be independently
encountered and downloaded by multiple crawlers. In our
approach, URLs are not hashed and are always retained on
the nodes where they are encountered. If a URL is encountered
by two or more nodes, each node will independently
download the page, transferring it to a common node after
categorization. While this property may appear to represent
a serious limitation of topic-oriented collaborative crawling,
we observe that a page will only be encountered by multiple
nodes if pages from multiple categories reference it, a situation
that the topic locality assumption tends to minimize.
The benefits of topic-oriented collaboration depend on the
accuracy of the classifier and on the actual intra- and inter-
category linkage patterns found in the Web. An experimental
evaluation of these issues using the X4 crawler is reported
in section 6. To provide context for this evaluation, we first
describe and evaluate the simple classifier used in X4.
5. PAGE CATEGORIZATION
Text categorization is the grouping of text documents into
a set of predefined categories or topics. Often, categorization
is based on probabilities generated by training over a set
of pre-classified documents containing examples from each
topic. Document features, such as words, phrases and structural
relationships, are extracted from the pre-classified documents
and used to train the classifier. Given an unclassified
document, the trained classifier extracts features from the
document and assigns the highest-probability category to it.
Text categorization is a heavily studied subject. A variety
of machine learning and information retrieval techniques
have been applied to the problem, including Rocchio
feedback [25], support vector machines [26], expectation-maximization
[34], and boosting [35]. Yang and Liu [37]
provide a recent comparison of five widely used methods.
Many of these techniques have been applied to categorize
Web pages and in some cases have been extended to exploit
the unique properties of Web data. Much of this work
has been in the context of focused crawlers, discussed in section
2. Chakrabarti et al. [8] take advantage of the Web's
link structure to improve Web page categorization. Using an
iterative relaxation labeling approach, pages are classified by
using the categories assigned to neighboring pages as part
of the feature set. Dumais and Chen [19] take advantage of
the large collections of hierarchically organized Web pages
provided by organizations such as Yahoo! and LookSmart
to develop a hierarchical categorization technique based on
support vector machines.
To select a page categorization technique for use in X4,
several attributes of the available techniques were consid-
ered. First, categorization should be based only on document
contents. If the contents of neighboring documents
are considered by the classifier, a page's neighborhood would
have to be retrieved by each crawler encountering it. Retrieving
the neighborhood of a page before it is classified is
likely to increase the overlap between crawlers, something
that X4 seeks to avoid. Second, after training is completed
the classifier must remain static and cannot learn from new
data, since the category assigned to a page's contents by
every local crawler must be the same. Third, the classifier
should be efficient enough to categorize pages at the rate
they are downloaded. Crawling a major portion of the Web
requires a minimum download rate of several Mbps, and
the classifier should be able to match this rate without requiring
more resources than the crawler itself. Finally, the
accuracy of the classifier, the percent of pages correctly clas-
sified, should be as high as possible. Moreover, to minimize
the number of boundary pages encountered, the probability
that a linked page is classified in the same category as its
linking page, the topic locality, should also be as high as
possible.
Of the many available techniques, we choose three for further
study on the basis of their simplicity and potential for
efficient implementation: a basic Naive Bayes classifier [32],
a classifier based on Rocchio relevance feedback [25], and a
probabilistic classifier due to Lewis [29].
Data from the Open Directory Project (ODP) was used
to train and test the classifiers. The ODP is a self-regulated
organization maintained by volunteer experts who categorize
URLs into a hierarchical class directory, similar to the
directories provided by Yahoo! and others. At the top level,
there are 17 categories. Volunteers examine the contents of
each URL to determine its category. Each level in the hierarchy
contains a list of relevant external links and a list of
links to subcategories.
A snapshot of 673MB of data was obtained from the ODP.
For the purpose of our experiments, the entire URL directory
tree was collapsed into the top categories. Two categories
were given special treatment.
Figure 1: ODP categories used in the topic classifier. [Partial
list as extracted: ADULT, ARTS, BUSINESS, COMPUTERS, GAMES,
HEALTH, RECREATION, REFERENCE, SCIENCE, WORLD.]
The REGIONAL category, which encompasses pages specific
to various geographical
areas, was eliminated entirely, since we believe it represents
not so much a separate topic as an alternative organi-
zation. Pages in the WORLD category, which are written
in languages other than English, were also ignored during
the initial classifier selection phase. In the final X4 classifier
non-English pages are handled by a separate language iden-
tifier. The 16 top-level categories (including WORLD but
excluding REGIONAL) are listed in figure 1. Ultimately,
these became the target categories used by the X4 classifier.
For categorization, pages are preprocessed by first removing
tags, scripts, and numerical information. The remaining
text is tokenized into terms based on whitespace and punc-
tuation, and the terms are converted to lower case. The resulting
terms are treated as the document features required
by the classifiers.
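A minimal sketch of this preprocessing step, together with a multinomial Naive Bayes classifier of the kind selected for X4. The add-one smoothing and uniform prior are our assumptions, and the code is an illustration rather than the system's actual classifier:

```python
import math, re
from collections import Counter

TAG = re.compile(r"<[^>]+>")

def preprocess(html):
    """Remove tags and numeric tokens, then tokenize on whitespace
    and punctuation and lower-case the terms (as in section 5)."""
    text = TAG.sub(" ", html)
    return [t.lower() for t in re.findall(r"[A-Za-z]+", text)]

class NaiveBayes:
    """Minimal multinomial Naive Bayes over term features."""

    def fit(self, docs, labels):
        self.counts, self.totals, self.vocab = {}, {}, set()
        for terms, label in zip(docs, labels):
            c = self.counts.setdefault(label, Counter())
            c.update(terms)
            self.vocab.update(terms)
        for label, c in self.counts.items():
            self.totals[label] = sum(c.values())

    def predict(self, terms):
        V = len(self.vocab)
        best, best_score = None, -math.inf
        for label, c in self.counts.items():
            # log P(terms | label) with add-one smoothing; uniform prior
            score = sum(math.log((c[t] + 1) / (self.totals[label] + V))
                        for t in terms)
            if score > best_score:
                best, best_score = label, score
        return best

nb = NaiveBayes()
nb.fit([preprocess("<p>stock market shares</p>"),
        preprocess("<p>hockey playoff goal</p>")],
       ["BUSINESS", "SPORTS"])
```

Because the trained model is static, every local crawler running the same model assigns the same category to the same page contents, which is the determinism requirement stated above.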
From each ODP category, 500 documents were randomly
selected for training and 500 for testing. Only HTML documents
containing more than 50 words after preprocessing
were considered as candidates for selection. The performance
of the classifiers is shown in figure 2. Overall, the
Naive Bayes classifier achieved an accuracy of 60.7%, the
Lewis classifier 58.2%, and the Rocchio-TFIDF classifier
43.7%.
For topic-oriented crawling a more important statistic is
the topic locality, which we measure by the proportion of
linked pages that are classified into the same category as
their linking page. To test topic locality, 100 Web pages
from each category were randomly selected from the training
data. Five random links from each page (or all links if
the page did not have five) were retrieved and classified. The
results are shown in figure 3. Overall, the Naive Bayes classifier
achieved a topic locality of 62.4%, the Lewis classifier
62.3%, and the Rocchio-TFIDF classifier 48.9%.
To test the speed of the classifiers, 30,000 Web pages
with an average length of 7,738 bytes were fed into each.
The Naive Bayes classifier achieved a throughput of 19.50
pages/second, the Lewis classifier 2.89 pages/second, and
the Rocchio-TFIDF classifier 1.61 pages/second. Although
the Naive Bayes and Lewis classifiers perform comparably in
terms of categorization accuracy, outperforming the Rocchio-
TFIDF classifier, the six times greater speed of the Naive
Bayes classifier recommended its use in the X4 crawler.
Before topic categorization, the natural language in which
a page is written must be determined. Language-specific
classifiers may only be used after language identification
to partition the pages into two or more language-specific
topics. The division of nodes between natural languages
and language-specific topics depends on the number of local
crawlers required and mix of Web resources targeted by the
crawl. For the purpose of the experiments reported in this
paper, we group all non-English pages into a single category
WORLD, following the ODP organization.
A number of simple statistical language identification techniques
have shown good performance [3, 7, 15]. For X4, we
identified English-language pages by the proportion of common
English words appearing in them. To train and test
the identifier, we selected 500 random pages from each cat-
egory, including WORLD. The resulting language identifier
has an accuracy of 96.1% and 90.2% on identifying English
and non-English pages respectively.
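A sketch of such a common-word identifier; the word list and the 5% threshold below are illustrative assumptions, not the values used in X4:

```python
import re

# A small stopword list stands in for the "common English words"
# used by the identifier (the actual list is not given in the paper).
COMMON_ENGLISH = {"the", "of", "and", "to", "a", "in", "is", "it",
                  "that", "for", "on", "with", "as", "are", "this"}

def is_english(text, threshold=0.05):
    """Classify a page as English if the proportion of common English
    words among its tokens exceeds a threshold."""
    tokens = [t.lower() for t in re.findall(r"[A-Za-z]+", text)]
    if not tokens:
        return False
    common = sum(1 for t in tokens if t in COMMON_ENGLISH)
    return common / len(tokens) > threshold

print(is_english("The crawler downloads the pages of the Web"))  # True
print(is_english("Der Crawler besucht zyklisch jede Webseite"))  # False
```

A real deployment would tune the word list and threshold on held-out pages, as the 96.1%/90.2% accuracies above were measured on ODP data.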
6. EXPERIMENTAL EVALUATION
The X4 crawler is implemented as an extension to the
MultiText Web Crawler, which was originally developed as
part of the MultiText project at the University of Waterloo
to gather data for Web-based question answering [14].
The crawler has since been used by a number of external
groups. Sharing design goals with Mercator [23], the Mul-
tiText crawler is designed to be highly modular and config-
urable, and has been used to generate collections over 1TB
in size. On an ordinary PC, the crawler can maintain down-load
rates of millions of pages a day, including pre- and post-processing
of the pages. The core of the crawler provides a
dataflow scripting language that coordinates the activities
of independent software components, which perform the actual
operations of the crawl. Each individual component
is responsible for a specific crawling task such as address
resolution, page download, URL extraction, URL filtering,
and duplicate content handling. The core of the crawler also
provides transaction support, allowing crawler actions to be
rolled back and restarted after a system failure.
X4 was created by adding two new components to the
MultiText crawler. One component is the topic classifier;
the other component is a data transfer utility. Data to
be transferred to remote nodes is queued locally by the topic
classifier. Periodically (every 30 minutes) the data transfer
utility polls remote nodes and downloads any queued
data; scp is used to perform the actual transfer. Apart
from changes to our standard crawl script to add calls to
the classifier and data transfer utility, no other changes to
the MultiText Crawler were required.
For our experimental evaluation, the X4 pending queue
was maintained in breadth-first order with one exception.
If a breadth-first ordering would place an excessive load on
a single host, defined as more than 0.2% of total crawler
activity over a time period of roughly one hour, URLs associated
with that host were removed and requeued until the
anticipated load dropped to an acceptable level.
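The host-load exception can be sketched as follows. The 0.2% threshold comes from the text; the data structures and the per-window bookkeeping are our assumptions:

```python
from collections import Counter, deque

class PendingQueue:
    """Breadth-first pending queue with the host-load exception: if a
    host would exceed 0.2% of crawler activity in the current window,
    its URLs are set aside and requeued once the load drops."""

    MAX_HOST_SHARE = 0.002  # 0.2% of total activity (from the paper)

    def __init__(self):
        self.queue = deque()     # (host, url) in breadth-first order
        self.deferred = deque()  # URLs set aside for overloaded hosts
        self.window = Counter()  # per-host activity in current window
        self.total = 0

    def next_url(self):
        for _ in range(len(self.queue)):
            host, url = self.queue.popleft()
            share = self.window[host] / self.total if self.total else 0.0
            if share < self.MAX_HOST_SHARE:
                self.window[host] += 1
                self.total += 1
                return url
            self.deferred.append((host, url))  # requeue later
        return None

    def new_window(self):
        """Called roughly hourly: reset counters, restore deferred URLs."""
        self.window.clear()
        self.total = 0
        self.queue.extend(self.deferred)
        self.deferred.clear()
```

With realistic activity totals the threshold only trips for genuinely dominant hosts; the tiny totals in a fresh window make the sketch's behaviour degenerate, which a production queue would smooth over.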
For our experimental evaluation of X4, we used the Naive
Bayes topic classifier trained on ODP data, described in the
previous section. After preprocessing, each retrieved page
was first checked to determine if it was unclassifiable, which
we defined as containing fewer than 50 terms after the removal
of tags and scripts during preprocessing. We arbitrarily
assigned unclassifiable pages to category #0 (ADULT).
Unless it was unclassifiable, each page then had the language
identifier applied to it. If the page was not assigned to the
WORLD category by the language identifier, the topic classifier
was applied to assign the page to its final category.
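Putting the pieces together, the per-page decision procedure described above might look like this. The helper functions are simplified stand-ins for the components described in section 5, not X4's code:

```python
import re

def preprocess(html):
    # Strip tags, tokenize on non-letters, lower-case (as in section 5).
    text = re.sub(r"<[^>]+>", " ", html)
    return [t.lower() for t in re.findall(r"[A-Za-z]+", text)]

COMMON = {"the", "of", "and", "to", "in", "is", "a"}  # illustrative list

def is_english(terms):
    return bool(terms) and sum(t in COMMON for t in terms) / len(terms) > 0.05

def categorize(page_html, topic_classifier):
    """Per-page pipeline from the experiments: unclassifiable pages
    (fewer than 50 terms) go to ADULT (category #0), non-English pages
    to WORLD, and everything else to the topic classifier."""
    terms = preprocess(page_html)
    if len(terms) < 50:
        return "ADULT"   # arbitrary bucket for unclassifiable pages
    if not is_english(terms):
        return "WORLD"
    return topic_classifier(terms)

# Example: a short page is unclassifiable regardless of content.
print(categorize("<p>hello world</p>", lambda t: "NEWS"))  # ADULT
```

The returned category then determines which local crawler retains the page's contents.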
In our experiments, a local crawler was associated with
each of the 16 categories of figure 1. These local crawlers
were mapped onto six nodes of a workstation cluster by assigning
three local crawlers to five of the nodes and a single
category (WORLD) to one of the nodes.
Figure 2: Accuracy of classifiers (ODP data). [Bar chart comparing
the Naive Bayes, Lewis, and Rocchio-TFIDF classifiers across the
categories ADU through SPT.]
Figure 3: Topic locality of classifiers (ODP data). [Bar chart
comparing the same three classifiers across the categories ADU
through SPT.]
Although multiple
local crawlers were assigned to the same node for testing
purposes, these local crawlers acted in all respects as if they
were executing on distinct nodes.
We generated a 137GB experimental crawl in June 2001.
Due to limitations on our available bandwidth to the general
Internet, the crawlers as a group were limited to a down-load
rate of 256KB/second, with each local crawler limited
to 16KB/second. Due to the difficulty of controlling
the download rate at such low speeds, the actual download
rates varied from 11 KB/second to 16 KB/second. In total,
8.9 million pages were downloaded and classified. Figure 4
plots the distribution of the data across the local crawlers,
showing both the volume of downloaded data retained locally
after categorization and the volume of data imported
from other local crawlers.
The topic locality achieved by each local crawler is plotted
in figure 5. Generally the topic locality achieved during the
crawl was slightly lower than that seen on the ODP data
(figure 3), but the relative topic locality achieved for each
topic was roughly the same. The main exception is local
crawler #0 (ADULT), where the topic locality was affected
by our arbitrary decision to assign unclassifiable pages to its
associated category. For comparison, the equivalent value
for hash-based collaboration (p_l) is 1/16, or 6.25%.
With hash-based collaborative crawling, URLs are hashed
and exchanged, mapping each URL to a unique local crawler
and preventing multiple crawlers from downloading the same
URL. This is not the case with topic-oriented collaborative
crawling. Since only page contents are exchanged during
topic-oriented collaboration, multiple crawlers will down-load
a URL when that URL is referenced by pages from more
than one category. Figure 6 shows the log
of the number of pages retrieved by exactly i local crawlers.
The number of pages decreases rapidly as i increases. Only
a small number of pages were retrieved by all local crawlers, generally
the home pages of major organizations or products, such as
www.nytimes.com and www.microsoft.com/ie.
In order to examine the properties of the subcollections
generated by the local crawlers, we ordered the pages in each
subcollection using the pagerank algorithm. To
permit a direct comparison with hash-based collaboration,
we redistributed the pages into 16 di#erent subcollections
using a page-content hash function and computed the pagerank
ordering of each. Finally, we gathered all of the pages
into a single collection and computed a global pagerank ordering.
In figure 7 we compare the pagerank ordering computed
over each subcollection with the global pagerank ordering
computed over the combined collection.
Figure 4: Distribution of crawled data. [Per-category chart of
the volume of data retained locally and imported from other
local crawlers.]
For each subcollection, figure 7 reports the Kendall τ rank
correlation between the pagerank ordering computed over the
subcollection and the pagerank ordering of the same pages computed
over the global collection. A higher correlation coefficient
indicates a more accurate local estimate of the global
pagerank ordering. As might be expected from the use of a
uniform hash function, the coefficients for the hash-based
subcollections are nearly identical. In all cases, the coefficients
for the topic-oriented subcollections are greater than the
coefficients for the hash-based subcollections.
7. CONCLUSIONS AND FUTURE WORK
In this paper we propose the concept of topic-oriented collaborative
crawling as a method for implementing a general-purpose
distributed Web crawler and demonstrate its feasibility
through an implementation and experimental evalu-
ation. In contrast with the URL- and host-based hashing
approach evaluated by Cho [10,11], the approach allows duplicate
page content to be recognized and generates sub-collections
of pages that are related by topic, rather than
location.
X4 could be extended and improved in several ways. In
the current implementation, pending queues are maintained
in breadth-first order. Instead, a focused crawling technique
might be used to order the pending queues, placing URLs
that are more likely to reference on-topic resources closer
to the front of the queue. Such a technique could substantially
increase the proportion of on-topic pages retrieved and
decrease the number of transfers between local crawlers.
X4 transfers the contents of each boundary page from the
node that retrieves it to the node where the classifier assigns
it. This design decision was taken as a consequence of our
desire to map the contents of each page uniquely to a local
crawler in order to facilitate duplicate detection and other
post-processing. An alternative design would be to transfer
only the URLs of links appearing on boundary pages. The
contents of boundary pages would not be transferred. A disadvantage
of this approach is that the contents of each URL
are not uniquely associated with a local crawler. Every local
crawler that encounters the URL as a boundary page will
retain a copy of its contents. An advantage of the approach
is that the communication overhead between local crawlers
is further reduced. The only communication between local
crawlers is the transfer of URLs extracted from boundary
pages. The retention of boundary page contents on multiple
nodes may have advantages of its own. Topic locality
implies that these pages are likely to be related to the topics
of the local crawlers where they are encountered, and
the high backlink count associated with a boundary page
encountered by many crawlers implies high quality.
In the current work, our primary interest was not Web
page categorization itself, and other text categorization methods
could be explored for use in X4. Techniques such as feature
selection [36] might be used to improve both efficiency
and accuracy. Problems associated with incremental crawling
and dynamically changing content were not considered
and should be examined by future work. Our evaluation
is based on a relatively small crawl (137GB), and a more
thorough evaluation based on a multi-terabyte crawl might
reveal issues that are not obvious from our current experi-
ments. Finally, X4 should be tested for its ability to scale
to a larger number of nodes, with a correspondingly larger
number of categories.
8. REFERENCES
Intelligent crawling on the World Wide Web with arbitrary predicates.
"authority"
Language trees and zipping.
The anatomy of a large-scale hypertextual Web search engine
Crawling towards Eternity.
Enhanced hypertext categorization using hyperlinks.
Martin van den Burg
Crawling the Web: Discovery and Maintenance of Large-Scale Web Data
Parallel crawling.
Finding replicated Web collections.
Exploiting redundancy in question answering.
An autonomous
Learning to construct knowledge bases from the World Wide Web.
Topical locality on the Web.
Focused crawling using context graphs.
Hierarchical classification of Web content.
An adaptive model of optimizing performance of an incremental Web crawler.
Intelligent fusion from multiple
Methods for information server selection.
Mercator: A scalable
Performance limitations of the Java Core libraries.
A probabilistic analysis of the Rocchio algorithm with TFIDF for text classification.
A statistical learning model of text classification for support vector machines.
Authoritative sources in a hyperlinked environment.
Scaling question answering to the Web.
An evaluation of phrasal and clustered representations on a text categorization task.
Building domain-specific search engines with machine learning techniques
Evaluating topic-driven Web crawlers
Machine Learning.
WTMS: A system for collecting and analyzing topic-specific Web information
Text classification from labeled and unlabeled documents using EM.
Boosting and Rocchio applied to text filtering.
Fast categorisation of large document collections.
--CTR
Antonio Badia , Tulay Muezzinoglu , Olfa Nasraoui, Focused crawling: experiences in a real world project, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Jos Exposto , Joaquim Macedo , Antnio Pina , Albano Alves , Jos Rufino, Geographical partition for distributed web crawling, Proceedings of the 2005 workshop on Geographic information retrieval, November 04-04, 2005, Bremen, Germany
Weizheng Gao , Hyun Chul Lee , Yingbo Miao, Geographically focused collaborative crawling, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland | distributed systems;text categorization;web crawling |
Query Processing of Streamed XML Data

Abstract: We are addressing the efficient processing of continuous XML streams, in which the server broadcasts XML data to multiple clients concurrently through a multicast data stream, while each client is fully responsible for processing the stream. In our framework, a server may disseminate XML fragments from multiple documents in the same stream, can repeat or replace fragments, and can introduce new fragments or delete invalid ones. A client uses a light-weight database based on our proposed XML algebra to cache stream data and to evaluate XML queries against these data. The synchronization between clients and servers is achieved through annotations and punctuations transmitted along with the data streams. We are presenting a framework for processing XML queries in XQuery form over continuous XML streams. Our framework is based on a novel XML algebra and a new algebraic optimization framework based on query decorrelation, which is essential for non-blocking stream processing.

1 Introduction
1.1 Motivation
XML [31] has emerged as the leading textual language for representing and exchanging data on the web.
Even though HTML is still the dominant format for publishing documents on the web, XML has become the
prevalent exchange format for business-to-business transactions and for enterprise Intranets. It is expected
that in the near future the Internet will be populated with a vast number of web-accessible XML files. One
of the reasons for its popularity is that, by supporting simple nested structures of tagged elements, the XML
format is able to represent both the structure and the content of complex data very effectively. To take
advantage of the structure of XML documents, new query languages had to be invented that go beyond the
simple keyword-based boolean formulas supported by current web search engines.
There are several recent proposals for XML query languages [17], but none has been adopted as a
standard yet. Nevertheless, there is a recent working draft of an XML query language released by the
World-Wide Web Consortium (W3C), called XQuery [7], which may become a standard in the near future.
The basic features of XQuery are illustrated by the following query (taken from a W3C web page [31]):
for $b in document("http://www.bn.com")/bib/book
where $b/publisher = "Addison-Wesley" and $b/@year > 1991
return <book year={ $b/@year }> { $b//title } </book>
which lists books published by Addison-Wesley after 1991, including their year and title. The document(URL)
expression returns an entry point to the XML data contained in the XML document located at the specified
URL address. XQuery uses path expressions to navigate through XML data. For example, the tag selection,
$b/publisher, returns all the children of $b with tag name publisher, while the wildcard selection, $b//title,
returns all the descendants of $b (possibly including $b itself) with tag name title. Influenced by modern
database query languages, such as the OQL language of the ODMG standard [6], XQuery allows complex
queries to be composed from simpler ones and supports many advanced features for navigating and restructuring
XML data, such as XML data construction, aggregations, universal and existential quantification,
and sorting.
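The two kinds of path selection can be sketched over a toy element tree; the Element class and helper functions below are our own illustration, not part of XQuery:

```python
class Element:
    """A minimal XML element: a tag name plus a list of child elements."""
    def __init__(self, tag, children=()):
        self.tag = tag
        self.children = list(children)

def tag_selection(e, tag):
    """e/tag: the children of e with the given tag name."""
    return [c for c in e.children if c.tag == tag]

def wildcard_selection(e, tag):
    """e//tag: the descendants of e (including e itself) with the tag name."""
    found = [e] if e.tag == tag else []
    for c in e.children:
        found.extend(wildcard_selection(c, tag))
    return found

# <book><title/><author><title/></author></book>
book = Element("book", [Element("title"),
                        Element("author", [Element("title")])])
print(len(tag_selection(book, "title")))       # → 1 (direct child only)
print(len(wildcard_selection(book, "title")))  # → 2 (all descendants)
```

The recursion in wildcard_selection is what distinguishes "//" from "/": it visits the whole subtree in document order rather than one level of children.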
There have been many commercial products recently that take advantage of the already established
database management technology for storing and retrieving XML data, although none of them fully supports
XQuery yet. In fact, nearly all relational database vendors now provide some functionality for storing and
handling XML data in their systems. Most of these systems support automatic insertion of canonically-
structured XML data into tables, rather than utilizing the XML schemas for generating application-specific
database schemas. They also provide methods for exporting database data into XML form as well as querying
and transforming these forms into HTML.
The effective processing of XML data requires some of the storage and access functionality provided
by modern database systems, such as query processing and optimization, concurrency control, integrity,
recovery, distribution, and security. Unlike conventional databases, XML data may be stored in various
forms, such as in their native form (as text documents), in a semi-structured database conforming to a
standard schema, or in an application-specific database that exports its data in XML form, and may be
disseminated from servers to clients in various ways, such as by broadcasting data to multiple clients as
XML streams or making them available upon request.
Based on previous experience with traditional databases, queries can be optimized more effectively if
they are first translated into a suitable internal form with clear semantics, such as an algebra or calculus. If
XML data are stored in an application-specific database schema, then XML queries over these data can be
translated into the native query language supported by the database system, which in turn can be compiled
into its native algebraic form. A major research issue related to this approach, which has been addressed
in our previous work [14], is the automatic generation of the database schema as well as the automatic
translation of XML queries into database queries over the generated schema. On the other hand, if XML
data are stored as semi-structured data or processed on the fly as XML streams, then a new algebra is needed
that captures the heterogeneity and the irregularities intrinsic to XML data [4]. In fact, there are already
algebras for semi-structured data, including an algebra based on structural recursion [4], YATL [10, 8],
SAL [3], x-algebra [16], and x-scan [22]. None of these algebras have been used for the emerging XML query
languages yet, and there is little work on algebraic query optimization based on these algebras.
This paper is focused on the efficient processing of continuous XML streams. Most current web servers
adopt the "pull-based" processing technology, in which a client submits a query to a server, the server
evaluates the query against a local database, and sends the result back to the client. An alternative approach
is "push-based" processing, in which the server broadcasts parts or the entire local database to multiple
clients through a multicast data stream, usually with no acknowledgment or handshaking, while each client is fully
responsible for processing the stream [1]. Pushing data to multiple clients is highly desirable when a large
number of similar queries are submitted to a server and the query results are large [2], such as requesting a
region from a geographical database. Furthermore, by distributing processing to clients, we reduce the server
workload while increasing its availability. On the other hand, a stream client does not acknowledge correct
receipt of the transmitted data, which means that, in case of a noise burst, it cannot request the server to
resubmit a data packet to correct the errors. Stream data may be infinite and transmitted in a continuous
stream, such as measurement or sensor data transmitted by a real-time monitoring system continuously.
We would like to support the combination of both processing technologies, pull and push, in the same
framework. The unit of transmission in an XML stream is an XML fragment, which corresponds to one
XML element from the transmitted document. In our framework, a server may prune fragments from the
document XML tree, called fillers, and replace them by holes. A hole is simply a reference to a filler (through
a unique ID). Fillers, in turn, may be pruned into more fillers, which are replaced by holes, and so on. The
result is a sequence of small fragments that can be assembled at the client side by filling holes with fillers.
The server may choose to disseminate XML fragments from multiple documents in the same stream, can
repeat some fragments when they are critical or in high demand, can replace them when they change by
sending delta changes, and can introduce new fragments or delete invalid ones. The client is responsible
for caching and reconstructing parts of the original XML data in its limited memory (if necessary) and for
evaluating XML queries against these data. Like relational database data, stream data can be processed
by relational operators. Unlike stored database data though, which may provide fast access paths, the
stream content can only be accessed sequentially. Nevertheless, we would like to utilize the same database
technology used in pulling data from a server for pushing data to clients and for processing these data at
the client side with a light-weight database management system. For example, a server may broadcast stock
prices and a client may evaluate a continuous query on a wireless, mobile device that checks and warns
(by activating a trigger) on rapid changes in selected stock prices within a time period. Since stock prices
may change through time, the client must be able to cache (in main memory or on secondary storage) old
prices. Another example is processing MPEG-7 [24] streams to extract and query video context information.
MPEG-7 is a standard developed by MPEG (Moving Picture Experts Group) and combines multimedia data
of various forms, including content description for the representation of perceivable information.
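The filler-and-hole mechanism described above can be simulated with a table of fillers keyed by hole ID; the fragment encoding below is a hypothetical stand-in for the actual stream format:

```python
# Each filler is a list of parts: literal text or ("hole", id) references.
fillers = {
    0: ["<bib>", ("hole", 1), ("hole", 2), "</bib>"],  # root fragment
    1: ["<book>XML Basics</book>"],
    2: ["<book>", ("hole", 3), "</book>"],
    3: ["<title>Streams</title>"],
}

def assemble(hid):
    """Reconstruct a fragment by recursively filling holes with fillers."""
    parts = []
    for item in fillers[hid]:
        if isinstance(item, tuple):        # a hole: splice in its filler
            parts.append(assemble(item[1]))
        else:                              # literal fragment text
            parts.append(item)
    return "".join(parts)

print(assemble(0))
# → <bib><book>XML Basics</book><book><title>Streams</title></book></bib>
```

In the actual framework a client would rarely assemble the whole document like this; fragments are streamed through operators individually and only suspended fragments wait for their fillers.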
Since XML fragments are processed at the client side as they become available, the query processing
system resembles a main-memory database, rather than a traditional, secondary-storage-based database. Although
a main-memory database has a different back-end from a secondary-storage database, main-memory
queries can be mapped to similar algebraic forms and optimized in a similar way as traditional queries. One
difficult problem of processing algebraic operators against continuous streams is the presence of blocking
operators, such as sorting and group-by, which require the processing of the entire stream before generating
the first result. Processing these operators effectively requires that the server pass some hints, called punctuations
[29], along with the data to indicate properties about the data. One example of a punctuation is
the indication that all prices of stocks starting with 'A' have already been transmitted. This is very valuable
information to a client that performs a group-by over the stock names, because it can complete and flush
from memory all the groups that correspond to these stock names. Extending a query processor to make an
effective use of such punctuations has not yet been investigated by others.
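The stock-price scenario can be sketched as follows; the callback names and stream encoding are our own illustrative assumptions:

```python
from collections import defaultdict

groups = defaultdict(list)   # stock name -> prices received so far
emitted = {}                 # completed groups, flushed from the hash table

def on_data(name, price):
    groups[name].append(price)

def on_punctuation(done):
    """done(name) is True for names the server will never send again."""
    for name in [n for n in groups if done(n)]:
        emitted[name] = sum(groups[name]) / len(groups[name])  # final average
        del groups[name]     # the group is complete: flush it from memory

on_data("AAPL", 10.0); on_data("ACME", 20.0); on_data("IBM", 30.0)
on_data("AAPL", 14.0)
# Punctuation: all stocks whose names start with 'A' have been transmitted.
on_punctuation(lambda n: n.startswith("A"))
print(sorted(emitted), sorted(groups))  # → ['AAPL', 'ACME'] ['IBM']
```

Without the punctuation, a group-by over an unbounded stream could never emit any group, since a later record might still extend it.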
1.2 Our Approach
This paper addresses the efficient processing of continuous XML streams, in which a server broadcasts XML
data to multiple clients concurrently. In our framework, a server may disseminate XML fragments from
multiple documents in the same stream, can repeat or replace fragments, and can introduce new fragments
or delete invalid ones. In our approach, a client uses a light-weight, in-memory database to cache stream
data and physical algorithms based on an XML algebra to evaluate XML queries against these data. The
synchronization between clients and servers is achieved through annotations and punctuations transmitted
along with the data streams. One piece of information needed by a client is the structure of the transmitted
XML document, called the Tag Structure. The server periodically disseminates this Tag Structure as a
special annotation in XML form. For example, the following Tag Structure:
<stream:structure>
  <tag name="bib" id="1">
    <tag name="vendor" id="2" attributes="id">
      <tag name="name" id="3"/>
      <tag name="email" id="4"/>
      <tag name="book" id="5" attributes="ISBN related_to">
        <tag name="title" id="6"/>
        <tag name="publisher" id="7"/>
        <tag name="year" id="8"/>
        <tag name="price" id="9"/>
        <tag name="author" id="10">
          <tag name="firstname" id="11"/>
          <tag name="lastname" id="12"/>
        </tag>
      </tag>
    </tag>
  </tag>
</stream:structure>
corresponds to the following partial DTD:
<!ELEMENT bib (vendor*)>
<!ELEMENT vendor (name, email, book*)>
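A client that has parsed the Tag Structure can translate ID-based paths (as used by the punctuations introduced later) into tag-name XPaths. A sketch, assuming the Tag Structure has already been parsed into an id-to-(name, parent) table; the function name is ours:

```python
# Parsed Tag Structure: tsid -> (tag name, parent tsid or None).
tag_structure = {
    1: ("bib", None), 2: ("vendor", 1), 3: ("name", 2), 4: ("email", 2),
    5: ("book", 2), 6: ("title", 5), 7: ("publisher", 5),
}

def translate(tsid_path):
    """Map a tsid path such as '/1/2[3]/5/@ISBN' to a tag-name XPath."""
    steps = []
    for step in tsid_path.strip("/").split("/"):
        if step.startswith("@"):              # attribute steps pass through
            steps.append(step)
            continue
        tsid, _, index = step.partition("[")  # split an optional position
        name = tag_structure[int(tsid)][0]
        steps.append(name + ("[" + index if index else ""))
    return "/" + "/".join(steps)

print(translate("/1/2[3]/5/@ISBN"))  # → /bib/vendor[3]/book/@ISBN
```

Transmitting small integer IDs instead of tag names keeps punctuations and fragment headers compact on the wire.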
Query processing is performed at the client side with the help of a light-weight query optimizer. A
major component of a query optimizer is an effective algebra, which serves as an intermediate form
in the translation of abstract queries to concrete evaluation algorithms. We are presenting a new XML
algebra and a query optimization framework based on query normalization and query unnesting (also known
as query decorrelation). There are many proposals on query optimization that are focused on unnesting
nested queries [23, 18, 26, 11, 12, 9, 28]. Nested queries appear more often in XML queries than in relational
queries, because most XML query languages, including XQuery, allow complex expressions at any point in
a query. Current commercial database systems typically evaluate nested queries in a nested-loop fashion,
which is unacceptable for on-line stream processing and does not leave many opportunities for optimization.
Most proposed unnesting techniques require the use of outer-joins, to prevent loss of data, and grouping, to
accumulate the data and to remove the null values introduced by the outer-joins. If considered in isolation,
query unnesting itself does not result in performance improvement. Instead, it makes possible other
optimizations, which otherwise would not be possible. More specifically, without unnesting, the only choice for
evaluating nested queries is a naive nested-loop method: for each step of the outer query, all the steps of the
inner query need to be executed. Query unnesting promotes all the operators of the inner query into the
operators of the outer query. This operator mixing allows other optimization techniques to take place, such
as the rearrangement of operators to minimize cost and the free movement of selection predicates between
inner and outer operators, which enables operators to be more selective. Our XML query unnesting method
is influenced by the query unnesting method for OODB queries presented in our earlier work [15].
If the data stream were finite and its size were smaller than the client buffer size, then the obvious
way to answer XQueries against an XML stream would be to reconstruct the entire XML document in memory
and then evaluate the query against the cached document. But this is not a realistic assumption, even for
finite streams. Client computers, which are often mobile, typically have limited resources and computing
power. After XQueries are translated to the XML algebra, each algebraic operator is assigned a stream-based
evaluation algorithm, which does not require consuming the entire stream before producing any
output. Even when a data stream is finite, some operations, such as sorting and group-by with aggregation,
may take too long to complete. However, clients may be simply satisfied with partial results, such as the
average values of a small sample of the data rather than the entire database. Online aggregation [20, 21]
has addressed this problem by displaying the progress at any point of time along with the accuracy of the
result and by allowing the client to interrupt the aggregation process. Our approach is based on annotations
Figure 1: Semantics of the XML Algebra
about the data content, called punctuations [29]. A punctuation is a hint sent by the server to indicate a
property about the data already transmitted. This hint takes the form of a predicate that compares a path
expression, which uses tsid's rather than tag names, with a constant value, as follows:
<stream:punctuation property="path cmp constant"/>
For example, consider the following punctuation:
<stream:punctuation property="/1/2[3]/5/@ISBN <= 1000"/>
According to the Tag Structure, the path /1/2[3]/5/@ISBN corresponds to the XPath /bib/vendor[3]/book/@ISBN.
This punctuation indicates that the server has already transmitted all books published by the 3rd vendor
whose ISBN is less than or equal to 1000. We are presenting a set of stream-based evaluation algorithms for
our XML algebraic operators that make use of these punctuations to reduce the amount of resources needed
by clients to process streamed data. For example, suppose that a client joins the above stream of data about
books with the stream of book prices provided by Amazon.com, using the ISBN as a cross-reference. Then
one way to evaluate this join is using an in-memory hash join [30], where both the build and probe tables
reside in memory. When ISBN punctuations are received from both streams, some of the hash buckets in
both hash tables can be flushed from memory since, according to the punctuations received, they will not be
used again. In our framework, all punctuations are stored in a central repository at the client side and are
consulted on demand, that is, only when the local buffer space of an evaluation operator overflows.
The rest of the paper is organized as follows. Section 2 presents a new XML algebra and a new algebraic
optimization framework based on query decorrelation. It also presents translation rules for compiling
XQuery into an algebraic form. Our XML algebra can be used generically for processing any XML data. Section
3 adapts this algebra to handle XML streams and presents various in-memory, non-blocking evaluation
algorithms to process these operators.
2 Algebra and Query Optimization
2.1 XML Algebra
We are proposing a new algebra and a new algebraic optimization framework well-suited for XML structures
and stream processing. In this section, we are presenting the XML algebra without taking into account the
fragmentation and reconstruction of XML data. These issues will be addressed in detail in Section 3.
The algebraic bulk operators along with their semantics are given in Figure 1. The inputs and output of
each operator are streams, which are captured as lists of records and can be concatenated with list append,
++. There are other non-bulk operators, such as boolean comparisons, which are not listed here. The
semantics is given in terms of record concatenation and list comprehensions, written { e | q1, ..., qn }, which,
unlike the set-former notation, preserve the order and multiplicity of elements. The form ⊕{ e | q1, ..., qn }
reduces the elements resulting from the list comprehension using an associative binary operator, ⊕ (a monoid, such as ∪, +, *, ∧,
∨, etc.). That is, for a non-bulk monoid ⊕, such as +, we have ⊕{a1, a2, ..., an} = a1 ⊕ a2 ⊕ ... ⊕ an, while
for a bulk monoid, such as ∪, we have ⊕{a1, a2, ..., an} = {a1} ∪ {a2} ∪ ... ∪ {an}. Sorting is captured by
the special bulk monoid sort(f), which merges two sorted sequences by f into one sequence sorted by f.
The environment is the current stream record, used in nested queries. Nested queries are
mapped into algebraic forms in which some algebraic operators have predicates, headers, etc., that contain
other algebraic operators. More specifically, for each record of the stream passing through the outer
operator of a nested query, the inner query is evaluated by concatenating that record with each record of the inner
query stream.
An unnest path is, and operator predicates may contain, a path expression, v/path, where v is a stream
record attribute and path is a simple XPath of the form: A, @A, path/A, path/@A, path[n], path/text(),
or path/data(), where n is an integer. That is, these path forms do not contain wildcard
selections, such as path//A, or predicate selections, such as path[e]. The unnest operation is the only
mechanism for traversing an XML tree structure. Function P is defined over paths accordingly.
The extraction operator gets an XML data source, T, and returns a singleton stream whose unique
element contains the entire XML tree. Selection (σ), projection (π), merging (∪), and join (⋈) are similar
to their relational algebra counterparts, while unnest and nest are based on the nested relational
algebra. The reduce operator is used in producing the final result of a query/subquery, such
as in aggregations and existential/universal quantifications. For example, the XML universal quantification
every $v in $x/A satisfies $v/A/data()>5 can be captured by a reduce over the monoid ∧ with
the predicate v/A/data()>5. Like the XQuery predicates, the predicates used in our XML algebraic operators
have implicit existential semantics related to the (potentially multiple) values returned by path expressions.
For example, the predicate v/A/data()>5 used in the previous example has implicit existential semantics,
since the path v/A/data() may return more than one value. Finally, even though
selections and projections can be expressed in terms of the reduce operator, for convenience, they are treated as separate
operations.
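The monoid-reduction forms above can be emulated in a few lines; the function and variable names are ours:

```python
from functools import reduce

def monoid_reduce(op, identity, elems):
    """⊕{ e | ... }: fold the comprehension's elements with an associative op."""
    return reduce(op, elems, identity)

xs = [{"A": 7}, {"A": 3}, {"A": 9}]

# Non-bulk monoid +: a sum over the projected values.
total = monoid_reduce(lambda a, b: a + b, 0, [v["A"] for v in xs])

# Monoid ∧: the universal quantification "every A > 5".
all_gt5 = monoid_reduce(lambda a, b: a and b, True, [v["A"] > 5 for v in xs])

# Monoid ∨: the existential quantification "some A > 5".
any_gt5 = monoid_reduce(lambda a, b: a or b, False, [v["A"] > 5 for v in xs])

print(total, all_gt5, any_gt5)  # → 19 False True
```

Because Python list comprehensions also preserve order and multiplicity, they mirror the list-comprehension (rather than set-former) semantics used by the algebra.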
2.2 Translation from XQuery to the XML Algebra
Before XQueries are translated to the XML algebra, paths with wildcard selections, such as e//A, are
instantiated to concrete paths, which may be accomplished, in the absence of type information, with the
help of the Tag Structure. Each XPath expression in an XQuery, which may contain wildcard selections, is
expanded into a concatenation of concrete XPath expressions. For example, the XPath expression /A//X is
expanded to (/A/B/C/X, /A/D/X), if these two concrete paths in the pair are the only valid paths in the
Tag Structure that match the wildcard selection.
Our translation scheme from XQuery to the XML algebra consists of two phases: first, XQueries are
translated into list comprehensions, which are similar to those in Figure 1; then, the algebraic forms are
derived from the list comprehensions using the definitions in Figure 1.
According to the XQuery semantics [7], the results of nearly all XQuery terms are mapped to sequences
of values. This means that, if an XQuery term returns a value other than a sequence, this value is lifted to a
singleton sequence that contains this value. A notable exception is a boolean value, which is mapped to the
boolean value itself. The following are a few of the rules for T[[e]], which maps XQuery terms into algebraic forms.
First, we translate XPath terms into simple paths without path predicates. By adopting the semantics of
XPath, the implicit reference to the current node in a path predicate, such as from the path A/B in the path
predicate X/Y[A/B=1], is captured by the variable $dot, which is bound to the current element. That is,
X/Y[A/B=1] is equivalent to X/Y[$dot/A/B=1]. Under this assumption, path predicates are removed in a
straightforward way.
To complete our translation scheme, XQuery terms are mapped to the XML algebra. In these rules,
the function element takes a tag name and a sequence of XML elements and constructs a new XML
element, and e2[$v := e1] replaces all free occurrences of the variable $v in the term e2 with the term e1. Note that
boolean predicates, such as equality, are mapped to one boolean value, as is implied by
the existential semantics of XQuery predicates [7].
Before the list comprehensions are translated to algebraic forms, the generator domains of the comprehensions
are normalized into simple path expressions, when possible. This task is straightforward and has
been addressed by earlier work [5, 15]. The following are some examples of normalization rules, where the
notation [v := e2] rewrites the terms in a sequence by replacing all occurrences of v with e2. The first rule
applies when the domain of a comprehension generator is another comprehension and reduces it to a case caught
by the second rule. The second rule applies when the domain of a comprehension generator is a singleton
sequence. The last rule distributes a tag selection to the header of a comprehension. The fourth rule may
look counter-intuitive, but it is a consequence of the semantics of XQuery, where non-sequence values are
lifted to singleton sequences. According to the mapping rules and after normalization, a simple path, such
as $v/A/B/C, is mapped to itself, that is, to v/A/B/C.
For example, the following XQuery:
for $b in document("http://www.bn.com")/bib/book
return { $b/title }
is translated into a list comprehension using the above mapping rules, which is then normalized into a
simpler comprehension.
Figure 2: Algebraic Query Unnesting
The second phase of our translation scheme maps a normalized term e, which consists of list comprehensions
exclusively, into algebraic form; in these rules, doc is a shorthand for document. In addition, since the predicates used in our XML algebraic operators
have existential semantics, similar to the semantics of XQuery predicates, we move the existentially quantified
variables ranging over paths into the predicate itself, making it an implicit existential quantification. For
example, an existential quantification over b/publisher is reduced to the predicate b/publisher = "Addison-Wesley",
which has implicit existential semantics. Under these rules, the normalized list comprehension derived from
our example query is mapped to its final algebraic form.
2.3 Algebraic Optimization
There are many algebraic transformation rules that can be used in optimizing the XML algebra. Some of
them have already been used successfully for the relational algebra, such as evaluating selections as early as
possible. We are concentrating here on query unnesting (query decorrelation) because it is very important
for processing streamed data. Without query unnesting, nested queries must be evaluated in a nested-loop
fashion, which requires multiple passes through the stream of the inner query. This is unacceptable because
of the performance requirements of stream processing.
Our unnesting algorithm is shown in Figure 2.A: for each box, q, that corresponds to a nested query, it
converts the reduction on top of q into a nest, and the blocking joins/unnests that lie on the input-output
path of the box q into outer-joins/outer-unnests in the box q′ (as is shown in the example of Figure 2.B).
At the same time, it embeds the resulting box q′ at the point immediately before its value is used. There is a very
simple explanation why this algorithm is correct: the nested query, q, in Figure 2.A, consumes the same
input stream as that of the embedding operation, opr, and computes a value that is used in the component,
f, of the embedding query. If we want to splice this box onto the stream of the embedding query, we need
to guarantee two things. First, q should not block the input stream by removing tuples from the stream.
This condition is achieved by converting the blocking joins into outer-joins and the blocking unnests into
outer-unnests (box q′). Second, we need to extend the stream with the new value v of q before it is used in f.
This manipulation can be done by converting the reduction on top of q into a nest, since the main difference
between nest and reduce is that, while reduce returns a value (a reduction of a stream of values), nest
embeds this value into the input stream. At the same time, the nest operator will convert null values to zeros
so that the stream that comes out of the spliced box q′ will be exactly the same as it was before
the splice.
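The equivalence the unnesting algorithm relies on can be illustrated on flat toy data: a naive nested-loop evaluation of a correlated subquery and a decorrelated one-pass grouping (with empty groups standing in for the outer-join's null padding) produce the same result. All data and names below are hypothetical:

```python
from collections import defaultdict

users = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
bids  = [{"user": 1, "item": "vase"}, {"user": 1, "item": "lamp"}]

# Naive nested-loop evaluation: re-scan the inner stream for every outer record.
nested = [(u["name"], sorted(b["item"] for b in bids if b["user"] == u["id"]))
          for u in users]

# Decorrelated evaluation: one pass over the inner stream builds all groups at
# once (the nest step); an outer record with no match keeps an empty group,
# which is what the outer-join / null-handling step of the algorithm ensures.
groups = defaultdict(list)
for b in bids:
    groups[b["user"]].append(b["item"])
decorrelated = [(u["name"], sorted(groups[u["id"]])) for u in users]

print(nested == decorrelated)  # → True
```

The nested-loop version scans bids once per user; the decorrelated version scans it exactly once, which is what makes it viable over a stream.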
2.4 A Complete Example
In this subsection, we apply our translation and optimization scheme to the following XQuery:
for $u in document("users.xml")//user_tuple
return
{ $u/name }
{ for $b in document("bids.xml")//bid,
$i in document("items.xml")//item
return { $i/description/text() } }
which lists all users in alphabetic order by name so that, for each user, it includes descriptions of all the
items (if any) that were bid on by that user, in alphabetic order. Recall that sorting is captured in our
algebra using the monoid sort(f), which sorts a sequence by f. In the algebraic form of the above query, the
header h1 is an XML construction that contains a nested algebraic form. After query unnesting, the resulting
decorrelated query replaces the blocking join and unnest with a left-outer join and an outer unnest, respectively.
3 Processing Continuous XML Streams
In our framework, streamed XML data are disseminated to clients in the form:
<stream:xxx hid="...">XML fragment</stream:xxx>
where xxx is one of the tag names filler, repeat, replace, or remove. The hole ID, hid, specifies the location of the
fragment in the reconstructed XML tree. Our framework uses the XML algebra for processing XML streams.
The main modification needed to the algebra is related to the evaluation of the simple path expressions used
in predicates and the unnest operator, since now they have to be evaluated against the document fillers. More
specifically, function P, given in Section 2, must now be extended to handle holes: applied to a hole element,
it returns the bottom value, ⊥.
The returned bottom value, ⊥, indicates that the path has not been completely evaluated against the current
fragment due to a hole in the fragment. If a ⊥ is returned at any point during the path evaluation (as
is defined by P), the XML fragment is suspended. Each client has one central repository of suspended XML
fragments (regardless of the number of input streams), called the fragment repository, which can be indexed by
the combination of the stream number and hid (the hid of the stored filler). In addition, each operator has
a local repository that maps hole IDs to filler IDs in the fragment repository. Suppose, for example, that a
filler with hid=m is streamed through an XML algebraic operator and that this operator cannot complete
the evaluation of three paths due to the holes with hid's h1, h2, and h3. Then the local repository of the
operator will be extended with three mappings: (h1, m), (h2, m), and (h3, m). When later the filler for one
of the holes arrives at the operator, say the filler for h2, then the hole h2 inside the filler with hid=m in
the fragment repository is replaced with the newly arrived filler. The h2 mapping is removed from the local
repository of the operator and the resulting filler with hid=m at the fragment repository is evaluated again
for this operator, which may result in more hole mappings in the local repository of the operator. Finally, if
there are no blocking holes during path evaluation, that is, when operator paths are completely evaluated,
then the filler is processed by the operator and is removed from the fragment repository. It is not necessary
for all the holes blocking the evaluation of the operator paths to be filled before the fragment is processed
and released. For example, if both paths in a selection predicate are blocked by
holes, then if one of the holes is filled and the corresponding predicate is false, there is no need to wait for
the second hole to be filled, since the fragment can be removed from the stream.
If the server sends repeat, replace, or remove fragments, then the client modifies its fragment repository
and may stream some of the fragments through the algebraic operators. If it is a repeat fragment and its
hid is already in the fragment repository, then it is ignored; otherwise it is streamed through the query as
a filler fragment (since the client may have been connected in the middle of the stream broadcast). If it is
a replace fragment and its hid is already in the fragment repository, then the stored fragment is replaced;
otherwise it is streamed through the query as a filler fragment. Finally, if it is a remove fragment, then the
fragment with the referenced hid is removed from the fragment repository (if it exists).
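The repeat/replace/remove dispatch rules above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function and argument names are assumptions, and storing streamed fragments is left to the downstream query operators.

```python
def handle_fragment(kind, hid, fragment, repository, stream):
    """Apply a server control fragment to the client state.

    kind: 'repeat', 'replace', or 'remove' (per the rules in the text)
    repository: dict mapping hid -> stored fragment (the fragment repository)
    stream: list standing in for "stream it through the query as a filler"
    """
    if kind == "repeat":
        if hid not in repository:
            # client may have connected mid-broadcast: treat as a filler
            stream.append(fragment)
        # otherwise: already known, ignore
    elif kind == "replace":
        if hid in repository:
            repository[hid] = fragment   # replace the stored fragment
        else:
            stream.append(fragment)      # unknown hid: treat as a filler
    elif kind == "remove":
        repository.pop(hid, None)        # remove if it exists
```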
Our evaluation algorithms for the XML algebra are based on in-memory hashing. Blocking operators,
such as join and nest, require buffers for caching stream records. The evaluation algorithms for these
operators are hash-based, when possible (it is not possible for non-equijoins). For example, for the join
X &#8904;_{x/A/B = y/C/D/E} Y, we will have two hash tables in memory with the same number of buckets, one for
stream X with a hash function based on the path x/A/B and one for the stream Y based on the path
y/C/D/E. When both streams are completed, the join is evaluated by joining the associated buckets in
pairs. The nest operator can be evaluated using a hash-based algorithm also, where the hash key is the
combination of all group-by paths.
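The two-table equijoin described above can be sketched as follows. This is an illustrative assumption of how such a join works, not the paper's code; grouping records by equal key values is equivalent to hashing both streams into the same number of buckets and joining associated buckets in pairs.

```python
from collections import defaultdict

def hash_join(xs, ys, xkey, ykey):
    """Hash-based equijoin sketch: bucket each completed stream on the value
    reached by its join path (e.g. x/A/B and y/C/D/E), then join the
    matching buckets."""
    buckets_x = defaultdict(list)
    for rec in xs:
        buckets_x[xkey(rec)].append(rec)   # build phase for stream X
    buckets_y = defaultdict(list)
    for rec in ys:
        buckets_y[ykey(rec)].append(rec)   # build phase for stream Y
    out = []
    for k in buckets_x.keys() & buckets_y.keys():   # matching buckets only
        for rx in buckets_x[k]:
            for ry in buckets_y[k]:
                out.append((rx, ry))
    return out
```

Here `xkey`/`ykey` stand in for path evaluation over fragments; in the paper's setting they would extract the value at the hash-key path.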
As we have mentioned, to cope with continuous streams and with streams larger than the available
memory, we make use of punctuations sent by the server along with the data. In our framework, when
punctuations are received at the client site, they are stored at a central repository, called the punctuation
repository, and are indexed by their stream number. Each blocking operation, such as join or nesting, is
associated with one (for unary) or two (for binary) pairs of stream numbers/hash keys. The hash key is a
path that can be identified by a Tag Structure ID. The blocking operators are evaluated continuously by
reading fragments from their input streams without producing any output but by filling their hash tables.
As soon as one of their hash tables is full, they consult the punctuation repository to find all punctuations
that match both their stream number and Tag Structure ID. Then they perform the blocking operation over
those hashed fragments that match the retrieved punctuations, which may result in the production of some
output. The last phase, flushing the hash tables, can be performed in a pipeline fashion, that is, one output
fragment at a time (when requested by the parent operation). The punctuation repository is cleared of
all the punctuations of a stream when the entire stream is repeated (i.e., when the root fragment is repeated).
Sorting is a special case and is handled separately. It is implemented with an in-memory sort, such
as quicksort. Each sort operator remembers the largest key value of all data already flushed from memory.
In the beginning, of course, the largest key is null. Like the other blocking operators, when the buffer used
for sorting is full, the punctuation repository is consulted to find punctuations related to the sorting key. If
there is a punctuation that indicates that we have seen all data between the largest key value (if not null)
and some other key value, then all buffered data smaller than or equal to the latter key value are flushed and this
value becomes the new largest key.
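The punctuation-driven flush step above can be sketched as follows. This is a minimal sketch under simplifying assumptions (numeric keys, punctuations represented as closed intervals (lo, hi), and the initial "null" largest key represented as negative infinity); it is not the paper's implementation.

```python
def flush_sorted(buffer, largest_flushed, punctuations):
    """One flush step of the punctuation-aware sort operator.

    buffer: unsorted in-memory sort keys
    largest_flushed: largest key already flushed (float('-inf') initially)
    punctuations: list of (lo, hi) pairs asserting "all data with keys in
                  [lo, hi] has been seen"
    Returns (flushed_keys_in_order, remaining_buffer, new_largest_flushed).
    """
    bound = None
    for lo, hi in punctuations:
        # usable only if it covers everything from the flushed frontier on
        if lo <= largest_flushed and (bound is None or hi > bound):
            bound = hi
    if bound is None:
        return [], list(buffer), largest_flushed   # nothing safe to flush
    flushed = sorted(k for k in buffer if k <= bound)
    remaining = [k for k in buffer if k > bound]
    return flushed, remaining, bound               # bound is the new largest
```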
4 Future Plans
When we described our XML algebra, we assumed that all the XPath paths used in operator predicates
and by the unnest operators (as unnesting paths) are fully mapped to concrete paths that do not contain
any wildcard selections. This is possible if the XML Tag Structure is provided, as is done when a server
disseminates XML data to clients in our stream processing framework. In the absence of such information,
wildcard selections of the form e//A can be evaluated with a transitive closure operator that walks through
the XML tree e to find all branches with tag A. Transitive closures are very hard to optimize and very few
general techniques have been proposed in the literature, such as magic sets [25]. To address this problem,
we are planning to incorporate structural recursion over tree types into our XML algebra. Our starting point
will be our previous work on structural recursion over tree-like data structures [13], which satisfies very
effective optimization rules, reminiscent of loop fusion and deforestation (elimination of intermediate data
structures) used in functional programming languages. These operators and optimization techniques can
also be used for optimizing queries over fixed-point types, supported directly in DTDs and XML Schemas,
such as the part-subpart hierarchy. XQuery does not provide any special language construct for traversing
such structures, but can simulate such operations with unrestricted recursion. We are planning to introduce
language extensions to XQuery to handle structural recursion directly and a mapping from these extensions
to the structural recursion algebraic operators. In addition, we are planning to define an update language
for XQuery, give it formal semantics, and optimize it.
Related to query processing of XML streams, we have only presented evaluation algorithms based on
in-memory hashing. We are planning to investigate more evaluation algorithms for our XML algebraic
operators. An alternative approach to hashing is sorting. For example, an alternative way of implementing
a group-by operation is to sort the input by the group-by attributes and then aggregate over consecutive
tuples in the sorted result that belong to the same group [19]. Sorting is a blocking operation, which requires
the availability of the entire stream before it produces any output. It can be turned into a non-blocking
operator if punctuations related to the sorting attributes are transmitted from the server. Supporting
multiple evaluation algorithms for each algebraic operator poses new challenges. How can a good evaluation
plan be selected at the client side? Most relational database systems use a cost-based, dynamic programming
plan selection algorithm, and rely heavily on data statistics for cost estimation [27]. If the server sends a
sorted data stream, then this may indicate that a sort-based algorithm that requires the same order may
be faster and may require less memory than a hash-based one. This is very valuable information to the
client query optimizer and should be transmitted as another stream annotation by the server, maybe as
frequently as the document Tag Structure itself. Another useful piece of information needed by the client, which must
be transmitted by the server, is statistics about the streamed data. It is still an open problem what form these
statistics should take and how they should be used by the client. One possibility is to annotate each node
in the document Tag Structure with the total number of XML elements in the document that correspond to
that node. Hence, we are planning to investigate cost-based plan selection for streamed data.
5 Conclusion
We have presented a framework for processing streamed XML data based on an XML algebra and an
algebraic optimization framework. The effectiveness of our framework depends not only on the available
resources at the client site, especially buffer size, and on the ability of the client to optimize and evaluate
queries effectively, but also on the willingness and the ability of servers to broadcast useful punctuations
through the data stream to help clients utilize their resources better. The server must be aware of all possible
anticipated client queries and disseminate punctuations that reduce the maximum and average sizes of client
resources. In addition, the server can disseminate the fragmentation/repetition policy used in splitting its
XML data as well as statistics about the data sent between punctuations before streaming the actual data.
This information may help clients to allocate their limited memory to various query operations more wisely.
We are planning to address these issues in future work.
Acknowledgments
This work is supported in part by the National Science Foundation under the grant
IIS-9811525 and by the Texas Higher Education Advanced Research Program grant 003656-0043-1999.
--R
Broadcast Disks: Data Management for Asymmetric Communications Environments.
Continuous Queries Over Data Streams.
SAL: An Algebra for Semistructured Data and XML.
A Query Language and Optimization Techniques for Unstructured Data.
Comprehension Syntax.
The Object Data Standard: ODMG 3.0.
A Query Language for XML.
Optimizing Queries with Universal Quantification.
Your Mediators Need Data Conversion!
Nested Queries in Object Bases.
Query Engines for Web-Accessible XML Data
Optimizing Object Queries Using an Effective Calculus.
An Algebra for XML Query.
Database Techniques for the World-Wide Web: A Survey
Optimization of Nested SQL Queries Revisited.
Query Evaluation Techniques for Large Databases.
Interactive Data Analysis: The Control Project.
Online Aggregation.
On Optimizing an SQL-like Nested Query
Overview of the MPEG-7 Standard (version 5.0)
Magic is Relevant.
Improved Unnesting Algorithms for Join Aggregate SQL Queries.
Access Path Selection in a Relational Database Management System.
Optimization of Nested Queries in a Complex Object Model.
Punctuating Continuous Data Streams.
Dataflow Query Execution in a Parallel Main-Memory Environment.
World Wide Web Consortium (W3C).
--TR
Comprehension syntax
Broadcast disks
A query language and optimization techniques for unstructured data
Online aggregation
Your mediators need data conversion!
Database techniques for the World-Wide Web
On wrapping query languages and efficient XML integration
Optimizing object queries using an effective calculus
Dataflow query execution in a parallel main-memory environment
Continuous queries over data streams
Query Engines for Web-Accessible XML Data
An Algebra for XML Query
--CTR
Christoph Koch , Stefanie Scherzinger , Nicole Schweikardt , Bernhard Stegmaier, FluXQuery: an optimizing XQuery processor for streaming XML data, Proceedings of the Thirtieth international conference on Very large data bases, p.1309-1312, August 31-September 03, 2004, Toronto, Canada
Kevin Beyer, Don Chamberlin, Latha S. Colby, Fatma Özcan, Hamid Pirahesh, Yu Xu, Extending XQuery for analytics, Proceedings of the 2005 ACM SIGMOD international conference on Management of data, June 14-16, 2005, Baltimore, Maryland
Christoph Koch , Stefanie Scherzinger , Nicole Schweikardt , Bernhard Stegmaier, Schema-based scheduling of event processors and buffer minimization for queries on structured data streams, Proceedings of the Thirtieth international conference on Very large data bases, p.228-239, August 31-September 03, 2004, Toronto, Canada
Peter A. Tucker , David Maier , Tim Sheard , Leonidas Fegaras, Exploiting Punctuation Semantics in Continuous Data Streams, IEEE Transactions on Knowledge and Data Engineering, v.15 n.3, p.555-568, March
Song Wang , Elke A. Rundensteiner , Murali Mani, Optimization of nested XQuery expressions with orderby clauses, Data & Knowledge Engineering, v.60 n.2, p.303-325, February, 2007
Sujoe Bose , Leonidas Fegaras, Data stream management for historical XML data, Proceedings of the 2004 ACM SIGMOD international conference on Management of data, June 13-18, 2004, Paris, France
Haifeng Jiang , Howard Ho , Lucian Popa , Wook-Shin Han, Mapping-driven XML transformation, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Sharma Chakravarthy , Alp Aslandogan , Ramez Elmasri , Leonidas Fegaras , JungHwan Oh, Database research at UT Arlington, ACM SIGMOD Record, v.32 n.1, March
Christoph Koch , Stefanie Scherzinger, Attribute grammars for scalable query processing on XML streams, The VLDB Journal The International Journal on Very Large Data Bases, v.16 n.3, p.317-342, July 2007
Norman May , Sven Helmer , Guido Moerkotte, Strategies for query unnesting in XML databases, ACM Transactions on Database Systems (TODS), v.31 n.3, p.968-1013, September 2006 | query optimization;databases;XML;query processing |
584876 | Inferring hierarchical descriptions. | We create a statistical model for inferring hierarchical term relationships about a topic, given only a small set of example web pages on the topic, without prior knowledge of any hierarchical information. The model can utilize either the full text of the pages in the cluster or the context of links to the pages. To support the model, we use "ground truth" data taken from the category labels in the Open Directory. We show that the model accurately separates terms in the following classes: self terms describing the cluster, parent terms describing more general concepts, and child terms describing specializations of the cluster. For example, for a set of biology pages, sample parent, self, and child terms are science, biology, and genetics respectively. We create an algorithm to predict parent, self, and child terms using the new model, and compare the predictions to the ground truth data. The algorithm accurately ranks a majority of the ground truth terms highly, and identifies additional complementary terms missing in the Open Directory. | INTRODUCTION
Starting with a set of documents, it is desirable to infer automatically
various information about that set. Information
such as a meaningful name or some related concepts may
be useful for searching or analysis. This paper presents a
simple model that identifies meaningful classes of features
to promote understanding of a cluster of documents. Our
CIKM'02, November 4-9, 2002, McLean, Virginia, USA.
Figure 1: A figure showing the predicted relationships between parent, child and self features. Positive frequency
is the percentage of documents in the positive set that contain a given feature. Collection frequency is
the overall percentage of documents that contain a given feature.
simple model defines three types of features: self terms that
describe the cluster as a whole, parent terms that describe
more general concepts, and child terms that describe specializations
of the cluster.
Automatic selection of parent, child and self features can
be useful for several purposes including automatic labeling
of web directories or improving information retrieval. An
important use could be for automatically naming generated
clusters, as well as recommending both more general and
more specific concepts, using only the summary statistics of
a single cluster, and background collection statistics. Also,
popular web directories such as Yahoo (http://www.yahoo.com/) or the Open Directory (http://www.dmoz.org/) are
manually generated and manually maintained. Even if categories
are defined by hand, automatic hierarchical descriptions
can be useful to recommend new parent or child links,
or alternate names. The same technology could be useful
to improve information retrieval by recommending alternate
queries (both more general and more specific) based on a retrieved
set of pages.
1.1 The Model
We hypothesize that we can distinguish between parent, self,
and child features based on analysis of the frequency of a feature
f in a set of documents (the "positive cluster"), compared
to the frequency of f in the entire collection. Specifically, if f is very common in the positive cluster, but relatively
rare in the collection, then f may be a good self term.
A feature that is common in the positive cluster, but also
somewhat common in the entire collection, is a description
of the positive cluster, but is more general and hence may be
a good parent feature. Features that are somewhat common
in the positive cluster, but very rare in the general collec-
tion, may be good child features because they only describe
a subset of the positive documents.
Figure
1 shows a graphical representation of the model. The
three regions define the predicted relative relationships between
parent, child and self features. Features outside of
the marked regions are considered poor candidates for the
classes of parent, child or self. Figure 1 does not show any
absolute numerical boundaries, only the relative positions
of the regions. The actual regions may be fuzzy or non-rectangular. The regions depend on the generality of the
class. For example, for the cluster of "biology" the parent
of "science" is relatively common. For a cluster of documents
about "gene sequencing", a parent of "DNA" may be
more rare than "science", and hence the boundary between
parent and self would likely be closer to 0.
Figure
2 shows a view of a set of documents that are in the
areas of "science", "biology", and "botany". The outer circle
represents the set of all documents in the subject area
of "science". The middle circle is the set of documents in
the area of "biology" and the inner-most circle represents
the documents in the area of "botany". If we assume that
the features "science", "biology" and "botany" occur only
within their respective circles, and occur in each document
contained within their respective circles, it is easy to see the
parent, child, self relationships. From this figure, roughly
20% of the total documents mention "science", about 5%
of the documents mention "biology" and about 1% mention
"botany". Within the set of "biology" documents, 100%
mention both "science" and "biology", while about 20%
mention "botany". This is a very simplistic representation,
because we assume that every document in the biology circle
actually contains the word biology - which is not necessarily
the case. Likewise, it is unlikely that all documents in the
sub-category of botany would mention both "biology" and
"science".
To compensate for this, we assume that there is some probability that a given "appropriate" feature will
be used. This probability is likely less for the parents than for the selfs or children. As a result, in
Figure 1, the parent region extends more to the left than the self region. The probability of a given
feature being used will also affect the coordinates of the lower right corner; a lower probability may shift the
percentage of occurrences in the self to the left. A probability
of one would correspond to every positive document
containing all self features.
2. AN EXPERIMENT
To test the model described in Figure 1, we used ground
truth data and known positive documents to generate a
graph of the actual occurrences of parent, self and child
features. We chose the Open Directory (http://www.dmoz.org/) as our ground truth data for parent, child, and
self terms, as well as for the documents. Using the top level
categories of "computers", "science" and "sports", we chose
Figure 2: Sample distribution of features for the area of biology, with parent science, and child botany.
the top 15 subject-based sub-categories from each (science
only had 11 subject-based sub-categories) for a total of 41
categories to form the set of positive clusters. Table 1 lists
the 41 categories, and their parents, used for our experiment. We randomly chose documents from anywhere in the
Open Directory to collect an approximation of the collection
frequency of features. The negative set frequencies of
the parent, children and self features should be similar (between sub-categories) because all 41 sub-categories are at
a similar depth (with respect to the Open Directory root
node).
Parent | Categories
Science | Agriculture, Anomalies and Alternative Science, Astronomy, Biology, Chemistry, Earth Sciences, Environment, Math, Physics, Social Sciences, Technology
Computers | Artificial Intelligence, CAD, Computer Science, Consultants, Data Communications, Data Formats, Education, Graphics, Hardware, Internet, Multimedia, Programming, Security, Software, Systems
Sports | Baseball, Basketball, Cycling, Equestrian, Football, Golf, Hockey, Martial Arts, Motorsports, Running, Skiing, Soccer, Tennis, Track and Field, Water Sports
Table 1: The 41 Open Directory categories, and the three parent categories we used for our experiment.
Each category has an assigned parent (in this case either
science, computers or sports), an associated name, which
formed the self features, and several sub-categories, which
formed the children. In each case, we split the assigned
names on "and", "or", or punctuation such as a comma.
So the category of "Anomalies and Alternative Science" becomes
two selfs, "anomalies" and "alternative science".
Figure 3: Distribution of ground truth features from the Open Directory (feature statistics from document full-text; axes: fraction of positive documents vs. fraction of negative documents).

Figure 4: Distribution of ground truth features from the Open Directory, removing the insufficiently defined children, and changing the parent of "computers" to "computer".
The first part of the experiment considered an initial set of 500 random documents from each positive category, and
20,000 random documents from anywhere in the directory as the negative data (collection statistics). Each of the web
URLs was downloaded and the features were put into a histogram. If a URL resulted in a terminal error, the page was
ignored, explaining the variation in the number of positive documents used for training. Features consisted of words,
or two or three word phrases, with each feature counting a maximum of once per document.
Then, for each category, we graphed each parent, child and self feature (as assigned by the Open Directory) with the
X coordinate as the fraction of positive documents containing the feature, and the Y coordinate as the fraction of the
negative documents containing that feature. If a feature occurred in less than 2% of the positive set it was ignored.
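The feature extraction described above (words plus two- and three-word phrases, each counted at most once per document) can be sketched as follows. This is an illustrative sketch, not the authors' code; tokenization details are assumptions.

```python
from collections import Counter

def doc_features(text, max_n=3):
    """Return the set of word n-grams (n = 1..max_n) in one document.
    Using a set means each feature counts at most once per document."""
    words = text.lower().split()
    feats = set()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            feats.add(" ".join(words[i:i + n]))
    return feats

def feature_frequencies(docs):
    """Fraction of documents containing each feature (the X or Y
    coordinate of the graphs, depending on whether docs is the
    positive cluster or the random negative sample)."""
    counts = Counter()
    for d in docs:
        counts.update(doc_features(d))
    n = len(docs)
    return {f: c / n for f, c in counts.items()}
```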
Figure 3 shows the distribution of all parent, child and self features from our 41 categories. Although there appears to
be a general trend, there are many children that occur near the parents. Since there were many categories with the same
parent (only three unique parents), and a common negative set was used, the parents are co-linear with a common value.
Several of the children are words or phrases that are not well defined in the absence of knowledge of the category. For
example, the feature "news" is undefined without knowing the relevant category; is it news about artificial intelligence, or
news about baseball? Likewise several features, including news, are not "subjects" but rather a non-textual property
of a page. A volunteer went through the list of categories and their children, removing any child that was not
sufficiently defined in isolation. He removed more than half of the children. The removal was done prior to seeing any
Category (F, V): agriculture (438, 67); anomalies and alternative science (-, -); artificial intelligence (448, 77);
astronomy (438, 64); baseball (419, 62); basketball (418, 67); biology (454, 66); cad (405, -);
chemistry (443, 70); computer science (-, -); consultants (442, 139); cycling (438, -);
data communication (439, -); data formats (434, 62); sciences (445, 70); education (436, 67);
environment (439, 76); equestrian (433, 62); football (426, 71); golf (441, 64);
hockey (411, 70); internet (446, 74); martial arts (461, 61); math (460, 69);
motorsports (445, 64); multimedia (427, 64); physics (441, 69); programming (446, 76);
running (436, 82); security (426, 67); skiing (421, 69); soccer (439, 73);
social sciences (458, 71); software (446, 73); systems (447, 54); technology (439, 53);
tennis (452, 36); track and field (384, -); water sports (451, 40)
Table 2: The number of positive documents from each category for the full-text (F) experiment and for the
extended anchortext (V) experiment. (A dash marks a value lost in extraction.)
Figure 5: Extended anchortext refers to the words in close proximity to an inbound link.
data, and without knowledge of exactly why he was asked to remove "insufficiently defined" words or phrases.
Analyzing the data suggested that the parent of "computers" should be replaced by "computer". Unlike the word
"sports", often found in the plural when used in the general sense, "computers" is often found in the singular form. For
this experiment, we did not perform any stemming or stop-word removal, so "computers" and "computer" are different
features. Figure 4 shows the same data as Figure 3 except with the parent changed from "computers" to "computer",
and the insufficiently defined children removed. This change produces a clearer separation between the regions.
2.1 Extended Anchortext
Unfortunately, documents often do not contain the words that describe their category. In the category of
"Multimedia" for example, the feature "multimedia" occurred in only 13% of the positive documents. This is due to a
combination of choice of terms by the page authors as well as the fact that often a main web page has no textual
contents, and is represented by only a "click here to enter" image.
Our model assumes the "documents" are actually descriptions. Rather than use the words on the page itself, we
decided to repeat the experiment using human assigned descriptions of a document in what we call "extended
anchortext", as shown in Figure 5. Our earlier work [3] describes extended anchortext, and how it produces features
more consistent with the "summary" than the full text of documents. Clusters generated from extended anchortext
features appear to have more reasonable names.
Extended anchortext refers to the words that occur near a link to the target page. Figure 5 shows an example of
extended anchortext. Instead of using the full text, we used a virtual document composed of up to 15 extended
anchortexts. Inbound links from Yahoo! or the Open Directory were excluded. When using virtual documents created by
considering up to 25 words before, after and including the inbound anchortexts, there is a significant increase in the
usage of self features in the positive set (as compared to the full-texts). In the category of Multimedia, the feature
"multimedia" occurred in 42% of the positive virtual documents, as opposed to 13% of the full texts. The occurrence
of the feature "multimedia" in the negative (random) set was nearly identical for both the full text and the virtual
documents, at around 2%.
Table 2 lists the number of positive virtual documents used for each category (randomly picked from the 500 used in the
first experiment). We used 743 negative virtual documents as the negative set. However, the generation of virtual
documents is quite expensive, forcing us to reduce the total number of pages considered. The improved summarization
ability from virtual documents should allow us to operate with fewer total documents.
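Building a virtual document from inbound-link contexts, as described above, can be sketched as follows. This is a hypothetical sketch (the representation of link contexts as before/anchor/after word lists is an assumption), not the authors' pipeline.

```python
def virtual_document(contexts, max_anchors=15, window=25):
    """Concatenate up to max_anchors extended anchortexts into one
    virtual document. Each context is (before_words, anchor_words,
    after_words) for one inbound link; at most `window` words are kept
    on each side of the anchortext."""
    parts = []
    for before, anchor, after in contexts[:max_anchors]:
        parts.extend(before[-window:])   # up to `window` words before the link
        parts.extend(anchor)             # the anchortext itself
        parts.extend(after[:window])     # up to `window` words after the link
    return " ".join(parts)
```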
Figure 6 shows the results for all parents, children and selfs for the extended anchortext. The positive percentages have
in general shifted to the right, as selfs become more clearly separated from children. Figure 7 shows the results after
removal of the insufficiently defined children and replacing "computers" with "computer". Very few data points fall
outside of a simple rectangular region defined around each class. Even including the insufficiently defined children, the
three regions are well defined.
Despite the fact that most parents, children, and selfs fall
into the shown regions, there are still several factors causing
problems. First, we did not perform any stemming. Some
features may appear in both singular and plural forms, with
one being misclassified. In addition, phrases may occur less
often than their individual terms, making selfs appear falsely
as children, such as the case of "artificial intelligence", where
it appears as a child due to the relatively low occurrence of
the phrase.
Figure 6: Distribution of ground truth features from the Open Directory using extended anchortext virtual documents
instead of full-text.

Figure 7: Distribution of ground truth features from the Open Directory using extended anchortext virtual documents
instead of full-text, with corrections.
3. EXTRACTING HIERARCHICAL DESCRIPTIONS
3.1 Algorithm
Figure 7 shows that the graph of the ground-truth features from the Open Directory for 41 categories in general follows
the predicted model of Figure 1. However, it does not graph all features occurring in each category, only those assigned
by The Open Directory. To provide extra support for the
model, we present a simple algorithm that ranks all features
as possible parents, children and selfs, and compare the output
with the ground-truth data from the Open Directory.
Predict Parents, Children and Selfs Algorithm
For each feature f from a set of positive features:
1: Assign a label to feature f as follows:
   if (f.neg > maxParentNegative) {Label='N'}
   elseif (f.neg > maxSelfNegative) {Label='P'}
   elseif (f.pos > minSelfPositive) {Label='S'}
   elseif ((f.pos < maxChildPositive) and
           (f.neg < maxChildNegative)) {Label='C'}
   else {Label='N'}
2: For each label (P,S,C), sort the features with that label by f.pos
Category | Parents | Selfs | Children
agriculture | management, science | agriculture, agricultural | soil, sustainable, crop
anomalies and alternative science | articles, science | alternative | ufo, scientific
artificial intelligence | systems, computer | artificial, intelligence | ai, computational, artificial intelligence
astronomy | science, images | space, astronomy | physics, sky, astronomical
baseball | sports, high | baseball, league | stats, players, leagues
basketball | sports, college | basketball, team | s basketball, espn, hoops
biology | science, university of | biology | biological, genetics, plant
cad | systems, computer | cad, 3d | modeling, architectural, 2d
chemistry | science, university of | chemical, chemistry | chem, scientific, of chemistry
computer science | systems, computer | engineering, computing | programming, papers, theory
consultants | systems, management | solutions, consulting | consultants, programming, and web
cycling | sports, url | bike, bicycle | bicycling, mtb, mountain bike
data communication | systems, management | communications, solution | networks, clients, voice
data formats | collection, which | windows, graphics | file, mac, truetype
sciences | science, systems | environmental, data | survey, usgs, ecology
education | computer, training | learning | microsoft, tutorials, certification
environment | science, management | environmental, environment | conservation, sustainable, the environment
equestrian | training, sports | horse, equestrian | riding, the horse, dressage
football | sports, board | football, league | teams, players, leagues
golf | sports, equipment | golf, courses | golfers, golf club, golf course
graphics | images, collection | graphics | 3d, animation, animated
hardware | computer, systems | hardware, technologies | hard, components, drives
hockey | sports, canada | hockey, team | hockey league, teams, ice hockey
internet | computer, support | web | based, rfc, hosting
martial arts | arts, do | martial, martial arts | fu, defense, kung fu
math | science, university of | math, mathematics | theory, geometry, algebra
motorsports | photos, sports | racing, race | driver, track, speedway
multimedia | media, video | digital, flash | 3d, animation, graphic
physics | science, university of | physics | scientific, solar, theory
programming | systems, computer | programming, code | object, documentation, unix
running | sports, training | running, race | races, track, athletic
security | systems, computer | security, system | security and, nt, encryption
skiing | sports, country | ski, skiing | winter, snowboarding, racing
soccer | sports, url | soccer, league | teams, players, leagues
social sciences | science, university of | social | economics, theory, anthropology
software | systems, computer | windows, system | application, tool, programming
systems | computer, systems | computers, hardware | linux, emulator, software and
technology | systems, university of | engineering | scientific, engineers, chemical
tennis | sports, professional | tennis, s tennis | men s, women s tennis, of tennis
track and field | sports, training | running, track | track and field, track and, and field
water sports | board, sports | boat | sailing, boats, race
Table 3: Algorithm predicted top two parents, selfs and children for each of the 41 tested categories. Blank values
mean no terms fell into the specified region for that category.
3.2 Results
Using the data from Figure 7, we specified cut-offs for the parent, self and child regions.
Table 3 shows the top parents, selfs and children generated
using the algorithm described in Section 3.1 as applied to
the virtual documents, as described in Section 2.1. The
results show that in all 41 categories the Open Directory-assigned parent (with "computers" replaced by "computer") was ranked in the top 5. In about 80% of the categories the top-ranked selfs were identical, or effectively the same (a synonym, or identical stem), as the Open Directory-assigned self. Children are more difficult to evaluate since there are many reasonable children that are not listed.
Although in general the above algorithm appears to work, there are several obvious limitations. First, in some categories, such as "Internet", the cut-off points vary. Our algorithm does not dynamically adjust to the data for a given category. The manually assigned cut-offs simply show that if we did know the cut-offs the algorithm would work; it does not specify how to obtain such cut-offs automatically. Second, phrases appear to sometimes have a lower positive occurrence than single words. For example, the phrase "artificial intelligence" incorrectly appears as a child instead of a self. Third, there is no stemming or intelligent feature removal. For example, a feature such as "university of" should be ignored since it ends with a stop word. Likewise, consulting as opposed to consult, or computers as opposed to computer, are all examples where failure to stem has caused problems.
Despite the problems, the simplistic algorithm suggests that
there are some basic relationships between features that can
be predicted based solely on their frequency of occurrence
in a positive set and in the whole collection. Clearly more
work and more detailed experiments are needed.
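To make the region idea concrete, the prediction step can be sketched in a few lines of Python. The threshold values below are hypothetical placeholders for a toy example, not the manually assigned cut-offs used in the experiments:

```python
def classify_feature(pos_freq, coll_freq):
    """Place a feature into a region from its frequency in the positive
    cluster (pos_freq) and in the whole collection (coll_freq).
    Thresholds are illustrative placeholders, not the paper's cut-offs."""
    if pos_freq >= 0.5 and coll_freq >= 0.05:
        return "parent"   # frequent here and collection-wide: more general concept
    if pos_freq >= 0.5:
        return "self"     # frequent here, rare elsewhere: names the cluster
    if pos_freq >= 0.1 and coll_freq < 0.05:
        return "child"    # moderately frequent here, rare elsewhere: sub-concept
    return "other"

# Toy check for a "biology" cluster:
assert classify_feature(0.8, 0.20) == "parent"   # e.g. "science"
assert classify_feature(0.9, 0.01) == "self"     # e.g. "biology"
assert classify_feature(0.2, 0.01) == "child"    # e.g. "botany"
```

In practice the thresholds would have to be tuned per category, which is exactly the limitation discussed above.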
It should be noted that these categories are all at roughly the same depth (from the root node of the Open Directory). This increases the likelihood that the cut-offs work for multiple categories, even though each category may be different.
Analysis of the documents in the clusters revealed that some categories suffered from topic drift when random documents were chosen. Our method for choosing the pages for each positive cluster randomly picked pages from the set of all documents in the category or one level below. Unfortunately, since the Open Directory does not guarantee an equal number of documents in a category, it is possible to pick a higher percentage of documents from one child. For example, in the category of "Multimedia" there are only six URLs in the category itself, with 560 pages in the child of "Flash and Shockwave". Randomly picking documents in that category biases "flash and shockwave" over the more general multimedia pages.
4. RELATED WORK
4.1 Cluster Analysis
There is a large body of related work on automatic summarization. For example, Radev and Fan [9] describe a
technique for summarization of a cluster of web documents.
Their approach breaks down the documents into individual
sentences and identifies themes or "the most salient passages
from the selected documents". Their approach uses
"centroid-based summarization" and does not produce sets
of hierarchically related features.
Lexical techniques have been applied to infer various concept
relationships from text [1, 4, 5]. Hearst [5] describes a
method for finding lexical relations by identifying a set of
lexicosyntactic patterns, such as a comma separated list of
noun phrases, e.g. "bruises, wounds, broken bones or other
injuries". These patterns are used to suggest types of lexical
relationships, for example bruises, wounds and broken bones
are all types of injuries. Caraballo describes a technique for automatically constructing a hypernym-labeled noun hierarchy. A hypernym defines a relationship between words A and B in which native speakers of English accept the sentence "B is a (kind of) A". Linguistic relationships such as those
described by Hearst and Caraballo are useful for generating
thesauri, but do not necessarily describe the relationship of
a cluster of documents to the rest of a collection. Knowing that, say, "baseball is a sport" may be useful for hierarchy generation if you knew a given cluster was about sports. However, the extracted relationships do not necessarily relate to the actual frequency of the concepts in the set. Given a cluster of sports documents that discusses primarily basketball and hockey, the fact that baseball is also a sport is not as important for describing that set as other relationships.
Sanderson and Croft [10] presented a statistical technique based on subsumption relations. In their model, for two terms x and y, x is said to subsume y if the probability of x given y is one,^1 and the probability of y given x is less than one. A subsumption relationship is suggestive of a parent-child relationship (in our case a self-child relationship). This
allows a hierarchy to be created in the context of a given
cluster. In contrast, our work focuses on specific general regions
of features identified as "parents" (more general than
the common theme), "selfs" (features that define or describe
the cluster as a whole) and "children" (features that describe
common sub-concepts). Their work is unable to distinguish between a "parent-self" relationship and a "self-child" relationship. They only deal with a positive set of documents,
but statistics from the entire collection are needed to make
both distinctions. Considering the collection statistics can
also help to filter out less important terms that may not be
meaningful to describe the cluster.
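As a concrete illustration, the subsumption test is straightforward to express in code. The sketch below represents documents as term sets (an assumption for illustration) and uses the relaxed 0.8 threshold that Sanderson and Croft used in practice:

```python
def subsumes(x, y, docs, threshold=0.8):
    """Subsumption test: x subsumes y when P(x | y) >= threshold while
    P(y | x) < 1.  `docs` is an iterable of term sets (a toy representation)."""
    with_y = [d for d in docs if y in d]
    with_x = [d for d in docs if x in d]
    if not with_y or not with_x:
        return False
    p_x_given_y = sum(1 for d in with_y if x in d) / len(with_y)
    p_y_given_x = sum(1 for d in with_x if y in d) / len(with_x)
    return p_x_given_y >= threshold and p_y_given_x < 1

docs = [{"sport", "baseball"}, {"sport", "hockey"}, {"sport"}, {"news"}]
assert subsumes("sport", "baseball", docs)      # every baseball doc mentions sport
assert not subsumes("baseball", "sport", docs)  # but not vice versa
```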
Popescul and Ungar describe a simple statistical technique using χ² for automatically labeling document clusters [8].
Each (stemmed) feature was assigned a score based on the
product of local frequency and predictiveness. Their concept
of a good cluster label is similar to our notion of "self
features". A good self feature is one that is both common
in the positive set and rare in the negative set, which corresponds
to high local frequency and a high predictiveness.
Our earlier work [3] describes how ranking features by expected
entropy loss can be used to identify good candidates
for self names or parent or child concepts. Features that
are common in the positive set, and rare in the negative
set make good selfs and children, and also demonstrate high
expected entropy loss. Parents are also relatively rare in
the negative set, and common in the positive set and are
also likely to have high expected entropy loss. This work focuses on separating out the different classes of features by considering the specific positive and negative frequencies, as opposed to ranking by a single entropy-based measure.
^1 They actually used 0.8 instead to reduce the noise.
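A hedged sketch of the expected-entropy-loss ranking used in our earlier work [3], for a two-class (positive/negative) setup; the counts are toy values chosen for illustration:

```python
from math import log2

def entropy(p):
    """Binary entropy of a probability p."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def expected_entropy_loss(pos_with, pos_total, neg_with, neg_total):
    """H(class) - E[H(class | feature)] for a feature occurring in pos_with
    of pos_total positive docs and neg_with of neg_total negative docs."""
    n = pos_total + neg_total
    prior = entropy(pos_total / n)
    occurs = pos_with + neg_with
    p_f = occurs / n
    h_given_f = entropy(pos_with / occurs) if occurs else 0.0
    rest = n - occurs
    h_given_not_f = entropy((pos_total - pos_with) / rest) if rest else 0.0
    return prior - (p_f * h_given_f + (1 - p_f) * h_given_not_f)

# A feature common in the positive set and rare in the negative set scores high;
# a feature equally common in both sets carries no information.
good = expected_entropy_loss(90, 100, 5, 900)
weak = expected_entropy_loss(50, 100, 450, 900)
assert good > weak
```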
4.2 Hierarchical Clustering
Another approach to analyzing a single cluster is to break it down into sub-clusters, forming a hierarchy of clusters.
Fasulo [2] provides a nice summary of a variety of techniques for clustering (and hierarchical clustering) of documents. Kumar et al. [7] analyze the web for communities,
using the link structure of the web to determine the clusters.
Hofmann and Puzicha [6] describe several statistical models
for co-occurrence data and relevant hierarchical clustering
algorithms. They specifically address the IR issues and term
relationships.
To clarify the difference between our work and hierarchical clustering approaches, we will present a simple example. Imagine a user does a web search for "biology", and
retrieves 20 documents, all of them general biology "hub" pages. Each page is somewhat similar in that it doesn't focus on a specific aspect of biology. Hierarchical clustering
would break the 20 documents down into sub-clusters, where each sub-cluster would represent the "children" concepts. The topmost cluster could arguably be considered the "self" cluster. However, given the sub-clusters, there is
no easy way to discern which features (words or phrases)
are meaningful names. Is "botany" a better name for a sub-cluster
than "university"? In addition, given a group of similar
documents, the clustering may not be meaningful. The
sub-clusters could focus on irrelevant aspects, such as the fact that half of the documents contain the phrase "copyright 2002", while the other half do not. This is especially difficult for web pages that lack textual content, i.e. a "welcome page" or a JavaScript redirect, or if some of
the pages were about more than one topic (even though the
cluster as a whole is primarily about biology).
Using our approach, the set of the 20 documents would
be analyzed (considering the web structure to deal with
non-descriptive pages), and a histogram summarizing the
occurrence of each feature would be generated (individual
document frequency would be removed). Comparing the
generated histogram to a histogram of all documents (or
some larger reference collection), we would find that the
"best" name for the cluster is "biology", and that "science"
is a term that describes a more general concept. Likewise,
we would identify several different "types" of biology, even
though no document may actually cluster into the set. For
example, "botany", "cell biology", "evolution", etc. Phrases
such as "copyright 2002" would be recognized as unimportant
because of their frequency in the larger collection. In addition, the use of the web structure (extended anchortext) can significantly improve the ability to name small sets of documents over just the document full text, dealing with the problems of "welcome pages" or redirects.
5. CONCLUSIONS AND FUTURE WORK
This paper presents a simple statistical model that can be
used to predict parent, child and self features for a relatively
small cluster of documents. Self features can be used as a
recommended name for a cluster, while parents and children
can be used to "place" the cluster in the space of the larger
collection. Parent features suggest a more general concept,
while child features suggest concepts that describe a specialization
of the self.
To support our model, we performed two different sets of
experiments. First, we graphed ground truth data, demonstrating
that actual parent, child, and self features generally
obey our predicted model. Second, we described and tested
a simple algorithm that can predict parent, child and self
features given feature histograms. The predicted features
often agreed with the ground truth, and may even suggest
new interconnections between related categories.
To improve the algorithm, we will be exploring methods
for automatically discovering the boundaries of the regions
given only the feature histograms for a single cluster. We
also intend to handle phrases differently than single word
terms, and include various linguistic techniques to improve
the selection process.
6. REFERENCES
Automatic construction of a hypernym-labeled noun hierarchy from text
An analysis of recent work on clustering algorithms.
Using web structure for classifying and describing web pages.
Automatic acquisition of hyponyms from large text corpora.
Automated discovery of WordNet relations.
Statistical models for co-occurrence data
Trawling the web for emerging cyber-communities
Automatic labeling of document clusters.
Automatic summarization of search engine hit lists.
Deriving concept hierarchies from text.
| web analysis;statistical models;hierarchical relationships;feature selection;cluster naming |
584877 | Evaluation of hierarchical clustering algorithms for document datasets. | Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering solutions provide a view of the data at different levels of granularity, making them ideal for people to visualize and interactively explore large document collections.In this paper we evaluate different partitional and agglomerative approaches for hierarchical clustering. Our experimental evaluation showed that partitional algorithms always lead to better clustering solutions than agglomerative algorithms, which suggests that partitional clustering algorithms are well-suited for clustering large document datasets due to not only their relatively low computational requirements, but also comparable or even better clustering performance. We present a new class of clustering algorithms called constrained agglomerative algorithms that combine the features of both partitional and agglomerative algorithms. Our experimental results showed that they consistently lead to better hierarchical solutions than agglomerative or partitional algorithms alone. | Introduction
Hierarchical clustering solutions, which are in the form of trees called dendrograms, are of great interest for a number
of application domains. Hierarchical trees provide a view of the data at different levels of abstraction. The consistency
of clustering solutions at different levels of granularity allows flat partitions of different granularity to be extracted
during data analysis, making them ideal for interactive exploration and visualization. In addition, there are many times when clusters have subclusters, and the hierarchical structure is indeed a natural constraint on the underlying application domain (e.g., biological taxonomy, phylogenetic trees) [9].
Hierarchical clustering solutions have been primarily obtained using agglomerative algorithms [27, 19, 10, 11, 18], in which each object is initially assigned to its own cluster and then pairs of clusters are repeatedly merged until the
whole tree is formed. However, partitional algorithms [22, 16, 24, 5, 33, 13, 29, 4, 8] can also be used to obtain hierarchical
clustering solutions via a sequence of repeated bisections. In recent years, various researchers have recognized
that partitional clustering algorithms are well-suited for clustering large document datasets due to their relatively low
computational requirements [6, 20, 1, 28]. However, there is the common belief that in terms of clustering quality,
partitional algorithms are actually inferior and less effective than their agglomerative counterparts. This belief is based both on experiments with low-dimensional datasets and on a limited number of studies in which agglomerative
approaches outperformed partitional K-means based approaches. For example, Larsen [20] observed that group
average greedy agglomerative clustering outperformed various partitional clustering algorithms in document datasets
from TREC and Reuters.
In light of recent advances in partitional clustering [6, 20, 7, 4, 8], we revisited the question of whether or not
agglomerative approaches generate superior hierarchical trees than partitional approaches. The focus of this paper is
* This work was supported by NSF CCR-9972519, EIA-9986042, ACI-9982274, ACI-0133464, by Army Research Office contract DA/DAAG55-98-1-0441, by the DOE ASCI program, and by Army High Performance Computing Research Center contract number DAAH04-95-C-0008. Related papers are available via WWW at URL: http://www.cs.umn.edu/~karypis
to compare various agglomerative and partitional approaches for the task of obtaining hierarchical clustering solutions.
The partitional methods that we compared use different clustering criterion functions to derive the solutions and the
agglomerative methods use different schemes for selecting the pair of clusters to merge next. For partitional clustering
algorithms, we used six recently studied criterion functions [34] that have been shown to produce high-quality partitional
clustering solutions. For agglomerative clustering algorithms, we evaluated three traditional merging criteria
(i.e., single-link, complete-link, and group average (UPGMA)) and a new set of merging criteria derived from the six
partitional criterion functions. Overall, we compared six partitional methods and nine agglomerative methods.
In addition to the traditional partitional and agglomerative algorithms, we developed a new class of agglomerative
algorithms, in which we introduced intermediate clusters obtained by partitional clustering algorithms to constrain
the space over which agglomeration decisions are made. We refer to them as constrained agglomerative algorithms.
These algorithms generate hierarchical trees in two steps. First, for each of the intermediate partitional clusters, an
agglomerative algorithm builds a hierarchical subtree. Second, the subtrees are combined into a single tree by building
an upper tree using these subtrees as leaves.
We experimentally evaluated the performance of these methods to obtain hierarchical clustering solutions using
twelve different datasets derived from various sources. Our experiments showed that partitional algorithms always
generate better hierarchical clustering solutions than agglomerative algorithms and that the constrained agglomerative
methods consistently lead to better solutions than agglomerative methods alone and in most cases they outperform
partitional methods as well. We believe that the observed poor performance of agglomerative algorithms is because of
the errors they make during early agglomeration. The superiority of partitional algorithms also suggests that partitional
clustering algorithms are well-suited for obtaining hierarchical clustering solutions of large document datasets due to
not only their relatively low computational requirements, but also comparable or better performance.
The rest of this paper is organized as follows. Section 2 provides some information on how documents are represented
and how the similarity or distance between documents is computed. Section 3 describes different criterion
functions as well as criterion function optimization of hierarchical partitional algorithms. Section 4 describes various
agglomerative algorithms and the constrained agglomerative algorithms. Section 5 provides the detailed experimental
evaluation of the various hierarchical clustering methods as well as the experimental results of the constrained agglomerative
algorithms. Section 6 discusses some important observations from the experimental results. Finally, Section 7
provides some concluding remarks.
2 Preliminaries
Throughout this paper we will use the symbols n, m, and k to denote the number of documents, the number of terms,
and the number of clusters, respectively. We will use the symbol S to denote the set of n documents that we want to cluster, S_1, S_2, ..., S_k to denote each one of the k clusters, and n_1, n_2, ..., n_k to denote the sizes of the corresponding clusters.
The various clustering algorithms that are described in this paper use the vector-space model [26] to represent each
document. In this model, each document d is considered to be a vector in the term-space. In particular, we employed the tf-idf weighting model, in which each document can be represented as

d_tfidf = (tf_1 log(n/df_1), tf_2 log(n/df_2), ..., tf_m log(n/df_m)),

where tf_i is the frequency of the i-th term in the document and df_i is the number of documents that contain the i-th term. To account for documents of different lengths, the length of each document vector is normalized so that it is of unit length (||d_tfidf|| = 1), that is, each document is a vector on the unit hypersphere. In the rest of the paper, we will assume that the vector representation for each document has been weighted using tf-idf and has been normalized so that it is of unit length. Given a set A of documents and their corresponding vector representations, we define the composite vector D_A to be D_A = sum_{d in A} d, and the centroid vector C_A to be C_A = D_A / |A|.
In the vector-space model, the cosine similarity is the most commonly used method to compute the similarity between two documents d_i and d_j, which is defined to be cos(d_i, d_j) = d_i^t d_j / (||d_i|| ||d_j||). The cosine formula can be simplified to cos(d_i, d_j) = d_i^t d_j when the document vectors are of unit length. This measure becomes one if the documents are identical, and zero if there is nothing in common between them (i.e., the vectors are orthogonal to each other).
Vector Properties By using the cosine function as the measure of similarity between documents we can take advantage of a number of properties involving the composite and centroid vectors of a set of documents. In particular, if S_i and S_j are two sets of unit-length documents containing n_i and n_j documents respectively, and D_i, D_j and C_i, C_j are their corresponding composite and centroid vectors, then the following is true:

1. The sum of the pair-wise similarities between the documents in S_i and the documents in S_j is equal to D_i^t D_j. That is,

   sum_{d_q in S_i, d_r in S_j} cos(d_q, d_r) = sum_{d_q in S_i, d_r in S_j} d_q^t d_r = D_i^t D_j.   (1)

2. The sum of the pair-wise similarities between the documents in S_i is equal to ||D_i||^2. That is,

   sum_{d_q, d_r in S_i} cos(d_q, d_r) = sum_{d_q, d_r in S_i} d_q^t d_r = D_i^t D_i = ||D_i||^2.   (2)

Note that this equation includes the pairwise similarities involving the same pairs of vectors.
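Both properties follow from the bilinearity of the dot product, since for unit-length vectors the cosine is just the dot product. A quick numerical check in plain Python (toy vectors, for illustration only):

```python
from math import sqrt

def unit(v):
    """Scale a vector to unit length."""
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Two small sets of unit-length "documents" in a 3-term space (toy data).
S_i = [unit([1, 2, 0]), unit([0, 1, 1])]
S_j = [unit([2, 0, 1])]

D_i = [sum(c) for c in zip(*S_i)]   # composite vector of S_i
D_j = [sum(c) for c in zip(*S_j)]   # composite vector of S_j

# Property 1: sum of pairwise cosines across the sets equals D_i^t D_j.
cross = sum(dot(dq, dr) for dq in S_i for dr in S_j)
assert abs(cross - dot(D_i, D_j)) < 1e-9

# Property 2: sum of pairwise cosines within S_i equals ||D_i||^2.
within = sum(dot(dq, dr) for dq in S_i for dr in S_i)
assert abs(within - dot(D_i, D_i)) < 1e-9
```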
3 Hierarchical Partitional Clustering Algorithm
Partitional clustering algorithms can be used to compute a hierarchical clustering solution using a repeated cluster
bisectioning approach [28, 34]. In this approach, all the documents are initially partitioned into two clusters. Then,
one of these clusters containing more than one document is selected and is further bisected. This process continues n - 1 times, leading to n leaf clusters, each containing a single document. It is easy to see that this approach builds the hierarchical tree from top (i.e., single all-inclusive cluster) to bottom (each document is in its own
cluster). In the rest of this section we describe the various aspects of the partitional clustering algorithm that we used
in our study.
3.1 Clustering Criterion Functions
A key characteristic of most partitional clustering algorithms is that they use a global criterion function whose optimization
drives the entire clustering process. For those partitional clustering algorithms, the clustering problem can
be stated as computing a clustering solution such that the value of a particular criterion function is optimized.
The clustering criterion functions that we used in our study can be classified into four groups: internal, external,
hybrid and graph-based. The internal criterion functions focus on producing a clustering solution that optimizes a
function defined only over the documents of each cluster and does not take into account the documents assigned to
different clusters. The external criterion functions derive the clustering solution by focusing on optimizing a function
that is based on how the various clusters are different from each other. The graph based criterion functions model the
documents as a graph and use clustering quality measures defined in the graph model. The hybrid criterion functions
simultaneously optimize multiple individual criterion functions.
Internal Criterion Functions The first internal criterion function maximizes the sum of the average pairwise similarities between the documents assigned to each cluster, weighted according to the size of each cluster. Specifically, if we use the cosine function to measure the similarity between documents, then we want the clustering solution to optimize the following criterion function:

   maximize I_1 = sum_{r=1}^{k} n_r ( (1/n_r^2) sum_{d_i, d_j in S_r} cos(d_i, d_j) ) = sum_{r=1}^{k} ||D_r||^2 / n_r.   (3)
The second criterion function is used by the popular vector-space variant of the K-means algorithm [6, 20, 7, 28,
17]. In this algorithm each cluster is represented by its centroid vector and the goal is to find the clustering solution
that maximizes the similarity between each document and the centroid of the cluster it is assigned to. Specifically,
if we use the cosine function to measure the similarity between a document and a centroid, then the criterion function
becomes the following:
   maximize I_2 = sum_{r=1}^{k} sum_{d_i in S_r} cos(d_i, C_r) = sum_{r=1}^{k} ||D_r||.   (4)
Comparing the I_2 criterion function with I_1 we can see that the essential difference between these criterion functions is that I_2 scales the within-cluster similarity by the ||D_r|| term as opposed to the n_r term used by I_1. The term ||D_r|| is nothing more than the square-root of the pairwise similarity between all the documents in S_r, and will tend to emphasize the importance of clusters (beyond the ||D_r||^2 term) whose documents have smaller pairwise similarities compared to clusters with higher pairwise similarities. Also note that if the similarity between a document and the centroid vector of its cluster is defined as just the dot-product of these vectors, then we will get back the I_1 criterion function.
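The closed forms sum_r ||D_r||^2 / n_r and sum_r ||D_r|| make both internal criteria easy to evaluate from the cluster composite vectors alone. A minimal sketch in plain Python (toy two-term documents, not the actual implementation):

```python
from math import sqrt

def composite(cluster):
    """Component-wise sum of the document vectors in a cluster."""
    return [sum(c) for c in zip(*cluster)]

def norm(v):
    return sqrt(sum(x * x for x in v))

def I1(clusters):
    """I1 = sum_r ||D_r||^2 / n_r  (Equation 3)."""
    return sum(norm(composite(S_r)) ** 2 / len(S_r) for S_r in clusters)

def I2(clusters):
    """I2 = sum_r ||D_r||  (Equation 4)."""
    return sum(norm(composite(S_r)) for S_r in clusters)

d1, d2 = [1.0, 0.0], [0.0, 1.0]        # two orthogonal unit documents
assert abs(I1([[d1], [d2]]) - 2.0) < 1e-12
assert abs(I2([[d1], [d2]]) - 2.0) < 1e-12
# Merging two orthogonal documents into one cluster lowers both criteria:
assert I1([[d1, d2]]) < 2.0 and I2([[d1, d2]]) < 2.0
```

The last assertion illustrates why both maximized criteria favor separating dissimilar documents: merging two orthogonal documents shrinks the composite-vector norms.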
External Criterion Functions It is quite hard to define external criterion functions that lead to meaningful clustering
solutions. For example, it may appear that an intuitive external function may be derived by requiring that the
centroid vectors of the different clusters are as mutually orthogonal as possible, i.e., they contain documents that share
very few terms across the different clusters. However, for many problems this criterion function has trivial solutions
that can be achieved by assigning to the first k - 1 clusters a single document that shares very few terms with the
rest, and then assigning the rest of the documents to the kth cluster. For this reason, the external function that we
will discuss tries to separate the documents of each cluster from the entire collection, as opposed trying to separate
the documents among the different clusters. This external criterion function was motivated by multiple discriminant
analysis and is similar to minimizing the trace of the between-cluster scatter matrix [9, 30].
In particular, our external criterion function is defined as

   minimize E_1 = sum_{r=1}^{k} n_r cos(C_r, C),   (5)

where C is the centroid vector of the entire collection. From this equation we can see that we try to minimize the cosine between the centroid vector of each cluster and the centroid vector of the entire collection. By minimizing the cosine we essentially try to increase the angle between them as much as possible. Also note that the contribution of each cluster is weighted based on the cluster size, so that larger clusters will weigh heavier in the overall clustering solution. Equation 5 can be re-written as

   E_1 = sum_{r=1}^{k} n_r (D_r^t D) / (||D_r|| ||D||),

where D is the composite vector of the entire document collection. Note that since 1/||D|| is constant irrespective of the clustering solution, the criterion function can be re-stated as:

   minimize E_1 = sum_{r=1}^{k} n_r (D_r^t D) / ||D_r||.   (6)
As we can see from Equation 6, even though our initial motivation was to define an external criterion function, because we used the cosine function to measure the separation between the cluster and the entire collection, the criterion function does take into account the within-cluster similarity of the documents (due to the ||D_r|| term). Thus, E_1 is actually a hybrid criterion function that combines both external and internal characteristics of the clusters.
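In the simplified form of Equation 6, E_1 is equally easy to compute. In the toy check below the well-separated clustering attains the lower (better) value, as expected for a minimized criterion (illustrative sketch only):

```python
from math import sqrt

def composite(docs):
    return [sum(c) for c in zip(*docs)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def E1(clusters):
    """E1 = sum_r n_r * (D_r^t D) / ||D_r||  (Equation 6, minimized; the
    constant 1/||D|| factor is dropped as in the text).  A sketch, not
    the authors' code."""
    D = composite([d for S_r in clusters for d in S_r])
    total = 0.0
    for S_r in clusters:
        D_r = composite(S_r)
        total += len(S_r) * dot(D_r, D) / sqrt(dot(D_r, D_r))
    return total

d1, d2 = [1.0, 0.0], [0.0, 1.0]
separated = [[d1, d1], [d2, d2]]   # two pure clusters
mixed     = [[d1, d2], [d1, d2]]   # each cluster mixes both topics
assert E1(separated) < E1(mixed)
```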
Hybrid Criterion Functions In our study, we will focus on two hybrid criterion functions that are obtained by combining criterion I_1 and I_2 with E_1, respectively. Formally, the first criterion function is

   maximize H_1 = I_1 / E_1,   (7)

and the second is

   maximize H_2 = I_2 / E_1.   (8)

Note that since E_1 is minimized, both H_1 and H_2 need to be maximized as they are inversely related to E_1.
Graph Based Criterion Functions An alternate way of viewing the relations between the documents is to
use similarity graphs. Given a collection of n documents S, the similarity graph G s is obtained by modeling each
document as a vertex, and having an edge between each pair of vertices whose weight is equal to the similarity between
the corresponding documents. Viewing the documents in this fashion, a number of internal, external, or combined
criterion functions can be defined that measure the overall clustering quality. In our study we will investigate one such
criterion function called MinMaxCut, which was proposed recently [8]. MinMaxCut falls under the category of criterion functions that combine both the internal and external views of the clustering process and is defined as [8]

   minimize G_1 = sum_{r=1}^{k} cut(S_r, S - S_r) / sum_{d_i, d_j in S_r} sim(d_i, d_j),   (9)

where cut(S_r, S - S_r) is the edge-cut between the vertices in S_r and the rest of the vertices in the graph S - S_r. The edge-cut between two sets of vertices A and B is defined to be the sum of the edges connecting vertices in A to vertices in B. The motivation behind this criterion function is that the clustering process can be viewed as that of partitioning the documents into groups by minimizing the edge-cut of each partition. However, for reasons similar to those discussed in Section 3.1, such an external criterion may have trivial solutions, and for this reason each edge-cut is scaled by the sum of the internal edges. As shown in [8], this scaling leads to better balanced clustering solutions.

If we use the cosine function to measure the similarity between the documents, and Equations 1 and 2, then the above criterion function can be re-written as

   G_1 = sum_{r=1}^{k} (D_r^t D - ||D_r||^2) / ||D_r||^2 = sum_{r=1}^{k} D_r^t D / ||D_r||^2 - k,

and since k is constant, the criterion function can be simplified to

   minimize G_1 = sum_{r=1}^{k} D_r^t D / ||D_r||^2.   (10)
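The simplified form sum_r D_r^t D / ||D_r||^2 needs only dot products of composite vectors. A small sketch (unit-length toy documents; illustrative, not the actual implementation):

```python
def composite(docs):
    return [sum(c) for c in zip(*docs)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def G1(clusters):
    """Simplified MinMaxCut: sum_r (D_r^t D) / ||D_r||^2, to be minimized."""
    D = composite([d for S_r in clusters for d in S_r])
    total = 0.0
    for S_r in clusters:
        D_r = composite(S_r)
        total += dot(D_r, D) / dot(D_r, D_r)
    return total

d1, d2 = [1.0, 0.0], [0.0, 1.0]
# Separating the two topics gives a lower (better) value than mixing them.
assert G1([[d1, d1], [d2, d2]]) < G1([[d1, d2], [d1, d2]])
```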
3.2 Criterion Function Optimization
Our partitional algorithm uses an approach inspired by the K-means algorithm to optimize each one of the above
criterion functions, and is similar to that used in [28, 34]. The details of this algorithm are provided in the remaining
of this section.
Initially, a random pair of documents is selected from the collection to act as the seeds of the two clusters. Then, for
each document, its similarity to these two seeds is computed, and it is assigned to the cluster corresponding to its most
similar seed. This forms the initial two-way clustering. This clustering is then repeatedly refined so that it optimizes
the desired clustering criterion function.
The refinement strategy that we used consists of a number of iterations. During each iteration, the documents
are visited in a random order. For each document, d i , we compute the change in the value of the criterion function
obtained by moving d i to one of the other k - 1 clusters. If there exist some moves that lead to an improvement in the
overall value of the criterion function, then d i is moved to the cluster that leads to the highest improvement. If no such
cluster exists, d i remains in the cluster that it already belongs to. The refinement phase ends, as soon as we perform
an iteration in which no documents moved between clusters. Note that unlike the traditional refinement approach used
by K-means type of algorithms, the above algorithm moves a document as soon as it is determined that it will lead to
an improvement in the value of the criterion function. This type of refinement algorithm is often called incremental [9]. Since each move directly optimizes the particular criterion function, this refinement strategy always converges to a local minimum. Furthermore, because the various criterion functions that use this refinement strategy are defined in
terms of cluster composite and centroid vectors, the change in the value of the criterion functions as a result of single
document moves can be computed efficiently.
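As a concrete illustration, the incremental refinement loop for a single bisection can be sketched as follows. This is a minimal sketch, not the actual implementation: it assumes unit-length document vectors and an I 2 -style criterion that maximizes the sum of the two clusters' composite-vector norms, and all names are ours.

```python
import numpy as np

def refine_bisection(docs, labels, rng):
    """Incrementally refine a two-way clustering so that it locally maximizes
    an I2-style criterion: the sum of the two clusters' composite-vector
    norms.  `docs` is an (n, m) array of unit-length document vectors and
    `labels[i]` in {0, 1} is document i's current cluster."""
    composites = [docs[labels == c].sum(axis=0) for c in (0, 1)]
    moved = True
    while moved:                                  # stop after a pass with no moves
        moved = False
        for i in rng.permutation(len(docs)):      # visit documents in random order
            src, dst = labels[i], 1 - labels[i]
            # Criterion change if document i moves from `src` to `dst`.
            gain = (np.linalg.norm(composites[src] - docs[i])
                    + np.linalg.norm(composites[dst] + docs[i])
                    - np.linalg.norm(composites[src])
                    - np.linalg.norm(composites[dst]))
            if gain > 1e-12:                      # move as soon as it helps (incremental)
                composites[src] -= docs[i]
                composites[dst] += docs[i]
                labels[i] = dst
                moved = True
    return labels
```

Because composite vectors are maintained incrementally, each candidate move is evaluated in time proportional to the vector dimension rather than the cluster sizes, which is the efficiency property noted above.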
The greedy nature of the refinement algorithm does not guarantee that it will converge to a global minimum, and the
local minimum it obtains depends on the particular set of seed documents that were selected during the initial
clustering. To eliminate some of this sensitivity, the overall process is repeated a number of times. That is, we compute
N different clustering solutions (i.e., initial clustering followed by cluster refinement), and the one that achieves the
best value for the particular criterion function is kept. In all of our experiments, we used N = 10. For the rest of this
discussion, when we refer to the clustering solution we will mean the solution that was obtained by selecting the best
out of these N potentially different solutions.
3.3 Cluster Selection
We experimented with two different methods for selecting which cluster to bisect next. The first method uses the
simple strategy of bisecting the largest cluster available at that point of the clustering solution. Our earlier experience
with this approach showed that it leads to reasonably good and balanced clustering solutions [28, 34]. However, its
limitation is that it cannot gracefully operate in datasets in which the natural clusters are of different sizes, as it will
tend to partition those larger clusters first. To overcome this problem and obtain more natural hierarchical solutions,
we developed a method that, among the current k clusters, selects the cluster whose bisection leads to the (k + 1)-way clustering
solution that optimizes the value of the particular criterion function (among the k different choices). Our experiments
showed that this approach performs somewhat better than the previous scheme, and is the method that we used in the
experiments presented in Section 5.
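This second selection scheme can be sketched as follows. The `bisect` and `criterion` callables are hypothetical stand-ins for any concrete bisection routine and criterion function; higher criterion values are assumed to be better.

```python
def select_cluster_to_bisect(clusters, bisect, criterion):
    """Tentatively bisect each of the current k clusters and keep the split
    that yields the best (k+1)-way criterion value.  `bisect(c)` returns two
    sub-clusters and `criterion(clustering)` scores a whole clustering."""
    best_score, best = float("-inf"), None
    for i, cluster in enumerate(clusters):
        left, right = bisect(cluster)
        trial = clusters[:i] + clusters[i + 1:] + [left, right]
        score = criterion(trial)
        if score > best_score:
            best_score, best = score, (i, left, right)
    return best   # index of the chosen cluster and its two halves
```

For example, with a criterion that penalizes unbalanced cluster sizes, the scheme prefers bisecting a large cluster over a small one only when doing so actually improves the overall (k + 1)-way solution.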
3.4 Computational Complexity
One of the advantages of our partitional algorithm, and of other similar partitional algorithms, is that it has relatively
low computational requirements. A two-way clustering of a set of documents can be computed in time linear in the number
of documents, as in most cases the number of iterations required by the greedy refinement algorithm is small (less than
20) and is, to a large extent, independent of the number of documents. Now, if we assume that during each bisection
step, the resulting clusters are reasonably balanced (i.e., each cluster contains a fraction of the original documents),
then the overall amount of time required to compute all n - 1 bisections is O(n log n).
4 Hierarchical Agglomerative Clustering Algorithms
Unlike the partitional algorithms that build the hierarchical solution from top to bottom, agglomerative algorithms build
the solution by initially assigning each document to its own cluster and then repeatedly selecting and merging pairs
of clusters, to obtain a single all-inclusive cluster. Thus, agglomerative algorithms build the tree from bottom (i.e., its
leaves) toward the top (i.e., root).
4.1 Cluster Selection Schemes
The key parameter in agglomerative algorithms is the method used to determine the pairs of clusters to be merged at
each step. In most agglomerative algorithms, this is accomplished by selecting the most similar pair of clusters, and
numerous approaches have been developed for computing the similarity between two clusters[27, 19, 16, 10, 11, 18].
In our study we used the single-link, complete-link, and UPGMA schemes, as well as, the various partitional criterion
functions described in Section 3.1.
The single-link [27] scheme measures the similarity of two clusters by the maximum similarity between the documents
from each cluster. That is, the similarity between two clusters S i and S j is given by

sim_single-link(S i , S j ) = max_{d i ∈ S i , d j ∈ S j } cos(d i , d j ). (10)

In contrast, the complete-link scheme [19] uses the minimum similarity between a pair of documents to measure the
same similarity. That is,

sim_complete-link(S i , S j ) = min_{d i ∈ S i , d j ∈ S j } cos(d i , d j ). (11)

In general, both the single- and the complete-link approaches do not work very well because they either base their
decisions on a limited amount of information (single-link), or they assume that all the documents in the cluster are very
similar to each other (complete-link). The UPGMA scheme [16] (also known as group average) overcomes
these problems by measuring the similarity of two clusters as the average of the pairwise similarities of the documents
from each cluster. That is,

sim_UPGMA(S i , S j ) = (1 / (n i n j )) Σ_{d i ∈ S i , d j ∈ S j } cos(d i , d j ). (12)
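For unit-length document vectors, the three selection schemes reduce to the maximum, minimum, and mean of the pairwise cosine-similarity matrix. A minimal sketch (naming is ours):

```python
import numpy as np

def cluster_similarity(A, B, scheme):
    """Similarity between clusters A and B (rows are unit-length document
    vectors) under the three classical cluster-selection schemes."""
    sims = A @ B.T                       # all pairwise cosine similarities
    if scheme == "single-link":
        return sims.max()                # most similar pair of documents
    if scheme == "complete-link":
        return sims.min()                # least similar pair of documents
    if scheme == "UPGMA":
        return sims.mean()               # average pairwise similarity
    raise ValueError(f"unknown scheme: {scheme}")
```

The contrast between the three schemes is visible directly: single-link needs only one close pair, complete-link needs every pair to be close, and UPGMA averages over all of them.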
The partitional criterion functions, described in Section 3.1, can be converted into cluster selection schemes for agglomerative
clustering using the general framework of stepwise optimization [9], as follows. Consider an n-document
dataset and the clustering solution that has been computed after performing l merging steps. This solution will contain
exactly n - l clusters, as each merging step reduces the number of clusters by one. Now, given this (n - l)-way
clustering solution, the pair of clusters that is selected to be merged next, is the one that leads to an (n - l - 1)-way
solution that optimizes the particular criterion function. That is, each one of the (n - l)(n - l - 1)/2 pairs of possible
merges is evaluated, and the one that leads to a clustering solution that has the maximum (or minimum) value of the
particular criterion function is selected. Thus, the criterion function is locally optimized within the particular stage of
the agglomerative algorithm. This process continues until the entire agglomerative tree has been obtained.
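The stepwise-optimization framework can be sketched as a greedy merge loop. This is an O(n^3) illustration only; `merge_score(a, b)` is a hypothetical stand-in for the change in any particular criterion function, with higher values better.

```python
from itertools import combinations

def agglomerate(clusters, merge_score):
    """At each step merge the pair of clusters whose merge best improves the
    criterion, until a single all-inclusive cluster remains.  Returns the
    sequence of merged pairs (the agglomerative tree, bottom to top)."""
    history = []
    while len(clusters) > 1:
        i, j = max(combinations(range(len(clusters)), 2),
                   key=lambda p: merge_score(clusters[p[0]], clusters[p[1]]))
        history.append((clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return history
```

Each iteration locally optimizes the criterion within that stage of the agglomeration, exactly as described above.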
4.2 Computational Complexity
There are two main computationally expensive steps in agglomerative clustering. The first step is the computation of
the pairwise similarity between all the documents in the data set. The complexity of this step is, in general, O(n^2),
because the average number of terms in each document is small and independent of n.
The second step is the repeated selection of the pair of most similar clusters or the pair of clusters that best optimizes
the criterion function. A naive way of performing that is to recompute the gains achieved by merging each pair of
clusters after each level of the agglomeration, and select the most promising pair. During the lth agglomeration step,
this will require O((n - l)^2) time, leading to an overall complexity of O(n^3). Fortunately, the complexity of this step
can be reduced for single-link, complete-link, UPGMA, and all the criterion functions except H 1 and H 2 . This is because the pairwise similarities or
the improvements in the value of the criterion function achieved by merging a pair of clusters i and j does not change
during the different agglomerative steps, as long as i or j is not selected to be merged. Consequently, the different
similarities or gains in the value of the criterion function can be computed once for each pair of clusters and inserted
into a priority queue. As a pair of clusters i and j is selected to be merged to form cluster p, then the priority queue is
updated so that any gains corresponding to cluster pairs involving either i or j are removed, and the gains of merging
the rest of the clusters with the newly formed cluster p are inserted. During the lth agglomeration step, that involves
O(n - l) priority queue delete and insert operations. If the priority queue is implemented using a binary heap, the total
complexity of these operations is O((n - l) log(n - l)), and the overall complexity over the n - 1 agglomeration steps
is O(n^2 log n).
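For the schemes whose pairwise gains stay fixed, the priority-queue approach can be sketched with lazy deletion: stale entries involving already-merged clusters are simply discarded when popped. This is an illustrative sketch, not the actual implementation, and all names are ours.

```python
import heapq

def agglomerate_with_heap(n, sim):
    """Agglomeration with a priority queue and lazy deletion.  `sim(a, b)`
    scores merging clusters a and b, and must stay fixed while both remain
    unmerged (true for single-link, complete-link, UPGMA, I1 and I2, but
    not for H1 and H2).  Clusters are frozensets of document ids."""
    clusters = {i: frozenset([i]) for i in range(n)}
    heap = [(-sim(clusters[i], clusters[j]), i, j)
            for i in range(n) for j in range(i + 1, n)]
    heapq.heapify(heap)                    # all initial pairwise gains, once
    next_id, merges = n, []
    while len(clusters) > 1:
        _, i, j = heapq.heappop(heap)
        if i not in clusters or j not in clusters:
            continue                       # stale pair: one side already merged
        merged = clusters.pop(i) | clusters.pop(j)
        for k in clusters:                 # score the new cluster against the rest
            heapq.heappush(heap, (-sim(merged, clusters[k]), next_id, k))
        clusters[next_id] = merged
        merges.append(merged)
        next_id += 1
    return merges
```

Lazy deletion avoids searching the heap for invalidated pairs; each agglomeration step performs only the O(n - l) insertions described above, plus cheap discards on pop.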
Unfortunately, the O(n^3) complexity of the naive approach cannot be reduced for the H 1 and H 2
criterion functions, because the improvement in the overall value of the criterion function when a pair of clusters i and
j is merged tends to change for all pairs of clusters. As a result, these improvements cannot be pre-computed and inserted into a
priority queue.
4.3 Constrained Agglomerative Clustering
One of the advantages of partitional clustering algorithms is that they use information about the entire collection of
documents when they partition the dataset into a certain number of clusters. On the other hand, the clustering decisions
made by agglomerative algorithms are local in nature. This has both its advantages as well as its disadvantages. The
advantage is that it is easy for them to group together documents that form small and reasonably cohesive clusters, a
task in which partitional algorithms may fail as they may split such documents across cluster boundaries early during
the partitional clustering process (especially when clustering large collections). However, their disadvantage is that if
the documents are not part of particularly cohesive groups, then the initial merging decisions may contain some errors,
which will tend to be multiplied as the agglomeration progresses. This is especially true for the cases in which there
are a large number of equally good merging alternatives for each cluster.
One way of improving agglomerative clustering algorithms by eliminating this type of errors, is to use a partitional
clustering algorithm to constrain the space over which agglomeration decisions are made, so that each document is only
allowed to merge with other documents that are part of the same partitionally discovered cluster. In this approach, a
partitional clustering algorithm is used to compute a k-way clustering solution. Then, each of these clusters is treated as
a separate collection and an agglomerative algorithm is used to build a tree for each one of them. Finally, the k different
trees are combined into a single tree by merging them using an agglomerative algorithm that treats the documents of
each subtree as a cluster that has already been formed during agglomeration. The advantage of this approach is that
it is able to benefit from the global view of the collection used by partitional algorithms and the local view used by
agglomerative algorithms. An additional advantage is that the computational complexity of constrained clustering is
O(k((n/k)^2 log(n/k)) + k^2 log k), where k is the number of intermediate partitional clusters. If k is reasonably large, e.g.,
k equals √n, the original O(n^2 log n) complexity of agglomerative algorithms is reduced to O(n^{3/2} log n).
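The three-step scheme can be summarized structurally. This is a skeleton only: `partition`, `build_tree`, and `merge_trees` are placeholder callables standing in for any concrete partitional and agglomerative routines.

```python
def constrained_agglomerative(docs, k, partition, build_tree, merge_trees):
    """Constrained agglomerative clustering in three steps: (1) compute a
    k-way partitional clustering, (2) build an agglomerative tree within
    each partitional cluster, (3) agglomeratively merge the k subtrees,
    treating each subtree's documents as an already-formed cluster."""
    parts = partition(docs, k)                 # step 1: k-way partition
    subtrees = [build_tree(part) for part in parts]  # step 2: tree per part
    return merge_trees(subtrees)               # step 3: combine the k subtrees
```

The structure makes the complexity argument visible: step 2 runs the agglomerative routine on k collections of roughly n/k documents each, and step 3 runs it once on only k objects.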
5 Experimental Results
We experimentally evaluated the performance of the various clustering methods to obtain hierarchical solutions using
a number of different datasets. In the rest of this section we first describe the various datasets and our experimental
methodology, followed by a description of the experimental results. The datasets as well as the various algorithms are
available in the CLUTO clustering toolkit, which can be downloaded from http://www.cs.umn.edu/~karypis/cluto.
5.1 Document Collections
In our experiments, we used a total of twelve different datasets, whose general characteristics are summarized in
Table 1. The smallest of these datasets contained 878 documents and the largest contained 4,069 documents. To
ensure diversity in the datasets, we obtained them from different sources. For all datasets, we used a stop-list to
remove common words, and the words were stemmed using Porter's suffix-stripping algorithm [25]. Moreover, any
term that occurs in fewer than two documents was eliminated.
Data Source # of documents # of terms # of classes
fbis FBIS (TREC) 2463 12674 17
hitech San Jose Mercury (TREC) 2301 13170 6
reviews San Jose Mercury (TREC) 4069 23220 5
la2 LA Times (TREC) 3075 21604 6
re0 Reuters-21578 1504 2886 13
re1 Reuters-21578 1657 3758 25
k1a WebACE 2340 13879 20
k1b WebACE 2340 13879 6
wap WebACE 1560 8460 20
Table 1: Summary of data sets used to evaluate the various clustering criterion functions.
The fbis dataset is from the Foreign Broadcast Information Service data of TREC-5 [31], and the classes correspond
to the categorization used in that collection. The hitech and reviews datasets were derived from the San Jose Mercury
newspaper articles that are distributed as part of the TREC collection (TIPSTER Vol. 3). Each one of these datasets
was constructed by selecting documents that are part of certain topics in which the various articles were categorized
(based on the DESCRIPT tag). The hitech dataset contained documents about computers, electronics, health, medical,
research, and technology; and the reviews dataset contained documents about food, movies, music, radio, and
restaurants. In selecting these documents we ensured that no two documents share the same DESCRIPT tag (which
can contain multiple categories). The la1 and la2 datasets were obtained from articles of the Los Angeles Times that
were used in TREC-5 [31]. The categories correspond to the desk of the paper in which each article appeared, and include
documents from the entertainment, financial, foreign, metro, national, and sports desks. Datasets tr31 and tr41 are
derived from TREC-5 [31], TREC-6 [31], and TREC-7 [31] collections. The classes of these datasets correspond to
the documents that were judged relevant to particular queries. The datasets re0 and re1 are from Reuters-21578 text
categorization test collection Distribution 1.0 [21]. We divided the labels into two sets and constructed datasets
accordingly. For each dataset, we selected documents that have a single label. Finally, the datasets k1a, k1b, and wap are
from the WebACE project [23, 12, 2, 3]. Each document corresponds to a web page listed in the subject hierarchy of
Yahoo! [32]. The datasets k1a and k1b contain exactly the same set of documents but they differ in how the documents
were assigned to different classes. In particular, k1a contains a finer-grain categorization than that contained in k1b.
5.2 Experimental Methodology and Metrics
For each one of the different datasets we obtained hierarchical clustering solutions using the various partitional and
agglomerative clustering algorithms described in Sections 3 and 4. The quality of a clustering solution was determined
by analyzing the entire hierarchical tree that is produced by a particular clustering algorithm. This is often done by
using a measure that takes into account the overall set of clusters that are represented in the hierarchical tree. One such
measure is the FScore measure, introduced by [20]. Given a particular class C r of size n r and a particular cluster S i
of size n i , suppose n ri documents in the cluster S i belong to C r ; then the FScore of this class and cluster is defined to be

F(C r , S i ) = 2 R(C r , S i ) P(C r , S i ) / (R(C r , S i ) + P(C r , S i )),

where R(C r , S i ) is the recall value defined as n ri /n r , and P(C r , S i ) is the precision value defined as n ri /n i for the
class C r and the cluster S i . The FScore of the class C r is the maximum FScore value attained at any node in the
hierarchical clustering tree T. That is,

F(C r ) = max_{S i ∈ T} F(C r , S i ).

The FScore of the entire clustering solution is then defined to be the sum of the individual class FScores weighted
according to the class size,

FScore = Σ_{r=1..c} (n r /n) F(C r ),

where c is the total number of classes. A perfect clustering solution will be the one in which every class has a
corresponding cluster containing the exactly same documents in the resulting hierarchical tree, in which case the
FScore will be one. In general, the higher the FScore values, the better the clustering solution is.
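The FScore computation can be expressed directly. In this sketch (naming is ours), `classes` maps each class label to the set of its document ids, and `tree_nodes` enumerates every cluster that appears in the hierarchical tree.

```python
def fscore(classes, tree_nodes):
    """Overall FScore: each class is matched with its best tree node, and the
    per-class FScores are averaged, weighted by class size."""
    n = sum(len(members) for members in classes.values())
    total = 0.0
    for members in classes.values():
        best = 0.0
        for node in tree_nodes:
            overlap = len(members & node)        # n_ri
            if overlap == 0:
                continue
            recall = overlap / len(members)      # n_ri / n_r
            precision = overlap / len(node)      # n_ri / n_i
            best = max(best, 2 * recall * precision / (recall + precision))
        total += len(members) / n * best
    return total
```

A tree that contains one node per class with exactly that class's documents scores 1.0; coarser trees score lower because precision drops at the best-matching node.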
5.3 Comparison of Partitional and Agglomerative Trees
Our first set of experiments was focused on evaluating the quality of the hierarchical clustering solutions produced by
various agglomerative algorithms and partitional algorithms. For agglomerative algorithms, nine selection schemes or
criterion functions have been tested including the six criterion functions discussed in Section 3.1, and the three traditional
selection schemes (i.e., single-link, complete-link and UPGMA). We named this set of agglomerative methods
directly with the name of the criterion function or selection scheme, e.g., "I 1 " means the agglomerative clustering
method with I 1 as the criterion function and "UPGMA" means the agglomerative clustering method with UPGMA
as the selection scheme. We also evaluated various repeated bisection algorithms using the six criterion functions
discussed in Section 3.1. We named this set of partitional methods by adding a letter "p" in front of the name of
the criterion function, e.g., "pI 1 " means the repeated bisection clustering method with I 1 as the criterion function.
Overall, we evaluated 15 hierarchical clustering methods.
The FScore results for the hierarchical trees for the various datasets and methods are shown in Table 2, where each
row corresponds to one method and each column corresponds to one dataset. The results in this table are provided
primarily for completeness. To evaluate the various methods, we summarized these results in two
ways: one is by looking at the average performance of each method over the entire set of datasets, and the other is by
comparing each pair of methods to see which method outperforms the other for most of the datasets.
The first way of summarizing the results is to average the FScore for each method over the twelve different datasets.
However, since the hierarchical tree quality for different datasets is quite different, we felt that such simple averaging
may distort the overall results. For this reason, we used averages of relative FScores as follows. For each dataset, we
divided the FScore obtained by a particular method by the largest FScore obtained for that particular dataset over the
15 methods. These ratios represent the degree to which a particular method performed worse than the best method for
that particular series of experiments. Note that for different datasets, the method that achieved the best hierarchical
tree as measured by FScore may be different. These ratios are less sensitive to the actual FScore values. We will refer
to these ratios as relative FScores. Since higher FScore values are better, all these relative FScore values are less
than one. Now, for each method we averaged these relative FScores over the various datasets. A method that has an
average relative FScore close to 1.0 will indicate that this method did the best for most of the datasets. On the other
hand, if the average relative FScore is low, then this method performed poorly.
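The averaging scheme can be sketched as follows (an illustration with our own naming; `scores[method][dataset]` holds the raw FScores):

```python
def average_relative_fscores(scores):
    """Divide each method's FScore by the best FScore obtained on that
    dataset, then average these ratios over all datasets per method."""
    datasets = list(next(iter(scores.values())))
    best = {d: max(s[d] for s in scores.values()) for d in datasets}
    return {method: sum(s[d] / best[d] for d in datasets) / len(datasets)
            for method, s in scores.items()}
```

Normalizing per dataset before averaging keeps an easy dataset (where all methods score high) from dominating the comparison.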
fbis hitech k1a k1b la1 la2 re0 re1 reviews tr31 tr41 wap
I 1 0.592 0.480 0.583 0.836 0.580 0.610 0.561 0.607 0.642 0.756 0.694 0.588
I 2 0.639 0.480 0.605 0.896 0.648 0.681 0.587 0.684 0.689 0.844 0.779 0.618
slink 0.481 0.393 0.375 0.655 0.369 0.365 0.465 0.445 0.452 0.532 0.674 0.435
clink 0.609 0.382 0.552 0.764 0.364 0.449 0.495 0.508 0.513 0.804 0.758 0.569
Table 2: The FScores for the different datasets for the hierarchical clustering solutions obtained via various hierarchical clustering methods.
The results of the relative FScores for various hierarchical clustering methods are shown in Table 3. Again, each
row of the table corresponds to one method, and each column of the table corresponds to one dataset. The average
relative FScore values are shown in the last column labeled "Average". The entries that are boldfaced correspond to
the methods that performed the best, and the entries that are underlined correspond to the methods that performed the
best among agglomerative methods or partitional methods.
fbis hitech k1a k1b la1 la2 re0 re1 reviews tr31 tr41 wap Average
I 1 0.843 0.826 0.836 0.927 0.724 0.775 0.878 0.801 0.738 0.847 0.833 0.824 0.821
I 2 0.910 0.826 0.868 0.993 0.809 0.865 0.919 0.902 0.792 0.945 0.935 0.866 0.886
slink 0.685 0.676 0.538 0.726 0.461 0.464 0.728 0.587 0.519 0.596 0.809 0.609 0.617
clink 0.868 0.657 0.792 0.847 0.454 0.571 0.775 0.670 0.590 0.900 0.910 0.797 0.736
Table 3: The relative FScores averaged over the different datasets for the hierarchical clustering solutions obtained via various hierarchical clustering methods.
A number of observations can be made by analyzing the results in Table 3. First, the repeated bisection method
with the I 2 criterion function (i.e., "pI 2 ") leads to the best solutions for most of the datasets. Over the entire set of
experiments, this method is either the best or within 6% of the best solution. On the average, the pI 2 method
outperforms the other partitional methods and agglomerative methods by 2%-8% and 7%-37%, respectively. Second,
the UPGMA method performs the best among agglomerative methods followed by the I 2 method. The two methods
together achieved the best hierarchical clustering solutions among agglomerative methods for all the datasets except
re0. On the average, the UPGMA and I 2 methods outperform the other agglomerative methods by 5%-30% and 2%-
27%, respectively. Third, partitional methods outperform agglomerative methods. Except for the pI 1 method, each
one of the remaining five partitional methods on the average performs better than all the nine agglomerative methods
by at least 5%. The pI 1 method performs a little bit worse than the UPGMA method and better than the rest of the
agglomerative methods. Fourth, single-link, complete-link and I 1 performed poorly among agglomerative methods
and pI 1 performed the worst among partitional methods. Finally, on the average, H 1 and H 2 are the agglomerative
methods that lead to the second best hierarchical clustering solutions among agglomerative methods. Whereas, pH 2
and pE 1 are the partitional methods that lead to the second best hierarchical clustering solutions among partitional
methods.
When the relative performance of different methods is close, the average relative FScores will be quite similar.
Table 4: Dominance matrix for various hierarchical clustering methods.
Hence, to make the comparisons of these methods easier, our second way of summarizing the results is to create a
dominance matrix for the various methods. As shown in Table 4, the dominance matrix is a 15 by 15 matrix, where
each row or column corresponds to one method and the value in each entry is the number of datasets for which the
method corresponding to the row outperforms the method corresponding to the column. For example, the value in
the entry of the row I 2 and the column E 1 is eight, which means for eight out of the twelve datasets, the I 2 method
outperforms the E 1 method. The values that are close to twelve indicate that the row method outperforms the column
method.
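Constructing such a dominance matrix is straightforward (a sketch with our own naming; `scores[method][dataset]` holds the FScores):

```python
def dominance_matrix(scores):
    """Entry [r][c] counts the datasets on which method r's FScore is
    strictly higher than method c's."""
    methods = list(scores)
    datasets = list(next(iter(scores.values())))
    return {r: {c: sum(scores[r][d] > scores[c][d] for d in datasets)
                for c in methods}
            for r in methods}
```

Unlike averaged relative FScores, this pairwise count is insensitive to the magnitude of the differences, which is why the two summaries complement each other.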
Similar observations can be made by analyzing the results in Table 4. First, partitional methods outperform agglomerative
methods. By looking at the left bottom part of the dominance matrix, we can see that all the entries
are close to twelve except two entries in the row of pI 1 , which means each partitional method performs better than
agglomerative methods for all or most of the datasets. Second, by looking at the submatrix of the comparisons within
agglomerative methods (i.e., the left top part of the dominance matrix), we can see that the UPGMA method performs
the best followed by I 2 and H 1 , and slink, clink and I 1 are the worst set of agglomerative methods. Third, from the
submatrix of the comparisons within partitional methods (i.e., the right bottom part of the dominance matrix), we can
see that pI 2 leads to better solutions than the other partitional methods for most of the datasets, followed by pH 2 , whereas pI 1
performed worse than the other partitional methods for most of the datasets.
5.4 Constrained Agglomerative Trees
Our second set of experiments was focused on evaluating the constrained agglomerative clustering methods. These
results were obtained by using the different criterion functions to find intermediate partitional clusters, and then using
UPGMA as the agglomerative scheme to construct the final hierarchical solutions as described in Section 4.3. UPGMA
was selected because it performed the best among the various agglomerative schemes.
The results of the constrained agglomerative clustering methods are shown in Table 5. Each dataset is shown in
a different subtable. There are six experiments performed for each dataset and each of them corresponds to a row.
The row labeled "UPGMA" contains the FScores for the hierarchical clustering solutions generated by the UPGMA
method with one intermediate cluster, which are the same for all the criterion functions. The rows labeled "10", "20",
"n/40" and "n/20" contain the FScores obtained by the constrained agglomerative methods using 10, 20, n/40 and
partitional clusters to constrain the solution, where n is the total number of documents in each dataset. The row
labeled "rb" contains the FScores for the hierarchical clustering solutions obtained by repeated bisection algorithms
with various criterion functions. The entries that are boldfaced correspond to the method that performed the best for
a particular criterion function, whereas the entries that are underlined correspond to the best hierarchical clustering
solution obtained for each dataset.
A number of observations can be made by analyzing the results in Table 5. First, for all the datasets except tr41, the
constrained agglomerative methods improved the hierarchical solutions obtained by agglomerative methods alone, no
matter what partitional clustering algorithm is used to obtain intermediate clusters. The improvement can be achieved
even with a small number of intermediate clusters. Second, in many cases, the constrained agglomerative methods
performed even better than the corresponding partitional methods. Finally, the partitional clustering methods that
improved the agglomerative hierarchical results the most are the same partitional clustering methods that performed
Table 5: Comparison of UPGMA, constrained agglomerative methods with 10, 20, n/40 and n/20 intermediate partitional clusters, and repeated bisection methods with various criterion functions (one subtable per dataset).
the best in terms of generating the whole hierarchical trees.
6 Discussion
The most important observation from the experimental results is that partitional methods performed better than agglomerative
methods. As discussed in Section 4.3, one of the limitations of agglomerative methods is that errors may
be introduced during the initial merging decisions, especially for the cases in which there are a large number of equally
good merging alternatives for each cluster. Without a high-level view of the overall clustering solution, it is hard for
agglomerative methods to make the right decision in such cases. Since the errors will be carried through and may be
multiplied as the agglomeration progresses, the resulting hierarchical trees suffer from those early stage errors. This
observation is also supported from the experimental results with the constrained agglomerative algorithms. We can see
in this case that once we constrain the space over which agglomeration decisions are made, even with a small number of
intermediate clusters, some early stage errors can be eliminated. As a result, the constrained agglomerative algorithms
improved the hierarchical solutions obtained by agglomerative methods alone. Since agglomerative methods can do a
better job of grouping together documents that form small and reasonably cohesive clusters than partitional methods,
the resulting hierarchical solutions by the constrained agglomerative methods are also better than partitional methods
alone for many cases.
Another surprising observation from the experimental results is that I 1 and UPGMA behave very differently. Recall
from Section 4.1 that the UPGMA method selects to merge the pair of clusters with the highest average pairwise
similarity. Hence, to some extent, via the agglomeration process, it tries to maximize the average pairwise similarity
between the documents of the discovered clusters. On the other hand, the I 1 method tries to find a clustering solution
that maximizes the sum of the average pairwise similarity of the documents in each cluster, weighted by the size of
the different clusters. Thus, I 1 can be considered as the criterion function that UPGMA tries to optimize. However,
our experimental results showed that I 1 performed significantly worse than UPGMA.
By looking at the FScore values for each individual class, we found that for most of the classes I 1 can produce
clusters with similar quality as UPGMA. However, I 1 performed poorly for a few large classes. For those classes, I 1
prefers to first merge in a loose subcluster of a different class, before it merges a tight subcluster of the same class.
This happens even if the subcluster of the same class has higher cross similarity than the subcluster of the different
class. This observation can be explained by the fact that I 1 tends to merge loose clusters first, which is shown in the
rest of this section.
From their definitions, the difference between I 1 and UPGMA is that I 1 takes into account the cross similarities
as well as the internal similarities of the clusters to be merged together. Let S i and S j be two of the candidate clusters of
size n i and n j , respectively; let μ i and μ j be the average pairwise similarity between the documents in S i and in S j ,
respectively; and let ξ ij be the average cross similarity between the documents in S i
and the documents in S j . UPGMA's merging decisions are based only on ξ ij . On the other hand,
I 1 will merge the pair of clusters that optimizes the overall objective function. The change of the overall objective
function after merging two clusters S i and S j to obtain cluster S r is given by

ΔI 1 = n i n j (2 ξ ij - μ i - μ j ) / (n i + n j ). (13)

From Equation 13, we can see that smaller μ i and μ j values will result in greater ΔI 1 values, which makes looser
clusters easier to be merged first. For example, consider three clusters S 1 , S 2 and S 3 . S 2 is tight (i.e., μ 2 is high) and of
the same class as S 1 , whereas S 3 is loose (i.e., μ 3 is low) and of a different class. Suppose S 2 and S 3 have similar sizes,
which means the value of ΔI 1 will be determined mainly by (2 ξ 12 - μ 1 - μ 2 ) and (2 ξ 13 - μ 1 - μ 3 ). It is possible that (2 ξ 13 - μ 1 - μ 3 )
is greater than (2 ξ 12 - μ 1 - μ 2 ) even if S 2 is closer to S 1 than S 3 (i.e., ξ 12 > ξ 13 ). As a
result, if two classes are close and of different tightness, I 1 may merge subclusters from each class together at early
stages and fail to form proper nodes in the resulting hierarchical tree corresponding to those two classes.
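A small numeric example makes the effect concrete. The similarity values below are hypothetical, and the sketch assumes ΔI 1 takes the form n i n j (2 ξ ij - μ i - μ j )/(n i + n j ) referenced as Equation 13 in the discussion above.

```python
def delta_i1(n_i, n_j, mu_i, mu_j, xi_ij):
    """Change in the I1 objective when merging clusters of sizes n_i, n_j
    with internal average similarities mu_i, mu_j and average cross
    similarity xi_ij (assumed form of Equation 13)."""
    return n_i * n_j * (2 * xi_ij - mu_i - mu_j) / (n_i + n_j)

# Hypothetical clusters: S1 (mu = 0.5); S2 tight (mu = 0.9), same class as
# S1, cross similarity xi_12 = 0.6; S3 loose (mu = 0.2), different class,
# xi_13 = 0.4.  Although S2 is closer to S1 (xi_12 > xi_13), I1 prefers
# to merge S1 with the loose cluster S3 first.
gain_12 = delta_i1(10, 10, 0.5, 0.9, 0.6)   # merge S1 with tight S2
gain_13 = delta_i1(10, 10, 0.5, 0.2, 0.4)   # merge S1 with loose S3
```

Here the tight cluster's high internal similarity μ 2 makes the merge look unattractive to I 1 even though its cross similarity to S 1 is higher, which is exactly the failure mode described above.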
7 Concluding Remarks
In this paper we experimentally evaluated nine agglomerative algorithms and six partitional algorithms to obtain hierarchical
clustering solutions for document datasets. We also introduced a new class of agglomerative algorithms
by constraining the agglomeration process using clusters obtained by partitional algorithms. Our experimental results
showed that partitional methods produced better hierarchical solutions than agglomerative methods, and that the constrained
agglomerative methods improved the clustering solutions obtained by agglomerative or partitional methods
alone. These results suggest that the poor performance of agglomerative methods may be attributed to the merging
errors they make during early stages, which can be eliminated to some extent by introducing partitional constraints.
References
On the merits of building categorization systems by supervised clustering.
Document categorization and query generation on the world wide web using WebACE.
Principal direction divisive partitioning.
Bayesian classification (autoclass): Theory and results.
Scatter/gather: A cluster-based approach to browsing large document collections
Concept decomposition for large sparse text data using clustering.
Spectral min-max cut for graph partitioning and data clustering
Pattern Classification.
CURE: An efficient clustering algorithm for large databases.
ROCK: a robust clustering algorithm for categorical attributes.
WebACE: A web agent for document categorization and exploartion.
Hypergraph based clustering in high-dimensional data sets: A summary of results
Spatial clustering methods in data mining: A survey.
Data clustering: A review.
Algorithms for Clustering Data.
Concept indexing: A fast dimensionality reduction algorithm with applications to document retrieval
Chameleon: A hierarchical clustering algorithm using dynamic modeling.
Fast and effective text mining using linear-time document clustering
Some methods for classification and analysis of multivariate observations.
Web page categorization and feature selection using association rule and principal component clustering.
Efficient and effective clustering method for spatial data mining.
An algorithm for suffix stripping.
Automatic Text Processing: The Transformation
Numerical Taxonomy.
A comparison of document clustering techniques.
Scalable approach to balanced
Pattern Recognition.
Criterion functions for document clustering: Experiments and analysis.
--TR
Algorithms for clustering data
Automatic text processing: the transformation, analysis, and retrieval of information by computer
Scatter/Gather: a cluster-based approach to browsing large document collections
Bayesian classification (AutoClass)
WebACE
Fast and effective text mining using linear-time document clustering
On the merits of building categorization systems by supervised clustering
Partitioning-based clustering for Web document categorization
Document Categorization and Query Generation on the World Wide Web Using WebACE
Principal Direction Divisive Partitioning
Chameleon
A Scalable Approach to Balanced, High-Dimensional Clustering of Market-Baskets
Efficient and Effective Clustering Methods for Spatial Data Mining
Pattern Classification (2nd Edition)
--CTR
Mohammed Attik , Shadi Al Shehabi , Jean-Charles Lamirel, Clustering quality measures for data samples with multiple labels, Proceedings of the 24th IASTED international conference on Database and applications, p.58-65, February 13-15, 2006, Innsbruck, Austria
Mihai Surdeanu , Jordi Turmo , Alicia Ageno, A hybrid unsupervised approach for document clustering, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Rebecca Cathey , Ling Ma , Nazli Goharian , David Grossman, Misuse detection for information retrieval systems, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Nachiketa Sahoo , Jamie Callan , Ramayya Krishnan , George Duncan , Rema Padman, Incremental hierarchical clustering of text documents, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Ying Lai , Wai Gen Yee, Clustering high-dimensional data using an efficient and effective data space reduction, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Tianming Hu , Ying Yu , Jinzhi Xiong , Sam Yuan Sung, Maximum likelihood combination of multiple clusterings, Pattern Recognition Letters, v.27 n.13, p.1457-1464, 1 October 2006
Jin Soung Yoo , Shashi Shekhar , John Smith , Julius P. Kumquat, A partial join approach for mining co-location patterns, Proceedings of the 12th annual ACM international workshop on Geographic information systems, November 12-13, 2004, Washington DC, USA
Amol Ghoting , Gregory Buehrer , Srinivasan Parthasarathy , Daehyun Kim , Anthony Nguyen , Yen-Kuang Chen , Pradeep Dubey, A characterization of data mining algorithms on a modern processor, Proceedings of the 1st international workshop on Data management on new hardware, June 12-12, 2005, Baltimore, Maryland
Dina Demner-Fushman , Jimmy Lin, Answer extraction, semantic clustering, and extractive summarization for clinical question answering, Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL, p.841-848, July 17-18, 2006, Sydney, Australia
Krishna Kummamuru , Rohit Lotlikar , Shourya Roy , Karan Singal , Raghu Krishnapuram, A hierarchical monothetic document clustering algorithm for summarization and browsing search results, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Tianming Hu , Sam Yuan Sung, Consensus clustering, Intelligent Data Analysis, v.9 n.6, p.551-565, November 2005
Gne Erkan, Language model-based document clustering using random walks, Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, p.479-486, June 04-09, 2006, New York, New York
Gautam Pant , Kostas Tsioutsiouliklis , Judy Johnson , C. Lee Giles, Panorama: extending digital libraries with topical crawlers, Proceedings of the 4th ACM/IEEE-CS joint conference on Digital libraries, June 07-11, 2004, Tuscon, AZ, USA
David Cheng , Santosh Vempala , Ravi Kannan , Grant Wang, A divide-and-merge methodology for clustering, Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 13-15, 2005, Baltimore, Maryland
Richi Nayak , Wina Iryadi, XML schema clustering with semantic and hierarchical similarity measures, Knowledge-Based Systems, v.20 n.4, p.336-349, May, 2007
David Cheng , Ravi Kannan , Santosh Vempala , Grant Wang, A divide-and-merge methodology for clustering, ACM Transactions on Database Systems (TODS), v.31 n.4, p.1499-1525, December 2006
Jack G. Conrad , Khalid Al-Kofahi , Ying Zhao , George Karypis, Effective document clustering for large heterogeneous law firm collections, Proceedings of the 10th international conference on Artificial intelligence and law, June 06-11, 2005, Bologna, Italy
Ying Zhao , George Karypis , Usama Fayyad, Hierarchical Clustering Algorithms for Document Datasets, Data Mining and Knowledge Discovery, v.10 n.2, p.141-168, March 2005
Rebecca J. Cathey , Eric C. Jensen , Steven M. Beitzel , Ophir Frieder , David Grossman, Exploiting parallelism to support scalable hierarchical clustering, Journal of the American Society for Information Science and Technology, v.58 n.8, p.1207-1221, June 2007 | partitional clustering;agglomerative clustering;hierarchical clustering |
585199 | Compactly supported radial basis functions for shallow water equations. | This paper presents the application of compactly supported radial basis functions (CSRBFs) to solving a system of shallow water hydrodynamics equations. The proposed scheme is derived from the idea of piecewise polynomial interpolation using a function of Euclidean distance. The compactly supported basis functions consist of a polynomial which is non-zero on [0, 1) and vanishes on [1, ∞). This reduces the original full resultant matrix to a sparse matrix. Operating on a banded matrix system reduces the ill-conditioning of the resultant coefficient matrix that arises from the use of global radial basis functions. To illustrate the computational efficiency and accuracy of the method, the globally supported and CSRBF schemes are compared. The resulting banded matrix shows improvement in both ill-conditioning and computational efficiency. The numerical solutions are verified against observed data, and excellent agreement is shown between the simulated and the observed values. | Introduction
In marine environments, field monitoring programs are often limited in their ability
to measure spatial and temporal variations in data. This scarcity of information
and a lack of known data about biological and physical phenomena make
studying these variables extremely difficult. Because of the complicated interaction
between hydrological and physical processes, mathematical modelling is
commonly used to make predictions of future events. As a consequence, this
1 School of Science and Technology, Open University of Hong Kong, Hong Kong
2 Department of Mathematics, City University of Hong Kong, Hong Kong, Hong Kong
3 2025 University Circle, Las Vegas, Nevada 89119, USA
study aims to numerically simulate the spatial and temporal variation of tidal
currents and water velocities in marine environments.
In this study we introduce an efficient scheme based on radial basis functions
(RBFs). RBFs were originally devised for scattered geographical data interpolation
by Hardy, who introduced a class of functions called multiquadrics [1].
These functions have been widely adopted to solve hyperbolic, parabolic and
elliptic partial differential equations [2] to [8].

Since the computational cost of using global RBFs is a major factor to
be considered when solving large scale problems with many collocation points,
various techniques have been suggested to improve computational efficiency. A
common approach is to use domain decomposition. In [9] Dubal used domain
decomposition with a blending technique to cope with the full coefficient matrix
arising from a multiquadric approximation of the solution of a linear PDE. In [10]
Wong et al developed an efficient multizone decomposition algorithm associated
with multiquadric approximation for a system of non-linear time-dependent
equations. Using this technique, the problem of ill-conditioning was reduced
by decreasing the size of the full coefficient matrix. In addition, multizone
decomposition readily lends itself to parallelization.

However, traditional domain decomposition methods often use iterative corrections
to impose smoothness across internal boundaries, and this can increase
computational costs in complicated problems. In this study, we consider an
alternative method to increase computational efficiency by using compactly
supported RBFs, which enable one to work with sparse banded matrices.

The paper is organized as follows. In Section 2 we discuss the basic theory
of the RBF method, and the application of compactly supported RBFs is
introduced in Section 3. Section 4 discusses two numerical examples based on
the shallow water equations. The paper ends with a discussion of our results in
Section 5.
2 Application of Radial Basis Functions

The use of radial basis functions to solve PDEs may be viewed as a generalization
of the following multivariate interpolation problem. Let f(x) be
a real-valued function known at a set of distinct points {x_j}, j = 1, ..., N, in R^d.
Let φ be a positive definite radial basis function (see
Definition 3.1), where ||x - x_j|| is the Euclidean distance between x and x_j.
Consider an approximation s(x) to f(x) of the form

    s(x) = Σ_{j=1}^N λ_j φ(||x - x_j||),   (2.1)

where {λ_j} are the unknown coefficients, determined by setting s(x_i) = f(x_i)
for i = 1, ..., N. This yields a system of linear equations
which can be expressed in matrix form as

    A λ = f,   (2.2)

where A = [φ(||x_i - x_j||)] is an N x N matrix and λ, f are N x 1
matrices. Since φ is positive definite, the matrix A is non-singular, so (2.2) has
a unique solution which can be determined by Gaussian elimination.

Generalizing this, if L is a linear partial differential operator, then an approximation
u_N to the solution u of Lu = f can be obtained by letting u_N be of the form
(2.1), where the coefficients {λ_j} are obtained by setting

    L u_N(x_i) = f(x_i),  i = 1, ..., N.   (2.4)

If the matrix [L φ(||x_i - x_j||)] is non-singular, then (2.4) has a unique
solution, and {λ_j} can again be obtained by Gaussian elimination. In general L φ
is not positive definite, even if φ is, so a general theory for this approach is not
yet available. However, numerous numerical studies over the past decade have
shown that the collocation matrix is invertible in many cases and that u_N can provide an accurate
approximation to u for sufficiently large N.
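As a concrete illustration of the interpolation step in (2.1)-(2.2), the sketch below builds the collocation matrix and solves for the coefficients. The multiquadric basis, the node locations and the target function are our own illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of RBF interpolation (2.1)-(2.2) in one dimension, using the
# multiquadric phi(r) = sqrt(r^2 + c^2); c = 0.5 is an assumed shape parameter.
def mq(r, c=0.5):
    return np.sqrt(r**2 + c**2)

def rbf_interpolant(x_nodes, f_vals, c=0.5):
    """Return a callable s(x) = sum_j lam_j * phi(|x - x_j|)."""
    A = mq(np.abs(x_nodes[:, None] - x_nodes[None, :]), c)  # N x N matrix A
    lam = np.linalg.solve(A, f_vals)                        # solve A lam = f
    return lambda x: mq(np.abs(np.asarray(x)[:, None] - x_nodes[None, :]), c) @ lam

x = np.linspace(0.0, 1.0, 9)
f = np.sin(2 * np.pi * x)
s = rbf_interpolant(x, f)
# The interpolant reproduces the data at the collocation points.
print(np.max(np.abs(s(x) - f)))
```

The same structure carries over to (2.4): one only replaces the evaluation matrix by the matrix of L applied to the basis functions.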
In recent years an alternative procedure, based on Hermite rather than Lagrange interpolation,
has been proposed by Fasshauer [11]
and further studied by Franke and Schaback [12, 13]. Here one uses the basis
functions L^y φ(x - y), where L^y denotes the operation of L with respect to y
in φ(x - y). In this case one can show under quite general conditions that the
corresponding coefficient matrix is positive definite if φ is symmetric, smooth
and positive definite [12]. One can then guarantee the solvability of the
equations (2.4) where φ is replaced by L^y φ(x - y). However, for many complicated
problems, such as the non-linear hydrodynamic equations studied in this
paper, the Hermite approach may not be feasible, as indicated in the paper by
Wong et al [10], and so one must resort to the Lagrange approach in (2.4). Many
of these ideas can be easily generalized to the case where φ is only conditionally
positive definite [14], in which case one needs to add a polynomial of suitable degree
to the interpolant s(x) in (2.1), and additional conditions must be satisfied to
guarantee its uniqueness [6].
Although there are many possible radial basis functions, the most commonly
used have been the multiquadrics (MQs) φ(r) = (r² + c²)^{1/2}, the thin plate splines
φ(r) = r² log r (in R²) and the Gaussians φ(r) = e^{-cr²}. These functions
are globally supported and will generate a system of equations in (2.4) with a
full matrix. However, as shown by Madych and Nelson [15], the MQs can be
exponentially convergent, so one can often use a relatively small number of basis
elements, which can be computationally efficient. As a consequence, the MQ
method has been progressively refined and widely used in [3] to [8] for solving
partial differential equations in various scientific and engineering disciplines.
For example, Hon et al [8] used the MQ method to solve a system of time-dependent
hydrodynamics equations, and this was extended by Wong et al to
study the coupled hydrodynamics and mass transport equations [16] to predict
pollutant concentrations. Their numerical results demonstrated that the MQ
scheme was more accurate and efficient than the finite element method. The
MQ method, however, requires one to solve a system of equations with a full
matrix. This can be extremely expensive when there are hundreds of collocation
points.

To overcome this problem, we develop a scheme based on compactly supported
radial basis functions (CSRBFs) in this study. Using global RBFs to
solve partial differential equations can produce a simple algorithm and accurate
results; however, solving systems of equations with full matrices can be computationally
expensive and unstable if the matrix is ill-conditioned. CSRBFs
convert the global scheme into a local one with banded matrices, which makes
the RBF method more feasible for solving large scale problems. The basic
theory of CSRBFs is given in the next section.
3 Compactly Supported Radial Basis Functions

A family of CSRBFs {φ_{l,k}} was first introduced by Wu in [17] and later expanded
by Wendland [19] in the mid 1990s. Generally a CSRBF φ_{l,k}(r) is
expressed in the form

    φ_{l,k}(r) = p(r) for 0 <= r <= 1,   φ_{l,k}(r) = 0 for r > 1,   (3.5)

where p(r) is a prescribed polynomial, r = ||x - x_j|| is the
Euclidean distance, and x, x_j ∈ R^d. The index l is the dimension number and
2k is the smoothness of the function.

As discussed previously, to ensure the uniqueness of the interpolant s(x) in
(2.1), it is sufficient that φ be positive definite. We recall the definition of a
positive definite function below.

Definition 3.1 A continuous function Φ: R^d → R is said to be positive definite
on R^d if, for all sets of distinct centers {x_1, ..., x_N} ⊂ R^d and all
vectors λ ∈ R^N \ {0}, the quadratic form

    Σ_{i=1}^N Σ_{j=1}^N λ_i λ_j Φ(x_i - x_j)

is positive. A univariate even function φ: R → R is called positive definite on
R^d, written as φ ∈ PD_d, if the function Φ(x) = φ(||x||)
is positive definite.

To obtain positive definite CSRBFs, Wu used Bochner's theorem [18], which
relates positive definite integrable functions on R^d to non-negative Borel measures
on R^d. Using this relation Wu proved that an integrable compactly supported
continuous function φ = φ(r) on R^d is positive definite if and only if φ is
bounded and its d-variate Fourier transform is non-negative and non-vanishing.
The d-variate Fourier transform of a radial function φ(r) is given by

    F_d φ(s) = s^{1-d/2} ∫_0^∞ φ(t) t^{d/2} J_{d/2-1}(st) dt,

where J_{d/2-1} is the Bessel function of the first kind of order d/2 - 1.
There are two approaches to the construction of CSRBFs. They can be constructed
using either the integral (I) or derivative (D) of a function φ(r) ∈
CS ∩ PD_d. The proof technique using the operators D and I in combination
with the d-variate Fourier transform was introduced in [17]. In terms of the Fourier
transform, CSRBFs satisfy the two recursion formulas

    F_d(Iφ) = F_{d+2}(φ),    F_{d+2}(Dφ) = F_d(φ),

for φ, Dφ, Iφ ∈ L¹(R^d), where the operators D and I are defined by

    (Dφ)(r) = -(1/r) dφ/dr (r),   (3.12)

    (Iφ)(r) = ∫_r^∞ t φ(t) dt.   (3.13)

Wu's CSRBFs are obtained using equation (3.12). They are constructed by
starting with a highly smooth univariate positive definite function, derived from
the convolution product of a truncated power with itself, which is a polynomial
of degree (4l + 1) in C^{2l} ∩ PD_1. Applying D repeatedly then yields the general
form of Wu's compactly
supported functions, of degree (4l - 2k + 1) in C^{2l-2k} ∩ PD_{2k+1}.
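The montée operator I in (3.13) can be checked numerically: applying it once to the truncated power (1 - r)³ on [0, 1] reproduces the polynomial (1 - r)⁴(4r + 1) up to the constant 1/20, i.e. Wendland's φ_{3,1} of Table 2. The check below is our own sketch, not part of the paper.

```python
import numpy as np

# Verify numerically that (I phi)(r) = \int_r^1 t (1-t)^3 dt equals
# (1-r)^4 (4r+1) / 20 on [0,1]; a simple midpoint rule is used.
def I_of_truncated_power(r, n=200000):
    h = (1.0 - r) / n
    t = r + (np.arange(n) + 0.5) * h     # midpoints of the subintervals
    return np.sum(t * (1.0 - t)**3) * h

for r in (0.0, 0.3, 0.7):
    lhs = I_of_truncated_power(r)
    rhs = (1.0 - r)**4 * (4.0 * r + 1.0) / 20.0
    print(r, lhs, rhs)
```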
Table 1 lists some examples of Wu's CSRBFs, where ≐ represents equality up
to a constant.

Table 1: Examples of Wu's CSRBF functions
It has been shown that the smoothness of Wu's functions deteriorates in
higher dimensional spaces. Wendland [19] has modified Wu's approach using
the criterion in equation (3.13): his functions are constructed by starting with a low-smoothness
truncated power function

    φ_l(r) = (1 - r)^l_+.

Applying the integration operator I k times to the function φ_l(r)
transforms it into a polynomial of higher smoothness. Table 2 lists
some examples of Wendland's functions. We summarize an important theorem
for Wendland's CSRBFs as defined in [19].

Theorem 3.1 For every space dimension d ∈ N and every k ∈ N there exists
a positive definite compactly supported function φ_{l,k} on R^d of the form in
equation (3.5) with a univariate polynomial of minimal degree.
The function possesses 2k smooth derivatives around zero, additional
smooth derivatives around 1, and is uniquely determined up to a constant.

Table 2: Examples of Wendland's CSRBF functions
Error bounds for the approximation of f ∈ H^s(R^d) by CSRBFs are given in [21],
where H^s(R^d) denotes the Sobolev space of order s > d/2.
Recently Fasshauer [11, 22] has given a multilevel approximation algorithm
based on Wendland's functions and demonstrated its behaviour by solving some
simple boundary value problems. We have investigated the amount of work in
the recursive computations and found it excessive for solving non-linear time-dependent
systems. Also, additional errors were generated at each time step.
Hence, this method was not used in this study.

Further explicit formulae for Wendland's functions were given by Fasshauer
[22]. The value of l is given by ⌊d/2⌋ + k + 1, where d is
the dimension number. The explicit formulae are shown in Table 3.

Table 3: Explicit formulae for Wendland's functions
The basis function φ_{l,k}(r) can be scaled to have compact support on [0, δ] by replacing
r with r/δ. The interpolation function defined by equation (2.1)
can then be written as

    s(x) = Σ_{j=1}^N λ_j φ_{l,k}(||x - x_j|| / δ_j).

The scaling factor δ_j can be variable or constant at different node points,
depending on the nature of the problem. An important unsolved
problem is to find a method to determine the optimal size of δ. In general, the
smaller the value of δ, the higher the percentage of zero entries in the coefficient
matrix. However, this also results in lower accuracy.
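This trade-off is easy to visualize: shrinking δ drives every entry φ_{l,k}(||x_i - x_j||/δ) with ||x_i - x_j|| >= δ to zero. The sketch below uses Wendland's φ_{3,1}(r) ≐ (1 - r)⁴(4r + 1) from Table 3; the uniform node layout on the unit square and the δ values are our own assumptions.

```python
import numpy as np

# How the support radius delta controls the sparsity of the collocation matrix.
def wendland_31(r):
    # phi_{3,1}(r) = (1-r)^4 (4r+1) on [0,1), zero beyond (up to a constant)
    r = np.asarray(r)
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def collocation_matrix(nodes, delta):
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    return wendland_31(d / delta)     # entries vanish once d >= delta

g = np.linspace(0.0, 1.0, 10)
nodes = np.array([(x, y) for x in g for y in g])   # 100 points on [0,1]^2
for delta in (0.2, 0.5, 1.5):
    A = collocation_matrix(nodes, delta)
    nz = np.count_nonzero(A) / A.size
    print(f"delta={delta}: {100 * nz:.0f}% non-zero entries")
```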
To illustrate the algorithm for solving time-dependent problems, we consider
a simple partial differential equation with time variable t and space variables
(x, y) in the form

    ∂u/∂t = Lu,   (3.18)

subject to boundary conditions for t >= 0, which may prescribe u itself or the normal
derivatives ∂u/∂x(0, y, t) and ∂u/∂y(x, 0, t) on the sides of the domain. The initial condition is

    u(x, y, 0) = u_0(x, y)

for all (x, y) in [0, m]².

Let {x_j}, j = 1, ..., N, be N data points in [0, m]², of which some
are interior points and the remainder are boundary
points. The numerical discretization for the time derivative of the governing
equation is obtained using a finite difference method, and the spatial derivatives
are approximated by compactly supported radial basis functions. Using forward
differences, equation (3.18) can be expressed as

    u_i^{n+1} = u_i^n + Δt (Lu)_i^n,   (3.24)

where Δt is the time step. At each time step n, the values u_i^{n+1} at
the interior points are calculated by equation (3.24), while the boundary point
values are determined by the given boundary conditions.

It is assumed that the values u^n are interpolated by
the following function:

    u^n(x) = Σ_{j=1}^N λ_j^n φ_{l,k}(r_j / δ_j),

where r_j = ||x - x_j||. The unknown coefficients λ_j^n are determined
by collocating on the set of distinct points {x_i} over the domain.
This system of N linear equations may be written in matrix form as

    U^n = Q_δ Λ^n,

where U^n and Λ^n are single-column matrices and Q_δ = [φ_{l,k}(||x_i - x_j||/δ_j)]
is an N x N coefficient matrix. It is noted that the matrix Q_δ remains unchanged for a fixed
set of data points, so its inverse only needs to be calculated
once. The boundary conditions of the problem enter through the rows of the matrix:
(1) Dirichlet boundary conditions u(x_i) prescribed,
(2) Neumann boundary conditions ∂u/∂x prescribed, and
(3) Neumann boundary conditions ∂u/∂y prescribed.
The corresponding rows contain, respectively, the values φ_{l,k}, ∂φ_{l,k}/∂x and
∂φ_{l,k}/∂y evaluated at the boundary points (3.29).

After determining the set of unknown coefficients {λ_j^n}, the numerical values of
the unknown spatial derivatives of u^n are approximated by

    ∂u^n/∂x (x_i) = Σ_{j=1}^N λ_j^n ∂φ_{l,k}/∂x (r_{ij} / δ_j),   (3.32)

    ∂u^n/∂y (x_i) = Σ_{j=1}^N λ_j^n ∂φ_{l,k}/∂y (r_{ij} / δ_j).   (3.33)

It follows that the value of the variable u at time
(n + 1)Δt at the interior points can be computed by substituting the partial
derivatives (3.32) and (3.33) into equation (3.24).
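The time-stepping loop above can be sketched compactly. The sketch below assumes, for illustration only, that the model equation (3.18) is the 1-D heat equation u_t = u_xx with homogeneous Dirichlet boundaries, and uses a global multiquadric in place of a CSRBF for brevity; all numerical values are our own choices.

```python
import numpy as np

# One forward-difference step (3.24): spatial derivatives are taken from
# the RBF expansion of u^n, as in (3.32)-(3.33).
c = 0.2                                   # assumed multiquadric shape parameter
x = np.linspace(0.0, 1.0, 21)
dx = x[:, None] - x[None, :]
A   = np.sqrt(dx**2 + c**2)               # collocation matrix phi(|x_i - x_j|)
Axx = c**2 / (dx**2 + c**2)**1.5          # second x-derivative of phi
D2  = Axx @ np.linalg.inv(A)              # differentiation matrix: u_xx ~ D2 @ u

u  = np.sin(np.pi * x)                    # u^n
dt = 1e-4
u_next = u.copy()
u_next[1:-1] = u[1:-1] + dt * (D2 @ u)[1:-1]   # interior update (3.24)
u_next[0] = u_next[-1] = 0.0                   # boundary values from the BCs

# sanity check: D2 @ u should be close to the exact u_xx = -pi^2 sin(pi x)
err = np.max(np.abs((D2 @ u)[5:-5] + np.pi**2 * np.sin(np.pi * x[5:-5])))
print("max interior error in u_xx:", err)
```

With a CSRBF the matrix A would be sparse and banded, but the structure of the step is identical.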
In the next section we use the method to solve a real-life model. Systems
of linear and non-linear shallow water hydrodynamics equations are considered
separately to compare the accuracy and feasibility of the method with the global
radial basis function method.

4 Mathematical Models and Numerical Results

We first apply the proposed method to solve a set of linear shallow water equations.
A further example of a non-linear two-dimensional hydrodynamic model
is considered later to illustrate the performance of the method. The
numerical results are compared with the results obtained by the global multiquadric
model in [10]. The programs were written in C++ and all results were
generated using double precision on a Pentium II PC.
4.1 Example 1 - Linear shallow water equations

This simple model allows a comparison of the computed result with the analytical
solution. The equations to be solved are given as

    ∂ζ/∂t + ∇·(H V) = 0,   (4.34)

    ∂V/∂t + g ∇ζ = 0,   (4.35)

where V is the vector of the depth-averaged advective velocities in the x, y directions,
ζ is the sea water surface elevation, H is the total depth
of sea level, such that H = h + ζ, where h is the mean depth of sea level, and g
is the gravitational acceleration. The domain is shown in Figure 1. There
are 205 collocation points in a rectangular channel of length L, comprising
interior points, land boundary points and water-water boundary points.

Figure 1: A rectangular channel with 205 collocation points.

The values at the boundary points are determined by the following given conditions.
The land boundary condition is that the normal velocity vanishes.
The water-water boundary condition is

    ζ = a cos ωt  at the open end.

The initial conditions are ζ = 0 and V = 0.
The analytical solution to this boundary-value problem is given by

    ζ(x, t) = a cos ωt cos(kx) / cos(kL),

    u(x, t) = a (g/h)^{1/2} sin ωt sin(kx) / cos(kL),

with wavenumber k = ω / (gh)^{1/2}. The solution corresponds only to the interaction
between the incident wave and the reflected waves from the wall. Equations (4.34) and (4.35) are
discretized in time using the Taylor method with second-order accuracy, which
yields the update equations (4.45) and (4.46).
At each time step n, the values of the variables ζ^n and u^n at the interior points
are calculated using these two equations. The unknown
spatial derivatives are approximated using a CSRBF φ_{l,k}(r) as described in the
previous section. As an illustration, we shall approximate the spatial derivatives
using a simple Wendland function φ_{4,2}(r). Assume that the variables ζ^n and
u^n are approximated using the following positive definite compactly supported
functions with a desired scaling factor δ_j at collocation point x_j:

    ζ^n(x) = Σ_{j=1}^N α_j^n φ_{4,2}(r_j / δ_j),   (4.47)

    u^n(x) = Σ_{j=1}^N β_j^n φ_{4,2}(r_j / δ_j),   (4.48)

where r_j = ||x - x_j||. The unknown coefficients α_j^n and β_j^n can
be calculated when all data points are distinct in the domain.
The coefficient matrices for the N linear equations from (4.47) and (4.48)
can be written in matrix form. This system involves finding the column vectors
[α^n] and [β^n]. The coefficient
matrices in R^{N x N} are sparse and symmetric. This symmetric
system can be rearranged to form an easy-to-solve banded system by using the
well-established LU factorization algorithm.

Having calculated the unknown coefficients α_j^n and β_j^n, the first partial derivatives
of the function ζ^n at the collocation points with
respect to x and y can be calculated. In the case of ζ^n they are obtained from the
following equations:

    ∂ζ^n/∂x (x_i) = Σ_{j=1}^N α_j^n ∂φ_{4,2}/∂x (r_{ij} / δ_j),

    ∂ζ^n/∂y (x_i) = Σ_{j=1}^N α_j^n ∂φ_{4,2}/∂y (r_{ij} / δ_j).

The values of the partial derivatives of the function u^n can be determined in
a similar manner as the calculation of ζ^n. In this simple example, a constant
δ can be applied throughout the whole domain. The numerical solutions ζ^{n+1}
and u^{n+1} at the next time step (n + 1)Δt can be computed subsequently by
substituting the values of the partial derivatives into equations (4.45) and (4.46).
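To make the "easy-to-solve banded system" concrete: when the nodes are ordered along the channel, every entry with ||x_i - x_j|| >= δ vanishes, so the coefficient matrix has a fixed half-bandwidth. The sketch below uses Wendland's φ_{4,2}(r) ≐ (1 - r)⁶(35r² + 18r + 3) (up to a constant); the node layout and δ are our own choices.

```python
import numpy as np

def wendland_42(r):
    # phi_{4,2}(r) ~ (1-r)^6 (35 r^2 + 18 r + 3) on [0,1), zero beyond
    r = np.asarray(r)
    return np.where(r < 1.0, (1.0 - r)**6 * (35 * r**2 + 18 * r + 3), 0.0)

x = np.linspace(0.0, 10.0, 41)           # nodes ordered along the channel
delta = 1.5
A = wendland_42(np.abs(x[:, None] - x[None, :]) / delta)

# half-bandwidth = farthest non-zero off-diagonal
i, j = np.nonzero(A)
print("half-bandwidth:", np.max(np.abs(i - j)))
```

A banded LU solver then needs only O(N b²) work for half-bandwidth b, instead of O(N³) for the full matrix.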
Table 4 shows some results for Wu's and Wendland's functions. The root-mean-square
(RMS) errors of the tidal level (ζ) and water velocity (u) in relation
to the analytical solution are compared. The RMS errors at three of the interpolation
points (92, 102 and 112), which are situated along the center of the
basin, are shown in the table. The RMS error is calculated by

    RMS = [ (1/T_n) Σ_{i=1}^{T_n} (ζ_i^{analytical} - ζ_i^{computed})² ]^{1/2},   (4.52)

where T_n is the total number of time steps, ζ_i^{analytical} is the analytical solution
and ζ_i^{computed} is the simulated result. All results are generated with the same
time step; the simulation is carried out for 1 hour.

Our numerical experiments have shown that the level of accuracy of the
CSRBFs decreases considerably with a decrease in the scaling factor, as can be seen
from the RMS errors in Table 4. The computational results
are still reasonably accurate with 41% non-zero entries in the sparse matrix.
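The RMS formula (4.52) transcribes directly into code; the short sample series below are made-up stand-ins for the analytical and computed tide levels, not values from the experiments.

```python
import numpy as np

# RMS error between an analytical and a computed time series, as in (4.52).
def rms_error(analytical, computed):
    analytical = np.asarray(analytical, dtype=float)
    computed = np.asarray(computed, dtype=float)
    return np.sqrt(np.mean((analytical - computed)**2))

eta_exact = np.array([0.10, 0.35, 0.62, 0.80])   # illustrative tide levels (m)
eta_sim   = np.array([0.12, 0.33, 0.60, 0.83])
print(rms_error(eta_exact, eta_sim))
```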
Table 4: Comparison of RMS errors among Wu's and Wendland's CSRBFs and the global multiquadric

                            tidal level (ζ)          velocity (u)
Data points (N)             92     102    112        92     102    112
Wendland's functions
  RMS error (cm)            4.78   2.58   4.33       2.51   1.48   2.09
  RMS error (cm)            4.99   2.40   1.47       2.25   1.15   2.09
Wu's functions
  RMS error (cm)            2.42   2.89   2.64       2.00   1.39   1.63
Global multiquadric
The high level of accuracy achieved in this model is due to the simplicity of the
equations and the uniform distribution of the collocation points over the computational
domain. Regarding computational efficiency, the CSRBFs performed
more efficiently, with a saving of about 43% CPU time
compared to the global MQ scheme.

Wendland's functions are slightly more accurate than Wu's functions.
This is shown by the good performance of Wendland's functions, which generate
a high level of accuracy even with a very sparse matrix; for example, the results of
the function φ_{4,2}(r) are very close to those of the MQ scheme. Our experiments
have shown that CSRBFs with a suitable choice of scaling factor can perform
better than global MQs.
4.2 Example 2 - Non-linear shallow water equations

In this example we applied the proposed scheme to solve a real-life two-dimensional
time-dependent non-linear hydrodynamics model. Tolo Harbour of
Hong Kong is chosen as the computational domain for comparison purposes.
Tolo Harbour is a semi-enclosed embayment with a very irregular coastline
surrounding its land boundary; this typical geographical condition makes it well
suited for verifying the proposed methods.

Tolo Harbour is situated at approximately 22°22′ north latitude. The embayment of the harbour occupies
an area of 50 km². The width of the embayment varies from
5 km in the inner basin to just over 1 km at the mouth of the channel.
The tide in Tolo Harbour is of a mixed semi-diurnal type. The
overall range of the tidal level is around 0.1 m to 2.7 m. The
measured current flows average 10 cm/sec in the
channel of the harbour. The water depth is shallow in Inner
Tolo Harbour, while the Tolo Channel is more than 20 m deep.

The governing equations are the two-dimensional depth-integrated versions
of three differential equations, namely the continuity equation (4.53) and the
momentum conservation equations (4.54) and (4.55) in the x and y directions
respectively, in a region Ω.
These equations are formulated as:

    ∂ζ/∂t + ∂(uH)/∂x + ∂(vH)/∂y = 0,   (4.53)

    ∂(uH)/∂t + ∂(u²H)/∂x + ∂(uvH)/∂y - fvH
        = -gH ∂ζ/∂x + (ρ_a/ρ_w) C_s W_s W_x - g u (u² + v²)^{1/2} / C_b²,   (4.54)

    ∂(vH)/∂t + ∂(uvH)/∂x + ∂(v²H)/∂y + fuH
        = -gH ∂ζ/∂y + (ρ_a/ρ_w) C_s W_s W_y - g v (u² + v²)^{1/2} / C_b²,   (4.55)

where u, v are the depth-averaged advective velocities in the x, y directions
respectively; ζ is the sea water surface elevation; h is the mean depth of sea
level; H is the total depth of sea level, such that H = h + ζ; W_x, W_y are the
wind velocity components in the x, y directions respectively, and W_s is the wind
speed given as W_s = (W_x² + W_y²)^{1/2}. C_b is the Chezy bed roughness coefficient;
f is the Coriolis force parameter; g is the gravitational acceleration; ρ_a is the
density of air; ρ_w is the density of water, and C_s is the surface friction coefficient.
The land boundary condition is defined as

    Q · n = 0,

where Q represents the velocity vector (u, v) and n is the direction of the outward
normal vector on the land boundary. The initial conditions are

    ζ = 0,   Q = 0,

for all x_i, y_i at t = 0. The time discretization of equations (4.53) to (4.55) is
given by a forward finite difference scheme. The resulting finite difference
equations update the iterative solutions ζ^{n+1}, u^{n+1} and v^{n+1} at time
(n + 1)Δt at the points (x_i, y_i), where Δt is the time step. The unknown partial
derivatives are determined by the radial basis function scheme. It is assumed
that the values of the current velocities u^n, v^n and the surface
elevation ζ^n are approximated using Wendland's function φ_{3,1}(r_{i,j}) at the
collocation points (x_i, y_i).
The unknown coefficients are determined by collocating with the set
of data points. The computation is performed on a set
of 260 distinct data points on the whole domain, of which 23 data points are on
the water boundary, 107 are interior points and 130 are on the land boundary.
These data points are distributed evenly over the domain, as indicated in Figure
2.
The algorithm is slightly different from that of the previous example. At
each time step, the numerical solutions of the variables on the boundaries are
updated by the following conditions. The open sea boundary condition for the
surface elevation is

    ζ^n = ζ̂^n,

where ζ̂^n is the input sea surface elevation level on the water boundary.
The input sea surface elevation ζ̂^n on the water boundary at time step n is
estimated using the equation suggested by the Observatory of Hong Kong, in
which ζ̃^n(t) is the actual tide data measured at a tide gauge, TCOR_i is the time
correction parameter and HCOR_i is the tide level correction parameter. The
tide and wind data are the hourly averaged observed data at two tide gauges
and are obtained from the Observatory of Hong Kong. The locations of these
two tide gauges are indicated in Figure 2.

Similarly, the flow velocities u^n, v^n on the land boundaries
are updated at each time step using equations (4.68) and (4.69). The current
velocities {u^n} and {v^n} on the land boundary are obtained from these
equations, where ũ^n, ṽ^n are the values computed at a data point
on the land boundary from the governing equations (4.60) and (4.61), and θ_i is the
outward normal angle at a land boundary point, which is computed by taking
the average of the vectors joining the neighbouring points.
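A sketch of the land-boundary correction (4.68)-(4.69) as we read it: the computed velocity is projected onto the coastline tangent so that Q · n = 0 holds, with θ_i the outward-normal angle at the point. The function name and the test values are our own, not from the paper.

```python
import math

# Remove the component of (u, v) normal to the coastline, enforcing Q . n = 0.
def project_to_tangent(u, v, theta):
    nx, ny = math.cos(theta), math.sin(theta)   # outward normal direction
    qn = u * nx + v * ny                        # normal component of Q
    return u - qn * nx, v - qn * ny

# Example: normal pointing along +y, so the y-component is removed.
u_t, v_t = project_to_tangent(0.3, 0.1, math.pi / 2)
print(u_t, v_t)
```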
To maintain stability, the time-step is chosen to satisfy the following Courant
Figure 2: Map of Tolo Harbour of Hong Kong, showing the Shing Mun River, the Inner Tolo Channel, the Tolo water boundary and the locations of the two tide gauges. The dots distributed on the map represent the interpolation points.
condition

    Δt <= d_min / (g h̄)^{1/2},

where d_min is the minimum distance between any two adjacent collocation
points and h̄ is the average water depth between the two points.
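The Courant bound can be evaluated directly; the node spacing and depth below are illustrative values of ours, not the ones used in the Tolo Harbour runs.

```python
import math

# Largest stable time step Delta t <= d_min / sqrt(g * h_bar).
def max_stable_dt(d_min, h_mean, g=9.81):
    return d_min / math.sqrt(g * h_mean)

# e.g. 100 m minimum node spacing, 15 m average depth
dt_max = max_stable_dt(100.0, 15.0)
print(f"largest stable time step ~ {dt_max:.1f} s")
```

Coarser node spacings permit larger time steps, which is one reason the 30-second step used below is feasible on this grid.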
The simulation is carried out for 1100 hours, from 1 February 1991 to mid
March 1991, with a time step of 30 seconds, which means that the total number
of time steps is 132,000. The performance of the compactly supported RBF
models is compared to the global multiquadric model. The numerical results
are verified with the observed data measured at the tide gauge, as shown in
Table 5. It should be noted that functions with a higher order of smoothness
can generate slightly more accurate results at the expense of computational
cost.
Table 5: Analysis of the tidal level for the interpolation point at a tide gauge of Tolo Harbour, computed by the global multiquadric and Wendland's function

                                    RMS error (m)   absolute max. error (m)   CPU time ratio (%)
Global multiquadric
Wendland's function φ_{3,1}(r/δ)

It is found that the computations become significantly unstable in the
channel of Tolo Harbour if a constant value of δ_j is used. Our experiments have
further shown that the accuracy of the compactly supported models can be improved
when a relatively larger value of the scaling factor δ_j for the collocation
points on the water boundary is used. It means that the scaling factor - j for
water boundary points is set large enough to cover the whole region in such a
way that the updated information can be propagated in each time step. In this
way, all the entries in the columns of the coe-cient matrix corresponding to
those water boundary points are non-zero. The remaining part of the matrix
can still be banded.
By comparing the RMS errors and absolute maximum errors, the level of accuracy of the CSRBF model is very close to that of the global multiquadric model, with a saving of 12% in CPU time. With a smaller scaling factor of 1.2×10^6, the results of the CSRBF model are still reasonably stable and accurate throughout the 1.5-month simulation period, and it saves 26% of the CPU time.
The comparison of the simulated tidal level with the observed hourly data for the period between 23 February 1991 and 27 February 1991 is given in Figure 3. These figures show that there are no significant differences in the overall pattern of the tidal elevation if an appropriate α_j is used in the model. The flooding velocities of the CSRBF model and the multiquadric model are compared in Figure 4. It shows that the smoothness of the CSRBF velocities is similar to that of the global multiquadric method. Since there is no regular monitoring of the current velocities in Tolo Harbour, the prediction of the current velocities cannot be verified precisely at a single point. However, previous field measurements found the current flow to be 10 cm/sec on average in the channel, with a poor flushing rate in the inner Tolo Harbour, which is consistent with the numerical predictions.
5 Conclusions and Discussion
A system of mathematical models for the shallow water hydrodynamic system
was constructed to simulate the variation of tidal currents and water velocities.
A compactly supported radial basis function method was employed to approximate
the spatial derivatives of the model. The numerical results of Wu's and
Wendland's CSRBFs were compared with the results of a global multiquadric
scheme. The CSRBF method is computationally efficient due to the sparse resultant matrix. Reasonable agreement between the predicted and field data
was observed.
CSRBF approximation makes RBFs more feasible for solving large-scale
problems. The degree of accuracy is very much dependent on the size of the
local support. The accuracy of the computations can be improved by using a
large scaling factor α_j to increase the support of the function. However, this results in a higher computational cost.
The numerical examples have demonstrated an improvement in computational efficiency without significant degradation in accuracy if a suitable scaling factor is used. For models of complicated systems or irregular domains, such as the model of Tolo Harbour, the numerical accuracy can be improved by using a variable scaling factor α_j to increase the support of the function in partial areas of the domain, such as on the water boundary or at source points.
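The effect of a variable support radius on matrix sparsity can be sketched directly. The code below is ours, not from the paper; it assumes Wendland's C² function φ_{3,1}(r) = (1 − r)⁴(4r + 1) on [0, 1) and a hypothetical per-centre scaling factor alpha:

```python
import math

def wendland_31(r):
    """Wendland's phi_{3,1}(r) = (1 - r)^4 (4 r + 1) for 0 <= r < 1,
    and 0 otherwise (compact support on the unit interval)."""
    return (1.0 - r) ** 4 * (4.0 * r + 1.0) if r < 1.0 else 0.0

def collocation_matrix(points, alpha):
    """Sparse collocation matrix A[i, j] = phi_{3,1}(|x_i - x_j| / alpha_j),
    stored as a dict of non-zero entries. A variable scaling factor alpha_j
    enlarges the support of selected centres (e.g. water-boundary points),
    filling the corresponding columns while the rest of the matrix stays
    sparse (banded for suitably ordered points)."""
    entries = {}
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            r = math.hypot(xi - xj, yi - yj) / alpha[j]
            v = wendland_31(r)
            if v != 0.0:
                entries[(i, j)] = v
    return entries
```

Giving one centre a support radius covering the whole domain fills its entire column, as described above, while small radii elsewhere keep the matrix sparse.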
The effect of the density of data points on the performance of the global RBF method and the conditioning problem were investigated in our previous study. We have shown that the condition number of the coefficient matrix for the RBF scheme increases rapidly with the number of data points; see [10].

[Figure 3: Comparison of the simulated values of the global MQ and CSRBF models with the observed hourly tide level in Tolo Harbour between 23 Feb and 28 Feb 1991. The three panels plot the tide level against time in hours: (a) simulated results using one global domain with 260 data points; (b) simulated results by compact support with scaling factor 1.5 km; (c) simulated results using CSRBF with scaling factor 1.3 km; each is compared with the observed hourly data at the Ko Lau Wan tide gauge.]

[Figure 4: Distribution of the flood velocities in Tolo Harbour at 353 hours of simulation for the global MQ and Wendland's CSRBF models.]
From a modelling point of view, a more realistic model which takes into consideration variations along the depth of water in the harbour should be built. The present model is a relatively simple two-dimensional model in which only the depth-averaged variation is modelled. In marine systems, stratification often occurs in summer, and variations in the different layers of a water body from the surface down to the seabed cannot be ignored. These aspects may affect the prediction accuracy of the model. A more realistic multi-layer model with the use of radial basis functions is currently being investigated.
Acknowledgment
This research is supported by the Research and Development Fund of the Open University of Hong Kong, #1.7/96; the Research Grant Committee of the City University of Hong Kong, #7000943; and UGC grant #9040286. The authors wish to thank the Royal Observatory of Hong Kong for providing meteorological data for the numerical computation.
--R
"The theory of radial basis function approximation in 1990"
"The parameter R^2 in multiquadric interpolations"
"Improved multiquadric approximation for partial differential equations"
"Sparse approximation multiquadric interpolations"
"A multiquadric solution for shallow water equation"
"Solving differential equations with radial basis functions: Multilevel methods and smoothing"
"Solving partial differential equations by collocation using radial basis functions"
"Convergence orders of meshless collocation methods using radial basis functions"
"Generalized Hermite interpolation via matrix-valued conditionally positive definite functions"
"Multivariate interpolation: a variational theory"
"A computational model for monitoring water quality and ecological impacts in marine environments"
"Compactly supported positive definite radial functions"
"Monotone Funktionen, Stieltjessche Integrale und harmonische Analyse"
"Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree"
"On the smoothness of positive definite and radial functions"
"Error estimates for interpolation by compactly supported radial basis functions of minimal degree"
"On smoothing for multilevel approximation with radial basis functions"
--TR
Generalized Hermite interpolation via matrix-valued conditionally positive definite functions
Error estimates for interpolation by compactly supported radial basis functions of minimal degree
Solving partial differential equations by collocation using radial basis functions
--CTR
X. Zhou , Y. C. Hon , Jichun Li, Overlapping domain decomposition method by radial basis functions, Applied Numerical Mathematics, v.44 n.1-2, p.241-255, January | hydrodynamic equation;RBF;compact support |
585270 | Alternating-time temporal logic. | Temporal logic comes in two varieties: linear-time temporal logic assumes implicit universal quantification over all paths that are generated by the execution of a system; branching-time temporal logic allows explicit existential and universal quantification over all paths. We introduce a third, more general variety of temporal logic: alternating-time temporal logic offers selective quantification over those paths that are possible outcomes of games, such as the game in which the system and the environment alternate moves. While linear-time and branching-time logics are natural specification languages for closed systems, alternating-time logics are natural specification languages for open systems. For example, by preceding the temporal operator "eventually" with a selective path quantifier, we can specify that in the game between the system and the environment, the system has a strategy to reach a certain state. The problems of receptiveness, realizability, and controllability can be formulated as model-checking problems for alternating-time formulas. Depending on whether or not we admit arbitrary nesting of selective path quantifiers and temporal operators, we obtain the two alternating-time temporal logics ATL and ATL*.ATL and ATL* are interpreted over concurrent game structures. Every state transition of a concurrent game structure results from a choice of moves, one for each player. The players represent individual components and the environment of an open system. Concurrent game structures can capture various forms of synchronous composition for open systems, and if augmented with fairness constraints, also asynchronous composition. Over structures without fairness constraints, the model-checking complexity of ATL is linear in the size of the game structure and length of the formula, and the symbolic model-checking algorithm for CTL extends with few modifications to ATL. 
Over structures with weak-fairness constraints, ATL model checking requires the solution of 1-pair Rabin games, and can be done in polynomial time. Over structures with strong-fairness constraints, ATL model checking requires the solution of games with Boolean combinations of Büchi conditions, and can be done in PSPACE. In the case of ATL*, the model-checking problem is closely related to the synthesis problem for linear-time formulas, and requires doubly exponential time. | Introduction
In 1977, Pnueli proposed to use linear-time temporal
logic (LTL) to specify requirements for reactive
systems [Pnu77]. A formula of LTL is interpreted
over a computation, which is an infinite sequence of
states. A reactive system satisfies an LTL formula if
all its computations do. Due to the implicit use of
universal quantification over the set of computations, LTL cannot express existential, or possibility, properties. Branching-time temporal logics, such as CTL and CTL*, do provide explicit quantification over the set of computations [CE81, EH86]. For instance, for a state predicate φ, the CTL formula ∀◇φ requires that a state satisfying φ is visited in all computations, and the CTL formula ∃◇φ requires that there exists a computation that visits a state satisfying φ.
The problem of model checking is to verify whether
a finite-state abstraction of a reactive system satisfies
a temporal-logic specification [CE81, QS81]. Efficient
model checkers exist for both LTL (e.g. SPIN [Hol97])
and CTL (e.g. SMV [McM93]), and are increasingly
being used as debugging aids for industrial designs.
The logics LTL and CTL have their natural interpretation
over the computations of closed systems,
where a closed system is a system whose behavior
is completely determined by the state of the system.
However, the compositional modeling and design of reactive systems requires each component to be viewed
as an open system, where an open system is a system
that interacts with its environment and whose behavior
depends on the state of the system as well as the behavior
of the environment. Models for open systems,
such as CSP [Hoa85], I/O automata [Lyn96], and reactive modules [AH96], distinguish between internal
nondeterminism, choices made by the system, and external
nondeterminism, choices made by the environment. Consequently, besides universal (do all computations
satisfy a property?) and existential (does some
computation satisfy a property?) questions, a third
question arises naturally: can the system resolve its
internal choices so that the satisfaction of a property
is guaranteed no matter how the environment resolves
the external choices? Such an alternating satisfaction
can be viewed as a winning condition in a two-player
game between the system and the environment. Alternation
is a natural generalization of existential and
universal branching, and has been studied extensively
in theoretical computer science [CKS81].
Different researchers have argued for game-like interpretations
of LTL and CTL specifications for open
systems. We list four such instances here. (1) Receptiveness
[Dil89, AL93, GSSL94]: given a reactive
system, specified by a set of safe computations (typically, generated by a transition relation) and a set of live computations (typically, expressed by an LTL formula), the receptiveness problem is to determine whether every finite safe computation can be extended to an infinite live computation irrespective of the behavior of the environment. It is sensible, and necessary for compositionality, to require an affirmative answer to the receptiveness problem. (2) Realizability (program synthesis) [ALW89, PR89a, PR89b]: given an LTL formula ψ over sets of input and output signals, the synthesis problem requires the construction of a reactive system that assigns to every possible input sequence an output sequence so that the resulting computation satisfies ψ. (3) Supervisory control [RW89]: given a finite-state machine whose transitions are partitioned into controllable and uncontrollable, and a set of safe states, the control problem requires the construction of a controller that chooses the controllable transitions so that the machine always stays within the safe set (or satisfies some more general LTL formula). (4) Module checking [KV96]: given an open system and a temporal-logic formula φ, the module-checking problem is to determine if, no matter how the environment restricts the external choices, the system satisfies φ.
All the above approaches use the temporal-logic
syntax that was developed for specifying closed systems, and reformulate its semantics for open systems.
In this paper, we propose, instead, to enrich temporal
logic so that alternating properties can be specified explicitly
within the logic: we introduce alternating-time
temporal logics for the specification and verification of
open systems. Our formulation of open systems considers, instead of just a system and an environment, the more general setting of a set Σ of agents that correspond to different components of the system and the environment. For the scheduling of agents, we consider two policies. In each state of a synchronous system, it is known in advance which agent proceeds. In
each state of an asynchronous system, several agents
may be enabled, and an unknown scheduler determines
which agent takes the next step. In the latter
case, the scheduler is required to be fair to each agent;
that is, in an infinite computation, an agent cannot be
continuously enabled without being scheduled.
For a set A ⊆ Σ of agents, a set Γ of computations, and a state w of the system, consider the following game between a protagonist and an adversary. The game starts at the state w. Whenever the scheduled agent is in the set A, the protagonist chooses the next state, and otherwise, the adversary chooses the next state. If the resulting infinite computation belongs to the set Γ, then the protagonist wins. If the protagonist has a winning strategy, we say that the alternating-time formula ⟨⟨A⟩⟩Γ is satisfied in the state w. Here, ⟨⟨A⟩⟩ is a path quantifier, parameterized with the set A of agents, which ranges over all computations that the agents in A can force the game into, irrespective of how the agents in Σ∖A play. Hence, the parameterized path quantifier ⟨⟨A⟩⟩ is a generalization of the path quantifiers of branching-time temporal logics: the existential path quantifier ∃ corresponds to ⟨⟨Σ⟩⟩, and the universal path quantifier ∀ corresponds to ⟨⟨∅⟩⟩. In particular, closed systems can be viewed as systems with a single agent sys. Then, the two possible parameterized path quantifiers ⟨⟨sys⟩⟩ and ⟨⟨∅⟩⟩ match exactly the path quantifiers ∃ and ∀ required for specifying such systems. Depending on the syntax used to specify the set Γ of computations, we obtain two alternating-time temporal logics: in the logic ATL*, the set Γ is specified by a formula of LTL; in the more restricted logic ATL, the set Γ is specified by a single temporal operator applied to a state formula. Thus, ATL is the alternating generalization of CTL, and ATL* is the alternating generalization of CTL*.
Alternating-time temporal logics can conveniently
express properties of open systems as illustrated by
the following five examples:
1. In a multi-process distributed system, we can require
any subset of processes to attain a goal,
irrespective of the behavior of the remaining
processes. Consider, for example, the cache-coherence
protocol for Gigamax verified using
SMV [McM93]. One of the desired properties
is the absence of deadlocks, where a deadlocked
state is one in which a processor, say a, is permanently
blocked from accessing a memory cell.
This requirement was specified using a CTL formula asserting that, always, it is possible for processor a to eventually read and write. The corresponding ATL formula captures the informal requirement more precisely.
While the CTL formula only asserts that it is always
possible for all processors to cooperate so
that a can eventually read and write ("collabora-
tive possibility"), the ATL formula is stronger: it
guarantees a memory access for processor a, no
matter what the other processors in the system do
("adversarial possibility").
2. While the CTL formula ∀□φ asserts that the state predicate φ is an invariant of a system component irrespective of the behavior of all other components ("adversarial invariance"), the ATL formula [[a]]□φ (which stands for ⟨⟨Σ∖{a}⟩⟩□φ) states the weaker requirement that φ is a possible invariant of the component a; that is, a cannot violate □φ on its own, and therefore the other system components may cooperate to achieve □φ ("collaborative invariance"). For φ to be an invariant of a complex system, it is necessary (but not sufficient) to check that every component a satisfies the ATL formula [[a]]□φ.
3. The receptiveness of a system whose live computations are given by the LTL formula ψ is specified by the ATL* formula ∀□⟨⟨sys⟩⟩ψ.
4. Checking the realizability (program synthesis) of an LTL formula ψ corresponds to model checking of the ATL* formula ⟨⟨sys⟩⟩ψ in a maximal model that considers all possible inputs and outputs.
5. The controllability of a system whose safe states are given by the state predicate φ is specified by the ATL formula ⟨⟨control⟩⟩□φ. Controller synthesis, then, corresponds to model checking of this formula. More generally, for an LTL formula ψ, the ATL* requirement ⟨⟨control⟩⟩ψ asserts that the controller has a strategy to ensure the satisfaction of ψ.
Notice that ATL is better suited for compositional reasoning than CTL. For instance, if a component a satisfies the CTL formula ∃◇φ, we cannot conclude that the compound system a‖b also satisfies ∃◇φ. On the other hand, if a satisfies the ATL formula ⟨⟨a⟩⟩◇φ, then so does a‖b.
The model-checking problem for alternating-time
temporal logics requires the computation of winning
strategies. In the case of synchronous ATL, all
games are finite reachability games. Consequently,
the model-checking complexity is linear in the size of
the system and the length of the formula, just as in
the case of CTL. While checking existential reachability
corresponds to iterating the existential next-time
operator 9 , and checking universal reachability corresponds
to iterating the universal next 8 , checking
alternating reachability corresponds to iterating
an appropriate mix of 9 and 8 , as governed by a
parameterized path quantifier. This suggests a simple
model-checking procedure for synchronous
ATL, and shows how existing symbolic model checkers
for CTL can be modified to check ATL specifications,
at no extra cost. In the asynchronous model, due to
the presence of fairness constraints, ATL model checking
requires the solution of infinite games, namely,
generalized Büchi games [VW86]. Consequently, the
model-checking complexity is quadratic in the size of
the system, and the symbolic algorithm involves a
nested fixed-point computation. The model-checking
problem for ATL* is much harder: we show it to be
complete for 2EXPTIME in both the synchronous and
asynchronous cases.
2 The Alternating-time Logic ATL
2.1 Syntax
The temporal logic ATL (alternating-time logic) is defined with respect to a finite set Π of propositions and a finite set Σ of agents. An ATL formula is one of the following:

(S1) p, for propositions p ∈ Π.
(S2) ¬φ or φ1 ∨ φ2, where φ, φ1, and φ2 are ATL formulas.
(S3) ⟨⟨A⟩⟩◯φ, ⟨⟨A⟩⟩□φ, or ⟨⟨A⟩⟩φ1 U φ2, where A ⊆ Σ is a set of agents, and φ, φ1, and φ2 are ATL formulas.

The operator ⟨⟨·⟩⟩ is a path quantifier, and ◯ ("next"), □ ("always"), and U ("until") are temporal operators. The logic ATL is similar to the branching-time logic CTL, only that path quantifiers are parameterized by sets of agents. When A = {a_1, ..., a_n} is known, we write ⟨⟨a_1, ..., a_n⟩⟩ instead of ⟨⟨{a_1, ..., a_n}⟩⟩.
2.2 Synchronous-structure semantics
The formulas of ATL can be interpreted over a synchronous structure S = ⟨Π, Σ, W, R, π, σ⟩, where Π is the set of propositions, Σ is the set of agents, W is a set of states, R ⊆ W × W is a total transition relation (i.e., for every state w ∈ W there exists a state w′ such that R(w, w′)), the function π maps each state w ∈ W to the set π(w) ⊆ Π of propositions that are true in w, and the function σ maps each state w ∈ W to the agent σ(w) ∈ Σ that is enabled in w. Note that precisely one agent is enabled in each state. For a set A ⊆ Σ of agents, we denote by W_A ⊆ W the set of states w for which σ(w) ∈ A. When A = {a} is a singleton, we write W_a instead of W_{a}. For two states w and w′ with R(w, w′), we say that w′ is a successor of w. A computation of S is an infinite sequence λ = w_0, w_1, w_2, ... of states such that for all i ≥ 0, we have R(w_i, w_{i+1}). We refer to a computation starting at state w_0 as a w_0-computation. For a computation λ and an index i ≥ 0, we use λ[i] and λ[0, i] to denote the i-th state of λ and the finite prefix w_0, ..., w_i of λ, respectively.
In order to define the semantics of ATL, we first define the notion of strategies. A strategy for an agent a ∈ Σ is a function f_a : W*·W_a → W such that if f_a(λ·w) = w′, then R(w, w′). Thus, a strategy maps a finite prefix of a computation to a possible extension of the computation. Intuitively, the strategy f_a suggests, for each history λ and state w in which a is enabled, a successor f_a(λ·w) of w. Each strategy f_a induces a set of computations that agent a can enforce. Given a state w, a set A = {a_1, ..., a_n} of agents, and a set F_A = {f_{a_1}, ..., f_{a_n}} of strategies for the agents in A, we define the outcomes of F_A from w to be the set out(w, F_A) of all w-computations that the agents in A can enforce when they cooperate and follow the strategies in F_A; that is, a w-computation λ is in out(w, F_A) iff whenever λ visits a state whose agent is a_k ∈ A, then the computation proceeds according to the strategy f_{a_k}. Formally, λ = w_0, w_1, w_2, ... is in out(w, F_A) iff w_0 = w and, for all i ≥ 0, if σ(w_i) = a_k for some a_k ∈ A, then w_{i+1} = f_{a_k}(λ[0, i]). In particular, if A = Σ, then the outcome set out(w, F_A) contains a single w-computation, and if A = ∅, then the outcome set out(w, F_A) contains all w-computations.
We can now turn to a formal definition of the semantics of ATL. We write w ⊨_S φ to indicate that the state w of the synchronous structure S satisfies the formula φ (when the subscript S is clear from the context, we omit it). The satisfaction relation ⊨ is defined, for all states w ∈ W, inductively as follows:

• For p ∈ Π, we have w ⊨ p iff p ∈ π(w).
• w ⊨ ¬φ iff w ⊭ φ, and w ⊨ φ1 ∨ φ2 iff w ⊨ φ1 or w ⊨ φ2.
• w ⊨ ⟨⟨A⟩⟩◯φ iff there exists a set F_A of strategies, one for each agent in A, such that for all computations λ ∈ out(w, F_A), we have λ[1] ⊨ φ.
• w ⊨ ⟨⟨A⟩⟩□φ iff there exists a set F_A of strategies, one for each agent in A, such that for all computations λ ∈ out(w, F_A) and all indices i ≥ 0, we have λ[i] ⊨ φ.
• w ⊨ ⟨⟨A⟩⟩φ1 U φ2 iff there exists a set F_A of strategies, one for each agent in A, such that for all computations λ ∈ out(w, F_A), there exists an index j ≥ 0 such that λ[j] ⊨ φ2 and, for all 0 ≤ i < j, we have λ[i] ⊨ φ1.
[Figure 1: A synchronous train-controller system. The state labels include out of gate, request, grant, and in gate; two states are owned by the controller (ctr) and two by the train.]
Note that the next operator ⟨⟨A⟩⟩◯ gives a local constraint: w ⊨ ⟨⟨A⟩⟩◯φ iff either w ∈ W_A and there exists a successor of w that satisfies φ, or w ∉ W_A and all successors of w satisfy φ. For an ATL formula φ, we denote by [φ] ⊆ W the set of states w such that w ⊨ φ.
Since the parameterized path quantifiers ⟨⟨Σ⟩⟩ and ⟨⟨∅⟩⟩ correspond to existential and universal path quantification, respectively, we write ∃ for ⟨⟨Σ⟩⟩ and ∀ for ⟨⟨∅⟩⟩. The logic CTL is the fragment of ATL interpreted over structures with a single agent: if Σ = {sys}, then ⟨⟨sys⟩⟩ corresponds to ∃ and ⟨⟨∅⟩⟩ to ∀. As dual of ⟨⟨·⟩⟩ we use [[·]]: while the path quantifier ⟨⟨Σ∖A⟩⟩ ranges over the computations that the agents not in A have strategies to enforce, the path quantifier [[A]] ranges over the computations that the agents in A do not have strategies to avoid; that is, [[A]]Γ stands for ¬⟨⟨A⟩⟩¬Γ. As in CTL, the temporal operator ◇ ("eventually") is defined from the until operator: ⟨⟨A⟩⟩◇φ = ⟨⟨A⟩⟩true U φ and [[A]]◇φ = [[A]]true U φ.
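For reference, the dualities and derived operators just introduced can be written compactly as follows (a restatement of the definitions above, not an additional claim):

```latex
[[A]]\,\Gamma \;\equiv\; \neg\,\langle\!\langle A\rangle\!\rangle\,\neg\Gamma,
\qquad
\langle\!\langle A\rangle\!\rangle\Diamond\varphi \;\equiv\; \langle\!\langle A\rangle\!\rangle\,\mathit{true}\;\mathcal{U}\;\varphi,
\qquad
\langle\!\langle A\rangle\!\rangle\Box\varphi \;\equiv\; \neg\,[[A]]\,\Diamond\neg\varphi .
```

The last equivalence shows why, even when □ is taken as primitive, it is interdefinable with ◇ through the dual quantifier [[A]].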
Example 2.1 Consider the synchronous structure shown in Figure 1. The structure describes a protocol for a train entering a gate at a railroad crossing. At each moment, the train is either out of gate or in gate. In order to enter the gate, the train issues a request, which is serviced (granted or rejected) by the controller in the next step. After a grant, the train may enter the gate or relinquish the grant. The structure has two agents: the train and the controller. Two states of the structure, labeled ctr, are controlled; that is, when a computation is in one of these states, the controller chooses the next state. The other two states are not controlled, and the train chooses successor states. The system satisfies the following specifications:

1. Whenever the train is outside the gate and does not have a grant to enter the gate, the controller can prevent it from entering the gate:
∀□((out of gate ∧ ¬grant) → ⟨⟨ctr⟩⟩□ out of gate)

2. Whenever the train is outside the gate, the controller cannot force it to enter the gate:
∀□(out of gate → [[ctr]]□ out of gate)

3. Whenever the train is outside the gate, the train and the controller can cooperate so that the train will enter the gate:
∀□(out of gate → ⟨⟨ctr, train⟩⟩◇ in gate)

4. Whenever the train is in the gate, the controller can force it out in a single step:
∀□(in gate → ⟨⟨ctr⟩⟩◯ out of gate)

The first two specifications cannot be stated in CTL or CTL*. They provide more information than the CTL formula φ = ∀□(out of gate → ∃□ out of gate). While φ only requires the existence of a computation in which the train is always outside the gate, the ATL formulas guarantee that no matter how the train behaves, the controller can prevent it from entering the gate, and no matter how the controller behaves, the train can decide to stay outside the gate. Since all the states of the structure are either controller states or train states, the third specification is equivalent to the CTL formula ∀□(out of gate → ∃◇ in gate).
2.3 Asynchronous-structure semantics
The formulas of ATL can also be interpreted over an asynchronous structure S = ⟨Π, Σ, W, R, π, σ⟩, where Π, Σ, W, R, and π are as in a synchronous structure, and the function σ maps each transition r ∈ R to the agent σ(r) ∈ Σ that owns r. For an agent a ∈ Σ, we denote by R_a ⊆ R the set of transitions owned by a. For two states w and w′ with R_a(w, w′), we say that w′ is an a-successor of w. Note that a state w may have none or several a-successors. For a state w, we define enabled(w) ⊆ Σ to be the set of agents a for which there exists an a-successor of w. We write W_a for the set of states w with a ∈ enabled(w). As for synchronous structures, a computation of S is an infinite sequence λ = w_0, w_1, w_2, ... of states such that for all i ≥ 0, we have R(w_i, w_{i+1}). Given a set A ⊆ Σ of agents, the computation λ is A-fair iff for each agent a ∈ A and each index i ≥ 0, there exists a j ≥ i such that either a ∉ enabled(w_j) or R_a(w_j, w_{j+1}); that is, in an A-fair computation, no agent in A can be continuously enabled without being scheduled.¹
Similar to the synchronous case, a strategy for an agent a ∈ Σ is a function f_a : W*·W_a → W such that if f_a(λ·w) = w′, then R_a(w, w′). Thus, the agent a applies its strategy and influences the computation whenever it is scheduled. However, unlike in synchronous structures, it is not known in advance which agent is scheduled when a particular state is encountered. Given a state w, a set A = {a_1, ..., a_n} of agents, and a set F_A = {f_{a_1}, ..., f_{a_n}} of strategies for the agents in A, we define the outcomes out(w, F_A) to be the set of all w-computations that the agents in A can enforce when they follow the strategies f_{a_1}, ..., f_{a_n} and the scheduling policy is fair with respect to A. Formally, the computation λ = w_0, w_1, w_2, ... is in out(w, F_A) iff w_0 = w; for all i ≥ 0, if R_{a_k}(w_i, w_{i+1}) for some a_k ∈ A, then w_{i+1} = f_{a_k}(λ[0, i]); and λ is A-fair. The definition of satisfaction of ATL formulas in an asynchronous structure is the same as in the synchronous case, with the above definition of outcomes. For example, w ⊨ ⟨⟨A⟩⟩◯φ iff for every agent a ∈ enabled(w), either a ∈ A and there exists an a-successor w′ of w such that w′ ⊨ φ, or a ∉ A and for all a-successors w′ of w, we have w′ ⊨ φ. Synchronous ATL is the fragment of asynchronous ATL interpreted over structures where for each state w, the set enabled(w) of enabled agents is a singleton. If the set Σ of agents is a singleton, then the synchronous and asynchronous interpretations coincide (and are equal to CTL).
Example 2.2 Consider the asynchronous structure shown in Figure 2. The structure again describes a protocol for a train entering a gate. The protocol is similar to the one described in Example 2.1, only that requests by the train to enter the gate are serviced asynchronously, at some future step. A fair scheduling policy ensures that each request will be serviced (granted or rejected) eventually. All four specifications from Example 2.1 hold also for the asynchronous system.

¹ Our algorithms can be easily modified to account for different, stronger types of fairness.

[Figure 2: An asynchronous train-controller system.]
3 Model Checking
The synchronous (resp. asynchronous) model-checking problem for ATL asks, given a synchronous (resp. asynchronous) structure S and an ATL formula φ, for the set [φ] of states of S that satisfy φ. We measure
the complexity of the model-checking problem in
two ways: the joint complexity of model checking considers
the complexity in terms of both the size of the
structure and the length of the formula; the structure
complexity of model checking considers the complexity
in terms of the structure only, assuming the formula
is fixed. Since the structure is typically much larger
than the formula, and its size is the most common
computational bottle-neck, the structure-complexity
measure is of particular practical interest [LP85].
3.1 The synchronous model
Model checking for synchronous ATL is very similar to CTL model checking [CE81, QS81, BCM+92]. We first present a symbolic algorithm, which manipulates state sets of the given synchronous structure S. The algorithm is shown in Figure 3, and uses the following primitive operations:

• The function Sub, when given an ATL formula φ, returns a queue Sub(φ) of subformulas of φ such that if φ1 is a subformula of φ and φ2 is a subformula of φ1, then φ2 precedes φ1 in the queue Sub(φ).
• The function PropCheck, when given a proposition p ∈ Π, returns the state set [p].
• The function Pre, when given two state sets ρ and τ, returns the set of states w such that either w ∈ ρ and some successor of w is in τ, or w ∉ ρ and all successors of w are in τ.
• Union, intersection, difference, and inclusion test for state sets.
These primitives can be implemented using symbolic
representations, such as binary decision diagrams, for
state sets and the transition relation. If given a symbolic
model checker for CTL, such as SMV [McM93],
only the Pre operation needs to be modified for checking
ATL.
for each φ′ in Sub(φ) do
  case φ′ = p: [φ′] := PropCheck(p);
  case φ′ = ¬θ: [φ′] := W ∖ [θ];
  case φ′ = θ1 ∨ θ2: [φ′] := [θ1] ∪ [θ2];
  case φ′ = ⟨⟨A⟩⟩◯θ: [φ′] := Pre(W_A, [θ]);
  case φ′ = ⟨⟨A⟩⟩θ1 U θ2:
    ρ := [false]; τ := [θ2];
    while τ ⊈ ρ do
      ρ := ρ ∪ τ; τ := Pre(W_A, ρ) ∩ [θ1]
    od; [φ′] := ρ;
  case φ′ = ⟨⟨A⟩⟩□θ:
    ρ := [true]; τ := [θ];
    while ρ ⊈ τ do
      ρ := τ; τ := [θ] ∩ Pre(W_A, ρ)
    od; [φ′] := ρ;
od;
return [φ].
Figure 3: Symbolic ATL model checking
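As an illustration of the Pre primitive and the until fixpoint used in the symbolic algorithm, the sketch below implements them enumeratively over an explicit turn-based structure. The encoding, state names, and function names are ours (a rough rendering of the train example of Figure 1), not from the paper:

```python
def pre(succ, rho, tau):
    """Pre(rho, tau): states w such that either w is in rho (a coalition
    agent moves at w) and some successor of w is in tau, or w is outside
    rho and all successors of w are in tau."""
    return {w for w, ws in succ.items()
            if (w in rho and any(v in tau for v in ws))
            or (w not in rho and all(v in tau for v in ws))}

def check_until(succ, coalition_states, phi1, phi2):
    """[<<A>> phi1 U phi2] as the least fixpoint of
    tau -> phi2 U (phi1 intersect Pre(coalition_states, tau))."""
    tau = set(phi2)
    while True:
        new = tau | (phi1 & pre(succ, coalition_states, tau))
        if new == tau:
            return tau
        tau = new
```

On a four-state encoding of the train example (states og, req, gr, ig with the obvious transitions), taking all states as coalition-owned makes the eventuality in gate reachable from every state, while the empty coalition leaves only in gate itself, matching the ∃ vs. ∀ readings of the parameterized quantifier.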
Alternatively, the ATL model-checking algorithm
can be implemented using an enumerative representation
for the state set W of the structure S. Then,
for each subformula ' 0 of ', every state w 2 W is
labeled with ' 0 iff w 2 [' 0 ], and w is labeled with
The labeling of the states with formulas
proceeds in a bottom-up fashion, following the
ordering Sub(') from smaller to larger subformulas.
If ' 0 is generated by the rules (S1) or (S2) or has
the form hhAii ', the labeling procedure is straight-
forward. For the labeling procedure
corresponds to solving a reachability problem for
an AND-OR graph: the states in WA are OR-nodes,
the remaining states are AND-nodes, and we need to
compute the set of nodes from which the OR-player
can reach a state labeled by ' 2 while staying within
states labeled by ' 1 . Since the reachability problem for
AND-OR graphs can be solved in time linear in the
number of edges, it follows that the labeling procedure
requires linear time for each subformula. Furthermore,
since reachability for AND-OR graphs, a PTIME-hard
problem [Imm81], can be reduced to model checking
for synchronous ATL, we conclude the following theorem.
Theorem 3.1 The synchronous model-checking problem
for ATL is PTIME-complete, and can be solved in
time O(m·ℓ) for a structure with m transitions and an
ATL formula of length ℓ. The structure complexity of
the problem is also PTIME-complete.
It is interesting to compare the model-checking complexities
of synchronous ATL and CTL. While both
problems can be solved in time O(m·ℓ) [CES86], the
structure complexity of CTL model checking is only
NLOGSPACE-complete [BVW94]. This is because
CTL model checking is related to graph reachability,
as synchronous ATL model checking is related to
AND-OR graph reachability.
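The linear-time bound for AND-OR graph reachability rests on a standard backward propagation with successor counters. The sketch below (function names and the example graph are our own, and the restriction to φ1-labeled states is omitted for brevity) computes the nodes from which the OR-player can force the play into a target set.

```python
from collections import deque

def and_or_reachable(succ, or_nodes, targets):
    """Backward propagation: a node is winning if it is a target, an
    OR-node with some winning successor, or an AND-node all of whose
    successors are winning.  Each edge is examined once, so the running
    time is linear in the number of edges."""
    pred = {u: [] for u in succ}
    for u, vs in succ.items():
        for v in vs:
            pred[v].append(u)
    remaining = {u: len(succ[u]) for u in succ}  # unresolved successors
    win = set(targets)
    queue = deque(targets)
    while queue:
        v = queue.popleft()
        for u in pred[v]:
            if u in win:
                continue
            if u in or_nodes:
                win.add(u)          # one winning successor suffices
                queue.append(u)
            else:
                remaining[u] -= 1   # one more successor resolved
                if remaining[u] == 0:
                    win.add(u)      # all successors are winning
                    queue.append(u)
    return win

succ = {1: {2, 3}, 2: {4}, 3: {3}, 4: set()}
print(and_or_reachable(succ, {1, 2}, {4}))
```

The same computation as the until fixed point, but with the counters replacing repeated Pre evaluations; this is what brings the bound down from quadratic to linear.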
3.2 The asynchronous model
Consider an asynchronous structure S with the state
set W, and a formula φ of ATL. As in the synchronous
case, for each subformula φ′ of φ, we compute the set
of states that satisfy φ′, starting from the innermost
subformulas. For this purpose, we transform the
asynchronous structure S into a synchronous structure
S′ as follows. The propositions of S′ are the propositions
of S. The agents of S′ are the agents of S,
plus a new agent b called the scheduler. The states of S′
are the states of S, plus for every state w ∈ W and
every agent a enabled in w, a new state q_{w,a}. Then,
each transition of S from w to w′ owned by agent a
is replaced in S′ by two transitions, one from w to
q_{w,a} and the other from q_{w,a} to w′. The propositions
that are true in the states w and q_{w,a} of S′ are those
that are true in the state w of S. The agent that is
enabled in the state w of S′ is b, and the agent that
is enabled in the state q_{w,a} of S′ is a. Thus, in the
synchronous structure, first the scheduler chooses one
of the enabled agents, and then the chosen agent takes
a step.
Consider the subformula ⟨⟨a⟩⟩◇φ of φ (more general
until formulas can be handled similarly). The evaluation
of this formula in a state w_0 ∈ W corresponds to
the following game between a protagonist and an adversary.
The game is played on the synchronous structure
S′, starting from w_0. When the agent enabled in
the current state is a, the protagonist chooses a successor
state in S′; otherwise the adversary chooses the
next state. When a state in W satisfying φ is visited,
the protagonist wins. If the game continues forever,
then the adversary wins iff the resulting computation
is fair to the agent a; that is, it contains either infinitely
many states of the form q_{w,a} or infinitely many
states w ∈ W such that a is not enabled in the state w
of S. Then, the state w_0 satisfies the formula ⟨⟨a⟩⟩◇φ
in S iff the protagonist has a winning strategy. The
winning condition of the adversary can be specified
by the condition □¬φ ∧ □◇(W′_a ∪ (W \ W_a)), where W_a
defines the states of S in which a is enabled and W′_a
defines the states of S′ in which a is enabled. This is
a Büchi game, and the set of winning states for the
adversary can be computed by a nested fixed-point
computation:
ρ := [true]; τ := [¬φ];
while ρ ⊄ τ do
    ρ := τ;
    while τ ⊄ ρ′ do
        … τ is updated using Pre, [¬φ], and the set W′_a ∪ (W \ W_a) …
    od
od.
In an enumerative implementation, the complexity of
solving a Büchi game is quadratic in the number of
transitions of a structure [VW86]. If S has m transitions,
then S′ has 2m transitions. Thus, the labeling
procedure for the subformula ⟨⟨a⟩⟩◇φ requires O(m²)
time.
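The quadratic behavior comes from the nested fixed point. A standard symbolic formulation of Büchi-game solving (offered as a sketch; it is not necessarily the exact formulation of the figure above) computes the winning states as νZ.μY. safe ∩ ((fair ∩ Pre(Z)) ∪ Pre(Y)), here over explicit sets with an illustrative turn-based Pre.

```python
def buchi_win(states, succ, adv, safe, fair):
    """States from which the player owning the nodes in `adv` can keep
    the play inside `safe` forever while visiting `fair` infinitely
    often: the nested fixed point
        nu Z. mu Y. safe & ((fair & pre(Z)) | pre(Y))."""
    def pre(t):
        return {w for w in states
                if (w in adv and succ[w] & t)
                or (w not in adv and succ[w] <= t)}
    z = set(states)                 # outer greatest fixed point
    while True:
        y = set()                   # inner least fixed point
        while True:
            new_y = safe & ((fair & pre(z)) | pre(y))
            if new_y == y:
                break
            y = new_y
        if y == z:
            return z
        z = y

# The adversary owns every state; it can loop 1 -> 2 -> 1 and so visit
# state 1 infinitely often, while the sink 3 never reaches state 1.
succ = {1: {2}, 2: {1}, 3: {3}}
states = set(succ)
print(buchi_win(states, succ, states, states, {1}))
```

Each inner fixed point takes time proportional to the number of transitions, and the outer loop can shrink Z once per iteration, which is the source of the quadratic bound.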
For the subformula φ′ = ⟨⟨a1, …, an⟩⟩◇φ, the
winning condition of the adversary is a conjunction
of n Büchi conditions (one for each agent a_k, of the
form □¬φ ∧ □◇(W′_{a_k} ∪ (W \ W_{a_k}))). Such a game can be transformed
into a game with a single Büchi condition by
introducing a counter variable that ranges over the
set {1, …, n} [VW86]. Hence, the labeling procedure
for the subformula φ′ requires O(m²·n²) time. To determine
the complexity of evaluating all subformulas
of ', we define the size of a logical connective to be 1,
and the size of a temporal connective to be the number
of agents in the corresponding path quantifier. Then,
if k1, …, kj are the sizes of all connectives in φ, the
time complexity is O(m²·(k1² + ⋯ + kj²)), which is
bounded by O(m²·ℓ²), where ℓ is the
length of the formula φ. Finally, since the synchronous
model-checking problem is a special case of
the asynchronous problem, we conclude the following
theorem.
Theorem 3.2 The asynchronous model-checking
problem for ATL is PTIME-complete, and can be
solved in time O(m²·ℓ²) for a structure with m transitions
and an ATL formula of length ℓ. The structure
complexity of the problem is also PTIME-complete.
It is interesting to compare the model-checking complexities
of asynchronous ATL and Fair-CTL with
generalized Büchi fairness constraints. While the latter
can be solved in O(m·k·ℓ) time [VW86], where
k is the number of Büchi constraints, the best known
algorithm for the former is quadratic in m. This is because
Fair-CTL model checking is related to checking
the emptiness of Büchi automata, and asynchronous ATL
model checking is related to checking the emptiness
of alternating Büchi automata.
4 The Alternating-time Logic ATL*
The logic ATL is a fragment of a more expressive logic
called ATL*. There are two types of formulas in ATL*:
state formulas, whose satisfaction is related to a specific
state, and path formulas, whose satisfaction is
related to a specific computation. Formally, a state
formula is one of the following:
(S1) p, for propositions p ∈ Π.
(S2) ¬φ or φ1 ∨ φ2, where φ, φ1, and φ2 are state
formulas.
(S3) ⟨⟨A⟩⟩ψ, where A ⊆ Σ is a set of agents, and ψ is
a path formula.
A path formula is one of the following:
(P1) A state formula.
(P2) ¬ψ or ψ1 ∨ ψ2, where ψ, ψ1, and ψ2 are path
formulas.
(P3) ◯ψ or ψ1 U ψ2, where ψ, ψ1, and ψ2 are path
formulas.
The logic ATL* consists of the set of state formulas
generated by the above rules. It is similar to the
branching-time temporal logic CTL*, only that path
quantifiers are parameterized by sets of agents. The
logic ATL is the fragment of ATL* that consists of all
formulas in which every temporal operator is immediately
preceded by a path quantifier.
The semantics of ATL* is defined similarly to the
semantics for ATL. We write γ ⊨_S ψ to indicate that
the computation γ of the structure S satisfies the path
formula ψ (the subscript S is usually omitted). The
satisfaction relation ⊨ is defined, for all states w and
computations γ, inductively as follows:
• For state formulas generated by the rules (S1) and
(S2), the definition of ⊨ is the same as for ATL.
• w ⊨ ⟨⟨A⟩⟩ψ iff there exists a set F_A of strategies,
one for each agent in A, such that for all computations
γ ∈ out(w, F_A), we have γ ⊨ ψ.
• γ ⊨ φ for a state formula φ iff γ[0] ⊨ φ.
• γ ⊨ ψ1 U ψ2 iff there exists an index i ≥ 0 such
that γ[i, ∞] ⊨ ψ2, and γ[j, ∞] ⊨ ψ1 for all 0 ≤ j < i.
As before, out(w, F_A) denotes the set of computations
from w in which every agent in A follows its strategy in F_A.
The temporal operators ◇ and □ are defined from
U as usual: ◇ψ = true U ψ and □ψ = ¬◇¬ψ.
For example, an ATL* formula can assert that agent a
has a strategy to enforce that, whenever a request is
continuously issued, infinitely many grants are given. This specification
cannot be expressed in CTL* or in ATL. For single-agent
structures, ATL* degenerates to CTL*.
While there is an exponential price to be paid in
model-checking complexity when moving from CTL to
CTL*, the price becomes even more significant when
we consider the alternating-time versions of the logics.
Theorem 4.1 In both the synchronous and asynchronous
cases, the model-checking problem for ATL*
is 2EXPTIME-complete. In both cases, the structure
complexity of the problem is PTIME-complete.
Proof (sketch). Given a synchronous structure S with
state set W, and an ATL* formula φ, we label the
states in S with state subformulas of φ, starting from
the innermost state subformulas. For state subformulas
of the form ⟨⟨A⟩⟩ψ, we employ the algorithm for
module checking from [KV96] as follows.
Let ψ′ be the result of replacing in ψ all state
subformulas, which have already been evaluated, by appropriate
new propositions. For a state w and a set F_A of
strategies, the set out(w, F_A) of computations induces
a tree, which is obtained by unwinding S from w and
then pruning all subtrees whose roots are successors
of states in W_A that are not chosen by the strategies
in F_A. We construct a Büchi tree automaton T_{w,A}
that accepts all trees induced by out(w, F_A) for any
set F_A of strategies, and a Rabin tree automaton T_ψ
that accepts all trees satisfying the formula ∀ψ′.
By [KV96], T_{w,A} has |W| states. By [ES84], T_ψ has
2^{2^{O(|ψ′|)}} states and 2^{O(|ψ′|)} acceptance pairs. The intersection
of the two automata is a Rabin tree automaton
that contains the outcome trees satisfying ∀ψ′. Hence,
by the semantics of ATL*, the state w satisfies ⟨⟨A⟩⟩ψ
iff this intersection is nonempty. By [EJ88, PR89a],
the nonemptiness problem for a Rabin tree automaton
with n states and k pairs can be solved in time
(k·n)^{O(k)}. Hence, evaluating the subformula ⟨⟨A⟩⟩ψ in
a single state requires at most doubly-exponential time. Since
there are |W| states and O(|φ|) many subformulas,
membership in 2EXPTIME follows.
The asynchronous case can be reduced to the synchronous
case similarly to the proof of Theorem 3.2.
For the lower bounds, we use a reduction from the realizability
problem of LTL, a 2EXPTIME-hard problem
[Ros92], to model checking for synchronous ATL*.
By contrast, CTL* model checking is only PSPACE-complete
[CES86], and its structure complexity is
NLOGSPACE-complete [BVW94].
5 Discussion
The verification problem for open systems corresponds
not so much to the model-checking problem for temporal
logics as, in the case of linear time, to the
realizability problem [ALW89, PR89a, PR89b],
and, in the case of branching time, to the module-checking
problem [KV96]; that is, to a search for winning
strategies in ω-regular games. In general, this
involves a hefty computational price. The logic ATL
identifies an interesting class of properties that can be
checked by solving finite games, which demonstrates
that there is still a great deal of reasoning about open
systems that can be performed efficiently. We conclude
with several remarks on variations of ATL which
support our design choices.
5.1 Agents with limited memory
In the definitions of ATL and ATL*, the strategy of an
agent may depend on an unbounded amount of information,
namely, the full history of the game up to the
current state. However, since all involved games are
ω-regular, the existence of a winning strategy implies the
existence of a winning finite-state strategy [Tho95],
which depends only on a finite amount of information
about the history of the game. Thus, the semantics
of ATL and ATL* can be defined, equivalently, using
the outcomes of finite-state strategies only. This is
interesting, because a strategy can be thought of as the
parallel composition of the system with a controller,
which makes sure that the system follows the strategy.
Then, for an appropriate definition of parallel
composition, it is precisely the finite-state strategies
that can be implemented using controllers that are
synchronous structures. Indeed, for the finite reachability
games and generalized Büchi games of ATL,
it suffices to consider memory-free strategies [Tho95],
which can be implemented as control maps (i.e., controllers
without state). This is not the case for ATL*,
whose formulas can specify the winning positions of
more general games [Tho95].
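For the reachability games of ATL, a memory-free strategy can in fact be read off directly from the backward attractor computation: at each winning OR-node, fix the successor through which the node was first reached. The sketch below (graph and names are our own illustration, not taken from the text) returns such a control map.

```python
from collections import deque

def reach_strategy(succ, or_nodes, targets):
    """Attractor computation that also records, for every winning
    OR-node, one successor that strictly decreases the distance to the
    target set.  The resulting map `strat` is a memory-free strategy,
    i.e. a control map from states to states."""
    pred = {u: [] for u in succ}
    for u, vs in succ.items():
        for v in vs:
            pred[v].append(u)
    remaining = {u: len(succ[u]) for u in succ}
    rank = {t: 0 for t in targets}   # distance to the target set
    strat = {}
    queue = deque(targets)
    while queue:
        v = queue.popleft()
        for u in pred[v]:
            if u in rank:
                continue
            if u in or_nodes:
                rank[u] = rank[v] + 1
                strat[u] = v         # memory-free choice at u
                queue.append(u)
            else:
                remaining[u] -= 1
                if remaining[u] == 0:
                    rank[u] = rank[v] + 1
                    queue.append(u)
    return strat

succ = {1: {2, 3}, 2: {4}, 3: {3}, 4: set()}
print(reach_strategy(succ, {1, 2}, {4}))
```

Following `strat` from any winning node decreases the rank at every OR-move, so the play reaches the target without the controller keeping any history.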
5.2 Agents with limited information
Our models assume that the agents of a structure have
complete information about all propositions (which
state satisfies which propositions) and about the other
agents (which agent owns which transitions). Sometimes
it may be more appropriate to assume that each
agent a ∈ Σ can observe only a subset Πa ⊆ Π of
the propositions, and a strategy fa for a
must (1) depend only on the observable part of the
history and (2) decide only on the observable part of
the next state. From undecidability results on multi-player
games with incomplete information, it follows
that the model-checking problem for ATL with incomplete
information is undecidable in both the synchronous
[Yan97] and asynchronous [PR79] cases. In
the special case that all path quantifiers are parameterized
by single agents, and no cooperation between
agents with different information is possible, decidability
follows from the results on module checking
with incomplete information [KV97]. In this case,
the model-checking complexity for both synchronous
and asynchronous ATL is EXPTIME-complete, and
2EXPTIME-complete for ATL*. The structure complexity
of all four problems is EXPTIME-complete,
thus rendering reasoning about agents with incomplete
information infeasible even under severe restrictions.
5.3 Game logic and module checking
The parameterized path quantifier ⟨⟨A⟩⟩ first stipulates
the existence of strategies for the agents in A and then
universally quantifies over the outcomes of the stipulated
strategies. One may generalize ATL and ATL*
by separating the two concerns into strategy quantifiers
and path quantifiers, say, by writing ∃∃A. ∀ instead
of ⟨⟨A⟩⟩ (read ∃∃A as "there exist strategies for
the agents in A"). Then, for example, the formula
φ = ∃∃A. (∃□φ1 ∧ ∃□φ2) asserts that the agents in
A have strategies such that for some behavior of the
remaining agents, φ1 is always true, and for some possibly
different behavior of the remaining agents, φ2 is
always true. To define the semantics of strategy quantifiers,
we need to consider the tree that is induced by
the outcomes of a set of strategies, and obtain three
types of formulas: state formulas and path formulas
as in CTL* or ATL*, and tree formulas, whose satisfaction
is related to a specific outcome tree. For
instance, while φ is a state formula, its subformula
∃□φ1 ∧ ∃□φ2 is a tree formula. We refer to the general
logic with strategy quantifiers, path quantifiers,
temporal operators, and boolean connectives as game
logic. Then, ATL* is the fragment of game logic that
consists of all formulas in which every strategy quantifier
is immediately followed by a path quantifier (note
that ∃∃A. ∃ is equivalent to ∃).
Another fragment of game logic is studied in module
checking [KV96]. There, one considers formulas of
the form ∃∃A. φ, with a single outermost strategy quantifier
followed by a CTL or CTL* formula φ. From
an expressiveness viewpoint, alternating-time logics
and module checking identify incomparable fragments
of game logic: the formula φ from above is not
equivalent to any ATL* formula, and there are ATL
formulas that are not equivalent to any formula
with a single strategy quantifier. In [KV96],
it is shown that the module-checking complexity
is EXPTIME-complete for CTL and 2EXPTIME-complete
for CTL*, and the structure complexity of
both problems is PTIME-complete. Applying the
method there in a bottom-up fashion can be used
to solve the model-checking problem for game logic,
resulting in a joint complexity of 2EXPTIME and a
structure complexity of PTIME. Thus, game logic is
no more expensive than ATL*. We feel, however, that
unlike state and path formulas, tree formulas are not
natural specifications of reactive systems.
5.4 Alternating-time fixpoint logic
Temporal properties using the until operator can be
defined as fixed points of next-time properties. For
closed systems, this gives the μ-calculus as a generalization
of temporal logics [Koz83]. In a similar fashion,
one can generalize alternating-time temporal logics to
obtain an alternating-time μ-calculus, whose primitives
are the parameterized next constructs ⟨⟨A⟩⟩◯,
least and greatest fixed-point operators, and
positive boolean connectives. Then, ATL* is a proper
fragment of the alternating-time μ-calculus, and every
ATL formula is equivalent to a fixed-point formula
without alternation of least and greatest fixed
points. In practice, however, designers prefer temporal
operators over fixed points [BBG+94]; just as
CTL and CTL* capture useful and friendly subsets of
the μ-calculus for the specification of closed systems,
ATL and ATL* capture useful and friendly subsets
of the alternating-time μ-calculus for the specification
of open systems. It is worth noting that CTL with
parameterized next constructs is not of sufficient use,
because it cannot specify the unbounded alternating
reachability property ⟨⟨A⟩⟩◇φ [BVW94]. Hence it is
essential that in ATL we can parameterize path quantifiers,
not just next-time operators.
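For instance, the until and invariance operators of ATL admit the standard alternation-free fixed-point characterizations in terms of the parameterized next construct; the following formulation is the usual one and is offered as a sketch rather than a quotation from the text:

```latex
\langle\langle A\rangle\rangle\, \varphi_1 \,\mathcal{U}\, \varphi_2
  \;=\; \mu X.\bigl(\varphi_2 \,\vee\, (\varphi_1 \wedge
        \langle\langle A\rangle\rangle \bigcirc X)\bigr)
\qquad
\langle\langle A\rangle\rangle\, \Box \varphi
  \;=\; \nu X.\bigl(\varphi \wedge
        \langle\langle A\rangle\rangle \bigcirc X\bigr)
```

The until operator is a least fixed point (μ) and invariance a greatest fixed point (ν), with no alternation between the two, which is exactly the claim made above about ATL inside the alternating-time μ-calculus.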
5.5 Alternating-time transition systems
The parameterized next construct ⟨⟨A⟩⟩◯ is different
from similar operators commonly written as ∃A◯ and
interpreted as "for some agent a ∈ A, there exists
an a-successor." Rather, the ATL formula ⟨⟨A⟩⟩◯φ is
equivalent to ∃A◯φ ∧ ∀Σ∖A◯φ. For the abstract specification
of open systems, it is essential that the parameterized
next has a game-like interpretation, not a
standard modal interpretation. This is because our
definitions of synchronous and asynchronous structures
are only approximations of "real" concurrency
models for open systems (synchronous, e.g. [AH96],
or asynchronous, e.g. [Lyn96]). While in our structures,
each transition corresponds to a step of a single
agent, in these models, a transition may be the result
of simultaneous independent decisions by several
agents. The more general situation gives rise to games
with complex individual moves that can be captured
abstractly by alternating transition systems (see the
full paper). The game-like interpretation of the parameterized
next makes ATL robust with respect to
such changes in the definition of individual moves. In
particular, all of our results carry over to alternating
transition systems, and therefore apply, for example,
to Reactive Modules [AH96].
Acknowledgments. We thank Amir Pnueli, Moshe
Vardi, and Mihalis Yannakakis for helpful discussions.
--R
Reactive modules.
Composing specifications.
Realizable and unrealizable concurrent program specifications.
Symbolic model checking: 10 20 states and beyond.
An automata-theoretic approach to branching-time model checking
Design and synthesis of synchronization skeletons using branching-time temporal logic
Automatic verification of finite-state concurrent systems using temporal logic specifications
Trace Theory for Automatic Hierarchical Verification of Speed-independent Circuits
On branching versus linear time.
The complexity of tree automata and logics of programs.
Deciding branching-time logic
Liveness in timed and untimed systems.
Communicating Sequential Processes.
The model checker SPIN.
Number of quantifiers is better than number of tape cells.
Results on the propositional μ-calculus.
Module checking.
Module checking revisited.
Checking that finite-state concurrent programs satisfy their linear specification
Distributed Algorithms.
Symbolic Model Checking.
The temporal logic of programs.
On the synthesis of a reactive module.
On the synthesis of an asynchronous reactive module.
Specification and verification of concurrent systems in Cesar.
The control of discrete event systems.
Modular Synthesis of Reactive Systems.
On the synthesis of strategies in infinite games.
Personal communication
--TR
Communicating sequential processes
"Sometimes" and "not never" revisited
Automatic verification of finite-state concurrent systems using temporal logic specifications
On the synthesis of a reactive module
Trace theory for automatic hierarchical verification of speed-independent circuits
Automata on infinite objects
Temporal and modal logic
Symbolic Boolean manipulation with ordered binary-decision diagrams
Symbolic model checking
and ECTL as fragments of the modal μ-calculus
Conjoining specifications
The Model Checker SPIN
Modalities for model checking (extended abstract)
Checking that finite state concurrent programs satisfy their linear specification
On the membership problem for functional and multivalued dependencies in relational databases
Alternation
Reactive Modules
An automata-theoretic approach to branching-time model checking
Module checking
JMOCHA
Distributed Algorithms
Symbolic Model Checking
Automata on Infinite Objects and Church's Problem
Realizable and Unrealizable Specifications of Reactive Systems
On the Synthesis of an Asynchronous Reactive Module
Fair Simulation Relations, Parity Games, and State Space Reduction for Büchi Automata
Small Progress Measures for Solving Parity Games
The Control of Synchronous Systems
The Control of Synchronous Systems, Part II
On the Complexity of Branching Modular Model Checking (Extended Abstract)
Specification and verification of concurrent systems in CESAR
MOCHA
A Linear-Time Model-Checking Algorithm for the Alternation-Free Modal Mu-Calculus
Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic
Liveness in Timed and Untimed Systems
Trees, automata, and games
Deciding branching time logic
--CTR
Xianwei Lai , Shanli Hu , Zhengyuan Ning, An Improved Formal Framework of Actions, Individual Intention and Group Intention for Multi-agent Systems, Proceedings of the IEEE/WIC/ACM international conference on Intelligent Agent Technology, p.420-423, December 18-22, 2006
Wojciech Jamroga , Thomas Ågotnes, What agents can achieve under incomplete information, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Thomas Ågotnes , Wiebe van der Hoek , Michael Wooldridge, On the logic of coalitional games, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Suchismita Roy , Sayantan Das , Prasenjit Basu , Pallab Dasgupta , P. P. Chakrabarti, SAT based solutions for consistency problems in formal property specifications for open systems, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.885-888, November 06-10, 2005, San Jose, CA
Luigi Sauro , Jelle Gerbrandy , Wiebe van der Hoek , Michael Wooldridge, Reasoning about action and cooperation, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Krishnendu Chatterjee , Luca de Alfaro , Thomas A. Henzinger, The complexity of quantitative concurrent parity games, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.678-687, January 22-26, 2006, Miami, Florida
van der Hoek , Michael Wooldridge, On the dynamics of delegation, cooperation, and control: a logical account, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Aleksandra Nenadić , Ning Zhang , Qi Shi, RSA-based verifiable and recoverable encryption of signatures and its application in certified e-mail delivery, Journal of Computer Security, v.13 n.5, p.757-777, October 2005
Aldewereld , Wiebe van der Hoek , John-Jules Meyer, Rational Teams: Logical Aspects of Multi-Agent Systems, Fundamenta Informaticae, v.63 n.2-3, p.159-183, April 2004
Valentin Goranko , Govert van Drimmelen, Complete axiomatization and decidability of alternating-time temporal logic, Theoretical Computer Science, v.353 n.1, p.93-117, 14 March 2006
Thomas A. Henzinger, Games in system design and verification, Proceedings of the 10th conference on Theoretical aspects of rationality and knowledge, June 10-12, 2005, Singapore
van der Hoek , Alessio Lomuscio , Michael Wooldridge, On the complexity of practical ATL model checking, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
van der Hoek , Mark Roberts , Michael Wooldridge, Knowledge and social laws, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Alur , Pavol Černý , P. Madhusudan , Wonhong Nam, Synthesis of interface specifications for Java classes, ACM SIGPLAN Notices, v.40 n.1, p.98-109, January 2005
van der Hoek , Wojciech Jamroga , Michael Wooldridge, A logic for strategic reasoning, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Krishnendu Chatterjee , Thomas A. Henzinger , Marcin Jurdziński, Games with secure equilibria, Theoretical Computer Science, v.365 n.1, p.67-82, 10 November 2006
Alessio Lomuscio , Franco Raimondi, Model checking knowledge, strategies, and games in multi-agent systems, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Alur , Salvatore La Torre , P. Madhusudan, Modular strategies for recursive game graphs, Theoretical Computer Science, v.354 n.2, p.230-249, 28 March 2006
Wojciech Jamroga , Wiebe van der Hoek, Agents that Know How to Play, Fundamenta Informaticae, v.63 n.2-3, p.185-219, April 2004
Magdalena Kacprzak , Wojciech Penczek, Fully Symbolic Unbounded Model Checking for Alternating-time Temporal Logic, Autonomous Agents and Multi-Agent Systems, v.11 n.1, p.69-89, July 2005
van der Hoek , Michael Wooldridge, On the logic of cooperation and propositional control, Artificial Intelligence, v.164 n.1-2, p.81-119, May 2005
Yves Bontemps , Pierre-Yves Schobbens , Christof Löding, Synthesis of Open Reactive Systems from Scenario-Based Specifications, Fundamenta Informaticae, v.62 n.2, p.139-169, April 2004
D. R. Ghica , A. S. Murawski , C.-H. L. Ong, Syntactic control of concurrency, Theoretical Computer Science, v.350 n.2, p.234-251, 7 February 2006
van der Hoek, Knowledge, Rationality and Action, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.16-23, July 19-23, 2004, New York, New York | games;temporaxl logic;alternation;model checking |
585435 | A Two-Variable Fragment of English. | Controlled languages are regimented fragments of natural language designed to make the processing of natural language more efficient and reliable. This paper defines a controlled language, E2V, whose principal grammatical resources include determiners, relative clauses, reflexives and pronouns. We provide a formal syntax and semantics for E2V, in which anaphoric ambiguities are resolved in a linguistically natural way. We show that the expressive power of E2V is equal to that of the two-variable fragment of first-order logic. It follows that the problem of determining the satisfiability of a set of E2V sentences is NEXPTIME complete. We show that E2V can be extended in various ways without compromising these complexity results; however, relaxing our policy on anaphora resolution renders the satisfiability problem for E2V undecidable. Finally, we argue that our results have a bearing on the broader philosophical issue of the relationship between natural and formal languages. | Introduction
Controlled languages are regimented fragments of natural language
designed to make the processing of natural language more efficient
and reliable. Although work on controlled languages was originally
motivated by the need to produce uniform and easily translatable
technical documentation, attention has recently turned to their possible
application to system specifications (Fantechi et al. (1994), Fuchs,
Schwertl and Torge (1999b), Fuchs, Schwertl and Schwitter (1999a),
Holt (1999), Holt and Klein (1999), Macias and Pulman (1995), Nelken
and Francez (1996), Vadera and Meziane (1994)). This interest in
natural specification languages is motivated by the fact that many design
engineers and programmers find formal specification languages,
usually some variety of logic, alien and hard to understand. The hope
This paper was largely written during a visit to the Department of Computer
Science at the University of Zurich in 1999. The author wishes to thank David Brée,
Norbert Fuchs, Nick Player and an anonymous referee for their valuable comments
on this paper, and Michael Hess, Rolf Schwitter and Uta Schwertl for stimulating
discussions.
© 2001 Kluwer Academic Publishers. Printed in the Netherlands.
Ian Pratt-Hartmann
is that, by selecting a regimented subset of natural language, the precision
offered by formal specification languages can be combined with
the ease-of-understanding associated with natural language.
A new factor with important implications for the use of controlled
languages is the explosion of research into decidable fragments of logic.
Known decidable fragments include various prefix classes (see Börger,
Grädel and Gurevich (1997) for a survey), the guarded fragment
(Andréka, van Benthem and Németi (1998), Grädel (1999)) and, most
importantly for the purposes of this paper, the two-variable fragment
(Mortimer (1975), Grädel and Otto (1999)). The relevance of this research
to controlled languages is clear: given a controlled language
which is mapped to some logic, the question naturally arises as to
whether that logic enjoys good computational characteristics; conversely,
given a logic whose computational characteristics are well-understood,
it would be useful to identify a controlled language which maps to it.
This paper provides a study in how to match a controlled language
to a decidable logic with known computational characteristics. The
controlled language in question, called E2V, is shown to have the expressive
power of the two-variable fragment of rst-order logic. The
grammar of E2V has been kept as simple as possible, in order to clarify
the logical issues involved; thus, E2V is certainly not being proposed
as a practically useful controlled language. However, the techniques
developed in this paper easily carry over to various salient extensions
of E2V; therefore, we claim, our results are of direct relevance to the
development of practically useful controlled languages. In addition, we
argue that these results have a bearing on the broader philosophical
issue of the relationship between natural and formal languages.
The plan of the paper is as follows. Section 2 introduces the syntax
and semantics of E2V; section 3 establishes upper bounds on its
complexity; section 4 establishes corresponding lower bounds; and
section 5 discusses the broader philosophical significance of this work.
2. The syntax and semantics of E2V
E2V is a fragment of English coinciding, in a sense to be made precise
below, with the two-variable fragment of first-order logic. Its key
grammatical resources include determiners, relative clauses, reflexives
and pronouns. Examples of E2V sentences are:
(1) Every artist despises every beekeeper
(2) Every artist admires himself
(3) Every beekeeper whom an artist despises admires him
(4) Every artist who employs a carpenter despises every beekeeper
who admires him.
The remainder of this section is devoted to a formal specification of the
syntax and semantics of E2V. The syntax determines which strings of
words count as E2V sentences, and the semantics determines how those
sentences are to be interpreted, by mapping them to formulas of
first-order logic. The relatively formal nature of the presentation facilitates
the proofs of theorems in subsequent sections concerning the expressive
power of E2V and the computational complexity of reasoning within
it. Generally, the semantics of E2V is unsurprising, in that it interprets
E2V sentences in accordance with English speakers' intuitions. In
particular, sentences (1)–(4) are mapped to the respective formulas:

(5) ∀x(artist(x) → ∀y(beekeeper(y) → despise(x, y)))
(6) ∀x(artist(x) → admire(x, x))
(7) ∀x(beekeeper(x) → ∀y((artist(y) ∧ despise(y, x)) → admire(x, y)))
(8) ∀x((artist(x) ∧ ∃y(carpenter(y) ∧ employ(x, y))) →
        ∀y((beekeeper(y) ∧ admire(y, x)) → despise(x, y)))

Thus, when reading E2V, care must be taken to respect its particular
conventions regarding scope ambiguities and pronoun resolution. It is
not difficult to see that each of the formulas (5)–(8) can be equivalently
written using only two variables. We establish below that, given our
chosen conventions regarding scope ambiguities and pronoun resolution,
this is true of all translations of E2V sentences. We note in passing
that formula (5) does not lie in the guarded fragment.
2.1. Syntax
The syntax of E2V has four components: a grammar, a lexicon, a
movement rule and a set of indexing rules.
Grammar
The grammar of E2V consists of a set of definite clause grammar (DCG)
rules, for example:
ip(A) --> np(B/A, I), vp(B, I).

[Figure 1. The syntax of E2V: a) Grammar, b) Closed-class lexicon]
with the labels IP, NP, etc. indicating categories of phrases in the
usual way. The variable expressions A, B and B/A in these rules unify
with semantic values, which represent the meanings of the phrases in
question; and the variables I and J unify with indices, which regulate
variable bindings in those meanings. Semantic values are explained in
detail in section 2.2; for the present, however, think of a semantic value
of the form B/A as a function which takes B as input and yields A as
output. Thus, the first rule above states that the semantic value of an
IP consisting of an NP and a VP is obtained by applying the semantic
value of the NP to the semantic value of the VP.
The complete grammar for E2V is given in figure 1a. We mention
in passing that, in the presentation of E2V here, the issue of pronoun
agreement in person, case and gender has been ignored. Such details
are easily handled within the framework of DCGs, and need not be
discussed further.
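The function application just described can be mimicked directly. The following sketch is our own Python illustration, not the paper's Prolog; the constructors follow the semantic values discussed in section 2.2, but the function names and the hand-built derivation are ours. It composes the semantic value of sentence (1) bottom-up:

```python
# Semantic values of the form B/A are modelled as one-argument functions;
# complex values such as every(A,B) are modelled as tagged tuples.

def every(a):
    """Semantic value of the determiner 'every': A/(B/every(A,B))."""
    return lambda b: ('every', a, b)

# Open-class entries pair words with predicates over indices.
artist = ('artist', 'x1')          # noun with index x1
beekeeper = ('beekeeper', 'x2')    # noun with index x2
despise = ('despise', 'x1', 'x2')  # verb with indices x1, x2

# NP -> Det N': apply the determiner's value to the noun's value.
subject_np = every(artist)
object_np = every(beekeeper)

# VP -> V NP: the object NP's value applies to the verb's value.
vp = object_np(despise)

# IP -> NP VP: the subject NP's value applies to the VP's value.
ip = subject_np(vp)

print(ip)
```

The result is the semantic value of sentence (1), a term built from the constructor every and the atomic predicates.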
Lexicon
The lexicon of E2V also consists of a set of DCG rules, and is divided
into two parts: the closed-class lexicon and the open-class lexicon. The
closed-class lexicon gives the meanings of those words in our English
fragment concerned with logical form, for example:
[example closed-class entries lost in extraction]
The first of these rules assigns the semantic value A/(B/every(A,B))
to the determiner every. It helps to think of A/(B/every(A,B)) as a
function mapping two semantic values, A and B, to the more complex
semantic value every(A,B). The second rule assigns the semantic value
A/A to any of the reflexive pronouns itself, himself and herself. This
semantic value is in effect the identity function, reflecting the fact that
the semantic force of a reflexive pronoun is exhausted by its effect on
the pattern of indexing in the sentence (discussed below). The third
rule assigns the semantic value A/(B/(rel(A,B))) to a (covert) complementizer.
Again, it helps to think of A/(B/(rel(A,B))) as a function
mapping the semantic values A and B to the more complex semantic
value rel(A,B), indicating a relative clause. The complete closed-class
lexicon for E2V is given in figure 1b.
The open-class lexicon is an indefinitely large set of DCG rules for
the terminal categories N and V. These rules determine the semantic
values of words of these categories: unary predicates for nouns and
binary predicates for verbs. The open-class lexicon might contain the
following entries:
[example entries for the noun artist and a transitive verb lost in extraction]
Notice how, in open-class lexicon entries, the index variables appear
as arguments in the semantic values. We assume that the open- and
closed-class lexica are disjoint.
Together, the grammar and lexicon generate sentences via successive
expansion of nodes under unification of variables in the usual way.
Figure 2 illustrates the parsing of sentence (1) using the DCG rules.
(The values x1, x2 for the index variables are explained below.) The
indeterminate nature of the open-class lexicon means that the English
fragment we are describing is in reality a family of fragments, one
for each choice of open-class lexicon. What these fragments have in
common is just the overall syntax and the fixed stock of 'logical' words
in the closed-class lexicon. This is exactly the situation encountered in
logic, where fragments of first-order logic are defined over a variable
signature of non-logical constants. We call an open-class lexicon a vocabulary,
and, for a given choice of vocabulary, we speak of the English
fragment E2V over that vocabulary.
Figure 2. Phrase-structure of sentence (1).
Movement rule
The simplified grammar of E2V allows us to state the usual rule for
wh-movement with marginally less technical apparatus than usual. We
take one phrase to dominate another in a sentence of E2V if the second
is reachable from the first by following zero or more downward links in
the phrase-structure of the sentence.
Definition 1. A phrase α of category NP c-commands a phrase β of
category RelPro in a parsed E2V sentence if the parent of α dominates
β, but α itself does not dominate β.
The movement rule of E2V is:
Every phrase of the form RelPro(A,I) moves to the position
immediately below a nearest phrase of form NP(empty,I) which c-commands
it; moreover, every node NP(empty,I) is the destination
of such a movement.
As usual, we speak of an NP from which a RelPro has been moved as a
trace NP. Figures 4 and 5 illustrate how this movement rule is applied
in the case of sentences (3) and (4).
It is important to understand the variables I and A mentioned in the
movement rule. The index variable I occurs twice: once in the moved
RelPro and once in the NP it is moved to. This is to be understood as
requiring that the index variables for these phrases must unify. Given
the further unifications of indices forced by the grammar rules, the
movement rule thus has the effect of unifying the index variable of a
trace NP in a relative clause with the index variable of the NP which
that relative clause modifies. By contrast, the variable A, representing
the semantic value of the moved RelPro, occurs only once in the movement
rule. This means that the movement rule imposes no constraints
on A, and, in effect, does not care about the semantic value of the moved
RelPro. However, this semantic value has not been wasted, because the
rules for the relative pronouns which, who and whom
used in the construction of the relative clause force the semantic value
of a trace NP to be the identity function A/A. The combined effect
of these rules on the semantic values of relative clauses can be seen in
figures 4 and 5.
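The dominance and c-command relations used by the movement rule are easy to state over explicit trees. The sketch below is our own illustration (the node labels and the tiny tree, a simplified rendering of a relative clause such as "whom an artist despises", are ours, not the paper's):

```python
class Node:
    """A phrase-structure node; children know their parent."""
    def __init__(self, label, *children):
        self.label = label
        self.children = list(children)
        self.parent = None
        for c in children:
            c.parent = self

def dominates(a, b):
    """a dominates b: b is reachable from a by zero or more downward links."""
    return a is b or any(dominates(c, b) for c in a.children)

def c_commands(np, relpro):
    """Definition 1: the parent of np dominates relpro, but np does not."""
    return (np.parent is not None
            and dominates(np.parent, relpro)
            and not dominates(np, relpro))

# CP for 'whom an artist despises t': the empty NP below CP c-commands
# the RelPro inside the VP, so the RelPro moves into its position.
relpro = Node('RelPro:whom')
landing = Node('NP:empty')
ip = Node('IP', Node('NP:an-artist'), Node('VP', Node('V:despises'), relpro))
cp = Node('CP', landing, ip)

print(c_commands(landing, relpro))  # the landing site c-commands the RelPro
print(c_commands(ip, relpro))       # the IP dominates the RelPro, so it does not
```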
Indexing rules
In the grammar of E2V, most phrasal categories have exactly one index
variable; and in that case, we speak of the value of that variable during
parsing as being the index of the phrase in question. We insist that
every index variable in a phrase-structure tree of an E2V sentence be
assigned one of the values x1, x2, .... For example, in the parse
displayed in figure 2, the NP every artist, the N' artist and the VP
despises every beekeeper all have x1 as their index. (Two phrases with
the same index are said to be coindexed.) The assignment of values to
index variables must of course conform to the unifications enforced by
the DCG rules. However, indices of NPs are additionally required to
obey a set of indexing rules, which function so as to regulate the use
of pronouns and reflexives. Sequences of words corresponding to parse
trees where the index variables cannot be assigned values in accordance
with the indexing rules fail to qualify as E2V sentences.
Within the transformational tradition, the use of Pronouns and
Reflexives is accounted for by binding theory; and that is the theory
which we adopt for E2V. No particular fealty to one linguistic school is
hereby implied: for our purposes, this strategy is simply a convenient
way to ensure that our grammar conforms to normal English usage.
The indexing rules are divided into two classes: the natural indexing
rules and the artificial indexing rules. The natural indexing rules
are:
I1: Indices of NPs must obey all the usual constraints of binding
theory
I2: No two Ns may be coindexed.
Readers unfamiliar with government and binding theory are referred
to a standard text on the subject, e.g. Cowper (1992), p. 171. Using the
standard technical terminology, these rules state that: a Reflexive must
Figure 3. Phrase-structure of sentence (2), illustrating reflexive pronouns.
be A-bound within its minimal governing category; a Pronoun must be
A-free within its minimal governing category; and an R-expression must
be A-free. Because of the limited grammar of E2V, the finer points of
binding theory may be ignored here. We remark that, in E2V, the
minimal governing category of a Pronoun or Reflexive is always the
nearest IP dominating it.
Sentences (1)-(3) illustrate the effect of rule I1. Consider first sentence
(1), containing the two nouns artist and beekeeper. Its phrase-structure
is shown in figure 2. The requirement that an R-expression be
A-free forces the indices of these nouns to be distinct. The unifications
enforced by the grammar rules then imply that the two indices of the
verb despises are distinct, and that the whole sentence has the semantic
value shown, up to renaming of indices. Consider now sentence (2),
containing the single noun artist and the reflexive pronoun himself. Its
phrase-structure is shown in figure 3. The requirement that a Reflexive
be A-bound in its minimal governing category forces himself and
every artist to have the same index. The unifications enforced by the
grammar rules then imply that the two indices of the verb admires are
identical, and that the whole sentence has the semantic value shown,
up to renaming of indices. Consider finally sentence (3), containing the
two nouns artist and beekeeper as well as the pronoun him. Its phrase-structure
is shown in figure 4. The requirement that a pronoun be
A-free in its minimal governing category prevents him from coindexing
with beekeeper, but allows it to coindex with artist, resulting in the
semantic value shown.
The status of rule I2 is rather different. Though not part of standard
binding theory, it is consistent with it. That is: given an assignment of
indices to an E2V sentence obeying rule I1, we can always rename
indices of some NPs if necessary in such a way that rules I1 and I2
are both satisfied. (This is straightforward to verify.) As we shall see in
Figure 4. Phrase-structure of sentence (3), illustrating pronouns and movement.
section 2.2, rule I2 allows us to state the semantics for E2V in a very
simple way.
Having dealt with the natural indexing rules, we move on to the
artificial indexing rules. These are:
I3: Every pronoun must take an antecedent in the sentence in which
it occurs
I4: Every pronoun must be coindexed with the closest possible NP
consistent with rules I1-I3.
In I4, 'closest' means 'closest in the phrase-structure', not 'closest in
the lexical order'.
These rules are artificial in that they make no attempt to describe
natural English usage. In particular, rule I3 requires that all anaphora
be resolved intrasententially. Clearly, this constitutes a restriction on
normal English usage, and is introduced in order to simplify the language
we are studying. Rule I4 is rather more interesting. The effect
of this rule can be seen by examining sentence (4), containing the
nouns artist, beekeeper and carpenter, and the pronoun him. Its phrase-structure
is shown in figure 5. Rules I1-I3 permit the pronoun him
to coindex with either artist or carpenter (but not beekeeper), corresponding
to a perceived (anaphoric) ambiguity in the English sentence.
However, we see from figure 5 that the NP every artist who employs a
carpenter is closer to the pronoun in the phrase-structure than the NP
a carpenter is. Hence, rule I4 requires him to coindex with the former,
and results in the semantic value shown. Rule I4 does not change the
set of strings accepted by the fragment E2V; but it does ensure that
any accepted string has a unique indexation pattern up to renaming.
It also plays a crucial role in restricting the expressive power of E2V
to that of the two-variable fragment of first-order logic.
We say that a string of words is an E2V-sentence (over a given
vocabulary) if, according to the above syntax, it is the list of terminal
nodes below an IP node with no parent.
2.2. Semantics
Having defined the set of E2V sentences over a given vocabulary, we
now turn to the translation of these sentences into first-order logic.
We have already seen that the syntax of E2V assigns a semantic
value to every E2V sentence. This semantic value is a complex term
formed from the primitives occurring in the vocabulary by means of
the constructors some(A,B), every(A,B), rel(A,B) and not(A). For example,
we see from figures 2-5 that sentences (1)-(4) are assigned the
respective semantic values:

(9) every(artist(x1), every(beekeeper(x2), despise(x1,x2)))
(10) every(artist(x1), admire(x1,x1))
(11) every(rel(beekeeper(x1), some(artist(x2), despise(x2,x1))), admire(x1,x2))
(12) every(rel(artist(x1), some(carpenter(x2), employ(x1,x2))), every(rel(beekeeper(x3), admire(x3,x1)), despise(x1,x3)))

To complete the semantics for E2V, it suffices to define a function
mapping semantic values of E2V sentences to formulas of first-order
logic. The key idea behind this translation is that indices x1, x2, ... in
semantic values can simultaneously be regarded as variables x1, x2, ...
in formulas. (The different styles of writing indices/variables help to
make semantic values and formulas more visually distinct.)
Definition 2. Let A be the semantic value of an E2V-sentence, and let
B be any (not necessarily proper) subterm of A. We define the function
T from such subterms to formulas of first-order logic as follows:
Figure 5. Phrase-structure of sentence (4).
T1: If B = some(C,D), then T(B) = ∃z(T(C) ∧ T(D)), where z is the
tuple of indices which are free variables in T(C) ∧ T(D) but which
do not occur in A outside B.
T2: If B = every(C,D), then T(B) = ∀z(T(C) → T(D)), where z is
the tuple of indices which are free variables in T(C) → T(D) but
which do not occur in A outside B.
T3: If B = rel(C,D), then T(B) = T(C) ∧ T(D).
T4: If B = not(C), then T(B) = ¬T(C).
T5: Otherwise, B is a simple term, and T(B) is the corresponding atomic formula.
We then take the translation of the semantic value A to be simply
T(A).
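Definition 2 can be prototyped directly. The sketch below is our own Python, not the paper's Prolog implementation; the tuple encoding of terms and the ASCII connectives are our choices. The 'outside' set threaded through the recursion records which indices occur in A outside the current subterm, which is exactly the side condition of rules T1 and T2:

```python
def indices(t):
    """All indices occurring in a term (terms are tagged tuples)."""
    if t[0] in ('some', 'every', 'rel'):
        return indices(t[1]) | indices(t[2])
    if t[0] == 'not':
        return indices(t[1])
    return set(t[1:])  # simple term: predicate plus its indices

def T(t, outside=frozenset()):
    """Translate a semantic value to (formula string, free-variable set)."""
    op = t[0]
    if op in ('some', 'every'):
        c, fc = T(t[1], set(outside) | indices(t[2]))
        d, fd = T(t[2], set(outside) | indices(t[1]))
        free = fc | fd
        z = sorted(free - set(outside))            # side condition of T1/T2
        conn = ' & ' if op == 'some' else ' -> '
        body = '(' + c + conn + d + ')'
        quant = ('exists ' if op == 'some' else 'forall ') + ' '.join(z) + '.'
        return (quant + body if z else body), free - set(z)
    if op == 'rel':                                # T3: conjunction
        c, fc = T(t[1], set(outside) | indices(t[2]))
        d, fd = T(t[2], set(outside) | indices(t[1]))
        return '(' + c + ' & ' + d + ')', fc | fd
    if op == 'not':                                # T4: negation
        c, fc = T(t[1], outside)
        return '~' + c, fc
    return t[0] + '(' + ','.join(t[1:]) + ')', set(t[1:])  # T5: atomic

# Semantic value (9) of sentence (1):
nine = ('every', ('artist', 'x1'),
        ('every', ('beekeeper', 'x2'), ('despise', 'x1', 'x2')))
print(T(nine)[0])

# Donkey semantic value (11) of sentence (3): the embedded some() stays
# unquantified because both of its variables occur outside it.
eleven = ('every',
          ('rel', ('beekeeper', 'x1'),
           ('some', ('artist', 'x2'), ('despise', 'x2', 'x1'))),
          ('admire', 'x1', 'x2'))
print(T(eleven)[0])
```

The second example reproduces the donkey behaviour discussed below: the quantifiers for both x1 and x2 are emitted only at the outermost every.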
Under these translation rules, the semantic values (9)-(12) of the
sentences (1)-(4) translate to the formulas (5)-(8). To see how this
translation works in detail, consider first sentence (1) and its semantic
value (9). The simple subterms artist(x1), beekeeper(x2) and despise(x1,x2)
translate to the atomic formulas artist(x1), beekeeper(x2)
and despise(x1,x2) respectively, by rule T5. The complex subterm
every(beekeeper(x2), despise(x1,x2)) then translates to the formula
∀x2(beekeeper(x2) → despise(x1,x2))
by rule T2, because, of the two free variables in question, namely x1 and
x2, only the latter satisfies the condition that it does not occur outside
the subterm. Finally, the whole expression (9) translates to (5), again
by rule T2.
We note that, in some applications of rule T1 (and indeed, with the
grammar as presented above, some applications of rule T2), the tuple
z may be empty, in which case, of course, ∃z is understood to be absent
altogether. Consider sentence (3) and its semantic value (11). Here, the
subterm some(artist(x2), despise(x2,x1))
translates to the (unquantified) formula
artist(x2) ∧ despise(x2,x1)
by rule T1, because, of the two free variables in question, namely x1
and x2, both occur in (11) outside the subterm. Suitable applications
of rules T2 and T3 generate the final translation (7).
2.3. Further remarks on the syntax and semantics of E2V
Relation to DRT
An alternative and more general approach to the semantics of anaphoric
constructions found in E2V is provided by Discourse Representation
Theory (DRT) (Kamp and Reyle (1993)). Like the semantics presented
above, DRT in effect employs a two-stage approach in translating from
English to first-order logic. First, English sentences are mapped to so-called
Discourse Representation Structures (DRSs), a formal representation
language with nonstandard mechanisms for quantification, and
these DRSs can then be translated into formulas of first-order logic in
accordance with the standard semantics for the DRS language. This
two-stage approach allows DRT to give an account of the variable-binding
encountered in sentence (3) very similar to that given in this
paper. An elegant and technical account of DRT-style semantics for a
fragment of English similar to E2V can be found in Muskens (1996).
The main difference between our approach and that of DRT concerns
the point at which quantifiers corresponding to indefinite articles are
introduced. Our rule T1 translates the indefinite article as an existential
quantifier provided that the variable which the quantifier would bind is
not going to end up appearing outside its scope in the resulting formula.
The strategy of DRT, by contrast, is that the indefinite article does not
introduce a quantifier at all: rather, 'left-over' variables corresponding
to these determiners are gathered up by quantifiers introduced in the
interpretation of DRSs.
For example, consider the sentence
(17) Every artist who admires a beekeeper who berates a carpenter
despises himself.
On our approach, sentence (17) has the semantic value
(18) every(rel(artist(x1), some(rel(beekeeper(x2), some(carpenter(x3), berate(x2,x3))), admire(x1,x2))), despise(x1,x1))
which then translates to
(19) ∀x1((artist(x1) ∧ ∃x2(beekeeper(x2) ∧ ∃x3(carpenter(x3) ∧ berate(x2,x3)) ∧ admire(x1,x2))) → despise(x1,x1))
By contrast, DRT would assign (17) the DRS
[DRS (20) lost in extraction]
which then translates to
(21) ∀x1∀x2∀x3((artist(x1) ∧ beekeeper(x2) ∧ carpenter(x3) ∧ berate(x2,x3) ∧ admire(x1,x2)) → despise(x1,x1))
Formulas (19) and (21) are, of course, logically equivalent. However,
there is a difference in the way variables are used: our approach
quantifies x2 and x3 as early as possible, while DRT does so as late as
possible. It turns out that this strategy of early quantification means
that we use variables more 'efficiently' than DRT does; and that is why
the semantics presented here makes the expressiveness of E2V easier
to determine. Thus, we are not (as far as we know) taking issue with
DRT; rather, we are simply presenting the semantics of E2V in a form
which is more convenient for the issues at hand.
Accessibility
One respect in which our semantics for E2V fails to do justice to
English speakers' intuitions concerns pronoun accessibility. Consider
again sentence (3), repeated here as (22):
(22) Every beekeeper whom an artist despises admires him.
Recall that we assume all anaphora to be resolved intrasententially,
so that the pronoun in this sentence takes the NP an artist as its antecedent.
However, the availability of this NP as an antecedent depends
on the fact that it is existentially quantified. Thus, in the sentence
(23) Every beekeeper whom every artist despises admires him,
the anaphora cannot be resolved intrasententially: the admired individual
must have been introduced by some earlier sentence in the discourse.
Thus, we might say that the NP every artist in (23) is inaccessible to
the pronoun which follows it. An adequate grammar for a restricted
fragment of English should rule out sentences in which pronouns are
required to take inaccessible antecedents. It is thus a defect of the above
grammar for E2V that sentence (23) is accepted, and, as the reader
may verify, translated into the same formula as sentence (22)!
Accessibility restrictions are discussed in detail within the framework
of DRT; and we have nothing to add to their proper treatment in
a grammar of English. However, this issue may be safely ignored for the
purposes of the present paper, because the argument in section 4 shows
that no reasonable accessibility restrictions could reduce the expressive
power of E2V. Thus, to simplify the proofs in the following sections,
we take E2V to include sentences such as (23), confident that their
eventual exclusion will make no difference to our results.
Quantifier scoping
Another respect in which E2V fails to do justice to English speakers'
intuitions is quantifier rescoping. It is generally claimed that the
sentence
(24) Every artist despises a beekeeper
is ambiguous between two choices for the scoping of the quantifiers.
By contrast, the above semantics unambiguously assigns wide scope
to the universal quantifier. We do not consider quantifier rescoping in
this paper. Generally, proposals for controlled languages eliminate such
ambiguities by stipulating that quantifiers scope in a particular way;
and this is the approach we take. Very roughly, the effect of the above
grammar rules is that the quantifier introduced by the subject determiner
outscopes the quantifier introduced by the object determiner,
and that the quantifier introduced by any NP determiner outscopes
those introduced in any relative clauses within that NP. This policy
seems the most sensible default in those cases where a choice has to be
made.
Negation
The primary mechanism in E2V which provides negation is the category
Neg, whose sole member is the two-word phrase does not. For example,
the sentence
(25) Some artist does not despise every beekeeper
is assigned the semantic value
(26) some(artist(x1), not(every(beekeeper(x2), despise(x1,x2))))
which then translates to
(27) ∃x1(artist(x1) ∧ ¬∀x2(beekeeper(x2) → despise(x1,x2)))
A more sensitive account of the semantics of E2V would complicate
the rules regarding negation in two related respects. First, the effect of
negation on verb-inflection should be taken into account; second, the
word does should be assigned category I, with the single word not taken
to be the representative of the category Neg. These changes can be
effected by adopting the following grammar rules for IP (with variables
suppressed for readability):
[the revised IP and I' rules are lost in extraction]
and by subjecting verbs in unnegated sentences to movement into the
I position, where they are joined to the inflection.
Negation brings with it some additional complications concerning
scoping and the negative polarity determiner any. Again, scoping ambiguities
are resolved by fiat. Simplifying somewhat, the above translation
rules take the negation in a NegP to outscope quantification within the
NegP, but to be outscoped by quantification in the subject governing
that NegP, as in sentence (25). Again, this seems to be the most
reasonable default.
Negative polarity is ignored altogether in this paper. Thus for example,
E2V employs
(28) Some artist does not despise a beekeeper,
rather than the less ambiguous-sounding
(29) Some artist does not despise any beekeeper,
to express
(30) ∃x1(artist(x1) ∧ ¬∃x2(beekeeper(x2) ∧ despise(x1,x2)))
Other negative sentences accepted by E2V are somewhat awkward, for
example:
(31) Every artist does not despise a beekeeper,
which translates to
(32) ∀x1(artist(x1) → ¬∃x2(beekeeper(x2) ∧ despise(x1,x2)))
In keeping with the general desire to remove ambiguity from controlled
languages, it might be important to consider fragments of English
from which sentences such as (28) and (31) are either replaced
by sentences involving negative polarity items or excluded altogether.
However, as with the issue of pronoun accessibility, so too with that of
negation, the restrictions in question can be ignored for the purposes of
the present paper. The argument in section 4 shows that no reasonable
restrictions on the use of negation could reduce the expressive power of
E2V. Thus, to simplify the proofs in the following sections, we take E2V
to include sentences such as (28) and (31), confident that their eventual
exclusion or modification will make no difference to our results.
Our fragment E2V includes one further mechanism for introducing
negation, namely, the determiner no. This addition to the closed-class
lexicon is a small concession to naturalness of expression. In fact, it
could be dropped from E2V without affecting its expressive power.
2.4. Implementation
The foregoing specification of E2V was couched in terms which permit
direct computer implementation. In particular, the DCG rules of the
grammar and lexicon in figure 1 map almost literally to Prolog code,
with some minimal extra control structure required to implement the
indexing rules I1-I4. The movement rule can be incorporated into
this DCG using standard argument-passing techniques, for example,
as described in Walker et al. (1987), pp. 351 ff. Implementation of
the translation rules T1-T5 is routine. All semantic values and first-order
translations of E2V sentences given in this paper are the unedited
output of a Prolog program constructed in this way. This program also
incorporates standard DRT accessibility restrictions and enforces correct
verb-inflections in negated and unnegated sentences, as discussed
in section 2.3. Inspection of this program shows that the first-order
translation of an E2V sentence can in fact be computed in linear time.
Hence the complexity of translation from E2V is not an issue we shall
be concerned with in the sequel.
3. Expressiveness: upper bound
The main result of this section states that, essentially, the translations
of E2V-sentences remain inside the two-variable fragment of first-order
logic. In the sequel, we denote the set of formulas of first-order logic
by L, and the set of formulas of the two-variable fragment by L2. The
following notion captures what we mean by "essentially" in the sentence
before last.
Definition 3. If φ ∈ L, we say that φ is two-variable compatible (for
short: 2vc) if no proper or improper subformula of φ contains more
than two free variables.
We have the following result.
Lemma 1. Every 2vc formula is equivalent to a formula of L2.
Proof. Routine.
The property of being a 2vc formula is not closed under the relation of
logical equivalence. For example, formulas (19) and (21) are logically
equivalent, but only the former is 2vc. Indeed, this example shows why
our approach to the semantics for E2V makes for an easier analysis of
expressive power than that of DRT.
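The 2vc property is mechanically checkable: compute the free variables of every subformula and test that none has more than two. In the sketch below (our own encoding; the two formulas are our renderings of (19) and (21), reconstructed from the discussion of early versus late quantification):

```python
def free_vars(f):
    """Free variables of a formula; formulas are tagged tuples."""
    op = f[0]
    if op in ('forall', 'exists'):
        return free_vars(f[2]) - set(f[1])
    if op in ('and', 'imp'):
        return free_vars(f[1]) | free_vars(f[2])
    if op == 'not':
        return free_vars(f[1])
    return set(f[1:])  # atom: predicate plus its variables

def is_2vc(f):
    """Definition 3: no subformula has more than two free variables."""
    if len(free_vars(f)) > 2:
        return False
    if f[0] in ('forall', 'exists'):
        return is_2vc(f[2])
    if f[0] in ('and', 'imp'):
        return is_2vc(f[1]) and is_2vc(f[2])
    if f[0] == 'not':
        return is_2vc(f[1])
    return True

# Formula (19): quantifiers introduced as early as possible.
f19 = ('forall', ('x1',),
       ('imp',
        ('and', ('artist', 'x1'),
         ('exists', ('x2',),
          ('and',
           ('and', ('beekeeper', 'x2'),
            ('exists', ('x3',),
             ('and', ('carpenter', 'x3'), ('berate', 'x2', 'x3')))),
           ('admire', 'x1', 'x2')))),
        ('despise', 'x1', 'x1')))

# Formula (21): DRT-style late quantification; the matrix has three
# free variables, so the formula is not 2vc.
matrix = ('and', ('and', ('and', ('and',
          ('artist', 'x1'), ('beekeeper', 'x2')), ('carpenter', 'x3')),
          ('berate', 'x2', 'x3')), ('admire', 'x1', 'x2'))
f21 = ('forall', ('x1', 'x2', 'x3'), ('imp', matrix, ('despise', 'x1', 'x1')))

print(is_2vc(f19), is_2vc(f21))
```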
The result we wish to establish in this section is:
Theorem 1. If α is an E2V-sentence with semantic value A, then T(A)
is a closed 2vc formula.
To avoid unnecessary circumlocutions, if β is a phrase with semantic
value B, then we say that β translates to the formula T(B). Our strategy
will be to show that, if β is any phrase in α, then β translates to a 2vc
formula satisfying certain constraints. For clarity, we
henceforth display all possible free variables in formulas. Thus, φ(x, y)
has no free variables except for (possibly) x and y, φ(x) has no free
variables except for (possibly) x, and so on.
A word is in order concerning the treatment of negation in this
section. Inspection of the grammar rules concerning NegP
and the translation rule for not(A)
makes it clear that NegP-phrases merely serve to insert negations into
the translations of E2V sentences, and have no other effect on their
quantificational structure. In establishing theorem 1, then, we omit
all mention of NegP-phrases, since their inclusion would not affect
the results we derive. This omission applies to all the lemmas used to
derive theorem 1, and serves merely to keep the lengths of proofs within
manageable bounds.
The following terminology will prove useful:
Definition 4. Let α and β be phrases in an E2V sentence and Y a
category (e.g. N', IP or VP). Then β is said to be a maximal Y
(properly) dominated by α if β is a Y (properly) dominated by α and
no other phrase which is a Y (properly) dominated by α dominates β.
A phrase α of category X (X = N', IP or VP) is said to be pronominal
if a maximal NP dominated by α expands to a Pronoun.
A VP α is said to be reflexive if the maximal NP dominated by α
expands to a Reflexive.
A VP α is said to be trace if the maximal NP dominated by α expands
to a RelPro which has been subjected to movement.
The following examples should help to clarify these definitions (t indicates
a moved RelPro):
pronominal VP: admires him
pronominal N': beekeeper whom he despises t
pronominal N': beekeeper who t admires him
reflexive VP: despises himself
trace VP: admires t
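Definition 4 can likewise be made executable. In this sketch (our own illustration; the tiny trees stand in for the VPs "admires him" and "admires t" above), a maximal NP dominated by a phrase is an NP-descendant not dominated by any other NP-descendant:

```python
class Node:
    """A phrase-structure node with a category label."""
    def __init__(self, cat, *children):
        self.cat = cat
        self.children = list(children)

def descendants(n):
    """All phrases properly dominated by n."""
    for c in n.children:
        yield c
        yield from descendants(c)

def dominates(a, b):
    return a is b or any(dominates(c, b) for c in a.children)

def maximal(phi, cat):
    """Maximal phrases of category cat dominated by phi (definition 4)."""
    cands = [d for d in descendants(phi) if d.cat == cat]
    return [d for d in cands
            if not any(o is not d and dominates(o, d) for o in cands)]

def pronominal(phi):
    """phi is pronominal if a maximal NP it dominates expands to a Pronoun."""
    return any(np.children and np.children[0].cat == 'Pronoun'
               for np in maximal(phi, 'NP'))

vp1 = Node('VP', Node('V'), Node('NP', Node('Pronoun')))  # admires him
vp2 = Node('VP', Node('V'), Node('NP', Node('RelPro')))   # admires t
print(pronominal(vp1), pronominal(vp2))
```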
We also need some terminology to deal with so-called donkey-sentences.
For the purposes of E2V, we may define:
Definition 5. A pronominal E2V sentence, that is, one where the object
of the main verb is a pronoun, is called a donkey sentence, and
the pronoun in question is called its donkey pronoun. In addition, a
phrase α of category X (X = N', IP or VP) is said to be donkey if it
is a maximal phrase of category X dominated by the subject NP of a
donkey sentence.
The paradigm donkey-sentence is:
Every farmer who owns a donkey beats it.
Within this sentence, the following donkey phrases occur:
donkey N': farmer who t owns a donkey
donkey IP: t owns a donkey
donkey VP: owns a donkey.
It is clear that a donkey sentence contains exactly one donkey N', IP
and VP. We remark that a phrase may be both pronominal and donkey,
for example, the italicized N' in the sentence:
Every farmer who owns a donkey which likes him beats it,
which, incidentally, translates to the 2vc formula:
∀x1∀x2((farmer(x1) ∧ donkey(x2) ∧ like(x2,x1) ∧ own(x1,x2)) → beat(x1,x2))
Finally, the following notation will be useful:
Definition 6. If α is a phrase of category N' or VP, we write ind(α) to
denote the index of α. If α is a phrase of category IP, but not a sentence,
we write ind(α) to denote the index of the NP which is moved out of α
by the movement rule, and we refer to ind(α) informally as the index
of α.
A word of warning is in order about the notation ind(α). According to
the grammar of E2V, if α is an IP, then α has no index. Yet definition 6
nevertheless allows us to write ind(α) for a non-sentence IP α. We have
adopted this notation to reflect the fact that, from a semantic point of
view, the index of the NP moved out of α is always a free variable in
the translation of α. Doing so helps to simplify many of the lemmas
which follow.
Definition 7. Let α and β be phrases in an E2V sentence and Y a
category (e.g. N', IP or VP). Then α is said to be a minimal Y
(properly) dominating β if α is a Y (properly) dominating β and no
other phrase which is a Y (properly) dominating β is dominated by α.
With this terminology behind us, we prove some auxiliary lemmas
about the grammar and translation rules.
Lemma 2. If α is a non-sentence IP, then ind(α) is also the index of
the minimal NP (or N') dominating α.
Proof. By the unifications of index variables forced by the movement
rule and the grammar rules.
Lemma 3. If α is a reflexive VP, then α translates to an atomic
formula of the form d(x, x), where x = ind(α).
Proof. By the grammar rule for IP, α coindexes
with its left sister. By rule I1, the left sister of α coindexes with the
reflexive in α. Hence the semantic value of α will be a simple term of
the form d(x, x). The result then follows from T5.
Lemma 4. If α is either an N', a non-sentence IP or a VP, and α is
pronominal, then α translates to a 2vc formula φ(x, y) where x = ind(α)
and y is the index of the pronoun contained in α.
Figure 6. Structures of some pronominal, non-sentence phrases. The letters x and y
stand for indices.
Proof. If α is a VP, then α must have the structure shown in figure 6a).
Certainly, β translates to an atomic formula c(x, y), where x = ind(α)
and y is the index of the pronoun. Then α also translates to c(x, y).
If α is a non-sentence IP, then α must have one of the two possible
structures shown in figures 6b) and c). From definition 6, ind(α) is the
index of the trace NP. It is then immediate that α translates to an
atomic formula of either of the forms c(x, y) or c(y, x), where x = ind(α)
and y is the index of the pronoun contained in α.
If α is an N', then α must have the structure shown in figure 6d), where
the IP γ is also pronominal. Let x = ind(α). By lemma 2, x = ind(γ).
Certainly, β translates to some atomic formula c(x),
and by the result just obtained γ translates to an atomic formula of
one of the forms d(x, y) or d(y, x). Then α translates to c(x) ∧ d(x, y)
or c(x) ∧ d(y, x).
We remark that lemma 4 holds whether or not α is donkey.
Lemma 5. Let α be either of the phrases depicted in figure 7, with β
pronominal. Then the index of the pronoun in β is also the index of α.
Figure 7. Statement of lemma 5.
Figure 8. Proof of lemma 6.
Proof. In case a), rule I4 forces the pronoun in β to coindex with the
minimal NP dominating α. (There must be such an NP, because an NP
has been moved out of α.) The result then follows from lemma 2. In case
b), the left-sister of β is either a full NP (expanding to a Det and N')
or an NP trace which coindexes with the minimal NP dominating α.
Either way, I4 forces the pronoun in β to coindex with this NP. Again,
the unifications of index variables enforced by the grammar rules and
(if applicable) the movement rule ensure that α also coindexes with
this NP.
Lemma 6. Let α be an E2V sentence, β a non-donkey phrase of
category IP or VP properly dominated by α, and γ a maximal N'
properly dominated by β. Then ind(γ) does not occur outside β in the
semantic value of α.
Proof. Refer to figure 8. It is easy to check using the grammar rules that
any index occurring outside β is the index of some NP occurring outside
β, so that if ind(γ) occurs outside β, it does so as the index of some
N, Reflexive or Pronoun. Rule I2 rules out the first possibility, and I1
rules out the second. It follows that ind(γ) can occur outside β only
if it is the index of a pronoun δ. In fact, the minimal NP dominating
γ must be the antecedent of δ. Since γ, and hence β, must precede δ,
δ must have a minimal dominating NP, say ε. By rules I2 and I4, no
full NP (expanding to a Det and N') other than ε can intervene on the
path between δ and γ. It is then easy to see that δ must be a donkey
pronoun, and that the minimal NP dominating γ is its antecedent. This
contradicts the supposition that β is not donkey.
We are now ready to state the main lemma underlying theorem 1.
Lemma 7. For every phrase such that is either an N′, a non-
sentence IP or a non-trace VP, we have:
a) if is non-donkey and non-pronomial, then translates to a 2vc
b) if is donkey, then translates to a 2vc formula (x; y) where
and y is the index of the donkey pronoun.
Proof. We proceed by induction on the number of N′, IP or VP nodes
properly dominated by in the phrase-structure tree for the sentence
in question.
We have three cases to consider, depending on whether is (i) an
N′, (ii) an IP or (iii) a VP.
(i) Let be an N′, with of the lemma
are then established as follows:
a) Suppose is non-pronomial and non-donkey. Then is either a
single noun and so translates to, say, b(x), or is of the form depicted
in figure 9a). Since is non-pronomial and non-donkey, so is . By
by inductive hypothesis translates
to a 2vc formula (x). Since ind(
translates to an
atomic formula, say c(x), whence translates to c(x) ^ (x) by
T3 as required.
Suppose is donkey. Then must be of the form depicted in
figure 9a), with also donkey. By lemma 2
by inductive hypothesis translates to a 2vc formula (x; y), where
y is the index of the donkey pronoun. Again,
translates to an
atomic formula, say c(x), whence translates to c(x) ^ (x; y) by
T3 as required.
(ii) Let be a non-sentence IP, with prevents the
subject of an IP from being a reflexive; furthermore, an IP with a pronoun
subject cannot fall under either of the cases a) or b) of the lemma.
Hence, we have two sub-cases to consider, depending on whether the
subject or the object of the IP is a trace, as depicted in figure 9b) and
c), respectively. Note: in figure 9c)–e), we use the notation Q(S,T) to
denote any of some(S,T), every(S,T) and every(S,not(T)), where S and
T are semantic values.
We consider first the sub-case of figure 9b). From definition 6,
x is the index of the trace in . And by the phrase structure rules, we
have ind(
clauses a) and b) of the lemma
as follows:
a) Suppose is non-pronomial and non-donkey. Then so is
. By
inductive hypothesis,
translates to a 2vc formula (x). But then
so does .
Suppose is donkey. Then so is
. By inductive hypothesis,
translates to a 2vc formula (x; y), where y is the index of the
donkey pronoun. But then so does .
Next, we consider the sub-case of figure 9c). From definition 6,
x is the index of the trace in . Let
certainly, translates
to some atomic formula d(y; x). Furthermore, from its position
in the phrase-structure,
is certainly not donkey. Suppose in addition
that it is not pronomial. Then, by inductive hypothesis,
translates to
some 2vc formula (y), which we may write as (y; x). Suppose on the
other hand that
is pronomial. Then by lemma 4, it translates to a
z is the index of the pronoun contained in
Moreover, by lemma 5, Hence
translates to (y; x).
We now establish clauses a) and b) of the lemma as follows:
a) Suppose is non-pronomial and non-donkey. By lemma 6 then, y
does not occur outside . Depending on whether the determiner
in question is a, every or no, by T1, T2 and (possibly) T4,
translates to one of 9y(
Suppose is donkey. By rule I4, y must be the index of the donkey
pronoun, so that both x and y occur outside . By similar
reasoning to case a), translates to one of (
(iii) Let be a non-trace VP, with is reflexive, it translates
to an atomic formula b(x; x). Moreover, no donkey VP can be reflexive
(otherwise the donkey pronoun would have no antecedent). For the
purposes of establishing clauses a) and b) of the lemma then, we may
assume that is not reflexive. Furthermore,
if is donkey, then it cannot be pronomial, for then there would be
no antecedent for the pronoun in . Again, then, for the purposes of
establishing clauses a) and b) of the lemma, we may assume that is
not pronomial, so we can take to have the form depicted in figure 9d).
translate to the atomic formula c(x; y). By its
position in the phrase-structure, is not donkey. If, in addition, is not
pronomial, by inductive hypothesis, it translates to a 2vc formula (y),
which we may write as (y; x). If, on the other hand, is pronomial,
by lemma 4, it translates to a 2vc formula (y; z), where z is the index
of the pronoun contained in . Moreover, by lemma 5,
Hence translates to (y; x). We now establish clauses a) and b) of
the lemma as follows:
a) Suppose is not pronomial, and not donkey. By lemma 6, y does not
occur outside . Depending on whether the determiner in question
is a, every or no, by T1, T2 and (possibly) T4, translates to
one of 9y(
:c(x; y)), as required.
Suppose is donkey. By rule I4, y must be the index of the donkey
pronoun, so that x and y both occur outside . By similar
reasoning to case a), translates to one of (
Now we can return to the main theorem of this section.
Proof of theorem 1. Let the sentence have the structure shown in
figure 9e), and let
first that is non-
donkey. Then
and are non-pronomial and non-donkey, and is
certainly non-trace. By lemma 7, and translate to 2vc formulas
φ(x) and ψ(x) respectively. Since has no parent, by T1 and T2 and
(possibly) T4, it translates to one of ∃x(φ(x) ∧ ψ(x)), ∀x(φ(x) → ψ(x))
or ∀x(φ(x) → ¬ψ(x)).
Suppose on the other hand that is donkey. By lemma 7,
translates
to a 2vc formula (x; y), where y is the index of the donkey pronoun.
By lemma 4, translates to a formula (x; y). Again, since has
no parent, by T1 and T2 and (possibly) T4, it translates to one of
Figure 9. Proofs of lemma 7 and theorem 1. The letters S and T stand for semantic values.
The following result gives us an upper complexity bound for E2V-satisfiability.
Theorem 2. (Börger, Gurevich and Grädel (1997) corollary 8.1.5) The
problem of deciding the satisfiability of a sentence in L² is in NEXPTIME.
Corollary 1. The problem of determining the satisfiability of a set E
of E2V sentences is in NEXPTIME.
4. Expressiveness: lower bound
In the previous section, we showed that the English fragment E2V does
not take us beyond the expressive power of the two-variable fragment.
In this section, we show that, with two minor semantic stipulations, we
essentially obtain the whole of the two-variable fragment.
The first semantic stipulation concerns the expression of identity.
The identity relation is not expressible in E2V, and so we need to single
out a verb, eqs, which will be mapped to the identity relation. That
is, we have the lexicon entry
eqs.
The second semantic stipulation concerns the expression of the universal
property. All noun-phrases in E2V contain common nouns, which
automatically restricts quantification in sentence-subjects. It is therefore
convenient to single out a noun, thing, which expresses the property
possessed by everything. That is, we have the lexicon entry
thing.
For ease of reading, we will perform the usual contractions of every
thing to everything, some thing to something etc. We note that both
semantic stipulations can be dropped without invalidating corollaries 2
and 3 below.
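The two stipulations can be stated model-theoretically; the following formulation is our own paraphrase, not the paper's. In every intended structure, the interpretations of eqs and thing are fixed by

```latex
\forall x\,\forall y\,\bigl(\textit{eqs}(x,y) \leftrightarrow x = y\bigr)
\qquad\qquad
\forall x\;\textit{thing}(x)
```

so that eqs denotes the identity relation and thing the property possessed by everything.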
Table I. Some formulas of L² and their E2V-translations
Everything bees everything which ays it
Everything which ays something bees itself
Everything which something ays bees itself
Everything which bees itself ays everything
Everything ays everything which bees itself
Everything which ays something eqs a bee
Everything which something ays eqs a bee
everything
Everything ays every bee
Nothing bees something which it ays
Everything bees everything which it ays
∀x∀y a(x,y): Everything ays everything
∀x∃y a(x,y): Everything ays something
Everything ays everything which it does not bee
Everything which bees something which it cees
ays it
To understand how E2V can essentially express the whole of the
two-variable fragment, we need to establish an appropriate notion of
expressive equivalence.
Definition 8. Let φ, φ′ ∈ L. We say that φ′ is a definitional equivalent
of φ if, for every structure A interpreting only the non-logical
primitives of φ such that A ⊨ φ, there is a unique expansion B of A
such that B ⊨ φ′.
It is obvious that, if φ′ is a definitional equivalent of φ, then φ is
satisfiable if and only if φ′ is satisfiable. We show that, for any closed
formula φ in the two-variable fragment, a set E of E2V sentences can
be found such that E translates to a set of formulas whose conjunction
is a definitional equivalent of φ.
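For concreteness, here is a small example of definition 8 (our own illustration, not taken from the paper). Take a formula and introduce a fresh unary symbol p abbreviating the embedded existential subformula:

```latex
\varphi  \;=\; \forall x\,\bigl(\exists y\, r(x,y) \to s(x)\bigr)
\qquad
\varphi' \;=\; \forall x\,\bigl(p(x) \to s(x)\bigr) \;\wedge\;
               \forall x\,\bigl(p(x) \leftrightarrow \exists y\, r(x,y)\bigr)
```

Any structure A with A ⊨ φ expands uniquely to a model of φ′ by interpreting p as {a : A ⊨ ∃y r(a,y)} (the biconditional forces this choice), so φ′ is a definitional equivalent of φ; in particular, the two formulas are equisatisfiable.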
First, we establish an easy result about the two-variable fragment.
Lemma 8. There is a function f : L² → L², computable in polynomial
time and space, such that, for all φ ∈ L², f(φ) is a definitional
equivalent of φ consisting of a conjunction of formulas of the forms
found in the left-hand column of table I, where the a, b and c are
relation-symbols of the indicated arity.
Proof. Let φ ∈ L². Using the well-known transformation due to Scott
(see, e.g. Börger, Gurevich and Grädel (1997), lemma 8.1.2), we can
compute, in polynomial time and space, a formula φ′ of the form
∀x∀y ψ₀ ∧ ⋀_{1≤i≤m} ∀x∃y ψᵢ    (36)
where the ψᵢ (0 ≤ i ≤ m) are quantifier-free, such that φ′ is a definitional
equivalent of φ. Using the same technique, we may then compute,
again in polynomial time and space, a formula φ″ which is a conjunction
of formulas of the forms found in the left-hand column of table I,
such that φ″ is a definitional equivalent of φ′. Details of this second
transformation may be found in Pratt-Hartmann (2000), lemma 8, but
are completely routine.
Theorem 3. Let φ be in L². Then we can compute, in polynomial
time and space, a set E of sentences in E2V, such that E translates to
a set of sentences whose conjunction is a definitional equivalent of φ.
Proof. Compute f(φ) as in lemma 8. The formulas in the left-hand
column of table I are, modulo some trivial logical manipulation, the
translations of the corresponding E2V sentences in the right-hand column.
(This fact has been verified using the Prolog implementation
mentioned in section 2.4.) Thus, we can read off the E2V translations
of each conjunct in f(φ) from table I.
At this point, we are in a position to return to remarks made in
section 2.3 concerning possible restrictions of (or modifications to) E2V.
In particular, we claimed that imposing standard pronoun accessibility
restrictions (thus outlawing certain currently legal E2V sentences)
would not reduce the fragment's expressive power. That this is so follows
immediately from consideration of table I, since all the sentences
occurring there are perfectly acceptable English sentences in which
anaphora can be resolved intrasententially, and so could not possibly
violate any (reasonable) accessibility restrictions.
Similarly, we claimed in section 2.3 that banning some very awkward
negative sentences or insisting on the use of negative polarity determiners
would also not affect the expressive power of E2V. But we see that
the only faintly objectionable piece of English in table I occurs in the
row
Nothing bees something which it ays
where, arguably, something should be replaced by anything. But any
reasonable modication of E2V along these lines must surely assign
the meaning ∀x∀y(a(x,y) → ¬b(x,y)) to the sentence Nothing bees
anything which it ays, in which case, the proof of theorem 3 would
proceed as before. Thus, no reasonable|i.e. linguistically motivated|
modication of the way negative sentences are treated in E2V could
lead to a reduction in expressive power.
The following well-known result gives us a lower complexity bound
for E2V-satisfiability:
Theorem 4. The problem of deciding the satisfiability of a formula in
L² is NEXPTIME-hard.
A proof can be found in Börger, Gurevich and Grädel (1997), theorem
6.2.13. What these authors actually show is that the domino
problem for a toroidal grid of size 2ⁿ can be polynomially reduced to
the satisfiability problem for L². But the former problem is known to
be NEXPTIME-hard.
This gives us an immediate bound for the complexity of reasoning
in E2V:
Corollary 2. The problem of determining the satisfiability of a set E
of E2V sentences is NEXPTIME-hard.
In fact, if complexity (rather than expressive power) is all we are interested
in, we could do much better than this. Referring to definition 8,
dropping the requirement that the expansion B be unique still guarantees
that φ and φ′ are satisfiable over the same domains. By taking
care to move negations inwards in the ψᵢ in formula (36), we can optimize
the transformations in the proof of lemma 8 so that the resulting
conjunction φ″ makes no use of the formulas below the horizontal line
in table I, while still guaranteeing that φ and φ″ are equisatisfiable.
Since there is no mention of the word not in the corresponding E2V
translations, we have:
Definition 9. Denote by E2V′ the fragment of English defined in exactly
the same way as for E2V, but with all grammar rules involving
NegP removed.
Corollary 3. The problem of determining the satisfiability of a set E
of E2V′ sentences is NEXPTIME-hard.
Of course, E2V′ still contains the negative determiner no.
We remark that theorem 4 also holds for the two-variable fragment
without equality. It is then routine to show that the assumptions that
thing denotes the universal property and eqs expresses the identity
relation can both be dropped without invalidating corollary 3. We
remark in addition that theorem 4 applies only when no restrictions are
imposed on the non-logical primitives that may occur in formulas of L².
For formulas of L² over a fixed signature, the satisfiability problem is in
NP. Corresponding remarks therefore apply to E2V when interpreted
over a fixed vocabulary.
It is natural to ask what happens when the artificial indexing rule
I4 is removed. In this case, additional patterns of indexing are allowed
for a given string of words in E2V, resulting in increased expressive
power. Consider again sentence (4), repeated here as (37),
Every artist who employs a carpenter despises every beekeeper
who admires him.
As we remarked above, rules I1–I3 allow the pronoun to coindex with
either artist or carpenter, while rule I4 forbids the latter possibility.
But what if we abolished I4 and allowed all (intrasentential) anaphoric
references conforming to the usual rules of binding theory? For sentence
(37), this would result in the possible semantic value:
artist(x 1)),
Let us apply the standard translation rules T1–T5 to this semantic
value. (Our translation rules in no way depend on the adoption of
rule I4.) The result is the formula
We mention in passing that the Prolog implementation discussed in
section 2.4 has been so written that the first solution it finds corresponds
to an indexing pattern conforming to I1–I4, and that subsequent
solutions, obtainable by backtracking in the normal way, are those for
other indexation patterns, if any, conforming only to I1–I3.
Formula (39) is clearly not 2vc. Does this make a difference? Unfortunately,
it does: Pratt-Hartmann (2000) shows that the satisfiability
problem for E2V without rule I4 is undecidable. Actually, that paper
establishes a slightly stronger result: even if all negative items
(the determiner no and the rules for NegP) are removed from the
grammar, without rule I4, the problem of determining whether one
set of sentences entails another is also undecidable. The techniques
used are similar to those employed in this section, though somewhat
long-winded. Here, we simply state without proof:
Theorem 5. If the indexing rule I4 is removed from E2V, the problem
of determining the satisfiability of a set of E2V sentences (with specified
indexing patterns) is undecidable.
We conclude that the artificial indexing rule I4 really is essential in
enforcing decidability.
Many useful extensions to E2V could be straightforwardly implemented
without essential change to the corresponding logical frag-
ment. Extensions in this category include proper nouns, intransitive
verbs, (intersective) adjectives, as well as some special vocabulary,
most obviously the verb to be. Other useful extensions would be equally
straightforward to implement, but would, however, yield a fragment of
logic for which the satisability problem has higher complexity. The
most salient of these is the extension of the closed-class lexicon to
include the determiner the (interpreted, say, in a standard Russellian
fashion), and possibly also counting determiners such as at least four,
at most seven, exactly three etc. Such expressions will not in general
translate into the two-variable fragment, but they will translate into
the fragment C², which adds counting quantifiers to the two-variable
fragment. The satisfiability problem for this language is shown by Pa-
cholski, Szwast and Tendera (1999) to be decidable in nondeterministic
doubly exponential time. These examples suggest a programme of work:
for a host of grammatical constructions, provide a semantics which is
amenable to the kind of analysis of expressive power undertaken above
for our original fragment E2V.
5. Natural language and formal languages
Let us say that an argument in E2V is a finite (possibly empty) set
of E2V sentences (the premises) paired with a further E2V sentence
(the conclusion). Informally, an argument is said to be valid if the
conclusion must be true whenever the premises are true. Assuming that
our semantics is a faithful account of the meanings of E2V sentences, we
may take an argument to be valid if and only if the translations of the
premises entail the translation of the conclusion, in the usual sense of
entailment in first-order logic. We take this claim to be uncontentious.
Therefore, the validity of arguments in E2V is certainly decidable. The
question then arises as to the most efficient practical way of determining
the validity of arguments in E2V automatically.
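Since L² has the finite (indeed exponential-size) model property, decidability can also be illustrated, quite apart from the practical strategies discussed here, by a naive search for small models: an argument is valid exactly when its premises together with the negated conclusion are unsatisfiable. The sketch below is purely illustrative; the helper names are ours, and real L² formulas may need exponentially large domains, so this is not a practical procedure.

```python
from itertools import chain, combinations, product

def subsets(xs):
    """All subsets of xs, smallest first."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def satisfiable(phi, max_size=3):
    """Brute-force search for a model of `phi` over domains {0,...,n-1}.

    `phi(dom, p, r)` evaluates the sentence in the interpretation where
    the frozenset `p` interprets one unary predicate and the frozenset
    of pairs `r` one binary predicate.  Returns the size of the first
    satisfying model found, or None if there is none up to `max_size`.
    """
    for n in range(1, max_size + 1):
        dom = range(n)
        for p in subsets(dom):
            for r in subsets(product(dom, dom)):
                if phi(dom, frozenset(p), frozenset(r)):
                    return n
    return None

# "Something is a p, and every p r's some p" -- satisfiable with a
# single element (p = {0}, r = {(0, 0)}):
phi = lambda dom, p, r: any(x in p for x in dom) and all(
    x not in p or any(y in p and (x, y) in r for y in dom) for x in dom)
print(satisfiable(phi))  # -> 1
```

The resolution-based procedures discussed below avoid this exponential enumeration and are the practical choice.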
The most obvious strategy is as follows: translate the relevant E2V
sentences into first-order logic as specified in section 2; then use a
known algorithm for solving satisfiability problems in the two-variable
fragment. Of particular interest are algorithms based on ordered resolution
(see, for example, de Nivelle (1999), de Nivelle and Pratt-
Hartmann (2001)); such an approach enjoys the obvious engineering
advantage that it can be implemented as a heuristic in a general-purpose
resolution theorem-prover. These observations suggest that
the best way to decide the validity of arguments in E2V is to pipe
the output of an E2V parser to a resolution theorem-prover equipped
with suitable heuristics.
Historically, however, the strategy of translating natural language
deduction problems into first-order logic and then using standard logical
techniques to solve them has always had its dissenters. Two observations
encourage this dissent: (i) that the syntax of first-order
logic is unlike that of natural language, particularly in its treatment
of quantification; and (ii) that standard theorem-proving techniques
are unlike the kinds of reasoning patterns which strike people as most
natural. It is then but a small step to the idea that we might obtain a
better (i.e. more efficient) method of assessing the validity of arguments
if we reason within a logical calculus whose syntax is closer to that of
natural language. This idea is attractive because it suggests a naturalistic
dictum: treat the syntax of natural language with the respect it is
due, and your inference processes will run faster.
The purest manifestation of this school of thought is to give an
account of deductive inference in terms of proof-schemata which match
directly to patterns of (analysed) natural language sentences. Examples
of such natural-language-schematic proof systems are Fitch (1973),
Hintikka (1974) and Suppes (1979), as well as the work of the 'tra-
ditional' logicians such as Englebretsen (1981) and Sommers (1982).
However, it is certainly possible to accept the usefulness of translation
to a formal language, while maintaining that such a language should
be more like natural language than first-order logic. Thus, for example,
Purdy (1991) presents a language he calls LN , a variable-free syntax
with slightly less expressive power than ordinary first-order logic.
(Satisfiability in LN remains undecidable, however.) Purdy provides
a sound and complete proof procedure for LN and a grammar mapping
sentences from a fragment of English to formulas in LN . He also
shows how his proof procedure can be used to solve various deductive
problems in|as he claims|a natural fashion.
The trouble with all of these systems, however, is that no hard
evidence is adduced to support their claimed superiority to the obvious
alternative of reasoning with first-order logic translations. The work of
Purdy just cited provides a good illustration. It is clear that the very
simple natural language fragment he provides does not have the same
expressive power as LN . Indeed, Purdy's English fragment is, with some
trivial additions, less expressive than our E2V, and linguistically rather
unsatisfactory (for example, relative clauses can only have subject- and
never object-traces). It is worth remarking that the example inference
problems he gives to illustrate the 'naturalness' of his proof procedure
cannot be parsed by his grammar. We conclude that the case for LN as
an appropriate formalism for solving natural language reasoning problems
has not been established. The same goes for all of the alternative
schemes mentioned above: vague and unsubstantiated claims about the
psychological naturalness of certain reasoning procedures have little
merit.
McAllister and Givan (1992) present a much more restricted logical
language involving a construction similar to the 'window' operator of
Humberstone (1983; 1987), and specifically motivated by the quantification
patterns arising in simple natural language sentences. Determining
satisfiability in this language is shown to be an NP-complete problem,
and indeed to be solvable in polynomial time given certain restrictions.
(We remark that the computational complexity of similar, but more
expressive, languages is explored in Lutz and Sattler (2001).) McAllister
and Givan do not present a grammar corresponding to their
fragment, though it would not be difficult to write a parser for simple
sentences involving nouns (common and proper), transitive verbs, relative
clause constructions and the determiners some and every. Whether
any linguistically natural fragment of natural language could be given
which expresses the whole of McAllister and Givan's formal language
is unclear. However, McAllister and Givan's work has the great merit
of establishing a precise claim about the computational advantage of
restricting attention to their natural-language-inspired formalism.
The complexity results presented above show that no fragment of
English translating into McAllister and Givan's formalism could equal
the expressive resources of E2V. As McAllister and Givan point out,
they seem to have captured a fragment of English from which all
anaphora has been removed. Moreover, our results give us reason to
believe that no analogous natural-language-inspired formalism could
confer any computational advantages when it comes to fragments of
natural language as expressive as E2V. Section 3 establishes that we
can determine the validity of E2V arguments in nondeterministic exponential
time by adopting the straightforward strategy of translating the
relevant sentences into first-order logic; however, section 4 (and specifically
table I) tells us that determining the validity of arguments in E2V
is NEXPTIME-hard anyway, so that it is difficult to see how an alternative
representation language would confer any computational benefit,
at least in terms of (worst case) complexity analysis. Finally, and on
a more practical note, it is important to bear in mind the difficulty
of developing any reasoning procedure for the alternative formalism
which could compete with the impressive array of well-maintained and
-documented software already available for theorem-proving in first-order
logic. Thus, we remain skeptical as to whether formal languages
whose syntax is inspired by natural language, or whose syntax just
deviates from first-order logic in some other way, really constitute
more efficient representation languages for natural-language deduction
than does first-order logic.
6. Conclusion
This paper has provided a study in how to match a controlled language
with a decidable logic whose computational properties are well-
understood. The controlled language we chose, E2V, was shown to
correspond in expressive power exactly to the two-variable fragment
of first-order logic. Two features of this study deserve emphasis. The
first is logical rigour: the syntax and semantics of E2V were presented
in such a way that results about its expressive power and computational
complexity could be established as theorems. Logical rigour
is important in the context of controlled languages, because software
support for such languages must be shown to be robust and reliable.
The second feature is conservativity: our presentation of the syntax
and semantics of E2V borrowed heavily from accepted linguistic theory
(especially in the treatment of anaphora); and our chosen logical
representation language was (in contrast to some previous work on
natural language deduction) standard first-order logic. Conservativity
is important, because of the obvious benefits of relying on well-attested
linguistic theories and well-maintained, efficient theorem-proving software.
The ultimate goal of this research is to provide usable tools for natural
language system specication. Before that goal is achieved, many
questions remain to be answered, not least questions of a psychological
nature concerning the practical utility of such tools. However, the work
reported here is at least a step towards this goal. At the very least,
we have demonstrated that work in formal semantics, mathematical
logic and computer science has now reached the stage where relatively
expressive controlled languages can be precisely specied and their
computational properties rigorously determined.
References
A Concise Introduction to Syntactic Theory.
Three Logicians.
From Discourse to Logic: Introduction to Modeltheoretic Semantics of Natural Language.
Linguistics and Philosophy 19(2).
The Logic of Natural Language.
Knowledge Systems and Prolog: A Logical Approach to Expert Systems and Natural Language Processing. Addison Wesley.
585500 | Automated discovery of concise predictive rules for intrusion detection. | This paper details an essential component of a multi-agent distributed knowledge network system for intrusion detection. We describe a distributed intrusion detection architecture, complete with a data warehouse and mobile and stationary agents for distributed problem-solving to facilitate building, monitoring, and analyzing global, spatio-temporal views of intrusions on large distributed systems. An agent for the intrusion detection system, which uses a machine learning approach to automated discovery of concise rules from system call traces, is described.We use a feature vector representation to describe the system calls executed by privileged processes. The feature vectors are labeled as good or bad depending on whether or not they were executed during an observed attack. A rule learning algorithm is then used to induce rules that can be used to monitor the system and detect potential intrusions. We study the performance of the rule learning algorithm on this task with and without feature subset selection using a genetic algorithm. Feature subset selection is shown to significantly reduce the number of features used while improving the accuracy of predictions. | The AAFID group at Purdues COAST project has
prototyped an agent-based intrusion detection system.
Their paper analyzes the agent-based approach to intrusion
detection and mentions the prototype work that
has been done on AAFID (Balasubramaniyan et al.,
1998). Our project differs from AAFID in that we are
using data mining to detect intrusions on multiple
components, emphasizing the use of learning algorithms
in intrusion detection, and using mobile agents. AAFID
is implemented in Perl while our system is implemented
in Java.
3. Design of our agent-based system
A system of intelligent agents using collaborative information
and mobile agent technologies (Bradshaw,
1997; Nwana, 1996) is developed to implement an intrusion
detection system (Denning, 1987).
The goals of the system design are to:
Learn to detect intrusions on hosts and networks using
individual agents targeted at particular subsystems
Use mobile agent technologies to intelligently process
audit data at the sources;
Have agents collaborate to share information on suspicious
events and determine when to be more vigilant
or more relaxed;
Apply data mining techniques to the heterogeneous
data and knowledge sources to identify and react to
coordinated intrusions on multiple subsystems.
A notable feature of the intrusion detection system
based on data mining is the support it offers for gathering
and operating on data and knowledge sources
from the entire observed system. The system could
identify sources of concerted or multistage intrusions,
initiate countermeasures in response to the intrusion,
and provide supporting documentation for system administrators
that would help in procedural or legal action
taken against the attacker.
An example of an intrusion involving more than one
subsystem would be a combined NFS and rlogin intru-
sion. In the first step, an attacker would determine an
NFS filehandle for an .rhosts file or /etc/hosts.equiv
(assuming the appropriate filesystems are exported by the
UNIX system) (van Doorn, 1999). Using the NFS file-
handle, the attacker would re-write the file to give himself
login privileges to the attacked host. Then, using rlogin
from the formerly untrusted host, the attacker would be
able to login to an account on the attacked host, since the
attacked host now mistakenly trusts the attacker. At this
point, the attacker may be able to further compromise
the system. The intrusion detection system based on data
mining would be able to correlate these intrusions, help
to identify the origin of the intrusion, and support system
management in responding to the intrusion. The components
of the agent-based intrusion detection system are
shown in Fig. 1. Information routers read log files and
monitor operational aspects of the systems. The information
routers provide data to the distributed data
cleaning agents who have registered their interest in
particular data. The data cleaning agents process data
obtained from log files, network protocol monitors, and
system activity monitors into homogeneous formats. The
mobile agents, just above the data cleaning agents in the
system architecture, form the first level of intrusion de-
tection. The mobile agents travel to each of their associated
data cleaning agents, gather recent information,
and classify the data to determine whether suspicious
activity is occurring.
Like the JAM system (Stolfo et al., 1997), the low-level
agents may use a variety of classification algo-
rithms. Unlike the JAM system, though, the agents at
this level will collaborate to set their suspicion level to
determine cooperatively whether a suspicious action is
more interesting in the presence of other suspicious activity.
Fig. 1. Architecture of the intrusion detection system.
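To make the collaborative suspicion-setting concrete, the sketch below shows one way low-level agents might share suspicion levels so that an event counts for more when peers are already alarmed. All names, weights, and thresholds here are illustrative assumptions, not the system's actual implementation.

```python
class MonitorAgent:
    """Toy low-level monitoring agent.  The 0.5 peer weight and the
    0.8 alert threshold are illustrative assumptions only."""

    def __init__(self, name):
        self.name = name
        self.suspicion = 0.0
        self.peers = []

    def observe(self, score):
        # A local suspicious event raises this agent's own level...
        self.suspicion = min(1.0, self.suspicion + score)
        # ...and is shared so that peer agents grow more vigilant too.
        for peer in self.peers:
            peer.suspicion = min(1.0, peer.suspicion + 0.5 * score)

    def alert(self, threshold=0.8):
        return self.suspicion >= threshold

net, host = MonitorAgent("net-monitor"), MonitorAgent("syscall-monitor")
net.peers, host.peers = [host], [net]
host.observe(0.4)   # isolated event: nobody passes the threshold
net.observe(0.7)    # a second event elsewhere pushes net over it
print(net.alert(), host.alert())  # -> True False
```

The same observation that would be ignored in isolation thus triggers an alert when other agents have recently reported suspicious activity.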
At the top level, high-level agents maintain the data
warehouse by combining knowledge and data from the
low-level agents. The high-level agents apply data mining
algorithms to discover associations and patterns.
Because the data warehouse provides a global, temporal
view of the knowledge and activity of the monitored
distributed system, this system could help train system
administrators to spot and defend intrusions. Our system
could also assist system administrators in developing
better protections and countermeasures for their
systems and identifying new intrusions.
The interface agent for the agent-based intrusion
detection system directs the operation of the agents in
the system and maintains the status reported by the
mobile agents. The interface agent also provides access
to the data warehouse features.
In our project's current state, several data cleaning
and low-level agents have been implemented. This paper
discusses the agent that monitors privileged programs
using machine learning techniques. Our work in progress
includes the integration of data-driven knowledge
discovery agents into a distributed knowledge network
for monitoring distributed computing systems. In gen-
eral, we are interested in machine learning approaches to
discovering patterns of coordinated intrusions on a
system wherein individual intrusions are spread over
space and time.
4. Rule learning from system call traces
Programs that provide network services in distributed
computing systems often execute with special privileges.
For example, the popular sendmail mail transfer agent
operates with superuser privileges on UNIX systems.
Privileged programs like sendmail are often a target for
intrusions.
The trace of system calls executed by a program can
identify whether an intrusion was mounted against a
program (Forrest et al., 1996; Lee and Stolfo, 1998).
Forrest's project at the University of New Mexico
(Forrest et al., 1996) developed databases of system calls
from normal and anomalous uses of privileged programs
such as sendmail. Forrest's system call data is a
set of files consisting of lines giving a process ID number
(PID) and system call number. The files are partitioned
based on whether they show behavior of normal or
anomalous use of the privileged sendmail program
running on SunOS 4.1.
Forrest organized system call traces into sequence
windows to provide context. Forrest showed that a
database of known good sequence windows can be developed
from a reasonably sized set of non-intrusive
sendmail executions. Forrest then showed that intrusive
behavior can be determined by finding the percentage of
system call sequences that do not match any of the
known good sequences. The data sets that were used by
Forrest's project are available in electronic form on their
Web site (Forrest, 1999). We use the same data set to
enable comparison with techniques used in related papers
(Lee and Stolfo, 1998; Warrender et al., 1999).
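Forrest's window-matching scheme can be sketched as follows; the helper names and the toy traces here are illustrative stand-ins, not code or data from the original system.

```python
def windows(trace, size=7):
    """Slide a window of `size` consecutive system calls across one process trace."""
    return [tuple(trace[i:i + size]) for i in range(len(trace) - size + 1)]

def mismatch_rate(trace, known_good, size=7):
    """Fraction of a trace's windows that match no known-good window."""
    ws = windows(trace, size)
    if not ws:
        return 0.0
    return sum(1 for w in ws if w not in known_good) / len(ws)

# Build the known-good database from (toy) normal traces.
normal_traces = [[4, 2, 66, 66, 4, 138, 66, 5, 5, 4]]
known_good = {w for t in normal_traces for w in windows(t)}
```

A trace whose mismatch rate exceeds some empirically chosen threshold would then be flagged as anomalous.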
Our feature vector technique improves on Forrest's
technique because it does not depend on a threshold
percentage of abnormal sequences. Our feature vector
technique compactly summarizes the vast data obtained
from each process, enabling longer-term storage of the
data for reference and analysis. With respect to other
rule learning techniques, our technique induces a compact
rule set that is easily carried in lightweight agents.
Our technique may also mine knowledge from the data
in a way that can be analyzed by experts.
Lee and Stolfo (1998) used a portion of the data from
Forrest's project to show that the RIPPER (Cohen,
1995) learning algorithm could learn rules from system
call sequence windows. Lee empirically found sequences
of length 7 and 11 gave the best results in his experiments
(Lee and Stolfo, 1998). For training, each window
is assigned a label of normal if it matches one of the
good windows obtained from proper operations of
sendmail; otherwise, the window is labeled as abnor-
mal. An example of the system call windows and labels
are shown in Table 1. After RIPPER is trained, the
learned rule set is applied to the testing data to generate
classifications for each sequence window. Lee uses a
window across the classifications of length 2L + 1, where
L is the step size for the window, to group labels (Lee
and Stolfo, 1998). If the number of abnormal labels in
the window exceeds L, the window is considered ab-
normal. An example of a single window over the clas-
sifications is shown in Table 2.
The window scheme filters isolated noise due to occasional
prediction errors. When an intrusion takes
Table 1
Sample system call windows with training labels

System call sequence         Label
4, 2, 66, 66, 4, 138, 66     Normal
2, 66, 66, 4, 138, 66, 5     Normal
66, 66, 4, 138, 66, 5, 5     Normal
66, 4, 138, 66, 5, 5, 4      Abnormal
4, 138, 66, 5, 5, 4, 39      Abnormal
Table 2
Sample system call windows with classifications

RIPPER's classification   System call sequence       Actual label
Normal                    4, 2, 66, 66, 4, 138, 66   Normal
Normal                    2, 66, 66, 4, 138, 66, 5   Normal
Abnormal                  66, 66, 4, 138, 66, 5, 5   Normal
Abnormal                  66, 4, 138, 66, 5, 5, 4    Abnormal
Abnormal                  4, 138, 66, 5, 5, 4, 39    Abnormal
place, a cluster of system call sequences will usually be
classified abnormal. In Table 2, since there are more
abnormal classifications than normal in this window,
this entire window is labeled anomalous. Lee empirically
found that values of L = 3 and L = 5 worked
best for identifying intrusions (Lee and Stolfo, 1998).
Finally, when the window has passed over all the
classifications, the percentage of abnormal regions is
obtained by dividing the number of anomalous windows
by the total number of windows. Lee uses this percentage
to empirically derive a threshold that separates
normal processes from anomalous processes. Warrender
et al. (1999) uses a similar technique, the Locality Frame
Count (LFC), that counts the number of mismatches in
a group and considers the group anomalous if the count
exceeds a threshold. Warrender's technique allows intrusion
detection for long-running daemons, where an
intrusion could be masked by a large number of normal
windows with Lee's technique.
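Lee's post-classification windowing can be sketched as follows; treating L as both the window half-width and the step size is our reading of the description, and all names are illustrative.

```python
def region_labels(labels, L):
    """Slide a window of length 2*L + 1, stepping by L, over the per-sequence
    classifications; a region is abnormal when more than L of its labels
    are abnormal."""
    size = 2 * L + 1
    regions = []
    for start in range(0, max(len(labels) - size, 0) + 1, L):
        window = labels[start:start + size]
        regions.append("abnormal" if window.count("abnormal") > L else "normal")
    return regions

def abnormal_percentage(labels, L):
    """Fraction of abnormal regions, compared against an empirical threshold."""
    regions = region_labels(labels, L)
    return regions.count("abnormal") / len(regions)
```

Applied to the classifications in Table 2 with L = 2, the single region comes out abnormal, so the percentage is 1.0.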
Lee and Stolfo (1998) developed an alternate technique
that predicts one of the system calls in a sequence.
The alternate technique allows learning of normal behavior
in the absence of anomalous data. Our technique
is less suitable in that it does require anomalous data for
training.
5. Representing system call traces with feature vectors
One of the goals of automated discovery of predictive
rules for intrusion detection is to extract the relevant
knowledge in a form that lends itself to further analysis
by human experts. A natural question that was raised by
examination of the rules learned by RIPPER (Cohen,
1995) in the experiments of Lee and Stolfo (1998) and
Helmer et al. (1998) was whether essentially the same
performance could be achieved by an alternative approach
that induced a smaller number of simpler rules.
To explore this question, we designed an alternative
representation scheme for the data. This representation
was inspired by the success of the bag of words representation
of documents (Salton, 1983) that has been
successfully used by several groups to train text classi-
fication systems (Yang et al., 1998). In this representa-
tion, each document is represented using a vector whose
elements correspond to words in the vocabulary. In the
simplest case, the vectors are binary: a bit value of 1
indicates that the corresponding word appears in the
document in question, and a bit value of 0 denotes the
absence of the word.
In this experiment, the data were encoded as binary-valued
bits in feature vectors. Each bit in the vector is
used to indicate whether a known system call sequence
appeared during the execution of a process. This encoding
is similar in spirit to the bag of words encoding
used to represent text documents.
Feature vectors were computed on a per-process basis
from the sendmail system call traces (Forrest, 1999).
Based on ideas from previous work (Forrest et al., 1996;
Lee and Stolfo, 1998), sequence windows of size 5-12
were evaluated for use with our feature vector approach.
Sequence windows of size 7 were selected for their good
performance in learning accuracy and relatively small
dictionary size.
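The resulting per-process encoding can be sketched as follows; the function names and toy traces are illustrative stand-ins for the sendmail data.

```python
from itertools import chain

def sliding(trace, size=7):
    """All length-`size` windows of consecutive system calls in one trace."""
    return [tuple(trace[i:i + size]) for i in range(len(trace) - size + 1)]

def build_dictionary(traces, size=7):
    """Give every distinct window seen in the traces one bit position."""
    distinct = sorted(set(chain.from_iterable(sliding(t, size) for t in traces)))
    return {w: i for i, w in enumerate(distinct)}

def feature_vector(trace, dictionary, size=7):
    """One fixed-length binary vector per process: bit i is set iff the
    i-th known window occurred anywhere during the process's execution."""
    vec = [0] * len(dictionary)
    for w in sliding(trace, size):
        if w in dictionary:
            vec[dictionary[w]] = 1
    return vec
```

However long the trace, the process is summarized by one vector whose length equals the dictionary size.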
The training data was composed of 80% of the feature
vectors randomly selected from normal traces and
all of the feature vectors from the selected abnormal
traces. To compare our results to those from the JAM
project, four specific anomalous traces were selected for
training. Five different selections of anomalous traces
were also tested to ensure that arbitrarily selecting these
four anomalous traces did not significantly affect the
results.
The number of abnormal records in the training data
was quite small (15 records) in proportion to the set of
normal training data (520 records). To balance the
weightings, the abnormal training data was duplicated
36 times so that 540 abnormal records were present in
the training data. Lee and Stolfo (1998) explain the
rationale for balancing the data to obtain the desired
results from RIPPER. From the feature vectors built
from sequences of length 7, RIPPER efficiently learned a
rule set containing seven simple rules:
good IF a1406 = t
good IF a67 = t
good IF a65 = t
good IF a576 = t
good IF a132 = t
good IF a1608 = t
bad otherwise
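Applying a rule set of this shape to a process is then trivial; the dict-based vector format and the function names below are assumptions, though the attribute names mirror the rules above.

```python
# Encoding of the six "good" rules above, with "bad" as the default class.
# A process's feature vector is given as a dict mapping attribute names
# (bit positions) to booleans.
RULES = [("a1406", True), ("a67", True), ("a65", True),
         ("a576", True), ("a132", True), ("a1608", True)]

def classify(vector):
    """Return the class of the first matching rule, or the default 'bad'."""
    for attribute, required in RULES:
        if vector.get(attribute, False) == required:
            return "good"
    return "bad"
```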
The size of this set of rules compares favorably to the set
of 209 rules RIPPER learned when we used Lee's system
call window approach. The feature vector approach
condenses information about an entire process history
of execution. Feature vectors may make it easier for
learning algorithms by aggregating information over the
entire execution of a process rather than by looking at
individual sequences.
Applying the learned rule set produced the results
shown in Table 3. All traces except Normal sendmail
are anomalous. Boldface traces were used for training.
The total numbers of feature vectors, numbers of vectors
predicted abnormal by RIPPER, and detection results
are shown. Since a single feature vector represents
each process, each trace tends to have few feature vectors.
The rules cannot be expected to flag all of the processes
in an attacked trace as an intrusion. While handling
a mail message, sendmail spawns child processes
that handle different parts of the procedures involved in
receiving, queuing, and forwarding or delivering the
message. Some of these processes involved in handling
Table 3
Results of learning rules for feature vectors

Trace name        Total feature vectors   Vectors predicted abnormal   Attack detected?
chasin            6                       3                            Y
recursive
smdhole
Normal sendmail   130                     3
(not used for training)
an intrusive transaction may be indistinguishable from
processes handling a normal transaction because the
attack only affects one of the processes. Therefore, if at
least one of the processes involved in an intrusion is
flagged as abnormal, we can identify the group of related
processes as anomalous.
Several attacks did not result in successful intrusions.
For our intrusion detection system, we identify all attacks
as intrusive activity that merits further investigation
elsewhere in the IDS. It would be unlikely that an
attacker would attempt a single exploit and give up if it
fails. The data mining portion of our intrusion detection
system would then correlate these multiple (successful
and unsuccessful) attacks.
The anomalous traces are clearly identified in our
experiment with the exception of one of the minor in-
trusions, fwd-loops-2. The fwd-loop attacks are denial-
of-service attacks where the sendmail process spends its
time repeatedly forwarding the same message. The feature
vector technique may need to be adjusted from
simple binary values to statistical measures to identify
this class of attack.
A benefit of the feature vector approach is the simplicity
of the learned rules. Training takes place off-line
due to the amount of time needed to learn a rule set.
Each learned rule set for the sendmail system call feature
vectors is simple: generally fewer than 10 rules, where
each rule often consists of a conjunction of one or two
Boolean terms. Such a small set of rules applied to this
simple data structure should allow us to use this approach
in a near real-time intrusion detection agent
without placing an excessive load on a system. A small,
simple rule set also may lend itself to human expert
examination and analysis in data mining situations
(Cabena et al., 1998).
Another benefit of the feature vector approach is the
condensed representation of a process by its fixed-length
feature vector. The list of system calls executed by a
process can be enormous. Storing this information in its
entirety is infeasible. Representing the data by a relatively
short fixed-length string helps solve the problems
of transmitting and storing the data. This technique
realizes the mobile agent architecture's goal of reducing
and summarizing data at the point of generation.
6. Feature subset selection using genetic algorithms
A learning algorithm's performance in terms of
learning time, classification accuracy on test data, and
comprehensibility of the learned rules often depends on
the features or attributes used to represent the examples.
Feature subset selection has been shown to improve the
performance of a learning algorithm and reduce the effort
and amount of data required for machine learning
on a broad range of problems (Liu and Motoda, 1998).
A discussion of alternative approaches to feature subset
selection can be found in John et al. (1994), Yang and
Honavar (1998), Liu and Motoda (1998).
The benefits and effects of feature subset selection
include:
Feature subset selection affects the accuracy of a
learning algorithm because the features of a data set
represent a language. If the language is not expressive
enough, the accuracy of any learning algorithm is adversely
affected.
Feature subset selection reduces the computational
effort required by a learning algorithm. The size of
the search space depends on the features; reducing
the feature set to exclude irrelevant features reduces
the size of the search space and thus reduces the
learning effort.
The number of examples required to learn a classifi-
cation function depends on the number of features
(Langley, 1995; Mitchell, 1997). More features require
more examples to learn a classification function
to a desired accuracy.
Feature subset selection can also result in lower cost
of classification (because of the cost of obtaining feature
values through measurement or simply the computation
overhead of processing the features).
Against this background, it is natural to consider feature
subset selection as a possible means of improving the
performance of machine learning algorithms for intrusion
detection.
Genetic algorithms and related approaches (Goldberg,
1989; Michalewicz, 1996; Koza, 1992) offer an
attractive alternative to exhaustive search (which is
infeasible in most cases due to its computational
complexity). They also have an advantage over commonly
used heuristic search algorithms that rely on the monotonicity
assumption (i.e., addition of features does not
worsen classification accuracy) which is often violated in
practice (Yang and Honavar, 1998).
The genetic algorithm for feature subset selection
starts with a randomly generated population of indi-
viduals, where each individual corresponds to a candidate
feature subset. Each individual is encoded as a
string of 0s and 1s. The number of bits in the string is
equal to the total number of features. A 1 in the bit
string indicates an attribute is to be used for training,
and a 0 indicates that the attribute should not be used
for training. The fitness of a feature subset is measured
by the test accuracy (or cross-validation accuracy of the
classifier learned using the feature subset) and any other
criteria of interest (e.g., number of features used, the
complexity of the rules learned).
We used the RIPPER rule learning algorithm as the
classifier. The training data is provided to RIPPER,
which learns a rule set from the data. The number of
conditions in the learned rule set is counted, and this
value is used to determine the complexity of the learned
hypothesis. The learned rule set is applied to the test
examples and the determined accuracy is returned to the
feature subset selection routine. The fitness of the individual
is calculated based on the accuracy of the learned
hypothesis, accuracy(x), the number of attributes used
in learning, cost(x), the complexity of the learned
hypothesis, complexity(x), and weights (w_accuracy, w_cost,
w_complexity) for each parameter:

fitness(x) = w_accuracy * accuracy(x) - w_cost * cost(x)
             - w_complexity * complexity(x)
This fitness is then used to rank the individuals for se-
lection. Other methods of computing fitness are possible
and are discussed by Yang and Honavar (1998).
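Read this way, the fitness is a simple weighted combination; the weight values below are illustrative placeholders (the paper does not report them), and treating cost and complexity as penalties is our reading of the formula.

```python
def fitness(accuracy, cost, complexity,
            w_accuracy=1.0, w_cost=0.01, w_complexity=0.01):
    """Reward test accuracy of the learned rule set; penalize the number of
    attributes used (cost) and the number of rule conditions (complexity).
    Weight values here are illustrative, not from the paper."""
    return w_accuracy * accuracy - w_cost * cost - w_complexity * complexity
```

With this shape, a subset that reaches the same accuracy with fewer attributes or simpler rules ranks higher.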
A primary goal in using feature subset selection on
this intrusion detection problem is to improve accuracy.
A high percentage of the intrusion detection alerts reported
by current intrusion detection systems are false
alarms. Our system needs to be highly reliable, and we
would like to keep false alarms to a minimum. A secondary
goal is to reduce the amount of data that must
be obtained from running processes and classified. This
would reduce the overhead of our intrusion detection
approach on the monitored system.
6.1. Feature subset selection results
The genetic algorithm used standard mutation and
crossover operators with 0.001 probability of mutation
and 0.6 probability of crossover with rank-based selection
(Goldberg, 1989). The probability of selecting the
best individual was 0.6. A population size of 50 was used
and each run went through five generations.

Table 4
Feature subset selection results with constant parameters

Trial   Training accuracy of best individual   Attributes used by best individual
We started with the training data used for the previous
feature vector experiment (1060 feature vectors).
We added an additional copy of each unique feature
vector in the training data (72 feature vectors) to ensure
that rare but potentially important cases had a reasonable
probability of being sampled in the training and
testing phases. This gave a total of 1132 feature vectors
in the input to the genetic algorithm.
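The feature-subset search with the stated parameters can be sketched as follows; the `evaluate` callable stands in for the RIPPER train-and-test step, the linear rank weighting is a generic choice (it does not reproduce the paper's 0.6 selection probability for the best individual exactly), and all names are ours.

```python
import random

P_MUT, P_CROSS, POP_SIZE, GENERATIONS = 0.001, 0.6, 50, 5

def mutate(bits, p=P_MUT):
    """Flip each bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def crossover(a, b, p=P_CROSS):
    """One-point crossover, applied with probability p."""
    if random.random() < p:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def rank_select(pop, fitnesses):
    """Rank-based selection: probability grows linearly with fitness rank."""
    order = sorted(range(len(pop)), key=lambda i: fitnesses[i])
    chosen = random.choices(order, weights=range(1, len(pop) + 1))[0]
    return pop[chosen]

def select_features(n_features, evaluate, seed=0):
    """Evolve bitstring feature subsets; `evaluate` abstracts the learner."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        fits = [evaluate(ind) for ind in pop]
        nxt = []
        while len(nxt) < POP_SIZE:
            c1, c2 = crossover(rank_select(pop, fits), rank_select(pop, fits))
            nxt += [mutate(c1), mutate(c2)]
        pop = nxt[:POP_SIZE]
    return max(pop, key=evaluate)
```

In the paper's setting, `evaluate` would train RIPPER on the selected attributes and return the fitness described above.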
To show the general effectiveness of genetic feature
selection on this problem, Table 4 shows the results of
five separate runs of the genetic algorithm with RIPPER
with identical parameters used for each run. The number
of attributes is significantly reduced while the accuracy is
maintained.
Table 5 shows the results of using the rules from the
best individuals found in the five genetic feature selection
runs and compares the results to the original results
learned from all the features. All traces except Normal
sendmail are intrusions. Boldface traces were used for
training. Despite using only about half the features in
the original data set, the performance of the learned
rules was comparable to that obtained using the entire
set of features. After feature subset selection, none of the
feature vectors from normal sendmail are labeled as
abnormal. This shows an improvement in the rate of
false positives.
7. Analysis
A comparison of the effectiveness of RIPPER on the
problem using two different data representations and
the genetic feature selection algorithm follows.
Table 6 illustrates the advantages of the feature vector
representation over the system call windows for this
learning problem. The feature vector representation allows
the learning algorithm to learn a hypothesis much
faster and with comparable accuracy on the normal test
data, and the complexity of the hypothesis is much
smaller. Using genetic feature selection on the feature
vectors is time consuming but further improves the
learned hypothesis and reduces the set of attributes used
for learning.
Table 5
Results from rules learned by genetic feature selection

Trace             All attributes   Trial 1   Trial 2   Trial 3   Trial 4   Trial 5
chasin            Y                Y         Y         Y         Y         Y
recursive         Y                Y         Y         Y         Y         Y
smdhole           Y                Y         Y         Y         Y         Y
Normal sendmail   1/120            0/120     0/120     0/120     0/120     0/120
Table 6
Effectiveness of different learning techniques

Measure                            Sequence windows               Feature vectors               Genetic algorithm feature selection
Learning effort                    Moderate (30 min)              Very good (under 1 min)       Intensive (approx. 4 h)
Accuracy of learned hypothesis     Good (0.53% false positive)    Good (0.83% false positive)   Very good (0% false positive)
Complexity of learned hypothesis   Poor (avg. 225 rules)          Good (4 rules, 7 tests)       Good (avg. 8.6 rules, 9.6 tests)
Number of attributes used          7 (7 system calls in window)   1832                          Avg. 848.9
Classification effort              Moderate (large rule set)      Small (trivial rule set)      Smaller (trivial rule set, fewer features)
7.1. Rules learned by RIPPER
An example set of rules that were learned in the first trial
of RIPPER with genetic feature subset selection is
shown below:
good IF a1024 = t.
good IF a27 = t.
good IF a873 = f AND a130 = f.
good IF a12 = t.
good IF a191 = t.
good IF a223 = t.
good IF a327 = t.
bad IF .
The set above contains eight individual rules composed
of eight tests, which correspond to this pseudo-code:

IF "unlink,close,unlink,unlink,close,gettimeofday,open" seen THEN good
IF "link,rename" seen THEN good
IF "sigsetmask,sigblock,sigvec" not seen AND "close,setitimer,close,gettimeofday,link,socket,fcntl" not seen THEN good
IF "accept,fork" seen THEN good
IF "accept,fork,close" seen THEN good
IF "getdents" seen THEN good
IF "accept,close" seen THEN good
Each of the rule sets from the five genetic algorithm
trials contains rules that can be found in the other rule
sets. The third and fourth trials contain mostly unique
rules, while the other three runs contain a majority of
rules that are duplicated in other rule sets. The similarities
of rules between runs likely indicate the
strength of particular sequences in identifying normal
behavior.
Because the rule sets identify normal processes and
consider all others abnormal, none of the rules identifies
particular abnormal system call sequences. Conse-
quently, the rules do not identify system call sequences
that would directly signal an intrusion. However, these
rules may lead to an understanding of how an attack
causes the typical sequence of system calls to change.
In general, the small size of the rule sets learned by
RIPPER from the system call feature vectors and the
performance of these learned rule sets indicate that a
concise set of rules can clearly distinguish normal sendmail
processes from anomalous ones.
8. Conclusion and future work
Intrusion detection and abuse detection in computer
systems in networked environments is a problem
of great practical interest. This paper investigated the
classification of system call traces for intrusion detection
through the technique of describing each process
by a feature vector. From the feature vector
representation RIPPER learned a small, concise set of
rules that was successful at classifying intrusions. In
comparison with other techniques, the feature vector
representation does not depend on thresholds to separate
normal from anomalous. We are concerned that
establishing an arbitrary threshold is difficult and
would require tuning in practice to balance false
alarms (false positives) against missed intrusions (false
negatives).
The rule sets learned using the feature vector representation
are an order of magnitude simpler than
those obtained using other approaches reported in the
literature (Helmer et al., 1998; Lee and Stolfo, 1998).
This is especially noteworthy given the fact that all of
the experiments in question used the same rule learning
algorithm. We conjecture that the feature vector representation
used in our experiments is primarily responsible
for the differences in the rule sets that are
learned. The feature vectors condense information
from the entire execution of a process compared to the
fine-grained detail of individual sequences. The scope
of information contained in the feature vectors may
make it easier for learning algorithms to learn simple
rules.
It was further shown that feature subset selection
reduced the number of features in the data, which
resulted in less data and eort required for training
due to the smaller search space. Feature selection also
gave equivalent accuracy with a smaller set of features.
We have integrated the learned rules into a mobile
agent running on a distributed system consisting of
Pentium II systems running FreeBSD. This laboratory
network is connected by a firewall to the Department
of Computer Science's network so we may
operate the intrusion detection system in a controlled
environment. For operation of the IDS, a Voyager
server is started on each host in the monitored distributed
system. The mobile agent travels through
the system, classifies sample sendmail system call
feature vectors, and reports the results to its media-
tor. The mediator reports the results to the user interface
and optionally stores the information in a
database for potential mining and warehousing op-
erations. We have implemented a set of Java classes
that can interpret and apply the RIPPER rules,
which allows our mobile agent to bring its classifier
and rule set(s) with it as it travels through the distributed
system.
Open issues include the use of this technique in heterogeneous
distributed systems. Specific rule sets may
need to be developed for each node in a distributed
system due to variabilities between operating systems
and workload characteristics. Fortunately, the rule sets
discovered by RIPPER have been small, so mobile
agents ought to be able to carry multiple rule sets
without becoming overly heavy.
Another issue is whether this technique could be applied
in real time. Feature subset selection itself is
computationally expensive, so training and refining the
agent cannot be done in real time. After the agent is
trained, our technique can determine whether a process
is an intruder only after the process has finished, which
provides near real time detection. The techniques of Warrender
et al. (1999) or Lee and Stolfo (1998) would allow
anomaly detection in real time during the execution of
the process. Our technique could be refined to determine
the likelihood that a process is intrusive during the
process execution, giving real time detection. This re-
finement would be necessary for long-lived daemons
such as HTTP servers.
We would also like to know how well this technique
applies to privileged programs other than sendmail.
Warrender worked with five distinct privileged programs
and identified cases where different thresholds
and/or different algorithms worked better for different
programs (Warrender et al., 1999). Based on her work,
we expect this technique will be successful for more
programs than just sendmail.
Work in progress on intrusion detection is aimed at
the integration of data-driven knowledge discovery
agents into a distributed knowledge network for monitoring
and protection of distributed computing systems
and information infrastructures. The investigation of
machine learning approaches to discover patterns of
coordinated intrusions on a system wherein individual
intrusions are spread over space and time is of particular
interest in this context.
Acknowledgements
This work was supported by the Department of De-
fense. Thanks to the Computer Immune System Project
at the University of New Mexicos Computer Science
Department for the use of their sendmail system call
data.
--R
An Introduction to Software Agents.
Discovering Data Mining: From Concept to Implementation.
Fast effective rule induction.
An intrusion-detection model.
Computer immune systems data sets.
Computer immunology.
Artificial intelligence and intrusion detection: Current and future directions.
Genetic Algorithms in Search, Optimization and Machine Learning.
The architecture of a network level intrusion detection system.
Intelligent agents for intrusion detection.
Distributed knowledge networks.
Irrelevant features and the subset selection problem.
Genetic Programming.
Elements of Machine Learning.
Data mining approaches for intrusion detection.
Feature Extraction, Construction and Selection.
Genetic algorithms + data structures = evolution programs.
Machine Learning.
Network intrusion detection.
Software agents: An overview.
ObjectSpace Inc.
Open infrastructure for scalable intrusion detection.
Automated Text Processing.
JAM: Java agents for meta-learning over distributed databases.
Detecting intrusions using system calls: alternative data models.
Mobile intelligent agents for document classification and retrieval: A machine learning approach.
Guy Helmer is a Senior Software Engineer at Palisade Systems.
Johnny Wong is a Full Professor in the Computer Science Department.
Wong is also involved in the Coordinated Multimedia System (COMS) in the Courseware Matrix Software Project.
Les Miller is a professor and chair of Computer Science at Iowa State University.
Qingbo Yin , Rubo Zhang , Xueyao Li, An new intrusion detection method based on linear prediction, Proceedings of the 3rd international conference on Information security, November 14-16, 2004, Shanghai, China | intrusion detection;feature subset selection;machine learning |
A Survey of Optimization by Building and Using Probabilistic Models

Abstract: This paper summarizes the research on population-based probabilistic search algorithms based on modeling promising solutions by estimating their probability distribution and using the constructed model to guide the exploration of the search space. It settles the algorithms in the field of genetic and evolutionary computation, where they originated, and classifies them into a few classes according to the complexity of the models they use. Algorithms within each class are briefly described and their strengths and weaknesses are discussed.

Introduction
Recently, a number of evolutionary algorithms that guide the exploration of the search space by
building probabilistic models of promising solutions found so far have been proposed. These algorithms
have shown to perform very well on a wide variety of problems. However, in spite of a few
attempts to do so, the field lacks a global overview of what has been done and where the research
in this area is heading.
The purpose of this paper is to review and describe basic principles of the recently proposed
population-based search algorithms that use probabilistic modeling of promising solutions to guide
their search. It settles the algorithms in the context of genetic and evolutionary computation,
classifies the algorithms according to the complexity of the class of models they use, and discusses
the advantages and disadvantages of each of these classes.
The next section briefly introduces basic principles of genetic algorithms as our starting point.
The paper continues by sequentially describing the classes of approaches, classified according to
the complexity of the class of models they use, from the least to the most general. In Section 4 a few
approaches that work with other than string representations of solutions are described. The paper
is summarized and concluded in Section 5.
Genetic Algorithms, Problem Decomposition, and Building Blocks
Simple genetic algorithms (GAs) (Holland, 1975; Goldberg, 1989) are population-based search
algorithms that guide the exploration of the search space by application of selection and genetic
operators of recombination/crossover and mutation. They are usually applied to problems where
the solutions are represented or can be mapped onto fixed-length strings over a finite alphabet.
The user defines the problem that the GA will attempt to solve by choosing the length and base
alphabet of strings representing the solutions and defining a function that discriminates the string
solutions according to their quality. This function is usually called fitness. For each string, the
fitness function returns a real number quantifying its quality with respect to the solved problem.
The higher the fitness, the better the solution.
GAs start with a randomly generated population of solutions. From the current population
of solutions the better solutions are selected by the selection operator. The selected solutions are
processed by applying recombination and mutation operators. Recombination combines multiple
(usually two) solutions that have been selected together by exchanging some of their parts. There
are various strategies to do this, e.g. one-point and uniform crossover. Mutation performs a slight
perturbation to the resulting solutions. Created solutions replace some of the old ones and the
process is repeated until the termination criteria given by the user are met.
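The loop described above can be sketched as follows. The operator choices (binary tournament selection, one-point crossover, bit-flip mutation) and all parameter values are illustrative, not prescribed by any particular reference implementation:

```python
import random

def simple_ga(fitness, length=20, pop_size=50, generations=100,
              crossover_rate=0.9, mutation_rate=0.01, seed=0):
    """Sketch of a simple GA on fixed-length binary strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():
        # Binary tournament: the fitter of two random individuals wins.
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, length)          # one-point crossover
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(length):
                    if rng.random() < mutation_rate:    # bit-flip mutation
                        child[i] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# OneMax (the number of ones) is a linear problem a simple GA solves easily.
best = simple_ga(sum)
```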
By selection, the search is biased to the high-quality solutions. New regions of the search space
are explored by combining and mutating repeatedly selected promising solutions. By mutation,
close neighborhood of the original solutions is explored like in a local hill-climbing. Recombination
brings innovation by combining pieces of multiple promising solutions. GAs
should therefore work very well for problems that can be decomposed into subproblems
of bounded difficulty, such that a global solution can be constructed by solving these subproblems
and combining their solutions. Above-average solutions of these subproblems are often called
building blocks in the GA literature. Reproducing the building blocks by selection, preserving them
from disruption, and mixing them together is a very powerful principle for solving decomposable
problems (Harik, Cant'u-Paz, Goldberg, & Miller, 1997; Muhlenbein, Mahnig, & Rodriguez, 1998).
However, fixed, problem-independent recombination operators often either break the building
blocks frequently or do not mix them effectively. GAs work very well only for problems where
the building blocks are located tightly in strings representing the solutions (Thierens, 1995). On
problems with the building blocks spread all over the solutions, the simple GAs experience very poor
performance (Thierens, 1995). That is why there has been a growing interest in methods that learn
the structure of a problem on the fly and use this information to ensure a proper mixing and growth
of building blocks. One of the approaches is based on probabilistic modeling of promising solutions
to guide the further exploration of the search space instead of using crossover and mutation like in
the simple GAs.
3 Evolutionary Algorithms Based on Probabilistic Modeling
The algorithms that use a probabilistic model of promising solutions to guide further exploration of
the search space are called the estimation of distribution algorithms (EDAs) (Muhlenbein & Paa,
1996). In EDAs better solutions are selected from an initially randomly generated population of
solutions like in the simple GA. The true probability distribution of the selected set of solutions
is estimated. New solutions are generated according to this estimate. The new solutions are then
added into the original population, replacing some of the old ones. The process is repeated until
the termination criteria are met.
The EDAs therefore do the same as the simple GAs except that they replace the genetic
recombination and mutation operators with the following two steps:
(1) A model (an estimate of the true distribution) of selected promising solutions is constructed.
(2) New solutions are generated according to the constructed model.
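The two steps, together with selection and replacement, can be captured in a generic skeleton. The truncation selection, the replacement policy, and the parameter values below are illustrative choices; plugging in a univariate frequency model turns the skeleton into, in effect, the UMDA described later in this section:

```python
import random

def eda(fitness, model_fit, model_sample, length=20, pop_size=100,
        n_select=50, generations=50, seed=0):
    """Generic EDA skeleton over binary strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        selected = ranked[:n_select]                        # truncation selection
        model = model_fit(selected)                         # step (1): build the model
        offspring = [model_sample(model, rng)               # step (2): sample new solutions
                     for _ in range(pop_size // 2)]
        pop = ranked[:pop_size - len(offspring)] + offspring  # replace the worst half
    return max(pop, key=fitness)

def fit_univariate(selected):
    # Frequencies of ones on each string position (a univariate model).
    n = len(selected)
    return [sum(s[i] for s in selected) / n for i in range(len(selected[0]))]

def sample_univariate(freqs, rng):
    return [1 if rng.random() < p else 0 for p in freqs]

best = eda(sum, fit_univariate, sample_univariate)
```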
Although EDAs process solutions in a different way than the simple GAs, it has been shown
both theoretically and empirically that the results of the two can be very similar. For instance, the
simple GA with uniform crossover, which randomly picks a value on each position from either of
the two parents, works asymptotically the same as the so-called univariate marginal distribution algorithm
(Muhlenbein & Paa, 1996) that assumes that the variables are independent (Muhlenbein,
1997; Harik et al., 1998; Pelikan & Muhlenbein, 1999).
A distribution estimate can capture a building-block structure of a problem very accurately
and ensure a very effective mixing and reproduction of building blocks. This results in a linear
or subquadratic performance of EDAs on these problems (Muhlenbein & Mahnig, 1998; Pelikan
et al., 1998). In fact, with an accurate distribution estimate that captures the structure of the
solved problem, the EDAs, unlike the simple GAs, perform as GA theory under its commonly used
assumptions predicts. However, estimating the true distribution is far from a trivial task; there
is a trade-off between the accuracy and the efficiency of the estimate.
The following sections describe three classes of EDAs that can be applied to problems with
solutions represented by fixed-length strings over a finite alphabet. The algorithms are classified
according to the complexity of the class of models they use, starting with methods that assume
that the variables in a problem (string positions) are independent, continuing with ones that take
into account some pairwise interactions, and ending with methods that can accurately model even
a very complex problem structure with highly overlapping multivariate building blocks.
An example model from each presented class of models will be shown. Models will be displayed
as Bayesian networks, i.e. directed acyclic graphs with nodes corresponding to the variables in a
problem (string positions) and edges corresponding to probabilistic relationships covered by the
model. An edge between two nodes in a Bayesian network relates the two nodes so that the value
of the variable corresponding to the ending node of this edge depends on the value of the variable
corresponding to the starting node.
3.1 No Interactions
The simplest way to estimate the distribution of promising solutions is to assume that the variables
in a problem are independent and to look at the values of each variable in the selected solutions
regardless of the remaining variables (see figure 1). The model of the selected promising solutions used to generate the new
ones contains a set of frequencies of all values on all string positions in the selected set. These
frequencies are used to guide further search by generating new string solutions position by position
according to the frequency values. In this fashion, building blocks of order one are reproduced and
mixed very efficiently. Algorithms based on this principle work very well on linear problems where
the variables are not mutually interacting (Muhlenbein, 1997; Harik et al., 1997).
In the population-based incremental learning (PBIL) algorithm (Baluja, 1994) the solutions
are represented by binary strings of fixed length. The population of solutions is replaced with a
so-called probability vector, which initially assigns each value on each position the
same probability 0.5. After generating a number of solutions, the very best solutions are selected
and the probability vector is shifted towards the selected solutions using a Hebbian learning
rule (Hertz, Krogh, & Palmer, 1991). PBIL has recently also been referred to as hill-climbing with
learning (HCwL) (Kvasnicka, Pelikan, & Pospichal, 1996) and as the incremental univariate marginal
distribution algorithm (IUMDA) (Muhlenbein, 1997). Some analysis of the PBIL algorithm
can be found in Kvasnicka et al. (1996).
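The PBIL update can be sketched as follows; the learning rate, the sample size, and the number of solutions used in the update are illustrative stand-ins for the original's parameter settings:

```python
import random

def pbil(fitness, length=20, samples=50, n_best=2, lr=0.1,
         generations=200, seed=0):
    """Sketch of PBIL: a probability vector replaces the population."""
    rng = random.Random(seed)
    p = [0.5] * length                  # every value equally likely at the start
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(length)]
               for _ in range(samples)]
        for s in sorted(pop, key=fitness, reverse=True)[:n_best]:
            # Hebbian-style update: shift the vector towards the best solutions.
            for i in range(length):
                p[i] = (1 - lr) * p[i] + lr * s[i]
    return p

probs = pbil(sum)   # on OneMax the vector converges towards all ones
```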
In the univariate marginal distribution algorithm (UMDA) (Muhlenbein & Paa, 1996) the population
of solutions is processed. In each iteration the frequencies of values on each position in the
selected set of promising solutions are computed and these are then used to generate new solutions,
which replace the old ones. The process is repeated until the termination criteria are met. Some
theory of the UMDA can be found in Muhlenbein (1997).

Figure 1: Graphical model with no interactions covered.
The compact genetic algorithm (cGA) (Harik, Lobo, & Goldberg, 1998) replaces the population
with a single probability vector like the PBIL. However, unlike the PBIL, it modifies the probability
vector so that there is direct correspondence between the population that is represented by the
probability vector and the probability vector itself. Instead of shifting the vector components
proportionally to the distance from either 0 or 1, each component of the vector is updated by
shifting its value by the contribution of a single individual to the total frequency assuming a
particular population size. By using this update rule, theory of simple genetic algorithms can be
directly used in order to estimate the parameters and behavior of the cGA.
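The cGA update rule can be sketched as follows; the simulated population size n and the fitness function are illustrative. On each position where the two sampled individuals disagree, the vector moves by 1/n towards the winner's value:

```python
import random

def cga(fitness, length=20, n=100, max_iters=20000, seed=0):
    """Sketch of the compact GA with a simulated population size n."""
    rng = random.Random(seed)
    p = [0.5] * length
    for _ in range(max_iters):
        a = [1 if rng.random() < q else 0 for q in p]
        b = [1 if rng.random() < q else 0 for q in p]
        if fitness(b) > fitness(a):
            a, b = b, a                          # a is the tournament winner
        for i in range(length):
            if a[i] != b[i]:                     # update only where they compete
                if a[i] == 1:
                    p[i] = min(1.0, p[i] + 1.0 / n)
                else:
                    p[i] = max(0.0, p[i] - 1.0 / n)
        if all(q in (0.0, 1.0) for q in p):      # vector fully converged
            break
    return p

probs = cga(sum)
```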
All algorithms described in this section perform similarly. They work very well for linear
problems, where they achieve linear or sub-quadratic performance depending on the type of
problem, and they fail on problems with strong interactions among variables. For more information
on the described algorithms, as well as theoretical and empirical results, see the cited papers.
Algorithms that do not take into account any interdependencies among the bits (variables) fail
on problems with strong interactions among variables, where ignoring these interactions misleads
the search. That is why a lot of effort has been put into extending
methods that use a simple model covering no interactions to methods that can solve
a more general class of problems as efficiently as the simple PBIL, UMDA, or cGA solve linear
problems.
3.2 Pairwise Interactions
The first algorithms that did not assume that the variables in a problem were independent could
cover some pairwise interactions. The mutual-information-maximizing input clustering (MIMIC)
algorithm (De Bonet, Isbell, & Viola, 1997) uses a simple chain distribution (see figure 2a) that
maximizes the so-called mutual information of neighboring variables (string positions). In this
fashion the Kullback-Leibler divergence (Kullback & Leibler, 1951) between the chain and the
complete joint distribution is minimized. However, to construct a chain (which is equivalent to
ordering the variables), MIMIC uses only a greedy search algorithm due to its efficiency, and
therefore global optimality of the distribution is not guaranteed.
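The greedy chain construction can be sketched using empirical mutual information between pairs of string positions. The original MIMIC starts from the lowest-entropy variable; starting from variable 0 here keeps the sketch short:

```python
from math import log

def mutual_information(data, i, j):
    """Empirical mutual information (in nats) between binary positions i and j."""
    n = len(data)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = sum(1 for s in data if s[i] == a and s[j] == b) / n
            p_a = sum(1 for s in data if s[i] == a) / n
            p_b = sum(1 for s in data if s[j] == b) / n
            if p_ab > 0:
                mi += p_ab * log(p_ab / (p_a * p_b))
    return mi

def greedy_chain(data):
    """Greedily append the unused variable with the highest mutual
    information to the last variable in the chain."""
    chain = [0]
    remaining = set(range(1, len(data[0])))
    while remaining:
        nxt = max(remaining, key=lambda j: mutual_information(data, chain[-1], j))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

# Toy data: position 2 always copies position 0, position 1 is independent.
data = [[0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 0, 1]]
chain = greedy_chain(data)
```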
Baluja and Davies (1997) use dependency trees (see figure 2b) to model promising solutions.
There are two major advantages of using trees instead of chains. Trees are more general than
chains, because each chain is a tree. Moreover, by relaxing the constraints of the model, the best
model (according to a measure decomposable into terms of order two) can be found with a polynomial
maximal branching algorithm (Edmonds, 1967) that guarantees global optimality of the solution;
MIMIC, on the other hand, must resort to a greedy search because learning globally optimal chain
distributions is an NP-complete problem. As in the PBIL, the population is
replaced by a probability vector, which here contains all pairwise probabilities.
In the bivariate marginal distribution algorithm (BMDA) (Pelikan & Muhlenbein, 1999) a forest
(a set of mutually independent dependency trees, see figure 2c) is used. This class of models is
even more general than the class of dependency trees, because a single tree is simply a forest
containing one tree. As a measure to determine which variables should be connected and which should not,
Pearson's chi-square test (Marascuilo & McSweeney, 1977) is used. This measure is also used to
discriminate the remaining dependencies in order to construct the final model.
Figure 2: Graphical models with pairwise interactions covered: (a) MIMIC, (b) Baluja & Davies (1997), (c) BMDA.
Pairwise models allow covering some interactions in a problem and are very easy to learn. The
algorithms presented in this section reproduce and mix building blocks of order two very efficiently,
and therefore they work very well on linear and quadratic problems (De Bonet et al., 1997; Baluja
& Davies, 1997; Muhlenbein, 1997; Pelikan & Muhlenbein, 1999; Bosman & Thierens, 1999). The
latter two approaches can also solve 2D spin-glass problems very efficiently (Pelikan & Muhlenbein,
1999).
3.3 Multivariate Interactions
However, covering only pairwise interactions has been shown to be insufficient for problems
with multivariate or highly overlapping building blocks (Pelikan & Muhlenbein, 1999; Bosman
& Thierens, 1999). That is why research in this area continued with more complex models. On one
hand, using general models has brought powerful algorithms that are capable of solving decomposable
problems quickly, accurately, and reliably.
On the other hand, using general models has also made it necessary to use complex
learning algorithms that require significant computational time and still do not guarantee the global
optimality of the resulting models. However, in spite of the increased computational time spent on
learning the models, the number of evaluations of the optimized function is reduced significantly,
so the overall time complexity is reduced. Moreover, on many problems other algorithms
simply do not work: without learning the structure of a problem, algorithms must either be given
this information by an expert or they will simply be incapable of biasing the search so as to
solve complex decomposable problems at a reasonable computational cost.
Algorithms presented in this section use models that can cover multivariate interactions. In the
extended compact genetic algorithm (ECGA) (Harik, 1999), the variables are divided into a number
of intact clusters which are manipulated as independent variables in the UMDA (see figure 3a).
Therefore, each cluster (building block) is taken as a whole and different clusters are considered to
be mutually independent. To discriminate models, the ECGA uses a minimum description length
(MDL) metric (Mitchell, 1997) which prefers models that allow higher compression of data (selected
set of promising solutions). The advantage of using the MDL metric is that it penalizes complex
models when they are not needed and therefore the resulting models are not overly complex. To
find a good model, a simple greedy algorithm is used. Starting with all variables separated, in each
iteration current groups of variables are merged so that the metric increases the most. If no more
improvement is possible, the current model is used.
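The greedy merging can be sketched with a simplified MDL-style score (model bits plus compressed-population bits). The exact form of the ECGA metric differs, so the score below is only an illustration of the idea:

```python
from math import log2

def entropy_of_group(data, group):
    """Empirical entropy (bits) of the joint values of a group of positions."""
    n = len(data)
    counts = {}
    for s in data:
        key = tuple(s[i] for i in group)
        counts[key] = counts.get(key, 0) + 1
    return -sum(c / n * log2(c / n) for c in counts.values())

def mdl(data, groups):
    """Simplified MDL score: model size plus compressed-population size."""
    n = len(data)
    model_bits = sum((2 ** len(g) - 1) * log2(n + 1) for g in groups)
    data_bits = n * sum(entropy_of_group(data, g) for g in groups)
    return model_bits + data_bits

def ecga_model(data):
    """Greedy merging: start with singletons, merge the pair of groups that
    lowers the MDL score the most, stop when no merge helps."""
    groups = [[i] for i in range(len(data[0]))]
    while True:
        base = mdl(data, groups)
        best = None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                merged = (groups[:i] + groups[i + 1:j] + groups[j + 1:]
                          + [groups[i] + groups[j]])
                score = mdl(data, merged)
                if score < base and (best is None or score < best[0]):
                    best = (score, merged)
        if best is None:
            return groups
        groups = best[1]

# Toy data: positions 0 and 1 are perfectly linked, position 2 is independent.
data = [[0, 0, 0], [0, 0, 1], [1, 1, 0], [1, 1, 1]] * 10
groups = ecga_model(data)
```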
Following from the theory of the UMDA, for problems that are separable, i.e., decomposable into
non-overlapping subproblems of a bounded order, the ECGA with a good model should perform in
sub-quadratic time. A question is whether the ECGA finds a good model and how much effort
this takes. Moreover, many problems contain highly overlapping building blocks (e.g., 2D spin-glass
systems) which cannot be accurately modeled by simply dividing the variables into distinct classes.
This results in poor performance of the ECGA on these problems.
The factorized distribution algorithm (FDA) uses a factorized distribution as a fixed model
throughout the whole computation. The FDA is not capable of learning the structure of a problem
on the fly. The distribution and its factorization
are given by an expert. Distributions are allowed to contain marginal and conditional probabilities
which are updated according to the currently selected set of solutions. It has been theoretically
proven that when the model is correct, the FDA solves decomposable problems quickly, reliably, and
accurately (Muhlenbein, Mahnig, & Rodriguez, 1998). However, the FDA requires prior information
about the problem in the form of its decomposition and factorization. Unfortunately, this is usually
not available when solving real-world problems, and therefore the use of the FDA is limited to problems
where we can at least accurately approximate the structure of the problem.
The Bayesian optimization algorithm (BOA) (Pelikan, Goldberg, & Cant'u-Paz, 1998) uses a
more general class of distributions than the ECGA. It incorporates methods for learning Bayesian
networks (see figure 3b) and uses these to model the promising solutions and generate the new
ones. In the BOA, after selecting promising solutions, a Bayesian network that models these is
constructed. The constructed network is then used to generate new solutions. As a measure of
quality of networks, any metric can be used, e.g. Bayesian-Dirichlet (BD) metric (Heckerman,
Geiger, & Chickering, 1994), MDL metric, etc. In recently published experiments the BD scoring
metric has been used. The BD metric does not prefer simpler models to the more complex ones. It
uses accuracy of the encoded distribution as the only criterion. That is why the space of possible
models has been reduced by specifying a maximal order of interactions in a problem that are to
be taken into account. To construct the network with respect to a given metric, any algorithm
that searches over the domain of possible Bayesian networks can be used. In recent experiments, a
greedy algorithm has been used due to its efficiency.
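Once a network structure is available, fitting the conditional probability tables and ancestral sampling are straightforward. The sketch below assumes a hand-given structure (the BOA itself also searches for the structure), and the Laplace correction is an implementation choice, not part of the original description:

```python
import random

def fit_bayes_net(data, parents):
    """Estimate P(x_i = 1 | parents of x_i) from selected solutions,
    given a fixed network structure (mapping variable -> parent tuple)."""
    tables = {}
    for i, ps in parents.items():
        counts = {}
        for s in data:
            key = tuple(s[p] for p in ps)
            ones, total = counts.get(key, (0, 0))
            counts[key] = (ones + s[i], total + 1)
        # Laplace correction so unseen parent configurations stay samplable.
        tables[i] = {k: (o + 1) / (t + 2) for k, (o, t) in counts.items()}
    return tables

def sample_bayes_net(tables, parents, order, rng):
    """Ancestral sampling: generate variables in a topological order so
    every variable's parents are already assigned."""
    s = [0] * len(order)
    for i in order:
        key = tuple(s[p] for p in parents[i])
        p1 = tables[i].get(key, 0.5)     # fall back for unseen configurations
        s[i] = 1 if rng.random() < p1 else 0
    return s

# Toy network: x1 copies x0, x2 is independent (structure given by hand).
parents = {0: (), 1: (0,), 2: ()}
data = [[0, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]] * 25
tables = fit_bayes_net(data, parents)
rng = random.Random(1)
samples = [sample_bayes_net(tables, parents, [0, 1, 2], rng) for _ in range(200)]
```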
The BOA uses the same class of models as the FDA; however, it does not require any
information about the problem on input: it is able to discover this information itself. Nevertheless,
prior information can be incorporated, and the ratio of prior information to the information contained
in the set of high-quality solutions found so far can be controlled by the user. Not only does the
BOA fill the gap between the FDA and uninformed search methods, but it also offers a method that is
efficient even without any prior information (Pelikan et al., 1998; Schwarz & Ocenasek, 1999; Pelikan
et al., 1999) and still allows further improvement when such information is available. Another algorithm that
uses Bayesian networks to model promising solutions, called the estimation of Bayesian networks
algorithm (EBNA), was later proposed by Etxeberria and Larrañaga (1999).

Figure 3: Graphical models with multivariate interactions covered: (a) ECGA, (b) BOA.
The algorithms that use models capable of covering multivariate interactions achieve a very
good performance on a wide range of decomposable problems, e.g. 2D spin-glass systems (Pelikan
et al., 1998; Muhlenbein & Mahnig, 1998), graph partitioning (Schwarz & Ocenasek, 1999), communication
network optimization (Rothlauf, 1999), etc. However, problems that are decomposable
into terms of bounded order can still be very difficult to solve. Overlapping subproblems can
mislead the algorithm until the right solution to a particular subproblem is found and gradually
distributed across the solutions (e.g., see the F0-peak function in Muhlenbein and Mahnig (1998)).
Without generating the initial population with the use of problem-specific information, building blocks of
size proportional to the size of the problem have to be used, which results in an exponential performance
of the algorithms. This raises the question of which problems we aim to solve with algorithms
based on the reproduction and mixing of building blocks, as discussed earlier
in section 2. We do not attempt to solve all problems that can be decomposed into terms of a
bounded order. The problems we aim to solve are decomposable in the sense that they can be
solved by working at the level of lower-order partial solutions and combining the best
of these into the optimal or a close-to-optimal solution. This is how we bias the
search: the total space explored by the algorithm is reduced by orders
of magnitude, and computationally hard problems can be solved quickly, accurately, and reliably.
4 Beyond String Representation of Solutions
All algorithms described above work on problems defined on fixed-length strings over a finite
alphabet. However, recently there have been a few attempts to go beyond this simple representation and
directly tackle problems where the solutions are represented by vectors of real numbers or computer
programs, without mapping the solutions onto strings. All these approaches use simple models that
do not cover any interactions in a problem.
In the stochastic hill-climbing with learning by vectors of normal distributions (SHCLVND)
(Rudlof & Koppen, 1996) the solutions are represented by real-valued vectors. The population of
solutions is replaced (and modeled) by a vector of mean values of a Gaussian (normal) distribution,
one mean for each optimized variable (see figure 4a). The standard deviation σ is stored globally and
is the same for all variables. After generating a number of new solutions, the mean values
are shifted towards the best of the generated solutions and the standard deviation σ is reduced to
make future exploration of the search space narrower. Various ways of modifying the σ parameter
have been investigated in (Sebag & Ducoulombier, 1998). In another implementation of a real-coded
PBIL (Servet, Trave-Massuyes, & Stern, 1997), for each variable an interval (a_i, b_i) and a number
z_i are stored (see figure 4b). The z_i stands for the probability of a solution lying in the right half of
the interval; it is initialized to 0.5. Each time new solutions are generated using the corresponding
intervals, the best solutions are selected and the numbers z_i are shifted towards them. When the z_i
for a variable gets close to either 0 or 1, the interval is reduced to the corresponding half. In
figure 4b, each z_i is mapped to its corresponding interval (a_i, b_i).

Figure 4: Probabilistic models of real vectors of independent variables: (a) SHCLVND, (b) real-coded PBIL (Servet et al., 1998).
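The SHCLVND update can be sketched as follows; the learning rate, the multiplicative σ decay, and the fitness function are illustrative stand-ins for the original's parameter choices:

```python
import random

def shclvnd(fitness, length=3, samples=50, lr=0.2, sigma_decay=0.99,
            generations=300, seed=0):
    """Sketch of SHCLVND: a vector of Gaussian means replaces the population."""
    rng = random.Random(seed)
    mu = [0.0] * length          # one mean value per optimized variable
    sigma = 1.0                  # a single, globally stored standard deviation
    for _ in range(generations):
        pop = [[rng.gauss(m, sigma) for m in mu] for _ in range(samples)]
        best = max(pop, key=fitness)
        mu = [(1 - lr) * m + lr * b for m, b in zip(mu, best)]  # shift the means
        sigma *= sigma_decay     # narrow future exploration
    return mu

# Maximize -sum((x_i - 3)^2); the means should approach 3.
mu = shclvnd(lambda x: -sum((v - 3) ** 2 for v in x))
```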
In the probabilistic incremental program evolution (PIPE) algorithm (Salustowicz & Schmidhuber,
1997), computer programs or mathematical functions are evolved as in genetic programming
(Koza, 1992). However, pairwise crossover and mutation are replaced by probabilistic
modeling of promising programs. Programs are represented by trees where each internal node
represents a function/instruction and each leaf represents either an input variable or a constant. The
PIPE algorithm uses a probabilistic representation of the program trees: the probabilities of each
instruction in each node of a maximal possible tree are used to model promising programs and to generate new
ones (see figure 5). Unused portions of a generated tree are simply cut off before the program is evaluated
by the fitness function. Initially, the model is set so that the trees are generated at random.
From the current population of programs, the best-performing ones are selected. These are
then used to update the probabilistic model. The process is repeated until the termination criteria
are met.
5 Summary and Conclusions
Recently, the use of probabilistic modeling in genetic and evolutionary computation has become
very popular. By combining various achievements of machine learning and of genetic and evolutionary
computation, efficient algorithms for solving a broad class of problems have been constructed.
The most recent algorithms continue to prove their power and efficiency, and they offer a
promising approach to solving problems that can be resolved by combining high-quality pieces
of information of bounded order.
Figure 5: Graphical model of a program with no interactions covered, as used in PIPE.

In this paper, we have reviewed the algorithms that use probabilistic models of promising
solutions found so far to guide further exploration of the search space. The algorithms have been
classified into a few classes according to the complexity of the class of models they use. Basic
properties of each of these classes of algorithms have been briefly discussed, and a thorough list of
published papers and other references has been given.
6 Acknowledgments
The authors would like to thank Erick Cant'u-Paz, Martin Butz, Dimitri Knjazew, and Jiri Pospichal
for valuable discussions and useful comments that helped to shape the paper.
The work was sponsored by the Air Force Office of Scientific Research, Air Force Materiel Com-
mand, USAF, under grant number F49620-97-1-0050. Research funding for this project was also
provided by a grant from the U.S. Army Research Laboratory under the Federated Laboratory
Program, Cooperative Agreement DAAL01-96-2-0003. The U.S. Government is authorized to reproduce
and distribute reprints for Governmental purposes notwithstanding any copyright notation
thereon. The views and conclusions contained herein are those of the authors and should not be
interpreted as necessarily representing the official policies or endorsements, either expressed or
implied, of the Air Force Office of Scientific Research or the U.S. Government.
References
Using optimal dependency-trees for combinatorial optimization: Learning the structure of the search space
Linkage information processing in distribution estimation algorithms.
Optimum branching.
Genetic algorithms in search
Linkage learning via probabilistic modeling in the ECGA (IlliGAL Report No.
The compact genetic algorithm.
Learning Bayesian networks: The combination of knowledge and statistical data (Technical Report MSR-TR-94-09)
Introduction to the theory of neural compu- tation
Adaptation in natural and artificial systems.
Genetic programming: on the programming of computers by means of natural selection.
On information and sufficiency.
Hill climbing with learning (An abstraction of genetic algorithm).
Nonparametric and distribution-free methods for the social sciences
Machine learning.
The equation for response to selection and its use for prediction.
Convergence theory and applications of the factorized distribution algorithm.
BOA: The Bayesian optimization algo- rithm
The bivariate marginal distribution algorithm.
Communication network optimization.
Stochastic hill climbing with learning by vectors of normal distributions.
Probabilistic incremental program evolution: Stochastic search through program space.
Extending population-based incremental learning to continuous search spaces
Telephone network traffic overloading diagnosis and evolutionary computation techniques.
Analysis and design of genetic algorithms.
--CTR
Radovan Ondas , Martin Pelikan , Kumara Sastry, Scalability of genetic programming and probabilistic incremental program evolution, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Martin Pelikan , David E. Goldberg, A hierarchy machine: learning to optimize from nature and humans, Complexity, v.8 n.5, p.36-45, May/June
Paul Winward , David E. Goldberg, Fluctuating crosstalk, deterministic noise, and GA scalability, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
J. M. Pea , J. A. Lozano , P. Larraaga, Unsupervised learning of Bayesian networks via estimation of distribution algorithms: an application to gene expression data clustering, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, v.12 n.SUPPLEMENT, p.63-82, January 2004
Chao-Hong Chen , Wei-Nan Liu , Ying-Ping Chen, Adaptive discretization for probabilistic model building genetic algorithms, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Juergen Branke , Clemens Lode , Jonathan L. Shapiro, Addressing sampling errors and diversity loss in UMDA, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Jun Sakuma , Shigenobu Kobayashi, Real-coded crossover as a role of kernel density estimation, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Paul Winward , David E. Goldberg, Fluctuating crosstalk, deterministic noise, and GA scalability, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Joseph Reisinger , Risto Miikkulainen, Selecting for evolvable representations, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Fernando G. Lobo , Cludio F. Lima, A review of adaptive population sizing schemes in genetic algorithms, Proceedings of the 2005 workshops on Genetic and evolutionary computation, June 25-26, 2005, Washington, D.C.
J. L. Shapiro, Drift and Scaling in Estimation of Distribution Algorithms, Evolutionary Computation, v.13 n.1, p.99-123, January 2005
Hung , Ying-ping Chen , Hsiao Wen Zan, Characteristic determination for solid state devices with evolutionary computation: a case study, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Shigeyoshi Tsutsui , Martin Pelikan , Ashish Ghosh, Edge histogram based sampling with local search for solving permutation problems, International Journal of Hybrid Intelligent Systems, v.3 n.1, p.11-22, January 2006
Peter A. N. Bosman , Jrn Grahl , Franz Rothlauf, SDR: a better trigger for adaptive variance scaling in normal EDAs, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Jrn Grahl , Peter A.N. Bosman , Franz Rothlauf, The correlation-triggered adaptive variance scaling IDEA, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Dong-Il Seo , Byung-Ro Moon, An Information-Theoretic Analysis on the Interactions of Variables in Combinatorial Optimization Problems, Evolutionary Computation, v.15 n.2, p.169-198, Summer 2007
A. Mendiburu , J. Miguel-Alonso , J. A. Lozano , M. Ostra , C. Ubide, Parallel EDAs to create multivariate calibration models for quantitative chemical applications, Journal of Parallel and Distributed Computing, v.66 n.8, p.1002-1013, August 2006
Martin Pelikan , Kumara Sastry , David E. Goldberg, Sporadic model building for efficiency enhancement of hierarchical BOA, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Xavier Llor , Kumara Sastry , David E. Goldberg , Abhimanyu Gupta , Lalitha Lakshmi, Combating user fatigue in iGAs: partial ordering, support vector machines, and synthetic fitness, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Marcus Gallagher , Marcus Frean, Population-Based Continuous Optimization, Probabilistic Modelling and Mean Shift, Evolutionary Computation, v.13 n.1, p.29-42, January 2005
Yunpeng , Sun Xiaomin , Jia Peifa, Probabilistic modeling for continuous EDA with Boltzmann selection and Kullback-Leibeler divergence, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Steven Thierens , Mark De Berg, On the Design and Analysis of Competent Selecto-recombinative GAs, Evolutionary Computation, v.12 n.2, p.243-267, June 2004
Martin V. Butz , Martin Pelikan, Studying XCS/BOA learning in Boolean functions: structure encoding and random Boolean functions, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Martin V. Butz , David E. Goldberg , Kurian Tharakunnel, Analysis and improvement of fitness exploitation in XCS: bounding models, tournament selection, and bilateral accuracy, Evolutionary Computation, v.11 n.3, p.239-277, Fall
Mark Hauschild , Martin Pelikan , Claudio F. Lima , Kumara Sastry, Analyzing probabilistic models in hierarchical BOA on traps and spin glasses, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Martin V. Butz , Kumara Sastry , David E. Goldberg, Strong, Stable, and Reliable Fitness Pressure in XCS due to Tournament Selection, Genetic Programming and Evolvable Machines, v.6 n.1, p.53-77, March 2005
Martin Pelikan , Kumara Sastry , David E. Goldberg, Multiobjective hBOA, clustering, and scalability, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Martin Pelikan , David E. Goldberg , Shigeyoshi Tsutsui, Getting the best of both worlds: discrete and continuous genetic and evolutionary algorithms in concert, Information Sciences: an International Journal, v.156 n.3-4, p.147-171, 15 November
Kumara Sastry , Hussein A. Abbass , David E. Goldberg , D. D. Johnson, Sub-structural niching in estimation of distribution algorithms, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Martin V. Butz , Martin Pelikan , Xavier Llorà , David E. Goldberg, Extracted global structure makes local building block processing effective in XCS, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Martin Pelikan , James D. Laury, Jr., Order or not: does parallelization of model building in hBOA affect its scalability?, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Martin Pelikan , Rajiv Kalapala , Alexander K. Hartmann, Hybrid evolutionary algorithms on minimum vertex cover for random graphs, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Ying-Ping Chen , David E. Goldberg, Convergence Time for the Linkage Learning Genetic Algorithm, Evolutionary Computation, v.13 n.3, p.279-302, September 2005
Martin V. Butz , Martin Pelikan , Xavier Llorà , David E. Goldberg, Automated Global Structure Extraction for Effective Local Building Block Processing in XCS, Evolutionary Computation, v.14 n.3, p.345-380, September 2006
J. M. Peña , J. A. Lozano , P. Larrañaga, Globally Multimodal Problem Optimization Via an Estimation of Distribution Algorithm Based on Unsupervised Learning of Bayesian Networks, Evolutionary Computation, v.13 n.1, p.43-66, January 2005
Martin Pelikan , Kumara Sastry , David E. Goldberg, Sporadic model building for efficiency enhancement of the hierarchical BOA, Genetic Programming and Evolvable Machines, v.9 n.1, p.53-84, March 2008 | genetic algorithms;genetic and evolutionary computation;stochastic optimization;decomposable problems;model building |
585663 | Fast Global Optimization of Difficult Lennard-Jones Clusters. | The minimization of the potential energy function of Lennard-Jones atomic clusters has attracted much theoretical as well as computational research in recent years. One reason for this is the practical importance of discovering low energy configurations of clusters of atoms, in view of applications and extensions to molecular conformation research; another reason of the success of Lennard Jones minimization in the global optimization literature is the fact that this is an extremely easy-to-state problem, yet it poses enormous difficulties for any unbiased global optimization algorithm.In this paper we propose a computational strategy which allowed us to rediscover most putative global optima known in the literature for clusters of up to 80 atoms and for other larger clusters, including the most difficult cluster conformations. The main feature of the proposed approach is the definition of a special purpose local optimization procedure aimed at enlarging the region of attraction of the best atomic configurations. This effect is attained by performing first an optimization of a modified potential function and using the resulting local optimum as a starting point for local optimization of the Lennard Jones potential.Extensive numerical experimentation is presented and discussed, from which it can be immediately inferred that the approach presented in this paper is extremely efficient when applied to the most challenging cluster conformations. Some attempts have also been carried out on larger clusters, which resulted in the discovery of the difficult optimum for the 102 atom cluster and for the very recently discovered new putative optimum for the 98 atom cluster. | Introduction
One of the simplest to describe yet most difficult to solve problems
in computational chemistry is the automatic determination of molecular
conformation.

© 2000 Kluwer Academic Publishers. Printed in the Netherlands.

A molecular conformation problem can be described as that of finding the global minimum of a suitable potential energy
function which depends on relative atom positions. Many models have
been proposed in the literature, ranging from very simple to extremely complex ones, like, e.g., the so-called protein folding problem (Neumaier,
1997). While realistic models of atomic interactions take into account
different components of the potential energy function, like pairwise in-
teractions, dihedral angles, torsion, and allow the analysis of composite
molecules in which atoms of different kinds interact, it is commonly
recognized in the chemical literature that a fundamental step towards
a better understanding of some molecular conformation problems is
the knowledge of the global minimum of the so called Lennard-Jones
potential energy model; this model is a sufficiently accurate one for
noble gas clusters. Moreover some of the most difficult to find Lennard-Jones
structures, exactly those towards which this paper is oriented,
were found to represent very closely the structure of nickel and gold
clusters (Wales and Scheraga, 1999).
In this model all atoms are considered to be equal and only pairwise
interaction is included in the definition of the potential energy. Let N ≥ 2 be an integer representing the total number of atoms. The Lennard-Jones (in short L-J) pairwise potential energy function is defined as follows: if the distance between the centers of a pair of atoms is r, then their contribution to the total energy is defined to be

$v(r) = \frac{1}{r^{12}} - \frac{2}{r^{6}}$   (1)

and the L-J potential energy E of the molecule is defined as

$E(X) = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} v(\|X_i - X_j\|),$   (2)

where $X_i \in \mathbb{R}^3$ represents the coordinates of the center of the i-th atom and the norm used is the usual Euclidean one. An optimum L-J cluster $X^\star = \{X_1, \dots, X_N\}$ is defined as the solution of the global optimization problem

$\min_{X \in \mathbb{R}^{3N}} E(X).$
Although extremely simplified, this model has attracted research
in chemistry and biology, as it can be effectively considered as a reasonably
accurate model of some clusters of rare gases and as it represents
an important component in most of the potential energy models
used for complex molecular conformation problems and protein folding
(Neumaier, 1997).
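As a concrete reference for the model just described, the energy can be sketched in a few lines of Python; the reduced pair form r**-12 - 2*r**-6 (minimum value -1 at r = 1) is assumed here, consistent with the remark later in the paper that the pair potential attains its minimum at distance 1.0.

```python
import numpy as np

def lj_pair(r):
    """Reduced L-J pair potential (assumed form): minimum -1 at r = 1."""
    return r ** -12 - 2.0 * r ** -6

def lj_energy(X):
    """Total L-J energy of a cluster; X is an (N, 3) array of atom centers."""
    N = len(X)
    return sum(lj_pair(np.linalg.norm(X[i] - X[j]))
               for i in range(N - 1) for j in range(i + 1, N))
```

With this form, two atoms at unit distance contribute exactly -1 to the total energy, so each additional near neighbor lowers the energy by roughly one unit.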
From the point of view of numerical optimization methods, this
problem is an excellent test for local and global unconstrained optimization
methods; it is one of the simplest models in the literature of test
problems (see e.g. Floudas and Pardalos, 1999, pp. 188-193), yet one
of the most difficult as it has been conjectured (Hoare, 1979) that the
number of local optimum conformations grows at least exponentially
with N . Many computational approaches can be found in the literature,
ranging from mathematical programming models (Gockenbach et al.,
1997), to genetic methods (Deaven et al., 1996), to Monte Carlo sampling
(Wales and Doye, 1997) and many others. For a recent review of
the state of the art in this subject, the reader may consult (Wales and
Scheraga, 1999).
Unfortunately very few theoretical results are available which could
be used to tune an optimization method for the L-J minimization
problem. One notable exception is the result of (Xue, 1997) where it
is proven that in any global optimum the pairwise interatomic distance
is bounded from below by 0.5; while this result is considered obvious in the chemical literature, only recently has it been proven in a formal way. In practice, in all known putative global optima,
no pair of atoms is observed whose distance is less than about 0.95.
In the literature, pairs of atoms which are roughly 1.0 unit apart are
called "near neighbors". Very little is a-priori known on the structure
of the global optima; even a quite reasonable conjecture stating that
the diameter of an optimum N-atom cluster is $O(N^{1/3})$ is still open.
Thus, except for extremely small and easy cases, there are no proofs
of global optimality for the putative global optimum configurations
known in the literature (see the L-J page on the Cambridge Clusters
Database at URL http://www.brian.ch.cam.ac.uk). All published
results are aimed at confirming, through experimentation, numerical
results known in the literature or at improving current estimates of the
global optima.
Despite the complexity of the problem, most putative global optima
of micro-clusters (with up to 147 atoms) have been discovered by means
of a very simple and efficient algorithm first proposed in (Northby, 1987)
and further refined in (Xue, 1994), which is based on the idea of starting
local optimization from initial configurations built by placing atoms
in predefined points in space, according to lattice structures which
researchers in chemistry and physics believe are the most common ones
found in nature. However quite a few exceptions to regular lattice-based
structures do exist; these structures are extremely difficult to discover
with general purpose methods. It is to be remarked that new optima
are still being discovered, even though at a slower rate than just a
few years ago. In August 1999, for example, a new configuration for LJ98 was discovered (Leary and Doye, 1999) and possibly other new records will appear even in the range N ≤ 147, which has been the most
thoroughly and extensively studied in the last decade. In the chemical
physics literature, it is well known that some "magic numbers" exist,
like 102, for which classical lattice-based procedures are doomed to fail. These numbers correspond to particularly stable
members of particular geometric classes; usually these magic number
clusters correspond to "closed shell" configurations and exhibit a higher
degree of geometric regularity, symmetry, compactness and sphericity
than non-magic number clusters within the same geometric class. Moreover
magic number clusters have a relatively larger number of near
neighbors than non-magic number clusters of the same class.
In this paper we propose a new methodology aimed at discovering
the most difficult structures for Lennard-Jones clusters. Our main aim
is not to introduce a general purpose method, but that of defining a
new strategy for local searches which can be profitably included in any
algorithm which is based on local searches, including the basin-hopping
method (Wales and Doye, 1997), the big-bang method (Leary, 1997),
Leary's descent method (Leary and Doye, 1999) or genetic algorithms
(see (Deaven et al., 1996), (Pullan, 1999)). Our method consists of a
modification of the objective function, in the first phase of the descent,
which enables a local search algorithm to escape from the enormous
number of local optima of the L-J energy landscape; implemented in a
straightforward Multistart-like method, our modification improved by at least two orders of magnitude the number of local searches required to
find the difficult 38 and 75 atom cases and could find the new 98 atom
cluster and the difficult 102 case in less than 10 000 local searches.
In a series of runs the LJ 38 optimum was discovered in 56% of the
local searches performed, an incredible performance if compared with
the best result published so far in which LJ 38 is found in 0.3% of the
attempts (Leary, 1997). In our first attempt to attack the LJ 98 case,
which was discovered only in summer 1999 (Leary and Doye, 1999)
using "millions of local searches" (Anonymous, 1999), we were able to
find the global optimum in less than 10 000 local searches, on average.
The success of the proposed procedure seems to be due to the fact
that the penalty term introduced into the objective function and the
modifications to the potential tend to enlarge the region of attraction
of regular, compact, spherical clusters: these are the characteristics of
the most difficult to find magic number clusters.
2. A new approach to the detection of Lennard-Jones
clusters
2.1. Multistart-like methods
We consider Multistart-like approaches to the problem of globally minimizing
the Lennard-Jones potential function. The pure Multistart method
can be described as follows.
Pure Multistart
1. Generate a point X ∈ IR^{3N} from the uniform distribution over a sufficiently large box centered at the origin;
2. perform a local search in IR^{3N} using X as a starting point;
3. if a stopping condition is not satisfied, go back to 1, otherwise
return the local minimum with the lowest function value.
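The scheme above can be sketched directly in Python; SciPy's L-BFGS-B routine is used here as a stand-in for the (unspecified) local search, and the box size, number of trials, and stopping rule are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def pure_multistart(energy, N, n_starts=20, box=2.0, seed=0):
    """Pure Multistart sketch: uniform random starts in a box, local descent
    from each start, keep the best local minimum found."""
    rng = np.random.default_rng(seed)
    best_x, best_e = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(-box, box, size=3 * N)        # step 1: random start
        res = minimize(energy, x0, method="L-BFGS-B")  # step 2: local search
        if res.fun < best_e:                           # keep the best minimum
            best_x, best_e = res.x, res.fun
    return best_x, best_e
```

The `energy` argument is any function of the flattened 3N-dimensional coordinate vector, e.g. a wrapper around the L-J energy.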
Of course, we can not expect Pure Multistart to be a feasible method
for the solution of the Lennard-Jones problem. Indeed, even though the
difficulty of solving a global optimization problem by Multistart is not
actually related to the number of local minima, but to the measure of
the basin of attraction of the global minimum, the fact that the number
of local minima is exponentially large is a clear indication that Multistart may experience great difficulties in solving this kind of problem.
Numerical computations tend to confirm this fact. In Table I we notice that Pure Multistart (PMS) fails to detect the putative global optimum within 1000 random trials for several values of N and has a very low percentage of successes for many other values of N.
Therefore, it seems necessary to modify the basic Multistart scheme
in order to be able to solve larger problems. A simple idea is to exploit
the special structure of the Lennard-Jones potential function and
modify the search mechanism accordingly. Looking at the form of the
interaction function (1) we notice that good solutions should
possess some or all of the following characteristics:
- atoms should not be too close to each other (also recalling the result in Xue, 1997);
- the distance between many pairs of atoms should be close to 1.0 (near neighbors), since at 1.0 the Lennard-Jones pair potential attains its minimum;
- the optimal configuration should be as spherical, and hence compact, as possible; thus the diameter of the cluster should not be too large, while a minimum near-neighbor separation is still maintained.
According to these elementary observations, it is possible to substitute
the uniform random generation of points (Step 1 in the Pure
Multistart method) by a generation mechanism which tends to favor
point configurations possessing the above characteristics. As a first attempt
in this direction we substituted the uniform random generation
procedure with the following:
Point Generation Procedure
1. Start with a single atom placed in the origin, i.e. let X = {X_1} with X_1 = 0, and set k = 2.
2. Generate a random direction d ∈ IR^3 and a point X_k along this direction in such a way that its minimum distance r from every point in X is at least 0.5.
3. If r is greater than a threshold R > 1, then X_k is shifted towards the origin along direction d until its distance from at least one point in X becomes equal to R.
4. Set X = X ∪ {X_k}. If k < N, set k = k + 1 and go back to Step 2.
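A possible Python rendering of this generation procedure; the sampling range for the trial radius and the inward shift step size are implementation choices not fixed by the text.

```python
import numpy as np

def generate_cluster(N, R=1.5, seed=0):
    """Sketch of the Point Generation Procedure: atoms added one at a time
    along random directions, at least 0.5 apart, pulled in to within R of
    the existing cluster."""
    rng = np.random.default_rng(seed)
    X = [np.zeros(3)]                                   # step 1: atom at origin
    while len(X) < N:
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                          # step 2: random direction
        t = rng.uniform(0.0, 2.0 * N ** (1.0 / 3.0))    # trial radius (assumption)
        P = t * d
        if min(np.linalg.norm(P - x) for x in X) < 0.5:
            continue                                    # too close: resample
        while min(np.linalg.norm(P - x) for x in X) > R:
            P = P - 0.01 * d                            # step 3: shift inward
        X.append(P)                                     # step 4: accept atom
    return np.array(X)
```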
This different point generation procedure slightly improves the performance
of Multistart as it can be seen from Table I where the results
of Multistart equipped with this generation algorithm are displayed
under the heading MMS (Modified Multistart); however, even this modified algorithm soon starts failing to detect the putative global optima of moderately large clusters.
2.2. Two phase Multistart
It might be possible to refine further the Point Generation Procedure
in order to produce better starting points, but it is felt that no real
breakthrough can be achieved in this direction. It seems more reasonable
to attack the problem by changing another component of the
Multistart method, i.e. the local search procedure; we are thus led to
search for a local optimization method which avoids as much as possible
being trapped in stationary points of the Lennard-Jones potential
characterized by a high value of the potential energy (2). The idea
is that of performing local searches employing a modified objective
Table I. Number of successes in 1000 random trials by the Pure Multistart (PMS) and Modified Multistart (MMS) methods.
function which, although related to the Lennard-Jones potential, is in
some sense "biased" towards configurations which satisfy the above
requirements. The local minimum of this modified potential is then
used as a starting point for a local optimization of the Lennard-Jones
potential function. This leads to the following version of the Multistart
method. Let ME(X) be a suitably defined modified potential function.
Two-Phase Multistart
1. Generate a point X ∈ IR^{3N} according to the Point Generation Procedure;
2. perform a local minimization of the modified potential function ME in IR^{3N} using X as a starting point; let X̄ be the local optimum thus obtained;
3. perform a local optimization of the Lennard-Jones potential (2) starting from X̄;
4. if a stopping condition is not satisfied, go back to Step 1; otherwise return the local minimum of E with the lowest function value.
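Steps 2 and 3, the two-phase local search itself, can be sketched as follows; SciPy's L-BFGS-B again stands in for the unspecified local optimizer, and E and ME are functions of the flattened coordinate vector.

```python
import numpy as np
from scipy.optimize import minimize

def two_phase_search(E, ME, x0):
    """Phase 1: descend on the modified potential ME; Phase 2: refine the
    resulting point on the true objective E."""
    x_bar = minimize(ME, x0, method="L-BFGS-B").x   # local minimum of ME
    res = minimize(E, x_bar, method="L-BFGS-B")     # nearby local minimum of E
    return res.x, res.fun
```

Since the phase-1 minimizer is typically close to a minimizer of E, the second descent is cheap, which matches the remark below that the two searches do not double the cost.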
We notice that, in place of the usual local search of the Pure or
Modified Multistart method, here we have what we call a two-phase
local search: first the function ME is optimized, and then the Lennard-Jones
potential E. We underline that, even if at each iteration two local
searches are started, the computational effort is not doubled: indeed,
the local minimum X̄ of ME is typically quite close to a local minimum
of E, so that the computational effort of the second local search is much
lower than that of the first one.
Accordingly we need now to define ME in such a way that the local
minima of this function possess the desired characteristics. In what
follows two classes of functions, among which ME can be chosen, are
introduced. The first class contains functions of the following form:

$\tilde v_{p,\mu}(r) = \frac{1}{r^{p}} - \frac{2}{r^{p/2}} + \mu r,$   (5)

where p > 0 and μ ≥ 0 are real constants; we note that choosing p = 12 and μ = 0, (5) coincides with the Lennard-Jones pair potential (1).
In Figure 1 one such modified potential is displayed and compared with the Lennard-Jones pair potential.

Figure 1. Comparison between Lennard-Jones and Modified potentials.

The parameters p and μ have important effects. By choosing a smaller exponent p, atoms can be moved more freely; indeed, by decreasing p, the effect of the infinite barrier at r = 0, which prevents atoms from getting too close to each other, is also decreased.
The parameter μ has two important effects.

Local effect: it gives a stronger penalty to distances between atoms greater than 1.0; actually, it also assigns a low penalty to pair distances lower than 1.0, but this is largely overcome by the barrier effect which, as already remarked, prevents atoms from getting too close to each other.

Global effect: it gives a strong penalty to large distances between atoms, e.g. to the diameter of the molecule.
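A sketch of one such modified pair potential; note that the exact functional form here is a reconstruction from the surrounding discussion (a softened repulsive exponent p plus a linear term μr, arranged so that p = 12, μ = 0 recovers the plain L-J pair potential).

```python
def modified_pair(r, p=6.0, mu=0.3):
    """Reconstructed modified pair potential: a softer barrier (exponent p
    instead of 12) lets atoms move more freely, and the linear term mu*r
    penalizes large interatomic distances. p = 12, mu = 0 gives plain L-J."""
    return r ** -p - 2.0 * r ** (-p / 2) + mu * r
```

The modified cluster energy ME is then obtained by summing this pair term over all atom pairs, exactly as in (2).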
In order to test the feasibility of this approach, a series of numerical experiments was performed by running the algorithm 10 000 times for each value of N considered. As these experiments were carried out on Pentium II PCs, we did not perform extensive and generalized trials. In Table II the number of two-phase local searches which led to the putative global optimum is reported. We notice that the percentage of successes is much higher than that of the Pure or
Modified Multistart algorithm. In particular, two important cases are
discussed. The first case is N = 38, which is considered in the literature a particularly difficult one (Doye et al., 1999a). While most putative global optima in this range have a so-called icosahedral structure, the putative global optimum for N = 38 has an FCC (Face Centered Cubic) structure, and many algorithmic approaches, such as the lattice search in (Northby, 1987) and (Xue, 1994), biased towards icosahedral structures, are unable to detect this solution. The new
putative global optimum was first observed only recently in (Pillardy
and Piela, 1995) using a direct approach based on molecular dynamics;
more recently, in (Leary, 1997) the putative global optimum was found using the "big bang" global optimization algorithm at the cost of a large number of local searches on average, while for the basin hopping algorithm proposed in (Wales and Doye, 1997) the expected number of local searches required to first hit this putative global optimum is 2 000. In the new approach, choosing μ = 0.3, the expected number of local searches is reduced to 80, but, with the method described later in this paper, we were able to obtain the incredible hitting rate of 1.79 local searches on the average. It is important to notice that the FCC LJ38
cluster is a truncated octahedron, which is in fact one of the 13 Archimedean solids.
As can be observed from Table II, although quite successful for some configurations, our method fails in several cases; most notably it does not discover, at least in the first 10 000 local searches, the difficult structure of LJ75. This case is the second hard case in this range and it is much harder than the N = 38 case (in order to appreciate the difficulties of both cases see the discussion about multiple funnel landscapes in (Doye et al., 1999b)). As for N = 38, the
structure of the putative global optimum is non icosahedral (actually
the structure is a decahedral one). The putative global optimum has
been detected for the first time in (Doye et al., 1995); by employing
the Basin Hopping algorithm the reported expected number of local
searches to first detect this configuration is over 100 000. Thus our
failure in detecting LJ 75 during the first 10 000 local searches was not
a surprise.
2.3. Adding a penalty to the diameter
Instead of persisting with a higher number of local searches, a modification of (5) was introduced in order to strengthen the global effect.
This led to the following class of modified functions, obtained by adding to the pair potential (5) the penalty term

$\beta \left(\max\{0,\, r - D\}\right)^{2},$   (6)

where β > 0 and where D is an underestimate of the diameter of the cluster. In Figure 2 one such modified potential is displayed and compared with the Lennard-Jones pair potential function.

Figure 2. Comparison between Lennard-Jones and modified potentials with diameter penalization.

We notice that the penalty term (6) has no influence on pairs of atoms close to each other, but strongly penalizes atoms far away from each other. Thus, the new term does not affect the local properties, but strengthens the global ones. The results for this class of
modified functions are reported in Table III. In particular, we note the
following results for the two difficult, non icosahedral cases, obtained
with suitable choices of the parameters.
- For N = 38 the expected number of (two-phase) local searches to first hit the putative global optimum is 5.46, many times faster, in terms of local searches performed, than Big Bang and 366 times faster than Basin Hopping;
- for N = 75 the expected number of local searches is 3 333, while it was 125 000 for the Basin Hopping algorithm: the improvement factor is thus more than 37.
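The diameter penalty can be sketched as follows; the quadratic power is an assumption, since the text only specifies a term built from β·max{0, r − D} that vanishes for pairs closer than D.

```python
def diameter_penalty(r, beta, D):
    """Penalty added to each pair term: zero for pair distances up to the
    diameter underestimate D, growing for larger separations."""
    return beta * max(0.0, r - D) ** 2
```

Because the term is identically zero for r ≤ D, it leaves the local near-neighbor structure untouched and only discourages elongated, non-compact clusters.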
Given these results, a better explanation of the failure of our first approach can be given, supported by the observation of the structure of the optimal decahedral structure (see Figure 3) and icosahedral structure (see Figure 4).
In the best icosahedral structure the number of near neighbors is higher than that observed in the optimal decahedral structure (319 pairs).
properties than the decahedral one. However, this local disadvantage
is compensated by the compactness and sphericity of the decahedral
structure with respect to the icosahedral one: the diameter of the decahedral structure is considerably lower than the diameter of the icosahedral one.
Moreover, thanks to the spherical structure, many pairs of atoms in the
decahedral structure have a distance which is equal to the diameter (10
pairs in total, while the icosahedral structure has only 2). In some sense
we can say that the decahedral structure has better global properties
than the icosahedral one. In view of this comparison, it is now possible to understand the failure for N = 75 when the first class of modified potentials was employed. The linear penalty term μr has, as already remarked, a double effect: a local effect, rewarding solutions with good local properties (like the icosahedral structure), and a global effect, rewarding solutions with good global properties (like the decahedral structure). What appears to happen for N = 75 is that the local effect dominates the global one, thus favoring the icosahedral structure with respect to the decahedral one.
Even though complete computations have been performed only up to N = 80, the new approach has been tested for two other difficult cases, for which the putative global optimum is known to be non-icosahedral.
Figure 3. Putative optimum for LJ75.
Very recently in (Leary and Doye, 1999) a new, non-icosahedral putative global optimum for N = 98 has been detected, displaying a very compact and spherical structure; it is reported that this discovery required "millions of local searches" (Anonymous, 1999); our new approach could detect this solution within less than 10 000 local searches on the average.
In (Wales and Doye, 1997) a non-icosahedral putative global optimum for N = 102 was detected. The new approach was able to detect this solution within less than 10 000 local searches, while the Basin Hopping algorithm could detect the same solution only 3 times out of 500 000 local searches.
In other series of experiments with different parameter settings, sometimes better results were found. As a particularly significant instance, for LJ38 with a suitable parameter setting, the incredible result of 56% successes was recorded in 1000 random trials. In practice this means that, with such a parameter setting, the FCC structure of
Figure 4. Icosahedral optimum for LJ75.
LJ 38 can be observed after a fraction of a second of CPU time on a
normal personal computer.
2.4. Limits of the proposed approach
It is fair to consider the limits of the proposed approach and possible
ways to overcome them. The main limits of the approach can be seen
from the tables. We notice that for N > 60 in many cases the putative
global optimum could not be detected. It is not possible to claim that
our new approach is a general one to solve problems for any value of N .
What can be safely assumed is that it is an extremely successful method
in detecting those structures which differ from the icosahedral one; this
might be particularly important as it is believed that when the number
of atoms N is very large, compact, non-icosahedral structures prevail.
For most tested values of N for which the optimal structure is known to be non-icosahedral our method is much
faster (up to two orders of magnitude) than any other approach found
in the literature. It must be again underlined that, in the literature,
these cases are considered by far the most difficult ones. However, the
approach is not able to detect in an effective way some of the optimal
icosahedral structures. In order to detect these optimal structures it
is possible to incorporate the two-phase local search on which the
new approach is based into some of the approaches proposed in the
literature such as the Basin Hopping algorithm: the rationale behind
this is that the use of our modified-potential, two-phase local optimization appears to actually enlarge the region of attraction of global
optima. In this respect, the choice of imposing a very low penalty on
the diameter should be considered safer, as the effect of this penalty
is usually so strong that only very compact structures are effectively
found (most micro clusters are indeed non compact).
An approach based on a forward procedure followed by a correction procedure has been tried, which enabled us to detect all the solutions
which could not be detected by the previous approach. The forward
procedure is already known in the literature and consists of starting the optimization of LJ_N from a good configuration of LJ_{N-1}, adding first a single atom and then optimizing the overall potential; we implemented a variant which incorporates the two-phase local search in place of the regular local optimization. The correction procedure, starting from some of the optima found during a two-phase optimization, is based
upon the displacement of a few atoms randomly chosen among those
with highest energy contribution into a different position, followed by
the usual two-phase optimization. Details on these procedures can be
found in (Locatelli and Schoen, 2000), where it is shown that all the
configurations up to 110 atoms can be quite efficiently obtained by
means of these more specialized methods.
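A rough sketch of the forward step; the placement rule for the new atom (just outside the current cluster radius) is an assumption, and `two_phase_opt` stands for any local optimizer, e.g. the two-phase search of Section 2.2.

```python
import numpy as np

def forward_step(two_phase_opt, X_prev, rng, spread=1.0):
    """Forward procedure sketch: grow a good (N-1)-atom configuration into
    an N-atom one by adding a single atom near the cluster surface, then
    re-optimizing the whole cluster with the supplied local optimizer."""
    center = X_prev.mean(axis=0)
    radius = max(np.linalg.norm(x - center) for x in X_prev)
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)                       # random outward direction
    new_atom = center + (radius + spread) * d    # place just outside the cluster
    X0 = np.vstack([X_prev, new_atom])
    return two_phase_opt(X0)
```

The correction step would similarly displace a few high-energy atoms of a found optimum and re-run the two-phase optimization from the perturbed configuration.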
Another criticism of the proposed approach is the difficulty of choosing sensible values for the parameters. Again it has to be remarked that there is no general rule to choose a set of parameters which is sufficiently good for a whole range of clusters. The main reason for this is that, as has already been remarked, cluster structures vary abruptly around some magic numbers. However, in an attempt to find general rules, experiments were performed using, for the parameter D, a lower bound obtained from a regression analysis of the diameters of known putative local optima. The results obtained are very encouraging; details on these experiments will appear elsewhere.
3. Conclusions and further research issues
In a recent survey (Doye, 2000b) global optimization algorithms for
the minimization of the Lennard-Jones potential have been thoroughly
analyzed; the author claims that "Any global optimization method 'worth its salt' should be able to find [...] the truncated octahedron at N = 38. Success for the other non-icosahedral global minima would indicate that the method has particular promise". Our method not
only is able to find the two difficult clusters LJ 38 and LJ 75 within a
number of local searches which is roughly two orders of magnitude less
than any previously known method; it is also capable of discovering
the even more difficult cases of LJ 98 and LJ 102 , again within very low
computational times on standard PC hardware. So it can be safely
concluded that the method proposed in this paper performs excitingly
well. An interesting analysis of the reasons for the success of our method
has recently appeared in (Doye, 2000a). However, what we think is
the main result presented in this paper is not an original algorithm,
although a very efficient method has been analyzed and its performance
discussed. The major contribution of this paper is the definition of a
new local search strategy, composed of two phases, the first of which
is built in such a way as to pass over non interesting local minima.
Moreover, this local search promises to be very well suited for general
approaches for the Lennard-Jones and similar problems in molecular
conformation studies; in this paper it was shown how the most difficult
configurations for the Lennard-Jones cluster problem can be discovered
with much greater efficiency by using a simple Multistart algorithm in
which our two-phase local search is used in place of a standard descent
method. Some experiments have already been performed to see if this
two-phase local optimization might be useful when substituted in place
of a standard local search in a more refined method. Our first results
with such methods are extremely encouraging.
In any case, already from the results presented here it is possible to
infer that the penalties and rewards included in the first phase optimization
succeed in driving the optimization close to very good, compact
clusters, avoiding being trapped in local optima which for a regular local
search method display very large regions of attraction. The structures of
optimal Lennard-Jones clusters are so radically different in some cases
that it seems quite unreasonable to look for general purpose methods
capable of discovering all optima in reasonable computer times. Our
approach greatly reduces the computational effort required to discover
what are commonly accepted as the most difficult configurations. It is
hoped that, when applied to larger clusters, this method will succeed in
finding better putative global optima. Of course, in case of much larger
clusters, the problem arises of efficiently computing the potential as
well as the penalized functions and gradients. Using a naive approach,
these computations require O(N 2 ) distances to be evaluated for each
iteration during local optimization; for large values of N this cost might
be prohibitively large. In order to cope with the curse of dimensionality,
we plan in the near future to explore the possibility of parallelizing
energy computations; another possibility, which we did not explore up
to now, might be that of using faster approximate potential calculation,
based on approaches similar to the one described in (Hingst and
Phillips, 1999).
Finally, it is important to stress once again that the results in this
paper give a clear demonstration that a carefully chosen, geometry-inspired
penalty function may be dramatically effective in helping to
discover clusters within special geometrical classes. This might prove to
be of great effectiveness when applied to more complex conformation
problems like, e.g., some protein folding models.
4. Appendix: details on the computational experiments
All of the experiments have been performed either on 266 MHz Pentium
II personal computers or on a Sun Ultra 5 workstation. For
local optimization a standard conjugate gradient method was employed
and, in particular, the implementation described in (Gilbert and No-
cedal, 1992) was used with standard parameter settings. For every
choice of the parameters, we ran 10,000 random trials. Experiments
performed with different parameter settings, like those in tables 1 and
2, were conducted using the same seeds for the random generation
mechanism. That is, common random numbers were used for different
experiments: this way, in particular for those instances in which
finding the global optimum is a rare event, a comparison between
the efficiency of different parameter settings becomes more reliable.
The executable code, compiled both for Pentium PC's and for Sun
Ultra Sparc Stations, is freely available for research purposes at URL
http://globopt.dsi.unifi.it/users/schoen.
Acknowledgements
This research was partly supported by projects MOST and COSO of
the Italian Ministry of University and Research (MURST).
The authors would also like to thank an anonymous referee for a very
careful reading of the paper and for many comments, which we incorporated
almost verbatim and which greatly enhanced its readability.
References
Handbook of Test Problems in Local and Global Optimization. Dordrecht: Kluwer Academic Publishers.
'A Performance Analysis of Appel's Algorithm for Performing Pairwise Calculations in a Many Particle System'.
Minimal hierarchical collision detection

We present a novel bounding volume hierarchy that allows for extremely small data structure sizes while still performing collision detection as fast as other classical hierarchical algorithms in most cases. The hierarchical data structure is a variation of axis-aligned bounding box trees. In addition to being very memory efficient, it can be constructed efficiently and very fast. We also propose a criterion, to be used during the construction of the BV hierarchies, that is more formally established than previous heuristics. The idea of the argument is general and can be applied to other bounding volume hierarchies as well. Furthermore, we describe a general optimization technique that can be applied to most hierarchical collision detection algorithms. Finally, we describe several box overlap tests that exploit the special features of our new BV hierarchy. These are compared experimentally among each other and with the DOP tree using a benchmark suite of real-world CAD data.

1. INTRODUCTION
Fast and exact collision detection of polygonal objects undergoing
rigid motions is at the core of many simulation algorithms
in computer graphics. In particular, virtual reality
applications such as virtual prototyping need exact collision
detection at interactive speed for very complex, arbitrary
"polygon soups".
(VRST'02, November 11-13, 2002, Hong Kong)
It is a fundamental problem of dynamic
simulation of rigid bodies, simulation of natural interaction
with objects, and haptic rendering. It is very important for
a VR system to be able to do all simulations at interactive
frame rates. Otherwise, the feeling of immersion or the
usability of the VR system will be impaired.
The requirements on a collision detection algorithm for
virtual prototyping are: it should run in real-time in all
situations; it should not make any assumptions about the input,
such as convexity, topology, or manifoldness (because
CAD data is usually not "well-behaved" at all); and it should
not make any assumptions or estimations about the position
of moving objects in the future. Finally, polygon numbers
are very high, usually in the range from 10,000 up to 100,000
polygons per object.
The performance of any collision detection based on hierarchical
bounding volumes depends on two factors: (1)
the tightness of the bounding volumes (BVs), which will influence
the number of bounding volume tests, and (2) the
simplicity of the bounding volumes, which determines the
efficiency of an overlap test of a pair of BVs [11]. In our
algorithm, we sacrifice tightness for a fast overlap test.
Our hierarchical data structure is a tree of boxes, which
are axis-aligned in object space. Each leaf encloses exactly
one polygon of the object. Unlike classical AABB trees,
however, the two children of a box cannot be positioned
arbitrarily (hence we call this data structure restricted box-
tree, or just boxtree). This allows for very fast overlap tests
during simultaneous traversal of "tumbled" boxtrees. 1
Because of the restriction we place on the relation between
child and parent box, each node in the tree needs very little
memory, and thus we can build and store hierarchies for
very large models with very little memory. This is important
as the number of polygons that can be rendered at interactive
frame rates seems to increase currently even faster than
Moore's Law would predict.
We also propose a very efficient algorithm for constructing
good boxtrees. This is important in virtual prototyping
because the manufacturing industries want all applications
to compute auxiliary and derived data at startup time,
so that they do not need to be stored in the product data
management system. With our algorithm, boxtrees can be
constructed at load-time of the geometry even for high complexities.
1 An extended version of this paper can be found at http://web.cs.uni-bonn.de/~zach/index.html#publications
In order to guide the top-down construction of bounding
volume hierarchies, a criterion is needed to determine a good
split of the set of polygons associated with each node. In this
paper, we present a more formal argument than previous
heuristics to derive such a criterion, which yields good hierarchies
with respect to collision detection. The idea of the
argument is generally applicable to all hierarchical collision
detection data structures.
The techniques presented in this paper can be applied to
other axis-aligned bounding volumes, such as DOPs, as well.
They would probably yield similar savings in memory and
computational costs, since the motivation for our techniques
is valid if the bounding volume hierarchy is constructed in
a certain "natural" way. We also propose a general optimization
technique that can be applied to most hierarchical
collision detection algorithms.
Our new BV hierarchy can also be used to speed up ray
tracing or occlusion culling within AABBs.
The rest of the paper is organized as follows. Section 2
gives an overview of some of the previous work. Our new
data structure and algorithms are introduced in Section 3,
while Section 4 describes the efficient computation of boxtrees.
Results are presented in Section 5.
2. RELATED WORK
Bounding volume trees seem to be a very efficient data
structure to tackle the problem of collision detection for rigid
bodies.
Hierarchical spatial decomposition and bounding volume
data structures have been known in computational geometry,
geometrical databases, and ray tracing for a long time.
Some of them are k-d trees and octrees [18], R-trees [4], and
OBB trees [2].
For collision detection, sphere trees have been explored
by [12] and [17]. [11] proposed an algorithm for fast overlap
tests of oriented (i.e., non-axis-parallel) bounding boxes
(OBBs). They also showed that an OBB tree converges
more rapidly to the shape of the object than an AABB tree,
but the downside is a much more expensive box-box intersection
test. In addition, the heuristic for construction of
OBB trees as presented in [11] just splits across the median
of the longest side of the OBB.
DOP trees have been applied to collision detection by [14]
and [21]. AABB trees have been studied by [20, 19, 15]. All
of these data structures work quite efficiently in practice,
but their memory usage is considerably higher than needed
by our hierarchy, even for sphere trees.
More recently, hierarchical convex hulls have been proposed
for collision detection and other proximity queries by [...].
While showing excellent performance, the memory footprint
of this data structure is even larger than that of the
previously cited ones. This is even further increased by the
optimization techniques the authors propose for the collision
detection algorithm.
Non-hierarchical approaches try to subdivide object space,
for instance by a voxel grid [16] or Delaunay triangulation
[9]. In particular, non-hierarchical approaches seem to be
more promising for collision detection of deformable objects
[1, 13, 8].
Regarding the name of our data structure, we would like
to point out that [3] presented some theoretical results on
a class of bounding volume hierarchies, which they called
BOXTREE, too. However, their data structure is substantially
different from ours, and they do not provide any run-time
results.
Bounding volume hierarchies are also used in other areas,
such as nearest-neighbor search and ray tracing. For point
k-d trees, [5] have shown that a longest-side cut produces
optimal trees in the context of nearest-neighbor searches.
However, it seems that this rule does not apply to collision
detection.
3. DATA STRUCTURE AND ALGORITHMS
Given two hierarchical BV data structures for two objects,
the following general algorithm scheme can quickly discard
sets of pairs of polygons which cannot intersect:
traverse(A, B):
    if A and B do not overlap then
        return
    if A and B are leaves then
        return intersection of primitives
        enclosed by A and B
    else
        for all children A[i] and B[j] do
            traverse(A[i], B[j])
        end for
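As an illustration, this scheme might be implemented roughly as follows in C++. The node layout and all names here are ours, not the paper's implementation; plain AABB nodes are used for simplicity, with each leaf storing one polygon index as described later in Section 3.1.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Minimal illustrative AABB node; the paper's restricted boxtree
// stores far less data per node (see Section 3.1).
struct Node {
    float lo[3], hi[3];                         // axis-aligned bounds
    const Node* child[2] = {nullptr, nullptr};  // two children, or leaf
    int polygon = -1;                           // leaf: enclosed polygon index
    bool isLeaf() const { return child[0] == nullptr; }
};

static bool overlap(const Node& a, const Node& b) {
    for (int k = 0; k < 3; ++k)
        if (a.hi[k] < b.lo[k] || a.lo[k] > b.hi[k]) return false;
    return true;
}

// Simultaneous traversal: collects candidate polygon pairs whose leaf
// boxes overlap; an exact polygon-polygon test would then follow.
void traverse(const Node& a, const Node& b,
              std::vector<std::pair<int, int>>& out) {
    if (!overlap(a, b)) return;              // prune this subtree pair
    if (a.isLeaf() && b.isLeaf()) {
        out.push_back({a.polygon, b.polygon});
        return;
    }
    if (a.isLeaf()) {                        // descend only into b
        traverse(a, *b.child[0], out);
        traverse(a, *b.child[1], out);
    } else if (b.isLeaf()) {                 // descend only into a
        traverse(*a.child[0], b, out);
        traverse(*a.child[1], b, out);
    } else {                                 // all 4 child pairs
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                traverse(*a.child[i], *b.child[j], out);
    }
}
```

The early `overlap` rejection is what lets the traversal discard whole sets of polygon pairs at once.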
Almost all hierarchical collision detection algorithms implement
this traversal scheme in some way. It makes it possible to
quickly "zoom in" on pairs of close polygons. The characteristics
of different hierarchical collision detection algorithms
lie in the type of BV used, the overlap test for a pair of
nodes, and the algorithm for construction of the BV trees.
In the following, we will first introduce our type of BV,
then we will present several algorithms to check them for
overlap.
3.1 Restricted Boxtrees
In a BV hierarchy, each node has a BV associated that
completely contains the BVs of its children. Usually, the
parent BV is made as tight as possible. In binary AABB
trees, this means that a parent box touches each child box
on 3 sides on average. We have tried to quantify this observation
further: in the AABB tree of three representative
objects, we have measured the empty space between each of
its nodes and their parent nodes. 2 Our experiments show
that for about half of all nodes the volume of empty space
between its bounding box and its parent's bounding box is
only about 10%.
Consequently, it would be a waste of memory (and computations
during collision detection), if we stored a full box
at each node. Therefore, our hierarchy never stores a box
explicitly. Instead, each node stores only one plane that is
perpendicular to one of the three axes, which is the (almost)
least possible amount of data needed to represent a box that
is sufficiently different from its parent box. We store this
plane using one float, representing the distance from one of
the sides of the parent box (see Figure 1). The reason for
this will become clear below. 3 In addition, the axis must
be stored (2 bits), and we need to distinguish between two
cases: whether the node's box lies on the lower side or on
the upper side of the plane (where "upper" and "lower" are
defined by the local coordinate axes of the object).

2 For all nodes, one side was excluded in this calculation, which was the side where the construction performed the split.
3 One could picture the resulting hierarchy as a cross between k-d trees and AABB trees.

Figure 1: Child nodes are obtained from the parent node by splitting off one side of it. The drawing shows the case where a parent has a lower and an upper child that have coplanar splitting planes.
Because each box in such a hierarchy is restricted on most
sides, we call this a restricted boxtree. Obviously, this restriction
makes some nodes in the boxtree hierarchy slightly
less tight than in a regular AABB tree. But, as we will see,
this does not hurt collision detection performance. The observation
we made above for AABB trees is probably also
true for other hierarchies utilizing some kind of axis-aligned
BV (such as DOPs). Section 4 will describe in detail our
algorithm to construct a restricted boxtree.
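For illustration, the per-node data could be packed like this. This is a sketch under our own assumptions about the layout; the paper only states that one float, 2 bits for the axis, and a lower/upper flag are needed, and that a node occupies 9 bytes including the pointer to the (adjacently stored) children.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative packed node: one float (distance of the splitting plane
// from the corresponding side of the parent box) plus one byte holding
// 2 bits for the axis and 1 bit for the lower/upper case, followed by
// a 4-byte index of the first child: 9 bytes per node in total.
#pragma pack(push, 1)
struct PackedNode {
    float    split;       // offset from the parent box's side
    uint8_t  bits;        // bits 0-1: axis (0,1,2); bit 2: 0=lower, 1=upper
    uint32_t firstChild;  // index of first child (leaves flagged elsewhere)
};
#pragma pack(pop)

inline uint8_t packBits(int axis, bool upper) {
    return static_cast<uint8_t>((axis & 3) | (upper ? 4 : 0));
}
inline int  axisOf (const PackedNode& n) { return n.bits & 3; }
inline bool isUpper(const PackedNode& n) { return (n.bits & 4) != 0; }
```

Storing only an index to the first of two adjacent children (rather than two pointers) is what keeps the node this small; the same trick is mentioned in Section 3.3.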
3.2 Box Overlap Tests
In the following, we will describe several variations of an
algorithm to test the overlap of two restricted boxes, exploiting
the special properties of the restricted boxtree. All
algorithms will assume that we are given two boxes A and
B, and that (b_1, b_2, b_3) is the coordinate frame of box B
with respect to A's object space.
3.2.1 Axis alignment
Since axis-aligned boxes offer probably the fastest overlap
test, the idea of our first overlap test is to enclose B by an
axis-aligned box (l, h) ∈ R³ × R³,
and then test this against
A (which is axis-aligned already). In the following, we will
show how this can be done with minimal computational effort
for restricted boxtrees.
We already know that the parent boxes of A and B must
overlap (according to the test proposed here). Notice that
we need to compute only 3 values of (l, h), one along each
axis. The other 3 can be reused from B's parent box. Notice
further that we need to perform only one operation to derive
box A from its parent box.
Assume that B is derived from its parent box by a splitting
plane perpendicular to axis b ∈ {b_1, b_2, b_3}, which
is distance c away from the corresponding upper side of the
parent box (i.e., B is a lower child). We have already computed
the parent's axis-aligned box, which we denote by (l^0, h^0).
Then (l, h) can be computed by (see Figure 2)

    h_x = h^0_x - c*b_x,  l_x = l^0_x    if b_x > 0,
    l_x = l^0_x - c*b_x,  h_x = h^0_x    otherwise.
Similarly, l_x and h_x can be computed if B is an upper child, and
analogously the new values along the other 2 axes.
In addition to saving a lot of computations, we also save
half of the comparisons of the overlap tests of aligned boxes.
Notice that we need to compare only 3 pairs of coordinates
(instead of 6), because the status of the other 3 has not
changed. For example, the comparison along the x axis to
be done is: B and A do not overlap if

    h_x < l^A_x  or  l_x > h^A_x,

where l^A_x, h^A_x are the x-coordinates of box A. Note that the
decision b_x ≷ 0 has been made already when computing h_x
or l_x.
In total, this method needs 12 floating point operations for
checking the overlap of a pair of nodes of a restricted boxtree.
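In code, the per-axis update and the three remaining comparisons might look like this. This is a sketch with our own names: B is assumed to be a lower child, split along axis b (given as the corresponding column of B's frame expressed in A's coordinates) at offset c from the upper side.

```cpp
#include <cassert>

// Conservative overlap test of box A against the lower child of B.
// parentLo/parentHi: the axis-aligned box (in A's frame) enclosing B's
// parent, which is already known to overlap A. Per axis, only the end
// that changed needs to be recomputed and re-compared; the other end
// already passed the comparison at the parent level.
bool overlapLowerChild(const float parentLo[3], const float parentHi[3],
                       const float b[3], float c,
                       const float aLo[3], const float aHi[3]) {
    for (int x = 0; x < 3; ++x) {
        if (b[x] > 0) {
            float hi = parentHi[x] - c * b[x];  // upper end shrinks
            if (hi < aLo[x]) return false;
        } else {
            float lo = parentLo[x] - c * b[x];  // lower end grows
            if (lo > aHi[x]) return false;
        }
    }
    return true;  // boxes may overlap (test is conservative)
}
```

Note that the sign decision on b[x] also selects which single comparison can newly fail, which is why 3 comparisons (instead of 6) suffice.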
3.2.2 Lookup tables
Since the b_i are fixed for the complete traversal of two
boxtrees, and the c's can be only from a range that is known
at the beginning of the traversal, it would seem that the
computations above could possibly be sped up by the use of
a lookup table.
At the beginning of the traversal, we compute 3×3 lookup
tables with 1000 entries each, one table for each component
b_s,x. Then, during traversal, a term of the form c*b_s,x
is replaced by a table lookup T_s,x[c], where
c now is an integer and s is the splitting axis.
See below for results and discussion.
3.2.3 Separating axis test
The separating axis test (SAT) is a different way to look
at linear separability of two convex polytopes: two convex
bodies are disjoint if and only if we can find a separating
plane, which is equivalent to finding an axis such that the
two bodies projected onto that axis (yielding two line intervals)
are disjoint (hence "separating axis"). For boxes, [11]
have shown that it su#ces to consider at most 15 axes.
We can apply this test to the nodes in our restricted boxtree. 4
As previously, we do not need to compute all line
intervals from scratch. Instead, we modify only those ends
of the 15 line intervals that are different from the ones of
the parent.
Let s be one of the candidate separating axes and assume
that B is a lower box (see again Figure 2). As above, for a
lower box we compute only

    h_s = h^0_s - c*(b·s)  if b·s > 0,  or  l_s = l^0_s - c*(b·s)  otherwise

(and analogously for an upper box). Of course, we can precompute
all 3×15 possible products b_i·s_j.
4 In fact, our test by axis alignment can be considered a variant of the separating axis test, where only a special subset of axes is being considered instead of the full set.

Figure 2: Only one value per axis s needs to be recomputed for the overlap test.

Figure 3: A reduced version of the separating axis test seems to be a good compromise between the number of false positives and computational effort.

The advantage of this test is that it is precise, i.e., there
are no false positives. One disadvantage of this test is that
there are 9 axes that are perpendicular neither to A nor to B
(these are the edge-edge cross products).
In total, this method needs 82 FLOPS in the worst case,
which is much higher than the method above, but still less
than 200 FLOPS worst case for OBBs. 5
3.2.4 SAT lite
As we have seen in the section above, the 9 candidate axes
obtained from all possible cross products of the edge orientations
do not lend themselves to some nice optimizations.
Therefore, a natural choice of a subset of the 15 axes would
be the set of 6 axes consisting of the three coordinate axes of
A and B resp. (see Figure 3). This has been proposed by [19]
(who has called it "SAT lite"). Here, we apply the idea to
restricted boxtrees and show that even more computational
effort can be saved.
This variant can also be viewed as the first variant being
executed two times (first using A's then B's coordinate
frame). The total operation count is 24.
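For reference, a generic (non-incremental) version of this 6-axis test for two oriented boxes could be sketched as follows. This is not the restricted-boxtree variant with its reuse of parent intervals; all names are ours.

```cpp
#include <cassert>
#include <cmath>

// "SAT lite": test only the 6 face normals of boxes A and B.
// A is axis-aligned in its own frame with half-extents ea; B has
// half-extents eb, rotation R (columns = B's axes in A's frame),
// and center t in A's frame. Returns false only if a separating
// axis among the 6 is found; false positives are possible.
bool satLiteOverlap(const float ea[3], const float eb[3],
                    const float R[3][3], const float t[3]) {
    // A's three coordinate axes
    for (int i = 0; i < 3; ++i) {
        float rb = eb[0] * std::fabs(R[i][0]) + eb[1] * std::fabs(R[i][1])
                 + eb[2] * std::fabs(R[i][2]);           // B's projected radius
        if (std::fabs(t[i]) > ea[i] + rb) return false;  // separating axis found
    }
    // B's three coordinate axes
    for (int j = 0; j < 3; ++j) {
        float ra = ea[0] * std::fabs(R[0][j]) + ea[1] * std::fabs(R[1][j])
                 + ea[2] * std::fabs(R[2][j]);           // A's projected radius
        float tb = std::fabs(t[0] * R[0][j] + t[1] * R[1][j] + t[2] * R[2][j]);
        if (tb > ra + eb[j]) return false;
    }
    return true;  // possibly overlapping
}
```

The omitted 9 edge-edge cross-product axes are exactly the ones that, as noted above, do not lend themselves to cheap incremental updates.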
3.3 Further Optimizations
In this section, we will describe several techniques to further
improve the speed of restricted boxtrees and hierarchical
collision detection in general.
The first optimization technique is a general one that can
be applied to all algorithms for hierarchical collision detection
if the overlap test of a pair of nodes involves some
node-specific computations that can be performed independently
for each node (as opposed to pair-specific computations).
The idea is to shift the node-specific computations
up one level in the traversal. Assume that the costs of a
node pair overlap test consist of a node-specific part c1 and
an overlap-test-specific part c2. Then, the costs for a pair
(A, B) are

    C(A, B) = 4*(2*c1 + c2) = 8*c1 + 4*c2,

because if (A, B) are found to overlap, all 4 pairs of children
need to be checked. However, we can compute the node-specific
computations already while still visiting (A, B) and
pass them down to the children pairs, which reduces the
costs to

    C(A, B) = 4*c1 + 4*c2.

If the node-specific computations have to be applied only
to one of the two hierarchies (or the node-specific costs of
the other hierarchy are negligible), then the difference is
even larger.
Depending on the actual proportions of c1 and c2, this
can result in dramatic savings. In the case of restricted
boxtrees with our first overlap test (axis alignment), this
technique reduces the number of floating point operations
to 1.5 multiplications, 2 additions, and 5 comparisons!

5 The worst case actually happens for exactly half of all node pairs being visited during a simultaneous traversal.
While this technique might already be implemented in
some collision detection code, it has not been identified as a
general optimization technique for hierarchical collision detection before.
As usual, we keep only one pointer in the parent to the
first of its children. This does save considerable memory
for our restricted boxtrees, because its nodes have a small
memory footprint.
4. CONSTRUCTION OF BOXTREES
The performance of any hierarchical collision detection depends
not only on the traversal algorithm, but also crucially
on the quality of the hierarchy, i.e., the construction algorithm.
Our algorithm pursues a top-down approach, because that
usually produces good hierarchies and allows for very efficient
construction. Other researchers have pursued the
bottom-up approach [3], or an insertion method [10, 4].
4.1 A General Criterion
Any top-down construction of BV hierarchies consists of
two steps: given a set of polygons, it first computes a BV (of
the chosen type) covering the set of polygons, then it splits
the set into a number of subsets (usually two).
Before describing our construction algorithm, we will derive
a general criterion that can guide the splitting process,
such that the hierarchy produced is good in the sense of fast
collision detection.
Let C(A,B) be the expected costs of a node pair (A, B)
under the condition that we have already determined during
collision detection that we need to traverse the hierarchies
further down. Assuming binary trees and unit costs for an
overlap test, this can be expressed by

    C(A, B) = 4 + Σ_{i,j=1..2} P(A_i, B_j) * C(A_i, B_j)    (1)

where A_1, A_2 and B_1, B_2 are the children of A and B, resp., and P(A_i, B_j)
is the probability that this pair must be visited (under the
condition that the pair (A, B) has been visited).
Figure 4: By estimating the volume of the Minkowski sum of two BVs, we can derive an estimate for the cost of the split of a set of polygons associated with a node. (The sketch shows box A, the offset d, and the possible locus of anchor points.)
An optimal construction algorithm would need to expand
(1) down to the leaves:

    C(A, B) = 4 + 4*Σ_{i,j} P(A_i, B_j)
                + 4*Σ_{i,j} Σ_{k,l} P(A_i, B_j) * P(A_ik, B_jl) + ...    (2)

and then find the minimum. Since we are interested in finding
a local criterion, we approximate the cost function by
discarding the terms corresponding to lower levels in the
hierarchy, which gives

    C(A, B) ≈ 4 + 4*Σ_{i,j} P(A_i, B_j).    (3)
Now we will derive an estimate of the probability P(A_1, B_1).
For sake of simplicity, we will assume in the following that
AABBs are used as BVs. However, similar arguments should
hold for all other kinds of convex BVs.
The event of box A intersecting box B is equivalent to
the condition that B's "anchor point" is contained in the
Minkowski sum A ⊕ B. This situation is depicted in Figure 4.
Because B_1 is a child of B, we know that the anchor
point of B_1 must lie somewhere in the Minkowski sum
A ⊕ B ⊕ d, where d is the offset between the anchor points
of B and B_1. Since A_1 lies inside A and B_1 inside B, we
know that A_1 ⊕ B_1 ⊆ A ⊕ B ⊕ d.
So, for arbitrary convex BVs the probability of overlap is

    P(A_1, B_1) ≈ Vol(A_1 ⊕ B_1) / Vol(A ⊕ B ⊕ d).

In the case of AABBs, it is safe to assume that the aspect
ratio of all BVs is bounded by some constant α. Consequently, we can
bound the volume of the Minkowski sum by

    Vol(A ⊕ B) ≤ c_α * (Vol(A) + Vol(B)).

So we can estimate the volume of the Minkowski sum of two
boxes by Vol(A) + Vol(B) (up to a constant factor), yielding

    P(A_1, B_1) ≈ (Vol(A_1) + Vol(B_1)) / (Vol(A) + Vol(B)).

Since Vol(A) + Vol(B) has already been committed by an
earlier step in the recursive construction, Equation 3 can be
minimized only by minimizing the total volume of the children,
Vol(A_1) + Vol(B_1). This is
our criterion for constructing restricted boxtrees.
4.2 The Algorithm
According to the criterion derived above, each recursion
step will try to split the set of polygons so that the cost
function (3) is minimized. This is done by trying to find
a good splitting for each of the three coordinate axes, and
then selecting the best one. Along each axis, we consider
three cases: both subsets form lower boxes with respect to
its parent, both are upper boxes, or one upper and one lower
box.
In each case, we first try to find a good "seed" polygon
for each of the two subsets, which is as close as possible to
the outer border that is perpendicular to the splitting axis.
Then, in a second pass, we consider each polygon in turn,
and assign it to that subset whose volume is increased least.
After good splitting candidates have been obtained for all
three axes, we just pick the one with least total volume of
the subsets.
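As a sketch of the two-pass split described above (simplified by us: polygons are abstracted by their bounding boxes, only a single axis is considered, and the lower/upper case distinction is omitted), the procedure could be written as:

```cpp
#include <cassert>
#include <vector>

struct AABB {
    float lo[3] = { 1e30f,  1e30f,  1e30f};
    float hi[3] = {-1e30f, -1e30f, -1e30f};
    void merge(const AABB& o) {
        for (int k = 0; k < 3; ++k) {
            if (o.lo[k] < lo[k]) lo[k] = o.lo[k];
            if (o.hi[k] > hi[k]) hi[k] = o.hi[k];
        }
    }
    float volume() const {
        float v = 1.f;
        for (int k = 0; k < 3; ++k)
            v *= (hi[k] > lo[k] ? hi[k] - lo[k] : 0.f);
        return v;
    }
};

// Greedy split along one axis: seed each subset with the polygon box
// closest to either outer border, then assign every remaining box to
// the subset whose volume grows least, approximately minimizing
// Vol(F1) + Vol(F2) as the criterion demands.
void greedySplit(const std::vector<AABB>& boxes, int axis,
                 std::vector<int>& left, std::vector<int>& right) {
    int seedL = 0, seedR = 0;
    for (int i = 1; i < (int)boxes.size(); ++i) {
        if (boxes[i].lo[axis] < boxes[seedL].lo[axis]) seedL = i;
        if (boxes[i].hi[axis] > boxes[seedR].hi[axis]) seedR = i;
    }
    AABB bl = boxes[seedL], br = boxes[seedR];
    left = {seedL}; right = {seedR};
    for (int i = 0; i < (int)boxes.size(); ++i) {
        if (i == seedL || i == seedR) continue;
        AABB tl = bl; tl.merge(boxes[i]);    // tentative growth on each side
        AABB tr = br; tr.merge(boxes[i]);
        if (tl.volume() - bl.volume() <= tr.volume() - br.volume()) {
            bl = tl; left.push_back(i);
        } else {
            br = tr; right.push_back(i);
        }
    }
}
```

In the actual algorithm this would be run for all three axes (and the three lower/upper case combinations), keeping the candidate with the least total volume.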
The algorithm and criterion we propose here could also be
applied to construct hierarchies utilizing other kinds of BVs,
such as OBBs, DOPs, and even convex hulls. We suspect
that the volume of AABBs would work fairly well as an
estimate of the volume of the respective BVs.
We have also tried a variant of our algorithm, which considers
only one axis (but all three cases along that axis).
This was always the axis corresponding to the longest side
of the current box. This experiment was motivated by a
recent result for k-d trees [5]. For the result, see Section 5.
In our current implementation, the splitting planes of both
children are coplanar. We have not yet explored the full
potential of allowing perpendicular splitting planes.
Our algorithm has proven to be geometrically robust, since
there is no error propagation. Therefore, a simple epsilon
guard for all comparisons suffices.
4.3 Complexity
Under certain assumptions, the complexity of constructing
a boxtree is in O(n), where n is the number of polygons.
This is supported by experiments (see Section 5).
Our algorithm takes a constant number of passes over all
polygons associated with a node in order to split a set of
polygons F . Each pass is linear in the number of polygons.
BV hierarchy              Bytes   FLOPS
Restr. Boxtree (3.2.1)        9      12
Restr. Boxtree (3.2.4)        9      24
Restr. Boxtree (3.2.3)        9      82
sphere tree                   -       -
AABB tree                    28      90
OBB tree                     64     243
DOP tree                    100     168

Table 1: This table summarizes the amount of memory per node needed by the various BV hierarchies, and the number of floating point operations per node pair in the worst case during collision detection. The number of bytes also includes one pointer to the children. If the optimization technique from Section 3.3 is applied, then all FLOPS counts can be further reduced (about a factor 2 for boxtrees).
Figure 5: This plot shows the build time of restricted boxtrees for various objects (sphere, car, lock, headlight; x-axis: polygons / 1000, y-axis: time).
Every split will produce two subsets F_1, F_2 such that
F = F_1 ∪ F_2.
Let us assume that |F_1|, |F_2| ≤ α*n, with 1/2 ≤ α < 1,
so that the depth d of a boxtree is in O(log n).
Let T(n) be the time needed to build a boxtree for n
polygons. Then T(n) can be obtained by summing the
(linear) splitting costs over all d levels.
5. RESULTS
Memory requirements of different hierarchical data structures
can be compared by calculating the memory footprint
of one node, since a binary tree with n leaves always has
n − 1 inner nodes. Table 1 summarizes the number of bytes per
node for different BV hierarchies.
Table 1 also compares the number of floating point operations
needed for one node-node overlap test by the methods
described above and three other fast hierarchical collision
detection algorithms (OBB, DOP, and sphere tree).
In the following, all results have been obtained on a
Pentium-III with 1 GHz and 512 MB. All algorithms have
been implemented in C++ on top of the scene graph OpenSG.
The compiler was gcc 3.0.4.
For timing the performance of our algorithms, we have
used a set of CAD objects, each of them in varying complexities
(see Figure 6), plus some synthetic objects like
sphere and torus. Benchmarking is performed by the following
scenario: two identical objects are positioned at a
certain distance d = d_start from each other. The distance
is computed between the centers of the bounding boxes of
the two objects; objects are scaled uniformly so they fit in a
cube of size 2³. Then, one of them performs a full tumbling
turn about the z- and the x-axis by a fixed, large number
of small steps (5000). With each step, a collision query is
done, and the average collision detection time for a complete
revolution at that distance is computed. Then, d is
decreased, and a new average collision detection time is computed.
When comparing different algorithms, we summarize
these times by the average time taken over that range of
distances which usually occur in practical applications, such as
physically-based simulation.

Figure 7: A comparison of the different overlap tests (full SAT, SAT lite, axis alignment) for pairs of boxtree nodes shows that the "SAT lite" test seems to offer the best performance (x-axis: polygons / 1000; y-axis: time in milliseconds).
Figure 7 compares the performance of the various box
overlap tests presented in Section 3.2 for one of the CAD
objects. Similar results were obtained for all other objects
in our suite. Although the full separating axis test can determine
the overlap of boxes without false positives, it seems
that the computational effort is not worth it, at least for
axis-aligned boxes ([19] has arrived at a similar conclusion
for general AABB trees). From our experiments it seems
that the "SAT lite" test offers the best performance among the
three variants.
A runtime comparison between our boxtree algorithm and
DOP trees for various objects can be found in Figure 8.
It seems that boxtrees indeed offer very good performance
(while needing much less memory). This result puts restricted
boxtrees in the same league as DOP trees [21] and
OBB trees [11].
For the sake of brevity, we have omitted our experiments assessing the performance of the lookup table approach. It has turned out that lookup tables offer a speedup of at most 8%,
and they were even slower than the non-lookup table version
for the lower polygon complexities because of the setup time
for the tables. The reason might be that floating point and integer arithmetic operations take almost the same number of cycles on current CPUs.
Figure 6: Some of the objects of our test suite. They are (left to right): body of a car, a car headlight, the lock of a car door (and a torus). (Data courtesy of VW and BMW)
As shown in Section 4, boxtrees can be built in O(n). Figure 5 reveals that the constant is very small, too, so that the boxtrees can be constructed at startup time of the application.
6. CONCLUSION
We have proposed a new hierarchical BV data structure
(the restricted boxtree) that needs arguably the least possible
amount of memory among all other BV trees while
performing about as fast as DOP trees. It uses axis-aligned
boxes at the nodes of the tree, but it does not store them
explicitly. Instead, it just stores some "update" information
with each node, so that it uses, for instance, about a factor
7 less memory than OBB trees.
In order to construct such restricted boxtrees, we have
developed a new algorithm that runs in O(n) (n is the number
of polygons) and can process about 20,000 polygons per
second on a 1 GHz Pentium-III.
We also propose a better theoretical foundation for the
criterion that guides the construction algorithm's splitting
procedure. The basic idea can be applied to all BV hierarchies.
A number of algorithms have been developed for fast collision
detection utilizing restricted boxtrees. They gain their
efficiency from the special features of that BV hierarchy.
Benchmarking them has shown that one of them seems to
perform consistently better than the others.
Several optimization techniques have been presented that further increase the performance of our new collision detection
algorithm. The most important one can also be applied
to most other hierarchical collision detection algorithms, and
will significantly improve their performance.
Finally, using a suite of CAD objects, a comparison with DOP trees suggested that restricted boxtrees are about as fast in most cases.
6.1 Future Work
While BV trees work excellently with rigid objects, it is
still an open issue to extend these data structures to accommodate
deforming objects.
Our new BV hierarchy could also be used for other queries
such as ray tracing or occlusion culling. It would be interesting
to evaluate it in those application domains.
As stated above, most BV trees are binary trees. However,
as [6] have observed, other arities might yield better performance. This parameter should be optimized, too, when
constructing boxtrees.
So far, we have approximated the cost equation only to
first order (or rather, first level). By approximating it to a
higher order, one could possibly arrive at a kind of "look-ahead" criterion for the construction algorithm, which could
result in better hierarchies.
7.
--R
tiling for kinetic collision detection.
A survey of ray tracing acceleration techniques.
BOXTREE: A hierarchical representation for surfaces in 3D.
Accurate and fast proximity queries between polyhedra using convex surface decomposition
Fast penetration depth estimation for elastic bodies using deformed distance fields.
Automatic creation of object hierarchies for ray tracing.
A hierarchical structure for rapid interference detection.
Approximating polyhedra with spheres for time-critical collision detection
Collision resolutions in cloth simulation.
Six degrees-of-freedom haptic rendering using voxel sampling
Collision detection for animation using sphere-trees
Computational Geometry: An Introduction
Rapid collision detection by dynamically aligned DOP-trees
--TR
Computational geometry: an introduction
Automatic creation of object hierarchies for ray tracing
A survey of ray tracing acceleration techniques
The R*-tree: an efficient and robust access method for points and rectangles
OBBTree
Six degree-of-freedom haptic rendering using voxel sampling
Efficient Collision Detection Using Bounding Volume Hierarchies of k-DOPs
K-D Trees Are Better when Cut on the Longest Side
Efficient collision detection of complex deformable models using AABB trees
Real-Time Collision Detection and Response for Complex Environments
Rapid Collision Detection by Dynamically Aligned DOP-Trees
| physically-based modeling;hierarchical partitioning;virtual prototyping;hierarchical data structures;r-trees;interference detection |
585788 | Orienting polyhedral parts by pushing. | A common task in automated manufacturing processes is to orient parts prior to assembly. We consider sensorless orientation of an asymmetric polyhedral part by a sequence of push actions, and show that it is possible to move any such part from an unknown initial orientation into a known final orientation if these actions are performed by a jaw consisting of two orthogonal planes. We also show how to compute an orienting sequence of push actions. We propose a three-dimensional generalization of conveyor belts with fences consisting of a sequence of tilted plates with curved tips; each of the plates contains a sequence of fences. We show that it is possible to compute a set-up of plates and fences for any given asymmetric polyhedral part such that the part gets oriented on its descent along plates and fences. | Introduction
An important task in automated assembly is orienting parts prior to assembly.
A part feeder takes in a stream of identical parts in arbitrary orientations and
outputs them in a uniform orientation. Usually, part feeders use data obtained
from some kind of sensing device to accomplish their task. We consider the
problem of sensorless orientation of parts, in which no sensors but only knowledge
of the geometry of the part is used to orient it from an unknown initial
orientation to a unique final orientation. In sensorless manipulation, parts are
positioned or oriented using open-loop actions which rely on passive mechanical
compliance.
A widely used sensorless part feeder in industrial environments is the bowl
feeder [9, 8]. Among the sensorless part feeders considered in the scientific literature are the parallel-jaw gripper [12, 17], the single pushing jaw [3, 18,
19, 21], the conveyor belt with a sequence of (stationary) fences placed along
its sides [10, 22, 26], the conveyor belt with a single rotational fence [1, 2], the
tilting tray [16, 20], and vibratory plates and programmable vector fields [6, 7].
Institute of Information and Computing Sciences, Utrecht University, PO Box 80089, 3508
Utrecht, The Netherlands.
† Supported by the Dutch Organization for Scientific Research (N.W.O.)
Figure 1: The three-dimensional part feeder. Plates with fences mounted to a cylinder.
Scientific literature advocates a risc (Reduced Intricacy in Sensing and Control) approach to designing manipulation systems for factory environments. These systems benefit from their simple design and do not require guru programming skills [11]. The pushing jaw [3, 18, 19, 21] orients a part by an alternating sequence of pushes and jaw reorientations. The objective of sensorless orientation by a pushing jaw is to find a sequence of push directions that will move the part from an arbitrary initial orientation into a single known final orientation. Such a sequence is referred to as a push plan. Goldberg [17] showed
that any polygonal part can be oriented by a sequence of pushes. Chen and Ierardi [12] proved that any polygonal part with n vertices can be oriented by O(n) pushes. They showed that this bound is tight by constructing (pathological) n-gons that require Ω(n) pushes to be oriented. Eppstein [15] observed that for a special class of feeders, polynomial planning algorithms exist. Goldberg gave an algorithm for computing the shortest push plan for a polygon. His algorithm runs in O(n^2) time. Berretty et al. [5] gave an algorithm for computing the shortest sequence of fences over a conveyor belt that orients a part as it slides along the fences. The algorithm runs in O(n^3 log n) time.
The drawback of the majority of the achievements in the field of sensorless orientation is that they only apply to flat, two-dimensional parts, or to parts where the face the part rests on is known beforehand. In this paper, we narrow the gap between industrial feeders and the scientific work on sensorless orientation, by introducing a feeder which orients three-dimensional parts up to rotational symmetry. This is the first device which can be proven to correctly
feed three-dimensional parts. The device we use is a cylinder with plates tilted
toward the interior of the cylinder attached to the side. Across the plates, there
are fences. The part cascades down from plate to plate, and slides along the
fences as it travels down a plate. A picture of the feeder is given in Figure 1.
The goal of this paper is to compute the set-up of plates and fences that is
guaranteed to move a given asymmetric polyhedral part towards a unique final
orientation. Such a set-up, consisting of a sequence of plate slopes, and for each
plate a sequence of fence orientations is referred to as a (plate and fence) design.
The alignment of the part with plates and fences strongly resembles the
alignment of that part with a jaw consisting of two orthogonal planes: finding a design of plates and fences corresponds to finding a constrained sequence of
push directions for the jaw. This relation motivates our study of the behavior
of a part pushed by this jaw. We show that a three-dimensional polyhedral
part P can be oriented up to rotational symmetry by a (particular) sequence of
push actions, or push plan for short, of length O(n 2 ), where n is the number of
vertices of P . Furthermore, we give an O(n 3 log n) time algorithm to compute
such a push plan. We shall show how to transform this three-dimensional push
plan to a three-dimensional design. The resulting design consists of O(n 3 ) plates
and fences, and can be computed in O(n 4 log n) time.
The paper is organized as follows. We first discuss the device we use to orient
parts, introduce the corresponding jaw, and study the behavior of a part being
pushed in Section 2. We then show, in Section 3, that the jaw can orient any
given polyhedral part up to symmetry. In Section 4 we show how to compute a
sequence of push actions to orient a given part. In Section 5 we show how the
results for the generic jaw carry over to the cylinder with plates and fences. In
Section 6, we conclude and pose several open problems.
2 Pushing Parts
A polyhedral part in three-dimensional space has three rotational degrees of
freedom. There are numerous ways to represent orientations and rotations of
objects in the three-dimensional world. We assume that a fixed reference frame is attached to P. We denote the orientation of P relative to this reference frame by (φ, θ, ψ), where (φ, θ) are the polar coordinates of a point on the sphere of directions, and ψ is the roll, a rotation about the ray emanating from the origin and intersecting (φ, θ). See Figure 2 for a picture. This representation will be shown to be appropriate considering the rotational behavior of the part as
it aligns to our feeder. We discuss our feeder in Section 2.1. The rotational
behavior of P in contact with the feeder is discussed in Section 2.2.
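This representation can be made concrete with the standard axis-angle (Rodrigues) construction: the point (φ, θ) on the sphere of directions fixes a unit axis, and the roll ψ rotates about it. The conventions and function names below are our own illustration, not necessarily the paper's exact definitions.

```python
import math

def axis_from_polar(phi, theta):
    """Unit vector for the point (phi, theta) on the sphere of directions
    (phi = azimuth, theta = polar angle; an assumed convention)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def rotation_matrix(phi, theta, psi):
    """Rotation by roll angle psi about the ray through (phi, theta),
    via Rodrigues' rotation formula."""
    x, y, z = axis_from_polar(phi, theta)
    c, s, C = math.cos(psi), math.sin(psi), 1.0 - math.cos(psi)
    return [[c + x * x * C, x * y * C - z * s, x * z * C + y * s],
            [y * x * C + z * s, c + y * y * C, y * z * C - x * s],
            [z * x * C - y * s, z * y * C + x * s, c + z * z * C]]
```

For instance, (φ, θ) = (0, 0) is the z-axis, so a roll of π/2 about it maps the x-axis onto the y-axis.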
2.1 Modeling the feeder
A part in three-dimensional space can have infinitely many orientations. The
device we use to orient this part discretizes the set of possible orientations of the
part. The feeder consists of a series of bent plates along which the part cascades
down. Across a plate, there are fences which brush against the part as it slides
down the plate. A picture of a part sliding down a plate is given in Figure 3(a).
The plate on which the part slides discretizes the first two degrees of freedom of
rotation of the part. A part in alignment with a plate retains one undiscretized
rotational degree of freedom. The rotation of the part is determined up to its
roll, i.e. the rotation about the axis perpendicular to the plate. The fences,
which are mounted across the plates, push the part from the side, and discretize the roll of its rotation. We assume that P first settles on the plate before it reaches the fences which are mounted across the plate, and there is only rotation about the roll axis as the fences brush the part.
Figure 2: The rotation is specified by a point (φ, θ) on the sphere of directions, and a rotation about the vector through this point.
Figure 3: (a) A part sliding down a plate with fences. (b) The same part on the jaw.
We look at the push actions of the plates and the fences in a more general
setting. We generalize the cylindrical feeder by substituting the plate along
which the part slides by a plane on which the part rests. We substitute the
fences by an orthogonal second plane, which pushes the part from the side.
We call the planes the primary and secondary (pushing) plane, respectively. A
picture of the resulting jaw is given in Figure 3(b).
Since the planes can only touch P at its convex hull, we assume without
loss of generality that P is convex. We assume that the center-of-mass of P ,
denoted by c, is inside the interior of P . Analogously to the cylindrical feeder,
we assume that only after P has aligned with the primary plane, we apply the
secondary plane. As the part rests on the primary plane, the secondary plane
pushes P at its orthogonal projection onto the primary plane. We assume that
the feature on which P rests retains contact with the primary plane as the
secondary plane touches P . We assume that for any equilibrium orientation,
which is an orientation for which P rests on the jaw (see Section 2.2 for a
definition of an equilibrium orientation), the projection of P onto the primary
plane has no rotational symmetry. We refer to a part with this property as
being asymmetric.
In order to be able to approach the part from any direction, we make the
(obviously unrealistic) assumption that the part floats in the air, and assume that we can control some kind of gravitational field which attracts the part in a
direction towards the jaw. Also, we assume that the part quasi-statically aligns
with the jaw, i.e. we ignore inertia. Studying this unrealistic situation is useful
for analyzing our feeder later.
In order to be able to determine a sequence of push directions that orients
P , we need to understand the rotational behavior of P when pushed by the jaw.
We analyze this behavior below.
2.2 The push function
A basic action of the jaw consists of directing and applying the jaw. The result of
a basic action for a part in its reference orientation is given by the push function.
The push function maps a push direction of the jaw relative to P in its reference orientation onto the orientation of P after alignment with the jaw. The orientation of P after a basic action for a different initial orientation than its reference orientation is equal to the push function for the push direction plus the offset between the
reference and the actual initial orientation of P .
We dedicate the next three subsections to the discussion of the push function
for P in its reference orientation. As P aligns with the device, we identify two
subsequent stages; namely alignment with the primary plane, and alignment
with the secondary plane.
Since we assume that we apply the secondary plane only after the part has
aligned with the primary pushing plane, we shall separately discuss the rotational
behavior of the part during the two stages. In the next two subsections
we discuss the first stage of alignment. The last subsection is devoted to the
second stage of alignment.
2.2.1 Alignment with the primary plane
The part P will start to rotate when pushed unless the normal to the primary
plane at the point of contact passes through the center-of-mass of P [19]. We
refer to the corresponding direction of the contact normal as an equilibrium
contact direction or orientation.
Figure 4: (a) The radius for contact direction φ. (b) The bold curves show the radius function for a planar part P. The dots depict local minima, the circles local maxima of the radius function.
The contact direction of a supporting plane of P is uniquely defined as the direction of the normal of the plane pointing into P. We study the radius function of the part, in order to explain the alignment of P with the primary plane. The radius function r : [0, 2π) × [0, π] → R maps a direction (φ, θ) onto the distance from c to the supporting plane of P with contact direction (φ, θ).
We first study the planar radius function for a planar part P_p with center-of-mass c_p. The planar radius function easily generalizes to the radius function for a three-dimensional part. The planar radius function r_p : [0, 2π) → R maps a direction φ onto the distance from c_p to the supporting line of P_p with contact direction φ, see Figure 4(a). With the aid of elementary trigonometry, we derive that the distance of c_p to the supporting line of P_p in contact with a fixed vertex v for contact direction φ equals the distance of c_p to the intersection of the ray emanating from c_p in direction φ and the boundary of the disc with diameter (c_p, v). Combining the discs for all vertices of P_p gives a geometric method to derive r_p. The radius r_p(φ) is the distance of c_p to an intersection of the ray emanating from c_p in direction φ and the boundary of a disc through a vertex of P_p. If there are multiple discs intersecting the ray, r_p(φ) equals the maximum of all distances from c_p to the intersection with any disc; a smaller value would not define the distance of a supporting line of P_p, but rather a line intersecting P_p. In conclusion, r_p(φ) equals the distance from c_p to the intersection of the boundary of the union of discs for each vertex of P_p with the ray emanating from c_p in direction φ. In Figure 4(b), we show a planar part with, for each vertex v, the disc with diameter (c_p, v). The boundary of the discs is drawn in bold.
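Since the intersection of the ray with the disc of diameter (c_p, v) lies at distance (v - c_p) · d from c_p (d the unit ray direction), the construction above reduces to a maximum of dot products, i.e. a support-function evaluation. A minimal sketch, with our own notation and with the ray direction taken opposite to the inward contact normal:

```python
import math

def planar_radius(vertices, c, phi):
    """Distance from the center-of-mass c to the supporting line of the
    polygon, for the ray from c pointing in direction phi.  This equals
    the distance from c to the boundary of the union of the discs with
    diameter (c, v), measured along that ray."""
    d = (math.cos(phi), math.sin(phi))
    return max((v[0] - c[0]) * d[0] + (v[1] - c[1]) * d[1] for v in vertices)
```

For the unit square centered at the origin this gives 1 toward an edge and √2 toward a vertex, matching the disc construction.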
The three-dimensional generalization of a disc with diameter (c, v) is a ball with diameter (c, v). The three-dimensional radius function r(φ, θ) is the distance of c to the intersection of the ray emanating from c in direction (φ, θ) with the union of the set of balls for each vertex of P. We call the boundary of the union the radius terrain; it links every contact direction of the primary plane to a unique distance to c.
The radius terrain contains maxima, minima, and saddle points. If the
contact direction of the primary plane corresponds to a local extremum, or
saddle point of the radius function, the part is at an equilibrium orientation,
and the contact direction of the primary plane remains unchanged. If, on the
other hand, the radius function of the part for a contact direction of the primary
plane is not a local extremum, or saddle point, the gravitational force will move
the center-of-mass closer to the primary plane, and the contact direction will
change. We assume that, in this case, the contact direction traces a path of steepest descent in the radius terrain until it reaches an equilibrium contact direction. In general, the part can pivot along different features of the part, as the contact direction follows the path of steepest descent towards an equilibrium.
Different types of contact of the primary plane correspond to different features of the radius terrain. The contact directions of the primary plane with a vertex of P define a (spherical) patch in the terrain, the contact directions of the primary plane with an edge of P define an arc, and the contact direction of the primary plane with a face of P defines a vertex. In Figure 5, we show different types of contacts of P with the primary plane. Figure 5(a) shows an equilibrium contact direction with the primary plane in contact with vertex v1 of P. The contact direction corresponds to a maximum in the radius terrain.
Figure 5(b) shows a vertex contact which is not an equilibrium. Figure 5(c) shows an equilibrium contact direction for edge (v3, v4) of P. Figure 5(d) shows a non-equilibrium contact for edge (v5, v6). In Figure 5(e) we see a degenerate non-equilibrium contact for edge (v7, v8), which actually corresponds to a non-equilibrium vertex contact with the primary plane in contact with vertex v8. The direction of steepest descent in the radius terrain corresponds to a rotation about v8. Figure 5(f) shows a stable equilibrium face contact. The contact direction corresponds to a local minimum of the radius terrain. In Figure 5(g) we see a degenerate face contact which corresponds to an edge contact. Figure 5(h) shows a degenerate face contact which corresponds to a vertex contact for vertex v15.
The alignment of the part to the primary plane is a concatenation of simple
rotations, i.e. a rotation about a single vertex or edge. The path of a simple
rotation in the radius terrain is either a great arc on the ball with diameter (c, v) for a vertex v of P, or a part of the intersection of two balls for two vertices, which is a part of the boundary of a disc. It is easy to see that the arcs in the radius terrain of any of the simple rotations project to great arcs on the sphere of directions. Hence, during a simple rotation, the
contact direction of the primary plane traces a great arc on the sphere of contact
directions. During each single stage of alignment, we assume that there is no
(instantaneous) rotation about the roll axis.
Figure 5: Different contacts for the primary plane, with a projection of c onto the primary plane. The primary plane is assumed at the bottom of the pictures.
2.2.2 Computation of the roll after alignment with the primary
plane
The mapping of Section 2.2.1 only tells us which feature of the part will be
in contact with the primary plane after rotation. It leaves the change in the
part's roll out of consideration. Nevertheless, we need to keep track of the roll
as P aligns with the primary plane. We remember that the alignment with the
primary plane is a concatenation of simple rotations each corresponding to a
great arc on the sphere of contact directions of the primary plane.
With the aid of spherical trigonometry, it is possible to compute the change
in roll caused by a reorientation of the primary plane (prior to pushing). Subsequently, we can compute the change in roll for a simple rotation of P. Since the alignment of the part can be regarded as a concatenation of such simple rotations, we obtain the final roll by repeatedly applying the change in the roll
of P for each simple rotation in the alignment to the primary plane.
2.2.3 Alignment with the secondary plane
Let us assume that P is in equilibrium contact with the primary plane. The
next step in the application of the jaw is a push operation of the secondary
(orthogonal) plane. The push action by the secondary plane changes the orientation
of the projection of P onto the primary plane. The application of the
secondary plane to the part can, therefore, be regarded as a push operation on
the two-dimensional orthogonal projection of P onto the primary plane.
The planar push function p_proj : [0, 2π) → [0, 2π) for the planar projection P_proj of P links every orientation φ to the orientation p_proj(φ) in which the part P_proj settles after being pushed by a jaw with initial contact direction φ (relative to the frame attached to P_proj). The rotation of the part due to pushing causes the contact direction of the jaw to change. The final orientation p_proj(φ) of the part is the contact direction of the jaw after the part has settled. The equilibrium push directions are the fixed points of p_proj.
Summarizing, we can compute the orientation of P after application of the jaw. In the next section, we shall show that we can always orient P up to symmetry in the push function by means of applications of the jaw.
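The planar push function can be sketched numerically: the contact direction performs steepest descent on the planar radius function until it reaches a local minimum, which is a stable edge contact. This discretized version (grid resolution and names ours) illustrates the definition; it is not the paper's exact algorithm.

```python
import math

def planar_push(vertices, c, phi, samples=3600):
    """Discretized planar push function: descend the radius function
    r(a) = max_v (v - c) . (cos a, sin a) from contact direction phi
    until a grid-local minimum (stable equilibrium) is reached."""
    def r(a):
        d = (math.cos(a), math.sin(a))
        return max((v[0] - c[0]) * d[0] + (v[1] - c[1]) * d[1]
                   for v in vertices)
    step = 2.0 * math.pi / samples
    a = phi
    while True:
        here, left, right = r(a), r(a - step), r(a + step)
        if here <= left and here <= right:
            return a % (2.0 * math.pi)  # equilibrium push direction
        a = a - step if left < right else a + step  # steepest-descent step
```

Each move strictly decreases r on the angular grid, so the descent terminates at an equilibrium push direction.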
3 Orienting a polyhedral part
In this section we will show that any polyhedral part P can be oriented up
to rotational symmetry in the push functions of the projections of P onto the
primary plane. The part P has at most O(n) equilibria with respect to the
primary plane, and any projection of P onto the primary plane has O(n) vertices.
Hence, the total number of orientations of P compliant to the jaw is O(n^2).
Figure 6 shows an example of a part with Θ(n^2) stable orientations.
Lemma 1 A polyhedral part with n vertices has O(n^2) stable orientations. This bound is tight in the worst case.
Figure 6: A part with Θ(n^2) stable orientations. (b) Side view.
From the previous section, we know that the pushing jaw rotates P towards
one of its equilibrium orientations with respect to the primary plane, and the
secondary plane. Let us, for a moment, assume that the contact direction (φ, θ) of the primary plane is known.
We can now redirect and apply the secondary plane. We remember that we assume that applying the secondary plane has no influence on the contact direction of the primary plane. Consequently, the rotations of the part, due to
applications of the secondary pushing plane, are fully captured by the planar
push function of the projection of the part onto the primary plane. Chen and
Ierardi [12] show that a two-dimensional part with m vertices can be oriented
up to symmetry by means of a planar push plan of length O(m). Consequently,
we can orient P in equilibrium contact with the primary plane up to symmetry
in the projection of the part onto the primary plane by O(n) applications of the
secondary plane.
Lemma 2 Let P be an asymmetric polyhedral part with n vertices. There exists a push plan of length O(n) that puts P into a given orientation (φ, θ, ψ) from any initial orientation (φ, θ, ψ′).
We call the operation which orients P for a single equilibrium contact direction
of the primary plane (φ, θ) CollideRollsSequence(φ, θ). We can eliminate the uncertainty in the roll for any equilibrium contact direction of the primary plane. The initialization of the push plan that orients P reduces the number of possible orientations to O(n) by a concatenation of CollideRollsSequence for all equilibrium contact directions of P. Lemma 3 will
give us a push operation to further reduce the number of possible orientations.
Figure 7: Two orientations on the sphere of directions. Their equator is dashed. A desired reorientation of the primary plane is dotted.
Lemma 3 For every pair of orientations (φ, θ, ψ) and (φ′, θ′, ψ′) of a polyhedral part there exist two antipodal reorientations of the primary plane which map these orientations onto orientations (φ̃, θ̃, ψ̃) and (φ̃, θ̃, ψ̃′) with a common contact direction (φ̃, θ̃).
Proof: We will prove that there is a reorientation of the primary plane for which the resulting contact directions of the primary plane for P in initial orientations (φ, θ, ψ) and (φ′, θ′, ψ′) are the same. We focus on the first two parameters of the orientations, i.e., the points (φ, θ) and (φ′, θ′) on the sphere of directions. We want to find a push direction that maps these two points onto another point; see Figure 7. Let E denote the great circle consisting of all points equidistant to (φ, θ) and (φ′, θ′); E divides the sphere of directions into a hemisphere containing (φ, θ) and a hemisphere containing (φ′, θ′). A reorientation of the primary plane maps (φ, θ) and (φ′, θ′) onto contact directions which are equidistant to these original contact directions. Let r denote the ray emanating from (φ, θ) in the direction of ψ, and r′ denote the ray emanating from (φ′, θ′) in the direction of ψ′. Points on the rays (with equal distance to the origins) correspond to a reorientation of the primary pushing plane. Both rays intersect E. We aim for a push direction such that the jaw touches P at an orientation in E. The roll component of the push direction changes the direction of the rays emanating from (φ, θ) and (φ′, θ′). We will show that there is a roll component such that for both orientations the push direction touches the part at the same point. If their first intersection with E is in the same point, we have found a push direction which maps both orientations onto the same face. Since the orientations are in different hemispheres, increasing the roll component will move the intersections of the rays with E in opposite directions along E. This implies that there are two antipodal reorientations of the primary plane where the intersections must pass. These correspond to push directions which map (φ, θ) and (φ′, θ′) onto the same point. □
We call the basic operation which collides two orientations onto the same
equilibrium for the primary plane CollidePrimaryAction. Combining Lemmas 2 and 3 leads to a construction of a push plan for a polyhedral part. The following algorithm orients a polyhedral part without symmetry in the planar projections of contact directions of the primary plane.
OrientPolyhedron(P)
  Σ ← the set of possible orientations after initialization
  while |Σ| > 1
    do pick two orientations σ, σ′ ∈ Σ
       append CollidePrimaryAction(σ, σ′) to the plan      (Lemma 3)
       for all resulting contact directions (φ̃, θ̃)
         do append CollideRollsSequence(φ̃, θ̃) to the plan      (Lemma 2)
       update Σ
The number of pushes used by this algorithm sums up to O(n^2). Correctness follows directly from Lemmas 2 and 3.
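The loop structure of this plan construction can be sketched abstractly as follows; the two callbacks stand in for CollidePrimaryAction (Lemma 3) and CollideRollsSequence (Lemma 2), whose geometric computation the paper describes separately, and all names are ours.

```python
def orient_polyhedron(orientations, collide_primary_action, collide_rolls_sequence):
    """While more than one candidate orientation remains, one primary-plane
    reorientation collides two candidates onto the same contact direction
    (Lemma 3), and a roll-collapsing push sequence is appended for each
    resulting contact direction (Lemma 2).
    collide_primary_action(a, b, orientations) -> (action, new_orientations)
    collide_rolls_sequence(contact_dir) -> list of push actions"""
    plan = []
    while len(orientations) > 1:
        a, b = orientations[0], orientations[1]
        action, orientations = collide_primary_action(a, b, orientations)
        plan.append(action)  # each round removes one candidate orientation
        for contact_dir in {(o[0], o[1]) for o in orientations}:
            plan.extend(collide_rolls_sequence(contact_dir))
    return plan, orientations[0]
```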
Theorem 4 Any asymmetric polyhedral part can be oriented by O(n^2) push operations by two orthogonal planes.
4 Computing a Push Plan
In this section, we present an algorithm for computing a push plan for a three-dimensional
part. We know from Section 3 that such a plan always exists for
asymmetric parts. The push plans of Section 3 consist of two stages. During
the initialization stage of the algorithm we reduce the number of possible orientations
to O(n) different equilibrium contact directions of the primary plane with a unique roll each. The initialization consists of O(n^2) applications of the secondary plane. In the second stage, we run algorithm OrientPolyhedron, which repeatedly decreases the number of possible orientations of the part by one, by means of a single application of the primary plane followed by O(n) applications of the secondary plane, until one possible orientation remains. Summing up, a push plan of Section 3 corresponds to O(n) applications of the primary plane, and O(n^2) applications of the secondary plane.
We maintain the O(n) different orientations which remain after the initialization
stage in an array. During the execution of the second stage, we update
the entries of the array. Hence, for each application of either of the two planes
of the jaw, we compute for O(n) orientations of the array the orientation after
application of the jaw.
In order to compute the orientation of P after application of the primary
plane, we need to be able to compute the path of steepest descent in the radius
terrain. In order to determine the orientation of P after application of the
secondary plane, we need to be able to compute the planar projection of P onto
the primary plane for stable orientations of P , and we need to compute planar
push plans.
We start by discussing the computation of the path of steepest descent in the radius terrain from the initial contact direction of the primary plane. The path is a
concatenation of great arcs on the sphere of contact directions of the primary
plane. Lemma 5 bounds the complexity of the radius terrain.
Lemma 5 Let P be a convex polyhedral part with n vertices. The complexity
of the radius terrain of P is O(n).
Proof: There exist bijections between the faces of P and the vertices of the
radius terrain, the vertices of P and the patches of the radius terrain, and
the edges of P and the edges of the radius terrain. Hence, the combinatorial
complexity of the radius terrain equals the combinatorial complexity of P , which
is O(n).
In a general piecewise-linear terrain with combinatorial complexity n, a path
of steepest descent can have superlinear complexity. We shall show,
however, that a path of steepest descent in the radius terrain has complexity
O(n).
Lemma 6 Let P be a convex polyhedral part. A path of steepest descent in the
radius terrain of P has combinatorial complexity O(n).
Proof: A steepest-descent path in the radius terrain consists of simple subpaths
connecting vertices and points on arcs. Thus, the complexity of the path
depends on the number of visits of vertices and crossings of arcs. We prove the
theorem by showing that the number of visits of a single vertex, and the number
of crossings of a single arc is bounded by a constant.
A vertex of the terrain (which corresponds to a face contact) can be visited
only once. If the path crosses a vertex, the radius must be strictly decreasing.
Hence the path will never reach the height of the vertex again.
We shall show that the path crosses an arc of the terrain (which corresponds
to an edge contact) from one patch to a neighboring patch at most once.
Let us assume that the path crosses the arc in the terrain which corresponds
to a contact of the primary plane with edge (v_1, v_2) of the part, and that
the path in the terrain first travels through the patch of v_1, and then through
the patch of v_2. In this case, the part first rotates about v_1, until the edge
reaches the primary plane. Instead of rotating about (v_1, v_2), the part
subsequently rotates about v_2, and the primary plane immediately breaks contact
with v_1. Since we assume that the center-of-mass follows the path of steepest
descent in the radius terrain, the primary plane can only break contact with v_1
if the distance of v_1 to c is greater than the distance of v_2 to c. See Figure 8.
Figure 8: The path of steepest descent, crossing an edge of the radius terrain.
The distance from v_1 to c is greater than the distance from v_2 to c.
Hence for each arc crossing, the part pivots on a vertex with smaller distance
to c, and consequently crosses each arc at most once.
Since the number of arcs and vertices of the radius terrain is bounded by
O(n), the proof follows.
In order to compute the path of steepest descent, we need not compute the
radius terrain. It suffices to use a decomposition of the sphere of contact
directions (whose cells correspond to primary plane-vertex contacts),
together with the positions of the corresponding vertices on the sphere of directions.
We assume that P is given as a doubly-connected edge list. A doubly-connected
edge list consists of three arrays which contain the vertices, the edges,
and the faces of the part. We refer the reader to [24, 14] for a detailed description,
and to [4] for a discussion on the implementation of the doubly-connected
edge list to represent polyhedra. For our purposes, it suffices to know that the
doubly-connected edge list allows us to answer all relevant adjacency queries in
constant time.
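To make the constant-time adjacency claim concrete, here is a minimal half-edge sketch in Python (our illustration; the names HalfEdge and edges_around_vertex are not taken from the cited implementations). Walking around a vertex uses only the twin and next pointers, one constant-time step per incident edge.

```python
# Minimal half-edge (doubly-connected edge list) sketch.
# A full DCEL also stores vertex and face records; they are omitted here.

class HalfEdge:
    __slots__ = ("origin", "twin", "next", "prev")

    def __init__(self, origin):
        self.origin = origin  # index of the vertex this half-edge leaves
        self.twin = None      # oppositely oriented half-edge
        self.next = None      # next half-edge along the same face boundary
        self.prev = None      # previous half-edge along the same face boundary

def edges_around_vertex(h):
    """All half-edges leaving h.origin; each step is O(1) pointer chasing."""
    out = []
    e = h
    while True:
        out.append(e)
        e = e.twin.next  # rotate to the next outgoing half-edge
        if e is h:
            break
    return out
```

Enumerating the incident edges of a vertex of degree d thus takes O(d) time, and a single adjacency query (e.g., "next edge around this vertex") takes O(1).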
We compute the decomposition of the sphere of contact directions from the
doubly-connected edge list of P. We recall that the cells of the arrangement
on the sphere of contact directions correspond to plane-vertex contacts. For
contact directions on the boundary of a cell, the primary plane is in contact
with at least two vertices, and thus with an edge or face of P.
We use the aforementioned correspondence between the part and the arrangement
to efficiently compute the latter from the former. For each edge of
the part, we add an edge to the arrangement. The vertices of the edge correspond
to the contact directions of the primary plane at the faces of the part
neighboring the edge. These contact directions are computed in constant time
from the edge and a third vertex on the boundary of the face. The connectivity
captured by the representation of the part easily carries over to the connectivity
of the arrangement. Hence, the computation of the doubly-connected edge
list representing the arrangement on the sphere of directions can be carried out
in O(n) time. With each cell of the arrangement, we store the corresponding
vertex of the part. Figure 9(a) shows the decomposition of the sphere of contact
directions for a cubic part.
Figure 9: (a) The decomposition of the sphere of directions (solid), together with
the projection of the part (dotted). (b) The face for which the primary plane
is in contact with v_1. The arrows show the contact directions of the primary
plane, starting at the marked contact direction, until the part settles on face f_1.
In the example each face, each edge, and each vertex of the cube has an
equilibrium contact direction of the primary plane. As a consequence, any
contact direction which corresponds to a face contact is an equilibrium contact
direction, and the pivoting stops after a constant number of steps. In Figure 9(b),
we show the great arcs on the sphere of directions which correspond to the simple
rotations of the alignment of the part to the primary plane. First, the part
rotates about vertex v_1, until edge e_1 reaches the primary plane. The part
continues to rotate about edge e_1, until it finally reaches face f_1.
In order to determine the orientation for a given initial contact direction,
we need to determine the contact vertex. In other words, we need to determine
which cell of the arrangement corresponds to the contact direction. It is not
hard to see that this can be accomplished in linear time, by walking through
the arrangement.
Lemma 7 Let P be a polyhedral part with n vertices in its reference orientation,
and let a push direction of the primary plane be given. We can determine the
orientation of the part after application of the primary plane in O(n)
time.
Computing an orthogonal projection of P onto the primary plane can be
carried out in linear time per equilibrium by means of an algorithm of Ponce
et al. [23], which first finds the leftmost vertex of the projection through linear
programming, and then traces the boundary of the projection.
The planar push function of a given projection can be computed in O(n)
time by checking its vertices. Querying the planar push function can be carried
out in O(log n) time by performing a binary search on the initial orientation.
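The O(log n) query can be sketched as a binary search, under the assumption that the planar push function is stored as a sorted array of breakpoint orientations together with the settled orientation for each interval; this representation (the breaks and settle arrays below) is an illustrative assumption, not necessarily the paper's data structure.

```python
# Querying a piecewise-constant planar push function by binary search.
# 'breaks' holds the sorted orientations at which the push outcome changes;
# 'settle[i]' is the final orientation for initial orientations in
# [breaks[i], breaks[i+1]).  Indices wrap cyclically, as orientations do.
import bisect

def push_outcome(breaks, settle, theta):
    i = bisect.bisect_right(breaks, theta) - 1
    return settle[i % len(settle)]
```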
Lemma 8 Let P be a polyhedral part with n vertices in an equilibrium orientation,
and let a push direction of the secondary plane be given. We can determine
the orientation of the part after application of the secondary plane in
O(log n) time.
For almost all parts, the computation of a planar push plan of linear length
can be done in O(n) time using an algorithm due to Chen and Ierardi [12]. Chen
and Ierardi show that there are pathological parts for which they only give an
O(n²)-time algorithm for computing a push plan. So, the best upper bound on
the running time to compute CollideRollsSequence for O(n) projections
of P is O(n³). It remains open whether a polyhedral part can actually have
pathological projections, or whether the bound on the running time can be improved in
another way. Computing the push direction of the primary plane which maps
two different faces onto the same equilibrium with respect to the primary plane
(CollidePrimaryAction) can be done in constant time.
Summarizing, the total cost of computing the reorientations of the jaw is
dominated by the cost of maintaining the O(n) possible orientations
of P: this cost is the sum of O(n²) updates for applications of the secondary plane, which
take O(n log n) time each, and O(n) updates for applications of the primary
plane, which take O(n²) maintenance time each. Theorem 9 gives the main
result of this section.
Theorem 9 A push plan of length O(n²) for an asymmetric polyhedral part with
n vertices can be computed in O(n³ log n) time.
5 Plates with fences
In this section we will use the results from the preceding sections to determine a
design for the feeder consisting of tilted plates with curved tips, each carrying a
sequence of fences. The motion of the part effectively turns the role of the plates
into the role of the primary pushing plane, and the role of the fences into the
role of the secondary pushing plane. We assume that the part quasi-statically
aligns to the next plate, similar to the alignment with the primary plane of the
generic jaw. Also, we assume that the contact direction of the plate does not
change as the fences brush the part, i.e., the part does not tumble over.
The fact that the direction of the push, i.e., the normal at the fence, must
have a non-zero component in the direction opposite to the motion of the part,
which is pulled down by gravity, imposes a restriction on successive push directions
of the secondary plane. Fence design can be regarded as finding a
constrained sequence of push directions. The additional constraints make fence
design in the plane considerably more difficult than orientation by a pushing
jaw.
As the part moves towards the end of a plate, the curved end of the plate
causes the feature on which the part rests to align with the vertical axis, while
retaining the roll of the part. When the part leaves the plate, the next plate can
only push the part from below. This places restrictions on the possible reorientations
of the primary plane, in the model with the generic three-dimensional
jaw.

Figure 10: The next plate can only touch the lower half of the part after it leaves
a plate.
From Figure 10 it follows that the reorientations of the primary plane that are
reachable when the last fence of the last plate was a left fence are restricted to one
half of the sphere of directions; similarly, for a last right fence, the reorientation
of the primary plane is restricted to the complementary half.
Berretty et al. [5] showed that it is possible to orient a planar polygonal
part (hence a polyhedral part resting on a fixed face) using O(n²) fences. The
optimal fence design can be computed in O(n³ log n) time.
The gravitational force restricts our possible orientations of the primary
plane in the general framework. Fortunately, Lemma 3 gives us two antipodal
possible reorientations of the primary plane. It is not hard to see that one of
these reorientations is in the reachable hemisphere of reorientations between two
plates. This implies that we can still find a fence and plate design; the resulting
bounds are stated in Theorem 10.
Theorem 10 An asymmetric polyhedral part can be oriented using O(n³) fences
and plates. We can compute the design in O(n⁴ log n) time.
6 Conclusion
We have shown that sensorless orientation of an asymmetric polyhedral part
by a sequence of push actions by a jaw consisting of two orthogonal planes is
possible. We have shown that the length of the sequence of actions is O(n²) for
parts with n vertices, and that such a sequence can be determined in O(n³ log n)
time.
We have proposed a three-dimensional generalization of conveyor belts with
fences [5]. This generalization consists of a sequence of tilted plates with curved
tips, each carrying a sequence of fences. A part slides along the fences of a
plate to reach the curved tip, where it slides off onto the next plate. Under the
assumptions that the motion of the part between two plates is quasi-static and
that a part does not tumble from one face onto another during its slide along
one plate, we can compute a set-up of O(n³) plates and fences in O(n⁴ log n)
time that will orient a given part with n vertices. (As in the two-dimensional
instance of fence design, the computation of such a set-up boils down to the
computation of a constrained sequence of push actions.)
Our aim in this paper has been to gain insight into the complexity of
sensorless orientation of three-dimensional parts rather than to create a perfect
model of the behaviour of pushed (or sliding) and falling parts. Nevertheless,
we can relax some of the assumptions in this paper. First of all, in a practical
setting, a part which does not rest on a stable face, but on a vertex or edge
instead, will most likely change its contact direction with the primary plane if it
is pushed from the side. Hence, we want to restrict ourselves to orientations of P
which have stable equilibrium contact directions of the primary plane. After the
first application of the jaw, it might be the case that P is in one of its unstable
rather than stable equilibria. A sufficiently small reorientation of the jaw in an
appropriate direction, followed by a second application of the jaw, will move
the part towards a stable orientation though, allowing us to start from stable
orientations only.
The computation of the reorientation of the primary plane results in two candidate
reorientations. Although extremely unlikely, these reorientations could
both correspond to unstable equilibrium contact directions. As mentioned, in a
practical situation one wants to avoid such push directions. It is an open question
whether there exist parts which cannot be oriented without such unstable
contact directions.
Our approach works for parts which have asymmetric projections onto the
primary plane for all equilibrium contact directions of the primary plane. It is an
open problem to exactly classify the parts that cannot be fed by the jaw.
It is interesting to see how the ideas from this paper can be extended to other
feeders, such as the parallel-jaw gripper, which first orients a three-dimensional
part in the plane, and subsequently drops it into another orientation. Rao et
al. [25] show how to compute contact directions for a parallel-jaw gripper to
move a three-dimensional part from a known orientation to another one. We
want to see if this method generalizes to sensorless reorientation.
The algorithm of this paper generates push plans of quadratic length. It
remains to be shown whether this bound is asymptotically tight. Also, it is
interesting to find an algorithm which computes the shortest push plan that
orients a given part. Such an algorithm would need to decompose the space of
possible reorientations of the jaw for P in its reference orientation into regions
which map onto different final orientations of P. This requires a proper algebraic
formulation of the push function, and a costly computation of the corresponding
arrangement in the space of push directions. In contrast to the planar
push function, the three-dimensional push function is not a monotone transfer
function. Eppstein [15] showed that, in general, finding a shortest plan is NP-complete.
It is an open question whether we can find an algorithm for computing
a shortest plan for the generic jaw that runs in polynomial time.
--R
Sensorless parts feeding with a one joint robot.
Sensorless parts feeding with a one joint manipulator.
Posing polygonal objects in the plane by pushing.
DCEL: A polyhedral database and programming environment
Design for Assembly - A Designer's Handbook
Automatic Assembly.
Optimal curved fences for part alignment on a belt.
Risc for industrial robotics: Recent results and open problems.
The complexity of oblivious plans for orienting and distinguishing polygonal parts.
The complexity of rivers in triangulated terrains.
Computational Geometry: Algorithms and Applications.
Reset sequences for monotonic automata.
An exploration of sensorless manipulation
Orienting polygonal parts without sensors.
Stable pushing: Mechanics
Manipulator grasping and pushing operations.
An algorithmic approach to the automated design of parts orienters.
The motion of a pushed sliding workpiece
Planning robotic manipulation strategies for workpieces that slide.
On computing four-finger equilibrium and force-closure grasps of polyhedral objects
Computational Geometry: An Introduction
Complete algorithms for reorienting polyhedral parts using a pivoting gripper.
A complete algorithm for designing passive fences to orient parts.
--TR
Computational geometry: an introduction
Robot hands and the mechanics of manipulation
Automatic grasp planning in the presence of uncertainty
Reset sequences for monotonic automata
Stable pushing
On computing four-finger equilibrium and force-closure grasps of polyhedral objects
Computational geometry
On fence design and the complexity of push plans for orienting parts
Computing fence designs for orienting parts
Algorithms for fence design
The Complexity of Rivers in Triangulated Terrains
Sorting convex polygonal parts without sensors on a conveyor | robotics;sensorless part feeding;planning |
585789 | Polygon decomposition for efficient construction of Minkowski sums. | Several algorithms for computing the Minkowski sum of two polygons in the plane begin by decomposing each polygon into convex subpolygons. We examine different methods for decomposing polygons by their suitability for efficient construction of Minkowski sums. We study and experiment with various well-known decompositions as well as with several new decomposition schemes. We report on our experiments with various decompositions and different input polygons. Among our findings are that in general: (i) triangulations are too costly, (ii) what constitutes a good decomposition for one of the input polygons depends on the other input polygon - consequently, we develop a procedure for simultaneously decomposing the two polygons such that a "mixed" objective function is minimized, (iii) there are optimal decomposition algorithms that significantly expedite the Minkowski-sum computation, but the decomposition itself is expensive to compute - in such cases simple heuristics that approximate the optimal decomposition perform very well. | Introduction
Given two sets P and Q in IR², their Minkowski sum (or vector sum), denoted by P ⊕ Q,
is the set {p + q | p ∈ P, q ∈ Q}. Minkowski sums are used in a wide range of applications,
including robot motion planning [26], assembly planning [16], computer-aided design and
P.A. is supported by Army Research Office MURI grant DAAH04-96-1-0013, by a Sloan fellowship, by
NSF grants EIA-9870724, EIA-997287, and CCR-9732787, and by a grant from the U.S.-Israeli Binational
Science Foundation. D.H. and E.F. have been supported in part by ESPRIT IV LTR Projects No. 21957
(CGAL) and No. 28155 (GALIA), and by a Franco-Israeli research grant (monitored by AFIRST/France
and The Israeli Ministry of Science). D.H. has also been supported by a grant from the U.S.-Israeli Binational
Science Foundation, by The Israel Science Foundation founded by the Israel Academy of Sciences
and Humanities (Center for Geometric Computing and its Applications), and by the Hermann Minkowski-Minerva
Center for Geometry at Tel Aviv University.
A preliminary version of this paper appeared in the proceedings of the 8th European Symposium on Algorithms
(ESA 2000).
y Department of Computer Science, Duke University, Durham, NC 27708-0129. pankaj@cs.duke.edu.
z Department of Computer Science, Tel Aviv University, Tel-Aviv 69978, Israel. flato@post.tau.ac.il.
x Department of Computer Science, Tel Aviv University, Tel-Aviv 69978, Israel. danha@post.tau.ac.il.
Figure
1: Robot and obstacles: a reference point is rigidly attached to the robot on the left-hand
side. The configuration space obstacles and a free translational path for the robot on the
right-hand side.
manufacturing (CAD/CAM) [9], and marker making (cutting parts from stock material).
Consider for example an obstacle P and a robot Q that moves by translation. We can
choose a reference point r rigidly attached to Q and suppose that Q is placed such that the
reference point coincides with the origin. If we let Q' denote a copy of Q rotated by 180°,
then P ⊕ Q' is the locus of placements of the point r where P ∩ Q ≠ ∅. In the study of
motion planning this sum is called a configuration space obstacle, because Q collides with P
when translated along a path exactly when the point r, moved along that path, intersects P ⊕ Q'.
See Figure 1.
Motivated by these applications, there has been much work on obtaining sharp bounds
on the size of the Minkowski sum of two sets in two and three dimensions, and on developing
fast algorithms for computing Minkowski sums. It is well known that if P is a polygonal set
with m vertices and Q is another polygonal set with n vertices, then P ⊕ Q is a portion of
the arrangement of O(mn) segments, where each segment is the Minkowski sum of a vertex
of P and an edge of Q, or vice versa. Therefore the size of P ⊕ Q is O(m²n²) and it can
be computed within that time; this bound is tight in the worst case [20] (see Figure 2). If
both P and Q are convex, then P ⊕ Q is a convex polygon with at most m + n vertices, and
it can be computed in O(m + n) time. If only P is convex, then a result of Kedem et
al. [21] implies that P ⊕ Q has Θ(mn) vertices (see Figure 3). Such a Minkowski sum can
be computed in O(mn log(mn)) time [28].
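For reference, the O(m + n) construction for two convex polygons is the standard merge of the two edge sequences by slope; the sketch below is our illustration of that classic technique, not the paper's implementation.

```python
# O(m + n) Minkowski sum of two convex polygons: rotate both polygons so
# they start at their bottommost vertices, then merge the two edge
# sequences in order of slope (a cross product compares adjacent edges).

def bottom_first(poly):
    """Rotate a CCW vertex list so the bottommost (then leftmost) vertex is first."""
    i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
    return poly[i:] + poly[:i]

def minkowski_sum_convex(P, Q):
    """P, Q: convex polygons as CCW lists of (x, y) tuples."""
    P, Q = bottom_first(P), bottom_first(Q)
    P += P[:2]  # cyclic padding so P[i + 1] is always valid
    Q += Q[:2]
    res, i, j = [], 0, 0
    while i < len(P) - 2 or j < len(Q) - 2:
        res.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        # sign of the cross product decides whose edge has the smaller slope
        cross = ((P[i + 1][0] - P[i][0]) * (Q[j + 1][1] - Q[j][1])
                 - (P[i + 1][1] - P[i][1]) * (Q[j + 1][0] - Q[j][0]))
        if cross >= 0 and i < len(P) - 2:  # P's edge comes first (or tie)
            i += 1
        if cross <= 0 and j < len(Q) - 2:  # Q's edge comes first (or tie)
            j += 1
    return res
```

When an edge of P is parallel to an edge of Q (cross product zero), both indices advance, so collinear boundary edges of the sum are merged automatically.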
Minkowski sums of curved regions have also been studied (e.g., [4, 19, 27]), as well as
Minkowski sums in three dimensions (see, e.g., the survey paper [2]). Here, however, we focus
on sums of planar polygonal regions.
We devised and implemented three algorithms for computing the Minkowski sum of
two polygonal sets based on the CGAL software library [1, 11]. Our main goal was to
produce a robust and exact implementation. This goal was achieved by employing the
CGAL planar maps [14] and arrangements [17] packages while using exact number types.
We use rational numbers and filtered geometric predicates from LEDA, the Library of
Efficient Data Types and Algorithms [29, 30].

Figure 2: Fork input: P and Q are polygons with m and n vertices respectively, each having
horizontal and vertical teeth. The complexity of P ⊕ Q is Θ(m²n²).

Figure 3: Comb input: P is a convex polygon with m vertices and Q is a comb-like polygon
with n vertices. The complexity of P ⊕ Q is Θ(mn).
We are currently using our software to solve translational motion planning problems in
the plane. We are able to compute collision-free paths even in environments cluttered with
obstacles, where the robot could only reach a destination placement by moving through
tight passages, practically moving in contact with the obstacle boundaries. See Figure 4
for an example. This is in contrast with most existing motion planning software for which
tight or narrow passages constitute a significant hurdle. More applications of our package
are described in [12].
The robustness and exactness of our implementation come at a cost: they slow down the
running time of the algorithms in comparison with a more standard implementation that
uses floating-point arithmetic. This makes it especially necessary to expedite the algorithms
in other ways. All our algorithms begin with decomposing the input polygons into convex
subpolygons. We discovered that not only the number of subpolygons in the decomposition
of the input polygons but also their shapes had a dramatic effect on the running time of the
Minkowski-sum algorithms; see Figure 5 for an example.

Figure 4: Tight passage: the desired target placement for the small polygon is inside the inner
room defined by the larger polygon (left-hand side). In the configuration space (right-hand side)
the only possible path to achieve this target passes through the line segment emanating into
the hole in the sum.
In the theoretical study of Minkowski-sum computation (e.g., [21]), the choice of decomposition
is often irrelevant (as long as we decompose the polygons into convex subpolygons)
because it does not affect the worst-case asymptotic running time of the algorithms. In
practice, however, different decompositions can induce a large difference in the running time
of the Minkowski-sum algorithms. The decomposition can affect the running time of algorithms
for computing Minkowski sums in several ways: some of them are global to all
algorithms that decompose the input polygons into convex polygons, while others are
specific to certain algorithms or even to specific implementations. The heart of this paper
is an examination of these various factors and a report on our findings.
Polygon decomposition has been extensively studied in computational geometry; it is
beyond the scope of this paper to give a survey of results in this area and we refer the reader
to the survey papers by Keil [25] and Bern [5], and the references therein. As we proceed,
we will provide details on specific decomposition methods that we will be using.
We apply several optimization criteria to the decompositions that we employ. In the context
of Minkowski sums, it is natural to look for decompositions that minimize the number
of convex subpolygons. As we show in the sequel, we are also interested in decompositions
with minimal maximum vertex degree of the decomposition graph, as well as several other
criteria.
We report on our experiments with various decompositions and different input polygons.
As mentioned in the Abstract, among our findings are that in general: (i) triangulations are
too costly; (ii) what constitutes a good decomposition for one of the input polygons depends
on the other input polygon; consequently, we develop a procedure for simultaneously
decomposing the two polygons such that a "mixed" objective function is minimized; (iii)
there are optimal decomposition algorithms that significantly expedite the Minkowski-sum
computation, but the decomposition itself is expensive to compute; in such cases simple
heuristics that approximate the optimal decomposition perform very well.
In the next section we survey the Minkowski-sum algorithms that we have implemented.
In Section 3 we describe the different decomposition algorithms that we have implemented.
We present a first set of experimental results in Section 4 and filter out the methods that
turn out to be inefficient. In Section 5 we focus on the decomposition schemes that are
not only fast to compute but also help compute the Minkowski sum efficiently. We give
concluding remarks and propose directions for further work in Section 6.
                                 naive triang.   min Σ d_i² triang.   min convex
# of convex subpolygons in P          33                 33                 6
time (msec) to compute P ⊕ Q        2133               1603               120

Figure 5: Different decomposition methods applied to the polygon P (leftmost in the figure),
from left to right: naive triangulation, minimum Σ d_i² triangulation, and minimum convex
decomposition (the details are given in Section 3). The table illustrates, for each decomposition,
the sum of squares of degrees, the number of convex subpolygons, and the time in milliseconds
to compute the Minkowski sum of P and a convex polygon, Q, with 4 vertices.
2 Algorithms
Given a collection C of curves in the plane, the arrangement A(C) is the subdivision of
the plane into vertices, edges, and faces induced by the curves in C. Planar maps are arrangements
where the curves are pairwise interior-disjoint. Our algorithms for computing
Minkowski sums rely on arrangements, and in the discussion below we assume some familiarity
with these structures, and with a refinement thereof called the vertical decomposition;
we refer the reader to [2, 15, 34] for information on arrangements and vertical decompositions,
and to [14, 17] for a detailed description of the planar maps and arrangements packages
in CGAL on which our algorithms are based.
The input to our algorithms are two polygonal sets P and Q, with m and n vertices
respectively. Our algorithms consist of the following three steps:
Step 1: Decompose P into convex subpolygons P_1, ..., P_s and Q into convex
subpolygons Q_1, ..., Q_t.
Step 2: For each i ∈ [1..s] and for each j ∈ [1..t], compute the Minkowski subsum
P_i ⊕ Q_j, which we denote by R_ij. We denote by R the set {R_ij | i ∈ [1..s], j ∈ [1..t]}.
Step 3: Construct the union of all the polygons in R, computed in Step 2; the output is
represented as a planar map.
The Minkowski sum of P and Q is the union of the polygons in R. Each R_ij is a convex
polygon, and it can easily be computed in time that is linear in the sizes of P_i and Q_j [26].
Let k denote the overall number of edges of the polygons in R, and let I denote the overall
number of intersections between (edges of) polygons in R.
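Step 2 can be sketched as follows: since each P_i and Q_j is convex, R_ij equals the convex hull of the pairwise vertex sums. The hull-based computation below is simpler (though asymptotically slower) than merging edge sequences; all names are our illustrations, not the paper's code.

```python
# Step 2 sketch: R_ij = conv{p + q : p a vertex of P_i, q a vertex of Q_j},
# valid because both pieces are convex.

def convex_hull(pts):
    """Andrew's monotone chain; returns the hull in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def half(points):
        h = []
        for p in points:
            # pop while the last turn is clockwise or collinear
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h

    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]

def subsums(P_parts, Q_parts):
    """All Minkowski subsums R_ij for convex pieces P_i, Q_j."""
    return [convex_hull([(p[0] + q[0], p[1] + q[1]) for p in Pi for q in Qj])
            for Pi in P_parts for Qj in Q_parts]
```

Step 3, the union of the polygons in R, is where the algorithms of this section differ.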
We briefly present two different algorithms for performing Step 3, computing the union
of the polygons in R, which we refer to as the arrangement algorithm and the incremental
union algorithm. A detailed description of these algorithms is given in [12].
Arrangement algorithm. The algorithm constructs the arrangement A(R) induced by
the polygons in R (we refer to this arrangement as the underlying arrangement of the
Minkowski sum) by adding the (boundaries of the) polygons of R one by one in a random
order and by maintaining the vertical decomposition of the arrangement of the polygons added
so far; each polygon is chosen with equal probability at each step. Once we have constructed
the arrangement, we efficiently traverse all its cells (vertices, edges, or faces) and we mark
a cell as belonging to the Minkowski sum if it is contained inside at least one polygon of R.
The construction of the arrangement takes randomized expected time O(I + k log k).
The traversal stage takes O(I + k) time.
Incremental union algorithm. In this algorithm we incrementally construct the union of
the polygons in R by adding the polygons one after the other in random order. We maintain
the planar map representing the union of the polygons added so far. For each r ∈ R we
insert the edges of r into the map and then remove redundant edges from the map. All
these operations can be carried out efficiently using the CGAL planar map package. We
can only give a naive bound of O(k² log² k) on the running time of this algorithm, which in
the worst case is higher than the worst-case running time of the arrangement algorithm.
In practice, however, the incremental union algorithm works much better on most problem
instances.
Remarks. (1) We also implemented a union algorithm using a divide-and-conquer approach,
but since it mostly behaves worse than the incremental algorithm we do not describe
it here. The full details are given in [12]. (2) Our planar map package provides full support
for maintaining the vertical decomposition, and for efficient point location in a map.
However, using simple point-location strategies (naive, walk-along-a-line) is often faster in
practice [14]. Therefore we ran the tests reported below without maintaining the vertical
decomposition.
3 The Decomposition Algorithms
We describe here the algorithms that we have implemented for decomposing the input
polygons into convex subpolygons. We use decompositions both with and without Steiner
points. Some of the techniques are optimal and some use heuristics to optimize certain
objective functions. The running time of the decomposition stage is significant only when we
search for the optimal solution and use dynamic programming; in all other cases the running
time of this stage is negligible, even when we implemented a naive solution. Therefore we
only mention the running time for the "heavy" decomposition algorithms.
We use the notation from Section 2. For simplicity of the exposition we assume here
that the input data for the Minkowski algorithm are two simple polygons P and Q. In
practice we use the same decomposition schemes that are presented here for general polygonal
sets, mostly without changing them at all. However, this is not always possible. For
example, Keil's optimal minimum convex decomposition algorithm does not work on polygons
with holes. Furthermore, the problem of decomposing a polygon with holes into convex
subpolygons is proven to be NP-hard, irrespective of whether Steiner points are allowed; see
[23]. Other algorithms that we use (e.g., the AB algorithm) can be applied to general polygons
without changes. We discuss these decomposition algorithms in the following sections.
In what follows, P is a polygon with n vertices, r of which are reflex.
3.1 Triangulation
3.1 Triangulation
Naive triangulation. This procedure searches for a pair of vertices such that the
segment connecting them is a diagonal, namely it lies inside the polygon. It adds such a diagonal,
splits the polygon into two subpolygons by this diagonal, and triangulates each subpolygon
recursively. The procedure stops when the polygon becomes a triangle. See Figure 5 for an example.
In some of the following decompositions we are concerned with the degrees of vertices in
the decomposition (namely the number of diagonals incident to a vertex). Our motivation
for considering the degree comes from an observation on the way our planar map structures
perform in practice: we noted that the existence of high-degree vertices makes maintaining
the maps slower. The DCEL structure that is used for maintaining the planar map has,
from each vertex, a pointer to one of its incident halfedges. We can traverse the halfedges
around a vertex by using the adjacency pointers of the halfedges. If a vertex v_i has d incident
halfedges then finding the location of a new edge around v_i will take O(d) traversal steps.
To avoid the overhead of a search structure for each vertex, the planar-maps implementation
does not include such a structure. Therefore, since we build the planar map incrementally,
if the degree of v_i in the final map is d_i then we performed on the order of d_i² traversal
steps on this vertex. Trying to minimize this time over all the vertices, we can either try
to minimize the maximum degree or the sum of squares of degrees, Σ_i d_i². Now, high-degree
vertices in the decomposition result in high-degree vertices in the underlying arrangement,
and therefore we try to avoid them. We can apply the same minimization criteria to the
vertices of the decomposition.
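Both degree criteria are easy to evaluate for a given decomposition; a minimal sketch (the function name and input convention are ours, not the paper's):

```python
from collections import Counter

def degree_costs(num_vertices, diagonals):
    """Given a decomposition of a polygon with vertices 0..n-1 by a list of
    diagonals (i, j), return (max degree, sum of squared degrees), where the
    degree of a vertex counts only the added diagonals, as in the text."""
    deg = Counter()
    for i, j in diagonals:
        deg[i] += 1
        deg[j] += 1
    degrees = [deg[v] for v in range(num_vertices)]
    return max(degrees), sum(d * d for d in degrees)
```

For example, a fan triangulation of a hexagon from vertex 0 uses diagonals (0,2), (0,3), (0,4), concentrating all the cost at one vertex; a "balanced" triangulation of the same hexagon scores better on both criteria.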
Optimal triangulation: minimizing the maximum degree. Using dynamic programming
we compute a triangulation of the polygon where the maximum degree of a vertex is
minimal. The algorithm is described in [18], and runs in O(n³) time.
Optimal triangulation: minimizing Σ_i d_i². We adapted the minimal-maximum-degree
algorithm to find the triangulation with minimum Σ_i d_i², where d_i is the degree of vertex v_i
¹ In such cases we can apply a first decomposition step that connects the holes to the outer boundary and
then use the algorithm on the simple subpolygons. This is a practical heuristic that does not guarantee an
optimal solution.
Figure 6: From left to right: Slab decomposition, angle "bisector" (AB) decomposition, and
KD decomposition.
of the polygon. (See Figure 5 for an illustration.) The adaptation is straightforward: since
both the maximum degree and Σ_i d_i² are global properties of the decomposition that can be updated in
constant time at each step of the dynamic programming algorithm, most of the algorithm
and the entire analysis remain the same.
3.2 Convex Decomposition without Steiner Points
Greedy convex decomposition. The same as the naïve triangulation algorithm except
that it stops as soon as the polygon does not have a reflex vertex.
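The decompositions that follow repeatedly need the reflex vertices of the input. A minimal sketch of the standard cross-product test, assuming counterclockwise vertex order (our own helper, not code from the paper's package):

```python
def reflex_vertices(poly):
    """Indices of reflex vertices of a simple polygon whose vertices are
    given in counterclockwise (CCW) order. A vertex is reflex when the
    interior angle exceeds 180 degrees, i.e. the boundary turns clockwise
    there, which the sign of the cross product detects."""
    n = len(poly)
    out = []
    for i in range(n):
        ax, ay = poly[i - 1]          # previous vertex
        bx, by = poly[i]              # the vertex being tested
        cx, cy = poly[(i + 1) % n]    # next vertex
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross < 0:                 # clockwise turn => reflex for CCW input
            out.append(i)
    return out
```

On an L-shaped polygon the single inner corner is reported; on a convex polygon the list is empty.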
Minimum number of convex subpolygons (min-convex). We apply the algorithm
of Keil [22], which computes a decomposition of a polygon into the minimum number of
convex subpolygons without introducing new vertices (Steiner points). The running time
of the algorithm is O(r²n log n). This algorithm uses dynamic programming. See Figure 5.
This result was recently improved to O(n + r² min{r², n}).
Minimum Σ_i d_i² convex decomposition. We modified Keil's algorithm so that it will
compute decompositions that minimize Σ_i d_i², the sum of squares of vertex degrees. As in the
modification of the min-max degree triangulation, in this case we also modify the dynamic
programming scheme by simply replacing the cost function of the decomposition. Instead
of computing the number of polygons (as the original min-convex decomposition algorithm
does) we compute a different global property, namely the sum of squares of degrees. We
can compute Σ_i d_i² in constant time given the values of Σ_i d_i² for the decompositions of the two
subpolygons.
3.3 Convex Decomposition with Steiner Points
Slab decomposition. Given a direction ~e, from each reflex vertex of the polygon we extend
a segment in directions ~e and −~e inside the polygon until it hits the polygon boundary. The
result is a decomposition of the polygon into convex slabs. If ~e is vertical then this is the
well-known vertical decomposition of the polygon. See Figure 6. This decomposition gives
a 4-approximation to the optimal convex decomposition as it partitions the polygon into at
most 2r subpolygons and one needs at least ⌈r/2⌉ subpolygons. The obvious advantage
of this decomposition is its simplicity.
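The ray extension at the heart of the slab decomposition reduces to a ray-shooting primitive against the polygon boundary. This is our own illustrative version for the vertical case (half-open x-ranges keep a shared edge endpoint from being counted twice; vertical edges, being parallel to the ray, are skipped):

```python
def shoot_vertical_ray(p, poly, up=True):
    """First hit of a vertical ray from point p with the boundary of
    polygon poly (a vertex list); returns the hit y-coordinate, or None
    if no boundary edge crosses the ray on that side."""
    px, py = p
    best = None
    n = len(poly)
    for i in range(n):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % n]
        if ax == bx:
            continue                      # edge parallel to the ray
        lo, hi = (ax, bx) if ax < bx else (bx, ax)
        if not (lo <= px < hi):           # half-open range: no double count
            continue
        y = ay + (by - ay) * (px - ax) / (bx - ax)
        if up and y > py and (best is None or y < best):
            best = y
        if not up and y < py and (best is None or y > best):
            best = y
    return best
```

Shooting both up and down from every reflex vertex and splitting along the resulting segments yields the vertical slab decomposition described above.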
Angle "bisector" decomposition (AB). In this algorithm we extend the internal angle
"bisector" from each reflex vertex until we first hit the polygon's boundary or a diagonal
that we have already extended from another vertex.² See Figure 6. This decomposition
(suggested by Chazelle and Dobkin [6]) gives a 2-approximation to the optimal convex
decomposition: if P has r reflex vertices then every decomposition of P must include at
least ⌈r/2⌉ subpolygons, since every reflex vertex should be eliminated by at least one
diagonal incident to it and each diagonal can eliminate at most 2 reflex vertices. The AB
decomposition method extends one diagonal from each reflex vertex until P is decomposed
into at most r + 1 subpolygons.
KD decomposition. This algorithm is inspired by the KD-tree method to partition a set
of points in the plane [8]. First we divide the polygon by extending vertical rays inside the
polygon from a reflex vertex that lies horizontally in the middle (the number of vertices to the left
of a vertex v, namely those having smaller x-coordinate than v's, is denoted v_l, and the number of
vertices to the right of v is denoted v_r; we look for a reflex vertex v for which max{v_l, v_r}
is minimal). Then we divide each of the subpolygons by extending a horizontal line from a
vertex that lies vertically in the middle. We continue dividing the subpolygons in this way (alternating
between horizontal and vertical divisions) until no reflex vertices remain. See Figure 6. By
this method we try to lower the stabbing number of the subdivision (namely, the maximum
number of subpolygons in the subdivision intersected by any line); see the discussion
in Section 5.2 below. The decomposition is similar to the quad-tree based approximation
algorithms for computing the minimum-length Steiner triangulations [10].
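The choice of the splitting vertex ("horizontally in the middle") follows directly from the definition of v_l and v_r; a minimal sketch, with names of our own choosing:

```python
def kd_split_vertex(poly, reflex):
    """Among the reflex vertex indices in `reflex`, pick the one minimizing
    max(v_l, v_r), where v_l / v_r count the polygon vertices with strictly
    smaller / larger x-coordinate, as in the KD decomposition's first cut."""
    xs = [p[0] for p in poly]

    def cost(i):
        vl = sum(1 for x in xs if x < xs[i])
        vr = sum(1 for x in xs if x > xs[i])
        return max(vl, vr)

    return min(reflex, key=cost)
```

The recursive step is symmetric: after splitting, the same rule is applied to each subpolygon with the roles of x and y exchanged.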
4 A First Round of Experiments
We present experimental results of applying the decompositions described in the previous
section to a collection of input pairs of polygons. We summarize the results and draw
conclusions that lead us to focus on a smaller set of decomposition methods (which we
study further in the next section).
4.1 Test Platform and Frame Program
Our implementation of the Minkowski sum package is based on the CGAL (version 2.0) and
LEDA (version 4.0) libraries. Our package works with Linux (g++ compiler) as well as with
Windows (Visual C++ 6.0 compiler). The tests were performed under WinNT workstation
on a 500 MHz Pentium III machine with 128 MB of RAM.
² It is not necessary to compute the direction of the angle bisector exactly; it suffices to find a segment
that will eliminate the reflex vertex from which it is extended. Let v be a reflex vertex and let u (resp. w) be
the previous (resp. next) vertex on the boundary of the polygon; then any segment from v in a direction that
divides the angle ∠uvw into two angles of less than 180° each will do.
Figure 7: Star input: The input (on the left-hand side) consists of two star-shaped polygons.
The underlying arrangement of the Minkowski sum is shown in the middle. Running times in
seconds for different decomposition methods (for two star polygons with 20 vertices) are
presented in the graph on the right-hand side.
Figure 8: Border input: The input (an example on the left-hand side) consists of the border of
a country and a star-shaped polygon. The Minkowski sum is shown in the middle, and running
times in seconds for different decomposition methods (for the border of Israel with 50 vertices
and a star-shaped polygon with 15 vertices) are shown in the graph on the right-hand side.
We implemented an interactive program that constructs Minkowski sums, computes
configuration space obstacles, and solves polygon containment and polygon separation problems.
The software lets the user choose the decomposition method and the union algorithm.
It then presents the resulting Minkowski sum and underlying arrangement. The software
is available from http://www.math.tau.ac.il/~flato/.
4.2 Results
We ran the union algorithms (arrangement and incremental-union) with all nine decomposition
methods on various input sets. The running times for the computation of the
Minkowski sum for four input examples are summarized in Figures 7-10.
It is obvious from the experimental results that using triangulations causes the union
Figure 9: Random polygons input: The input (an example on the left-hand side) consists of
two random-looking polygons. The Minkowski sum is shown in the middle, and running times in
seconds for different decomposition methods (for two random-looking polygons)
are shown in the graph on the right-hand side.
Figure 10: Fork input: The input consists of two orthogonal fork polygons. The Minkowski
sum is shown in the middle, and running times in seconds for different decomposition methods
(for two fork polygons with 8 teeth each) are shown in the graph on the right-hand side.
Figure 11: An example of a case where the union computation time is the smallest when using
the min-convex decomposition, but the method becomes inefficient when the decomposition
time is considered as well. The graph on the right-hand side shows the running times in seconds for computing
the Minkowski sum of two polygons (left-hand side) representing the borders of India and Israel
with 478 and 50 vertices respectively. Note that while constructing the Minkowski sum of these
two polygons the incremental union algorithm handles more than 40000 possibly intersecting
segments.
algorithms to run much slower (the left three pairs of columns in the histograms of Figures 7-10).
By triangulating the polygons, we create (n−1)(m−1) hexagons in R with potentially
many intersections between the edges of these polygons. We get these poor results
since the performance of the union algorithms strongly depends on the number of vertices
in the arrangement of the hexagon edges. Minimizing the maximum degree or the sum
of squares of degrees in a triangulation is a slow computation that results in better union
performance (compared to the naïve triangulation) but is still much worse than other simple
convex-decomposition techniques.
In most cases the arrangement algorithm runs much slower than the incremental union
approach. By removing redundant edges from the partial sum during the insertion of
polygons, we reduce the number of intersections between new polygons and the current planar
map features. The fork input is an exception, since the complexity of the union is roughly
the same as the complexity of the underlying arrangement, and the edges that we remove
in the incremental algorithm do not significantly reduce the complexity of the planar map;
see Figure 10. More details on the comparison between the arrangement union algorithm
and the incremental union algorithm are given in [13].
Although the min-convex algorithm is almost always the fastest in computing the union,
constructing this optimal decomposition may be expensive. For some inputs running with
the min-convex decomposition becomes inefficient; see for example Figure 11. Minimizing
the sum of squares of degrees in a convex decomposition rarely results in a decomposition
that is different from the min-convex decomposition.
This first round of experiments helped us to filter out inefficient methods. In the next
section we focus on the better decomposition algorithms, i.e., minimum convex, slab, angle
"bisector", and KD. We further study them and attempt to improve their performance.
5 Revisiting the More Efficient Algorithms
In this section we focus our attention on the algorithms that were found to be efficient in
the first round of experiments. As already mentioned, we measure efficiency by combining
the running time of the decomposition step together with that of the union step. We
present an experiment showing that minimizing the number of convex subpolygons in the
decomposition does not always lead to better Minkowski-sum computation time; this is in
contrast with the impression that the first round of results may give.
We also show in this section that in certain instances the decision how to decompose
the input polygon P may change depending on the other polygon Q; namely, for the same
P and different Q's we should decompose P differently based on properties of the other
polygon. This leads us to propose a "mixed" objective function for the simultaneous optimal
decomposition of the two input polygons. We present an optimization procedure for this
mixed function. Finally, we take the two most effective decomposition algorithms (AB and
KD), which are not only efficient but also very simple and therefore easy to modify,
and we try to improve them by adding various heuristics.
5.1 Nonoptimality of Min-Convex Decompositions
Minimizing the number of convex parts of P and Q can be not only expensive to compute,
but it also does not always yield the best running time of the Minkowski-sum construction. In
some cases other factors are important as well. Consider for example the knife input data:
P is a long triangle with j teeth along its base and Q is composed of horizontal and vertical
teeth. See Figure 12. P can be decomposed into parts by extending diagonals
from the teeth in the base to the apex of the polygon. Alternatively, we can decompose
it into subpolygons with short diagonals (this is the "minimal length AB"
decomposition described below in Section 5.3). If we fix the decomposition of Q, the latter
decomposition of P results in considerably faster Minkowski-sum running time, despite
having more subpolygons, because the Minkowski sum of the long subpolygons in the first
decomposition with the subpolygons of Q results in many intersections between the edges
of polygons in R. In the first decomposition all the subpolygons are long, while in the
latter only one of the subpolygons is "long" and the rest are small.
We can also see a similar behavior in real-life data. Computing the Minkowski sum of
the (polygonal representation of) countries with star polygons mostly worked faster while
using the KD decomposition than with the AB technique, with the exception of degenerate
polygons (i.e., polygons with some reflex vertices that share the same x or y coordinate), for which the KD
decomposition always generates at least as many subpolygons as the AB decomposition.
Figure 12: Knife input: The input polygons are on the left-hand side. Two types of decompositions
of P (enlarged) are shown second from the left: on top, subpolygons with diagonals of short
total length, and below, the minimum convex decomposition with subpolygons with long diagonals.
Third from the left is the Minkowski sum of P and Q. The underlying arrangement (using
the short decomposition of P) is shown on the right-hand side. The table below presents the
number of vertices in the underlying arrangement and the running time for both decompositions
(P has 20 teeth and 42 vertices and Q has 34 vertices):
number of vertices: 23448, 9379
running time (sec): 71.7, 25.6
5.2 Mixed Objective Functions
Good decomposition techniques that handle P and Q separately might not be sufficient,
because what constitutes a good decomposition of P depends on Q. We measured the
running time for computing the Minkowski sum of a knife polygon P (Figure 12; the
knife polygon is second from the left) and a random polygon Q (Figure 9). We scaled Q differently
in each test. We fixed the decomposition of Q and decomposed the knife polygon P once
with the short "minimal length AB" decomposition and then with the long
minimum convex decomposition. The results are presented in Figure 13. We can see that
for small Q's the short decomposition of the knife P with more subpolygons performs better,
but as Q grows the long decomposition of P with fewer subpolygons wins.
These experiments imply that a more careful strategy would be to simultaneously decompose
the two input polygons, or at least take into consideration properties of one polygon
when decomposing the other.
The running time of the arrangement union algorithm is O((k + I) log k), where k is the
number of edges of the polygons in R and I is the overall number of intersections between
(edges of) polygons in R (see Section 2). The value of k depends on the complexity of the
convex decompositions of P and Q. Hence, we want to keep this complexity small. It is
harder to optimize the value of I. Intuitively, we want each edge of R to intersect as few
polygons of R as possible. If we consider the standard rigid-motion invariant measure on
lines in the plane [33] and use L(C) to denote the set of lines intersecting a set C, then for any
Figure 13: Minkowski sum of a knife, P, with 22 vertices and a random polygon, Q, with 40
vertices, using the arrangement union algorithm. On the left-hand side is the underlying arrangement
of the sum with the smallest random polygon, and on the right-hand side the underlying
arrangement of the sum with the largest random polygon. As Q grows, the number of vertices
I in the underlying arrangement drops from (about) 15000 to 5000 for the "long"
decomposition of P, and from 10000 to 8000 for the "short" decomposition.
convex set C the measure of L(C) equals the perimeter of C; in particular, the measure of
L(R_ij) is the perimeter of R_ij. This suggests that we want to minimize the
total lengths of the diagonals in the convex decompositions of P and Q (Aronov and Fortune
[3] use this approach to show that minimizing the length of a triangulation can decrease
the complexity of the average-case ray-shooting query). But we want to minimize the two
criteria simultaneously, and let the decomposition of one polygon govern the decomposition
of the other.
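The link between line measure and perimeter works because, for convex polygons, the perimeter of a Minkowski sum is exactly the sum of the two perimeters. The following sketch uses the standard edge-merge construction of a convex Minkowski sum (our own illustration, not code from the paper's CGAL-based package) to check this numerically:

```python
import math

def _rotate_to_bottom(poly):
    # start each polygon at its bottom-most (then left-most) vertex
    i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
    return poly[i:] + poly[:i]

def minkowski_sum_convex(P, Q):
    """Minkowski sum of two convex polygons given in CCW order, built by
    merging the two edge sequences sorted by direction. Collinear edge
    pairs are kept, so the output may contain flat vertices."""
    P, Q = _rotate_to_bottom(P), _rotate_to_bottom(Q)

    def edges(poly):
        n = len(poly)
        return [(poly[(i + 1) % n][0] - poly[i][0],
                 poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]

    # normalize edge angles to [0, 2*pi) so each polygon's CCW edge order,
    # started at the bottom-most vertex, is preserved by the sort
    ev = sorted(edges(P) + edges(Q),
                key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))
    x, y = P[0][0] + Q[0][0], P[0][1] + Q[0][1]
    out = []
    for dx, dy in ev:
        out.append((x, y))
        x, y = x + dx, y + dy
    return out

def perimeter(poly):
    n = len(poly)
    return sum(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))
```

Summing a unit square with a right triangle, for instance, yields a sum whose perimeter equals the two input perimeters added together, matching the |R_ij| = |P_i| + |Q_j| identity used in the cost analysis below.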
We can see supporting experimental results for segments in Figure 14. In these experiments
we randomly chose a set T of points inside a square in R² and connected pairs of
them by a set S of random segments (for each segment we randomly chose its two endpoints
from T). Then we measured the average number of intersections per segment as a function
of the average length of a segment. To get different average lengths of the segments, at each
round we chose each segment by taking the longest (or shortest) segment out of l randomly
chosen segments, where l is a small integer varying between 1 and 15. The average number
of intersections is I/|S|, where I is the total number of intersections in the arrangement A(S).
We performed 5 experiments for each value of l between 1 and 15; each plotted point in
the graph in Figure 14 represents such an experiment. The values of l are not shown in
the graph; they were used to generate sets of segments with different average lengths.
For the presented results, we took 125 points and 500 segments (this is a typical ratio between points and
segments in the set R for which we compute the arrangement A(R)). As the results show,
the intersection count per segment grows linearly (or close to linearly) with the average
length of a segment.
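The experiment is easy to reproduce in a few lines. This sketch is our own: it uses a plain quadratic intersection count and a proper-intersection test based on orientation signs (collinear contacts are ignored, which is fine for random, general-position input):

```python
import random

def seg_intersect(s1, s2):
    """Proper intersection test for two segments via orientation signs."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    (p1, p2), (p3, p4) = s1, s2
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def avg_intersections(points, n_segs, l, rng):
    """One round of the experiment: build n_segs segments, each the longest
    of l random candidates with endpoints from `points`, and return
    (average segment length, intersections per segment)."""
    def length(s):
        return ((s[0][0] - s[1][0]) ** 2 + (s[0][1] - s[1][1]) ** 2) ** 0.5
    segs = []
    for _ in range(n_segs):
        cands = [(rng.choice(points), rng.choice(points)) for _ in range(l)]
        segs.append(max(cands, key=length))
    I = sum(seg_intersect(segs[i], segs[j])
            for i in range(len(segs)) for j in range(i + 1, len(segs)))
    return sum(length(s) for s in segs) / len(segs), I / len(segs)
```

Sweeping l from 1 to 15 and plotting the returned pairs reproduces the qualitative trend of Figure 14: intersections per segment grow roughly linearly with average segment length.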
Therefore, we assume that the expected number of intersections of a segment in the
Figure 14: Average number of intersections per segment as a function of the average segment
length. Each point in the graph represents a configuration containing 125 randomly chosen points
in a square [0, 1000] × [0, 1000] in R² and 500 randomly chosen segments connecting pairs of
these points.
arrangement A(R) of the polygons of R is proportional to the total length of the edges of A(R),
which we denote by |A(R)|. The intuition behind the mixed objective function, which we
propose next, is that minimizing |A(R)| will lead to minimizing I.
Let P_1, …, P_{k_P} be the convex subpolygons into which P is decomposed, and let |P_i| be
the perimeter of P_i. Similarly define Q_1, …, Q_{k_Q} and |Q_j|. If |R_ij| is the perimeter of R_ij
(the Minkowski sum of P_i and Q_j), then |R_ij| = |P_i| + |Q_j|.
Summing over all (i, j) we get Σ_{i,j} |R_ij| = k_Q Σ_i |P_i| + k_P Σ_j |Q_j|.
Let |P| denote the perimeter of P and D_P the sum of the lengths of the diagonals in P,
so that Σ_i |P_i| = |P| + 2D_P; define |Q| and D_Q similarly. Suppose P has k_P
subpolygons and Q has k_Q subpolygons, and let D_{P,Q} be the decomposition of P and Q. Then
c(D_{P,Q}) = k_Q (|P| + 2D_P) + k_P (|Q| + 2D_Q).
The function c(D_{P,Q}) is a cost function of a simultaneous convex decomposition of P and
Q. Our empirical results showed that this cost function approximates the running time. We
want to find a decomposition that minimizes this cost function. Let c* denote the minimum
value of c(D_{P,Q}) over all simultaneous decompositions.
If we do not allow Steiner points, we can modify the dynamic-programming algorithm
by Keil [22] to compute c* in O(n²r⁴) time. We use an auxiliary cost
function ĉ(P, i), which is the minimum total length of diagonals in a convex decomposition
of P into at most i convex polygons. Then
c* = min over i, j of { j · (|P| + 2ĉ(P, i)) + i · (|Q| + 2ĉ(Q, j)) }.
Since the number of convex subpolygons in any minimal convex decomposition of a simple
polygon is at most twice the number of the reflex vertices in it, the values i and j are
at most 2r_P and 2r_Q, respectively, where r_P (resp. r_Q) is the number of reflex vertices in
P (resp. Q). One can compute ĉ(P, i) by modifying Keil's algorithm [22]; the modified
algorithm, as well as the algorithm for computing c*, are described in detail in Appendix A.
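Evaluating the mixed cost for a given pair of decompositions is then immediate. The form below, c(D_{P,Q}) = k_Q(|P| + 2D_P) + k_P(|Q| + 2D_Q), is our reading of the cost described in the text; the factor 2 comes from each diagonal lying on the boundary of two subpolygons:

```python
def mixed_cost(perim_P, diag_len_P, k_P, perim_Q, diag_len_Q, k_Q):
    """Mixed objective for a simultaneous convex decomposition: each of the
    k_Q parts of Q is summed with every part of P, so P's total boundary
    length |P| + 2*D_P is paid k_Q times, and symmetrically for Q.
    (Form reconstructed from the surrounding derivation.)"""
    return k_Q * (perim_P + 2 * diag_len_P) + k_P * (perim_Q + 2 * diag_len_Q)
```

For two already-convex inputs (one part each, no diagonals) the cost degenerates to the sum of the two perimeters; adding parts or diagonal length to either polygon raises the cost in proportion to the other polygon's part count.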
Since the running time of this procedure is too high to be practical, we neither implemented
it, nor did we make any serious attempt to improve the running time. We regard this
algorithm as a first step towards developing efficient algorithms for approximating mixed
objective functions.
If we allow Steiner points, then it is an open question whether an optimal decomposition
can be computed in polynomial time. Currently, we do not even have a constant-factor
approximation algorithm. The difficulty arises because no constant-factor approximation is
known for minimum-length convex decomposition of a simple polygon if Steiner points are
allowed [23].
5.3 Improving the AB and KD methods
It seems from most of the tests that in general the AB and KD decomposition algorithms
work better than the other heuristics. We next describe our attempts to improve these
algorithms.
Minimal length angle "bisector" decomposition. In each step we handle one reflex
vertex. A reflex vertex can always be eliminated by at most two diagonals. For any three
diagonals that eliminate a reflex vertex, at least one of them can be removed while the
vertex is still eliminated. In this algorithm, for each reflex vertex we look for the shortest
one or two diagonals that eliminate it. As we can see in Figure 16, the minimal length AB
decomposition performs better than the naïve AB even though it generally creates more
subpolygons.
While the AB decomposition performs very well, in some cases (concave chains, country
borders) the KD algorithm performs better. We developed the KD-decomposition technique
aiming to minimize the stabbing number of the decomposition of the input polygons (which
in turn, as discussed above, we expect to reduce the overall number I of intersections in the
underlying arrangement A(R) of the polygons of R). This method however often generates
too many convex parts. We tried to combine these two algorithms as follows.
Angle "bisector" and KD decomposition (AB+KD). In this algorithm we check the
two neighboring vertices v₁ and v₂ of each reflex vertex v; if v₁ and v₂ are convex, we extend an angle
"bisector" from v. We apply the KD decomposition algorithm for the remaining non-convex
polygons. By this method we aim to lower the stabbing number without creating redundant
convex polygons in the sections of the polygons that are not bounded by concave chains.
We tested these algorithms on polygons with different numbers of convex vertices, vertices
in concave chains, and "tooth" vertices. The results in Figure 15 suggest that AB+KD
performs best when the numbers of vertices in concave chains and of tooth vertices are
roughly the same. If there are more tooth vertices than vertices in concave chains, then the
AB decomposition performs better.
Next, we tried to further decrease the number of convex subpolygons generated by the
decomposition algorithm. Instead of emanating a diagonal from every reflex vertex, we first
tested whether we can eliminate two reflex vertices with one diagonal (let us call such a
diagonal a 2-reflex eliminator). All the methods listed below generate at most the same
number of subpolygons generated by the AB algorithm, but in practice the number is likely
to be smaller.
Improved angle "bisector" decomposition. For a reflex vertex, we look for 2-reflex
eliminators. If we cannot find such a diagonal we continue as in the standard AB algorithm.
Reflex angle "bisector" decomposition. In this method we work harder trying to find
2-reflex eliminator diagonals. In each step we go over all reflex vertices trying to find an
eliminator. If there are no more 2-reflex eliminators, we continue with the standard AB
algorithm on the rest of the reflex vertices.
Small side angle "bisector" decomposition. As in the reflex AB decomposition, we
look for 2-reflex eliminators. Such an eliminator decomposes the polygon into two
parts, one on each of its sides. Among the candidate eliminators we choose the one that has
the minimal number of reflex vertices on one of its sides. Vertices on different sides of the
added diagonal cannot be connected by another diagonal, because such a diagonal would intersect the added
one. By choosing this diagonal we are trying to "block" the minimal number of reflex
vertices from being connected (and eliminated) by another 2-reflex eliminator diagonal.
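The small-side rule reduces to choosing, among the candidate 2-reflex eliminators, the diagonal whose smaller side contains the fewest reflex vertices. A minimal sketch with hypothetical candidate data (the tuple format and names are ours):

```python
def pick_small_side_eliminator(candidates):
    """candidates: list of (diagonal, reflex_on_side_a, reflex_on_side_b)
    tuples for 2-reflex eliminator diagonals. Choose the diagonal whose
    smaller side blocks the fewest reflex vertices, as in the small-side
    AB heuristic described above."""
    diag, _, _ = min(candidates, key=lambda c: min(c[1], c[2]))
    return diag

# e.g. with hypothetical candidates:
#   pick_small_side_eliminator([("d1", 3, 4), ("d2", 0, 7), ("d3", 2, 2)])
# chooses "d2", which blocks no reflex vertices on one of its sides.
```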
Experimental results are shown in Figure 16. These latter improvements to the AB
decomposition seem to have the largest effect on the union running time, while keeping
the decomposition method very simple to understand and implement. Note that the small
side AB heuristic results in 20% faster union time than the improved AB and reflex AB
decompositions, and 50% faster than the standard angle "bisector" method. When we use
the small side AB with the input set used in Figure 11, the overall running time is about
376 seconds, which is at least three times faster than the results achieved by using other
decomposition methods.
6 Conclusions
We presented a general scheme for computing the Minkowski sum of polygons. We implemented
union algorithms which overcome all possible degeneracies. Using exact number
types and special handling for geometric degeneracies, we obtained a robust and exact implementation
that could handle all kinds of polygonal inputs. The emphasis of this paper
is on the effect of the decomposition method on the efficiency of the overall process.
Figure 15: Running times for computing the chain input using the AB, KD, and AB+KD decompositions.
Figure 16: Union running times for country-border input (Chile with 368 vertices on the left-hand
side and Norway with 360 vertices on the right-hand side) with the improved decomposition
algorithms.
We implemented over a dozen decomposition algorithms, among them triangulations,
optimal decompositions for different criteria, approximations, and heuristics. We examined
several criteria that affect the running time of the Minkowski-sum algorithm. The most effective
optimization is minimizing the number of convex subpolygons. Thus, triangulations,
which are widely used in the theoretical literature, are not practical for Minkowski-sum
algorithms. We further found that minimizing the number of subpolygons is not always
sufficient. Since two polygonal sets participate in the algorithm, we
found that it is smarter to decompose the polygons simultaneously, minimizing a cost function
which takes into account the decomposition of both input sets. Optimal decompositions
for this function, and also for simpler cost functions like the overall number of convex
subpolygons, were too slow in practice. In some cases the decomposition step of the Minkowski
algorithm took more time than the union step. Therefore, we developed some heuristics that
approximate a cost function very well and run much faster than their exact counterparts.
Allowing Steiner points, the angle "bisector" decomposition gives a 2-approximation for
the minimal number of convex subpolygons. The AB decomposition with simple practical
modifications (the small-side AB decomposition) is a decomposition that is easy to implement,
very fast to execute, and gives excellent results in the Minkowski-sum algorithm.
We propose several directions for further research:
1. Use the presented scheme and the practical improvements that we proposed with real-life
applications such as motion planning and GIS, and examine the effect of different
decompositions for those special types of input data.
2. Further improve the AB decomposition algorithms to give better theoretical approximation
factors and better running times.
3. We tested the efficiency of the Minkowski-sum algorithm with different convex decomposition
methods, but the algorithm will still give a correct answer if we have
a covering of the input polygons by convex polygons. Can one further improve the
efficiency of the Minkowski sum program using coverings instead of decompositions?
References
The CGAL User Manual
Approximating minimum-weight triangulations in three dimensions
Kim Generation of con
Optimal convex decompositions.
Limited gaps.
Computational Geometry: Algorithms and Applications
Approximating the minimum weight Steiner triangulation.
Robust and efficient construction of planar Minkowski sums
The design and implementation of planar maps in CGAL.
A general framework for assembly planning: The motion space approach.
The design and implementation of planar arrangements of curves in CGAL.
Triangulating planar graphs while minimizing the maximum degree.
Computing Minkowski sums of plane curves.
Computing Minkowski sums of regular polygons.
On the union of Jordan regions and collision-free translational motion amidst polygonal obstacles
Decomposing a polygon into simpler components.
Minimum decompositions of polygonal objects.
On the time bound for convex decomposition of simple polygons.
Polygon decomposition.
Robot Motion Planning.
Polynomial/rational approximation of Minkowski sum boundary curves.
Planning a purely translational motion for a convex object in two-dimensional space using generalized Voronoi diagrams
Placement and compaction of nonconvex polygons for clothing manufacture.
Computational Geometry: An Introduction Through Randomized Algorithms
585815 | Rational | Under general conditions, linear decision rules of agents with rational expectations are equivalent to restricted error corrections. However, empirical rejections of rational expectation restrictions are the rule, rather than the exception, in macroeconomics. Rejections often are conditioned on the assumption that agents aim to smooth only the levels of actions or are subject to geometric random delays. Generalizations of dynamic frictions on agent activities are suggested that yield closed-form, higher-order decision rules with improved statistical fits and infrequent rejections of rational expectations restrictions. Properties of these generalized `rational' error corrections are illustrated for producer pricing in manufacturing industries. | dismal. A survey of existent journal publications by Ericsson and Irons (1995) summarizes an
extensive accumulation of empirical evidence against rational expectations, including frequent
rejections of rational expectations overidentifying restrictions. A review of policy simulation
models used by central banks and international agencies, such as documented in Bryant et al.
(1993), indicates that many key rational expectations specifications are either imposed or fit by
rough empirical calibrations.
Macroeconomists have adopted a variety of responses to the absence of strong empirical
support for rational expectations. One is to maintain the rational expectations hypothesis, while
aiming to interpret a more limited subset of empirical regularities, as discussed by Kydland and
Prescott (1991). Another approach is to view rational expectations as a limiting case of complete
information in a more general treatment of the information processing abilities of agents, such as
the "bounded rationality" models of learning reviewed in Sargent (1993). Closely related is the
position that rational expectations are more likely to prevail at low frequencies, a view compatible
with tests of long-run theoretical restrictions in cointegrating relationships, as discussed by Watson
(1994). Others reject the hypothesis of model-based rational expectations, such as the use in
Ericsson and Hendry (1991) of rule-of-thumb extrapolations.
This paper examines an alternative explanation for the poor empirical properties of rational
expectations models. Because most rational expectations restrictions are inherently dynamic
due to the forecasting requirements of constraints on dynamic adjustment, a plausible source
of difficulty could be the sharp friction priors typically imposed on agent responses. The
standard dynamic specification in rational expectations models utilizes geometric lead response
schedules to anticipated future events and geometric lag responses to recent "news." Because
this two-sided geometric response schedule is not a clear implication of economic theory, a
generalized polynomial frictions specification is explored in this paper. Suggested interpretations
of generalized frictions range from costs of adjusting weighted averages of current and lagged
actions to convolutions of geometric random delay distributions of agent responses.
For difference-stationary variables, the decision rules based on generalized frictions are shown
to be isomorphic to a class of "rational" error correction models. 1 The parameters of the
1 Discussion in this paper is aimed primarily at decision rules for difference-stationary variables, a specification
that is not rejected by standard tests of the integration order of most macroeconomic aggregates in postwar samples.
decision rules are subject to tight cross-coefficient restrictions due to polynomial frictions and
cross-equation restrictions due to the assumption of rational expectations. A closed-form solution
that incorporates these restrictions is derived using two companion matrix systems, one a lead
system for the forward planning required by polynomial frictions and the other a lag system
associated with the agents' forecast model. 2
Rational error corrections under polynomial frictions inherit many desirable properties of
atheoretic reduced-form time-series models, including serially independent equation residuals and
small standard errors relative to many theory-based alternatives. However, whereas conventional
error corrections and VARs have been criticized for in-sample overfitting attributable to large
numbers of estimated parameters, rational error corrections are subject to many overidentifying
restrictions which substantially reduce the number of free parameters.
To provide concrete illustrations of the consequences of restrictive priors on adjustment
costs, linear decision rules associated with alternative specifications of frictions are estimated
for producer pricing in several manufacturing industries. The basic specification is based on the
assumptions of difference-stationary producer prices and stationary price markups over costs of
production, assumptions that are not rejected for postwar U.S. producer prices. 3
A difficulty with interpreting many reported rejections of rational expectations overidentifying
restrictions is that the rejections could be due also to misspecified models. An advantage of
the side-by-side comparisons reported in this paper is that several empirical problems, including
rejections of rational expectations restrictions, are unambiguously linked to the use of second-order
Euler equations rather than higher-order Euler equations.
The paper is organized as follows. Section I summarizes dynamic properties of the restricted
error correction that is implied by the standard decision rule with geometric response schedules.
Section II derives the error correction format of rational decision rules implied by a generalized
polynomial description of frictions. Several interpretations of polynomial frictions are suggested.
Section III presents empirical estimates of decision rules for industry pricing, comparing the
geometric responses of the standard decision rule with the responses of higher-order decision
rules. A tractable method of two-stage maximum likelihood estimation of rational error corrections
2 Although this paper develops a general framework for formulating higher-order error correction decision rules,
there are several precedents for recasting decision rules as restricted error corrections or applying higher-order linear
decision rules to macroeconomic aggregates. Nickell (1985) appears to be the first paper to explore similarities
between second-order decision rules and error correction models. Generally, fourth-order decision rules have been
confined to empirical inventory models, such as Blanchard (1983), Hall, Henry, and Wren-Lewis (1986), Callen, Hall,
and Henry (1990), and Cuthbertson and Gasparro (1993). Exceptions are the applications of fourth-order decision
rules to employment in the coal industry by Pesaran (1991) and to the price of manufactured goods in Price (1992).
3 State-independent frictions in pricing would not be expected to hold over all possible states, such as episodes of
extreme hyperinflation. Nevertheless, linear time series models appear to be a useful way to analyze sticky pricing in
periods of moderate inflation, vid. Sims (1992) and Christiano, Eichenbaum, and Evans (1994).
is derived in the Appendix, including corrections required for the sampling errors of the second
stage. Section IV concludes.
I. Rational Error Correction under Geometric Frictions
Just as one-sided polynomials in the lag operator are characteristic of atheoretic, linear time
series models, two-sided polynomials in the lead and lag operators are a defining characteristic
of linear models of rational behavior. The principal vehicle of analysis in this paper is the dynamic
first-order condition linking a decision variable, y_t, to its equilibrium objective, y*_t,

E_{t-1}{A(BF)A(L)y_t} = A(B)A(1)E_{t-1}{y*_t},   (1)

where E_{t-1}{.} denotes expectations based on information at the end of t-1; A(L) is a backwards
scalar polynomial in the lag operator, L^j x_t ≡ x_{t-j}; A(BF) is a forward scalar polynomial in the
lead operator, F^j x_t ≡ x_{t+j}; and B is the discount factor.
In contrast to conventional backwards-looking time series models of the relationship between
y_t and y*_t, a notable feature of equation (1) is that the expectation of the current decision variable,
E_{t-1}{y_t} = Σ_i w_i E_{t-1}{y*_{t+i}},
is a two-sided moving average of past and expected future values of the desired equilibrium, y*.
The agent response schedule, w_i, determines the relative importance of past and future, where
Σ_i w_i = 1. Many models used in macroeconomics assume these relative-importance weights are
adequately represented by two-sided geometric response schedules.
In the case of linear decision rules, there are two prominent rationalizations of geometric
response schedules. One is that changes in the level of the decision variable are subject to quadratic
adjustment costs, and this strictly convex friction induces geometric adjustments of the decision
variable toward its equilibrium. The second interpretation is that each agent is subjected to a
geometric distribution of random delays in adjustment, so that the level selected for a decision
variable in a given period is a weighted average of desired target settings over the expected interval
between allowable resets. As noted by Rotemberg (1996), the assumption of a geometric random
delay distribution leads to aggregate behavior that is observationally equivalent to that generated
by the assumption of quadratic costs on adjusting the level of the aggregate decision variable.
Under either interpretation, the required Euler equation is second-order, implying that the
polynomial components of equation (1) are first-order polynomials, A(L) = 1 - λL and
A(BF) = 1 - λBF, with 0 < λ < 1. As in Tinsley (1970), the optimal decision rule that
satisfies the Euler equation and the relevant endpoint (initial and transversality) conditions implies
partial adjustment of the decision variable to a discounted weighted average of expected forward
positions of the desired equilibrium. 4 This decision rule solution is obtained by multiplying the
Euler equation (1) by the inverse of the lead polynomial, A(BF)^{-1},

Δy_t = -A(1)(y_{t-1} - y*_{t-1}) + A(1) Σ_{i=0}^∞ (λB)^i E_{t-1}{Δy*_{t+i}},   (3)

where A(1) = 1 - λ and λB is the geometric discount factor over the infinite planning horizon.
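The behavior of a decision rule of the form in (3) can be illustrated with a small numerical sketch. All values below (the friction parameter λ, the discount factor B, and the target path) are assumed purely for illustration, and the expectation is replaced by a perfectly foreseen target path.

```python
import numpy as np

# Minimal sketch of a decision rule in the form of (3):
#   dy_t = -(1 - lam)*(y_{t-1} - ystar_{t-1})
#          + (1 - lam) * sum_i (lam*B)**i * dystar_{t+i}
# lam, B, and the target path are assumed illustrative values.
lam, B = 0.7, 0.98
ystar = np.linspace(1.0, 2.0, 60)      # assumed equilibrium target path
y = np.empty_like(ystar)
y[0] = 0.5                             # start away from the target
for t in range(1, len(ystar)):
    dystar = np.diff(ystar[t - 1:])                   # current and future target changes
    weights = (lam * B) ** np.arange(len(dystar))     # geometric forward discounting
    dy = -(1 - lam) * (y[t - 1] - ystar[t - 1]) + (1 - lam) * weights @ dystar
    y[t] = y[t - 1] + dy

gap_start = abs(y[1] - ystar[1])
gap_end = abs(y[-1] - ystar[-1])
```

Because the rule is a partial adjustment with error correction coefficient 1 - λ, the gap between y and its target shrinks by roughly the factor λ each period, while the forward term lets the decision variable anticipate foreseen target changes.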
To complete the derivation of the conventional decision rule, the data generating process of the
forcing term, y*_t, must be provided. In linear decision rules, the decision variable, y, is cointegrated
with the conditional equilibrium target, y*, which is defined by a linear function of q variables, 5

y*_t = θ'z_t.   (4)

Agent forecasts of the target, y*_{t+i}, are assumed in this paper to be generated by p-order VARs in
the q arguments, x_t' = [x_{1,t}, ..., x_{q,t}]. Thus, the effective information set of agents is obtained by
stacking p lags of the regressors into a single vector, z_t' = [x_t', ..., x_{t-p}']. The companion form
of the agent VAR forecast model is denoted by E_t{z_t} = Hz_{t-1}. 6 Consequently, agent forecasts
of the forcing term of the Euler equation are generated by

E_t{y*_{t+i}} = θ'H^{i+1}z_{t-1},   (5)

where the pq × 1 selector vector, θ, contains the coefficients of the cointegrating relationship, (4),
that defines the equilibrium objective.
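The stacked companion form behind a forecast equation like (5) can be sketched as follows. The VAR coefficients, cointegrating weights, and data below are assumed purely for illustration, not estimates from the paper.

```python
import numpy as np

# Companion (stacked) form of a p-order VAR: z_t stacks current and lagged
# regressors, E{z_t} = H z_{t-1}, and E{y*_{t+i}} = theta' H^(i+1) z_{t-1}.
# All numbers are assumed illustrative values.
q, p = 2, 2
Phi1 = np.array([[0.5, 0.1],
                 [0.0, 0.6]])          # assumed VAR lag-1 coefficients
Phi2 = np.array([[0.2, 0.0],
                 [0.1, 0.1]])          # assumed VAR lag-2 coefficients

H = np.zeros((q * p, q * p))
H[:q, :q] = Phi1                       # top block row carries the VAR
H[:q, q:] = Phi2
H[q:, :q] = np.eye(q)                  # identity block shifts lags down

theta = np.array([1.0, 0.5, 0.0, 0.0]) # selector: cointegrating weights on current x
z = np.array([1.0, 2.0, 0.8, 1.5])     # stacked information vector z_{t-1}

def forecast_target(i):
    """E{y*_(t+i)} = theta' H^(i+1) z_{t-1}."""
    return theta @ np.linalg.matrix_power(H, i + 1) @ z
```

As a one-step check, Phi1 @ [1, 2] + Phi2 @ [0.8, 1.5] = [0.86, 1.43], so forecast_target(0) = 0.86 + 0.5 * 1.43 = 1.575.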
Substituting forecasts of the forcing term from (5) into the decision rule (3) yields

Δy_t = -A(1)(y_{t-1} - θ'z_{t-1}) + A(1) Σ_{i=0}^∞ (λB)^i θ'H^i(H - I)z_{t-1}
     = -A(1)(y_{t-1} - θ'z_{t-1}) + h'z_{t-1}.   (6)
Historical references and discussion of issues in formulating linear decision rules with dynamic forcing terms are
found in Tinsley (1970, 1971).
5 As needed, the q variables include deterministic trend, intercept, and seasonal dummy series. The concept of a
dynamic, frictionless equilibrium is discussed by Frisch (1936).
6 Companion forms for a variety of linear forecasting models are illustrated in Swamy and Tinsley (1980).
As shown in the first line of (6), given the matrix of the forecast model coefficients, H, and the
discount factor, B, a single friction parameter, λ, determines a geometric pattern of rational dynamic
adjustment, with error correction coefficient A(1) = 1 - λ. The second line of (6) indicates that the weighted sum of expected
forward changes in the forcing term, E_t{Δy*_{t+i}}, can be reduced to the inner product of a restricted
coefficient vector times the industry information set,

h' = A(1)θ'(I - λBH)^{-1}(H - I),   (7)

where equation (7) provides a transparent summary of rational expectations overidentifying
restrictions on the coefficient vector of the agent information vector, z_{t-1}.
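One algebraic reading of the restricted coefficient vector under geometric frictions is h' = (1 - λ)θ'(I - λBH)^{-1}(H - I), in which the discounted forward sum collapses through a matrix geometric series. The sketch below checks that series identity numerically; H, θ, λ, and B are assumed illustrative values, not estimates.

```python
import numpy as np

# Check that (1-lam) * sum_i (lam*B)^i theta' H^i (H - I) collapses to the
# closed form (1-lam) * theta' (I - lam*B*H)^{-1} (H - I).
# All matrices and parameters below are assumed illustrative values.
rng = np.random.default_rng(0)
n = 4
H = rng.uniform(-0.3, 0.3, (n, n))     # small entries keep the series convergent
theta = rng.uniform(-1.0, 1.0, n)
lam, B = 0.7, 0.98
I = np.eye(n)

h_closed = (1 - lam) * theta @ np.linalg.inv(I - lam * B * H) @ (H - I)

# Truncated forward sum of the geometric series.
h_sum = np.zeros(n)
Hp = np.eye(n)                         # H^i, starting at i = 0
for i in range(500):
    h_sum += (1 - lam) * (lam * B) ** i * (theta @ Hp @ (H - I))
    Hp = Hp @ H
```

The two vectors agree to numerical precision, which is the sense in which the infinite forward-looking sum is a deterministic function of the friction parameter and the forecast model coefficients.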
Thus, there are two principal differences between the dynamic format of the "rational" error
correction in equation (6) and the format of a conventional error correction with p lags of each
regressor. First, only one lag of the decision variable is specified by the second-order decision
rule in equation (6), whereas up to p - 1 additional lags of the first-difference of the decision
variable may appear in a conventional error correction. Second, as indicated in equation (7), the
coefficient vector, h , is completely determined by cross-coefficient restrictions due to the friction
parameter in the error correction coefficient, -, and cross-equation restrictions due to the forecast
model coefficients, H . By contrast, the coefficients of the information vector in a conventional
error correction are unrestricted. 7
Residual independence and rational expectations overidentifying restrictions are frequently
rejected in macroeconomic studies of rational behavior. 8 One interpretation of the often
disappointing empirical performances of conventional two-root decision rules, including rejections
of rational expectation restrictions, is simply that expectations of actual agents may not be formed
under conditions required for rational expectations, such as symmetric access to full system
information by all agents. However, the limited dynamic specification illustrated in equation (6)
suggests another contributing factor: the arbitrary prior that agent responses are adequately
captured by two-sided geometric response schedules. The next section explores the dynamic
formats of higher-order Euler equations and rational error corrections associated with a polynomial
generalization of agent response schedules.
7 In conventional error corrections, an unrestricted coefficient vector is applied to the first-difference of the
information vector.
8 Examples of studies that report rejections of restrictions imposed by rational expectations include Sargent
(1978), Meese (1980), Rotemberg (1982), Pindyck and Rotemberg (1983), and Shapiro (1986). Significant residual
autocorrelations are indicated for rational expectations decision rules in Epstein and Denny (1983), Abel and Blanchard
(1986), and Muscatelli (1989). See also the extensive rational expectations literature review in Ericsson and Irons
(1995).
II. Rational Error Corrections under Polynomial Frictions
The standard second-order Euler equation provides a two-sided geometric description of agent
responses to anticipated and past events. The geometric schedules are determined by the roots
of the first-order component polynomials, A(L) and A(BF). This section discusses decision rules
associated with higher-order Euler equations, where the degree of the component polynomials is
increased to m ? 1. These m-order polynomials are obtained by relaxing the modeling prior that
agents aim to smooth only the levels of decision variables or, equivalently, that stochastic delays
of decision variable adjustment are generated only by geometric distributions. The first subsection
derives the closed form of rational error correction decision rules associated with higher-order
Euler equations. The second subsection briefly reviews some categories of polynomial frictions on
agent actions that are consistent with 2m-order Euler equations.
II.1 Solving for Rational Error Correction Decision Rules with Higher-Order Euler Equations
As demonstrated later in this section, the Euler equation under polynomial frictions is the same as
that initially shown in equation (1), except the factor polynomials are now m-order polynomials,
instead of first-order polynomials.
To obtain the decision rule in the case of 2m-order Euler equations, multiply by the inverse of
the lead polynomial, A(BF)^{-1}, to give

A(L)y_t = A(B)A(1)E_t{A(BF)^{-1}y*_t} ≡ f_t.   (8)

The analytical solution for the forcing term of this equation, f_t, is obtained by introducing a
second companion system that describes the forward motion of the (m - 1) × 1 lead vector
over the planning horizon, where the m × 1 selector vector, ι_m, has a one in the mth element and zeroes
elsewhere, and G is the m × m bottom-row companion matrix of the lead polynomial, A(BF).
Substituting the solution of the forcing term, f_t, from equation (9) into (8) yields the
generalized 2m-order decision rule,

Δy_t = A*(L)Δy_{t-1} - A(1)(y_{t-1} - y*_{t-1}) + Σ_{i=0}^∞ ψ_i E_t{Δy*_{t+i}},   (10)

where the ψ_i are forward discount weights implied by the lead companion matrix, G. In deriving
(10), the lag polynomial is partitioned into a level and difference format, A(L) ≡ A(1)L + A*(L)(1 - L),
where A*(L) is an (m - 1)-order polynomial whose coefficients
are moving sums of the coefficients of A(·), as shown in the Appendix. The forward path of the
target is partitioned into an initial level and forward differences, and the initial level is used to
isolate the error correction ``gap,'' y_{t-1} - y*_{t-1}.
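The level/difference partition of the lag polynomial, A(L) = A(1)L + A*(L)(1 - L), can be verified numerically. The coefficients below are assumed for illustration; A*(L) is recovered by dividing A(L) - A(1)L by (1 - L), and its coefficients turn out to be partial (moving) sums, consistent with the text's description.

```python
import numpy as np

# Check of the partition A(L) = A(1)*L + Astar(L)*(1 - L), Astar of order m-1.
# Coefficients of A are assumed illustrative values.
a = np.array([1.0, -0.5, 0.2, -0.1])   # A(L) = 1 - 0.5L + 0.2L^2 - 0.1L^3
A1 = a.sum()                           # A(1)

num = a.copy()
num[1] -= A1                           # numerator: A(L) - A(1)*L
astar = np.zeros(len(a) - 1)
carry = num[0]
for j in range(len(astar)):            # synthetic division by (1 - L):
    astar[j] = carry                   # coefficients are partial (moving) sums
    carry = num[j + 1] + carry
# 'carry' now holds the remainder, which should be zero.

# Rebuild A(L) from the partition and compare coefficient by coefficient.
rebuilt = np.zeros_like(a)
rebuilt[1] += A1                       # A(1)*L
rebuilt[: len(astar)] += astar         # Astar(L)*1
rebuilt[1:] -= astar                   # Astar(L)*(-L)
```

The rebuilt coefficient vector matches the original A(L) exactly, and the zero remainder confirms that A(L) - A(1)L is divisible by (1 - L).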
The final step in deriving the decision rule under polynomial frictions is to eliminate forecasts
of the equilibrium path, E_t{y*_{t+i}}, using the companion form of the forecast model. Substituting
forecasts of forward changes, E_t{Δy*_{t+i}}, from (5) into (10) provides the closed form solution of
the generalized rational error correction decision rule 9

Δy_t = A*(L)Δy_{t-1} - A(1)(y_{t-1} - θ'z_{t-1}) + h'z_{t-1}.   (11)
There are two major differences in the dynamic formats of the polynomial frictions version of
rational error correction in equation (11) and the conventional geometric frictions variant shown
earlier in (6). First, use of the m-order component polynomial, A(L), introduces lags of the
dependent variable, A*(L)Δy_{t-1}, to accompany the single lag of the decision variable in the error
correction term. 10 Second, in contrast to the single forward discount factor, λB, employed by the
decision rule under geometric frictions in (3), forecasts of anticipated changes in the equilibrium
path are now discounted by the m eigenvalues embedded in the lead companion matrix, G.
Just as the rational expectations restrictions were summarized by a coefficient vector under the
geometric frictions prior, coefficient restrictions of the generalized rational error correction also
can be compactly stated. As indicated in the last line of (11), the sum of the forward-looking terms
is again equivalent to the inner product of a weighting vector, h, and the information vector, z_{t-1}.
9 The direct solution format using companion forms may be compared with alternative solution methods for linear
decision rules ranging from partial fractions expansions of the characteristic roots in Hansen and Sargent (1980) to
Schur decompositions in Anderson and Moore (1985) and Anderson, Hansen, McGrattan, and Sargent (1996).
10 The number of lags introduced by A*(L) will often be much smaller than the number of lags of the dependent variable in a conventional error
Two types of restrictions are imposed on the coefficient vector, h , under polynomial frictions:
the cross-coefficient restrictions imposed by the component polynomials of the Euler equation, as
summarized by the forward companion coefficient matrix G; and the cross-equation restrictions
imposed by the agents' forecast model, summarized by the lag companion coefficient matrix H .
To reveal these restrictions, successive column stacks are applied to simplify the solution for the
coefficient vector, h. 11 The resulting restriction may be written as

h' = Σ_{i=0}^∞ ψ_i θ'H^i(H - I),   (12)

where the scalar discount weights, ψ_i, are generated by the inverse of the lead polynomial,
A(BF)^{-1}, as summarized by the companion matrix G.
This definition of the restricted coefficient vector in (12) provides a closed form solution for
the linear decision rule under polynomial frictions, and a summary of differences between the
unrestricted regression coefficients in a conventional error correction and the tightly restricted
coefficients of the information vector, z t\Gamma1 , in a generalized error correction.
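The column-stack manipulations behind this solution rest on the standard Kronecker-product identity vec(ABC) = (C' ⊗ A)vec(B) (see footnote 11), which can be checked directly. The matrices below are random illustrations.

```python
import numpy as np

# Numerical check of the column-stack identity used for the closed form:
#   vec(A B C) = (C' kron A) vec(B),  with vec() the column stack.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
Bmat = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5))

vec = lambda M: M.flatten(order="F")   # column-major stack
lhs = vec(A @ Bmat @ C)
rhs = np.kron(C.T, A) @ vec(Bmat)
```

NumPy stores arrays row-major by default, so the column stack is taken with order="F"; with that convention the two sides agree to machine precision.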
Finally, equation (11) indicates the friction parameters of the generalized rational error
correction are collected in A(1) and A*(·). This separable format is convenient for maximum
likelihood estimation by an iterative sequence of linear regressions, as discussed in the Appendix.
II.2 Higher-Order Euler Equations due to Polynomial Frictions
As noted earlier, standard rationalizations of the linear, second-order Euler equation are based on
the assumption of: (1) quadratic costs of adjusting the level of the decision variable, or (2) a discrete
geometric distribution of random delays in adjustments of the decision variable. Polynomial
extensions of these two prior specifications are discussed.
Adjustment costs on weighted averages of decision variables
One class of generalized frictions is associated with agent efforts to smooth weighted averages
of current and lagged values of decision variables. This smoothing is represented by quadratic
penalties on C(L)y t , where C(L) is a m-order polynomial in the lag operator,
and
Agents choose a sequence of decision variables that minimize the criterion, Λ_t, defined by a
11 The column stack of the product of three matrices is denoted by vec(ABC) = (C' ⊗ A)vecB,
where ⊗ denotes the Kronecker product.
second-order expansion of profits or utility around the path of equilibrium settings,

Λ_t = E_t Σ_{i=0}^∞ B^i [ (y_{t+i} - y*_{t+i})^2 + (C(L)y_{t+i})^2 ].   (13)

The associated Euler equation is a 2m-order equation in the lead and lag operators,

E_t{[1 + C(BF)C(L)]y_t} = E_t{y*_t}
E_t{A(BF)A(L)y_t} = A(B)A(1)E_t{y*_t},   (14)

where the s_k coefficients of the symmetric product in the first line of (14) are defined by coefficients of the friction
polynomial, C(L).
Because the extended Euler equation in the first line of (14) is symmetric in L and BF, the equation
is unaffected if these two operator expressions are interchanged. This, in turn, implies that a
solution of the characteristic equation of the Euler equation, say λ, is accompanied by the
reciprocal solution, B/λ. Consequently, the characteristic equation can be factored as shown in
the second line of equation (14). The format of this Euler equation is the same as that shown earlier
in equation (1), except the factor polynomials are now explicitly identified as m-order polynomials.
The criterion and second-order Euler equation associated with the standard specification, that
quadratic costs apply only to changes in the level of the decision instrument, are nested in equations
(13) and (14), respectively, under the standard prior assumption that C(L) = c_1(1 - L).
Smoothing levels and differences
In a frequent interpretation of higher-order Euler equations, the decision variable, y t , is an asset
stock, and adjustment costs may be applicable not only to changes in the level of the asset but also
to changes in the first-difference. An example is optimal inventory planning, where y_t indicates the
inventory stock at the end of period t. The change in inventories, Δy_t, is production
less sales. Given exogenous sales, the assumption of quadratic costs on the level of production
implies a quadratic penalty on changes in the planned level of inventory, (c_1 Δy_t)^2. Similarly,
quadratic costs associated with changes in the rate of production can be represented by a quadratic
smoothing penalty on changes in the planned first-difference of the inventory stock, (c_2 Δ^2 y_t)^2.
Thus, in the example of inventory modeling, it is not uncommon to assume polynomial frictions of
the general form, Σ_{k=1}^2 (c_k Δ^k y_t)^2. 12

A generalized criterion for smoothing levels and higher-order differences of
decision variables is

Λ_t = E_t Σ_{i=0}^∞ B^i [ (y_{t+i} - y*_{t+i})^2 + Σ_{k=1}^m (c_k Δ^k y_{t+i})^2 ],   (15)

with the associated 2m-order Euler equation, E_t{c_0(y_t - y*_t) + Σ_{k=1}^m c_k^2 (1 - BF)^k (1 - L)^k y_t} = 0.
Smoothing weighted averages
A second generalization is the case where quadratic penalties are associated with weighted moving
averages of the decision instrument. For example, let y t denote new labor hires by a firm in
period t. Suppose various job families within the firm require different durations of training by
supervisors, and the number of employees occupied by training in a given period is represented
by a fixed distribution of recent vintages of new hires, c_0 y_t + c_1 y_{t-1} + ... + c_m y_{t-m}. Costs associated
with variations in the rate of training may be approximated by the quadratic penalty,
(C(L)y_t)^2, which is a restatement of the polynomial friction
specification in equation (13). 13
Another variation is the extension of quadratic penalties from smoothed one-period changes
in the level of the decision variable, (1 - L)y_{t+i}, to smoothed changes in moving averages,
(1 - L^k)y_{t+i}. Examples include seasonal or term contracts where some costs are
associated with one-period averages, others with two-period averages, and so on. The criterion in
this instance takes the form,

Λ_t = E_t Σ_{i=0}^∞ B^i [ (y_{t+i} - y*_{t+i})^2 + Σ_{k=1}^m (c_k (1 - L^k) y_{t+i})^2 ],   (16)

with the associated 2m-order Euler equation, E_t{c_0(y_t - y*_t) + Σ_{k=1}^m c_k^2 (1 - (BF)^k)(1 - L^k) y_t} = 0.
Stochastic response delays
Given the tractability of linear first-order conditions, the quadratic adjustment cost specification is
widely used to characterize optimal adjustment. Applications include decision variables such as
nominal prices, extending from the seminal paper by Rotemberg (1982) to the recent example of
Hairault and Portier (1993), although the assumption of strictly increasing costs in the size of price
12 See Hall, Henry, and Wren-Lewis (1986), Callen, Hall, and Henry (1990), and Cuthbertson and Gasparro (1993).
13 In all examples, note that linear cost components can be accommodated by redefining the equilibrium target, y
of the relevant decision variable.
adjustments is often disputed. However, as noted by Rotemberg (1996), the aggregate response
arising from the quadratic adjustment cost model is equivalent to the aggregate adjustment of
agents subject to random decision delays drawn from an exponential distribution, as proposed
by Calvo (1983). Although it appears to have received little attention outside the field of
dynamic pricing, the stochastic delay model would appear to be a useful framework for modeling
adjustments in other market contexts when agent responses are dependent on unpredictable
transmissions of decisions, such as distributed production or communication networks. 14
In a discrete-time implementation of the stochastic delay approach, each agent j controls a
decision variable, y j;t , with the associated equilibrium trajectory, y
j;t . When adjustment of a
decision variable occurs, the movement to equilibrium is complete but the timing of adjustment is
stochastic. The probability of an agent adjustment in the ith period of the planning horizon, having
not adjusted in the preceding periods, is r i . The schedule of future adjustment probabilities is
represented by the lead polynomial, r(F) = Σ_{i=0}^∞ r_i F^i, where the r_i are nonnegative
and Σ_i r_i = 1. Using a discrete geometric distribution as the analogue of the exponential response
distribution in Calvo (1983), the generating function is r(F) = A(1)/A(F), where A(F) is the
first-order polynomial, A(F) = 1 - λF.
Given the constraint that the decision variable must remain at the level selected, say ~y_{j,t}, until
the next allowable adjustment period, the optimal setting that minimizes the expected sum of
squared deviations from the discounted path of equilibrium settings is E_t{~y_{j,t}} = E_t{r(BF)y*_{j,t}}.
Using simple sum aggregation, the aggregate of decision variables adjusted in t is
E_t{~y_t}. The aggregate decision variable is a normalized average of current and past vintage
decisions that survive in t. In the example of a geometric delay distribution, the survival probability
in t of a past decision variable setting from t-i is proportional to r_i. 15 Thus, the generating function
for the normalized survival probabilities over an infinite horizon is r(L), and the aggregate decision
variable in period t may be represented by

y_t = r(L)E_t{~y_t}.   (17)
The lag polynomial, r(L), remains to the left of the expectations operator on the right-hand-side
of (17) to ensure that the lagged expectations embedded in past decisions are represented in
14 Effects of costly and stochastic communications in distributed production are discussed in Board and Tinsley
(1996). See also Bertsekas and Tsitsiklis (1989) for representative configurations of communication networks.
15 The hazard function, the ratio of the adjustment probability, r i , to the survival probability at lag i, is constant for
the univariate geometric distribution, Johnson, Kotz, and Kemp (1993).
the current aggregate, y t . 16 Replacing the generating functions by the polynomial components
yields the analogue to the familiar second-order decision rule for the aggregate decision variable,
E_t{A(BF)A(L)y_t} = A(B)A(1)E_t{y*_t}.
Aggregated delay schedules
In principle, the choice of the appropriate stochastic delay distribution should be an empirical issue.
Remaining within the polynomial frictions framework of this paper, the approach suggested below
considers higher-order polynomial approximations of more general stochastic delay distributions.
As indicated, these generalizations can be interpreted as convolutions of component geometric
distributions.
It is unlikely that agents have perfect information about the distribution of delays or stochastic
congestion in future decisions. Suppose, for example, agents may be confronted by a "low-cost"
response distribution, r_1(F), or a "high-cost" response distribution, r_2(F), where the expected
response lag from the first distribution is smaller than that of the second. However, draws from
either delay distribution are random. In this example, the generating function of the effective
response probabilities is the product of the generating functions of the component response
probabilities, r(F) = r_1(F)r_2(F). 17
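The product-of-generating-functions property can be sketched numerically: multiplying the generating functions of two geometric delay distributions is polynomial multiplication of their probability coefficients, i.e. a discrete convolution. The parameters below are assumed for illustration.

```python
import numpy as np

# Product of generating functions = convolution of delay distributions.
# Two truncated geometric delay distributions with assumed parameters:
lam1, lam2 = 0.3, 0.6
n = 200                                 # truncation of the infinite series
i = np.arange(n)
r1 = (1 - lam1) * lam1 ** i             # "low-cost" delays (shorter mean lag)
r2 = (1 - lam2) * lam2 ** i             # "high-cost" delays (longer mean lag)

r = np.convolve(r1, r2)[:n]             # coefficients of r1(F) * r2(F)

mean1 = i @ r1                          # approx. lam1/(1 - lam1)
mean2 = i @ r2                          # approx. lam2/(1 - lam2)
mean12 = i @ r                          # mean delay of the convolution
```

The convolved coefficients still sum to one, and the mean delay of the convolution is the sum of the component means, as expected for the sum of two independent delays.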
More generally, in the case of random aggregation over m geometric response schedules, the
aggregate reset of decision variables adjusted in t is E_t{~y_t}, with effective generating function
r(F) = Π_{k=1}^m r_k(F). As in the
case of a univariate geometric distribution, a constant-hazard approximation 18 permits the survival
distribution of past vintage decisions to be represented by the polynomial generating function of
the stochastic delays, r(L) = A(1)/A(L), where A(L) ≡ Π_{k=1}^m A_k(L) is an m-order polynomial.
Thus, under an m-order polynomial stochastic delay distribution,
16 As discussed in Taylor (1993), some model specifications and instrumental methods of estimating decision rules
move the equivalent of the lag polynomial, r(L), inside the expectation operator.
17 If v and w are non-negative independent random variables, the generating function of the convolution, v + w, is
the product of the generating functions of v and w, Feller (1968).
18 The mean absolute error of constant-hazard approximations of quarterly survival probabilities is about 0.02
percentage points for the first sixteen lags in the empirical decision rules using polynomial frictions discussed in
the next section. The reason for the relatively modest approximation errors can be shown using a partial fractions
representation of the approximation error. Denote the partial fractions expansion of an m-order polynomial generating
function by A(1)/A(L) = Σ_k ω_k/(1 - λ_k L), where the λ_k denote the characteristic roots of A(L).
In the case of the univariate geometric distribution, the constant hazard is equal to the single root, and the
approximation error is zero. In the case of convoluted geometric distributions with a single dominant root, as with the
empirical examples in this paper, the constant hazard is generally very close to the modulus of the dominant root.
In addition, error components associated with smaller roots decay rapidly with lag i, because the error contribution
of a smaller root is scaled by powers of that root.
the aggregate decision variable in period t is represented by

y_t = r(L)E_t{r(BF)y*_t}.

Substituting in the component polynomials of the generating functions for the stochastic delay
and survival probabilities yields a solution for the aggregate decision variable that is identical to
that derived earlier for the decision rule under polynomial frictions,

E_t{A(BF)A(L)y_t} = A(B)A(1)E_t{y*_t},

where the component polynomials, A(L) and A(BF), are m-order.
III. Empirical Examples of Rational Error Corrections for Industry Pricing
Empirical contrasts of second-order and higher-order rational error corrections are discussed in
this section. The examples used are pricing decision rules of six SIC two-digit manufacturing
industries: textiles, lumber, rubber & plastics, primary metals, motor vehicles, and scientific
instruments. 19 In addition to an expected difference in statistical fits, the rational expectation
overidentifying restrictions are rejected by all but one of the second-order decision rules and by
none of the higher-order rules.
III.1 The Equilibrium Price
The equilibrium log price of the output of industry j with s_j identical producers is represented by the sum of the log markup and the log of marginal cost, where the first term is the log markup by producers and mc_j is the log of marginal cost. Ignoring strategic considerations, the markup depends on the number of producers, s_j, and the price elasticity of demand, and the monopoly and competitive solutions are obtained as s_j tends to one or to infinity, respectively. Gross production is Cobb-Douglas in both purchased materials and rented services of primary factors. Also, returns to scale are constant, so that the log of marginal cost is proportional to the weighted average of log input prices,
19 Motor vehicles is a large subset of the SIC two-digit industry, transportation equipment.
where p^c_j is the log price of primary commodity production inputs, p^i_j is the log price of intermediate materials purchased from other industries, p^v_j is the log unit cost of value added in industry j, and the φ coefficients are the corresponding input share weights.
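A minimal sketch of this share-weighted equilibrium price construction. All share weights, input prices, and the markup below are hypothetical illustrations, not the Table 1 benchmark values:

```python
# Equilibrium log price as markup plus a share-weighted average of log
# input prices, under constant returns to scale. All numbers are
# hypothetical, not the paper's benchmark input-output estimates.
import math

def equilibrium_log_price(markup, shares, log_input_prices):
    """p* = markup + sum_k shares[k] * log_input_prices[k]."""
    assert abs(sum(shares) - 1.0) < 1e-9  # constant returns to scale
    return markup + sum(s * p for s, p in zip(shares, log_input_prices))

# Illustrative weights for commodity inputs, intermediate materials,
# and value added (phi_c, phi_i, phi_v).
shares = [0.15, 0.45, 0.40]
log_prices = [math.log(1.10), math.log(1.04), math.log(1.07)]
p_star = equilibrium_log_price(math.log(1.25), shares, log_prices)
```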
As indicated in (21), input price regressors are specific to industry j and constructed from
input-output weightings of industry producer prices. Industry producer prices at the SIC two-digit
level of aggregation are generally available only from the mid-1980s, and industry prices in earlier
periods were assembled from specific commodity prices, often at lower levels of aggregation. The
industry log unit cost of primary factors, p v
estimated by the log of hourly earnings, w j;t ,
less the log of trend productivity, ae j;t . The latter was constructed from smoothed estimates of the
log of industry industrial production less the log of industry employment hours.
Industry producer prices in the U.S. do not reject the hypothesis of difference-stationarity
over postwar samples. A common format was used to explore cointegration constructions of the
equilibrium price of each industry
Given data limitations of the trend productivity estimates, both industry log productivity trends, ae_{j,t}, and time trends, t, were added as additional regressors, and the intercept, c_0, contains both the log margin and proportional mean errors in measurements of unit cost inputs. The relevant industry input share weights are displayed in the initial columns of Table 1.
These share weights are not estimated but defined by benchmark input-output estimates, obtained
by manipulating the Bureau of Economic Analysis (1991) industry use and make tables.
As shown in the columns headed by f(t) in Table 1, additional trend productivity regressors
were required for cointegration in three industries. As noted above, the log price of primary factors already incorporates trend productivity. Because this assigns a weight of −φ^v to ae_j, the additional positive coefficients lower the effective contribution of the trend productivity constructions. Finally, as shown in the last column of Table 1, the hypothesis that the cointegrating discrepancy is I(1) is rejected at the 90% confidence level or higher for all industries.
III.2 Empirical Estimates of Pricing Decision Rules under Geometric Frictions
Table 2 presents summary statistics for second-order pricing rules of the six manufacturing
industries, estimated under the prior of geometric frictions. Estimated parameters of the industry
decision rules and VAR forecast model parameters were obtained by the maximum likelihood
estimator described in the Appendix. 20
20 Industry prices are quarterly averages of monthly, seasonally unadjusted series from the U.S. Bureau of Labor Statistics database on commodity and industry producer prices for the 1954-1995 sample. As noted earlier, the equilibrium
The estimated error correction coefficients, A(1) in Table 2, indicate the average quarterly reduction rates planned for the price "gap" of each industry. The proportion of
explained variation in quarterly price changes can be substantial, with R² in three industries ranging from .3 to .6. The row in Table 2 labeled ΔR² (%) indicates that for four industries, the modal source of explained variation is the sample variability of industry forecasts of future equilibrium prices, as captured by the rational forecast term, h'z_{t−1}. Table 2 also contains the
estimated mean lag of producer responses to unexpected shocks and the estimated mean lead of
responses to anticipated events. The mean lead of the industry planning horizon is typically smaller
than the mean lag response due to discounting of forward events.
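The relation between the mean lag and the discounted mean lead can be illustrated with hypothetical geometric response weights; only the quarterly discount factor of .98 is taken from the paper:

```python
# Mean lag and mean lead of a two-sided geometric response schedule.
# Discounting of forward events (beta < 1) shrinks the lead weights,
# so the mean lead is smaller than the mean lag. The root 0.7 is an
# illustrative choice, not an estimated value.

beta = 0.98      # quarterly discount factor (as set in the paper)
lam = 0.7        # hypothetical root of the lag polynomial
n = 200

lag_w = [lam**i for i in range(1, n)]            # weights on past events
lead_w = [(beta * lam)**i for i in range(1, n)]  # weights on expected events

mean_lag = sum(i * w for i, w in enumerate(lag_w, 1)) / sum(lag_w)
mean_lead = sum(i * w for i, w in enumerate(lead_w, 1)) / sum(lead_w)

assert mean_lead < mean_lag
```

For geometric weights the mean lag is 1/(1 − lam) and the mean lead is 1/(1 − beta*lam), so the gap narrows as beta approaches one.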
Three characteristics of these estimated equations suggest significant dynamic specification
problems. First, the mean lag responses appear to be unusually large relative to previous estimates
of response lags for manufacturing prices. 21 Second, serial independence of the residuals is
rejected for all but one industry at 95% confidence levels. Although it is possible that producers
may have serially correlated information that has not been included in the industry forecast models,
it is plausible also that residual correlations could be due to misspecifications of the frictions in
producer responses.
A final indication of potential misspecifications is indicated in the bottom row of Table 2. This
row, labeled LR(h|z), lists the rejection probabilities of a likelihood ratio test to determine
if the data prefers an unrestricted forecast model of forward equilibrium price changes to the
rational forecasts embedded in the geometric frictions version of rational error correction. With
one exception (motor vehicles), the rational expectations overidentifying restrictions are rejected
at 99% confidence levels.
III.3 Empirical Estimates of Pricing Decision Rules under Polynomial Frictions
Estimates of the industry pricing rules under the polynomial generalization of frictions are listed in Table 3. Because the conventional two-root decision rule is nested in the generalized frictions model, it is interesting to note that additional lags of the dependent variable are always
price forecast model for each industry is a VAR containing the equilibrium price and the prices of production inputs.
Although seasonality is not pronounced in most industry prices (one exception is motor vehicles, as noted later), all
industry VARs contained at least four lags in regressors, and seasonal dummies were added to all VAR and error
correction equations. To reduce space, estimates of equation intercepts and seasonal dummy coefficients are not
reported in the tables. In all equations presented here, the quarterly discount factor, B, was set to .98, approximating the postwar annual real return to equity of about 8%; empirical results are not noticeably altered by moderate variations in B.
21 In analysis of the Stigler-Kindahl data of producers' prices, Carlton (1986) reports an average adjustment
frequency of about once a year. Reduced form regressions by Blanchard (1987, Table 8) for the U.S. manufacturing
price aggregate indicate a mean lag of about two quarters. By contrast, the levels friction model in Rotemberg (1982)
suggests a mean lag of about 12 quarters for the U.S. GDP deflator.
significant in the industry pricing models, generally consistent with polynomial components of order m = 3. 22 In the case of motor vehicles, the preferred specification is m = 5, requiring four lags of the dependent variable. This is due to a significant seasonal pattern in the producer price of motor vehicles which could not be adequately captured by fixed seasonal dummies.
Without exception, all of the problems noted for the estimated decision rules under geometric
frictions in Table 2 are eliminated under polynomial frictions. The percentage of explained variation is considerably higher for most industries in Table 3; mean lags are more plausible;
the assumption of serially independent residuals is retained in all industries; and the rejection
probabilities in the bottom row in Table 3 indicate that the rational expectations overidentifying
restrictions are not rejected at confidence levels of 95% or higher. The latter is noteworthy because
rejections of rational expectations overidentifying restrictions are often interpreted as evidence
of non-rational forecasting by agents or of inadequate specifications of agent forecast models of
forcing terms. Because the only difference between industry model specifications used in the
side-by-side comparisons of Table 2 and Table 3 is the degree of the Euler equation polynomials,
m, the culprit, at least in these examples and for the statistical properties considered, is rigid priors
on the specification of dynamic frictions.
More intuitive insights into the dynamic effects of the higher-order lag and lead polynomials
are obtained by rearranging the Euler equation to define the current period response weights to lags
and expected leads of the forcing term, E_t{p*_{t+i}}, implied by the industry decision rules,
where negative subscripts, i < 0, denote responses to lagged events and positive subscripts, i > 0, responses to anticipated events. The lag and lead weights of the six estimated industry decision
rules are displayed in the panels of Figure 1. The dotted lines are the friction weights generated by
the two-root decision rules reported in Table 2 and the solid lines are the friction weights associated with the 2m-root decision rules (m > 1) shown in Table 3.
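The shapes these rules imply can be sketched by inverting the friction polynomial directly. The roots below are hypothetical, not the estimated industry values:

```python
# Lag response weights implied by inverting the friction polynomial A(L).
# For m = 1 the weights are geometric; for m = 2 they are a convolution of
# two geometrics, which stays flatter near the current period before
# decaying. Roots 0.7, 0.5, and 0.4 are illustrative only.

def inverse_weights(roots, n=24):
    """Coefficients of 1 / prod_k (1 - roots[k] * L)."""
    w = [1.0] + [0.0] * (n - 1)
    for r in roots:
        out, run = [], 0.0
        for wi in w:          # multiply by 1/(1 - r*L): out_i = w_i + r*out_{i-1}
            run = wi + r * run
            out.append(run)
        w = out
    return w

w1 = inverse_weights([0.7])         # two-root rule's one-sided lag weights
w2 = inverse_weights([0.5, 0.4])    # polynomial frictions, m = 2

# Normalize so each weight schedule sums to one over the horizon shown.
w1 = [x / sum(w1) for x in w1]
w2 = [x / sum(w2) for x in w2]
```

Near the current period the m = 2 schedule declines more slowly (w2[1]/w2[0] = 0.9 versus 0.7), while its tail eventually decays at the smaller dominant root.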
Several effects of the generalization of frictions are apparent from the plots of the industry
friction weights in Figure 1. In each panel, the vertical line is positioned in the current period
22 This is not an isolated finding. Every macroeconomic aggregate to which the generalized frictions model was fit in the FRB/US macroeconomic model also rejected the conventional geometric frictions prior, m = 1.
As discussed in a recent literature survey by Taylor (1997), many studies of empirical staggered contract models for
wages do not support geometric response schedules, including an estimated bimodal distribution of contract lengths
in Levin (1991).
(i = 0). The mean lag of responses to unanticipated events is captured by the weighted average
of lags using the friction weights to the left of center. The friction weights are nearly symmetric
about the current period with the mean lead, associated with weights to the right of center, slightly
smaller than the mean lag due to the discounting of future events. Thus, the net mean response lag
to perfectly anticipated events is small for most industries.
Larger mean leads require longer planning horizons and are characteristic of the flatter friction
weight distributions indicated by the dotted lines in Figure 1 for the two-root decision rules. Thus, vertical distances between the two sets of friction weight distributions in each panel are
indicative of differences between the industry mean leads of Table 2 and the corresponding mean
leads of Table 3.
As shown in the panels of Figure 1, relatively low-order friction polynomials, A(B)A(BF), can generate a variety of flexible shapes, including the seasonal weights at distances of ±4 quarters indicated for the motor vehicles industry, SIC 371. Some estimated friction distributions are
relatively flat for several quarters, while others fall off rapidly from the modal weight in the
current quarter. The plots in Figure 1 do not support the two-sided geometric distribution prior that is consistent with two-root decision rules. In almost all industries, the drawback
of a two-sided geometric response schedule is an inability to capture relatively stronger industry
responses to events in a one- or two-quarter neighborhood of the current quarter.
III.4 Cyclical Variations in Pricing Margins
Thus far, specification of the desired industry price settings has proceeded under the assumption that relevant arguments are difference-stationary, with estimation of industry "target" paths, p*_t, by
cointegration. However, economic theory may suggest additional stationary variables as possible
arguments of the desired target. 23 If there is prior information that agents' perceptions of the
forcing terms of Euler equations are significantly influenced by additional variables, this prior
information should be introduced into the model to avoid possible distortions in estimated frictions.
An example is useful to illustrate how the distinction between friction and forecast parameters is
maintained for trial arguments of the forcing term by imposing dynamic friction restrictions.
In the present example of a price markup model, cyclical indicators such as industry capacity utilization rates may capture variations in planned margins due to boom or bust pricing strategies or cyclical movements in the price elasticity of demand. The Euler equation for industry price is restated to include the effect of current and lagged industry utilization rates, u_{t−i}, 24 on the
23 As discussed by Wickens (1996), economic theory is required for structural interpretations of cointegrations.
24 Industry utilization rates are constructed by the FRB staff from surveys of capacity utilization; see Raddock (1985). Log industry utilization rates are stationary, and the error correction responses of capacity output are either insignificant or an order of magnitude smaller than that of industry output. Sample means are removed so u_t can be
current price target, where the first term continues to denote the I(1) arguments of the equilibrium target, and D(L) is an m'-order polynomial in L. It is convenient to assume that m' is the minimal order necessary to distinguish between pricing effects of changing utilization rates and effects of higher or lower levels of capacity utilization.
Multiplying through by the inverse of the lead polynomial, A(BF), and substituting in forecasts from the agents' information set, z_t, defines the augmented rational error correction, where the industry price decision rule now contains an infinite-horizon forecast of forward industry utilization rates, discounted by the m eigenvalues contained in the frictions companion matrix, G. Using the simplifying operations discussed earlier, a closed-form of the extended rational error correction solution follows in the same way as before.
The additional coefficient vectors, h_{u_k}, are discounted sums of the expected forward path of the industry utilization rate, where the n × 1 selector vectors locate u_{t−k} in the information vector, z_t, now extended to include current and lagged values of the industry utilization rate. Because industry log utilization rates are stationary, the restricted coefficient vectors in (26) differ somewhat from those derived for forecasts of difference-stationary trend prices, h, in (12).
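Such discounted infinite-horizon forecast sums can be computed directly from a forecast companion matrix. A sketch with a hypothetical 2-variable forecast model, not the estimated industry VARs:

```python
# Coefficient vector of a discounted infinite-horizon forecast sum,
# h' = e' (I - B F)^{-1}, where F is the companion matrix of the agents'
# forecast model, B the discount factor, and e a selector vector that
# locates the forecast variable in the information vector z_t.
# The matrix F below is illustrative.

def discounted_sum(F, e, beta, horizon=2000):
    """Truncated sum of beta^k * (F')^k * e; converges when beta times
    the spectral radius of F is below one."""
    n = len(F)
    Ft = [[F[j][i] for j in range(n)] for i in range(n)]  # F transposed
    h = [0.0] * n
    term = e[:]
    for _ in range(horizon):
        h = [hi + ti for hi, ti in zip(h, term)]
        term = [beta * sum(Ft[i][j] * term[j] for j in range(n))
                for i in range(n)]
    return h

beta = 0.98                  # quarterly discount factor used in the paper
F = [[0.5, 0.1],
     [0.2, 0.3]]             # stable companion matrix (hypothetical)
e = [1.0, 0.0]               # select the first element of z_t
h = discounted_sum(F, e, beta)

# h solves the closed-form restriction (I - beta * F') h = e.
assert abs(h[0] - beta * (0.5 * h[0] + 0.2 * h[1]) - 1.0) < 1e-6
assert abs(h[1] - beta * (0.1 * h[0] + 0.3 * h[1])) < 1e-6
```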
Effects of adding industry utilization rates to the rational error correction are reported in Table 4. The rejection probabilities in the row labeled LR(D(L)) indicate that expected forward
utilization rates are a significant determinant of pricing at a 90% level of confidence in three of the
six industries. Cyclical markup effects associated with the level of the industry utilization rate are
indicated in the next row of Table 4, labeled D(1). Procyclical margins are indicated for primary
metals (SIC 33) and countercyclical margins for motor vehicles (SIC 371).
All significant features of the rational error corrections in Table 3 are retained in Table 4,
including serially independent residuals and nonrejection of the RE overidentifying restrictions
interpreted as industry output deviations from trend or preferred utilization.
which are now extended to include forecasts of forward utilization rates. Thus, the polynomial
frictions description of industry pricing appears to be robust to the addition of a conventional
determinant of cyclical pricing.
IV. Concluding Comments
After two decades of research in macroeconomics, the rational expectations conjecture is a fixture
in theoretical macroeconomic models but is routinely rejected in empirical macroeconomic models
that test the associated overidentifying restrictions. Rather than indicting the rational expectations assumption, it appears that the main culprit may be the arbitrarily tight prior used to characterize dynamic frictions in macroeconomic models.
The workhorse of macroeconomic descriptions of rational dynamic behavior is the
conventional linear decision rule with two characteristic roots, where one determines the geometric
discount factor of anticipated events and the other provides a geometric description of lagged
responses to unanticipated shocks. The two-sided geometric lead and lag response schedules are
generally motivated by a geometric frictions prior where agents aim to smooth levels of activity
or are subject to geometric random delays. Although it leads to tractable models of economic
behavior, the geometric frictions prior is not based on compelling economic theory and is usually
rejected by macroeconomic data.
An alternative specification of polynomial frictions is suggested in this paper which appears
to eliminate many of the empirical drawbacks of the conventional frictions specification. The
generalized frictions specification can be interpreted as the result of agents that smooth linear
combinations of current and lagged actions or a consequence of convoluted geometric distributions
of stochastic delays in decisions.
Polynomial frictions lead to higher-order Euler equations whose decision rules are solved
generally by numerical techniques. A method of obtaining closed-form solutions is presented that
uses two simple, first-order companion systems: a lead system for the forward planning required
when agent actions are restricted by frictions; and a lag system for the agents' forecast model of the
Euler equation forcing term. A tractable method of maximum likelihood estimation by a sequence
of regressions is outlined in the Appendix.
Empirical models of producer pricing are estimated for six manufacturing industries. The
second-order decision rule implied by the geometric frictions prior is nested within the polynomial
frictions specification and rejected by the data for all industries. The decision rules based on
geometric frictions had poor empirical properties, including overstatement of mean lags, strong
residual correlations, and rejections of rational expectation restrictions. Rational error corrections
using the generalized friction specification eliminated these empirical shortcomings, including
the rejection of rational expectations restrictions. The estimated generalized friction models of
industry pricing generally required fourth-order or sixth-order decision rules.
Polynomial descriptions of frictions define a rich class of rational error correction
specifications. The empirical applications indicate that higher-order decision rules provide
empirical fits comparable to reduced-form error corrections and, unlike the latter, provide useful
distinctions between lags due to forecasts of market events and lags due to constrained agent
responses to these forecasts.
--R
"A Linear Algebraic Procedure for Solving Linear Perfect Foresight Models."
"The Present Value of Profits and Cyclical Movements in Investment."
"Mechanics of Forming and Estimating Dynamic Linear Economies."
Parallel and Distributed Computation
"The Production and Inventory Behavior of the American Automobile Industry."
"Aggregate and Individual Price Adjustment."
"Smart Systems and Simple Agents."
"A Guide to FRB/US: A Macroeconomic Model of the United States."
Empirical Evaluation of Alternative Policy Regimes.
Bureau of Economic Analysis.
"Manufacturing Stocks: Expectations, Risk and Co-Integration."
"Staggered Prices in a Utility-Maximizing Framework."
"The Rigidity of Prices."
"Identification and the Effects of Monetary Policy Shocks."
"The Determinants of Manufacturing Inventories in the UK."
"The Multivariate Flexible Accelerator Model: Its Empirical Restrictions and An Application to U.S. Manufacturing."
"Modeling the Demand for Narrow Money in the United Kingdom and the United States."
"The Lucas Critique in Practice: Theory Without Measurement."
An Introduction to Probability Theory and Its Applications
"On the Notion of Equilibrium and Disequilibrium."
"Money, New-Keynesian Macroeconomics and the Business Cycle."
"Manufacturing Stocks and Forward-Looking Expectations in the UK."
"Formulating and Estimating Dynamic Linear Rational Expectations Models."
Univariate Discrete Distributions
"The Econometrics of the General Equilibrium Approach to Business Cycles."
"The Macroeconomic Significance of Nominal Wage Contract Duration."
"Econometric Policy Evaluation: A Critique."
"Critical Values for Cointegration Tests,"
"Dynamic Factor Demand Schedules for Labor and Capital under Rational Expectations."
"Estimation and Inference in Two-Step Econometric Models."
"A Comparison of the 'Rational Expectations' and 'General-to-Specific' Approaches to Modelling the Demand for M1."
"Error Correction, Partial Adjustment and All That: An Expository Note."
"Cost Adjustment under Rational Expectations: A Generalization."
"Dynamic Factor Demands under Rational Expectations."
"Forward Looking Price Setting in UK Manufacturing."
"Revised Federal Reserve Rates of Capacity Utilization."
"Sticky Prices in the United States."
"Prices, Output, and Hours: An Empirical Analysis Based on a Sticky Price Model."
"Estimation of Dynamic Labor Demand Schedules under Rational Expectations."
Bounded Rationality in Macroeconomics.
"The Dynamic Demand for Capital and Labor."
"Interpreting the Macroeconomic Time Series Facts."
"Linear Prediction and Estimation Methods for Regression Models with Stationary Stochastic Coefficients."
Macroeconomic Policy in a World Economy
"Temporary Price and Wage Rigidities in Macroeconomics: A Twenty-five Year Review."
"On Ramps, Turnpikes, and Distributed Lag Approximations of Optimal Intertemporal Adjustment."
"A Variable Adjustment Model of Labor Demand."
"Vector Autoregressions and Cointegration."
"Interpreting Cointegrating Vectors and Common Stochastic Trends."
--TR | error correction;producer pricing;companion systems;rational expectations |
586118 | Efficient, DoS-resistant, secure key exchange for internet protocols. | We describe JFK, a new key exchange protocol, primarily designed for use in the IP Security Architecture. It is simple, efficient, and secure; we sketch a proof of the latter property. JFK also has a number of novel engineering parameters that permit a variety of trade-offs, most notably the ability to balance the need for perfect forward secrecy against susceptibility to denial-of-service attacks. | Simplicity: The resulting protocol must be as simple as possible,
within the constraints of the requirements.
The Security requirement is obvious enough (we use the security
model of [7, 8]). The rest, however, require some discussion.
The PFS property is perhaps the most controversial. (PFS is an attribute of encrypted communications allowing for a long-term key to be compromised without affecting the security of past session keys.) Rather than assert that "we must have perfect forward secrecy at all costs," we treat the amount of forward secrecy as an engineering parameter that can be traded off against other necessary functions, such as efficiency or resistance to denial-of-service attacks. In fact, this corresponds quite nicely to the reality of today's Internet systems, where a compromise during the existence of a security association will reveal the plaintext of any ongoing transmissions. Our protocol has a forward secrecy interval; security associations are protected against compromises that occur outside of that interval. Specifically, we allow a party to reuse the same secret Diffie-Hellman exponents for multiple exchanges within a time period; this may save a large number of costly modular exponentiations.
The Privacy property means that the protocol must not reveal the identity of a participant to any unauthorized party, including an active attacker that attempts to act as the peer. Clearly, it is not possible for a protocol to protect both the initiator and the responder against an active attacker; one of the participants must always "go first." In general, we believe that the most appropriate choice is to protect the initiator, since the initiator is typically a relatively anonymous "client," while the responder's identity may already be known. Conversely, protecting the responder's privacy may not be of much value (except perhaps in peer-to-peer communication): in many cases, the responder is a server with a fixed address or characteristics (e.g., a well-known web server). One approach is a protocol that allows the two parties to negotiate who needs identity protection. In JFK, we decided against this approach: it is unclear what, if any, metric can be used to determine which party should receive identity protection; furthermore, this negotiation could act as a loophole to make initiators reveal their identity first. Instead, we propose two alternative protocols: one that protects the initiator against an active attack, and another that protects the responder.
The Memory-DoS and Computation-DoS properties have become
more important in the context of recent Internet denial-of-service
attacks. Photuris [24] was the first published key management protocol
for which DoS-resistance was a design consideration; we suggest
that these properties are at least as important today.
The Efficiency property is worth discussing. In many protocols, key setup must be performed frequently enough that it can become a bottleneck to communication. The key exchange protocol must minimize computation as well as total bandwidth and round trips. Round trips can be an especially important factor when communicating over unreliable media. Using our protocols, only two round trips are needed to set up a working security association. This is a considerable saving in comparison with existing protocols, such as IKE.
The Non-Negotiated property is necessary for several reasons.
Negotiations create complexity and round trips, and hence should
be avoided. Denial of service resistance is also relevant here; a
partially-negotiated security association consumes resources.
The Simplicity property is motivated by several factors. Efficiency is one; increased likelihood of correctness is another. But our motivation is especially colored by our experience with IKE. Even if the protocol is defined correctly, it must be implemented correctly; as protocols become more complex, implementation and interoperability errors occur more often. This hinders both security and interoperability. Our design follows the traditional design paradigm of successful internetworking protocols: keep individual building blocks as simple as possible; avoid large, complex, monolithic protocols. We have consciously chosen to omit support for certain features when we felt that adding such support would cause an increase in complexity that was disproportional to the benefit gained.
Protocol design is, to some extent, an engineering activity, and we need to provide for trade-offs between different types of security. There are trade-offs that we made during the protocol design, and others, such as that between forward secrecy and computational effort, that are left to the implementation and to the user, e.g., selected as parameters during configuration and session negotiation.
2. PROTOCOL DEFINITION
We present two variants of the JFK protocol. Both variants take two round-trips (i.e., four messages) and both provide the same level of DoS protection. The first variant, denoted JFKi, provides identity protection for the initiator even against active attacks. The identity of the responder is not protected. This type of protection is appropriate for a client-server scenario where the initiator (the client) may wish to protect its identity, whereas the identity of the responder (the server) is public. As discussed in Section 4, this protocol uses the basic design of the ISO 9798-3 key exchange protocol [20, 7], with modifications that guarantee the properties discussed in the Introduction.
The second variant, JFKr, provides identity protection for the
responder against active adversaries. Furthermore, it protects both
sides' identities against passive eavesdroppers. This type of protection
is appropriate for a peer-to-peer scenario where the responder
may wish to protect its identity. Note that it is considerably easier
to mount active identity-probing attacks against the responder
than against the initiator. Furthermore, JFKr provides repudiability
on the key exchange, since neither side can prove to a third party
that their peer in fact participated in the protocol exchange with
them. (In contrast, JFKi authentication is non-repudiable, since each party signs the other's identity along with session-specific information such as the nonces.) This protocol uses the basic design of the Sign-and-MAC (SIGMA) protocol from [28], again with the appropriate modifications.
2.1 Notation
First, some notation:
Hk(M) Keyed hash (e.g., HMAC [29]) of message M using key k. We assume that H is a pseudorandom function. This also implies that H is a secure message authentication code (MAC). In some places we make a somewhat stronger assumption relating H and discrete logarithms; see more details within.
{M}Ke,Ka Encryption using symmetric key Ke, followed by MAC authentication with symmetric key Ka, of message M. The MAC is computed over the ciphertext, prefixed with the literal ASCII string "I" or "R", depending on who the message sender is (initiator or responder).
Sx[M] Digital signature of message M with the private key belonging to principal x (initiator or responder). It is assumed to be a non-message-recovering signature.
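The keyed hash Hk(M) can be instantiated with HMAC from the Python standard library; JFK assumes only that H is a pseudorandom function, so the concrete hash below is an illustrative choice:

```python
# Keyed hash H_k(M) instantiated as HMAC-SHA256 (illustrative choice;
# the protocol only assumes H is a pseudorandom function).
import hmac
import hashlib

def H(k: bytes, *parts: bytes) -> bytes:
    """Keyed hash of the concatenated message components."""
    return hmac.new(k, b"".join(parts), hashlib.sha256).digest()

tag = H(b"example-key", b"message")
assert tag == H(b"example-key", b"message")   # deterministic
assert tag != H(b"another-key", b"message")   # key-dependent
```

A real implementation would encode the component boundaries unambiguously (e.g., with length prefixes) rather than concatenating raw bytes.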
The message components used in JFK are:
IPI Initiator's network address.
gx Diffie-Hellman (DH) exponentials; also identifying the group-ID.
gi Initiator's current exponential, (mod p).
gr Responder's current exponential, (mod p).
NI Initiator nonce, a random bit-string.
NR Responder nonce, a random bit-string.
IDI Initiator's certificates or public-key identifying information.
IDR Responder's certificates or public-key identifying information.
IDR' An indication by the initiator to the responder as to what authentication information (e.g., certificates) the latter should use.
HKR A transient hash key private to the responder.
sa Cryptographic and service properties of the security association (SA) that the initiator wants to establish. It contains a Domain-of-Interpretation which JFK understands, and an application-specific bit-string.
sa' SA information the responder may need to give to the initiator (e.g., the responder's SPI, in IPsec).
Kir Shared key derived from gir, NI, and NR, used for protecting the application (e.g., the IPsec SA).
Ke,Ka Shared keys derived from gir, NI, and NR, used to encrypt and authenticate Messages (3) and (4) of the protocol.
grpinfo All groups supported by the responder, the symmetric algorithms used to protect Messages (3) and (4), and the hash function used for key generation.
Both parties must pick a fresh nonce at each invocation of the JFK
protocol. The nonces are used in the session-key computation, to
provide key independence when one or both parties reuse their DH
exponential; the session key will be different between independent
runs of the protocol, as long as one of the nonces or exponentials
changes. HKR is a global parameter for the responder; it stays the same between protocol runs, but can change periodically.
2.2 The JFKi Protocol
The JFKi protocol consists of four messages (two round trips):
[Figure: the four messages of JFKi; the responder's authenticator is HHKR(gr, NR, NI, IPI).]
The keys used to protect Messages (3) and (4), Ke and Ka, are computed as Hgir(NI, NR, "1") and Hgir(NI, NR, "2") respectively. The session key passed to IPsec (or some other application), Kir, is Hgir(NI, NR, "0"). (Note that there may be a difference between the number of bits produced by the HMAC and the number produced by the raw Diffie-Hellman exchange; in that case, the 512 least-significant bits of gir are used as the key.) If the key used by IPsec is longer than the output of the HMAC, the key extension method of IKE is used to generate more keying material.
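The derivation above can be sketched as follows. This is an illustrative model only: the function and variable names are ours, and SHA-256 stands in for whatever hash function grpinfoR actually specifies.

```python
import hashlib
import hmac

def jfk_session_keys(g_ir: bytes, n_i: bytes, n_r: bytes):
    """Sketch of JFK key derivation: each key is a keyed hash of the
    two nonces plus a one-byte tag, keyed with the DH result g^ir."""
    def h(tag: bytes) -> bytes:
        return hmac.new(g_ir, n_i + n_r + tag, hashlib.sha256).digest()
    k_e = h(b"1")   # encrypts Messages (3) and (4)
    k_a = h(b"2")   # authenticates Messages (3) and (4)
    k_ir = h(b"0")  # session key handed to IPsec / the application
    return k_e, k_a, k_ir
```

Because the nonces enter the keyed hash, reusing gir across sessions still yields fresh keys as long as either nonce changes.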
Message (1) is straightforward; note that it assumes that the initiator
already knows a group and generator that are acceptable to
the responder. The initiator can reuse a gi value in multiple instances
of the protocol with the responder, or other responders that
accept the same group, for as long as she wishes her forward secrecy
interval to be. We discuss how the initiator can discover what
groups to use in a later section. This message also contains an indication
as to which ID the initiator would like the responder to use
to authenticate. IDR' is sent in the clear; however, the responder's ID in Message (2) is also sent in the clear, so there is no loss of privacy.
Message (2) is more complex. Assuming that the responder accepts the Diffie-Hellman group in the initiator's message (rejections are discussed in Section 2.5), he replies with a signed copy of his own exponential (in the same group, also (mod p)), information on what secret-key algorithms are acceptable for the next message, a random nonce, his identity (certificates or a string identifying his public key), and an authenticator calculated from a secret, HKR, known to the responder; the authenticator is computed over the responder's exponential, the two nonces, and the initiator's network address. The responder's exponential may also be reused; again, it is regenerated according to the responder's forward secrecy interval. The signature on the exponential needs to be recalculated only at the same rate as the responder's forward secrecy interval (when the exponential itself changes). Finally, note that the responder does not need to generate any state at this point, and the only cryptographic operation is a MAC calculation. If the responder is not under heavy load, or if PFS is deemed important, the responder may generate a new exponential and corresponding signature for use in this exchange; of course, this would require keeping some state (the secret part of the responder's Diffie-Hellman computation).
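The responder's stateless authenticator can be modeled as a single MAC computation. The sketch below uses our own naming, and HMAC-SHA-256 is an assumption, not the paper's mandated algorithm:

```python
import hashlib
import hmac

def message2_authenticator(hk_r: bytes, g_r: bytes, n_r: bytes,
                           n_i: bytes, ip_i: bytes) -> bytes:
    """MAC over the responder's exponential, both nonces, and the
    initiator's claimed network address, keyed with the transient HKR."""
    return hmac.new(hk_r, g_r + n_r + n_i + ip_i, hashlib.sha256).digest()

def verify_roundtrip(hk_r, g_r, n_r, n_i, ip_i, received: bytes) -> bool:
    """On Message (3): recompute and compare, so the responder needs no
    per-session state to confirm that a round trip was completed."""
    expected = message2_authenticator(hk_r, g_r, n_r, n_i, ip_i)
    return hmac.compare_digest(expected, received)
```

Since the MAC binds the initiator's address, a Message (3) arriving from a different address fails verification, which is what defeats the "cookie jar" attack described below.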
Message (3) echoes back the data sent by the responder, including the authenticator. The authenticator is used by the responder to verify the authenticity of the returned data. The authenticator also confirms that the sender of Message (3) used the same address as in Message (1); this can be used to detect and counter a "cookie jar" DoS attack1. A valid authenticator indicates to the responder that a roundtrip has been completed (between Messages (1), (2), and (3)). The message also includes the initiator's identity and service request, and a signature computed over the nonces, the responder's identity, and the two exponentials. This latter information is all encrypted and authenticated under keys Ke and Ka, as already described. The encryption and authentication use algorithms specified in grpinfoR. The responder keeps a copy of recently received Messages (3) and their corresponding Messages (4). Receiving a duplicate (or replayed) Message (3) causes the responder to simply retransmit the corresponding Message (4), without creating new state or invoking IPsec. This cache of messages can be reset as soon as HKR is changed. The responder's exponential (gr) is re-sent by the initiator because the responder may be generating a new gr for every new JFK protocol run (e.g., if the arrival rate of requests is below some threshold). It is important that the responder deal with repeated Messages (3) as described above. Responders that create new state for a repeated Message (3) open the door to attacks against the protocol and/or the underlying application (IPsec).
Note that the signature is protected by the encryption. This is
necessary for identity protection, since everything signed is public
except the sa, and that is often guessable. An attacker could verify
guesses at identities if the signature were not encrypted.
Message (4) contains application-specific information (such as the responder's IPsec SPI), and a signature on both nonces, both exponentials, and the initiator's identity. Everything is encrypted and authenticated by the same Ke and Ka used in Message (3), which are derived from NI, NR, and gir. The encryption and authentication algorithms are specified in grpinfoR.
2.3 Discussion
The design follows from our requirements. With respect to communication efficiency, observe that the protocol requires only two round trips. The protocol is optimized to protect the responder against denial of service attacks on state or computation. The initiator bears the initial computational burden and must establish round-trip communication with the responder before the latter is required to perform expensive operations. At the same time, the protocol is designed to limit the private information revealed by the initiator: she does not reveal her identity until she is sure that only the responder can retrieve it. (An active attacker can replay an old Message (2) as a response to the initiator's initial message, but he cannot retrieve the initiator's identity from Message (3) because he cannot complete the Diffie-Hellman computation.)

1The "cookie jar" DoS attack involves an attacker that is willing to reveal the address of one subverted host so as to acquire a valid cookie (or number of cookies) that can then be used by a large number of other subverted hosts to launch a DDoS attack using the valid cookie(s).
The initiator's first message, Message (1), is a straightforward Diffie-Hellman exponential. Note that this is assumed to be encoded in a self-identifying manner, i.e., it contains a tag indicating which modulus and base were used. The nonce NI serves two purposes: first, it allows the initiator to reuse the same exponential across different sessions (with the same or different responders, within the initiator's forward secrecy interval) while ensuring that the resulting session key will be different. Second, it can be used to differentiate between different parallel sessions (in any case, we assume that the underlying transport protocol, i.e., UDP, can handle the demultiplexing by using different ports at the initiator).
Message (2) must require only minimal work for the responder, since at that point he has no idea whether the initiator is a legitimate correspondent or, e.g., a forged message from a denial of service attack; no round trip has yet occurred with the initiator. Therefore, it is important that the responder not be required at this point to perform expensive calculations or create state. Here, the responder's cost will be a single authentication operation, the cost of which (for HMAC) is dominated by two invocations of a cryptographic hash function, plus generation of a random nonce NR.
The responder may compute a new exponential gb (mod p) for
each interaction. This is an expensive option, however, and at times
of high load (or attack) it would be inadvisable. The nonce prevents
two successive session keys from being the same, even if both the
initiator and the responder are reusing exponentials. One case when
both sides may reuse the same exponentials is when the initiator is
a low-power device (e.g., a cellphone) and the responder is a busy
server.
A simple way of addressing DoS is to periodically (e.g., every few seconds) generate an (r, gr, HHKR(gr), SR[gr]) tuple and place it in a FIFO queue. As requests arrive (in particular, as valid Messages (3) are processed), the first entry from the FIFO is removed; thus, as long as valid requests arrive at under the generation rate, PFS is provided for all exchanges. If the rate of valid protocol requests exceeds the generation rate, a JFK implementation should reuse the last tuple in the FIFO. Notice that in this scheme, the same gr may be reused in different sessions, if these sessions are interleaved. This does not violate the PFS or other security properties of the protocol.
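The FIFO scheme above might be sketched as follows. The class and method names are ours, and the opaque tuples stand for the (r, gr, HHKR(gr), SR[gr]) values described in the text:

```python
from collections import deque

class ExponentialPool:
    """Sketch of the periodic-tuple scheme: a timer appends fresh
    tuples; each valid Message (3) consumes the oldest one; if demand
    outpaces generation, the last remaining tuple is reused."""
    def __init__(self):
        self._fifo = deque()

    def generate(self, tup):
        # called periodically by the responder (e.g., on a timer)
        self._fifo.append(tup)

    def next_tuple(self):
        if len(self._fifo) > 1:
            return self._fifo.popleft()  # fresh tuple: PFS preserved
        return self._fifo[0]             # under overload: reuse last tuple
```

Reusing the head of the queue under load trades forward secrecy for availability, which is exactly the trade-off the text describes.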
If the responder is willing to accept the group identified in the initiator's message, his exponential must be in the same group. Otherwise, he may respond with an exponential from any group of his own choosing. The field grpinfoR lists what groups the responder finds acceptable, should the initiator wish to restart the protocol. This provides a simple mechanism for the initiator to discover the groups currently allowed by the responder. That field also lists what encryption and MAC algorithms are acceptable for the next two messages. This is not negotiated; the responder has the right to decide what strength of encryption is necessary to use his services.
Note that the responder creates no state when sending this mes-
sage. If it is fraudulent, that is, if the initiator is non-existent or
intent on perpetrating a denial-of-service attack, the responder will
not have committed any storage resources.
In Message (3), the initiator echoes content from the responder's
message, including the authenticator. The authenticator allows the
responder to verify that he is in round-trip communication with a
legitimate potential correspondent. The initiator also uses the key
derived from the two exponentials and the two nonces to encrypt
her identity and service request. The initiator's nonce is used to
ensure that this session key is unique, even if both the initiator and
the responder are reusing their exponentials and the responder has
"forgotten" to change nonces.
Because the initiator can validate the responder's identity before
sending her own and because her identifying information (ignoring
her public key signature) is sent encrypted, her privacy is protected
from both passive and active attackers. An active attacker can replay
an old Message (2) as a response to the initiator's initial mes-
sage, but he cannot retrieve the initiator's identity from Message (3)
because he cannot complete the Dif?e-Hellman computation. The
service request is encrypted, too, since its disclosure might identify
the requester. The responder may wish to require a certain strength
of cryptographic algorithm for selected services.
Upon successful receipt and verification of this message, the responder
has a shared key with a party known to be the initiator. The
responder further knows what service the initiator is requesting. At
this point, he may accept or reject the request.
The responder's processing on receipt of Message (3) requires verifying an authenticator and, if that is successful, performing several public key operations to verify the initiator's signature and certificate chain. The authenticator (again requiring two hash operations) is sufficient defense against forgery; replays, however, could cause considerable computation. The defense against this is to cache the corresponding Message (4); if a duplicate Message (3) is seen, the cached response is retransmitted; the responder does not create any new state or notify the application (e.g., IPsec). The key for looking up Messages (3) in the cache is the authenticator; this prevents DoS attacks where the attacker randomly modifies the encrypted blocks of a valid message, causing a cache miss and thus more processing to be done at the responder. Further, if the authenticator verifies but there is some problem with the message (e.g., the certificates do not verify), the responder can cache the authenticator along with an indication of the failure (or the actual rejection message), to avoid unnecessary processing (which may be part of a DoS attack). This cache of Messages (3) and authenticators can be purged as soon as HKR is changed (since the authenticator will no longer pass verification).
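The cache behavior described above can be sketched like this. The names are ours, not from any JFK implementation:

```python
class Message3Cache:
    """Replay/failure cache keyed by the Message (3) authenticator:
    a hit means retransmit the stored Message (4) (or stored rejection)
    without creating state or invoking IPsec; the whole cache is
    dropped whenever HKR changes."""
    def __init__(self):
        self._by_auth = {}

    def lookup(self, authenticator: bytes):
        # None means this is a new Message (3) that must be processed
        return self._by_auth.get(authenticator)

    def store(self, authenticator: bytes, response):
        self._by_auth[authenticator] = response

    def reset(self):
        # on HKR rollover: old authenticators no longer verify anyway
        self._by_auth.clear()
```

Keying the cache on the authenticator (rather than, say, a hash of the whole message) is what makes bit-flipping the encrypted blocks useless to an attacker: the lookup key is unchanged, so the cached response is simply replayed.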
Caching Message (3) and refraining from creating new state for replayed instances of Message (3) also serves another security purpose. If the responder were to create new state and send a new Message (4), with a new sa', for a replayed Message (3), then an attacker who compromised the initiator could replay a recent session with the responder. That is, by replaying Message (3) from a recent exchange between the initiator and the responder, the attacker could establish a session with the responder where the session key would be identical to the key of the previous session (which took place when the initiator was not yet compromised). This could compromise the forward security of the initiator.
There is a risk, however, in keeping this message cached for too long: if the responder's machine is compromised during this period, perfect forward secrecy is compromised. We can tune this by changing the MAC key HKR more frequently. The cache can be reset when a new HKR is chosen.
In Message (4), the responder sends to the initiator any responder-specific application data (e.g., the responder's IPsec SPI), along with a signature on both nonces, both exponentials, and the initiator's identity. All the information is encrypted and authenticated using keys derived from the two nonces, NI and NR, and the Diffie-Hellman result. The initiator can verify that the responder is present and participating in the session by decrypting the message and verifying the enclosed signature.
2.4 The JFKr Protocol
Using the same notation as in JFKi, the JFKr protocol is:

[Figure: the four messages of JFKr.]

As in JFKi, the keys used to protect Messages (3) and (4), Ke and Ka, are respectively computed as Hgir(NI, NR, "1") and Hgir(NI, NR, "2"). The session key passed to IPsec (or some other application), Kir, is Hgir(NI, NR, "0").
Both parties send their identities encrypted and authenticated under Ke and Ka, providing both parties with identity protection against passive eavesdroppers. In addition, the party that first reveals its identity is the initiator. This way, the responder is required to reveal its identity only after it verifies the identity of the initiator. This guarantees active identity protection to the responder.

We remark that it is essentially impossible, under current technology assumptions, to have a two-round-trip protocol that provides DoS protection for the responder, passive identity protection for both parties, and active identity protection for the initiator. An informal argument proceeds as follows: if DoS protection is in place, then the responder must be able to send his first message before he computes any shared key; this is so since computing a shared key is a relatively costly operation in current technology. This means that the responder cannot send his identity in the second message without compromising his identity protection against passive eavesdroppers. Thus the responder's identity must be sent in the fourth (and last) message of the protocol. Consequently, the initiator's identity must be sent before the responder's identity is sent.
2.5 Rejection Messages
Instead of sending Message (2) or (4), the responder can send a 'rejection'. For Message (2), this rejection can only be on the grounds that he does not accept the group that the initiator has used for her exponential. Accordingly, the reply should indicate what groups are acceptable. Since Message (2) already contains the field grpinfoR (which indicates what groups are acceptable), no explicit rejection message is needed. (For efficiency's sake, the group information could also be placed in the responder's long-lived certificate, which the initiator may already have.)
Message (4) can be a rejection for several reasons, including lack of authorization for the service requested. But it could also be caused by the initiator requesting cryptographic algorithms that the responder regards as inappropriate, given the requester (initiator), the service requested, and possibly other information available to the responder, such as the time of day or the initiator's location as indicated by the network. In these cases, the responder's reply should list acceptable cryptographic algorithms, if any. The initiator would then send a new Message (3), which the responder would accept anew; again, the responder does not create any state until after a successful Message (3) receipt.
3. WHAT JFK AVOIDS
By intent, JFK does not do certain things. It is worth enumerating them, if only to stimulate discussion about whether certain protocol features are ever appropriate. In JFK, the "missing" features were omitted by design, in the interests of simplicity.
3.1 Multiple Authentication Options
The most obvious "omission" is any form of authentication other than by certificate chains trusted by each party. We make no provisions for shared secrets, token-based authentication, certificate discovery, or explicit cross-certification of PKIs. In our view, these are best accomplished by outboard protocols. Initiators that wish to rely on any form of legacy authentication can use the protocols being defined by the IPSRA [41] or SACRED [1, 14] IETF working groups. While these mechanisms do add extra round trips, the expense can be amortized across many JFK negotiations. Similarly, certificate-chain discovery (beyond the minimal capabilities implicit in IDI and IDR) should be accomplished by protocols defined for that purpose. By excluding these protocols from JFK, we can exclude them from our security analysis; the only interface between the two is a certificate chain, which by definition is a stand-alone secure object.
We also eliminate negotiation in general, in favor of ukases issued by the responder. The responder is providing a service; it is entitled to set its own requirements for that service. Any cryptographic primitive mentioned by the responder is acceptable; the initiator can choose any it wishes. We thus eliminate complex rules for selecting the "best" choice from two different sets. We also eliminate the need for state to be kept by the responder; the initiator can either accept the responder's desires or restart the protocol.
3.2 Phase II and Lack Thereof
JFK rejects the notion of two different phases. As will be discussed in Section 5, the practical benefits of quick mode are limited. Furthermore, we do not agree that frequent rekeying is necessary. If the underlying block cipher is sufficiently limited as to bar long-term use of any one key, the proper solution is to replace that cipher. For example, 3DES is inadequate for protection of very high speed transmissions, because the probability of collision in CBC mode becomes too high after encryption of 2^32 plaintext blocks. Using AES instead of 3DES solves that problem without complicating the exchange.
Phase II of IKE is used for several things; we do not regard any
of them as necessary. One is generating the actual keying material
used for security associations. It is expected that this will be done
several times, to amortize the expense of the Phase I negotiation. A
second reason for this is to permit very frequent rekeying. Finally,
it permits several separate security associations to be set up, with
different parameters.
We do not think these apply. First, with modern ciphers such as AES, there is no need for frequent key changes. AES keys are long enough that brute force attacks are infeasible. Its longer block size protects against CBC limitations when encrypting many blocks. We also feel that JFK is efficient enough that avoiding the overhead of a full key exchange is not required. Rather than adding new SAs to an existing Phase I SA, we suggest that a full JFK exchange be initiated instead. We note that the initiator can also choose to reuse its exponential, if it wishes to trade perfect forward secrecy for computation time. If state already exists between the initiator and the responder, they can simply check that the Diffie-Hellman exponentials are the same; if so, the result of the previous exponentiation can be reused. As long as one of the two parties uses a fresh nonce in the new protocol exchange, the resulting cryptographic keys will be fresh and not subject to a related-key (or other, similar) attack. As we discuss in Section 3.3, a similar performance optimization can be used for the certificate-chain validation.
A second major reason for Phase II is dead-peer detection. IPsec gateways often need to know if the other end of a security association is dead, both to free up resources and to avoid "black holes." In JFK, this is done by noting the time of the last packet received. A peer that wishes to elicit a packet may send a "ping." Such hosts may decline any proposed security associations that do not permit such "ping" packets.
A third reason for Phase II is general security association control, and in particular SA deletion. While such a desire is not wrong, we prefer not to burden the basic key exchange mechanism with extra complexity. There are a number of possible approaches. Ours requires that JFK endpoints implement the following rule: a new negotiation that specifies an SPD identical to the SPD of an existing SA overwrites it. To some extent, this removes any need to delete an SA if black-hole avoidance is the concern; simply negotiate a new SA. To delete an SA without replacing it, negotiate a new SA with a null ciphersuite.
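The overwrite rule might be sketched as follows. This is a minimal model under our own naming; a real security association database holds far more per entry:

```python
class SADatabase:
    """Minimal model of the endpoint rule: a negotiation whose SPD
    matches an existing SA replaces it; negotiating a null ciphersuite
    plays the role of deletion."""
    def __init__(self):
        self._by_spd = {}

    def install(self, spd, sa):
        # identical SPD -> overwrite the existing SA
        self._by_spd[spd] = sa

    def delete(self, spd):
        # "delete" = install an SA with a null ciphersuite
        self.install(spd, {"ciphersuite": None})

    def get(self, spd):
        return self._by_spd.get(spd)
```

Keying the database on the SPD is what makes the overwrite rule sufficient for black-hole avoidance: a fresh negotiation for the same traffic selector can never coexist with a stale entry.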
3.3 Rekeying
When a negotiated SA expires (or shortly before it does), the JFK protocol is run again. It is up to the application to select the appropriate SA to use among many valid ones. In the case of IPsec, implementations should switch to using the new SA for outgoing traffic, but would still accept traffic on the old SA (as long as that SA has not expired).
To address performance considerations, we should point out that, properly implemented, rekeying requires only one signature and one verification operation in each direction, if both parties use the same Diffie-Hellman exponentials (in which case the cached result can be reused) and certificates: the receiver of an ID payload compares its hash with those of any cached ID payloads received from the same peer. While this is an implementation detail, a natural location to cache past ID payloads is alongside already established SAs (a convenient fact, as rekeying will likely occur before existing SAs are allowed to expire, so the ID information will be readily available). If a match is found and the result has not "expired" yet, then we do not need to re-validate the certificate chain. A previously verified certificate chain is considered valid for the shortest of its CRL re-validation time, certificate expiration time, OCSP result validity time, etc. For each certificate chain, there is one such associated value (the time when one of its components becomes invalid or needs to be checked again). Notice that an implementation does not need to cache the actual ID payloads; all that is needed is the hash and the expiration time.
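The certificate-chain cache described above can be sketched as follows. The names are ours; as the text notes, only the hash of the ID payload and a single recheck deadline need to be stored:

```python
import hashlib
import time

class ChainValidationCache:
    """Stores, per verified ID payload, the earliest time at which any
    chain component must be rechecked (CRL refresh, certificate
    expiry, OCSP result validity, ...), keyed by the payload's hash."""
    def __init__(self):
        self._entries = {}

    def mark_verified(self, id_payload: bytes, recheck_at: float):
        digest = hashlib.sha256(id_payload).digest()
        self._entries[digest] = recheck_at

    def still_valid(self, id_payload: bytes, now: float = None) -> bool:
        now = time.time() if now is None else now
        digest = hashlib.sha256(id_payload).digest()
        deadline = self._entries.get(digest)
        return deadline is not None and now < deadline
```

A hit on `still_valid` lets the rekeying peer skip the full certificate-chain verification, leaving only the single signature and verification operation per direction mentioned in the text.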
That said, if for some reason fast rekeying is needed for some
application domain, it should be done by a separate protocol.
4. TOWARDS A PROOF OF SECURITY
This section very briefly overviews our security analysis of the JFK protocol. Full details are deferred to the full analysis paper.
There are currently two main approaches to analyzing the security of protocols. One is the formal-methods approach, where the cryptographic components of a protocol are modeled by "ideal boxes" and automatic theorem-verification tools are used to verify the validity of the high-level design (assuming ideal cryptography). The other is the cryptographic approach, which accounts for the fact that cryptographic components are imperfect and may potentially interact badly with each other. Here, security of protocols is proven based on some underlying computational intractability assumptions (such as the hardness of factoring large numbers, computing discrete logarithms modulo a large prime, or inverting a cryptographic hash function). The formal-methods approach, being automated, has the advantage that it is less susceptible to human errors and oversights in analysis. On the other hand, the cryptographic approach provides better soundness, since it considers the overall security of the protocol, and in particular accounts for the imperfections of the cryptographic components.
Our analysis follows the cryptographic approach. We welcome
any additional analysis. In particular, analysis based on formal
methods would be a useful complement to the analysis described
here.
We separate the analysis of the "core security" of the protocol (which is rather tricky) from the analysis of added security features such as DoS protection and identity protection (which is much more straightforward). The rest of this section concentrates on the "core security" of the protocol. DoS and identity protection were discussed in previous sections.
4.1 Core security
We use the modeling and treatment of [7], which in turn is based on [6]; see there for more references and comparisons with other analytical work. Very roughly, the "core security" of a key exchange protocol boils down to two requirements:

1. If party A generates a key KA associated with a session identifier s and peer identity B, and party B generates a key KB associated with the same session identifier s and peer A, then KA = KB.

2. No attacker can distinguish between the key exchanged in a session between two unbroken parties and a truly random value. This holds even if the attacker has total control over the communication, can invoke multiple sessions, and is told the keys generated in all other sessions.
We stress that this is only a rough sketch of the requirement. For full details see [7, 8]. We show that both JFKi and JFKr satisfy the above requirements. When these protocols are run with perfect forward secrecy, the security is based on a standard intractability assumption of the DH problem, plus the security of the signature scheme and the security of the MAC as a pseudo-random function. When a party reuses its DH value, the security is based on a stronger intractability assumption involving both DH and the HMAC pseudo-random function.

We first analyze the protocols in the restricted case where the parties do not reuse the private DH exponents for multiple sessions; this is the bulk of the work. Here, the techniques for demonstrating the security of the two protocols are quite different.
4.1.1 JFKi:
The basic cryptographic core of this protocol is the same as that of the ISO 9798-3 protocol, which was analyzed and proven secure in [7]. This protocol can be briefly summarized as follows:

[Figure: the ISO 9798-3 message flow; each party signs the nonces, the two public DH exponents, and the identity of its peer.]
A salient point about this protocol is that each party signs, in addition to the nonces and the two public DH exponents, the identity of the peer. If the peer's identity is not signed, then the protocol is completely broken. JFKi inherits the same basic core security. In addition, JFKi adds a preliminary cookie mechanism for DoS protection (which results in adding one flow to the protocol and having the responder in JFKi play the role of A), and encrypts the last two messages in order to provide identity protection for the initiator.

Finally, we note that JFKi enjoys the following additional property. Whenever a party P completes a JFKi exchange with peer Q, it is guaranteed that Q has initiated an exchange with P and is aware of P's existence. This property is not essential in the context of IPsec (indeed, JFKr does not enjoy this property). Nonetheless, it may be of use in other contexts.
4.1.2 JFKr:
The basic cryptographic core of this protocol follows the design of the SIGMA protocol [28] (which also serves as the basis for the signature mode of IKE). SIGMA was analyzed and proven secure in [8]. This basic protocol can be briefly summarized as follows:

[Figure: the SIGMA message flow; each party's MAC, e.g., HKa(NA, NB, B), covers its own identity together with the two nonces.]
Here, neither party signs the identity of its peer. Instead, each party includes a MAC, keyed with a key derived from gab and applied to its own identity (concatenated with NA and NB). JFKr enjoys the same basic core security as this protocol. In addition, JFKr adds a preliminary cookie mechanism for DoS protection (which results in adding one flow to the protocol and having the responder in JFKr play the role of A), and encrypts the last two messages in order to provide identity protection. The identity protection against passive adversaries covers both parties, since the identities are sent only in the last two messages.

The next step in the analysis is to generalize to the case where the private DH exponents are reused across sessions. This is done by making stronger (but still reasonable) computational intractability assumptions involving both the DH problem and the HMAC pseudo-random function. We defer details to the full analysis paper.
5. RELATED WORK
The basis for most key agreement protocols based on public-key signatures has been the Station-to-Station (StS) [11] protocol. In its simplest form, shown in Figure 1, this consists of a Diffie-Hellman exchange, followed by a public key signature authentication step, typically using the RSA algorithm in conjunction with some certificate scheme such as X.509. In most implementations, the second message is used to piggy-back the responder's authentication information, resulting in a 3-message protocol, shown in Figure 2. Other forms of authentication may be used instead of public key signatures (e.g., Kerberos [37] tickets, or preshared secrets), but these are typically applicable in more constrained environments. While the short version of the protocol has been proven to be the most efficient [13] in terms of messages and computation, it suffers from some obvious DoS vulnerabilities.
5.1 Internet Key Exchange (IKE)
The Internet Key Exchange protocol (IKE)[15] is the current
IETF standard for key establishment and SA parameter negotiation.
Figure 1: 4-message Station-to-Station key agreement protocol.
  (1) Initiator -> Responder: Initiator Diffie-Hellman public value
  (2) Responder -> Initiator: Responder Diffie-Hellman public value
  (3) Initiator -> Responder: Initiator RSA signature and certificate(s)
  (4) Responder -> Initiator: Responder RSA signature and certificate(s)
IKE is based on the ISAKMP [33] framework, which provides encoding
and processing rules for a set of payloads commonly used
by security protocols, and the Oakley protocol, which describes an
adaptation of the StS protocol for use with IPsec.2 The public-key
encryption modes of IKE are based on SKEME [27].
IKE is a two-phase protocol: during the first phase, a secure channel between the two key management daemons is established. Parameters such as an authentication method, encryption/hash algorithms, and a Diffie-Hellman group are negotiated at this point. This set of parameters is called a "Phase I SA." Using this information, the peers authenticate each other and compute key material using the Diffie-Hellman algorithm. Authentication can be based on public key signatures, public key encryption, or preshared passphrases. There are efforts to extend this to support Kerberos tickets [37] and handheld authenticators. It should also be noted that IKE can support other key establishment mechanisms (besides Diffie-Hellman), although none has been proposed yet.3
Furthermore, there are two variations of the Phase I message exchange, called "main mode" and "aggressive mode." Main mode provides identity protection, by transmitting the identities of the peers encrypted, at the cost of three message round-trips (see Figure 3). Aggressive mode provides somewhat weaker guarantees, but requires only three messages (see Figure 4).

As a result, aggressive mode is very susceptible to untraceable4 denial of service (DoS) attacks against both computational and memory resources. Main mode is also susceptible to untraceable memory-exhaustion DoS attacks, which must be compensated for in the implementation using heuristics for detection and avoidance.
To wit:
2We remark, however, that the actual cryptographic core of IKE's signature mode is somewhat different than Oakley. In Oakley the peer authentication is guaranteed by having each party explicitly sign the peer identity. In contrast, IKE guarantees peer authentication by having each party MAC its own identity using a key derived from the agreed Diffie-Hellman secret. This method of peer authentication is based on the Sign-and-Mac design [28].
3There is ongoing work (still in its early stages) in the IETF to use IKE as a transport mechanism for Kerberos tickets, for use in protecting IPsec traffic.
4The attacker can use a forged address when sending the first message in the exchange.
Initiator Diffie-Hellman public value
Responder Diffie-Hellman public value and certificate(s)
Responder RSA signature
Initiator RSA signature and certificate(s)
Figure 2: 3-message Station to Station key agreement protocol.
The responder has to create state upon receiving the first message from the initiator, since the Phase I SA information is exchanged at that point. This allows for a DoS attack on the responder's memory, using random source-IP addresses to send a flood of requests. To counter this, the responder could employ mechanisms similar to those employed in countering TCP SYN-flooding attacks; JFK, in contrast, maintains no state at all after receiving the first message.
An initiator who is willing to go through the first message round-trip (and thus identify her address) can cause the responder to do a Diffie-Hellman exponential generation as well as the secret key computation on reception of the third message of the protocol. The initiator could do the same with the fifth message of the protocol, by including a large number of bogus certificates, if the responder blindly verifies all signatures. JFK mitigates the effects of this attack by reusing the same exponential across different sessions.
The second phase of the IKE protocol is commonly called "quick mode" and results in IPsec SAs being established between the two negotiating parties, through a three-message exchange. Parameters such as the IP security protocol to use (ESP/AH), security algorithms, the type of traffic that will be protected, etc. are negotiated at this stage. Since the two parties have authenticated each other and established a shared key during Phase I, quick mode messages are encrypted and authenticated using that information. Furthermore, it is possible to derive the IPsec SA keying material from the shared key established during the Phase I Diffie-Hellman exchange. To the extent that multiple IPsec SAs between the same two hosts are needed, this two-phase approach results in faster and more lightweight negotiations (since the same authentication information and keying material is reused).
Unfortunately, two hosts typically establish SAs protecting all the traffic between them, limiting the benefits of the two-phase protocol to lightweight re-keying. If PFS is desired, this benefit is further diluted.
Another problem of the two-phase nature of IKE manifests itself when IPsec is used for fine-grained access control to network services. In such a mode, credentials exchanged in the IKE protocol are used to authorize users when connecting to specific services. Here, a complete Phase I & II exchange will have to be done for each connection (or, more generally, traffic class) to be pro-
Initiator cookie, proposed Phase 1 SA
Responder cookie, accepted Phase 1 SA
Initiator Diffie-Hellman value & Nonce
Responder Diffie-Hellman value & Nonce
Initiator signature, certs & identity
Responder signature, certs & identity
Figure 3: IKE Main Mode exchange with certificates.
tected, since credentials, such as public key certificates, are only exchanged during Phase I.
IKE protects the identities of the initiator and responder from eavesdroppers.5 The identities include public keys, certificates, and other information that would allow an eavesdropper to determine which principals are trying to communicate. These identities can be independent of the IP addresses of the IKE daemons that are negotiating (e.g., temporary addresses acquired via DHCP, public workstations with smartcard dongles, etc.). However, since the initiator reveals her identity first (in message 5 of Main Mode), an attacker can pose as the responder until that point in the protocol. The attackers cannot complete the protocol (since they do not possess the responder's private key), but they can determine the initiator's identity. This attack is not possible on the responder, since she can verify the identity of the initiator before revealing her identity (in message 6 of Main Mode). However, since most responders would correspond to servers (firewalls, web servers, etc.), the identity protection provided to them seems not as useful as protecting the initiator's identity.6 Fixing the protocol to provide identity protection for the initiator would involve reducing it to 5 messages and having the responder send the contents of message 6 in message 4, with the positive side-effect of reducing the number of messages, but breaking the message symmetry and protocol modularity.
Finally, thanks to the desire to support multiple authentication mechanisms and different modes of operation (Aggressive vs. Main mode, Phase I / II distinction), both the protocol specification and the implementations tend to be bulky and fairly complicated. These are undesirable properties for a critical component of the IPsec architecture.
Several works (including [12, 26, 25]) point out many deficiencies in the IKE protocol, specification, and common implemen-
5Identity protection is provided only in Main Mode (also known as Protection Mode); Aggressive Mode does not provide identity protection for the initiator.
6One case where protecting the responder's identity can be more useful is in peer-to-peer scenarios.
Initiator cookie, proposed Phase 1 SA, Diffie-Hellman value & Identity
Responder cookie, accepted Phase 1 SA, Identity, Diffie-Hellman value and certificate(s), Responder signature
Initiator signature and certificate(s)
Figure 4: IKE Aggressive Mode exchange with certificates.
tations. They suggest removing several features of the protocol (e.g., aggressive mode, public key encryption mode, etc.), restoring the idea of stateless cookies, and protecting the initiator's (instead of the responder's) identity from an active attacker. They also suggest some other features, such as one-way authentication (similar to what is common practice when using SSL/TLS [10] on the web). These major modifications would bring the IKE protocol closer to JFK, although they would not completely address the DoS issues.
A measure of the complexity of IKE can be found in the analyses done in [34, 36]. No less than 13 different sub-protocols are identified in IKE, making understanding, implementation, and analysis of IKE challenging. While the analysis did not reveal any attacks that would compromise the security of the protocol, it did identify various potential attacks (DoS and otherwise) that are possible under some valid interpretations of the specification and implementation decisions.
Some work has been done towards addressing, or at least examining, the DoS problems found in IKE [31, 32] and, more generally, in public key authentication protocols [30, 21]. Various recommendations on protocol design include use of client puzzles [23, 3], stateless cookies [39], forcing clients to store server state, rearranging the order of computations in a protocol [18], and the use of a formal method framework for analyzing the properties of protocols with respect to DoS attacks [35]. The advantages of being stateless, at least in the beginning of a protocol run, were recognized in the security protocol context in [22] and [2]. The latter presented a 3-message version of IKE, similar to JFK, that did not provide the same level of DoS protection as JFK does, and had no identity protection.
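The client-puzzle countermeasure mentioned above can be sketched as a hash-based proof of work: the server issues a fresh nonce, the client must find a value whose hash under that nonce has a required number of leading zero bits, and the server verifies the answer with a single hash. The difficulty parameter and puzzle encoding are illustrative assumptions, not any specific published scheme:

```python
import hashlib
import os

DIFFICULTY_BITS = 12  # illustrative; tuned to the attacker/defender cost ratio

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(nonce: bytes) -> int:
    # The client performs a brute-force search: expensive for a flooder.
    x = 0
    while True:
        d = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(d) >= DIFFICULTY_BITS:
            return x
        x += 1

def verify(nonce: bytes, x: int) -> bool:
    # The server verifies with a single hash: cheap, and the nonce can be
    # recomputed from a secret so the server stores no per-client state.
    d = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(d) >= DIFFICULTY_BITS

nonce = os.urandom(16)
solution = solve(nonce)
assert verify(nonce, solution)
```

The asymmetry between `solve` and `verify` is the point of the technique: a legitimate client pays a bounded cost per connection, while a flooding attacker must pay it for every forged request.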
5.2 IKEv2
IKEv2 [16] is another proposal for replacing the original IKE protocol. The cryptographic core of the protocol, as shown in Figure 5, is very similar to JFKr. The main differences between IKEv2 and JFKr are:
IKEv2 implements DoS protection by optionally allowing the responder to respond to a Message (1) with a cookie, which the sender has to include in a new Message (1). Under normal conditions, the exchange would consist of the 4 messages shown; however, if the responder detects a DoS attack, it can start requiring the extra roundtrip. One claimed benefit of this extra roundtrip is the ability to avoid memory-based
Initiator cookie, Keying material, Phase 1 SA
Responder cookie, Keying material, Phase 1 SA
Initiator authentication, certificate(s), Identities, Phase II SA, Traffic Selectors
Responder authentication, certificate(s), accepted Phase II SA and Traffic Selectors
Figure 5: IKEv2 protocol exchange.
DoS attacks against the fragmentation/reassembly part of the networking stack. (Briefly, the idea behind such an attack is that an attacker can send many incomplete fragments that fill up the reassembly queue of the responder, denying service to other legitimate initiators. In IKEv2, because the "large" messages are the last two in the exchange, it is possible for the implementation to instruct the operating system to place fragments received from peers that completed a roundtrip to a separate, reserved reassembly queue.)
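The optional cookie round-trip can be sketched as a keyed hash over the initiator's apparent address, so the responder stores nothing per-initiator until the cookie is echoed back. The cookie formula below is an illustrative stand-in, not IKEv2's specified computation:

```python
import hmac
import hashlib
import os

# Secret rotated periodically; the responder stores only this secret,
# not per-initiator state, until a valid cookie comes back.
RESPONDER_SECRET = os.urandom(32)

def make_cookie(src_ip: str, spi: bytes) -> bytes:
    # Illustrative formula; IKEv2 defines its own cookie computation.
    return hmac.new(RESPONDER_SECRET, src_ip.encode() + spi,
                    hashlib.sha256).digest()

def check_cookie(src_ip: str, spi: bytes, cookie: bytes) -> bool:
    # Recompute and compare in constant time; no stored state needed.
    return hmac.compare_digest(make_cookie(src_ip, spi), cookie)

# Message (1) arrives; under attack, the responder answers only with
# a cookie and forgets the exchange entirely.
cookie = make_cookie("192.0.2.1", b"\x01" * 8)

# The initiator repeats Message (1) including the cookie, proving it
# can receive traffic at its claimed source address.
assert check_cookie("192.0.2.1", b"\x01" * 8, cookie)
assert not check_cookie("203.0.113.9", b"\x01" * 8, cookie)
```

Because the cookie is recomputable from the secret, a spoofed-source flood costs the responder only one MAC per packet and no memory, which is the property the extra roundtrip buys.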
IKEv2 supports a Phase II exchange, similar to the Phase I/Phase II separation in the original IKE protocol. It supports creating subsequent IPsec SAs with a single roundtrip, as well as SA-teardown using this Phase II.
IKEv2 proposals contain multiple options that can be combined in arbitrary ways; JFK, in contrast, takes the approach of using ciphersuites, similar to the SSL/TLS protocols [10].
IKEv2 supports legacy authentication mechanisms (in particular, pre-shared keys). JFK does not, by design, support other authentication mechanisms, as discussed in Section 3; while it is easy to do so (and we have a variant of JFKr that can do this without loss of security), we feel that the added value compared to the incurred complexity does not justify the inclusion of this feature in JFK.
Apart from these main differences, there are a number of superficial ones (e.g., the "wire" format) which are more a matter of taste than of difference in protocol design philosophy. The authors of the two proposals have helped create a joint draft [19], submitted to the IETF IPsec Working Group. In that draft, a set of design options reflecting the differences in the two protocols is presented to the working group. Concurrent with the writing of this paper, and based on this draft, a unified proposal is being written. This unified proposal combines properties from both JFK and IKEv2. It adopts the approach of setting up a security association within two round trips, while providing DoS protection for the responder (and, in particular, allowing the responder to be almost completely stateless between the sending of message 2 and the receipt of message 3).
5.3 Other Protocols
The predecessor to IKE, Photuris [24], first introduced the concept of cookies to counter "blind" denial of service attacks. The protocol itself is a 6-message variation of the Station to Station protocol. It is similar to IKE in the message layout and purpose, except that the SA information has been moved to the third message. For re-keying, a two-message exchange can be used to request a uni-directional SPI (thus, to completely re-key, 4 messages are needed). Photuris is vulnerable to the same computation-based DoS attack as IKE, mentioned above. Nonetheless, one of the variants of this protocol had 4 messages and provided DoS protection via stateless cookies.
SKEME [27] shares many of the requirements of JFK, and many aspects of its design were adopted in IKE. It serves more as a set of protocol building blocks, rather than a specific protocol instance. Depending on the specific requirements for the key management protocol, these blocks could be combined in several ways. An interesting aspect of SKEME is its avoidance of digital signatures; public key encryption is used instead, to provide authentication assurances. The reason behind this was to allow both parties of the protocol to be able to repudiate the exchange.
SKIP [5] was an early proposal for an IPsec key management mechanism. It uses long-term Diffie-Hellman public keys to derive long-term shared keys between parties, which are used to distribute session keys between the two parties. The distribution of the session keys occurs in-band, i.e., the session key is encrypted with the long-term key and is injected in the encrypted packet header. While this scheme has good synchronization properties in terms of re-keying, the base version lacks any provision for PFS. It was later provided via an extension [4]. However, as the authors admit, this extension detracts from the original properties of SKIP. Furthermore, there is no identity protection provided, since the certificates used to verify the Diffie-Hellman public keys are (by design) publicly available, and the source/destination master identities are contained in each packet (so that a receiver can retrieve the sender's Diffie-Hellman certificate). The latter can be used to mount a DoS attack on a receiver, by forcing them to retrieve and verify a Diffie-Hellman certificate, and then compute the Diffie-Hellman shared secret.
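SKIP's in-band distribution can be sketched as follows: a pairwise long-term key is derived from the two hosts' static Diffie-Hellman values (with no message exchange), and each packet carries the fresh session key wrapped under it. The toy group, the fixed private keys, and the XOR-based wrapping are illustrative assumptions only, not SKIP's actual algorithms:

```python
import hashlib
import hmac
import os

P = 2**127 - 1  # toy Mersenne prime; SKIP uses large standardized moduli
G = 3

# Long-term (certified) Diffie-Hellman key pairs of the two hosts.
# Fixed values here keep the sketch deterministic; real keys are random.
a_priv, a_pub = 0x1234567, pow(G, 0x1234567, P)
b_priv, b_pub = 0x89ABCDE, pow(G, 0x89ABCDE, P)

# Both hosts derive the same long-term pairwise key without any exchange.
kab = pow(b_pub, a_priv, P).to_bytes(16, "big")
assert kab == pow(a_pub, b_priv, P).to_bytes(16, "big")

def wrap(session_key: bytes, long_term: bytes) -> bytes:
    # Illustrative key wrapping: XOR with an HMAC-derived pad.
    pad = hmac.new(long_term, b"wrap", hashlib.sha256).digest()[:len(session_key)]
    return bytes(x ^ y for x, y in zip(session_key, pad))

# The sender picks a fresh session key and injects its wrapped form
# in-band, in the header of the encrypted packet.
session_key = os.urandom(16)
in_band = wrap(session_key, kab)

# The receiver unwraps with the same long-term key (XOR is its own inverse).
assert wrap(in_band, kab) == session_key
```

The sketch also makes the PFS weakness visible: anyone who later recovers the static private keys can recompute `kab` and unwrap every past session key, which is why PFS had to be bolted on via an extension.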
The Host Identity Protocol (HIP) uses cryptographic public keys as the host identifiers, and introduces a set of protocols for establishing SAs for use in IPsec. The HIP protocol is a four-packet exchange, and uses client puzzles to limit the number of sessions an attacker can initiate. HIP also allows for reuse of the Diffie-Hellman value over a period of time, to handle a high rate of sessions. For re-keying, a HIP packet protected by an existing IPsec session is used. HIP does not provide identity protection, and it depends on the existence of an out-of-band mechanism for distributing keys and certificates, or on extra HIP messages for exchanging this information (thus, the message count is effectively 6, or even 8, for most common usage scenarios).
6. CONCLUSION
Over the years, many different key exchange protocols have been proposed. Some have had security flaws; others have not met certain requirements.
JFK addresses the first issue by simplicity, and by a proof of correctness. (Again, full details of this are deferred to the analysis paper.) We submit that proof techniques have advanced enough that new protocols should not be deployed without such an analysis. We also note that the details of the JFK protocol changed in order to accommodate the proof: tossing a protocol over the wall to the theoreticians is not a recipe for success. But even a proof of correctness is not a substitute for simplicity of design; apart from the chance of errors in the formal analysis, a complex protocol implies a complex implementation, with all the attendant issues of buggy code and interoperability problems.
The requirements issue is less tractable, because it is not possible to foresee how threat models or operational needs will change over time. Thus, StS is not suitable for an environment where denial of service attacks are a concern. Another comparatively recent requirement is identity protection. But the precise need (whose identity should be protected, and under what threat model) is still unclear, hence the need for both JFKi and JFKr.
Finally, and perhaps most important, we show that some attributes often touted as necessities are, in fact, susceptible to a cost-benefit analysis. Everyone understands that cryptographic primitives are not arbitrarily strong, and that cost considerations are often used in deciding on algorithms, key lengths, block sizes, etc. We show that DoS-resistance and perfect forward secrecy have similar characteristics, and that it is possible to improve some aspects of a protocol (most notably the number of round trips required) by treating others as parameters of the system, rather than as absolutes.
7. ACKNOWLEDGEMENTS
Ran Atkinson, Matt Crawford, Paul Hoffman, and Eric Rescorla provided useful comments, and discussions with Hugo Krawczyk proved very useful. Dan Harkins suggested the inclusion of IPI in the authenticator. David Wagner made useful suggestions on the format of Message (2) in JFKi. The design of the JFKr protocol was influenced by the SIGMA and IKEv2 protocols.
8.
--R
The TLS protocol version 1.0.
A Cryptographic Evaluation of
Securely available credentials - credential server framework
The Internet Key Exchange (IKE).
Proposal for the IKEv2 Protocol.
Attack Class: Address Spoofing.
Enhancing the resistance of a
Features of Proposed Successors to IKE.
Proofs of work and bread pudding protocols.
Scalability and
Client puzzles: A cryptographic countermeasure against connection depletion attacks.
Analysis of IKE.
SKEME: A Versatile Secure Key Exchange Mechanism for Internet.
The IKE-SIGMA Protocol
http://www.
Comments 2104
Towards network denial of service resistant protocols.
International Information Security Conference (IFIP/SEC)
Resolution of ISAKMP/Oakley
Modified aggressive mode of
Internet security association and key management protocol
Analysis of the Internet Key Exchange protocol using the NRL protocol analyzer.
IEEE Symposium on Security and Privacy
A formal framework and evaluation method for network denial of service.
Computer Security Foundations Workshop
Open issues in formal methods for cryptographic protocol analysis.
Information Survivability Conference and Exposition
Kerberos Authentication and Authorization System.
The Host
Protecting key exchange and management protocols against resource clogging attacks.
IFIP TC6 and TC11 Joint Working Conference on Communications and Multimedia Security (CMS
Analysis of a denial of service attack on tcp.
IEEE Security and Privacy Conference
IKE/ISAKMP Considered Harmful.
--TR
Authentication and authenticated key exchanges
Entity authentication and key distribution
Stateless connections
Enhancing the Resistence of a Provably Secure Key Agreement Protocol to a Denial-of-Service Attack
Security Analysis of IKE's Signature-Based Key-Exchange Protocol
Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels
Towards Network Denial of Service Resistant Protocols
DOS-Resistant Authentication with Client Puzzles
Protecting Key Exchange and Management Protocols Against Resource Clogging Attacks
Proofs of Work and Bread Pudding Protocols
A Formal Framework and Evaluation Method for Network Denial of Service
Scalability and Flexibility in Authentication Services
Analysis of a Denial of Service Attack on TCP
--CTR
Steven M. Bellovin , Matt Blaze , Ran Canetti , John Ioannidis , Angelos D. Keromytis , Omer Reingold, Just fast keying: Key agreement in a hostile internet, ACM Transactions on Information and System Security (TISSEC), v.7 n.2, p.242-273, May 2004
Suratose Tritilanunt , Colin Boyd , Ernest Foo , Juan Manuel González Nieto, Cost-based and time-based analysis of DoS-resistance in HIP, Proceedings of the thirtieth Australasian conference on Computer science, p.191-200, January 30-February 02, 2007, Ballarat, Victoria, Australia
Kui Ren , Wenjing Lou , Kai Zeng , Feng Bao , Jianying Zhou , Robert H. Deng, Routing optimization security in mobile IPv6, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.13, p.2401-2419, 15 September 2006
Theodore Diament , Homin K. Lee , Angelos D. Keromytis , Moti Yung, The dual receiver cryptosystem and its applications, Proceedings of the 11th ACM conference on Computer and communications security, October 25-29, 2004, Washington DC, USA
Changhua He , John C. Mitchell, Analysis of the 802.11i 4-way handshake, Proceedings of the 2004 ACM workshop on Wireless security, October 01-01, 2004, Philadelphia, PA, USA
k-anonymous secret handshakes with reusable credentials, Proceedings of the 11th ACM conference on Computer and communications security, October 25-29, 2004, Washington DC, USA
Zhiguo Wan , Robert H. Deng , Feng Bao , Akkihebbal L. Ananda, Access control protocols with two-layer architecture for wireless networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.3, p.655-670, February, 2007
Martín Abadi , Bruno Blanchet , Cédric Fournet, Just fast keying in the pi calculus, ACM Transactions on Information and System Security (TISSEC), v.10 n.3, p.9-es, July 2007
Heng Yin , Haining Wang, Building an application-aware IPsec policy system, Proceedings of the 14th conference on USENIX Security Symposium, p.21-21, July 31-August 05, 2005, Baltimore, MD
Martín Abadi , Cédric Fournet, Private authentication, Theoretical Computer Science, v.322 n.3, p.427-476, 6 September 2004
Martín Abadi , Bruno Blanchet, Analyzing security protocols with secrecy types and logic programs, Journal of the ACM (JACM), v.52 n.1, p.102-146, January 2005
Angelos D. Keromytis , Janak Parekh , Philip N. Gross , Gail Kaiser , Vishal Misra , Jason Nieh , Dan Rubenstein , Sal Stolfo, A holistic approach to service survivability, Proceedings of the ACM workshop on Survivable and self-regenerative systems: in association with 10th ACM Conference on Computer and Communications Security, p.11-22, October 31-31, 2003, Fairfax, VA
Heng Yin , Haining Wang, Building an application-aware IPsec policy system, IEEE/ACM Transactions on Networking (TON), v.15 n.6, p.1502-1513, December 2007
Robert C. Chalmers , Kevin C. Almeroth, A Security Architecture for Mobility-Related Services, Wireless Personal Communications: An International Journal, v.29 n.3-4, p.247-261, June 2004 | cryptography;denial of service attacks |
586129 | Sensor-based intrusion detection for intra-domain distance-vector routing. | Detection of routing-based attacks is difficult because malicious routing behavior can be identified only in specific network locations. In addition, the configuration of the signatures used by intrusion detection sensors is a time-consuming and error-prone task because it has to take into account both the network topology and the characteristics of the particular routing protocol in use. We describe an intrusion detection technique that uses information about both the network topology and the positioning of sensors to determine what can be considered malicious in a particular place of the network. The technique relies on an algorithm that automatically generates the appropriate sensor signatures. This paper presents a description of the approach, applies it to an intra-domain distance-vector protocol and reports the results of its evaluation. | INTRODUCTION
Attacks against the IP routing infrastructure can be used
to perform substantial denial-of-service attacks or as a basis
for more sophisticated attacks, such as man-in-the-middle
and non-blind-spoofing attacks. Given the insecure nature
of the routing protocols currently in use, preventing these
attacks requires modifications to the routing protocols, the
routing software, and, possibly, the network topology itself.
Because of the critical role of routing, there is a considerable
inertia in this process. As a consequence, insecure routing
protocols are still widely in use throughout the Internet.
A complementary approach to securing the routing infrastructure
relies on detection of routing attacks and execution
CCS'02, November 18-22, 2002, Washington, DC, USA.
of appropriate countermeasures. Detecting routing attacks is a complex task because malicious routing behavior can be identified only in specific network locations. In addition, routing information propagates from router to router, throughout the network. Therefore, the presence of malicious routing information is not necessarily restricted to the location where an attack is carried out.
We describe a misuse detection technique that uses a set of sensors deployed within the network infrastructure. Sensors are intrusion detection components equipped with a set of signatures, which describe the characteristics of malicious behavior. The traffic that is sent on the network links is matched against these signatures to determine if it is malicious or not.
The use of multiple sensors for intrusion detection is a well-established practice. The analysis of network traffic at different locations in the network supports more comprehensive intrusion detection with respect to single-point analysis. The disadvantage of a distributed approach is the difficulty of configuring the sensors according to the characteristics of the protected network. This problem is exacerbated by the nature of routing. The configuration of the sensors has to take into account the network topology, the positioning of the sensors in the network, and the characteristics of the particular routing protocol in use. In addition, some attacks can be detected only by having sensors communicate with each other. As a consequence, the configuration of the signatures used by the sensors is a time-consuming and error-prone task.
The novel contribution of our approach is an algorithm
that, given a network topology and the positioning of the intrusion
detection sensors, can automatically determine both
the signature configuration of the sensors and the messages
that the sensors have to exchange to detect attacks against
the routing infrastructure. This paper introduces the general
approach and describes its application to the Routing
Information Protocol (RIP).
RIP is an intra-domain distance-vector routing protocol
[13]. At startup, every RIP router knows only its own addresses
and the links corresponding to these addresses. Every
RIP router propagates this information to its immediate
neighbors. On receiving the routing information, the neighbors
update their routing tables to add, modify, or delete
routes to the advertised destinations.
Routers add a route to a destination if they do not have
one. A route is modified if the advertised route is better
than the one that the router already has. If a router receives
a message from a neighbor advertising unreachability to a
certain destination and if the router is using that neighbor
to reach the destination, then the router deletes the route
to the destination from its routing table.
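The add/modify/delete rules above can be sketched as follows. The table layout, the per-hop metric increment, and the INFINITY constant are simplifications of RIP's actual message processing:

```python
INFINITY = 16  # RIP treats a metric of 16 hops as unreachable

# Routing table: destination subnet -> (metric, next_hop)
table = {"10.0.1.0/24": (2, "B")}

def process_update(table, neighbor, advertised):
    """Apply one neighbor's distance-vector advertisement to the table."""
    for dest, metric in advertised.items():
        new_metric = min(metric + 1, INFINITY)
        current = table.get(dest)
        if current is None:
            if new_metric < INFINITY:
                table[dest] = (new_metric, neighbor)   # add missing route
        elif neighbor == current[1]:
            if new_metric >= INFINITY:
                del table[dest]                        # neighbor lost it: delete
            else:
                table[dest] = (new_metric, neighbor)   # track current next hop
        elif new_metric < current[0]:
            table[dest] = (new_metric, neighbor)       # better route: modify

process_update(table, "C", {"10.0.2.0/24": 1, "10.0.1.0/24": 5})
print(table)
# {'10.0.1.0/24': (2, 'B'), '10.0.2.0/24': (2, 'C')}
```

Here the route to 10.0.2.0/24 is added via C, while the worse advertisement for 10.0.1.0/24 leaves the cheaper existing route through B untouched; an advertisement of INFINITY from B would delete that route.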
Under certain conditions, RIP might not converge. It may exhibit the bouncing-effect problem or the count-to-infinity problem [9]. These problems are partially overcome by using the split-horizon technique, triggered updates, and by limiting the number of hops that can be advertised for a destination.1
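The split-horizon technique can be sketched as a filter on the updates a router builds for each neighbor: routes learned through that neighbor are either suppressed or, with poisoned reverse, advertised back as unreachable. The table encoding below is illustrative:

```python
INFINITY = 16  # RIP's unreachable metric

def advertisement_for(table, neighbor, poisoned_reverse=True):
    """Build the distance-vector update sent to one neighbor."""
    adv = {}
    for dest, (metric, next_hop) in table.items():
        if next_hop == neighbor:
            if poisoned_reverse:
                # Advertise the route back as unreachable rather than
                # omitting it, breaking two-node loops faster.
                adv[dest] = INFINITY
            # Plain split horizon: suppress the route entirely.
        else:
            adv[dest] = metric
    return adv

table = {"10.0.1.0/24": (2, "B"), "10.0.2.0/24": (1, "C")}
print(advertisement_for(table, "B"))
# {'10.0.1.0/24': 16, '10.0.2.0/24': 1}
```

Without this filter, B could learn its own route back from a neighbor after a failure and the two routers would count each other's metrics up toward infinity.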
In order to decide whether a routing update is malicious
or not, a router needs to have reliable, non-local topology
information. Unfortunately, RIP routers do not have this
information. To support router-based intrusion detection it
would be necessary to modify both the RIP protocol and the
routing software. Therefore, our approach relies on external
sensors.
Sensor configurations are generated offline on the basis of the complete network topology and the positions of the sensors in the network. The configuration generation algorithm determines every possible path from every router to every other router. The configuration for an individual sensor is a subset of this information based on the position of the sensor in the network. Sensors need to be reconfigured if routers and links are added to the topology. However, sensors do not need to be reconfigured if the topology changes due to link or router failures.
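The offline enumeration of every possible path between router pairs can be sketched with a depth-first search over an adjacency-list topology; the sample topology and the link-filtering step (selecting the subset relevant to one sensor) are illustrative assumptions:

```python
def all_simple_paths(adj, src, dst, path=None):
    """Enumerate every loop-free path from src to dst by depth-first search."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt in adj[src]:
        if nxt not in path:  # simple paths only: no repeated routers
            paths.extend(all_simple_paths(adj, nxt, dst, path))
    return paths

# Illustrative topology: adjacency list of routers.
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}

paths = all_simple_paths(adj, "A", "D")
# A sensor on a given link keeps only the paths traversing that link.
on_link_bc = [p for p in paths
              if any(p[i:i + 2] == ["B", "C"] for i in range(len(p) - 1))]
print(paths)       # [['A', 'B', 'C', 'D'], ['A', 'C', 'D']]
print(on_link_bc)  # [['A', 'B', 'C', 'D']]
```

Restricting each sensor's configuration to the paths crossing its own link is what keeps the per-sensor signature set a manageable subset of the full all-pairs computation.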
The remainder of this paper is organized as follows. Section 2 discusses related work in the field. Section 3 introduces an abstract reference model of the network routing infrastructure. Section 4 presents an algorithm to generate the configuration of intrusion detection sensors for the RIP distance-vector protocol. Section 5 discusses how routing attacks are detected. Section 6 describes the experimental setup that was used to analyze the attacks and evaluate the detection technique. Section 7 discusses the strengths and weaknesses of the approach. Section 8 draws some conclusions and outlines future work.
2. RELATED WORK
One of the earliest works on securing routing protocols is
by Radia Perlman [16]. Perlman suggests the use of digital
signatures in routing messages to protect against Byzantine
failures. The main drawback of this approach is that generating
digital signatures is a computationally intensive task.
Signature verification is usually not as expensive, but most
of the solutions that use digital signatures require that a
number of them be verified, leading to a considerable performance
overhead.
The use of digital signatures is also advocated by Murphy
et al. [14, 15] for both distance-vector and link-state
advertisements. Kent et al. [12, 11] describe an approach to
allow the recipient of a BGP [18] message to verify the entire
path to a destination. Smith et al. [19] introduce a scheme
to protect BGP using digital signatures and also describe a
scheme to secure distance-vector routing protocols by using
predecessor information [20].
Several other schemes have been proposed to reduce the performance overhead associated with securing routing protocols using digital signatures. Hauser et al. [7] describe two techniques for efficient and secure generation and processing of updates in link-state routing. Zhang [24] describes how routing messages can be protected by using one-time
1Limiting the number of hops ensures that a route is declared unusable when the protocol does not converge and the number of advertised hops exceeds the maximum number of allowed hops. However, this also limits the diameter of the networks in which RIP can be used.
signatures on message chains. In [6], Goodrich describes a leap-frog cryptographic signing protocol that uses secret-key cryptography.
While the research referenced so far focuses on preventing
attacks, a complementary approach to the problem of
securing the routing infrastructure focuses on detecting attacks
[1, 5, 8]. For example, Cheung et al. [3, 4] present
solutions to the denial-of-service problem for the routing
infrastructure using intrusion detection. Another example
is a protocol called WATCHERS described by Bradley et
al. [2]. The protocol detects and reacts to routers that drop
or misroute packets by applying the principle of conservation
of flow to the routers in a network. The JiNao project
at MCNC/NCSU focuses on detecting intrusions, especially
insider attacks, against OSPF. Wu et al. [21, 17, 23, 22, 10]
consider how to efficiently integrate security control with intrusion detection in a single system.
The approach described in this paper differs from other intrusion detection approaches because it focuses on the topological characteristics of the network to be protected and the
placement of the detection sensors. The approach relies on
topology information to automatically generate the signatures
used by sensors to detect the attacks. The details of
the algorithm and its application are described in the following
sections.
3. REFERENCE MODEL
An abstract reference model of the network is introduced
to describe the algorithm used to generate the signatures
and how these signatures are used to detect attacks against
the routing infrastructure. A network is represented by an
undirected graph G = (V, E), where vertices V = {v1,
..., vn} denote the set of routers. Positive-weight edges
represent the links connecting router interfaces. An edge
e_ij connects routers v_i and v_j. 2
A subnet is a range of IP addresses with the same network
mask. Every link e_ij is associated with a subnet s_ij. S =
{s1, s2, ..., sm} is the set of subnets corresponding to the
set of links E. We assume that e_ij = e_ji and s_ij = s_ji.
Every vertex v_j has an associated set E_j = {e_ij, ...}
⊆ E that represents the set of edges connected to v_j.
S_j is the set of subnets corresponding to
the set of links E_j. Every link e_ij is associated with a cost
c_ij = c_ji. 3 A sensor placed on link e_ij is identified
as sensor_ij. A host p that is connected to link e_ij is denoted
by h^p_ij.
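The reference model above can be sketched as plain data structures; the class, method, and stub-vertex names below are our own, not the paper's:

```python
# A minimal sketch of the Section 3 reference model (names assumed):
# routers V, undirected links E with costs c_ij, one subnet s_ij per link.
from dataclasses import dataclass, field

@dataclass
class Network:
    routers: set
    links: dict = field(default_factory=dict)    # (vi, vj) -> cost c_ij
    subnets: dict = field(default_factory=dict)  # (vi, vj) -> subnet s_ij

    def add_link(self, vi, vj, subnet, cost=1):
        key = tuple(sorted((vi, vj)))            # e_ij = e_ji (undirected)
        self.links[key] = cost
        self.subnets[key] = subnet

    def edges_of(self, vj):
        """E_j: the set of links connected to router v_j."""
        return {e for e in self.links if vj in e}

    def subnets_of(self, vj):
        """S_j: the subnets corresponding to the links in E_j."""
        return {self.subnets[e] for e in self.edges_of(vj)}
```

For the example network of Figure 2, for instance, S_2 would come out as {s12, s23, s24}.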
Consider the sub-graph Gsub ⊆ G shown in Figure 1. In
that context, routing problems can occur due to faults in
the routers or due to malicious actions of both
the routers and the host h^p_ij. Hosts or routers are termed
malicious when they have been compromised and are used
as a means to modify the normal operation of the network.
Both network failures and threats due to malicious intent
need to be included in the threat model because a sensor
cannot always di#erentiate between the two. The approach
described here is concerned with attacks involving injection,
alteration, or removal of routes to specific destinations, by
2 For the sake of simplicity, this model does not consider the possibility
of more than two routers being connected to a single link.
The model can be extended to support that possibility by assuming
graph G to be a hyper-graph.
3 The model does not consider the possibility of links being asymmetric,
i.e., having different costs in different directions. The
model can be extended to support asymmetric costs by assuming
graph G to be a directed graph.
Figure 1: Threat Reference Model
malicious hosts or routers. Security threats, e.g., unauthorized
access to information, dropping and alteration of data
packets by routers, etc., are not addressed here. More precisely,
we can describe our threat model with respect to Gsub
as follows:
1. v i fails. This will result in sub-optimal paths or no
paths at all from s ij to s ik and vice-versa.
2. v i is compromised and misconfigured to advertise a
sub-optimal or infinite-cost path to s ik . This will result
in sub-optimal paths or no paths at all from s ij
to s ik .
3. v j is compromised and misconfigured to advertise a
better than optimal path for s ik . This will result in
hosts from subnet s ij using v j to reach hosts in subnet
s_ik even though v_i has a better route to subnet s_ik.
If v_j actually has no path to subnet s_ik, then packets
from subnet s_ij will not reach subnet s_ik.
4. h^p_ij pretends to be v_i with respect to v_j, or pretends
to be either v_j or v_k with respect to v_i. h^p_ij can then
advertise routes as in 2 and 3.
5. h^p_ij renders v_i unusable and pretends to be v_i. h^p_ij
advertises the same information as v_i but since v_i is
unusable, packets do not get routed.
Gsub represents a segment of the network, represented by G,
where incorrect routing information is generated. If there is
a sensor present on every link of Gsub , the faulty or malicious
entity can be identified and a response process can be
initiated. In the absence of sensors on every link of Gsub ,
the incorrect routing information can propagate to the rest
of the network. In this case, the attack can still be detected,
but determining the origin of the attack requires an analysis
of the effects of the incorrect routing information. This
analysis is considerably difficult to perform and is not in the
scope of the approach described here.
4. SENSOR CONFIGURATION
A sensor is configured on the basis of the network topology
and the sensor's position in the network. A sensor's
position in the network is specified by the link on which the
sensor is placed. A separate component, called the Sensor
Configurator, is given the network topology and the position
of all available sensors as inputs. The Sensor Configurator
uses an algorithm to generate the configuration of each sen-
sor. The configurations are then loaded into the appropriate
sensors.
The first step of the Sensor Configurator algorithm is
to find all paths and their corresponding costs from every
router to every other router in the graph G.

Figure 2: Sensor Configuration Example

The results
are organized into a 2-dimensional vertex-to-vertex matrix.
The (i, j)-th entry of the matrix points to a list of 2-tuples
{(c^k_ij, p^k_ij)}. The list contains a tuple for each path between
vertices v_i and v_j. p^k_ij is the set of vertices traversed to reach
v_j from v_i. c^k_ij is the cost of the path p^k_ij between vertices
v_i and v_j. For example, consider Figure 2. v1, v2, v3 are
routers. e12, e13, e23, e24, e35 are links with associated subnets
s12, s13, s23, s24, s35, respectively. The cost of all links
is equal to 1. Table 1 shows the vertex-to-vertex matrix for
the graph in Figure 2. The {(cost, path)} entry in row v1 ,
column v1 , is the set of possible paths and their corresponding
costs that vertex v1 can take to reach vertex v1 . The
entry {(0, (∅))} means that vertex v1 can reach vertex v1 at
cost 0, through itself. The entry in row v1 , column v2 , means
that vertex v1 can reach vertex v2 at cost 1 through itself
or at cost 2 through vertex v3 . In the second step of the
algorithm, the vertex-to-vertex matrix is transformed into a
2-dimensional vertex-to-subnet matrix. Each column, representing
a vertex v j of the vertex-to-vertex matrix, is replaced
by a set of columns, one for each subnet directly connected
to the vertex, i.e., one for each member of S_j. Consider
the set of columns S_j replacing column v_j. The set
of paths in the vertex-to-subnet matrix is {(p^k_i,jx)}, where
p^k_i,jx is the k-th path from router v_i to subnet s_jx in the
vertex-to-subnet matrix and p^k_ij is the corresponding path in
the vertex-to-vertex matrix. The set of costs in the vertex-to-subnet
matrix is {(c^k_i,jx)}, where c^k_i,jx = c^k_ij + c_jx is the cost of the k-th path
from router v_i to subnet s_jx in the vertex-to-subnet matrix
and c^k_ij is the corresponding cost in the vertex-to-vertex matrix.
c_jx is the cost of the link e_jx associated with subnet
s_jx. This cost must be taken into account because c^k_ij only
represents the cost of the path from router v_i to v_j. The cost
to reach subnet s_jx from router v_j should be added to c^k_ij
to get the total cost c^k_i,jx. For example, the vertex-to-vertex
matrix shown in Table 1 is transformed into the vertex-to-
subnet matrix shown in Table 2. The {(cost, path)} entry in
row v1 , column s12 is the set of possible paths that vertex v1
can take to reach subnet s12 and their corresponding costs.
The entry {(1, (v1 ))} means that vertex v1 can reach subnet
s12 at cost 1, through v1 . The entry in row v1 , column s23 ,
means that vertex v1 can reach subnet s23 at cost 2 through
vertex v2 or at cost 3 through vertices v3 and v2 , and so on.
        v1                      v2                      v3
v1  {(0, (∅))}              {(1, (∅)), (2, (v3))}   {(1, (∅)), (2, (v2))}
v2  {(1, (∅)), (2, (v3))}   {(0, (∅))}              {(1, (∅)), (2, (v1))}
v3  {(1, (∅)), (2, (v2))}   {(1, (∅)), (2, (v1))}   {(0, (∅))}

Table 1: Vertex-to-Vertex Matrix
        s12            s13            s12            s23            s24            s13            s23            s35
v1  {(1, (v1))}    {(1, (v1))}    {(2, (v2)),    {(2, (v2)),    {(2, (v2)),    {(2, (v3)),    {(2, (v3)),    {(2, (v3)),
                                   (3, (v3,v2))}  (3, (v3,v2))}  (3, (v3,v2))}  (3, (v2,v3))}  (3, (v2,v3))}  (3, (v2,v3))}
v2  {(2, (v1)),    {(2, (v1)),    {(1, (v2))}    {(1, (v2))}    {(1, (v2))}    {(2, (v3)),    {(2, (v3)),    {(2, (v3)),
     (3, (v3,v1))}  (3, (v3,v1))}                                               (3, (v1,v3))}  (3, (v1,v3))}  (3, (v1,v3))}
v3  {(2, (v1)),    {(2, (v1)),    {(2, (v2)),    {(2, (v2)),    {(2, (v2)),    {(1, (v3))}    {(1, (v3))}    {(1, (v3))}
     (3, (v2,v1))}  (3, (v2,v1))}  (3, (v1,v2))}  (3, (v1,v2))}  (3, (v1,v2))}
(columns grouped by destination vertex: v1 → {s12, s13}; v2 → {s12, s23, s24}; v3 → {s13, s23, s35})

Table 2: Vertex-to-Subnet Matrix
The vertex-to-subnet matrix is in a format that is similar to
the one used by the routers themselves.
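The two matrix-building steps can be sketched as follows (the function names are ours, not the paper's; for simplicity the per-vertex subnet columns of Table 2 are merged into a single column per subnet):

```python
# Sketch of the Sensor Configurator's two steps for the Figure 2 example:
# step 1 enumerates every simple path between router pairs; step 2 adds
# each subnet's own link cost c_jx, so that c^k_{i,jx} = c^k_{ij} + c_jx.
from itertools import product

def all_paths(adj, src, dst, seen=()):
    """Yield (cost, intermediate vertices) for each simple path src -> dst.
    The empty tuple () plays the role of the direct path (∅) in Table 1."""
    if src == dst:
        yield (0, ())
        return
    for nxt, c in adj[src].items():
        if nxt in seen:
            continue
        for sc, sp in all_paths(adj, nxt, dst, seen + (src,)):
            yield (c + sc, ((nxt,) if nxt != dst else ()) + sp)

# Figure 2: routers v1, v2, v3; every link has cost 1.
adj = {"v1": {"v2": 1, "v3": 1},
       "v2": {"v1": 1, "v3": 1},
       "v3": {"v1": 1, "v2": 1}}
# Subnets directly attached to each router, with the subnet link's cost.
S = {"v1": [("s12", 1), ("s13", 1)],
     "v2": [("s12", 1), ("s23", 1), ("s24", 1)],
     "v3": [("s13", 1), ("s23", 1), ("s35", 1)]}

# Step 1: vertex-to-vertex matrix (Table 1).
v2v = {(vi, vj): sorted(all_paths(adj, vi, vj))
       for vi, vj in product(adj, adj)}

# Step 2: vertex-to-subnet matrix (Table 2): append v_j to the path and
# add the subnet link's cost.
v2s = {}
for (vi, vj), tuples in v2v.items():
    for subnet, c_jx in S[vj]:
        v2s.setdefault((vi, subnet), []).extend(
            (c + c_jx, path + (vj,)) for c, path in tuples)
```

Querying `v2s[("v1", "s23")]` reproduces the Table 2 entry discussed above: cost 2 through v2 and cost 3 through (v3, v2), plus the entries contributed via v3.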
Once the vertex-to-subnet matrix has been computed for
the entire network, the portion of the vertex-to-subnet matrix
relevant to each sensor is extracted. Each sensor uses
its vertex-to-subnet matrix to validate the routing advertisements
that are sent on the link on which the sensor is
placed. More precisely, sensor ij , placed on link e ij , needs to
validate routing advertisements from routers v i and v j only. 4
Therefore, the vertex-to-subnet matrix for sensor_ij has rows
from the common vertex-to-subnet matrix corresponding to
routers v_i and v_j only.
Some entries in a vertex-to-subnet matrix may not correspond
to actual routing information. Therefore, the matrix
can be further reduced by ignoring these entries. Consider
the vertex-to-subnet matrix for sensor ij on link e ij with
routers v i and v j . s ik is the subnet on
link e ik between routers v i and vk . sab is any other subnet.
In this context, the vertex-to-subnet matrix for sensor ij is
reduced according to the following rules:
1. For neighboring routers v_i and v_k, row v_i, columns s_ik
or s_ki, a {(cost, path)} tuple is ignored if the path is of
the form (v_k). For example, in Table 2, for row v1, column
s31, the {(cost, path)} tuple {(2, (v3))} is ignored
because subnet s31 is directly connected to router v1.
Therefore, v1 will never advertise a route on link e12
for subnet s31 through router v3 at cost 2.
2. For any row, columns s_ab or s_ba, a {(cost, path)} tuple
is ignored if the path is of the form (..., v_a, v_b, ...) or
(..., v_b, v_a, ...). For example, in Table 2, for row v1,
column s23, the {(cost, path)} tuple {(3, (v3, v2))} is
ignored because router v1 can reach subnet s23 at cost
2 through router v3. Therefore, v1 will not advertise a
route to subnet s23 through (v3, v2) at cost 3.
3. For neighboring routers v_i and v_j, row v_i, columns s_ij
or s_ji, a {(cost, path)} tuple is ignored. For example,
in Table 2, for row v1, column s12, the {(cost, path)}
tuple {(2, (v2))} is ignored because both routers v1 and
v2 are directly connected to subnet s12 and have the
same cost to s12. Router v2 will never use a longer
path through v1 to reach a directly connected subnet
4 A sensor needs to validate routing advertisements only from
routers connected to the link on which it is placed, because
distance-vector routers advertise routing information only on
links connected to them directly.
s12 . Therefore, v1 will never advertise such a route to
v2 .
4. For neighboring routers v i and v j , row v i , any column,
a {(cost, path)} tuple is ignored if the path is of the
form (v j , ). For example, in Table 2, for row v1 , column
s23 , the {(cost, path)} tuple {(2, (v2 ))} is ignored
because if in the route advertised by v1 for subnet s23
the first hop is router v2 , then v1 has learned that route
from v2 . This implies that v2 has a better route to s23
than v1 has and will never use v1 to reach s23 . There-
fore, v1 will never advertise such a route to v2 . The
split-horizon check in RIP ensures the same thing.
After the simplification of the vertex-to-subnet matrix for
sensor_ij, with rows v_i and v_j, the term v_i is actually replaced
by a tuple {v^link_i, v^ip_i}, where v^link_i is the link-level
address of the interface of router v_i that is connected to
link e_ij and v^ip_i is the corresponding IP address. Similarly,
the term v_j is replaced by a tuple {v^link_j, v^ip_j}.
Finally, the information
regarding the position of other sensors is added
to the vertex-to-subnet matrix by marking the links where
the sensors are placed.
5. SENSOR DETECTION ALGORITHM
Once the offline process of generating the sensor configurations
is completed, the configurations are loaded into the
sensors. At run-time the sensors analyze the routing advertisements
that are sent on the corresponding link. They
match the contents of a routing advertisement with their
configuration to decide whether the routing advertisement
represents evidence of an attack or not.
Consider sensor_ij, placed on link e_ij. In addition to storing
{v^link_i, v^ip_i} and {v^link_j, v^ip_j},
sensor_ij also stores e^link-bcast_ij
and e^ip-bcast_ij, which are the link-level
and IP broadcast addresses for link e_ij, and rip^link-mcast
and rip^ip-mcast, which are the link-level and IP multicast addresses
for RIP routers.
In its vertex-to-subnet matrix, sensor_ij also stores {(cost,
path)} sets from router v_i to subnet s_ab of the form
{(c^o_i,ab, p^o_i,ab)}, where c^o_i,ab
is the optimal cost at which router v_i can send data to subnet
s_ab, through path p^o_i,ab. There can be multiple optimal-cost
paths {p^o1_i,ab, p^o2_i,ab, ...} with costs {c^o1_i,ab, c^o2_i,ab, ...} from
router v_i to subnet s_ab such that c^o1_i,ab = c^o2_i,ab = ... = c^o_i,ab.
Router v_i can also send data to subnet s_ab through a path
p^s1_i,ab with a sub-optimal cost c^s1_i,ab > c^o_i,ab. There can be multiple
sub-optimal-cost paths {p^s1_i,ab, p^s2_i,ab, ...} from router v_i to subnet s_ab.
Next, consider a distance-vector routing advertisement m,
where m is of the type:
[Link-Level-Header [IP-Header [UDP-Header [Distance-Vector Routing Data]]]]
For routing advertisement m, m^link_src and m^link_dst are the
link-level source and destination addresses respectively, m^ip_src
and m^ip_dst are the IP source and destination addresses respectively,
m_ttl is the time-to-live field in the IP header,
and m^c_ab is the cost advertised for subnet m^s_ab. By using
the information stored by the sensor and the information
contained in the routing message it is possible to verify
the correctness of link-level and network-level information,
the plausibility of the distance-vector information, the messages
that are needed to verify advertised routes, and the
frequency of routing updates. These four verifications are
described in the following sections.
5.1 Link-Level and Network-Layer Information
Verification
A legitimate routing advertisement must have the link-level
address and IP address of one of the routers connected
to the link 5 and have a time-to-live value equal to 1.
The following is a relation between the fields m^link_src,
m^link_dst, m^ip_src, m^ip_dst, m_ttl, m^c_ab and m^s_ab of a legitimate
routing advertisement m:

{[(m^link_src = v^link_i) ∧ (m^ip_src = v^ip_i) ∧
  {(m^link_dst = v^link_j ∧ m^ip_dst = v^ip_j) ∨
   (m^link_dst = e^link-bcast_ij ∧ m^ip_dst = e^ip-bcast_ij) ∨
   (m^link_dst = rip^link-mcast ∧ m^ip_dst = rip^ip-mcast)}]
 ∨
 [(m^link_src = v^link_j) ∧ (m^ip_src = v^ip_j) ∧
  {(m^link_dst = v^link_i ∧ m^ip_dst = v^ip_i) ∨
   (m^link_dst = e^link-bcast_ij ∧ m^ip_dst = e^ip-bcast_ij) ∨
   (m^link_dst = rip^link-mcast ∧ m^ip_dst = rip^ip-mcast)}]}
∧ (m_ttl = 1)
In the above relation, if the link-level and IP source addresses
of routing advertisement m are those of router v i ,
then the link-level and IP destination addresses of m should
be those of router v j , the broadcast address of link e ij , or
the multicast address of RIP routers. If m has originated
from router v j , the source link-level and IP addresses should
be those of router v j and destination link-level and IP addresses
should be those of router v i , the broadcast address
of link e ij , or the multicast address of RIP routers. The
time-to-live field of m should be 1. Note that link-level and
network-layer information can be spoofed. Therefore, this
verification alone is not enough to conclude that a routing
advertisement is not malicious.
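The relation above can be sketched as a single predicate; the field and variable names are our own assumptions. The RIPv2 multicast group 224.0.0.9 maps to the Ethernet multicast address 01:00:5e:00:00:09.

```python
# A sketch of the Section 5.1 header check (field names assumed).
RIP_LINK_MCAST, RIP_IP_MCAST = "01:00:5e:00:00:09", "224.0.0.9"

def valid_headers(m, vi, vj, e_bcast):
    """m holds the advertisement's addresses and TTL; vi, vj, e_bcast
    are (link-level, IP) address pairs for the routers and the link."""
    def from_to(src, dst):
        return (m["link_src"], m["ip_src"]) == src and \
               (m["link_dst"], m["ip_dst"]) in {dst, e_bcast,
                                                (RIP_LINK_MCAST, RIP_IP_MCAST)}
    # Either direction across the link is legitimate, and TTL must be 1.
    return (from_to(vi, vj) or from_to(vj, vi)) and m["ttl"] == 1
```

Note that, as the text observes, a spoofed advertisement can pass this check, so it is only the first of the four verifications.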
5.2 Distance-Vector Information Verification
Routing advertisements for subnets that do not belong
to the local autonomous system indicate that the routing
advertisements are malicious. 6
5 In distance-vector routing protocols, routing advertisements are
meant for a router's immediate neighbors only.
6 Routers running intra-domain routing protocols do not have
routes to every possible subnet. Usually, routes to subnets out-

Figure 3: Sensor Detection Example

A sensor scans a routing advertisement
for attacks by matching every advertised subnet
against the list of subnets that the sensor has in its configuration.
If the sensor does not find a match, it declares the
routing advertisement to be malicious.
Next, a routing advertisement is analyzed to determine
whether the advertised cost is optimal, sub-optimal, or im-
possible. An optimal cost is the cost of one of the shortest
paths from a router to a destination advertised by the router.
A sub-optimal cost is the cost of one of the paths from a
router to a destination advertised by the router. This path
may not necessarily be the shortest. An impossible cost is
a cost not associated to any of the paths from a router to
a destination advertised by the router. Unreachability, i.e.,
where the advertised cost is 16 (RIP's infinity metric), is
considered sub-optimal rather than impossible.
Consider a routing advertisement m, originating from router
v_i on link e_ij, advertising a cost m^c_ab for subnet m^s_ab. The
sensor configuration defines the following costs for reaching
s_ab from v_i:

m^c_ab is: optimal if m^c_ab ∈ {c^o1_i,ab, c^o2_i,ab, ...};
sub-optimal if m^c_ab ∈ {c^s1_i,ab, c^s2_i,ab, ...};
impossible otherwise.
The above relation states that in a routing advertisement
m, advertised by router v i for subnet sab , the cost m c ab is
optimal if it belongs to the set of optimal costs; sub-optimal
if it belongs to the set of sub-optimal costs; impossible if it
does not belong to the set of optimal costs or sub-optimal
costs. Assuming that a sensor has the correct topological
information, an impossible-cost advertisement detected by
the sensor is considered malicious. Better-than-optimal-cost
routing advertisements can be detected by checking the routing
advertisements for impossible-cost advertisements. For
example, consider Figure 3. The costs of all links are equal
to 1. The set of optimal costs that router v2 can advertise
for subnet s45 , on link e12 , is {3}, using path {(v2 , v3 , v4 )}.
The set of sub-optimal costs that router v2 can advertise for
subnet s45, on link e12, is {5, 6, 6}, with paths {(v2, v6, v7, v8,
...)}, respectively. No paths from router v1 are considered, since router
v2 will not advertise any paths that it has learned through
router v1 on link e12 . Costs advertised by router v2 on link
e12 for subnet s45 that are not 3, 5, or 6 are impossible costs.
side the autonomous system are only known to the border routers.
Non-border routers use default routes to the border routers for
external subnets.
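The classification above amounts to two set-membership tests; a minimal sketch (function name is ours):

```python
# Sketch of the Section 5.2 cost classification. RIP's infinity metric
# (16) signals unreachability and is treated as sub-optimal, not
# impossible.
RIP_INFINITY = 16

def classify_cost(advertised, optimal_costs, suboptimal_costs):
    if advertised in optimal_costs:
        return "optimal"
    if advertised in suboptimal_costs or advertised == RIP_INFINITY:
        return "sub-optimal"
    return "impossible"
```

For the Figure 3 example, router v2 advertising subnet s45 on link e12 has optimal cost {3} and sub-optimal costs {5, 6}, so an advertised cost of 4 would be classified as impossible and the advertisement declared malicious.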
5.3 Path Verification
The impossible-cost-advertisement verification cannot determine
the validity of an optimal-cost or sub-optimal-cost
advertisement. A malicious entity can advertise a sub-optimal
cost to a destination even though the corresponding
optimal-cost path is available for use. For example, in Figure
3, subnet s45 can be reached at cost 3 from router v2 but
a malicious entity on link e12 can pretend to be router v2 and
advertise cost 6 for subnet s45. This will result in router v1
using router v10 , instead of router v2 , to send data to subnet
s45 . A malicious entity can also advertise an optimal cost
when the optimal-cost path is not available. For example,
in Figure 3, a malicious entity on link e12 can pretend to be
router v2 and advertise cost 3 for subnet s45 when subnet
s45 is no longer reachable using path {(v2 , v3 , v4 )}. In addi-
tion, a malicious entity can subvert a router and configure
it to drop data packets, or it may impersonate a router after
having disabled the router.
In all the above attacks, the advertised distance-vector
information is correct. Therefore, these attacks cannot be
detected by merely verifying the distance-vector informa-
tion. To detect such attacks, sensors use a path-verification
algorithm. Consider sensor_ij on link e_ij between routers v_i
and v_j. If sensor_ij finds in a routing advertisement m the
cost m^c_ab advertised by router v_i for subnet m^s_ab to be optimal
or sub-optimal, then for all costs c^k_i,ab, the sensor searches
its configuration for all paths p^k_i,ab that have cost c^k_i,ab.
The set of sensors on path p^k_i,ab is Sensor^k_i,ab. sensor_ij
verifies path p^k_i,ab by sending a message to every sensor_yz
∈ Sensor^k_i,ab. For example, consider Figure 3. In this case,
sensor12 on link e12 detects a routing advertisement from
router v2 for subnet s45 at cost 3. Therefore, sensor12
searches its configuration to find all paths that have cost 3.
sensor12 finds that path {(v2, v3, v4)} has cost 3. In its configuration,
sensor12 also has the information that for path
{(v2, v3, v4)}, links e34 and e45 have sensor34 and sensor45
on them. To validate the advertisement, sensor12 sends
messages to sensor34 and sensor45 . If the advertised cost
had been 5, sensor12 would have sent messages to sensor67 ,
sensor89 and sensor45 .
The path-verification algorithm can be more formally stated
as follows. sensor ij on link e ij verifies a routing advertisement
m from router v i , advertising an optimal or sub-optimal
cost m c ab for subnet sab , using the following steps:
1. If m^c_ab is the optimal cost c^o_i,ab from router v_i to subnet
s_ab, sensor_ij searches its configuration to find all paths
p^k_i,ab from router v_i to subnet s_ab, corresponding to
cost c^o_i,ab, and the set of available sensors Sensor^k_i,ab
on those paths.
2. If Sensor^k_i,ab = {}, i.e., there are no sensors on path
p^k_i,ab, sensor_ij cannot verify if a path from router
v_i to subnet s_ab exists. If there are sensors on every
link of path p^k_i,ab then the entire path can be verified.
If sensors are not present on every link of path p^k_i,ab but
a sensor is present on e_ab then the intermediate path
cannot be verified but it can be verified that subnet s_ab
is reachable from router v_i. If there is no sensor present
on e_ab then it cannot be verified whether subnet s_ab is
reachable from router v_i or not.
3. If Sensor^k_i,ab ≠ {} then sensor_ij sends a message to
every sensor_yz ∈ Sensor^k_i,ab for every path p^k_i,ab.
4. Every path p^k_i,ab is an available path for which every
sensor_yz ∈ Sensor^k_i,ab replies to sensor_ij. If there
are one or more available paths p^k_i,ab, m is considered
a valid routing advertisement. If there are none,
sensor_ij declares m to be malicious.
5. If m^c_ab is a sub-optimal cost, sensor_ij searches its configuration
to find paths p^qk_i,ab from router v_i to subnet
s_ab corresponding to every cost c^q_i,ab such that c^o_i,ab ≤
c^q_i,ab ≤ m^c_ab, where c^o_i,ab is the optimal cost from router v_i to
subnet s_ab. sensor_ij also determines the sets of available
sensors Sensor^qk_i,ab corresponding to paths p^qk_i,ab.
6. sensor_ij sends a message to every sensor_yz ∈ Sensor^sk_i,ab
for paths p^sk_i,ab from router v_i to subnet s_ab corresponding
to every cost c^s_i,ab such that c^o_i,ab ≤ c^s_i,ab < m^c_ab, where
c^o_i,ab is the optimal cost from router v_i to subnet
s_ab. Note that the only difference between p^qk_i,ab and
p^sk_i,ab is that the latter does not contain paths with costs
equal to m^c_ab. Therefore, p^sk_i,ab ⊆ p^qk_i,ab and
Sensor^sk_i,ab ⊆ Sensor^qk_i,ab.
7. Every path p^sk_i,ab is an available path for which every
sensor_yz ∈ Sensor^sk_i,ab replies to sensor_ij. If no available
paths p^sk_i,ab exist, then sensor_ij verifies paths p^k_i,ab
from router v_i to subnet s_ab corresponding to cost
m^c_ab. Every path p^k_i,ab for which every sensor_yz ∈
Sensor^k_i,ab replies to sensor_ij is an available path. If
there are one or more available paths p^k_i,ab, routing
advertisement m with cost m^c_ab is considered a
valid routing advertisement. If there are no available
paths then sensor_ij declares m to be malicious.
8. For every available path p^sk_i,ab, i.e., where every sensor_yz
∈ Sensor^sk_i,ab replies to sensor_ij, sensor_ij waits for a
time-period t_delay. For every available path p^sk_i,ab, if
sensor_ij does not get a routing advertisement m', with
cost c^sk_i,ab, within t_delay, then sensor_ij declares routing
advertisement m to be malicious.
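Steps 1-4 (the optimal-cost case) can be condensed into a short sketch; the names, the configuration layout, and the `reachable` stand-in for the sensor-to-sensor message exchange are ours, not the paper's:

```python
# A condensed sketch of steps 1-4 of the path-verification algorithm.
def verify_optimal(config, reachable, vi, subnet, cost):
    """config maps (router, subnet) -> {cost: [(path, sensors_on_path)]};
    reachable(sensor) stands in for sending a message and getting a reply."""
    unverifiable = False
    for path, sensors in config[(vi, subnet)].get(cost, []):
        if not sensors:
            unverifiable = True        # step 2: nothing to query on this path
        elif all(reachable(s) for s in sensors):
            return "valid"             # step 4: one available path suffices
    return "unverifiable" if unverifiable else "malicious"
```

For the Figure 3 example, sensor12 validating cost 3 from v2 to s45 would query sensor34 and sensor45 on path (v2, v3, v4), and declare the advertisement malicious if either fails to reply.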
For example, consider Figure 3. sensor12 detects a routing
advertisement from router v2 for subnet s45 at cost 3.
sensor12 searches its configuration and finds 3 to be the
optimal cost from router v2 to subnet s45 . sensor12 finds
path {(v2 , v3 , v4 )} that has cost 3. Therefore, sensor12 sends
messages to sensors available on this path, i.e., sensor34
and sensor45 . If sensor12 does not get replies from both
sensor34 and sensor45 , it concludes that the path from
router v2 to subnet s45 is unavailable. If sensor12 gets a
reply from sensor45 but not from sensor34 , sensor12 can
conclude that subnet s45 is reachable from router v2 but
it cannot verify the path. If sensor12 gets a reply from
sensor34 but not from sensor45 , sensor12 cannot be sure if
subnet s45 is reachable from router v2 or not. If sensor12
gets a reply from both sensor34 and sensor45 , sensor12 can
be sure that subnet s45 is reachable from router v2 . For the
placement of sensors in Figure 3, sensor12 can never be sure
of the complete optimal path from router v2 to subnet s45 ,
since link e23 does not have a sensor on it.
Assume now that sensor12 detects a routing advertisement
from router v2 for subnet s45 at cost 5. The sensor
searches its configuration and finds 5 to be a sub-optimal
cost from router v2 to subnet s45 and identifies path {(v2 ,
v6 , v7 , v8 , v4 )} as the only path that has cost 5 from router
v2 to subnet s45 . In addition, the sensor looks for paths
that have costs greater than or equal to the optimal cost
and less than 5. The only such path in this case is the
path having the optimal cost 3. Therefore, sensor12 sends
messages to sensors available on the path having cost 3.
If sensor12 finds this path unavailable then sensor12 sends
messages to sensors available on the path having cost 5, i.e.,
sensor67 , sensor89 and sensor45 . If sensor12 gets replies
from all sensor67 , sensor89 and sensor45 , then the routing
advertisement for cost 5 is valid. However, it is not possible
to reliably determine if the path having cost 3 is unavailable.
In general, in a hostile environment a sensor can determine
availability of a path to a subnet but it cannot determine its
unavailability. By dropping or rerouting a message from the
requesting sensor or the replying sensor, a malicious entity
can make the requesting sensor believe that a path to a
subnet is unavailable. On the other hand, if a malicious
entity can drop or re-route packets on a path, the path is
unreliable and an unreliable path is as good as not being
available at all.
If sensor12 finds that the path having cost 3 is available
then it is possible that either the routing advertisement of
cost 5 is malicious or the routing advertisement is due to a
transitory change in the routing configuration. If the routing
advertisement is transitory and the path from v2 to subnet
s45 at cost 3 is available, then v2 should eventually advertise
a route to subnet s45 at cost 3. If sensor12 does not
see a routing advertisement at cost 3 then the routing advertisement
at cost 5 is malicious. If sensor12 sees a routing
advertisement at cost 3 then the sensor does not verify the
path having cost 5 any further.
5.4 Timing Verification
Routers advertise routing messages at certain intervals of
time. The interval of time at which RIP messages are advertised
is rip_interval. A router can send more than one RIP
message in rip_interval. 7 rip^high_threshold is the maximum number
of packets that a sensor should ever see within rip_interval.
A sensor maintains a counter rip^i_counter and a timer rip^i_timer
for each router v_i that is connected to the link on which the
sensor is placed. The sensor initializes rip^i_timer and sets
rip^i_counter to 0. It sets the time-out value to rip_interval.
The sensor increments rip^i_counter for every RIP advertisement
that it sees from router v_i. If rip^i_counter is greater than
rip^high_threshold when rip^i_timer expires, then this is considered an
attack.
The sensor also maintains a value rip^low_threshold, which is
the minimum number of packets that a sensor should see
within rip_interval. If rip^i_counter is less than rip^low_threshold when
rip^i_timer expires, it can be inferred that the router is not
working. This could be due to a denial-of-service attack
against the router or due to a failure. rip^high_threshold and rip^low_threshold
are implementation and topology dependent. These values
have to be experimentally or statistically determined for a
network.
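The rate check amounts to a per-router counter read out each time rip^i_timer expires; a minimal sketch (class name and return values assumed):

```python
# Sketch of the Section 5.4 timing verification: count advertisements
# per router and compare against the thresholds once per rip_interval.
class RipRateMonitor:
    def __init__(self, high_threshold, low_threshold):
        self.high, self.low = high_threshold, low_threshold
        self.counter = {}                    # rip_counter^i, per router

    def on_advertisement(self, router):
        self.counter[router] = self.counter.get(router, 0) + 1

    def on_timer_expiry(self, router):
        """Called when rip_timer^i fires, i.e., once per rip_interval."""
        n = self.counter.pop(router, 0)      # reset for the next interval
        if n > self.high:
            return "attack"                  # advertisement flood
        if n < self.low:
            return "router not working"      # failure or denial of service
        return "ok"
```

The threshold values, as noted above, would have to be determined experimentally for a given network.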
6. EXPERIMENTAL VALIDATION
An experimental testbed network was built to test the
vulnerability of routing protocols. The testbed routers are
multi-homed PCs, equipped with the GNU Zebra routing
daemon, version 0.91a. The testbed contains five different
autonomous systems. The autonomous systems exchange
7 RIP information from one router may be split and advertised in
multiple RIP messages.
routing information using BGP. The interior gateway protocol
used within the autonomous systems is either RIP or
OSPF.
Figure 4 is a schematic of the complete testbed topology.
The experiments presented in this paper only use autonomous
system 1. The other autonomous systems were
used to conduct experiments on OSPF and BGP. Those experiments
are not discussed in this paper.
The networks in the testbed have addresses in the range
192.168.x.x. Each network has a subnet mask of
255.255.255.0. In the following, subnet "x" is used to denote subnet
192.168.x.0 and address "x.x" denotes the IP address
192.168.x.x.
The following sections discuss three very simple attacks
that were carried out on the testbed. These attacks are
proof-of-concepts to demonstrate the vulnerabilities in distance-vector
routing protocols and how the approach described
here is used to detect these attacks.
6.1 Routing Loop Attack
The Routing Loop Attack demonstrates how false routing
information generated by a remote host can propagate to
routers and, as a consequence, install wrong routes in the
routing tables.
Consider the network shown in Figure 4. A routing loop
for subnet 30 is created by spoofing routing advertisements.
Note that subnet 30 does not exist in Figure 4. The spoofed
routing advertisements are generated by host mike. The
source IP address of the spoofed routing advertisements is
set to be the address of router hotel. The destination of the
spoofed routing advertisements is router foxtrot. This particular
choice of source and destination addresses is dictated
by the particular topology of the network. Spoofed routing
advertisements with the source address of router golf are not
forwarded by golf. Therefore, the source of the spoofed routing
advertisement cannot be golf. In addition, routers accept
routing information only from their neighbors. Therefore,
the source of the spoofed routing advertisements has to be
hotel.
The spoofed routing advertisements from mike are routed
through golf to reach foxtrot. As a consequence, foxtrot adds
a route for subnet 30, through hotel. foxtrot advertises
this route to golf but not to hotel, because it believes that
hotel is the source of the route. After receiving the advertisement
from foxtrot, golf adds a route to subnet 30, through
foxtrot. Then, golf sends a routing advertisement for subnet
30 to hotel. When the advertisement is processed by
hotel, a route to subnet 30, through golf, is added to hotel 's
routing table. This results in a routing loop. A traceroute
from host india for an address in subnet 30 shows the path
golf-foxtrot-hotel-golf-golf.
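The propagation of the spoofed route can be reproduced with a toy distance-vector simulation; this is our own construction for illustration, not the testbed's code, and it models only hop-count metrics and split horizon:

```python
# Toy simulation of the Routing Loop Attack: a spoofed advertisement
# that appears to come from hotel makes foxtrot install a route for the
# non-existent subnet 30, and the route then walks around the loop.
INF = 16

class Router:
    def __init__(self, name):
        self.name, self.table = name, {}     # subnet -> (cost, next_hop)

    def receive(self, subnet, cost, sender):
        cost = min(cost + 1, INF)
        cur = self.table.get(subnet)
        if cur is None or cost < cur[0] or cur[1] == sender:
            self.table[subnet] = (cost, sender)

    def advertise_to(self, peer):
        for subnet, (cost, next_hop) in self.table.items():
            if next_hop != peer.name:        # split horizon
                peer.receive(subnet, cost, self.name)

foxtrot, golf, hotel = Router("foxtrot"), Router("golf"), Router("hotel")
# mike's spoofed advertisement carries hotel's source address.
foxtrot.receive("subnet30", 1, sender="hotel")
foxtrot.advertise_to(golf)    # split horizon keeps it away from hotel
golf.advertise_to(hotel)
# Now hotel routes subnet30 via golf, golf via foxtrot, and foxtrot via
# hotel: a routing loop for a subnet that does not exist.
```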
6.2 Faulty Route Attack
The Faulty Route Attack demonstrates how a malicious
host can divert traffic by advertising a route with a cost
that is lower than the optimal cost. Consider Figure 4. A
malicious host on the link between golf and foxtrot wants
to access incoming traffic to subnet 25. To achieve this, the
attacker has to convince foxtrot that the best way to reach
subnet 25 is to route tra#c through golf instead of romeo.
The route associated with romeo has cost 2.
Therefore, the malicious host pretends to be golf and
sends a routing message to foxtrot advertising a route to
subnet 25 at cost 1. As a consequence, foxtrot modifies the
route to subnet 25 in its routing table. The new route goes
through golf instead of romeo.
Figure 4: Testbed Topology (routers alpha through uniform in five autonomous systems, exchanging routes via RIP, OSPF, and BGP)
6.3 Denial-of-Service Attack
In the Denial-of-Service Attack, malicious hosts on links
between foxtrot and golf, foxtrot and hotel and foxtrot and
romeo collaborate to make subnet 10 unreachable from fox-
trot. The malicious host on the link between foxtrot and
golf pretends to be golf and spoofs routing advertisements
to foxtrot advertising unreachability to subnet 10. On receipt
of the spoofed routing advertisements, foxtrot modifies
the route to subnet 10 in its routing table to go through hotel
instead of golf.
Next, the malicious host on the link between foxtrot and
hotel pretends to be hotel and spoofs a routing message to
foxtrot advertising unreachability to subnet 10. On receipt
of the spoofed routing advertisement, foxtrot modifies the
route to subnet 10 in its routing table to go through romeo
instead of hotel.
When romeo too advertises unreachability to subnet 10,
foxtrot removes the route to subnet 10 from its routing table
because it has no other way to reach subnet 10 except
through golf, hotel, or romeo. A traceroute from foxtrot to
subnet 10 returns a Network Unreachable error.
The malicious hosts keep sending spoofed routing updates
at a rate faster than the actual routing updates, thus
preventing the infrastructure from re-establishing correct
routes.
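The attack above can be sketched as the progressive poisoning of foxtrot's candidate routes to subnet 10. The sketch below is a simplification (real RIP keeps only the best route and relies on timeouts and poisoning, and the costs 2/3/3 are assumed, not taken from the paper), but it shows how each spoofed "unreachable" advertisement peels away one fallback until no route remains:

```python
# foxtrot's candidate routes to subnet 10, one per neighbor.
# Costs are illustrative assumptions; RIP uses metric 16 as "infinity".
INFINITY = 16

candidates = {"golf": 2, "hotel": 3, "romeo": 3}

def best_route(cands):
    """Return the (neighbor, cost) of the cheapest live route, or None."""
    live = {n: c for n, c in cands.items() if c < INFINITY}
    return min(live.items(), key=lambda nc: nc[1]) if live else None

for spoofed in ["golf", "hotel", "romeo"]:
    candidates[spoofed] = INFINITY   # spoofed "unreachable" advertisement
    print(spoofed, "->", best_route(candidates))
# After the third spoof, best_route is None: subnet 10 is unreachable,
# matching the Network Unreachable error seen at foxtrot.
```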
These experiments demonstrate how spoofed routing advertisements
from unauthorized entities can disrupt the distance-vector
routing infrastructure. A malicious entity can
advertise non-existent routes, advertise unreachability when
a route is actually present, or advertise better-than-optimal
routes to divert tra#c.
6.4 Detection
A preliminary experimental evaluation of the intrusion detection
approach is presented here. A detailed evaluation of
the approach is the subject of our current research. The current
objective is to establish a proof-of-concept by detecting
the attacks outlined in Section 6. Two sensors are placed
on the testbed of Figure 4. sensorgf is placed on the link
between routers golf and foxtrot. sensorgm is placed on the
link connecting golf to subnet 10. The sensor configurations
are generated following the algorithm presented in Section 4.
Consider the Routing Loop Attack. Routing advertisements
for subnet 30 spoofed by mike are analyzed by
sensorgm on the link connecting golf to subnet 10. sensorgm
detects the source link-level and IP addresses of the routing
advertisements to be incorrect. The spoofed routing advertisements
have the source link-level and IP addresses of hotel,
whereas the only possible routing advertisements on that link
should be from router golf. sensorgf also detects the spoofed
routing advertisements because subnet 30, advertised in the
spoofed routing message, does not exist in the domain.
Consider the Faulty Route Attack. sensorgf detects a
routing message that appears to come from router golf, advertising
a route to subnet 25 at cost 1. Therefore, sensorgf
searches its configuration for possible paths from golf to subnet
25 that do not have foxtrot as the first hop. The possible
paths are through either router uniform or router hotel, but
none of the paths has cost 1. Therefore, sensorgf infers
that the advertised cost from golf to subnet 25 is an impossible
cost. As a consequence, the routing advertisement is
considered malicious.
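The sensor's first decision is to classify an advertised cost against the path costs in its configuration. A minimal sketch (one plausible reading of the scheme: a cost matching no configured path is impossible, the minimum configured cost is optimal, anything else is sub-optimal; the path costs 3 and 4 for golf-to-subnet-25 via uniform and hotel are assumptions, not figures from the paper):

```python
# Classify an advertised cost against the costs of all configured paths
# from the advertising router to the destination (excluding paths whose
# first hop is the sensor's own link). "impossible" is flagged at once;
# "optimal" and "sub-optimal" trigger the path-verification protocol.

def classify(advertised_cost, path_costs):
    if advertised_cost not in path_costs:
        return "impossible"
    if advertised_cost == min(path_costs):
        return "optimal"
    return "sub-optimal"

# Faulty Route Attack: paths from golf to subnet 25 exist (via uniform
# or hotel, costs assumed here), but none has cost 1.
print(classify(1, [3, 4]))  # -> "impossible"
```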
Consider the Denial-of-Service Attack. When sensorgf
receives a spoofed routing message, which appears to be
coming from router golf, it treats unreachability as a sub-optimal-cost
advertisement. Therefore, sensorgf sends a
message to sensorgm to validate whether the route to subnet
10 through golf is available or not. sensorgf receives a reply
from sensorgm. As a consequence, sensorgf concludes that
the routing message is malicious.
7. ALGORITHM EVALUATION
The experimental evaluation demonstrates that the
suggested approach can successfully detect malicious routing
advertisements. Nonetheless, our approach has a few
limitations, which are discussed here.
7.1 Computational Complexity
The all-pair/all-path algorithm is used to generate sensor
configurations. It finds all paths from every router to every
other router in the network. This algorithm has an exponential
lower bound. However, the algorithm is suitable
for medium-size sparse topologies. Several real topologies
that run RIP were analyzed. The all-pair/all-path algorithm
converged in acceptable time for all these topologies.
The algorithm should converge in acceptable time for most
topologies running RIP, since these topologies are not very
large or dense.
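The all-pair/all-path computation can be sketched as a depth-first enumeration of every simple path between every ordered pair of routers. The number of simple paths can grow exponentially with graph density, which is the exponential lower bound mentioned above; sparse RIP topologies keep it tractable. (The 4-node adjacency list below is a toy example, not the testbed topology.)

```python
# Enumerate all simple paths between every ordered pair of nodes.
# Worst case is exponential in the number of nodes for dense graphs.

def all_simple_paths(adj, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt in adj[src]:
        if nxt not in path:                 # simple paths only: no revisits
            paths += all_simple_paths(adj, nxt, dst, path)
    return paths

def all_pair_all_path(adj):
    nodes = list(adj)
    return {(u, v): all_simple_paths(adj, u, v)
            for u in nodes for v in nodes if u != v}

# Toy topology: a square, so there are two simple paths from a to d.
adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
print(len(all_pair_all_path(adj)[("a", "d")]))  # -> 2
```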
In addition, the sensor configurations are generated offline.
The configurations have to be regenerated if routers
and links are added to the topology, and the sensors have to be
brought offline during this time. However, router crashes or
link failures do not require that the sensor configurations be
regenerated.
7.2 Message Overhead
Another drawback of the approach is the additional traffic
that is generated to validate optimal and sub-optimal routes.
The sensor traffic increases as the number of sensors grows
and the required security guarantees become more stringent.
Consider a routing advertisement m from router v_i, advertising
a path to subnet s_ab. If a sensor sensor_ij determines
that the cost advertised in m is an impossible cost, sensor_ij
can declare the routing advertisement to be malicious without
communicating with any other sensor. Therefore, there
is no traffic overhead in this case.
If sensor_ij determines that the cost advertised in m is an
optimal cost, sensor_ij will search its configuration to find
all the optimal paths from router v_i to subnet s_ab. Let the
set of optimal paths from router v_i to subnet s_ab be
{p_1^o, p_2^o, p_3^o, ...} and the corresponding set of numbers
of sensors on each path be {n_1^o, n_2^o, n_3^o, ...}. Now, the
maximum number of messages that sensor_ij will generate to verify
m will be n_1^o + n_2^o + n_3^o + ... . If p_1^o is the first
optimal path to be verified and p_1^o is found to be a valid path
from router v_i to subnet s_ab, sensor_ij will only generate n_1^o
messages to verify m. Thus, if p_1^o is the first path to be verified,
n_1^o ≤ n_optimal ≤ n_1^o + n_2^o + n_3^o + ..., where n_optimal is the
number of messages that sensor_ij will generate to verify
an optimal-cost advertisement received from router v_i for
subnet s_ab. Assuming that every sensor replies to every
request, the total overhead is 2 × n_optimal messages.
If sensor_ij determines that the cost advertised in m is a
sub-optimal cost, sensor_ij will search its configuration to
find the set of paths {p_1^o, ..., p_1^s, ..., p_1^a, ...} from
router v_i to subnet s_ab, where p_1^o is an optimal path,
p_1^s is a sub-optimal path, and p_1^a is a path corresponding
to the advertised cost. {n_1^o, ..., n_1^s, ..., n_1^a, ...} is the
corresponding set of numbers of sensors on each path. Now, the
maximum number of messages that sensor_ij will generate to verify
that all paths from router v_i to subnet s_ab with costs less
than the advertised cost are not available is n_<advertised. If all
paths with less than the advertised cost are found unavailable,
sensor_ij will generate n_advertised messages to verify that at
least one path with the advertised cost is available, where
n_1^a ≤ n_advertised ≤ n_1^a + n_2^a + ... . In this case, the total
overhead is 2 × (n_<advertised + n_advertised) messages.
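The message-count bounds in this analysis can be sketched directly: given the number of sensors on each candidate path, the best case queries only the first path and the worst case queries them all, with each request earning one reply. (The sensor counts below are hypothetical inputs, not figures from the paper.)

```python
# Message-count bounds for the path-verification protocol.

def optimal_verification_bounds(sensors_per_optimal_path):
    """Bounds on messages to verify an optimal-cost advertisement:
    best case, the first path checked is valid; worst case, every
    optimal path is checked."""
    best = sensors_per_optimal_path[0]
    worst = sum(sensors_per_optimal_path)
    return best, worst

def suboptimal_worst_case(sensors_below_advertised, sensors_at_advertised):
    """Worst-case total overhead for a sub-optimal-cost advertisement:
    n_<advertised messages to rule out every cheaper path, then up to
    n_advertised messages to confirm a path at the advertised cost,
    doubled to count replies."""
    n_below = sum(sensors_below_advertised)
    n_adv = sum(sensors_at_advertised)
    return 2 * (n_below + n_adv)

print(optimal_verification_bounds([2, 3, 1]))   # -> (2, 6)
print(suboptimal_worst_case([2, 1], [3]))       # -> 12
```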
For a sub-optimal-cost advertisement, sensor_ij might find
an available path to subnet s_ab from router v_i with a cost
that is less than the advertised cost. In this case, the number
of messages generated by sensor_ij is at most n_<advertised,
since no messages are generated to verify paths with the
advertised cost unless it is determined that all paths with
costs less than the advertised cost are unavailable. The
total overhead is then 2 × n_<advertised messages.

Figure 5: Message Overhead Example (a chain topology with links e12, e23, e34, and e45, subnet s45 behind router v4, and sensors sensor12, sensor34, and sensor45)
Path verification is done for every routing update that
advertises an optimal or sub-optimal cost. Every sensor that
detects a routing advertisement will generate an overhead of
2 × n_optimal for every optimal route and 2 × (n_<advertised +
n_advertised) for every sub-optimal route in the routing
advertisement. However, the overheads 2 × n_optimal and 2 ×
(n_<advertised + n_advertised) decrease as the proximity of a
sensor to the advertised destination increases.
Routing updates are generated every 30 seconds. If we
assume the verification packet size to be 48 bytes (including
the headers), then every 30 seconds, the verification of one
destination will result in an overhead of 48 × 2 × n_optimal or
48 × 2 × (n_<advertised + n_advertised) bytes, depending on
whether the advertised cost is optimal or sub-optimal.
For example, consider Figure 5. The cost of all links is
equal to 1. When router v4 advertises a route for subnet s45
at cost 1 on link e34, sensor34 searches its configuration and
determines that an advertisement from v4 for s45 at cost 1
is an optimal-cost advertisement. Since there is only one
path {(v4)} from v4 to s45, and sensor45 is the only available
sensor on that path, sensor34 sends a verification request
to sensor45. sensor45 replies with a verification reply to
sensor34. Since each verification message has a size of 48
bytes, this verification by sensor34 requires 96 bytes.
Next, v3 advertises a route for subnet s45 at cost 2 on link
e23. Since e23 does not have any sensor on it, no verification
messages are generated. Next, when v2 advertises a route
for s45 at cost 3 on e12, sensor12 searches its configuration
and determines that an advertisement from v2 for s45 at cost
3 is an optimal-cost advertisement. There is only one path
{(v2, v3, v4)} from v2 to s45, and two sensors (sensor34 and
sensor45) are available on that path. Therefore, sensor12
sends one message to sensor34 and one message to sensor45.
Both sensor34 and sensor45 reply back. Therefore, the verification
by sensor12 requires 48 × 4 = 192 bytes. The entire
verification process requires 96 + 192 = 288 bytes. This
is the amount of overhead that will be generated every 30
seconds.
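The arithmetic of this example can be rechecked with a one-line formula: each sensor queried costs one 48-byte request plus one 48-byte reply.

```python
# Byte overhead of one verification round, per verifying sensor:
# 48 bytes per message, one request + one reply per sensor queried.
MSG_BYTES = 48

def verification_bytes(sensors_queried):
    return MSG_BYTES * 2 * sensors_queried

by_sensor34 = verification_bytes(1)  # sensor45 is the only sensor on (v4)
by_sensor12 = verification_bytes(2)  # sensor34 and sensor45 on (v2, v3, v4)
print(by_sensor34, by_sensor12, by_sensor34 + by_sensor12)  # -> 96 192 288
```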
The overhead due to path verification might be tolerable
under normal circumstances. Under attack conditions, the
overhead will be the same as discussed above, depending on
whether the advertised cost is impossible, optimal, or sub-optimal.
A malicious entity might try to use the sensors to
mount a denial-of-service attack by sending an excessive amount
of routing updates. However, this will result in the generation
of more routing updates than the sensor's threshold allows.
Under such a condition, the sensor will appropriately scale
back its path verification mechanism and raise an alert.
However, the overhead due to path-verification may be
unacceptable under conditions where a major part of the
network fails. If a link breaks, then every router that is using
a path that contains the broken link will send a routing update
to its neighbors. Every routing update will generate an
overhead as discussed above. Moreover, a link breakage will
result in sub-optimal cost advertisements or unreachability
advertisements, which are also treated as sub-optimal cost
advertisements. The overhead to verify a sub-optimal-cost
advertisement is more than that of verifying an optimal-cost
advertisement.
If a number of links fail, then routes to many destinations
will change or become unavailable, leading to many
routing updates. Consequently, there will be an increase in
the number of sensor verification messages. When links get
reconnected 8 , better paths might be advertised and as a
consequence, again, increase the number of sensor verification
messages.
The message overhead due to the path-verification protocol
can be reduced by reducing the number of sensors in the
network, reducing the frequency of path verification, or verifying
the advertisement selectively. However, all of these
approaches will lead to weaker security guarantees. Our
present work is focused on reducing the overhead due to path
verification without reducing the effectiveness of detection.
A possible way of reducing the overhead is by modifying the
path verification algorithm so that it does not try to verify
every path that corresponds to the advertised cost. Instead
of sending a verification message to every sensor on every
path having the advertised cost, the verifying sensor sends
just one message to an IP address in the destination subnet.
This message will traverse a certain path to reach the des-
tination. All sensors present on this path will send back a
reply to the verifying sensor. Based on the replies that are
sent back, the sensor decides whether there exists a path
with the advertised cost that has all the replying sensors on
it or not. This approach should reduce the path verification
overhead significantly.
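The proposed optimization reduces the check to a single probe: the verifying sensor sends one message toward the destination, collects the set of sensors that reply, and then checks whether some configured path at the advertised cost contains all of the replying sensors. A minimal sketch of that final membership check (the candidate paths and sensor names are hypothetical):

```python
# Given the set of sensors that replied to a single probe, decide
# whether some configured path with the advertised cost has all the
# replying sensors on it.

def path_matches(replying_sensors, candidate_paths):
    """candidate_paths: list of (sensors_on_path, cost) entries,
    all at the advertised cost."""
    return any(set(replying_sensors) <= set(sensors)
               for sensors, cost in candidate_paths)

paths_at_cost_3 = [({"sensor34", "sensor45"}, 3), ({"sensor45"}, 3)]
print(path_matches({"sensor34", "sensor45"}, paths_at_cost_3))  # -> True
print(path_matches({"sensor12"}, paths_at_cost_3))              # -> False
```

This replaces one request per sensor per candidate path with a single probe plus the replies it triggers, which is the source of the expected savings.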
7.3 Other Limitations
Our approach is not capable of detecting attacks under
certain conditions. Random dropping or modification of
data packets and unauthorized access to routers cannot be
detected. The verifying sensor uses probe packets to verify
that a path with a cost less than the advertised cost is
not available. Since a malicious router present on the path
between the verifying sensor and the destination can drop
the probe packets, this cannot be determined
reliably. Suppose there is a malicious router on every available
path, with a cost less than the advertised cost, from
the verifying sensor to the destination. If all these routers
drop probe packets, then the verifying sensor can be made
to believe that all paths with a cost less than the advertised
cost are unavailable. The verifying sensor will then consider
a sub-optimal path to be valid even though better paths are
present.
Attacks cannot be detected on links where sensors are not
present. A malicious entity can advertise false routing information
on links where there are no sensors without being
detected. The false routing information will be accepted
by the routers connected to the link and will be advertised
further. The effects of the false routing information will
propagate undetected until it reaches a link where a sensor
is present.
8 Again, we consider only those links that were a part of the topology
that was used to generate the sensor configurations.
The detection scheme might also generate false positives.
Sensors that do not have configurations based on the correct
network topology might generate false alarms. Incorrect
threshold values and timers may also cause false alarms. For
example, when a sensor sends a verification message, it sets
a timer within which it expects to receive a reply from the
other sensor. If the reply gets delayed due to some network
condition, then false alarms may be generated.
False alarms may also be generated when verifying sub-
optimal-cost advertisements. To verify a sub-optimal-cost
advertisement, a sensor first verifies that all optimal-cost
paths are unavailable. If the verifying sensor receives replies
from all the sensors associated with an optimal path, it will
infer that the optimal path is available. However, if sensors
are not placed on every link, there might be a broken link
after the last sensor used for the verification of the optimal
path. As a consequence, the verifying sensor will assume
that the optimal path is available and will generate a false
alarm indicating that the sub-optimal path is malicious.
8. CONCLUSION
This paper presented a novel approach to detect attacks
against the routing infrastructure. The approach uses a set
of sensors that analyze routing traffic in different locations
within a network. An algorithm to automatically generate
both the detection signatures and the inter-sensor messages
needed to verify the state of the routing infrastructure has
been devised for the case of the RIP distance-vector routing
protocol.
The approach described here has a number of advantages.
First, the implementation of our approach does not require
any modification to routers and routing protocols. Most
current approaches require that routers and routing protocols be
changed. The high cost of replacing routers and the risk of
large-scale service disruption due to possible routing protocol
incompatibility have resulted in some inertia in the deployment
of these approaches. Our approach, on the other
hand, is deployable and provides a preliminary solution to
detecting attacks against the routing infrastructure.
Second, the detection process does not use the computational
resources of routers. There might be additional load
on the routers from having to forward the tra#c generated
by sensors. However, this additional load should not be as
much as it would be if a router had to perform public-key
decryption of every routing update that it received, which
is what most current schemes require.
Third, the approach supports the automatic generation of
intrusion detection signatures, a task that is otherwise
human-intensive and error-prone.
However, the approach has some drawbacks. First, the
complexity of the offline computation that generates sensor
configurations increases as the density of the network
graph increases. Our experience suggests that this will not
be a problem in the case of real-life topologies, but it could
become unmanageable for densely interconnected infrastructures.
In addition, sensors generate additional traffic to validate
routing advertisements. The amount of additional traffic
generated increases as the required security guarantees
become more stringent. Finally, attacks where subverted
routers modify the contents of data packets or drop packets
selectively cannot be detected using this approach.
Our future work will be focused on extending the approach
described here to intra-domain link-state protocols
(e.g., OSPF) and inter-domain protocols (e.g., BGP). The
vulnerabilities associated with these protocols have already
been analyzed in our testbed network and a preliminary version
of the algorithm for both the OSPF and BGP protocols
has been devised. To address large-scale networks where
knowledge of the complete topology cannot be assured, we
are working on a model where intelligent decisions can be
made based on partial topology information.
Acknowledgments
We want to thank Prof. Richard Kemmerer for his invaluable
comments.
This research was supported by the Army Research Of-
fice, under agreement DAAD19-01-1-0484 and by the Defense
Advanced Research Projects Agency (DARPA) and
Rome Laboratory, Air Force Materiel Command, USAF,
under agreement number F30602-97-1-0207. The U.S. Government
is authorized to reproduce and distribute reprints
for Governmental purposes notwithstanding any copyright
annotation thereon.
The views and conclusions contained herein are those of
the authors and should not be interpreted as necessarily
representing the official policies or endorsements, either expressed
or implied, of the Army Research Office, the Defense
Advanced Research Projects Agency (DARPA), Rome Laboratory,
or the U.S. Government.
Antonio Pescap , Giorgio Ventre, Experimental analysis of attacks against intradomain routing protocols, Journal of Computer Security, v.13 n.6, p.877-903, December 2005 | routing security;intrusion detection;network topology |
586145 | Mimicry attacks on host-based intrusion detection systems. | We examine several host-based anomaly detection systems and study their security against evasion attacks. First, we introduce the notion of a mimicry attack, which allows a sophisticated attacker to cloak their intrusion to avoid detection by the IDS. Then, we develop a theoretical framework for evaluating the security of an IDS against mimicry attacks. We show how to break the security of one published IDS with these methods, and we experimentally confirm the power of mimicry attacks by giving a worked example of an attack on a concrete IDS implementation. We conclude with a call for further research on intrusion detection from both attacker's and defender's viewpoints. | INTRODUCTION
The goal of an intrusion detection system (IDS) is like
that of a watchful burglar alarm: if an attacker manages to
penetrate somehow our security perimeter, the IDS should
set alarms so that a system administrator may take appropriate
action. Of course, attackers will not necessarily
cooperate with us in this. Just as cat burglars use stealth
to escape without being noticed, so too we can expect that
computer hackers may take steps to hide their presence and
try to evade detection. Hence if an IDS is to be useful, it
This research was supported in part by NSF CAREER
CCR-0093337.
CCS'02, November 18-22, 2002, Washington, DC, USA.
would be a good idea to make it difficult for attackers to
cause harm without being detected. In this paper, we study
the ability of IDS's to reliably detect stealthy attackers who
are trying to avoid notice.
The fundamental challenge is that attackers adapt in response
to the defensive measures we deploy. It is not enough
to design a system that can withstand those attacks that are
common at the time the system is deployed. Rather, security
is like a game of chess: one must anticipate all moves the
attacker might make and ensure that the system will remain
secure against all the attacker's possible responses. Conse-
quently, an IDS that is susceptible to evasion attacks (where
the attacker can cloak their attack to evade detection) is of
uncertain utility over the long term: we can expect that if
such an IDS sees widespread deployment, then attackers will
change their behavior to routinely evade it. Since in practice
many attacks arise from automated scripts, script writers
may someday incorporate techniques designed to evade the
popular IDS's in their scripts. In this sense, the very success
of an approach for intrusion detection may lead to its
own downfall, if the approach is not secure against evasion
attacks.
Broadly speaking, there are two kinds of intrusion detection
systems: network intrusion detection systems, and
host-based intrusion detection systems. Several researchers
have previously identified a number of evasion attacks on
network intrusion detection systems [19, 18, 7, 1]. Motivated
by those results, in this paper we turn our attention
to host-based intrusion detection.
Though there has been a good deal of research on the security
of network IDS's against evasion attacks, the security
of host-based intrusion detection systems against evasion attacks
seems not to have received much attention in the security
literature. One can find many papers proposing new
techniques for intrusion detection, and authors often try to
measure their detection power by testing whether they can
detect currently-popular attacks. However, the notion of security
against adaptive adversarial attacks is much harder
to measure, and apart from some recent work [23, 24], this
subject does not seem to have received a great deal of coverage
in the literature. To remedy this shortcoming, in this
paper we undertake a systematic study of the issue.
Host-based intrusion detection systems can be further divided
into two categories: signature-based schemes (i.e.,
misuse detection) and anomaly detection. Signature-based
schemes are typically trivial to bypass simply by varying
the attack slightly, much in the same way that polymorphic
viruses evade virus checkers. We show in Section 4.2
how to automatically create many equivalent variants of a
given attack, and this could be used by an attacker to avoid
matching the IDS's signature of an attack. This is an unavoidable
weakness of misuse detection. Evasion attacks on
signature-based schemes are child's play, and so we do not
consider them further in this paper.
Anomaly detection systems are more interesting from the
point of view of evasion attacks, and in this paper we focus
specifically on anomaly detection systems. We show in Section
3 several general evasion methods, including the notion
of a mimicry attack and the idea of introducing "semantic
no-ops" in the middle of the attack to throw the IDS off.
Next, in Section 4, we introduce a principled framework for
finding mimicry attacks, building on ideas from language
and automata theory. We argue in Section 4.2 that nearly
every system call can be used as a "no-op," giving the attacker
great freedom in constructing an attack that will not
trigger any intrusion alarms. Sections 5 and 6 describe our
empirical experience in using mimicry attacks to escape
detection: we convert an off-the-shelf exploit script into one
that works without being detected by the pH IDS. Finally,
in Sections 8 and 9 we conclude with a few parting thoughts
on countermeasures and implications.
For expository purposes, this paper is written from the
point of view of an attacker. Nonetheless, our goal is not
to empower computer criminals, but rather to explore the
limits of current intrusion detection technology and to enable
development of more robust intrusion detection sys-
tems. The cryptographic community has benetted tremendously
from a combination of research on both attacks and
defenses|for instance, it is now accepted wisdom that one
must rst become expert in codebreaking if one wants to
be successful at codemaking, and many cryptosystems are
validated according to their ability to stand up to concerted
adversarial analysis|yet the intrusion detection community
has not to date had the benet of this style of adversarial
scholarship. We hope that our work will help to jump-start
such a dialogue in the intrusion detection research literature.
2. A TYPICAL HOST-BASED IDS
There have been many proposals for how to do host-based
anomaly detection, but a paradigmatic (and seminal) example
is the general approach of Forrest et al. [3, 2, 8, 26,
21]. We will briefly review their scheme. They monitor the
behavior of applications on the host by observing the interaction
of those applications with the underlying operating
system. In practice, security-relevant interactions typically
take the form of system calls, and so their scheme works
by examining the trace of system calls performed by each
application.
Their scheme is motivated by using the human immune
system as a biological analogy. If the system call traces of
normal applications are self-similar, then we can attempt to
build an IDS that learns the normal behavior of applications
and recognizes possible attacks by looking for abnormalities.
In the learning phase of this sort of scheme, the IDS gathers
system call traces from times when the system is not
under attack, extracts all subtraces containing six consecutive
system calls, and creates a database of these observed
subtraces 1 . A subtrace is deemed anomalous if it does not
In practice, pH uses lookahead pairs to reduce the size of
the database. This only increases the set of system call
appear in this database. Then, in the monitoring phase,
the abnormality of a new system call trace is measured by
counting how many anomalous subtraces it contains.
The authors' experience is that attacks often appear as
radically abnormal traces. For instance, imagine a mail
client that is under attack by a script that exploits a buffer
overrun, adds a backdoor to the password file, and spawns a
new shell listening on port 80. In this case, the system call
trace will probably contain a segment looking something like
this:
open(), write(), close(), socket(), bind(), listen(),
accept(), read(), fork().
Since it seems unlikely that the mail client would normally
open a file, bind to a network socket, and fork a child in immediate
succession, the above sequence would likely contain
several anomalous subtraces, and thus this attack would be
easily detected.
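The window-of-six scheme described above can be sketched in a few lines (a simplification: as the footnote notes, pH actually uses lookahead pairs rather than literal six-call subtraces; the traces below are made-up illustrations):

```python
# Sliding-window anomaly detector in the style of Forrest et al.:
# learn all length-6 subtraces of system calls from normal runs, then
# count how many subtraces of a new trace were never seen in training.

WINDOW = 6

def subtraces(trace, k=WINDOW):
    return {tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)}

def train(normal_traces):
    db = set()
    for t in normal_traces:
        db |= subtraces(t)
    return db

def anomaly_count(db, trace):
    return sum(1 for s in subtraces(trace) if s not in db)

# Hypothetical "normal" behavior and the attack segment from the text.
normal = [["open", "read", "mmap", "mmap", "open", "read", "mmap"]]
db = train(normal)
attack = ["open", "read", "mmap", "open", "write", "close",
          "socket", "bind", "listen"]
print(anomaly_count(db, attack))   # -> 4: every window is unseen
```

A mimicry attacker's goal, in these terms, is to realize the harmful behavior using only subtraces already present in `db`, driving the anomaly count to zero.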
We selected Somayaji and Forrest's pH intrusion detection
system [21] for detailed analysis, mainly because it was
the only system where full source code could be obtained
for analysis. Many other proposals for host-based anomaly
detection may be found in the literature [3, 2, 8, 26, 21, 5,
14, 15, 4, 26, 12, 13, 17, 27]. However, pH is fairly typical,
in the sense that many host-based IDS's rely on recognizing
attacks based on the traces they produce, be it traces of system
calls, BSM audit events, or Unix commands. We will
use pH as a motivating example throughout the paper, but
we expect that our techniques will apply more generally to
host-based intrusion detection systems based on detecting
anomalies in sequences of events. For instance, it should be
possible to use our approach to analyze systems based on
system call sequences [3, 2, 8, 26, 5, 27], data mining [14,
15], neural networks [4], nite automata [17], hidden Markov
models [26], and pattern matching in behavioral sequences
[12, 13].
3. BUILDING BLOCKS FOR EVASION
Background. First, let us start with a few assumptions to
simplify the analysis to follow. It seems natural to assume
that the attacker knows how the IDS works. This seems
unavoidable: If the IDS becomes popular and is deployed at
many sites, it will be extremely difficult to prevent the source
code to the IDS from leaking. As usual, security through
obscurity is rarely a very reliable defense, and it seems natural
to assume that the IDS algorithm will be available for
inspection and study by attackers.
Similarly, if the IDS relies on a database of normal behav-
ior, typically it will be straightforward for the attacker to
predict some approximation to this database. The behavior
of most system software depends primarily on the operating
system version and configuration details, and when
these variables are held constant, the normal databases produced
on different machines should be quite similar. Hence,
an attacker could readily obtain a useful approximation to
the database on the target host by examining the normal
databases found on several other hosts of the same type, retaining
only program behaviors common to all those other
databases, and using the result as our prediction of the normal
database on the target host. Since in our attacks the
traces allowed by pH.
attacker needs only an under-approximation to the normal
database in use, this should suffice. Hence, it seems reasonable
to assume that the database of normal behaviors is
mostly (or entirely) known.
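The database-approximation step described above amounts to a set intersection: harvest the normal databases from several similarly configured hosts and keep only the behaviors common to all of them. The intersection under-approximates the target's database, which is exactly what the attacker needs, since any sequence in the intersection is very likely accepted by the target IDS. (The two-call subtraces below are hypothetical.)

```python
# Predict the target host's normal database by intersecting the
# databases harvested from several hosts of the same type.

def approximate_db(host_dbs):
    dbs = iter(host_dbs)
    approx = set(next(dbs))
    for db in dbs:
        approx &= set(db)
    return approx

host_a = {("open", "read"), ("read", "close"), ("mmap", "read")}
host_b = {("open", "read"), ("read", "close")}
host_c = {("open", "read"), ("read", "close"), ("stat", "open")}
print(sorted(approximate_db([host_a, host_b, host_c])))
# -> [('open', 'read'), ('read', 'close')]
```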
Moreover, we also assume that the attacker can silently
take control of the application without being detected. This
assumption is not always satisfied, but for many common
attack vectors, the actual penetration leaves no trace in the
system call trace. For instance, exploiting a buer overrun
vulnerability involves only a change in the control
ow
of the program, but does not itself cause any system calls
to be invoked, and thus no syscall-based IDS can detect
the buer overrun itself. In general, attacks can be divided
into a penetration phase (when the attacker takes control of
the application and injects remote code) and an exploitation
phase (when the attacker exploits his control of the application
to bring harm to the rest of the system by executing
the recently-injected foreign code), and most anomaly detection
systems are based on detecting the harmful effects
of the exploitation, not on detecting the penetration itself.
Consequently, it seems reasonable to believe that many applications
may contain vulnerabilities that allow attackers
to secretly gain control of the application.
With that background, the remainder of this section describes
six simple ideas for avoiding detection, in order of
increasing sophistication and power. We presume that the
attacker has a malicious sequence of actions that will cause
harm and that he wants to have executed; his goal is to
execute this sequence without being detected.
Slip under the radar. Our first evasion technique is based
on trying to avoid causing any change whatsoever in the observable
behavior of the application. A simple observation
is that system call-based IDS's can only detect attacks by
their signature in the system call trace of the application.
If it is possible to cause harm to the system without issuing
any system calls, then the IDS has no hope of detecting
such an attack. For instance, on some old versions of Solaris
it was possible to become root simply by triggering the
divide-by-zero trap handler, and this does not involve any
system calls. However, such OS vulnerabilities appear to
be exceptionally rare. As a more general instance of this
attack class, an attacker can usually cause the application
to compute incorrect results. For instance, a compromised
web browser might invisibly elide all headlines mentioning
the Democratic party whenever the user visits any news site,
or a compromised mailer might silently change the word "is"
to "isn't" in every third email from the company's CEO.
There seems to be little that an IDS can do about this
class of attacks. Fortunately, the harm that an attacker can
do to the rest of the system without executing any system
calls appears to be limited.
Be patient. A second technique for evading detection is
simply to be patient: wait passively for a time when the
malicious sequence will be accepted by the IDS as normal
behavior, and then pause the application and insert the malicious
sequence. Of course, the attacker can readily recognize
when the sequence will be allowed simply by simulating
the behavior of the IDS. Simulating the IDS should be easy,
since by our discussion above there are no secrets in the IDS
algorithm.
Moreover, it is straightforward for the attacker to retain
control while allowing the application to execute its usual sequence
of system calls. For instance, the attacker who takes
control of an application could embed a Trojan horse by replacing
all the library functions in the application's address
space by modified code. The replacement implementation
might behave just like the pre-existing library code, except
that before returning to its caller each function could check
whether the time is right to begin executing the malicious
sequence. After this modification is completed, the attacker
could return the flow of program control to the application,
confident in the knowledge that he will retain the power to
regain control at any time. There are many ways to accomplish
this sort of parasitic infection, and there seems to be
no defense against such an invasion.
There is one substantial constraint on the attacker, though.
This attack assumes that there will come a time when the
malicious sequence will be accepted; if not, the attacker
gains nothing. Thus, the power of this attack is limited
by the precision of the database of normal behavior.
Another limitation on the attacker is that, after the malicious
sequence has been executed, resuming execution of the
application may well lead to an abnormal system call trace.
In such a case, only two choices immediately present themselves:
we could allow the application to continue executing
(thereby allowing the IDS to detect the attack, albeit after
the harm has already been done), or we could freeze the application
permanently (which is likely to be very noticeable
and thus might attract attention). A slightly better strategy
may be to cause the application to crash in some way that
makes the crash appear to have come from an innocuous
program bug rather than from a security violation. Since
in practice many programs are rather buggy, system administrators
are used to seeing coredumps or the Blue Screen
of Death from time to time, and they may well ignore the
crash. However, this strategy is not without risk for the
attacker.
In short, a patient attacker is probably somewhat more
dangerous than a naive, impatient attacker, but the attacker
still has to get lucky to cause any harm, so in some scenarios
the risk might be acceptable to defenders.
Be patient, but make your own luck. One way the attacker
can improve upon passive patience is by loading the
dice. There are typically many possible paths of execution
through an application, each of which may lead to a slightly
different system call trace, and this suggests an attack strategy:
the attacker can look for the most favorable path of execution
and nudge the application into following that path.
As an optimization, rather than embedding a Trojan horse
and then allowing the application to execute normally, the
attacker can discard the application entirely and simulate
its presence. For example, the attacker can identify the
most favorable path of execution, then synthetically construct
the sequence of system calls that would be executed
by this path and issue them directly, inserting his malicious
sequence at the appropriate point. The analysis effort can
all be pre-computed, and thus a stealthy attack might simply
contain a sequence of hard-coded system calls that simulate
the presence of the application for a while and then
eventually execute the malicious sequence.
In fact, we can see there is no reason for the attacker to
restrict himself to the feasible execution paths of the application.
The attacker can even consider system call traces
that could not possibly be output by any execution of the
application, so long as those traces will be accepted as "normal"
by the IDS. In other words, the attacker can examine
the set of system call traces that won't trigger any alarms
and look for one such trace where the malicious sequence
can be safely inserted. Then, once such a path is identified,
the attacker can simulate its execution as above and proceed
to evade the IDS.
In essence, we are mimicking the behavior of the application,
but with a malicious twist. To continue the biological
analogy, a successful mimic will be recognized as "self" by
the immune system and will not cause any alarms. For this
reason, we dub this the mimicry attack [25]. This style of
attack is very powerful, but it requires a careful examination
of the IDS, and the attacker also has to somehow identify
favorable traces. We will study this topic in greater detail
in Section 4.
Replace system call parameters. Another observation is
that most schemes completely ignore the arguments to the
system call. For instance, an innocuous system call
open("/lib/libc.so", O_RDONLY)
looks indistinguishable (to the IDS) from the malicious call
open("/etc/shadow", O_RDWR).
The evasion technique, then, is obvious. If we want to
write to the shadow password file, there is no need to wait
for the application to open the shadow password file during
normal execution. Instead, we may simply wait for the application
to open any file whatsoever and then substitute our
parameters ("/etc/shadow", O_RDWR) for the application's.
This is, in effect, another form of mimicry attack.
As far as we can tell, almost all host-based intrusion detection
systems completely ignore system call parameters and
return values. The only exception we are aware of is Wagner
and Dean's static IDS [25], and they look only at a small
class of system call parameters, so parameter-replacement
attacks may be very problematic for their scheme as well.
Insert no-ops. Another observation is that if there is no
convenient way to insert the given malicious sequence into
the application's system call stream, we can often vary the
malicious sequence slightly by inserting "no-ops" into it. In
this context, the term "no-op" indicates a system call with
no effect, or whose effect is irrelevant to the goals of the attacker.
Opening a non-existent file, opening a file and then
immediately closing it, reading 0 bytes from an open file descriptor,
and calling getpid() and discarding the result are
all examples of likely no-ops. Note that even if the original
malicious sequence will never be accepted by the IDS, some
modified sequence with appropriate no-ops embedded might
well be accepted without triggering alarms.
We show later in the paper (see Section 4.2 and Table 1)
that, with only one or two exceptions, nearly every system
call can be used as a "no-op." This gives the attacker great
power, since he can pad out his desired malicious sequence
with other system calls chosen freely to maximize the
chances of avoiding detection. One might expect intuitively
that every system call that can be found in the normal
database may become reachable with a mimicry attack by
inserting appropriate no-ops; we develop partial evidence to
support this intuition in Section 6.
Generate equivalent attacks. More generally, any way
of generating variations on the malicious sequence without
changing its effect gives the attacker an extra degree
of freedom in trying to evade detection. One can imagine
many ways to systematically create equivalent variations on
a given malicious sequence. For instance, any call to read()
on an open file descriptor can typically be replaced by a call
to mmap() followed by a memory access. As another example,
in many cases the system calls in the malicious sequence
can be re-ordered. An attacker can try many such possibilities
to see if any of them can be inserted into a compromised
application without detection, and this entire computation
can be done offline in a single precomputation.
Also, a few system calls give the attacker special power,
if they can be executed without detection as part of the
exploit sequence. For instance, most IDS's handle fork()
by cloning the IDS and monitoring both the child and parent
application process independently. Hence, if an attacker
can reach the fork() system call and can split the exploit
sequence into two concurrent chunks (e.g., overwriting the
password file and placing a backdoor in the ls program),
then the attacker can call fork() and then execute the first
chunk in the parent and the second chunk in the child. As
another example, the ability to execute the execve() system
call gives the attacker the power to run any program
whatsoever on the system.
Of course, the above ideas for evasion can be combined
freely. This makes the situation appear rather grim for
the defenders: The attacker has many options, and though
checking all these options may require a lot of effort on the
attacker's part, it also seems unclear whether the defenders
can evaluate in advance whether any of these might work
against a given IDS. We shall address this issue next.
4. A THEORETICAL FRAMEWORK
In this section, we develop a systematic framework for methodically
identifying potential mimicry attacks. We start
with a given malicious sequence of system calls, and a model
of the intrusion detection system. The goal is to identify
whether there is any trace of system calls that is accepted
by the IDS (without triggering any alarms) and yet contains
the malicious sequence, or some equivalent variant on it.
This can be formalized as follows. Let Σ denote the set of
system calls, and Σ* the set of sequences over the alphabet
Σ. We say that a system call trace T ∈ Σ* is accepted (or allowed)
by the IDS if executing the sequence of system calls in T
does not trigger any alarms. Let A ⊆ Σ* denote the set of
system call traces allowed by the IDS, i.e.,
A = {T ∈ Σ* : T is accepted by the IDS}.
Also, let M ⊆ Σ* denote the set of traces that achieve the
attacker's goals, e.g.,
M = {T ∈ Σ* : T is an equivalent variant
on the given malicious sequence}.
Now we can succinctly state the condition for the existence
of mimicry attacks. The set A ∩ M is exactly the set of
traces that permit the attacker to achieve his goals without
detection, and thus mimicry attacks are possible if and only
if A ∩ M ≠ ∅. If the intersection is non-empty, then any
of its elements gives a stealthy exploit sequence that can be
used to achieve the intruder's goals while reliably evading
detection.
The main idea of the proposed analytic method is to frame
this problem in terms of formal language theory. In this paper,
A is a regular language. This is fairly natural [20], as
finite-state IDS's can always be described as finite-state automata
and thus accept a regular language of syscall traces.
Moreover, we insist that M also be a regular language. This
requires a bit more justification (see Section 4.2 below), but
hopefully it does not sound too unreasonable at this point.
It is easy to generalize this framework still further², but this
formulation has been more than adequate for all the host-based
IDS's considered in our experiments.
With this formulation, testing for mimicry attacks can be
done automatically and in polynomial time. It is a standard
theorem of language theory that if L, L′ are two regular
languages, then so is L ∩ L′, and L ∩ L′ can be computed
effectively [11, §3.2]. Also, given a regular language L′′, we
can efficiently test whether L′′ = ∅, and if L′′ is non-empty,
we can quickly find a member of L′′ [11, §3.3]. From this,
it follows that if we can compute descriptions of A and M,
we can efficiently test for the existence of mimicry attacks.
In the remainder of this section, we will describe first how
to compute A and then how to compute M.
4.1 Modelling the IDS
In Forrest's IDS, to predict whether the next system call
will be allowed, we only need to know the previous five system
calls. This is a consequence of the fact that Forrest's
IDS works by looking at all subtraces of six consecutive system
calls, checking that each observed subtrace is in the
database of allowable subtraces.
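The sliding-window check described above can be sketched in a few lines. This is a hypothetical illustration, not Forrest et al.'s actual code; the syscall names and the toy database below are invented for the example.

```python
# Sketch of the core check in a window-based anomaly detector: a trace is
# flagged only if some length-6 subtrace is absent from the normal database.
WINDOW = 6

def mismatches(trace, normal_db):
    """Count length-WINDOW subtraces of `trace` missing from `normal_db`."""
    return sum(
        1
        for i in range(len(trace) - WINDOW + 1)
        if tuple(trace[i:i + WINDOW]) not in normal_db
    )

# Toy normal database learned from a single benign trace.
benign = ["open", "read", "read", "write", "close", "exit"]
db = {tuple(benign[i:i + WINDOW]) for i in range(len(benign) - WINDOW + 1)}

assert mismatches(benign, db) == 0
assert mismatches(["open", "read", "read", "read", "write", "close"], db) == 1
```

A real deployment also applies a threshold (pH's locality frame count) before raising an alarm; here any nonzero mismatch count would be flagged.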
Consequently, in this case we can model the IDS as a
finite-state automaton with statespace given by five-tuples
of system calls and with a transition for each allowable system
call action. More formally, the statespace is Σ⁵
(recall that Σ denotes the set of system calls), and we have
a transition (s₁, …, s₅) →s₆ (s₂, …, s₆)
for each subtrace s₁ ⋯ s₆ found in the IDS's database
of allowable subtraces. The automaton can be represented
efficiently in the same way that the normal database is represented.
Next we need an initial state and a set of final (accepting)
states, and this will require patching things up a bit. We
introduce a new absorbing state Alarm with a self-transition
Alarm →s Alarm on each system call s ∈ Σ, and we ensure
that every trace that sets off an intrusion alarm ends up in
the state Alarm by adding a transition (s₁, …, s₅)
→s₆ Alarm for each subtrace s₁ ⋯ s₆ that is not found in the
IDS's database of allowable subtraces. Then the final (accepting)
states are all the non-alarm states, excluding only
the special state Alarm.
The initial state of the automaton represents the state the
application is in when the application is first penetrated.
This is heavily dependent on the application and the attack
vector used, and presumably each different vulnerability will
lead to a different initial state. For instance, if there is a
buffer overrun that allows the attacker to gain control just
after the application has executed five consecutive read()
system calls, then the initial state of the automaton should
be (read, read, read, read, read).
² For instance, we could allow A or M (but not both) to
be context-free languages without doing any violence to the
polynomial-time nature of our analysis.
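A minimal executable sketch of this automaton A, under toy assumptions: the database holds a single allowable 6-subtrace, and the syscall names are placeholders rather than real traces.

```python
# Sketch of the automaton A: states are 5-tuples of system calls plus an
# absorbing Alarm state; a trace is accepted iff it never falls into Alarm.
ALARM = "Alarm"

def step(state, call, normal_db):
    """One transition: slide the 5-call window, or absorb into Alarm."""
    if state == ALARM:
        return ALARM
    return state[1:] + (call,) if state + (call,) in normal_db else ALARM

def accepts(initial, trace, normal_db):
    """True iff the trace triggers no alarm starting from `initial`."""
    state = initial
    for call in trace:
        state = step(state, call, normal_db)
    return state != ALARM

db = {("a", "b", "c", "d", "e", "f")}   # one allowable 6-subtrace (toy)
init = ("a", "b", "c", "d", "e")        # window at penetration time (toy)
assert accepts(init, ["f"], db)         # known subtrace: allowed
assert not accepts(init, ["g"], db)     # unknown subtrace: Alarm
```

Representing states as the last five calls, as here, mirrors the observation that the database itself already encodes the transition relation.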
Extensions. In practice, one may want to refine the model
further to account for additional features of the IDS. For
instance, the locality frame count, which is slightly more
forgiving of occasional mismatched subtraces and only triggers
alarms if sufficiently many mismatches are seen, can
be handled within a finite-state model. For details, see Appendix
A.
4.2 Modelling the malicious sequence
Next, we consider how to express the desired malicious
sequence within our framework, and in particular, how to
generate many equivalent variations on it. The ability to
generate equivalent variations is critical to the success of
our attack, and rests on knowledge of equivalences induced
by the operating system semantics. In the following, let
M = s₁ s₂ ⋯ sₙ denote the malicious sequence we
want to sneak by the IDS.
Adding no-ops. We noted before that one simple way to
generate equivalent variants is by freely inserting "no-ops"
into the malicious sequence M. A "no-op" is a system call
that has no effect, or more generally, one that has no effect
on the success of the malicious sequence M. For instance, we
can call getpid() and ignore the return value, or call brk()
and ignore the newly allocated returned memory, and so on.
A useful trick for finding no-ops is that we can invoke
a system call with an invalid argument. When the system
call fails, no action will have been taken, yet to the IDS it
will appear that this system call was executed. To give a
few examples, we can open() a non-existent pathname, or
we can call mkdir() with an invalid pointer (say, a NULL
pointer, or one that will cause an access violation), or we can
call dup() with an invalid file descriptor. Every IDS known
to the authors ignores the return value from system calls,
and this allows the intruder to nullify the effect of a system
call while fooling the IDS into thinking that the system call
succeeded.
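The invalid-argument trick is easy to demonstrate from any POSIX system: the dup() example from the text, issued via Python's os module, really reaches the kernel (so a syscall-level monitor observes it) but fails with EBADF and changes nothing.

```python
# Demonstration of nullifying a system call with an invalid argument:
# dup(-1) is issued to the kernel, fails with EBADF, and has no effect.
import errno
import os

def noop_dup():
    """Issue a dup() system call that is guaranteed to have no effect."""
    try:
        os.dup(-1)                 # invalid file descriptor
    except OSError as e:
        return e.errno             # EBADF: kernel rejected it, no new fd
    return None                    # not expected on POSIX systems

assert noop_dup() == errno.EBADF
```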
The conclusion from our analysis is that almost every system
call can be nullified in this way. Any side-effect-free
system call is already a no-op. Any system call that takes a
pointer, memory address, file descriptor, signal number, pid,
uid, or gid can be nullified by passing invalid arguments.
One notable exception is exit(), which kills the process no
matter what its argument is. See Table 1 for a list of all the system
calls we have found that might cause difficulties for the
attacker; all the rest may be freely used to generate equivalent
variants on the malicious sequence³. The surprise is
not how hard it is to find nullifiable system calls, but rather
how easy it is to find them: with only a few exceptions,
nearly every system call is readily nullifiable. This gives the
attacker extraordinary freedom to vary the malicious exploit
sequence.
We can characterize the equivalent sequences obtained
this way with a simple regular expression. Let N denote
the set of nullifiable system calls. Consider the regular
³ It is certainly possible that we might have overlooked
some other problematic system calls, particularly on systems
other than Linux. However, we have not yet encountered
any problematic system call not found in Table 1.
System call   Nullifiable?  Useful to an attacker?  Comments
exit()        No            No        Kills the process, which will cause problems for the intruder.
pause()       No            Unlikely  Puts the process to sleep, which would cause a problem for the intruder. Attacker might be able to cause the process to receive a signal and wake up again (e.g., by sending SIGURG with TCP out-of-band data), but this is application-dependent.
vhangup()     No            Unlikely  Hangs up the current terminal, which might be problematic for the intruder. But it is very rarely used in applications, hence shouldn't cause a problem.
fork()        No            Usually   Creates a new copy of the process. Since the IDS will probably clone itself to monitor each separately, this is unlikely to cause any problems for the attacker. (Similar comments apply to vfork() and to clone() on Linux.)
alarm()       No            Usually   Calling alarm(0) sets no new alarms, and will likely be safe. It does have the side-effect of cancelling any previous alarm, which might occasionally interfere with normal application operation, but this should be rare.
setsid()      No            Usually   Creates a new session for this process, if it is not already a session leader. Seems unlikely to interfere with typical attack goals in practice.
…             Yes           Yes       Nullify by passing an invalid socket type parameter.
…             Yes           Yes       Nullify by passing a NULL pointer parameter.
…             Yes           Yes       Nullify by passing a NULL filename parameter.
…             Yes           Yes       …
Table 1: A few system calls and whether they can be used to build equivalent variants of a given malicious
sequence. The second column indicates whether the system call can be reliably turned into a "no-op" (i.e.,
nullified), and the third column indicates whether an attacker can intersperse this system call freely in a
given malicious sequence to obtain equivalent variants. For instance, exit() is not nullifiable and kills the
process, hence it is not usable for generating equivalent variants of a malicious sequence. This table shows
all the system calls we know of that an attacker might not be able to nullify; the remaining system calls, not
shown here, are easily nullified.
expression defined by
N* s₁ N* s₂ N* ⋯ N* sₙ N*.
This matches the set of sequences obtained from M by inserting
no-ops, and any sequence matching this regular expression
will have the same effect as M and hence will be
interchangeable with M. Moreover, this regular expression
may be expressed as a finite-state automaton by standard
methods [11, §2.8], and in this way we obtain a representation
of the set M defined earlier, as desired.
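The construction N* s₁ N* ⋯ N* sₙ N* can be sketched directly with an ordinary regex engine, treating a trace as a space-delimited string of syscall names. The names below (chroot, execve, getpid, brk) are illustrative choices, not drawn from any real normal database.

```python
# Sketch of building the regular expression N* s1 N* s2 ... N* sn N*,
# where N is the set of nullifiable ("no-op") system calls. Traces are
# encoded as strings with a trailing space after every call name.
import re

def mimicry_pattern(malicious, nullifiable):
    """Compile N* s1 N* ... N* sn N* under the trailing-space convention."""
    noop = "(?:(?:%s) )*" % "|".join(map(re.escape, nullifiable))
    body = noop + noop.join(re.escape(s) + " " for s in malicious) + noop
    return re.compile("^" + body + "$")

pat = mimicry_pattern(["chroot", "execve"], ["getpid", "brk"])
assert pat.match("chroot execve ")                   # no padding
assert pat.match("getpid chroot brk brk execve ")    # no-ops interspersed
assert not pat.match("chroot open execve ")          # open is not a no-op
```

Converting this regex to an NFA by the standard construction yields the automaton M used in the product computation.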
Extensions. If necessary, we could introduce further variability
into the set of variants considered by considering
equivalent system calls. For instance, if a read() system
call appears in the malicious sequence M, we could easily
replace the read() with a mmap() system call if this helps
avoid detection. As another example, we can often collapse
multiple consecutive read() calls into a single read() call,
or multiple chdir() system calls into a single chdir(), and
so on.
All of these equivalences can also be modelled within our
finite-state framework. Assume we have a relation R on
Σ* obeying the following condition: if R(X, X′) holds, we
may assume that the sequence
X can be equivalently replaced by X′ without altering the
resulting effect on the system. Suppose moreover that this
relation can be expressed by a finite-state transducer, e.g., a
Mealy or Moore machine; equivalently, assume that R forms
a rational transduction. Define
M = {T ∈ Σ* : R(M, T) holds}.
By a standard result in language theory [11, §11.2], we find
that M is a regular language, and moreover we can easily
compute a representation of M as a finite-state automaton
given a finite-state representation of R.
Note also that this generalizes the strategy of inserting
no-ops. We can define a relation R_N by: R_N(X, X′) holds if X′ is
obtained from X by inserting no-ops from the set N, and
it is not hard to see that the relation R_N can be given by
a finite-state transduction. Hence the idea of introducing
no-ops can be seen as a special case of the general theory
based on rational transductions.
In summary, we see that the framework is fairly general,
and we can expect to model both the IDS and the set of
malicious sequences as finite-state automata.
5. IMPLEMENTATION
We implemented these ideas as follows. First, we trained
the IDS and programmatically built the automaton A from
the resulting database of normal sequences of system calls.
The automaton M is formed as described above.
The next step is to form the composition of A and M
by taking the usual product construction. Our implementation
tests for a non-empty intersection by constructing the
product automaton A × M explicitly in memory [11, §3.2]
and performing a depth-first search from the initial state to
see if any accepting state is reachable [11, §3.3]; if yes, then
we've found a stealthy malicious sequence, and if not, the
mimicry attack failed. In essence, this is a simple way of
model-checking the system A against the property M.
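The product-and-search step can be sketched generically. This is a hedged illustration, not the authors' implementation: each automaton is given as an (initial state, step function, accepting predicate) triple, and the two toy automata below stand in for a real IDS model and malicious-variant set.

```python
# Emptiness test on the product A x M: depth-first search over product
# states; any reachable state accepting in both automata yields a witness
# trace, i.e., a stealthy exploit sequence.
def find_stealthy_trace(a, m, alphabet):
    """a, m = (initial, step(state, sym) -> state or None, accepting?)."""
    (ai, astep, aok), (mi, mstep, mok) = a, m
    stack, seen = [(ai, mi, [])], {(ai, mi)}
    while stack:
        sa, sm, trace = stack.pop()
        if aok(sa) and mok(sm):
            return trace                        # stealthy trace found
        for sym in alphabet:
            na, nm = astep(sa, sym), mstep(sm, sym)
            if na is not None and nm is not None and (na, nm) not in seen:
                seen.add((na, nm))
                stack.append((na, nm, trace + [sym]))
    return None                                 # A and M are disjoint

def a_step(s, c):
    if s == 1 and c == "m":
        return None          # toy IDS: two consecutive "m" calls alarm
    return 1 if c == "m" else 0

def m_step(s, c):
    return 1 if (s == 1 or c == "m") else 0     # toy goal: issue one "m"

A = (0, a_step, lambda s: True)
M = (0, m_step, lambda s: s == 1)
res = find_stealthy_trace(A, M, ["x", "m"])
assert res is not None and res.count("m") >= 1
```

Because `seen` is keyed on product states, the search visits at most |A| × |M| states, matching the polynomial bound claimed above.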
We note that there are many ways to optimize this computation
by using ideas from the model-checking literature.
For instance, rather than explicitly computing the entire
product automaton in advance and storing it in memory, to
reduce space we could perform the depth-first search generating
states lazily on the fly. Also, we could use hashing
to keep a bit-vector of previously visited states to further
reduce memory consumption [9, 10]. If this is not
enough, we could even use techniques from symbolic model-checking
to represent the automata A and M using BDD's
and then compute their product symbolically with standard
algorithms [16].
However, we have found that these fancy optimizations
seem unnecessary in practice. The simple approach seems
adequate for the cases we've looked at: in our experiments,
our algorithm runs in less than a second. This is not surprising
when one considers that, in our usage scenarios, the
automaton A typically has a few thousand states and M
contains a half dozen or so states, hence their composition
contains only a few tens of thousands of states and is easy
to compute with.
6. EMPIRICAL EXPERIENCE
In this section, we report on experimental evidence for
the power of mimicry attacks. We investigated a number
of host-based anomaly detection systems. Although many
papers have been written proposing various techniques, we
found only one working implementation with source code
that we could download and use in our tests: the pH (for
process homeostasis) system [21]. pH is a derivative of Forrest
et al.'s early system, with the twist that pH responds
to attacks by slowing down the application in addition to
raising alarms for the system administrator. For each system
call, pH delays the response by an amount exponential in
the locality frame count, which counts the number of
mismatched length-6 subtraces in the
last 128 system calls. We used pH version 0.17 running on
a fresh Linux Redhat 5.0 installation with a version 2.2.19
kernel⁴. Our test host was disconnected from the network
for the duration of our experiments to avoid the possibility
of attacks from external sources corrupting the experiment.
We also selected an off-the-shelf exploit to see whether
it could be made stealthy using our techniques. We chose
one more or less at random, selecting an attack script called
autowux.c that exploits the "site exec" vulnerability in the
wuftpd FTP server. The autowux attack script exploits a
format string vulnerability, and it then calls setreuid(0,0),
escapes from any chroot protection, and execs /bin/sh using
the execve() system call. It turns out that this is a fairly
typical payload: the same shellcode can be found in many
other attack scripts that exploit other, unrelated vulnerabilities⁵.
We conjecture that the authors of the autowux script
just copied this shellcode from some previous source, rather
than developing new shellcode. Our version of Linux Redhat
5.0 runs wuftpd version wu-2.4.2-academ[BETA-15](1),
and we trained pH by running wuftpd on hundreds of large
⁴ Since this work was done, version 0.18 of pH has been released.
The new version uses a longer window of length 9,
which might improve security. We did not test whether this
change improves the resistance of pH to mimicry attacks.
⁵ It is interesting and instructive to notice that such a
widespread attack payload includes provisions by default
to always attempt escaping from a chroot jail. The lesson
is that, if a weak protection measure becomes widespread
enough, eventually attackers will routinely incorporate
countermeasures into all their attacks. The implications for
intrusion detection systems that are susceptible to mimicry
attacks are troubling.
file downloads over a period of two days. We verified that
pH detects the unmodified exploit⁶.
Next, we attempted to modify the exploit to evade detection.
We parsed pH's database of learned length-6 subtraces
and built an automaton A recognizing exactly those
system call traces that never cause any mismatches. We did
not bother to refine this representation to model the fact
that the intruder can safely cause a few occasional mismatches
without causing problems (see Appendix A), as such a refinement
turned out to be unnecessary. Also, we examined
the point in time where autowux mounts its buffer overflow
attack against the wuftpd server. We found the window
of the last five system calls executed by wuftpd
when the exploit first gains control; this determines the
initial state of A.
In addition, we reverse engineered the exploit script and
learned that it performs a sequence of 15 system
calls, ending in
chdir(".") (repeated nine times); chroot("/"); execve("/bin/sh").
We noticed that the nine consecutive chdir(".") calls can,
in this case, be collapsed into a single chdir(".") call.
As always, one can also freely introduce no-ops. With these
two simple observations, we built an automaton M recognizing
the corresponding regular expression.
Our program performs a depth-first search in the product
automaton A × M and informs us that A ∩ M = ∅:
there is no stealthy trace matching the above regular expression.
Next, we modified the attack sequence slightly by hand
to repair this deficiency. After interactively invoking our
tool a few times, we discovered the reason why the original
pattern was infeasible: there is no path through the normal
database reaching dup2(), mkdir(), or execve(), hence no
attack that uses any of these system calls can completely
avoid mismatches. However, we note that these three system
calls can be readily dispensed with. There is no need to
create a new directory; an existing directory will do just as
well in escaping from the chroot jail, and as a side benefit
will leave fewer traces. Also, the dup2() and execve() are
needed only to spawn an interactive shell, yet an attacker
can still cause harm by simply hard-coding in the exploit
shellcode the actions he wants to take without ever spawning
a shell. We hypothesized that a typical harmful action an
attacker might want to perform is to add a backdoor root
account into the password file, hence we proposed that an
attacker might be just as happy to perform the following
⁶ We took care to ensure that the IDS did not learn the exploit
code as "normal" in the process. All of our subsequent
experiments were on a virgin database, trained from scratch
using the same procedure and completely untouched by any
attack.
read() stat() close() close() munmap() brk() fcntl()
setregid() open() fcntl() close() brk() time() getpid()
sigaction() socketcall()
Figure 1: A stealthy attack sequence found by our
tool. This exploit sequence, intended to be executed
after taking control of wuftpd through the "site
exec" format string vulnerability, is a modification
of a pre-existing sequence found in the autowux exploit.
We have underlined the system calls from the
original attack sequence. Our tool takes the underlined
system calls as input, and outputs the entire
sequence. The non-underlined system calls are intended
to be nullified: they play the role of "semantic
no-ops," and are present only to ensure that the
pH IDS does not detect our attack. The effect of the
resulting stealthy exploit is to escape from a chroot
jail and add a backdoor root account to the system
password file.
variant on the original exploit sequence:
open("/etc/passwd", O_APPEND|O_WRONLY);
close(fd); exit(0)
where fd represents the file descriptor returned by the open()
call (this value can be readily predicted). The modified attack
sequence becomes root, escapes from the chroot jail,
and appends a backdoor root account to the password file.
To check whether this modified attack sequence could be
executed stealthily, we built an automaton M recognizing
the corresponding regular expression.
We found a sequence that raises no alarms and matches this
pattern. See Fig. 1 for the stealthy sequence. Finding this
stealthy sequence took us only a few hours of interactive
exploration with our search program, once the software was
implemented.
We did not build a modified exploit script to implement
this attack. Instead, to independently verify the correctness
of the stealthy sequence, we separately ran this sequence
through stide⁷ and confirmed that it would be accepted
with zero mismatches by the database generated earlier.
Note that we were able to transform the original attack sequence
into a modified variant that would not trigger even
a single mismatch but that would have a similarly harmful
effect. In other words, there was no need to take advantage
of the fact that pH allows a few occasional mismatches
without raising alarms: our attack would be successful
no matter what setting is chosen for the pH locality frame
count threshold. This makes our successful results all the
more meaningful.
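The verification step can be pictured with a toy reimplementation of stide's sliding-window test, written from its published description rather than taken from its code; the window length and the training trace here are invented:

```python
def train_windows(traces, k):
    """Collect every length-k window occurring in the normal traces;
    this set of windows is stide's model of normal behavior."""
    db = set()
    for trace in traces:
        for i in range(len(trace) - k + 1):
            db.add(tuple(trace[i:i + k]))
    return db

def mismatches(seq, db, k):
    """Count the length-k windows of `seq` absent from the normal database.
    A stealthy sequence is one for which this count is zero."""
    return sum(tuple(seq[i:i + k]) not in db
               for i in range(len(seq) - k + 1))

# Invented example with a short window for readability.
normal_trace = ["open", "read", "write", "close",
                "open", "read", "write", "close"]
db = train_windows([normal_trace], k=3)
```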
In summary, our experiments indicate that sophisticated
attackers can evade the pH IDS. We were fairly surprised at
the success of the mimicry attack at converting the autowux
script into one that would avoid detection. At first glance,
we were worried that we would not be able to do much with
this attack script, as its payload contains a fairly unusual-looking
system call sequence. Nonetheless, it seems that the
database of normal system call sequences is rich enough to
allow the attacker considerable power.
Shortcomings. We are aware of several significant limitations
in our experimental methodology. We have not compiled
the stealthy sequence in Fig. 1 into a modified exploit
script or tried running such a modified script against a machine
protected by pH. Moreover, we assumed that we could
modify the autowux exploit sequence so long as this does not
affect the effect of a successful attack; however, our example
would have been more convincing if the attack did not
require modifications to the original exploit sequence.
Also, we tested only a single exploit script (autowux), a
single vulnerable application (wuftpd), a single operating
system (Redhat Linux), a single system configuration (the
default Redhat 5.0 installation), and a single intrusion detection
system (pH). This is enough to establish the presence
of a risk, but it does not provide enough data to assess
the magnitude of the risk or to evaluate how differences in
operating systems or configurations might affect the risk.
We have not tried to assess how practical the attack might
be. We did not study how much effort or knowledge is required
from an attacker to mount this sort of attack. We
did not empirically test how effectively one can predict the
configuration and IDS normal database found on the target
host, and we did not measure whether database diversity is
a significant barrier to attack. We did not estimate what
percentage of vulnerabilities would both give the attacker
sufficient control over the application to mount a mimicry
attack and permit injection of enough foreign code to execute
the entire stealthy sequence. Also, attacks often get
better over time, and so it may be too soon to draw any
definite conclusions. Because of all these unknown factors,
more thorough study will be needed before we can confidently
evaluate the level of risk associated with mimicry
attacks in practice.
7 Because pH uses lookahead pairs, stide is more restrictive
than pH. However, the results of the test are still valid:
since our modified sequence is accepted by stide, we can
expect that it will be accepted by pH, too. If anything,
using stide makes our experiment all the more meaningful,
as it indicates that stide-based IDS's will also be vulnerable
to mimicry attacks.
7. RELATED WORK
There has been some other recent research into the security
of host-based anomaly detection systems against so-
phisticated, adaptive adversaries.
Wagner and Dean briefly sketched the idea of mimicry
attacks in earlier work [25, §6]. Giffin, Jha, and Miller elaborated
on this by outlining a metric for susceptibility to evasion
attacks based on attack automata [6, §4.5]. Somayaji
suggested that it may be possible in principle, but difficult
in practice, to evade the pH IDS, giving a brief example to
justify this claim [22, §7.5]. None of these papers developed
these ideas in depth or examined the implications for the
field, but they set the stage for future research.
More recently, and independently, Tan, Killourhy, and
Maxion provided a much more thorough treatment of the
issue [23]. Their research shows how attackers can render
host-based IDS's blind to the presence of their attacks, and
they presented compelling experimental results to illustrate
the risk. In follow-up work, Tan, McHugh, and Killourhy
refined the technique and gave further experimental confirmation
of the risk from such attacks [24]. Their methods
are different from those given in this paper, but their results
are in agreement with ours.
8. DISCUSSION
Several lessons suggest themselves after these experiments.
First and foremost, where possible, intrusion detection systems
should be designed to resist mimicry attacks and other
stealthy behavior from sophisticated attackers. Our attacks
also give some specific guidance to IDS designers. It might
help for IDS's to observe not only what system calls are attempted
but also which ones fail and what error codes are
returned. It might be a good idea to monitor and predict
not only which system calls are executed but also what arguments
are passed; otherwise, the attacker might have too
much leeway. Moreover, the database of normal behavior
should be as minimal and precise as possible, to reduce the
degrees of freedom afforded to an attacker.
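To make the argument-monitoring suggestion concrete, one possible shape of such a model extends the alphabet from bare call names to (call, abstracted argument) pairs; the directory-level path abstraction and the toy profile below are our own illustration, not part of any existing IDS:

```python
import os.path

# Toy "normal" profile over (syscall, abstracted argument) pairs instead of
# bare syscall names; the directory-level abstraction is our own choice.
NORMAL = {("open", "/var/log"), ("write", None), ("close", None)}

def abstract_call(name, args):
    """Reduce a call and its arguments to a model event: keep the call name
    and the directory of the first string (path) argument, if any."""
    path = next((a for a in args if isinstance(a, str)), None)
    return (name, os.path.dirname(path) if path is not None else None)

def is_anomalous(name, args):
    """An event is flagged if its abstraction was never seen in training."""
    return abstract_call(name, args) not in NORMAL
```

Under such a model, the open("/etc/passwd", ...) step of the stealthy sequence would no longer blend in, because the argument abstraction differs from anything seen in training.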
Second, we recommend that all future published work
proposing new IDS designs include a detailed analysis of the
proposal's security against evasion attacks. Even if this type
of vulnerability cannot be completely countered through
clever design, it seems worthwhile to evaluate carefully the
risks.
Finally, we encourage IDS designers to publicly release a
full implementation of their designs, to enable independent
security analysis. There were several proposed intrusion detection
techniques we would have liked to examine in detail
for this work, but we were unable to do so because we did
not have access to a reference implementation.
9. CONCLUSIONS
We have shown how attackers may be able to evade detection
in host-based anomaly intrusion detection systems,
and we have presented initial evidence that some IDS's may
be vulnerable. It is not clear how serious a threat mimicry
attacks will be in practice. Nonetheless, the lesson is that
it is not enough to merely protect against today's attacks:
one must also defend against tomorrow's attacks, keeping in
mind that tomorrow's attackers might adapt in response to
the protection measures we deploy today. We suggest that
more attention could be paid in the intrusion detection community
to security against adaptive attackers, and we hope
that this will stimulate further research in this area.
10. ACKNOWLEDGEMENTS
We thank Umesh Shankar, Anil Somayaji, and the anonymous
reviewers for many insightful comments on an earlier
draft of this paper. Also, we are indebted to Somayaji for
making the pH source code publicly available, without which
this research would not have been possible.
11. REFERENCES
--R
Design and Validation of Computer Protocols
Introduction to Automata Theory
--TR
Design and validation of computer protocols
The Model Checker SPIN
Temporal sequence learning and data reduction for anomaly detection
Bro
Enforceable security policies
Symbolic Model Checking
Introduction To Automata Theory, Languages, And Computation
Intrusion Detection Using Variable-Length Audit Trail Patterns
Using Finite Automata to Mine Execution Data for Intrusion Detection
Detecting Manipulated Remote Call Streams
Learning Program Behavior Profiles for Intrusion Detection
Hiding Intrusions
Self-Nonself Discrimination in a Computer
A Sense of Self for Unix Processes
Intrusion Detection via Static Analysis
Operating system stability and security through process homeostasis
--CTR
Wun-Hwa Chen , Sheng-Hsun Hsu , Hwang-Pin Shen, Application of SVM and ANN for intrusion detection, Computers and Operations Research, v.32 n.10, p.2617-2634, October 2005
Hilmi Güneş Kayacık , Malcolm Heywood , Nur Zincir-Heywood, On evolving buffer overflow attacks using genetic programming, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Jesse C. Rabek , Roger I. Khazan , Scott M. Lewandowski , Robert K. Cunningham, Detection of injected, dynamically generated, and obfuscated malicious code, Proceedings of the ACM workshop on Rapid malcode, October 27-27, 2003, Washington, DC, USA
Christopher Kruegel , Engin Kirda , Darren Mutz , William Robertson , Giovanni Vigna, Automating mimicry attacks using static binary analysis, Proceedings of the 14th conference on USENIX Security Symposium, p.11-11, July 31-August 05, 2005, Baltimore, MD
James Poe , Tao Li, BASS: a benchmark suite for evaluating architectural security systems, ACM SIGARCH Computer Architecture News, v.34 n.4, p.26-33, September 2006
Debin Gao , Michael K. Reiter , Dawn Song, On gray-box program tracking for anomaly detection, Proceedings of the 13th conference on USENIX Security Symposium, p.8-8, August 09-13, 2004, San Diego, CA
Haizhi Xu , Steve J. Chapin, Improving address space randomization with a dynamic offset randomization technique, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Timothy Hollebeek , Rand Waltzman, The role of suspicion in model-based intrusion detection, Proceedings of the 2004 workshop on New security paradigms, September 20-23, 2004, Nova Scotia, Canada
Analyzing and evaluating dynamics in stide performance for intrusion detection, Knowledge-Based Systems, v.19 n.7, p.576-591, November, 2006
Niels Provos, A virtual honeypot framework, Proceedings of the 13th conference on USENIX Security Symposium, p.1-1, August 09-13, 2004, San Diego, CA
Prahlad Fogla , Wenke Lee, Evading network anomaly detection systems: formal reasoning and practical techniques, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA
Wes Masri , Andy Podgurski, Using dynamic information flow analysis to detect attacks against applications, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005
Hassen Saïdi, Guarded models for intrusion detection, Proceedings of the 2007 workshop on Programming languages and analysis for security, June 14-14, 2007, San Diego, California, USA
Gaurav Tandon , Philip Chan , Debasis Mitra, MORPHEUS: motif oriented representations to purge hostile events from unlabeled sequences, Proceedings of the 2004 ACM workshop on Visualization and data mining for computer security, October 29-29, 2004, Washington DC, USA
Salvatore J. Stolfo , Shlomo Hershkop , Chia-Wei Hu , Wei-Jen Li , Olivier Nimeskern , Ke Wang, Behavior-based modeling and its application to Email analysis, ACM Transactions on Internet Technology (TOIT), v.6 n.2, p.187-221, May 2006
Niels Provos, Improving host security with system call policies, Proceedings of the 12th conference on USENIX Security Symposium, p.18-18, August 04-08, 2003, Washington, DC
Darren Mutz , Fredrik Valeur , Giovanni Vigna , Christopher Kruegel, Anomalous system call detection, ACM Transactions on Information and System Security (TISSEC), v.9 n.1, p.61-93, February 2006
C. M. Linn , M. Rajagopalan , S. Baker , C. Collberg , S. K. Debray , J. H. Hartman, Protecting against unexpected system calls, Proceedings of the 14th conference on USENIX Security Symposium, p.16-16, July 31-August 05, 2005, Baltimore, MD
R. Sekar , V.N. Venkatakrishnan , Samik Basu , Sandeep Bhatkar , Daniel C. DuVarney, Model-carrying code: a practical approach for safe execution of untrusted applications, Proceedings of the nineteenth ACM symposium on Operating systems principles, October 19-22, 2003, Bolton Landing, NY, USA
Maja Pusara , Carla E. Brodley, User re-authentication via mouse movements, Proceedings of the 2004 ACM workshop on Visualization and data mining for computer security, October 29-29, 2004, Washington DC, USA
Janak J. Parekh , Ke Wang , Salvatore J. Stolfo, Privacy-preserving payload-based correlation for accurate malicious traffic detection, Proceedings of the 2006 SIGCOMM workshop on Large-scale attack defense, p.99-106, September 11-15, 2006, Pisa, Italy
Kenneth L. Ingham , Anil Somayaji , John Burge , Stephanie Forrest, Learning DFA representations of HTTP for protecting web applications, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.5, p.1239-1255, April, 2007
Christopher Kruegel , Giovanni Vigna , William Robertson, A multi-model approach to the detection of web-based attacks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.48 n.5, p.717-738, 5 August 2005
Shuo Chen , Jun Xu , Emre C. Sezer , Prachi Gauriar , Ravishankar K. Iyer, Non-control-data attacks are realistic threats, Proceedings of the 14th conference on USENIX Security Symposium, p.12-12, July 31-August 05, 2005, Baltimore, MD
Jedidiah R. Crandall , S. Felix Wu , Frederic T. Chong, Minos: Architectural support for protecting control data, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.4, p.359-389, December 2006
Martín Abadi , Mihai Budiu , Úlfar Erlingsson , Jay Ligatti, Control-flow integrity, Proceedings of the 12th ACM conference on Computer and communications security, November 07-11, 2005, Alexandria, VA, USA
586237 | Database support for evolving data in product design. | We argue that database support for design processes is still inadequate, despite the many transaction models that have been put forth to fill this deficiency. In our opinion, substantial improvement is not to be gained by introducing yet another transaction model, but by questioning the basic paradigm of transaction processing itself. Instead of the usual view of transactions as destructive operations replacing an outdated database state with a more current one, we propose to view design transactions as non-destructive operations importing additional knowledge about an essentially unchanging final design solution. This leads to a model of designing with constraints, and a natural way of concurrent design. We give a formal presentation of the model and then discuss implementation techniques for centralized and distributed constraint management. | Introduction
Product design, like most other business processes today,
requires database support for all but the most simple under-
takings. Yet, design processes differ from ordinary business
processes in that they are not easily decomposable
into a series of independent steps that would lend themselves
to the ACID paradigm of classical database transaction
processing. Instead, the various subtasks in a design
task tend to be highly interrelated, and the design process is
best viewed as a set of cooperative activities pursuing these
subtasks simultaneously.
Consequently, research in database support for design
processes has mostly focused on developing alternatives to
the strict isolation property of classical database transac-
tions, and a variety of mechanisms to define and control
cooperative transactions has been proposed.
While these proposals have - sometimes considerably
- departed from the notion of serializability as the classic
correctness criterion, the basic paradigm of transaction
Funded by the German Research Council (DFG) under project number
SFB346.
processing has remained the same: transactions effect state
transitions between consistent database states, which are
defined as being complete and unambiguous descriptions
of the current state of the miniworld under consideration.
From this point of view, the amount of information in
a database is essentially constant: each consistent database
state provides a complete description of exactly one state of
the miniworld. Transactions switch between these states as
dictated by the evolution of the miniworld; they do not, in
general, increase the amount of information available about
the same state. Thus, transactions are essentially non-
monotonous: information about the old state is destroyed
and replaced by information about a new state. From non-monotonicity
follows non-commutativity, and therefore the
importance of serializability.
In a design process, however, information is predominantly
added to the information already available. Ideally,
design means a continuous refinement of specifications until
a unique artifact is singled out. In other words, the
"state" of the world - the artifact that is the eventual result
of design - never changes; instead, the amount of knowledge
available about that state increases monotonically. Of
course, real design processes are less straightforward; proposed
designs may prove to be infeasible, in which case
some specifications must be retracted. Nevertheless, the
fundamental operation in design is narrowing the solution
space, which is a monotonous, and hence commutative, operation.
We believe, therefore, that a database system useful for
design processes should support the notion of specification
(as a constraint on admissible artifacts) and refinement and
release of specifications as primitives in its data model, and
that the leeway offered by the commutativity of refinement
should be exploited in its transaction model.
As a first step in that direction, we propose a constraint-based
data model, thus adopting a perspective originally
introduced by constraint logic programming and then taken
up by constraint database systems. From this perspective, a
database does not provide an explicit characterization of a
single state of the world by assigning precise values to the
state variables. Instead, a database contains a set of constraints
over these variables, which delineate the admissible
values and thereby the possible states of the world. Put
in less abstract terms, a design database holds the requirements
and design decisions gathered so far and thereby
constrains the set of possible design solutions. As design
proceeds, further constraints are added, until either a contradiction
is reached and backtracking becomes necessary,
or the specification progresses to the point that all information
needed for eventual production of the artifact is available
with sufficient precision.
The fundamental primitives in this data model are re-
finement, querying and release of constraints, and we demonstrate
how these primitives play together in collaborative
design. Given that the database always denotes an approximation
to the design solution, the notion of consistency
becomes rather simple: constraints must not be contradictory
(at least not on a level the system can detect), and constraints
introduced by one party must not be released by
another party without proper authorization.
In the case of a single, centralized database, the data
model and its associated consistency guarantees can be
implemented in a straightforward way using techniques
known from constraint databases. The situation becomes
more difficult once data replication is taken into account,
e.g., a sales representative loading a set of product specifications
onto her laptop, making some changes offline at
a customer meeting and then reintegrating the data into
the main database. To maintain consistency in this case,
changes to replicated data must, to a certain extent, be arranged
in advance. We present a locking protocol designed
to announce anticipated changes to replicated data and to
prevent, in a pessimistic fashion, conflicts during reintegra-
tion. However, even if the protocol cannot be observed and
unanticipated changes need to be made, conflicting specifications
will be detected at reintegration time and can then
be resolved manually.
To summarize, the contributions of this paper are as fol-
lows: we demonstrate that the notion of data as constraints
lends itself in a natural way to concurrent design, and we
present the protocols necessary to coordinate concurrent
access and maintain consistency in both a centralized
and a distributed constraint store.
2. Related work
The view of design as refinement of specifications is fairly
common in design theory (see, e.g., [2]). In the most general
setting, specifications are given by arbitrary concepts
(where a concept denotes a family of related artifacts), and
the result of design is an artifact defined by the intersection
of a sufficient number of such concepts [13]. However,
there is no hope of treating specifications of this generality
algorithmically. We limit ourselves to specifications that constrain
the value of some attribute to a certain range. Such
specifications do cover the most common cases in practice,
and they admit efficient satisfiability tests.
As pointed out in the introduction, the constraint paradigm
has been put to fruitful use in constraint database
systems [7, 10, 8]. The primary motivation for this line
of research was the desire to generalize ordinary relations
(i.e., finite collections of tuples) to infinite relations while
maintaining a finite description. Consequently, constraint
database systems have become very popular for geographic
and temporal applications, where regions or intervals containing
an infinite number of points need to be manipulated.
The transactions conducted by such applications, however,
are rather conventional and in particular non-monotonous,
so that constraint databases have not yet been able to realize
the potential for increased parallelism inherent in the
constraint framework.
Quite to the contrary, parallelism was the main focus in
Saraswat's work on parallel constraint programming [11].
There, the notion of a shared constraint store is proposed
as an elegant means of communication and synchronization
between cooperating agents. Our data model is a rather
simple application of the general framework laid out in this
thesis. Saraswat, coming from a programming language
background, was mostly concerned with a centralized constraint
store; although he addresses distributed constraint
systems, he does not discuss any mechanisms to maintain
consistency in such a setting. This gap is filled by the protocol
presented in Section 6.
Fuzzy sets [14] may be viewed as generalized constraints
that allow varying degrees of satisfaction. Fuzzy
database systems [1, 3] have been proposed to capture
the uncertainty of specifications inherent in early design
stages [12, 15]. Thus, fuzzy database systems share our
view of design as refinement, or reduction of uncertainty.
However, most research in applying fuzzy technology to
design processes has focused on the modeling aspects and
semantics of fuzzy sets, whereas we are mainly concerned
with coordinating parallel access to design information.
Yet, the data model and protocols presented below apply
equally well to "crisp" and "fuzzy" constraints, provided
that efficient consistency checking algorithms for the class
of constraints under consideration are available (e.g., [6]).
Constraint and fuzzy database systems may be regarded
as combinations of a non-standard data model and a standard
transaction model. Conversely, there have also been
many attempts to graft an advanced transaction model
onto a conventional object-oriented or relational database
system, seeking to provide better support for the long-
running, cooperative transactions typical for design pro-
cesses. Space limitations forbid a detailed discussion of
this rather large body of work (see [5] for a survey). In
general, these transaction models trade the strong consistency
guarantees offered by serializable transaction processing
against more flexible control over transaction isolation
and notions of consistency. However, such flexibility
comes at the price of more complex transaction management
schemes and, worse perhaps, depending on each user
to provide information about the desired semantics along
with the transactions themselves. That, in our view, is a
bad tradeoff; the consistency guarantees and semantics of a
database system should be as simple as possible.
A much less drastic modification of the classic ACID
paradigm is the notion of ε-serializability, introduced in [9]
as a means of trading precision for concurrency. ε-serializability
is applicable to data that possess a (numerically
quantifiable) degree of uncertainty, and it extends ordinary
serializability by permitting transaction histories whose results
differ (according to a suitable metric) by no more than
a quantity ε from a serializable history. Unfortunately, ε-serializability
does not handle write transactions very well:
in the absence of any knowledge about the internal workings
of a write transaction, one must assume that small variations
in the input can lead to arbitrarily large variations in
the output, and hence accept the possibility of unbounded
divergence of the database state from the result of a serial
execution. [4] addresses this problem by suggesting that
transactions that "went too far astray" be periodically un-
done, but in a design setting, with transactions easily representing
a day's work, that may prove very costly.
3. A formal model of constraint-based design
We view design as the manipulation of specifications,
where each specification poses a restriction on the admissible
design solutions. Specifications imposed by external
agents (i.e, design requirements) and specifications resulting
from decisions internal to the design process are treated
alike. We model the design space as a finite collection
x1, x2, ..., xn of design parameters of types t1, t2, ..., tn,
such that every possible design outcome is uniquely characterized
by an assignment of values to x1, x2, ..., xn.
The types t1, t2, ..., tn are arbitrary; thus it is entirely conceivable
that a design parameter describes, say, a complex
geometry.
A specification in general is a syntactical characterization
of a subset of t1 × t2 × ... × tn. As pointed out in
Section 2, this notion is too broad to be algorithmically
tractable, so we limit ourselves to specifications that restrict
the value of a single design parameter xi to a range S,
where S has a syntactical characterization in some language
L. Such one-dimensional specifications will henceforth
be called conditions. The exact language L used to
express ranges is left unspecified; for example, S might
be defined by explicit enumeration, numerical intervals,
logical formulas, etc. We do make certain assumptions
about L,
1. If ranges S1, S2, ..., Sm are expressible in L, then
their intersection S1 ∩ S2 ∩ ... ∩ Sm is expressible in L as well,
and there exists a (reasonably efficient) procedure for
computing the corresponding expression.
2. There exists a (reasonably efficient) procedure for
deciding whether the intersection of a collection S1,
S2, ..., Sm of ranges expressible in L is empty.
3. The ranges expressible in L are "convex" in the following
sense: if S, S1, S2, ..., Sm are expressible in L and
S ∩ S1 ∩ ... ∩ Sm is empty, then S ∩ Sj is empty for
some j. (This is a technical and not
particularly essential condition having to do with the
constrain operator introduced below.)
These assumptions are obviously true for ranges defined
by numerical intervals, but also for more powerful logical
frameworks (cf. [8] for some examples).
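For the common case of numerical intervals, all three requirements on L are easy to meet; a minimal sketch in our own notation:

```python
# Ranges as closed numerical intervals (lo, hi); None denotes the empty range.
def intersect(*ranges):
    """Assumption 1: an intersection of intervals is again an interval,
    computed as (max of lower bounds, min of upper bounds)."""
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo, hi) if lo <= hi else None

def is_empty(*ranges):
    """Assumption 2: emptiness of an intersection is efficiently decidable."""
    return intersect(*ranges) is None
```

Convexity (assumption 3) also holds here: an empty overall intersection means the largest lower bound exceeds the smallest upper bound, and the interval contributing that lower bound already conflicts pairwise with the one contributing that upper bound.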
Associated with each condition is a principal, i.e., an
abstract representation of the agent that imposed the condi-
tion. This could be an individual designer, a design team, a
manager, a customer or any other entity authorized to participate
in the design process. The combination of condition
and principal is called constraint.
A design state, or design database, is given by a collection
of constraints, which together describe the values
considered admissible for x 1 , x 2 , . , xn in that particular
state. A design state is called consistent if there is at least
one assignment of values to x 1 , x 2 , . , xn satisfying all
constraints.
This notion of consistency is admittedly naive, because
it does not take interactions between design parameters
due to physical laws or technological limitations into ac-
count. For example, in microprocessor design the conditions
"clock rate - 1000MHz" and "power dissipation -
1W" are formally consistent (because they refer to different
design parameters), but currently not simultaneously
achievable. If desired, such dependencies between design
parameters could be introduced into the model as multi-dimensional
conditions, at the expense of increased overhead
for consistency checking. We believe, however, that
even the simple-minded consistency checks defined here
will prove beneficial in practice.
Design proceeds by means of three basic operations.
Each operation is carried out atomically.
constrain(x ∈ S): Adds a new constraint with condition
x ∈ S and the invoking agent as principal to the design
state. This operation succeeds only if the new state
is consistent. If it fails, the set of constraints conflicting
with x ∈ S is returned (this set is well defined
because of the convexity property assumed above).
release(x ∈ S, p): Removes the specified constraint from the
design state, if it exists. (Note that this does not
affect other constraints on x.) This operation succeeds
only if the invoking agent is authorized to remove
constraints introduced by principal p. The specific
scheme whereby such authorization is obtained
is left open.
peek(x): Returns a condition describing the set of values
admissible for x in the current state (i.e., the intersection
of all current constraints on x).
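Putting the three primitives together for interval conditions gives a small single-process sketch; the class name, the (parameter, range, principal) triples, and the caller-equals-principal authorization rule are our own simplifying assumptions:

```python
class ConstraintStore:
    """Centralized design state: a set of constraints, each a condition
    'x in [lo, hi]' tagged with the principal that imposed it."""

    def __init__(self):
        self.constraints = []  # list of (param, (lo, hi), principal)

    def peek(self, x):
        """Intersection of all current conditions on parameter x."""
        lo, hi = float("-inf"), float("inf")
        for p, (l, h), _ in self.constraints:
            if p == x:
                lo, hi = max(lo, l), min(hi, h)
        return (lo, hi)

    def constrain(self, x, rng, principal):
        """Atomically add the condition x in rng. Returns None on success;
        on failure, returns the conflicting constraints (for intervals, an
        empty overall intersection always yields a pairwise conflict)."""
        conflicts = [c for c in self.constraints if c[0] == x and
                     max(c[1][0], rng[0]) > min(c[1][1], rng[1])]
        if conflicts:
            return conflicts
        self.constraints.append((x, rng, principal))
        return None

    def release(self, x, rng, principal, caller):
        """Remove a constraint, if it exists; only the constraint's own
        principal is authorized in this simplified sketch."""
        if caller != principal:
            raise PermissionError("not authorized to release this constraint")
        try:
            self.constraints.remove((x, rng, principal))
        except ValueError:
            pass  # 'if it exists': releasing an absent constraint is a no-op
```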
Let us briefly comment on these operations and the invariants
they imply.
constrain is a monotone operation that is used to establish
new design requirements, new design decisions or -
this case will be discussed in the next section - a stake in
existing decisions. release is the inverse operation, used
to retract requirements or decisions that did not come to
fruition. peek serves to query the current state, typically at
the beginning of a design task. From the definition of these
operations, it is clear that the following two properties hold:
Constraint Consistency (CC): The design state is always
consistent. In particular, the result of a peek operation
never denotes the empty set.
Constraint Durability (CD): Once a constraint has been
successfully established, that constraint will be satisfied
by any design solution unless explicitly released
by a properly authorized agent (presumably with notification
of the constraint's principal).
It is worth pointing out that the notion of durability described
here is quite different from the one offered by ACID
transactions: the latter refers to protection from system or
media failures, but not the effects of other transactions. In
fact, values written by ACID transactions are safe from
other transactions only while they are not yet committed;
after being committed, they may be overwritten at any time.
This is contrary to the semantics of committed design de-
cisions, which are supposed to be stable unless explicitly
retracted.
4. Concurrent design with constraints
We will now illustrate how the constraint manipulation
primitives play together in concurrent design.
The design process will usually begin by determining
the major design requirements and entering corresponding
constraints into the design database. As soon as a sufficient
set of requirements has been established to allow a meaningful
subdivision of the design task, design subtasks may
be formulated and assigned. Meanwhile, the acquisition
of requirements continues, resulting in further constraints
being imposed.
A designer charged with a certain design task will initially
use the peek operation to obtain the current constraints
on the relevant design parameters. Three outcomes
are possible:
1. The constraints are such that no solution is possible.
In this case, the design process has reached a dead
end, and some constraints need to be released.
2. The problem is underspecified to the extent that nothing
useful can be done. In this case, the design task
must be postponed until more information is forthcoming
3. The constraints admit one or more solutions. In this
case, a solution is determined and constraints describing
the solution are added to the database, together
with constraints that describe the assumptions
on which the solution was based. In the simplest
case, these additional constraints are simply copies
of the constraints obtained from the initial peek operations, and thus redundant. More frequently, though,
a design solution requires stronger assumptions than
just the design requirements, so that the additional
constraints describing the design assumptions do actually
narrow the solution space.
In this way, many design tasks can proceed in parallel, while, by virtue of constraint durability, each designer can rest assured that the assumptions and results of her design remain valid throughout the entire process, unless explicitly retracted.
Of course, this also means that an attempt to establish
a constraint may fail because it conflicts with the assumptions or results of another design activity. No attempt is
made to resolve a conflict automatically, because resolution
typically requires domain-specific expertise. The only
recourse in a conflict is to contact the principal of the offending
constraint (which is returned by an unsuccessful
constrain) and to negotiate a solution.
Note that the consistency and durability guarantees offered
by the constraint store do not require any locking protocols
(besides executing each primitive operation atomically), but follow simply from the definition and the monotonicity
of constrain.
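The consistency argument above can be made concrete with a small sketch. This is our own illustration, not code from the paper: a single-parameter constraint store over interval domains, where constrain is monotone and only succeeds if the narrowed intersection of all constraints stays non-empty.

```python
# Illustrative sketch (not the paper's implementation): a constraint
# store for one numerical design parameter, with interval conditions.

class ConflictError(Exception):
    """Raised when a new constraint would empty the solution space."""

class ConstraintStore:
    def __init__(self):
        # principal -> (lower, upper) interval constraint
        self._constraints = {}

    def peek(self):
        """Return the intersection of all current constraints."""
        lo, hi = float("-inf"), float("inf")
        for l, h in self._constraints.values():
            lo, hi = max(lo, l), min(hi, h)
        return lo, hi

    def constrain(self, principal, lo, hi):
        """Monotone: succeeds only if the narrowed range is satisfiable."""
        cur_lo, cur_hi = self.peek()
        new_lo, new_hi = max(cur_lo, lo), min(cur_hi, hi)
        if new_lo > new_hi:  # empty intersection: conflict
            raise ConflictError(
                f"constraint by {principal} empties the solution space")
        self._constraints[principal] = (lo, hi)

    def release(self, principal):
        """Explicitly retract a principal's constraint."""
        self._constraints.pop(principal, None)
```

Because constrain only ever narrows the solution space, no locking beyond atomic execution of each operation is needed, exactly as stated above.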
5. Implementing a centralized constraint store
While the current prototype of our system is a stand-alone
implementation using a main-memory database system, a
layered implementation on top of a conventional relational
database system can be envisioned in a rather straightforward
way.
First, the design parameters describing the design space
need to be identified. It is hardly feasible to submit every
single dimension occurring in an engineering drawing or
every line of code in a program to the constraint paradigm.
Therefore, one would try to identify a set of parameters representing
information that truly needs to be shared among
concurrent design activities, and then use the transaction
management facilities of the underlying database system
to couple other, non-shared parameters to their "governing" shared parameters. For example, if two design groups
working on a mechanical assembly needed to exchange
only bounding box information for their respective components, then just the coordinates of these bounding boxes
would be considered design parameters, and access to all
other dimensions would be wrapped in ACID transactions
that ensure that the bounding box data remain consistent
with the true geometry data.
Second, a domain and a condition language need to
be chosen for every design parameter. In mechanical design, most information is numerical, and lower and upper
bounds usually suffice as conditions. On the other hand,
software engineering usually deals with free-text specifications, and for these some sort of symbolic representation
must be found, e.g., by picking the most salient keywords
and defining conditions by explicitly enumerating the required
keywords.
Third, the design parameters and their associated sets
of constraints have to be represented in the relational
model. For example, a numerical design parameter with
constraints defined by intervals might be represented by a
three-column table having a tuple (principal, lower bound,
upper bound) for each constraint on the design parameter.
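As a sketch of this relational representation (the table and column names are our own choices, as is the use of SQLite), a three-column table plus one aggregate query suffices for the satisfiability test on interval constraints:

```python
# Sketch: one numerical design parameter's constraint set as a
# (principal, lower_bound, upper_bound) table, as suggested in the text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE wing_span_constraints (
    principal   TEXT PRIMARY KEY,
    lower_bound REAL,
    upper_bound REAL)""")

def constrain(principal, lo, hi):
    # Satisfiability test: the intersection of all intervals, including
    # the candidate one, must be non-empty (greatest lower bound must
    # not exceed least upper bound).
    glb, lub = conn.execute(
        "SELECT MAX(lower_bound), MIN(upper_bound) FROM ("
        "  SELECT lower_bound, upper_bound FROM wing_span_constraints"
        "  UNION ALL SELECT ?, ?)", (lo, hi)).fetchone()
    if glb > lub:
        return False  # would empty the solution space
    conn.execute(
        "INSERT OR REPLACE INTO wing_span_constraints VALUES (?, ?, ?)",
        (principal, lo, hi))
    conn.commit()
    return True

def peek():
    # Intersection of all current constraints on the parameter.
    return conn.execute(
        "SELECT MAX(lower_bound), MIN(upper_bound) "
        "FROM wing_span_constraints").fetchone()
```

In a real system the test-and-insert in constrain would run inside a single ACID transaction, as the next paragraph describes.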
Finally, the operations constrain, release and peek need
to be implemented as ACID transaction procedures, where
constrain and peek would presumably invoke custom procedures for satisfiability testing and intersection, respectively.
Note that with this scheme, applications can continue
to use ordinary ACID transactions. In fact, if "wrapper"
transactions are written for access to non-shared parameters, transparently inserting calls to constrain and release on their governing shared parameters as outlined above, then
applications need not be aware of the constraint framework
at all.
6. Handling distribution and replication
Our goal is now to extend the (CC) and (CD) guarantees
to a distributed and replicated constraint system. We assume
that replicas may operate temporarily in disconnected
mode, without being able to exchange operations with other
replicas. In this case, two incompatible constrain operations
may be executed at different replicas, raising a conflict
when the replicas are eventually merged. If (CC) and
(CD) are to be maintained under these assumptions, no
such conflicts can be allowed. Hence, constrain operations
must be executed pessimistically. release and peek operations
are less critical, because they can never raise conflicts
on a merge.
6.1. Intention locks
We introduce intention locks as a means to protect disconnected
replicas from incompatible constrain operations.
Intention locks are acquired while the replicas are still con-
nected, and held until reconnection occurs.
The idea behind intention locks is to announce changes
anticipated for the period of disconnected operation. Obvi-
ously, some advance knowledge of these changes is necessary
to acquire the proper locks, but this is not an unreasonable
assumption. If a sales representative copies data to her
notebook and meets with a customer to discuss a design,
she probably has a rough idea of which data may change.
Formally, intention locks are conditions in the sense of
Section 3, i.e., one-dimensional specifications expressed in
some language L. Intention locks are always associated
with a replica and, unlike constraints, do not have a principal. Also, there are two kinds of intention locks, and their semantics is somewhat different from that of conditions.
The two kinds of intention locks correspond to classical
shared and exclusive locks and reflect the two different attitudes
a designer may exhibit towards a design parameter.
Firstly, a design parameter x may be viewed as input to
a design task. In this case, the designer will presumably use
peek to obtain the current range of x and then use constrain
to re-impose this range (or perhaps a range somewhat narrower) as a design assumption. The semantics of this constraint
is that she is prepared to accept any outcome of the
design process as long as her design assumption is satisfied.
In particular, she is prepared to accept arbitrary constraints
imposed on x by other principals, perhaps on other replicas,
as long as these other constraints have nonempty intersection
with her own. To express this attitude when about to
enter disconnected mode, she would request a shared intention
lock on x with a range equal to the minimum range acceptable
to her. A shared intention lock on x with range S
held by a replica prohibits other replicas from imposing
constraints on x with a range disjoint from S. As such
it is very similar to a constraint, except that shared intention
locks are immediately and globally published, whereas
constraints are only eventually published (see the section
on protocol implementation below).
Second, a design parameter x may be viewed as a (yet
unknown) output of a design task. In this case, a designer
may foresee that she needs to impose certain constraints
on x, but does not yet know what these constraints will be.
In order to keep other designers from restricting her freedom
of choice, she will want to impose a lock that prohibits
other principals from introducing any constraints on x that
eliminate values she considers potentially interesting. This
attitude is expressed by requesting an exclusive intention
lock on x with a range that is the union of all constraints
the designer might want to introduce later. An exclusive
intention lock on x with range S held by a replica prohibits
other replicas from imposing constraints on x with a range
that does not include S.
The entire locking protocol is determined by the following
rules. The first two rules govern the acquisition of
locks, and the last rule governs the execution of constrain
operations (release and peek operations are always permitted).
Shared lock acquisition A request for a shared intention
lock on design parameter x with range S will be
granted iff the following conditions are met:
1. If another replica holds an exclusive intention lock on x with range X, then S ⊇ X.
2. If another replica holds a shared intention lock on x with range S′, then S ∩ S′ ≠ ∅.
Exclusive lock acquisition A request for an exclusive intention
lock on design parameter x with range X will
be granted iff the following conditions are met:
1. No other replica holds an exclusive intention
lock on x.
2. If another replica holds a shared intention lock on x with range S, then X ⊆ S.
Constraint introduction A request constrain(x, S) executed
on some replica will succeed iff the following
conditions are met:
1. The replica holds a shared intention lock on x with a range S′ ⊆ S, or an exclusive intention lock on x with a range X such that X ∩ S ≠ ∅.
2. The intersection of S and all other constraints
currently imposed on x at that replica is non-empty
and, if the replica holds an exclusive intention
lock on x, meets the range of that lock.
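The three rules can be rendered as predicates over finite-set ranges (a deliberate simplification of the abstract condition language L; all function and variable names here are ours, not the paper's):

```python
# Sketch of the locking rules. `shared` and `exclusive` map replica ids
# to the ranges (Python sets) of the intention locks they hold.

def may_grant_shared(S, replica, shared, exclusive):
    """Shared lock acquisition for range S at `replica`."""
    for r, X in exclusive.items():
        if r != replica and not (X <= S):       # rule 1: S must include X
            return False
    for r, S2 in shared.items():
        if r != replica and not (S & S2):       # rule 2: ranges must overlap
            return False
    return True

def may_grant_exclusive(X, replica, shared, exclusive):
    """Exclusive lock acquisition for range X at `replica`."""
    if any(r != replica for r in exclusive):    # rule 1: one exclusive lock
        return False
    return all(X <= S                           # rule 2: every shared S ⊇ X
               for r, S in shared.items() if r != replica)

def may_constrain(S, replica, shared, exclusive, local_constraints):
    """Constraint introduction: decidable locally at the replica."""
    covered = (replica in shared and shared[replica] <= S) or \
              (replica in exclusive and bool(exclusive[replica] & S))
    if not covered:                             # condition 1
        return False
    inter = set(S)
    for c in local_constraints:                 # condition 2: non-empty
        inter &= c                              # intersection with all
    if not inter:                               # local constraints ...
        return False
    if replica in exclusive and not (inter & exclusive[replica]):
        return False                            # ... meeting the lock range
    return True
```

Note that may_constrain consults only the replica's own locks and constraints, which is exactly why constrain can be decided locally while lock acquisition needs global communication.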
Note that the admissibility of a constrain operation can
be decided locally, whereas the acquisition of locks requires
global communication. How exactly this global
communication is implemented is discussed next.
6.2. Protocol implementation
The protocol for the acquisition of intention locks may
seem simple at first, but the details are somewhat complex.
We note first that constrain operations can be executed locally
at a replica without communication. The resulting
constraints can be propagated eventually to the other replicas, along with the release of the lock covering the operation. The difficult part is setting and releasing the intention
locks. We call these operations global, because they must
be propagated to all replicas. Global operations can be executed
asynchronously as long as all replicas execute them
in the same order. In particular, a disconnected replica will
see and execute global operations originating from other
replicas only at reconnection time, but as long as their ordering
is preserved, the protocol will remain correct.
The set of replicas will usually be divided into a set of
connected partitions. It is easy to show that only one partition can be permitted to execute global operations. Now
the problem is to decide which partition is allowed to initiate
global operations. Usually this is done using a quorum
consensus algorithm. However, in our scenario the majority
of replicas might be offline, so that there is no partition
big enough to achieve a quorum. In this case the requirements
for a quorum have to be modified accordingly, e.g. a
quorum could be any partition that contains more than half
of the members of the last quorum.
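The modified quorum rule sketches easily (this is our own illustration of the rule stated above, with set-valued member lists):

```python
# Sketch: a partition may form the next quorum iff it contains more
# than half of the members of the previous quorum.

def is_quorum(partition, last_quorum):
    overlap = set(partition) & set(last_quorum)
    return len(overlap) > len(last_quorum) / 2
```

Because each new quorum must overlap the previous one in a majority, two disjoint partitions can never both qualify, even as the replica population shrinks.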
Last, but not least, even inside a quorum only one replica
can initiate a global operation. No other replica can initiate
a global operation until the preceding global operation has
been completed by all other replicas within the quorum.
7. Conclusions and open problems
We started from the observation that design, unlike most
other business processes, is a process of successive refinement and monotonic information accretion. This led us
to adopt constraints as the basic design objects. We then
investigated how concurrent design can be achieved within
the constraint framework, and showed how the basic guarantees
of the constraint model, combined with the explication
of design assumptions, lead to a very natural coordination
model that does not require any additional transaction
management. Finally, we presented implementation mechanisms for centralized and distributed constraint systems,
thus demonstrating the basic feasibility of the approach.
The most interesting continuation of this work is certainly
the handling of multidimensional constraints, where
a constraint can specify dependencies between several design
parameters. Unfortunately, the time needed to compute
intersection and satisfiability of such multidimensional
constraints tends to grow rather fast with the number
of dimensions. Further work is necessary to identify constraint
languages and implementation techniques that can
handle multidimensional constraints with reasonable performance.
A TACOMA retrospective

Abstract: For seven years, the TACOMA project has investigated the design and implementation of software support for mobile agents. A series of prototypes has been developed, with experiences in distributed applications driving the effort. This paper describes the evolution of these TACOMA prototypes, what primitives each supports, and how the primitives are used in building distributed applications.

1 Introduction
In the Tacoma project, our primary mode of investigation has been to build
prototypes, use them to construct applications, reflect on the experience,
and then move on. None of the systems we built were production-quality
# Department of Computer Science, University of Tromsø, Norway. This work was supported by NSF (Norges forskningsråd) Norway DITS grant no. 112578/431 and 126107/431.
Department of Computer Science, Cornell University, Ithaca, New York 14853. Supported
in part by ARPA/RADC grant F30602-96-1-0317, AFOSR grant F49620-00-1-0198,
Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory
Air Force Material Command USAF under agreement number F30602-99-1-0533,
National Science Foundation Grant 9703470, and a grant from Intel Corporation. The
views and conclusions contained herein are those of the authors and should not be interpreted
as necessarily representing the official policies or endorsements, either expressed or
implied, of these organizations or the U.S. Government.
nor were they intended to be. The dramatic collapse of our namesake 1 -
the Tacoma Narrows bridge-played an important role in the evolution of
suspension-bridge design [Pet92] by teaching the importance of certain forms
of dynamic analysis. It was in that spirit that we set out with the Tacoma
project to build artifacts, stress them, and learn from their collapse.
Each version of Tacoma has provided a framework to support the execution
of programs, called agents, that migrate from host to host in a computer
network. Giving an agent explicit control over where it executes is attractive
for a variety of technical reasons:
. Agent-based applications can make more efficient use of available communication bandwidth. An agent can move to a processor where data is
stored, scanning or otherwise digesting the data locally, and then move
on, carrying with it only some relevant subset of what it has read. By
moving the computation to the data-rather than moving the data to
the computation-significant bandwidth savings may result.
. Because agents invoke operations on servers locally (hence cheaply), it
becomes sensible for servers to provide low-level RISC-style APIs. An
agent can synthesize from such an API operations specifically tailored
to its task. Contrast this with traditional client-server distributed computing
where, because communications cost must be amortized over
each server operation invocation, servers provide general-purpose high-level
APIs.
. Agents execute autonomously and do not require continuous connectivity
to all servers where they execute. This makes the agent abstraction
ideal for settings in which network connections are intermittent, such
as wireless and other forms of ad hoc networks.
Agent-based systems also provide an attractive architecture for upgrading
the functionality of fielded software systems. Software producers rightly
consider extensibility to be crucial for preserving their market share. It is not
unusual for a web browser to support downloading of "helper applications"
that enable new types of data to be presented. And much PC software is
installed and upgraded by downloading files over the Internet. The logical
next step is a system architecture where performing an upgrade does not
require overt action by the user. Agents can support that architecture.
1 The name Tacoma is an acronym for Troms- And Cornell Moving Agents.
Version       Innovation
Tacoma v1.0   basic agent support
Tacoma v1.2   multiple language support and remote creation
Tacoma v2.0   rich synchronization between running agents
Tacoma v2.1   agent wrappers
Tacoma lite   small footprint for PDAs
TOS           factoring mobility and data transformation

Table 1: The most important versions of the Tacoma platform.
Engineering and marketing justifications aside, agents raise intriguing
scientific questions. An agent is an instance of a process, but with the identity
of the processor on which execution occurs made explicit. It might, at first,
seem that this change to the process abstraction is largely cosmetic. But
with processor identities explicit, certain communication becomes implicit-
rather than sending messages to access or change information at another site,
an agent can visit that site and execute on its own behalf. The computational
model is then altered in fundamental and scientifically interesting ways. For
example, coordinating replicas of agents-processes that move from site to
site-in order to implement fault-tolerance requires solving problems that
do not arise when stationary processes are what is being replicated. Other
new challenges arise from the sharing and interaction that agents enable, as
mobile code admits attacks by hosts as well as to hosts.
In order to understand some of the scientific and engineering research
questions raised by the new programming model, we experimented with a
series of Tacoma prototypes. Each prototype provided a means for agents to
migrate, and each provided primitives for agents to interact with one another.
The most important versions of Tacoma are listed in Table 1. Version 1.0
provided basic support for agents written in the Tcl language. Version 1.2
added support for multiple languages and agent creation on remote hosts. In
version 2.0, new synchronization mechanisms were added. Agent wrappers,
the innovation in version 2.1, facilitated modular construction of agents. To
enable support for agents on small PDA devices, we built Tacoma lite with
a small footprint. And TOS, the most recent version of Tacoma, allows the
mobility and data transformation aspects of agents to be cleanly separated.
A recurring theme in our work has been to consider Tacoma as a form
of glue for composing programs, rather than as providing a full-fledged computing
environment for writing programs from scratch. And almost from
the start, Tacoma avoided designing or prescribing a language for programming
agents. This language-independence allows an agent to be written in
the language that best suits the task at hand. It also allows applications to
be constructed from multiple interacting agents, each written in a different
language. But, as will be clear, our goal of language-independent support for
program migration has had broad consequences on the abstractions Tacoma
supports.
The rest of this paper is organized as follows. Section 2 describes some of the primitives we experimented with in the various Tacoma versions. A novel approach to structuring agents, based on wrappers, is the subject of Section 3. Section 4 explores the need for an agent integrity solution and describes our approaches. Section 5 presents our experiences in connection with building a mobile agent application. Agent support for PDAs and cell phones is discussed in Section 6. Section 7 describes recent developments in Tacoma, and Section 8 contains some conclusions.
2.1 Abstractions for Storing State
In moving from one host to the next, any agent whose future actions depend
on its past execution must be accompanied by state. With strong mobility,
state capture is automatic and complete-not unlike what is done in operating
systems that support process migration [Zay87, TLC85, PM83] where
the system extracts the state of a process, moves it to another processor, and
then starts the process running there. Determining which state components
to extract for a given process has proved tricky and expensive for process
migration in the presence of run-time stacks, caches, and various stateful
libraries; reincarnating the state of a process on a machine architecture that differs from the one on which the process was running is also known to be a difficult problem. Nevertheless, strong mobility is supported by Telescript
[Whi94], Agent-Tcl [Gra95], and Ara [PS97]. The convenience to application
programmers is a strong attraction.
With weak mobility, the programmer of an agent must identify and write
code to collect whatever state will be migrated. Java provides an object
serialization facility for extracting the state of a Java object and translating
it into a representation suitable for transmission from one host to another. So it is not surprising that weak mobility is what many Java-based
agent systems (Aglets [LO98], Mole [BHRS98], and Voyager by ObjectSpace)
adopt-especially since capturing the execution state of a Java thread without
modifying the JVM is impossible, as shown by Sumatra [ARS97]. Note,
however, that Java's object serialization mechanism incorporates the entire
object tree rooted on a single object. Unless the agent programmer is careful
to design data structures that avoid certain links, the high costs associated
with strong mobility can be incurred.
Tacoma supports weak mobility. This decision derives from our goals
of having run-time costs be under programmer control and of providing
language-independent support for agents. Only the agent programmer understands
what information is actually needed for the agent's future execution;
presumably the agent programmer also understands how that information
is being stored. So, in Tacoma, the agent's programmer is responsible for
building routines to collect and package the state needed by an agent when
it migrates.
Folders, Briefcases, and File Cabinets. The state of a Tacoma agent
is represented and migrated in its briefcase. Each agent is associated with,
and has access to, one briefcase. The briefcase itself comprises a set of
folders, named by ASCII strings unique to their briefcase. In turn, folders
are structured as ordered lists of byte strings called folder elements.
Various functions are provided by Tacoma to manipulate these data
structures. There are operations to create folders, delete folders, as well
as to add or remove elements from folders. For transport and storage, the
Tacoma archive and unarchive operations serialize and restore a briefcase.
Because each folder is an ordered list, it can be treated as either a stack or
a queue. As a queue, we find folders particularly useful for implementing
FIFO lists of tasks; as a stack, they are useful for saving state to backtrack
over a trajectory and return to the agent's source.
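A minimal Python rendering of these abstractions may help (the real TACOMA API is language-independent; this sketch only mirrors the operations named above, and the method names are ours):

```python
# Sketch of the briefcase/folder abstractions: a briefcase is a set of
# named folders, each an ordered list of byte strings.
import pickle

class Briefcase:
    def __init__(self):
        self.folders = {}   # folder name -> ordered list of byte strings

    def create_folder(self, name):
        self.folders.setdefault(name, [])

    def delete_folder(self, name):
        self.folders.pop(name, None)

    def add(self, name, element):
        self.folders[name].append(element)      # queue tail / stack top

    def remove_fifo(self, name):
        return self.folders[name].pop(0)        # folder as FIFO task list

    def remove_lifo(self, name):
        return self.folders[name].pop()         # folder as backtracking stack

    def archive(self):
        return pickle.dumps(self.folders)       # serialize for transport

    @staticmethod
    def unarchive(blob):
        bc = Briefcase()
        bc.folders = pickle.loads(blob)
        return bc
```

The archive/unarchive pair stands in for TACOMA's serialization of a briefcase when an agent migrates; pickle is simply the most convenient stand-in in Python.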
Not only must state accompany an agent that migrates, but it may be
equally important that some state remain behind.
. It is unnecessary and perhaps even costly to migrate data that is used
only when a given site is visited.
. Secure data is best not moved to untrusted sites, so such data may have to be saved temporarily at intermediate locations.
. Having site-local state allows agents that visit a site to communicate-
even if they are never co-resident at that site.
Tacoma therefore provides a file cabinet abstraction to store collections of
folders at a specific site.
Each site may maintain multiple file cabinets, and agents can create new
file cabinets as needed at any site they visit. Every file cabinet at a site is
named by a unique ASCII string. By choosing a large, random name, an
agent can create a secret file cabinet because guessing such a name will be
hard and, therefore, only those agents that have been told the name of the file
cabinet will be able to access its contents. File cabinets having descriptive or
well-publicized names are well suited for sharing information between agents.
Whereas a briefcase is accessed by a single agent, file cabinets can be
accessed by multiple agents. To support such concurrent access, an agent
specifies when opening a file cabinet whether updates should be applied immediately or applied atomically when the file cabinet is closed.
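The two update modes could look like this in a toy in-memory sketch (the class names and storage layout are our assumptions, not TACOMA's implementation):

```python
# Sketch: a file cabinet handle either applies updates immediately or
# buffers them and applies them as one atomic step on close.

class FileCabinet:
    def __init__(self):
        self.folders = {}    # site-local folders, shared among agents

class CabinetHandle:
    def __init__(self, cabinet, atomic=False):
        self.cabinet = cabinet
        self.atomic = atomic
        self.pending = {} if atomic else None

    def put(self, folder, elements):
        if self.atomic:
            self.pending[folder] = list(elements)   # buffered until close
        else:
            self.cabinet.folders[folder] = list(elements)  # visible at once

    def close(self):
        if self.atomic:
            self.cabinet.folders.update(self.pending)  # applied as one step
            self.pending = {}
```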
Note that Tacoma's file cabinet and briefcase abstractions do not preclude
using Tacoma to support an agent programming language providing
strong mobility. The run-time for such a language would employ one or more
folders for storing the program's state. Our experience so far, however, has
been that programmers in most any language have no difficulty working directly with folders, briefcases, and file cabinets as a storage abstraction; it has not been necessary to hide these structures behind other abstractions.
And Java's object serialization has been used for Tacoma agents written
in Java, just as Python's associative arrays have been used for Tacoma
agents written in the Python language to access the contents of briefcases
and folders.
For historical reasons, Tacoma folders store and are named by ASCII
strings. Had XML or KQML [FFMM94] been in widespread use when our
project started, then we would probably have selected one of these representation
approaches. In fact, any machine- and language-independent data
representation format suffices.
2.2 Primitives for Agent Communication
In Tacoma, agents communicate with each other using briefcases. The
Tacoma meet primitive supports such inter-agent communication by allowing
one agent to pass a briefcase to another. Its operational semantics evolved
as we gained experience writing agent applications, with functionality added
only when a clear need had been demonstrated.
The meet in Tacoma v1.2 initially was similar to a local, cross-address-space procedure call. Execution of

meet A′ with bc

by an agent A caused another agent A′ to be started at the same site with a copy of briefcase bc provided by (and shared with) A. A blocked until A′ executed finish to terminate execution. Arguments were passed to A′ in briefcase folders; results were returned to A in that same briefcase.
This first version of meet was soon extended to allow communication between agents at different sites. In addition, a means was provided for an invoking agent to continue executing in parallel with the agent it invoked. Execution of

meet A′ @host with bc block

by an agent A caused execution of A to suspend while another agent A′ executes to completion at site host and with a copy of briefcase bc. Were the block keyword omitted, execution of A would proceed in parallel with A′.
With Tacoma v2.0, the non-blocking variation of meet was replaced with two new primitives: activate and await.
. Execution of await by an agent A′ blocks A′ until some other agent names A′ in a meet or activate. An agent name A could be specified in the await to cause A′ to block until it is A that executes the corresponding meet or activate.
. Execution by an agent A of meet A′ or activate A′ first checks to see if there is an agent A′ blocked at an await that can be activated, and restarts it if so. If there isn't, a new instance of A′ is created and executes concurrently with A.
The meet, await, and activate primitives can be used by agent programmers
to implement a broad range of synchronization functionality, including an
Ada-style rendezvous operation. We eschewed building direct support for a rendezvous because such high-level constructs too often are, on the one hand, expensive and, on the other hand, only a crude approximation of what is really needed by any specific application. Lower-level primitives, like activate and await, that can be composed do not suffer these difficulties. And, being equivalent to co-routines, they should be adequately powerful. Our one
foray in the direction of high-level synchronization constructs was a waiting
room abstraction, which enabled agents to store their state (briefcases) and
suspend execution. Application programmers found the mechanism costly
and cumbersome to use.
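As one illustration of such composition, here is a toy single-process simulation, entirely our own rather than TACOMA code, of building a rendezvous from await/activate-style primitives, with briefcases modeled as dictionaries of folders:

```python
# Toy simulation: the callee blocks in an await, the caller's activate
# hands over a briefcase and wakes it; the reply travels the same way.
# (`await` is a Python keyword, hence the name await_meet.)
import threading
import queue

class AgentRuntime:
    def __init__(self):
        self._mailboxes = {}

    def _box(self, name):
        return self._mailboxes.setdefault(name, queue.Queue())

    def await_meet(self, name):
        """Block until some agent activates `name`; return its briefcase."""
        return self._box(name).get()

    def activate(self, name, briefcase):
        """Wake (or logically create) agent `name` with a briefcase."""
        self._box(name).put(briefcase)

rt = AgentRuntime()

def server():
    bc = rt.await_meet("A_prime")       # rendezvous entry point
    bc["REPLY"] = [b"done"]             # service the request
    rt.activate(bc["SENDER"][0], bc)    # hand the briefcase back

t = threading.Thread(target=server)
t.start()
rt.activate("A_prime", {"SENDER": ["A"], "REQUEST": [b"task"]})
reply = rt.await_meet("A")             # caller blocks for the reply
t.join()
```

The caller's activate plus its own await together form the two halves of the rendezvous, which is exactly the co-routine-style composition argued for above.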
Service Agents and Other Implementation Details. The Tacoma
run-time at each site makes various services available to agents executing at
that site in the form of service agents in much the same way as Ara [PS97]
and Agent-Tcl [Gra95] do. An agent A obtains service by executing a meet
that names the appropriate service agent. In Tacoma v1.0, for example,
each site provided a taxi agent to migrate an agent A to a site named in a
well-known folder in A's briefcase.
The extended functionality of meet in Tacoma v1.2 obsoleted taxi agents.
But our goal of supporting multiple languages led us to augment the set of
service agents with the new class of virtual machine service agents. Each host
ran a virtual machine service agent for each programming language that could
be executed on that host; that virtual machine service agent would execute
any code it found in the xCODE folder. So, for example, an agent A would
migrate to site host and execute a Java program P there by storing P in the
JAVACODE folder and executing a meet naming the virtual machine service
agent for Java, JVM@host.
The VM BIN virtual machine service agent executes native binary executables. We allow for heterogeneity in machine architectures by associating a different briefcase folder with each different type of machine. VM BIN identifies
the folder that corresponds to the current machine, extracts a binary
from it, and runs the result.
For obvious security reasons, virtual machine service agents must guarantee
that agents they execute only interact with the underlying operating
system and remainder of the site's environment through Tacoma primitives
they provide. Service agents, however, do have broader access to their
environment-they are a form of trusted processes.
In addition, Tacoma allows agents to be accompanied by digital certificates (stored in the xCODE-SIG folder). These certificates are interpreted
by the service agents to define accesses permitted by the signed code. The
current version of the system gives any agent accompanied by a certificate
[Figure 1: The architecture of Tacoma v2.1: agents (ag) run atop virtual machine service agents (vm_tcl, vm_java, vm_bin), each with a library; a wrapper and a firewall sit between the agents and the operating system.]
complete and unrestricted access to the environment for its signed code; in
any serious realization, we would associate types of access with particular
signers, and we would also allow a signer to specify in the certificate additional
restrictions on access.
Figure 1 illustrates the overall architecture of more recent (i.e., post v2.1)
Tacoma systems. Each host runs a collection of virtual machine service
agents that are responsible for executing agents and that contain a library
with routines for agent synchronization and communication as well as for
briefcase, file cabinet, and folder manipulation. There is also a firewall process
used to
. coordinate local meetings and
. send messages to firewalls at other sites in order to migrate an agent
from one site to another.
Thus, meet and activate operations are forwarded to the local firewall for
handling.
3 Wrappers for Structuring Agents
A Tacoma wrapper intercepts the operations performed by an agent and
either redirects them or performs pre- and/or post-processing. The effect is
similar to stackable protocol layers as seen, for example, in Ensemble [vRBH
The wrapper itself is a Tacoma agent. Redirection is performed by the
Tacoma run-time-specifically the firewall and any virtual machine service
agents responsible for interpreting agent code. To create a wrapper, the appropriate
virtual machine service agent is contacted using meet and with a
briefcase whose folders detail operations (i.e. meet executions) to intercept
and give code to run when those operations are intercepted. The wrapped
agent and its wrapper must be executed on the same host, but they may use
different virtual machine service agents and, therefore, they may be written
in different programming languages.
A wrapped agent can be wrapped again, creating an onion-like structure.
From the outside, the onion appears to move and execute as a monolithic
unit; from Tacoma's perspective, each wrapper is itself a separate agent.
Different wrappers thus could execute in their own security domains. And, as
a corollary, a wrapper could serve as a trusted process, accessing functionality
that the agent it wraps cannot.
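The interception-and-forwarding behaviour of wrappers can be sketched as a chain of callables. This is a hypothetical sketch, not Tacoma's implementation; make_wrapper and core_agent are invented names, and the wrapper names echo the examples below.

```python
# Sketch of wrapper interception (illustrative, not the Tacoma implementation):
# each wrapper sees the operations of the agent it wraps and may perform
# pre- and post-processing before passing them inward.

def core_agent(op):
    return f"core handled {op}"

def make_wrapper(name, inner, log):
    """Build a wrapper around `inner`; `log` records the interceptions."""
    def wrapper(op):
        log.append(f"{name}: pre {op}")     # pre-processing (e.g. auth check)
        result = inner(op)                  # redirect to the wrapped agent
        log.append(f"{name}: post {op}")    # post-processing (e.g. notify)
        return result
    return wrapper

# Onion-like structure: a debugger wraps a reference monitor wraps the core.
log = []
agent = make_wrapper("wr_codeauth", core_agent, log)
agent = make_wrapper("debugger", agent, log)

result = agent("meet")
# Interceptions run outermost-first on the way in, innermost-first on the way out.
```

Because each layer is itself an agent, the layers could run in different security domains, as the text notes.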
We have to date experimented with three wrappers:
A Remote Debugger Wrapper. The Tacoma remote debugger intercepts
all operations going to and coming from the agent it wraps. When operating
in the passive mode, a notification is sent (using activate) to
a remote monitor before passing each operation on, unmodified; when
operating in the active mode, the remote debugger performs a meet with
a specified controller (which can change the briefcase before passing on
the operation). Our experience with remote debugger is quite positive-
after it became operational, the need for explicit remote-debugging
support in the Tacoma run-time system disappeared.
A Reference Monitor Wrapper. The wr codeauth wrapper is a form of
reference monitor. It is wrapped around untrusted service agents and
imposes authorization checks to preserve the integrity of the Tacoma
run-time environment. By implementing this security functionality using
a wrapper, we avoided having to modify each of the service agents
individually. It is also wr codeauth that checks digital signatures and
rejects operations not allowed by accompanying certificates.
A Legacy Migration Wrapper. The Tacoma Webbot wrapper was developed
to change a legacy Web crawler into a mobile agent (see section
5). This wrapper moves crawler code from site to site.
Wrappers currently under development include one to provide fault-tolerance
using the NAP protocols (see section 4) and one to implement multicast
communication.
Wrappers were added to Tacoma (in v2.1) [SJ00] because we found our
agents for applications becoming large and unwieldy from code to support
functionality that might well have been included in the Tacoma run-time
itself but hadn't been. Adding code to the run-time would have worked for
our experimental set-up but clearly would not scale-up to large deployments,
where we had no control over when and whether upgrades would be applied
to the Tacoma run-time. Wrappers, then, were conceived as a means to
provide extensibility to the base system.
Implementation of Wrappers. In a wrapped agent, it is helpful to distinguish
the core and inner wrappers from the outer ones. The core and
inner wrappers move from host to host; the outer wrappers do not move,
being added by the host as a means to enforce policies and make site-specific
functionality available. Typically, the outer wrappers are trusted and thus
have enhanced privileges.
A wrapped agent moves from site to site because a meet is issued by one of
the agents comprising the core and inner wrappers. This meet is intercepted
by each surrounding wrapper, which archives the briefcase in the intercepted
meet, stores that in a folder of its own briefcase, and then re-issues that meet
(for interception by the wrapper one layer further out). By definition, a meet
issued by the inner-most of the outer wrappers is not intercepted by another
wrapper. Such a meet is thus handled by Tacoma's run-time system, with
the e#ect that the agent migrates to the specified host. Once there, what is
reactivated is actually the outer-most of the inner wrappers. This wrapper,
however, will extract a briefcase from its folder and re-instantiate the agent it
wrapped. The process continues recursively until all the inner wrappers
and core have been re-started.
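The archive-and-reinstantiate scheme just described can be sketched with nested briefcases. This is an illustrative model only; the folder names WRAPPER and ARCHIVED and the function names are invented.

```python
# Sketch of the migration scheme for wrapped agents (illustrative): each
# wrapper archives the intercepted briefcase inside its own briefcase, and
# on arrival each layer re-instantiates the agent it wrapped.

def archive(briefcase, layers):
    """Wrap `briefcase` once per layer; the last layer becomes outermost."""
    for layer in layers:
        briefcase = {"WRAPPER": layer, "ARCHIVED": briefcase}
    return briefcase

def reinstantiate(briefcase):
    """Unwrap recursively until the core agent's briefcase is reached."""
    restarted = []
    while "ARCHIVED" in briefcase:
        restarted.append(briefcase["WRAPPER"])   # this wrapper restarts first
        briefcase = briefcase["ARCHIVED"]
    return restarted, briefcase

core = {"CODE": "agent body"}
in_transit = archive(core, ["inner_wrapper", "outer_wrapper"])
order, recovered = reinstantiate(in_transit)
# The outer-most of the inner wrappers restarts first, then the layers beneath.
```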
4 Agent Integrity
The benefits of easily implemented computations that span multiple hosts
must be tempered by the realization that
. computations must be protected from faulty or malicious hosts, the
agent integrity problem, and
. hosts must be protected from faulty or malicious agents, the host integrity
problem.
We have concentrated in Tacoma on the agent integrity problem, both because
of expertise within the project and because this problem area was being
neglected by other researchers. (In comparison, the host integrity problem
has attracted considerable interest in the research community).
Replication Approaches
In an open distributed system, agents comprising an application must not
only survive malicious failures of the hosts they visit, but they must also be
resilient to potentially hostile actions by other hosts. We now turn our attention
to fault-tolerance protocols for that setting. Replication and voting
enable an application to survive some failures of the hosts it visits. Hosts
that are not visited by agents of the application, however, can masquerade
and confound a replica-management scheme. Clearly, correctness of an agent
computation must remain unaffected by hosts not visited by that computation.
One example we studied extensively is a computation involving an agent
that starts at some source host and visits a succession of hosts, called stages,
ultimately delivering its results to a sink host (which may be the same as
the source). We assumed stages are not a priori determined, because (say)
dynamic load-leveling is being used to match processors with tasks-only
during execution of stage i is stage i + 1 determined. The difficulties in
making such a pipeline computation fault-tolerant are illustrative of those
associated with more complex agent-computations.
The pipeline computation as just described is not fault-tolerant. Every
stage depends on the previous stage, so a single malicious failure could prevent
progress or could cause incorrect data to be propagated. Therefore, a
first step towards achieving fault-tolerance is to triplicate the host in each
stage. 2 Each of the three replicas in stage i takes as its input the majority
of the inputs it receives from the nodes comprising stage i - 1 and sends its
output to the three nodes that it determines comprise stage i + 1.
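The per-replica majority voting can be sketched directly. This is an illustrative sketch of the voting step only, not the full protocol; run_stage and the sample values are invented.

```python
# Sketch of a triplicated pipeline stage (illustrative). Each replica in
# stage i takes the majority of the values received from stage i-1; a single
# faulty replica per stage is then masked.

from collections import Counter

def majority(values):
    """Return the strict majority value, or None if there is none."""
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) // 2 else None

def run_stage(inputs_per_replica, compute):
    """Each of the three replicas votes on its inputs, then computes."""
    return [compute(majority(inputs)) for inputs in inputs_per_replica]

# Stage i-1 sent value 7 from two correct replicas and 9 from a faulty one.
outputs = run_stage(
    [[7, 7, 9], [7, 9, 7], [9, 7, 7]],
    compute=lambda x: x + 1,
)
# All three stage-i replicas agree on input 7 and forward 8 to stage i+1.
```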
Even with this replication, the system can tolerate at most one faulty host
anywhere in the network. Two faulty hosts-in the pipeline or elsewhere-
could claim to be in the last stage and foist a bogus majority value on the
sink. These problems are avoided if the sink can detect and ignore such
Assume execution of each stage is deterministic. This assumption can be relaxed
somewhat without fundamentally a#ecting the solution.
masquerading agents, so we might consider passing a privilege from the source
to the sink. One way to encode the privilege is by using a secret known only
to the source and sink. However, then the source cannot simply send a copy
of the secret to the hosts comprising the first stage of the pipeline, because if
one of these were faulty it could steal the secret and masquerade as the last
stage. To avoid this problem, a series of protocol based on an (n, threshold
schemes [Sha79] have been developed. Details are discussed in [Sch97].
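The [Sha79] scheme splits a secret into n shares such that any k of them reconstruct it but fewer reveal nothing. The following is a minimal sketch of that secret-sharing primitive, not of the pipeline protocols built on it; the small prime field is chosen only for illustration.

```python
# Minimal (k, n) secret sharing in the style of [Sha79] (illustrative): the
# secret is the constant term of a random degree k-1 polynomial over a prime
# field; any k shares determine the polynomial by interpolation.

import random

P = 2087  # small prime modulus, for illustration only

def split(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123, k=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```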
Primary Backup Approaches
Redundant processing is expensive, so the approach to fault-tolerance of the
previous subsection may not always be applicable. Furthermore, the necessary
consistency between replicas can be preserved efficiently only
within a local-area network. Replication and voting approaches are also unable
to tolerate program bugs. Thus, a fault-tolerance method based on
failure detection and recovery is often the better choice when agent-based
computations must operate beyond a local area network and must employ
potentially buggy software.
We developed such a fault-tolerance method and presented it in [JMS
Our method has roots in the well known primary-backup approach, whereby
one or more backups are maintained and some backup is promoted to the
primary whenever failure of the primary is detected. With our method, the
backup processors are implemented by mobile agents called rear guards, and
a rear guard performs some recovery action and continues the computation
after a failure is detected. We call our protocol NAP. 3
The key di#erences between NAP and the primary-backup approach are:
. Unlike a backup which, in response to a failure, continues executing the
program that was running, a recovering rear guard executes recovery
code. The recovery code can be identical to the code that was executing
when the failure occurred, but it need not be.
. Rear guards are not executed by a single, fixed set of backups. Instead,
rear guards are hosted by recently visited sites. Much of what
is novel about NAP stems from the need to orchestrate rear guards as
the computation moves from site to site.
3 NAP stands for Norwegian Army Protocol. The protocol was motivated by a strategy
employed by the first author's Army troop for moving in a hostile territory.
NAP provides fault-tolerance at low cost. The replication needed for
fault-tolerance is obtained by leaving some code at hosts the mobile agent
visited recently. No additional processors are required, and the recovery that
a mobile agent performs in response to a crash is something that can be
specified by the programmer.
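The rear-guard idea can be sketched as a simulation. This is an invented illustration of the concept only, not the NAP protocol itself: run_itinerary, the crash set, and the recovery function are all hypothetical.

```python
# Illustrative sketch of the rear-guard idea behind NAP (not the actual
# protocol): as the agent moves along its itinerary, it leaves a rear guard
# at the previous hop; when a failure is detected, programmer-specified
# recovery code runs and the computation continues.

def run_itinerary(itinerary, crashed, recover):
    visited = []
    rear_guard = None                    # (site, saved state) of previous hop
    for site in itinerary:
        if site in crashed:
            # Failure detected: the rear guard executes recovery code,
            # which here retries the computation at an alternate site.
            site = recover(site)
        visited.append(site)
        rear_guard = (site, list(visited))   # leave a rear guard behind
    return visited, rear_guard

visited, guard = run_itinerary(
    ["a", "b", "c"],
    crashed={"b"},
    recover=lambda s: s + "_alt",        # programmer-specified recovery action
)
```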
We have been able to demonstrate that when only a few concurrent failures
are possible, the latency of NAP is subsumed by the cost of moving
to another host, the most common method of terminating an action by a
Tacoma agent. However, NAP cannot be implemented in a system that
can experience partitions, because no failure-detector can be implemented in
such a system.
5 Web Crawler Application
Web crawlers follow links to web servers and retrieve the data found there
for processing at some other server. They have been implemented in a wide
variety of languages 4 with Perl and C dominant. By building a hyperlink validation
agent from an existing freely available-but stationary-web crawler
application program, we hoped to evaluate
. whether moving a Tacoma agent to the data leads to better performance
by reducing communication costs and
. whether Tacoma can be used as a form of glue for building agent
applications from existing components.
We presumed Tacoma would be available at all sites to be visited by our
crawler agent but made no assumptions about the language used to implement
the original web crawler or about how that web crawler worked.
We chose Webbot 5 from the W3C organization as the basis for our validation
agent. Available as a binary executable for di#erent common machine
architectures, Webbot was never intended to serve as part of an agent. For
our agent realization, we used Tacoma's VM BIN virtual machine service
agent to execute Webbot binaries on the various different kinds of hosts.
ahoythehomepagefinder.html.
5 http://www.w3c.org/Robot.
Since VM BIN is unsafe, we configured it to run only those binaries accompanied
by a certificate signed by some trusted principal-the wr codeauth
wrapper does this. And, finally, we designed a wrapper (called Webbot)
to extend the functionality of the W3C stationary Webbot application for
execution as an agent. This wrapper
. moves the Webbot binary to a specified set of web servers, one at a
time, and
. restricts Webbot to checking only local links, storing in a folder any
remote links encountered for checking when that remote site is later
being visited.
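The link partitioning performed by the wrapper can be sketched as follows. This is a hypothetical sketch of the idea, not the Webbot wrapper's code; crawl_site and the folder representation are invented, and the URLs are placeholders.

```python
# Sketch of the link partitioning done by the Webbot wrapper (illustrative):
# only links local to the current site are checked now; remote links are
# stored, keyed by site, for checking when that site is visited later.

from urllib.parse import urlparse

def crawl_site(site, links, remote_folder):
    """Check local links now; queue remote links per site for later visits."""
    checked = []
    for link in links:
        host = urlparse(link).netloc
        if host == site:
            checked.append(link)                      # validate locally
        else:
            remote_folder.setdefault(host, []).append(link)
    return checked

remote = {}
checked = crawl_site(
    "www.cs.uit.no",
    ["http://www.cs.uit.no/a.html", "http://www.cs.cornell.edu/b.html"],
    remote,
)
# b.html is deferred until the agent later visits www.cs.cornell.edu.
```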
The relative ease with which the crawler agent was built confirms that
Tacoma's primitives can facilitate easy construction of agents from existing
components.
To evaluate the performance of our crawler agent, we used a web server
at the University of Tromsø with 3600 pages totaling 381 Mbytes of data.
Webbot first crawled this server remotely from Cornell University. We then
dispatched our crawler agent from Cornell to crawl the web locally in Tromsø.
The UNIX program traceroute reported 12 hops between the webbot and
the web server in the remote case and 2 hops in the local case. And we found
that the crawling took 1941 seconds when the Webbot was run remotely from
Cornell but took only 474 seconds when the Webbot-based crawler agent was
run. Detailed experimental data can be found in [SJ00].
6 TACOMA Support for Thin Clients
The run-time footprint of Tacoma renders the system unsuitable for execution
on small portable devices and thin clients, such as PDAs and cellular
phones. Also, Tacoma does not provide adequate support for disconnected
hosts. Since so many have claimed that agents would be ideal for structuring
applications that run on these devices [Whi94, HCK95, GCKR00], we
built Tacoma lite, a version of Tacoma for devices hosting PalmOS.
In doing so, we hoped to gain experience structuring
distributed applications involving thin clients with mobile agents.
The Tacoma lite programming model di#ers from Tacoma in its handling
of disconnected hosts. With Tacoma, execution of a meet that names
[Figure 2: SMS Messages as Mobile Agents. The figure shows cellular phones and sensors communicating through a gateway with SMS agents, email agents, and a notification server.]
a disconnected host simply fails; Tacoma lite supports hostels for agents
trying to migrate to a disconnected site. Agents in a hostel are queued until
the destination host connects; they are then forwarded in a manner similar
to the Docking System in the D'Agents system [KG99, KGN
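The hostel mechanism can be sketched as a per-host queue. This is an illustrative model only; the Hostel class and its methods are invented names, not the Tacoma lite API.

```python
# Sketch of the hostel mechanism (illustrative): agents bound for a
# disconnected host are queued and forwarded once the host reconnects.

from collections import defaultdict

class Hostel:
    def __init__(self):
        self.queues = defaultdict(list)
        self.connected = set()
        self.delivered = []

    def meet(self, host, agent):
        if host in self.connected:
            self.delivered.append((host, agent))   # forward immediately
        else:
            self.queues[host].append(agent)        # queue until reconnect

    def connect(self, host):
        self.connected.add(host)
        for agent in self.queues.pop(host, []):    # flush the queue in order
            self.delivered.append((host, agent))

h = Hostel()
h.meet("pda-1", "agent-A")      # pda-1 is disconnected: agent is queued
h.meet("pda-1", "agent-B")
h.connect("pda-1")              # both agents are now forwarded
```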
The run-time footprint problem is solved in Tacoma lite by using existing
functionality on the portable device-email and HTTP support-so
that full Tacoma functionality can be located on a larger server elsewhere
in the system [JRS96]. The transport mechanism used for sending agents
and receiving results on cellular phones is the GSM Short Message Service
(SMS), a store-and-forward service like SMTP [RGT]. GSM base stations
buffer messages for delivery if the target phone is disconnected. To bridge
between the GSM and IP networks, we deployed an SMS to IP gateway
available from Telenor.
. SMS messages from cellular phones are received on the gateway processor
and converted to email messages. These are sent on the IP network
for delivery to Tacoma, where they are converted into a briefcase. A
meet is then issued to a service agent designated in the original SMS
message.
. An agent can communicate with a cellular phone by doing a meet with
a service agent. That service agent sends the briefcase to the gateway
processor, which then generates a suitable message for display on the
cellular phone.
Figure 2 summarizes this infrastructure for handling communication between
cellular phones and Tacoma.
SMS messages cannot exceed 160 characters. This means agents constructed
by cellular phone users must be terse, and that precludes use of a
general-purpose programming language here. Application-specific languages
appear to be a viable solution for the time being-at least until the capacity
of these small devices grows. The first application we built for a cellular
phone was a weather alarm system [JJ99]. For that system, one would write a
terse program of the form
ws gt 20 & t lt ...
to request notification if ever the windspeed (ws) is greater than (gt) 20
meters per second and (&) the temperature (t) is less than (lt) a given
number of degrees.
Whenever the predicate specified by the agent evaluates to true, a short
notification is sent back to the cellular phone. Obviously, use of application-specific
languages limits the class of "agents" that can be written by the user
of a cellular phone.
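An evaluator for such a terse application-specific language fits in a few lines. This is a hypothetical sketch, not the weather alarm's implementation; the grammar (variable, gt/lt operator, number, terms joined by &) follows the example, and the threshold values below are illustrative.

```python
# Sketch of an evaluator for a terse application-specific predicate language
# like the weather alarm's (illustrative grammar: "<var> <op> <num>" terms
# joined by "&"; operators gt and lt).

OPS = {"gt": lambda a, b: a > b, "lt": lambda a, b: a < b}

def holds(program, readings):
    """True when every '&'-joined term is satisfied by the sensor readings."""
    for term in program.split("&"):
        var, op, value = term.split()
        if not OPS[op](readings[var], float(value)):
            return False
    return True

# Notify when windspeed exceeds 20 m/s and temperature is below 0 degrees
# (the temperature threshold here is an assumed example value).
alarm = "ws gt 20 & t lt 0"
fired = holds(alarm, {"ws": 25.0, "t": -3.0})
```

Whenever holds returns True for the current sensor readings, a short notification would be sent back to the phone.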
7 Factored Agents: Beyond Wrappers
Over the course of the Tacoma project, it has become clear that developing
an agent as if it were a single monolithic object is a bad idea, because it
forces the agent programmer to deal unnecessarily with complexity. Adopting
wrappers for agents was a start at developing an infrastructure to allow better
agent structuring. Recently, we have been exploring a new approach that
leverages characteristics intrinsic to all agents and separates three concerns:
Function which deals with data and its transformation.
Mobility which deals with determining sites the agent will visit and mechanisms
involved in transferring data and control.
Management which deals with the glue that controls the agent's function
and mobility.
This new structuring approach is embodied in the latest Tacoma prototype
TOS, which defines a language for programming carriers, the management
parts of agents. 6
The design of TOS was an outgrowth of our work in developing a set
of applications for distributed management: a generic software management
6 The name "carrier" was chosen to reflect the intended usage as a carrier of information
and software.
platform, a distributed résumé-database search engine [Joh98], a peer-to-
peer network computing platform, and a distributed intrusion-detection system
[LJM01]. It became clear that these applications had much in common,
but we lacked the structuring tools to make that commonality apparent in the
agent's code. So, we are now developing a library of carriers for constructing
classes of agent applications.
The largest of these e#orts is OpenGrid, a platform based on TOS-hence
"open" to diverse function, mobility, and management regimes-for running
highly parallel computations [FK98]. Highly parallel computations are often
structured according to the controller/worker paradigm. With TOS, different
carriers can be invoked in OpenGrid to facilitate computations for a
variety of network topology and computer configurations. Carriers from the
TOS standard library are used to install and configure legacy software on
the grid's computers; other carriers, written specifically for OpenGrid, provide
the communication infrastructure used by the controllers and workers,
as well as providing di#erent levels of fault-tolerance. So programmers of the
parallel applications on OpenGrid often are not required to write carriers;
they need only implement algorithms for the workers and, if necessary, the
controller.
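The controller/worker paradigm mentioned above can be sketched with local threads standing in for grid workers. This is an illustrative sketch only, not OpenGrid or its carriers; controller_worker and its parameters are invented names.

```python
# Sketch of the controller/worker paradigm (illustrative): the controller
# hands out tasks via a shared queue, workers pull tasks and return results;
# carriers (not shown) would handle software installation and transport.

import threading
from queue import Queue, Empty

def controller_worker(tasks, worker_fn, n_workers=3):
    work, results = Queue(), Queue()
    for t in tasks:
        work.put(t)

    def worker():
        while True:
            try:
                t = work.get_nowait()   # pull the next task, if any
            except Empty:
                return                  # no work left: worker terminates
            results.put(worker_fn(t))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return sorted(results.queue)        # order of completion is nondeterministic

results = controller_worker([1, 2, 3, 4], worker_fn=lambda x: x * x)
```

With this split, the application programmer supplies only worker_fn (and, if necessary, the controller logic), matching the division of labour described in the text.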
8 Conclusions
Methods for structuring distributed computations seem to fall in and out
of fashion: client-server distributed computing, middleware and groupware,
mobile agents, and now peer-to-peer computing. No one of these has proved
a solution to all distributed application structuring problems. Nor should we
expect to find such a magic bullet. Each hides some details at the expense
of others. And the same computation could be written using any of them.
What di#ers from one to the other is the ease with which a computation can
be expressed and understood. What details are brought to the fore. What
details are hidden. What is easy to say and what is hard to say. One size is
never going to fit all, and having expectations to the contrary is naive.
Much research in mobile agents has focused on issues concerning mechanisms
and abstractions. In the Tacoma project, we took a strong stand on
some of these issues and were agnostic on others. We religiously avoided designing
a language or o#ering language-based guarantees, because we wanted
agents to serve as a glue for building distributed applications. By choosing
a language or a class of languages or a particular representation for data we
would have artificially limited the applicability of our experimental prototypes.
The generality of our folder and meet mechanisms decouples Tacoma
from the choice of language used in writing individual agents. A program in
any language can be stored in a folder and moved from host to host. And,
any language that a given host supports can be used to program the portion
of an agent executed on that host. This generality is particularly useful in
using agents for system integration. Existing applications do not have to be
rewritten; COTS components can be accommodated.
Similarly, we took the view that agents themselves should be responsible
for packaging and transferring their state from site to site-adopting what
became known as weak mobility. Awkward as this might seem, the problem
of automatically performing state capture is now understood to be quite
complex. And by our choice of mechanisms, we managed to avoid confronting
that problem. However, in higher-level programming models where state is
invisible to the programmer, automatic state capture becomes a necessity.
The cost of moving an agent from one processor to another then cannot be predicted,
and designing applications to meet performance goals becomes di#cult.
Not all of the work performed under the auspices of the Tacoma project
was concerned with new mechanisms or abstractions. Our work in fault-tolerance
and, later, in security for systems of agents are examples. A test
that we applied here as new approaches were developed was to ask: "What
about mobile agents enables this solution?". The answer was often surprising.
From the work in fault-tolerance, we learned that if replicas can move, then
their votes must be authenticated-a problem that can be ignored in traditional
TMR (Triple Modular Redundancy) replication where point-to-point
communication channels provide hardware-implemented authentication. Our
work in security led to a new enforcement mechanism (called inlined reference
monitors) for fine-grained access control, but we quickly realized that this
mechanism had no real dependence on what was unique about mobile agents.
So that investigation changed course and concentrated on implementing the
Principle of Least Privilege in broader settings. Not being agent-based, this
security work is not discussed in this paper-in a sense, it has outgrown the
world of mobile agents.
The lack of benchmark applications has clearly hampered research in
the area of mobile agents. The existence of such applications would allow
researchers to evaluate choices they make and might settle some of the debates.
Applications would also give insight into what problems are important
to solve.
Some have taken the dearth of benchmarks as a symptom that mobile
agents are a solution in search of a problem. That is one interpretation. But
other interpretations are also plausible. It could be that mobile code affords
an expressive power that we are not used to exploiting, and a new generation
of applications will emerge as we become comfortable with such expressive
power. It also could be that applications that simultaneously exploit all of
the flexibility of mobile agents are rare, but applications that take advantage
of a few dimensions are not rare (but are never considered paradigmatic).
Acknowledgements
The authors would like to express their gratitude to Keith Marzullo and
Dmitrii Zagorodnov at the University of San Diego, Yaron Minsky at Cornell
University, and many students who have been involved in the Tacoma
project at the University of Tromsø over the years. Anonymous reviewers
provided many detailed and helpful suggestions on an earlier version of this
paper.
--R
A Language for Resource-Aware Mobile Programs
KQML as an agent communication language.
Mobile agents: Motivations and state of the art.
Agent Tcl: A transportable agent system.
Mobile Agents: Are they a good idea?
Mobile Software on Mobile Hardware - Experiences with TACOMA on PDAs
Ubiquitous Devices United: Enabling Distributed Computing Through Mobile Code.
NAP: Practical fault-tolerance for itinerant computations
Mobile Agent Applicability.
Supporting broad internet access to TACOMA.
Mobile code: The future of the Internet
Agent Tcl: Targeting the needs of mobile computers.
TOS: Kernel Support for Distributed Systems Management.
Programming and Deploying Java Mobile Agents with Aglets.
To Engineer Is Human: The Role of Failure in Successful Design.
Process Migration in DEMOS/MP.
The Architecture of the Ara Platform for Mobile Agents.
GSM 07.05: Short message service (SMS) and cell broadcast service (CBS).
Towards fault-tolerant and secure agentry
How to share a secret.
Adding Mobility to Non-mobile Web Robots
Preemptable remote execution facilities for the V-system
Building Adaptive Systems Using Ensemble.
Telescript technology: The foundation for the electronic marketplace.
Attacking the process migration bottleneck.
--TR
Attacking the process migration bottleneck
KQML as an agent communication language
Building adaptive systems using ensemble
The grid
Ubiquitous devices united
Mobile agents and the future of the internet
Preemptable remote execution facilities for the V-system
How to share a secret
TOS
Supporting broad internet access to TACOMA
Programming and Deploying Java Mobile Agents Aglets
Mole - Concepts of a mobile agent system
Agent Tcl
Towards Fault-Tolerant and Secure Agentry
The Architecture of the Ara Platform for Mobile Agents
Mobile Agent Applicability
Sumatra
Process migration in DEMOS/MP
Mobile Agents: Motivations and State-of-the-Art Systems
--CTR
M. J. O'Grady, G. M. P. O'Hare, Mobile devices and intelligent agents: towards a new generation of applications and services, Information Sciences: Informatics and Computer Science: An International Journal, v.171 n.4, p.335-353, 12 May 2005
Michael Luck , Peter McBurney , Chris Preist, A Manifesto for Agent Technology: Towards Next Generation Computing, Autonomous Agents and Multi-Agent Systems, v.9 n.3, p.203-252, November 2004 | agent integrity;wrappers;distributed applications;mobile agent system;communication and synchronization of agents;weak mobility;mobile code |
586450 | Principles of component-based design of intelligent agents. | Compositional multi-agent system design is a methodological perspective on multi-agent system design based on the software engineering principles process and knowledge abstraction, compositionality, reuse, specification and verification. This paper addresses these principles from a generic perspective in the context of the compositional development method DESIRE. An overview is given of reusable generic models (design patterns) for different types of agents, problem solving methods and tasks, and reasoning patterns. Examples of supporting tools are described. | Introduction
The area of Component-Based Software Engineering is currently a well-developed area of
research within Software Engineering; e.g., [15], [27], [42], [43]. More specific approaches
to component-based design of agents are often restricted to object-oriented implementation
environments, usually based on Java [2], [23], [36]. In these approaches,
knowledge-based architectures are rarely covered, and if so, only with
agents that are based on one knowledge base [38]. Techniques for complex, knowledge-intensive
tasks and domains developed within Knowledge Engineering play no significant
role. In contrast, this paper addresses the design of component-based intelligent agents in
the sense that (1) the agents can be specified on a conceptual (design) level instead of an
implementation level, and (2) specifications exploit knowledge-based techniques as
developed within Knowledge Engineering, enabling the design of more complex agents, for
example for knowledge-intensive applications.
The compositional multi-agent design method DESIRE (DEsign and Specification of
Interacting REasoning components) supports the design of component-based autonomous
interactive agents. Both the intra-agent functionality (i.e., the expertise required to perform
the tasks for which an agent is responsible in terms of the knowledge, and reasoning and
acting capabilities) and the inter-agent functionality (i.e., the expertise required to perform
and guide co-ordination, co-operation and other forms of social interaction in terms of
knowledge, and reasoning and acting capabilities) are explicitly modelled. DESIRE views
the individual agents and the overall system as compositional structures - hence all
functionality is designed in terms of interacting, compositionally structured components. In
this paper an overview is given of the principles behind this design method. Section 2
briefly discusses the process of design and the role of compositionality within this process.
Section 3 discusses the problem analysis and requirements elicitation process. Section 4
introduces the elements used to specify conceptual design and detailed design: process
composition, knowledge composition and their relationships. Design rationale and
verification is discussed in Section 5. Section 6 discusses the notion of component-based
generic models that form the basis of reuse during design processes. The availability of a
large variety of such generic models for agents and tasks forms an important basis of the
design method. In this section a number of these models are presented. Section 7 briefly
discusses the graphical software environment to support the design process. Section 8
concludes the paper with a discussion.
2 The design process and types of compositionality
The design of a multi-agent system is an iterative process, which aims at the identification
of the parties involved (i.e., human agents, system agents, external worlds), and the
processes, in addition to the types of knowledge needed. Conceptual descriptions of specific
processes and knowledge are often first attained. Further explication of these conceptual
design descriptions results in detailed design descriptions, most often in iteration with
conceptual design. During the design of these models, partial prototype implementations
may be used to analyse or verify the resulting behaviour. On the basis of examination of
these partial prototypes, new designs and prototypes are generated and examined, and so on
and so forth. This approach to evolutionary development of systems is characteristic of the
development of multi-agent systems in DESIRE.
During a multi-agent system design process, DESIRE distinguishes the following
descriptions (see Figure 1):
. problem description
. conceptual design
. detailed design
. operational design
. design rationale
The problem description includes the requirements imposed on the design. The rationale
specifies the choices made during design at each of the levels, and assumptions with respect
to its use.
[Figure 1: Problem description, levels of design and design rationale. The figure shows the problem description, the conceptual, detailed and operational design levels, and the design rationale as related elements.]
The relationship between the levels of design (conceptual, detailed, operational) is well-defined
and structure-preserving. The conceptual design includes conceptual models for
each individual agent, the external world, the interaction between agents, and the
interaction between agents and the external world. The detailed design of a system, based
on the conceptual design, specifies all aspects of a system's knowledge and behaviour. A
detailed design provides sufficient detail for operational design. Prototype
implementations are automatically generated from the detailed design.
There is no fixed sequence of design: depending on the specific situation, different types
of knowledge are available at different points during system design. The end result, the
final multi-agent system design, is specified by the system designer at the level of detailed
design. In addition, important assumptions and design decisions are specified in the design
rationale. Alternative design options together with argumentation are included. On the basis
of verification during the design process, properties of models can be documented with the
related assumptions. The assumptions define the limiting conditions under which the model
will exhibit specific behaviour.
Compositionality is a general principle that refers to the use of components to structure a
design. Within the DESIRE method components are often complex compositional
structures in which a number of other, more specific components are grouped. During
design different levels of process abstraction are identified. Processes at each of these levels
(except the lowest level) are modelled as (process) components composed of components at
the adjacent lower level.
Processes within a multi-agent system may be viewed as the result of interaction
between more specific processes. A complete multi-agent system may, for example, be seen
as one single component responsible for the performance of the overall process. Within
this one single component a number of agent components and an external world may be
distinguished, each responsible for a more specific process. Each agent component may, in
turn, have a number of internal components responsible for more specific parts of this
process. These components may themselves be composed, again entailing interaction
between other more specific processes.
The ontology used to express the knowledge needed to reason about a specific domain
may also be seen as a single (knowledge) component. This knowledge structure may be
composed of a number of more specific knowledge structures which, in turn, may again be
composed of other even more specific knowledge structures.
As shown in Figure 2 compositionality of processes and compositionality of knowledge
are two separate, orthogonal dimensions. The compositional knowledge structures are
referenced by compositional process structures, when needed.
Figure 2 Compositionality of processes and compositionality of knowledge
Compositionality is a means to acquire information and process hiding within a model:
by defining processes and knowledge at different levels of abstraction, unnecessary detail
can be hidden. Compositionality also makes it possible to integrate different types of
components in one agent. Components and groups of components can be easily included in
new designs, supporting reuse of components at all levels of design.
3. Problem Description and Requirements Elicitation
Which techniques are used to acquire a problem description is not pre-defined. Techniques
vary in their applicability, depending on, for example, the situation, the task, and the type of
knowledge on which the system developer wishes to focus. Acquisition of requirements to
be imposed on the system as part of the problem description is crucial. These requirements
are part of the initial problem definition, but may also evolve during the development of a
system.
Requirements Engineering is a well-studied field of research. In recent years
requirements engineering for distributed and agent systems has been studied, e.g., [19].
At the level of the multi-agent system, requirements are related to the
dynamics of interaction and co-operation patterns. At the level of individual agents,
requirements are related to agent behaviour. Due to the dynamic complexity, analysis and
specification of such requirements is a difficult process.
Requirements can be expressed in an informal, semi-formal or formal manner. In the
context described above, the following is an informally expressed requirement for the
dynamics of the multi-agent system as a whole:
R2: Each service request must be followed by an adequate service proposal after a certain time delay.
In a structured, semi-formal manner, this requirement can be expressed as follows:
if at some point in time
an agent A outputs: a service request, to an appropriate other agent B
then at a later point in time
agent B outputs: a proposal for the request, to agent A
and at a still later point in time
agent A outputs: proposal is accepted, to agent B
The following temporal formalisation is made (with γ a variable ranging over traces):
∀γ, t, A :
holds(state(γ, t, output(A)), communication_from_to(request(r), A, B)) ⇒ …
The formal language used is comparable to situation calculus (e.g., compare holds to the
holds-predicate in situation calculus), but with explicit variables for traces and time. The
expression
holds(state(γ, t, output(A)), communication_from_to(request(r), A, B))
means that within trace γ
at time point t a communication statement
is placed in the output interface of agent A. Here a trace is a sequence over time of three-valued
information states of the system, including input and output information states of all
of the agents, and their environment. The time frame can be discrete, or a finite variability
assumption can be used. For further details on the use of this predicate logic temporal
language, see [25].
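Requirement R2 and its temporal formalisation can also be read operationally. The sketch below (the trace encoding and all names are illustrative assumptions, not part of DESIRE) checks a finite, discrete trace against the pattern "each request is followed at a later time by a proposal, and at a still later time by an acceptance":

```python
# Illustrative check of requirement R2 over a finite, discrete trace.
# A trace maps (time, agent) to the set of statements in that agent's
# output interface; this encoding is a simplifying assumption.

def holds(trace, t, agent, statement):
    """True if `statement` is in the output interface of `agent` at time t."""
    return statement in trace.get((t, agent), set())

def satisfies_R2(trace, times, agents, request, proposal, acceptance):
    """Every request by A at t needs a later proposal by B and a still
    later acceptance by A."""
    for t in times:
        for a, b in agents:
            if holds(trace, t, a, request):
                if not any(holds(trace, t1, b, proposal) and
                           any(holds(trace, t2, a, acceptance)
                               for t2 in times if t2 > t1)
                           for t1 in times if t1 > t):
                    return False
    return True

trace = {
    (0, "A"): {"request(r)"},
    (1, "B"): {"propose(r)"},
    (2, "A"): {"accept(r)"},
}
print(satisfies_R2(trace, range(3), [("A", "B")],
                   "request(r)", "propose(r)", "accept(r)"))  # True
```

A trace that contains the request but no later proposal would make the check fail, mirroring a violation of R2.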
Besides requirements on the dynamics of the overall multi-agent system, requirements can
also be expressed on the behaviour of single agents. For example, an agent
who is expected to adequately handle service requests should satisfy the following
behaviour requirements:
A1: If agent B receives a request for a service from a client A
and the necessary information regarding this client is not available
then agent B issues a request for this information to that client.
Requirements on the dynamics of a multi-agent system are at a higher process abstraction
level than the behaviour requirements on agents.
4. Conceptual Design and Detailed Design
Conceptual and detailed designs consist of specifications of the following three types:
. process composition,
. knowledge composition,
. the relation between process composition and knowledge composition.
These three types of specifications are discussed in more detail below.
4.1 Process Composition
Process composition identifies the relevant processes at different levels of (process)
abstraction, and describes how a process can be defined in terms of lower level processes.
Depending on the context in which a system is to be designed two different views can be
taken: a task perspective, and a multi-agent perspective. The task perspective refers to the
view in which the processes needed to perform an overall task are identified first. These
processes (or sub-tasks) are then delegated to appropriate agents and the external world,
after which these agents and the external world are designed. The multi-agent perspective
refers to the view in which agents and an external world are first identified and then the
processes within each agent and within the external world.
4.1.1 Identification of processes at different levels of abstraction
Processes can be described at different levels of abstraction; for example, the processes for
the multi-agent system as a whole, processes within individual agents and the external
world, processes within task-related components of individual agents.
Modelling a process
The processes identified are modelled as components. For each process the types of
information used as input and resulting as output are identified and modelled as input and
output interfaces of the component.
Modelling process abstraction levels
The levels of process abstraction identified are modelled as abstraction/specialisation
relations between components at adjacent levels of abstraction: components may be
composed of other components or they may be primitive. Primitive components may be
either reasoning components (for example based on a knowledge base), or, alternatively,
components capable of performing tasks such as calculation, information retrieval,
optimisation, et cetera.
The identification of processes at different abstraction levels results in a specification of
components that can be used as building blocks, and a specification of the sub-component
relation, defining which components are a sub-component of which other
component. The distinction of different process abstraction levels results in process hiding.
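The sub-component relation and the resulting abstraction levels can be sketched as a simple composition tree (a minimal illustration; the class and component names are assumptions, not DESIRE syntax):

```python
# Minimal sketch of components as building blocks with a sub-component
# relation, giving process abstraction levels and process hiding.

class Component:
    def __init__(self, name, sub_components=None):
        self.name = name
        self.sub_components = sub_components or []  # empty => primitive

    def is_primitive(self):
        return not self.sub_components

    def abstraction_levels(self):
        """Depth of the composition tree below (and including) this component."""
        if self.is_primitive():
            return 1
        return 1 + max(c.abstraction_levels() for c in self.sub_components)

wim = Component("World Interaction Management")
opc = Component("Own Process Control")
agent = Component("agent_A", [opc, wim])
system = Component("multi_agent_system", [agent, Component("external_world")])
print(system.abstraction_levels())  # 3
```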
4.1.2 Composition
The way in which processes at one level of abstraction in a system are composed of
processes at the adjacent lower abstraction level in the same system is called composition.
This composition of processes is described not only by the component/sub-component
relations, but in addition by the (possibilities for) information exchange between processes
(static view on the composition), and task control knowledge used to control processes and
information exchange (dynamic view on the composition).
Information exchange
Information exchange defines which types of information can be transferred between
components and the information links by which this can be achieved. Within each of the
components private information links are defined to transfer information from one
component to another. In addition, mediating links are defined to transfer information from
the input interfaces of encompassing components to the input interfaces of the internal
components, and to transfer information from the output interfaces of the internal
components to the output interface of the encompassing components.
Task control knowledge
Components may be activated sequentially or they may be continually capable of
processing new input as soon as it arrives (awake). The same holds for information links:
information links may be explicitly activated or they may be awake. Task control
knowledge specifies under which conditions which components and information links are
active (or made awake). Evaluation criteria, expressed in terms of the evaluation of the
results (success or failure), provide a means to further guide processing.
Task control knowledge specifies when and how processes are to be performed and
evaluated. Goals of a process are defined by the task control foci together with the extent to
which they are to be pursued. Evaluation of the success or failure of a process's
performance is specified by evaluation criteria together with an extent. Processes may be
performed in sequence or in parallel; some may be continually 'awake' (e.g., able to react
to new input as soon as it arrives), while others need to be activated explicitly.
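The distinction between explicitly activated and 'awake' components and links can be sketched as follows (a toy model under assumed names; actual DESIRE task control knowledge also covers task control foci and evaluation criteria):

```python
# Sketch of task control state: components and information links may be
# explicitly activated, or kept 'awake' so they process new input as soon
# as it arrives. Names and the boolean scheme are illustrative assumptions.

class TaskControl:
    def __init__(self):
        self.active = set()  # explicitly activated right now
        self.awake = set()   # continually able to react to new input

    def activate(self, name):
        self.active.add(name)

    def make_awake(self, name):
        self.awake.add(name)

    def may_process(self, name, has_new_input):
        """A component or link processes if active, or awake with new input."""
        return name in self.active or (name in self.awake and has_new_input)

tc = TaskControl()
tc.make_awake("world_interaction_management")  # reactive, awake component
tc.activate("agent_specific_task")             # sequential, explicit activation
print(tc.may_process("world_interaction_management", has_new_input=True))   # True
print(tc.may_process("world_interaction_management", has_new_input=False))  # False
print(tc.may_process("agent_specific_task", has_new_input=False))           # True
```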
4.2 Knowledge Composition
Knowledge composition identifies knowledge structures at different levels of (knowledge)
abstraction, and describes how a knowledge structure can be defined in terms of lower level
knowledge structures. The knowledge abstraction levels may correspond to the process
abstraction levels, but this is often not the case; often the matrix depicted in Figure 2 shows
an m to n correspondence between processes and knowledge structures, with m, n > 1.
4.2.1 Identification of knowledge structures at different abstraction levels
The two main structures used as building blocks to model knowledge are: information types
and knowledge bases. These knowledge structures can be identified and described at
different levels of abstraction. At the higher levels details can be hidden. The resulting
levels of knowledge abstraction can be distinguished for both information types and
knowledge bases.
Information types
An information type defines an ontology (lexicon, vocabulary) to describe objects or terms,
their sorts, and the relations or functions that can be defined on these objects. Information
types are defined as signatures (sets of names for sorts, objects, functions, and relations) for
order-sorted predicate logic. Information types can be specified in graphical form, or in
formal textual form.
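A signature of this kind can be rendered as a small data structure together with a well-sortedness check for atoms (the concrete representation and the example ontology are assumptions for illustration, not DESIRE's textual format):

```python
# An information type as a signature for order-sorted predicate logic:
# sorts, objects (typed constants), and relations over sorts.

from dataclasses import dataclass, field

@dataclass
class InformationType:
    name: str
    sorts: set = field(default_factory=set)
    objects: dict = field(default_factory=dict)    # object name -> sort
    relations: dict = field(default_factory=dict)  # relation name -> arg sorts

service_ontology = InformationType(
    name="service_info",
    sorts={"AGENT", "SERVICE"},
    objects={"A": "AGENT", "B": "AGENT", "r": "SERVICE"},
    relations={"request": ("AGENT", "SERVICE"),
               "proposal": ("AGENT", "SERVICE")},
)

def well_sorted(info_type, relation, args):
    """Check an atom against the signature."""
    expected = info_type.relations[relation]
    return (len(args) == len(expected) and
            all(info_type.objects[a] == s for a, s in zip(args, expected)))

print(well_sorted(service_ontology, "request", ("A", "r")))  # True
print(well_sorted(service_ontology, "request", ("r", "A")))  # False
```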
Knowledge bases
Knowledge bases use ontologies defined in information types. Relations between
information types and knowledge bases define precisely which information types are used.
The relationships between the concepts specified in the information types are defined by the
knowledge bases during detailed design.
4.2.2 Composition of knowledge structures
Information types can be composed of more specific information types, following the
principle of compositionality discussed above. Similarly, knowledge bases can be
composed of more specific knowledge bases. The compositional structure is based on the
different levels of knowledge abstraction distinguished, and results in information and
knowledge hiding.
4.3 Relation between Process Composition and Knowledge Composition
Each process in a process composition uses knowledge structures. Which knowledge
structures (information types and knowledge bases) are used for which processes is defined
by the relation between process composition and knowledge composition. The cells within
the matrix depicted in Figure 2 define these relations.
5. Design Rationale and Compositional Verification
The design rationale behind a design process describes the relevant properties of a system
in relation to the design requirements and the relevant assumptions. The initial requirements
are stated in the initial problem description, others originate during a design process, and
are added to the problem description. Important design decisions are made explicit, together
with some of the alternative choices that could have been made, and the arguments in
favour of and against the different options. At the operational level the design rationale
includes decisions based on operational considerations, such as the choice to implement a
parallel process on one or more machines, depending on the available capacity. This
information is of particular importance for verification.
Requirements imposed on multi-agent systems designed to perform complex and
interactive tasks are often requirements on the behaviour of the agents and the system. As in
non-trivial applications the dynamics of a multi-agent system and the control thereof are of
importance, it is vital to understand how system states change over time. In principle, a
design specifies which changes are possible and anticipated, and which behaviour is
intended. To obtain an understanding of the behaviour of a compositional multi-agent
system, its dynamics can be expressed by means of the evolution of information states over
time. If information states are defined at different levels of process abstraction, behaviour
can be described at different levels of process abstraction as well.
The purpose of verification is to prove that, under a certain set of assumptions, a system
adheres to a certain set of properties, for example the design requirements. A compositional
multi-agent system verification method takes the process abstraction levels and the related
compositional structure into account. In [18], [30], and [6] a compositional verification
method is described and applied to diagnostic reasoning, co-operative information gathering
agents, and negotiating agents, respectively. The verification process is done by a
mathematical proof (i.e., a proof in the form to which mathematicians are accustomed) that
the specification of the system, together with the assumptions, imply the properties that a
system needs to fulfil. The requirements are formulated formally in terms of temporal
semantics. During the verification process the requirements of the system as a whole are
derived from properties of agents (one process abstraction level lower) and these agent
properties, in turn, are derived from properties of the agent components (again one
abstraction level lower).
Primitive components (those components that are not composed of others) can be
verified using more traditional verification methods for knowledge-based systems (if they
are specified by means of a knowledge base), or other verification methods tuned to the
type of specification used. Verification of a (composed) component at a given process
abstraction level is done using
. properties of the sub-components it embeds
. a specification of the process composition relation
. environmental properties of the component (depending on the rest of the system,
including the world).
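One verification step in this scheme can be pictured as deriving a target property from assumed sub-component properties, the composition relation, and environmental properties. In the toy sketch below, properties are mere labels and the "composition relation" is a set of implications; in the actual method each implication is established by a mathematical proof, and all names are illustrative assumptions:

```python
# Schematic compositional verification step: close a set of assumed
# properties under implication rules and check whether the target follows.

def verify_component(target, assumptions, proof_rules):
    """A property holds if assumed, or derivable from held premises."""
    held = set(assumptions)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in proof_rules:
            if conclusion not in held and all(p in held for p in premises):
                held.add(conclusion)
                changed = True
    return target in held

# Assumed sub-component properties and environmental property:
assumptions = {"AIM:forwards_requests", "AST:answers_requests",
               "env:requests_arrive"}
# Composition relation: how sub-component behaviour combines.
proof_rules = [
    ({"AIM:forwards_requests", "AST:answers_requests"}, "agent:responsive"),
    ({"agent:responsive", "env:requests_arrive"},
     "system:every_request_answered"),
]
print(verify_component("system:every_request_answered",
                       assumptions, proof_rules))  # True
```

Note how the agent-level property is derived one abstraction level below the system-level property, mirroring the top-down structure of the verification process.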
This introduces compositionality in the verification process: given a set of environmental
properties, the proof that a certain component adheres to a set of behavioural properties
depends on the (assumed) properties of its sub-components, and the composition relation:
properties of the interactions between those sub-components, and the manner in which they
are controlled. The assumptions under which the component functions properly, are the
properties to be proven for its sub-components. This implies that properties at different
levels of process abstraction play their own role in the verification process. Compositional
verification has the following advantages; see also [1], [26], [30]:
. reuse of verification results is supported (refining an existing verified
compositional model by further decomposition, leads to verification of the refined
system in which the verification structure of the original system can be reused).
. process hiding limits the complexity of the verification per abstraction level.
A condition for applying the compositional verification method described above is the
availability of an explicit specification of how the system description at one abstraction
level is composed from the descriptions at the adjacent lower abstraction level.
The formalised properties and their logical relations, resulting from a compositional
verification process, provide a more general insight in the relations between different forms
of behaviour. For example, in [18] different properties of diagnostic reasoning and their
logical relations have been formalised in this manner, and in [30] the same has been done
for pro-activeness and reactiveness properties for co-operative information gathering
agents. In [6] termination and successfulness properties for negotiation processes are
analysed.
6. Reusability and Generic Models
The iterative process of modelling processes and knowledge is often resource-consuming.
To limit the time and expertise required to design a system, a development method should
reuse as many elements as possible. Within a compositional development method, generic
agent models and task models, and existing knowledge structures (ontologies and
knowledge bases) may be used for this purpose. Which models are used, depends on the
problem description: existing models are examined, discussed, rejected, modified, refined
and/or instantiated in the context of the problem at hand. Initial abstract descriptions of
agents and tasks can be used to generate a variety of more specific agent and task
descriptions through refinement and composition (for which existing models can be
employed as well).
Agent models and task models can be generic in two senses: with respect to the
processes (abstracting from the processes at the lower levels of process abstraction), and
with respect to the knowledge (abstracting from lower levels of knowledge abstraction, e.g.,
a specific domain of application). Often different levels of genericity of a model may be
distinguished. A refinement of a generic model to lower process abstraction levels, resulting
in a more specific model is called a specialisation. A refinement of a generic model to
lower knowledge abstraction levels, e.g., to model a specific domain of application, is
called an instantiation. Compositional system design focuses on both aspects of genericity,
often starting with a generic agent model. This model may be modified or refined by
specialisation and instantiation. The process of specialisation replaces a single 'empty'
component of a generic model by a composed component (consisting of a number of sub-
components). The process of instantiation takes a component of a generic model and fills it
with (domain) specific information types and knowledge bases. During these refinement
processes components can also be deleted or added. The compositional structure of the
design is the basis for performing such operations on a design.
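The two refinement operations can be sketched as operations on a simple model representation (all names, and the dictionary encoding of a model, are assumptions for illustration):

```python
# Sketch of the two refinement operations on a generic model:
# specialisation replaces an 'empty' component by a composed component;
# instantiation fills a component with domain-specific knowledge structures.

def specialise(model, component, sub_components):
    """Refine to lower process abstraction levels."""
    model[component] = {"sub_components": sub_components, "knowledge": {}}
    return model

def instantiate(model, component, information_types, knowledge_bases):
    """Refine to lower knowledge abstraction levels (a specific domain)."""
    model[component]["knowledge"] = {
        "information_types": information_types,
        "knowledge_bases": knowledge_bases,
    }
    return model

# Start from a generic agent model with an 'empty' agent-specific task:
gam = {"agent_specific_task": {"sub_components": [], "knowledge": {}}}
gam = specialise(gam, "agent_specific_task",
                 ["hypothesis_determination", "hypothesis_validation"])
gam = instantiate(gam, "agent_specific_task",
                  ["car_faults_ontology"], ["car_diagnosis_kb"])
print(gam["agent_specific_task"]["sub_components"])
# ['hypothesis_determination', 'hypothesis_validation']
```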
The applicability of a generic agent model depends on the basic characteristics of an
agent in the problem description. The applicability of a generic task model for agent-specific
tasks depends not only on the type of task involved, but also on the way in which the
task is to be approached. Since the availability of a variety of generic models is crucial for
the quality of support that can be offered during a design process, in this section a number
of generic models available in DESIRE are discussed.
6.1 Generic Agent Models
Characteristics of automated agents vary significantly depending on the purposes and tasks
for which they have been designed. Agents may or may not, for example, be capable of
communicating with other agents. A fully reactive agent may only be capable of reacting to
incoming information from the external world. A fully cognitive and social agent, in
comparison, may be capable of planning, monitoring and effectuating co-operation with
other agents. Which agent models are most applicable to a given situation (possibly in
combination) is determined during system design. Generic models for weak agents, co-operative
agents, BDI-agents and deliberative normative agents are briefly described below.
Figure 3 Generic model for the weak agent notion
6.1.1 Generic Model for the Weak Agent Notion: GAM
The Generic Agent Model (GAM) depicted in Figure 3 supports the notion of a weak agent,
for which autonomy, pro-activeness, reactiveness and social abilities are distinguished as
characteristics; cf. [44]. This type of agent:
. reasons about its own processes (supporting autonomy and pro-activeness)
. interacts with and maintains information about other agents (supporting social abilities,
and reactiveness and pro-activeness with respect to other agents)
. interacts with and maintains information about the external world (supporting
reactiveness and pro-activeness with respect to the external world).
The six components are: Own Process Control (OPC), Maintenance of World Information
(MWI), World Interaction Management (WIM), Maintenance of Agent Information (MAI),
Agent Interaction Management (AIM), and Agent Specific Tasks (AST). The processes
involved in controlling an agent (e.g., determining, monitoring and evaluating its own goals
and plans) but also the processes of maintaining a self model are the task of the component
Own Process Control. The processes involved in managing communication with other
agents are the task of the component Agent Interaction Management. Maintaining
knowledge of other agents' abilities and knowledge is the task of the component
Maintenance of Agent Information. Comparably, the processes involved in managing
interaction with the external (material) world are the task of the component World
Interaction Management. Maintaining knowledge of the external (material) world is the task
of the component Maintenance of World Information. The specific task for which an agent
is designed (for example: design, diagnosis), is modelled in the component Agent Specific
Task. Existing (generic) task models may be used to further specialise this component; see
Section 6.2.
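The division of labour among the six GAM components can be summarised as a small lookup, used here to route incoming information to its managing component (a deliberate simplification; the routing function and its categories are assumptions for illustration):

```python
# The six GAM components, and a toy router assigning incoming
# information to the component responsible for it.

GAM_COMPONENTS = {
    "OPC": "Own Process Control",
    "MWI": "Maintenance of World Information",
    "WIM": "World Interaction Management",
    "MAI": "Maintenance of Agent Information",
    "AIM": "Agent Interaction Management",
    "AST": "Agent Specific Tasks",
}

def route(info_kind):
    """Route incoming information to the managing component (illustrative)."""
    return {"communication": "AIM",        # managing communication
            "observation_result": "WIM",   # managing world interaction
            "world_fact": "MWI",           # maintaining world knowledge
            "agent_fact": "MAI"}.get(info_kind, "AST")

print(route("communication"))                        # AIM
print(GAM_COMPONENTS[route("observation_result")])   # World Interaction Management
```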
6.1.2 Generic Co-operative Agent Model: GCAM
If an agent explicitly reasons about co-operation with other agents, the generic agent model
depicted in Figure 3 can be extended with an additional component for co-operation
management. This component, the Co-operation Management component,
includes the knowledge needed to acquire co-operation, as shown in Figure 4.
Figure 4 Refinement of Co-operation Management in the generic co-operative agent model GCAM
Achieving co-operation between a number of agents requires plans devised specifically for
this purpose. These plans are the result of reasoning by the component
Generate Project. This component identifies the commitments needed from all agents involved,
and modifies existing plans when necessary.
Figure 5 Composition of the component Generate Project in GCAM
The composition of the component Generate Project in Figure 5 includes the two
components Prepare Project Commitments (for composing an initial project team) and
Generate and Modify Project Recipe (to determine a detailed schedule for the project, in
interaction with the project team members) for these two purposes. Execution of a plan,
also part of co-operation, is monitored by each individual agent involved. This is the task of
the component Monitor Project. The two sub-components of this component, depicted in
Figure 6, are Assess Viability (to determine the feasibility of a plan) and Determine
Consequences (to determine the consequences of changes for the agents involved). The generic model of a
cooperative agent is based on the approach put forward in [28]. For a more detailed
explanation of the composition of processes, the knowledge involved and the interaction
between components, see [9].
Figure 6 Composition of the component Monitor Project in GCAM
6.1.3 Generic Model of a BDI-Agent: GBDIM
An agent that bases its control of its own processes on its own beliefs, desires,
commitments and intentions is called a BDI-agent. The BDI-agent model is a refinement of
the model for a weak agent GAM. The refinement of own process control in the Generic
Model for BDI-agents, GBDIM, is shown in Figure 7.
Figure 7 Refinement of the component Own Process Control in the generic BDI-agent model GBDIM
Beliefs, desires, and intentions together with commitments, are determined in separate
components with interaction between all three. A distinction is made between (1) intentions
and commitments with respect to goals, and (2) intentions and commitments with respect to
plans. This distinction involves different types of knowledge and, as a result, is modelled
by two different components as depicted in Figure 8.
Figure 8 Refinement of the component Intention and Commitment Determination
Note that intentions and commitments with respect to goals directly influence intentions
and commitments with respect to plans, and vice versa. For more detail see [8].
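The interplay described above can be caricatured in a few lines: beliefs feed desire determination, beliefs and desires feed goal commitments, and goal commitments directly influence plan commitments (the selection rules and all predicate names are invented for illustration and carry no DESIRE semantics):

```python
# Toy rendition of the GBDIM flow: beliefs -> desires -> goal
# intentions/commitments -> plan intentions/commitments.

def determine_desires(beliefs):
    """Desire a resource when it is believed available (assumed rule)."""
    return {"have(r)"} if "available(r)" in beliefs else set()

def determine_goal_commitments(beliefs, desires):
    """Commit to desires that are still believed achievable (assumed rule)."""
    return {d for d in desires if "available(r)" in beliefs}

def determine_plan_commitments(goal_commitments):
    """Goal commitments directly influence plan commitments."""
    return {f"plan_for({g})" for g in goal_commitments}

beliefs = {"available(r)"}
desires = determine_desires(beliefs)
goals = determine_goal_commitments(beliefs, desires)
plans = determine_plan_commitments(goals)
print(plans)  # {'plan_for(have(r))'}
```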
Figure 9 A Generic Model for a Deliberative Normative Agent: GDNM
6.1.4 Generic Model of a Deliberative Normative Agent: GDNM
In many agent societies norms are assumed to play a role. It is claimed that not only norm
following, but also the possibility of 'intelligent' norm violation, is of importance.
Principles for agents that are able to behave deliberatively on the basis of explicitly
represented norms are identified and incorporated in a generic model for a deliberative
normative agent. Using this agent model, norms can be communicated, adopted and used as
meta-goals on the agent's own processes. As such they have impact on deliberation about
goal generation, goal selection, plan generation and plan selection.
This generic model for an agent that uses norms in its deliberative behaviour is a
refinement of the generic agent model GAM. A new component is included for society
information, the component Maintenance of Society Information (MSI) at the top level and
the component Own Process Control is refined as shown in Figure 9. For more details, see
[16].
6.2 Generic Models of Problem Solving Methods and Tasks
The specific tasks for which agents are designed vary significantly. Likewise the variety of
tasks for which generic models based on specific problem solving methods have been
developed is wide: diagnosis, design, process control, planning and scheduling are
examples of tasks for which generic models are available. In this section compositional
generic task models (developed in DESIRE) for the first three types of tasks are briefly
described. These task models can be combined with any of the agent models described
above: they can be used to specialise the agent specific task component.
6.2.1 A Generic Model for Diagnostic Tasks: GDIM
Tasks specifically related to diagnosis are included in the generic task model of diagnosis
(for a top level composition, see Figure 10). This generic model (the Generic DIagnosis
Model GDIM) is based on determination and validation of hypotheses. It subsumes both
causal and anti-causal diagnostic reasoning. Application of this generic model for both
types of diagnosis is discussed in [13].
Figure 10 Generic task model of Diagnosis: GDIM
The component Hypothesis Determination is used to dynamically focus on certain
hypotheses during the process. Hypothesis Validation includes determination of the
observations (Observation Determination) needed to validate a hypothesis (which are
transferred to the external world to be performed), and evaluation of the results of
observation with respect to the hypothesis in focus (Hypothesis Evaluation).
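The cycle of focusing on a hypothesis, determining the required observations, and evaluating the results can be rendered as a toy loop (the fault table, the symptom data, and the lookup in place of real observation in an external world are all assumptions for illustration):

```python
# Toy rendition of the GDIM loop: hypothesis determination picks a focus,
# observation determination selects the required observations, the external
# world "performs" them (here: a table lookup), hypothesis evaluation decides.

def diagnose(hypotheses, expected_symptoms, actual_symptoms):
    for hyp in hypotheses:                     # hypothesis determination: focus
        required = expected_symptoms[hyp]      # observation determination
        observed = {s: actual_symptoms[s] for s in required}  # external world
        if all(observed.values()):             # hypothesis evaluation
            return hyp                         # diagnosis found
    return None

expected_symptoms = {"empty_battery": ["no_lights", "no_start"],
                     "empty_tank": ["engine_turns", "no_start"]}
actual_symptoms = {"no_lights": False, "no_start": True, "engine_turns": True}
print(diagnose(["empty_battery", "empty_tank"],
               expected_symptoms, actual_symptoms))  # empty_tank
```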
Figure 11 Composition of the component Hypothesis Validation in GDIM
6.2.2 A Generic Model for Design Tasks: GDEM
The compositional Generic DEsign Model (GDEM; see Figure 12) [10] is based on a
logical analysis of design processes and on analyses of applications, including elevator
configuration and design of environmental measures [14]. In this model Requirement
Qualification Sets Manipulation (component RQS Manipulation or RQSM), Design Object
Description Manipulation (component DOD Manipulation or DODM), and Design Process
Co-ordination (DPC), are distinguished as three separate interacting processes. The model
provides a generic structure which can be refined for specific design tasks in different
domains of application.
An initial design problem statement is expressed as a set of initial requirements and
requirement qualifications. Requirements impose conditions and restrictions on the
structure, functionality and behaviour of the design object for which a structural description
is to be generated during design. Qualifications of requirements are qualitative expressions
of the extent to which (individual or groups of) requirements are considered hard or
preferred, either in isolation or in relation to other (individual or groups of) requirements.
At any one point in time during design, the design process focuses on a specific subset of
the set of requirements. This subset of requirements plays a central role; the design process
is (temporarily) committed to the current requirement qualification set: the aim of
generating a design object description is to satisfy these requirements.
Figure 12. Composition of the Design Task: GDEM
During design the subsets of the set of requirements considered may change as may the
requirements themselves. The same holds for design object descriptions representing the
structure of the object to be designed.
The component Requirement Qualification Set Manipulation has four sub-components:
. RQS modification: the current requirement qualification set is analysed, proposals for
modification are generated, compared and the most promising (according to some measure) selected,
. deductive RQS refinement: the current requirement qualification set is deductively refined
by means of the theory of requirement qualification sets,
. current RQS maintenance: the current requirement qualification set is stored and
maintained,
. RQSM history maintenance: the history of requirement qualification sets modification is
stored and maintained.
The component Manipulation of Design Object Descriptions also has four sub-components:
. DOD modification: the current design object description is analysed in relation to the
current requirement set, proposals for modification are generated, compared and the
most promising (according to some measure) selected,
. deductive DOD refinement: the current design object description is deductively refined by
means of the theory of design object descriptions,
. current DOD maintenance: the current design object description is stored and maintained,
. DODM history maintenance: the history of design object descriptions modification is stored
and maintained.
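The interplay of the GDEM components can be sketched in a toy loop (illustrative only; the requirements, the cost rule, and all function bodies are invented for the example and are not part of GDEM):

```python
# Toy sketch of the GDEM decomposition: Design Process Co-ordination (DPC)
# alternates RQS Manipulation (revise the current requirement qualification
# set) and DOD Manipulation (modify the design object description) until the
# current requirements are satisfied.

requirements = {"rooms": 3, "max_cost": 100}         # initial requirements
qualifications = {"rooms": "hard", "max_cost": "preferred"}

def rqs_manipulation(reqs, quals, failed):
    """RQS modification: drop a violated 'preferred' requirement."""
    return {k: v for k, v in reqs.items()
            if not (k in failed and quals[k] == "preferred")}

def dod_manipulation(dod, reqs):
    """DOD modification: propose a modified design object description."""
    dod = dict(dod)
    dod["rooms"] = reqs.get("rooms", dod.get("rooms", 1))
    dod["cost"] = 40 * dod["rooms"]                  # invented cost rule
    return dod

def violated(dod, reqs):
    """Which current requirements does the description fail to satisfy?"""
    out = []
    if "rooms" in reqs and dod.get("rooms") != reqs["rooms"]:
        out.append("rooms")
    if "max_cost" in reqs and dod.get("cost", 0) > reqs["max_cost"]:
        out.append("max_cost")
    return out

dod, history = {}, []                  # DODM history maintenance
for _ in range(3):                     # DPC: bounded design loop
    dod = dod_manipulation(dod, requirements)
    history.append(dod)
    failed = violated(dod, requirements)
    if not failed:
        break
    requirements = rqs_manipulation(requirements, qualifications, failed)

print(dod, requirements)
```

The hard requirement survives while the violated preferred one is relaxed, mirroring the role of requirement qualifications described above.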
More detail on this model can be found in [10]. In [11] the different levels of strategic
reasoning in the model are described in more detail, including the component Design
Process Co-ordination for the highest level of strategic reasoning.
6.2.3 A Generic Model for Process Control Tasks: GPCM
Process control involves three sub-processes: process analysis, simulation of world
processes, and plan determination. These sub-processes are represented explicitly at the
top-level of the Generic Process Control Model GPCM depicted in Figure 13.
Figure 13. Process composition of process control: GPCM
Process Analysis involves evaluation of the process as a whole, and determination of the
observations to be performed in the external world. This is depicted below in Figure 14 in
the composition of the component Process Analysis.
Figure 14. Process composition of process analysis: information links
Note that two types of observations can be performed: incidental observations that
return an observation result for only the current point in time, and continuous observations
that continuously return all updated observation results as soon as changes in the world
occur.
6.3 Generic Models of Reasoning Patterns
As an example of a generic model for a specific reasoning pattern, a model for reasoning
in which assumptions are dynamically added and retracted (sometimes called
hypothetical reasoning) is discussed. Reasoning with and about assumptions entails
deciding about a set of assumptions to be assumed for a while (reasoning about
assumptions), and deriving which facts are logically implied by this set of assumptions
(reasoning with assumptions). The derived facts may be evaluated; based on this evaluation
some of the assumptions may be rejected and/or a new set of assumptions may be chosen
(reasoning about assumptions). For example, if an assumption is chosen, and the facts
derived from this assumption contradict information obtained from a different source (e.g.,
by observation), the assumption may be rejected and the converse may be assumed.
Reasoning with and about assumptions is a reflective reasoning method. It proceeds by
the following alternation of object level and meta-level reasoning, and upward and
downward reflection:
. inspecting the information currently available (epistemic upward reflection),
. determining a set of assumptions (meta-level reasoning),
. assuming this set of assumptions for a while (downward reflection of assumptions),
. deriving which facts follow from this assumed information (in the object level reasoning)
. inspecting the information currently available (epistemic upward reflection),
. evaluating the derived facts (meta-level reasoning)
. deciding to reject some of the assumptions and/or to choose a new set of assumptions based on this
evaluation (meta-level reasoning).
and so on
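The alternation of object-level and meta-level reasoning can be sketched as follows (an illustrative Python sketch; the rules, observations, and contradiction table are invented toy data):

```python
# Sketch of reasoning with and about assumptions: the meta-level chooses an
# assumption, the object level derives its consequences, and the meta-level
# evaluates them against observed information, revising the assumption on
# conflict.

RULES = {"a": {"wet_grass"}, "not_a": {"dry_grass"}}   # object-level theory
OBSERVED = {"dry_grass"}                               # info from another source
CONTRADICTS = {("wet_grass", "dry_grass"), ("dry_grass", "wet_grass")}

def derive(assumption):
    """Object-level reasoning: facts implied by the current assumption."""
    return RULES[assumption]

def conflicts(facts):
    """Meta-level evaluation against currently available information."""
    return any((f, o) in CONTRADICTS for f in facts for o in OBSERVED)

trace, assumption = [], "a"            # meta-level: choose 'a is true'
while True:
    facts = derive(assumption)         # downward reflection + object level
    bad = conflicts(facts)             # epistemic upward reflection + eval
    trace.append((assumption, bad))
    if not bad:
        break
    assumption = "not_a" if assumption == "a" else "a"   # assume the converse

print(trace)                           # 'a' is rejected, its converse accepted
```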
As an example, if an assumption 'a is true' is chosen, and the facts derived from this
assumption contradict information that is obtained from a different source, the assumption
'a is true' may be rejected and the converse 'a is false' may be assumed. This reasoning
pattern also occurs in diagnostic reasoning based on causal knowledge.
Figure 15. A generic model for reasoning with and about assumptions: GARM
The generic model for reasoning with and about assumptions consists of four primitive
components: External world, Observation Results Prediction, Assumption Determination,
Assumption Evaluation (see Figure 15). The first two of these components represent the
object level, the last two the meta-level. The component Observation Result Prediction
reasons with assumptions, the two components Assumption Determination and Assumption
Evaluation reason about assumptions. Note that this generic reasoning model is applied,
among others, in the generic model for diagnosis GDIM presented in Section 6.2.1.
However, the model has other types of application as well. For example, on the basis of this
reasoning model, more specialised models have been designed for:
. a generic model for default reasoning with explicit strategic knowledge on resolution of
conflicting defaults (GDRM)
. a generic model for reasoning on the basis of a Closed World Assumption (GCWARM),
with possibilities for context-sensitive informed and scoped variants of the Closed
World Assumption
7. Supporting Software Environment
The compositional design method DESIRE is supported by a software environment. The
DESIRE software environment includes a number of facilities. Graphical design tools
support specification of conceptual and detailed design of processes and knowledge at
different abstraction levels. A detailed design in DESIRE provides enough detail to be able
to develop an operational implementation automatically in any desired environment. An
implementation generator supports prototype generation of both partially and fully
specified models. The code generated by the implementation generator can be executed in
an execution environment. Screenshots of interaction with the tools illustrate the support
the tools provide. Figure 16 shows the result of creating (by a mouse click and then
filling in the names) two components, Agent and External World, and two links between
the components. The precise specifications of these components and links are created in
interaction with the graphical editors to make the drawing, as shown in Figures 17 and 18.
Moreover, if within one of the components a compositional structure using subcomponents
is required, by a mouse click on this component a new drawing area can be opened, where
again components can be introduced (zooming in).
Figure 16. Graphical design tool for process composition
Figure 17 depicts the initial specification of the Agent component in which, for example,
the input and output information types are defined. Figure 18 shows the specification of an
information link between the External World and the Agent. For example, the type of
information to be exchanged, namely action_info, is specified in this window. Figure 19
shows how information types are defined. The example information type temperatures
requires a new sort TEMP_VALUE.
Figure 17. Component editing window for a component
Figure 18. Editor for information links
Figure 19. Editor for information types
8. Discussion
The basic principles behind compositional multi-agent system design described in this
paper (process and knowledge abstraction, compositionality, reusability, formal semantics,
and formal evaluation) are principles generally acknowledged to be of importance in both
software engineering and knowledge engineering. The operationalisation of these principles
within a compositional development method for multi-agent systems is, however, a
distinguishing element. Such a method can be supported by a (graphical) software
environment in which all three levels of design are supported: from conceptual design to
implementation. Libraries of both generic models and instantiated components, of which a
few have been highlighted in this paper, support system designers at all levels of design.
Generic agent models, generic task models and generic models of reasoning patterns help
structure the process of system design. Formal semantics provide a basis for methods for
verification - an essential part of such a method.
A number of approaches to conceptual-level specification of multi-agent systems have
been recently proposed. On the one hand, general-purpose formal specification languages
stemming from Software Engineering are applied to the specification of multi-agent
systems (e.g., [35], [40] for approaches using Z, resp. Z and CSP). A compositional
development method such as DESIRE is committed to well-structured compositional
designs that can be specified at a higher level of conceptualisation than in Z or VDM and,
in particular, allows for specification in terms of knowledge bases, which especially for
applications in information-intensive domains is an advantage. Moreover, designs can be
implemented automatically using automated prototype generators. In [34] an approach to
the composition of reactive system components is described. Specification of components
is done on the basis of temporal logic. Two differences with our approach are the following.
First, their approach is limited to reactive components. In our approach components are
allowed to be non-reactive as well. Another difference is that in their case specification of
the type of the composition of components is limited. In our case the task control
specification forms the part of the composition specification where the dynamics of the
composition is defined in a tailored manner, using temporal task control rules. This makes it possible
to specify, for each composition, precisely the type of composition that is required. This is
also a difference with [35] and [40].
On the other hand, new development methods for the specification of multi-agent
systems have been proposed. These methods often commit to a specific agent architecture.
For instance, [32] describe a language on the one hand based on the BDI agent architecture
[39], and on the other hand based on object-oriented design methods.
In [42] an agent is constructed from components using a central message board within
the agent which manages the interaction between the agent's components and integrates the
activity within the agent. Our approach is more general in the sense that a component-based
architecture of an agent (e.g., the model GAM) need not commit to such a central
message-board; if desired, it is one of the architectural possibilities. Moreover, components
within DESIRE are more self-contained in the sense that they include knowledge bases and
relate to specific inference procedures and settings. In contrast, in [42] components are
quite heterogeneous; for example, a component can be just a knowledge base, which only
gets its dynamic semantics if it is processed by another component. Another difference is
that in [42] components are specified as a type of logic programs. It is not clear how
declarative and/or procedural semantics of these programs are defined. For example, they
allow component replacement as one of the steps in dynamics. This suggests dynamic
semantics that are on the programming level; how to define such semantics on a conceptual
level is far from trivial. In our approach semantics is defined on a conceptual design level
based on traces of compositional states.
The Concurrent MetateM framework [22] is another modelling framework for multi-agent
systems. A comparison is discussed for the structure of agents, inter-agent
communication and meta-level reasoning (for a more extensive comparison, see [37]).
For the structure of agents, in DESIRE, the knowledge structures that are used in the
knowledge bases and for the input and output interfaces of components are defined in terms
of information types, in which sort hierarchies can be defined. Signatures define sets of
ground atoms. An assignment of truth values true, false or unknown to atoms is called an
information state. Every primitive component has an internal information state, and all
input and output interfaces have information states. Information states evolve over time.
Atoms are persistent in the sense that an atom in a certain information state is assigned to
the same truth value as in the previous information state, unless its truth value has changed
because of updating an information link.
Concurrent MetateM does not have information types, there is no predefined set of
atoms and there are no sorts. The input and output interface of an object consists only of the
names of predicates. Two-valued logic is used with a closed world assumption, thus an
information state is defined by the set of atoms that are true.
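The two notions of information state can be contrasted in a small sketch (the class, atom names, and update method below are illustrative assumptions, not DESIRE or Concurrent MetateM code):

```python
# Contrast of the two state notions described above: a DESIRE-style
# three-valued information state whose atom values persist until an
# information link updates them, versus a closed-world state that is just
# the set of true atoms.

class InformationState:
    """Atoms are true, false, or unknown; values persist until updated."""
    def __init__(self, atoms):
        self.values = {a: "unknown" for a in atoms}  # signature fixes the atoms
    def update(self, atom, value):
        """Effect of updating an information link."""
        self.values[atom] = value
    def holds(self, atom):
        return self.values[atom]

state = InformationState({"door_open", "light_on"})
assert state.holds("light_on") == "unknown"          # not yet derived
state.update("light_on", "true")
# persistence: an atom keeps its truth value across states unless updated
assert state.holds("light_on") == "true"
assert state.holds("door_open") == "unknown"

# closed world assumption: a state is defined by the set of true atoms
cwa_state = {"light_on"}
def cwa_holds(atom):
    return atom in cwa_state                         # absent means false
print(cwa_holds("light_on"), cwa_holds("door_open"))
```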
In a DESIRE specification of a multi-agent system, the agents are (usually)
subcomponents of the top-level component that represents the whole (multi-agent) system,
together with one or more components that represent the rest of the environment. A
component that represents an agent can be a composed component: an agent task hierarchy
is mapped into a hierarchy of components. All (sub-)components (and information links)
have their own time scale.
In a Concurrent MetateM model, agents are modelled as objects that have no further
structure: all its tasks are modelled with one set of rules. Every object has its own time scale.
The communication between agents in DESIRE is defined by the information links
between them: communication is based on point-to-point or broadcast message passing.
Communication between agents in Concurrent MetateM is done by broadcast message
passing. When an object sends a message, it can be received by all other objects. On top of
this, both multi-cast and point-to-point message passing can be defined.
In DESIRE, meta-reasoning is modelled by using separate components for the object
and the meta-level. For example, one component can reason about the reasoning process
and information state of another component. Two types of interaction between object- and
meta-level are distinguished: upward reflection (from object- to meta-level) and downward
reflection (from meta- to object-level). The knowledge structures used for meta-level
reasoning are defined in terms of information types; standard meta-information types can
be generated automatically.
For meta-reasoning in Concurrent MetateM, the logic MML has been developed. In
MML, the domain over which terms range has been extended to incorporate the names of
object-level formulae. Execution of temporal formulae can be controlled by executing them
by a meta-interpreter. These meta-facilities have not been implemented yet.
The compositional approach to agent design in this paper has some aspects in common
with object oriented design methods; e.g., [5], [17], [41]. However, there are differences as
well. Examples of approaches to object-oriented agent specifications can be found in [4],
[31]. A first interesting point of discussion is what the difference is between agents and
objects. Some tend to classify agents as different from objects. For example, [29] compare
objects with agents on the dimension of autonomy in the following way:
'An object encapsulates some state, and has some control over this state in that it can only be
accessed or modified via the methods that the object provides. Agents encapsulate state in just the
same way. However, we also think of agents as encapsulating behaviour, in addition to state. An
object does not encapsulate behaviour: it has no control over the execution of methods - if an object
x invokes a method m on an object y, then y has no control over whether m is executed or not - it
just is. In this sense, object y is not autonomous, as it has no control over its own actions. In
contrast, we think of an agent as having exactly this kind of control over what actions it performs.
Because of this distinction, we do not think of agents as invoking methods (actions) on agents -
rather, we tend to think of them requesting actions to be performed. The decision about whether to
act upon the request lies with the recipient.'
Some others consider agents as a specific type of objects that are able to decide by
themselves whether or not they execute a method (objects that can say 'no'), and that can
initiate action (objects that can say 'go').
A difference between the compositional design method DESIRE and object-oriented
design methods in representation of basic functionality is that within DESIRE declarative,
knowledge-based specification forms are used, whereas method specifications (which
usually have a more procedural style of specification) are used in object-oriented design.
Another difference is that within DESIRE the composition relation is defined in a more
specific manner: the static aspects by information links, and the dynamic aspects by
(temporal) task control knowledge, according to a pre-specified format. A similarity is the
(re)use of generic structures: generic models in DESIRE, and patterns (cf. [3], [23]) in
object-oriented design methods, although their functionality and compositionality are
specified in different manners, as discussed above.
--R
ACM Transactions on Programming Languages and Systems
Plangent: An Approach to Making Mobile Agents Intelligent.
A Pattern Language.
Agent Design Patterns: Elements of Agent Application Design.
Compositional Design and Verification of a Multi-Agent System for One-to-Many Negotiation
Formal specification of Multi-Agent Systems: a real-world case
Modelling the internal behaviour of BDI-agents
Formalisation of a cooperation model based on joint intentions.
On formal specification of design tasks.
Strategic Knowledge in Compositional Design Models.
The Acquisition of a Shared Task Model.
Principles and Architecture.
Compositional Verification of Knowledge-based Systems: a Case Study for Diagnostic Reasoning
Science of Computer Programming
Formal Refinement Patterns for Goal-Driven Requirements Elaboration
A. Formal
Representing and executing agent-based systems
Elements of reusable object-oriented Software
Specification of Behavioural Requirements within Compositional Multi-Agent System Design
Compositional Verification of a Distributed Real-Time Arbitration Protocol
Communications of the ACM
Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions
Compositional Verification of Multi-Agent Systems: a Formal Analysis of Pro-activeness and Reactiveness
A Methodology and Technique for Systems of BDI Agents.
Processes and Techniques.
Composition of Reactive System Components.
A formal framework for agency and autonomy.
The open agent architecture: A framework for building distributed software systems
Agent Modelling in MetateM and DESIRE.
Modeling rational agents within a BDI architecture.
Architectural issues in Component-Based Software Engineering
Department of Computer Science
Lessons learned through six years of component-based development
--TR
Object-oriented modeling and design
Composing specifications
Goal-directed requirements acquisition
Object-oriented development
Object-oriented analysis and design with applications (2nd ed.)
Compositional verification of a distributed real-time arbitration protocol
Design patterns
Agent theories, architectures, and languages
Representing and executing agent-based systems
Controlling cooperative problem solving in industrial multi-agent systems using joint intentions
A methodology and modelling technique for systems of BDI agents
Formal refinement patterns for goal-driven requirements elaboration
Applications of intelligent agents
Agent design patterns
ZEUS
Composition of reactive system components
Component primer
Lessons learned through six years of component-based development
Requirements Engineering
The Acquisition of a Shared Task Model
Compositional Verification of Knowledge-Based Systems
Modelling Internal Dynamic Behaviour of BDI Agents
Compositional Verification of Multi-Agent Systems
Formalization of a Cooperation Model Based on Joint Intentions
Agent Modelling in METATEM and DESIRE
Deliberative Normative Agents
Specification of Bahavioural Requirements within Compositional Multi-agent System Design
Compositional Design and Verification of a Multi-Agent System for One-to-Many Negotiation
--CTR
Frances M. T. Brazier , Frank Cornelissen , Rune Gustavsson , Catholijn M. Jonker , Olle Lindeberg , Bianca Polak , Jan Treur, Compositional Verification of a Multi-Agent System for One-to-Many Negotiation, Applied Intelligence, v.20 n.2, p.95-117, March-April 2004
F. M. T. Brazier , B. J. Overeinder , M. van Steen , N. J. E. Wijngaards, Agent factory: generative migration of mobile agents in heterogeneous environments, Proceedings of the 2002 ACM symposium on Applied computing, March 11-14, 2002, Madrid, Spain
Catholijn M. Jonker , Jan Treur , Wouter C. Wijngaards, Specification, analysis and simulation of the dynamics within an organisation, Applied Intelligence, v.27 n.2, p.131-152, October 2007
Catholijn M. Jonker , Jan Treur, Agent-oriented modeling of the dynamics of biological organisms, Applied Intelligence, v.27 n.1, p.1-20, August 2007 | generic model;agent;reuse;component-based;design |
586463 | Parallelizing the Data Cube. | This paper presents a general methodology for the efficient parallelization of existing data cube construction algorithms. We describe two different partitioning strategies, one for top-down and one for bottom-up cube algorithms. Both partitioning strategies assign subcubes to individual processors in such a way that the loads assigned to the processors are balanced. Our methods reduce inter processor communication overhead by partitioning the load in advance instead of computing each individual group-by in parallel. Our partitioning strategies create a small number of coarse tasks. This allows for sharing of prefixes and sort orders between different group-by computations. Our methods enable code reuse by permitting the use of existing sequential (external memory) data cube algorithms for the subcube computations on each processor. This supports the transfer of optimized sequential data cube code to a parallel setting.The bottom-up partitioning strategy balances the number of single attribute external memory sorts made by each processor. The top-down strategy partitions a weighted tree in which weights reflect algorithm specific cost measures like estimated group-by sizes. Both partitioning approaches can be implemented on any shared disk type parallel machine composed of p processors connected via an interconnection fabric and with access to a shared parallel disk array.We have implemented our parallel top-down data cube construction method in C++ with the MPI message passing library for communication and the LEDA library for the required graph algorithms. We tested our code on an eight processor cluster, using a variety of different data sets with a range of sizes, dimensions, density, and skew. Comparison tests were performed on a SunFire 6800. The tests show that our partitioning strategies generate a close to optimal load balance between processors. The actual run times observed show an optimal speedup of p. | Figure
1. A 4-dimensional lattice.
is aggregated over all distinct combinations over AB. A group-by is a child of some parent
group-by if the child can be computed from the parent by aggregating some of its
attributes. Parent-child relationships allow algorithms to share partitions, sorts, and partial
sorts between different group-buys. For example, if the data has been sorted with respect
to AB, then cuboid group-by A can be generated from AB without sorting and generating
ABC requires only a sorting of blocks of entries. Cube algorithms differ on how they make
use of these commonalities. Bottom-up approaches reuse previously computed sort orders
and generate more detailed group-buys from less detailed ones (a less detailed group-by
contains a subset of the attributes). Top-down approaches use more detailed group-bys
to compute less detailed ones. Bottom-up approaches are better suited for sparse rela-
tions. Relation R is sparse if N is much smaller than the number of possible values in
the given d-dimensional space. We present different partitioning and load balancing approaches
depending on whether a top-down or bottom-up sequential cube algorithm is
used.
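The sharing of sorts between a parent and its child group-bys can be sketched as follows (an illustrative Python sketch using SUM as the aggregate; the tiny relation is invented):

```python
# Sort-order sharing between parent and child group-bys: once the relation
# is sorted by (A, B), the AB group-by is produced in one scan, and the A
# group-by is obtained from AB without any further sorting.

from itertools import groupby

rows = [("a1", "b1", 2), ("a1", "b2", 3), ("a2", "b1", 5)]  # sorted by (A, B)

def compute_group_by(rows, key_len):
    """Aggregate (SUM) over the first key_len attributes of sorted rows."""
    out = []
    for key, grp in groupby(rows, key=lambda r: r[:key_len]):
        out.append(key + (sum(r[-1] for r in grp),))
    return out

ab = compute_group_by(rows, 2)   # AB group-by: a single scan
a = compute_group_by(ab, 1)      # A group-by computed from its parent AB
print(ab)                        # [('a1', 'b1', 2), ('a1', 'b2', 3), ('a2', 'b1', 5)]
print(a)                         # [('a1', 5), ('a2', 5)]
```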
We conclude this section with a brief discussion of the underlying parallel model, the
standard shared disk parallel machine model. That is, we assume p processors connected
via an interconnection fabric where processors have typical workstation size local memories
and concurrent access to a shared disk array. For the purpose of parallel algorithm design,
we use the Coarse Grained Multicomputer (CGM) model [5, 8, 15, 18, 27]. More precisely,
we use the EM-CGM model [6, 7, 9], which is a multi-processor version of Vitter's Parallel
Disk Model [28–30]. For our parallel data cube construction methods we assume that the
d-dimensional input data set R of size N is stored on the shared disk array. The output, i.e.
the group-bys comprising the data cube, will be written to the shared disk array. Subsequent
applications may impose requirements on the output. For example, a visualization application
may require storing each group-by in striped format over the entire disk array to support
fast access to individual group-bys.
3. Parallel bottom-up data cube construction
Bottom-up data cube construction methods calculate the group-bys in an order which emphasizes
the reuse of previously computed sorts and they generate more detailed group-bys
from less detailed ones. Bottom-up methods are well suited for sparse relations and they
support the selective computation of blocks in a group-by; e.g., generate only blocks which
satisfy a user-defined aggregate condition [4].
Previous bottom-up methods include BUC [4] and PartitionCube (part of [24]). The main
idea underlying bottom-up methods can be captured as follows: if the data has previously
been sorted by attribute A, then creating an AB sort order does not require a complete
resorting. A local resorting of A-blocks (blocks of consecutive elements that have the same
attribute can be used instead. The sorting of such A-blocks can often be performed in local
memory. Hence, instead of another external memory sort, the AB order can be created in one
single scan through the disk. Bottom-up methods [4, 24] attempt to break the problem into
a sequence of single attribute sorts which share prefixes of attributes and can be performed
in local memory with a single disk scan. As outlined in [4, 24], the total computation time
of these methods is dominated by the number of such single attribute sorts.
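The block-local resort that extends an A sort order to an AB order can be sketched as follows (an illustrative Python sketch; each block is assumed to fit in local memory, as described above):

```python
# Single-attribute-sort idea behind bottom-up methods: data already sorted
# by A is turned into AB order by sorting each A-block individually, so one
# scan of the data suffices instead of a full external-memory resort.

from itertools import groupby

rows = [("a1", "b2"), ("a1", "b1"), ("a2", "b3"), ("a2", "b1")]  # A-sorted

def extend_sort_order(rows, prefix_len):
    """Create the (prefix, next attribute) order by local block sorts."""
    out = []
    for _, block in groupby(rows, key=lambda r: r[:prefix_len]):
        out.extend(sorted(block))        # in-memory sort of one A-block
    return out

ab_sorted = extend_sort_order(rows, 1)
print(ab_sorted)
# [('a1', 'b1'), ('a1', 'b2'), ('a2', 'b1'), ('a2', 'b3')]
```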
In this section we describe a partitioning of the group-by computations into p independent
subproblems. The partitioning generates subproblems which can be processed efficiently by
bottom-up sequential cube methods. The goal of the partitioning is to balance the number of
single attribute sorts required by each subproblem and to ensure that each subproblem has
overlapping sort sequences in the same way as for the sequential methods (thereby avoiding
additional work).
Let A1, . . . , Ad be the attributes of relation R and assume |A1| ≥ |A2| ≥ · · · ≥ |Ad|,
where |Ai| is the number of different possible values for attribute Ai. As observed in [24],
the set of all group-bys of the data cube can be partitioned into those that contain A1 and
those that do not contain A1. In our partitioning approach, the groups-bys containing A1 will
be sorted by A1. We indicate this by saying that they contain A1 as a prefix. The group-bys
not containing A1 (i.e., A1 is projected out) contain A1 as a postfix. We then recurse with
the same scheme on the remaining attributes. We shall utilize this property to partition the
computation of all group-bys into independent subproblems. The load between subproblems
will be balanced and they will have overlapping sort sequences in the same way as for the
sequential methods. In the following we give the details of our partitioning method.
Let x, y, z be sequences of attributes representing sort orders and let A be an arbitrary
single attribute. We introduce the following definition of sets of attribute sequences
representing sort orders (and their respective group-bys):

S0(x, ∅, z) = {x},   Si(x, Ay, z) = Si−1(xA, y, z) ∪ Si−1(x, y, Az), i ≥ 1   (1)

where x is the prefix by which the group-bys are sorted, y contains the attributes still to
be processed, and z contains the attributes that have been projected out. The entire data
cube construction corresponds to the set Sd(∅, A1 . . . Ad, ∅) of sort orders and respective
group-bys, where d is the dimension of the data cube. We refer to i as the rank of Si(x, y, z).
The set Sd(∅, A1 . . . Ad, ∅) is the union of two subsets of rank d − 1:

Sd(∅, A1 . . . Ad, ∅) = Sd−1(A1, A2 . . . Ad, ∅) ∪ Sd−1(∅, A2 . . . Ad, A1)   (2)

These, in turn, are the
Figure 2. Partitioning for a 4-dimensional data cube with attributes A, B, C, D. The 8 S1-sets correspond to the
group-bys determined for four attributes.
union of four subsets of rank d − 2: Sd−2(A1A2, A3 . . . Ad, ∅), Sd−2(A1, A3 . . . Ad, A2),
Sd−2(A2, A3 . . . Ad, A1), and Sd−2(∅, A3 . . . Ad, A2A1). A complete example for a
4-dimensional data cube with attributes A, B, C, D is shown in figure 2.
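This recursive enumeration can be sketched directly (an illustrative Python sketch; the recursion used here, Si(x, Ay, z) = Si−1(xA, y, z) ∪ Si−1(x, y, Az) with S0(x, ∅, z) = {x}, is reconstructed from the rank d−1 and rank d−2 expansions given in the text):

```python
# Enumerating the S-sets. Strings stand for attribute sequences:
#   x = sort prefix, y = attributes still to process, z = projected out.
# Each rank-0 set contains a single sort order / group-by.

def S(x, y, z):
    if not y:                          # rank 0: one group-by, sorted by x
        return {x}
    a, rest = y[0], y[1:]
    # either a becomes part of the sort prefix, or it is projected out
    return S(x + a, rest, z) | S(x, rest, a + z)

# S_d(empty, A1..Ad, empty) yields all 2^d group-bys of a d-dimensional cube
cube = S("", "ABCD", "")
assert len(cube) == 2 ** 4
# the rank d-1 split partitions the cube into group-bys with / without A1
assert cube == S("A", "BCD", "") | S("", "BCD", "A")
print(sorted(cube))
```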
For the sake of simplifying the discussion, we assume that p is a power of 2, p = 2^k.
Consider the 2p S-sets of rank d − k − 1 obtained by applying the recursion k + 1 times,
and number these 2p sets S(1), . . . , S(2p) in the order defined by Eq. (2). Define
Γi = S(2i−1) ∪ S(2i), 1 ≤ i ≤ p. Our partitioning assigns set Γi to processor Pi, as summarized
in Algorithm 1.
ALGORITHM 1. Parallel Bottom-Up Cube Construction.
Each processor Pi, 1 ≤ i ≤ p, performs the following steps, independently and in
parallel:
(1) Determine the two sets forming Γi as described below.
(2) Compute all group-bys in Γi using a sequential (external-memory) bottom-up
cube construction method.
End of Algorithm.
We illustrate the partitioning using an example with d = 10 and p = 8. For these
values, we generate 16 S-sets of rank 6. Giving only the indices of attributes A1, A2, A3,
Figure 3. Γ-sets assigned to 8 processors; projected-out attributes and existing attributes are marked by
different symbols.
and A4, we have
Each processor is assigned the computation of 2^7 group-bys as shown in figure 3. If every
processor has access to its own copy of relation R, then a processor performs two single
attribute sorts to generate the data in the ordering needed for its group-bys. If there is only
one copy of R, read-conflicts can be avoided by sorting the sequences using a binomial
heap broadcast pattern [19]. Doing so results in every processor Pi receiving its two sorted
sequences forming Γi after the time needed for k + 1 single attribute sorts. Figure 4 shows
the sequence of sorts for the 8-processor example. The index inside the circles indicates the
processor assignment; i.e., processor 1 performs a total of four single attribute sorts on the
original relation R, starting with the sort on attribute A1. Using binomial heap properties, it
follows that a processor does at most k + 1 single attribute sorts and the 2p sorted sequences
are available after the time needed for k + 1 such sorts.
Figure 4. Binomial heap structure for generating the 2p Γ-sets without read conflicts.
Algorithm 1 can easily be generalized to values of p which are not powers of 2. We also
note that Algorithm 1 requires p ≤ 2^(d-1). This is usually the case in practice. However, if
a parallel algorithm is needed for larger values of p, the partitioning strategy needs to be
augmented. Such an augmentation could, for example, be a partitioning strategy based on
the number of data items for a particular attribute. This would be applied after partitioning
based on the number of attributes has been done. Since the range p ∈ {2^0, …, 2^(d-1)} covers
current needs with respect to machine and dimension sizes, we do not further discuss such
augmentations in this paper.
The following four properties summarize the main features of Algorithm 1 that make it
load balanced and communication efficient:
• The computation of each group-by is assigned to a unique processor.
• The calculation of the group-bys in Γi, assigned to processor Pi, requires the same
number of single attribute sorts for all 1 ≤ i ≤ p.
• The sorts performed at processor Pi share prefixes of attributes in the same way as in
[4, 24] and can be performed with disk scans in the same manner as in [4, 24].
• The algorithm requires no inter-processor communication.
4. Parallel top-down data cube construction
Top-down approaches for computing the data cube, like the sequential PipeSort, PipeHash,
and Overlap methods [1, 10, 25], use more detailed group-bys to compute less detailed ones
that contain a subset of the attributes of the former. They apply to data sets where the number
of data items in a group-by can shrink considerably as the number of attributes decreases
(data reduction). The PipeSort, PipeHash, and Overlap methods select a spanning tree T
of the lattice, rooted at the group-by containing all attributes. PipeSort considers two cases
of parent-child relationships. If the ordered attributes of the child are a prefix of the ordered
attributes of the parent (e.g., ABCD → ABC) then a simple scan is sufficient to create
the child from the parent. Otherwise, a sort is required to create the child. PipeSort seeks
to minimize the total computation cost by computing minimum cost matchings between
successive layers of the lattice. PipeHash uses hash tables instead of sorting. Overlap
attempts to reduce sort time by utilizing the fact that overlapping sort orders do not always
require a complete new sort. For example, the ABC group-by consists of partitions, one per
value of A, that can be sorted independently on C to produce the AC sort order. This may permit independent sorts
in memory rather than always using external memory sort.
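The scan-versus-sort distinction that drives PipeSort's edge costs can be made concrete with a small helper. This is our own illustrative sketch (hypothetical names; in the actual method these costs feed a minimum-cost matching between lattice layers):

```python
def build_cost(parent, child, scan_cost, sort_cost):
    """PipeSort-style edge cost between a parent group-by and a child.

    If the child's attribute order is a prefix of the parent's, a single
    scan of the parent suffices; otherwise the parent must be re-sorted.
    `parent` and `child` are attribute order strings such as "ABCD"/"ABC".
    """
    if parent.startswith(child):
        return scan_cost   # e.g. ABCD -> ABC: shared prefix, one scan
    return sort_cost       # e.g. ABCD -> ACD: requires a new sort

print(build_cost("ABCD", "ABC", scan_cost=1, sort_cost=10))  # 1
print(build_cost("ABCD", "ACD", scan_cost=1, sort_cost=10))  # 10
```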
Next, we outline a partitioning approach which generates p independent subproblems,
each of which can be solved by one processor using an existing external-memory top-down
cube algorithm. The first step of our algorithm determines a spanning tree T of the lattice by
using one of the existing approaches like PipeSort, PipeHash, or Overlap.
To balance the load between the different processors we next perform a storage estimation
to determine approximate sizes of the group-bys in T . This can be done, for example,
by using methods described in [11, 26]. We now work with a weighted tree. The most
crucial part of our solution is the partitioning of the tree. The partitioning of T into subtrees
induces a partitioning of the data cube problem into p subproblems (subsets of group-bys).
Determining an optimal partitioning of the weighted tree is easily shown to be an NP-complete
problem (by a reduction from, for example, multiprocessor scheduling). Since the
weights of the tree represent estimates, a heuristic approach which generates p subproblems
with "some control" over the sizes of the subproblems holds the most promise. While we
want the sizes of the p subproblems balanced, we also want to minimize the number of
subtrees assigned to a processor. Every subtree may require a scanning of the entire data set
R and thus too many subtrees can result in poor I/O performance. The solution we develop
balances these two considerations.
Our heuristic makes use of a related partitioning problem on trees for which efficient
algorithms exist, the min-max tree k-partitioning problem [3], defined as follows: Given a
tree T with n vertices and a positive weight assigned to each vertex, delete k edges in the
tree such that the largest total weight of a resulting subtree is minimized.
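For small trees the min-max objective can be checked by brute force, which makes the definition concrete. The sketch below is our own, exponential-time, and purely illustrative; the algorithms of [3, 12, 23] solve the problem far more efficiently:

```python
from itertools import combinations

def min_max_k_partition(parent, weight, k):
    """Brute-force min-max tree k-partitioning.

    Tries every set of k edges to delete and returns the smallest
    achievable maximum component weight. `parent[v]` is v's parent
    (None for the root); `weight[v]` is v's positive weight.
    """
    nodes = list(parent)
    edges = [v for v in nodes if parent[v] is not None]  # edge = (v, parent[v])

    def max_component(cut):
        def comp_root(v):
            # walk up until a deleted edge or the tree root is reached
            while parent[v] is not None and v not in cut:
                v = parent[v]
            return v
        totals = {}
        for v in nodes:
            r = comp_root(v)
            totals[r] = totals.get(r, 0) + weight[v]
        return max(totals.values())

    return min(max_component(set(c)) for c in combinations(edges, k))
```

For a path a-b-c-d with unit weights and k = 1, cutting the middle edge yields two components of weight 2, which is the optimum.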
The min-max tree k-partitioning problem has been studied in [3, 12, 23]. These methods
assume that the weights are fixed. Note that our partitioning problem on T is different
in that, as we cut a subtree T' out of T, an additional cost is introduced because the
group-by associated with the root of T' must now be computed from scratch through a
separate sort. Hence, when cutting T' out of T, the weight of the root of T' has to be
increased accordingly. We have adapted the algorithm in [3] to account for the changes of
weights required. This algorithm is based on a pebble shifting scheme where k pebbles are
shifted down the tree, from the root towards the leaves, determining the cuts to be made.
In our adapted version, as cuts are made, the cost for the parent of the new partition is
adjusted to reflect the cost of the additional sort. Its original cost is saved in a hash table
for possible future use since cuts can be moved many times before reaching their final
position. In the remainder, we shall refer to this method as the modified min-max tree
k-partitioning.
However, even a perfect min-max k-partitioning does not necessarily result in a partitioning
of T into subtrees of equal size, nor does it address tradeoffs arising from the
number of subtrees assigned to a processor. We use tree-partitioning as an initial step for
our partitioning. To achieve a better distribution of the load we apply an over partitioning
strategy: instead of partitioning the tree T into p subtrees, we partition it into s ? p subtrees,
where s is an integer, s ? 1. Then, we use a ?packing heuristic? to determine which subtrees
belong to which processors, assigning s subtrees to every processor. Our packing heuristic
considers the weights of the subtrees and pairs subtrees by weights to control the number
of subtrees. It consists of s matching phases in which the p largest subtrees (or groups of
subtrees) and the p smallest subtrees (or groups of subtrees) are matched up. Details are
described in Step 2b of Algorithm 2.
ALGORITHM 2. Sequential Tree-partition(T, s, p).
Input: A spanning tree T of the lattice with positive weights assigned to the nodes (representing
the cost to build each node from its ancestor in T). Integer parameters s (oversampling
ratio) and p (number of processors).
Output: A partitioning of T into p subsets Γ1, …, Γp of s subtrees each.
(1) Compute a modified min-max tree (s · p)-partitioning of T into s · p subtrees T1, …,
Ts·p.
190 DEHNE ET AL.
(2) Distribute subtrees T1, …, Ts·p among the p subsets Γ1, …, Γp, s subtrees per subset,
as follows:
(2a) Create s · p sets of trees named Φi, 1 ≤ i ≤ sp, where initially Φi = {Ti}. The
weight of Φi is defined as the total weight of the trees in Φi.
(2b) For j = 1, …, s - 1 do:
• Sort the Φ-sets by weight, in increasing order. W.l.o.g., let Φ1, …,
Φsp-(j-1)p be the resulting sequence.
• Merge each of the p heaviest sets with one of the p lightest sets: the
heaviest with the lightest, the second heaviest with the second lightest,
and so on.
End of Algorithm.
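The packing step (2b) can be sketched as follows. This is our own simplified reading of the heuristic, with hypothetical names; it ignores the subtree structure and tracks only weights:

```python
def pack(tree_weights, p):
    """Sketch of packing step (2b): repeatedly sort the current groups by
    weight and merge the p heaviest with the p lightest (heaviest with
    lightest, and so on), until p groups remain. `tree_weights` must
    contain s*p weights for some integer s >= 1.
    """
    groups = [[w] for w in tree_weights]   # one group per subtree
    while len(groups) > p:
        groups.sort(key=sum)
        light, heavy = groups[:p], groups[-p:]
        middle = groups[p:-p]
        # pair heaviest with lightest, 2nd heaviest with 2nd lightest, ...
        merged = [h + l for h, l in zip(reversed(heavy), light)]
        groups = middle + merged
    return groups
```

On eight subtree weights 1..8 with p = 2 (so s = 4), the three matching phases converge to two groups of total weight 18 each, a perfectly balanced result for this toy input.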
The above tree partition algorithm is embedded into our parallel top-down data cube
construction algorithm. Our method provides a framework for parallelizing any sequential
top-down data cube algorithm. An outline of our approach is given in the following
Algorithm 3.
ALGORITHM 3. Parallel Top-Down Cube Construction.
Each processor Pi, 1 ≤ i ≤ p, performs the following steps independently and in
parallel:
(1) Apply the storage estimation method in [11, 26] to determine the approximate
sizes of all group-bys in T .
(2) Select a sequential top-down cube construction method (e.g., PipeSort,
PipeHash, or Overlap) and compute the spanning tree T of the lattice as used by
this method. Compute the weight of each node of T: the estimated cost to build
each node from its ancestor in T.
(3) Execute Algorithm Tree-partition(T, s, p) as shown above, creating p sets
Γ1, …, Γp. Each set Γi contains s subtrees of T.
(4) Compute all group-bys in subset Γi using the sequential top-down cube construction
method chosen in Step 2.
End of Algorithm.
Our performance results described in Section 6 show that an over-partitioning with s = 2
or 3 achieves very good results with respect to balancing the loads assigned to the processors.
This is an important result since a small value of s is crucial for optimizing performance.
5. Parallel array-based data cube construction
Our method in Section 4 can be easily modified to obtain an efficient parallelization of the
ArrayCube method presented in [32]. The ArrayCube method is aimed at dense data cubes
and structures the raw data set in a d-dimensional array stored on disk as a sequence of
"chunks". Chunking is a way to divide the d-dimensional array into small size d-dimensional
chunks, where each chunk is a portion containing a data set that fits into a disk block. When a
fixed sequence of such chunks is stored on disk, the calculation of each group-by requires a
certain amount of buffer space [32]. The ArrayCube method calculates a minimum memory
spanning tree of group-bys, MMST, which is a spanning tree of the lattice such that the total
amount of buffer space required is minimized. The total number of disk scans required for
the computation of all group-bys is the total amount of buffer space required divided by the
memory space available. The ArrayCube method can therefore be parallelized by simply
applying Algorithm 3 with T being the MMST.
6. Experimental performance analysis
We have implemented and tested our parallel top-down data cube construction method
presented in Section 4. We implemented sequential pipesort [1] in C++, and our parallel
top-down data cube construction method (Section 4) in C++ with MPI [2]. Most of the
required graph algorithms, as well as data structures like hash tables and graph representations,
were drawn from the LEDA library [21]. Still, the implementation took one person-year
of full-time work. We chose to implement our parallel top-down data cube construction
method rather than our parallel bottom-up data cube construction method because the
former has more tunable parameters that we wish to explore. As our primary parallel hardware
platform, we use a PC cluster consisting of a front-end machine and eight processors.
The front-end machine is used to partition the lattice and distribute the work among the
other 8 processors. The front-end machine is an IBM Netfinity server with two 9 GB SCSI
disks, 512 MB of RAM and a 550 MHz Pentium processor. The processors are 166 MHz
Pentiums with 2 GB IDE hard drives and 32 MB of RAM, except for one processor which
is a 133 MHz Pentium. The processors run LINUX and are connected via a 100 Mbit Fast
Ethernet switch with full wire speed on all ports. Clearly, this is a very low end, older, hardware
platform. The experiments reported in the remainder of this section represent several
weeks of 24 hr/day testing and the PC cluster platform described above has the advantage
of being available exclusively for our experiments without any other user disturbing our
measurements. For our main goal of studying the speedup obtained by our parallel method
rather than absolute times, this platform proved sufficient. To verify that our results also
hold for newer machines with faster processors, more memory per processor, and higher
bandwidth, we then ported our code to a SunFire 6800 and performed comparison tests
on the same data sets. The SunFire 6800 used is a very recent SUN multiprocessor with
Sun UltraSPARC III 750 MHz processors running Solaris 8, 24 GB of RAM and a Sun T3
shared disk.
Figure 5 shows the PC cluster running time observed as a function of the number of processors
used. For the same data set, we measured the sequential time (sequential pipesort [1])
and the parallel time obtained through our parallel top-down data cube construction method
(Section 4), using an oversampling ratio of 2. The data set consisted of 1,000,000
records with dimension 7. Our test data values were uniformly distributed over 10 values
in each dimension. Figure 5 shows the running times of the algorithm as we increase
the number of processors. There are three curves shown. The runtime curve shows the
time taken by the slowest processor (i.e. the processor that received the largest workload).
The second curve shows the average time taken by the processors. The time taken by the
front-end machine, to partition the lattice and distribute the work among the compute nodes,
Figure 5. PC cluster running time in seconds as a function of the number of processors. (Fixed parameters: Data
size = 1,000,000 rows. Dimensions = 7. Experiments per data point = 5.)
was insignificant. The theoretical optimum curve shown in figure 5 is the sequential pipesort
time divided by the number of processors used.
We observe that the runtime obtained by our code and the theoretical optimum are
essentially identical. That is, for an oversampling ratio of 2, an optimal speedup of p
is observed. (The anomaly in the runtime curve is due to the slower 133 MHz
Pentium processor.)
Interestingly, the average time curve is always below the theoretical optimum curve, and
even the runtime curve is sometimes below the theoretical optimum curve. One would have
expected that the runtime curve would always be above the theoretical optimum curve.
We believe that this superlinear speedup is caused by another effect which benefits our
parallel method: improved I/O. When sequential pipesort is applied to a 10 dimensional
data set, the lattice is partitioned into pipes of length up to 10. In order to process a pipe of
length 10, pipesort needs to write to 10 open files at the same time. It appears that under
LINUX, the number of open files can have a considerable impact on performance. For
100,000 records, writing them to 4 files each took 8 seconds on our system. Writing them
to 6 files each took 23 seconds, not 12, and writing them to 8 files each took 48 seconds,
not 16. This benefits our parallel method, since we partition the lattice first and then apply
pipesort to each part. Therefore, the pipes generated in the parallel method are considerably
shorter.
In order to verify that our results also hold for newer machines with faster processors,
more memory per processor, and higher bandwidth, we ported our code to a SunFire 6800
and performed comparison tests on the same data sets. Figure 6 shows the running times
observed for the SunFire 6800. The absolute running times observed are considerably
faster, as expected. The SunFire is approximately 4 times faster than the PC cluster. Most
Figure 6. SunFire 6800 running time in seconds as a function of the number of processors. Same data set as in
figure 5.
importantly, the shapes of the curves are essentially the same as for the PC cluster. The
runtime (slowest proc.) and average time curves are very similar and are both very close
to the theoretical optimum curve. That is, for an oversampling ratio of 2, an optimal
speedup of p is also observed for the SunFire 6800. The larger SunFire installation also
allowed us to test our code for a larger number of processors. As shown in figure 6, we still
obtain optimal speedup p when using 16 processors on the same dataset.
Figure 7 shows the PC cluster running times of our top-down data cube parallelization as
we increase the data size from 100,000 to 1,000,000 rows. The main observation is that the
parallel runtime increases slightly more than linear with respect to the data size which is
consistent with the fact that sorting requires time O(n log n). Figure 7 shows that our parallel
top-down data cube construction method scales gracefully with respect to the data size.
Figure 8 shows the PC cluster running time as a function of the oversampling ratio s.
We observe that, for our test case, the parallel runtime (i.e. the time taken by the slowest
processor) is best for s = 3. This is due to the following tradeoff. Clearly, the workload
balance improves as s increases. However, as the total number of subtrees, s · p, generated
in the tree partitioning algorithm increases, we need to perform more sorts for the root
nodes of these subtrees. The optimal tradeoff point for our test case is s = 3. It is important
to note that the oversampling ratio s is a tunable parameter. The best value for s depends
on a number of factors. What our experiments show is that s ≤ 3 is sufficient for the load
balancing. However, as the data set grows in size, the time for the sorts of the root nodes
of the subtrees increases more than linearly whereas the effect on the imbalance is linear. For
substantially larger data sets, e.g. 1G rows, we expect the optimal value for s to be 2.
Figure 9 shows the PC cluster running time of our top-down data cube parallelization as
we increase the dimension of the data set from 2 to 10. Note that, the number of group-bys
that must be computed grows exponentially with respect to the dimension of the data set. In
figure 9, we observe that the parallel running time grows essentially linearly with respect to
Figure 7. PC cluster running time in seconds as a function of the data size. (Fixed parameters: Number of
processors = 8. Dimensions = 7. Experiments per data point = 5.)
Figure 8. PC cluster running time in seconds as a function of the oversampling ratio s. (Fixed parameters:
Number of processors = 8. Dimensions = 7. Experiments per data point = 5.)
Figure 9. PC cluster running time in seconds as a function of the number of dimensions. (Fixed parameters: Data
size = 200,000 rows. Number of processors = 8. Experiments per data point = 5.) Note: Work grows exponentially
with respect to the number of dimensions.
the output size. We also tried our code on very high dimensional data where the size of the
output becomes extremely large. For example, we executed our parallel algorithm for a 15-
dimensional data set of 10,000 rows, and the resulting data cube was of size more than 1G.
Figure 10 shows the PC cluster running time of our top-down data cube parallelization as
we increase the cardinality in each dimension, that is the number of different possible data
Figure 10. PC cluster running time in seconds as a function of the cardinality, i.e. the number of different possible
data values in each dimension. (Fixed parameters: Data size = 200,000 rows. Number of processors = 8.
Dimensions = 7. Experiments per data point = 5.)
Figure 11. PC cluster running time in seconds as a function of the skew of the data values in each dimension,
based on ZIPF. (Fixed parameters: Data size = 200,000 rows. Number of processors = 8. Dimensions = 7.
Experiments per data point = 5.)
values in each dimension. Recall that top-down pipesort [1] is aimed at dense data cubes.
Our experiments were performed for 3 cardinality levels: 5, 10, and 100 possible values per
dimension. The results shown in figure 10 confirm our expectation that the method performs
better for denser data.
Figure 11 shows the PC cluster running time of our top-down data cube parallelization for
data sets with skewed distribution. We used the standard ZIPF distribution in each dimension.
Since the data reduction in top-down pipesort [1] increases with
skew, the total time observed is expected to decrease with skew, which is exactly what
we observe in figure 11. Our main concern regarding our parallelization method was how
balanced the partitioning of the tree would be in the presence of skew. The main observation
in figure 11 is that the relative difference between runtime (slowest processor) and average
time does not increase as we increase the skew. This appears to indicate that our partitioning
method is robust in the presence of skew.
7. Comparison with previous results
In this section we summarize previous results on parallel data cube computation and compare
them to the results presented in this paper.
In [13, 14], the authors observe that a group-by computation is essentially a parallel prefix
computation. However, no implementation of this method is mentioned and no experimental
performance evaluation is presented. This method creates large communication overhead
and will most likely show
unsatisfactory speedup. The methods in [20, 22] as well as the methods presented in this
paper reduce communication overhead by partitioning the load and assigning sets of group-by
computations to individual processors. As discussed in [20, 22], balancing the load
assigned to different processors is a hard problem. The approach in [20] uses a simple
greedy heuristic to parallelize hash-based data cube computation. As observed in [20], this
simple method is not scalable. Load balance and speedup are not satisfactory for more
than 4 processors. A subsequent paper by the same group [31] focuses on the overlap
between multiple data cube computations in the sequential setting. The approach in [22]
considers the parallelization of sort-based data cube construction. It studies parallel bottom-up
Iceberg-cube computation. Four different methods are presented: RP, RPP, ASL, and
PT. Experimental results presented indicate that ASL and PT have the better performance
among those four. The main reason is that RP and RPP show weak load balancing. PT is
somewhat similar to our parallel bottom-up data cube construction method presented in
Section 3 since PT also partitions the bottom-up tree. However, PT partitions the bottom-up
tree simply into subtrees with equal numbers of nodes, and it requires considerably more
tasks than processors to obtain good load balance. As observed in [22], when a larger
number of tasks is required, then performance problems arise because such an approach
reduces the possibility of sharing of prefixes and sort orders between different group-by
computations. In contrast, our parallel bottom-up method in Section 3 assigns only two
tasks to each processor. These tasks are coarse grained, which greatly improves sharing of
prefixes and sort orders between different group-by computations. Therefore, we expect
that our method will not have a decrease in performance for a larger number of processors
as observed in [22]. The ASL method uses a parallel top-down approach, using a skiplist
to maintain the cells in each group-by. ASL is parallelized by making the construction
of each group-by a separate task, hoping that a large number of tasks will create a good
overall load balancing. It uses a simple greedy approach for assigning tasks to processors
that is similar to [20]. Again, as observed in [22], the large number of tasks brings with
it performance problems because it reduces the possibility of sharing of prefixes and sort
orders between different group-by computations. In contrast, our parallel top-down method
in Section 4 creates only very few coarse tasks. More precisely, our algorithm assigns s
tasks (subtrees) to each processor, where s is the oversampling ratio. As shown in Section 6,
an oversampling ratio s ≤ 3 is sufficient to obtain good load balancing. In that sense, our
method answers the open question in [22] on how to obtain good load balancing without
creating so many tasks. This is also clearly reflected in the experimental performance of
our methods in comparison to the experiments reported in [22]. As observed in [22], their
experiments (figure 10 in [22]) indicate that ASL obtains essentially zero speedup when
the number of processors is increased from 8 to 16. In contrast, our experiments (figure 6
of Section 6) show that our parallel top-down method from Section 4 still doubles its speed
when the number of processors is increased from 8 to 16 and obtains optimal speedup p
when using 16 processors.
8. Conclusion and future work
We presented two different, partitioning based, data cube parallelizations for standard shared
disk type parallel machines. Our partitioning strategies for bottom-up and top-down data
cube parallelization balance the loads assigned to the individual processors, where the loads
are measured as defined by the original proponents of the respective sequential methods.
Subcube computations are carried out using existing sequential data cube algorithms. Our
top-down partitioning strategy can also be easily extended to parallelize the ArrayCube
method. Experimental results indicate that our partitioning methods are efficient in practice.
Compared to existing parallel data cube methods, our parallelization approach brings a
significant reduction in inter-processor communication and has the important practical
benefit of enabling the re-use of existing sequential data cube code.
A possible extension of our data cube parallelization methods is to consider a shared
nothing parallel machine model. If it is possible to store a duplicate of the input data set R
on each processor's disk, then our method can be easily adapted for such an architecture. This
is clearly not always possible. However, it does cover most of those cases where the total output size is
considerably larger than the input data set; for example, sparse data cube computations. The
data cube can be several hundred times as large as R. Sufficient total disk space is necessary
to store the output (as one single copy distributed over the different disks) and a p times
duplication of R may be smaller than the output. Our data cube parallelization method would
then partition the problem in the same way as described in Sections 3 and 4, and subcube
computations would be assigned to processors in the same way as well. When computing
its subcube, each processor would read R from its local disk. For the output, there are two
alternatives. Each processor could simply write the subcubes generated to its local disk.
This could, however, create a bottleneck if there is, for example, a visualization application
following the data cube construction which needs to read a single group-by. In such a case,
each group-by should be distributed over all disks, for example in striped format. To obtain
such a data distribution, all processors would not write their subcubes directly to their local
disks but buffer their output. Whenever the buffers are full, they would be permuted over
the network. In summary we observe that, while our approach is aimed at shared disk
parallel machines, its applicability to shared nothing parallel machines depends mainly on
the distribution and availability of the input data set R. An interesting open problem is to
identify the "ideal" distribution of input R among the p processors when a fixed amount of
replication of the input data is allowed (i.e., R can be copied r times, 1 ≤ r < p).
Another interesting question for future work is the relationship between top-down and
bottom-up data cube computation in the parallel setting. These are two conceptually very
different methods. The existing literature suggests that bottom-up methods are better suited
for high dimensional data. So far, we have implemented our parallel top-down data cube
method which took about one person year of full time work. We chose to implement
the top-down method because it has more tunable parameters to be discovered through
experimentation. A possible future project could be to implement our parallel bottom-up
data cube method in a similar environment (same compiler, message passing library,
data structure libraries, disk access methods, etc.) and measure the various trade-off points
between the two methods. As indicated in [22], the critical parameters for parallel bottom-up
data cube computation are similar: good load balance and a small number of coarse
tasks. This leads us to believe that our parallel bottom-up method should perform well.
Compared to our parallel top-down method, our parallel bottom-up method has fewer
parameters available for fine-tuning the code. Therefore, the trade-off points in the parallel
setting between top-down and bottom-up methods may be different from the sequential
setting.
Relatively little work has been done on the more difficult problem of generating partial
data cubes, that is, not the entire data cube but only a given subset of group-bys. Given a
lattice and a set of selected group-bys that are to be generated, the challenge is in deciding
which other group-bys should be computed in order to minimize the total cost of computing
the partial data cube. In many cases computing intermediate group-bys that are not in the
selected set, but from which several views in the selected set can be computed cheaply, will
reduce the overall computation time. Sarawagi et al. [25] suggest an approach based on
augmenting the lattice with additional vertices (to represent all possible orderings of each
view's attributes) and additional edges (to represent all relationships between views). Then
a Minimum Steiner Tree approximation algorithm is run to identify some number of "intermediate"
nodes (so-called Steiner points) that can be added to the selected subset to "best"
reduce the overall cost. An approximation algorithm is used because the optimal Minimum
Steiner Tree problem is NP-Complete. The intermediate nodes introduced by this method
are, of course, to be drawn from the non-selected nodes in the original lattice. By adding these
additional nodes, the cost of computing the selected nodes is reduced. Although theoretically
neat, this approach is not effective in practice. The problem is that the augmented lattice
has far too many vertices and edges to be processed efficiently. For example, in a 6-dimensional
partial data cube the number of vertices and edges in the augmented lattice increase
by factors of 30 and 8684, respectively. For an 8-dimensional partial data cube the number
of vertices and edges increase by factors of 428 and 701,346, respectively. The augmented
lattice for a 9-dimensional partial data cube has more than 2,000,000,000 edges. Another
approach is clearly necessary. The authors are currently implementing new algorithms for
generating partial data cubes. We consider this an important area of future research.
Acknowledgments
The authors would like to thank Steven Blimkie, Zimmin Chen, Khoi Manh Nguyen, Thomas
Pehle, and Suganthan Sivagnanasundaram for their contributions towards the implementation
described in Section 6. The first, second, and fourth authors' research was partially
supported by the Natural Sciences and Engineering Research Council of Canada. The third
author's research was partially supported by the National Science Foundation under Grant
9988339-CCR.
--R
Introduction to Parallel Computing
Max Planck Institute
--TR
Probabilistic counting algorithms for data base applications
Optimal algorithms for tree partitioning
Introduction to parallel computing
Scalable parallel geometric algorithms for coarse grained multicomputers
Implementing data cubes efficiently
Towards efficiency and portability
An array-based algorithm for simultaneous multidimensional aggregates
Efficient external memory algorithms by simulating coarse-grained parallel algorithms
External memory algorithms
Bottom-up computation of sparse and Iceberg CUBE
Parallel virtual memory
A Shifting Algorithm for Min-Max Tree Partitioning
Iceberg-cube computation with PC clusters
Data Cube
High Performance OLAP and Data Mining on Parallel Computers
Reducing I/O Complexity by Simulating Coarse Grained Parallel Algorithms
Fast Computation of Sparse Datacubes
Storage Estimation for Multidimensional Aggregates in the Presence of Hierarchies
On the Computation of Multidimensional Aggregates
Multi-Cube Computation
BSP-Like External-Memory Computation
Bulk synchronous parallel computing-a paradigm for transportable software
A Parallel Scalable Infrastructure for OLAP and Data Mining
586511 | A unifying approach to goal-directed evaluation.

Abstract: Goal-directed evaluation, as embodied in Icon and Snobol, is built on the notions of backtracking and of generating successive results, and therefore it has always been something of a challenge to specify and implement. In this article, we address this challenge using computational monads and partial evaluation. We consider a subset of Icon and we specify it with a monadic semantics and a list monad. We then consider a spectrum of monads that also fit the bill, and we relate them to each other. For example, we derive a continuation monad as a Church encoding of the list monad. The resulting semantics coincides with Gudeman's continuation semantics of Icon. We then compile Icon programs by specializing their interpreter (i.e., by using the first Futamura projection), using type-directed partial evaluation. Through various back ends, including a run-time code generator, we generate ML code, C code, and OCaml byte code. Binding-time analysis and partial evaluation of the continuation-based interpreter automatically give rise to C programs that coincide with the result of Proebsting's optimized compiler.

1 Introduction
Goal-directed languages combine expressions that can yield multiple results
through backtracking. Results are generated one at a time: an expression can
either succeed and generate a result, or fail. If an expression fails, control is
passed to a previous expression to generate the next result, if any. If so, control
is passed back to the original expression in order to try whether it can succeed
this time. Goal-directed programming specifies the order in which subexpressions
are retried, thus providing the programmer with a succinct and powerful
control-flow mechanism. A well-known goal-directed language is Icon [11].
Backtracking as a language feature complicates both semantics and imple-
mentation. Gudeman [13] gives a continuation semantics of a goal-directed
language; continuations have also been used in implementations of languages
with control structures similar to those of goal-directed evaluation, such as Prolog
[3, 15, 30]. Proebsting and Townsend, the implementors of an Icon compiler
in Java, observe that continuations can be compiled into efficient code [1, 14],
but nevertheless dismiss them because "[they] are notoriously difficult to understand,
and few target languages directly support them" [23, p.38]. Instead, their
compiler is based on a translation scheme proposed by Proebsting [22], which
is based on the four-port model used for describing control flow in Prolog [2].
Icon expressions are translated to a flow-chart language with conditional, direct
and indirect jumps using templates; a subsequent optimization which, amongst
other things, reorders code and performs branch chaining, is necessary to produce
compact code. The reference implementation of Icon [12] compiles Icon into
byte code; this byte code is then executed by an interpreter that manages
control flow by keeping a stack of expression frames.
In this article, we present a unified approach to goal-directed evaluation:
1. We consider a spectrum of semantics for a small goal-directed language.
We relate them to each other by deriving semantics such as Gudeman's [13]
as instantiations of one generic semantics based on computational monads
[21]. This unified approach enables us to show the equivalence of
different semantics simply and systematically. Furthermore, we are able
to show strong conceptual links between different semantics: continuation
semantics can be derived from semantics based on lists or on streams of
results by Church-encoding the lists or the streams, respectively.
2. We link semantics and implementation through semantics-directed compilation
using partial evaluation [5, 17]. In particular, binding-time analysis
guides us to extract templates from the specialized interpreters. These
templates are similar to Proebsting's, and through partial evaluation, they
give rise to similar flow-chart programs, demonstrating that templates are
not just a good idea-they are intrinsic to the semantics of Icon and can
be provably derived.
The rest of the paper is structured as follows: In Section 2 we first describe
syntax and monadic semantics of a small subset of Icon; we then instantiate the
semantics with various monads, relate the resulting semantics to each other, and
present an equivalence proof for two of them. In Section 3 we describe semantics-
directed compilation for a goal-directed language. Section 4 concludes.
2 Semantics of a Subset of Icon
An intuitive explanation of goal-directed evaluation can be given in terms of lists
and list-manipulating functions. Consequently, after introducing the subset of
Icon treated in this paper, we define a monadic semantics in terms of the list
monad. We then show that a stream monad and two different continuation
monads can also be used, and we give an example of how to prove equivalence of the
resulting semantics using a monad morphism.
2.1 A subset of the Icon programming language
We consider the following subset of Icon:

  E ::= i | E1 + E2 | E1 to E2 | E1 <= E2 | if E1 then E2 else E3

Intuitively, an Icon term either fails or succeeds with a value. If it succeeds, then
subsequently it can be resumed, in which case it will again either succeed or fail.
This process ends when the expression fails. Informally, i succeeds with the value
i; E1 + E2 succeeds with the sum of the values of the sub-expressions; E1 to E2 (called a
generator) succeeds with the value of E1, and each subsequent resumption yields
the rest of the integers up to the value of E2; E1 <= E2
succeeds with the value of E2 if the value of E1 is less than or equal to it,
and fails otherwise; if E1 then E2 else E3 produces the results of
E2 if E1 succeeds, and otherwise it produces the results of E3.
Generators can be nested. For example, the Icon term 4 to (5 to 7) generates
the results of the expressions 4 to 5, 4 to 6, and 4 to 7 and concatenates
them.
In a functional language such as Scheme, ML or Haskell, we can achieve the
e#ect of Icon terms using the functions map and concat. For example, if we
define

  fun to i j = if i > j then [] else i :: to (i + 1) j

in ML, then evaluating concat (map (to 4) (to 5 7)) yields the list [4, 5, 4, 5,
6, 4, 5, 6, 7], which is the list of the integers produced by the Icon term 4
to (5 to 7).
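As a sanity check, the same list-based reading of goal-directed evaluation can be transcribed into Python (an illustrative sketch, not part of the paper; the function names are ours):

```python
def to(i, j):
    """All integers from i to j, inclusive (empty when i > j)."""
    return list(range(i, j + 1))

def concat(xss):
    """Flatten one level of nesting, like ML's List.concat."""
    return [x for xs in xss for x in xs]

# 4 to (5 to 7): first generate 5..7, then for each upper bound j, generate 4..j.
result = concat([to(4, j) for j in to(5, 7)])
print(result)  # [4, 5, 4, 5, 6, 4, 5, 6, 7]
```

The nesting of generators thus corresponds exactly to a map followed by a concatenation.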
2.2 Monads and semantics
Computational monads were introduced to structure denotational semantics [21].
The basic idea is to parameterize a semantics over a monad; many language ex-
tensions, such as adding a store or exceptions, can then be carried out by simply
instantiating the semantics with a suitable monad. Further, correspondence
  unit_M : α → α M
  map_M  : (α → β) → α M → β M
  join_M : (α M) M → α M

Figure 1: Monad operators and their types
Standard monad operations:
  unit_L x         = [x]
  map_L f []       = []
  map_L f (x::xs)  = f x :: map_L f xs
  join_L []        = []
  join_L (l::ls)   = l @ join_L ls

Special operations for sequences:
  empty_L               = []
  ifempty_L [] a b      = a
  ifempty_L (x::xs) a b = b
  append_L xs ys        = xs @ ys

Figure 2: The list monad
proofs between semantics arising from instantiation with different monads can
be conducted in a modular way, using the concept of a monad morphism [28].
Monads can also be used to structure functional programs [29]. In terms of
programming languages, a monad M is described by a unary type constructor
M and three operations unit_M, map_M, and join_M, with types as displayed in
Figure 1. For these operations, the so-called monad laws have to hold.
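As an illustrative sketch (not from the paper; names are ours), the list-monad operations of Figure 2 and two of the monad laws can be checked directly in Python:

```python
# Python transcription of the list monad: lists model sequences of results.
def unit(x):        return [x]
def fmap(f, xs):    return [f(x) for x in xs]
def join(xss):      return [x for xs in xss for x in xs]   # flatten one level

empty = []
def ifempty(xs, a, b):  return a if not xs else b
def append(xs, ys):     return xs + ys

# Two of the monad laws, checked on examples:
xs = [1, 2, 3]
assert join(fmap(unit, xs)) == xs                    # join . map unit = id
xsss = [[[1], [2]], [[3]]]
assert join(join(xsss)) == join(fmap(join, xsss))    # join . join = join . map join
```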
In Section 2.4 we give a denotational semantics of the goal-directed language
described in Section 2.1. Anticipating semantics-directed compilation by partial
evaluation, we describe the semantics in terms of ML, in effect defining an
interpreter. The semantics int_M is parameterized over a monad
M, where α M represents a sequence of values of type α.
  int_M [[i]]                      = unit_M i
  int_M [[E1 + E2]]                = bind2_M (λi.λj. unit_M (i + j)) (int_M [[E1]]) (int_M [[E2]])
  int_M [[E1 to E2]]               = bind2_M to_M (int_M [[E1]]) (int_M [[E2]])
  int_M [[E1 <= E2]]               = bind2_M leq_M (int_M [[E1]]) (int_M [[E2]])
  int_M [[if E1 then E2 else E3]]  = ifempty_M (int_M [[E1]]) (int_M [[E3]]) (int_M [[E2]])

where
  bind2_M f xs ys = join_M (map_M (λx. join_M (map_M (f x) ys)) xs)
  leq_M i j       = if i <= j then unit_M j else empty_M
  to_M i j        = if j < i then empty_M
                    else append_M (unit_M i) (to_M (i + 1) j)

Figure 3: Monadic semantics for a subset of Icon
2.3 A monad of sequences
In order to handle sequences, some structure is needed in addition to the three
generic monad operations displayed in Figure 1. We add three operations:

  empty_M   : α M
  ifempty_M : α M → β → β → β
  append_M  : α M → α M → α M

Here, empty_M stands for the empty sequence; ifempty_M is a discriminator
function that, given a sequence and two additional inputs, returns the first
input if the sequence is empty, and returns the second input otherwise; and append_M
appends two sequences.
A straightforward instance of a monad of sequences is the list monad L,
which is displayed in Figure 2; for lists, "join" is sometimes also called "flatten"
or, in ML, "concat".
2.4 A monadic semantics
A monadic semantics of the goal-directed language described in Section 2.1 is
given in Figure 3. We explain the semantics in terms of the list monad. A literal
i is interpreted as an expression that yields exactly one result; consequently, i is
mapped into the singleton list [i] using unit_L. The semantics of to, + and <= are
given in terms of bind2_L and a function of type int → int → int list. The type of
bind2_L is

  (α → β → γ list) → α list → β list → γ list,

i.e., it takes two lists containing values of type α and β, and a function mapping
α and β into a list of values of type γ. The effect of bind2_L f xs ys
is (1) to map f x over ys for each x in xs and (2) to flatten the resulting list of
lists. Both steps can be found in the example at the end of Section 2.1 of how
the effect of goal-directed evaluation can be achieved in ML using lists.
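A Python transcription of bind2_L (an illustrative sketch; names are ours) makes the two steps explicit on the running example 4 to (5 to 7):

```python
def join(xss):
    """Flatten one level of nesting, like List.concat."""
    return [x for xs in xss for x in xs]

def bind2(f, xs, ys):
    # bind2_L f xs ys: map (f x) over ys for each x in xs, then flatten twice.
    return join([join([f(x, y) for y in ys]) for x in xs])

def to_op(i, j):
    """All integers from i to j, inclusive."""
    return list(range(i, j + 1))

# int [[4 to (5 to 7)]] in the list monad:
print(bind2(to_op, [4], to_op(5, 7)))  # [4, 5, 4, 5, 6, 4, 5, 6, 7]
```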
2.5 A spectrum of semantics
In the following, we describe four possible instantiations of the semantics given
in Figure 3. Because a semantics corresponds directly to an interpreter, we thus
create four different interpreters.
2.5.1 A list-based interpreter
Instantiating the semantics with the list monad from Figure 2 yields a list-based
interpreter. In an eager language such as ML, a list-based interpreter always
computes all results. Such behavior may not be desirable in a situation where
only the first result is of interest (or, for that matter, whether there exists a
result): Consider for example the conditional, which examines whether a given
expression yields at least one result or fails. An alternative is to use laziness.
2.5.2 A stream-based interpreter
Implementing the list monad from Figure 2 in a lazy language results in a monad
of (finite) lazy lists; the corresponding interpreter generates one result at a time.
In an eager language, this effect can be achieved by explicitly implementing a
data type of streams, i.e., finite lists built lazily: a thunk is used to delay
computation.
The definition of the corresponding monad operations is straightforward.
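A minimal Python sketch of such a thunk-based stream type (illustrative; this is not the paper's ML code, and the names are ours):

```python
# A stream is either None (empty) or a pair (head, thunk),
# where forcing the thunk yields the rest of the stream.
def s_empty():
    return None

def s_to(i, j):
    """Stream of the integers i..j, computed one element at a time."""
    if i > j:
        return s_empty()
    return (i, lambda: s_to(i + 1, j))   # the tail is delayed in a thunk

def s_take(n, s):
    """Force at most n elements of the stream."""
    out = []
    while s is not None and n > 0:
        x, tail = s
        out.append(x)
        s, n = tail(), n - 1
    return out

print(s_take(2, s_to(4, 1000000)))  # [4, 5] -- only two elements are ever built
```

Because the tail is a thunk, asking for the first result of a huge generator does no more work than necessary.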
2.5.3 A continuation-based interpreter
Gudeman [13] gives a continuation-based semantics of a goal-directed language.
We can derive this semantics by instantiating our monadic semantics with the
continuation monad C as defined in Figure 4. The type constructor α C of the
continuation monad is defined as (α → R) → R, where R is called the answer
type of the continuation.
A conceptual link between the list monad and the continuation monad with
answer type α list → α list can be made through a Church encoding [4] of the
higher-order representation of lists proposed by Hughes [16]. Hughes observed
that when constructing the partially applied concatenation function λys. xs @ ys
rather than the list xs , lists can be appended in constant time. In the resulting
representation, the empty list corresponds to the function that appends no ele-
ments, i.e., the identity, whereas the function that appends a single element is
Standard monad operations:
  unit_C x   = λk. k x
  map_C f c  = λk. c (λx. k (f x))
  join_C c   = λk. c (λc'. c' k)

Special operations for sequences:
  empty_C          = λk. λxs. xs
  ifempty_C c a b  = λk. λxs. c (λx. λys. b k xs) (a k xs)
  append_C c1 c2   = λk. λxs. c1 k (c2 k xs)

Figure 4: The continuation monad
represented by a partially applied cons function:

  cons x = λys. x :: ys

Church-encoding a data type means abstracting over selector functions, in this
case "::":

  cons x = λc. λys. c x ys

The resulting representation of lists can be typed as

  (α → β → β) → β → β,

which indeed corresponds to α C with answer type β → β. Notice that nil and
cons for this list representation yield empty_C and unit_C, respectively. Similarly,
the remaining monad operations correspond to the usual list operations.
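Hughes's representation is easy to experiment with; the following Python sketch (illustrative, names ours) checks that reading the higher-order list back yields ordinary concatenation:

```python
# Hughes representation: a list xs is the function that appends xs to its argument.
h_nil = lambda ys: ys                                # appends nothing: identity
def h_cons(x, h):  return lambda ys: [x] + h(ys)     # appends x, then the rest
def h_append(h1, h2): return lambda ys: h1(h2(ys))   # composition: O(1) append

def to_list(h):
    return h([])   # supply the empty tail to read the list back

xs = h_append(h_cons(1, h_cons(2, h_nil)), h_cons(3, h_nil))
print(to_list(xs))  # [1, 2, 3]
```

Abstracting additionally over the cons operation in h_cons is exactly the Church-encoding step described above.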
Figure 5 displays the definition of int_C, where all monad operations have been
inlined and the resulting expressions β-reduced.
2.5.4 An interpreter with explicit success and failure continuations
A tail-recursive implementation of a continuation-based interpreter for Icon uses
explicit success and failure continuations. The result of interpreting an Icon
expression then has type

  (int → (unit → R) → R) → (unit → R) → R,
where the first argument is the success continuation and the second argument
the failure continuation. Note that the success continuation takes a failure continuation
as a second argument. This failure continuation determines the resumption
behavior of the Icon term: the success continuation may later on apply
[Figure 5: A continuation semantics. The figure gives the definition of int_C with the monad operations of Figure 4 inlined and the resulting expressions β-reduced, including the auxiliary function to_C.]
its failure continuation to generate more results. The corresponding continuation
monad C 2 has the same standard monad operations as the continuation
monad displayed in Figure 4, and the sequence operations

  empty_C2          = λk. λf. f ()
  ifempty_C2 c a b  = λk. λf. c (λx. λf'. b k f) (λ(). a k f)
  append_C2 c1 c2   = λk. λf. c1 k (λ(). c2 k f)
Just as the continuation monad from Figure 4 can be conceptually linked to the
list monad, the present continuation monad can be linked to the stream monad
by a Church encoding of the data type of streams, with constructors end and
more x xs. The fact that the second component in a stream is a thunk suggests
giving the selector function for more the type int → (1 → β) → β; the resulting
type for end and more x xs is then

  (int → (1 → β) → β) → (1 → β) → β.

Choosing β as the result type of the selector functions yields the type of a
continuation monad with answer type β.
The interpreter defined by the semantics int_C2 is the starting point of the
semantics-directed compilation described in Section 3. Figure 6 displays the
definition of int_C2, where all monad operations have been inlined and the resulting
expressions β-reduced. Because the basic monad operations of C2 are the same
as those of C, the semantics based on C2 and C only differ in the definitions of
leq, to, and in how if is handled.
[Figure 6: A semantics with success and failure continuations. The figure gives the definition of int_C2 with the monad operations of C2 inlined and the resulting expressions β-reduced, including the auxiliary function to_C2.]
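The shape of this two-continuation interpreter can be sketched in Python (an illustration covering only literals, + and to; the tagged-tuple term encoding and all names are ours, not the paper's ML):

```python
# eval with explicit continuations: k(value, resume) is the success
# continuation (resume is the failure continuation that generates more
# results); f() is the failure continuation.
def ev(t, k, f):
    tag = t[0]
    if tag == 'lit':
        return k(t[1], f)                      # succeed once; resuming fails
    if tag == 'plus':
        return ev(t[1], lambda i, f1:
               ev(t[2], lambda j, f2: k(i + j, f2), f1), f)
    if tag == 'to':
        def loop(i, j, f1):
            # Succeed with i; the resumption continues with i + 1.
            return k(i, lambda: loop(i + 1, j, f1)) if i <= j else f1()
        return ev(t[1], lambda i, f1:
               ev(t[2], lambda j, f2: loop(i, j, f2), f1), f)
    raise ValueError(tag)

def results(t):
    """Drive the interpreter to exhaustion, collecting every produced value."""
    out = []
    ev(t, lambda v, resume: (out.append(v), resume())[1], lambda: None)
    return out

# 4 to (5 to 7)
print(results(('to', ('lit', 4), ('to', ('lit', 5), ('lit', 7)))))
# [4, 5, 4, 5, 6, 4, 5, 6, 9] is wrong; the correct list is printed below.
```

Note how the success continuation receives the failure continuation as its second argument, which is exactly what determines the resumption behavior.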
2.6 Correctness
So far, we have related the various semantics presented in Section 2.5 only con-
ceptually. Because the four different interpreters presented in Section 2.5 were
created by instantiating one parameterized semantics with different monads, a
formal correspondence proof can be conducted in a modular way building on
the concept of a monad morphism [28].
Definition 1 If M and N are two monads, then h : α M → α N is a monad
morphism if it preserves the monad operations¹, i.e.,

  h (unit_M x)  = unit_N x
  h (map_M f m) = map_N f (h m)
  h (join_M m)  = join_N (map_N h (h m))

The following lemma shows that the semantics resulting from two different
monad instantiations can be related by defining a monad morphism between
the two sequence monads in question.
Lemma 2 Let M and N be monads of sequences as specified in Section 2.3. If
h : α M → α N is a monad morphism from M to N, then h (int_M [[E]]) = int_N [[E]]
for every Icon expression E.
¹We strengthen the definition of a monad morphism somewhat by considering a sequence-preserving monomorphism that also preserves the monad operations specific to the monad of sequences.
Proof: By induction over the structure of E, using an auxiliary lemma, itself
shown by induction, to the effect that h preserves the to operation. □
We use Lemma 2 to show that the list-based interpreter from Section 2.5.1 and
the continuation-based interpreter from Section 2.5.3 always yield comparable
results:
Proposition 3 Let show : α C → α L be defined as

  show c = c (λx. λxs. append_L (unit_L x) xs) empty_L.

Then (show ∘ int_C) [[E]] = int_L [[E]] for all Icon expressions E.

Proof: We show that (1) h : α L → α C, which is defined as

  h []      = empty_C
  h (x::xs) = append_C (unit_C x) (h xs),

is a monad morphism from L to C, and (2) the function (show ∘ h) is the identity
function on lists. The proposition then follows immediately with Lemma 2. □
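The morphism h and the function show from this proof can be checked on examples with a small Python sketch (illustrative; it uses the Church-encoded lists of Section 2.5.3, and the names are ours):

```python
# Continuation monad over list answers, and the morphism h of the proof.
c_unit  = lambda x: lambda k: lambda ys: k(x, ys)   # one result
c_empty = lambda k: lambda ys: ys                   # no results
def c_append(c1, c2):
    return lambda k: lambda ys: c1(k)(c2(k)(ys))

def h(xs):
    # h [] = empty_C;  h (x :: xs) = append_C (unit_C x) (h xs)
    return c_empty if not xs else c_append(c_unit(xs[0]), h(xs[1:]))

def show(c):
    # show c = c (fn x => fn xs => [x] @ xs) []
    return c(lambda x, ys: [x] + ys)([])

for xs in ([], [1], [3, 1, 2], list(range(10))):
    assert show(h(xs)) == xs   # show o h is the identity on lists
print("show o h = id checked")
```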
2.7 Conclusion
Taking an intuitive list-based semantics for a subset of Icon as our starting point,
we have defined a stream-based semantics and two continuation semantics. Because
our initial semantics is defined as the instantiation of a monadic semantics
with a list monad, the other semantics can be defined through a stream monad
and two di#erent continuation monads, respectively. The modularity of the
monadic semantics allows us to relate the semantics to each other by relating
the corresponding monads, both conceptually and formally. To the best of
our knowledge, the conceptual link between list-based monads and continuation
monads via Church encoding has not been observed before.
It is known that continuations can be compiled into efficient code relatively
easily [1, 14]; in the following section we show that partial evaluation is
sufficient to generate efficient code from the continuation semantics derived in
Section 2.5.4.
3 Semantics-Directed Compilation
The goal of partial evaluation is to specialize a source program p : S × D → T
of two arguments to a fixed "static" argument s : S. The result is a residual
program p_s : D → T that must yield the same result when applied to a "dynamic"
argument d : D as the original program applied to both the static and the
dynamic arguments, i.e., [[p_s]] d = [[p]] (s, d).
Our interest in partial evaluation is due to its use in semantics-directed com-
pilation: when the source program p is an interpreter and the static argument s
is a term in the domain of p, then p_s is a compiled version of s represented in the
implementation language of p. It is often possible to implement an interpreter
in a functional language based on the denotational semantics.
Our starting point is a functional interpreter implementing the denotational
semantics in Figure 6. The source language of the interpreter is shown in Figure
7. In Section 3.1 we present the Icon interpreter written in ML. In Sections
3.1, 3.2, and 3.3 we use type-directed partial evaluation to specialize this
interpreter to Icon terms, yielding ML code, C code, and OCaml byte code as
output. Other partial-evaluation techniques could be applied to yield essentially
the same results.
datatype icon
  = LIT of int
  | TO of icon * icon
  | PLUS of icon * icon
  | LEQ of icon * icon
  | IF of icon * icon * icon

Figure 7: The abstract syntax of Icon terms
3.1 Type-directed partial evaluation
We have used type-directed partial evaluation to compile Icon programs into
ML. This is a standard exercise in semantics-directed compilation using type-directed
partial evaluation [9].
Type-directed partial evaluation is an approach to off-line specialization of
higher-order programs [8]. It uses a normalization function to map the (value of
the) trivially specialized program λd. p (s, d) into the (text of the) target program.
The input to type-directed partial evaluation is a binding-time separated program
in which static and dynamic primitives are separated. When implemented
in ML, the source program is conveniently wrapped in a functor parameterized
over a structure of dynamic primitives. The functor can be instantiated with
evaluating primitives (for running the source program) and with residualizing
primitives (for specializing the source program).
3.1.1 Specializing Icon terms using type-directed partial evaluation
In our case the dynamic primitive operations are addition (add), integer comparison
(leq), a fixed-point operator (fix), a conditional functional (cond), and
a quoting function (qint) lifting static integers into the dynamic domain. The
signature of primitives is shown in Figure 8. For the residualizing primitives
we let the partial evaluator produce functions that generate ML programs with
meaningful variable names [8].
The parameterized interpreter is shown in Figure 9. The main function eval
takes an Icon term and two continuations, k : tint → (tunit → res) → res and
f : tunit → res, and yields a result of type res. We intend to specialize
the interpreter to a static Icon term, keeping the continuation parameters
k and f dynamic. Consequently, residual programs are parameterized over two
continuations. (If the continuations were also considered static then the residual
programs would simply be the list of the generated integers.)
signature PRIMITIVES =
sig
  type tunit
  type tint
  type tbool
  type res
  val qint : int -> tint
  val add  : tint * tint -> tint
  val leq  : tint * tint -> tbool
  val cond : tbool * (tunit -> res) * (tunit -> res) -> res
  val fix  : ((tint -> res) -> tint -> res) -> tint -> res
end

Figure 8: Signature of primitive operations
The output of type-directed partial evaluation is the text of the residual
program. The residual program is in long beta-eta normal form, that is, it does
not contain any beta redexes and it is fully eta-expanded with respect to its
type.
Example 4 The following is the result of specializing the interpreter with respect
to the Icon term (4 to 7).

fix (fn loop0 =>
       fn i0 =>
         cond (leq (i0, qint 7),
               fn () => k i0 (fn () => loop0 (add (i0, qint 1))),
               f))
    (qint 4)
struct
  fun loop (i, j, k) f =
      P.fix (fn walk => fn i =>
          P.cond (P.leq (i, j),
                  fn _ => k i (fn _ => walk (P.add (i, P.qint 1))),
                  f)) i

  fun select (i, j, k) f =
      P.cond (P.leq (i, j), fn _ => k j f, f)

  fun sum (i, j, k) f = k (P.add (i, j)) f

  fun eval (LIT i) k f = k (P.qint i) f
    | eval (TO (e1, e2)) k f =
        eval e1 (fn i => eval e2 (fn j => loop (i, j, k))) f
    | eval (PLUS (e1, e2)) k f =
        eval e1 (fn i => eval e2 (fn j => sum (i, j, k))) f
    | eval (LEQ (e1, e2)) k f =
        eval e1 (fn i => eval e2 (fn j => select (i, j, k))) f
    | eval (IF (e1, e2, e3)) k f =
        eval e1
          (fn _ => fn _ => eval e2 k f)
          (fn _ => eval e3 k f)
end

Figure 9: Parameterized interpreter
3.1.2 Avoiding code duplication
The result of specializing the interpreter in Figure 9 may be exponentially large.
This is due to the continuation parameter k being duplicated in the clause for
IF. For example, specializing the interpreter to the Icon term 100 + (if 1 <= 2 then … else …)
yields the following residual program, in which the context add (qint 100, [.]) is duplicated:
cond (leq (qint 1, qint 2),
Code duplication is a well-known problem in partial evaluation [17]. The
equally well-known solution is to bind the continuation in the residual program,
just before it is used. We introduce a new primitive save of two arguments, k
and g, which applies g to two "copies" of the continuation k.
  type succ = tint -> (tunit -> res) -> res
  val save : succ -> (succ * succ -> res) -> res
The final clause of the interpreter is modified to save the continuation parameter
before it proceeds, as follows.
fun eval (LIT i) k f = k (P.qint i) f
  | eval (IF (e1, e2, e3)) k f =
      save k
        (fn (k0, k1) => eval e1
            (fn _ => fn _ => eval e2 k0 f)
            (fn _ => eval e3 k1 f))
Specializing this new interpreter to the Icon term from above yields the
following residual program in which the context add(100, - ) occurs only once.
save (fn v0 =>
fn resume0 =>
k (add (qint 100, v0)) (fn () => resume0 ()))
(fn (k0_0, k1_0) =>
cond (leq (qint 1, qint 2),
Two copies of the continuation parameter k are bound to k0_0 and k1_0 before the
continuation is used (twice, in the body of the second lambda). In order just to
prevent code duplication, passing one "copy" of the continuation parameter is
actually enough. But the translation into C introduced in Section 3.2 uses the
two di#erently named variables, in this case k0_0 and k1_0, to determine the
IF-branch inside which a continuation is applied.
3.2 Generating C programs
Residual programs are not only in long beta-eta normal form. Their type
(tint → (tunit → res) → res) → (tunit → res) → res

imposes further restrictions: a residual program must take two arguments, a
success continuation of type tint → (tunit → res) → res and a failure continuation
of type tunit → res, and it must produce a value of type res. When we also
consider the types of the primitives that may occur in residual programs we see
that values of type res can only be a result of
- applying the success continuation k to an integer n and a function of type
  tunit → res;
- applying the failure continuation f;
- applying the primitive cond to a boolean and two functions of type
  tunit → res;
- applying the primitive fix to a function of two arguments, loop_n : tint → res
  and i_n : tint, and an integer;
- (inside a function passed to fix) applying the function loop_n
  to an integer;
- applying the primitive save to two arguments, the first being a function
  of two arguments, v_n : tint and resume_n : tunit → res, and the second
  being a function of a pair of arguments, k0_n and k1_n, each of type
  tint → (tunit → res) → res;
- (inside the first function passed to save) applying the function resume_n;
  or
- (inside the second function passed to save) applying one of the functions
  k0_n and k1_n to an integer and a function of type tunit → res.
A similar analysis applies to values of type tint: they can only arise from
evaluating an integer n, a variable i_n, or a variable v_n, or from applying add to
two arguments of type tint. As a result, we observe that the residual programs
obtained by specializing the Icon interpreter using type-directed partial evaluation are
restricted to the grammar in Figure 10. (The restriction that the variables
loop_n, i_n, v_n, and resume_n each must occur inside a function that binds them
cannot be expressed using a context-free grammar. This is not a problem for our
development.) We have expressed the grammar as an ML datatype and used this
datatype to represent the output from type-directed partial evaluation. Thus,
we have essentially used the type system of ML as a theorem prover to show
the following lemma.
Lemma 5 The residual program generated from applying type-directed partial
evaluation to the interpreter in Figure 9 can be generated by the grammar in
Figure 10.
The idea of generating grammars for residual programs has been studied by,
e.g., Malmkjær [20] and is used in the run-time specializer Tempo to generate
code templates [6].
S ::= k E (fn () => S)
    | f ()
    | cond (E, fn () => S, fn () => S)
    | fix (fn loop_n => fn i_n => S) E
    | loop_n E
    | save (fn v_n => fn resume_n => S) (fn (k0_n, k1_n) => S)
    | k0_n E (fn () => S) | k1_n E (fn () => S)
    | resume_n ()

E ::= qint n | i_n | v_n | add (E, E) | leq (E, E)

Figure 10: Grammar of residual programs
The simple structure of output programs allows them to be viewed as programs
of a flow-chart language. We choose C as a concrete example of such a
language. Figure 11 and 12 show the translation from residual programs to C
programs.
The translation replaces function calls with jumps. Except for the call to
resume_n (which only occurs as the result of compiling if-statements), the name
of a function uniquely determines the corresponding label to jump to. Jumps to
resume_n can end up in two different places, corresponding to the two copies of
the continuation. We use a boolean variable gate_n to distinguish between the
two possible destinations. Calls to loop_n and k_n pass arguments. The names
of the formal parameters are known (i_n and v_n, respectively), and therefore
arguments are passed by assigning the variable before the jump.
In each translation of a conditional a new label l must be generated. The
entire translated term must be wrapped in a context that defines the labels succ
and fail (corresponding to the initial continuations). The statements following
the label succ are allowed to jump to resume. The translation in Figures 11 and
12 generates a C program that successively prints the produced integers one by
one. A lemma to the effect that the translation from residual ML programs into
C is semantics-preserving would require giving semantics to C and to the subset
of ML presented in Figure 10 and then showing equivalence.
Example 6 Consider again the Icon term (4 to 7) from Example 4. It
is translated into the following C program.
        i0 = 4;
loop0:  if (i0 <= 7) goto L0;
        goto fail;
L0:     value = i0;
        goto succ;
resume: i0 = i0 + 1;
        goto loop0;
succ:   printf("%d ", value);
        goto resume;

[Figure 11: Translating residual programs into C (Statements). Each statement form S of the grammar in Figure 10 is mapped to labelled C code: continuation applications become assignments to the known formal parameters followed by gotos (to succ, fail, loop_n, succ_n, or resume_n); cond (E, fn () => S, fn () => S') becomes if (|E|) goto l; ... l: |S|; fix becomes a goto to loop_n; and jumps to resume_n are dispatched through a boolean variable gate_n. The translated term is wrapped in a context defining succ (which prints the produced value and jumps to resume) and fail.]

[Figure 12: Translating residual programs into C (Expressions). qint n, the variables i_n and v_n, and the primitives add and leq are mapped to the corresponding C expressions.]
The C target programs correspond to the target programs of Proebsting's
optimized template-based compiler [22]. In effect, we are automatically generating
flow-chart programs from the denotation of an Icon term.
3.3 Generating byte code
In the previous two sections we have developed two compilers for Icon terms,
one that generates ML programs and one that generates flow-chart programs.
In this section we unify the two by composing the first compiler with the third
author's automatic run-time code generation system for OCaml [25] and by
composing the second compiler with a hand-written compiler from flow charts
into OCaml byte code.
3.3.1 Run-time code generation in OCaml
Run-time code generation for OCaml works by a deforested composition of traditional
type-directed partial evaluation with a compiler into OCaml byte code.
Deforestation is a standard improvement in run-time code generation [6, 19, 26].
As such, it removes the need to manipulate the text of residual programs at specialization
time. As a result, instead of generating ML terms, run-time code generation
allows type-directed partial evaluation to directly generate executable
OCaml byte code.
Specializing the Icon interpreter from Figure 9 to the Icon term (4
to 7) using run-time code generation yields a residual program of about 110
byte-code instructions in which functions are implemented as closures and calls
are implemented as tail-calls. (Compiling the residual ML program using the
OCaml compiler yields about 90 byte-code instructions.)
3.3.2 Compiling flow charts into OCaml byte code
We have modified the translation in Figures 11 and 12 to produce OCaml byte-code
instructions instead of C programs. The result is an embedding of Icon
into OCaml.
Using this embedding, the Icon term (4 to 7) yields 36 byte-code instructions in which
functions are implemented as labelled blocks and calls are implemented as an
assignment (if an argument is passed) followed by a jump. This style of target
code was promoted by Steele in the first compiler for Scheme [27].
3.4 Conclusion
Translating the continuation-based denotational semantics into an interpreter
written in ML and using type-directed partial evaluation enables a standard
semantics-directed compilation from Icon terms into ML. A further compilation
of residual programs into C yields flow-chart programs corresponding to those
produced by Proebsting's Icon compiler [22].
4 Conclusions and Issues
Observing that the list monad provides the kind of backtracking embodied in
Icon, we have specified a semantics of Icon that is parameterized by this monad.
We have then considered alternative monads and proven that they also provide
a fitting semantics for Icon. Inlining the continuation monad, in particular,
yields Gudeman's continuation semantics [13].
Using partial evaluation, we have then specialized these interpreters with
respect to Icon programs, thereby compiling these programs using the first Futamura
projection. We used a combination of type-directed partial evaluation
and code generation, either to ML, to C, or to OCaml byte code. Generating
code for C, in particular, yields results similar to Proebsting's compiler [22].
Gudeman [13] shows that a continuation semantics can also deal with additional
control structures and state; we do not expect any difficulties with scaling
up the code generation accordingly. The monad of lists, on the other hand, does
not offer enough structure to deal, e.g., with state. It should be possible,
however, to create a rich enough monad by combining the list monad with other
monads such as the state monad [10, 18].
It is our observation that the traditional (in partial evaluation) generalization
of the success continuation avoids the code duplication that Proebsting
presents as problematic in his own compiler. We are also studying the results
of defunctionalizing the continuations, a la Reynolds [24], to obtain stack-based
specifications and the corresponding run-time architectures.
Acknowledgments
Thanks are due to the anonymous referees for comments
and to Andrzej Filinski for discussions. This work is supported by the ESPRIT
Working Group APPSEM (http://www.md.chalmers.se/Cs/Research/
Semantics/APPSEM/).
--R
Compiling with Continuations.
Understanding the control of Prolog programs.
On implementing Prolog in functional programming.
The Calculi of Lambda-Conversion
Tutorial notes on partial evaluation
A general approach for run-time specialization and its application to C
Representing layered monads.
The Icon Programming Language
The Implementation of the Icon Programming Language.
Denotational semantics of a goal-directed language
Representing control in the presence of first-class continuations
Prological features in a functional setting-axioms and implementations
"reverse"
Partial Evaluation and Automatic Program Generation.
Combining Monads.
Optimizing ML with run-time code generation
Abstract Interpretation of Partial-Evaluation Algorithms
Computational lambda-calculus and monads
Simple translation of goal-directed evaluation
A new implementation of the Icon language.
Definitional interpreters for higher-order programming languages
PhD thesis
Two for the price of one: composing partial evaluation and compilation.
Steele Jr.
Comprehending monads.
Monads for functional programming.
An easy implementation of pil (PROLOG in LISP).
--TR
A novel representation of lists and its application to the function "reverse"
The implementation of the Icon programming language
Computational lambda-calculus and monads
Representing control in the presence of first-class continuations
Denotational semantics of a goal-directed language
Compiling with continuations
Partial evaluation and automatic program generation
Tutorial notes on partial evaluation
Optimizing ML with run-time code generation
A general approach for run-time specialization and its application to C
Representing layered monads
ICON Programming Language
Definitional Interpreters for Higher-Order Programming Languages
Type-Directed Partial Evaluation
Semantics-Based Compiling
Combining Monads
Monads for Functional Programming
--CTR
Mitchell Wand , Dale Vaillancourt, Relating models of backtracking, ACM SIGPLAN Notices, v.39 n.9, September 2004
Dariusz Biernacki , Olivier Danvy , Chung-chieh Shan, On the static and dynamic extents of delimited continuations, Science of Computer Programming, v.60 n.3, p.274-297, May 2006 | continuations;evaluation computational monads;run-time code generation;code templates;type-directed partial evaluation |
586615 | Spatially adaptive splines for statistical linear inverse problems. | This paper introduces a new nonparametric estimator based on penalized regression splines for linear operator equations when the data are noisy. A local roughness penalty that relies on local support properties of B-splines is introduced in order to deal with spatial heterogeneity of the function to be estimated. This estimator is shown to be consistent under weak conditions on the asymptotic behaviour of the singular values of the linear operator. Furthermore, in the usual nonparametric settings, it is shown to attain optimal rates of convergence. Its good performance is then confirmed by means of a simulation study. |
Statistical linear inverse problems consist of indirect noisy observations
of a parameter (a function generally) of interest. Such problems occur
in many areas of science such as genetics with DNA sequences (Mendel-
sohn and Rice 1982), optics and astronomy with image restoration (Craig
and Brown 1986), biology and natural sciences (Tikhonov and Goncharsky
1987). Then, the data are a linear transform of an original signal f corrupted
by noise, so that we have:
where K is some known compact linear operator defined on a separable
Hilbert space H (supposed in the following to be L 2 [0; 1]; the space of
square integrable functions defined on [0,1]) and ffl i is a white noise with
unknown variance oe 2 : These problems are also called ill-posed problems
because the operator K is compact and consequently equation (1) can not
be inverted directly since K \Gamma1 is not a bounded operator. The reader is
referred to Tikhonov and Arsenin (1977) for a seminal book on ill-posed
operator equations and O'Sullivan (1986) for a review of the statistical
perspective on ill-posed problems. In the following, we will restrict ourself
to integral equations with kernel k(s;
which include deconvolution
There is a vast literature in numerical analysis (Hansen 1998, Neumaier
1998, and references therein) and in statistics dealing with inverse problems
(e.g. Wahba 1977, Mendelsohn and Rice 1982, Nychka and Cox 1989,
Abramovich and Silverman 1998, among others). Since model (1)
cannot be inverted directly, even if the data are not corrupted by noise,
one has to regularize the estimator by adding a constraint to the estimation
procedure (Tikhonov and Arsenin 1977). The regularization can be linear
and is generally based on a windowed singular value decomposition (SVD)
of K. Indeed, since K is compact, it admits the decomposition
Kf = Σ_{j≥1} λ_j ⟨f, v_j⟩ u_j,
where (u_j)_j and (v_j)_j are orthonormal bases of H
and the singular values λ_j are sorted in decreasing order, λ_1 ≥ λ_2 ≥ ⋯ > 0.
Most estimators of f proposed in the literature are based either on a finite-rank
approximation of K, with rank depending on the sample size, achieved by truncating
the basis expansions obtained by means of the SVD, or on adding
a regularization (smoothing) parameter to the eigenvalues that makes the
inversion stable. The sequence of filtering coefficients f_j controls the regularity of
the solution: f_j = λ_j²/(λ_j² + α) for the Tikhonov method and f_j = I[j ≤ k] for the
truncated SVD method (see Hansen 1998 for an exhaustive review of these
methods). The rate of decay of the singular values indicates the degree of
ill-posedness of the problem: the faster the singular values decrease,
the more ill-posed the problem is.
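As a generic numerical illustration of these two filter families (our own sketch, not code from the paper), the filter factors can be written as:

```python
def tikhonov_filter(lam, alpha):
    """Tikhonov filter factor f_j = lam^2 / (lam^2 + alpha)."""
    return lam ** 2 / (lam ** 2 + alpha)

def tsvd_filter(j, k):
    """Truncated-SVD filter factor f_j = 1 if j <= k, else 0."""
    return 1.0 if j <= k else 0.0

# rapidly decaying singular values correspond to a severely ill-posed problem
lams = [2.0 ** (-j) for j in range(1, 9)]
tik = [round(tikhonov_filter(l, 1e-3), 3) for l in lams]
tsvd = [tsvd_filter(j, 4) for j in range(1, 9)]
# both families damp the contribution of small singular values,
# smoothly (Tikhonov) versus abruptly (truncation)
```

With either filter, the regularized solution replaces 1/λ_j by f_j/λ_j in the SVD expansion.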
Nevertheless, estimators based on the SVD have two main drawbacks. On
the one hand, the basis functions depend explicitly on the operator K and
not on the function of interest. For instance, it is well known that Fourier
bases are the singular functions of K for deconvolution problems, and that
they cannot provide a parsimonious approximation of the true function
if it is smooth in some regions and oscillates rapidly in others. On
the other hand, the usual regularization procedures do not allow one to deal
with spatial heterogeneity of the function to be recovered. Several authors
have proposed spatially adaptive estimators based on wavelet decompositions
(Donoho 1995, Abramovich and Silverman 1998) that attain minimax
rates of convergence for particular operators K, such as homogeneous operators.
Our approach is quite different and relies on spline fitting with local
roughness penalties.
Until now, spatially adaptive splines were computed by means of knot
selection procedures that require sophisticated algorithms (Friedman 1991,
Stone et al. 1997, Denison et al. 1998). This paper does not address the
topic of knot selection; the estimator proposed below is a penalized
regression spline whose original idea traces back to O'Sullivan (1986) and
Ruppert and Carroll (1999). Ruppert and Carroll's method consists
in penalizing the jumps of the function at the interior knots, each jump
being controlled by a smoothing parameter, in order to manage both the
highly variable part and the smooth part of the estimator. In this article,
a similar approach is proposed. Using the facts that B-spline functions
have local supports and that the derivative of a B-spline of order q is a
combination of two B-splines of order q − 1, we are able to define local
measures of the squared norm of a given-order derivative of the function
of interest. The curvature of the estimator can thus be controlled locally
by means of smoothing parameters associated with these local measures of
roughness. Some asymptotic properties of the estimator are given. These
local penalties are controlled by local smoothing parameters whose values
must be chosen very carefully in practical situations in order to get accurate
estimates. The generalized cross-validation (GCV) criterion is widely used
for nonparametric regression and generally allows one to select "good" values
of the smoothing parameter (Green and Silverman, 1994). Unfortunately,
GCV seems to fail to select effective smoothing parameter values in the
framework of adaptive splines for inverse problems, too often giving
undersmoothed estimates. Further investigation is needed to cope with this
important practical topic, but that is beyond the scope of this paper. Nevertheless,
a small Monte Carlo experiment has been performed to show the
potential of this new approach.
The organization of the paper is as follows. In Section 2, the spatially
adaptive regression spline estimator is defined. In Section 3, upper bounds
for the rates of convergence are given. The particular case where K is the
identity (the usual nonparametric framework) is also tackled, and the spatially
adaptive estimator is shown to attain optimal rates of convergence. Then,
in Section 4, a simulation study compares the behaviour of this estimator
to the penalized regression splines proposed by O'Sullivan (1986). Finally,
Section 5 gathers the proofs.
Splus programs for carrying out the estimation are available on request.
2. SPATIALLY ADAPTIVE SPLINE ESTIMATES
The estimator proposed below is based on spline functions. Let's now
briefly recall the definition and some known properties of these functions.
Suppose that q and k are integers and let S qk be the space of spline functions
defined on [0; 1]; of order q (q 2); with k equispaced interior knots. The
set S qk is then the set of functions s defined as :
ffl s is a polynomial of degree on each interval
ffl s is continuously differentiable on [0; 1].
The space S qk is known to be of dimension q +k and one can derive a basis
by means of normalized B-splines fB q
1978 or Dierckx, 1993). These functions are non negative and have local
support:
where
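For concreteness, B-splines of order q can be evaluated with the classical Cox-de Boor recursion. The sketch below is ours (not the paper's Splus code) and exhibits the local-support and partition-of-unity properties used throughout:

```python
def bspline(j, q, knots, x):
    """Value at x of the j-th B-spline of order q (degree q-1) on a
    nondecreasing knot sequence `knots`, by the Cox-de Boor recursion."""
    if q == 1:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = right = 0.0
    if knots[j + q - 1] > knots[j]:
        left = (x - knots[j]) / (knots[j + q - 1] - knots[j]) \
               * bspline(j, q - 1, knots, x)
    if knots[j + q] > knots[j + 1]:
        right = (knots[j + q] - x) / (knots[j + q] - knots[j + 1]) \
                * bspline(j + 1, q - 1, knots, x)
    return left + right

knots = list(range(10))          # equispaced knots 0, 1, ..., 9
vals = [bspline(j, 4, knots, 4.5) for j in range(6)]
# away from the boundary, the basis sums to one (partition of unity)
print(round(sum(vals), 9))       # 1.0
```

Each basis function vanishes outside the q knot intervals it spans, which is what makes the local penalties below possible.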
Furthermore, a remarkable property of B-splines is that the derivative of a
B-spline of order q can be expressed as a linear combination of two B-splines
of order q − 1. More precisely, if B^{q−1}_kj denotes the jth normalized
B-spline of S_{(q−1)k} and B_qk is the vector
of all the B-splines of S_qk, let D_qk denote the weighted differentiation
matrix which gives the coordinates in S_{(q−1)k} of the
derivative of a function of S_qk.
Then, by iterating this process, one can easily obtain the coordinates of a
given-order derivative of a function of S_qk by applying the (k+q−m) × (k+q)
matrix Δ^{(m)} = D_{(q−m+1)k} ⋯ D_{(q−1)k} D_qk.
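For equispaced knots the spacing weights in D_qk are constant, so up to such constants Δ^(m) behaves like an m-fold product of first-difference matrices. A simplified sketch under that assumption (unit weights):

```python
def diff_matrix(n):
    """First-difference matrix D of shape (n-1, n): (D v)[r] = v[r+1] - v[r]."""
    return [[1.0 if c == r + 1 else -1.0 if c == r else 0.0
             for c in range(n)] for r in range(n - 1)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def delta(m, n):
    """m-fold difference matrix of shape (n-m, n), as a product of D's."""
    M = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for step in range(m):
        M = matmul(diff_matrix(n - step), M)
    return M

def apply(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

# second differences of a quadratic sequence are constant
print(apply(delta(2, 5), [0, 1, 4, 9, 16]))  # [2.0, 2.0, 2.0]
```

Applying delta(m, n) to the B-spline coefficients annihilates polynomials of degree below m, which is exactly the null space N_m used in the proofs.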
We consider a penalized least squares regression estimator with penalty
proportional to the weighted squared norm of a derivative of given order m
(m ≤ q − 2) of the functional coefficient, the effect of which is to express a
preference for a certain local degree of smoothness. Using the local support
properties of B-splines, this adaptive roughness penalty is controlled by
local positive smoothing parameters ρ_1, …, ρ_{q+k−m} that may
take spatial heterogeneity into account. Our penalized B-spline estimate
of f is thus defined as
f̂ = Σ_{j=1}^{q+k} θ̂_j B^q_kj,   (9)
where θ̂ is a solution of the following minimization problem:
min_{θ ∈ R^{q+k}} (1/n) Σ_{i=1}^{n} ( y_i − K(B_qk'θ)(t_i) )² + ‖ Σ_j ρ_j θ^{(m)}_j B^{q−m}_kj ‖²,   (10)
where θ^{(m)} = Δ^{(m)}θ, θ^{(m)}_j is the jth element of θ^{(m)}, and ‖·‖ denotes the usual L²[0,1] norm.
Let A_n be the n × (q+k) matrix with generic element K(B^q_kj)(t_i), and let
C_qk be the (k+q) × (k+q) matrix whose generic element is the inner product
between two B-splines:
(C_qk)_{ij} = ∫₀¹ B^q_ki(t) B^q_kj(t) dt.
Let us define
G_{n,ρ} = (1/n) A_n'A_n + Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)},
where I_ρ is the diagonal matrix with diagonal elements ρ_1, …, ρ_{q+k−m}. Then the
solution θ̂ of the minimization problem (10) is given by
θ̂ = G_{n,ρ}^{-1} (1/n) A_n' Y,   (11)
where Y is the vector of R^n with elements y_1, …, y_n.
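In matrix form the estimator is thus a generalized ridge solution. The following self-contained sketch solves a tiny system ((1/n)A'A + P)θ = (1/n)A'y by Gaussian elimination; it is a generic illustration with made-up numbers, not the paper's Splus implementation:

```python
def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def penalized_ls(A, y, P):
    """theta solving (A'A/n + P) theta = A'y/n, for a penalty matrix P."""
    n, p = len(A), len(A[0])
    G = [[sum(A[i][a] * A[i][b] for i in range(n)) / n + P[a][b]
          for b in range(p)] for a in range(p)]
    rhs = [sum(A[i][a] * y[i] for i in range(n)) / n for a in range(p)]
    return solve(G, rhs)
```

With P = 0 this is ordinary least squares; a positive definite P shrinks the solution, exactly as the local penalty matrix Δ^(m)' I_ρ C I_ρ Δ^(m) does above.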
Remark 2.1. If ρ_1 = ⋯ = ρ_{q+k−m} = ρ, then the estimator defined
by (9) is the same as the estimator proposed by O'Sullivan (1986):
min_{θ ∈ R^{q+k}} (1/n) Σ_{i=1}^{n} ( y_i − K(B_qk'θ)(t_i) )² + ρ² ‖ (B_qk'θ)^{(m)} ‖².   (12)
Furthermore, in the usual nonparametric settings (i.e. when K is the identity),
this kind of penalized regression spline has already been used for different
purposes. Kelly and Rice (1991) used them to estimate dose-response
curves nonparametrically, and Cardot and Diack (1998) demonstrated that
they can attain optimal rates of convergence. Besse et al. (1997)
performed the principal components analysis of unbalanced longitudinal
data, and Cardot (2000) studied the asymptotic convergence of
the principal components analysis of sampled noisy functional data.
Remark 2.2. The local penalty defined in (10) may be viewed as a
kind of discrete version of the continuous penalty
∫₀¹ ρ(t) ( (B_qk'θ)^{(m)}(t) )² dt,
the local roughness being continuously controlled by the function ρ(t).
3. ASYMPTOTIC RESULTS
Our convergence results are derived according to the semi-norm induced
by the linear operator K (named the K-norm in the following),
‖f‖_K = ‖Kf‖,
and the empirical norm
‖f‖²_{K,n} = (1/n) Σ_{i=1}^{n} (Kf(t_i))².
Then we have ‖f‖_K = 0 whenever f belongs to the null space of K; such an f
thus cannot be estimated. This norm allows us to measure the distance of the
estimate from the recoverable part of the function f and has been considered
by Wahba (1977). Let us define K(m) = {g : g^{(m)} = 0}. It is easily
seen that K(m) is the space of polynomial functions defined on [0,1] with
degree less than m.
To ensure the existence and the convergence of the estimator, we need
the following assumptions on the regularity of f, on the repartition of the
design points, on the moments of the noise and on the operator K.
(H.1) The function f is p times differentiable on [0,1] and f^{(p)} is bounded.
(H.2) The ε_i's are independent and distributed as ε, where E(ε) = 0 and
E(ε²) = σ² < ∞.
(H.3) Let F_n denote the empirical distribution of the design sequence
{t_i, i = 1, …, n} ⊂ [0,1], and suppose it converges to a design
measure F that has a continuous, bounded, and strictly positive density h
on [0,1]. Furthermore, let us suppose that there exists a sequence {d_n} of
positive numbers tending to zero such that
sup_t |F_n(t) − F(t)| ≤ d_n.
(H.4) The kernel k(s,t) belongs to L²([0,1] × [0,1]) and, for fixed s,
the function t ↦ k(s,t) is a continuous function whose derivative belongs to
L²[0,1].
(H.5) There exists C > 0 such that ‖Kg‖ ≥ C ‖g‖ for all g ∈ K(m).
In other words, assumption (H.5) means that the null space of K should
not contain a (non-null) polynomial whose degree is less than m. This
condition is rather weak when dealing with deconvolution problems but
excludes some operator equations such as differentiation. By assumption
(H.3), the norm of L²([0,1], dF(t)) is equivalent to the L²([0,1], dt) norm
with respect to the Lebesgue measure. Assumptions (H.5) and (H.3) ensure
the invertibility of G_{n,ρ} and hence the unicity of f̂, provided that n is
sufficiently large. Finally, assumption (H.4) is a technical assumption that
ensures a certain amount of regularity for the operator K but can be
relaxed for particular operator equations. More precisely, it implies that K
is a Hilbert-Schmidt operator, i.e. Σ_j λ_j² < ∞, where (λ_j)_j is the sequence
of singular values of K.
Let us define ρ̄ = max_j ρ_j and ρ_* = min_j ρ_j > 0. We can state
now the two main theorems of this article:
Theorem 3.1. Suppose that n tends to infinity, k → ∞ and k = o(n);
then under hypotheses (H.1)-(H.5) we have:
‖f̂ − f‖²_{K,n} = O_P( k/n + k^{−2p} + ρ̄² ).   (13)
The best upper bound,
‖f̂ − f‖²_{K,n} = O_P( n^{−2p/(2p+1)} ),   (14)
is obtained when k ∼ n^{1/(2p+1)}. There is
no strong assumption on the decay of ρ̄ as n goes to infinity; actually, ρ̄
can be as small as we want. However, it is well known that in practical situations
a too small value of ρ̄ leads to very bad estimates having undesirable
oscillations. Thus the empirical K-norm should not be considered an
effective criterion to evaluate the asymptotic performance of this estimator.
Theorem 3.2. Suppose that n tends to infinity, k → ∞ and k = o(n);
then under hypotheses (H.1)-(H.5) one has the following upper bound:
‖f̂ − f‖²_K = O_P( k/n + k^{−2p} + ρ̄² + d_n² ρ̄⁴ / ρ_*⁶ ).   (15)
Remark 3.1. The upper bounds for the empirical norm and the K-norm are
different and, surprisingly, that difference is entirely caused by the bias
term, whereas one would expect it to be the result of the variance. The
bound obtained for the K-norm error depends directly on how accurately the
empirical measure F_n of the design points approximates the true measure F.
Furthermore, a larger amount of regularization is needed for the estimator
to be convergent. For instance, if the sequence d_n decreases at the usual
rate d_n ∼ n^{−1/2} and one chooses ρ̄ ∼ n^{−(p+m)/(2p+1)} as before, then the
estimator f̂ is not consistent, since with ρ_* ∼ ρ̄ the term d_n² ρ̄⁴/ρ_*⁶
goes to infinity.
If we choose ρ̄ ∼ ρ_* ∼ n^{−(p+m)/(4p+3m)} and k ∼ n^{1/(4p+3m)}, then the
asymptotic error is
‖f̂ − f‖²_K = O_P( n^{−2p/(4p+3m)} ).   (16)
Remark 3.2. This bound may not be optimal for particular operator
equations, since the demonstration relies on general arguments without
assuming any particular decay of the singular values of K (except the
implicit conditions imposed by (H.4)). Thus it must be interpreted as an
upper bound for the rates of convergence: under assumptions (H.4) and
(H.5) on the operator K, the rate of convergence is at least the one given in
(15).
Remark 3.3. The consistency of the estimator (eq. 12) proposed by
O'Sullivan (1986) is a direct consequence of Theorem 3.2. Upper bounds
for its rates of convergence are those obtained in (15).
Remark 3.4. We have supposed that the interior knots are equispaced,
but Theorem 3.2 remains true provided that the distances between two
successive knots satisfy the asymptotic condition that the ratio of the
largest to the smallest inter-knot distance remains bounded.
Remark 3.5. In the usual nonparametric framework, where K is the identity,
the estimator f̂ = B_qk'θ̂ is defined with θ̂ the solution of
min_{θ ∈ R^{q+k}} (1/n) Σ_{i=1}^{n} ( y_i − B_qk(t_i)'θ )² + ‖ Σ_j ρ_j θ^{(m)}_j B^{q−m}_kj ‖².
Writing C_n for the matrix with generic element (1/n) Σ_i B^q_kj(t_i) B^q_kl(t_i) and
G_{n,ρ} = C_n + Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)},
the estimator θ̂ is defined as in (11). The demonstration
of the convergence of this estimator is an immediate consequence
of the convergence of (9), since it only remains to study the asymptotic
behaviour of C_n. This has already been done by Agarwal and Studden (1980),
who have shown that under (H.1), (H.2) and (H.3) we get
‖f̂ − f‖² = O_P( k/n + k^{−2p} + ρ̄² ),
and the usual optimal rates of convergence are attained if k ∼ n^{1/(2p+1)}
and ρ̄ is small enough. Note that there are no conditions on ρ_* since
C_n is a well-conditioned matrix.
4. A SIMULATION STUDY
In this section, a small Monte Carlo experiment has been performed in
order to compare the behaviour of the two estimators defined in Section 2.
We have simulated ns = 100 samples, each composed of
noisy measurements of the convoluted function at equidistant design points
in [0,1]:
y_i = ∫₀¹ k(s, t_i) f(s) ds + ε_i.   (18)
FIG. 1. (a) Convoluted function Kf and its noisy observation. (b) True function
f, adaptive spline and O'Sullivan's penalized spline estimates with median fit.
The noise ε has a Gaussian distribution with standard deviation 0.2, so that
the signal-to-noise ratio is 1 to 8. The integral equation (18) is approximated
in practice by means of a quadrature rule. The function f is drawn in Fig.
1; it is flat in some regions and oscillates in others.
We need to choose the smoothing parameter values to compute the estimates.
These tuning parameters, which control the regularity of the estimators,
are numerous: the number k of knots, the order q of the splines,
the order m of derivation involved in the roughness penalty, and the vector
ρ of local smoothing parameters. Fortunately, all these
parameters do not have the same importance in controlling the behaviour of the
estimators. Indeed, it appears in the usual nonparametric settings that
the most crucial parameters are the elements of ρ, which are regularization
parameters. The number of knots and their locations are of minor
importance (Eilers and Marx 1996, Besse et al. 1997), provided they are
numerous enough to capture the variability of the true function f. The
order m of the derivative used in the roughness penalty (10) is rather
important, since two different values of m lead to two different estimators.
It may have a mechanical interpretation, and its value can be chosen by the
practitioners. Here, it was fixed to m = 2 and the order of the splines to
q = 4, as is the case in many applications in the literature. We consider
a set of k = 40 equispaced knots in [0,1] to build the estimator, and thus we
deal with 44 × 44 square matrices. Nevertheless, the number of smoothing
parameters remains very large: ρ ∈ R^42.
To face this problem, we used the
method proposed by Ruppert and Carroll (1999), which consists in selecting
a subset of N_k (N_k < k) smoothing parameters ρ_{j_1}, …, ρ_{j_{N_k}},
including the "edges" ρ_1 and ρ_{q+k−m}. The criterion (GCV,
AIC, …) used to select the smoothing parameter values is then optimized
over this subset of variables, the values of the other smoothing
parameters being determined by linear interpolation on the index: for j_l < j < j_{l+1},
ρ_j = ρ_{j_l} + (j − j_l)(ρ_{j_{l+1}} − ρ_{j_l})/(j_{l+1} − j_l).
This subset of smoothing parameters
may be chosen a priori if one has some a priori knowledge of the spatial
variability of the true function, but in the following we will consider N_k
equispaced "quantile" smoothing parameters.
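The filling-in step can be sketched as follows; interpolating on the log scale is our own choice for this illustration (a common practical choice for positive tuning parameters), not necessarily the paper's exact formula:

```python
import math

def fill_rhos(anchors, k):
    """Given anchor pairs (index, rho) whose indices include 1 and k,
    fill rho_1..rho_k by linear interpolation of log(rho) in the index."""
    anchors = sorted(anchors)
    rho = {j: r for j, r in anchors}
    for (j0, r0), (j1, r1) in zip(anchors, anchors[1:]):
        for j in range(j0 + 1, j1):
            w = (j - j0) / (j1 - j0)
            rho[j] = math.exp((1 - w) * math.log(r0) + w * math.log(r1))
    return [rho[j] for j in range(1, k + 1)]

# three anchors, five parameters: more smoothing in the middle
rhos = fill_rhos([(1, 1e-4), (3, 1e-2), (5, 1e-4)], 5)
# rhos is approximately [1e-4, 1e-3, 1e-2, 1e-3, 1e-4]
```

Only the anchor values enter the search criterion, which keeps the optimization problem low-dimensional.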
FIG. 2. Boxplot of the mean square error of estimation for each estimate. The
median error is 0.23 for the penalized spline and 0.08 for the adaptive spline.
We first consider a generalized cross-validation criterion in order to
choose the values of ρ, because it is computationally fast, widely used as
an automatic procedure, and has been proved to be efficient in many statistical
settings (Green and Silverman, 1994). Unfortunately, it seems to fail
here (when there is more than one smoothing parameter) and systematically
gives too small smoothing parameter values, which lead to undersmoothed
estimates. Actually, we think it would be better to consider a penalized
version of the GCV that takes into account the number of smoothing parameters,
and our future work will go in that direction.
Thus, we have defined the exact empirical risk
R(f̂) = (1/n) Σ_{i=1}^{n} ( f̂(t_i) − f(t_i) )²
in order to evaluate the accuracy of the estimate f̂. Smoothing parameter
values are chosen by minimizing the above risk, so that we compare the best
attainable penalized spline estimators defined in (12) and adaptive spline estimators
from the samples.
Boxplots of this empirical risk are drawn in Fig. 2 and show that the use
of local penalties may lead to substantial improvements of the estimate:
the median error is 0.08 for the adaptive spline, whereas it is 0.23 for
the penalized spline. The penalized spline estimates, whose curvature is
only controlled by one parameter, cannot manage both the flat regions and
the oscillatory regions of the function f. That is why undesirable oscillations
of the penalized spline estimate appear in the intervals [0, 0.2] and [0.7, 1],
whereas the use of local smoothing parameters allows one to cope efficiently
with this problem (see Fig. 1).
5. PROOFS
Let us decompose the mean square error into a squared bias and a variance
term according to the x-norm, where x stands successively for {K, n} and {K}:
E ‖f̂ − f‖²_x = ‖E f̂ − f‖²_x + E ‖f̂ − E f̂‖²_x,
and let us study each term separately. Technical lemmas are gathered at
the end of the section.
5.1. Bias term
5.1.1. Empirical bias
Define f_k = B_qk'θ_k with θ_k = G_{n,ρ}^{-1} (1/n) A_n' E(Y). Furthermore,
it is easy to show that θ_k is the solution of the minimization
problem
min_{θ ∈ R^{q+k}} (1/n) Σ_{i=1}^{n} ( E(y_i) − K(B_qk'θ)(t_i) )² + ‖ Σ_j ρ_j θ^{(m)}_j B^{q−m}_kj ‖².   (20)
Criterion (20) can be written equivalently with the empirical K-norm:
min_θ ‖f − B_qk'θ‖²_{K,n} + ‖ Σ_j ρ_j θ^{(m)}_j B^{q−m}_kj ‖².
From Theorem XII.1 in de Boor (1978) and regularity assumption (H.1),
there exists s = B_qk'θ_s ∈ S_qk such that
sup_{t ∈ [0,1]} |f(t) − s(t)| ≤ C k^{−p},   (21)
where the constant C does not depend on k.
Furthermore, we have
‖f − s‖²_{K,n} ≤ C k^{−2p} (1/n) Σ_{i=1}^{n} ∫₀¹ k(u, t_i)² du ≤ C' k^{−2p},
because lim_n (1/n) Σ_i ∫₀¹ k(u, t_i)² du < +∞ by
assumptions (H.3) and (H.4).
On the other hand, since f_k is the solution of (20), we have
‖f − f_k‖²_{K,n} ≤ ‖f − s‖²_{K,n} + ‖ Σ_j ρ_j (θ_s)^{(m)}_j B^{q−m}_kj ‖².
From Lemma 5.2 and the boundedness of the derivatives of s, the penalty
term is O(ρ̄²). Finally, the empirical bias is bounded as follows:
‖E f̂ − f‖²_{K,n} = O( k^{−2p} + ρ̄² ).
5.1.2. K-norm bias
Let f_ρ = B_qk'θ_ρ denote the solution of the following optimization
problem:
min_{θ ∈ R^{q+k}} ‖f − B_qk'θ‖²_K + ‖ Σ_j ρ_j θ^{(m)}_j B^{q−m}_kj ‖².   (23)
Since f_ρ is the solution of (23), using the continuity of K, one gets with
Lemma 5.2 that
‖f − f_ρ‖²_K = O( k^{−2p} + ρ̄² ),
where the function s defined in (21) is used as a competitor.
Writing now
‖E f̂ − f‖_K ≤ ‖f − f_ρ‖_K + ‖E f̂ − f_ρ‖_K,
it remains to study the last term of the right side of this inequality to
complete the proof. Let K_k denote the (q+k) × (q+k) matrix with
elements ⟨KB^q_ki, KB^q_kj⟩ and
G_ρ = K_k + Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)}.
We have E f̂ = B_qk' E(θ̂) with
E(θ̂) = G_{n,ρ}^{-1} (1/n) A_n' E(Y). It is easy to check from classical
interpolation theory that ‖θ_ρ‖ remains bounded.
Pursuing the calculus begun in (25) and appealing to Lemmas 5.1 and
5.3, one gets
‖E f̂ − f_ρ‖²_K = O( d_n² ρ̄⁴ / ρ_*⁶ ),
which completes the proof.
5.2. Variance
5.2.1. Empirical variance
Let K_{n,k} denote the (q+k) × (q+k) matrix with elements ⟨KB^q_ki, KB^q_kj⟩_n.
It is exactly the matrix (1/n) A_n'A_n. Before embarking on the calculus, let us
notice that Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)} is a nonnegative
matrix, and hence
tr( G_{n,ρ}^{-1} K_{n,k} ) ≤ q + k.
Furthermore, the largest eigenvalue of G_{n,ρ}^{-1} K_{n,k} is less than one, and thus
for any (q+k) × (q+k) nonnegative matrix A one has tr( G_{n,ρ}^{-1} K_{n,k} A ) ≤ tr(A)
(et al. 1998, Lemma 6.5). Thus, under (H.2), the empirical
variance term is bounded as follows:
E ‖f̂ − E f̂‖²_{K,n} = (σ²/n) tr( G_{n,ρ}^{-1} K_{n,k} G_{n,ρ}^{-1} K_{n,k} )
≤ (σ²/n) tr( G_{n,ρ}^{-1} K_{n,k} ) ≤ σ² (q+k)/n.
5.2.2. K-norm variance
Using the same decomposition as in (27), one obtains readily
E ‖f̂ − E f̂‖²_K = (σ²/n) tr( G_{n,ρ}^{-1} K_k G_{n,ρ}^{-1} K_{n,k} ).
On the other hand, one can easily check with Lemma 5.1 that
tr( G_{n,ρ}^{-1} (K_k − K_{n,k}) G_{n,ρ}^{-1} K_{n,k} ) = o( tr( G_{n,ρ}^{-1} K_{n,k} ) ).
Using equations (28), (29) and the condition k = o(n), one finally gets
E ‖f̂ − E f̂‖²_K = O( k/n ).
5.3. Technical Lemmas
Lemma 5.1. Under assumptions (H.3) and (H.4), one has
sup_{g ∈ S_qk} | ‖g‖²_{K,n} − ‖g‖²_K | / ‖g‖² = O(d_n).
Proof. Let g be a function of S_qk; then
‖g‖²_{K,n} − ‖g‖²_K = ∫ (Kg)²(t) d(F_n − F)(t).
By integration by parts followed by the Hölder inequality, and invoking
assumption (H.3), we get
| ‖g‖²_{K,n} − ‖g‖²_K | ≤ 2 d_n ∫₀¹ |Kg(t)| |DKg(t)| dt,
where DK is the operator with kernel ∂k/∂t(s, ·).
On the other hand, from Lemma 5.3 we have ‖Kg‖ ≤ C ‖g‖
and, with similar arguments, since by assumption (H.4) DK
is a bounded operator, we also get ‖DKg‖ ≤ C ‖g‖.
Matching the previous remarks, one finally obtains the desired result:
| ‖g‖²_{K,n} − ‖g‖²_K | = O(d_n) ‖g‖².
Let N_m denote the null space of Δ^{(m)}.
Lemma 5.2. There exist two positive constants c_1 and c_2 such that, for every u ∈ R^{q+k},
u' Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)} u ≤ c_1 ρ̄² k^{−1} ‖Δ^{(m)}u‖²,
while, for every u orthogonal to N_m,
u' Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)} u ≥ c_2 ρ_*² k^{−1} ‖u‖².
Proof.
The Grammian matrix C_{(q−m)k} is positive and, from Agarwal and Studden
(1980), there exist two positive constants c_3 and c_4 such that
c_3 k^{−1} ‖v‖² ≤ v' C_{(q−m)k} v ≤ c_4 k^{−1} ‖v‖².   (31)
It is then easy to check that the matrix I_ρ C_{(q−m)k} I_ρ is positive, that its
largest eigenvalue is bounded by a multiple of ρ̄² k^{−1}, and that its smallest
eigenvalue is bounded below by a multiple of ρ_*² k^{−1}; it remains to study
‖Δ^{(m)}u‖² to complete the proof.
The first point follows directly from the bound on the largest eigenvalue.
Suppose now that u is orthogonal to N_m. Writing the weighted difference as a
first-order difference scaled by the inverse knot spacing, and remembering
that Δ^{(m)} is obtained by iterating the differentiation
process, one can express the sum of the squared differences in matrix form,
u' L u, where the matrix L is a kind of discretized Laplacian matrix whose
eigenvalues are 2[1 − cos( jπ/(k+q) )], j = 0, …, k+q−1 (Graybill, 1969).
The null space of L is spanned by the constant vector, and its
smallest non-null eigenvalue is proportional to (k+q)^{−2}. Hence, for u
orthogonal to N_m, ‖Δ^{(m)}u‖² ≥ c ‖u‖².
Since the smallest eigenvalue of I_ρ C_{(q−m)k} I_ρ is bounded below by a
multiple of ρ_*² k^{−1}, one gets the desired result for m = 1; the proof is
completed by iterating this calculus for m > 1.
Lemma 5.3. (i) ‖K_k‖ = O(k^{−1}); (ii) the smallest eigenvalue of G_{n,ρ}
satisfies λ_min(G_{n,ρ}) ≥ c ρ_*² k^{−1} for n large enough.
Proof.
Let K* denote the adjoint operator of K; then, for θ ∈ R^{q+k},
θ' K_k θ = ‖K(B_qk'θ)‖² ≤ ‖K*K‖ ‖B_qk'θ‖² ≤ C θ' C_qk θ.
By inequality (31), one gets ‖C_qk‖ ≤ c_4 k^{−1}, and ‖K*K‖ is
bounded since K is continuous; the first point is now complete.
Let us recall that G_ρ = K_k + Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)}, and decompose any
θ ∈ R^{q+k} as θ = θ_1 + θ_2 with θ_1 ∈ N_m and θ_2 orthogonal to N_m.
From Lemma 5.2, one gets
θ' Δ^{(m)'} I_ρ C_{(q−m)k} I_ρ Δ^{(m)} θ ≥ c ρ_*² k^{−1} ‖θ_2‖².
Then, under (H.5), one has
θ_1' K_k θ_1 ≥ C ‖B_qk'θ_1‖² ≥ c' k^{−1} ‖θ_1‖²,
and thus the smallest eigenvalue of G_ρ satisfies λ_min(G_ρ) ≥ c ρ_*² k^{−1}.
Writing now G_{n,ρ} − G_ρ = (1/n) A_n'A_n − K_k,
we get with Lemma 5.1 that ‖G_{n,ρ} − G_ρ‖ = O(d_n k^{−1}).
Consequently the smallest eigenvalue of G_{n,ρ} satisfies
λ_min(G_{n,ρ}) ≥ c ρ_*² k^{−1} − O(d_n k^{−1}),
and then λ_min(G_{n,ρ}) ≥ c' ρ_*² k^{−1} for n large enough.
--R
Wavelet decomposition approaches to statistical inverse problems.
Asymptotic integrated mean square error using least squares and bias minimizing splines.
Simultaneous nonparametric regressions of unbalanced longitudinal data.
Convergence en moyenne quadratique de l'estimateur de la régression par splines hybrides.
Nonparametric estimation of smoothed principal components analysis of sampled noisy functions.
Inverse Problems in Astronomy.
A Practical Guide to Splines.
Automatic Bayesian curve fitting.
Curve and Surface Fitting with Splines.
Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition
Flexible Smoothing with B-splines and Penalties (with discussion)
Multivariate adaptive regression splines (with discussion).
Introduction to Matrices With Applications in Statistics.
Nonparametric Regression and Generalized Linear Models.
Monotone smoothing with application to dose-response curves and the assessment of synergism
Deconvolution of Microfluorometric Histograms with B-splines
Solving Ill-Conditioned and Singular Linear Systems: a Tutorial on Regularization
Convergence rates for regularized solutions of integral equations from discrete noisy data.
A Statistical Perspective on Ill-Posed Inverse Problems
Polynomial splines and their tensor product in extended linear modeling.
Solutions of Ill-posed problems
Practical Approximate Solutions to Linear Operator Equations when the Data are Noisy.
Local Asymptotics for Regression Splines and Confidence Regions.
--TR
Curve and surface fitting with splines
Simultaneous non-parametric regressions of unbalanced longitudinal data
Solving Ill-Conditioned and Singular Linear Systems
Rank-deficient and discrete ill-posed problems
--CTR
Hervé Cardot, Pascal Sarda, Estimation in generalized linear models for functional data via penalized likelihood, Journal of Multivariate Analysis, v.92 n.1, p.24-41, January 2005 | deconvolution;convergence;linear inverse problems;regularization;local roughness penalties;spatially adaptive estimators;regression splines;integral equations |
586618 | The deepest regression method. | Deepest regression (DR) is a method for linear regression introduced by P. J. Rousseeuw and M. Hubert (1999, J. Amer. Statist. Assoc. 94, 388-402). The DR method is defined as the fit with largest regression depth relative to the data. In this paper we show that DR is a robust method, with breakdown value that converges almost surely to 1/3 in any dimension. We construct an approximate algorithm for fast computation of DR in more than two dimensions. From the distribution of the regression depth we derive tests for the true unknown parameters in the linear regression model. Moreover, we construct simultaneous confidence regions based on bootstrapped estimates. We also use the maximal regression depth to construct a test for linearity versus convexity/concavity. We extend regression depth and deepest regression to more general models. We apply DR to polynomial regression and show that the deepest polynomial regression has breakdown value 1/3. Finally, DR is applied to the Michaelis-Menten model of enzyme kinetics, where it resolves a long-standing ambiguity. | Introduction
Consider a dataset Z_n = {z_i = (x_{i1}, …, x_{i,p−1}, y_i); i = 1, …, n} ⊂ R^p. In linear regression
we want to fit a hyperplane of the form
y = θ_1 x_1 + ⋯ + θ_{p−1} x_{p−1} + θ_p.
We denote the x-part of each data point z_i by x_i = (x_{i1}, …, x_{i,p−1})'. The
residuals of Z_n relative to the fit θ = (θ_1, …, θ_p)' are denoted as
r_i(θ) = y_i − θ_1 x_{i1} − ⋯ − θ_{p−1} x_{i,p−1} − θ_p.
To measure the quality of a fit, Rousseeuw and Hubert [16] introduced the notion of
regression depth.
Research Assistant with the FWO, Belgium.
Postdoctoral Fellow at the FWO, Belgium.
Definition 1. The regression depth of a candidate fit θ ∈ R^p relative to a dataset Z_n ⊂ R^p
is given by
rdepth(θ, Z_n) = min_{u,v} [ #{ i : r_i(θ) ≥ 0 and u'x_i < v } + #{ i : r_i(θ) ≤ 0 and u'x_i > v } ],
where the minimum is over all unit vectors u ∈ R^{p−1} and all v ∈ R with u'x_i ≠ v for all i.
The regression depth of a fit θ ∈ R^p relative to the dataset Z_n ⊂ R^p is thus the
smallest number of observations that need to be passed when tilting θ until it becomes
vertical. Therefore, we always have 0 ≤ rdepth(θ, Z_n) ≤ n.
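In simple regression (p = 2) the minimum over (u, v) can be computed by scanning candidate pivots between consecutive x-values. The brute-force sketch below is our own illustration of Definition 1, not the approximate algorithm of Section 3:

```python
def rdepth(theta, pts):
    """Regression depth of the fit y = a + b*x relative to pts (p = 2).
    Residuals equal to zero count on both sides."""
    a, b = theta
    r = [y - (a + b * x) for x, y in pts]
    xs = sorted({x for x, _ in pts})
    # candidate pivots: outside the data and between consecutive x-values
    pivots = ([xs[0] - 1.0] + [(s + t) / 2 for s, t in zip(xs, xs[1:])]
              + [xs[-1] + 1.0])
    best = len(pts)
    for u in pivots:
        pl = sum(1 for (x, _), ri in zip(pts, r) if x < u and ri >= 0)
        ml = sum(1 for (x, _), ri in zip(pts, r) if x < u and ri <= 0)
        pr = sum(1 for (x, _), ri in zip(pts, r) if x > u and ri >= 0)
        mr = sum(1 for (x, _), ri in zip(pts, r) if x > u and ri <= 0)
        best = min(best, pl + mr, ml + pr)  # both tilting directions
    return best

pts = [(1, 1), (2, 2), (3, 3)]
print(rdepth((0, 1), pts))  # 3: the exact fit cannot be tilted without passing every point
print(rdepth((0, 0), pts))  # 0: a line strictly below the data is a nonfit
```

Maximizing this quantity over candidate fits gives a brute-force version of the deepest regression defined next.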
In the special case $p = 1$ there are no $x$-values, and $Z_n$ is a univariate dataset. For
any $\theta \in \mathbb{R}$ we then have $\mathrm{rdepth}(\theta; Z_n) = \min( \#\{ y_i \leq \theta \}, \#\{ y_i \geq \theta \} )$, which is the
'rank' of $\theta$ when we rank from the outside inwards. For any $p \geq 1$, the regression depth of $\theta$
measures how balanced the dataset $Z_n$ is about the linear fit determined by $\theta$. It can easily
be verified that regression depth is scale invariant, regression invariant and affine invariant
according to the definitions in Rousseeuw and Leroy ([17, page 116]).
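In the univariate case the outside-inward rank is straightforward to compute; a minimal sketch (the function name is ours, not from the paper):

```python
def univariate_depth(theta, y):
    """Regression depth for p = 1: the 'rank from the outside inwards',
    i.e. min(#{y_i <= theta}, #{y_i >= theta})."""
    below = sum(1 for yi in y if yi <= theta)
    above = sum(1 for yi in y if yi >= theta)
    return min(below, above)

y = [1.0, 2.0, 3.0, 4.0, 100.0]
# The median 3.0 is deepest; the outlier 100.0 has depth 1.
print(univariate_depth(3.0, y))    # 3
print(univariate_depth(100.0, y))  # 1
```

The median maximizes this depth, which is exactly the sense in which the deepest regression generalizes the univariate median.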
Based on the notion of regression depth, Rousseeuw and Hubert [16] introduced the
deepest regression estimator (DR) for robust linear regression. In Section 2 we give the
definition of DR and its basic properties. We show that DR is a robust method with
breakdown value that converges almost surely to 1/3 in any dimension, when the good data
come from a large semiparametric model. Section 3 proposes the fast approximate algorithm
MEDSWEEP to compute DR in higher dimensions ($p \geq 3$). Based on the distribution of
the regression depth function, inference for the parameters is derived in Section 4. Tests
and confidence regions for the true unknown parameters $\tilde\theta$ are constructed. We also
propose a test for linearity versus convexity of the dataset Z n based on the maximal depth
of Z n . Applications of deepest regression to specific models are given in Section 5. First
we consider polynomial regression, for which we update the definition of regression depth
and then compute the deepest regression accordingly. We show that the deepest polynomial
regression always has breakdown value at least 1/3. We also apply the deepest regression
to the Michaelis-Menten model, where it provides a solution to the problem of ambiguous
results obtained from the two commonly used parametrizations.
2 Definition and properties of deepest regression
Definition 2. In $p$ dimensions the deepest regression estimator $\mathrm{DR}(Z_n)$ is defined as the
fit $\theta$ with maximal $\mathrm{rdepth}(\theta; Z_n)$, that is
$$\mathrm{DR}(Z_n) = \operatorname{argmax}_{\theta \in \mathbb{R}^p} \mathrm{rdepth}(\theta; Z_n). \qquad (2)$$
(See Rousseeuw and Hubert [16].) Since the regression depth of a fit $\theta$ can only increase
if we slightly tilt the fit until it passes through $p$ observations (while not passing any other
observations), it suffices to consider all fits through $p$ data points in Definition 2. If several
of these fits have the same (maximal) regression depth, we take their average. Note that no
distributional assumptions are made to define the deepest regression estimator of a dataset.
The DR is a regression, scale, and affine equivariant estimator. For a univariate dataset, the
deepest regression is its median. The DR thus generalizes the univariate median to linear
regression.
In the population case, let $(x^t, y)^t$ be a random $p$-dimensional variable, with distribution
$H$ on $\mathbb{R}^p$. Then $\mathrm{rdepth}(\theta; H)$ is defined as the smallest amount of probability mass that
needs to be passed when tilting $\theta$ in any way until it is vertical. The deepest regression
$\mathrm{DR}(H)$ is the fit $\theta$ with maximal depth. The natural setting of deepest regression is a
large semiparametric model $\mathcal{H}$ in which the functional form is parametric and the error
distribution is nonparametric. Formally, $\mathcal{H}$ consists of all distributions $H$ on $\mathbb{R}^p$ that satisfy
the following condition:
(H) $H$ has a strictly positive density and there exists a $\tilde\theta = \tilde\theta(H)$ such that
$\mathrm{med}_H(y \mid x) = \tilde\theta_1 x_1 + \cdots + \tilde\theta_{p-1} x_{p-1} + \tilde\theta_p$.
Note that this model allows for skewed error distributions and heteroscedasticity. Van Aelst
and Rousseeuw [22] have shown that the DR is a Fisher-consistent estimator of $\mathrm{med}(y \mid x)$
when the good data come from the natural semiparametric model H. The asymptotic
distribution of the deepest regression was obtained by He and Portnoy [9] in simple regression,
and by Bai and He [2] in multiple regression.
Figure 1 shows the Educational Spending data, obtained from the DASL library at
http://lib.stat.cmu.edu/DASL. This dataset lists the expenditures per pupil versus the
average salary paid to teachers for regions in the US. Two example fits both have
regression depth 2, whereas the deepest regression $\mathrm{DR}(Z_n) = (0.17, -0.51)^t$ is the
average of fits with depth 23.
[Figure 1: Educational spending data (average salary versus expenditures), with two lines of depth 2 and the deepest regression line $\mathrm{DR}(Z_n)$, the average of fits with depth 23.]
Figure 1 illustrates that lines with high
regression depth fit the data better than lines with low depth. The regression depth thus
measures the quality of a fit, which motivates our interest in the deepest regression $\mathrm{DR}(Z_n)$.
We define the finite-sample breakdown value $\varepsilon^*_n$ of an estimator $T_n$ as the smallest fraction
of contamination that can be added to any dataset $Z_n$ such that $T_n$ explodes (see also Donoho
and Gasko [6]). Let us consider an actual dataset $Z_n$. Denote by $Z_{n+m}$ the dataset formed
by adding $m$ observations to $Z_n$. Then the breakdown value is defined as
$$\varepsilon^*_n(T_n; Z_n) = \min \left\{ \frac{m}{n+m} ;\; \sup_{Z_{n+m}} \| T_n(Z_{n+m}) \| = \infty \right\}.$$
The breakdown value of the deepest regression is always positive, but it can be quite low
when the original data are themselves peculiar (Rousseeuw and Hubert [16]).
Fortunately, it turns out that if the original data are drawn from the model, then the
breakdown value converges almost surely to $1/3$ in any dimension $p$.
Theorem 1. Let $Z_n$ be a sample from a distribution $H \in \mathcal{H}$ on $\mathbb{R}^p$ ($p \geq 2$). Then
$$\varepsilon^*_n(\mathrm{DR}; Z_n) \xrightarrow{\ a.s.\ } \frac{1}{3}. \qquad (3)$$
(All proofs are given in the Appendix.) Theorem 1 says that the deepest regression does
not break down when at least 67% of the data are generated from the semiparametric model
H while the remaining data (i.e., up to 33% of the points) may be anything. This result holds
in any dimension. The DR is thus robust to leverage points as well as to vertical outliers.
Moreover, Theorem 1 illustrates that the deepest regression is different from $L_1$ regression,
which is defined as $L_1(Z_n) = \operatorname{argmin}_\theta \sum_{i=1}^n |r_i(\theta)|$. Note that $L_1$ is another generalization of
the univariate median to regression, but with zero breakdown value due to its vulnerability
to leverage points.
In simple regression, Van Aelst and Rousseeuw [22] derived the influence function of the
DR for elliptical distributions and computed the corresponding sensitivity functions. The
influence functions of the DR slope and intercept are piecewise smooth and bounded, meaning
that an outlier cannot affect DR too much, and the corresponding sensitivity functions
show that this already holds for small sample sizes.
The deepest regression also inherits a monotone equivariance property from the univariate
median, which does not hold for L 1 or other estimators such as least squares, least trimmed
squares (Rousseeuw [14]) or S-estimators (Rousseeuw and Yohai [20]). By definition, the
regression depth only depends on the $x_i$ and the signs of the residuals. This allows for
monotone transformations of the response $y$. Assume the functional model is
$$y = g(\theta_1 x_1 + \cdots + \theta_{p-1} x_{p-1} + \theta_p) \qquad (4)$$
with $g$ a strictly monotone link function. Typical examples of $g$ include the logarithmic, the
exponential, the square root, the square and the reciprocal transformation. The regression
depth of a nonlinear fit (4) is defined as in (1) but with $r_i(\theta) = g^{-1}(y_i) - \theta_1 x_{i1} - \cdots - \theta_{p-1} x_{i,p-1} - \theta_p$.
Due to the monotone equivariance, the deepest regression fit can be obtained as follows.
First we put $\tilde y_i = g^{-1}(y_i)$ and determine the deepest linear regression $\hat\theta$ of
the transformed data $\{ (x_i^t, \tilde y_i)^t ;\; i = 1, \ldots, n \}$.
Then we can backtransform the deepest linear regression $\hat\theta$,
yielding the deepest nonlinear regression fit $\hat y = g(\hat\theta_1 x_1 + \cdots + \hat\theta_{p-1} x_{p-1} + \hat\theta_p)$
to the original data.
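The transform-fit-backtransform recipe can be illustrated as follows. This is a sketch only: an ordinary least-squares line stands in for the deepest regression (the recipe itself is the same for any line fitter), and the exponential link is our own example choice:

```python
import math

def fit_line(xs, ys):
    """Least-squares line fit, standing in for the deepest regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Model y = g(theta1*x + theta2) with g = exp, so g^{-1} = log.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.exp(2.0 * x + 1.0) for x in xs]   # exact data, theta = (2, 1)

y_tilde = [math.log(y) for y in ys]          # transform: g^{-1}(y_i)
t1, t2 = fit_line(xs, y_tilde)               # fit on the transformed data
yhat = lambda x: math.exp(t1 * x + t2)       # backtransform the fitted line

print(round(t1, 6), round(t2, 6))  # recovers (2.0, 1.0) on exact data
```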
3 Computation
In $p = 2$ dimensions the regression depth can be computed in $O(n \log n)$ time with the
algorithm described in (Rousseeuw and Hubert [16]). To compute the regression depth of a
fit in $p > 2$ dimensions, Rousseeuw and Struyf [18] constructed exact algorithms
with time complexity $O(n^{p-1} \log n)$. For datasets with large $n$ and/or $p$ they also give
an approximate algorithm that computes the regression depth of a fit in $O(mp^3 +
mn \log n)$ time. Here $m$ is the number of $(p-1)$-subsets in $x$-space used in the algorithm.
The algorithm is exact when all $\binom{n}{p-1}$ such subsets are considered.
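For $p = 2$ the counting behind Definition 1 can be written down directly; a naive $O(n^2)$ sketch (the helper is ours, far from the $O(n \log n)$ algorithm of [16]):

```python
def rdepth2(slope, intercept, pts):
    """Regression depth of the line y = slope*x + intercept for a
    bivariate dataset: the smallest number of points passed when
    tilting the line around a cutpoint v until it becomes vertical
    (brute force over all cutpoints, naive O(n^2))."""
    r = [(x, y - (slope * x + intercept)) for x, y in pts]
    xs = sorted(set(x for x, _ in r))
    # candidate cutpoints: outside the data and between consecutive x's
    cuts = [xs[0] - 1.0] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] \
           + [xs[-1] + 1.0]
    depth = len(pts)
    for v in cuts:
        a = sum(1 for x, ri in r if ri >= 0 and x <= v) + \
            sum(1 for x, ri in r if ri <= 0 and x > v)
        b = sum(1 for x, ri in r if ri <= 0 and x <= v) + \
            sum(1 for x, ri in r if ri >= 0 and x > v)
        depth = min(depth, a, b)
    return depth

pts = [(1, 1), (2, 2), (3, 3), (4, 4)]
print(rdepth2(1.0, 0.0, pts))   # 4: all points lie on the line
print(rdepth2(0.0, -5.0, pts))  # 0: all residuals positive -> nonfit
```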
A naive exact algorithm for the deepest regression computes the regression depth of all
fits through $p$ observations and keeps the one(s) with maximal depth. This yields a
very high total time complexity, which is very slow for large $n$ and/or high $p$. Even
if we use the approximate algorithm of Rousseeuw and Struyf [18] to compute the depth of
each fit, the time complexity remains very high. In simple regression, collaborative work
with several specialists of computational geometry yielded an exact algorithm of complexity
$O(n \log^2 n)$, i.e. little more than linear time (van Kreveld et al. [24]). To speed up the
computation in higher dimensions, we will now construct the fast algorithm MEDSWEEP
to approximate the deepest regression.
The MEDSWEEP algorithm is based on regression through the origin. For regression
through the origin, Rousseeuw and Hubert [16] defined the regression depth (denoted as
$\mathrm{rdepth}_0$) by requiring $v = 0$ in Definition 1. Therefore, the $\mathrm{rdepth}_0(\theta)$ of a fit $\theta \in \mathbb{R}^p$
relative to a dataset $Z_n \subset \mathbb{R}^{p+1}$ is again the smallest number of observations that needs
to be passed when tilting $\theta$ in any way until it becomes vertical. Rousseeuw and Hubert
[16] have shown that in the special case of a regression line through the origin ($p = 1$), the
deepest regression $\mathrm{DR}_0$ of the dataset $Z_n = \{ (x_i, y_i) ;\; i = 1, \ldots, n \}$ is given by the slope
$$\hat\theta_1 = \operatorname{med}_i \left( \frac{y_i}{x_i} \right) \qquad (5)$$
where observations with $x_i = 0$ are not used. This estimator has minimax bias (Martin,
Yohai and Zamar [11]) and can be computed in $O(n)$ time.
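Estimator (5) is essentially a one-liner; a sketch (the helper name is ours):

```python
from statistics import median

def deepest_slope_origin(pts):
    """Deepest regression line through the origin (p = 1), eq. (5):
    the median of the slopes y_i / x_i, skipping points with x_i == 0."""
    return median(y / x for x, y in pts if x != 0)

# Points on y = 2x with one vertical outlier and one point at x = 0.
pts = [(1, 2), (2, 4), (3, 6), (4, 8), (5, 60), (0, 7)]
print(deepest_slope_origin(pts))  # 2.0
```

The median of the ratios shrugs off the outlier at (5, 60), illustrating the robustness that motivates using (5) as a building block.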
We propose a sweeping method based on the estimator (5) to approximate the deepest
regression in higher dimensions. Suppose we have a dataset $Z_n = \{ (x_{i1}, \ldots, x_{i,p-1}, y_i) ;\; i = 1, \ldots, n \}$. We arrange the $n$ observations as rows in a $n \times p$ matrix
$(X_1, \ldots, X_{p-1}, Y)$ where the $X_j$ and $Y$ are $n$-dimensional column vectors.
Step 1: In the first step we construct the sweeping variables $X^S_1, \ldots, X^S_{p-1}$. We start
with $X^S_1 = X_1$. To obtain $X^S_j$ ($j > 1$) we successively sweep $X^S_1, \ldots, X^S_{j-1}$ out of the original
variable $X_j$. In general, to sweep $X^S_k$ out of $X_l$ ($k < l$) we compute the slope
$$\hat a_{kl} = \operatorname{med}_{i \in J} \left( \frac{ x_{il} - \operatorname{med}_m (x_{ml}) }{ x^S_{ik} - \operatorname{med}_m (x^S_{mk}) } \right) \qquad (6)$$
where $J$ is the collection of indices $i$ for which the denominator is different from zero, and
then we replace $X_l$ by $X_l - \hat a_{kl} X^S_k$. We now sweep the next variable $X^S_{k+1}$ out of this new $X_l$. If
$k + 1 = l$ we put $X^S_l$ equal to the current $X_l$. Thus we obtain the sweeping variables
$X^S_1, \ldots, X^S_{p-1}$.
Step 2: In the second step we successively sweep $X^S_1, \ldots, X^S_{p-1}$ out of $Y$. Put $Y^S = Y$. For
$k = 1, \ldots, p-1$ we compute the slope
$$\hat\beta_k = \operatorname{med}_{i \in J} \left( \frac{ y^S_i - \operatorname{med}_m (y^S_m) }{ x^S_{ik} - \operatorname{med}_m (x^S_{mk}) } \right) \qquad (7)$$
with $J$ as before, and we replace $Y^S$ by
$$Y^S - \hat\beta_k X^S_k. \qquad (8)$$
The process (7)-(8) is iterated until convergence is reached. In each iteration step all the
coefficients $\hat\beta_1, \ldots, \hat\beta_{p-1}$ are updated. The maximal number of iterations has been set to
100 because experiments have shown that even when convergence is slow, after 100 iteration
steps we are already close to the optimum. After the iteration process, we take the median
of $Y^S$ to be the intercept $\hat\theta_p$.
Step 3: By backtransforming $\hat\beta_1, \ldots, \hat\beta_{p-1}$ we obtain the regression coefficients
$\hat\theta_1, \ldots, \hat\theta_{p-1}$ corresponding to the original variables $X_1, \ldots, X_{p-1}$. The obtained fit $\hat\theta$ is then slightly
adjusted until it passes through $p$ observations, because we know that this can only improve
the depth of the fit. We start by making the smallest absolute residual zero. Then for each
of the $p - 1$ remaining directions we tilt the fit in that direction until it passes an observation
while not changing the sign of any other residual. This yields the fit $\hat\theta$.
Step 4: In the last step we approximate the depth of the final fit $\hat\theta$. Let $u^S_1, \ldots, u^S_{p-1}$
be the directions corresponding to the variables $X^S_1, \ldots, X^S_{p-1}$; then we compute the minimum
over $u \in \{ \pm u^S_1, \ldots, \pm u^S_{p-1} \}$ instead of over all unit
vectors in the right hand side of expression (1).
Since computing the median takes $O(n)$ time, the first step of the algorithm needs $O(p^2 n)$
time and the second step takes $O(hpn)$ time where $h$ is the number of iterations. The adjustments
in step 3 also take $O(p^2 n)$ time, and computing the approximate depth in the last
step can be done in $O(pn \log n)$ time. The time complexity of the MEDSWEEP algorithm
thus becomes $O(p^2 n + hpn + pn \log n)$, which is very low.
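A compact sketch of the sweeping idea (simplified: only the response sweeps (7)-(8) are iterated, and the depth-improving adjustment of step 3 is omitted; all names are ours):

```python
from statistics import median

def medsweep_sketch(X, y, max_iter=100, tol=1e-12):
    """Simplified MEDSWEEP-style fit: repeatedly sweep each regressor
    out of the response with median-of-ratios slopes; the intercept is
    the median of the final residuals. X is a list of column lists."""
    n, p1 = len(y), len(X)
    beta = [0.0] * p1
    ys = list(y)
    for _ in range(max_iter):
        delta = 0.0
        for k in range(p1):
            mx = median(X[k])
            my = median(ys)
            ratios = [(ys[i] - my) / (X[k][i] - mx)
                      for i in range(n) if X[k][i] != mx]
            b = median(ratios)
            beta[k] += b                      # accumulate the slope update
            ys = [ys[i] - b * X[k][i] for i in range(n)]
            delta = max(delta, abs(b))
        if delta < tol:                       # all updates negligible
            break
    return beta, median(ys)

# Exact data y = 2x + 1: one sweep already finds the true slope.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 7.0]
y = [2 * xi + 1 for xi in x]
beta, intercept = medsweep_sketch([x], y)
print(beta, intercept)  # [2.0] 1.0
```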
To measure the performance of our algorithm we carried out the following simulation. For
different values of $p$ and $n$ we generated 10,000 samples $Z_n = \{ (x_{i1}, \ldots, x_{i,p-1}, y_i) ;\; i = 1, \ldots, n \}$ from the standard gaussian distribution. For each of these samples we computed
the deepest regression with the MEDSWEEP algorithm and measured the
total time needed for these 10,000 estimates. For each $n$ and $p$ we also computed the bias
of the intercept, which is the average of the 10,000 intercepts, and the bias of the vector of
the slopes, which we measure by
$$\left( (\operatorname{ave}_k \hat\theta^{(k)}_1)^2 + \cdots + (\operatorname{ave}_k \hat\theta^{(k)}_{p-1})^2 \right)^{1/2} \qquad (9)$$
since the true values are $\tilde\theta_j = 0$. We also give the mean squared error of the vector of the slopes, given by
$$\frac{1}{10{,}000} \sum_{k=1}^{10{,}000} \left( (\hat\theta^{(k)}_1)^2 + \cdots + (\hat\theta^{(k)}_{p-1})^2 \right) \qquad (10)$$
and the mean squared error of the intercept, given by $\frac{1}{10{,}000} \sum_{k=1}^{10{,}000} (\hat\theta^{(k)}_p)^2$.
Table 1 lists the bias and mean squared error of the vector of the slopes, while the bias and mean squared error of the intercept are given in Table 2. Note that the bias and mean squared error of the slope vector and the intercept
are low, and that they decrease with increasing n. From Tables 1 and 2 we also see that the
mean squared error does not seem to increase with p.
Table 1: Bias (9) and mean squared error (10) of the DR slope vector, obtained by generating
10,000 standard gaussian samples for each $n$ and $p$. The DR fits were obtained with
the MEDSWEEP algorithm. The results for the bias have to be multiplied by $10^{-4}$ and the
results for the MSE by $10^{-3}$.
50 bias 18.01 18.27 28.53 22.47 21.42
MSE 54.55 52.32 52.23 52.54 58.65
100 bias 7.56 8.41 11.16 8.98 12.68
MSE 24.59 25.32 25.99 26.15 27.17
300 bias 8.11 5.98 6.29 7.39 10.89
MSE 8.11 8.27 8.26 8.41 8.42
500 bias 0.44 1.68 2.91 9.10 5.49
MSE 4.93 4.89 5.02 4.93 4.97
1000 bias 6.34 3.58 6.77 3.54 3.98
MSE 2.47 2.51 2.46 2.46 2.50
Table 3 lists the average time the MEDSWEEP algorithm needed for the computation of the DR, on a Sun SparcStation 20/514. We see that the algorithm is very fast.
To illustrate the MEDSWEEP algorithm we generated 50 points in 5 dimensions, according
to a linear model with errors $e_i$ coming from the standard gaussian distribution. The DR fit
obtained with MEDSWEEP has approximate depth 21; the algorithm needed 14 iterations
till convergence. In a second example, we generated 50 points according to a second linear
model; after a number of iterations the MEDSWEEP algorithm yielded a fit with approximate
depth 21. Note that in both cases the coefficients obtained by the algorithm
approximate the true parameters in the model very well. The MEDSWEEP algorithm
is available from our website http://win-www.uia.ac.be/u/statis/ where its use is explained.
Table 2: Bias and mean squared error of the DR intercepts, obtained by generating
10,000 standard gaussian samples for each $n$ and $p$. The DR fits were obtained with the
MEDSWEEP algorithm. The results for the bias have to be multiplied by $10^{-4}$ and the
results for the MSE by $10^{-3}$.
50 bias -3.04 -48.70 14.38 13.23 -19.64
MSE
100 bias 5.75 -3.32 12.21 9.92 -1.70
MSE 15.78 16.01 16.92 16.77 18.14
300 bias -2.71 -7.99 -2.37 3.49 4.54
MSE 5.25 5.31 5.22 5.23 5.47
500 bias 1.35 -5.53 -4.33 -4.26 2.39
MSE 3.09 3.15 3.21 3.21 3.18
1000 bias -3.76 -3.40 -5.10 0.75 -0.01
MSE 1.56 1.56 1.55 1.58 1.62
4 Inference
4.1 Tests for parameters
In simple regression, the semiparametric model assumptions of condition (H) state that
$\mathrm{med}(e_i \mid x_i) = 0$ and that the errors $e_i$ are independent with $P(e_i > 0 \mid x_i) = 1/2$. Under the
null hypothesis it is possible to compute $F_n(k) := P(\mathrm{rdepth}(\tilde\theta; Z_n) \leq k)$ where $Z_n$ has the same $\{x_i\}$ as the actual dataset $Z_n$. By
the invariance properties,
$$\mathrm{rdepth}(\tilde\theta; Z_n) \sim \mathrm{rdepth}((0, 0)^t; \{ (x_i, e_i) ;\; i = 1, \ldots, n \}) \qquad (11)$$
where the $e_i$ are i.i.d. from (say) the standard gaussian. Thus we can compute $F_n(k)$ by
simulating (11). When there are no ties among the $x_i$ we can even compute $F_n(k)$ explicitly
from formula (4.4) in Daniels (1954), yielding the expression (12)
Table 3: Computation time (in seconds) of the MEDSWEEP algorithm for a sample of size $n$ with $p$ dimensions. Each time is an average over 10,000 samples.
for $k \leq [(n - 1)/2]$, and $F_n(k) = 1$ otherwise. Each term in (12) is a
probability of the binomial distribution $B(n, 1/2)$, which stems from the number of $e_i$
with a particular sign. For increasing $n$ we can approximate $B(n, 1/2)$ by a
gaussian distribution due to the central limit theorem, so (12) can easily be extended to
large $n$.
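The building block here is the lower tail of $B(n, 1/2)$ and its gaussian approximation; a sketch (not Daniels' formula (12) itself, just the binomial/CLT ingredient it relies on):

```python
import math

def binom_cdf_half(n, k):
    """P(X <= k) for X ~ B(n, 1/2), computed exactly."""
    return sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n

def binom_cdf_half_clt(n, k):
    """Gaussian approximation (with continuity correction) to the same
    probability, usable for large n where the exact sum is unwieldy."""
    z = (k + 0.5 - n / 2) / math.sqrt(n / 4)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(binom_cdf_half(10, 5))                # 0.623046875 exactly
print(round(binom_cdf_half_clt(10, 5), 3))  # close to the exact value
```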
The distribution of the regression depth allows us to test one or several regression
coefficients. Under a combined null hypothesis on $\tilde\theta$ we compute the maximal regression
depth $k$ attainable; the corresponding p-value equals $F_n(k)$ and can be computed from (12).
Consider the dataset in Figure 2 about species of animals (Van den Bergh [23]). The
plot shows the logarithm of the weight of a newborn versus the logarithm of the weight of
its mother. The deepest regression line has depth 19. For this dataset the test of the
combined hypothesis yields a p-value $F_{41}(k)$ which is highly significant.
To test the significance of the slope we consider $H_0 : \tilde\theta_1 = 0$. This is easy, because
we only have to compute the rdepth of all horizontal lines
passing through an observation. For the animal dataset, the maximal $\mathrm{rdepth}((0, \theta_2)^t)$
equals 5. Therefore the corresponding p-value is $P(\mathrm{rdepth}(\tilde\theta) \leq 5) = F_{41}(5) = 0.00002$,
so $H_0$ is rejected. This p-value 0.00002 should be interpreted in the same way as the p-value
associated with $R^2$ or the F-test in LS regression. Analogously, we can test $\tilde\theta_2 = 0$
by considering all lines through the origin and an observation,
yielding a p-value $F_{41}(k)$ which is also highly significant.
More generally, we can test the hypothesis $H_0 : \tilde\theta_1 = c$ by computing
the maximal regression depth of all lines with $\theta_1 = c$ that pass through an observation.
[Figure 2: Logarithm of the weight of a newborn versus the logarithm of the weight of its mother for species of animals, with the DR line which has depth 19.]
For example, to test the hypothesis $H_0 : \tilde\theta_1 = 1$ for the animal data (i.e., the hypothesis
that the weight of a newborn is proportional to the weight of its mother) we compute
the maximal regression depth over all such lines, and the corresponding p-value $F_{41}(k)$ is
significant at the 5% level but not at the 1% level.
These tests generalize easily to higher dimensions and situations with ties among the $x_i$,
but then we can no longer use the exact formula (12), which is restricted to the bivariate case
without ties in $\{x_i\}$. Therefore, in these cases we compute $F_n(k)$ by simulating (11).
Let us consider the stock return dataset (Benderly and Zwick [3]) shown in Figure 3 with
$n = 28$ observations in $p = 3$ dimensions. The regressors are output growth and inflation
(both as percentages), and the response is the real return on stocks. The deepest regression
plane obtained with the MEDSWEEP algorithm has depth 11. To test the null hypothesis
$(\tilde\theta_1, \tilde\theta_2) = (0, 0)$ that both slopes are zero (this
would be done with the $R^2$ in LS regression) we compute the maximal $\mathrm{rdepth}((0, 0, \theta_3)^t)$
over all $\theta_3$ (i.e. over all $y_i$ in the dataset). By computing the exact rdepth of these 28 horizontal
planes (by the fast algorithm of Rousseeuw and Struyf [18]) we obtain the value 6.
Simulation yields the corresponding p-value $F_{28}(6)$, which is not significant. To test
the significance of the intercept we consider $H_0 : \tilde\theta_3 = 0$.
That is, we compute the depth of all planes through two observations and the origin.
For the example this yields a maximal depth of 11 with corresponding p-value
$F_{28}(11) \approx 1$, which is not at all significant.
4.2 Confidence regions
In order to construct a confidence region for the unknown true parameter vector $\tilde\theta$ we
use a bootstrap method. Starting from the dataset $Z_n = \{ z_i ;\; i = 1, \ldots, n \} \subset \mathbb{R}^p$, we construct a bootstrap sample by randomly drawing $n$ observations, with
replacement. For each bootstrap sample $Z^{(j)}$ ($j = 1, \ldots, m$) we compute its deepest regression
$\hat\theta^{(j)}$. Note that there will usually be a few outlying estimates $\hat\theta^{(j)}$ in the set of
bootstrap estimates $\{ \hat\theta^{(1)}, \ldots, \hat\theta^{(m)} \}$, which is natural since some bootstrap samples contain
disproportionally many outliers. Therefore we don't construct a confidence ellipsoid based
on the classical mean and covariance matrix of the $\{ \hat\theta^{(j)} \}$, but we use the robust
minimum covariance determinant estimator (MCD) proposed by Rousseeuw [14,15].
The MCD looks for the $h \geq n/2$ observations of which the empirical covariance matrix
has the smallest possible determinant. Then the center $\hat\mu$ of the dataset is
defined as the average of these $h$ points, and the covariance matrix $\hat\Sigma$ of the dataset is a
certain multiple of their covariance matrix. To obtain a confidence ellipsoid of level $\alpha$ we
compute the MCD of the set of bootstrap estimates with $h = \lceil (1 - \alpha) m \rceil$. The
confidence ellipsoid $E_{1-\alpha}$ is then given by
$$E_{1-\alpha} = \{ \theta ;\; \mathrm{RD}(\theta) \leq \mathrm{RD}_{(h)} \}$$
where $\mathrm{RD}_{(h)}$ is the $h$-th order statistic of the robust distances of the
$\hat\theta^{(j)}$, and the robust distance (Rousseeuw and Leroy
[17]) of a bootstrap estimate $\hat\theta^{(j)}$ is given by
$$\mathrm{RD}(\hat\theta^{(j)}) = \sqrt{ (\hat\theta^{(j)} - \hat\mu)^t\, \hat\Sigma^{-1}\, (\hat\theta^{(j)} - \hat\mu) }.$$
From this confidence ellipsoid $E_{1-\alpha}$ in fit space we can also derive the corresponding
regression confidence region for the fitted value $\hat y$, defined as
$$R_{1-\alpha} = \{ (x^t, \hat y)^t ;\; \hat y = \theta_1 x_1 + \cdots + \theta_{p-1} x_{p-1} + \theta_p \text{ for some } \theta \in E_{1-\alpha} \}. \qquad (16)$$
Theorem 2. The region R 1\Gammaff equals the set
Let us consider the Educational Spending data of Figure 1. Figure 4a shows the deepest
regression estimates of 1000 bootstrap samples, drawn with replacement from the original
data. Using the fast MCD algorithm of Rousseeuw and Van Driessen [19] we find the center
in Figure 4a to be $(0.19, -0.95)^t$, which corresponds well to the DR fit of
the original data. As a confidence region for $(\tilde\theta_1, \tilde\theta_2)$ we take the 95% tolerance ellipse $E_{0.95}$
based on the MCD center and scatter matrix, which yields the corresponding confidence
region $R_{0.95}$ shown in Figure 4b. Note that the intersection of this confidence region with
a vertical line $x = x_0$ is not a 95% probability interval for an observation $y$ at $x_0$. It is the
interval spanned by the fitted values $\hat y = \theta_1 x_0 + \theta_2$ for all $(\theta_1, \theta_2)$ in a 95% confidence region
for $(\tilde\theta_1, \tilde\theta_2)$.
An example of a confidence region in higher dimensions is shown in Figure 3. It shows
the 3-dimensional stock return dataset with its deepest regression plane, obtained with
the MEDSWEEP algorithm. The 95% confidence region shown in Figure 3 was based on
bootstrap samples.
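The resampling loop of this section can be sketched as follows. This is a sketch only: a median-of-pairwise-slopes line stands in for the exact DR computation, and we summarize the bootstrap cloud by its coordinatewise median rather than the full MCD ellipsoid; all helper names are ours:

```python
import random
from statistics import median

def robust_line(pts):
    """Stand-in line fitter: median of pairwise slopes, then median
    intercept (not the exact deepest regression)."""
    slopes = [(y2 - y1) / (x2 - x1)
              for i, (x1, y1) in enumerate(pts)
              for (x2, y2) in pts[i + 1:] if x2 != x1]
    b = median(slopes)
    a = median(y - b * x for x, y in pts)
    return b, a

def bootstrap_fits(pts, m, seed=0):
    """Fit each of m bootstrap resamples (drawn with replacement)."""
    rng = random.Random(seed)
    return [robust_line([rng.choice(pts) for _ in pts]) for _ in range(m)]

# Exact data on y = 2x + 1: every resample returns the fit (2, 1).
pts = [(float(i), 2.0 * i + 1.0) for i in range(1, 21)]
fits = bootstrap_fits(pts, m=200)
center = (median(b for b, _ in fits), median(a for _, a in fits))
print(center)  # (2.0, 1.0)
```

In a real implementation each resample would be fitted with the deepest regression and the cloud of estimates summarized by the MCD, as described above.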
4.3 Test for linearity in simple regression
If the observations of the bivariate dataset $Z_n$ lie exactly on a straight line, then
$\max_\theta \mathrm{rdepth}(\theta; Z_n) = n$, which is the highest possible value. On the other hand, if $Z_n$ lies exactly
on a strictly convex or strictly concave curve, then $\max_\theta \mathrm{rdepth}(\theta; Z_n) = \lceil n/3 \rceil$ is at its lowest
(Rousseeuw and Hubert [16, Theorem 2]). Therefore, $\max_\theta \mathrm{rdepth}(\theta; Z_n)$ can be seen as a
measure of linearity for the dataset $Z_n$. Note that this lower bound does not depend on the
amount of curvature when the $(x_i, y_i)$ lie exactly on the curve. However, as soon as there
is noise (i.e. nearly always), the relative sizes of the error scale and the curvature come into
play.
The null hypothesis assumes that the dataset $Z_n$ follows the linear model
$y_i = \tilde\theta_1 x_i + \tilde\theta_2 + e_i$ for some $\tilde\theta$ and with independent errors $e_i$ each having a distribution with zero
median. To determine the corresponding p-value we generate
[Figure 3: Stock return dataset (output growth, inflation, stock return) with its deepest regression plane, and the upper and lower surface of the 95% confidence region $R_{0.95}$ based on bootstrap samples.]
Table 4: Windmill data: maximal rdepth $k$ with corresponding p-value $p(k)$ obtained by simulation.
$m$ datasets $Z^{(j)} = \{ (x_i, y^{(j)}_i) ;\; i = 1, \ldots, n \}$ with the same $x$-values as in the dataset $Z_n$ and with standard gaussian
$e_i$. For each $j$ we compute the maximal regression depth of the dataset $Z^{(j)}$. Then for each
value $k$ we approximate the p-value $P(\max \mathrm{rdepth} \leq k \mid H_0)$ by the fraction
$$\#\{ j ;\; \max_\theta \mathrm{rdepth}(\theta; Z^{(j)}) \leq k \} / m. \qquad (17)$$
For example, let us consider the windmill dataset (Hand et al. [8]) which consists of 25
measures of wind velocity with corresponding direct current output, as shown in Figure 5.
The p-values in Table 4 were obtained from (17). For the actual dataset the maximal
regression depth is low enough that we reject the linearity at the 5% level.
[Figure 4: (a) DR estimates of 1,000 bootstrap samples from the Educational Spending data with the 95% confidence ellipse $E_{0.95}$; (b) plot of the data with the DR line and the 95% confidence region $R_{0.95}$ for the fitted value.]
5 Specific models
5.1 Polynomial regression
Consider a dataset $Z_n = \{ (x_i, y_i) ;\; i = 1, \ldots, n \} \subset \mathbb{R}^2$. The polynomial regression model
wants to fit the data by $y = \theta_1 x + \theta_2 x^2 + \cdots + \theta_k x^k + \theta_{k+1}$, where $k$ is called the degree of
the polynomial. The residuals of $Z_n$ relative to the fit $\theta = (\theta_1, \ldots, \theta_{k+1})^t$ are denoted as
$r_i(\theta) = y_i - \theta_1 x_i - \cdots - \theta_k x_i^k - \theta_{k+1}$.
We could consider this to be a multiple regression problem with regressors $(x, x^2, \ldots, x^k)$
and determine the depth of a fit $\theta$ as in Definition 1. But we know that the joint distribution
of $(x, x^2, \ldots, x^k)$ is degenerate (i.e. it does not have a density), so many properties of the
deepest regression, such as Theorem 1 about the breakdown value, would not hold in this
case. In fact, the set of possible $x$-vectors forms a so-called moment curve in $\mathbb{R}^k$, so it is inherently
one-dimensional.
A better way to define the depth of a polynomial fit (denoted by $\mathrm{rdepth}_k$) is to update
the definition of regression depth in simple regression, where $x$ was also univariate. For each
polynomial fit $\theta$ we can compute its residuals $r_i(\theta)$ and define its regression depth as
$$\mathrm{rdepth}_k(\theta; Z_n) = \min_{v} \min\big( \#\{ i ;\; r_i(\theta) \geq 0, x_i \leq v \} + \#\{ i ;\; r_i(\theta) \leq 0, x_i > v \},\ \#\{ i ;\; r_i(\theta) \leq 0, x_i \leq v \} + \#\{ i ;\; r_i(\theta) \geq 0, x_i > v \} \big) \qquad (18)$$
where $v$ ranges over $\mathbb{R}$. Note that this definition
only depends on the $x_i$-values and the signs of the residuals. With this definition of depth,
we define the deepest polynomial of degree $k$ as in (2) and denote it by $\mathrm{DR}_k(Z_n)$.
Theorem 3. For any dataset $Z_n = \{ (x_i, y_i) ;\; i = 1, \ldots, n \} \subset \mathbb{R}^2$ with distinct $x_i$, the deepest
polynomial regression of degree $k$ has breakdown value at least approximately $1/3$.
Theorem 3 shows that with the definition of depth of a polynomial fit given in (18), the
deepest polynomial regression has a positive breakdown value of approximately 1/3, so it is
robust to vertical outliers as well as to leverage points.
In Section 4.3 we rejected the linearity of the windmill data. Therefore we now consider
a model with a quadratic component for this data, i.e. $y = \theta_1 x + \theta_2 x^2 + \theta_3$. Figure 5 shows
the windmill data with the deepest quadratic fit, which has depth 12.
[Figure 5: Windmill data (wind velocity versus direct current output) with the deepest quadratic fit, which has regression depth 12.]
5.2 Michaelis-Menten model
In the field of enzyme kinetics, the steady-state kinetics of the great majority of the enzyme-catalyzed reactions that have been studied are adequately described by a hyperbolic relationship
between the concentration $s$ of a substrate and the steady-state velocity $v$. This
relationship is expressed by the Michaelis-Menten equation
$$v = \frac{v_{\max}\, s}{K_m + s} \qquad (20)$$
where the constant $v_{\max}$ is the maximum velocity and $K_m$ is the Michaelis constant. The
Michaelis-Menten equation has been linearized by rewriting it in the following three ways:
$$\frac{v}{s} = \frac{v_{\max}}{K_m} - \frac{v}{K_m} \qquad (21)$$
$$\frac{1}{v} = \frac{1}{v_{\max}} + \frac{K_m}{v_{\max}} \cdot \frac{1}{s} \qquad (22)$$
$$\frac{s}{v} = \frac{K_m}{v_{\max}} + \frac{s}{v_{\max}} \qquad (23)$$
which are known as the Scatchard equation (Scatchard [21]), the double reciprocal equation
(Lineweaver and Burk [10]) and the Woolf equation (Haldane [7]). Each of the three relations
(21), (22), (23) can be used to estimate the constants $v_{\max}$ and $K_m$. In general the three
relations yield different estimates for the constants v max and Km , because the error terms
are also transformed in a nonlinear way. Cressie and Keightley [5] compared these three
linearizations of the Michaelis-Menten relation in the context of hormone-receptor assays,
and concluded that for well-behaved data the Woolf equation (23) works best, but for data
containing outliers the double reciprocal equation (22) with robust regression gives better
results.
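The Woolf transformation (23) can be checked on exact Michaelis-Menten data; a sketch (ordinary least squares stands in for the robust fit, our choice; on exact data any reasonable line fitter recovers the constants):

```python
def ls_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

vmax, km = 2.0, 0.5
s = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
v = [vmax * si / (km + si) for si in s]       # exact data from eq. (20)

# Woolf plot (23): s/v = Km/vmax + s/vmax, i.e. linear in s.
slope, icept = ls_line(s, [si / vi for si, vi in zip(s, v)])
print(round(1 / slope, 6), round(icept / slope, 6))  # vmax=2.0, Km=0.5
```

The slope of the Woolf line estimates $1/v_{\max}$ and the intercept estimates $K_m/v_{\max}$, from which both constants follow.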
Theorem 4 shows that applying the deepest regression to the Woolf equation (23) yields
the same estimates for v max and Km as the deepest regression applied to the double reciprocal
equation (22). This resolves the ambiguity.
Theorem 4. Let $Z_n = \{ (s_i, v_i) ;\; i = 1, \ldots, n \}$ and
denote the DR fit of the double reciprocal equation as $\mathrm{DR}(\{ (1/s_i, 1/v_i)^t \})$.
Then the DR fit of the Woolf equation is the corresponding transformed fit,
so in both cases we obtain the same $\hat v_{\max}$ and $\hat K_m$.
Example: In assays for hormone receptors the Michaelis-Menten equation describes
the relationship between the amount $B$ of hormone bound to receptor and the amount $F$ of
hormone not bound to receptor. These assays are used e.g. to determine the cancer treatment
method (see Cressie and Keightley [5]). Equation (20) now becomes $B = B_{\max} F / (K_D + F)$.
The parameters of interest are the concentration $B_{\max}$ of binding sites and the dissociation
constant $K_D$ for the hormone-receptor interaction. Figure 6a shows the Woolf plot
of data from an estrogen receptor assay obtained by Cressie and Keightley [4]. Note that
this dataset clearly contains one outlier. To this plot we added the deepest regression line.
In Figure 6b we also show the double reciprocal
plot with its deepest regression line $\mathrm{DR}(\{ (1/F_i, 1/B_i)^t \})$. As Theorem 4 implies,
in both cases we obtain the same estimated values $\hat B_{\max}$ and $\hat K_D$, which
are comparable to the least squares estimates obtained
from the Woolf equation based on all data except the outlier. On the other hand, least
squares applied to the full data gives different estimates based on the Woolf
[Figure 6: (a) Woolf plot of the Cressie and Keightley data with the deepest regression line and the least squares fit; (b) double reciprocal plot of the data with the deepest regression line and the least squares fit.]
equation, and other estimates based on the double reciprocal equation.
These estimates are quite different. With least squares, in both cases the estimates $\hat B_{\max}$ and
$\hat K_D$ are highly influenced by the outlying observation (both $\hat B_{\max}$ and $\hat K_D$ come out
too high), which may lead to wrong conclusions, e.g. when determining a cancer treatment
method.
Appendix
Proof of Theorem 1. To prove Theorem 1 we need the following lemmas.
Lemma 1. If the $x_i$ are in general position, then $\{ \theta ;\; \mathrm{rdepth}(\theta; Z_n) \geq p \}$ is bounded.
Proof: For $J \subset \{1, \ldots, n\}$ with $\#J = p$ we denote by $\theta_J$ the fit that passes through the $p$
observations $z_i$, $i \in J$. Since the $x_i$ are in general position,
such a fit $\theta_J$ will be non-vertical (for any $J$). Therefore, $\{ \theta ;\; \mathrm{rdepth}(\theta; Z_n) \geq p \}$ is contained in
$\mathrm{conv}\{ \theta_J ;\; J \subset \{1, \ldots, n\}, \#J = p \}$, which is bounded. (Here conv
stands for the convex hull.) $\Box$
Lemma 2. For any dataset $Z_n \subset \mathbb{R}^p$ with the $x_i$ in general position (i.e. no more than $p - 1$
of the $x_i$ lie in any $(p-2)$-dimensional affine subspace of $\mathbb{R}^{p-1}$) the deepest regression has a
strictly positive breakdown value.
Proof: By Lemma 1 we know that $\{ \theta ;\; \mathrm{rdepth}(\theta; Z_n) \geq p \}$ is bounded. Therefore to break
down the estimator we must add at least $m$ observations such that $\mathrm{rdepth}(\mathrm{DR}(Z_{n+m}); Z_n) < p$.
Since for any dataset it holds that $\mathrm{rdepth}(\mathrm{DR}(Z_n); Z_n) \geq \lceil n/(p+1) \rceil$
(Amenta et al. [1]), we obtain a lower bound on $m$, from which the result follows. $\Box$
Lemma 3. If $Z_n \subset Z_{n+m}$ then $\max_\theta \mathrm{rdepth}(\theta; Z_n) \leq \max_\theta \mathrm{rdepth}(\theta; Z_{n+m})$.
Proof. Since $Z_n \subset Z_{n+m}$ it holds that $\mathrm{rdepth}(\theta; Z_n) \leq \mathrm{rdepth}(\theta; Z_{n+m})$ for all $\theta$. Thus for
all $\theta$ it holds that $\mathrm{rdepth}(\theta; Z_n) \leq \max_{\theta'} \mathrm{rdepth}(\theta'; Z_{n+m})$, and the result follows. $\Box$
Lemma 4. Under the conditions of Theorem 1 we have that
$$\frac{1}{n} \max_\theta \mathrm{rdepth}(\theta; Z_n) \xrightarrow{\ a.s.\ } \frac{1}{2}.$$
Proof. Let us consider the dual space, i.e. the $p$-dimensional space of all possible fits $\theta$.
Dualization transforms a hyperplane $H_\theta = \{ (x^t, y)^t ;\; y = \theta_1 x_1 + \cdots + \theta_p \}$ to the point $\theta$. An
observation $z_i$ is mapped to the set $D(z_i)$ of all $\theta$ that pass through $z_i$,
so $D(z_i)$ is the hyperplane $H_i$ given by $\theta_p = y_i - \theta_1 x_{i1} - \cdots - \theta_{p-1} x_{i,p-1}$. In dual space, the
regression depth of a fit $\theta$ corresponds to the minimal number of hyperplanes $H_i$ intersected
by any halfline starting at $\theta$.
A unit vector $u$ in dual space thus corresponds to an affine hyperplane $V$ in $x$-space
and a direction in which to tilt $\theta$ until it is vertical. For each unit vector $u$, we therefore
define the wedge-shaped region $A_{\theta,u}$ in primal space, formed by tilting $\theta$ around $V$ (in the
direction of $u$) until the fit becomes vertical. Further denote $H_n$ the empirical distribution
of the observed data $Z_n$. Define the metric
$$d(H_n, H) = \sup_{\theta, u} | H_n(A_{\theta,u}) - H(A_{\theta,u}) |.$$
If $Z_n$ is sampled from $H$, then it holds that
$$d(H_n, H) \xrightarrow{\ a.s.\ } 0.$$
This follows from the generalization of the Glivenko-Cantelli theorem formulated in (Pollard
[13, Theorem 14]) and proved by (Pollard [13, Lemma 18]) and the fact that $A_{\theta,u}$ can
be constructed by taking a finite number of unions and intersections of half-spaces. Now
define $\Pi(\theta) = \inf_u H(A_{\theta,u})$ and its empirical version $\Pi_n(\theta) = \inf_u H_n(A_{\theta,u})$. It is clear that
$\Pi_n(\theta) \leq 1$. Moreover we have that $| \Pi_n(\tilde\theta) - \Pi(\tilde\theta) | \leq d(H_n, H)$, hence
$$\Pi_n(\tilde\theta) \xrightarrow{\ a.s.\ } \Pi(\tilde\theta) = \frac{1}{2}. \qquad (26)$$
Finally it holds that
$$\Pi_n(\tilde\theta) \leq \frac{1}{n} \max_\theta \mathrm{rdepth}(\theta; Z_n) \leq \frac{1}{n} \left[ \frac{n+p}{2} \right] \quad a.s. \qquad (27)$$
The latter inequality uses Theorem 7 in (Rousseeuw and Hubert [16]), which is valid since
$Z_n$ is almost surely in general position. Taking limits in (27) and using (26) then finishes
the proof. $\Box$
Proof of Theorem 1: We will first show that $\liminf_n \varepsilon^*_n \geq \frac{1}{3}$ almost surely. From Lemma 1
we know that $\{ \theta ;\; \mathrm{rdepth}(\theta; Z_n) \geq p \}$ is bounded a.s. if $Z_n \subset \mathbb{R}^p$ is sampled from $H$.
Therefore, to break down the estimator we must add at least $m$ observations such that
$\mathrm{rdepth}(\mathrm{DR}(Z_{n+m}); Z_{n+m}) < p$. This implies that
$$m \geq \max_\theta \mathrm{rdepth}(\theta; Z_n) - p,$$
where the first inequality is due to Lemma 3, and thus
$$\varepsilon^*_n \geq \frac{ \max_\theta \mathrm{rdepth}(\theta; Z_n) - p }{ n + \max_\theta \mathrm{rdepth}(\theta; Z_n) - p } \quad a.s.$$
Finally we apply Lemma 4 to conclude that
$$\liminf_n \varepsilon^*_n \geq \frac{1/2}{1 + 1/2} = \frac{1}{3} \quad a.s. \qquad (28)$$
Next, we prove that $\limsup_n \varepsilon^*_n \leq \frac{1}{3}$ almost surely. Since regression depth is affine
invariant, we may assume that none of the $\|x_i\|$ in the dataset are zero. We will denote a
hyperplane $\theta$ by its two components $\beta_\theta$ and $\alpha_\theta$ corresponding
to the slopes and the intercept. Fix two strictly positive real numbers $\beta_0$ and $\alpha_0$. We now
consider a point $z^* = (x^{*t}, y^*)^t$ such that for any hyperplane $\theta$ passing through $z^*$ and $p - 1$
data points it holds that $\|\beta_\theta\| > \beta_0$ and $|\alpha_\theta| > \alpha_0$. Because we
assumed that none of the $\|x_i\|$ equals 0, we can always find such a point; otherwise the set
of $x_i$ would be unbounded.
The dataset $Z_{n+m}$ is then obtained by enlarging the dataset $Z_n$ with $m$
points equal to $z^*$. We know that the deepest fit $\mathrm{DR}(Z_{n+m})$ must pass through at least $p$
different observations of $Z_{n+m}$. Denote by $\theta_J$ any candidate deepest fit. If $\theta_J$ passes through
$z^*$, it is clear that $\mathrm{rdepth}(\theta_J; Z_{n+m}) \geq m$. On the other hand, any $\theta_J$
which passes through $p$ data points of $Z_n$ has $\mathrm{rdepth}(\theta_J; Z_{n+m}) \leq \left[ \frac{n+1+p}{2} \right]$. This can be seen
as follows. First consider the dataset $Z_{n+1}$ which consists of the $n$ original observations and
one copy of the point $z^*$. By Theorem 7 in (Rousseeuw and Hubert [16]) it follows that
$\mathrm{rdepth}(\theta_J; Z_{n+1}) \leq \left[ \frac{n+1+p}{2} \right]$. Now there always exists a unit vector $(u, v)$
such that $x^{*t} u > v$ and $x_i^t u < v$ for $i = 1, \ldots, n$. Then the number of observations passed
when tilting $\theta_J$ around $(u, v)$ as in Definition 1 plus the number of observations passed
when tilting $\theta_J$ around $(-u, -v)$ equals $n + 1 + p$, because the fit $\theta_J$ passes through exactly $p$
observations. Therefore, we can suppose that the number of observations passed when
tilting $\theta_J$ around $(u, v)$ is at most $\left[ \frac{n+1+p}{2} \right]$. (If not, we replace $(u, v)$ by $(-u, -v)$.) Note
that the fit $\theta_J$ does not pass through $z^*$. First suppose
the residual $r^*(\theta_J)$ of $z^*$ is strictly positive. The data are in general position, therefore we
can always find a value $\varepsilon > 0$ such that for all $i = 1, \ldots, n$ it holds that $x_i^t u < v - \varepsilon$, and the
number of observations passed when tilting $\theta_J$ around $(u, v - \varepsilon)$ is the same as when tilting
around $(u, v)$. Finally, adding the other replications of $z^*$ does not change this
value. Therefore $\mathrm{rdepth}(\theta_J; Z_{n+m}) \leq \left[ \frac{n+1+p}{2} \right]$. If the residual $r^*(\theta_J)$ is strictly negative,
we replace $v$ by $v + \varepsilon$ in a similar way and obtain the same result.
The above reasoning shows that for $m$ large enough the deepest fit must pass through $z^*$ and at most $p - 1$
original data points. Since we have shown that all these fits have an arbitrarily large slope
and intercept, it holds that the estimator breaks down for $m \geq \left[ \frac{n+1+p}{2} \right]$, and thus
$$\limsup_n \varepsilon^*_n \leq \lim_n \frac{ \left[ \frac{n+1+p}{2} \right] }{ n + \left[ \frac{n+1+p}{2} \right] } = \frac{1}{3} \quad a.s. \qquad (29)$$
From (28) and (29) we finally conclude (3). $\Box$
Proof of Theorem 2. Consider x ∈ ℝ^{p−1}. We will prove that the upper and lower bounds in expression (16) are the values y such that in the dual plot the hyperplane … is tangent to the ellipsoid …
First consider the special case Σ̂ = I, yielding the unit sphere φᵗφ = 1. The tangent hyperplane in an arbitrary point φ̃ on the sphere is given by φ̃ᵗφ = 1. Therefore the hyperplane … becomes a tangent hyperplane … on the sphere. Together with φ̃ … this yields …, giving the lower bound … and the upper bound … corresponding to expression (16) for this case.
Consider the general case of an ellipsoid …, where Λ is the diagonal matrix of eigenvalues of Σ̂ and P is the matrix of eigenvectors of Σ̂. We can transform this to the previous case by the transformation φ̃ = … The hyperplane … transforms to c(xᵗ, 1)PΛ^{1/2}φ̃ …, which becomes a tangent hyperplane if … This yields the lower bound y = … and the upper bound y = … of expression (16). □
Proof of Theorem 3. From the definition of rdepth_k by (18) it follows for any data set {(x_i, y_i); i = 1, …, n} with distinct x_i as in Lemma 1 that {φ ; rdepth_k(φ; Z_n) ≥ …} is bounded. Now any bivariate linear fit … corresponds to a polynomial fit … Since it holds for the deepest regression line DR_1 that rdepth(DR_1; …) ≥ … (Rousseeuw and Hubert [16]), it follows that the deepest polynomial regression DR_k of degree k has rdepth_k(DR_k; …) ≥ … Because of this, we must add at least m observations such that rdepth_k(DR_k(Z_{n+m}); Z_n) … k to break down the estimator. We obtain …, from which it follows that m ≥ (n − 3k)/2. This yields the result. □
Proof of Theorem 4. We will show that when s … holds for every … This follows from
(v_i …)
for all …: switching the x-values from 1 to s_i reverses their order. Therefore, according to Definition 1 both depths are the same. □
--R
"Regression Depth and Center Points,"
Asymptotic distributions of the maximal depth estimators for regression and multivariate location
The underlying structure of the direct linear plot with application to the analysis of hormone-receptor interactions
Analysing data from hormone-receptor assays
Breakdown properties of location estimates based on halfspace depth and projected outlyingness
Graphical methods in enzyme chemistry
A Handbook of Small Data Sets
"Applied Statistical Science III: Nonparametric Statistics and Related Topics"
The determination of enzyme dissociation constants
"On Depth and Deep Points: a Calculus,"
Convergence of Stochastic Processes
Least median of squares regression
"Mathematical Statistics and Applications, Vol. B"
New York: Wiley-Interscience
Computing location depth and regression depth in higher dimensions
A fast algorithm for the minimum covariance determinant estimator
"Robust and Nonlinear Time Series Analysis,"
The attractions of proteins for small molecules and ions
Robustness of deepest regression
"Proceedings of the 15th Symposium on Computational Geometry,"
--TR
Robust regression and outlier detection
Efficient algorithms for maximum regression depth
A fast algorithm for the minimum covariance determinant estimator
An optimal algorithm for hyperplane depth in the plane
Robustness of deepest regression
Computing location depth and regression depth in higher dimensions
--CTR
R. Wellmann , S. Katina , Ch. H. Müller, Calculation of simplicial depth estimators for polynomial regression with applications, Computational Statistics & Data Analysis, v.51 n.10, p.5025-5040, June, 2007
Christine H. Müller, Depth estimators and tests based on the likelihood principle with application to regression, Journal of Multivariate Analysis, v.95 n.1, p.153-181, July 2005 | algorithm;regression depth;inference |
586635 | Geometry and Color in Natural Images. | Most image analysis algorithms are defined for the grey level channel, particularly when geometric information is looked for in the digital image. We propose an experimental procedure in order to decide whether this attitude is sound or not. We test the hypothesis that the essential geometric contents of an image is contained in its level lines. The set of all level lines, or topographic map, is a complete contrast invariant image description: it yields a line structure by far more complete than any edge description, since we can fully reconstruct the image from it, up to a local contrast change. We then design an algorithm constraining the color channels of a given image to have the same geometry (i.e. the same level lines) as the grey level. If the assumption that the essential geometrical information is contained in the grey level is sound, then this algorithm should not alter the colors of the image or its visual aspect. We display several experiments confirming this hypothesis. Conversely, we also show the effect of imposing the color of an image to the topographic map of another one: it results, in a striking way, in the dominance of grey level and the fading of a color deprived of its geometry. We finally give a mathematical proof that the algorithmic procedure is intrinsic, i.e. does not depend asymptotically upon the quantization mesh used for the topographic map. We also prove its contrast invariance. | Introduction
: color from a different angle
In this paper, we shall first review briefly some of the main attitudes adopted towards color in art and science (Section 1). We then focus on image
analysis algorithms and define some of the needed terminology (Section 2),
in particular the topographic map : we support therein the view that the
geometrical information of a grey level image is fully contained in the set of
its level lines. Section 3 is devoted to the description of an experimental procedure
to check whether the color information contents can be considered as a
mere non geometrical complement to the geometry given by the topographic
map or not. In continuation, a description of several experiments on color
digital images is performed. In Section 4, which is mainly mathematical, we
check from several points of view the soundness of the proposed algorithm,
in particular its independence of the quantization procedure, its consistency
with the assumption that the grey level image has bounded variation and the
contrast invariance of the proposed algorithm.
1.1 Painting, linguistics
The color-geometry debate in the theory of painting has never been closed,
each school of painters making a manifesto of its preference. Delacroix claimed
"L'ennemi de toute peinture est le gris !" ("The enemy of all painting is grey!") [23]. To the impressionists, "la couleur est tout" ("colour is everything", Monet), while the role of contours and geometry is prominent in
the Renaissance painting, but also for the surrealistic school or for cubists :
this last school is mostly concerned with the deconstruction of perspective and
shape [18]. To Kandinsky, founder and theoretician of abstract painting, color
and drawing are treated in a totally separate way, color being associated with
emotions and spirituality, but the building up of a painting being essentially
a question of drawing and the (abstract) shape content relying on drawing,
that is on black strokes, [12]. In this discussion, we do not forget that, while
an accurate definition of color is given in quantum mechanics by photon wavelength, the human or animal perception of it is extremely blurry and variable.
Red, green and blue color captors on the retina have a strong and variable
overlap and give a very poor wavelength resolution, not at all comparable to
our auditive frequency receptivity to sounds. Different civilisations have even
quite different color systems. For instance, the linguist Louis Hjelmslev writes [10] (translated from the French): "Behind the paradigms formed in the various languages by the designations of colors, we can, by subtracting the differences, extract such an amorphous continuum: the spectrum of colors, in which each language establishes its boundaries directly. While this zone of meaning is shaped in roughly the same way in the main languages of Europe, it is not difficult to find different configurations elsewhere. In Welsh, 'green' is partly gwyrdd and partly glas; 'blue' corresponds to glas; 'grey' is either glas or llwyd; 'brown' corresponds to llwyd. That is to say, the part of the spectrum covered by the French word vert is crossed in Welsh by a line assigning one portion of it to the area covered by the French bleu, and the boundary that French draws between vert and bleu does not exist in Welsh; the boundary separating bleu and gris is likewise missing, as is the one that opposes gris and brun in French. On the other hand, the area represented in French by gris is cut in two in Welsh, in such a way that one half belongs to the zone of the French bleu and the other half to brun."
To summarize, the semantic division of colors is simply different in French
(or English) and Welsh and there is no easy translation, four colors in French
being covered by three different ones in Welsh.
1.2 Perception theory
Perception theory no longer supports the absoluteness of color information. In his monumental work on visual perception, Wolfgang Metzger [15],
dedicates only one tenth of his treatise (2 chapters over 19) to the perception
of colors. Those chapters are mainly concerned with the impossibility to define
absolute color systems, the variability of the definition of color under different
lighting conditions, and the consequent visual illusions. See in particular
the subsection: "Gibt es eine physikalisch festliegende "normale" Beleuchtung und eine "eigentliche" Farbe?" (Is there any physically well-founded "normal" illumination and a proper color?) Notably, in this treatise, 100
percent of the experiments not directly concerned with color are made with
black and white drawings and pictures. In fact, the gestaltists not only question
the existence of "color information", but go as far as to deny any physical
reality to any grey level scale: the grey levels are not measurable physical quantities on the same footing as, say, temperature, pressure or velocity. A main
reason invoked is that most images are generated under no control or even
knowledge of the illumination conditions or the physical reflectance of objects.
This may also explain the failure of several interesting attempts to use shape
from shading information in shape analysis [11]. The contribution of black and
white photographs and movies has been to demonstrate that essential shape
content of images can be encoded in a gray scale and this attitude seems to
be corroborated by the image processing research. Indeed, and although we
are not able to deliver faithful statistics, we can state that an overwhelming
majority of image processing algorithms are being designed, tested and
validated on grey level images. Satellite multispectral images (SPOT images
for instance) attribute a double resolution to panchromatic (i.e. grey level
images) and a simple one to color channels: somehow, color is assumed to give only semantic information (e.g. the presence of vegetation) and not a geometric one. This engineering decision meets Kandinsky's claim!
1.3 Image processing algorithms
Let us now consider the practice of image processing. When an algorithm
has to be applied to color images, it is generally first designed and tested
on grey level images and then applied independently to each channel. It is
experimentally difficult to demonstrate the improvements due to a joint use
of the three color channels instead of this independent processing. Antonin
Chambolle [6] tried several strategies to generalize mean curvature algorithms
to color images. His overall conclusion (private communication) was that no
perceptible improvement was made by defining a color gradient : diffusing
each channel independently led to essentially equal results, from a perception
viewpoint. A more recent, and equivalent, attempt to define a "color gradient"
in order to perform anisotropic diffusion in a more sophisticated way than
just diffusing each channel independently is given by [21]. The authors do
not provide a comparison with an algorithm making an independent diffusion
on each channel, however, so that their study does not contradict the above
mentioned conclusion. Pietro Perona [19] performed experiments on color
images where he applied the Perona-Malik anisotropic diffusion [20].
In this paper, we propose a numerical experimental procedure to check
that color information does not contribute to our geometric understanding
of natural images. This statement has to be made precise. We have first
considered it a common sense consequence of the arbitrariness of lighting
conditions and total inaccuracy of our color wavelength perception, which
makes the definition of color channels very context-dependent. This explains
why the literature on color is mainly devoted to a "restoration" of universal
color characteristics such as saturation, hue and luminance. Now, of these
three characteristics, only luminance, defined as a sum of color channels has
a (relative) physical meaning. Indeed, luminance, or "grey level" is defined
as a photon count over a period of time (exposure time). Thus, we can relate
a linear combination of R, G, B, channels to this photon count. Now, we
mentioned that, from the perception theory viewpoint (see e.g. Wertheimer [25]), even this grey level information is subject to so much variability, due to unknown illumination and reflectance conditions, that we cannot consider it as
a physical information.
2 Mathematical morphology and the topographic map
This explains why Matheron and Serra [22] developed a theory of image anal-
ysis, focused on grey level and where, for most operators of the so called "flat
morphology", contrast invariance is the rule. Flat morphological operators
(e.g. erosions, dilations, openings, closings, connected operators, etc.) commute
with contrast changes and therefore process independently the level sets
of a grey level image.
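As a small numerical illustration of this commutation property (our own sketch, not code from the paper; the 3×3 erosion and the toy contrast change are choices of the sketch), a flat erosion is a moving minimum, and any increasing contrast change g slips through the minimum unchanged:

```python
import numpy as np

def erode3x3(u):
    """Flat erosion: minimum over each pixel's 3x3 neighborhood
    (image borders are replicated)."""
    p = np.pad(u, 1, mode="edge")
    shifts = [p[i:i + u.shape[0], j:j + u.shape[1]]
              for i in range(3) for j in range(3)]
    return np.minimum.reduce(shifts)

# An increasing contrast change g commutes with the flat erosion:
# g(erosion(u)) == erosion(g(u)), since g preserves minima.
u = np.random.randint(0, 256, (16, 16))
g = lambda t: t ** 2          # increasing on [0, 255]
assert np.array_equal(g(erode3x3(u)), erode3x3(g(u)))
```

The same check works for dilations (maxima), openings and closings, which is exactly why flat morphological operators act level set by level set.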
In the following, let us denote by u(x) the grey level of an image u at
point x. In digital images, the only accessible information is a quantized and
sampled version of u, u(i; j), where (i; j) is a set of discrete points (in general
on a grid) and u(i; j) in fact belongs to a discrete set of values, 0, 1, …, 255 in
many cases. Since, by Shannon theory, we can assume that u(x) is recoverable
at any point from the samples u(i; j), we can in a first approximation assume
that the image u(x) is known, up to the quantization noise. Now, since the
illumination and reflectance conditions are arbitrary, we could as well have
observed an image g(u(x)) where g is any increasing contrast change. Thus,
what we really know are in fact the level sets X_λ u = {x ; u(x) ≥ λ}, where we somehow forget about the actual value of λ. According to the
Mathematical Morphology doctrine, the reliable information in the image is
contained in the level sets, independently of their actual levels. Thus, we are
led to consider that the geometric information, the shape information, is contained
in those level sets. This is how we define the geometry of the image. In
this paragraph, we are simply summarizing some arguments contained explicitly
or implicitly in the Mathematical Morphology theory, which were further
developed in [5]. We can further describe the level sets by their boundaries,
which are, under suitable very general assumptions, Jordan curves.
Jordan curves are continuous maps from the circle into the plane ℝ² without
crossing points. To take an instance which we will invoke further on, if we
assume that the image u has bounded variation, then for almost all levels λ of u, X_λ u is a set with bounded perimeter and its boundary is a countable family of Jordan curves with finite length [1]. In the mentioned work, it is demonstrated
that the level line structure whose existence was assumed in [4] indeed
is mathematically consistent if the image belongs to the space BV of functions
with bounded variation. It is also proved that the connected components of
level sets of the image give a locally contrast invariant description.
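A minimal numerical sketch of this contrast invariance (our illustration, under the assumption of an integer-valued image): an increasing contrast change g merely relabels the levels, so the family of upper level sets X_λ u = {x ; u(x) ≥ λ} is unchanged:

```python
import numpy as np

def upper_level_set(u, lam):
    """X_lam(u) = {x : u(x) >= lam}, returned as a boolean mask."""
    return u >= lam

u = np.array([[0,  50, 100],
              [50, 100, 200],
              [100, 200, 255]])

# An increasing contrast change g only relabels the levels:
# X_{g(lam)}(g(u)) coincides with X_lam(u) for every level lam.
g = lambda t: t ** 2          # increasing on [0, 255]
for lam in (50, 100, 200):
    assert np.array_equal(upper_level_set(g(u), g(lam)),
                          upper_level_set(u, lam))
```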
In the digital framework, the assumption that the level lines are Jordan
curves is straightforward if we adopt the naive but useful view that u is constant
on each pixel. Then level lines are concatenations of vertical and horizontal
segments and this is how we shall visualize them in practice. As explained
in [5], level lines have a series of structure properties which make them most
suitable as building blocks for image analysis algorithms. We call the set of
all level lines of an image topographic map. The topographic map is invariant
under a wide class of local contrast changes ([5]), so that it is a useful tool for
comparing images of the same object with different illuminations. This application
was developed in [2] who proposed an algorithm to create from two
images a third whose topographic map is, roughly speaking, an intersection
of the two different input images. Level lines never meet, so that they build
an inclusion tree. A data structure giving fast access to each one of them is
therefore possible and was developed in [17], who proved that the inclusion
trees of upper and lower level sets merge. We can conceive the topographic
map as a tool giving a complete description of the global geometry for grey
level images. A further application to shape recognition and registration is
developed in [16], who proposes a topographic map based contrast invariant
registration algorithm and [13], who propose a registration algorithm based
on the recognition of pieces of level lines. Such an algorithm is not only contrast
invariant, but occlusion stable. Several morphological filters are easy to
formalize and implement in the topographic map formalism. For instance, the
Vincent-Serra connected operators ([24], [14]). Such filters, as well as local
quantization algorithms are easily defined [5]. Our overall assumption is first
that all of the reliable geometric information is contained in the topographic
map and second that this level line structure is under many aspects (completeness, inclusion structure) more stable than, say, the edges or regions obtained
by edge detection or segmentation. In particular, the advantage of level lines
over edges is striking from the reconstruction viewpoint : we can reconstruct
exactly the original image from its topographic map, while any edge structure
implies a loss of information and many artifacts. It is therefore appealing to
extend the topographic map structure to color images and this is the aim we
shall try to attain here. Now, we will attain it in the most trivial way. We
intend to show by an experimental and mathematically founded procedure
that the geometric structure of a color image is essentially contained in the
topographic map of its grey level. In other terms, we propose the topographic
map of color image to be simply the topographic map of a linear combination
of its three channels. If that is true, then we can claim that tasks like shape
recognition, and in general, anything related to the image geometry, should
be performed on the subjacent grey level image. As we shall see in the next
section where we review from several points of view the attitude of scientists
and artists toward color, this claim is nothing new and implicit in the way we
usually proceed in image processing. Our wish is, however, to make it explicit
and get rid of this bad consciousness we feel by ever working and thinking
and teaching with grey level images. Somebody in the audience always asks
"and what about color images". We propose to prove that we do not need
them for geometric analysis.
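In the "pixels are constant squares" view above, a discrete level line can be sketched as the set of boundary pixels of an upper level set (our simplification: the paper traces the line along pixel edges, while this sketch only marks the pixels of the set that touch its complement; border pixels of the image count as interior):

```python
import numpy as np

def level_set_boundary(u, lam):
    """Pixels of the upper level set {u >= lam} having a 4-neighbor
    outside the set: a pixel-level sketch of the level line at lam."""
    X = u >= lam
    inside = X.copy()
    inside[1:, :] &= X[:-1, :]    # neighbor above must be in the set
    inside[:-1, :] &= X[1:, :]    # neighbor below
    inside[:, 1:] &= X[:, :-1]    # neighbor to the left
    inside[:, :-1] &= X[:, 1:]    # neighbor to the right
    return X & ~inside

# A 3x3 bright square inside a dark image: the level line at lam=5
# is the 8-pixel ring of the square.
u = np.zeros((5, 5), dtype=int)
u[1:4, 1:4] = 10
b = level_set_boundary(u, 5)
assert b.sum() == 8 and not b[2, 2]
```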
2.1 Definition of the geometry
We are led to define the geometry of a digital image as its topographic map. What about color? Our aim here is to prove that we can
consider the color information as a subsidiary information, which may well be
added to the geometric information but does not add much to it and
never contradicts it. Here is how we proceed :
In general words, we shall define an experimental procedure to prove that
Replacing the colors in an image by their conditional expectation
with respect to the grey level does not alter the color image.
Of course, this statement needs some mathematical formalization before
being translated into an algorithm. We prefer, for the sake of clarity, to start
with a description of the algorithm (in the next section). Then, we shall display
some experiments since, as far as color is concerned, visual inspection
here is the ultimate decision tool used to check whether a colored image has
been altered or not. Section 4 is devoted to the complete mathematical formalization, i.e. a proof that the defined procedure converges and in no way
depends upon the choice of special quantization parameters. Although this
was not our aim here, let it be mentioned that extensions of this work for
application to the compression of multi-channel images are in course [7]. The
idea is to compress the grey level channel by encoding its topographic map.
Then, instead of encoding the color channels separately, an encoding of the
conditional expectation with respect to the topographic map is computed.
This definition will become clear in the next two sections.
3 Algorithm and experiments
3.1 The algorithm: Morphological Filtering of Color
Images
The algorithm we present is based on the idea that the color component of
the image does not give contradictory geometric information to the grey level
and, in any case, is complementary. Following the ideas of previous works (see
[4], [5]), we describe the geometric contents of the image by its topographic
map, a contrast invariant complete representation of the image. We discuss in
Section 4 an extension to color channels of this contrast invariance property.
The algorithm we shall define now imposes to the chromatic components,
saturation and hue, to have the same geometry, or the same topographic map,
as the luminance component. We shall experimentally check that by doing
so, we do not create new colors and the overall color aspect of the image does
not change with this operation. In some sense, and although this is not our
aim, the algorithm is an alternative to the color anisotropic diffusion algorithms already mentioned ([3], [6], [21]).
To define the algorithm, we take a partition λ_1 < λ_2 < … < λ_N of the grey level range of the luminance component. In practice, this partition is defined by a grey level quantization step. The resulting set of level lines is a restriction to the chosen levels
of the topographic map. We then consider the connected components of the
set complementary to the level lines. In the discrete case, these connected
components are sets of finite perimeter given by a finite number of pixels, and
constitute a partition of the image. In the continuous case, we must impose
some restriction about the space of functions we take (see Section 4).
The algorithm we propose is the following. Let U = (U_1, U_2, U_3), where U_1, U_2 and U_3 are the intensities of red, green and blue.
(i) From these channels, we compute the L, S and H values of the color signal, i.e., the luminance, saturation and hue, defined either by the usual perceptual formulas or by a perceptually less correct but simpler linear change of coordinates.
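The exact chromatic formulas of step (i) are garbled in this copy of the paper; as an illustrative stand-in (not the paper's own definitions), the change of coordinates and its inverse from step (iv) can be sketched with the HLS conversion from Python's standard colorsys module, which likewise separates a luminance-like channel from saturation and hue:

```python
import colorsys

def rgb_to_lsh(r, g, b):
    """r, g, b in [0, 1] -> (luminance-like L, saturation S, hue H).
    Stand-in for the paper's (L, S, H) change of coordinates."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return l, s, h

def lsh_to_rgb(l, s, h):
    """Inverse change of coordinates, as used in step (iv)."""
    return colorsys.hls_to_rgb(h, l, s)

# Round trip: the change of coordinates loses no information.
rgb = (0.8, 0.3, 0.1)
back = lsh_to_rgb(*rgb_to_lsh(*rgb))
assert all(abs(a - b) < 1e-9 for a, b in zip(rgb, back))
```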
(ii) Compute the topographic map associated to L with a fixed quantization
step. In other terms, we compute all level lines with levels multiple of a
fixed factor, for instance 10. This quantization process yields the topographic
representation of a partial, coherent view of the image structure (see [5] for
more details about visualization of the topographic map). This computation is straightforward: we first compute the level sets, as unions of pixels, for levels λ which are multiples of the integer quantization step. We then simply compute their
boundary as concatenations of vertical and horizontal lines.
(iii) In each connected component A of the topographic map of L, we take the average value of S and H. More precisely, let {x_1, …, x_n} be the pixels of A. We compute the value v = (1/n) Σ_{i=1}^{n} S(x_i), which will be the new constant value in the connected component A for the component S, and similarly for H. In other terms, we transform the pair (S, H) into (S̄, H̄), where S̄ and H̄ have a constant value, in fact the average value, inside each connected component of the topographic map of L, the luminance component. As a consequence we obtain a new representation of
the image, piecewise constant on the connected components of the topographic
map and we therefore constrain the color channels S and H to have the same
topographic map as the grey level.
(iv) Finally, in order to visualize the color image, we compute (Ū_1, Ū_2, Ū_3), defined respectively as the new red, green and blue channels of the image, by performing the inverse color coordinate change on (L, S̄, H̄).
Remark 1 Note that in order to perform a visualization, each channel of the (U_1, U_2, U_3)-space must have a range of values in a fixed interval [a, b]. In practice, in the discrete case, [a, b] = [0, 255]. After applying the preceding algorithm, this range can be altered and we cannot recover the same range of values for the final components. We therefore threshold these components so that their final range is [0, 255].
Remark 2 A slight change of this algorithm can be obtained if we take, instead of the average of the chromatic components of the image on each connected component, the average of these components weighted by the modulus of the gradient of the luminance component (see the remark after Theorem 3). This means that we replace the step (iii) of the algorithm given above by (iii)'. That is,
(iii)' In each connected component A of the topographic map of L, let {x_1, …, x_n} be the pixels of this region. Compute the value v = Σ_i w_i S(x_i) / Σ_i w_i, with weights w_i given by the modulus of the gradient of L at x_i, which will be the new constant value in the connected component A for the component S, and similarly for H. We shall explain in Section 4 the measure-geometric meaning of this variant.
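Steps (ii) and (iii) above can be sketched as follows (a minimal illustration under assumptions of our own: integer luminance values, 4-connectivity for the flat zones, and a quantization step of 10; these choices are not prescribed by the paper beyond the examples of Section 3.2):

```python
import numpy as np
from collections import deque

def flat_zones(q):
    """Label the connected components (4-connectivity) on which the
    quantized image q is constant."""
    h, w = q.shape
    labels = -np.ones((h, w), dtype=int)
    n = 0
    for i in range(h):
        for j in range(w):
            if labels[i, j] >= 0:
                continue
            labels[i, j] = n                      # flood-fill one flat zone
            dq = deque([(i, j)])
            while dq:
                y, x = dq.popleft()
                for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and labels[ny, nx] < 0 and q[ny, nx] == q[y, x]:
                        labels[ny, nx] = n
                        dq.append((ny, nx))
            n += 1
    return labels, n

def average_on_flat_zones(L, S, H, step=10):
    """Steps (ii)-(iii): quantize the luminance L with the given step,
    then replace S and H by their mean on each flat zone of the result."""
    labels, n = flat_zones(L // step)
    S2 = np.empty_like(S, dtype=float)
    H2 = np.empty_like(H, dtype=float)
    for k in range(n):
        mask = labels == k
        S2[mask] = S[mask].mean()
        H2[mask] = H[mask].mean()
    return S2, H2
```

The paper's claim is then that, on natural images, mapping (L, S2, H2) back to RGB yields an image visually indistinguishable from the original.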
3.2 Experiments and discussion
In Experiment 1, Image 1.1 is the original image, which we shall call the
"tree" image. In the L-component of the (L, S, H) representation, we take the topographic map, i.e. the level lines, for all levels which are multiples of 5. This results in Image 1.2, where large enough regions are to be noticed, on
which the grey level is therefore constant. We then apply the algorithm, by
taking the average of the S, H components on the connected components of the
topographic map, i.e. the flat regions of the grey level image quantized with a
mesh equal to 5. Equivalently, this results in averaging the color of "tree" on
each white region of Image 1.2. We then obtain Image 1.3. In Image 1.4, we
display, above, a detail of the original image "tree" and below the same detail
after the color averaging process has been applied. This is the only part of
the image where the algorithm brings some geometric alteration. Indeed, on
this part, the grey level contrast between the sea's blue and the tree's light
brown vanishes and a tiny part of the tree becomes blue. This detail is hardly
perceptible in the full size image because the color coherence is preserved :
no new color is created and, as a surprising matter of fact, we never observed
the creation of new colors by the averaging process.
In Experiment 2, Image 2.1 is the original image, which we call "peppers".
Image 2.2 is the topographic map of "peppers" for levels which are multiples
of 10. As a practical rule, quantizations of an image by a mesh of 5 are
seldom visually noticeable ; in that case, we can go up to a mesh of 10 without
visible alteration. Image 2.3 displays the outcome of averaging the colors on
the flat zones of Image 2.1, which is equivalent to averaging the colors on the
white regions of Image 2.2. If color does not bring any relevant geometrical
information complementary to the grey level, then it is to be expected that
Image 2.3 will not have lost geometric details with respect to Image 2.1. This
is the case and, in all natural images where we applied the procedure, the
outcome was the same : the conditional expectation of color with respect to
grey level yields, perceptually, the same image as the original. We are aware
that one can numerically create color images where the different colors have
exactly the same grey level, so that the grey level image has no level lines at
all ! If we applied the above procedure to such images, we would of course see
a strong difference between the processed one, which would become uniform,
and the original. Now, the generic situation for natural images is that "color
contrast always implies (some) grey level contrast". This empiric law is easily
checked on natural images. We have seen in Experiment 1 the only case where
we noticed that it was slightly violated, two different colors happening to have
(locally) grey levels which differed by less than 5 grey levels. In Image 2.4, we
explore further the dominance of geometry on color by giving to "tree" the
colors of "peppers". We obtained that image by averaging the colors of peppers
on the white regions of "tree" and then reconstructing an image whose
grey level was that of "tree" and the colors those of "peppers".
In Experiment 3, we further explore the striking results of mixing the color
of an image with the grey level of another one. Our conclusion will be in all
cases that, like in Image 2.4, the dominance of grey level above color, as far
as geometric information is concerned, is close to absolute. In Image 3.1, we
do the converse procedure to the one in Image 2.4 : we average the colors
of tree conditionally to the topographic map of peppers. The amazing result
is a new pepper image, where no geometric content from "tree" is anymore
visible. Of course, those two experiments are not totally fair, since we force
the color of the second image to have the topographic map of the first one.
Thus, in Image 3.2, we simply constructed an image having the grey level of
"peppers" and the colors of "tree". Notice again the dominance of "peppers"
and the fading of the shapes of "tree". In Image 3.3 we display an original
"baboon" image and in Image 3.4 the result of imposing the colors of "tree"
to "baboon". Again, we mainly see "baboon".
In all experiments, we used the first version of the mean value algorithm
and not the one described in Remark 2. Our experimental records show no
visible difference between both algorithms.
3.3 Conclusions
Our conclusions are contained in the text of the experiments. From the image
processing viewpoint, we would like to mention some possible applications.
Although the algorithm presented serves only for visual inspection and for
checking the independence of geometry and color, this inquiry suggests some
possible new ways to process color images.
First, the compression of multichannel images could be reduced to the
compression of the topographic map given by the panchromatic (grey level)
image. An attempt of this kind is in progress [7]. One may also ask whether
color channels, when given at a lower resolution than the panchromatic image
(as is the case, e.g., for SPOT images), should be deconvolved and brought
back to the grey level resolution. This seems possible if the deconvolution is
performed under the (strong) constraint that the topographic map of each
color channel coincides with the grey level topographic map.
[Figure: Images 1.1-1.4]
Experiment 1. Image 1.1 is the original image. Image 1.2 is the topographic
map of the L-component of the (L,S,H) color space for all level lines which are
multiples of 5. The application of the algorithm by taking the average of S,H
components on the connected components of Image 1.2 gives us Image 1.3.
Finally, in Image 1.4 we display, above, a detail of the original image "tree"
and below the same detail after the algorithm has been applied.
[Figure: Images 2.1-2.4]
Experiment 2. Image 2.1 is the original image. Image 2.2 is the topographic
map of the L-component of the (L,S,H) color space for all level lines which
are multiples of 10. The application of the algorithm to Image 2.1 over the
white regions of Image 2.2 gives us Image 2.3. Finally, we obtain Image 2.4 by
averaging the colors of Image 2.1 on the white regions of Image 1.1.
[Figure: Images 3.1-3.4]
Experiment 3. In Image 3.1, we average the colors of "tree" conditionally on the
topographic map of Image 2.1. Image 3.2 shows the grey level of Image 2.1
and the colors of Image 1.1. Image 3.3 is the original "baboon" image and
Image 3.4 is the result of imposing the colors of Image 1.1 on Image 3.3.
4 Formalization of the Algorithm
Let (Y, B, μ) be a measure space and let F ⊆ B be a family of measurable subsets
of Y. A connected component analysis of (Y, F) is a map which assigns to
each set X ∈ F a family of subsets C_n(X) of X such that
i) the sets C_n(X) are pairwise disjoint, have positive measure, and their union is X, all modulo null sets;
ii) if X, X' ∈ F and X ⊆ X', then each C_n(X) is contained in some C_m(X').
Notice that this definition asks more than the usual definition [22], since we
request that sets of F be essentially decomposable into connected components
with positive measure. If, e.g., F is the set of open sets of ℝ², then the usual
definition of connectedness applies, i.e., satisfies requirements i) and ii).
Let u : Y → [a, b] be a measurable function and let v ∈ L¹(Y, B, μ). Let F(u)
be a family of sets contained in the σ-algebra generated by the level sets
of u. Assume that a connected component analysis is given
on (Y, F(u)). In other words, F(u) is a family of subsets where the connected
components can be computed and satisfy i) and ii).
Let P = {a = a_0 < a_1 < ... < a_N = b} be a partition of [a, b] such that
the sections [a_i <= u < a_{i+1}] belong to F(u); such partitions will be called
admissible. Let us denote by CC(P) the set of connected components of the
sections [a_i <= u < a_{i+1}], i = 0, ..., N − 1. For each
connected component A ∈ CC(P) we define the average value of v on A by
    m_A := (1/μ(A)) ∫_A v(x) dμ.
Then we define the function
    E(v|u, P)(x) := m_A   for x ∈ A, A ∈ CC(P).
This function is nothing but the conditional expectation of v ∈ L¹(Y, B, μ)
with respect to the σ-algebra A_P of subsets of Y generated by CC(P).
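The operator E(·|u, P) is just piecewise-constant averaging. A toy sketch with counting measure, using a hypothetical list-of-labels encoding of the partition CC(P) (the encoding and the function name are ours):

```python
from collections import defaultdict

def cond_exp(v, labels):
    """Conditional expectation of v given the partition encoded by labels:
    each value is replaced by the mean of v over its part."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, lab in zip(v, labels):
        sums[lab] += x
        counts[lab] += 1
    return [sums[lab] / counts[lab] for lab in labels]
```

Besides mean preservation (property i) of Proposition 1 below), a refinement of the label partition exhibits the tower property E(E(v|P')|P) = E(v|P).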
In the next proposition we summarize some basic properties of conditional
expectations, [26], Ch. 9.
Proposition 1 Let (X, B, μ) be a measure space and let A be a sub-σ-algebra
of B. Let L¹(X, B, μ) be the space of measurable (with respect to B) functions
which are Lebesgue integrable with respect to μ. Let L¹(A) be the subspace of
L¹(X, B, μ) of functions which are measurable with respect to A. Then E(·|A)
is a bounded linear operator from L¹(X, B, μ) onto L¹(A).
We have
i) ∫_X E(v|A) dμ = ∫_X v dμ;
ii) if v is measurable with respect to A, then E(v|A) = v;
iii) if A' is a sub-σ-algebra of A, then E(E(v|A)|A') = E(v|A').
In particular, E(E(v|A)|A) = E(v|A).
Lemma 1 Let P = {a = a_0 < a_1 < ... < a_N = b} be an admissible partition
of [a, b]. Let P' = {a = b_0 < b_1 < ... < b_M = b} be a refinement of P, i.e.,
a partition of [a, b] such that each a_i coincides with one of the b_j, and such
that each section [b_j <= u < b_{j+1}] belongs to F(u). Then A_P is a
sub-σ-algebra of A_{P'}.
Proof. Let A ∈ CC(P). Then for some i, A is a connected
component of [a_i <= u < a_{i+1}]. Let j, k be such that a_i = b_j and a_{i+1} = b_k.
According to our Axiom ii), each connected component of [b_l <= u < b_{l+1}]
is contained in a connected component of [a_i <= u < a_{i+1}], for each l ∈ {j, ...,
k − 1}. Then, by Axiom i), each connected component of [b_l <= u < b_{l+1}]
is either contained in A or disjoint from A (modulo a
null set). Let D_l be the set of connected components of [b_l <= u < b_{l+1}]
contained in A. We have that A coincides, modulo a null set, with the union
of the families D_l, l = j, ..., k − 1.
Indeed, if this were not true, then we would find by Axiom ii) another connected
component of [b_l <= u < b_{l+1}], for some l, contained in A,
and we would obtain a contradiction.
Let L¹(Y, A_P, μ) be the space of functions which are measurable with
respect to the σ-algebra A_P and μ-integrable. Proposition 1 can be translated
as a statement about the operator E(·|u, P):
Proposition 2 Let P be an admissible partition of [a, b]. Then E(·|u, P) is
a bounded linear operator from L¹(Y, B, μ) onto L¹(Y, A_P, μ) satisfying the properties
of Proposition 1. If P' is admissible and is a refinement of P,
then
    E(E(v|u, P')|u, P) = E(v|u, P).    (4)
Proof. We have shown that A_P is a sub-σ-algebra of A_{P'}. Let A ∈ CC(P)
and let A_n ∈ CC(P') be the components such that A = ∪_n A_n modulo a null
set. Then
    ∫_A v(x) dx = Σ_n ∫_{A_n} v(x) dx,
which is equivalent to (4).
For any partition P = {a_0 < a_1 < ... < a_N} of [a, b] we define
    |P| := max_{i ∈ {0,1,...,N−1}} (a_{i+1} − a_i).
Let P_n be a sequence of partitions of [a, b] such that
a) P_n are admissible, i.e., the sections of u associated to levels of P_n, are in
F(u),
b) P_{n+1} is a refinement of P_n,
c) |P_n| → 0 as n → ∞.
Let v_n := E(v|u, P_n). Note that the σ-algebras
A_{P_n} form a filtration, i.e., an increasing sequence of σ-algebras contained in
B, and, by Proposition 2, we have
    E(v_{n+1}|u, P_n) = v_n.
Thus, v_n is a martingale relative to ({A_{P_n}}_n, μ) ([8], Ch. VII, Sect. 8).
According to the martingale convergence theorem we have
Theorem 1 Let A_∞ be the σ-algebra generated by the sequence of σ-algebras
A_{P_n}, n >= 1, i.e., the smallest σ-algebra containing all of them. If v ∈
L¹(Y, B, μ), then v_n converges in L¹ and a.e. to a function
E(v|A_∞), which may be considered as the projection of v into the
space L¹(Y, A_∞, μ).
Proof. Bounded martingales in L^p converge in L^p and a.e. if p > 1.
Martingales generated as conditional
expectations of a function v ∈ L¹
with respect to a filtration are equiintegrable
and, thus, converge in L¹ ([26], [8], Ch. VII, p. 319). Now, by Lévy's
Upward Theorem, v_n = E(v|A_{P_n}) → E(v|A_∞) ([26], p. 331). The
final statement is a consequence of the properties of conditional expectations
as described in Proposition 2.
4.1 The BV model
Let Ω be an open subset of ℝ^N. A function u ∈ L¹(Ω) whose partial derivatives
in the sense of distributions are measures with finite total variation in Ω
is called a function of bounded variation. The class of such functions will be
denoted by BV(Ω). Thus u ∈ BV(Ω) if and only if there are Radon measures
μ_1, ..., μ_N defined in Ω with finite total mass in Ω and
    ∫_Ω u D_i φ dx = − ∫_Ω φ dμ_i
for all φ ∈ C_0^∞(Ω), i = 1, ..., N. Thus the gradient of u is a vector valued
measure whose finite total variation in an open set B ⊆ Ω is given by
    |Du|(B) := sup { ∫_B u div φ dx : φ ∈ C_0^∞(B; ℝ^N), |φ| <= 1 }.
This defines a positive measure called the variation measure of u. For further
information concerning functions of bounded variation we refer to [9] and [27].
For a Lebesgue measurable subset E ⊆ ℝ^N and a point x ∈ ℝ^N, the
following notation will be used:
    D̄(x, E) := limsup_{r→0+} |E ∩ B(x, r)| / |B(x, r)|
and
    D̲(x, E) := liminf_{r→0+} |E ∩ B(x, r)| / |B(x, r)|.
D̄(x, E) and D̲(x, E) will be called the upper and lower densities of x in E.
If the upper and lower densities are equal, then their common value will be
called the density of x in E and will be denoted by D(x, E).
The measure theoretic boundary of E is defined by
    ∂_M E := {x ∈ ℝ^N : D̄(x, E) > 0 and D̄(x, ℝ^N \ E) > 0}.
Here and in what follows we shall denote by H^α the Hausdorff measure of
dimension α ∈ [0, N]. In particular, H^{N−1} denotes the (N−1)-dimensional
Hausdorff measure and H^N, the N-dimensional Hausdorff measure, coincides
with the Lebesgue measure in ℝ^N.
Let E be a subset of ℝ^N with finite perimeter. This amounts to saying that
χ_E belongs to BV(ℝ^N), the space of functions of bounded variation. Then ∂_M E
is rectifiable, i.e., ∂_M E is contained, up to a set of H^{N−1}-measure zero, in
a countable union of (N−1)-dimensional embedded C¹-submanifolds of ℝ^N, and we
have that H^{N−1}(∂_M E) = ||Dχ_E||. We shall denote by P(E) the perimeter of
E, P(E) := H^{N−1}(∂_M E).
As shown in [1], we can define the connected components of a set of finite
perimeter E so that they are sets of finite perimeter and constitute a partition
of E. Let us describe those results.
Let E ⊆ ℝ^N be a set with finite perimeter. We say that E
is decomposable if there exists a partition (A, B) of E such that P(E) =
P(A) + P(B) and both A and B have strictly positive measure. We say that
E is indecomposable if it is not decomposable.
Theorem 2 ([1]) Let E be a set of finite perimeter in ℝ^N. Then there
exists a unique finite or countable family of pairwise disjoint indecomposable
sets {Y_n}_{n∈I} such that
i) |Y_n| > 0 for each n ∈ I and E = ∪_{n∈I} Y_n modulo a null set;
ii) the sets Y_n are sets of finite perimeter and P(E) = Σ_{n∈I} P(Y_n);
iii) the sets Y_n are maximal indecomposable, i.e. any indecomposable set
F ⊆ E is contained, modulo a null set, in some set Y_n.
The sets Y_n will be called the M-components of E. Moreover, we have:
if F is a set of finite perimeter contained in E, then each M-component of
F is contained in an M-component of E. In other words, if F_per denotes the
family of subsets of ℝ^N with finite perimeter, then the above statement gives
a connected component analysis of (ℝ^N, F_per).
Let u ∈ BV(Ω). Without loss of generality we
may assume that u takes values in [a, b]. We know that for almost all levels
λ the level set [u >= λ] is a set of finite perimeter. Let G(u) be the family of
sets u^{−1}([α, β)), α < β, such that u^{−1}([α, β)) is of finite perimeter. Then
Theorem 2 describes a connected component analysis of G(u).
Let P_n be a sequence of partitions of [a, b] such that
a) P_{n+1} is a refinement of P_n,
b) for each n ∈ ℕ, the sections of u associated to levels of P_n, are sets of
finite perimeter,
c) |P_n| → 0 as n → ∞,
d)
Given a σ-algebra A and a measure μ, we denote by Ā the completion of
A with respect to μ, i.e., the smallest σ-algebra containing A and all μ-null sets.
Lemma 2 Let P_n, Q_n be sequences of partitions of [a, b] satisfying a), b), c), d).
Let A_∞, resp. B_∞, be the σ-algebra generated by the sequence of σ-algebras
A_{P_n}, resp. A_{Q_n}. Then Ā_∞ = B̄_∞.
Proof. It suffices to prove that, given n ∈ ℕ, an M-component X of a section
[c <= u < d), [c, d) being an interval of P_n, and ε > 0, there is a set Z ∈ B_∞
such that μ(X Δ Z) < ε. Let
m ∈ ℕ be large enough so that [c, d) is covered by intervals [b_i, b_{i+1}) of Q_m
with total overshoot of measure less than ε. Now, since any M-component of a
set [b_i <= u < b_{i+1}] is either
contained in X or disjoint from it, we have μ(X Δ Z) < ε,
where Z is the union of the M-components of the sets [b_i <= u < b_{i+1}] which
are contained in X. Obviously, Z ∈ B_∞.
This implies our statement.
The above Lemma proves that, up to completion, the σ-algebra generated by the
sequence of σ-algebras A_{P_n}, n >= 1, is independent of the sequence of partitions
satisfying a), b), c), d). Let us denote by A_u the σ-algebra given by the last Lemma.
Observe that E(u|A_u) = u.
Theorem 3 Let v ∈ L¹ and let P_n be a sequence of partitions
satisfying a), b), c), d). Let v_n := E(v|u, P_n). Then v_n is a martingale relative
to ({A_{P_n}}_n, μ), and v_n converges in L¹ and a.e. to a function
E(v|A_u), which may be considered as the projection of v into the
space L¹(A_u). The limit is independent of the sequence of partitions satisfying
a), b), c), d). In particular, E(u|A_u) = u.
Remark. Let u ∈ BV(Ω) and suppose that v is integrable with respect to
the measure |Du| dx. Then we may also define
    E'(v|u, P)(x) := m'_A   for x ∈ A, A ∈ CC(P),
where
    m'_A := ( ∫_A v |Du| dx ) / ( ∫_A |Du| dx ).
Then results similar to Proposition 2 and Theorem 3 hold for E' as an operator
from L¹(|Du| dx) into itself. Formally, for any connected component A of
[c <= u < d] we have, by the coarea formula,
    ∫_A v |Du| dx = ∫_c^d ( d/dt ∫_{[u_A > t]} v |Du| dx ) dt = ∫_c^d ( ∫_{∂[u_A > t]} v dH^{N−1} ) dt,
where [u_A > t] denotes the connected component of [u > t] obtained from
A when letting c, d → λ. In the case under consideration, this amounts to
interpreting our algorithm as the computation of the average of v along the
connected components of the level curves of u. Note that, if we take v = 1,
the above formula gives the perimeter of the sets [u_A > t]
when letting c, d → λ.
4.2 Contrast invariance for color
We can state the contrast invariance axiom for color as follows.
Contrast Invariance of Operations on Color [6]: We say that an operator
T is morphological if for any U and any
    h(U) = h̃(⟨U, σ⟩) σ + U − ⟨U, σ⟩ σ,
where h̃ is a continuous increasing real function, we have T(h(U)) = h(T(U)).
We refer to such functions h as contrast changes for color vectors.
Proposition 3 Let U be a color image such that L(U), S̃(U), H(U) ∈ L¹(Ω).
Let A_L be the σ-algebra associated to the luminance
channel as described in the previous section. Let us define the filter
    F(U) := ( L(U), E(S̃(U)|A_L), E(H(U)|A_L) )   (in (L, S̃, H) coordinates).
Then F is a morphological operator.
Proof. Let h be a contrast change for color vectors. Let V be any color
image. Then the Luminance, Saturation and Hue of the color vector h(V) are
given by
    L(h(V)) = h̃(L(V)),   S̃(h(V)) = S̃(V),   H(h(V)) = H(V).
Then the Luminance, Saturation and Hue of the color vector F(h(U)) are
    ( h̃(L(U)), E(S̃(U)|A_{h̃(L(U))}), E(H(U)|A_{h̃(L(U))}) )   (in (L, S̃, H) coordinates).    (13)
Since h̃ is continuous and increasing, the level sets of h̃(L(U)) generate the same
σ-algebra as those of L(U); using Proposition 1, ii), we may write (13) as
    ( h̃(L(U)), E(S̃(U)|A_L), E(H(U)|A_L) )   (in (L, S̃, H) coordinates).
By definition of F(U), this is precisely h(F(U)). Thus F(h(U)) = h(F(U)),
i.e., the operator F is contrast invariant.
Acknowledgement
We gratefully acknowledge partial support by CYCIT
Project, reference TIC99-0266 and by the TMR European Project Viscosity
Solutions and their Applications, FMRX-CT98-0234.
References
Connected Components of Sets of Finite Perimeter and Applications to
Contrast Invariant Image Intersection
Total Variation Methods for Restoration of Vector-Valued Images
Topographic Maps and Local Contrast Changes in Natural Images
Topographic Maps of Color Images
Measure Theory and Fine Properties of Functions
Robot Vision MIT Press
Point et ligne sur plan
Filtrage et d'esocclusion d'images par m'ethodes d'ensembles de niveau PhD Thesis
Cahiers de l'ENS Cachan
Fast Computation of a Contrast Invariant Image Representation
Anisotropic Diffusion of Multivalued Images with Applications to Color Filtering
Image Analysis and Mathematical Morphology
Morphological area openings and closings for gray-scale im- ages
Cambridge Mathematical Textbooks
Keywords: color images; luminance constraint; level sets; morphological filtering
The algebra and combinatorics of shuffles and multiple zeta values

Abstract. The algebraic and combinatorial theory of shuffles, introduced by Chen and Ree, is further developed and applied to the study of multiple zeta values. In particular, we establish evaluations for certain sums of cyclically generated multiple zeta values. The boundary case of our result reduces to a former conjecture of Zagier.

1. INTRODUCTION
We continue our study of nested sums of the form
    ζ(s₁, s₂, …, s_k) := Σ_{n₁ > n₂ > ⋯ > n_k >= 1} Π_{j=1}^{k} n_j^{−s_j},    (1)
commonly referred to as multiple zeta values [2, 3, 4, 11, 12, 16, 19]. Here
and throughout, s₁, s₂, …, s_k are positive integers with s₁ > 1 to ensure
convergence.
44 D. BOWMAN & D. M. BRADLEY
There exist many intriguing results and conjectures concerning values
of (1) at various arguments. For example,
    ζ({3, 1}ⁿ) = 2π^{4n} / (4n + 2)!,    (2)
in which {3, 1}ⁿ denotes n consecutive repetitions of the argument pair 3, 1,
was conjectured by Zagier [19] and first proved by Broadhurst et al. [2]
using analytic techniques. Subsequently, a purely combinatorial proof was
given [3] based on the well-known shuffle property of iterated integrals, and
it is this latter approach which we develop more fully here. For further and
deeper results from the analytic viewpoint, see [4].
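The n = 1 case of (2) asserts ζ(3, 1) = 2π⁴/6! = π⁴/360, which is easy to check numerically. A small sketch (the truncation depth 4000 is an arbitrary choice):

```python
from math import pi

def zeta31(terms):
    """Partial sum of zeta(3,1) = sum_{m>n>=1} 1/(m^3 n),
    folded as sum_m H_{m-1}/m^3 with H the harmonic numbers."""
    total, harmonic = 0.0, 0.0
    for m in range(1, terms + 1):
        total += harmonic / m ** 3   # harmonic holds H_{m-1} here
        harmonic += 1.0 / m
    return total

approx = zeta31(4000)   # should be close to pi**4 / 360
```

The tail of the folded series is O(log(M)/M²), so a few thousand terms already give five or six correct digits.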
Our main result is a generalization of (2) in which twos are inserted at
various places in the argument string {3, 1}ⁿ. Given a non-negative integer
m, let ~m = (m₀, m₁, …, m₂ₙ) be a vector of non-negative integers, and
consider the multiple zeta value Z(~m) obtained by inserting m_j consecutive twos
after the jth element of the string {3, 1}ⁿ for each j = 0, 1, …, 2n.
For non-negative integers k and r, let C_r(k) denote the set of
ordered non-negative integer compositions of k having r parts. For example,
C₂(2) = {(0, 2), (1, 1), (2, 0)}. Our generalization
of (2) states (see Corollary 5.1 of Section 5) that
    Σ_{~m ∈ C₂ₙ₊₁(m−2n)} Z(m₀, m₁, …, m₂ₙ) = (1/(2n+1)) binom(m, 2n) π^{2m} / (2m+1)!    (3)
for all non-negative integers m and n with m >= 2n; here binom(m, 2n) denotes
the binomial coefficient. Equation (2) is the
special case of (3) in which m = 2n. If we let C(m₀, m₁, …, m₂ₙ) denote the
sum over the cyclic permutations of the argument string (defined precisely in
Section 5), then (see Theorem 5.1 of Section 5)
    Σ_{~m ∈ C₂ₙ₊₁(m−2n)} C(m₀, m₁, …, m₂ₙ) = binom(m, 2n) π^{2m} / (2m+1)!    (4)
is an equivalent formulation of (3). The cyclic insertion conjecture [3] can
be restated as the assertion that
    C(m₀, m₁, …, m₂ₙ) = π^{2m} / (2m+1)!
for all ~m ∈ C₂ₙ₊₁(m−2n) and integers m >= 2n >= 0. Thus, our result reduces the problem to that of
establishing the invariance of C(~s) on C₂ₙ₊₁(m−2n).
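The index sets C_r(k) are easy to enumerate. A short sketch via the stars-and-bars bijection (the function name is ours); in particular |C₂ₙ₊₁(m−2n)| = binom(m, 2n), the number of terms on the left-hand side of (3):

```python
from itertools import combinations
from math import comb

def compositions(k, r):
    """C_r(k): ordered r-tuples of non-negative integers summing to k,
    via the stars-and-bars bijection with (r-1)-subsets of {0,...,k+r-2}."""
    for bars in combinations(range(k + r - 1), r - 1):
        cuts = (-1,) + bars + (k + r - 1,)
        yield tuple(cuts[i + 1] - cuts[i] - 1 for i in range(r))
```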
The outline of the paper is as follows. Section 2 provides the essential
background for our results. The theory is formalized and further developed
in Section 3, in which we additionally give a simple proof of Ree's formula
for the inverse of a Lie exponential. In Section 4 we focus on the combinatorics
of two-letter words, as this is most directly relevant to the study of
multiple zeta values. In the final section, we establish the aforementioned
results (3) and (4).
2. ITERATED INTEGRALS
As Kontsevich [19] observed, (1) admits an iterated integral representation
    ζ(s₁, s₂, …, s_k) = ∫_1^0 Π_{j=1}^{k} a^{s_j − 1} b    (5)
of depth k and weight Σ_j s_j.
Here, the notation
    ∫_x^y Π_{j=1}^{n} Ω_j := ∫_{x > t₁ > t₂ > ⋯ > t_n > y} Π_{j=1}^{n} Ω_j(t_j)    (6)
of [2] is used, with a and b denoting the differential 1-forms dt/t and
dt/(1 − t), respectively; thus, for example, Ω_j(t_j) = f_j(t_j) dt_j if
Ω_j = f_j(t) dt. Furthermore, we shall
agree that any iterated integral of an empty product of differential 1-forms
is equal to 1. This convention is mainly a notational convenience; nevertheless
we shall find it useful for stating results about iterated integrals
more concisely and naturally than would be possible otherwise. Thus (6)
reduces to 1 when n = 0, regardless of the values of x and y.
Clearly the product of two iterated integrals of the form (6) consists of
a sum of iterated integrals involving all possible interlacings of the variables.
Thus, if we denote by Shuff(n, m) the set of all permutations σ of the indices
{1, 2, …, n + m} satisfying σ⁻¹(1) < σ⁻¹(2) < ⋯ < σ⁻¹(n) and
σ⁻¹(n + 1) < σ⁻¹(n + 2) < ⋯ < σ⁻¹(n + m), then we have the self-evident
    ∫_x^y Π_{j=1}^{n} Ω_j · ∫_x^y Π_{j=n+1}^{n+m} Ω_j = Σ_{σ ∈ Shuff(n,m)} ∫_x^y Π_{j=1}^{n+m} Ω_{σ(j)},    (7)
and so define the shuffle product by
    Π_{j=1}^{n} Ω_j ш Π_{j=n+1}^{n+m} Ω_j := Σ_{σ ∈ Shuff(n,m)} Π_{j=1}^{n+m} Ω_{σ(j)}.
Thus, the sum is over all non-commutative products (counting multiplicity)
of length n + m in which the relative orders of the factors in the two
subproducts are preserved. The term "shuffle" is
used because such permutations arise in riffle shuffling a deck of
cards cut into one pile of n cards and a second pile of m cards.
The study of shuffles and iterated integrals was pioneered by Chen [6, 7]
and subsequently formalized by Ree [18]. A fundamental formula noted by
Chen expresses an iterated integral of a product of two paths as a convolution
of iterated integrals over the two separate paths. A second formula
also due to Chen shows what happens when the underlying simplex (6) is
re-oriented. Chen's proof in both cases is by induction on the number of
differential 1-forms. Since we will make use of these results in the sequel, it
is convenient to restate them here in the current notation and give direct
proofs.
Proposition 2.1 ([8, (1.6.2)]). Let Ω₁, Ω₂, …, Ω_n be differential 1-
forms and let x, y ∈ ℝ. Then
    ∫_x^y Π_{j=1}^{n} Ω_j = (−1)ⁿ ∫_y^x Π_{j=1}^{n} Ω_{n+1−j}.
Proof. Observe that
    ∫_x^y Π_{j=1}^{n} Ω_j = ∫_{x > t₁ > ⋯ > t_n > y} Π_{j=1}^{n} Ω_j(t_j) = ∫_{y < t_n < ⋯ < t₁ < x} Π_{j=1}^{n} Ω_{n+1−j}(t_{n+1−j}).
Now switch the limits of integration at each level.
Proposition 2.2 ([6, Lemma 1.1]). Let Ω₁, …, Ω_n be differential
1-forms and let y <= z <= x. Then
    ∫_x^y Π_{j=1}^{n} Ω_j = Σ_{i=0}^{n} ∫_x^z Π_{j=1}^{i} Ω_j · ∫_z^y Π_{j=i+1}^{n} Ω_j.
Proof. Split the domain of integration x > t₁ > ⋯ > t_n > y according to
the index i for which t_i > z >= t_{i+1}.
A related version of Proposition 2.2, "Hölder Convolution," is exploited
in [2] to indicate how rapid computation of multiple zeta values and related
slowly-convergent multiple polylogarithmic sums is accomplished. In
Section 3.2, Proposition 2.2 is used in conjunction with Proposition 2.1 to
give a quick proof of Ree's formula [18] for the inverse of a Lie exponential.
3. THE SHUFFLE ALGEBRA
We have seen how shuffles arise in the study of iterated integral representations
for multiple zeta values. Following [15] (cf. also [3, 18]) let A be
a finite set and let A denote the free monoid generated by A. We regard A
as an alphabet, and the elements of A as words formed by concatenating
any finite number of letters from this alphabet. By linearly extending the
concatenation product to the set QhAi of rational linear combinations of
elements of A , we obtain a non-commutative polynomial ring with indeterminates
the elements of A and with multiplicative identity 1 denoting
the empty word.
The shuffle product is alternatively defined first on words by the recursion
    w ш 1 = 1 ш w = w,   au ш bv = a(u ш bv) + b(au ш v),    (8)
for all words w, u, v and letters a, b, and then extended linearly to QhAi. One
checks that the shuffle product so defined is associative and commutative, and
thus QhAi equipped with the shuffle product becomes a commutative Q-algebra,
denoted Sh_Q[A].
Radford [17] has shown that ShQ [A] is isomorphic to the polynomial algebra
Q[L] obtained by adjoining to Q the transcendence basis L of Lyndon
words.
The recursive definition (8) has its analytical motivation in the formula
for integration by parts, equivalently, the product rule for differentiation.
Thus, if we put a = f(t) dt and b = g(t) dt, and set
    F(x) := ∫_x^y (au) · ∫_x^y (bv),
then writing F(x) = ∫_x^y F'(s) ds and applying the product rule for differentiation
yields
    F(x) = ∫_x^y ( a(u ш bv) + b(au ш v) ),
which is the inductive step in the verification that the recursion (8) agrees
with (7). Alternatively, by viewing F as a function of y, we see that the recursion
could equally well have been stated as
    1 ш w = w ш 1 = w,   ua ш vb = (u ш vb)a + (ua ш v)b.    (9)
Of course, both definitions are equivalent to (7).
Copyright © 2002 by Academic Press. All rights of reproduction in any form reserved.
3.1. Q-Algebra Homomorphisms on Shuffle Algebras
The following relatively straightforward results concerning Q-algebra homomorphisms
on shuffle algebras will facilitate our discussion of the Lie
exponential in Section 3.2 and of relationships between certain identities
for multiple zeta values and Euler sums [1, 2, 4]. To reduce the possibility
of any confusion in what follows, we make the following definition explicit.
Definition 3.1. Let R and S be rings with identity, and let A and B
be alphabets. A ring anti-homomorphism ψ : RhAi → ShBi is an additive,
R-linear, identity-preserving map that satisfies ψ(uv) = ψ(v)ψ(u) for all
letters u, v ∈ A (and hence for all u, v ∈ RhAi).
Proposition 3.1. Let A and B be alphabets. A ring anti-homomorphism
ψ : QhAi → QhBi induces a Q-algebra
homomorphism of shuffle algebras ψ : Sh_Q[A] → Sh_Q[B] in the natural
way.
Proof. It suffices to show that ψ(u ш v) = ψ(u) ш ψ(v) for all words u, v ∈
A*. The proof is by induction on the combined length, and will require both
recursive definitions of the shuffle product. For the base case, note
that ψ(u ш 1) = ψ(u) = ψ(u) ш ψ(1), and likewise with the empty word on
the left. For the inductive step, let a, b ∈ A be letters and assume that
ψ(u ш bv) = ψ(u) ш ψ(bv) and ψ(au ш v) = ψ(au) ш ψ(v).
Then, as ψ is an anti-homomorphism of rings,
    ψ(au ш bv) = ψ(u ш bv)ψ(a) + ψ(au ш v)ψ(b)
               = (ψ(u) ш ψ(v)ψ(b))ψ(a) + (ψ(u)ψ(a) ш ψ(v))ψ(b)
               = ψ(u)ψ(a) ш ψ(v)ψ(b) = ψ(au) ш ψ(bv),
the penultimate equality being an instance of the recursion (9).
Of course, there is an analogous result for ring homomorphisms.
Proposition 3.2. Let A and B be alphabets. A ring homomorphism φ :
QhAi → QhBi induces a Q-algebra homomorphism
of shuffle algebras φ : Sh_Q[A] → Sh_Q[B] in the natural way.
Proof. The proof is similar to the proof of Proposition 3.1, and in
fact is simpler in that it requires only one of the two recursive definitions
of the shuffle product. Alternatively, one can put u = a₁a₂⋯a_n, v = a_{n+1}⋯a_{n+m},
and verify the equation φ(u ш v) = φ(u) ш φ(v) directly,
using (7) and the hypothesis that φ is a ring homomorphism on QhAi.
Example 1. Let A be an alphabet and let R : QhAi → QhAi be the
canonical ring anti-automorphism induced by the assignments R(a) = a for
all a ∈ A. Then R(a₁a₂⋯a_n) = a_n⋯a₂a₁ for all a₁, a₂, …, a_n ∈ A, so that
R is a string-reversing involution which induces a shuffle algebra automorphism
of Sh_Q[A]. We shall reserve the notation R for this automorphism
throughout.
Example 2. Let A = {a, b} and let S : QhAi → QhAi be the ring
automorphism induced by the assignments S(a) = b, S(b) = a. Then the
composition ψ := S ∘ R is a letter-switching, string-reversing involution
which induces a shuffle algebra automorphism of Sh_Q[A]. In the case a =
dt/t, b = dt/(1 − t), this is the so-called Kontsevich duality [19, 1, 2, 16] for
iterated integrals obtained by making the change of variable t ↦ 1 − t at
each level of integration. Words which are invariant under ψ are referred to
as self-dual. It is easy to see that a self-dual word must be of even length,
and the number of self-dual words of length 2k is 2^k.
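The count 2^k of self-dual words is easy to confirm by brute force. A sketch with the duality written out explicitly (function names are ours):

```python
from itertools import product

def dual(w):
    """Kontsevich duality on {a,b}-words: reverse and swap the letters."""
    swap = {"a": "b", "b": "a"}
    return "".join(swap[c] for c in reversed(w))

def self_dual(length):
    """All words of the given length fixed by the duality."""
    return [w for w in ("".join(p) for p in product("ab", repeat=length))
            if w == dual(w)]
```

The first half of a self-dual word determines the second half, which is where the count 2^k comes from.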
Example 3. Let
be the letter-shifting, string-reversing ring anti-homomorphism induced by
the assignments
the choice of differential 1-
forms
for multiple zeta values to equivalent identities for alternating unit Euler
sums. We refer the reader to [1, 2, 4] for details concerning alternating Euler
sums; for our purposes here it suffices to assert that they are important
instances-as are multiple zeta values-of multiple polylogarithms [2, 10].
3.2. A Lie Exponential
Let A be an alphabet, and let X = {X_a : a ∈ A} be a set of card(A)
distinct non-commuting indeterminates. Every element F ∈ QhXi can be
written as a sum F = Σ_{n>=0} F_n, where F_n is a homogeneous form
of degree n. Those elements F for which F_n belongs to the Lie algebra
generated by X for each n > 0 and for which F₀ = 0 are referred to as Lie
elements.
Let X : QhAi → QhXi be the canonical ring isomorphism induced by the
assignments X(a) = X_a for all a ∈ A. If Y = {Y_a : a ∈ A} is another set of
non-commuting indeterminates, we similarly define Y : QhAi → QhYi to
be the canonical ring isomorphism induced by the assignments Y(a) = Y_a
for all a ∈ A. Let us suppose that X and Y are disjoint
and their elements commute with each other, so that for all a, b ∈ A we
have X_a Y_b = Y_b X_a. If we define addition and multiplication in Q[X, Y] by
(X + Y)(a) := X_a + Y_a and (XY)(a) := X_a Y_a for all a ∈ A, then Q[X, Y]
becomes a commutative Q-algebra of ring isomorphisms Z. For example,
if w = a₁a₂⋯a_n, where a₁, …, a_n ∈ A, then
    (X + Y)(w) = Π_{j=1}^{n} (X_{a_j} + Y_{a_j}).
Let G : Q[X, Y] → (Sh_Q[A])hhX, Yii be defined by
    G(Z) := Σ_{w ∈ A*} w Z(w).    (10)
Evidently,
    G(X) = exp_ш( Σ_{a∈A} a X_a ) := Σ_{n>=0} (1/n!) ( Σ_{a∈A} a X_a )^{ш n}.
More importantly, G is a homomorphism from the underlying Q-vector
space to the underlying multiplicative monoid ((Sh_Q[A])hhX, Yii, ш):
Theorem 3.1. The map G : Q[X, Y] → (Sh_Q[A])hhX, Yii defined by
(10) has the property that
    G(X) ш G(Y) = G(X + Y).    (11)
Proof. On the one hand, we have
    G(X + Y) = Σ_{w∈A*} w (X + Y)(w),
whereas on the other hand,
    G(X) ш G(Y) = Σ_{u∈A*} Σ_{v∈A*} (u ш v) X(u) Y(v).
Therefore, we need to show that
    Σ_{w∈A*} w (X + Y)(w) = Σ_{u,v∈A*} (u ш v) X(u) Y(v).
But if w = a₁a₂⋯a_n, then expanding the product
    (X + Y)(w) = Π_{j=1}^{n} (X_{a_j} + Y_{a_j})
gives 2ⁿ terms, one for each way of selecting an X or a Y from each factor.
A term in which X is selected at the positions p₁ < ⋯ < p_k and Y at the
complementary positions q₁ < ⋯ < q_{n−k} contributes X(u) Y(v), where
u = a_{p₁}⋯a_{p_k} and v = a_{q₁}⋯a_{q_{n−k}}. Conversely, a given pair of
words (u, v) arises in this way from precisely those words w which occur in the
shuffle product u ш v, counted with multiplicity. Hence
    Σ_{w∈A*} w (X + Y)(w) = Σ_{u,v∈A*} (u ш v) X(u) Y(v) = G(X) ш G(Y),
as required. In the penultimate step, we have summed over all
shuffles of the indeterminates X with the indeterminates Y, yielding all 2ⁿ
possible choices obtained by selecting an X or a Y from each factor in the product.
Remarks. Theorem 3.1 suggests that the map G defined by (10) can
be viewed as a non-commutative analog of the exponential function. The
analogy is clearer if we rewrite (11) in the form
    exp_ш( Σ_{a∈A} a(X_a + Y_a) ) = exp_ш( Σ_{a∈A} a X_a ) ш exp_ш( Σ_{a∈A} a Y_a ).
Just as the functional equation for the exponential function is equivalent
to the binomial theorem, Theorem 3.1 is equivalent to the following shuffle
analog of the binomial theorem:
Proposition 3.3 (Binomial Theorem in QhXihYi). Let X = {X_a : a ∈ A}
and Y = {Y_a : a ∈ A} be disjoint sets of non-commuting
indeterminates such that X_a Y_b = Y_b X_a for all a, b ∈ A. Then
    ( Σ_{a∈A} a(X_a + Y_a) )^{ш n} = Σ_{k=0}^{n} binom(n, k) ( Σ_{a∈A} a X_a )^{ш k} ш ( Σ_{a∈A} a Y_a )^{ш (n−k)}.
Chen [6, 7] considered what is in our notation the iterated integral of (10),
namely
    G_x^y := Σ_{w∈A*} X(w) ∫_x^y w,
in which the alphabet A is viewed as a set of differential 1-forms. He
proved [6, Theorem 6.1], [7, Theorem 2.1] the non-commutative generating
function formulation
    G_x^y = G_x^z G_z^y
of Proposition 2.2, and also proved [7, Theorem 4.2] that if the 1-forms
are piecewise continuously differentiable, then log G_x^y is a Lie element, or
equivalently, that G_x^y is a Lie exponential. However, Ree [18] showed that
a formal power series
    log( 1 + Σ c_{i₁⋯i_n} X_{i₁}⋯X_{i_n} )
in non-commuting indeterminates X_j is a Lie element if and only if the
coefficients satisfy the shuffle relations
    c_{i₁⋯i_n} c_{i_{n+1}⋯i_{n+k}} = Σ_{σ∈Shuff(n,k)} c_{i_{σ(1)}⋯i_{σ(n+k)}}
for all non-negative integers n and k. Using integration by parts, Ree [18]
showed that Chen's coefficients do indeed satisfy these relations, and that
more generally, G(X) as defined by (10) is a Lie exponential, a fact that
can also be deduced from Theorem 3.1 and a result of Friedrichs [9, 13, 14].
Ree also proved a formula [18, Theorem 2.6] for the inverse of (10), using
certain derivations and Lie bracket operations. It may be of interest to give
a more direct proof, using only the shuffle operation. The result is restated
below in our notation.
Theorem 3.2 ([18, Theorem 2.6]). Let A be an alphabet, let X =
{X_a : a ∈ A} be a set of non-commuting indeterminates and let X :
QhAi → QhXi be the canonical ring isomorphism induced by the assignments
X(a) = X_a for all a ∈ A. Let G(X) be as in (11), let R be as in
Example 1, and put
    G̃(X) := Σ_{w∈A*} (−1)^{|w|} R(w) X(w),
where |w| denotes the length of the word w. Then G(X) ш G̃(X) = 1.
It is convenient to state the essential ingredient in our proof of Theorem
3.2 as an independent result.
Lemma 3.1. Let A be an alphabet and let R be as in Example 1. For all
words w ∈ A*,
    Σ_{uv=w} (−1)^{|u|} R(u) ш v = δ_{w,1}.    (13)
Remarks. We have used the Kronecker delta: δ_{w,1} = 1 if w is the empty
word 1, and δ_{w,1} = 0 otherwise.
Since R is a Q-algebra automorphism of Sh_Q[A], applying R to both sides
of (13) yields the related identity
    Σ_{uv=w} (−1)^{|u|} u ш R(v) = δ_{w,1}.
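Identity (13) can be spot-checked mechanically: recompute each shuffle by the recursion (8) and accumulate signed coefficients over all splittings w = uv. A sketch (function names are ours):

```python
from collections import Counter

def sh(u, v):
    """Shuffle product by the recursion (8); Counter of words."""
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    out = Counter()
    for w, c in sh(u[1:], v).items():
        out[u[0] + w] += c
    for w, c in sh(u, v[1:]).items():
        out[v[0] + w] += c
    return out

def antipode_sum(w):
    """Left side of (13): sum over w = uv of (-1)^{|u|} R(u) sh v,
    where R reverses words; returned as signed coefficients."""
    total = Counter()
    for i in range(len(w) + 1):
        u, v = w[:i], w[i:]
        for word, c in sh(u[::-1], v).items():
            total[word] += (-1) ** i * c
    return total
```

In Hopf-algebraic language, (13) says that w ↦ (−1)^{|w|} R(w) is the antipode of the shuffle algebra with deconcatenation coproduct.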
Proof of Lemma 3.1. First note that if we view the elements of A as
differential 1-forms and integrate the left hand side of (13) from y to x,
then we obtain
    Σ_{uv=w} (−1)^{|u|} ∫_y^x R(u) · ∫_y^x v = Σ_{uv=w} ∫_x^y u · ∫_y^x v = ∫_x^x w = δ_{w,1},
by Propositions 2.1 and 2.2. For an integral-free proof, we proceed as
follows. Trivially (13) holds when w = 1, so suppose w = a₁a₂⋯a_n, where
a₁, …, a_n ∈ A and n is a positive integer. Writing u = a₁⋯a_j and
v = a_{j+1}⋯a_n, one expands each term
    (−1)^j R(u) ш v = (−1)^j ( a_j⋯a₁ ш a_{j+1}⋯a_n )
by the recursion (8); the contributions of consecutive values of j cancel in
pairs, and the sum telescopes to zero.
Remark. One can also give an integral-free proof of Lemma 3.1 by induction
using the recursive definition (9) of the shuffle product.
Proof of Theorem 3.2. By Lemma 3.1, we have
    G(X) ш G̃(X) = Σ_{u,v∈A*} (−1)^{|u|} (R(u) ш v) X(u) X(v)
                 = Σ_{w∈A*} X(w) Σ_{uv=w} (−1)^{|u|} R(u) ш v
                 = Σ_{w∈A*} X(w) δ_{w,1} = 1.
Since (Sh_Q[A])hhXii is commutative with respect to the shuffle product,
the result follows.
4. COMBINATORICS OF SHUFFLE PRODUCTS
The combinatorial proof [3] of Zagier's conjecture (2) hinged on expressing
the sum of the words comprising the shuffle product of (ab)^p with (ab)^q
as a linear combination of basis subsums T_{p+q,n}. To gain a deeper understanding
of the combinatorics of shuffles on two letters, it is necessary to
introduce additional basis subsums. We do so here, and thereby find analogous
expansion theorems. We conclude the section by providing generating
function formulations for these results. The generating function formulation
plays a key role in the proof of our main result (4), Theorem 5.1 of
Section 5. The precise definitions of the basis subsums follow.
Definition 4.1 ([3]). For integers m >= n >= 0, let S_{m,n} denote the set
of words occurring in the shuffle product (ab)ⁿ ш (ab)^{m−n} in which the
subword a² appears exactly n times, and let T_{m,n} be the sum of the
distinct words in S_{m,n}. For all other integer pairs (m, n) it is convenient to
define T_{m,n} := 0.
Definition 4.2. For integers m > n >= 1, let U_{m,n} be the sum of
the elements of the set of words arising in the shuffle product of b(ab)^{n−1}
with b(ab)^{m−n−1} in which the subword b² occurs exactly n times. For all
other integer pairs (m, n) define U_{m,n} := 0.
In terms of the basis subsums, we have the following decompositions:
Proposition 4.1 ([3, Prop. 1]). For all non-negative integers p and
q,
    (ab)^p ш (ab)^q = Σ_{n=0}^{min(p,q)} 4ⁿ binom(p+q−2n, p−n) T_{p+q,n}.    (14)
The corresponding result for our basis (Definition 4.2) is
Proposition 4.2. For all positive integers p and q,
    b(ab)^{p−1} ш b(ab)^{q−1} = Σ_{n=1}^{min(p,q)} 2 · 4^{n−1} binom(p+q−2n, p−n) U_{p+q,n}.    (15)
Proof of Proposition 4.2. See the proof of Proposition 4.1 given in [3].
The only difference here is that a² occurs one less time per word than b²,
and so the multiplicity of each word must be divided by 2. The index of
summation now starts at 1 because there must be at least one occurrence
of b² in each term of the expansion.
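Proposition 4.1 can be verified by machine for small p and q: expand the shuffle product, group the words by the number n of occurrences of a², and compare each word's multiplicity with the coefficient predicted by (14). A sketch (function names are ours):

```python
from collections import Counter
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def sh(u, v):
    """Shuffle product via the standard recursion; Counter of words."""
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    out = Counter()
    for w, c in sh(u[1:], v).items():
        out[u[0] + w] += c
    for w, c in sh(u, v[1:]).items():
        out[v[0] + w] += c
    return out

def check_T_decomposition(p, q):
    """Every word of (ab)^p sh (ab)^q with exactly n occurrences of 'aa'
    should occur with multiplicity 4^n * binom(p+q-2n, p-n)."""
    for w, c in sh("ab" * p, "ab" * q).items():
        n = sum(1 for i in range(len(w) - 1) if w[i] == w[i + 1] == "a")
        if c != 4 ** n * comb(p + q - 2 * n, p - n):
            return False
    return True
```

(Runs of a's in such words have length at most 2, since the relative order within each copy of (ab)^k is preserved, so counting adjacent equal pairs counts the subwords a².)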
Corollary 4.1. For integers
a
Proof. From (8) it is immediate that
Now apply (14) and Proposition 4.2.
Proposition 4.3. Let x be sequences of not necessarily
commuting indeterminates, and let m be a non-negative (respec-
tively, positive) integer. We have the shuffle convolution formulae
\Theta
and
\Theta b(ab) k\Gamma1 b(ab)
respectively.
58 D. BOWMAN & D. M. BRADLEY
Proof. Starting with the left hand side of (17) and applying (14), we
find that
\Theta (ab) k (ab) m\Gammak
m\Gamman X
k=n
which proves (17). The proof of (18) proceeds analogously from (15).
As the proof shows, the products taken in (17) and (18) can be quite
general; between the not necessarily commutative indeterminates and the
polynomials in a; b the products need only be bilinear for the formulae to
hold. Thus, there are many possible special cases that can be examined.
Here we will consider only one major application. If we confine ourselves
to commuting geometric sequences, we obtain
Theorem 4.1. Let x and y be commuting indeterminates. In the commutative
polynomial ring (ShQ [a; b])[x; y] we have the shuffle convolution
\Theta (ab) k (ab) m\Gammak
for all non-negative integers m, and
\Theta
for all integers m - 2.
Proof. In Proposition 4.3, put x_k = x^k and y_k = y^k (k >= 0), and
apply the binomial theorem.
5. CYCLIC SUMS IN ShQ [A; B]
In this final section, we establish the results (3) and (4) stated in the
introduction. Let S m;n be as in Definition 4.1. Each word in S m;n has a
unique representation
(ab) m0
Y
(a
in which m_0, m_1, . . . , m_{2n} are non-negative integers with sum m - 2n.
Conversely, every ordered (2n + 1)-tuple (m_0, m_1, . . . , m_{2n}) of
non-negative integers with sum m - 2n corresponds
to a unique word in S_{m,n} via (21). Thus, a bijective correspondence φ is
established between the set S_{m,n} and the set C_{2n+1}(m - 2n) of ordered
non-negative integer compositions of m - 2n into 2n + 1 parts. In view of
the relationship (5) expressing multiple zeta values as iterated integrals, it
therefore makes sense to define
Thus, if
Z 1(ab) m0
Y
(a
in which the argument string consisting of m_j consecutive twos is inserted
after the jth element of the string {3, 1}^n, for each j = 0, 1, . . . , 2n.
From [1] we recall the evaluation Z({2}^m) = π^{2m}/(2m + 1)!. (22)
Let S_{2n+1} denote the group of permutations on the set of indices
{0, 1, . . . , 2n}. For σ ∈ S_{2n+1} we define a group action on C_{2n+1}(m - 2n)
by σ · (m_0, m_1, . . . , m_{2n}) := (m_{σ(0)}, m_{σ(1)}, . . . , m_{σ(2n)}).
Let C(m_0, m_1, . . . , m_{2n})
denote the sum of the 2n + 1 Z-values in which the arguments are permuted
cyclically. By construction, C is invariant under any cyclic permutation of
its argument string. The cyclic insertion conjecture [3, Conjecture 1] asserts
that in fact, C depends only on the number and sum of its arguments. More
specifically, it is conjectured that
Conjecture 5.1. For any non-negative integers m_0, m_1, . . . , m_{2n}, we
have C(m_0, m_1, . . . , m_{2n}) = π^{2m}/(2m + 1)!, where m = 2n + Σ_{j=0}^{2n} m_j.
An equivalent generating function formulation of Conjecture 5.1 follows.
Conjecture 5.2. Let x_0, x_1, . . . be a sequence of commuting indeterminates.
Then
y 2n
0-j-2n
y 2n
To see the equivalence of Conjectures 5.1 and 5.2, observe that by the
multinomial theorem,X
y 2n
y 2n
m-2n
y 2n
m-2n
Y
y 2n
m-2n
Now compare coefficients. Although Conjecture 5.1 remains unproved, it
is nevertheless possible to reduce the problem to that of establishing the
invariance of C(~s) for ~s 2 C 2n+1 (m \Gamma 2n). More specifically, we have the
following non-trivial result.
Theorem 5.1. For all non-negative integers m and n with m - 2n,
Example 4. If m = 2n, Theorem 5.1 states that
which is equivalent to the Broadhurst-Zagier formula (2) (Theorem 1 of [3]).
Example 5. If m = 2n + 1, Theorem 5.1 states that
which is Theorem 2 of [3].
For m - 2n >= 2, Theorem 5.1 gives new results, although no additional
instances of Conjecture 5.1 are settled. For the record, we note the following
restatement of Theorem 5.1 in terms of Z-functions:
Corollary 5.1 (Equivalent to Theorem 5.1). Let T_{m,n} be as in Definition
4.1. Then, for all non-negative
integers m and n with m >= 2n,
Proof of Theorem 5.1. In view of the equivalent reformulation (24)
and the well-known evaluation (22) for Z({2}^m), it suffices to prove that, with
T_{m,n} as in Definition 4.1 and with a = dt/t, b = dt/(1 - t),
Let
J(z) := Σ_{k=0}^∞ z^{2k} ζ({2}^k).
Then from [1] we have
J(z cos φ) J(z sin φ) = [sinh(πz cos φ)/(πz cos φ)] · [sinh(πz sin φ)/(πz sin φ)]. (25)
On the other hand, putting x = cos^2 φ and y = sin^2 φ, Theorem 4.1
yields
J(z cos φ) J(z sin φ)
(z cos φ)^{2k}
(z sin φ)^{2j}
(z cos φ)^{2n} (z sin φ)^{2m-2n}
∫_0^1 (ab)^n ш (ab)^{m-n}
∫_0^1 T_{m,n}
z^{2m}
Equating coefficients of z^{2m} (sin 2φ)^{2n} in (25) and (26) completes the proof.
ACKNOWLEDGMENT
Thanks are due to the referee whose comments helped improve the exposition.
--R
"Evaluations of k-fold Euler/Zagier Sums: A Compendium of Results for Arbitrary k,"
"Special Values of Multiple Polylogarithms,"
"Combinatorial Aspects of Multiple Zeta Values,"
"Resolution of Some Open Problems Concerning Multiple Zeta Evaluations of Arbitrary Depth,"
"Iterated Integrals and Exponential Homomorphisms,"
"Integration of Paths, Geometric Invariants and a Generalized Baker-Hausdorff Formula,"
"Algebras of Iterated Path Integrals and Fundamental Groups,"
"Mathematical Aspects of the Quantum Theory of Fields, V,"
"Multiple Polylogarithms, Cyclotomy and Modular Com- plexes,"
"Algebraic Structures on the Set of Multiple Zeta Values,"
"Relations of Multiple Zeta Values and their Algebraic Expression,"
"A Theorem of Friedrichs,"
"On the Exponential Solution of Differential Equations for a Linear Operation,"
"Lyndon words, polylogarithms and the Riemann i function,"
"A Generalization of the Duality and Sum Formulas on the Multiple Zeta Values,"
"A Natural Ring Basis for the Shuffle Algebra and an Application to Group Schemes,"
"Lie Elements and an Algebra Associated with Shuffles,"
"Values of Zeta Functions and their Applications,"
--TR
Lyndon words, polylogarithms and the Riemann ζ function
--CTR
Ae Ja Yee, A New Shuffle Convolution for Multiple Zeta Values, Journal of Algebraic Combinatorics: An International Journal, v.21 n.1, p.55-69, January 2005
Kurusch Ebrahimi-Fard , Li Guo, Mixable shuffles, quasi-shuffles and Hopf algebras, Journal of Algebraic Combinatorics: An International Journal, v.24 n.1, p.83-101, August 2006
Douglas Bowman , David M. Bradley , Ji Hoon Ryoo, Some multi-set inclusions associated with shuffle convolutions and multiple zeta values, European Journal of Combinatorics, v.24 n.1, p.121-127, 1 January | lie algebra;iterated integral;multiple zeta value;shuffle |
586803 | Hydrodynamical methods for analyzing longest increasing subsequences. | Let Ln be the length of the longest increasing subsequence of a random permutation of the numbers 1 .... , n, for the uniform distribution on the set of permutations. We discuss the "hydrodynamical approach" to the analysis of the limit behavior, which probably started with Hammersley (Proceedings of the 6th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1 (1972) 345-394) and was subsequently further developed by several authors. We also give two proofs of an exact (non-asymptotic) result, announced in Rains (preprint, 2000). | Introduction
In recent years quite spectacular advances have been made with respect to the distribution theory
of longest increasing subsequences L n of a random permutation of the numbers 1, . , n, for the
uniform distribution on the set of permutations. Recent reviews of this work are given in Aldous
and Diaconis (1999) and Deift (2000).
However, rather than trying to give yet another review of this recent work, I will try to give a
description of a different approach to the theory of the longest increasing subsequences, which in
Aldous and Diaconis (1995) is called "hydrodynamical".
As an example of a longest increasing subsequence we consider the permutation (7, 2, 8, 1, 3, 4, 10, 6, 9, 5),
also used as an example in Aldous and Diaconis (1999). A longest increasing subsequence is:
(1, 3, 4, 6, 9).
and another longest increasing subsequence is:
(2, 3, 4, 6, 9).
MSC 2000 subject classifications. Primary: 60C05,60K35, secondary 60F05.
Key words and phrases. Longest increasing subsequence, Ulam's problem, Hammersley's process.
For this example we get L_n = 5.
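The length of a longest increasing subsequence can be computed in O(n log n) time by patience sorting, a standard algorithm that is not part of this paper; the sketch below and its function name are ours. The permutation is the example from Aldous and Diaconis (1999), whose two longest increasing subsequences of length 5 are displayed above.

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of a longest (strictly) increasing subsequence.

    piles[k] holds the smallest possible last element of an increasing
    subsequence of length k + 1; each new element either starts a new
    pile or replaces the first pile tail that is >= it.
    """
    piles = []
    for x in seq:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

# Example permutation from Aldous and Diaconis (1999): both
# (1, 3, 4, 6, 9) and (2, 3, 4, 6, 9) are longest increasing subsequences.
print(lis_length([7, 2, 8, 1, 3, 4, 10, 6, 9, 5]))  # -> 5
```

Only the number of piles matters here; the same replace-or-append update reappears later as the transition rule of Hammersley's particle process.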
It was proved in Hammersley (1972) that, as n → ∞, L_n/√n →_p c, where
→_p denotes convergence in probability, and
lim_{n→∞} EL_n/√n = c,
for some positive constant c, where π/2 <= c <= e. Subsequently Kingman (1973) showed that 1.59 < c < 2.49,
and later work by Logan and Shepp (1977) and Vershik and Kerov (1977) (expanded more fully
in Vershik and Kerov (1981) and Vershik and Kerov (1985)) showed that actually c = 2. The
problem of proving that the limit exists and finding the value of c has been called "Ulam's problem'',
see, e.g., Deift (2000), p. 633.
In proving that c = 2, Aldous and Diaconis (1995) replace the hard combinatorial
work in Logan and Shepp (1977) and Vershik and Kerov (1977), which uses Young tableaux, by a
"hydrodynamical argument", building on certain ideas in Hammersley (1972), and it is this
approach I will focus on in the present paper.
I will start by discussing Hammersley (1972) in section 2. Subsequently I will discuss the
methods used in Aldous and Diaconis (1995) and Seppäläinen (1996). Slightly as a side-track,
I will discuss an exact (non-asymptotic) result announced in Rains (2000), for which I have not
seen a proof up till now, but for which I will provide a hydrodynamical proof below. Finally, I will
discuss the rather simple (and also hydrodynamical) proof of c = 2.
2 Hammersley's approach
The Berkeley symposium paper Hammersley (1972) is remarkable in several ways. The opening
sentences are: "Graduate students sometimes ask, or fail to ask: "How does one do research in
mathematical statistics?" It is a reasonable question because the fruits of research, lectures and
published papers bear little witness to the ways and means of their germination and ripening".
This beginning sets the tone for the rest of the paper, where Hammersley describes vividly the
germination and ripening of his own research on the subject.
In section 3, called "How well known is a well-known theorem?", he describes the difficulties
encountered in finding a reference for a proof of the following theorem:
Theorem 2.1 Any real sequence of at least mn+1 terms contains either an ascending subsequence
of m + 1 terms or a descending subsequence of n + 1 terms.
This result, due to Erdős and Szekeres (1935), is called the "pigeonhole principle" in
Hammersley (1972), a term also used by other authors. A nice description of this problem and
other related problems is given in Aigner and Ziegler (1998). The relevance of the pigeonhole
principle for the behavior of longest increasing subsequences is that one can immediately conclude
from it that L_n L'_n >= n, where L'_n is the length of a longest decreasing subsequence. (2.1)
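The resulting inequality L_n · L'_n >= n holds deterministically for every permutation, and can be exercised on random permutations; the code and helper names below are ours.

```python
import random
from bisect import bisect_left

def lis_length(seq):
    # patience-sorting computation of the longest increasing subsequence
    piles = []
    for x in seq:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

def lds_length(seq):
    # longest decreasing subsequence = LIS of the negated sequence
    return lis_length([-x for x in seq])

random.seed(1)
n = 200
for _ in range(100):
    perm = random.sample(range(n), n)
    # consequence of the Erdos-Szekeres pigeonhole principle
    assert lis_length(perm) * lds_length(perm) >= n
print("ok")
```

The assertion can never fail: a violation would contradict Theorem 2.1.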
As noted in Deift (2000), it is probable that Ulam, because of a long and enduring friendship
with Erdős, got interested in determining the asymptotic value of EL_n and for this reason started
(around 1961) a simulation study for n in the range 1 # being very large at the
time, quoting Deift (2000)), from which he found
leading him to the conjecture that
lim_{n→∞} EL_n/√n (2.2)
exists.
Relation (2.1) then shows:
c >= 1/2,
if we can deal with the first part of Ulam's problem (existence of the limit (2.2)).
The first part of Ulam's problem is in fact solved in Hammersley (1972). It is Theorem 2.2
below (Theorem 4 on p. 352 of Hammersley (1972)):
Theorem 2.2 Let (X_1, X_2, . . .) be an i.i.d. sequence of real-valued continuously distributed
random variables, and let, respectively, L n and L # n be the lengths of a longest increasing and a
longest decreasing subsequence of (X 1 , . , X n ). Then we have:
L_n/√n →_p c and L'_n/√n →_p c,
for some positive constant c, where →_p
denotes convergence in probability. We also have
convergence in the pth absolute mean of L_n/√n and L'_n/√n to c, for each p >= 1.
Note that, for a sample of n continuously distributed random variables, the vector of ranks
(R 1 , . , R n ) of the random variables X 1 , . , X n (for example ordered according to increasing
magnitudes) has a uniform distribution over all permutations of (1, . , n). Because of the
continuous distribution we may disregard the possibility of equal observations ("ties"), since this
occurs with probability zero. So the random variable L n , as defined in Theorem 2.2, indeed
has exactly the same distribution as the length of a longest increasing subsequence of a random
permutation of the numbers 1, . , n, for the uniform distribution on the set of permutations.
The key idea in Hammersley (1972) is to introduce a Poisson process of intensity 1 in the
first quadrant of the plane and to consider longest North-East paths through points of the Poisson
point process in squares [r, s]^2, 0 <= r <= s. A North-East path in the square [r, s]^2 is a
sequence of points (X_1, Y_1), . . . , (X_k, Y_k) of the Poisson process such that X_1 < · · · < X_k
and Y_1 < · · · < Y_k. We call k the length of the path.
Note that we can disregard the possibility that X_i = X_j or Y_i = Y_j for some i ≠ j, since this
happens with probability zero. A longest North-East path is a North-East path for which k is
largest. Conditional on the number of points of the Poisson process in [r, s] 2 , say n, the length of
the longest North-East path has the same distribution as the longest increasing subsequence of a
random permutation of 1, . . . , n. This follows from the fact that, if (U_1, V_1), . . . , (U_n, V_n) are the
points of the Poisson process belonging to [r, s]^2, where we condition on the event that the number
of points of the Poisson process in [r, s]^2 is equal to n, and if V_(1) < · · · < V_(n) are the order statistics
of the second coordinates, then the corresponding first coordinates U_{k_1}, . . . , U_{k_n} behave as a sample
from a Uniform distribution on [r, s]. A longest increasing North-East path will either consist of
just one point (provided that the rectangle contains a point of the Poisson process, otherwise the
length will be zero) or be a sequence of the form:
(U_{k_{j_1}}, V_{(j_1)}), . . . , (U_{k_{j_m}}, V_{(j_m)}), with U_{k_{j_1}} < · · · < U_{k_{j_m}} and j_1 < · · · < j_m.
So the length of a longest North-East path in [r, s]^2, conditionally
on the number of points in [r, s] 2 being n, is distributed as the longest increasing subsequence of the
sequence random variables (U 1 , . , U n ), and hence, by the remarks following Theorem 2.2 above,
distributed as the length of a longest increasing subsequence of a permutation of the numbers
1, . . . , n. If n = 0, the length is zero and everything is trivial of course.
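This reduction is easy to exercise numerically: sort the points of the square by their first coordinate and take the longest increasing subsequence of the second coordinates. The sketch below is ours; it uses a fixed number n of uniform points instead of a Poisson number, which is harmless by the conditioning argument just given, and it also illustrates that L_n/√n approaches c = 2, slowly and from below.

```python
import math
import random
from bisect import bisect_left

def lis_length(seq):
    piles = []
    for v in seq:
        i = bisect_left(piles, v)
        if i == len(piles):
            piles.append(v)
        else:
            piles[i] = v
    return len(piles)

def longest_ne_path_length(points):
    """Longest strictly North-East path: sort by x, take the LIS of the y's."""
    return lis_length([y for _, y in sorted(points)])

# Conditionally on the number of points, a Poisson cloud in a square is a
# uniform sample, so we work with a fixed n for simplicity.
rng = random.Random(2)
n, trials = 2500, 20
total = 0
for _ in range(trials):
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    total += longest_ne_path_length(pts)
ratio = total / trials / math.sqrt(n)
print(ratio)  # a little below 2 for this n; tends to c = 2 as n grows
```

The average ratio is still noticeably below 2 at n = 2500 because the correction term vanishes slowly.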
Following Hammersley (1972), we denote the length of a longest North-East path in the square
[r, s]^2 by W_{r,s}, and for the collection of random variables {W_{r,s} : 0 <= r <= s} we obviously
have the so-called superadditivity property:
W_{r,t} >= W_{r,s} + W_{s,t}, r <= s <= t,
meaning that -W r,t has the subadditivity property:
Furthermore, we clearly have, for each r > 0, that (W_{nr,(n+1)r})_{n>=0} is an i.i.d. sequence
of random variables, since W_{nr,(n+1)r} is a function of the Poisson point process restricted to the
square [nr, (n+1)r]^2, and since the restrictions of the Poisson point process to the squares [nr, (n+1)r]^2
are i.i.d. For the same type of reason, the distribution of W_{r,r+k} does not depend on r ∈ (0, ∞).
Finally, max{0, -W_{0,n}} = 0 for each n, and it will be shown below (see (2.11)) that EW_{0,n} <= Kn,
for a finite constant K > 0. So we are in a position to apply Liggett's version of Kingman's
subadditive ergodic theorem, yielding
W_{0,r}/r → c, a.s., r → ∞, (2.4)
and also
EW_{0,r}/r → c, r → ∞. (2.5)
Hammersley next defines t(n) as the smallest real number such that [0, t(n)] 2 contains exactly
n points of the Poisson process. Then it is clear from the properties of the Poisson point process
in the plane that
t(n)/√n → 1, a.s.,
and hence that
W_{0,t(n)}/√n → c, a.s.,
and
EW_{0,t(n)}/√n → c, n → ∞,
where the constant c is the same in (2.4) and (2.5). But since W_{0,t(n)} has the same distribution as
L_n, we obtain from (2.4) that
L_n/√n →_p c, (2.6)
where →_p denotes convergence in probability, and, from (2.5),
EL_n/√n → c, n → ∞. (2.7)
Remark. Note that we went from the almost sure relation (2.4) to the convergence in probability
(2.6) for the original longest increasing subsequence, based on a random permutation of the numbers
1, . , n. It is possible, however, also to deduce the almost sure convergence of L n / # n from (2.4),
using an extra tool, as was noticed by H. Kesten in his discussion of Kingman (1973) (see p. 903
of Kingman (1973)). I want to thank a referee for giving the latter reference, setting "the record
straight" for this issue that was still bothering Hammersley in Hammersley (1972).
From (2.1) we now immediately obtain
c >= 1/2, (2.8)
(see also the remark below (2.2)), but we still have to prove (2.3). This problem is dealt with by
Theorem 2.3 below (Theorem 6 on p. 355 of Hammersley (1972)):
Theorem 2.3 Let, for x ∈ IR, ⌈x⌉ be the smallest integer >= x; let, for a fixed t >= 0 and each
positive integer N, P_N denote the uniform distribution on the set of
permutations of the sequence (1, . . . , N) with corresponding expectation E_N; let ℓ_N be
the length of a
longest monotone (decreasing or increasing) subsequence of a permutation of the numbers 1, . . . , N;
and let m_{n,N} be the number of monotone subsequences of length n under the probability measure P_N.
Then we have
e -2t
Proof: This is proved by an application of Stirling's formula (for details of this computation,
which is not difficult, see Hammersley (1972), pp. 355-356).
Elementary calculations show that Theorem 2.3 implies:
for a constant K > 0. Returning to the situation of longest increasing North-East paths in the
plane, we obtain from (2.10)
which proves (2.3). Moreover, we have:
lim sup_{n→∞} EL_n/√n <= e,
using Theorem 2.3.
Combining this result with (2.6) and (2.7), we obtain:
c <= e,
and, in particular, it is proved that c ∈ (0, ∞), since also c >= 1/2, by (2.8).
So the first part of Ulam's problem is now solved, and we know that the constant c in the limit
(2.2) is a number between 1/2 and e. As noted above, Hammersley improved the lower bound to
π/2 and in Kingman (1973) the bounds were tightened further to 1.59 < c < 2.49.
However, Hammersley was in fact quite convinced that c = 2; see p. 372 of Hammersley (1972),
where he says: "However, I should be very surprised if (12.12) is false", (12.12) being the statement
c = 2 (in the notation of his paper). Hammersley (1972) contains three "attacks on c" of which we will discuss
the third attack, since the ideas of the third attack sparked other research, like the development
in Aldous and Diaconis (1995). Somewhat amusingly, his second attack yielded c ≈ 1.961, but
apparently he did not believe too much in that attack, in view of the remark ("I should be very
surprised .") quoted above.
"Hammersley's process'' is introduced in section 9 of Hammersley (1972). In this section on
Monte Carlo methods he introduces a kind of interacting particle process in discrete time. The
particles of this process all live on the interval [0, 1] and are at "time n" a subset of a Uniform(0,1)
sample X_1, . . . , X_n. We now give a description of this process.
Let X_1, X_2, . . . be an i.i.d. sequence of Uniform(0,1) random variables, and let, for each n,
X_n be defined by X_n = (X_1, . . . , X_n).
Moreover let, for x ∈ (0, 1), X^x_n be defined as the subsequence of X_n obtained by omitting all
X_i > x. As before, ℓ(X_n) is the length of the longest increasing subsequence of X_n. In a similar
way, ℓ(X^x_n) is the length of the longest increasing subsequence of X^x_n. Hammersley now notes that
x ↦ ℓ(X^x_n) is an integer-valued step function of x, satisfying the recurrence relation
and that, for a simulation study, one only needs to keep the values of the X i 's, where the function
makes a jump. The recurrence relation starts from:
Suppose that the jumps of the function x ↦ ℓ(X^x_n) occur at the points Y_{i,n}, 1 <= i <= I(n),
Y_{1,n} < · · · < Y_{I(n),n}. (2.14)
Then it is clear from (2.12) that the jumps of the function x ↦ ℓ(X^x_{n+1}) occur at the points
Y_{1,n+1} < · · · < Y_{I(n+1),n+1},
which are obtained from the points (2.14) by adding X_{n+1} to them, if X_{n+1} > Y_{I(n),n}, or otherwise
by replacing the smallest value Y_{i,n} > X_{n+1} by X_{n+1}. Note that I(n) = ℓ(X_n);
we call the particle process n ↦ Y_n = (Y_{1,n}, . . . , Y_{I(n),n}) Hammersley's discrete time interacting particle process.
In Hammersley (1972) the following simple example is given, clarifying the way in which this
process evolves. Let
Then the sequence (Y 1 , . , Y 6 ) is represented by the sequence of states
So either a new point is added (which happens, e.g., at the second step in the example above), or
the "incoming point" replaces an existing point that is immediately to the right of this incoming
point (this happens, e.g., at the third step in the example above).
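The replace-or-append rule just described is exactly the update step of patience sorting, so after n steps the number of particles I(n) equals the length of the longest increasing subsequence of the sample. A minimal simulation (our code and names):

```python
import random
from bisect import bisect_left

def lis_length(seq):
    piles = []
    for v in seq:
        i = bisect_left(piles, v)
        if i == len(piles):
            piles.append(v)
        else:
            piles[i] = v
    return len(piles)

def hammersley_step(particles, x):
    """One step of Hammersley's discrete time particle process: x replaces
    the smallest particle > x, or is appended when it lies to the right of
    all current particles (the list stays sorted)."""
    i = bisect_left(particles, x)
    if i == len(particles):
        particles.append(x)
    else:
        particles[i] = x

rng = random.Random(3)
xs = [rng.random() for _ in range(500)]
particles = []
for x in xs:
    hammersley_step(particles, x)
# I(n), the number of particles, equals the LIS length of X_1, ..., X_n.
assert len(particles) == lis_length(xs)
print(len(particles))
```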
We note in passing that the first stage of Viennot's geometric construction of the Robinson-Schensted
correspondence, given in Viennot (1976), is in fact Hammersley's discrete time
interacting particle process on a lattice. A nice exposition of this construction is given in Sagan
(1991).
We can now discuss the "third attack on c" (section 12 in Hammersley (1972)). The argument
at the bottom of p. 372 and top of p. 373 is close to the equation (9) of Aldous and Diaconis (1995),
(as pointed out to me by Aldous (2000)), and is the hydrodynamical argument that inspired the
approach in Aldous and Diaconis (1995). The argument (called "treacherous" by Hammersley)
runs as follows.
Let X_1, X_2, . . . be i.i.d. Uniform(0,1) random variables, and let Y_{1,n} < · · · < Y_{I(n),n} be the
points of Hammersley's discrete time interacting particle process at time n, associated with the
sample X_1, . . . , X_n described above. Moreover, let ℓ(X_n) be the length of a longest increasing
subsequence of X_1, . . . , X_n. Then we have:
since ℓ(X_{n+1}) >= ℓ(X_n), we have (quoting Hammersley (1972), bottom of p. 372) that "the left
side of (2.16) is the result of differencing c√n (plus an error term) with respect to n, and ought to be
about (1/2) c/√n if the error term is smooth". Continuing quoting Hammersley (1972) (top of p. 373) we
get that "the right side of (2.16) is the displacement in x near x = 1 just sufficient to ensure unit
increase in ℓ(X^x_n), and should be the reciprocal of (∂/∂x) ℓ(X^x_n) at x = 1, namely 2/(c√n)". The last
statement is referring to relation (12.1) on p. 370 of Hammersley (1972), which is:
ℓ(X^x_n) ≈ c√(nx). (2.17)
There is of course some di#culty in interpreting the equality sign in (2.17), does it mean "in
probability", "almost surely" or is an asymptotic relation for expectations meant? Let us give
(2.17) the latter interpretation. Then we would get from (2.16), following Hammersley's line of
argument:
(1/2) c/√n = 2/(c√n). (2.18)
This would yield c = 2. Following Hammersley's daring practice (in section 12 of Hammersley
(1972)) of differentiating w.r.t. a discrete parameter (in this case n), we can rewrite (2.18) in the
form
(∂/∂n) ℓ(X^x_n) = 1 / {(∂/∂x) ℓ(X^x_n)}, (2.19)
or
{(∂/∂n) ℓ(X^x_n)} {(∂/∂x) ℓ(X^x_n)} = 1. (2.20)
We shall return to equation (2.20) in the next sections, where it will be seen that it can be given
di#erent interpretations, corresponding to the di#erent approaches in Aldous and Diaconis (1995)
and Groeneboom (2000) (and perhaps more implicitly in Seppäläinen (1996)).
3 The Hammersley-Aldous-Diaconis interacting particle process
As I see it, one major step forward, made in Aldous and Diaconis (1995), is to turn Hammersley's
discrete time interacting particle process, as described in section 2, into a continuous time process.
One of the difficulties in interpreting relation (2.20) is the differentiation w.r.t. the discrete time
parameter n, and this difficulty would be removed if we can differentiate with respect to a continuous
time parameter (but other difficulties remain!).
The following gives an intuitive description of the extension of Hammersley's process on IR_+
to a continuous time process, developing according to the rules specified in Aldous and Diaconis
(1995). Start with a Poisson point process of intensity 1 on IR_+^2. Now shift the positive x-axis
vertically through (a realization of) this point process, and, each time a point is caught, shift to
this point the previously caught point that is immediately to the right.
Alternatively, imagine, for each x > 0, an interval [0, x], moving vertically through the Poisson
point process. If this interval catches a point that is to the right of the points caught before, a
new extra point is created in [0, x], otherwise we have a shift to this point of the previously caught
point that is immediately to the right and belongs to [0, x] (note that this mechanism is exactly the
same as the mechanism by which the points Y i,n of Hammersley's discrete time process are created
in section 2). The number of points, resulting from this "catch and shift" procedure at time y on
the interval [0, x], is denoted in Aldous and Diaconis (1995) by N(x, y).
So the process evolves in time according to "Rule 1" in Aldous and Diaconis (1995), which is
repeated here for ease of reference:
Rule 1 At times of a Poisson (rate x) process in time, a point U is chosen uniformly on [0, x],
independent of the past, and the particle nearest to the right of U is moved to U , with a new particle
created at U if no such particle exists in [0, x].
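Rule 1 is straightforward to simulate: event times arrive at rate x (exponential inter-arrival gaps), and each event applies the same replace-or-append update as in the discrete process. In the sketch below (our code and names), the final particle count equals the length of a longest North-East path through the simulated space-time points, i.e. the longest increasing subsequence of the inserted locations in their order of arrival.

```python
import random
from bisect import bisect_left

def rule1_simulation(x, t, rng):
    """Simulate Rule 1 on [0, x] up to time t.

    Events occur at rate x in time; at each event a uniform U on [0, x]
    is chosen, the particle nearest to the right of U is moved to U, and
    a new particle is created when no particle lies to the right of U.
    Returns the particle configuration and the inserted U's in order.
    """
    particles, inserted, clock = [], [], 0.0
    while True:
        clock += rng.expovariate(x)       # exponential inter-arrival time
        if clock > t:
            return particles, inserted
        u = rng.uniform(0.0, x)
        inserted.append(u)
        i = bisect_left(particles, u)
        if i == len(particles):
            particles.append(u)           # created: nothing to the right of u
        else:
            particles[i] = u              # moved: nearest particle to the right

rng = random.Random(4)
particles, inserted = rule1_simulation(10.0, 10.0, rng)
# The particle count N(x, y) equals the length of a longest North-East
# path through the space-time Poisson cloud.
print(len(particles))
```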
We shall call this process the Hammersley-Aldous-Diaconis interacting particle process. For a
picture of the space-time curves of this process, see Figure 1; an "α-point" is an added point and
a "β-point" is a deleted point for this continuous time process (time is running along the vertical
axis).
In Hammersley's ``third attack on c'', one of the crucial assumptions he was not able to prove
was the assumption that the distribution of the points Y i,n was locally homogeneous, so, actually,
is locally behaving as a homogeneous Poisson process; he calls this "assumption #" (p. 371 of
Hammersley (1972)). This key assumption is in fact what is proved in Aldous and Diaconis
(1995) for the Hammersley-Aldous-Diaconis interacting particle process in Theorem 5 on p. 204
(which is the central result of their paper):
Figure 1: Space-time curves of the Hammersley-Aldous-Diaconis process, contained in [0, x] × [0, y].
Theorem 3.1 (a) c = 2.
(b) For each fixed a > 0, the random particle configuration with counting process
y ↦ N(at + y, t) - N(at, t), y >= 0,
converges in distribution, as t → ∞, to a homogeneous Poisson process with intensity a^{-1}.
After stating this theorem, they give the following heuristic argument. Suppose the spatial
process around position x at time t approximates a Poisson process of some rate λ(x, t). Then
clearly
(∂/∂t) Ew(x, t) = ED_{x,t},
where D_{x,t} is the distance from x to the nearest particle to the left of x. For a Poisson process,
ED_{x,t} would be 1/(spatial rate), so
ED_{x,t} ≈ 1/λ(x, t) ≈ {(∂/∂x) Ew(x, t)}^{-1}.
In other words, w(x, t) approximately satisfies the partial differential equation
(∂w/∂x) (∂w/∂t) = 1, (3.1)
whose solution is w(x, t) = 2√(tx). Note that (3.1) is very close to (2.20) above, but that
we got rid of the differentiation w.r.t. a discrete time parameter!
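The claimed solution can be checked by finite differences: w(x, t) = 2√(tx) has ∂w/∂x = √(t/x) and ∂w/∂t = √(x/t), so the product of the partial derivatives is identically 1, and w(t, t)/t = 2 recovers c = 2. A small numerical sketch (ours):

```python
import math

def w(x, t):
    # claimed solution of the heuristic PDE (dw/dx)(dw/dt) = 1
    return 2.0 * math.sqrt(t * x)

h = 1e-6
x, t = 3.0, 5.0
wx = (w(x + h, t) - w(x - h, t)) / (2.0 * h)   # central difference, ~ sqrt(t/x)
wt = (w(x, t + h) - w(x, t - h)) / (2.0 * h)   # central difference, ~ sqrt(x/t)
print(wx * wt)  # ~ 1
```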
Appealing as the above argument may seem, a closer examination of Aldous and Diaconis
(1995) shows that their proof does not really proceed along the lines of this heuristic argument. In
fact, they prove separately that c <= 2 and that c >= 2, by arguments that do not seem to use the
differential equation (3.1). The proofs of c <= 2 and of c >= 2 are based on coupling arguments,
where the Hammersley-Aldous-Diaconis process is coupled to a stationary version of this process,
starting with a non-empty configuration, and living on IR instead of IR_+. This process evolves
according to the following rule (p. 205 of Aldous and Diaconis (1995)).
Rule 2 The restriction of the process to the interval [x_1, x_2] evolves as follows:
(i) There is some arbitrary set of times at which the leftmost point (if any) in the interval is
removed.
(ii) At times of a rate x_2 - x_1 Poisson process in time, a point U is chosen uniformly on [x_1, x_2],
independent of the past, and the particle nearest to the right of U is moved to U, with a new
particle created at U if no such particle exists in [x_1, x_2].
To avoid a possible misunderstanding, Rule 2 above is not the same as "Rule 2" in Aldous and
Diaconis (1995). The existence of such a process is ensured by the following lemma (Lemma 6 in
Aldous and Diaconis (1995)):
Lemma 3.1 Suppose an initial configuration N(·, 0) of particles in IR satisfies
lim inf_{x→∞} N([-x, 0], 0)/x > 0, a.s.
Let P be a Poisson point process of intensity 1 in the plane, and let L*((z, 0), (x, t)) be the maximal
number of Poisson points on a North-East path from (z, 0) to (x, t), in the sense that it is piecewise
linear with vertices (z, 0), (z_1, t_1), . . . , (z_k, t_k), (x, t), with z < z_1 < · · · < z_k <= x and 0 < t_1 < · · · < t_k <=
t, for points (z_i, t_i) belonging to a realization of P, and such that k is largest.
Then the process, defined by
N(x, t) = sup_{-∞<z<=x} { N(z, 0) + L*((z, 0), (x, t)) },
evolves according to Rule 2.
A process of this type is called "Hammersley's process on IR'' in Aldous and Diaconis (1995),
but we will call this the Hammersley-Aldous-Diaconis process on IR. As an example of such a
process, consider an initial configuration corresponding to a Poisson point process P_λ of intensity
λ > 0. The initial configuration will be invariant for the process; that is: N(·, t) will have the same
distribution as P_λ, for each t > 0. The following key lemma (Lemma 7 in Aldous and Diaconis
(1995)) characterizes the invariant distributions of the process on IR.
Lemma 3.2 A finite intensity distribution is invariant and translation-invariant for the
Hammersley-Aldous-Diaconis process on IR if and only if it is a mixture of Poisson point processes P_λ, λ > 0.
Other key properties are given in the next lemma (part (i) is Lemma 8, part (ii) and (iii) are
Lemma 3, and part (iv) is Lemma 11 in Aldous and Diaconis (1995)).
Lemma 3.3 (i) Let N be a Hammersley-Aldous-Diaconis process on IR, with the invariant
distribution P_λ, and let N̄(x, t) be defined by:
N̄(x, t) = number of particles of {N(s, ·) : s ∈ (t, ∞)} which exit during the time interval [0, x].
Then N̄ is a Hammersley-Aldous-Diaconis process on IR, with invariant
distribution P_{1/λ}.
(ii) (Space-time interchange property) Let L*(x, y) := L*((0, 0), (x, y)), x, y > 0.
Then L*(x, y) =_d L*(y, x).
(iii) (Scaling property) For all x, y, k > 0,
L*(kx, y/k) =_d L*(x, y).
(iv) For fixed x, y ∈ IR we have:
lim
Parts (ii) and (iii) are immediately clear from the fact that the distribution of L*((x_1, y_1), (x_2, y_2))
only depends on the area of the rectangle [x_1, x_2] × [y_1, y_2], since the expected
number of points of the Poisson point process only depends on the area of the rectangle, and since
the shape of the rectangle has no influence on the distribution of L*((x_1, y_1), (x_2, y_2)). The proofs
of (i) and (iv) rely on somewhat involved coupling arguments, and we refer for that to Aldous and
Diaconis (1995).
The argument for c >= 2 now runs as follows. The processes t ↦ N(t + ·, t) - N(t, t) and its
space-time interchanged version
have subsequential limits, which have to be translation-invariant and invariant for the Hammersley-Aldous-Diaconis
process. By part (ii) of Lemma 3.3 (the space-time interchange property) these
limit processes must have the same distribution. By Lemma 3.2 and the invariance properties
the limit process must have, these processes must be mixtures of Poisson processes. Suppose the
subsequential limit of the process t ↦ N(t + ·, t) - N(t, t) is such a mixture of Poisson processes with
random intensity λ. Then, according to part (i) of Lemma 3.3, the subsequential limit of the
interchanged process must be a mixture of Poisson processes with random intensity λ^{-1}. We have
and, by Jensen's inequality, Eλ^{-1} >= 1/Eλ, implying
But, since the limit processes must have the same distribution, we also have
hence
implying Eλ = 1. Using this fact, in combination with Fatou's lemma and part (iv) of Lemma 3.3,
we get
along the sequence for which we have the subsequential limit. Hence c >= 2.
Proving c <= 2 is easier; it is proved by a rather straightforward coupling argument in Aldous
and Diaconis (1995) and it also follows from the result in section 4 below (see the last paragraph
of section 4). So the conclusion is that c = 2. Moreover, since λ =_d λ^{-1}, and since the covariance
of λ and λ^{-1} is equal to zero, we can only have λ = 1 almost surely. This proves part (b) of
Theorem 3.1 for the case a = 1, and the case a ≠ 1 then follows from part (iii) of Lemma 3.3.
4 A non-asymptotic result for longest North-East paths in the
unit square
The purpose of the present section is to give a proof of the following result.
Theorem 4.1 Let P_1 be a Poisson process of intensity λ_1 on the lower edge of the unit square
[0, 1]^2, P_2 a Poisson process of intensity λ_2 on the left edge of the unit square, and P a Poisson
process of intensity λ_1 λ_2 in the interior of the unit square. Then the expected length of a longest
North-East path from (0, 0) to (1, 1), where horizontal or vertical parts are allowed at the start of
the path, is equal to λ_1 + λ_2.
Here, as before, the length of the path is defined as the number of points of the point process,
"picked up" along the path. However, in the present situation it is allowed to pick up points from
the boundary, and, moreover, there are Poisson point processes on both the left and the lower
boundary. The exact result about the expectation of the length of the longest North-East path
(for the case λ_1 = λ_2) was announced in Rains (2000), who refers to a manuscript in preparation of
Prähofer and Spohn (which I have not seen).
The idea of the proof is to show that if we start with the (possibly empty) configuration of points
on the lower edge, and let the point process develop according to the rules of the Hammersley-Aldous-Diaconis
process, where we let the leftmost point "escape" at time t, if the left edge of the unit
square contains a point of the Poisson process of intensity λ_2 at (0, t), the process will be stationary.
This means that the expected number of points of the process will be equal to λ_1 at each time t, so
in particular at time t = 1. Since the expected number of points on the left edge is λ_2, the expected
number of space-time curves of the Hammersley-Aldous-Diaconis process (with "left escapes") will
be λ_1 + λ_2. This implies the result, since the length of a longest North-East path is equal to the
number of space-time curves (note that such a longest North-East path "picks up" one point from
each space-time curve).
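Theorem 4.1 can be checked by Monte Carlo. The sketch below is ours: bottom-edge points (x, 0) are tilted to (x, εx - 1) and left-edge points (0, y) to (εy - 1, y), so that an initial horizontal (respectively vertical) stretch becomes an ordinary strictly North-East chain, while bottom-edge and left-edge points remain mutually unreachable; the path length is then once more a longest-increasing-subsequence computation.

```python
import random
from bisect import bisect_left

def lis_length(seq):
    piles = []
    for v in seq:
        i = bisect_left(piles, v)
        if i == len(piles):
            piles.append(v)
        else:
            piles[i] = v
    return len(piles)

def poisson_points(rate, rng):
    """Points of a rate-`rate` Poisson process on (0, 1)."""
    pts, s = [], 0.0
    while True:
        s += rng.expovariate(rate)
        if s >= 1.0:
            return pts
        pts.append(s)

def longest_ne_path(lam1, lam2, rng, eps=1e-9):
    pts = [(x, eps * x - 1.0) for x in poisson_points(lam1, rng)]    # lower edge
    pts += [(eps * y - 1.0, y) for y in poisson_points(lam2, rng)]   # left edge
    # interior: Poisson(lam1*lam2) many uniform points in the square
    pts += [(rng.random(), rng.random())
            for _ in range(len(poisson_points(lam1 * lam2, rng)))]
    pts.sort()
    return lis_length([y for _, y in pts])

rng = random.Random(5)
lam1, lam2 = 2.0, 3.0
trials = 4000
mean = sum(longest_ne_path(lam1, lam2, rng) for _ in range(trials)) / trials
print(mean)  # close to lam1 + lam2 = 5
```

The sample mean should be close to λ_1 + λ_2 = 5 for the chosen intensities, in agreement with Theorem 4.1.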
A proof of Theorem 4.1 in the spirit of the methods used in Aldous and Diaconis (1995) and
Seppäläinen (1996) would run as follows. Start with a Poisson process ν_0 of intensity λ_1 on IR
and a Poisson process of intensity λ_1 λ_2 in the upper half plane. Now let the Poisson process ν_0
develop according to the rules of the Hammersley-Aldous-Diaconis process on the whole real line,
and let ν_t denote the process at time t. Then the process {ν_t : t >= 0} will be invariant in the sense
that it will be distributed as a Poisson process of intensity λ_1 at any positive time. The restriction
of ν_t to the interval [0, 1] will be a Poisson process of intensity λ_1 on this interval. Since (by the
"bus stop paradox") the distance of the rightmost point in the interval (-∞, 0)
to 0 will have an exponential distribution with (scale) parameter 1/λ_1, the leftmost points in the
interval [0, 1] will escape at rate λ_2, because an escape will happen if a new point is "caught" in
the interval between the rightmost point of the process in (-∞, 0) and 0, and because the intensity
of the Poisson process in the upper half plane is λ_1 λ_2. So the point process on [0, 1], induced
by the stationary process {ν_t : t >= 0} on IR, develops exactly along the rules of the Hammersley-Aldous-Diaconis
process "with left escapes", described above, and the desired stationarity property
follows.
The proof of Theorem 4.1 below uses an "infinitesimal generator" approach. It is meant to draw
attention to yet another method that could be used in this context and this is the justification of
presenting it here, in spite of the fact that it is much longer than the proof we just gave (but most
of the work is setting up the right notation and introducing the right spaces). Also, conversely,
the proof below can be used to prove the property that a Poisson process on IR is invariant for the
Hammersley-Aldous-Diaconis process; this property is a key to the proofs in Aldous and Diaconis
(1995).
Let ν denote a point process on [0, 1]. That is, ν is a random (Radon) measure on [0, 1], with
realizations of the form
ν(f) = Σ_{i=1}^{n} f(τ_i),   (4.1)
where the τ_i are the points of the point process ν and f is a bounded measurable function
on [0, 1]. If n = 0, we define the right side of (4.1) to be zero.
We can consider the random measure ν as a random sum of Dirac measures:
ν = Σ_{i=1}^{n} δ_{τ_i},
and hence
ν(B) = #{i : τ_i ∈ B}
for Borel sets B ⊂ (0, 1). So ν(B) is just the number of points of the point process ν, contained in
B, where the sum is defined to be zero if n = 0. The realizations of a point process, applied on
Borel subsets of [0, 1], take values in Z+ and belong to a strict subset of the Radon measures on
[0, 1]. We will denote this subset, corresponding to the point processes, by N , and endow it with the
vague topology of measures on [0, 1], see, e.g., Kallenberg (1986), p. 32. For this topology, N is a
(separable) Polish space and a closed subset of the set of Radon measures on [0, 1], see Proposition
15.7.7 and Proposition 15.7.4, pp. 169-170, Kallenberg (1986). Note that, by the compactness of
the interval [0, 1], the vague topology coincides with the weak topology, since all continuous functions
on [0, 1] have their support contained in the compact interval [0, 1]. For this reason we
will denote the topology on N by the weak topology instead of the vague topology in the sequel.
Note that the space N is in fact locally compact for the weak topology.
In our case we have point processes ξ_t, for each time t ≥ 0, of the form (4.1), where N_t
denotes the number of points at time t, and where ξ_t is defined
to be the zero measure on [0, 1] if N_t = 0. Note that if we start with a Poisson process of intensity
λ_1, the initial configuration ξ_0 will with probability one be either the zero measure or be of the form (4.1)
for some n > 0, but since we want to consider the space of bounded continuous functions on N,
it is advantageous to allow configurations where some τ_i's will be equal. We also allow the τ_i's to
take values at 0 or 1. If we have a "stack" of τ_i's at the same location in (0, 1], we move one point
("the point on top") from the stack to the left, if a point immediately to the left of the location
of the stack appears, leaving the other points at the original location. Likewise, if a stack of τ_i's is
located at 0, we remove the point on top of the stack at time t if the Poisson point process on the
left lower boundary has a point at (0, t).
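The local rules just described are easy to simulate. The following sketch is illustrative code (not from the paper, and the function names are my own): particle positions are kept in a sorted list, a "rain" point of the planar Poisson process makes the particle immediately to its right jump onto it (or creates a new rightmost particle), and a point of the left-boundary process removes the leftmost particle.

```python
import bisect

def rain(particles, x):
    """A Poisson point of the planar process falls at horizontal position x:
    the particle immediately to the right of x jumps (left) to x; if there is
    no particle to the right of x, a new particle is created at x."""
    i = bisect.bisect_right(particles, x)  # index of first particle > x
    if i == len(particles):
        particles.append(x)
    else:
        particles[i] = x

def left_escape(particles):
    """A Poisson point on the left boundary arrives: the leftmost particle escapes."""
    if particles:
        particles.pop(0)

# a small deterministic illustration of the rules
state = [0.2, 0.6, 0.9]
rain(state, 0.4)      # 0.6 jumps to 0.4
rain(state, 0.95)     # no particle to the right: new particle at 0.95
left_escape(state)    # 0.2 escapes
print(state)
```

Iterating these two moves at the appropriate Poisson rates gives a simulation of the dynamics "with left escapes" on [0, 1].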
Now let F_c be the Banach space of continuous bounded functions Φ : N → IR with the supremum
norm. For ν ∈ N and t > 0 we define the function P_tΦ by (P_tΦ)(ν) = E{Φ(ξ_t) | ξ_0 = ν}.
We want to show that the operator P_t is a mapping from F_c into itself.
Boundedness of P_tΦ is clear if Φ : N → IR is bounded and continuous, so we only must prove
the continuity of P_tΦ, if Φ is a bounded continuous function Φ : N → IR. If ν is the zero measure
and (ν_n) is a sequence of measures in N, converging weakly to ν, we must have:
lim_{n→∞} ν_n([0, 1]) = ν([0, 1]) = 0,
and hence ν_n([0, 1]) = 0 for all large n. This implies that (P_tΦ)(ν_n) = (P_tΦ)(ν)
for all large n. If ν is not the zero measure and (ν_n) is a sequence of measures in N, converging
weakly to ν, we must have that ν_n([0, 1]) = ν([0, 1]) and that ν_n is of the form (4.1)
for all large n. Moreover, the ordered τ_{n,i}'s have to converge to the ordered τ_i's in the Euclidean
topology. Since the x-coordinates of a realization of the Poisson process of intensity λ_1 λ_2 in (0, 1) ×
(0, t] will with probability one be different from the τ_i's, sample paths of the processes {ξ_s : 0 ≤ s ≤ t},
either starting from ν or from ν_n, will develop in the same way, if n is sufficiently large, for such a
realization of the Poisson process in (0, 1) × (0, t]. Hence, for such realizations,
lim_{n→∞} Φ(ξ_t^{ν_n}) = Φ(ξ_t^{ν}).
So we have:
lim_{n→∞} (P_tΦ)(ν_n) = (P_tΦ)(ν),
if the sequence (ν_n) converges weakly to ν, implying the continuity of P_tΦ.
Since {ξ_t : t ≥ 0} is a Markov process with respect to the natural filtration {F_t : t ≥ 0}
generated by this process, we have the semi-group property P_{s+t}Φ = P_s(P_tΦ)
for bounded continuous functions Φ : N → IR. Moreover, we can define the generator G of the
process, working on the bounded continuous functions Φ : N → IR, by (4.4)
if ν is the zero measure on (0, 1), and in general by (4.5).
The first term on the right of (4.5) corresponds to the insertion of a new point in one of the intervals
between successive points, and the shift of τ_i to this new point if the new point is not in the rightmost interval, and
the second term on the right of (4.5) corresponds to an "escape on the left". Note that (GΦ)(ν) is
computed by evaluating
lim_{t↓0} {(P_tΦ)(ν) − Φ(ν)} / t.
The definition of G can be continuously extended to cover the configurations with coinciding points,
working with the extended definition of P_t, described above.
So we have a semigroup of operators P_t, working on the Banach space of bounded continuous
functions Φ : N → IR, with infinitesimal generator G. It now follows from Theorem 13.35 in Rudin (1991) that
we have the following lemma.
Lemma 4.1 Let N be endowed with the weak topology and let Φ : N → IR be a bounded continuous
function. Then we have, for each t > 0,
(d/dt) P_tΦ = G P_tΦ.
Proof: It is clear that the conditions (a) to (c) of Definition 13.34 in Rudin (1991) are satisfied,
and the statement then immediately follows. □
We will also need the following lemma (this is the real "heart" of the proof).
Lemma 4.2 Let, for a continuous function f : [0, 1] → IR, the function L_f : N → IR be defined
by
Then:
Proof. We first consider the value of (GL_f)(ξ_0) for the case where ξ_0 is the zero measure, i.e., the
interval [0, 1] contains no points of the point process ξ_0. By (4.4) we then have:
x
Hence:
e -f(x)-f(u) dx du
e -f(x)-f(u) dx du.
Now generally suppose that, for n > 1,
Then, by a completely similar computation, it follows that
So we get
since
and similarly
We now have the following corollary.
Corollary 4.1 Let Φ : N → IR be a continuous function with compact support in N. Then:
E (GΦ)(ξ_0) = 0.
Proof. Let C be the compact support of Φ in N. The functions L_f, where f is a continuous
function on [0, 1], are closed under multiplication, and hence linear combinations of these
functions, restricted to C, form an algebra. Since the constant functions also belong to this algebra
and the functions L_f separate points of C, the Stone-Weierstrass theorem implies that Φ can be
uniformly approximated by functions from this algebra, see, e.g., Dieudonné (1969), (7.3.1), p.
137. The result now follows from Lemma 4.2, since G is clearly a bounded continuous operator on
the Banach space of continuous functions on N.
Now let Φ be a continuous function with compact support in N. Then P_tΦ is also
a continuous function with compact support in N, for each t > 0. By Corollary 4.1 we have:
E (G P_tΦ)(ξ_0) = 0.
Hence, by Lemma 4.1,
(d/dt) E Φ(ξ_t) = E (G P_tΦ)(ξ_0) = 0,
implying
E Φ(ξ_t) = E Φ(ξ_0)
for each continuous function Φ : N → IR with compact support in N. But since N is a Polish space,
every probability measure on N is "tight", and hence ξ_t has the same distribution as ξ_0 for every
t > 0 (here we could also use the fact that N is in fact locally compact for the weak topology).
Theorem 4.1 now follows as before.
Remark. For a general result on stationarity of interacting particle processes (but with another
state space!), using an equation of type (4.13), see, e.g., Liggett (1985), Proposition 6.10, p. 52.
The argument shows more generally that, if we start with a Poisson point process of intensity
λ_1 on IR_+ and a Poisson point process of intensity
λ_1 λ_2 in the upper half plane, the starting distribution
on IR_+ is invariant for the Hammersley-Aldous-Diaconis process, if we let points escape at zero at
rate λ_2.
It is also clear that the inequality c ≤ 2 follows, since the length of a longest North-East path
from (0, 0) to a point (t, t) will, in the construction above, always be at least as big
as the length of a longest North-East path from (0, 0) to the point (t, t) if we start with the empty
configuration on the x- and y-axis: we simply have more opportunities for forming a North-East
path if we allow paths to pick up points from the x- or y-axis. Since, starting with a Poisson
process of intensity 1 in the first quadrant, and (independently) Poisson processes of intensity 1 on
the x- and y-axis, the expected length of a longest North-East path to (t, t) will be exactly equal
to 2t, according to what we proved above, we obtain from this c ≤ 2.
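The constant c can also be approached numerically. The following is a hypothetical illustration (not part of the paper): the length of a longest increasing subsequence of n i.i.d. uniform values — equivalently, of a longest North-East path through n uniform points in a square — can be computed in O(n log n) time by patience sorting, and the ratio L_n/√n is then seen to approach c = 2 from below.

```python
import bisect
import random

def lis_length(seq):
    """Longest strictly increasing subsequence length (patience sorting):
    tails[i] holds the smallest possible last value of an increasing
    subsequence of length i + 1."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

random.seed(1)
n = 10_000
ratios = [lis_length([random.random() for _ in range(n)]) / n ** 0.5
          for _ in range(5)]
print(sum(ratios) / len(ratios))  # close to (and slightly below) 2
```

The convergence is slow (the finite-size correction is of order n^{-1/3}), which is consistent with the fluctuation results of Baik, Deift and Johansson (1999) mentioned in the concluding remarks.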
5 Seppäläinen's stick process
The result c = 2 was also proved by hydrodynamical arguments in sections 8 and 9 of Seppäläinen (1996).
I will summarize the approach below.
First of all, a counting process on IR instead of (0, #) is used, and for this process a number
of starting configurations are considered. Note that we cannot start with the empty configuration
on IR, since points would immediately be pulled to -#, as noted in Aldous and Diaconis (1995).
For the purposes of proving 2, the most important starting configurations are:
(i) a Poisson process of intensity 1 on (-#, 0] and the empty configuration on (0, #).
(ii) a Poisson process of intensity 1 on IR.
Let (z_k)_{k∈Z} be an initial configuration on IR. Seppäläinen's stick process is defined as a process of
functions associated with this particle process.
Instead of z_k(0) we write z_k.
We now define
and
−∞ < z ≤ x
where L # ((z, 0), (x, y)) is the maximum number of points on a North-East path in (z, x] (0, y],
as in Lemma 3.1 of section 3.
as the maximum number of points on a North-East path in
. The key to the approach in Seppäläinen (1996) is to work with a kind of inverse of (5.2),
defined by
in words: the minimum horizontal distance needed for building a North-East
path of k points, starting at x_1, in the "time interval" (y_1, y_2]. This can be seen as another way
path of k points, starting at in the "time interval" (y 1 , y 2 ]. This can be seen as another way
of expanding relation (2.16), which, in fact, is also a relation between the discrete time Hammersley
process and its inverse.
Now, given the initial configuration (z k ) k#Z , the position of the particle z k at time y > 0 is
given by
z
Note that for each point z_k(y) at time y, originating from z_k in the original configuration,
there will always be a point z_j of the original configuration, with j ≤ k, such that
For example, if z_{k−1} < z_k(y) < z_k, we get a path of length 1 from z_{k−1} to z_k(y), and
Similarly, if z_k(y) < z_{k−1}, we can always construct a path from a point z_j, with j < k, to z_k(y)
through points of the Poisson point process, "picked up" by the preceding paths ("seen from z k (y)",
these are descending corners in the preceding paths).
These points need not be uniquely determined. Proposition 4.4 in Seppäläinen (1996) clarifies
the situation. It asserts that almost surely (that is: for almost every realization of the point
processes) we have that for all y > 0 and each k ∈ Z there exist integers i_−(k, y) and i_+(k, y) such
that
z
holds.
The proof of this
Proposition 4.4 in Seppäläinen (1996) is in fact fairly subtle!
We now first consider (proceeding a little bit differently than in Seppäläinen (1996)) the
evolution of the initial configuration on (−∞, 0] in the case that we have a Poisson process of
intensity 1 on (−∞, 0] and the empty configuration on (0, ∞), using the same method as used in
section 4. Let F be the set of continuous functions f
We denote the point process of the starting configuration by ξ_0 and the configuration at time t > 0
by ξ_t, where we let it develop according to the rules of the Hammersley-Aldous-Diaconis process.
Then, just as before, we can prove
lim
for f ∈ F. So, in the case that we have a Poisson process of intensity 1 on (−∞, 0], the empty
configuration on (0, ∞), and a Poisson process of intensity 1 in the upper half plane, the distribution
of the initial configuration is invariant for the Hammersley-Aldous-Diaconis process.
Let P_0 be the probability measure associated with the initial configuration (i) in the beginning
of this section, and let, for i ∈ Z, η_i = z_{i+1} − z_i (the length of the "stick" at location i), where
we let z_{−1} be the biggest point of the initial configuration in (−∞, 0). Moreover, let the measure
m_0 on the Borel sets of IR be defined by
and let the η_i be defined for all i ∈ Z. Then we have, for each ε > 0, and
each interval [a, b] ⊂ IR:
lim
which corresponds to condition (1.10) in Seppäläinen (1996). This condition plays a similar role
as condition (3.2) in section 3 above. Here [x] denotes the largest integer ≤ x, for x ∈ IR. It is then
proved that, if x > c 2 y/4
y,
where u is a (weak) solution of the partial differential equation (5.7),
under the initial condition u(x, 0) = 1_{(−∞,0]}(x), since we also have
if we can prove
lim
This is in fact proved in Seppäläinen (1996) on page 32. The first term on the right of (5.8) comes
from 1
using
We return to (5.6). Another interpretation of this relation is
where
z
and
z#x
using Theorem A1 on p. 38 of Seppäläinen (1996), with
This corresponds
to relation (8.8) on p. 29 of Seppäläinen (1996), which implies that
An easy calculation shows:
since U(x, y) # 0 (note that z # 0 since U(x, y) < 0 can only
occur if x - 1
which case the minimizing z is given by z
y. Note that
y, and that the right side of (5.9) in this case is given by
y.
So the partial differential equation (5.7), with initial condition u(x, 0) = 1_{(−∞,0]}(x),
is solved by
which, considered as a function of x, is the Radon-Nikodym derivative of the Borel measure m_y on
IR, defined by
Note that u only solves (5.7) in a distributional sense. That is: for any continuously
differentiable "test function" φ with compact support and any y > 0, we have:
see also (1.6) on p. 5 of Seppäläinen (1996).
We note that in Seppäläinen's interpretation of the particle process, we cannot get new points
to the right of zero in the situation where we start with a Poisson point process of intensity 1 on
(−∞, 0] and the empty configuration on (0, ∞). In this situation we have z_i = z_{−1} for all i ≥ 0, so we
have infinitely many particles at location z_{−1}. This means that at each time y, where a new point
of the Poisson point process in the plane occurs with x-coordinate just to the left of z_{−1}, and also
to the right of points z_i(y), satisfying z_i(y) < z_{−1}, one of the infinitely many points z_i(y) that are
still left at location z_{−1} shifts to the x-coordinate of this new point. Seppäläinen's interpretation
with the "moving to the left" is perhaps most clearly stated in the first paragraph of section 2 of
Seppäläinen (1998).
In the interpretation of Seppäläinen's "stick process", we would have an infinite number of sticks
of zero length at sites 0, 1, 2, …. Each time an event of the above type occurs, one of the sticks
of length zero gets positive length, and the stick at the preceding site is shortened (corresponding
to the shift to the left of the corresponding particle in the particle process). In this way, mass
gradually shifts to the right in the sense that at each event of this type a new stick with an index
higher than all indices of sticks with positive length gets itself positive length. The corresponding
"macroscopic" picture is that the initial profile u(·, 0) shifts to the profile u(·, y) at time y.
The interpretation of the relation
taking limits in (5.10), is that z_{[tx]}(0) travels roughly over a distance t(y − x) to the left in the time
interval [0, ty], if y > x ≥ 0 (we first need a time interval tx to get to a "stick with index [tx]" and
length zero, then another time interval of length t(y − x) to build a distance left from zero of order
t(y − x)), and that (with high probability) z_{[tx]}(0) does not travel at all during this time interval,
if x > y (a "stick with index [tx]" is not reached during this time interval).
6 An elementary hydrodynamical proof of c = 2
It turns out that a very simple proof of the result is possible, only using the subadditive ergodic
theorem and almost sure convergence of certain signed measures, associated with the Hammersley-
Aldous-Diaconis process. We give the argument, which is based on Groeneboom (2000), below.
Let N + (x, y) be the length of a longest increasing sequence from (0, 0) to (x, y) in the rectangle
[0, x] × [0, y], if we start with the empty configuration on IR_+ and a Poisson point process of
intensity 1 in IR_+^2.
It has the same meaning as N + (x, y) in Aldous and Diaconis (1995),
see section 3 above. We further call a point of the Poisson point process in IR_+^2
an α-point and a
point where a Hammersley-Aldous-Diaconis space-time curve has a North-East corner a β-point.
The situation is illustrated in Figure 1, where the number of α-points and β-points is 9 and 5,
respectively, and N + (x, y) has the following alternative interpretation:
number of α-points in [0, x] × [0, y] minus number of β-points in [0, x] × [0, y].
So we can associate with N + (x, y) a random signed measure N, defined by:
N(B) = number of α-points in B minus number of β-points in B,
for Borel sets B ⊂ IR_+^2, and we shall write
N + (x, y) = ∫_{[0,x]×[0,y]} dN.
Note that N now has an interpretation different from that in sections 3 and 5.
In a similar way we can associate a random measure V_t with the process.
We get:
{number of α-points in tB minus number of β-points in tB},
where the set tB is defined by tB = {(tx, ty) : (x, y) ∈ B}.
Furthermore, we define:
{number of α-points in [0, tx] × [0, ty]}
and
{number of β-points in [0, tx] × [0, ty]}.
With these definitions we clearly have:
Moreover, we define
so we omit the upper edge of the rectangle [0, x] × [0, y] (we can also omit the right edge of this
rectangle, but not both edges, as will be clear from the sequel!). With these definitions we have
the following lemma.
Lemma 6.1 For each rectangle [0, x] × [0, y]:
Proof. Suppose (as we may) that the boundary of the rectangle [0, tx] × [0, ty] does not contain
α- or β-points. Further suppose that there are m space-time curves, going through the rectangle
[0, tx] × [0, ty] (meaning that N(tx, ty) = m).
Crossing the space-time curves, going from (0, 0) in a North-East direction, we can number
these paths as P_1, …, P_m, where P_1 is the path closest to the origin. Then, for an α-point (u, v)
on P_i, we get
the value (i − 1)/t, and for a β-point (u, v) on P_i, we get the value i/t
(here the fact
that we omit the upper edge of the rectangle [0, u] × [0, v] becomes important!). Let A_1 be the set
of α-points and A_2 be the set of β-points, contained in [0, tx] × [0, ty], respectively. Then we get:
Σ_{i=1}^{m} {(i − 1) #{α-points on P_i} − i #{β-points on P_i}}
But for each space-time curve P_i, contained in [0, tx] × [0, ty], we have:
#{α-points on P_i} = #{β-points on P_i} + 1.
So we get
From this, the result easily follows. First of all
a.s.
by the subadditive ergodic theorem (this is the fundamental result in Hammersley (1972)). Hence,
by the continuity theorem for almost sure convergence,
a.s.
a.s.
-# 2xy,
since, by (6.3), t a.s.
-# 0, and since 2t
a.s.
(x, y) is the number of
points of the Poisson point process in [0, tx] × [0, ty], divided by t². Finally, defining V(x, y)
(i.e., the almost sure limit of V_t(x, y)), we get
a.s.
see Groeneboom (2000). The result now follows from (6.4) to (6.6) and Lemma 6.1.
Now, to give some insight into the "germination and ripening of the present proof" and to show
that "it did not spring fully armed like Pallas Athene from the brow of Zeus" (quoting Hammersley
(1972)), consider a twice differentiable function (x, y) → v(x, y)
and
∂²/∂x∂y
∂²/∂x∂y
for twice differentiable functions f : IR → IR. The motivation for looking at (6.8) came from
considering the asymptotic behavior of
∂²/∂x∂y
for twice differentiable functions f. From a study of the asymptotic behavior of
for small h, k > 0, it can be seen that the following relation has to hold:
∂²/∂x∂y
where G(tx, ty) and H(tx, ty) are the distances between (tx, ty) and the nearest crossings of a
Hammersley-Aldous-Diaconis space-time curve with the line segments,
respectively. We conjectured that
lim
implying in particular that
lim
Since it is not difficult to show that
lim
the statement would then be equivalent to saying that the covariance between G(tx, ty)
and H(tx, ty) tends to zero, as t → ∞. But since I was not able to prove these conjectures, I looked
for another way to arrive at relation (6.8).
Returning to (6.8), and combining this relation with the straightforward differentiation
∂²/∂x∂y
∂²/∂x∂y
∂/∂x
∂/∂y
we recover the relation:
(∂v/∂x)(∂v/∂y) = 1
under the side condition (6.7). Note that this is exactly the partial differential equation (3.1) (the
"heuristic argument" in Aldous and Diaconis (1995)), which in turn goes back to the heuristic
relation (2.20) in Hammersley (1972). This partial differential equation has as solution:
v(x, y) = 2√(xy),
which is the almost sure limit of V_t(x, y).
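For completeness, a short check (assuming the heuristic equation in the form it is usually quoted, ∂v/∂x · ∂v/∂y = 1 with v vanishing on the axes): the function v(x, y) = 2√(xy) indeed satisfies

```latex
\frac{\partial v}{\partial x} = \sqrt{\frac{y}{x}}, \qquad
\frac{\partial v}{\partial y} = \sqrt{\frac{x}{y}}, \qquad
\frac{\partial v}{\partial x}\,\frac{\partial v}{\partial y} = 1,
\qquad v(x,0) = v(0,y) = 0 .
```

At x = y = t this gives the value 2t used in the inequality c ≤ 2 above.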
The (random) function (x, y) # V t (x, y) satisfies the side condition (6.7), but is clearly not
di#erentiable in x and y. However, we might still hope to have a relation of the form
in some sense, for rectangles taking
which leads to the preceding proof.
7 Concluding remarks
In the foregoing sections I tried to explain the "hydrodynamical approach" to the theory of
longest increasing subsequences of random permutations, for the uniform distribution on the set
of permutations. This approach probably started with the paper Hammersley (1972) and the
heuristics that can be found in that paper have been expanded in di#erent directions in the papers
discussed above. The hydrodynamical approach has also been used to investigate large deviation
properties, see, e.g., Seppäläinen (1998), and for large deviations of the upper tail: Deuschel and
Zeitouni (1999); they still treat the large deviations of the lower tail by combinatorial methods,
using Young diagrams, but apparently would prefer to have a proof of a more probabilistic nature,
as seems clear from their remark: "The proof based on the random Young tableau correspondence
is purely combinatoric and sheds no light on the random mechanism responsible for the large
deviations".
I conjecture that it is possible to push the hydrodynamical approach further for proving
the asymptotic distribution results in Baik, Deift and Johansson (1999), but at present these
results still completely rely on an analytic representation of the probability distribution of longest
increasing subsequences using Toeplitz determinants, see, e.g. Deift (2000), p. 636.
Acknowledgement
. I want to thank the referees for their constructive comments.
--R
D. Aldous and P. Diaconis, Hammersley's interacting particle process and longest increasing subsequences.
D. Aldous and P. Diaconis, Longest increasing subsequences: from patience sorting to the Baik-Deift-Johansson theorem.
Personal communication.
M. Aigner and G. M. Ziegler, Proofs from THE BOOK.
J. Baik, P. Deift and K. Johansson, On the distribution of the length of the longest increasing subsequences of random permutations.
P. Deift, Integrable systems and combinatorial theory.
J.-D. Deuschel and O. Zeitouni, On increasing subsequences of i.i.d. samples.
P. Erdős and G. Szekeres, A combinatorial problem in geometry.
P. Groeneboom, Ulam's problem and Hammersley's process.
J. M. Hammersley, A few seedlings of research.
O. Kallenberg, Random measures.
J. F. C. Kingman, Subadditive ergodic theory.
T. M. Liggett, Interacting particle systems.
B. F. Logan and L. A. Shepp, A variational problem for random Young tableaux.
E. M. Rains, A mean identity for longest increasing subsequence problems.
W. Rudin, Functional analysis.
B. E. Sagan, The symmetric group.
A. M. Vershik and S. V. Kerov, Asymptotics of the Plancherel measure of the symmetric group and the limiting form of Young tableaux.
A. M. Vershik and S. V. Kerov, Asymptotic behavior of the maximum and generic dimensions of irreducible representations of the symmetric group.
A. M. Vershik and S. V. Kerov, Asymptotic theory of the characters of the symmetric group.
Une forme g
--TR
--CTR
Michael H. Albert , Alexander Golynski , Angle M. Hamel , Alejandro Lpez-Ortiz , S. Srinivasa Rao , Mohammad Ali Safari, Longest increasing subsequences in sliding windows, Theoretical Computer Science, v.321 n.2-3, p.405-414, August 2004 | ulam's problem;longest increasing subsequence;hammersley's process |
586837 | Some multilevel methods on graded meshes. | We consider Yserentant's hierarchical basis method and multilevel diagonal scaling method on a class of refined meshes used in the numerical approximation of boundary value problems on polygonal domains in the presence of singularities. We show, as in the uniform case, that the stiffness matrix of the first method has a condition number bounded by (ln(1/h))2, where h is the meshsize of the triangulation. For the second method, we show that the condition number of the iteration operator is bounded by ln(1/h), which is worse than in the uniform case but better than the hierarchical basis method. As usual, we deduce that the condition number of the BPX iteration operator is bounded by ln(1/h). Finally, graded meshes fulfilling the general conditions are presented and numerical tests are given which confirm the theoretical bounds. | Introduction
The solution of boundary value problems (b.v.p.) in non-smooth domains presents singularities
in the neighbourhood of singular points of the boundary, e.g. in the neighbourhood
of re-entrant corners. Consequently, the use of uniform finite element meshes yields a poor
rate of convergence. Many authors proposed to build graded meshes in the neighbourhood
of these singular points in order to restore the optimal convergence order (see, e.g.
[13, 16]). Roughly speaking, such meshes are obtained by moving the nodal points by some
coordinate transformation in order to compensate for the singular behaviour of the solution,
i.e. so that the nodes accumulate near the singular point.
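In one dimension, such a grading is commonly realized by a power transformation of a uniform mesh. The sketch below is illustrative only (it uses a hypothetical grading parameter mu and is not the construction of Section 5); it shows how the nodes accumulate at the singular point x = 0 while the element sizes grow away from it:

```python
def graded_nodes(n, mu):
    """Nodes x_i = (i/n)**(1/mu) of a graded mesh on [0, 1].
    mu = 1 gives the uniform mesh; 0 < mu < 1 concentrates nodes near x = 0."""
    return [(i / n) ** (1.0 / mu) for i in range(n + 1)]

nodes = graded_nodes(8, 0.5)          # x_i = (i/8)**2
h = [b - a for a, b in zip(nodes, nodes[1:])]
print(h)  # element sizes increase with the distance from the singular point
```

Choosing mu according to the strength of the corner singularity is what restores the optimal convergence order in the references cited above.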
As usual the finite element discretization leads to the resolution of large-scale systems
of linear algebraic equations, where the system matrices in the nodal basis have a large
condition number. This implies that the resolution by iterative methods requires a large
number of iterations. Using preconditioners based on multilevel techniques one can reduce
this number of iterations drastically. The first obstacle is that the graded meshes proposed
in [13, 16] are actually not nested. Consequently, we propose here to build a sequence of
nested graded meshes T_k on two-dimensional domains which are also appropriate
for the approximation of singularities. A similar algorithm was proposed in [12].
For uniform meshes standard multilevel methods, e.g. the hierarchical basis method
[20] and BPX-like preconditioners [3, 4, 5, 8, 10, 14, 15, 19, 21] allow the reduction of the
condition number to the order O((ln h^{-1})^2) and O(1), respectively, for two-dimensional
problems.
Similar results were obtained in the case of nonuniformly refined meshes (see, e.g.,
[4, 5, 8, 15, 19, 20]). But these meshes are different from the above graded meshes.
Therefore, our goal is to extend this kind of results to our new meshes. The main idea is
to prove that our graded meshes satisfy the conditions
with positive constants σ_1, σ_2, β, and γ; h_{K_k} and h_{K_l} are the exterior diameters of the
triangles K_k ∈ T_k and K_l ∈ T_l. Using this property, we can prove
that the condition number of the stiffness matrix in the hierarchical basis is of the order
O((ln h^{-1})^2) and that the condition number of a (j + 1)-level additive Schwarz operator
with multilevel diagonal scaling (MDS method) is of the order O(ln h^{-1}).
The outline of the paper is the following one: In Section 2, we present our model
problem and describe its finite element discretization. In Section 3, we analyse the condition
number of the stiffness matrix in the hierarchical basis by showing the equivalence
between the H 1 -norm and the standard discrete one, and in Section 4, we derive estimates
of the condition number of the MDS method by adapting Zhang's arguments [21]. Section
5 is devoted to the building of the nested graded meshes. We also check that these
meshes are regular and fulfil the conditions (1). Finally, numerical tests are presented in
Section 6 which confirm our theoretical estimates.
2 The model problem
Let Ω be a bounded domain of the plane with a polygonal boundary Γ (i.e. the
union of a finite number of linear segments). On Ω we shall consider the usual Sobolev spaces
H^s(Ω), s ≥ 0, with norm and semi-norm denoted by ‖·‖_{s,Ω} and |·|_{s,Ω}, respectively (we
refer to [11] for more details). As usual, H^s_0(Ω) is the closure in H^s(Ω)
of C^∞_0(Ω), the
space of C^∞ functions with compact support in Ω.
Consider the boundary value problem
−Δu = f in Ω, u = 0 on Γ,
whose variational formulation is: Find u ∈ H^1_0(Ω) such that
a(u, v) = (f, v) for all v ∈ H^1_0(Ω),
where we have set
a(u, v) = ∫_Ω ∇u^T ∇v dx and (f, v) = ∫_Ω f v dx
for f ∈ L²(Ω). It is well known that
if Ω is convex then u ∈ H²(Ω), and consequently
the use of uniform meshes in standard finite element methods yields an optimal order of
convergence h. On the contrary,
if Ω is not convex then u ∉ H²(Ω)
in general, and uniform
meshes yield a poor rate of convergence. Many authors [13, 16, 18] have shown that local
mesh grading allows one to restore the optimal order. But such meshes are not uniform in the
sense used in standard multilevel techniques. Here and later on, by uniform meshes
we mean either regular refinements (partition of triangles of level k into four congruent
subtriangles of level k + 1) or nonuniform refinements (PLTMG package of [2]); see for
instance Section 4 of [15] and the references cited there. For this reason, as in [20, 21],
we relax the conditions of the meshes in the following way (graded meshes that fulfil
these conditions are built in Section 5). We suppose that we have a sequence of nested
triangulations {T_k}_{k∈IN} such that any triangle of T_k is divided into four triangles of T_{k+1}.
We assume that the triangulations are regular in Ciarlet's sense [6], i.e., the ratios h_K/ρ_K
between the exterior diameters h_K and the interior diameters ρ_K of elements K ∈ T_k
are uniformly bounded from above and the maximal mesh size h_k tends to
zero as k goes to infinity. We further assume (see Section 3 of [20] and Section 2 of [21])
that there exist positive constants β, γ < 1 such that for all K_k ∈ T_k and K_l ∈ T_l,
k ≤ l with K_k ⊇ K_l, we have (4).
For regular refinements we have β = γ = 1/2. We shall see later on
that our graded meshes satisfy (4), where μ is the
grading parameter.
grading parameter.
In each triangulation T_k, we use the approximation space
{v ∈ H^1_0(Ω) : v|_K ∈ P_1(K) for all K ∈ T_k},
where P_1(K) is the set of polynomials of degree ≤ 1 on K. We consider the Galerkin
approximation u_k, the solution of the variational problem in this space.
Let us remark that with the mesh T_k built in Section 5 and an appropriate parameter μ,
we have the error estimate
‖u − u_k‖_{1,Ω} ≲ 2^{−k} ‖f‖_{0,Ω},
where here and in the sequel a ≲ b means that there exists a positive constant C, independent
of k and of the above constants β, γ, such that a ≤ Cb. In Section 5, the constant
will also be independent of the grading parameter μ.
3 Yserentant's hierarchical basis method
The goal of this section is to show that the stiffness matrix of the Galerkin method in the
hierarchical basis on meshes T k of the previous section has a condition number bounded
by (ln(1/h))², where h is the meshsize,
as in the uniform case. The same result was already underlined by Yserentant
in Section 3 of [20] for nonuniformly refined meshes (in the above sense) by introducing
the condition (4) and by showing that the results for uniformly refined meshes proved in
Section 2 of [20] could be adapted to this kind of meshes satisfying (4). We then follow
the arguments of Section 2 of [20], underline the differences with the standard refinement
rule and also give the dependence with respect to the parameters fi; fl.
Let N_k be the set of vertices of the triangles of T_k and S_k be the space of continuous
functions on Ω̄ that are linear on the triangles of T_k. For a continuous function u on Ω̄,
let I_k u be the function in S_k interpolating u at the nodes of T_k, i.e.,
(I_k u)(x) = u(x) for all x ∈ N_k.
For further use, let us also denote by V_k the subspace of S_k of functions vanishing at the
nodes of level k − 1; in other words, V_k is the range of I_k − I_{k−1}.
On the finite element space S_j, define the semi-norm | · | as follows:
The proof of the equivalence of norms we have in mind is based on the two following
preliminary lemmas. The first one concerns equivalence of semi-norms (cf. Lemma 2.4 of
[20]).
Lemma 3.1 For all u ∈ S_j, we have
|u|²_{1,Ω} ≲ |u|².
Proof: In view of Lemma 2.4 of [20], we simply need to show that the estimate
(9) |v|²_{1,K} ≲ Σ_{p ∈ I(K)} v(p)²
holds for all v ∈ V_k and all K ∈ T_{k−1}. To prove this estimate, we remark that K ∈ T_{k−1} is divided into four triangles K_l, l = 1, …, 4, such that v is linear in each K_l and vanishes at the vertices of K (see Figure 1). Due to the fact that the triangulation T_k is regular, by an affine coordinate transformation (reducing to the reference element K̂) we prove that
|v|²_{1,K_l} ≲ Σ_{p ∈ I(K_l)} v(p)²,
where I(K_l) is the set of vertices of K_l which are not vertices of K. Summing these equivalences over l = 1, …, 4, we obtain (9).
The second ingredient is a Cauchy-Schwarz type inequality already proved in Lemma
2.7 of [20] in the case of regularly refined meshes and that we easily extend to the case of
our mesh as suggested in Section 3 of [20].
Figure 1: Triangle divided into four subtriangles K_l
Lemma 3.2 For all u ∈ V_k and v ∈ V_l, we have
a(u, v) ≲ γ^{|k−l|} |u|_{1,Ω} |v|_{1,Ω}.
Proof: Similar to the proof of Lemma 2.7 of [20] with the following slight modification: if K is a fixed triangle of T_l and S is the boundary strip of K consisting of all triangles of T_k which are subsets of K and meet the boundary of K, then due to (4) we have
meas(S) ≲ γ^{k−l} meas(K).
In view of the proof of Lemma 2.7 of [20], this yields the assertion.
Now we can formulate the equivalence between the H 1 norm and the discrete one (see
Theorem 2.2 of [20]).
Theorem 3.3 For all u
1;\Omega .
Proof: For the lower bound, we remark that the assumption (4) and Lemmas 2.2 and
2.3 of [20] imply that
for every K 2 Summing these inequalities on all K 2 T k , we get
1;\Omega . (1
0;\Omega . (1
Therefore by Lemma 3.1 and the triangle inequality, we get
1;\Omega . kI 0 uk 2
By the estimates (12) and (13), we then obtain the lower bound in (11).
Let us now pass to the upper bound. First, Lemma 3.2 and the arguments of Lemma
2.8 of [20] yield
1;\Omega .
On the other hand, the assumption (4), the fact that our triangulation is regular and the
arguments of Lemma 2.9 of [20] lead to
0;\Omega . kI 0 uk 2
The sum of the two above estimates gives the upper bound in (11).
Using a hierarchical basis of V_j and the former results, we directly get the following.
Corollary 3.4 The Galerkin stiffness matrix A_j of the approximated problem (5) in the hierarchical basis has a spectral condition number which grows at most quadratically with the number of levels j; more precisely,
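In the notation above, the asserted quadratic growth can be written as follows (a reconstruction of the missing display; the dependence of the constant on β and γ is an assumption consistent with the surrounding statements):

```latex
% Reconstructed bound; C(\beta,\gamma) is assumed, not recoverable from the text.
\[
  \kappa(A_j) \;\le\; C(\beta,\gamma)\,(j+1)^2 .
\]
```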
4 Multilevel diagonal scaling method
In this section, we analyse the multilevel diagonal scaling method and the BPX algorithm in the spirit of [21]. Here the main difficulty lies in the fact that our meshes are not quasi-uniform (quasi-uniformity means that h_K ≂ h_k for all triangles K ∈ T_k and all k), so that the assumption 2.1.c of [21] is violated.
Let us recall that the multilevel diagonal scaling method consists in the following algorithm: First we represent V_j as a sum of the one-dimensional subspaces spanned by the nodal basis functions φ^k_i, where φ^k_i is the nodal basis function of V_k associated with the interior vertex p^k_i, n_k being the number of interior vertices of T_k. Define the operator A from V_j to V_j by
where (·, ·) denotes the L² inner product. Let us further define the preconditioner B_MDS and the (j + 1)-level multilevel diagonal scaling operator P_MDS by
The multilevel diagonal scaling algorithm consists in finding the solution u of the Galerkin
problem (5) by solving iteratively (using for instance the conjugate gradient method) the
equation
As usual to solve iteratively (16), the crucial point is to estimate the condition number
of the iteration operator PMDS . For quasi-uniform meshes, it was shown by X. Zhang in
Theorem 3.1 and Section 4 of [21] that this condition number is uniformly bounded (with
respect to the level j). The same result was extended to the case of nonuniformly refined
meshes [8, §5], [15, §4.2.2]. Our goal is to extend this type of result to meshes satisfying only (4) (actually only the upper bound is sufficient), which can be non quasi-uniform.
Analysing carefully the proof of Theorem 3.1 of [21] we remark that the upper bound is
valid under the assumption (4) (only the upper bound) and is fully independent of the
quasi-uniformity of the meshes. On the contrary the proof of the lower bound uses this
last property. The key point in our proof of this lower bound is the use of the Scott–Zhang interpolation operator, which we now recall for convenience [17]. For a fixed k ∈ {0, …, j}, with each interior vertex p^k_i we associate the macro-element S^k_i, which is actually the support of φ^k_i. For any triangle K ∈ T_k, let us further denote by S(K) the union of all macro-elements containing K, i.e.,
The following well-known facts result from the regularity of the family {T_k}_{k∈ℕ}: There exists a positive integer M (independent of k) such that
(17) card{ i : K ⊂ S^k_i } ≤ M,
(18) h_K ≂ h_{K'}, for any K, K' ⊂ S(K).
A direct consequence of these two properties is that the diameter of S(K) is equivalent to h_K; indeed, from the triangle inequality we have
diam S(K) ≤ Σ of the diameters of the macro-elements containing K. Using the properties (18) and (17), we get
diam S(K) ≲ h_K.
With any nodal point p^k_i, we associate one edge σ^k_i of one triangle K ∈ T_k such that p^k_i ∈ σ^k_i. We now fix a dual basis {ψ^k_i} of the nodal one {φ^k_i}, in the sense that
∫_{σ^k_i} ψ^k_i φ^k_j ds = δ_{ij}.
Then for all v ∈ H¹(Ω), the Scott–Zhang interpolation operator π_k v on T_k is defined by
π_k v = Σ_i ( ∫_{σ^k_i} ψ^k_i v ds ) φ^k_i.
Note that the operator π_k is actually linear and continuous from H¹(Ω) to V_k, that it is a projection onto V_k (i.e., π_k v = v for all v ∈ V_k), and that it enjoys the following local interpolation property (see Section 4 of [17]): for all triangles K ∈ T_k and l = 0 or 1, we have
(20) ‖v − π_k v‖_{l,K} ≲ h_K^{1−l} |v|_{1,S(K)}.
Let us notice that Clément's interpolation operator [7, 9] also satisfies (20) but unfortunately is not a projection onto V_k.
Now we are able to prove the estimate of the condition number κ(P_MDS) of the iteration operator P_MDS.
Theorem 4.1 The multilevel diagonal scaling operator P_MDS satisfies
λ_max(P_MDS) ≲ 1, λ_min(P_MDS) ≳ (j + 1)^{−1}.
Consequently we have
κ(P_MDS) ≲ j + 1,
which means that κ(P_MDS) grows at most linearly with the number of levels j + 1.
Proof: As already mentioned, the upper bound was proved by X. Zhang in Lemmas 3.2
to 3.5 in [21]. To prove the lower bound, instead of using the H¹-projection on V_k, which has a global approximation property that is not convenient for non quasi-uniform meshes, we take advantage of the local interpolation property (20) of the Scott–Zhang interpolation operator. Indeed, for any u ∈ H¹(Ω) we set
u_k = (π_k − π_{k−1}) u, with the convention π_{−1} = 0. Consequently any u ∈ V_j may be written as the sum of the u_k. Then for all triangles K ∈ T_k and l = 0 or 1, we can estimate |u_k|_{l,K}, where M(K) is the unique triangle in T_{k−1} containing K if k ≥ 1. Owing to (20) and (18), we deduce that
Now we decompose u k in the nodal basis, in other words we write
where
. Consequently we get
. ju k (p k
This last estimate is obtained using the equivalence of norms in finite-dimensional spaces on the reference element K̂ and an affine coordinate transformation. Using now the estimate (24), we arrive at
1;\Omega .
Summing this last estimate over the triangles K ∈ T_k and using the property (17), we obtain
1;\Omega .
1;K . juj 2
The sum over k then gives
1;\Omega . (j
With the help of Lemma 3.1 of [21] (see also Remark 3.1 of [21]) and the definition of the
bilinear form a, we conclude
The lower bound directly follows.
Let us finish this section by looking at the BPX algorithm. As the BPX preconditioner is defined by an analogous sum over the levels, with scaling factors that are equivalent to 1 (uniformly with respect to k), the condition numbers of P_BPX and P_MDS are equivalent. This means that the following holds.
Corollary 4.2 The BPX operator enjoys the property
5 Graded nested meshes
The triangulations T_k of Ω are graded according to Raugel's procedure [11, 16]. But here,
since we need a nested sequence of triangulations this procedure is slightly modified. As
a consequence we need to check the regularity of the meshes. In a second step we shall
show that this family satisfies the condition (4).
Let us first describe the construction of the meshes:
i) Divide Ω into a coarse triangular mesh T_0 such that each triangle has either one or no singular point (of Ω) as a vertex. If a triangle has a singular point as a vertex (i.e., the interior angle at this point is > π), it is called a singular triangle, and we suppose that all its angles are acute and that the edges hitting the singular point have the same length (this is always possible by further subdivisions).
ii) Any non-singular triangle T of T_0 is divided using the regular refinement procedure, i.e., divide any triangle of T_k included in T into four congruent subtriangles of T_{k+1}; see Figure 2.
Figure 2: Triangle K ∈ T_k divided into four congruent subtriangles
iii) Any singular triangle T of T_0 is refined iteratively as follows: Fix a grading parameter μ (that for simplicity we take identical for all singular triangles; if there exists more than one singular point, then we simply need to take the same parameter for triangles containing the same singular point). In order to make our procedure understandable, we describe how to pass from T ∩ T_k to T ∩ T_{k+1}. For convenience we first recall Raugel's grading procedure.
Introduce barycentric coordinates λ in T such that the singular point of T has the coordinate λ = 1. For all n ∈ ℕ, define vertices p^{(n)}_{i,j} in T whose coordinates are given by (26). Raugel's grading procedure consists in defining T ∩ T_n as the set of triangles described by their three vertices as follows.
First, T ∩ T_1 is simply defined by Raugel's procedure, i.e., it is the set of four triangles described by (26) with n = 1 (see Figure 3).
Secondly, the triangulation T ∩ T_2 is built as follows (see Figure 4): The part below the grading line is identical with Raugel's one, namely it is described by the four triangles of vertices given by (26). By contrast, the part above that line is modified in order to guarantee the nestedness property. More precisely, the set of triangles in this zone is described by points p̃^{(4)}_{i,j}, which are identical with p^{(4)}_{i,j} except in two cases, where the modified point is defined as the intersection between the grading line and the line joining the corresponding points p^{(2)}_{i,j} (see Figure 4).
Figure 3
Notice that these points p̃^{(4)}_{i,j} actually lie on an edge of a triangle of T ∩ T_1. We now remark that in this procedure the three triangles K_l above the grading line are divided into four triangles in the following way: determine the two points which are the intersections between the grading line and the edges of K_l; determine the midpoint of the third edge (uniform subdivision into two parts). Using these three points on the edges of K_l and the vertices of K_l, we divide K_l into four triangles in a standard way (see Figure 1). This will be the general rule.
Figure 4: our procedure
Now we can describe the passage from T ∩ T_k to T ∩ T_{k+1}. The triangle of T ∩ T_k containing the singular corner is divided into four triangles in Raugel's way: these triangles are described by their three vertices.
Any triangle K ∈ T ∩ T_k above the grading line is divided into four triangles in the following way: First, there exists i ≥ 1 such that K lies between two consecutive grading lines; two of its vertices are on one of these lines, and the third one, denoted by p₁, is on the other line. Secondly, determine the two points p'₂ and p'₃ which are the intersections between the first line and the edges of K, and determine the midpoint p'₁ of the third edge. Now the four triangles K_l, l = 1, …, 4, are described by their three vertices (see Figure 5).
Remark that the triangle containing the singular corner is also refined with the same rule.
Let us finally notice that the above procedure guarantees the conformity of the meshes.
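As an aside, the role of the grading parameter μ and of the nestedness requirement can be illustrated on a 1D analogue (a sketch only: the power-law node placement below is an assumption for illustration, not the paper's 2D procedure).

```python
def graded_nodes(n, mu):
    """1D graded nodes on [0, 1], clustered toward the singular point x = 0.

    x_i = (i/n)**(1/mu); mu = 1 gives a uniform mesh, mu < 1 grades toward 0.
    """
    return [(i / n) ** (1.0 / mu) for i in range(n + 1)]


def refine_levels(k, mu):
    """Nodes of levels 0..k with 2**level intervals.

    The sequence is nested by construction, since
    (2i / (2n))**(1/mu) == (i / n)**(1/mu).
    """
    return [graded_nodes(2 ** level, mu) for level in range(k + 1)]
```

Note that as μ decreases, the smallest element near the singular point shrinks much faster than 2^{−k}, which mirrors why constants such as C(μ) in the text can blow up as μ tends to 0.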
Now we want to show that this family of meshes is regular.
Lemma 5.1 The above family is regular in the sense that (27) holds.
Proof: To prove the assertion it suffices to look at the triangles of T ∩ T_k for any singular triangle T of T_0. Now we remark that our procedure preserves the acuteness of the angles. Therefore, if we show that (28) holds for all K ∈ T ∩ T_k, where the quantities compared in (28) are the lengths of the edges of K in increasing order, then we deduce a lower bound for sin(α_K), where α_K is the smallest angle of K. By Zlámal's condition [22], we then deduce that h_K/ρ_K is uniformly bounded, which yields (27).
It then remains to prove (28). We now remark that if we apply a similarity of center at the singular point and of ratio 2^{−1/μ} to the triangulation, we obtain the part of the triangulation below the grading line. This means that we are reduced to proving (28) for the triangles above that line. Therefore we say that K ∈ T̃_k if and only if K is between the grading lines.
For any triangle K ∈ T̃_k, let us denote by p_K the length of the edge parallel to the grading line. We first prove that
(29) h_K ≲ p_K ≲ e^{3(1/μ − 1)} h_K.
Indeed we shall establish inductively that
r l
~ hK . p K .
l
1. It is clear that (30) holds for
Consequently to prove (30) for all k, it suffices to show that if (30) holds for k, it also
holds 1. Fix any K 2 ~
then as already explained it is divided into four
triangles K l 4, of ~
Two geometrical cases can be distinguished: either
1 is on the line - 1
1 is on the line - 1
Let us first show that (30) holds for the triangles K_l, l = 1, …, 4, in the first case. With the notation from Figure 5, we deduce from the construction of the mesh that p'₂ = h(p₂) and p'₃ = h(p₃), where h is the similarity of center p₁ and ratio r_{k+2}. This implies that
Figure 5: Definition of the nodes p_i and p'_i
Since by assumption K satisfies (30), K₁ and K₃ directly satisfy
r l
l
leading to (30) for K 1 (with 1. For the triangle K 3 ,
the above estimate yields
r K
r l
~ hK 3 . p K 3 . r K
l
where r
. This leads to (30) for K 3 because
due to the fact that i - 2 k\Gamma1 .
For K 2 and K 4 , we have p K Therefore by the inductive assumption
and the fact that ~ hK 2
r l
~ hK l . p K l
l
Again this leads to (30) for K 2 and K 4 because we easily check that (note that r - 1=2)
The second case is treated similarly: for K₃ we have the same estimate as before with r̃_K instead of r_K, which is the reason for the factor r^{−1}_{k+2} on the right-hand side. For K₂ and K₄, simply remark that the ratio r̃ of the second similarity is comparable to r_{k+2}, and use the fact that r_{k+2} ≤ 2r̃.
The proof of (30) is then complete.
Now (29) follows from (30) because using the fact that
log a
with
l=3
l
l=3
l - e 3( 1
Let us now come back to (28). For any K ∈ T̃_k, by construction of the mesh we clearly have (31), with the above notation for the vertices of K. On the other hand, since all the angles of K are acute, if t denotes the orthogonal projection of p₁ on the opposite edge, we have
(32) … ≲ e^{6(1/μ − 1)} …,
where θ is the angle between the grading lines.
Using the estimates (29), (31) and (32), we conclude that
This yields (28).
Remark 5.2 It was shown by Raugel in [16, p. 96] that Raugel's graded meshes satisfy the regularity condition (27).
Lemma 5.3 The above family satisfies the condition (4), with a positive constant C independent of μ.
Proof: As before, it suffices to prove the assertion for the triangles in a fixed singular triangle T of T_0 (since the remainder of the triangulation is quasi-uniform). By the estimates (29), (31) and (32), we can claim that
Consequently we are reduced to estimating the quotient h̃_{K_k} / h̃_{K_l} when k ≥ l, for any triangles K_k ∈ T̃_k and K_l ∈ T̃_l with K_k ⊂ K_l. This quotient is now easily estimated from above and from below by using the mean value theorem and by distinguishing the cases when K_l contains the singular corner or not.
Remark 5.4 With our meshes, we have by Corollaries 3.4 and 4.2 and the two above lemmas that
κ(A_j) ≲ C(μ)(j + 1)² and κ(P_MDS) ≲ C(μ)(j + 1),
where C(μ) is a positive constant which depends on e^{6(1/μ − 1)} and can therefore blow up as μ tends to 0. This fact is confirmed by the numerical tests given in the next section.
6 Numerical results
In this Section, we present some numerical results which confirm our theoretical results
derived in Sections 3 and 4.
Let us consider the boundary value problem (2) in a domain Ω with a re-entrant corner (see Figure 6).
It is well known that the weak solution u of such a problem admits, in the neighbourhood of the singular point, i.e., in the neighbourhood of the re-entrant corner, a singular representation involving a regular part in H²(Ω), the singular function (with exponent … in our example), and the stress intensity factor c (see, e.g., [13, 16]). Here, (r, φ) are polar coordinates centered at the re-entrant corner.
Using graded meshes with a suitable grading parameter μ, one gets the optimal convergence order of the finite element solution of problem (2). Figure 6 shows the mesh T_0 and the mesh T_3 resulting from the mesh generation procedure described in Section 5 with the grading parameter μ = ….
Figure 6: Domain Ω with meshes T_0 and T_3
Next we want to show by our experiments the dependence of the condition number κ(A_j) of the Galerkin stiffness matrix A_j in the hierarchical basis on the number j of levels used (Figure 7). In the experiments we use different values of the grading parameter μ. One can observe that κ(A_j)/(j + 1)² is nearly constant, and consequently the experiments confirm the theoretical estimate given in Corollary 3.4.
Figure 7: κ(A_j)/(j + 1)² as a function of j
Figure 8 shows the behaviour of κ(P_MDS) in dependence on the number j of levels used. The numerical experiments confirm the statement given in Theorem 4.1.
Figure 8: κ(P_MDS) as a function of j
Acknowledgement
We thank Dr. T. Apel for many discussions on this topic.
--R
spaces.
A Software Package for Solving Elliptic Partial Differential Equations - Users' Guide 7.0
A basic norm equivalence for the theory of multilevel methods.
New estimates for multilevel algorithms including the V-cycle
Parallel multilevel preconditioners.
The Finite Element Method for Elliptic Problems.
Approximation by finite element functions using local regularization.
Multilevel preconditioning.
Finite Element Methods for Navier-Stokes Equations: Theory and Algorithms
Multilevelmethoden als Iterationsverfahren über Erzeugendensystemen
Elliptic Problems in Nonsmooth Domains.
On adaptive grids in multilevel methods.
On discrete norm estimates related to multilevel preconditioners in the finite element method.
Multilevel Finite Element Approximation: Theory and Applications.
Résolution numérique de problèmes elliptiques dans des domaines avec coins.
Finite element interpolation of nonsmooth functions satisfying boundary conditions.
An Analysis of the Finite Element Method.
Iterative methods by space decomposition and subspace correction.
On the multi-level splitting of finite element spaces
Multilevel Schwarz methods.
On the finite element method.
| graded meshes;finite element discretizations;multilevel methods;mesh refinement |
586856 | Adaptive and Efficient Algorithms for Lattice Agreement and Renaming. | In a shared-memory system, n independent asynchronous processes, with distinct names in the range {0, ..., N-1}, communicate by reading and writing to shared registers. An algorithm is wait-free if a process completes its execution regardless of the behavior of other processes. This paper considers wait-free algorithms whose complexity adjusts to the level of contention in the system: An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number of active processes, k; this number is unknown in advance and may change in different executions of the algorithm.Adaptive algorithms are presented for two important decision problems, lattice agreement and (6k-1)-renaming; the step complexity of both algorithms is O(k log k). An interesting component of the (6k-1)-renaming algorithm is an O(N) algorithm for (2k-1)-renaming; this improves on the best previously known (2k-1)-renaming algorithm, which has O(Nnk) step complexity.The efficient renaming algorithm can be modified into an O(N) implementation of atomic snapshots using dynamic single-writer multi-reader registers. The best known implementations of atomic snapshots have step complexity O(N log N) using static single-writer multi-reader registers, and O(N) using multi-writer multi-reader registers. | Introduction
An asynchronous shared-memory system contains n processes running at arbitrary speeds and communicating by reading from and writing to shared registers; processes have distinct names in the range {0, …, N − 1}. In a wait-free algorithm, a process terminates in a finite number of steps, even if other processes are very slow, or even stop taking steps completely.
The step complexity of many wait-free algorithms depends on N; for example, collecting up-to-date information from all processes typically requires reading an array indexed by processes' names. Real distributed systems need to accommodate a large
number of processes, i.e., N is large, while often only a small number of processes take
part in the computation. For such systems, step complexity depending on n or N is
undesirable; it is preferable to have step complexity which adjusts to the number of
processes participating in the algorithm.
An algorithm is adaptive (to total contention) if its step complexity depends
only on the total number of processes participating in the algorithm, denoted k; k is
unknown in advance and it may change in different executions of the algorithm. The
step complexity of an adaptive algorithm adjusts to the number of active processes:
It is constant if a single process participates in the algorithm, and it gradually grows
as the number of active processes increases.
A weaker guarantee is provided by range-independent algorithms whose step complexity
depends only on n, the maximal number of processes; clearly, n is fixed for all
executions. 1 The advantage of range-independent algorithms is quite restricted: They
require a priori knowledge of n, which is often difficult to determine; moreover, their
An extended abstract of this paper appeared in proceedings of the 17th ACM Symposium on
Principles of Distributed Computing (June 1998), pp. 277-286.
2 Department of Computer Science, The Technion, Haifa 32000, Israel (hagit@cs.technion.ac.il,
leonf@cs.technion.ac.il). Supported by the fund for the promotion of research at the Technion.
1 Moir and Anderson [27] use the term "fast", which conflicts with other papers [3, 25].
2 Attiya and Fouren
Fig. 1. The algorithms presented in this paper; double boxes indicate the main results.
step complexity is not optimal when the actual number of participating processes is
much lower than the upper bound. Yet, as we show, they can be useful tools in the
construction of adaptive algorithms.
This paper presents adaptive wait-free algorithms for lattice agreement and renaming, using only read and write operations. Along the way, we improve the step
complexity of non-adaptive algorithms for renaming. Figure 1 depicts the algorithms
presented in this paper.
In the one-shot M -renaming problem [10], processes are required to choose distinct
names in a range of size M (k), for some bounded function M . This paper does
not consider the more general long-lived renaming problem [9], in which processes repeatedly
acquire and release names. Adaptive renaming can serve as an intermediate
step in adaptive algorithms for other problems [9, 26, 27, 28]: The new names replace
processes' original names, making the step complexity depend only on the number of
active processes. Our algorithms employ this technique, as well as [6, 7].
An efficient adaptive algorithm for renaming could not be derived from known
algorithms: The best previously known algorithm for renaming with linear name
space [18] has O(Nnk) step complexity, yielding O(k 3 ) step complexity (at best) if
it can be made adaptive. Thus, we first present an (2k \Gamma 1)-renaming algorithm
with O(N ) step complexity, which is neither adaptive nor range-independent. This
algorithm is based on a new "shrinking network" construction, which we consider to
be the novel algorithmic contribution of our paper.
The new linear renaming algorithm is employed in a range-independent algorithm
for (2k − 1)-renaming with O(n log n) step complexity. Processes start with an
adaptive O(k 2 )-renaming algorithm whose step complexity is O(k); this is a simple
modification of the range-independent renaming algorithm of Moir and Anderson [27].
Then, processes reduce the range of names in O(logn) iterations; each iteration uses
our new linear renaming algorithm.
The range-independent renaming algorithm is used to construct an adaptive (6k − 1)-renaming algorithm with O(k log k) step complexity. In this algorithm, processes are partitioned into O(log k) disjoint sets according to views obtained from an adaptive
lattice agreement algorithm (described below). This partition bounds the number
of processes in each set, and allows them to employ a range-independent (2k − 1)-renaming algorithm designed for this bound. Different sets use disjoint name spaces;
no coordination between the sets is required.
In the lattice agreement problem [15], processes obtain comparable (by contain-
ment) subsets of the set of active processes. A wait-free lattice agreement algorithm
can be turned into a wait-free implementation of an atomic snapshot object, with
O(n) additional read/write operations [15]. Atomic snapshot objects allow processes
to get instantaneous global views ("snapshots") of the shared memory and thus, they
simplify the design of wait-free algorithms.
The step complexity of our adaptive algorithm for lattice agreement is O(k log k).
In this algorithm, processes first obtain names in a range of size O(k 2 ) using the
simple algorithm with O(k) step complexity. Based on its reduced name, a process
enters an adaptive variant of the tree used in the lattice agreement algorithm of Inoue
et al. [23].
Appendix C describes how the shrinking network is modified to get a lattice agreement algorithm with O(N) step complexity, using dynamic single-writer single-reader registers; this gives an implementation of atomic snapshots with the same
registers; this gives an implementation of atomic snapshots with the same
complexity. Previous implementations of atomic snapshots had either O(N log N )
step complexity using static single-writer multi-reader registers [16], or O(N ) step
complexity using multi-writer multi-reader registers [23].
The renaming problem was introduced and solved by Attiya et al. [10] for the
message-passing model; Bar-Noy and Dolev [17] solved the problem in the shared-memory
model. Burns and Peterson [19] considered the l-assignment problem-
dynamic allocation of l distinct resources to processes. They present a wait-free
l-assignment algorithm which assumes l is the number of processes
trying to acquire a resource. All these algorithms have exponential step complexity
[21]. Borowsky and Gafni [18] present an algorithm for one-shot (2k \Gamma 1)-renaming
using O(Nnk) read/write operations.
Anderson and Moir [9] define long-lived renaming and present range-independent
algorithms for one-shot and long-lived renaming; their algorithms use test&set op-
erations. Moir and Anderson [27] introduced a building block, later called a split-
ter, and employ it in range-independent algorithms for long-lived renaming, using
read/write operations. Moir and Garay [28, 26] give a range-independent long-lived
O(kn)-renaming algorithm, using only read/write operations. By combining with a
long-lived (2k − 1)-renaming algorithm [19] they obtain a range-independent long-lived (2k − 1)-renaming algorithm; its step complexity is dominated by the exponential step complexity of Burns and Peterson's algorithm.
Herlihy and Shavit [22] show that one-shot renaming requires at least 2k − 1 names. This implies that our range-independent renaming algorithm provides an optimal name space. The name space provided by our adaptive renaming algorithm is not optimal, but it is linear in the number of active processes.
Following the original publication of our paper [12], Afek and Merritt [4] used our
algorithms to obtain an adaptive wait-free (2k \Gamma 1)-renaming algorithm, with O(k 2 )
step complexity.
In another paper [13], we present an adaptive collect algorithm with O(k) step
complexity and derive adaptive algorithms for atomic snapshots, immediate snapshots
and 1)-renaming. That paper emphasizes the modular use of a collect operation
to make known algorithms adaptive; the algorithms have higher step complexity than
those presented here.
Our algorithms adapt to the total number of participating processes, that is, if a
process ever performs a step then it influences the step complexity of the algorithm
throughout the execution. More useful are algorithms which adapt to the current contention
and whose step complexity decreases when processes stop participating. Afek,
Dauber and Touitou [3] present implementations of long-lived objects which adapt to
the current contention; they use load-linked and store-conditional operations. Recent
papers present algorithms for long-lived renaming [2, 14], collect [6] and snapshots [7]
which adapt to the current contention using only read/write operations.
Lamport [25] suggests a mutual exclusion algorithm which requires a constant
number of steps when a single process wishes to enter the critical section, using
read/write operations; when several processes compete for the critical section, the
step complexity depends on the range of names. Alur and Taubenfeld [8] show that
this behavior is inherent for mutual exclusion algorithms. Choy and Singh [20] present
mutual exclusion algorithms whose time complexity-the time between consecutive
entries to the critical section-is O(k), using only read/write operations. Afek, Stupp
and Touitou [6] use an adaptive collect algorithm to derive an adaptive version of
the Bakery algorithm [24]; they present another mutual exclusion algorithm in [5].
Recently, Attiya and Bortnikov [11] presented a mutual exclusion algorithm whose
time complexity is O(log k); this algorithm employs an unbalanced tournament tree
with the same structure as our adaptive lattice agreement tree.
2. Preliminaries.
2.1. The Model. In the shared-memory model, processes communicate by applying operations on shared objects. A process p_i is modeled as a (possibly infinite) state machine; process p_i has a distinct name id_i ∈ {0, …, N − 1}.
The shared objects considered in this paper are atomic read/write registers, accessed
by read and write operations. A read(R) operation does not change the state of
R, and returns the current value stored in R; a write(v,R) operation changes the state
of R to v. A multi-writer multi-reader register allows any process to perform read and
operations. A single-writer multi-reader register allows only a single process to
perform write operations, and any process to perform read operation. A single-writer
multi-reader register is dynamic if the identity of the single process writing to the
register varies in different executions; otherwise, it is static.
An event is a computation step by a single process; the process determines the
operation to perform according to its state, and its next state according to its state
and the value returned by the operation. Computations in the system are captured
as sequences of events. An execution α is a (finite or infinite) sequence of events φ₁, φ₂, …; if p_i is the process performing the event φ_r, then it applies a read or a write operation to a single register and changes its state according to its transition function. There are no constraints on the interleaving of events by different processes, reflecting the assumption that processes are asynchronous and there is no bound on their relative speeds.
Consider an execution α of some algorithm A. For a process p_i, step(A, α, i) is the number of read/write operations p_i performs in α. The step complexity of A in α, denoted step(A, α), is the maximum of step(A, α, i) over all processes p_i. A process is active in α if it takes a step in α, that is, if step(A, α, i) > 0; k(α) denotes the number of active processes in α.
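These definitions have a direct operational reading; a toy sketch (here an execution is abstracted as the sequence of process names taking steps, a simplification of the event model above):

```python
from collections import Counter

def step_counts(execution):
    """step(A, alpha, i) for each process i, from an execution given as a
    sequence of process names (one entry per read/write operation)."""
    return Counter(execution)

def step_complexity(execution):
    """step(A, alpha): the maximum number of operations by any single process."""
    counts = step_counts(execution)
    return max(counts.values()) if counts else 0

def total_contention(execution):
    """k(alpha): the number of active processes, i.e., those taking >= 1 step."""
    return len(set(execution))
```

For instance, an adaptive algorithm must bound step_complexity by a function of total_contention alone, while a range-independent one may use the known bound n.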
Algorithm A is range-independent if there is a function f : ℕ → ℕ such that in every execution α of A, step(A, α) ≤ f(n). Namely, the step complexity of A in every execution is bounded by a function of the total number of processes (which is known in advance); it does not depend on the range of the initial names.
Adaptive Lattice Agreement and Renaming

Fig. 2. The grid used for O(k²)-renaming; splitters are numbered by diagonals (the numbering of Moir and Anderson appears in square brackets).
Algorithm A is adaptive (to total contention) if there is a function f : N → N such that in every execution α of A, step(A, α) ≤ f(k(α)). Namely, the step complexity of A in α is bounded by a function of the number of active processes in α. Clearly, the number of active processes is not known a priori.
A wait-free algorithm guarantees that every process completes its computation in
a finite number of steps, regardless of the behavior of other processes. Since k(α) is bounded (by n), it follows that adaptive algorithms are wait-free.
2.2. Problems. The M-renaming problem [10] requires processes to choose distinct names in a range that depends only on the number of active processes. Namely, there is a function M : N → N such that in every execution α, processes output distinct names in the range {1, …, M(k(α))}.

In the lattice agreement problem [15], every process p_i outputs a set V_i of the active processes (i.e., a view) such that the following conditions hold: (1) the views are comparable, that is, either V_i ⊆ V_j or V_j ⊆ V_i, for every pair of processes p_i and p_j; and (2) self-inclusion holds, that is, p_i ∈ V_i for every i.
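The two conditions (comparability and self-inclusion) are mechanical to check; a small sketch, with views modeled as Python sets of process names:

```python
def is_lattice_agreement(views):
    """views: dict mapping each process name to its view (a set of process names).
    Checks self-inclusion and pairwise comparability."""
    if not all(p in view for p, view in views.items()):   # self-inclusion
        return False
    vs = list(views.values())
    return all(a <= b or b <= a                           # comparability
               for i, a in enumerate(vs) for b in vs[i + 1:])

assert is_lattice_agreement({1: {1}, 2: {1, 2}, 3: {1, 2, 3}})
assert not is_lattice_agreement({1: {1, 2}, 2: {2, 3}})   # incomparable views
assert not is_lattice_agreement({1: {2}, 2: {2}})         # 1 not in its own view
```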
2.3. Simple O(k²)-Renaming in O(k) Operations. The first step in our algorithms is a simple adaptive O(k²)-renaming algorithm. This algorithm reduces the range of names to depend only on the number of active processes; later stages use these distinct names, without sacrificing the adaptiveness. We describe this algorithm first since it is employed in both adaptive algorithms presented in this paper.

The basic building block of this algorithm is the splitter of Moir and Anderson [27]. A process executing a splitter obtains down, right or stop. At most one process obtains stop, and when a single process executes the splitter it obtains stop; when two or more processes execute the splitter, not all of them obtain the same value. In this way, the set of processes accessing the splitter is "split" into smaller subsets.
As in [27], splitters are arranged in a grid of size n × n (Figure 2). A process starts at the upper left corner of the grid; the splitters direct the process either to continue (moving right or down in the grid), or to obtain the number associated with the current splitter. The grid spreads the processes so that each process eventually stops in a distinct splitter.

The difference between the algorithm of Moir and Anderson [27] and our algorithm is that they number splitters by rows, while we number splitters by diagonals: splitter (i, j), in row i and column j, lies on diagonal i + j, and the splitters on diagonals 0, 1, 2, … receive consecutive numbers, so that the splitters on the first d diagonals are numbered 1, …, d(d + 1)/2. Figure 2 shows our numbering; the numbering of Moir and Anderson appears in square brackets.
Attiya and Fouren
Algorithm 1 Adaptive k(k + 1)/2-renaming.

Procedure Adaptive-k(k+1)/2-renaming()
  private i, j: integer, initially 0 // row and column indices
  private move: {down, right, stop}, initially down // direction
1. while ( move ≠ stop ) do
2.   move := Splitter[i, j]() // execute splitter in grid position (i, j)
3.   if ( move = down ) then i++ // increase row
4.   if ( move = right ) then j++ // increase column
5. return( the number of splitter (i, j) ) // name based on the current splitter

Procedure Splitter[i, j] // from Moir and Anderson [27]
  shared X[i, j]: process id, initially ⊥
  shared Y[i, j]: Boolean, initially false
1. X[i, j] := id_p
2. if ( Y[i, j] ) then return(right)
3. else Y[i, j] := true
4.   if ( X[i, j] = id_p ) then return(stop)
5.   else return(down)
Algorithm 1 presents pseudocode for the grid and for a splitter.²

We say that splitter (i, j) is i + j steps away from splitter (0, 0), the top left corner of the grid. As shown in [27, Section 3.1], if k processes access the grid then each process stops after O(k) operations in a distinct splitter which is at most k − 1 steps away from (0, 0). A simple counting argument shows that these splitters have numbers in the range {1, …, k(k + 1)/2}.

Theorem 2.1. Algorithm 1 solves k(k + 1)/2-renaming with O(k) step complexity.
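A toy, single-threaded simulation of the grid (cooperative interleaving via Python generators) illustrates the theorem; the diagonal-numbering formula in the last line of grid_process is our assumption, chosen to be consistent with the counting argument above:

```python
import itertools

def grid_process(pid, X, Y):
    """One process running the splitter grid; yields after each shared access."""
    i = j = 0
    while True:
        key = (i, j)
        X[key] = pid                                    # splitter line 1: X := id
        yield
        y = Y.get(key, False)                           # splitter line 2: read Y
        yield
        if y:
            move = "right"
        else:
            Y[key] = True                               # splitter line 3: Y := true
            yield
            move = "stop" if X[key] == pid else "down"  # lines 4-5: read X
            yield
        if move == "stop":
            break
        i, j = (i + 1, j) if move == "down" else (i, j + 1)
    d = i + j                                  # diagonal of the final splitter
    yield ("done", d * (d + 1) // 2 + i + 1)   # assumed diagonal numbering

def run_grid(k):
    X, Y, names = {}, {}, {}
    procs = {p: grid_process(p, X, Y) for p in range(k)}
    for p in itertools.cycle(range(k)):        # round-robin interleaving
        if p in procs:
            result = next(procs[p])
            if result is not None:
                names[p] = result[1]
                del procs[p]
        if not procs:
            break
    return names

names = run_grid(5)
assert len(set(names.values())) == 5           # distinct names
assert max(names.values()) <= 5 * 6 // 2       # within k(k+1)/2
```

Under the round-robin schedule each splitter "splits" its visitors, and all five processes stop on distinct splitters within the first five diagonals.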
3. (2k − 1)-Renaming in O(N) Operations. As explained in the introduction, the step complexity of our adaptive renaming depends on a new linear renaming algorithm, which is neither range-independent nor adaptive.

The algorithm is organized as a network of reflectors. A reflector has two distinguished entrances; a process accessing the reflector changes the direction of its movement if another process accessed the reflector, depending on the entrance through which it entered the reflector.
The network consists of N columns, numbered from left to right (see Figure 3). Column c contains reflectors in rows c, c − 1, …, −c, from top to bottom. Process q with name c starts at the topmost reflector of column c and descends through column c, until it sees another process accessing the same reflector. Then, q moves left to right towards column N − 1, and outputs the row on which it exits column N − 1.

For column c, S_{c−1} is the set of processes starting in columns 1, …, c − 1. The main property of the network is that processes in S_{c−1} enter column c on distinct rows among the lowest 2|S_{c−1}| ones. Therefore, processes in S_{c−1} do not access the same reflectors in column c (or larger columns); they may interfere only with the single process descending through column c.
² Algorithms declare private variables only if their usage is not obvious, or their initial value is important.
Process q descends through column c until it accesses a reflector in row r through which a process in S_{c−1} has passed; then, q moves to column c + 1, remaining in row r. If process p ∈ S_{c−1} accesses a reflector which q has passed, then p moves one row up to column c + 1; if p accesses a reflector which q did not pass, then p moves one row down to column c + 1. Therefore, processes in S_{c−1} which enter column c on rows ≥ r move one row up; processes in S_{c−1} which enter column c on rows < r move one row down. Process q leaves on one of the free rows between the rows occupied by these two subsets of S_{c−1}. In this manner, the processes in S_{c−1} ∪ {q} leave column c on distinct rows. Since the new names of the processes are the rows on which they leave the network, they output distinct names.

The interaction of processes in column c guarantees that processes in S_{c−1} move to upper rows in column c only if q is active; at most two additional rows are occupied (Figure 5(b)). If q is not active, then processes leave column c exactly on the same number of rows as they enter (Figure 5(a)). Thus, an active process causes at most two rows to be occupied; if there are k active processes, then they leave the network on the lowest 2k − 1 rows.
More formally, a reflector has two entrances, in_0 and in_1, two lower exits, down_0 and down_1, and two upper exits, up_0 and up_1. A process entering the reflector on entrance in_i leaves the reflector only on exits up_i or down_i (see top left corner of Figure 3). If a single process enters the reflector then it must leave on a lower exit, and at most one process leaves on a lower exit; it is possible that two processes entering the reflector will both leave on upper exits. A reflector is easily implemented with two Boolean registers (see Algorithm 2).
The reflectors of column c, denoted S[c, c], …, S[c, −c], are connected as follows:
- The upper exit up_0 of S[c, r] is connected to entrance in_0 of S[c + 1, r + 1].
- The upper exit up_1 of S[c, r] is connected to entrance in_0 of S[c + 1, r].
- The lower exit down_0 of S[c, r] is connected to entrance in_0 of S[c + 1, r − 1].
- The lower exit down_1 of a reflector S[c, r] is connected to entrance in_1 of reflector S[c, r − 1]; if r = −c (the lowest reflector of column c), then it is connected to entrance in_0 of reflector S[c + 1, r − 1].
In Algorithm 2, a process with name c starts on entrance in_1 of the upper reflector of column c; it descends through column c (leaving on exit down_1) until it sees another process or it reaches the bottom of the column. At this point, the process leaves on exit up_1 to the next column, and moves towards column N − 1; in each subsequent column it enters exactly one reflector on entrance in_0; it leaves on exit up_0 if it sees another process, or on exit down_0, otherwise.

Suppose that p_j enters the reflector on entrance in_i, i ∈ {0, 1}, and no process enters the reflector on entrance in_{1−i}. Since no process writes to R_{1−i}, p_j reads false from R_{1−i} and leaves the reflector on the lower exit, down_i. This implies the following lemma:
Lemma 3.1. If a single process enters a reflector, then it leaves on a lower exit.
Similar arguments are used in the proof of the next lemma:
Lemma 3.2. If a single process enters the reflector on in_0 and a single process enters the reflector on in_1, then at most one process leaves the reflector on a lower exit.
Proof. Assume that p_i enters the reflector on in_0 and p_j enters the reflector on in_1. If both processes read true from R_1 and R_0, respectively, then by the algorithm, exit(p_i) = up_0 and exit(p_j) = up_1, and the lemma holds. Otherwise, without loss of generality, p_i reads
Fig. 3. A reflector (top left), and the network of reflectors for (2k − 1)-renaming.
false from R_1. Since p_i reads false from R_1, p_j writes to R_1 in Line 1 after p_i reads R_1 at Line 2. Therefore, p_j reads R_0 in Line 2 after p_i writes to R_0 in Line 1. Consequently, p_j obtains true from R_0 and by the algorithm, exit(p_j) = up_1. This proves the lemma.
Recall that S_c contains the active processes starting on columns 1, …, c. For every process p_i, let row(p_i, c) be the value of the local variable row before p_i accesses the first reflector in column c + 1. The next lemma shows that processes exit a column on distinct rows.

Lemma 3.3. For every pair of processes p_i, p_j ∈ S_c, row(p_i, c) ≠ row(p_j, c).

Proof. The proof is by induction on the column c. In the base case, c = 0, the lemma trivially holds since only one process may access a reflector in column 0. For the induction step, suppose that the lemma holds for column c ≥ 0; there are two cases:
Algorithm 2 (2k − 1)-renaming.

Procedure shrink(name) // linear renaming algorithm
  private col, row: integer, initially name // start on top reflector of column name
1. while ( col = name ) do // descend through column name
2.   exit := reflector[row, col](1) // enter on in_1
3.   if ( exit = up_1 ) then col++ // saw another process; move right
4.   else row-- // continue descending
5.   if ( row < -col ) then col++ // reached the lowest reflector in the column
6. while ( col < N ) do // move towards column N − 1
7.   exit := reflector[row, col](0) // enter on in_0
8.   if ( exit = up_0 ) then col++; row++ // move one row up
9.   else col++; row-- // move one row down
10. return(row + N)

Procedure reflector(entrance r : 0,1)
1. R_r := true
2. if ( R_{1−r} = false ) then return(down_r)
3. else return(up_r)
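Lemmas 3.1 and 3.2 can be sanity-checked by enumerating interleavings of the two-register implementation; this is a sketch, with the register layout and exit rule reconstructed from the prose above:

```python
import itertools

def make_reflector():
    """Two-register reflector: entrance r first sets R[r] (Line 1), then
    inspects R[1-r] (Lines 2-3); reading false means 'alone so far', so the
    process takes the lower exit."""
    R = [False, False]
    def write(r):                 # Line 1: R_r := true
        R[r] = True
    def read(r):                  # Lines 2-3: choose the exit
        return ("down", r) if not R[1 - r] else ("up", r)
    return write, read

def run_interleaving(steps):
    """steps: sequence of (pid, 'w'|'r'); pid doubles as the entrance index."""
    write, read = make_reflector()
    exits = {}
    for pid, phase in steps:
        if phase == "w":
            write(pid)
        else:
            exits[pid] = read(pid)
    return exits

# Lemma 3.1: a process running alone leaves on a lower exit.
assert run_interleaving([(0, "w"), (0, "r")])[0] == ("down", 0)

# Lemma 3.2: under every interleaving of two processes (each writing before
# it reads), at most one process leaves on a lower exit.
ops = [(0, "w"), (0, "r"), (1, "w"), (1, "r")]
for perm in itertools.permutations(ops):
    if perm.index((0, "w")) < perm.index((0, "r")) and \
       perm.index((1, "w")) < perm.index((1, "r")):
        exits = run_interleaving(perm)
        assert sum(1 for e in exits.values() if e[0] == "down") <= 1
```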
Case 1: If no process starts on column c + 1, then by the algorithm, no reflector in column c + 1 is accessed on in_1. By Lemma 3.1, every process p_i ∈ S_c leaves column c + 1 on a lower exit. By the algorithm, we have row(p_i, c + 1) = row(p_i, c) − 1 (Figure 4(a)), and the lemma holds by the induction hypothesis.
Case 2: Suppose that process q starts on column c + 1. Let S[c + 1, r_0] be the last reflector accessed by q in column c + 1. That is, q leaves the reflectors of column c + 1 above row r_0 on exit down_1, and does not access any of the reflectors below row r_0.

By Lemma 3.2, every process p_i ∈ S_c which enters column c + 1 on row r_0 or higher exits column c + 1 on up_0, and we have row(p_i, c + 1) = row(p_i, c) + 1. By Lemma 3.1, every process p_i ∈ S_c which enters column c + 1 on a row lower than r_0 exits column c + 1 on down_0, and we have row(p_i, c + 1) = row(p_i, c) − 1.

Now consider process q. By the algorithm, q leaves column c + 1 either on exit down_1 of the lowest reflector in the column, S[c + 1, −(c + 1)], or on exit up_1 of the reflector S[c + 1, r_0].

Fig. 4. Illustration for the proof of Lemma 3.3 (column c + 1).

If q leaves reflector S[c + 1, −(c + 1)] on down_1 (Figure 4(b)), then by the algorithm, row(q, c + 1) is below row(p_i, c + 1) for every process p_i ∈ S_c.

If q leaves reflector S[c + 1, r_0] on up_1 (Figure 4(c)), then by the algorithm, row(q, c + 1) = r_0. Processes in S_c that entered on rows ≥ r_0 moved up to rows ≥ r_0 + 1, and processes that entered on rows < r_0 moved down to rows ≤ r_0 − 2; hence q occupies a free row between the two subsets.

The induction hypothesis and the above equations imply that in all cases, row(p_i, c + 1) ≠ row(p_j, c + 1), for every pair of processes p_i, p_j ∈ S_{c+1}.
Therefore, processes exit the network on different rows and hence obtain distinct names. The next lemma shows that processes in S_c leave column c on the lowest rows.

Lemma 3.4. For every process p_i ∈ S_c, row(p_i, c) is among the lowest 2|S_c| rows.

Proof. The proof is by induction on c. In the base case, c = 0, there is at most one process in S_0 (the process with name 0), since no process accesses reflector S[0, 0] on in_0. Therefore, by the algorithm, row(p_i, 0) is among the lowest two rows, and the lemma holds.
For the induction step, suppose that the lemma holds for column c ≥ 0; there are two cases:

Case 1: If no process starts on column c + 1, then no process accesses reflectors in column c + 1 on entrance in_1 (Figure 5(a)). Therefore, by Lemma 3.1, each process p_i ∈ S_c leaves column c + 1 on a lower exit, with row(p_i, c + 1) = row(p_i, c) − 1. By the induction hypothesis, processes in S_c enter column c + 1 on its lowest 2|S_c| rows; since |S_{c+1}| = |S_c|, the lemma follows.

Case 2: Suppose that process q starts on column c + 1. By the induction hypothesis, processes from S_c access only the lowest 2|S_c| reflectors in column c + 1. Since no process accesses the upper reflectors of column c + 1 on in_0, by Lemma 3.1, q descends through these reflectors until it reaches a reflector S[c + 1, r_0]
Fig. 5. Illustration for the proof of Lemma 3.4 (column c + 1).
accessed by another process, or until it reaches the lowest reflector S[c + 1, −(c + 1)] in the column (Figure 5(b)). Therefore, q leaves column c + 1 either on exit down_1 of reflector S[c + 1, −(c + 1)] or on exit up_1 of a reflector S[c + 1, r_0] among the lowest 2|S_c| + 1 reflectors of the column. By the algorithm, this implies

(1) row(q, c + 1) is among the lowest 2|S_c| + 2 rows.

According to the algorithm, for each process p_i ∈ S_c, row(p_i, c + 1) = row(p_i, c) ± 1. Together with the induction hypothesis, this implies

(2) row(p_i, c + 1) is among the lowest 2|S_c| + 2 rows.

Also,

(3) |S_{c+1}| = |S_c| + 1, and hence 2|S_{c+1}| = 2|S_c| + 2.

The lemma follows from inequalities (1), (2) and (3).
Lemma 3.4 implies that processes leave the network on its lowest 2k − 1 rows. Since |S_{N−1}| ≤ k, the names chosen in Line 10 are in the range {1, …, 2k − 1}.

Process p_j accesses at most 2·id_j + 1 reflectors in column id_j, and exactly one reflector in each column id_j + 1, …, N − 1. Each reflector requires a constant number of operations, implying the next theorem:

Theorem 3.5. Algorithm 2 solves (2k − 1)-renaming with step complexity O(N).

The network consists of O(N²) reflectors; each reflector is implemented with two registers. Register R_i of a reflector is written only by a process entering the reflector on entrance in_i. Entrance in_1 of a reflector is accessed only by the single process starting on this column, and entrance in_0 is accessed by at most one process (by Lemma 3.3). Therefore, we use O(N²) dynamic single-writer single-reader registers.
Fig. 6. The range-independent algorithm for (2k − 1)-renaming: a complete binary tree of shrink copies (shrink[1] at the root), entered through k(k + 1)/2-renaming (Algorithm 1).
Algorithm 3 Range-independent (2k − 1)-renaming for n processes.

Procedure indRenaming_n()
1. temp-name := Adaptive-k(k+1)/2-renaming() // Algorithm 1
2. ℓ := 2^h + ⌊temp-name/2⌋ // h is the height of the tree; start at the leaf for temp-name
3. side := temp-name mod 2
4. temp-name := 0
5. while ( ℓ ≥ 1 ) do
6.   temp-name := shrink[ℓ](temp-name + side · (2n − 1)) // Algorithm 2
7.   side := ℓ mod 2
8.   ℓ := ⌊ℓ/2⌋
9. return(temp-name)
4. (2k − 1)-Renaming in O(n log n) Operations. A range-independent (2k − 1)-renaming can be obtained by combining adaptive O(k²)-renaming and non-adaptive (2k − 1)-renaming. First, the names are reduced into a range of size O(n²) (Algorithm 1); these names are used to enter the shrinking network of Algorithm 2, which reduces them into a range of size 2k − 1. The shrinking network is started with names of size O(n²), and hence, the step complexity of this simple algorithm is O(n²). The algorithm presented in this section obtains O(n log n) step complexity by reducing the name space gradually in O(log n) iterations. To do so, distinct copies of shrink (Algorithm 2) are associated with the vertices of a complete binary tree of height O(log n). Each copy of shrink is designed for names in a range of size 4n − 2; that is, it employs a network with N = 4n − 2 columns.

A process starts Algorithm 3 by acquiring a name using O(k²)-renaming; this name determines from which leaf to start. The process performs the shrinking network associated with each vertex v on the path from the leaf to the root, starting at a column which is determined by the name obtained at the previous vertex: If it ascends from the left subtree of v, then it starts at one of the first 2n − 1 columns of the network; otherwise, it starts at one of the last 2n − 1 columns. The process outputs the name obtained at the root.
The vertices of the tree are numbered in BFS order (Figure 6): The root is numbered 1; if a vertex v is numbered ℓ, then its left child is numbered 2ℓ, and its right child is numbered 2ℓ + 1. The copy of Algorithm 2 associated with a vertex numbered ℓ is denoted shrink[ℓ].

Lemma 4.1. For every vertex v, processes executing shrink[v] obtain distinct temporary names in the range {0, …, 2k_v − 2}, where k_v is the number of processes executing shrink[v].
Proof. The proof is by induction on d, the height of v. In the base case,
Algorithm 4 Adaptive (6k − 1)-renaming.
1. V := AdaptiveLA() // Algorithm 5, presented below
2. r := ⌈log |V|⌉
3. temp-name := indRenaming_{2^r}() // Algorithm 3
4. if ( r = 0 ) then return(temp-name + 1)
5. else return(temp-name + 2^{r+1} − 1) // shift into the disjoint name space of this copy
After executing Algorithm 1, processes get distinct names in the range {1, …, k(k + 1)/2}. Therefore, at most one process accesses a leaf v from the left, executing shrink[v] with temporary name 0, and at most one process accesses v from the right, executing shrink[v] with temporary name 2n − 1. Thus, they execute shrink[v] with different temporary names in the range {0, …, 4n − 3}. Theorem 3.5 implies that they obtain distinct names in the range {0, 1, 2} and therefore, the lemma holds when d = 0.

For the induction step, assume the lemma holds for vertices at height d, and let v be a vertex at height d + 1. By the induction hypothesis and by the algorithm, processes accessing v from the left child have distinct temporary names in the range {0, …, 2n − 2}, and processes accessing v from the right child have distinct names in the range {2n − 1, …, 4n − 3}. Thus, processes execute shrink[v] with distinct names in the range {0, …, 4n − 3}, and they obtain distinct names in the range {0, …, 2k_v − 2}, by Theorem 3.5.

Therefore, processes obtain distinct names in the range {0, …, 2k − 2} upon completing shrink at the root. A process performs shrink in O(log n) vertices of the tree, and each vertex requires O(n) operations (Theorem 3.5). This implies the following theorem:

Theorem 4.2. Algorithm 3 solves (2k − 1)-renaming with O(n log n) step complexity.
5. (6k − 1)-Renaming in O(k log k) Operations. In our adaptive (6k − 1)-renaming algorithm, a process estimates the number of active processes and performs a copy of the range-independent (2k − 1)-renaming algorithm (Algorithm 3) designed for this number. Processes may have different estimates of k, the number of active processes, and perform different copies of Algorithm 3. Instead of consolidating the names obtained in the different copies, disjoint name spaces are allocated to the copies. The number of active processes is estimated by the size of a view obtained from lattice agreement; since views are comparable, the estimate is within a constant factor (see Lemma 5.1).

In Algorithm 4, process p_i belongs to a set S_j if the size of its view is in the range (2^{j−1}, 2^j]. For views obtained in lattice agreement, this partition guarantees that |S_j| ≤ 2^j for every j; moreover, if the number of active processes is k, then S_j = ∅ for every j > ⌈log k⌉. There are ⌈log n⌉ + 1 copies of Algorithm 3, denoted indRenaming_{2^0}, …, indRenaming_{2^{⌈log n⌉}}. Processes in S_j perform indRenaming_{2^j}, designed for 2^j processes, and obtain names in a range of size 2·2^j − 1. The name spaces for S_0, …, S_{⌈log n⌉} do not overlap, and their total size is linear in k (Figure 7).
Lemma 5.1. If the views of processes in a set S satisfy the comparability and self-inclusion properties of lattice agreement, and the size of each view is at most k, then |S| ≤ k.

Proof. Assume V is the view with maximal size in S. Let V_i be the view of some process p_i ∈ S. The self-inclusion property implies that p_i ∈ V_i, and the comparability
Fig. 7. Adaptive (6k − 1)-renaming: adaptive lattice agreement followed by copies of the range-independent renaming algorithm.
property implies that V_i ⊆ V. Therefore, S ⊆ V, implying that |S| ≤ |V| ≤ k.
By the algorithm, if process p_i is in S_j then |V_i| ≤ 2^j. Lemma 5.1 implies the next lemma:

Lemma 5.2. If there are k active processes, then |S_j| ≤ 2^j.

For every process p_i, |V_i| ≤ k, since the views contain only active processes. Therefore, S_j is empty whenever 2^{j−1} ≥ k, which implies the next lemma:

Lemma 5.3. If there are k active processes, then S_j = ∅ for every j > ⌈log k⌉.

By Lemma 5.2, at most 2^j processes invoke indRenaming_{2^j}. Therefore, process p_i invoking indRenaming_{2^j} obtains temp-name ∈ {0, …, 2·2^j − 2}, by Theorem 4.2. By the algorithm, p_i returns temp-name shifted into the name space allotted to this copy.
The set of names returned by processes performing indRenaming_{2^j} is denoted NameSpace_j; the next lemma follows from the algorithm:

Lemma 5.4. (1) NameSpace_i ∩ NameSpace_j = ∅ for every i ≠ j, 0 ≤ i, j ≤ ⌈log n⌉.
(2) |NameSpace_j| = 2·2^j − 1.

Lemma 5.5. If there are k active processes, then they return distinct names in the range {1, …, 6k − 1}.

Proof. If two active processes, p_i and p_j, execute the same copy of indRenaming, then they obtain distinct names by Theorem 4.2; otherwise, they obtain distinct names by Lemma 5.4(1).
By Lemma 5.3, processes invoke indRenaming_{2^j} only if 0 ≤ j ≤ ⌈log k⌉. By Lemma 5.4(2), processes invoking indRenaming_{2^j} with j < ⌈log k⌉ obtain names in the range {1, …, 2^{⌈log k⌉+1} − 2}. By Theorem 4.2, a process p_i invoking the last non-empty copy indRenaming_{2^{⌈log k⌉}} obtains a temporary name in the range {0, …, 2k − 2}. By the algorithm, p_i returns a name in the range {2^{⌈log k⌉+1} − 1, …, 2^{⌈log k⌉+1} + 2k − 3}.³

³ There are ⌈log k⌉ + 1 names which are not used. Therefore, the names obtained in the algorithm can be mapped into a name space of size 6k − ⌈log k⌉ − 2.
Thus, the output names are in a range whose size is not greater than 2^{⌈log k⌉+1} + 2k − 3 ≤ 6k − 1; the correctness of the algorithm follows from Lemma 5.5.
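The arithmetic behind the 6k − 1 bound can be checked numerically; the shift used below is our reconstruction of Line 5 of Algorithm 4, not a statement from the source:

```python
import math

def start_of_copy(r):
    """First name handed out by copy indRenaming_{2^r}, assuming the shift
    temp-name + 2^{r+1} - 1 (a reconstruction of Algorithm 4, Line 5)."""
    return 2 ** (r + 1) - 1

def max_name(k):
    """Largest possible output name with k active processes: all of them land
    in the last copy, which assigns temporary names in {0, ..., 2k - 2}."""
    r = math.ceil(math.log2(k)) if k > 1 else 0
    return start_of_copy(r) + 2 * k - 2

assert max_name(1) == 1
assert all(max_name(k) <= 6 * k - 1 for k in range(1, 1000))
assert max_name(5) == 6 * 5 - 7      # just above a power of two: nearly tight
```

The worst case is k just above a power of two, where the unused prefix of the name space approaches 4k.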
If there are k active processes, then each process performs AdaptiveLA (presented in the next section) in O(k log k) operations. By Lemma 5.3, only copies of indRenaming for less than 2k processes are invoked. Therefore, a process completes indRenaming in O(k log k) operations.

Theorem 5.6. Algorithm 4 solves (6k − 1)-renaming with O(k log k) step complexity.

The upper bound on the size of the name space, 6k − 1, is tight for Algorithm 4.
Assume that all processes executing lattice agreement obtain the maximal view (with size k) and access indRenaming_{2^{⌈log k⌉}}. The processes leave the range {1, …, 2^{⌈log k⌉+1} − 2} unused (since it is unknown whether the previous copies of indRenaming are empty or not) and return names in the range {2^{⌈log k⌉+1} − 1, …, 2^{⌈log k⌉+1} + 2k − 3}. If k is not an integral power of 2, then the output names are in a range of size approaching 6k; if k is an integral power of 2, then the output names are in a range of size 4k − 3.
Merritt (private communication) noted that the size of the name space can be reduced by partitioning the active processes into sets according to the powers of an integer a > 2. Active processes are partitioned into sets S_0, S_1, …; processes in S_j perform adaptiveRenaming_{a^j}, designed for a^j participants, and obtain new names in a range of size 2a^j − 1. As in our algorithm, when k processes are active, adaptiveRenaming_{a^j} is accessed only for 0 ≤ j ≤ ⌈log_a k⌉. Processes accessing copies adaptiveRenaming_{a^j} with j < ⌈log_a k⌉ obtain names in a space of size at most

Σ_{j=0}^{⌈log_a k⌉−1} (2a^j − 1) ≤ 2(a^{⌈log_a k⌉} − 1)/(a − 1) ≤ 2(ak − 1)/(a − 1),

which tends to 2k when a → ∞. Processes accessing the last non-empty copy, adaptiveRenaming_{a^{⌈log_a k⌉}}, obtain new names in a range of size 2k − 1. Thus, the size of the total name space tends to 4k − 1 as a grows.
6. Lattice Agreement in O(k log k) Operations. Our lattice agreement algorithm is based on the algorithm of Inoue et al. [23]. In their algorithm, each process starts at a distinct leaf (based on its name) of a complete binary tree with height log n, and climbs up the tree to the root. At each vertex on the path, it performs a procedure which merges together two sets of views, each set containing only comparable views; this procedure is called union. At the leaf, the process uses its own name as input to union; at the inner vertices, it uses the view obtained in the previous vertex as input to union. The process outputs the view it obtains at the root.

Specifically, union takes two parameters, an input view V and an integer side ∈ {0, 1}, and returns an output view; its properties are specified by the next lemma [23, Lemma 6]:

Lemma 6.1. If the input views of processes invoking union with side = 0 are comparable and satisfy the self-inclusion property, and similarly for the input views of processes invoking union with side = 1, then
(1) the output views of processes exiting union are comparable, and
(2) the output view of a process exiting union contains its input view.

Appendix A describes union in detail, and explains the next lemma:

Lemma 6.2. The step complexity of union is O(k).
Our adaptive algorithm uses an unbalanced binary tree T_r defined inductively as follows. T_0 has a root v_0 with a single left child (Figure 8(a)). For r ≥ 0, suppose T_r
Fig. 8. The unbalanced binary tree used in the adaptive lattice agreement algorithm.
is defined with an identified vertex v r , which is the last vertex in an in-order traversal
of T r ; notice that v r does not have a right child in T r . T r+1 is obtained by inserting
a new vertex v r+1 as the right child of v r , and inserting a complete binary tree C r+1
of height r as the left subtree of v r+1 (Figure 8(b)). By the construction, v r+1 is
the last vertex in an in-order traversal of T r+1 .
The vertices of the tree are numbered as follows: The root is numbered 1; if a vertex is numbered ℓ, then its left child is numbered 2ℓ, and its right child is numbered 2ℓ + 1 (Figure 8).

By the construction, the leaves of T_r are the leaves of the complete binary subtrees C_1, …, C_r. Therefore, the total number of leaves in T_r is Σ_{j=1}^{r} 2^{j−1} = 2^r − 1.
The following simple lemma, proved in Appendix B, states some properties of T_r.

Lemma 6.3. Let w be the i-th leaf of T_r, 1 ≤ i ≤ 2^r − 1, counting from left to right. Then
(1) the depth of w is 2⌊log i⌋ + O(1), and
(2) the number of w in the tree is 2^d plus an offset determined by i, where d is the depth of w.
Algorithm 5 uses T_{2 log n − 1}, which has about n²/2 leaves.⁴ A process starts the algorithm by obtaining a new name in a range of size k(k + 1)/2 (using Algorithm 1). This name determines the leaf at which the process starts to climb up the tree: A process with a new name x_i starts the algorithm at the x_i-th leaf of the tree, counting from left to right. Since k(k + 1)/2 does not exceed the number of leaves, there are enough leaves for temporary names in a range of size k(k + 1)/2. By Lemma 6.3, the x_i-th leaf is numbered 2^d plus an offset determined by x_i, where d is its depth.
As in [23], a distinct copy of union is associated with each inner vertex of the tree. A process performs the copies of union associated with the vertices along its path to the root, and returns the view obtained at the root.

A simple induction on the distance of a vertex v from the leaves shows that the views obtained by processes executing union at v satisfy the comparability and self-inclusion properties. In the base case, v is a leaf and the claim is trivial since a single process starts at each leaf; the induction step follows immediately from Lemma 6.1. Hence, the views obtained at the root satisfy the lattice agreement properties.

If there are k active processes, process p_i gets a unique name x_i in the range
4 For simplicity, we assume n is a power of 2.
Algorithm 5 Adaptive lattice agreement.

Procedure AdaptiveLA()
1. temp-name := Adaptive-k(k+1)/2-renaming() // Algorithm 1
2. d := ⌊log temp-name⌋
3. ℓ := the number of the temp-name-th leaf, computed from d // the leaf corresponding to temp-name (Lemma 6.3(2))
4. V := {p_i} // the input is the process's name
5. while ( ℓ > 1 ) do
6.   side := ℓ mod 2 // calculate side
7.   ℓ := ⌊ℓ/2⌋ // calculate father
8.   V := union[ℓ](V, side)
9. return(V)
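With this numbering (left child 2ℓ, right child 2ℓ + 1), the leaf-to-root climb in Algorithms 3 and 5 is pure arithmetic; a minimal sketch:

```python
def path_to_root(vertex):
    """Climb from a numbered vertex to the root (numbered 1), recording at
    each step the parent and the side we arrived from (0 = left, 1 = right)."""
    path = []
    v = vertex
    while v > 1:
        side = v % 2        # right children have odd numbers (2*parent + 1)
        v //= 2             # parent in this numbering
        path.append((v, side))
    return path

assert path_to_root(1) == []
assert path_to_root(5) == [(2, 1), (1, 0)]   # 5 = right child of 2, 2 = left child of 1
assert len(path_to_root(2 ** 10)) == 10      # number of climbing steps equals the depth
```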
{1, …, k(k + 1)/2} (Line 1) and, by Lemma 6.3(1), starts in a leaf ℓ of depth 2⌊log x_i⌋ + O(1) = O(log k). Therefore, p_i accesses O(log k) vertices. At each vertex, the execution of union requires O(k) operations (Lemma 6.2). Thus, the total step complexity of the algorithm is O(k log k), implying the following theorem:

Theorem 6.4. Algorithm 5 solves lattice agreement with O(k log k) step complexity.
7. Discussion. This work presents adaptive wait-free algorithms, whose step complexity depends only on the number of active processes, for lattice agreement and (6k − 1)-renaming in the asynchronous read/write shared-memory model; the step complexity of both algorithms is O(k log k).
Clearly, the complexities of our algorithms (the number of steps, and the number and size of registers used) can be improved. For example, an algorithm for O(k)-renaming with O(k) step complexity would immediately yield a lattice agreement algorithm with the same step complexity. Also, it would be interesting to see if ideas from our efficient algorithms can improve the complexities of algorithms which adapt to the current contention [2, 6].
Acknowledgments. We thank Yehuda Afek and Eli Gafni for helpful discussions,
Yossi Levanoni for comments on an earlier version of the paper, and the reviewers for
many suggestions on how to improve the organization and presentation.
References
Atomic snapshots of shared memory
Results about fast mutual exclusion
Using local-spin k-exclusion algorithms to improve wait-free object implementation
Renaming in an asynchronous environment
Adaptive and efficient mutual exclusion
Adaptive wait-free algorithms for lattice agreement and renaming
Atomic snapshots using lattice agreement
Atomic snapshots in O(n log n) operations
A partial equivalence between shared-memory and message-passing in an asynchronous fail-stop distributed environment
The ambiguity of choosing
Adaptive solutions to the mutual exclusion problem
Exponential examples for two renaming algorithms.
The topological structure of asynchronous computability
A new solution of Dijkstra's concurrent programming problem
Fast long-lived renaming improved and simplified
Cited by:
Michel Raynal, Wait-free computing: an introductory lecture, Future Generation Computer Systems, v.21 n.5, p.655-663, May 2005
Hagit Attiya , Faith Ellen Fich , Yaniv Kaplan, Lower bounds for adaptive collect and related objects, Proceedings of the twenty-third annual ACM symposium on Principles of distributed computing, July 25-28, 2004, St. John's, Newfoundland, Canada
Wojciech Golab , Danny Hendler , Philipp Woelfel, An O(1) RMRs leader election algorithm, Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, July 23-26, 2006, Denver, Colorado, USA
Hagit Attiya , Arie Fouren , Eli Gafni, An adaptive collect algorithm with applications, Distributed Computing, v.15 n.2, p.87-96, April 2002
Hagit Attiya , Arie Fouren, Algorithms adapting to point contention, Journal of the ACM (JACM), v.50 n.4, p.444-468, July

Keywords: atomic read/write registers; lattice agreement; shared-memory systems; wait-free computation; renaming; atomic snapshots
Approximating the Throughput of Multiple Machines in Real-Time Scheduling

Abstract. We consider the following fundamental scheduling problem. The input to the problem consists of n jobs and k machines. Each of the jobs is associated with a release time, a deadline, a weight, and a processing time on each of the machines. The goal is to find a nonpreemptive schedule that maximizes the weight of jobs that meet their respective deadlines. We give constant factor approximation algorithms for four variants of the problem, depending on the type of the machines (identical vs. unrelated) and the weight of the jobs (identical vs. arbitrary). All these variants are known to be NP-hard, and the two variants involving unrelated machines are also MAX-SNP hard. The specific results obtained are as follows: For identical job weights and unrelated machines: a greedy 2-approximation algorithm. For identical job weights and k identical machines: the same greedy algorithm achieves a tight (1+1/k)^k / ((1+1/k)^k − 1) approximation factor. For arbitrary job weights and a single machine: an LP formulation achieves a 2-approximation for polynomially bounded integral input and a 3-approximation for arbitrary input. For unrelated machines, the factors are 3 and 4, respectively. For arbitrary job weights and k identical machines: the LP-based algorithm applied repeatedly achieves a (1+1/k)^k / ((1+1/k)^k − 1) approximation factor for polynomially bounded integral input and a (1+1/2k)^k / ((1+1/2k)^k − 1) approximation factor for arbitrary input. For arbitrary job weights and unrelated machines: a combinatorial (3 + 2√2 ≈ 5.828)-approximation algorithm.

Introduction
We consider the following fundamental scheduling problem. The input to the problem consists of
n jobs and k machines. Each of the jobs is associated with a release time, a deadline, a weight,
and a processing time on each of the machines. The goal is to nd a schedule that maximizes the
weight of the jobs that meet their deadline. Such scheduling problems are frequently referred to
as real-time scheduling problems, and the objective of maximizing the value of completed jobs is
frequently referred to as throughput. We consider four variants of the problem depending on the
type of the machines (identical vs. unrelated) and the weight of the jobs (identical vs. arbitrary).
Garey and Johnson [13] (see also [14]) show that even the simplest decision problem corresponding
to this problem is already NP-Hard in the strong sense. In this decision problem the input
consists of a set of n jobs with release time, deadline, and processing time. The goal is to decide
whether all these jobs can be scheduled on a single machine; each within its time window. We show
that the two variants involving unrelated machines are also MAX-SNP hard.
In this paper we give constant factor approximation algorithms for all four variants of the
problem. To the best of our knowledge, this is the first paper that gives approximation algorithms
with guaranteed performance (approximation factor) for these problems. We say that an algorithm
has an approximation factor ρ for a maximization problem if the weight of its solution is at least
(1/ρ)·OPT, where OPT is the weight of an optimal solution. (Note that we defined the approximation
factor so that it would always be at least 1.)
The specific results obtained are listed below and summarized in the table given in Figure 1.

    weight function                                  identical machines                          unrelated machines
    identical job weights                            (2, 1.8, ..., (1+1/k)^k/((1+1/k)^k - 1))    2
    arbitrary job weights, integral poly-size input  (2, 1.8, ..., (1+1/k)^k/((1+1/k)^k - 1))    3
    arbitrary job weights, arbitrary input           (3, 2.78, ..., (1+1/2k)^k/((1+1/2k)^k - 1)) 4

Figure 1: Each entry contains the approximation factors for k = 1, k = 2, and general k.
For identical job weights and unrelated machines, we give a greedy 2-approximation algorithm.
For identical job weights and k identical machines, we show that the same greedy algorithm
achieves a tight (1+1/k)^k / ((1+1/k)^k - 1) approximation factor.
For arbitrary job weights, we round a fractional solution obtained from a linear programming
relaxation of the problem. We distinguish between the case where the release times, deadlines,
and processing times are integral and polynomially bounded, and the case where they are
arbitrary. In the former case, we achieve a 2-approximation factor for a single machine, and a
3-approximation factor for unrelated machines. In the latter case, we get a 3-approximation
factor for a single machine, and a 4-approximation factor for unrelated machines.
For arbitrary job weights and k identical machines, we achieve a (1+1/k)^k / ((1+1/k)^k - 1)
approximation factor for polynomially bounded integral input, and a (1+1/2k)^k / ((1+1/2k)^k - 1)
approximation factor for arbitrary input. Note that as k tends to infinity these factors tend to
e/(e-1) ≈ 1.58198 and √e/(√e-1) ≈ 2.54149, respectively.
For arbitrary job weights and unrelated machines we also present a combinatorial algorithm with
a (3 + 2√2 ≈ 5.828) approximation factor.
The computational difficulty of the problems considered here is due to the "slack time" available
for scheduling the jobs. In general, the time window in which a job can be scheduled may be (much)
larger than its processing time. Interestingly, the special case where there is no slack time can be
solved optimally in polynomial time even for multiple machines [3]. Moreover, the problem can be
solved optimally on a single machine when the execution window of each job is less than twice its processing time.
Another special case that was considered earlier in the literature is the case in which all jobs
are released at the same time (or equivalently, the case in which all deadlines are the same). This
special case remains NP-Hard even for a single machine. However, Sahni [24] gave a fully polynomial
approximation scheme for this special case.
The problems considered here have many applications. Hall and Magazine [17] considered
the single machine version of our problem in the context of maximizing the scientific, military
or commercial value of a space mission. This means selecting and scheduling in advance a set
of projects to be undertaken during the space mission, where an individual project is typically
executable during only part of the mission. It is indicated in [17] that up to 25% of the budget of a
space mission may be spent in making these decisions. Hall and Magazine [17] present eight heuristic
procedures for finding an optimal solution together with computational experiments. However, they
do not provide any performance guarantees on the solutions produced by their heuristics. They also
mention the applicability of such problems to patient scheduling in hospitals. For more applications
and related work in the scheduling literature see [8, 11] and the survey of [22].
The preemptive version of our problem for a single machine was studied by Lawler [21]. For
identical job weights, Lawler showed how to apply dynamic programming techniques to solve the
problem in polynomial time. He extended the same techniques to obtain a pseudo-polynomial
algorithm for arbitrary weights as well ([21]). Lawler [20] also obtained polynomial time algorithms
that solve the problem in two special cases: (i) the time windows in which jobs can be scheduled are
nested; and (ii) the weights and processing times are in opposite order. Kise, Ibaraki and Mine [18]
showed how to solve the special case where the release times and deadlines are similarly ordered.
A closely related problem is considered by Adler et al. [1] in the context of communication in linear
networks. In this problem, messages with release times and deadlines have to be transmitted over
a bus that has a unit bandwidth, and the goal is to maximize the number of messages delivered
within their deadline. It turns out that our approximation algorithms for the case of arbitrary
weights can be applied to the weighted version of the unbuffered case considered in [1] to obtain
a constant factor approximation algorithm. No approximation algorithm is given in [1] for this
version.
In the on-line version of our problems, the jobs appear one by one, and are not known in
advance. Lipton and Tomkins [23] considered the non-preemptive version of the on-line problem,
while Koren and Shasha [19] and Baruah et al. [7] considered the preemptive version. The special
cases where the weight of a job is proportional to the processing time were considered in the on-line
setting in several papers [5, 10, 12, 15, 16, 6]. Our combinatorial algorithm for arbitrary weights
borrows some of the techniques used in the on-line case.
Some of our algorithms are based on rounding a fractional solution obtained from a linear
programming (LP) relaxation of the problem. In the LP formulation for a single machine we have
a variable for every feasible schedule of each of the jobs, a constraint for each job, and a constraint
for each time point. A naive implementation of this approach would require an unbounded number
of variables and constraints. To overcome this difficulty, we first assume that all release times,
deadlines, and processing times are (polynomially bounded) integers. This yields a polynomial
number of variables and constraints, allowing for the LP to be solved in polynomial time. For
the case of arbitrary input, we show that we need not consider more than O(n^2) variables and
constraints for each of the n jobs. This yields a strongly polynomial running time at the expense
of a minor degradation in the approximation factor. The rounding of the LP is done by reducing
the problem to a graph coloring problem.
We extend our results from a single machine to multiple machines by applying the single machine
algorithm repeatedly, machine by machine. We give a general analysis for this type of algorithms
and, interestingly, prove that the approximation factor for the case of identical machines is superior
to the approximation factor of the single machine algorithm which served as our starting
point. A similar phenomenon (in a different context) has been observed by Cornuejols, Fisher and
Nemhauser [9]. Our analysis in the unrelated machines case is similar to the one described (in
a different context) by Awerbuch et al. [4]. Unlike the identical machines case, in the unrelated
machines case the extension to multiple machines degrades the performance relative to a single
machine.
Our algorithms (and specifically our LP based algorithms) can be applied to achieve approximation
algorithms for other scheduling problems. For example, consider a problem where we can
compute an estimate of the completion time of the jobs in an optimal (fractional) solution. Then,
we can apply our algorithms using these estimated completion times as deadlines, and get a schedule
where a constant fraction of the jobs indeed finish by these completion times. This observation has
already been applied by Wein [25] to achieve constant factor approximation algorithms for various
problems, among them the minimum flow-time problem.
2 Definitions and notations
Let the job system J contain n jobs J_1, ..., J_n. Each job
J_i is characterized by the quadruple (r_i, d_i, {ℓ_{i,j}}, w_i). The interpretation
is that job J_i is available at time r_i, the release time, it must be executed by time d_i, the deadline,
its processing time on machine M_j is ℓ_{i,j}, and w_i is the weight (or profit) associated with the job.
We note that our techniques can also be extended to the more general case where the release time
and deadline of each job differ on different machines. However, for simplicity, we consider only the
case where the release time and deadline of each job are the same on all machines. The hardness
results are also proved under the same assumption.
We refer to the case in which all job weights are the same as the unweighted model, and the
case in which job weights are arbitrary as the weighted model. (In the unweighted case our goal
is to maximize the cardinality of the set of scheduled jobs.) We refer to the case in which the
processing times of the jobs on all the machines are the same as the identical machines model,
and the case in which processing times differ as the unrelated machines model. In the unweighted
jobs and identical machines model, job J_i is characterized by a triplet (r_i, d_i, ℓ_i).
Without loss of generality, we assume that the earliest release time is at time 0.
A feasible scheduling of job J_i on machine M_j at time t, r_i ≤ t ≤ d_i − ℓ_{i,j}, is referred to as a job
instance, denoted by J_{i,j}(t). A job instance can also be represented by an interval on the time line
[0, ∞), namely the interval [t, t + ℓ_{i,j}). We say that this interval belongs to job J_i. In general, many intervals
may belong to a job. A set of job instances J_{1,j}(t_1), ..., J_{m,j}(t_m) is a feasible schedule on machine M_j
if the corresponding intervals are independent, i.e., they do not overlap, and they belong to
distinct jobs. The weight of a schedule is the sum of the weights of the jobs to which the intervals
(job instances) belong. In the case of multiple machines, we need to find a feasible schedule of
distinct jobs on each of the machines. The objective is to maximize the sum of the weights of all
schedules.
We distinguish between the case where the release times, processing times, and deadlines are
integers bounded by a polynomial in the number of jobs, and the case of arbitrary inputs.
The former case is referred to as polynomially bounded integral input and the latter case is referred
to as arbitrary input.
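The definitions above can be encoded directly; the following sketch is illustrative only (Job, instance_ok and is_feasible_schedule are my names, not the paper's).

```python
from dataclasses import dataclass

# Illustrative encoding of the definitions; names are mine, not the paper's.

@dataclass(frozen=True)
class Job:
    release: float    # r_i
    deadline: float   # d_i
    length: float     # processing time on the machine at hand
    weight: float = 1.0

def instance_ok(job, t):
    # a job instance J_i(t) is feasible iff r_i <= t <= d_i - l_i
    return job.release <= t <= job.deadline - job.length

def is_feasible_schedule(instances):
    """instances: list of (job, start) pairs on one machine. Feasible iff
    every instance fits its window, the intervals are pairwise disjoint,
    and all jobs are distinct."""
    jobs = [j for j, _ in instances]
    if len(set(map(id, jobs))) != len(jobs):      # distinct jobs
        return False
    if not all(instance_ok(j, t) for j, t in instances):
        return False
    ivs = sorted((t, t + j.length) for j, t in instances)
    return all(ivs[i][1] <= ivs[i + 1][0] for i in range(len(ivs) - 1))
```

For instance, with G1 = Job(0, 3, 1) and H1 = Job(0, 2, 2), the schedule [(H1, 0), (G1, 2)] is feasible while [(G1, 0), (H1, 1)] is not, since H1 started at time 1 would miss its deadline.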
3 Unweighted jobs
In this section we consider the unweighted model. We define a greedy algorithm and analyze its
performance in both the unrelated and identical models. In the former model, we show that it
is a 2-approximation algorithm, and in the latter model, we show that it is a
(1+1/k)^k / ((1+1/k)^k − 1)-approximation algorithm.
3.1 The greedy algorithm
The greedy strategy for a single machine is as follows. At each time step t (starting at time 0), the
algorithm schedules the job instance that finishes first among all jobs that can be scheduled at t
or later. Note that the greedy algorithm does not take into consideration the deadlines of the jobs,
except for determining whether jobs are eligible for scheduling. The greedy algorithm for multiple
machines just executes the greedy algorithm (for a single machine) machine by machine.
We now define the procedure NEXT(t, j, J). The procedure determines the job instance J_{i,j}(t'),
t' ≥ t, whose finishing time t' + ℓ_{i,j} is the earliest among all instances of jobs in J that start at time t or later
on machine M_j. If no such interval exists, the procedure returns null, otherwise the procedure
returns J_{i,j}(t').
Algorithm 1-GREEDY(j, J) finds a feasible schedule on machine M_j among the jobs in J by
calling Procedure NEXT repeatedly.
1. The first call is NEXT(0, j, J).
2. Assume the algorithm has already computed J_{i_1,j}(t_1), ..., J_{i_h,j}(t_h). Let the current time be t := t_h + ℓ_{i_h,j} and
let the current set of jobs be J := J \ {J_{i_1}, ..., J_{i_h}}.
3. The algorithm calls NEXT(t, j, J) that returns either J_{i_{h+1},j}(t_{h+1}) or null.
4. The algorithm terminates in the round r in which NEXT returns null. It returns the
set {J_{i_1,j}(t_1), ..., J_{i_{r-1},j}(t_{r-1})}.
Algorithm k-GREEDY(J) finds k schedules such that a job appears at most once in the schedules.
It calls Algorithm 1-GREEDY machine by machine, each time updating the set J of jobs to be
scheduled. Assume that the output of the first i − 1 calls to 1-GREEDY is G_1, ..., G_{i-1},
where G_j is a feasible schedule on machine M_j, for 1 ≤ j ≤ i − 1. Then, the algorithm calls
1-GREEDY(i, J \ (G_1 ∪ ... ∪ G_{i-1})).
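The two procedures can be sketched as follows (a minimal illustration, not the paper's pseudocode); a job is assumed to be a tuple (release, deadline, lengths), where lengths[j] is its processing time on machine M_j.

```python
# Minimal sketch of 1-GREEDY and k-GREEDY; names and representation are mine.

def next_instance(t, j, jobs):
    """Return (finish, start, job) for the instance on machine j that starts
    at time t or later and finishes earliest; None if no instance fits."""
    best = None
    for (r, d, lengths) in jobs:
        start = max(t, r)
        finish = start + lengths[j]
        if finish <= d and (best is None or finish < best[0]):
            best = (finish, start, (r, d, lengths))
    return best

def greedy_one_machine(j, jobs):
    """1-GREEDY: repeatedly schedule the earliest-finishing instance."""
    t, schedule, remaining = 0, [], set(jobs)
    while True:
        nxt = next_instance(t, j, remaining)
        if nxt is None:
            return schedule
        finish, start, job = nxt
        schedule.append((job, start))
        remaining.remove(job)
        t = finish

def greedy_k_machines(k, jobs):
    """k-GREEDY: run 1-GREEDY machine by machine on the leftover jobs."""
    remaining, schedules = set(jobs), []
    for j in range(k):
        sched = greedy_one_machine(j, remaining)
        schedules.append(sched)
        remaining -= {job for job, _ in sched}
    return schedules

# the tight unrelated-machines example of Section 3.4.2 with k = 2
G1, G2 = (0, 3, (1, 4)), (0, 3, (4, 1))
H1, H2 = (0, 2, (2, 3)), (0, 2, (3, 2))
scheds = greedy_k_machines(2, [G1, G2, H1, H2])
assert sum(len(s) for s in scheds) == 2   # greedy gets one job per machine
```

On this instance the optimum schedules all four jobs, so the greedy output is off by exactly the factor 2 proved below.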
The following property of Algorithm 1-GREEDY is used in the analysis of the approximation
factors of our algorithms.
Proposition 3.1 Let the set of jobs found by 1-GREEDY(j, J) for a job system J be G. Let H be
any feasible schedule on machine M_j among the jobs in J \ G. Then, |H| ≤ |G|.
Proof: For each interval (job instance) in H there exists an interval in G that overlaps with it and
terminates earlier. Otherwise, 1-GREEDY would have chosen this interval. The proposition follows
from the feasibility of H, since at most one interval in H can overlap with the end point of any
interval in G. □
3.2 Unrelated machines
Based on Proposition 3.1, the following theorem states the performance of the k-GREEDY algorithm
in the unweighted jobs and unrelated machines model.
Theorem 3.2 Algorithm k-GREEDY achieves a 2-approximation factor in the unweighted jobs and
unrelated machines model.
Proof: Let G(k) = G_1 ∪ ... ∪ G_k be the output of k-GREEDY and let OPT_1, ..., OPT_k be
the sets of intervals scheduled on the k machines by an optimal solution OPT. (We note that these
sets will be considered as jobs and job instances interchangeably.) Let H_j ⊆ OPT_j be the set
of all the jobs scheduled by OPT on machine M_j that k-GREEDY did not schedule on any machine,
and let H = H_1 ∪ ... ∪ H_k. Let OG be the set of jobs taken by both k-GREEDY
and OPT. It follows that |OPT(k)| ≤ |H| + |OG|.
Proposition 3.1 implies that |H_j| ≤ |G_j|. This is true since H_j is a feasible schedule on machine
M_j among the jobs that were not picked by k-GREEDY while constructing the schedule for machine
M_j. Since the sets H_j are mutually disjoint and the same holds for the sets G_j, |H| ≤ |G(k)|. Since
|OG| ≤ |G(k)|, we get that |OPT(k)| ≤ 2|G(k)| and the theorem follows. □
3.3 Identical machines
In this section we analyze the k-GREEDY algorithm for the unweighted jobs and identical machines
model. We show that the approximation factor in this case is (1+1/k)^k / ((1+1/k)^k − 1).
For k = 2 this factor is 9/5, and for k → ∞ it tends to e/(e−1) ≈ 1.58.
The analysis below is quite general and just uses the facts that the algorithm is applied sequentially
machine by machine, and that the machines are identical. Let OPT(k) be an optimal
schedule for k identical machines. Let A be any algorithm for one machine. Define by ρ_A(k) (or
by ρ(k) when A is known) the approximation factor of A compared with OPT(k). Note that the
comparison is done between an algorithm that uses one machine and an optimal schedule that uses
k machines. Let A(k) be the algorithm that applies algorithm A, machine by machine, k times. In
the next theorem we bound the performance of A(k) using ρ(k).
Theorem 3.3 Algorithm A(k) achieves a ρ(k)^k / (ρ(k)^k − (ρ(k)−1)^k) approximation factor for k identical
machines.
Proof: Let A_i be the set of jobs chosen by A(k) for the i-th machine. Suppose that the algorithm
has already chosen the sets of jobs A_1, ..., A_{i-1}. Consider the schedule given by removing from
OPT(k) all the jobs in A_1 ∪ ... ∪ A_{i-1} that were also chosen by the optimal solution. Clearly, this is
still a feasible schedule of cardinality at least |OPT(k)| − Σ_{j=1}^{i-1} |A_j|. Therefore, by the definition
of ρ(k), the set A_i satisfies |A_i| ≥ (1/ρ(k)) (|OPT(k)| − Σ_{j=1}^{i-1} |A_j|). Rearranging the terms gives
us the equation,

    Σ_{j=1}^{i} |A_j| ≥ (1/ρ(k)) |OPT(k)| + (1 − 1/ρ(k)) Σ_{j=1}^{i-1} |A_j|.    (1)

We prove by induction on i that Σ_{j=1}^{i} |A_j| ≥ (1 − (1 − 1/ρ(k))^i) |OPT(k)|. Assume the claim holds for i − 1. Applying the
induction hypothesis to Equation (1) we get,

    Σ_{j=1}^{i} |A_j| ≥ (1/ρ(k)) |OPT(k)| + (1 − 1/ρ(k)) (1 − (1 − 1/ρ(k))^{i-1}) |OPT(k)|.

Rearranging terms yields the inductive claim. Setting i = k proves the theorem, namely,

    Σ_{j=1}^{k} |A_j| ≥ (1 − (1 − 1/ρ(k))^k) |OPT(k)| = ((ρ(k)^k − (ρ(k)−1)^k) / ρ(k)^k) |OPT(k)|. □
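As an arithmetic sanity check (mine, not the paper's), one can iterate Equation (1) with equality, which is the worst case, and confirm that the accumulated total matches the closed-form factor; with ρ(k) = k + 1, as established for 1-GREEDY below, the factor specializes to (1+1/k)^k / ((1+1/k)^k − 1).

```python
# Iterate |A_i| = (OPT - sum_{j<i} |A_j|) / rho, the worst case of
# Equation (1), and compare against rho^k / (rho^k - (rho-1)^k).

def worst_case_total(rho, k, opt=1.0):
    total = 0.0
    for _ in range(k):
        total += (opt - total) / rho
    return total

for k in range(1, 8):
    rho = k + 1                                # rho(k) for 1-GREEDY
    factor = rho**k / (rho**k - (rho - 1)**k)
    # after k machines the algorithm holds at least OPT/factor
    assert abs(worst_case_total(rho, k) - 1.0 / factor) < 1e-9

# k = 2 gives the factor 9/5 quoted in the text
assert abs(3**2 / (3**2 - 2**2) - 9 / 5) < 1e-12
```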
We now apply the above theorem to algorithm k-GREEDY. We compute the value of ρ(k) for
algorithm 1-GREEDY, and observe that algorithm k-GREEDY indeed applies algorithm 1-GREEDY
k times, as assumed by Theorem 3.3.
Theorem 3.4 The approximation factor of k-GREEDY is (1+1/k)^k / ((1+1/k)^k − 1) in the unweighted jobs
and identical machines model.
Proof: Recall that algorithm 1-GREEDY scans all the intervals ordered by their end points and
picks the first possible interval belonging to a job that was not picked before. Suppose this greedy
strategy picks set G, and let H be the set of jobs scheduled by an optimal solution OPT(k) on its k machines
but not picked by the greedy strategy.
Similar to the arguments of Proposition 3.1, in each of the machines, if a particular job of H was
not chosen, then there must be a job in progress in G. Also this job must finish before the particular
job in H finishes. Thus, the number of jobs in H executed on any single machine by the optimal
schedule has to be at most |G|. Since the jobs executed by the optimal schedule on different
machines are disjoint, we get |H| ≤ k|G|. Consequently, |OPT(k)| ≤ |H| + |G| ≤ (k+1)|G|, i.e., ρ(k) = k + 1.
The theorem follows by setting this value for ρ(k) in Theorem 3.3. □
3.4 Tight bounds for GREEDY
In this subsection we construct an instance for which our bounds in the unweighted model for
algorithm GREEDY are tight. We first show that for one machine (where the unrelated and identical
models coincide) the 2-approximation is tight. Next, we generalize this construction for the
unrelated model, and prove the tight bound of 2 for k > 1 machines. Finally, we generalize the
construction for one machine to k > 1 identical machines and prove the tight bound of (1+1/k)^k / ((1+1/k)^k − 1).
Recall that in the unweighted model each job is characterized by a triplet (r_i, d_i, ℓ_i) in the
identical machines model and by a tuple (r_i, d_i, {ℓ_{i,1}, ..., ℓ_{i,k}}) in the unrelated
machines model.
3.4.1 A single machine
For a single machine the system contains two jobs: G_1 = (0, 3, 1) and H_1 = (0, 2, 2).
1-GREEDY schedules the instance G_1(0) of job G_1 and cannot schedule any instance of H_1. An
optimal solution schedules the instances H_1(0) and G_1(2). Clearly, the ratio is 2. We could repeat
this pattern on the time axis to obtain this ratio for any number of jobs.
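The two-job example can be checked directly with a compact earliest-finish greedy (an illustrative sketch; the function name is mine):

```python
# Compact earliest-finish greedy for one machine; jobs are
# (release, deadline, length) triples.

def earliest_finish_greedy(jobs):
    t, done, jobs = 0, [], list(jobs)
    while True:
        # all feasible instances starting at time t or later
        feas = [(max(t, r) + l, max(t, r), (r, d, l))
                for (r, d, l) in jobs if max(t, r) + l <= d]
        if not feas:
            return done
        finish, start, job = min(feas)     # earliest-finishing instance
        done.append((job, start))
        jobs.remove(job)
        t = finish

G1, H1 = (0, 3, 1), (0, 2, 2)
greedy = earliest_finish_greedy([G1, H1])
assert [job for job, _ in greedy] == [G1]  # greedy schedules only G1
# the optimal schedule H1(0), G1(2) meets both deadlines, so the
# ratio on this instance is exactly 2
assert 0 + 2 <= 2 and 2 + 1 <= 3
```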
This construction demonstrates the limitation of the approach of Algorithm 1-GREEDY. This
approach ignores the deadlines and therefore does not capitalize on the urgency of scheduling job
H_1 in order not to miss its deadline. We generalize this idea further for k machines.
3.4.2 Unrelated machines
For k machines the job system contains 2k jobs: G_1, ..., G_k and H_1, ..., H_k. The release time
of all jobs is 0. The deadline of all the G-type jobs is 3 and the deadline of all the H-type jobs is
2. The length of job G_i on machine M_i is 1 and it is 4 on all other machines. The length of job H_i
on machine M_i is 2 and it is 3 on all other machines.
Note that only jobs G_i and H_i can be scheduled on machine M_i, since all other jobs are too
long to meet their deadline. Hence, Algorithm k-GREEDY considers only these two jobs while
constructing the schedule for machine M_i. As a result, k-GREEDY selects the instance G_i(0) of
job G_i to be scheduled on machine M_i and cannot schedule any of the H-type jobs. On the other
hand, an optimal solution schedules the instances H_i(0) and G_i(2) on machine M_i.
Algorithm k-GREEDY thus schedules k jobs while an optimal algorithm schedules all 2k
jobs. This yields a tight approximation factor of 2 in the unweighted jobs and unrelated machines
model.
3.4.3 Identical machines
We define job systems J(k) for any given k ≥ 1. We show that on J(k) the performance of
k-GREEDY(J(k)) is no more than ((1+1/k)^k − 1)/(1+1/k)^k times OPT(J(k)). The J(1) system is the one defined in
Subsection 3.4.1. The J(2) job system contains 6 jobs of type G_1, 4 jobs of type G_2, and 8 jobs
of type H (18 jobs in all). The lengths of the G_1-type, G_2-type, and H-type jobs are 10, 11, and
12, respectively. If we set the deadlines appropriately, we force Algorithm 2-GREEDY to make the
following selections:
On the first machine, 2-GREEDY schedules all the 6 jobs of type G_1. This is true since these
jobs are of length less than the lengths of the jobs of type G_2 and the jobs of type H. The
last G_1-type interval terminates at time 60. Hence, there is no room for a G_2-type (H-type)
interval, the deadline of which is 70 (48), and the length of which is 11 (12).
On the second machine, 2-GREEDY schedules all the 4 jobs of type G_2 since they are shorter
than the jobs of type H. The last G_2-type job terminates at time 44 which leaves no room
for another job of type H.
Altogether, 2-GREEDY schedules 10 jobs. We show now an optimal solution that schedules all 18
jobs, 9 on each machine.
Note that all the instances terminate before their deadlines. As a result we get a ratio of 18/10 = 9/5.
We are ready to define J(k) for any k ≥ 1. The job system contains k(k+1)^k jobs. Algorithm
k-GREEDY is able to schedule only k(k+1)^k − k^{k+1} out of them and there exists an optimal solution
that schedules all of them. As a result we get the ratio k(k+1)^k / (k(k+1)^k − k^{k+1}) = (1+1/k)^k / ((1+1/k)^k − 1).
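A quick counting check (mine, not from the paper) that these formulas are consistent with the factor of Theorem 3.4 and with the k = 2 numbers above:

```python
from fractions import Fraction

for k in range(1, 7):
    total = k * (k + 1) ** k               # jobs in J(k); OPT schedules all
    greedy = total - k ** (k + 1)          # jobs scheduled by k-GREEDY
    factor = Fraction((k + 1) ** k, (k + 1) ** k - k ** k)
    # ratio OPT/greedy equals (1+1/k)^k / ((1+1/k)^k - 1)
    assert Fraction(total, greedy) == factor

# k = 2: 18 jobs in total, greedy schedules 10, ratio 9/5
assert 2 * 3 ** 2 == 18 and 18 - 2 ** 3 == 10
assert Fraction(18, 10) == Fraction(9, 5)
```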
The J(k) system is composed of k types of G-jobs, G_1, ..., G_k, together with H-type jobs. By
setting a base length ℓ as a large enough number and then fixing the deadlines accordingly, we
force Algorithm k-GREEDY to select for machine i all the jobs of type G_i but no other jobs. Thus
k-GREEDY does not schedule any of the H-type jobs. On the other hand, an optimal solution is
able to construct the same schedule for all the k machines. It starts by scheduling 1/k of the
H-type jobs with their first possible instance. Then, it schedules in turn 1/k of the jobs from
G_k, G_{k-1}, ..., G_1, in this order. The values of the deadlines allow for such a schedule.
We omit the details of how to set ℓ and how to determine the deadlines. We just remark that
to validate the optimal schedule, we get lower bounds for the deadlines, and to force the k-GREEDY
schedule we get upper bounds for the deadlines. If ℓ is large enough, then these upper bounds are
larger than the lower bounds. For small k, appropriate values can be checked directly.
4 Weighted jobs
In this section we present approximation algorithms for weighted jobs. We first present algorithms
for a single machine and for unrelated machines that are based on rounding a linear programming
relaxation of the problem. Then, we re-apply the analysis of Theorem 3.3 to get better approximation
factors for the identical machines model. We conclude with a combinatorial algorithm
for unrelated machines which is efficient and easy to implement. However, it achieves a weaker
approximation guarantee.
4.1 Approximation via linear programming
In this subsection we describe a linear programming based approximation algorithm. We first
describe the algorithm for the case of a single machine, and then generalize it to the case of
multiple machines. Our linear programming formulation is based on discretizing time. Suppose
that the time axis is divided into N time slots. The complexity of our algorithms depends on N.
However, we assume for now that N is part of the input, and that the discretization of time is fine
enough so as to represent any feasible schedule (up to small shifts). Later, we show how to get rid
of these assumptions at the expense of a slight increase in the approximation factor.
The linear program relaxes the scheduling problem in the following way. A fractional feasible
solution is one which distributes the processing of a job between the job instances or intervals
belonging to it with the restriction that at any given point of time t, the sum of the fractions
assigned to all the intervals at t (belonging to all jobs) does not exceed 1. To this end, for each job
J_i we have a variable x_it for each interval [t, t + ℓ_i) belonging to it, i.e., for which t ≥ r_i and
t + ℓ_i ≤ d_i. It would be convenient to assume that x_it = 0 for any other value of t between 1 and
N. The linear program is as follows.

    maximize  Σ_i w_i Σ_t x_it
    subject to:
      For each time slot t, 1 ≤ t ≤ N:  Σ_{(i,t'): t' ≤ t < t' + ℓ_i} x_{it'} ≤ 1
      For each job i, 1 ≤ i ≤ n:        Σ_t x_it ≤ 1
      For all i and t:                  0 ≤ x_it ≤ 1

It is easy to see that any feasible schedule defines a feasible integral solution to the linear program
and vice versa. Therefore, the value of an optimal (fractional) solution to the linear program is an
upper bound on the value of an optimal integral solution.
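To make the index sets concrete, the following sketch (function and variable names are mine) builds the per-slot and per-job constraint families for a toy instance and checks that the 0/1 vector of a feasible schedule is feasible for the LP, as claimed:

```python
# Illustrative rendering of the single-machine LP's constraint structure.

def build_lp(jobs, N):
    """jobs: list of (release, deadline, length, weight), integral data.
    Returns the variable keys (i, t) and the two constraint families,
    each as the list of variable keys whose sum is bounded by 1."""
    var_keys = [(i, t) for i, (r, d, l, w) in enumerate(jobs)
                for t in range(r, d - l + 1)]
    slot = {t: [(i, s) for (i, s) in var_keys if s <= t < s + jobs[i][2]]
            for t in range(N)}
    per_job = {i: [(j, s) for (j, s) in var_keys if j == i]
               for i in range(len(jobs))}
    return var_keys, slot, per_job

jobs = [(0, 3, 1, 1.0), (0, 2, 2, 1.0)]   # G1 and H1 from Section 3.4.1
keys, slot, per_job = build_lp(jobs, N=3)

x = {key: 0.0 for key in keys}
x[(1, 0)] = 1.0                           # H1 scheduled on [0, 2)
x[(0, 2)] = 1.0                           # G1 scheduled on [2, 3)
assert all(sum(x[key] for key in ks) <= 1 for ks in slot.values())
assert all(sum(x[key] for key in ks) <= 1 for ks in per_job.values())
```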
We compute an optimal solution to the linear program and denote the value of variable x_it in
this solution by q_it. Denote the value of the objective function in an optimal solution by OPT. We
now show how to round an optimal solution to the linear program to an integral solution.
To show how the linear program is used we define a coloring of intervals. The collection of all
intervals belonging to a set of jobs J can be regarded as an interval representation of an interval
graph I. We define a set of intervals in I to be independent if: (i) no two intervals in the set
overlap; and (ii) no two intervals in the set belong to the same job. (Note that this definition is more
restrictive than the regular independence relation in interval graphs.) Clearly, an independent set
of intervals defines a feasible schedule. The weight of an independent set P, w(P), is defined to be
the sum of the weights of the jobs to which the intervals belong.
Our goal is to color intervals in I such that each color class induces an independent set. We
note that not all intervals are required to be colored and that an interval may receive more than one
color. Suppose that a collection of color classes (independent sets) P_1, ..., P_m with non-negative
coefficients β_1, ..., β_m satisfies Σ_{i=1}^{m} β_i ≤ 2 and Σ_{i=1}^{m} β_i w(P_i) ≥ OPT. Then
there exists a color class P_i, 1 ≤ i ≤ m, for which w(P_i) ≥ OPT/2. This color class is defined to
be our approximate solution, and the approximation factor is 2. It remains to show how to obtain
the desired coloring.
We now take a short detour and define the group constrained interval coloring problem. Let
Q = {Q_1, ..., Q_s} be an interval representation in which the maximum number of mutually overlapping
intervals is t_1. Suppose that the intervals are partitioned into disjoint groups, where each
group contains at most t_2 intervals. A legal group constrained coloring of the intervals in Q is a
coloring in which: (i) overlapping intervals are not allowed to get the same color; (ii) intervals
belonging to the same group are not allowed to get the same color.

Theorem 4.1 There exists a legal group constrained coloring of the intervals in Q that uses at
most t_1 + t_2 − 1 colors.
Proof: We use a greedy algorithm to obtain a legal coloring using at most t_1 + t_2 − 1 colors. Sort
the intervals in Q by their left endpoint and color the intervals from left to right with respect to
this ordering. When an interval is considered by the algorithm it is colored by any one of the free
colors available at that time. We show by induction that when the algorithm considers an interval,
there is always a free color.
This is true initially. When the algorithm considers interval Q_i, the colors that cannot be used
for Q_i are occupied by either intervals that overlap with Q_i, or by intervals that belong to the
same group as Q_i. Since we are considering the intervals sorted by their left endpoint, all intervals
overlapping with Q_i also overlap with each other, and hence there are at most t_1 − 1 such intervals.
There can be at most t_2 − 1 intervals that belong to the same group as Q_i. Since the number of
available colors is t_1 + t_2 − 1, there is always a free color. □
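The proof's greedy coloring can be sketched as follows (an illustrative implementation, not the paper's code); the small test at the end checks the t_1 + t_2 − 1 bound:

```python
# Greedy group constrained interval coloring: sort by left endpoint and
# give each interval the smallest color not used by an overlapping
# interval or by an interval of the same group.

def group_constrained_coloring(intervals):
    """intervals: list of (left, right, group). Returns a color per interval."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    color = [None] * len(intervals)
    for i in order:
        li, ri, gi = intervals[i]
        banned = {color[j] for j in order if color[j] is not None and
                  (intervals[j][2] == gi or                       # same group
                   (intervals[j][0] < ri and li < intervals[j][1]))}  # overlap
        c = 0
        while c in banned:
            c += 1
        color[i] = c
    return color

# here t1 = 3 (max mutually overlapping) and t2 = 2 (max group size),
# so at most t1 + t2 - 1 = 4 colors should be used
ivs = [(0, 2, 'a'), (1, 3, 'a'), (1, 4, 'b'), (3, 5, 'b'), (2, 6, 'c')]
cols = group_constrained_coloring(ivs)
assert max(cols) + 1 <= 4
```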
We are now back to the problem of coloring the intervals in I. Let N' be a large enough integer. We can round
each fraction q_it in the optimal solution to the closest fraction of the form a/N', where 1 ≤ a ≤ N'.
This incurs a negligible error (of at most a 1/(Nn) factor) in the value of the objective function. We
now generate an interval graph I' from I by replacing each interval J_i(t) ∈ I by q_it N' "parallel"
intervals. Define a group constrained coloring problem on I', where group i contains
all instances of job J_i. Note that in I', the maximum number of mutually overlapping intervals is
bounded by N', and the maximum number of intervals belonging to a group is also N'.
By Theorem 4.1, there exists a group constrained coloring of I' that uses at most 2N' − 1 colors.
Attach a coefficient of 1/N' to each color class. Clearly, the sum of the coefficients is less than 2.
Also, by our construction, the sum of the weights of the intervals in all the color classes, multiplied
by the coefficient 1/N', is OPT. We conclude,
Theorem 4.2 The approximation factor of the algorithm that rounds an optimal fractional solution
is 2.
We note that the technique of rounding a fractional solution by decomposing it into a convex
combination of integral solutions was also used by Albers et al. [2].
4.1.1 Strongly polynomial bounds
The difficulty with the linear programming formulation and the rounding algorithm is that the
complexity of the algorithm depends on N, the number of time slots. We now show how we choose
N to be a polynomial in the number of jobs, n, at the expense of losing a bit in the approximation
factor.
First, we note that in case the release times, deadlines, and processing times are integral, we
may assume without loss of generality that each job is scheduled at an integral point of time. If,
in addition, they are restricted to integers of polynomial size, then the number of variables and
constraints is bounded by a polynomial.
We now turn our attention to the case of arbitrary inputs. Let p(n) be a Θ(n^2) polynomial.
Partition the jobs in J into two classes:
Big slack jobs: jobs J_i for which d_i − r_i ≥ p(n) ℓ_i.
Small slack jobs: jobs J_i for which d_i − r_i < p(n) ℓ_i.
We obtain a fractional solution separately for the big slack jobs and small slack jobs. We first
explain how to obtain a fractional solution for the big slack jobs. For each big slack job J_i we
find p(n) non-overlapping job instances and assign a value of 1/p(n) to each such interval. Note
that this many non-overlapping intervals can be found since d_i − r_i is large enough. We claim that
this assignment can be ignored when computing the solution (via LP) for the small slack jobs.
This is true because at any point of time t, the sum of the fractions assigned to intervals at t
belonging to big slack jobs can be at most n/p(n), and thus their effect on any fractional solution
is negligible. (In the worst case, scale down all fractions corresponding to small slack jobs by a
factor of (1 − n/p(n)).) Nevertheless, a big slack job contributes all of its weight to the fractional
objective function.
We now restrict our attention to the set of small slack jobs and explain how to compute a
fractional solution for them. For this we solve an LP. To bound the number of variables and
constraints in the LP we partition time into at most n(p(n) + 1) slots. Instead of having a
variable for each job instance we consider at most n^2 (p(n) + 1) variables, where for each job J_i
there are at most n(p(n) + 1) variables and the j-th variable "represents" all the job instances
of J_i that start in the j-th time slot. Similarly, we consider at most n(p(n) + 1) constraints,
where the j-th constraint "covers" the j-th time slot. For each small slack job J_i we place dividers
along the time axis at the points r_i + j (d_i − r_i)/p(n), for j = 0, 1, ..., p(n). After defining
all the n(p(n) + 1) dividers, the time slots are determined by the adjacent dividers. The main
observation is that for each small slack job J_i, no interval can be fully contained in a time slot, i.e.,
between two consecutive dividers.
The LP formulation for the modified variables and constraints is slightly different from the
original formulation. To see why, consider a feasible schedule. As mentioned above, a job instance
cannot be fully contained in a time slot t. However, the schedule we are considering may consist of
two instances of jobs such that one terminates within time slot t and the other starts within t. If
we keep the constraints that stipulate that the sum of the variables corresponding to intervals that
intersect a time slot is bounded by 1, then we would not be able to represent such a schedule in
our formulation. To overcome this problem, we relax the linear program, and allow that at every
time slot t, the sum of the fractions assigned to the intervals that intersect t can be at most 2. The
relaxed linear program is the following.
    maximize  Σ_i w_i Σ_t x_it
    subject to:
      For each time slot t:        Σ_{(i,t'): [t', t'+ℓ_i) intersects slot t} x_{it'} ≤ 2
      For each job i, 1 ≤ i ≤ n:   Σ_t x_it ≤ 1
      For all i and t:             0 ≤ x_it ≤ 1
It is easy to see that our relaxation guarantees that the value of the objective function in the
above linear program is at least as big as the value of an optimal schedule. We round an optimal
fractional solution in the same way as in the previous section. Since we relaxed our constraints, we
note that when we run the group constrained interval coloring algorithm, the number of mutually
overlapping intervals can be at most twice the number of intervals in each group. Therefore, when
we generate the color classes P_1, ..., P_m, we can only guarantee that Σ_{i=1}^{m} β_i ≤ 3,
yielding an approximation factor of 3. We
conclude,
Theorem 4.3 The approximation factor of the strongly polynomial algorithm that rounds a fractional
solution is 3.
4.1.2 Unrelated machines
In this section we consider the case of k unrelated machines. We first present a linear programming
formulation. For clarity, we give the LP formulation for polynomially bounded integral inputs.
However, the construction given above that achieves a strongly polynomial algorithm for arbitrary
inputs can be applied here as well. Assume that there are N time slots. For each job J_i and
machine j, define a variable x_itj for each instance [t, t + p_ij) of J_i on machine j. The linear
program is the following.
maximize   Sum_i Sum_j Sum_t w_i * x_itj
subject to:
For each time slot t and machine j:   Sum over all (i, t') such that t' <= t < t' + p_ij:   x_it'j <= 1
For each job i, 1 <= i <= n:   Sum_j Sum_t x_itj <= 1
For all i, j, and t:   0 <= x_itj <= 1
The algorithm rounds the fractional solution machine by machine. Let S_i denote the set of jobs
scheduled on machine i in the rounded solution. When rounding machine i, we first discard from
its fractional solution all intervals belonging to jobs chosen to S_1, ..., S_{i-1}. Let c denote the
approximation factor that can be achieved when rounding a single machine, for polynomially
bounded integral inputs and for arbitrary inputs respectively.
Theorem 4.4 The approximation factor of the algorithm that rounds a k-machine solution is c + 1.
Proof: Let F_i, 1 <= i <= k, denote the fractional solution of machine i, and let w(F_i) denote its
value. Denote by F'_i the fractional solution of machine i after discarding all intervals belonging to
jobs chosen to S_1, ..., S_{i-1}.
We know that for all i, 1 <= i <= k,
c * w(S_i) >= w(F'_i).
Adding up all the inequalities, since the sets S_i are mutually disjoint, we get that,
c * w(S) >= Sum_{i=1}^{k} w(F'_i).
Recall that for each job i, the sum of the values of the fractional solution assigned to the intervals
belonging to it in all the machines does not exceed 1. Therefore,
Sum_{i=1}^{k} w(F'_i) >= Sum_{i=1}^{k} w(F_i) - w(S).
Yielding that
w(S) >= (Sum_{i=1}^{k} w(F_i)) / (c + 1).
In this subsection we apply Theorem 3.3 for the case of weighted jobs and identical machines. We
distinguish between the cases of polynomially bounded integral input and arbitrary input.
Theorem 4.5 There exists an algorithm for the weighted jobs and identical machines case that
achieves an approximation factor of 1/(1 - (1 - N_0/((k+1)N_0 - 1))^k) for
polynomially bounded integral input, and of 1/(1 - (1 - N_0/((2k+1)N_0 - 1))^k)
for arbitrary input.
Proof: As shown above, a linear program can be formulated such that the value of its optimal
solution is at least as big as the value of an optimal schedule. Let N_0 be chosen in the same way as
in the discussion preceding Theorem 4.2. We claim that using our rounding scheme, this feasible
solution defines an interval graph that can be colored by (k+1)N_0 - 1 colors for integral polynomial
size inputs and by (2k+1)N_0 - 1 colors for arbitrary inputs.
Consider first the case of integral polynomial size input. In the interval graph that is induced
by the solution of the LP, there are at most N_0 intervals (that correspond to the same job) in
the same group, and at most kN_0 intervals mutually overlap at any point of time. Applying our
group constrained interval coloring, we get a valid coloring with (k+1)N_0 - 1 colors. Similarly,
for arbitrary inputs, in the interval graph which is induced by the solution of the LP, there are at
most N_0 intervals (that correspond to the same job) in the same group, and at most 2kN_0 intervals
mutually overlap. Applying our group constrained interval coloring, we get a valid coloring with
(2k+1)N_0 - 1 colors. This implies an approximation factor of ((k+1)N_0 - 1)/N_0 for polynomial size input and of
((2k+1)N_0 - 1)/N_0 for arbitrary input. In other words, this is the approximation factor that can be achieved with a single machine
when compared to an optimal algorithm that uses k identical machines. Setting these values
in our paradigm for transforming an algorithm for a single machine to an algorithm for k
identical machines, yields the claimed approximation factors. 2
Remark: Note that as k tends to infinity, the approximation factor is e/(e - 1), approximately 1.58198, for both
unweighted jobs and for weighted jobs with integral polynomial size inputs. For arbitrary input,
the approximation factor is 1/(1 - e^(-1/2)), approximately 2.54149. Setting k = 1 we get that these bounds coincide with
the bounds for a single machine. For every k > 2 and for both cases these bounds improve upon
the bounds for unrelated machines (of 3 and 4).
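The limiting values quoted in the remark can be checked numerically (the closed form 1/(1 - e^(-1/2)) for the arbitrary-input limit is my reading of the garbled expression):

```python
import math

# Limiting approximation factors as k tends to infinity.
unweighted_limit = math.e / (math.e - 1)     # unweighted / integral weighted inputs
arbitrary_limit = 1 / (1 - math.exp(-0.5))   # arbitrary inputs (assumed closed form)
```

Both evaluate to the constants stated in the remark, about 1.582 and 2.541.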
4.2 A combinatorial algorithm
In this section we present a combinatorial algorithm for the weighted machines model. We first
present an algorithm for the single-machine version and then we show how to extend it to the case
where there are k > 1 machines, even in the unrelated machines model.
4.2.1 A single machine
The algorithm is inspired by on-line call admission algorithms (see [12, 7]). We scan the job
instances (or intervals) one by one. For each job instance, we either accept it, or reject it. We note
that rejection is an irrevocable decision, whereas acceptance can be temporary, i.e., an accepted
job may still be rejected at a later point of time. We remark that in the case of non-preemptive
on-line call admission, a constant competitive factor cannot be achieved by such an algorithm. The
reason is that due to the on-line nature of the problem jobs must be considered in the order of
their release time. Our algorithm has the freedom to order the jobs in a different way, yielding a
constant approximation factor.
We now outline the algorithm. All feasible intervals of all jobs are scanned from left to right (on
the time axis) sorted by their end points. The algorithm maintains a set A of currently accepted
intervals. When a new interval, I, is considered according to the sorted order, it is immediately
rejected if it belongs to a job that already has an instance in A, and immediately accepted if it
does not overlap with any other interval in A. In case of acceptance, interval I is added to A. If
I overlaps with one or more intervals in A, it is accepted only if its weight is more than beta (to be
determined later) times the sum of the weights of all overlapping intervals. In this case, we say that
I "preempts" these overlapping intervals. We add I to A and discard all the overlapping intervals
from A. The process ends when there are no more intervals to scan.
A more formal description of our algorithm, called Algorithm ADMISSION, is given in Figure 2.
The main difficulty in implementing the above algorithm is in scanning an infinite number of
intervals. After proving the performance of the algorithm we show how to overcome this difficulty.
Informally, we say that an interval I "caused" the rejection or preemption of another interval J,
Algorithm ADMISSION:
1. Let A be the set of accepted job instances.
Initially, A := the empty set.
2. Let I be the set of the yet unprocessed job instances.
Initially, I is the set of all feasible job instances.
3. While I is not empty repeat the following procedure:
Let I in J_i be the job instance that terminates earliest among all instances in I
and let w be its weight.
Let W be the sum of the weights of all instances I_1, ..., I_h in A that overlap I.
(a) I := I \ {I}.
(b) If J_i has an instance in A then reject I.
(c) Else if W = 0 then accept I:
A := A union {I}.
(d) Else if w/W > beta then accept I and preempt I_1, ..., I_h:
A := (A \ {I_1, ..., I_h}) union {I}.
(e) Else reject I.
Figure
2: Algorithm ADMISSION
if either interval I directly rejected or preempted interval J, or if it preempted another interval that
caused the rejection or preemption of interval J. (Note that this relation is defined recursively, as
an interval I may preempt another interval, that preempted other intervals, which in turn rejected
other intervals. In this case interval I caused the rejection or preemption of all these intervals.)
Fix an interval I that was accepted by the algorithm, and consider all the intervals chosen by the
optimal solution, the rejection or preemption of which was caused by interval I. We prove that the
total weight of these intervals is at most f(beta) times the weight of the accepted interval I, for some
function f. Optimizing beta, we get the 3 + 2*sqrt(2) bound.
Theorem 4.6 The approximation factor of Algorithm ADMISSION is 3 + 2*sqrt(2).
Proof: Let O be the set of intervals chosen by an optimal algorithm OPT. Let the set of intervals
accepted by Algorithm ADMISSION be denoted by A. For each interval I in A we define a set R(I)
of all the intervals in O that are "accounted for" by I. This set consists of I in case I is in O, and of
all the intervals in O the rejection or preemption of which was caused by I. More formally:
Assume I is accepted by rule 3(c). Then, the set R(I) is initialized to be {I} in case I is in O and
the empty set, otherwise.
Assume I is accepted by rule 3(d). Then R(I) is initialized to contain all those intervals from
O that were directly preempted by I and the union of the sets R(I') of all the intervals I'
that were preempted by I. In addition, R(I) contains I in case I is in O.
Assume J in O is rejected by rule 3(b). Let I in A be the interval that caused the rejection of
J. Note that both I and J belong to the same job. In this case add J to R(I).
Assume J in O was rejected by rule 3(e) and let I_1, ..., I_h be the intervals in A that overlapped
with J at the time of rejection. Let w be the weight of J and let w_j be the weight of I_j for
j = 1, ..., h. We view J as h imaginary intervals J_1, ..., J_h, where the weight of J_j is w_j * w/W,
and add J_j to R(I_j). Note that due to the rejection rule it follows that
the weight of J_j is no more than beta times the weight of I_j.
It is not hard to see that each interval from O, or a portion of it if we use rule 3(e), belongs exactly
to one set R(I) for some I in A. Thus, the union of all sets R(I) for I in A covers O.
We now fix an interval I in A. Let w be the weight of I and let W be the sum of weights of all
intervals in R(I). Define rho = W/w. Our goal is to bound rho from above.
Interval I may directly reject at most one interval from O. Let w_r be the weight of (the portion
of) the interval I_r in O intersect R(I) that was directly rejected by I, if such exists. Otherwise, let w_r = 0.
Observe that w_r <= beta*w, since otherwise I_r would not have been rejected. Let I' in O be the interval
that belongs to the same job as the one to which I belongs (it may be I itself), if such exists. By
definition, the weight of I' is w. Let W_p be the sum of the weights of the rest of the
intervals in R(I). Define rho_p = W_p/w. It follows that rho <= 1 + beta + rho_p.
We now assume inductively that the bound is valid for intervals with earlier end point than
the end point of I. Since the overall weight of the jobs that I directly preempted is at most w/beta, we
get that W_p <= rho * w/beta.
This implies that rho <= 1 + beta + rho/beta.
Therefore,
rho <= beta*(beta + 1)/(beta - 1). This expression is minimized for beta = 1 + sqrt(2),
which implies that
rho <= 3 + 2*sqrt(2).
Finally, since the bound holds for all the intervals in A and since the union of all R(I) sets covers
all the intervals taken by OPT, we get that the value of OPT is at most rho times the value of A.
Hence, the approximation factor is 3 + 2*sqrt(2). 2
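Under the recurrence bound rho <= beta*(beta+1)/(beta-1) (my reading of the garbled derivation above), a quick numeric check that beta = 1 + sqrt(2) indeed yields the 3 + 2*sqrt(2) bound:

```python
import math

def rho(beta):
    # Bound from the inductive argument: rho <= beta*(beta+1)/(beta-1)
    return beta * (beta + 1) / (beta - 1)

best_beta = 1 + math.sqrt(2)
# Grid search over beta in (1, 5] confirms the analytic minimizer.
grid_min = min(rho(1 + i / 10000) for i in range(1, 40001))
```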
Implementation: Observe that Step (3) of the algorithm has to be invoked only when there is a
"status" change, i.e., either a new job becomes available (n times) or a job in the schedule ends (n
times). Each time Step (3) is invoked the total number of job instances that have to be examined
is at most n (at most one for each job). To implement the algorithm we employ a priority queue
that holds intervals according to their endpoint. At any point of time it is enough to hold at most
one job instance for each job in the priority queue. It turns out that the total number of operations
for retrieving the next instance is O(n log n), totalling to O(n^2 log n) operations.
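The scan can be implemented directly; a minimal sketch of Algorithm ADMISSION (the instance encoding (job, start, end, weight) and the plain sorted-list scan, rather than the priority queue described above, are my simplifications):

```python
import math

def admission(instances, beta=1 + math.sqrt(2)):
    """Single-machine ADMISSION sketch; each instance is (job, start, end, weight)."""
    A = []  # currently accepted instances
    # Scan all feasible instances sorted by their end points (left to right).
    for job, s, e, w in sorted(instances, key=lambda t: t[2]):
        if any(a[0] == job for a in A):        # rule (b): job already has an instance in A
            continue
        overlap = [a for a in A if a[1] < e and s < a[2]]  # instances in A overlapping [s, e)
        W = sum(a[3] for a in overlap)
        if W == 0 or w > beta * W:             # rules (c)/(d): accept, preempting the overlap
            A = [a for a in A if a not in overlap] + [(job, s, e, w)]
        # rule (e): otherwise reject, irrevocably
    return A
```

With beta = 1 + sqrt(2), a new interval preempts its overlap only when it is more than beta times heavier, matching the analysis above.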
4.2.2 Unrelated machines
If the number of unrelated machines is k > 1, we call Algorithm ADMISSION k times, machine by
machine, in an arbitrary order, where the set of jobs considered in the i-th call does not contain
the jobs already scheduled on machines M_1, ..., M_{i-1}. The analysis that shows how the
3 + 2*sqrt(2), approximately 5.828, bound carries over to the case of unrelated machines is very similar to the analysis presented
in the proof of Theorem 4.6. The main difference is in the definition of R(I). For each interval
I in A that was executed on machine M_i, we define the set R(I) to consist of I in case I is in O,
and of all the intervals that (i) were executed on machine M_i in the optimal schedule, and (ii) the
rejection or preemption of these jobs was caused by I.
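The machine-by-machine extension can be sketched as follows (self-contained, with a compact copy of the single-machine routine; the encoding is my assumption):

```python
import math

def admission(instances, beta=1 + math.sqrt(2)):
    # Compact single-machine ADMISSION (see Figure 2); instance = (job, start, end, weight).
    A = []
    for job, s, e, w in sorted(instances, key=lambda t: t[2]):
        if any(a[0] == job for a in A):
            continue
        over = [a for a in A if a[1] < e and s < a[2]]
        W = sum(a[3] for a in over)
        if W == 0 or w > beta * W:
            A = [a for a in A if a not in over] + [(job, s, e, w)]
    return A

def k_machine_admission(per_machine_instances):
    """Call ADMISSION machine by machine (arbitrary order); jobs already
    scheduled on earlier machines are withheld from later calls."""
    scheduled, schedule = set(), []
    for insts in per_machine_instances:
        A = admission([t for t in insts if t[0] not in scheduled])
        scheduled |= {a[0] for a in A}
        schedule.append(A)
    return schedule
```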
5 The MAX-SNP hardness
We show that the problem of scheduling unweighted jobs on unrelated machines is MAX-SNP Hard.
This is done by reducing a variant of Max-2SAT, in which each variable occurs at most three times,
to this problem. In this variant of Max-2SAT, we are given a collection of clauses, each consisting
of two (Boolean) variables, with the additional constraint that each variable occurs at most three
times, and the goal is to find an assignment of values to these variables that would maximize the
number of clauses that are satisfied (i.e., contain at least one literal that has a "true" value). This
problem is known to be MAX-SNP Hard (cf. [26]).
Given an instance of the Max-2SAT problem we show how to construct an instance of the
problem of unweighted jobs, unrelated machines, such that the value of the Max-2SAT problem is
equal to the value of the scheduling problem. Each variable x_i is associated with a machine M_i.
Each clause C_j is associated with a job. The release time of any job is 0 and its deadline is 3.
The job can be executed only on the two machines corresponding to the variables the clause C_j
contains. (In other words, the processing time of the job on the rest of the machines is infinite.)
Suppose that clause C_j contains a variable x_i as a positive (negative) literal. The processing
time of the job corresponding to C_j on M_i is 3/k, where k in {1, 2, 3} is the number of occurrences
of variable x_i as a positive (negative) literal. Note that in case variable x_i occurs in both positive
and negative forms, it occurs exactly once in one of the forms since a variable x_i occurs at most
three times overall. It follows that in any feasible schedule, machine M_i cannot execute both a job
that corresponds to a positive literal occurrence and a job that corresponds to a negative literal
occurrence.
We conclude that if m jobs can be scheduled, then m clauses can be satisfied. In the other
direction, it is not hard to verify that if m clauses can be satisfied, then m jobs can be scheduled.
Since Max-2SAT with the restriction that each variable occurs at most three times is MAX-SNP
Hard, the unweighted jobs and unrelated machines case is MAX-SNP Hard as well.
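The reduction can be made concrete (the clause/literal encoding conventions below are mine):

```python
from collections import Counter

def sat_to_scheduling(clauses):
    """Clauses are pairs of nonzero ints; literal v > 0 means x_v, v < 0 means not x_v.
    Returns proc[(j, i)]: processing time of clause-job j on machine i
    (implicitly infinite on unlisted machines). Every job has release 0, deadline 3."""
    pos, neg = Counter(), Counter()
    for c in clauses:
        for lit in c:
            (pos if lit > 0 else neg)[abs(lit)] += 1
    proc = {}
    for j, c in enumerate(clauses):
        for lit in c:
            i = abs(lit)
            k = pos[i] if lit > 0 else neg[i]  # occurrences of this literal form, k in {1, 2, 3}
            proc[(j, i)] = 3 / k
    return proc
```

Machine M_i can then fit, within the window [0, 3], exactly the jobs of one literal form of x_i, mirroring a truth assignment.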
Acknowledgment
We are indebted to Joel Wein for many helpful discussions, and especially for his suggestion to
consider the general case of maximizing the throughput of jobs with release times and deadlines.
--R
Scheduling time-constrained communication in linear networks
Minimizing stall time in single and parallel disk systems
Scheduling jobs with
Competitive Bandwidth Allocation with Preemption
On the competitiveness of on-line real-time task scheduling
Scheduling in Computer and Manufacturing Systems
Location of bank accounts to optimize float
Note on scheduling intervals on-line
Two processor scheduling with start times and deadlines
Computers and Intractability: A Guide to the Theory of NP-Completeness
Patience is a Virtue: The E
Maximizing the value of a space mission
A solvable case of one machine scheduling problem with ready and due dates
An optimal on-line scheduling algorithm for overloaded real-time systems
Sequencing to minimize the weighted number of tardy jobs
A dynamic programming algorithm for preemptive scheduling of a single machine to minimize the number of late jobs
"Sequencing and Scheduling: Algorithms and Complexity"
Online interval scheduling
Algorithms for scheduling independent tasks
On the approximation of maximum satisfiability
--TR
--CTR
Cash J. Costello , Christopher P. Diehl , Amit Banerjee , Hesky Fisher, Scheduling an active camera to observe people, Proceedings of the ACM 2nd international workshop on Video surveillance & sensor networks, October 15-15, 2004, New York, NY, USA
Lixin Tang , Gongshu Wang , Jiyin Liu, A branch-and-price algorithm to solve the molten iron allocation problem in iron and steel industry, Computers and Operations Research, v.34 n.10, p.3001-3015, October, 2007
Thomas Erlebach , Klaus Jansen, Implementation of approximation algorithms for weighted and unweighted edge-disjoint paths in bidirected trees, Journal of Experimental Algorithmics (JEA), 7, p.6, 2002
Randeep Bhatia , Julia Chuzhoy , Ari Freund , Joseph (Seffi) Naor, Algorithmic aspects of bandwidth trading, ACM Transactions on Algorithms (TALG), v.3 n.1, February 2007
Thomas Erlebach , Frits C. R. Spieksma, Interval selection: applications, algorithms, and lower bounds, Journal of Algorithms, v.46 n.1, p.27-53, January
Laura Barbulescu , Jean-Paul Watson , L. Darrell Whitley , Adele E. Howe, Scheduling SpaceGround Communications for the Air Force Satellite Control Network, Journal of Scheduling, v.7 n.1, p.7-34, January-February 2004
Julia Chuzhoy , Joseph (Seffi) Naor, New hardness results for congestion minimization and machine scheduling, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Julia Chuzhoy , Rafail Ostrovsky , Yuval Rabani, Approximation Algorithms for the Job Interval Selection Problem and Related Scheduling Problems, Mathematics of Operations Research, v.31 n.4, p.730-738, November 2006
Amotz Bar-Noy , Reuven Bar-Yehuda , Ari Freund , Joseph (Seffi) Naor , Baruch Schieber, A unified approach to approximating resource allocation and scheduling, Journal of the ACM (JACM), v.48 n.5, p.1069-1090, September 2001
Thomas Erlebach , Klaus Jansen, Conversion of coloring algorithms into maximum weight independent set algorithms, Discrete Applied Mathematics, v.148 n.1, p.107-125,
Amotz Bar-Noy , Sudipto Guha , Yoav Katz , Joseph (Seffi) Naor , Baruch Schieber , Hadas Shachnai, Throughput maximization of real-time scheduling with batching, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.742-751, January 06-08, 2002, San Francisco, California
Julia Chuzhoy , Joseph (Seffi) Naor, New hardness results for congestion minimization and machine scheduling, Journal of the ACM (JACM), v.53 n.5, p.707-721, September 2006
Reuven Bar-Yehuda , Keren Bendel , Ari Freund , Dror Rawitz, Local ratio: A unified framework for approximation algorithms. In Memoriam: Shimon Even 1935-2004, ACM Computing Surveys (CSUR), v.36 n.4, p.422-463, December 2004
Faisal Z. Qureshi , Demetri Terzopoulos, Surveillance camera scheduling: a virtual vision approach, Proceedings of the third ACM international workshop on Video surveillance & sensor networks, November 11-11, 2005, Hilton, Singapore | parallel machines scheduling;approximation algorithms;throughput;multiple machines scheduling;scheduling;real-time scheduling |
586858 | Checking Approximate Computations of Polynomials and Functional Equations. | A majority of the results on self-testing and correcting deal with programs which purport to compute the correct results precisely. We relax this notion of correctness and show how to check programs that compute only a numerical approximation to the correct answer. The types of programs that we deal with are those computing polynomials and functions defined by certain types of functional equations. We present results showing how to perform approximate checking, self-testing, and self-correcting of polynomials, settling in the affirmative a question raised by [P. Gemmell et al., Proceedings of the 23rd ACM Symposium on Theory of Computing, 1991, pp. 32--42; R. Rubinfeld and M. Sudan, Proceedings of the Third Annual ACM-SIAM Symposium on Discrete Algorithms, Orlando, FL, 1992, pp. 23--43; R. Rubinfeld and M. Sudan, SIAM J. Comput., 25 (1996), pp. 252--271]. We obtain this by first building approximate self-testers for linear and multilinear functions. We then show how to perform approximate checking, self-testing, and self-correcting for those functions that satisfy addition theorems, settling a question raised by [R. Rubinfeld, SIAM J. Comput., 28 (1999), pp. 1972--1997]. In both cases, we show that the properties used to test programs for these functions are both robust (in the approximate sense) and stable. Finally, we explore the use of reductions between functional equations in the context of approximate self-testing. Our results have implications for the stability theory of functional equations. | Introduction
Program checking was introduced by Blum and Kannan [BK89] in order to allow one to use
a program safely, without having to know a priori that the program is correct on all inputs.
Related notions of self-testing and self-correcting were further explored in [BLR93, Lip91].
These notions are seen to be powerful from a practical point of view (cf. [BW94]) and from a
theoretical angle (cf. [AS92, ALM+92]) as well. The techniques used usually consist of tests
performed at run-time which compare the output of the program either to a predetermined
value or to a function of outputs of the same program at different inputs. In order to apply
these powerful techniques to programs computing real valued functions, however, several
issues dealing with precision need to be dealt with. The standard model, which considers an
output to be wrong even if it is off by a very small margin, is too strong to make practical
sense, due to reasons such as the following. (1) In many cases, the algorithm is only intended
to compute an approximation, e.g., Newton's method. (2) Representational limitations and
roundoff/truncation errors are inevitable in real-valued computations. (3) The representation
of some fundamental constants (e.g., pi) is inherently imprecise.
The framework presented by [GLR+91] accommodates these inherently inevitable
or acceptably small losses of information by overlooking small precision errors while
detecting actual "bugs", which manifest themselves with greater magnitude. Given a function
f, a program P that purports to compute f, and an error bound epsilon, if |P(x) - f(x)| <= epsilon
(denoted P(x) ~ f(x)) under some appropriate notion of norm, we say P(x) is approximately
correct on input x. Approximate result checkers test if P is approximately correct
for a given input x. Approximate self-testers are programs that test if P is approximately
correct for most inputs. Approximate self-correctors take programs that are approximately
correct on most inputs and turn them into programs that are approximately correct on every
input.
Domains. We work with finite subsets of fixed point arithmetic that we refer to as finite
rational domains. For n, s positive integers, D_{n,s} = {x = p/2^s : p an integer, |x| <= 2^n}, where n bounds the
magnitude and 2^{-s} is the precision. We allow s and n to vary for generality. For a domain D, let D+ and D- denote
the positive and negative elements in D.
Testing using Properties. There are many approaches to building self-testers. We
illustrate one paradigm that has been particularly useful. In this approach, in order to test if
a program P computes a function f on most inputs, we test if P satisfies certain properties
of f.
As an example, consider a function f and a property of the form "f(x + c) = f(x) + d"
that f satisfies. One might pick random inputs x and verify that P satisfies the property.
Clearly, if for some x, P violates the property, then P is incorrect. The program, however,
might be quite incorrect and still satisfy the property for most choices of random
inputs. In particular, there exists a P such that (i) with
high probability, P satisfies the property at random x and hence will pass the test, and (ii)
there is no function that satisfies the property for all x such that P agrees with this function
on most inputs. Thus we see that this method, when used naively, does not yield a self-tester
that works according to our specifications. Nevertheless, this approach has been used as a
good heuristic to check the correctness of programs [Cod91, CS91, Vai93].
As an example of a property that does yield a good tester, consider the linearity property
f(x + y) = f(x) + f(y), which is satisfied only by functions mapping D_{n,s} to R of the form
f(x) = cx. If, by random sampling, we conclude that the program P satisfies this property
for most x, y, it can be shown that P agrees with a linear function g on most inputs [BLR93,
Rub94]. We call the linearity property, and any property that exhibits such behavior, a
robust property.
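As an illustration of such a property test (uniform sampling of pairs; the robustness proof in [Rub94] uses a different distribution), a sketch:

```python
import random

def approx_linearity_test(P, domain, eps=1e-9, trials=1000):
    """Estimate how often |P(x+y) - P(x) - P(y)| > eps on random pairs.
    Note: P is queried at x + y, which may fall outside `domain` -- the
    reason the paper allows calls on a larger domain."""
    bad = 0
    for _ in range(trials):
        x, y = random.choice(domain), random.choice(domain)
        if abs(P(x + y) - P(x) - P(y)) > eps:
            bad += 1
    return bad / trials
```

A genuinely linear program fails on no pair, while a program perturbed at even a single domain point fails on a noticeable fraction of pairs.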
We now describe more formally how to build a self-tester for a class F of functions that
can be characterized by a robust property. Our two-step approach, which was introduced in
[BLR93], is: (i) test that P satisfies the robust property (property testing), and (ii) check if P
agrees with a specific member of F (equality testing). The success of this approach depends
on finding robust properties which are both easy to test and lead to efficient equality tests.
A property is a pair <I, E_psi(n,s)>, consisting of an equation I_f relating
the values of the function f at various tuples of locations <x_1, ..., x_k>,
and a distribution E_psi(n,s) over D^k_psi(n,s) from which the locations are picked. The property <I, E_psi(n,s)> is said to characterize a function family F in the following way: a function f is a member of F if and
only if I_f holds for every tuple that has non-zero measure under E_psi(n,s). For
instance, the linearity property can be written as I_f : f(x_1 + x_2) - f(x_1) - f(x_2) = 0, where
E_psi(n,s) is a distribution on <x_1, x_2> in which x_1 and x_2 are chosen randomly
from some distribution 2 over the domain D_psi(n,s). In this case <I, E^Lin_psi(n,s)> characterizes
F = {f : f(x) = cx, c in R}, the set of all linear functions over D_psi(n,s). We will adhere to
this definition of a property throughout the paper; however, for simplicity of notation, when
appropriate, we will talk about the distribution and the equality together. For instance, it
is more intuitive to express the linearity property as f(x + y) = f(x) + f(y), giving the
distributions of x, y, than writing it as a pair.
1 We naturally extend the mod function to D_{n,s} by letting x mod K stand for (j mod K) * 2^{-s},
for x, K in D_{n,s}, where j = x * 2^s.
We first consider robust properties in more detail. Suppose we want to infer the correctness
of the program for the domain D_{n,s}. Then we allow calls to the program on a larger
domain D_psi(n,s), where psi is a fixed function that depends on the structure of I.
Ideally, we would like psi(n, s) = (n, s). But, for technical reasons, we
allow D_psi(n,s) to be a proper, but not too much larger, superset of D_{n,s} (in particular, the
description size of an element in D_psi(n,s) should be polynomial in the description size of an
element in D_{n,s}). 3
To use a property in a self-tester, one must prove that the property is robust. Informally,
the (epsilon, delta)-robustness of the property <I, E_psi(n,s)> implies that if, for a program P,
I_P is satisfied with probability at least 1 - delta when <x_1, ..., x_k> is chosen from
the distribution E_psi(n,s), then there is a function g in F that agrees with P on a 1 - epsilon fraction of
the inputs in D_{n,s}. In the case of linearity, it can be shown that there is a distribution E^Lin_{11n,s}
on D^2_{11n,s} such that the property is (2*delta, delta; D_{11n,s}, D_{n,s})-robust
for all delta < 1/48 [BLR93, Rub94]. Therefore, once it is tested that P satisfies
P(x + y) = P(x) + P(y) with probability 1 - delta when the inputs are picked randomly from E^Lin_{11n,s},
it is possible to conclude that P agrees with some linear function on most inputs from
D_{n,s}. A somewhat involved definition of robustness is given in [Rub94]. Given a function
psi such that for all n, s, D_{n,s} is a large enough subset of D_psi(n,s), in this paper we say that a
property is robust if: for all 0 < epsilon < 1, there is a delta such that for all n, s the property is
(epsilon, delta)-robust.
2 For example, choosing x_1 and x_2 uniformly from D_psi(n,s) suffices for characterizing linearity. To prove
robustness, however, [Rub94] uses a more complicated distribution that we do not describe here.
3 Alternatively, one could test the program over the domain D_{n,s} and attempt to infer the correctness of
the program on most inputs from D_{n',s'}, where D_{n',s'} is a large subdomain of D_{n,s}.
We now consider equality testing. Recall that once it is determined that P satisfies
the robust property, then equality testing determines that P agrees on most inputs with a
specific member of F. For instance, in the case of linearity, to ensure that P computes the
specific linear function f(x) = cx on most inputs, we perform the equality test which ensures
that P(x) = c * x for most x. Neither the property test nor the equality test on its
own is sufficient for testing the program. However, since f(x) = cx is the only function that
satisfies both the linearity property and the above equality property, the combination of the
property test and the equality test can be shown to be sufficient for constructing self-testers.
This combined approach yields extremely efficient testers (that only make O(1) calls to
the program for fixed epsilon and delta) for programs computing homomorphisms (e.g., multiplication
of integers and matrices, exponentiation, logarithm). This idea is further generalized in
[Rub94], where the class of functional equations called addition theorems is shown to be useful
for self-testing. An addition theorem is a mathematical identity of the form: for all x, y,
f(x + y) = G(f(x), f(y)). Addition theorems characterize many useful and interesting mathematical
functions [Acz66, CR92]. When G is algebraic, they can be used to characterize families of
functions that are rational functions of x, e^{cx}, and doubly periodic functions (see Table 1 for
examples of functional equations and the families of functions that they characterize over the
reals). Polynomials of degree d can be characterized via several different robust functional
equations (e.g., [BFL91, Lun91, ALM+92]).
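One standard robust characterization of degree-d polynomials is the finite-difference identity sum_{i=0}^{d+1} (-1)^i C(d+1, i) f(x + i*t) = 0 for all x, t (stated here as an illustration of the kind of functional equation the cited works use):

```python
from math import comb

def diff_identity(f, d, x, t=1):
    # sum_{i=0}^{d+1} (-1)^i * C(d+1, i) * f(x + i*t);
    # vanishes for every x, t exactly when f is a polynomial of degree <= d.
    return sum((-1) ** i * comb(d + 1, i) * f(x + i * t) for i in range(d + 2))
```

A program failing this identity at a noticeable rate over random (x, t) cannot agree with any degree-d polynomial on most inputs.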
Approximate Robustness and Stability. When the program works with finite
precision, the properties upon which the testers are built will rarely be satisfied, even by a
program whose answers are correct up to the required (or hardware-wise maximal) number
of digits, since they involve strict equalities. Thus, when testing, one might be willing to
pass programs for which the properties are only approximately satisfied. This relaxation in
the tests, however, leads to some difficulties, for in the approximate setting (1) it is harder
Table 1: Some Addition Theorems of the form f(x + y) = G(f(x), f(y)). Among the families
listed are cot Ax, sin Ax / sin(Ax + a), sinh Ax / sinh(Ax + a), and A^x; the corresponding
expressions G involve terms such as f(x) + f(y) - 2f(x)f(y) cos a and
f(x) + f(y) + 2f(x)f(y) cosh a.
to analyze which function families are solutions to the robust properties, and (2) equality
testing is more difficult. For instance, it is not obvious which family of functions would satisfy
both f(x + y) ~ f(x) + f(y) for all x, y (approximate linearity property) and
f(x) ~ c * x
for all x in D_psi(n,s) (approximate equality property).
To construct approximate self-testers, our approach is to first investigate a notion of
approximate robustness of the property to be used. We first require a notion of distance
between two functions.
Definition 1 (Chebyshev Norm) For a function f on a domain D,
||f||_D = max_{x in D} {|f(x)|}.
When the domain is obvious from the context, we drop it. Given functions f, g, the distance
between them is ||f - g||. Next, we define the approximation of a function by another:
Definition 2 The function P (epsilon, delta)-approximates f on domain D if |P(x) - f(x)| <= epsilon on at
least a 1 - delta fraction of D.
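Definitions 1 and 2 translate directly into a brute-force check over a finite domain (the function names are mine):

```python
def cheb_dist(P, f, domain):
    # Chebyshev distance ||P - f|| over a finite domain (Definition 1).
    return max(abs(P(x) - f(x)) for x in domain)

def approximates(P, f, eps, delta, domain):
    # Definition 2: P (eps, delta)-approximates f if |P(x) - f(x)| <= eps
    # on at least a (1 - delta) fraction of the domain.
    good = sum(1 for x in domain if abs(P(x) - f(x)) <= eps)
    return good >= (1 - delta) * len(domain)
```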
Approximate robustness is a natural extension of the robustness of a property. We say that a
program satisfies a property approximately if the property is true of the program when exact
equalities are replaced by approximate equalities. Once again consider the linearity property
and a program P that satisfies the property approximately (i.e., P(x + y) ~ P(x) + P(y)) on all
but a delta fraction of the choices of <x, y> from E_psi(n,s). The approximate
robustness of linearity implies that there exists a linear function g and a choice of epsilon', delta'
(depending on epsilon and delta) such that g (epsilon', delta')-approximates P on D_{n,s}.
In general, we would like to define approximate robustness of a property
as the following: If a program P satisfies the equation I approximately on most
choices of inputs according to the distribution E_psi(n,s), then there exists a function g that (i)
satisfies I approximately on all inputs chosen according to E_{n,s} and (ii) approximates P on most
inputs in D_{n,s}. The function relating the distributions used for describing the behaviors of
P and g depends on I, but is not required to be uniformly Turing-computable.
We now give a formal definition of approximate robustness:
Definition 3 (Approximate Robustness) Let <I, E_psi(n,s)> characterize the family of functions
F over the domain D_psi(n,s). Let F' be the family of functions satisfying I approximately
on all inputs chosen according to E_{n,s}. A property <I, E_psi(n,s)> for a function family F' is
(epsilon, delta)-approximately robust if for every P, Pr_{x_1,...,x_k from E_psi(n,s)} [I_P is approximately satisfied] >= 1 - delta
implies there is a g in F' that (epsilon, delta)-approximates P on D_{n,s} and I_g is approximately satisfied on E_{n,s}.
Once we know that the property is approximately robust, the second step is to analyze
the stability of the property, i.e., to characterize the set of functions F′ that satisfy the
property approximately and compare it to F, the set of functions that satisfy the property
exactly (Hyers-Ulam stability). In our linearity example, the problem is the following: given
a g satisfying |g(x + y) − g(x) − g(y)| ≤ δ everywhere in the domain, is there a homomorphism h
such that ‖h − g‖ ≤ δ′, with δ′ depending only on δ and not on the size of the domain?
If the answer is affirmative, we say that the property is stable.
Definition 4 (Stability) A property ⟨I, E_{n,s}⟩ for a function family F is (D_{n,s}, D_{n′,s′}; δ, δ′)-
stable if for every g that satisfies I_g to within δ on all tuples with non-zero support according to E_{n,s}, there
is a function h that satisfies I_h exactly on all tuples with non-zero support according to E_{n′,s′},
with ‖h − g‖_{D_{n′,s′}} ≤ δ′.
If a property is both approximately robust and stable,⁴ then it can be used to determine
whether P approximates some function in the desired family. Furthermore, if we have a
method of doing approximate equality testing, then we can construct an approximate
self-tester.
Previous Work. Previously, not many of the known checkers have been extended to
the approximate case. Often it is rather straightforward to extend the robustness results to
show approximate robustness. However, the difficulty with extending the checkers appears
to lie in showing the stability of the properties. The issue is first mentioned in [GLR+91],
where approximate checkers for mod, exponentiation, and logarithm are constructed. The
domain is assumed to be closed in all of these results. A domain is said to be closed under
an operation if the range of the operation is a subset of the domain. For instance, a finite
precision rational domain is not closed under addition. In [ABC+93] approximate checkers for
sine, cosine, matrix multiplication, matrix inversion, linear system solving, and determinant
are given. The domain is assumed to be closed in the results on sine and cosine. In [BW95]
an approximate checker for floating-point division is given. In [Sud91], a technique which
uses approximation theory is presented to test univariate polynomials of degree at most 9.
It is left open in [GLR+91] whether the properties used to test
polynomial, hyperbolic, and other trigonometric functions can be used in the approximate
setting. For instance, showing the stability of such functional equations is not obvious; if
the functional equation involves division with a large numerator and a small denominator,
a small additive error in the denominator leads to a large additive error in the output.
There has been significant work on the stability of specific functional equations. The
stability of linearity and other homomorphisms is addressed in [Hye41, For80, FS89, Cho84].
The techniques used to prove the above results, however, cease to apply when the domain
is not closed. The stronger property of stability in a non-closed space, called local stability,
is addressed by Skof [Sko83], who proves that Cauchy functional equations are locally stable
on a finite interval in R. The problem of stability of univariate polynomials over continuous
domains is first addressed in [AB83], and the problem of local stability on R is solved in
[Gaj90]. See [For95] for a survey. These results do not extend in an obvious way to finite
subsets of R, and thus cannot be used to show the correctness of self-testers. For those
that can be extended, the error bounds obtained by naive extensions are not optimal. Our
different approach allows us to operate on D_{n,s} and obtain tight bounds.
⁴ The associated distribution needs to be sampleable.
Results. In this paper, we answer the questions of [GLR+91]
in the affirmative, by giving the first approximate versions of most of their testers. We first
present an approximate tester for linear and multilinear functions with tight bounds. These
results apply to several functions, including multiplication, exponentiation, and logarithm,
over non-closed domains. We next present the first approximate testers for multivariate
polynomials. Finally, we show how to approximately test functions satisfying addition theorems.
Our results apply to many algebraic functions of trigonometric and hyperbolic functions
(e.g., sinh, cosh). All of our results apply to non-closed discrete domains.
Since a functional equation over R has more constraints than the same functional equation
over D n;s , it may happen that the functional equation over R characterizes a family of
functions that is a proper subset of the functions characterized by the same functional
equation over D n;s . This does not limit the ability to construct self-testers for programs
for these functions, due to the equality testing performed by self-testers.
To show our results, we prove new local stability results for discrete domains. Our techniques
for showing the stability of multilinearity differ from those used previously in that
(1) we do not require the domain to be discrete and (2) we do not require the range to
be a complete metric space. This allows us to apply our results to multivariate polynomial
characterizations. In addition to new combinatorial arguments, we employ tools from
approximation theory and stability theory. Our techniques appear to be more generally
applicable and cleaner to work with than those previously used.
Self-correctors are built by taking advantage of the random self-reducibility of polynomials
and functional equations [BLR93, Lip91] in the exact case. As in [GLR+91], we employ a
similar idea for the approximate case by making several guesses at the answer and returning
their median as the output. We show that if each guess is within δ of the correct answer
with high probability, then the median yields a good answer with high probability. To build
an approximate checker for all of these functions, we combine the approximate self-tester
and approximate self-corrector as in [BLR93].
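The median trick described above can be sketched for the linear case as follows; the sampling scheme, the domain handling, and the trial count are illustrative assumptions rather than the construction analyzed in the paper:

```python
import random
from statistics import median

def self_correct_linear(P, x, n, s, trials=15):
    # Approximate self-corrector sketch: each guess uses the random
    # self-reduction f(x) = f(x - r) + f(r), with r resampled until
    # x - r stays inside D_{n,s}; the median of the guesses is returned.
    lo, hi = -n / s, n / s
    guesses = []
    for _ in range(trials):
        r = random.randint(-n, n) / s
        while not (lo <= x - r <= hi):
            r = random.randint(-n, n) / s
        guesses.append(P(x - r) + P(r))
    return median(guesses)
```

If each individual guess is within δ of the correct value with probability well above 1/2, the median is within δ of the correct value with high probability.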
Organization. Section 2 addresses the stability of the properties used to test linear
and multilinear functions. Using these results, Section 3 considers approximate self-testing
of polynomials. Section 4 addresses the stability and robustness of functional equations.
Section 5 illustrates the actual construction of approximate self-testers and self-correctors.
2 Linearity and Multilinearity
In this section, we consider the stability of the robust properties used to test linearity and
multilinearity over the finite rational domain D_{n,s}. The results in this section, in addition
to being useful for the testing of linear and multilinear functions, are crucial to our results
in Section 3.
As in [GLR+91], approximate robustness is easy to show by appropriately modifying the
proof of robustness [Rub94]. This involves replacing each exact equality by an approximate
equality and keeping track of the error accrued at each step of the proof. To show stability,
we use two types of bootstrapping arguments: the first shows that an error bound on a small
subset of the domain implies the same error bound on a larger subset of the domain; the
second shows that an error bound on the whole domain implies a tighter error bound over
the same domain. These results can be applied to give the first approximate self-testers for
several functions over D_{n,s}, including multiplication, exponentiation, and logarithm (Section
2.2).
2.1 Approximate Linearity
The following defines formally what it means for a function to be approximately linear:
Definition 5 (Approximate Linearity) A function g is δ-approximately linear on D_{n,s}
if |g(x + y) − g(x) − g(y)| ≤ δ for all x, y such that x, y, x + y ∈ D_{n,s}.
Hyers [Hye41] and Skof [Sko83] obtain a linear approximation to an approximately linear
function when the domain is R. (See Appendix A for their approach.) Their methods are
not extendible to discrete domains.
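On small instances, Definition 5 can be checked exhaustively. The sketch below (helper names are my own) uses exact rational arithmetic and only tests pairs whose sum stays inside the non-closed domain D_{n,s}:

```python
from fractions import Fraction

def D(n, s):
    # Finite rational domain D_{n,s} = { p/s : p an integer, |p| <= n }
    return [Fraction(p, s) for p in range(-n, n + 1)]

def is_approx_linear(g, n, s, delta):
    # delta-approximate linearity: |g(x+y) - g(x) - g(y)| <= delta
    # for all x, y with x, y, x + y in D_{n,s}
    dom = set(D(n, s))
    return all(abs(g(x + y) - g(x) - g(y)) <= delta
               for x in dom for y in dom if x + y in dom)
```

An exactly linear g passes with δ = 0, while adding a constant offset b to a linear function makes the defect exactly |b|.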
Suppose we define h(1/s) = g(1/s). In the 0-approximately linear case (exact linearity), since
g(x + 1/s) = g(x) + g(1/s), by induction on the elements in D_{n,s} we
can show that h = g. This approach is typically used to prove the sufficiency of
the equality test. However, in the δ-approximately linear case for δ ≠ 0, using the same
inductive argument will only yield a linear function h such that |h(i/s) − g(i/s)| ≤ iδ. This is
quite unattractive since the error bound depends on the domain size. Thus, the problem
of obtaining a linear function h whose discrepancy from g is independent of the size of the
domain is non-trivial.
In [GLR+91], a solution is given for when the domain is a finite group. Their technique
requires that the domain be closed under addition, and therefore does not work for D_{n,s}.
We give a brief overview of the scheme in [GLR+91] and point out where it breaks down
for non-closed domains. The existence of a linear h that is close to g is shown in [GLR+91]
by arguing that if D is sufficiently large, then an error of at least δ′ at the maximum-error
point x* would imply an even bigger error at 2x*, contradicting the maximality assumption
about the error at x*. Here, the crucial assumption is that x* ∈ D implies 2x* ∈ D. This step
fails for domains which are not closed under addition.
Instead, we employ a different constructive technique to obtain a linear h on D_{n,s} given a
δ-approximately linear g. Our technique yields a tight bound of 2δ on the error e = h − g
(instead of 4δ in [Sko83]) and does not require that the domain be closed under addition.
It is important to achieve the best (lowest) constants possible on the error, because these
results are used in Section 3.2, where the constants affect the error in an exponential way.
The following lemma shows how to construct a linear function h that is within 2δ
of a δ-approximately linear function g on D⁺_{n,s}.
Lemma 6 Let g be a δ-approximately linear function on D⁺_{n,s}, and let h be the linear function on D_{n,s}
with h(n/s) = g(n/s). Then ‖h − g‖_{D⁺_{n,s}} ≤ 2δ.
Proof.
We prove by contradiction that e(x) ≤ 2δ for all x ∈ D⁺_{n,s}; a symmetric argument can
be made to show that e(x) ≥ −2δ.
Recall that n/s is the greatest positive element of the domain, and note that e is a δ-
approximately linear function with e(n/s) = 0. Assume that there exists a point in D⁺_{n,s} with error greater
than 2δ, and let p be the maximal such element. p has to lie between n/2s and n/s, otherwise
2p ∈ D⁺_{n,s} would have error greater than 2δ, contradicting the maximality of p. Let
q = n/s − p. Then e(q) ≤ δ − e(p) < −δ, since e(p) + e(q) lies within δ of e(n/s) = 0.
Also, for any x ∈ (p, n/s] ∩ D⁺_{n,s},
by definition of p, e(x) ≤ 2δ. Note that any such x can be written as x = x′ + p with x′ ∈ (0, q].
To satisfy the approximate linearity property e(x′ + p) ≥ e(x′) + e(p) − δ, x′ must
have error strictly less than δ.
We now know that the points in the interval (0, q] have error strictly less than δ and
that the point q itself has error strictly less than −δ. Putting
these two facts and approximate linearity together, and since any x ∈ (q, 2q] can be written
as x = y + q with y ∈ (0, q], we can conclude that at any point in (q, 2q], the error is at most
δ. Now we can repeat the same argument by taking y from (0, 2q] rather than (0, q] to
bound the error in the interval (0, 3q] by δ. By a continuing argument, eventually the
interval contains the point p, which means that p has error at most δ ≤ 2δ. This contradicts
our initial assumption that e(p) was greater than 2δ.
In addition, since e(0) ≤ δ (take x = y = 0 in the approximate linearity condition), the bound holds at 0 as well.
We now generalize the error bound on D⁺_{n,s} to D_{n,s}.
Lemma 7 If a function g is δ-approximately linear on D_{n,s}, with h and e defined as
before, and if e(n/s) = 0, then ‖h − g‖_{D_{n,s}} ≤ 2δ.
Proof. Observe that if the error is upper bounded by β in [0, n/2s], then e(2x) ≤ 2β + δ
for any x ∈ [0, n/2s]. Also, if |e(x)| ≤ β then |e(−x)| ≤ β + 2δ for x ∈ D_{n,s},
since e(x) + e(−x) lies within δ of e(0) and |e(0)| ≤ δ. By Lemma 6, the error on D⁺_{n,s}
is at most 2δ, so the second observation bounds the error on all of D_{n,s} by 4δ. We will
tighten this bound, first to 3δ.
Assume that there exists a point in D_{n,s} with error exceeding 3δ, and let p be the minimal such
point; by the above, p is negative. Moreover, |p| > n/2s, otherwise 2p ∈ D_{n,s} would have error
exceeding 3δ, contradicting the minimality of p. Let t be the point
with the highest error in D⁺_{n,s} (the maximal such one if there is a tie). We consider the possible
locations of t in order to bound e(t): (1) if t ≤ n/2s,
then to ensure that e(2t) ≤ e(t), we must have e(t) ≤ δ; (2) if t > n/2s, let w = n/s − t ≤ n/2s;
since e(n/s) = 0, approximate linearity gives e(t) ≤ δ − e(w), and therefore, to satisfy the bound above,
e(t) ≤ 2δ + δ.
Regardless of where t lies, e(t) ≤ 2δ + δ; combining this with the two observations, the error
at p is at most 3δ. This contradicts the
bound we assumed. Therefore, there cannot be a point in D_{n,s} with error greater
than 3δ; a symmetric argument can be used to bound the negative error.
Now we reduce the error bound to 2δ. Assume that p is the minimal point in D_{n,s}
with error at least 2δ. The proof is similar to the previous stage, using the tighter
bound e(x) ≤ 3δ established above: case (1) stays the same; for case (2)
we obtain an improvement of δ in the bound on e(t). Therefore, the error cannot exceed
2δ at p, which is a contradiction.
The following special case proves the stability result for linearity:
Corollary 8 The linearity property is (D_{n,s}, D_{n,s}; δ, 2δ)-stable.
Proof. Suppose the function g is δ-approximately linear on D_{n,s}. Set h(n/s) = g(n/s) and apply Lemma
7. This uniquely defines a linear h with ‖h − g‖_{D_{n,s}} ≤ 2δ.
The intuition that drives us to set h(n/s) = g(n/s) in the proof of Corollary 8 is as follows.
Consider the following function: g(x) = ⌊x⌋ (the integer part of x) on D_{n,s}.
It is easy to see that g(x + y) ≥ g(x) + g(y), with defect at most 1. If we set h(1/s) = g(1/s) = 0, then we obtain h ≡ 0 while g(n/s) = ⌊n/s⌋.
But ‖g − h‖ is a growing function of n, and so there is no way to bound the error at all points.
The following example shows that the error bound obtained in Corollary 8 using our
technique is tight: we have shown how to construct a linear function h so that ‖h − g‖ ≤
2δ. We now show that there is a function g that, given our method of constructing h,
approaches this bound from below. Define g as follows:
g(x) = δ(xs/n − 1). It is easy to see that g
is δ-approximately linear: for any x, y, g(x + y) − g(x) − g(y) = δ.
Our construction sets h(n/s) = g(n/s) = 0, so h is the zero
function. However, ‖g − h‖_{D_{n,s}} = 2δ, attained at x = −n/s.
2.2 Approximate Multilinearity
In this section we focus our attention on multilinear functions. A multivariate function is
multilinear if it is linear in any one input when all the other inputs are xed. A multilinear
function of k variables is called a k-linear function. An example of a bilinear function is
multiplication, and bilinearity property can be stated concisely as
that distributive property of multiplication
with respect to addition is a special case of multilinearity.
A natural extension of this class of functions is the class of approximately multilinear
functions, which are formally dened below:
Denition 9 (Approximate Multilinearity) A k-variate function g is -approximately
k-linear on D k
n;s if it is -approximately linear on D n;s in each variable.
For instance, for is -approximately bilinear if 8x 1
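Approximate bilinearity can likewise be verified by brute force on a small grid. This sketch (names hypothetical) checks δ-approximate linearity in each of the two coordinates, skipping sums that leave the non-closed domain:

```python
def is_approx_bilinear(g, dom, delta):
    # Check delta-approximate linearity of g(., .) in each argument,
    # over a finite (non-closed) one-dimensional grid `dom`.
    ds = set(dom)
    for y in ds:
        for x1 in ds:
            for x2 in ds:
                if x1 + x2 in ds:
                    if abs(g(x1 + x2, y) - g(x1, y) - g(x2, y)) > delta:
                        return False
                    if abs(g(y, x1 + x2) - g(y, x1) - g(y, x2)) > delta:
                        return False
    return True
```

Multiplication over a grid of dyadic rationals passes exactly (δ = 0), since distributivity is a special case of bilinearity.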
Now we generalize Lemma 7 to δ-approximately k-linear functions. Let g be a δ-
approximately k-linear function and h be the multilinear function uniquely defined by the
condition h(n/s, …, n/s) = g(n/s, …, n/s), and let e = h − g; e is a δ-approximately k-linear function.
Since g takes k inputs from D_{n,s}, if we consider each input to g as a coordinate, the set
of all possible k-tuples of inputs of g forms a (2n + 1)^k grid of dimension k.
We show that the error is small at any point of this grid.
Theorem 10 The approximate k-linearity property is (D^k_{n,s}, D^k_{n,s}; δ, 2kδ)-stable. In other
words, if a function g is δ-approximately k-linear on D^k_{n,s}, then there exists a k-linear h on
D^k_{n,s} such that ‖h − g‖ ≤ 2kδ.
Proof. With h defined as above, e(n/s, …, n/s) = 0. First, we argue about points that have
one coordinate that is different from n/s. Fix k − 1 of the inputs to be n/s (hard-wire them into
g) and vary one (say x_i). This operation transforms g from a δ-approximately k-linear
function of x₁, …, x_k to a δ-approximately linear function of x_i. By Lemma 7, this function
cannot have an error of more than 2δ in D_{n,s}. Therefore, |e(n/s, …, n/s, x_i, n/s, …, n/s)| ≤ 2δ
for any x_i ∈ D_{n,s}. Next we consider points which have two coordinates that are different from n/s.
Consider without loss of generality an input (a, b, n/s, …, n/s). By the result we just argued,
we know that |e(n/s, b, n/s, …, n/s)| ≤ 2δ. By fixing inputs 2 through k to be b, n/s, …, n/s and
varying the first input, by Lemma 7 we have |e(a, b, n/s, …, n/s)| ≤ 4δ for any a ∈ D_{n,s}. Via
symmetric arguments, we can bound the error by 4δ if any two inputs are different from n/s.
Continuing this way, it can be shown that for all inputs, the error is at most 2kδ.
The following theorem shows that the error can be reduced to (1 + α)δ for any constant α > 0
by imposing the multilinearity condition on a larger domain D′ and fitting the multilinear
function h on D, where |D′| = |D| · ⌈2k/α⌉. Note that doubling the domain size only involves
adding one more bit to the representation of a domain element.
Theorem 11 For any α > 0, the approximate multilinearity property is (D^k_{2kn/α,s}, D^k_{n,s}; δ, (1 + α)δ)-stable.
Proof. By Theorem 10, g is 2kδ-close to a k-linear h on D_{2kn/α,s}. For any x = (x₁, …, x_k),
we fix all coordinates except x_i and argue in the i-th coordinate as below.
For any D_{m,s}, first we show that if |e(x)| ≤ β on D_{m,s}, then |e(x)| ≤ (β + δ)/2 on D_{m/2,s}.
To observe this, note that if x ∈ D_{m/2,s}, then 2x ∈ D_{m,s}. Therefore the function should satisfy
|e(x) + e(x)| ≤ |e(2x)| + δ, which implies that |e(x)| ≤ (β + δ)/2. Thus, in general, the maximum
error in D_{m/2^i,s} is at most β/2^i + δ; since the error in D_{2kn/α,s} is at most 2kδ, the error
in D_{n,s} is at most (1 + α)δ by our choice of parameters. In the multilinear case, we can
make a similar argument by using points which have at least one coordinate x_i within the
smaller half of the axis.
Note that, since h is multilinear, it is also symmetric.
3 Polynomials
To test programs purportedly computing polynomials, it is tempting to (1) interpolate the
polynomial from randomly chosen points, and then (2) verify that the program is approximately
equal to the interpolated polynomial for a large fraction of the inputs. Since a degree-d
k-variate polynomial can have exponentially many (in k) coefficients, this leads to exponential running times.
Furthermore, it is not obvious how error bounds that are independent of the domain size
can be obtained.
Our test uses the same "evenly spaced" interpolation identity as that in [RS96]: for all
x, t, ∑_{i=0}^{d+1} (−1)^i C(d+1, i) f(x + it) = 0. This identity is computed by the method of
successive differences, which never explicitly interpolates the polynomial computed by the
program, thus giving a particularly simple and efficient (O(d²) operations) test.
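The successive-differences computation can be sketched directly: repeatedly take adjacent differences of the d + 2 evenly spaced values; for an exact degree-d polynomial the result is 0. This helper is an illustration of the technique, not the paper's tester:

```python
def diff_test(f, x, t, d):
    # |∇_t^{d+1} f(x)|: the (d+1)-st forward difference with spacing t,
    # computed by the method of successive differences (O(d^2) subtractions).
    vals = [f(x + i * t) for i in range(d + 2)]
    for _ in range(d + 1):
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return abs(vals[0])
```

On a degree-d polynomial the result is exactly 0; on x^{d+1} with unit spacing it is (d + 1)!.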
We can show that the interpolation identity is approximately robust by modifying the
robustness theorem in [RS92] (Section 3.3). Our proof of stability of the interpolation
identity (Section 3.2), however, uses a characterization of polynomials in terms of multilinear
functions that previously has not been applied to program checking. This in turn allows us
to use our results on the stability of multilinearity (Section 2.2) and other ideas from stability
theory. Section 3.4 extends these techniques to multivariate polynomials.
3.1 Preliminaries
In this section, we present the basic definitions and theorems that we will use. Define
∇_t f(x) = f(x + t) − f(x) to be the standard forward difference operator. Let ∇_t^d denote
∇_t applied d times, and let ∇_{t₁,…,t_d} f(x) = ∇_{t₁} ⋯ ∇_{t_d} f(x). The following are simple facts
concerning this operator.
Facts 12 The following are true for the difference operator ∇:
1. ∇ is linear: ∇_t(f + g) = ∇_t f + ∇_t g,
2. ∇ is commutative: ∇_{t₁} ∇_{t₂} = ∇_{t₂} ∇_{t₁}, and
3. ∇_{t₁+t₂} f(x) = ∇_{t₁} f(x + t₂) + ∇_{t₂} f(x).
Let x^{[k]} denote (x, …, x) (k copies of x). For any k-ary symmetric f, let f̂(x) = f(x^{[k]}) denote its diagonal
restriction. We use three different characterizations of polynomials [MO34, Dji69]. For the
following equations to be valid, all inputs to f must be from D.⁵
Fact 13 The following are equivalent:
1. there exist coefficients a_k such that ∀x ∈ D, f(x) = ∑_{k=0}^{d} a_k x^k,
2. ∀x, t₁, …, t_{d+1} ∈ D, ∇_{t₁,…,t_{d+1}} f(x) = 0, and
3. there exist symmetric k-linear functions F_k, 0 ≤ k ≤ d, such that ∀x ∈ D,
f(x) = ∑_{k=0}^{d} F̂_k(x).
The following definitions are motivated by the notions of using evenly and unevenly spaced
points in interpolation.
Definition 14 (Strong Approximate Polynomial) A function g is called a strongly δ-
approximately degree-d polynomial if |∇_{t₁,…,t_{d+1}} g(x)| ≤ δ for all x, t₁, …, t_{d+1} such that
every evaluation point lies in the domain.
Definition 15 (Weak Approximate Polynomial) A function g is called a weakly δ-approximately
degree-d polynomial if |∇_t^{d+1} g(x)| ≤ δ for all x, t such that every point x + it, 0 ≤ i ≤ d + 1,
lies in the domain.
3.2 Stability for Polynomials
First, we prove that if a function is a strongly δ-approximate polynomial, then there is a
polynomial that (0, O(2^{d lg d})δ)-approximates it. Next, we show that if a function is weakly
approximately polynomial on a domain, then there is a coarser subdomain on which the
function is strongly approximately polynomial. Combining these two, we can show that if
a function is weakly approximately polynomial on a domain, then there is a subdomain on
which the function approximates a polynomial. By using Theorem 11, we can bring the
above error arbitrarily close to δ by assuming the hypothesis on a large enough domain. In
order to pass programs that err by at most δ₀, we need to set δ appropriately as a function of d and δ₀.
⁵ Due to the definition of the ∇ operator, the inputs may sometimes slip outside the domain. In that case, it is
not stipulated that the equation must hold.
Strongly Approximate Case. One must be careful in defining the polynomial h that is
close to g. For instance, defining h based on the values of g at some d + 1 points will not
work. We proceed by modifying techniques in [AB83, Gaj90], using the following fact:
Fact 16 If a function f is symmetric and k-linear, then ∇_{t₁,…,t_k} f̂(x) = k! · f(t₁, …, t_k).
The following theorem shows the stability of the strong approximate polynomial property.
Theorem 17 The strong approximate polynomial property is (D_{n(d+2),s}, D_{n,s}; δ, O(2^{d lg d})δ)-
stable. In other words, if g is a strongly δ-approximately degree-d polynomial on D_{n(d+2),s},
then there is a degree-d polynomial h_d such that ‖g − h_d‖_{D_{n,s}} ≤ O(2^{d lg d})δ.
Proof. The hypothesis that g is a strongly δ-approximately degree-d polynomial on
D_{n(d+2),s}, and the fact that x + t₁ + ⋯ + t_{d+1} ∈ D_{n(d+2),s} whenever x, t₁, …, t_{d+1} ∈ D_{n,s},
give |∇_{t₁,…,t_{d+1}} g(x)| ≤ δ for all x, t₁, …, t_{d+1} ∈ D_{n,s}. The rest of the proof uses this "modified hypothesis" and works with
D_{n,s}.
We induct on the degree. When d = 0, by the modified hypothesis, we have, for all x, t ∈
D_{n,s}, |g(x + t) − g(x)| ≤ δ; taking h₀ to be a constant, we are done.
Suppose the lemma holds when the degree is strictly less than d + 1. Now, by the
modified hypothesis, we have |∇_{t₁,…,t_{d+1}} g(x)| ≤ δ. Using Fact 12 and
then our modified hypothesis, we have |∇_{t₁+t′₁,t₂,…,t_d} g(x) − ∇_{t₁,…,t_d} g(x) − ∇_{t′₁,t₂,…,t_d} g(x)| ≤ δ.
By the symmetry of the difference operator, we have in effect a δ-
approximate symmetric d-linear function on D_{n,s}, say G(t₁, …, t_d) = ∇_{t₁,…,t_d} g(0). Theorem
10 on multilinearity guarantees a symmetric d-linear H with ‖G − H‖ ≤ 2dδ. Let
H_d(x) = Ĥ(x)/d! and g′(x) = g(x) − H_d(x) for x ∈ D_{n,s}.
Now, we have, for all x, t₁, …, t_d:
|∇_{t₁,…,t_d} g′(x)| = |∇_{t₁,…,t_d} g(x) − ∇_{t₁,…,t_d} H_d(x)| (definition of g′)
≤ |∇_{t₁,…,t_d} g(x) − G(t₁,…,t_d)| + |G(t₁,…,t_d) − ∇_{t₁,…,t_d} H_d(x)| (triangle inequality)
= |∇_{t₁,…,t_d} g(x) − G(t₁,…,t_d)| + |G(t₁,…,t_d) − H(t₁,…,t_d)| (definition of H_d and Fact 16)
≤ δ + 2dδ (modified hypothesis on g and the bound on ‖G − H‖).
Now we apply the induction hypothesis: g′ satisfies the hypothesis above for degree d − 1 and a larger
error, so by induction we are guaranteed the existence of a degree d − 1
polynomial h_{d−1} such that ‖g′ − h_{d−1}‖ ≤ e_{d−1}δ. Set h_d = h_{d−1} + H_d. By Fact 13 about
the characterization of polynomials, h_d is a degree-d polynomial. Now, e_d = O(d)·e_{d−1}.
Unwinding the recurrence, the final error is ‖g − h_d‖ ≤ O(2^{d lg d})δ.
Weakly Approximate Case. We first need the following useful fact [Dji69], which
helps us to go from equally spaced points to unequally spaced points:
Fact 18 The unevenly spaced difference ∇_{t₁,…,t_{d+1}} f(x) can be expressed as a signed sum
of at most 2^{d+1} evenly spaced differences ∇_t^{d+1} f(y), where each step t is a rational
combination of t₁, …, t_{d+1} with denominator at most d + 1.
Using this fact, we obtain the following theorem.
Theorem 19 If g is a weakly (δ/2^{d+1})-approximately degree-d polynomial on D_{n(d+1),s(d+1)},
then g is a strongly δ-approximately degree-d polynomial on D_{n,s}.
Proof. For x, t₁, …, t_{d+1} ∈ D_{n,s}, we have by our choice of
parameters that every step t and evaluation point y produced by Fact 18 lies in D_{n(d+1),s(d+1)}.
Therefore,
|∇_{t₁,…,t_{d+1}} g(x)| ≤ 2^{d+1} · δ/2^{d+1} = δ.
3.3 Approximate Robustness for Polynomials
This section shows that the interpolation equation for degree-d polynomials is, in some
sense, approximately robust. All the results in this subsection are modifications of the
exact robustness of polynomials given in [RS92]. Let α_i = (−1)^{i+1} C(d+1, i), so that the
interpolation identity reads f(x) = ∑_{i=1}^{d+1} α_i f(x + it). To self-test P on
D_{n,s}, we use the following domains: D_{(d+2)n,s} itself, and the restricted domains T_j and T_{ij}
obtained by constraining the choices of x and t so that x + jt (respectively x + it and x + jt)
remain in D_{(d+2)n,s}. (These domains are used for technical reasons that will
become apparent in the proofs of the theorems in this section.) We use Pr_{x∈D}[·] to denote
the probability of an event when x is chosen uniformly from the domain D.
All the results in this subsection assume that P satisfies the following properties (which can
be tested by sampling):
1. Pr_{x,t}[|P(x) − ∑_{i=1}^{d+1} α_i P(x + it)| > δ] ≤ ε,
2. for each 0 ≤ j ≤ d + 1, the analogous condition holds when the inputs are chosen via T_j, and
3. for each 0 ≤ i < j ≤ d + 1, the analogous condition holds when the inputs are chosen via T_{ij}.
Define g(x) = median_t (∑_{i=1}^{d+1} α_i P(x + it)). We obtain the following theorem that shows the
approximate robustness of polynomials. Let E_{(n,s)} be the distribution that
flips a fair three-sided
die and, on outcome i ∈ {1, 2, 3}, chooses inputs according to the distribution given in the
i-th property above. Let D_{(n,s)} be the union of the domains used in the above properties.
Theorem 20 The interpolation equation, where inputs are picked according to the distribution
E_{(n,s)}, is (ε, δ; O(ε), O(δ))-approximately robust.
The rest of this section is devoted to proving the above theorem.
By Markov's inequality, g's definition, and the properties of P, it is easy to show that P
(O(ε), 2δ)-approximates g:
Theorem 21 If the program P satisfies the above three properties, then P (O(ε), 2δ)-approximates
g on D_{n,s}.
Now, we set out to prove that g is a weakly approximate polynomial. Let χ(p) = 1 if the predicate p
is true and 0 otherwise. For two domains A, B, subsets of a universe X, let
Δ(A, B) = |A ∩ B| / max(|A|, |B|); call the domains γ-close if Δ(A, B) ≥ 1 − γ. The
following fact is simple:
Fact 22 For any x ∈ D_{(d+2)n,s}, the domains T_j and x + T_j are γ₁-close.
For any x, the domains T_{ij} and x + T_{ij} are γ₂-close.
The following lemma shows that, in some sense, g is well-defined:
Lemma 23 For all x ∈ D_{(d+2)n,s}, Pr_{t,t′}[|∑_i α_i P(x + it) − ∑_i α_i P(x + it′)| ≤ 2δ] ≥ 1 − O(dε);
consequently, Pr_t[|g(x) − ∑_i α_i P(x + it)| ≤ 4δ] ≥ 3/4.
Proof. Consider two independent choices t, t′. For a fixed 0 ≤ j ≤ d + 1, using the
properties of P and the fact that T_j and x + jt + T_j are close (Fact 22), we get that
Pr[|P(x + jt) − ∑_i α_i P(x + jt + it′)| > δ] ≤ O(ε).
Summing over all 0 ≤ j ≤ d + 1 and noting that the interpolation identity chains these
events together bounds the probability that the two estimates differ by more than 2δ. Using Lemma 45, we can
show that, with a relaxation of twice the error, this probability lower bounds the probability
in the first part of the lemma. The second part of the lemma follows from the first via a
simple averaging argument.
Now, the following theorem completes the proof that g is a weakly approximate degree-d
polynomial.
Theorem 24 For all x ∈ D_{(d+2)n,s} and all i, Pr_t[|∇_t^{d+1} g(x)| > O(δ)] ≤ O(ε).
Proof. Theorem 21, Lemma 23, and the closeness of T_{ij} and x + T_{ij} imply that each term
P(x + it) is, with high probability, within O(δ) of the corresponding value of g.
Summing the latter expressions and putting them together, we have the first part of the lemma.
The second part follows from the first part and the fact that T_j and t + T_j are close.
For an appropriate choice of ε, γ₁, γ₂, we have a g that is a weakly (2^{d+3}δ)-approximately
degree-d polynomial on D_{n,s}, with g (ε, 2δ)-approximating P on D_{n,s}.
3.4 Multivariate Polynomials
The following approach is illustrated for bivariate polynomials; we can easily generalize it
to multivariate polynomials. It is easy to show that approximate robustness holds when
the interpolation equation ([RS96]) is used as in Section 3.3.
An axis-parallel line for a fixed y (a horizontal line) is the set of points l_y = {(x, y) : x·s ∈
Z}. A vertical line is defined analogously. As a consequence of approximate robustness,
we have a bivariate function g(x, y) that is a strongly approximately degree-d polynomial
along every horizontal and vertical line. We use this consequence to prove stability.
The characterization we will use is: f(x, y) is a bivariate polynomial (assume the degree in
both x and y is d) if and only if the map y ↦ f(·, y) is a degree-d polynomial
whose range is the space of all degree-d univariate polynomials in x.
For each value of y, g_y(x) is a strongly approximately degree-d polynomial. Using the
univariate case (Theorem 17), there is an exact degree-d polynomial P_y(x) such that for
all x, |g(x, y) − P_y(x)| ≤ O(2^{d lg d})δ. Construct the function g′(x, y) = P_y(x).
Now, for a fixed x (i.e., on a vertical line), for any y, using |∇_{t₁,…,t_{d+1}} g(x, y)| ≤ δ
(with the differences taken in y), we have
that g′(x, y) is a bivariate function which, along every horizontal
line, is an exact degree-d polynomial and, along every vertical line, is a strongly δ′-
approximately degree-d polynomial. Interpreting g′(x, y) as g′_x(y) and using the same idea
as in the univariate case, we can conclude that ∇_{t₁,…,t_d} g′_x(y) is a symmetric
approximately d-linear function (here, we used the fact that g′_x(y) ∈ P_D[x]). The rest of the
argument in Theorem 17 goes through because our proofs of approximate linearity (Lemma 6)
and multilinearity (Theorem 10) assume only that the range is a metric space (which is true for
P_D[x] with, say, the Chebyshev norm). The result follows from the above characterization
of bivariate polynomials.
4 Functional Equations
Extending the technique in Lemma 7 to addition theorems f(x + y) = G[f(x), f(y)] is not
straightforward, since G can be an arbitrary function. In order to prove approximate robustness
(Section 4.3), several related properties of G are required. Proving that G satisfies each
individual one is tedious; however, the notion of the modulus of continuity from approximation
theory gives a general approach to this problem. We show that bounds on the modulus of
continuity imply bounds on all of the quantities of G that we require. The stability of the
functional equation is shown by a careful inductive technique based on a canonical generation of
the elements in D_{n,s} (Section 4.2). The scope of our techniques is not limited to addition theorems; we
also show that Jensen's equation is approximately robust and stable (Section 4.2.4).
4.1 Preliminaries
For addition theorems, we can assume that G is an algebraic and symmetric function (the
latter is true in general under some technical assumptions, as in [Rub94]). We need a notion
of "smoothness" of G. The following notions are well-known in approximation theory [Lor66,
Tim63].
Definitions 25 (Moduli of Continuity) The modulus of continuity of a function f(·)
defined on a domain D is the following function of δ ∈ [0, ∞):
ω(f, δ) = sup_{x,y∈D, |x−y|≤δ} |f(x) − f(y)|.
The modulus of continuity of a function f(·, ·) is the following function of δ_x, δ_y ∈ [0, ∞):
ω(f, δ_x, δ_y) = sup_{|x−x′|≤δ_x, |y−y′|≤δ_y} |f(x, y) − f(x′, y′)|.
The partial moduli of continuity of the function f(·, ·) are the following functions of δ ∈ [0, ∞):
ω_x(f, δ) = sup_y sup_{|x−x′|≤δ} |f(x, y) − f(x′, y)| and
ω_y(f, δ) = sup_x sup_{|y−y′|≤δ} |f(x, y) − f(x, y′)|.
We now present some facts which are easily proved.
Facts 26 The following are true of the modulus of continuity:
1. ω(f, δ) is nondecreasing in δ,
2. ω(f, δ₁ + δ₂) ≤ ω(f, δ₁) + ω(f, δ₂),
3. ω(f + g, δ) ≤ ω(f, δ) + ω(g, δ),
4. if f′, the derivative of f, exists and is bounded in D, then ω(f, δ) ≤ ‖f′‖_D · δ,
5. ω(f, δ_x, δ_y) ≤ ω_x(f, δ_x) + ω_y(f, δ_y),
6. ω_x(f, δ) ≤ ω(f, δ, 0),
7. if f(·, ·) is symmetric, then ω_x(f, δ) = ω_y(f, δ), and
8. if f_x, the partial derivative of f with respect to x, exists and is bounded, then ω_x(f, δ) ≤ ‖f_x‖ · δ.
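On a finite domain, the moduli above can be evaluated directly from their definitions, which is convenient for sanity-checking Facts 26 on small examples; the following brute-force helper is an illustration of this writeup, not part of the paper:

```python
def modulus_of_continuity(f, domain, delta):
    # omega(f, delta) = sup { |f(x) - f(y)| : x, y in domain, |x - y| <= delta }
    pts = sorted(domain)
    return max((abs(f(x) - f(y))
                for i, x in enumerate(pts)
                for y in pts[i:] if y - x <= delta),
               default=0.0)
```

For f(x) = 2x (bounded derivative 2), the computed value never exceeds 2δ, matching Fact 26.4.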
We need a notion of an "inverse" of G. If G[x, y] = z, then x can be recovered from z and
y; define G₁⁻¹[z; y] = x. Since G is symmetric, G₁⁻¹ = G₂⁻¹, and we denote both by G⁻¹[z; y].
An Example. Wherever necessary, we will illustrate our scheme using the functional
equation f(x + y) = f(x)·f(y). The solutions to this functional
equation are the exponential functions f(x) = C^x for some constant C. The following fact [Tit47] is useful in locating
the maxima of analytic functions.
Fact 27 (Maximum Modulus Principle) If f is analytic in a compact region D, then f
attains its extremum only on the boundary of D.
Over a bounded rectangle [L, U]², G[u, v] = u·v is analytic and hence, by Fact
27, attains its maximum on the boundary; G is C¹ on the rectangle (i.e., continuously differentiable).
We have G_u = v, so by Fact 27, ‖G_u‖ attains its maximum on the boundary.
Therefore, using Facts 26, ω_u(G, δ) ≤ U·δ.
4.2 Stability for Functional Equations
In this section, we prove (under some assumptions) that if a function g satisfies a functional
equation approximately everywhere, then it is close to a function h that satisfies
the functional equation exactly everywhere. Our functional equations are of the form
f(x + y) = G[f(x), f(y)].
Example. If |g(x + y) − G[g(x), g(y)]| ≤ δ for all valid x, y, then there is a function h such
that h(x + y) = G[h(x), h(y)] for all valid x, y, and |h(x) − g(x)| ≤ δ′ for all valid x. The
domains for the valid values of x and y, as well as the relationship between δ and δ′, will be
discussed later.
Suppose g satisfies the functional equation approximately in D_{n,s}. In the following sections we show how to construct the function h that
is close to g, satisfying a particular functional equation. Given such an h, let e(x) denote
|g(x) − h(x)|. We consider the cases when c < 1, first show how to obtain h, and then
obtain bounds on e(x). Then we can conclude that h, which satisfies the functional equation
everywhere, also approximates g; i.e., the functional equation is stable.
Call x = p/s ∈ D_{n,s} even (resp. odd) if p is even (resp. odd).
4.2.1 When c < 1
We first construct h by setting h(1/s) = g(1/s). This determines h for all values in D⁺ by the
fact that h satisfies the functional equation.
We obtain a relationship between the error at x and at 2x using the functional equation.
Lemma 28 e(2x) ≤ c·e(x) + δ.
Proof. e(2x) = |g(2x) − h(2x)| ≤ |g(2x) − G[g(x), g(x)]| + |G[g(x), g(x)] − G[h(x), h(x)]|
≤ δ + ω(G, e(x), e(x)) ≤ c·e(x) + δ,
using the definition of the modulus of continuity and the assumption that ω(G, β, β) ≤ c·β.
We also have to explore the relationship between e(x + 1/s) and e(x). For simplicity, assume that the
partial modulus of continuity satisfies ω_x(G, β) ≤ d·β for some constant d. Now,
Lemma 29 e(x + 1/s) ≤ d·e(x) + δ.
Proof. e(x + 1/s) = |g(x + 1/s) − h(x + 1/s)|
≤ |g(x + 1/s) − G[g(x), g(1/s)]| + |G[g(x), g(1/s)] − G[h(x), h(1/s)]|. But
h(1/s) = g(1/s), so the second term is at most ω_x(G, e(x)) ≤ d·e(x), and the first is at most δ.
We will show a scheme to bound e(x) for all x when d < 1. This scheme can be thought of
as an enumeration strategy, where at each step of the process, certain constraint equations
have to be satised. First, we will show a canonical listing of elements in D
n;s .
Construct a binary tree T k in the following manner. The nodes of the tree are labeled
with elements from D
n;s . The root of the tree is labeled 1
s . If x is the label of a node, then
2x is the label of its left child (if 2x is not already in the tree). and x
s
is the label of
its right child (if x
s
is not already in the tree). Using induction, we can prove that T k
contains all elements of D
n;s . This is by induction on k. When the tree is obvious.
Suppose the claim holds for T_{k−1}. Build T_k as follows: for every leaf (with label x) of T_{k−1}, a left child 2x is added, and to that left child a right child 2x + 1/s is added. By the assumption on T_{k−1}, T_k has all nodes in D_{n/2,s}. Moreover, each left child generates an even element in (2^{k−1}/s, 2^k/s] and each right child generates an odd element in (2^{k−1}/s, 2^k/s].
Corollary 30. In T_k, if x is even (except the root), then x is a left child; if x is odd, then x is a right child.
A canonical way of listing the elements of D_{n,s} arises from a preorder traversal of T_k. This ordering is used in our inductive argument.
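The tree T_k and its preorder enumeration can be sketched directly. The following is our own illustrative sketch (not code from the paper): elements of D_{n,s} are represented by their integer numerators 1..n, the left child of y is 2y, and each even y also gets the right child y + 1; the second function then propagates the error recurrences of Lemmas 28 and 29 down the tree, with c, d and ε as illustrative assumed constants.

```python
def preorder(n):
    """Preorder listing of {1, ..., n} (numerators of D_{n,s}) in the tree T_k:
    the left child of y is 2y, and each even y also has the right child y + 1,
    so even labels are left children and odd labels are right children."""
    order = []

    def visit(y):
        order.append(y)
        if 2 * y <= n:                  # left child: the "2x" operation
            visit(2 * y)
        if y % 2 == 0 and y + 1 <= n:   # right child: the "x + 1/s" operation
            visit(y + 1)

    visit(1)
    return order


def error_bounds(n, c, d, eps):
    """Propagate e(2x) <= c*e(x) + eps and e(x + 1/s) <= d*e(x) + eps down T_k,
    starting from e(1/s) = 0 (h is chosen to agree with g at 1/s)."""
    e = {1: 0.0}
    for y in preorder(n):               # parents precede children in preorder
        if 2 * y <= n:
            e[2 * y] = c * e[y] + eps
        if y % 2 == 0 and y + 1 <= n:
            e[y + 1] = d * e[y] + eps
    return e
```

For instance, with c = 1/2, d = 1 and ε = 1, every propagated bound stays at or below 2ε/(1 − c) = 4, matching the constant-error behaviour claimed for this case.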
Lemma 31. For all x ∈ D_{n,s}: if x is even, then e(x) ≤ (1 + c)ε/(1 − c); if x is odd, then e(x) ≤ 2ε/(1 − c).

Proof. We will prove this by induction on the preorder enumeration of the tree given by the above ordering. Let x be the next element to be enumerated. By the preorder listing, its parent y has already been enumerated and hence its error is known. If x is even, from Corollary 30 it is a left child, and hence generated by a 2y operation; e(y) ≤ 2ε/(1 − c) by the induction hypothesis. This together with Lemma 28 yields e(x) ≤ c·e(y) + ε ≤ (1 + c)ε/(1 − c), preserving the induction hypothesis. If x is odd, from Corollary 30 it is a right child, and hence generated by a y + 1/s operation, where y is even; e(y) ≤ (1 + c)ε/(1 − c) by the induction hypothesis. This together with Lemma 29 and d ≤ 1 yields e(x) ≤ d·e(y) + ε ≤ 2ε/(1 − c), preserving the induction hypothesis.
This yields (under the assumptions made before on c and d) the following theorem:

Theorem 32. The addition theorem is (D_{n,s}, ε, 2ε/(1 − c))-stable.
With our example, we have c = 1/2 and, from the bound on H_2, d ≤ 1. By Theorem 32, we have e(x) ≤ 4ε for all x ∈ D_{n,s}. When n is not a power of 2, we can argue in the following manner. From our proof, we see that we use very specific values of x, y in the approximate functional equation. First we extend D_{n,s} to the nearest power of 2 to get D′ and define values of g at these new points: at even x (= 2y) let g(x) = G[g(y), g(y)], and at odd x (= y + 1/s) let g(x) = G[g(y), g(1/s)]. These can be thought of as new assumptions on g which are satisfied "exactly" (i.e., without error ε). We can use Lemma 31 to conclude that there is a linear h on D′ that is 2ε/(1 − c)-close to g; hence h is close to g even on D_{n,s}. To argue about the negative part of D_{n,s}, we pick a "pivot" point in D_{n,s} (0 for simplicity). Now we have h(0) = G[h(x), h(−x)], which determines h(−x) from h(x); therefore, as before, we have a bound on e(−x).
When d > 1, the error can no longer be bounded in this way. In this case, we have c ≤ 1 < d. Let ê = cd. We can see from the structure of T_k that the maximum error can occur at (2^k − 1)/s. By simple induction on the depth of the tree, the error at this point is a geometric sum in powers of ê. If ê < 1, we obtain a constant error bound by geometric summation. Otherwise, we obtain a bound that grows with the depth k.
4.2.2 When c > 1

In this case, we require additional assumptions. We consider the quantity ω(f; δ) and note that ω(H_1; δ) ≥ c′δ for some c′ > 1. We set h(2^k/s) = g(2^k/s); since h satisfies the addition theorem, this can be used to fix all of h, as H_1^{−1} is well-defined.

As before, we first obtain a relationship between the error at x and at 2x using the addition theorem.
Lemma 33. e(x) ≤ (1/c′)·e(2x) + ε.

Proof. We have, as in Lemma 28, |H_1[g(x)] − H_1[h(x)]| ≥ c′·e(x). By the definition of ω and our assumption, we get e(x) ≤ (1/c′)·e(2x) + ε.

For simplicity, let H_3[z] = G[z, g(2^k/s − x)], and let ω(H_3; δ) ≤ dδ for some constant d. The following lemma can be proved easily.

Lemma 34. e(x) ≤ d·e(2^k/s − x) + ε.
We adopt a strategy similar to the one in the previous section.
Construct a binary tree T_k in the following manner. The nodes of the tree are labeled with elements from D_{n,s}. The root of the tree is labeled 2^k/s. If x is the label of a node and x is even, then x/2 is the label of its left child (if x/2 is not already in the tree), and 2^k/s − x is the label of its right child (if 2^k/s − x is not already in the tree). It is easy to see that T_k contains all elements of D_{n,s}.
Corollary 35. In T_k, if x ≤ 2^{k−1}/s (except the root), then x is a left child, and if x > 2^{k−1}/s, then x is a right child.
We use the preorder enumeration of D_{n,s} given by T_k in the following inductive argument.

Lemma 36. Suppose d ≤ 1. For all x ∈ D_{n,s}: if x > 2^{k−1}/s, then e(x) ≤ 2c′ε/(c′ − 1); if x ≤ 2^{k−1}/s, then e(x) ≤ (1 + c′)ε/(c′ − 1).
Proof. The proof is by induction on the preorder enumeration of the tree given by the above ordering. It uses Lemma 34 and Corollary 35, and is similar in flavor to the proof of Lemma 31.
This yields (under the assumptions on c and d) the following theorem:

Theorem 37. The addition theorem is (D_{n,s}, ε, c′′ε)-stable, for a constant c′′ depending on c′ and d.

This case arises for linearity, where H_1[z] = 2z. Using the above theorem, we get a weaker bound of e(x) ≤ 3ε (as opposed to 2ε by Corollary 8). Techniques similar to those of the previous section can be used to argue about D_{n,s}. The case when d > 1 can be handled by schemes as in the previous section.
4.2.3 When c = 1

In this case, it means that ω(H_1; δ) = δ; in other words, by Fact 26, ‖H_1′‖ = 1. By Fact 27, the maximum occurs only at the boundary of the domain. Hence, we can test by looking at a subdomain in which the maximum is less than 1.
4.2.4 Jensen's Equation

Jensen's equation is the following: ∀x, y ∈ D_{n,s}, f((x + y)/2) = (f(x) + f(y))/2. The solution to this functional equation is affine linearity, i.e., f(x) = ax + b for constants a, b. Jensen's equation can be proved approximately robust by modifying the proof of its robustness in [Rub94]. We will show a modified version of our technique for proving its stability. As before, we have ∀x, y ∈ D_{n,s}, |g((x + y)/2) − (g(x) + g(y))/2| ≤ ε. To prove the stability of this equation, we construct an affine linear h. Note that two points are necessary and sufficient to fully determine h. We set h(n/s) = g(n/s) and h(0) = g(0).
Lemma 38. e((x + y)/2) ≤ e(x)/2 + e(y)/2 + ε.

Proof. e((x + y)/2) ≤ |g((x + y)/2) − (g(x) + g(y))/2| + |(g(x) + g(y))/2 − (h(x) + h(y))/2| ≤ ε + e(x)/2 + e(y)/2.

The following corollary is immediate.

Corollary 39. If e(x) ≤ 2ε, then e(x/2) ≤ 2ε and e((n/s + x)/2) ≤ 2ε.

Proof. Since e(0) = e(n/s) = 0, this follows from Lemma 38.
We construct a slightly different tree T_k in this case. The root of T_k is labeled by n/s, and if x is the label of a node, then x/2 (if integral and not already present) is the label of its left child and (n/s + x)/2 (if integral and not already present) is the label of its right child. It is easy to see that T_k contains all elements of D_{n,s}.
Theorem 40. The Jensen equation is (D_{n,s}, ε, 2ε)-stable.

Proof. The proof is by induction on the enumeration order of T_k given by, say, a breadth-first traversal. Clearly, at the root, e(n/s) = 0 ≤ 2ε. Now, if e(x) ≤ 2ε, consider its children. Its left (resp. right) child (if it exists) is x/2 (resp. (n/s + x)/2). Thus, by Corollary 39, the error at each child is also at most 2ε.
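The inductive bound behind Theorem 40 can be checked mechanically. The sketch below is our own illustration (not the paper's code): it works with integer numerators 0..n, seeds e(0) = e(n) = 0, and pushes the bound of Lemma 38 through the tree whose children of x are x/2 and (n + x)/2.

```python
def jensen_error_bounds(n, eps):
    """Propagate e((x + y)/2) <= e(x)/2 + e(y)/2 + eps down the tree rooted
    at n, where the children of x are x/2 (midpoint with 0) and (n + x)/2
    (midpoint with n), and e(0) = e(n) = 0 since h interpolates g there."""
    e = {0: 0.0, n: 0.0}
    stack = [n]
    while stack:
        x = stack.pop()
        if x % 2 == 0 and x // 2 not in e:
            e[x // 2] = e[x] / 2 + eps        # midpoint of x and 0
            stack.append(x // 2)
        if (n + x) % 2 == 0 and (n + x) // 2 not in e:
            e[(n + x) // 2] = e[x] / 2 + eps  # midpoint of x and n
            stack.append((n + x) // 2)
    return e
```

For n a power of two, every point of 0..n is reached and no propagated bound exceeds 2ε, matching the theorem.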
4.3 Approximate Robustness for Functional Equations
As in [GLR+91, RS92], we test the program on D_{2p,s} and make conclusions about D_{n,s}. The relationship between p and n will be determined later. The domain has to be such that G is analytic in it. Therefore, we consider the case when f is bounded on D_{2p,s}; i.e., we let G be the family of functions f that satisfy four boundedness conditions, each of the form Pr_{x∈D_{2p,s}}[…] or Pr_{x,y∈D_{2p,s}}[…] being large. Note that membership in G is easy to determine by sampling. We can define a distribution E_{(n,s)} such that, if P satisfies the functional equation on E_{(n,s)} with probability at least 1 − δ, then P also satisfies the following four properties:

1. Pr_{x,y∈D_{p,s}}[…]

2. Pr_{x,y∈D_{p,s}}[…]

3. Pr_{x,y∈D_{p,s}}[…]

4. Pr_{x∈D_{n,s},y∈D_{p,s}}[…]
E_{(n,s)} is defined by flipping a fair four-sided die and, on outcome i ∈ {1, 2, 3, 4}, choosing inputs according to the distribution given in the i-th property above. Recall the constant b_G from Fact 26. We can then show the following:

Theorem 41. The addition theorem with the distribution E_{(n,s)} is (…)-approximately robust.
Define, for x ∈ D_{2n,s}, g(x) to be the median over y ∈ D_{p,s} of G[P(x − y), P(y)]. Using the triangle inequality, the definition of g, and the properties of P, it is easy to show the following (here ≈ denotes approximate equality up to the relevant error bound):

Lemma 42. Pr_{x∈D_{n,s}}[g(x) ≈ P(x)] > 1 − 2δ.

Proof. Consider the set of elements x ∈ D_{n,s} such that Pr_{y∈D_{p,s}}[P(x) ≈ G[P(x − y), P(y)]] < 1/2. If the fraction of such elements is more than 2δ, then it contradicts the hypothesis on P that Pr_{x∈D_{n,s},y∈D_{p,s}}[P(x) ≈ G[P(x − y), P(y)]] is large. For the rest, for at least half of the y's, P(x) ≈ G[P(x − y), P(y)]; defining g to be the median (over y's in D_{p,s}), we have g(x) ≈ P(x) for these elements.
For simplicity of notation, let P_x denote P(x) for any x ∈ D_{p,s}, and let G_{x,y} denote G[P(x), P(y)] for any x, y ∈ D_{p,s}. Since G is fixed, we will drop G from the modulus of continuity.

Fact 43. For x ∈ D_{2n,s}, Pr_{y∈D_{p,s}}[x − y ∉ D_{p,s}] ≤ 2n/p.
Lemma 44. For x ∈ D_{2n,s}, with probability at least 1 − 12δ − 2n/p over random y, z ∈ D_{p,s}, we have G_{x−y,y} ≈ G_{x−z,z}.

Proof. Note that x − z, z − y, and z are all random. The error in the first step (due to the computation of P_{x−y}) is ω(ε; 0), and the equation holds with probability 1 − δ by hypothesis (3). The bounds on G_{x−z,z−y} also hold with probability at least 1 − 2δ by hypotheses (3) and (4), and so the error is just ω(ε; 0). There is a loss of 2n/p in the probability for x − z to be in D_{p,s}, by Fact 43. The next line is just rewriting. In a similar manner, the final equation holds with probability at least 1 − δ by hypothesis (2), and the error bound is ω(0; ε). The bounds on random points hold with probability at least 1 − 8δ by the hypotheses on P, to make the error ω(0; ε). Hence, the total error is bounded via Fact 26, and the equality holds with probability at least 1 − 12δ − 2n/p.
The following lemma, which helps us to bound the error, is from [KS95].

Lemma 45. If G is a random graph on p nodes with each edge inserted with probability at least 1 − δ, then G² is a graph in which the probability that a randomly chosen node is not in the largest clique is at most 2δ.

Proof. The crucial observation is that the clique number of G² is at least as big as the maximum degree in G. Hence, for a random node x, the probability that x is present in the largest clique in G² is at least the probability that x is connected to the maximum-degree vertex (say y) in G. Let the degrees of the vertices in G be d_1 ≥ … ≥ d_p; then the degree of y is d_1. The probability that x has an edge to y is d_1/p, and since the probability of an edge between two random nodes is at least 1 − δ, d_1/p cannot be much smaller than 1 − δ. The lemma follows.
The following shows, in some sense, that g is well-defined:

Lemma 46. For all x ∈ D_{2n,s}, Pr_{y∈D_{p,s}}[|g(x) − G_{x−y,y}| ≤ 2ε′] ≥ 1 − 2δ, where ε′ is the error bound of Lemma 44.

Proof. We have the following: for all x ∈ D_{2n,s}, Pr_{y,z∈D_{p,s}}[|G_{x−y,y} − G_{x−z,z}| ≤ ε′] is large. Now we use Lemma 45. If G denotes a graph in which (y, z) is an edge iff |G_{x−y,y} − G_{x−z,z}| ≤ ε′, then G² denotes the graph in which (y, z) is an edge iff |G_{x−y,y} − G_{x−z,z}| ≤ 2ε′. Now, using Lemma 45, we have that the fraction of elements that are more than 2ε′ away from the largest clique is at most 2δ. Thus, at least a 1 − 2δ fraction of the elements are within 2ε′ of each other. Since δ < 1/2 and g(x) is the median, the lemma follows.
Now, the following theorem completes the proof that g satisfies the addition theorem approximately.

Theorem 47. For all x, y ∈ D_{n,s}, with probability at least 1 − 56δ − O(n/p), g(x + y) ≈ G[g(x), g(y)], with the error bounded in terms of b_G.

Proof. Consider the chain of approximate equalities (over random u, v ∈ D_{p,s}) relating g(x + y) to G[g(x), g(y)]. By Lemma 46, the first equality holds with probability 1 − 2δ and error ω(2ε′; 0); the bounds on G_{u,x−u} hold with probability at least 1 − 4δ, making the error ω(2ε′; 0) by Fact 26. The second and third equalities are always true. The fourth equality holds with probability at least 1 − δ by hypothesis (1) on P, and the error accrued is ω(0; ω(ε; 0)); the bounds on P hold with probability at least 1 − 10δ, making the error ω(0; ω(ε; 0)). The fifth equality also holds with probability at least 1 − 2δ by hypothesis (1) on P, and the error accrued is ω(0; ε) after bounds on P (with probability at least 1 − 4δ). The final equality holds with probability at least 1 − 12δ − 2n/p by Lemma 46, and the error is ω(2ε′; 0). Thus, the total error is at most 9ω(ω(ε; 0); 0), which is bounded in terms of b_G by Fact 26.

If δ < 1/56 and p > 11n, the total failure probability is strictly less than 1, and so the above holds. In the case of our example function, we already calculated b_G. Hence,
5 Approximate Self-Testing and Self-Correcting

5.1 Definitions

The following modifications of the definitions from [GLR+91] capture the ideas of approximate checking, self-testing, and self-correcting in a formal manner. Let P be a program for f, x ∈ D_{n,s} an input to P, and β the confidence parameter.

Definition 48. A (ε₁, ε₂; D_{(n,s)}, D_{n,s})-approximate result checker for f is a probabilistic oracle program T that, given P, x ∈ D_{n,s}, and β, satisfies the following:
(1) P (ε₁; 0)-approximates f on D_{(n,s)} ⇒ Pr[T^P outputs "PASS"] ≥ 1 − β.
(2) |P(x) − f(x)| > ε₂ ⇒ Pr[T^P outputs "FAIL"] ≥ 1 − β.

Definition 49. A (ε₁, ε₂; D_{(n,s)}, D_{n,s})-approximate self-tester for f is a probabilistic oracle program T that, given P and β, satisfies the following:
(1) P (ε₁; 0)-approximates f on D_{(n,s)} ⇒ Pr[T^P outputs "PASS"] ≥ 1 − β.
(2) P does not (ε₂; δ)-approximate f on D_{n,s} ⇒ Pr[T^P outputs "FAIL"] ≥ 1 − β.
Observe that if a property is (…)-approximately robust and (D_{n,s}, D_{n′,s})-stable, and it is possible to do equality testing for the function family satisfying the property, then it is possible to construct an approximate self-tester.

Definition 50. A (ε, δ, ε′; D_{(n,s)}, D_{n,s})-approximate self-corrector for f is a probabilistic oracle program SC^P_f that, given a P that (ε; δ)-approximates f on D_{(n,s)}, an x ∈ D_{n,s}, and β, outputs SC^P_f(x) such that Pr[|SC^P_f(x) − f(x)| ≤ ε′] ≥ 1 − β.

Note that an approximate self-tester together with an approximate self-corrector yield an approximate result checker.
5.2 Constructing Approximate Self-Correctors

We illustrate how to build approximate self-correctors for functional equations. The approach in this subsection follows [BLR93, GLR+91]. Suppose f satisfies the addition theorem f(x + y) = G[f(x), f(y)]. Then the self-corrector SC^P_f at input x is constructed as follows. To obtain a confidence of β:

1. choose random points y_1, …, y_m ∈ D_{p,s};
2. let SC^P_f(x) be the median of G[P(x − y_i), P(y_i)] over i.

By the assumption on P, both calls to P are within ε of f with probability greater than 3/4. In this case, the value of G[P(x − y_i), P(y_i)] is at most b_G·ε away from f(x) (see Section 4.1 for b_G). Using Chernoff bounds, we can see that at least half of the values G[P(x − y_i), P(y_i)] are at most ε′ away from f(x). Thus, their median SC^P_f(x) is also at most ε′ away from f(x).
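As a concrete, hedged illustration (our own sketch, not code from the paper), take f = exp, which satisfies the addition theorem f(x + y) = f(x)·f(y), i.e. G[a, b] = a·b. A program P that is grossly wrong on a noticeable fraction of inputs is corrected at every input by taking the median of G[P(x − y), P(y)] over random offsets y:

```python
import math
import random

def faulty_exp(x):
    """A deliberately buggy exp: grossly wrong on roughly 14% of inputs."""
    return math.exp(x) + (1.0 if math.sin(1000.0 * x) > 0.9 else 0.0)

def self_correct(P, x, trials=101, spread=1.0, seed=1):
    """SC_f^P(x): the median over random y of G[P(x - y), P(y)] with
    G[a, b] = a * b, following the two-step construction in the text."""
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        y = rng.uniform(-spread, spread)
        # exp(x - y) * exp(y) = exp(x) whenever both calls are good
        vals.append(P(x - y) * P(y))
    vals.sort()
    return vals[len(vals) // 2]
```

Since a clear majority of trials use two good calls to P, the median lands in the tight cluster of correct values even at inputs where P itself is wrong.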
For degree-d polynomials, a similar self-corrector works, with ε′ larger by a factor of 2^{Θ(d)}. In order to pass good programs, this is almost the best ε′ possible using the evenly spaced interpolation equation, since the coefficients of the interpolation equation sum to 2^{Θ(d)} in absolute value. Using interpolation equations that do not use evenly spaced points seems to require an ε′ that is dependent on the size of the domain.
5.3 Constructing Approximate Self-Testers

The following is a self-tester for any function satisfying an addition theorem f(x + y) = G[f(x), f(y)], computing the function family F over D_{n,s}. We use the notation from Section 4.1. To obtain a confidence of β, we choose random points (O(max{1/δ, 48} ln(1/β)) of them) and verify the assumptions on program P given at the beginning of Section 4.3. If P passes the test, then using Chernoff bounds, approximate robustness, and stability of the property, we are guaranteed that P approximates some function in F. We next perform the equality test to ensure that P approximates the given f ∈ F. Assume that the correct value f(1/s) is known. Modifying the proofs in Section 4.2, one can show that if SC^P_f(1/s) is within a constant of f(1/s), then the error between SC^P_f and f can be bounded by a constant on the rest of D_{n,s}. Since SC^P_f approximates P, the correctness of the self-tester follows.
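A minimal sketch of the verification step is shown below. This is our own illustration with G[a, b] = a·b (the addition theorem for exp) and an interval standing in for D_{n,s}: sample random pairs, count violations of the addition theorem, and PASS iff the violation rate is below a threshold δ.

```python
import math
import random

def addition_theorem_test(P, eps, delta, trials=2000, seed=2):
    """Test whether P approximately satisfies P(x + y) ~ P(x) * P(y)
    (the addition theorem for exp, G[a, b] = a * b) on random pairs."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        x = rng.uniform(-0.5, 0.5)
        y = rng.uniform(-0.5, 0.5)
        if abs(P(x + y) - P(x) * P(y)) > eps:
            bad += 1
    return bad / trials <= delta  # True means PASS
```

The true exp passes with a tiny tolerance, while a program with a small additive bias violates the equation on essentially every pair and fails.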
For polynomials, we use random sampling to verify the conditions on program P required for approximate robustness that are given at the beginning of Section 3.3. If P satisfies the conditions, then using the approximate robustness and stability of the evenly spaced interpolation equation, P is guaranteed to approximate some degree-d polynomial h. To perform the equality test that determines whether P approximates the correct polynomial f, we assume that the tester is given the correct value of the polynomial f at evenly spaced sample points x_1, x_2, … ∈ D_{n,s}. Using the self-corrector SC^P_f from Section 5.2, we have that SC^P_f is uniformly close to h, say ‖SC^P_f − h‖ ≤ ε′′. The equality tester now tests that, for all x_i, |SC^P_f(x_i) − f(x_i)| ≤ ε′′.

Call an input x bad if |f(x) − h(x)| > 2ε′′. If x is bad, then |f(x) − SC^P_f(x)| > ε′′; hence if x is a sample point and x is bad, the test would have failed. Define a bad interval to be a sequence of consecutive bad points. If the test passes, then any bad interval in the domain can be of length at most (2n + 1)/φ (where φ is the number of sample points), because any longer interval would contain at least one sample point. The two sample points immediately preceding and following the bad interval satisfy |f(x) − h(x)| ≤ 2ε′′. This implies that there must be a local maximum of f − h (a degree-d polynomial) inside the bad interval. Since there are only d extrema of f − h, there can be at most d bad intervals, and so the total number of bad points is at most d(2n + 1)/φ. Thus, on all but this small fraction of D_{n,s}, SC^P_f's error is at most 2ε′′. These arguments can be generalized to the k-variate case by partitioning the k-dimensional space into appropriately many small cubes.
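The evenly spaced interpolation equation used here is the (d + 1)-st finite difference identity: for any degree-d polynomial f and any x, t, Σ_{i=0}^{d+1} (−1)^i C(d+1, i) f(x + i t) = 0. A hedged sketch of the corresponding randomized check (our illustration, not the paper's code):

```python
import math
import random

def evenly_spaced_test(P, d, trials=200, tol=1e-6, seed=3):
    """Check the degree-d interpolation identity
    sum_i (-1)^i * C(d+1, i) * P(x + i*t) = 0 at random x and step t."""
    rng = random.Random(seed)
    coef = [(-1) ** i * math.comb(d + 1, i) for i in range(d + 2)]
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0)
        t = rng.uniform(0.25, 0.75)
        if abs(sum(c * P(x + i * t) for i, c in enumerate(coef))) > tol:
            return False
    return True
```

A cubic satisfies the d = 3 identity exactly (up to rounding), while a non-polynomial such as exp leaves a fourth difference of order t⁴·exp(x), far above the tolerance.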
5.4 Reductions Between Functional Equations

This section explores the idea of using reductions among functions (as in [BK89, ABC+]) to obtain approximate self-testers for new functions. Consider any pair of functions f_1, f_2 that are interreducible via functional equations: suppose we have an approximate self-tester for f_1, and let there exist continuous functions F, F_1 such that f_2 = F ∘ f_1 and f_1 = F_1 ∘ f_2. Given a program P_2 computing f_2, construct a program P_1 computing F_1 ∘ P_2. We can then self-test P_1. Suppose P_1 is close to f_1 on a large portion of the domain. Then for every x for which P_1(x) is close to f_1(x), we bound the deviation of P_2(x) from f_2(x). If we can bound the right-hand side by a constant (at least for a portion of the domain), we can bound the maximum deviation ε′ of P_2 from f_2. This idea can be used to give simple and alternative approximate self-testers for functions like sin x, cos x, sinh x, cosh x, which can be reduced to e^x.
Suppose we are given an approximate self-tester for f_1(x) = e^{ix}, and we want an approximate self-tester for the function f_2 given by f_2(x) = cos x. By the Euler formula, e^{ix} = cos x + i·sin x, and sin x = cos(x + 3π/2). Given a program P_2 that supposedly computes cos x, we can build a program P_1 (for e^{ix}) out of the given P_2 (for cos x) and self-test P_1. Let the range of f_1 be equipped with the metric in which the distance is the maximum of the distances in the real and imaginary coordinates. In other words, in our case, we have |P_1(x) − e^{ix}| = max{|P_2(x) − cos x|, |P_2(x + 3π/2) − sin x|}. This metric ensures that P_1 is erroneous on x if and only if P_2 is erroneous on at least one of x, x + 3π/2. Alternatively put, there is no "cancellation" of errors.
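A hedged sketch of this reduction (our own illustration): given any program P_2 for cos, build P_1 for e^{ix} via sin x = cos(x + 3π/2), and measure its error with the coordinatewise metric just described.

```python
import cmath
import math

def lift_cos_to_cexp(P2):
    """Build P1 (for e^{ix}) from a program P2 (for cos x), using
    sin x = cos(x + 3*pi/2): P1(x) = P2(x) + i * P2(x + 3*pi/2)."""
    return lambda x: complex(P2(x), P2(x + 3 * math.pi / 2))

def coord_error(P1, x):
    """Max-of-coordinates metric: P1 errs at x iff P2 errs at x or x + 3*pi/2."""
    z = P1(x) - cmath.exp(1j * x)
    return max(abs(z.real), abs(z.imag))
```

With a correct P_2 the lifted program matches e^{ix} to rounding error; a biased P_2 shows up directly in the coordinatewise error, with no cancellation.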
Suppose P_1 passes its self-test; what can we say about P_2? On the fraction of the domain that is "bad" for P_1, the errors can occur in both the places where P_2 is invoked; hence at most twice that fraction of the domain for P_2 is bad. On the rest of the domain, P_1 is close to f_1, which by our metric implies that P_2 is also close to f_2 there. Thus, P_2 is approximately correct on most of its domain. Similarly, suppose P_1 is not good: on at least some fraction of the domain, P_1 is not close to f_1. At these points, at least one of the two points where P_2 is called is definitely not close to f_2, so P_2 fails on a comparable fraction of its domain. Therefore, we obtain an approximate self-tester for f_2 from an approximate self-tester for f_1, such as the one given by [GLR+91].
Acknowledgements

We would like to thank Janos Aczel (U. of Waterloo), Peter Borwein (Simon Fraser U.), Gian Luigi Forti (U. of Milan), D. Sivakumar (SUNY Buffalo), Madhu Sudan (IBM Yorktown), and Nick Trefethen (Cornell U.) for their suggestions and pointers.
--R
Lectures on Functional Equations and their Applications.
Functions with bounded n-th differences.
Checking approximate computations over the reals.
Proof verification and hardness of approximation problems.
Probabilistic checking of proofs: A new characterization of NP.
Designing programs that check their work.
Software reliability via run-time result-checking. Journal of the ACM, 44(6):826-849.
Reflections on the Pentium division bug.
Functional Equations and Modeling in Science and Engineering.
The stability problem for a generalized Cauchy type functional equation.
Performance evaluation of programs related to the real gamma function.
The use of Taylor series to test accuracy of function programs.
A representation theorem for
An existence and stability theorem for a class of functional equations.
Stability of homomorphisms and completeness.
Local stability of the functional equation characterizing polynomial functions.
On the stability of the linear functional equation.
New directions in testing.
Approximation of Functions.
The Power of Interaction.
Grundlegende Eigenschaften der polynomischen Operationen.
Robust functional equations and their applications to program testing.
Testing polynomial functions efficiently and over rational domains.
Robust characterizations of polynomials and their applications to program testing.
Sull'approssimazione delle applicazioni localmente
Personal Communication
Theory of Approximation of Functions of a Real Variable.
The Theory of Functions.
Algebraic Methods in Hardware/Software Testing.
--TR | program testing;polynomials;property testing;approximate testing;functional equations |
586859 | Evolutionary Trees Can be Learned in Polynomial Time in the Two-State General Markov Model. | The j-state general Markov model of evolution (due to Steel) is a stochastic model concerned with the evolution of strings over an alphabet of size j. In particular, the two-state general Markov model of evolution generalizes the well-known Cavender--Farris--Neyman model of evolution by removing the symmetry restriction (which requires that the probability that a "0" turns into a "1" along an edge is the same as the probability that a "1" turns into a "0" along the edge). Farach and Kannan showed how to probably approximately correct (PAC)-learn Markov evolutionary trees in the Cavender--Farris--Neyman model provided that the target tree satisfies the additional restriction that all pairs of leaves have a sufficiently high probability of being the same. We show how to remove both restrictions and thereby obtain the first polynomial-time PAC-learning algorithm (in the sense of Kearns et al. [Proceedings of the 26th Annual ACM Symposium on the Theory of Computing, 1994, pp. 273--282]) for the general class of two-state Markov evolutionary trees. |
Introduction
The j-State General Markov Model of Evolution was proposed by Steel in 1994 [14]. The
model is concerned with the evolution of strings (such as DNA strings) over an alphabet of
size j . The model can be described as follows. A j-State Markov Evolutionary Tree consists
of a topology (a rooted tree, with edges directed away from the root), together with the
following parameters. The root of the tree is associated with j probabilities ρ_1, …, ρ_j, which sum to 1, and each edge of the tree is associated with a stochastic transition matrix
whose state space is the alphabet. A probabilistic experiment can be performed using the
Markov Evolutionary Tree as follows: The root is assigned a letter from the alphabet according
to the probabilities ρ_1, …, ρ_j. (Letter i is chosen with probability ρ_i.) Then the letter
propagates down the edges of the tree. As the letter passes through each edge, it undergoes
a probabilistic transition according to the transition matrix associated with the edge. The
result is a string of length n which is the concatenation of the letters obtained at the n leaves
of the tree. A j-State Markov Evolutionary Tree thus defines a probability distribution on
length-n strings over an alphabet of size j . (The probabilistic experiment described above
produces a single sample from the distribution. 1 )
To avoid getting bogged down in detail, we work with a binary alphabet. Thus, we will
consider Two-State Markov Evolutionary Trees.
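To make the generative process concrete, here is a small hedged sketch (our own, with a made-up tree encoding, not code from the paper): a tree is a list of edges (e0, e1, subtree), where e0 = Pr['0' becomes '1'] and e1 = Pr['1' becomes '0'] along the edge, and a leaf is the empty list. One call draws one sample string over the leaves.

```python
import random

def sample_met(root_p1, tree, rng):
    """Draw one sample from a two-state Markov Evolutionary Tree.
    `root_p1` is the probability that the root letter is 1; `tree` is a list
    of edges (e0, e1, subtree), a leaf being the empty list. Returns the
    concatenation of the letters obtained at the leaves."""
    out = []

    def walk(state, edges):
        if not edges:                       # leaf: record its letter
            out.append(str(state))
            return
        for e0, e1, sub in edges:
            flip = e0 if state == 0 else e1
            child = (1 - state) if rng.random() < flip else state
            walk(child, sub)

    root_state = 1 if rng.random() < root_p1 else 0
    walk(root_state, tree)
    return ''.join(out)
```

For example, a root with one leaf child and one internal child that itself has two leaf children yields length-3 strings; with all transition probabilities zero, every leaf simply copies the root letter.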
Following Farach and Kannan [9], Erdős, Steel, Székely and Warnow [7, 8] and Ambainis,
Desper, Farach and Kannan [2], we are interested in the problem of learning a Markov Evolutionary
Tree, given samples from its output distribution. Following Farach and Kannan and
Ambainis et al., we consider the problem of using polynomially many samples from a Markov
Evolutionary Tree M to "learn" a Markov Evolutionary Tree M 0 whose distribution is close
to that of M . We use the variation distance metric to measure the distance between two
distributions, D and D′, on strings of length n. The variation distance between D and D′ is Σ_{s∈{0,1}^n} |D(s) − D′(s)|. If M and M′ are n-leaf Markov Evolutionary Trees, we use var(M, M′) to denote the variation distance between the distribution of M and the distribution of M′.
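For intuition, the metric is straightforward to compute for small n when the two distributions are given explicitly. A hedged sketch (our own, with dict-encoded distributions):

```python
from itertools import product

def variation_distance(D1, D2, n):
    """L1 (variation) distance between two distributions on {0,1}^n,
    each given as a dict from strings to probabilities (absent = 0)."""
    total = 0.0
    for bits in product('01', repeat=n):
        s = ''.join(bits)
        total += abs(D1.get(s, 0.0) - D2.get(s, 0.0))
    return total
```

For example, the distance between the "two equal leaves" distribution and the uniform distribution on two bits is 1.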
We use the "Probably Approximately Correct" (PAC) distribution learning model of
Kearns, Mansour, Ron, Rubinfeld, Schapire and Sellie [11]. Our main result is the first
polynomial-time PAC-learning algorithm for the class of Two-State Markov Evolutionary
Trees (which we will refer to as METs):
Theorem 1. Let δ and ε be any positive constants. If our algorithm is given poly(n, 1/ε, 1/δ) samples from any MET M with any n-leaf topology T, then with probability at least 1 − δ, the MET M′ constructed by the algorithm satisfies var(M, M′) ≤ ε.
Interesting PAC-learning algorithms for biologically important restricted classes of METs
have been given by Farach and Kannan in [9] and by Ambainis, Desper, Farach and Kannan
in [2]. These algorithms (and their relation to our algorithm) will be discussed more fully in
Section 1.1. At this point, we simply note that these algorithms only apply to METs which
satisfy the following restrictions.
Restriction 1: All transition matrices are symmetric (the probability of a '1' turning into a '0' along an edge is the same as the probability of a '0' turning into a '1').

(Footnote 1: Biologists would view the n leaves as being existing species, and the internal nodes as being hypothetical ancestral species. Under the model, a single experiment as described above would produce a single bit position of (for example) DNA for all of the n species.)

Restriction 2: For some positive constant α, every pair of leaves (x, y) satisfies Pr[x = y] ≥ 1/2 + α.
We will explain in Section 1.1 why the restrictions significantly simplify the problem of learning
Markov Evolutionary Trees (though they certainly do not make it easy!) The main
contribution of our paper is to remove the restrictions.
While we have used variation distance (L 1 distance) to measure the distance between the
target distribution D and our hypothesis distribution D 0 , Kearns et al. formulated the problem
of learning probability distributions in terms of the Kullback-Leibler divergence distance from
the target distribution to the hypothesis distribution. This distance is defined to be the sum
over all length-n strings s of D(s) log(D(s)/D′(s)). Kearns et al. point out that the KL distance gives an upper bound on variation distance, in the sense that the KL distance from D to D′ is Ω(var(D, D′)²). Hence if a class of distributions can be PAC-learned using KL distance, it can be PAC-learned using variation distance. We justify our use of the variation distance metric by showing that the reverse is true. In particular, we prove the following lemma in the Appendix.

Lemma 2. Any class of probability distributions over the domain {0,1}^n that is PAC-learnable under the variation distance metric is PAC-learnable under the KL-distance measure.

The lemma is proved using a method related to the ε-Bayesian shift of Abe and Warmuth [3]. Note that the result requires a discrete domain of support for the target distribution, such as the domain {0,1}^n which we use here.
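The relationship between the two measures can be checked numerically on a small example. A hedged sketch (our own): KL in bits, L1 variation distance, and the Pinsker-style inequality KL ≥ var²/(2 ln 2), which is the Ω(var²) bound referred to above.

```python
import math

def kl_bits(P, Q):
    """Kullback-Leibler divergence (base 2) from P to Q;
    Q must be positive wherever P is."""
    return sum(p * math.log2(p / Q[s]) for s, p in P.items() if p > 0)

def variation(P, Q):
    """L1 variation distance between dict-encoded distributions."""
    keys = set(P) | set(Q)
    return sum(abs(P.get(s, 0.0) - Q.get(s, 0.0)) for s in keys)
```

For instance, with P biased 0.9/0.1 and Q uniform on one bit, the variation distance is 0.8 and the KL distance comfortably exceeds 0.8²/(2 ln 2).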
The rest of this section is organised as follows: Subsection 1.1 discusses previous work
related to the General Markov Model of Evolution, and the relationship between this work
and our work. Subsection 1.2 gives a brief synopsis of our algorithm for PAC-learning Markov
Evolutionary Trees. Subsection 1.3 discusses an interesting connection between the problem
of learning Markov Evolutionary Trees and the problem of learning mixtures of Hamming
balls, which was studied by Kearns et al. [11].
1.1 Previous Work and Its Relation to Our Work
The Two-State General Markov Model [14] which we study in this paper is a generalisation of
the Cavender-Farris-Neyman Model of Evolution [5, 10, 13]. Before describing the Cavender-
Farris-Neyman Model, let us return to the Two-State General Markov Model. We will fix
attention on the particular two-state alphabet {0, 1}. Thus, the stochastic transition matrix associated with edge e is simply the matrix

    [ 1 − e_0     e_0   ]
    [   e_1     1 − e_1 ]

where e_0 denotes the probability that a '0' turns into a '1' along edge e and e_1 denotes the probability that a '1' turns into a '0' along edge e. The Cavender-Farris-Neyman Model is simply the special case of the Two-State General Markov Model in which the transition matrices are required to be symmetric. That is, it is the special case of the Two-State General Markov Model in which Restriction 1 (from page 1) holds (so e_0 = e_1 for every edge e).
We now describe past work on learning Markov Evolutionary Trees in the General Markov
Model and in the Cavender-Farris-Neyman Model. Throughout the paper, we will define the
weight w(e) of an edge e to be 1 − e_0 − e_1 (the determinant of the transition matrix of e).
Steel [14] showed that if a j-State Markov Evolutionary Tree M satisfies (i) ρ_i > 0 for all i, and (ii) the determinant of every transition matrix is outside of {−1, 0, 1}, then the distribution of M uniquely determines its topology. In this case, he showed how to recover the topology, given the joint distribution of every pair of leaves. In the 2-state case, it suffices to know the exact value of the covariances of every pair of leaves. In this case, he defined the distance λ(e) of an edge e from node v to node w in terms of the weight w(e), with one scaling factor (involving the distribution of the letter at w) when w is a leaf and another (involving the distributions at both v and w) when w is internal.
Steel observed that these distances are multiplicative along a path and that the distance between
two leaves is equal to their covariance. Since the distances are multiplicative along a
path, their logarithms are additive. Therefore, methods for constructing trees from additive
distances such as the method of Bandelt and Dress [4] can be used to reconstruct the topology.
Steel's method does not show how to recover the parameters of a Markov Evolutionary Tree,
even when the exact distribution is known and j = 2. In particular, the quantity that he
obtains for each edge e is a one-dimensional distance rather than a two-dimensional vector
giving the two transition probabilities e 0 and e 1 . Our method shows how to recover the
parameters exactly, given the exact distribution, and how to recover the parameters approximately
(well enough to approximate the distribution), given polynomially-many samples from
M .
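The pairwise leaf covariances that this recovery rests on are exactly what one estimates from samples. A hedged sketch of the estimator (our own code; each sample string records the leaf letters position by position):

```python
def pairwise_covariances(samples):
    """Estimate cov(x, y) for every pair of leaf positions from a list of
    equal-length 0/1 sample strings (one string per draw from the MET)."""
    m = len(samples)
    n = len(samples[0])
    bits = [[int(ch) for ch in s] for s in samples]
    mean = [sum(row[i] for row in bits) / m for i in range(n)]
    cov = {}
    for i in range(n):
        for j in range(i + 1, n):
            e_ij = sum(row[i] * row[j] for row in bits) / m
            cov[(i, j)] = e_ij - mean[i] * mean[j]
    return cov
```

Perfectly correlated leaves give covariance 1/4, and independent uniform leaves give covariance 0, as expected.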
Farach and Kannan [9] and Ambainis, Desper, Farach and Kannan [2] worked primarily
in the special case of the Two-State General Markov Model satisfying the two restrictions
on Page 1. Farach and Kannan's paper was a breakthrough, because prior to their paper
nothing was known about the feasibility of reconstructing Markov Evolutionary Trees from
samples. For any given positive constant ff, they showed how to PAC-learn the class of
METs which satisfy the two restrictions. However, the number of samples required is a
function of 1=ff, which is taken to be a constant. Ambainis et al. improved the bounds
given by Farach and Kannan to achieve asymptotically tight upper and lower bounds on the
number of samples needed to achieve a given variation distance. These results are elegant
and important. Nevertheless, the restrictions that they place on the model do significantly
simplify the problem of learning Markov Evolutionary Trees. In order to explain why this is
true, we explain the approach of Farach et al.: Their algorithm uses samples from a MET
M , which satisfies the restrictions above, to estimate the "distance" between any two leaves.
(The distance is related to the covariance between the leaves.) The authors then relate the
distance between two leaves to the amount of evolutionary time that elapses between them.
The distances are thus turned into times. Then the algorithm of [1] is used to approximate the
evolutionary times with times which are close, but form an additive metric, which
can be fitted onto a tree. Finally, the times are turned back into transition probabilities.
The symmetry assumption is essential to this approach because it is symmetry that relates a
one-dimensional quantity (evolutionary time) to an otherwise two-dimensional quantity (the
probability of going from a '0' to a `1' and the probability of going from a '1' to a `0'). The
second restriction is also essential: If the probability that x differs from y were allowed to
approach 1=2, then the evolutionary time from x to y would tend to 1. This would mean
that in order to approximate the inter-leaf times accurately, the algorithm would have to get
the distance estimates very accurately, which would require many samples. Ambainis et al. [2]
generalised their results to a symmetric version of the j-state evolutionary model, subject to
the two restrictions above.
Erdős, Steel, Székely and Warnow [7, 8] also considered the reconstruction of Markov
Evolutionary Trees from samples. Like Steel [14] and unlike our paper or the papers of
Farach et al. [9, 2], Erdős et al. were only interested in reconstructing the topology of a MET
(rather than its parameters or distribution), and they were interested in using as few samples
as possible to reconstruct the topology. They showed how to reconstruct topologies in the
j-state General Markov Model when the Markov Evolutionary Trees satisfy (i) every root probability is bounded away from 0, (ii) every transition probability is bounded away from 0 and below 1/2, and (iii) for positive quantities λ and λ′, the determinant of the transition matrix along each edge is between λ and 1 − λ′. The number of samples required is polynomial
in the worst case, but is only polylogarithmic in certain cases including the case in which
the MET is drawn uniformly at random from one of several (specified) natural distributions.
Note that restriction (iii) of Erd-os et al. is weaker than Farach and Kannan's Restriction 2
(from Page 1). However, Erd-os et al. only show how to reconstruct the topology (thus they
work in a restricted case in which the topology can be uniquely constructed using samples).
They do not show how to reconstruct the parameters of the Markov Evolutionary Tree, or
how to approximate its distribution.
1.2 A Synopsis of our Method
In this paper, we describe the first polynomial-time PAC-learning algorithm for the class of
Two-State Markov Evolutionary Trees (METs). Our algorithm works as follows: First, using
samples from the target MET, the algorithm estimates all of the pairwise covariances between
leaves of the MET. Second, using the covariances, the leaves of the MET are partitioned into
"related sets" of leaves. Essentially, leaves in different related sets have such small covariances
between them that it is not always possible to use polynomially many samples to discover
how the related sets are connected in the target topology. Nevertheless, we show that we can
closely approximate the distribution of the target MET by approximating the distribution
of each related set closely, and then joining the related sets by "cut edges". The first step,
for each related set, is to discover an approximation to the correct topology. Since we do
not restrict the class of METs which we consider, we cannot guarantee to construct the
exact induced topology (in the target MET). Nevertheless we guarantee to construct a good
enough approximation. The topology is constructed by looking at triples of leaves. We show
how to ensure that each triple that we consider has large inter-leaf covariances. We derive
quadratic equations which allow us to approximately recover the parameters of the triple,
using estimates of inter-leaf covariances and estimates of probabilities of particular outputs.
We compare the outcomes for different triples and use the comparisons to construct the
topology. Once we have the topology, we again use our quadratic equations to discover the
parameters of the tree. As we show in Section 2.4, we are able to prevent the error in our
estimates from accumulating, so we are able to guarantee that each estimated parameter is
within a small additive error of the "real" parameter in a (normalised) target MET. From
this, we can show that the variation distance between our hypothesis and the target is small.
1.3 Markov Evolutionary Trees and Mixtures of Hamming Balls
A Hamming ball distribution [11] over binary strings of length n is defined by a center (a string
c of length n) and a corruption probability p. To generate an output from the distribution, one
starts with the center, and then flips each bit (or not) according to an independent Bernoulli
experiment with probability p. A linear mixture of j Hamming balls is a distribution defined
by j Hamming ball distributions, together with j probabilities ae which sum to 1 and
determine from which Hamming ball distribution a particular sample should be taken. For
any fixed j , Kearns et al. give a polynomial-time PAC-learning algorithm for a mixture of j
Hamming balls, provided all j Hamming balls have the same corruption probability 2 .
A pure distribution over binary strings of length n is defined by n probabilities,
To generate an output from the distribution, the i'th bit is set to `0' independently with
probability - i , and to '1' otherwise. A pure distribution is a natural generalisation of a
Hamming ball distribution. Clearly, every linear mixture of j pure distributions can be
realized by a j-state MET with a star-shaped topology. Thus, the algorithm given in this
paper shows how to learn a linear mixture of any two pure distributions. Furthermore, a
generalisation of our result to a j-ary alphabet would show how to learn any linear mixture
of any j pure distributions.
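Since a linear mixture of pure distributions is exactly the leaf distribution of a star-shaped MET whose root state picks the component, the generation problem for this special case is easy to sketch. The following is a minimal illustration (the function name and data layout are our own, not from the paper):

```python
import random

def sample_mixture_of_pure(mix_weights, bit_probs, rng=random):
    """Draw one binary string from a linear mixture of pure distributions.

    mix_weights: list of j probabilities summing to 1 (the mixing weights).
    bit_probs:   j lists of n probabilities; bit_probs[k][i] is the
                 probability that bit i is '0' under component k.
    This is exactly the leaf distribution of a j-state star-shaped MET
    whose root state selects the component.
    """
    # Choose a component (the root state of the star MET).
    r = rng.random()
    k = 0
    acc = mix_weights[0]
    while r > acc and k + 1 < len(mix_weights):
        k += 1
        acc += mix_weights[k]
    # Each leaf bit is generated independently given the component.
    return [0 if rng.random() < p else 1 for p in bit_probs[k]]
```

With degenerate bit probabilities the output is deterministic, which makes the encoding easy to check.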
2 The Algorithm
Our description of our PAC-learning algorithm and its analysis require the following definitions.
For positive constants δ and ε, the input to the algorithm consists of poly(n, 1/ε, 1/δ)
samples from a MET M with an n-leaf topology T. We will let ε_1, ε_2, ε_3, ε_4 and ε_5 denote
auxiliary error parameters defined in terms of ε and n. We have made no effort to
optimise these constants. However, we state them explicitly so that the reader can verify
below that the constants can be defined consistently. We define an ε_4-contraction of a MET
with topology T′ to be a tree formed from T′ by contracting some internal edges e for which
Δ(e) > 1 − ε_4, where Δ(e) is the edge-distance of e as defined by Steel [14] (see Equation 1).
If x and y are leaves of the topology T then we use the notation cov(x, y) to denote the
covariance of the indicator variables for the events "the bit at x is 1" and "the bit at y is 1".
Thus,

cov(x, y) = Pr(xy = 11) − Pr(x = 1) Pr(y = 1).   (2)

We will use the following observations.
Observation 3 If MET M′ has topology T′ and e is an internal edge of T′ from the root r to
node v and T″ is a topology that is the same as T′ except that v is the root (so e goes from v to
r) then we can construct a MET with topology T″ which has the same distribution as M′. To
do this, we simply set the root probability of v and the reversed transition probabilities of e
appropriately (computed from the distribution of M′).
Observation 4 If MET M′ has topology T′ and v is a degree-2 node in T′ with edge e leading
into v and edge f leading out of v and T″ is a topology which is the same as T′ except that e
and f have been contracted to form edge g, then there is a MET with topology T″ which has
the same distribution as M′. To construct it, we simply set the transition probabilities of g
to those of the two-step chain through e and then f.

² The kind of PAC-learning that we consider in this paper is generation. Kearns et al. also show how to do
evaluation for the special case of the mixture of j Hamming balls described above. Using the observation that
the output distributions of the subtrees below a node of a MET are independent, provided the bit at that node
is fixed, we can also solve the evaluation problem for METs. In particular, we can calculate (in polynomial
time) the probability that a given string is output by the hypothesis MET.
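The evaluation claim in footnote 2 rests on the fact that, conditioned on the bit at a node, the subtrees below it are independent, so likelihoods can be multiplied leaf-to-root. A sketch of that dynamic program for the two-state case, with a hypothetical dictionary encoding of the tree (names are ours):

```python
def leaf_string_probability(tree, rho1, observed):
    """Pr[the MET outputs `observed`] by leaf-to-root dynamic programming.

    tree: dict node -> list of (child, e0, e1) edges, where e0 = Pr(child
          bit is 1 | parent bit 0) and e1 = Pr(child bit is 0 | parent 1).
    rho1: probability that the root bit is 1.
    observed: dict leaf -> bit.  Node names are a hypothetical encoding.
    Conditioned on a node's bit, its subtrees are independent, so
    L[v][b] = Pr(observed bits below v | bit at v is b) factorises.
    """
    def L(v):
        if v in observed:                      # leaf: indicator of its bit
            return [1.0 if observed[v] == b else 0.0 for b in (0, 1)]
        like = [1.0, 1.0]
        for child, e0, e1 in tree[v]:
            cl = L(child)
            # transition: flip with prob e0 from state 0, e1 from state 1
            like[0] *= (1 - e0) * cl[0] + e0 * cl[1]
            like[1] *= e1 * cl[0] + (1 - e1) * cl[1]
        return like
    root_like = L('root')
    return (1 - rho1) * root_like[0] + rho1 * root_like[1]
```

The recursion runs in time linear in the tree size, matching the footnote's polynomial-time claim.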
Observation 5 If MET M′ has topology T′ then there is a MET M″ with topology T′ which
has the same distribution on its leaves as M′ and has every internal edge e satisfy e_0 + e_1 ≤ 1.
Proof of Observation 5: We will say that an edge e is "good" if e_0 + e_1 ≤ 1. Starting
from the root we can make all edges along a path to a leaf good, except perhaps the last edge
in the path. If edge e from u to v is the first non-good edge in the path we simply set e_0
to 1 − e_0 and e_1 to 1 − e_1. This makes the edge good but it has the side effect
of interchanging the meaning of "0" and "1" at node v. As long as we interchange "0" and
"1" an even number of times along every path we will preserve the distribution at the leaves.
Thus, we can make all edges good except possibly the last one, which we use to get the parity
of the number of interchanges correct. □
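The transformation in the proof of Observation 5 can be sketched as a downward walk that flips an edge's parameters when they sum to more than 1, propagating the induced relabelling of '0' and '1' to the next edge, with leaf edges absorbing the parity. A sketch under an assumed dictionary encoding (names and layout are ours):

```python
def normalise_good_edges(edges, children, root):
    """Make every internal edge satisfy e0 + e1 <= 1 (Observation 5).

    edges: dict (u, v) -> [e0, e1] with e0 = Pr(v bit is 1 | u bit 0)
           and e1 = Pr(v bit is 0 | u bit 1); children: dict u -> [v, ...].
    Relabelling a node's states is absorbed by the edges around it: a
    relabelled parent turns (e0, e1) into (1 - e1, 1 - e0); relabelling
    the lower endpoint turns (e0, e1) into (1 - e0, 1 - e1).  Leaf labels
    are observable, so leaf edges are never flipped at the bottom end.
    """
    def walk(u, parent_relabelled):
        for v in children.get(u, []):
            e0, e1 = edges[(u, v)]
            if parent_relabelled:          # re-express w.r.t. parent's new labels
                e0, e1 = 1 - e1, 1 - e0
            is_leaf = not children.get(v)
            relabel_v = (not is_leaf) and (e0 + e1 > 1)
            if relabel_v:                  # swap '0'/'1' at v: edge becomes good
                e0, e1 = 1 - e0, 1 - e1
            edges[(u, v)] = [e0, e1]
            walk(v, relabel_v)
    walk(root, False)
    return edges
```

After the walk, every internal edge is good, while a leaf edge may stay non-good, exactly as the parity argument in the proof allows.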
We will now describe the algorithm. In subsection 2.6, we will prove that with probability
at least 1 − δ, the MET M′ that it constructs satisfies var(M, M′) ≤ ε. Thus, we will prove
Theorem 1.
2.1 Step 1: Estimate the covariances of pairs of leaves
For each pair (x, y) of leaves, obtain an "observed" covariance ĉov(x, y) such that, with
probability at least 1 − δ/3, all observed covariances satisfy |ĉov(x, y) − cov(x, y)| ≤ ε_3.
Lemma 6 Step 1 requires only poly(n, 1/ε, 1/δ) samples from M.
Proof: Consider leaves x and y and let p denote Pr(xy = 11). By a Chernoff bound
(see [12]), after k samples the observed proportion of outputs with xy = 11 is within ±ε′
of p, with probability at least 1 − 2e^(−2kε′²).
For each pair (x, y) of leaves, we estimate Pr(xy = 11), Pr(x = 1) and Pr(y = 1) in this way.
From these estimates, we can calculate
ĉov(x, y) within ±ε_3 using Equation 2. □
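Step 1 amounts to plugging empirical frequencies into Equation 2. A sketch, assuming samples are given as equal-length 0/1 tuples (the data layout is ours):

```python
import itertools

def estimate_covariances(samples):
    """Empirical version of Step 1: for each pair of leaves (x, y),
    estimate cov(x, y) = Pr(xy = 11) - Pr(x = 1) Pr(y = 1) (Equation 2)
    from i.i.d. samples.  `samples` is a list of equal-length 0/1 tuples;
    leaves are identified by position.
    """
    m = len(samples)
    n = len(samples[0])
    p1 = [sum(s[i] for s in samples) / m for i in range(n)]   # Pr(x = 1)
    cov = {}
    for x, y in itertools.combinations(range(n), 2):
        p11 = sum(s[x] * s[y] for s in samples) / m           # Pr(xy = 11)
        cov[(x, y)] = p11 - p1[x] * p1[y]
    return cov
```

The Chernoff/Hoeffding argument of Lemma 6 then says that polynomially many samples make every entry of this dictionary accurate to within ±ε_3.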
2.2 Step 2: Partition the leaves of M into related sets
Consider the following leaf connectivity graph whose nodes are the leaves of M. Nodes x and
y are connected by a "positive" edge if ĉov(x, y) ≥ (3/4)ε_2 and are connected by a "negative"
edge if ĉov(x, y) ≤ −(3/4)ε_2. Each connected component in this graph (ignoring the signs of
edges) forms a set of "related" leaves. For each set S of related leaves, let s(S) denote the
leaf in S with smallest index. METs have the property that for leaves x, y and z, cov(x, z) is
positive iff cov(x, y) and cov(y, z) have the same sign. (To see this, use the following equation,
which can be proved by algebraic manipulation from Equation 2.)

cov(x, y) = ρ_v (1 − ρ_v)(1 − α_0 − α_1)(1 − β_0 − β_1),   (3)

where v is taken to be the least common ancestor of x and y, ρ_v is the probability that the
bit at v is 1, α_0 and α_1 are the transition probabilities along the path from v to x and β_0
and β_1 are the transition probabilities along the path from v to y. Therefore, as long as the
observed covariances are as accurate as stated in Step 1, the signs on the edges of the leaf
connectivity graph partition the leaves of S into two sets S_1 and S_2 in such a way that
s(S) ∈ S_1, all covariances between pairs of leaves in S_1 are positive, all covariances
between pairs of leaves in S_2 are positive, and all covariances
between a leaf in S_1 and a leaf in S_2 are negative.
For each set S of related leaves, let T(S) denote the subtree formed from T by deleting all
leaves which are not in S, contracting all degree-2 nodes, and then rooting at the neighbour
of s(S). Let M(S) be a MET with topology T(S) which has the same distribution as M on
its leaves and satisfies the following.
• Every internal edge e of M(S) has e_0 + e_1 ≤ 1.
• Every edge e to a node in S_1 has e_0 + e_1 ≤ 1.
• Every edge e to a node in S_2 has e_0 + e_1 > 1.
Observations 3, 4 and 5 guarantee that M(S) exists.
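Step 2 is ordinary graph processing: threshold the covariance estimates, take connected components, and 2-colour each component by covariance sign. A sketch (the data layout is ours):

```python
def partition_related_sets(cov_hat, leaves, eps2):
    """Step 2 sketch: build the leaf connectivity graph (an edge wherever
    the covariance estimate has absolute value >= (3/4) * eps2), take
    connected components as related sets, and split each set into S1/S2
    by covariance sign.  cov_hat maps sorted leaf pairs to estimates.
    """
    thresh = 0.75 * eps2
    adj = {x: [] for x in leaves}
    for (x, y), c in cov_hat.items():
        if abs(c) >= thresh:
            adj[x].append(y)
            adj[y].append(x)
    seen, components = set(), []
    for x in sorted(leaves):
        if x in seen:
            continue
        comp, stack = [], [x]          # depth-first search for the component
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(adj[u])
        components.append(sorted(comp))
    # Within a component, put s(S) (the smallest leaf) in S1; a leaf joins
    # S2 iff its covariance with an already-placed neighbour is negative.
    split = []
    for comp in components:
        sign = {comp[0]: +1}
        frontier = [comp[0]]
        while frontier:
            u = frontier.pop()
            for v in adj[u]:
                if v in sign:
                    continue
                c = cov_hat.get((min(u, v), max(u, v)), 0.0)
                sign[v] = sign[u] if c > 0 else -sign[u]
                frontier.append(v)
        split.append(([l for l in comp if sign[l] > 0],
                      [l for l in comp if sign[l] < 0]))
    return split
```

The sign propagation is consistent precisely because of the sign rule derived from Equation 3.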
Observation 7 As long as the observed covariances are as accurate as stated in Step 1 (which
happens with probability at least 1 − δ/3), then for any related set S and any leaf x ∈ S there
is a leaf y ∈ S such that |cov(x, y)| ≥ ε_2/2.
Observation 8 As long as the observed covariances are as accurate as stated in Step 1 (which
happens with probability at least 1 − δ/3), then for any related set S and any edge e of T(S)
there are leaves a and b which are connected through e and have |cov(a, b)| ≥ ε_2/2.
Observation 9 As long as the observed covariances are as accurate as stated in Step 1 (which
happens with probability at least 1 − δ/3), then for any related set S, every internal node v
of M(S) has ρ_v(1 − ρ_v) ≥ ε_2/2.
Proof of Observation 9: Suppose to the contrary that v is an internal node of M(S)
with ρ_v(1 − ρ_v) < ε_2/2. Using Observation 3, we can re-root M(S) at v
without changing the distribution. Let w be a child of v. By Equation 3, every pair of leaves
a and b which are connected through (v, w) satisfy |cov(a, b)| ≤ ρ_v(1 − ρ_v) < ε_2/2.
The observation now follows from Observation 8. □
Observation 10 As long as the observed covariances are as accurate as stated in Step 1
(which happens with probability at least 1 − δ/3), then for any related set S, every edge e of
M(S) has w(e) ≥ ε_2/2.
Proof of Observation 10: This follows from Observation 8 using Equation 3. (Recall
the definition of the weight w(e) of an edge e.) □
2.3 Step 3: For each related set S, find an ε_4-contraction T′(S) of T(S).
In this section, we will assume that the observed covariances are as accurate as stated in Step 1
(this happens with probability at least 1 − δ/3). Let S be a related set. With probability
at least 1 − δ/(3n) we will find an ε_4-contraction T′(S) of T(S). Since there are at most n
related sets, all ε_4-contractions will be constructed with probability at least 1 − δ/3. Recall
that an ε_4-contraction of M(S) is a tree formed from T(S) by contracting some internal
edges e for which Δ(e) > 1 − ε_4. We start with the following observation, which will allow us
to redirect edges for convenience.
Observation 11 If e is an internal edge of T(S) then Δ(e) remains unchanged if e is redirected
as in Observation 3.
Proof: The observation can be proved by algebraic manipulation from Equation 1 and
Observation 3. Note (from Observation 9) that every endpoint v of e satisfies ρ_v ∈
(0, 1). Thus, the redirection in Observation 3 is not degenerate and Δ(e) is defined. □
We now describe the algorithm for constructing an ε_4-contraction T′(S) of T(S). We
will build up T′(S) inductively, adding leaves from S one by one. That is, when we have
an ε_4-contraction T′(S′) of a subset S′ of S, we will consider a leaf x ∈ S − S′ and build
an ε_4-contraction T′(S′ ∪ {x}). The precise order in which
the leaves are added does not matter, but we will not add a new leaf x unless S′ contains
a leaf y such that |ĉov(x, y)| ≥ (3/4)ε_2. When we add a new leaf x we will proceed as
follows. First, we will consider T′(S′) and use
the method in the following section (Section 2.3.1) to estimate Δ(e′). More specifically, we
will let u and v be nodes which are adjacent in T(S′) and have u ∈ u′ and v ∈ v′ in the
contraction, and show how to estimate Δ(e) for e = (u, v). Afterwards (in Section 2.3.2), we
will show how to insert x.
2.3.1 Estimating Δ(e)
In this section, we suppose that we have a MET M(S′) on a set S′ of leaves, all of which
form a single related set. T(S′) is the topology of M(S′) and T′(S′) is an ε_4-contraction of
T(S′). e′ = (u′, v′) is an edge of T′(S′) and e = (u, v) is the edge of T(S′) for which
u ∈ u′ and v ∈ v′. We wish to estimate Δ(e) within ±ε_4/16. We will ensure that the overall
probability that the estimates are not in this range is at most δ/(6n).
The proof of the following equations (6 and 7), which bound the effect of an additive error z
on the square roots appearing in Equation 1, is straightforward. We will typically apply them in
situations in which z is the error of an approximation.
Case 1: e′ is an internal edge
We first estimate e_0, e_1, ρ_u and ρ_v within ±ε_5 of the correct values.
By Observation 9, ρ_u(1 − ρ_u) and ρ_v(1 − ρ_v) are at least ε_2/2, so our estimate of
ρ_u(1 − ρ_u) is within a factor of (1 ± 2ε_5 · (2/ε_2)) of the correct value. Similarly, our
estimates of the remaining quantities in Equation 1 are within a factor of (1 ± ε_4 · 2^(−9)) of the
correct values. Now using Equation 1 we can estimate Δ(e) within ±ε_4/16. In particular,
by Equation 6 and the fact that Δ(e) ≤ 1, our estimate of Δ(e) is at most Δ(e) + ε_4/16,
and similarly, by Equation 7, our estimate of Δ(e) is at least Δ(e) − ε_4/16.
We now show how to estimate e_0, e_1, ρ_u and ρ_v. We say
that a path from node α to node β in a MET is strong if |cov(α, β)| ≥ ε_2/2. It follows from
Equation 3 that if node γ is on this path then

cov(α, β) = cov(α, γ) cov(γ, β) / (ρ_γ(1 − ρ_γ)).   (8)

We say that a quartet (c, b | a, d) of leaves a, b, c and d is a good estimator of the edge
e = (u, v) if e is an edge of T(S′) and the following hold in T(S′) (see Figure 1).
1. a is a descendent of v.
2. The undirected path from c to a is strong and passes through u then v.
3. The path from u to its descendent b is strong and only intersects the (undirected) path
from c to a at node u.
4. The path from v to its descendent d is strong and only intersects the path from v to a
at node v.
We say that (c, b | a, d) is an apparently good estimator of e′ if the following hold in the
ε_4-contraction T′(S′).
1. a is a descendent of v′.
2. The undirected path from c to a is strong and passes through u′ then v′.
3. The path from u′ to its descendent b is strong and only intersects the (undirected) path
from c to a at node u′.
4. The path from v′ to its descendent d is strong and only intersects the path from v′ to
a at node v′.
Figure 1: Finding Δ(e).

Observation 12 If e is an edge of T(S′) and (c, b | a, d) is a good estimator of e then any
leaves x, y ∈ {a, b, c, d} satisfy |cov(x, y)| ≥ (ε_2/2)^3.
Proof: The observation follows from Equations 8 and 9 and from the definition of a good
estimator. □
Lemma 13 If (c, b | a, d) is a good estimator of e then it can be used (along with poly(n, 1/ε, 1/δ)
samples from M(S′)) to estimate e_0, e_1, ρ_u and ρ_v within ±ε_5. (If we use
sufficiently many samples, then the probability that any of the estimates is not within ±ε_5 of
the correct value is at most δ/(12n^7).)
Proof: Let q_0 and q_1 denote the transition probabilities from v to a (see Figure 1) and let
p_0 and p_1 denote the transition probabilities from u to a. We will first show how to estimate
p_0, p_1, ρ_u and ρ_v. Without loss of generality (by Observation 3) we can
assume that c is a descendant of u. (Otherwise we can re-root T(S′) at u without changing
the distribution on the nodes or p_0 or p_1.) Let β be the path from u to b and let γ be the
path from u to c. We now define two conditional quantities for b and c.
(These do not quite correspond to the conditional covariances of b and c, but they are related
to these.) We also define quantities F and D in terms of these and
cov(b, c).
The following equations can be proved by algebraic manipulation from Equation 10, Equation
2 and the definitions of F and D.
Case 1a: a ∈ S_1
In this case, by Equation 4 and by Observation 10, we have the bounds needed below. By
Equation 13, we have an expression for F, and Equations 12 and 14 imply an expression for D.
From these equations, it is clear that we could find p_0, p_1, ρ_u and ρ_v
exactly, given the exact values of F and D. We now show that with polynomially-many
samples, we can approximate the values of F and D
sufficiently accurately so that, using our approximations and the above equations, we obtain
approximations for p_0, p_1, ρ_u and ρ_v which are within ±ε_6. As in the proof of Lemma 6,
we can use Equations 2 and 10 to estimate the required probabilities and covariances
within ±ε′ for any ε′ whose inverse is at most a polynomial in n and 1/ε. Note that our
estimate of cov(b, c) will be non-zero by Observation 12 (as long as ε′ ≤ (ε_2/2)^3), so we
will be able to use it to estimate F from its definition. Now, using the definition of F and
Equation 5, we can bound our estimate of 2F from above.
By Observation 12, the error is at most ε″ for any ε″ whose inverse is at most polynomial in n and 1/ε. (This
is accomplished by making ε′ small enough with respect to ε_2 according to Equation 18.) We
can similarly bound the amount that we underestimate F. Now we use the definition of D
to estimate D. Using Equation 5 and Observation 12,
the error can again be made within ±ε‴ for any ε‴ whose
inverse is polynomial in n and 1/ε (by making ε′ and ε″ sufficiently small). It follows that,
since Observation 12 gives us an upper
bound on the value of D as a function of ε_2, we can estimate
√D within ±ε⁗ for any
ε⁗ whose inverse is polynomial in n and 1/ε. This implies that we can estimate p_0 and p_1
within ±ε_6. Observation 12 and Equation 3 imply that w(p) ≥ (ε_2/2)^3. Thus, the estimate
for √D is non-zero. This implies that we can similarly estimate ρ_u and ρ_v using
Equation 17.
Now that we have estimates for p_0, p_1, ρ_u and ρ_v which are within ±ε_6 of the correct
values, we can repeat the trick to find estimates for q_0 and q_1 which are also within ±ε_6. We
use leaf d for this. Observation 4 implies equations relating e_0 and e_1 to p_0, p_1, q_0 and q_1.
Using these equations, Equation 5 and our observation above that w(p) ≥ (ε_2/2)^3, the error
in our estimate of e_0 is at most 2^7 ε_6/ε_2^3. Similarly, the estimate for e_0 is at least e_0 minus
this quantity, and the
estimate for e_1 is within ±ε_5 of e_1. We have now estimated e_0, e_1, ρ_u and ρ_v.
As we explained in the beginning of this section, we can use these estimates to estimate Δ(e).
Case 1b: a ∈ S_2
In this case, by Equation 4 and by Observation 10, we have the analogous bounds. By
Equation 13, we have the analogous expression for F, and
Equations 12 and 19 imply the analogous expression for D.
Equation 17 remains unchanged. The process of estimating p_0, p_1, ρ_u and ρ_v (from the
new equations) is the same as for Case 1a. This concludes the proof of Lemma 13. □
Observation 14 Suppose that e′ is an edge from u′ to v′ in T′(S′) and that e = (u, v)
is the edge in T(S′) such that u ∈ u′ and v ∈ v′. There is a good estimator (c, b | a, d)
of e. Furthermore, every good estimator of e is an apparently good estimator of e′. (Refer to
Figure 2.)

Figure 2: (c, b | a, d) is a good estimator of e and an apparently good estimator of e′.

Figure 3: (c, b | a, d) is an apparently good estimator of e′ and a good estimator of the path p.
Proof: Leaves c and a can be found to satisfy the first two criteria in the definition of
a good estimator by Observation 8. Leaf b can be found to satisfy the third criterion by
Observation 8 and Equation 8 and by the fact that the degree of u is at least 3 (see the
text just before Equation 4). Similarly, leaf d can be found to satisfy the fourth criterion.
(c, b | a, d) is an apparently good estimator of e′ because only internal edges of T(S′) can be
contracted in the ε_4-contraction T′(S′). □
Observation 15 Suppose that e′ is an edge from u′ to v′ in T′(S′) and that e = (u, v) is
an edge in T(S′) such that u ∈ u′ and v ∈ v′. Suppose that (c, b | a, d) is an apparently good
estimator of e′. Let u″ be the meeting point of c, b and a in T(S′). Let v″ be the meeting
point of c, a and d in T(S′). (Refer to Figure 3.) Then (c, b | a, d) is a good estimator of
the path p from u″ to v″ in T(S′). Also, Δ(p) ≤ Δ(e).
Proof: The fact that (c, b | a, d) is a good estimator of p follows from the definition of
good estimator. The fact that Δ(p) ≤ Δ(e) follows from the fact that the Δ distances are
multiplicative along a path, and bounded above by 1. □
Observations 14 and 15 imply that in order to estimate Δ(e) within ±ε_4/16, we need
only estimate Δ(e) using each apparently good estimator of e′ and then take the maximum.
By Lemma 13, the failure probability for any given estimator is at most δ/(12n^7), so with
probability at least 1 − δ/(12n^3), all estimators give estimates within ±ε_4/16 of the correct
values. Since there are at most 2n edges e′ in T′(S′), and we add a new leaf x to S′ at
most n times, all estimates are within ±ε_4/16 with probability at least 1 − δ/(6n).
Case 2: e′ is not an internal edge
In this case v′ = v is a leaf of T(S′). We say that a pair of leaves (b, c) is a
good estimator of e if the following holds in T(S′): the paths from leaves v, b and c meet
at u and |cov(v, b)|, |cov(v, c)| and |cov(b, c)| are all at least (ε_2/2)^2. We say that (b, c) is an
apparently good estimator of e′ if the following holds in T′(S′): the paths from leaves v, b
and c meet at u′ and |cov(v, b)|, |cov(v, c)| and |cov(b, c)| are all at least (ε_2/2)^2. As in the
previous case, the result follows from the following observations.
Observation 16 If (b, c) is a good estimator of e then it can be used (along with poly(n, 1/ε, 1/δ)
samples from M(S′)) to estimate e_0, e_1, ρ_u and ρ_v within ±ε_5. (The probability that
any of the estimates is not within ±ε_5 of the correct value is at most δ/(12n^3).)
Proof: This follows from the proof of Lemma 13. □
Observation 17 Suppose that e′ is an edge from u′ to leaf v in T′(S′) and that e = (u, v) is
an edge in T(S′) such that u ∈ u′. There is a good estimator (b, c) of e. Furthermore, every
good estimator of e is an apparently good estimator of e′.
Proof: This follows from the proof of Observation 14 and from Equation 9. □
Observation 18 Suppose that e′ is an edge from u′ to leaf v in T′(S′) and that e = (u, v) is
an edge in T(S′) such that u ∈ u′. Suppose that (b, c) is an apparently good estimator of e′.
Let u″ be the meeting point of b, v and c in T(S′). Then (b, c) is a good estimator of the
path p from u″ to v in T(S′). Also, Δ(p) ≤ Δ(e).
Proof: This follows from the proof of Observation 15. □
2.3.2 Using the Estimates of Δ(e)
We now return to the problem of showing how to add a new leaf x to T′(S′). As we indicated
above, for every internal edge e′ = (u′, v′) of T′(S′) we use the method in Section 2.3.1 to
estimate Δ(e), where e = (u, v) is the edge of T(S′) such that u ∈ u′ and v ∈ v′. If the
observed value of Δ(e) exceeds 1 − 7ε_4/8 then we will contract e. The accuracy of our
estimates guarantees that we will not contract e if Δ(e) ≤ 1 − ε_4 and that we definitely
contract e if Δ(e) > 1 − 3ε_4/4. We will then add the new leaf x to T′(S′) as follows. We will
insert a new edge (x′, x). We will do this by either (1) identifying x′ with a node
already in T′(S′), or (2) splicing x′ into the middle of some edge of T′(S′).
We will now show how to decide where to attach x′ in T′(S′). We start with the following
definitions. Let S″ be the subset of S′ such that for every y ∈ S″ we have |cov(x, y)| ≥ (ε_2/2)^4.
Let T″ be the subtree of T′(S′) induced by the leaves in S″. Let S‴ be the subset of S′ such
that for every y ∈ S‴ we have |ĉov(x, y)| ≥ (ε_2/2)^4, and let T‴ be the subtree of T′(S′)
induced by the leaves in S‴.
Observation 19 If T(S′ ∪ {x}) has x′ attached to an edge e = (u, v) of T(S′) and e′ is the
edge corresponding to e in T′(S′) (that is, u ∈ u′ and v ∈ v′), then e′ is
an edge of T″.
Proof: By Observation 14 there is a good estimator (c, b | a, d) for e. Since x is being
added to S′ (using Equation 8), |cov(x, x′)| ≥ ε_2/2. Thus, by Observation 12 and Equation 9,
every leaf y ∈ {a, b, c, d} has |cov(x, y)| ≥ (ε_2/2)^4. Thus, a, b, c and d are all in S″ so e′ is
in T″. □
Observation 20 If T(S′ ∪ {x}) has x′ attached to an edge e = (u, v) of T(S′) and u and v
are both contained in node u′ of T′(S′) then u′ is a node of T″.
Proof: Since u is an internal node of T(S′), it has degree at least 3. By Observation 8
and Equation 8, there are three leaves a_1, a_2 and a_3 meeting at u with |cov(u, a_i)| ≥ ε_2/2.
Similarly, |cov(u, v)| ≥ ε_2/2. Thus, for each a_i, |cov(x, a_i)| ≥ (ε_2/2)^3 so a_1, a_2, and a_3 are
in S″. □
Observation 21 T″ is a subtree of T‴.
Proof: This follows from the accuracy of the covariance estimates in Step 1. □
We will use the following algorithm to decide where to attach x′ in T‴. In the algorithm,
we will use the following tool. For any triple (a, b, c) of leaves in S′ ∪ {x}, let u denote the
meeting point of the paths from leaves a, b and c in T(S′ ∪ {x}). Let M_u be the MET which
has the same distribution as M(S′ ∪ {x}), but is rooted at u. (M_u exists, by Observation 3.)
Let Δ_c(a, b, c) denote the weight of the path from u to c in M_u. By Observation 11, Δ_c(a, b, c)
is equal to the weight of the path from u to c in M(S′ ∪ {x}). (This follows from the fact that
re-rooting at u only redirects internal edges.) It follows from the definition of Δ (Equation 1)
and from Equation 3 that

Δ_c(a, b, c) = sqrt( cov(a, c) cov(b, c) / cov(a, b) ).

If a, b and c are in S‴ ∪ {x}, then by the accuracy of the covariance estimates and Equations 8
and 9, the absolute value of the pairwise covariance of any pair of them is at least (ε_2/2)^8.
As in Section 2.3.1, we can estimate cov(a, c), cov(b, c) and cov(a, b) within a factor of (1 ± ε′)
of the correct values for any ε′ whose inverse is at most a polynomial in n and 1/ε. Thus,
we can estimate Δ_c(a, b, c) within a factor of (1 ± ε_4/16) of the correct value. We will take
sufficiently many samples to ensure that the probability that any of the estimates is outside
of the required range is at most δ/(6n^2). Thus, the probability that any estimate is outside
of the range for any x is at most δ/(6n).
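The quantity Δ_c(a, b, c) above is computed directly from three covariance estimates. A sketch of the plug-in estimator (data layout is ours):

```python
import math

def triple_weight(cov_hat, a, b, c):
    """Estimate Delta_c(a, b, c), the weight of the path from the meeting
    point u of leaves a, b, c to leaf c, via the identity
        Delta_c(a, b, c) = sqrt( cov(a, c) * cov(b, c) / cov(a, b) ),
    evaluated on covariance estimates.  Keys of cov_hat are sorted pairs.
    """
    def cv(x, y):
        return cov_hat[(min(x, y), max(x, y))]
    ratio = cv(a, c) * cv(b, c) / cv(a, b)
    return math.sqrt(ratio)
```

On covariances generated by Equation 3 (a common meeting point u with path factors A, B, C), the identity returns sqrt(ρ_u(1 − ρ_u)) · C exactly.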
We will now determine where in T‴ to attach x′. Choose an arbitrary internal root u′
of T‴. We will first see where x′ should be placed with respect to u′. For each neighbour v′
of u′ in T‴, each pair of leaves (a_1, a_2) on the "u′ side" of (u′, v′), and each leaf b on the
"v′ side" of (u′, v′), perform the following two tests.

Figure 4: The setting for the tests when v′ is an internal node of T‴. (If v′ is a leaf, we
perform the same tests with b = v′.)

Figure 5: Either Test1 or Test2 fails.

Test1(u′, v′, a_1, a_2, b) succeeds if the observed value of Δ_x(a_1, a_2, x)
is at least 1 − ε_4/2.
Test2(u′, v′, a_1, a_2, b) succeeds if the observed value of Δ_b(a_1, a_2, b)
is at most 1 − 3ε_4/4.
We now make the following observations.
Observation 22 If x is on the "u side" of (u, v) in T(S‴ ∪ {x}) and u is in u′ in T‴ and
v is in v′ ≠ u′ in T‴ then some test fails.
Proof: Since u′ is an internal node of T‴, it has degree at least 3. Thus, we can construct
a test such as the one depicted in Figure 5. (If x′ = u the figure is still correct; that
would just mean that Δ(f) = 1. Similarly, if v′ is a leaf, we simply have that f′
is the edge from v to b.) Comparing the two tested quantities,
Test1 can only succeed if the left-hand fraction is at least 1 − ε_4/2,
and Test2 can only succeed if the right-hand fraction is at most 1 − 3ε_4/4.
Since our estimates are accurate to within a factor of (1 ± ε_4/16), at least one of
the two tests will fail. □
Observation 23 If x is between u and v in T(S‴ ∪ {x}) and the edge f from u to x′ has
Δ(f) sufficiently close to 1, then Test1(u′, v′, a_1, a_2, b) and Test2(u′, v′, a_1, a_2, b) succeed
for all choices of a_1, a_2 and b.
Figure 6: Test1 and Test2 succeed for all choices of a_1, a_2 and b.

Figure 7: Test1 and Test2 succeed for all choices of a_1, a_2 and b.

Proof: Every such test has the form depicted in Figure 6, where again g might be degenerate,
in which case Δ(g) = 1. Observe that the value tested by Test1 is close enough to 1 that its
estimate is at least 1 − ε_4/2, so Test1 succeeds. Furthermore, the value tested by Test2 is small
enough that its estimate is at most 1 − 3ε_4/4, so Test2 succeeds. □
Observation 24 If x is on the "v side" of (u, v) in T(S‴ ∪ {x}) and Δ(e) is small (recall
from the beginning of Section 2.3.2 that Δ(e) is at most 1 − 7ε_4/8 if u and v are in different
nodes of T‴), then Test1(u′, v′, a_1, a_2, b) and Test2(u′, v′, a_1, a_2, b) succeed for all choices of
a_1, a_2 and b.
Proof: Note that this case only applies if v is an internal node of T(S‴). Thus, every
such test has one of the forms depicted in Figure 7, where some edges may be degenerate.
Observe that in both cases the value tested by Test1 is close enough to 1 that its estimate is
at least 1 − ε_4/2, and Test1 succeeds. Also, in both cases the value tested by Test2 is small
enough that the estimate is at most 1 − 3ε_4/4, and Test2 succeeds. □
Now note (using Observation 22) that node u′ has at most one neighbour v′ for which all
tests succeed. Furthermore, if there is no such neighbour, Observations 23 and 24 imply that x′ can
be merged with u′. The only case that we have not dealt with is the case in which there is
exactly one v′ for which all tests succeed. In this case, if v′ is a leaf, we insert x′ in the middle
of edge (u′, v′). Otherwise, we will either insert x′ in the middle of edge (u′, v′) or we will
insert it in the subtree rooted at v′. In order to decide which, we perform similar tests from
node v′, and we check whether Test1(v′, u′, a_1, a_2, b) and Test2(v′, u′, a_1, a_2, b) succeed
for all choices of a_1, a_2, and b. If so, we put x′ in the middle of edge (u′, v′). Otherwise, we
recursively place x′ in the subtree rooted at v′.
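The placement procedure above is a walk down T‴ driven by the outcomes of Test1 and Test2. The following sketch abstracts the two tests into a hypothetical oracle (names and data layout are ours):

```python
def place_new_leaf(root, neighbours, all_tests_succeed):
    """Sketch of the descent in Section 2.3.2: starting from an arbitrary
    internal root of the attachment subtree, repeatedly look for the
    unique neighbour v' for which every (Test1, Test2) pair succeeds;
    merge with the current node if there is none, otherwise either stop
    on the edge towards v' or descend into the subtree at v'.

    neighbours: dict node -> list of adjacent nodes.
    all_tests_succeed(u, v): hypothetical oracle returning True iff all
    tests from u towards v succeed (standing in for Test1/Test2 above).
    Returns ('merge', u), or ('edge', (u, v)) meaning x' is spliced into
    the middle of edge (u, v).
    """
    u, prev = root, None
    while True:
        winners = [v for v in neighbours[u] if v != prev
                   and all_tests_succeed(u, v)]
        if not winners:
            return ('merge', u)            # x' is identified with u
        v = winners[0]                     # at most one, by Observation 22
        if len(neighbours.get(v, [])) <= 1:
            return ('edge', (u, v))        # v is a leaf: splice into (u, v)
        if all_tests_succeed(v, u):        # tests back towards u also succeed
            return ('edge', (u, v))
        u, prev = v, u                     # descend into the subtree at v
```

Each iteration moves one edge deeper, so the walk terminates after at most the depth of the subtree.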
2.4 Step 4: For each related set S, construct a MET M′(S) which is close
to M(S)
For each set S of related leaves we will construct a MET M′(S) with leaf-set S such that
each edge parameter of M′(S) is within ±ε_1 of the corresponding parameter of M(S). The
topology of M′(S) will be T′(S). We will assume without loss of generality that T(S) has
the same root as T′(S). The failure probability for S will be at most δ/(3n), so the overall
failure probability will be at most δ/3.
We start by observing that the problem is easy if S has only one or two leaves.
Observation 25 If |S| ≤ 2 then we can construct a MET M′(S) such that each edge parameter
of M′(S) is within ±ε_1 of the corresponding parameter of M(S).
We now consider the case in which S has at least three leaves. Any edge of T(S) which
is contracted in T′(S) can be regarded as having e_0 and e_1 set to 0. The fact that these are
within ±ε_1 of their true values follows from the following lemma.
Lemma 26 If e is an internal edge of M(S) from v to w with Δ(e) > 1 − ε_4 then e_0 ≤ ε_1
and e_1 ≤ ε_1.
Proof: First observe from Observation 9 that ρ_v(1 − ρ_v) and ρ_w(1 − ρ_w) are at least ε_2/2,
and from Observation 10 that w(e) ≥ ε_2/2. Using algebraic manipulation and Equation 1, one
can then bound e_0 and e_1 as required, which proves the lemma. □
Thus, we need only show how to label the remaining parameters within ±ε_1. Note that
we have already shown how to do this in Section 2.3.1. Here the total failure probability is
at most δ/(3n) because there is a failure probability of at most δ/(6n^2) associated with each
of the 2n edges.
2.5 Step 5: Form M′ from the METs M′(S)
Make a new root r for M′ and set its root probability to 1 (so the bit at r is always 1). For
each related set S of leaves, let u denote the root of M′(S), and let p denote the probability
that u is 0 in the distribution of M′(S). Make an edge e from r to u with e_1 = p.
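Step 5 can be sketched directly: a fresh root whose bit is fixed to 1, and one cut edge per related set reproducing that set's root marginal (the dictionary encoding is ours):

```python
def join_related_sets(set_roots):
    """Step 5 sketch: create a fresh root r whose bit is always 1, and hang
    each learned MET M'(S) below it by a 'cut edge'.

    set_roots: list of (root_name, p0) pairs, where p0 is the probability
    that the root bit of M'(S) is 0.  The cut edge (r, u) gets e1 = p0, so
    conditioned on r = 1 (which always holds), u is 0 with probability p0,
    reproducing the marginal root distribution of M'(S) while keeping the
    different related sets mutually independent.
    """
    met = {'root_prob_1': 1.0, 'edges': {}}
    for u, p0 in set_roots:
        met['edges'][('r', u)] = {'e1': p0}   # Pr(u = 0 | r = 1) = p0
    return met
```

Because the bit at r is deterministic, the subtrees below the cut edges are independent, which is exactly what joining the related sets requires.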
2.6 Proof of Theorem 1
Let M″ be a MET which is formed from M as follows.
• Related sets are formed as in Step 2.
• For each related set S, a copy M″(S) of M(S) is made.
• The METs M″(S) are combined as in Step 5.
Theorem 1 follows from the following lemmas.
Lemma 27 Suppose that for every set S of related leaves, every parameter of M 0 (S) is within
1 of the corresponding parameter in M(S). Then var(M
Proof: First, we observe (using a crude estimate) that there are at most 5n 2 parameters
in M 0 . (Each of the (at most n) METs M 0 (S) has one root parameter and at most 4n edge
parameters.) We will now show that changing a single parameter of a MET by at most \Sigmaffl 1
yields at MET whose variation distance from the original is at most 2ffl 1 . This implies that
ffl=2. Suppose that e is an edge from u to v and e 0 is changed. The
probability that the output has string s on the leaves below v and string s 0 on the remaining
leaves is
Thus, the variation distance between M″ and a MET obtained by changing the value of e_0
(within ±ε_1 ) is at most 2ε_1 . Similarly, if ρ_1 is the root parameter of a MET, then the
probability of having output s is linear in ρ_1 , so the variation distance between the original
MET and one in which ρ_1 is changed within ±ε_1 is at most 2ε_1 . □
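The claim that perturbing a single parameter by ε moves the output distribution by at most
2ε in variation distance can be checked on a toy example. The sketch below assumes a
path-shaped two-state MET in which every node is observed, edge parameters (e_0, e_1)
giving the flip probabilities from a 0-parent and a 1-parent respectively, and variation
distance taken as the unnormalized sum of absolute differences; all of these encodings are
ours, for illustration only:

```python
from itertools import product

def leaf_dist(rho1, edges):
    """Output distribution of a path-shaped two-state MET.
    rho1 = Pr(root = 1); edges[k] = (e0, e1), where e0 = Pr(node k+1 = 1 | parent = 0)
    and e1 = Pr(node k+1 = 0 | parent = 1).  Every node is treated as observed,
    which is enough to exercise the perturbation bound."""
    dist = {}
    for states in product((0, 1), repeat=len(edges) + 1):
        p = rho1 if states[0] == 1 else 1 - rho1
        for k, (e0, e1) in enumerate(edges):
            parent, child = states[k], states[k + 1]
            if parent == 0:
                p *= e0 if child == 1 else 1 - e0
            else:
                p *= e1 if child == 0 else 1 - e1
        dist[states] = p
    return dist

def var_dist(d1, d2):
    # unnormalized variation distance, matching the factor 2 in the text
    return sum(abs(d1[s] - d2[s]) for s in d1)

eps = 0.01
base = leaf_dist(0.6, [(0.2, 0.1), (0.3, 0.25)])
pert = leaf_dist(0.6, [(0.2 + eps, 0.1), (0.3, 0.25)])     # move one e0 by eps
assert var_dist(base, pert) <= 2 * eps + 1e-12
root_pert = leaf_dist(0.6 + eps, [(0.2, 0.1), (0.3, 0.25)])  # move rho1 by eps
assert var_dist(base, root_pert) <= 2 * eps + 1e-12
```

Because each outcome probability is linear in every single parameter, a change of ε in one
parameter changes the distribution by at most 2ε in this metric, exactly as the proof argues.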
Before we prove Lemma 28, we provide some background material. Recall the definition of
the weight w(e) of an edge e of a MET; we define the weight w(ℓ) of a leaf ℓ to be the
product of the weights of the edges on the path from the root to ℓ. We will use the following
lemma.
Lemma 29 In any MET with root r, the variation distance between the distribution on the
leaves conditioned on r = 1 and the distribution on the leaves conditioned on r = 0 is at most
Σ_ℓ w(ℓ), where the sum is over all leaves ℓ.
Proof: We proceed by induction on the number of edges in the MET. In the base case
there are no edges, so r is a leaf, and the result holds. For the inductive step, let e be an edge
from r to node x. For any string s_1 on the leaves below x and any string s_2 on the other
leaves, algebraic manipulation shows that the difference Pr(s_1 s_2 | r = 1) − Pr(s_1 s_2 | r = 0)
can be written as in Equation 23. It follows that the variation distance is at most the sum over
all s_1 s_2 of the absolute value of the quantity in Equation 23, which is at most Σ_ℓ w(ℓ).
The result follows by induction. □
Lemma 30 Suppose that m is a MET with n leaves and that e is an edge from node u to
node v. Let m′ be the MET derived from m by replacing the parameter e_0 of e. Then
var(m, m′) ≤ n^2 z, where z is the maximum over all pairs (x, y) of leaves
which are connected via e in m of |cov(x, y)|.
Proof: By Observation 3, we can assume without loss of generality that u is the root
of m. For any string s_1 on the leaves below v and any string s_2 on the remaining leaves, we
find (via a little algebraic manipulation) the difference between the probability that m
outputs s_1 s_2 and the probability that m′ does. Thus, the variation distance between m
and m′ is at most the product of the variation distance between the distribution on the
leaves below v conditioned on v = 1 and the distribution on the leaves below v conditioned
on v = 0, and the variation distance between the distribution on the remaining leaves
conditioned on v = 1 and the distribution on the remaining leaves conditioned on v = 0.
By Lemma 29, this is at most
(Σ_{ℓ below v} w(ℓ)) · (Σ_{other ℓ} w(ℓ)),
which by Equation 3 is at most
Σ_{(x,y) connected via e} |cov(x, y)|,
which is at most 4(n/2)^2 z = n^2 z. □
Lemma 31 If, for two different related sets, S and S′, an edge e from u to v is in M(S)
and in M(S′), then e_0 and e_1 are within ±ε_1 of 0.
Proof: By the definition of the leaf connectivity graph in Step 2, there are leaves a, a′ ∈ S
and b, b′ ∈ S′ such that the path from a′ to a and the path from b′ to b both go through
e, the estimates |ĉov(a, a′)| and |ĉov(b, b′)| are large,
and the remaining covariance estimates amongst leaves a, a′, b and b′ are less than (3/4)ε_2 .
Without loss of generality (using Observation 3), assume that u is the root of the MET. Let
P(a′) denote the path from u to a′ and use similar notation for the other leaves. By Equation 3
and the accuracy of the estimates in Step 1, and then by Equation 1, the required bound on
the parameters of e follows. The result now follows from the proof of Lemma 26. (Clearly,
the bound in the statement of Lemma 31 is weaker than we can prove, but it is all that we
will need.) □
Proof of Lemma 28: Let M̄ be the MET which is the same as M except that every edge e
which is contained in M(S) and M(S′) for two different related sets S and S′ is contracted.
Similarly, let M̄″ be the MET which is the same as M″ except that every such edge has
all of its copies contracted in M″. Clearly, var(M, M̄) and var(M″, M̄″) are bounded via
Lemma 31 in terms of the number of edges in M that are contracted. We now wish to
bound var(M̄, M̄″). By construction, M̄(S) and M̄(S′) do not intersect in an edge (for any
related sets S and S′). Now suppose that M̄(S) and M̄(S′) both contain node u. We can
modify M̄ without changing the distribution in a way that avoids this overlap. To do this,
we just replace node u with two copies of u, and we connect the two copies by an edge e
with e_0 = e_1 = 0. Note that this change will not affect the operation of the algorithm.
Thus, without loss of generality, we can assume that for any related sets S and S′, M̄(S)
and M̄(S′) do not intersect. Thus, M̄ and M̄″ are identical, except on edges which go
between the sub-METs M̄(S). Now, any edge e going between two sub-METs has the
property that for any pair of leaves x and y connected via e, |cov(x, y)| ≤ ε_2 . (This follows
from the accuracy of our covariance estimates in Step 1.) Thus, by Lemma 30, changing
such an edge according to Step 5 adds at most n^2 ε_2 to the variation distance. Thus,
var(M̄, M̄″) ≤ k n^2 ε_2 , where k is the number of edges that are modified according to
Step 5. We conclude that var(M, M″) is bounded by the sum of these quantities. □
Acknowledgements
We thank Mike Paterson for useful ideas and discussions.
On-Line Load Balancing in a Hierarchical Server Topology

Abstract. In a hierarchical server environment jobs are to be assigned in an on-line fashion
to a collection of servers which form a hierarchy of capability: each job requests a specific
server meeting its needs, but the system is free to assign it either to that server or to any
other server higher in the hierarchy. Each job carries a certain load, which it imparts to the
server it is assigned to. The goal is to find a competitive assignment in which the maximum
total load on a server is minimized. We consider the linear hierarchy in which the servers are
totally ordered in terms of their capabilities. We investigate several variants of the problem.
In the unweighted (as opposed to weighted) problem all jobs have unit weight. In the
fractional (as opposed to integral) model a job may be assigned to several servers, each
receiving some fraction of its weight. Finally, temporary (as opposed to permanent) jobs may
depart after being active for some finite duration of time. We show an optimal e-competitive
algorithm for the unweighted integral permanent model. The same algorithm is
(e+1)-competitive in the weighted case. Its fractional version is e-competitive even if
temporary jobs are allowed. For the integral model with temporary jobs we show an
algorithm which is 4-competitive in the unweighted case and 5-competitive in the weighted
case. We show a lower bound of e for the unweighted case (both integral and fractional).
This bound is valid even with respect to randomized algorithms. We also show a lower
bound of 3 for the unweighted integral model when temporary jobs are allowed. We
generalize the problem and consider hierarchies in which the servers form a tree. In the tree
hierarchy, any job assignable to a node is also assignable to the node's ancestors.
We show a deterministic algorithm which is 4-competitive in the unweighted case and
5-competitive in the weighted case, where only permanent jobs are allowed. Randomizing
this algorithm improves its competitiveness to e and e+1, respectively. We also show an
$\Omega(\sqrt{n})$ lower bound when temporary jobs are allowed.

1. Introduction. One of the most basic on-line load-balancing problems is the
following. Jobs arrive one at a time and each must be scheduled on one of n servers.
Each job has a certain load associated with it and a subset of the servers on which it
may be scheduled. The goal is to assign jobs to servers so as to minimize the cost of
the assignment, defined as the maximum load on a server.
The nature of the load-balancing problem considered here is on-line: decisions
must be made without any knowledge of future jobs, and previous decisions may not
be revoked. We compare the performance of an on-line algorithm to the performance
of an optimal off-line scheduler-one that knows the entire sequence of jobs in advance.
The efficacy parameter of an on-line scheduler is its competitive ratio, roughly defined
# Received by the editors October 26, 1998; accepted for publication (in revised form) March 29,
2001; published electronically September 26, 2001. An extended abstract of this paper appeared in
Proceedings of the 7th European Symposium on Algorithms, Lecture Notes in Comput. Sci. 1643,
http://www.siam.org/journals/sicomp/31-2/34613.html
research.att.com). This author was on leave from the Electrical Engineering Department, Tel Aviv
University, Tel Aviv 69978, Israel.
# Computer Science Department, Technion, Haifa 32000, Israel (arief@cs.technion.ac.il,
naor@cs.technion.ac.il).
528 AMOTZ BAR-NOY, ARI FREUND, AND JOSEPH (SEFFI) NAOR
as the maximum ratio, taken over all possible sequences of jobs, between the cost
incurred by the algorithm and the cost of an optimal assignment.
1.1. The hierarchical servers problem. In the hierarchical servers problem
the servers form a hierarchy of capability; a job which may run on a given server may
also run on any server higher in the hierarchy. We consider the linear hierarchy in
which the servers are numbered 1 through n, and we imagine them to be physically
ordered along a straight line running from left to right, with server 1 leftmost and
server n rightmost. Leftward servers are more capable than rightward ones. We say
that servers 1, . . . , s are to the left of s, and that servers s, . . . , n are to the right
of s.
The input is a sequence of jobs, each carrying a positive weight and requesting
one of the servers. A job requesting server s can be assigned to any of the servers
to the left of s. These servers are the job's eligible servers. The assignment of a job
with weight w to server s increases the load on s by w (initially, all loads are 0). We
use the terms "job" and "request" interchangeably. The cost of a given assignment is
COST = max_s l_s , where l_s is the load on server s. We use OPT for the cost of an
optimal off-line assignment. An algorithm is c-competitive if there exists some b > 0,
independent of the input, such that COST ≤ c · OPT + b for all input sequences. For
scalable problems (such as ours) the additive factor b may be ignored in lower bound
constructions.
We consider variants, or models, of the problem according to three orthogonal
dichotomies. In the integral model each job must be assigned in its entirety to a
single server; in the fractional model a job's weight may be split among several eligible
servers. In the weighted model jobs may have arbitrary positive weights; in the
unweighted model all jobs have unit weight. Our results for the fractional model hold
for both the unweighted and weighted cases, so we do not distinguish between the
unweighted fractional model and the weighted fractional model. Finally, permanent
jobs continue to load the servers to which they are assigned indefinitely; temporary
jobs are active for a finite duration of time, after which they depart. The duration
for which a temporary job is active is not known upon its arrival. We may allow
temporary jobs or we may restrict the input to permanent jobs only. When temporary
jobs are allowed, the cost of an assignment is defined as max_t max_s l_s (t),
where l_s (t) is the load on server s at time t. The version of the problem which we
view as basic is the weighted integral model with permanent jobs only.
A natural generalization of the problem is for the servers to form a (rooted) tree
hierarchy ; a job requesting a certain server may be assigned to any of its ancestors in
the tree. The various models pertain to this problem as well.
The hierarchical servers problem is an important practical paradigm. It captures
many interesting applications from diverse areas such as assigning classes of service
to calls in communication networks, routing queries to hierarchical databases, signing
documents by high-ranking executives, and upgrading classes of cars by car rental
companies.
From a theoretical point of view, the hierarchical servers problem is also interesting
by virtue of its relation to the problem of related machines [3]. In this problem
all servers are eligible for every job, but they may have different speeds; assigning a
job of weight w to a server with speed v increases its load by w/v. Without loss of
generality, assume that v_1 ≥ v_2 ≥ · · · ≥ v_n , where v_i is the speed of server i. Consider a
set of jobs to be assigned at a cost bounded by C, and let us focus on a particular
job whose weight is w. To achieve COST # C we must refrain from assigning this
LOAD BALANCING ON HIERARCHICAL SERVERS 529
job to any server i for which w/v i > C. In other words, there exists a rightmost
server to which we may assign the job. Thus, restricting the cost induces eligibility
constraints similar to those in the hierarchical servers problem. Some of the ideas
developed in the context of the hierarchical servers problem are applicable to the
problem of related machines, leading to better bounds for that problem [10].
1.2. Background. Graham [16] explored the assignment problem where each
job may be assigned to any of the servers. He showed that the greedy algorithm has
competitive ratio 2 − 1/n. Later work [8, 9, 17, 2] investigated the exact competitive
ratio achievable for this problem for general n and for various special cases. The
best results to date for general n are a lower bound of 1.852 and an upper bound of
1.923 [2].
Over the years many other load-balancing problems were studied; see [4, 20]
for surveys. The assignment problem in which arbitrary sets of eligible servers are
allowed was considered by Azar, Naor, and Rom [7]. They showed upper and lower
bounds of Θ(log n) for several variants of this problem. Permanent jobs were assumed.
Subsequent papers generalized the problem to allow temporary jobs; in [5] a lower
bound of Ω(√n) and an upper bound of O(n^{2/3}) were shown. The upper bound was
later tightened to O(√n) [6].
The related machines problem was investigated by Aspnes et al. [3]. They showed
an 8-competitive algorithm based on the doubling technique. This result was improved
by Berman, Charikar, and Karpinski [12], who showed a more refined doubling algorithm
that is 3 + 2√2 ≈ 5.828-competitive. By randomizing this algorithm, they were
able to improve the bound to 4.311. They also showed lower bounds of 2.438 (de-
terministic) and 1.837 (randomized). The randomized bound was recently improved
to 2 [14]. Azar et al. [6] generalized the problem to allow temporary jobs. They
showed a deterministic upper bound of 20 (which implies a randomized upper bound
of 5e ≈ 13.59) and a lower bound of 3. The upper bounds were later improved to
The resource procurement problem was defined and studied by Kleywegt et al. [18]
independently of our work. In this problem jobs arrive over (discrete) time, each
specifying a deadline by which it must complete, and all jobs must be executed on a
single server. We can view this as a problem of assigning permanent jobs to parallel
servers if we think of the time slots as servers. In fact, the problem is equivalent
to the following variant of the hierarchical servers problem. The model considered
is the fractional model with permanent jobs only. The input consists of precisely n
jobs. The jth job to arrive specifies a server s_j , and the servers eligible for
the job are s_j , s_j + 1, . . . , n. In addition, the on-line nature of the problem
is less demanding. The scheduler need not commit to the full assignment of a job
immediately on its arrival. Rather, when the jth job arrives, it must decide what
fraction of each of the first j jobs to assign to server n − j + 1. Kleywegt et al.
[18] developed a lower bound technique similar to ours and were able to establish a
lower bound of 2.51 by analytic and numerical means. They also described a 3.45-
competitive algorithm.
1.3. Our results. A significant portion of our work is devoted to developing a
continuous framework in which we recast the problem. The continuous framework
is not a mere relaxation of the problem's discrete features. Rather, it is a fully
fledged model in which a new variant of the problem is defined. The advantage of
the continuous model lies in the ability to employ the tools of infinitesimal calculus,
making analysis much easier.
In section 2 we use the continuous model to design an optimal e-competitive
algorithm. Surprisingly, this algorithm operates counterintuitively; the weight distribution
of an assigned job is biased to the left, i.e., more weight ends up on the
leftward servers. We show a general procedure for transforming an algorithm for the
continuous model into an algorithm for the fractional model. We also show a general
procedure for transforming an algorithm for the fractional model into an algorithm for
the integral model. Thus we get an e-competitive algorithm for the fractional model
and an algorithm which is, respectively, e- and (e + 1)-competitive for the unweighted
integral and weighted integral models. The former algorithm admits temporary jobs;
the latter does not. Our upper bound of e also applies to the resource procurement
problem of Kleywegt et al. [18] by virtue of Theorem 2 in their paper. Thus we
improve their best upper bound of 3.45.
In section 3 we develop a procedure for deriving lower bounds in the context of the
continuous model. The construction of the continuous model is rather unconventional
in its not being a generalization of the discrete model. In fact, on the surface of things,
the two models seem incomparable, albeit analogous. At a deeper level, though, it
turns out that the continuous model is actually a special case of the discrete model,
making lower bounds obtained in the continuous model valid in the discrete setting
as well. This makes the upper bound all the more intriguing, as it is developed in the
continuous framework and transported back to the discrete model. The lower bounds
obtained with our procedure are also valid in the discrete models (fractional as well
as integral), even in the unweighted case with permanent jobs only and even with
respect to randomized algorithms. Using our procedure we find that e is a tight lower
bound. Since our lower bound technique is the same as the one used (independently)
by Kleywegt et al. [18] in the context of the resource procurement problem, our lower
bound of e applies to that problem as well, and it improves their best lower bound of
2.51. Thus our work solves this problem completely by demonstrating a tight bound
of e.
In section 4 we consider temporary jobs in the integral model. We show a doubling
algorithm that is 4-competitive in the unweighted case and 5-competitive in the
weighted case. We also show a deterministic lower bound of 3.
Section 5 extends the problem to the tree hierarchy. We show an algorithm which
is, respectively, 4-, 4-, and 5-competitive for the fractional, unweighted integral, and
weighted integral models. Randomizing this algorithm improves its competitiveness
to e, e, and e + 1, respectively. We show lower bounds of Ω(√n) for all models, both
deterministic and randomized, when temporary jobs are allowed.
The effect of restricting the sets of eligible servers in several other ways is discussed
in section 6. In the three cases we consider, we show a lower bound of Ω(log n). For
example, this lower bound holds in the case where the servers form a circuit (or a line)
and eligible servers must be contiguous. Note that since these problems are all special
cases of the problem considered in [7], an upper bound of O(log n) is immediate.
2. Upper bounds. In this section we show an algorithm whose respective versions
for the fractional, unweighted integral, and weighted integral models are e-, e-, and
(e + 1)-competitive. The fractional version admits temporary jobs; the integral
versions do not. We build up to the algorithm by introducing and studying the
semicontinuous model and the class of memoryless algorithms. We begin with the
optimum lemma, which characterizes OPT in terms of the input sequence.
2.1. The optimum lemma. For the fractional and unweighted integral models
the lemma provides an exact formula for OPT . For the weighted integral case it gives
a 2-approximation. 1 The optimum lemma is a recurrent theme in our exposition.
For a given input sequence and a given server s, denote by W s the total weight
of jobs requesting servers to the left of s, and let H = max_s {W_s /s}.
Clearly, H is a lower bound on OPT , and in the unweighted integral model we can
tighten this to ⌈H⌉. In addition, the maximum weight of a job in the input sequence,
denoted w max , is also a lower bound on OPT in the integral model.
Turning to upper bounds on OPT , let us say that a given server is saturated
at a given moment if its load is at least H. For the integral model, consider an
algorithm that assigns each job to its rightmost unsaturated eligible server. This
algorithm treats the jobs in an on-line fashion but requires advance knowledge of H
so it is off-line. Clearly, if an unsaturated eligible server can always be found, then
COST < H +w max . We claim that this is indeed the case. To see this, suppose that
when some job of weight w arrives, all of its eligible servers are saturated. Let s be
maximal such that the servers to the left of s are all saturated. By the maximality
of s, the jobs assigned to the left of s must have all requested servers to the left of
s. Since their total weight is at least sH, we have W_s ≥ sH + w > sH, a
contradiction.
For the fractional model we modify the above algorithm as follows. When a job
of weight w arrives, we assign it as follows: let s be the rightmost unsaturated eligible
server and let δ = H − l_s , where l_s is the current load on s. If δ ≥ w, we assign the
job in its entirety to s. Otherwise, we split the job and assign δ units of weight to
s and treat the remainder recursively as a new job to be assigned. This algorithm
achieves COST # H.
The optimum lemma summarizes these results.
Lemma 1 (optimum lemma).
- In the fractional model, OPT = H.
- In the unweighted integral model, OPT = ⌈H⌉.
- In the weighted integral model, max{H, w_max } ≤ OPT < H + w_max .
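The saturation-based off-line assignment used to prove the upper bounds above can be
sketched in Python (the (server, weight) job encoding and the function name are our
illustration, not the paper's):

```python
def offline_assign(jobs, n):
    """Assign each (server, weight) job to its rightmost unsaturated
    eligible server; this requires advance knowledge of H, so it is off-line."""
    # prefix sums: W[s] = total weight requesting servers 1..s; H = max_s W[s]/s
    W = [0.0] * (n + 1)
    for s, w in jobs:
        W[s] += w
    for s in range(1, n + 1):
        W[s] += W[s - 1]
    H = max(W[s] / s for s in range(1, n + 1))

    load = [0.0] * (n + 1)
    for s, w in jobs:
        # rightmost eligible server whose load is still below H (i.e., unsaturated);
        # the argument in the text shows this set is never empty
        i = max(i for i in range(1, s + 1) if load[i] < H)
        load[i] += w
    return load, H

# unit jobs, one request per server: here H = 1 and COST < H + w_max = 2
jobs = [(s, 1.0) for s in range(1, 9)]
load, H = offline_assign(jobs, 8)
assert max(load) < H + 1.0
```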
2.2. Memoryless algorithms. A memoryless algorithm is an algorithm that
assigns each job independently of previous jobs. Of course, memoryless algorithms
are only of interest in the fractional model, which is the model we are going to consider
here. We focus on a restricted type of memoryless algorithms, namely, uniform
algorithms. Uniform memoryless algorithms are instances of the generic algorithm
shown below; each instance is characterized by a function u mapping each server i to a
fraction u(i) ∈ [0, 1].
Algorithm GenericUniform
When a job of weight w requesting server s arrives, do:
1. r ← w; i ← s.
2. While r > 0:
3. Assign a = min{w · u(i), r} units of weight to server i.
4. r ← r − a.
5. i ← i − 1.
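A direct transcription of the generic algorithm (a Python sketch; representing u as a
1-indexed list is our choice):

```python
def generic_uniform(u, jobs, n):
    """u[i] is the fraction of a job's weight placed on server i (1-indexed);
    u[1] = 1 guarantees the loop terminates with every job fully assigned."""
    load = [0.0] * (n + 1)
    for s, w in jobs:
        r, i = w, s
        while r > 0:
            a = min(w * u[i], r)   # never assign more than the remainder
            load[i] += a
            r -= a
            i -= 1
    return load

# example: u(i) = 1/i (entry u[0] is unused); one unit job requesting server 4
u = [0.0] + [1.0 / i for i in range(1, 5)]
load = generic_uniform(u, [(4, 1.0)], 4)
assert abs(sum(load) - 1.0) < 1e-9   # the job is fully assigned
```

In the example the job leaves 1/4 of its weight on server 4, 1/3 on server 3, and the
remainder on server 2, so the walk never needs to reach server 1.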
1 It is unreasonable to expect an easily computable formula for OPT in the weighted integral
model, for that would imply a polynomial-time solution for the NP-hard problem PARTITION.
532 AMOTZ BAR-NOY, ARI FREUND, AND JOSEPH (SEFFI) NAOR
The algorithm starts with the server requested by the job and proceeds leftward
as long as the job is not fully assigned. The fraction of the job's weight assigned
to server i is u(i), unless w · u(i) is more than the remainder of the job when i is
reached. The condition u(1) = 1 ensures that the job will always be fully assigned by
the algorithm.
Note that the assignment generated by a uniform memoryless algorithm is independent
of both the number of servers and the order of jobs in the input. Moreover,
any collection of jobs with total weight w, requesting some server s, may be replaced
by a single request of weight w for s. We therefore assume that exactly one job requests
each server (we allow jobs of zero weight) and that the number of servers is
infinite. We denote the weight of the job requesting server s by w s .
Consider a job of weight w requesting a server to the right of a given server s. If
the requested server is close to s the job will leave wu(s) units of weight on s regardless
of the exact server requested. At some point, however, the job's contribution to the
load on s will begin to diminish as the distance of the request from s grows. Finally, if
the request is made far enough away, it will have no effect on s. We denote by p_s the
point beyond which the effect on s begins to diminish and by p′_s the point at which
it dies out completely.
Note that p_s and p′_s may be undefined, in which case we take them to be infinity. We
are interested only in functions u satisfying p_s < ∞ for all s.
The importance of p_s lies in the fact that the load on s due to jobs requesting
servers in the range s, . . . , p_s is simply u(s) times the total weight of these jobs. The
following lemma and corollary are not difficult to prove.
Lemma 2 (worst case lemma). Let A be a uniform memoryless algorithm. The
following problem,
Given K > 0 and some server s, find an input sequence I that maximizes
the load on s in A's assignment subject to OPT (I) ≤ K,
is solved by I consisting of a single job of weight p_s K requesting server p_s ,
and l_s -the resultant load on s-satisfies l_s = p_s K u(s).
Corollary 3. Let A be a uniform memoryless algorithm, and let C_A be the
competitive ratio of A. Then C_A = sup_s {p_s u(s)}.
2.3. The semicontinuous model. In both the fractional and the integral versions
of the problem, the servers and the jobs are discrete objects. We therefore refer
to these models as the discrete models. In this section we introduce the semicontinuous
model, in which the servers are made continuous. In section 3 we define the
continuous model by making the jobs continuous as well.
The semicontinuous model is best understood through a physical metaphor. Consider
the bottom of a vessel filled with some nonuniform fluid applying varying degrees
of pressure at different points. The force acting at any single point is zero, but any
region of nonzero area suffers a net force equal to the integral of the pressure over
the region. Similarly, in the semicontinuous model we do not talk about individual
servers; rather, we have a continuum of servers, analogous to the bottom of the vessel.
An arriving job is analogous to a quantity of fluid which must be added to the vessel.
The notions of load and weight become divorced; load is analogous to pressure and
weight is analogous to force.
Formally, the server interval is [0, ∞), to which jobs must be assigned. Job j has
weight w_j , and it requests the point s_j > 0 in the server interval. The assignment of
job j is specified by an integrable function g_j : [0, ∞) → [0, ∞) such that
1. ∫_0^∞ g_j (z) dz = w_j ;
2. g_j (x) = 0 for all x > s_j ;
3. g_j is continuous from the right at every point.
The full assignment is g = Σ_j g_j . For a given full assignment g, the load l_I on an
interval I is defined as l_I = (1/|I|) ∫_I g(z) dz-the mean
weight density over I. The load at a point x is defined as l_x = lim_{ε→0^+} l_[x,x+ε] =
g(x). (We introduce the notation l_x for consistency with previous notation.) The
cost of the assignment is COST = sup_x l_x .
Lemma 4 (optimum lemma: semicontinuous model). Let W (x) be the total weight
of requests made to the left of x (including x itself ) and let H = sup_{x>0} {W (x)/x}.
Then OPT = H.
Proof. The lower bound is trivial. For the upper bound, let x_1 ≤ x_2 ≤ · · · be the
points requested by the jobs and rearrange the jobs such that the jth job requests x_j .
The idea is to pack the jobs (in order) in a rectangle of height H extending from the left
end of the server interval. Let W_j = w_1 + · · · + w_j and consider the following
assignment: the jth job is spread at density H over the interval [W_{j−1}/H, W_j /H).
This assignment clearly attains COST = H. It follows from the definition of H that
W_j /H ≤ x_j for all j, which is sufficient for the assignment's validity. □
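The rectangle-packing assignment from the proof can be checked on a small instance (a
Python sketch; the (x_j, w_j) job encoding and the function name are ours):

```python
def pack(jobs):
    """Rectangle packing from the proof of Lemma 4: jobs (x_j, w_j), sorted by
    requested point, are laid left to right at density H; job j occupies the
    interval [W_{j-1}/H, W_j/H).  Validity needs W_j/H <= x_j for every j."""
    jobs = sorted(jobs)
    H = max(sum(w for x, w in jobs[:j + 1]) / jobs[j][0] for j in range(len(jobs)))
    spans, W = [], 0.0
    for x, w in jobs:
        spans.append((W / H, (W + w) / H))
        W += w
        assert W / H <= x + 1e-12   # the packed job never crosses its request point
    return H, spans

H, spans = pack([(1.0, 0.5), (2.0, 2.0), (3.5, 1.0)])
assert abs(H - 1.25) < 1e-12                             # attained at x = 2
assert abs(max(end for start, end in spans) * H - 3.5) < 1e-9  # all weight packed
```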
We adapt the definition of uniform memoryless algorithms to the semicontinuous
model. In this model a uniform algorithm is characterized by a function u : (0, ∞) →
(0, ∞) as follows. For a given point x > 0, let q(x) be the point satisfying the equation
∫_{q(x)}^x u(z) dz = 1. Then the assignment of job j is g_j (z) = w_j u(z) for
q(s_j ) ≤ z ≤ s_j , and g_j (z) = 0 elsewhere.
For q(x) and g_j to be defined properly we must require that ∫_0^x u(z) dz ≥ 1 for every
x > 0. (Otherwise, the algorithm may fail to fully assign jobs requesting points close
to 0.) Note that the load at 0 is always zero.
For a given point x > 0, we define p(x) as the point such that ∫_x^{p(x)} u(z) dz = 1.
If p(x) does not exist, then the algorithm's competitive ratio is unbounded, as demonstrated
by the request sequence consisting of m → ∞ jobs, each of unit weight, where
the jth job requests the point s_j = j. For this sequence, l_x grows linearly in m while
OPT remains bounded. We shall therefore allow only algorithms satisfying
∫_M^∞ u(z) dz ≥ 1 for all M ≥ 0.
The semicontinuous model has the nice property that p_s and p′_s , which were
disparate in the discrete model, fuse into a single entity, p(s). The worst case lemma
and its corollary become the following lemma.
Lemma 5 (worst case lemma: semicontinuous model). Let A be a uniform memoryless
algorithm defined by u(x). The following problem,
Given K > 0 and some point s > 0 in the server interval, find an
input sequence that maximizes the load at s in A's assignment, subject
to OPT ≤ K,
is solved by a single job of weight p(s)K requesting the point p(s), and the resultant
load at s is p(s)Ku(s).
Corollary 6. The competitive ratio of A is sup x {p(x)u(x)}.
2.4. An e-competitive algorithm for the semicontinuous model. Consider
Algorithm Harmonic, the uniform memoryless algorithm defined by u(x) = 1/x. Let
us calculate p(x):
∫_x^{p(x)} u(z) dz = ∫_x^{p(x)} dz/z = ln(p(x)/x) = 1,
so p(x) = ex. Thus, the competitive ratio of Algorithm Harmonic is
sup_x {p(x)u(x)} = sup_x {ex · (1/x)} = e.
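The calculation can be sanity-checked numerically (a Python sketch; the midpoint-rule
integrator is our own helper):

```python
import math

def integral(f, a, b, k=100000):
    """Midpoint-rule numerical integral of f over [a, b]."""
    h = (b - a) / k
    return sum(f(a + (j + 0.5) * h) for j in range(k)) * h

u = lambda z: 1.0 / z                 # Algorithm Harmonic
for x in (0.5, 1.0, 7.0):
    p = math.e * x                    # claimed p(x) = e*x
    assert abs(integral(u, x, p) - 1.0) < 1e-6   # the defining equation of p(x)
    assert abs(p * u(x) - math.e) < 1e-12        # p(x) * u(x) = e at every x
```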
2.5. Application to the discrete models. Having devised a competitive algorithm
for the semicontinuous model, we wish to import it to the discrete model.
We start by showing how to transform any algorithm for the semicontinuous model
into an algorithm for the (discrete) fractional model. Following that, we show how
any algorithm for the fractional model may be transformed into an algorithm for the
integral models.
Semicontinuous to fractional. Let I be an input sequence for the fractional
model. If we treat each server as a point on (0, ∞), that is, we view a request for
server s as a request for the point s, then we can view I as a request sequence for the
semicontinuous model as well. By the respective optimum lemmas (Lemmas 1 and 4),
the value of OPT is the same for both models.
Let A be a c-competitive online algorithm for the semicontinuous model. Define
algorithm B for the fractional model as follows. When job j arrives, B assigns
∫_{i−1}^{i} g_j (z) dz units of weight to server i, for all i, where g_j is the assignment function
generated by A for the job. Clearly, the cost incurred by B is bounded by the cost
incurred by A. Thus, B is c-competitive.
An important observation is that if A is memoryless, then so is B. Thus, even if
temporary jobs are allowed, the assignment generated by B will be c-competitive at
all times, compared to an optimal (o#-line) assignment of the active jobs.
We give the algorithm thus derived from Algorithm Harmonic the name
FractionalHarmonic.
Proposition 7. Algorithm FractionalHarmonic is e-competitive even when temporary
jobs are allowed.
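Algorithm FractionalHarmonic can be sketched as follows (a Python sketch; discretizing
the density w/z over [s/e, s] into unit server intervals, with our own function name and
(server, weight) job encoding):

```python
import math

def fractional_harmonic_load(jobs, n):
    """Fractional algorithm derived from Algorithm Harmonic: a job (s, w)
    is spread with density w/z over [s/e, s]; server i receives the part
    of that mass falling in the interval (i-1, i]."""
    load = [0.0] * (n + 1)
    for s, w in jobs:
        lo = s / math.e
        for i in range(1, s + 1):
            a, b = max(i - 1, lo), min(i, s)
            if b > a:
                load[i] += w * (math.log(b) - math.log(a))
    return load

# one unit job per server: here H = max_s W_s/s = 1, so OPT = 1
jobs = [(s, 1.0) for s in range(1, 9)]
load = fractional_harmonic_load(jobs, 8)
assert abs(sum(load) - 8.0) < 1e-9   # all weight is assigned
assert max(load) <= math.e + 1e-9    # cost is at most e * OPT
```

Note the leftward bias: the density w/z grows toward the left end of each job's interval,
as the text observes.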
Fractional to integral. Let A be an algorithm for the fractional model. Define
algorithm B for the integral model (both weighted and unweighted) as follows. As
jobs arrive, B keeps track of the assignments A would make. A server is said to be
overloaded if its load in B's assignment exceeds its load in A's assignment. When a
job arrives, B assigns it to the rightmost eligible server which is not overloaded (after
A is allowed to assign the job).
Proposition 8. Whenever a job arrives at least one of its eligible servers is not
overloaded.
Proof. Denote by l^A_i (j) and l^B_i (j) the load on server i after job j is assigned by
A and B, respectively. When job j is considered for assignment by B, server i is
overloaded if l^B_i (j − 1) > l^A_i (j). Define A_i (j) = Σ_{s=1}^i l^A_s (j) and
B_i (j) = Σ_{s=1}^i l^B_s (j).
We claim that for all j,
1. when job j arrives, server 1 (which is eligible) is not overloaded;
2. A_i (j) ≥ B_i (j) for all i.
The proof is by induction on j. The claim is clearly true for j = 1. Consider
some job j > 1 whose weight is w. We have l^A_1 (j) ≥ l^A_1 (j − 1) = A_1 (j − 1) ≥
B_1 (j − 1) = l^B_1 (j − 1), where the second inequality is justified by the induction hypothesis.
Thus, server 1 is not overloaded. It remains to show that for all i, A_i (j) ≥ B_i (j).
Let a be the rightmost server to which algorithm A assigns part of job j, i.e., a =
max{s | A s (j) > A s (j - 1)}. Let b be the server to which B assigns the job. By the
induction hypothesis, A i (j -
for all i, and B i
a and for i < b.
Assuming b < a, we still have to prove the claim for i # {b, . , a - 1}. Algorithm
assigns job j to server b and not to one of the servers b + 1, . , a, all of which are
eligible and to the right of b. It must therefore be the case that l A
(j - 1) for
a. Thus, for i # {b, . , a - 1},
a
l A
a
l A
> A a (j -
a
l B
a
l B
The second inequality is justified by the induction hypothesis.
Let w_max(j) be the maximum weight of a job among the first j jobs. Algorithm
B maintains l^B_i(j) ≤ l^A_i(j) + w_max(j) for all i and j, since B only assigns a job to a
server that is not overloaded. In the unweighted case we have w_max(j) = 1,
and in the weighted case w_max ≤ OPT. By the optimum lemma (Lemma 1)
the value of OPT in the integral model is at least as high as its value in the fractional
model. Thus if A is c-competitive, then B is c-competitive in the unweighted case
and (c + 1)-competitive in the weighted case.
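The rounding scheme lends itself to direct simulation. The sketch below is our own code; the fractional algorithm A is replaced by an even-spreading heuristic (an assumption for illustration, not the paper's FractionalHarmonic). It tracks both assignments and checks the invariant l^B_i ≤ l^A_i + w_max after every arrival.

```python
import random

def simulate(requests):
    """Each request is (server s, weight w); eligible servers are 1..s.
    A spreads each job evenly over its eligible servers (a stand-in for any
    fractional algorithm); B assigns integrally to the rightmost eligible
    server that is not overloaded, i.e., whose B-load does not exceed its
    A-load after A handles the job."""
    n = max(s for s, _ in requests)
    load_a = [0.0] * (n + 1)   # index 1..n
    load_b = [0.0] * (n + 1)
    wmax = 0.0
    for s, w in requests:
        wmax = max(wmax, w)
        for i in range(1, s + 1):          # A's move: spread evenly over 1..s
            load_a[i] += w / s
        candidates = [i for i in range(1, s + 1) if load_b[i] <= load_a[i]]
        assert candidates, "Proposition 8: some eligible server is not overloaded"
        load_b[max(candidates)] += w       # rightmost non-overloaded server
        for i in range(1, n + 1):          # the invariant from the text
            assert load_b[i] <= load_a[i] + wmax + 1e-9
    return load_a, load_b

random.seed(0)
reqs = [(random.randint(1, 20), random.uniform(0.1, 3.0)) for _ in range(200)]
la, lb = simulate(reqs)
assert abs(sum(la) - sum(lb)) < 1e-6      # both assign all the weight
```

Proposition 8 guarantees the `candidates` list is never empty, for any fractional algorithm plugged into the role of A.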
We give the algorithm thus derived from Algorithm FractionalHarmonic the name
IntegralHarmonic.
Proposition 9. Algorithm IntegralHarmonic is e-competitive in the unweighted
case and (e + 1)-competitive in the weighted case.
3. Lower bounds. In this section we devise a technique for proving lower
bounds in the limit n → ∞. The bounds obtained are valid in both the fractional
and integral models, even in the unweighted case. In fact, they remain valid even
in the presence of randomization with respect to oblivious adversaries. Using this
technique, we obtain a tight constant lower bound of e. The success of our approach
is facilitated by transporting the problem from the discrete setting into a continuous
model, in which both jobs and servers are continuous.
3.1. A simple lower bound. We consider the fractional model, restricting our
attention to right-to-left input sequences, defined to be sequences in which for all
i < j, all requests for server j are made before any request for server i. We further
restrict our attention to sequences in which each server is requested exactly once. (We
allow jobs of zero weight.)
536 AMOTZ BAR-NOY, ARI FREUND, AND JOSEPH (SEFFI) NAOR
Fig. 1. (a) Histogram of job weights; (b) the resultant kh_i's.
Let A be a k-competitive algorithm. For a given right-to-left input sequence,
denote by w_s the weight of the job requesting server s and by l_s the load on server s
at a given moment. Suppose the first n - i + 1 jobs (culminating with the request for
server i) have been assigned by A. Recall the definition of H in the optimum lemma
(Lemma 1); denote by h_i the value of H with respect to these jobs. Since A is k-
competitive, the loads must obey l_s ≤ kh_i for all s. For s ≥ i the load l_s will never
change again, since none of the subsequent jobs is eligible on server s.
Now consider the specific input sequence defined by w_s = w for all s, for some
fixed w > 0. For this sequence, h_i = (n - i + 1)w/n. Thus, after the first job is
assigned we have l_n ≤ kw/n. After the second job is handled we have l_{n-1} ≤ 2kw/n,
but l_n ≤ kw/n still holds because the new job could not be assigned to server n. In
general, after the request for server s is processed, we have l_i ≤ (n - i + 1)kw/n for
all i ≥ s. Noting that the total weight of the jobs in the input equals the total load
on the servers once the assignment is complete, we get
nw = Σ_{s=1}^{n} l_s ≤ Σ_{s=1}^{n} (n - s + 1) kw/n = ((n + 1)/2) kw.
Hence, k ≥ lim_{n→∞} 2n/(n + 1) = 2.
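A quick numeric sanity check of this bound (our own illustration): for the uniform sequence, the total-weight identity forces the smallest feasible k toward 2 as n grows.

```python
def min_competitive_ratio(n, w=1.0):
    # Total weight nw must fit under the per-server ceilings (n-s+1)kw/n,
    # so the smallest feasible k satisfies nw = sum((n-s+1)*k*w/n for s in 1..n).
    ceiling_coeff = sum((n - s + 1) * w / n for s in range(1, n + 1))  # = (n+1)w/2
    return n * w / ceiling_coeff  # = 2n/(n+1)

assert abs(min_competitive_ratio(10) - 20 / 11) < 1e-12
assert min_competitive_ratio(10 ** 6) > 1.999
```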
3.2. Discussion. Figure 1 depicts the request sequence and the resultant kh_i's
in histogram-like fashion with heights of bars indicating the respective values. The
bars are of equal width, so we can equivalently consider their area rather than height.
To be precise, let us redraw the histograms with bars of width 1 and height equal to
the numerical values they represent. Then, the total weight to be assigned is the total
area of the job bars, and the total weight actually assigned is bounded from above
by the total area of the kh_i bars. Now, instead of drawing a histogram of kh_i, let us
draw a histogram of h_i. The lower bound is found by solving
total area of job bars ≤ k · total area of h_i bars,
i.e.,
k ≥ (total area of job bars) / (total area of h_i bars).
Note that if we multiply the weights of all jobs by some constant c > 0, the
heights of both the job bars and the h_i bars will increase by a factor of c, leaving the
area ratio intact. Similarly, we can express the scaling of job weights by scaling the
width of the bars in both histograms. This, too, has no effect on the resultant ratio.
Thus we can express the entire procedure in geometric terms as follows. Select an
"input" histogram in which the width of each bar is 1/n. Let h_{i,j} be the area of bars
i, . . . , j divided by j/n (the width of j bars), and let h_i = max_{j≥i} h_{i,j}. (We
divide the area by j/n rather than j because h_i is the height of the bar whose area
equals the value of OPT for the first n - i + 1 jobs.) Divide the area of the input
histogram by the area of the h_i histogram (drawn to the same scale) to obtain a lower
bound. The scaling of the histograms allows us to keep considering finite areas as n
goes to infinity. This forms the link between the discrete model and the continuous
model, which we introduce next.
3.3. The continuous model. The continuous model is motivated by the observation
that the analysis suggested in the previous section tends to be exceedingly
difficult for all but the simplest of input histograms. We turn to the continuous model
in order to avail ourselves of the machinery of infinitesimal calculus. The continuous
model differs from the semicontinuous model introduced in section 2 in two ways.
Instead of the infinite server interval, we use a finite interval [0, S], and, more importantly,
jobs in the continuous model are not discrete; rather, we have a continuous
job flow arriving over time.
It is possible to define a general continuous model in which the arrival of jobs over
time is described by a function of place (in the server interval) and time. Although this
model is an interesting mathematical construction in its own right, we focus here on a
more restricted model: one that allows only the equivalent of right-to-left sequences.
Formally, the input is a request function, which is an integrable nonnegative real
function f(x) defined on the server interval [0, S]. The interpretation of f is by means
of integration, i.e., ∫_{x_0}^{x_1} f(x) dx is the total amount of weight requesting points in the
interval [x_0, x_1]. The underlying intuition is that the request flow is right-to-left in
the sense that the infinitesimal request for point x is assumed to occur at "time"
S - x. Assignments are continuous too; an assignment is described by an assignment
function, which is an integrable nonnegative real function g(x) on [0, S) that (1) is
continuous from the right at every point and (2) satisfies ∫_{x_0}^{S} g(x) dx ≤ ∫_{x_0}^{S} f(x) dx
for all x_0 ∈ [0, S), with equality for x_0 = 0. An on-line algorithm in this model is an
algorithm which, given f(x), outputs g(x) such that for all x ∈ [0, S), g(x) on the
interval [x, S) is independent of f(x) outside that interval. The definitions of load and
cost are the same as in the semicontinuous model.
Lemma 10 (optimum lemma: continuous model). OPT = H, where
H = sup_{0 < z ≤ S} (1/z) ∫_0^z f(x) dx.
Proof. For every z, the weight ∫_0^z f(x) dx must be assigned within [0, z], so no
assignment achieves cost below H. In addition, an assignment function that spreads
this weight as evenly as the eligibility constraints allow achieves cost H.
Let us adapt the lower bound procedure from section 3.2 to the continuous model.
Consider a request function f(x) and the corresponding assignment g(x) generated
by some k-competitive on-line algorithm. We wish to bound the value of g(x) at some
fixed point a. Define a new request function
f_a(x) = f(x) for x ∈ [a, S], and f_a(x) = 0 for x ∈ [0, a),
and denote by h(a) the value of H with respect to f_a. (Note the analogy with h_{i,j} and h_i in the discrete
model.) Let W = ∫_0^S f(x) dx and W' = ∫_0^S h(a) da. The value of g in [a, S) must
be the same for f and f_a, as g is produced by an on-line algorithm; thus g(a) ≤ kh(a).
This is true for all a; hence
W = ∫_0^S g(a) da ≤ k ∫_0^S h(a) da = kW',
from which the lower bound k ≥ W/W' is readily obtained.
For certain request functions we can simplify the procedure. If f(x) is a continuous
monotonically decreasing function tending to 0 at some point x_0 ≥ S (where x_0 = ∞
is allowed), and we use f's restriction to [0, S] as a request function, then we have the
following shortcut to h(a). Solve
(d/db) [ (1/b) ∫_a^b f(x) dx ] = 0
for b, and let b(a) be the solution. The following is easy to justify:
h(a) = (1/b(a)) ∫_a^{b(a)} f(x) dx = f(b(a)).
Note that if x_0 > S, this simplified procedure may return values for b(a) outside
the server interval [0, S]. In this case the true value of h(a) is less than the value
computed, leading to a less tight, but still valid, lower bound. We can therefore use
the simplified method without being concerned by this issue. Also, it is sometimes
more convenient to assume that the server interval, rather than being finite, is [0, ∞).
This too can be easily seen to make no difference, at least as far as using the simplified
procedure is concerned.
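The shortcut can be validated numerically. The sketch below is our own check, with f(x) = e^{-x}: it computes h(a) both by brute-force maximization of (1/b)∫_a^b f and via the stationarity condition b·f(b) = ∫_a^b f, and confirms the two agree.

```python
import math

f = lambda x: math.exp(-x)
F = lambda a, b: math.exp(-a) - math.exp(-b)   # ∫_a^b e^{-x} dx in closed form

def h_bruteforce(a, b_max=60.0, steps=50_000):
    """h(a) = sup_{b>a} (1/b) ∫_a^b f(x) dx, by grid search."""
    best = 0.0
    for t in range(1, steps + 1):
        b = a + t * (b_max - a) / steps
        best = max(best, F(a, b) / b)
    return best

def h_shortcut(a):
    """Find b(a) from the stationarity condition b f(b) = ∫_a^b f(x) dx
    by bisection, then return h(a) = f(b(a))."""
    lo, hi = a + 1e-9, a + 60.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * f(mid) > F(a, mid):   # still to the left of the root
            lo = mid
        else:
            hi = mid
    return f((lo + hi) / 2)

for a in (0.2, 0.5, 1.0, 2.0):
    assert abs(h_bruteforce(a) - h_shortcut(a)) < 1e-4
```

The bisection exploits the fact that b·f(b) - ∫_a^b f is positive just above a and negative for large b, so the sign changes exactly once.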
Example. Let f(x) = e^{-kx}. We find it easier to solve b e^{-kb} = ∫_a^b e^{-kx} dx for a rather
than b:
b e^{-kb} = (e^{-ka} - e^{-kb})/k, i.e., e^{-ka} = (1 + kb) e^{-kb}.
Thus, setting z = 1 + kb, the integral W' = ∫ h(a) da can be expressed in terms of
∫ (e^{-z}/z) dz, where E_1(z) = ∫_z^∞ (e^{-t}/t) dt is the familiar exponential integral function [1]. The
lower bound obtained is therefore W/W'.
Theorem 11. A lower bound of e can be obtained with our method by considering
the request function f(x) = e^{-kx^{1/k}} in the limit k → ∞, with server interval [0, ∞).
Proof. For convenience we consider only integral values of k. We start by reviewing
some elementary facts concerning gamma functions [1]. The gamma function
is defined by
Γ(a) = ∫_0^∞ t^{a-1} e^{-t} dt,
and it can be shown that for positive integer a, Γ(a) = (a - 1)!. The incomplete gamma function is defined by Γ(a, z) = ∫_z^∞ t^{a-1} e^{-t} dt. Integrating by parts we obtain the recurrence Γ(a + 1, z) = aΓ(a, z) + z^a e^{-z}. We also need Stirling's approximation: k! = √(2πk)(k/e)^k (1 + o(1)).
Finally, consider the integral ∫_α^β e^{-kx^{1/k}} dx. Substituting z = kx^{1/k} gives
dx = (z/k)^{k-1} dz = k^{1-k} z^{k-1} dz. Thus, for finite α and β,
∫_α^β e^{-kx^{1/k}} dx = k^{1-k} [ Γ(k, kα^{1/k}) - Γ(k, kβ^{1/k}) ].
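The substitution can be sanity-checked numerically. This is our own check; for integer k the upper incomplete gamma function has the standard closed form Γ(k, z) = (k-1)! e^{-z} Σ_{m=0}^{k-1} z^m/m!, which keeps the sketch dependency-free.

```python
import math

def upper_gamma_int(k, z):
    """Γ(k, z) for integer k ≥ 1, via the closed form for integer parameters."""
    return math.factorial(k - 1) * math.exp(-z) * sum(z ** m / math.factorial(m) for m in range(k))

def lhs(k, alpha, beta, steps=100_000):
    """Midpoint quadrature of ∫_α^β e^{-k x^{1/k}} dx."""
    h = (beta - alpha) / steps
    return sum(math.exp(-k * (alpha + (t + 0.5) * h) ** (1.0 / k)) * h for t in range(steps))

def rhs(k, alpha, beta):
    """The gamma-function form obtained by substituting z = k x^{1/k}."""
    return k ** (1 - k) * (upper_gamma_int(k, k * alpha ** (1.0 / k))
                           - upper_gamma_int(k, k * beta ** (1.0 / k)))

for k in (2, 3, 5):
    assert abs(lhs(k, 0.1, 4.0) - rhs(k, 0.1, 4.0)) < 1e-5
```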
Returning to our problem,
W = ∫_0^∞ e^{-kx^{1/k}} dx = k^{1-k} Γ(k) = k^{1-k} (k - 1)!.
The relation between a and b is given by
b e^{-kb^{1/k}} = ∫_a^b e^{-kx^{1/k}} dx.
Substituting r = ka^{1/k} and t = kb^{1/k} in the previous
equation and simplifying gives
t^k e^{-t} = k [ Γ(k, r) - Γ(k, t) ].
Applying the recurrence to both sides of this equation and rearranging terms yields
Γ(k + 1, r) - Γ(k + 1, t) = r^k e^{-r}.
We also get (directly) h(a) = f(b(a)) = e^{-t}.
Let us explore the relationship between r and t. Clearly, r ≤ t. It is easy to see
that the function x^k e^{-x} increases in [0, k) and decreases in (k, ∞). Thus, referring to
Figure 2(a), we see that for r ≥ k, Γ(k + 1, r) - Γ(k + 1, t) = ∫_r^t x^k e^{-x} dx is the area of
the region marked "X" between r and t, and r^k e^{-r} is the area of the dotted rectangle
between r and r + 1. Since both areas are equal and x^k e^{-x} decreases in this region,
t ≥ r + 1.
Next consider r ≤ k - 2. Referring to Figure 2(b) and applying the same reasoning,
we see that t ≤ r + 1 ≤ k - 1. Let us now consider the function x^{k-1} e^{-x}. Its maximum
occurs at k - 1. Thus, since t ≤ k - 1, the function increases in the interval [r, t].
Referring to Figure 2(c) and appealing to Γ(k, r) - Γ(k, t) = t^k e^{-t}/k, we see that
t - r ≥ t/k, i.e., t ≥ r / (1 - 1/k).
To summarize,
e^{-t} ≤ e^{-(r+1)} for r ≥ k,  e^{-t} ≤ e^{-r/(1 - 1/k)} for r ≤ k - 2,  and e^{-t} ≤ e^{-r} always.
Fig. 2. The relationship between r and t. The graphs are not drawn to the same scale. (a) Using
x^k e^{-x} to show r ≥ k ⟹ t ≥ r + 1. (b) Using x^k e^{-x} to show r ≤ k - 2 ⟹ t ≤ k - 1. (c) Using
x^{k-1} e^{-x} to show r ≤ k - 2 ⟹ t ≥ r/(1 - 1/k).
By differentiating a = (r/k)^k we get da/dr = k^{1-k} r^{k-1}, which implies
W' = ∫_0^∞ e^{-kb^{1/k}} da = k^{1-k} ∫_0^∞ r^{k-1} e^{-t} dr.
Putting all the pieces together,
∫_0^∞ r^{k-1} e^{-t} dr ≤ ∫_0^{k-2} r^{k-1} e^{-r/(1-1/k)} dr + ∫_{k-2}^{k} r^{k-1} e^{-r} dr + e^{-1} ∫_{k}^{∞} r^{k-1} e^{-r} dr.
Substituting z = r/(1 - (1/k)) in the first integral gives
∫_0^{k-2} r^{k-1} e^{-r/(1-1/k)} dr = (1 - 1/k)^k ∫_0^{k(k-2)/(k-1)} z^{k-1} e^{-z} dz ≤ e^{-1} [ Γ(k) - Γ(k, k - 1) ],
while the middle integral is at most 2 k^{k-1} e^{-(k-2)} = o(Γ(k)) by Stirling's approximation,
and the last integral is e^{-1} Γ(k, k) ≤ e^{-1} Γ(k, k - 1). Thus,
∫_0^∞ r^{k-1} e^{-t} dr ≤ (e^{-1} + o(1)) Γ(k).
Hence, lim_{k→∞} W'/W ≤ 1/e, and we obtain the lower bound lim_{k→∞} W/W' ≥ e.
3.4. Application to the discrete models. Returning to the discrete setting,
we claim that lower bounds obtained using our method in the continuous model apply
in the discrete models as well. This is intuitively correct, since the continuous model
may be viewed as the limiting case of the integral models with n → ∞ and arbitrarily
long input sequences. We omit the proofs for lack of space.
We also claim that these lower bounds are valid even against randomized algorithms.
Whereas typical deterministic lower bound constructions are adversarial, that
is, a different input sequence is tailored for each algorithm, our lower bound technique
provides a single sequence fit for all algorithms. Consequently, the bounds we derive
are also valid for randomized algorithms. This can be seen easily, either directly or
via Yao's principle (see, e.g., [13]).
4. Temporary jobs. In this section we allow temporary jobs in the input. We
restrict our attention to the integral model, as we have already seen (in section 2.5)
an optimal e-competitive algorithm for the fractional model that admits temporary
jobs. We present an algorithm that is 4-competitive in the unweighted case and 5-
competitive in the weighted case. We also show a lower bound of 3 for the unweighted
integral model.
Recall the definition of H in the optimum lemma (Lemma 1). Consider the jobs
which are active upon job j's arrival (including job j) and denote by H(j) the value
of H defined with respect to these jobs. A server is saturated on the arrival of job j
if its load is at least kH(j), where k is a constant to be determined later.
Algorithm PushRight
Assign each job to its rightmost unsaturated eligible server.
Proposition 12. If k ≥ 4, then whenever a job arrives, at least one of its eligible
servers is unsaturated. Thus, by taking k = 4, Algorithm PushRight is 4- (resp.
5)-competitive in the unweighted (resp. weighted) models.
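Algorithm PushRight is easy to prototype. The sketch below is our own code: it maintains the active set, recomputes H(j) on each arrival as max_i (weight requesting servers 1..i)/i (our reading of Lemma 1, which is not shown in this excerpt), and asserts that with k = 4 an unsaturated eligible server always exists, even as jobs depart.

```python
import random

def optimum_H(active):
    """A job requesting server s may go to servers 1..s, so the natural
    bottleneck bound is H = max_i (total weight requesting 1..i) / i."""
    n = max(s for s, _ in active)
    weight_at = {}
    for s, w in active:
        weight_at[s] = weight_at.get(s, 0.0) + w
    prefix, best = 0.0, 0.0
    for i in range(1, n + 1):
        prefix += weight_at.get(i, 0.0)
        best = max(best, prefix / i)
    return best

def push_right(events, n, k=4.0):
    load = [0.0] * (n + 1)      # 1-based loads of active jobs
    active = {}                 # job id -> (requested, weight, assigned)
    for jid, kind, s, w in events:
        if kind == "depart":
            _, rw, at = active.pop(jid)
            load[at] -= rw      # a departing job's weight leaves its server
            continue
        h = optimum_H([(s, w)] + [(rs, rw) for rs, rw, _ in active.values()])
        unsaturated = [i for i in range(1, s + 1) if load[i] < k * h]
        assert unsaturated, "Proposition 12: an unsaturated eligible server exists"
        b = max(unsaturated)    # push right
        load[b] += w
        active[jid] = (s, w, b)
    return load

random.seed(1)
events, alive = [], []
for jid in range(1, 301):
    events.append((jid, "arrive", random.randint(1, 15), random.uniform(0.5, 2.0)))
    alive.append(jid)
    if random.random() < 0.3:
        events.append((alive.pop(random.randrange(len(alive))), "depart", 0, 0.0))
loads = push_right(events, 15)
```

Note that the proposition's proof only needs two facts that hold for this H by construction: saturated servers carry load at least kH(j), and the weight requesting servers 1..m never exceeds mH(j).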
Proof. We begin by considering the properties of certain sequences of numbers
whose role will become evident later. Consider the infinite sequence defined by the
recurrence a_{i+2} = k(a_{i+1} - a_i) with initial conditions a_1 = 0, a_2 = 1. We are
interested in the values of k for which this sequence increases monotonically. Solving
the recurrence reveals the following.
. If k = 4, then a_i = (i - 1) 2^{i-2}.
. If k > 4, then a_i = c_1 x_1^i + c_2 x_2^i, where x_1 and x_2 are the two roots of the quadratic
polynomial x^2 - kx + k, and c_1, c_2 are determined by the initial conditions.
. If k < 4, then the roots of x^2 - kx + k are complex and a_i oscillates.
It is easy to see that the sequence increases monotonically in the first two cases but
not in the third.
Now consider some infinite sequence {s_i} obeying s_1 < s_2 and s_{i+2} ≥ k(s_{i+1} - s_i).
It is not difficult to show that if k is chosen such that {a_i}
increases monotonically, i.e., k ≥ 4, then {s_i} increases monotonically as well.
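A quick numeric check of the monotonicity claim (our own sketch; the initial values a_1 = 0, a_2 = 1 are our reconstruction):

```python
def sequence(k, terms=40):
    a = [0.0, 1.0]                       # a_1 = 0, a_2 = 1
    while len(a) < terms:
        a.append(k * (a[-1] - a[-2]))    # a_{i+2} = k (a_{i+1} - a_i)
    return a

def monotone(xs):
    return all(x < y for x, y in zip(xs, xs[1:]))

assert monotone(sequence(4.0))        # boundary case: a_i = (i-1) 2^(i-2)
assert monotone(sequence(4.5))        # real roots of x^2 - kx + k: increasing
assert not monotone(sequence(3.9))    # complex roots: the sequence oscillates
```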
Returning to our proposition, let k ≥ 4 and suppose that some job arrives, only
to find all of its eligible servers saturated. Let j_0 be the first such job, let s_1
be the server requested by it, and set s_0 = 0. We show how to construct two sequences, {s_i}_{i=0}^∞ and
{j_i}_{i=0}^∞, with the following properties.
1. s_i < s_{i+1} for all i.
2. For all i, although the servers s_i + 1, . . . , s_{i+1} are all eligible for job j_i, the
algorithm does not assign this job to the right of s_i.
3. The jobs {j_i}_{i=1}^∞ are all distinct and they arrive before j_0.
Property 3 states that job j_0 is preceded by an infinite number of jobs, yielding
a contradiction.
We have already defined s_0, s_1, and j_0. Having defined s_0, . . . , s_{i+1} and j_0, . . . , j_i,
we define s_{i+2} and j_{i+1} as follows. Property 1 implies that s_i < s_{i+1}; by
property 2 we know that when job j_i arrives, the servers s_i + 1, . . . , s_{i+1} are all
saturated, so the total weight of active jobs assigned
to servers s_i + 1, . . . , s_{i+1} is at least k(s_{i+1} - s_i)H(j_i). By the optimum lemma
(Lemma 1), at least one of these jobs must have requested a server whose number
is at least k(s_{i+1} - s_i). We choose any one such job as j_{i+1} and s_{i+2} as the server it
requests. The resulting sequence satisfies s_{i+2} ≥ k(s_{i+1} - s_i), so by the discussion
above it increases monotonically.
4.1. A lower bound. We show a lower bound of 3 on the competitive ratio
of deterministic algorithms. We use essentially the same construction as was used
in [6] in the context of related machines. Some of the details are different, though,
owing to the different nature of the two problems. For completeness, we present the
construction in full detail.
We consider the unweighted model. To motivate the construction, suppose the
value of OPT is known in advance and consider the algorithm that assigns each job
to its rightmost eligible server whose current load is less than k OPT, where k ≥ 1 is
an appropriately chosen constant. We design an input sequence targeted specifically
at this algorithm. As we shall see, the lower bound of 3 obtained using this input
sequence is valid for any on-line algorithm; we use only this algorithm to motivate
our construction.
Recall that a right-to-left input sequence is a sequence such that for all i < j,
all requests for server j are made before any of the requests for server i. Our input
sequence will be right-to-left. For now, we focus on the principles at the cost of rigor.
We shall refer to either of the servers ⌊x⌋ and ⌈x⌉ simply as server x; that is, we will
refer to server "x" without worrying about the fact that x may be noninteger.
To simplify matters, we design the sequence with OPT = 1. (This will be changed
later.)
Figure 3 depicts the first few steps in the ensuing construction.
Fig. 3. The first two rounds in the input sequence.
We start by making requests to server n. Since OPT is already known to the
algorithm, we lose nothing by making n requests, which is the maximum permitted
by OPT = 1. The algorithm assigns these jobs to servers n(1 - 1/k), . . . , n. We now
remove αn jobs. (α will be determined shortly.) Naturally, the jobs we remove
will be the ones assigned rightmost by the algorithm. The remaining n(1 - α) jobs
will be the ones assigned to servers n(1 - 1/k), . . . , n(1 - α/k). The adversary
assigns these jobs to servers αn, . . . , n. The value of α is determined by our desire
that the remaining jobs be assigned by the algorithm strictly to the left of their
assignment by the adversary. To that end, we select α = k/(k + 1), which solves the
equation n(1 - α/k) = αn.
We proceed with a second round of jobs. The logical choice is to request server
αn. We make αn requests, again, the maximum permitted by OPT = 1. The
algorithm assigns these jobs in the range n(1 - 1/k) - αn/k, . . . , n(1 - 1/k). We now remove
βn of these jobs, the ones assigned rightmost by the algorithm. The remaining n(α - β)
jobs are assigned by the adversary in the range βn, . . . , αn and by the algorithm in the
range n(1 - 1/k) - αn/k, . . . , n(1 - 1/k) - βn/k. To determine β we solve n(1 - 1/k) - βn/k =
βn, arriving at β = (k - 1)/(k + 1).
Fig. 4. The ith round. Solid rectangles represent the assignment of the active jobs at
the beginning of the round; the dotted rectangle represents the assignment of the jobs that
arrive in the ith round and do not subsequently depart. (a) The algorithm's assignment.
(b) The adversary's assignment.
To generalize the procedure, note that the number of jobs that arrive in a given
round is chosen equal to the number of jobs that depart in the round preceding it.
Let us introduce some notation. Denote by r_i the server to which requests are made
in the ith round and by f_i the leftmost server to which any of these jobs gets assigned
by the algorithm. We have already chosen r_1 = n and r_2 = αn, and we have seen
that f_1 = n(1 - 1/k) and f_2 = n(1 - 1/k) - αn/k. For the ith round of jobs,
suppose the following two conditions hold at the end of round i - 1 (see Figure 4).
1. In the adversary's assignment the active jobs are all assigned in the range
r_i, . . . , n.
2. In the algorithm's assignment the active jobs that arrived in round i - 1
occupy servers f_{i-1}, . . . , r_i, and no jobs are assigned to the left of f_{i-1}.
In the ith round, r_i requests are made to server r_i. They are assigned by the algorithm
in the range f_i, . . . , f_{i-1}. Thus,
f_i = f_{i-1} - r_i/k.
Next, r_{i+1} of these jobs depart, where r_{i+1} is chosen such that the r_i - r_{i+1} remaining
jobs occupy servers f_i, . . . , r_{i+1}. Thus, k(r_{i+1} - f_i) = r_i - r_{i+1}, or equivalently,
r_{i+1} = (k/(k + 1)) f_{i-1}.
The actual lower bound construction follows. Let A be an on-line algorithm
purporting to be k-competitive for some k < 3. Without loss of generality, k is a
rational number arbitrarily close to 3. Consider the two sequences {φ_i} and {ρ_i}
defined simultaneously below:
φ_0 = 1, ρ_1 = 1, φ_i = φ_{i-1} - ρ_i/k, ρ_{i+1} = (k/(k + 1)) φ_{i-1}.
By substituting the first recurrence into the second, we get
ρ_{i+1} = ρ_i - ρ_{i-1}/(k + 1).
It can be shown (see [6]) that there exists a minimal integer p such that φ_p < 0
and that φ_i and ρ_i are rational for all i. The number of servers we use is n such that
nφ_i and nρ_i are integers for i = 1, . . . , p. The recurrences
defining {φ_i} and {ρ_i} hold for {f_i} and {r_i} as well. Let c be any positive integer;
we construct an input sequence of unit weight jobs such that
1. cr_1 jobs request server r_1.
2. For i = 2, . . . , p:
3. Of the cr_{i-1} jobs which have requested server r_{i-1}, the cr_i jobs that
were assigned rightmost by A depart. (Ties are broken arbitrarily.)
4. cr_i new jobs request server r_i.
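The sequences are easy to tabulate exactly (our own sketch; the initial values are our reconstruction). For k < 3 the characteristic roots of ρ_{i+1} = ρ_i - ρ_{i-1}/(k+1) are complex, so the sequences oscillate and a negative φ_p is eventually reached; at k = 3 the discriminant 1 - 4/(k+1) vanishes and φ_i stays positive.

```python
from fractions import Fraction

def first_negative_phi(k, limit=10_000):
    """Return the minimal p with φ_p < 0 (exact rational arithmetic),
    or None if no such p is found within the limit."""
    k = Fraction(k)
    phi = [Fraction(1)]                  # φ_0 = 1
    rho = [None, Fraction(1)]            # ρ_1 = 1
    i = 1
    while i < limit:
        phi.append(phi[i - 1] - rho[i] / k)       # φ_i = φ_{i-1} - ρ_i / k
        if phi[i] < 0:
            return i
        rho.append(k / (k + 1) * phi[i - 1])      # ρ_{i+1} = (k/(k+1)) φ_{i-1}
        i += 1
    return None

p = first_negative_phi(Fraction(29, 10))          # k = 2.9 < 3: φ_p < 0 is reached
assert p is not None
assert first_negative_phi(Fraction(3), limit=400) is None   # k = 3: φ_i stays positive
```

Using `fractions.Fraction` keeps the signs exact, which matters here: near the crossover the values of φ_i are tiny and floating point could misreport the minimal p.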
The lower bound proof proceeds as follows. (We omit the proofs for lack of space.)
For the input sequence to be well defined we must have r_{i+1} < r_i for all i.
Proposition 13. For all i < p, r_{i+1} < r_i (see Figure 4).
Denote by J_i the set of jobs requesting server r_i and by J'_i the set of the cr_{i+1}
jobs in J_i that eventually depart. Let W(s, t) be the number of active jobs assigned
to the left of server s at time t and denote by t_i the moment in time immediately
prior to the arrival of the jobs J_i.
Observe that the recurrence f_i = f_{i-1} - r_i/k is equivalent to r_i = k(f_{i-1} - f_i).
Proposition 15. Suppose algorithm A is k-competitive. Then W(f_i, t_{i+1}) > 0
for all i ≤ p; for i = p this is impossible, since f_p < 0.
Corollary 16. Algorithm A is not k-competitive.
5. The tree hierarchy. In this section we study a generalization of the problem
in which the servers form a rooted tree. A job requesting some server t may be assigned
to any of t's ancestors in the tree.
Let us introduce some terminology. A server is said to be lower than its proper
ancestors. The trunk defined by a set of servers U is the set U ∪ { s | s is an ancestor
of some server in U }. The servers eligible for a given job form a path which is also a
trunk. We refer to it interchangeably as the job's eligible path or eligible trunk.
For a given input sequence, denote by W_T the total weight of jobs requesting
servers in trunk T, and let H = max{ W_T / |T| : T is a trunk }.
Denote by w_max the maximum weight of a job in the sequence. Note the analogy with
the linear hierarchy. The following lemma can be proved in a manner similar to the
proof of the optimum lemma for the linear hierarchy (Lemma 1).
Lemma 17 (optimum lemma: tree hierarchy).
. In the fractional model, OPT = H.
. In the unweighted integral model, OPT = ⌈H⌉.
. In the weighted integral model, max{H, w_max} ≤ OPT < H + w_max.
5.1. A doubling algorithm. The off-line algorithm used in the proof of the
optimum lemma (Lemma 17) is nearly a valid on-line algorithm; its only off-line feature
is the requirement that the value of H be known at the outset. Thus, employing
the standard doubling technique (see, e.g., [4]) we can easily construct an on-line algorithm
which is respectively 4-, 4-, and 7-competitive for the fractional, unweighted
integral, and weighted integral models. The algorithm we present here is based on the
more sophisticated doubling approach pioneered in [12]. It is 4-, 4-, and 5-competitive
in the respective cases. The randomized version of this algorithm is, respectively, e-,
e-, and (e + 1)-competitive.
We start by describing the algorithm for the weighted integral model. The algorithm
uses two variables: GUESS holds the current estimate of H, and LIMIT determines
the saturation threshold; a server is saturated if its load is at least LIMIT. We
say that a set of servers U is saturated if every server s ∈ U is saturated. The set U is
overloaded if the total weight assigned to servers in U is greater than |U| LIMIT. A
newly arrived job is dangerous if assigning it to its lowest unsaturated eligible server
will overload some trunk. In particular, if its eligible trunk is saturated, the job
is dangerous. The algorithm avoids overloading any trunk by incrementing LIMIT
whenever a dangerous job arrives. This, in turn, guarantees that whenever a job
arrives, at least one of its eligible servers is unsaturated. Note that assigning the job
may saturate the server to which it is assigned.
Algorithm Doubling
Initialize (upon arrival of the first job):
1. Let w be the first job's weight and T its eligible trunk.
2. GUESS ← w/|T|; LIMIT ← GUESS.
For each job:
3. While the job is dangerous:
4. GUESS ← 2 · GUESS.
5. LIMIT ← LIMIT + GUESS.
6. Assign the job to its lowest unsaturated eligible server.
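A compact prototype of Algorithm Doubling (our own code; the brute-force trunk enumeration is ours and is practical only for tiny trees):

```python
import itertools

def trunks(parent):
    """All nonempty trunks: server sets closed under taking ancestors."""
    nodes = list(parent)
    result = []
    for bits in itertools.product([0, 1], repeat=len(nodes)):
        t = {v for v, b in zip(nodes, bits) if b}
        if t and all(parent[v] is None or parent[v] in t for v in t):
            result.append(frozenset(t))
    return result

def eligible_path(parent, v):
    path = []
    while v is not None:
        path.append(v)          # ordered from v (lowest) up to the root
        v = parent[v]
    return path

def doubling(parent, jobs):
    all_trunks = trunks(parent)
    load = {v: 0.0 for v in parent}
    guess = limit = None
    for v, w in jobs:
        path = eligible_path(parent, v)
        if guess is None:
            guess = w / len(path)       # T is the first job's eligible trunk
            limit = guess
        def dangerous():
            unsat = [u for u in path if load[u] < limit]
            if not unsat:
                return True             # saturated eligible trunk
            lowest = unsat[0]
            return any(sum(load[u] for u in t) + w > len(t) * limit
                       for t in all_trunks if lowest in t)
        while dangerous():              # lines 4-5 of the pseudocode
            guess *= 2
            limit += guess
        target = next(u for u in path if load[u] < limit)
        load[target] += w
    for t in all_trunks:                # the algorithm never overloads a trunk
        assert sum(load[u] for u in t) <= len(t) * limit + 1e-9
    return load, limit

# Tiny tree: 0 is the root; 1, 2 are children of 0; 3, 4 are children of 1.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1}
jobs = [(3, 1.0), (4, 2.0), (2, 1.5), (3, 0.5), (4, 1.0), (2, 2.0)]
load, limit = doubling(parent, jobs)
assert abs(sum(load.values()) - sum(w for _, w in jobs)) < 1e-9
```

The exponential trunk enumeration is only there to mirror the definitions literally; an efficient implementation would exploit the fact that only trunks containing the target server can become overloaded.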
We divide the algorithm's execution into phases. A new phase begins whenever
lines 4-5 are executed. (The arrival of a heavy job may trigger a succession of several
empty phases.) Let p be the number of phases and denote by GUESS_i and LIMIT_i
the respective values of GUESS and LIMIT during the ith phase. For consistency
define LIMIT_0 = 0. Note that the initial value of GUESS ensures that GUESS_1 ≤ H.
Proposition 18. If GUESS_i ≥ H for some i, then the ith phase is the last one.
Consequently, GUESS_p < 2H.
Proof. Suppose GUESS_i ≥ H and consider the beginning of the ith phase. We
claim that from this moment onward, the algorithm will not encounter any dangerous
jobs. Suppose this is not true. Let us stop the algorithm when the first such dangerous
job is encountered and assign the job manually to its lowest unsaturated eligible server.
This overloads some trunk R. Let T be the maximal trunk containing R such that
T - R is saturated. Clearly, T is overloaded as well; the total weight assigned to it
is greater than |T| LIMIT_i. On the other hand, T was not overloaded at the end
of the (i - 1)st phase, since the algorithm never overloads a trunk. Thus, the total
weight of jobs assigned to T during the ith phase (including the job we have assigned
manually) is greater than |T|(LIMIT_i - LIMIT_{i-1}) = |T| GUESS_i ≥ |T| H. By T's
maximality, all of these jobs must have requested servers in T. Thus, the total weight
of jobs requesting servers in T is greater than |T|H, yielding a contradiction.
Corollary 19. COST < 4 OPT + w_max.
Proof. The claim follows since COST < LIMIT_p + w_max = Σ_{i=1}^{p} GUESS_i + w_max < 2 GUESS_p + w_max < 4H + w_max ≤ 4 OPT + w_max.
Thus, Algorithm Doubling is 4-competitive in the unweighted integral model and
5-competitive in the weighted integral model. In the fractional model we modify the
algorithm as follows. A job is called dangerous iff its eligible path is saturated. When
assigning the job we may have to split it, as in the proof of the optimum lemma for
the linear hierarchy (Lemma 1). This algorithm achieves COST ≤ LIMIT_p <
4OPT.
5.2. Randomized doubling. We consider randomization against oblivious adversaries.
The randomization technique we use is fairly standard by now; a similar
idea has been used several times in different contexts (see, e.g., [11, 19, 15, 12, 10]).
The idea is to randomize the initial value of GUESS and to tweak the doubling parameter.
Specifically, let r be a random variable uniformly distributed over (0, 1] and
select any constant k > 1. We replace lines 2 and 4 with GUESS ← k^r · w/|T| and
GUESS ← k · GUESS, respectively.
It can be shown that E(LIMIT_p) ≤ (k/ln k) H (see [12] or [10] for details). This
expression is minimized at eH by putting k = e. Thus, for k = e, the algorithm is
e-competitive in the fractional and unweighted integral models and (e+1)-competitive
in the weighted integral model.
5.3. Lower bounds for temporary jobs. In contrast to the linear hierarchy,
allowing temporary jobs in the tree hierarchy has a drastic effect on the competitiveness
of the solutions. For the unweighted integral model, we show deterministic and
randomized lower bounds of √n and (1/2 - o(1))√n, respectively. These bounds are
tight, up to a multiplicative constant, as demonstrated by the upper bound shown
in [6] for the general problem with unrestricted eligibility constraints. Our randomized
lower bound construction applies in the fractional model as well.
5.3.1. A deterministic lower bound for the integral models. Let A be a
deterministic on-line algorithm, and let k = √n (assume for simplicity that k is an
integer). We show an input sequence for which A's assignment satisfies COST ≥ k OPT.
The server tree we use has a flower-like structure. It is composed of a stem and
petals. The stem consists of k servers s_1, s_2, . . . , s_k; s_1 is the root, and s_i is the parent
of s_{i+1} for i = 1, . . . , k - 1. The petals, p_1, . . . , p_{k²-k}, are all children of s_k. Server
s_k is called the calyx.
Suppose that the competitiveness of A is better than k for the given n. Consider
the following request sequence. Let c be an arbitrarily large integer.
1. For i = 1, . . . , k² - k:
2. c(k + 1) jobs, each of unit weight, request the petal p_i.
3. Of these jobs, ck now depart. (The rest are permanent.) The choice of
which jobs depart is made so as to maximize the number of jobs assigned
by A to p_i which depart.
4. ck jobs request the calyx.
During the first stage (lines 1-3), the adversary always assigns the permanent
jobs to the petal and the temporary ones to the servers in the stem, c jobs to each
server. Thus, at the beginning of the second stage, no jobs are assigned to the stem
servers, and the adversary assigns c new jobs to each of them. Thus, OPT = c.
Consider the jobs requesting p_i. Since A is better than k-competitive, it assigns
fewer than ck jobs to p_i. Thus, in each iteration at least c permanent jobs are
assigned by A to servers in the stem. Hence, at the beginning of the second stage
there are at least c(k² - k) jobs assigned to servers in the stem. Since the additional
ck jobs must be assigned to servers in the stem, at least one server must end up
with a load of at least ck, contradicting the assumption that A is better than k-
competitive.
5.3.2. A randomized lower bound. Let us generalize the previous construction.
We use a flower-like server tree with p petals and a stem of s servers.
As before, the input sequence has two stages. The first consists of p iterations, where
in the ith iteration, c(s + 1) jobs request the petal p_i and then cs of them depart.
In the second stage cs jobs request the calyx. The goal in the first stage is to "push"
enough jobs into the stem so that the algorithm will fail in the second stage.
Consider the ith iteration. Let X_j be the random variable denoting the contribution
of the jth job to the load on servers in the stem. (In the integral model
this is a 0-1 variable; in the fractional model X_j may assume any value between 0
and 1.) Since the algorithm is better than k-competitive, the expected total weight
assigned to p_i is less than ck, so E(Σ_j X_j) > c(s + 1 - k). Thus, there exist c jobs
j_1, . . . , j_c such that E(X_{j_1} + · · · + X_{j_c}) ≥ c(s + 1 - k)/(s + 1). The adversary makes these
jobs permanent and terminates the rest.
Consequently, at the beginning of the second stage the expected total load on
the servers in the stem is greater than cp(s + 1 - k)/(s + 1), and at the end of the
second stage the expectation grows to more than cs + cp(s + 1 - k)/(s + 1). Since
the algorithm is better than k-competitive, the expected maximum load on a server
in the stem must be less than ck, and thus the expected total load on servers in the
stem must be less than cks. To reach a contradiction we choose s and p to satisfy
cs + cp(s + 1 - k)/(s + 1) ≥ cks.
Solving the equation yields
p = (k - 1) s (s + 1) / (s + 1 - k).
Minimizing n (for fixed k) subject to the last equation yields s + 1 ≈ 2k and
n ≈ 4k²; hence the lower bound is k ≈ (1/2)√n.
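The trade-off between stem length and petal count can be explored numerically (our own sketch, using the reconstructed expression for p):

```python
def servers_needed(k, s):
    """Total servers n = stem + petals for the randomized flower construction."""
    assert s + 1 > k
    petals = (k - 1) * s * (s + 1) / (s + 1 - k)
    return s + petals

k = 10
best_s = min(range(k, 8 * k), key=lambda s: servers_needed(k, s))
n_min = servers_needed(k, best_s)
# the optimum sits near s + 1 = 2k, and n_min is on the order of 4k^2
assert abs((best_s + 1) - 2 * k) <= 2
assert 3 * k * k <= n_min <= 4 * k * k
```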
6. Other eligibility restrictions. In the hierarchical servers problem the sets
of eligible servers for a job have a restricted form. For example, in the linear hierarchy
they have the following form: all servers to the left of some server s. This is a special
case of the problem considered in [7], where eligible sets may be arbitrary. In this
section we study various other restrictions on the eligible sets. We focus on the
following three.
1. The servers form a path and the set of servers eligible for a given job must
be contiguous on the path.
2. The servers form a rooted tree and each job specifies a node v, all of whose
descendents (including v itself) are eligible.
3. The number of servers eligible for any given job is at most k for some fixed k.
We show how to extend the Ω(log n) lower bound of [7] to these three variants
of the problem. This shows that the greedy algorithm, which is O(log n)-competitive
for the general problem, remains optimal (up to a multiplicative constant) in many
restrictive scenarios.
For the first variant, consider the following input sequence. For convenience
assume n is a power of 2 (otherwise consider only the first 2 #log n# servers on the
path). All jobs have unit weight, and they arrive in log n rounds. The ith round
consists of m all of which specify the same set of eligible servers S i . The
sets S i are chosen such that |S i In the first round all servers are eligible.
548 AMOTZ BAR-NOY, ARI FREUND, AND JOSEPH (SEFFI) NAOR
Having defined S_i, we construct S_{i+1} as follows. Suppose the total weight assigned to servers in S_i at the end of the ith round is at least ni/2^i (as is certainly the case for i = 1). The set S_i is contiguous, i.e., its servers form a path. At least half of the total weight assigned to S_i is assigned to either the first half of this path or to its second half. We define S_{i+1} as the half to which the majority of the weight is assigned (breaking ties arbitrarily). Thus, the total weight assigned to S_{i+1} at the end of the (i+1)st round is at least n(i+1)/2^(i+1). We define S_{log n + 1} in the same manner and call the single server which it comprises the leader. The load on the leader at the end of the last round is at least (1/2) log n. The adversary assigns the jobs in the ith round to the servers in S_i - S_{i+1}, one job to each server. Thus, OPT ≤ 1, and the lower bound follows.
For the second variant we use a very similar construction. The servers are arranged in a complete binary tree. The number of jobs m_i in the ith round is defined by the recurrence m_{i+1} = (m_i - 1)/2. The sets of eligible servers are defined as follows. In the first round all servers are eligible. Let v_i be the root of the subtree S_i; we define S_{i+1} as the subtree rooted at the child of v_i to which more weight is assigned at the end of the ith round.
For the third variant we use a recursive construction. Partition the servers into n/k subsets of k servers each and apply the construction of the first variant to each subset. The load on the leader of each subset is now Ω(log k). Continue recursively on the set of leaders. Each level of the recursion increases the load on the leaders in that level by Ω(log k), and there are Θ(log_k n) = Θ(log n / log k) levels. Thus, the load on the final leader is Ω(log n). At each level of the recursion, the adversary assigns no weight to the leaders and at most one job to the other servers. Hence, OPT ≤ 1, and the lower bound follows.
--R
Handbook of Mathematical Functions
Better bounds for online scheduling
The competitiveness of on-line assignments
New algorithms for an ancient scheduling problem
A better lower bound for on-line scheduling
New algorithms for related machines with temporary jobs
Yet more on the linear search problem
A lower bound for on-line scheduling on uniformly related machines
An improved approximation ratio for the minimum latency problem
Bounds for certain multiprocessor anomalies
A better algorithm for an ancient scheduling problem
Online resource minimization
--TR
--CTR
Pilu Crescenzi , Giorgio Gambosi , Gaia Nicosia , Paolo Penna , Walter Unger, On-line load balancing made simple: Greedy strikes back, Journal of Discrete Algorithms, v.5 n.1, p.162-175, March, 2007
On-line algorithms for the channel assignment problem in cellular networks, Discrete Applied Mathematics, v.137 n.3, p.237-266, 15 March 2004 | hierarchical servers;temporary jobs;load balancing;resource procurement;on-line algorithms |
586883 | Resource-Bounded Kolmogorov Complexity Revisited. | We take a fresh look at CD complexity, where CDt(x) is the size of the smallest program that distinguishes x from all other strings in time t(|x|). We also look at CND complexity, a new nondeterministic variant of CD complexity, and time-bounded Kolmogorov complexity, denoted by C complexity.We show several results relating time-bounded C, CD, and CND complexity and their applications to a variety of questions in computational complexity theory, including the following: Showing how to approximate the size of a set using CD complexity without using the random string as needed in Sipser's earlier proof of a similar result. Also, we give a new simpler proof of this result of Sipser's. Improving these bounds for almost all strings, using extractors. A proof of the Valiant--Vazirani lemma directly from Sipser's earlier CD lemma. A relativized lower bound for CND complexity. Exact characterizations of equivalences between C, CD, and CND complexity. Showing that satisfying assignments of a satisfiable Boolean formula can be enumerated in time polynomial in the size of the output if and only if a unique assignment can be found quickly. This answers an open question of Papadimitriou. A new Kolmogorov complexity-based proof that BPP\subseteq\Sigma_2^p$. New Kolmogorov complexity based constructions of the following relativized worlds: There exists an infinite set in P with no sparse infinite NP subsets. EXP=NEXP but there exists a NEXP machine whose accepting paths cannot be found in exponential time. Satisfying assignments cannot be found with nonadaptive queries to SAT. | Introduction
Originally designed to measure the randomness of strings, Kolmogorov complexity has become an
important tool in computability and complexity theory. A simple lower bound showing that there
exist random strings of every length has had several important applications (see [LV93, Chapter
6]).
Early in the history of computational complexity theory, many people naturally looked at resource-bounded
versions of Kolmogorov complexity. This line of research was initially fruitful and led to
some interesting results. In particular, Sipser [Sip83] invented a new variation of resource-bounded
complexity, CD complexity, where one considers the size of the smallest program that accepts the
given string and no others. Sipser used CD complexity for the first proof that BPP is contained
in the polynomial-time hierarchy.
Complexity theory has marched on for the past two decades, but resource-bounded Kolmogorov
complexity has seen little interest. Now that computational complexity theory has matured a bit,
we ought to look back at resource-bounded Kolmogorov complexity and see what new results and
applications we can draw from it.
First, we use algebraic techniques to give a new upper bound lemma for CD complexity without
the additional advice required of Sipser's Lemma [Sip83]. With this lemma, we can approximately
measure the size of a set using CD complexity.
We also give a new simpler proof of Sipser's Lemma and show how it implies the important Valiant-Vazirani lemma [VV85] that randomly isolates satisfying assignments. Surprisingly, Sipser's paper predates the result of Valiant and Vazirani.
We define CND complexity, a variation of CD complexity where we allow nondeterministic computation. We prove a lower bound for CND complexity where we show that there exists an infinite set A such that every string in A has high CND complexity even if we allow access to A as an oracle. We use this lemma to prove some negative results on nondeterministic search vs. deterministic decision.
Once we have these tools in place, we use them to unify several important theorems in complexity
theory. We answer an open question of Papadimitriou [Pap96] characterizing exactly when the
set of satisfying assignments of a formula can be enumerated in output-polynomial time. We also give straightforward proofs that BPP is in Σ^p_2 (first proven by Gács (see [Sip83])) and create relativized worlds where assignments to SAT cannot be found with nonadaptive queries to SAT (first proven by Buhrman and Thierauf [BT96]), and where EXP = NEXP but there exists a NEXP machine whose accepting paths cannot be found in exponential time (first proven by Impagliazzo and Tardos [IT89]).
These results in their original form require a great deal of time to fully understand because either the ideas and/or the technical details are quite complex. We show that by understanding resource-bounded Kolmogorov complexity, one can see full and complete proofs of these results without much additional effort. We also look at when polynomial-time C, CD and CND complexity coincide. We give a precise characterization of when we have equality of these measures, and some interesting consequences thereof.
Preliminaries
We use basic concepts and notation from computational complexity theory texts like Balcázar, Díaz, and Gabarró [BDG88] and Kolmogorov complexity from the excellent book by Li and Vitányi [LV93]. We use |x| to represent the length of a string x and ||A|| to represent the number of elements in the set A. All of the logarithms are base 2.
Formally, we define the Kolmogorov complexity function C(x|y) by
C(x|y) = min{|p| : U(p, y) = x},
where U is some fixed universal deterministic Turing machine. We define unconditional Kolmogorov complexity by C(x) = C(x|λ), where λ is the empty string.
A few basic facts about Kolmogorov complexity:
• The choice of U affects the Kolmogorov complexity by at most an additive constant.
• For some constant c, C(x) ≤ |x| + c for every x.
• For every n and every y, there is an x of length n such that C(x|y) ≥ n.
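The third fact is the standard counting argument: there are only 2^n - 1 programs of length less than n, one fewer than the number of strings of length n, so some n-bit string must have C(x|y) ≥ n. The count can be checked directly:

```python
n = 10
programs_shorter_than_n = sum(2 ** k for k in range(n))  # lengths 0 .. n-1
assert programs_shorter_than_n == 2 ** n - 1             # fewer than the 2^n strings
```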
We will also use time-bounded Kolmogorov complexity. Fix a fully time-computable function t(n). We define the C^t(x|y) complexity function as
C^t(x|y) = min{|p| : U(p, y) = x and U(p, y) runs in at most t(|x| + |y|) steps}.
As before we let C^t(x) = C^t(x|λ). A different universal U may affect the complexity by at most a constant additive factor and the time by a log t factor.
While the usual Kolmogorov complexity asks about the smallest program to produce a given string, we may also want to know about the smallest program to distinguish a string. While this difference affects the unbounded Kolmogorov complexity by only a constant, it can make a difference in the time-bounded case. Sipser [Sip83] defined the distinguishing complexity CD^t by: CD^t(x|y) is the length of the shortest program p such that
(1) U(p, x, y) accepts.
(2) U(p, z, y) rejects for all z ≠ x.
(3) U(p, z, y) runs in at most t(|z| + |y|) steps for all z ∈ Σ*.
Fix a universal nondeterministic Turing machine U_n. We define the nondeterministic distinguishing complexity CND^t by: CND^t(x|y) is the length of the shortest program p such that
(1) U_n(p, x, y) accepts.
(2) U_n(p, z, y) rejects for all z ≠ x.
(3) U_n(p, z, y) runs in at most t(|z| + |y|) steps for all z ∈ Σ*.
Once again we let CND^t(x) = CND^t(x|λ).
We can also allow for relativized Kolmogorov complexity. For example, CD^{t,A}(x|y) is defined as above except that the universal machine U has access to A as an oracle.
Since one can distinguish a string by generating it we have
Lemma 2.1 For every t there is a constant c such that for all x and y, CD^{ct log t}(x|y) ≤ C^t(x|y) + c.
Likewise, since every deterministic computation is also a nondeterministic computation we get
Lemma 2.2 For every t there is a constant c such that for all x and y, CND^{ct}(x|y) ≤ CD^t(x|y) + c.
In Section 5 we examine the consequences of the converses of these lemmas.
Approximating Sets with Distinguishing Complexity
In this section we derive a lemma that enables one to deterministically approximate the density of
a set, using polynomial-time distinguishing complexity.
Lemma 3.1 Let S = {x_1, ..., x_d} ⊆ {0, 1}^n. For all x_i ∈ S, at least half of the primes p ≤ 4dn^3 satisfy x_i mod p ≠ x_j mod p for all x_j ∈ S with x_j ≠ x_i.
Proof: For each pair x_i ≠ x_j it holds that for at most n different prime numbers p, x_i ≡ x_j (mod p), since |x_i - x_j| < 2^n has at most n distinct prime factors (by the Chinese Remainder Theorem). Hence for x_i there are at most dn primes p such that x_i ≡ x_j (mod p) for some x_j ∈ S. The Prime Number Theorem [Ing32] states that for any m there are approximately m/ln(m) > m/log(m) primes less than m. In particular there are at least 2dn primes less than 4dn^3. So at least half of these primes p must have x_i mod p ≠ x_j mod p for all x_j ∈ S, x_j ≠ x_i. 2
Lemma 3.2 Let A be any set. For all strings x ∈ A^{=n} it holds that CD^{p,A^{=n}}(x) ≤ 2 log(||A^{=n}||) + O(log n) for some polynomial p.
Proof: Fix n and let d = ||A^{=n}||. Fix x ∈ A^{=n} and a prime p_x fulfilling the conditions of Lemma 3.1 for x.
The CD^poly program for x works as follows:
input y
If y ∉ A^{=n} then REJECT
else if y mod p_x = x mod p_x then ACCEPT
else REJECT
The size of the above program is |p_x| + |x mod p_x| + O(1) ≤ 2 log(4dn^3) + O(1). This is 2 log(||A^{=n}||) + O(log n). It is clear that the program runs in polynomial time, and only accepts x. 2
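Both lemmas can be checked numerically on toy parameters. In the sketch below the set S, the values of n and d, and all helper names are illustrative, not from the paper:

```python
def primes_up_to(m):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, m + 1, i)))
    return [i for i in range(2, m + 1) if sieve[i]]

n, d = 12, 6
S = [17, 1234, 2799, 2800, 3001, 4095]   # d distinct n-bit strings, as integers
P = primes_up_to(4 * d * n ** 3)         # the prime range of Lemma 3.1

for x in S:
    good = [p for p in P if all(x % p != y % p for y in S if y != x)]
    assert len(good) >= len(P) // 2      # Lemma 3.1: at least half the primes work
    p_x = good[0]
    # Lemma 3.2's program: accept z iff z is in S and z = x (mod p_x).
    # Its description is just (p_x, x mod p_x): about 2 log d + O(log n) bits.
    accepted = [z for z in S if z % p_x == x % p_x]
    assert accepted == [x]               # the program accepts x and nothing else in S
```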
We note that the above lemma also works for CND^p complexity, for p some polynomial.
Corollary 3.3 Let A be a set in P. For each string x ∈ A^{=n} it holds that CD^p(x) ≤ 2 log(||A^{=n}||) + O(log(n)) for some polynomial p.
Proof: We use the same scheme as in Lemma 3.2, now using that A ∈ P and specifying the length of x, yielding an extra log(n) term for |x| plus an additional 2 log log(n) penalty for concatenating the strings. 2
Corollary 3.4 1. A set S is sparse if and only if for all x ∈ S, CD^{p,S}(x) ≤ O(log(|x|)), for some polynomial p.
2. A set S ∈ P is sparse if and only if for all x ∈ S, CD^p(x) ≤ O(log(|x|)), for some polynomial p.
3. A set S ∈ NP is sparse if and only if for all x ∈ S, CND^p(x) ≤ O(log(|x|)), for some polynomial p.
Proof: Lemma 3.2 yields that all strings in a sparse set have O(log(n)) CD^p complexity. On the other hand simple counting shows that for any set A there must be a string x ∈ A^{=n} such that CND^A(x) ≥ log(||A^{=n}||). 2
3.1 Sipser's Lemma
We can also use Lemma 3.1 to give a simple proof of the following important result due to
Sipser [Sip83].
Lemma 3.5 (Sipser) For every polynomial-time computable set A there exist a polynomial p and constant c such that for every n, for most r in Σ^{p(n)} and every x ∈ A^{=n},
CD^{p,A^{=n}}(x|r) ≤ log(||A^{=n}||) + c log n.
Proof: For each k, 1 ≤ k ≤ n, let r_k be a list of 4k(n + 1) randomly chosen numbers less than 4 · 2^k · n^3, and let r be the concatenation of all of the r_k.
Fix x ∈ A^{=n}. Let d = ||A^{=n}|| and k = ⌈log d⌉. Consider one of the numbers y listed in r_k. By the Prime Number Theorem [Ing32], the probability that y is prime is at least 1/log(4 · 2^k · n^3). The probability that y fulfills the conditions of Lemma 3.1 for x is at least half of that, i.e., at least 1/(4k). With probability at least 1 - e^{-(n+1)} > 1 - 2^{-(n+1)} we have that some y in r_k fulfills the condition of Lemma 3.1.
With probability at least 1/2, for every x ∈ A^{=n} there is some y listed in r_k fulfilling the conditions of Lemma 3.1 for x.
We can now describe x by x mod y and the pointer to y in r. 2
Note: Sipser can get a tighter bound than c log n but for most applications the additional O(log n)
additive factor makes no substantial difference.
Comparing our Lemma 3.2 with Sipser's Lemma 3.5, we are able to eliminate the random string required by Sipser at the cost of an additional log ||A^{=n}|| bits.
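The random advice in Sipser's lemma can be simulated on small parameters: sample 4k(n+1) numbers and check that, for each x, some sample is a prime fulfilling Lemma 3.1. All parameters and names below are illustrative; since the guarantee is probabilistic, the final assertion allows one element of slack:

```python
import random

def is_prime(m):
    return m >= 2 and all(m % f for f in range(2, int(m ** 0.5) + 1))

rng = random.Random(3)
n, d = 12, 8
k = 3                                   # k = log d
A = rng.sample(range(1 << n), d)        # A^{=n}, as integers

# the random advice r_k: 4k(n+1) samples below 4 * 2^k * n^3
r_k = [rng.randrange(4 * (1 << k) * n ** 3) for _ in range(4 * k * (n + 1))]

def has_good_sample(x):
    """Is some sampled y a prime under which x has a unique residue in A?"""
    return any(is_prime(y) and all(x % y != z % y for z in A if z != x)
               for y in r_k)

covered = sum(map(has_good_sample, A))
assert covered >= d - 1                 # the advice almost surely works for every x
```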
4 Lower Bounds
In this section we show that there exists an infinite set A such that every string in A has high
CND complexity, even relative to A.
Fortnow and Kummer [FK96] prove the following result about relativized CD complexity:
Theorem 4.1 There is an infinite set A such that for every polynomial p, CD^{p,A}(x) ≥ |x|/5 for almost all x ∈ A.
We extend and strengthen their result for CND complexity:
Theorem 4.2 There is an infinite set A such that CND^{2^n,A}(x) ≥ |x|/4 for every x ∈ A.
The proof of Fortnow and Kummer of Theorem 4.1 uses the fact that one can start with a large
set A of strings of the same length such that any polynomial-time algorithm on an input x in A
cannot query any other y in A. However, a nondeterministic machine may query every string of a
given length. Thus we need a more careful proof.
This proof is based on the proof of Corollary 4.3 of Goldsmith, Hemachandra and Kunen [GHK92].
In Section 7, we will also describe a rough equivalence between this result and an "X-search"
theorem of Impagliazzo and Tardos [IT89].
Proof of Theorem 4.2:
We create our set A in stages. In stage k, we pick a large n and add to A a nonempty set of strings B of length n such that every nondeterministic program p of size less than n/4 running in time 2^n accepts either zero or more than one strings in B. We first create a B that makes as many programs as possible accept zero strings in B. After that we carefully remove some strings from B to guarantee that the rest of the programs accept at least two strings.
Let P be the set of nondeterministic programs of size less than n/4. We have ||P|| < 2^{n/4}. We will clock all of these programs so that they will reject if they take time more than 2^n. We also assume that on every program p in P, input x and oracle O, p^O(x) queries x.
For any set X, let Δ(X) denote the set of programs in P that accept no string when given oracle A ∪ X. Pick sets Δ ⊆ P and H ⊆ Σ^n that maximize |Δ| + |H| subject to |H| ≤ w|Δ| (for a suitably large parameter w) and Δ ⊆ Δ(X) for all X ⊆ Σ^n - H.
Note that H ≠ Σ^n, since |H| ≤ w|Δ| ≤ w||P|| < 2^n; and since the program that always accepts lies outside every Δ(X) with X nonempty, we have that Δ ≠ P.
Our final B will be a subset of Σ^n - H, which guarantees that for all p ∈ Δ, p^{A∪B} will not accept any strings in B. We will create B such that for all p ∈ P - Δ, p^{A∪B} accepts at least two strings in B. Initially let B = Σ^n - H. For each p ∈ P - Δ and for each integer i, 1 ≤ i ≤ v (for a suitably large parameter v), do the following:
Pick a minimal X ⊆ B such that for some y ∈ X, p^{A∪X}(y) accepts. Fix an accepting path and let Q_{p,i} be all the queries made on that path. Let y_{p,i} = y, X_{p,i} = X, and remove X from B.
Note that |Q_{p,i}| ≤ 2^n. We remove no more than ||P|| · v · 2^n strings in total, so if we cannot find an appropriate X, we have violated the maximality of |Δ| + |H|. Note that y_{p,i} ∈ X_{p,i} ⊆ Q_{p,i} and all of the X_{p,i} are disjoint.
Initially set all of the X_{p,i} as unmarked. For each p ∈ P - Δ do the following twice:
Pick an unmarked X_{p,i}. Mark all X_{q,j} such that X_{q,j} ∩ Q_{p,i} ≠ ∅. Let B = B ∪ X_{p,i}.
We have that y_{p,i} ∈ B and p^{A∪B}(y_{p,i}) accepts for every X_{p,i} processed.
At most 2 · 2^n of the X_{q,j}'s get marked before we have finished, so we can always find an unmarked X_{p,i}.
Finally note that B ⊆ Σ^n - H, and for each p ∈ P - Δ we have at least two y ∈ B such that p^{A∪B}(y) accepts. Since H ≠ Σ^n, this also guarantees that B ≠ ∅. Thus we have fulfilled the requirements for stage k. 2
Using Theorem 4.2 we get the following corollary first proved by Goldsmith, Hemachandra and
Kunen [GHK92].
Corollary 4.3 (Goldsmith-Hemachandra-Kunen) Relative to some oracle, there exists an infinite
polynomial-time computable set with no infinite sparse NP subsets.
Proof: Let A from Theorem 4.2 be both the oracle and the set in P^A. Suppose A has an infinite sparse subset S in NP^A. Pick a large x such that x ∈ S. Applying Corollary 3.4(3), it follows that CND^{p,A}(x) ≤ O(log(|x|)) for some polynomial p. This contradicts the fact that x ∈ S ⊆ A and Theorem 4.2. 2
The above argument actually shows something stronger:
Corollary 4.4 Relative to some oracle, there exists an infinite polynomial-time computable set with no infinite subset in NP of density less than 2^{n/9}.
5 CD vs. C and CND
This section deals with the consequences of the assumption that two of the complexity measures C, CD, and CND coincide for polynomial time. We will see that these assumptions are equivalent to well-studied complexity-theoretic assumptions. This allows us to apply the machinery developed in the previous sections. We will use the following function classes:
Definition 5.1 1. The class FP^NP[log(n)] is the class of functions computable in polynomial time that can adaptively access an oracle in NP at most c log(n) times, for some c.
2. The class FP^NP_tt is the class of functions computable in polynomial time that can nonadaptively access an oracle in NP.
Theorem 5.2 The following are equivalent:
1. For every polynomial p there is a polynomial q such that C^q(x|y) ≤ CND^p(x|y) + O(log(|x|)).
2. For every polynomial p there is a polynomial q such that CD^q(x|y) ≤ CND^p(x|y) + O(log(|x|)).
3. FP^NP[log n] = FP^NP_tt.
We first need the following claim due to Lozano (see [JT95, pp. 184-185]).
Claim 5.3 FP^NP[log n] = FP^NP_tt if and only if for every f in FP^NP_tt there exists a function g ∈ FP that generates a polynomial-size set such that f(x) ∈ g(x).
Proof of Theorem 5.2: (1 ⇒ 2) follows from Lemma 2.1. For (2 ⇒ 3), let f ∈ FP^NP_tt and let y = f(x). We will see that there exist a polynomial p and constant c such that CND^p(y|x) ≤ c log(|y|). We can assume that f(x) produces a list Q of queries to SAT. Let #c be the exact number of queries in Q that are in SAT, so #c = ||Q ∩ SAT||.
Consider the following CND^p program, given x and #c:
input z
use f(x) to generate Q
guess which #c queries q_1, ..., q_{#c} of Q are in SAT
guess satisfying assignments for q_1, ..., q_{#c}
REJECT if not all of q_1, ..., q_{#c} are satisfiable
compute f(x) with q_1, ..., q_{#c} answered YES and the other queries answered NO
ACCEPT if and only if f(x) = z
The size of the above program is c log(|y|); it accepts only y, and runs in time p, for some polynomial p and constant c depending only on f. It follows that all the prefixes of y also have CND^p complexity bounded by c log(|y|) + O(1). By assumption there exist a polynomial p' and constant d such that CD^{p'}(y_i|x) ≤ d log(|y|) for y_i a prefix of y. Since we can enumerate and simulate all the CD^{p'} programs of size at most d log(|y|) in time polynomial in |y|, we can generate a polynomial-size list of possible candidates for y. It follows using Claim 5.3 that FP^NP[log n] = FP^NP_tt.
For (3 ⇒ 1), let y be a string such that CND^{p'}(y|x) = k, and let e be the program of length k that witnesses this. Consider the following function:
f(⟨e, 0^l, 0^m, i⟩, x) = 1 if there exists a z of length m with the ith bit equal to 1 such that Turing machine M_e, given x, nondeterministically accepts z in l steps, and f(⟨e, 0^l, 0^m, i⟩, x) = 0 otherwise.
Note that if e is a CND program that runs in l steps, then it accepts exactly one string, y, of length m. Hence f(⟨e, 0^l, 0^m, i⟩, x) = y_i. It is not hard to see that in general f is in FP^NP_tt and by assumption in FP^NP[log(n)] via machine M. Next, given i, l, m, x, and the c log(n) answers to the NP oracle that M makes, we can generate y in time p for some polynomial p depending only on M. Hence we have that C^p(y|x) ≤ k + c log(n) + O(log(lm)). 2
For the next corollary we will use some results from [JT95]. We will use the following class of limited nondeterminism defined in [DT90].
Definition 5.4 Let f(n) be a function from ℕ to ℕ. The class NP[f(n)] denotes the class of languages that are accepted by polynomial-time bounded nondeterministic machines that on inputs of length n make at most f(n) nondeterministic moves.
Corollary 5.5 If for every polynomial p there is a polynomial q such that CD^q(x|y) ≤ CND^p(x|y) + O(log(|x|)), then for any k:
1. NP[log^k(n)] is included in P.
2. SAT ∈ NP[n/log^k(n)].
3. SAT ∈ DTIME(2^{O(n/ log log n)}).
4. There exists a polynomial q such that for every m formulae φ_1, ..., φ_m, of n variables each, such that at least one is satisfiable, there exists an i such that φ_i is satisfiable and CND^q(φ_i | ⟨φ_1, ..., φ_m⟩) ≤ O(log log(n + m)).
Proof: The consequences in the corollary follow from FP^NP[log n] = FP^NP_tt, which by Theorem 5.2 follows from the assumption. 2
We can use Corollary 5.5 to get a complete collapse if there is only a constant difference between CD and CND complexity.
Theorem 5.6 The following are equivalent:
1. For every polynomial p there is a polynomial q such that C^q(x|y) ≤ CND^p(x|y) + O(1).
2. For every polynomial p there is a polynomial q such that CD^q(x|y) ≤ CND^p(x|y) + O(1).
3. P = NP.
Proof: (3 ⇒ 1) and (1 ⇒ 2) are easy. For (2 ⇒ 3), Corollary 5.5(4) combined with the assumption yields, for any formulae φ_1, ..., φ_m where at least one is satisfiable, that
CD^q(φ_i | ⟨φ_1, ..., φ_m⟩) ≤ c log log(n + m)
for some satisfiable φ_i. We can enumerate all the programs p of length at most c log log(n + m) and find all the formulae φ_i such that p(φ_i) accepts. Thus given φ_1, ..., φ_m we can in polynomial time create a subset of size log^c(n + m) that contains a satisfiable formula if the original list did. We then apply a standard tree-pruning algorithm to find the satisfying assignment of any satisfiable formula. 2
A simple modification of the proof shows that Theorem 5.6 holds if we replace the constant difference with a log n for any a < 1.
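The final tree-pruning step is the standard downward self-reduction for SAT. A minimal sketch, with a brute-force satisfiability test standing in for the polynomial-time decision procedure that the collapse provides (the clause encoding and helper names are hypothetical):

```python
from itertools import product

# A formula is a list of clauses; a clause is a list of nonzero ints,
# DIMACS-style: 3 means x3, -3 means NOT x3.  (Encoding is hypothetical.)
def satisfiable(clauses, n):
    """Brute-force test; stands in for the poly-time decision the collapse gives."""
    return any(all(any((lit > 0) == asg[abs(lit) - 1] for lit in cl) for cl in clauses)
               for asg in product([False, True], repeat=n))

def assign(clauses, var, val):
    """Substitute variable var by val and simplify; None if a clause dies."""
    out = []
    for cl in clauses:
        if (var if val else -var) in cl:
            continue                          # clause already satisfied
        reduced = [l for l in cl if abs(l) != var]
        if not reduced:
            return None                       # empty clause: branch unsatisfiable
        out.append(reduced)
    return out

def find_assignment(clauses, n):
    """Tree pruning: branch on each variable, keep only a satisfiable branch."""
    asg = []
    for var in range(1, n + 1):
        for val in (False, True):
            branch = assign(clauses, var, val)
            if branch is not None and satisfiable(branch, n):
                clauses = branch
                asg.append(val)
                break
    return asg

phi = [[1, 2], [-1, 3], [-2, -3]]             # (x1 v x2)(-x1 v x3)(-x2 v -x3)
a = find_assignment(phi, 3)
assert all(any((lit > 0) == a[abs(lit) - 1] for lit in cl) for cl in phi)
```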
For the next results we will need the following definition (see [ESY84]).
Definition 5.7 A promise problem is a pair of sets (Q, R). A set L is called a solution to the promise problem (Q, R) if for all x, x ∈ Q implies (x ∈ L ⟺ x ∈ R). For any function f, fSAT denotes the set of boolean formulas with at most f(n) satisfying assignments, for formulae of length n.
The next theorem states that nondeterministic computations that have few accepting computations can be "compressed" to nondeterministic computations that have few nondeterministic moves if and only if C^poly equals CD^poly up to an additive O(log n) term.
Theorem 5.8 The following are equivalent:
1. For every polynomial p there is a polynomial q such that C^q(x|y) ≤ CD^p(x|y) + O(log(|x|)).
2. (1SAT, SAT) has a solution in P.
3. For all time constructible f, (fSAT, SAT) has a solution in NP[2 log(f(n)) + O(log(n))].
Proof: (1 ⟺ 2) This was proven in [FK96].
(3 ⇒ 2) follows by taking f = 1 and the fact [DT90] that NP[O(log n)] ⊆ P.
(2 ⇒ 3): Let φ be a formula with at most f(|φ|) satisfying assignments. Lemma 3.2 yields that for every satisfying assignment a of φ, there exists a polynomial p such that CD^p(a|φ) ≤ 2 log(f(|φ|)) + O(log(|φ|)). Hence (using that 1 ⟺ 2) it follows that C^{p'}(a|φ) ≤ 2 log(f(|φ|)) + c log(|φ|) for some constant c and polynomial p'. The limited nondeterministic machine now guesses a C^{p'} program e of size at most 2 log(f(|φ|)) + c log(|φ|), runs it (relative to φ), and accepts iff the generated string is a satisfying assignment to φ. 2
Corollary 5.9 FP^NP[log n] = FP^NP_tt implies the following:
1. For any k, the promise problem (2^{log^k(n)}SAT, SAT) has a solution in P.
2. For any k, the class of languages that are accepted by nondeterministic machines that have at most 2^{log^k(n)} accepting paths on inputs of length n is included in P.
Proof: This follows from Theorem 5.2, Theorem 5.8, and Corollary 5.5. 2
6 Satisfying Assignments
We show several connections between CD complexity and finding satisfying assignments of boolean
formulae. By Cook's Theorem [Coo71], finding satisfying assignments is equivalent to finding
accepting computation paths of any NP computation.
6.1 Enumerating Satisfying Assignments
Papadimitriou [Pap96] mentioned the following hypothesis:
Hypothesis 6.1 There exists a Turing machine that, given a formula φ, will output the set A of satisfying assignments of φ in time polynomial in |φ| and ||A||.
We can use CD complexity to show the following.
Theorem 6.2 Hypothesis 6.1 is equivalent to: (1SAT, SAT) has a solution in P.
In Hypothesis 6.1, we do not require the machine to halt after printing out the assignments. If the machine is required to halt in time polynomial in |φ| and ||A||, we have that Hypothesis 6.1 is equivalent to P = NP.
Proof of Theorem 6.2: The implication from Hypothesis 6.1 to (1SAT, SAT) having a solution in P is straightforward. We concentrate on the other direction.
By Lemma 3.2 and Theorem 5.8 we have that for every element x of A, C^q(x|φ) ≤ 2 log(||A||) + c log n for some polynomial q and constant c. We now simply try every program p in length-increasing order and output p(φ) if it is a satisfying assignment of φ. 2
6.2 Computing Satisfying Assignments
In this section we turn our attention to the question of the complexity of generating a satisfying assignment for a satisfiable formula [WT93, HNOS96, Ogi96, BKT94]. It is well known [Kre88] that one can generate (the leftmost) satisfying assignment in FP^NP. A tantalizing open question is whether one can compute some (not necessarily the leftmost) satisfying assignment in FP^NP_tt.
Formalizing this question, define the function class F_sat by: f ∈ F_sat if whenever φ ∈ SAT, f(φ) is a satisfying assignment of φ.
The question now becomes whether F_sat ∩ FP^NP_tt ≠ ∅. Translating this to a CND setting we have the following.
Lemma 6.3 F_sat ∩ FP^NP_tt ≠ ∅ if and only if for all φ ∈ SAT there exists a satisfying assignment a of φ such that CND^p(a|φ) ≤ c log(|φ|) for some polynomial p and constant c.
Toda and Watanabe [WT93] showed that relative to a random oracle F_sat ∩ FP^NP_tt ≠ ∅. On the other hand Buhrman and Thierauf [BT96] showed that there exists an oracle where F_sat ∩ FP^NP_tt = ∅. Their result also holds relative to the set constructed in Theorem 4.2.
Theorem 6.4 Relative to the set A constructed in Theorem 4.2, F_sat ∩ FP^NP_tt = ∅.
Proof: For each n, let φ_n be the formula on n variables such that φ_n(x) = 1 if and only if x ∈ A^{=n}, and suppose F_sat ∩ FP^NP_tt ≠ ∅. It then follows by Lemma 6.3 that there exists an x ∈ A^{=n} such that CND^{p,A}(x) ≤ O(log(|x|)) for some polynomial p, contradicting the fact that for all x ∈ A the CND complexity of x relative to A is high. 2
6.3 Isolating Satisfying Assignments
In this section we take a Kolmogorov complexity view of the statement and proof of the famous Valiant-Vazirani lemma [VV85]. The Valiant-Vazirani lemma gives a randomized reduction from a satisfiable formula to another formula that with a non-negligible probability has exactly one satisfying assignment.
We state the lemma in terms of Kolmogorov complexity.
Lemma 6.5 There is some polynomial p such that for all φ in SAT and all r such that |r| = p(|φ|) and C(r) ≥ |r|, there is some satisfying assignment a of φ such that CD^p(a|⟨φ, r⟩) ≤ O(log |φ|).
The usual Valiant-Vazirani lemma follows from the statement of Lemma 6.5 by choosing r and the O(log |φ|) program randomly.
We show how to derive the Valiant-Vazirani lemma from Sipser's Lemma (Lemma 3.5). Note that Sipser's result predates Valiant-Vazirani by a couple of years.
Proof of Lemma 6.5: Let n = |φ|. Consider the set A of satisfying assignments of φ. We can apply Lemma 3.5 conditioned on φ, using part of r as the random string. We get that every element of A has a CD program of length bounded by d log n for some constant d. Since two different elements from A must have different programs, at least 1/n^c of the strings of length d log n must distinguish some assignment in A, for some constant c.
We use the rest of r to list n^{2c} different strings of length d log n. Since r is random, one of these strings w must be a program that distinguishes some assignment a in A. We can give a CD program for a in O(log n) bits by giving d and a pointer to w in r. 2
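A toy illustration of the isolation statement (not with the lemma's exact parameters): treat a random pair (p, c) of a small prime and a residue as the random O(log n)-bit program "accept a iff a ≡ c (mod p)" and count how often it isolates exactly one assignment. All parameters and names are illustrative:

```python
import random

def is_prime(m):
    return m >= 2 and all(m % f for f in range(2, int(m ** 0.5) + 1))

rng = random.Random(7)
n, d = 14, 6
A = rng.sample(range(1 << n), d)     # the satisfying assignments, as integers

# random "program": a prime p and a residue c, accepting a iff a = c (mod p)
primes = [p for p in range(d * d, 8 * d * d) if is_prime(p)]
isolated = 0
for _ in range(1000):
    p = rng.choice(primes)
    c = rng.randrange(p)
    if sum(1 for a in A if a % p == c) == 1:
        isolated += 1
assert isolated >= 1   # a non-negligible fraction of random pairs isolate one assignment
```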
7 Search vs. Decision in Exponential-Time
If P = NP then, given a satisfiable formula, one can use binary search to find an assignment. One might expect a similar result for exponential-time computation, i.e., if EXP = NEXP then one should be able to find a witness of a NEXP computation in exponential time. However, the proof for polynomial time breaks down because as one does the binary search the input questions get too long. Impagliazzo and Tardos [IT89] give relativized evidence that this problem is indeed hard.
Theorem 7.1 ([IT89]) There exists a relativized world where EXP = NEXP and there exists a NEXP machine whose accepting paths cannot be found in exponential time.
We can give a short proof of this theorem using Theorem 4.2.
Proof of Theorem 7.1: Let A be the set from Theorem 4.2.
We will encode a tally set T such that EXP^{A⊕T} = NEXP^{A⊕T}. Let M be a nondeterministic oracle machine such that M runs in time 2^n and for all B, M^B is NEXP^B-complete.
Initially let T = ∅. For every string w in lexicographic order, put 1^{2^w} into T if M^{A⊕T}(w) accepts (identifying w with the integer it represents). Let B = A ⊕ T at the end of the construction. Since M(w) could only query strings of length at most 2^{|w|} ≤ 2^w, this construction will give us EXP^B = NEXP^B.
We will show that there exists a NEXP^B machine whose accepting paths cannot be found in time exponential relative to B.
Consider the NEXP^B machine M that on input n (in binary) guesses a string y of length n and accepts if y is in A. Note that this machine runs in time 2^{|n|} ≥ n.
Suppose accepting computations of M^B can be found in time 2^{|n|^k} relative to B. By Theorem 4.2, we can fix some large n such that A^{=n} ≠ ∅ and for all x ∈ A^{=n},
CND^{2^n,A}(x) ≥ n/4. (1)
We will show the following claim.
Claim 7.2 There exist strings w_1, ..., w_{log n}, each of length O(log^k n), such that for some x ∈ A^{=n}, CND^{2^n,A}(x | w_1, ..., w_{log n}) ≤ O(log n).
Assuming Claim 7.2, Theorem 7.1 follows since for each i, |w_i| ≤ O(log^k n), so hardwiring the w_i into the program gives CND^{2^n,A}(x) ≤ O(log^{k+1} n) < n/4 for large n. We thus have our contradiction with Equation (1).
Proof of Claim 7.2: We will construct a program p^A to nondeterministically distinguish x. We use log n bits to encode n. First p will reconstruct T using the w_i's.
Suppose we have reconstructed T up to length 2^i. By our construction of T, strings of T of length at most 2^{i+1} can only depend on oracle strings of length at most 2^i. Using w_{i+1} we can guess the strings of T of length at most 2^{i+1} and nondeterministically verify that these are the strings in T.
Once we have T, we also have B, so in time 2^{log^k n} we can find x. 2
Impagliazzo and Tardos [IT89] prove Theorem 7.1 using an "X-search" problem. We can also relate
this problem to CND complexity and Theorem 4.2.
Definition 7.3 The X-search problem has a player who, given N input variables not all zero, wants to find a one. The player can ask r rounds of l parallel queries of a certain type each, and wins if the player discovers a one.
Impagliazzo and Tardos use the following result about the X-search problem to prove Theorem 7.1.
Theorem 7.4 ([IT89]) If the queries are restricted to k-DNFs and N > 2(klr) then the player will lose on some non-zero setting of the variables.
One can use a proof similar to that of Theorem 7.1 to prove a similar bound for Theorem 7.4. One needs just to apply Theorem 4.2 relative to the strategy of the player.
One can also use Theorem 7.4 to prove a variant of Theorem 4.2. Suppose Theorem 4.2 fails. Then for every A and for every x in A there exists a small program that nondeterministically distinguishes x. For some x, suppose we know this program p. We can find x by asking a DNF question based on p about the ith bit of x.
We do not in general know p, but there are not too many possibilities. We can use an additional round of queries to try all programs and test all the answers in parallel. This gives us a general strategy for the X-search problem, contradicting Theorem 7.4.
8 BPP in the second level of the polynomial hierarchy
One of the applications of Sipser's [Sip83] randomized version of Lemma 3.2 is the proof that BPP is in Σ^p_2. We will show that the approach taken in Lemma 3.2 yields a new proof of this result. We will first prove the following variation of Lemma 3.1.
Claim 8.1 Let S = {x_1, ..., x_d} ⊆ {0, 1}^n. There exists a prime number p ≤ 2d(d-1)n such that for all x_i ≠ x_j in S, x_i ≢ x_j (mod p).
Proof: Let c = d(d-1)n; we consider only prime numbers between c and 2c. For each pair x_i ≠ x_j it holds that for at most log(2^n)/log(c) = n/log(c) different prime numbers p ≥ c, x_i ≡ x_j (mod p). Moreover there are at most d(d-1)/2 different pairs of strings in S, so at most d(d-1)n/(2 log(c)) primes between c and 2c can fail for some pair. Applying again the Prime Number Theorem [Ing32], it follows that for c = d(d-1)n there are more than d(d-1)n/(2 log(c)) primes between c and 2c, and hence the desired p exists. 2
The idea is to use Claim 8.1 as a way to approximate the number of accepting paths of a BPP machine M. Note that the set of accepting paths ACCEPT_M(x) of M on x is in P. If this set is "small" then there exists a prime number satisfying Claim 8.1. On the other hand, if the set is "big" no such prime number exists. This can be verified in Σ^p_2: there exists a number p such that for all pairs of accepting paths u ≠ v, u ≢ v (mod p). In order to apply this idea we need the gap between the number of accepting paths when x is in the set and when it is not to be a square: if x is not in the set then ||ACCEPT_M(x)|| ≤ k(|x|), and if x is in the set then ||ACCEPT_M(x)|| > k^2(|x|). We will apply Zuckerman's [Zuc96] oblivious sampler construction to obtain this gap.
Theorem 8.2 Let M be a probabilistic machine that witnesses that some set A is in BPP. Assume that M(x) uses m random bits. There exists a machine M' that uses 3m + 9m/log(m) random bits such that if x ∈ A then Pr[M'(x) accepts] ≥ 1/2, and if x ∉ A then Pr[M'(x) accepts] ≤ 2^{-2m}.
Proof: Use the sampler in [Zuc96] with ε < 1/6 and γ = 3/log(m). 2
Let A \in BPP be witnessed by probabilistic machine M. Apply Theorem 8.2 to obtain M', and write r = 3m + 9m/\log(m) for its number of random bits, so that the accepting paths of M' are strings in {0,1}^r. The following \Sigma_2^p program decides the complement of A:

    input x
    guess a prime number p \le 2r \cdot 2^{2m+18m/\log(m)}
    if for all pairs u \neq v in ACCEPT_{M'}(x): u \not\equiv v (mod p)
        then accept else reject

If x \notin A then ||ACCEPT_{M'}(x)|| \le 2^{-2m} \cdot 2^r = 2^{m+9m/\log(m)}, and Claim 8.1 (applied with d = 2^{m+9m/\log(m)} strings of length r) guarantees that the above program accepts. On the other hand, if x \in A then ||ACCEPT_{M'}(x)|| \ge (1 - 2^{-2m}) \cdot 2^r > 2^{3m+9m/\log(m)-1}, and for every prime number p \le 2r \cdot 2^{2m+18m/\log(m)} there will be a pair of strings in ACCEPT_{M'}(x) that are congruent modulo p. This follows because at most p different strings can be pairwise incongruent modulo p, and 2r \cdot 2^{2m+18m/\log(m)} < 2^{3m+9m/\log(m)-1} for sufficiently large m. Since BPP is closed under complement, A itself is also in \Sigma_2^p, and hence BPP \subseteq \Sigma_2^p. \Box
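The two cases of the predicate can be illustrated at toy scale (our own code; in the proof the set sizes and the prime bound are exponential in m):

```python
def is_prime(m):
    """Trial-division primality test; fine at this toy scale."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def exists_separating_prime(accepting_paths, prime_bound):
    """Exhaustive version of the guessed part of the Sigma_2 predicate.

    Returns True iff some prime p <= prime_bound makes all accepting
    paths pairwise distinct mod p (the 'for all pairs' inner check).
    """
    paths = list(accepting_paths)
    return any(
        is_prime(p) and len({u % p for u in paths}) == len(paths)
        for p in range(2, prime_bound + 1)
    )

# Small accepting set (the x not in A case): a prime separates it.
small_case = exists_separating_prime({5, 9, 12}, 30)
# Big accepting set (the x in A case): 40 paths exceed every prime
# below the bound, so pigeonhole forces a collision for each prime.
big_case = exists_separating_prime(set(range(40)), 30)
```

Just as in the proof, the program accepts exactly when the accepting set is small, i.e. on the complement of A.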
Acknowledgments
We would like to thank José Balcázar and Leen Torenvliet for their comments on this subject. We
thank John Tromp for the current presentation of the proof of Lemma 3.2. We also thank Sophie
Laplante for her important contributions to Section 5. We thank Richard Beigel, Bill Gasarch and
Leen Torenvliet for comments on earlier drafts.
--R
Structural Complexity I.
On functions computable with nonadaptive queries to NP.
The complexity of generating and checking proofs of membership.
The complexity of theorem-proving procedures.
Classes of bounded nondeterminism.
The complexity of promise problems with applications to public-key cryptography
Computing solutions uniquely collapses the polynomial hierarchy.
The Distribution of Prime Numbers.
Decision versus search problems in super-polynomial time.
Jenner and Torán.
The complexity of optimization problems.
An Introduction to Kolmogorov Complexity and Its Applications.
Functions computable with limited access to NP.
The complexity of knowledge representation.
A complexity theoretic approach to randomness.
NP is as easy as detecting unique solutions.
Structural analysis on the complexity of inverse functions.
--TR
--CTR
Harry Buhrman , Troy Lee , Dieter Van Melkebeek, Language compression and pseudorandom generators, Computational Complexity, v.14 n.3, p.228-255, January 2005
Troy Lee , Andrei Romashchenko, Resource bounded symmetry of information revisited, Theoretical Computer Science, v.345 n.2-3, p.386-405, 22 November 2005
Allender, NL-printable sets and nondeterministic Kolmogorov complexity, Theoretical Computer Science, v.355 n.2, p.127-138, 11 April 2006