Implementation of Multiple Rule Firing Production Systems

Steve Kuo and Dan Moldovan
skuo@gringo.usc.edu and moldovan@gringo.usc.edu
DRB-363, (213) 740-9134
Department of Electrical Engineering - Systems
University of Southern California
Los Angeles, California 90089-1115

Abstract

The performance of production programs can be improved by firing multiple rules in a production cycle. In this paper, we present the multiple-contexts-multiple-rules (MCMR) model, which speeds up production program execution by firing multiple rules concurrently and guarantees the correctness of the solution. The MCMR model is implemented using the RUBIC parallel inference model on the Intel iPSC/2 hypercube. The Intel iPSC/2 hypercube is chosen because it is a cost-effective solution for large-scale applications. To avoid unnecessary synchronization and improve performance, rules are executed asynchronously and messages are used to update the database. Preliminary implementation results for the RUBIC parallel inference environment on the Intel iPSC/2 hypercube are reported.

1 Introduction

Multiple rule firing production systems increase the available parallelism over parallel match systems by parallelizing not only the match phase but all phases of the inference cycle. To speed up the derivation of correct solutions by multiple rule firing, two problems - the compatibility problem and the convergence problem - need to be addressed. The compatibility problem arises from the data dependences between production rules. If a set of rules does not have data dependences among themselves, the rules are said to be compatible and are allowed to fire concurrently. The convergence problem arises from the need to follow the problem solving strategy used in a production program. If the problem solving strategy is ignored, then two tasks may be executed out of sequence, or two actions for the same task may be executed in the wrong order, resulting in an incorrect solution.

304 PARALLEL SUPPORT FOR R-B SYSTEMS
There are three approaches to address the compatibility and the convergence problems. The first approach considers only the compatibility problem and resolves it by data dependence analysis [3] [6] [8]. Both synchronous and asynchronous execution models have been proposed. In these models, rules which are compatible are fired concurrently in a production cycle. Because the convergence problem is not addressed in these models, the problem solving strategy for a production program may be violated when multiple rules are fired simultaneously. The second approach addresses the compatibility and the convergence problems by developing parallel production languages. CREL [6] and Swarm [2] are two such languages. Production programs written in these languages do not use control flow or conflict resolution to ensure that the right rules are fired. Instead, production rules are fired as soon as they are matched. The correctness of these parallel production programs is guaranteed by showing that for any arbitrary execution sequence the correct solutions are always obtained [1]. A potential hurdle for CREL and Swarm is the possible difficulty of proving the correctness of a large production program. In addition, being able to fire production rules as soon as they are matched may not be the same as being able to fire multiple rules concurrently. These questions will be answered when the benchmark production programs have been translated into CREL and Swarm programs and their performance measured.

The multiple-contexts-multiple-rules (MCMR) model presented in this paper represents a third approach. The MCMR model addresses the compatibility problem by data dependence analysis. It addresses the convergence problem by analyzing the control flow in a production program to maintain the correct task ordering.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
In a production program, a complex problem can be solved by dividing it into smaller tasks until they are easily solved. These tasks are called contexts, and each context is solved by a set of context rules. The MCMR model improves the performance of a production program by activating multiple contexts and firing multiple rules concurrently. It guarantees the correctness of the solution by determining the conditions under which multiple contexts and multiple rules can be activated and fired. To capture the maximum parallelism, the production rules and the working memory are distributed when the MCMR model is implemented on the Intel iPSC/2 hypercube. The sequential inference cycle is also replaced with the RUBIC parallel inference cycle. The RUBIC parallel inference cycle executes rules in different nodes asynchronously and updates the working memory by message passing. The performance of production programs on the iPSC/2 is measured.

2 Contexts

To resolve the compatibility and the convergence problems successfully, one needs to understand how problems are solved in production programs. In general problem solving, a complex problem is usually solved by stepwise refinement. It is divided into smaller and smaller subproblems (or tasks) until they are easily solved. If other subproblems need to be solved before solving a subproblem, the program control is transferred from one subproblem to another. Production programs solve complex problems in exactly the same way. First, the production rules in a production program are divided into subsets of rules, one subset for each subproblem. A subset of rules is called a context, and each individual rule in the subset is called a context rule. Every context rule in the same context has a special context WME. Rules in different contexts have different context WMEs. A programmer can control which context is active by adding and removing the context WMEs.
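The gating role of context WMEs can be rendered as a small sketch. Everything here (the `Rule` class, the tuple encoding of WMEs, the rule and context names) is invented for illustration; the paper does not specify a representation.

```python
# Hypothetical sketch: context WMEs gate which context rules can match.
# The working memory is modeled as a set of (attribute, value) tuples.

working_memory = {("context", "C1"), ("goal", "schedule")}

class Rule:
    def __init__(self, name, context, condition):
        self.name = name
        self.context = context          # the context this rule belongs to
        self.condition = condition      # predicate over working memory

    def matched(self, wm):
        # A context rule can only match while its context WME is present.
        return ("context", self.context) in wm and self.condition(wm)

r1 = Rule("R1.1", "C1", lambda wm: ("goal", "schedule") in wm)
r2 = Rule("R2.1", "C2", lambda wm: True)

assert r1.matched(working_memory)       # C1 is active
assert not r2.matched(working_memory)   # C2 is not active

# A control rule transfers program control by swapping context WMEs:
working_memory.discard(("context", "C1"))
working_memory.add(("context", "C2"))
assert r2.matched(working_memory)
```

Activating several context WMEs at once is what lets the MCMR model run multiple contexts concurrently, subject to the compatibility conditions developed below.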
Context rules are divided into domain rules and control rules. Domain rules address the subproblem associated with the context, and conflict resolution is used to select the right rule to fire. If other subproblems need to be solved before solving a subproblem, the control rules transfer the program control to the appropriate contexts by modifying the context WMEs. By analyzing the control rules, the control flow between different contexts can be determined. The problem solving strategy and the control flow diagram for an example production program are shown in Figure 1.

[Figure 1: Control in Production Program. The figure shows the context level (control flow among contexts C0-C5) and the rule level (conflict resolution), where an arc from Ci to Cj signifies that program control is transferred from Ci to Cj (e.g. from C1 to C3). It also tabulates the reachable set of each context: C0: {C0, C1, C2, C3, C4, C5}; C1: {C1, C3, C4, C5}; C2: {C2, C5}; C3: {C3}; C4: {C4, C5}; C5: {C5}.]

The multiple-contexts-multiple-rules model resolves the compatibility and the convergence problems at two levels: the rule level and the context level. At the rule level, the MCMR model resolves the compatibility problem by data dependence analysis. A set of rules is allowed to fire concurrently, and the rules are said to be compatible or serializable, if executing them either sequentially or concurrently reaches the same state. This is the case if there are no data dependences among the rules in the set. The data dependence analysis is performed at compile time to construct a parallelism matrix P = [pij] and a communication matrix C = [cij]. Rules Ri and Rj are compatible if pij = 0; they are incompatible if pij = 1. The communication matrix C is used for communication purposes when production rules are partitioned and mapped onto different processing nodes in a message-passing multiprocessor. Rules Ri and Rj need to exchange messages to update the database if cij = 1; they do not need to if cij = 0.
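A minimal sketch of the compile-time construction of the parallelism matrix P follows. The paper does not give the dependence test explicitly; here each rule is reduced to invented read and write sets over WME classes, and two rules are marked data-dependent when one writes what the other reads or writes.

```python
# Sketch: build P = [pij] by data dependence analysis at compile time.
# Rule read/write sets are illustrative, not from the paper.

rules = {
    0: {"reads": {"goal"},  "writes": {"slot"}},
    1: {"reads": {"slot"},  "writes": {"done"}},
    2: {"reads": {"other"}, "writes": {"misc"}},
}

n = len(rules)
P = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        ri, rj = rules[i], rules[j]
        # Dependent if Ri's writes touch Rj's reads or writes, or vice versa.
        dependent = (ri["writes"] & (rj["reads"] | rj["writes"])
                     or rj["writes"] & ri["reads"])
        P[i][j] = 1 if dependent else 0

# Rule 0 writes "slot", which rule 1 reads: incompatible (p01 = 1).
assert P[0][1] == 1 and P[1][0] == 1
# Rule 2 touches disjoint WME classes: compatible with both (pij = 0).
assert P[0][2] == 0 and P[2][1] == 0
```

The communication matrix C would be derived the same way from the write-read relation between rules mapped to different nodes.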
The MCMR model resolves the convergence problem at the rule level by dividing the contexts in a production program into three different types: (1) converging contexts, (2) parallel nonconverging contexts and (3) nonconverging or sequential contexts. A context C is a converging context if, starting at a state satisfying the initial condition INIT for that context, all execution sequences result in states satisfying the post condition POST for that context [1] [2]. Otherwise context C is a nonconverging context. Conflict resolution can be eliminated for a converging context because all execution sequences converge to the correct solution. Compatible rules can be fired simultaneously within a converging context without error. This is because firing a set of compatible rules concurrently is equivalent to executing them in some sequential order, and all execution sequences reach the correct solution for a converging context (for proof, see [5]). For a nonconverging context, conflict resolution must be used to reach the correct solution. The performance of a nonconverging context can be improved by parallelizing its conflict resolution. A parallel nonconverging context is a nonconverging context whose conflict resolution is parallelizable, and as a result multiple rules may be selected. The conflict resolution for a sequential context is not parallelizable, and only sequential execution is possible. By dividing contexts into different types and applying the correct execution model for each type, the compatibility and the convergence problems are resolved at the rule level.

The MCMR model resolves the compatibility and the convergence problems at the context level by analyzing the control flow diagram to determine which contexts are allowed to be active at the same time. These contexts are called compatible contexts.
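The context-level check (two contexts may be active together only when the sets of contexts reachable from each via the control-flow arcs, including the context itself, are disjoint and free of cross-dependences) can be sketched as follows. The edge list is chosen to reproduce the reachable sets tabulated in Figure 1, and the rule-dependence test is stubbed out; both are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: reachable sets over the context-level control flow diagram,
# and the pairwise context compatibility test built on them.

control_flow = {            # directed arcs between contexts (illustrative)
    "C0": ["C1", "C2"],
    "C1": ["C3", "C4"],
    "C2": ["C5"],
    "C3": [],
    "C4": ["C5"],
    "C5": [],
}

def reachable(ctx):
    # Depth-first walk; a context is included in its own reachable set.
    seen, stack = set(), [ctx]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(control_flow[c])
    return seen

def compatible(ci, cj, rules_dependent=lambda a, b: False):
    # Compatible iff reachable sets are disjoint and no rule in one set
    # depends on a rule in the other (dependence check stubbed out here).
    ri, rj = reachable(ci), reachable(cj)
    return not (ri & rj) and not rules_dependent(ri, rj)

assert reachable("C1") == {"C1", "C3", "C4", "C5"}
assert not compatible("C2", "C4")     # both reach C5
assert compatible("C3", "C2")         # disjoint, no shared descendants
```

Running `compatible` over all context pairs yields exactly the CC matrix described in the next paragraph.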
Two contexts are compatible if their reachable sets do not intersect and rules in the two reachable sets do not have data dependences (for proof, see [4]). The reachable set for a context Ci is the set of contexts which are reachable by following the directed arcs in the control flow diagram starting from Ci. Context Ci is included in its own reachable set. The reachable set for context C1 for the example production program in Figure 1 is {C1, C3, C4, C5}.

The production rules are analyzed at compile time to generate the compatibility context matrix CC = [ccij]. Two contexts Ci and Cj are compatible and are allowed to be active at the same time if ccij = 0; they are incompatible if ccij = 1. The programmer then consults the CC matrix and modifies the production rules if needed so that only compatible contexts will be activated concurrently during production program execution. In this way, the compatibility and the convergence problems are resolved at the context level.

3 RUBIC Parallel Inference Model

Because the traditional rule-based inference cycle is sequential in nature and does not fit well with the MCMR model, a new parallel inference model has been developed. This new model is called the RUBIC parallel inference model and is shown in Figure 2. It consists of seven phases: match, local conflict resolution (LCR), context-wide conflict resolution (CCR), context-wide compatibility determination (CCD), act, send-message and receive-message.
[Figure 2: RUBIC Parallel Inference Cycle and Message Format. The legend marks phases executed for all contexts, phases executed for convergent contexts only, and phases executed for parallel nonconvergent and sequential contexts only (LCR: local conflict resolution; LCD: local compatibility determination; CCR: context-wide conflict resolution; CCD: context-wide compatibility determination). The message format consists of a type, a context number and a body.]

Before the start of production program execution, the production rules and the working memory are distributed among the processing nodes. Each processing node compiles its rules into a Rete network. During the execution of production programs, each node executes the RUBIC parallel inference cycle asynchronously, and the nodes communicate with each other by messages. The action each node takes depends on its incoming message. If processing node Nodei receives two messages telling it to execute the match phase for context C1 first and then the CCD phase for context C2, it would execute the match phase for C1 first and then the CCD phase for C2. At the same time, Nodej may be executing the CCR phase for context C3 and the act phase for context C4, depending on its incoming messages. Each outgoing message is prepared and sent to the appropriate nodes during the send-message phase. When each node finishes processing a message, it enters the receive-message phase. If there are messages waiting to be processed or newly arrived messages, the oldest message is examined and the appropriate action is taken. If no message is available, the node loops and waits for an incoming message. By allowing each node to execute in an asynchronous, message-driven fashion, the variance between execution times for different processing nodes is reduced and the performance is improved.

We also need to be able to detect the termination of a production program, which occurs when there is no rule matched; i.e., all tasks have been completed.
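The message-driven node loop just described can be rendered schematically. This is a single-process toy with a Python deque standing in for iPSC/2 message passing, and the handler only logs the dispatch instead of running real phases; all names are illustrative.

```python
# Schematic sketch of an asynchronous, message-driven RUBIC node:
# dequeue the oldest message, dispatch on its type, repeat.
from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.queue = deque()   # oldest message at the left
        self.log = []

    def handle(self, msg):
        msg_type, context, body = msg
        # A real node would run the match, CCR, CCD or act phase here.
        self.log.append((msg_type, context))

    def run(self):
        while self.queue:      # a real node would block for messages, not exit
            self.handle(self.queue.popleft())

node_i = Node("Node_i")
node_i.queue.append(("MATCH", "C1", None))
node_i.queue.append(("CCD", "C2", None))
node_i.run()

# Messages are processed oldest-first, so the match phase for C1 runs
# before the CCD phase for C2, as in the example in the text.
assert node_i.log == [("MATCH", "C1"), ("CCD", "C2")]
```

Because each node drains its own queue independently, no global barrier is needed between phases, which is the source of the reduced variance claimed above.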
By requiring that the context WME for a context be removed at the completion of that context, a production program concludes its computation when no context WME exists. This is easily implemented with a special termination rule whose LHS condition is satisfied when no context WME is present in the working memory and whose RHS action consists of an explicit halt command. The implementation of the RUBIC parallel inference model on the Intel iPSC/2 hypercube is explained below in more detail.

RUBIC Parallel Inference Model

Match: The Rete algorithm is used to match the production rules. However, at the end of the match phase, different actions are executed depending on the types of contexts. For rules in convergent contexts, the processing node skips the local conflict resolution phase and enters the send-message phase immediately. During the send-message phase, the rule numbers of the matched rules, not rule instantiations, are sent by messages to the designated CCD node for compatibility determination. Different converging contexts have different designated CCD nodes, allowing the compatibility determination phases for compatible contexts to be executed asynchronously. For rules in parallel nonconverging and sequential contexts, each processing node executes the local conflict resolution phase immediately at the conclusion of the match phase.

Local Conflict Resolution: The local conflict resolution phase is executed for the sequential and the parallel nonconverging contexts, but is skipped for the convergent contexts. For a sequential context, each node uses the conflict resolution strategy to select the local dominant rule instantiation. At the end of the local conflict resolution phase, the send-message phase is called to send this dominant rule instantiation to the designated CCR processor for context-wide conflict resolution.
For a parallel nonconverging context, the local conflict resolution phase is parallelized to select possibly multiple rule instantiations. The node then enters the send-message phase and sends these instantiations to the designated CCR node. Each parallel nonconverging context and sequential context has a different CCR node, allowing the context-wide conflict resolution to be executed asynchronously for different contexts.

Context-wide Conflict Resolution: The context-wide conflict resolution is not executed for converging contexts. For a sequential context, a context-wide dominant rule is selected from the local dominant rules. The send-message phase is called to send the context-wide dominant rule to all nodes containing rules in that context. For a parallel nonconverging context, the context-wide conflict resolution is parallelized and multiple rule instantiations may be selected. The send-message phase is again called to send messages to all nodes containing rules in that context.

Context-wide Compatibility Determination: The context-wide compatibility determination phase is not executed for parallel nonconverging and sequential contexts. For a converging context, a rule is chosen arbitrarily as the dominant rule. To conserve storage space, the parallelism matrix P is broken down into smaller Pi matrices, one for each context. The CCD node consults its Pi matrix to select a set of compatible rules. At the end of the CCD phase, the send-message phase is called to deliver messages to all processors containing rules in that context.

Act: Each node examines the incoming messages and executes the right-hand sides of the selected rules. The communication matrix C is used to send messages between processors, updating and keeping the WM consistent. If cij = 1, then a message needs to be sent to PEj when rule Ri is fired. If cij = 0, no message needs to be sent.

Send-Message: The send-message phase deals primarily with the communication protocol for the iPSC/2.
It is implemented as a distinct module to make the final code more portable to other message-passing machines. A message consists of three components: a message type, a context number and the body of the message. A message can be of type LCR, LCD (local compatibility determination), CCR, CCD or ACT. Even though LCD stands for local compatibility determination, there is no such operation; the name is used only to distinguish different types of messages. The type of a message dictates the action to be performed by the receiving node. The context number indicates the context in which the action should be executed. The body of a message contains information such as the local dominant rule, the set of selected rules and RHS actions. The message format is shown in Figure 2. Two matrices are needed to send messages: the allocation matrix A = [aij] and the communication matrix C. The allocation matrix contains the partitioning information. It is used to send LCR and LCD messages from nodes containing rules in a given context to the designated context node. It is also used to send the CCR and CCD messages back to the original nodes. The communication matrix is used to send ACT messages to different nodes to maintain a consistent working memory.

Receive-Message: All incoming messages are stored in a queue. The receive-message phase is called when the node finishes its current message. If there are newly arrived messages, these messages are enqueued; then the first message is dequeued and the appropriate action is taken depending on its type. If the queue is empty and there are no newly arrived messages, the node loops waiting for an incoming message.

Program Termination: A special termination rule is used to halt the program execution. This rule is satisfied when there is no context WME in the working memory. Its execution causes an explicit halt command to be sent to all nodes, which terminates the program execution.
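The three-component message format and A-matrix routing described above can be sketched as follows. The matrix contents and the simplified routing rule (every phase message goes to all nodes holding rules of the context) are assumptions for illustration; the paper distinguishes the designated-node direction from the broadcast-back direction.

```python
# Sketch of the RUBIC message format and allocation-matrix routing.
from dataclasses import dataclass

@dataclass
class Message:
    msg_type: str       # one of LCR, LCD, CCR, CCD, ACT
    context: int        # context in which the action should execute
    body: object        # e.g. local dominant rule, set of selected rules

# A[i][j] = 1 means node j holds rules of context i (partitioning info).
A = [[1, 0, 1],         # context 0 lives on nodes 0 and 2
     [0, 1, 0]]         # context 1 lives on node 1

def destinations(msg):
    if msg.msg_type in ("LCR", "LCD", "CCR", "CCD"):
        # Route between the context's designated node and the rule holders.
        return [j for j, bit in enumerate(A[msg.context]) if bit]
    # ACT messages are routed by the communication matrix C instead.
    raise ValueError("ACT routing uses the communication matrix C")

assert destinations(Message("LCR", 0, "dominant-rule")) == [0, 2]
assert destinations(Message("CCD", 1, {"R5"})) == [1]
```

Keeping the routing tables outside the phase code is what makes the send-message module portable to other message-passing machines, as the text notes.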
4 Results

In this section, we first present the simulated speedups obtainable for production programs using the RUBIC parallel inference cycle, and then present the performance achieved on the Intel iPSC/2 hypercube. A logic-level simulator has been developed to measure the theoretical speedups by assuming that there are an infinite number of processors. This simulator is written in Common LISP and is currently running on a Sun SPARC workstation. Six test production programs developed at USC, CMU and Columbia have been simulated and their performance measured. To measure the performance of production programs on the iPSC/2 hypercube, we have added the necessary message-passing protocols to the simulator code. Because of the large memory requirement of the LISP program, we were only able to run 8 nodes concurrently. We have finished measuring the performance of two test programs and are in the process of measuring the rest.

4.1 Simulation Results

Six test production programs developed at USC, CMU and Columbia have been simulated under two models: sequential execution and the RUBIC parallel inference model. By analyzing the simulation results, we can verify the validity of the MCMR model and measure its speedups. Table 4.1 lists and describes the six test programs. Because the test programs range from small programs to large programs with varying degrees of parallelism and contain all three types of contexts, they represent a good mix of production programs. The sequential and MCMR simulation results are listed in Tables 4.2 and 4.3 respectively.

Table 4.1: Test Production Systems

  Program  Description
  A        Tournament: scheduling bridge tournaments
  B        Toru-Waltz16: implementing Waltz's edge labelling algorithm
  C        Cafeteria: setting up a cafeteria
  D        Snap-2d: a two-dimensional semantic network
  E        Snap-TA: verifying the eligibility of TA candidates
  F        Hotel: modeling hotel operations

(Snap is a simulator for the semantic network array processor under development at USC [7].)

Table 4.2: Sequential Simulation Results

  Measurements            A    B    C    D    E     F
  # of rules              26   48   94   574  574   832
  # of sequential cycles  528  207  493  590  1175  5115

[Table 4.3: Simulation Results using the RUBIC Parallel Inference Model]

All six test programs reached the correct solution when they were executed using the RUBIC parallel inference model; therefore the validity of the MCMR model was verified. For Tournament and Toru-Waltz16, only one context was activated at a time. They obtained speedups of 3.18-fold and 6.21-fold by firing multiple rules to exploit the available parallelism within a context. On the other hand, Cafeteria, Snap-2d, Snap-TA and Hotel were able to exploit the parallelism across different contexts by activating multiple contexts simultaneously. As a result, they achieved speedups from 6.32-fold to 19.45-fold. This indicates that there is considerable parallelism in production programs and that the RUBIC parallel inference model can effectively capture the available parallelism.

4.2 Hypercube Performance Results

The RUBIC parallel inference cycle has been implemented on the Intel iPSC/2 hypercube with 8 nodes. We have run two test programs, Cafeteria and Hotel, using the static partitioning-by-context scheme. When rules are partitioned by context, all rules in the same context are mapped to the same processing node. The performance of Cafeteria and Hotel on the Intel iPSC/2 hypercube is shown in Figure 3. Cafeteria and Hotel were able to achieve good speedups when rules are partitioned by contexts. Note that, due to the timing limitations of LISP on the iPSC/2, only the real time was measured.

[Figure 3: Performance for Partition-by-Context. Speedup versus number of nodes for Cafeteria and Hotel.]
The theoretical speedup for Cafeteria is 6.32 and it achieved a speedup of 3.67 on 8 nodes. The theoretical speedup for Hotel is 8 (only 8 nodes are available) and it achieved a speedup of 3.79. Since only eight nodes were available, the upper bound on speedup is eight. For this reason, the performances of Cafeteria and Hotel were quite close. But if more nodes were available, we expect the performance of Hotel would continue to increase while the performance of Cafeteria would begin to level off.

Even though good speedups were obtained by allocating the rules in a context to the same node, it is not clear whether this is the best partition. We are developing a partitioning algorithm which uses simulated annealing for rule allocation. This algorithm will read in the run-time information and use the parallelism and the communication matrices to estimate the computational and communication costs of each partitioning. To accomplish this goal, we are also in the process of extending the timing functions to provide better run-time information.

5 Summary

In this paper, we have presented the multiple-contexts-multiple-rules (MCMR) model, which guarantees the correctness of the obtained solution when multiple rules are fired. Six test programs have been simulated under the MCMR model and all six programs reached the correct solutions. Speedups of 3.18-fold to 19.45-fold have been obtained for these programs, which indicates that there is considerable parallelism in production programs.

To implement the MCMR model effectively on the Intel iPSC/2 hypercube, the RUBIC parallel inference cycle has been developed. The production programs Cafeteria and Hotel have been executed on the iPSC/2 hypercube and obtained good performance. The theoretical speedup for Cafeteria is 6.32-fold, and it obtained a speedup of 3.67-fold on 8 nodes. The theoretical speedup for Hotel is 8-fold (only 8 nodes are available) and it achieved a speedup of 3.79-fold.
The hypercube performance results indicate that there is considerable parallelism in production programs and that the RUBIC parallel inference model can effectively capture the available parallelism.

References

[1] Chandy, K. M., Misra, J. Parallel Program Design: A Foundation. Addison-Wesley, Reading, Massachusetts, 1988.

[2] Gamble, R. "Transforming Rule-based Programs: From the Sequential to the Parallel." Third Int'l Conference on Industrial and Engineering Applications of AI and Expert Systems, July 1990.

[3] Ishida, T. et al. "Towards the Parallel Execution of Rules in Production System Programs." International Conference on Parallel Processing, 1985, 568-575.

[4] Kuo, S., Moldovan, D., Cha, S. "Control in Production Systems with Multiple Rule Firings." Technical Report No. 90-10, Parallel Knowledge Processing Laboratory, USC.

[5] Kuo, S., Moldovan, D., Cha, S. "A Multiple Rule Firing Model - The MCMR Model." Technical Report, Parallel Knowledge Processing Laboratory, USC.

[6] Miranker, D. P., Kuo, C., Browne, J. C. "Parallelizing Transformations for a Concurrent Rule Execution Language." Proceedings of the International Conference on Parallel Processing, 1990.

[7] Moldovan, D., Lee, W., Lin, C. "SNAP: A Marker-Propagation Architecture for Knowledge Processing." Technical Report No. 90-1, Parallel Knowledge Processing Laboratory, USC.

[8] Schmolze, J. "A Parallel Asynchronous Distributed Production System." Proceedings of the Eighth National Conference on Artificial Intelligence, AAAI-90, 65-71.
IXM2: A Parallel Associative Processor for Knowledge Processing

Tetsuya Higuchi1, Hiroaki Kitano2, Tatsumi Furuya1, Ken-ichi Handa1, Naoto Takahashi3, Akio Kokubu1

Electrotechnical Laboratory1, 1-1-4 Umezono, Tsukuba, Ibaraki, Japan 305
Carnegie Mellon University2, Center for Machine Translation, Pittsburgh, PA 15213
University of Tsukuba3, 1-1-1 Tennodai, Tsukuba, Ibaraki, Japan 305
Email: higuchi@etl.go.jp

Abstract

This paper describes a parallel associative processor, IXM2, developed mainly for semantic network processing. IXM2 consists of 64 associative processors and 9 network processors, having a total of 256K words of associative memory. The large associative memory enables 65,536 semantic network nodes to be processed in parallel and reduces the order of algorithmic complexity to O(1) in basic semantic net operations. We claim that intensive use of associative memory provides far superior performance in carrying out the basic operations necessary for semantic network processing: intersection, marker propagation, and arithmetic operations.

1 Introduction

In this paper, we propose a parallel associative memory processing architecture and examine its performance superiority over existing architectures for massively parallel machines. The parallel associative memory processing architecture is characterized by its intensive use of associative memory to obtain massive parallelism. The architecture is ideal for processing very large knowledge bases, which are often represented by semantic networks. We have implemented the IXM2 associative memory processor based on our architecture in order to validate the benefits of the architecture.

Several efforts are underway to develop very large knowledge bases (VLKBs) which contain over a million concepts. MCC's CYC [Lenat and Guha, 1989] and EDR's electronic dictionaries [EDR, 1990] are such examples. The basic framework of these knowledge bases can be represented by semantic networks [Quillian, 1967].
While notable effort has been made to develop a sound theory of how to represent and develop VLKBs, no significant investigation has been made into how to process them. The obvious problem in processing a VLKB, as opposed to a small or medium size knowledge base, is its computational cost. Even a simple operation to propagate markers through a certain link requires increasing computing time on serial machines as the size of the network grows. This also applies to the three basic operations for processing semantic networks: (1) intersection search, (2) marker propagation, and (3) arithmetic operations.

One obvious way out of this problem is the development of massively parallel machines. There are several massively parallel machines already developed, or currently being developed (SNAP [Moldovan, 1990], the Connection Machine [Hillis, 1985]). In general, these machines assume a one-node-per-processor mapping of the semantic network onto the hardware. The underlying assumption is that significant speedup can be obtained due to parallel computing by each processor in SIMD manner. However, the pitfalls of this approach are that (1) processing within each processor is performed in a bit-serial manner, and (2) all marker propagation must be done through communication links, which is very slow. This implies that current architectures exhibit serious degradation of performance regardless of the fact that these operations look highly parallel to a user who observes the phenomena from outside the processors. In scientific computing, especially in matrix computing, all PEs are always active and communications are limited to neighbor PEs; thus it takes full advantage of SIMD parallelism. However, in the semantic network, although most processing can be carried out in a SIMD manner, not all PEs are activated all the time.
The number of PEs active at a time varies during processing, and it can range from a few PEs to thousands of PEs. Communication often needs to be performed between distant PEs. Thus, the processing and communication capability of each PE significantly affects the overall performance of the system. Unfortunately, for a machine with 1-bit PEs, bit-serial operations and communication hamper high performance processing.

In this paper, we propose a new approach to massively parallel computing in order to avoid the problems described above. Our approach is based on intensive use of large associative memories. The IXM2 is a machine built on this paradigm. The IXM2 consists of 64 associative processors and 9 network processors. These are interconnected based on a complete connection scheme to improve marker propagation, and provide 256K words of associative memory. Using an associative memory of this size, IXM2 can perform parallel processing of 65,536 semantic network nodes with a processing time of O(1) in basic operations, and only a minimum of communication is carried out between processors.

2 Problems of Current Massively Parallel Machines

The central problem which prevents current massively parallel machines from further performance improvement in AI applications is that not all PEs are always active. The number of active PEs at a time ranges from one to a few thousand. Thus, a performance bottleneck emerges when a relatively small number of PEs are active and non-local message passing in irregular communication patterns is required. In such cases, the two characteristics of current massively parallel machines, (1) bit-serial operation in each processor and (2) bit-serial communication between processors, cause the degradation of performance.
This is because current massively parallel machines assume tasks where most of the 1-bit PEs are highly utilized during execution and communications among PEs are local and simultaneous. In VLKB processing, marker propagation is especially tough in this respect. In addition, since one node in the semantic network is mapped to a single PE, any propagation of a marker must go through communication links, which results in the so-called Hillis bottleneck.

This section reviews problems in current architectures for two basic operations for processing semantic networks: (1) set intersection and (2) marker propagation.

Set intersection is a very important operation in AI and is frequently performed to find the set of PEs with two properties. Although set intersection contains SIMD parallelism, there is room for further improvement because current architectures carry out the intersection in a bit-serial manner in each 1-bit PE.

Marker propagation, however, presents more serious problems. First, propagation of markers from a base node to N descendant nodes requires a sequential marker propagation operation to be carried out N times at the 1-bit PE of the base node. In addition, the serial link is very slow. Thus, as average fan-out increases, and hence parallelism increases, current architectures suffer from severe degradation. Second, marker propagations are very slow because all the propagations are performed through message-passing communication among PEs. Message-passing communication between PEs is slow due to the limited bandwidth of communication lines, the delays caused by intervening PEs in message passing, message collisions, and so on. For these reasons, marker propagation on the Connection Machine is two orders of magnitude slower than on the SUN-4, Cray, and IXM2.

3 IXM2 Architecture

3.1 Design Philosophies behind IXM2

The use of associative memory is the key feature of the IXM2 architecture.
Needless to say, IXM2 is a parallel machine built from a number of processors. However, the IXM2 attains massive parallelism through associative memory, not through the processors themselves. IXM2 has only 64 T800 transputers, each of which has a large (4K-word) associative memory [Ogura, 1989]. Instead of having thousands of 1-bit PEs, IXM2 stores nodes of the semantic network in each associative memory. Because each associative memory can store up to 1,024 nodes, IXM2 can store 65,536 nodes in total.

Four major issues have been addressed in the design decision to use associative memory in the IXM2: (1) attainment of parallelism in operations on each node, (2) minimization of the communication bottleneck, (3) powerful computing capability in each node, and (4) parallel marker propagation. The term node refers to a node in the semantic network, not necessarily that of a single PE.

By allocating nodes of semantic networks to associative memories, association and set intersection can be performed in O(1). Because of the bit-parallel processing on each word of associative memory, these operations can be performed faster than on 1-bit PEs.

Allocating multiple nodes to a PE offers a significant advantage in minimizing the communication bottleneck. Because a large number of nodes can be loaded into a PE, propagation of markers from a node to another node on the same PE can be done not by communication between PEs, but by memory reference. In addition, to deal with marker propagation between PEs, IXM2 employs a full connection in which all processors in a cluster are directly connected. Thus, it decreases the time required for communication between PEs more than other connection models such as the N-cube or torus. Specifically, the full connection minimizes the chance of collisions and of message-interleaving PEs.

Associative memory has extremely powerful logical and arithmetic computing power.
When an operand is stored in each word of the associative memory, a bit-serial operation can be executed on all words in parallel. The massive parallelism offered by this mechanism provides nanosecond-order execution time per datum. By use of the parallel write and search functions of associative memory, multiple marker propagation from a node can be done in parallel, independent of the number of fan-outs (O(1)). We call this powerful feature parallel marker propagation.

3.2 Overall structure

IXM2 contains 64 associative processors (APs) and 9 network processors (NPs) for communication. Figure 1 shows an external view of the IXM2 (left) and its structure (right). Eight APs and one NP form a processing module (PM), where the eight APs are completely interconnected. In recursive fashion, the eight processing modules are also interconnected with each other and are connected to one NP which has the connection with the host SUN-3. IXM2 works as an AI co-processor. A technical summary of IXM2 is given in the appendix.

HIGUCHI, ET AL. 297

Figure 1: The IXM2: An External View and its Structure

IXM2 employs complete connections to speed up marker propagation among APs. Marker propagation between two APs has to be done as much as possible between APs which are directly connected; the message path distance in marker propagation must be kept as close to 1 as possible. Although it is almost impossible to establish a complete connection among all 64 APs, a complete connection among a smaller number of APs is possible. Eight APs are selected as the unit of complete connection due to implementation requirements. Furthermore, it is possible to keep the communication locality in marker propagation within these 8 APs.

Table 1: Average message path distance for IXM2 and other interconnections

  No. of PEs   IXM2   hypercube   torus
  64           2.77   3.04        4.03
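Table 1's hypercube entry can be checked with a short brute-force computation. The sketch below is our own illustration, not part of the paper: it computes the average Hamming distance between distinct nodes of a 64-node hypercube, giving approximately 3.05, which matches the 3.04 reported in Table 1 up to rounding.

```python
from itertools import product

def hypercube_avg_distance(dim: int) -> float:
    """Average shortest-path (Hamming) distance between distinct
    nodes of a 2**dim-node hypercube."""
    n = 2 ** dim
    total = sum(bin(a ^ b).count("1")
                for a, b in product(range(n), repeat=2) if a != b)
    return total / (n * (n - 1))

print(round(hypercube_avg_distance(6), 2))  # 64-PE hypercube, about 3.05
```

The exact value is 192/63; the paper's 3.04 presumably reflects a slightly different rounding or averaging convention.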
It is known that a large semantic network can be divided into sub-semantic networks, with dense connectivity among the nodes of a given sub-semantic network, but with relatively few connections to the outside sub-semantic networks [Fahlman, 1979].

Even in a worst-case scenario, where communication locality within a PM is difficult or impossible to maintain (i.e., marker passing occurs between any pair of the 64 APs), the average message path distance in the IXM2 interconnection can be kept smaller than that of the hypercube and torus, as shown in Table 1. There, marker passings are assumed to occur between each AP and all other APs in the system, and then the average length of each message path is calculated.

The programming language for IXM2 is the knowledge representation language IXL [Handa, 1986]. In IXM2, data and programs for semantic network processing are allocated as follows: (1) a large semantic network to be processed by IXM2 is partitioned into semantic sub-networks and stored in the associative memory of each AP; (2) subroutines to execute IXL commands¹ are stored in the local memory of each AP.

The network processors broadcast an IXL command simultaneously to all the APs. NPs also accept results from each AP and pass them back to the host computer for the IXM2 (SUN-3/260). An IMS B014 transputer board is installed in the SUN-3 to control the IXM2, load occam2 programs into the IXM2, collect answers returned from the IXM2, handle errors, and so on.

¹ IXL commands are predicates defined for semantic network processing.

4 Parallel Processing Using Associative Memories

This section describes how parallel processing is performed on an associative memory. We begin by describing the data representation of a semantic network. Then parallel marker propagation and set intersection are described.

4.1 Representation of Semantic Network

The semantic network representational scheme in IXM2 is strongly node-based.
Each node stores information in both associative memory and RAM.

Node information stored in associative memory is intended to be processed with the massive parallelism provided by the large associative memory (256K words). By this means, the times for association, set intersection and marker propagation operations can be reduced to O(1). The node information in associative memory comprises: (1) a marker bit field (28 bits), (2) a link field (8 bits), (3) a parallel marker propagation identifier (abbreviated PID; 22 bits), and (4) a literal field (16 bits).

The marker bit field stores the results of processing and is used much like a register in microprocessors. There are 28 marker bits in the current implementation.

The link field consists of 8 bits; each bit indicates the existence of a primitive link through which the node is connected to other nodes. Four types of primitive links are defined in IXM2 to support the basic inference mechanisms of the knowledge representation language IXL, which is an extended Prolog for IXM2. The primitive links are isa, instance-of (iso), destination (des), and source (soc). Because the direction of a primitive link must be distinguished, there are 8 bits in a link field; from the most significant bit (MSB), they are risa, isa, riso, iso, rdes, des, rsoc and soc. An 'r' signifies an inverse link. If a node is pointed to by an isa link, the node has a risa link and the MSB of the link field becomes 1.

The literal field is prepared for a node which is itself a value and is processed by algorithms which exploit the massive parallelism of large associative memories.

On the other hand, the following node information is kept in RAM: (1) destination nodes from the node (classified according to link type), (2) parallel marker propagation identifiers (PID), and (3) search masks for parallel marker propagation (PMASK).
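The 8-bit link field can be pictured with a small sketch (ours, not IXM2 code): each bit, MSB first, flags one of the eight directed link types.

```python
# MSB-first bit order of the link field, as listed in the text.
LINK_ORDER = ["risa", "isa", "riso", "iso", "rdes", "des", "rsoc", "soc"]

def link_field(links) -> str:
    """Encode the set of link types present at a node as its 8-bit link field."""
    return "".join("1" if name in links else "0" for name in LINK_ORDER)

print(link_field({"isa"}))   # a node with only an out-going isa link: 01000000
print(link_field({"risa"}))  # a node pointed to by an isa link:       10000000
```

The first line reproduces the '01000000' value that the text assigns to node C, which has only an out-going isa link.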
PID and PMASK are the information for parallel marker propagation and are classified according to the 8 link types.

Figure 2 shows an example of the representation of a semantic network. The node C points to the A node via an isa link, and the link field of node C has '01000000'. This is because that position of '1' in the link field represents that the C node has an out-going isa link. The destination field in the RAM area has 'A' in the isa part, because the destination of the C node connected by an isa link is the A node.

4.2 Marker Propagation

Marker propagation in IXM2 is performed either by sequential marker propagation or by parallel marker propagation.

Sequential marker propagation is performed by message passing either within an associative processor or among associative processors, using the destination information stored in the RAM area.

Parallel marker propagation is performed within an associative processor; it can perform multiple marker propagations from a large fan-out node in parallel. (In addition to this parallelism within one AP, parallelism among the 64 APs is available.) The rest of this section describes how parallel marker propagation is performed.

In the network example in Figure 2, marker propagation from node A to C, D and E can be performed using parallel marker propagation. We call node A a base node and nodes C, D and E descendant nodes. The basic idea of parallel marker propagation is to search for the descendant nodes and write a particular marker bit into them by use of associative memory; the search and parallel write functions are used. Specifically: (1) assign an identifier to all the descendant nodes, (2) assign the same identifier to the base node, and (3) (at the base node) issue the search operation with the identifier to find the descendant nodes, and set a new marker bit in parallel in the descendant nodes just searched.
The identifier in (1) and (2) is the parallel marker propagation identifier (PID) described earlier. This is provided beforehand by the allocator and loaded with the network. In the search in (3), the search mask (PMASK) described earlier is used to search only the bits that take part in the matching.

Using this method, parallel marker propagation is performed in Figure 2 as follows. Suppose parallel marker propagation is to be performed from the A node to the C, D and E nodes. First, the PID and the PMASK are retrieved from the RAM area for the A node: '0100' for the PID and '0011' for the PMASK. They are set into the search data register (SDR) and the search mask register (SMR) of the associative memory respectively, as shown in Figure 3 (b); the bits of the dotted area in the search mask register are all one, and the search for those bits is disabled. Next the search operation is executed; the words for the C, D and E nodes are hit by this search. Finally, the parallel write operation, which is a function of the associative memory, is performed to set marker bit 1 at the same time in each of the three words just searched. Similarly, marker bit 2 of the members of set B (nodes D and E) is written using parallel marker propagation.

The data for parallel marker propagation such as PID and PMASK are prepared by the semantic network allocator. The allocator recognizes pairs of { a base node, link type, descendant nodes } and gives each pair a unique identifier (PID) and a search mask (PMASK). The recognition of such pairs is based on the number of fan-outs, which is given to the allocator as a parameter. If the parameter is N, only nodes with more than N fan-outs are recognized as candidates for parallel marker propagation. For example, in Figure 2, N is 2 and two pairs are recognized: { A, risa, (C, D, E) } and { B, risa, (D, E) }.
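The search-then-parallel-write procedure can be mimicked in software. The following toy model is our simplification (the real PID/PMASK bit semantics are richer, and class and variable names are ours): it stores one PID per word and sets a marker bit in every word whose PID matches under the mask.

```python
class AssocMemory:
    """Toy model of an AP's associative memory: one dict entry per word."""

    def __init__(self):
        self.words = {}  # node name -> {"pid": int, "markers": set of ints}

    def add(self, name, pid):
        self.words[name] = {"pid": pid, "markers": set()}

    def search_and_write(self, pid, pmask, marker):
        """Masked search (1-bits of pmask participate in the match), then a
        'parallel write' of the marker bit into every hit; conceptually this
        is a single O(1) step in the hardware."""
        hits = [w for w in self.words.values()
                if (w["pid"] & pmask) == (pid & pmask)]
        for w in hits:  # one parallel write in the real machine
            w["markers"].add(marker)
        return len(hits)

mem = AssocMemory()
for node in ("C", "D", "E"):      # descendants of A share one PID
    mem.add(node, pid=0b0100)
hits = mem.search_and_write(pid=0b0100, pmask=0b1111, marker=1)
print(hits)  # all three descendants marked in one conceptual step
```

The point of the sketch is that the cost of the marked write does not depend on the fan-out: one search selects every descendant at once.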
The number of pairs recognized can be controlled by changing the value of N, although the 22-bit PID is long enough for the roughly 1,000 nodes on each PE, so it is unlikely that these 22 bits get used up.

Figure 2: Data representation of semantic network

4.3 Set Intersection

To obtain the intersection of sets A and B, we only have to search those words at which both marker bits 1 and 2 are set. Figure 3 (a) shows the status of associative memory after the two parallel marker propagations. By setting the search mask and the search data registers as in Figure 3 (c), the intersection can be found in one search operation. Then a parallel write operation can be performed to set marker bit 3 of the D and the E nodes. Thus, set intersection can be done in O(1).

Figure 3: Set intersection with associative memory

5 Performance Evaluation

This section discusses the performance of IXM2 in two contexts: basic operations and applications. The IXM2 performance is compared with results from other high-performance machines such as the Connection Machine (CM-2), the Cray X-MP and the SUN-4/330.

Sequential computers become very slow as the data grows larger. This is true even for the Cray, in spite of an indexing algorithm of O(N). Although the Cray is much faster than the SUN-4 because of the vectorization available in the algorithm, there is a difference of three orders of magnitude between IXM2 and Cray in the processing of 64K data.
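In software terms, the O(1) intersection of Section 4.3 reduces to one masked search over marker bits plus one parallel write. A minimal sketch (ours), using the C/D/E example where marker 1 denotes membership of set A and marker 2 of set B:

```python
nodes = {
    "C": {1},     # marker 1 only: in set A
    "D": {1, 2},  # in both A and B
    "E": {1, 2},
}
# One conceptual search: words with both marker bits 1 and 2 set ...
both = sorted(name for name, markers in nodes.items() if {1, 2} <= markers)
# ... followed by one parallel write of marker bit 3 into the hits.
for name in both:
    nodes[name].add(3)
print(both)  # ['D', 'E']
```

In the hardware, the loop body is a single parallel write, so the whole intersection costs one search plus one write regardless of set size.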
The figures for IXM2 shown below were measured using an IXM2 whose clock speed is 17.5 MHz. In the programs for IXM2, IXM machine instructions are written in occam2. Programs for the Cray and SUN are written in C and are optimized with -O4. Programs for the CM-2 are written in C* and are optimized. Programs are executed on a CM-2 whose PE clock is 7.0 MHz. Execution times have been measured using timers on the CM-2 and they show CM busy time, excluding host interaction timings.

5.1 Basic operations

5.1.1 Association and set intersection

Because of the bit-parallel processing in the associative memory, the execution time of the association operation on IXM2 is always 12 µs for any amount of data up to 64K. It is also possible on sequential machines to implement O(1) association time using hashing or indexing.

However, set intersection on large data is very time consuming on sequential machines. Table 2 shows the performance of set intersection on IXM2, CM-2, Cray X-MP and SUN-4/330, where two sets of the same size are intersected. IXM2 can consistently perform the set intersection in 18 µs for any size of data up to 64K; the set intersection is performed in O(1). Although the CM-2 can also perform the set intersection constantly in 103 µs, IXM2 is faster because of its bit-parallel associative memory.

Table 2: Execution times of set intersection (µs) [table body not recoverable from the scan]

5.1.2 Marker propagation

Next we compare the performance of marker propagation. First, we compare the time to complete propagation of markers from one node to all descendant nodes. The left chart of Figure 4 shows the performance of each machine with different fan-out from the node. IXM2 is outperformed when only one link exists from the node. However, if the average fan-out is over 1.75, IXM2 outperforms the Cray and the SUN-4.² If the average fan-out is nearly 1, using a parallel machine is not a rational decision in the first place.
It should be noticed that IXM2 completes propagation in constant time due to its parallel marker-passing capability with associative memory. The parallel marker propagation by one AP constantly takes 35 µs, independent of the number N of descendant nodes. On serial machines (Cray and SUN-4), computation time increases linearly with the number of fan-outs. The CM-2 also requires linearly more time as fan-out increases. This is due to its serial-link constraint that markers for each descendant node have to be sent in a serial manner. As we discussed in Section 2, the CM-2 does not gain the advantage of parallelism at each processor. Thus, if the average fan-out is over 1.75, IXM2 will provide faster marker propagation than any other machine.

The right chart of Figure 4 shows performance against parallel activation of marker propagation. We used a network of 1,000 nodes with a fan-out of 10. By parallel activation, we mean that more than one node is simultaneously activated as a source of marker propagation. On the X-axis, we show the level of parallelism. Parallelism 1,000 means that markers are propagated from 1,000 different nodes at a time. The time measured is the time to complete all propagations. Obviously, serial machines degrade linearly with parallelism. IXM2 shows similar linear degradation, but with a much smaller coefficient. This is because IXM2 needs to fire nodes sequentially at each T800 processor. The data in this graph is based on one AP out of the 64 APs. Thus, when all 64 APs are used, the performance improves nearly 64 times. The CM-2 has almost constant performance because all nodes can simultaneously start propagation. It is important, however, to notice that the CM-2 outperforms only when the parallelism exceeds a certain level (about 170 in this example), in the case where only one AP of the IXM2 is used. This would be equivalent to 10,880 with 64 APs.
This implies that if applications do not require more than 10,880 simultaneous marker propagations, IXM2 is a better choice than the CM-2. This trade-off point, however, changes as the average fan-out changes.

5.1.3 Arithmetic and logical operations

Node information can contain the literal field in associative memory when a node is itself a value. This literal field can be processed with bit-serial algorithms for associative memory [Foster, 1976]. Execution time is constant, independent of the number of data items. Therefore, the execution time per item becomes extremely small if the number of data items stored in associative memory is large. The less-than operation takes on average 36 µs for the comparison of 32-bit data. This seems quite slow when compared with the execution time on sequential computers; for example, it takes 1.25 µs in an occam2 program run on a T800 transputer at 20 MHz. However, it corresponds to an execution time per datum of 0.56 ns (nanoseconds) when each of 64K nodes contains a literal field and is processed in parallel. Although the number of data items available as candidates for the processing is application dependent, the associative memory algorithm will surpass the performance of sequential computers if there are at least 100 candidates. The additions of 8-bit data and 16-bit data take 46 µs (0.72 nanoseconds per datum) and 115 µs (1.80 nanoseconds per datum) respectively.

² The reason why the Cray is slower than the SUN-4 in marker propagation is considered to be the overhead of the recursive procedure calls used in link traversal. Another Cray (Y-MP) was also slower.

5.2 Applications

Two applications have been developed so far on IXM2: a French wine query system, and a memory-based natural language processing system [Kitano and Higuchi, 1991]. The wine knowledge base consists of 277 nodes and 632 links.

Table 3: Query processing time (milliseconds) [table body not recoverable from the scan]
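The per-datum figures quoted in Section 5.1.3 can be reproduced by simple division; they come out exactly if "64K" is read as 64,000 items, which is our assumption in the sketch below.

```python
def per_datum_ns(total_us: float, items: int = 64_000) -> float:
    """Per-item time in nanoseconds when one constant-time associative
    operation covers `items` literal fields at once."""
    return total_us * 1000 / items

print(round(per_datum_ns(36), 2))   # 32-bit less-than: 0.56 ns
print(round(per_datum_ns(46), 2))   # 8-bit addition:   0.72 ns
print(round(per_datum_ns(115), 2))  # 16-bit addition:  1.8 ns
```

All three results match the 0.56 ns, 0.72 ns and 1.80 ns figures in the text.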
The sample query elicits wines which belong to Bordeaux with a ranking of 4 stars. Table 3 shows the results on IXM2, CM-2 and SUN-4/330. Although the network is relatively small to take advantage of parallelism, IXM2 performs better than the other machines. The maximum parallel activation is 87 in this knowledge base, so the CM-2 is much slower than IXM2. Moreover, over 95% of the computing time on the CM-2 was spent on communication to propagate markers between PEs.

In the natural language processing task, a network covering 405 words and the entire corpus has been created. The input starts from a phoneme sequence so that speech input can be handled. The network has a relatively large fan-out (40.6). Figure 5 shows the performance of the IXM2, the SUN-4, and the CM-2. Due to the large fan-out factor, IXM2 far surpasses the processing speed of the other machines (SUN-4 and CM-2). The SUN-4 is slow because set intersections are heavily used in this application.

6 Discussion

First, the drastic difference in set-operation performance between serial processors (SUN-4 and Cray) and SIMD parallel processors (CM-2 and IXM2) rules out the possibility of serial machines being used for large-scale semantic network processing in which extensive set operations involving over 1,000 nodes are anticipated.

Second, the performance comparison in marker propagation indicates that IXM2 exhibits superior performance to the CM-2 for many AI applications. IXM2 is consistently faster for processing semantic networks with large fan-out but limited simultaneous node activation. When the average fan-out is large, IXM2 has the advantage over the CM-2, and the CM-2 gains the benefit when large simultaneous activations of nodes take place. Let F and N be the average fan-out and the number of simultaneously activated nodes in the given task and network. IXM2 outperforms
Figure 4: Marker Propagation Time (left: parallel marker-propagation time vs. fan-out; right: parallel activation and performance; chart data not recoverable from the scan)

Figure 5: Parsing Time vs. Input String Length (chart data not recoverable from the scan)

the CM-2 when the following inequality holds:

  T_CM/link / T_IXM2/node > N / (F × AP)

AP is the number of APs used in IXM2, which ranges from 1 to 64. T_CM/link is the time required for the CM-2 to propagate a marker over one link from a node. T_IXM2/node is the time required for the IXM2 to propagate markers from one node to all descendant nodes. In this experiment, T_CM/link was 800 microseconds³ and T_IXM2/node was 35 microseconds. Of course, these values change as the system clock, software, compiler optimization and other factors change.

The average fan-outs of the actual applications which we have examined, the wine database and natural language, were 2.8 and 40.6, respectively. In order for the CM-2 to outperform IXM2, there must always be more than 4097 and 59392 simultaneous activations of nodes, respectively. Notice that this is the average number of simultaneous activations, not the peak number.

In the wine database, the maximum possible parallel activation is 87. In this case, all terminal nodes in the network simultaneously propagate markers. For this task, obviously IXM2 outperforms the CM-2. In natural language processing, the maximum parallelism is 405, in which all words are activated (which is not realistic). Since syntactic, semantic, and pragmatic restrictions are imposed in actual natural language processing, the number of simultaneous marker propagations would be a magnitude smaller than this figure. Thus, in either case, IXM2 is expected to outperform the CM-2, and this has been supported by the experimental results on these applications.
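Plugging the measured constants into this trade-off reproduces the break-even activation counts quoted above. The check below is ours (variable and function names are not from the paper): the CM-2 overtakes IXM2 only when N exceeds (T_CM/link / T_IXM2/node) × F × AP.

```python
T_CM_LINK_US = 800.0    # CM-2 time to propagate a marker over one link
T_IXM2_NODE_US = 35.0   # IXM2 time to propagate from a node to all descendants
AP = 64                 # number of associative processors used

def breakeven_n(fanout: float) -> float:
    """Average simultaneous activation N above which the CM-2 wins."""
    return (T_CM_LINK_US / T_IXM2_NODE_US) * fanout * AP

print(round(breakeven_n(2.8)))   # wine database (F = 2.8): 4096
print(round(breakeven_n(40.6)))  # NLP network (F = 40.6): 59392
```

The results, 4096 and 59392, agree with the "more than 4097" and "59392" activation thresholds stated in the text.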
Although the average parallelism throughout the execution would differ in each application domain, it is unlikely that such large numbers of nodes (4.1K and 59.4K nodes in our examples) continue to simultaneously propagate markers throughout the execution of the application. In addition, when such a large number of nodes simultaneously propagate markers, the communication bottleneck would be so massive that the performance of the CM-2 would be far less efficient than speculated from the data in the previous section.

In addition, there are other operations, such as set operations and logic and arithmetic operations, in which IXM2 has order-of-magnitude performance advantages. Thus, in most AI applications in which processing of semantic networks is required, IXM2 is expected to outperform the other machines available today.

³ This value is the best value obtained in our experiments. Other values we obtained include 3,000 microseconds per link.

7 Conclusion

In this paper, we proposed and examined the IXM2 architecture. The most salient features of the IXM2 architecture are (1) an extensive use of associative memory to attain parallelism, and (2) a full connection architecture. In particular, the use of associative memory provides the IXM2 with a truly parallel operation on each node, and nanosecond-order logical and arithmetic operations.

We have evaluated the performance of the IXM2 associative processor using three basic operations necessary for semantic network processing: intersection search, parallel marker propagation, and logical and arithmetic operations. A summary of the results:

• Association operations and set intersections can be performed in O(1). IXM2 attains high performance due to bit-parallel processing on each associative memory word.
• Parallel marker propagations from a large fan-out node can be performed in O(1) through the use of associative memory, while marker propagation implemented on a sequential computer or on the CM-2 requires time linear in the number of links along which a marker is passed.

• Arithmetic and logical operations are executed extremely fast due to the algorithms developed for associative memory. These algorithms fully utilize parallel operations on all words, thus attaining nanosecond performance in some cases.

Cases where other machines could possibly outperform IXM2 have been ruled out, because these situations are practically implausible. Thus, we can conclude that IXM2 is a highly suitable machine for semantic network processing, which is essential to many AI applications.

Besides the performance of IXM2, one of the major contributions of this paper is the identification of some benchmark criteria for massively parallel machines. While parallelism has been a somewhat vague notion, we have clearly distinguished parallelism in message passing (by a fan-out factor) from parallelism in simultaneous activation of PEs (by active node number). These criteria are essential in determining which type of machine should be used for which type of processing and network. For example, we can expect IXM2 to be better on large fan-out but not-too-large simultaneous activations (most AI applications are of this type), whereas the CM-2 is better when fan-out is small but a large number of PEs are always active (most scientific computing is of this type).

Acknowledgement

The authors thank Hiroshi Kashiwagi and Toshitsugu Yuba (ETL) for assisting this research, Takeshi Ogura (NTT) for providing associative memory chips, and Jaime Carbonell, Masaru Tomita, Scott Fahlman, Dave Touretzky, and Mark Boggs (CMU), and Yasunari Tosa (Thinking Machines) for various support and valuable advice.
IXM2 technical appendix

Associative Processor: T800 Transputer (17.5 MHz), 4096 × 40-bit associative memory. Network Processor: 4 link adaptors (IMS C012).

References

[EDR, 1990] Japanese Electronic Dictionary Research Institute. An Overview of the EDR Electronic Dictionaries, TR-024, April 1990.

[Evett, 1990] Evett, M., Hendler, J. and Spector, L. PARKA: Parallel Knowledge Representation on the Connection Machine, CS-TR-2409, Univ. of Maryland, 1990.

[Fahlman, 1979] Fahlman, S.E. NETL: A System for Representing and Using Real-World Knowledge, MIT Press, 1979.

[Foster, 1976] Foster, C.C. Content Addressable Parallel Processors, Van Nostrand Reinhold Company, 1976.

[Handa, 1986] Handa, K., Higuchi, T., Kokubu, A. and Furuya, T. Flexible Semantic Network for Knowledge Representation, Journal of Information Japan, Vol.10, No.1, 1986.

[Higuchi, 1991] Higuchi, T., Furuya, T., Handa, K., Takahashi, N., Nishiyama, H. and Kokubu, A. IXM2: A Parallel Associative Processor, Proceedings of the 18th International Symposium on Computer Architecture, May 1991.

[Hillis, 1985] Hillis, D. The Connection Machine, MIT Press, 1985.

[Kitano and Higuchi, 1991] Kitano, H. and Higuchi, T. High Performance Memory-Based Translation on IXM2 Massively Parallel Associative Memory Processor, AAAI-91, 1991.

[Lenat and Guha, 1989] Lenat, D. and Guha, R. Building Large Knowledge-Based Systems, Addison-Wesley, 1989.

[Moldovan, 1990] Moldovan, D. et al. Parallel Knowledge Processing on SNAP, Proceedings of the International Conference on Parallel Processing, August 1990.

[Ogura, 1989] Ogura, T., Yamada, J., Yamada, S. and Tanno, M. A 20-Kbit Associative Memory LSI for Artificial Intelligence Machines, IEEE Journal of Solid-State Circuits, Vol.24, No.4, August 1989.

[Quillian, 1967] Quillian, M.R. Word Concepts: A Theory and Simulation of Some Basic Semantic Capabilities, Behavioral Science, 12, 1967, pp.410-430.
Ian Green
Department of Engineering
University of Cambridge
Cambridge CB2 1PZ, England
img@eng.cam.ac.uk

Abstract

The problem of automatically improving functional programs using Darlington's unfold/fold technique is addressed. Transformation tactics are formalized as methods consisting of pre- and post-conditions, expressed within a sorted meta-logic. Predicates and functions of this logic induce an abstract program property space within which conventional monotonic planning techniques are used to automatically compose methods (hence tactics) into a program-improving strategy. This meta-program reasoning casts the undirected search of the transformation space as a goal-directed search of the more abstract space. Tactics are only weakly specified by methods. This flexibility is required if they are to be applicable to the class of generalized programs that satisfy the pre-conditions of their methods. It is achieved by allowing the tactics to generate degenerate scripts that may require refinement. Examples of tactics and methods are given, with illustrations of their use in automatic program improvement.

Introduction

The effectiveness of transformational models of programming is limited by the cost of controlling search in transformation space. One such transformational model is Darlington's generative set approach, based around unfold/fold transformations; this is a general technique for effecting efficiency-improving, correctness-preserving transformations on functional programs expressed as recursion equations (Burstall and Darlington, 1977). In contrast to catalogue-based transformation systems such as CHI (Green et al., 1983) and TI (Balzer, 1981), the generative set style is characterized by a small number of simple transformations which are selected and applied to the program under development.
The effect of a single transformation is typically negligible; nevertheless a significant improvement in program efficiency is possible if a number of them are arranged correctly. (This research is funded by a SERC Research Studentship.)

Darlington's Functional Programming Environment supports the transformational development of Hope+ programs, and is the vehicle for this work. The FPE (Sephton et al., 1990) is a transformation processor which automatically applies externally generated transformation scripts to programs. The manual construction of program-improving scripts is time-consuming and difficult.

This paper reports current theoretical activity in automating script construction. We have formalized transformation tactics in a principled, flexible way using property-oriented program descriptions expressed within a sorted logic. The abstract space defined by this logical formalization can be searched in a goal-directed way, permitting efficient reasoning with tactics in order to construct an overall program-improving strategy. We believe that goal-directed program development is central to the success of transformation techniques.

The following example illustrates use of unfold/fold and serves to outline the problem of automatic program transformation.

Example problem. The unfold/fold technique consists of six elementary rules: define, instantiate, unfold, fold, abstract and replace. We assume the reader has some familiarity with the unfold/fold technique (see Burstall and Darlington (1977) for details), but detailed knowledge is not required. Briefly, unfold corresponds to replacing a function with its (suitably instantiated) body, and fold is the reverse of this. A replace transformation applies an equality lemma such as the associativity of addition; define introduces a new equation with a unique left hand side; instantiate creates a substitution instance of an existing equation; abstract introduces a let clause.
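The unfold rule, for instance, can be pictured as one-step rewriting with the program's defining equations. A minimal Python sketch of this idea follows; the term representation and the `unfold` helper are our own illustration, not part of the FPE:

```python
# Terms are tuples: ("call", fname, arg) marks a function call; anything
# else is treated as opaque. Equations map a function name to a Python
# function from the argument term to the (instantiated) body, mirroring
# the unfold rule: replace a call by its suitably instantiated body.

def unfold(term, equations):
    """Rewrite the outermost call in `term` using its defining equation."""
    if isinstance(term, tuple) and term and term[0] == "call":
        _, fname, arg = term
        return equations[fname](arg)  # substitute the body for the call
    return term

# Example: sum([]) <= 0;  sum(p :: ps) <= p + sum(ps);
# with lists written as nested Python tuples, nil as ().
equations = {
    "sum": lambda arg: 0 if arg == () else ("+", arg[0], ("call", "sum", arg[1])),
}

print(unfold(("call", "sum", (1, (2, ()))), equations))
# one unfold step: ("+", 1, ("call", "sum", (2, ())))
```

A fold would be the inverse rewrite: recognizing an instance of an equation's right hand side and replacing it by the corresponding call.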
We shall base our presentation on natural numbers, tuples, and lists of natural numbers (lists are constructed with the infix cons, written '::', and nil, written '[]'). The following example program illustrates the use of unfold/fold. We see that av is intuitively defined in terms of count and sum.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

(1) av(l) <= (count(l), sum(l));
(2) count([]) <= 0;
(3) count(p :: ps) <= 1 + count(ps);
(4) sum([]) <= 0;
(5) sum(p :: ps) <= p + sum(ps);

Using the unfold/fold technique we can derive by transformation an improved av shown in Figure 1 on the following page. The improved av (shown below) is now an immediately recursive linear function which traverses its argument once, not twice.

av([]) <= (0, 0);
av(p :: ps) <= let (u, v) = av(ps) in (1 + u, p + v);

This example illustrates the need to control the search when deciding what transformation ought to be applied in order to arrive at this improvement. The aim of making av immediately recursive stems from one of two places: (1) certain program structures may be efficiently mapped onto certain machine architectures, and are desirable, or, (2) improvement of another function is dependent upon av being in this form; we address this more general case throughout this paper.

Our thesis is that AI planning techniques can be used to automate program transformation, but that a straightforward mapping of transformation patterns into, for example, STRIPS-like operators places a prohibitively large emphasis on object-level search. Instead, we describe transformations at the level of tactics that encapsulate pieces of programming knowledge, in much the same way that catalogue-based transformation systems do. By weakly specifying these tactics they remain flexible and are scalable to larger problems.
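In ordinary executable terms, the improvement to av is the classic tupling transformation: one traversal computing both components at once. A Python paraphrase of the Hope+ program (ours, for illustration only):

```python
# Naive av: two traversals of the list, one each for count and sum.
def count(l):
    return 0 if not l else 1 + count(l[1:])

def sum_(l):
    return 0 if not l else l[0] + sum_(l[1:])

def av_naive(l):
    return (count(l), sum_(l))

# Improved av after the unfold/fold derivation: immediately recursive,
# traversing the list once and returning the (count, sum) pair directly.
def av(l):
    if not l:
        return (0, 0)
    u, v = av(l[1:])
    return (1 + u, l[0] + v)

assert av([3, 4, 5]) == av_naive([3, 4, 5]) == (3, 12)
```

The two versions compute the same pair; only the number of traversals differs.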
By reasoning with these tactic formalizations, sequences of tactics, called strategies, are generated that effect a global program improvement.

The remainder of this paper is organized as follows: the following section discusses previous work in automating program transformation and highlights aspects which influenced this work. Then we describe our formulation of a flexible approach to the transformation planning problem, using a sorted logic to represent program abstractions. Finally, three examples illustrate these ideas with a small selection of simple tactics and their formalizations.

Relevant work

Green's PSI project (Green, 1976) heralded the beginning of a number of catalogue-based transformation systems; PSI was entirely automatic, but experience with it underlined the difficulty of selecting suitable transformations from its large database. PSI's successor, CHI (Green et al., 1983; Smith et al., 1985), covered a broader spectrum of transformation problems by keeping the user in the transformation process. Closer to our work is the Glitter system, a component of which is Fickas' Jitterer (Fickas, 1980), capable of performing short, conditioning transformations on programs. This 'jittering' process was driven by the user, who was responsible for selecting non-trivial transformations. In a slightly different way, Feather's ZAP system (Feather, 1982) partially automated the unfold/fold approach we use. ZAP allowed the user to express goals in terms of patterns which were then converted into transformation sequences automatically. With a higher-level approach, Chin (1990) and Smith's CYPRESS (Smith, 1985) have focussed on constructing tactics to manipulate programs in specific ways, but without an overall guiding strategy.
Darlington's own efforts at partially automating his unfold/fold technique (Darlington, 1981) were not entirely successful because the heuristics embodied in his system failed to scale to larger problems, and were not flexible enough to be used uniformly. As a result, the reliance on pure search became prohibitive and this work was stopped.

Each of these low-level systems adopts an ad hoc approach to following a transformation strategy. With the exception of PSI, they need significant user involvement for realistic programs; indeed, the majority of them are interactive. Furthermore, with the exception of generative-set systems, scant regard is paid to correctness of the final program due to the difficulty of verifying the complex transformations present in catalogue-based systems.

Abstractions, Tactics and Methods

Abstractions

Input abstractions have been used to remove inessential detail from complex input domains in an object-level to object-level mapping (e.g., clause set to clause set (Plaisted, 1981) or wff to wff (Tenenberg, 1988)). We have investigated this style of degenerate abstraction by constructing abstraction mappings from programs to programs: a program P is mapped to P' and a script is generated for P' by some means. In a refinement process this 'abstract' script guides the search for a 'concrete' script, that may be applied to improve P. A mapping similar to Plaisted's generalized propositional mapping has been used with some success on small problems, but these mappings were not scalable to more complex programs. This is because the properties of programs which influence transformation cannot easily be expressed at the object-level (i.e., as a program). For this reason we have chosen a more expressive, logical description which effectively generalizes programs if they satisfy some meta-level wff.
Tactics and Methods

A tactic is a sequence of transformations that exhibits a definite invariant structure commonly used in unfold/fold transformation and theorem proving. One of the challenges in automating unfold/fold is the need to express the behaviour of these tactics in a formalization that is amenable to mechanized reasoning so that they may be organized into a global program improving strategy. After Bundy et al. (1988) we call these descriptions methods. In Feather's ZAP system, tactics are used extensively during program improvement, but no strategy for using those tactics is given. Furthermore, the tactics are expressed in terms of pattern-oriented transformations, i.e., weak meta-level descriptions. We choose a more expressive logical description following Bundy et al. (1988) and Green et al. (1983).

av([])      <= (count([]), sum([]));                                     instantiate l to [] in (1)
            <= (0, sum([]));                                             unfold count([]) using (2)
            <= (0, 0);                                                   unfold sum([]) using (4)
av(p :: ps) <= (count(p :: ps), sum(p :: ps));                           instantiate l to p :: ps in (1)
            <= (1 + count(ps), sum(p :: ps));                            unfold count(p :: ps) using (3)
            <= (1 + count(ps), p + sum(ps));                             unfold sum(p :: ps) using (5)
            <= let v = sum(ps) in (1 + count(ps), p + v);                abstract sum(ps)
            <= let u = count(ps) in let v = sum(ps) in (1 + u, p + v);   abstract count(ps)
            <= let (u, v) = (count(ps), sum(ps)) in (1 + u, p + v);      flatten lets (a replace)
            <= let (u, v) = av(ps) in (1 + u, p + v);                    fold (count(ps), sum(ps)) with (1)

Figure 1: Derivation of an improved version of av
A method is a 3-tuple consisting of a tactic, and pre- and post-conditions that formalize the selection and effect of that tactic on a program in such a way that:

- the method embodies in a formalized way the conditions required of a program in order that it is worth attempting the tactic, and,
- the method may be used to predict the effect of the tactic on program properties; in this way the actual tactic does not require execution in order to determine its effect on important program properties, an important computational saving.

Methods are similar to triangle tables in STRIPS (with MACROPS), and the abstract operators in (Tenenberg, 1988), but, unlike our methods, these are guaranteed to have specializations which succeed due to consistency restrictions imposed on their formation. The skeletal plans of MOLGEN, which are refined by descending a user-determined abstraction hierarchy, are similar to methods, but methods are not part of an object-level hierarchy. This gives us greater expressive freedom in describing tactics, a limitation of ZAP's weak patterns and, as mentioned above, input abstractions.

In order to form tactics into a strategy, we perform simple STRIPS-like reasoning with methods. Methods whose post-conditions unify with the current goal are chosen according to some search policy and their pre-conditions are used as new goals. When all goals are satisfied the corresponding tactics are executed and a script is generated. In short, reasoning with methods is simple theorem proving; the advantage is that we are searching an abstract space which is a better space to search than the undirected transformation space.

The unfold/fold technique is monotonic in that new equations are always consistent with existing ones (we shall not discuss the possibility of losing total correctness); our formalization preserves this monotonicity, hence we do not require a mechanism to circumvent the frame problem.
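The STRIPS-like reasoning just described can be sketched as a naive backward chainer over method records. The sketch below is ours: the tactic names come from the paper, but the goal vocabulary is simplified to flat strings rather than meta-logic wffs:

```python
# Each method records a tactic name, pre-conditions (facts it needs) and
# post-conditions (facts it achieves). Backward chaining: pick a method
# whose post-conditions cover the goal, make its pre-conditions new goals;
# when all goals are satisfied, the tactic sequence is the strategy.

METHODS = [
    {"name": "FOLD",    "pre": ["nested_call"],         "post": ["immediately_recursive"]},
    {"name": "TUPLE",   "pre": ["tuple_of_recursions"], "post": ["immediately_recursive"]},
    {"name": "COMPOSE", "pre": [],                      "post": ["nested_call"]},
]

def plan(goal, facts):
    """Return a tactic sequence achieving `goal` from `facts`, or None."""
    if goal in facts:
        return []
    for method in METHODS:               # choosing among candidates is a backtracking point
        if goal in method["post"]:
            steps, ok = [], True
            for pre in method["pre"]:    # pre-conditions become new goals
                sub = plan(pre, facts)
                if sub is None:
                    ok = False
                    break
                steps += sub
            if ok:
                return steps + [method["name"]]
    return None

print(plan("immediately_recursive", {"nested_call"}))  # ['FOLD']
print(plan("immediately_recursive", set()))            # ['COMPOSE', 'FOLD']
```

The second call mirrors the chaining in Example 3 below: when no pre-condition holds directly, COMPOSE is scheduled first to establish what FOLD needs.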
Flexible program transformation

Degenerate abstractions within the object-level are limited to the expressiveness of the object-level. Programs are not good representations of properties that influence transformation, and so a meta-program logic is used to represent these properties. Using this logic we define methods that specify the behaviour of tactics.

By partially specifying a tactic with weak method conditions, the method generalizes a class of programs differing perhaps only in detail: the tactic must be applicable to this class. Tactics accommodate this flexibility by generating degenerate scripts which may require refinement before being passed to the FPE for execution. This is where the system gains much of its flexibility, because the details of the script can be completed more easily when a basic structure is known. These degenerate scripts provide subgoals which enable conventional means-end analysis and goal regression to be used to constrain search within object-level space, as in other degenerate object-level planners, e.g., ABSTRIPS and Plaisted's theorem prover (Plaisted, 1981).

It is possible that a refinement of a degenerate script cannot be found, and so search must return to the meta-level by rejecting that (instance of the) tactic (backtracking) and re-planning. We are experimenting with pre- and post-condition strength to strike a balance between the amount of backtracking, the cost of computing these conditions and the extra search incurred by increasing the number of methods.

Tactics, methods and examples

Here we present three tactics and methods, namely TUPLE, COMPOSE and FOLD. The detail of these and other tactics is omitted here due to space limitations, but may be found elsewhere (Green, 1991). The table below provides an informal interpretation of the predicates and functions used (in the examples, an underscore indicates an anonymous placeholder).

Predicate/Function   Interpretation
EQN(f, fD)           f <= fD
OCCURS(f, g, pos)    f is a symbol appearing in g at position pos
TUPLE(t)             t is a tuple
FUNCTION(pos, f)     f contains a function symbol at position pos
ELEMENTS(t, f)       positions in t at which there are non-foldable expressions containing the recursion argument of f
ARGS(f)              collection of arguments of f

A brief description of each tactic is followed by its method, shown as a table of pre- and post-conditions.

TUPLE tactic

This tactic is used to improve functions containing tuples whose elements are immediately recursive functions of the recursion parameter. The recursion argument is instantiated with [] and then unfolded. The constructor (p :: ps) is used similarly, but additionally is folded with the top-level function. Note that this fold may not be possible, in which case the tactic will fail.

TUPLE method

pre-conditions:  EQN(f, fD) ∧ OCCURS(t, fD, pos) ∧ TUPLE(t) ∧ (∀p ∈ ELEMENTS(t, f)) (FUNCTION(p, fD) ∧ OCCURS(h, fD, p) ∧ EQN(h, hD) ∧ OCCURS(h, hD, _))
post-conditions: OCCURS(f, fD, _)

COMPOSE tactic

Removes nested function calls from expressions with define; a new function symbol is introduced on the left of an equation whose right hand side is this nested call. A fold with this new equation removes the nested call from the expression. A more general case of this, fusion (or deforestation), is presented by Chin (1990).

COMPOSE method

pre-conditions:  EQN(f, fD) ∧ EQN(g, gD) ∧ OCCURS(g, gD, _) ∧ OCCURS(g, fD, pos) ∧ OCCURS(g, ARGS(f), _)
post-conditions: FUNCTION(pos, fD) ∧ OCCURS(f', fD, pos) ∧ EQN(f', f'D) ∧ ¬OCCURS(g, ARGS(f), _)

FOLD tactic

Performs the classic unfold/fold sequence on nested functions (Burstall and Darlington, 1977) in which recursion parameters are instantiated to type constants and constructors followed by repeated unfolding and unfolding/folding, respectively.
FOLD method

pre-conditions:  EQN(h, f(g(x))) ∧ EQN(g, gD) ∧ EQN(f, fD) ∧ OCCURS(g, gD, pos)
post-conditions: OCCURS(h, hD, _)

Examples

The following three examples illustrate how the above tactics can be reasoned with, as described, in order to automatically improve a number of functions. Examples 1 and 2 require only tactics FOLD and TUPLE respectively; example 3, which requires all three tactics, illustrates how meta-level reasoning chains tactics together into an overall transformation strategy. In these last two examples the generated scripts require refinement, the details of which are not presented here.

Example 1. Let us derive an immediately recursive version of foo, where foo is defined by:

foo(l) <= sum(squares(l));
sum([]) <= 0;
sum(p :: ps) <= p + sum(ps);
squares([]) <= [];
squares(p :: ps) <= (p * p) :: squares(ps);

An immediately recursive form of foo is represented by the meta-level description EQN(foo, fooD) ∧ OCCURS(foo, fooD, _). The first conjunct is satisfied, the second is not. A method is found whose post-conditions unify with this unsatisfied goal. Both the TUPLE and FOLD methods fulfil this requirement, and so this is a backtracking point. FOLD is chosen as its pre-conditions are satisfied. The tactic is executed and the following script is generated:

type constant ([])
instantiate l to []
unfold squares([])
unfold sum([])
type constructor (p :: ps)
unfold squares(p :: ps)
unfold sum((p * p) :: squares(ps))
fold foo(l)

which does not require refinement. When applied to foo this yields:

foo([]) <= 0;
foo(p :: ps) <= (p * p) + foo(ps);

Example 2. Consider the introductory example with an additional parameter x added to av:

av(l, x) <= (count(l), x + sum(l));

With the goal EQN(av, avD) ∧ OCCURS(av, avD, _) the TUPLE method indicates that this tactic is to be tried. As there are no other goals to satisfy, the tactic is executed, resulting in the script shown in Figure 2a.
(a)
type constant ([])
instantiate l to []
unfold count([])
unfold sum([])
type constructor (p :: ps)
instantiate l to p :: ps
unfold count(p :: ps)
unfold sum(p :: ps)
fold av(l, x)

(b)
instantiate l to []
unfold count([])
unfold sum([])
instantiate l to p :: ps
unfold count(p :: ps)
unfold sum(p :: ps)
replace x + (p + sum(ps)) with (p + sum(ps)) + x
replace (p + sum(ps)) + x with p + (sum(ps) + x)
replace sum(ps) + x with x + sum(ps)
abstract count(ps)
abstract x + sum(ps)
replace (flatten lets)
fold av(l, x)

Figure 2: Scripts for av before (a) and after (b) refinement. (Scripts shown are simplified for clarity.)

This script would fail, as the fold av(l, x) cannot succeed on

av(p :: ps, x) <= (1 + count(ps), x + (p + sum(ps)));

In order to avoid failure of this kind, the pre-conditions of methods could be strengthened. However, it is not clear how this would be implemented, and evaluation of these conditions is likely to be computationally expensive. But more importantly it would make the TUPLE method more rigid and specialized, requiring more tactics and methods to handle anomalous cases.

When the script fails in this way it is used to guide the search for a suitable refinement. Means-end analysis and backward reasoning from the goal of folding av result in a number of replace transformations which complete the script as shown in Figure 2b above. The meta-level reasoning in this case gives us better than 10:1 (object-level:meta-level) gearing on the number of steps needed, as well as reducing the search needed to discover them. The improved av is:

av(p :: ps, x) <= let (u, v) = av(ps, x) in (1 + u, p + v);

Example 3. In this example we illustrate the way a transformation strategy is automatically constructed from tactics by searching in the abstract space.
foo(l, x) <= (count(l), x + sum(l), sum(squares(l)));
sum([]) <= 0;
sum(p :: ps) <= p + sum(ps);
squares([]) <= [];
squares(p :: ps) <= (p * p) :: squares(ps);

The goal is for foo to be immediately recursive, which is expressed at the meta-level as EQN(foo, fooD) ∧ OCCURS(foo, fooD, _). The post-conditions of TUPLE unify with this goal but its pre-conditions are unsatisfied. We seek to satisfy its unsatisfied pre-condition, which is:

(∀p ∈ ELEMENTS(t, foo)) (FUNCTION(p, fooD) ∧ OCCURS(h, fooD, p) ∧ EQN(h, hD) ∧ OCCURS(h, hD, _))

(We abbreviate the right hand side of foo as fooD.) This pre-condition fails because sum(squares(l)) is not a function symbol. The COMPOSE method achieves this goal by defining a new function, sumsquares, and folding it into fooD. Choosing this method leaves OCCURS(sumsquares, sum(squares(l)), _) unsatisfied, so we seek to derive a new right hand side to sumsquares that contains sumsquares. The post-conditions of the FOLD and TUPLE methods unify with this goal. The FOLD tactic is chosen as its pre-conditions are satisfied. Now the entire pre-condition of the TUPLE method is satisfied and so the TUPLE tactic may be applied.
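The excerpt ends before the final program is shown. A plausible end product of the COMPOSE, FOLD and TUPLE sequence, rendered as executable Python, is the following single-traversal version; this completion is our own, not taken from the paper:

```python
# Naive foo from Example 3: three separate traversals of l.
def foo_naive(l, x):
    count = len(l)
    s = sum(l)
    sq = sum(p * p for p in l)   # the nested call sum(squares(l))
    return (count, x + s, sq)

# After COMPOSE (introduce sumsquares), FOLD (make it recursive) and
# TUPLE, foo becomes immediately recursive, traversing l exactly once.
def foo(l, x):
    if not l:
        return (0, x, 0)
    p, ps = l[0], l[1:]
    u, v, w = foo(ps, x)
    return (1 + u, p + v, p * p + w)

assert foo([1, 2, 3], 10) == foo_naive([1, 2, 3], 10) == (3, 16, 14)
```

Each tuple component accumulates through the single recursion, exactly as the pair did in the introductory av example.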
Conditional Existence of Variables in Generalized Constraint Networks

James Bowen and Dennis Bahler
Dept. of Computer Science
Box 8206, North Carolina State University
Raleigh, NC 27695-8206

Abstract

Classical constraint systems require that the set of variables which exist in a problem be known ab initio. However, there are some applications in which the existence of certain variables is dependent on conditions whose truth or falsity can only be determined dynamically. In this paper, we show how this conditional existence of variables can be handled in a mathematically well-founded fashion by viewing a constraint network as a set of sentences in free logic. Based on these ideas, we have developed, implemented and applied a constraint language in which any sentence in full first-order free logic, about a many-sorted universe of discourse which subsumes the reals, is a well-formed constraint.

Classical constraint systems [8] require that the set of variables which exist in a problem be known ab initio. However, there are some applications [4; 9] in which not just the value of certain variables, but also their very existence, depends on conditions whose truth can only be determined dynamically. In this paper, we show how this conditional existence of variables can be handled in a mathematically well-founded fashion by viewing a constraint network as a set of sentences in first-order free logic (FOFL) [7]. As well as having a better-developed theoretical underpinning, this approach is more general than that presented in [9]. We briefly review a language, based on these ideas, which we have developed, implemented and applied to a range of CAD applications.

In the literature, several different definitions are given for classical constraint networks, with varying degrees of formality. However, they may all be regarded as variations of the following theme:

(This work was supported in part by NSF grant number DDM-8914200.)
jabowen@adm.csc.ncsu.edu, drb@adm.csc.ncsu.edu

Definition 1, Constraint Network: A constraint network is a triple (D, X, C). D is a finite set of p > 0 domains, the union of whose members forms a universe of discourse, U. X is a finite tuple of q > 0 non-recurring variables. C is a finite set of r ≥ q constraints. In each constraint Ck(Tk) ∈ C, Tk is a sub-tuple of X, of arity ak; Ck(Tk) is a subset of the ak-ary Cartesian product U^ak. In C, there are q unary constraints of the form Ck(Xj) = Di, one for each variable Xj in X, restricting it to range over some domain Di ∈ D.

The overall network constitutes an intensional specification of a joint possibility distribution for the values of the variables in the network. This distribution is a q-ary relation on U^q, called the intent of the network:

Definition 2, The Intent of a Constraint Network: The intent of a constraint network (D, X, C) is Π(D,X,C) = E1(X) ∩ ... ∩ Er(X), where, for each constraint Ck(Tk) ∈ C, Ek(X) is its cylindrical extension [5] in U^q.

Three forms of constraint satisfaction problem (CSP) can be distinguished:

Definition 3, The Decision CSP: Given a network (D, X, C), decide whether Π(D,X,C) is non-empty.

Definition 4, The Exemplification CSP: Given a network (D, X, C), return some tuple from Π(D,X,C), if Π(D,X,C) is non-empty, or nil otherwise.

Definition 5, The Enumeration CSP: Given a network (D, X, C), return Π(D,X,C).

Classical constraint processing may be regarded as semantic modeling in first-order classical logic (FOCL), in the following sense. The constraints in C correspond to sentences of an FOCL theory Γ, written in some first-order language L = (P, F, K). The variables in X correspond to object symbols, from K, which appear in Γ. The decision CSP corresponds to deciding whether Γ
is satisfied under any model M = (U, I) of the language L, where U is the union of the domains in D and I is an interpretation function from the symbols of L to entities in, and relations over, U. (In the language L = (P, F, K), P is a set of relation or predicate symbols, F is a set of function symbols and K is a set of object or constant symbols.) The exemplification CSP corresponds to finding I for one such model, while the enumeration CSP corresponds to finding I for all such models.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

3.1 Example 1

Consider the task of semantically modeling the theory Γ = {positive(area), nonnegative(tensile-stress), nonnegative(load), load = area * tensile-stress, tensile-stress =< 200}, in L = ({positive, nonnegative, =, =<}, {*}, R ∪ {area, tensile-stress, load}). Suppose we are given a universe of discourse U = ℝ and a partial interpretation function Ip for L, which provides an interpretation for each predicate and function symbol of L as well as a total one-to-one mapping from R to the finitely expressible rationals Qf ⊆ Q ⊆ ℝ. That is, as well as object mappings of the form 200 → 200, Ip contains these predicate and function mappings:

positive    → ℝ+
nonnegative → ℝ0+
=  → {(x, y) | x ∈ U ∧ y ∈ U ∧ EQUALS(x, y)}
=< → {(x, y) | x ∈ ℝ ∧ y ∈ ℝ ∧ LEQ(x, y)}
*  → {(x, y, z) | x ∈ ℝ ∧ y ∈ ℝ ∧ z ∈ ℝ ∧ EQUALS(z, TIMES(x, y))}

The constraint network (D, X, C) corresponding to this situation is a possibility distribution for interpretations of the remaining uninterpreted object symbols of L, such that Γ is satisfied. The components of this network are: D = {ℝ+, ℝ0+, ℝ}, X = (area, tensile-stress, load) and C = {C1(area), C2(tensile-stress), C3(load), C4(area, tensile-stress, load), C5(tensile-stress)}.
The constraints are defined as follows:

C1(area) = ℝ+
C2(tensile-stress) = ℝ0+
C3(load) = ℝ
C4(area, tensile-stress, load) = {(x, y, z) | x ∈ ℝ ∧ y ∈ ℝ ∧ z ∈ ℝ ∧ EQUALS(z, TIMES(x, y))}
C5(tensile-stress) = {x | x ∈ ℝ ∧ LEQ(x, 200)}

Each sentence in the theory has a corresponding constraint in the network, the definition of which depends on whatever information is provided, by the partial interpretation Ip, about the symbols appearing in the sentence. Consider, for example, C4(area, tensile-stress, load), which corresponds to the sentence load = area * tensile-stress. There are three variables in this constraint, corresponding to the three object symbols in the sentence. The restriction imposed by the constraint is derived from the interpretations in Ip for the symbols = and * in the sentence. (R is the set of object symbols composed from the characters +, -, . and 0..9 according to a grammar for real numeric strings. R is distinguished from ℝ, the set of real numbers. In this paper, to distinguish between symbols of L and entities of U, we use typewriter font for the latter; thus 200 ∈ R is an object symbol while 200 ∈ ℝ would be in a universe of discourse.) Consider C5(tensile-stress), which corresponds to the sentence tensile-stress =< 200. Although two object symbols appear in the sentence, there is only one variable in the constraint, because Ip provides an interpretation for 200.

3.2 Example 2: The Inadequacy of Classical Constraint Networks

Constraint networks and constraint processing have wide applicability, in areas as diverse as design [2; 9] and computer vision [10]. The network in Example 1 represents some considerations affecting the cross-sectional area of, and tensile stress in, a bar carrying a load. However, consider the following:

Example 2: Extend Example 1 to incorporate considerations about the cross-section of the bar.
Circular and rectangular shapes are allowed; circular bars have a cross-sectional radius; rectangular bars have a breadth and height.

This example is beyond the expressive competence of classical constraint networks, because the radius cannot exist if the height and breadth exist. We have encountered a fundamental ontological inadequacy of classical logic: its inadequacy for reasoning in depth about existence or non-existence [6]. In what follows, we show how this difficulty can be overcome, by using FOFL [7], rather than FOCL, as the theoretical basis for constraint networks.

4 Free Logic

In classical logic, a model M = (U, I) for a first-order language L = (P, K) comprises a universe U and an interpretation function I. (For simplicity, in this section we discuss a function-free language, since an n-ary function is really an (n + 1)-ary relation.) The interpretation function I assigns to each predicate symbol p ∈ P some relation I(p) over U and, because of the absence of function symbols in L, provides a mapping from K to U which is total and surjective. That is, the function I assigns to each object symbol κ ∈ K some element u ∈ U and makes each u ∈ U the image of at least one κ ∈ K.

Free logic [7] differs from classical logic in that a free logic model M = (U, I, S) for a first-order language L = (P, K) contains a third item, a story S. As before, the function I assigns to each p ∈ P a relation I(p) over U. However, although I provides a surjective mapping from K to U (i.e., every u ∈ U is the image of some κ ∈ K), this mapping need not be total: it need not assign a u ∈ U to every κ ∈ K. The story S is a (possibly empty) set of atomic sentences, each of which contains some κ ∈ K to which I has assigned no u ∈ U. A special symbol Ω, additional to ∀ and ∃, is used in atoms like (Ω κ) or ¬(Ω κ) to talk about whether or not I maps κ ∈ K to some u ∈ U; (Ω κ) can be read as "κ designates
216 CONSTRAINT-BASED REASONING some element of U” or simply “the object denoted by K exists.“4 The model-theoretic rules for determining truth in FOFL are as given below. Rules (a), (g) and (h) are different from their counterparts in FOCL, while rule (i) has no counterpart. (a) M I= p(ar, . . . . a,) iff p(ar, . . . . a,) is in S or @(al), .,.,2(un)) is in Z(p). (b) M + 1A iff M k A. (c)M bAr\BiffM FAandM FB. (d)M /=AVBiffM FAorM b23. (e) M /=A+BiffM FAorM bB. (f) M + A e B iff (M k A and M k B) or (M k A and M /= B). (g) M k (VX)A iff M b (K/X)A for every K E K: to which Z has assigned a u E 24. (h) M /= (3X)A iff M + (tc/X)A for some K E K to which Z has assigned a u E 2.4. (i) M b (Q IC) iff K E ic is assigned a u E 24 by 1. 5 ee Constraint Processing Definition 6, Free Constraint Network: A free constraint network is a quadruple (D, V, X, C) . D is a non-empty, finite set of domains, the union of whose members forms a universe of discourse 24. V is a distinguished entity, V # 24. X is a non-empty, possibly infinite, tuple of non-recurring variables. C is a finite set of r > 0 constraints. Each constraint Ck(!&) E C is a possibility distribution which restricts the existence of the variables in T’, a (possibly infinite) sub-tuple of X, and the values that these variables may assume if they exist. Ck(T,) is a subset of the Cartesian product (Hx, in I&)) where v(X) = (U U (VI). Free constraint networks are a generalization of clas- sical networks. A classical constraint network is a free constraint network containing a finite number of vari- ables, all of which exist; that is, in a classical network X is a finite tuple, and the constraint(s) in C uncon- ditionally require(s) every variable in X to exist. In contrast, a free constraint network may, in general, con- tain an infinite number of variables, none of which need, in general, exist. 
The intent of a free constraint network is a joint possibility distribution on the existence of the variables in the network and on the values that these variables may assume, if they exist:

Definition 7, Intent of a Free Constraint Network: The intent of a free constraint network (D, V, X, C) is Φ(D,X,C) = E1(X) ∩ ... ∩ Er(X), where Ej(X) is the cylindrical extension of the constraint Cj(Tj) in the Cartesian product of v(Xj) over the Xj in X, the existence and value space for X. (Ω may also be regarded as a unary predicate symbol whose interpretation is all of U; when viewed as such, Ω is special, in the sense that no atom with Ω as predicate can be in the story S.)

Definition 8, Free Decision CSP: Given a free constraint network (D, V, X, C), decide whether Φ(D,X,C) is non-empty.

Definition 9, Free Exemplification CSP: Given a free constraint network (D, V, X, C), return some tuple in Φ(D,X,C), if Φ(D,X,C) is non-empty; otherwise return nil.

Definition 10, Free Enumeration CSP: Given a free constraint network (D, V, X, C), return Φ(D,X,C).

The possibility that some variables in a free constraint network need not exist (need not be mapped into U) means that several additional forms of CSP may be defined [1]. Here, however, we are interested in just two of these: the Minimal Exemplification CSP and the Minimal Enumeration CSP. These are defined as follows.

Definition 11, Interpretation Set: Given a mapping from a tuple of variables Tk to a tuple of values tk, the expression ι(Tk, tk) denotes the corresponding interpretation set, the set of all those variable to value mappings in which the variables denote something in the universe U.

Definition 12, Minimal Intent of a Free Constraint Network: The minimal intent of a free constraint network (D, V, X, C), written μΦ(D,X,C), is μΦ(D,X,C) = {Y | Y ∈ Φ(D,X,C) ∧ ¬((∃Z)(Z ∈ Φ(D,X,C) ∧ ι(X, Z) ⊂ ι(X, Y)))}.
Definition 13, Minimal Exemplification CSP: Given a free constraint network (D, V, X, C), return some tuple in μΦ_{D,X,C}, if μΦ_{D,X,C} is non-empty; otherwise return nil.

Definition 14, Minimal Enumeration CSP: Given a free constraint network (D, V, X, C), return μΦ_{D,X,C}.

A free constraint network (D, V, X, C) specifies the possibility distribution for the existence, as well as the interpretation, of object symbols which appear in a free logic theory Γ, written in a first-order language L = (P, F, K). As in the classical case, the definition of the network depends on Γ, the universe of discourse U, and a partial interpretation function Ip for L which gives interpretations for all function and predicate symbols, and some of the object symbols, of L. However, the free network definition also depends on the story S. Taking any tuple Y in Φ_{D,X,C} and computing I = Ip ∪ L(X, Y) produces an interpretation function I such that the theory Γ is satisfied under the free logic model M = (U, I, S) of L. Similarly, taking any tuple Z in μΦ_{D,X,C} and computing I = Ip ∪ L(X, Z) produces a minimal interpretation function I such that the theory Γ is satisfied under the minimal free logic model M = (U, I, S) of L.

BOWEN & BAHLER 217

6.1 Example 3

Consider, for example, the following situation:

Language: L = ({p, q}, {}, {1, 2, 3, a, b, c})
Theory: Γ = {p(a, b), p(a, c) → q(a, c)}
Universe of discourse: U = {1, 2, 3}
Story: S = {p(a, b)}
Partial interpretation: Ip = {1 → 1, 2 → 2, 3 → 3, p → {(1, 2), (2, 3), (3, 3)}, q → {(1, 2), (2, 3)}}

The free constraint network (D, V, X, C) corresponding to this is the possibility distribution for the existence and interpretation of a, b and c. The components D, X and C of the network are as follows: D = {{1, 2, 3}}; X = (a, b, c); C = {C1(a, b), C2(a, c)}. C1(a, b) corresponds to p(a, b) ∈ Γ. Since p(a, b) ∈ S, at least one of a or b must not exist. Thus, C1(a, b) = ({V} × {V, 1, 2, 3}) ∪ ({V, 1, 2, 3} × {V}). C2(a, c) corresponds to p(a, c) → q(a, c).
Since this can be rewritten as ¬p(a, c) ∨ q(a, c), C2(a, c) can be defined as the union of two sets, one for each disjunct. Since p(a, c) ∉ S, p(a, c) is false if a does not exist, or if c does not exist, or if a and c both exist but do not satisfy p. Since q(a, c) ∉ S, q(a, c) is true iff a and c both exist and satisfy q. Calculating the union of the corresponding possibility distributions, and simplifying, we get C2(a, c) = ({V, 1, 2, 3} × {V, 1, 2, 3}) − {(3, 3)}. Intersecting the cylindrical extensions of these constraints, we get Φ_{D,X,C} = ((({V} × {V, 1, 2, 3}) ∪ ({V, 1, 2, 3} × {V})) × {V, 1, 2, 3}) − {(3, V, 3)}, and μΦ_{D,X,C} = {(V, V, V)}. This means that there are 27 different models M = (U, I, S), I ⊇ Ip, of the language L under which Γ is satisfied; there is one minimal model, under which neither a, b nor c exists.

6.2 Example 4

To see the impact of the story on the intent of a theory, consider the above situation, modified so that the story S = {}. The only difference in the network is that C1(a, b) = {(1, 2), (2, 3), (3, 3)}. Computing the intent and the minimal intent, we get Φ_{D,X,C} = ({(1, 2), (2, 3), (3, 3)} × {V, 1, 2, 3}) − {(3, 3, 3)} and μΦ_{D,X,C} = {(1, 2, V), (2, 3, V), (3, 3, V)}, so, although c need not exist, both a and b must exist when the story is empty.

7 A Brief Overview of Galileo

A program in Galileo is a declarative specification of a free constraint network, analogous to the problem specifications in Examples 3 and 4. That is, in general, a Galileo program specifies a first-order language L = (P, F, K), a theory Γ containing sentences from that language, a universe of discourse U, a partial interpretation function Ip for L, and a story S. Of these, only the theory Γ must always be specified explicitly. The Galileo run-time system provides a default language Ls = (Ps, Fs, R) in which R contains the real numeric strings, Ps contains names of standard predicates (=, =<, etc.), and Fs contains names of standard functions (*, +, etc.).
The run-time system also provides a universe of discourse Us = ℜ, an interpretation function Is for Ls in terms of Us, and a story Ss = {}.

The language L = (P, F, K) defined by a Galileo program has the following components: P = Ps ∪ {predicate symbols defined in the program}; F = Fs ∪ {function symbols defined in the program}; K = R ∪ {object symbols used in the program}. The universe of discourse U defined by a Galileo program is the union of Us = ℜ with any application-specific domains that are defined in the program. The partial interpretation function Ip defined by the program is the union of the set of mappings in Is with the mappings provided by any definitions of application-specific domains, relations and functions that are in the program. The story S defined by a Galileo program is the union of Ss = {} with any story provided in the program.

The Galileo program corresponding to Example 1 above is as follows:

area : positive.⁵
tensile-stress : nonnegative.
load : nonnegative.
load = area * tensile-stress.
tensile-stress =< 200.

The language L = (P, F, K) defined by this program has the following components: P = Ps; F = Fs; K = R ∪ {area, tensile-stress, load}. The universe of discourse U = Us = ℜ. The partial interpretation function Ip = Is. The story S = Ss = {}. This program, in effect, defines a classical constraint network, because the first three constraints specify that the object symbols area, tensile-stress and load must exist. However, the following program, which corresponds to Example 2, does define a free constraint network, because the existence of radius, or of breadth and height, is contingent on the interpretation of shape:

domain form =::= {circular, rectangular}.
area : positive.
tensile-stress : nonnegative.
load : nonnegative.
shape : form.
load = area * tensile-stress.
tensile-stress =< 200.
shape=circular implies
    exists(radius : positive)⁶ and
    area = 3.14159 * radius^2.
shape=rectangular implies
    exists(breadth : positive) and
    exists(height : positive) and
    area = breadth * height.
exists(radius) equiv
    not exists(breadth) and not exists(height).
exists(height) equiv exists(breadth).

⁵A Galileo statement of the form "κ : p" is merely an elliptical form of the Galileo statement "exists(κ) and p(κ)," where "exists(κ)" is Galileo syntax for (Ω κ).
⁶A Galileo expression of the form "exists(κ : p)" is an elliptical form for the equivalent "exists(κ) and p(κ)." The distinction between κ : p and exists(κ : p) is made purely for ease in parsing.

The language L = (P, F, K) defined by this program has the following components: P = Ps ∪ {form}; F = Fs; K = R ∪ {circular, rectangular, area, tensile-stress, load, shape, radius, breadth, height}. The universe of discourse U = Us ∪ {circular, rectangular} = ℜ ∪ {circular, rectangular}. The partial interpretation function Ip = Is ∪ {form → {circular, rectangular}, circular → circular, rectangular → rectangular}. The story S = Ss = {}. The intent of the network would be greatly expanded by the elimination of the last two statements in the program, but the minimal intent would be unaltered.

The following program, which corresponds to Example 3, contains two application-specific relation definitions, as well as an application-specific story:

relation p(number,number) =::= {(1,2), (2,3), (3,3)}.
relation q(number,number) =::= {(1,2), (2,3)}.
story =::= {p(a,b)}.
p(a,b).
p(a,c) implies q(a,c).

The language L = (P, F, K) defined by this program has the following components: P = Ps ∪ {p, q}; F = Fs; K = R ∪ {a, b, c}. The universe of discourse U = Us = ℜ. The partial interpretation function Ip = Is ∪ {p → {(1,2), (2,3), (3,3)}, q → {(1,2), (2,3)}}. The story S = Ss ∪ {p(a,b)} = {p(a,b)}.
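The network of Example 3, which this program defines, and its Example 4 variant with the empty story are small enough to verify the claimed intents by brute-force enumeration. An illustrative Python sketch (the helper names are ours, not part of Galileo):

```python
from itertools import product

V = "V"                       # the distinguished entity marking non-existence
VALS = (V, 1, 2, 3)           # the existence-and-value space v(X) = U ∪ {V}
p = {(1, 2), (2, 3), (3, 3)}  # Ip(p)

def intent(c1):
    """Intersect the cylindrical extensions of C1(a, b) and C2(a, c)."""
    c2 = {(a, c) for a in VALS for c in VALS} - {(3, 3)}
    return [(a, b, c) for a, b, c in product(VALS, repeat=3)
            if (a, b) in c1 and (a, c) in c2]

def existing(t):
    """L(X, t): positions of the variables that denote something in U."""
    return {i for i, x in enumerate(t) if x != V}

def minimal(phi):
    """Definition 12: keep tuples with no strictly smaller interpretation set."""
    return [t for t in phi
            if not any(existing(s) < existing(t) for s in phi)]

# Example 3: story S = {p(a,b)}, so at least one of a, b must not exist.
c1_story = {(V, x) for x in VALS} | {(x, V) for x in VALS}
phi3 = intent(c1_story)       # 27 tuples, i.e. 27 models extending Ip
mu3 = minimal(phi3)           # the unique minimal model: (V, V, V)

# Example 4: empty story, so a and b must exist and satisfy p.
phi4 = intent(p)
mu4 = minimal(phi4)           # (1, 2, V), (2, 3, V), (3, 3, V)
```

Running this confirms the counts stated in the text: 27 models for Example 3 with a single minimal one, and three minimal models for Example 4, in each of which c fails to exist.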
The language L defined by this program is larger than the language in Example 3, because of the presence of the default language Ls provided by the Galileo run-time system; however, the free constraint network defined by this program is identical to that in Example 3.

A full description of the Galileo language is beyond the scope of this paper. However, it should be noted that the language supports the full FOFL. Thus, for example, although all sentences in the example theories considered above were ground, Galileo supports theories which contain arbitrarily nested quantified sentences. Also, although none of the programs considered here involved application-specific functions, the definition and use of such functions is supported.

Since FOFL subsumes FOCL, satisfiability for FOFL is undecidable. There cannot exist, therefore, an algorithm capable of performing automatic constraint satisfaction on an arbitrary network specified in Galileo. However, various forms of CSP can be solved, for a wide variety of networks, by the Galileo run-time system, which runs in either of two modes.

In autonomous mode, it is capable of solving the Minimal Enumeration CSP (and also, therefore, the Minimal Exemplification CSP and the decision CSP) for those free networks in which all variables have finite domains. It is capable of autonomously solving the Minimal Exemplification CSP (and, therefore, the decision CSP) for networks in which some or all variables have infinite domains, provided the networks are decidable by backtrack-free search.

In interactive mode, the run-time system can solve the Minimal Exemplification CSP (and, therefore, the decision CSP) for any network which the user, by non-monotonically augmenting the theory with additional assertions, renders decidable by backtrack-free search.
Since any model of an augmented theory is also a model of the original theory, a solution to the Minimal Exemplification CSP for the augmented network is a solution for the original network.

Full details of constraint satisfaction in Galileo are beyond the scope of this paper. However, an inference algorithm called Compound Propagation [3] is a central part of both the backtracking searcher and the non-backtracking searcher. The algorithm, whose top level is shown in Figure 1, involves interleaved application of three inference techniques:

- a version of local propagation of known states, extended to assimilate conditional existence; this is performed by invoking a lower-level procedure called LP;
- a version of arc consistency, generalized to infinite domains and constraints of arbitrary arity; this is performed by invoking procedure AC;
- a form of path consistency, generalized to infinite domains and constraints of arbitrary arity; this operation, which is performed by invoking procedure PC, is only applied to small portions of the network in certain very specific circumstances.

procedure CP(Assertions)
localvar Qlp, Qac, Qpc
begin
    Qlp ← Assertions; Qac ← {}; Qpc ← {};
    repeat
        LP(Qlp, Qac, Qpc);
        if Qac ≠ {} ∨ Qpc ≠ {} then
            repeat
                if Qac ≠ {} then AC(Qlp, Qac, Qpc);
                if Qlp = {} ∧ Qpc ≠ {} then PC(Qac, Qpc)
            until Qlp ≠ {} ∨ Qac = {}
    until Qlp = {};
end

Figure 1: Top-level of the CP algorithm.

8 Free Logic at Work

We have built several CAD applications using Galileo. Here, both to illustrate the utility in real-world applications of free logic⁷ and to touch, at least briefly, on quantification in Galileo, we provide one universally quantified constraint⁸ from an expert system on Design for Testability (DFT) [4]:

⁷Most practical applications involve the empty story.
⁸In Galileo, "(∀X)(p(X))" is written "all X : p(X)." This constraint uses Ω inside two levels of ∀.
Also, as indicated by the dot notation in "Y.maxfreq," it uses structured domains, another feature of Galileo which is beyond the scope of this paper.

all X : crystal(X) implies
    (all Y : tester(Y) and Y.maxfreq < X.freq implies
        exists(X.ancillary-circuit : divider))

This constraint represents the following information about the design of printed wiring boards, provided by our domain experts: "Each crystal on a board must have its own associated divider circuit if the crystal's oscillation frequency exceeds the maximum oscillation frequency that can be handled by any of the pieces of test equipment that will be used to analyze the board."

9 Comparative Discussion

By basing our work on free logic, we have been able to give our approach what seems to be a more developed theoretical underpinning than the only other known approach to conditional existence of network variables, that of Mittal and Falkenhainer [9]. In addition, our approach is more general. Their notion of a dynamic CSP is a special case of our concept of a minimal free enumeration CSP, as follows: all domains are finite; the number of potentially-existent variables is finite and a non-empty subset must be existent initially; quantified constraints are not allowed; and the notion of conditional existence can be used in only a few different ways, whereas, in Galileo, an expression of the form exists(object) can appear anywhere that an atom may appear in a free logic theory.

In [9], the distinction between "active" and "inactive" variables refers only to denotation. Our approach, by contrast, maintains two distinctions: denotation versus non-denotation, and existence versus non-existence in the name space.
This difference has an important pragmatic consequence: although space limitations prevent us from showing it here, our approach can handle problems with an infinite number of potentially-existent variables, because storage space is not used by network variables until such time as their actual existence is proven or disproven. Indeed, in most cases even the names of potentially-existent variables are generated only when the variables' existence is proven; these names can be regarded as equivalent to functional expressions involving the successor function in finite-object-vocabulary metatheories of first-order languages. In practical applications, this happens for only a finite subset of the infinity of variables. Therefore no storage burden is imposed by the infinity of other variables whose possible existence is irrelevant to the particular minimal model constructed.

Free logic is a principled attempt to remedy an ontological inadequacy of classical logic. This logic has long been studied by philosophers and has recently [6] attracted some attention in the Knowledge Representation literature. Nevertheless, although there are several computer languages which are explicitly based on first-order classical logic, our language, Galileo, seems to be the first based on free logic.

References

[1] Bowen J and Bahler D, 1990, "Improving Ontological Expressiveness in Constraint Processing," Technical Report, Department of Computer Science, North Carolina State University.

[2] Bowen J, O'Grady P and Smith L, 1990, "A Constraint Programming Language for Life-Cycle Engineering," International Journal for Artificial Intelligence in Engineering, 5(4), 206-220.

[3] Bowen J, Bahler D and Paramasivam M, 1991, "Compound Propagation: A Constraint Monitoring Algorithm," Technical Report TR-91-11, Department of Computer Science, North Carolina State University.
[4] Dholakia A, Bowen J and Bahler D, 1990, "Rick: A DFT Advisor for Digital Circuit Design," Technical Report, Department of Computer Science, North Carolina State University.

[5] Friedman G and Leondes C, 1969, "Constraint Theory, Part I: Fundamentals," IEEE Transactions on Systems Science and Cybernetics, SSC-5, 1, 48-56.

[6] Hirst G, 1989, "Ontological assumptions in knowledge representation," Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, 157-169.

[7] Lambert K and van Fraassen B, 1972, Derivation and Counterexample: An Introduction to Philosophical Logic, Encino, CA: Dickenson Publishing Company.

[8] Mackworth A, 1987, "Constraint Satisfaction," in S. Shapiro (ed.), The Encyclopedia of Artificial Intelligence, New York: Wiley, 205-211.

[9] Mittal S and Falkenhainer B, 1990, "Dynamic Constraint Satisfaction Problems," Proceedings of the Eighth National Conference on Artificial Intelligence, 25-32.

[10] Mulder J, Mackworth A and Havens W, 1988, "Knowledge Structuring and Constraint Satisfaction: The Mapsee Approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6), 866-879.
Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier
860, rue de Saint Priest
34090 Montpellier, FRANCE
Email: bessiere@xim.crim.fr

Abstract

Constraint satisfaction problems (CSPs) provide a model often used in Artificial Intelligence. Since the problem of the existence of a solution in a CSP is an NP-complete task, many filtering techniques have been developed for CSPs. The most used filtering techniques are those achieving arc-consistency. Nevertheless, many reasoning problems in AI need to be expressed in a dynamic environment, and almost all the techniques already developed to solve CSPs deal only with static CSPs. So, in this paper, we first define what we call a dynamic CSP, and then give an algorithm achieving arc-consistency in a dynamic CSP. The performances of the algorithm proposed here and of the best algorithm achieving arc-consistency in static CSPs are compared on randomly generated dynamic CSPs. The results show there is an advantage to using our specific algorithm for dynamic CSPs in almost all the cases tested.

1. Introduction

Constraint satisfaction problems (CSPs) provide a simple and good framework to encode systems of constraints, and are widely used for expressing static problems. Nevertheless, many problems in Artificial Intelligence involve reasoning in dynamic environments. To give only one example, in a design process the designer may add constraints to specify the problem further, or relax constraints when there are no more solutions (see the system to design peptide synthesis plans, SYNTHIA [Janssen et al. 1989]). In those cases we need to check if there still exist solutions in the CSP every time a constraint has been added or removed. Proving the existence of solutions or finding a solution in a CSP are NP-complete tasks. So a filtering step is often applied to CSPs before searching for solutions. The most used filtering algorithms are those achieving arc-consistency. All arc-consistency algorithms are written for static CSPs.
So, if we add or retract constraints in a CSP and achieve arc-consistency after each modification with one of these algorithms, we will probably do almost the same work many times over. So, in this paper we define a Dynamic CSP (DCSP) ([Dechter & Dechter 1988], [Janssen et al. 1989]) as a sequence of static CSPs, each resulting from the addition or retraction of a constraint in the preceding one. We propose an algorithm to maintain arc-consistency in DCSPs which outperforms those written for static CSPs.

The paper is organized as follows. Section 2 presents the CSP model (2.1) and defines what we call a Dynamic CSP (2.2). The arc-consistency filtering method is introduced and the best algorithm achieving it (AC-4 in [Mohr & Henderson 1986]) is recalled (2.3); why this algorithm is not well suited to DCSPs is discussed in 2.4. Section 3 presents the new algorithm, DnAC-4. In Section 4, a comparison of the two algorithms on randomly generated DCSPs is given. Section 5 contains a summary and some final remarks.

2.1. Constraint Satisfaction Problems

A static constraint satisfaction problem involves a set of variables X, a set of domains dom, and a set of constraints C. Each constraint Cp in C involves a subset {i1, ..., ip} of the variables and is labeled by a relation Rp of R, a subset of dom(i1) × ... × dom(ip), that specifies which values of these variables are compatible with each other. A binary constraint satisfaction problem is one in which all the constraints are binary, i.e. involve two variables. A binary CSP can be associated with a constraint-graph in which nodes represent variables and edges connect those pairs of variables for which constraints are given.

Figure 1: An example of CSP

BESSIERE 221

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

In that case, each edge of the graph is labeled by the constraint between the variables it connects (non-oriented edges are equality constraints and oriented ones are a strict lexicographic order along the arrows). A solution of a CSP is an assignment of values to all the variables such that all the constraints are satisfied. The task, in a CSP, is to find one, or all, of the solutions. We now only consider binary CSPs for clarity, but the results presented here can easily be applied to general CSPs [Bessiere 1991].

2.2. Dynamic Constraint Satisfaction Problems

A dynamic constraint satisfaction problem (DCSP) is a sequence of static CSPs P(0), ..., P(α), P(α+1), ..., where each P(α+1) results from a change in P(α) imposed by "the outside world". This change can be a restriction (a new constraint is imposed on a pair of variables) or a relaxation (a constraint that was present in the CSP is removed because it is no longer interesting or because the current CSP has no solution). So, if we have P(α) = (X, dom, C(α), R), we will have P(α+1) = (X, dom, C(α+1), R), where C(α+1) = C(α) ∪ {C} or C(α) \ {C}, C being a constraint; P(0) = (X, dom, {}, R).

2.3. Arc-consistency

The task of finding solutions in a CSP has been treated by several authors, and since the problem is NP-complete, some of them have suggested that a preprocessing or filtering step be applied before the search (or backtracking) procedures. Consistency algorithms were thus proposed ([Montanari 1974], [Mackworth 1977], [Freuder 1978], [Dechter & Pearl 1988]). These algorithms do not solve a CSP completely, but they eliminate, once and for all, local inconsistencies that cannot participate in any solution. These inconsistencies would otherwise have been repeatedly discovered by most backtracking procedures.

Figure 2: The CSP of fig. 1 after application of an arc-consistency algorithm

A k-consistency algorithm removes all inconsistencies involving all subsets of size k of the n variables [Freuder 1978]. In fact, the most widely used consistency algorithms are those achieving 2-consistency (or arc-consistency). Arc-consistency checks the consistency of values for each pair of nodes linked by a constraint and removes the values that cannot satisfy this local condition (see figure 2). It is very simple to implement and has good efficiency. The upper-bound time complexity of the best algorithm achieving arc-consistency (AC-4 in [Mohr & Henderson 1986]) is O(ed²), with e the number of constraints and d the maximal number of values in the domain of a variable. Arc-consistency can be seen as based on the notion of support.
A value a for the variable i is viable if there exists at least one value that "supports" it at each variable j. Mohr and Henderson's algorithm, AC-4, makes this evident by assigning a counter to each arc-value pair. Such pairs are denoted [(i, j), a] and indicate the arc from i to j with value a at node i. The edge {i, j} between i and j may be replaced by the two directed arcs (i, j) and (j, i), as they are treated separately by the algorithm (but we still have Rij = Rji⁻¹). The counters are designated counter[(i, j), a] and indicate the number of j values that support the value a for i in the constraint {i, j}. In addition, for each value b at node j, the set Sjb is constructed, where Sjb = {(i, a) / b at node j supports a at node i}; that is, if b is eliminated at node j, then the counters counter[(i, j), a] must be decremented for each (i, a) supported by (j, b). The algorithm also uses a table, M, to keep track of which values have been deleted from which nodes, and a list, List, to control the propagation of deletions along the constraints. List is initialized with all values (i, a) having at least one counter equal to zero; these values are removed from M. During the propagation phase, the algorithm takes values (j, b) from List, decrements counter[(i, j), a] for all (i, a) in Sjb, and when a counter[(i, j), a] becomes equal to zero, it deletes (i, a) from M and puts it in List. The algorithm stops when List is empty. That means all values in M have non-empty supports on all the constraints, so the CSP is arc-consistent, and M is the arc-consistent domain.

2.4. AC-4 in Dynamic CSPs

Mohr and Henderson's algorithm, AC-4, can be used in DCSPs. It keeps all its good properties when we do a restriction, starting the filtering from the current arc-consistent domain and pruning a new value when one of its counters has become zero (i.e. the value has no support on a constraint) after the addition of constraints.
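The counter-and-support machinery just described can be sketched in a few lines of Python (an illustrative sketch, not the paper's implementation; here constraints are given as predicates on directed arcs, with both directions present for each edge):

```python
from collections import defaultdict, deque

def ac4(domains, constraints):
    """AC-4-style filtering sketch. domains: {var: iterable of values};
    constraints: {(i, j): pred} where pred(a, b) is true iff b at j
    supports a at i."""
    dom = {i: set(vs) for i, vs in domains.items()}
    counter = {}             # counter[(i, j), a]: supports of (i, a) on arc (i, j)
    S = defaultdict(set)     # S[(j, b)]: arc-value pairs that (j, b) supports
    deleted = deque()        # the List of AC-4, driving propagation

    # initialization: count supports; delete values with a zero counter
    for (i, j), ok in constraints.items():
        for a in list(dom[i]):
            supports = [b for b in dom[j] if ok(a, b)]
            counter[(i, j), a] = len(supports)
            for b in supports:
                S[(j, b)].add(((i, j), a))
            if not supports:
                dom[i].discard(a)
                deleted.append((i, a))

    # propagation: decrement counters of everything a deleted value supported
    while deleted:
        j, b = deleted.popleft()
        for arc, a in S[(j, b)]:
            i = arc[0]
            counter[arc, a] -= 1
            if counter[arc, a] == 0 and a in dom[i]:
                dom[i].discard(a)
                deleted.append((i, a))
    return dom
```

For example, on two variables with domains {1, 2, 3} and the constraint x < y, the filtering leaves x ∈ {1, 2} and y ∈ {2, 3}.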
But, when we remove a constraint (making a relaxation), AC-4 cannot find which values must be put back and which must not: as it has "forgotten" the reason why a value has been removed, it cannot undo the propagation it performed during the restrictions. So, we have to start the filtering from the initial domain.

3. A New Algorithm: DnAC-4

As we have seen above, AC-4 does not have good properties (incrementality) for processing relaxations. So, in this section, we propose DnAC-4, a new arc-consistency algorithm for DCSPs. In DnAC-4 we extend AC-4 by recording some information during restrictions while keeping its good properties. Then, DnAC-4 remains incremental for relaxations.

3.1. Justifications

More precisely, during a restriction, for every value deleted, we keep track of the constraint at the origin of the deletion as the "justification" of the value deletion. The justification is the first constraint on which the value is without support. During a relaxation, with the help of justifications, we can incrementally add to the current domain the values that belong to the new maximal arc-consistent domain. But we need to be careful, because after the relaxation the system must be in the same state as if the algorithm had started with the initial CSP and performed restrictions with the whole new set of constraints: the new domain must be the maximal arc-consistent domain, and the set of justifications of removed values must remain well-founded. Well-founded means that every value removed is justified by a non-cyclic chain of justifications (see figure 2: the deletion of (2, c) justified by the constraint {2, 6}, of (6, c) by {6, 5} and of (5, c) by {5, 2} would not be a well-founded set of justifications). This process of storing a justification for every value deleted is based on the same idea as the system of justifications of deductions in truth maintenance systems (TMSs) [Doyle 1979], [McAllester 1980].

3.2. The Algorithm
A table D of which values are in the current domain or not. The first difference is that a set of supported values Sjib is constructed for each arc-value pair [u, 0, b]: S’& ={ a / b at node j supports a at node i) (we have sjb (of AC-4) qual I.0 USjib for u, i} E c). SO, when a COnStdnt (i, j) iS retracted, we delete Sqa and Sjib for all arc-value pairs [(i, j), a] and [(j, 0, b] instead of removing values (i, a) in Sjb and values u, b) in Sk. In the data structure we added a table justifto record the justifications of the values deleti justif(i, a)=j iff (i, a) has been removed from D because counter[(i, ~3, a] was equal to zero (i.e. (i, j} is the origin of (i, a) deletion). Then, for all (i, a) in D, justifli, a)=nil. The lists SL and RL respectively control the propagation of deletions and additions of values along the constraints. When the algorithm starts with (01, the tables are initialize& for each (i, a ) E dam begin D(i, a ) c lru Adding a constraint {i, j} is done by calling: ~r~~~,~~ Add ((i, j) ); Put {i, j} in the set of constraints C; SLe %; Beg-Add ((i, j), SL ); Beg-Add ((i, i), SL ); Propag-Suppress (SL ); nd; 0 the procedure Beg-Add (see fig.3) builds COUPzfer~(i, 51, Q], COUntePf~, i), b19 Sija, Sjib, for each in the suppression list SL on (i, j) (i.e. with counter ppend(SL, [(i, j ), a I); Figure 3 a in the procedure Propag-Suppress (see fig.4), values Propag-Suppress (w SL and remowe it f un?er((i, m ), a l-0 munte~(j i ), b ] f i), b ] - 1; if countefi(j, i ), b ]=O th &wnd(SL, [(it i), b] ) ; . 9 Figure 4 Removing a constraint (k, pn) is done by calling: Init-Propag-Relax ((k, m }, SL ); ropag-Suppress (SL ); . 9 The well-foundness property must be kept after the relaxation of a constraint. So, there are two parts in the BESSIERE 223 relaxation process: 0 partl: the procedure Init-Propag-Relax (see fig.5) in step 1 puts in the relaxation list RL values (k, a) and (m, b) for which removing was directly due to (k, m} (i.e. 
justif(k, a) = m or justif(m, b) = k), and deletes the counters and the sets of supported values for all arc-value pairs [(k, m), a] and [(m, k), b]. In step 2 it adds to D the values in RL, and these additions are recursively propagated to each value that has a support restored on the constraint marked as its justification. Init-Propag-Relax finishes when every value with a support on the constraint marked as its justification of deletion has been added to D. During this phase of putting back values, when an added value (i, a) is still without support on a constraint {i, j} (i.e. counter[(i, j), a] = 0), Init-Propag-Relax puts the arc-value pair [(i, j), a] in SL.

procedure Init-Propag-Relax({k, m}; var SL)
begin
    { Step 1: values whose justification was {k, m} are put in RL }
1   RL ← {};
2   for each a ∈ dom(k) do
        if justif(k, a) = m then begin
            Append(RL, (k, a)); justif(k, a) ← nil;
        end;
3   for each b ∈ dom(m) do
        if justif(m, b) = k then begin
            Append(RL, (m, b)); justif(m, b) ← nil;
        end;
4   Delete {k, m} from the set of constraints C and remove its counters and sets of supported values;
    { Step 2: values in RL are added to D and consequences are propagated }
5   while RL ≠ {} do begin
6       choose (i, a) from RL and remove it from RL;
7       D(i, a) ← true;
8       for each j / {i, j} ∈ C do begin
9           for each b ∈ Sija do begin
10              counter[(j, i), b] ← counter[(j, i), b] + 1;
11              if justif(j, b) = i then begin
                    Append(RL, (j, b)); justif(j, b) ← nil;
                end;
            end;
12          if counter[(i, j), a] = 0 then Append(SL, [(i, j), a]);
        end;
    end;
end;

Figure 5: The procedure Init-Propag-Relax.

• part 2: Propag-Suppress retracts again the values in SL, marking as the new justification the constraint on which the value is still without support (or one of these constraints if there are more than one). These suppressions are propagated: the classic arc-consistency process restarts.

We develop here DnAC-4 on the DCSP of figures 1, 2 and 6, to show the mechanism of justifications:

Changes:
Add {1,2}:  deletion of (2, a)
Add {2,3}:  deletion of (2, b), justification(2, b) = {2, 3}
            deletion of (2, c), justification(2, c) = {2, 3}
Add {2,4}
Add {2,5}:  deletion of (5, b), justification(5, b) = {2, 5}
            deletion of (5, c), justification(5, c) = {2, 5}
Add {5,6}:  deletion of (6, b), justification(6, b) = {5, 6}
            deletion of (6, c), justification(6, c) = {5, 6}
Add {6,2}
Relax {2,3}:
• step 1: (2, b) and (2, c) are added because justification(2, b) = {2, 3} and justification(2, c) = {2, 3}. So (5, b) and (5, c) are added, and then (6, b) and (6, c) too.
• step 2: (2, b) has no support on {2, 4}, so (2, b) is deleted and justification(2, b) = {2, 4}. So (5, b) and (6, b) are removed too by propagation, and their justifications are recorded: justification(5, b) = {2, 5} or {5, 6}, and justification(6, b) = {5, 6} or {6, 2} (it depends on the order of the propagation of suppressions).

Remarks:
- (2, a) is not added in step 1 of the relaxation process because its empty support on {1, 2} (its justification) is not affected by the retraction of {2, 3}.
- when step 1 starts, (2, c) has no support on {6, 2}, but since its justification is not {6, 2}, it is added, and the propagation shows that (2, c), if deleted, could not be supported by a well-founded set of justifications. (2, c) is in the new arc-consistent domain.
- at the end of step 1, (2, b) is still without support on the constraint {2, 4}, so the classic arc-consistency process restarts, deleting (2, b) and propagating.

Figure 6: The CSP of fig. 2 after the relaxation of the constraint {2, 3}

3.3. Correctness of DnAC-4
Notations; AC&imn)=the maximal arc-consistent domain of the CSP %M* dom = ((i, a) / i E X, a E dam(i)) (i, a) E D m D(i, a)=true TN3 = (I(i,j$ al / co~~d(i,jh 4 (* Tws : true withut s dam : p(E) = 3 (i, a) E E, justifli, a)=j =ClWS=0.Wecan looking lines g-9 of Suppress and lines 7 an Corollarv 1; D is an arc-consistent domain at the end of w that Bl is true after 0 at the end of Prop (pl)* V(i, a) ~5 D, Vj /(i, j} 4-5 iE : counter[(i, j), a] > 0 (*I=3 V(i, a) E D, Vj /{i, j} E C : (i, a) has a support (j, b) on {i, j) in D * D is arc-consistent CJ At the end of Add, D is arc-consistent. lax, D is arc-consistent. ( AC&iom) c;I; D ) is not affected Proof: Suppose ( A&(dom) G D ) is true when Pr~pag- Suppress starts. A value is removed from D if one of its counters i zero. So, it has no D on one before its deletion, the value remains true after the deletion of the v M) G D ) is true at the end of Init- Proof: AC&O~)W # 0 implies (from (5)) that: 3 (i, a) E AC&om Now justifli, a)=j so: Siia (I AC&rdom) = 0. It is a contradiction because (i, a) E A~~~~o~~ e AC&om)=D. cl of 2ed counters (with e the maximal number of values in for DnAC-4 is during a restriction, DnAC-4 builds Sjib and countea[(i,j), a] even after the deletion of the value (i, a) , because it needs these informations for an tic future relaxati -4 stops this work, as scm as (i, a) is out of D. d, during a relaxation, DnAC-4 only ch to verify the property of justifications. AC-4 handles all the new CSP. DnAC-4 is efficient w en the phase of adding values is short. This is case when the constraint graph is not connectd. n, the propagation stay in one connected- first constraint for means that when a algorithm probably many justifications, and the propagation larger. , we counted the total number of consistency checks done in AC-4 and in DnAC-4 to achieve arc-consistency when we add BESSIERE 225 number of consistency checks done in AC-4 and in DnAC-4 to achieve again arc-consistency in the CSP. 
We summed for each algorithm the number of consistency checks done during restrictions and relaxation. The comparison of the results indicates if DnAC-4 is better than AC-4 after only one relaxation (i.e. the number of consistency checks avoided during the relaxation is more important than the number done in excess during restrictions). We tested the algorithms on random DCSPs with 8, 12 and 16 variables, having respectively 16, 12 and 8 values. We tried three values for (pc, pu): (35,65), (50,50) and (65,35). For each of the nine classes of CSPs defined, we made the test on 10 different instances of DCSPs to have a result representative of the class. The results reported in the table below are the averages of the ten tests for each class.
Figure 7: Results of comparison tests between AC-4 and DnAC-4 (columns: relaxation AC-4, relaxation DnAC-4, total AC-4, total DnAC-4)
We can see that on all the classes of problems tested, after one relaxation of a constraint DnAC-4 has recovered the time lost during restrictions. We found only three instances, in class 3, where AC-4 remains better than DnAC-4 after one relaxation. But in that class, CSPs are too restricted, and much more than one relaxation is needed before the CSP accepts solutions. So, we can say that DnAC-4 can easily recover its extra time consumption. The results after one relaxation in classes 1, 4 and 7 are not really significant because CSPs in those classes are underconstrained, and doing a relaxation in that case is unlikely. The last remark we can add is that randomly generated CSPs are not the best way to test the efficiency of an algorithm. Constraints that are created are meaningless, and the propagations during relaxations, always found very short in our tests, could be larger in real applications, and so the algorithm DnAC-4 could be less advantageous. But the gain during a relaxation is so important here in all DCSPs tested that we can hope DnAC-4 remains good on real applications.
DnAC-4 is currently under implementation in the SYNTHIA system [Janssen et al. 1989].
5. Conclusion
We have defined what we call Dynamic CSPs and have provided an efficient algorithm (DnAC-4) achieving arc-consistency in DCSPs. We have compared DnAC-4 and AC-4 (the fastest arc-consistency algorithm for static CSPs) on many different classes of problems. DnAC-4 uses a little more space and time during restrictions, but it recovers an arc-consistent domain after a relaxation much more efficiently, because it has learned information about the reasons of the deletions of values. DnAC-4 can be useful for many systems that work in a dynamic environment. It can easily be extended to non-binary CSPs (see [Bessière 1991]). The data structure created for the algorithm DnAC-4 can be used to answer requests of the system (or the user) like: "why has this value been deleted?". The answer given is then the set of constraints currently justifying the deletion of the value. It is a TMS-like use.
Acknowledgements
I would like to thank Marie-Catherine Vilarem, who gave me advice and invaluable help in preparing this paper, and also Philippe Janssen and Philippe Jégou for their useful comments.
References
Bessière, C. 1991. Using CSPs to encode TMSs. Technical Report, LIRMM, Université Montpellier II, France.
Dechter, R., and Dechter, A. 1988. Belief Maintenance in Dynamic Constraint Networks. In Proceedings AAAI-88, St Paul MN, 37-42.
Dechter, R., and Pearl, J. 1988. Network-Based Heuristics for Constraint-Satisfaction Problems. Artificial Intelligence 34, 1-38.
Doyle, J. 1979. A Truth Maintenance System. Artificial Intelligence 12, 231-272.
Freuder, E.C. 1978. Synthesizing Constraint Expressions. Communications of the ACM, Vol. 21, No. 11, 958-966.
Janssen, P.; Jégou, P.; Nouguier, B.; and Vilarem, M.C. 1989. Problèmes de Conception : une Approche basée sur la Satisfaction de Contraintes. 9èmes Journées Internationales d'Avignon : Les Systèmes Experts et leurs Applications, 71-84.
Mackworth, A.K. 1977. Consistency in Networks of Relations. Artificial Intelligence 8, 99-118.
McAllester, D.A. 1980. An Outlook on Truth Maintenance. Technical Report, AI Memo No. 551, MIT, Boston.
Mohr, R., and Henderson, T.C. 1986.
Arc and Path Consistency Revisited. Artificial Intelligence 28, 225-233.
Montanari, U. 1974. Networks of Constraints: Fundamental Properties and Applications to Picture Processing. Information Science 7, 95-132.
226 CONSTRAINT-BASED REASONING
Eliminating Interchangeable Values in Constraint Satisfaction Problems
Eugene C. Freuder
Department of Computer Science
University of New Hampshire
Durham, NH 03824
ecf@cs.unh.edu
Abstract
Constraint satisfaction problems (CSPs) involve finding values for variables subject to constraints on which combinations of values are permitted. This paper develops a concept of interchangeability of CSP values. Fully interchangeable values can be substituted for one another in solutions to the problem. Removing all but one of a set of fully interchangeable values can simplify the search space for the problem without effectively losing solutions. Refinements of the interchangeability concept extend its applicability. Basic properties of interchangeability and complexity parameters are established. A hierarchy of local interchangeability is defined that permits recognition of some interchangeable values with polynomial time local computation. Computing local interchangeability at any level in this hierarchy to remove values before backtrack search is guaranteed to be cost effective for some CSPs. Several forms of weak interchangeability are defined that permit eliminating values without losing all solutions. Interchangeability can be introduced by grouping values or variables, and can be recalculated dynamically during search. The idea of interchangeability can be abstracted to encompass any means of recovering the solutions involving one value from the solutions involving another.
Introduction
A solution to a constraint satisfaction problem (CSP) finds values for variables subject to constraints on what combinations of values are permissible. This paper develops a concept of interchangeability of CSP values. Interchangeable values will be in a sense redundant. Their removal will simplify the problem space.
Definition: A value b for a CSP variable V is fully interchangeable with a value c for V iff
1. every solution to the CSP which contains b remains a solution when c is substituted for b, and
2. every solution to the CSP which contains c remains a solution when b is substituted for c.
In other words the only difference in the sets of solutions involving b and c are b and c themselves. We can replace a set of fully interchangeable values with a single representative of the set without effectively losing any solutions. It is not necessary to retain all the fully interchangeable values during the search process, for solutions involving one can easily be recovered from solutions involving another. I extend this basic insight in a number of directions to make it more useful in practice.
Figure 1 shows a simple graph coloring problem: color the vertices so that no two vertices which are joined by an edge have the same color. The available colors at each vertex are shown. The colors green, maroon, purple, white and yellow for vertex Y are fully interchangeable. For example, substituting maroon for green in the solution red|X (red for X), green|Y, blue|Z yields another solution red|X, maroon|Y, blue|Z.
Figure 1. Full interchangeability.
My intuition is that real-world problems may well contain values which are, more or less, interchangeable. In configuration tasks, for example, we may find that, for a particular piece of a particular assembly, several stock parts can serve equally well. In a conventional CSP algorithm needless search effort might be expended on these interchangeable parts. Forms of interchangeability have been used by Van Hentenryck to reduce the search space for car-sequencing and graph-coloring problems (Van Hentenryck 1988, 1989). Other related work can be found in (Yang 1990) and (Mackworth, Mulder, & Havens 1985).
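Full interchangeability as just defined can be checked directly, if expensively, by enumerating all solutions. The sketch below is my own illustrative encoding (binary constraints as sets of allowed value pairs) applied to a tiny coloring problem in the spirit of Figure 1; the specific domains are assumptions, not the paper's figure.

```python
from itertools import product

# Brute-force check of full interchangeability. Illustrative only: it
# enumerates every solution, which is exactly the cost that the local
# forms of interchangeability introduced later are designed to avoid.
# The encoding (constraints as sets of allowed value pairs) is mine.

def solutions(variables, domains, constraints):
    """Yield every assignment satisfying all binary constraints."""
    for values in product(*(domains[v] for v in variables)):
        asg = dict(zip(variables, values))
        if all((asg[x], asg[y]) in ok for (x, y), ok in constraints.items()):
            yield asg

def fully_interchangeable(b, c, V, variables, domains, constraints):
    """b and c for V are fully interchangeable iff swapping them maps
    every solution containing one onto a solution containing the other."""
    sols = {frozenset(s.items()) for s in solutions(variables, domains, constraints)}
    swap = {b: c, c: b}
    for s in solutions(variables, domains, constraints):
        if s[V] in swap:
            t = dict(s)
            t[V] = swap[s[V]]
            if frozenset(t.items()) not in sols:
                return False
    return True

# A tiny coloring problem (domains are my own, in the spirit of Figure 1):
doms = {'X': {'red'}, 'Y': {'green', 'maroon', 'blue'}, 'Z': {'blue'}}
cons = {('X', 'Y'): {(x, y) for x in doms['X'] for y in doms['Y'] if x != y},
        ('Y', 'Z'): {(y, z) for y in doms['Y'] for z in doms['Z'] if y != z}}
print(fully_interchangeable('green', 'maroon', 'Y', ['X', 'Y', 'Z'], doms, cons))  # True
print(fully_interchangeable('blue', 'green', 'Y', ['X', 'Y', 'Z'], doms, cons))    # False
```

Here green and maroon for Y can be swapped in every solution, while blue for Y participates in none, so substituting it into a solution breaks the Y-Z constraint.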
(It may well be that puzzles, like the 8-queens problem, often used to illustrate CSP algorithms, will not benefit greatly from basic interchangeability techniques, that they are puzzles in part because they are unusually particular about which pieces of the puzzle will fit together. However, advanced forms of interchangeability may still be of use; see the section on functional interchangeability below.)
Interchangeability techniques complement the usual CSP inconsistency methods, which attempt to remove values that will not participate in any solution (Mackworth 1977; Freuder 1978). These techniques can lead to removal of values that may well participate in solutions. It is just that these values succeed (or possibly fail) "equally well". Inconsistency can also be viewed as a special case of interchangeability: inconsistent values all participate in the same set of solutions: the empty set. Removing interchangeable values complements work on removing redundant constraints from CSPs (Dechter & Dechter 1987). Interchangeability emphasizes what I call the microstructure of a CSP. The microstructure involves the pattern of consistency connections between values as opposed to variables. Eliminating interchangeable values can prune a great deal of effort from a backtrack search tree. The example of Figure 1 demonstrates this on a very small scale: with variables and values chosen in lexicographic order during search, eliminating redundant interchangeable values results in a backtrack search tree with half as many branches. If we are seeking all solutions, interchangeability allows us to find a family of similar solutions without what might otherwise involve a complete duplication of effort for each member of the family.
FREUDER 227
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
The processing necessary to automate the removal of interchangeable values may prove particularly useful in contexts where constraint networks are used as knowledge bases subject to multiple queries, over which such preprocessing can be amortized. For simplicity I will assume binary CSPs, which involve only constraints between two variables. However, interchangeability clearly applies to non-binary CSPs (and non-binary CSPs can be transformed into binary ones (Rossi, Dhar, & Petrie 1989)). Section 2 discusses local forms of interchangeability. Section 3 provides means of taking advantage of interchangeability even when such opportunities are not strictly or immediately available. Along the way I introduce a new, concrete methodology for evaluating CSP search enhancements with best and worst case constructions, and suggest that interchangeability can motivate concept formation and problem decomposition.
Local Interchangeability
Completely identifying fully interchangeable values would seem, in general, to require solving for all solutions. This section identifies various forms of local interchangeability that are more tractable computationally. This section defines a basic form of local interchangeability, neighborhood interchangeability. Neighborhood interchangeability is a sufficient, but not necessary, condition for full interchangeability. All redundancy induced by neighborhood interchangeability can be removed from a CSP in quadratic time.
Consider the coloring problem in Figure 2. Colors red and white for vertex W are interchangeable from the point of view of the immediate neighbors, X and Y. Red and white are both consistent with any choice for X and Y. The blue value for W is different; it obviously is not consistent with a choice of blue for either X or Y.
Figure 2. Neighborhood interchangeability.
We will say that red and white are neighborhood interchangeable values for W.
The term "neighborhood" is not being used simply because it is motivated by the coloring problem. CSPs are commonly represented by constraint graphs, where vertices correspond to variables and edges to constraints. (Constraint graphs for graph coloring problems conveniently, or confusingly, have the same graph structure as the graph to be colored.) In general we have the following:
Definition: Two values, a and b, for a CSP variable V, are neighborhood interchangeable iff for every constraint C on V: { i | (a,i) satisfies C } = { i | (b,i) satisfies C }.
Notice that the blue value for W is in fact fully interchangeable with red and white, even though it is not neighborhood interchangeable with them. The reason for this is that blue must be chosen for Z, thus cannot be chosen for X or Y. Thus blue for W will fit into any complete solution that red or white will. The incompatibility of blue for W with blue for X and Y does not matter in the end. On the other hand, since red and white are neighborhood interchangeable, there is no way they could fail to be interchangeable in any complete solution: there is no constraint that one could satisfy and not the other. More generally we have the following simple theorem:
Theorem 1: Neighborhood interchangeability is a sufficient, but not a necessary condition for full interchangeability.
To identify neighborhood interchangeable values we can construct discrimination trees. The leaves of the trees will be the equivalence classes of neighborhood interchangeable values. The process relies on a canonical ordering for variable and value names; without loss of generality we will assume lexicographic ordering.
green|X -- yellow|X -- green|Y -- yellow|Y {b}
blue|X -- green|X -- yellow|X -- blue|Y -- green|Y -- yellow|Y {r, w}
Figure 3. Discrimination tree.
Figure 3 shows the discrimination tree for variable W. For the blue value we build a branch containing first the consistent values for X, then the consistent values for Y.
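The definition can be tested directly by comparing support sets. In this minimal sketch the function names are mine, and the domains for X and Y (blue, green, yellow, with "not equal" constraints) are assumptions read off Figures 2 and 3:

```python
# Direct test of the definition: a and b for V are neighborhood
# interchangeable iff their support sets agree on every constraint on V.
# Domains below are assumed from Figures 2 and 3 (X and Y range over
# blue, green, yellow; every constraint is "not equal").

def neighborhood_interchangeable(a, b, constraints):
    """`constraints`: for each neighbor of V, the set of allowed
    (V-value, neighbor-value) pairs."""
    for allowed in constraints.values():
        support_a = {w for (v, w) in allowed if v == a}
        support_b = {w for (v, w) in allowed if v == b}
        if support_a != support_b:
            return False
    return True

neighbor_values = ('blue', 'green', 'yellow')
cons_W = {N: {(c, x) for c in ('blue', 'red', 'white')
              for x in neighbor_values if c != x}
          for N in ('X', 'Y')}
print(neighborhood_interchangeable('red', 'white', cons_W))  # True
print(neighborhood_interchangeable('red', 'blue', cons_W))   # False
```

Red and white have identical supports on both neighbors; blue loses its support for the neighbors' blue values, so it falls into a different class.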
We do the same for the red value. As we start at the root to build the branch for the white value, we see that the first consistent value, blue for X, is already present as one of the children, so we move down to it. We keep following preexisting branches as long as we can; in this case, we follow one all the way to the end, and red and white are found to be equivalent.
Neighborhood Interchangeability Algorithm: Finds neighborhood interchangeable values in a CSP.
Repeat for each variable:
  Build a discrimination tree by:
  Repeat for each value, v:
    Repeat for each neighboring variable W:
      Repeat for each value w consistent with v:
        Move to, if present, construct if not, a node of the discrimination tree corresponding to w|W
A complexity bound for this algorithm can be found by assigning a worst case bound to each repeat loop. Given n variables, at most d values for a variable, we have the bound (the factors correspond to the repeat loops in top-down order):
O(n * d * (n-1) * d) = O(n^2 d^2)
While this algorithm will find all neighborhood interchangeable values exhaustively, a practitioner might observe some interchangeabilities informally. Semantic groupings can help suggest where to look. For example, in Figure 4, we see a variation on the coloring problem, where the allowable color combinations, e.g. red for X and orange for Y, are indicated by links. (Note that this is unlike the usual constraint graph convention where constraint graph edges represent entire constraints. We are representing the microstructure of the problem here: each link joins an allowable pair of values.) Red and orange are interchangeable, for both X and Y, as are blue and green. Semantic knowledge of the "warm" and "cool" color concepts might suggest looking for such interchangeability.
Figure 4. Semantic interchangeability.
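The algorithm above can be sketched with the discrimination tree as a nested dictionary: `setdefault` performs the "move to if present, construct if not" step, and values that finish on the same leaf form one equivalence class. Function names and the Figure 2 domains are my assumptions:

```python
# Sketch of the Neighborhood Interchangeability Algorithm for one
# variable. The discrimination tree is a nested dict; each value walks
# the tree along its consistent (value, neighbor) nodes in canonical
# order, and values ending at the same leaf are equivalent.

def ni_classes(domain_V, neighbors, consistent):
    """`neighbors`: canonically ordered neighbor variables of V.
    `consistent(v, W)`: ordered values of W consistent with v for V."""
    root = {}
    leaf_class = {}          # id of leaf node -> values sharing that leaf
    for v in sorted(domain_V):
        node = root
        for W in neighbors:
            for w in consistent(v, W):
                # "Move to if present, construct if not" a node for w|W.
                node = node.setdefault((w, W), {})
        leaf_class.setdefault(id(node), []).append(v)
    return list(leaf_class.values())

# Figure 2 again (domains assumed as in Figure 3):
doms = {'X': ['blue', 'green', 'yellow'], 'Y': ['blue', 'green', 'yellow']}
cls = ni_classes(['blue', 'red', 'white'], ['X', 'Y'],
                 lambda v, W: [w for w in doms[W] if w != v])
print(sorted(map(sorted, cls)))  # [['blue'], ['red', 'white']]
```

Red and white trace identical branches and share a leaf, reproducing the {r, w} class of Figure 3, while blue's shorter branch ends at its own leaf.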
On the other hand it might be interesting to view the grouping by an interchangeability algorithm into equivalence classes of red and orange, blue and green, as a concept formation process. The "functional" semantics inherent in the underlying problem (perhaps a problem in decoration or design) motivates creation of classes of colors, corresponding to the conventional concepts of warm and cool.
K-Interchangeability
This section introduces levels of local interchangeability through a concept called k-interchangeability. K-interchangeability involves CSP subproblems. For our purposes a subproblem of a CSP will consist of a subset S of the CSP variables along with all the constraints among them; call this the subproblem induced by S.
Definition: Two values, a and b, for a CSP variable V, are k-interchangeable iff a and b are fully interchangeable in any subproblem of the CSP induced by V and k-1 other variables.
Observe that 2-interchangeability is equivalent to neighborhood interchangeability. (Values will be trivially interchangeable in a subproblem if there are in fact no constraints in the subproblem.) For a problem with n variables, n-interchangeability is equivalent to full interchangeability. (A set is, of course, a subset of itself.) The term local interchangeability will be used to refer to k-interchangeability for k<n. The theorem of the previous section generalizes:
Theorem 2. For i<j, i-interchangeability is a sufficient, but not necessary condition for j-interchangeability. In particular, any level of local interchangeability is sufficient to ensure full interchangeability.
Proof. As the level of interchangeability increases we can only increase the size of the interchangeability equivalence classes. A solution to a subproblem may fail to be part of a solution to a larger subproblem, removing an impediment to interchangeability as k increases. Thus the condition is not a necessary one. It is sufficient, however.
Suppose a and b, values for V, are i-interchangeable. I claim they are j-interchangeable. Suppose not. Then there is a j-tuple subproblem solution where substituting a for b (or vice versa) fails to produce another solution. The failure involves at least one constraint, between V and another variable, say U. Throw away j-i elements of the j-tuple solution, but make sure you keep the values for V and U. You now have a solution to an i-tuple of variables where substituting a for b (or vice versa) fails to produce another solution. This contradicts the assumed i-interchangeability. QED.
The algorithm for finding neighborhood interchangeable values generalizes to an algorithm for finding k-interchangeability. The assumed ordering of the variables and values induces a canonical ordering of variable and value tuples. Each entry in the discrimination net is now a (k-1)-tuple of values.
K-Interchangeability Algorithm: Finds k-interchangeable values in a CSP.
Repeat for each variable:
  Build a discrimination tree by:
  Repeat for each value, v:
    Repeat for each (k-1)-tuple of variables W:
      Repeat for each (k-1)-tuple of values w, which together with v constitute a solution to the subproblem induced by W:
        Move to, if present, construct if not, a node of the discrimination tree corresponding to w|W
Complexity Analysis
This section is concerned with local interchangeability complexity issues. A complexity bound for the k-interchangeability algorithm is obtained, O(n^k d^k), where d is the maximum number of values (the size of the domain) for any variable. There is reason to believe that this is an optimal upper bound. I prove that for any level of local interchangeability there are cases in which preprocessing to remove redundant k-interchangeable values before backtracking, k-interchangeability preprocessing, will be cost effective.
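A brute-force check of the definition (not the discrimination-net algorithm itself) can make the k-interchangeability hierarchy concrete. The encoding below is my own, and the small CSP is a simplified stand-in for Figure 2, in which blue for W is fully interchangeable with red yet not 2-interchangeable with it:

```python
from itertools import combinations, product

# Brute-force test of the k-interchangeability definition: a and b for V
# must be fully interchangeable in every subproblem induced by V and k-1
# other variables. Names and the small CSP are my own (a simplified
# Figure 2: Z is forced to blue, so blue for W is fully interchangeable
# with red although not 2-interchangeable with it).

def sub_solutions(vars_, domains, allowed):
    """Solutions of the subproblem induced by vars_ (only the
    constraints among them apply)."""
    for vals in product(*(domains[v] for v in vars_)):
        asg = dict(zip(vars_, vals))
        if all((asg[x], asg[y]) in ok for (x, y), ok in allowed.items()
               if x in asg and y in asg):
            yield frozenset(asg.items())

def k_interchangeable(a, b, V, k, variables, domains, allowed):
    for rest in combinations([u for u in variables if u != V], k - 1):
        sols = set(sub_solutions([V, *rest], domains, allowed))
        for s in sols:
            asg = dict(s)
            if asg[V] in (a, b):
                asg[V] = b if asg[V] == a else a
                if frozenset(asg.items()) not in sols:
                    return False
    return True

doms = {'W': ['blue', 'red', 'white'], 'X': ['blue', 'green'],
        'Y': ['blue', 'green'], 'Z': ['blue']}
ne = lambda p, q: {(u, v) for u in doms[p] for v in doms[q] if u != v}
allowed = {('W', 'X'): ne('W', 'X'), ('W', 'Y'): ne('W', 'Y'),
           ('X', 'Z'): ne('X', 'Z'), ('Y', 'Z'): ne('Y', 'Z')}
print(k_interchangeable('red', 'blue', 'W', 2, list(doms), doms, allowed))  # False
print(k_interchangeable('red', 'blue', 'W', 4, list(doms), doms, allowed))  # True
```

Raising k from 2 to n (here 4) brings in the constraint through Z that rules blue out of X and Y, at which point red and blue for W become interchangeable, exactly the gap between Theorem 1's two conditions.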
The complexity analysis of the k-interchangeability algorithm is similar to that for neighborhood interchangeability; allowing for a worst case O(n^(k-1)) (k-1)-tuples of variables and d^(k-1) (k-1)-tuples of values, we get a bound:
O(n * d * n^(k-1) * d^(k-1)) = O(n^k d^k).
The algorithm includes a brute force search for all the solutions of each subproblem. Performance might be improved by carrying out these searches more efficiently in advance. On the other hand O(n^k d^k) seems likely to be an optimal worst case bound for finding all the subproblem solutions. Since it is hard to imagine how all k-interchangeable values could be identified without completely solving all these subproblems, O(n^k d^k) seems likely to be a tight worst case bound for this identification. Once the equivalence classes of k-interchangeable values are identified, one representative of each class can be retained and the rest of the class declared redundant and removed (within the same time bound obviously).
Despite this potentially costly worst case behavior, I claim that removing redundant interchangeable values can yield great savings in some cases. Furthermore this is true even for large k. For any k<n there are problems for which preprocessing to remove redundant k-interchangeable values before backtracking will be cost effective. In fact for any k<n, k-interchangeability preprocessing is arbitrarily effective in the sense that whatever savings you specify, I can find a CSP for which k-interchangeability preprocessing saves that amount of effort.
The basic observation is that eliminating an interchangeable value prunes the subtree below that value in the backtrack search tree. If all values of a single variable are found to be interchangeable, we have effectively eliminated a level of the backtrack tree; we are left with a maximum of d^(n-1) search tree branches, rather than d^n. If all values of i variables are interchangeable the search tree has at most d^(n-i) branches.
If all values of all variables are found to be interchangeable, we are effectively done (anything is a solution). If all values of one variable are interchangeable and they all participate in no solutions, we are effectively done (there is no solution).
Theorem 3. For any number of variables n>2, any k<n, and any computation cost C, there exist CSPs for which the cost of solving by backtrack search exceeds by more than C the cost of solving by k-interchangeability preprocessing followed by backtrack search. This is true regardless of whether we take "solving" to mean finding a single solution, finding all solutions or finding that there is no solution.
Proof. For each interpretation of "solving" I demonstrate that I can construct CSPs with the desired property. I show that the best results we can hope for on those CSPs without preprocessing is worse, by at least C, than the worst behavior we can expect with preprocessing. For simplicity assume that a constructed CSP will have the same number of values, d, for each variable. Assume that backtrack search instantiates variables and chooses values according to their lexicographic ordering.
Consider first the case where there is no solution: We construct a CSP where failure occurs during backtrack search only when instantiating the last variable, V, and conducting the last consistency check, against variable U. Backtrack search will thus need to examine the complete search space tree. There will be d^n branches in the search tree. Since between most pairs of variables there is really no constraint, I will only count one constraint check per branch. The effort for backtrack search is then c1*d^n + c2, for appropriate constants c1 and c2. Substituting n-1 into the k-interchangeability bound, we obtain a worst case effort for interchangeability preprocessing of c3*n^(n-1)*d^(n-1) + c4, for appropriate constants c3 and c4.
The algorithm which identifies (n-1)-interchangeability will discover, and can be trivially altered to report, that all the values of V fail to participate in any solutions to subproblems involving U. This is sufficient to determine that the CSP has no solution, without any subsequent backtrack search. Now the question is can it be true that:
c1*d^n + c2 > c3*n^(n-1)*d^(n-1) + c4 + C ?
Simple algebra tells us that this will be true if:
d > c5*n^(n-1) + c6
for appropriate constants c5 and c6. In other words, we only need construct a problem where the number of values is sufficiently large in comparison with the number of variables. (The nature of "sufficiently large" may be offputting, but bear in mind this is a worst case scenario for interchangeability involving (n-1)-interchangeability. Also observe that there are simple cases with 2 variables and 3 values that demonstrate the desirability of even (n-1)-interchangeability preprocessing.)
Now consider the case where we search for a single solution: The construction is similar. This time the last variable in the search tree, V, will be such that k-interchangeability preprocessing reduces the domain of V to a single value. This value will only be consistent with the last value, in the ordering of values, for each other variable. Thus there will be a single solution, which will appear as the "rightmost" branch of the search tree. Again all pairs of values between variables other than V are consistent. We have at most d^(n-1) branches in the search tree. (Actually interchangeability redundancy removal should prune the search tree further.) For each branch there are n-1 non-trivial constraints to check, those between V and the other n-1 variables. Thus the backtrack search effort, after preprocessing reduces the number of values for V to one, is at most: c1*(n-1)*d^(n-1) + c2. Adding the effort for (n-1)-interchangeability preprocessing we have a total effort of:
(c1*(n-1)*d^(n-1) + c2) + (c3*n^(n-1)*d^(n-1) + c4).
Without preprocessing the search tree will have d^n - (d-1) branches. Search will repeatedly try all d values for V, until success is achieved with the last value for every other variable. Thus search effort will be: c5*(n-1)*(d^n - (d-1)) + c6. We want:
(c1*(n-1)*d^(n-1) + c2) + (c3*n^(n-1)*d^(n-1) + c4) < c5*(n-1)*(d^n - (d-1)) + c6 - C.
Simplifying, it is sufficient that:
d > c7*n^(n-1) + c8
for appropriate constants c7 and c8.
Finally, consider the case where we are looking for all solutions: This time construct a CSP where k-interchangeability reduces a variable V to a single value and that variable is the first variable to be instantiated in the backtrack search order. Consider the final variable, W. Arrange that only the final value for W is consistent with anything, and it is only consistent with the final value for each of the other variables (with the exception of V; it is consistent with all values of V). Other pairs of values are all consistent. Thus backtrack search will check out d^(n-1) branches before finding the first solution. Backtrack search after interchangeability preprocessing will require if anything less effort to reach the first solution (interchangeability can reduce the number of values for other variables). Once the first solution has been found, interchangeability preprocessing will permit simply substituting to obtain the other d-1 solutions, while backtrack search requires searching another size d^(n-1) search tree for each additional solution. Clearly, for sufficiently large d, the savings here can be as large as we like. Q.E.D.
Observe that the constructions in the proof work for any level of k-interchangeability, including the lowest. If 2-interchangeability, for example, is sufficient to create a situation like those constructed above, the savings can be dramatic, even without an inflated value for d, and not just for finding multiple solutions.
It may be that the potential payoff from relatively inexpensive 2-interchangeability preprocessing, or the potential cost of doing without it, will motivate routine preprocessing for 2-interchangeability.
Theorem 4: For any n>2 and any d, there exist CSPs with n variables and d values for each, for which the savings achieved by preprocessing for 2-interchangeability is O(n^2 d^n).
Proof. Construct a CSP with no solution where none of the values for one variable, V, are consistent with any value for one of the others, U, while all other value pairs are permitted. 2-interchangeability will discover that there is no solution with O(d^2 n^2) effort. Assume a search order where U is the first variable to be instantiated and V the last. This will require a full backtrack tree search with O(n^2 d^n) effort. The difference is O(n^2 d^n). Q.E.D.
Observe further that the cost of (n-1)-interchangeability preprocessing is such that the arguments in Theorem 3 focused not on how easily we could proceed with the search after removing redundant values, but on ensuring that backtrack search without preprocessing would require a sufficiently large effort. The calculations did not take into account that interchangeability can affect more than a single variable. Even 2-interchangeability can significantly reduce the number of values for many of the variables (in the extreme down to a single value for each variable) resulting in a major savings in the effort required to find one, or all, solutions. Note that very similar arguments to those in this section should produce similar results regarding the efficacy of k-consistency preprocessing. Indeed constructions in this section are reminiscent of the basic "thrashing" arguments that long ago pointed out problems with conventional backtrack search, motivating consistency "relaxation" preprocessing techniques among other refinements (Bobrow & Raphael 1974).
However, I do not believe that this kind of concrete analysis has been carried out previously for thrashing type behavior.
Weak Interchangeability
This section provides means of taking advantage of interchangeability even when such opportunities are not strictly or immediately available. This section defines several forms of weak interchangeability. These may involve sacrificing some solutions, but this will not matter if we are seeking a single solution. Locally computable forms of these concepts are available.
Substitutability. The simplest form of weak interchangeability is substitutability; this captures the idea that interchangeability can be restricted to a "one-way" concept.
Definition: Given two values, a and b, for a CSP variable V, a is substitutable for b iff substituting a in any solution involving b yields another solution.
We can remove b from the problem, knowing that we have not removed all the solutions. If there was any solution involving b, there will remain a solution where a is substituted for b. However we cannot recover solutions involving b by substituting b in the solutions involving a, as we do not know which, if any, of those substitutions produce solutions. If each of two values can be substituted for the other, the two values are fully interchangeable. Substitutability can be computed locally. In particular, we have:
Definition: For two values, a and b, for a CSP variable V, a is neighborhood substitutable for b iff for every constraint C on V: { i | (a,i) satisfies C } ⊇ { i | (b,i) satisfies C }.
In the example of Figure 2, red is neighborhood substitutable for blue for variable W, even though red and blue are not neighborhood interchangeable.
Partial Interchangeability. Partial interchangeability captures the idea that values for variables may "differ" among themselves, but be fully interchangeable with respect to the rest of the world.
Definition: Two values are partially interchangeable, with respect to a subset S of variables, iff any solution involving one implies a solution involving the other with possibly different values for S.
When S is the empty set, the values are fully interchangeable. Figure 5 presents an example of partial interchangeability: blue and red for W are partially interchangeable, with respect to the set {X}. Note: blue and red for W are interchangeable as far as V is concerned; blue for W goes with red for X, and red for W with blue for X; blue and red for X are interchangeable as far as Y and Z are concerned. Thus while substituting say red for blue in a solution for W necessitates a change in the value for X, it will not require any change in the values for V, Y or Z.
Subproblem Interchangeability. Subproblem interchangeability captures the idea that values can be interchangeable within a subproblem of the CSP. Subproblem interchangeability may motivate and guide a divide and conquer decomposition of a CSP.
Definition: Two values are subproblem interchangeable, with respect to a subset of variables S, iff they are fully interchangeable with regards to the solutions of the subproblem of the CSP induced by S.
Note the values are required to be fully interchangeable with regard to the subproblem, not the complete CSP. Of course, when S is the entire set of variables, the subproblem is the complete CSP, and the values are fully interchangeable for the CSP. Subproblem interchangeability and partial interchangeability are not quite inverse notions.
Theorem 5: Subproblem interchangeability with respect to S implies partial interchangeability with respect to S', the variables not in S; however, partial interchangeability with respect to S does not imply subproblem interchangeability with respect to S'.
Proof: The key observation is that a solution to a subproblem may fail to appear as a portion of any solution to the complete problem.
On the other hand, if we take from a solution to the complete CSP the values for a subset of variables, those values will constitute a solution to the subproblem induced by those variables. Q.E.D.

By grouping variables into "metavariables", or values into "metavalues", we can introduce interchangeability into higher level "metaproblem" representations of the original CSP. Meta-interchangeability might also be viewed as providing motivation and guidance for dividing CSP variables into subproblems and CSP values into concept hierarchies (Mackworth, Mulder, & Havens 1985). Figure 6 presents an example of metavalue grouping. As in Figure 4, we indicate the allowable pairs of values in the original CSP with links, e.g. yellow for Y is consistent with light blue and light red for X. In the original problem yellow and brown for Y are not interchangeable. However, if we were to combine sky, light and dark blue into a metavalue "blue", and similarly create the metavalue "red", we would have a problem in which yellow and brown are fully interchangeable.

Figure 5. Partial interchangeability.
Figure 6. Meta-interchangeability.

232 CONSTRAINT-BASED REASONING

We can also merge variables into metavariables. The values of a metavariable will be the solutions of the subproblem induced by the individual variables. Values for two metavariables will be consistent if the component values for the original variables are all consistent. Forming the metaproblem may create new interchangeabilities.

Dynamic Interchangeability. Interchangeability can be recalculated after choices are made for variable values during backtrack search. It can be recalculated after inconsistent values have been filtered out during the search process in a preprocessing step or by a "hybrid" algorithm that interleaves backtracking and relaxation. Interchangeability can also be recalculated to reflect changes in a dynamic constraint representation.
Interchangeability might be sought dynamically during a knowledge acquisition or problem definition process. The idea of integrating local interchangeability recalculations with backtrack search is especially intriguing given the success of local consistency calculations in enhancing backtrack search performance.

The essential idea of interchangeability is that given the solutions involving one value, we can recover the solutions involving another. We have been using simple substitution to go from one set of solutions to another. However, substitution is only the simplest function we could use.

Definition: Let S_v|V be the set of solutions including value v for variable V. CSP values a for V and b for W are functionally interchangeable iff there exist functions f_a and f_b such that f_a(S_a|V) = S_b|W and f_b(S_b|W) = S_a|V. (V and W may be the same variable.)

This is a very general definition that deserves further study. The definition does not even require that a and b be values for the same variable. In fact, strictly speaking, any two values are functionally interchangeable; once we have all the solutions we can give a "brute force" definition of the necessary functions. The key obviously is for the functions to be a priori available or cost effective to obtain. One natural refinement of the definition involves a solution preserving function on variable values:

Definition: Two values, a and b, for a CSP variable, are isomorphically interchangeable iff there exists a 1-1 function f such that:
1. b = f(a)
2. for any solution S involving a, { f(v) | v ∈ S } is a solution
3. for any solution S involving b, { f⁻¹(v) | v ∈ S } is a solution.

Problem symmetry is a likely source of this sort of transformational interchangeability. Consider the 8-queens problem, for example: placing 8 queens on a chessboard such that no two attack one another, where rows correspond to variables and columns to values.
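This board symmetry is easy to verify by brute force. The sketch below (Python; a 6x6 board is used instead of 8x8 only to keep the enumeration small) checks that f, mapping column i to n+1-i in every row, sends solutions to solutions, as isomorphic interchangeability requires:

```python
from itertools import permutations

def queens_solutions(n):
    """All n-queens solutions as tuples s with s[row] = column (1-based)."""
    return {perm for perm in permutations(range(1, n + 1))
            if all(abs(perm[i] - perm[j]) != j - i
                   for i in range(n) for j in range(i + 1, n))}

n = 6
sols = queens_solutions(n)
flip = lambda s: tuple(n + 1 - c for c in s)   # f(i) = n+1-i, applied per row
assert {flip(s) for s in sols} == sols          # f permutes the solution set
```

Since f is 1-1 and maps every solution to a solution, columns i and n+1-i in any row are isomorphically interchangeable.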
The reduction of the values in the first row suggested in (Reingold, Nievergelt, & Deo 1977) can be viewed as an application of isomorphic interchangeability. First observe that the column 1 position for the queen in the first row is isomorphically interchangeable with the column 8 position, the 2 position with the 7 position, etc. The interchangeability function f maps position i into position 9-i, for each row, simulating flipping the board about its vertical axis of symmetry. This permits eliminating positions 5 through 8 for the first row. Next observe that because of symmetry about the diagonal axes, position 1 of row 1 is isomorphically interchangeable with position 8 of row 8; thus we can eliminate position 1 for the first row.

Acknowledgements. This material is based upon work supported by the National Science Foundation under Grant No. IRI-8913040. The Government has certain rights in this material. The author is currently a Visiting Scientist at the MIT Artificial Intelligence Laboratory.

References

(Bobrow & Raphael 1974) New programming languages for artificial intelligence research. Comput. Surv. 6, 3, 153-174.
(Dechter & Dechter 1987) Removing redundancies in constraint networks. Proc. AAAI-87, 105-109.
(Freuder 1978) Synthesizing constraint expressions. Commun. ACM 21, 11, 958-966.
(Mackworth 1977) Consistency in networks of relations. Artif. Intell. 8, 99-118.
(Mackworth, Mulder, & Havens 1985) Hierarchical arc consistency: exploiting structured domains in constraint satisfaction problems. Comput. Intell. 1, 118-126.
(Reingold, Nievergelt, & Deo 1977) Combinatorial Algorithms. Prentice-Hall.
(Rossi, Dhar, & Petrie 1989) On the equivalence of constraint satisfaction problems. MCC Technical Report ACT-AI-222-89. MCC, Austin, Texas 78759.
(Van Hentenryck 1988) Solving the car-sequencing problem in constraint logic programming. Proc. ECAI-88.
(Van Hentenryck 1989) A logic language for combinatorial optimization. Annals of Operations Research 21, 247-274.
(Yang 1990) An algebraic approach to conflict resolution in planning. Proc. AAAI-90, 40-45.
Gérard Ligozat
LIMSI, Université Paris-Sud, B.P. 133
91403 Orsay Cedex, France
ligozat@limsi.fr

Abstract

The calculus of time intervals defined by Allen has been extended in various ways in order to accommodate the need for considering other time objects than convex intervals (e.g. time points and intervals, non convex intervals). This paper introduces and investigates the calculus of generalized intervals, which subsumes these extensions, in an algebraic setting. The set of (p,q)-relations, which generalizes the set of relations in the sense of Allen, has both an order structure and an algebraic structure. We show that, as an order, it is a distributive lattice whose properties express the topological properties of the set of (p,q)-relations. We also determine in what sense the algebraic operations of transposition and composition act continuously on this set. In Allen's algebra, the subset of relations which can be translated into conjunctive constraints on the endpoints using only <, >, =, ≤, ≥ has special computational significance (the constraint propagation algorithm is complete when restricted to such relations). We give a geometric characterization of a similar subset in the general case, and prove that it is stable under composition. As a consequence of this last fact, we get a very simple explicit formula for the composition of two elements in this subset.

Fig. 1: some of Vilain's point-to-interval relations (x .before y, x .begins y, x .during y, x .ends y, x .after y)

An interval in the sense of Allen is just a pair of ordered time points. A non convex interval (with a finite number of convex components) is entirely determined by the sequence of pairs associated to each convex component. This means that a non convex interval is an ordered sequence of an even number of time points.

Fig. 2: a non convex interval

1. Introduction
In [Vil82], Marc Vilain describes a logic for reasoning about time which is an extension of the logic defined by James Allen [All83]. Specifically, it is at its core composed of 13 relational primitives (Allen's relations) and of a body of inference rules (Allen's "transition table"). Moreover, it is extended to a logic which, besides relations between two intervals, can also handle relations between two time points, or relations between a point and an interval. As concerns this extension, Vilain comments: "We should state that including [points] along with intervals in the domain of our system only minimally complicates the deduction algorithms. The polynomial complexity results and the consistency maintenance remain unaffected". In other words, allowing basic time objects to be either time intervals or time points does not alter in a significant way the framework of Allen's logic.

Ladkin [Lad86] introduced the notion of non convex temporal intervals and gave a taxonomy of important relations between them. He also argued for the convenience of a language based on non convex intervals in such applications as the specification of concurrent processes.

We extend somewhat further these remarks and define a generalized interval as an ordered, finite sequence of points in a linear order. We also call a generalized interval with n points an n-interval. More generally, for any subset S of the integers, an S-interval is an n-interval where n belongs to S. In this way, Allen's calculus is the calculus of 2-intervals; Vilain's universe is the set of {1,2}-intervals. The calculus of non convex intervals in the sense of Ladkin is the calculus of P-intervals, where P is the set of even integers.

Fig. 3: generalized intervals

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

Remark: Translating real-life intervals into n-intervals can raise problems of interpretation if both points and (ordinary) intervals are allowed in the interpretation of a
single n-interval; e.g., a 3-interval could represent 3 points, or a point preceding an (ordinary) interval, or the other way round. A first solution is to consider extended and punctual entities separately; another would be to introduce types on the boundaries.

Allen's calculus is basically algebraic in nature. Ladkin and Maddux [LaMa87, LaMa88] showed that it can be adequately described using the notion of a relation algebra as defined by Tarski and Jonsson [JoTa52]. Specifically, consider the set Π(2,2) of 13 possible relations between two intervals. The set of subsets of Π(2,2) is a Boolean algebra with additional structure; in particular, a transposition is defined on it, as well as an operation of composition (described by the transition table). This additional structure makes it an integral relation algebra A(2). The connexion between this algebra and the models of Allen's logic can be established using the algebraic notion of a weak representation, which generalizes the classical notion of a representation, as shown in [Lig90c].

A well known theorem of Cantor states that the ordered set of the rational numbers is, up to isomorphism, the only denumerable linear order which is both dense and unlimited on the left and on the right. This can be seen as a result about the calculus of points, or about the representations of the relation algebra A(1) associated to the set Π(1,1) of 3 possible relations between two points: up to isomorphism, A(1) has only one denumerable representation. Ladkin [Lad87b] proved that a similar fact is true of A(2). In [Lig90c], we show that these two results are just the n=1 and n=2 cases of a quite general result: for any n ≥ 1, A(n) has only one denumerable representation up to isomorphism, where A(n) is the relation algebra associated to the calculus of n-intervals.

But this is only part of the story. Consider again Allen's transition table, which gives the result of the composition of two primitive relations.
Firstly, only 26 relations appear in it, among the 2^13 possibilities in A(2). Then, as remarked by many people [Noe88, LiBe89a], there is a notion of neighbourhood between relations, which corresponds to the physical intuition of small moves of the intervals considered. For example, if an interval i meets another interval j, we can move i slightly to the right, and then i overlaps j; or to the left, and then i < j.

Fig. 4: neighbouring relations (i meets j; i < j; i overlaps j)

Hence, relation m (meets) in some sense has o (overlaps) and < as immediate neighbours. In [LiBe89a, Lig90b] we describe how the topological structure of the set of relations in Π(2,2) can be conveniently represented by a polygon; essentially the same picture (a lattice) was known to [Noe88]. Finally, the transition table makes apparent the fact that composition has some kind of continuity property: composing neighbouring relations, we tend to get neighbouring sets of results. This is a first motivation for understanding more about the topology of relations.

A further motivation is concerned with its computational relevance. Allen's original publications described a polynomial algorithm for determining the consistency of a temporal network, i.e. a graph of intervals with arcs labeled by elements of A(2). This algorithm was known not to be complete. Subsequently, Vilain and Kautz [ViKa86] showed that the problem in the general case is NP-complete. However, restricting the labeling to a small subset of A(2) (83 elements) makes Allen's algorithm complete, as shown in [vBeek89, vBC90, ViKa86]. This subset can be defined as the set of relations which can be expressed as a conjunction of convex conditions on the endpoints of the intervals; or equivalently, it is the set of intervals in the lattice associated to Π(2,2). This last characterization makes apparent the topological nature of this set of well behaved elements.
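The neighbourhood of m can be checked mechanically once Allen's relations are encoded as pairs of zone numbers (a Python sketch; the pair encoding, and taking "move one endpoint by one zone" as adjacency, anticipate the (p,q)-relations defined in the next section):

```python
def pi(p, q):
    """Π(p,q): nondecreasing p-sequences over 0..2q, odd values at most once."""
    def rec(prefix):
        if len(prefix) == p:
            yield tuple(prefix)
            return
        lo = prefix[-1] if prefix else 0
        for v in range(lo, 2 * q + 1):
            if v % 2 == 1 and v in prefix:
                continue
            yield from rec(prefix + [v])
    return set(rec([]))

allen = pi(2, 2)                    # Allen's 13 primitive relations as pairs
assert len(allen) == 13

def neighbours(r):
    """Relations reached by moving one endpoint one zone, staying in the set."""
    return {tuple(s) for i in (0, 1) for d in (-1, 1)
            for s in [list(r[:i]) + [r[i] + d] + list(r[i + 1:])]
            if tuple(s) in allen}

# m = (0,1): its neighbours are < = (0,0) and o = (0,2), as claimed above.
assert neighbours((0, 1)) == {(0, 0), (0, 2)}
```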
In this paper, we will be concerned both with the algebraic and the topological aspects of the calculus of generalized intervals. As the preceding discussion shows, giving this more general framework allows us to discuss in a unified way a whole body of results about representing and reasoning about time. The main purpose of the paper is to give a precise content to the remarks made above, by
- giving formal definitions of the interval algebras and their associated objects;
- proving the main facts about the algebraic and topological structures and the relationships between algebra and topology.

Definition: Let T be a linear order. An n-interval in T is an increasing sequence x = (x_1, x_2, ..., x_n) of elements of T: x_1 < x_2 < ... < x_n.

Let x = (x_1, ..., x_p) be a p-interval, y = (y_1, ..., y_q) a q-interval in a linear order T. The points y_1, y_2, ..., y_q define a partition of T into 2q+1 zones in T, which we number from 0 to 2q (Fig. 5): zone 0 is {t ∈ T | t < y_1}; zone 1 is {y_1}; zone 2 is {t ∈ T | y_1 < t < y_2}; ... zone 2q is {t ∈ T | t > y_q}.

LIGOZAT 235

Fig. 5: the partition of T determined by a q-interval (zones 0, 1, 2, ..., 2q-1, 2q around y_1, y_2, ..., y_q).

Now the relation of x relative to y is entirely determined by specifying for each x_i which zone it belongs to; a constraint is that each oddly numbered zone contains one of the x_i's at most. This motivates the following:

Definition: The set Π(p,q) of (p,q)-relations is the set of non-decreasing sequences π of p integers between 0 and 2q, where each odd integer occurs at most once.

Remark: In [Lig90c] we give another equivalent definition of the set of (p,q)-relations; it has the advantage of being more symmetrical; however, the present definition makes the order structure of this set more obvious.

Examples: (i) Π(1,1) has 3 elements 0, 1, 2, corresponding respectively to x < y, x = y, y < x.
(ii) Π(1,2) has 5 elements 0, 1, 2, 3, 4; using the terminology of Vilain in [Vil82], we can call them respectively .before, .begins, .during, .ends, .after. On the other hand, Π(2,1) also has 5 elements, namely (0,0), (0,1), (0,2), (1,2), (2,2); in Vilain's notation, these are called before., ended-by., contains., begun-by., after., respectively. (More generally, exchanging the roles of x and y, we see that Π(p,q) and Π(q,p) contain the same number of elements; more about this later.)

(iii) Π(2,2) has 13 elements, namely Allen's primitive relations; in particular, (1,3) is equality; six elements are the following (the remaining ones are obtained by switching roles between x and y): (0,0) = < (before); (0,1) = m (meets); (0,2) = o (overlaps); (2,2) = d (during); (2,3) = e (ends); (1,2) = s (starts).

2.2. Associated inequations

Each element π in Π(p,q) corresponds to a set of equations and inequations E(π):

E(π):  x_i = y_((π(i)+1)/2)   if π(i) is odd;
       x_i > y_(π(i)/2)       if π(i) is even, π(i) > 0;     (1)
       x_i < y_((π(i)+2)/2)   if π(i) is even, π(i) < 2q;
for i = 1, ..., p.

As a consequence, any subset of Π(p,q) corresponds to a formula in the language with equality with variables x_1, ..., x_p, y_1, ..., y_q and predicate "<". We extend this language with >, ≤, ≥, considered as abbreviations.

3. The order structure of (p,q)-relations

We have defined the set of (p,q)-relations as a subset of N × ... × N (p times); hence the product order on N × ... × N defines an order on Π(p,q), namely:

Definition: Let π, π′ be two elements of Π(p,q); then π ≤ π′ if and only if π(i) ≤ π′(i) for i = 1, ..., p.

Proposition: (Π(p,q), ≤) is a distributive lattice.

Proof: A product of linear orders is a distributive lattice. The sup and inf of two elements can be computed componentwise. Π(p,q) is a subset of such a product which contains sups and infs. Q.E.D.

Fig. 6: Hasse diagrams
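The counts in the examples, and the closure of Π(p,q) under componentwise sup and inf used in the proof, can be confirmed by brute force (a Python sketch):

```python
from itertools import combinations

def pi(p, q):
    """Π(p,q): nondecreasing p-sequences over 0..2q, odd values at most once."""
    def rec(prefix):
        if len(prefix) == p:
            yield tuple(prefix)
            return
        lo = prefix[-1] if prefix else 0
        for v in range(lo, 2 * q + 1):
            if v % 2 == 1 and v in prefix:
                continue
            yield from rec(prefix + [v])
    return set(rec([]))

assert len(pi(1, 1)) == 3                       # point-point relations
assert len(pi(1, 2)) == len(pi(2, 1)) == 5      # Vilain's point-interval relations
assert len(pi(2, 2)) == 13                      # Allen's primitive relations

# (Π(p,q), ≤) is a lattice: the componentwise max and min of two relations
# is again a legal relation (no odd zone ends up duplicated).
for p, q in [(2, 2), (2, 3), (3, 2)]:
    rels = pi(p, q)
    for a, b in combinations(rels, 2):
        assert tuple(map(max, a, b)) in rels
        assert tuple(map(min, a, b)) in rels
```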
Again because of its very construction, Π(p,q) is a poset; we can consider its Hasse diagram, which is by definition the graph with Π(p,q) as its set of vertices, where an arc joins π to π′ if π′ is an immediate successor of π. In this way, the Hasse diagram of Π(p,q) is naturally embedded into N^p. Figure 6 represents Π(1,1), Π(1,2), Π(2,1), Π(2,2). Intuitively, two relations are neighbours if passing from one to the other only involves changing the relation of one pair (x_i, y_j). We claim that Π(p,q) (with its canonical embedding in N^p) gives an adequate representation of the topology of (p,q)-relations.

3.2. Characterizing the intervals in Π(p,q)

Recall that in any lattice L, an interval is any subset of the form I(l_1,l_2) = {l ∈ L | l_1 ≤ l ≤ l_2} where l_1 ≤ l_2 are two elements of L. The central theme in this section will be the set of intervals in Π(p,q). We first give a characterization of the set of intervals in terms of inequations.

Proposition: The set of intervals in Π(p,q) coincides with the set of subsets associated to conjunctions of formulas x_k ω y_s, where ω ∈ {=, <, >, ≤, ≥}.

Proof: By definition of Π(p,q), we have

π(k) = 2s-1 if and only if x_k = y_s;
π(k) = 2s if and only if y_s < x_k < y_(s+1)  (s ≠ 0, q);     (2)
π(k) = 0 if and only if x_k < y_1;
π(k) = 2q if and only if y_q < x_k.

Hence each one of formulas (a,b,c,d,e) defines the subset of π's verifying a corresponding condition:

(a) x_k = y_s defines {π | π(k) = 2s-1};
(b) x_k < y_s defines {π | π(k) ≤ 2s-2};
(c) x_k > y_s defines {π | π(k) ≥ 2s};
(d) x_k ≤ y_s defines {π | π(k) ≤ 2s-1};
(e) x_k ≥ y_s defines {π | π(k) ≥ 2s-1}.

Obviously, the subsets defined in Π(p,q) by (a,b,c,d,e) are intervals in Π(p,q); hence any subset defined by a conjunction of such formulas is an interval. Conversely, it suffices to show that any interval of the form I(k,[m,n]) = {π | π(k) ∈ [m,n]} is defined by such a formula.
By (2), π(k) ∈ [m,n] is equivalent to

y_(m/2) < x_k < y_((n+2)/2)      if m, n are even;
y_((m+1)/2) ≤ x_k < y_((n+2)/2)  if m is odd, n is even;
y_(m/2) < x_k ≤ y_((n+1)/2)      if m is even, n is odd;
y_((m+1)/2) ≤ x_k ≤ y_((n+1)/2)  if m, n are odd

(we use the convention of leaving out the inequations involving y_0 or y_(q+1); e.g. y_0 < x_k < y_((n+2)/2) is replaced by x_k < y_((n+2)/2)). The general result follows from this fact. Q.E.D.

This result generalizes what was essentially known for Π(2,2); intervals are called "convex relations" in Nökel [Noe88]. The algorithmic properties of the set of intervals in Π(2,2) are examined in van Beek [vBeek89, vBeek90] and van Beek and Cohen [vBC90].

4. Operations and intervals

4.1. Operations on Π(p,q)

Transposition. As already remarked, switching roles between x and y sends a (p,q)-relation π to a (q,p)-relation πᵗ. We give a precise description of this operation, which is called transposition:

Definition: Let π be an element of Π(p,q). Then πᵗ ∈ Π(q,p) is defined as follows. Consider the first q odd integers 1, 3, ..., 2q-1; call odd(i) = 2i-1 the i-th odd number; consider each odd(i) in sequence, and position it in the sequence π(1), ..., π(p):

if odd(i) < π(1), then πᵗ(i) = 0;
if π(p) < odd(i), then πᵗ(i) = 2p;
if odd(i) = π(j), then πᵗ(i) = 2j-1;
if π(j) < odd(i) < π(j+1), then πᵗ(i) = 2j.

Proposition: Transposition is an order reversing bijection from Π(p,q) onto Π(q,p).

Proof: Using the definition, it is easily shown that if π′ is an immediate successor of π, then π′ᵗ is an immediate predecessor of πᵗ. Q.E.D.

Examples: Fig. 7 (resp. Fig. 8) illustrates the correspondence between Π(1,2) and Π(2,1) (resp. Π(2,3) and Π(3,2)).

Fig. 7 - Fig. 8

Composition. If a p-interval x is in relation π relative to a q-interval y, and y itself is in relation π′ relative to an r-interval z, there is a finite number of elements in Π(p,r) representing the possible relations of x relative to z.
Translating this fact into our notations, we arrive at the definition of composition. In order to express it conveniently, we need the following:

Notation: If m ≤ n are two integers, [[m,n]] denotes [m,n] if m and n are even, [m+1,n] if m is odd and n is even, [m,n-1] if m is even and n is odd, [m+1,n-1] if both m and n are odd (in other words, we leave out the odd endpoints).

Convention: For any π ∈ Π(s,t), π(0) = 0 and π(i) = 2t if i > s.

Definition: Let π ∈ Π(p,q), π′ ∈ Π(q,r). Then (π;π′) is the set of elements π″ ∈ Π(p,r) such that, for every j, 1 ≤ j ≤ p:

π″(j) = π′((π(j)+1)/2)                  if π(j) is odd;
π″(j) ∈ [[π′(π(j)/2), π′((π(j)+2)/2)]]  if π(j) is even.

Examples: (i) Take π = π′ = (0,2) ∈ Π(2,2) (the o = "overlaps" relation); since π(1) and π(2) are even, we get the set of π″ such that:

π″(1) ∈ [[π′(0), π′(1)]] = [[0,0]] = {0};
π″(2) ∈ [[π′(1), π′(2)]] = [[0,2]] = {0,1,2};

hence (π;π′) is the set {(0,0), (0,1), (0,2)} = {<, m, o}.

(ii) Take π = (2,3), π′ = (3,4) ∈ Π(2,2); then

π″(1) ∈ [[π′(1), π′(2)]] = [[3,4]] = {4};
π″(2) = π′(2) = 4;

hence (π;π′) is the set reduced to {(4,4)} (the ">" relation).

Remark: The above definition clearly shows that, for π ∈ Π(p,q) and π′ ∈ Π(q,r), (π;π′) is an interval in Π(p,r). In other words, the entries in the corresponding transition table are intervals. Moreover, they are intervals of a special kind: their projection on each component is either an integer or an interval with even boundaries. In particular, the entries in Allen's transition table should be of this type. Checking all possibilities, we get a set of 28 intervals among which 26 do appear in the transition table. We show in the next section that this fact is only a particular aspect of the stability of intervals with respect to composition.

4.2. Stability of the set of intervals

We first extend composition to sets in a natural way:

Definition: Let E, F be subsets of Π(p,q), Π(q,r) respectively.
Then we denote by (E;F) the union of all (π;π′), where π ∈ E and π′ ∈ F. In the same manner, Eᵗ is the set of πᵗ, where π ∈ E.

The interaction of composition with transposition is described by:

Proposition: For all π ∈ Π(p,q), π′ ∈ Π(q,r), (π;π′)ᵗ = (π′ᵗ; πᵗ).

The next proposition expresses the fact that composition is non decreasing relative to its arguments:

Proposition: If π ≤ π_1, π′ ≤ π′_1 then:
(a) inf(π;π′) ≤ inf(π_1;π′_1) and sup(π;π′) ≤ sup(π_1;π′_1); hence
(b) inf(π;π′) ≤ sup(π_1;π′_1).

Proof: Suppose π_1 is an immediate successor of π; then all components of π and π_1 coincide, except for one; let i_0 be such that π_1(i_0) = π(i_0) + 1. Then the definitions of (π;π′) and (π_1;π′) only differ on their i_0-th components: if π(i_0) = m is odd, we get π′((m+1)/2) for (π;π′) and [[π′((m+1)/2), π′((m+3)/2)]] for (π_1;π′); if π(i_0) = m is even, we get [[π′(m/2), π′((m+2)/2)]] for (π;π′) and π′((m+2)/2) for (π_1;π′); in all cases, inf(π;π′) ≤ inf(π_1;π′), and sup(π;π′) ≤ sup(π_1;π′); hence inf(π;π′) ≤ sup(π_1;π′). A similar, easier reasoning works for the right argument (alternatively, transposition can be used to deduce the result from the preceding one). By induction, we get the general result. Q.E.D.

We now state the main result about stability:

Proposition: The set of intervals is stable by intersection, transposition, and composition.

The last statement means that, if I in Π(p,q) and J in Π(q,r) are two intervals, then (I;J) is an interval in Π(p,r).

Proof: The part about intersection and transposition is obvious. For composition, we use the more precise

Lemma: Let I(k,[m,n]) = {π | π(k) ∈ [m,n]}; then
(I(k,[m,m]) ; π′) = I(k, π′((m+1)/2)) if m is odd;
(I(k,[m,m]) ; π′) = I(k, [[π′(m/2), π′((m+2)/2)]]) if m is even.
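Both the worked examples of the previous section and the stability proposition can be checked by brute force over Π(2,2). The sketch below (Python; it hard-codes q = r = 2 and the boundary convention π′(0) = 0, π′(i) = 2r for i > q) reproduces the two examples and confirms that (I;J) is always an interval of the lattice:

```python
from itertools import product

P22 = {(a, b) for a in range(5) for b in range(a, 5)
       if not (a == b and a % 2 == 1)}          # Allen's 13 relations as pairs

def compose(x, y, q=2, r=2):
    """(x;y) per the definition, restricted here to Π(2,2) (q = r = 2)."""
    def yy(i):                                  # y with the boundary convention
        return 0 if i <= 0 else (2 * r if i > q else y[i - 1])
    def db(m, n):                               # [[m,n]]: drop odd endpoints
        return set(range(m + m % 2, n - n % 2 + 1))
    sets = [{yy((m + 1) // 2)} if m % 2
            else db(yy(m // 2), yy((m + 2) // 2)) for m in x]
    return {c for c in product(*sets) if c in P22}

# The two worked examples from the text:
assert compose((0, 2), (0, 2)) == {(0, 0), (0, 1), (0, 2)}   # (o;o) = {<, m, o}
assert compose((2, 3), (3, 4)) == {(4, 4)}                   # reduces to ">"

# Stability: for all lattice intervals I, J, the set (I;J) is again
# exactly the interval between its componentwise min and max.
COMP = {(x, y): compose(x, y) for x in P22 for y in P22}
def interval(lo, hi):
    return {t for t in P22 if lo[0] <= t[0] <= hi[0] and lo[1] <= t[1] <= hi[1]}
intervals = {frozenset(interval(a, b)) for a in P22 for b in P22
             if a[0] <= b[0] and a[1] <= b[1]}
for I in intervals:
    for J in intervals:
        S = set().union(*(COMP[x, y] for x in I for y in J))
        lo = (min(t[0] for t in S), min(t[1] for t in S))
        hi = (max(t[0] for t in S), max(t[1] for t in S))
        assert S == interval(lo, hi)
```

The final loop also exercises the explicit formula of the next theorem: the bottom and top of (I;J) are reached at the bottoms and tops of I and J.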
Putting together the preceding results, we get an explicit formula for computing the composition of two intervals:

Theorem: Let [α,β] be an interval in Π(p,q), [γ,δ] an interval in Π(q,r); then ([α,β] ; [γ,δ]) is the interval [inf(α;γ), sup(β;δ)] in Π(p,r).

Proof: Since ([α,β] ; [γ,δ]) ⊇ (α;γ), inf(α;γ) ∈ ([α,β] ; [γ,δ]). The same holds for sup(β;δ). Since ([α,β] ; [γ,δ]) is an interval, ([α,β] ; [γ,δ]) ⊇ [inf(α;γ), sup(β;δ)]. Conversely, let t ∈ (π;π′), for π ∈ [α,β], π′ ∈ [γ,δ]. Since α ≤ π and γ ≤ π′, we have inf(α;γ) ≤ inf(π;π′) ≤ t. In the same way, we show that t ≤ sup(β;δ). Q.E.D.

5. Interval calculus as algebra

5.1. Constructing relation algebras

We now look in more detail at the algebraic structure of Π(p,q).

Notations: For p ≥ 1, 1′_p,p denotes equality in Π(p,p), i.e. the element in Π(p,p) such that 1′_p,p(i) = odd(i), 1 ≤ i ≤ p. If E is a set, P(E) denotes the set of subsets of E.

Proposition: The following properties obtain, for any π_1 ∈ Π(p,q), π_2 ∈ Π(q,r) and π_3 ∈ Π(r,s):

i) ((π_1 ; π_2) ; π_3) = (π_1 ; (π_2 ; π_3));
ii) (π_1 ; 1′_q,q) = π_1 and (1′_p,p ; π_1) = π_1;
iii) 1′_p,p ∈ (π_1 ; π_1ᵗ) and 1′_q,q ∈ (π_1ᵗ ; π_1);
iv) π ∈ (π_1 ; π_2) implies π_1 ∈ (π ; π_2ᵗ) and π_2 ∈ (π_1ᵗ ; π);
v) (π_1 ; π_2)ᵗ = (π_2ᵗ ; π_1ᵗ).

Let S be a non empty subset of the positive integers. In order to construct the relation algebra A(S) which describes the calculus of S-intervals, we proceed as follows:

- firstly, we take the union of all (p,q)-relations Π(p,q), where (p,q) ∈ S × S: Π(S) = ∪_((p,q) ∈ S×S) Π(p,q);
- composition extends to Π(S), with values in P(Π(S)), in the following way:
The proposition implies that II(S), together with composition, transposition and l’s , is a connected groupoid in the sense of Comer [Com83, Def. 3.11; if S has one element, this polygroupoid is in fact a polygroup. Still using [Com83], this can be expressed equivalently in terms of relation algebras in the sense of Tarski [JoTaS2]: Let A(S) = P(II(S)); it is a boolean algebra; besides, it inherits an unary operation of transposition, a binary operation of composition, and a distinguished element 1’s. Then: Theorem A(S) is a simple, complete, atomic relation algebra, with 0 #l. It is integral if and only S ={n), for some r&l. Remark More generally, we can define II(S), hence A(S), in the case where S is an equivalence relation on a subset of N+. In this way, we get algebras which are not necessarily simple. For instance, if S is the partition { 1) ,{ 2) of ( 1,2}, we get an algebra with 26 atoms which is defined in wa88] in terms of constraint networks. Examples (i) A(1) has 8 elements, which can be identified with the subsets of II( 1 ,l); identifying the elements 0, 1, 2 of II(l,l) with >, 6 (equality), > resp., we see that the elements of A(1) are <, a,>, 5 (ie. c + 6),1 (ie. > + a),# (ie. < + >), 0 (the empty set), and 1 (ie. < + > + 6). The structure of A(1) is entirely determined by the effect of transposition on < (it exchanges < and >), and by the conditions (< ; <) = c, and (< ; >) = 1. (ii) A(l,2) is an algebra with 26 atoms, which are the elements of II(l,l) u II(12) u II(2,l) u II(2,2). Its structure is determined by the effect of transposition, which operates on II( 1,l) as described above, on II(2,2) (which is the set of atoms of Allen’s algebra) and exchanges II( 1,2) and II(2,1), and by composition, which is as described by Allen’s transition table and Vilain’s extension. In the general case, A(S) describes the calculus on n- intervals, where n E S. 5.2. 
Using the algebra: weak representations

Temporal reasoning in AI deals with temporal databases consisting of sets of time objects; a basic problem consists in proving and maintaining the consistency of such a database; we now sketch how the algebraic machinery allows us to define such databases as algebraic structures.

Definition: A weak representation of a relation algebra A is a map Φ of A into a direct product of algebras of the form P(U × U), such that:

(a) Φ defines a homomorphism of boolean algebras;
(b) Φ(1′) = {(u,u) | u ∈ U};
(c) Φ(αᵗ) = Φ(α)ᵗ (transposition of a binary relation);
(d) Φ(α ; β) ⊇ Φ(α) ∘ Φ(β).

This notion is an extension of the classical notion of a representation, which is defined as a weak representation satisfying:

(e) Φ is one-to-one;
(f) Φ(α ; β) = Φ(α) ∘ Φ(β).

If A is a relation algebra, we shall say that a weak representation of A into P(U × U) is connected if Φ(1) = U × U (1 denotes the greatest element in the underlying boolean algebra).

Examples: (i) A connected weak representation of A(1) is defined by the following data:
- a non empty set U (call its elements time points);
- a binary relation R = Φ(<) on U.

In fact, because of (b), the image of equality has to be Δ = {(u,u) | u ∈ U}; because of (c), Φ(>) is the transpose Rᵗ of R. These data are subject to the following conditions: by (a) and connectedness, (R, Rᵗ, Δ) is a partition of U × U; hence R is irreflexive and total; by (d), we have R ⊇ R ∘ R. But this is just transitivity for R. In other words, R defines a linear order on U. This shows that the notion of a connected weak representation of A(1) is equivalent to that of a linear order.

Representations of A(1) have to satisfy the further conditions:
R ∘ R ⊇ R; R ∘ Rᵗ ⊇ R; Rᵗ ∘ R ⊇ R.
It is easily verified that the first condition expresses the fact that R is dense; the second one, that U is unlimited on the right; the third one, that U is unlimited on the left.
Hence a representation of A(1) is a linear order which is dense and unlimited in both directions. By a theorem of Cantor, any denumerable order having these properties is isomorphic to the order of the rational numbers Q. We show in [Lig90c] that this result generalizes to representations of A(n) for any n ≥ 1.

(ii) Weak representations of A({1,2}). Consider such a representation Φ, with underlying set U. Then Φ(1′_1,1) = {(u,u) | u ∈ U_1}, Φ(1′_2,2) = {(u,u) | u ∈ U_2}. The sets U_1 and U_2 are time points and time intervals respectively.

For example, Fig. 9 represents a weak representation with U_1 = {u_1}, U_2 = {u_2, u_3} which corresponds to the specifications:

Φ(o) = {(u_2,u_3)} (o denotes (0,2) in Π(2,2));
Φ(.during) = {(u_1,u_2)} (.during denotes 2 in Π(1,2));
Φ(.before) = {(u_1,u_3)} (.before denotes 0 in Π(1,2)).

Fig. 9: A weak representation of A({1,2})

In applications, we have sets of constraints which can be represented as networks with arcs labeled by elements of A(S), where S is some set of integers. The preceding discussion gives a framework for maintaining and checking the consistency of such a network. It also implies that restricting the labels to the subsets of intervals guarantees the completeness of the constraint propagation algorithm.

6. Conclusion

We have introduced a calculus of (p,q)-relations which provides a framework subsuming a number of formalisms used in temporal reasoning. Investigating the partial order structure of the set of relations allowed us to characterize in a simple way an important subset of relations: the subset of intervals; we related this subset to the operations of composition and transposition, and obtained a simple explicit expression for the composition of two intervals. We also gave a precise definition of the algebraic basis of this generalized calculus, and showed how results on the computational feasibility of consistency checking can be extended to this wider framework.
References
[All83] Allen, J.F., 1983, Maintaining Knowledge about Temporal Intervals, Communications of the ACM 26, 11, 832-843.
[BeLi85] Bestougeff, H., and Ligozat, G., 1985, Parametrized abstract objects for linguistic information processing, Proceedings of the European Chapter of the Association for Computational Linguistics, Geneva, 107-115.
[BeLi89] Bestougeff, H., and Ligozat, G., 1989, Outils logiques pour le traitement du temps: de la linguistique à l'intelligence artificielle, Masson, Paris.
[Com83] Comer, S.D., 1983, A New Foundation for the Theory of Relations, Notre Dame Journal of Formal Logic, 24, 2, 181-187.
[JoTa52] Jonsson, B., and Tarski, A., 1952, Boolean Algebras with Operators II, American Journal of Mathematics 74, 127-162.
[Lad86] Ladkin, P.B., 1986, Time Representation: A Taxonomy of Interval Relations, Proceedings of AAAI-86, 360-366.
[Lad87a] Ladkin, P.B., 1987, The Completeness of a Natural System for Reasoning with Time Intervals, Proceedings of IJCAI-87, 462-467.
[Lad87b] Ladkin, P.B., 1987, Models of Axioms for Time Intervals, Proceedings of AAAI-87, Seattle, 234-239.
[LaMa87] Ladkin, P.B., and Maddux, R.D., 1987, The Algebra of Convex Time Intervals, Kestrel Institute Technical Report, KES.U.87.2.
[LaMa88] Ladkin, P.B., and Maddux, R.D., 1988, On Binary Constraint Networks, Draft Paper.
[Lig86] Ligozat, G., 1986, Points et intervalles combinatoires, T.A. Informations, 27, no 1, 3-15.
[Lig90a] Ligozat, G., 1990, Intervalles généralisés I, Comptes Rendus de l'Académie des Sciences de Paris, Série A, Tome 310, 225-228.
[Lig90b] Ligozat, G., 1990, Intervalles généralisés II, Comptes Rendus de l'Académie des Sciences de Paris, Série A, Tome 310, 299-302.
[Lig90c] Ligozat, G., 1990, Weak Representations of Interval Algebras, Proceedings of AAAI-90, 715-720.
[LiBe89a] Ligozat, G., and Bestougeff, H., 1989, On Relations between Intervals, Information Processing Letters 32, 177-182.
[Noe88] Noekel, K., 1988, Convex Relations Between Time Intervals, SEKI Report SR-88-17, Kaiserslautern, W. Germany.
[vBeek89] van Beek, P., 1989, Approximation Algorithms for Temporal Reasoning, Proceedings of the 11th IJCAI, 1291-1296.
[vBeek90] van Beek, P., 1990, Reasoning about Qualitative Temporal Information, Proceedings of AAAI-90, 728-734.
[vBC90] van Beek, P., and Cohen, R., 1990, Exact and Approximate Reasoning about Temporal Relations, Computational Intelligence, to appear.
[Vil82] Vilain, M.B., 1982, A System for Reasoning About Time, Proceedings of AAAI-82, 197-201.
[ViKa86] Vilain, M.B., and Kautz, H., 1986, Constraint Propagation Algorithms for Temporal Reasoning, Proceedings of AAAI-86, 377-382, Morgan Kaufmann.
[Zhu87] Zhu, M., Loh, N.K., and Siy, P., 1987/88, Towards the minimum set of primitive relations in temporal logic, Information Processing Letters 26, 121-126.
240 CONSTRAINT-BASED REASONING
Combining Qualitative and Quantitative Constraints in Temporal Reasoning*

Itay Meiri
Cognitive Systems Laboratory
Computer Science Department
University of California, Los Angeles, CA 90024
itay@cs.ucla.edu

Abstract
This paper presents a general model for temporal reasoning, capable of handling both qualitative and quantitative information. This model allows the representation and processing of all types of constraints considered in the literature so far, including metric constraints (restricting the distance between time points), and qualitative, disjunctive, constraints (specifying the relative position between temporal objects). Reasoning tasks in this unified framework are formulated as constraint satisfaction problems, and are solved by traditional constraint satisfaction techniques, such as backtracking and path consistency. A new class of tractable problems is characterized, involving qualitative networks augmented by quantitative domain constraints, some of which can be solved in polynomial time using arc and path consistency.

1 Introduction
In recent years, several constraint-based formalisms have been proposed for temporal reasoning, most notably, Allen's interval algebra (IA) [1], Vilain and Kautz's point algebra (PA) [14], Dean and McDermott's time map [2], and metric networks (Dechter, Meiri and Pearl [4]). In these formalisms, temporal reasoning tasks are formulated as constraint satisfaction problems, where the variables are temporal objects such as points and intervals, and temporal statements are viewed as constraints on the location of these objects along the time line. Unfortunately, none of the existing formalisms can conveniently handle all forms of temporal knowledge.
Qualitative approaches such as Allen's interval algebra and Vilain and Kautz's point algebra face difficulties in representing and reasoning about metric, numerical information, while the quantitative approaches exhibit limited expressiveness when it comes to qualitative information [4].

*This work was supported in part by the Air Force Office of Scientific Research, AFOSR 900136.

260 TEMPORAL CONSTRAINTS

In this paper we offer a general, network-based computational model for temporal reasoning, capable of handling both qualitative and quantitative information. In this model, variables represent both points and intervals (as opposed to existing formalisms, where one has to commit to a single type of objects), and constraints may be either metric, between points, or qualitative disjunctive relations between objects. The unique feature of this framework is that it allows the representation and processing of all types of constraints considered in the literature so far.

The main contribution of this paper lies in providing a formal unifying framework for temporal reasoning, generalizing the interval algebra, the point algebra, and metric networks. In this framework, we are able to utilize constraint satisfaction techniques in solving several reasoning tasks. Specifically:
- General networks can be solved by decomposition into singleton labelings, each solvable in polynomial time. This decomposition scheme can be improved by traditional constraint satisfaction techniques such as variants of backtrack search.
- The input can be effectively encoded in a minimal network representation, which provides answers to many queries.
- Path consistency algorithms can be used in preprocessing the input network to improve search efficiency, or to compute an approximation to the minimal network.
We were able to identify two classes of tractable problems, solvable in polynomial time.
The first consists of augmented qualitative networks, composed of qualitative constraints between points and quantitative domain constraints, which can be solved using arc and path consistency. The second class consists of networks for which path consistency algorithms are exact.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

We also show that our model compares favorably with an alternative approach for combining quantitative and qualitative constraints, proposed by Kautz and Ladkin [6], from both conceptual and computational points of view.

The paper is organized as follows. Section 2 formally defines the constraint types under consideration. The definitions of the new model are given in Section 3. Section 4 reviews and extends the hierarchy of qualitative networks. Section 5 discusses augmented qualitative networks, that is, qualitative networks augmented by domain constraints. Section 6 presents two methods for solving general networks: a decomposition scheme and path consistency, and identifies a class of networks for which path consistency is exact. Section 7 provides summary and concluding remarks, including a comparison to Kautz and Ladkin's model. Proofs of theorems can be found in the extended version of this paper [10].

2 The Representation Language
Consider a typical temporal reasoning problem. We are given the following information.

Example 1. John and Fred work for a company in LA. They usually work at the local office, in which case it takes John less than 20 minutes and Fred between 15-20 minutes to get to work. Twice a week John works at the main office, in which case he commutes at least 60 minutes to work. Today John left home between 7:05-7:10, and Fred arrived at work between 7:50-7:55. We also know that Fred and John met at a traffic light on their way to work.

We wish to represent and reason about such knowledge.
We wish to answer queries such as: "Is the information in this story consistent?," "Who was the first to arrive at work?," "What are the possible times at which John arrived at work?," and so on.

We consider two types of temporal objects: points and intervals. Intervals correspond to time periods during which events occur or propositions hold, and points represent beginning and ending points of some events, as well as neutral points of time. For example, in our story, we have two meaningful events: "John was going to work" and "Fred was going to work." These events are associated with intervals J = [P1, P2] and F = [P3, P4], respectively. The extreme points of these intervals, P1, ..., P4, represent the times in which Fred and John left home and arrived at work. We also introduce a neutral point, P0, to represent the "beginning of the world" in our story. One possible choice for P0 is 7:00 a.m. Temporal statements in the story are treated as constraints on the location of objects (such as intervals J and F, and points P0, ..., P4) along the time line.

There are two types of constraints: qualitative and quantitative. Qualitative constraints specify the relative position of pairs of objects. For instance, the fact that John and Fred met at a traffic light forces intervals J and F to overlap. Quantitative constraints place absolute bounds or restrict the temporal distance between points. For example, the information on Fred's commuting time constrains the length of interval F, i.e., the distance between P3 and P4.

  Relation      Symbol  Inverse  Relation on endpoints
  p before I      b       bi     p < I-
  p starts I      s       si     p = I-
  p during I      d       di     I- < p < I+
  p finishes I    f       fi     p = I+
  p after I       a       ai     p > I+

Table 1: The basic relations between a point p and an interval I = [I-, I+].

In the rest of this section we formally define qualitative and quantitative constraints, and the relationships between them.
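Table 1 amounts to a small decision procedure on endpoints. A minimal sketch (the function name is mine; the interval is assumed non-degenerate):

```python
# Sketch: classify the basic point-interval relation of Table 1
# between a point p and an interval I = [lo, hi].

def basic_pi_relation(p, lo, hi):
    assert lo < hi                 # intervals are non-degenerate
    if p < lo:
        return 'before'            # symbol b, inverse bi
    if p == lo:
        return 'starts'            # symbol s, inverse si
    if p < hi:
        return 'during'            # symbol d, inverse di
    if p == hi:
        return 'finishes'          # symbol f, inverse fi
    return 'after'                 # symbol a, inverse ai

print(basic_pi_relation(3, 5, 9))   # before
print(basic_pi_relation(5, 5, 9))   # starts
print(basic_pi_relation(7, 5, 9))   # during
```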
Qualitative Constraints
A qualitative constraint between two objects Oi and Oj, each of which may be a point or an interval, is a disjunction of the form

  (Oi r1 Oj) ∨ ... ∨ (Oi rk Oj),    (1)

where each of the ri's is a basic relation that may exist between the two objects. There are three types of basic relations.
- Basic Interval-Interval (II) relations that can hold between a pair of intervals [1]: before, meets, starts, during, finishes, overlaps, their inverses, and the equality relation, a total of 13 relations, denoted by the set {b, m, s, d, f, o, bi, mi, si, di, fi, oi, =}.
- Basic Point-Point (PP) relations that can hold between a pair of points [14], denoted by the set {<, =, >}.
- Basic Point-Interval (PI) relations that can hold between a point and an interval, and basic Interval-Point (IP) relations that can hold between an interval and a point. These relations are defined in Table 1 (see also [7]).

A subset of basic relations (of the same type) corresponds to an ambiguous, disjunctive, relationship between objects. For example, Equation (1) may also be written as Oi {r1, ..., rk} Oj; alternatively, we say that the constraint between Oi and Oj is the relation set {r1, ..., rk}. One qualitative constraint given in Example 1 reflects the fact that John and Fred met at a traffic light. It is expressed by an II relation specifying that intervals J and F are not disjoint: J {s, si, d, di, f, fi, o, oi, =} F.

To facilitate the processing of qualitative constraints, we define a qualitative algebra (QA), whose elements are all legal constraints (all subsets of basic relations of the same type): 2^13 II relations, 2^3 PP relations, 2^5 PI relations, and 2^5 IP relations. Two binary operations are defined on these elements: intersection and composition. The intersection of two qualitative constraints, R' and R'', denoted by R' ⊕ R'', is the set-theoretic intersection R' ∩ R''.

MEIRI 261

Table 2: A full transitivity table.
The composition of two constraints, R' between objects Oi and Oj, and R'' between objects Oj and Ok, is a new relation between objects Oi and Ok, induced by R' and R''. Formally, the composition of R' and R'', denoted by R' ⊗ R'', is the composition of the constituent basic relations, namely

  R' ⊗ R'' = {r' ⊗ r'' | r' ∈ R', r'' ∈ R''}.

Composition of two basic relations r' and r'' is defined by a transitivity table, shown in Table 2. Six transitivity tables, T1, ..., T4, TPA, TIA, are required, each defining a composition of basic relations of a certain type. For example, composition of a basic PP relation and a basic PI relation is defined in table T1. Two important subsets of QA are Allen's Interval Algebra (IA), the restriction of QA to II relations, and Vilain and Kautz's Point Algebra (PA), its restriction to PP relations. The corresponding transitivity tables are given in [1] and [14], and appear in Table 2 as TIA and TPA, respectively. The rest of the tables, T1, ..., T4, are given in the extended version of this paper [10]. Illegal combinations in Table 2 are denoted by ∅.

Quantitative Constraints
Quantitative constraints refer to absolute location or the distance between points [4]. There are two types of quantitative constraints:
- A unary constraint, on point Pi, restricts the location of Pi to a given set of intervals: (Pi ∈ I1) ∨ ... ∨ (Pi ∈ Ik).
- A binary constraint, between points Pi and Pj, constrains the permissible values for the distance Pj - Pi: (Pj - Pi ∈ I1) ∨ ... ∨ (Pj - Pi ∈ Ik).
In both cases the constraint is represented by a set of intervals {I1, ..., Ik}; each interval may be open or closed on either side¹. For example, one binary constraint given in our story specifies the duration of interval J (the event "John was going to work"):

  P2 - P1 ∈ {(0,20), (60,∞)}.

¹The set {I1, ..., Ik} represents the set of real numbers I1 ∪ ... ∪ Ik. Throughout the paper we shall use the convention whereby a real number v is in {I1, ...
, Ik} iff v ∈ I1 ∪ ... ∪ Ik.

The fact that John left home between 7:05-7:10 is translated into a unary constraint on P1, P1 ∈ {(5,10)}, or 5 < P1 < 10 (note that all times are relative to P0, i.e., 7:00 a.m.). Sometimes it is easier to treat a unary constraint on Pi as a binary constraint between P0 and Pi, having the same interval representation. For example, the above unary constraint is equivalent to the binary constraint P1 - P0 ∈ {(5,10)}.

The intersection and composition operations for quantitative constraints assume the following form. Let C' and C'' be quantitative constraints, represented by interval sets I' and I'', respectively. Then, the intersection of C' and C'' is defined as

  C' ⊕ C'' = {x | x ∈ I', x ∈ I''}.

The composition of C' and C'' is defined as

  C' ⊗ C'' = {z | ∃x ∈ I', ∃y ∈ I'', x + y = z}.

Relationships between Qualitative and Quantitative Constraints
The existence of a constraint of one type sometimes implies the existence of an implicit constraint of the other type. This can only occur when the constraint involves two points. Consider a pair of points Pi and Pj. If a quantitative constraint, C, between Pi and Pj is given (by an interval set {I1, ..., Ik}), then the implied qualitative constraint, QUAL(C), is defined as follows (see also [6]).
- If 0 ∈ {I1, ..., Ik}, then "=" ∈ QUAL(C).
- If there exists a value v > 0 such that v ∈ {I1, ..., Ik}, then "<" ∈ QUAL(C).
- If there exists a value v < 0 such that v ∈ {I1, ..., Ik}, then ">" ∈ QUAL(C).
Similarly, if a qualitative constraint, C, between Pi and Pj is given (by a relation set R), then the implied quantitative constraint, QUAN(C), is defined as follows.
- If "<" ∈ R, then (0,∞) ∈ QUAN(C).
- If "=" ∈ R, then [0] ∈ QUAN(C).
- If ">" ∈ R, then (-∞,0) ∈ QUAN(C).
The intersection and composition operations can be extended to cases where the operands are constraints of different types.
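Before moving on, the purely quantitative operations and the QUAL translation defined above can be sketched as follows (the encoding is mine: a constraint is a list of (low, high) pairs, all treated as open intervals for simplicity, so the closed point interval [0] of QUAN is not covered):

```python
# Sketch of the operations on quantitative constraints defined above,
# with every interval treated as open.

def q_intersection(I1, I2):
    """C' (+) C'': pairwise intersection of the two interval sets."""
    out = []
    for a, b in I1:
        for c, d in I2:
            lo, hi = max(a, c), min(b, d)
            if lo < hi:                      # nonempty open intersection
                out.append((lo, hi))
    return out

def q_composition(I1, I2):
    """C' (x) C'' = {z | z = x + y, x in I', y in I''}: intervals add up."""
    return [(a + c, b + d) for a, b in I1 for c, d in I2]

def qual_of(I):
    """QUAL(C): the implied qualitative (PP) relation set."""
    rels = set()
    for a, b in I:
        if a < 0:
            rels.add('>')        # some negative distance is allowed
        if b > 0:
            rels.add('<')        # some positive distance is allowed
        if a < 0 < b:
            rels.add('=')        # zero distance is allowed
    return rels

# Duration of John's trip: P2 - P1 in {(0,20), (60,inf)}.
J = [(0, 20), (60, float('inf'))]
print(q_intersection(J, [(10, 70)]))   # [(10, 20), (60, 70)]
print(qual_of(J))                      # {'<'}
```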
If C' is a quantitative constraint and C'' is qualitative, then intersection is defined as quantitative intersection:

  C' ⊕ C'' = C' ⊕ QUAN(C'').    (2)

Composition, on the other hand, depends on the type of C''.
- If C'' is a PP relation, then composition (and consequently the resulting constraint) is quantitative: C' ⊗ C'' = C' ⊗ QUAN(C'').
- If C'' is a PI relation, then composition is qualitative: C' ⊗ C'' = QUAL(C') ⊗ C''.

3 General Temporal Constraint Networks
We now present a network-based model which facilitates the processing of all constraints described in the previous section. The definitions of the new model follow closely those developed for discrete constraint networks [11], and for metric networks [4].

A general temporal constraint network involves a set of variables {X1, ..., Xn}, each representing a temporal object (a point or an interval), and a set of unary and binary constraints. When a variable represents a time point, its domain is the set of real numbers R; when a variable represents a temporal interval, its domain is the set of ordered pairs of real numbers, i.e., {(a,b) | a, b ∈ R, a < b}. Constraints may be quantitative or qualitative. Each qualitative constraint is represented by a relation set R. Each quantitative constraint is represented by an interval set I. Constraints between variables representing points are always maintained in their quantitative form. We also assume that unary quantitative constraints are represented by equivalent binary constraints, as shown in the previous section. A set of internal constraints relates each interval I = [I-, I+] to its endpoints: I- {starts} I, and I+ {finishes} I.

A constraint network is associated with a directed constraint graph, where nodes represent variables, and an arc i → j indicates that a constraint Cij, between variables Xi and Xj, is specified.
The arc is labeled by an interval set (when the constraint is quantitative) or by a QA element (when it is qualitative). The constraint graph of Example 1 is shown in Figure 1.

A tuple X = (x1, ..., xn) is called a solution if the assignment {X1 = x1, ..., Xn = xn} satisfies all the constraints (note that the value assigned to a variable which represents an interval is a pair of real numbers). The network is consistent if at least one solution exists. A value v is a feasible value for variable Xi if there exists a solution in which Xi = v. The set of all feasible values of a variable is called its minimal domain.

We define a partial order, ⊑, among binary constraints of the same type. A constraint C' is tighter than constraint C'', denoted by C' ⊑ C'', if every pair of values allowed by C' is also allowed by C''. If C' and C'' are qualitative, represented by relation sets R' and R'', respectively, then C' ⊑ C'' if and only if R' ⊆ R''. If C' and C'' are quantitative, represented by interval sets I' and I'', respectively, then C' ⊑ C'' if and only if for every value v ∈ I', we have also v ∈ I''. This partial order can be extended to networks in the usual way. A network N' is tighter than network N'' if the partial order ⊑ is satisfied for all the corresponding constraints. Two networks are equivalent if they possess the same solution set. A network may have many equivalent representations; in particular, there is a unique equivalent network, M, which is minimal with respect to ⊑, called the minimal network (the minimal network is unique because equivalent networks are closed under intersection). The arc constraints specified by M are called the minimal constraints.

The minimal network is an effective, more explicit, encoding of the given knowledge. Consider for example the minimal network of Example 1. The minimal constraint between J and F is {di}, and the minimal constraint between P1 and P2 is {(60,∞)}.
From this minimal network representation, we can infer that today John was working in the main office; he arrived at work after 8:00 a.m., and thus Fred was the first to arrive at work.

Given a network N, the first interesting task is to determine its consistency. If the network is consistent, we are interested in other reasoning tasks, such as finding a solution to N, computing the minimal domain of a given variable Xi, computing the minimal constraint between a given pair of variables Xi and Xj, and computing the full minimal network. The rest of the paper is concerned with solving these tasks.

4 The Hierarchy of Qualitative Networks
Before we present solution techniques for general networks, we briefly describe the hierarchy of qualitative networks.

Consider a network having only qualitative constraints. If all constraints are II relations (namely IA elements), or PP relations (PA elements), then the network is called an IA network, or a PA network, respectively [12]. If all constraints are PI and IP relations, then the network is called an IPA network (for Interval-Point Algebra²). A special case of a PA network, where the relations are convex (taken only from {<, ≤, =, ≥, >}, namely excluding ≠), is called a convex PA network (CPA network).

It can be easily shown that any qualitative network can be represented by an IA network. On the other hand, there are some qualitative networks that cannot be represented by a PA network; for example (see [14]), a network consisting of two intervals, I and J, and a single constraint between them, I {before, after} J. Formally, the following relationship can be established among qualitative networks.

Proposition 1 Let QN be the set of all qualitative networks. Let net(CPA), net(PA), net(IPA), and net(IA) denote the set of qualitative networks which can be represented by CPA networks, PA networks, IPA networks, and IA networks, respectively. Then, net(CPA) ⊂ net(PA) ⊂ net(IPA) ⊂ net(IA) = QN.
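The PA networks in this hierarchy manipulate relation sets over {<, =, >}. A sketch of PA composition, with the 3 x 3 transitivity table written out by hand rather than copied from [14]:

```python
# Sketch: composition of PA relation sets via the point-algebra
# transitivity table T_PA (entries reconstructed from the semantics
# of <, =, > on points, not transcribed from the paper).

T_PA = {
    ('<', '<'): {'<'},  ('<', '='): {'<'},  ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'},  ('=', '='): {'='},  ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'},  ('>', '='): {'>'},  ('>', '>'): {'>'},
}

def pa_compose(R1, R2):
    """R' (x) R'': union of compositions of the constituent basic relations."""
    out = set()
    for r1 in R1:
        for r2 in R2:
            out |= T_PA[(r1, r2)]
    return out

# x < y and y <= z imply x < z:
print(pa_compose({'<'}, {'<', '='}))   # {'<'}
# x < y and y > z leave the relation between x and z unconstrained:
print(pa_compose({'<'}, {'>'}))        # {'<', '=', '>'}
```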
²We use this name to comply with the names IA and PA, although technically these relations, together with the intersection and composition operations, do not constitute an algebra, because they are not closed under composition.

Figure 1: The constraint graph of Example 1.

By climbing up in this hierarchy from CPA networks towards IA networks we gain expressiveness, but at the same time lose tractability. For example, deciding consistency of a PA network can be done in time O(n²) [13], but it becomes NP-complete for IA networks [14], or even for IPA networks, as stated in the following theorem.

Theorem 2 Deciding consistency of an IPA network is NP-hard.

Theorem 2 suggests that the border between tractable and intractable qualitative networks lies somewhere between PA networks and IPA networks.

5 Augmented Qualitative Networks
We now return to solving general networks. First, we observe that even the simplest task of deciding consistency of a general network is NP-hard. This follows trivially from the fact that deciding consistency for either metric networks or IA networks is NP-hard [4, 14]. Therefore, it is unlikely that there exists a general polynomial algorithm for deciding consistency of a network. In this section we take another approach, and pursue "islands of tractability": special classes of networks which admit polynomial solution. In particular, we consider the simplest type of network which contains both qualitative and quantitative constraints, called an augmented qualitative network, a qualitative network augmented by unary constraints on its domains.

We may view qualitative networks as a special case of augmented qualitative networks, where the domains are unlimited. For example, PA networks can be regarded as augmented qualitative networks with domains (-∞,∞). It follows that in our quest for tractability, we can only augment tractable qualitative networks such as CPA and PA networks.

In this section, we consider CPA and PA networks over three domain classes which carry significant importance in temporal reasoning applications:
1. Discrete domains, where each variable may assume only a finite number of values (for instance, when we settle for crude timing of events such as the day or year in which they occurred).
In this section, we consider CPA and PA networks over three domain classes which carry significant im- portance in temporal reasoning applications: 1. Discrete domains, where each variable may assume only a finite number of values (for instance, when we settle for crude timing of events such as the day or year in which they occurred). (15JO)I 2. Single-interval domains, where we have only an up- per and/or a lower bound on the timing of events. 3. Multiple-intervals domains, which subsumes the two previous cases3. A CPA network over multiple-intervals domains is de- picted in Figure 2a, where each variable is labeled by its domain intervals. Note that in this example, and also throughout the rest of this section, we use a special constraint graph representation, where the domain con- straints are expressed as unary constraints (in general networks they are represented as binary constra.ints). We next show that for augmented CPA networks and for some augmented PA networks, all interesting reasoning tasks can be solved in polynomial time, by enforcing arc consistency (AC) and path consistency PC>* First, let us review the definitions of arc consistency and path consistency [8, 111. An arc i + j is arc consis- tent if and only if for any value x E Di , there is a value y E Dj, such that the constraint Cij is satisfied. A path P from i to j, io = i ---) il + a.. + i, = j, is path consistent if the direct constraint Cij is tighter than the composition of the constraints along P, namely, Cij E CiO,il @ . l l @ Cirnel,irn. Note that our defini- tion of path consistency is slightly different than the original one [8], as it does not consider the domain con- straints. A network G is arc (path) consistent if all its arcs (paths) are consistent. Figure 2b shows an equiv- alent arc- and path-consistent form of the network in Figure 2a. The following theorems establish the local consis- tency levels which are sufficient to determine the con- sistency of augmented CPA networks. 
Theorem 3 Any nonempty⁴ arc-consistent CPA network over discrete domains is consistent.

Theorem 4 Any nonempty arc- and path-consistent CPA network over multiple-intervals domains is consistent.

³Note that a discrete domain {v1, ..., vk} is essentially a multiple-intervals domain {[v1,v1], ..., [vk,vk]}.
⁴A nonempty network is a network in which all domains and all constraints are nonempty.

Figure 2: (a) A CPA network over multiple-intervals domains. (b) An equivalent arc- and path-consistent form.

Theorems 3 and 4 provide an effective test for deciding consistency of an augmented CPA network. We simply enforce the required consistency level, and then check whether the domains and the constraints are empty. The network is consistent if and only if all domains and all constraints are nonempty. Moreover, by enforcing arc and path consistency we also compute the minimal domains, as stated in the next theorem.

Theorem 5 Let G = (V, E) be a nonempty arc- and path-consistent CPA network over multiple-intervals domains. Then, all domains are minimal.

When we move up in the hierarchy from CPA networks to PA networks (allowing also the inequality relation between points), deciding consistency and computing the minimal domains remain tractable for single-interval domains. Unfortunately, deciding consistency becomes NP-hard for discrete domains, and consequently, for multiple-intervals domains.

Theorem 6 Let G = (V, E) be a nonempty arc- and path-consistent PA network over single-interval domains. Then, G is consistent, and all domains are minimal.

Proposition 7 Deciding consistency of a PA network over discrete domains is NP-hard.

One way to convert a network into an equivalent arc-consistent form is by applying algorithm AC-3 [8], shown in Figure 3.
The algorithm repeatedly applies the function REVISE((i, j)), which makes arc i → j consistent, until a fixed point, where all arcs are consistent, is reached. The function REVISE((i, j)) restricts Di, the domain of variable Xi, using operations on quantitative constraints:

  Di ← Di ⊕ (Dj ⊗ QUAN(Cji)).

Taking advantage of the special features of PA networks, we are able to bound the running time of AC-3 as follows.

Theorem 8 Let G = (V, E) be an augmented PA network. Let n and e be the number of nodes and the number of edges, respectively. Then, the timing of algorithm AC-3 is bounded as follows.

  1. Q ← {i → j | i → j ∈ E}
  2. while Q ≠ ∅ do
  3.   select and delete any arc k → m from Q
  4.   if REVISE((k, m)) then
  5.     Q ← Q ∪ {i → k | i → k ∈ E, i ≠ m}
  6. end

Figure 3: AC-3, an arc consistency algorithm.

- If the domains are discrete, then AC-3 takes O(ek log k) time, where k is the maximum domain size⁵.
- If the domains consist of single intervals, then AC-3 takes O(en) time.
- If the domains consist of multiple intervals, then AC-3 takes O(en²K²) time, where K is the maximum number of intervals in any domain.

A network can be converted into an equivalent path-consistent form by applying any path consistency algorithm to the underlying qualitative network [8, 14, 12]. Path consistency algorithms impose local consistency among triplets of variables, (i, k, j), by using a relaxation operation

  Cij ← Cij ⊕ (Cik ⊗ Ckj).    (3)

Relaxation operations are applied until a fixed point is reached, or until some constraint becomes empty, indicating an inconsistent network. One efficient path consistency algorithm is PC-2 [8], shown in Figure 4, where the relaxation operation of Equation (3) is performed by the function REVISE((i, k, j)). Algorithm PC-2 runs to completion in O(n³) time [9].

Table 3 summarizes the complexity of determining consistency in augmented qualitative networks.
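The AC-3 procedure of Figure 3 can be sketched for the discrete-domain case, with constraints encoded as sets of allowed value pairs (the encoding and names here are mine, not the paper's):

```python
# Sketch of AC-3 (Figure 3) over discrete domains.
from collections import deque

def revise(domains, constraints, i, j):
    """Remove values of X_i with no support in D_j; True iff D_i changed."""
    allowed = constraints[(i, j)]
    supported = {x for x in domains[i]
                 if any((x, y) in allowed for y in domains[j])}
    changed = supported != domains[i]
    domains[i] = supported
    return changed

def ac3(domains, constraints):
    queue = deque(constraints)                  # all arcs i -> j
    while queue:
        k, m = queue.popleft()
        if revise(domains, constraints, k, m):
            # re-examine all arcs into k, except the one from m
            queue.extend((i, kk) for (i, kk) in constraints
                         if kk == k and i != m)
    return all(domains.values())                # every domain nonempty?

# Two points with domains {1,2,3} and the constraint X1 < X2
# (stored on both arcs, as pairs oriented per arc):
doms = {1: {1, 2, 3}, 2: {1, 2, 3}}
cons = {(1, 2): {(x, y) for x in range(1, 4) for y in range(1, 4) if x < y},
        (2, 1): {(y, x) for x in range(1, 4) for y in range(1, 4) if x < y}}
print(ac3(doms, cons), doms)   # True {1: {1, 2}, 2: {2, 3}}
```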
Note that when both arc and path consistency are required, we first need to establish path consistency, which results in a complete graph, namely e = n². Algorithms for assembling a solution to augmented qualitative networks are given in the extended version of this paper [10]. Their complexity is bounded by the time needed to decide consistency.

⁵Recently, Deville and Van Hentenryck [5] have devised an efficient arc-consistency algorithm which runs in O(ek) time for CPA networks over discrete domains, improving the O(ek log k) upper bound of AC-3.

               Discrete      Single interval  Multiple intervals
  CPA networks AC            AC + PC          AC + PC
               O(ek log k)   O(n³)            O(n⁴K²)
  PA networks  NP-complete   AC + PC          NP-complete
                             O(n³)
  IPA networks NP-complete   NP-complete      NP-complete

Table 3: Complexity of deciding consistency in augmented qualitative networks.

  1. Q ← {(i,k,j) | (i < j), (k ≠ i,j)}
  2. while Q ≠ ∅ do
  3.   select and delete any triplet (i,k,j) from Q
  4.   if REVISE((i,k,j)) then
  5.     Q ← Q ∪ RELATED-PATHS((i,k,j))
  6. end

Figure 4: PC-2, a path consistency algorithm.

6 Solving General Networks
In this section we focus on solving general networks. First, we describe an exponential brute-force algorithm, and then we investigate the applicability of path consistency algorithms.

Let N be a given network. A basic label of arc i → j is a selection of a single interval from the interval set (if Cij is quantitative) or a basic relation from the QA element (if Cij is qualitative). A network whose arcs are labeled by basic labels of N is called a singleton labeling of N. We may solve N by generating all its singleton labelings, solving each one of them independently, and then combining the results. Specifically, N is consistent if and only if there exists a consistent singleton labeling of N, and the minimal network can be computed by taking the union over the minimal networks of all the singleton labelings.
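The enumeration of singleton labelings can be sketched directly; in this sketch (encoding mine) each generated labeling would then be handed to an STP solver:

```python
# Sketch: enumerate all singleton labelings of a network, i.e. every way
# of picking one basic label (interval or basic relation) per arc.
from itertools import product

def singleton_labelings(network):
    """network: dict mapping an arc to its list of basic labels."""
    arcs = list(network)
    for choice in product(*(network[a] for a in arcs)):
        yield dict(zip(arcs, choice))

# Two arcs with two basic labels each: 2 * 2 = 4 singleton labelings.
net = {('P1', 'P2'): [(0, 20), (60, None)],   # two intervals as two labels
       ('J', 'F'):   ['o', 'd']}              # two basic II relations
labelings = list(singleton_labelings(net))
print(len(labelings))                         # 4
print(labelings[0])
```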
Each qualitative constraint in a singleton labeling can be translated into a set of up to four linear inequalities on points. For example, a constraint I {during} J can be translated into linear inequalities on the endpoints of I and J: I- > J-, I- < J+, I+ > J-, and I+ < J+. Using the QUAN translation, these inequalities can be translated into quantitative constraints. It follows that a singleton labeling is equivalent to an STP network: a metric network whose constraints are labeled by single intervals [4]. An STP network can be solved in O(n³) time [4]; thus, the overall complexity of this decomposition scheme is O(n³k^e), where n is the number of variables, e is the number of arcs in the constraint graph, and k is the maximum number of basic labels on any arc.

This brute-force enumeration can be pruned significantly by running a backtracking algorithm on a meta-CSP whose variables are the network arcs, and whose domains are the possible basic labels. Backtrack assigns a basic label to an arc as long as the corresponding STP network is consistent and, if no such assignment is possible, it backtracks.

Imposing local consistency among subsets of variables may serve as a preprocessing step to improve backtrack. This strategy has been proven successful (see [3]), as enforcing local consistency can be achieved in polynomial time, while it may substantially reduce the number of dead-ends encountered in the search phase itself. In particular, experimental evaluation shows that enforcing a low consistency level, such as arc or path consistency, gives the best results [3]. Following this rationale, we next show that path consistency, which in general networks amounts to the least amount of preprocessing⁶, can be enforced in polynomial time.

To assess the complexity of PC-2 in the context of general networks, we introduce the notion of the range of a network [4]. The range of a quantitative constraint, C, represented by an interval set {I1, ...
, I_k}, where the intervals' extreme points are integers, is sup(I_k) - inf(I_1). The range of a network is the maximum range over all its quantitative constraints. The range of a network containing rational extreme points is the range of the equivalent integral network, obtained from the input network by multiplying all extreme points by the least common multiple of their denominators. The next theorem shows that the timing of PC-2 is bounded by O(n^3 R^3), where R is the range of the network.

Theorem 9 Let G = (V, E) be a given network. Algorithm PC-2 performs no more than O(n^3 R) relaxation steps, and its timing is bounded by O(n^3 R^3), where R is the range of G.

Path consistency can also be regarded as an alternative approach to exhaustive enumeration, serving as an approximation scheme which often yields the minimal network. For example, applying path consistency to the network of Figure 1 produces the minimal network. Although, in general, a path-consistent network is not necessarily minimal and may not even be consistent, in some cases path consistency is guaranteed to determine the consistency of a network or to compute the minimal network representation.

Footnote 6: General networks are trivially arc consistent since unary constraints are represented as binary constraints.

266 TEMPORAL CONSTRAINTS

Proposition 10 Let G = (V, E) be a path-consistent network. If the qualitative subnetwork of G is in net(CPA), and the quantitative subnetwork constitutes an STP network, then G is consistent.

Corollary 11 Any path-consistent singleton labeling is minimal.

Note that for networks satisfying the condition of Proposition 10, path consistency is not guaranteed to compute the minimal network (a counterexample is provided in [10]); however, it can be shown that for these networks the minimal network can be computed using O(n^2) applications of path consistency (see [10]).
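Since each singleton labeling is an STP network, the O(n^3) consistency check mentioned above reduces to detecting a negative cycle in the distance graph [4]. A minimal Python sketch of that check (the function name and the dict-based constraint encoding are ours):

```python
def stp_consistent(n, constraints):
    """Decide consistency of an STP network over points 0..n-1 in O(n^3).

    `constraints` maps (i, j) to a single interval (a, b), meaning
    a <= x_j - x_i <= b.  Encoded as distance-graph edge weights
    d[i][j] = b and d[j][i] = -a, the network is consistent iff
    Floyd-Warshall finds no negative cycle.
    """
    INF = float('inf')
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), (a, b) in constraints.items():
        d[i][j] = min(d[i][j], b)
        d[j][i] = min(d[j][i], -a)
    # All-pairs shortest paths.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative diagonal entry signals a negative cycle, i.e. inconsistency.
    return all(d[i][i] >= 0 for i in range(n))
```

For example, x_1 - x_0 in [1, 2] and x_2 - x_1 in [1, 2] are jointly consistent with x_2 - x_0 in [2, 4], but not with x_2 - x_0 in [5, 6].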
We believe that more temporal problems can be solved by path consistency algorithms; further investigation may reveal new classes for which these algorithms are exact.

7 Conclusions

We described a general network-based model for temporal reasoning capable of handling both qualitative and quantitative information. It facilitates the processing of quantitative constraints on points and all qualitative constraints between temporal objects. We used constraint satisfaction techniques in solving reasoning tasks in this model. In particular, general networks can be solved by a backtracking algorithm, or by path consistency, which computes an approximation to the minimal network.

Kautz and Ladkin [6] have introduced an alternative model for temporal reasoning. It consists of two components: a metric network and an IA network. These two networks, however, are not connected via internal constraints; rather, they are kept separately, and the inter-component relationships are managed by means of external control. To solve reasoning tasks in this model, Kautz and Ladkin proposed an algorithm which solves each component independently, and then circulates information between the two parts, using the QUAL and QUAN translations, until a fixed point is reached. Our model has two advantages over Kautz and Ladkin's model:

1. It is conceptually clearer, as all information is stored in a single network, and constraint propagation takes place in the knowledge level itself.
2. From a computational point of view, we are able to provide tighter bounds for various reasoning tasks. For example, in order to convert a given network into an equivalent path-consistent form, Kautz and Ladkin's algorithm may require O(n^2) information transfers, resulting in an overall complexity of O(n^5 R^3), compared to O(n^3 R^3) in our model.

Using our integrated model we were able to identify two new classes of tractable networks.
The first class is obtained by augmenting PA and CPA networks with various domain constraints. We showed that some of these networks can be solved using arc and path consistency. The second class consists of networks which can be solved by path consistency algorithms, for example, singleton labelings.

Future research should enrich the representation language to facilitate modeling of more involved reasoning tasks; in particular, we should incorporate non-binary constraints in our model.

Acknowledgments

I would like to thank Rina Dechter and Judea Pearl for providing helpful comments on an earlier draft of this paper.

References

[1] J. F. Allen. Maintaining knowledge about temporal intervals. CACM, 26(11):832-843, 1983.
[2] T. L. Dean and D. V. McDermott. Temporal data base management. Artificial Intelligence, 32:1-55, 1987.
[3] R. Dechter and I. Meiri. Experimental evaluation of preprocessing techniques in constraint satisfaction problems. In Proceedings of IJCAI-89, pages 271-277, Detroit, MI, 1989.
[4] R. Dechter, I. Meiri, and J. Pearl. Temporal constraint networks. Artificial Intelligence, 49, 1991.
[5] Y. Deville and P. Van Hentenryck. An efficient arc consistency algorithm for a class of CSP problems. In Proceedings of IJCAI-91, Sydney, Australia, 1991.
[6] H. Kautz and P. B. Ladkin. Integrating metric and qualitative temporal reasoning. In Proceedings of AAAI-91, Anaheim, CA, 1991.
[7] P. B. Ladkin and R. D. Maddux. On binary constraint networks. Technical report, Kestrel Institute, Palo Alto, CA, 1989.
[8] A. K. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8(1):99-118, 1977.
[9] A. K. Mackworth and E. C. Freuder. The complexity of some polynomial network consistency algorithms for constraint satisfaction problems. Artificial Intelligence, 25(1):65-74, 1985.
[10] I. Meiri. Combining qualitative and quantitative constraints in temporal reasoning.
Technical Report TR-160, UCLA Cognitive Systems Laboratory, Los Angeles, CA, 1991.
[11] U. Montanari. Networks of constraints: Fundamental properties and applications to picture processing. Information Sciences, 7:95-132, 1974.
[12] P. van Beek. Exact and Approximate Reasoning about Qualitative Temporal Relations. PhD thesis, University of Waterloo, Ontario, Canada, 1990.
[13] P. van Beek. Reasoning about qualitative temporal information. In Proceedings of AAAI-90, pages 728-734, Boston, MA, 1990.
[14] M. Vilain and H. Kautz. Constraint propagation algorithms for temporal reasoning. In Proceedings of AAAI-86, pages 377-382, Philadelphia, PA, 1986.
Hinrichs and Janet Kolodner
School of Information and Computer Science
Georgia Institute of Technology
Atlanta, Georgia 30332

Abstract

Many design tasks have search spaces that are vague and evaluation criteria that are subjective. We present a model of design that can solve such problems using a method of plausible design adaptation. In our approach, adaptation transformations are used to modify the components and structure of a design and constraints on the design problem. This adaptation process plays multiple roles in design: 1) It is used as part of case-based reasoning to modify previous design cases. 2) It accommodates constraints that arrive late in the design process by adapting previous decisions rather than by retracting them. 3) It resolves impasses in the design process by weakening preference constraints. This model of design has been implemented in a computer program called JULIA that designs the presentation and menu of a meal to satisfy multiple, interacting constraints.

Introduction

In Artificial Intelligence, design has typically been classified as being either the selection of components to instantiate a skeletal design [Ward & Seering, 1989], the configuration of a given set of components [McDermott, 1980], the fixing of numerical parameters [Brown & Chandrasekaran, 1985, Mittal & Araya, 1986], or the construction of designs from scratch [Tong, 1988]. While useful for routine sorts of design, these rigid classifications do not begin to capture the flexibility that real designers exhibit [Goel & Pirolli, 1989]. For high-level conceptual design especially, these tasks are often inseparable.

In addition, many design tasks are ill-defined; they have search spaces that are vague and evaluation criteria that are subjective. This is in part because design categories may be defined not in terms of necessary and sufficient conditions, but instead by experience and expectations.
For such tasks, it is unreasonable to assume that a designer can systematically enumerate possible designs. Architectural design, for example, cannot easily be reduced to searching a discrete problem space of alternative configurations or components. Moreover, real-world designs may have requirements that change over time. In fact, for relatively complex problems such as architectural or aerospace design, specifications are updated constantly as the solution is refined. A problem solver should be able to accommodate such changes with minimal disruption. Part of the solution is to use dependency-directed backtracking [Stallman & Sussman, 1977], but even that can be too heavy-handed if decisions have many consequences.

*This research was funded in part by NSF Grant No. IST-8608362, in part by DARPA Grant No. F49620-88-C-0058

28 TRANSFORMATION IN DESIGN

In this paper, we present a strategy for automating high-level design that addresses these three critical issues:

- How can a design architecture transcend the rigid classifications of the design task (i.e., selection, configuration, parameter fixing, and construction)?
- How can a design problem solver avoid searching problem spaces that are vague and ill-defined?
- How can the introduction of new constraints late in the design process be accommodated in a computationally efficient way?

Our solution to these problems is a model of design that incorporates case-based reasoning [Kolodner et al., 1985, Hammond, 1989], constraint posting [Stefik, 1981a], and problem reduction. Central to this model is a process of plausible design adaptation. Design adaptation is a heuristic debugging process that takes as input a source concept, a set of constraint violations and a set of adaptation transformations, and returns a new concept that satisfies constraints. This process plays multiple roles in our model of design: 1) It is used as part of case-based reasoning to modify previous design cases.
2) It accommodates constraints that arrive late in the design process by adapting previous decisions rather than by retracting them. 3) It resolves impasses in the design process by weakening preference constraints.

We have implemented and tested our model of design in a computer program called JULIA [Hinrichs, 1988, Hinrichs, 1989] that interactively designs the presentation and menu of a meal to satisfy multiple, interacting constraints.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

Meal planning entails the selection of dishes, the configuration of those dishes and courses within a meal, the parametric manipulation of quantities such as cost and calories, and occasionally the construction of new dishes to meet idiosyncratic constraints. Design is done in JULIA at the request of a client who both approves selections as the design of a meal progresses and often adds information late in the design process.

In the next section, we present a series of examples that demonstrate the use of adaptation in JULIA. Section 3 describes the different roles that adaptation plays in our model of design. Section 4 presents an overview of how JULIA works, and Section 5 summarizes an experiment that shows the effect of adaptation on different types of problems.

Examples from JULIA

In the first example, we see JULIA dealing with an under-constrained problem. It is asked to plan a meal for which there are enormous numbers of satisficing solutions. In this example, JULIA uses adaptation for several different tasks: to specialize the structure of the meal, to change the structure of the meal in response to a contradiction, and to change a component in the meal, again in response to a contradiction. JULIA deals with the enormous search space by using a suggestion made by its case-based reasoner as a starting point and adapting it to fit the constraints of the new situation.
Problem 1 Plan a meal that is cheap, easy to prepare and includes tomato and cheese in its main dish.

JULIA begins by choosing a cuisine. It remembers several cases, some of which are Italian and some Mexican. It makes both suggestions to its client and asks for a preference. The client chooses Italian. A meal is generally composed of an appetizer, salad, main course, and dessert, but an Italian meal is different. Its appetizer is antipasto, and rather than a salad course, it has a pasta course. JULIA adapts the normal (default) meal structure it assumed in the beginning to conform to the structure of an Italian meal.

JULIA now concentrates on coming up with its main course (the part of the meal that provides the most constraint for the remainder of the meal). JULIA is reminded of an Italian meal it once planned. In that meal, lasagne, garlic bread, and red wine were served. It proposes this to its client, who accepts it. Lasagne, however, violates an important meal constraint: main ingredients of dishes are not allowed to be repeated across courses. JULIA uses an adaptation transformation called SHARE-FUNCTION to combine the pasta course and the main course.

Now the client adds a new constraint: the meal should be vegetarian. This introduces a contradiction that is resolved by adapting lasagne to vegetarian lasagne using the SPECIALIZE transformation. JULIA proceeds to propose vegetarian antipasto, spumoni-messina and coffee.

In the next example, adaptation is used in a different way. JULIA solves an over-constrained problem by adapting the structure of the meal to allow conflicting constraints to be solved independently. It recognizes that the problem is over-constrained when it reaches an impasse in its reasoning.

Problem 2 Plan a meal that is inexpensive, easy to prepare, and uses eggplant. Furthermore, it should satisfy Guest1, a vegetarian, and Guest2, a 'meat and potatoes' person who requires meat in his meal.
In general, JULIA attempts to find single solutions to each of its subgoals. In this situation, however, it fails to find a single dish that will satisfy both guests. JULIA examines the constraints that are responsible for this impasse. One adaptation transformation it has available, SPLIT-FUNCTION, specifies that if conflicting constraints derive from different sources, the conflict can be alleviated by increasing choice, i.e., using two items to achieve the goal instead of just one. Since the conflicting constraints arise from two different guests, JULIA suggests babaghanouj for guest-1 and skewered-lamb and eggplant for guest-2. It supplements this with hummus, pita-bread, retsina, greek-salad, and baklava.

In the next example, adaptation is used a third way. Here, the problem solver reaches another impasse. While above the impasse was resolved by adapting the structure of the meal, here it is resolved by adapting one of the conflicting constraints.

Problem 3 Plan a brunch for a group of 5 people. Some sort of eggs should be the main dish. One of the guests is on a low cholesterol diet.

In attempting to solve this problem, JULIA reaches an impasse. It cannot serve a meal with eggs as the main dish that is also low in cholesterol. It resolves this problem by relaxing the constraint that eggs should be a main ingredient of the main dish. It allows eggs to be a secondary ingredient. It then chooses high-protein pancakes as its main dish. Eggs is only a secondary ingredient and very little egg is used.

The last example shows JULIA anticipating a failure and taking steps to avoid it. When the case-based reasoner uncovers a potential failure, JULIA adds the necessary constraints to its problem description that allow it to avoid the problem. It then continues to solve the problem by remembering another meal whose menu is appropriate to the client. No adaptation is done.

HINRICHS & KOLODNER 29
Problem 4 Plan a Mexican meal for my research group that includes tomatoes.

JULIA is reminded of a similar meal that failed when someone wouldn't eat spicy chili, and inquires whether this will be a problem. When the client says yes, JULIA adds a constraint to rule out spicy dishes. It remembers another meal, and based on its menu, suggests guacamole, tacos, coffee and flan for dessert.

Roles of Adaptation

The examples above suggest some of the roles that adaptation can play in design:

1. Adaptation facilitates making decisions in under-constrained problems by allowing a problem solver to re-use an 'almost right' plan rather than re-solving from scratch.
2. Adaptation resolves over-constrained problems by serving as an alternative to retraction.
3. Adaptation relaxes preference constraints by minimally weakening them.
4. Adaptation extends the vocabulary of a problem solver by designing new components as variations of known components.

Adaptation and Under-constraint

An under-constrained problem is one for which there may be many possible solutions, but the problem constraints do not help to deduce or construct a solution. Informal tasks such as meal planning tend to be under-constrained and to have large search spaces. Because these search spaces may be ill-defined, it is important for a problem solver to avoid trying to exhaustively search them.

One strategy that can be used is to retrieve previous solutions that are nearly adequate and adapt them to fit the current problem. This is called case-based reasoning. Case-based reasoning zeros in on a part of the search space that has proven relevant in the past, and in effect searches only in the neighborhood local to the solution provided by the remembered case. JULIA uses case-based reasoning to propose plausible solutions, and in so doing, trades off guaranteed correctness for search efficiency. In this light, adaptation can be viewed as switching to a smaller, more tractable search space.
Adaptation is a kind of heuristic search in which transformations are applied to a source concept in order to repair constraint violations. The transformations are indexed by constraint violation and type of source concept, and are ranked by their expected cost of application. In JULIA, adaptation transformations fall into two classes: those that retain the structure and substitute the components of a concept, and those that retain the components of a concept but alter its structure. Examples of adaptation transformations are:

- SPECIALIZE substitutes a component with a more specific variant that satisfies constraints.
- GENERALIZE substitutes a component with a more general variant that satisfies constraints.
- SUBSTITUTE-BY-FUNCTION substitutes a component with one that is functionally identical.
- SUBSTITUTE-SIBLING substitutes a component with a taxonomic sibling.
- SHARE-FUNCTION re-structures a concept such that one component serves two functions.
- SPLIT-FUNCTION re-structures a concept such that two components serve a single function in tandem.

The first four are applicable when some feature of a selected design component violates a design constraint. SHARE-FUNCTION is applicable when two components with the same function have been inserted into the design. SPLIT-FUNCTION is chosen when there is an impasse caused by a failure to be able to solve a conjunctive set of constraints. These transformations are described in more detail in [Hinrichs, 1989, Hinrichs, 1991].

Adaptation and Over-constraint

An over-constrained problem is one that has no known solution. There are two types of over-constraint: A contradiction denotes a situation in which the problem solver is in an inconsistent state. Problem 1 illustrates several of these. An impasse is a situation in which the solution is incomplete and the problem solver can make no further progress. Problems 2 and 3 illustrate impasses.
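The indexing of transformations by violation type and their ranking by expected cost of application, described above, can be pictured as a dispatch table. The sketch below is illustrative only: the violation categories and cost values are our invention, not JULIA's internal representation.

```python
# Hypothetical registry: transformations indexed by the kind of
# constraint violation they repair, each tagged with an illustrative
# expected cost of application (lower = tried first).
TRANSFORMS = {
    'component-violates-constraint': [
        (1, 'SPECIALIZE'), (2, 'GENERALIZE'),
        (3, 'SUBSTITUTE-BY-FUNCTION'), (4, 'SUBSTITUTE-SIBLING'),
    ],
    'duplicate-function': [(2, 'SHARE-FUNCTION')],
    'conjunctive-impasse': [(5, 'SPLIT-FUNCTION')],
}

def candidate_transforms(violation_kind):
    """Return applicable transformation names, cheapest first."""
    return [name for _, name in sorted(TRANSFORMS.get(violation_kind, []))]
```

A driver would walk the returned list, applying each transformation until one yields a concept that satisfies the violated constraint.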
Two classical techniques for dealing with over-constraint are (a) to switch to a different context and (b) to backtrack to a previous decision and retract all of its consequences. Our approach shows that a contradictory decision need not always be retracted completely to solve a problem. Instead, the decision can often be adapted by finding a similar value that will fulfill all of the constraints while preserving the consequences of the previous decision. For example, in Problem 1, the decision to serve lasagne is adapted by specializing it to vegetarian lasagne. This makes it unnecessary to retract other decisions that depend on lasagne, such as garlic bread and red wine.

Another way to look at this role of adaptation is that while dependency-directed backtracking is intended to retract just those decisions of a problem that are relevant to a contradiction, the adaptation approach is designed to modify only those features of a decision that are contradictory. This can involve substituting one known concept for another, or it can involve creating an entirely new concept on the fly. There are several reasons why adaptation is an appropriate strategy for dealing with both over-constrained and under-constrained problems:

1. Satisficing solutions are often 'nearby' in the search space. If a component of a solution 'doesn't quite' fit, it is likely that one of its neighbors or relatives will.
2. Replacing one decision with a similar one often leaves consequences of the original decision intact.
3. Minimally altering the solution makes it easier for a human client or user to keep track of what is going on.

Adapting Constraints

When it is not possible to adapt a value or the structure of a problem to satisfy constraints (as in Problem 3), it is sometimes possible to relax the constraints on the decision. Typically, constraints are relaxed by ordering them in terms of their importance or utility, and then simply retracting the least important constraint.
Our approach is to adapt an offending constraint in order to weaken it just enough so that the problem can be solved. For example, consider the following constraint that the main ingredients of a dish must contain cheese:

(contains (?dish main-ingredients) cheese relaxable)

In this constraint, the first argument defines its scope, or domain, and the second argument, cheese, defines its range. Constraints such as this can be adapted by enlarging or contracting their domain and range.

The constraint adaptation routine resolves an over-constrained decision by first examining the values that have been considered and ruled out, and then sorting them by the number of relaxable constraints they violate. It then attempts to establish the ruled-out value by relaxing each of its violated constraints. An individual constraint is relaxed by applying one of four transformations:

1. ENLARGE-DOMAIN substitutes the scope of the constraint with one that is higher on the partonomic hierarchy.
2. REDUCE-DOMAIN substitutes the scope of the constraint with one that is lower on the partonomic hierarchy.
3. GENERALIZE-RANGE replaces the range of a constraint with a generalization of the range (for nominal constraints) or increases the numeric range (for ordinal constraints).
4. DIMINISH-RANGE specializes the range (for nominal constraints) or contracts the numeric range (for ordinal constraints).

Each constraint type indicates in which direction it must be adapted in order to weaken it. For example, the constraint described above could be adapted by enlarging its domain:

(contains (?dish ingredients) cheese relaxable)

or by generalizing its range:

(contains (?dish main-ingredients) dairy-product relaxable)

In this way, values that almost satisfy constraints can be re-considered if no better solution is discovered. This is a less drastic approach than retracting constraints altogether, because it provides a means of partially satisfying a constraint.
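ENLARGE-DOMAIN and GENERALIZE-RANGE both amount to moving one argument of a (scope, range) constraint a step up a hierarchy. A minimal sketch, assuming toy parent-pointer taxonomies (the taxonomies and function names are ours, not JULIA's representation):

```python
# Toy taxonomies, child -> parent (illustrative, not JULIA's knowledge base).
RANGE_TAXONOMY = {'cheese': 'dairy-product', 'dairy-product': 'food'}
DOMAIN_TAXONOMY = {'main-ingredients': 'ingredients'}

def generalize_range(constraint):
    """Weaken a (scope, range) constraint by generalizing its range,
    e.g. cheese -> dairy-product; None if no generalization exists."""
    scope, rng = constraint
    return (scope, RANGE_TAXONOMY[rng]) if rng in RANGE_TAXONOMY else None

def enlarge_domain(constraint):
    """Weaken a (scope, range) constraint by enlarging its scope,
    e.g. main-ingredients -> ingredients; None if already maximal."""
    scope, rng = constraint
    return (DOMAIN_TAXONOMY[scope], rng) if scope in DOMAIN_TAXONOMY else None
```

The relaxation routine would apply these one step at a time, re-checking the ruled-out value after each weakening, so that the constraint is loosened only as far as necessary.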
Extending Vocabulary

Adaptation in JULIA does more than simply improve efficiency; it also permits some problems to be solved that would otherwise be impossible. It does this by opening up new search spaces in order to construct concepts that can serve as components of the overall solution. For example, consider the situation in Problem 1 again, in which a decision to serve lasagne contradicts a constraint that the meal be vegetarian. The given search space for this problem is the set of dishes that JULIA knows about. If JULIA didn't know of a vegetarian version of lasagne, then exhaustively searching for a dish wouldn't help.

The adaptation process can solve this problem by reasoning about which feature of lasagne violates the constraint and recursively adapting that feature. In this example, lasagne could be made vegetarian in two ways: either by substituting the meat with vegetarian burger (a functional substitution), or by applying the DELETE transformation to the set of secondary ingredients and eliminating the meat. When a feature is substituted, a new concept is created as a variant of the original. In this way, the problem-solving vocabulary is extended as a by-product of adaptation.

A critical step in this process is determining the legality and plausibility of proposed adaptations. For example, it would make no sense to delete noodles from lasagne, or to make a low calorie version of a dish just by changing the value of its calories attribute. JULIA restricts adaptations in two main ways. First, its representation of objects distinguishes between secondary features that are easy to change and primary features that are critical or definitional in some way. For example, secondary-ingredients can be deleted, while main-ingredients, such as the noodles in lasagne, can only be substituted, not deleted. This is one way of representing a partial domain theory.
In other words, some kind of lasagne noodle is necessary but not sufficient for a dish to be lasagne.

The second way adaptations are restricted is by using heuristics to infer which features correspond to independent variables and what their ranges of variability are. Instead of directly altering dependent features such as the calories of a dish, it analyzes constraints internal to the concept and regresses back to independent variables (e.g., the ingredients of the dish) and adapts them. Reasoning of this sort is essential if a problem solver is to create new concepts without the benefit of a strong domain theory.

It is this ability to construct new concepts that allows JULIA to overcome the rigid classifications of selection, configuration and parametric design. If the vocabulary were constant, then JULIA could be viewed as selecting dishes from a known set. However, the ability to adapt components and their structural configuration enables JULIA to do constructive design, but only in the highly circumscribed space of adaptation transformations.

Evaluation

One claim of this research is that adaptation can reduce the amount of work needed to solve problems. To establish this, we measured the number of constraints that JULIA checked as it solved three of the problems described above. We then ran each problem two more times, once with the adaptation capability turned off for contradiction resolution (Modification Only), and once with no adaptation at all. The results of this study are shown in Table 1.

Table 1: Number of Constraints Checked

Although these numbers are not especially informative by themselves, the intent of this experiment is to study the relative costs and benefits of adaptation. To this end, the example problems presented were chosen to illustrate three different effects. For Problem 1, the problem-solving performance degrades monotonically as adaptation is turned off.
This is because the contradiction introduced into the problem requires that lasagne be retracted rather than adapted. When the program cannot recall another case containing an acceptable solution, it is reduced to linearly searching all dishes it knows about.

Problem 2 cannot be solved at all without adaptation because there is no single dish in JULIA's knowledge base that satisfies the constraints. Problems that require structural modification cannot be solved by simply searching through categories.

While the results from Problems 1 and 2 were as expected, Problem 4 was a surprise. Since its solution did not rely on adaptation, we expected no effect on efficiency. As it turns out, however, there can be a hidden cost of adaptation. The process of attempting to adapt a value entails looking for substitutions and checking constraints on those values. If no satisfactory solution is found in this way, JULIA proceeds on to the next 'best' reminding and checks components from that case. What happened in Problem 4 was that JULIA did not pick the most appropriate case right off the bat, and therefore spent some time chasing down blind alleys attempting to adapt dishes that could never suffice. Problem 4 may simply be a pathological case. For the vast majority of cases we have run, adaptation appears to be beneficial.

However, adaptation clearly involves a tradeoff between flexibility and efficiency. It opens up search spaces that can lead to otherwise inaccessible solutions, but it may also explore dead-ends. Because of this, models of problem solving that rely on relatively expensive deep reasoning for adaptation should take into account the accuracy of the indexing mechanism and the density of the solution space. For problem solvers that index cases accurately and for problems that have sparse solution spaces, it is worthwhile to expend a lot of effort in trying to adapt previous cases.
For problem solvers that index cases less accurately and for problems with dense solution spaces, it is often better to expend less effort adapting cases, since a better solution may be found in the next case considered.

In principle, the cost of a complex adaptation process could be amortized if the problem solver were to store and re-use its own reasoning steps. JULIA is limited in this regard because it only applies case-based reasoning to propose design solutions. However, if the cost of adaptation were more significant, then recursively applying case-based reasoning would be worth investigating.

JULIA draws heavily on ideas from other work in case-based reasoning, such as CHEF [Hammond, 1989]. From the perspective of adaptation, CHEF implements modification of cases independently from repair of failures. In JULIA, they are both applications of the same process of adaptation, and rely on the same set of transformations. Also, CHEF does not adapt previous decisions as part of its control strategy.

Using adaptation to augment dependency-directed backtracking is not new to JULIA, and has been addressed in PRIDE [Mittal & Araya, 1986], where it is referred to as modification advice, and DONTE [Tong, 1988], where it is called patching. We extend the technique to cover the adaptation of structure (i.e., configuration) and constraints. Also, both PRIDE and DONTE solve problems that have well-defined search spaces. Modification in PRIDE cannot design new components. In JULIA, adaptation can construct new components, and thus partially determines the class of problems that can be solved.

Like VEXED [Steinberg, 1987], JULIA is a design advisor. As advisory systems, the two systems differ in how they define the division of labor between the user and the program. VEXED assumes responsibility for evaluating completeness and correctness of designs and delegates control decisions to the user.
JULIA, on the other hand, makes its own control decisions, but leaves the user as the final arbiter of design adequacy.

Discussion and Conclusions

We have presented a model of design that employs adaptation in multiple roles. This model provides the capability to solve design problems that require the integration of processes for selection, configuration, parametric manipulation, and construction from scratch. By adapting similar known solutions, we trade a poorly defined design search space (meals in the case of JULIA) for the better-defined space of adaptation transformations. This technique is appropriate when solutions to similar problems are known, and when the criteria for judging similarity are well understood.

Another feature of our model is a control strategy that exploits adaptation as an alternative to retraction. While this idea has been explored in some previous work, we extend the idea in two ways: First, adaptation constructs new concepts as a side-effect, so that it permits the solution of problems that would otherwise not be possible. Second, the ability to adapt constraints unifies the ideas of backtracking and constraint relaxation. This technique is appropriate when decisions have many consequences and the solution space is dense.

References

D.C. Brown and B. Chandrasekaran. Expert systems for a class of mechanical design activity. In J.S. Gero, editor, Knowledge Engineering in Computer-Aided Design. Elsevier Science Publishers B.V., North Holland, 1985.

J. Doyle. A truth maintenance system. Artificial Intelligence, 12(3), 1979.

V. Goel and P. Pirolli. Design within information processing theory: The design problem space. AI Magazine, Vol. 10, No. 1, 1989.

K.J. Hammond. Case-based Planning: Viewing Planning as a Memory Task. Academic Press, New York, 1989.

T.R. Hinrichs. Towards an architecture for open world problem solving. In J.L.
Kolodner, editor, Proceedings of the 1988 DARPA Workshop on Case-Based Reasoning, pages 182-189, 1988.
T.R. Hinrichs. Strategies for adaptation and recovery in a design problem solver. In K. Hammond, editor, Proceedings of the 1989 DARPA Workshop on Case-Based Reasoning, pages 115-118, 1989.
T.R. Hinrichs. Problem Solving in Open Worlds: A Case Study in Design. PhD thesis, Georgia Institute of Technology, 1991. Forthcoming.
J.L. Kolodner. Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model. Lawrence Erlbaum and Associates, Hillsdale, NJ, 1984.
J.L. Kolodner, R.L. Simpson, and K. Sycara. A process model of case-based reasoning in problem solving. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 284-290, Los Angeles, 1985.
S. Mittal and A. Araya. A knowledge based framework for design. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 856-865, Philadelphia, PA, August 1986.
J. McDermott. R1: An expert in the computer systems domain. In Proceedings of the First National Conference on Artificial Intelligence, pages 269-271, August 1980.
R.C. Schank. Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge University Press, London, 1982.
R.M. Stallman and G.J. Sussman. Forward reasoning and dependency-directed backtracking in a system for computer aided circuit analysis. Artificial Intelligence, 9:135-196, 1977.
M.J. Stefik. Planning with constraints. Artificial Intelligence, 16(2):111-140, 1981.
M.J. Stefik. Planning and meta-planning. Artificial Intelligence, 16(2):141-170, 1981.
L.I. Steinberg. Design as refinement plus constraint propagation: The VEXED experience. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 830-835, 1987.
C.H. Tong. Knowledge-Based Circuit Design. PhD thesis, Stanford University, 1988. (Rutgers University Report No. LCSR-TR-108).
A.C. Ward and W.P. Seering.
Quantitative inference in a mechanical design "compiler". Memo AIM-1062, MIT AI Lab, January 1989.
HINRICHS & KOLODNER 33
Integrating Metric and Qualitative Temporal Reasoning
Henry A. Kautz
AT&T Bell Laboratories
Murray Hill, NJ 07974
kautz@research.att.com
Abstract
Research in Artificial Intelligence on constraint-based representations for temporal reasoning has largely concentrated on two kinds of formalisms: systems of simple linear inequalities to encode metric relations between time points, and systems of binary constraints in Allen's temporal calculus to encode qualitative relations between time intervals. Each formalism has certain advantages. Linear inequalities can represent dates, durations, and other quantitative information; Allen's qualitative calculus can express relations between time intervals, such as disjointedness, that are useful for constraint-based approaches to planning. In this paper we demonstrate how metric and Allen-style constraint networks can be integrated in a constraint-based reasoning system. The highlights of the work include a simple but powerful logical language for expressing both quantitative and qualitative information; translation algorithms between the metric and Allen sublanguages that entail minimal loss of information; and a constraint-propagation procedure for problems expressed in a combination of metric and Allen constraints.
Introduction
Research in Artificial Intelligence on constraint-based representations for temporal reasoning has largely concentrated on two kinds of formalisms: systems of simple linear inequalities [Malik and Binford, 1983, Valdes-Perez, 1986, Dechter et al., 1989] to encode metric relations between time points, and systems of binary constraints in Allen's temporal calculus [Allen, 1983, Vilain et al., 1989, Ladkin and Maddux, 1987, van Beek and Cohen, 1989] to encode qualitative relations between time intervals. Each formalism has certain advantages.
Linear inequalities can represent dates, durations, and other quantitative information that appears in real-world planning and scheduling problems. Allen's qualitative calculus can express certain crucial relations between time intervals, such as disjointedness, that cannot be expressed by any collection of simple linear inequalities (without specifying which interval is before the other).
Peter B. Ladkin
International Computer Science Institute
Berkeley, CA 94704
ladkin@icsi.berkeley.edu
Such disjointedness constraints form the basis for constraint-based approaches to planning [Allen, 1991]. In this paper we demonstrate how metric and qualitative knowledge can be integrated in a constraint-based reasoning system. One approach to this problem (as used, for example, in the "time map" system of Dean and McDermott [87]) is to directly attach rules that enforce disjointedness constraints to a network of linear inequalities. One limitation of such an approach is that some natural qualitative inferences are not performed: for example, the facts that interval i is during j and j is disjoint from k are not combined to reach the conclusion that i is disjoint from k. Another disadvantage is that it is often more convenient for the user to enter assertions in a qualitative language, even if they can be represented numerically. Instead of trying to augment a single reasoning system, we will take an approach briefly suggested by Dechter, Meiri, and Pearl [89] (henceforth "DMP"), and combine a metric reasoning system with a full Allen-style constraint network. The contributions of our research include the following: 1. A simple but powerful logical language L for expressing both quantitative and qualitative information. The language subsumes both networks of two-variable difference inequalities (called L_M) and networks of binary Allen constraints (called L_A), but is much more powerful than either. The axioms of Allen's temporal calculus are theorems of L. 2.
An extension of DMP's algorithms for networks of non-strict inequalities to handle both the strict and the non-strict inequalities that appear in L_M. (Note: a forthcoming paper by Dechter, Meiri, and Pearl [1991] also provides such an extension.) 3. Optimal translations between L_M and L_A. As we noted, the two formalisms have orthogonal expressive power, so an exact translation is impossible; we say that a translation is optimal when it entails a minimal loss of information. Formally, f : L_1 → L_2 is optimal iff for any α ∈ L_1 and β ∈ L_2, α ⊨ β iff f(α) ⊨ β, where ⊨ is the entailment relation over the union of the two languages.
KAUTZ & LADKIN 241
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
4. A constraint-propagation procedure for the combined constraint language L_M ∪ L_A, which is based on the translation algorithms. The user of the system is able to enter information in terms of point difference inequalities or qualitative interval constraints, whichever is necessary or most convenient. The system we describe in this paper is fully implemented in Common Lisp, and is available from the first author.
A Universal Temporal Language
Consider the following model of time: time is linear, and time points can be identified with the rationals under the usual ordering <. The difference of any two time points is likewise a rational number. An interval is a pair of points (n, m), where n < m. Two intervals stand in a particular qualitative relationship such as "overlaps" just when their endpoints stand in a particular configuration - in this case, when the starting point of the first falls before the starting point of the second, and the final point of the first falls between the two points of the second. The following language L lets us say everything we'd like to about this model. It is typed predicate calculus with equality and the following types and symbols: types are Rational, Interval, and Infinite.
x, y, ... are Rational variables, and i, j, ... are Interval variables. Functions are L, R : Interval → Rational (intuitively, iL is the starting (left) endpoint of i, and iR is the final (right) endpoint); - (subtraction) : Rational × Rational → Rational; functions to construct rational numerals; and ∞, a constant of type Infinite. Predicates are <, ≤ : Rational × (Rational ∪ Infinite), and the Allen predicates over Interval × Interval: P(recedes), M(eets), O(verlaps), S(tarts), D(uring), F(inishes), =, and the inverses P-, M-, O-, S-, D-, F-. The language does not include constants to name specific intervals; instead, we use unbound variables to name intervals, with the understanding that any particular model provides an interpretation for free variables. It is useful to distinguish two special syntactic forms. Formulas of the form i(r1)j ∨ ... ∨ i(rn)j, where i and j are intervals and the rk are Allen predicates, are called simple Allen constraints, and are abbreviated as i(r1 + ... + rn)j. The sublanguage of such formulas is called L_A. A conjunction of two difference inequalities of the following form: (iF - jG ≤ n) ∧ (jG - iF ≤ m), where F, G ∈ {L, R}, m and n are numerals or (-)∞, and either or both of the inequality relations may be replaced by strict inequality (<), is called a simple metric constraint. Such a constraint bounds a difference from above and below, and thus may be abbreviated -m ≤ (iF - jG) ≤ n. The sublanguage of simple metric constraints is called L_M.
242 TEMPORAL CONSTRAINTS
Note that L is much richer than the union of L_M and L_A. For example, the formulas in Table 1 are part of L, but appear in neither L_A nor L_M. The following axioms capture the intended model of time:
- Arithmetic axioms for - (subtraction), <, ≤, and numerals. These include ∀x . x < ∞.
- ∀i . iL < iR
- Meaning postulates for each Allen predicate. The axioms for the non-inverted predicates appear in Table 1.
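The endpoint encoding behind these meaning postulates can be illustrated with a small executable sketch (our illustration, not code from the paper; the relation names follow the predicate list above):

```python
# Each primitive Allen relation is a configuration of endpoint comparisons.
def allen_relation(iL, iR, jL, jR):
    """The unique primitive Allen relation between intervals (iL,iR), (jL,jR)."""
    assert iL < iR and jL < jR, "endpoints must satisfy iL < iR"
    if iR < jL:
        return 'P'   # precedes
    if iR == jL:
        return 'M'   # meets
    if iL < jL < iR < jR:
        return 'O'   # overlaps
    if iL == jL and iR < jR:
        return 'S'   # starts
    if jL < iL and iR < jR:
        return 'D'   # during
    if jL < iL and iR == jR:
        return 'F'   # finishes
    if iL == jL and iR == jR:
        return '='
    # Otherwise an inverse relation holds; swap the arguments and mark it.
    return allen_relation(jL, jR, iL, iR) + '-'

print(allen_relation(0, 2, 1, 5))  # 'O': i overlaps j
```

Exactly one branch fires for any pair of proper intervals, mirroring the fact that the thirteen relations are mutually exclusive and exhaustive.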
We write C ⊨_L D to mean that D holds in all models of C that satisfy these axioms. The original presentation of the Allen calculus described the predicates by a set of transitivity axioms such as ∀i, j, k . i(M)j ∧ j(D)k ⊃ i(D + S + O)k. All of these formulas are theorems of L, rather than axioms [Kautz and Ladkin, 1991]. Since L is just first-order logic, we could solve problems that involve both metric and Allen assertions by employing a complete and general inference method, such as resolution. This is almost certain to be impractically slow. On the other hand, it appears that we do not need the full power of L to express many interesting temporal reasoning problems. The sublanguage L_M can express constraints on the duration of an interval (e.g., 2 ≤ (iR - iL) ≤ 5); on the elapsed time between intervals (e.g., 4 ≤ (iR - jL) ≤ 6); and between an interval and an absolute date, which we handle by introducing a "dummy" interval which is taken to begin at time number 0 (e.g., 14 ≤ (iL - day0L) ≤ 14). But L_M by itself is not adequate for many problems. For example, in the sublanguage L_A one can assert that intervals i and j are disjoint by the formula i(P + M + M- + P-)j, but there is no equivalent formula in L_M. Such a disjointedness constraint is useful in planning; for example, if i is a time during which a robot holds a block, and j is a time during which the robot's hand is empty, a planning system might want to make the assertion that i(P + M + M- + P-)j. Another useful expression in L_A is i(S + F)j, which means that interval i starts or finishes j; for example, in scheduling a conference, you might want to assert that a certain talk begins or ends the conference. So L_M ∪ L_A appears to be a good candidate for a practical temporal language. In order to develop an
Definition: Minimal Network Representation
Suppose G is a consistent network of binary constraints in some language.
Then a binary constraint network G' in that language is a minimal network representation of G iff the following all hold: 1. G' is logically equivalent to G. 2. For every pair of variables in G there is a constraint containing those variables in G'. 3. For any model M of a single constraint in G', there is a model M' for all of G' which agrees with M on the interpretation of the variables that appear in that constraint.
∀i,j . i(=)j ≡ iL - jL ≤ 0 ∧ jL - iL ≤ 0 ∧ iR - jR ≤ 0 ∧ jR - iR ≤ 0
∀i,j . i(P)j ≡ iR - jL < 0
∀i,j . i(M)j ≡ iR - jL ≤ 0 ∧ jL - iR ≤ 0
∀i,j . i(O)j ≡ iL - jL < 0 ∧ jL - iR < 0 ∧ iR - jR < 0
∀i,j . i(S)j ≡ iL - jL ≤ 0 ∧ jL - iL ≤ 0 ∧ iR - jR < 0
∀i,j . i(D)j ≡ jL - iL < 0 ∧ iR - jR < 0
∀i,j . i(F)j ≡ jL - iL < 0 ∧ iR - jR ≤ 0 ∧ jR - iR ≤ 0
Table 1: Meaning postulates for Allen predicates.
inference procedure for this language, let us examine the inference procedures that are known for L_M and L_A individually.
Constraint Networks
L_M and L_A can each express certain binary constraint satisfaction problems (CSP) [Montanari, 74]. A binary CSP is simply a set (also called a network) of quantifier-free assertions in some language, each containing two variables. One possible task is to find a particular assignment of values to the variables that simultaneously satisfies all the constraints in the network: that is, to find a model of the network. (Henceforth in this paper we will always talk in terms of models rather than variable assignments.) Another important task is to compute the minimal network representation of the problem, which is defined as follows: Hence from the minimal network representation one can "read off" the possible values that can be assigned to any variable. L_M is very similar to what DMP called simple temporal constraint satisfaction problems (STCSP). They considered sets (or networks) of formulas of the form m ≤ (x - y) ≤ n, where x and y are variables and n and m are numerals.
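A simple metric constraint of this kind — a difference bounded above and below, with each bound either strict or non-strict — can be sketched as a small data structure (an illustration of ours, not the paper's implementation):

```python
from dataclasses import dataclass

INF = float('inf')

@dataclass(frozen=True)
class MetricConstraint:
    """A simple metric constraint:  lo (<|<=)  iF - jG  (<|<=)  hi."""
    lo: float            # lower bound -m (may be -INF)
    lo_strict: bool      # True if the lower bound is strict (<)
    hi: float            # upper bound n (may be INF)
    hi_strict: bool      # True if the upper bound is strict (<)

    def satisfied(self, diff):
        """Check a concrete endpoint difference against both bounds."""
        ok_lo = self.lo < diff if self.lo_strict else self.lo <= diff
        ok_hi = diff < self.hi if self.hi_strict else diff <= self.hi
        return ok_lo and ok_hi

# e.g. a duration constraint 2 <= (iR - iL) <= 5:
dur = MetricConstraint(lo=2, lo_strict=False, hi=5, hi_strict=False)
print(dur.satisfied(3.5), dur.satisfied(6))  # True False
```

Representing strictness explicitly matters below, where the shortest-path algorithm must keep strict and non-strict bounds apart.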
Their representation differs from L_M in that (1) they use simple variables like x for time points, where L_M uses terms like iL and iR. This difference is not significant, because the interpretation of an interval i is simply the pair consisting of the interpretations of iL and iR. So we can treat iL and iR as "variables" in the CSP formulation. (2) Formulas in L_M include strict (<) as well as non-strict (≤) inequalities. DMP proved that an all-pairs shortest-path algorithm [Aho et al., 1976, page 198] can compute the minimal network representation of a STCSP. One can modify the algorithm to handle the two kinds of inequalities as follows. We represent a formula M from L_M by a graph, where the nodes are the terms that appear in M (that is, iL and iR for each interval variable i), and the directed arc from iF to jG is labeled with the pair (n, 1) if iF - jG ≤ n is one conjunct of a constraint in M, and labeled (n, 0) if iF - jG < n is one conjunct of a constraint in M. Next we add the constraints from L that state that the left point of an interval is before its right point; that is, we add an arc iL(0, 0)iR for each i. Finally we compute the shortest distance between all nodes in the graph using the following definitions for comparison and addition: (m, x) < (n, y) ≡ m < n ∨ (m = n ∧ x < y), and (m, x) + (n, y) = (m + n, min(x, y)). In the resulting graph D an arc appears between every pair of nodes in the graph, and the inequalities corresponding to the arcs are the strongest such inequalities implied by M. Thus the minimal network representation of M is the set of formulas
{-m < iF - jG < n | jG(m, 0)iF, iF(n, 0)jG ∈ D} ∪
{-m ≤ iF - jG ≤ n | jG(m, 1)iF, iF(n, 1)jG ∈ D} ∪
{-m < iF - jG ≤ n | jG(m, 0)iF, iF(n, 1)jG ∈ D} ∪
{-m ≤ iF - jG < n | jG(m, 1)iF, iF(n, 0)jG ∈ D}.
This procedure takes O(n³) time. Binary CSPs based on the qualitative language L_A have been studied extensively [Allen, 1983, Ladkin and Maddux, 1987, Vilain et al., 1989, van Beek and Cohen, 1989].
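The modified shortest-path step, with arc weights as (bound, strictness) pairs — 1 for non-strict (≤), 0 for strict (<) — can be sketched as follows (a minimal illustration assuming a dict-based graph; the names are ours):

```python
import itertools

def wlt(a, b):
    # (m, x) < (n, y) iff m < n, or m = n and x < y (a strict bound is tighter)
    return a[0] < b[0] or (a[0] == b[0] and a[1] < b[1])

def wadd(a, b):
    # (m, x) + (n, y) = (m + n, min(x, y)): any strict step makes the sum strict
    return (a[0] + b[0], min(a[1], b[1]))

def all_pairs_shortest(nodes, dist):
    """Tighten dist, a dict (u, v) -> (bound, strictness), Floyd-Warshall style."""
    for k, i, j in itertools.product(nodes, repeat=3):  # k varies slowest
        if (i, k) in dist and (k, j) in dist:
            cand = wadd(dist[(i, k)], dist[(k, j)])
            if (i, j) not in dist or wlt(cand, dist[(i, j)]):
                dist[(i, j)] = cand
    return dist

# iL - iR < 0 together with iR - jL <= 3 entails iL - jL < 3:
d = all_pairs_shortest(['iL', 'iR', 'jL'],
                       {('iL', 'iR'): (0, 0), ('iR', 'jL'): (3, 1)})
print(d[('iL', 'jL')])  # (3, 0)
```

The pair arithmetic is exactly the comparison and addition defined in the text; the derived arc (3, 0) reads as the strict bound iL - jL < 3.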
Computing the minimal network representation of a set of such constraints is NP-Hard. In practice, however, one can approximate the minimal network by a weaker notion, called n-consistency. While we do not have space here to discuss the details of n-consistency, we note that the original presentation of L_A by Allen [83] included an algorithm that computes "3-consistency" in O(n³) time, and van Beek [89] studied the improvements to the approximation likely to be found by computing higher degrees of consistency.
combined-metric-Allen(M, A) =
input: simple metric network M and simple Allen network A
output: networks M', A' implied by M ∪ A
repeat
    A' := metric-to-Allen(M) ∪ A
    M' := Allen-to-metric(A') ∪ M
    M := M'; A := A'
until A = A' and M = M'
return M', A'
end combined-metric-Allen
Figure 1: Inference procedure for L_M ∪ L_A.
For any fixed n, n-consistency can be computed in polynomial time. Thus we have an efficient and complete algorithm for inference in L_M, and a number of efficient approximation algorithms for L_A. Figure 1 presents a constraint satisfaction algorithm for the union of the two languages. The method is to separately compute the minimal network representation of the metric and Allen constraints; derive new Allen constraints from the metric network and add these to the Allen network; derive new metric constraints from the Allen network and add these to the metric network; and repeat this process until no new statements can be derived. The system answers any query in L_M ∪ L_A by examining the appropriate network. The procedure is clearly correct; but now we must see how to translate L_M to L_A and vice-versa.
Translating and Combining Metric and Allen Constraints
This section presents the optimal translations between the metric and Allen constraint languages, and a complexity analysis of the combined inference algorithm. We begin with the translation from L_M to L_A.
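The fixed-point structure of Figure 1 can be sketched abstractly as follows (our illustration, not the paper's Common Lisp code; the two translators are left as parameters, and constraints are modeled as opaque set elements):

```python
# Structural sketch of the Figure 1 loop: translate back and forth between
# the metric and Allen networks until neither side gains a constraint.
def combined_metric_allen(M, A, metric_to_allen, allen_to_metric):
    """M, A are sets of constraints; the translators map one set to new
    constraints for the other side."""
    M, A = set(M), set(A)
    while True:
        A2 = A | metric_to_allen(M)   # A' := metric-to-Allen(M) U A
        M2 = M | allen_to_metric(A2)  # M' := Allen-to-metric(A') U M
        if A2 == A and M2 == M:       # fixed point: nothing new was derived
            return M2, A2
        M, A = M2, A2

# With translators that derive nothing new, the loop returns its inputs:
M, A = combined_metric_allen({'m1'}, {'a1'},
                             lambda m: set(), lambda a: set())
print(M, A)
```

Termination follows from the argument given later in the paper: each iteration must strengthen at least one Allen constraint, and each constraint can only be strengthened a bounded number of times.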
At first impression, one might think that it is sufficient to convert each metric constraint to the Allen constraint it implies. For example, from the meaning postulates one can deduce that iL - jL < 0 ⊃ i(P + M + O + F- + D-)j. So, if the metric network M contains -9 ≤ (iL - jL) ≤ -3 (which implies the antecedent of the formula), the translation includes i(P + M + O + F- + D-)j. This approach is correct, but fails to capture all implications in L_M. For example, suppose M is the following network: 3 ≤ (iR - iL) < ∞ and -∞ < (jR - jL) ≤ 2. The minimal network representation of M has only trivial constraints between i and j (such as -∞ < (iL - jR) < ∞), so the approach just outlined fails to infer that i cannot be during j, because i has longer duration than j.
metric-to-Allen(M) =
input: a simple metric constraint network M.
output: the strongest set of simple Allen constraints implied by M.
let M' be the minimal network representation of M
if M' is inconsistent then return any inconsistent Allen network
A_M := ∅
for each pair of intervals i, j do
    let S be the {iL, iR, jL, jR} subnet of M'
    R := ∅
    for each primitive Allen relation r do
        S' := S ∪ { m | m is a difference inequality in the meaning postulate for i(r)j }
        if S' is consistent then R := R ∪ {r}
    end do
    A_M := A_M ∪ {i(R)j}
end do
return A_M
end metric-to-Allen
Figure 2: Converting simple metric constraints to simple Allen constraints.
Therefore an optimal translation must consider several metric constraints at a time; but how many? One might imagine that the problem required an exponential procedure that checked consistency of every possible Allen constraint between two intervals with all of M. Fortunately, this is not necessary: we can compute the strongest set of implied Allen constraints by considering constraints between just four points (that is, two intervals) at a time. The algorithm metric-to-Allen appears in Figure 2, and the following theorem formally states that it is optimal.
Theorem 1 The algorithm metric-to-Allen is sound and entails minimal loss of information: for any M ∈ L_M and A ∈ L_A, it is the case that M ⊨_L A iff metric-to-Allen(M) ⊨_L A. The algorithm runs in O(n³) time, where n is the number of intervals.
Proof: By theorem 2 of [Dechter et al., 1989], any consistent and minimal simple metric network is decomposable. This means that any assignment of values to a set of terms that satisfies the subnet containing those terms can be extended to a satisfying assignment for the entire net. Another way of saying this is that if such a subnet has a model, then the net has a model that agrees with the subnet's model on the interpretation of the terms in the subnet. Note that if two models agree on the interpretations of the terms iL, iR, jL, jR, then they assign the same truth value to the expression i(r)j, where r is any primitive Allen relation. From the construction of S' it is therefore the case that S' is consistent iff S' has a model iff S has a model in which i(r)j holds iff M' has a model in which i(r)j holds. Since M and M' are logically equivalent, we see that for any pair of intervals i and j, i(R)j ∈ metric-to-Allen(M) iff, for every r ∈ R and for no r ∉ R, M has some model in which i(r)j holds. To show that the algorithm is sound, suppose that i(R)j ∈ metric-to-Allen(M). If this clause were not implied by M, then there would be some model of M in which i and j stand in an Allen relation not in R. But that is impossible, as stated above. So M ⊨_L metric-to-Allen(M), and metric-to-Allen(M) ⊨_L A implies M ⊨_L A. To show that the algorithm entails minimal loss of information, suppose that M ⊨_L A. Because A is a conjunction of statements of the form i(R)j, we can assume without loss of generality that it is a single such statement. From the operation of the algorithm we see that there is some R' such that i(R')j ∈ metric-to-Allen(M). We claim that R' ⊆ R.
Suppose not; then there would be an r ∈ R' such that r ∉ R. But the former means that there is a model of M in which i(r)j holds, and the latter means that there is no such model, since in any particular model only a single Allen relation holds between a pair of intervals. So since R' ⊆ R means that i(R')j implies i(R)j, it follows that metric-to-Allen(M) ⊨_L i(R)j. Computing the minimal metric network takes O(n³) time, and the outer loop iterates O(n²) times with constant time for all operations in the loop. Therefore the overall complexity is O(n³). □
Next we consider the translation from L_A to L_M. It is not sufficient to simply replace each Allen predicate with its definition according to the meaning postulates, because the resulting formula is not necessarily in L_M. Indeed, we can show that the problem is inherently intractable:
Theorem 2 Computing the strongest set of simple metric constraints equivalent to a set of simple Allen constraints is NP-Hard.
Proof: Checking the consistency of a set of formulas in L_A is NP-Complete, but checking consistency of formulas in L_M is polynomial. Since the best translation must preserve consistency, the translation itself must be NP-Hard. □
Suppose, however, we wish to compute the minimal network representation of a set of simple Allen constraints for other reasons. We can then quickly compute the strongest set of simple metric constraints implied by that network, by computing the metric constraints one Allen constraint at a time. Figure 3 presents the algorithm Allen-to-metric that performs this calculation; the following theorem states that this algorithm is optimal.
Theorem 3 The algorithm Allen-to-metric is sound and entails minimal loss of information: for any A ∈ L_A and M ∈ L_M, it is the case that A ⊨_L M iff Allen-to-metric(A) ⊨_L M. The algorithm runs in O(e + n²) time, where e is the time needed to compute the minimal network representation of the input, and n is the number of intervals.
Proof: At the end of the inner loop, it is clear that m ∈ S iff m is a difference inequality implied by i(r)j for each r ∈ R; that is, m ∈ S iff m is implied by i(R)j. Since A ⊨_L i(R)j for each such i(R)j that appears in A', it follows that A ⊨_L Allen-to-metric(A). Therefore the algorithm is sound: if Allen-to-metric(A) ⊨_L M, then A ⊨_L M.
Allen-to-metric(A) =
input: a simple Allen constraint network A
output: the strongest set of simple metric constraints implied by A.
let A' be the minimal network representation of A
if A' is inconsistent then return any inconsistent metric network
M_A := ∅
for each pair of intervals i, j do
    let R be the (complex) Allen relation such that i(R)j appears in A'
    S := { m | m is of the form x - y < 0 or x - y ≤ 0 and x, y ∈ {iL, iR, jL, jR} }
    for each primitive Allen relation r in R do
        S := S ∩ { m | m is a difference inequality implied by i(r)j }
    end do
    M_A := M_A ∪ S
end do
return {-∞ < (iF - jG) < n | iF - jG < n ∈ M_A} ∪ {-∞ < (iF - jG) ≤ n | iF - jG ≤ n ∈ M_A}
end Allen-to-metric
Figure 3: Converting simple Allen constraints to simple metric constraints.
To show that the algorithm entails minimal loss of information, suppose that A ⊨_L M. Because M is a conjunction of simple metric constraints, without loss of generality we can assume it is a single such constraint. Furthermore, because each constraint is a conjunction of two difference inequalities, without loss of generality we can take M to be a single difference inequality: x - y < n or x - y ≤ n, where x, y ∈ {iL, iR, jL, jR}. If n = ∞, then the inequality trivially holds. Otherwise, because A is equivalent (using the meaning postulates) to a boolean combination of difference inequalities containing only the number 0, it is plain that n cannot be negative; and furthermore, if n is positive, A must also imply the inequality x - y ≤ 0. So without loss of generality we can also assume that M is of the form x - y < 0 or x - y ≤ 0.
At the start of the loop in which the algorithm selects the pair of intervals (i, j), the variable S contains M, and we claim that S must still contain M at the conclusion of the inner loop. Suppose not; then there is some r ∈ R such that i(r)j has a model ℳ in which M does not hold. But because i(R)j ∈ A' and A' is the minimal network representation of A, it must be the case that A has a model that agrees with ℳ on the interpretations of i and j. Therefore A has a model that falsifies M, so A cannot imply M after all. Since S implies M and S ⊆ M_A, it is clear that the set of simple metric constraints constructed from M_A in the last step of the algorithm also implies M. The complexity O(e + n²) follows immediately from the iteration of the outer loop; everything inside takes constant time. □
In order to simplify the presentation of the Allen-to-metric algorithm, we have described it so that while it returns the strongest metric network implied by the Allen network, it does not actually return the minimal network representation of that metric network. It is easy to modify the algorithm (without additional computational overhead) so that it does return the minimal network representation; see Kautz and Ladkin [1991] for details. Finally we turn to an analysis of the algorithm combined-metric-Allen. What is its computational complexity? The answer depends on how many times the algorithm iterates between the two networks. Because each iteration must strengthen at least one simple Allen constraint (which can only be done 13 times per constraint), in the worst case the number is linear in the maximum size of the Allen network (or O(n²) in the number of intervals). In fact, this is its lower bound, as well: we have discovered a class of temporal reasoning problems that shows that the maximum number of iterations does indeed grow with the size of the constraint set [Kautz and Ladkin, 1991].
Theorem 4 The algorithm combined-metric-Allen is sound: M ∪ A ⊨_L combined-metric-Allen(M, A). The algorithm terminates in O(n²(e + n³)) time, where n is the number of intervals that appear in M ∪ A, and e is the time required to compute the minimum network representation of A.
So far in our experience with the implemented system the algorithm tends to converge quickly. In fact, if the Allen network is pointisable [Ladkin and Maddux, 1988], we can prove that the algorithm iterates no more than two times [Kautz and Ladkin, 1991]. The question of whether combined-metric-Allen is a complete inference procedure for the language L_M ∪ L_A remains open. We are currently investigating whether the algorithm detects all inconsistent networks, and whether it always computes the minimal network representation in L_M ∪ L_A.
Conclusions
The framework presented in this paper unifies the great body of research in AI on metric and qualitative temporal reasoning. We demonstrated that both Dechter, Meiri, and Pearl's simple temporal constraint satisfaction problems and Allen's temporal calculus can be viewed as sublanguages of a simple yet powerful temporal logic. We provided algorithms that translate between the languages with a minimal loss of information. Along the way we generalized known techniques for dealing with non-strict linear inequalities to handle strict inequalities as well. Finally, we showed how the translations can be used to combine two well-understood constraint-satisfaction procedures into one for the union of the two languages.
References
[Aho et al., 1976] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley Publishing Co., Reading, MA, 1976.
[Allen, 1983] James F. Allen. Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11):832-843, November 1983.
[Allen, 1991] James Allen. Planning as temporal reasoning.
In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR-91), Cambridge, MA, 1991.
[Dean and McDermott, 87] T.L. Dean and D.V. McDermott. Temporal data base management. Artificial Intelligence, 32:1-55, 1987.
[Dechter et al., 1989] Rina Dechter, Itay Meiri, and Judea Pearl. Temporal constraint networks. In Ronald J. Brachman, Hector J. Levesque, and Raymond Reiter, editors, Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning (KR '89), page 83, San Mateo, CA, May 1989. Morgan Kaufmann Publishers, Inc.
[Dechter et al., 1991] Rina Dechter, Itay Meiri, and Judea Pearl. Temporal constraint networks. Artificial Intelligence, (to appear), 1991.
[Kautz and Ladkin, 1991] Henry Kautz and Peter Ladkin. Communicating temporal constraint networks. (In preparation), 1991.
[Ladkin and Maddux, 1987] Peter Ladkin and Roger Maddux. The algebra of convex time intervals, 1987.
[Ladkin and Maddux, 1988] Peter B. Ladkin and Roger D. Maddux. On binary constraint networks. Technical Report KES.UNIVERSITY.88.8, Kestrel Institute, Palo Alto, CA, 1988.
[Malik and Binford, 1983] J. Malik and T.O. Binford. Reasoning in time and space. In Proceedings of 8th IJCAI 1983, pages 343-345. IJCAI, 1983.
[Montanari, 74] U. Montanari. Networks of constraints: Fundamental properties and applications to picture processing. Information Sciences, 7:95-132, 1974.
[Valdes-Perez, 1986] Raul E. Valdes-Perez. Spatio-temporal reasoning and linear inequalities. A.I. Memo No. 875, M.I.T. Artificial Intelligence Laboratory, Cambridge, MA, February 1986.
[van Beek and Cohen, 1989] Peter van Beek and Robin Cohen. Approximation algorithms for temporal reasoning. Research Report CS-89-12, University of Waterloo, Waterloo, Ontario, Canada, 1989.
[Vilain et al., 1989] Marc Vilain, Henry Kautz, and Peter van Beek. Constraint propagation algorithms for temporal reasoning: a revised report.
In Johan de Kleer and Dan Weld, editors, Readings in Qualitative Reasoning About Physical Systems. Morgan Kaufmann, Los Altos, CA, 1989.
246 TEMPORAL CONSTRAINTS
Temporal Reasoning During Plan Recognition
Fei Song and Robin Cohen
Dept. of Computer Science, Univ. of Waterloo
Waterloo, Ontario, Canada N2L 3G1
{fsong,rcohen}@watdragon.uwaterloo.ca
Abstract
This paper presents a strengthened algorithm for temporal reasoning during plan recognition, which improves on a straightforward application of Allen's reasoning algorithm. This is made possible by viewing plans as both hierarchical structures and temporal networks. As a result, we can show how to use as constraints the temporal relations explicitly given in input to improve the results of plan recognition. We also discuss how to combine the given constraints with those prestored in the system's plan library to make more specific the temporal constraints indicated in the plans being recognized.
Introduction
Plan recognition is the process of inferring an agent's plan based on the observation of the agent's actions. A recognized plan is useful in that it allows us to decide an agent's goal as well as predict the agent's next action. Suppose we observe that John has made the sauce and he is now boiling the noodles. Then, based on the plan in figure 1, we can decide that John's goal is to make a pasta dish and predict that his next action is to put noodles and sauce together.
[Figure 1: Hierarchical Structure of a Plan - MakePastaDish decomposes into MakeNoodles, MakeSauce, BoilNoodles, and PutTogether]
Plan recognition has found applications in many research areas such as story understanding ([Schank and Abelson, 1977], [Bruce, 1981], [Wilensky, 1983]), psychological modeling [Schmidt et al., 1978], natural language pragmatics ([Allen, 1983b], [Litman, 1985], [Carberry, 1986]), and intelligent interfaces ([Huff and Lesser, 1982], [Goodman and Litman, 1990]). Most plan recognition models assume a library of typical plans that can occur in a particular domain. Then, a search and matching mechanism is used to recognize all the plans that contain the observed actions, called candidate plans.
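As a toy illustration of such a plan library (the action names are taken from Figure 1; the prediction rule is a deliberately naive left-to-right heuristic of ours, not the paper's algorithm):

```python
# Hypothetical sketch of a one-entry plan library and a naive predictor.
PLAN_LIBRARY = {
    'MakePastaDish': ['MakeNoodles', 'MakeSauce', 'BoilNoodles', 'PutTogether'],
}

def predict_next(plan, observed):
    """Return the first substep of `plan` not yet observed, in listed order."""
    for step in PLAN_LIBRARY[plan]:
        if step not in observed:
            return step
    return None  # plan is complete

# John has made the noodles and the sauce and has boiled the noodles:
print(predict_next('MakePastaDish', {'MakeNoodles', 'MakeSauce', 'BoilNoodles'}))
```

A real recognizer would match observations against many candidate plans and use the temporal constraints discussed below, rather than a fixed substep order.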
One problem is that it is difficult to unambiguously decide the plan of an agent, since the observation of the agent's actions is often incomplete and some actions may appear in many different plans of the system's plan library. Kautz [1987] suggests that one way of reducing the set of candidate plans is to use the temporal relations explicitly given in the observations as constraints to eliminate the plans that are inconsistent with the given constraints.¹ However, Kautz only provided a simplified procedure for checking temporal constraints and did not elaborate on the types of temporal constraints that need to be represented in the system's plan library.

In this paper, we assume a model for plan recognition that is similar to Allen's [1983b], with the focus being on temporal reasoning, the process of checking the inconsistencies between the temporal constraints given in the input and those prestored in the plans of the system's plan library. Allen [1983a] proposed an algorithm that can be used to perform this task. However, Allen's algorithm [Allen, 1983a] can give weak results when applied to plan recognition, specifically for the case where some actions are defined in terms of their decomposed subactions. Our contribution is to provide a closing procedure, which makes specific the temporal constraints between an action and its decomposed subactions and works interactively with Allen's algorithm to obtain strengthened results. Moreover, we discuss briefly that in a natural language setting, the process of deriving the temporal constraints from the input through linguistic analysis, called temporal analysis, can benefit from temporal reasoning as well.

Two Different Views of Plans

A plan can be viewed as a hierarchical structure, organized by the decomposition of an action into its subactions.
In the cooking plan introduced earlier, MakePastaDish is an action that can be decomposed into subactions: MakeNoodles, MakeSauce, BoilNoodles, and PutTogether.

¹Other solutions include the use of preference heuristics ([Allen, 1983b], [Litman, 1985], [Carberry, 1986]) and probabilities [Goldman and Charniak, 1988].

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

A plan can also be viewed as a temporal network, indicating the temporal constraints that must hold between the intervals of all actions and states in the plan. We can use Allen's interval algebra [Allen, 1983a] to represent the temporal constraints between intervals. Given intervals X and Y, there can be thirteen basic relations between them, as shown in the following table.

    Primitives       Symbols    Inverses
    X Before Y       X b Y      Y bi X
    X Meets Y        X m Y      Y mi X
    X Overlaps Y     X o Y      Y oi X
    X Starts Y       X s Y      Y si X
    X During Y       X d Y      Y di X
    X Finishes Y     X f Y      Y fi X
    X Equal Y        X eq Y     Y eq X

Constraints that are less certain than basic relations are represented as the disjunction of a set of basic relations. For convenience, we define two high level constraints as follows: "Precedes" = {b, m, o} and "Includes" = {si, di, fi, eq}. Roughly, "Includes" describes the decomposition of an action into its subactions, since the interval of the action includes all the intervals of the subactions, while "Precedes" holds when one action "enables" another action, as the enabling action must be executed prior to the enabled action.² As a result, the temporal network for the cooking plan can be given in figure 2. Relations that are more specific than "Precedes" and "Includes" can be indicated as shown.

[Figure 2: Temporal Structure of a Plan (a network linking InKitchen, Boil, PutTogether, and the other actions by "Precedes" and "Includes" arcs).]

²A definition of enablement can be found in [Pollack, 1986]. For example, dialing a phone number enables the action of making a phone call.
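For illustration, the thirteen basic relations, their inverses, and the two high-level constraints defined above can be encoded directly as data. This is our own sketch of an encoding, not code from the paper:

```python
# Each basic relation maps to its inverse; the seven symbols from the
# table plus their six distinct inverses give the thirteen relations.
INVERSE = {
    "b": "bi", "m": "mi", "o": "oi", "s": "si", "d": "di", "f": "fi",
    "bi": "b", "mi": "m", "oi": "o", "si": "s", "di": "d", "fi": "f",
    "eq": "eq",
}

BASIC_RELATIONS = set(INVERSE)  # all 13 basic relations

# The two high-level constraints defined in the text.
PRECEDES = {"b", "m", "o"}
INCLUDES = {"si", "di", "fi", "eq"}

def inverse(constraint):
    """Invert a constraint (a set of basic relations) symbol by symbol."""
    return {INVERSE[r] for r in constraint}
```

For example, inverting "Precedes" gives {bi, mi, oi}, the constraint that holds from the enabled action back to the enabling one.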
A hierarchical structure provides a straightforward view showing all the actions in a plan, but it does not exhibit the states and temporal relations between actions. A temporal network, on the other hand, gives a detailed representation that allows all the actions, states, and their relationships to be clearly specified. However, since a temporal network treats all the intervals as the same, the temporal dependency between an action and its subactions is not explicitly indicated. We will argue that both views of a plan are important for doing temporal reasoning during plan recognition.

Algorithms for Temporal Reasoning

Weak Results of Allen's Algorithm

Allen [1983a] proposed a reasoning algorithm that propagates a new constraint to others and at the same time checks for inconsistencies. To reason about constraints, two operations of intersection and composition are defined. Intersection is just set intersection between two constraints. Composition of two constraints is the set of pair-wise multiplications between all the basic relations in the two constraints, i.e.,

    C1 o C2 = {a x b | a in C1 and b in C2},

where the result of a x b can be looked up in a predefined table [Allen, 1983a]. For example, given C1 = {b, m, o} and C2 = {m, o}, the composition C1 o C2 is {b, m, o}.

However, Allen's algorithm can provide weak results when applied to plans that contain decompositions. Consider an example of one decomposition shown in figure 3. Here, we use A to denote the interval of the action, and a1 and a2, the intervals of the two subactions. Suppose that in the initial specification of the plan library there is no constraint between a1 and a2. Then, all we can decide are: A {si, di, fi, eq} a1 and A {si, di, fi, eq} a2.

[Figure 3: An Example of One Decomposition (A with subactions a1 and a2).]

Now, assume that from the input we get a new constraint of {b} between a1 and a2.
Then, based on Allen's algorithm, we are able to propagate it to the other constraints and obtain the results shown in figure 4 (a). However, if we know that a1 and a2 are the only two subintervals of the interval A and that a1 {b} a2, then we should be able to decide A {si} a1 and A {fi} a2, i.e., a1 and a2 together comprise all of A. These desired results are shown in figure 4 (b).

[Figure 4 (a): Weak Results from Allen's Algorithm. Figure 4 (b): Strong Results Desired.]

These weak results can be carried further when we consider a network that consists of more than one decomposition. Suppose that at the beginning we have the network shown in figure 5, where "inc" stands for {si, di, fi, eq}.

[Figure 5: An Example of Two Decompositions.]

When the constraints a1 {b} a2 and a2 {b} a3 are given from the input, we can apply Allen's algorithm to obtain the results shown in figure 6 (a). Here, "com" stands for the constraint {o, oi, s, si, d, di, f, fi, eq}. However, using a similar argument as made in the previous example, we should get the stronger results shown in figure 6 (b).

[Figure 6 (a): Weak Results from Allen's Algorithm. Figure 6 (b): Strong Results Desired.]

There are also networks that are considered as consistent by Allen's algorithm but in fact are not when used to represent decompositions. For instance, if the constraint between A and a2 in figure 4 (b) is labeled as {di}, Allen's algorithm would decide this as consistent, but as we argued above, this is in fact not consistent with the desired results.

The reason that Allen's algorithm gives us weak results when applied in plan recognition is that it treats all the intervals of actions as independent of each other. In plans where actions are connected through decompositions, the intervals of abstract actions actually depend on the intervals of their decomposed subactions.
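The composition operation used in this propagation can be sketched as follows. The table here is our own fragment covering only the entries needed for the running example C1 o C2 = {b, m, o} o {m, o}; the full Allen table has 13 x 13 entries:

```python
# Fragment of Allen's composition table: entry (a, b) gives the possible
# relations between X and Z when X a Y and Y b Z hold.
COMPOSE = {
    ("b", "m"): {"b"}, ("b", "o"): {"b"},
    ("m", "m"): {"b"}, ("m", "o"): {"b"},
    ("o", "m"): {"b"}, ("o", "o"): {"b", "m", "o"},
}

def compose(c1, c2):
    """C1 o C2: union of the table entries over all pairs of basic relations."""
    result = set()
    for a in c1:
        for b in c2:
            result |= COMPOSE[(a, b)]
    return result
```

With these entries, compose({"b", "m", "o"}, {"m", "o"}) reproduces the text's example result {b, m, o}.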
To make these dependencies explicit in the reasoning process, we must acknowledge that decompositions of abstract actions into their subactions are complete; no more subactions are needed for or can be added to the decompositions. This will allow us to compute the boundary intervals of the abstract actions based on all the constraints between the subactions. For instance, if there is a linear ordering between all the subactions, we will be able to decide that the abstract action is temporally bounded by the subactions that occur the earliest and the latest. We say that a decomposition is closed if the interval of the abstract action is temporally bounded by the intervals of the subactions.

Closing Procedures for Decompositions

We can classify the 13 basic temporal relations introduced in section 2 into five classes: {b, m, o}, {bi, mi, oi}, {si, di, fi}, {s, d, f}, and {eq}. Then, we can divide a constraint into five subsets accordingly. Given the constraint {o, oi, s, si, d, di, f, fi, eq}, for example, the corresponding five subsets are: {o}, {oi}, {si, di, fi}, {s, d, f}, and {eq}. Let C be the constraint between the two subactions of a decomposition. Then, for each non-empty subset of C, we can provide a simple solution to close the decomposition, as shown in figure 7.

[Figure 7: Five Special Cases for Closing a Decomposition, one each for C1 a subset of {b, m, o}, C2 a subset of {bi, mi, oi}, C3 a subset of {si, di, fi}, C4 a subset of {s, d, f}, and C5 = {eq}.]

Case (a) indicates that A is bounded by a1 and a2, since for any subset of C1 = {b, m, o} the start part of a1 is clearly located before the end part of a2. Case (c) suggests that A is bounded by a1, since for any subset of C3 = {si, di, fi} we can decide that A {eq} a1, and from the composition of A {eq} a1 and a1 C3 a2 we can derive A C3 a2. Cases (b) and (d) are the reverse cases of (a) and (c), and case (e) is trivially justified.
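The division of a constraint into these five subsets can be sketched directly (our own encoding; constraints are sets of relation symbols):

```python
# The five classes of basic relations used by the closing procedure,
# in the order of cases (a) through (e).
CLASSES = [
    {"b", "m", "o"},     # case (a)
    {"bi", "mi", "oi"},  # case (b)
    {"si", "di", "fi"},  # case (c)
    {"s", "d", "f"},     # case (d)
    {"eq"},              # case (e)
]

def split_constraint(constraint):
    """Divide a constraint (set of basic relations) into its five
    (possibly empty) subsets, one per class."""
    return [constraint & cls for cls in CLASSES]
```

Splitting the example constraint {o, oi, s, si, d, di, f, fi, eq} yields the five subsets {o}, {oi}, {si, di, fi}, {s, d, f}, and {eq} given in the text.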
If only one subset of C is not empty, then the corresponding case above already provides a solution for closing the decomposition. However, a constraint generally has more than one non-empty subset. As a result, the solution of closing a decomposition should be the disjunction of all the cases that contain these non-empty subsets. We can illustrate the closing process by viewing a decomposition as the conjunction of all its constraints. For example, the formula for a decomposition of two subactions may be described as:

    A {si, di, fi, eq} a1 ∧ A {si, di, fi, eq} a2 ∧ a1 {b, bi} a2

By dividing {b, bi} into two subsets, {b} and {bi}, we can change it into an equivalent formula:

    (A {si, di, fi, eq} a1 ∧ A {si, di, fi, eq} a2 ∧ a1 {b} a2) ∨
    (A {si, di, fi, eq} a1 ∧ A {si, di, fi, eq} a2 ∧ a1 {bi} a2)

Now it becomes clear that each subformula above corresponds to a special case in figure 7. So we can close both subformulas and get the disjunction of two special cases:

    (A {si} a1 ∧ A {fi} a2 ∧ a1 {b} a2) ∨     /* from case (a) */
    (A {fi} a1 ∧ A {si} a2 ∧ a1 {bi} a2)      /* from case (b) */

Unfortunately, this result is too strong in that we have to divide the temporal network of a plan into two networks, each containing a special case. For a plan that consists of many decompositions, the total number of different networks could be much larger. In order to retain just one network while still taking decompositions into account, we will have to relax our result to some extent. Here, we merge the two subformulas by taking the disjunctions of the corresponding constraints:

    A {si, fi} a1 ∧ A {si, fi} a2 ∧ a1 {b, bi} a2

This formula is equivalent to the previous result, but it also contains two redundant, inconsistent cases.
This can be seen from the expansion of the formula:

    (A {si} a1 ∧ A {fi} a2 ∧ a1 {b} a2) ∨
    (A {fi} a1 ∧ A {si} a2 ∧ a1 {bi} a2) ∨
    (A {si} a1 ∧ A {si} a2 ∧ a1 {b} a2) ∨     /* inconsistent */
    (A {fi} a1 ∧ A {fi} a2 ∧ a1 {bi} a2)      /* inconsistent */

However, even though our result is weakened, it is still stronger than that from Allen's algorithm in most cases. For the example above, the result from Allen's algorithm will be:

    A {si, di, fi} a1 ∧ A {si, di, fi} a2 ∧ a1 {b, bi} a2

In the following, we summarize our discussion and provide a procedure for closing a decomposition of two subactions. Here, R(k, n) is a relation given between two intervals labeled as nodes k and n.

    function close-two(R(k, n))
      create a dummy node labeled temp;
      R(temp, k) <- { };
      R(temp, n) <- { };
      if R(k, n) ∩ {b, m, o} is not empty then
        R(temp, k) <- R(temp, k) ∪ {si};
        R(temp, n) <- R(temp, n) ∪ {fi};
      if R(k, n) ∩ {bi, mi, oi} is not empty then
        R(temp, k) <- R(temp, k) ∪ {fi};
        R(temp, n) <- R(temp, n) ∪ {si};
      C3 <- R(k, n) ∩ {si, di, fi};
      if C3 is not empty then
        R(temp, k) <- R(temp, k) ∪ {eq};
        R(temp, n) <- R(temp, n) ∪ C3;
      C4 <- R(k, n) ∩ {s, d, f};
      if C4 is not empty then
        R(temp, k) <- R(temp, k) ∪ inv(C4);   /* the inverses of the relations in C4 */
        R(temp, n) <- R(temp, n) ∪ {eq};
      if R(k, n) ∩ {eq} is not empty then
        R(temp, k) <- R(temp, k) ∪ {eq};
        R(temp, n) <- R(temp, n) ∪ {eq};
      return temp
    end

Having developed the procedure "close-two", we are now in a position to extend the result to close a decomposition of more than two subactions. Figure 8 (a) shows a decomposition of three subactions.

[Figure 8: Closing a Decomposition of Three Subactions (panels (a), (b), and (c)).]

In order to close the decomposition, we introduce an intermediate action a12 that takes a1 and a2 as subactions, shown in (b). Now, for this new decomposition, we can call "close-two" and get the closed constraints between a12 and a1 and between a12 and a2. Then, based on these results, we can compute a new constraint between a12 and a3, shown in (c).
Now, a12 and a3 form the two subactions of A. Once again, we can call "close-two" and get the closed constraints between A and a12 and between A and a3. At this time, the constraint between A and a3 has been closed. To get the closed constraints between A and a1 and between A and a2, we can perform the compositions of A to a1 and a2 via a12. Since all the constraints have been closed, we can eliminate the intermediate action a12 and all the constraints connected to it. The result brings us back to the structure in (a), but this time, all the constraints from A to its subactions have been closed. The above process can be repeated if there are more subactions to be closed.

We can now give a general procedure, which closes a decomposition of any number of subactions by using our "close-two" procedure. Here, k denotes an abstract action, and S, a list of the subactions of the abstract action. Also, given nodes i and j, N(i, j) corresponds to the existing constraint, and R(i, j), a new or derived constraint between i and j. Finally, U denotes the disjunctive set of all possible primitive interval relations, i.e., U = {b, bi, m, mi, o, oi, s, si, d, di, f, fi, eq}.

    procedure close-all(k, S)
    begin
      get first n from the list S;
      C <- {n};
      while S is not empty do
      begin
        get next n from the list S;
        R(k, n) <- U;
        foreach c in C do
          R(k, n) <- R(k, n) ∩ N(k, c) o N(c, n);
        temp <- close-two(R(k, n));
        foreach c in C do
          N(k, c) <- R(temp, k) o N(k, c);
        N(k, n) <- R(temp, n);
        C <- C ∪ {n}
      end
    end

Our Strengthened Algorithm

Our closing procedure is built on the temporal constraints between all the subactions. In order to get stronger results, we first view plans as temporal networks and use Allen's algorithm to make these constraints more specific. Then, we view plans as hierarchical structures and close all the decompositions in a depth first order. Once all the decompositions are closed, some of the constraints in the network may be updated.
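The two-subaction closing step ("close-two") can also be rendered in runnable form. This is our own set-based sketch, returning the closed constraints from the bounding action A to each subaction rather than building a dummy node; inv denotes relation inverses:

```python
def inv(constraint):
    """Inverse of each basic relation in a constraint."""
    table = {"b": "bi", "m": "mi", "o": "oi", "s": "si", "d": "di",
             "f": "fi", "bi": "b", "mi": "m", "oi": "o", "si": "s",
             "di": "d", "fi": "f", "eq": "eq"}
    return {table[r] for r in constraint}

def close_two(r_kn):
    """Given the constraint R(k, n) between two subactions, return the
    closed constraints (R(A, k), R(A, n)) from the bounding action A."""
    r_ak, r_an = set(), set()
    if r_kn & {"b", "m", "o"}:        # case (a): k starts A, n finishes A
        r_ak |= {"si"}; r_an |= {"fi"}
    if r_kn & {"bi", "mi", "oi"}:     # case (b): reverse of case (a)
        r_ak |= {"fi"}; r_an |= {"si"}
    c3 = r_kn & {"si", "di", "fi"}
    if c3:                            # case (c): A coincides with k
        r_ak |= {"eq"}; r_an |= c3
    c4 = r_kn & {"s", "d", "f"}
    if c4:                            # case (d): A coincides with n
        r_ak |= inv(c4); r_an |= {"eq"}
    if "eq" in r_kn:                  # case (e)
        r_ak |= {"eq"}; r_an |= {"eq"}
    return r_ak, r_an
```

For a1 {b} a2 this yields A {si} a1 and A {fi} a2, the strong result of figure 4 (b); for a1 {b, bi} a2 it yields A {si, fi} to both subactions, matching the merged formula.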
As a result, we need to call Allen's algorithm again to propagate these constraints. In general, we can design a strengthened algorithm by interactively calling Allen's algorithm and our closing procedure a number of times. Such a process will eventually terminate, since every time we update a constraint, some of its basic relations will be eliminated, and there are at most 13 basic relations in a constraint.³

Applications of Temporal Reasoning

There are two possible results that can be obtained from temporal reasoning: if the given constraints are inconsistent with the prestored constraints of a candidate plan, then the plan will be eliminated; otherwise, the given constraints will be added to make the prestored constraints more specific.

Here is an example to show the importance of doing temporal reasoning during plan recognition. Suppose that our plan library contains two plans for making GuoTie and JianJiao, two common ways of making fried dumplings in Chinese cooking, shown in figure 9.

[Figure 9: Two Simplified Plans in a Plan Library. (a) MakeGuoTie decomposes into MakeDumplings, FryDumplings, and BoilDumplings, with frying before boiling; (b) MakeJianJiao decomposes into MakeDumplings, BoilDumplings, and FryDumplings, with BoilDumplings {b} FryDumplings.]

Then, given the observation that BoilDumplings occurs earlier than FryDumplings,⁴ a plan recognition model that does not use temporal constraints from the input would propose MakeGuoTie and MakeJianJiao as the candidate plans, for both of them contain the two given actions. However, by taking the temporal relations given in the input as a constraint and checking them with those prestored in candidate plans, we find that MakeGuoTie is inconsistent with the given constraint, as BoilDumplings occurs later than FryDumplings in this plan. As a result, our plan recognition model will only propose MakeJianJiao as the plan that the agent is performing.
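The elimination step in this example amounts to intersecting the observed constraint with each plan's prestored constraint between the same two actions. A minimal sketch (the plan library encoding here is our own illustration, not the paper's implementation):

```python
# Prestored constraint between BoilDumplings and FryDumplings in each plan.
PLAN_LIBRARY = {
    "MakeGuoTie":   {"bi"},  # fry first, then boil
    "MakeJianJiao": {"b"},   # boil first, then fry
}

def consistent_candidates(observed):
    """Keep the plans whose prestored constraint has a non-empty
    intersection with the constraint observed in the input."""
    return [plan for plan, stored in PLAN_LIBRARY.items()
            if stored & observed]
```

With the observation BoilDumplings {b} FryDumplings, only MakeJianJiao survives; with no temporal constraint from the input (e.g. {b, bi}), both plans remain candidates.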
The other result of making prestored constraints more specific can benefit the process of deriving the temporal constraints from observation, which we call "temporal analysis." In a natural language setting, the need for automating temporal analysis becomes important, as the observations are described in terms of utterances and the temporal constraints suggested by linguistic expressions such as tense, aspect, temporal adverbials and connectives. However, as pointed out in ([Webber, 1988], [Allen, 1988], [Song, 1990]), these expressions are sometimes not strong enough to help derive specific temporal relations between the actions mentioned in the input; other discourse indicators such as cue-phrases and world knowledge are also needed for doing temporal analysis. Temporal reasoning is useful in that it provides a way of combining the given constraints and the prestored constraints in a candidate plan. In cases where the constraints indicated in the input are specific, we can use them to reduce the set of candidate plans, but in cases where the given constraints are vague, the prestored constraints in a candidate plan can be used to fill in the details about the temporal relations between actions. Readers are referred to Song [1990] for more discussion on temporal analysis.

³Due to the space limitation, the algorithm is not given here, but the readers should be able to construct it easily based on the discussion of this section.

⁴In a natural language setting, for example, such a temporal constraint may be obtained by linguistically analyzing the input: "I have boiled the dumplings and am now frying them."

Conclusion

In this paper, we present a strengthened algorithm for temporal reasoning during plan recognition. We view plans as both hierarchical structures and temporal networks.
This allows us to design a closing procedure which makes specific the temporal constraints between an action and its decomposed subactions and works interactively with Allen's algorithm to obtain strengthened results. Two main applications of temporal reasoning are to reduce the number of candidate plans during plan recognition and to help derive the temporal constraints from natural language input through linguistic analysis.

Note that the strengthened algorithm is not quite efficient in that it has to call our closing procedure and Allen's algorithm interactively. Some results on localizing the propagation in Allen's algorithm have been reported in [Koomen, 1989]. Future work should be directed to finding efficient ways of combining the two processes during temporal reasoning.

Acknowledgements

We would like to thank Peter van Beek and the anonymous referees for their useful comments. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Institute of Computer Research (ICR), and the University of Waterloo.

References

Allen, James F. 1983a. Maintaining knowledge about temporal intervals. Communications of the ACM 26(11):832-843.

Allen, James F. 1983b. Recognizing intentions from natural language utterances. In Brady, M. and Berwick, R., editors, Computational Models of Discourse. The MIT Press. 107-166.

Allen, James F. 1988. Natural Language Understanding. The Benjamin/Cummings Publishing Company.

Bruce, B. 1981. Plans and social actions. In Spiro, R.; Bruce, B.; and Brewer, W., editors, Theoretical Issues in Reading Comprehension. Lawrence Erlbaum, Hillsdale.

Carberry, Sandra 1986. Pragmatic Modeling in Information System Interfaces. Ph.D. Dissertation, University of Delaware.

Goldman, Robert and Charniak, Eugene 1988. A probabilistic ATMS for plan recognition. In AAAI-88 Workshop on Plan Recognition.

Goodman, Bradley A. and Litman, Diane J. 1990.
Plan recognition for intelligent interfaces. In IEEE Conference on Artificial Intelligence Applications.

Huff, Karen and Lesser, Victor 1982. Knowledge-based command understanding: An example for the software development environment. Technical Report TR 82-6, Computer and Information Science, University of Massachusetts, Amherst.

Kautz, Henry A. 1987. A Formal Theory of Plan Recognition. Ph.D. Dissertation, University of Rochester.

Koomen, Johannes A. 1989. Localizing temporal constraint propagation. In Proceedings of the 1st International Conference on Principles of Knowledge Representation and Reasoning, Toronto, Canada. 198-202.

Litman, Diane 1985. Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues. Ph.D. Dissertation, University of Rochester.

Pollack, Martha E. 1986. Inferring Domain Plans in Question-Answering. Ph.D. Dissertation, University of Pennsylvania.

Schank, Roger C. and Abelson, Robert P. 1977. Scripts, Plans, Goals, and Understanding. L. Erlbaum Associates, Hillsdale, New Jersey.

Schmidt, C. F.; Sridharan, N. S.; and Goodson, J. L. 1978. The plan recognition problem: An intersection of psychology and artificial intelligence. Artificial Intelligence 11:45-83.

Song, Fei 1990. A Processing Model for Temporal Analysis and its Application to Plan Recognition. Ph.D. Dissertation, University of Waterloo.

Webber, Bonnie Lynn 1988. Tense as discourse anaphor. Computational Linguistics 14(2):61-73.

Wilensky, Robert 1983. Planning and Understanding: A Computational Approach to Human Reasoning. Addison-Wesley Publishing Company.
Metric Constraints for Maintaining Appointments: Dates and Repeated Activities

Massimo Poesio*
Computer Science Department
University of Rochester
Rochester, NY 14627
poesio@cs.rochester.edu

Ronald J. Brachman
AT&T Bell Laboratories
600 Mountain Ave.
Murray Hill, NJ 07974
rjb@research.att.com

Abstract

Reasoning about one's personal schedule of appointments is a common but surprisingly complex activity. Motivated by the novel application of planning and temporal reasoning techniques to this problem, we have extended the formalization of the temporal distance model of Dechter, Meiri, and Pearl. We have developed methods for using dates as reference intervals and for meeting the challenge of repeated activities, such as weekly recurring appointments.

Introduction

Intelligently managing a busy personal schedule is not an easy task, but it is a ubiquitous one. The benefit to be derived from an automated facility for maintaining an appointment calendar, with all of the complex constraint management that that entails,¹ is potentially enormous. In the context of CLASM,² an intelligent appointment manager, we have begun to explore the utility of AI methods for maintaining a library of complex activity descriptions, verifying whether an activity is possible, and checking the consistency of a temporal network of dates and activities.

An application like CLASM can provide a realistic testbed for AI temporal reasoning and planning techniques, but programs of this kind are rare, no doubt because current techniques for maintaining temporal databases flounder on unconstrained real-world-size problems. Appointment calendars, however, come

*This work was done in part while the first author was at AT&T Bell Laboratories, Murray Hill, NJ, and was in part supported by ONR/DARPA contract number N00014-82-K-0193 and NSF grant number CDA-8822724.

¹Note that these constraints are not just temporal, and are not purely numeric. They can involve resource assignments, preconditions for initiating activities, complex goal interactions, management of typical activity sequences (scripts), exceptions to such scripts, etc.
In any case, the problem is incremental, involves much commonsense reasoning, and is not in general amenable to operations research approaches.

²CLASM ("CLASSIC Schedule Manager") is built on top of CLASSIC [Brachman, et al., 1991]. For more details, see [Poesio, 1990].

equipped with dates, which constitute a natural reference system; that is, they can be used to partition the temporal data base. Here we explore the leverage that dates can give us on the temporal constraint management problem.

Another obstacle to the utility of current techniques in this area is the need to represent repeated activities, such as "going to economics class every Tuesday and Thursday at 10:00," which constitute a large part of the normal schedule. The few projects in this area have devoted considerable effort attempting to overcome the expressive limitations of temporal notations that prevent them from dealing with repeated activities. However, work is still needed on the representation (the language used to talk about repeated activities) and especially on the algorithms: to date no algorithms have been specified for the two most important operations needed for the kind of temporal database in CLASM, uncovering inconsistencies and detecting potential overlaps of activities.

We will first review existing methods for representing temporal information, and motivate our choice of the TCSP model, a formalization due to Ladkin [1989] and Dechter, et al. [1991]. We present the model in some detail, introduce our own representation for temporal information in CLASM, and then turn to the more novel aspects of our work. First we show how to check consistency and detect overlaps in networks containing repeated activities. We then discuss our approach to using dates as reference intervals, illustrating the improvement achieved.
Information About Everyday Activities

The "temporal ontology" of a system dealing with everyday activities involves two types of temporal objects: activities and dates. The temporal information that the user needs to specify about an activity a includes its duration, and its qualitative relations [Allen, 1983] with other activities or dates, such as "a is before b" or "a takes place next Monday." Among the most important operations that a system like CLASM has to perform are verifying that the schedule of activities with metric durations and qualitative temporal constraints is consistent, and detecting overlaps between activities. Also useful, although not addressed here, is the operation of finding a solution to the schedule, i.e., suggesting to the user an organization of his/her activities that will fit all the constraints.

Current techniques for this can be divided into three groups. Allen and Kautz [1985] use a representation exclusively based on intervals. Special intervals like YEAR, MONTH, etc., are used to represent durations. They maintain two different networks: one in which the links are labeled with (disjunctions of) qualitative relations, another in which the links are labeled with the relative duration of two intervals (e.g., "the duration of I is from 6 to 7 times the duration of J"). This representation is very powerful, but has two problems: checking the consistency of the network of constraints is known to be NP-hard, and therefore only approximation algorithms are used [Vilain and Kautz, 1986]; and there are problems making the algorithm converge quickly during relaxation.

Malik and Binford [1983] and Dean and McDermott [1987] (among others) suggest a representation based on points, with constraints on the temporal distance between points.
A constraint on the temporal distance of the points Xj and Xi is an inequality of the form Xj - Xi ≤ c. This approach has recently been formalized by Ladkin [1989] and by Dechter, Meiri, and Pearl [1991]. If the application does not require complex disjunctions of qualitative relations, this method has the best complexity results overall. The simplest temporal distance method proposed by Dechter, et al., STP, has a complete polynomial consistency-checking algorithm (O(n³)), and the algorithm for finding a solution is also polynomial. However, temporal distance networks do not have enough expressive power to represent certain disjunctions of qualitative relations, such as "I is before or after J," and they are not easily extendable to do so.

Hybrid representations that make use of both points and intervals have been proposed by Schmiedel [1988], Ladkin [1989], and Ladkin and Kautz [1991]. Two nets of constraints are maintained, one of qualitative constraints among intervals and one of metric constraints among points. The nets are kept mutually consistent by transferring information back and forth.

Most of the time, the kind of temporal information that needs to be represented in a schedule can be transformed into an STP by translating the qualitative relations into metric constraints between the endpoints of the related intervals.³ We decided therefore to optimize the performance in the expected case by representing the temporal information as an STP, and to develop a method based on backtracking for those problems that couldn't be represented as an STP, rather than using a hybrid network. We now discuss the techniques used for the basic case.

³Kautz and Ladkin [1991] proved that this translation can be done in linear time.

The TCSP model

A Temporal Constraint Satisfaction Problem (TCSP) involves a set X1, ..., Xn of variables, each representing a time point.
In general these variables can have continuous domains; we will only allow them to take integers as values. A constraint is represented by a set of intervals {[a1, b1], ..., [am, bm]}. A binary constraint constrains the permissible values for the distance between two variables; that is, it represents the disjunction

    (a1 ≤ Xj - Xi ≤ b1) ∨ ... ∨ (am ≤ Xj - Xi ≤ bm)

[Dechter, et al., 1991]. Once we introduce a distinguished time point X0 representing "time zero," binary constraints are all we need to represent not only the distance between time points but also their absolute position. A network of binary constraints (binary TCSP) consists of a set of variables and a set of binary constraints. This network can be represented by a directed constraint graph. A Simple Temporal Problem (STP) is a TCSP in which all constraints contain a single interval. In this case, the constraint graph becomes a distance graph, analogous to Dean and McDermott's time map.

The most important computations on a constraint network are (i) determining its consistency, (ii) finding a solution (that is, a set of values for the variables which satisfies all the constraints), and (iii) answering queries concerning the set of all solutions. The consistency problem for a general TCSP is NP-hard. A solution to a general TCSP can be generated in O(n³kᵉ), where k is the maximum number of intervals labeling an edge of the constraint graph, and e is the number of edges, by a brute-force algorithm that generates all the labelings (each of which constitutes an STP) and solves each of them. In the case of an STP, however, both the consistency and the solution problems have complete algorithms in O(n³) [Dechter, et al., 1991].

Two consistency-checking algorithms based on path consistency are known to be complete for an STP; Mackworth called them PC-1 and PC-2 [1977]. Both have a worst case behavior of O(n³). PC-1 is essentially the Floyd-Warshall shortest-path algorithm.
Vilain and Kautz [1986] used a version of PC-2, a part of which is shown in Fig. 1. The algorithm is so well known that no comment should be necessary, except to point out that in our version Intervals becomes a variable passed to Propagate by Close, for reasons that will become clear shortly. Among the two algorithms, PC-1 has a better constant factor, but PC-2 has a better average case performance [Dechter, et al., 1991], and is therefore the algorithm most commonly found in the literature.

The Temporal Constraint Network

A schedule defines an STP consisting of two points per temporal object (activity or date) and the distinguished point 0.

254 TEMPORAL CONSTRAINTS

1. To Close()
2.   while Queue is not empty do
3.   begin
4.     Get next <I, J> from Queue
5.     Propagate(I, J, Intervals)
6.   end

Figure 1: A simple version of path consistency.

1. To Add_Activity(a)
2.   for each temporal constraint c of a do
3.     Add_Constraint(c)
4.   Detect_Intersections(a)

5. To Add_Constraint(c)
6.   C ← Convert(c)
7.   for each constraint c' in C between points p and q do
8.     if Table[p,q] ≠ c' then
9.       if Table[p,q] ∩ c' = ∅ then {signal contradiction}
10.      else Place <p,q> on Queue
11.  Close()

12. To Detect_Intersections(a)
13.   for each activity b ≠ a do
14.     if P_Intersect?(a, b) then {signal overlap}

15. To P_Intersect?(a, b)
16.   not(ub(a) < lb(b) or lb(a) > ub(b))

Figure 2: The algorithms for adding activities.

A data structure called a temporal constraint network (TCN) is used in CLASM to represent it. The TCN is a graph with one node per activity and one link per pair of activities. Activities, rather than points, are chosen as nodes of the TCN because, first, all of the other information CLASM has to store is about activities; and second, the links are a convenient place to store additional information that cannot be represented directly as an STP (see below). A schedule is represented in the TCN as follows.
Let BEGIN(I) and END(I) denote the initial and final endpoints of the activity I.

1. Each node I of the TCN is associated with constraints on the three following distances: END(I) − BEGIN(I) (the duration of I), END(I) − 0 (the end of I in absolute terms), and BEGIN(I) − 0 (the start of I in absolute terms). The record for node I has a separate field for each of these constraints.

2. Each link <I, J> of the TCN is associated with four binary constraints: END(J) − BEGIN(I), END(J) − END(I), BEGIN(J) − END(I), and BEGIN(J) − BEGIN(I). These constraints are stored in fields of the record associated with the link.

The functions required to add an activity to the network are shown in Fig. 2.4 The most complex operations performed by Add_Activity are checking the consistency of the STP (via calls to Close caused by calls to Add_Constraint) and detecting intersections of a with other activities. Add_Constraint first converts a constraint c from qualitative to metrical terms using Convert (essentially from [Kautz and Ladkin, 1991]); it then calls the function Close (see Fig. 1) to infer the consequences of the addition of c. Once the constraints have been propagated, the system can check whether a has a non-empty intersection with other activities using Detect_Intersections. The activities a and b intersect iff (in Allen's notation) a (o oi s si f fi d di) b. The functions returning the upper bound ub and lower bound lb of an activity used in Fig. 2 are defined in the obvious way. Since Close is in O(n^3) and Convert is linear, Add_Constraint is in O(n^3).

4In this and later figures, read P_Intersect as short for Possibly_Intersect.

We have so far not mentioned dates at all. Since the temporal constraints among dates are known a priori, letting the temporal constraint propagation algorithm try to recompute them would be a waste. Therefore, we do not want nodes in the TCN representing dates.
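The node and link records just described might be sketched as follows (the field names are illustrative, not taken from CLASM; each constraint is a single (lo, hi) pair, per the STP restriction, with times in minutes):

```python
# Sketch of the TCN node and link records described above. Names are
# illustrative, not taken from CLASM.
from dataclasses import dataclass, field

@dataclass
class Node:                # one per activity I
    duration: tuple        # bounds on END(I) - BEGIN(I)
    start: tuple           # bounds on BEGIN(I) - 0 (absolute start)
    end: tuple             # bounds on END(I) - 0 (absolute end)

@dataclass
class Link:                # one per pair of activities <I, J>
    ej_bi: tuple           # bounds on END(J) - BEGIN(I)
    ej_ei: tuple           # bounds on END(J) - END(I)
    bj_ei: tuple           # bounds on BEGIN(J) - END(I)
    bj_bi: tuple           # bounds on BEGIN(J) - BEGIN(I)

@dataclass
class TCN:
    nodes: dict = field(default_factory=dict)   # activity name -> Node
    links: dict = field(default_factory=dict)   # (I, J) pair -> Link

# a one-hour meeting starting between 9:00 and 10:00:
meeting = Node(duration=(60, 60), start=(540, 600), end=(600, 660))
```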
At the same time, however, we need to express relations between dates and activities (since, for example, we might want to say "go to New York on Monday"), and dates are very useful for partitioning the TCN, as shown later. We make dates transparent to the constraint propagation algorithm by converting the constraints between activities and dates into constraints between activities and point 0, as follows:

1. A constraint among two dates is never added to the TCN, since the system can compute it;

2. A mixed constraint of the form a ≤ Xj − Xi ≤ b, in which one of the points, say Xi, is the endpoint of a date, is converted into a constraint between Xj and 0 in the following way: let a' ≤ Xi − 0 ≤ b'; then a + a' ≤ Xj − Xi + Xi − 0 ≤ b + b'. (This transformation is again done by Convert.)

Data about the performance of the constraint propagation algorithm, presented below, show that this simple transformation is very effective.

POESIO & BRACHMAN 255

Repeated Activities

None of the four proposals for representing repeated events that we are aware of [Leban, et al., 1986; Ladkin, 1987; Poesio, 1988; Koomen, 1990] is completely satisfactory. Ladkin and Koomen make interesting proposals about languages for representing repeated activities, but these are too simple even for our limited purposes. None of the proposals includes algorithms for detecting overlaps among activities or detecting inconsistencies in networks containing repeated activities. In this section we will present a simple language that meets our needs, and two such algorithms.

A Simple Language for Specifying Repeated Activities

A repeated activity (RA) is a sequence a1 ≺ ... ≺ an of temporal objects of the same type (the elements). The activity definition language of CLASM includes two constructs for specifying repeated activities. The expression (seq <type> <quant>) defines a repeated activity containing <quant> consecutive elements of type <type>.
(<quant> can be a natural number or every.) For example, (seq monday every) denotes the sequence of all Mondays. The expression (tie <type> (R <ra>) <quant> <ref date>) defines a repeated activity each of whose elements is of type <type> and is related to an element of <ra> by R. (At the moment, R can only be a strict containment relation; the algorithm shown below only works when R is si.) The <quant> specifies how many elements there are within <ref date>: one per element of <ra> or less. For example,

(1) (tie (and CLASS (dur "1hr"))
         (si (seq (hour 4 monday) every))
         every
         (and (s (day january 21)) (f (day may 12))))

defines a "one-hour class that meets every Monday at 4:00, and which starts on January 21 and ends on May 12," that is, a sequence of activities of type CLASS, of duration one hour, each of which is in the si relation to an element of the RA denoted by (seq (hour 4 monday) every).

Algorithms for Checking Consistency and Detecting Overlaps

The ideal method for representing repeated activities would require a minimal increase in the size of the TCN, and would not increase the complexity of the constraint propagation algorithm. Here are some possibilities:

1. Expand each RA as it gets added to the schedule. This is the simplest method; it does not in fact require any change to the representation or the algorithms. It cannot be used with infinite RAs, but one may limit the user to bounded RAs only. The real problem with this method is that one node per element of the RA has to be added to the TCN.

2. Instead of expanding each RA, use two different kinds of nodes in the TCN, and define new algorithms for consistency checking and for detecting overlaps. With this method, only one node per RA is added to the TCN. Overlap detection can also be done quite efficiently, as shown below. The problem is how to ensure consistency, and how to find a solution to the network (although this is not a problem we have been concerned with).
3. Proceed as in case 2, but expand the RAs when they are referenced by the constraint propagation algorithm. That is, when an activity has to be added, its reference date can be computed using the algorithms presented above, and then all RAs that overlap with that reference date get expanded "within the bounds of that reference date." This method is slightly more complicated than method 1, and it is not guaranteed to have a better performance.

We are currently testing the first two possibilities. The first method is not difficult to implement; all it requires is a way of expanding an RA. The second and third methods also require that certain information be kept about RAs. For each RA, CLASM adds to the TCN a special node, which includes, among other things, the following information:

- The period. This is the distance between the minimum starting point of each element of the RA and the minimum starting point of the next. The period of RAs like (seq monday every) is known a priori; the period of an RA defined using tie is derived from the tied RA. For example, the period of the RA in (1) is derived from the period of (seq (hour 4 monday) every).

- The duration of each element of the RA, if known.

- The step, if known. This is the distance between the end of each element and the beginning of the next.

The algorithms for consistency checking and overlap detection presented in Fig. 2 need not be changed when the first or third method is chosen. New algorithms are however required to implement the implicit method for representing RAs. In particular, we need new algorithms for detecting inconsistencies and intersections. Consistency checking is rather easy to implement. If one can guarantee that the RA itself is internally consistent, all that is required is to ensure that the RA as a whole will not create inconsistencies.
This may be done by adding to the TCN as a node the convexification of the RA, defined by Ladkin [1987] as the minimum convex interval with the same lower and upper bounds as the RA. (Ladkin's convex intervals are analogous to our atomic temporal objects, while his non-convex intervals are analogous to our repeated activities.) The only non-obvious algorithm is the one required to detect the intersection of two RAs, or of an RA and an atomic (non-repeated) activity. The new version of P_Intersect? is shown in Fig. 3. (The purely atomic version from above, now called P_Intersect_AA?, is used in Fig. 3 as a subroutine.) The parameters of P_Intersect? are a (possibly repeated) activity A, a repeated activity RA, and a reference date RD. (For simplicity, we have omitted the case when both arguments are atomic activities, handled using the previous version of the function.) Ceiling*(x) returns the ceiling (smallest integer greater than or equal to) of x if x is positive, 0 if negative. Period(x) returns the period of an RA. In order to understand how P_Intersect? works, it is useful to think of a repeated activity as a line split into a sequence of adjacent segments of the same length (its period). The first if clause checks that, indeed, A and RA may intersect, and that they have a non-empty intersection with RD, using Ladkin's Convexify function (see above).

1. To P_Intersect?(A, RA, RD)
2.   if not P_Intersect_AA?(RD, Convexify(lb(A), ub(A))) or
3.      not P_Intersect_AA?(RD, Convexify(lb(RA), ub(RA))) or
4.      not P_Intersect_AA?(Convexify(max(lb(RD), lb(A)), min(ub(RD), ub(A))),
                            Convexify(max(lb(RD), lb(RA)), min(ub(RD), ub(RA))))
5.   then return NIL
6.   else
7.     if A is atomic
8.     then P_Intersect_A_RA?(A, RA, RD)
9.     else P_Intersect_RA_RA?(A, RA, RD)

10. To P_Intersect_A_RA?(A, RA, RD)
11.   I1 ← lb(RA) + (Ceiling*((lb(RD) − lb(RA)) / Period(RA)) × Period(RA))
12.   U ← min(ub(A), ub(RA), ub(RD))
13.   I2 ← max(lb(A), lb(RD))
14.   if I1 = lb(RA) and lb(RA) ≠ lb(RD) then
15.     I1 ← I1 + Period(RA)
16.   if I1 > ub(RA) then return NIL
17.   if I1 > ub(A) then
18.     if I1 > lb(RA) and I2 ≤ (I1 − Step(RA)) ≤ U then return T
19.     else return NIL
20.   if I2 > ub(RA) then return NIL
21.   D ← I2 − I1
22.   if D ≥ Period(RA) then D ← (D mod Period(RA))
23.   if D < Step(RA) then return T
24.   if (I2 − D + Period(RA)) ≤ U then return T
25.   return NIL

Figure 3: Detecting overlaps between activities.

There are then two separate cases. If A is atomic, the function returns T iff the lower bound of one of the segments of RA that fall within RD is also contained within A. This happens if the activity part of the first segment of RA within RD's bounds (whose leftmost bound is I1) falls within the part of A within RD (line 18), or if one of the other elements of RA does (lines 23 and 24). The proof of correctness by cases is elementary but long; P_Intersect_A_RA? is in O(1). If A is a repeated activity, the algorithm in Fig. 4 is used. One has again to find the lower bounds I1 and I2 of the first elements of RA and A that fall within RD. Then one has to find the first time at which one element of the repeated activity with period P2 falls within an element of RD. Two cases have to be considered, depending on whether P1 and P2 are relatively prime or not. If they are, one can just look for the first time at which the repeated activity with the smallest period will fall within an element of the other activity, and check that this happens within RD. Again, it is easy to see that the algorithm is indeed correct and complete; and the cycle at line 14 will require at most P2 iterations.

Dates as Reference Intervals

An average realistic schedule defines a simple temporal problem with a large enough number of nodes that even the O(n^2) average performance of PC-2 is inadequate. Two solutions are usually proposed: either
1. To P_Intersect_RA_RA?(RA1, RA2, RD)
2.   I1 ← lb(RA1) + (Ceiling*((lb(RD) − lb(RA1)) / Period(RA1)) × Period(RA1))
3.   if I1 > ub(RA2) then return NIL
4.   I2 ← lb(RA2) + (Ceiling*((lb(RD) − lb(RA2)) / Period(RA2)) × Period(RA2))
5.   if I2 > ub(RA1) then return NIL
6.   D ← abs(I2 − I1)
7.   U ← min(ub(RA1), ub(RA2), ub(RD))
8.   if Period(RA1) and Period(RA2) are relatively prime
9.   then
10.    P1 ← max(Period(RA1), Period(RA2))
11.    P2 ← min(Period(RA1), Period(RA2))
12.    if P1 = Period(RA1) then R ← Period(RA1) − Step(RA1); I3 ← I2
13.    else do the same but exchange RA1 with RA2 and I2 with I1
14.    find the first K such that 0 ≤ (D + (K × P2)) mod P1 ≤ R
15.    if (I3 + (K × P2)) ≤ U then return T
16.    else return NIL
17.  else
18.    if I2 ≥ I1 then
19.      P1 ← Period(RA1)
20.      S1 ← Period(RA1) − Step(RA1)
21.      P2 ← Period(RA2)
22.      S2 ← Period(RA2) − Step(RA2)
23.      L ← I1
24.    else assign L, P1, S1, P2 and S2 in the opposite way
25.    if D ≥ P1 then D ← (D mod P1)
26.    if D < S1 then return T
27.    if L + P1 > U then return NIL
28.    if 0 < D < P2 then
29.      if (P1 > P2) and (D + S2 > (((P1 − P2) / gcd(P1, P2)) × gcd(P1, P2))) then return T
30.      if (P1 ≤ P2) and (P1 < D + S2) then return T
31.      else return NIL
32.    if D = P2 then return T
33.    if (D mod P2) < S1 then return T
34.    return NIL

Figure 4: Finding overlaps between repeated activities.

use faster but incomplete algorithms, or reduce the size of the problem via reference intervals [Allen, 1983; Koomen, 1990]. Reference intervals are a way to trade off assertion-time work against query-time work. A reference interval for interval I is an interval J that properly contains I. The hierarchy of reference intervals encodes information about the relations between intervals: for example, if the activity MAILING-CAMERA-READY-COPY is before the activity ATTENDING-AAAI-91, and the activity PRESENTING-PAPER is contained within ATTENDING-AAAI-91, then MAILING-CAMERA-READY-COPY is also before PRESENTING-PAPER.
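The inference licensed by a reference hierarchy (as in the MAILING-CAMERA-READY-COPY example) can be sketched as follows (a toy illustration; the representation and names are mine, not Koomen's):

```python
# Toy sketch of the inference a reference hierarchy licenses: if I is
# before a reference interval R, and J is properly contained in R, then
# I is before J, so the cross-link between I and J need not be stored.
def before_via_reference(before, contains, i, j):
    """before: set of (x, y) pairs, x before y.
    contains: set of (r, y) pairs, y properly contained in r."""
    return (i, j) in before or any(
        (i, r) in before and (r, j) in contains
        for r, _ in contains)

before = {("MAILING-CAMERA-READY-COPY", "ATTENDING-AAAI-91")}
contains = {("ATTENDING-AAAI-91", "PRESENTING-PAPER")}
```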
The reference hierarchy can be used to reduce the number of nodes among which a constraint C has to be propagated. When C is added, if the constrained nodes I and J have the same reference interval I1, the only nodes considered for propagation are those with reference interval I1; we can infer the relation of I and J with other intervals from the reference hierarchy. In this way the time to query the network becomes logarithmic (rather than constant), but the time for asserting a constraint becomes almost linear (see [Koomen, 1990] for details).

The most detailed proposal for dealing with reference intervals is by Koomen. His system automatically organizes the intervals in a reference hierarchy using the information given by the strict containment relations, and eliminates "cross-links" between subtrees of the reference hierarchy whenever the information they carry can be inferred from the reference hierarchy. Koomen achieves a considerable speedup in this way. His algorithm, however, does not exploit the fact that dates constitute an excellent reference hierarchy.5 Using dates appropriately, one gets a reference interval for any two pairs of temporal objects; this is true not only for systems like CLASM but for other applications as well.6 Another problem with Koomen's system is that most of its speedup comes from cutting the cross links, which in our application carry information about distance that cannot be recovered from the reference hierarchy.

Let the propagation set of a constraint C be the set of temporal objects among which C is propagated (that is, the set Intervals on which Close calls Propagate in line 5 of Fig. 1). The propagation set of the constraint C between the points P and Q is defined as the set of activities which possibly intersect the reference date of P and Q.
The reference date of P and Q is defined as the smallest date (year, month, week or day, or dense sequence of them) that properly contains both P and Q, unless one of P or Q is the point 0, in which case the reference interval is defined as the smallest date which properly includes the one among P and Q which is not 0. For example, the reference date of an activity happening at 4pm today and an activity happening at 6pm today will be this day. The reference date is computed by first determining the minimum among the lower bounds of I and J and the maximum between their upper bounds, and then determining the minimum date that contains both bounds. The overlap with the reference date is determined by P_Intersect?, shown in Fig. 2.7

In this way, we compute only those consequences of C that cannot otherwise be derived from the reference system. For example, if a is properly contained within January 27, 1991, and b is properly contained within January 28, 1991, we do not compute the constraint on the distance between a and b when checking the consistency of the database. We do however compute the consistency of the constraints between a, b, and c if all we know is that they are all properly contained within January 27.

5Unless one introduces into the TCN one node per date, which of course we want to avoid.
6However, a system like CLASM may exploit dates better than the planning systems Koomen had in mind.
7At the moment, we only use calendar dates as reference intervals. It would be a straightforward extension to have the algorithm use other fixed temporal objects such as "summer" or "AAAI-91" or "my vacation."

Table 1: Results of the measures. (Rows: constraints considered by Close; constraints inferred, average; average time per assertion in seconds. Figures in parentheses refer to the session that made extensive use of reference dates.)

The claim, then, is that the only consequences of a constraint between the points Xj and Xi that cannot be derived from the date system are:
1. constraints on the relations between those activities that fall in the same reference date as those whose endpoints are Xj and Xi, or

2. constraints on the relations between the said activities and those whose absolute position is unknown.

Experimental Results

Eliminating dates from the TCN and using them as reference intervals turned out to have a very positive effect. Our measures are reported in Table 1. The columns indicate the version of the system used: NC-NR means no conversion to absolute times, no elimination of the dates, and no use of reference dates; C-NR means conversion to absolute times and elimination of the dates, but no use of reference dates; etc. The data in the table were obtained by running two sample sessions, one (figures in parentheses) in which extensive use was made of reference intervals when specifying activities, the other in which most activities did not have a precise temporal location. (No repeated activities were defined.) The average time required to process an assertion with reference dates and conversion is 5% of the time required with neither. The big win is the conversion of the constraints to eliminate dates: that alone reduces the time to 1/15 of the previous value, and the number of constraints to be stored to 1/3. Using dates as reference intervals further reduces the number of constraints which need to be stored to 25%, especially when extensive use is made of reference dates. The average number of calls to Propagate for each addition of a metric constraint was computed in order to test whether PC-2 really performs better than PC-1 in practice.8 The figures in the table seem to support the choice of PC-2 over PC-1: the average number of calls to Propagate is well below the cube of the size of the net.

Conclusions

We have shown how to extend the standard temporal distance method to make it usable by a program that can reason about the schedule of its user.
We have demonstrated that reducing the size of the problem by using relatively simple observations about the domain does lead to significant improvements, and should therefore be pursued with the same vigor as the search for tradeoffs between expressive power and efficiency. We certainly think that better algorithms for partitioning the TCN can be developed, and methods not based on constraint propagation have recently been proposed that look very promising [Miller and Schubert, 1990]. We have also shown that checking the consistency of the schedule and detecting overlaps between activities can still be done in O(1) when the temporal database includes repeated activities of a certain form.

We are currently developing a method for obtaining the additional power required to say, for example, that activity I is either before or after activity J, without using representations that do not have complete polynomial consistency checking algorithms. Ladkin and Maddux [1988] list 13 "simple" and 174 "composite" relations (disjunctions of simple relations) that can be converted without loss of information into Vilain and Kautz's point algebra (a subset of TCSP). As long as only (a subset of) these relations is asserted, the problem stored in the TCN can be solved by the methods presented so far. When more complex disjunctions are asserted, only the maximal subset of the disjunction that can be represented in an STP is added to the TCN, and the difference is stored away in the link. For example, if the user asserts that I (b a) J, then I b J is asserted, and (a) is stored in the link between I and J. When an inconsistency is detected, CLASM tries to backtrack: it looks at the assumptions from which the inconsistent constraint was derived; if any of these assumptions derives from a simplification of the original assertion, it is retracted, replaced with the next representable subset of the original assertion, and the system propagates again.
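The retraction loop just described can be sketched as follows (much simplified: a real system would track which assumptions each inconsistency was derived from, as the text notes):

```python
# Sketch of the backtracking over disjuncts described above. An assertion
# is a disjunction of relations; we assert one representable disjunct at
# a time and fall back to the next when propagation finds an inconsistency.
def assert_disjunction(disjuncts, consistent_with):
    """disjuncts: e.g. ['b', 'a'] for "I is before or after J".
    consistent_with: callback testing one disjunct against the network."""
    for d in disjuncts:
        if consistent_with(d):
            return d        # this disjunct is kept in the network
    raise ValueError("no representable disjunct is consistent")

# if asserting 'b' (before) turns out inconsistent, retry with 'a' (after):
chosen = assert_disjunction(['b', 'a'], lambda d: d == 'a')
```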
Acknowledgments

We wish to thank Prem Devanbu, Diane Litman, Peter Patel-Schneider, and above all Henry Kautz for fruitful discussions and suggestions. Massimo Poesio wants to thank Paul Dietz and James Allen for help with the algorithms and the paper.

8When PC-1 is used, this figure is always n^3, where n is the number of nodes in the TCN. With PC-2, the value depends on the number of constraints that must be revised.

References

Allen, J. F. and Kautz, H. A. 1985. A model of naive temporal reasoning. In Hobbs, J. R. and Moore, R., editors, Formal Theories of the Commonsense World. Ablex Pub. Co.

Allen, J. 1983. Maintaining knowledge about temporal intervals. Communications of the ACM 26(11):832-843.

Brachman, R. J.; McGuinness, D. L.; Patel-Schneider, P. F.; Resnick, L. Alperin; and Borgida, A. 1991. Living with CLASSIC: How and when to use a KL-ONE-like language. In Sowa, J., editor, Principles of Semantic Networks: Explorations in the Representation of Knowledge. Morgan Kaufmann, San Mateo, CA.

Dean, T. L. and McDermott, D. 1987. Temporal data base management. Artificial Intelligence 32:1-55.

Dechter, R.; Meiri, I.; and Pearl, J. 1991. Temporal constraint networks. Artificial Intelligence 49.

Kautz, H. and Ladkin, P. 1991. Integrating metric and qualitative temporal reasoning. In Proc. AAAI-91, Anaheim, CA.

Koomen, J. A. 1990. The TIMELOGIC temporal reasoning system. Technical Report TR-231, University of Rochester.

Ladkin, P. and Maddux, R. 1988. On binary constraint networks. Technical Report KES.U.88.8, Kestrel Institute, Palo Alto, CA.

Ladkin, P. 1987. The logic of time representation. Technical Report KES.U.87.13, Kestrel Institute, Palo Alto, CA.

Ladkin, P. 1989. Metric constraint satisfaction with intervals. Technical Report TR-89-038, ICSI, Berkeley, CA.

Leban, B.; McDonald, D.; and Forster, D. 1986. A representation for collections of temporal intervals. In Proc. AAAI-86, Philadelphia, PA. 367-371.

Mackworth, A. K. 1977.
Consistency in networks of relations. Artificial Intelligence 8(1):99-118.

Malik, J. and Binford, T. O. 1983. Reasoning in time and space. In Proc. IJCAI-83, Karlsruhe, West Germany. 343-345.

Miller, S. A. and Schubert, L. K. 1990. Time revisited. Computational Intelligence 6:108-118.

Poesio, M. 1988. Toward a hybrid representation of time. In Proc. ECAI-88, München, West Germany. 247-252.

Poesio, M. 1990. CLASM: A schedule manager in CLASSIC. In preparation.

Schmiedel, A. 1988. A temporal constraint handler for the BACK system. KIT-Report 70, Technische Universität Berlin.

van Beek, P. 1990. Reasoning about qualitative temporal information. In Proc. AAAI-90, Boston, MA.

Vilain, M. and Kautz, H. 1986. Constraint propagation algorithms for temporal reasoning. In Proc. AAAI-86, Philadelphia, PA. 377-382.
Dept of Computer Sciences, University of Texas, Austin, TX 78712-1188, USA
Center for Cognitive Science, University of Texas, Austin, TX 78712, USA

Abstract

Know-how is an important concept in Artificial Intelligence. It has been argued previously that it cannot be successfully reduced to the knowledge of facts. In this paper, I present sound and complete axiomatizations for two non-reductive and intuitively natural formal definitions of the know-how of an agent situated in a complex environment. I also present some theorems giving useful properties of know-how, and discuss and resolve an interesting paradox (which is described within). This is done using a new operator in the spirit of Dynamic Logic that is introduced herein and whose semantics and proof-theory are given.

1 Introduction

Knowledge and action are the two staples of AI. Traditionally, research in AI has focused on the conception of knowledge corresponding to know-that, or the knowledge of facts. In earlier work, I have argued that an important notion of knowledge from the point of view of AI is the one corresponding to know-how, or the knowledge of skills, and that this cannot be easily reduced to know-that (Singh, 1990; Singh, 1991a). I have also given two formal model-theoretic definitions of know-how in an abstract model of action and time: one purely reactive; the other involving abstract strategies. This theory does not require many troublesome assumptions (e.g., that only one event may occur at a time) that are often believed necessary. Thus it applies uniformly to both traditional and recent architectures. This theory has applications in the design and analysis of intelligent systems, and in giving a semantics for communication among different agents in a multi-agent system (Singh, 1991b). However, while a formal model-theoretic definition of know-how has been given, no axiomatization is available for it; this is something that would facilitate these applications considerably.
The goal of this paper is precisely to present a sound and complete axiomatization for the two definitions mentioned above.*

*This research was supported by the National Science Foundation (through grant # IRI-8945845 to the Center for Cognitive Science, University of Texas at Austin).

In §2, I briefly outline the motivating arguments of (Singh, 1990). Next, in §3, I present the formal model. In §4, I present the definition of know-how for the case of purely reactive agents, and in §5, a sound and complete axiomatization for it. In §6, I define strategies and present the definition of know-how for complex agents whose behavior is best described by strategies. In §7, I present a sound and complete axiomatization for this notion as well. These definitions involve an extension of dynamic logic (Kozen and Tiuryn, 1989) that is explained within. Some important theorems about know-how are presented, and a paradox about know-how described and resolved, in §8.

2

Traditional AI systems consist of a knowledge base of facts or beliefs coupled with an interpreter that reads and writes on it. Recently, this conception has been seen as being quite unsatisfactory (Agre and Chapman, 1987; Rosenschein, 1985; Singh, 1990). But corresponding to this conception, traditional formal theories of knowledge and action (Konolige, 1982; Moore, 1984; Morgenstern, 1987) stress the knowledge of facts (or know-that), rather than the knowledge of skills (or know-how). They assume that know-how can be reduced to know-that; this assumption too is quite problematic (Singh, 1990). Moore provides the following definition of know-how: an agent knows how to achieve p iff it has a plan such that it knows that it will result in p and that it can execute it (Moore, 1984, p. 347). This idea embodies many unrealistic assumptions and is objected to in (Singh, 1990) in detail; some problems with it are outlined here.
Real-life agents have to act rapidly on the basis of incomplete information, and lack the time and knowledge required to form complete plans before acting. The traditional reduction stresses explicitly represented action descriptions and plans. However, much know-how is reactive; e.g., someone may know how to beat someone else at tennis, but not have a plan that will guarantee it, much less one that is explicitly represented and interpreted. Traditional accounts assign the same basic actions to all agents, and allow expertise to vary only because of differences in the knowledge of facts, not because of differences in skills. Also, they typically consider only single agent plans: thus they cannot account for the know-how of a group of agents; organizations know how to do certain things, but may not have any explicit plan (Hewitt, 1988; Singh, 1991a).

SINGH 343
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

In the theory of (Singh, 1990), which is adopted here, an agent x knows how to achieve p, or to "do" p, if it is able to bring about the conditions for p through its actions. The world may change rapidly and unpredictably, but x is able to force the situation appropriately. It has a limited knowledge of its rapidly changing environment and too little time to decide on an optimal course of actions, and can succeed only if it has the required physical resources and is appropriately attuned to its environment. While this "situated" theory does not use plans, it can accommodate the richer notion of strategies (see §6). Therefore, it contrasts both with situated theories of know-that (Rosenschein, 1985), and with informal, and exclusively reactive, accounts of action (Agre and Chapman, 1987).
While the pure plan-based view is troublesome, it is also the case that real-life agents lack the computational power and perceptual abilities to choose optimal actions in real-time, so the assumption of pure reactivity is not realistic either. Abstract "strategies" (discussed in §6) simplify reactive decision making by letting an agent have partial plans that can be effectively followed in real-time.

3 The Formal Model

The formal model is based on possible worlds. Each possible world has a branching history of times. The actions an agent can do can differ at each time; this allows for learning and forgetting, and changes in the environment. Let M = (F, N) be an intensional model, where F = (W, T, <, A, U) is a frame, and N = ([], B) an interpretation. Here W is a set of possible worlds; T is a set of possible times ordered by <; A is the class of agents in different possible worlds; U is the class of basic actions; as described below, [] assigns intensions to atomic propositions and actions. B is the class of functions assigning basic actions to the agents at different worlds and times. Each world w ∈ W has exactly one history, constructed from the times in T. Histories are sets of times, partially ordered by <. They branch into the future. The times in each history occur only in that history. A scenario at a world and time is any maximal set of times containing the given time, and all times that are in a particular future of it; i.e., a scenario is any single branch of the history of the world that begins at the given time, and contains all times in some linear subrelation of <. A skeletal scenario is an eternal linear sequence of times in the future of the given time; i.e., SS at w, t is any sequence (t = t0, t1, ...), where (∀i : i ≥ 0 → ti < ti+1) (linearity) and (∀i, t′ : t′ > ti → (∃j : t′ ≤ tj)) (eternity). Now, a scenario, S, for w, t is the "linear closure" of some skeletal scenario at w, t.
Formally, S, relative to some SS, is the minimal set that satisfies the following conditions:

• Eternity: SS ⊆ S
• Linear Closure: (∀t″ : t″ ∈ S → (∀t′ : t0 < t′ < t″ → t′ ∈ S))

Sw,t is the class of all scenarios at world w and time t: (w ≠ w′ ∨ t ≠ t′) → Sw,t ∩ Sw′,t′ = ∅. (S, t, t′) is a subscenario of S from t to t′, inclusive. Basic actions may have different durations relative to one another in different scenarios, even those that begin at the same world and time. The intension of an atomic proposition is the set of worlds and times where it is true; that of an action is, for each agent x, the set of subscenarios in the model in which an instance of it is done (from start to finish) by x; e.g., (S, t, t′) ∈ [a]^x means that agent x does action a in the subscenario of S from time t to t′. I assume that [] respects B; i.e., a ∈ Bw,t(x).

The following coherence conditions on models are imposed: (1) at any time, an action can be done in at most one way on any given scenario; (2) subscenarios are uniquely identified by the times over which they stretch, rather than the scenario used to refer to them; (3) there is always a future time available in the model; and (4) something must be done by each agent along each scenario in the model, even if it is a dummy action. Restrictions on [] can be used to express the limitations of agents, and the ways in which their actions may interfere with those of others; e.g., at most one person enters an elevator at a time, i.e., in the model, if one person takes a step (a basic action) into an elevator, no one else takes a step into it at that time. The habits of agents can be similarly modeled; e.g., x always brakes before turning.

The formal language is CTL* (a propositional branching time logic (Emerson, 1989)), augmented with operators [], ⟦⟧, K′, K and quantification over basic actions. [] depends on actions (as defined); ⟦⟧ on trees and strategies (to be defined); K′ is a simple version of know-how; and K is proper know-how.
The agent is implicit in K′ and K. The semantics is given relative to intensional models: it is standard for CTL*, but is repeated for A and F to make explicit their connection with the other operators, which are also considered below.

4 Reactive Know-how

As will soon become clear, it is useful to define know-how relative to a 'tree' of actions. A 'tree' of actions consists of an action (called its 'root'), and a set of subtrees. The idea is that the agent does 'root' initially and then picks out one of the available subtrees to pursue further. An agent, x, knows how to achieve p relative to a tree of actions iff on all scenarios where the root of the tree is done, either p occurs or x can choose one of the subtrees of the tree to pursue, and thereby achieve p. The definition requires p to occur on all scenarios where the root is done. The agent gets to "choose" one subtree after the root has been done. This is to allow the choice to be based on how the agent's environment has evolved. However, modulo this choice, the agent must achieve p by forcing it to occur. This definition is intuitively most natural when applied to models that consist only of "normal" scenarios (see (Singh, 1990) for a discussion). It is important to note that the tree need not be represented by the agent; it just encodes the selection function the agent may be deemed to be using in picking out its actions at each stage (it could possibly be implemented as table lookup, a kind of symbolic representation). When a tree is finite in depth, it puts a bound on the number of actions that an agent may perform to achieve something.

It is clear that the definition of K (given below) is non-normal (Chellas, 1980, p. 114); e.g., we have ¬Ktrue. In order to take care of this complication, I present the axiomatization in two stages: here I axiomatize K′ (which is normal), and then in §8, motivate and consider K.
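The tree-relative notion of know-how can be viewed computationally as an AND-OR search: the root must be doable, and after every possible outcome of doing it, the agent must be able to pick a subtree that still forces p. A minimal Python sketch, assuming a hypothetical finite-state abstraction of the branching-time model (the `outcomes` table and state names are illustrative, not part of the formalism):

```python
# `outcomes[(state, action)]` lists the states the world may be in after the
# agent does `action` in `state` (one entry per class of scenarios).
# A tree is None (the empty tree) or a pair (root_action, [subtrees]).

def forces(state, tree, p, outcomes):
    """Know-how of p relative to a tree: the empty tree forces p iff p
    already holds; otherwise the root must be doable, and after every
    possible outcome of the root the agent must be able to choose some
    subtree that still forces p."""
    if tree is None:
        return p(state)
    root, subtrees = tree
    nexts = outcomes.get((state, root), [])
    if not nexts:          # root is not doable here
        return False
    return all(any(forces(s2, sub, p, outcomes) for sub in subtrees)
               for s2 in nexts)

# Toy model: from s0, action 'a' may land in s1 or s2; from either,
# action 'b' reliably reaches the goal state s3.
outcomes = {('s0', 'a'): ['s1', 's2'],
            ('s1', 'b'): ['s3'],
            ('s2', 'b'): ['s3']}
p = lambda s: s == 's3'
tree = ('a', [('b', [None])])
print(forces('s0', tree, p, outcomes))   # True
```

Existential quantification over trees (K′ below) would then correspond to searching for some such tree; the sketch only checks a given one.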
Since intuitively know-how entails the ability to achieve the relevant proposition in finite time, this restriction is imposed here. The definitions do not really depend on the tree being of finite breadth, but that too would be required for finite agents. Formally, a tree is either (1) ∅, the empty tree, or (2) (root, subtrees), where 'root' is as described above, and 'subtrees' is a non-empty set of trees. 'Tree' refers to a tree of this form. Intuitively, ⟦tree⟧p is true iff the agent can use 'tree' as a selection function and thereby force p to become true. If we wish, we can impose a restriction that for all trees the empty tree ∅ is always in 'subtrees' to ensure that the agent does as little work as possible, i.e., so that the agent can stop acting as soon as the right condition comes to hold.

• M ⊨w,t ⟦∅⟧p iff M ⊨w,t p
• M ⊨w,t ⟦tree⟧p iff (∃S, t′ : S ∈ Sw,t ∧ (S, t, t′) ∈ [root]) ∧ (∀S : S ∈ Sw,t ∧ (∃t′ : (S, t, t′) ∈ [root]) → (∃t′ : (S, t, t′) ∈ [root] ∧ (∃sub ∈ tree.subtrees : M ⊨w,t′ ⟦sub⟧p)))

Now we can define know-how as follows:

• M ⊨w,t K′p iff (∃tree : M ⊨w,t ⟦tree⟧p)
• M ⊨w,t Kp iff M ⊨w,t K′p ∧ (∃S : S ∈ Sw,t ∧ (∀t′ : t′ ∈ S → M ⊭w,t′ p))

This definition seems to require that the agent be able to make the right choices at the level of basic actions; it would attribute know-how to an agent even if it may not be able to do so, just if it has the right set of basic actions. Thus it can be applied only purely externally on agents that are somewhat rigidly structured, so that they have only those basic actions that they will in fact choose from while acting. But then it de-emphasizes the inherent autonomy of complex intelligent agents. At the same time, it is not acceptable to require that agents explicitly represent and interpret shared plans. In §6, this will motivate us to look at abstract strategies that need not be explicitly represented, but which can characterize the coordinated behavior of intelligent agents.
5 Axioms for Reactive Know-how

Now I present an axiomatization for the definition of K′ given above and a proof of its soundness and completeness. I now describe the formal language in which the axiomatization will be given. Loosely following standard dynamic logic (DL) (Kozen and Tiuryn, 1989), I introduce a new modality for actions. For an action a and a formula p, let [a]p denote that on the given scenario, if a is ever done starting now, then p holds when a is completed. Let ⟨a⟩p abbreviate ¬[a]¬p. Let A and E be the path- or scenario-quantifiers of branching temporal logic (Emerson, 1989). A denotes "in all scenarios at the present time," and E denotes "in some scenario at the present time"; i.e., Ep ≡ ¬A¬p. Thus A[a]p denotes that on all scenarios at the present moment, if a is ever done then p holds; i.e., it corresponds to the necessitation operator in DL, and E⟨a⟩p to the possibility operator in DL. The advantage of defining [a]p over scenarios rather than states (as in DL) is to simplify the connection to times. pUq denotes "eventually q, and p until q." Fp denotes "eventually p" and abbreviates trueUp. Formally, we have:

• M ⊨S,t [a]p iff (∃t′ : (S, t, t′) ∈ [a]) → (∃t′ : (S, t, t′) ∈ [a] ∧ M ⊨S,t′ p)

It is easy to see that ([a]p ∧ [a]q) ≡ [a](p ∧ q).

• M ⊨w,t Ap iff (∀S : S ∈ Sw,t → M ⊨S,t p)
• M ⊨S,t p iff M ⊨w,t p, if p is not of the form [a]q or ⟨a⟩q, and w is the unique world such that S ∈ Sw,t.
• M ⊨S,t pUq iff (∃t′ : t′ ≥ t ∧ M ⊨S,t′ q ∧ (∀t″ : t ≤ t″ ≤ t′ → M ⊨S,t″ p)).

Then we have the following axiomatization (except for the underlying operators such as [] and A):

1. p → K′p
2. (∃a : E⟨a⟩true ∧ A[a]K′p) → K′p

E⟨a⟩true means that a is a basic action of the agent at the given time. These axioms, though they refer to only one action, allow any number of them: the agent achieves p trivially when it is true, and knows how to achieve it whenever there is an action he can do to force it to hold trivially. Axiom 2 can be applied repeatedly.
Theorem 1 The above axiomatization is sound and complete for K′.

Proof. Construct a branching-time model, M. The indices of M are notated as (w, t) and are maximally consistent sets of formulae that obey the usual constraints for [a]p, A[a]p, etc. (i.e., they contain all the substitution instances of the theorems of the underlying logic). Furthermore, these sets are closed under the above axioms.

Soundness: For axiom 1 above, soundness is trivial from the definition of ⟦∅⟧p. For axiom 2, let (∃a : E⟨a⟩true ∧ A[a]K′p) hold at (w, t). Then (∃S, t′ : S ∈ Sw,t ∧ (S, t, t′) ∈ [a]) ∧ (∀S : S ∈ Sw,t ∧ (∃t′ : (S, t, t′) ∈ [a]) → (∃t′ : (S, t, t′) ∈ [a] ∧ M ⊨w,t′ K′p)). At each (w, t′), (∃tree′ : M ⊨w,t′ ⟦tree′⟧p). Define 'tree' as (a, {tree′ | tree′ is a tree used to make K′p true at any of the (w, t′) in the above quantification}). Thus M ⊨w,t ⟦tree⟧p, or M ⊨w,t K′p.

Completeness: The proof is by induction on the structure of the formulae. Only the case of the operator K′ is described below. Completeness means that M ⊨w,t K′p implies K′p ∈ (w, t). M ⊨w,t K′p iff (∃tree : M ⊨w,t ⟦tree⟧p). This proof is also by induction, though on the structure of trees using which different formulae of the form K′p get satisfied in the model. The base case is the empty tree ∅. And M ⊨w,t ⟦∅⟧p iff M ⊨w,t p. By the (outer) inductive hypothesis on the structure of the formulae, p ∈ (w, t). By axiom 1 above, K′p ∈ (w, t) as desired. For the inductive case, M ⊨w,t ⟦tree⟧p iff (∃S, t′ : S ∈ Sw,t ∧ (S, t, t′) ∈ [root]) ∧ (∀S : S ∈ Sw,t ∧ (∃t′ : (S, t, t′) ∈ [root]) → (∃t′ : (S, t, t′) ∈ [root] ∧ (∃sub ∈ tree.subtrees : M ⊨w,t′ ⟦sub⟧K′p))). But since 'sub' is a subtree of 'tree,' we can use the inductive hypothesis on trees to show that this is equivalent to (∃S, t′ : S ∈ Sw,t ∧ (S, t, t′) ∈ [root]) ∧ (∀S : S ∈ Sw,t ∧ (∃t′ : (S, t, t′) ∈ [root]) → (∃t′ : (S, t, t′) ∈ [root] ∧ M ⊨w,t′ K′p)).
But it is easy to see that (∃S, t′ : S ∈ Sw,t ∧ (S, t, t′) ∈ [root]) iff E⟨root⟩true. And (using the definition of []) the second conjunct holds iff A[root](K′p). Thus M ⊨w,t ⟦tree⟧p iff M ⊨w,t (∃root : E⟨root⟩true ∧ A[root](K′p)). But since the axiomatization of the underlying logic is complete, (∃root : E⟨root⟩true ∧ A[root](K′p)) ∈ (w, t). Thus by axiom 2, K′p ∈ (w, t). Hence we have completeness. □

6 Strategies and Strategic Know-how

Even for situated agents, and especially for complex ones, it is useful to have an abstract description of their behavior at hand. Such descriptions I call strategies. Strategies correspond to plans in traditional systems, and to the architectural structure of reactive agents, as instantiated at a given time. In the formalism, they are taken to be of the form of regular programs (Kozen and Tiuryn, 1989). A strategy is simply the designer's description of the agent and the way in which it behaves. An agent knows how to achieve p if it can achieve p whenever it so "intends"; strategies are a simple way of treating intentions. I now let each agent have a strategy that it follows in the current situation. Intuitively, an agent knows how to achieve p relative to a strategy Y iff it possesses the skills required to follow Y in such a way as to achieve p. Thus know-how is partitioned into two components: the "ability" to have satisfactory strategies, and the "ability" to follow them. Strategies abstractly characterize the behavior of agents as they perform their actions in following their strategies. As described below, these actions are derived from the tree (as used in §4) characterizing their selection function for each substrategy. Let Y be a strategy of agent x; 'current(Y)' the part of Y now up for execution; and 'rest(Y)' the part of Y remaining after 'current(Y)' has been done.
I will define strategies, 'current,' 'rest' and the strategy-relative intension of a tree (i.e., [tree]current(Y)) shortly, but first I formalize know-how using the auxiliary definition of know-how relative to a strategy. Extend the notation to allow ⟦⟧ to be applied to strategies. ⟦Y⟧p means that if the agent can be said to be following strategy Y, it knows how to perform all the substrategies of Y that it may need to perform, and furthermore that it can perform them in such a way as to force the world so as to make p true. Basically, this allows us to have the know-hows of an agent to achieve the conditions in different substrategies strung together so that it has the know-how to achieve some composite condition. This is especially important from the point of view of designers and analyzers of agents, since they can take advantage of the abstraction provided by strategies to design or understand the know-how of an agent in terms of its know-how to achieve simpler conditions. Even formally, this latter know-how is purely reactive as in §4 (see Theorem 2).

• M ⊨w,t ⟦skip⟧p iff M ⊨w,t p
• M ⊨w,t ⟦Y⟧p iff (∃tree : (∃S, t′ : (S, t, t′) ∈ [tree]current(Y)) ∧ (∀S : S ∈ Sw,t ∧ (∃t′ : (S, t, t′) ∈ [tree]current(Y)) → (∃t′ : (S, t, t′) ∈ [tree]current(Y) ∧ M ⊨w,t′ ⟦rest(Y)⟧p)))

This definition says that an agent, x, knows how to achieve p relative to strategy Y iff there is a tree of actions for it such that it can achieve the 'current' part of its strategy by following that tree, and that on all scenarios where it does so it either achieves p or can continue with the 'rest' of its strategy (and knows how to achieve p relative to that). Now K′p may be defined as given below. Kp is unchanged relative to K′p.

• M ⊨w,t K′p iff (∃Y : M ⊨w,t ⟦Y⟧p)

Define a strategy, Y, recursively as below. These definitions allow us to incorporate abstract specifications of the agent's actions from the designer's point of view, and yet admit the purely reactive aspects of its behavior in case 1 below.
0. skip: the empty strategy
1. do(p): a condition to be achieved
2. Y1; Y2: a sequence of strategies
3. if p then Y1 else Y2: a conditional strategy
4. while p do Y1: a conditional loop

By definition, 'skip; Y' = Y, for all Y. The 'current' part of a strategy depends on the current situation. For case 0, 'current(Y)' is 'skip'; for case 1, it is 'Y' itself; for case 2, it is 'current(Y1)'; for case 3, if p holds in the current situation, it is 'current(Y1),' else 'current(Y2)'; for case 4, if p holds (in the current situation), it is 'current(Y1),' else 'skip.' The 'rest' of a strategy is what is left after the current part is performed. For cases 0 and 1, 'rest(Y)' is 'skip'; for case 2, it is 'rest(Y1); Y2'; for case 3, if p holds, it is 'rest(Y1),' else 'rest(Y2)'; for case 4, if p holds, it is 'rest(Y1); while p do Y1,' else 'skip.'

Since 'current(Y)' is always of the form 'skip' or 'do(p),' [tree]current(Y) is invoked (for a given tree) only for cases 0 and 1, and is defined for them below. The expression [tree]current(Y) denotes a restricted intension of 'tree' relative to an agent (implicit here) and the substrategy it is achieving by following 'tree'; in the sequel, I refer to it as the strategy-relative intension of 'tree.' This expression considers only those subscenarios where the success of the given substrategy is assured, i.e., forced, by the agent; fortuitously successful subscenarios are excluded. Briefly, [tree]current(Y) is the set of subscenarios corresponding to executions of 'tree' such that they lead to the strategy 'current(Y)' being forced. Here t, t′ ∈ S; the world is w. (S, t, t′) ∈ [tree]current(Y) iff:

0. current(Y) = skip: this is achieved by the empty tree; i.e., tree = ∅ and t = t′.
1. current(Y) = do(p): 'tree' follows 'do(p)' iff the agent can achieve p in doing it.
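The 'current'/'rest' decomposition just described is essentially a small interpreter over strategy syntax. A minimal Python sketch, under a hypothetical tuple encoding of strategies (the encoding and the situation representation are illustrative assumptions, not the paper's notation):

```python
# Strategies as tuples:
#   ('skip',) | ('do', p) | ('seq', Y1, Y2) | ('if', q, Y1, Y2) | ('while', q, Y1)
# where p names a condition to achieve and q is a predicate on the
# current situation.

SKIP = ('skip',)

def current(Y, sit):
    """The part of Y now up for execution, per cases 0-4 above."""
    tag = Y[0]
    if tag == 'skip':  return SKIP
    if tag == 'do':    return Y
    if tag == 'seq':   return current(Y[1], sit)
    if tag == 'if':    return current(Y[2] if Y[1](sit) else Y[3], sit)
    if tag == 'while': return current(Y[2], sit) if Y[1](sit) else SKIP

def rest(Y, sit):
    """What remains of Y after current(Y) is performed, simplifying
    with the identity skip; Y = Y."""
    tag = Y[0]
    if tag in ('skip', 'do'): return SKIP
    if tag == 'seq':
        r = rest(Y[1], sit)
        return Y[2] if r == SKIP else ('seq', r, Y[2])
    if tag == 'if':    return rest(Y[2] if Y[1](sit) else Y[3], sit)
    if tag == 'while':
        if not Y[1](sit): return SKIP
        r = rest(Y[2], sit)
        return Y if r == SKIP else ('seq', r, Y)

# Unravelling 'while q do do(p)' in a situation where q holds:
q = lambda sit: sit['q']
Y = ('while', q, ('do', 'p'))
sit = {'q': True}
print(current(Y, sit))            # ('do', 'p')
print(rest(Y, sit) == Y)          # True: rest(do(p)) = skip, so skip; Y = Y
```

Note that, as in the paper, every value of `current` is of the form 'skip' or 'do(p)', so only those two cases need a strategy-relative intension.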
If tree = ∅ then t = t′ ∧ M ⊨w,t p; else M ⊨w,t ⟦tree⟧p ∧ (∃t″ : t < t″ ≤ t′ ∧ (S, t, t″) ∈ [root] ∧ (∃sub : sub ∈ tree.subtrees ∧ (S, t″, t′) ∈ [sub]current(Y))).

Now for some intuitions about the definition of know-how relative to a strategy. The execution of a strategy by an agent is equivalent to its being unraveled into a sequence of substrategies of the form do(p). The agent follows each by doing actions prescribed by some tree. Thus the substrategies serve as abstractions of trees of basic actions. In this way, the definition of know-how exhibits a two-layered architecture of agents: the bottom layer determining how substrategies of limited forms are achieved, and the top layer how they are composed to form complex strategies.

7 Axioms for Strategic Know-how

Since strategies are structured, the axiomatization of know-how relative to a strategy must involve their structure. This comes into the axiomatization of ⟦Y⟧p. Many of these axioms mirror those for standard DL modalities, but there are important differences:

1. ⟦skip⟧p ≡ p
2. ⟦Y1; Y2⟧p ≡ ⟦Y1⟧⟦Y2⟧p
3. ⟦if q then Y1 else Y2⟧p ≡ (q → ⟦Y1⟧p) ∧ (¬q → ⟦Y2⟧p)
4. ⟦while q do Y1⟧p ≡ (q → ⟦Y1; while q do Y1⟧p) ∧ (¬q → p)
5. ⟦do(q)⟧p ≡ (q ∧ p) ∨ (¬q ∧ (∃a : E⟨a⟩true ∧ A[a]⟦do(q)⟧p))

Theorem 2 The above axiomatization is sound and complete for any model M as described in §3, with respect to the definition of K′ relative to a strategy.

Proof. Soundness: The proof is not included here to save space. However, it is quite simple for cases 1 through 4; for case 5, it follows the proof in Theorem 1.

Completeness: As in §5, construct a model whose indices are maximally consistent sets of sentences of the language. Completeness is proved here only for formulae of the form ⟦Y⟧p, and means that M ⊨w,t ⟦Y⟧p entails ⟦Y⟧p ∈ (w, t), the corresponding point in the model. The proof is by induction on the structure of strategies. M ⊨w,t ⟦skip⟧p iff M ⊨w,t p. But this is ensured by axiom 1.
Similarly, M ⊨w,t ⟦if q then Y1 else Y2⟧p iff there exists a tree that follows current(if q then Y1 else Y2) and some further properties hold of it. But q implies that current(if q then Y1 else Y2) = current(Y1), and ¬q implies that it equals current(Y2). Similar conditions hold for the function 'rest.' Therefore, by axiom 3, M ⊨w,t ⟦if q then Y1 else Y2⟧p iff (if q then M ⊨w,t ⟦Y1⟧p else M ⊨w,t ⟦Y2⟧p). By induction, since Y1 and Y2 are structurally smaller than the above conditional strategy, we have that ⟦if q then Y1 else Y2⟧p ∈ (w, t). The case for iterative strategies is analogous, since axiom 4 captures the conditions for 'current' and 'rest' of an iterative strategy.

The case of ⟦do(q)⟧p turns out to be quite simple. This is because the right to left direction of axiom 5 is entailed by the following pair, which are (almost) identical to the axioms for reactive know-how given in §5. Completeness for this case mirrors the proof of completeness given there and is not repeated here.

• (q ∧ p) → ⟦do(q)⟧p
• (∃a : E⟨a⟩true ∧ A[a]⟦do(q)⟧p) → ⟦do(q)⟧p

Surprisingly, the trickiest case turns out to be that of sequencing. When Y1 = skip, the desired condition for axiom 2 follows trivially. But in the general case, when Y1 ≠ skip, the satisfaction condition for ⟦Y1; Y2⟧p recursively depends on that for ⟦rest(Y1); Y2⟧p. However, this strategy does not directly feature in axiom 2. Also, rest(Y1); Y2 may not be a structurally smaller strategy than Y1; Y2 (e.g., if Y1 is an iterative strategy, rest(Y1) may be structurally more complex than Y1). However, we can invoke the fact that here (as in standard DL), iterative strategies are finitary; i.e., they lead to only a finite number of repetitions.
Thus we can assume that for any strategy Y ≠ skip, world w and time t, the execution tree in the model (i.e., as induced by < and restricted to the execution of Y) has a finite "depth," i.e., number of invocations of 'current.' If Y is followed at w, t, then 'rest(Y)' is followed at those points where 'current(Y)' has just been followed. The depth of 'rest(Y)' = (depth of Y) − 1. The depth of 'skip' is 0. Thus the depth is a metric on which to do the necessary induction. The remainder of the proof is quite simple. Thus for all cases in the definition of a strategy, M ⊨w,t ⟦Y⟧p entails ⟦Y⟧p ∈ (w, t). □

8 Consequences

Formal definitions are supposed to help clarify our intuitions about the concepts formalized. The above axiomatizations do just that. Some of their consequences are listed and proved below. These consequences are important and useful since they help us better delineate the properties of the concept of know-how. While the reactive and strategic definitions of K′ are significantly different in their implementational import, they share many interesting logical properties.

Theorem 3 K′p ∧ AG(p → q) → K′q

Proof Idea. This follows from the corresponding result for [a]p mentioned in §5. □

Theorem 4 K′K′p → K′p

Proof Idea. Construct a single tree out of the trees for K′K′p. □

This seems obvious: if an agent can ensure that it will be able to ensure p, then it can already ensure p. But see the discussion following Theorem 6. While K′ captures a very useful concept, it is sometimes preferable to consider a related concept, which is captured by K. K is meant to exclude cases such as the rising of the sun (assuming it is inevitable); intuitively, an agent can be said to know how to achieve something only if it is not inevitable anyway. K can be axiomatized simply by adding the following axiom:

3. Kp ≡ (K′p ∧ ¬AFp)

However, this also makes the logic of K non-normal, i.e., not closed under logical consequence.
This is because the proposition entailed by the agent's given know-how may be something inevitable. Therefore, despite Theorem 3, the corresponding statement for K fails. Indeed, we have

Theorem 5 ¬Ktrue

Proof. We trivially have AFtrue, which by axiom 3 entails ¬Ktrue. □

Theorem 6 p → ¬Kp

Proof. Trivially again, since p → AFp. □

I.e., if p already holds then the agent cannot be felicitously said to know how to achieve it. By a simple substitution, we obtain Kp → ¬KKp, whose contrapositive is KKp → ¬Kp. This is in direct opposition to Theorem 4 for K′, and is surprising, if not counterintuitive. It says that an agent who knows how to know how to achieve p does not know how to achieve p, simply because if it already knows how to achieve p it may not know how to know how to achieve it. This too agrees with our intuitions. The explanation for this paradox is that when we speak of nested know-how (and in natural language, we do not do that often), we use two different senses of know-how: K for the inner one and K′ for the outer one. Thus the correct translation is K′Kp, which entails K′p as desired. If p describes a condition that persists once it holds (as many p's in natural language examples do) then we also have Kp.

9 Conclusions

I presented two sound and complete logics for non-reductive and intuitive definitions of the know-how of an intelligent agent situated in a complex environment. This formalization reveals many interesting properties of know-how and helps clarify our intuitions. It also simplifies the application of the concept of know-how to the design and analysis of situated intelligent agents. Of special technical interest is the operator expressed by ⟦⟧, which is different from those in standard Dynamic Logic. This operator provides the right formal notion with which to capture the know-how of an agent whose behavior is abstractly characterized in terms of strategies.
The differences between the reactive and strategic senses of know-how are mainly concerned with the complexity of the designs of the agents to whom they may be attributed. The power of the strategic sense arises from the fact that it lets an agent act, and a designer reason about it, using "macro-operators."

References

Agre, Philip and Chapman, David 1987. Pengi: An implementation of a theory of activity. In AAAI-87. 268-272.

Chellas, Brian F. 1980. Modal Logic. Cambridge University Press, New York, NY.

Emerson, E. A. 1989. Temporal and modal logic. In van Leeuwen, J., editor 1989, Handbook of Theoretical Computer Science. North-Holland Publishing Company, Amsterdam, The Netherlands.

Hewitt, Carl 1988. Organizational knowledge processing. In Workshop on Distributed Artificial Intelligence.

Konolige, Kurt G. 1982. A first-order formalism of knowledge and action for multi-agent planning. In Hayes, J. E.; Michie, D.; and Pao, Y., editors 1982, Machine Intelligence 10. Ellis Horwood Ltd., Chichester, UK. 41-73.

Kozen, Dexter and Tiuryn, Jerzy 1989. Logics of programs. In van Leeuwen, J., editor 1989, Handbook of Theoretical Computer Science. North-Holland Publishing Company, Amsterdam, The Netherlands.

Moore, Robert C. 1984. A formal theory of knowledge and action. In Hobbs, Jerry R. and Moore, Robert C., editors 1984, Formal Theories of the Commonsense World. Ablex Publishing Company, Norwood, NJ. 319-358.

Morgenstern, Leora 1987. A theory of knowledge and planning. In IJCAI-87.

Rosenschein, Stanley J. 1985. Formal theories of knowledge in AI and robotics. New Generation Computing 3(4).

Singh, Munindar P. 1990. Towards a theory of situated know-how. In 9th European Conference on Artificial Intelligence.

Singh, Munindar P. 1991a. Group ability and structure. In Demazeau, Y. and Müller, J.-P., editors, Decentralized Artificial Intelligence, Volume 2. Elsevier Science Publishers B.V. / North-Holland, Amsterdam, Holland.

Singh, Munindar P.
1991b. Towards a formal theory of communication for multiagent systems. In International Joint Conference on Artificial Intelligence.
Provably Correct Theories of Action
Fangzhen Lin and Yoav Shoham
Computer Science Department
Stanford University
lin@cs.stanford.edu shoham@cs.stanford.edu

Abstract

Research on nonmonotonic temporal reasoning in general, and the Yale Shooting Problem in particular, has suffered from the absence of a criterion against which to evaluate solutions. Indeed, researchers in the area disagree not only on the solutions but also on the problems. We propose a formal yet intuitive criterion by which to evaluate theories of actions, define a monotonic class of theories that satisfy this criterion, and then provide their provably-correct nonmonotonic counterpart.

1 Introduction

The histories of the frame problem [McCarthy and Hayes 1969], and of the particular Yale Shooting Problem (YSP) which has become its best known illustration [Hanks and McDermott 1986], have followed a disturbing pattern. The frame problem itself, although introduced in the context of formalizing common sense, was never formally defined, and was only illustrated through suggestive examples. The attempt in [Shoham and McDermott 1988] to analyze issues in nonmonotonic temporal reasoning in a principled fashion was itself informal, and did not provide precise definitions. This is an initial disturbing factor. A second disturbing factor is that, despite a lack of a formal definition, arguments were made that a particular collection of formal tools, namely nonmonotonic logics, would 'solve' the problem. Again, no formal analysis was provided, and the claim was based only on sketchy examples. A third disturbing factor is that the response in the community was not that the above claim is ill defined, but that it's false. In particular, the Yale Shooting Problem was proposed as an illustration of the falseness. Given these shaky foundations, it is not surprising that subsequent research on the topic became increasingly splintered and controversial.
From the outset there were arguments that the YSP is not a problem at all [Loui 1987]. Simultaneously there were several proposed solutions to it [Shoham 1986, Lifschitz 1986, Kautz 1986, Lifschitz 1987, Haugh 1987]. Then there were counter-arguments that each of these solutions was 'wrong,' in that they didn't solve other problems such as the 'qualification problem' or the 'ramification problem,' or that they supported 'prediction' but not 'explanation.' New examples were then devised, with names such as the Stolen Car Problem [Kautz 1986] and the Stanford Murder Mystery [Baker 1989]. The responses again varied, including dismissals of some of the complaints, as well as new solutions to the YSP that allegedly avoided some of these problems [Morgenstern and Stein 1988, Lifschitz and Rabinov 1989, Baker and Ginsberg 1989, etc.]. Each solution has attracted some measure of criticism.

The lack of precise criteria against which to evaluate theories of action does not mean that the research has been worthless; quite the contrary. It is widely recognized that the frame problem is real and that its identification was insightful, even if it has not yet been formally defined. Similarly, the YSP led to major improvements in the understanding of nonmonotonic logics and their applications, as well as to better understanding of formal temporal reasoning.

Nevertheless, in order to better understand what we have achieved so far, it is important to arrive at precise criteria for the adequacy of theories of action. In this article we take a step in that direction. Specifically, we identify a formal yet intuitive adequacy criterion, prove a certain class of monotonic theories of action adequate relative to this criterion, and then show an equivalent nonmonotonic counterpart for a significant subclass of the theories. To our knowledge this is the first instance of provably-correct nonmonotonic temporal reasoning. The article is structured as follows.
In the following section we illustrate our approach through an analogy with data bases and the closed-world assumption. We then start a technical development; after short logical preliminaries in section 3, in section 4 we define the epistemological adequacy of a (monotonic or nonmonotonic) theory of action. In section 5 we illustrate the notion through the YSP. In section 6 we define a wide class of monotonic theories of action, and show those to be epistemologically adequate. In section 7 we show that a version of circumscription captures a subclass of the theories. In section 8 we point to our future work and make some concluding remarks.

2 The approach

Recall the following intuitive explanation of the frame problem. Suppose we are trying to formalize the effects of actions. Usually, an action causes only a small number of changes. For example, when we paint a block, only the color of the block changes. Most of the other facts, such as the location of the block, the smell of the paint, etcetera, do not change. The frame problem is the problem of representing concisely these numerous facts that are unaffected by an action.

Our approach in making the problem precise is conceptually very simple, and perhaps best illustrated by an analogy with simple data bases. Consider a data base of flight connections between pairs of cities. One way to structure the data base is by a set of assertions of the form Flight(x, y) and ¬Flight(x, y), where for each pair of cities A, B exactly one of Flight(A, B) and ¬Flight(A, B) appears in the data base. The semantics of this data base are those of classical logic. This is an epistemologically complete representation since for any pair of cities it tells one whether the two are connected.
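The explicit flight data base, together with the closed-world shortcut discussed next, can be sketched concretely. A minimal Python sketch (the city names and routes are illustrative):

```python
# Only positive Flight facts are stored; a negative literal ¬Flight(A, B)
# is inferred from the absence of Flight(A, B). `completion` regenerates
# the equivalent explicit (monotonic) representation over a known,
# finite set of cities.

flights = {('SFO', 'LAX'), ('LAX', 'JFK')}      # hypothetical data
cities = {'SFO', 'LAX', 'JFK'}

def cwa_entails(lit):
    """lit = (positive?, origin, destination); answers every query,
    so the representation is epistemologically complete."""
    pos, a, b = lit
    return (a, b) in flights if pos else (a, b) not in flights

def completion():
    """Explicit counterpart: one literal per pair of cities."""
    return {((a, b) in flights, a, b) for a in cities for b in cities}

print(cwa_entails((False, 'JFK', 'SFO')))   # True: inferred by default
# The two representations entail exactly the same literals:
assert all(cwa_entails(l) for l in completion())
```

The final assertion is the "relative" criterion in miniature: the sparse data base plus the default rule agrees with the explicit data base on every query.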
However, while epistemologically adequate, the representation is pragmatically inadequate: it requires representation of all pairs of cities, whereas the connectivity graph is usually quite sparse. The solution is, of course, to omit all the ¬Flight(x, y) assertions, and infer ¬Flight(A, B) 'by default' in the absence of Flight(A, B). This is a simple application of the so-called closed world assumption (CWA) [Reiter 1978], and the equivalent monotonic formulation can be regenerated from the abbreviated representation through data base completion [Clark 1978]. This concise representation is epistemologically complete since it too entails Flight(A, B) or ¬Flight(A, B) for all pairs of cities A and B, albeit nonmonotonically. Furthermore, the nonmonotonic version is epistemologically correct in a stronger sense: it is sound and complete relative to the monotonic version, since they entail the same facts.

Thus there are two criteria for evaluating the epistemological adequacy of a theory. Both monotonic and nonmonotonic theories can be tested for their epistemological completeness; this is an absolute criterion. In addition, nonmonotonic theories can be tested for equivalence to a given, monotonic, often better understood, and typically much larger, theory; this is a relative criterion.

In principle our treatment of theories of actions will be identical; we will require them to be complete, and furthermore evaluate a nonmonotonic theory relative to an equivalent and larger monotonic one. The complications will arise from a more complex definition of epistemological completeness, a resulting difficulty in determining whether a given theory is indeed epistemologically complete, and a nonmonotonic mechanism that is more complex than CWA.

3 Logical Preliminaries

We shall base our presentation on the situation calculus [McCarthy and Hayes 1969], although we believe that a similar treatment is possible in other frameworks such as temporal logics.
The standard situation calculus, which we adopt here, precludes the representation of certain notions such as concurrent actions. In future publications we will address those. In this section we review the language for discussing the situation calculus. We do this briefly and almost apologetically since we realize that the situation calculus is very well known; however, we feel that in this article it is important to be precise about the language.

Our language L is a three-sorted first-order one with equality. Its three sorts are:

1. Situation sort: with situation constants S1, S2, ..., and situation variables s, s1, s2, .... We will use S, S', ... as meta-variables for ground situation terms.

2. Action sort: with action constants A1, A2, ..., and action variables a, a1, a2, .... We will use A, A', ... as meta-variables for action constants.

3. Propositional fluent sort: with fluent constants P1, P2, ..., and fluent variables p, p1, p2, .... We will use P, P', ... as meta-variables for fluent constants.

We have a binary function result whose first argument is of action sort, second argument of situation sort, and whose value is of situation sort. Thus for any action term A and any situation term S, result(A, S) is a situation term. Intuitively, result(A, S) is the resulting situation when the action A is performed in the situation S. We also have a binary predicate holds whose first argument is of fluent sort, and second argument of situation sort. Intuitively, holds(P, S) means that the fluent P is true in the situation S.

As with any other language, we may interpret L classically, assuming the standard notion of entailment, or nonmonotonically, using some form of nonmonotonic entailment.

4 Epistemologically complete theories of action

Suppose we want to use our language to state that the action toggle changes the truth value of the fluent P1.
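For readers who like executable notation, here is a minimal sketch (my own encoding, not the authors') of the ground terms of this language, with result(A, S) building nested situation terms from an action constant and a situation term:

```python
# Illustrative encoding of ground situation-calculus terms as nested tuples.
# "S0" plays the role of a situation constant; actions are string constants.

S0 = "S0"

def result(action, situation):
    """result(A, S): the situation term denoting the outcome of doing A in S."""
    return ("result", action, situation)

# The ground term result(Shoot, result(Wait, result(Load, S0))):
s = result("Shoot", result("Wait", result("Load", S0)))
print(s)
```

Nothing here assigns truth values; it only shows how situation terms nest, which is all the language itself commits to.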
We may wish to use the following axiom:

∀s.(holds(P1, s) ≡ ¬holds(P1, result(toggle, s)))  (1)

Intuitively speaking, this axiom alone is not enough. For example, it tells us nothing about the effects of toggle on P2; for that we would need to add the following so-called frame axiom:

∀s.(holds(P2, s) ≡ holds(P2, result(toggle, s)))  (2)

350 TIME AND ACTION

Clearly we need such a frame axiom for every fluent that is different from P1. But are those frame axioms enough? In other words, do these axioms together completely formalize our knowledge about toggle? For this simple example it is easy to convince oneself that the above axioms indeed do completely formalize the action toggle. In general, however, the answer may not be obvious, and it is essential for us to have a precise definition of when a first-order theory is a complete formalization of an action.

In this paper we are only concerned with deterministic actions. Intuitively speaking, a theory T is a complete formalization of a deterministic action A if, given a complete description of the initial situation, it enables us to deduce a complete description of the resulting situation after A is performed. We now proceed to make this intuition precise.

First, we notice that in actual applications, it is most convenient to talk about whether a description of a situation is complete with respect to a set of fluents in which we are interested. Thus we shall define conditions under which a theory is complete about an action and with respect to a set of fluents. This fixed set of fluents plays a role similar to that of the Frame predicate in [Lifschitz 1990]. In the following, let P be a fixed set of fluent constants.

Definition 4.1 A set SS is a state of the situation S (with respect to P) if there is a subset P' of P such that

SS = {holds(P, S) | P ∈ P'} ∪ {¬holds(P, S) | P ∈ P − P'}

Therefore, if SS is a state of S, then for any P ∈ P, either holds(P, S) ∈ SS or ¬holds(P, S) ∈ SS. Intuitively, states completely characterize situations with respect to the fluents in P. Thus we can define that a first-order theory T is epistemologically complete about the action A (with respect to P) if it is consistent, and for any ground situation term S, any state SS of S, and any fluent P ∈ P, either T ∪ SS ⊨ holds(P, result(A, S)) or T ∪ SS ⊨ ¬holds(P, result(A, S)), where ⊨ is classical first-order entailment.

However, as we said earlier, the notion of epistemological completeness is not limited to monotonic first-order theories. In general, for any given monotonic or nonmonotonic entailment ⊨c, we can define when a theory is epistemologically complete about an action according to the entailment ⊨c:

Definition 4.2 A theory T is epistemologically complete about the action A (with respect to P, and according to ⊨c) if T ⊭c False, and for any ground situation term S, any state SS of S, and any fluent P ∈ P, there is a finite subset SS' of SS such that either T ⊨c ∧SS' ⊃ holds(P, result(A, S)) or T ⊨c ∧SS' ⊃ ¬holds(P, result(A, S)).

We notice here that for any sets T and SS, and any formula φ, T ∪ SS ⊨ φ iff there is a finite subset SS' of SS such that T ⊨ ∧SS' ⊃ φ. Thus if we replace ⊨c in Definition 4.2 by classical entailment ⊨, we get the same definition we had earlier for monotonic first-order theories.

Referring back to the data base example from section 2, we note that requiring a theory to be epistemologically complete is analogous to requiring a representation for Flight to tell us, for any (A, B), whether Flight(A, B) is true.

In the following, we shall say that T is a monotonic (nonmonotonic) theory if the entailment relation associated with T is classical (nonmonotonic). Thus if T is an epistemologically complete theory according to classical entailment ⊨, then we say that T is an epistemologically complete monotonic theory.

In the later sections of the article, we first identify a broad class of epistemologically complete monotonic theories of action. These monotonic theories may contain a large number of explicit frame axioms. Then for a class of the monotonic theories, we develop equivalent nonmonotonic ones that do not appeal to explicit frame axioms. However, we first re-examine the Yale Shooting Problem [Hanks and McDermott 1986] as an extended example of the foregoing definitions. The YSP turns out to be a very special case of the class of theories we consider later.

5 The Yale Shooting Problem Revisited

In the YSP we consider three actions: Shoot, Load, and Wait. After Load is performed, the gun is loaded, and if the gun is loaded, then after Shoot is performed, Fred is dead. Thus we have the following two 'causal rules':

∀s.holds(Loaded, result(Load, s))  (3)

∀s(holds(Loaded, s) ⊃ holds(Dead, result(Shoot, s)))  (4)

This theory is of course insufficient to fully capture the effects of the three actions.

5.1 Monotonic completion

One way to achieve epistemological completeness is to supply frame axioms. Let P = {Dead, Loaded}. For the action Shoot, we have that for each P ∈ P,

∀s(¬holds(Loaded, s) ⊃ (holds(P, s) ≡ holds(P, result(Shoot, s))))  (5)

∀s(holds(Loaded, s) ≡ holds(Loaded, result(Shoot, s)))  (6)

For the action Load, we have

∀s(holds(Dead, s) ≡ holds(Dead, result(Load, s)))  (7)

For Wait, we have that for any P ∈ P:

∀s(holds(P, s) ≡ holds(P, result(Wait, s)))  (8)

Let T1 = {(3), (4), (5), (6), (7), (8)}. It is possible to show that the monotonic theory T1 is epistemologically complete about the actions Wait, Shoot, and Load with respect to P. Using first-order logic only, we can answer queries about the theory. As a 'temporal projection' example, we have

T1 ⊨ ∀s.holds(Dead, result(Shoot, Wait, Load, s))

where result(Shoot, Wait, Load, s) is an abbreviation for

result(Shoot, result(Wait, result(Load, s)))

That is, Dead holds after the Load, Wait, and Shoot actions are performed in sequence in any situation. As an example of 'temporal explanation', we have

T1 ⊨ ∀s[holds(Dead, result(Shoot, s)) ∧ ¬holds(Dead, s) ⊃ holds(Loaded, s)]

That is, in any situation, if Dead is not true initially but becomes true after the Shoot action, then Loaded must be true initially.

5.2 Nonmonotonic completion

Although the above monotonic theory is epistemologically complete, it does not solve the frame problem since it contains the explicit frame axioms. We now provide an equivalent nonmonotonic theory that avoids them. It was implied in [McCarthy 1986] that the frame axioms can be replaced by the single axiom

∀p∀a∀s(¬ab(p, a, s) ⊃ (holds(p, s) ≡ holds(p, result(a, s))))  (9)

together with minimizing the abnormality predicate ab with holds allowed to vary. The main technical result of [Hanks and McDermott 1986] is that this does not work. We mentioned in the introduction the slew of proposed solutions, all criticized on some grounds or others. Surprisingly, most of them are actually correct relative to the above monotonic theory. We pick as an example chronological minimization [Shoham 1986], although other proposals, such as [Lifschitz 1986] and [Kautz 1986], would work as well. For the full definition of chronological minimization the reader is referred to the above publication; we only remind the reader that in this framework the preferred models are those in which the minimized predicate is true as late as possible, rather than as infrequently as possible. In our framework, the obvious (partial) temporal ordering on situations is

S < result(Shoot, S) < ...

and so on. Like circumscription, we also need unique names assumptions for chronological minimization:

Loaded ≠ Dead ≠ Shoot ≠ Load ≠ Wait  (10)

We now simply take T2 to be the conjunction of (3), (4), (9), and (10), chronologically minimizing ab in T2. It is now possible to show that T1 ∪ {(10)} and T2 are equivalent: for any φ in the language of T1, T1 ∪ {(10)} ⊨ φ iff T2 ⊨c φ, where ⊨ is classical entailment and ⊨c is the nonmonotonic entailment. In particular, we have that the nonmonotonic theory T2 is epistemologically complete.

We remark here that we have avoided formally claiming that T2 solves the frame problem for the YSP. The reason is that we do not yet have a formal criterion to decide when a representation is concise enough to qualify as a solution to the frame problem. Until we have one, the frame problem will continue to contain an informal factor. However, this does not affect our claim about provable correctness of theories of action.

6 A class of complete causal theories

In this section we identify a class of monotonic causal theories that are epistemologically complete. In reasoning about action, our knowledge can generally be divided into two kinds. One is about the environment where the actions take place, and is commonly referred to as domain constraints. The other is about the actions themselves, and is usually called causal rules. Causal rules only tell us the direct effects of the actions. In different environments, an action may have different side effects. Side effects are determined by the direct effects and the domain constraints. A fact that is neither a direct effect nor a side effect of an action is assumed to be unchanged by the action. This motivates the following definition. Again in the following, we fix a set of fluents P.

Definition 6.1 Let C(s) and Ri(s), i = 1, ..., n, n ≥ 0, be formulas with a free variable s. The causal theory about the action A with the domain constraint C, and the direct effects P1, ..., Pn under preconditions R1, ..., Rn, respectively, is the following set of sentences. The domain constraint:

∀s.C(s)  (11)

For each 1 ≤ i ≤ n, the causal rules:

∀s(Ri(s) ⊃ holds(Pi, result(A, s)))  (12)

For any subset M of N = {1, ..., n}, and any P ∈ P, if

⊭ ∀s(∧_{i∈M} holds(Pi, s) ∧ C(s) ⊃ holds(P, s))

and

⊭ ∀s(∧_{i∈M} holds(Pi, s) ∧ C(s) ⊃ ¬holds(P, s))

then the following frame axiom:

∀s(∧_{i∈N−M} ¬Ri(s) ⊃ (holds(P, s) ≡ holds(P, result(A, s))))  (13)

We notice here that the above definition is for a single action. The causal theory for a set of actions is the union of the causal theories for the actions in the set.

We now formulate a sufficient condition under which the monotonic causal theories are epistemologically complete. Again, we fix a set of fluents P. Let T0 consist of the domain constraint (11) and, for each 1 ≤ i ≤ n, the causal rule (12). For any state SS of S, let

result(A, SS) = {holds(P, S') | P ∈ P, SS ∪ T0 ⊨ holds(P, S')}
∪ {¬holds(P, S') | P ∈ P, SS ∪ T0 ⊨ ¬holds(P, S')}
∪ {holds(P, S') | holds(P, S) ∈ SS, SS ∪ T0 ⊭ ¬holds(P, S')}
∪ {¬holds(P, S') | ¬holds(P, S) ∈ SS, SS ∪ T0 ⊭ holds(P, S')}

where S' = result(A, S). We see that result(A, SS) is a state of S' if SS ∪ T0 is consistent.

Theorem 1 Let T be the causal theory about the action A with the domain constraint C, and direct effects P1, ..., Pn under preconditions R1, ..., Rn, n ≥ 0, respectively. T is an epistemologically complete theory about A with respect to P if the following conditions are satisfied:

Condition 1. C(s), R1(s), ..., Rn(s) do not contain any situation term other than s.

Condition 2. For any state SS of S, either SS ⊨ φ(S) or SS ⊨ ¬φ(S), where φ(S) ∈ {C(S), R1(S), ..., Rn(S)}.

Condition 3. ∀s.C(s) is consistent.

Condition 4. For any state SS of S, if SS ⊨ C(S), then result(A, SS) is consistent, and result(A, SS) ⊨ C(result(A, S)).

In most cases, Conditions 1 to 3 are easy to check. Condition 4 is the only one that is nontrivial. Although Theorem 1 is about a single action, it is easily extendable to multiple actions. This is because the conditions guarantee that all of the actions must be independent. For example, Condition 1 excludes any action terms from appearing in C(s), R1(s), ..., and Rn(s). The class of the causal theories that satisfy the conditions includes most of the blocks world examples found in the literature. If we ignore the predicates frame and possible in [Lifschitz 1990a], then our class includes the causal theories in the main theorem of [Lifschitz 1990a].

7 Capturing the causal theories in circumscription

We now proceed to see how to obtain a succinct representation for the frame axioms in the causal theories. We shall use a simple version of circumscription. We expect that other formalisms and theories like chronological ignorance [Shoham 1986], pointwise circumscription [Lifschitz 1986], and Baker's method [Baker 1989] will also work.

We assume that p ∈ P can be formalized by a first-order formula. Formally, we assume that Frame(p) is a formula with a free variable p such that for any interpretation M, and any P* in the fluent domain of M, M ⊨ Frame(P*) iff there is a P ∈ P such that P is interpreted as P*. For example, if P = {P1, P2}, then Frame(p) can be p = P1 ∨ p = P2.

Let ab(p; s; a) be the abbreviation of the following formula:

Frame(p) ∧ (holds(p, s) ≡ ¬holds(p, result(a, s)))

Our circumscriptive policy will be that for any given situation S and any given action A, we minimize ab(p; S; A) as a unary formula of p with holds(p, S) fixed but holds(p, S') allowed to vary for every S' that is different from S. In order to do that, we extend our language to include a new predicate holds' that is similar to holds. Then for any formula W, we minimize ab (as a unary formula of p) in the following formula with holds' fixed and holds allowed to vary:

∀p(holds(p, s) ≡ holds'(p, s)) ∧ W

Also, in order to use circumscription, we need some unique names axioms. We suppose there is an axiom U1 that captures the unique names assumption for fluents in P.
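The persistence behaviour that the YSP frame axioms (5)-(8) encode can be read operationally. The sketch below is my own state-progression rendering, not the paper's formalism: a state assigns a truth value to each fluent in P = {Loaded, Dead}, the causal rules (3)-(4) supply direct effects, and everything else persists, which is exactly what the frame axioms license:

```python
# Assumed operational reading of the monotonic YSP theory T1 (not from the paper).

def progress(action, state):
    new = dict(state)                      # frame axioms: persistence by default
    if action == "Load":
        new["Loaded"] = True               # rule (3): direct effect of Load
    elif action == "Shoot" and state["Loaded"]:
        new["Dead"] = True                 # rule (4): effect under its precondition
    # "Wait" has no direct effects: axiom (8) is pure persistence
    return new

def run(actions, state):
    for a in actions:
        state = progress(a, state)
    return state

# Temporal projection: Dead holds after Load; Wait; Shoot, from any initial state.
s = run(["Load", "Wait", "Shoot"], {"Loaded": False, "Dead": False})
print(s)
```

Because each state is complete over P and progress is a function, every query about the resulting situation gets a definite answer, mirroring the epistemological completeness of T1.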
We also suppose U2 is the following axiom:

∀a∀s.earlier(s, result(a, s))
∧ ∀s∀s1∀s2(earlier(s, s1) ∧ earlier(s1, s2) ⊃ earlier(s, s2))
∧ ∀s∀s1(earlier(s, s1) ⊃ s ≠ s1)

where earlier is a new binary predicate. The purpose of U2 is to capture the following unique names axiom:

S ≠ result(A, S) ≠ result(A, result(A, S)) ≠ ...

Now we can state the following result.

Theorem 2 Under the assumptions and conditions in Theorem 1, for any formula φ in L (our original language without holds'), T ∪ {U1, U2} ⊨ φ iff ∀s.Circum(W; ab(p; s; A); holds) ⊨ φ, where Circum(W; ab; holds) is the circumscription of ab in W with holds allowed to vary, and W is the following formula:

∀p(holds(p, s) ≡ holds'(p, s)) ∧ U1 ∧ U2 ∧ (∧T0)

where T0 consists of the domain constraint (11) and, for each 1 ≤ i ≤ n, the causal rule (12).

Thus if we define ⊨c such that {W} ⊨c φ iff ∀s.Circum(W; ab(p; s; A); holds) ⊨ φ, then we have the following corollary.

Corollary 2.1 Under the assumptions in Theorem 2, if W is classically consistent, then {W} is an epistemologically complete nonmonotonic theory about A according to ⊨c.

8 Future work and concluding remarks

We have argued that a useful way to tackle the frame problem is to consider a monotonic theory with explicit frame axioms first, and then to show that a succinct and provably equivalent representation using, for example, nonmonotonic logics, captures the frame axioms concisely. The idea is not startling. A similar idea is used in verifying negation-as-failure [Clark 1978]. It seems that several researchers have entertained the idea of verifying a nonmonotonic theory of action against a monotonic one (for example, it is listed as future work in [Winslett 1988]), even if no one to date has actually followed this course. Central to our project is the definition of an epistemological completeness condition for deterministic actions, whose intuitive purpose is to determine whether sufficient axioms are included in a theory.
Our main technical contribution is in formulating a wide class of epistemologically complete monotonic causal theories, and showing that for each of the causal theories there is a succinct representation using a version of circumscription. We notice here that our reformulation of the causal theories in circumscription does not address the qualification problem. This can be easily done by using the method in [Lifschitz 1987].

There are many directions for future work. At the present time, the most important one is to extend our work to allow concurrent actions. Just as fluents can partially describe a situation, we may have action descriptions that partially describe the set of actions taken in any situation. As a result we will be able to infer by default not only what is true in situations, but also what actions have been taken; in the current framework that is not even expressible.

Acknowledgement

We would like to thank Vladimir Lifschitz for stimulating discussions related to the subject of this paper.

References

[1] Baker, A. B. (1989), A simple solution to the Yale shooting problem, in Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, 1989.
[2] Baker, A. B. and M. L. Ginsberg (1989), Temporal projection and explanation, in Proceedings of IJCAI-89, 1989.
[3] Clark, K. (1978), Negation as failure, in Logic and Data Bases, eds. Gallaire and Minker, Plenum Press, 293-322, 1978.
[4] Hanks, S. and D. McDermott (1986), Nonmonotonic logics and temporal projection, Artificial Intelligence, 33 (1987), 379-412.
[5] Haugh, B. (1987), Simple causal minimizations for temporal persistence and projection, in Conference Proceedings of AAAI-87.
[6] Kautz, H. (1986), The logic of persistence, in Proceedings of AAAI-1986.
[7] Lifschitz, V. (1986), Pointwise circumscription, in Proceedings of AAAI-1986.
[8] Lifschitz, V. (1987), Formal theories of action, in Proceedings of IJCAI-87.
[9] Lifschitz, V.
(1990), Frames in the Space of Situations, Artificial Intelligence, 46 (1990), 365-376.
[10] Lifschitz, V. (1990a), Toward a Meta-theory of Action, Draft, 1990.
[11] Lifschitz, V. and A. Rabinov (1989), Miracles in formal theories of action, Artificial Intelligence, 38 (1989), 225-237.
[12] Loui, R. (1987), Response to Hanks and McDermott: temporal evolution of beliefs and beliefs about temporal evolution, Cognitive Science, 11 (1987).
[13] McCarthy, J. (1986), Applications of circumscription to formalizing commonsense knowledge, Artificial Intelligence, 28 (1986), 89-118.
[14] McCarthy, J. and P. Hayes (1969), Some philosophical problems from the standpoint of artificial intelligence, in Machine Intelligence 4, Meltzer, B. and Michie, D. (eds.), Edinburgh University Press.
[15] Morgenstern, L. and L. A. Stein (1988), Why things go wrong: A formal theory of causal reasoning, in Proceedings of AAAI-1988.
[16] Reiter, R. (1978), On closed world data bases, in H. Gallaire and J. Minker (eds.), Logic and Data Bases, Plenum Press, New York, 1978.
[17] Shoham, Y. (1986), Chronological ignorance: time, nonmonotonicity and necessity, in Proceedings of AAAI-86.
[18] Shoham, Y. and D. McDermott (1988), Problems in formal temporal reasoning, Artificial Intelligence, 36 (1988), 49-62.
[19] Winslett, M. (1988), Reasoning about actions using a possible models approach, in Conference Proceedings of AAAI-88.
1991
of tasks in AI, notably planning and prediction, diagnosis and explanation. Recently it has become an object of study in its own right, drawing inspiration from the work of philosophers and logicians as well as more immediately AI-oriented concerns. In this paper I shall examine just one approach, that advocated by Yoav Shoham. In particular, I shall try to lay bare a number of assumptions underlying Shoham's work, all of which I shall call into question. Key assumptions are that causality is an epistemic notion and that causal reasoning is non-monotonic. I do not offer a specific theory of my own, but shall conclude with some suggestions as to the general lines which I feel such a theory ought to follow.

Amongst recent attempts to provide a formal basis for causal reasoning in AI may be mentioned Lifschitz's use of circumscription to secure the desired non-monotonicity of causal rules.

A causal theory, for Shoham, is a collection of causal statements, which are rules of the form

□φ1 ∧ □φ2 ∧ ... ∧ □φm ∧ ◇χ1 ∧ ◇χ2 ∧ ... ∧ ◇χn → □ψ

where the φi, χj, and ψ are atomic sentences; in addition there are some constraints on the time-reference of these atomic constituents (designed to ensure that an effect cannot precede its cause), but the details of these do not immediately concern us. The φs express the active causes, while the χs express the background conditions (the 'causal field') which must obtain in order for the active causes to have their effect. The ◇ operators are there to allow default assumptions to the effect that unless we are explicitly told otherwise, we can assume the background conditions to be present. Shoham illustrates his ideas using an example concerning the engine of a motor-car.
For the sake of variety, I shall consider instead an example in which pressing the button of the doorbell causes the bell to ring, provided the circuit is connected up:

□Press-button(t) ∧ ◇All-working(t) → □Bell-rings(t+1)
□Battery-dead(t) → □¬All-working(t)
□Bell-broken(t) → □¬All-working(t)

Shoham uses a non-monotonic inference mechanism called chronological ignorance, whose effect is to put off positive assertions as late as possible. By a positive assertion is meant one of the form □φ.¹ In the example above, suppose we are given the statement □Press-button(0). Then one possibility is to assume, say, □Battery-dead(0), which would entail □¬All-working(0), and hence would not allow us to conclude anything about time 1 (since the antecedent of the first conditional is falsified). Alternatively, we could refrain from making any positive assumptions about time 0, so that we do not have □¬All-working(0); not having this is tantamount to having ◇All-working(0), and we can now use the first conditional to conclude that □Bell-rings(1).

¹Note that I have taken a few liberties with Shoham's notation; in particular I have 'unreified' his atomic propositions, writing for example Press-button(t) where Shoham would write True(t, t, Press-button); see (Galton 1991).

GALTON 355

Of the two alternatives outlined above, the principle of chronological ignorance prefers the second, since it puts off making positive assertions as late as possible: the first makes a positive assertion about time 0 (namely □Battery-dead(0)) whereas the second only makes a positive assertion about time 1 (namely □Bell-rings(1)).

The modal operators

I shall begin my critique of Shoham's theory by asking what, precisely, the modal operators □ and ◇ are supposed to mean. Shoham is rather vague about this; he says 'I will feel free to describe the □ modality as knowledge and belief interchangeably'. Thus we might choose to read Shoham's □ as something like 'It is known that ...', or alternatively as something like 'It is believed that ...'. Or maybe the knowledge/belief should be located in a knowing/believing subject, say 'I know that ...
’ or ‘I believe that.. . ‘. All this Shoham deliberately leaves inexplicit. Consider now a causal rule such as Ol?ress-button(t) A OAll-working(t) ---) OBell-rings(t+l). Ifweread Oas ‘Iknowthat...‘,thenthisrulemustberead as saying something like (1) If I know that the button is pressed at t, and I do not know that all is not working at t, then I know that the doorbell rings at t + 1. If on the other hand we read 0 as ‘I believe that . . . ‘, then the rule comes out rather as saying that (2) If I believe that the button is pressed at t, and I do not believe that all is not working at t, then I believe that the doorbell rings at t + 1. Now (1) will clearly not do. Suppose button is pressed at t; suppose also that, u thebatteryisdeadatt,sothatitisnotthecasethatallis working at t. In therefore, contrary to reading (1) of possibly know that it will ring then is in fact false). What about (2)? This is more plausible, possible to believe a falsehood. But as it stands, it still can’t be right: what we want is a causal law, not a psychological one, yet all (2) gives us is a statement of the dynamics of belief. The causal law cannot in itself induce belief; at best it can give us a ESOIL for believing that such-and-such an effect wilf follow, given certain causes. This suggests that operator 13 is something . . . ’ or, with a personal subject, ‘I have reason to believe that. . . ’ . With this reading, our ex le comes out as (3) If I have reason ieve that the button is pressed at t, amd I do not e reason to believe that all is not ave reason to believe that the It is w comparing this formulation with Geffner ‘s ac- count using the causal operator C (Geffner 1990). In Geffner’s system, the rule about the door-bell might come out as Press-button(t) A -0ut-of-order(t) + CBell-rings(t + 1) i.e., roughly, ‘if the button is pressed at t, and the bell is not out of order at t, then we can explain why the bell rings at t + 1’. 
(Note that, reasonably enough, 'we can explain why p' implies p in Geffner's system.) There is clearly an affinity between expressions like 'I have reason to believe that ...' and those like 'we can explain why ...' which appear in my renderings of Shoham's rules, though they are by no means identical; a sufficiently rich causal theory ought to contain both kinds of expression and articulate the relationship between them.

The operator □ has the following properties as an operator taking propositions to form propositions: (a) if φ is a proposition, then so is □φ; ... make sense; (c) it does not commute with negation: □¬φ is, in general, not equivalent to ¬□φ.

How does the expression 'I have reason to believe that ...' measure up to these properties? As regards item (a), it may seem obvious that if φ is a proposition then so is 'I have reason to believe that φ', but whether this is so depends on what exactly one takes a proposition to be. Specifically, do we require a causal reasoning system to be able to handle expressions of both these kinds, and if so, should these expressions belong to the same syntactic category? We may distinguish assertions about the domain (e.g., door-bells and batteries) on the one hand from statements about our state of knowledge of the domain on the other. These two kinds of statements belong at different levels, for example 'The battery is dead' and 'I have reason to believe that the battery is dead'. It is only in exceptional circumstances that the levels ever ... This applies whether one reads □ as 'I have reason to believe that ...', as I have suggested, or in one of the ways that Shoham ... believe that φ. As regards the evidence, though, we need to consider propositions of the forms ¬□φ and □¬φ respectively. ... if background assumptions C are false, then one believes B ...

Is causality inherently non-monotonic?

... view is the one ... of causal reasoning. We ... the causal relation as existing "out there" in the world. Shoham's advocacy of the subjectivist view rests mainly on the notion of a 'causal field', with the resulting non-monotonicity of causal reasoning. Shoham suggests that causal reasoning is inherently non-monotonic reasoning.
This would imply that if we had perfect knowledge, and hence had no need for non-monotonicity in our reasoning, then we should have no need for causality either. I do not want to deny that non-monotonicity is a feature of everyday reasoning in general, and hence of causal reasoning in particular. For that matter, it is also a feature of reasoning about the weather, but this cannot in itself justify a claim that meteorological reasoning is a form of non-monotonic reasoning, for such a claim distinctly suggests that the weather itself is a non-monotonic phenomenon, whatever that could mean.

Shoham's case for building non-monotonicity in as an intrinsic component of causal reasoning rests on the following considerations. Shoham asks us to consider his motor-car example. Normally, we say that turning the key causes the motor to start, even though what actually enables the motor to start is a conjunction of circumstances including not just turning the key but also the battery's being charged and connected up, etc. We don't say that these are what cause the motor to start, though; we would not normally say, for example, in answer to the question 'Why did the motor start?', 'Because the battery was connected up'. These are things which we tend to assume by default: they are the normal background conditions (the "causal field"), which enable the motor to be started, but do not in themselves suffice to start it. Shoham now asks us to envisage that the key jams in the ON position, so that in order to stop the motor we have to disconnect the battery. We can't afford to get the key fixed, so from now on we start the car by connecting up the battery. Shoham says: 'After a short while it will seem ...'; in other words, causal inference is non-monotonic. I believe that there has been a sleight of hand here.
Shoham has tried to present us with a puzzle: why is it, he seems to ask, that normally we would say that the cause of the motor's starting is the ignition key coming to the ON position, and not the battery's being connected, whereas in the "jammed key" scenario we do say that the connecting of the battery is the cause of the motor's starting, not the key's being in the ON position? He uses this example to motivate the idea of a causal field, without which the whole thing would be, so he would have us believe, inexplicable.

But has Shoham not missed something very obvious? In all the cases, the cause is something that happens, i.e., an event; whereas the causal field is composed of states of affairs, i.e., the states which obtain at the time. I do not claim that this observation amounts to anything more than a demonstration that Shoham's example is ill-chosen. Perhaps it is no more than that, but it does show that one cannot proceed without respecting distinctions of aspectual character between states and events (Galton 1984), a distinction which Shoham explicitly disavows when he says 'I will not introduce a distinction among events, facts, processes, and so on, at the primitive level'.

Of course, even if my criticism of Shoham is right, this does not mean that non-monotonicity is necessarily absent from causal reasoning. What it means, though, is that Shoham is wrong to locate the source of this non-monotonicity in an intrinsic feature of causal reasoning as such, as opposed to reasoning quite generally. It may be that non-monotonicity, as a feature of everyday reasoning, is all-pervasive, and hence interacts with causal reasoning as with other forms of reasoning; where I believe Shoham goes wrong is in supposing that there is something about causal reasoning which is inextricably bound up with non-monotonicity.

Shoham's causal theory rests on two basic assumptions which we have called into question in this paper. They are:

1. Causality is an epistemic notion;
2. Causal reasoning is non-monotonic.

I have argued that causality is not intrinsically epistemic, nor causal reasoning non-monotonic.
Even if epistemic and non-monotonic notions are important in our treatment of causality, they derive from the principles of reasoning generally; causality has no special claim on them. Any non-monotonicity comes not from the language in which the causal relation is described (not, in other words, from the declarative logic of the theory) but from the general principles by which the theory is used. For further discussion of this issue, see (Galton 1991).

Acknowledgements. I should like to thank ... and the two anonymous referees for their careful and constructive comments on the original version of this paper.

2. Just as, traditionally, logic has always been regarded as transcendent, that is, independent of this or that specific subject-matter.

358 TIME AND ACTION

References

Galton, A. 1984. The Logic of Aspect. Oxford, UK: Clarendon Press.

Galton, A. 1991. Reified temporal theories and how to unreify them. To appear in Proceedings of the Twelfth International Joint Conference on Artificial Intelligence.

... theories of action (preliminary report). In Proceedings of the ... International Joint Conference on Artificial Intelligence. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence, Inc.

McCarthy, J., and Hayes, P.J. 1969. Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B., and Michie, D., eds., Machine Intelligence 4: 463-502.

Pearl, J. 1988. Embracing causality in default reasoning. Artificial Intelligence 35(2): 259-271.

Shoham, Y. 1988. Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence. Cambridge, Mass.: MIT Press.

Shoham, Y. 1990. Nonmonotonic reasoning and causation. Cognitive Science 14: 213-252.

GALTON 359
A Logic and Time Nets for Probabilistic Inference

Keiji Kanazawa*
Department of Computer Science
Brown University, Box 1910
Providence, RI 02912
kgk@cs.brown.edu

Abstract

In this paper, we show a new approach for reasoning about time and probability that combines a formal declarative language with a graph representation of systems of random variables for making inferences. First, we provide a continuous-time logic for expressing knowledge about time and probability. Then, we introduce the time net, a kind of Bayesian network for supporting inference with statements in the logic. Time nets encode the probability of facts and events over time. We provide a simulation algorithm to compute probabilities for answering queries about a time net. Finally, we consider an incremental probabilistic temporal database based on the logic and time nets to support temporal reasoning and planning applications. The result is an approach that is semantically well-founded, expressive, and practical.

Introduction

We are interested in the design of robust inference systems for supporting activity in dynamic domains. In most domains, things cannot always be predicted accurately in advance. Thus, the capability to reason about change in an uncertain environment remains an important component of robust performance. Our goal is to develop a computational theory for temporal reasoning under uncertainty that is well suited to a wide variety of domains.

To this end, in past work [Dean and Kanazawa, 1988; Dean and Kanazawa, 1989a], we initiated inquiry into the application of probability theory to temporal reasoning and planning. We chose probability theory because it is the most well understood calculus for uncertainty [Shafer and Pearl, 1990].
This effort to develop probabilistic temporal reasoning came alongside complementary developments by other researchers [Hanks, 1990; Weber, 1989; Berzuini et al., 1989].

*This work was supported in part by a National Science Foundation Presidential Young Investigator Award IRI-8957601 with matching funds from IBM, by the Advanced Research Projects Agency of the Department of Defense monitored by the Air Force Office of Scientific Research under Contract No. F49620-88-C-0132, and by the National Science Foundation in conjunction with the Advanced Research Projects Agency of the Department of Defense under Contract No. IRI-8905436.

In probabilistic temporal reasoning, we have been interested first and foremost in projection tasks. As an example, consider a scenario taken after [Dean and Kanazawa, 1989b]. We are at a warehouse. Sally the truck driver arrives at 2pm, expecting to pick up cargo en route to another warehouse. Trucks arrive to pick up and deliver cargo all day long; a long time can pass before a truck is taken care of. What is the chance that Sally will become fed up and leave without picking up her cargo? What is the chance of her staying in the loading dock 10 minutes? 30 minutes? Should we load Sally's truck earlier than Harry's, who arrived 10 minutes earlier?

In this paper, we extend our past work by defining new languages for expressing knowledge about time and probability, and by designing an algorithm for answering queries about the probability of facts and events over time. First, we present a temporally-quantified propositional logic of time and probability. Then we present the time net, a graph representation of a system of random variables developed for inference about knowledge expressed in the logic. We conclude by considering an incremental probabilistic temporal database based on the logic and time nets.
A Logic of Time and Probability

In this section, we describe a logic for expressing statements about probability and time. Our primary goal is to design a language to support expression of facts and questions such as arise in the warehouse scenario. The key entities of interest are the probability of facts and events over time.

Our logic of time and probability combines elements of the logics of time by Shoham [Shoham, 1988] with elements of the logics of probability by Bacchus [Bacchus, 1988] and Halpern [Halpern, 1989]. Basically, the features related to time are borrowed from Shoham, and the features related to probability are borrowed from Bacchus and Halpern. The resulting logic, called L_cp, is similar in many respects to the logic of time and chance by Haddawy [Haddawy, 1990].

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

L_cp is a continuous-time logic for reasoning about the probability of facts and events over time. It is temporally-quantified, and has numerical functions for expressing probability distributions. L_cp is otherwise propositional, as far as the basic facts and events that it addresses. In this paper, we concentrate on the basic ontology and syntax of L_cp. A detailed exposition of a family of logics of time and probability, both propositional and first order, complete with their semantics and a discussion of complexity, is given in [Kanazawa, Forthcoming].

In L_cp, there is a set of propositions that represent facts or events, such as here(Sally) and arrive(Sally), intended to represent, respectively, the fact of Sally being here, and the event of Sally arriving. A proposition such as here(Sally) taken alone is a fact (or event) type, the generic condition of Sally being here. Each particular instance of here(Sally) must be associated with time points, e.g., via the holds sentence former, in a statement such as holds(2pm, 3pm, here(Sally)).
Such an instance is a fact (or event) token. Each token refers to one instance of its type. Thus if Sally comes to the warehouse twice on Tuesday, there will be two tokens of type here(Sally), which might be referred to as here1(Sally) and here2(Sally). Where unambiguous, we commonly use the same symbol to refer to both a token and its type.

Facts and events have different status in L_cp. A fact is something that, once it becomes true, tends to stay true for some time. It is basically a fluent [McCarthy and Hayes, 1969]. A fact is true for some interval of time [t0, t1] if and only if it is true for all subintervals [u, v] where t0 ≤ u < v ≤ t1. By contrast, an event takes place instantaneously; it stays true only for an infinitesimally small interval of time. We restrict events to be one of three kinds: persistence causation [McDermott, 1982] or enab, persistence termination or clip, or point event. The first two are, respectively, the event of a fact becoming true, and the event of a fact becoming false. For the fact type here(Sally), the enab event type is written beg(here(Sally)) and the clip event type is written end(here(Sally)).

A point event is a fact that stays true for an instant only, such as Sally's arrival. In L_cp's ontology, a point event at time t is a fact that holds true for an infinitesimally small interval of time [t, t+ε], ε > 0. Although a point event such as arrive(Sally) is really a fact, because it is only true for an instant, we usually omit its enab and clip events.

In L_cp sentences, a fact or event token is associated with an interval or instant of time. Facts are associated with time by the holds sentence former as we saw earlier, e.g., holds(2pm, 3pm, here(Sally)), and events are associated with a time instant with the occ sentence former, e.g., occ(t, arrive(Sally)) or occ(t, beg(here(Sally))). occ(t, s) is equivalent to holds(t, t+ε, s) for some ε > 0. Sentences about probability are formed with the P operator.
For instance, the probability that Sally arrives at 2pm is P(occ(2pm, arrive(Sally))). The probability that she never arrives is written as P(occ(+∞, arrive(Sally))). The probability that Sally is here between 1pm and 2pm is P(holds(1:00, 2:00, here(Sally))).

With continuous time, the probability of a random variable at a point in time is always 0. Because of our definition of events as facts that hold for ε, this is not problematic. An alternative is to introduce probability density functions into the language, and define probability in terms of such functions [Kanazawa, Forthcoming]. The probability that a fact holds for an arbitrary interval, P(holds(u, v, φ)), is given by

P(occ(b, beg(φ)) ∧ occ(e, end(φ)) ∧ b ≤ u ∧ v ≤ e)

Knowledge about how events cause other events, and how events follow other events, is expressed by conditional probability statements. Let φ stand for the proposition here(Sally). Then, the probability that Sally is still here, given that she arrived some time ago, can be written as P(holds(·, now, φ) | occ(·, beg(φ))). A function that describes such probability of a fluent's persistence is often called a survivor function [Cox and Oakes, 1984].

In the semantics for L_cp, the meaning of a token is the time over which it holds or at which it occurs. In other words, the meaning of a fact token here(Sally) is the interval over which it holds, and the meaning of an event token arrive(Sally) is the instant at which it takes place. The functions range and date are used to denote those times, i.e., range(here(Sally)) and date(arrive(Sally)). More precisely, L_cp has possible worlds semantics, and the meaning of a token is a set of interval-world pairs {([t00, t01], w0), ([t10, t11], w1), ...}, where the interval is the times over which the token holds or occurs in that world (actually, a potential world history, or chronicle [McDermott, 1982]). This interpretation is similar to that in [McDermott, 1982] and [Shoham, 1988].
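To make the survivor function concrete, the sketch below (my own illustration, not from the paper; the exponential form and the rate parameter are assumptions) computes the probability that a fluent which began at a given time still holds now:

```python
import math

def nexp_survivor(t_beg, t_now, rate):
    """Probability that a fluent that became true at t_beg still holds
    at t_now, under an assumed exponential survivor function.
    Times are in minutes; rate is the termination rate per minute."""
    assert t_now >= t_beg
    return math.exp(-rate * (t_now - t_beg))

# P(holds(., now, here(Sally)) | occ(., beg(here(Sally)))) when Sally
# arrived 20 minutes ago and the termination rate is 0.05 per minute:
p = nexp_survivor(0.0, 20.0, 0.05)
```

With these numbers p is exp(-1), roughly 0.37, and the persistence probability decays smoothly toward 0 as the elapsed time grows, as a survivor function should.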
Together with a probability measure over the chronicles, this defines the semantics for statements about probability in L_cp. For temporal logics that have similar semantics, see [Haddawy, 1990].

Syntax of L_cp

L_cp is a three-sorted logic, with a different sort corresponding to each of the different types of things that we wish to represent: domain objects, time, and probability. These are the object, time, and field sorts.

The object sort O is used to represent objects of interest in the domain. It is a countable collection of objects, including things like Sally. Object constants and predicates are used to form propositions such as here(Sally).

The time sort T is used for expressing time. L_cp is temporally-quantified, and contains both time constants and variables, and the date function. The operator ≺ defines time precedence.

The field sort F is used to represent real-numbered probability. In addition to constants, the field sort contains functions for representing probability distributions. An example is nexp(t, 0.75), which is the negative exponential distribution for time t with the parameter denoted by the field constant 0.75. As in this example, field functions range over both time and field sorts, for specification of probability functions of time.

We now present the syntax for our logic L_cp. In the following, an n(i, j)-ary function is a function of arity n, with i time sort arguments and j field sort arguments, where n = i + j, and both i, j ≥ 0. Given this, we can provide an inductive definition of L_cp, starting with the terms. First of all, a single object constant is an O-term. If p is an n-ary object predicate symbol and o1, ..., on are O-terms, then p(o1, ..., on) is a proposition. The set of all propositions is P. If π ∈ P, then beg(π) and end(π) are events. The set of all events is E. E includes the point events, a subset of P. A single time variable or constant is a T-term. If e is an event, then date(e) is a T-term.
A single field constant is an F-term. If g is an n(i, j)-ary field function symbol, t1, ..., ti are T-terms, and t_{i+1}, ..., tn are F-terms, then g(t1, ..., tn) is an F-term.

The well-formed formulas (wffs) are given by:

- If t1 and t2 are T-terms, then t1 ≺ t2 is a wff.
- If t1 and t2 are T-terms, and π ∈ P, then holds(t1, t2, π) is a wff.
- If t is a T-term, and e ∈ E, then occ(t, e) is a wff.
- If φ is a wff not containing the P operator, and f is an F-term, then P(φ) = f is a wff.

We add standard wffs involving logical connectives and quantifiers, as well as those involving an extended set of time relations such as ⪯ and ∈, the latter of which denotes temporal interval membership. Conditional probability is defined as P(φ | ψ) ≜ P(φ ∧ ψ) / P(ψ), where φ and ψ are wffs.

Some examples of L_cp sentences follow. The predicate always that is always true is defined by:

∀t1, t2 holds(t1, t2, always).

It has the property:

∀t1, t2 P(holds(t1, t2, always)) = 1.0.

The chance that Sally arrives at a certain time today might be given by

∀t ∈ today P(occ(t, arrive(Sally))) = norm(2pm, 0:15, t, ε)

where norm(μ, σ, t, ε) is a field function giving the normal distribution N(μ, σ) for each time t, where today is an abbreviation for the time interval that comprises the current date, and ε is the duration of instantaneous events. The chance that Sally leaves without her cargo, given that she arrives at 2pm, might be given by the negative exponential distribution with parameter 0.75:

∀t ∈ today P(occ(t, leave/load(Sally)) | occ(2pm, arrive(Sally))) = nexp(2pm, t, 0.75, ε)

Figure 1: A time net for here(Sally), with date nodes beg(here(Sally)) and end(here(Sally)) and the range node here(Sally).

Time Nets

Our goal is to construct a probabilistic temporal database for making inferences and supporting queries based on knowledge expressed in a language such as L_cp. In this section, we show how to represent knowledge for making inferences about time and probability in graphs called time nets.
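As a numerical aside on the field functions above: the probability that an instantaneous event falls in a small bucket [t, t+ε] is a difference of Gaussian CDF values. The helper below is my own sketch (the helper names and the minutes-after-noon time scale are assumptions, not from the paper):

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution of N(mu, sigma), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bucket_prob(t, eps, mu, sigma):
    """P(the event occurs somewhere in [t, t+eps]) when its date is
    normally distributed: an interval probability, so it is nonzero
    even though any single time point has probability 0."""
    return norm_cdf(t + eps, mu, sigma) - norm_cdf(t, mu, sigma)

# Arrival distributed as norm(2pm, 0:15, t, eps), with time in minutes
# after noon: the chance Sally arrives in the 5-minute bucket at 2pm.
p = bucket_prob(120.0, 5.0, 120.0, 15.0)
```

Summing bucket_prob over buckets covering the whole day recovers (essentially) total probability 1, the discretized analogue of the density integrating to 1.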
The time net belongs to the popular class of directed graph representations of knowledge about probability often known as the Bayesian network [Pearl, 1988]. A time net is a directed acyclic graph G = (N, A), consisting of a set of nodes N, and a set of arcs A. We often refer to a node N_R representing a random variable R by the random variable itself. Arcs are directed edges that represent direct dependence between random variables. Parents and children of nodes, on the basis of arc direction, are defined in the usual fashion, as are root and leaf nodes.

In a time net, we allow nodes representing random variables that have values that are discrete or continuous. In addition, there are range nodes, which are bivariate nodes representing continuous non-negative length subsets of the real line. As an example, a range node might represent the probability of the range of body weight of an individual. The use of range variables in time nets is for representing time intervals as random variables.

Each node has a distribution for the random variable that it represents. As usual, there are marginal distributions for root nodes, and conditional distributions for other nodes. For details on the workings of Bayesian networks, we refer the reader to [Pearl, 1988].

A time net is used to represent probabilities about fact and event tokens in a body of L_cp sentences as nodes, and direct conditional dependence between facts and events (on the basis of conditional distributions) as arcs. The random variables that a time net represents are the date of event tokens, and the range of fact tokens. For each point event token, we create a continuous valued node, a date node, representing the time at which the event occurs. The distribution for a date node is the marginal distribution for the probability of the event.

Figure 2: A simple time net for arrive and load, with nodes arrive(Sally), load, leave(Sally), and here(Sally).
For each fact token, we create a range valued node, a range node, representing the interval over which the fact holds. The distribution for a range node is the marginal distribution for the interval over which the fact holds. For each fact token, we also create two date nodes representing the enab and clip events for the fact token, the enab node and clip node. A time net corresponding to the fact token here(Sally) is shown in figure 1. For convenience, we name a date node by its event token, and a range node by its fact token. For a given fact token π, we commonly refer to its clip node as the clip node of π's enab node, and so on.

The time net is similar to the network of dates of Berzuini and Spiegelhalter [Berzuini, to appear]. The major difference is that we provide range nodes, and that we have the enhanced ontology of the enab and clip events that we use to represent knowledge embodied in sentences such as those of L_cp.

There is clear dependence between the three types of nodes for a fact token. The range node of a fact token depends on its enab and clip nodes because the range of the former is given by values of the latter. Furthermore, a clip node depends on its enab node: always, because a fact can only become false after it becomes true, and sometimes because we know how long the fact persists. For example, we may know that a telephone weather report always ends 60 seconds after it begins. Note that the arcs between the nodes could conceivably be reversed. For instance, if we know the duration of some fact, such as a concert, and the time at which it ended, we can infer the time at which this concert began. However, note that in general, we do not expect to have the distribution corresponding to a range node.

A simple example of a more general time net is shown in figure 2. This represents a scenario with arrive, load, and leave point events, and the fact token here(Sally). The fluent here(Sally) is caused by arrive(Sally).
The latter event also causes load to occur, which in turn causes leave(Sally). Finally, leave(Sally) causes here(Sally) to terminate. The interval over which here(Sally) holds is [beg(here(Sally)), end(here(Sally))]. Typically, there are delays between arrive(Sally) and load, and between load and leave(Sally), so that here(Sally) is not an instantaneous event.

Figure 3: A time net with potential dates.

A time net showing a more complex example of the same scenario is shown in figure 3. This is essentially the example given in the beginning of the paper. Unlike the previous case, where load was guaranteed to occur, and leave always followed load, in this case, Sally may become impatient and leave without her cargo. Furthermore, load is not an instantaneous event, but a fact that takes some time to complete.

This example potentially introduces circularities into the dependencies. For example, beg(load) depends on Sally not having left, and Sally leaving depends on both beg(load) not having taken place, and on beg(load) having taken place already (causing end(load)). To handle this, first of all, it is necessary to separate out the different cases of Sally leaving. This is done by inventing the new event type "Sally leaves without cargo". We name this event leave/load(Sally). Sally actually leaves if either end(load) or leave/load(Sally) occurs. The circularity is handled with a new kind of event, the potential date [Berzuini, to appear]. A potential date is the date at which an event would take place, if something does not prevent it from happening. For example, po-load, the potential date of beg(load), is the time at which beg(load) would take place if Sally does not become impatient and leave. Similarly, po-leave/load(Sally) is the time at which Sally becomes completely impatient, if beg(load) does not take place.
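The node kinds and arcs described above can be captured in a small data structure. The sketch below is my own illustration, not the paper's implementation (distributions are omitted); it builds the net of figure 1 and recovers an ancestors-first ordering of the kind the simulation algorithm later relies on:

```python
from collections import defaultdict

class TimeNet:
    """Skeleton of a time net: a DAG whose nodes are date nodes (event
    tokens) or range nodes (fact tokens). Only structure is modeled."""
    def __init__(self):
        self.kind = {}                    # node name -> 'date' | 'range'
        self.parents = defaultdict(list)  # node name -> parent names

    def add_node(self, name, kind):
        self.kind[name] = kind

    def add_arc(self, parent, child):
        self.parents[child].append(parent)

    def topo_order(self):
        """Ancestors-first order; valid when the net is acyclic."""
        order, seen = [], set()
        def visit(n):
            if n not in seen:
                seen.add(n)
                for p in self.parents[n]:
                    visit(p)
                order.append(n)
        for n in self.kind:
            visit(n)
        return order

# The net of figure 1: enab and clip date nodes feeding the range node.
net = TimeNet()
net.add_node('beg(here(Sally))', 'date')
net.add_node('end(here(Sally))', 'date')
net.add_node('here(Sally)', 'range')
net.add_arc('beg(here(Sally))', 'end(here(Sally))')  # clip depends on enab
net.add_arc('beg(here(Sally))', 'here(Sally)')
net.add_arc('end(here(Sally))', 'here(Sally)')
```

The order returned by topo_order places beg before end before the range node, which is exactly the sampling order needed: a node is sampled only after all its parents have values.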
The date of the actual beg(load) is po-load iff po-load ≺ po-leave/load(Sally); otherwise, it is +∞. Similarly, leave takes place at po-leave/load(Sally) iff po-leave/load(Sally) ≺ po-load; otherwise, it takes place at some fixed time after end(load).

Simulation Algorithm

In this section, we show a simple algorithm for computing densities in a time net. A time net represents knowledge about time, probability, and dependence embodied in a body of L_cp sentences as a system of random variables. The main use of the time net is for answering queries about probabilities of facts and events over time. When a time net is created, only the marginal densities of root nodes are already known, perhaps along with some evidence about the value of some other random variables. Marginal densities for all other nodes must be computed on the basis of conditional densities and known marginal densities.

The algorithm that we show is a sampling algorithm. Sampling is a widely used method for estimating probability distributions, including its use for estimating probabilities in Bayesian networks (see, e.g., [Pearl, 1988]). Sampling is the best available general method for estimating densities for continuous variables, and it has been employed for continuous-time probabilistic reasoning [Berzuini et al., 1989]. The algorithm given here is simpler than those previously proposed. It is a forward sampling method, suitable principally for predictive inference. Although more sophisticated algorithms are available, the algorithm is conceptually simple, and illustrates the essentials. The algorithm is easily extended to handle explanatory inference, for instance, by likelihood weighting [Shachter and Peot, 1991].

The central idea in sampling algorithms is to estimate variables, in our case densities, by repeated trials. In each trial, each random variable represented in a time net is sampled from its underlying distribution in a fixed order.
A sample is a simulated value for the random variable. Each sample is scored according to some criteria. During repeated trials, a random variable takes different values with varying frequency, and the different values accumulate different scores. At the end, the scores are used to estimate the distribution for each random variable. Under certain conditions, the estimates are shown to converge to the true distribution (see, for example, [Geweke, to appear]).

For a discrete random variable, each sample is a particular discrete state out of a finite set. Thus, scoring the samples is easy. For a continuous random variable, we discretize the range of the continuous variable into a finite number of subranges, or buckets. A bucket is scored each time a sample falls within its subrange. The discretization defines the resolution to which we can distinguish the occurrence of events. In other words, it fixes the ε over which we consider an event to occur. We must bound the range in addition to discretizing it, to limit the number of buckets. At the same time, it is sometimes convenient to create distinguished uppermost and bottommost buckets containing +∞ and -∞, respectively. For a continuous range variable, we store the upper and lower bound of the sampled range and the score of each trial.

In our algorithm, we first find an order for sampling nodes. We may sample a node if it has no parents, or the values of all of its parents are known, either as evidence, or by sampling. It is easy to show that there is always such an order starting from any root node, provided that the time net is a proper Bayesian network, i.e., it is acyclic. We sample each node from its distribution according to the order we find. In our basic algorithm, we score each sample equally by 1. At the end, we approximate the distribution of date nodes by examining the score in each bucket.
Dividing by the number of trials gives the probability for that bucket.¹ This can be used to estimate a distribution such as P(beg(π) ∈ [t, t+ε]). Because of our discretization, our estimate corresponds to the probability that beg(π) occurred in the interval constituting the bucket that contains the time point t, not necessarily the probability that it occurred in the interval [t, t+ε]. It is also possible to compute cumulative densities simply by summing; thus, it is possible to answer queries about the probability that an event takes place over a longer interval, e.g., P(beg(π) ∈ [u, v])?, or even whether it ever occurs at all, e.g., P(beg(π) ≺ +∞)? (Note that we have loosely used an event for its date in these example queries.)

In general, we do not compute the complete probability distribution for a range node. Instead, the probability of a particular range is computed only on demand. To do this, we count the number of trials whose sample is the queried range. For instance, in response to a query P([8am, 10am] = π)?, we count each trial where the sample for π was [8am, 10am]. The count is divided by the number of trials to estimate the probability. The query P(holds(8am, 10am, π))? is different from the above, because holds(8am, 10am, π) is true if π begins before 8 a.m. and ends after 10 a.m. Thus, we would count each trial for which the sample for π contained the interval [8am, 10am].

There is ample scope for optimizations and improvements to this scheme. We may apply variance reduction techniques such as importance sampling for faster convergence. In many cases, optimization depends heavily on knowing what type of queries are expected.
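The trial loop described above can be sketched end to end on the small net of figure 2. Every numeric choice here is my own (arrival normally distributed around 2pm, persistence drawn from an exponential survivor function); only the sample-in-order, score-a-bucket, divide-by-trials structure comes from the text:

```python
import random
from collections import Counter

def simulate(trials, eps=5.0, seed=0):
    """Forward sampling for here(Sally): sample the enab node, then the
    clip node given it; score the clip node's bucket; count trials whose
    sampled range contains [2pm, 2:30pm]. Times in minutes after noon."""
    rng = random.Random(seed)
    buckets = Counter()
    holds_count = 0
    for _ in range(trials):
        beg = rng.gauss(120.0, 15.0)        # enab node (a root): arrival
        end = beg + rng.expovariate(0.05)   # clip node, given the enab
        buckets[int(end // eps)] += 1       # score end's bucket by 1
        if beg <= 120.0 and end >= 150.0:   # sampled range contains it?
            holds_count += 1
    return buckets, holds_count / trials

buckets, p_holds = simulate(20000)
# buckets[k] / 20000 estimates P(end(here(Sally)) in [5k, 5k+5]);
# p_holds estimates P(holds(2pm, 2:30pm, here(Sally))).
```

Cumulative queries fall out by summation over buckets, exactly as the text notes; a range-node query is answered by the containment test inside the loop rather than by tabulating a full two-dimensional distribution.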
If we are only interested in the marginal densities of a select few nodes, then we can considerably improve performance by reducing the network through node elimination [Kanazawa and Dean, 1989], and by focusing only on relevant portions of a network based on graph theoretic criteria [Shachter and Peot, 1991]. Although we have only considered sampling algorithms here, it may also be possible to further improve performance by combining them with other types of algorithms.

¹If we know the number of trials n beforehand, then we can score each sample by 1/n instead of 1. Alternatively, we can forget division altogether; the scores are proportional to the true distribution.

Summary

We have described an approach to reasoning about time and probability using logic and time nets. The ideas and algorithms in this paper have been implemented in a program called goo (for grandchild of ODDS; ODDS is an earlier probabilistic temporal database and decision theoretic planning system [Dean and Kanazawa, 1989b]). We are currently reimplementing goo as a probabilistic temporal database.

The probabilistic temporal database is a system for maintaining and applying knowledge and information about time and probability. It allows specification of general knowledge about facts, events, time, and probability in terms of L_cp sentences. On the basis of this knowledge, the database constructs and incrementally augments or clips time nets in response to information about true events and relevant scenarios. It is thus similar to other systems that construct and maintain Bayesian networks on the fly (e.g., [Breese, 1987]).

As far as extensions to this work, promising extensions to the logic and to time nets involve continuous quantities, especially in the context of a time-branching logic of plans. Such an extension is useful for decision theoretic planning and for control applications [Haddawy, 1990].
It may also be applied fruitfully in control of inference [Dean and Boddy, 1988]. Other, relatively trivial, but useful, extensions include a life operator denoting the duration of fact tokens, and sentence formers for higher moments such as the mean. Important outstanding research issues involve the construction of time nets from L_cp sentences, including the automatic unfolding of circularities, and practical performance characteristics in large scale applications.

References

Fahiem Bacchus. Representing and Reasoning with Probabilistic Knowledge. PhD thesis, University of Alberta, 1988. Also issued as Waterloo University Technical Report CS-88-31.

Carlo Berzuini, Riccardo Bellazzi, and Silvana Quaglini. Temporal reasoning with probabilities. In Proceedings of the Fifth Workshop on Uncertainty in Artificial Intelligence, pages 14-21, Detroit, Michigan, August 1989.

Carlo Berzuini. A probabilistic framework for temporal reasoning. Artificial Intelligence, to appear.

John S. Breese. Knowledge representation and inference in intelligent decision systems. Technical Report 2, Rockwell International Science Center, 1987.

D. R. Cox and D. Oakes. Analysis of Survival Data. Wiley, 1984.

Thomas Dean and Mark Boddy. An analysis of time dependent planning. In Proceedings of the Seventh National Conference on Artificial Intelligence, pages 49-54, Minneapolis, Minnesota, 1988. AAAI.

Thomas Dean and Keiji Kanazawa. Probabilistic temporal reasoning. In Proceedings of the Seventh National Conference on Artificial Intelligence, pages 524-528, Minneapolis, Minnesota, 1988. AAAI.

Thomas Dean and Keiji Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5(3):142-150, August 1989.

Thomas Dean and Keiji Kanazawa. Persistence and probabilistic projection. IEEE Transactions on Systems, Man and Cybernetics, 19(3):574-585, May/June 1989.

Luc Devroye. Non-Uniform Random Variate Generation. Springer-Verlag, New York, 1986.

J. Geweke.
Bayesian inference in econometric models using Monte Carlo integration. Econometrica, to appear.

Peter Haddawy. Time, chance, and action. In Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence, pages 147-154, Cambridge, Massachusetts, 1990.

Joseph Y. Halpern. An analysis of first-order logics of probability. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1375-1381, Detroit, Michigan, 1989. IJCAI.

Steven John Hanks. Projecting Plans for Uncertain Worlds. PhD thesis, Yale University Department of Computer Science, January 1990.

Keiji Kanazawa and Thomas Dean. A model for projection and action. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, Michigan, 1989. IJCAI.

Keiji Kanazawa. Probability, Time, and Action. PhD thesis, Brown University, Providence, RI, Forthcoming.

John McCarthy and Patrick J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, 4, 1969.

Drew V. McDermott. A temporal logic for reasoning about processes and plans. Cognitive Science, 6:101-155, 1982.

Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.

Ross D. Shachter and Mark A. Peot. Simulation approaches to general probabilistic inference on belief networks. In John F. Lemmer and Laveen N. Kanal, editors, Uncertainty in Artificial Intelligence 5. North-Holland, 1991.

Glenn Shafer and Judea Pearl, editors. Readings in Uncertain Reasoning. Morgan Kaufmann, Los Altos, California, 1990.

Yoav Shoham. Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence. MIT Press, Cambridge, Massachusetts, 1988.

Jay Weber. Principles and Algorithms for Causal Reasoning with Uncertainty. PhD thesis, University of Rochester Computer Science, May 1989. Technical Report 287.
Default Logic, Propositional Logic and Constraints*

Rachel Ben-Eliyahu <rachel@cs.ucla.edu>
Cognitive Systems Laboratory
Computer Science Department
University of California
Los Angeles, California 90024

Abstract
We present a mapping from a class of default theories to sentences in propositional logic, such that each model of the latter corresponds to an extension of the former. Using this mapping we show that many properties of default theories can be determined by solving propositional satisfiability. In particular, we show how CSP techniques can be used to identify, analyze and solve tractable subsets of Reiter's default logic.

1 Introduction
Since the introduction of Reiter's default logic [Reiter, 1980], many researchers have elaborated its semantics ([Etherington, 1987], [Konolige, 1988]) and have developed inference algorithms for default theories ([Etherington, 1987], [Kautz and Selman, 1989], [Stillman, 1990]). It was clear from the beginning that most of these computations are formidable (not even semi-decidable), and so research has focused on restricted classes of the language, searching for tractable subclasses of default theories. Unfortunately, many simplified sublanguages still remained intractable ([Kautz and Selman, 1989], [Stillman, 1990]). Since Reiter's logic is an important formalism for nonmonotonic reasoning, it is worth exploring new dimensions along which tractable classes can be identified. The approach we propose here examines the structural features of the knowledge base, and leads to a topological characterization of nonmonotonic theories. One language that has received a thorough topological analysis is constraint networks. This propositional language, based on multi-valued variables and relational constraints, is also intractable, but many of its tractable subclasses have been identified by topological analysis.
Rina Dechter <dechter@ics.uci.edu>
Information & Computer Science
University of California
Irvine, California 92717

*Supported by Air Force Office of Scientific Research, AFOSR 900136.

A constraint network is a graph (or hypergraph) in which nodes represent variables and arcs represent pairs (or sets) of variables that are included in a common constraint. The topology of such a network uncovers opportunities for problem decomposition techniques and provides estimates of the problem complexity prior to actual processing. Graphical analysis has led to the development of effective solution strategies and has identified parameters such as width and cycle-cutset that govern problem difficulty ([Freuder, 1982], [Mackworth and Freuder, 1984], [Dechter, 1990], [Dechter and Pearl, 1989]). Our approach is to identify tractable classes of default theories by mapping them into tractable classes of constraint networks. Specifically, we reformulate a default theory within the constraint network language and use the latter to induce the appropriate solution strategies. Rather than attempting a direct translation to constraint networks, this paper describes an intermediate translation of default theories into propositional logic. Since propositional logic can be translated into constraint networks, this yields a mapping from default theories to constraint networks. The intermediate translation into propositional logic may point to additional tractable classes and can shed new light on the semantics of default theories. In the first part of this paper we show that any disjunction-free propositional default theory with semi-normal rules can be translated in polynomial time to a propositional theory such that all the interesting properties of the default theory can be computed by solving the satisfiability of the latter. In the second part we show how constraint networks can be utilized to identify tractable classes of default theories.
The paper is organized as follows. Sections 2 and 3 describe Reiter's default logic and introduce necessary notations and preliminaries. Section 4 presents our transformation and describes how tasks on a default theory are mapped into equivalent tasks in propositional logic. Section 5 discusses cyclic and ordered theories, while Section 6 presents new procedures for query processing and identifies tractable classes using constraint networks techniques. Section 7 provides concluding remarks. Due to space considerations all proofs are omitted. For more details see [Ben-Eliyahu and Dechter, 1991a].
BEN-ELIYAHU & DECHTER 379
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

2 Reiter's Default Logic
Let L be a first order language. A default theory is a pair (D, W), where D is a set of defaults and W is a set of closed wffs (well formed formulas) in L. A default is a rule of the form α : β1, ..., βn / γ, where α, β1, ..., βn, and γ are formulas in L. The intuition behind a default can be: if α is believed and there is no reason to believe that one of the βi is false, then γ can be believed. A default α : β/γ is normal if γ = β. A default is semi-normal if it is of the form α : β ∧ γ / γ. A default theory is closed if all the first order formulas in D and W are closed. The set of defaults D induces an extension on W. Intuitively, an extension is a maximal set of formulas that can be deduced from W using the defaults in D. Let E* denote the logical closure of E in L. We use the following definition of an extension ([Reiter, 1980], Theorem 2.1):

Definition 2.1 Let E ⊆ L be a set of closed wffs, and let (D, W) be a closed default theory. Define
E_0 = W;
for i ≥ 0, E_{i+1} = E_i* ∪ {γ | α : β1, ..., βn / γ ∈ D, where α ∈ E_i and ¬β1, ..., ¬βn ∉ E}.
E is an extension for (D, W) iff E = ∪_{i≥0} E_i. (Note the appearance of E in the formula for E_{i+1}.)
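For the disjunction-free theories studied below, where an extension is determined by its set of literals, Definition 2.1 yields a direct brute-force test: guess E, run the E_i iteration to a fixpoint, and compare. The following sketch is illustrative only; the tuple-of-literal-strings encoding of defaults is ours, not the paper's:

```python
def is_extension(E, defaults, W):
    """Reiter's quasi-inductive test (Definition 2.1), specialized to
    disjunction-free theories, where an extension is just a set of literals.
    A default is (prerequisite, justifications, consequent); literals are
    atom names, with a '-' prefix for negation."""
    neg = lambda l: l[1:] if l.startswith("-") else "-" + l
    S = set(W)                       # E_0 = W
    changed = True
    while changed:                   # E_{i+1}: add consequents of rules whose
        changed = False              # prerequisites are derived and whose
        for pre, just, con in defaults:   # justifications are consistent with E
            applicable = (set(pre) <= S and
                          all(neg(b) not in E for b in just))  # test against E!
            if applicable and con not in S:
                S.add(con)
                changed = True
    return S == set(E)               # E is an extension iff the fixpoint is E

# D = {A : P / P,   : A / A,   : -A / -A},  W = {}  (Reiter's Example 2.5)
D = [(["A"], ["P"], "P"), ([], ["A"], "A"), ([], ["-A"], "-A")]
# {A, P} and {-A} pass the test; {} and {A} do not.
```

Note how the justification check refers to the guessed E while the prerequisite check refers to the growing set S; this is exactly the non-constructive appearance of E in the formula for E_{i+1}.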
□

Most queries on a default theory (D, W) fall into one of the following classes:
Existence: Does (D, W) have an extension? If so, find one.
Set-Membership: Given a set of formulas S, is S contained in some extension of (D, W)?
Set-Entailment: Given a set of formulas S, is S contained in every extension of (D, W)?

In this paper we restrict our attention to Propositional Disjunction-free Semi-normal Default theories, denoted PDSD (where formulas in D and W are disjunction-free). This is the same subclass studied by Kautz and Selman [Kautz and Selman, 1989]. Clearly, when dealing with PDSDs, every extension E* is a logical closure of a set consisting of literals only. We assume, w.l.g., that the consequent in each rule is a single literal. We can also assume, w.l.g., that W is consistent and that no default has a contradiction as a justification; when W is inconsistent, only one trivial extension exists, and a default having a contradictory justification can be eliminated by inspection.

3 Definitions and Preliminaries
We denote propositional symbols by upper case letters P, Q, R, ..., propositional literals (i.e., P, ¬P) by lower case letters p, q, r, ..., and conjunctions of literals by α, β, .... The operator ~ over literals is defined as follows: if p = ¬Q, ~p = Q; if p = Q, then ~p = ¬Q. If δ = α : β/γ is a default, we define pre(δ) = α, just(δ) = β and concl(δ) = γ. Given a set of literals E, we say that E satisfies the preconditions of δ if pre(δ) ∈ E and for each q ∈ just(δ), ~q ∉ E¹. We say that E satisfies the rule δ if it does not satisfy the preconditions of δ, or else it satisfies both its preconditions and includes its conclusion. A proof of a literal p, w.r.t. a set of literals E and a PDSD (D, W), is a sequence of rules δ1, ..., δn such that the following three conditions hold:
1. concl(δn) = p.
2. For all 1 ≤ i ≤ n and for each q ∈ just(δi), ~q ∉ E.
3. For all 1 ≤ i ≤ n, pre(δi) ∈ W ∪ {concl(δ1), ..., concl(δ_{i-1})}.
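The three proof conditions are mechanical to check. A small illustrative sketch (the encoding is ours: a default is a (prerequisite, justifications, consequent) triple of literal strings, with '-' marking negation):

```python
def is_proof(seq, p, E, W):
    """Check whether the rule sequence seq is a proof of literal p
    w.r.t. the literal set E and a PDSD with facts W (conditions 1-3)."""
    neg = lambda l: l[1:] if l.startswith("-") else "-" + l
    if not seq or seq[-1][2] != p:          # 1. concl(delta_n) = p
        return False
    derived = set(W)
    for pre, just, con in seq:
        if any(neg(q) in E for q in just):  # 2. justifications not refuted by E
            return False
        if not set(pre) <= derived:         # 3. prerequisites derived earlier
            return False
        derived.add(con)
    return True

r1 = ([], ["A"], "A")        # : A / A
r2 = (["A"], ["P"], "P")     # A : P / P
# [r1, r2] proves P w.r.t. E = {A, P}; [r2] alone does not (A underived).
```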
The following lemma is instrumental throughout the paper. It can be viewed as the declarative counterpart of Lemma 1 in [Kautz and Selman, 1989].

Lemma 3.1 E* is an extension of a PDSD (D, W) iff E* is a logical closure of a set of literals E that satisfies:
1. W ⊆ E.
2. E satisfies each rule in D.
3. For each p ∈ E, there is a proof of p in E. □

We define the dependency graph G_(D,W) of a PDSD (D, W) to be a directed graph constructed as follows: each literal p appearing in D or in W is associated with a node, and an edge is directed from p to r iff there is a default rule where p appears in its prerequisite and r is its consequent. An acyclic PDSD is one whose dependency graph is acyclic, a property that can be tested linearly.

4 Expressing PDSD in Logic
The common approach for building an extension, used by [Etherington, 1987], [Kautz and Selman, 1989], and others, is to increment W using rules from D. We take a totally different approach by making a declarative account of such a process: using Lemma 3.1, we formulate the default theory as a set of constraints on the set of its extensions. We first present the transformation of acyclic PDSDs and then extend it to the cyclic case.

4.1 The Acyclic Case
Extensions of acyclic PDSDs can be expressed and generated in a simpler fashion. This is demonstrated through Lemma 4.1, a relaxed version of the general Lemma 3.1. We can show that (note the change in item 3):
¹Note that since we are dealing with PDSDs, if α is not a contradiction, the negation of one of its conjuncts is in the extension iff the negation of α is there too.
380 NONMONOTONIC REASONING
cl Expressing the above conditions in propositional logic results in a propositional theory whose models coincide with the extensions of the acyclic default theory. Let & be the underlying propositional language of (D, W). For each propositional symbol in L we define two propo- sitional symbols, Ip and I-p, yielding a new set of sym- bols: L’ = {Ip, I,p(P E .C}. Intuitively, Ip stands for “P is in the extension” while I-p stands for “1P is in the extension”. To simplify notations we use the notions of in(o) and cons(a) that stand for “a is in the extension”, and “cv is consistent with the extension”, respectively. Formally, in(o) and cons(a) are defined as functions from con- juncts in L to conjuncts in C’ as follows: e ifa= P then in(o) = Ip, cons(a) = -rI,p. 0 ifa= 1P then in(cr) = I-p, cons(a) = TIP. e if cy = /?r\r then in(cu) = [in(P)] A [in(r)], cons(a) = bw)l A bNY)l l The following procedure,translate-1, translates an acyclic PDSD (D, W) into a propositional theory % W) as follows: translate-l((D, W)) 1. for each p E W, put Ip into P( D W) . 2. for each a : ,0/r E D, if y i W, add in(tr) A cons(/?)+in(y) into P (Q w>- 3. Let Sp = {[in(a) A cons(/3)]]% E D such that S = o : PIP]. For each p $! W, if Sp # 8 then add to P(D, W) the formula Ip-+[VaE~,~]. else, (If p 4 W and Sp = a), add to P(D, W) the formula 71p. 0 We claim that: Theorem 4.2 Procedure translate-l transforms an acyclic PDSD (D, W) into a propositional theory P(D, w) such that 0 is a model for P(D, W) ifl {p]e(Ir) = true}* is an extension for (D, W). •I Algorithm translate-l is time and space linear in ] D+ W] (assuming W is sorted). Example 4.3 (based on Reiter’s example 2.5) Consider the following acyclic PDSD : D = {A : P/P, : A/A, lA/lA), W = 0. 
P(D, W) = { (remains empty after step 1;)
(following step 2:) I_A ∧ ¬I_¬P → I_P, ¬I_¬A → I_A, ¬I_A → I_¬A;
(following step 3:) I_P → I_A ∧ ¬I_¬P, I_A → ¬I_¬A, I_¬A → ¬I_A, ¬I_¬P }.
P(D, W) has only two models: {I_A = true, I_¬A = false, I_¬P = false, I_P = true}, which corresponds to the extension {A, P}, and {I_A = false, I_¬A = true, I_¬P = false, I_P = false}, which corresponds to the extension {¬A}. □

4.2 The Cyclic Case
Since procedure translate-1 assumes an acyclic PDSD, it does not exclude the possibility of unfounded proofs. If applied to a cyclic PDSD, the resulting transformation will possess models that correspond to illegal extensions. Consequently, in order to adjust our translation to the cyclic case we need to strengthen the constraint in step 3 of translate-1. Namely, we must add the constraint that if a literal not in W belongs to the extension, then the prerequisite of at least one of its rules should be in the extension in its own right, namely, not as a consequence of a circular proof. One way to avoid circular proofs is to impose an indexing on literals such that for every literal in the extension there exists a proof with literals having lower indices. To implement this idea, originally mentioned in [Dis89], we associate an index variable with each literal in the transformed language, and require that p is in the extension only if it is the consequent of a rule whose prerequisite's indexes are smaller. Let #p stand for "the index associated with p", and let k be its number of values. These multi-valued variables can be expressed using k propositional literals and additional O(k²) clauses [Ben-Eliyahu and Dechter, 1991a]. For simplicity, however, we will use the multi-valued variable notations, viewing them as abbreviations of their propositional counterparts. Let L'' be the language L' ∪ {#p | p ∈ L}, where L' is the set {I_P, I_¬P | P ∈ L} defined earlier. Procedure translate-2 transforms any PDSD (cyclic or acyclic) over L to a set of propositional sentences over L''.
It is defined by modifying step 3 of translate-1 as follows:
procedure translate-2((D, W)) - step 3
3. Let C_p = {[in(q1 ∧ q2 ∧ ... ∧ qn) ∧ cons(β)] ∧ [#q1 < #p] ∧ ... ∧ [#qn < #p] | ∃δ ∈ D such that δ = q1 ∧ q2 ∧ ... ∧ qn : β/p}. For each p ∉ W, if C_p is not empty then add to P(D, W) the formula I_p → [∨_{a ∈ C_p} a]. Else (if p ∉ W and C_p = ∅), add ¬I_p to P(D, W). □

This translation requires adding n index variables, n being the number of literals in L, each having at most n values. Since expressing an inequality in propositional logic requires O(n²) clauses, and since there are at most n possible inequalities per default, the resulting size of this transformation is bounded by O(|W| + |D|n³) propositional sentences. The following theorems summarize the properties of our transformation. In all of them, P(D, W) is the set of sentences resulting from translating a given PDSD (D, W) using translate-2 (or translate-1 when the theory is acyclic).

Theorem 4.4 Let (D, W) be a PDSD. If P(D, W) is satisfiable and θ is a model for P(D, W), then {p | θ(I_p) = true}* is an extension for (D, W). □
Theorem 4.5 If E* is an extension for (D, W) then there is a model θ for P(D, W) such that θ(in(p)) = true iff p ∈ E*. □
Corollary 4.6 A PDSD (D, W) has an extension iff P(D, W) is satisfiable. □
Corollary 4.7 A set of literals S is contained in an extension of (D, W) iff there is a model for P(D, W) which satisfies the set {I_p | p ∈ S}. □
Corollary 4.8 A literal p is in every extension of a PDSD (D, W) iff there is no model for P(D, W) which satisfies ¬I_p. □

The above theorems suggest that we can first translate a given PDSD (D, W) to P(D, W) and then answer queries as follows: to test if (D, W) has an extension, we test satisfiability of P(D, W); to see if a set S of literals is a member in some extension, we test satisfiability of P(D, W) ∪ {I_p | p ∈ S}; and to see if S is included in every extension, we test whether, for every p ∈ S, P(D, W) ∪ {¬I_p} is not satisfiable.
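On small acyclic theories, Theorem 4.2 and the query reductions above can be checked directly by enumerating truth assignments over the I_p variables. The sketch below implements the three steps of translate-1 as semantic checks rather than emitted clauses (the tuple encoding of defaults is ours); it reproduces the two models of Example 4.3:

```python
from itertools import product

def translate1_models(defaults, W, atoms):
    """Enumerate models of the translate-1 image of an acyclic,
    disjunction-free default theory (brute force, for illustration).
    Literals are atom names, with a '-' prefix for negation."""
    neg = lambda l: l[1:] if l.startswith("-") else "-" + l
    lits = [l for a in atoms for l in (a, "-" + a)]
    for bits in product([False, True], repeat=len(lits)):
        I = dict(zip(lits, bits))                 # I[p] means "p is in E"
        def fires(pre, just):                     # in(pre) and cons(just)
            return (all(I[q] for q in pre) and
                    all(not I[neg(b)] for b in just))
        ok = all(I[p] for p in W)                 # step 1: W in E
        ok = ok and all(I[c] for p_, j_, c in defaults
                        if c not in W and fires(p_, j_))       # step 2
        ok = ok and all(any(c == p and fires(p_, j_)
                            for p_, j_, c in defaults)
                        for p in lits if I[p] and p not in W)  # step 3: support
        if ok:
            yield frozenset(l for l in lits if I[l])

# Example 4.3:  D = {A : P/P,   : A/A,   : -A/-A},  W = {}
D = [(["A"], ["P"], "P"), ([], ["A"], "A"), ([], ["-A"], "-A")]
exts = set(translate1_models(D, W=[], atoms=["A", "P"]))
# Two models, corresponding to the extensions {A, P} and {-A}.
```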
4.3 An Improved Translation
Procedure translate-2 can be further improved. If a prerequisite of a rule is not on a cycle with its consequent, we do not need to index them, nor to enforce the partial order among their indexes. Thus, only literals which reside on cycles in the dependency graph need indexes. Furthermore, we will never have to solve cyclicity between two literals that do not share a cycle. We show that the index variable's range can be bounded by the maximal length of an acyclic path in any strongly connected component of G_(D,W) [Ben-Eliyahu and Dechter, 1991a]. The strongly connected components of a directed graph G form a partition of its set of nodes such that for each subset C in the partition, and for each x, y ∈ C, there is a directed path from x to y and from y to x in G. The strongly connected components can be identified in linear time [Tarjan, 1972]. Procedure translate-3 incorporates these observations by revising step 3 of translate-2. The procedure associates index variables only with literals that are part of a non-trivial cycle (i.e., a cycle with at least two nodes).

procedure translate-3((D, W)) - step 3
3.a Identify the strongly connected components of G_(D,W).
3.b Let S_p = {[in(q1 ∧ q2 ∧ ... ∧ qn) ∧ cons(β)] ∧ [#q1 < #p] ∧ ... ∧ [#qr < #p] | ∃δ ∈ D such that δ = q1 ∧ ... ∧ qn : β/p, and q1, ..., qr (0 ≤ r ≤ n) are in p's component}. For each p ∉ W add I_p → [∨_{a ∈ S_p} a] to P(D, W). If p ∉ W and S_p = ∅, add ¬I_p to P(D, W). □

Procedure translate-3 will behave exactly as translate-1 when the input is an acyclic PDSD. The number of index variables produced by translate-3 is bounded by Min(k·c, n), where k is the size of a largest component of G_(D,W), c is the number of non-trivial components, and n is the number of literals in the language. The range of the index variables is bounded by l, the length of the longest acyclic path in any component (l ≤ k).
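The component identification in step 3.a is standard. A recursive Tarjan sketch over an adjacency-set encoding of the dependency graph (the encoding is ours):

```python
def tarjan_scc(succ):
    """Strongly connected components of a directed graph given as
    {node: set(successors)} [Tarjan, 1972]."""
    index, low, on_stack = {}, {}, set()
    stack, comps, counter = [], [], [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in succ.get(v, ()):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    nodes = set(succ) | {w for ws in succ.values() for w in ws}
    for v in nodes:
        if v not in index:
            dfs(v)
    return comps

# A mutual dependency (e.g. Drink : Smoke/Smoke and Smoke : Drink/Drink)
# yields a single non-trivial component:
comps = tarjan_scc({"Drink": {"Smoke"}, "Smoke": {"Drink"}})
```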
Since in each rule's prerequisite we have at most k literals that share a component with its consequent, the resulting propositional transformation is bounded by an additional O(|W| + |D|kl²) sentences, giving an explicit connection between the complexity of the transformation and its cyclicity level. Theorems 4.4 through 4.8 hold for procedure translate-3 as well.

5 Acyclicity and Orderness
While we distinguish between cyclic and acyclic PDSDs, Etherington has distinguished between ordered and unordered default theories. He defined an order induced on the set of literals by the defaults in D, and showed that if a semi-normal theory is ordered, then it has at least one extension. To understand the relationship between these two categories we define the generalized dependency graph of a PDSD to be a directed graph with blue and white arrows. Each literal is associated with a node in the graph, and for every δ = α : β/p in D, every q ∈ α, and every r ∈ β, there is a blue edge from q to p and a white edge from ~r to p. A PDSD is unordered iff its generalized dependency graph has a cycle having at least one white edge. A PDSD is cyclic iff its generalized dependency graph has a blue cycle (i.e., a cycle with no white edges). Therefore, a set of default rules which is ordered is not necessarily acyclic, and vice versa. For instance, the set {P : Q/Q, Q : P/P} is ordered but cyclic, while the set {P : Q/Q, S : ¬Q ∧ P/P} is acyclic but not ordered. Clearly, the expressive power of both the ordered and the acyclic subsets of PDSD is restricted [Kautz and Selman, 1989]. Cyclic theories are needed, in particular, for characterizing two properties which are highly correlated. For example, to express the belief that usually people who smoke drink and vice versa, we need the defaults Drink : Smoke/Smoke, Smoke : Drink/Drink, yielding a cyclic default theory.
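Both classifications can be read off the colored graph mechanically: cyclicity asks for a blue cycle, unorderedness for a cycle through a white edge. A sketch (our own encoding; reachability is a plain DFS):

```python
def generalized_graph(defaults):
    """Blue edge q -> p for each prerequisite literal q; white edge ~r -> p
    for each justification literal r ('-' prefix marks negation)."""
    neg = lambda l: l[1:] if l.startswith("-") else "-" + l
    blue, white = set(), set()
    for pre, just, con in defaults:
        blue.update((q, con) for q in pre)
        white.update((neg(r), con) for r in just)
    return blue, white

def reachable(edges, src, dst):
    succ = {}
    for u, v in edges:
        succ.setdefault(u, set()).add(v)
    seen, todo = set(), [src]
    while todo:
        u = todo.pop()
        if u == dst:
            return True
        if u not in seen:
            seen.add(u)
            todo.extend(succ.get(u, ()))
    return False

def is_cyclic(defaults):        # some blue cycle exists
    blue, _ = generalized_graph(defaults)
    return any(reachable(blue, v, u) for u, v in blue)

def is_unordered(defaults):     # some cycle passes through a white edge
    blue, white = generalized_graph(defaults)
    return any(reachable(blue | white, v, u) for u, v in white)

ordered_but_cyclic = [(["P"], ["Q"], "Q"), (["Q"], ["P"], "P")]
acyclic_unordered = [(["P"], ["Q"], "Q"), (["S"], ["-Q", "P"], "P")]
```

Run on the paper's two examples, the first set comes out cyclic but ordered, the second acyclic but unordered, matching the discussion above.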
The characterization of default theories presented in the following section may be viewed as a generalization of both acyclicity and orderness.

6 Query Processing Using Constraint Networks Techniques
What can be gained from the above transformation? Since our translation is polynomial, if its resulting output belongs to a tractable propositional subclass, the tasks of existence, set-membership and set-entailment can be performed efficiently. One such subclass is 2-SAT, a subclass containing disjunctions of at most two literals. The corresponding class of default theories which translates into 2-SAT was called by [Kautz and Selman, 1989] and by [Stillman, 1990] "prerequisite-free normal unary" (a PDSD with normal rules having no prerequisite). The linear satisfiability of 2-SAT induces a linear time algorithm for the corresponding class of default theories. In contrast, Kautz and Selman presented a quadratic algorithm (for deciding "membership in all extensions") applicable to a broader class of PDSDs (called "normal unary") where the prerequisite of each (normal) rule consists of a single positive literal. Next, we view propositional satisfiability as a constraint satisfaction problem and use techniques borrowed from that field to solve satisfiability. Constraint satisfaction techniques exploit the structure of the problem through the notion of a "constraint graph". For propositional sentences, the constraint graph (also called a "primal constraint graph") associates a node with each propositional letter and connects any two nodes whose associated letters appear in the same propositional sentence. Various graph parameters have been shown to be crucially related to solving the satisfiability problem. These include the induced width w*, the size of the cycle-cutset, the depth of a depth-first-search spanning tree of this graph, and the size of the non-separable components ([Freuder, 1985], [Dechter and Pearl, 1988], [Dechter, 1990]).
It can be shown that the worst-case complexity of deciding consistency is polynomially bounded by any one of these parameters. Since these parameters can be bounded easily by simple processing of the given graph, they can be used for assessing tractability ahead of time. For instance, when the constraint graph is a tree, satisfiability can be answered in linear time. In the sequel we will demonstrate the potential of this approach using one specific technique, called Tree-Clustering [Dechter and Pearl, 1989], customized for solving propositional satisfiability, and emphasize its effectiveness for maintaining a default data-base. The Tree-Clustering scheme has a tree-building phase and a query processing phase. The complexity of the former is exponentially dependent on the sparseness of the constraint graph, while the complexity of the latter is always linear in the size of the data-base generated by the tree-building preprocessing phase. Consequently, even when building the tree is computationally expensive, it may be justified when many queries on the same PDSD are expected. The algorithm is summarized below (for details see [Dechter and Pearl, 1989]).

Propositional-Tree-Clustering (tree-building)
Input: a set of propositional sentences S and its constraint graph.
1. Use the triangulation algorithm to generate a chordal constraint graph. A graph is chordal if every cycle of length at least four has a chord. The triangulation algorithm transforms any graph into a chordal graph by adding edges to it [Tarjan and Yannakakis, 1984]. It consists of two steps:
(a) Select an ordering for the nodes (various heuristics for good orderings are available).
(b) Fill in edges recursively between any two nonadjacent nodes that are connected via nodes higher up in the ordering.
2. Identify all the maximal cliques in the graph. Let C1, ..., Ct be all such cliques, indexed by the rank of their highest nodes.
3.
Connect each Ci to an ancestor Cj (j < i) with whom it shares the largest set of letters. The resulting graph is called a join tree.
4. Compute Mi, the set of models over Ci that satisfy Si, where Si is the set of all sentences composed only of letters in Ci.
5. For each Ci and for each Cj adjacent to Ci in the join tree, delete from Mi every model M that has no model in Mj that agrees with it on the set of their common letters. This amounts to performing arc consistency on the join tree. □

Since the most costly operation within the tree-building algorithm is generating all the submodels of each clique (step 5), the time and space complexity of this preliminary phase is O(n · 2^|C|), where |C| is the size of the largest clique and n is the number of letters used in S. It can be shown that |C| = w* + 1, where w* is the width² of the ordered chordal graph (also called the induced width). As a result, for classes having a bounded induced width, this method is tractable. Once the tree is built, it always allows efficient query processing. This procedure is described within the following general scenario. (n stands for the number of letters in the original PDSD; m bounds the number of submodels for each clique.)³
²The width of a node in an ordered graph is the number of edges connecting it to nodes lower in the ordering. The width of an ordering is the maximum width of nodes in that ordering, and the width of a graph is the minimal width over all its orderings.
1. Translate the PDSD to propositional logic (generates O(|W| + |D|n³) sentences).
2. Build a default data-base from the propositional sentences using the tree-building method (takes O(n² · exp(w* + 1))).
3. Answer queries on the default theory using the produced tree:
o To answer whether there is an extension, test if there is an empty clique. If so, no extension exists (bounded by O(n²) steps).
o To find an extension, solve the tree in a backtrack-free manner: in order to find a satisfying model we pick an arbitrary node Ci in the join tree, select a model Mi from Mi, select from each of its neighbors Cj a model Mj that agrees with Mi on their common letters, unite all these models, and continue to the neighbors' neighbors, and so on. The set of all models can be generated by exhausting all combinations of submodels that agree on their common letters (finding one model is bounded by O(n² · m) steps).
o To answer whether there is an extension that satisfies a set of literals A, check if there is a model satisfying {I_p | p ∈ A} (this takes O(n² · m · log m) steps).
o To answer whether a literal p is included in all the extensions, check whether there is a solution that satisfies ¬I_p (bounded by O(n²m) steps).

Following is an example demonstrating our approach.
Example 6.1 Consider the following PDSD:
D = { Dumbo : Elephant ∧ Fly / Elephant,
Elephant ∧ ¬Fly : ¬Dumbo / ¬Dumbo,
Elephant : ¬Fly / ¬Fly,
Dumbo : Fly / Fly,
Elephant : ¬Circus / ¬Circus,
Dumbo : Elephant ∧ Circus / Circus }
W = {Dumbo, Elephant}
The propositional letter "Dumbo" represents here a special kind of elephants that can fly. These defaults state that, normally, Dumbos, assuming they fly, are elephants; if an elephant does not fly, we do not believe that it is a Dumbo. Elephants usually do not fly, while Dumbos usually fly. Most elephants do not live in a circus, while Dumbos usually live in a circus. This is an acyclic default theory, thus algorithm translate-1 produces the following set of sentences (each proposition is abbreviated by its initial letter):
³Note that the number of letters in the propositional sentences is O(n²) if the PDSD is cyclic, and O(n) if it is acyclic, and that m is bounded by the total number of extensions.
Figure 1: Constraint graph for Example 6.1
Sentences generated in step 1 of translate-1: I_D, I_E.
Sentences generated in step 2:
I_E ∧ I_¬F ∧ ¬I_D → I_¬D, I_E ∧ ¬I_F → I_¬F, I_D ∧ ¬I_¬F → I_F, I_E ∧ ¬I_C → I_¬C, I_D ∧ ¬I_¬E ∧ ¬I_¬C → I_C.
Sentences generated in step 3:
I_¬D → I_E ∧ I_¬F ∧ ¬I_D, I_¬F → I_E ∧ ¬I_F, I_F → I_D ∧ ¬I_¬F, I_¬C → I_E ∧ ¬I_C, I_C → I_D ∧ ¬I_¬E ∧ ¬I_¬C, ¬I_¬E.

The primal graph of this set is shown in Figure 1. It is already chordal, and the ordering I_E, I_¬F, I_D, I_¬D, I_¬C, I_C, I_F, I_¬E suggests that for this particular problem w* ≤ 3. Thus, using the Tree-Clustering method we can answer queries about existence, set-membership and set-entailment in polynomial time (bounded by exp(4)). Note that this PDSD is unordered and not unary; therefore, the complexity of answering queries for such PDSDs is NP-hard [Kautz and Selman, 1989].

We conclude this section with a characterization of the tractability of PDSD theories as a function of the topology of their interaction graph. The interaction graph is an undirected graph, where each literal in W or D is associated with a node and, for every δ = α : β/p in D, every q ∈ α and every ~r such that r ∈ β, there are arcs connecting all of them into one clique with p. The first theorem considers the induced width of the interaction graph:

Theorem 6.2 For a PDSD (D, W) whose interaction graph has an induced width w*, existence, membership and entailment can be decided in O(n · 2^(w*+1)) steps when the theory is acyclic and O(n^(w*+2)) steps when the theory is cyclic. □

The second theorem relates the complexity to the size of the cycle cutset. A cycle cutset of a graph is a set of nodes that, once removed, would render the constraint graph cycle-free. For more details about this method, see [Dechter, 1990].

Theorem 6.3 For a PDSD (D, W) whose interaction graph has a cycle cutset of cardinality c, existence, membership and entailment can be decided in O(n · 2^c) steps when the theory is acyclic and O(n^(c+1)) steps when the theory is cyclic.
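The parameter w* in Theorem 6.2 is relative to an elimination ordering, and the width induced by a given ordering can be computed by simulating the fill-in step of the triangulation algorithm. A sketch (the adjacency-dict graph encoding is ours):

```python
def induced_width(adj, order):
    """Width induced by eliminating the nodes of `adj` ({node: set(neighbours)})
    from the last node in `order` to the first, adding fill-in edges among each
    eliminated node's earlier neighbours (cf. the triangulation algorithm)."""
    g = {v: set(ns) for v, ns in adj.items()}
    pos = {v: i for i, v in enumerate(order)}
    width = 0
    for v in reversed(order):
        earlier = {u for u in g[v] if pos[u] < pos[v]}
        width = max(width, len(earlier))
        for a in earlier:                 # connect earlier neighbours (fill-in)
            for b in earlier:
                if a != b:
                    g[a].add(b)
    return width

chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}        # a tree: width 1
square = {"A": {"B", "D"}, "B": {"A", "C"},
          "C": {"B", "D"}, "D": {"C", "A"}}              # a 4-cycle: width 2
```

Minimizing this quantity over all orderings gives w*; in practice one bounds it with an ordering heuristic, which is why tractability can be assessed before solving.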
□

7 Summary and Conclusions
This paper presents a transformation of a disjunction-free semi-normal default theory into a propositional theory such that the set of models of the latter coincides with the set of extensions of the former. Questions of existence, membership and entailment posed on the default theory are thus transformed into equivalent problems of satisfiability and consistency of constraint networks. These mappings bring problems in nonmonotonic reasoning into the familiar arenas of propositional satisfiability and constraint satisfaction problems. Using our transformation, we showed that default theories whose interaction graph has a bounded w* are tractable, and can be solved in time and space bounded by O(n^(w*+2)) steps. This permits us to predict worst-case performance prior to processing, since w* can be bounded in time quadratic in the number of literals. Moreover, the tree-clustering procedure, associated with the w* analysis, provides an effective preprocessing strategy for maintaining the knowledge; once applied, all incoming queries can be answered swiftly, and changes to the knowledge can often be incorporated in linear time. Similar results were established relative to a second parameter, the cardinality of the cycle cutset. In the full paper we elaborate on these and on additional tractable classes identified by CSP techniques like cycle-cutset, non-separable components and backjumping. In [Ben-Eliyahu and Dechter, 1991b] we have extended the results presented in this paper to "network default theories", defined by Etherington, in which W contains clauses of size less than or equal to two. We believe that our transformation can be carried over to general disjunctive semi-normal default theories as well.

Acknowledgment
We thank Judea Pearl for helpful comments on earlier versions of this paper and Caroline Ehrlich for proofreading it.

References
[Ben-Eliyahu and Dechter, 1991a] Rachel Ben-Eliyahu and Rina Dechter.
Expressing default theories in constraint language, 1991. In preparation.
[Ben-Eliyahu and Dechter, 1991b] Rachel Ben-Eliyahu and Rina Dechter. Inference in inheritance networks using propositional logic and constraints networks techniques. Technical Report R-163, Cognitive Systems Lab, UCLA, 1991.
[Dechter and Pearl, 1988] Rina Dechter and Judea Pearl. Network-based heuristics for constraint satisfaction problems. Artificial Intelligence, 34:1-38, 1988.
[Dechter and Pearl, 1989] Rina Dechter and Judea Pearl. Tree clustering for constraint networks. Artificial Intelligence, 38:353-366, 1989.
[Dechter, 1990] Rina Dechter. Enhancement schemes for constraint processing: Backjumping, learning, and cutset decomposition. Artificial Intelligence, 41:273-312, 1990.
[Dis89] Paul Morris suggested it in a discussion following the constraints processing workshop at AAAI-89.
[Etherington, 1987] David W. Etherington. Formalizing nonmonotonic reasoning systems. Artificial Intelligence, 31:41-85, 1987.
[Freuder, 1982] E.C. Freuder. A sufficient condition for backtrack-free search. J. ACM, 29(1):24-32, 1982.
[Freuder, 1985] E.C. Freuder. A sufficient condition for backtrack-bounded search. J. ACM, 32(4):755-761, 1985.
[Kautz and Selman, 1989] Henry A. Kautz and Bart Selman. Hard problems for simple default logics. In KR-89, pages 189-197, Toronto, Ontario, Canada, 1989.
[Konolige, 1988] Kurt Konolige. On the relation between default and autoepistemic logic. Artificial Intelligence, 35:343-382, 1988.
[Mackworth and Freuder, 1984] A.K. Mackworth and E.C. Freuder. The complexity of some polynomial network consistency algorithms for constraint satisfaction problems. Artificial Intelligence, 25(1):65-74, 1984.
[Reiter, 1980] Ray Reiter. A logic for default reasoning. Artificial Intelligence, 13:81-132, 1980.
[Stillman, 1990] Jonathan Stillman. It's not my default: The complexity of membership problems in restricted propositional default logics.
In AAAI-90, pages 571-578, Boston, MA, 1990.
[Tarjan and Yannakakis, 1984] Robert E. Tarjan and M. Yannakakis. Simple linear-time algorithms to test chordality of graphs, test acyclicity of hypergraphs and selectively reduce acyclic hypergraphs. SIAM Journal of Computing, 13(3):566-579, 1984.
[Tarjan, 1972] Robert Tarjan. Depth-first search and linear graph algorithms. SIAM Journal of Computing, 1(2), June 1972.
BEN-ELIYAHU & DECHTER 385
Some Variations on Default Logic
Institute of Computer Science
Polish Academy of Sciences
PKiN, 00-901 Warsaw, POLAND
Abstract
In the following paper, we view applying default reasoning as a construction of an argument supporting the agent's beliefs. This yields a slight reformulation of the notion of an extension for default theories. The proposed formalism enjoys a property which we call rational maximization of beliefs.
Introduction
The fundamental importance of default reasoning in AI had been recognized long before its first formalization appeared (Reiter 1980). A default reasoning agent is able to derive her conclusions using the following inference pattern: if there is no reason to believe something else, assume that... This makes it possible to fill the gaps in the agent's knowledge that result from incomplete information and to unblock construction of arguments that could support her beliefs. Such patterns of inference allow one to represent exceptions together with general knowledge about what things normally can be expected to be like. Since any default conclusion might be subject to change when new information is provided, default reasoning is nonmonotonic.
Default logic is defined by extending the language of some base standard logic¹ such as predicate logic (either classical or modal) by specific inference rules, called defaults.
Definition 1 Any default δ has the form [α : β1, ..., βn / γ], where α, βi, and γ are formulae of the base logic and are called the prerequisite, justification, and consequent of δ, respectively.
Definition 2 If δ is a default, then pre(δ) = α, jus(δ) = {β1, ..., βn}, con(δ) = γ.
¹In the following discussion we will use propositional calculus. Applying a technique given in (Reiter 1980) we can easily generalize our results to more interesting cases in which some or all of the formulae appearing in defaults may contain free variables.
If D is a set of defaults, then
PRE(D) = {pre(δ) : δ ∈ D},
JUS(D) = ⋃_{δ ∈ D} jus(δ),
CON(D) = {con(δ) : δ ∈ D}.
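The notation of Definitions 1 and 2 can be made concrete for small experiments. The following is a minimal sketch of my own (an assumed encoding, not part of the paper), with formulae represented as plain strings:

```python
# Sketch (hypothetical encoding, not the paper's): a default
# [alpha : beta_1, ..., beta_n / gamma] and the PRE/JUS/CON
# projections of Definition 2, with formulae as plain strings.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Default:
    pre: str              # prerequisite alpha
    jus: Tuple[str, ...]  # justifications beta_1, ..., beta_n
    con: str              # consequent gamma

def PRE(D): return {d.pre for d in D}
def JUS(D): return set().union(*[set(d.jus) for d in D]) if D else set()
def CON(D): return {d.con for d in D}

# Two hypothetical defaults: [T : r / p] and [q : s / t].
D = [Default("T", ("r",), "p"), Default("q", ("s",), "t")]
print(PRE(D), JUS(D), CON(D))
```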
In certain circumstances, defaults allow the agent to augment the set of beliefs that follow deductively from what she knows about the world. Assumption of a consequent of some default is conditioned by the requirement that every justification of this default is consistent with the belief set. Therefore, the meaning of defaults depends on the belief set of the agent.
The agent's knowledge might be represented by a pair Δ = (W, D), called a default theory, where W and D are sets of formulae (axioms) and defaults, respectively. Δ delimits the agent's belief set, which, since defaults refer to it, must be defined as a fixed-point construct over Δ. Such a belief set is generally called an extension of Δ. Here is the simplest definition of this notion.
Definition 3 (Reiter 1980) Let Δ = (W, D) be a default theory. For any set S of formulae, define Γ_Δ(S) as the minimal set satisfying the following conditions:
- W ⊆ Γ_Δ(S),
- Th(Γ_Δ(S)) = Γ_Δ(S),
- if [α : β1, ..., βn / γ] ∈ D, α ∈ Γ_Δ(S) and ¬β1, ..., ¬βn ∉ S, then γ ∈ Γ_Δ(S).
We say that E is an a-extension of Δ if and only if E = Γ_Δ(E).
Unfortunately, some default theories may have no a-extension. There are known large classes of default theories for which the existence of an a-extension is guaranteed (Reiter 1980, Etherington 1986). Nevertheless, they might turn out to be inadequate to represent knowledge in some specific cases. Nonexistence of an a-extension of some theories stems from the fact that the application of defaults is not sufficiently constrained: a situation is possible in which a justification of an applied default is denied by axioms and consequents of some subset of all applied defaults.
RYCHLIK 373
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
Lukaszewicz (1988) modifies the notion of an extension imposing new applicability criteria.
Definition 4 (Lukaszewicz 1988) Let Δ = (W, D) be a default theory.
For any two sets S and T of formulae, define Γ¹_Δ(S, T) and Γ²_Δ(S, T) as the minimal sets satisfying the following conditions:
- W ⊆ Γ¹_Δ(S, T),
- Th(Γ¹_Δ(S, T)) = Γ¹_Δ(S, T),
- if [α : β1, ..., βn / γ] ∈ D, α ∈ Γ¹_Δ(S, T) and for all φ ∈ T ∪ {β1, ..., βn}, S ∪ {γ} ⊬ ¬φ, then
  - γ ∈ Γ¹_Δ(S, T),
  - β1, ..., βn ∈ Γ²_Δ(S, T).
We say that E is a b-extension of Δ with respect to F if and only if E = Γ¹_Δ(E, F) and F = Γ²_Δ(E, F).
This modification ensures that every default theory has a b-extension. Moreover, the new formalism has the property of semimonotonicity, which means that introducing new defaults into a default theory does not change any of the previously derivable beliefs.
In the following discussion, we will suggest and examine further changes to the notion of an extension.
Defaults as arguments²
We claim that the definitions given in the previous section do not reflect the way in which some perfectly rational agent would explain her own beliefs. Suppose that all the agent knows about the world can be expressed by a simple default theory
(∅, {d1 = [T : r/p], d2 = [T : s/p], d3 = [p : t/t]})   (1)
where T stands for a tautologically true formula. According to Definition 3 (or Definition 4), the only extension of (1) is Th({p, t}). Let us observe that these definitions force every default of (1) to be applied. A perfectly reasoning agent, however, would notice that in order to explain t, it is sufficient to refer to d3 and one of the remaining defaults, either d1 or d2. Referring to both d1 and d2 is somewhat redundant. Therefore, there are two possible explanations why the agent believes t. She can use the following argument: Since I find r to be consistent with what I believe then, by d1, I assume that p holds and hence, by d3, I can conclude t. She uses defaults d1 and d3 (in this order) as arguments justifying her belief in t. Of course, she can use d2 and d3 (in this order) to achieve the same aim.
But in both cases, she realizes that it is superfluous to consider a default and think whether its justification is consistent with her beliefs, if she already knows its consequent. This kind of reasoning seems to be manifested when we attempt to explain something in the most concise way. In such a case, we try to use, for the sake of clarity, only those arguments that are necessary to achieve our task. Lukaszewicz (1988) maintains that it is a worse approximation of human reasoning than the one proposed in Definition 4. However, although people are not logically omniscient, they seem to frequently use such a pattern of inference, at least in simple cases. Nevertheless, this kind of reasoning certainly can be attributed to perfect reasoners.
²The proofs of all cited theorems can be found in (Rychlik 1991).
Summarizing the above discussion, the perfectly reasoning agent treats defaults as arguments, choosing only those defaults whose consequents cause the progress of the process of generating her own beliefs. The question remains how this new constraint imposed on applicability criteria for defaults changes the notion of an extension. Not surprisingly, this new constraint does not change a-extensions. Let us recall that the only condition that must be satisfied by a default in order to include its consequent into an a-extension is that its justifications are consistent with some current set of beliefs. It does not matter, therefore, if we use all applicable defaults or only the "strongest", whose consequents together with axioms entail the consequents of all other potentially applicable defaults. Of course, it might happen that there are no "strongest" defaults, but if they exist, then in both cases we would generate the same set of a-extensions. On the other hand, b-extensions are vulnerable to this new policy of applying defaults.
From the definition of a b-extension, it follows that in applying a default we have to check not only whether all of its justifications are consistent with some current set of beliefs, but additionally whether its consequent does not interfere with the justifications of the previously applied defaults. It might happen, therefore, that restricting the application of defaults only to the "strongest" unblocks some other defaults which would otherwise remain inapplicable.
Let us formalize our intuitions concerning rationally thinking agents and redefine the notion of an extension for default theories.
Definition 5 By an indexed formula we will understand a pair (α, δ) where α is a formula and δ is a default or a special symbol ε.
Definition 6 If S is a set of indexed formulae, then
K(S) = {(α, ε) ∈ S},
F(S) = {α : (α, δ) ∈ S},
D(S) = {δ ≠ ε : (α, δ) ∈ S}.
The next two definitions formalize what we will understand by a "stronger" default.
Definition 7 We say that an indexed formula (α, δ) is weakly subsumed by a set S of indexed formulae if and only if there is S' ⊆ S such that (α, δ) ∉ S', F(S') ⊢ α and for every default ξ ∈ D(S') there is a sequence S0, ..., Sn ⊆ S satisfying the following conditions:
(a) for every 0 ≤ i ≤ n, (α, δ) ∉ Si,
(b) F(S0) ⊢ pre(ξ),
(c) for every 1 ≤ i ≤ n, F(Si) ⊢ PRE(D(S_{i-1})),
(d) Sn = ∅.
To illustrate the above notion, let us consider two sets of indexed formulae: S' = {s1, s2, s3} and S'' = {s1, s2, s3, s4} where
s1 = (q, [T : p/q]),
s2 = (r, [q : r/r]),
s3 = (s ∧ q, [r : s/s ∧ q]),
s4 = (q ∧ t, [T : t/q ∧ t]).
Let us note that s1 is not weakly subsumed by S'. Although we have that F({s3}) ⊢ q and F({s2, s3}) ⊢ q, the only sequence satisfying conditions (a)-(d) for the default of s3, namely the sequence {s2}, {s1}, ∅, fails to satisfy condition (a).
On the other hand, s1 is weakly subsumed by S'', because F({s4}) ⊢ q and, for the default of s4, the sequence consisting of only one empty set of indexed formulae satisfies conditions (a)-(d).
Definition 8 We say that an indexed formula ζ is subsumed by a set S of indexed formulae if and only if there is S' ⊆ S such that ζ is weakly subsumed by S' and no element of S' is weakly subsumed by S.
Of course, if an indexed formula is subsumed by a set of indexed formulae, then it is also weakly subsumed by this set. However, the converse is not true. As an example, let us consider the following infinite set of indexed formulae: S = {s0, s1, ...} where
si = (q0 ∧ ... ∧ qi, [T : p/q0 ∧ ... ∧ qi]), for all i ≥ 0.
It is easy to see that for every si ∈ S, {s_{i+1}} satisfies conditions (a)-(d), which means that si is weakly subsumed by S, for any i ≥ 0. For the same reason, no si is subsumed by S.
Definition 9 Let Δ = (W, D) be a default theory. For any set S of formulae and any set T of indexed formulae, define Γ¹_Δ(S, T) and Γ²_Δ(S, T) as the minimal sets satisfying the following conditions:
- W ⊆ Γ¹_Δ(S, T),
- Th(Γ¹_Δ(S, T)) = Γ¹_Δ(S, T),
- (α, ε) ∈ Γ²_Δ(S, T) for all α ∈ W,
- if δ = [α : β1, ..., βn / γ] ∈ D, α ∈ Γ¹_Δ(S, T), for all φ ∈ JUS(D(T)) ∪ {β1, ..., βn}, S ∪ {γ} ⊬ ¬φ, and (γ, δ) is not subsumed by T, then
  - γ ∈ Γ¹_Δ(S, T),
  - (γ, δ) ∈ Γ²_Δ(S, T).
We say that E is a c-extension of Δ with respect to F if and only if E = Γ¹_Δ(E, F) and F = Γ²_Δ(E, F).
As in the case of b-extensions, Γ¹ corresponds, roughly speaking, to beliefs. Γ² can be viewed as some sort of a catalog in which all sources of information, that is axioms and applied defaults, are recorded. It essentially contains a trace of the agent's reasoning.
Some of the flavor of the idea of recording the "history" of applying defaults is captured by some work on inheritance.³
The following theorem gives a more intuitive characterization of a c-extension and closely corresponds to the theorems proved in (Reiter 1980) and (Lukaszewicz 1988).
Theorem 1 If Δ = (W, D) is a default theory, then E is a c-extension for Δ with respect to a set F of indexed formulae if and only if
E = ⋃_{i=0}^∞ Ei and F = ⋃_{i=0}^∞ Fi
where E0 = W and F0 = {(α, ε) : α ∈ W}, and for i ≥ 0
E_{i+1} = Th(Ei) ∪ {γ : C} and F_{i+1} = Fi ∪ {(γ, δ) : C},
and C stands for the condition that there is δ = [α : β1, ..., βn / γ] ∈ D such that α ∈ Ei, for every φ ∈ JUS(D(F)) ∪ {β1, ..., βn}, E ∪ {γ} ⊬ ¬φ, and (γ, δ) is not subsumed by F.
A simple conclusion that follows from Definition 9 is that every c-extension is a set of formulae that are entailed by the axioms and the consequents of applied defaults of an underlying default theory.
Theorem 2 Suppose E is a c-extension of some default theory with respect to F. Then E = Th(F(F)).
As we may expect, c-extensions, just as b-extensions, also satisfy the next two theorems.
Theorem 3 Every default theory has a c-extension.
Theorem 4 (Semimonotonicity) Let D and D' be sets of defaults such that D ⊆ D', and suppose that Δ = (W, D) has a c-extension E. Then Δ' = (W, D') has a c-extension E' such that E ⊆ E'.
There are many other similarities between the notions of a b- and c-extension. It is worth mentioning, for example, that like b-extensions, c-extensions need not be maximal sets of beliefs, and hence two different c-extensions do not necessarily have to represent orthogonal sets of beliefs. As an example, let us consider the theory
(∅, {d1 = [T : q/¬p], d2 = [T : r/¬p ∧ ¬r]}).   (2)
This theory has two c-extensions E1 ⊂ E2 with respect to F1 and F2, respectively, where E1 = Th({¬p}), F1 = {(¬p, d1)}, E2 = Th({¬p ∧ ¬r}) and F2 = {(¬p ∧ ¬r, d2)}. E1 and E2 are also the only b-extensions of (2).
The only a-extension of (2) is E2.
³Some review material concerning this issue can be found in (Rychlik 1989).
There are default theories, however, some of whose b-extensions are not maximal sets of beliefs, whereas all their c-extensions are. Let us examine an example from (Lukaszewicz 1988) in which the following theory is presented:
({p}, {d1 = [T : r/p], d2 = [T : q/r]}).   (3)
This theory has two b-extensions E1 ⊂ E2 with respect to F1 and F2, respectively, where E1 = Th({p}), F1 = {r}, E2 = Th({p, r}), and F2 = {q}. But E2 with respect to {(r, d2)} is the only c-extension of (3). E2 is also the only a-extension of (3).
Although examples (2) and (3) are very similar, there is one important difference. In (2), a rationally thinking agent believes that ¬p holds, but she can choose between two arguments that support this belief. One of them blocks the assumption that r holds. In (3), there is no need for her to justify p using defaults, because she already knows it, and therefore she can conclude r. We say that in such a case the agent shows the ability to rationally maximize her own beliefs, applying as few defaults as possible. These intuitions are formally described below.
Definition 10 Let E be an extension⁴ of a default theory Δ. By GD(E, Δ) we will understand all defaults used in generating E. If E is a c-extension of Δ with respect to F, then GD(E, Δ) = D(F).
Definition 11 Let Δ = (W, D) be a default theory and E an extension of Δ. We say that a default δ is superfluous in GD(E, Δ) if and only if (con(δ), δ) is subsumed by {(α, ε) : α ∈ W} ∪ {(con(ξ), ξ) : ξ ∈ GD(E, Δ)}.
Definition 12 Let E be an extension of a default theory Δ = (W, D). We say that E is a rationally maximal set of beliefs if and only if there is no extension E' ⊆ E of Δ such that there is a default δ ∈ GD(E', Δ) which is superfluous in GD(E', Δ).
Theorem 5 Every c-extension is a rationally maximal set of beliefs.
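The fixed-point tests used throughout this discussion can be run mechanically on small theories. The following is a minimal sketch of my own (not from the paper): it handles only literal-only default theories, where W and all prerequisites, justifications and consequents are single literals, so the deductive closure Th(S) degenerates to S itself; the tautology T is encoded, by assumption, as a dummy literal "T" placed in W.

```python
# Sketch (my own, not from the paper): brute-force search for
# a-extensions of a *literal-only* default theory.  Defaults are
# triples (pre, justifications, con) of literal strings; "~" is
# classical negation; "T" is a dummy literal standing for truth.
from itertools import combinations

def neg(lit):
    """Classical negation on literal strings: p <-> ~p."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def gamma(W, D, S):
    """Gamma_Delta(S) of Definition 3, specialized to literals: the
    least set containing W and closed under defaults whose prerequisite
    is derived and whose justifications are all consistent with S."""
    out = set(W)
    changed = True
    while changed:
        changed = False
        for pre, justs, con in D:
            if pre in out and con not in out and \
               all(neg(j) not in S for j in justs):
                out.add(con)
                changed = True
    return out

def a_extensions(W, D):
    """E is an a-extension iff E = Gamma_Delta(E); candidate sets range
    over the literals mentioned in W and in the consequents."""
    lits = sorted(set(W) | {con for _, _, con in D})
    return [set(c) for r in range(len(lits) + 1)
            for c in combinations(lits, r)
            if gamma(W, D, set(c)) == set(c)]

# Theory (3): W = {p}, d1 = [T : r / p], d2 = [T : q / r];
# the unique a-extension corresponds to Th({p, r}).
print(a_extensions({"T", "p"}, [("T", ("r",), "p"), ("T", ("q",), "r")]))
# A theory with no a-extension: the single default [T : p / ~p].
print(a_extensions({"T"}, [("T", ("p",), "~p")]))  # prints []
```

The second call illustrates the nonexistence phenomenon discussed above: no candidate set is a fixed point of the operator.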
One may wonder whether our approach offers any advantages over Reiter's or Lukaszewicz's simpler formulations. The main advantage of a c-extension over an a-extension is that the former is guaranteed to exist for any default theory, which is not the case for the latter. On the other hand, for many representational problems which involve default reasoning, our new formalization might give solutions that are less intuitive than those supported by Reiter's or Lukaszewicz's formalizations (and vice versa).
In most knowledge representation systems, the intuitive meaning of a default [α : β1, ..., βn / γ] is that normally, if α is satisfied, then γ is also satisfied unless some βi is assumed to be false. Let us return to the example given above. The first default in (3) says that normally p is satisfied unless r is assumed to be false. In other words, the cases where p is satisfied and r is known to be false are outnumbered by the cases in which p and r are both satisfied. Given (3), a common sense reasoner would conclude that, most probably, r is satisfied. In this case, she would prefer E1 over E2. Suppose, however, that she is presented with the following default theory:
({p ∧ s}, {d1 = [T : r/p], d2 = [T : q/r], d3 = [p ∧ s : ¬r/¬r]}).   (4)
The above theory is similar to (3), but additionally d3 says that normally r is false whenever p and s are satisfied. Since the only axiom of (4) says that p and s are true, r is very likely false. A common sense reasoner would, therefore, choose an extension which supports ¬r. Notice that (4) has two b-extensions, E1 = Th({p ∧ s}), which does not conform to our intuitions, and E2 = Th({p ∧ s, ¬r}). E2 is the only a-extension and c-extension of (4).
Finally, let us consider an example of a default theory
({e ∧ ce}, {d1 = [e : g ∧ ¬re/g], d2 = [ce : re ∧ ¬r/re]})   (5)
given in (Besnard 1989).
Assume the following interpretation of the predicate symbols:
e - Clyde is an elephant,
ce - Clyde is a circus elephant,
re - Clyde is a royal elephant,
g - Clyde is gray,
r - Clyde is rare.
This theory has two b-extensions, which are also the only c-extensions. One of them contains the fact that Clyde is a royal elephant. The other one supports the assumption that Clyde is gray, which seems to be unintuitive. The only a-extension of (5) contains the fact that Clyde is a royal elephant.
The above examples suggest that neither Reiter's, nor Lukaszewicz's, nor our approach to default reasoning can be preferred overall. Let us end our discussion with the following result:
Theorem 6 If A, B and C are sets of a-extensions, b-extensions and c-extensions of some default theory, respectively, then A ⊆ C ⊆ B.
Semantics
In this section, we will provide a model-theoretic semantics of c-extensions for default theories. We will use a technique proposed in (Etherington 1986) and (Lukaszewicz 1988).
Any nonmonotonic logic can be viewed as a result of transforming some base standard logic by a selection strategy defined on models.⁵ For default logics, this selection strategy consists, roughly speaking, in restricting the set of models of the underlying default theory. Suppose Δ = (W, D) is a default theory. Applying any default δ1 ∈ D whose conclusion does not follow directly from W causes the class M_∅ of all models of W to be narrowed to the class M_{δ1} of models that satisfy the consequent of δ1. If we further apply another default δ2 ∈ D, the class M_{δ1} will be narrowed to the class M_{δ1,δ2} which contains those models from M_{δ1} that additionally satisfy the consequent of δ2. And so on. In this way we will obtain a sequence M_∅ ⊇ M_{δ1} ⊇ M_{δ1,δ2} ⊇ .... Every extension of Δ has its models among the maximal elements of such sequences.
⁴We mean here either an a-, b-, or c-extension.
⁵See, for example, (Rychlik 1990).
Let us formalize these intuitions as follows.
Definition 13 Let M be a set of models for some set of formulae and F a set of indexed formulae. A default δ = [α : β1, ..., βn / γ] is applicable with respect to (M, F) if and only if for every m ∈ M, m ⊨ α; for every φ ∈ JUS(D(F)) ∪ {β1, ..., βn} there is m ∈ M such that m ⊨ γ ∧ φ; and (γ, δ) is not subsumed by F.
Definition 14 Let M and F be as above. For a default δ = [α : β1, ..., βn / γ], we construct two sequences X0, X1, ... and Y0, Y1, ... as follows:
X0 = {F ∪ {(γ, δ)}},
Y0 = {ζ ∈ X0 \ K(F) : ζ is subsumed by X0},
and for i ≥ 0
X_{i+1} = ⋃_{X ∈ Xi} ⋃_{ζ ∈ Yi} {X \ {ζ}},
Y_{i+1} = {ζ ∈ X_{i+1} \ K(F) : ζ is subsumed by X_{i+1}}.
Put X = ⋂_{i=0}^∞ Xi. The result of δ in (M, F) (written δ(M, F)) is either:
(a) {(M \ {m : m ⊨ ¬γ}, x) : x ∈ X} if and only if δ is applicable with respect to (M, F) and (γ, δ) ∉ F,
(b) {(M, F)} otherwise.
Definition 15 Let P be a set of pairs (M, F) where M and F are defined as above. We say that δ(P) = ⋃_{p ∈ P} δ(p) is a result of δ in P.
Definition 16 Let M and F be as above. We say that (M, F) is stable with respect to a set D of defaults if and only if for all δ ∈ D, δ(M, F) = {(M, F)} and every element of F \ K(F) is not subsumed by F.
Definition 17 Let M and F be as above. Suppose (δi) is a sequence of defaults. By (δi)(M, F) we denote ⋃ Ri where R0 = {(M, F)} and for i ≥ 0, R_{i+1} = δ_{i+1}(Ri).
Figure 1: Network corresponding to (6).
Definition 18 Let M and F be as above. Suppose N is a set of models for some set of formulae and D a set of defaults. We say that (M, F) is accessible from N with respect to D if and only if there is a sequence (δi) of defaults from D such that (M, F) ∈ (δi)(N, K(F)).
Theorem 7 (Soundness) If E is a c-extension for a default theory Δ = (W, D), then there is some set F of indexed formulae such that ({m : m ⊨ E}, F) is stable with respect to D and accessible from the set of models of W.
Theorem 8 (Completeness) Let Δ = (W, D) be a default theory. If (M, F) is stable with respect to D and accessible from the set of models of W, then M is the set of models for some c-extension of Δ.
We can envisage the semantics of default theories as transition networks (Etherington 1986, Lukaszewicz 1988) whose nodes stand for pairs (M, F) and whose arcs are labeled by defaults. If Δ = (W, D) is a default theory, then M represents some subset of all models of W. The root node of the network for Δ is the node ({m : m ⊨ W}, {(α, ε) : α ∈ W}). From the node (M, F), for every δ = [α : β1, ..., βn / γ] ∈ D, arcs labeled δ lead to the nodes δ(M, F) as defined in Definition 14. Figure 1 represents a network corresponding to a default theory
(∅, {d1 = [T : q/p], d2 = [T : q/p ∧ q]}).   (6)
Figure 2 shows a network corresponding to (3).
Figure 2: Network corresponding to (3).
Conclusion
In this paper we introduced an alternative formalization of default reasoning. We presented, in the form of a fixed-point construct, a new definition of the notion of an extension for default theories. Then we characterized this notion from the semantic point of view. We also made the observation that extensions redefined in such a way represent so-called rationally maximal sets of beliefs. We motivated our approach by noticing that perfect reasoners tend not to use redundant arguments while explaining their beliefs about the world. Of course, we know that there might be no single formalism in which we could reflect our intuitions about how a perfectly reasoning agent should draw her inferences. A clash of different intuitions in formalizing multiple inheritance with exceptions is a good example supporting this observation. It seems, however, that it is not difficult to modify a default logic so that it can properly handle many of these intuitions.
References
Besnard, P. 1989. An Introduction to Default Logic. Symbolic Computation Series, Berlin: Springer-Verlag.
Etherington, D. 1986. Reasoning from Incomplete Information. Pitman Research Notes in Artificial Intelligence, London: Pitman Publishing Limited.
Lukaszewicz, W. 1988. Considerations on default logic: an alternative approach. Computational Intelligence, 4:1-16.
Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence, 13:81-132.
Rychlik, P. 1989. Multiple inheritance systems with exceptions. Artificial Intelligence Review, 3:159-176.
Rychlik, P. 1990. The generalized theory of model preference (preliminary report). In Proceedings of the Eighth National Conference on Artificial Intelligence, 615-620. Menlo Park, Calif.: AAAI Press.
Rychlik, P. 1991. Modifications of default logic. Technical Report, Institute of Computer Science, Polish Academy of Sciences. Forthcoming.
Michael Gelfond
Computer Science Department
University of Texas at El Paso
El Paso, Texas 79968
cvOO@utep.bitnet
Abstract
The purpose of this paper is to expand the syntax and semantics of logic programs and deductive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded by a new modal operator K (where, for the set of rules T and formula F, KF stands for "F is known to a reasoner with a set of premises T"). Theories containing such an operator will be called strongly introspective. We will define the semantics of such theories (which expands the semantics of deductive databases from [Gelfond and Lifschitz 1990b]) and demonstrate the applicability of strongly introspective theories to the formalization of some forms of commonsense reasoning.
Introduction
In recent years a substantial amount of work was done to investigate the applicability of logic programming based formalisms to knowledge representation (for a good overview see [Kowalski 1990]). The results are very promising, but some substantial problems related to this approach still remain unsolved. One such problem, the inability to deal directly with incomplete information, was discussed in [Gelfond and Lifschitz 1990a]. The language of extended logic programs proposed there partially overcomes this limitation by distinguishing between two types of negation: "classical" (or "strong") negation ¬ (where ¬F can be interpreted as "F is known to be false"), and negation as failure not (where not F can be interpreted as "F is not known to be true").¹ The semantics of extended logic programs is based on the notion of answer sets, sets of literals which can be viewed as theories satisfying the corresponding program.
For extended programs without classical negation, their answer sets coincide with stable models from [Gelfond and Lifschitz, 1988]. In [Gelfond and Lifschitz, 1990a] we considered primarily well-behaved extended logic programs, i.e. extended programs with unique consistent answer sets. The answer such a program is supposed to return to a ground query Q is yes, no, or unknown, depending on whether the answer set contains Q, ¬Q, or neither. The existence of several answer sets indicates that the corresponding program P has several possible interpretations, i.e. it is possible for a rational reasoner to construct several theories satisfying P. Such a multiplicity becomes a norm rather than an exception if the notion of an extended logic program and its answer set semantics is expanded to disjunctive databases (see [Gelfond and Lifschitz 1990b], [Gelfond 1990]), collections of rules of the form:
A1 or ... or An ← B1, ..., Bm, not C1, ..., not Ck
where A's, B's, and C's are atoms p or their "classical" negations ¬p. (Notice the use of the symbol or instead of classical ∨. The meaning of the connective or, called epistemic disjunction, is given by the semantics of disjunctive databases and differs from that of ∨. The meaning of a formula A ∨ B is "A is true or B is true", while a rule A or B ← is interpreted epistemically and means "belief A or belief B".) The answer such a database T returns to a ground query Q is yes if Q belongs to all answer sets of T, no if all answer sets contain ¬Q, and unknown otherwise. (The last answer can be split into several more informative answers, but the above alternatives are sufficient for the purpose of this paper.)
*This research was partially supported by NSF grant #IRI89-06516.
¹A similar approach was independently developed and investigated in [Pearce and Wagner, 1989], [Wagner, 1990].
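The yes/no/unknown convention just described is easy to state operationally. Here is a small sketch of my own (a hypothetical helper, not from the paper), with answer sets given explicitly as sets of ground-literal strings and classical negation written as "~":

```python
# Sketch (my own helper, not from the paper): querying a database
# given its answer sets explicitly; "~" marks classical negation.
def answer(query, answer_sets):
    """yes if query is in every answer set, no if its classical
    negation is in every answer set, unknown otherwise."""
    if all(query in s for s in answer_sets):
        return "yes"
    if all(("~" + query) in s for s in answer_sets):
        return "no"
    return "unknown"

print(answer("q", [{"q", "r"}, {"q"}]))    # yes
print(answer("p", [{"~p", "q"}, {"~p"}]))  # no
print(answer("s", [{"s"}, {"~s"}]))        # unknown
```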
In [Gelfond and Lifschitz 1990] we argued that for well-behaved programs the presence of two types of negation allows one to deal in a natural and convenient way with incomplete information. This, however, is no longer the case if the corresponding programs (or deductive databases) are not well-behaved. The purpose of this paper is to expand the notions of extended logic programs and deductive databases to allow for the correct representation of incomplete information in the presence of multiple answer sets.
We will start by demonstrating the problem using the following example from [Gelfond and Lifschitz 1990]:
Example 1 Consider a collection of rules
1. Eligible(x) ← HighGPA(x)
2. Eligible(x) ← Minority(x), FairGPA(x)
3. ¬Eligible(x) ← ¬FairGPA(x)
4. Interview(x) ← not Eligible(x), not ¬Eligible(x)
used by a certain college for awarding scholarships to its students. The first three rules are self-explanatory, while the fourth rule can be viewed as a formalization of the statement:
(*) "The students whose eligibility is not determined by the first three rules should be interviewed by the scholarship committee".
In its epistemic form the rule says: Interview(x) if neither Eligible(x) nor ¬Eligible(x) is known. We assume that this program is to be used in conjunction with a database DB consisting of literals specifying values of the predicates Minority, HighGPA, FairGPA. Consider, for instance, DB consisting of the following two facts about one of the students:
5. FairGPA(ann) ←
6. ¬HighGPA(ann) ←
(Notice that DB contains no information about the minority status of Ann). Intuitively, it is easy to see that rules (1)-(6) allow us to conclude neither Eligible(ann) nor ¬Eligible(ann); therefore the eligibility of Ann for the scholarship is undetermined and, by rule (4), she must be interviewed.
Formally this argument is reflected by the fact that program T1 consisting of rules (1)-(6) has exactly one answer set:
{FairGPA(ann), ¬HighGPA(ann), Interview(ann)}.
The situation changes significantly if disjunctive information about students is allowed to be represented in the database. Suppose, for instance, that we need to augment rules (1)-(3) by the following information:
(**) Mike's GPA is fair or high.
There are several possible ways to represent this information. The most natural one seems to be to use the language and semantics of deductive databases from [Gelfond and Lifschitz, 1990b]. The corresponding deductive database T2 consists of rules (1)-(3) augmented by the disjunction
7. FairGPA(mike) or HighGPA(mike) ←
T2 has two answer sets:
A1 = {HighGPA(mike), Eligible(mike)} and A2 = {FairGPA(mike)},
and therefore the reasoner modeled by T2 does not have enough information to establish Mike's eligibility for the scholarship (i.e. the answer to Eligible(mike) is unknown). If we now expand this theory by (*), we expect the new theory T3 to be able to answer yes to the query Interview(mike). It is easy to see, however, that if (*) is represented by (4) this goal is not achieved. The resulting theory T3, consisting of (1)-(4) and (7), has two answer sets
A3 = {HighGPA(mike), Eligible(mike)} and
A4 = {FairGPA(mike), Interview(mike)},
and therefore the answer to the query Interview(mike) is unknown. The reason, of course, is that (4) is too weak to represent (*). The informal argument we are trying to capture goes something like this: theory T3 answers neither yes nor no to the query Interview(mike). Therefore, the answer to this question is undetermined, and, by (*), Mike should be interviewed. To formalize this argument our system should have a more powerful introspective ability than the one captured by the notion of answer sets from [Gelfond and Lifschitz 1990b]. Roughly speaking, instead of looking at only one possible set of beliefs sanctioned by T, it should be able to look at all such sets.
Remark. The situation will not change if (**) is represented by modeling disjunctions in the language of logic programs. For instance, replacing (7) by the two rules
FairGPA(mike) ← not HighGPA(mike)
HighGPA(mike) ← not FairGPA(mike)
will not change the answer sets of the program.
In this paper we extend the syntax of deductive databases from [Gelfond and Lifschitz 1990b] in two directions. Firstly, following [Lloyd and Topor 1984] and [Wagner 1990], we allow the rules to contain other types of formulae besides literals. Secondly, and more importantly, we expand the language by a modal operator K (where for any set of rules T and formula F, KF stands for "F is known to a reasoner with a given set of premises T"). Theories containing such an operator will be called strongly introspective. We will define the semantics of such theories (which expands the semantics of deductive databases from [Gelfond and Lifschitz 1990b]) and demonstrate the applicability of strongly introspective theories to the formalization of some forms of commonsense reasoning.
Definitions
Let us consider a language L consisting of predicate symbols p, q, ..., object variables, function symbols, logical connectives &, ¬, not, ∃, and the modal operator K. Formulae of L will be defined in the usual way. Formulae not containing modal operators will be called objective formulae, while those starting with K will be called subjective. Formulae of the form p(a) will be called objective atoms. By objective literals we will
(A can be thought of as a collection of possible belief sets of a reasoner while W represents his current (working) set of beliefs.) We will inductively define the notion of truth (|=) and falsity (=|) of formulae of L w.r.t. a pair M = <A, W>.

Definition 1
M |= p(a) iff p(a) ∈ W
M =| p(a) iff ¬p(a) ∈ W
M |= F&G iff M |= F and M |= G
M =| F&G iff M =| F or M =| G
M |= ∃xF iff there is a ground term t such that M |= F(t)
M =| ∃xF iff for every ground term t, M =| F(t)
M |= ¬F iff M =| F
M =| ¬F iff M |= F
M |= not F iff M |/= F
M =| not F iff M |= F
M |= KF iff <A, Ak> |= F for every Ak from A
M =| KF iff <A, Ak> =| F for some Ak from A

It is easy to see that according to this definition the truth of subjective sentences does not depend on W while the truth of objective ones does not depend on A, i.e. we have a notion of an objective formula being true (false) in W and of a subjective formula being true (false) in A. We will denote the former by W |= F (W =| F) and the latter by A |= F (A =| F).
In our further discussion we will expand language L by the connectives or and ∀ and a modal operator M (where MF will be read as "F may be believed") defined as follows:
MF iff ¬K¬F
F or G iff ¬(¬F & ¬G)
∀xF iff ¬∃x¬F
The language and the satisfiability relation described above together with the notion of a rule from logic programming will be used to provide a specification of a reasoner with the desired properties. (This view on the role of logic in nonmonotonic reasoning seems to be similar to the one advocated by H. Levesque in [Levesque 1990].) The formal notion of such a specification is captured by the following definitions:

Definition 2 By an epistemic specification we will mean a collection of rules of the form
F ← G1, ..., Gm
where F is objective and G's are arbitrary formulae.

Now we will define a collection A of sets of literals (which can be viewed as vivid theories in the terminology of [Levesque 1986]) satisfying an epistemic specification T. We will call such a collection a world view of T and its elements belief sets of T. The precise definition of these notions will be given in several steps:

Definition 3 Let T0 be an epistemic specification consisting of rules of the form F ←. A set W of literals is called a belief set of T0 iff W is a minimal set with the property W |= F for every rule from T0. If W contains a pair of complementary literals then W = Lit.

Example 2 Let T consist of clauses:
1. p(a) or p(b) ←
2. ¬p(b) ←
3. not q(b) ←
4. r(a) or ¬r(b) ←
5. ∃x q(x) ←
It is easy to see that T has two belief sets:
{p(a), ¬p(b), q(a), r(a)}
{p(a), ¬p(b), q(a), ¬r(b)}

Definition 4 Let T be an arbitrary epistemic specification, W be a set of literals in the language of T and A be a collection of sets of literals in this language. For every M = <A, W> by TM we will denote the epistemic specification obtained from T by:
1. Removing from the premises of rules of T all formulae G such that M |= G.
2. Removing all remaining rules with non-empty premises.
Then A will be called a world view of T iff A = {W : W is a belief set of TM} where M = <A, W>.

Example 3 Let T consist of the formulae
1. P(a) or P(b) ← ∃x Q(x), not Q(d)
2. Q(c) ←
It is easy to see that this specification has a unique world view consisting of two belief sets {Q(c), P(a)} and {Q(c), P(b)}.

Example 4 Let T consist of the formulae
1. Pa or Pb ←
2. Pc ←
3. Qd ←
4. ¬Px ← not MPx
It is easy to show that A = {{Qd, Pa, Pc, ¬Pd}, {Qd, Pb, Pc, ¬Pd}} is the only world view of T.

Example 5 Let T consist of the formulae
1. Pa ←
2. Qb or Qc ←
3. Rx ← not KQx
4. Sx ← not MQx
The only world view of T is A = {{Pa, Qb, Ra, Rb, Rc, Sa}, {Pa, Qc, Ra, Rb, Rc, Sa}}.

Example 6 Let T = {p ← not Kp}. It is easy to see that T does not have a world view.

Example 7 Let T = {p ← not Mq, q ← not Mp}. This specification is satisfied by two world views: A1 = {{q}} and A2 = {{p}}.

Definition 5 We will say that a world view of an epistemic specification T is consistent if it does not contain a belief set consisting of all literals.
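The truth conditions of Definition 1, together with the derived operator M, admit a direct executable reading for the propositional case. The sketch below covers only the truth side (|=) for literals, not, &, K and M; strong negation is encoded by prefixing a literal with "-", and the world view used is a hypothetical one, not one of the paper's examples:

```python
# Formulas: ("lit", l), ("not", F), ("and", F, G), ("K", F), ("M", F).
# holds(A, W, f) reads off M |= f from Definition 1, where A is the
# collection of belief sets and W the working set of literals.

def holds(A, W, f):
    tag = f[0]
    if tag == "lit":               # W |= l  iff  l in W  ("-p" is a literal)
        return f[1] in W
    if tag == "not":               # M |= not F  iff  M |/= F
        return not holds(A, W, f[1])
    if tag == "and":
        return holds(A, W, f[1]) and holds(A, W, f[2])
    if tag == "K":                 # KF: F holds in every belief set
        return all(holds(A, Ak, f[1]) for Ak in A)
    if tag == "M":                 # MF = not-K-not-F: F holds in some belief set
        return any(holds(A, Ak, f[1]) for Ak in A)
    raise ValueError(tag)

A = [{"p", "r"}, {"q", "r"}]       # a hypothetical world view
W = A[0]
# holds(A, W, ("K", ("lit", "r")))           -> True  (r in every belief set)
# holds(A, W, ("K", ("lit", "p")))           -> False
# holds(A, W, ("M", ("lit", "p")))           -> True
# holds(A, W, ("not", ("K", ("lit", "p")))) -> True
```

Note that, as in the definition, the K and M clauses never consult W, while the literal clause never consults A.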
Definition 6 We will say that an epistemic specification is consistent if it has at least one consistent world view.

Example 8 Let T consist of the formulae p and ¬p. It is easy to see that theory T is inconsistent. Another inconsistent specification is given in Example 6.

Definition 7 A specification is called well-defined if it has exactly one consistent world view.
From now on we will only consider well-defined specifications.

Definition 8 Let T be a specification and A = {Ai} be its world view. A formula F is true in T (T |= F) iff <A, Ai> |= F for every Ai from A.
This definition can be used to define the range of possible answers to a query Q. For the purposes of this paper we will limit ourselves to the simple case when the answer to Q is yes if T |= Q, no if T |= ¬Q, and unknown otherwise.
The following two propositions establish the relationship between deductive databases and epistemic specifications.

Proposition 1 Let T be an epistemic specification consisting of clauses of the form
F ← G1, ..., Gm, not Em+1, ..., not Ek.
Then
1. If F, G's and E's are atoms (i.e. T is a general logic program) then A is a world view of T iff A is the set of all stable models of T.
2. If G's and E's are objective literals and F is a disjunction of objective literals (i.e. T is a deductive database) then A is a world view of T iff A is the set of all answer sets of T.

Proposition 2 Let T be a well-defined specification with the world view A. Then A is a singleton iff W ∈ A is the unique answer set of the theory T' obtained from T by deleting the modal operators K and M.

Applications
In this section we will discuss several applications of epistemic specifications to the formalization of commonsense reasoning.
(a) Representing the Unknown
We will start by demonstrating how statements of the form "unknown p" can be represented by strongly introspective formulae. We suggest representing formulae of this form as a conjunction of the formulae not K p and not K ¬p.
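For ground literal queries, the yes/no/unknown policy of Definition 8 reduces to membership tests over the world view. A minimal sketch (the world view used is the one that arises when Example 1 is revisited with the introspective interview rule):

```python
def comp(l):
    """Strong negation: p <-> -p."""
    return l[1:] if l.startswith("-") else "-" + l

def answer(world_view, q):
    """Definition 8's query policy for a ground literal q."""
    if all(q in Ai for Ai in world_view):
        return "yes"                  # T |= q
    if all(comp(q) in Ai for Ai in world_view):
        return "no"                   # T |= -q
    return "unknown"

A = [{"HighGPA(mike)", "Eligible(mike)", "Interview(mike)"},
     {"FairGPA(mike)", "Interview(mike)"}]
# answer(A, "Interview(mike)") -> "yes"
# answer(A, "Eligible(mike)")  -> "unknown"
```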
Let us go back to Example 1 from the introduction to illustrate this point.
Example 1 revisited. Let us consider the theory consisting of rules (1)-(3) and (7) from Example 1. To obtain the proper formalization of (*) we will just replace statement (4) by
4'. Interview(x) ← not K Eligible(x), not K ¬Eligible(x)
which corresponds closely to the intuitive meaning of (*). It is easy to check that the theory T consisting of rules (1)-(3), (4'), and (7) has the world view A = {A1, A2} where
A1 = {HighGPA(Mike), Eligible(Mike), Interview(Mike)}
A2 = {FairGPA(Mike), Interview(Mike)}
Therefore T answers unknown to the query Eligible(Mike) and yes to the query Interview(Mike), which is the intended behaviour of the system.
(b) Closed World Assumption
Now we will illustrate how strong introspection can be used to represent the Closed World Assumption (CWA) of [Reiter 1978] for non-Horn databases (i.e. databases containing disjunctions). The question of formalizing this assumption has been extensively studied by various authors (for a good review see [Przymusinska and Przymusinski 1990]). [Minker 1982] gives perhaps the most widely known form of the CWA for non-Horn databases. As was noticed in [Ross and Topor 1988], this assumption tends to interpret disjunction as exclusive. Some attempts to remedy this problem can be found in [Chan 1989], [Sakama 1989] and [Gelfond 1990]. Full discussion of these and other approaches is beyond the scope of this paper. Instead, we suggest a formalization of CWA that we believe is more general. We will start with the following example:
Example 9 [Chan 1989]. Suppose we are given the following information:
(*) "If a suspect is violent and is a psychopath then the suspect is extremely dangerous. This is not the case if the suspect is not violent or not a psychopath"
which is used in conjunction with a database DB consisting of literals specifying values of the predicates violent and psychopath.
Let us also assume that DB contains complete positive information about these predicates, i.e. ground atoms with the predicate symbols violent and psychopath are assumed to be false if there is no evidence to the contrary. This statement can be viewed as an informal description of Reiter's Closed World Assumption (CWA).
The information from (*) can be easily expressed by three statements:
1. dangerous(x) ← violent(x), psychopath(x)
2. ¬dangerous(x) ← ¬violent(x)
3. ¬dangerous(x) ← ¬psychopath(x)
Formalization of CWA is somewhat more problematic. In [Gelfond and Lifschitz 1990] we suggested to express CWA for a predicate P(x) by the rule
¬P(x) ← not P(x)
As expected this formalization works nicely for well-defined extended programs but is not suitable in the general case. To see the problem let us apply this idea to our example. CWA for the predicates violent and psychopath will look as follows:
4. ¬violent(x) ← not violent(x)
5. ¬psychopath(x) ← not psychopath(x)
Suppose that our DB contains the following information:
6. violent(john) ←
7. violent(mike) ←
8. psychopath(mike) ←
It is easy to check that the theory T1 consisting of clauses (1)-(8) is well-defined and has exactly one belief set A0:
{violent(john), violent(mike), psychopath(mike), ¬psychopath(john), ¬dangerous(john), dangerous(mike)}
which properly reflects our intuition.
The situation changes when disjunctive information is allowed in DB. Consider, for instance, a statement
9. violent(sam) or psychopath(sam) ←
and a specification T2 consisting of clauses (1)-(8) and (9). Notice that (9) is not an exclusive disjunction and therefore T2 does not seem to sanction the conclusion
10. ¬dangerous(sam).
But it is easy to see that T2 does imply (10), which seems to be overly optimistic. The problem is apparently caused by an incorrect formalization of CWA: the fact that not violent(sam) is true in one of the belief sets of T2 does not guarantee that, given T2, a rational reasoner does not have a reason to believe violent(sam). In the case of T2 such a reason may be given by the existence of a belief set containing violent(sam). This consideration leads us to the better representation of CWA for a predicate P, which is provided by the statement
¬P(x) ← not MP(x)
It is easy to see that for well-defined programs both formalizations of CWA coincide.
Let us now consider the theory T3 obtained from T2 by replacing statements (4) and (5) by
4'. ¬violent(x) ← not M violent(x)
5'. ¬psychopath(x) ← not M psychopath(x)
The resulting theory has exactly one world view A consisting of two belief sets A1 = A0 ∪ {violent(sam)} and A2 = A0 ∪ {psychopath(sam)}. T3 implies neither (10) nor its negation, and therefore the answer to the query dangerous(sam) is unknown.
(c) Integrity Constraints
We will finish our discussion of the applicability of strong introspection to the formalization of commonsense reasoning with an example demonstrating the utility of strong introspection for expressing integrity constraints.
Example 10. Let us assume that we are given the specification for a departmental database T:
(a) T should contain lists of professors, courses and teaching assignments for a CS department. Let us first assume that the department consists of professors named Sam and John and offers two courses, Pascal and Assembler, taught by Sam and John respectively.
(b) The above lists contain all the relevant positive information about the department known to us at a time.
(c) T must satisfy the following integrity constraint: Pascal is taught by at least one professor.
Part (a) of the specification is formalized as follows:
1. prof(sam) & prof(john) ←
2. class(pascal) & class(assembler) ←
3. teach(sam, pascal) & teach(john, assembler) ←
Part (b) of the specification can be viewed as the Closed World Assumption and represented as
4. ¬P(x) ← not MP(x)
for every predicate symbol P.
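The introspective CWA rule ¬P(x) ← not MP(x) discussed above has a simple operational reading: a ground atom is false by CWA exactly when it appears in no belief set of the world view, so that MP(x) fails. A minimal sketch, using illustrative data in the spirit of the departmental database (the world view shown is hypothetical):

```python
def cwa_complete(world_view, atoms):
    """Apply  -P(x) <- not M P(x):  add -p to every belief set
    for each ground atom p that occurs in no belief set."""
    believed_somewhere = set().union(*world_view)   # atoms with M p true
    negs = {"-" + a for a in atoms if a not in believed_somewhere}
    return [Ai | negs for Ai in world_view]

# Hypothetical world view with a disjunctive teaching assignment:
A = [{"teach(sam,pascal)"}, {"teach(john,pascal)"}]
atoms = {"teach(sam,pascal)", "teach(john,pascal)", "teach(sam,assembler)"}
completed = cwa_complete(A, atoms)
# Each belief set gains -teach(sam,assembler) only; neither teach(_,pascal)
# atom is negated, since each may be believed (M holds for it).
```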
Formalization of part (c) seems to be the less obvious task. The main difficulty is related to the lack of a universally accepted interpretation of the meaning and role of integrity constraints in knowledge representation. In this paper we will adopt the view on integrity constraints recently suggested by Reiter in [Reiter 1990]. According to Reiter an integrity constraint IC is a statement about the content of the knowledge base T (as opposed to IC being a statement about the world). T satisfies IC iff the answer to IC when viewed as a query to T is yes. A simple analysis of clause (c) from this standpoint shows that (c) can be interpreted in two different ways:
IC1: K ∃p (prof(p) & teach(p, pascal))
or
IC2: ∃p K (prof(p) & teach(p, pascal))
The first one says that the database knows that Pascal is taught by some professor (whose name can be unknown to the database), while the second one means that there is a person known to the database to be a professor teaching Pascal. It is easy to see that T1 consisting of rules (1)-(3) satisfies both integrity constraints.
If however we consider T2 obtained from T1 by replacing rule 3 by
3'. teach(sam, pascal) or teach(john, pascal) ←
the situation changes. It is easy to check that T2 satisfies IC1 but not IC2. This is of course the intended result, since this time the database does not know which professor will teach Pascal but knows that Pascal will be taught.
Remark. Even though the approach to the formalization of integrity constraints suggested in this paper is similar to the one of Reiter, there are some important differences: Reiter views a knowledge base as a first-order theory and a query as a statement of Levesque's modal logic (called KFOPCE). In our case knowledge base and queries are both epistemic formulae while the underlying logic is nonmonotonic.
Acknowledgments
I am grateful to Vladimir Lifschitz and Halina Przymusinska for comments on the first draft of this paper.
References
[Chan 1989] E. Chan, A Possible World Semantics for Non-Horn Databases, Research Report CS-89-47, University of Waterloo, Waterloo, Ontario, Canada, 1989.
[Gelfond 1990] M. Gelfond, Epistemic Approach to Formalization of Commonsense Reasoning, Research Report, University of Texas at El Paso, 1990.
[Gelfond and Lifschitz 1988] M. Gelfond and V. Lifschitz, The Stable Model Semantics for Logic Programming, Proceedings of the Fifth International Conference and Symposium on Logic Programming (R.A. Kowalski and K.A. Bowen, editors), vol. 2, pp. 1070-1080, 1988.
[Gelfond and Lifschitz 1990a] M. Gelfond and V. Lifschitz, Logic Programs with Classical Negation, Proceedings of the Seventh International Conference on Logic Programming, MIT Press, pp. 579-597, 1990.
[Gelfond and Lifschitz 1990b] M. Gelfond and V. Lifschitz, Classical Negation in Logic Programs and Deductive Databases, Research Report, University of Texas at El Paso and Stanford University, 1990.
[Kowalski 1990] R. A. Kowalski, Problems and Promises of Computational Logic, Proc. Symposium on Computational Logic, Springer-Verlag, 1990.
[Levesque 1986] H. Levesque, Making Believers out of Computers, Artificial Intelligence 30 (1986), pp. 81-108.
[Levesque 1990] H. Levesque, All I Know: A Study in Autoepistemic Logic, Artificial Intelligence 42 (1990), pp. 263-309.
[Lloyd and Topor 1984] J. Lloyd and R. Topor, Making Prolog More Expressive, Journal of Logic Programming 1984:3, pp. 225-240.
[Minker 1982] J. Minker, On Indefinite Databases and the Closed World Assumption, Lecture Notes in Computer Science 138, pp. 292-308, 1982.
[Pearce and Wagner 1989] D. Pearce and G. Wagner, Reasoning with Negative Information I: Strong Negation in Logic Programming, Technical Report, Gruppe für Logik, Wissenstheorie und Information, Freie Universität Berlin, 1989.
[Przymusinska and Przymusinski 1990] H. Przymusinska and T. Przymusinski, Semantic Issues in Deductive Databases and Logic Programs. In R. Banerji, editor, Formal Techniques in Artificial Intelligence, North-Holland, Amsterdam, 1990.
[Reiter 1978] R. Reiter, On Closed World Data Bases. In H. Gallaire and J. Minker, editors, Logic and Data Bases, pp. 119-140, Plenum, New York, 1978.
[Reiter 1990] R. Reiter, On Asking What a Database Knows, Proc. Symposium on Computational Logic, Springer-Verlag, 1990.
[Ross and Topor 1988] K. Ross and R. Topor, Inferring Negative Information from Disjunctive Databases, Journal of Automated Reasoning 4:4, pp. 397-424, 1988.
[Sakama 1989] C. Sakama, Possible Model Semantics for Disjunctive Databases, Proceedings of the First International Conference on Deductive and Object-Oriented Databases, Kyoto, Japan, 1989.
[Wagner 1990] G. Wagner, Logic Programming with Strong Negation and Inexact Predicates, 1990.
Prototype-Based Reasoning: An Integrated Approach to Solving Large Novel Problems

Shankar A. Rajamoney
Computer Science Department
University of Southern California
Los Angeles, CA 90089

Hee-Youn Lee
Electrical Engineering Department
University of Southern California
Los Angeles, CA 90089

Abstract
Two important computational approaches to problem solving are model-based reasoning (MBR) and case-based reasoning (CBR). MBR, since it reasons from first principles, is especially suited for solving novel problems. CBR, since it reasons from previous experience, is especially suited for solving frequently encountered problems. However, large novel problems pose difficulties for both approaches: MBR rapidly grows intractable and CBR fails to find a relevant previous case. In this paper we describe an approach called prototype-based reasoning that integrates both approaches to solve such problems. Prototype-based reasoning treats a large novel problem as a novel combination of several familiar subproblems. It uses CBR to find and solve the subproblems, formulates a new problem by combining these individual solutions, and uses MBR to solve this new problem. We demonstrate the effectiveness of this method on several examples involving the causal simulation of complex electronic circuits.

Introduction
Two computational approaches to problem solving have recently gained considerable importance in Artificial Intelligence (AI). The first approach, model-based reasoning (MBR) [Forbus, 1988; Weld and de Kleer, 1990], uses a domain model describing the basic entities in the domain and their interactions to reason from first principles about situations of interest. The second approach, case-based reasoning (CBR) [Kolodner, 1984; DARPA, 1990], retrieves, from a memory of previous problem-solving episodes, a suitably relevant problem and adapts its solution to the problem at hand.
The strengths and weaknesses of both approaches are well known. MBR, since it reasons from first principles, is especially suited for solving novel problems. CBR, since it reasons from previous experience, is especially suited for solving frequently encountered problems. However, large novel problems pose difficulties for both approaches. Model-based reasoning, because it involves reasoning from first principles, rapidly becomes intractable as the complexity and the magnitude of the problem increase. Neither is case-based reasoning very effective: as the complexity and the magnitude of the problem increase, the likelihood of having previously solved a similar problem decreases drastically and, even if relevant cases are available, the matching costs involved in finding them are prohibitive.

34 TRANSFORMATION IN DESIGN

A commonly applied strategy in AI for solving difficult problems is to decompose them into simpler problems and to combine the individual solutions to obtain the overall solution. This paper presents an approach called prototype-based reasoning that adopts this strategy. Prototype-based reasoning integrates MBR and CBR to exploit their strengths and eliminate some of their weaknesses. In this approach, a large novel problem is viewed as a novel combination of several commonly encountered problems. Accordingly, it uses CBR to decompose the problem into familiar cases and MBR to compose the individual solutions to obtain the overall solution. This paper presents prototype-based reasoning, discusses the issues involved in integrating MBR and CBR, and describes the additional constraints that must be imposed on CBR, MBR, and the domain for successfully solving large novel problems. To demonstrate the effectiveness of our approach, we apply it to the task of simulating the behavior of complex electronic circuits.

Prototype-Based Reasoning
Briefly, prototype-based reasoning consists of three principal stages:
1) Finding familiar subproblems.
The system uses a CBR approach to find all the familiar subproblems contained in the given problem. Our CBR approach uses an organized memory of previous problem-solving episodes (cases) which are indexed by cues [Kolodner, 1984]. The first step in CBR is to match cues to portions of the problem and to retrieve the indexed cases for successful matches. Since it is unlikely that the problem will contain subproblems that exactly match a case, a partial matcher must be used to retrieve potential candidate cases. Unlike traditional CBR, our approach must also deal with the problem of identifying the subproblem boundaries; consequently, our system uses the portion of the problem that matches a cue as a seed to generate subproblems of a size commensurate with the retrieved cases. The second step involves adapting the solution of the retrieved cases to the subproblems. We use MBR to reason about partially matched subproblems, reasoning incrementally from the retrieved case when possible.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

2) Composing their solutions. Since partial matches are possible and the boundaries of the subproblems are not clearly defined, several subproblems may overlap, include other subproblems, be covered by other subproblems, and so forth. Consequently, our system searches for consistent combinations of the solutions to the individual subproblems which completely cover the entire given problem.
3) Solving the composed problem. Each consistent combination is formulated as a new problem, albeit greatly simplified in comparison to the original problem, and posed to MBR. MBR uses the domain model to obtain the solution to the new problem from first principles.

Figure 1: The three principal stages of prototype-based reasoning.

Figure 1 illustrates the three stages of prototype-based reasoning.
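Stripped of the partial-matching machinery, the three stages can be rendered as a toy, runnable skeleton. All names and data below are illustrative assumptions, not the authors' implementation; the problem is a set of labeled parts and each prototype carries a precomputed abstract solution:

```python
def prototype_based_reasoning(problem, prototypes, mbr_solve):
    # Stage 1 (CBR): find prototypes whose parts occur in the problem.
    matches = [(name, parts, sol) for name, (parts, sol) in prototypes.items()
               if parts <= problem]
    # Stage 2: greedy composition -- pick non-overlapping matches that,
    # together with leftover primitive parts, cover the whole problem.
    covered, chosen = set(), []
    for name, parts, sol in sorted(matches, key=lambda m: -len(m[1])):
        if parts.isdisjoint(covered):
            chosen.append(sol)
            covered |= parts
    leftovers = problem - covered
    # Stage 3 (MBR): solve the simplified, prototype-level problem.
    return mbr_solve(chosen, leftovers)

protos = {"current_mirror": (frozenset({"Q1", "Q2", "R1"}), "I2 ~ Iref"),
          "emitter_follower": (frozenset({"Q3", "R2"}), "Vout ~ Vin")}
circuit = {"Q1", "Q2", "R1", "Q3", "R2", "C1"}
result = prototype_based_reasoning(
    circuit, protos, lambda sols, rest: (sorted(sols), sorted(rest)))
# result == (['I2 ~ Iref', 'Vout ~ Vin'], ['C1'])
```

The real system, as described below, must search over partial matches and multiple candidate covers rather than commit to a single greedy one.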
Despite the reduction in the magnitude and the complexity of the problems tackled by MBR and CBR, prototype-based reasoning may not fare much better than either approach applied directly to the original problem. As Figure 1 illustrates, the decomposition and re-composition steps introduce several difficult searches including those involved in defining the boundaries of subproblems and in consistently combining the solutions of the individual cases. Since multiple candidates are common, CBR and MBR may have to solve many smaller problems, some of which may be unnecessary. Consequently, a straightforward application of this integrated approach may prove disastrous for some types of domains and reasoning tasks. If prototype-based reasoning is to effectively solve large novel problems, these searches must be tightly controlled, and to achieve this control, we impose several additional requirements on the methods, reasoning tasks, and domains:
1) Primitive Cases: The cases contained in memory must be primitive, that is, they must be "building blocks" (prototypes) commonly used to construct larger problems. By maintaining a library of such prototypes, the "hit rate" or the likelihood of finding familiar subproblems and, consequently, decomposing the large problem, is greatly increased. Furthermore, the size of the prototypes must be much smaller than the problems typically encountered in the domain, thereby placing a tight upper bound on the search for the defining boundaries of subproblems.
2) Stable Memory: The number of prototypes in memory must remain relatively stable, and new prototypes will rarely be learned during normal problem solving. However, prototype-based reasoning allows learning finer points, subtle variations, or enhancements of prototypes, and these are indexed under the prototype. The search for familiar subproblems does not increase drastically with this type of learning, since the prototype must be found first, and only later, and only if necessary, are the variations checked to see if they provide a better match. The rationale for this restriction is that human experts typically learn the prototypes of a domain through instruction but acquire more sophisticated versions through experience.
3) Limited Combinations: Each prototype must combine with other prototypes only in a limited number of ways. These strong local constraints are used to control the search for a consistent combination of prototypes and unmatched portions of the problem.
4) Topology Constraints: The large problem must provide connectivity information or other guidance for combining prototypes, which further restricts the number of consistent combinations.
5) Abstraction: The prototype solution must emphasize its functional or behavioral role in the larger problem and must abstract away uninteresting details. This restriction considerably limits the search conducted by MBR to find the overall solution.
These additional requirements greatly increase the effectiveness of the integrated method but they also restrict its applicability. However, several interesting and important classes of problems can still be tackled effectively by prototype-based reasoning. Examples include determining the chemical structure of large compounds, discovering the mechanism of complex chemical reactions, finding the genetic composition of proteins, diagnosing multiple-fault failures in a boiler plant, and computing the center of mass of a complex regular object. For concreteness, in the next section, we investigate prototype-based reasoning applied to the problem of finding the causal behavior of complex electronic circuits, and provide additional details on how each step in the reasoning is performed.

Figure 2: The schematic circuit diagram of an operational amplifier.
The Qualitative Simulation of Electronic Circuits
The qualitative simulation of complex electronic circuits is highly amenable to solution by prototype-based reasoning. Human experts in this field are adept at reasoning about the behavior of complex circuits even when they are not familiar with the circuit or its intended functionality. We conjecture that, through instruction, they develop a repertoire of prototypes such as differential amplifiers, current mirrors, current sources, emitter followers, Darlington pairs, and totem-pole configurations and, through experience, become proficient at recognizing and reasoning with these prototypes and all their subtle variations.
Figure 2 shows the schematic circuit diagram of an operational amplifier (OP-AMP). Two commonly posed problems, with typical answers from human experts, are:
What happens to the output if the voltage difference between the two inputs is increased?
The change in the output is determined by the two differential amplifiers, the emitter follower, and the voltage-shunt feedback stages. An increase in the voltage difference between the two inputs of the first differential amplifier results in an (amplified) increase in the voltage difference between its two outputs. This leads to a (further amplified) voltage decrease at the inverting output of the second differential amplifier. This results in a decrease in the voltage at the output of the next stage, the emitter follower. The voltage-shunt feedback circuit, the last stage, inverts the direction of the voltage change and, therefore, the overall output voltage increases.
How can the input impedance to differential voltages be improved?
The differential input impedance is determined by the transistor characteristics and the current flowing into the collectors of the transistors of the first differential amplifier. The input impedance can be increased by replacing the transistors with Darlington pairs.

Prototype-based Reasoning
Step 1: Finding familiar subcircuits (CBR)
1) Access: Find structural cues and retrieve prototypes indexed by the cues. Try to match retrieved prototypes with the circuit surrounding the cue. Evaluate partial matches when a perfect match fails.
2) Adapting prototypes: Investigate the relevant portion of the circuit with MBR. Compare it with the constraints of the prototype. Find the improvement when they have the same function.
3) Learning: When a variant of a prototype is found, add it to the prototype library.
Step 2: Combining the causal constraints of prototypes
1) Group the prototypes and remaining objects to cover the entire circuit.
2) Pick the best group and form a prototype-level constraint network.
Step 3: Solving the prototype-level constraint network (MBR)
1) Propagate initial values through the network.

Figure 3: A high-level algorithmic description of prototype-based reasoning.

Notice that the experts appear to be decomposing the circuit into familiar pieces, remembering or adapting solutions to these familiar pieces, and reasoning at the level of the prototypes to find the solution. This is exactly the style of reasoning that we hope to capture with prototype-based reasoning.
The domain captures all five constraints: large circuits are typically built from primitive building blocks like current mirrors and emitter followers, these prototypes are learned primarily through instruction and, typically, only variants are learned through experience, connections to prototypes are through specified input and output terminals, the given circuit network constrains the building of the prototype-level problem, and the prototypes abstract irrelevant detail (for example, a constant current source prototype focuses on the constraint that the current supplied is constant and ignores other internal currents and voltages).¹ Figure 3 provides a high-level description of the steps involved in prototype-based reasoning. We describe each of the stages below.

¹A practical reason for adopting this task and domain is that the qualitative physics community has extensively investigated this problem, and several MBR methods have been developed. We adopt de Kleer's [de Kleer, 1984] device ontology for representing prototypes and their qualitative reasoning approach based on confluences for MBR.

Figure 4: The prototype library and some of the prototypes associated with the OP-AMP example.

Finding Familiar Circuits
CBR is applied iteratively to find all the familiar pieces of circuitry in the given schematic diagram. Each iteration involves three steps:
1. Access: As in several other CBR systems [Kolodner, 1984], the prototype library is organized hierarchically in the form of a tree with the leaves representing prototypical circuit networks that perform a specific function or demonstrate a characteristic behavior, and with the internal nodes representing the structural cues
Conditions: on(Qlh WQ3, Identical(Q 1, Q2), m(Ql) <c IdQU, Ib(Q2) << Ic(Ql) = Ic(Q2) Parameters: Icef, Rl, I#l(Rl), WQU, vbe(Ql), WQl), ‘fn, Ic(Q21, vbe(Q3, MQ%, Vcc Explanations: Individuals: Ql, 42, Rl, +VCC, GND A Iref = Ic(Ql)+Ib(Ql)+Ib(Q2) coxflguration: A Ib(Q1) cc Ic(Q1) Connected(Vcc, #l(Rl)) A lb(Q2) cc Ic(Q1) Connected(#2(Rl), C(Ql), B(Ql), A ‘?I2 = It(z) Iref 112 B(Q2)) Ccmm-WE(Ql), E(Q21, Gm) Diffwence Relations~ Constraints: ?I2 E Iref ?I2 = Iref - Ib(Q1) - Ib(Q2) [Notation] B - Base, C - Collector, E - Emitter Figure 5: A current mirror prototype and its represen- tation. (characteristic pieces of the network consisting of rel- evant components and their connections) that index a particular prototype. Figure 4 shows prototypes of dif- ferential amplifiers and current mirrors along with the structural cues used to index them, and Figure 5 gives a brief description of the representation of the current mirror prototype. During access, the system searches the given circuit to find structural cues, and retrieves the prototypes indexed by the discovered cues. Next, the system attempts to match each retrieved proto- type with the actual circuitry surrounding the cue in the given network. The cue functions as a seed, and the system grows the seed systematically along the connec- tivity network of the circuit under the guidance of the retrieved prototypes. In Figure 4, the path from [C] to the prototype [C’] illustrates this growth. Clearly, a termination condition is necessary to stop a fruit- less growth. The system uses the retrieved prototype to compute a boundary within which the growth from the matched cue must provide a successful match. To allow for partial matches of new, augmented or depleted, versions of the prototype, the matching pro- cess starts below the boundary and continues beyond the boundary. The limiting upper and lower thresholds of the boundary are set for each prototype empirically, in advance. 
The partial matches are first checked against the known variations of the prototype. If a perfect match is found, then the prototype's solution is used along with the enhancements due to the variation. If a perfect match fails, the partial matches are evaluated with a lexicographic evaluation function² [Michalski, 1983] that incorporates the following criteria: the type of components, the number of components, the directly connected components, and the indirectly connected components. If a partial match passes the lexicographic evaluation function, then it may be a variant of the prototype. Figure 4 shows an example of a partial match of a sophisticated version [D] of the current mirror [D'] in the prototype. Apart from partial matches, multiple prototypes may be discovered for a portion of the circuit. Prototype [B] is an example of a prototype included in another ([A]).

2. Adapting Prototypes: The electronics domain is not an easy one in which to adapt the solution of a prototype³ to a partially matched portion of the circuit. The presence or absence of even a single connection may make a profound difference in the behavior of the matched portion of the circuit (e.g. adding a connection that introduces feedback). Consequently, in the event of a partial match, we use MBR to investigate the relevant portion of the circuit, while making an effort to use the information already available in the prototype. For example, in the case of augmented variations of a prototype, MBR can commence reasoning from the solution of the prototype and extend the solution to cover the augmentation.

²In a lexicographic evaluation function, all matches are evaluated according to the first criterion; those that pass are evaluated according to the second criterion; and so forth until only one match is left or all the criteria are exhausted.

RAJAMONEY & LEE 37
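The lexicographic evaluation described above can be sketched directly: matches are filtered criterion by criterion until one survivor remains or the criteria run out. The function name and the dictionary encoding of a match are illustrative assumptions.

```python
def lexicographic_filter(matches, criteria):
    """Evaluate partial matches criterion by criterion, as in
    [Michalski, 1983]: keep only the best scorers under the first
    criterion, break ties with the second, and so on until one match
    remains or the criteria are exhausted."""
    survivors = list(matches)
    for score in criteria:
        if len(survivors) <= 1:
            break
        best = max(score(m) for m in survivors)
        survivors = [m for m in survivors if score(m) == best]
    return survivors
```

In the paper's setting the criteria would score component types first, then the number of components, then direct and indirect connections; here any list of scoring functions can be supplied.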
The results of the analysis of the relevant portion of the circuit are compared with the constraints stored in the prototype. If the two functions are the same, then the improvements in the variation are noted and the partial match is accepted. Otherwise, the partial match is discarded. Figure 4 shows an example of a partial match of a sophisticated version [D] of the current mirror [D'] in the prototype. The MBR analysis of this partial match yields the constraint [?I2 ≈ Iref], which shows that the essential function of the current mirror prototype is met. The variation is an improvement on the prototype since it more closely mirrors the reference current.

3. Learning: If a partial match is found to be a variation of the prototype, then it is added to the prototype library and is indexed under this prototype. The improvement over the prototype is stored with the variation. Figure 4 shows, in dotted lines, the addition to the prototype library of the improved variation of the current mirror prototype, [D']. The system does not learn new prototypes; it assumes that all the relevant prototypes are initially given or are acquired through some other form of learning (e.g. explanation-based learning on the results of MBR).

Combining the Causal Constraints of Prototypes

The system groups the prototypes and remaining primitive objects in the circuit into sets such that each set completely covers the given electronic circuit (that is, all the components and all the connections are captured uniquely by the set). Several consistent sets may be formed; the system picks the one with a minimum number of total causal constraints from the prototypes and the primitive objects. Intuitively, this selection

³The solution of a prototype consists of the abstract causal constraint(s) that describe its characteristic behavior or its intended function.
[Figure 6: The prototype-level constraint network for the OP-AMP example, showing the important prototype causal constraints (e.g., Diff. Amp1: (:vd x1 x2) ∝ (:vd in1 in2); Current Mirror: (:constant (:i x5))) together with the KVL and KCL heuristics. Notation: :v - voltage, :vd - voltage-difference, (:i x) - current flowing into x.]

corresponds to the highest level decomposition of the given circuit. Using the selected combination, the system forms a prototype-level constraint network (one with prototypes rather than the primitive objects) for the circuit by replacing the prototypes with their constraints and the remaining primitive objects in the circuit with their constraints (obtained from the domain model). The connectivity in the electronic circuit specifies the inputs and outputs to the constraints and the links in the constraint network. Figure 6 shows the network for the OP-AMP example in Figure 2. R9 and R10 are the unmatched portions of the original circuit.

Solving the Prototype-Level Constraint Network

In the final step, the system uses MBR to solve the prototype-level constraint network for the given circuit. The MBR approach that we use is similar to the one developed by de Kleer [de Kleer, 1984]. The given initial values are propagated through the constraint network. We use the heuristics described in [de Kleer, 1984] to continue the propagation if it reaches an impasse. The results of propagating the input value (an increase in the differential voltage) are shown in Figure 6, and correspond to the English explanation given in Section 3.

Empirical Evaluation

We have demonstrated our approach on several examples from this domain, and Table 1 shows some empirical data obtained from these examples.
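The propagation step described above can be sketched as fixed-point propagation of qualitative values through the prototype-level network. This is a simplification of de Kleer's confluence-based reasoning, not the paper's implementation: the sign-labelled link encoding and the `propagate` name are assumptions for illustration.

```python
def propagate(constraints, initial):
    """Propagate qualitative values ('+', '-', '0') through a
    prototype-level constraint network until no more values can be
    derived. Each constraint is a triple (source, target, sign):
    a '+' link copies the source's qualitative value to the target,
    a '-' link inverts it."""
    flip = {"+": "-", "-": "+", "0": "0"}
    values = dict(initial)
    changed = True
    while changed:
        changed = False
        for src, dst, sign in constraints:
            if src in values and dst not in values:
                values[dst] = values[src] if sign == "+" else flip[values[src]]
                changed = True
    return values
```

For example, propagating an increase ('+') of the input differential voltage through a small chain of links yields qualitative values for the intermediate nodes and the output.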
We compare the results to those obtained from a direct MBR approach on the same problem (the high matching costs for large circuits and the difficulty in adapting partial matches prevented us from applying a direct CBR approach). As shown in the table, the MBR method takes considerably longer than the prototype-based reasoning method, and the difference is striking for larger examples. Two major factors contributing to this difference are: a) The abstraction within a prototype. The prototype stores only the functionally or behaviorally relevant abstract constraints. b) The great simplification in the problem solved by MBR. The last two columns in the table illustrate the difference in the number of primitive elements handled by MBR at the prototype level and at the actual circuit level.

Discussion

Most CBR approaches do not break up a problem into multiple cases; rather, they retrieve a relevant case commensurate in size with the given problem. Redmond's [Redmond, 1990] work on case pieces or snippets is closely related to our work. He describes how, in the course of solving a problem, snippets from multiple previous cases are used. While we share the goal of solving large problems for which no single relevant case is available, there are several important differences. We decompose the large problem into subproblems, each of which constitutes a case; whereas Redmond views a problem as a combination of several pieces of other cases. Also, his snippets are different from prototypes in terms of primitiveness and context-dependency. On a related note, some other CBR systems that have used multiple cases or pieces of cases in solving a single problem are MEDIATOR [Simpson, 1985], JULIA [Kolodner, 1989], and Barletta and Mark's system [Barletta and Mark, 1988]. However, these systems have not focused on the integration of MBR and CBR, while prototype-based reasoning has.
Goel and Chandrasekaran [Goel and Chandrasekaran, 1989] describe a different combination of MBR and CBR: they show how device models may be used in adapting a case to a given problem. While their integration does not address the problem described here (solving a complex novel problem by decomposition), their approach is closely related to our approach for adapting partially matched subproblems to prototypes. Due to the different tasks addressed (design vs. simulation), the cues used in identifying appropriate cases are functional while ours are structural.

Several MBR researchers have proposed or investigated the use of functional primitives in reasoning. For example, Forbus [Forbus, 1988] suggests that recognition, the redescription of a system in terms of a functional vocabulary (e.g. proportional-action controller), is an important style of qualitative reasoning. De Kleer [de Kleer, 1984] describes how a teleological parse can be performed on the causal behavior determined by MBR to obtain the functional description of the system being analyzed. Our approach differs from this in that we use prototypes to directly obtain the causal behavior.

In this paper, we described prototype-based reasoning, an integration of MBR and CBR to solve large novel problems. Our ongoing efforts address several of the limitations in our current work and pursue new research issues in this area. Some of these include learning prototypes through a teleological analysis of the results of MBR, modeling and reasoning about prototypes as functional aggregates, supplementing the indexing of prototypes by partial behavioral and functional cues, extending the applicability of prototype-based reasoning, and investigating more thoroughly the trade-offs involved in prototype-based reasoning. We believe that the novel integration of MBR and CBR described in this paper is a significant step towards the development of effective and practical problem solvers.

Table 1: Some empirical performance results of prototype-based reasoning and MBR on electronic circuit examples of varying complexity.

Examples | CPU time PBR (sec.) | CPU time MBR (sec.) | # of prototypes used in step 3 | # of components in circuit
OP-Amp with emitter-followers at input stage | 166 | > 1000 | 7 | 28
OP-Amp (Fig. 2) | 104 | > 1000 | 5 | 26
OP-Amp w/ one diff. amp | 30 | 787 | 4 | 16
Emitter-follower + totem-pole | 13 | 118 | 3 | 11
Totem-pole | 9 | 84 | 2 | 10

Note: 1. CPU time is measured on a Sun 4/490. 2. Experiments were stopped after 1000 CPU seconds.

References

Barletta, R. and Mark, W. 1988. Breaking cases into pieces. In Proceedings of the Case-Based Reasoning Workshop. DARPA.

DARPA, 1990. Proceedings of the Case-Based Reasoning Workshop. Morgan Kaufmann, Los Altos, CA.

de Kleer, J. 1984. How circuits work. Artificial Intelligence.

Forbus, Ken 1988. Qualitative physics: Past, present and future. In Exploring Artificial Intelligence. Morgan Kaufmann Publishers.

Goel, Ashok and Chandrasekaran, B. 1989. Use of device models in adaptation of design cases. In Proceedings of the Second DARPA Workshop on Case-Based Reasoning.

Kolodner, Janet L. 1984. Retrieval and Organization Strategies in Conceptual Memory. Lawrence Erlbaum Associates, Hillsdale, NJ.

Kolodner, Janet L. 1989. Judging which is the "best" case for a case-based reasoner. In Proceedings of the Second Workshop on Case-Based Reasoning.

Michalski, R. S. 1983. A theory and methodology of inductive learning. In Machine Learning: An Artificial Intelligence Approach. Tioga Publishing Company.

Redmond, Michael 1990. Distributed cases for case-based reasoning: facilitating use of multiple cases. In Proceedings of AAAI-90.

Simpson, R. L. Jr. 1985. A Computer Model of Case-Based Reasoning in Problem Solving. Ph.D. Dissertation, Georgia Institute of Technology, Atlanta, GA.

Weld, D. S. and de Kleer, J. 1990. Readings in Qualitative Reasoning about Physical Systems. Morgan Kaufmann Publishers.
The P-Systems: A Systematic Classification of Logics of Nonmonotonicity

Wolfgang Nejdl*
Technical University of Vienna
Paniglgasse 16, A-1040 Vienna, Austria
e-mail: nejdl@vexpert.dbai.tuwien.ac.at

Abstract

In the last years many logics of nonmonotonicity have been developed using various different formalisms and axiomatizations, which makes them very difficult to compare. We develop a classification scheme for these logics using only a few simple concepts and axioms based on conditional logics, properties of partial pre-orders of possible states (worlds) and centering assumptions. Our framework (the P-Systems) allows us to discuss the similarities, main differences and possible extensions of these logics in a simple and natural way.

1 Introduction

A large number of papers on semantic and axiomatic characterizations of logics of nonmonotonicity has appeared in recent years, leaving the reader sometimes with the impression of very interesting but in many cases incomparable approaches. Even when comparisons are made, authors tend to focus more on the differences between these logics than their similarities, treat similarities as surprising results, or are simply not aware of similarities to related work.

The goal of this paper is to provide a common semantic and axiomatic framework for this work and to focus on the similarities of the various approaches. The result is a rather simple framework (the P-Systems) which nevertheless serves well to characterize the principles of various logics and to understand their similarities and main differences. Depending on the constraints used in formulating such a logic, these logics of nonmonotonicity can be used to answer what-if questions ("If kangaroos had no tails, would they topple over?"), to formalize defaults ("Normally kangaroos have tails."), to incorporate inconsistent updates ("Kangaroos have no tails.") and others.

*This work was done while the author was spending a sabbatical at Xerox PARC.
366 NONMONOTONIC REASONING

We will classify the various kinds of logics of nonmonotonicity using constraints on the relation between different possible situations and centering assumptions, leading to a unified view of a large number of seemingly different formalisms and logics. Our work extends formalizations based on conditional logic discussed in [17] and [4] and several results scattered through the literature in various formalisms. Among others, we discuss the relationship between operators and principles used for belief revision ([12], [25], [6]), conditional logics of normality and modal logics ([9], [10], [3]), conditional logics for counterfactuals ([22], [17], [20], [14]), nonmonotonic consequence relations ([15]) and general conditional logics of nonmonotonicity ([1]). Due to space constraints equivalence proofs are omitted. They can be found in a longer version of this paper.

2 Conditional Logics

2.1 Preliminaries

Conditional logics have been used in various disguises to formalize nonmonotonic reasoning. The underlying idea is that a conditional a ⇒ b is true iff a ∧ ¬b is either impossible or at least a less plausible possibility, in some sense, than a ∧ b. This allows us to formalize what-if statements (counterfactuals), as well as statements of normality (defaults). We will use ⇒ to denote the conditional implication, ⊃ to denote material implication, and ≡ to denote logical equivalence.

The first work on this subject has been [22] and [17], who formalized the notion of counterfactuals. Various conditional logics have been analyzed in later papers and compared to each other (e.g. [4]). Starting with the paper of [13], the use of counterfactuals and conditional logics has also been discussed for artificial intelligence applications, with a lot of work appearing in the last years.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
For the sake of simplicity we discuss only the case of propositional logics, although some of the logics have been extended to first-order. Additionally, we concentrate on theorems containing only unnested occurrences of ⇒, as most logics can only handle this case. We neglect small differences between two logics if they are only caused by the different formalisms used, although the logics share the same model-theoretic principles.¹ We think this approach is justified in order to stress the common properties of these logics, which have not been obvious in many recent papers.

2.2 Normal Conditional Logics

We start by describing a very basic conditional logic called CK which has been axiomatized in [5]. Although all our remaining logics are more restricted than CK, CK is useful to characterize the common properties of the logics of nonmonotonicity discussed in section 3. All logic systems we discuss obey additional constraints and are formalized simply by extending the axioms valid in CK.

For CK we consider a semantic model M = (W, f, P), where f is a mapping that selects a set of worlds f(w, A) for each world w ∈ W and proposition A ∈ P. For such a wff A, ||A||_M stands for the set of worlds in M in which A is true. To evaluate a conditional A ⇒ B at a world w in a model M we use

w ⊨ A ⇒ B iff f(w, A) ⊆ ||B||_M

We do not include any constraints on f(w, A), except that it only selects worlds in which A is true, i.e.

f(w, A) ⊆ ||A||_M

Assuming that this definition of f defines a notion of plausibility and selects the most plausible worlds in which A is true, we may paraphrase it by

A ⇒ B is true at w if B holds in all most plausible A-worlds for w as selected by f.

This very basic logic CK can be defined by the following two basic inference rules:

(RCEA) from A ≡ B infer (A ⇒ C) ≡ (B ⇒ C)
(RCK) from (B₁ ∧ ... ∧ Bₙ) ⊃ C infer ((A ⇒ B₁) ∧ ... ∧ (A ⇒ Bₙ)) ⊃ (A ⇒ C)

Similar rules have been used in axiomatizations of other conditional logics compared in this paper.
They can be derived in CK:

(RCEC) from B ≡ C infer (A ⇒ B) ≡ (A ⇒ C)
(RCM) from B ⊃ C infer (A ⇒ B) ⊃ (A ⇒ C)
(RCR) from (B ∧ C) ⊃ D infer ((A ⇒ B) ∧ (A ⇒ C)) ⊃ (A ⇒ D)

In [15], RCEA corresponds to left logical equivalence, RCK to right weakening. Similarly, some axioms used in various logics of nonmonotonicity are already theorems in CK.

CC: ((A ⇒ B) ∧ (A ⇒ C)) ⊃ (A ⇒ B ∧ C)
CM: (A ⇒ (B ∧ C)) ⊃ ((A ⇒ B) ∧ (A ⇒ C))

In [15], CC corresponds to and. As all logics in [15] already contain RCEA and RCK, and is redundant in their axiomatization. Logics including only RCM instead of RCK have to include CC as an axiom. If we use RCEC instead of RCM, we also have to include CM. In [4], CC corresponds to A1 and CM to A4. Additionally, the rule MP (modus ponens) is valid for ⊃, as well as all truth-functional tautologies of propositional calculus (PC).

It is possible (and in some cases useful) to define conditional logics which do not satisfy RCK (e.g. the logic from [11], as axiomatized as the logic G in [19]). CC and CM are not theorems of G, and the plausibility relation is dependent on both antecedent and consequent of the conditional A ⇒ B. Although this is an interesting possibility, we will not discuss such logics in our paper.

3 The P-Systems

3.1 System-P

The previous section has left open the specification of the selection function f and thus the notion of how to determine the plausible worlds. Building on [4], we proceed by introducing a ternary plausibility relation defining a partial pre-order on those worlds with respect to the current world. We understand under a semantic model M the set of all pairs (W, R), with W a nonempty set of possible states (or worlds) and R a ternary relation on it. For x ∈ W, W_x = {y | ∃z Rxyz}. Each state is labeled with a propositional model describing the facts at this state.

¹An example would be the difference in expressiveness between the logics described in [3] and those in [15].
Several states can be labeled by the same model (see [15] for an example), although this will usually not be the case. The relation Rxyz may be interpreted by saying that from the point of view of x, y is at least as plausible as z. The minimal requirements we place on the relation R are reflexivity and transitivity:

∀x ∈ W ∀y ∈ W_x: Rxyy
∀x ∈ W ∀y, z, w ∈ W_x: (Rxyz ∧ Rxzw ⊃ Rxyw)

R therefore defines a partial pre-order on the states in W (which is local to each x ∈ W). The selection function f(x, A) can be defined as selecting the most plausible worlds according to this partial pre-order R, which is expressed by

f(x, A) = {y ∈ W_x | y ∈ ||A||_M ∧ (∀z ∈ W_x ∩ ||A||_M : Rxzy ⊃ Rxyz)}

Our definition from section 2.2 is then equivalent to the one used in [4], where A ⇒ B is true with respect to a world x ∈ W iff

∀y ∈ W_x ∩ ||A||_M ∃z ∈ W_x ∩ ||A||_M : (Rxzy ∧ ∀t ∈ W_x ∩ ||A||_M (Rxtz ⊃ t ∈ ||B||_M))

and can be paraphrased as

A ⇒ B is true at x if B holds in all most plausible A-worlds for x according to R.

For simplicity's sake, we assume the limit assumption as defined in [22] and [17] for infinite numbers of worlds. This assumption basically forbids an infinite sequence of ever more plausible worlds (i.e. well-foundedness for partial pre-orders).² For a finite number of worlds this assumption is obviously valid. The smoothness condition defined in [15] is more general, but equivalent to the limit assumption in our case (R being reflexive and transitive).

To get a logic system corresponding to this definition, we have to add the following three axioms to CK. We call the resulting conditional logic System-P.

ID: A ⇒ A
ASC: ((A ⇒ B) ∧ (A ⇒ C)) ⊃ (A ∧ B ⇒ C)
CA: ((A ⇒ C) ∧ (B ⇒ C)) ⊃ (A ∨ B ⇒ C)

ID needs reflexivity whereas ASC needs transitivity. CA reflects the fact that each state is labeled with exactly one model (in contrast to weaker logics). System-P is equivalent to

- the minimal counterfactual logic S defined in [4].
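For a finite model, the selection function f(x, A) and the truth of a conditional can be computed directly from the ternary relation. The following sketch is an illustration under assumed encodings (worlds as a dict of valuations, R as a set of triples, propositions as predicates on valuations), not part of the paper's formalism.

```python
def most_plausible(worlds, R, x, A):
    """Compute the selection function f(x, A) of System-P: the A-worlds
    y in W_x that are maximally plausible from x's point of view under
    the ternary relation R, i.e. whenever some z is at least as
    plausible as y (Rxzy), y is also at least as plausible as z (Rxyz)."""
    Wx = {y for (a, y, _) in R if a == x} | {z for (a, _, z) in R if a == x}
    candidates = {y for y in Wx if A(worlds[y])}
    return {y for y in candidates
            if all((x, y, z) in R for z in candidates if (x, z, y) in R)}

def entails(worlds, R, x, A, B):
    """A => B holds at x iff B is true in every most plausible A-world."""
    return all(B(worlds[y]) for y in most_plausible(worlds, R, x, A))
```

With a reflexive, transitive R that ranks world 1 above 2 above 3, the most plausible world satisfying A is selected, and A ⇒ B holds at x exactly when B is true there.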
²We will not present a constraint on R, as well-foundedness is only second-order definable.

In [15], ID corresponds to reflexivity, ASC to cautious monotony and CA to or. The axioms correspond to A1, A3, A4 in [4]. In [1], CA has been named AD.

Such a logic might be suitable for reasoning about morality or obligation, where we neither assume an absolute pre-order valid for all worlds nor that our world is the most preferred one.

In the following sections we will add additional constraints to System-P, which will be reflected in the subscripts of these logics (i.e. the name System-PMC will be used for a logic extending System-P by modularity and centering axioms).

3.2 System-PC

We have said nothing about the actual world so far. Indeed, this is the crucial difference between conditional logics used for evaluating counterfactuals and conditional logics of normality (or obligation).

Counterfactuals are evaluated with respect to the actual world. This is represented by so-called centering axioms (see [17]). Assuming weak centering, the actual world w is among the most plausible worlds for x, while under strong centering w alone is the most plausible world.

The difference between strong and weak centering has been analyzed in [19]. Basically, strong centering is assumed in minimal change theories, where only minimal changes for accommodating a certain fact are assumed. In contrast, weak centering is used in small change theories, which additionally allow small, non-minimal changes, if the difference to minimal changes is negligible. A good example are probability-based systems using a cutoff based on the relative difference to the most probable hypothesis (i.e. the cutoff criterion used in the model-based diagnosis system Sherlock described in [8]).

Logics of normality (like System-PA, which will be discussed in section 3.3) include these centering assumptions as plausible defaults, but do not enforce them.
Strong centering is represented by the following constraint on the plausibility relation R:

∀x ∈ W (x ∈ W_x ∧ ∀y ∈ W_x (x ≠ y ⊃ Rxxy ∧ ¬Rxyx))

Weak centering is represented by the similar constraint

∀x ∈ W (x ∈ W_x ∧ ∀y ∈ W_x Rxxy)

The corresponding axioms included in System-PC are

MP: (A ⇒ B) ⊃ (A ⊃ B)
CS: (A ∧ B) ⊃ (A ⇒ B)

where MP corresponds to weak centering, CS to strong centering.

Conditional logics for evaluating counterfactuals equivalent to System-PC are based on the notion of minimal change. A prominent example is

- the counterfactual logic SS defined in [20].

The logic P from [15] is mistakenly compared to SS in [16]. This is not true, as P lacks the axioms for centering. Let us note, however, that the presence or absence of the centering axioms does not make any difference if we are only concerned with assertions of the form A ⇒ B.

3.3 System-PA

Up to now, we have considered partial pre-orders of states with respect to specific states. We might include the assumption that the partial pre-order of states is absolute (i.e. the same with respect to all worlds), represented by the following constraint:

∀x, w, y, z (Rxyz ⊃ Rwyz)

Having an absolute plausibility measure simplifies things, as we can use a global binary relation ≤ to express preferences between different situations. Absolute pre-orders are therefore useful for reasoning about defaults and normality.

As we consider only non-nested theorems in this paper, we do not have to add additional axioms for absoluteness (see the following section 3.3.1). Thus, the theorems of System-PA are the same as those of System-P. This is a welcome result, as a binary relation is easier to handle and is also the basis for the Kripke semantics used in modal logic. Using this connection, we can show that System-PA corresponds to

- the minimal logic of normality CT4, equivalent to the modal logic S4, defined in [3], and to
- the logic of preferential consequence relations P defined in [15].
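The centering constraints of section 3.2 can be checked mechanically on a finite model. The following sketch assumes R is given as a set of triples and W_x as an explicit map from each world to its reachable worlds; the function names are hypothetical.

```python
def weak_centering(R, W, Wx):
    """Weak centering: every world x is among the most plausible worlds
    for itself, i.e. x is in W_x and R(x, x, y) for every y in W_x."""
    return all(x in Wx[x] and all((x, x, y) in R for y in Wx[x])
               for x in W)

def strong_centering(R, W, Wx):
    """Strong centering: additionally, no distinct world y is as
    plausible as x from x's own point of view (never R(x, y, x))."""
    return weak_centering(R, W, Wx) and all(
        (x, y, x) not in R
        for x in W for y in Wx[x] if y != x)
```

Adding a single triple that makes some other world as plausible as the actual one is enough to break strong centering while leaving weak centering intact, mirroring the distinction between the axioms CS and MP.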
In CT4 each world w can access all worlds x which are at least as plausible as w, i.e.

x ≤ w ⊃ wR_a x

where R_a is the accessibility relation of modal logic. This leads at once to the following definition of A ⇒ B in terms of necessity and possibility used in [3]:

□(□¬A ∨ ◇(A ∧ □(A ⊃ B)))

The definition formalizes the idea that either A is false in each plausible world, or A is true in some plausible world w and, in each world at least as plausible as w, A ⊃ B is valid. This definition is therefore equivalent to the one we used for defining the truth-value of the conditional A ⇒ B. The following weaker definition

□¬A ∨ ◇(A ∧ □(A ⊃ B))

which is mentioned in [3] amounts to checking this formula only in the current world x. It is therefore sufficient that in some world w_i reachable from x, (A ∧ □(A ⊃ B)) is true, while it is false in another reachable world w_j. So if the worlds accessible from x form two sets which are not connected, we may get both ◇(A ∧ □(A ⊃ B)) and ◇(A ∧ □(A ⊃ ¬B)). This is why we can get absurd results such as both A ⇒ B and A ⇒ ¬B being true (each being supported by another set).

3.3.1 Nested Theorems

As indicated in the last section, we do not need to add additional axioms representing absoluteness if we only consider theorems which do not contain nested occurrences of ⇒. We can express this by the following theorem, which can easily be proved given the definition of ⇒.

Theorem 3.1 All theorems containing unnested occurrences of ⇒ which can be expressed by a relational principle on the ternary relation R can also be expressed by a relational principle on the binary global relation ≤.

Special nested theorems have to be expressed by index principles relating the view of different worlds. An example for a nested theorem depending on such an index principle can be found in [24] (formulated there for a strict partial order).
The absorption law

(A ⇒ (B ⇒ C)) ⊃ ((A ∧ B) ⇒ C)

is equivalent to the index principle

∀x, y, z (¬Rxzy ⊃ ∀u Ryzu)

Unfortunately, this constraint is too strong to be of much use.

Nested theorems of another sort (containing ⊃ as the primary operator) are mentioned in [3]. CT4 (and indeed all extensions of System-P containing no index principles) include the theorems:

(A ∧ (A ⇒ B)) ⇒ B
(A ⇒ C) ⇒ ((A ∧ B) ⇒ C)

The first axiom corresponds to the axiom for weak centering and basically introduces MP (modus ponens) for ⇒. It is easy to show that it is equivalent to

(A ⇒ B) ⇒ (A ⊃ B)

which corresponds to a default assumption stating that our world is as normal as we can possibly assume. Strong centering can be transformed similarly into

(A ∧ B) ⇒ (A ⇒ B)

which expresses the fact that we tend to generalize ⇒ relations as much as possible using induction from existing facts. The second axiom mentioned in [3] corresponds to the thesis strengthening antecedents valid for classical logic. Indeed, even the versions of transitivity

((A ⇒ B) ∧ (B ⇒ C)) ⇒ (A ⇒ C)

and contraposition

(A ⇒ B) ⇒ (¬B ⇒ ¬A)

seem to be valid. So all these additional theses really tell us is that "normally" we use classical logic. We conclude that nested theses of the form discussed in [3] do not seem to contain much new information, but are interesting for making these normality assumptions explicit.

3.4 System-PCA

Assuming both centering and absoluteness, we get System-PCA, equivalent to

- the conditional logic C, presented as a logic of nonmonotonicity in [1].

However, it seems to us that a general logic of nonmonotonicity should not include centering axioms. These are indeed basically made useless by the construction used in [1], where theorems are defined with respect to minimal worlds. Therefore any logic corresponding to System-PA would probably be more suitable for this purpose than C.
3.5 System-PDC

Another possibility, which has not been explored much, is to require a directed partial pre-order, where any two elements have an upper bound.

Directedness can be expressed by the following constraint on R:

∀x ∈ W ∀y, z ∈ W_x ∃u (Rxyu ∧ Rxzu)

The corresponding axiom is

CV': ((A ⇒ C) ∧ ¬(A ⇒ ¬B) ∧ ¬(T ⇒ ¬A)) ⊃ (A ∧ B ⇒ C)

T ⇒ A can be read as "normally A" (as discussed in [3]). Under strong centering, T ⇒ A is true iff A is true in the actual world w.

If we require directedness and centering, we get logics suitable for update semantics computing results of changes to the actual world using set-theoretic minimality (which respects directedness), such as

- the update operator for reasoning about action described in [25].

3.6 System-PDA

Directedness has been first considered in [3], where it has been combined with the constraint of absoluteness. System-PDA is therefore equivalent to

- CT4G, a logic of normality based on S4.2, which is mentioned in [3].

Model-based diagnosis systems using minimality in a set-theoretic sense are special versions of the logic described by System-PDA, such as

- MBD systems using the definition of diagnosis described in [7] and [21].

In this case, the "most normal" situation is the empty diagnosis (everything is correct), so the axiom CV' does not help us much if we want to use conditionals to describe the effects of faults (which are not valid in the most normal state of affairs).

3.7 System-PMC

A higher degree of monotony than in our previous logics follows from the assumption that the plausibility relation ranks the possible worlds in disjoint levels of plausibility. This leads to a plausibility relation where all reachable situations are comparable, but where equally plausible situations may exist.
This corresponds to the constraint of almost-connectedness of the relation Rxyz:

∀x ∈ W ∀y, z ∈ W_x (Rxyz ∨ Rxzy)

It is equivalent to the partial pre-order being modular (as recognized in [13]), which we can also formalize by

∀x, y, z: ((x ≤ y) ∧ (y ≤ x) ∧ (z < x)) ⊃ (z < y)

where (a < b) ≡ (a ≤ b) ∧ ¬(b ≤ a).

This logic is strictly stronger than System-PDC, as any modular pre-order is directed. The axiom corresponding to almost-connectedness is the familiar axiom

CV: ((A ⇒ C) ∧ ¬(A ⇒ ¬B)) ⊃ (A ∧ B ⇒ C)

It is called rational monotonicity in [15].

If we assume almost-connectedness plus centering, we get some familiar conditional logics used for counterfactuals, as well as some other rational logics. Assuming strong centering, we get

- the prototype of counterfactual logic, VC of [17],
- the Gärdenfors rationality axioms from [12], which are equivalent to VC,

and as special versions

- the counterfactual construction described in [14], and
- the update operator of [6].

If we assume only weak centering we get the logic

- VW, which also has been defined in [17].

System-PMC is one of the most monotonic systems we can get including the assumptions of almost-connectedness and centering. It corresponds to a system of plausibility spheres ordered around the actual world. We cannot add absoluteness to this system, as this would collapse the partial pre-order into one equivalence class.³ This is in accordance with the result in [17] that the system VCA (which corresponds to System-PMCA) is ordinary truth-functional logic in disguise. Similarly, the monotonic logic M from [15] is defined by one equivalence class representing all plausible models.

3.8 System-PMA

If we combine almost-connectedness and absoluteness, we get the following logics of normality:

- N as defined in [9] and [10],
- CT4D, equivalent to S4.3, as defined in [3],
- R, the logic of rational consequence relations, as defined in [16].
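Almost-connectedness and the ranking into plausibility levels that a modular pre-order induces can both be checked on a finite model. This is an illustrative sketch under assumed encodings (R as a set of triples, the global pre-order as a set of pairs); the function names are hypothetical.

```python
def almost_connected(R, x, Wx):
    """Almost-connectedness from x: any two worlds reachable from x are
    comparable, i.e. R(x, y, z) or R(x, z, y) for all y, z in W_x."""
    return all((x, y, z) in R or (x, z, y) in R
               for y in Wx for z in Wx)

def rank_worlds(leq, W):
    """Order the worlds of a modular pre-order from most to least
    plausible. `leq` is a set of pairs (a, b) meaning a <= b; a world's
    rank is the number of worlds strictly more plausible than it, which
    is constant within an equivalence class of a modular pre-order."""
    def strictly_below(w):
        return {v for v in W if (v, w) in leq and (w, v) not in leq}
    return sorted(W, key=lambda w: len(strictly_below(w)))
```

Removing a single comparability triple breaks almost-connectedness, while worlds tied in plausibility (each below the other under `leq`) receive the same rank.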
Interestingly enough, N lacks the axiom ASC (a fact which has also been mentioned in [16] and [3]), which it should include, being based on an S4.3-like semantic model.

³We might however add the weaker condition of universality (where each world is reachable from each other; see [17]) to get Lewis's logic VCU.

3.9 System P_MCX

There is one more assumption we can add, and that is the constraint of linearity of the plausibility relation, which excludes ties between possible situations. We then have

∀x ∈ W ∀y,z ∈ W_x: (Rxyz ∨ Rxzy) ∧ (Rxyz ∧ Rxzy ⊃ y = z)

Alternatively, we might express this constraint by almost-connectedness and antisymmetry. The corresponding partial order is simple or total, and is also called a chain:

∀x,y: (x = y) ∨ (x < y) ∨ (y < x)

The corresponding axiom is

CEM: (A ⇒ B) ∨ (A ⇒ ¬B)

validating the rule of "conditional excluded middle" well known from monotonic reasoning. This logic is equivalent to

- the first formally defined logic for counterfactuals, C2, which is discussed in [22].

Slightly modifying a theorem from [23], we can show that no more new universal constraints can be added to System P_MCX. The only additional universal restrictions are restrictions to finite cardinalities. In this sense System P_MCX is the nonmonotonic system which retains most of the monotonic inferences.

4 Conclusion and Future Work

We have shown how most logics of nonmonotonicity, including many counterfactual logics, logics of normality, as well as belief revision and update semantics, can be classified using a simple framework built upon normal conditional logics, properties of partial pre-orders between possible situations, and centering assumptions. The resulting framework of P-Systems makes it much easier to compare the various logics and to understand their similarities, differences and possible extensions.
An interesting avenue for research, which has only been mentioned in this paper, is to generalize this framework by investigating the correspondence of index principles and nested theorems for ⇒. We are also considering extending our framework by including non-normal conditional logics weaker than CK.

Comparing hypothetical reasoning using intuitionistic logic (e.g. [2]) within the framework described in this paper could prove fruitful, given the fact that intuitionistic logic can be transformed into the modal logic S4.

NEJDL 371

References

[1] John Bell. The logic of nonmonotonicity. Artificial Intelligence, 41(3):365-374, 1990.
[2] Anthony J. Bonner. A logic for hypothetical reasoning. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 480-484, St. Paul, Minnesota, August 1988. Morgan Kaufmann Publishers, Inc.
[3] Craig Boutilier. Conditional logics of normality as modal systems. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 594-599, Boston, August 1990. Morgan Kaufmann Publishers, Inc.
[4] J. P. Burgess. Quick completeness proofs for some logics of conditionals. Notre Dame Journal of Formal Logic, 22:76-84, 1981.
[5] Brian F. Chellas. Modal Logic: An Introduction. Cambridge University Press, 1980.
[6] Mukesh Dalal. Investigations into a theory of knowledge base revision: Preliminary report. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 475-479, St. Paul, Minnesota, August 1988.
[7] Johan de Kleer and Brian C. Williams. Diagnosing multiple faults. Artificial Intelligence, 32:97-130, 1987.
[8] Johan de Kleer and Brian C. Williams. Diagnosis with behavioral modes. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1324-1330, Detroit, August 1989. Morgan Kaufmann Publishers, Inc.
[9] James P. Delgrande. A first-order logic for prototypical properties. Artificial Intelligence, 33(1):105-130, 1987.
[10] James P. Delgrande. An approach to default reasoning based on a first-order conditional logic: Revised report. Artificial Intelligence, 36(1):63-90, 1988.
[11] Dov M. Gabbay. A general theory of the conditional in terms of a ternary operator. Theoria, 38:97-104, 1984.
[12] Peter Gärdenfors. Knowledge in Flux. MIT Press, 1988.
[13] Matthew L. Ginsberg. Counterfactuals. Artificial Intelligence, 30:35-79, 1986.
[14] Peter Jackson. On the semantics of counterfactuals. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1382-1387, Detroit, August 1989. Morgan Kaufmann Publishers, Inc.
[15] Sarit Kraus, Daniel Lehmann, and Menachem Magidor. Nonmonotonic reasoning, preferential models and cumulative logics. Artificial Intelligence, 44(1-2):167-207, 1990.
[16] Daniel Lehmann. What does a conditional knowledge base entail? In Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, pages 212-222, Toronto, May 1989. Morgan Kaufmann Publishers, Inc.
[17] D. K. Lewis. Counterfactuals. Blackwell, Oxford, 1973.
[18] Wolfgang Nejdl. The P-Systems: A systematic classification of logics of nonmonotonicity. Technical report, Technical University of Vienna, March 1991.
[19] Donald Nute. Conditional logic. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, Vol. II, chapter II.8, pages 387-439. D. Reidel Publishing Company, 1984.
[20] J. Pollock. A refined theory of counterfactuals. Journal of Philosophical Logic, 10:239-266, 1981.
[21] Raymond Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32:57-95, 1987.
[22] Robert Stalnaker. A theory of conditionals. In N. Rescher, editor, Studies in Logical Theory. Blackwell, Oxford, 1968. American Philosophical Quarterly Monograph Series, No. 2.
[23] Johan van Benthem. The Logic of Time. Reidel, Dordrecht, 1982.
[24] Johan van Benthem. Correspondence theory. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, Vol. II, chapter II.4, pages 167-247. D. Reidel Publishing Company, 1984.
[25] Marianne Winslett. Reasoning about action using a possible models approach. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 89-93, Saint Paul, Minnesota, August 1988.
Default Reasoning From Statistics

Fahiem Bacchus*
Department of Computer Science
University of Waterloo
Waterloo, Ontario, Canada N2L 3G1
fbacchus@logos.waterloo.ca

Abstract

There are two common but quite distinct interpretations of probabilities: they can be interpreted as a measure of the extent to which an agent believes an assertion, i.e., as an agent's degree of belief, or they can be interpreted as an assertion of relative frequency, i.e., as a statistical measure. Used as statistical measures, probabilities can represent various assertions about the objective statistical state of the world, while used as degrees of belief they can represent various assertions about the subjective state of an agent's beliefs. In this paper we examine how an agent who knows certain statistical facts about the world might infer probabilistic degrees of belief in other assertions from these statistics. For example, an agent who knows that most birds fly (a statistical fact) may generate a degree of belief greater than 0.5 in the assertion that Tweety flies, given that Tweety is a bird. This inference of degrees of belief from statistical facts is known as direct inference. We develop a formal logical mechanism for performing direct inference. Some of the inferences possible via direct inference are closely related to default inferences. We examine some features of this relationship.

Direct Inference

Probabilities can be used as a measure of the extent to which an agent believes an assertion, i.e., as a degree of belief, and they can also be used to represent statistical assertions of relative frequency. The key difference between these two types of probabilities is that statements of statistical probability are assertions about the objective statistical state of the world, while statements of degree of belief probability are assertions about the subjective state of an agent's beliefs.
For example, the statement "More than 75% of all birds fly" is a statistical assertion about the proportion of fliers among the set of birds; its truth is determined by the objective state of the world. On the other hand, the statement "The probability that Tweety flies is greater than 0.75" is an assertion about an agent's degree of belief; its truth is determined by the state of the agent who made the statement. The agent is using degrees of belief to quantify his uncertainty about the state of the world.

*This work was supported by NSERC grant #OGPOO848. Thanks to the referees for some useful comments.

An important question is: what is the relationship between the agent's subjective degrees of belief and his knowledge about objective statistical features of the world? The answer that we will advance is that the agent's subjective degrees of belief should be determined by his knowledge of objective statistical facts through a mechanism that has been called direct inference. Direct inference is a non-deductive form of inference which has a long history in philosophy, e.g., [1, 2, 3, 4], but has received scant attention in AI.¹

Simply stated, direct inference is the claim that the degree of belief an agent should assign to the proposition that a particular individual c possesses a particular property P should be equal to the proportion of individuals that have P from among some class of individuals that c is known to belong to. For example, if the agent knows that Tweety belongs to the class of birds and that more than 75% of birds fly, then direct inference would sanction the inference that the agent's degree of belief in the proposition "Tweety flies" should be greater than 0.75. It is important to note that this is not a deductive inference: there is nothing excluding the possibility that Tweety is a member of the subclass of non-flying birds.

The motivation for interest in direct inference comes from its common use in applications of probability.
For example, much of actuarial science is based on the principles of direct inference. When an insurance company quotes life insurance rates to particular individuals, they compute those rates by equating their degree of belief (willingness to bet) in a particular individual's death with the proportion of deaths among some set of similar individuals (e.g., similar in terms of sex, age, job hazard, etc.).

¹Loui [5] has used Kyburg's system of direct inference [2] as an underlying foundation in some of his systems of defeasible reasoning.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

In this paper we will present a formal mechanism for performing direct inference. This mechanism is logically based, and uses a logic that can represent and reason with statistical and degree of belief probabilities. By reasoning inside of this logic we can reason about the consequences of direct inference. Although there are many other applications of direct inference, we will concentrate here on what it tells us about default reasoning, by examining the relationship between some of the inferences that can be generated via direct inference and typical default inferences.

Probability Logic

Prior to the construction of a mechanism for direct inference we need a formalism capable of representing and reasoning with both types of probabilities. In [6] a logic for statistical probabilities was developed by the author. Subsequently, Halpern used some of these ideas to develop a logic for degree of belief probabilities, and also demonstrated how the logic for statistical probabilities and the logic for degrees of belief could be combined to yield a combined probability logic capable of dealing with both types of probabilities [7]. In the combined probability logic, however, there is no intrinsic relationship between the two types of probabilities: they simply coexist without significant interaction (see [8]).
However, the logic does provide a suitable formal framework for specifying a relationship. The main contribution of this research has been the development of such a specification: a mechanism for direct inference. This particular paper, however, will concentrate on the relationship between direct inference and default reasoning. To better understand the mechanism we will present a brief outline of the combined probability logic. For more details see [8] or [7].

The combined logic is a two-sorted, first-order, modal logic. There is a sort of objects and a sort of numbers. The sort of objects consists of function and predicate symbols suitable for describing some domain of interest, while the numeric sort is used to make numeric assertions, especially assertions about probabilities. In particular, the numeric sort includes the constants 1, -1, and 0; the functions + and ×; and the predicates = and ≤. Additional inequality predicates and numeric constants can easily be added by definition, and we will use them freely. The formulas of the language are generated by applying standard first-order formula formation rules with the addition of two rules that generate new numeric terms (specifically probability terms) from existing formulas, i.e., these terms denote numbers (that correspond semantically to the values of the probability measure).

1. If α is a formula and x⃗ is a vector of n distinct object variables, then [α]_x⃗ is a statistical probability term.

Since these constructs are terms, they can in turn be used in the generation of additional new formulas. We can extend the language by definition to include conditional probability terms (of both types): [α|β]_x⃗ =df [α ∧ β]_x⃗ / [β]_x⃗, and prob(α|β) =df prob(α ∧ β)/prob(β).² Another useful abbreviation is: cert(α) =df prob(α) = 1.

Semantically the language is interpreted using structures of the form ⟨O, S, π, μ_O, μ_S⟩.³ Where:

1. O is a domain of objects (i.e., the domain of discourse). S is a set of states or possible worlds.
π is a world-dependent interpretation of the symbols. The numeric symbols are interpreted as relations and functions over the reals ℝ, with the numeric functions and predicates +, ×, 1, -1, 0, ≤ and = given their normal interpretation in every state.

2. μ_O is a discrete probability measure over O. That is, for every A ⊆ O, μ_O(A) = Σ_{o∈A} μ_O(o) and Σ_{o∈O} μ_O(o) = 1.

3. μ_S is a discrete probability measure over S. That is, for every S' ⊆ S, μ_S(S') = Σ_{s∈S'} μ_S(s) and Σ_{s∈S} μ_S(s) = 1.

The formulas of the language are interpreted with respect to this semantic structure in a manner standard for modal languages. In particular, the interpretation of a formula will depend on a structure M, a current world s ∈ S, and a variable assignment function v. The probability terms are given the following interpretation:

1. ([α]_x⃗)^(M,s,v) = μ_O^n{o⃗ : (M, s, v[x⃗/o⃗]) ⊨ α}, where v[x⃗/o⃗] is the variable assignment function identical to v except that v(x_i) = o_i, and μ_O^n is the n-fold product measure formed from μ_O.

2. (prob(α))^(M,s,v) = μ_S{s' : (M, s', v) ⊨ α}.

So we see that [α]_x⃗ denotes the measure of the set of satisfying instantiations of x⃗ in O, and prob(α) denotes the measure of the set of worlds that satisfy α. Another way of interpreting the statistical probability terms is to consider them as representing the probability that a randomly selected⁴ vector of individuals x⃗ will satisfy α. We will sometimes call the variables x⃗ bound by the statistical terms random designators.

The use of a product measure to assign probabilities to vectors means that we are treating the individuals in the vector as if they were selected independently of each other. Hence marginal probabilities

²For ease of exposition we will ignore the technicalities of dealing with division by zero. See [8] for details.
³Halpern has called such structures type III probability structures.
2. If α is a formula, then prob(α) is a degree of belief probability term.

⁴That is, the probability of selecting any particular vector of individuals o⃗ ∈ Oⁿ is μ_O^n({o⃗}).

BACCHUS 393

will display independence. For example, we have that [P(x) ∧ R(y)]_(x,y) = [P(x)]_x × [R(y)]_y. This says that if we select at random two individuals, the probability that the first has property P and the second has property R will be the product of the two probabilities: the two individuals are selected independently of each other. However, this does not mean that the conditional probabilities will be independent. In particular, if we have knowledge that x and y are related, say through relation Q, we will no longer have independence; e.g., in general, [P(x) ∧ R(y) | Q(x,y)]_(x,y) is not decomposable into the product of simpler probability terms. For example, if we know that x and y are two randomly selected birds that we know are brothers, the probability that x is a flier will not be independent of the probability of y being a flier. However, if we have no knowledge of any relationship between x and y then the probabilities of each being a flier will be independent. For more on this point see [8].

Here are some examples to help the reader understand the language and its semantics.

[fly(x)|bird(x)]_x > 0.75. A particular triple (M, s, v) satisfies this formula if the probability that a randomly selected bird flies is greater than 0.75. That is, if more than 75% of the birds in the world s fly.

prob(fly(Tweety)) > prob(swim(Tweety)). This formula is satisfied if the measure of the set of worlds in which Tweety flies is greater than the measure of the set of worlds in which he swims. That is, the agent believes Tweety more likely to be a flier than a swimmer.

prob([grey(x)|elephant(x)]_x > 0.75) > 0.9. This formula is satisfied if the measure of the set of worlds in which a randomly selected elephant has a greater than 75% probability of being grey is greater than 0.9.
That is, the agent has a high degree of belief that more than 75% of elephants are grey. Such knowledge might come from traditional statistical inference over sample information.

A powerful proof theory for this language can be constructed which is complete for various special cases [7, 8]. We simply note that all of the examples of reasoning we give here can be performed within this proof theory.

We need to extend the language beyond that developed by Halpern to include an expectation operator. Specifically, if t is a statistical probability term, then E(t) is a new numeric term whose denotation is defined as follows:

(E(t))^(M,s,v) = Σ_{s'∈S} μ_S(s') × t^(M,s',v).

That is, the expected value of a term is the weighted (by μ_S) average of its denotation over the set of possible worlds. The expected value has the advantage that it has the same denotation across all possible worlds (it is rigid), unlike the raw statistical term.

The Direct Inference Formalism

With the combined probability logic in hand we can describe the formalism of direct inference as a theory, i.e., as a collection of sentences, of that logic. Direct inference works by using an agent's base of accepted knowledge to assign degrees of belief in assertions that are not deductive consequences of the knowledge base.

We will call a formula of our logic objective if it does not contain any 'prob' or 'E' operators. Such formulas do not depend on any world except the current one for their interpretation. We suppose that the agent's knowledge base is represented by a finite collection of objective formulas, and we let KB denote the formula generated by conjoining all of the formulas in the knowledge base. KB will usually include information about particular individuals, e.g., bird(Tweety), general logical relationships between properties, e.g., ∀x.bird(x) → animal(x), and statistical information, e.g., [fly(x)|bird(x)]_x > .75.

Definition 1 [Randomization] Let α
be an objective formula. If (c_1, ..., c_n) are all the n distinct object constants that appear in α ∧ KB and (v_1, ..., v_n) are n distinct object variables that do not occur in α ∧ KB, then let KB^v (α^v) denote the new formula which results from textually substituting c_i by v_i in KB (α), for all i.

Definition 2 [Direct Inference Principle] For any objective formula α, the agent's degree of belief in α should be determined by the equality

prob(α) = E([α^v | KB^v]_v⃗).⁵

In addition, the agent must fully believe that [KB^v]_v⃗ > 0, i.e., cert([KB^v]_v⃗ > 0).⁶

The collection of all instances of the direct inference principle specifies a theory which characterizes the agent's reasoning with direct inference.

Definition 3 [The Base Theory] Let D_0 be the set of formulas consisting of cert([KB^v]_v⃗ > 0) along with all instances of the direct inference principle. Let T_0 denote the closure of D_0 under logical consequence. T_0 is the agent's base theory given her initial knowledge base KB.

Before going further we note some important theorems which demonstrate that our mechanism is coherent.

⁵The justification for our use of the expected value of the statistical term is given in [8]. A pragmatic reason is that the expectation terms are rigid, as noted above.
⁶There is a very reasonable justification for this assumption [8], but here it is sufficient to note that it allows us to avoid division by zero.

Theorem 1
1. The degrees of belief given by the direct inference principle are in fact probabilities.
2. If KB ∧ [KB^v]_v⃗ > 0 is satisfiable, then so is T_0 (i.e., T_0 is consistent).
3. prob(α) = 1 is a logical consequence of the direct inference principle if and only if ⊨ KB → α.
4. The agent's full beliefs, i.e., the formulas which are assigned degree of belief 1 in T_0, have the same logical behavior as a collection of KD45 beliefs with cert acting as a belief modality.

These theorems give us the following picture of our mechanism.
First, the agent's degrees of belief are probabilistic. Second, the mechanism is consistent. And third, the mechanism generalizes the most common non-probabilistic model of an agent's beliefs, the modal logic KD45 [9, 10]. In particular, the agent fully believes all logical consequences of his initial accepted collection of beliefs (which includes all of KB), and his full beliefs are closed under positive and negative introspection. In accord with the KD45 model, his beliefs are not, however, necessarily accurate (although the agent thinks that the probability of his full beliefs being false is zero).

Examples

With these theoretical guarantees we can proceed to examine the nature of the inferences generated by direct inference through some examples. As will become clear in this section, direct inference sanctions conclusions that are very similar to those conclusions sought after in default reasoning. We only have space for a couple of examples, but see [8] for the detailed working out of many other examples.

The direct inference principle requires us to condition on the entire knowledge base. In actual applications this store of knowledge can be very large. The following equality is useful for reducing the conditioning formula:

(*) [α | β ∧ A]_x⃗ = [α | β]_x⃗, if no x_i ∈ x⃗ which appears in α ∧ β is free in A.⁷

This equality is valid, i.e., true in every possible world in every model. Hence, it holds even inside of a cert context. The equality follows from our use of a product measure. It says that the chance of a vector of individuals satisfying α will be influenced only by their known properties and by the properties of other individuals that are known to be related to them. For example, the fact that Fido is a dog will have no influence on the chance of Tweety being a flier.

⁷Freedom and bondage are extended in our language to include the clause that the random designators are bound.
Also it can be proved that we can rename the random designators without changing the values of the statistical terms.

As our first example, consider Tweety the bird and Opus the penguin. We will use c as a numeric constant with any denotation greater than 0.5.

Example 1 [Tweety, but not Opus, flies] Let

KB = bird(Tweety) ∧ bird(Opus) ∧ penguin(Opus)
  ∧ [fly(x)|bird(x)]_x > c
  ∧ ∀x.penguin(x) → bird(x)
  ∧ [fly(x)|penguin(x)]_x < 1 − c.

KB^v differs from KB only with respect to the first three conjuncts (none of the others have constant symbols). We let v1 replace Tweety and v2 replace Opus. First we can examine prob(fly(Tweety)):

(a) prob(fly(Tweety)) = E([fly(v1) | KB^v]_(v1,v2))
(b) cert([fly(v1) | KB^v]_(v1,v2) = [fly(v1) | bird(v1)]_(v1))
(c) E([fly(v1) | KB^v]_(v1,v2)) = E([fly(v1) | bird(v1)]_(v1))
(d) cert([fly(v1) | bird(v1)]_(v1) > c)
(e) E([fly(v1) | bird(v1)]_(v1)) > c
    prob(fly(Tweety)) > c

Equation (a) is the relevant instance of the direct inference principle; (b) is the result of applying equation (*);⁸ (c) and (e) follow from the relationship between certainty and expectation;⁹ and (d) is a consequence of Theorem 1.3.

The general form of the derivation is applicable to most examples. First, we apply the particular instance of the direct inference principle we are interested in, using equation (*) to remove irrelevant parts of KB. These parts include formulas with no constants and formulas with constants that stand in no relationship with the constants in the formula of interest (in this first example, the constant Opus). We can then proceed to reason about the value of the resulting expectation term in the agent's base theory by examining the statistical knowledge in KB. All of this knowledge (and every other deductive consequence of KB) will be certain in the agent's base theory.

The case for Opus also yields the expected result that prob(fly(Opus)) < 1 − c. This follows from the fact that penguins are known to be a subset of birds. Therefore, the chance that a randomly selected penguin-bird is a flier will be equal to the chance that a randomly selected penguin is a flier: these two sets are identical. This example shows that there is a natural subset preference between defaults encoded as statistical assertions.

⁸This inference also depends on the fact that [α]_(x⃗,y) = [α]_x⃗ if y does not appear free in α. This fact is an easy consequence of our semantics: if y is not free its instantiation will not matter.
⁹Clearly if two terms have equal value in every world of non-zero probability (i.e., are certainly equal), then their expected values over the worlds must also be equal. Similar comments hold for the cases of their values being certainly related by >, <, ≥, or ≤.
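The subset preference can be checked concretely on a toy population. The counts below are hypothetical (not from the paper), chosen only so that both statistical premises of Example 1's KB hold with c = 3/4 under a uniform measure μ_O:

```python
from fractions import Fraction

# Hypothetical finite population of birds under a uniform measure.
birds = 100
penguins = 10          # penguins are a subset of the birds
flying_birds = 80      # includes one unusual flying penguin
flying_penguins = 1

c = Fraction(3, 4)

fly_given_bird = Fraction(flying_birds, birds)           # [fly(x)|bird(x)]_x = 4/5
fly_given_penguin = Fraction(flying_penguins, penguins)  # [fly(x)|penguin(x)]_x = 1/10

# Tweety is only known to be a bird, so direct inference uses the bird statistic.
assert fly_given_bird > c
# Opus is known to be a penguin; penguin & bird = penguin, so the more
# specific penguin statistic applies and overrides the bird statistic.
assert fly_given_penguin < 1 - c
```

The override requires no extra machinery: conditioning on everything known about Opus simply selects the penguin statistic, because the set of penguin-birds is the set of penguins.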
Therefore, the chance that a randomly selected penguin-bird is a flier will be equal to the chance that a randomly selected penguin is a flier: these two sets are identical. This example shows that there is a natural subset prefer- ence between defaults encoded as statistical assertions. 8This inference also depends on the fact that [a](,,y) = [crlZ if y does not appear free in cy. This fact is an easy consequence of our semantics: if y is not free its instantiation will not matter. ‘Clearly if two terms have equal value in every world of non-zero probability (i.e., are certainly equal), then their expected values over the worlds must also be equal. Similar comments hold for the cases of their values being certainly related by >, <, 2, or 5. BACCHUS 395 This preference is a direct consequence of the seman- tics of such an encoding. That is, there is no need for a meta-logical preference criterion under the statistical interpretation. Example 2 [Clyde likes Tony but not Fred.] Let KB = elephant(Clyde) A zookeeper(Tony) A zookeeper(Fred) A [likes(x, y)lelephant(x) A zookeeper(y)]txly) > c A [likes(x,Fred)]elephant(x)]x < 1 - c Although there is no space for the details, this example also yields the expected results: it is likely that Clyde likes Tony, but unlikely that he likes Fred. What hap- pens here is that since the defaults are not separated from the rest of the knowledge, i.e., they are represented as formulas of KB, we can condition on specialized de- faults, like the one particular to Fred. This has the desired effect on the resulting degree of belief. Hence, TO has a natural “specificity” preference when dealing with defaults specific to a particular individual. Statistically Encoded Defaults By encoding defaults as qualitative statistical assertions we have demonstrated the similarity between the style of inference generated by our system of direct infer- ence and that sought after by default inference. 
Others have argued in favor of a statistical interpretation for defaults [12, 51 and have discussed the role of proba- bilities in nonmonotonic reasoning [13, 141, but such an encoding of defaults remains controversial. It does, however, match our basic intuitions that defaults are rules of thumb about what is usually, but not always, the case: this is precisely the semantics of a statistical assertion of majority. It is often claimed that there are many other, non- probabilistic, notions involved in the meaning of de- faults. Unfortunately, this position is difficult to ar- gue for or against, as very little work has been done on specifying exactly what defaults are. Our current understanding of defaults remains mostly a collection of intuitions as to how they should behave, and typi- cally systems of default reasoning are evaluated by their behavior rather than by any argument that they are founded on a reasonable semantics for defaults. I will not claim that a statistical interpretation pro- vides a complete picture of the their semantics, but I would argue that such an interpretation provides an essential component of their meaning (in my opinion, the major component). Thus, to specify this compo- nent precisely, as is the goal of this work, is an essential step in producing a complete specification. I will give two arguments to support my position. It should be noted, however, that I view default reasoning as reason- ing that is used to draw plausible or useful conclusions that are not sound in the strict logical sense. There are many other sources of nonmonotonicity in an agent’s 396 NONMONOTONIC REASONING reasoning, e.g., changing the theory in which reasoning is being performed. My claims about the statistical in- terpretation only pertain to default reasoning, not to all forms of nonmonotonic reasoning (although my in- tuition is that statistics play an important role in these other forms as well). 
The strongest argument in favor of the statistical interpretation comes from its flexibility and how well it obeys the aforementioned intuitions about how defaults should behave. See [8] for precise details as to how the behaviors described below follow from the statistical semantics. Besides imparting the natural subset and specificity preferences, the statistical interpretation and our system of direct inference include the following features:

1. Through its assignment of degrees of belief, the system formally distinguishes default and deductive inferences (Theorem 1.3). As a consequence it avoids the paradoxes of unlimited scope in nonmonotonic reasoning [15], which include the lottery paradox [16].

2. The statistical semantics allows us to reason naturally with the defaults. For example, we can conclude that certain defaults are contradictory, e.g., "birds typically fly" contradicts "birds typically don't fly." In fact, all of the reasoning with defaults (and more) that makes conditional logic approaches attractive [17] can be duplicated within the statistical semantics.

3. Contraposition is handled correctly: we are not forced to contrapose defaults. With some defaults, like "birds typically fly," the contraposition "non-fliers are typically not birds" is a reasonable inference, but with others it is clearly counter-intuitive, e.g., from "living things are typically not fliers" to "fliers are typically not living." Using the statistical semantics one can demonstrate that in the case of birds, contraposition is sanctioned by the fact that birds are more likely to be fliers than non-birds. Living things, on the other hand, are not more likely to be non-fliers than non-living things; hence contraposition is blocked in this case.

4. The system is capable of reasoning with many different forms of statistical information.
For example, instead of encoding defaults as statements of statistical majority, as in the examples above, one could encode them as qualitative influences which increase probability [18, 19]. Each form sanctions a different collection of inferences. Furthermore, by including in the specification statements of conditional independence (easily expressed in the logic), additional inferences can be captured. For example, statements of statistical majority cannot be chained [20], but with conditional independences weak chaining can occur, where the strength of the conclusion diminishes as more defaults are used. Intuitively this makes, to me, more sense than systems that do not distinguish between inferences that rely on a very large number of defaults and those that only require one or two defaults, as each use of a default represents the introduction of a possible error.

As an aside, it is interesting to contrast these behaviors with those of ε-probabilities [21]. The approach of ε-probabilities does not appeal to a statistical semantics; rather, it is more of an attempt to duplicate traditional nonmonotonic approaches using probabilistic notions. It has the advantages that it retains the context sensitivity of probability, thus capturing the subset preference, and it allows some reasoning with the defaults. However, by forcing all probabilities to go to the limit it gives up the ability to assign degrees of belief, thus falling prey to the lottery paradox; it forces contraposition in all cases, intuitive or not; it lacks the ability to deal with different forms of probabilistic information, e.g., while the formalism outlined above can include exact qualitative evidential reasoning, ε-probabilities forces one to "switch logics" [14]; and finally, it gives up the relatively clear notion of statistics for a more obscure notion of probabilities infinitesimally removed from 1.
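The weak chaining described above can be made concrete with a toy population (the numbers and the independence assumption below are mine, purely illustrative): if [B(x)|A(x)]_x = c1 and [C(x)|B(x)]_x = c2, and C is assumed conditionally independent of A given B, then the chained statistic [C(x)|A(x)]_x is bounded below only by c1·c2, so each additional default step weakens the conclusion.

```python
from fractions import Fraction

# Hypothetical counts: "A's are typically B's" and "B's are typically C's".
n_A = 100
n_AB = 90    # [B(x)|A(x)]_x = 9/10
n_ABC = 81   # assumes [C(x)|A(x) & B(x)]_x = [C(x)|B(x)]_x = 9/10

c1 = Fraction(n_AB, n_A)
c2 = Fraction(n_ABC, n_AB)

# Even if no A outside B is a C, the chained default still holds, but weaker:
chained_lower_bound = Fraction(n_ABC, n_A)

assert chained_lower_bound == c1 * c2      # 81/100
assert chained_lower_bound < min(c1, c2)   # strictly weaker than either step
```

Each use of a default multiplies in another factor below 1, which is exactly the graded notion of "each default introduces a possible error" that the text appeals to.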
The second argument for the statistical interpretation comes from an appealing view of default reasoning as rational inference, put forward by Doyle and Wellman [22]. In this view an agent adopts a belief (draws a default conclusion) if the expected utility of holding it exceeds the expected utility of not holding it. Under such a view it is very natural that utility should be divided into separate notions of cost and likelihood, and that the notion of likelihood required would be a graded one (even if roughly graded). For example, accepting a falsehood yields a tremendous decrease in the cost of reasoning (subsequent reasoning in the inconsistent knowledge base can be done by always answering yes, a constant time procedure), but it has zero utility precisely because the likelihood of correctness is zero.

An important reason for separating cost and likelihood is that these notions can be fairly independent. Computational cost will not usually be affected by the acceptance or rejection of a plausible conclusion; rather it will be more affected by the acceptance or rejection of theories. For example, for reasons of computational cost an agent might decide to accept a Horn theory description of a domain rather than a more accurate first-order description. However, within the Horn theory he may still need to infer a new Horn formula that is plausible but does not follow deductively.¹⁰ A statistical approach using direct inference provides these grades of likelihood in a natural way.

¹⁰If computational cost is solely determined by acceptance or rejection of theories, which is a source of nonmonotonicity in reasoning, then default reasoning, circumscribed to be the generation of plausible inferences within a particular theory, would be solely determined by likelihood. That is, one could argue that although computational cost is important in general nonmonotonic reasoning, default reasoning is purely based on likelihood.
Furthermore, these grades, degrees of belief, are probabilistic, and thus they lead naturally to well developed models of expected utility. It is quite possible that statistical information could be compiled into more traditional default rules that carry an index of caution with them. This index, derived from the statistical information, would be used to indicate under what conditions of risk the default could be used (cf. [23]). Under such a scheme most of the behavior of these rules, like subset preference, would follow directly from the underlying statistical semantics.

Conclusions

We have pointed out that direct inference is a useful form of non-deductive inference that takes us from statistics to predictions about particular cases. And we have provided a formal mechanism for performing this kind of inference. Our system makes clear the important similarities between reasoning from statistics in this manner and default reasoning as studied in AI.

An issue we have not addressed is what has been called the conditional interpretation [24]. This is the problem that the system is unable to reason to default conclusions in the face of irrelevant information, much like other approaches that adopt a conditional interpretation (e.g., ε-probabilities and conditional logics).¹¹ For example, if we know that Tweety is a yellow bird, we cannot use our statistics about birds; rather, we will require statistics about yellow birds. The system cannot conclude that yellowness is irrelevant without explicit statistical information.

In [8] a system is developed that allows the agent to extend his base theory. These extensions are generated, as in Reiter's default logic [25], by adding explicit assertions about statistical irrelevance. Thus it uses a traditional approach to non-monotonic reasoning in a non-traditional manner: to make default assumptions about relevance and irrelevance rather than the final default conclusions.
This preserves the statistical semantics of the ordinary defaults, thus retaining the reasonable constraints imposed by those semantics. In joint work a more satisfying approach is under investigation [26]. This work approaches the problem semantically rather than syntactically, building models in which all possible statistical independencies hold. It is related to other principles of maximal independence like maximum-entropy, but it has the advantage of providing a fairly natural semantic construction which captures the reasonable assumptions of irrelevance that the agent can make.

¹¹Actually the issue of irrelevance appears in other approaches to default reasoning, e.g., default logic, autoepistemic logic, circumscription. These approaches include implicit, built-in, assumptions of irrelevance. These built-in assumptions, however, may not always be appropriate.

BACCHUS 397

References

[1] Hans Reichenbach. Theory of Probability. University of California Press, Berkeley, 1949.
[2] Henry E. Kyburg, Jr. The Logical Foundations of Statistical Inference. D. Reidel, Dordrecht, Netherlands, 1974.
[3] John L. Pollock. Nomic Probability and the Foundations of Induction. Oxford University Press, Oxford, 1990.
[4] Isaac Levi. The Enterprise of Knowledge. MIT Press, Cambridge, Massachusetts, 1980.
[5] Ronald P. Loui. Theory and Computation of Uncertain Inference and Decision. PhD thesis, The University of Rochester, September 1987.
[6] Fahiem Bacchus. Lp, a logic for representing and reasoning with statistical knowledge. Computational Intelligence, 6(4):209-231, 1990.
[7] Joseph Y. Halpern. An analysis of first-order logics of probability. In Proc. International Joint Conference on Artificial Intelligence (IJCAI), pages 1375-1381, 1989.
[8] Fahiem Bacchus. Representing and Reasoning With Probabilistic Knowledge. MIT Press, Cambridge, Massachusetts, 1990.
[9] Ronald Fagin and Joseph Y. Halpern.
Belief, awareness and limited reasoning. Artificial Intelligence, 34:39-76, 1988.
[10] Hector J. Levesque. A logic of implicit and explicit belief. In Proc. AAAI National Conference, pages 198-202, 1984.
[11] Fahiem Bacchus. Representing and Reasoning With Probabilistic Knowledge. PhD thesis, The University of Alberta, 1988.
[12] Henry E. Kyburg, Jr. Objective probabilities. In Proc. International Joint Conference on Artificial Intelligence (IJCAI), pages 902-904, 1987.
[13] Benjamin N. Grosof. Non-monotonicity in probabilistic reasoning. In L. N. Kanal and J. F. Lemmer, editors, Uncertainty in Artificial Intelligence Vol I, pages 91-98. North-Holland, Amsterdam, 1986.
[14] Judea Pearl. Probabilistic semantics for nonmonotonic reasoning: A survey. In Brachman et al. [27], pages 505-516.
[15] David W. Etherington, Sarit Kraus, and Donald Perlis. Nonmonotonicity and the scope of reasoning. In Proc. AAAI National Conference, pages 600-607, 1990.
[16] David Poole. What the lottery paradox tells us about default reasoning. In Brachman et al. [27], pages 333-340.
[17] James P. Delgrande. A first-order conditional logic for prototypical properties. Artificial Intelligence, 33:105-130, 1987.
[18] Michael P. Wellman. Formulation of Tradeoffs in Planning Under Uncertainty. Research Notes in Artificial Intelligence. Morgan Kaufmann, San Mateo, California, 1990.
[19] Eric Neufeld. Defaults and probabilities; extensions and coherence. In Brachman et al. [27], pages 312-323.
[20] Fahiem Bacchus. A modest, but semantically well founded, inheritance reasoner. In Proc. International Joint Conference on Artificial Intelligence (IJCAI), 1989.
[21] Hector Geffner and Judea Pearl. A framework for reasoning with defaults. In Henry E. Kyburg, Jr., Ronald Loui, and Greg Carlson, editors, Knowledge Representation and Defeasible Inference. Kluwer Academic Press, London, 1989.
[22] Jon Doyle and Michael P. Wellman. Impediments to universal preference-based default theories. In Brachman et al.
[27], pages 94-102.
[23] Henry E. Kyburg, Jr. Full beliefs. Theory and Decision, 25:137-162, 1988.
[24] Hector Geffner. Conditional entailment: Closing the gap between defaults and conditionals. In AAAI Workshop on Nonmonotonic Reasoning (unpublished collection of articles), pages 58-72, 1990.
[25] Raymond Reiter. A logic for default reasoning. Artificial Intelligence, 13:81-132, 1980.
[26] Fahiem Bacchus, Joseph Halpern, Adam Grove, and Daphne Koller. From statistics to beliefs, 1991. In preparation.
[27] Ronald J. Brachman, Hector J. Levesque, and Raymond Reiter, editors. Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning. Morgan Kaufmann, San Mateo, California, 1989.

398 NONMONOTONIC REASONING
Incorporating Nonmonotonic Reasoning in Horn Clause Theories

James P. Delgrande
School of Computing Science
Simon Fraser University
Burnaby, B.C.
Canada

Abstract

An approach for introducing default reasoning into first-order Horn clause theories is described. A default theory is expressed as a set of strict implications of the form α₁ ∧ … ∧ αₙ ⊃ β, and a set of default rules of the form α₁ ∧ … ∧ αₙ ⇒ β, where the αᵢ and β are function-free literals. A partial order of sets of formulae is obtained from these sets of (strict and default) implications. Default reasoning is defined with respect to this ordering and a set of contingent ground facts. Crucially, only strict implications appear in this structure. Consequently the complexity of default reasoning is that of classical reasoning, together with an attendant overhead for manipulating the structure. This overhead is O(n²), where n is the number of original formulae. Hence for defaults in propositional Horn clause form, time complexity is O(n²m) where m is the total length of the original formulae. The approach is sound, in that default reasoning in this structure is proven to conform to that of an extant system for default reasoning.

Introduction

The area of default reasoning has of course received widespread and extensive attention in Artificial Intelligence. A difficulty with general theories of default reasoning, such as [Rei80, MD80, Moo83, Del88], is that it is not obvious how they may be reasonably implemented. The intent of this paper is to present a reformulation of an existing approach to exploit the fact that there are restrictions of classical logic with reasonable complexity properties. In particular, propositional and (restrictions to) first-order Horn theories have better complexity bounds than do classical propositional and first-order logic respectively; from this fact we develop systems for default reasoning with acceptable complexity properties.
In this approach, a default theory consists of a set of regular Horn clauses along with a set of default Horn assertions. A default Horn assertion is simply a Horn clause where the implication operator ⊃ is replaced by a default conditional operator ⇒. The clause α₁ ∧ … ∧ αₙ ⇒ β has the informal reading "if α₁ and … and αₙ then, in the normal course of events, β". These assertions are translated into a mathematical structure called the canonical structure. This structure consists of sets of formulae arranged in a partial order. Crucially, this ordering is directly based on the semantic theory of a logic of defaults. Crucially too, these sets of formulae are obtained from the defaults by replacing the default conditional ⇒ with the standard conditional ⊃. A notion of derivability is defined for the structure and it is shown that this notion is sound with respect to the logic of defaults presented in [Del87]. Default reasoning is defined for this structure and proven to be sound with respect to the approach of [Del88].

The time complexity of default reasoning is that of manipulating the structure, together with that of reasoning with the sets of formulae. For determining some default extension (i.e. consistent world view) the results are quite good: manipulating the structure is O(n²), where n is the total number of conditionals. Hence, for example, for propositional Horn default theories the overall complexity is O(n²m) where m is the overall size of the set of assertions. However, for determining if a specified property follows in some extension or in all extensions, the results are not as good, and in the worst case the problem is intractable (unless P = NP). This result appears to be intrinsic to the problem of default reasoning itself, rather than to the approach at hand.

The underlying formal system and related work is discussed in the next section.
The third section informally describes the approach while Section 4 provides the formal details and several examples. Section 5 discusses complexity issues; the last section gives a brief discussion.

DELGRANDE 405
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

Background and Related Work

We can divide work concerning default, or defeasible (I will use the two terms interchangeably), inference into two broad areas, depending on the primary goal of the work. First there are approaches whose primary intention is to formally and broadly characterise an approach to default reasoning. Representative work includes [Rei80, MD80, Moo83, Sho87, Del88]. A difficulty with these approaches is that, while they may adequately characterise a particular phenomenon, they are basically computationally hopeless. In the above cited works, for example, the set of inferences following from a default theory is not recursively enumerable.

There has also been a good deal of interest in identifying tractable, or at worst decidable, subcases of defeasible reasoning. This work began by concentrating on inheritance networks of defeasible properties. Such work arguably began with [ER83] (although here complexity issues were not a concern) and includes [HTT87, THT87, HT88, Hau88, Gef88, Bou89]. As [THT87] points out, many of these approaches lack any semantic basis and so appeal to intuitions for their justification. Others use intuitions differing from those used here; for example, [Gef88] provides a probabilistic account of inheritance. In a somewhat different vein, [Poo88] presents a framework for implementing approaches to defeasible reasoning, although complexity issues are not addressed.

More recently there has been work straddling these two general approaches, wherein tractability is a major concern, but so are questions of semantics.
In [KS89] limited versions of Reiter's default logic are considered and complexity results are obtained. [SK90] considers complexity issues in default systems based on model preference. [Pea90b] examines several entailment relations in a probability-based system of default rules and again, in some cases, obtains good complexity results. [KLM90] examines a family of nonmonotonic consequence relations, based on the approach of [Sho87].

The present work belongs to this last area, in that it is based on an extant general approach to default reasoning, but is intended to address complexity issues in default reasoning. The overall approach is to reduce default reasoning to classical reasoning, plus some attendant "overhead". Thus existing results for restrictions to propositional and first-order logic, notably those obtained for theories in Horn clause form, carry over to default reasoning here.

Before discussing the general approach however, it is worthwhile first surveying general approaches to default reasoning by how a default assertion is (broadly) interpreted. There are at least four distinct ways an assertion such as "birds fly" may be construed. In probabilistic approaches [Pea90a], "birds fly" is interpreted roughly as "a bird flies with certainty n". In consistency-based approaches "birds fly" is paraphrased as "if something is a bird, and it is consistent that it flies, then conclude that it flies". [Rei80, MD80] are the original approaches in this area; [Rei87] surveys such approaches. In approaches based on model preference one interprets "birds fly" by restricting the set of models of the sentence. [McC80] is the original work in this area. [Sho87] presents a more general approach while [Rei87] also surveys this area. Lastly, in a (so-called) scientific approach [Del87], "birds fly" is an assertion in a naive scientific theory, in the same way that "CO₂ freezes at −108°" is.
There is, of course, no "correct" interpretation; rather the above categories represent distinct, complementary ways in which one may understand a default.

For this last approach, [Del87] describes a conditional logic, N, for representing and reasoning about default statements. This logic consists of first-order logic augmented with a variable conditional operator, ⇒, for representing default statements. The statement α ⇒ β is read as "if α then normally β". Thus "birds fly" is represented as ∀x(Bird(x) ⇒ Fly(x)), or in a propositional gloss as Bird ⇒ Fly. This approach is intended to capture the notion of defeasibility found in naive scientific theories. That is, in the same way that "CO₂ freezes at −108°" states that a quantity of CO₂ would freeze at −108° if it were pure, if the measuring equipment didn't affect the substance, etc., so too would a bird fly if we allowed for exceptional circumstances such as having a broken wing, being a penguin, etc.

Truth in the resulting logic is based on a possible worlds semantics: the accessibility relation between worlds is defined so that from a particular world w one "sees" a sequence of successively "less exceptional" sets of worlds. α ⇒ β is true just when β is true in the least exceptional worlds in which α is true. Thus "birds normally fly" is true if, in the least exceptional worlds in which there are birds, birds fly. Intuitively, we factor out exceptional circumstances such as being a penguin, having a broken wing, etc., and then say that birds fly if they fly in such "unexceptional" circumstances. A proof theory and semantic theory is provided and soundness and completeness results are obtained. For example, in the logic, the following sentences are valid:

((A ⇒ T) ∧ (T ⇒ E)) ⊃ ((A ∧ T) ⇒ E)
((B ⇒ F) ∧ ((B ∧ F) ⇒ S)) ⊃ (B ⇒ S)
((R ⇒ Bl) ∧ ((R ∧ Al) ⇒ ¬Bl)) ⊃ ((R ∧ ¬Al) ⇒ Bl)

Thus, in the first formula, if adults (A) normally pay tax (T) and people who pay tax are normally employed (E), then adults who pay tax are normally employed.
Second, if birds (B) normally fly (F) and birds that fly are normally small (S), then birds are normally small. Lastly, if ravens (R) are normally black (Bl), and albino (Al) ravens are normally not black, then non-albino ravens are normally black.

However, the logic is markedly weaker than classical logic. For example, the following sets of sentences are satisfiable:

1. {R(x) ⇒ Bl(x), (R(x) ∧ A(x)) ⇒ ¬Bl(x)}
2. {P(x) ⊃ B(x), B(x) ⇒ F(x), P(x) ⇒ ¬F(x)}
3. {B(x) ⇒ F(x), B(opus), ¬F(opus)}.

Thus in the first case, ravens are (normally) black but albino ravens are normally not black. Second, penguins are birds; birds normally fly; but penguins normally do not fly. Lastly, birds normally fly; but Opus is a non-flying bird. In classical logic of course, the corresponding conditionals would either be inconsistent or only trivially satisfiable.

The last example also shows that the conditional operator ⇒ cannot support modus ponens. The reason for this is that the truth of B(opus) and F(opus) depends on the present, contingent state of affairs being modelled, while the truth of B(x) ⇒ F(x) depends on other "less exceptional" states of affairs. Hence there is no necessary connection between the truth of B(x) ∧ F(x) and of B(x) ⇒ F(x). However, if we know only that Opus is a bird, then it is reasonable to conclude by default that Opus flies. This problem is addressed in [Del88], where an approach for default inferencing is developed. Default reasoning is based on the assumption that the world at hand is as unexceptional as consistently possible. This provides a justified means for replacing the variable conditional ⇒ with the material conditional ⊃ in some of the defaults.
Thus, if we know only that R is contingently true and that R ⇒ Bl is true, then if the world being modelled is as unexceptional as consistently possible, then the world is among the least exceptional worlds in which R is true; R ⊃ Bl is true at all such worlds, and so we conclude Bl by default.

The overall approach then differs significantly from previous approaches (with the exception of approaches based on probability theory) in that it provides an explicit semantics for statements of default properties. Hence, in contrast to previous approaches, we can reason about statements of default properties. In addition it makes sense to talk of one default statement being a logical consequence of others or of a set of default statements being inconsistent. Default reasoning, wherein conclusions concerning defeasible properties are drawn in the absence of contrary information, is based directly on, and justified by, the semantic theory. Thus, for example, specificity of defaults is an artifact of the overall theory, and does not need to be "wired" into the approach (as it does, for example, in [Rei80, Poo88, Pea90b] and circumscriptive approaches [McC86, BG89]).

The Approach

A default theory is a set of necessary and default implications. Specifically, a default theory is a set N where N = N⊃ ∪ N⇒; elements of N⊃ are sentences of the form α ⊃ β; elements of N⇒ are sentences of the form α ⇒ β; and α is a conjunction of literals and β is a (possibly negated) literal. N⇒/⊃ is the set N where all occurrences of the symbol ⇒ in formulae are replaced by ⊃. A literal P(t̄) is an atomic formula, where P is a predicate name, and t̄ is a tuple of constants and variables. α ⊃ β is interpreted as "if α then β"; α ⇒ β is interpreted as "if α then, in the normal course of events, β". An example of a default theory is {US(x) ⇒ A(x), A(x) ⊃ P(x), A(x) ⇒ E(x), US(x) ⇒ ¬E(x)}.
Informally, university students are normally adults; adults are persons; adults are normally employed; and university students are normally not employed. C is a set of literals, representing known contingent facts about the domain. For the above example, we might have that C is {US(sue), P(sue)}. The goal is to specify precisely what is to follow by default from C and N. In the above example, we would conclude by default that ¬E(sue) and A(sue).¹

From the semantic theory of N we have that the accessibility relation between possible worlds yields a sequence of successively "less exceptional" sets of possible worlds. Each set represents an equivalence class of worlds with respect to accessibility. The conditional α ⇒ β is true if, in the least set of worlds where there is a world in which α is true, β is true at these worlds also. I will use the notation s_α to denote the least exceptional set of worlds in which α is true at some world in this set.² This means that α ⇒ β is true if α ⊃ β is true at every world in s_α. Hence if α ⇒ β, then there must be a world in s_α where β is true. However, β could be true at a still less exceptional world. I will write this last result as s_β ≤ s_α.

Consider the sentences "ravens are normally black" and "albino ravens are normally not black". These sentences can be represented propositionally as R ⇒ Bl and R ∧ Al ⇒ ¬Bl. Clearly R ∧ Al ⊃ R is also true. From the preceding discussion, we have that since R ⇒ Bl is true, the least worlds in which Bl is true are no more exceptional than the least worlds in which R is true; hence s_Bl ≤ s_R. Similarly we obtain s_¬Bl ≤ s_{R∧Al}, and s_R ≤ s_{R∧Al}. We also know that in the least exceptional (or simplest) worlds in which R is true, R ⊃ Bl must also be true. (That is, in the simplest worlds in which there are ravens, these ravens are black.) Thus at all worlds represented by s_R, R ⊃ Bl is true. Also, in the simplest worlds in which R ∧ Al is true, (R ∧ Al) ⊃ ¬Bl is true.
Since worlds are consistent, a straightforward argument shows that in fact we have s_R < s_{R∧Al}.

Given this general structure on worlds, default reasoning is easily formulated. Following the previous example, if we know only that R, and we assume that the world being modelled, w, is as unexceptional as possible, then it must be that w ∈ s_R. Since R ⊃ Bl is true at every world of s_R, Bl must be true at w also. If however we know only that R ∧ Al is true, then assuming that the world is as unexceptional as possible implies w ∈ s_{R∧Al}. Since R ∧ Al ⊃ ¬Bl is true at all worlds in s_{R∧Al} we can conclude ¬Bl. These considerations are summarised by the following assumption.

Normality: The world being modelled is as unexceptional as consistently possible.

Thus what we know about the world being modelled, w, is contained in C ∪ N⊃. Via the assumption of normality we also incorporate the information in N⇒.

There is another factor that must be taken into account. Consider the default theory {R ⇒ Bl, R ∧ Al ⇒ ¬Bl, R ⇒ F}. (Hence ravens also normally fly.) If C = {R, Al} then the assumption of normality allows us to conclude only ¬Bl by default. Clearly however we also want to be able to use the default R ⇒ F, since there is no reason to suppose that it does not hold: if albinoism also affected the property of flight, then presumably this would have been explicitly stated in the default theory. This is expressed in the following assumption.

Maximum Applicability: A default is applicable unless its use contradicts a constraint in the semantic theory.

¹As will become evident from the semantic theory, the default A(x) ⇒ E(x) is not used (in fact, should not be used), since its antecedent is less specific than that of US(x) ⇒ ¬E(x).
²The case where α ⇒ β is true by virtue of α being necessarily false is handled by a straightforward extension.

DELGRANDE 407
The two assumptions appear natural and intuitive; it is arguably an advantage of the approach that these assumptions must be explicitly made before default inferencing can be carried out.

There is a final factor that must be taken into account. Consider the default theory {Q ⇒ P, R ⇒ ¬P}. Informally, Quakers are normally pacifists while republicans normally are not. If C = {Q, R}, we have two choices: we can use the first default and conclude by default that P is true, or we can use the second and conclude that ¬P is true. This is in contrast to the situation where we have in addition that Quakers are normally republicans, Q ⇒ R, in which case s_R ≤ s_Q, and so we would conclude by default that P only. Thus we may have to allow for conflicting default inferences, or else remain agnostic about a particular truth value.

Given these considerations, I adopt the following approach. First, the information implicit in a set of default and strict conditionals imposes constraints on any model of this set of statements; this information is made explicit in a structure, called the canonical structure. The canonical structure St(N) can be regarded as a "proto-model", that is, as a structure that in some sense represents the set of models of N, yet isn't itself a model. This structure consists of a set of "points" in a partial order. A point is denoted s_i, where i is some arbitrary index or, as we shall use for notation, s_α, where α is some conjunction of literals. A point represents a set of mutually accessible worlds, and the partial order relation represents what is known about accessibility between two such sets. In addition, sets of formulae are associated with individual points; these formulae specify constraints on worlds represented by a point. The notion of validity is defined in this structure and, for conditional and strict implications, is shown to correspond to that of validity in the logic N.
Default reasoning is implemented in this structure by taking the above two assumptions into account, and then by appropriate (classical) reasoning. For the assumption of normality, we locate a point s_C, corresponding to the world being modelled, in the canonical structure. For the assumption of maximum applicability, we appropriately propagate formulae from one point to another in the canonical structure. The notion of an extension, or reasonable set of beliefs that may be held by default, is defined. A key point is that the conditional operator ⇒ does not appear in the canonical structure; thus, the time required for default inferencing is that required for non-default reasoning at a point, together with the overhead of manipulating the canonical structure. Hence (it proves to be the case that) the overall complexity of default reasoning will be determined (except for a polynomial factor) by the overall form of the defaults.

The Approach: Details

The general idea is that we construct a canonical structure St(N) which preserves the information expressed by N. Validity for conditionals is defined in this structure and is shown to correspond to the definition of validity in N; default inferencing in St(N) is shown to correspond to the approach of [Del88]. The approach consists of four steps:

1. Construct the canonical structure St(N) from N.
2. Locate a point s_C in St(N), according to C. This is the assumption of normality.
3. For points s_j ≤ s_C determine which formulae in s_j(∀) may also be used at s_C and under what conditions. This is the assumption of maximum applicability.
4. Determine default properties by (non-default) reasoning at s_C.

In more detail we have: The canonical structure St(N) is the poset [S; ≤] resulting from the construction given below. The elements s_i of S are called points of the canonical structure. Each s_i ∈ S is an ordered pair ⟨s_i(∃), s_i(∀)⟩ of sets of formulae.
If points are interpreted as standing for sets of mutually accessible worlds, then s_i(∀) is a set of formulae that must be true at every such world; s_i(∃) is a set of formulae where each formula must (individually) be true at some world in s_i. Thus α, β ∈ s_i(∃) implies that there is a world in which α is true and an equally unexceptional world in which β is true. For α ∈ s_i(∃), s_i represents the least set of worlds in which there is a world where α is true. s_α is an abbreviation for s_i where α ≡ β and β ∈ s_i(∃). So s_α is the set of worlds containing the least exceptional world in which α is true.

In the following construction, if we discover that s_i ≤ s_j and s_j ≤ s_i we merge the points s_i and s_j. That is, we replace s_i and s_j by the single point ⟨s_i(∃) ∪ s_j(∃), s_i(∀) ∪ s_j(∀)⟩.

Construction:
1. Initialise the structure by: for α ⇒ β ∈ N, s_i ← ⟨{α}, {α ⊃ β}⟩ for distinct s_i.
2. If for γ ∈ s_i(∃), {γ} ∪ s_i(∀) ∪ N⊃ ⊨ β and β ∈ s_j(∃), then assert s_j ≤ s_i.

In Step 2, s_i stands for the least set of worlds in which there is a world where γ is true. s_i(∀) ∪ N⊃ expresses what must be true at every world in s_i; {γ} ∪ s_i(∀) ∪ N⊃ then expresses what must be true at some world in s_i. Consequently, if β must be true at some world in s_i, then the worlds represented by s_i must be no less exceptional than the least worlds in which β is true. This step is used primarily to assert s_β ≤ s_α when α ⇒ β is a default and β occurs as the antecedent of another default.

Thus, if N is {A ⇒ B, A ⇒ C, A ∧ D ⇒ E} then from Step 1 we obtain three points:

s₁ = ⟨{A}, {A ⊃ B}⟩,
s₂ = ⟨{A}, {A ⊃ C}⟩,
s₃ = ⟨{A ∧ D}, {A ∧ D ⊃ E}⟩.

From Step 2, letting γ be A we obtain: A ∈ s₁(∃), {A} ∪ s₁(∀) ⊨ A and A ∈ s₂(∃), and so s₂ ≤ s₁. A similar argument yields s₁ ≤ s₂, and so we merge s₁ and s₂ (retaining, say, s₁), obtaining s₁ = ⟨{A}, {A ⊃ B, A ⊃ C}⟩. Also, since A ∧ D ∈ s₃(∃), {A ∧ D} ∪ s₃(∀) ⊨ A and A ∈ s₁(∃), we obtain that s₁ ≤ s₃.
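As a sanity check, the construction and merging steps can be run mechanically on this example. The sketch below is my own encoding, not the paper's: atoms are strings, antecedents are frozensets, and the entailment test for conjunctions of atoms is implemented by forward chaining over Horn rules.

```python
# A runnable sketch of the canonical-structure construction for
# N = {A => B, A => C, A ^ D => E} (propositional, strict part empty).

def closure(atoms, rules):
    """Deductive closure of a set of atoms under Horn rules (body, head)."""
    facts = set(atoms)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def leq(sj, si, strict):
    """s_j <= s_i: some gamma in s_i(E), together with s_i(A) and the
    strict rules, entails some beta in s_j(E)."""
    return any(beta <= closure(gamma, si["A"] | strict)
               for gamma in si["E"] for beta in sj["E"])

def build(defaults, strict=frozenset()):
    # Step 1: one point per default, <{antecedent}, {antecedent > consequent}>.
    pts = [{"E": {frozenset(a)}, "A": {(frozenset(a), c)}} for a, c in defaults]
    # Merge points that are mutually below one another.
    merged = True
    while merged:
        merged = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if leq(pts[i], pts[j], strict) and leq(pts[j], pts[i], strict):
                    pts[i] = {"E": pts[i]["E"] | pts[j]["E"],
                              "A": pts[i]["A"] | pts[j]["A"]}
                    del pts[j]
                    merged = True
                    break
            if merged:
                break
    # Step 2: record the partial order s_i <= s_j between distinct points.
    order = {(i, j) for i in range(len(pts)) for j in range(len(pts))
             if i != j and leq(pts[i], pts[j], strict)}
    return pts, order

defaults = [({"A"}, "B"), ({"A"}, "C"), ({"A", "D"}, "E")]
points, order = build(defaults)
print(len(points), order)  # s1 and s2 merge; the merged point lies below s3
```

Running this yields two points, with the merged {A} point strictly below the {A ∧ D} point, matching the hand calculation above.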
If A ⇒ D were also a member of N we would obtain that s₁ = s₂ = s₃.

The notion of truth of conditionals (⊨′) is defined for St(N) and shown to correspond to truth in the logic N (⊨_N). Notably, ⊨′ is defined without making reference to the default conditional ⇒.

Definition 1 St(N) ⊨′ α ⇒ β iff
1. N⊃ ∪ s_α(∀) ⊨ α ⊃ β if s_α exists;
2. N⊃ ⊨ α ⊃ β otherwise.

Theorem 1 If St(N) ⊨′ α ⇒ β then N ⊨_N α ⇒ β.

Next, default reasoning can be defined with respect to this structure. First we assume that the world being modelled is as unexceptional as possible; this is the assumption of normality. For this we create a point s_C which is ⟨{⋀ c_i for all c_i ∈ C}, ∅⟩, and locate this point in the structure according to the above construction. So if N is {R ⇒ Bl, R ⇒ F, R ∧ Al ⇒ ¬Bl} and C is {R, Al}, we obtain that s_C = s_{R∧Al}.

For the assumption of maximum applicability we want to include certain formulae from points s_i where s_i ≤ s_C. Again, if N is {R ⇒ Bl, R ⇒ F, R ∧ Al ⇒ ¬Bl} and C is {R, Al} then, while s_C = s_{R∧Al}, we would still want to be able to use R ⊃ F ∈ s_R(∀) at s_C, since it has not been explicitly blocked. However there may be more than one way to so "include" formulae at s_C. For example if we began with N = {Q ⇒ P, R ⇒ ¬P} and C = {Q, R}, then we would be able to add Q ⊃ P to s_C or R ⊃ ¬P to s_C, but not both.

This is accomplished as follows. First we extend C to a maximal set of formulae D by adding formulae first from "nearby" points (where consistent), and then from more "distant" points. This set D determines a default extension, or maximal set of beliefs that may justifiably be held. This in turn leads to two possibilities with respect to default inferencing: if something follows in one extension, then we have a notion of "credulous" default inference; if something follows in all extensions, we have a more conservative notion of default inference.
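The choice between the two ways of extending C can be made concrete. Below is a hedged sketch (my own representation, not the paper's: literals as strings, a "~" prefix for negation, defaults tried as material implications) that enumerates the extensions of {Q ⇒ P, R ⇒ ¬P} with C = {Q, R} by trying each order of applying the defaults.

```python
# Hypothetical sketch: generating extensions for N = {Q => P, R => ~P},
# C = {Q, R}. Each default is a (body, head) material implication and is
# kept only if adding it leaves the derived literals consistent.

from itertools import permutations

def chain(facts, rules):
    """Forward-chain (body, head) rules over literals; heads may be negated."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def consistent(lits):
    """No literal occurs together with its negation."""
    return not any(("~" + l) in lits for l in lits if not l.startswith("~"))

def extensions(c, defaults):
    exts = set()
    for order in permutations(defaults):
        kept = []
        for rule in order:
            if consistent(chain(c, kept + [rule])):
                kept.append(rule)
        exts.add(frozenset(chain(c, kept)))
    return exts

C = {"Q", "R"}
N = [(frozenset({"Q"}), "P"), (frozenset({"R"}), "~P")]
exts = extensions(C, N)
print(len(exts))  # two extensions: one containing P, one containing ~P
```

P is then a credulous default inference (it holds in one extension) but not a general one, since ~P holds in the other; this is exactly the credulous/conservative split described above.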
Crucially, D is determined solely by the model theory of the logic N (as reflected in the canonical structure), along with the assumptions of normality and maximum applicability.

Definition 2 D is a maximal set of formulae obtained by the following procedure:
1. Initially:
(a) D = C ∪ N_⊃.
(b) For every α ∈ s_i(∀) and s_i ∈ St(N), α is flagged as "unseen".
2. For α ∈ s_i(∀), where
(a) s_i ≤ s_C,
(b) α is "unseen", and
(c) for every s_j ≠ s_i where s_i < s_j ≤ s_C, every β ∈ s_j(∀) is "seen",
flag α as "seen" and, if D ∪ {α} is consistent, add α to D.

Note that Step 2c ensures that defaults with more specific antecedents are used over those with more general antecedents. Hence, given C = {R, Al, F} we will use R ∧ Al ⊃ ¬Bl over R ⊃ Bl.

Definition 3 A is a credulous default inference if D ⊨ A for some D as defined above; A is a (general) default inference if D ⊨ A for every such D.

We also obtain:

Theorem 2
1. If A is a credulous default inference then A follows by default in a maximal contingent extension in the approach of [Del88].
2. If A follows as a general default according to the above conditions, then A follows by default in the approach of [Del88].

I conclude this section by presenting several examples to illustrate and perhaps clarify the overall procedure for default reasoning. For simplicity I use propositional formulae only. Consider first the defaults P ⇒ Q and Q ⇒ R. We obtain that s_P is ⟨{P}, {P ⊃ Q}⟩ and s_Q is ⟨{Q}, {Q ⊃ R}⟩. From step 2 of the construction, s_Q ≤ s_P. If C = {P} then s_C = s_P (again by step 2 of the construction). D is {P, P ⊃ Q, Q ⊃ R}, and so Q and R follow as credulous and general default inferences. This example shows that transitivities between defaults follow unless explicitly blocked. If we also had P ⇒ ¬R then clearly R would no longer follow by default.

DELGRANDE 409

Second, consider the defaults R ⇒ Bl, R ⇒ F, R ∧ Al ⇒ ¬Bl: again, ravens are normally black and normally fly, but albino ravens are normally not black.
We obtain that s_R is ⟨{R}, {R ⊃ Bl, R ⊃ F}⟩ and s_{R∧Al} is ⟨{R ∧ Al}, {R ∧ Al ⊃ ¬Bl}⟩. Furthermore, s_R ≤ s_{R∧Al}. If C = {R, Al}, then s_C = s_{R∧Al}. There is only one default extension, in which ¬Bl and F are true. Note that from the construction of D, we use the default R ∧ Al ⇒ ¬Bl over R ⇒ Bl. If on the other hand C = {R}, then we would conclude by default that Bl and F. Finally, if C = {R, ¬F} we would still conclude by default that Bl.

As a final example, if we had the defaults Q ⇒ P and R ⇒ ¬P, together with C = {Q, R}, then we would obtain that s_Q ≤ s_C and s_R ≤ s_C. There would be two default extensions, one in which P was true and one in which ¬P was true. If we also had Q ⇒ R then we would obtain that s_R ≤ s_Q and, with C = {Q, R}, would obtain a single extension in which P was true.

Complexity Considerations

The previous section described four steps in the procedure for determining default inferences. The second part, locating s_C in St(N), is essentially a subcase of the first, constructing St(N), and so need not be considered separately. In all steps, time complexity hinges on two facts: first, that there are no more points in St(N) than there are elements of N; and second, that the complexity of reasoning at a point is determined by the form of the formulae in N_⇒/⊃. In what follows, I will let n = |N|, t(n) be an expression for the time required for determining satisfiability of elements of N_⇒/⊃, and m be the total length of formulae in N_⇒/⊃.

To begin with, constructing or searching the canonical structure is O(n²t(n)). This is because the construction of St(N) depends only on the set of conditionals in N: the number of points is no more than the number of conditionals, and so is O(n). The number of connections between points is O(n²) in the worst case. Note too that while in the worst case manipulating the structure is O(n²), in practice it appears to be roughly linear.
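The ravens computation just described can be run through a much-simplified propositional sketch of Definition 2: consistency is checked by brute force over truth assignments, and the s_i ≤ s_C test is approximated by checking that a default's antecedent atoms hold in C. All names are illustrative, and this sketch ignores the "seen"/"unseen" bookkeeping:

```python
from itertools import product

def holds(lit, v):                    # literal 'X' or '-X' under valuation v
    return not v[lit[1:]] if lit.startswith('-') else v[lit]

def consistent(facts, rules):
    """Brute-force satisfiability of facts (literals) together with
    material conditionals (antecedent_atoms, consequent_literal)."""
    atoms = sorted({l.lstrip('-') for l in facts} |
                   {a for ante, _ in rules for a in ante} |
                   {c.lstrip('-') for _, c in rules})
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if (all(holds(f, v) for f in facts) and
                all(holds(c, v) for ante, c in rules
                    if all(v[a] for a in ante))):
            return True
    return False

def extension(C, defaults):
    """C: set of literals; defaults: (antecedent_atoms, consequent) pairs.
    Add applicable defaults most-specific-first, keeping consistency."""
    below = [(frozenset(a), c) for a, c in defaults
             if all(atom in C for atom in a)]        # crude s_i <= s_C test
    D = []
    for ante, cons in sorted(below, key=lambda p: -len(p[0])):
        if consistent(C, D + [(ante, cons)]):
            D.append((ante, cons))
    return D

# Ravens: R => Bl, R => F, R ^ Al => -Bl, with C = {R, Al}:
D = extension({'R', 'Al'},
              [({'R'}, 'Bl'), ({'R'}, 'F'), ({'R', 'Al'}, '-Bl')])
```

As in the text, the more specific R ∧ Al ⊃ ¬Bl is admitted first, which then blocks R ⊃ Bl, while R ⊃ F survives.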
The construction also makes reference to satisfiability in the underlying (non-default) system, and so the overall complexity is O(n²t(n)). The third step, constructing a default extension, potentially involves searching all points and all formulae at these points, together with checks for consistency, so again complexity is O(n²t(n)). For the last step, reasoning at an extension D, complexity is just O(t(n)). In some situations this may be quite helpful. In particular, if N and C are relatively stable, then default reasoning will generally be O(t(n)); it is only when one of N or C is modified that we must deal with the additional O(n²) "overhead". Finally, note that the simplest interesting case is where default implications are in the form of propositional Horn clauses. Since the satisfiability of propositional Horn clauses is linear in the total length of formulae [DG84], the whole procedure is O(n²m).

Hence if we restrict ourselves to function-free (default and strict) Horn clauses, the complexity of reasoning is just that of non-default reasoning, together with O(n²) "overhead". If we allow functions in the language, however, things are much worse. First, we are no longer guaranteed a finite canonical structure. The following set of formulae, for example, yields a canonical structure with an infinite set of points: P(c), P(x) ⇒ Q(x), Q(x) ⇒ P(f(x)), P(x) ⇒ A, Q(x) ⇒ ¬A.

It may be that the overall procedure can be modified so that the construction of the canonical model (step 1 above) can be combined with the propagation of formulae (step 3) as part of a general proof procedure. That is, it may be possible to modify the procedure so that points are produced "as required"; this, however, remains an area for further work. The procedure can, though, be immediately applied to theories whose Herbrand base is finite.
In the obvious extension of the procedure, where we consider all instances of formulae whose variables are instantiated over all constant terms, complexity is O(n²m²t(n)), where m is the size of the Herbrand base.

However, one further comment is in order here. The time complexity of O(n²t(n)) is for determining what may follow in an arbitrary default extension. That is, the construction in Definition 2 simply generates one of perhaps some number of extensions. A much harder question is determining whether there is some extension in which a desired conclusion follows. In this case the procedure appears to be intractable (unless P = NP), even in the case of propositional Horn clauses. The difficulty arises since we would need to determine maximal (with respect to N_⇒/⊃) consistent subsets of an inconsistent set of Horn formulae. Another question is that of determining whether a conclusion follows in all extensions. This last question appears to effectively require that all extensions be generated, and again appears to be unavoidably intractable. These results then seem to broadly parallel those described in [KS89] for Reiter's default logic. As a brief point of comparison between the two approaches, we do not have to contend with something analogous to semi-normal defaults for "interacting" conditionals; in addition, the approach presented here enforces specificity between defaults "for free".

Discussion

This paper has presented an algorithm for default reasoning from a set of default and strict Horn conditionals, based on the model theory of an extant logic of defaults. A set of strict and defeasible Horn assertions is translated into a particular mathematical structure called the canonical structure. This structure in a precise sense represents the set of all models of these assertions. The structure then directly reflects intuitions explicitly or implicitly contained in the set of defaults.
Default reasoning is correct, in that it is shown to be sound with respect to the approach of [Del88]. Thus the semantic theory sanctions this approach to reasoning about default properties. It is shown that default reasoning may be efficiently carried out in this structure. This is because, first, the size of the canonical structure is bounded by the size of N, and second, the complexity of reasoning at a point is determined by the form of the formulae in N_⇒/⊃. The time complexity of constructing and manipulating the canonical structure is O(n²), where n = |N|, together with the time required for determining satisfiability. Since the formulae that we deal with are restricted to be conditionals involving conjunctions of function-free terms, the complexity of reasoning with a set of formulae at a point is manageable. In the simplest case, the complexity of reasoning at a point is proportional to the total length of all formulae, m, and so the overall complexity is O(n²m).

Acknowledgments

This research was supported in part by the Natural Sciences and Engineering Research Council of Canada grant A0884. The author is a member of the Institute for Robotics and Intelligent Systems (IRIS) and acknowledges the support of the Networks of Centres of Excellence Program of the Government of Canada, and the participation of PRECARN Associates Inc.

References

A.B. Baker and M.L. Ginsberg. A theorem prover for prioritized circumscription. In Proc. IJCAI-89, pages 463-473, 1989.
C. Boutilier. A semantical approach to stable inheritance reasoning. In Proc. IJCAI-89, pages 1134-1139, 1989.
J.P. Delgrande. A first-order conditional logic for prototypical properties. Artificial Intelligence, 33(1):105-130, 1987.
J.P. Delgrande. An approach to default reasoning based on a first-order conditional logic: Revised report. Artificial Intelligence, 36(1):63-90, 1988.
W.F. Dowling and J.H. Gallier.
Linear-time algorithms for testing the satisfiability of propositional Horn formulae. Journal of Logic Programming, 1(3):267-284, 1984.
D.W. Etherington and R. Reiter. On inheritance hierarchies with exceptions. In Proc. AAAI-83, pages 104-108, 1983.
H. Geffner. On the logic of defaults. In Proc. AAAI-88, St. Paul, Minnesota, 1988.
B.A. Haugh. Tractable theories of multiple defeasible inheritance in ordinary nonmonotonic logics. In Proc. AAAI-88, St. Paul, 1988.
J.F. Horty and R.H. Thomason. Mixing strict and defeasible inheritance. In Proc. AAAI-88, pages 427-432, 1988.
J.F. Horty, R.H. Thomason, and D.S. Touretzky. A skeptical theory of inheritance in nonmonotonic semantic networks. In Proc. AAAI-87, 1987.
S. Kraus, D. Lehmann, and M. Magidor. Nonmonotonic reasoning, preferential models and cumulative logics. Artificial Intelligence, 44(1-2), 1990.
H.A. Kautz and B. Selman. Hard problems for simple default logics. In Proc. KR-89, pages 189-197, Toronto, Ont., 1989.
J. McCarthy. Circumscription - a form of non-monotonic reasoning. Artificial Intelligence, 13:27-39, 1980.
J. McCarthy. Applications of circumscription to formalizing common-sense knowledge. Artificial Intelligence, 28:89-116, 1986.
D.V. McDermott and J. Doyle. Non-monotonic logic I. Artificial Intelligence, 13:41-72, 1980.
R.C. Moore. Semantical considerations on nonmonotonic logic. In Proc. IJCAI-83, pages 272-279, Karlsruhe, West Germany, 1983.
J. Pearl. Reasoning under uncertainty. In J.F. Traub, B.J. Grosz, B.W. Lampson, and N.J. Nilsson, editors, Annual Review of Computer Science, volume 4, pages 37-72. Annual Reviews Inc., 1990.
J. Pearl. System Z: A natural ordering of defaults with tractable applications to nonmonotonic reasoning. In Proc. of the Third Conference on Theoretical Aspects of Reasoning About Knowledge, pages 121-135, Pacific Grove, Ca., 1990.
D.L. Poole. A logical framework for default reasoning. Artificial Intelligence, 36(1):27-48, 1988.
R. Reiter.
A logic for default reasoning. Artificial Intelligence, 13:81-132, 1980.
R. Reiter. Nonmonotonic reasoning. In J.F. Traub, B.J. Grosz, B.W. Lampson, and N.J. Nilsson, editors, Annual Review of Computer Science, volume 2, pages 147-186. Annual Reviews Inc., 1987.
B. Selman and H. Kautz. Model-preference default theories. Artificial Intelligence, 45(3):287-322, 1990.
Y. Shoham. A semantical approach to nonmonotonic logics (extended abstract). In Symposium on Logic in Computer Science, pages 275-279, Ithaca, New York, 1987.
D.S. Touretzky, J.F. Horty, and R.H. Thomason. A clash of intuitions: The current state of nonmonotonic multiple inheritance systems. In Proc. IJCAI-87, pages 476-482, Milan, 1987.
Step-logic and the Three-wise-men Problem

J. Elgot-Drapkin
Department of Computer Science and Engineering
College of Engineering and Applied Sciences
Arizona State University
Tempe, AZ 85287-5406
drapkin@enws92.eas.asu.edu

Abstract

The kind of resource limitation that is most evident in commonsense reasoners is the passage of time while the reasoner reasons. There is not necessarily any fixed and final set of consequences with which such a reasoning agent ends up. In formalizing commonsense reasoners, then, one must be able to take into account that time is passing as the reasoner is reasoning. The reasoner can then make use of such information in subsequent deductions. Step-logic is such a formalism. It was developed in [Elgot-Drapkin, 1988] to model the on-going process of deduction. Conclusions are drawn step-by-step. There is no "final" state of reasoning; the emphasis is on intermediate conclusions. In this paper we use step-logic to model the Three-wise-men Problem. Although others have formalized this problem, they have ignored the time aspect that is inherent in the problem: a correct assessment of the situation is made by recognizing that the reasoning process takes time and determining that the other wise men would have concluded such and such by now. This is an important aspect of the problem that needs to be addressed.

Background

Commonsense reasoners have limited reasoning capabilities because they must deal with a world about which they have incomplete knowledge. Traditional logics are not suitable for modeling beliefs of commonsense reasoners because they suffer from the problem of logical omniscience: if an agent has α_1, ..., α_n in its belief set, and if β, a wff of the agent's language, is logically entailed by α_1, ..., α_n, then the agent will also believe β. The literature contains a number of approaches to limited reasoning.
However, the oversimplification of a "final" state of reasoning is maintained; the limitation amounts to a reduced set of consequences, but all consequences are deduced instantaneously. In contrast, we are interested in the ever-changing set of (tentative) conclusions as the reasoning progresses. Konolige [Konolige, 1984] studies agents with fairly arbitrary rules of inference, but ignores the effort involved in actually performing the deductions. Similarly, Levesque [Levesque, 1984] and Fagin and Halpern [Fagin and Halpern, 1988] provide formal treatments of limited reasoning, but again the conclusions are drawn instantaneously, without making the intermediate steps of reasoning explicit. Lakemeyer [Lakemeyer, 1986] extends Levesque's and Fagin and Halpern's approaches to include quantifiers, but again does not address the issue with which we are concerned. Vardi [Vardi, 1986] deals with limitations on omniscience, again without taking into account the intermediate steps of deduction.

*Our thanks to Don Perlis, Kevin Gary, and Laurie Ihrig for helpful comments.

Although these approaches all model limited reasoning, the process is still in terms of the standard mold of static reasoning. We do indeed have a restricted view of what counts as a theorem, but the logic still focuses on the final state of reasoning. The effort involved in actually performing the deductions is not taken into consideration. We contend that the kind of resource limitation that is most evident in commonsense reasoners is the passage of time while the reasoner reasons. There is not necessarily any fixed and final set of consequences with which such a reasoning agent ends up. In a sense, this is a problem of modeling time; see [Allen, 1984, McDermott, 1982]. Yet these treatments deal with reasoning about time, as opposed to reasoning in time.
Reasoning in time refers to the fact that, as the reasoner reasons, time passes, and this passage of time itself must be recognized by the reasoner. Step-logic is proposed as an alternative to the approaches to limited reasoning just discussed, where it is not the final set of conclusions in which one is interested, but rather the ever-changing set of conclusions drawn along the way. That is, step-logic is designed to model reasoning that focuses on the on-going process of deduction; there is no final state of reasoning.

There are many examples of situations in which the effort or time spent making deductions is crucial. Consider Little Nell, who has been tied to the railroad tracks. A train is quickly approaching. Dudley must save her. (See [Haas, 1985, McDermott, 1982].) It is not appropriate for Dudley to spend hours figuring out a plan to save Nell; she will no longer need saving by then. Thus if we are to model Dudley's reasoning, we must have a mechanism that takes into account the passage of time as the agent is reasoning.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

The Three-wise-men Problem (described below) is another example in which the effort involved in making deductions is critical. In this paper we show how step-logic is a useful model for the reasoning involved in this problem. In other formalizations of the Three-wise-men Problem this aspect has been ignored. (See [Konolige, 1984, Kraus and Lehmann, 1987, Konolige, 1990].)

Step-logic

In [Drapkin and Perlis, 1986, Elgot-Drapkin, 1988] we defined a family of eight step-logics, SL_0, SL_1, ..., SL_7, arranged in increasing sophistication, each designed to model the reasoning of a reasoning agent. Each differs in the capabilities that the agent has. In an SL_0 step-logic, for instance, the reasoner has no knowledge of the passage of time as it is reasoning, it cannot introspect on its beliefs, and it is unable to retract former beliefs.
(SL_0 is not very useful for modeling commonsense reasoners.) In an SL_7 step-logic, by contrast, the agent is capable of all three of these aspects that are so critical to commonsense reasoning. Most commonsense reasoners seem to need the full capabilities of an SL_7 step-logic.

A step-logic is characterized by a language, observations, and inference rules. We emphasize that step-logic is deterministic in that at each step i all possible conclusions from one application of the rules of inference applied to the previous steps are drawn (and therefore are among the wffs at step i). However, for real-time effectiveness and cognitive plausibility, at each step we want only a finite number of conclusions to be drawn.

Intuitively, we view an agent as an inference mechanism that may be given external inputs or observations. Inferred wffs are called beliefs; these may include certain observations. Let L be a first-order or propositional language, and let W be the set of wffs of L.

Definition 1 An observation-function is a function OBS : N → P(W), where P(W) is the power set of W, and where for each i ∈ N, the set OBS(i) is finite.

Definition 2 A history is a finite tuple of pairs of finite subsets of W. H is the set of histories.

Definition 3 An inference-function is a function INF : H → P(W), where for each h ∈ H, INF(h) is finite.

Intuitively, a history is a conceivable temporal sequence of belief-set/observation-set pairs. The history is a finite tuple; it represents the temporal sequence up to a certain point in time. The inference-function extends the temporal sequence of belief sets by one more step beyond the history.

Definition 4 An SL_i-theory over a language L is a triple ⟨L, OBS, INF⟩, where L is a first-order or propositional language, OBS is an observation-function, and INF is an inference-function. We use the notation SL_i(OBS, INF) for such a theory (the language L is implicit in the definitions of OBS and INF).
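These definitions have a direct executable reading: a run of such a theory is just the iteration that maps the history so far to the next belief set. A minimal sketch follows; the toy OBS and INF are invented for illustration and are not the functions used later for the wise-men problem:

```python
def run(OBS, INF, steps):
    """Iterate T_{i+1} = INF(history) ∪ OBS(i+1), recording the history of
    (belief set, observation set) pairs."""
    history = []
    for i in range(1, steps + 1):
        obs = frozenset(OBS(i))
        beliefs = frozenset(INF(history)) | obs
        history.append((beliefs, obs))
    return history

# Toy theory: observe p and p->q at step 1; INF does modus ponens over
# ('p', 'q') implication pairs plus unrestricted (Rule-8-style) inheritance.
def OBS(i):
    return {'p', ('p', 'q')} if i == 1 else set()

def INF(history):
    if not history:
        return set()
    prev = history[-1][0]
    derived = {q for w in prev if isinstance(w, tuple) for p, q in [w]
               if p in prev}
    return set(prev) | derived

history = run(OBS, INF, 3)
```

Note the characteristic one-step delay: p and p → q are 1-theorems, but q becomes a theorem only at step 2.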
For more background on step-logic, see [Elgot-Drapkin, 1988, Elgot-Drapkin and Perlis, 1990].

The Problem

We present a variation of this classic problem, which was first introduced to the AI literature by McCarthy in [McCarthy, 1978]. This version best illustrates the type of reasoning that is so characteristic of commonsense reasoners.

A king wishes to know whether his three advisors are as wise as they claim to be. Three chairs are lined up, all facing the same direction, one behind the other. The wise men are instructed to sit down. The wise man in the back (wise man #3) can see the backs of the other two men. The man in the middle (wise man #2) can only see the one wise man in front of him (wise man #1); and the wise man in front (wise man #1) can see neither wise man #3 nor wise man #2. The king informs the wise men that he has three cards, all of which are either black or white, at least one of which is white. He places one card, face up, behind each of the three wise men. Each wise man must determine the color of his own card and announce what it is as soon as he knows. The first to correctly announce the color of his own card will be aptly rewarded. All know that this will happen. The room is silent; then, after several minutes, wise man #1 says "My card is white!".

We assume in this puzzle that the wise men do not lie, that they all have the same reasoning capabilities, and that they can all think at the same speed. We then can postulate that the following reasoning took place. Each wise man knows there is at least one white card. If the cards of wise man #2 and wise man #1 were black, then wise man #3 would have been able to announce immediately that his card was white. They all realize this (they are all truly wise). Since wise man #3 kept silent, either wise man #2's card is white, or wise man #1's is. At this point wise man #2 would be able to determine, if wise man #1's were black, that his card was white. They all realize this.
Since wise man #2 also remains silent, wise man #1 knows his card must be white.

It is clear that it is important to be able to reason in the following manner: if such and such were true at that time, then so and so would have realized it by this time. So, for instance, if wise man #2 is able to determine that wise man #3 would have already been able to figure out that wise man #3's card is white, and wise man #2 has heard nothing, then wise man #2 knows that wise man #3 does not know the color of his card. Step-logic is particularly well-suited to this type of deduction since it focuses on the actual individual deductive steps. Others have studied this problem (e.g. see [Konolige, 1984, Kraus and Lehmann, 1987, Konolige, 1990]) from the perspective of a final state of reasoning, and thus are not able to address this temporal aspect of the problem: assessing what others have been able to conclude so far. Elgot-Drapkin [Elgot-Drapkin, 1991a] provides a solution based on step-logic to a version of this problem in which there are only two men.

ELGOT-DRAPKIN 413

[Figure 1: OBS_w3 for the Three-wise-men Problem. The figure defines OBS_w3(1) as the set of first-order axioms available to wise man #1 (items 1-9 under "Formulation" describe them in prose), and OBS_w3(i) as empty otherwise.]

Formulation

The step-logic used to model the Three-wise-men problem is defined in Figures 1 and 2. The problem is modeled from wise man #1's point of view. The observation-function contains all the axioms that wise man #1 needs to solve the problem, and the inference-function provides the allowable rules of inference. We use an SL_5 theory.
An SL_5 theory gives the reasoner knowledge of its own beliefs as well as knowledge of the passage of time.¹ The language of SL_5 is first-order, having binary predicate symbols K_j and U, and function symbol s. K_j(i, 'α') expresses the fact that "α is known² by agent j at step i". Note that this gives the agent the expressive power to introspect on his own beliefs as well as the beliefs of others. U(i, 'x') expresses the fact that an utterance of x is made at step i.³ s(i) is the successor function (where s^n(0) is used as an abbreviation for s(s(...(s(0))...))). W_i and B_i express the facts that i's card is white and that i's card is black, respectively.

¹For more details on SL_i theories, see [Drapkin and Perlis, 1986, Elgot-Drapkin, 1988].
²Known, believed, or concluded. The distinctions between these (see [Gettier, 1963, Perlis, 1986, Perlis, 1988]) are not addressed here.
³For simplicity, in the remainder of the paper we drop the quotes around the second argument of predicates U and K_j.

In the particular version of step-logic that is used, the formulas that the agent has at step i (the i-theorems) are precisely all those that can be deduced from step i-1 using one application of the applicable rules of inference. As previously stated, the agent is to have only a finite number of theorems (conclusions, beliefs, or simply wffs) at any given step. We write

i: ..., α
i+1: ..., β

to mean that α is an i-theorem and β is an (i+1)-theorem. There is no implicit assumption that α is present at step i+1. That is, wffs are not assumed to be inherited or retained in passing from one step to the next, unless explicitly stated in an inference rule. Rule 8 in Figure 2, however, does provide an unrestricted form of inheritance.⁴

We note several points about the axioms which wise man #1 requires. (Refer to Figure 1.)
Wise man #1 knows the following:

1. Wise man #2 knows (at every step) that wise man #3 uses the rule of modus ponens.
2. Wise man #2 uses the rules of modus ponens and modus tollens.
3. Wise man #2 knows (at every step) that if both my card and his card are black, then wise man #3 would know this fact at step 1.
4. Wise man #2 knows (at every step) that if it's not the case that both my card and his are black, then if mine is black, then his is white.⁵
5. Wise man #2 knows (at every step) that if there's no utterance of W_3 at a given step, then wise man #3 did not know W_3 at the previous step. (Wise man #2 knows (at every step) that there will be an utterance of W_3 the step after wise man #3 has proven that his card is white.)
6. If I don't know about a given utterance, then it has not been made at the previous step.
7. If there's no utterance of W_3 at a given step, then wise man #2 will know this at the next step.⁶

⁴Although many commonsense reasoning problems require former conclusions to be withdrawn (based on new evidence), this particular formulation of the Three-wise-men Problem does not require any conclusions to be retracted. We can thus use an unrestricted form of inheritance.

The inference rules given here correspond to an inference-function, INF_w3. For any given history, INF_w3 returns the set of all immediate consequences of Rules 1-8 applied to the last step in that history.

Rule 1 (observation): from step i infer i+1: α, if α ∈ OBS(i+1).
Rule 2 (modus ponens): from i: α, (α → β) infer i+1: β.
Rule 3 (extended modus ponens): from i: P_1 x̄, ..., P_n x̄, (∀x̄)[(P_1 x̄ ∧ ... ∧ P_n x̄) → Q x̄] infer i+1: Q x̄.
Rule 4 (modus tollens): from i: ¬β, (α → β) infer i+1: ¬α.
Rule 5 (extended modus tollens): from i: ¬Q x̄, (∀x̄)(P x̄ → Q x̄) infer i+1: ¬P x̄.
Rule 6 (introspection): from step i infer i+1: ¬K_1(s^i(0), U(s^{i-1}(0), W_j)), if U(s^{i-1}(0), W_j) is not an i-theorem, for j = 2, 3 and i > 1.
Rule 7 (instantiation): from i: (∀j)K_2(j, α) infer i+1: K_2(s^i(0), α).
Rule 8 (inheritance): from i: α infer i+1: α.

Figure 2: INF_w3 for the Three-wise-men Problem
⁵In other words, if wise man #2 knows that at least one of our cards is white, then my card being black would mean that his is white. Indeed, this axiom gives wise man #2 quite a bit of information, perhaps too much. (He should be able to deduce some of this himself.) This is discussed in more detail in [Elgot-Drapkin, 1988, Elgot-Drapkin, 1991b].

8. If my card is black, then wise man #2 knows this (at every step).
9. If there is no utterance of W_2 at a given step, then wise man #2 doesn't know at the previous step that his card is white. (There would be an utterance of W_2 the step after wise man #2 knows his card is white.)

Note the following concerning the inference rules:

1. Rule 6 is a rule of introspection. Wise man #1 can introspect on what utterances have been made.⁷
2. The rule for extended modus ponens allows an arbitrary number of variables.
3. Rule 7 is a rule of instantiation. If wise man #1 knows that wise man #2 knows α at each step then, in particular, wise man #1 will know at step i+1 that wise man #2 knew α at step i.
4. The rule of inheritance is quite general: everything is inherited from one step to the next.⁸

⁶Interestingly, it is not necessary for wise man #1 to know there was no utterance; wise man #1 only needs to know that wise man #2 will know there was no utterance.
⁷We limit the number of wffs on which the agent can introspect in order to keep the set of beliefs at any given step finite.
⁸For other commonsense reasoning problems, a far more restrictive version of inheritance is necessary.
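Rule 6, the rule that drives the timing argument, can be given a small executable sketch. The nested-tuple encoding of formulas below is my own, not the paper's:

```python
def rule6(i, i_theorems):
    """Negative introspection: if U(s^{i-1}(0), W_j) is not an i-theorem
    (and i > 1), conclude ~K_1(s^i(0), U(s^{i-1}(0), W_j)) at step i+1.
    Formulas are encoded as nested tuples; the encoding is illustrative."""
    new = set()
    for j in (2, 3):
        utter = ('U', i - 1, ('W', j))              # U(s^{i-1}(0), W_j)
        if i > 1 and utter not in i_theorems:
            new.add(('not', ('K', 1, i, utter)))    # ~K_1(s^i(0), U(...))
    return new

# Silence at step 3 becomes introspectible only later: applying the rule to
# the step-4 theorems yields, as 5-theorems, ~K_1(s^4(0), U(s^3(0), W_j)).
concl = rule6(4, frozenset())
```

This reproduces the delay noted in the solution: only at step 5 can wise man #1 prove that he did not know at step 4 of an utterance made at step 3.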
This means that a rule appears to be operating on a step other than the previous one; the wffs involved have, in fact, actually been inherited to the appropriate step. In step 1 all the initial axioms (OBSw,(l)) have been inferred through the use of Rule l.g Nothing of interest is inferred in steps 2 through 4. In step 5, wise man #l is able to negatively introspect and determine that no utterance of W3 was made at step 3. Note the time delay: wise man #l is able to prove at step 5 that he did not know at step 4 of an utterance made at step 3.1° The remaining wffs shown in step 5 were all inferred through the use of Rule 7, the rule of instantiation. Wise man #l needs to know that wise man #2 knows these particular facts at step 4. The reasoning continues from step to step. Note that at step 11, wise man #l has been able to deduce that wise man #2 knows that if wise man #l’s card is black, then his is white. From this step on, we essentially have the Two-wise-men problem. (See [Elgot-Drapkin, 1991al.) In step 17 wise man #l is finally able to deduce that his card is white. We see that step-logic is a useful vehicle for formulating and solving a problem of this kind in which the time that something occurs is important. Wise man #l does indeed determine “if wise man #2 or wise man #3 knew the color of his card, he would have announced it by now.” Wise man #l then reasons backwards from here to determine that his card must not be black, and hence must be white. Several points of contrast can be drawn between this version and the two-wise-men version. 1. In the two-wise-men version, wise man #l needs only to know about a single rule of inference used by wise man #2. In this version wise man #l needs to know several rules used by wise man #2: modus ponens, ex- tended modus ponens, and modus tolens. Because wise man #l reasons within first-order logic, these three rules required the use of six axioms. 
2. In the two-wise-men version, it is sufficient for wise man #1 to know that wise man #2 has certain beliefs at step 1. In the three-wise-men version, this is not sufficient: wise man #1 must know that wise man #2 always holds these beliefs.
3. What wise man #2 needs to know about wise man #3 is analogous to what wise man #1 needs to know about wise man #2 in the two-wise-men version. So, for instance, wise man #2 must know that wise man #3 uses the rule of modus ponens (and this is the only rule of wise man #3's about which wise man #2 must know). Also, wise man #2 needs only to know that wise man #3 has certain beliefs at step 1.

⁹To save space we have not repeated them in the figure. See Figure 1 for the individual axioms.
¹⁰For a detailed description of this phenomenon, see [Elgot-Drapkin, 1988].

Many formulations of the Three-wise-men problem have involved the use of common knowledge or common belief (see [Konolige, 1984] and [Kraus and Lehmann, 1987] in particular). For instance, a possible axiom might be C(W_1 ∨ W_2 ∨ W_3): it is common knowledge that at least one card is white. Adding the common knowledge concept here introduces unnecessary complications due, to a large degree, to the fact that the problem is modeled from wise man #1's point of view, rather than using a metalanguage that describes the reasoning of all three (as [Konolige, 1984] and [Kraus and Lehmann, 1987] have both done). This is more in the spirit of step-logics, where the idea is to allow the reasoner itself enough power (with no outside "oracle" intervention) to solve the problem. Thus we model the agent directly, rather than using a metatheory as a model.

Conclusions

We have shown that step-logic is a powerful formalism for modeling the on-going process of deduction. There is no final state of reasoning; it is the intermediate steps in the reasoning process that are of importance. We have given a solution using step-logic to the Three-wise-men problem.
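The timing argument at the heart of that solution can be recapped in a toy round-based simulation of the puzzle itself. This is an illustration of the puzzle's logic only, not of the step-logic derivation: one "round" here compresses the several deduction steps of Figure 3, and the encoding is invented for this sketch:

```python
def solve(cards):
    """cards[i] is 'W' or 'B' for wise man i (0 = front).  Wise man 2 sees
    cards[0:2], wise man 1 sees cards[0:1], wise man 0 sees nothing.
    Returns, per man, the round in which he announces 'white' (or None)."""
    announced = [None, None, None]
    for rnd in range(1, 4):
        # #3 (index 2): if both cards he sees are black, his must be white.
        if announced[2] is None and all(c == 'B' for c in cards[:2]):
            announced[2] = rnd
        # #2 (index 1): #3's silence means #1's or #2's card is white.
        if (announced[1] is None and announced[2] is None and rnd > 1
                and cards[0] == 'B'):
            announced[1] = rnd
        # #1 (index 0): continued silence from both others forces 'white'.
        if (announced[0] is None and announced[2] is None
                and announced[1] is None and rnd > 2):
            announced[0] = rnd
    return announced

result = solve(['W', 'W', 'W'])   # only wise man #1 announces, in round 3
```

With all three cards white, only wise man #1 (index 0) ever announces, and only after two rounds of silence, mirroring the "by now" reasoning above; with the two front cards black, wise man #3 announces immediately.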
Although others have formalized this problem, they have ignored the time aspect that we feel is so critical. In order to correctly assess the situation, one must be able to recognize that the reasoning process itself takes time to complete. Before wise man #1 can deduce that his card is white, he must know that wise men #2 and #3 would have deduced by now the color of their cards.

References

[Allen, 1984] J. Allen. Towards a general theory of action and time. Artificial Intelligence, 23:123-154, 1984.
[Drapkin and Perlis, 1986] J. Drapkin and D. Perlis. Step-logics: An alternative approach to limited reasoning. In Proceedings of the European Conf. on Artificial Intelligence, pages 160-163, 1986. Brighton, England.
[Elgot-Drapkin and Perlis, 1990] J. Elgot-Drapkin and D. Perlis. Reasoning situated in time I: Basic concepts. Journal of Experimental and Theoretical Artificial Intelligence, 2(1):75-98, 1990.
[Elgot-Drapkin, 1988] J. Elgot-Drapkin. Step-logic: Reasoning Situated in Time. PhD thesis, Department of Computer Science, University of Maryland, College Park, Maryland, 1988.
[Elgot-Drapkin, 1991a] J. Elgot-Drapkin. A real-time solution to the wise-men problem. In Proceedings of the AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, 1991. Stanford, CA.
[Elgot-Drapkin, 1991b] J. Elgot-Drapkin. Reasoning situated in time II: The three-wise-men problem. Forthcoming, 1991.
416 NONMONOTONIC REASONING

[Figure 3: Solution to the Three-wise-men Problem — a table listing, for steps 0 through 17, the wffs wise man #1 deduces at each step, together with the rules (R1-R7) and premises used to derive them.]

[Fagin and Halpern, 1988] R. Fagin and J. Y. Halpern. Belief, awareness, and limited reasoning. Artificial Intelligence, 34(1):39-76, 1988.
[Gettier, 1963] E. Gettier. Is justified true belief knowledge? Analysis, 23:121-123, 1963.
[Ginsberg, 1991] M. Ginsberg. The computational value of nonmonotonic reasoning. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, April 1991.
[Haas, 1985] A. Haas. Possible events, actual events, and robots. Computational Intelligence, 1(2):59-70, 1985.
[Konolige, 1984] K. Konolige. Belief and incompleteness. Technical Report 319, SRI International, 1984.
[Konolige, 1990] K. Konolige. Explanatory belief ascription. In R. Parikh, editor, Theoretical Aspects of Reasoning about Knowledge: Proceedings of the Third Conference, pages 85-96. Morgan Kaufmann, 1990. Pacific Grove, CA.
[Kraus and Lehmann, 1987] S. Kraus and D. Lehmann. Knowledge, belief and time.
Technical Report 87-4, Department of Computer Science, Hebrew University, Jerusalem 91904, Israel, April 1987.
[Lakemeyer, 1986] G. Lakemeyer. Steps towards a first-order logic of explicit and implicit belief. In J. Halpern, editor, Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge, pages 325-340. Morgan Kaufmann, 1986. Monterey, CA.
[Levesque, 1984] H. Levesque. A logic of implicit and explicit belief. In Proceedings of the 3rd National Conf. on Artificial Intelligence, pages 198-202, 1984. Austin, TX.
[McCarthy, 1978] J. McCarthy. Formalization of two puzzles involving knowledge. Unpublished note, Stanford University, 1978.
[McDermott, 1982] D. McDermott. A temporal logic for reasoning about processes and plans. Cognitive Science, 6:101-155, 1982.
[Perlis, 1986] D. Perlis. On the consistency of commonsense reasoning. Computational Intelligence, 2:180-190, 1986.
[Perlis, 1988] D. Perlis. Languages with self reference II: Knowledge, belief, and modality. Artificial Intelligence, 34:179-212, 1988.
[Vardi, 1986] M. Vardi. On epistemic logic and logical omniscience. In J. Halpern, editor, Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge, pages 293-305. Morgan Kaufmann, 1986. Monterey, CA.

ELGOT-DRAPKIN 417
System-Z+: A Formalism for Reasoning with Variable-Strength Defaults

Moisés Goldszmidt† and Judea Pearl
<moises@cs.ucla.edu> <judea@cs.ucla.edu>
Cognitive Systems Laboratory, Computer Science Department, University of California, Los Angeles, CA 90024

Abstract

We develop a formalism for reasoning with defaults that are expressed with different levels of firmness. Necessary and sufficient conditions for consistency are established, and a unique ranking of the rules is found, called Z+, which renders models as normal as possible subject to the consistency conditions. We provide the necessary machinery for testing consistency, computing the Z+ ranking and drawing the set of plausible conclusions it entails.

1 Introduction: Not All Defaults Were Created Equal

Regardless of how we choose to interpret default statements, it is generally acknowledged that some defaults are stated with greater firmness than others. For example, the action-response default "if Fred is shot with a loaded gun Fred is dead" is issued with a greater conviction than persistence defaults of the type "If Fred is alive at time t, he is alive at t + 1." Moreover, the degree of conviction in this last statement should clearly depend on whether t is measured in years or in seconds. In diagnosis applications, likewise, the analyst may feel strongly that failures are more likely to occur in one type of device (e.g., multipliers) than in another (e.g., adders). A language must be devised for expressing this valuable knowledge. Numerical probabilities or degrees of certainty have been suggested for this purpose, but if one is not concerned with the full precision provided by numerical calculi, an intermediate qualitative language might be more suitable.

Priorities among defaults have been proposed in many non-monotonic reasoning systems. For example, given a set of conflicting defaults, prioritized circumscription ([Lifschitz, 1988]) permits the user to identify which statement should override the other.
*This work was supported in part by National Science Foundation grant #IRI-88-21444 and State of California MICRO 90-127.
†Supported by an IBM graduate fellowship 1990-91.

The statement "penguins do not fly", for instance, can be given a higher priority over "birds fly", in order to enforce preferences toward the more specific classes. In certain systems, specificity preferences can be extracted automatically from the database itself (e.g., [Geffner, 1989], [Kraus et al., 1990]), when such information is available, say from the statement "all penguins are birds". However, certain priorities are not specificity based. For example, to reflect our intuitions that religious beliefs are stronger than political affiliations, we would like the default "typically Quakers are pacifists" to override the default "typically Republicans are not pacifists", when the two are found to conflict with one another (say when Nixon is found to be a Quaker and a member of the Republican party). To resolve such a conflict we need to encode these priorities on a rule-by-rule basis.

This paper proposes and analyzes a formalism to include priority information in the form of integers assigned to default rules, each integer signifying the degree of firmness with which the corresponding rule is stated or, alternatively, the degree of surprise (or abnormality) associated with finding the rule violated. These integers may encode linguistic quantifiers such as "typical", "highly typical", "extremely typical", etc. They can also be viewed as powers of infinitesimals in the probabilistic interpretation of defaults, in the spirit of ε-semantics [Pearl, 1988], ordinal conditional functions [Spohn, 1987], and Kraus et al. [1990].
Our formalism takes after, and extends, system-Z [Pearl, 1990], which proposes a conditional-preferential interpretation of defaults φ → ψ as saying that ψ holds in all most preferred models of φ (see [Shoham, 1987], [Kraus et al., 1990], [Geffner, 1989]), but permits only one level of firmness for all defaults.

The paper is organized as follows: Section 2 introduces the concept of ranking functions on models and establishes the necessary and sufficient conditions for the existence of admissible rankings. Section 3 is concerned with the precise characterization of a privileged ranking κ+ on models and its relation with the ranking Z+ on rules. Its main properties, minimality and uniqueness, and the procedures necessary to compute the ranking and its set of plausible conclusions are presented. Section 4 provides some examples which illustrate the use of the Z+-ranking to express belief strength, to enforce priorities among defaults, and how specificity relations are maintained by system-Z+. Finally, Section 5 discusses and evaluates the main results. For reasons of space, all proofs are to be found in the technical report [Goldszmidt and Pearl, 1991].

GOLDSZMIDT & PEARL 399
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

2 Admissible Rankings

We consider a set of rules Δ = {φ_i →(δ_i) ψ_i} where φ_i and ψ_i are propositional formulas over a finite alphabet of literals, "→" denotes a new connective (to be given a default interpretation later on), and δ_i is a non-negative integer that measures the relative strength of the rule¹. A truth valuation of the literals in the language will be called a model. A model M is said to verify a rule φ → ψ if M ⊨ φ ∧ ψ, to falsify φ → ψ if M ⊨ φ ∧ ¬ψ, and to satisfy φ → ψ if M ⊨ φ ⊃ ψ.

Definition 1 A ranking function is an assignment of non-negative integers to the models of the language.
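The verify/falsify/satisfy trichotomy above can be made concrete by treating models as truth assignments over a finite set of atoms. The following sketch is our own illustration (the helper names are not from the paper); formulas are encoded as Python predicates over assignments.

```python
from itertools import product

def models(atoms):
    """Enumerate all truth assignments over the given atoms."""
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def verifies(m, rule):
    """M verifies phi -> psi  iff  M |= phi AND psi."""
    phi, psi, _ = rule
    return phi(m) and psi(m)

def falsifies(m, rule):
    """M falsifies phi -> psi  iff  M |= phi AND NOT psi."""
    phi, psi, _ = rule
    return phi(m) and not psi(m)

def satisfies(m, rule):
    """M satisfies phi -> psi  iff  M |= phi IMPLIES psi (material)."""
    phi, psi, _ = rule
    return (not phi(m)) or psi(m)

# "birds fly" with strength delta = 1, over atoms b (bird) and f (flies)
birds_fly = (lambda m: m["b"], lambda m: m["f"], 1)
```

Note that every model either verifies, falsifies, or vacuously satisfies a rule; falsifying models are exactly those that fail to satisfy it.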
A ranking function κ is said to be admissible relative to Δ if it satisfies

min{κ(M) : M ⊨ φ_i ∧ ψ_i} + δ_i < min{κ(M) : M ⊨ φ_i ∧ ¬ψ_i}   (1)

for every rule φ_i →(δ_i) ψ_i ∈ Δ. A model M+ is said to be a characteristic model for rule φ → ψ relative to ranking κ if κ(M+) = min{κ(M) : M ⊨ φ ∧ ψ}. Equivalently, among the lowest ranked models satisfying the antecedent φ_i, a rule φ_i →(δ_i) ψ_i forces any model satisfying ¬ψ_i to rank at least δ_i units higher than those satisfying ψ_i. This echoes the usual interpretation of defaults ([Shoham, 1987]) according to which ψ_i holds in all minimal models satisfying φ_i. In our case minimality is reflected in having the lowest possible ranking. The new parameter δ_i can be interpreted as the minimal cost or penalty charged to models violating rule φ_i →(δ_i) ψ_i. Along the same vein we can define consequence relations as follows:

Definition 2 Given a set Δ, φ |~ σ is in the consequence relation defined by an admissible ranking κ iff every κ-minimal model for φ is also a model for σ.

The next couple of definitions and Theorem 1 characterize and provide a decision procedure for testing the consistency of a set Δ, namely its ability to accommodate at least one admissible ranking.

Definition 3 A rule φ → ψ is tolerated by Δ iff there exists a model M such that M verifies φ → ψ and satisfies all the sentences in Δ.

Definition 4 A set Δ is said to be consistent if there exists an admissible ranking κ for Δ.

¹Whenever δ is not relevant we will simply write φ → ψ to identify a rule.

Theorem 1 A set Δ is consistent iff there exists a tolerated rule in every nonempty subset of Δ.

Fortunately it is not necessary to test tolerance in every subset of Δ. A procedure for deciding consistency need only repeatedly remove tolerated sentences from Δ until Δ becomes empty. If at any point a tolerated sentence cannot be found, Δ is inconsistent.
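The tolerance-based consistency procedure just described can be prototyped by brute-force model enumeration. This is our own toy encoding (exponential in the number of atoms, unlike the paper's satisfiability-based formulation); rules are pairs of Python predicates over truth assignments.

```python
from itertools import product

def models(atoms):
    for vals in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, vals))

def tolerated(rule, delta, atoms):
    """A rule phi -> psi is tolerated by delta iff some model verifies
    it (|= phi and psi) while satisfying every rule in delta."""
    phi, psi = rule
    return any(
        phi(m) and psi(m)
        and all((not p(m)) or q(m) for p, q in delta)
        for m in models(atoms)
    )

def consistent(delta, atoms):
    """Repeatedly remove tolerated rules; delta is consistent iff the
    set can be emptied this way (Theorem 1). Removing all tolerated
    rules at once is safe: tolerance by a set implies tolerance by
    any of its subsets."""
    remaining = list(delta)
    while remaining:
        found = [r for r in remaining if tolerated(r, remaining, atoms)]
        if not found:
            return False
        for r in found:
            remaining.remove(r)
    return True

# Penguin triangle: b -> f, p -> b, p -> ~f  (consistent)
atoms = ["b", "f", "p"]
delta = [
    (lambda m: m["b"], lambda m: m["f"]),
    (lambda m: m["p"], lambda m: m["b"]),
    (lambda m: m["p"], lambda m: not m["f"]),
]
```

On the penguin triangle, the first pass removes "birds fly" (witness: a flying non-penguin bird) and the second pass empties the set, so the database is declared consistent.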
The proof of the correctness of this procedure, along with the proofs of Theorem 1 and Corollary 1, can be found in [Goldszmidt and Pearl, 1991].

Corollary 1 Deciding the consistency of a set Δ requires at most |Δ|² propositional satisfiability tests.

3 Plausible Conclusions and Minimal Rankings

So far we were concerned with the conditions for consistency, and we have seen that these conditions make no reference to the cost δ associated with the rules. It is reassuring to know that once a database is consistent for one set of cost assignments, it will be consistent with respect to any such assignment, which means that the rule author has the freedom to modify the costs without fear of forming an inconsistent database. Our main aim, however, is to draw plausible conclusions from the database, and this calls for further examination of Def. 2.

According to Def. 2, each ranking would give rise to its own consequence relation. The requirement that these rankings be admissible is too loose, since many vastly different rankings are capable of satisfying the constraints. A straightforward way of standardizing the conclusion set would be to require the conditions of Def. 2 to hold in all admissible rankings. This leads to an entailment relation called ε-semantics in [Pearl, 1989], 0-entailment in [Pearl, 1990], and p-entailment in [Kraus et al., 1990], which is recognized as being too conservative. A more reasonable approach would be to select a distinguished admissible ranking which best reflects the spirit of default reasoning. If a model with lower κ reflects a more normal world, it is reasonable that we attempt to assign to each world the lowest possible κ permitted by the constraints. Such an attempt can be interpreted as a tendency to believe that, unless forced otherwise, each world is as normal as possible.
The question we need to answer is whether making one world as normal as possible would not force other worlds to become more exceptional than otherwise. This would render the set of minimal rankings non-unique and the entailment conditions rather complex, reminiscent of multiple extensions in default logic ([Reiter, 1980]). Remarkably, we will show that there is a unique minimal ranking, i.e., that lowering the ranking of one world does not come at the expense of another. Moreover, we provide an effective procedure for computing this minimal ranking².

²Such a uniqueness condition was previously shown to hold for uniform databases, in which all rules are assigned δ = 0 ([Pearl, 1990]). The consequence relation emerging from the unique preferred ranking of uniform databases was called 1-entailment in [Pearl, 1990] and was shown to be equivalent to Lehmann's [1989] rational closure ([Goldszmidt and Pearl, 1990]).

Definition 5 Let κ+ be a ranking function on a consistent set Δ, such that κ+(M) = 0 if M does not falsify any rule in Δ, and otherwise

κ+(M) = max{Z+(r_i) : M ⊨ φ_i ∧ ¬ψ_i} + 1   (2)

where

Z+(r_i) = min{κ+(M) : M ⊨ φ_i ∧ ψ_i} + δ_i   (3)

Note that the apparent circularity between κ+ and Z+ is benign. Both functions can be computed recursively in an interleaved fashion. Each time we assign a κ+ to a model according to Eq. 2, it permits us to assign a Z+ to some rules in Eq. 3, and vice versa. This can be illustrated by tracing the first few steps: Given that Δ is consistent, there must exist at least one rule r′ tolerated by Δ, i.e., at least one model M′ must satisfy Δ and verify r′. By Def. 5 we can set κ+(M′) = 0 for all those models, and for all such rules set Z+(r′) = δ′ in accordance with Eq. 3. The κ+ of models falsifying these rules can now be computed using Eq. 2, and so on. The details of this recursive assignment can be found in Procedure Z-rank below.

Another view of Eqs. 2 and 3 can be obtained by substituting Eq. 3 into Eq. 2. Define V[M] to be the set of rules verified by model M, and F[M] the set of rules falsified by model M; then

κ+(M) = max_{r_i ∈ F[M]} [ min_{M′ : r_i ∈ V[M′]} κ+(M′) + δ_i ] + 1   (4)

Eq. 4 illustrates that the value of κ+(M) is set just above the value of the characteristic model of the rules that M violates, thus "pushing down" the ranking of models to be as normal as possible. The reason that the function Z+ is introduced in Def. 5 is that it provides an economical and convenient way of storing and manipulating the ranking κ+. The amount of space required by the Z+-ranking is linear in the number of default rules in the database, and once Z+ is known, the ranking κ+(M) for any M can be obtained from Eq. 2 in at most |Δ| steps. To show that any function κ+ satisfying Eqs. 2 and 3 is admissible, we re-write the conditions for admissibility (Eq. 1) as

Z+(r_i) < min{κ+(M) : M ⊨ φ_i ∧ ¬ψ_i}

Since κ+(M) = max{Z+(r_i) : M ⊨ φ_i ∧ ¬ψ_i} + 1, it follows that κ+ is indeed admissible.

The following is a step-by-step effective procedure for computing Z+:

Procedure Z-rank
Input: A consistent set Δ.
Output: Z+-ranking on rules.
1. Let Δ₀ be the set of rules tolerated by Δ, and let RZ+ be an empty set.
2. For each rule r_i : φ_i →(δ_i) ψ_i ∈ Δ₀ do: set Z(r_i) = δ_i; RZ+ = RZ+ ∪ {r_i}.
3. While RZ+ ≠ Δ do:
(a) Let Ω be the set of models M such that M falsifies rules only in RZ+, and verifies at least one rule outside of RZ+.
(b) For each M compute:
κ(M) = max{Z(r_i) : M ⊨ φ_i ∧ ¬ψ_i} + 1   (5)
(c) Let M* be the model in Ω with minimum κ. For each rule r_i : φ_i →(δ_i) ψ_i ∉ RZ+ that M* verifies do:
Z(r_i) = κ(M*) + δ_i   (6)
RZ+ = RZ+ ∪ {r_i}.
End Procedure

Theorem 2 The function Z computed by Procedure Z-rank complies with Def. 5, i.e., Z = Z+.
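Procedure Z-rank can be prototyped directly over enumerated models. The sketch below is our own brute-force encoding (exponential in the number of atoms, and it assumes a consistent input); rules are triples (antecedent, consequent, delta) of predicates over truth assignments, and each iteration implements steps 3(a)-(c), with the first pass coinciding with steps 1-2 (tolerated rules get κ = 0, hence Z = δ).

```python
from itertools import product

def models(atoms):
    for vals in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, vals))

def z_rank(rules, atoms):
    """rules: list of (phi, psi, delta); returns {rule index: Z+ value}.
    Assumes the rule set is consistent."""
    Z = {}
    worlds = list(models(atoms))
    while len(Z) < len(rules):
        best = None  # (kappa, verified-rule indices) of the minimal model
        for m in worlds:
            falsified = [i for i, (p, q, _) in enumerate(rules)
                         if p(m) and not q(m)]
            verified = [i for i, (p, q, _) in enumerate(rules)
                        if p(m) and q(m)]
            # Step 3(a): M may falsify only already-ranked rules and must
            # verify at least one unranked rule.
            if all(i in Z for i in falsified) and any(i not in Z for i in verified):
                # Eq. 5: kappa(M) = max Z over falsified rules, plus 1
                kappa = max((Z[i] + 1 for i in falsified), default=0)
                if best is None or kappa < best[0]:
                    best = (kappa, verified)
        kappa, verified = best
        for i in verified:              # Step 3(c), Eq. 6
            if i not in Z:
                Z[i] = kappa + rules[i][2]
    return Z

# Example 1 below: Delta_P = {b ->(1) f, p ->(1) b, p ->(1) ~f}
rules = [
    (lambda m: m["b"], lambda m: m["f"], 1),
    (lambda m: m["p"], lambda m: m["b"], 1),
    (lambda m: m["p"], lambda m: not m["f"], 1),
]
```

With all deltas equal to 1, this yields Z+ = 1 for "birds fly" and Z+ = δ₁ + δ + 1 = 3 for the two penguin rules, matching the computation in Example 1.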
We turn our attention to the main results of this section, namely the minimality and uniqueness of κ+:

Definition 6 A ranking function κ is said to be minimal if every other admissible ranking κ′ satisfies κ′(M) > κ(M) for at least one model M.

Definition 7 An admissible ranking κ is said to be compact if, for every M′, any ranking κ′ satisfying
κ′(M) = κ(M) for M ≠ M′
κ′(M) < κ(M) for M = M′
is not admissible.

Theorem 3 (Main.) Every consistent Δ has a unique compact ranking given by κ+ (see Def. 5).

Corollary 2 (Main.) Every consistent Δ has a unique minimal ranking given by κ+ (see Def. 5).

As mentioned before, once the Z+ ranking on rules is found, the κ+ of any given model can be readily computed using Eq. 2. Moreover, the Z+ ranking also provides effective means for deriving new conclusions from Δ: To test whether σ is a plausible conclusion of φ³, we need to compare the minimal κ+(M+) such that M+ ⊨ φ ∧ σ against the minimal κ+(M−) such that M− ⊨ φ ∧ ¬σ (see Def. 2). Fortunately this minimization does not require an enumerative search on models; it can be systemized using the ordering imposed by Z+. Let M be a witness for φ |~ σ with respect to a set Δ′ if M ⊨ φ ∧ σ and M satisfies Δ′. We start by testing whether there is a witness M for φ |~ σ with respect to the set Δ. If one is found, then κ+(M) must be 0: M does not violate any rule in Δ (see Def. 5). If no witness is found, we remove from Δ all rules r′ such that Z+(r′) is minimal, and call the remaining set Δ′. We start a new iteration by testing the existence of a witness M′ (for φ |~ σ), this time with respect to Δ′. If M′ is found, κ+(M′) must be Z+(r′) + 1, since M′ must violate a rule removed

³I.e., whether φ |~ σ is in the consequence relation defined by κ+.
in the previous iteration. If no witness is found, we remove the rules r″ with minimal Z+(r″), and so on.

The question of whether φ |~ σ, φ |~ ¬σ, or neither is in the consequence relation defined by κ+ is reduced to whether we find a witness for φ |~ σ before we find a witness for φ |~ ¬σ, the completely symmetrical case, or whether these witnesses are found in the same iteration. The steps just described are formalized in Procedure Z+-consequences below, where cases 3.(a)-3.(c) correspond to φ |~ σ, φ |~ ¬σ, or neither. Case 3.(d) selects the rules r with minimal Z+(r) and modifies the current set Δ for the next iteration (in case no witness is found). Note that each iteration (i.e., the test of whether a witness for φ |~ σ with respect to some subset of Δ exists) involves a satisfiability test for Δ, and there can be at most |Δ| iterations before a witness is found⁴. Therefore, the complexity of Procedure Z+-consequences is bounded by |Δ| propositional satisfiability tests in the worst case.

Procedure Z+-consequences
Input: A consistent set Δ, the function Z+ on Δ, and a pair of consistent formulas φ and σ.
Output: answer YES/NO/AMBIGUOUS depending on whether φ |~ σ, φ |~ ¬σ, or neither.
1. TEST1: whether there is a model M such that M ⊨ φ ∧ σ and M satisfies Δ.
2. TEST2: whether there is a model M such that M ⊨ φ ∧ ¬σ and M satisfies Δ.
3. CASES indexed by the results from TEST1-TEST2:
(a) IF YES-NO then return(φ |~ σ)
(b) IF NO-YES then return(φ |~ ¬σ)
(c) IF YES-YES then return(AMBIGUOUS)
(d) IF NO-NO then let MIN_Z be the set of rules in Δ with minimum Z+. Set Δ′ = Δ − MIN_Z. Set Δ = Δ′ and goto Step 1.⁵
End Procedure

It is natural to define the strength with which Δ endorses the validity of φ |~ σ as the difference between the minimal κ+(M+) such that M+ ⊨ φ ∧ σ and the minimal κ+(M−) such that M− ⊨ φ ∧ ¬σ. Procedure Z+-consequences can easily be modified to return this value, by simply computing the difference between the Z+ levels at which each of the two witnesses is found.
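Procedure Z+-consequences reduces query answering to iterated witness searches over progressively smaller rule sets. The following is our own brute-force sketch (model enumeration instead of satisfiability tests, and it assumes φ and σ are each satisfiable); rules are pairs of predicates and Z maps rule indices to Z+ values.

```python
from itertools import product

def models(atoms):
    for vals in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, vals))

def consequence(rules, Z, atoms, phi, sigma):
    """Returns 'YES' (phi |~ sigma), 'NO' (phi |~ ~sigma), or 'AMBIGUOUS'."""
    active = list(range(len(rules)))
    while True:
        def witness(goal):
            # A model of phi & goal that satisfies every active rule.
            return any(
                phi(m) and goal(m)
                and all((not rules[i][0](m)) or rules[i][1](m) for i in active)
                for m in models(atoms)
            )
        w_pos = witness(sigma)                    # TEST1
        w_neg = witness(lambda m: not sigma(m))   # TEST2
        if w_pos and not w_neg:
            return "YES"
        if w_neg and not w_pos:
            return "NO"
        if w_pos and w_neg:
            return "AMBIGUOUS"
        # Case 3.(d): drop the rules with minimal Z+ and iterate.
        z_min = min(Z[i] for i in active)
        active = [i for i in active if Z[i] > z_min]

# Penguin example: query  p & b |~ ~f,  using the Z+ values from
# Example 1 with all deltas = 1.
rules = [
    (lambda m: m["b"], lambda m: m["f"]),
    (lambda m: m["p"], lambda m: m["b"]),
    (lambda m: m["p"], lambda m: not m["f"]),
]
Z = {0: 1, 1: 3, 2: 3}
```

On the penguin query, the first iteration finds no witness for either answer (both sides violate some rule), "birds fly" is dropped as the minimal-Z+ rule, and the second iteration finds a witness only for ¬f, so the query succeeds.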
⁴In each iteration the size of Δ decreases by at least one, since at least one rule is removed.
⁵Note that since we are requiring that both φ and σ be consistent, Δ′ cannot be empty.

4 Examples

The following examples illustrate properties of the κ+-ranking and the use of δ to impose priorities among defaults. Example 1 shows how specificity-based preferences are established and maintained by the κ+-ranking, freeing the rule-encoder from such considerations. A general formalization of this behavior is given in the next section (Theorem 4). In the second example, the priorities δ are used to establish preferences when specificity relations are not available. Example 3 constitutes a combination of the previous two.

Example 1: Specificity. Consider Δ_P = {b →(δ₁) f, p →(δ₂) b, p →(δ₃) ¬f}, which stands for r₁: "birds fly", r₂: "penguins are birds", and r₃: "penguins don't fly". The Z+-ranking is computed as follows: Since r₁ is tolerated by Δ_P, Z+(r₁) = δ₁. Any κ+-minimal model verifying r₂ and r₃ must violate r₁; therefore, following Procedure Z-rank, Z+(r₂) = δ₁ + δ₂ + 1 and Z+(r₃) = δ₁ + δ₃ + 1. According to Def. 2, in order to decide whether p ∧ b |~ ¬f ("penguin-birds don't fly") we must test whether "¬f" is satisfied in all κ+-minimal models of "p ∧ b" or, equivalently, whether κ+(p ∧ b ∧ ¬f) < κ+(p ∧ b ∧ f). This test is performed mechanically by Procedure Z+-consequences, yielding the expected conclusion: p ∧ b |~ ¬f. The reason is as follows: Any model for "p ∧ b" will violate either r₁ ("birds fly") or r₃ ("penguins don't fly"). Since Z+(r₃) = Z+(r₁) + δ₃ + 1, models violating r₁ (including those satisfying "p ∧ b ∧ ¬f") will have a lower κ+-ranking and will thus be preferred to those violating r₃ (including those satisfying "p ∧ b ∧ f"); it follows that κ+(p ∧ b ∧ ¬f) < κ+(p ∧ b ∧ f). Note that the preference of r₃ over r₁ is established independently of the initial priorities δ assigned to these rules.

Example 2: Belief strength.
Consider a database containing two conflicting default rules: Δ_N = {q →(δ₁) p, r →(δ₂) ¬p}, standing for r₁: "typically Quakers are pacifists", and r₂: "typically Republicans are not pacifists" (a version of the "Nixon diamond"). Since each rule is tolerated by the other, the Z+ of each rule is equal to its associated δ: Z+(r₁) = δ₁ and Z+(r₂) = δ₂. Given an individual, say Nixon, who is both a Republican and a Quaker, the decision of whether Nixon is a pacifist will depend on whether δ₁ is bigger than, less than, or equal to δ₂. This is so because any model M_rqp for Quakers, Republicans and pacifists must violate r₂, and consequently κ+(M_rqp) = δ₂, while any model M_rq¬p for Quakers, Republicans and non-pacifists must violate r₁, i.e., κ+(M_rq¬p) = δ₁. Note that in this case the decision to prefer one model over the other does not depend on specificity considerations but, rather, on whether the rule encoder believes that religious convictions bear more strength than political affiliations. This kind of preference cannot be expressed in system-Z or in Lehmann's rational closure [1989].

Example 3: Combining priorities with specificity. For the final example, consider Δ_B = {w →(δ₁) b, w ∧ p →(δ₂) ¬b}, encoding the information that r₁: "if it is Wednesday night I play basketball", and r₂: "if it is Wednesday night and I have a paper due, I don't play basketball", with the δ's reflecting the degree of firmness of these rules. Suppose we wish to inquire whether "I'll play basketball on a Wednesday night when a paper is due" (w ∧ p |~ b). On one hand, the answer to such a query is explicitly contained in r₂. On the other hand, r₂ conflicts with r₁, and in many formalisms (e.g.
circumscription [McCarthy, 1986], default logic [Reiter, 1980]) such a conflict would require extra information in order to give an unambiguous answer (the relation between the ab predicates associated with the defaults for circumscription, and some preference criteria among extensions or the use of seminormal defaults for default logic). System-Z+ yields the expected result regardless of how strongly one believes in r₁. According to Procedure Z-rank, the Z+-ranking computes to Z+(r₁) = δ₁ and Z+(r₂) = δ₁ + δ₂ + 1. Any model for w ∧ p must violate either r₁ or r₂, and since Z+(r₂) = Z+(r₁) + δ₂ + 1, the model violating r₁ will be preferred. Thus κ+(w ∧ p ∧ ¬b) < κ+(w ∧ p ∧ b), and we conclude (with firmness δ₂ + 1) that "I won't be playing basketball that night". Now suppose Δ_B is part of a bigger database and that we wish to include information that takes precedence over all others so far mentioned. For example, "If I am sick I stay in bed" (assuming of course that "staying in bed" rules out "playing basketball" or other activities). In order to enforce this precedence we need only give this new rule a sufficiently high δ without considering its relation to previous commitments.

5 Discussion

System-Z+ provides the user with the power to explicitly set priorities among default rules, and simultaneously maintains a proper account of specificity relations. However, it inherits some of the deficiencies of system-Z (and the rational closure described in [Lehmann, 1989]), the main one being the inability to sanction inheritance across exceptional subclasses. For example, if a fourth rule b →(δ₄) l ("birds have legs") is added to Δ_P (Example 1), we would normally conclude that "penguins have legs". However, system-Z will consider "penguins" exceptional "birds" (since they do not fly) with respect to all properties, including "having legs". The Z+-ranking allows the rule author to partially bypass this obstacle by means of the δ's associated with the rules.
If δ₄ is set to be bigger than δ₁ (to express perhaps the intuition that anatomic properties are more typical than developmental facilities), then the system will conclude that "typically penguins have legs"⁶. This solution, however, is not entirely satisfactory. If we add to this new set of rules a class of "birds" which are "legless", system-Z+ will conclude that either "penguins have legs" or "legless birds fly", but not both⁷. To overcome this difficulty, non-layered priorities among rules must be imposed (see [Geffner, 1989], [Grosof, 1991]).

⁶Note that the fact that "penguins" are only exceptional with respect to "flying" (and not necessarily with respect to "having legs") is automatically encoded in the Z+ ranking by forcing Z+(r₃) to exceed Z+(r₁) + δ₃ independently of δ₄ (and Z+(r₄)).

In a similar vein we remark that more refined selection functions (than the maximum rule violated) might be needed for certain domains. For example, in circuit diagnosis the ranking of a given explanation should also reflect the number of faults it predicts, not merely the abnormality of the least likely fault or, better yet, the sum of the faults weighted by their abnormality ranking. A refinement such as the one proposed by the maximum entropy approach ([Goldszmidt et al., 1990]) embodies this cost function and may yield better results in such domains.

In some sense system-Z+ can be viewed as a version of prioritized circumscription [Lifschitz, 1988], where default priorities are induced by means of a partial order imposed on the abnormalities in the minimization process. However, in prioritized circumscription the relative ranking of abnormalities remains fixed at the level furnished by the user, and does not reflect interactions between adjacent rules. In system-Z+ the input priorities undergo adjustments so as to take into account all related rules in the system.
For example, in the database Δ_P above, the ranking of r₃ ("typically penguins do not fly") was adjusted from δ₃ to δ₁ + δ₃ + 1, so as to supersede δ₁, the priority of the conflicting rule "typically birds fly". As a result of such adjustments, the consistency of the rankings is maintained throughout the system, and compliance with specificity-type constraints is automatically preserved. This is made precise in the following theorem:

Theorem 4 Let r₁ : φ →(δ₁) ψ and r₂ : φ′ →(δ₂) σ be two rules in a consistent set Δ such that:
1. φ |~ φ′ is in all consequence relations of admissible κ-rankings (i.e., φ is more specific than φ′).
2. There is no model satisfying φ ∧ ψ ∧ φ′ ∧ σ (i.e., r₁ conflicts with r₂).
Then Z+(r₁) > Z+(r₂) independently of the values of δ₁ and δ₂.

In other words, the Z+-ranking guarantees that features of more specific contexts override conflicting features of less specific contexts.

Note that although the computation of the adjusted ranking may be expensive (non-polynomial in the number of rules), once it is found, it constitutes an efficient encoding of κ+ and facilitates an efficient procedure for answering queries about the plausibility of proposed conclusions: deciding whether φ |~ σ requires only |Δ| propositional satisfiability tests.

Acknowledgements

Sound criticisms by two anonymous reviewers helped in putting some of the results in perspective.

⁷This counterexample is due to Kurt Konolige.

References

[Geffner, 1989] Hector A. Geffner. Default reasoning: Causal and conditional theories. Technical Report TR-137, PhD dissertation, University of California Los Angeles, Cognitive Systems Lab., Los Angeles, 1989.
[Goldszmidt and Pearl, 1990] Moisés Goldszmidt and Judea Pearl. On the relation between rational closure and system Z. In Third International Workshop on Nonmonotonic Reasoning, pages 130-140, South Lake Tahoe, 1990.
[Shoham, 1987] Yoav Shoham. Nonmonotonic logics: Meaning and utility.
In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-87), pages 388-393, Milan, Italy, 1987.
[Spohn, 1987] W. Spohn. Ordinal conditional functions: A dynamic theory of epistemic states. In W. L. Harper and B. Skyrms, editors, Causation in Decision, Belief Change, and Statistics, pages 105-134. Dordrecht: Reidel, Holland, 1987.
[Goldszmidt and Pearl, 1991] Moisés Goldszmidt and Judea Pearl. System-Z+: A formalism for reasoning with variable-strength defaults. Technical Report TR-161, University of California Los Angeles, Cognitive Systems Lab., Los Angeles, 1991.
[Goldszmidt et al., 1990] Moisés Goldszmidt, Paul Morris, and Judea Pearl. A maximum entropy approach to nonmonotonic reasoning. In Proceedings of the American Association for Artificial Intelligence Conference, pages 646-652, Boston, 1990.
[Grosof, 1991] Benjamin N. Grosof. Generalizing prioritization. In Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference, Boston, 1991.
[Kraus et al., 1990] Sarit Kraus, Daniel Lehmann, and Menachem Magidor. Nonmonotonic reasoning, preferential models and cumulative logics. Artificial Intelligence, 44:167-207, 1990.
[Lehmann, 1989] Daniel Lehmann. What does a conditional knowledge base entail? In Proceedings of Principles of Knowledge Representation and Reasoning, pages 212-222, Toronto, 1989.
[Lifschitz, 1988] Vladimir Lifschitz. Circumscriptive theories: a logic-based framework for knowledge representation. Journal of Philosophical Logic, 17:391-441, 1988.
[McCarthy, 1986] John McCarthy. Applications of circumscription to formalizing commonsense knowledge. Artificial Intelligence, 28:89-116, 1986.
[Pearl, 1988] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, 1988.
[Pearl, 1989] Judea Pearl. Probabilistic semantics for nonmonotonic reasoning: A survey.
In Proceedings of Principles of Knowledge Representation and Reasoning, pages 505-516, Toronto, 1989.
[Pearl, 1990] Judea Pearl. System Z: A natural ordering of defaults with tractable applications to default reasoning. In M. Vardi, editor, Proceedings of Theoretical Aspects of Reasoning about Knowledge, pages 121-135. Morgan Kaufmann, San Mateo, 1990.
[Reiter, 1980] Raymond Reiter. A logic for default reasoning. Artificial Intelligence, 13:81-132, 1980.

404 NONMONOTONIC REASONING
Ranking the Execution

Jeffrey A. Barnett
Northrop Research and Technology Center
One Research Park
Palos Verdes Peninsula, CA 90274
jbarnett@nrtc.northrop.com

Abstract
How should opinions of control knowledge sources be represented and combined? These issues are addressed for the case where control knowledge is used to form an agenda, i.e., a proposed knowledge source execution order. A formal model is developed in the Dempster/Shafer belief calculus, and computational problems are discussed as well. The model is applicable to many other problems where it is desired to order a set of candidates using a knowledge-based approach.

Introduction
Deciding on the order to do things is one of the most important activities performed by an intelligent system. These decisions influence the amount of problem-solving resources utilized and determine the coherence and explainability of system behavior. The decisions are made by control knowledge, and it is this knowledge that is responsible for guiding a system's search for problem solutions.

Search is a prevalent problem-solving paradigm used by many intelligent systems (Newell & Simon 1976). Some relevant piece of knowledge is selected by the control knowledge and applied within the current problem-solving state. This step can modify that state and make different pieces of the knowledge base applicable. The selection and application cycle is repeated until the system's termination condition is met.

Without the guidance of proper control knowledge, search wanders aimlessly through the solution space until either resources are exhausted or an answer is uncovered (Pearl 1984; Nilsson 1980). This is not acceptable in large domains because the chance of stumbling on a solution is very small.
Also, it is unlikely that a proposed solution could be defended: it is difficult or impossible to explain why alternative paths were discarded or not explored at all if blind search were substituted for the use of control knowledge.

If the resources consumed by control knowledge did not count as part of the total problem-solving resources used by the system, it would be optimal to determine the piece of knowledge to apply at the beginning of each system cycle. However, these resources do count (Barnett 1984), and the complexity of making control decisions can easily overwhelm many other aspects of system behavior. For this reason, many systems form an agenda, a proposed order to execute or apply the relevant pieces of knowledge. Motivation and techniques to form agendas in rule-based systems are described by (Davis 1980a; 1980b).

Summary
A formal model is developed in the Dempster/Shafer belief calculus (Shafer 1976) so that the opinions of control knowledge sources can be represented and combined in order to form agendas. Contradictory opinions and preference cycles (loops) are dealt with in a straightforward way.

The model is applicable to many problems where it is desired to order a set of candidates using a knowledge-based approach. However, for the sake of concreteness, the discussion is presented as if the problem were that of forming an execution-order agenda for the rule instantiations in a system's conflict set, and some flexibility in execution order is assumed.

The model's objective function for agendas is Pl, the Dempster/Shafer plausibility measure. However, finding the agenda that maximizes Pl is equivalent to solving the weighted feedback edge-set problem, which is known to be NP-complete.

Since the same underlying problem is likely to occur in other formulations of weighted voting schemes, an approximation technique is desired. An algorithm, empirically shown to be reasonably accurate and efficient, is described.
The Model
A model to represent and combine the opinions of control knowledge sources is developed in the Dempster/Shafer belief calculus. (The appendix provides a brief introduction to the concepts used below.) The calculus is well-suited to this task because opinions about ordering are naturally expressed as preferences for subsets of the set of the possible agendas.

BARNETT 477
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

In the model, primitive opinions are weighted preferences on the execution order of pairs of rule instances. These pairwise preferences are represented by simple support functions. Complex opinions are expressed as sets of primitive opinions and combined with Dempster's rule.

Representing Opinions
Let R = {r1 ... rn} be the collection of rules selected by the retrieval and filtering mechanism of an expert system. Define Θ as the set of possible execution agendas, i.e., the set of permutations of R. Thus, if R = {a b c}, then Θ = {abc acb bac bca cab cba}, and the problem of picking an agenda for R is to select the best A ∈ Θ. Therefore, opinions about ordering R are preferences for particular elements or subsets of Θ, because these elements and subsets encode order relations among the elements of R.

In the proposed model, a primitive opinion about the best ordering of R is a pairwise preference written as a → b[w], where a and b are elements of R. This is an opinion that a should execute before b, and w ∈ [0, 1] is the strength of that preference.

A primitive opinion is represented by a simple support function. The degree of support is w and the focus is the subset of Θ for which the pairwise preference holds. For example, if R = {a b c}, the opinion a → b[w] is represented by a simple support function with focus {abc acb cab}. N.B., the opinion "opposite" to a → b[w] is b → a[w].
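For small R the focus of a primitive opinion can simply be enumerated. The sketch below is my own illustrative code, not the paper's implementation; the helper name `focus` is hypothetical:

```python
from itertools import permutations

def focus(R, a, b):
    """Focus of the simple support function for a -> b[w]: the subset of
    Theta (all permutations of R) in which a executes before b."""
    return {p for p in permutations(R) if p.index(a) < p.index(b)}

# With R = {a b c}, the focus of a -> b[w] is {abc, acb, cab}, as in the text.
print(sorted(focus(("a", "b", "c"), "a", "b")))
```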
Unlike certainty factors (Shortliffe 1976), negative degrees of support are not used; rather, support is focused on complementary propositions.

It is easy to imagine other kinds of primitive opinions than those representable in this model, i.e., opinions that support subsets of Θ not allowed herein. However, as shown below, sets of pairwise preferences adequately capture the intent of many types of opinions expressed by control knowledge.

Combining Opinions
The Dempster/Shafer belief calculus provides Dempster's rule to combine sets of belief functions into a single belief function. In particular, the opinions of control knowledge sources are combined by this rule because primitive opinions are represented by simple support functions, a specific kind of belief function.

The Best Agenda
The Dempster/Shafer belief calculus provides decision makers with the Bel and Pl functions. This section shows that maximizing Pl is the better criterion to select the best agenda because Pl is more reliable than Bel in discriminating among alternatives.

Assume that the total set of primitive opinions expressed by the control knowledge sources is represented by u_i → v_i[w_i], where 1 ≤ i ≤ m. Let π ∈ Θ be an agenda and define v_i(π) to be satisfied if and only if π is compatible with u_i → v_i[w_i], i.e., if u_i appears before v_i in π. Then Pl is computed by

    Pl(π) = K × Π_{¬v_i(π)} (1 − w_i),    (1)

where K is strictly positive and independent of π (see the appendix). Thus, Pl(π) ≠ 0 unless there is a u_i → v_i[1] where ¬v_i(π). However, the necessary and sufficient condition that Bel(π) ≠ 0, where π = r1 ... rk, is much stronger:
1. A primitive opinion, r_i → r_{i+1}[w_i], where w_i ≠ 0, exists for each 1 ≤ i < k.
2. No primitive opinion of the form r_i → r_j[1] exists for any 1 ≤ j < i ≤ k.
Therefore, unless the set of primitive opinions is relatively large, Bel will be zero for most or all of the π ∈ Θ.
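Equation 1 can be sketched directly. The code below is my own illustration, not the paper's; the positive constant K is omitted, since it does not affect the ranking of agendas:

```python
def pl_unnormalized(pi, opinions):
    """Pl(pi) up to the constant K in Equation 1: the product of (1 - w)
    over the primitive opinions u -> v[w] that agenda pi violates."""
    pos = {r: i for i, r in enumerate(pi)}
    result = 1.0
    for u, v, w in opinions:
        if pos[u] > pos[v]:          # pi places v before u, violating u -> v[w]
            result *= 1.0 - w
    return result

ops = [("a", "b", 0.5), ("a", "c", 0.5), ("b", "c", 0.5)]
print(pl_unnormalized(("a", "b", "c"), ops))   # 1.0: agrees with every opinion
print(pl_unnormalized(("c", "b", "a"), ops))   # 0.125: violates all three
```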
In fact, if there is an r ∈ R such that no primitive opinion references r, then Bel will be zero for every agenda. Hence, the best agenda is defined, by this model, to be the π ∈ Θ that maximizes Pl(π), because Pl is more stable and reliable at discriminating agenda merit than is Bel.

Complex Opinions
The opinions of control knowledge sources are often derived from general knowledge and knowledge of the application rather than specific knowledge about particular rules (Davis 1980a; 1980b). For example, a typical control rule in an investment domain is,

    If the economy is fluctuating, prefer the rules that analyze risk.

This control rule is interpreted to prefer to execute investment rules that analyze risk before those that do not, given that the system has (can) deduce that the economy is fluctuating. Another example of a control rule in the same domain is,

    If bonds are being considered, prefer the rules that recommend investments with higher Standard and Poor's ratings.

This example references a ranking (the Standard and Poor's index) and groups some of the investment rules (those that recommend bonds) so that the groups can inherit the ranking. Then it prefers to execute the rules so ranked in the specified order.

Both examples exhibit execution-order preferences that induce partial orders on R, the domain rules. A simple representation captures the intent of such control knowledge. Let the P_i, where 1 ≤ i ≤ m, be predicates with domain R, and assume that the weights s_ij ∈ [0, 1] are given. The partial order preference is realized as the collection of all primitive opinions of the form a → b[s_ij], where a, b ∈ R, P_i(a), P_j(b), and i < j. Thus, the control knowledge source is represented by its P_i and s_ij. It should be noted that the P_i may need to access variables in the problem-solving state of the expert system, e.g., "economy fluctuation" in the first example in this section.

478 BELIEF FUNCTIONS

Figure 1: Example with 3 interpretations.
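The partial-order representation just described (predicates P_i plus weights s_ij) can be expanded into primitive opinions mechanically. A sketch in my own Python; the names are illustrative, not the paper's:

```python
def partial_order_opinions(R, preds, s):
    """Expand a complex opinion into primitive pairwise preferences:
    emit a -> b[s[i][j]] whenever P_i(a), P_j(b), and i < j."""
    opinions = []
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            for a in R:
                if preds[i](a):
                    for b in R:
                        if preds[j](b):
                            opinions.append((a, b, s[i][j]))
    return opinions

# The alphabetical-order example of the next section: P1, P2, and P3 pick
# out a, b, and c, respectively, and every weight is s = 0.5.
preds = [lambda r: r == "a", lambda r: r == "b", lambda r: r == "c"]
s = [[0, 0.5, 0.5], [0, 0, 0.5], [0, 0, 0]]
print(partial_order_opinions(["a", "b", "c"], preds, s))
```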
The next section presents an example and considers alternative realizations of the execution-order preferences of control knowledge sources.

Alternative Interpretations
Let R = {a b c} and assume that a control knowledge source prefers that execution be in alphabetical order. In the notation of the previous section, a, b, and c are the only rules that, respectively, satisfy P1, P2, and P3. Additionally, let s12 = s13 = s23 = s. Thus, the opinion of this control knowledge source is expressed by a → b[s], a → c[s], and b → c[s].

With these assumptions, Figure 1 shows the value of Pl for each π ∈ Θ. The column labeled "Model" lists the Pl values computed by the model. The six π ∈ Θ split into four groups because Pl awards values that depend on whether π agrees with 0, 1, 2, or 3 of the primitive opinions.

An alternative realization of the complex opinion that a, b, and c should execute in the stated order is to combine only the two primitive opinions a → b[s] and b → c[s], i.e., do not take the closure of the transitive preference relation. This alternative results in the Pl values shown in the figure under the heading "Simple". A third alternative is to form a single simple support function that focuses only on the singleton set {abc}. The Pl values that result are shown in the column titled "Chunk".

A problem with the second and third interpretations is that they are less discriminating than the model's. The third alternative is the most insensitive: minor disagreements such as acb, with only b and c out of order, are awarded the same Pl value as total disagreements such as cba, where everything is backwards. Since there can, in general, be many knowledge sources expressing ordering opinions, it is not a good idea to employ an all-or-nothing interpretation in domains where "half a loaf is better than nothing".
The approximation computation described below is applicable with both the "Model" and the "Simple" interpretations, but not "Chunk", because the latter is not based on pairwise preferences.

Figure 2: Example with a loop.

Optimization
Finding the best agenda means finding the π ∈ Θ that maximizes Pl(π). This is shown to be the weighted feedback edge-set problem, which is known to be NP-complete (Garey & Johnson 1979).

Since K in Equation 1 is strictly positive, simple algebra demonstrates that the π ∈ Θ that minimizes

    Pl'(π) = Σ_{¬v_i(π)} w'_i,    (2)

is the best agenda, i.e., the π ∈ Θ that maximizes Pl in Equation 1. The w'_i = −log(1 − w_i) are the weights of evidence, in the terminology of (Shafer 1976), and are positive because w_i ∈ [0, 1]. Hence, Pl'(π) is just a sum of a positive weight for each u_i → v_i[w_i] that is not compatible with π.

One would expect a similar formulation, with perhaps different weight semantics, for any weighted preference scheme used to determine optimal agendas: those that are incompatible with the least vote weight are valued most.

An example is shown graphically in Figure 2. The elements of R are the nodes, and each directed labeled arc represents a pairwise preference, e.g., the arc labeled w'_ab directed from a to b represents a → b[w_ab]. Thus, the example shows five primitive opinions that contain a preference loop among a, b, and c formed by the arcs labeled w'_ab, w'_bc, and w'_ca. Let π = abcd; then Pl'(π) = w'_ca because the arc directed from c to a is the only one that is inconsistent with π.

Since there is a cycle, every π ∈ Θ is penalized by a weight on at least one of the arcs in that cycle. In the general case, every agenda is penalized by the weight of at least one arc in each directed cycle in the preference graph. Therefore, the best agendas are those that are compatible with the graph that remains after the least total weight has been removed on a set of arcs that cut each directed cycle. Finding a minimum-weight deletion is the weighted feedback edge-set problem.
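Equation 2 can be sketched as follows (my own illustrative code, not the paper's; opinions of full strength w = 1 are excluded here, since their weight of evidence is infinite and they are better treated as hard constraints):

```python
import math

def pl_prime(pi, opinions):
    """Pl'(pi) from Equation 2: the sum of the weights of evidence
    w' = -log(1 - w) over the preferences u -> v[w] that pi violates."""
    pos = {r: i for i, r in enumerate(pi)}
    return sum(-math.log(1.0 - w)
               for u, v, w in opinions
               if w < 1.0 and pos[u] > pos[v])

ops = [("a", "b", 0.5)]
print(pl_prime(("a", "b"), ops))   # 0.0: no violated preference
print(pl_prime(("b", "a"), ops))   # log 2: the single preference is violated
```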
Again, consider the example with a loop shown in Figure 2. The best you can do is to accept one of the penalties, w'_ab, w'_bc, or w'_ca. Assume that the minimum of the three is w'_ab. Then the orders bcad and bcda both have this minimum penalty and, hence, both maximize plausibility, i.e., both are optimal.

PROCEDURE FIND-AGENDA();
1. Set π to a random permutation of R.
2. Visit the elements of π in left-to-right order. Move each visited element to the position in π that minimizes Pl'(π). If any element is moved by this step, continue with the next step. Otherwise, halt and return π.
3. Visit the elements of π in right-to-left order. Move each visited element to the position in π that minimizes Pl'(π). If any element is moved by this step, continue with the previous step. Otherwise, halt and return π.
END FIND-AGENDA;

Figure 3: Minimization algorithm.

A graphical representation of a set of primitive opinions provides a simple method to check their consistency. Let R be the graph's nodes as above. However, only include those edges that represent primitive opinions of the form a → b[1]. The total set of control knowledge opinions is consistent if and only if this restricted graph is free of directed cycles.

An Approximation
Since determining the π ∈ Θ that minimizes Equation 2 is an NP-complete problem, an approximation technique is necessary if the above model is to be used for applications with more than a few agenda items. Unfortunately, a search for previous work on such approximations has not been fruitful.

Therefore, several simple approximation techniques were programmed and empirically tested, by comparison to each other and to actual optimal results for small problems. The test cases were generated randomly and exact values computed, when possible, by exhaustive search. Based on the empirical evidence, one algorithm appears to be efficient and accurate enough to be useful.
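A runnable rendering of the Figure 3 procedure, in my own Python and only a sketch: it recomputes Pl' for every candidate position rather than using the O(n) incremental move the paper assumes, and it only moves an element when the move strictly improves Pl':

```python
import math
import random

def find_agenda(R, opinions, seed=0):
    """Local-search sketch of Figure 3: from a random start, alternately
    sweep left-to-right and right-to-left, moving each visited element to
    the position that minimizes Pl'(pi); stop when a sweep moves nothing."""
    weights = [(u, v, -math.log(1.0 - w)) for u, v, w in opinions if w < 1.0]

    def pl_prime(pi):
        pos = {r: i for i, r in enumerate(pi)}
        return sum(wp for u, v, wp in weights if pos[u] > pos[v])

    pi = list(R)
    random.Random(seed).shuffle(pi)                  # step 1
    forward = True
    while True:
        moved = False
        for r in (pi[:] if forward else pi[::-1]):   # steps 2 and 3
            i = pi.index(r)
            rest = pi[:i] + pi[i + 1:]
            scores = [pl_prime(rest[:j] + [r] + rest[j:])
                      for j in range(len(pi))]
            best = min(range(len(pi)), key=scores.__getitem__)
            if scores[best] < scores[i] - 1e-12:     # move only if strictly better
                pi = rest[:best] + [r] + rest[best:]
                moved = True
        if not moved:
            return pi
        forward = not forward
```

Full-strength opinions (w = 1) are excluded from the sum; the paper treats them via the consistency check above.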
The core of that approximation is shown in Figure 3. Given the n × n matrix w', it is possible to find and move an element to its optimal place, relative to the current order of π, in O(n) time, where n = |R|. Thus, each application of step 2 and step 3 is O(n²).

Steps 2 and 3 alternate because it is usually possible to prune a substantial part of a step 3 after a step 2, and vice versa. On the other hand, if either step is directly repeated, pruning is not available.

Empirical testing, with n varying from 3 to 100, showed that the entire algorithm is O(n^z), where z is between 2.6 and 3. Several cases were tested with n = 200, and the results were compatible with this analysis. Exponent variation does not appear to depend very much on n or on the average degree of the preference graph. Rather, it is most strongly affected by the fraction of the total arcs that agree with the best agendas.

This algorithm is a local optimization that employs a random starting point. Since different starting points can find different local optima, it may pay to run the algorithm several times and keep the best solution. Empirical evidence suggests that the need for multiple applications increases when a relatively large fraction of the w'_ij are zero and the distribution of non-zero w'_ij values is flat, because then the variance between different solutions tends to be largest. The variance in Pl' for different starting points was never observed to be more than a few percent.

Replanning
Sometimes new information becomes available while an execution agenda is being pursued. That raises questions about how to test the impact on the unexecuted part of the current agenda and how to economically reorder that portion if and when it seems appropriate to do so.
The algorithm described in the previous section has a property that makes it well-suited to address such replanning problems: every subagenda (a contiguous sequence) in π is a local optimum in the sense that it cannot be improved by moving any single element within that subagenda. In particular, the unexecuted portion of π, a tail sequence, is a local optimum unless some w'_ij changes, where both r_i and r_j are in the tail of π.

If a change in opinions affecting rules that have not been executed occurs, a modification of the algorithm shown in Figure 3 is used. The changes are to restrict w' to the tail and not execute step 1. In other words, start with an agenda that is probably close to reasonable rather than start from a random point. The full algorithm with multiple starting points only needs to be considered if there is substantial change in the value of Pl' calculated for the tail. Empirical testing shows that this type of replanning, where only a few opinions change, is very economical. Typically, steps 2 and 3 do not iterate: one or two applications are sufficient.

It is possible to add new candidates to the tail of π or remove some that no longer belong in the conflict set. In these cases, simply place the new candidates at the end of π, appropriately grow and/or restrict w' to reflect the actual slate of candidates in the tail, and rerun the algorithm using the modified π as the starting point.

In all of these cases, the approximation algorithm commends itself as a diagnostic to determine the probable impact of the changed opinions. In addition, it can do the full replanning when it is appropriate to do so. The fact that subagendas of locally optimal agendas are themselves locally optimal provides a measure of stability.

Discussion
The model developed here appears to have sufficient generality to solve ordering problems in many domains.
Its use of the Dempster/Shafer belief calculus makes the formulation straightforward because (1) it is desired to invest belief in subsets of the possible agendas and (2) the simple support functions provided by the calculus make it easy to do so.

Finding the best agenda, as defined above, is an NP-complete problem. However, the availability of a reasonably efficient and accurate approximation partially mitigates this fact. Further, the underlying computation just determines the agenda that is incompatible with the least vote weight, an idea that seems to be very natural.

Acknowledgements
Thanks to Dan Geiger and the referees for many valuable comments that improved this paper.

Appendix
This is a brief introduction to the Dempster/Shafer belief calculus. The interested reader is referred to (Shafer 1976) for more detailed explanations of the concepts that are involved.

Let Θ be a set of mutually exclusive and exhaustive possibilities and interpret the subsets of Θ as disjunctions of their elements. The function m : 2^Θ → [0, 1] is called a basic probability assignment or mass function if

    m(∅) = 0 and Σ_{S⊆Θ} m(S) = 1.

In the Dempster/Shafer belief calculus, m plays a role similar to a density function in ordinary probability theory. The difference is that the domain is 2^Θ, the subsets of Θ, rather than its elements.

The belief function, Bel : 2^Θ → [0, 1], and the plausibility function, Pl : 2^Θ → [0, 1], are defined to play roles similar to distribution functions:

    Bel(S) = Σ_{T⊆S} m(T) and Pl(S) = Σ_{T∩S≠∅} m(T).

Thus, Bel(S) ≤ Pl(S) for all S ⊆ Θ and, therefore, Bel and Pl are sometimes referred to, respectively, as the lower and upper probability measures. Both are available to decision makers.

The calculus provides Dempster's rule to combine several belief functions into one. Let m1 ...
mn be the mass functions associated with n belief functions; then Dempster's rule defines m, the mass function for their combination, to be m(∅) = 0 and

    m(S) = K × Σ_{S1∩...∩Sn=S} Π_{1≤i≤n} m_i(S_i), for nonempty S ⊆ Θ, where

    K^{-1} = Σ_{S1∩...∩Sn≠∅} Π_{1≤i≤n} m_i(S_i).

This combination is defined whenever K is.

A simple support function is a belief function for which there is at most one S ≠ Θ such that m(S) ≠ 0. The trivial simple support function is the one where m(Θ) = 1 and m(S) = 0 for all S ≠ Θ. Other simple support functions are parameterized by a focus F ⊂ Θ and a w ∈ (0, 1], where m(F) = w and m(Θ) = 1 − w. The subset F is called the focus of the simple support function, and w is called its degree of support.

A simple support function is a mechanism to place committed belief on the single hypothesis represented by its focus. The remaining weight, placed directly on Θ, is uncommitted since Θ represents the universally true proposition.

Let Dempster's rule be used to combine the n simple support functions with the foci F_i and degrees of support s_i. Then (Barnett 1991) shows that

    Pl({π}) = K × Π_{1≤i≤n, π∉F_i} (1 − s_i)

for the combined belief function. This formula is the justification for Equation 1 above.

References
Barnett, J.A. 1991. Calculating Dempster-Shafer Plausibility. IEEE Trans. PAMI. In press.
Barnett, J.A. 1984. How much is control knowledge worth: A primitive example. Artificial Intelligence 22:77-89.
Davis, R. 1980a. Meta-rules: reasoning about control. Artificial Intelligence 15:179-222.
Davis, R. 1980b. Content reference: reasoning about rules. Artificial Intelligence 15:223-239.
Garey, M.R., and Johnson, D.S. 1979. Computers and Intractability. Freeman Press.
Newell, A., and Simon, H.A. 1976. Computer science as empirical inquiry: symbols and search. CACM 19(3):113-126.
Nilsson, N.J. 1980. Principles of Artificial Intelligence. Tioga Press.
Pearl, J. 1984. Heuristics. Addison-Wesley.
Shafer, G. 1976. A Mathematical Theory of Evidence. Princeton University Press.
Shortliffe, E.H. 1976.
Computer-Based Medical Consultations: MYCIN. Elsevier.
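The Bel and Pl definitions from the appendix above translate directly into code; a small sketch of my own (not the paper's implementation), with masses stored on frozensets:

```python
def bel(S, m):
    """Bel(S): total mass on subsets of S."""
    return sum(v for T, v in m.items() if T <= S)

def pl(S, m):
    """Pl(S): total mass on sets that intersect S."""
    return sum(v for T, v in m.items() if T & S)

theta = frozenset({"x", "y", "z"})
m = {frozenset({"x"}): 0.3, frozenset({"x", "y"}): 0.5, theta: 0.2}
S = frozenset({"x", "y"})
# Bel(S) = 0.3 + 0.5 and Pl(S) = 1.0, so Bel(S) <= Pl(S) as the appendix notes.
print(bel(S, m), pl(S, m))
```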
CTA Incorporated
200 Liberty Plaza
Rome, NY 13440
jlemmer@nova.npac.syr.edu

Henry E. Kyburg, Jr.
Department of Computer Science
University of Rochester
Rochester, NY 14627
kyburg@cs.rochester.edu

The first author acknowledges the support of the New York State Science and Technology Foundation. The second author acknowledges the support of the Signals Warfare Center of the U.S. Army and of the National Science Foundation.

Abstract
While every Shafer belief function corresponds to a set of interval beliefs on the atoms of the frame of discernment, an arbitrarily specified set of intervals of belief may not correspond to any belief function, even when it does correspond to bounds imposed by sets of probability functions. This paper proves necessary and sufficient conditions which must be met by a set of belief intervals over atoms if a corresponding belief function exists. The sufficiency is proved via an O(n) algorithm which will always construct a corresponding belief function, if one exists, for a specific set of intervals.

Introduction
C. A. B. Smith (Smith 1961), I. J. Good (Good 1962), and H. E. Kyburg (Kyburg 1961) all early on had the idea that the representation of belief should be by means of intervals rather than by real numbers. There are intuitive reasons for this, of course, but only C. A. B. Smith offered a pragmatic argument. While it is true, as Savage (Savage 1954) argued, that one can be forced to choose among alternatives in such a way as to reveal in the imaginary limit a real-valued degree of belief, at some point the choices will strike the agent making them as arbitrary, forced by the demands of the interlocutor rather than by the agent's belief state.

Smith's idea was that degrees of belief should correspond to what he called "pignic" odds. The pignic odds on S are the least odds that the agent would feel comfortable in offering on S. It is clear that in general the probabilities corresponding to the pignic odds on S and to the pignic odds on the negation of S will not add up to one. For example, if I will offer odds of 1:2 on rain tomorrow, and odds of 1:1 on no rain tomorrow, the corresponding probabilities are 1/3 for rain and 1/2 for no rain. Since offering odds of less than 1:3 on no rain is like receiving odds of greater than 3:1 on rain, the agent's beliefs concerning rain can be represented by the probability interval for S: [1/3, 1/2]. (The characterization of evidential probability developed about the same time (Kyburg 1961), in which all probabilities are based on imperfectly known statistical proportions, also leads to an interval representation of uncertainty.)

The analogy, so far, with Shafer's belief functions (Shafer 1976) is very close. It has been shown (Kyburg 1987) that every belief function corresponds to a convex set of classical probability functions. Smith's pignic odds correspond to suspending judgment on a convex set of classical probabilities. A. P. Dempster discussed the relation between upper and lower probabilities and convex sets of probability functions in (Dempster 1967).

The problem of ensuring that a set of real-valued probabilities over an algebra are consistent (coherent) is non-trivial. The problem of ensuring the consistency of a set of pignic odds or a set of belief functions is even harder. By consistency of a set of probability intervals, we mean that there exists at least one classic probability function which takes a point from each interval. The assignment [.1, .2], [.3, .4] to exclusive and exhaustive S1, S2 is obviously not consistent, while the assignment [.1, .9], [.4, .5] is, since the probability function can assign a value of .5 to P(S1) and .5 to P(S2). If this same set of intervals is to be consistent in the sense of a belief function, a belief function must exist which yields these intervals as the belief and upper probability of these singletons.
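For exclusive and exhaustive atoms, probabilistic consistency of a set of intervals reduces to two inequalities (a standard feasibility fact, sketched here in my own code, not a formulation from the paper): some probability function threads the intervals iff each l_i ≤ u_i, Σ l_i ≤ 1, and Σ u_i ≥ 1.

```python
def prob_consistent(intervals):
    """Intervals [l_i, u_i] on exclusive, exhaustive atoms: a probability
    function with p_i in each interval and sum(p_i) = 1 exists iff
    every l_i <= u_i, sum(l_i) <= 1, and sum(u_i) >= 1."""
    return (all(l <= u for l, u in intervals)
            and sum(l for l, _ in intervals) <= 1.0
            and sum(u for _, u in intervals) >= 1.0)

# The two assignments discussed above:
print(prob_consistent([(0.1, 0.2), (0.3, 0.4)]))   # False: upper bounds sum to 0.6
print(prob_consistent([(0.1, 0.9), (0.4, 0.5)]))   # True: e.g. p1 = p2 = 0.5
```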
In this paper we offer an algorithm for determining the consistency of a set of interval constraints on atoms of an event space relative to the existence of a belief function, and for generating one of the many belief functions that correspond to that set of constraints, in case the constraints are consistent. The algorithm yields a complete belief (or mass) function.

In applications, the initial interval constraints can be provided by an expert. Many people are more comfortable with interval estimates than with real-valued estimates. When probabilities are obtained by formal statistical methods, they are often given by confidence intervals. If we are to use interval inputs in any form, some procedure for checking their consistency is crucial. While our program does this for only a limited special case, there is hope that it can be extended. The algorithm and a convenient man-machine interface have been implemented (Lemmer 1988).

Background
This paper investigates the use of belief intervals as 'input' to the Dempster-Shafer calculus. Many implemented AI systems using models of uncertainty based on Dempster-Shafer theory rely on intervals to elicit the belief functions, e.g. (Garvey, Lowrance, & Fischler 1981). But using intervals of belief in this way confronts the difficulty that many sets of intervals of belief do not correspond to any belief function. This is so even for intervals that are probabilistically consistent; see (Kyburg 1987).

Figure 1: Regions of Existence

In the three event case shown in Figure 1 and described in the next section, only one fifth of all the probabilistically consistent intervals are consistent with any belief function. This situation can prove quite frustrating to a domain expert building a knowledge-based system when he is forced by the theory to revise his original estimates.
It is especially frustrating if he has no sure guidance on how to carry out this revision.

The Relation Between Interval Probability Constraints and Belief Functions
It is well known that for every belief function Bel, Bel(X) and P*(X) = 1 − Bel(¬X) can be construed as the minimum and maximum probabilities of X under a convex set of probability functions (Kyburg 1987). Such an interval can be computed for each event in the frame of discernment on the basis of the belief function Bel. But for what subjective intervals of belief can a belief function be consistently posited? That is the subject of this section. Note that since the intervals cannot be specified arbitrarily, neither can belief in the complement of an event be specified arbitrarily.

The criteria which must be met by a set of interval constraints on atoms in order for that set to correspond to a belief function are mathematically straightforward. Assume that we have a frame of discernment Θ = {e0, e1, ..., e_{n−1}}, where the members of Θ are ordered by the size of the belief interval associated with them. Let [l_i, u_i] be the interval associated with e_i, and let i < j imply (u_i − l_i) ≤ (u_j − l_j). The size of the interval associated with e_i is s_i = u_i − l_i, and the sum of these sizes is

    S = Σ_{i=0}^{n−1} s_i.

The belief uncommitted to any singleton atom of Θ is

    U = 1.0 − Σ_{i=0}^{n−1} l_i.

Theorem 1. There exists a belief function corresponding to a set of intervals over atoms (or a set of basic subsets) of a frame of discernment if and only if

    U ≥ 0,    (1)
    s_i ≤ U for all i,    (2)
    S ≥ 2U.    (3)

The necessity of (1), (2), and (3) follows almost directly from the definition of a belief function. The sufficiency of (1), (2), and (3) will be shown by presenting an algorithm which will always construct a valid basic probability assignment whenever the three conditions are met.

Before actually providing the proof that (1), (2), and (3) are necessary and sufficient, it is worthwhile examining their implications. Suppose that the frame of discernment Θ is the set {e0, e1, ..., e_{n−1}}, and the set of probability constraints on these events is

LEMMER & KYBURG 489
Suppose that the frame of discernment, 0, is the set (eo,el,..., en-l, ), and the set of probability constraints on these events is LEMMER & KYBURG 489 What do the conditions, tell us about the values which the Si can assume and still have a belief function exist satisfying these constraints? The answer in the case of these constraints is shown in Figure 1. Using the formula for U introduced above, we can compute the value of U in the example as 0.1. Constraints (2) and (3) imply that the values for the Si must lie in the tetrahedron between the darker gray plane shown in the figure, and the point (. l,.l ,.l): Constraint (2) limits possible solutions to the cube. Constraint (3) limits them to the positive side of the darker shaded plane. Probability intervals are consistent within the cube on positive side of the lighter shaded plane. Necessity and Sufficiency The necessity of (l), (2), and (3) follows almost directly from the definition of a belief function. The sufficiency of (l), (2), and (3) will be shown by presenting an algorithm which will always construct a valid basic probability assignment whenever the three conditions are met. A belief function exists if and only if (Shafer 1976) there exists a basic probability assignment, m: 2’ + [O,l], such that m(0) = 0 0 4 c m(A)= 1.0 AC63 and Bel(A) = c m(B) BcA 0 5 From (4), (6) and the definition of li, it follows that and from the definition of U, (5), and (7) that U = 1.0 - C Ii = 1.0 - C m((ei)) i i = c m(A) 2 0 Ad3 because all the m(.) are non-negative. Thus we have the necessity of condition (1). Since Si = P*{(G)) - Be1 ((ei]) = c m(A) - qq Ac63 eie A we have the necessity of condition (2). Because S =CSi i AC63 = c 14 49 ACQ 1412 2 2 c m(A)=2[1.0-~~m((e~)]=2U AcQ 14~2 we have the necessity of condition (3). The algorithm for constructing a belief function meeting a set of constraints satisfying conditions (l), (2), and (3) operates in four phases. 
The first phase begins with an (improper) basic probability assignment in which m(∅) = 1.0, and all other assignments are zero. At the conclusion of Phase I, the (still improper) basic probability assignment has been modified so that, if the belief assigned to the empty set is excluded from the sum in (6), the belief calculated for each event, Bel({ei}), yields the correct value for li. At the end of Phase II, computation of P*({ei}) (again excluding the empty set) will yield ui for all the events in the (ordered) frame of discernment except the event with the largest sized interval, en−1. At the end of the third phase, computation of P*({en−1}) (still with the same exclusion) will yield the correct result. At the end of the fourth phase, m(∅) will be zero, and all constraints will be satisfied by the (now proper) basic probability assignment.

490 BELIEF FUNCTIONS

Phase I: Satisfy belief constraints:
  Set m(∅) equal to 1.0;
  For all i, 0 ≤ i < n:
    Set m({ei}) equal to li;
    decrease m(∅) by li;

It is clear that at the conclusion of this phase the value of m(∅) will always be equal to U. However, this value will be incrementally decreased in each of the following phases until it becomes 0 when the algorithm successfully terminates. Since m(∅) will become 0 and the remaining phases will never alter the values of the m({ei}), the following condition holds when the algorithm successfully terminates:

li = Σ_{A⊆{ei}, A≠∅} m(A) = m({ei}), 0 ≤ i < n  (8)

In the remaining phases, m(∅) will decrease in such a way that if it ever becomes less than 0 we will be able to conclude that the value of sn−1 (and possibly others) is greater than U, violating condition (2); if termination occurs with m(∅) greater than 0 we will be able to conclude that condition (3) is violated.

Phase II: Satisfy the upper probability constraints for all events except en−1. At the beginning of each iteration, U* is the value of si which could be computed from the current state of the belief function (excluding the value of m(∅)).
  Set R equal to Θ and U* equal to 0;
  for all i, 0 ≤ i < n−1:
    Set m(R) equal to si − U*;
    set U* equal to U* + m(R);
    decrease m(∅) by m(R);
    remove ei from R;
    if m(∅) is less than 0, si is greater than U.

At the conclusion of this phase, the correct value for the upper probability of each element of the frame of discernment, except for the one with the largest interval, can be computed from the belief function so far constructed, that is:

si = Σ_{A: ei∈A, |A|≥2} m(A), 0 ≤ i < n−1  (9)

Moreover,

U = m(∅) + U*  (10)

The first condition, (9), is guaranteed by the construction of R and the first operation in the 'for' loop. The second condition, (10), is guaranteed by the operations on U* and m(∅). From the second condition it can be seen that if m(∅) has a value less than zero then the interpretation of U* implies that si must be greater than U. This implies that one of the necessary conditions, i.e. (2), was violated by the constraint set.

At the termination of this phase U* is also the value which could be computed for sn−1, neglecting the value of m(∅). This is because every set, R, for which m(R) became non-zero during Phase II contained en−1 (and at least one more atom). The essential action of the next phase is to preserve conditions (9) and (10) while satisfying the upper probability constraint for en−1.

Phase III: Satisfy the upper probability constraint for event en−1. In what follows the interpretation of U* changes from the previous phase: it now refers only to the amount of sn−1 so far accounted for, without any reference to R. S is simply a temporary set which always contains en−1 and one other event.

  Set R equal to Θ;
  for all i, 0 ≤ i < n−2:
    if U* is equal to sn−1, terminate this phase;
    Set S equal to {ei, en−1};
    set m(S) equal to min(sn−1 − U*, m(R));
    decrease m(R) by m(S);
    decrease m(∅) by m(S);
    if m(∅) < 0, sn−1 is greater than U;
    increase U* by m(S);
    remove ei from R;
    increase m(R) by m(S);
  if U* is not equal to sn−1, then sn−1 is greater than U.
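The first three phases transcribe almost directly into code. The following is our own sketch (the dictionary representation of m, the helper names, and the exception-based failure checks are our choices, and Phase IV, which drains any mass remaining on the empty set, is treated in the text below and omitted here):

```python
def construct_bpa_phases_1_to_3(l, u, eps=1e-9):
    """Phases I-III: build a basic probability assignment m (a dict mapping
    frozensets of atom indices to mass) so that Bel({e_i}) = l_i for all i
    and Pl({e_i}) = u_i.  Atoms must be pre-sorted so that s_i = u_i - l_i
    is non-decreasing, and n >= 2 is assumed.  Phase IV (redistributing the
    mass left on the empty set, U - s_{n-1}) is not included in this sketch."""
    n = len(l)
    s = [u[i] - l[i] for i in range(n)]
    empty, theta = frozenset(), frozenset(range(n))
    m = {}
    def add(A, x):
        m[A] = m.get(A, 0.0) + x
    # Phase I: satisfy the belief (lower-bound) constraints.
    add(empty, 1.0)
    for i in range(n):
        add(frozenset([i]), l[i])
        add(empty, -l[i])                 # m(empty) now equals U
    # Phase II: satisfy the upper bounds for e_0 .. e_{n-2}
    # by laying mass on the nested chain Theta, Theta-{e_0}, ...
    R, Ustar = theta, 0.0
    for i in range(n - 1):
        add(R, s[i] - Ustar)
        add(empty, -(s[i] - Ustar))
        Ustar = s[i]
        if m[empty] < -eps:
            raise ValueError("condition (2) violated: some s_i > U")
        R = R - {i}
    # Phase III: satisfy the upper bound for e_{n-1}, moving chain mass
    # into two-element sets {e_i, e_{n-1}} and drawing on m(empty).
    R = theta
    for i in range(n - 2):
        if abs(Ustar - s[n - 1]) <= eps:
            break
        pair = frozenset([i, n - 1])
        mS = min(s[n - 1] - Ustar, m.get(R, 0.0))
        add(pair, mS); add(R, -mS); add(empty, -mS)
        if m[empty] < -eps:
            raise ValueError("condition (2) violated: s_{n-1} > U")
        Ustar += mS
        R = R - {i}
        add(R, mS)
    if abs(Ustar - s[n - 1]) > eps:
        raise ValueError("condition (2) violated: s_{n-1} > U")
    return m
```

For the hypothetical intervals l = (0.2, 0.2, 0.2), u = (0.5, 0.5, 0.6), we have sn−1 = U = 0.4, so m(∅) already reaches 0 after Phase III and the constructed assignment is a proper bpa; in general the remainder U − sn−1 is handled by Phase IV.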
For Phase III to terminate successfully, U* must equal sn−1, and m(∅) must be greater than or equal to 0. We now show that failure to satisfy either of these conditions implies that the constraints did not satisfy the necessary conditions.

First note that conditions (9) and (10) remain valid throughout the execution of this phase: (10) trivially so, since in each iteration whatever is subtracted from m(∅) is added to U*. The preservation of condition (9) is less obvious. Subtracting m(S) from m(R) reduces sj for all j such that ej is an element of R. However these values are restored for

LEMMER & KYBURG 491

all these j's except j = i by removing ei from R and adding to the new m(R) the value of m(S). The value of si had been raised by adding value to m(S) but is restored by the subtraction of value from m(R) while R still contains ei. The net effect of each iteration is to raise U* (the amount of sn−1 accounted for), reduce m(∅), and preserve (9) and (10).

If m(∅) becomes negative, (10) and the meaning of U* imply immediately that sn−1 is greater than U, violating (2). If U* is not equal to sn−1 when this phase terminates, the min operation will have guaranteed that U* must be less than sn−1. Once all iterations of the 'for' loop have been completed, all of U* will have been assigned to m(A) such that every A contains en−1 and exactly one other variable. Thus we will have that

sn−1 > U* = Σ_{i=0}^{n−2} si

Because

S = sn−1 + Σ_{i=0}^{n−2} si

the inequality expression allows us to write

S = sn−1 + (1 − x) sn−1

where x must be positive, so that

(1 − x) sn−1 = U* = Σ_{i=0}^{n−2} si

By necessary condition (3) we can write

S = sn−1 + (1 − x) sn−1 ≥ 2U
sn−1 ≥ (2 / (2 − x)) U
sn−1 > U

which means that the situation we are analyzing cannot arise. Thus when Phase III terminates U* must equal sn−1, and condition (9) is subsumed by

si = Σ_{B: ei∈B, |B|≥2} m(B), 0 ≤ i < n  (11)

All we must do now is preserve (11) while insuring that m(∅) takes on the value of zero.

Phase IV: Convert m to a proper basic probability assignment.
  Set R equal to Θ;
  for all i, 0 ≤ i < n−2:
    Set x equal to min(m(R), 2m(∅)/(|R| − 2));
    decrease m(R) by x;
    decrease m(∅) by x(|R| − 2)/2;
    for every t ⊆ R such that |t| = 2:
      increase m(t) by x/(|R| − 1);
    Remove ei from R;
  if m(∅) does not equal 0, then S < 2U.

We need to show three things to claim that the final phase works correctly: that (11) is preserved, that m(∅) does not become negative, and that, if the phase terminates with positive m(∅), then S was less than twice U. These arguments go through without difficulty.

To see that (11) is preserved, note that when x is subtracted from m(R) the represented (in m) value of sj for every j such that ej is in R is reduced by x. When x/(|R| − 1) is added to each of the |R| − 1 cardinality-two subsets of R containing ej, the originally represented value is restored.

To see that m(∅) never becomes negative, begin by noting that the total added to the belief of the subsets of R will be greater than the amount subtracted from the belief in R. This is true because the amount x/(|R| − 1) will be added |R|(|R| − 1)/2 times to subsets, and |R|/2 is greater than 1 for all |R| greater than 2. Indeed the reason we are doing this phase is to take this additional belief away from m(∅) while preserving (11). The most we want to take away is the amount of m(∅) itself. Thus x can be no more than that defined by the following expression:

x + m(∅) = x|R|/2

which can be solved for x, giving

x = 2m(∅)/(|R| − 2)

But m(R) cannot become negative either, hence the minimum function.

If the phase terminates with positive m(∅), then all iterations of Phase IV must have been performed. Therefore, all the belief mass which was in m(∅) at the end of Phase I (i.e. a mass equal to U) is now either still in m(∅) or is in some of the m(A) such that the cardinality of A is two. This implies that

S = 2 Σ_{A: |A|=2} m(A)

and that

U = m(∅) + Σ_{A: |A|=2} m(A)

allowing us to conclude that S < 2U, which is not allowed by condition (3). Therefore we can conclude that conditions (1), (2), and (3) are sufficient to allow the construction of a belief function, because the algorithm just presented will always construct one when these conditions are met.

Future Work

The first important generalization of these results would be the construction of an algorithm that would take interval assessments of any set of subsets of the event space, rather than just the singleton subsets or a set of basic sets, ensure consistency, and derive a belief function for the whole space. Second, one would like to characterize, not just one, but the whole (infinite!) set of belief functions consistent with a set of consistent interval constraints. A beginning of such a characterization is available in (Lemmer 88). Third, since the representation of uncertainty by belief functions is a special case of the representation of uncertainty by sets of probability functions over the event space, and there are indeed sensible situations in which belief functions won't do (Kyburg 1987), we would like to generalize these results to apply to the representation of uncertainty by sets of probability functions whether or not they correspond to belief functions.

References

Dempster, A.P. 1967. Upper and Lower Probabilities Induced by a Multivalued Mapping. The Annals of Mathematical Statistics 38: 325–339.

Garvey, T.D., Lowrance, J.D., and Fischler, M.A. 1981. An Inference Technique for Integrating Knowledge from Disparate Sources. In Proceedings of the Seventh IJCAI, Vol. 1: 319–325.

Good, I.J. 1962. Subjective Probability As a Measure of a Non-Measurable Set. In Logic, Methodology, and Philosophy of Science, Nagel, E., Suppes, P., and Tarski, A. (eds.), University of California Press: 319–329.

Kyburg, H.E. 1962. Probability and the Logic of Rational Belief. Wesleyan University Press.

Kyburg, H.E. 1987. Bayesian and Non-Bayesian Evidential Updating. Artificial Intelligence 31: 271–294.

Lemmer, J.F. 1988. Causal Probabilistic Reasoning System. Final Technical Report SBIR (88)-160, Knowledge Systems Concepts, Inc., Rome NY.

Savage, L.J. 1954. Foundations of Statistics. John Wiley and Sons.

Shafer, G. 1976. A Mathematical Theory of Evidence. Princeton University Press.

Smith, C.A.B. 1961. Consistency in Statistical Inference and Decision. Journal of the Royal Statistical Society Series B (23): 1–37.
Explanation, Irrelevance and Independence*

Solomon E. Shimony
Computer Science Department
Box 1910, Brown University
Providence, RI 02912
ses@cs.brown.edu

Abstract

We evaluate current explanation schemes. These are either insufficiently general, or suffer from other serious drawbacks. We propose a domain-independent explanation system that is based on ignoring irrelevant variables in a probabilistic setting. We then prove important properties of some specific irrelevance-based schemes and discuss how to implement them.

Introduction

Explanation, finding causes for observed facts (or evidence), is frequently encountered within Artificial Intelligence. For example, some researchers (see [Hobbs and Stickel, 1988], [Charniak and Goldman, 1988], [Stickel, 1988]) view understanding of natural language text as finding the facts (in internal representation format) that would explain the existence of the given text. In automated medical diagnosis (for example the work of [Cooper, 1984], [Shachter, 1986], and [Peng and Reggia, 1987]), one wants to find the disease or set of diseases that explain the observed symptoms. In vision processing, recent research formulates the problem in terms of finding some set of objects that would explain the given image.

Following the method of many researchers (such as cited above), we characterize finding an explanation as follows: given world knowledge in the form of (usually causal) rules, and observed facts (a formula), determine what needs to be assumed in order to predict the evidence. Additionally, we would like to select an explanation that is "optimal" in some sense.

There are various schemes for constructing explanations, among them the pure proof-theoretic "theory of explanation" (see [McDermott, 1987]), set-minimal abduction [Genesereth, 1984], and others; these are usually insufficiently discriminating (i.e.
they would not be able to choose between many candidate explanations), because they only supply a partial ordering of explanations, which may result in mutual incomparability of the best candidates (see [Charniak and Shimony, 1990]). A better explanation construction method is Hobbs and Stickel's weighted abduction [Hobbs and Stickel, 1988], and our variant of it, cost-based abduction [Charniak and Shimony, 1990]. However, the latter schemes do not handle negation correctly, because of the independence assumptions inherent to them. In fact, cost-based abduction may prefer an inconsistent (i.e. 0 probability) explanation to a reasonably probable explanation, as we show in [Shimony, 1990].

Another method suggested recently is the coherence metric, presented in [Ng and Mooney, 1990]. However, coherence suffers from some anomalies, as shown by [Norvig, 1991]. One anomaly is that if the proof subgraph of an explanation happens to contain several intermediate nodes, then that explanation may be spuriously preferred. Another anomaly occurs in cases where we may not want things to be explained by the same fact, and coherence will fail there. Coherence also fails to deal with uncertainty, or with cases where rules or predicates have priorities.

Probabilistic schemes for explanation are sufficiently discriminating, as they provide a total ordering of candidate explanations. These schemes also have a natural semantics, the probabilities of things occurring in the world. For these reasons, we focus exclusively on explanation in a probabilistic setting.

*This work has been supported in part by the National Science Foundation under grants IST 8416034 and IST 8515005 and Office of Naval Research under grant N00014-79-C-0529. The author is funded by a Corinna Borden Keen Fellowship. Special thanks to Eugene Charniak for helpful suggestions and for reviewing drafts of the paper.
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

Unfortunately, while there are numerous probabilistic explanation schemes, all of them are either insufficiently general or have other deficiencies, as shown by Poole in [Poole and Provan, 1990]. One of the schemes, Maximum A-Posteriori (MAP) model explanations (called MPEs in [Pearl, 1988]), is used here as a starting point, because it maximizes both internal consistency of the explanation (the probability of the model) and its "predictiveness" of the evidence. Formally, the MAP is the assignment A to all the variables that maximizes P(A|E), or the "most probable scenario given the evidence, E". We make the simplifying assumption that the world knowledge is represented as (or at least is representable as) a probability distribution in the form of a belief network, as is done by many researchers in the field, such as in [Charniak and Goldman, 1988], [Cooper, 1984], and others.

MAP explanation has its proponents in the research community. Derthick, in his thesis [Derthick, 1988], talks about "reasoning by best model", which is essentially finding the most probable model given the evidence, and performing all reasoning relative to that model. We adopt that idea as a motivation for finding a single "best" explanation.

A serious drawback of MAP explanations and scenarios is that they exhibit anomalies, in the form of the overspecification problem, as demonstrated by Pearl in [Pearl, 1988]. He presents an instance of the problem (the vacation planning problem), a variant of which (where the belief network representation is made explicit) is presented below.

Suppose that I am planning to take some time off on vacation, and take a medical test beforehand. Now, the medical test gives me a 0.2 probability of having cancer¹ (i.e. probability 0.8 of being healthy).
Now, if I am alive and go on vacation whenever I am healthy, then the most-probable state is: I am healthy, alive, and on vacation (ignoring temporal problems). Suppose, however, that I now plan my vacation, and am considering 10 different vacation spots (including one of resting at home), and make them all equally likely, given that I'm healthy (i.e. no preference). Also, assume that if I'm not well, then I will (with high probability) stay at home and die, but I still have a small probability of surviving and going on vacation. One way of representing the data is in the form of a belief network, as shown in Figure 1. The network has three nodes: alive, healthy, and vacation spot². The evidence is a conjunction, where we also allow assignments of U (unassigned) to appear for evidence nodes. The latter signifies that we are interested in an explanation of whatever value the node has, without actual evidence being introduced. Thus, in the example, the "evidence" can be stated as "alive = U".

¹We assume here that (cancer = not healthy), i.e. assume that there are no other diseases, for the purposes of this example.

²The terms nodes and variables are used interchangeably. We use lower-case names and letters for variable names, and their respective capitalized words and letters to refer to either a particular state of the variable, or as a shorthand for a particular assignment to the variable. In addition, a variable name appearing in an equation without explicit assignment means that we actually have a set of equations, one for each possible assignment to the variable. E.g. P(x) = P(y), where x and y are boolean-valued nodes, stands for P(x = T) = P(y = T) ∧ P(x = F) = P(y = F). Assignments are treated as sample-space events when applicable.

In this network, since we have 10 vacation spots, the

P(Healthy) = 0.8
P(stay at home | not Healthy) = 0.9
P(other location | not Healthy) = 0.011…
P(any location | Healthy) = 0.1
P(Alive | Healthy, any location) = 1
P(Alive | not Healthy, stay home) = 0
P(Alive | not Healthy, other location) = 0.1

Figure 1: Belief network for the vacation-planning problem

probability of any scenario where I am alive is 0.08; but the scenario where I die of cancer is the most likely (probability of 0.18)! This property of most-probable scenarios is undesirable, because it is not reasonable to expect to die just because of planning ahead one step.

Pearl suggests that we do not assign nodes having no "evidential support", i.e. those that have no nodes below them in the belief network. This is a good idea, but it is not sufficient, as it clearly does not help in the above example, because the "vacation location" node does have evidential support.

Despite its shortcomings, the MAP scheme does not suffer from potential inconsistencies. We use it as a starting point, and argue that by using a partial Maximum A-Posteriori model as an explanation, we can solve the overspecification problem. We use the intuition that we are not interested in the facts that are irrelevant to our observed facts, and consider models (explanations) where irrelevant variables are unassigned.

We propose two ways to define the class of partial models (or assignments) that we are interested in, i.e. to decide what is irrelevant. The first attempt, independence-based partial assignments, uses statistical independence as a criterion for irrelevance. We then define the independence-based partial MAP as the highest-probability independence-based assignment. We show that it alleviates the overspecification problem in some cases (it solves the vacation-planning problem correctly). We then outline a method of adapting our algorithm for finding complete MAPs, introduced in [Shimony and Charniak, 1990], to compute irrelevance-based partial MAPs. We do that by proving some important properties of independence-based assignments.
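The two probabilities quoted above (0.08 for any scenario in which I am alive, 0.18 for the fatal one) can be reproduced by brute-force enumeration of the joint distribution. In this sketch, the variable names and the split of the "0.011…" mass into 0.1/9 per non-home spot are our own reading of Figure 1:

```python
from itertools import product

# Ten vacation spots: 'home' plus nine interchangeable others.
spots = ['home'] + ['spot%d' % i for i in range(1, 10)]

def p_healthy(h): return 0.8 if h else 0.2
def p_spot(s, h): return 0.1 if h else (0.9 if s == 'home' else 0.1 / 9)
def p_alive(a, h, s):
    pa = 1.0 if h else (0.0 if s == 'home' else 0.1)
    return pa if a else 1.0 - pa

def joint(h, s, a):
    """Joint probability of a complete scenario (healthy, spot, alive)."""
    return p_healthy(h) * p_spot(s, h) * p_alive(a, h, s)

best = max(product([True, False], spots, [True, False]),
           key=lambda w: joint(*w))
# The most probable complete scenario is not-Healthy, stay home, not Alive:
# joint(False, 'home', False) = 0.2 * 0.9 * 1.0 = 0.18, while every single
# scenario with Alive = True has probability at most 0.8 * 0.1 * 1.0 = 0.08.
```

This confirms the overspecification anomaly: committing to a vacation spot splits the "alive" mass ten ways, so the complete MAP is the death scenario.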
Using independence-based MAPs still causes assigning values to variables that we would think of as irrelevant. We propose δ-independence, an improved criterion for deciding irrelevance that is more liberal in recognizing facts as irrelevant. It specifies that a fact is irrelevant if the given facts are independent of it within a tolerance of δ.

SHIMONY 483

Irrelevance-based MAPs

We now define our notion of best probabilistic explanation for the observed facts as the most probable partial model that ignores irrelevant variables. The criteria under which we decide which variables are irrelevant will be defined formally in the following sections. For the moment, we will leave that part of the definition open-ended and rely on the intuitive understanding of irrelevance. Suffice it to say that our definitions of irrelevance will attempt to capture the intuitive meaning of the term.

Definition 1 For a set of variables B, an assignment³ A^S (where S ⊆ B) is an irrelevance-based assignment iff the nodes B − S are irrelevant to the assignment.

In the vacation planning example, we would say that the vacation location is irrelevant to the assignment (Alive, Healthy).

Definition 2 For a distribution over the set of variables B with evidence E, an assignment A^S is an irrelevance-based MAP iff it is the most probable irrelevance-based assignment that is complete w.r.t. the evidence nodes, such that P(E|A^S) = 1.

This is a meta-definition. We will use different definitions of irrelevance-based assignments to generate different versions of irrelevance-based MAPs. With the "intuitive" definition, in our vacation-planning example, the irrelevance-based MAP is (Alive, Healthy), which is the desired scenario. We say that the irrelevance-based MAP w.r.t. the evidence E is the best explanation for it. Note that the definition above is not restricted to belief networks.
Our formal definitions of irrelevance, however, will be restricted to belief networks, and will rely on the directionality of the networks, the "cause and effect" directionality. In belief networks, an arc from u to v states that u is a possible cause for v. Thus, the only possible causes of a node v are its ancestors, and thus (as in Pearl's evidential support), all nodes that are not ancestors of evidence nodes are unassigned. Additionally, we do not assign (i.e. are not "interested" in) nodes that are irrelevant to the evidence given the causes. The ancestors are only potentially relevant, because some other criterion may cause us to decide that they are still irrelevant, as shown in the next section.

Independence-based MAPs

Probabilistic irrelevance is traditionally viewed as statistical independence, or even independence given that we know the value of certain variables (the independence model that is due to Pearl). The latter is known, in the case of belief networks, as d-separation. However, using d-separation as a criterion for deciding which nodes are irrelevant does not suffice for our example, because clearly the "vacation spot" and "alive" nodes are not d-separated by the "healthy" node. As a starting point for our notion of probabilistic irrelevance, we use Subramanian's strong irrelevance ([Subramanian, 1989]). In that paper, SI(f, g, M) is used to signify that f is irrelevant to g in theory M if f is not necessary to prove g in M and vice versa (see [Subramanian, 1989] for the precise definition). We use the syntax of that form of irrelevance, but change the semantics. That is because we are interested in irrelevance of f to g even if g is not true.

³A denotes assignments. The superscript denotes the set of assigned nodes. Thus, A^S denotes an assignment that is complete w.r.t. S, i.e. assigns some value (but not U) to each node in S.
We define probabilistic irrelevance relative to sets of models, rather than theories (as in [Subramanian, 1989]). This is necessary because the (more general) probabilistic representation does not have implications, just conditional probabilities.

Partial assignments induce a set of models. For example, for the set of variables {x, y, z}, each with a binary domain, the assignment {x = T, y = F} with z unassigned induces the set of models {(x = T, y = F, z = F), (x = T, y = F, z = T)}. We will limit ourselves to the sets of models induced by partial assignments, and use the terms interchangeably.

We say that In(f, g|A) if f is independent of g given A (where A is a partial assignment), i.e. if P(f|g, A) = P(f|A). We allow f and g to be either sets of variables or assignments (either partial or complete) to sets of variables. This is similar to Pearl's independence notation, I(X, Y, Z), where variable set X is independent of variable set Z given variable set Y. The difference is that Pearl's notation does not require a certain assignment to Y, just that the assignment be known; whereas our notation does require it. For any disjoint sets of variables X, Y, Z, we have that I(X, Y, Z) implies In(X, Z|A^Y), but not vice-versa.

We now define our first notion of an irrelevance-based assignment formally (we call it an independence-based assignment):

Definition 3 An assignment A^S is an independence-based assignment iff for every node v ∈ S, A({v}) is independent of all its ancestors that are not in S, given A^{S↑⁺(v)}.⁴

The idea behind this definition is that the unassigned nodes above each assigned node v should remain unassigned if they cannot affect v (and thus cannot be used to explain v). Nodes that are not ancestors of v are never used as an explanation of v anyway, because they are not potential causes of v.
Definition 4 An independence-based MAP is an irrelevance-based MAP where independence-based assignments are substituted for irrelevance-based assignments.

In our example, using independence-based MAPs, we have a best scenario of (Alive, Healthy, vacation location undetermined) with a probability of 0.8 as desired. We do not assign a value to vacation location because the only node v with unassigned ancestors is v = alive, and the conditional independence In(alive, vacation spot | Healthy) holds.

Properties of Independence-based Assignments

The independence constraints in the definition of independence-based assignments lead to several interesting properties that are desirable from a computational point of view.

We make the following observation: if, for each assigned variable v, v is independent of all of its unassigned parents given the assignment to the rest of its parents, then the entire assignment is independent of the unassigned ancestors. Thus, to test whether an assignment is independence-based, we only need to test the relation between each node and its parents, and can ignore all the other ancestors. Formally:

Theorem 1 For all assignments A^S that are complete w.r.t. S, the nodes of some subset of belief network B, if for every node v ∈ S, In(A^{{v}}, ↑(v) − S | A^{S↑(v)}), then A^S is an independence-based assignment.

Proof outline: We construct a belief network B', that is the same as B except for intermediate nodes inserted between nodes and their parents.

⁴↑ is shorthand for "immediate predecessors of". ↑⁺ is the non-reflexive, transitive closure of ↑. Thus, ↑⁺(v) is the set of all the ancestors of v. We omit the set-intersection operator between sets whenever unambiguous; thus S↑⁺(v) is the intersection of S with the set of ancestors of v.
The extra nodes map out all possible assignments to each node v and its parents, where nodes are collapsed together whenever v is independent of some subset of its parents given the assignment to the rest of its parents. Then we show that the marginal distribution of B' is the same as B. We use the d-separation of nodes in B' to show independence of nodes and their ancestors in the constructed network, and carry these independence results back to the original network, B.

Theorem 1 allows us to verify that an assignment is independence-based in time linear in the size of the network, and is thus an important theorem to use when we are considering the development of an algorithm to compute independence-based MAPs. Additionally, if B has a strictly positive distribution, then Theorem 1 also holds in the reverse direction. This allows for a linear-time verification that an assignment is not independence-based.

The following theorem allows for efficient computation of P(A^S):

Theorem 2 If In(v, ↑(v) − S, A^{S↑(v)}) for every node v ∈ S, then the probability of A^S is:

P(A^S) = Π_{v∈S} P(A^{{v}} | A^{S↑(v)})

Proof outline: Let O be the set of belief network nodes not in S. We argue that, because of the independence constraints, we can write the joint probability of the entire network, P(nodes(B)), as the product P(A^S) P(A^O_c), for any possible assignment A^O_c to the nodes of O. The joint probability of a belief network can always be written as a product of probabilities of nodes given their parents. To calculate P(A^S), we marginalize A^O out, by summing over all the possible values of the unassigned nodes. Thus, we can write:

P(A^S) = Σ_{A^O_c} P(A^S ∧ A^O_c) = Π_{v∈S} P(A^{{v}} | A^{S↑(v)}) · Σ_{A^O_c} P(A^O_c | A^S)

where the c subscript denotes all possible complete assignments, in this particular case all complete assignments to the set of nodes O. We then argue that the last sum is the sum of the probabilities of a complete sample space, and thus is equal to 1.
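As a concrete instance of the Theorem 2 product for the vacation-planning network (a sketch; the variable names are ours, and the numbers come from Figure 1 above):

```python
# P(A^S) for S = {healthy, alive}, with the vacation-spot parent of 'alive'
# left unassigned: since In(alive, spot | Healthy) holds in this network,
# Theorem 2 says the product runs over the assigned parents only.
p_healthy_true = 0.8         # P(Healthy): 'healthy' has no parents
p_alive_given_healthy = 1.0  # P(Alive | Healthy, any location)

p_assignment = p_healthy_true * p_alive_given_healthy
# p_assignment = 0.8, matching the probability quoted for the best scenario
# (Alive, Healthy, vacation location undetermined).
```

Each factor is read off directly from the conditional distribution of a node given its assigned parents, which is what makes the evaluation linear-time.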
The theorem allows us to find P(A^S) in linear time for independence-based assignments, as the terms of the product are simply conditional probabilities that can be read off from the conditional distribution array (or other representation) of nodes given their parents.

Algorithmic Issues

In order to be able to adapt standard algorithms for MAP computation to compute independence-based MAPs, we need to be able to do two things efficiently: a) test whether an assignment is independence-based, and b) evaluate its probability. This is usually a minimal requirement⁵, whether the form of our algorithm is simulation, best-first search or some other method. In the case of independence-based MAPs, theorems 1 and 2 indeed provide us with linear-time procedures to meet these conditions, which allows us to take a complete-MAP algorithm and convert it to an independence-based MAP algorithm.

We presented a best-first search algorithm for finding complete (rather than partial) MAP assignments to belief networks in [Shimony and Charniak, 1990]. The algorithm finds MAP assignments in linear time for belief networks that are polytrees (i.e. the underlying graph, with all edges replaced by undirected edges, is a set of trees), but is potentially exponential time in the general case, as the problem is provably NP-hard.

The algorithm was modified to compute independence-based MAPs. We describe the algorithm and modifications more fully in [Shimony, 1991], but review it here. An agenda of states is kept, sorted by current probability, which is a product of all conditional probabilities seen in the current expansion. The operation of the algorithm is summarized in the following table.

⁵We can survive without requirement a) if we have a scheme that enumerates independence-based assignments, but even in that case Theorem 1 will help us out.
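An agenda-driven best-first search for complete MAPs can be sketched generically as follows. This is an illustrative reconstruction, not the authors' implementation: the state representation, the `domains` argument, and the tie-breaking counter are our own choices.

```python
import heapq
from itertools import count

def best_first_map(nodes, parents, domains, cpt, evidence=None):
    """Best-first search for the most probable complete assignment.

    nodes    : variable names in topological order (parents before children)
    parents  : node -> tuple of parent nodes
    domains  : node -> list of possible values
    cpt      : node -> function(value, parent_value_tuple) -> probability
    evidence : node -> observed value, fixed in every state
    Each agenda item scores a partial assignment by the product of the
    conditional probabilities seen so far; since every factor is <= 1,
    the first complete assignment popped is a global maximum.
    """
    evidence = evidence or {}
    tie = count()                        # tie-breaker so dicts never compare
    agenda = [(-1.0, next(tie), {}, 0)]  # (-probability, tie, assignment, depth)
    while agenda:
        neg_p, _, assign, depth = heapq.heappop(agenda)
        if depth == len(nodes):
            return assign, -neg_p
        v = nodes[depth]
        for val in ([evidence[v]] if v in evidence else domains[v]):
            pv = tuple(assign[p] for p in parents[v])  # parents already set
            p = cpt[v](val, pv)
            if p > 0.0:                  # prune zero-probability extensions
                child = dict(assign)
                child[v] = val
                heapq.heappush(agenda,
                               (neg_p * p, next(tie), child, depth + 1))
    return None, 0.0
```

On the vacation-planning network of Figure 1 this returns the not-Healthy, stay-home, not-Alive scenario with probability 0.18, i.e. exactly the overspecified complete MAP that motivates the partial-assignment schemes discussed here.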
In both cases, the extension consists of picking a node, and assigning values to its neighbors. Each such as- signment generates a new state. The states are eval- uated and queued onto the agenda. The difference is that with the modified algorithm, when extending a - node, we never assign values to its children, and some of the parents need not be assigned, which actually saves some work w.r.t. the complete-MAP algorithm. Completeness in the modified algorithm is different in that an agenda item may be complete even if not all variables are assigned, and in fact we use the results of theorem 1 directly to check for completion. We will not pursue further details of the algorithm here, as it is discussed in [Shimony, 19911. Evaluating Independence-based MAPS We can see that independence-based assignments solve the vacation-planning problem, in that “vacation spot” is irrelevant to “alive” given “Healthy”, using the con- ditional independence criterion of definition 3. How- ever, this definition of irrelevance is still insufficient because slightly changing conditional probabilities may cause assignment to variables that are still intuitively irrelevant, which may in turn cause the wrong expla- nation to be preferred (the instability problem). The latter problem manifests if we modify the probabilities in our vacation-planning problem slightly, as in the fol- lowing paragraph. Change the probability of being alive given the lo- cation so that probability of “Alive” given “Healthy” and staying at home is still 1, but only 0.99 given “Healthy” and some other location (say an accident is possible during travel). We no longer have indepen- dence, and thus are forced into the bad case of finding the “not alive” scenario as the best explanation. This is counter-intuitive, and we need to find a scheme that can handle “almost” independent cases. 
δ-Independence and Explanation

In order to improve the relevance performance of independence-based explanation, we will attempt to relax the independence constraint that stands at the heart of the scheme. This will allow us to assign fewer variables, hopefully ones that are not independent but still intuitively irrelevant. We relax the exact independence constraint by requiring that the equality hold only within a factor of δ, for some small δ.

Definition 5 We say that a is δ-independent of b given A^S, where a, b and S are sets of variables (in our notation: δ-In(a, b | A^S)), iff P(A^a | A^b, A^S) and P(A^a | A^S) are within a factor of 1 - δ of each other, for all value assignments A^a to a and A^b to b.

This definition is naturally expanded for the case of a and b being (possibly partial) assignments rather than sets of variables (in which case read a for A^a, and b for A^b). This definition is parametric, i.e. δ can vary between 0 and 1.

Definition 6 An assignment A^S is δ-independence based iff, for every v ∈ S, A^S(v) is δ-independent of U given the assignment A^S makes to the parents of v (where U is the set of variables not in S).

Note that the case of δ = 0 reduces to the independence-based assignment criterion. In the case of our modified vacation-planning problem, and δ = 0.1, we get the desired δ-independence based MAP of (Alive, Healthy), alleviating the instability problem.

Using δ-independence as a measure of irrelevance solves the vacation-planning problem and its modified counterpart, but we need to show that finding δ-independence based MAPs is practical. To do that, we need to prove locality theorems similar to theorems 1 and 2. In the former case, it works:

Theorem 3 If A^S is a complete assignment w.r.t. subset S of belief network B, and for every node v ∈ S, A^S(v) is δ-independent of the unassigned ancestors of v given the assignment to its assigned parents (for 0 < δ < 1), then A^S is a δ-independence-based assignment to B.

Proof outline: Expand the probability of node v given its parents as a marginal probability, summing over states of the unassigned indirect ancestors of v. The sum of probabilities over all states of the ancestor nodes equals 1. Using a convexity argument, we show that the minimum probability of v, given any states of its ancestors, occurs when all the parents of v are assigned. Thus, the minimum probability of v given some assignment to its direct parents is smaller than the probability of v given any assignment to the indirect ancestors of v. A similar argument can be made for the respective maxima. Thus, given that the minimum and maximum probabilities (of v given parents) are within a factor of 1 - δ of each other, the minimum and maximum over the states of all unassigned ancestors are also within a factor of 1 - δ.

Computing the exact probability of δ-independence based assignments is hard, but the following bound inequalities are always true, where pa(v) denotes the parents of v and the max and min are taken over all value assignments A^{pa(v)-S} to the unassigned parents of v:

P(A^S) ≤ ∏_{v ∈ S} max P(A^S(v) | A^{pa(v)-S}, A^{S∩pa(v)})
P(A^S) ≥ ∏_{v ∈ S} min P(A^S(v) | A^{pa(v)-S}, A^{S∩pa(v)})

Proof outline: Apply the argument of theorem 3 to all v ∈ S, and the multiplicative property of belief networks (as in the proof of theorem 2), to get the inequalities above.

486 BELIEF FUNCTIONS

The bounds get better as δ approaches 0, as their ratio is at least (1 - δ)^|S|. They can be computed using only local information, the conditional distribution arrays of v given its parents, for each v in the assignment. In the worst case, we need to scan all the possible value assignments to the (currently unassigned) parents of each v, but the computation time is still linear in |S|. In practice, it is possible (and feasible) to precompute the bounds for each v and each possible δ-independence based assignment to v and its parents (not all possible δ-independence based assignments to the entire network). In [Shimony, 1991], we show how to adapt our basic MAP algorithm to compute δ-independence based MAPs.

There is, however, a problem with δ-independence, that of determining the correct value of δ. In the above example, where did the value δ = 0.1 come from?
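The local computation behind these bounds can be sketched as follows (a toy example with invented numbers, not the paper's code): for one assigned node, scan the value combinations of its unassigned parents and record the extreme conditional probabilities.

```python
import itertools

# Hypothetical CPT for a node v with parents (p1, p2); key: (v, p1, p2).
cpt_v = {(1, 0, 0): 0.70, (1, 0, 1): 0.72, (1, 1, 0): 0.90, (1, 1, 1): 0.88}

def local_bounds(value, assigned, cpt, parents, domains):
    """Min and max of P(v = value | parents) over all value combinations
    of the parents left unassigned; the probability of the whole partial
    assignment is then bounded by products of these per-node terms."""
    free = [p for p in parents if p not in assigned]
    probs = []
    for combo in itertools.product(*(domains[p] for p in free)):
        full = dict(assigned)
        full.update(zip(free, combo))
        probs.append(cpt[(value,) + tuple(full[p] for p in parents)])
    return min(probs), max(probs)

lo, hi = local_bounds(1, {'p1': 0}, cpt_v, ('p1', 'p2'), {'p2': (0, 1)})
# With p1 = 0 assigned and p2 free, the bounds are 0.70 and 0.72; their
# ratio exceeds 1 - delta for delta = 0.1, the delta-independence test.
```

The scan is linear in the number of parent-value combinations per node, mirroring the worst-case cost noted in the text.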
We could state that this is some tolerance beyond which we judge variables to be sufficiently near to independence, and that we can indeed pick some value and use it for all explanation problems successfully. That does not appear to be the case in certain examples we can cook up, especially when nodes can have many possible values (say, more than 10 values). A good solution to the problem is using a variable δ. We are looking into a method of making δ dependent on prior probabilities of nodes, but are also considering basing its value on the number of values in a node's domain.

Summary

Previous research ([Poole and Provan, 1990], [Charniak and Shimony, 1990], and [Shimony, 1990]) has shown that existing explanation systems have major drawbacks. We have looked at probabilistic systems for a solution. We started off with MAP explanations, and observed that the system suffers from overspecifying irrelevant variables. We defined the concept of partial MAPs, and a particular kind of partial MAP called "irrelevance-based MAP", in which "intuitively irrelevant" nodes are left unassigned. We then defined irrelevance as statistical independence, showed how it helps in certain cases, and proved important properties of independence-based assignments that facilitate designing an algorithm to compute them. Independence-based MAPs still suffer from irrelevant assignments, and we discussed the relaxation of the independence-based assignment criterion, by using δ-independence, to solve the problem.

References

Charniak, Eugene and Goldman, Robert 1988. A logic for semantic interpretation. In Proceedings of the ACL Conference.
Charniak, Eugene and Shimony, Solomon E. 1990. Probabilistic semantics for cost-based abduction. In Proceedings of the 8th National Conference on AI.
Cooper, Gregory Floyd 1984. NESTOR: A Computer-Based Medical Diagnosis Aid that Integrates Causal and Probabilistic Knowledge. Ph.D. Dissertation, Stanford University.
Derthick, Mark 1988.
Mundane Reasoning by Parallel Constraint Satisfaction. Ph.D. Dissertation, Carnegie Mellon University. Technical report CMU-CS-88-182.
Genesereth, Michael R. 1984. The use of design descriptions in automated diagnosis. Artificial Intelligence 411-436.
Hobbs, Jerry R. and Stickel, Mark 1988. Interpretation as abduction. In Proceedings of the 26th Conference of the ACL.
McDermott, Drew V. 1987. Critique of pure reason. Computational Intelligence 3:151-160.
Ng, Hwee Tou and Mooney, Raymond J. 1990. On the coherence in abductive explanation. In Proceedings of the 8th National Conference on AI. 337-342.
Norvig, Peter 1991. Personal communication.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
Peng, Y. and Reggia, J. A. 1987. A probabilistic causal model for diagnostic problem solving (parts 1 and 2). In IEEE Transactions on Systems, Man and Cybernetics. 146-162 and 395-406.
Poole, David and Provan, Gregory M. 1990. What is an optimal diagnosis? In Proceedings of the 6th Conference on Uncertainty in AI. 46-53.
Shachter, R. D. 1986. Evaluating influence diagrams. Operations Research 34:871-882.
Shimony, Solomon E. and Charniak, Eugene 1990. A new algorithm for finding MAP assignments to belief networks. In Proceedings of the 6th Conference on Uncertainty in AI.
Shimony, Solomon E. 1990. On irrelevance and partial assignments to belief networks. Technical Report CS-90-14, Computer Science Department, Brown University.
Shimony, Solomon E. 1991. Algorithms for irrelevance-based partial MAPs. Submitted to the Conference on Uncertainty in AI.
Stickel, Mark E. 1988. A Prolog-like inference system for computing minimum-cost abductive explanations in natural-language interpretation. Technical Report 451, Artificial Intelligence Center, SRI.
Subramanian, Devika 1989. A Theory of Justified Reformulations. Ph.D. Dissertation, Stanford University. Technical report STAN-CS-89-1260.
Logic morphisms as a framework for backward transfer of lemmas and strategies in some modal and epistemic logics

Ricardo Caferra, Stéphane Demri, Michel Herment
LIFIA-IMAG, 46, Av. Félix Viallet, 38031 Grenoble Cedex, FRANCE
{caferra | demri | herment}@lifia.imag.fr (uucp)

There exist methods in automated theorem proving for non-classical logics based on translation of logics from a (non-classical) source logic (abbreviated henceforth SL) into a (classical) target logic (abbreviated henceforth TL). These valuable methods do not address the important practical problem of presenting proofs in SL. We propose a framework, applicable at least to S4(p), K, T and K4, for presenting proofs of theorems of these logics found in a familiar TL: Order-Sorted Predicate Logic (abbreviated henceforth OSPL). The method backward translates lemmas in a deduction (in TL) either (a) into lemmas in a corresponding deduction in SL (in the best case), or (b) into formulas semantically related to lemmas in a corresponding deduction (in the worst case). As a natural consequence we bring to the fore the fact that this framework can also be used to help in solving another important and very difficult problem: the transfer of strategies from one logic to another. One conjecture (with a corresponding theorem which is a particular case of it) is stated. When (b) above holds, we give sufficient (and in general satisfactory) conditions in order to obtain the lemmas in SL. Two examples are treated in full detail: the well-known "wise man puzzle" and another one which shows how our method can be used to transfer strategies. No additional theoretical result is given in this direction, but it is clear from the example how the proposed framework can help to transfer strategies.

1. Introduction

Relating logics is a technique that has been used in automated deduction for non-classical logics:
- By E.
Orlowska, who introduced the notion of resolution-interpretability of a logic in another logic and applied it in order to construct theorem proving systems for algorithmic and m-valued Post logics (Orlowska 79, 80).
- More recently by A. Herzig and H.-J. Ohlbach, who emphasize the idea of logic morphism which is implicitly used in the work of E. Orlowska (Herzig 89, Ohlbach 89).
The works of Ohlbach and Herzig, in which unification plays a central role, were applied to several classes of modal and temporal logics. In these works the notions of source logic (SL) and target logic (TL) can be identified. SL is a logic to which we want to transfer results, in which we want to prove theorems and for which we do not have a good theorem prover, or a theorem prover at all. TL is a logic for which we know a lot of results and for which we have good theorem provers with good complete strategies. In automated deduction for non-classical logics we can either use existing methods for these logics, tableau-based (see for example (Fitting 83)) or resolution-based (see for example (Enjalbert & Fariñas del Cerro 1989)), or use translation methods (i.e. based on logic morphisms). Our work is set in the latter context. A common feature of all works centered on logic morphisms is that they do not worry about a very important practical problem: presenting proofs in the source logic. In a fully general approach this amounts to translating proofs between (arbitrary) logics and, obviously, we shall not attack this problem. Instead, we shall deal with the problem of translating (for certain logics) some formulas of a proof found in the TL into formulas in a corresponding proof in the SL. We shall identify conditions allowing backward translation of lemmas of proofs from a particular TL, Order-Sorted Predicate Logic (OSPL), into some different (non-classical) SLs: S4(p), K, T, K4.
OSPL is a "good" TL because very good theorem provers with good complete strategies are available for it. In order to expound our work we shall limit ourselves in this paper to the multi-modal propositional logic S4(p), but it will distinctly appear that similar results can be straightforwardly obtained for K, T and K4. A nice "by-product" of our results about transfer of lemmas is their usefulness as a framework for studying strategy transfer. It is well known that finding good strategies (and, if possible, proving their completeness) is one of the central problems in automated deduction. If a strategy has been used to obtain a proof in TL, the steps of the proof can be considered as "keeping trace" of the strategy. Therefore a framework for studying the transfer of lemmas is also a framework for studying the transfer of strategies from TL to SL.

CAFERRA, DEMRI, & HERMENT 421
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

The structure of the paper is the following: in section 2 we present the basic definitions concerning logic morphisms, a recall of S4(p) (S4 with several agents) and the definition of a partial inverse morphism. Section 3 contains the definition of a particular partial inverse morphism between OSPL and S4(p). Section 4 contains the main results of this paper: we state one conjecture and prove one related theorem. A well known example ("the wise man puzzle") is treated in detail. Another example shows the possibilities of using our approach as a framework for translating strategies. We have chosen the latter example in order to compare our method with a completely different one. Recently (Auffray, Enjalbert, & Hebrard 1990) proved that the unit strategy is complete also for some modal logics. In applying our backward lemma translation we obtain a result similar to the latter one. Presentation of the examples in section 4 is done with our proof-editor system (Caferra, Demri, & Herment 91).
Section 5 gives some ideas for future work.

2. Basic definitions

We use the notion of logic morphism from (Ohlbach 89) and we add the notion of partial inverse morphism corresponding to the backward lemma transfer from TL to SL. For the sake of simplicity we shall consider, both in TL and SL, only formulas in clausal form. We shall not formally define the notions of logics, specification morphisms and logic morphisms here, but the main ideas of the approach in (Ohlbach 89) can be summarized as follows. This approach is based on the proviso that there exist i) a (good) theorem prover for a target logic and ii) a standard translation from a context logic (i.e. an intermediate logic, abbreviated henceforth CL) to a target logic. The task of writing theorem provers for new logics is therefore changed into the task of finding a way of translating these new logics to CL. "Translating" means here translating the syntax and the semantics. Ohlbach uses "morphism" to name this translation; we shall keep this name in this paper. In some standard cases the semantics of SL is syntactically captured by terms of the context logic via an axiomatization of the properties of semantic structures. Moreover, CL distinguishes in its syntax the context terms which capture the semantics of SL (for instance, "U" denotes the application function, "∘" the composition function and "0c" the initial world) from the domain terms. First-order logic, CL, S4(p) and OSPL can be redefined within this formalism. Our definition of a partial inverse morphism fits in this theoretical framework. The following definition uses the notations from (Ohlbach 89).

Definition 1 A partial inverse morphism φ for the logic morphism Ψ between the two logics L1 and L2 is a mapping such that: for σ ∈ Σ1 (the set of signatures for L1), φ(σ) is a partial function between φ2(Ψ(σ)) and φ1(σ) (φ1 and φ2 map a signature to a set of formulas).
In the sequel we shall note φ(F) the application of φ(σ) to F (if it exists). We illustrate the use of morphisms and partial inverse morphisms between logics by taking S4(p) as SL and OSPL as TL. The logic S4(p) is widely considered as the basic model for epistemic logics. In a world with p agents, the intended interpretation of □iA is "agent i knows that A is true". The syntax and semantics for S4(p) and its associated resolution calculus are those used in (Enjalbert & Fariñas del Cerro 89). In the sequel we shall use a slight modification of the logic morphism defined in (Ohlbach 89): each agent generates its own context sorts. We associate to OSPL the resolution method defined in (Schmidt-Schauß 88).

3. Morphism and partial inverse morphism

The morphism we shall use is a slight modification of the one between S4(p) and CL defined in (Ohlbach 89): the context sorts are indexed by the different agents. For example, the modal part of the formula morphism is:
ΨF(□i f) = ∀x:'W->*W'i ΨF(f),    ΨF(◇i f) = ∃x:'W->*W'i ΨF(f).
Let Σ be a set of clauses in S4(p) and let Σ' be the translation of those clauses into OSPL. Let Φ' be a clause provable from Σ' and let Φ be the possible inverse translation of Φ' back into S4(p). We will define the conditions to backward translate Φ', and we will state, as a conjecture, that Φ is always provable from Σ. We can now define the partial inverse morphism. As proofs in OSPL are sequences of clauses, each lemma is a clause; therefore it suffices to define backward translation for clauses. To translate a clause, we capture the underlying semantics of the transition from one world to another by introducing a modal operator. The principle of the backward translation consists in gathering the literals which have some context in common.
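The gathering principle just described can be made concrete with a toy sketch (our own illustrative encoding, not the paper's φ): a literal carries its world-path context as a tuple, world symbols without an '@' prefix stand for variables and translate back to a box, '@'-constants translate to a diamond, and literals sharing a leading context symbol are grouped under one operator.

```python
def generate_op(t):
    # World-path symbols without '@' stand for variables (from universal
    # quantifiers) and map back to a box; '@'-constants map to a diamond.
    return '[]' if not t.startswith('@') else '<>'

def backtranslate(literals):
    """literals: list of (predicate, context tuple). Literals sharing the
    leading context symbol are gathered under a single modal operator."""
    if all(not ctx for _, ctx in literals):
        return ' v '.join(pred for pred, _ in literals)
    head = next(ctx[0] for _, ctx in literals if ctx)
    shared = [(p, ctx[1:]) for p, ctx in literals if ctx and ctx[0] == head]
    rest = [(p, ctx) for p, ctx in literals if not ctx or ctx[0] != head]
    inner = generate_op(head) + '(' + backtranslate(shared) + ')'
    return inner if not rest else inner + ' v ' + backtranslate(rest)

# P([x, @z]) v Q([x, @y]): the shared variable x is factored into one box.
print(backtranslate([('P', ('x', '@z')), ('Q', ('x', '@y'))]))
```

The recursion peels one shared context symbol at a time, so each modal operator in the output corresponds to one transition along the common world path.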
Each literal L has the form sP(pw(C)) where s ∈ {Λ, ¬} (as usual Λ denotes the empty string), P is a predicate symbol not appearing in the axioms, pw is a projection operator introduced by the morphism, and C is a context term. First of all, we algorithmically define auxiliary functions which deal with syntactic transformations:

* generate-op(t) := if t is a variable of sort 'W->*W'i then □i else ◇i    % t is a context term %
* local-context((t1, ..., tn)) := if n = 1 then generate-op(t1) else generate-op(t1) . local-context((t2, ..., tn))    % '.' is the concatenation operator, the ti are context terms %
* p(U(x1 ∘ ... ∘ xn, 0c), m) := if m = n then 0c else U(x(m+1) ∘ ... ∘ xn, 0c)    % 0 ≤ m ≤ n %

This last function takes into account the fact that each context term denoting a path-world is equivalent to 0c or to a term of the form U(t1 ∘ ... ∘ tn, 0c).

* φ1(L) := if C = 0c then P else generate-op(t1) . φ1(sP(pw(p(C, 1))))    % L = sP(pw(C)) and C is a context term %

Definition of φ
Let Φ be a clause L1 ∨ ... ∨ Ln. We define a set of classes of literals {Cj, 1 ≤ j ≤ k} (k ≤ n) which is a partition of the set of literals of Φ. Moreover the classes have the following properties:

422 GENERAL DEDUCTION SYSTEMS

For 1 ≤ j ≤ k, there exists αj such that    % each Q ∈ Cj has the form sQ PQ(pw(U(tQ,1 ∘ ... ∘ tQ,nQ, 0c))) or sQ PQ(pw(0c)) %
- ∀ P, Q ∈ Cj, ∀ i, 1 ≤ i ≤ αj: tP,i = tQ,i, and tP,αj+1 ≠ tQ,αj+1 if both tP,αj+1 and tQ,αj+1 exist;
- ∀ P ∈ Cj, ∀ Q ∉ Cj, ∃ i, 1 ≤ i ≤ αj: tP,i ≠ tQ,i;
- if αj = 0 then every L ∈ Cj has the form sP(pw(0c)).

It is easy to verify that the decomposition into classes is unique. The definition of φ(Φ) will use the function α which takes as argument a set of literals and which returns a modal formula.

Calculus of α(C) for the set of literals {l1, ..., lp}    % li = si Pi(pw(Ci)) %
α(C) := case:
- p = 1: φ1(l1)
- αC = 0: s1P1 ∨ ... ∨ spPp    % αC is the α of the class C %
- otherwise: let l = (t1, ..., tαC) be the common context prefix;
  L1 = {lit | ∃ s.P(pw(c)) ∈ C with lit = s.P(pw(p(c, αC))) and nP - αC ≠ 0}
  L2 = {lit | ∃ s.P(pw(c)) ∈ C with lit = s.P(pw(0c)) and nP - αC = 0}
  α(C) := local-context(l) . φ(∨ lit, for lit ∈ L1 ∪ L2)

We can now define φ(Φ): φ(Φ) = α(C1) ∨ ... ∨ α(Ck).

Example of a backward translation
L = P(pw(U(x ∘ z, 0c))) ∨ Q(pw(U(x ∘ y, 0c)))
φ(L) = □n φ(P(pw(U(z, 0c))) ∨ Q(pw(U(y, 0c)))) = □n(◇nP ∨ ◇nQ)
where the sort of x, y, z is 'W->*W'n. We shall abbreviate pw(U(t1 ∘ ... ∘ tn, 0c)) by [t1, ..., tn].

4. Properties of the backward translation

Given a formula in SL and a proof of its translation in TL, it seems natural to expect a backward translated lemma to belong to the proof in SL, or at least to be in some relation with formulas in the SL proof. This relation is precisely stated by the following conjecture. We did not find any counterexample for it, but we can establish as a theorem only a weaker result (see Theorem 1 below).

Conjecture
i) Let f be an S4(p)-formula and C1, ..., CN be the S4(p)-clauses such that ⊨S4(p) C1 ∧ ... ∧ CN ↔ f.
ii) Let Ψ be the logic morphism between S4(p) and OSPL (the one described in section 2). {A1, ..., Ak} is the set of axioms introduced by the specification morphism.
iii) We note ΨF(Ci) = Ki for 1 ≤ i ≤ N. Let Li1, ..., Lini be OSPL-clauses such that Li1 ∧ ... ∧ Lini is satisfiable iff Ki is satisfiable.
iv) Let P = {s1, ..., sn} be a deduction using the resolution and paramodulation rules from the set of OSPL-clauses {Lij | 1 ≤ i ≤ N and 1 ≤ j ≤ ni} ∪ {A1, ..., Ak}.
Then there exists a proof P' = {s'1, ..., s'm} in S4(p) (using the resolution system defined in (Enjalbert & Fariñas del Cerro 89)) from {C1, ..., CN} such that:
(*) if each predicate symbol that occurs in sj (1 ≤ j ≤ n) corresponds to one symbol in f, then there exists k (1 ≤ k ≤ m) such that s'k ⊨S4(p) φ(sj), and if sj is the OSPL empty clause then s'k is the S4(p) empty clause.

Theorem 2 below gives conditions allowing to obtain s'k from φ(sj). We first propose a theorem which particularizes the conjecture to a special kind of S4(p)-clauses. We need the following definition.
Definition 2 Let Oi (1 ≤ i ≤ n) be multi-modal operators. A simple formula F is a propositional multi-modal formula of the form F = O1...OnL, n ≥ 0, where L is a propositional literal (i.e. of the form sP, s ∈ {Λ, ¬}).

Theorem 1
- i) and ii) as in the conjecture; moreover we assume that each Ci has the following form: S1 ∨ ... ∨ Sn, where each Si is a simple formula.
- For 1 ≤ i ≤ N, ΨF(Ci) = C'i, and in this case C'i is an OSPL-clause.
- Let P = {s1, ..., sn} be a deduction using the resolution rule from the following set of OSPL-clauses: {C'1, ..., C'N} ∪ {A1, ..., Ak}.
- Then there exists a proof P' = {s'1, ..., s'm} in S4(p) from {C1, ..., CN} such that (*) (in the Conjecture) holds.

Proof (sketch) To build P' verifying the conditions stated in the theorem, we can show that in this particular case n = m and, for 1 ≤ i ≤ n, φ(si) = s'i. The proof is by induction on the number of steps of P.
- Base case: if L is an initial clause (in TL) and if φ(L) exists, then φ(L) is also an initial clause (in SL).
- Induction step: if L is obtained by the resolution rule from Su and Sv, then φ(L) can be obtained from φ(Su) and φ(Sv) with the modal resolution. To do so, we distinguish three cases according to the nature of the unifier. □

The corollary below gives a simple syntactic criterion to test whether s'k ⊨S4(p) φ(sj) is verified. It offers a satisfactory solution in a lot of cases. We first define a set of syntactic transformation rules.

Definition 3 According to the definition of clauses found in (Enjalbert & Fariñas del Cerro 89) (and usually adopted in the modal logic literature), let {A1, ..., An} be a set of disjunctive S4(p)-formulas, {B1, ..., Bm} be a set of conjunctive S4(p)-formulas, and {F0, ..., Fs} be a set of S4(p)-formulas. We note u → v the substitution of u by v (see the simplification rules in (Abadi & Manna 86)). Here is a set of rules:
(R1) ◇(A1 ∧ ... ∧ An) → ◇A1 ∧ ... ∧ ◇An
(R2) □(B1 ∧ ... ∧ Bm) → □B1 ∧ ... ∧ □Bm
(R3) ◇(F1 ∨ ... ∨ Fs) → ◇F1 ∨ ... ∨ ◇Fs
(D1) F0 ∨ (F1 ∧ ... ∧ Fs) → (F0 ∨ F1) ∧ ... ∧ (F0 ∨ Fs)

* Transformation S (applied to a clause C):
Step 1: Modify C by applying R1 and R2 as long as possible, with higher priority to the deepest subformulas of C.
Step 2: Apply D1 to C with the same restrictions. If the resulting formula is equal to C then go to step 3, else go to step 1.
Step 3: The result of S is C, and C has the following form: C = C1 ∧ ... ∧ Cn where each Ci is an S4(p)-clause. We note S(C) = C1 ∧ ... ∧ Cn, or C ->> {C1, ..., Cn}.
* Transformation T (applied to φ(L)):
Apply R3 to φ(L) as long as possible, with higher priority for the deepest subformulas of φ(L). Return the resulting formula.
To prove theorem 2, the Monotonicity of Entailment Lemma from (Abadi & Manna 86) can be used.

Theorem 2 (1) S terminates for every clause C, and ⊨S4(p) C ⇒ S(C); and (2) T terminates for every backward lemma φ(L), and ⊨S4(p) φ(L) ⇔ T(φ(L)). □

Corollary If the conditions i), ii) and iii) below hold:
i) L is a lemma verifying the conditions of the Conjecture.
ii) C is a clause which has been deduced from {C1, ..., CN}. Let {f1, ..., fk} be the set such that C ->> {f1, ..., fk}.
iii) There exists a (1 ≤ a ≤ k) such that every disjunct of fa is a disjunct of T(φ(L)),
then C ⊨S4(p) φ(L). □

The backward translation we built between OSPL and S4(p), and its properties with respect to the modal resolution system for S4(p) (Enjalbert & Fariñas del Cerro 89), can be straightforwardly adapted to some other normal modal logics. By keeping the same backward translation between OSPL and K, K4 or T, the logic morphism between OSPL and these logics is mainly contained in the one defined for the multi-modal logic (Ohlbach 89). We get similar properties considering the modal resolution system (Enjalbert & Fariñas del Cerro 89) for them. For instance, theorem 1 (and theorem 2) can be proved in the same way for K, K4 or T. The corresponding conjecture still remains a plausible theorem.
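The rule-driven normalization behind transformation S can be sketched as follows (our own hypothetical encoding, not the paper's implementation: formulas are atoms or tagged tuples, and R1/R2 plus D1 are applied to a fixpoint):

```python
# Formulas: atoms are strings; ('box', f) and ('dia', f) are modal formulas;
# ('and', f1, ..., fn) and ('or', f1, ..., fn) are conjunction/disjunction.
def step(f):
    if isinstance(f, str):
        return f
    op, *args = f
    args = [step(a) for a in args]
    # R1/R2: push a modality through a conjunction.
    if op in ('box', 'dia') and isinstance(args[0], tuple) and args[0][0] == 'and':
        return ('and',) + tuple((op, a) for a in args[0][1:])
    # D1: distribute a disjunction over an inner conjunction.
    if op == 'or':
        for i, a in enumerate(args):
            if isinstance(a, tuple) and a[0] == 'and':
                others = args[:i] + args[i + 1:]
                return ('and',) + tuple(step(('or', c, *others)) for c in a[1:])
    return (op, *args)

def transform_S(f):
    # Apply the rules until a fixpoint: the result is a conjunction of clauses.
    g = step(f)
    while g != f:
        f, g = g, step(g)
    return f

print(transform_S(('box', ('and', 'A', 'B'))))
```

As in the text, applying the rules deepest-first until nothing changes leaves a conjunction whose conjuncts are clauses; note that D1 may reorder disjuncts, which is harmless for entailment.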
The next example shows how the backward translation guides the proof in SL.

Example 1 ("Wise Man Puzzle") We illustrate our method with a famous example from McCarthy. Its traditional form is: "A certain king wishes to determine which of his three wise men is the wisest. He arranges them in a circle so that they can see and hear each other and tells them that he will put a white or black spot on each of their foreheads but at least one spot will be white. In fact all three spots are white. He then offers his favor to the one who first tells him the color of his spot. After a while, the wisest announces that his spot is white. How does he know?"

To axiomatize this puzzle in S4(p), we assume that the three wise men are A, B, C and that C is the wisest. At least one of them has a white spot, and everyone knows that everybody else knows that his colleagues know this:
A1: □i □j □k (PA ∨ PB ∨ PC) for i, j, k ∈ {A, B, C} and {i, j, k} = {A, B, C} (Pi for i ∈ {A, B, C} means that wise man i has a white spot; A1 denotes 6 axioms according to i, j and k).
The three men can see each other and they know this. Whenever one of them has a white or black spot, he knows that his colleagues know this, and he also knows that his colleagues know this of each other:
A2: □i (¬Pi ⇒ □j ¬Pi) for i, j ∈ {A, B, C} and i ≠ j.
A3: □i □j (¬Pi ⇒ □k ¬Pi) for i, j, k ∈ {A, B, C} and {i, j, k} = {A, B, C}.
A4: □i □j (¬Pj ⇒ □k ¬Pj) for i, j, k ∈ {A, B, C} and {i, j, k} = {A, B, C}.
C knows that B does not know the colour of his spot, and C knows that B knows that A does not know the colour of his spot:
A5: □C ¬□B PB
A6: □C □B ¬□A PA
We have considered 26 axioms, from which we would like to deduce □C PC (A7). We translate these formulas into OSPL:
A'1: ∀xi1:'W->*W'i ∀xj2:'W->*W'j ∀xk3:'W->*W'k  PA([xi1, xj2, xk3]) ∨ PB([xi1, xj2, xk3]) ∨ PC([xi1, xj2, xk3]) for i, j, k ∈ {A, B, C} and {i, j, k} = {A, B, C}.
A'2: ∀xi1:'W->*W'i ∀xj2:'W->*W'j  ¬Pi([xi1]) ⇒ ¬Pi([xi1, xj2]) for i, j ∈ {A, B, C} and i ≠ j.
A'3: ∀xi1:'W->*W'i ∀xj2:'W->*W'j ∀xk3:'W->*W'k  ¬Pi([xi1, xj2]) ⇒ ¬Pi([xi1, xj2, xk3]) for i, j, k ∈ {A, B, C} and {i, j, k} = {A, B, C}.
A'4: ∀xi1:'W->*W'i ∀xj2:'W->*W'j ∀xk3:'W->*W'k  ¬Pj([xi1, xj2]) ⇒ ¬Pj([xi1, xj2, xk3]) for i, j, k ∈ {A, B, C} and {i, j, k} = {A, B, C}.
A'5: ∀xC1:'W->*W'C  ¬PB([xC1, @B]), where @B is a constant of sort 'W->*W'B.
A'6: ∀xC1:'W->*W'C ∀xB2:'W->*W'B  ¬PA([xC1, xB2, @A]), where @A is a constant of sort 'W->*W'A.
The translation of A7 is (A'7): ∀xC1:'W->*W'C  PC([xC1]).
The deduction of the particular lemma A'7 is presented in Figure 1 (with our proof-editor). Let L be the lemma PC([xC]). The backward translation of L is A7. To deduce A7 we can consider only the source S4(p)-axioms which have generated, through the specification morphism, the initial clauses IC'i useful to deduce A'7 (cf. the OSPL proof). The only initial axioms (in clausal form) we consider are IC1, ..., IC6 (cf. the S4(p) proof below). It should be noticed that the search space has been reduced with respect to the initial set of axioms (we have now 6 axioms instead of 26). We present the backward translation of every lemma of the OSPL proof.

Backward translation of the clauses from the OSPL proof:
For 1 ≤ i ≤ 6, φ(IC'i) = ICi (initial clauses);
φ(GC'2) = □C ◇B ◇A (¬PB ∨ PC); φ(A'7) = A7;
φ(GC'3) = □C ◇B ◇A PC; φ(GC'4) = □C ◇B PC.
We first present the proof of φ(L) in S4(p) (Figure 2) and then we show how the conjecture holds and how the corollary has been a guide to build the S4(p) proof.

(Figure 1: the OSPL deduction of A'7. Figure 2: the S4(p) proof.)

We note that, for 1 ≤ i ≤ 6, ICi ⊨S4(p) φ(IC'i), and GC1 ⊨S4(p) φ(GC'1), GC2 ⊨S4(p) φ(GC'2), GCi ⊨S4(p) φ(GC'i) for i = 3, 4, and A7 ⊨S4(p) φ(A'7). We prove GC3 ⊨S4(p) φ(GC'3).
GC3 ->> {□C ◇B ◇A (PB ∨ PC), □C ◇B ◇A PC, □C ◇B ◇A ¬PA, □C ◇B ¬PB, □C ◇B □A ¬P} and T(φ(GC'3)) = φ(GC'3). Since every disjunct of □C ◇B ◇A PC is a disjunct of φ(GC'3), by the application of the corollary GC3 ⊨S4(p) φ(GC'3) holds. We get similar results for the other backward translated generated clauses. In SL, in order to find a proof of A7, we can consider the lemma GC'3 of the proof in TL. In the modal system we first generate clauses until some clauses (GC3) which verify the above conditions (a simple criterion is presented in the corollary) are deduced. The sequel of the proof will favour the use of these clauses. If we consider that every deduced clause is a lemma, we can "almost" build the proof in SL (in this particular case it is possible to build the whole proof).

The next example shows how our method can be used as a framework for the transfer of strategies.

Example 2 The following formula (from (Auffray, Enjalbert, & Hebrard 90)) is an S4-theorem for which the proof in OSPL verifies the hypothesis of the Conjecture. We consider the following S4-clauses:
(C1): A ∨ ◇◇((B ∧ ¬C) ∨ ¬B)
(C2): □(◇C ∨ ¬D) ∨ ¬B
(C3): □D
(C4): B
(C5): ¬A
We translate these formulas into OSPL:
(C'1): A(pw(0c)) ∨ (B([@1, @2]) ∧ ¬C([@1, @2])) ∨ ¬B([@1, @2]), where @1 and @2 are constants of sort 'W->*W'. Let K11, K21 be OSPL-clauses such that ⊨OSPL K11 ∧ K21 ↔ C'1 (see Figure 3).
(C'2): ∀x, y:'W->*W'  C([x, y]) ∨ ¬D([x]) ∨ ¬B(pw(0c))
(C'3): ∀x:'W->*W'  D([x])
(C'4): B(pw(0c))
(C'5): ¬A(pw(0c))
A refutation has been obtained using a linear-input strategy (i.e. a linear strategy in which at least one of the clauses used in each resolution step is an input clause). The refutation, as presented by our proof-editor, is shown in Figure 3.

(Figure 3: the OSPL refutation.)

Using results of the preceding sections we get the proof in S4 presented in Figure 4.
(Figure 4: the S4 proof.)

It is easy to verify that, by backward translating the proof found in OSPL, the proof is essentially the same as the one in (Auffray, Enjalbert, & Hebrard 90): the S4 proof implicitly uses the input strategy, but the linear strategy is only partially used in the global proof. Clearly we did not prove the completeness of the unit strategy, but from a practical point of view the result is the same. On the other hand, our approach is much more general because it allows us to study different strategies (or heuristics) and different logics.

5. Conclusion and future work

We have presented a framework for the backward transfer of lemmas and strategies and we have shown how it works on two detailed examples. It is natural to ask about the possibilities and limits of our approach; we succinctly analyse both. Theoretically, if SL admits a logic morphism to TL and if there exists a proof calculus in SL, then our method could be applied. But as the proof calculus considered for TL is resolution, a first step to extend the class of source logics to which our method could be applied consists in considering logics with a resolution proof calculus. Among the existing resolution proof calculi for non-standard logics, we shall study the extension of our method to the following ones:
- the first-order modal logic S5 (Fariñas del Cerro 84)
- the propositional linear temporal logic (Cavalli & Fariñas del Cerro 84)
- some epistemic logics (Konolige 86).
For the two latter logics, the problem of defining a logic morphism to OSPL and an inverse partial morphism is open. In principle there is no theoretical impossibility to answer it. Obviously our approach could be applied with some other proof calculi and we are investigating in this direction. We are presently studying a backward morphism for linear temporal logic, for propositional S5 and for first-order modal logics. The main lines of future work are:
- To try to prove the conjecture stated in section 4.
- To characterize theoretically the SL, TL and logic morphism adapted to the transfer of strategies and, simultaneously
- To use the ideas presented in the second example in order to experimentally help in the study of the possibilities for transferring strategies.
Finally, it is worth pointing out that our method has similarities with the so-called "reversed skolemisation" (see for example (Cox and Pietrzykowski 80)), though our approach seems more general. We shall deepen the study of their respective merits and limitations.

References

Y. Auffray, P. Enjalbert, & J-J. Hebrard, 1990. "Strategies for modal resolution: results and problems". Journal of Automated Reasoning 6, pp. 1-38.
M. Abadi & Z. Manna, 1986. "Modal Theorem Proving". In Proc. CADE-8, LNCS 230, Springer-Verlag.
T. Boy de la Tour, R. Caferra, & G. Chaminade, 1988. "Some tools for an Inference Laboratory (ATINF)", CADE-9, Springer-Verlag, LNCS 310, pp. 744-745.
R. Caferra, S. Demri, & M. Herment, 1991. "Logic-independent Graphic Proof Edition: an application in structuring and transforming proofs". Submitted.
A. Cavalli & L. Fariñas del Cerro, 1984. "A decision method for linear temporal logic", CADE-7, Springer-Verlag, LNCS 170, pp. 113-127.
P.T. Cox & T. Pietrzykowski, 1980. "A complete nonredundant algorithm for reversed skolemisation", CADE-5, Springer-Verlag, LNCS 87, pp. 374-385.
P. Enjalbert & L. Fariñas del Cerro, 1989. "Modal resolution in clausal form", Theoretical Computer Science 65, pp. 1-33.
L. Fariñas del Cerro, 1984. "Un principe de résolution modale", R.A.I.R.O. Informatique théorique, Vol. 18, no. 2, pp. 161-170.
L. Fariñas del Cerro & A. Herzig, 1989. "Automated quantified modal logic" in Machine Learning, Metareasoning and Logics (P. Brazdil and K. Konolige eds.).
M. C. Fitting, 1983. "Proof Methods for Modal and Intuitionistic Logics", D. Reidel Publishing Co., Dordrecht.
A. Herzig, 1989. "Raisonnement automatique en logique modale et algorithmes d'unification". Thèse, Université Paul-Sabatier de Toulouse.
K. Konolige, 1986. "A deduction model of belief", Pitman.
H.-J. Ohlbach, 1989. "Context Logic", FB Informatik, Univ. of Kaiserslautern.
E. Orlowska, 1979. "Resolution systems and their applications I", Fundamenta Informaticae 3, pp. 235-268.
E. Orlowska, 1980. "Resolution systems and their applications II", Fundamenta Informaticae 3,3, pp. 333-362.
M. Schmidt-Schauß, 1988. "Computational aspects of an order-sorted logic with term declarations", Thesis, FB Informatik, University of Kaiserslautern.
426 GENERAL DEDUCTION SYSTEMS
Mechanization of Analytic Reasoning about Sets

Alan F. McMichael
AT&T Bell Laboratories, 480 Red Hill Rd., 1A-214, Middletown, NJ 07748

Abstract

Resolution reasoners, when applied to set theory problems, typically suffer from "lack of focus." MARS is a program that attempts to rectify this difficulty by exploiting the definition-like character of the set theory axioms. As in the case of its predecessor, SLIM, it employs a tableau proof procedure based on binary resolution, but MARS is enhanced by an equality substitution rule and a device for introducing previously proved theorems as lemmas. MARS's performance compares favorably with that of other existing automated reasoners for this domain. MARS finds proofs for many basic facts about functions, construed as sets of ordered pairs. MARS is being used to attack the homomorphism test problem, the theorem that the composition of two group homomorphisms is a group homomorphism.

Introduction

Set theory is a notoriously difficult domain for automated reasoning programs. The basic axioms of set theory are many and complex, yet they can be formulated quite naturally in terms of three primitive predicates: "is a set," "is a member of," and "is equal to." (These are my preferred primitives; other choices are possible. My choice corresponds to a "Goedelian" set theory (Wos 1987).) With so many axioms and so few primitives, numerous deduction possibilities are always available. To make matters worse, the set theory axioms support the definition of numerous derivative concepts, such as ordered pair, function, and cardinal number - indeed, they suffice for the definition of all the concepts of ordinary mathematics. So an automated reasoner is faced at once with both many deduction possibilities, engendered by complex axioms and a small basic vocabulary, and the problem of reasoning with a large derivative vocabulary. The difficulty of set theory extends to even the simplest problems in that domain.
One notable early attempt to prove the commutativity of set intersection using hyperresolution resulted in failure (McCharen, Overbeek, & Wos 1976). With a program based on linear resolution and a simple weighting strategy - perhaps an approach more suitable to this kind of problem - I have been able to find an automatic proof after about 130 inferences. But on a problem of the next level of difficulty, namely, the associativity of intersection, the same program required 19,000 inferences (McMichael 1990). Obviously, such a program is confined to trivial results and cannot be adapted to problems of modest difficulty. Programs that have done better - and there are some (e.g., Bledsoe 1977, Brown 1978 & 1986, and Pastre 1978 & 1989) - have typically diverged sharply from the resolution framework, achieving greater power at the expense of added complexity in the deductive mechanism. Evaluation of the success of such programs involves some subtleties. Progress in theorems proved must be discounted by added complexity, for there is always suspicion that the progress has come by ad hoc means. Also, complex deduction mechanisms tend to resist formal analysis. This paper describes a program, called MARS (Mechanization of Analytic Reasoning about Sets), that might be said to lie midway between the resolution and nonresolution approaches. It employs a tableau-style proof procedure based on binary resolution and a restricted form of equality substitution. It is designed to attack only a fragment of set theory problems, those I shall designate as "analytic." MARS's deduction scheme is more complicated than most resolution schemes - in this respect it resembles the nonresolution approaches - but perhaps not so complicated as to resist evaluation and formal analysis. In respect of its ability to solve set theory test problems, MARS compares favorably with existing resolution reasoners and with suitably simple and systematic nonresolution reasoners.
For example, the ordered pairs theorem used by F. Brown to demonstrate the power of his own reasoning program (Brown 1986) is solved by MARS. MARS goes beyond this theorem to prove basic theorems about functions - where functions are understood to be sets of ordered pairs - including a variant of the theorem that the composition of two homomorphisms is a homomorphism. This last theorem is a well-known test problem (Wos 1987). It is not a really hard problem for humans, since it is a prime example of an MCMICHAEL 427 From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved. "analytic" problem. Yet its solution - or, more precisely, its near solution, since MARS cannot yet handle Wos's version - is significant from the standpoint of automation. The first portion of this paper reviews MARS's tableau proof procedure, a procedure first employed in MARS's predecessor, SLIM ("Simple Logical Inference Machine," McMichael 1990). This is followed by a discussion of MARS enhancements, namely a rule for equality and a scheme for handling lemmas (previously proved theorems), and of their effectiveness on test problems. Special attention is given to the homomorphism theorem.

The Tableau Proof Mechanism

Consider the nine set theory clauses relevant to the problem of proving the commutativity of set intersection:

x = y if and only if x ⊆ y and y ⊆ x
=1. x = y ∨ x ⊈ y ∨ y ⊈ x
=2. x ≠ y ∨ x ⊆ y
=3. x ≠ y ∨ y ⊆ x

x ⊆ y if and only if for all z, if z ∈ x, then z ∈ y
⊆1. x ⊆ y ∨ ss(x,y) ∈ x
⊆2. x ⊆ y ∨ ss(x,y) ∉ y
⊆3. x ⊈ y ∨ z ∉ x ∨ z ∈ y

x ∈ y ∩ z if and only if x ∈ y and x ∈ z
∩1. x ∈ y ∩ z ∨ x ∉ y ∨ x ∉ z
∩2. x ∉ y ∩ z ∨ x ∈ y
∩3. x ∉ y ∩ z ∨ x ∈ z

For a moderately trained human reasoner, a proof of commutativity of intersection from these clauses is routine.
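As a sanity check (not MARS code), the three definition groups above can be verified semantically over every subset of a small finite universe; the encoding below, using Python frozensets, is purely illustrative:

```python
from itertools import combinations, product

# Enumerate all subsets of a three-element universe as frozensets.
universe = {0, 1, 2}
sets = [frozenset(s) for r in range(len(universe) + 1)
        for s in combinations(sorted(universe), r)]

def subset(x, y):
    # x ⊆ y iff for all z, z ∈ x implies z ∈ y   (clauses ⊆1-⊆3)
    return all(z in y for z in x)

for x, y in product(sets, repeat=2):
    # =1-=3: x = y iff x ⊆ y and y ⊆ x
    assert (x == y) == (subset(x, y) and subset(y, x))
    # ∩1-∩3: z ∈ x ∩ y iff z ∈ x and z ∈ y
    assert all((z in (x & y)) == (z in x and z in y) for z in universe)
    # the target theorem holds semantically
    assert x & y == y & x
print("definitions consistent on all", len(sets), "subsets")
```

This only checks that the clause groups faithfully axiomatize the intended notions on finite models; the paper's point, of course, is finding the symbolic proof.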
The clauses, in the indicated groups of three, function as "definitions" of the concepts of set equality, subset, and intersection, and the proof is constructed merely by negating the desired conclusion, expanding according to the definitions, and making obvious inferences of propositional logic. By an analytic problem, I mean one that can be solved by a proof of this sort, that is, by a proof constructed solely by means of definition expansion and involving at worst a multiplicity of possible free variable bindings to be tried (a complication not present in this first example). My first experiments with commutativity involved linear resolution. Linear resolution is unable to emulate the idea of definition expansion and consequently suffers from lack of focus, a well-known obstacle for reasoning programs (Wos 1987). From "a ∩ b ≠ b ∩ a", my linear resolution program deduces "a ∩ b ⊈ b ∩ a ∨ b ∩ a ⊈ a ∩ b" and then proceeds to eliminate the first "⊈" and conclude "ss(a∩b,b∩a) ∈ a ∩ b ∨ b ∩ a ⊈ a ∩ b". But now that "⊈" has been lost from the left branch of the proof tree, linear resolution succeeds in returning to the other half of the definition of "⊈" (⊆2) only after a blind search through the nonleading literals in this clause set. Using binary resolution and tableau proof, this inadequacy can be rectified. In a tableau proof, any literal on the currently selected open proof branch is in principle available to extend the branch. Thus, when the current branch is extended via clause ⊆1, the next inference need not be made, as in linear resolution, using the result of ⊆1's application but instead may be an extension via clause ⊆2 and that literal's parent. Indeed, such a choice corresponds to the strategy of definition expansion. Effective implementation of this proof concept involves three guiding ideas:
1. Tableau Idea - The notion of an available literal is expanded to include all literals on the current branch of the proof tree.
2.
Definition Expansion Idea - Clauses that serve as definitions should be used, whenever possible, to expand the defined notions (which, by convention, are displayed in the leading literals of these clauses).
3. Content Exhaustion Idea - Once a literal has been expanded fully according to the clauses constituting a definition, no further inferences involving it should be tried on the current branch.
A binary resolution tableau guided by these ideas leads directly to a solution of the commutativity problem. Figure 1 shows the complete tableau. When the symbol 'ss(″)' occurs in a branch, it abbreviates the more lengthy skolem term introduced above it. The X's symbolize contradictions.

[Figure 1: Proof of Commutativity Theorem]

The same guiding ideas can be used to solve many and more difficult analytic problems. However, it is ultimately necessary to supplement them with several refinements:
1. Repetitions - An extension that results in a literal identical to one of its branch predecessors is retracted. A redundant literal produced indirectly by variable binding is blocked from further expansion.
2. Irrelevancies - The root of the proof tree consists of the clauses resulting from the negation of the theorem (with minor exceptions). It constitutes the "set-of-support" of the problem, as opposed to the "auxiliary clauses" provided by the set theory axioms. However, a literal may be generated later in the tableau, such as "a ⊆ a", which is in fact a logical consequence of auxiliary clauses. As in the case of repetitious literals, these may be retracted or blocked. (This is an extension of the set-of-support idea (Wos et al. 1965), so may be regarded as an answer to "Research Problem 1" (Wos 1987).)
3. Skipping Unused Branches - Extension of the proof tree by means of an auxiliary clause containing three or more literals results in new branches. If one of the new branches is ultimately closed without using the literal at the beginning of the branch, then the other branches may be skipped.
4. Free Variables: Controlled Introduction, Binding, and Backtracking - The tableau proof method is complete even if variables are required to bind only with closed terms occurring previously in the branches in which the variables appear. Completeness is lost if inferences are confined to definition expansions. However, for analytic problems, the suggested restriction on variable bindings makes sense and can be used to guide proofs. Accordingly, clauses that result in the production of new free variables are introduced one at a time; the free variables of one are required to be bound before another is tried. Backtracking may occur before the correct clause is found and suitable bindings made.
The three guiding ideas and four refinements were first implemented by MARS's predecessor, SLIM. SLIM is able to prove many simple set theory facts, as Figure 2 shows. Equality substitution is needed for the last problem in the table. SLIM has no special rules for equality, so is compelled to solve the problem using equality substitution axioms. The specific clauses required pertain to the membership primitive:
x ∉ y ∨ z ∈ y ∨ x ≠ z
x ∈ y ∨ z ∉ y ∨ x ≠ z
Substitution axioms are known to be a clumsy means for handling equality. Since equality figures prominently in the next set of problems on my agenda, namely, basic theorems about functions, SLIM has clearly reached the limits of its competence.
Another shortcoming of SLIM is its inability to make use of lemmas - previously proved theorems - as shortcuts in the construction of new proofs. One way this crops up is in SLIM's check for irrelevant literals: SLIM treats the literal as a potential theorem and actually attempts a little proof of it from the auxiliary axioms. Successful proof means the literal is irrelevant. But a necessary result of this procedure is that many inferences are expended merely to check, for each new literal, whether it is one of a handful of one-literal theorems. If instead one-literal theorems were stored and used as lemmas on demand, all these inferences would be avoided. Moreover, these lemmas could be used also to close branches in the main proof tree.

Figure 2: SLIM's Performance

Theorem | Inferences in Solution | Total Inferences | Axiom Set Notes
P(a) ∩ P(b) = P(a∩b) | 39 | 419 | No "set" predicate (Naive Set Theory); definition axioms for =, but no substitution axioms
{a,b} = {a,c} → b = c | 106 | 922 |
b = c → {a,b} = {a,c} | 118 | 1357 |
<a,b> = <c,d> → a = c & b = d | 555 | 23573 |
a = c & b = d & set(a) & set(b) & set(c) & set(d) → <a,b> = <c,d> | 687 | 25893 | Set predicate added
ordered-pair(a) & a = <b,c> → b = first(a) | 151 | 1536 |
ordered-pair(a) & a = <b,c> → c = second(a) | 645 | 32006 |
set(m) & set(n) → m × n ⊆ P(P(m ∪ n)) | 233 | 15331 | Substitution axioms added

Equality and Lemmas in MARS

MARS is an enhancement of SLIM that incorporates special rules for equality and lemmas. In order to preserve the desirable features of SLIM, the design of these rules has been approached conservatively. The MARS substitution rule for equality is applied to ground literals only (literals with no free variables). Moreover, it is applied only when the terms involved in an equality literal are not reducible according to the set theory axioms. (That is, their main function symbols, unlike P, ∩, ∪, or {,}, are not "defined" by the axioms.)
The rule states that the lesser of the two terms in the equality, according to a natural complexity ordering, may replace the greater everywhere it occurs in another literal. If the equality literal is at the tip of a proof branch, then the substitution rule is applied to all of its predecessors on the branch, producing a group of successor literals under the equality and effectively removing the greater term from further consideration in the branch. If instead the equality literal is higher up in the branch, then the substitution is made into the tip goal alone to produce a single successor. If two equalities are involved, the one higher up in the branch takes precedence. Figure 3 illustrates the substitution rule. The MARS equality rule shares features of ordered paramodulation, simultaneous paramodulation, and demodulation (Bachmair and Ganzinger 1990, Benanav 1990, and Wos et al. 1967) but is simpler and more restricted. The equality rule enables MARS, even without use of lemmas, to solve problems beyond SLIM's reach. For example, with functions defined as sets of ordered pairs, MARS is able to prove the theorem: func(f) & <a,b> ∈ f → b = f(a), where "f(a)" is an abbreviation for explicit reference to the result of function application, "result(f,a)", that is actually required. (Wos 1987 uses "image(a,f)" to express this notion.) MARS also solves, without lemmas, all the problems SLIM solves. A naive strategy for employing lemmas in proofs can drastically reduce an automated reasoner's effectiveness. This happens if the application of a lemma turns out not to be conclusive and essentially the same line of reasoning is reproduced later from another lemma or from the basic axioms. Repeated instances of such failure result in exponentially bad performance. This difficulty can be ameliorated, if not evaded altogether, by concentrating on demonstrably successful lemma applications.
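The ground substitution step described above can be sketched as follows; the nested-tuple term encoding, the string-length complexity ordering, and the 'app'/'pair'/'in' constructors are assumptions made for illustration, not MARS's actual representation:

```python
# Sketch of MARS-style ground equality substitution: the lesser of the two
# terms in an equality (by a simple complexity ordering) replaces the
# greater wherever it occurs in another literal.

def complexity(term):
    # crude stand-in for a "natural complexity ordering" over ground terms
    return len(repr(term))

def substitute(lit, old, new):
    # replace every occurrence of the subterm `old` inside a literal
    if lit == old:
        return new
    if isinstance(lit, tuple):
        return tuple(substitute(t, old, new) for t in lit)
    return lit

def apply_equality(eq, branch_literals):
    # eq = ('=', s, t): rewrite the greater term to the lesser everywhere
    _, s, t = eq
    lesser, greater = sorted((s, t), key=complexity)
    return [substitute(lit, greater, lesser) for lit in branch_literals]

# e.g. from f(a) = b, the literal  f(a) ∈ {b,c}  becomes  b ∈ {b,c}
branch = [('in', ('app', 'f', 'a'), ('pair', 'b', 'c'))]
print(apply_equality(('=', ('app', 'f', 'a'), 'b'), branch))
# → [('in', 'b', ('pair', 'b', 'c'))]
```

The sketch omits MARS's distinction between substituting from a tip equality into all branch predecessors and substituting a higher-up equality into the tip goal alone.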
A paradigm case of this is the use of a single-literal lemma to close a branch without binding any variables in the proof tree. Such an inference creates no backtracking point. Since no variable binding occurs, there is no need to try an alternative means of closing the branch. In the case of multi-literal lemmas that bind no proof variables, there is a danger of producing unnecessary extensions and splits in the proof tree. In some cases, the danger is minimal. Consider the clause:
x ∈ {x,y} ∨ ¬set(x) ∨ ¬set(y)
The first literal "narrowly misses" being a theorem of set theory ("Goedelian" set theory). The supplementary facts needed are that x and y are sets, not "proper classes" (intuitively, collections "too large" to constitute sets). But since proper classes rarely figure in elementary theorems and such supplementary facts are often available, MARS can make good use of lemmas of this sort. Let us lump these literals together with the true single-literal lemmas and call them unitary lemmas. My first experiments with lemmas in MARS involved only unitary lemmas whose proofs are shallow. These are exactly the sorts of lemmas that are detected by SLIM's irrelevancy check. In fact, my first list was compiled from literals proved irrelevant by SLIM and MARS. For a practical solution to the homomorphism test problem, however, it proves necessary to give MARS one multi-literal lemma (in four clauses) which is not unitary and which does not have a shallow proof.

[Figure 3: Substitution Rule Examples]

This lemma is none other than the fundamental property of ordered pairs which was proved by F.
Brown's program (Brown 1986) and which served as a test problem for SLIM: If <x,y> = <u,v>, then x = u and y = v or (degenerate case) one of x and y is not a set and one of u and v is not a set.
<x,y> ≠ <u,v> ∨ ¬set(x) ∨ ¬set(y) ∨ x = u
<x,y> ≠ <u,v> ∨ ¬set(x) ∨ ¬set(y) ∨ y = v
<x,y> ≠ <u,v> ∨ ¬set(u) ∨ ¬set(v) ∨ x = u
<x,y> ≠ <u,v> ∨ ¬set(u) ∨ ¬set(v) ∨ y = v
Figure 4 gives test problem data for MARS, culminating in the homomorphism theorem. MARS does not quite solve the homomorphism problem originally posed by Wos. Instead, it proves that a composition of triadic relation homomorphisms is a homomorphism. A group is a binary operation over a given domain, and thus is a special case of a triadic relation. Indeed, a triadic relation homomorphism between groups will also be a group homomorphism. MARS, however, cannot prove this. It can show that the triadic relation homomorphism has one of the crucial properties of a group homomorphism - that is the content of the last problem in the table. But to prove all the relevant properties, MARS would have to be able to reason effectively about the group closure property, and it currently does not. Thus, in the last problem, an instance of closure is assumed in the hypothesis of the theorem. There is reason to hope, however, that this difficulty will soon be overcome. The experience with MARS indicates that the use of lemmas is an unavoidable device. If this is indeed so, then we are forced to confront questions about what constitutes "fair play" in automated theorem-proving. At a minimum, of course, the program that proves the theorem should also be able to prove the lemmas it invokes, and the invocation of lemmas should be an automatic process. MARS satisfies these minimum requirements. On the other hand, it would be nice to have a program that automatically selects lemmas as it gains experience with various deduction problems. MARS is not such a program; it is not like "AM" (Davis & Lenat 1982).
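Since the appendix encodes <y,z> as pr(sing(y),pr(y,z)), i.e. the Kuratowski pair {{y},{y,z}}, the non-degenerate case of the ordered-pair lemma can be checked exhaustively on a small domain. This is a sanity-check sketch, not part of MARS:

```python
from itertools import product

def opair(x, y):
    # Kuratowski encoding from the appendix: <x,y> = {{x}, {x,y}}
    return frozenset({frozenset({x}), frozenset({x, y})})

# Non-degenerate case of the lemma, checked over a small domain of atoms
# (all of which count as "sets" here): pair equality forces componentwise
# equality.
atoms = range(4)
for x, y, u, v in product(atoms, repeat=4):
    if opair(x, y) == opair(u, v):
        assert x == u and y == v
print("ordered-pair property holds on the sample domain")
```

A finite check of course proves nothing in general; it merely illustrates why the four-clause lemma is true for the encoding MARS uses.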
MARS could easily be modified to prove and collect the shallow unitary lemmas, but it is not clear how to automate the selection of deeper, multi-literal lemmas. Nevertheless, I think the actual selection of lemmas for the homomorphism problem is sufficiently conservative and does not undermine the significance of MARS's success. Undoubtedly the greatest shortcoming of MARS is its inability to solve synthetic problems. Not all set theory problems can be solved by definition expansion coupled with, whenever free variables appear in the proof tree, trial of alternative inferences that bind those variables. My favorite example is Cantor's theorem, which states that there can be no function on a set S whose range includes the whole power set of S. The proof proceeds by assuming that there is such a function and showing that the existence of that function implies the existence of a subset of S that cannot possibly be in the range of the function - a contradiction. The step from the existence of the function to the existence of the recalcitrant subset is one MARS cannot reproduce. There are two MARS inference restrictions that stand in the way of synthetic deductions, namely, (1) the use of "definition" axioms in one direction only, and (2) the requirement of controlled binding of free variables. Since (1) is MARS's prime device for attacking the problem of focus, my plan is to revise (2) instead. But certainly I have not gone far toward implementing this plan. Much more experimental and theoretical study needs to be done.
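For concreteness, the "recalcitrant subset" in Cantor's argument is the usual diagonal set; in standard notation:

```latex
% Cantor's theorem: no f : S -> P(S) is onto. The synthetic step is the
% construction of the diagonal set
\[
  D \;=\; \{\, x \in S \mid x \notin f(x) \,\} \;\subseteq\; S .
\]
% If f(a) = D for some a in S, then a \in D \iff a \notin f(a) = D,
% a contradiction; hence D lies outside the range of f.
```

Inventing the term for D from the mere existence of f is exactly the kind of step that falls outside definition expansion plus controlled variable binding.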
func(fog) func(f) & func(g) & 190 796 one-to-one(f) & one-to-one(g) + one-to-one(f 09) homomorphism(f,p,q) & 1925 40765 (Triadic relation homomorphism(g,q,r) + homomorphism, more homomorphism(fog,p,r) general than Wos’s group homomorphism. homomorphism(f ,p,q) & 41 263 (Triadic relation func(p) & func(q) &k homomorphism has a E domain(f) & group homomorphism b E domain(f) & property .) p(a,b) E domain(f) & d(a),f(b)> E domain(q) qtf(a)NN = f(p(a,b)) Figure 4: MARS’s Performance Appendix: Womomorphis Problem Clauses singleton set equality x B sing(y) v set(y) x + y v sub(x,y) & x d sing(y) v x = y & x f y v sub(y,x) x E sing(y) v x z y v -set(y) x = y v -sub(x,y) v -sub(y,x) set(sing(x)) * . ordered pair suose f -sub(x,y) v z B x v z E y sub(w) v,ss(x,y) E x & subky) v ss(x,y) 4 y pair set x d pW,z) v set(y) &x B pr(y,z) v set(z) & x B pr(y,z) v x = y v x = z x E pr(y,z) v x # y v -set(y) v -set(z) & x E pr(y,z) v x z z v -set(y) v -set(z) set(pr(x,y)) x B <Y,z> v x E pr(sing(y),pr(y,z)) x E cy,z> v x fit pr(sing(y),pr(y,z)) set(cy,z>) is an ordered pair, 1 st(sx), 2nd(sy) -isop v x = csx(x),sy(x)> isop v x f cy,z> v -set(y) v -set(z) set(sx(x)) WSY 00) relation -rel(x) v y G! 
x v isop rel(x) v sr(x) E x & rel(x) v -isop(sr(x)) set(sr(x)) 432 GENERAL DEDUCTION SYSTEMS function and value(sv) -func(x) v rel(x) & -func(x) v q,z> d x v z - sv(x,y) func(x) v -rel(x) v csf(x),sA(x)> E x & func(x) v -ret(x) v csf(x),sv(x,sf(x))> E x & func(x) v -rel(x) v sA(x) f sv(x,sf(x)) set(sv(x,y)) set(sf(x)) set(sA(x)) composition of functions x B cmp(u,v) v isop & x d cmp(u,v) v csx(x),sv(u,sx(x))> E u & x B cmp(u,v) v ~sv(u,sx(x)),sy(x)> E v x E cmp(u,v)) v -isop v <sx(x),sv(u,sx(x))> B u v <sv(u,sx(x)),sy(x)> B v) set(cmp(u,v)) is an ordered triple -istrip v x = ~~sx(sx(x)),sy(sx(x))>,sy(x)> istrip v x * CU,V> v -isop v -set(v) is a triadic relation -rel3(x) v y d x v istrip rel3(x) v s3(x) E x & rel3(x) v -istrip(s3(x)) set(s3(x)) homomorphism set(sE(x,wl ,w2)) set(sF(x,wl ,w2)) set(sG(x,wl ,w2)) -homo(x,wl ,w2) v func(x) 81 -homo(x,wl ,w2) v rel3(wl) & -homo(x,wl ,w2) v rel3(w2) & -homo(x,wl ,w2) v <al ,u2>,u3> e! WI v <ul ,SV(X,Ul)> d x v <U2,SV(X,U2)> d x v <U3,SV(X,U3)> B x v <<sv(x,ul),sv(x,u2)>,sv(x,u3)> E w2 homo(x,wl ,w2) v -func(x) v -rel3(wl) v -rel3(w2) v <<sE(x,wl ,w2),sF(x,wl ,w2)>,sG(x,wl ,w2)> E wl & homo(x,wl ,w2) v -func(x) v -rel3(wl) v -rel3(w2) v <sE(x,wl ,w2),sv(x,sE(x,wl ,w2))> E x & homo(x,wl ,w2) v -func(x) v -rel3(wl) v -rel3(w2) v <sF(x,wl ,w2),sv(x,sF(x,wl ,w2))> E x & homo(x,wl ,w2) v -func(x) v -rel3(wl) v -rel3(w2) v <sG(x,wl ,w2),sv(x,sG(x,wl ,w2))> E x & homo(x,wl ,w2) v -func(x) v -rel3(wl) v -rel3(w2) v <<sv(x,sE(x,wl ,w2)),sv(x,sF(x,wl ,w2))>, SV(X,SG(X,Wl ,w2))> e! w2 References Bacmair, L., and Granzinger, H. 1990. On Restrictions of Ordered Pammodulation with Simplification. In Proceedings of the Tenth International Conference on Automated Deduction, 427-44 1. Springer-Verlag. Benanav, D. 1990. Simultaneous Paramodulation. In Proceedings of the Tenth International Conference on Automated Deduction, 442-455. Springer-Verlag. Bledsoe, W. W. 1977. Non-resolution Theorem Proving. 
Artificial Intelligence Journal 9:1-35.
Brown, F. M. 1978. Towards the Automation of Set Theory and Its Logic. Artificial Intelligence Journal 10:281-316.
Brown, F. M. 1986. An Experimental Logic Based on the Fundamental Deduction Principle. Artificial Intelligence Journal 30:117-263.
Davis, R., and Lenat, D. 1982. Knowledge-Based Systems in Artificial Intelligence. McGraw Hill.
McCharen, J., Overbeek, R., and Wos, L. 1976. Problems and Experiments for and with Automated Theorem-Proving Programs. IEEE Transactions on Computers C-25:773-782.
McMichael, A. 1990. SLIM: An Automated Reasoner for Equivalences, Applied to Set Theory. In Proceedings of the Tenth International Conference on Automated Deduction, 308-321. Springer-Verlag.
Pastre, D. 1978. Automated Theorem Proving in Set Theory. Artificial Intelligence Journal 10:1-27.
Pastre, D. 1989. MUSCADET: An Automated Theorem Proving System using Knowledge and Metaknowledge in Mathematics. Artificial Intelligence Journal 38:257-318.
Wos, L., Robinson, G., and Carson, D. 1965. Efficiency and Completeness of the Set of Support Strategy in Theorem Proving. Journal of the Association for Computing Machinery 12:536-541.
Wos, L., Robinson, G., Carson, D., and Shalla, L. 1967. The Concept of Demodulation in Theorem Proving. Journal of the Association for Computing Machinery 14:698-709.
Wos, L. 1987. Automated Reasoning: Thirty Three Basic Research Problems. Prentice-Hall.
Baril, Greer, and McCalla
ARES Laboratory, Department of Computational Science
University of Saskatchewan
Saskatoon, CANADA S7N 0W0

Abstract

Student modelling is not typically concerned with representing the deep mental models a student employs in dealing with the world around him/her. In this research we discuss an intelligent tutoring system, PRESTO, whose goal is to understand the mental model a student has of a physical device, and then use this mental model in providing help to overcome misunderstandings related to the functioning of the device. The mental model is extracted from the student by asking questions about the relationships of variables affecting the physics of the device. The mental model is represented using deKleer and Brown's qualitative confluence equations. The mental model can then be compared to a set of confluences representing a correct perspective on how the device functions. A variety of pedagogical choices can be made: to explain contradictions implicit in the student's understanding of the device, to show the student a simpler physical device that by analogy illustrates anomalies in the student's understanding, or to let the student witness his/her version of the device in operation so the misunderstandings become obvious. Experiments in running PRESTO with a number of students show this approach to mental modelling to be promising.

Introduction

Human tutors are able to determine from a student's description of how a physical device works the underlying misconceptions that the student has about the physical properties that govern the device's behaviour. To do this the tutor might present the student with a physical device and elicit from the student a description of how that device functions. From the student's description the tutor is able to construct a set of beliefs about the student's beliefs. In other words, the tutor constructs a representation of the student's mental model of the physical device.
This representation allows the tutor to predict answers that a student would give to specific questions, to identify student misconceptions, and to plan the type of pedagogical instruction to be given to the student. This paper describes an intelligent tutoring system (ITS), PRESTO, that has some of these capabilities. PRESTO can determine, from a student's description of qualitative relationships among physical properties, underlying misconceptions that the student has about how these physical properties govern a device. From the student's description, PRESTO constructs a representation of the student's mental image of the device. This gives the ITS the ability to reason the way the student reasons, to predict answers to specific questions, and to identify misconceptions. The ITS deals with misconceptions in three ways. First, it can present the student with a simpler device that clearly illustrates the misconception affecting the behaviour of the simpler device and then encourage analogy to the original device. PRESTO can also explain contradictory relationships by explicitly showing the student how these contradictions lead to impossible behaviour of the device. Finally, the system has been designed so that future versions of it will be able to simulate the device in operation, and thereby demonstrate to the student the effects of particular relationships among physical properties on the working device. PRESTO uses qualitative reasoning in order to represent the student's mental model of the physical device under consideration. It has been shown that in the domain of physical systems, experts tend to reason in a qualitative as opposed to quantitative fashion [Brown 84], [Frederikson & White 88], [White 88], [White & Frederikson 88]. That is, an expert will examine how changes in the physical system propagate through the system via the qualitative relationships among the system's variables.
If a problem requires a quantitative solution, it is developed only after the problem has been analysed and understood in qualitative terms [White 88]. It would seem that students learning the operation of a physical device would similarly use qualitative reasoning. Thus, it appears that the work done in qualitative reasoning could be useful in the development of a representation of deep student models in the domain of physical systems.

Background

In intelligent tutoring systems, it is useful to be able to model the student's knowledge in order to provide individualized instruction. Wenger [Wenger 87] states that instead of just being able to represent the target expertise and calculate a level of mastery for the student, a student model should be able to provide an explicit representation of the student's incorrect versions of the target expertise in order that remedial action may be taken. Self [Self 88] also points out that a student model should not have to provide value judgements on the correctness of the student's knowledge. The student modelling problem should be one of representing what a student believes. These beliefs should be represented in their own terms, not with respect to some target knowledge.

BARIL, GREER, & MCCALLA 43
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

Most student models are shallow; they are not concerned with the deep "mental modelling" level at which students formulate their conceptions of the world around them. Mental models are used to organize people's conceptions and to be useful must predict behaviour with some degree of accuracy. Through interaction with the world, people continue to modify their mental models in order to better predict outcomes in the real world [Norman 83]. The research reported in this paper is concerned with how the mental model level of a student model can be represented and used for tutoring.
PRESTO is designed to determine and represent the student's mental model of a physical device. If the student's mental model is inaccurate, the student can be led to re-evaluate his or her mental model by being shown simpler devices, by having contradictions explained, or by actually seeing anomalies in the behaviour of the device. Through this process of re-evaluation on the part of the student, PRESTO attempts to encourage the student to refine and/or correct his or her mental model of the device. The mental modelling representation scheme is confluence theory [deKleer & Brown 81], [deKleer & Brown 83], [deKleer & Brown 85]. Confluence theory provides a method for qualitatively describing the behaviour of the components of a physical system. It uses envisioning to construct a causal model of a physical device's operation. Envisioning is performed with knowledge of the behaviour of the components (described qualitatively using confluences) as well as how they are connected to form a composite device. The device can be qualitatively simulated and interesting events in the functioning of the device can be presented in their causal order. This method of qualitative reasoning meets several of the requirements for student model representation. Both a qualitative device model and a model of the student's perception of the device should recognize inconsistent specification; that is, both kinds of models should recognize the problem of specifying different values for the same attribute in any state of the device. Both kinds of models should have correspondence; that is, they should be faithful to the actual behaviour of the device (or student conception of the device) under examination. As well, both must be robust in unusual situations. In addition, qualitative simulation provides a method for running the student model in order to predict students' responses and the consequences of students' beliefs.
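To make the idea of a confluence concrete: an equation such as ∂pressure_difference + ∂output_pressure − ∂input_pressure = 0 can be encoded as a map from variable names to qualitative coefficients, and a candidate behaviour (an assignment of −1, 0, or +1 to each variable) can be tested against it. The sketch below is our own encoding, not deKleer and Brown's notation, and it simplifies by treating the qualitative sum as an ordinary integer sum:

```python
# Sketch: a confluence maps each variable to its qualitative coefficient
# (+1 or -1); a behaviour assigns each variable a direction of change
# (-1, 0, or +1).  Encoding and names are ours, chosen only to illustrate.

def satisfies(confluence, behaviour):
    """True if the signed sum of changes required by the confluence is zero."""
    return sum(coeff * behaviour[var] for var, coeff in confluence.items()) == 0

# The collapsed confluence for the pressure regulator discussed later:
# d(pressure_difference) + d(output_pressure) - d(input_pressure) = 0
regulator = {'pressure_difference': 1, 'output_pressure': 1, 'input_pressure': -1}

# If input pressure rises while output pressure holds steady, the
# pressure difference across the valve must also rise.
behaviour = {'pressure_difference': 1, 'output_pressure': 0, 'input_pressure': 1}
print(satisfies(regulator, behaviour))   # a consistent behaviour
```

A behaviour that violates the confluence (say, output pressure rising while nothing else changes) fails the same check, which is the basic mechanism by which a model can "recognize inconsistent specification."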
Williams, Hollan and Stevens [Williams et al 83] have devised a representation of mental models based upon deKleer and Brown's qualitative physics. They consider a mental model to be a causal model developed through envisionment. Douglas and Liu [Douglas & Liu 89] have extended the ideas of confluence theory in order to create a tutoring system, Heart Works, which provides a constructive simulation of a device as well as providing qualitative causal explanations of the device's functioning. Their work has focused on generating pedagogical explanations of device behaviour as well as providing a constructive simulation of the device. Our work focuses on using confluences and envisionment in constructing runnable student models.

Confluence-Based Student Modelling

PRESTO makes use of deKleer and Brown's confluence-based qualitative reasoning to represent its model of the student. Relationships among the different variables that govern the student's beliefs about the behaviour of the physical device are represented as confluence equations. Since each relationship among the variables in a physical device can be represented in an equation, a model of the physical device can automatically be generated based upon the student's description of the interactions among the different variables. Many of the other qualitative reasoning systems such as Qualitative Process Theory [Forbus 85] and QSIM [Kuipers 86] do not have this one-to-one mapping between relationship and representation, and as a result knowledge must be handcrafted into complex process equations or well-formed formulae. PRESTO works in the following manner: In order to determine a student's misconceptions about a physical device, the system extracts from the student beliefs about related variables underlying the operation of the physical device. First, a physical device, labelled with all the relevant variables, is presented to the student in a diagrammatic format.
The diagram potentially limits the possible types of models that the student can have of the physical device, but this is unavoidable as the system and the student require a common frame of reference for discussing the device. The tutoring system asks the student to select related variables and to specify their causal relationships for a specific single state of the device. The tutoring system queries the student in causal terms because it has been found that students tend to think in terms of explicit and directed causal relationships [Forbus & Gentner 90]. Based on the student's descriptions of related variables, confluence equations are created. Figure 1 illustrates the environment for querying the student in PRESTO using deKleer's familiar pressure regulator as the physical device. The relationships described by the student are encoded into confluence equations to represent beliefs about the relationships between the variables associated with the pressure regulator. Suppose, for example, that the student believed there was no relationship between flow and pressure and that a direct relationship existed between the pressure difference and the area of the valve opening. These misconceptions would be reflected in the responses given when PRESTO asks which variables are influenced by a change in the flow into the device or by a change in pressure difference. The latter relationship would be encoded as the confluence

∂pressure_difference − ∂area_valve_opening = 0

Figure 1. Querying the Student in PRESTO (the system asks: "What variables would change as a direct result of the pressure difference changing?")

After the student identifies all the variable relationships thought to be important, PRESTO has acquired a representation of the student's mental model of the pressure regulator. A typical representation is shown in Figure 2.
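The encoding step just described can be sketched as a small helper that turns a student's stated relationship between two variables into a confluence. The dict-based representation and the function name are our own illustration, not PRESTO's actual code:

```python
def encode_relationship(var_a, var_b, kind):
    """Turn a stated relationship into a confluence {variable: coefficient}.

    'direct'  means d(var_a) - d(var_b) = 0  (the variables move together);
    'inverse' means d(var_a) + d(var_b) = 0  (they move in opposition);
    'none'    yields no confluence at all.
    """
    if kind == 'direct':
        return {var_a: 1, var_b: -1}
    if kind == 'inverse':
        return {var_a: 1, var_b: 1}
    return None

# The misconception in the text: a direct relationship between the
# pressure difference and the area of the valve opening.
faulty = encode_relationship('pressure_difference', 'area_valve_opening', 'direct')
print(faulty)
```

Because each answer maps to exactly one confluence, the student model accumulates one equation per stated relationship, which is the one-to-one mapping the text contrasts with Qualitative Process Theory and QSIM.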
It can be contrasted with the correct mental model of the pressure regulator as shown in Figure 3.

Figure 2. Student's Faulty Mental Model

Figure 3. Correct Mental Model

In some cases, simple relationships between two variables of the physical device are not adequate. For example, variables A and B may be directly related, and variables A and C may be inversely related. Thus A would be influenced by a combination of both B's and C's effects. When two variables cause contradictory effects on a third variable, such as in a negative feedback loop, a three-term confluence must be formed in order to eliminate contradictions. In Figure 2, input pressure has a direct relationship with the pressure difference and the output pressure has an inverse relationship with the pressure difference. Thus the two confluences

∂pressure_difference − ∂input_pressure = 0 and
∂pressure_difference + ∂output_pressure = 0

collapse into the single confluence

∂pressure_difference + ∂output_pressure − ∂input_pressure = 0.

The confluence equations are run through a constraint satisfaction algorithm. During constraint satisfaction, misconceptions may be manifested as contradictions in the model or as missing or malformed confluences. In order to solve the constraints of the confluence equations, an initial variable is selected and some perturbation of the variable's value is introduced in order to induce a change in the device. The change is propagated through the confluences that comprise the student model. Equations with a single uninstantiated variable are solved first. When there are no equations with a single uninstantiated variable, equations with two uninstantiated variables are solved. In this case one of the variables is given a value corresponding to the actual behaviour of the device and the propagation of values is continued. Propagation of values stops when all variables have a value, when all the equations have been used, or when all unused equations contain no instantiated variables.
This indicates that the student has described the device with disjoint parts, with one of the parts unaffected by the initial change. When the propagation of values is complete, the resulting values represent the behaviour of the device as it is conceived by the student. Misconceptions are determined by comparing values of the variables determined by constraint propagation with values that represent actual behaviour of the device. If a value is in conflict with the actual value, then the confluence equation used to obtain that value is assumed to be a misconceived confluence. If all variables in a confluence are assigned values identical to the actual values, this does not indicate that there are no misconceptions. It is possible that during value propagation, not all confluence equations have been used. Thus, each confluence equation is tested with the generated values. If the equation is not valid for that set of values, then that confluence is deemed to be misconceived. In the example, the student believed that the pressure difference and the area of the valve opening were directly related. This misconception is represented by the confluence equation

∂pressure_difference − ∂area_valve_opening = 0.

When this equation is processed by the constraint satisfaction algorithm, it produces a behaviour inconsistent with the actual behaviour of the pressure regulator. Thus, the tutoring system focuses on this misconception and addresses it until it can convince the student that the relationship is inverse. When this happens the tutoring system is able to change the confluence to

∂pressure_difference + ∂area_valve_opening = 0.

When a misconceived confluence is determined, it is first checked to see what kind of misconception it represents. A deviation from the actual behaviour of the pressure regulator could indicate that the student believes that a wrong relationship exists between the two variables of the confluence equation.
However, if the confluence is one that would collapse into a multi-term confluence, it could indicate that some other confluence is missing. In the example, the student believes that pressure and flow are not related. Thus the student incorrectly believes that the pressure difference across the valve has no relationship with the flow through the valve. As a result one of the confluences that represents the feedback loop of the pressure regulator is missing. In order to deal with this misconception, PRESTO presents the student with a simpler device to illustrate the behaviour that the missing confluence represented. This simpler device, a constricted pipe, is shown in Figure 4. PRESTO asks the student to describe the behaviour of the constricted pipe. If the student understands that the pressure difference affects flow through the constriction, then the system leads the student by analogy to the pressure regulator. If the student does not understand the behaviour of the simpler device, the tutoring system will explicitly explain it. When tutoring on the misconception is complete, the student will believe that pressure difference is directly related to the flow into the valve and thus

∂pressure_difference − ∂flow_into_valve = 0

will be added to the student model. PRESTO will then collapse the three confluences representing the feedback loop of the valve:

∂pressure_difference − ∂flow_into_valve = 0
∂pressure_difference + ∂area_valve_opening = 0
∂flow_into_valve − ∂area_valve_opening = 0

into a single confluence:

∂pressure_difference − ∂flow_into_valve + ∂area_valve_opening = 0.

There are a number of misconceptions that the student may have which might not be identified through constraint satisfaction. These include missing confluences that may affect device behaviour but may not be detected because other confluences impose the necessary constraints on variables. Thus, if certain confluences are missing, simply simulating device behaviour may be insufficient.
Therefore, all necessary confluences must exist in order for the student model to be deemed correct. Missing relationships also may subtly affect device behaviour, such as when missing confluences cause a variable to be unconstrained. This deviation in device behaviour would appear with the unconstrained variable not exhibiting any change when the device is simulated for the student. Another misconception that may not be determined from the constraint satisfaction occurs when the student believes that there are relationships among actually unrelated variables. These relationships may impose the proper constraints upon the variables within them; however, their existence indicates a misconception by the student about the physical properties governing the device.

Figure 4. A Simpler Device for Removing a Misconception (the system asks: "Consider a tube with a constriction in the center and water running through the tube. The water is running at a certain flow rate and there is a pressure difference between the two sides of the constriction. What variables would change as a direct result of the pressure difference between the input and output of the tube changing?")

In the example, the student initially states that pressure and flow are not related. As a result the student does not describe a relationship between the flow out of the device and the output pressure. Even though at this point all the variables can be constrained, it is a misconception on the part of the student not to describe how the load on the end of the pressure regulator offsets the relationship between the output pressure and flow. In order to deal with this misconception PRESTO again presents a simpler device to the student, in this case a pipe with a load on the end.
When the student correctly describes the behaviour of the simpler device and correctly draws the analogy back to the pressure regulator, the following confluence will be placed in the student model:

∂output_pressure − ∂flow_out_of_device = 0

The student model now is the same as the correct model of the pressure regulator and the tutoring system has successfully dealt with the misconceptions. PRESTO deals with students' misconceptions in three ways. The first way is to present to the student an alternate, yet simpler physical device that will illustrate the behaviour in question. When the student sees how the variables interact in the simpler device, he or she should be able to make the analogy back to the original device. The second way of helping the student understand the misconception is to provide an explanation of the contradiction within the set of confluences which describe the device. PRESTO shows how the described relationships lead a variable to take on contradictory values. The final way of illustrating misconceptions to the student is through simulation. The student may describe relationships which do not lead to a contradiction, yet lead to incorrect behaviour of the device. This incorrect behaviour can be illustrated to the student by simulating the device as described. We have not yet implemented the simulation option. By addressing misconceptions one by one, the tutoring system will help to evolve the student's mental model of the physical device to one which is correct. The end result is an intelligent tutoring system that supports students in refining their mental models of a physical device. It also supports the refinement of students' beliefs about the physical properties that govern the device. This corresponds to Self's idea [Self 88] that the purpose of a student model is to assist an intelligent tutoring system in provoking the student to consider the justifications and implications of his or her beliefs.
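The constraint-satisfaction cycle described earlier (perturb a variable, repeatedly solve confluences with a single unknown, then compare the propagated values with the device's actual behaviour) can be sketched roughly as follows. The encoding, the helper names, and the assumption of ±1 coefficients are all ours, intended only to illustrate the process:

```python
def propagate(confluences, values):
    """Repeatedly solve any confluence with exactly one unknown variable.

    Each confluence is a {variable: coefficient} dict with coefficients of
    +1 or -1; values maps already-known variables to -1, 0 or +1.
    """
    changed = True
    while changed:
        changed = False
        for conf in confluences:
            unknown = [v for v in conf if v not in values]
            if len(unknown) == 1:
                known = sum(c * values[v] for v, c in conf.items() if v in values)
                # Solve c*x + known = 0; since c is +1 or -1, x = -known * c.
                values[unknown[0]] = -known * conf[unknown[0]]
                changed = True
    return values

def misconceived(confluences, values, actual):
    """Confluences whose propagated values disagree with actual behaviour."""
    return [conf for conf in confluences
            if any(values.get(v) != actual.get(v) for v in conf)]

# The student's faulty belief from the running example: pressure difference
# directly related to the area of the valve opening (d(pd) - d(area) = 0).
student = [{'pressure_difference': 1, 'area_valve_opening': -1}]
values = propagate(student, {'pressure_difference': 1})
actual = {'pressure_difference': 1, 'area_valve_opening': -1}  # valve closes
print(misconceived(student, values, actual))  # flags the faulty confluence
```

With the pressure difference perturbed upward, the faulty confluence forces the valve area up as well, while the device actually closes the valve; the disagreement singles out the misconceived confluence for tutoring.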
Conclusions

The student modelling capabilities of the confluence equations were tested by running twelve students through the implemented version of this tutoring system. From this testing a number of conclusions were drawn about the capabilities and limitations of this modelling scheme. Qualitative differential equations worked well as a representational scheme in certain situations. In general, with this representation of the student's mental model, the tutoring system recognized most misconceptions about the underlying physics that governed the device's behaviour. The tutoring system was able to recognize deeper misconceptions, including the situation where the student correctly understands the physics that governs the device's behaviour but applies it incorrectly to the device, or where the student is unable to envision the whole device's behaviour from the behaviour of its component parts. The tutoring system could also recognize misconceptions where the student did not understand the physics that governed the device's behaviour. In many cases, the students were surprised when PRESTO pointed out their misconceptions. PRESTO was able to eliminate some of the misconceptions of the students. Through interviews with the students after their sessions with PRESTO, it was found that they did learn something about the physics of the pressure regulator through their interactions with PRESTO. This shows that malformed confluence equations do in fact provide a direct indication of a misconception on the part of the student. There are limitations to a student model representational scheme using confluence equations. A major limitation is the inability to explicitly represent causal relationships. Students tend to think in a causal manner.
However, confluence equations contain only vague implicit notions of causality, and thus there are types of misconceptions a confluence equation cannot represent. For example, in the pressure regulator, a student could believe that an increase in the output pressure causes the valve to close. That belief would be correct. However, if the student believed that the closing of the valve caused a higher output pressure, then that belief is incorrect. The problem is that both of these beliefs are represented by the same confluence equation. In both cases there is an inverse relationship between the variables. Consequently, PRESTO is unable to recognize all of the misconceptions that a student may possess. In order for confluence equations to become an adequate representational scheme, some improved method of representing causality must be added. PRESTO queries the students about causal relationships, but the causal information that the student supplies is simply discarded and only the information on variable relationships is maintained. By bringing causal information into the representation, the tutoring system would be better able to evaluate the student as well as to correctly recognize an additional class of misconceptions. The interaction through which PRESTO constructs the preliminary mental model is not ideal. Many queries about individual relationships between pairs of variables are necessary, even for a device as simple as the pressure regulator. Better usage of inference and defaults dealing with the student's knowledge of basic confluences could improve the somewhat tedious query phase of PRESTO. Confluence equations may not be perfect as a representation for a mental model, but they do have merit. In fact, much of the work done in the field of qualitative reasoning could have a major impact on student model
representational schemes since many of the problems and issues in both fields are similar. By building upon representational schemes and reasoning approaches from the work in qualitative reasoning, student models can be greatly enhanced. Instead of simply representing a student's knowledge with respect to some target knowledge, a tutoring system should be able to represent a student's mental image of the tutoring domain. In doing this, the pedagogical aspects of the tutoring system can take advantage of knowing exactly how the student views the domain as well as being able to reason qualitatively like the student.

Acknowledgements: We would like to thank Reg Anderson for his work on the interface to PRESTO. We also acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada.

References

Brown, J.S. 1984. A Framework for a Qualitative Physics. In Proceedings of the Sixth Annual Conference of the Cognitive Science Society, 11-17. Hillsdale, New Jersey: Lawrence Erlbaum.

deKleer, J. and Brown, J.S. 1981. Mental Models of Physical Mechanisms and their Acquisition. In J.R. Anderson (Ed.), Cognitive Skills and their Acquisition, Hillsdale, New Jersey: Erlbaum, 285-309.

deKleer, J. and Brown, J.S. 1983. Assumptions and Ambiguities in Mechanistic Mental Models. In D. Gentner and A. Stevens (Eds.), Mental Models, Hillsdale, New Jersey: Lawrence Erlbaum, 155-190.

deKleer, J. and Brown, J.S. 1985. A Qualitative Physics Based on Confluences. In D.G. Bobrow (Ed.), Qualitative Reasoning about Physical Systems, Cambridge, Mass.: MIT Press, 7-83.

Douglas, S.A. and Liu, Z.Y. 1989. Generating Causal Explanations from a Cardio-Vascular Simulation. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 489-494. Los Altos, CA: Morgan Kaufmann.

Forbus, K.D. 1985. Qualitative Process Theory. In D.G. Bobrow (Ed.), Qualitative Reasoning about Physical Systems, Cambridge, Mass.: MIT Press, 85-168.

Forbus, K.D. and Gentner, D. 1990. Causal Reasoning about Quantities. In D.S. Weld and J.
deKleer (Eds.), Readings in Qualitative Reasoning about Physical Systems, Los Altos, California: Morgan Kaufmann, 666-677.

Frederikson, J.R. and White, B.Y. 1988. Mental Models and Understanding: A Problem for Science Education. NATO Advanced Research Workshop on New Directions in Education Technology, Milton Keynes.

Kuipers, B. 1986. Qualitative Simulation. Artificial Intelligence, 29:289-338.

Norman, D.A. 1983. Some Observations on Mental Models. In D. Gentner and A. Stevens (Eds.), Mental Models, Hillsdale, New Jersey: Lawrence Erlbaum, 7-14.

Self, J.A. 1988. Bypassing the Intractable Problem of Student Modelling. In Proceedings of the ITS-88 Conference, Montreal, 18-24.

Wenger, E. 1987. Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge. Los Altos, CA: Morgan Kaufmann.

White, B.Y. 1988. Thinker Tools: Causal Models, Conceptual Change, and Science Education. Technical Report #6873, BBN Laboratories Inc.

White, B.Y. and Frederikson, J.R. 1988. Explorations in Understanding How Physical Systems Work. In Proceedings of the Sixth Annual Conference of the Cognitive Science Society, 325-331. Hillsdale, New Jersey: Lawrence Erlbaum.

Williams, M.D., Hollan, J.D. and Stevens, A. 1983. Human Reasoning About a Simple Physical System. In D. Gentner and A. Stevens (Eds.), Mental Models, Hillsdale, New Jersey: Lawrence Erlbaum, 131-153.
A New Admissible Heuristic for Minimal-Cost Proofs

Eugene Charniak and Saadia Husain
Department of Computer Science, Brown University, Box 1910, Providence RI 02912

Abstract

Finding best explanations is often formalized in AI in terms of minimal-cost proofs. Finding such proofs is naturally characterized as a best-first search of the proof-tree (actually a proof dag). Unfortunately the only known search heuristic for this task is quite poor. In this paper we present a new heuristic, a proof that it is admissible (for certain successor functions), and some experimental results suggesting that it is a significant improvement over the currently used heuristic.

Introduction

The problem of explanation (or abduction) is often formalized within AI as a problem of proving the things to be explained using some auxiliary hypotheses [1, 5, 7, 9, 10, 11]. For example, a circuit's showing a particular fault is explained by proving that the circuit would show the fault if a particular component, say Component-5, were broken. We have thus explained the fault via the auxiliary hypothesis, "Component-5 is broken." A persistent problem with such approaches is that there will typically be many such proofs based upon different sets of auxiliary hypotheses. One can use the criterion of set minimality to remove some of these, but still the numbers remain large. This naturally suggests that we should somehow weight the proofs and look for a proof which is optimal in some sense. If we talk about the "costs" of proofs, and then pick the one with the smallest cost, we are led to the problem of finding the "minimal cost proofs" of our title. A clean axiomatization of this idea, called "cost-based abduction," is presented in [4]. Auxiliary hypotheses have associated costs, and the cost of a proof is the sum of the costs of the extra hypotheses required to make the proof go through. This is quite close to what is done by Hobbs and Stickel [7].
It is shown in [4, 12] how the costs can be given a firm semantics in terms of probability theory, and that, as one might expect, finding minimal-cost proofs is NP-hard.

*Thanks to Philip Klein for helpful discussions, Solomon Shimony and Robert Goldman for reading an earlier draft of this paper, and to Eugene Santos for curve fitting. This work has been supported by the National Science Foundation under grant IRI-8911122 and by the Office of Naval Research, under contract N00014-88-K-0589.

neighbor-away ⊃ neighbor-lights-off
black-out ⊃ neighbor-lights-off
fuse-blown ⊃ my-power-off
black-out ⊃ my-power-off

E(neighbor-away) = 3
E(black-out) = 8
E(fuse-blown) = 6

Figure 1: Some simple rules, and their and-or dag

As an example of such proofs, consider Figure 1. We show there some simple rules, the costs of various hypotheses (using the cost function E), and the and-or dag showing the possible proofs of the observations "neighbor-lights-off" and "my-power-off". The minimal cost proof would assume "black-out" (E = 8) despite the fact that it is the most expensive hypothesis. This occurs because the two facts which we need to explain can be thought of as "sharing" the cost of the hypothesis. The use of an and-or dag to illustrate possible proofs suggests that one might find minimal-cost proofs by doing a best-first search of the and-or dag. How this could be done is outlined in [4]. The basic idea is that one starts with the expression to be proven, and then creates alternative partial proofs. In each iteration of the search algorithm a partial proof is chosen to expand, and it is expanded by considering how one of its goals can be satisfied. Finally one of the partial proofs will be completed by having all of its goals satisfied by either known facts or auxiliary hypotheses. (We consider known facts to be zero-cost auxiliary hypotheses.)
Naturally, the efficiency of such a search will depend on having a good heuristic function for deciding which partial proof to work on next. Furthermore, we are only guaranteed that the first proof found is the minimal-cost proof if the heuristic is admissible. Unfortunately, the only known admissible heuristic (indeed, the only known heuristic of any sort) for minimal-cost proofs is "cost so far." That is, given a partial proof, we estimate the cost of the total proof to be the actual costs incurred by the partial proof. This is the heuristic used in [4] and [13]. "Cost-so-far" is admissible, but otherwise it is a very bad heuristic. All of the costs in a minimal-cost proof are associated with the auxiliary hypotheses, which are, of course, at the leaves of the and-or dag. Thus it is typically only when the partial proof is almost complete that the heuristic gives good guidance. In this paper we present a better heuristic for minimal-cost proofs, and we prove that it is admissible if the partial proofs are expanded in a certain way. We also give some preliminary results on how well the new heuristic behaves compared to "cost-so-far." Section 2 introduces the heuristic. We also show that it is not completely obvious that the heuristic is, in fact, admissible. (Indeed, we show that if not used properly it is inadmissible.) Section 3 proves the heuristic admissible when used with a certain class of successor functions. Section 4 gives the experimental results. We first introduce our notation.

Definition 1 An and-or dag δ is a connected directed acyclic graph with nodes N_δ (n ∈ N_δ), edges E_δ (e ∈ E_δ), leaves L_δ (L_δ ⊂ N_δ, l ∈ L_δ), and a single root r_δ ∈ N_δ. In general, capital letters denote sets and lower case denote individuals. We use "E" for edges, "L" for leaves, and "N" for nodes. To indicate parent and child relations in the dag we use subscript and superscript along with the convention that parents are "below" children.
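The best-first search with the "cost-so-far" heuristic described above can be sketched as follows on the Figure 1 example; the dict-based dag encoding, the function name, and the `both` root node are our own illustration, not the paper's implementation:

```python
import heapq, itertools

def min_cost_proof(dag, costs, root):
    """Best-first search over partial proofs, guided by cost so far.

    dag maps a node to ('leaf', None), ('and', children) or ('or', children);
    costs gives each leaf's assumption cost.  A partial proof is a tuple of
    open goals plus the hypotheses assumed so far; since cost-so-far is
    admissible, the first completed proof popped has minimal cost.
    """
    tie = itertools.count()                     # tie-breaker for the heap
    heap = [(0, next(tie), (root,), frozenset())]
    while heap:
        cost, _, goals, assumed = heapq.heappop(heap)
        if not goals:
            return cost, assumed
        goal, rest = goals[0], goals[1:]
        kind, children = dag[goal]
        if kind == 'leaf':                      # assume the hypothesis...
            extra = 0 if goal in assumed else costs[goal]   # ...paying once
            heapq.heappush(heap, (cost + extra, next(tie), rest, assumed | {goal}))
        elif kind == 'and':                     # all children become goals
            heapq.heappush(heap, (cost, next(tie), tuple(children) + rest, assumed))
        else:                                   # or-node: branch per child
            for child in children:
                heapq.heappush(heap, (cost, next(tie), (child,) + rest, assumed))
    return None

# Figure 1: black-out is shared by both observations, so assuming it
# (cost 8) beats assuming neighbor-away (3) plus fuse-blown (6).
dag = {'both': ('and', ['neighbor-lights-off', 'my-power-off']),
       'neighbor-lights-off': ('or', ['neighbor-away', 'black-out']),
       'my-power-off': ('or', ['fuse-blown', 'black-out']),
       'neighbor-away': ('leaf', None), 'black-out': ('leaf', None),
       'fuse-blown': ('leaf', None)}
costs = {'neighbor-away': 3, 'black-out': 8, 'fuse-blown': 6}
print(min_cost_proof(dag, costs, 'both'))   # minimal proof: cost 8, black-out
```

The sketch also shows the weakness the text identifies: every pushed entry carries only the cost already incurred, so partial proofs that will later assume expensive hypotheses look just as promising as ones near completion.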
For example, x^y would be the set of x's immediately above y (i.e., y's children) whereas y_x would be the y immediately below all of the x's. As we saw in Figure 1, L_δ are the auxiliary hypotheses which we introduce to complete the proof, and r_δ is the theorem to be proven. We occasionally need sequences of edge sets. We use F_i for the i'th member of the sequence, as E_i could be interpreted as the set of edges above node i.

Definition 2 Let δ be an and-or dag. An and-dag (from δ) is a dag obtained from δ by removing all but one of the descendants from every or-node in δ. We denote the set of all such dags by A(δ). Intuitively the and-dags of δ correspond to the possible complete proofs for r_δ. For example, Figure 2 shows two and-dags for the dag in Figure 1.

Figure 2: Two and-dags

Definition 3 A weighted and-or dag (Waodag) is a pair (δ, ε), with an and-or dag δ, and a cost function ε : L_δ → ℜ⁺ (the non-negative reals). In what follows all and-or dags are Waodags, and we leave the cost function implicit, thus referring to the Waodag by the term denoting the dag itself, e.g., δ. Since an and-dag corresponds to a proof, the cost of an and-dag is naturally defined as the sum of the cost of the leaves. Since we want a minimal-cost proof for the entire dag we define a dag's cost to be the minimum cost of all of its and-dags.

Definition 4 Let δ be a Waodag and α ∈ A(δ). The leaves of α are L_α. We define the cost of δ as follows:

ε_δ = min_{α ∈ A(δ)} ε_α.

Now we define our heuristic cost estimation function $. The basic idea is that we want to get some estimate at lower nodes of the cost of the leaves above them. Thus the estimator will use values passed down from the leaves. Since, as we saw in Figure 1, the cost of a node can be "shared," we divide the cost of a node by how many parents it has, and that is the cost passed down to the parents.

Definition 5 Let δ be a Waodag.
The estimator $ : N_δ ∪ E_δ ∪ 2^{E_δ} → ℜ is defined as follows (E^n is the set of edges above n, E_n the set of edges below n):

$n = £n                  if n ∈ L_δ
$n = $E^n                if n is an and-node
$n = min_{e∈E^n} $e      if n is an or-node
$e_n = $n / |E_n|
$E = Σ_{e∈E} $e.

CHARNIAK & HUSAIN 447

(We define the estimate for sets of edges because we will shortly be defining our expansion function in terms of sets of edges, so the estimate should be over these sets. See Figure 5.) Figure 3 shows the estimated costs for the nodes and edges of Figure 1 (where we have replaced the proposition symbols by individual letters to shorten things).

Figure 3: Estimated costs

Figure 4: Five expansions of a Waodag (a table listing, for each of five partial solutions, its goals, its leaves, its estimated cost, and the partial solution it was derived from)

While it may seem reasonable that $ is an admissible heuristic cost estimate, it is not necessarily so. Figure 4 shows five initial expansions in a best-first search for the minimal-cost proof based upon the estimates in Figure 3. The "goals" are the nodes which have yet to be expanded, and the "leaves" are the leaves which have been reached. Note that the estimated cost for partial solution number 4 is, in fact, higher than the final solution would be if we had pursued this partial solution. Because of this incorrect estimate the best-first search tries partial solution 3 next, and finishes it off for a total cost of 9, in partial solution 5. As we have already noted, the correct minimal-cost proof here has a cost of 8.

The admissible version of our heuristic requires a few small changes from the version shown in Figure 4. First, rather than define partial solutions in terms of nodes, it turns out to be easier to define them in terms of edges. Secondly, we require that nodes be expanded in a topological order, such that a node is not expanded if a predecessor in the order has not been expanded. Expansion consists of removing all the edges immediately below the expansion node, and adding in the edges above it.
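To make Definitions 2-5 concrete, the following Python sketch (our own encoding and function names, not the paper's) computes £δ by brute-force enumeration of and-dags, and computes the cost-sharing estimator $ by splitting each shared node's estimate evenly among its parent edges:

```python
from itertools import product

# Hypothetical encoding (not the paper's): each internal node is
# ("and", children) or ("or", children); leaves are absent from `nodes`
# and carry their cost in `leaf_cost`. Sharing = reusing a node id.

def and_dag_leaf_sets(nodes, n):
    """Yield, for every and-dag below n, its (frozen) set of leaves."""
    kind, children = nodes.get(n, ("leaf", []))
    if kind == "leaf":
        yield frozenset([n])
    elif kind == "and":
        # one and-dag from every child; a shared leaf appears only once
        for combo in product(*(list(and_dag_leaf_sets(nodes, c)) for c in children)):
            yield frozenset().union(*combo)
    else:  # or-node: keep exactly one child (Definition 2)
        for c in children:
            yield from and_dag_leaf_sets(nodes, c)

def waodag_cost(nodes, root, leaf_cost):
    """Cost of the dag: minimum over and-dags of the sum of distinct leaf costs."""
    return min(sum(leaf_cost[l] for l in s) for s in and_dag_leaf_sets(nodes, root))

def sharing_estimate(nodes, root, leaf_cost):
    """Cost-sharing estimator: a node's value is divided among its parent edges."""
    parents = {}
    for _, (_, children) in nodes.items():
        for c in children:
            parents[c] = parents.get(c, 0) + 1
    memo = {}
    def dollar(n):
        if n not in memo:
            kind, children = nodes.get(n, ("leaf", []))
            if kind == "leaf":
                memo[n] = leaf_cost[n]
            else:
                down = [dollar(c) / parents[c] for c in children]  # edge estimates
                memo[n] = sum(down) if kind == "and" else min(down)
        return memo[n]
    return dollar(root)
```

On a tiny dag where leaf b is shared by two or-nodes, both functions return 4: the sharing estimator happens to be exact here, though, as the text shows, with node-based expansion it can also overestimate a completion's true cost.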
(The next section will give a more formal characterization of the algorithm.) We have also taken the liberty of adding some "pseudoedges," one just below the root and one above each leaf. These are convenient for defining the starting point of the search (the pseudoedge just below the root) and the ending point (where we have only edges above the leaves). In Figure 5 we show the search for the best proof for our continuing example, but now according to our new scheme.

Figure 5: The correct solution to our problem (eleven expansions, with the same columns as Figure 4)

The Proof

First, we formally describe the "pseudoedges" mentioned at the end of the last section. We modify our δ's by adding an extra layer of nodes L'_δ above the leaves. Each l' is connected to the corresponding l by a single edge e_l. We also add an extra node r'_δ which has as its single child r_δ. We define £l' = $l' = $e_l = £l. It should be clear that the costs of the original dag δ and the estimates on all of its nodes are unaffected by this change, so we henceforth just talk about δ as if it had these extra nodes and edges all the time.

Definition 6 A cut of an and-dag α is a set of edges C of α such that
- it is not possible to go from the root of α to a leaf of α without going through one of the edges, and
- if any edge is removed from C it will no longer be a cut.
If δ is a general Waodag (not necessarily an and-dag), then C is a cut of δ iff C is a cut of some α ∈ A(δ).

Definition 7 Let α be an and-tree, and τ be a topological sort of N_α, τ = ⟨n_0, …, n_k⟩, such that no node comes before any of its parents. We will not include in τ any of the extra nodes we added at the beginning of this section. Thus n_0 = r_α, not r'_α.
We define a function ε_τ : 2^{E_α} → 2^{E_α} as follows:

ε_τ(E) = (E − E_n) ∪ E^n

where n is the first element of τ such that some e_n ∈ E and E^n is non-empty. ε_τ is undefined if no such n exists.

Intuitively, ε_τ is a search successor function, but just defined for and-dags. For reasons which are made clear by the next theorem, we call such ε_τ's "topological cut extenders." ε_τ* denotes the reflexive transitive closure of ε_τ; ε_τ*(E) denotes the sequence of edge sets generated (in order) by successive applications of ε_τ.

448 SEARCH

Definition 8 We say that the topological cut extender ε_τ, when applied to the set of edges E, "expands" node n iff for all e ∈ E^n, e ∉ E and e ∈ ε_τ(E).

Theorem 1 Let α be an and-tree and ε_τ*({e_{r_α}}) = ⟨F_0, …, F_k⟩. The following are true:
1. F_0 = {e_{r_α}}.
2. F_i, 0 ≤ i ≤ k, is a cut.
3. E_{n_i} ⊆ F_i.
4. ε_τ, when applied to F_i, expands node n_i of τ.
5. F_k = E_{L_α}.

The proof has been omitted because of space limitations. See [3].

Definition 9 We say that C is a "proper cut" of the and-dag α iff there is a topological ordering τ of α such that C ∈ ε_τ*({e_{r_α}}). C is a proper cut of an and-or dag δ iff it is a proper cut of an and-dag α ∈ A(δ).

Intuitively, proper cuts of and-trees are the boundaries of partial proofs. The cut extension function takes us from partial proof to partial proof until we reach the leaves. At this point we have only defined the process for and-dags, since this is all we need for the following few theorems. Eventually we will define a similar function, our successor function for the best-first search, which is defined for and-or dags.

Theorem 2 If α is an and-dag, then for any proper cut C of α, $C = £α.

Proof. Let τ be the ordering of α such that C ∈ ε_τ*({e_{r_α}}). By Theorem 1, ε_τ*({e_{r_α}}) is the sequence of proper cuts ⟨{e_{r_α}}, …, E_{L_α}⟩. We prove the theorem by induction on this sequence, starting with E_{L_α}.

Basis step. Consider C = E_{L_α} = {e_l | l ∈ L_α}:

$C = Σ_{l∈L_α} $e_l = Σ_{l∈L_α} £l = £α

Induction step.
We prove the theorem for C assuming it holds for all cuts C' after it in the sequence ⟨{e_{r_α}}, …, E_{L_α}⟩. Let C' be the cut immediately after C in the sequence. By the definition of ε_τ, C' = (C − E_n) ∪ E^n, where n is the earliest node in τ such that e_n ∈ C. Next we observe that

$C' = £α = $C − $E_n + $E^n

The first equality follows from the induction hypothesis, the second from the definition of C' and the fact that, for all of the edges e_n ∈ E_n, it is the case that e_n ∈ C. (We know this from Theorem 1, claim 3.) Now, rearranging the above equation, we get

$C = £α + ($E_n − $E^n).

It remains to show that the last term is zero. First, since α is an and-tree, n has exactly one parent edge, so $E_n = $e_n = $n. Second, consider n. It must either be an and-node or an or-node. Suppose it is an or-node. Then since there is only one edge above it (this is an and-tree),

$n = min_{e∈E^n} $e = $E^n.

If n is an and-node, then by definition $n = $E^n. Thus either way the final term in our equation for $C reduces to ($n − $n), so $C = £α. □

Next we will consider what happens when we take estimates on Waodags which are not and-dags. In what follows we will find it convenient to talk about a particular node being part of different Waodags, and its estimated cost in each of the dags. We will use the notation $_δ n to specify that the estimation is being done relative to the Waodag δ. It should be obvious that any topological sort of all of the nodes of δ will, when restricted to the nodes of α ∈ A(δ), be a topological sort of α. Thus from now on when we talk of a sort τ it will be of the entire dag.

Definition 10 If C is a proper cut of the dag δ, then A(C) = {α | α ∈ A(δ) and C is a proper cut of α}.

Definition 11 We extend the definition of our cost function to proper cuts C of δ:

£C = min_{α∈A(C)} £α

Theorem 3 Let δ be a Waodag, and C a proper cut of δ. Then $_δ C ≤ £C.

Proof. We will prove this by induction on a sequence of Waodags δ_1, …, δ_j such that δ_1 = α, where α is the member of A(C) such that £α = min_{α'∈A(C)} £α', and δ_j = δ.
To go from Waodag δ_i to δ_{i+1} we do one of the following operations:
1. Add a leaf node l ∈ L_δ.
2. Add an and-node n ∈ N_δ, and E^n. This step is only allowed if the heads of all the edges E^n are already in δ_i.
3. Add an or-node n and one e ∈ E^n. Only allowed if the head of e is already in δ_i.
4. Add an edge from an or-node already in δ_i to one of its children already in δ_i.

We will assume that it is clear that we can, in fact, build δ from α in this fashion. We will now show that for all i, 1 ≤ i ≤ j,

$_{δ_i} C ≤ £C

Basis step.

$_{δ_1} C = $_α C = £α = £C

The first equality is by the definition of δ_1, the second by Theorem 2, and the third by the definition of £C.

Induction step. We will show that for any of the operations used in going from δ_i to δ_{i+1}, it must be the case that if $_{δ_i} C ≤ £C then $_{δ_{i+1}} C ≤ £C.

Operation 1. Since the new node is not connected to any node in C, the estimated cost remains unchanged.

Operations 2 & 3. The only possible connection between the operation and nodes in C is that one or more descendants of the heads of the edges in C might get extra parents. This would lower the estimated value for all of the parents. Thus the only possible change would be to lower the estimated cost of C.

Operation 4. The or-node could have its estimated cost lowered if this child had a lower estimated cost than any other child. There is no way its cost could be raised. Thus the estimated cost of C could only be lowered. □

Now we relate the above material to best-first search of and-or dags in order to find the minimum-cost proof.

Definition 12 A "topological successor function" S_τ : 2^{E_δ} → 2^{2^{E_δ}} for the topological sort τ of the and-or dag δ is defined as follows:

S_τ(E) = {(E − E_n) ∪ E^n}               if n is an and-node
S_τ(E) = {(E − E_n) ∪ {e} | e ∈ E^n}     if n is an or-node

The first branch applies if n is an and-node, and the second if n is an or-node. Again, n is the first node in τ such that e_n ∈ E and E^n is non-empty. S_τ is undefined if no such n exists.
We define S_τ* : 2^{E_δ} → 2^{2^{E_δ}} as follows: E ∈ S_τ*(E), and if X ∈ S_τ*(E), then S_τ(X) ⊆ S_τ*(E). Nothing else is in S_τ*(E).

Theorem 4 If E ∈ S_τ*({e_{r_δ}}), then E is a proper cut of δ.

Proof. Suppose E ∈ S_τ*({e_{r_δ}}). It must then be the case that we can apply S_τ, first to {e_{r_δ}}, then to a member of S_τ({e_{r_δ}}), etc., to generate a sequence of edge sets ⟨F_0, …, F_j⟩, where F_0 = {e_{r_δ}} and F_j = E. We will prove that this sequence is a prefix of ε_τ*({e_{r_α}}) for some α ∈ A(δ). From this and Theorem 1 it follows that E is a proper cut of δ. The proof is by induction.

Basis step. F_0 = {e_{r_δ}}. Clearly {e_{r_δ}} is a prefix of ε_τ* applied to any α ∈ A(δ).

Induction step. Assume the theorem is true for ⟨F_0, …, F_i⟩. We want to prove that ⟨F_0, …, F_{i+1}⟩ is a prefix of ε_τ* applied to some α ∈ A(δ) (in general there will be many such α). Consider what happens in going from F_i to F_{i+1}. F_{i+1} ∈ S_τ(F_i), and

S_τ(F_i) = {(F_i − E_n) ∪ E^n}               if n is an and-node
S_τ(F_i) = {(F_i − E_n) ∪ {e} | e ∈ E^n}     if n is an or-node

Now if n is an and-node, then any α for which F_i is a proper cut will also have F_{i+1} as a proper cut, since α must include n and E^n. Thus S_τ(F_i) = {ε_τ(F_i)}, so F_{i+1} = ε_τ(F_i). On the other hand, suppose n is an or-node. Then F_{i+1} extends F_i by removing the predecessor edges of n and including one of the possible choices e ∈ E^n at the or-node n. Now consider an α which includes all of F_i and also has, as its "choice" at n, that edge e. For this α, F_{i+1} = ε_τ(F_i). □

Theorem 5 Let C ∈ S_τ*({e_{r_δ}}). Then $C ≤ £C.

Proof. By Theorem 4, C is a proper cut of δ. Thus, by Theorem 3, $C ≤ £C. □

Experimental Results

We tested our scheme on a collection of and-or dags generated by the Wimp3 natural language understanding program [2,6]. Wimp3 understands stories by generating Bayesian networks [8] to model the possible interpretations of the text, and evaluates the networks to find which interpretation is most probable given the evidence. With a small amount of post-processing we were able to convert these networks into equivalent and-or dags.
We tested our heuristic on all of the dags from 15 stories of the 25-story corpus used to debug Wimp. (We were not able to use all of the stories because many of them used features in Wimp which would have made the conversion to and-or dags much more difficult. However, there is no reason to believe that this would have any effect on the comparisons between heuristics.) In all we ran the minimal-cost proof schemes on 140 dags ranging in size from 7 nodes to 168 nodes.

As a method of comparison we have used time (in seconds) as a function of the number of nodes. Since all of the methods were run on the same machine, using mostly the same code, the normal arguments against raw times did not seem to apply. We might also note that a comparison based upon the number of partial solutions explored (the other obvious method) would, if anything, tend to make our new methods look still better, since it would hide the extra time per iteration they require. Since the searches are exponential in principle, we have done a least-squares fit of the equation

time = e^(a + b·nodes).

The results are as follows:

Method                     a         b
Cost-so-far             -6.4083   .07932
Cost sharing            -5.6498   .06337
Multiple parent removal -5.5376   .05877

The reduction in the exponent shows that cost sharing does, in fact, pay off.

It should be clear that it is possible to improve our cost estimator. In particular, $ will be optimistic when there are multiple parents of a node which came from a single or-node. For example, in Figure 6 there are two parent edges of n which came from an or-node. In such cases, no partial proof will have more than one of them, so prorating the cost of the child among its parent edges leads to an underestimate of the cost at parent nodes. This is okay in the sense that the heuristic is admissible, but it would be better if it could be, at least in part, corrected for. We have implemented an improved
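The exponential fit in the table is a straight least-squares fit in log space. A minimal sketch (our own; the data fed to it below are synthetic, generated from the table's cost-so-far coefficients, not the paper's raw measurements):

```python
import math

def fit_exponential(nodes, times):
    """Least-squares fit of time = e**(a + b*nodes), done in log space.
    Returns the intercept a and slope b."""
    ys = [math.log(t) for t in times]
    n = len(nodes)
    mx, my = sum(nodes) / n, sum(ys) / n
    # ordinary least squares on (nodes, log time)
    b = sum((x - mx) * (y - my) for x, y in zip(nodes, ys)) \
        / sum((x - mx) ** 2 for x in nodes)
    return my - b * mx, b
```

Run on noise-free synthetic timings, it recovers the generating coefficients exactly; on real measurements it would return the best-fitting a and b in the least-squares sense.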
We have implemented an improved 450 SEARCH Figure 6: Two edges from the same or-node version of our estimator which catches some of these situations. It indeed performed even better than the original cost sharing scheme as seen by the final line in the above table. We should note, however, that these data are still preliminary, and hide a lot of interesting behavior which deserve further investigation. Conclusion The usefulness of our improved cost estimate for min- imal cost proofs depends on three factors: (1) to what degree important problems in AI can be formulated as minimal-cost proofs, (2) the reasonableness of our formalization of minimal-cost proofs as the search of already given and-or dags, and (3) the improvement provided by our cost estimate. Of these (1) is beyond the scope of this paper. The technique has proven use- ful so far, and we can only express our belief that it will continue to do so. Factor (3) was addressed by the last section. Here we want to consider point (2). An alternative view of minimal-cost proofs is pro- vided by Hobbs and Stickel [7]. They use a backward- chaining theorem prover to construct proofs and use the cost-so-far heuristic to guide the theorem prover. The problem with their model from the viewpoint of this paper is that our cost estimator cannot be used within it. We propagate the cost of the leaves down through the dag, but the theorem prover has no idea what the leaves are, and thus no costs can be propa- gated. Indeed, when viewed from this perspective, it might seem that our approach is hopeless. We need the complete dag before we can proceed, but we cannot use our heuristic to help construct the dag. Fortunately, there is another perspective. Backward chaining proof procedures are exponential in the size of the dag they explore. This is because each partial proof combines the choices at or-nodes in a different way, and thus we get a combinatorial explosion. But simply constructing the dag need not be expensive. 
Indeed, given reasonable assumptions, the process can be linear in the size of the dag, or, at worst, linear in the size of the dag plus the knowledge base which underlies the dag. We get this improvement because the dag lays out an exponential number of proofs in a compact form. Of course, it is not possible by just looking at the dag to easily determine which proof is minimal or, even (if we allow negation), to see if there is any consistent proof available.

But this is okay with us. Once we have a dag (constructed in linear time), we can go back to work using our cost estimator. Or, to put this slightly differently, we can build a dag in linear time at the cost of a subsequent exponential process to find the minimal-cost proof. The contribution of this paper is to make this latter process more efficient.

References

1. Charniak, E. A neat theory of marker passing. Presented at AAAI-86 (1986).
2. Charniak, E. and Goldman, R. P. A semantics for probabilistic quantifier-free first-order languages, with particular application to story understanding. Presented at IJCAI-89 (1989).
3. Charniak, E. and Husain, S. A New Admissible Heuristic for Minimal-Cost Proofs. Department of Computer Science, Brown University, Technical Report, Providence RI, 1991.
4. Charniak, E. and Shimony, S. E. Probabilistic semantics for cost based abduction. Presented at Proceedings of the 1990 National Conference on Artificial Intelligence (1990).
5. Genesereth, M. R. The use of design descriptions in automated diagnosis. Artificial Intelligence 24 (1984), 411-436.
6. Goldman, R. and Charniak, E. Dynamic construction of belief networks. Presented at Proceedings of the Conference on Uncertainty in Artificial Intelligence (1990).
7. Hobbs, J., Stickel, M., Martin, P. and Edwards, D. Interpretation as abduction. Presented at Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics (1988).
8. Pearl, J.
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, Los Altos, Calif., 1988.
9. Reiter, R. A theory of diagnosis from first principles. Artificial Intelligence 32 (1987), 57-95.
10. Selman, B. and Levesque, H. J. Abductive and default reasoning: a computational core. Presented at Proceedings of the Eighth National Conference on Artificial Intelligence (1990).
11. Shanahan, M. Prediction is deduction but explanation is abduction. Presented at IJCAI-89 (1989).
12. Shimony, S. E. On irrelevance and partial assignments to belief networks. Computer Science Department, Brown University, Technical Report CS-90-14, 1990.
13. Stickel, M. E. A Prolog-like inference system for computing minimum-cost abductive explanations in natural-language interpretation. SRI International, Technical Note 451, 1988.
Depth-First vs Best-First Search

Nageshwara Rao Vempaty, Dept. of Computer Sciences, Univ. of Central Florida, Orlando, FL 32792.
Vipin Kumar*, Computer Science Dept., Univ. of Minnesota, Minneapolis, MN 55455.
Richard E. Korf†, Dept. of Computer Science, Univ. of California, Los Angeles, CA 90024.

Abstract

We present a comparison of three well-known heuristic search algorithms: best-first search (BFS), iterative deepening (ID), and depth-first branch-and-bound (DFBB). We develop a model to analyze the time and space complexity of these three algorithms in terms of the heuristic branching factor and solution density. Our analysis identifies the types of problems on which each of the search algorithms performs better than the other two. These analytical results are validated through experiments on different problems. We also present a new algorithm, DFS*, which is a hybrid of iterative deepening and depth-first branch-and-bound, and show that it outperforms the other three algorithms on some problems.

Introduction

Heuristic search algorithms are used to solve a wide variety of combinatorial optimization problems. Three important algorithms are: (i) best-first search (BFS); (ii) iterative deepening (ID) [Korf, 1985]; and (iii) depth-first branch-and-bound (DFBB) [Lawler and Woods, 1966; Kumar, 1987]. The problem is to find a path of least cost from an initial node to a goal node, in an implicitly specified state-space tree, for which a consistent admissible cost function is available.

Best-first search (BFS) is a generic algorithm that expands nodes in non-decreasing order of cost. Different cost functions f(n) give rise to different variants. For example, if f(n) = depth(n), then best-first search becomes breadth-first search. If f(n) = g(n), where g(n) is the cost of the path from the root to node n, then best-first search becomes Dijkstra's single-source shortest-path algorithm [Dijkstra, 1959].
If f(n) = g(n) + h(n), where h(n) is the heuristic estimate of the cost of the path from node n to a goal, then best-first search becomes A* [Hart et al., 1968].

*This research was supported by Army Research Office grant # 28408-MA-SD1 to the University of Minnesota and by the Army High Performance Computing Research Center at the University of Minnesota.
†This research was supported by an NSF Presidential Young Investigator Award, and a grant from Rockwell International.

Given a consistent, non-overestimating cost function, best-first search expands the minimum number of nodes necessary to find an optimal solution, up to tie-breaking among nodes whose cost equals the goal cost [Dechter and Pearl, 1985]. The storage requirement of BFS, however, is linear in the number of nodes expanded. As a result, even for moderately difficult instances of many problems, BFS runs out of memory very quickly. For example, for the 15-puzzle problem, A* runs out of memory within a few minutes of run time on a SUN 3/50 workstation with 4 Megabytes of memory.

Iterative deepening (ID) [Korf, 1985] was designed to remedy this problem. It is based on depth-first search, which only maintains the current path from the root to the current node, and hence uses space that is only linear in the search depth. ID performs a series of depth-first searches, in which a branch is pruned when the cost of a node on that path exceeds a cost threshold for that iteration. The cost threshold for the first iteration is the cost of the root node, and the threshold for each succeeding iteration is the minimum cost value that exceeded the threshold on the previous iteration. The algorithm terminates when a goal is found whose cost does not exceed the current threshold. Since the cost bound used in each iteration is a lower bound on the actual cost, the first solution chosen for expansion is optimal.
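The threshold loop just described can be sketched generically as follows. This is our own illustration, not the authors' implementation; successors, f, and is_goal are caller-supplied, and f is assumed monotone:

```python
import math

def iterative_deepening(root, successors, f, is_goal):
    """Cost-bounded iterative deepening: repeated depth-first searches with
    a threshold raised, each round, to the smallest cost that overflowed it."""
    bound = f(root)  # first threshold is the root's cost
    while True:
        next_bound = math.inf
        stack = [root]
        while stack:
            n = stack.pop()
            c = f(n)
            if c > bound:
                next_bound = min(next_bound, c)  # candidate for the next threshold
                continue
            if is_goal(n):
                return n  # cost <= bound, and bound is a lower bound: optimal
            stack.extend(successors(n))
        if next_bound == math.inf:
            return None  # space exhausted without finding a goal
        bound = next_bound
```

A toy usage: searching the implicit binary tree where node n has children 2n and 2n+1 and f(n) = n finds the goal 13 after successive thresholds 1, 2, 3, and so on, as in DFID.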
Special cases of iterative deepening include depth-first iterative-deepening (DFID), where f(n) = depth(n), and iterative-deepening-A* (IDA*), where f(n) = g(n) + h(n).

Clearly, ID expands more nodes than BFS, since all the nodes expanded in one iteration are also expanded in all following iterations. Define the heuristic branching factor (b) of a problem to be the ratio of the number of nodes of a given cost to the number of nodes with the next smaller cost. For example, if cost is simply depth, then the heuristic branching factor is the well-known brute-force branching factor. If the heuristic branching factor is greater than one, meaning that the tree grows exponentially with cost, then IDA* generates asymptotically the same number of nodes as A* [Korf, 1985].

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

The problem occurs when b is less than one or very close to one. In the former case, where the size of the problem space does not grow exponentially with cost, ID generates asymptotically more nodes than BFS. In fact, in the worst case, where every node has a unique cost value, ID generates O(M²) nodes, where M is the number of nodes generated by BFS [Patrick et al., 1991]. If b is greater than but close to one, then while asymptotically optimal, ID will be very inefficient in practice compared to BFS. This occurs in problems such as the Traveling Salesperson Problem (TSP) and VLSI floorplan optimization. For example, on a small instance of the TSP which could be solved by A* in a few minutes, IDA* ran for several days without finding a solution. Ideally, we would like an algorithm with both low space and time requirements.

Depth-first branch-and-bound (DFBB) [Lawler and Woods, 1966] is a potential candidate. DFBB starts with an upper bound on the cost of an optimal solution, and then searches the entire space in a depth-first fashion.
Whenever a new solution is found whose cost is lower than the best one found so far, the upper bound is revised to the cost of this new solution. Whenever a partial solution is encountered whose cost equals or exceeds the current bound, it is eliminated. Note that DFBB and ID are complementary to each other, in that DFBB starts with an upper bound, and ID starts with a lower bound. Both can expand more nodes than BFS. ID performs repeated expansion of nodes, while DFBB expands each node exactly once, but expands nodes costlier than the optimal solution cost. Since the node selection strategy in both DFBB and ID is depth-first, both have low memory requirements and a much faster node expansion rate compared with A*.

There are two main reasons why the time needed by BFS to expand a node is much larger than that of depth-first search algorithms such as ID and DFBB. First, each time a node is expanded by BFS, a priority queue has to be accessed to remove the node and to insert its descendants, multiplying the node expansion time by a logarithmic factor. Second, in the depth-first algorithms, successor nodes can often be constructed by making simple changes in the current parent node, and the parent can be reconstructed by simply undoing those changes while backtracking. This optimization is not directly implementable in BFS. For example, in the N × N sliding-tile puzzles, such as the Fifteen Puzzle, the time taken to expand a node for ID and DFBB is O(1), while it is O(N²) for BFS, just to make a copy of the state.

Given these three algorithms, we address two questions in this paper: 1) What are the characteristics of problems for which one of the algorithms is better than the others? and 2) Are there additional algorithms that are memory efficient and may be better for some classes of problems?

We show that for problems with high solution densities, DFBB asymptotically expands the same number of nodes as BFS, and outperforms ID.
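The DFBB strategy described above (prune against the incumbent, tighten the bound on each improvement) can be sketched as follows, again as our own generic illustration rather than the authors' code. The cost function is assumed to be a monotone lower bound on any solution below a node:

```python
import math

def dfbb(root, successors, cost, is_solution, upper=math.inf):
    """Depth-first branch-and-bound. `upper` is an optional initial bound,
    e.g. from a fast approximation algorithm; math.inf means no bound."""
    best = None
    def visit(n):
        nonlocal best, upper
        if cost(n) >= upper:
            return  # prune: this subtree cannot improve on the incumbent
        if is_solution(n):
            best, upper = n, cost(n)  # new incumbent; tighten the bound
            return
        for s in successors(n):
            visit(s)
    visit(root)
    return best
```

On the same toy tree used earlier (children 2n and 2n+1, cost n, solutions at the sixteen deepest nodes), the first solution found immediately prunes every remaining branch, so the whole space is swept with each node visited at most once.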
For problems with low solution densities, ID beats DFBB. Finally, when both the solution density and the heuristic branching factor are low, both DFBB and ID perform poorly. For this type of problem, we propose a hybrid of the two algorithms, DFS*, and demonstrate its effectiveness on a natural problem.

We experimentally verified our analysis on three different problems: the Fifteen Puzzle, the Traveling Salesperson Problem (TSP), and solving mazes. We implemented the IDA*, DFBB and A* algorithms to solve each of these problems. Our experimental investigation showed that the Fifteen Puzzle has a low solution density, but a high heuristic branching factor. Conversely, TSP has a high solution density and a low heuristic branching factor. Comparison of run times of the algorithms shows that ID is superior on the Fifteen Puzzle, and DFBB is superior on TSP. BFS is poor on both problems because of high memory requirements. The maze problem has both a low heuristic branching factor and a low solution density. Hence, this problem is unfavorable to both the ID and DFBB algorithms, and the hybrid algorithm, DFS*, outperforms them on this problem. Thus our experimental results support the theoretical analysis.

Analysis of Search Algorithms

Assumptions and Definitions

We restrict our analysis to state-space trees. For each node n, f(n) denotes a lower bound on the cost of solutions that include node n. The cost function f is monotonic, in the sense that f(n) ≤ f(m) if m is a successor of n. Let N be the number of different values taken by f over all the nodes of the tree. Let V_i denote the set of nodes whose cost is the i-th smallest of the f-values. Thus V_0, V_1, …, V_{N−1} is a sequence of sets of nodes in increasing order of cost. Clearly, the children of a node in V_i can only occur in those V_j for which j ≥ i. V_0 contains the start node.
We assume that the sequence of sizes |V_i| of the node sets is a geometric progression with ratio b, the heuristic branching factor [Korf, 1988]. If we assume that V_0 is a singleton set, then |V_i| = b^i. We assume that b > 1.¹

Let V_d be the set of nodes that contains the optimal solution(s). Hence, there are no solutions in V_i for i < d. Furthermore, we assume that each element of V_d is a solution with probability p_0, and in successive V_{d+i}'s, the probability of a node being a solution is p_i. Thus the sequence of p_i's is a measure of the density of solutions in successive search frontiers. We assume that the solutions follow a Bernoulli distribution among the elements of each V_i for i ≥ d. Therefore, the average number of nodes expanded in V_{d+i} before a solution is found from this set is 1/p_i. Since each V_{d+i} has at least one solution, p_i ≥ 1/|V_{d+i}|. For simplicity of presentation, we make the assumption that all the algorithms expand all the nodes in V_d in order to find the optimal solution. Similar results can also be obtained under either of the following alternate assumptions: (i) all algorithms expand exactly one node in V_d (i.e., the first node searched in V_d is a solution); (ii) the average number of nodes expanded by all algorithms in V_d is 1/p_0.

¹If b < 1, then ID would perform quite poorly, and one would choose between DFBB and A*.

VEMPATY, KUMAR, & KORF 435

Analysis of Best-First Search

Best-first search expands all the nodes in V_i for i ≤ d. Let M denote the number of nodes expanded by BFS. We have

M = Σ_{i=0}^{d} |V_i| = Σ_{i=0}^{d} b^i = (b^{d+1} − 1) / (b − 1)   for b > 1    (1)

The above formula counts the 'mandatory nodes' in our search tree. These are the nodes that have to be expanded by any algorithm that finds an optimal solution.

Analysis of Iterative Deepening

Iterative deepening reexpands nodes from prior iterations during later iterations, but does not expand any nodes costlier than the optimal solution. Let DI denote the average number of nodes expanded by ID.
The algorithm starts with an initial bound equal to the cost of the root.

DI = Σ_{i=0}^{d} Σ_{j=0}^{i} |V_j|

The inner sum adds the nodes expanded in a given iteration, while the outer sum adds up the different iterations.

DI = Σ_{i=0}^{d} (b^{i+1} − 1) / (b − 1) ≈ (b / (b − 1)) M   since b > 1    (2)

A similar result was proved in [Korf, 1985; Stickel and Tyson, 1985]. It is clear from this equation that when b >> 1, ID expands asymptotically the same number of nodes as BFS, and that when b is close to one, ID expands many more nodes than BFS.

Analysis of Depth-First Branch-and-Bound

Depth-first branch-and-bound starts with an upper bound on the cost of the optimal solution, and decreases it whenever a better solution is found. Eventually, the bound equals the cost of the optimal solution, and then only mandatory nodes are expanded. While DFBB may expand nodes costlier than the optimal solution, it never expands a node more than once. Let DB denote the number of nodes expanded by DFBB. These nodes fall into two disjoint categories: (i) those which are costlier than the optimal solution(s) and hence lie in V_i for i > d, and (ii) those which are not costlier than the optimal solution(s) and lie in V_j for j ≤ d. The average number of nodes in the second category is M.

The initial bound with which DFBB starts is quite important. It is generated by using a fast approximation algorithm. In problems like floorplan optimization and planar TSP, the initial bound is often within twice the cost of the optimal solution. We assume that the initial bound gives the cost of nodes at level kd + 1, where d is the level containing the optimal solution(s). (Note that k need not be an integer, and is a measure of the accuracy of the approximation algorithm used.) Hence nodes in the first category belong to V_j for d < j ≤ kd. Each of these V_j's contains b^j nodes, of which approximately b^j p_{j−d} are solutions. Let B_i denote the average number of nodes expanded by DFBB from V_{d+i} for 0 ≤ i ≤ kd.
We have B_0 = |V_d|, and for 1 ≤ i ≤ kd,

B_i = min(b·B_{i−1}, 1/p_i)

This says that either 1/p_i nodes are expanded from V_{d+i} before a solution is found at this level, or a solution is found earlier, at level V_{d+i−1} itself, in which case no more nodes than this are expanded from level V_{d+i}. Using this, we can derive the following:

DB ≤ M + Σ_{i=1}^{kd} B_i    (3)

Clearly, the behavior of DFBB depends on the sequence of p_i's. It is always the case that M ≤ DB. Let p denote the harmonic mean of the sequence p_1, p_2, …, p_{kd}.² Thus the sum Σ_{i=1}^{kd} (1/p_i) is equal to kd/p.

²Harmonic mean: p = kd / Σ_{i=1}^{kd} (1/p_i). For p to be well defined, we need 0 < p_i ≤ 1.

We are interested in sequences for which DB is close to M. From equation 3, it is clear that a sufficient condition for DB ≤ 2M is kd/M ≤ p. The harmonic mean p is a measure of the overall density of solutions and strongly influences the running time of DFBB.

An interesting sequence of p_i's is the case in which the number of solutions increases exponentially in successive levels, as do the nodes. Consider the case where V_d has s (= b^d p_0) solutions, and in every successive level the number of solutions increases by a factor of s, so that p_i = min(s^{i+1}/b^{d+i}, 1.0). For s > 1, the reader can verify that, for this type of problem,

DB = M + b^{d+1}(1 − (b/s)^{kd}) / (s(s − b))    (4)

From equation 4, we can see that DB is close to M when s > 2b. In this case DB is not very sensitive to k or d. When s decreases from 2b to b, DB gradually increases and also becomes more sensitive to kd. For s ≤ b, DB is much larger than M unless kd is very small. Thus DFBB performs well when the number of solutions grows more than twice as fast as the number of nodes. It performs poorly when the number of solutions at successive cost levels grows slower than the number of nodes.

Comparison of the Algorithms

The space complexity of BFS is O(M), while for the depth-first strategies it is O(d).
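The node-count formulas above are easy to sanity-check numerically. The helper below (our own illustration of the geometric-tree model; the function names are not from the paper) sums the counts directly; for b = 2 the DI/M ratio should approach b/(b − 1) = 2:

```python
def bfs_nodes(b, d):
    """M (equation 1): sum of |V_i| = b**i for i = 0..d."""
    return sum(b**i for i in range(d + 1))

def id_nodes(b, d):
    """DI (equation 2, exact form): iteration i re-expands V_0..V_i."""
    return sum(bfs_nodes(b, i) for i in range(d + 1))

def dfbb_bound(b, d, k, p):
    """Right-hand side of equation 3: M plus the B_i recurrence,
    B_0 = |V_d|, B_i = min(b*B_{i-1}, 1/p_i); `p` lists p_1..p_{kd}."""
    B, total = b**d, bfs_nodes(b, d)
    for i in range(1, k * d + 1):
        B = min(b * B, 1.0 / p[i - 1])
        total += B
    return total
```

For example, with b = 2 and d = 10, M = 2047 and DI = 4083, a ratio of about 1.99; with every p_i = 1 (solutions everywhere above level d), the DB bound collapses to M + kd, the mandatory nodes plus one node per extra level.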
Using equations 1, 2, 3 and 4, we can analyze the relative time complexities of each of the algorithms. As pointed out earlier, node expansion time in the depth-first search algorithms, ID and DFBB, is usually much smaller than that for best-first search. Let r be the ratio of the node expansion time for best-first search compared to depth-first search. Typical values of r in our experiments range from 1 to 10. For any particular value of r, we can find combinations of b and p for which one of the algorithms dominates the other two, in terms of time.

1. DFBB vs BFS. DFBB runs faster than BFS when DB ≤ rM. For small values of r (such as 2), this will be true when the number of solutions grows at least twice as fast as the number of nodes. BFS runs faster than DFBB when the number of solutions grows slower than the number of nodes. Note that BFS is impractical when M exceeds the available memory.

2. BFS vs ID. ID runs faster than BFS when DI ≤ rM. This will be true roughly when b/(b - 1) ≤ r. Otherwise BFS will run faster than ID. Again BFS may still be impractical to use due to memory limits.

3. ID vs DFBB. ID runs faster than DFBB when b > 2 and s < b. DFBB runs faster than ID when b < 2 and s > 2b. When b < 2 and s < b, both algorithms will perform poorly. For other cases, there is a transition between the two algorithms.

To summarize the results of the above analysis:

- DFBB is preferable when the increase in solution density is larger than the heuristic branching factor.
- ID is preferable when the heuristic branching factor is high and the density of solutions is low.
- BFS is useful only when both the density of solutions and the heuristic branching factor are very low.

Experimental Results

We chose three problems to experimentally validate our results: the Fifteen Puzzle, the Traveling Salesperson Problem (TSP), and maze search. They have different heuristic branching factors and solution densities. The Fifteen Puzzle is a classical search example.
It consists of a 4 x 4 square with 15 numbered square tiles and a blank position. The legal operators are to slide any tile horizontally or vertically adjacent to the blank position into the blank position. The task is to map an arbitrary initial configuration into a particular goal configuration, using a minimum number of moves. A common heuristic function for this problem is called Manhattan Distance: it is computed by determining the number of grid units each tile is away from its goal position, and summing these values over all tiles. IDA*, using the Manhattan Distance heuristic, is capable of finding optimal solutions to randomly generated Fifteen Puzzle problem instances within practical resource limits [Korf, 1985]. A* is completely impractical due to the memory required, up to six billion nodes in some cases. In addition, IDA* runs faster than A*, due to reduced overhead per node generation, even though it generates more nodes.

We compared IDA* and DFBB on the ten easiest problems from [Korf, 1985], based on nodes generated by IDA*. For the initial bound in DFBB, we used twice the Manhattan Distance of the initial state. Table 1 shows that DFBB generates many times more nodes than IDA*. Their running times per node generation are roughly equal. The average heuristic branching factor of the Fifteen Puzzle is about 6, which is relatively high. The solution density is quite low, and actually decreases slightly as we go deeper into the search space. This explains why IDA* performs very well, while DFBB and A* perform poorly.

The Traveling Salesperson Problem (TSP) is to find a shortest tour among a set of cities, ending where it started. Each city needs to be visited exactly once in the tour. We compared all three algorithms on the Euclidean TSP. Between 10 and 15 cities were randomly located within a square, 2^15 units on a side, since our random number generator produced 16-bit random numbers.
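The Manhattan Distance heuristic described above for the Fifteen Puzzle is straightforward to state in code. A minimal sketch; the 16-tuple state encoding and the particular goal ordering are our own illustrative assumptions, not the paper's:

```python
def manhattan_distance(state, goal):
    """Sum, over all 15 tiles, of the grid distance between a tile's
    current position and its goal position; the blank (0) is ignored.
    States are 16-tuples listing the 4 x 4 board row by row."""
    goal_pos = {tile: i for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:               # don't count the blank
            continue
        g = goal_pos[tile]
        total += abs(i // 4 - g // 4) + abs(i % 4 - g % 4)
    return total
```

Since each move slides one tile by one grid unit, the heuristic can change by at most one per move, which is why it never overestimates the remaining move count.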
VEMPATY, KUMAR, & KORF 437

A partial contiguous tour was extended by adding cities to its end city, ordered by the nearest neighbor heuristic. The minimum spanning tree of the remaining cities, plus the two end cities of the current partial tour, was used as the heuristic evaluation function. Each data point in Table 2 is an average over 100 randomly generated problem instances. The first column gives the number of cities. The second column gives the cost of an optimal tour and the third column gives the number of mandatory nodes, or the number of nodes generated by A*. The fourth and fifth columns give the number of nodes generated by DFBB and IDA*, respectively. No data is given for IDA* for 13 through 15 cities, since it took too long to generate. Finally, the last column gives the ratio of the number of nodes generated by IDA* to the number of nodes generated by DFBB. The data demonstrates that DFBB is quite effective on this problem, generating only 10 to 20% more nodes than necessary. This is due to the high solution density, since at a depth equal to the number of cities, every node is a solution. The data also shows that IDA* performs very poorly on this problem, generating hundreds of times more nodes than DFBB. This is due to the low heuristic branching factor, since there are relatively few ties among nodes with the same cost value. Similar results were observed for the Floorplan Optimization Problem, using the best known heuristic functions in [Wimer et al., 1988].

A new search algorithm: DFS*

Our discussion so far suggests that DFBB and ID are complementary to each other. ID starts with a lower bound on cost, and increases it until it is equal to the optimal cost. DFBB starts with an upper bound on cost, and decreases it until it is equal to the optimal cost. Since ID conservatively increases bounds, it does not expand any nodes costlier than the optimal solution, but it may repeat work if the heuristic branching factor is low.
DFBB does not repeat work, but expands nodes costlier than the optimal solution. Such wasteful node expansion is high when the initial bound it starts with is much higher than the final cost, and when the solution density is low.

This suggests a hybrid algorithm, which we call DFS* to suggest a depth-first algorithm that is admissible. DFS* initially behaves like iterative deepening, but increases the cost bounds more liberally than necessary, to minimize repeated node expansions [Korf, 1988]. When a solution is found that is not known to be optimal, DFS* then switches over to the DFBB algorithm. The DFBB phase starts with the cost of this solution as its initial bound and continues searching, reducing the upper bound as better solutions are found. Also, if the cost bound selected in any iteration of the ID phase is greater than an alternate upper bound, which may be available by other means, then DFS* switches over to the DFBB algorithm. A very similar algorithm, called MIDA*, was independently discovered by Benjamin Wah [Wah, 1991].

DFS* is a depth-first search strategy and it finds optimal solutions given non-overestimating heuristics. DFS* may be useful on certain problems where both DFBB and ID perform poorly. For example, when both the heuristic branching factor and solution density are low (b < 2 and s < 2b), DFS* can perform well provided reasonable increments in bounds can be found.

Define B as the ratio between the number of nodes first generated by successive iterations of ID. If we set successive thresholds to the minimum costs that exceeded the previous iteration, then B = b, the heuristic branching factor. By manipulating the threshold increments in DFS*, we can change the value of B. Too low a value of B results in too much repeated work in early iterations. Too high a value of B results in too much extra work in the final iteration, generating nodes with higher costs than the optimal solution cost.
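The hybrid just described — an ID phase with liberal (here, doubling) threshold increases that switches to DFBB-style bound tightening as soon as a solution is found — can be sketched on an abstract search tree. The `children`/`cost` callable interface is our own toy illustration, not the authors' implementation; it assumes `cost` is non-decreasing along any path and that leaves are solutions.

```python
def dfs_star(root, children, cost, threshold0):
    """DFS* sketch: iterative deepening with doubled thresholds,
    tightening the bound branch-and-bound style once a solution appears."""

    def bounded_dfs(node, bound):
        # One depth-first pass under `bound`.  Returns the best leaf
        # found, its cost, and the cheapest cost that was pruned.
        best, best_cost, min_over = None, float('inf'), float('inf')
        stack = [node]
        while stack:
            n = stack.pop()
            c = cost(n)
            if c > bound:
                min_over = min(min_over, c)   # remember for next threshold
                continue
            kids = list(children(n))
            if not kids:                      # leaf = candidate solution
                if c < best_cost:
                    best, best_cost = n, c
                    bound = c                 # DFBB phase: tighten bound
            else:
                stack.extend(kids)
        return best, best_cost, min_over

    bound = threshold0
    while True:
        best, best_cost, min_over = bounded_dfs(root, bound)
        if best is not None:
            return best, best_cost
        if min_over == float('inf'):
            return None, float('inf')         # tree has no solution
        bound = max(2 * bound, min_over)      # liberal (doubling) increase
```

With monotone costs the result is optimal: anything pruned in the final pass had cost above the final bound, and hence above the returned solution's cost.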
What value of B produces optimal performance, relative to BFS, in the worst case? Let d be the first cost level that contains an optimal solution. In the worst case for DFS*, BFS will not expand any nodes at level d, but all nodes at level d - 1. The number of such nodes is approximately B^d/(B - 1). Similarly, in the worst case, DFS* will expand all nodes at level d. Thus DFS* expands approximately B^d · B^2/(B - 1)^2 nodes. The ratio of the nodes expanded by DFS* to the nodes expanded by BFS is B^2/(B - 1). Taking the derivative of this function with respect to B gives us B(B - 2)/(B - 1)^2. Setting this derivative equal to zero and solving for B gives us B = 2. In other words, to optimize the ratio of the nodes generated by DFS* to BFS in the worst case, we'd like B to be 2. If we substitute B = 2 back into B^2/(B - 1), we get a ratio of 4. In other words, if B = 2, then in the worst case, the ratio of DFS* to BFS will be only 4. This analysis was motivated by the formulation of the problem presented in [Wah, 1991].

To achieve this value of B, the approximate increment in cost can be estimated by sampling the distribution of nodes across a cost range during an iteration, as follows. We divide the total cost range between 0 and maxcost into several parts, and associate a counter with each range. Each counter keeps track of the number of nodes generated in the corresponding cost range. Any time a node is generated and its cost computed, the appropriate counter is incremented. This data can be used to find a cost increment as close as possible to the desired increase in the number of nodes expanded.

A much simpler, though less effective, heuristic would be to increment successive thresholds to the maximum value that exceeded the previous threshold. This guarantees a value of B that is at least as large as the brute-force branching factor.

To evaluate DFS* empirically, we considered the problem of finding the shortest path between two points in a maze.
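Before turning to the maze experiments, the worst-case argument above is easy to check numerically; the grid of candidate B values in this sketch is arbitrary.

```python
def worst_case_ratio(B):
    """Worst-case ratio of nodes expanded by DFS* to BFS: B^2/(B - 1)."""
    return B * B / (B - 1)

# Scan a grid of B values; the minimum lands at B = 2, with ratio 4.
candidates = [1.25 + 0.25 * i for i in range(16)]   # 1.25, 1.5, ..., 5.0
best_B = min(candidates, key=worst_case_ratio)
```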
This problem models the task of navigation in the presence of obstacles. We implemented IDA*, DFBB, A* and DFS*, and tested them on 120 x 90 mazes, all of which were drawn randomly by the Xwindows demo package Xmaze. Figure 1 shows an example of a maze. The Manhattan distance heuristic was used to guide the search. For this problem the heuristic branching factor is typically low, as is the solution density. The starting nodes were close to the centers of the mazes, and a series of experiments were performed, each with the goal node being farther away from the start node. When the goal node is not too far away, the boundary walls are not encountered often during the search, minimizing boundary effects.

Table 3 summarizes the number of nodes expanded by each algorithm, averaged over 1000 randomly generated problem instances. In these experiments, the cost bound for DFS* was doubled after each iteration. DFS* outperformed the other depth-first algorithms, as predicted by our analysis, and performed close to A* on these mazes. The space requirements of A* are very high; it requires 1 MByte of memory for handling a 200 x 200 maze.

Conclusions

We analyzed and compared three important heuristic search algorithms, DFBB, ID and BFS, and identified their domains of effectiveness in terms of heuristic branching factor and solution density. DFBB is the best when solution density is high. ID is the best when heuristic branching factor is high. Since both of them use a depth-first search strategy, they overcome the memory limitations of BFS and hence can solve larger problems. We also identified a natural relation between them and presented a new hybrid depth-first search algorithm, DFS*, that is suitable when both heuristic branching factor and solution density are low. We experimentally demonstrated these results on three natural problems.

References

Dechter, R. and Pearl, J. 1985. Generalized best-first search strategies and the optimality of A*.
Journal of the Association for Computing Machinery Vol. 32, No. 3:505-536.

Dijkstra, E. W. 1959. A note on two problems in connexion with graphs. Numerische Mathematik Vol. 1:269-271.

Hart, P.E.; Nilsson, N.J.; and Raphael, B. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics Vol. 4, No. 2:100-107.

Korf, Richard E. 1985. Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence 27:97-109.

Korf, Richard 1988. Optimal path finding algorithms. In Kanal, Laveen and Kumar, Vipin, editors 1988, Search in Artificial Intelligence. Springer-Verlag, New York.

Kumar, Vipin 1987. Branch-and-bound search. In Shapiro, Stuart C., editor 1987, Encyclopaedia of Artificial Intelligence: Vol. 2. John Wiley and Sons, Inc., New York. 1000-1004.

Lawler, E. L. and Wood, D. E. 1966. Branch-and-bound methods: A survey. Operations Research 14.

Patrick, B.G.; Almulla, M.; and Newborn, M.M. 1991. An upper bound on the complexity of iterative-deepening-A*. Annals of Mathematics and Artificial Intelligence. To appear.

Stickel, M.E. and Tyson, W.M. 1985. An analysis of consecutively bounded depth-first search with applications in automated deduction. In IJCAI. 1073-1075.

Wah, Benjamin W. 1991. MIDA*: An IDA* search with dynamic control. Technical report, Coordinated Science Laboratory, University of Illinois, Urbana, Ill.

Wimer, S.; Koren, I.; and Cederbaum, I. 1988. Optimal aspect ratios of building blocks in VLSI. In 25th ACM/IEEE Design Automation Conference. 66-72.

Figure 1: Example of a maze. S is the starting point. G is the goal. The path . . . is the shortest solution.

Table 1: Experimental results on 15-puzzle. (The columns were: problem number, solution cost h*, ID nodes, DFBB nodes, and their ratio; the data rows were lost in extraction.)
Number of cities | Optimal cost | A* nodes expanded | DFBB nodes expanded | ID nodes expanded | ID/DFBB ratio
10 |  93421 |  1408 |  1552 |  325575 | 210
11 |  97493 |  2843 |  3126 | 1952350 | 625
12 | 100511 |  5007 |  5576 | 5084812 | 912
13 | 103834 |  6806 |  8163 |      NA |   -
14 | 107524 | 16849 | 19133 |      NA |   -
15 | 111084 | 45211 | 49833 |      NA |   -

Table 2: Experimental results on the Traveling Salesperson Problem. Each row shows the average value over 100 runs. The entries indicated NA mean that the experiment was abandoned because it took too long.

Table 3: Experimental results for finding optimal routes in mazes. The data for each cost range was obtained by averaging over 1000 randomly generated mazes. (The columns were: solution cost range, DFBB nodes expanded, ID nodes expanded, DFS* nodes expanded, and A* nodes expanded, over the ranges 5-15, 15-50, 50-100, 100-200, and 200-500; most data cells were lost in extraction.)
Is there any need for domain-dependent control information?

Matthew L. Ginsberg* and Donald F. Geddis†
Computer Science Department
Stanford University
Stanford, California 94305
ginsberg@cs.stanford.edu

Abstract

No.

1 Introduction

The split between base-level and metalevel knowledge has long been recognized by the declarative community. Roughly speaking, base-level knowledge has to do with information about some particular domain, while metalevel knowledge has to do with knowledge about that information. A typical base-level fact might be, "Iraq invaded Kuwait," while a typical metalevel fact might be, "To show that a country c is aggressive, first try to find another country that has been invaded by c."

Base-level information is of necessity domain-dependent, since the facts presented will involve the particular domain about which the system is expected to reason. Metalevel information, however, can be either domain-dependent (as in the example of the previous paragraph and as typically described in [Silver, 1986]), or domain-independent. Typical domain-independent metalevel rules are the cheapest-first heuristic or the results found in Smith's work on control of inference [Smith, 1986; Smith and Genesereth, 1985; Smith et al., 1986].

In this paper, we make an observation and a claim. The observation is that there are in fact two distinct types of metalevel information. On the one hand, one can have metalevel information about one's base-level knowledge itself; on the other, one can have control information about what to do with that knowledge. This is a distinction that has typically been blurred by the AI community; we intend to focus on it in this paper. We will refer to the second type of metalevel knowledge as control knowledge, and will refer to knowledge about knowledge (and not about what to do with that knowledge) as modal.
*This work has been supported by the Air Force Office of Scientific Research under grant number 90-0363, by NSF under grant number IRI89-12188 and by DARPA/RADC under grant number F30602-91-C-0036.
†Supported by a National Science Foundation Graduate Fellowship.

We choose this term because sentences expressing modal knowledge typically involve the use of predicate symbols that have other sentences as arguments. Thus a typical modal sentence might be, "I know that Iraq invaded Kuwait," or "I don't know of anyone that Kuwait has invaded." Note that the information here doesn't refer to the domain so much as it does to our knowledge about the domain; nor is it control information telling us what to do with this knowledge. It simply reports on the state of our information at some particular point in time. In general, we will describe as modal all information describing the structure of our declarative knowledge in some way.

The claim we are making - and we intend to prove it - is that there is no place in declarative systems for domain-dependent control information. Rather, we suggest that every bit of information of this sort is in fact a conflation of two separate facts - a domain-independent control rule and a domain-dependent modal fact telling us that the rule can be applied. A similar observation has already been made by David Smith:

    Good control decisions are not arbitrary; there is always a reason why they work. Once these reasons are uncovered and recorded, specific control decisions will follow logically from the domain-independent rationale, and simple facts about the domain. [Smith, 1985, p. 12]

This is an observation with far-reaching consequences, including the following:

1. There is no need for a "metametalevel" or anything along those lines. Domain-independent control information need not be refined using higher-order information.
A similar conclusion has been reached in a decision-theoretic setting by Barton Lipman [Lipman, 1991].

2. Current work on learning need not focus on the development of new control rules (as in [Minton et al., 1989]), but should instead focus on the development of new base-level or modal information. This may make the connection to existing ideas (such as caching) somewhat easier to exploit and understand.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

3. Current work on the control of inference should restrict its focus to domain-independent techniques. There is much to be done here; Smith's work only scratches the surface. At a minimum, general-purpose information can be extracted from the domain-specific control rules currently incorporated into many declarative systems.

4. The recognition that the control of reasoning should proceed by applying domain-independent rules to domain-dependent modal and base-level information can enhance the power of systems that control reasoning in this way. This is a consequence of the fact that many declarative databases already contain such domain-dependent information without exploiting its implications for search control. In [Ginsberg, 1991], it is suggested that the real role of nonmonotonic reasoning in AI is to focus inference, and that suggestion is typical of what we are proposing here.

The outline of this paper is as follows: We begin in the next section with an example, showing the replacement of a specific control rule by a domain-independent one using domain-dependent modal information. We then go on to show that this procedure generalizes with minimal computational cost to all domain-specific control information. The general construction is extremely weak, and Section 3 examines a more powerful application related to the work in [Elkan, 1990; Ginsberg, 1991; Minton et al., 1989]. Concluding remarks are contained in Section 4.
2 The basic result

Consider the following declarative database, written in PROLOG form:

    hostile(c) :- allied(d,c), hostile(d).
    hostile(c) :- invades-neighbor(c).
    allied(d,c) :- allied(c,d).
    invades-neighbor(Iraq).
    allied(Iraq,Jordan).

Informally, what we are saying here is that a country is hostile if it is allied to a hostile country, or if it invades its neighbor. Alliance is commutative, and Iraq has invaded its neighbor and is allied with Jordan.

Now suppose that we are given the query hostile(Jordan); is Jordan hostile? In attempting to prove that it is, it is important that we eventually expand the subgoal invades-neighbor(Iraq) rather than only pursue the infinite chain of subgoals of the form

    hostile(Jordan), hostile(Iraq), hostile(Jordan), hostile(Iraq), ...

We might describe this in a domain-dependent fashion by saying that subgoals involving invades-neighbor should be investigated before subgoals involving hostile. (There are obviously many other descriptions that would have the same effect, but let us pursue this one.)

Now consider the following geometric example:

    acute(t) :- congruent(u,t), acute(u).
    acute(t) :- equilateral(t).
    congruent(u,t) :- congruent(t,u).
    equilateral(T1).
    congruent(T1,T2).

A triangle is acute if it is congruent to an acute triangle or if it is equilateral. Triangles T1 and T2 are congruent, and T1 is equilateral. Is T2 acute? This example is different from the previous one only in that the names of the predicate and object constants have been changed; from a machine's point of view, this is no difference at all. It follows from this that if the control rule delaying subgoals involving the predicate hostile is valid in the initial example, a rule delaying subgoals involving acute will be valid in the new one. What we see from this is that the control rules are not operating on base-level information so much as they are operating on the form of our declarative database.
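The claim that the two databases are identical up to renaming can be checked mechanically. A small sketch; the tuple encoding of clauses is our own, purely illustrative convention (each clause is a head goal paired with a list of body goals, and a goal is a predicate name with an argument tuple):

```python
hostility_db = [
    (('hostile', ('c',)), [('allied', ('d', 'c')), ('hostile', ('d',))]),
    (('hostile', ('c',)), [('invades-neighbor', ('c',))]),
    (('allied', ('d', 'c')), [('allied', ('c', 'd'))]),
    (('invades-neighbor', ('Iraq',)), []),
    (('allied', ('Iraq', 'Jordan')), []),
]

geometry_db = [
    (('acute', ('t',)), [('congruent', ('u', 't')), ('acute', ('u',))]),
    (('acute', ('t',)), [('equilateral', ('t',))]),
    (('congruent', ('u', 't')), [('congruent', ('t', 'u'))]),
    (('equilateral', ('T1',)), []),
    (('congruent', ('T1', 'T2')), []),
]

# The renaming of predicate, object, and variable symbols.
rename = {'hostile': 'acute', 'invades-neighbor': 'equilateral',
          'allied': 'congruent', 'Iraq': 'T1', 'Jordan': 'T2',
          'c': 't', 'd': 'u'}

def apply_renaming(db, mapping):
    """Rewrite every symbol in a clause database through `mapping`."""
    def sub(s):
        return mapping.get(s, s)
    return [((sub(p), tuple(map(sub, args))),
             [(sub(q), tuple(map(sub, bargs))) for q, bargs in body])
            for (p, args), body in db]
```

Renaming the hostility database yields the geometry database exactly, which is the sense in which a control rule valid for one must be valid for the other.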
To make this remark more precise, we need to discuss the structure of the proof space a bit more clearly. We propose to do this by actually axiomatizing a description of this proof space. This axiomatization will refer to the structure of the proof space only and will therefore be modal information in the sense of the introduction; since nothing will be said about how to search the proof space, no domain-dependent control information will be used.

The language we will use will label a node in the proof space by a set {p_1, ..., p_n}, where all of the p_i need to be proved in order for the proof to be complete. A node labelled with the empty set is a goal node. We will also introduce a predicate child, where child(n_1, n_2) means that the node n_2 is a child of the node n_1. Thus in the original example, we would have

    child({hostile(c)}, {invades-neighbor(c)})    (1)

saying that a child of the node trying to prove that c is hostile is a node trying to prove the subgoal that c has invaded its neighbor.

We can go further. By taking into consideration all of the facts in the database together with the inference method being used, we can delimit the extent of the child predicate exactly. This eventually leads to a large disjunction, one term of which is given by (1):

    child(m, n) ≡ [m = n ∪ {invades-neighbor(Iraq)}] ∨
                  [m = n ∪ {allied(Iraq, Jordan)}] ∨    (2)
                  ∃S. [m = S ∪ {allied(x, y)} ∧ n = S ∪ {allied(y, x)}] ∨ ...

GINSBERG & GEDDIS 453

The above expression refers to the predicate and object constants appearing in the initial database, in this case hostile, invades-neighbor, allied, Iraq and Jordan. We can abbreviate the large expression appearing in (2) to simply

    type_1(hostile, invades-neighbor, allied, Iraq, Jordan)    (3)

where we are using the subscript to distinguish databases of this form from databases of some other form.
Note that (3) is in fact a consequence of the form of the information in our database, so that we can either derive (3) before applying a (domain-independent) control rule in which type_1 appears, or derive and stash (3) when the database is constructed. The second example is similar, allowing us to derive

    type_1(acute, equilateral, congruent, T1, T2)

At this point, the hard work is done - we have formalized, via the type predicate, our observation that the two databases are the same. The control rule that we are using is now simply

    type_1(p1, p2, p3, o1, o2) ⊃ delay(p1)    (4)

indicating that for databases of this type, with the p_i being arbitrary predicate constants and the o_i being arbitrary object constants, subgoals involving the predicate p1 should be delayed. Of course, we still need to interpret delay in a way that will enable our theorem prover to make use of (4), but this is not the point. The point is that (4) is no longer a domain-specific control rule, but is now a domain-independent one. Any particular application of this domain-independent rule will make use of the modal information that a given database satisfies the predicate type_1 for some particular choice of constants. As we have already noted, this modal information can either be derived as the theorem prover proceeds or can be cached when the database is constructed.¹

Proposition 2.1 Any domain-dependent control rule can be replaced with a domain-independent control rule and a modal sentence describing the structure of the search space being expanded by the theorem prover. The computational cost incurred by this replacement is that of a single inference step, and the domain-independent control rule will be valid provided that the domain-dependent rule was.

Proof.
Given a specific database of a particular form, it is clearly possible to delimit in advance the nature of any single inference, thereby obtaining an expression such as (2); this expression can then be abbreviated to a type expression such as (3). The domain-dependent control rule can now be replaced by a domain-independent one as in (4).

The incremental cost of using the domain-independent control rule instead of the domain-dependent one will be the expense of checking the antecedent of a rule such as (4); if we choose to cache the modal information describing the structure of the database, only a single inference will be needed. In addition, since the validity of a control rule depends only on the nature of the associated search space and never on the meanings of the symbols used, it follows that the domain-independent control rule will be valid whenever the domain-dependent one is. □

¹For efficiency reasons, we might want to use the control rule only when the database is known to be of this type. This could be encoded by using a modal operator of explicit knowledge to replace (4) with a more suitable expression.

3 A more interesting example

Although correct, the construction in the previous section is in many ways trivial, since in order to transfer a control rule from one domain to another we need to know that the two search spaces are identical. Indeed, this is the only circumstance under which we can be assured that the rule will remain valid when the domain changes.

The reason that our earlier construction is interesting, however, is that the intuitive arguments underlying domain-dependent control rules do not typically depend on the complete structure of the search space. In this section, we will consider a more typical example due to David Smith.
The example involves planning a trip to a distant location; let us suppose from Stanford to MIT.² The domain-dependent rule Smith presents is the following: When planning a long trip, plan the airplane component first.

Why is this? There are two reasons. Suppose that we form the tentative high-level plan of driving from Stanford to San Francisco airport, flying from there to Logan, and then driving to MIT. The decision to plan the airplane component of the trip first is based on the observations that:

The three legs of the journey are probably noninteracting. Except for scheduling concerns, the nature of our transportation to and from the airport is unrelated to our flight from San Francisco to Boston. It therefore makes sense to plan for each of the subgoals separately.

Airplane flights are tightly scheduled, whereas ground transportation is typically either loosely scheduled (because busses run frequently) or not scheduled at all (if we propose to drive to and from the airports involved). If we schedule the ground transportation first, we may be unable to find a flight that satisfies the induced constraints.

The observation that our three subgoals (drive to SFO, fly to Logan, drive to MIT) do not interact is little more than an application of the frame axiom, which says that once a subgoal has been achieved it will remain satisfied. (So that renting a car from Hertz will not magically teleport us from Boston to Miami; nor will driving to the airport cause United to go bankrupt.) What we are doing here is applying the control rule: When attempting to achieve a conjunctive goal, it is reasonable to attempt to achieve the conjuncts separately. Elkan also makes this observation in [Elkan, 1990].

²Why anyone would actually want to make this trip escapes us.
It is a base-level fact - the frame axiom - that justifies our confidence in this domain-independent control rule.³ The frame axiom justifies our belief that we will be able to achieve the subgoals separately, and therefore that planning for them separately is a computationally useful strategy. Thus Elkan's principle is in fact a special case of the approach that we are proposing; this observation is made in [Ginsberg, 1991] as well, where it is suggested that the true role of nonmonotonic reasoning in AI generally is to enable us to simplify problems in this way.

The reason that we plan for the airplane flight before planning for the other two subgoals is similar. Here, we note that solving the subgoal involving the airplane flight is unlikely to produce a new problem for which no solution exists, while solving the subgoals involving surface transportation may. So we are invoking the domain-independent principle: When solving a problem, prefer inference steps that are unlikely to produce insoluble subproblems.

This domain-independent control information is applied to the domain-dependent modal information that fly(SFO, Logan, t) is unlikely to have a solution if t is bound, while drive(Stanford, SFO, t) is likely to have a solution whether t is bound or not. Once again, we see that we are applying a domain-independent control rule to domain-dependent modal knowledge. Furthermore, the computational arguments in Proposition 2.1 remain relevant; it is no harder to cache partial information regarding the structure of the proof space (e.g., fly(SFO, Logan, t) is unlikely to have a solution if t is bound) than it is to cache a complete description as in (3).

³In fact, this control rule is not completely domain-independent, since it applies to planning only.
But it can easily be extended to the completely domain-independent rule that when attempting to solve a problem, if there is good reason to believe that the solution to a simpler problem will suffice, one should solve the simpler problem and then check to see if it is a solution to the original query.

An example due to Minton and appearing in [Minton et al., 1989] can be handled similarly. This example is from the blocks world, where Minton suggests that when planning to build a tower, one should expect to build it from the bottom up. We have chosen to discuss Smith's example in detail because only his rule is pure control information.⁴ In Minton's case, the fact that you will build the tower from the bottom up is no reason to do the planning by beginning with consideration of the bottom block, although Minton appears to use the rule in this fashion. After all, it is easy to imagine domains in which the nature of the tower is constrained by the properties of the uppermost object it contains; in this case it might well be advisable to do the planning from the top down even though the construction will inevitably be from the bottom up. Smith's example does not share this ambiguity; it really is the case that one plans long trips by beginning with any airplane component.

We should be careful at this point to be clear about what distinguishes the examples of this section from those that appear in Section 2. The basic result of Section 2 was the following:

    By constructing control rules that appeal to the precise form of a declarative database and by caching modal information about the form of the database being used in any particular problem, control rules can always be split into domain-independent control knowledge and domain-dependent base-level or modal knowledge. Further, this split incurs minimal computational expense.
The upshot of the examples we have discussed in this section is the following: Although the control rules used by declarative systems might in principle be so specific as to apply to only a single domain, the domain-independent control rules in practice appear to be general enough to be useful across a wide variety of problems. Whether or not this observation is valid is more an experimental question than a theoretical one; if the observation is valid, it reflects the fact that our world is benign enough to allow us to develop control rules of broad merit. The evidence that we have seen indicates that our world is this benign; all of the examples of control information that we have been able to identify in the AI literature (or in commonsense discourse) appeal to domain-independent control principles that have wide ranges of validity. Nowhere do we see the use of a control rule whose justification is so obscure that it is inaccessible to the system using it. The results in [Etzioni, 1990] also lend credence to this belief. ⁴As Smith himself has pointed out (personal communication). GINSBERG & GEDDIS 455 Etzioni's STATIC system showed that whether or not a rule learned by PRODIGY was likely to be useful in practice could be determined by a static examination of the PRODIGY search space. Since Etzioni's system examines PRODIGY's search space only to determine whether or not it is recursive, we see once again a very general domain-independent rule (involving the need to avoid recursive problem descriptions) being used to construct apparently domain-specific control information. 4 Conclusion Our aim in this paper has been to argue that reasoning systems have no need for domain-dependent control rules; rather, they apply domain-independent control techniques to domain-dependent modal and base-level information. In retrospect, the viability of this approach is obvious; what is interesting are the consequences of this view.
These were mentioned in the introduction, but we would like to explore them in somewhat more detail here. Higher orders of meta One of the difficulties with control reasoning thus far has been that it seems to require a "metametalevel" in order to decide how to do the reasoning about the control information itself, and so on. Domain-independent control information eliminates this philosophical difficulty. Since the control information is completely general, there is no reason to modify it in any way. The implemented system must be faithful to this control information but need have no other constraints placed on it. Another way to look at this is the following: Provided that no new control information is learned, existing metalevel information can be "compiled" in a way that eliminates the need for runtime evaluation of the tradeoffs between competing control rules. The observations in this paper allow us to conclude that the new control information learned by existing systems is in reality base-level or modal information instead; viewing it in this way allows us to dispense with any reexamination of our existing control decisions. This reexamination is, of course, the "metametalevel" analysis that we are trying to avoid. Research on control of inference There has been little work to date on domain-independent techniques for the control of search. Smith investigates the question in a declarative setting in [Smith, 1986; Smith and Genesereth, 1985; Smith et al., 1986], while authors concerned with constraint satisfaction have considered these issues from a somewhat different viewpoint [Dechter, 1990; Dechter and Meiri, 1989; Dechter and Pearl, 1988; Ginsberg et al., 1990; and others]. The ideas in this paper suggest an obvious way in which these results can be extended.
If the existing domain-dependent control information used by various systems does in fact rest on general domain-independent principles, these general principles should be extracted and made explicit. In many cases, doing so may involve extending the declarative language being used to include information that is probabilistic [Smith, 1986; Smith and Genesereth, 1985; Smith et al., 1986] or default [Elkan, 1990; Ginsberg, 1991] in nature; this can be expected to lead to still further interesting research questions. Domain-dependent information The recognition that the domain-dependent information used by a declarative system is all base-level or modal has interesting implications in its own right. The most immediate of these is the recognition that learning systems should not be attempting to learn control rules directly, but should instead be focusing on the base-level information that underlies them. Thus a travel-planning system should be trying to recognize that airline flights are scheduled while automobile trips are not, as opposed to the specific rule that one should plan around airline flights. There are some immediate benefits from this approach. The first is that domain-independent control rules may be able to make use of domain-dependent information that has been included in the system for reasons having nothing to do with the control of search. In Section 3, for example, we saw that the frame axiom, through its implication that conjunctive subgoals can be achieved separately, had significant consequences in planning search. These consequences are obtained automatically in our approach. A second benefit is that learned modal knowledge can be used by any domain-independent control rules included in the system. If domain-specific control rules are learned instead, other consequences of the modal knowledge underlying these rules will not be exploited.
Third, subtle interactions among domain-dependent control rules can now be understood in terms of simpler base-level information. Thus if my reason for going to MIT is to give a presentation of some sort, I will typically plan the time of the presentation first, only then scheduling the airplane and other parts of my travel. The reason is the base-level fact that the folks at MIT are likely to be even more tightly scheduled than are the airlines. The apparent conflict between the control rules, "Plan airplane flights first," and "Plan talks first," is recognized as tension between the base-level facts, "Airplane flights are tightly scheduled," and "Academic talks are tightly scheduled." Assuming that our declarative language is able to compare the truth values assigned to these two latter sentences, the conflict will be resolved in a straightforward fashion. Finally, a focus on the ability to derive base-level information may allow us to address an aspect of human problem solving that has been ignored by the AI community thus far. Specifically, the results of the problem-solving effort themselves frequently impact our base-level expectations. As an example, after an hour spent fruitlessly searching for a proof of some theorem, my belief in the theorem may waver and I may decide to look for a counterexample. What I am doing here is changing my base-level beliefs based not on new information, but on the results of my theorem-proving efforts directly; the new base-level expectations then refocus my attention via their relationship to domain-independent rules (here, information about when one should try to prove a theorem and when one should attempt to prove its negation).
Domain-independent control information that is capable of dealing with our new base-level expectations will duplicate behavior of this sort.⁵ In all of these cases, the advantages we have identified are an immediate consequence of the observation that reasoning systems should work with a domain-dependent base level (including modal information) and domain-independent control information. Acknowledgement We would like to thank Alon Levy, Nils Nilsson and David Smith for many enlightening discussions. The comments of two anonymous reviewers are also gratefully acknowledged. ⁵The blind search technique of iterative broadening [Ginsberg and Harvey, 1990] appears to be based on a similar observation. References [Dechter and Meiri, 1989] Rina Dechter and Itay Meiri. Experimental evaluation of preprocessing techniques in constraint satisfaction problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 271-277, 1989. [Dechter and Pearl, 1988] Rina Dechter and Judea Pearl. Network-based heuristics for constraint-satisfaction problems. Artificial Intelligence, 34:1-38, 1988. [Dechter, 1990] Rina Dechter. Enhancement schemes for constraint processing: Backjumping, learning, and cutset decomposition. Artificial Intelligence, 1990. To appear. [Elkan, 1990] Charles Elkan. Incremental, approximate planning. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 145-150, 1990. [Etzioni, 1990] Oren Etzioni. Why PRODIGY/EBL works. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 916-922, 1990. [Ginsberg and Harvey, 1990] Matthew L. Ginsberg and Will D. Harvey. Iterative broadening. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 216-220, 1990. [Ginsberg et al., 1990] Matthew L. Ginsberg, Michael Frank, Michael P. Halpin, and Mark C. Torrance. Search lessons learned from crossword puzzles.
In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 210-215, 1990. [Ginsberg, 1991] Matthew L. Ginsberg. The computational value of nonmonotonic reasoning. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, Boston, MA, 1991. [Lipman, 1991] Barton L. Lipman. How to decide how to decide how to ...: Modeling limited rationality. Econometrica, 1991. To appear. [Minton et al., 1989] Steve Minton, Jaime G. Carbonell, Craig A. Knoblock, Daniel R. Kuokka, Oren Etzioni, and Yolanda Gil. Explanation-based learning: A problem solving perspective. Artificial Intelligence, 40:63-118, 1989. [Silver, 1986] Bernard Silver. Meta-level Inference. Elsevier, New York, 1986. [Smith and Genesereth, 1985] David E. Smith and Michael R. Genesereth. Ordering conjunctive queries. Artificial Intelligence, 26(2):171-215, 1985. [Smith et al., 1986] David E. Smith, Michael R. Genesereth, and Matthew L. Ginsberg. Controlling recursive inference. Artificial Intelligence, 30:343-389, 1986. [Smith, 1985] David E. Smith. Controlling Inference. PhD thesis, Stanford University, August 1985. [Smith, 1986] David E. Smith. Controlling backward inference. Knowledge Systems Laboratory Report 86-68, Stanford University, June 1986.
Optimal Satisficing Tree Searches Dan Geiger and Jeffrey A. Barnett Northrop Research and Technology Center One Research Park Palos Verdes, CA 90274 Abstract We provide an algorithm that finds optimal search strategies for AND trees and OR trees. Our model includes three outcomes when a node is explored: (1) finding a solution, (2) not finding a solution and realizing that there are no solutions beneath the current node (pruning), and (3) not finding a solution but not pruning the nodes below. The expected cost of examining a node and the probabilities of the three outcomes are given. Based on this input, the algorithm generates an order that minimizes the expected search cost. Introduction Search for satisfactory solutions rather than optimal ones is common in many reasoning tasks. For example, a theorem prover may search for an acceptable proof although that proof is not necessarily the shortest possible. Similarly, planning the way home from a friend's house does not require us to look for the shortest path; any reasonable path suffices. Simon and Kadane (1975) examine satisficing search using a simple gold-digging example: An unknown number of treasure chests are randomly buried at some of 42 sites, but neither the sites nor the depth of burial are known with certainty. At each site, a sequence of one-foot slices can be excavated, and a treasure may be disclosed by the removal of any one of these slices. The probability that a treasure lies just below each slice is known, as is the cost of excavating that slice. Which search strategy minimizes the expected cost to find a treasure? If slices can be excavated in arbitrary order, the optimal search strategy is to excavate slices in decreasing order of their benefit-to-cost ratios. However, there is a constraint: a slice can be excavated only after all slices above it are excavated. Consequently, a greedy approach - selecting the currently most promising slice - is not adequate.
One should prefer to excavate a slice with a low benefit-to-cost ratio if a sufficiently promising slice lies under it. Simon and Kadane provide a method to find excavation sequences with the least expected cost to find a treasure. This article defines a more detailed model of search where excavating a slice may prune search beneath that slice. Simon and Kadane's characterization of optimal excavation sequences is shown valid in our model, and a variant of Garey's (1973) algorithm is developed to find these sequences for trees. An example of an OR graph is given where the optimal strategy must be constructed dynamically. The example stands in sharp contrast to Simon and Kadane's model, where optimal search strategies are determined before search starts. Finally, optimal searches of AND-OR trees are shown to require dynamic strategies too. Search of OR Trees: Preliminaries We extend Simon and Kadane's search model to allow three rather than two outcomes for excavating a slice: (1) a treasure is found, (2) a treasure is not found and it is realized that there are no treasures beneath the current slice (pruning), and (3) a treasure is not found but one can still be found below. In the latter case, several alternatives are revealed for further digging. The respective probabilities of the three outcomes for the slice s are p+(s), p-(s), and p0(s). These outcomes are assumed to be mutually exclusive and exhaustive and, hence, p+(s) + p-(s) + p0(s) = 1. Simon and Kadane exclude the second outcome because they do not model pruning. Metaphorically, if a treasure is not found at a slice, then either (1) some "doors" to new slices open - a situation that corresponds to exposing the immediate children of a node in a search tree - or (2) no doors open - a situation that corresponds to either reaching a leaf node or pruning deeper search through that node. We assume that each slice is part of a single site and that each slice can be reached by only one path from the surface.
Consequently, a site is a metaphor for a tree and multiple sites for a forest.¹ More general searches are discussed later. ¹The word-pairs site/tree, slice/node, and excavation/search are used interchangeably throughout this article to emphasize the analogy between the gold-digging example and tree searches. GEIGER & BARNETT 441 From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved. A sequence of slices b = s1 ... sr is a (search) strategy when (1) the si are distinct slices and (2) all slices above each si are in b and all precede si. Define φ+(s) = p+(s)/c(s) as a benefit-to-cost ratio, with the assumption that p+(s) ≠ 1 and c(s) ≠ 0. These assumptions entail, respectively, that no slice contains a treasure with certainty and that no slice is excavated for free. Assume, also, that the probabilities and cost for one slice are independent of the outcome of excavating other slices. The cost of a strategy b = s1 ... sr, denoted c(b), is computed by

c(b) = Σ_{i=1..r} β(si | s1 ... si-1) · c(si),   (1)

where β(x | s1 ... sz) stands for the probability that slice x is excavated given that the strategy starts with s1 ... sz. Similarly, the probability that a strategy b unearths a treasure, denoted p+(b), is computed by

p+(b) = Σ_{i=1..r} β(si | s1 ... si-1) · p+(si).

Formulas for β are developed below. For each strategy b, define φ+(b) = p+(b)/c(b) and q+(b) = 1 - p+(b). The problem is to find a strategy having the least expected cost, i.e., to find a strategy b° such that c(b°) ≤ c(b) for every strategy b. In Simon and Kadane's search model, a slice si in a strategy b = s1 ... sr is excavated if and only if none of s1 ... si-1 contain a treasure. In this case, the expected cost of b is derived from Eq. (1) by substituting

β(si | s1 ... si-1) = Π_{j=1..i-1} q+(sj).   (2)

This equation states that the probability that si is excavated equals the probability that no treasure is found in slices s1 ... si-1.
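As a quick sanity check on Eqs. (1) and (2), the no-pruning expected cost of a fixed strategy can be sketched in a few lines of Python. The list-of-pairs encoding of a strategy is our own illustrative choice, not the paper's.

```python
# Sketch: expected cost (Eq. 1) and success probability of a fixed strategy
# b = s1...sr in the no-pruning model, where beta(si | s1...si-1) is the
# product of q+(sj) = 1 - p+(sj) over the preceding slices (Eq. 2).
def strategy_stats(strategy):
    """strategy: list of (p_plus, cost) pairs, in excavation order."""
    reach = 1.0          # probability that the current slice is excavated
    expected_cost = 0.0
    p_success = 0.0
    for p_plus, cost in strategy:
        expected_cost += reach * cost    # contribution to Eq. (1)
        p_success += reach * p_plus
        reach *= 1.0 - p_plus            # Eq. (2): multiply in q+(s)
    return expected_cost, p_success
```

For instance, two slices each with p+ = .5 and cost 2 give expected cost 2 + .5·2 = 3 and success probability .75; putting the higher-ratio slice first never increases the expected cost.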
In our search model, where pruning is permitted, the expected cost of a strategy is still given by Eq. (1); however, the expression for β is more complex. A slice is excavated if and only if every slice above it opens its doors (i.e., the path to that slice is unearthed) and no treasure is found by prior excavation. When there is only one site, a tree T, then β is defined by

β(si | s1 ... si-1) = Π_{j=1..n} [ p0(aj) · Π_{d ∈ K(aj), d ≠ aj+1} Q+(d) ],   (3)

Q+(d) = p-(d) + p0(d) · Π_{x ∈ K(d)} Q+(x),

where K(d) is the set of children of node d that are in {s1 ... si-1} and a1 ... an is the path from the root of T to si = an+1. The formula for Q+(d) computes the probability that a subtree rooted at node d does not contain a treasure; when d is a leaf node, then Q+(d) = p-(d) + p0(d) = q+(d). When there are several sites, i.e., a forest, the expression for β(si | s1 ... si-1) in Eq. (3) is multiplied by Q+(r) for each root node r in s1 ... si-1 besides the root of T. The original formula calculates the probability that a path to si is unearthed and that no treasure is found in T prior to excavating si. The additional factors account for the assertion that no treasure is found at the other sites either. Thus, the calculation of β in Eq. (3) depends on the topology of the sites as just described and on si itself, because its ancestors a1 ... an are distinguished in the formula. Eq. (2) depends on neither. Search of OR Trees: An Algorithm A brute-force approach for choosing the best excavation strategy computes the cost of each strategy and chooses the least expensive. Fortunately, when two strategies are identical except that two adjacent slices are switched, one can choose between the two strategies without computing their expected costs; merely compare the benefit-to-cost ratios, φ+, of the slices that are switched, and choose the strategy where the slice with the highest ratio is excavated first. This local property facilitates a polynomial-time algorithm to find an optimal strategy.
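The recursive failure probability Q+(d) defined above translates directly into code. This sketch uses our own encoding (probability tables and children maps as plain dicts); it computes the probability that the already-searched part of the subtree below d yields no treasure.

```python
# Sketch: Q+(d) = p-(d) + p0(d) * product of Q+(x) over the searched
# children x in K(d). A leaf reduces to Q+(d) = p-(d) + p0(d) = q+(d).
def q_plus(d, p_minus, p_zero, children):
    prod = 1.0
    for x in children.get(d, ()):   # K(d): searched children of d
        prod *= q_plus(x, p_minus, p_zero, children)
    return p_minus[d] + p_zero[d] * prod
```

Restricting `children` to the slices already excavated is the caller's responsibility in this sketch, matching the definition of K(d).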
The next theorem spells out this property. Theorem 1 If b = s1 ... sr is a strategy and b' is a strategy obtained from b by switching two adjacent slices, si and si+1, then c(b) < c(b') if and only if φ+(si) > φ+(si+1), and c(b) = c(b') if and only if φ+(si) = φ+(si+1). Proof: Let γ be the subsequence of b that precedes si si+1. The expected costs of γ si si+1 and γ si+1 si are divided into contributions from three mutually exclusive situations: (1) neither si nor si+1 can be excavated because either an ancestor of each failed to open its doors or a treasure was found in one of γ's slices, (2) only one of the two slices can be excavated, and (3) both slices can be excavated. The expected costs of γ si si+1 and γ si+1 si are identical in the first two cases because changing the position of slices that are not excavated cannot change the expected cost of a strategy. In the third case, the expected costs are given by

c(γ si si+1) = c(γ) + c(si) + q+(si) · c(si+1)
c(γ si+1 si) = c(γ) + c(si+1) + q+(si+1) · c(si).

The first equation stems from the assumption that si is excavated with certainty after γ is excavated and from the fact that slice si+1 is excavated after si with probability q+(si). The probability of excavating si+1 after si is q+(si) = p-(si) + p0(si), and not just p0(si), because slice si is not on top of si+1. (Otherwise, b and b' could not both be strategies.) The second equation holds by symmetry of i and i+1. The theorem follows by taking the difference between these two equations. □ The basis for our algorithm to find optimal excavation sequences lies in the observation that a slice with the highest φ+ should be excavated immediately after the slice above it is excavated. Theorem 2 If sj is a slice with the highest φ+ and sj has an immediate parent si, then there exists an optimal strategy that includes the subsequence si sj. If sj is a top slice (root node), then there exists an optimal strategy that starts with sj.
Proof: If si is an immediate parent of sj, then si must be excavated before sj. Suppose si r1 ... rm sj is a subsequence in some optimal strategy. Note that no rk can be an ancestor of sj, because si is the immediate parent of sj. Hence, we can repeatedly switch sj with each ri to obtain a new strategy in which sj directly follows si. By Theorem 1, the cost of this strategy is less than or equal to the cost of the original. If sj is a root node that follows r1 ... rm in an optimal strategy, it can be switched to the front because no ri can be its parent. Theorem 1 entails that the transformed strategy is at least as good as the original one. Hence, either sj can start the strategy or immediately follow its parent. □ Theorem 2 implies that whenever a slice with the highest φ+ is a top slice it can be placed first in a strategy. The remaining sequencing problem is smaller. If the best slice is not a top slice, it can be combined with its parent to form a single slice. Again, the remaining problem is smaller. Thus, each step reduces the number of slices by 1 until no slices are left and an optimal strategy is obtained. This algorithm is summarized in Figure 1. It remains to explicate how to compute the cost and probabilities for the combined node, b'b. When pruning is not modeled, the parameters are computed by

c(b'b) = c(b') + q+(b') · c(b)
p+(b'b) = p+(b') + q+(b') · p+(b),

where b and b' are subsequences and not necessarily single nodes (Garey 1973). The expected cost of an optimal strategy is preserved by these transformations due to Eqs. (1) and (2). However, when pruning is modeled, the combining equations depend on the topology of the tree. Suppose b' in the algorithm is the result of combining a subtree T' and b is the result of combining a subtree T. Since b' is a parent of b,

c(b'b) = c(b') + β(b', b) · c(b)
p+(b'b) = p+(b') + β(b', b) · p+(b),

where β(b', b) is the probability that it is necessary to execute b after b' is executed.
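The merge procedure just described can be sketched in Python for the no-pruning case, using the combining equations c(b'b) = c(b') + q+(b')·c(b) and p+(b'b) = p+(b') + q+(b')·p+(b). The dictionary encoding of the forest and the name-concatenation for merged nodes are our own illustrative choices; tie-breaking between equal φ+ values is arbitrary.

```python
# Sketch of the node-merging algorithm (no-pruning combining equations).
def optimal_strategy(nodes, parent):
    """nodes: {name: (p_plus, cost)}; parent: {name: parent name or None}.
    Returns slice names in an order minimizing expected search cost."""
    parent = dict(parent)                    # avoid mutating the caller's map
    params = dict(nodes)                     # merged node -> (p+, c)
    seq = {n: [n] for n in nodes}            # merged node -> slice sequence
    order = []
    while params:
        b = max(params, key=lambda n: params[n][0] / params[n][1])  # highest phi+
        if parent[b] is None:                # a root: emit it as the next bloc
            order.extend(seq.pop(b))
            del params[b]
        else:                                # otherwise merge b into its parent b'
            bp = parent[b]
            p1, c1 = params.pop(bp)
            p2, c2 = params.pop(b)
            q1 = 1.0 - p1                    # q+(b')
            merged = bp + b                  # assumes concatenated names are fresh
            params[merged] = (p1 + q1 * p2, c1 + q1 * c2)
            seq[merged] = seq.pop(bp) + seq.pop(b)
            parent[merged] = parent.get(bp)
            for n in list(parent):           # re-point children of b and b'
                if parent[n] in (b, bp):
                    parent[n] = merged
    return order
```

In the pruning model, the only change is that the factor q1 must be replaced by β(b', b) computed from the site topology, as in Eq. (3).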
Moreover, /3(b’, b) equals B(+Q . ..s.),whereb’=si...s,andristhefirstnode in b. Input: A collection of trees, with nodes N. Output: An optimal search strategy stored in 7. 1. 2. 3. 4. 5. Set 7 to the empty sequence. Find a node b E N having the highest 4’. If b is a root node, then set 7 = yb and remove b from N. Otherwise, b has a parent b’ in N. Combine nodes b’ and b into a single node denoted by b’b. Place node b’b in N and remove b’ and b from N. Compute t$+(b’b). If some nodes are left in N, go to Step 2. Figure 1: Algorithm to find optimal strategies. The complexity of the algorithm is O(n2). On each iteration, finding the node with the highest 4+ is O(logn) using a priority heap, and the calculation of p+ and c for a merged node is O(n). Since the algo- rithm iterates n times, the bound follows. Our algorithm is similar to one described by Garey (1973). Both algorithms repeatedly transform a search tree by merging pairs of nodes into single nodes. They differ in the type of transformations applied; Garey’s transformations always involve a leaf node while our transformation involves a node with the highest #+ value. Further, our algorithm deals with three possible outcomes of node exploration while Garey’s deals with two. Nevertheless, Garey’s algorithm can be amended to account for three-outcome evaluation as well and its complexity is the same as ours. An Example Consider three sites having the structure depicted in Figure 2 and the parameters given by Table 1. The optimal strategy for this example is calculated next using our algorithm. Node 5s has the highest 4 +. It is therefore combined with si. The parameters of the combined node are 4w3) = +1)+P"(+(~3) = 10.8 p+(w3) = P'(R) +P"(~l)P+(~3) = .74 and, thus, $+(si53) = .069. Now the node with the highest #+ is ~4. Its #+ is highest among all nodes including the newly created node ~1s~. Hence, nodes 34 and 32 are combined and the resulting parameters are ~(9294) = 5.5, p+(s~s4) = .55, and ++(szs~) = .l. 
Node 52~4 now has the highest #+. It is therefore combined with its parent 31s~. The new parameters are c(s133s2s4) = C(SlS3) + P"(~1)4+(~3)+2s4) = 11.68 GEIGER & BARNETT 443 0 31 /\ a 92 34 0 53 Figure 2: Search sites. 0 88 Table 1: Search sites parameters. P+(%s38234) = p+(sl33) +p"(31)q+(~3)p+(3234) = .828 and the resulting f#+ is .071. Node s1 L?ss234 is a root node and it has the highest #+. Thus, it is added to 7 as a bloc. The next node is sg. It is also a root node with the highest 4+ and is therefore added to 7 as a bloc. Now node 87 is combined with its parent &j. The resulting node 3337 has a lower 4+ value (0.018) than 3s. Thus, &3 is the next bloc added to 7 and 965‘7 is the fourth and last. Notably, any strategy produced by our algorithm consists of a sequence of blocs with decreasing #+ val- ues. h this examplt?, the blocs are sl$ss2s4, ss, $8, 8387 with the respective decreasing #+ values, .071, .025, .02, and .018. This bloc structure coincides with Simon and Kadane’s characterization of optimal strategies. It is not clear, however, whether this struc- ture extends to strategies for OR graphs when three rather than two evaluation outcomes are possible. ual Problem: AN We can think about search of OR trees as a procedure for proving the root node true: a node is proven true if and only if it is proven true by its own evaluation or at least one of its children is proven true. Proving true corresponds, in the gold-digging metaphor, to finding a treasure. The task for AND trees is to prove the root node false. A node in an AND tree is proven false if and only 444 SEARCH Figure 3: Sites with multiple paths. if it is proven false by its own evaluation or at least one of its children is proven false. The algorithm of Figure 1 with a minor change, switch the roles of p- and p+, finds optimal search strategies for AND trees. Dynamic vs. 
Static Sequencing Previous sections provide an algorithm that finds optimal search sequences for OR trees and for AND trees. In these two cases, optimal search sequences can be determined before search starts. However, this property does not hold in general. Next, we provide two examples where optimal sequences must be revised during search. Consider the OR graph shown in Figure 3. There are three surface slices x, y, and z, and two deeper slices, v and w, each of which can be reached from two distinct surface slices. The rule is that a slice cannot be excavated until all of its parents are excavated. Thus, the graph encodes a partial-order constraint on slice excavations.² Suppose x is a node with extremely high φ+. Then x is excavated first. How should the rest of the nodes be ordered for excavation? If no treasure is found at x, then there are two options: (1) the digger learns with probability p-(x) that there is no gold beneath x, in which case v and w will not be excavated, or (2) he learns with probability p0(x) that there may be gold in v and w. In the first case the decision about which slice to excavate next depends only on φ+(y) and φ+(z), while in the second case the decision depends on φ+(yv) and φ+(zw) as well. Hence, an optimal sequence cannot be determined until the result of excavating x is known, i.e., it must be determined dynamically. Optimal searches of AND-OR trees require dynamic strategies as well. The interpretation of AND-OR trees is consistent with the definitions used previously for AND trees and OR trees. In particular, the tree of Figure 4 evaluates to true iff either node s1, s2, or s6 evaluates true. Node s2 (an AND node) evaluates to true iff s2 is true or s3 and s4 evaluate to true. Node s3 evaluates to true iff s3 is true or s5 is true. Assume that the φ+ values of s1 . .
. . s6 are, respectively, 1, 900, 100, 200, 50, 130, that φ+(s3s4) = 150, and that φ+(s4s5) = 110. ²In the gold-digging metaphor this assumption is made to (say) prevent the collapse of a parent slice on its child if the child were excavated first. Figure 4: AND-OR tree. If one must commit to an execution order before search starts, then the choice would be either s1s2s3s4s6s5 or s1s2s4s3s6s5. Nodes s2, s3 and s4 appear before s6 because φ+(s2) and φ+(s3s4) are higher than φ+(s6), and s6 is placed before s5 because its φ+ is higher. The ordering between descendants of AND nodes is determined by their p-/c ratios: those with higher values execute first. Assume for this example that s3 is executed before s4 based on this criterion, i.e., the best a priori strategy is s1s2s3s4s6s5. Evaluating s3 can produce three results: If s3 evaluates true as expected, it is best to continue with the predetermined strategy. If s3 evaluates false, then s4 and s5 are not evaluated and, hence, the expected cost of the remaining work is not affected by their relative location in the strategy. Otherwise, s3 opens its doors and the strategy profits from a change: node s6 should now be evaluated before node s4, because its φ+ value is higher than that of s4s5. Thus, the best ordering of s4 and s6 is contingent on the results obtained by evaluating s3. References Garey, M.R. 1973. Optimal task sequencing with precedence constraints. Discrete Mathematics 4:37-56. Simon, H.A., and Kadane, J.B. 1975. Optimal problem-solving search: all-or-none solutions. Artificial Intelligence 6:235-247. Slagle, J.R. 1964. An efficient algorithm for finding certain minimum-cost procedures for making binary decisions. Journal of the Association for Computing Machinery 11:253-264. Summary We have presented an algorithm that finds optimal search strategies of AND trees and OR trees where pruning is modeled.
Further, we have shown that optimal search strategies of AND-OR trees, AND graphs, and OR graphs cannot be represented as static permutations of the nodes. Consequently, to represent an optimal search strategy for these latter cases, one must construct a decision diagram that indicates the node to search next as a function of the outcomes of previous searches. The construction of such dynamic strategies is addressed by Slagle (1964), assuming there are no constraints on the order of node examination. Finding optimal search strategies subject to order constraints remains an open problem.
Concept Languages as Query Languages Maurizio Lenzerini, Andrea Schaerf Dipartimento di Informatica e Sistemistica Universita di Roma "La Sapienza" via Salaria 113, 00198 Roma, Italia Abstract We study concept languages (also called terminological languages) as means for both defining a knowledge base and expressing queries. In particular, we investigate the possibility of using two different concept languages, one for asserting facts about individual objects, and the other for querying a set of such assertions. Contrary to many negative results on the complexity of terminological reasoning, our work shows that, provided that a limited language is used for the assertions, it is possible to employ a richer query language while keeping the reasoning process tractable. We also show that, on the other hand, there are constructs that make query answering inherently intractable. 1 Introduction Concept languages (CLs, also called terminological languages) provide a means for expressing knowledge about concepts, i.e. classes of individuals with common properties. A concept is built up of two kinds of symbols, primitive concepts and primitive roles. These primitives can be combined by various language constructs, yielding complex concepts. Different languages are distinguished by the constructs they provide. Concept languages are given a Tarski-style semantics: an interpretation interprets concepts as subsets of a domain and roles as binary relations over the domain. Much of the research on terminological reasoning (see [2, ...]) aims at characterizing CLs with respect to both expressive power and computational complexity of computing subsumption, i.e. checking if one concept is always a superset of another.
Other recent work (see [4, 6]) deals with the problem of using a CL for building what we call a concept-based knowledge base, i.e. a set of assertions about the membership relation between individuals and concepts, and between pairs of individuals and roles.

(This work was partly funded by ESPRIT BRA 3012 (Compulog) and by the Italian CNR, under Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo, Linea di ricerca IBRIDI.)

It is interesting to observe that little attention has been paid to studying CLs as query languages, i.e. as means for extracting information from a concept-based knowledge base. In the present paper we deal with this problem, with the main goal of identifying an optimal compromise between expressive power and computational tractability for both the assertional and the query language. Our work has been carried out with the following underlying assumptions:

• The assertional language is at least as powerful as FL⁻ [2], which is generally considered as the minimal concept language.

• A query is formulated in terms of a concept C, with the meaning of asking for the set of all the individuals x such that the knowledge base logically implies that x is an instance of C.

• Since we want to be able to extract from the knowledge base at least the stored information, the query language is at least as expressive as the assertional language.

• The computational complexity of query answering is measured with respect to the size of both the knowledge base and the concept representing the query.

The main result of this paper is to show that one can use a rich CL for query formulation without falling into the computational cliff, provided that a tractable language is used for constructing the knowledge base. It is worth mentioning that the idea of using a query language richer than the assertional language is not new.
For example, relational data bases, which are built up by means of a very limited data definition language, are queried using a full first order language, called relational calculus. Another example is the work by Levesque in [8], where a first order knowledge base is queried by means of a richer language including a modal operator.

LENZERINI & SCHAERF 471
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

In order to apply this idea in the context of concept-based knowledge bases, we make use of AL [10] as assertional language, and we define a suitable new language QL for query formulation. AL is a tractable extension of FL⁻ with a constructor for denoting the complement of primitive concepts, whereas QL is an extension of AL to express qualified existential quantification on roles, role conjunction, and collection of individuals. Another result of our work is that AL and QL are almost maximally expressive with respect to the tractability of query answering. In particular, by analyzing the constructs usually considered in terminological languages, we show that, if one aims at retaining tractability, there are inherent limits to the expressive power of both the assertional and the query language.

The paper is organized as follows. In Section 2 we provide some preliminaries on CLs. In Section 3, we deal with the problem of checking subsumption between a concept of QL and a concept of AL. In Section 4, we make use of the results of Section 3 for devising a polynomial method for answering queries to an AL-knowledge base using QL as query language. In Section 5 we discuss the limits for the tractability of query answering. Finally, conclusions are drawn in Section 6. For the sake of brevity, most of the proofs are omitted. They can be found in [7].

2 Preliminaries

In this paper, we consider a family of concept languages whose general description can be found in [5, 10].
We are particularly interested in the language AL, where concepts (denoted by the letters C and D) are built out of primitive concepts (denoted by the letter A) and primitive roles according to the syntax rule

C, D → A | ¬A | C ⊓ D | ∀R.C | ∃R

where R denotes a role, that in AL is always primitive (other languages provide constructors for roles too). Both FL⁻ and AL provide a restricted form of existential quantification, called unqualified: the construct ∃R denotes the set of objects a such that there exists an object b related to a by means of the role R; the existential quantification is unqualified in the sense that no condition can be stated on b other than its existence.

An interpretation I = (Δ^I, ·^I) consists of a set Δ^I (the domain of I) and a function ·^I (the interpretation function of I) that maps every concept to a subset of Δ^I and every role to a subset of Δ^I × Δ^I such that the following equations are satisfied:

(¬A)^I = Δ^I \ A^I,
(C ⊓ D)^I = C^I ∩ D^I,
(∀R.C)^I = {a ∈ Δ^I | ∀(a, b) ∈ R^I. b ∈ C^I},
(∃R)^I = {a ∈ Δ^I | ∃(a, b) ∈ R^I}.

An interpretation I is a model for a concept C if C^I is nonempty. A concept is satisfiable if it has a model and unsatisfiable otherwise. We say that C is subsumed by D if C^I ⊆ D^I for every interpretation I, and C is equivalent to D if C and D are subsumed by each other.

More general languages are obtained by adding to AL the following constructs:

• qualified existential quantification, written as ∃R.C, and defined by (∃R.C)^I = {a ∈ Δ^I | ∃(a, b) ∈ R^I. b ∈ C^I}. The difference with unqualified existential quantification is that in this case a condition is specified on object b, namely that it must be an instance of the concept C;

• disjunction of concepts, (C ⊔ D)^I = C^I ∪ D^I;

• intersection of roles, (Q ⊓ R)^I = Q^I ∩ R^I;

• collection of individuals (see [1]), written as {a₁, …, aₙ}, where each aᵢ is a symbol belonging to a given alphabet O.
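Over a finite interpretation, the semantic equations above can be evaluated directly. The following Python sketch is our illustration, not part of the paper: the tuple encoding of concepts and all function and parameter names are assumptions of the sketch.

```python
def ext(c, domain, prim, roles):
    """Return the extension of an AL-concept c in a finite interpretation.

    c is one of (hypothetical encoding):
      ('prim', A)     primitive concept      ('not', A)   negated primitive
      ('and', C, D)   conjunction C ⊓ D
      ('all', R, C)   universal restriction  ∀R.C
      ('some', R)     unqualified existential ∃R
    prim maps primitive-concept names to sets; roles maps role names to
    sets of pairs over the domain.
    """
    tag = c[0]
    if tag == 'prim':
        return prim[c[1]]
    if tag == 'not':                       # (¬A)^I = Δ^I \ A^I
        return domain - prim[c[1]]
    if tag == 'and':                       # (C ⊓ D)^I = C^I ∩ D^I
        return ext(c[1], domain, prim, roles) & ext(c[2], domain, prim, roles)
    if tag == 'all':                       # every R-successor lies in C^I
        cx = ext(c[2], domain, prim, roles)
        return {a for a in domain
                if all(b in cx for (x, b) in roles[c[1]] if x == a)}
    if tag == 'some':                      # (∃R)^I: objects with an R-successor
        return {a for (a, b) in roles[c[1]]}
    raise ValueError(tag)
```

Note that an object with no R-successors vacuously satisfies ∀R.C, exactly as the semantic equation prescribes.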
In order to assign a meaning to such a concept, the interpretation function ·^I has to be extended by injectively associating an element of Δ^I with each symbol in O. The semantics of {a₁, …, aₙ} is then defined by {a₁, …, aₙ}^I = {a₁^I, …, aₙ^I}.

In [5, 10] a calculus for checking concept satisfiability is presented. The calculus operates on constraints of the forms x:C, xRy, where x, y are variables belonging to an alphabet V, C is a concept and R is a role. Let I be an interpretation. An I-assignment α is a function that maps every variable to an element of Δ^I; α satisfies x:C if α(x) ∈ C^I, and α satisfies xRy if (α(x), α(y)) ∈ R^I. A constraint system (i.e. a finite, nonempty set of constraints) S is satisfiable if there is an interpretation I and an I-assignment α such that α satisfies every constraint in S. It is easy to see that a concept C is satisfiable iff the constraint system {x:C} is satisfiable.

In order to check C for satisfiability, the calculus starts with the constraint system S = {x:C}, adding constraints to S until either a contradiction is generated or an interpretation satisfying C can be obtained from the resulting system. Constraints are added on the basis of a suitable set of so-called propagation rules, whose form depends on the constructs of the language. The propagation rules for the language AL are:

1. S →⊓ {x:C₁, x:C₂} ∪ S
   if x:C₁ ⊓ C₂ is in S, and x:C₁ and x:C₂ are not both in S

2. S →∀ {y:C} ∪ S
   if x:∀P.C is in S, xPy is in S, and y:C is not in S

3. S →∃ {xPy} ∪ S
   if x:∃P is in S, y is a new variable and there is no z such that xPz is in S

4. S →⊥ {x:⊥} ∪ S
   if x:A and x:¬A are in S

472 TERMINOLOGICAL REASONING

A constraint system is complete if none of the above completion rules applies to it. A clash is a constraint of the form x:⊥. We say that S′ is a completion of {x:C}, if S′ is complete, and is obtained from {x:C} by applying the above completion rules.
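The four propagation rules can be run to a fixpoint to decide satisfiability of an AL-concept. The following Python sketch is our own rendering of the calculus, not the paper's formulation: the tuple encoding of concepts, the representation of role constraints, and the clash test are all assumptions of the sketch. Since AL has no disjunctive construct, a single completion suffices and no case analysis is needed.

```python
from itertools import count

def satisfiable(concept):
    """Decide satisfiability of an AL-concept (encoded as in the sketch:
    ('prim', A), ('not', A), ('and', C, D), ('all', R, C), ('some', R))
    by applying the propagation rules to completion and testing for a clash."""
    fresh = count()
    x0 = next(fresh)
    S = {('in', x0, concept)}        # constraints of the form x:C
    rel = set()                      # constraints xRy, stored as (x, R, y)
    changed = True
    while changed:
        changed = False
        for (_, x, d) in list(S):
            if d[0] == 'and':        # rule 1: x:C1 ⊓ C2  ⇒  x:C1 and x:C2
                for part in d[1:]:
                    if ('in', x, part) not in S:
                        S.add(('in', x, part)); changed = True
            elif d[0] == 'all':      # rule 2: x:∀P.C and xPy  ⇒  y:C
                for (a, r, b) in list(rel):
                    if a == x and r == d[1] and ('in', b, d[2]) not in S:
                        S.add(('in', b, d[2])); changed = True
            elif d[0] == 'some':     # rule 3: x:∃P  ⇒  xPy for one fresh y
                if not any(a == x and r == d[1] for (a, r, _) in rel):
                    rel.add((x, d[1], next(fresh))); changed = True
    # rule 4: a clash arises when x:A and x:¬A occur together
    return not any(d[0] == 'prim' and ('in', x, ('not', d[1])) in S
                   for (_, x, d) in S)
```

Rule 3 introduces at most one successor per variable and role, which is what keeps the completion polynomial in the size of the concept.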
In [10] it is shown that an AL-concept C is satisfiable iff the complete constraint system obtained from {x:C} by means of the above rules does not contain any clash. Moreover, it is proved that computing the completion of {x:C} is a polynomial task. By exploiting the features of the propagation rules, in [5] it is shown that checking subsumption between two AL-concepts is also a polynomial task.

Concept languages can also be used as assertional languages, i.e. to make assertions on individual objects. Let O be an alphabet of symbols denoting individuals, and L be a concept language. An L-assertion is a statement of one of the forms: C(a), R(a, b), where C is a concept of L, R is a role of L, and a, b are individuals in O. The meaning of the above assertions is straightforward: if I = (Δ^I, ·^I) is an interpretation, C(a) is satisfied by I if a^I ∈ C^I, and R(a, b) is satisfied by I if (a^I, b^I) ∈ R^I.

A finite set Σ of L-assertions is called an L-knowledge base. An interpretation I is said to be a model of Σ if every assertion of Σ is satisfied by I. Σ is said to be satisfiable if it admits a model. We say that Σ logically implies an assertion α (written Σ ⊨ α) if α is satisfied by every model of Σ.

The above propagation rules can be exploited for checking the satisfiability of an AL-knowledge base Σ. The idea is that an AL-knowledge base Σ can be translated into a constraint system, denoted by S_Σ, by replacing every assertion C(a) with a:C, and every assertion R(a, b) with aRb (see [5]). One can easily verify that, up to variable renaming, only one completion, denoted COMP_AL(Σ), can be derived from S_Σ. Notice that the constraints in COMP_AL(Σ) regard both individuals in O and variables in V. In the sequel, we use the term object as an abstraction for individual and variable. Moreover, if Z is either a knowledge base or a concept, we write dim_Z to denote the size of Z.

Theorem 2.1 An AL-knowledge base Σ is satisfiable iff COMP_AL(Σ) is clash-free.
Moreover, COMP_AL(Σ) can be computed in polynomial time with respect to dim_Σ.

3 Enriching the language of the subsumer

The goal of this section is to show that, when using a tractable language for the subsumee, it is possible to enrich the language of the subsumer without endangering the tractability of the subsumption problem. In particular, we study the subsumption problem (is C subsumed by D?) in the hypothesis that the candidate subsumee C is a concept of AL, and the candidate subsumer D is a concept of a richer language, which we call QL. The language QL is defined by the following syntax (where Pᵢ denotes a primitive role, and n ≥ 1):

C, D → A | ¬A | C ⊓ D | {a₁, …, aₙ} | ∀R.C | ∃R | ∃R.C
R → P₁ ⊓ … ⊓ Pₙ

Notice that the results reported in [5] show that checking subsumption between two QL-concepts is an NP-hard problem.

A concept C is subsumed by D iff C ⊓ ¬D is unsatisfiable, thus we can reduce subsumption between a QL-concept D and an AL-concept C to unsatisfiability of C ⊓ ¬D. In order to solve such an unsatisfiability problem, we have devised suitable completion rules for QL-constraint systems, i.e. constraint systems whose constraints have the forms: x:C, x:¬D, and xRy, where C is an AL-concept, D is a QL-concept, and R is a QL-role. As a notation, we say that xRy holds in a constraint system S if: R is a primitive role and xRy ∈ S, or R is of the form P₁ ⊓ … ⊓ Pₙ and for each i, xPᵢy ∈ S.

The set of completion rules for QL-constraint systems is constituted by the rules for AL presented in Section 2, together with the following rules, that take care of the constructs of ¬D.

5. S →¬⊓ {x:¬Dᵢ} ∪ S
   if x:¬(D₁ ⊓ D₂) is in S, i ∈ {1, 2}, and neither x:¬D₁ nor x:¬D₂ is in S

6. S →¬∀ {xP₁y, …, xPₙy, y:¬D} ∪ S⁻
   if x:¬∀(P₁ ⊓ … ⊓ Pₙ).D is in S, y is a new variable, and S⁻ is the constraint system obtained from S by replacing each variable z such that xPⱼz ∈ S (j ∈ {1, …, n}) with y

7.
S →¬∃ {y:¬D} ∪ S
   if x:¬∃R.D is in S, xRy holds in S, and y:¬D is not in S.

Observe that in the →¬∀-rule, each variable z previously created to satisfy one constraint of the form x:∃Pᵢ (i ∈ {1, …, n}) is replaced by the newly created variable y. This procedure, that is crucial for efficiency, is made possible by the fact that the existential quantification in AL is unqualified, and hence all the properties imposed on such z's are also imposed on y. It follows that it is not necessary to keep track of the z's in the resulting system. Notice that, due to the non-determinism of the →¬⊓-rule, several complete constraint systems can be obtained from {x: C ⊓ ¬D}.

The following theorem states the soundness and completeness of the above rules. Its proof derives from the above observation about the →¬∀-rule and from the results reported in [10].

Theorem 3.1 Let C be an AL-concept, and let D be a QL-concept. Then C ⊓ ¬D is unsatisfiable iff every completion of {x: C ⊓ ¬D} contains a clash.

It is easy to see that, starting from {x: C ⊓ ¬D}, in a finite number of applications of the rules, all the completions are computed, and checked for clash. It follows that the above propagation rules provide an effective procedure to check subsumption between D and C. With regard to the computational complexity, the next theorem states that such a procedure requires polynomial time.

Theorem 3.2 Let C be an AL-concept, and let D be a QL-concept. Then the set of all the completions of the constraint system {x: C ⊓ ¬D} can be computed in polynomial time with respect to dim_{C ⊓ ¬D}.

From all the above propositions it follows that checking subsumption between a QL-concept and an AL-concept can be done in polynomial time. This result will be exploited in the next section for devising a polynomial query answering procedure.

4 Query answering

In this section we propose a query answering method that allows one to pose queries expressed in the language QL to an AL-knowledge base.
As we said in the introduction, a query has the form of a concept D, and answering a query D posed to the knowledge base Σ means computing the set {a ∈ O | Σ ⊨ D(a)}. In order to solve this problem, we consider the so-called instance problem: given an AL-knowledge base Σ, a QL-concept D, and an individual a, check if Σ ⊨ D(a). Since the number of individuals in Σ is finite, it is clear that our method can be directly used for query answering, in particular, by solving the instance problem for all the individuals in Σ.

Most of the existing approaches to the instance problem are based on the notion of most specialized concept (MSC). The MSC of an individual a is a representative of the complete set of concepts which a is an instance of. However, a method merely based on the MSC would not work in our case, because of the presence of the qualified existential quantification in QL. For example, in order to answer the query ∃R₁.∃R₂.{b, d}(a), it is not sufficient to look at the MSC of a, but it is necessary to consider the assertions involving the roles R₁ and R₂ in the knowledge base. For this reason, our method relies on an ad hoc technique that, by navigating through the role assertions, takes into account the whole knowledge about the individuals.

In the following, we make use of a function ALL that, given an object a, a QL-role R = P₁ ⊓ … ⊓ Pₙ, and an AL-knowledge base Σ, computes the AL-concept ALL(a, R, Σ) = C₁ ⊓ … ⊓ Cₘ, where C₁, …, Cₘ are all the concepts appearing in some constraint of COMP_AL(Σ) having the form a:∀Q.Cᵢ with Q ∈ {P₁, …, Pₙ}. If no such a concept exists, we assume ALL(a, R, Σ) = ⊤ (where ⊤^I = Δ^I). In other words, ALL(a, R, Σ) represents the concept to which every object related to a through R must belong, according to the assertions in Σ. Our method heavily relies on the following theorem.

Theorem 4.1 Let Σ be a satisfiable AL-knowledge base, let a, a₁, …
, aₙ be individuals, let A be a primitive concept, and let D, D₁, D₂ be QL-concepts. Then the following properties hold:

1. Σ ⊨ {a₁, …, aₙ}(a) iff a ∈ {a₁, …, aₙ};
2. Σ ⊨ A(a) iff a:A ∈ COMP_AL(Σ), and Σ ⊨ ¬A(a) iff a:¬A ∈ COMP_AL(Σ);
3. Σ ⊨ D₁ ⊓ D₂(a) iff Σ ⊨ D₁(a) and Σ ⊨ D₂(a);
4. Σ ⊨ ∀R.D(a) iff D subsumes ALL(a, R, Σ);
5. Σ ⊨ ∃R.D(a) iff there is a b such that aRb holds in COMP_AL(Σ) and Σ ⊨ D(b).

Proof. (Sketch) The proofs of 1, 2 and 3 are straightforward. With regard to 4, assume that D subsumes ALL(a, R, Σ), and suppose that Σ ⊭ ∀R.D(a), i.e. Σ ∪ {∃R.¬D(a)} is satisfiable. This implies that there is a model I of Σ with an element d ∈ Δ^I such that d ∈ (ALL(a, R, Σ))^I, and d ∈ (¬D)^I, contradicting the hypothesis that ALL(a, R, Σ) is subsumed by D.

Algorithm ASK(Σ, a, D)
Input: AL-knowledge base Σ, individual a, QL-concept D, data structure μ;
Output: one value in {true, false}, updated μ;
begin
  if μ(a, D) = nil then
    case D of
      A : μ(a, D) ← a:A ∈ COMP_AL(Σ);
      ¬A : μ(a, D) ← a:¬A ∈ COMP_AL(Σ);
      D₁ ⊓ D₂ : μ(a, D) ← ASK(Σ, a, D₁) ∧ ASK(Σ, a, D₂);
      ∀R.D₁ : μ(a, D) ← D₁ subsumes ALL(a, R, Σ);
      ∃R.D₁ : μ(a, D) ← ∃b s.t. aRb holds in COMP_AL(Σ) ∧ ASK(Σ, b, D₁);
      {a₁, …, aₙ} : μ(a, D) ← a ∈ {a₁, …, aₙ}
    endcase
  endif;
  return μ(a, D)
end

Figure 1: The Algorithm ASK

On the other hand, assume that Σ ⊨ ∀R.D(a), i.e. Σ ∪ {∃R.¬D(a)} is unsatisfiable, implying that S_Σ ∪ {aRz, z:¬D} is unsatisfiable, where z is a new variable. Now, it is possible to verify that, since Σ is satisfiable, this may happen only because the constraint system {z: ALL(a, R, Σ), z:¬D} is unsatisfiable, which means that D subsumes ALL(a, R, Σ). With regard to 5, it is easy to verify that if there is a b such that aRb holds in COMP_AL(Σ) and Σ ⊨ D(b), then Σ ⊨ ∃R.D(a). On the other hand, assume that Σ ⊨ ∃R.D(a), and suppose that for no bᵢ (i = 1, …, n) such that aRbᵢ holds in COMP_AL(Σ), Σ ⊨ D(bᵢ). This implies that for each bᵢ, there is a model Mᵢ of Σ ∪ {¬D(bᵢ)}.
Now one can easily verify that M₁ ∪ … ∪ Mₙ is a model of Σ ∪ {∀R.¬D(a)}, contradicting the hypothesis that Σ ⊨ ∃R.D(a). □

Based on the properties stated in the above theorem, we can directly develop an algorithm for query answering. The algorithm, called ASK and shown in Fig. 1, makes use of a data structure μ which associates a value in {nil, true, false} with every pair a′, D′, where a′ is an object and D′ is a QL-concept. Informally speaking, μ(a′, D′) is used to record the answer to the query Σ ⊨ D′(a′). The value nil represents that no answer has been yet computed for the query, whereas true and false have the obvious meaning of yes and no, respectively. We assume that, initially, μ(a′, D′) = nil for each pair a′, D′.

The following theorem states the correctness and the tractability of the algorithm. Tractability is achieved by virtue of the data structure μ, which ensures that at most one call of the algorithm is issued for every pair a′, D′, where a′ is an object and D′ is a subconcept of D (i.e. a concept appearing in D). Notice that without a technique of this kind, the method might require an exponential number of checks (for example, when queries have the form ∃R₁.∃R₂. … ∃Rₙ.A(a)).

Theorem 4.2 Let Σ be a satisfiable AL-knowledge base, a be an individual, and D be a QL-concept. Then ASK(Σ, a, D) terminates, returning true if Σ ⊨ D(a), and false otherwise. Moreover, it runs in polynomial time with respect to dim_Σ and dim_D.

5 Limits to the tractability of query answering

The aim of this section is to consider several possible extensions of both the query language and the assertional language and analyze their effect on the tractability of query answering. We first consider the query language, showing that there are limits to its expressive power. The basic observation is that if D is equivalent to the universal concept ⊤, then for any knowledge base Σ, it holds that Σ ⊨ D(a).
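Returning to the ASK algorithm of Fig. 1, its memoization pattern can be sketched in Python. This is our illustration, not the paper's procedure: the completed knowledge base is passed in as explicit membership and role facts, and the subsumption test and the ALL(a, R, Σ) computation of the ∀ case are abstracted into caller-supplied callbacks (all names are assumptions of the sketch).

```python
def ask(sigma, a, D, mu=None, subsumes=None, all_of=None):
    """Memoized instance checking in the style of ASK.

    sigma = (member, role): member is a set of (individual, 'A') and
    (individual, '¬A') facts read off the completed KB; role maps role
    names to sets of pairs. Queries D use a tuple encoding:
      ('prim', A), ('not', A), ('and', D1, ..., Dk),
      ('some', R, D1), ('all', R, D1), ('one-of', a1, ..., an).
    `subsumes` and `all_of` stand in for the subsumption test and
    ALL(a, R, Σ); mu plays the role of the μ data structure."""
    member, role = sigma
    mu = {} if mu is None else mu
    key = (a, D)
    if key in mu:                    # μ(a, D) already computed
        return mu[key]
    tag = D[0]
    if tag == 'prim':
        ans = (a, D[1]) in member
    elif tag == 'not':
        ans = (a, '¬' + D[1]) in member
    elif tag == 'and':
        ans = all(ask(sigma, a, Di, mu, subsumes, all_of) for Di in D[1:])
    elif tag == 'one-of':
        ans = a in D[1:]
    elif tag == 'some':              # ∃b with aRb in the completion
        ans = any(ask(sigma, b, D[2], mu, subsumes, all_of)
                  for (x, b) in role[D[1]] if x == a)
    elif tag == 'all':               # delegated: D1 subsumes ALL(a, R, Σ)
        ans = subsumes(D[2], all_of(a, D[1]))
    else:
        raise ValueError(tag)
    mu[key] = ans
    return ans
```

As in Theorem 4.2, the memo table is what keeps chained existential queries polynomial: each (object, subconcept) pair is evaluated at most once.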
It follows that query answering is at least as hard as the so-called top-checking problem for the query language, i.e. checking whether a concept is equivalent to ⊤. Notice that, due to the characteristics of QL, for any QL-concept D, ¬D is satisfiable, and therefore in QL it is impossible to express a universal concept. However, there are languages in which the universal concept can be expressed, and in some of these languages top-checking is intractable. We are able to show that this is the case already for FLU⁻, that is obtained from FL⁻ simply by adding disjunction of concepts.

The proof, reported in [7], is based on a reduction Φ from the satisfiability problem for a propositional conjunctive normal form (CNF) formula to the top-checking problem in FLU⁻. The reduction Φ is defined as follows:

Φ(pᵢ) = ∃R_{pᵢ},
Φ(¬pᵢ) = ∀R_{pᵢ}.A,
Φ(l₁ ∨ … ∨ lₙ) = Φ(l₁) ⊓ … ⊓ Φ(lₙ),
Φ(α₁ ∧ … ∧ αₘ) = Φ(α₁) ⊔ … ⊔ Φ(αₘ),

where pᵢ denotes a propositional letter, lᵢ a literal, and αᵢ a clause.

The above result allows us to derive the intractability of several other concept languages as query languages. For example, co-NP-hardness clearly extends to ALU (AL + disjunction), ALC (FL⁻ + full complement) [10], and FL [2].

We now turn our attention to analyzing possible extensions of the assertional language. Analogously to the case of the query language, we can single out an inherent limit to the expressive power of the assertional language, due to the fact that query answering is clearly at least as hard as the problem of concept satisfiability for the assertional language. This observation allows us to rule out several extensions of AL, such as ALU, ALE (AL + qualified existential quantification), and ALR (AL + role conjunction) [5, 10]. We are able to show that a similar result holds for the language ALO, obtained from AL by adding collections of individuals.
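The CNF-to-concept translation Φ defined above can be spelled out concretely. The sketch below is ours, not the paper's: literals are encoded DIMACS-style as signed integers, role names R1, R2, … and the primitive concept 'A' are illustrative choices, and disjunction is encoded with an 'or' tag.

```python
def phi_lit(l):
    # Φ(p) = ∃R_p ; Φ(¬p) = ∀R_p.A   (literal l is a signed integer)
    return ('some', f'R{abs(l)}') if l > 0 else ('all', f'R{abs(l)}', ('prim', 'A'))

def phi(cnf):
    """Apply the reduction Φ to a CNF formula given as a list of clauses,
    each clause a list of signed integers. Per the definition above, a
    clause (a disjunction of literals) becomes a conjunction of translated
    literals, and the formula (a conjunction of clauses) becomes the
    disjunction of the translated clauses."""
    def clause(c):
        out = phi_lit(c[0])
        for l in c[1:]:
            out = ('and', out, phi_lit(l))   # Φ(l1 ∨ … ∨ ln) = Φ(l1) ⊓ … ⊓ Φ(ln)
        return out
    out = clause(cnf[0])
    for c in cnf[1:]:
        out = ('or', out, clause(c))         # Φ(α1 ∧ … ∧ αm) = Φ(α1) ⊔ … ⊔ Φ(αm)
    return out
```

The translation is linear in the size of the formula, which is what makes it usable in a hardness reduction.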
The proof, reported in [7], is based on a reduction Ψ from the NP-complete problem of checking the satisfiability of a CNF formula with only positive and negative clauses, to the problem of concept satisfiability in ALO. The reduction Ψ from Γ = α₁⁺ ∧ … ∧ αₙ⁺ ∧ α₁⁻ ∧ … ∧ αₘ⁻ to the ALO-concept Ψ(Γ) = C₁⁺ ⊓ … ⊓ Cₙ⁺ ⊓ C₁⁻ ⊓ … ⊓ Cₘ⁻ is specified by the following equations:

Cₕ⁺ = ∃Rₕ⁺ ⊓ ∀Rₕ⁺.(obj(αₕ⁺) ⊓ A),
Cₖ⁻ = ∃Rₖ⁻ ⊓ ∀Rₖ⁻.(obj(αₖ⁻) ⊓ ¬A)

where αₕ⁺ (αₖ⁻) denotes a positive (negative) clause, A is a primitive concept, Rₕ⁺ and Rₖ⁻ (for h = 1, …, n and k = 1, …, m) are primitive roles, and obj(α) denotes the concept {p₁, …, pₖ}, where p₁, …, pₖ are all the propositional letters in the clause α. In other words, we associate with every propositional letter of Γ an individual with the same name, and with every clause α of Γ the collection of individuals obj(α). For example, if Γ = (p ∨ q) ∧ (¬p ∨ ¬r), then the corresponding ALO-concept Ψ(Γ) is:

∃R₁⁺ ⊓ ∀R₁⁺.({p, q} ⊓ A) ⊓ ∃R₁⁻ ⊓ ∀R₁⁻.({p, r} ⊓ ¬A).

Based on the above result, we are able to show that subsumption in CLASSIC [1] is intractable too¹. Consider a primitive concept B and a primitive role R: it is easy to verify that the above reduction still holds when A and ¬A are replaced by the two CLASSIC concepts (AND B (ATLEAST 2 R)) and (AND B (ATMOST 1 R)). It follows that concept satisfiability in CLASSIC is NP-hard, and therefore subsumption and query answering are co-NP-hard.

¹ CLASSIC extends FL⁻ in various ways, including number restrictions and collections of individuals.

6 Conclusion

In the future, we aim at addressing several open problems related to the use of concept languages as query languages. First of all, we aim at investigating possible extensions of AL and QL (e.g. number restrictions), and we want to consider the case where the knowledge base includes a so-called terminology, i.e. an intensional part expressed in terms of concept definitions.
Second, we aim at improving the efficiency of our method for query answering, in particular by using a suitable extension of the notion of most specialized concept (see [4]) and by employing suitable techniques from the theory of query optimization in the relational data model. In fact, the goal of our work was to show that the problem is tractable, but several optimizations of the algorithm are needed in order to cope with sizable knowledge bases.

Finally, we aim at considering more complex queries, such as queries constituted by a set of atomic assertions, or queries asking information regarding the intensional knowledge associated to the individuals.

References

[1] A. Borgida, R. J. Brachman, D. L. McGuinness, L. A. Resnick. “CLASSIC: A Structural Data Model for Objects.” Proc. of ACM SIGMOD-89, 1989.

[2] R. J. Brachman, H. J. Levesque. “The Tractability of Subsumption in Frame-based Description Languages.” Proc. of AAAI-84, 1984.

[3] F. Donini, B. Hollunder, M. Lenzerini, A. Marchetti Spaccamela, D. Nardi, W. Nutt. “The Complexity of Existential Quantification in Terminological Reasoning”, Tech. Rep. RAP.01.91, Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”, 1991.

[4] F. M. Donini, M. Lenzerini, D. Nardi. “An Efficient Method for Hybrid Deduction”, Proc. of ECAI-90, 1990.

[5] F. Donini, M. Lenzerini, D. Nardi, W. Nutt. “The Complexity of Concept Languages.” To appear in Proc. of KR-91, 1991.

[6] B. Hollunder. “Hybrid Inference in KL-ONE-based Knowledge Representation Systems.” German Workshop on Artificial Intelligence, 1990.

[7] M. Lenzerini, A. Schaerf. “Concept Languages as Query Languages.” Technical Report, Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”. Forthcoming.

[8] H. J. Levesque. “The Interaction with Incomplete Knowledge Bases: a Formal Treatment.” Proc. of IJCAI-81, 1981.

[9] B. Nebel.
“Computational Complexity of Terminological Reasoning in BACK.” Artificial Intelligence, 34(3):371–383, 1988.

[10] M. Schmidt-Schauß, G. Smolka. “Attributive Concept Descriptions with Unions and Complements”. To appear in Artificial Intelligence.
Deduction as Parsing: Tractable Classification in the KL-ONE Framework

Marc Vilain
The Mitre Corporation
Burlington Road, Bedford, MA 01730
Internet: MBV@MITRE.ORG

Abstract

This paper presents some complexity results for deductive recognition in the framework of languages such as KL-ONE. In particular, it focuses on classification operations that are usually performed in these languages through subsumption computations. The paper presents a simple language that encompasses and extends earlier KL-ONE-based recognition frameworks. By relying on parsing algorithms, the paper shows that a significant class of recognition problems in this language can be performed in polynomial time. This is in marked contrast to the exponentiality and undecidability results that have recently been obtained for subsumption in even some of the most restricted variants of KL-ONE.

THE SINGLE MOST RECURRING THEME in knowledge representation, and possibly the most successfully applied, is that of types and type taxonomies. Much of the literature, both practical and theoretical, takes the need for explicit type representations as a given. In particular, considerable attention has been paid over the years to languages derived from KL-ONE, as the central concerns of these languages are types and type definitions. The hallmark of KL-ONE and its descendants is that these languages provide a set of type-forming operators that allow one to define types (usually called concepts) and interrelate them through a notion of type subsumption. Subsumption is usually treated as necessary type entailment, and its computational characteristics are one of the main concerns of this body of work.

Unfortunately, most studies of subsumption have produced pessimistic results. Even for simple type languages subsumption is intractable [Brachman & Levesque 1984; Levesque & Brachman 1987], and for barely less simple languages it is undecidable [Schmidt-Schauß 1989; Patel-Schneider 1989].
These results notwithstanding, subsumption remains of interest to the KR community in that it enables a form of automated recognition. Type classifiers such as that of Lipkis use subsumption to index types automatically into a taxonomy [Schmolze & Lipkis 1983]. In my own work on the KL-TWO representation system [Vilain 1985], subsumption-based classification was used to recognize those types instantiated by an individual, and thereby to index rules applicable to that individual.

The relevance of KL-TWO to the present paper is that despite the intractability and undecidability of subsumption in the general case, the classification of individuals as performed in KL-TWO is computable in polynomial time. This computational discrepancy arises from the specific (and restricted) ways in which KL-TWO relies on subsumption. The present paper is an examination of how this enables tractable classification. It presents a recognition language that at first doesn't look at all like a language from the KL-ONE family, but that embodies and extends those aspects of KL-TWO that lead to tractable classification. The bulk of the paper is concerned with demonstrating for this language the tractability of a classification-like recognition process that is based on notions of chart parsing.

KL-ONE languages and classification

RECENT ACTIVITY IN THE KL-ONE COMMUNITY has led to a proliferation of KL-ONE-inspired representation languages.¹ They share the common goal of formalizing frames as type structures called concepts; slots are treated as binary relations called roles. Another shared characteristic is that they all provide a number of concept- and role-forming operators. A typical collection of these is shown in Figure 1 below, along with a typical account of their semantics.
¹ For a comprehensive catalogue, see [Patel-Schneider et al. 1990] or [Woods & Schmolze 1990].

The semantics of concept- and role-forming expressions is given with respect to a domain δ (a set of entities) and a modeling function μ that assigns interpretations in δ to atomic expressions. That is, μ(c) ⊆ δ and μ(r) ⊆ δ×δ, for each atomic concept c and role r. The cᵢ and rᵢ are either atomic or complex, except in the first row, where they are strictly atomic. Figure 1: Common type-forming operators.

These operators are put to use in forming terminological definitions, for example:

DOCTOR = (and PERSON (exists (vrdiff HAS-DEGREE MED-DEGREE)))
; a person with a degree that is a medical degree.

DOCTPARENT = (and PERSON (all CHILD DOCTOR))
; a person all of whose children are doctors.

In most accounts of KL-ONE, subsumption is treated as necessary denotational inclusion. To be precise, a concept c₁ is said to subsume a concept c₂ just in case [c₂] ⊆ [c₁], and a role r₁ is said to subsume another role r₂ just in case [r₂] ⊆ [r₁]. By this definition, the DOCTPARENT concept is subsumed by such concepts as “parent of graduates,” e.g.,

(and PERSON (all CHILD (exists HAS-DEGREE)))

A variety of algorithms exist for computing subsumption on the basis of concept and role definitions but, as mentioned above, these are tractable only for restricted cases. Brachman and Levesque [1984], for one, exhibit a polynomial algorithm for subsumption for the language defined by atomic concepts and roles, and expressions formed with the and, all, and exists operators. However, they then extend the language to include vrdiff expressions, and show this leads the computation of subsumption to be co-NP-complete. Schmidt-Schauß [1989] goes further, showing that the operators and, all, =, and chain are sufficient to make computing subsumption undecidable. For another such proof, see [Patel-Schneider 1989].
In contrast, the process of individual (or instance) classification can be performed tractably despite being ultimately based on subsumption. The underlying reason for this is that instance classification, at least as done in KL-TWO, does not require the full complement of type-forming operators that are typically considered in analyzing subsumption. In KL-TWO, instance classification proceeds from ground propositions predicated of individuals, e.g.,

(PERSON fred) ∧ (HAS-DEGREE fred d1) ∧ (MED-DEGREE d1)

Individuals are classified by generalizing their predicating propositions to terminological definitions which are then matched to the terminological database. In this case, fred's generalization would be

(and PERSON (exists (vrdiff HAS-DEGREE MED-DEGREE)))

which (trivially) subsumes the definition of DOCTOR. This kind of classification was used in KL-TWO as the basis of a deductive rule matcher.² In this case, rules that might contain a pattern such as (DOCTOR ?x) could still be indexed and applied to the individual fred, despite their not literally matching any ground predication of fred.

² This indexing strategy was first described in the KL-ONE framework by Mark [1982]. See also [Schubert, et al. 1979]. The term deductive pattern matcher is due to MacGregor [1988].

The reason this instance classification scheme does not lead to intractability turns out to be due to the impossibility of recognizing directly any concepts that impose universal restrictions on roles, i.e., concepts defined with the all or = concept-forming operators. Take the case of DOCTPARENT, defined as (and PERSON (all CHILD DOCTOR)), and say we have

(PERSON fred) ∧ (CHILD fred mary) ∧ (DOCTOR mary)

We are not entitled from this alone to conclude monotonically that (DOCTPARENT fred)! This would only be inferrable if we had, for example, independent knowledge that the set of individuals satisfying λx (CHILD fred x) had cardinality n ≤ 1.
Knowing this would then allow us to reduce the definition of DOCTPARENT as a special case to the concept (and PERSON (exists (vrdiff CHILD DOCTOR))), which is in fact satisfied by fred. In the general case, one can show that without recourse to external non-ground information such as role filler cardinality, universal restrictions can not be recognized through instance classification.3 Putting things most simply, one can not use deduction to recognize universals from ground facts, as doing so is by definition induction. In effect, the type language recognized by KL-TWO consists of those concepts and roles that are definable with the and, exists, and vrdiff operators, or that can be reduced to such concepts and roles through cardinality considerations. One can readily verify that subsumption in this language can be computed in polynomial time, and by extension, so can the process of instance classification performed by KL-TWO (more on this below). This language is clearly rudimentary, but it does suggest a strategy for achieving tractability of recognition in a more expressive language.

3. As the preceding discussion suggests, KL-TWO relies on this kind of cardinality information to reduce universal restrictions to existential ones. Similarly, the recent CLASSIC system [Brachman et al. 1990] provides a close operator to specify that the system has been given a complete enumeration of the fillers of a role.

VILAIN 465

THE APPROACH ADOPTED HERE departs from the KL-ONE tradition in separating those aspects of a concept's definition that impose existential restrictions and those that impose universal restrictions. More precisely, I mean by the existential aspects of a concept's definition those characteristics that are provided by operators whose semantic account introduces either no variables, or only existentially quantified variables. Examples of such operators are the and, exists, vrdiff, and chain operators of Figure 1.
The universal aspects of a concept are those provided by operators that require the introduction of universally quantified variables, e.g., all and =. This distinction between universal and existential restrictions is rendered operational in a simple representation language called RHO (ρ), after the Greek letter for "r" (as in recognition or representation). RHO provides two sub-languages for this purpose: an axiomatic language to state the existential restrictions on a type, and a frame-like language to state the universal ones. For the PARENT type, for example, we might state the following.

(PARENT u1) ← (PERSON u1) ∧ (CHILD u1 e1) ; i.e., a person for which there exists a child
(frame PARENT (PERSON (CHILD))) ; i.e., all of a parent's children are persons

Following the argument in the preceding pages, only the existential axioms of RHO are used in recognizing instances of a type τ. The universal statements about τ are irrelevant for recognition, and only come into play once instances of τ have been recognized. One can thus think of the axiomatic sub-language of RHO as providing sufficiency rules for a type, i.e., rules specifying conditions whose satisfaction is sufficient to guarantee membership in the type. Conversely, the universal statements can be seen as necessity rules, contingent conditions that must necessarily hold of instances of the type. This particular conceptualization of necessary and sufficient conditions for type membership is dictated in part by this work's concern with recognition, and is thus limited in scope. Nevertheless, the terminology of necessity and sufficiency affords a convenient operational characterization of the sub-languages of RHO, and as such will be used throughout this paper.

Sufficiency axioms in RHO

Ignoring necessity conditions until later in this paper, sufficiency conditions are specified in RHO with function-free Horn axioms.
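The quantification convention for these Horn axioms (head variables universal, body-only variables existential) can be made concrete in a few lines. The tuple encoding and the reliance on the paper's u*/e* naming convention are assumptions made for this sketch.

```python
# Sketch of RHO's variable convention for sufficiency axioms:
# variables in the head are universally quantified, variables that
# appear only in the body are existential. Variables are identified
# here by the paper's naming convention (u1, e1, ...), an assumption.

def variable_kinds(head, body):
    """Split an axiom's variables into (universal, existential)."""
    is_var = lambda t: t[:1] in ("u", "e")
    head_vars = {t for t in head[1:] if is_var(t)}
    body_vars = {t for atom in body for t in atom[1:] if is_var(t)}
    return sorted(head_vars), sorted(body_vars - head_vars)

# (PARENT u1) <- (PERSON u1) & (CHILD u1 e1)
parent_axiom = (("PARENT", "u1"), [("PERSON", "u1"), ("CHILD", "u1", "e1")])
print(variable_kinds(*parent_axiom))  # → (['u1'], ['e1'])
```

Constants such as "today" fall outside the naming convention and are left alone, matching the schema's requirement that terms be constants or variables only.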
In addition to the sufficiency condition for the PARENT type above (PARENT is a canonical KL-ONE concept), some examples of this syntax are given below. The first corresponds to a vrdiff role definition in KL-ONE, and the second mentions three-place relations, which have been mostly ignored in KL-ONE-inspired languages, with some exceptions (e.g., [Schmolze 1989]).

(DAUGHTER u1 u2) ← (CHILD u1 u2) ∧ (FEMALE u2)
(PASS-TO u1 u2 u3) ← (THROW-TO u1 u2 u3) ∧ (CATCH u2 u3)

To be precise, the syntax of sufficiency conditions in RHO follows the axiom schema below, where the πi are predicates, with ai the arity of predicate πi.

(π0 t0,1 … t0,a0) ← (π1 t1,1 … t1,a1) ∧ … ∧ (πn tn,1 … tn,an)

The t terms in this schema must be either constants or variables; function terms are excluded. As is usual with Horn clauses, variables appearing on both the left-hand side and right-hand side of a sufficiency condition are implicitly universally quantified; variables appearing only on the right-hand side are implicitly existentially quantified. This is reflected in the names of the variables in the preceding examples: the ui are universal, and the ei are existential. As a result of this way of interpreting variables, sufficiency rules in RHO are precluded from recognizing universally quantified restrictions such as those imposed on concepts in KL-ONE by all. To see this, assume without loss of generality that there are no constants on the left-hand side of a sufficiency axiom satisfying the schema above, i.e., no constants in t0,1 … t0,a0. The sufficiency axiom can then be factored into a predicate definition with existential quantification, and a universally quantified sufficiency assertion.

π0′ = λū ∃ē (π1 t̄1) ∧ … ∧ (πn t̄n)
∀ū (π0′ ū) ⊃ (π0 ū)

The t̄i designate the arguments of the πi, and ē designates those variables appearing on the right-hand side of the axiom but not on the left-hand side. For example, the sufficiency axiom for PARENT above can be rendered into the following existential predicate definition and universal axiom.
PARENT′ = λu1 ∃e1 (PERSON u1) ∧ (CHILD u1 e1)
∀u1 (PARENT′ u1) ⊃ (PARENT u1)

For one- and two-place predicates, the predicate definition can be further expressed in terms of KL-ONE-like type-forming operators, in this case

PARENT′ = (and PERSON (exists CHILD))

466 TERMINOLOGICAL REASONING

(and r1 … rn): [(and r1 … rn)] = [r1] ∩ … ∩ [rn]
(exists r1 … rn): [(exists r1 … rn)] = {x | ∃y ⟨x, y⟩ ∈ [r1] ∩ … ∩ [rn]}
(vrdiff r c): [(vrdiff r c)] = {⟨x, y⟩ ∈ [r] | y ∈ [c]}

The domain 𝒟, the modeling function μ, concept terms ci and role terms rj are as in Figure 1. Figure 2: Type-forming operators for sufficiency axioms.

The reason sufficiency axioms must be factored into two formulae, one definitional and the other axiomatic, is that RHO allows several sufficiency axioms to support the recognition of the same left-hand side. For example, a multi-media messaging domain (cf. CONSUL [Mark 1982]) might afford multiple ways of recognizing a priority message:

(PRIORITY-MSG u1) ← (MSG u1) ∧ (FROM u1 e1) ∧ (BOSS e1)
(PRIORITY-MSG u1) ← (MSG u1) ∧ (REPLY-BY u1 today)

and so forth. It is clear that though these axioms may each provide a sufficient recognition criterion for PRIORITY-MSG, they are not definitional in the traditional KL-ONE sense.

Semantics of sufficiency axioms

These notions are formalized through a straightforward model-theoretic semantics for the language. As usual, constants denote elements of a domain 𝒟, and a predicate π of arity a denotes a subset of 𝒟 if a = 1, and a subset of 𝒟 × … × 𝒟 (a times) if a > 1. Next, let μ be a model function for 𝒟, that is, a function that assigns to a constant κ in the language some element of 𝒟, and that assigns to a predicate π in the language a subset of the appropriate cross-product of 𝒟 (given the arity of π). Let ρ designate a variable binding function that maps variables among the t terms to constants in the language, and maps constants to themselves.
Then μ is said to interpret a set of sufficiency axioms if for each such axiom

(π0 t0,1 … t0,a0) ← (π1 t1,1 … t1,a1) ∧ … ∧ (πn tn,1 … tn,an)

and for any ρ, ⟨μ(ρ(t0,1)), …, μ(ρ(t0,a0))⟩ is in μ(π0) whenever ⟨μ(ρ(ti,1)), …, μ(ρ(ti,ai))⟩ is in μ(πi) for each of the πi. That is, in any model that interprets the axioms, whenever the right-hand side of an axiom is instantiated, so is its left-hand side.

RHO and the KL-ONE languages

As promised, the syntax of sufficiency conditions in RHO bears little resemblance to the traditional syntax of KL-ONE languages. However, as noted above, these conditions can be factored in part into KL-ONE-like predicate definitions, for which it is possible to define a predicate-forming specification language. In general, this language has cumbersome syntax, mostly because of the need to indicate the co-references specified by variables in the axioms. Restricting one's attention to axioms with predicates of arity 2 or less (i.e., concepts and roles), it is possible to define a KL-ONE-like language that covers an interesting set of special cases among these axioms. This language is given in Figure 2. In addition, one can organize the predicates defined by these axioms into multiple type taxonomies, one for each predicate arity. To do so, we just place any predicate πi on the right-hand side above the predicate π0 whenever their arities match (i.e., ai = a0), and whenever the arguments of πi are identical to those of π0. The taxonomies so defined do not encode any subsumption relations, but they do have the property subsumption hierarchies have that any instantiation of a predicate π in the taxonomy is also an instantiation of all those predicates above π in the taxonomy.

A parsing algorithm for recognition

ONE OF THE ATTRACTIVE CONSEQUENCES of casting recognition criteria as deductive rules is that this approach leads to a direct implementation of recognition as a parsing problem. It may seem unlikely at first that a deductive process should be amenable to being treated as parsing.
However, deductive recognition is a very particular kind of deduction, and its analogies to parsing are multiple. Just as grammars allow one to compose syntactic constituents into larger syntactic units, deductive rules allow one to combine propositions to deduce other propositions. Just as a bottom-up recognizer starts from atomic "facts" (the positions of words in a buffer) and proceeds to find all paths to the start symbol, a forward-chaining deductive recognizer starts from base propositions and deduces all their consequences. The recognition strategy adopted here is based on notions of bottom-up chart parsing. As syntacticians will notice, not all characteristics of this kind of parser actually come to bear on deductive recognition. By casting the problem in this way, however, the computational properties of the recognition problem in RHO are made clear. The principal data structure in a chart parser is the chart [Kay 1980], a two-dimensional table indexed by buffer position that records which constituents have been parsed out of the buffer. Letting Ch designate the chart, a given constituent x ∈ Ch[i, j] just in case x has been parsed out of the buffer between positions i and j. For deductive recognition, the chart is an a-dimensional table, where a is the maximal arity of predicates in the recognition rules. Chart cells are indexed by individual constants of the language. If a proposition (π κ1 … κn) has been given as a premise, or has been deduced from other propositions, then π is entered in cell Ch[κ1 … κn] of the chart. For expository purposes, one can assume that Ch is implemented as an a-dimensional array A. The chart entry indexed by individuals κ1 … κn is thus A(N(κ1), …, N(κn), 0, …, 0), where N is a function that returns an integer code (≥ 1) for each constant κi and the 0's fill in any missing indices of A if the κi are fewer than a in number.
This storage strategy is clearly simple-minded and wasteful of space; many improvements are possible (see [Ullman 1988, 1989]). The deductive recognition process operates bottom up, and as with many syntactic recognizers for context-free languages, intermediate states of recognition are encoded as "dotted rules" [Earley 1970]. Initially a rule's dot is to the left of the first term on the right-hand side. If this term matches an entry in the chart, a copy of the rule is created that advances the dot past the matching term, indicating that this new rule must match on its second term. This process of creating dotted rules is continued until the right-hand side is exhausted, whereupon the left-hand side is added to the chart. As with unification grammars [Shieber 1986], the variable bindings that ensue from matching a term to a chart entry are associated with the resulting dotted rule, and are consulted in subsequent matches. For further details on this process, and for specific algorithms, see [Earley 1970; Kay 1980; Shieber 1986]. It is straightforward to verify that this parsing algorithm correctly computes all the deductions sanctioned by a set of sufficiency rules. In effect, the chart implicitly defines a modeling function μ: ⟨μ(κ1), …, μ(κn)⟩ ∈ μ(π) just in case π is entered into the chart in cell Ch[κ1 … κn]. That this μ provides an interpretation of the sufficiency rules follows directly from the way the parsing process adds entries into the chart. The computational complexity of this algorithm, unfortunately, is unappealing. In the general case, unification grammars have the formal power of a Turing machine. This result does not fully apply here, principally because function symbols are barred from the recognition rules. Nevertheless, the problem of computing the deductions sanctioned by sufficiency rules can be readily shown to be NP-complete, by reduction from subgraph isomorphism or from conjunctive Boolean query [Garey & Johnson 1979].
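The closure computed by the chart-based recognizer can be approximated with a naive forward chainer over function-free Horn rules. This is a sketch, not the dotted-rule algorithm itself: it computes the same set of deductions, and its brute-force matching over all fact combinations makes visible the exponential cost in rule body length that the NP-completeness result predicts. The fact encoding is an assumption.

```python
from itertools import product

def forward_chain(rules, facts):
    """Naive bottom-up evaluation of function-free Horn rules.
    Facts are ground tuples like ("CHILD", "fred", "mary"); a term
    starting with "?" is a variable. Returns the deductive closure."""
    chart = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # try every combination of chart entries against the body
            for combo in product(chart, repeat=len(body)):
                theta = {}
                if all(unify(pat, fact, theta)
                       for pat, fact in zip(body, combo)):
                    new = tuple(theta.get(t, t) for t in head)
                    if new not in chart:
                        chart.add(new)
                        changed = True
    return chart

def unify(pat, fact, theta):
    """Match one pattern against one ground fact, extending theta."""
    if len(pat) != len(fact) or pat[0] != fact[0]:
        return False
    for p, f in zip(pat[1:], fact[1:]):
        if p.startswith("?"):
            if theta.setdefault(p, f) != f:
                return False
        elif p != f:
            return False
    return True

rules = [(("DOCTOR", "?x"),
          [("PERSON", "?x"), ("HAS-DEGREE", "?x", "?y"), ("MED-DEGREE", "?y")])]
facts = {("PERSON", "fred"), ("HAS-DEGREE", "fred", "d1"), ("MED-DEGREE", "d1")}
print(("DOCTOR", "fred") in forward_chain(rules, facts))  # → True
```

Termination is guaranteed because the rules are function-free: only finitely many ground atoms can ever be formed from the given predicates and constants.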
The reduction from subgraph isomorphism is of particular interest, as it helps reveal a tractable subclass of deductive recognition problems. The reduction proceeds by mapping the matching graph (the putative subgraph) into a recognition rule. For each node n1 … nn in the matching graph, a variable v1 … vn is created, and for each vertex from ni to nj, the term (CONNECTS vi vj) is added to the rule. The rule's left-hand side is the term (SG v1 … vn). The target supergraph is then mapped to ground propositions (CONNECTS mi mj), where mi and mj are target graph nodes. The matching graph is isomorphic to a subgraph of the target just in case (SG m1 … mn) is recognized for some m1 … mn.

Achieving tractability

This reduction demonstrates that the potential computational burden of deductive recognition arises from the need to perform arbitrary graph matching to instantiate a rule. This suggests that if one could restrict deductive rules so as to preclude graph matching, recognition might be tractable. The following criterion provides such a restriction. The criterion is based on a partial order L defined by the existential variables introduced on the rule's right-hand side. Starting from the universal variables ui on the left-hand side, which are clustered as the bottom element of L, an undirected graph of the partial order is obtained by consulting terms on the right-hand side in order of decreasing arity. If a term mentions variables v1 … vi that are already in the order and variables vj … vn that are not, a cluster c containing vj … vn is first added to the graph. Then, for any two variables (old or new) in v1 … vn, an undirected link is added between their variable clusters, provided there is not already a path between the two that only mentions the variable clusters of v1 … vn. The rule is said to define a variable tree just in case the graph of L is a tree rooted at the ui cluster.
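The subgraph-isomorphism reduction described above can be sketched directly: the matching graph becomes one rule body of CONNECTS terms, the target graph becomes ground CONNECTS facts, and a brute-force matcher stands in for rule instantiation. All names here are illustrative, and the matcher tries only injective variable assignments for simplicity.

```python
from itertools import permutations

def graph_to_rule(nodes, edges):
    """Map a matching graph to a recognition rule (head, body)."""
    vs = {n: "?v%d" % i for i, n in enumerate(nodes)}
    head = ("SG",) + tuple(vs[n] for n in nodes)
    body = [("CONNECTS", vs[i], vs[j]) for (i, j) in edges]
    return head, body

def rule_fires(body, facts, target_nodes):
    """Try every injective assignment of rule variables to target
    nodes; success means the pattern matches a subgraph."""
    variables = sorted({t for atom in body for t in atom[1:]})
    for assignment in permutations(target_nodes, len(variables)):
        m = dict(zip(variables, assignment))
        if all(("CONNECTS", m[a], m[b]) in facts for (_, a, b) in body):
            return True
    return False

# triangle pattern, target containing a triangle among m1, m2, m3
head, body = graph_to_rule(["n1", "n2", "n3"],
                           [("n1", "n2"), ("n2", "n3"), ("n3", "n1")])
facts = {("CONNECTS", "m1", "m2"), ("CONNECTS", "m2", "m3"),
         ("CONNECTS", "m3", "m1"), ("CONNECTS", "m1", "m4")}
print(rule_fires(body, facts, ["m1", "m2", "m3", "m4"]))  # → True
```

The permutation search over target nodes is exactly the "arbitrary graph matching" whose cost the variable-tree criterion below is designed to avoid.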
Note that the examples given earlier in this paper meet this criterion, as does any rule equivalent to an expression formed with the operators in Figure 2. For example, the first sufficiency axiom given above for PARENT, (PARENT u1) ← (PERSON u1) ∧ (CHILD u1 e1), only has two variables, which are clustered and ordered as {u1} L {e1}. This trivially defines a variable tree. Given this notion, it can then be shown that recognition over any set of rules that define variable trees can be performed in polynomial time. The proof proceeds by estimating the amount of matching necessary to instantiate the left-hand side of a rule r. Say |κ| is the total number of constants appearing in ground propositions, and a is the maximal predicate arity in the rules. Then to obtain all possible consistent instantiations of the variables in a variable cluster of r can only require O(|κ|^a) matches. Because r defines a variable tree, at most another O(|κ|^a) matches are necessary to relate the cluster to all other clusters for r. The number of such clusters is bounded by a parameter |ξ|, the maximal number of variables in a rule. The total number of left-hand side instantiations is again bounded by |κ|^a. The overall cost of fully instantiating the left-hand sides is thus O(|ξ| |κ|^a), a polynomial in |κ| with |ξ| and a as parameters.

Further issues

The preceding result demonstrates that despite the undecidability of subsumption in most KL-ONE-like languages, the related problem of instance classification, or recognition, is in fact tractable to a significant degree. That is the main result of this paper. Beyond this, a number of issues remain that I would like to address before closing.

Universal conditions

The discussion to this point has largely ignored the specification of universal restrictions, such as the selectional restrictions on predicates that one might want to impose in order to state, e.g., that all of a person's children are persons.
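A rough executable rendering of the variable-tree criterion is given below. It is a simplification of the definition in the text: the "no existing path" proviso is approximated by counting links (a tree on n clusters has exactly n-1 links), connectivity is not separately checked, and variables are identified by the paper's u*/e* naming convention. It is meant only to make the shape of the test concrete.

```python
def defines_variable_tree(head, body):
    """Approximate the variable-tree test: head variables form the
    root cluster, each term's new variables form one child cluster,
    and the links between clusters must number one fewer than the
    clusters (the link count of a tree)."""
    is_var = lambda t: t[:1] in ("u", "e")
    root = tuple(sorted(t for t in head[1:] if is_var(t)))
    cluster = {v: root for v in root}
    clusters, links = {root}, set()
    for term in sorted(body, key=len, reverse=True):  # decreasing arity
        vs = [t for t in term[1:] if is_var(t)]
        new = tuple(sorted(v for v in vs if v not in cluster))
        if new:
            clusters.add(new)
            for v in new:
                cluster[v] = new
        touched = sorted({cluster[v] for v in vs})
        links.update((a, b) for i, a in enumerate(touched)
                     for b in touched[i + 1:])
    return len(links) == len(clusters) - 1

# (PARENT u1) <- (PERSON u1) & (CHILD u1 e1): a variable tree
print(defines_variable_tree(
    ("PARENT", "u1"),
    [("PERSON", "u1"), ("CHILD", "u1", "e1")]))  # → True
```

A rule whose body closes a cycle among its variable clusters, such as (SG u1) ← (C u1 e1) ∧ (C u1 e2) ∧ (C e1 e2), accumulates one link too many and is correctly rejected by this sketch.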
As I suggested above, RHO provides universal (or necessity) conditions such as these with a simple frame language, in this case: (frame PARENT (PERSON (CHILD))). The details of RHO's necessity language are not of great interest here. By a fairly pedestrian set of meaning postulates, statements such as the preceding are interpreted as universal restrictions on the fillers of relations, in this case:

∀x, y (PARENT x) ∧ (CHILD x y) ⊃ (PERSON y)

In a somewhat perverse twist, it is worth pointing out that necessity conditions can actually be recast in terms of RHO's sufficiency rules, as in (PERSON y) ← (CHILD x y).

RHO in perspective

I should note that though the framework adopted here is loosely based on chart parsing, that is not strictly necessary. The notion of implementing deduction with parsing strategies is, I would hope, both provocative and amusing, but in truth it is related to the way rule matching is implemented in certain forward-chaining rule languages, for example McAllester's RUP [McAllester 1982]. An interesting topic of further study would be to compare the performance, theoretical and actual, of this "parsing deducer" to that of traditional forward-chaining architectures such as OPS5 [Forgy 1982] or TREAT [Miranker 1987]. In closing I should acknowledge that there are significant representational issues lurking behind this work. By separating existential and universal restrictions, and by making their semantics implicational (as opposed to biconditional), RHO rejects the standard KL-ONE notion of necessary and sufficient type definitions. Along these lines, a counter-example that has been levelled in criticism of this work is the concept of an all-girl school. It is easy to define this concept in KL-ONE, but no non-trivial sufficiency axiom can be given for it in RHO. Such criticisms are certainly valid, but are somewhat beside the point.
The whole point of this work is to support deductive recognition from ground facts; recognizing an all-girl school by observing its students' gender is a non-monotonic inference that is inductive in nature. As a point of speculation, my suspicion is that progress towards this particular non-monotonic reasoning problem will come from representational approaches that pay attention to issues of learning. Research is proceeding in this direction, including some of my own [Vilain, Koton & Chase 1990]. This area holds much promise for the future.

Acknowledgements

This research was begun while I was affiliated with Bolt, Beranek, and Newman Inc. (BBN), where Remko Scha offered much patience and attention. Bill Woods and Steve Minton provided critical insights, and Ellen Hays provided incisive criticism. I also owe much gratitude to the Arnold Arboretum of Harvard University for providing the setting in which these ideas were first developed.

VILAIN 469

References

Brachman, R. J. & Levesque, H. J. (1984). The tractability of subsumption in frame-based description languages. In Proceedings of the Fourth National Conference on Artificial Intelligence (AAAI-84), Austin, TX, 34-37.

Earley, J. (1970). An efficient context-free parsing algorithm. Communications of the ACM 13: 84-102. Reprinted in [Grosz et al. 1986].

Forgy, C. L. (1982). A fast algorithm for the many pattern/many object pattern matching problem. Artificial Intelligence 19: 17-37.

Garey, M. R. & Johnson, D. S. (1979). Computers and intractability: a guide to the theory of NP-completeness. New York, NY: W. H. Freeman and Co.

Grosz, B. J., Sparck Jones, K., & Webber, B. L., eds. (1986). Readings in Natural Language Processing. Los Altos, CA: Morgan-Kaufmann Publishers.

Kay, M. (1980). Algorithm schemata and data structures in syntactic processing. Technical report CSL-80-12, Xerox Palo Alto Research Center. Reprinted in [Grosz et al. 1986].

Levesque, H. J. & Brachman, R. J. (1987).
Expressiveness and tractability in knowledge representation and reasoning. Computational Intelligence 3: 78-93.

Mark, W. S. (1982). Realization. In Schmolze, J. G., and Brachman, R. J., eds., Proceedings of the Second KL-ONE Workshop. Technical Report No. 4842, Bolt Beranek and Newman, Inc., Cambridge, MA, 78-89.

McAllester, D. A. (1982). Reasoning utilities package user's manual. AI Memo 667, MIT AI Lab.

MacGregor, R. M. (1988). A deductive pattern matcher. In Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI-88), Saint Paul, MN, 403-408.

Miranker, D. P. (1987). TREAT: A new and efficient match algorithm for AI production systems. PhD dissertation, Dept. of Computer Science, Columbia University.

Patel-Schneider, P. F. (1989). Undecidability of subsumption in NIKL. Artificial Intelligence 39: 263-272.

Patel-Schneider, P. F., Owsnicki-Klewe, B., Kobsa, A., Guarino, N., MacGregor, R., Mark, W. S., McGuinness, D. L., Nebel, B., Schmiedel, A., & Yen, J. (1990). Term subsumption languages in knowledge representation. AI Magazine 11(2): 16-23.

Schmolze, J. G. (1989). Terminological knowledge representation systems supporting n-ary terms. In Proceedings of the First International Conference on Knowledge Representation (KR'89), Toronto, 432-443.

Schmolze, J. G. & Lipkis, T. (1983). Classification in the KL-ONE knowledge representation system. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83), Karlsruhe, Germany.

Schmidt-Schauß, M. (1989). Subsumption in KL-ONE is undecidable. In Proceedings of the First International Conference on Knowledge Representation (KR'89), Toronto, 421-431.

Schubert, L. K., Goebel, R. G., & Cercone, N. J. (1979). The structure and organization of a semantic net for comprehension and inference. In Findler, N. V., ed., Associative networks: Representation and use of knowledge by computers. New York, NY: Academic Press.

Shieber, S. M. (1986).
An introduction to unification-based approaches to grammar. CSLI lecture notes no. 4. Distributed by University of Chicago Press.

Ullman, J. D. (1988). Principles of database and knowledge-base systems, vol. 1. Rockville, MD: Computer Science Press.

Ullman, J. D. (1989). Principles of database and knowledge-base systems, vol. 2. Rockville, MD: Computer Science Press.

Vilain, M. B. (1985). The restricted language architecture of a hybrid representation system. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85), Los Angeles, CA, 547-551.

Vilain, M. B., Koton, P. & Chase, M. P. (1990). On analytical and similarity-based classification. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), Boston, MA, 867-874.

Woods, W. A. & Schmolze, J. G. (1990). The KL-ONE family. To appear in Computers and Mathematics with Applications, special issue on semantic networks in artificial intelligence. Also available as Technical Report TR-20-90, Aiken Computation Lab, Harvard University.
Knowledge Science Institute
University of Calgary
Calgary, Alberta, Canada T2N 1N4
gaines@cpsc.ucalgary.ca

This paper addresses the integration of services for rule-based reasoning in knowledge representation servers based on term subsumption languages. As an alternative to previous constructions of rules as concept→concept links, a mechanism is proposed based on intensional roles implementing the axiom of comprehension in set theory. This has the benefit of providing both rules as previously defined, and set aggregation, using a simple mechanism that is of identical computational complexity to that for rules alone. The extensions proposed have been implemented as part of KRS, a knowledge representation server written as a class library in C++. The paper gives an example of their application to the ripple-down rule technique for large-scale knowledge base operation, acquisition and maintenance.

Introduction

In recent years there have been major advances in the theory and practice of knowledge representation systems originating from semantic nets. In particular the series of term subsumption languages commencing with KL-ONE (Brachman & Schmolze, 1985), developing through KRYPTON (Brachman, Gilbert & Levesque, 1985) and currently culminating in systems such as CLASSIC (Borgida, Brachman, McGuinness & Resnick, 1989) and LOOM (MacGregor, 1988) has reached a maturity of technology which offers the promise of knowledge representation 'utilities' or 'services' in Levesque's (1984) terminology. The logical foundations of the subsumption relation, techniques for its correct and complete calculation, and the interaction between representation power and the tractability of subsumption algorithms have been widely studied (Brachman & Levesque, 1984; Schmidt-Schauß, 1989; Nebel, 1990) and are becoming reasonably well-defined.
It is now feasible to develop knowledge representation servers on a par with floating-point arithmetic units and numeric libraries, as software (and perhaps ultimately hardware) modules with well-defined functionality and fast, reliable performance. As with arithmetic units, such servers by no means solve all the problems of a particular application domain, but they do greatly reduce the burden of system development, allowing effort to be focused on the specifics of particular systems rather than on what should be general utilities.

In practical terms, the current generation of term subsumption languages and associated theoretical studies may be seen as providing a well-defined and understood semantics for frame-based knowledge representation systems. However, the focus on terminological definitions has not been paralleled by similar in-depth analysis of rules in knowledge-based systems, and the provision of rules in term subsumption languages is simplistic compared with that of most expert system shells. As rules are regarded as part of the assertional, A-box, component and do not form part of terminological definitions in the T-box, this does not affect the theoretical analysis of subsumption. However, it restricts the services offered by a knowledge representation server, and requires for many tasks that additional inference engines be written apart from the server. This paper addresses the problem of providing a powerful rule representation system well-integrated with a term subsumption language, with emphasis on knowledge acquisition issues such as the natural representation of rules with exceptions. As a side-effect the rule system defined also provides for the automatic formation of sets or aggregations of individuals.
The next sections briefly outline the knowledge representation server and its visual language, the representation of rules within it through individuals with intensional, rather than extensional, role definitions, and some applications to knowledge acquisition and knowledge-base maintenance.

Representation

KRS is a knowledge representation server written as a class library in C++ with semantics that are a slight extension of those of CLASSIC (Borgida, Brachman, McGuinness & Resnick, 1989). KRS supports a textual input/output language similar to that of CLASSIC, but since its primary application is to knowledge acquisition (Gaines, 1990) it also supports an equivalent visual input/output language through an interactive grapher (Gaines, 1991). This visual language will be used for the exposition in this paper with some examples of its compilation into textual form. The visual language provides the means to represent knowledge structures as graphs of labelled nodes and arcs. The visual primitives of the language are:

• Nodes, identified and typed as specified below.
• Two arc types linking nodes: a line with, and without, an arrow, respectively.
• Text strings labelling the nodes, with an associated equivalence relation based on lexical identity.
• Five distinctive text surrounds defining the node types: ovals (concepts), marked ovals (primitives), rectangles (individuals), no surround (roles or annotation), and marked rounded corner rectangles (constraints, e.g. cardinality and set inclusion).

The semantics of the arc types are overloaded and determined by the nodes joined. A line between two primitive nodes defines them as disjoint, and between two roles defines them as inverse, for example, parent-child.
An arrow from one concept to another defines the first as subsumed by the second; from a concept to a role, that the role is part of the definition of the concept, etc. The graph at the top of Figure 1 shows the way in which concepts are defined and constraints and values asserted for individuals in the visual language. The resultant text statements are shown at the bottom.

Primitive(person)
Primitive(patient, person,
 (All age, (One young, old))
 (All height, (Integer 10-100)))
Concept(old patient, patient,
 (All age, (Include old)))
Individual(Fred, patient,
 (Fills age, old)
 (Fills height, 72))

Integration of Rules

CLASSIC (Borgida, Brachman, McGuinness & Resnick, 1989) and LOOM (MacGregor, 1988) provide a basic form of rule in which two concept definitions are linked in such a way that recognition of an individual as satisfying the constraints of the first concept leads to the assertion that it satisfies the constraints of the second. For example, if the following rule is added to the graph of Figure 1 and the inference engine is run, then the individual Fred will be recognized as an "old patient" and the assertion will be made that the role "risk" for Fred includes "pneumonia." The incorporation of rules as asserted concept→concept links integrates well with the way in which definitions in the T-box classify assertions in the A-box: if an individual is classified as falling under the first concept it is asserted to fall under the second; the first concept definition may be seen as specifying sufficient conditions for classification, and the second concept as additional necessary conditions. MacGregor (1988) has shown how such rules may be used to recognize recursive structures such as lists, and Figure 2 shows his construction in the visual language. The individual "c2" is recognized to be a "cons list" because it is asserted to be a "cons" and its cdr, "null", is asserted to be a "list".
Hence, c2, through the rule, is asserted to be itself a "list". Then the individual "c1" is similarly recognized as a "cons list" because its cdr, "c2", has been asserted to be a "list".

Fig.2 Recognition of a recursive structure using a rule

It may appear that a major limitation of this form of rule is that, since a single individual is classified at a time, it corresponds to an OPS5-style rule with only one free variable (MacGregor, 1988). However, since the roles of an individual may be filled with other individuals, a rule classifying one individual may involve classifying, and making assertions about, several individuals. For example the rule with two free variables:

((data ?d) (clock ?c) (connected ?d ?c) → (connection-error ?d ?c))

in CADIE (Franke, 1990), has an equivalent in KRS as shown in Figure 3.

Fig.3 Rule with two free variables
Fig.1 Definitions and assertions in KRS visual and text languages

GAINES 459

If an individual that is asserted to be a "clock" is asserted to be "connected" to one that is asserted to be "data" then both individuals are asserted by the rule to have a "connection error". This construction in general clearly depends on the roles linking the free variables and would not be applicable to a rule whose premise references several completely independent individuals. However, this may be seen as a desirable constraint on rules integrated with frame-based knowledge representation schema: that a frame should exist, or be created, which brings into relation the individuals accessed by the premise of a rule.

Rules through Aggregation

There is a more subtle criticism that can be made of the way in which rules are introduced into term subsumption languages. A rule is introduced as a new construct which links two concepts. It acts as an inferred assertion involving the classification of all individuals in the domain by the concepts that form the premise of a rule.
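The behavior of such a role-linked rule can be sketched in a few lines of Python; the class and rule below are invented for illustration and are not KRS or CADIE code:

```python
# Minimal sketch (invented names, not KRS/CADIE code) of a rule whose single
# classified individual reaches a second individual through a role filler.

class Individual:
    def __init__(self, name, concepts=()):
        self.name = name
        self.concepts = set(concepts)   # asserted/inferred concept labels
        self.roles = {}                 # role name -> list of filler Individuals

    def fill(self, role, filler):
        self.roles.setdefault(role, []).append(filler)

def connection_error_rule(ind):
    """If a 'clock' individual is 'connected' to a 'data' individual,
    assert 'connection error' on both (cf. the CADIE rule in the text)."""
    if "clock" in ind.concepts:
        for other in ind.roles.get("connected", []):
            if "data" in other.concepts:
                ind.concepts.add("connection error")
                other.concepts.add("connection error")

c = Individual("c1", ["clock"])
d = Individual("d1", ["data"])
c.fill("connected", d)
connection_error_rule(c)
print("connection error" in c.concepts, "connection error" in d.concepts)  # True True
```

Although the rule is triggered by classifying the single individual c1, the role filler brings d1 into the assertion as well, which is the point made above about rules with two free variables.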
However, the sets of individuals resulting from this classification are lost except in so far as it results in individuals being re-classified by the conclusion of the rule. There is a need in many applications of term subsumption languages for an aggregation operation that forms sets in a natural way (Allgayer & Reddig-Siekmann, 1990), and it is attractive to consider an alternative way of incorporating rules as the side effect of such aggregation. The definition of role fillers is normally extensional in that specific individuals are named as fillers. Suppose one introduces a complementary way of filling a role intensionally by defining the concept which a role filler must satisfy and specifying that all individuals in the domain that satisfy it are fillers. This operationalizes Frege's axiom of abstraction or comprehension, that every concept defines a set. Introducing intensional role filling instead of rules as a new construct has the advantage of providing an explicit aggregation operator. Rules can be realized as a natural side effect of the individuals actually being asserted to be in the intensional role. That is, the premise of the rule is now the intensional role definition, and the conclusion is the intensional role constraint. As a first step to an aggregation operation, consider how Frege's axiom can be represented using inverse roles. Figure 4 shows a concept "intension x" whose role "member of" includes "extension x", whose role "member" is inverse to "member of" and constrained by "intension x". An individual asserted to be an "intension x" will be inferred to fill the "member" role of "extension x", and, conversely, an individual asserted to fill the "member" role of "extension x" will be inferred to be an "intension x".

Fig.4 Axiom of comprehension with inverse roles

This construction is available in existing knowledge representation systems.
For example, in LOOM, it is:

(defconcept intension-x
  :is (filled-by (:inverse member) extension-x))

The assertions of conceptual constraints on individuals:

(tell (intension-x ind1))
(tell (intension-x ind2))

then result in the query about members of extension-x:

(retrieve ?I (member extension-x ?I))

returning ind1 and ind2 as required. The construction of Figure 4 is readily extended to have the side effect required for rules by having the arrow from "member" go, not to "intension x", but instead to another concept, "intension y" say, representing the conclusion of the rule. However, inverse roles alone cannot implement intensional role filling since it is still necessary to make the explicit assertion that an individual is an "intension x" rather than recognize this as being so. Figure 5 shows how intensional role filling is introduced in KRS as a link from a concept, "intension x", to an individual, "extension x". This results in the inference engine aggregating the set of all individuals in the domain that are recognized as "intension x" and filling the "member" role of "extension x" with them. If the "member" role is constrained by a concept, "intension y", this also results in all the individuals recognized as "intension x" being additionally asserted to be "intension y", thus implementing a rule.

Fig.5 Intensional role, aggregation individual and rule

Intensional roles are proposed as a preferred alternative to concept→concept rules in term subsumption languages. They provide set aggregation and rules in one construct. Implementation in the inference engine involves no more computation than for rules alone. The additional storage space for the aggregations has not proved a significant overhead in a wide range of applications of KRS. Note that a "rule" in KRS is represented as an individual with an intensional role, and that the individual can have additional roles, and can itself be used as a role filler in other individuals.
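The way intensional filling yields both an explicit aggregation and a rule as a side effect can be sketched as follows; the dict-based individuals and function names are invented for illustration, not the KRS API:

```python
# Sketch (invented representation): an intensional role aggregates every
# individual recognized by the premise concept, and asserting the role's
# constraint concept of each member implements the rule as a side effect.

def aggregate_intensionally(domain, premise, conclusion, extension):
    """Fill extension's 'member' role with all individuals satisfying
    `premise`; each member is also asserted to be `conclusion`."""
    members = [ind for ind in domain if premise(ind)]
    extension["member"] = members          # explicit aggregation
    for ind in members:
        ind["concepts"].add(conclusion)    # the rule, as a side effect
    return members

domain = [{"name": "ind1", "concepts": {"intension x"}},
          {"name": "ind2", "concepts": {"intension x"}},
          {"name": "ind3", "concepts": set()}]
extension_x = {"name": "extension x"}
aggregate_intensionally(domain,
                        premise=lambda i: "intension x" in i["concepts"],
                        conclusion="intension y",
                        extension=extension_x)
print([i["name"] for i in extension_x["member"]])   # ['ind1', 'ind2']
```

The aggregated set persists on the extension individual rather than being lost, which is the advantage over plain concept-to-concept rules argued for above.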
The availability of these features and explicit aggregations has proved invaluable in applications involving planning, configuration and scheduling.

Rules with Exceptions

A limitation of the rule schema described above is that it makes no provision for the representation of rules with exceptions. Logically, if one does not require default reasoning, rules with exceptions can always be expanded into an equivalent, although generally larger, set of rules without exceptions. However, in knowledge representation for knowledge acquisition systems in particular, it is important to be able to encode expert knowledge in exact conformity with the expert's representation, which is often as a rule with exceptions. Hence, KRS offers a further extension in which one rule can be an exception to others.

460 TERMINOLOGICAL REASONING

As shown in Figure 6, an arrow from one individual to another means that the first individual acts as an exception in aggregation to the second. That is, individuals are aggregated in the "member" role of the "aggregation rule" if they are classified as "premise rule" but not as "premise exception". Thus the exception mechanism provides for both exceptions to rules and differential aggregations. Since one individual may be an exception to many others, and may have many exceptions to itself, complex structures are readily represented. To reduce the number of arrows the exception arrow is taken to be transitive, so that an individual is an exception to all those individuals along its outgoing paths of exception arrows.

Fig.6 Representation of a rule with an exception

Figure 7 shows the solution to Cendrowska's (1987) contact lens problem represented in KRS as a set of two rules with three exceptions. This compares favorably with the minimal solution without exceptions of nine rather more complex rules.
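The differential aggregation just described can be sketched as follows; the dict-based representation and predicate premises are assumed for illustration:

```python
# Sketch of differential aggregation with transitive exceptions. exc_in[r]
# lists rules that are direct exceptions to r; exceptions of exceptions also
# apply, since exception arrows are transitive (representation assumed).

def exceptions_to(rule, exc_in):
    seen, stack = set(), list(exc_in.get(rule, []))
    while stack:
        e = stack.pop()
        if e not in seen:
            seen.add(e)
            stack.extend(exc_in.get(e, []))
    return seen

def members(rule, premises, exc_in, domain):
    """Individuals aggregated by `rule`: those classified under its premise
    but not under the premise of any (transitive) exception."""
    excluded = exceptions_to(rule, exc_in)
    return [x for x in domain
            if premises[rule](x) and not any(premises[e](x) for e in excluded)]

premises = {"rule k": lambda x: True,
            "rule j": lambda x: x % 2 == 0,
            "rule i": lambda x: x % 4 == 0}
exc_in = {"rule k": ["rule j"], "rule j": ["rule i"]}
domain = [1, 2, 3, 4]
print(members("rule k", premises, exc_in, domain))   # [1, 3]
print(members("rule j", premises, exc_in, domain))   # [2]
print(members("rule i", premises, exc_in, domain))   # [4]
```

Each rule aggregates only the cases not claimed by its chain of exceptions, which is exactly the structure the ripple-down rule technique exploits later in the paper.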
The implementation of intensional roles, aggregations, rules and exceptions in KRS is more subtle than has been indicated because no closed-world assumptions are made. Hence subsumption and recognition return one of three values: true, false or open, that is, able to become either true or false as more assertions are made. If an individual is open in recognition for an intensional role, this acts as false as far as placing it in the role is concerned but as true as far as exception propagation is concerned. Thus, the basic inference engine acts as a conservative, monotonic reasoner in worlds where some roles or fillers are open. It also supports a simple extension of the exception scheme to default reasoning in that the inference engine generates a list of open concept-individual pairs. Non-monotonic inference can commence with the default assumption that none of the open pairs will become true and hence not propagate open exceptions. If this leads to a consistent world it is the unique default extension. If it does not, a truth maintenance search can be invoked for maximal subsets of the open pairs that can be assumed false, resulting in zero or more possible extensions. Compton and Jansen (1990) have developed techniques for the acquisition and maintenance of large rule-based systems. They have applied their "ripple-down rule" techniques to the Garvan ES-1 knowledge base for thyroid diagnosis, which has grown to a size and complexity where it has become difficult to maintain. The techniques rely on the efficient management of rules with a large number of exceptions, and the acquisition and maintenance procedures require access to aggregations of previously classified cases. Hence they provide a useful test of KRS capabilities in a domain where ongoing maintenance through continual knowledge acquisition is important, and where large rule sets and case bases are available.
Fig.7 Solution to contact lens problem through rules with exceptions

The acquisition and maintenance procedures assume that an expert is available to make decisions about changes in the knowledge base. The objective of the ripple-down rule representation is to simplify the expert's task by allowing a new rule to be entered for a misdiagnosed case through a simple procedure that has only to take account of one existing rule and the cases that have already fallen under it. That is, the activities take place in a minimal context with a guarantee that no change will be made to the system's behavior outside that context. Consider an empty knowledge base in which information about an individual "case 0" has been entered with a known diagnosis "diagnosis b". The expert creates a concept, "concept j", that will recognize "case 0" and attaches it to a rule, "rule j", that leads to "diagnosis b". Figure 8 shows the resultant state of the knowledge base.

Fig.8 Initial case and rule entered in ripple-down rule base

Now consider the entry of further cases. If a case is identical to an existing one but has a different diagnosis then there is a conflict; otherwise there are four possibilities:
1. "Case 1" is identical to "case 0" and has the same diagnosis. There is no need for a new rule or for the entry of the case.
2. "Case 2" falls under "concept j" and has "diagnosis b" but is not identical to "case 0". There is no need for a new rule but the case needs to be entered.
3. "Case 3" does not fall under "concept j" and has "diagnosis c". The expert proposes a new covering concept, "concept k", and rule, "rule k", following the same procedure as for "case 0". The existing rule, "rule j", is made an exception to this new rule.
4. "Case 4" falls under "concept j" but has a different diagnosis, "diagnosis a".
A new rule is created by adding constraints to "concept j" that apply to "case 4" but not to "case 0", to create "concept i" subsumed by "concept j". The resultant "rule i" with "diagnosis a" is entered as an exception to "rule j". Figure 9 shows the state of the knowledge base after this procedure has been followed. Compton and Jansen's insight was that the procedure described could be repeated indefinitely as more cases are entered. It leads to a linear chain of rules such that a case may be seen as being entered at the beginning of the chain and rippling up (or "down": KRS arrows are in the opposite direction to the original implementation diagrams) until it is recognized by the premise of a rule. The diagnosis is then determined by the conclusion of the rule.

Fig.9 Ripple-down rule base after initial case entries

In knowledge acquisition and knowledge-base maintenance, if the case does not fall under any rule, a new one is created at the end of the chain as for "case 3" above. If it comes under a rule with an incorrect diagnosis, a new one is created with a premise that is subsumed by that of the existing rule and does not cover any of the cases that have already been diagnosed using the existing rule. This new rule is inserted in the chain as an exception to the one with the incorrect diagnosis. A new rule can always be created using these procedures since a concept may be created that has precisely the constraints necessary to recognize uniquely the new case. However, the expert will often be able to generalize such a specific concept and, in so doing, need only consider discriminating the previous cases attached to the one rule. The KRS rule mechanism supports both the chain of exceptions and the case aggregation required for ripple-down rules. The ripple-down version of the Garvan rules, 547 rules, together with 669 cases evaluated on 17 attributes, were loaded and run in KRS on a Macintosh II.
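The ripple-down classification itself can be sketched compactly; the tuple layout and the toy premises below are assumptions for illustration, not the Garvan rules:

```python
# Sketch of ripple-down classification (data layout assumed): a rule is
# (premise, diagnosis, exceptions); exception rules, whose premises refine
# the rule's own, are tried before the rule's diagnosis is returned.

def classify(case, rule):
    premise, diagnosis, exceptions = rule
    if premise(case):
        for exc in exceptions:
            d = classify(case, exc)
            if d is not None:
                return d          # a more specific exception claims the case
        return diagnosis
    return None

# Toy chain echoing cases 3 and 4 above: rule i excepts rule j, which
# excepts the covering rule k.
rule_i = (lambda c: c["age"] == "old" and c["height"] > 70, "diagnosis a", [])
rule_j = (lambda c: c["age"] == "old", "diagnosis b", [rule_i])
rule_k = (lambda c: True, "diagnosis c", [rule_j])

print(classify({"age": "old", "height": 72}, rule_k))    # diagnosis a
print(classify({"age": "old", "height": 60}, rule_k))    # diagnosis b
print(classify({"age": "young", "height": 60}, rule_k))  # diagnosis c
```

A case "ripples" through the chain until the most specific applicable premise recognizes it, so a new exception rule changes behavior only for cases inside its parent rule's context.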
The diagnoses took 0.4 seconds a case on average. Thus, even a microcomputer is able to support a significantly large knowledge-base operation, acquisition and maintenance using the ripple-down rules technique. One obvious question regarding knowledge bases created through the ripple-down rules technique is the size-efficiency of the resultant knowledge base. Since rules, once entered, are not changed, if the expert over-generalizes or over-specifies, more rules will be generated than needed. Clearly size-efficiency is an empirical issue that can only be investigated over a number of knowledge bases created by different experts. When the data set of 669 cases is run through Induct (Gaines, 1989), an empirical induction algorithm that generates rules from cases with similar efficiency to C4.5 (Quinlan, 1987), a rule set giving correct diagnoses with 269 rules is generated, that is, a reduction by about 50%. The resultant rule set, while smaller, may not be as understandable or acceptable as a knowledge structure, and this is subject to further empirical investigation.

Conclusions

Developments in the theory and practice of term subsumption languages make possible generic knowledge representation servers offering efficient implementation of principled artificial intelligence techniques. This paper has addressed the issue of integrating services for rule-based reasoning with knowledge representation servers based on term subsumption languages. As an alternative to previous constructions of rules as concept→concept links, a construction based on intensional roles is proposed. This has the benefit of providing rules as previously defined, and set aggregation, with a simple mechanism that is of identical computational complexity to that for rules alone. This mechanism extends simply to the representation of rules with exceptions.
It is interesting to examine what representational issues in the A-box are addressed by the new constructs. The exception mechanism itself implements concept negation in rule expression. In addition, since multiple concepts can point to the same aggregation individual, disjunctive rules can be expressed. These mechanisms also provide for the formation of set aggregations on a differential basis, essentially introducing concept disjunction and negation in both rules and set aggregation in the A-box. The computational cost of the intensional role aggregation operation is identical to that of the rule mechanism it replaces. Intensional roles are processed exactly as if they were the premises of rules. The subsumption lattice that is computed as part of the KRS implementation of a term subsumption language is used to ensure very rapid recognition of those individuals which fall under the concept defining an intensional role. The exception links are processed quite separately after such recognition, in a time that is negligible compared with the recognition itself. The extensions proposed have been implemented as part of KRS, a knowledge representation server written as a class library in C++. The paper gives an example of their application to the ripple-down rules technique for large-scale knowledge base operation, acquisition and maintenance, and some evaluation of the space/time performance on the Garvan ES-1 knowledge base.

This work was funded in part by the Natural Sciences and Engineering Research Council of Canada. I am grateful to Ron Brachman and Rob MacGregor for access to their research on term subsumption languages, to Paul Compton and Bob Jansen for access to their research on ripple-down rules, to Paul Compton for access to the Garvan ES-1 knowledge base, and to Mildred Shaw for collaborative research on knowledge acquisition systems.

References

Allgayer, J. & Reddig-Siekmann, C. 1990. What KL-ONE lookalikes need to cope with natural language.
Bläsius, K.H., Hedtstück, U. & Rollinger, C.-R., Eds. Sorts and Types in Artificial Intelligence. pp.240-285. Berlin: Springer.
Borgida, A., Brachman, R.J., McGuinness, D.L. & Resnick, L.A. 1989. CLASSIC: a structural data model for objects. Clifford, J., Lindsay, B. & Maier, D., Eds. Proceedings of 1989 ACM SIGMOD International Conference on the Management of Data. pp.58-67. New York: ACM Press.
Brachman, R.J., Gilbert, V.P. & Levesque, H.J. 1985. An essential hybrid reasoning system: knowledge and symbol level accounts of KRYPTON. Proceedings of IJCAI-85. pp.547-551. Morgan Kaufmann.
Brachman, R.J. & Levesque, H.J. 1984. The tractability of subsumption in frame-based description languages. Proceedings of AAAI-84. pp.34-37. Morgan Kaufmann.
Brachman, R.J. & Schmolze, J. 1985. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9(2), 171-216.
Cendrowska, J. 1987. An algorithm for inducing modular rules. International Journal of Man-Machine Studies, 27(4), 349-370.
Compton, P. & Jansen, R. 1990. A philosophical basis for knowledge acquisition. Knowledge Acquisition, 2(3), 241-258.
Franke, D.W. 1990. Imbedding rule inferencing in applications. IEEE Expert, 5(6), 8-14 (December).
Gaines, B.R. 1989. An Ounce of Knowledge is Worth a Ton of Data: Quantitative Studies of the Trade-Off between Expertise and Data based on Statistically Well-Founded Empirical Induction. Proceedings of the 6th International Workshop on Machine Learning. pp.156-159. Morgan Kaufmann.
Gaines, B.R. 1990. An architecture for integrated knowledge acquisition systems. Boose, J.H. & Gaines, B.R., Eds. Proceedings of the Fifth AAAI Knowledge Acquisition for Knowledge-Based Systems Workshop. pp.8-1-8-22. Banff (November).
Gaines, B.R. 1991. An interactive visual language for term subsumption languages. IJCAI-91: Proceedings of the Twelfth International Joint Conference on Artificial Intelligence. Morgan Kaufmann.
Levesque, H.J. 1984.
A logic of implicit and explicit belief. Proceedings of AAAI-84. pp.198-202. Morgan Kaufmann.
MacGregor, R.M. 1988. A deductive pattern matcher. Proceedings of AAAI-88. pp.403-408. Morgan Kaufmann.
Nebel, B. 1990. Reasoning and Revision in Hybrid Representation Systems. Berlin: Springer.
Quinlan, J.R. 1987. Simplifying decision trees. International Journal of Man-Machine Studies, 27(3), 221-234.
Schmidt-Schauss, M. 1989. Subsumption in KL-ONE is undecidable. Proceedings of KR'89: First International Conference on Principles of Knowledge Representation and Reasoning. pp.421-431. Morgan Kaufmann.
Reason Maintenance and Inference Control for Constraint Propagation over Intervals

Walter Hamscher
Price Waterhouse Technology Centre
68 Willow Rd, Menlo Park, CA 94025

Abstract

ACP is a fully implemented constraint propagation system that computes numeric intervals for variables [Davis, 1987] along with an ATMS label [de Kleer, 1986a] for each such interval. The system is built within a "focused" ATMS architecture [Forbus and de Kleer, 1988; Dressler and Farquhar, 1989] and incorporates a variety of techniques to improve efficiency.

Motivation and Overview

ACP is part of the model-based financial analysis system CROSBY [Hamscher, 1990]. Financial reasoning is an appropriate domain for constraint-based representation and reasoning approaches [Bouwman, 1983; Dhar and Croker, 1988]. For the most part CROSBY uses ACP in the traditional way: to determine the consistency of sets of variable bindings and to compute values for unknown variables. For example, CROSBY might have a constraint such as

Days.Sales.in.Inventory = (30 × Monthly.Cost.of.Goods.Sold) / Average.Inventory

Given the values Average.Inventory ∈ (199,201) and Cost.of.Goods.Sold ∈ (19,21), ACP would compute Days.Sales.in.Inventory ∈ (2.84,3.02). Had the fact that Days.Sales.in.Inventory ∈ (3.5,3.75) been previously recorded, a conflict would now be recorded. For the purposes of this paper, all the reader need know about CROSBY is that it must construct, manipulate, and compare many combinations of underlying assumptions about the ranges of variables. Contradictions among small sets of assumptions are common. This motivates the need for recording the combinations of underlying assumptions on which each variable value depends, which in turn motivates the use of an ATMS architecture to record such information. Although there is extensive literature on the interval propagation aspects of the problem, little of the work addresses the difficulties that arise when dependencies must be recorded.
The problems that arise and the solutions incorporated into ACP are:
• Since variable values are intervals, some derived values may subsume weaker (superset) interval values. ACP marks variable values that are subsumed as inactive via a simple and general extension to ATMS justifications. Other systems that maintain dependencies while inferring interval labels either use single-context truth maintenance [Simmons, 1986; Sacks, 1987], non-monotonic reasoning [Williams, 1986], or incorporate the semantics of numeric intervals into the ATMS itself [Dague et al., 1990].
• Solving a constraint for a variable already solved for can cause redundant computation of variable bindings and unnecessary dependencies. ACP deals with this problem with a variety of strategies. Empirical results show that it is worthwhile to cache with each variable binding not only its ATMS label, but also the variable bindings that must also be present in any supporting environment.
• Certain solution paths for deriving variable bindings are uninteresting for some applications. ACP incorporates a unary "protect" operator into its constraint language to allow the user to advise the system to prune such derivation paths.

506 TRUTH MAINTENANCE SYSTEMS

Syntax and Semantics

ACP uses standard notation as reviewed here: [1, 2) denotes {x : 1 ≤ x < 2}, (−∞, 0) denotes {x : x < 0}, and [42, +∞) denotes {x : 42 ≤ x}. The symbols +∞ and −∞ are used only to denote the absence of upper and lower bounds; they cannot themselves be represented as intervals. Intervals may not appear as lower or upper bounds of other intervals, that is, [0, (10,20)] is ill formed. (,) denotes the empty set. All standard binary arithmetic operators are supported, with the result of evaluation being the smallest interval that contains all possible results of applying the operator pointwise [Davis, 1987]. For example, (1,2) + (1,2) evaluates to (2,4).
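The pointwise evaluation rule can be sketched for closed bounds as follows; this is a simplification for illustration, since ACP also handles open bounds, infinities, and division through zero:

```python
# Minimal sketch of pointwise interval evaluation over closed (lo, hi)
# bounds; open bounds and division through zero, which ACP supports, are
# elided here.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

print(i_add((1, 2), (1, 2)))    # (2, 4), as in the text
print(i_mul((-1, 2), (3, 4)))   # (-4, 8)
# Solving ([1,2] = [2,4] + c) for c amounts to interval subtraction,
# matching the c = [-3,0] example in the text:
print(i_sub((1, 2), (2, 4)))    # (-3, 0)
```

Each result is the smallest interval containing all pointwise applications of the operator, which is the evaluation semantics stated above.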
(1,2)/[0,1) evaluates to (1; +∞), with the semicolon replacing the comma to denote an interval that includes the undefined result of division by zero. All binary relations evaluate to one of T or ⊥, obeying the following rule for intervals I1 and I2:

I1 r I2 ↔ ∃x, y : x r y ∧ x ∈ I1 ∧ y ∈ I2

Corollary special cases include ∀x : x r (−∞, +∞), which evaluates to T, and ∀x : x r (,), which evaluates to ⊥. Interval-valued variables can appear in expressions and hence in the result of evaluations; for example, evaluating the expression ([1,2] = [2,4] + c) yields c = [−3,0]. Appealing to the above rule for binary relations, c = [−3,0] can be understood to mean c ∈ [−3,0]. ACP has a unary "protect" operator, denoted "!", whose treatment by the evaluator will be described later. ACP computes the binding of each variable under various sets of ATMS assumptions. Let the state of ACP be defined by a set of Horn clauses of the form C → Φ or C → V = I, where C is a set of proposition symbols denoting assumptions, Φ is a constraint formula, V is a real-valued variable and I an interval. The set of Horn clauses is to be closed under

C1 → V = I, C2 → Φ ⊢ C1 ∪ C2 → Φ[I/V]   (1)

where Φ[I/V] denotes the result of substituting interval I for V in Φ and evaluating. C′ → V = I′ subsumes C → V = I if and only if I′ ⊆ I and C′ ⊆ C. Any clause subsumed by a different clause can be deleted. Let β(V, Γ), the binding of V in context Γ, be the interval I that is minimal with respect to subset, such that there is a clause C → V = I with C ⊆ Γ. Although this abstract specification is correct for ACP in a formal sense, it does not provide good intuitions about an implementation. In particular, it is a bad idea to literally delete every subsumed clause, since that can make it harder to check subsumption for newly added clauses.
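The clause subsumption criterion above can be sketched directly; the frozenset environments and (lo, hi) interval pairs are an assumed representation:

```python
# Sketch of the subsumption test on clauses C -> V = I, with environments as
# frozensets and closed intervals as (lo, hi) pairs (representation assumed).

def interval_subset(i1, i2):
    return i2[0] <= i1[0] and i1[1] <= i2[1]

def subsumes(c1, i1, c2, i2):
    """Clause C1 -> V = I1 subsumes C2 -> V = I2 iff I1 is a subset of I2
    and C1 is a subset of C2."""
    return interval_subset(i1, i2) and c1 <= c2

# A tighter interval under fewer assumptions subsumes a looser one under more:
print(subsumes(frozenset({"A"}), (17, 19),
               frozenset({"A", "B"}), (16, 20)))   # True
```

A subsuming clause gives a tighter interval from weaker assumptions, so the subsumed clause carries no extra information, although, as noted above, literally deleting it complicates later subsumption checks.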
There is also an implicit interaction between subsumption and β that is not obvious from the above description. Hence, the remainder of this paper describes ACP mainly in terms of the actual mechanisms and ATMS data structures used by the program, instead of the abstract specification.

Reason Maintenance

A node may represent any relation (Φ), with a binding (V = I) being merely a special case. Each node has a label that is a set of minimal sets of supporting assumptions [de Kleer, 1986a]. The label of node N is denoted L(N). In effect, each node N representing Φ along with its label represents a set of C → Φ clauses. By the subsumption criterion above, only the minimal environments (C) need to be stored. In the remainder of the paper, nodes will be shown with their labels. For example, node n0 represents the relation (a = b + c), true in the empty environment {}:

n0: (a = b + c)   {}

Nodes n1 and n2 bind the variables b and c under assumptions B and C, respectively:

n1: (b = (6,9))   {B}
n2: (c = (10,11))   {C}

Constraint propagation creates justifications, which will be written as clauses of the form N0 ⇐ ∧i Ni. ATMS label propagation computes the closure under:

(N0 ⇐ ∧i Ni) ∧ Ci ∈ L(Ni) ⊢ ∪i Ci ∈ L(N0)   (2)

Continuing the example, constraint propagation yields an interval value for a, creating node n3, justified by justification j1, and label propagation adds {B, C} to the label of n3:

j1: n3 ⇐ n0 ∧ n1 ∧ n2
n3: (a = (16,20))   {B, C}

(In a naive implementation, the system might at this point try to derive values for b or c using the new value of a; this is an instance of "reflection" and methods for preventing it will be discussed later.) A query for the binding of variable a in the environment {B, C}, that is, β(a, {B, C}), should return node n3 and hence interval (16,20).

Unique Bindings

To control constraint propagation, there should be at most one binding per variable per environment.
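Label propagation rule (2) can be sketched as computing minimal unions of antecedent environments; labels as sets of frozensets are an assumed representation:

```python
# Sketch of label propagation rule (2): every combination of antecedent
# environments contributes its union to the consequent's label, which is
# kept minimal under subset.

from itertools import product

def minimize(envs):
    """Keep only environments with no strict subset also present."""
    return {e for e in envs if not any(f < e for f in envs)}

def propagate(antecedent_labels):
    unions = {frozenset().union(*combo) for combo in product(*antecedent_labels)}
    return minimize(unions)

L_n0 = {frozenset()}           # (a = b + c), true universally
L_n1 = {frozenset({"B"})}      # b = (6,9) under assumption B
L_n2 = {frozenset({"C"})}      # c = (10,11) under assumption C
print(propagate([L_n0, L_n1, L_n2]) == {frozenset({"B", "C"})})   # True
```

This reproduces the example above, where {B, C} is added to the label of n3.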
Suppose, for example, that we get a new value for a under assumption A, denoted by node n4:

n4: (a = (17,19))   {A}

Since this value of a is a subset of the interval for a derived earlier, a new justification is required for n3, with a resulting update to the label of n3:

j2: n3 ⇐ n4
n3: (a = (16,20))   {B, C} {A}   Label update

Note that node n3 representing the less specific interval (16,20) for a will need to be kept along with its label. β(a, {B, C}) should still find node n3 and return (16,20), but β(a, {A}) should only find node n4, even though n3 is true as well. "Shadowing" justifications are introduced to provide this functionality. A shadowing justification obeys (2), that is, the consequent is true in any environment in which all its antecedents are true. This criterion results in updates to the node labels L(N). However, all nodes also have a "shadow label." Any node supported by a shadowing justification in environment C also has C added to its shadow label S(N), obeying the usual minimality convention. ACP distinguishes between nodes being true in an environment, and active in an environment:

true(N, Γ) ↔ ∃C ∈ L(N) : C ⊆ Γ
active(N, Γ) ↔ true(N, Γ) ∧ ¬∃C ∈ S(N) : C ⊆ Γ   (3)

Intuitively, shadowing environments make the node invisible in all their superset environments. A node shadowed in the empty environment {} would be true in all environments, but no inferences would be made from it.

HAMSCHER 507

The unique binding function β is thus defined in terms of the active predicate:

β(V, Γ) = I ↔ active(V = I, Γ)   (4)

In the example above, j2 would be a shadowing justification, since in any environment in which n4 is true, n3 should be ignored. Shadowing justifications will be denoted by clauses with the shadow label shown to the right of a "\" character. Note that any environment appearing in S(N) must also be a superset of some environment in L(N).
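Definitions (3) and (4) can be sketched directly; the dict-of-label-sets representation is assumed for illustration:

```python
# Sketch of definitions (3) and (4): a node is true in an environment if
# some label environment is a subset of it, and active only if no
# shadow-label environment is (labels as sets of frozensets, assumed).

def holds(label, env):
    return any(c <= env for c in label)

def true_in(node, env):
    return holds(node["L"], env)

def active_in(node, env):
    return true_in(node, env) and not holds(node["S"], env)

# n3 from the example: label {B,C}{A}, shadowed in {A} by the tighter n4.
n3 = {"L": {frozenset({"B", "C"}), frozenset({"A"})},
      "S": {frozenset({"A"})}}
print(true_in(n3, {"A"}))         # True: n3 is true under {A}
print(active_in(n3, {"A"}))       # False: shadowed, so beta ignores it
print(active_in(n3, {"B", "C"}))  # True: no shadow applies
```

The unique binding function then simply returns the interval of the one binding node that is active in the query environment.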
However, for compactness of notation in this paper, environments that appear in both L(N) and S(N) will only be shown to the right of the "\" character. In the example below, the reader should understand that L(n3) is actually {B, C} {A}:

j2: n3 ⇐ n4
n3: (a = (16,20))   {B, C} \ {A}   Label update

Since any number of different interval values for a variable can be created in any order, it is in principle possible for O(n²) shadowing justifications to be installed for a variable with n bindings. However, since shadowing is transitive, some of these shadowing justifications can be deleted. For example, suppose three nodes n101, n102, and n103 are created. The sequence of new justifications and environment propagations illustrates that after j102 and j103 are created, j101 can be deleted:

n101: x = [0,10]   {X1}
n102: x = [4,6]   {X2}
j101: n101 ⇐ n102
n101: x = [0,10]   {X1} \ {X2}   Label update
n103: x = [2,8]   {X3}   New node
j102: n103 ⇐ n102
n103: x = [2,8]   {X3} \ {X2}   Label update
j103: n101 ⇐ n103
n101: x = [0,10]   {X1} \ {X2} {X3}   Label update

ACP attempts to minimize the number of shadowing justifications by deleting each one that is no longer needed. Although deleting justifications is not a normal operation for an ATMS since it can lead to incorrect labelings, this special case guarantees that L(N) and S(N) remain the same as if the deletion had not taken place. Since the justifications were redundant, an efficiency advantage accrues from not recomputing labels as more environments are added. Having defined the distinction between nodes being true versus being active, we now turn to methods for controlling propagation inferences.

Propagation

ACP propagates interval values for variables using "consumers" [de Kleer, 1986b]. A consumer is essentially a closure stored with a set of nodes; it runs exactly once with those nodes as its arguments the first time they all become true. Normally, a consumer creates a new node
Normally, a consumer creates a new node 508 TRUTH MAINTENANCE SWI’EMS and justification whose antecedents are the nodes whose activation triggered it. ACP is built using a focused ATMS that maintains a single consistent focus environ- ment and only activates consumers to run on nodes that are true in that focus environment. ACP takes this fo- cusing notion step further, running consumers only on nodes that are active. The propagation mechanism of ACP distinguishes be- tween constraints and bindings. A binding is a con- straint of the form V= I. For example, no : (a = b + c) is a constraint and n1 : (b = (6,9)) and n200 : (a = 2) are bindings. Propagation of a constraint node works as shown in the procedure below. For simplicity, the example below shows variables bound only to integers, rather than to intervals as would be done in ACP. 1. When a. constraint node becomes active, install con- sumers to trigger propagation on each of the variables that appear in the constraint. For example, when n0 : (a = b + c) becomes active, consumers will be installed for variables a, b, and c. 2 When a binding node for a variable becomes active, run each of its consumers; each consumer will sub- stitute the current binding into the constraint and evaluate it. For example, when n1 : b = 7 becomes active, the constraint no : (a = b+c) will be evaluated given nl, to produce a new constraint a = 7 + c. 3. The result of the evaluation in step 2 will fall into one of four cases: (4 w (4 (4 The constant I. For example, if (a * b = 7) and U = 0, evaluation returns 1. Create a justifica- tion for the distinguished node I from the current antecedent nodes, which will result in an ATMS conflict. The constant T. For example, if (u*b = 0) and a = 0 then the evaluation will return T. Do nothing. A binding. For example, if a = 2 and a = b + 2 then evaluation returns the binding b = 0. Create a new node containing the binding and justify it with the current antecedents. A constraint. 
For example, if a = 2 and a = b + c then evaluation returns 2 = b + c. Go back to step 1 above for the new constraint.

Protection

In the expression (a = !(b + c)) with a, b, and c being variables, b and c are said to be protected. The effect of protection is that evaluating any expression, all of whose variables are protected, yields ⊤. For example, evaluating ([1,2] = [2,4] + !c) yields ⊤. In step 3(c) above, if a = 2 and a = !(b + c) the evaluation returns ⊤, because all the variables in 2 = !(b + c) are protected. The benefit of the protect operator is that the ACP user can advise the system not to ever waste effort trying to solve for certain variables. For example, CROSBY constructs linear regression equations of the form y = α1x1 + … + αnxn + β, with the αi and β denoting constants. In this context it makes no sense to try to use the dependent variable y to solve for any of the n independent xi variables. Protecting the xi variables is a simple, local, modular way to prevent ACP from doing so.

Solution Trees

The propagation procedure above is straightforward but would in general result in unnecessary work. For one thing, given a = b + c, b = 2 and c = 2, it would derive a = 4 in two different ways. To prevent this the variables should only be bound in some strict global order (alphabetic, for example). Furthermore, subexpressions that contain operators with idempotent elements do not always require all variables to be bound before evaluating to a constant; for example, the constraint a = b * c, evaluated with c = 0, should immediately yield a = 0, instead of waiting for a value for b. Finally, protected variables guarantee that certain sequences of bindings and evaluations will never yield any new bindings.
Although relatively minor from a purely constraint processing point of view, these are all genuine concerns in ACP because the computational overhead of creating new nodes, justifications, and consumers far outweighs the work involved in actually evaluating the constraints and performing the associated arithmetic operations. Whenever a new constraint node is created, ACP performs a static analysis to find all the legal sequences in which its variables could be bound. The result of this analysis is represented as a directed tree whose edges each correspond to a variable in the constraint. This is called the solution tree. Each path starting from the root represents a legal sequence. The recursive algorithm for generating this tree adds a new arc for each variable appearing in the current expression, from the current root to a new tree formed from the expression derived by deleting that variable. For example, the root of the tree for (a = b + c) has three branches: one for a leading to a subtree that is the tree for (b + c); one for b leading to a subtree that is the tree for (a = c); one for c leading to a subtree for (a = b). In this example the c branch can be pruned because (a = b) is not a binding and c (alphabetically) precedes neither a nor b. Had the expression been (a = b * c), the c branch would remain because c could be bound to 0 to produce the binding a = 0. Had the expression been (!a = b + c), the b branch could have been pruned because the tree for the subexpression (!a = c) consists only of a single branch c, which does not precede b. Step 1 of the propagation procedure presented earlier need only install consumers on variables corresponding to branches emanating from the corresponding position in the tree.
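A simplified sketch of the tree generator is given below. It captures only the alphabetical-ordering rule and the "residual is a binding" exception by variable count; ACP's real analysis also accounts for operators that can short-circuit to a binding (such as c = 0 in a = b * c) and for protected variables, which this sketch ignores.

```python
def solution_tree(vars_):
    """Each path from the root is a legal sequence in which bindings may
    trigger evaluation.  A branch for v is kept if the residual expression
    is a binding (at most one variable left) or v alphabetically precedes
    some remaining variable."""
    tree = {}
    for v in sorted(vars_):
        rest = vars_ - {v}
        if len(rest) <= 1 or any(v < w for w in rest):
            tree[v] = solution_tree(rest)
    return tree

tree = solution_tree({"a", "b", "c"})
print(sorted(tree))  # ['a', 'b'] -- the c branch is pruned, as in the text
```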
The propagator computes the solution tree once and caches it; this is worthwhile because it is not unusual in CROSBY for variables to acquire many different bindings, and it would be wasteful for ACP to repeatedly rediscover worthless sequences of variable bindings. In an example shown earlier, recall that we had the nodes:

    n0 : (a = b + c)      {}
    n1 : (b = (6,9))      {B}
    n2 : (c = (10,11))    {C}
    n3 : (a = (16,20))    {B,C}

The solution tree ensures that n0 and n1 would have been combined to get (a = (6,9) + c), which would then have been combined with n2 to get n3, without deriving the result in the symmetric (and redundant) fashion.

Reflection

As mentioned earlier, nodes n0 and n3 might in principle be combined and evaluated to yield ((16,20) = b + c), "reflecting" back through the n0 constraint to derive new values of b and c. In general, there is little point in attempting to derive values for a variable V using constraints or bindings that themselves depend on some binding of V.

The straightforward and complete method for checking this is to search each of the directed acyclic graphs (DAGs) of justifications supporting any binding about to be combined with a given constraint (step 2 of the propagation procedure). If that constraint appears in every DAG, inhibit propagation. Worst-case complexity analysis suggests that this method might be worth its cost. Traversing the DAG takes time linear in the number of all nodes, but maintaining shadowing justifications takes time quadratic in the number of bindings for any single variable. Since foregoing the reflection test may result in spurious bindings, the DAG strategy may pay off asymptotically. Intuitively speaking, when reflections go undetected, many extra nodes and justifications get created, and depth-first searches of justification trees are fast relative to the costs associated with creating unnecessary bindings.
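The unbounded-depth DAG test can be sketched as follows. This is an illustrative reconstruction assuming an acyclic justification structure: `just` maps each derived node to its list of justifications (each a list of antecedents), and premises or assumptions have no entry.

```python
def appears_in_every_dag(node, target, just):
    """True if `target` appears in every justification DAG supporting
    `node`, i.e. propagating `target` into `node` should be inhibited."""
    if node == target:
        return True
    if not just.get(node):
        return False  # reached a premise/assumption without meeting target
    # target is unavoidable iff, for EVERY justification of node, SOME
    # antecedent unavoidably depends on target.
    return all(any(appears_in_every_dag(a, target, just) for a in ants)
               for ants in just[node])

just = {"n3": [["n0", "n1", "n2"]]}
print(appears_in_every_dag("n3", "n0", just))   # True: inhibit reflection
just["n3"].append(["n201"])                     # alternative support arrives
print(appears_in_every_dag("n3", "n0", just))   # False: propagation allowed
```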
A further improvement to this strategy might be to search the DAG only to a fixed finite depth, since binding nodes can be supported via arbitrarily long chains of shadowing justifications.

A different strategy is to cache with each node N its Essential Support Set E(N), and test that before searching the DAG. E(N) is that set of nodes that must appear in a justification DAG for any set of supporting assumptions Γ. For example, n0, n1 and n2 all have empty essential support sets; node n3 has the essential support set {n0, n1, n2}. ACP tests essential support sets to see whether they contain either the constraint or any binding for the variable about to be propagated; if so the propagation is inhibited. In the example above, node n3 will not combine with n0 : (a = b + c) (that is, the n3 consumer that performs this computation will not run) as long as n0 ∈ E(n3). Essential support sets can be easily computed locally and incrementally each time a justification is installed, and they have the useful property that once created, they can subsequently only get smaller as propagation proceeds.

HAMSCHER 509

In general, when justification N ← ∧i Ni is created, E(N) is updated by

    E(N)′ = E(N) ∩ ⋃i ({Ni} ∪ E(Ni))        (5)

where initially E(N) is represented implicitly by U, the set of all nodes. For example, if some new justification n3 ← n201 were added, nodes n0, n1, and n2 could be deleted from E(n3). In that case n3 would then appropriately continue to propagate with constraint n0.

Essential support sets effectively cache the result of searching the support DAG ignoring ATMS labels. As compared to the complete and correct strategy, which is to search the DAG to an unbounded depth, the essential support set strategy (hereafter, ESS) can err only by not detecting reflections.

With this additional propagation machinery in place we can now follow ACP as it continues to propagate in focus environment {A,B,C} from n4 as shown below.

    n4 : (a = (17,19))    {A}              New node
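Equation (5) can be sketched as an incremental update. This is an illustrative version, with `None` standing for the implicit initial value U (the set of all nodes) and `ess` assumed to hold each antecedent's already-computed essential support set.

```python
def update_ess(e_n, antecedents, ess):
    """Shrink E(N) when a new justification N <- antecedents is installed:
    E(N)' = E(N) intersected with the union of {Ni} | E(Ni)."""
    via = set()
    for a in antecedents:
        via |= {a} | ess[a]
    return via if e_n is None else e_n & via

ess = {"n0": set(), "n1": set(), "n2": set(), "n201": set()}
e_n3 = update_ess(None, ["n0", "n1", "n2"], ess)
print(sorted(e_n3))                             # ['n0', 'n1', 'n2']
print(sorted(update_ess(e_n3, ["n201"], ess)))  # [] -- n0, n1, n2 drop out
```

The second call mirrors the n3 ← n201 example: once the alternative justification arrives, E(n3) becomes empty and n3 may again combine with the constraint n0.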
    n1 ← n0 ∧ n2 ∧ n4
    n1 : (b = (6,9))      {B},{A,C}        Label update
    n5 ← n0 ∧ n1 ∧ n4
    n5 : (c = (8,13))     {A,B}            New node
    n5 ⊣ n2
    n5 : (c = (8,13))     {A,B} \ {C}      Label update

ACP creates the new node n5 : (c = (8,13)), active only in {A,B}. Hence, querying ACP for the value of c yields the following results in each of the following Γ:

    ρ(c, {}) = ρ(c, {A}) = ρ(c, {B}) = (−∞, +∞)
    ρ(c, {A,B}) = (8,13)                   [n5]
    C ∈ Γ  ⇒  ρ(c, Γ) = (10,11)            [n2]

Overlapping Intervals

ACP needs to deal with cases in which a variable is assigned intervals which have nonempty intersections but neither is a subset of the other; these are called overlapping intervals. Suppose that nodes n201 and n202 are created with overlapping interval values (1,10) and (5,20). ACP creates a third node n203 to represent their intersection (5,10), and the new node in turn shadows the two nodes that support it.

    n201 : (x = (1,10))   {X1}
    n202 : (x = (5,20))   {X2}
    j200 : n203 ← n201 ∧ n202              New node
    n203 : (x = (5,10))   {X1,X2}          Label update
    j201 : n201 ⊣ n203
    n201 : (x = (1,10))   {X1} \ {X1,X2}   Label update
    j202 : n202 ⊣ n203
    n202 : (x = (5,20))   {X2} \ {X1,X2}   Label update

Querying ACP for the value of x yields a different result in each of the following environments:

    ρ(x, {}) = (−∞, +∞)
    ρ(x, {X1}) = (1,10)
    ρ(x, {X2}) = (5,20)
    ρ(x, {X1,X2}) = (5,10)

Although in the worst case k variable bindings could result in O(k²) overlaps, empirically the number actually created is much less than k. Intuitively, the propagation strategy ensures that overlapping intervals are only derived from the most restrictive intervals already present in the current focus environment. Whenever ACP creates a new overlapping interval, the two intervals it was created from become shadowed and trigger no more inferences in the current focus environment. Overlapping intervals result in a small complication to reflection checking.
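The overlap case can be sketched as follows; a hypothetical helper, not ACP's code, that returns the intersection node's value together with the shadowing relations it induces on its two parents.

```python
def overlap_node(a, b):
    """If intervals a and b overlap with neither containing the other,
    return (intersection, [(parent, shadower), ...]); otherwise None,
    since subset cases are handled by ordinary shadowing."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    a_contains_b = a[0] <= b[0] and b[1] <= a[1]
    b_contains_a = b[0] <= a[0] and a[1] <= b[1]
    if lo >= hi or a_contains_b or b_contains_a:
        return None
    inter = (lo, hi)
    return inter, [(a, inter), (b, inter)]  # both parents become shadowed

print(overlap_node((1, 10), (5, 20)))
# ((5, 10), [((1, 10), (5, 10)), ((5, 20), (5, 10))])
print(overlap_node((4, 6), (0, 10)))  # None
```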
Although in general no information is gained by deriving a value for variable V using constraints or bindings that themselves depend on some binding of V, overlaps between intervals can in fact add information. Hence, DAG and ESS computations ignore the presence of overlap justifications.

Empirical Results

ACP is implemented in 5500 lines of ANSI Common Lisp. Roughly one half is devoted to the expression evaluator, one third is the focused ATMS, and ACP-specific code comprises the remainder. It is currently being used in a prototype program for model-based financial data analysis; the data for Figures 1 and 2 were generated using two financial models with quarterly data for a real computer peripherals manufacturer. The "small" example (data represented by o) uses 116 variables and involves the exploration of 12 contexts. The "large" example (data represented by 0) uses 158 variables and explores 28 contexts. The horizontal axis refers to the reflection check strategies in use: " " means the null strategy of not checking reflections; "S" refers to a simple strategy for which only constraints being immediately re-applied were inhibited; "D2" refers to the DAG strategy with a depth cutoff of 2; "D" refers to the DAG strategy with no depth cutoff; "E" refers to the ESS strategy; and the other columns show the use of multiple strategies as successive filters.

[Figure 1: Maximum Bindings for any Variable — vertical axis: Max; horizontal axis: Strategies]

[Figure 2: Run Time in Seconds — vertical axis: Secs; horizontal axis: Strategies]

Figure 1 shows the maximum number of bindings created for any variable; in general, the stronger the reflection check strategy, the fewer intermediate results are created. Figure 1 illustrates that D2 works badly, inhibiting no intermediate bindings at all.
Further, it shows ESS can work almost as well as the unlimited-depth DAG strategy.

Figure 2 shows the run time in seconds on a Symbolics 3645 (inclusive of paging and ephemeral garbage collection) for the same examples. Figure 2 demonstrates that although the worst case complexity of the DAG strategy appears superior, in fact it is empirically much more costly than ESS. Among the reasons for this are that (i) the number of shadowing justifications for a variable with k bindings is theoretically O(k²) but empirically never more than 1.5k, and (ii) both DAG and ESS tests must be reexecuted every time the ATMS focus changes, and the frequency of such changes is larger than the average number of bindings per variable. Although ESS is not much faster than the null strategy, the space savings suggested in Figure 1 makes it the method of choice in ACP.

Conclusion

ACP integrates constraint propagation over intervals with assumption-based truth maintenance, contributing several novel inference control techniques, including the incorporation of subsumption into the ATMS and precomputing feasible solution paths for every constraint. Experiments with ACP further indicate that spurious intermediate variable bindings can be efficiently suppressed by using essential support sets to check whether each new variable binding is being derived in a way that depends on the variable itself.

References

[Bouwman, 1983] M. J. Bouwman. Human Diagnostic Reasoning by Computer: An Illustration from Financial Analysis. Management Science, 29(6):653-672, June 1983.
[Dague et al., 1990] P. Dague, O. Jehl, and P. Taillibert. An Interval Propagation and Conflict Recognition Engine for Diagnosing Continuous Dynamic Systems. In Proc. Int. Workshop on Expert Systems in Engineering, Lecture Notes in AI, Vienna, 1990. Springer.
[Davis, 1987] E. Davis. Constraint Propagation with Interval Labels. Artificial Intelligence, 32(3):281-332, July 1987.
[de Kleer, 1986a] J.
de Kleer. An Assumption-Based TMS. Artificial Intelligence, 28(2):127-162, 1986.
[de Kleer, 1986b] J. de Kleer. Problem Solving with the ATMS. Artificial Intelligence, 28(2):197-224, 1986.
[Dhar and Croker, 1988] V. Dhar and A. Croker. Knowledge Based Decision Support in a Business: Issues and a Solution. IEEE Expert, pages 53-62, Spring 1988.
[Dressler and Farquhar, 1989] O. Dressler and A. Farquhar. Problem Solver Control over the ATMS. In Proc. German Workshop on AI, 1989.
[Forbus and de Kleer, 1988] K. D. Forbus and J. de Kleer. Focusing the ATMS. In Proc. 7th National Conf. on Artificial Intelligence, pages 193-198, Minneapolis, MN, 1988.
[Hamscher, 1990] W. C. Hamscher. Explaining Unexpected Financial Results. In Proc. AAAI Spring Symposium on Automated Abduction, pages 96-100, March 1990. Available from the author.
[Sacks, 1987] E. Sacks. Hierarchical Reasoning about Inequalities. In Proc. 6th National Conf. on Artificial Intelligence, pages 649-655, Seattle, WA, August 1987.
[Simmons, 1986] R. G. Simmons. Commonsense Arithmetic Reasoning. In Proc. 5th National Conf. on Artificial Intelligence, Philadelphia, PA, August 1986.
[Williams, 1986] B. C. Williams. Doing Time: Putting Qualitative Reasoning on Firmer Ground. In Proc. 5th National Conf. on Artificial Intelligence, pages 105-112, Philadelphia, PA, August 1986.

Acknowledgements

Heinrich Taube contributed to the implementation of the expression evaluator. An anonymous reviewer suggested the inclusion of the abstract specification.
Hwee Tou Ng    Raymond J. Mooney
Department of Computer Sciences
University of Texas at Austin
Austin, Texas 78712
htng@cs.utexas.edu, mooney@cs.utexas.edu

Abstract

This paper presents an algorithm for first-order Horn-clause abduction that uses an ATMS to avoid redundant computation. This algorithm is either more efficient or more general than any other previous abduction algorithm. Since computing all minimal abductive explanations is intractable, we also present a heuristic version of the algorithm that uses beam search to compute a subset of the simplest explanations. We present empirical results on a broad range of abduction problems from text understanding, plan recognition, and device diagnosis which demonstrate that our algorithm is at least an order of magnitude faster than an alternative abduction algorithm that does not use an ATMS.

1 Introduction

Abduction is an important reasoning process underlying many tasks such as diagnosis, plan recognition, text understanding, and theory revision. The standard logical definition of abduction is: Given a set of axioms T (the domain theory), and a conjunction of atoms O (the observations), find a minimal set of atoms A (the assumptions) such that A ∪ T ⊨ O (where A ∪ T is consistent). A set of assumptions together with its corresponding proof of the observations is frequently referred to as an explanation of the observations. Although recent research has seen the development of several special purpose abduction systems (e.g., [de Kleer and Williams, 1987]) and some theoretical analysis of the problem ([Levesque, 1989, Selman and Levesque, 1990]), there has not been much emphasis on developing practical algorithms for the general problem. Existing algorithms tend to be too restrictive, too inefficient, or both.

The first problem is generality. [Levesque, 1989] has shown that the ATMS [de Kleer, 1986a] is a general abduction algorithm for propositional Horn-clause theories. However, many interesting abduction tasks require the expressibility of first-order predicate logic. In first-order logic, the important operation of unifying assumptions (factoring) becomes relevant. Frequently, simple and coherent explanations can only be constructed by unifying initially distinct assumptions so that the resulting combined assumption explains several observations [Pople, 1973, Stickel, 1988, Ng and Mooney, 1990]. This important problem does not arise in the propositional case.

The second problem is efficiency. The general purpose abduction algorithm proposed in [Stickel, 1988] can perform a great deal of redundant work in that partial explanations are not cached and shared among multiple explanations. The ATMS algorithm, though it caches and reuses partial explanations in order to avoid redundant work, has not been extended to perform general first-order abduction. Also, the ATMS algorithm of [de Kleer, 1986a] exhaustively computes all possible explanations, which is computationally very expensive for large problems. Even in the propositional case, computing all minimal explanations is a provably exponential problem [Selman and Levesque, 1990]. This indicates that resorting to heuristic search is the most reasonable approach to building a practical abduction system.

This paper presents an implemented algorithm for first-order Horn-clause abduction that uses an ATMS to cache intermediate results and thereby avoid redundant work. The most important additions to the ATMS involve handling the unification of assumptions. The system also incorporates a form of heuristic beam search which can be used to focus the ATMS on promising explanations and avoid the intractable problem of computing all possible explanations.

*This research is supported by the NASA Ames Research Center under grant NCC-2-429. The first author is also supported by an IBM Graduate Fellowship. Equipment used was donated by Texas Instruments.
We have evaluated our algorithm on a range of abduction problems from such diverse tasks as text understanding, plan recognition, and device diagnosis. The empirical results illustrate that the resulting system is significantly faster than a non-ATMS alternative, specifically, the abduction algorithm proposed in [Stickel, 1988]. In particular, we show that even when using heuristic search to avoid computing all possible explanations, the caching performed by the ATMS increases efficiency by at least an order of magnitude.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

2 Problem Definition

The abduction problem that we are addressing can be defined as follows.

Given:

- A set of first-order Horn-clause axioms T (the domain theory), where an axiom is either of the form

      ∀v1,…,vk  C ← P1 ∧ … ∧ Pr    (a rule), or
      ∀v1,…,vk  F                  (a fact)

  with k ≥ 0, r > 0, and C, Pi, F atoms containing the variables v1,…,vk.

- An existentially quantified conjunction O of atoms (the input atoms) of the form

      ∃v1,…,vk  O1 ∧ … ∧ Om

  with k ≥ 0, m > 0, and Oi atoms containing the variables v1,…,vk.

Find: All explanations with minimal (w.r.t. variant-subset) sets of assumptions.

We define "explanations", "minimal", and "variant-subset" as follows. Let A (the assumptions) be an existentially quantified conjunction of atoms of the form ∃v1,…,vk A1 ∧ … ∧ An, where k ≥ 0, n > 0, and Ai are atoms containing the variables v1,…,vk, such that A ∪ T ⊨ O and A ∪ T is consistent. An assumption set A together with its corresponding proof is referred to as an explanation (or an abductive proof) of the input atoms. We will write A as the set {A1,…,An} with the understanding that all variables in the set are existentially quantified and that the set denotes a conjunction. We define an assumption set A to be a variant-subset of another assumption set B if there is a renaming substitution σ such that Aσ ⊆ B.
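The variant-subset test can be sketched directly from this definition. This is an illustrative version, not the authors' implementation: atoms are tuples, "?"-prefixed strings are variables, and the renaming is kept injective from variables to variables.

```python
def variant_subset(a_set, b_set, renaming=None):
    """True if some renaming substitution maps every atom of a_set onto an
    atom of b_set (found by backtracking over candidate matches)."""
    renaming = dict(renaming or {})
    if not a_set:
        return True
    atom, rest = a_set[0], a_set[1:]
    for cand in b_set:
        r = try_rename(atom, cand, renaming)
        if r is not None and variant_subset(rest, b_set, r):
            return True
    return False

def try_rename(atom, cand, renaming):
    """Extend `renaming` so atom maps onto cand, or return None."""
    if len(atom) != len(cand):
        return None
    r = dict(renaming)
    used = set(r.values())
    for s, t in zip(atom, cand):
        if s.startswith("?"):
            if s in r:
                if r[s] != t:
                    return None
            elif t.startswith("?") and t not in used:
                r[s] = t
                used.add(t)
            else:
                return None
        elif s != t:
            return None
    return r

A = [("go-step", "?x", "go1"), ("inst", "?x", "shopping")]
B = [("inst", "?y", "shopping"), ("go-step", "?y", "go1"),
     ("shopper", "?y", "john1")]
print(variant_subset(A, B))  # True, via the renaming ?x -> ?y
```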
The abduction task is to find the set S of all minimal explanations such that there is no explanation in S whose assumption set is a variant-subset of the assumption set of another explanation in S.

Since the definition of abduction requires consistency of the assumed atoms with the domain theory, the abduction problem is in general undecidable. We assume in this paper that consistency is checked in the following way: find the logical consequences of A ∪ T via forward-chaining of some Horn-clause axioms (some of them are of the form P1 ∧ … ∧ Pr → FALSITY) up to some preset depth limit. If FALSITY is not derived, we assume that A ∪ T is consistent.

3 The SAA Algorithm

[Stickel, 1988] has proposed an algorithm for computing the set of all first-order Horn-clause abductive proofs. His algorithm, which we will call SAA, operates by applying inference rules to generate goal clauses. The initial goal clause is the input atoms O1,…,Om. Each atom in a goal clause can be marked with one of proved, assumed, or unsolved. All atoms in the initial goal clause are marked as unsolved. A final goal clause must consist entirely of proved or assumed atoms.

Let G be a goal clause Q1,…,Qn, where the leftmost unsolved atom is Qi. The algorithm SAA repeatedly applies the following inference rules to goal clauses G with unsolved atoms:

- Resolution with a fact. If Qi and a fact F are unifiable with a most general unifier (mgu) σ, the goal clause Q1σ,…,Qnσ can be derived, where Qiσ is marked as proved.

- Resolution with a rule. Let C ← P1 ∧ … ∧ Pr be a rule where Qi and C are unifiable with a mgu σ. Then the goal clause Q1σ,…,Qi−1σ, P1σ,…,Prσ, Qi+1σ,…,Qnσ can be derived, where Qiσ is marked as proved and each Pkσ is marked as unsolved.

- Making an assumption. If Qi is assumable, then Q1,…,Qn can be derived with Qi marked as assumed.
- Factoring with an assumed atom.¹ If Qj is marked as assumed, j < i, and Qj and Qi are unifiable with a mgu σ, the goal clause Q1σ,…,Qi−1σ, Qi+1σ,…,Qnσ can be derived.

[Stickel, 1988] also proposed the use of a cost metric to rank and heuristically search the more promising explanations first. All facts and rules in the knowledge base as well as assumed atoms are assigned costs. The best explanation is one with the least cumulative cost.

Before we proceed, we note that the explanations generated by the SAA algorithm may include some that are variant-subsets of another. For instance, given the following axioms²

    (inst ?g going) ← (inst ?s shopping) ∧ (go-step ?s ?g)
    (goer ?g ?p) ← (inst ?s shopping) ∧ (go-step ?s ?g) ∧ (shopper ?s ?p)

and the input atoms (inst go1 going) and (goer go1 john1), we can derive the explanation F with assumptions AF = {(inst ?x shopping), (go-step ?x go1), (inst ?y shopping), (go-step ?y go1), (shopper ?y john1)} by backward-chaining on the two axioms. Applying the factoring operation, we can obtain another explanation E with assumptions AE = {(inst ?x shopping), (go-step ?x go1), (shopper ?x john1)}. But note that although AE ⊄ AF, AEσ ⊂ AF with the renaming substitution σ = {?x/?y}.

Since explanations that are variant-supersets of other explanations are essentially redundant, they need to be eliminated. Unfortunately, it can be readily shown that determining the variant-subset relation is an NP-complete problem by reduction from directed subgraph isomorphism.

¹Actually, the algorithm as presented in [Stickel, 1988] allows for unifying Qi with a proved atom as well. However, it appears that in practice, omitting factoring with proved atoms does not cause any loss of good explanations while saving some redundant inferences.
²Variables are denoted by preceding them with a "?".

NG & MOONEY 495
This introduces yet another source of computational complexity when finding minimal explanations in a first-order Horn-clause theory.

4 Basics of the ATMS

The Assumption-based Truth Maintenance System (ATMS) [de Kleer, 1986a] is a general facility for managing logical relationships among propositional formulas. An ATMS maintains multiple contexts at once and is particularly suited for problem solving that involves constructing and comparing multiple explanations. Each problem solving datum is associated with a node in the ATMS. A node in the ATMS can be further designated as an assumption. Nodes are related via justifications. A justification is a propositional Horn-clause of the form a1 ∧ … ∧ an → c, where each ai is an antecedent node and c is the consequent node. A restriction is that assumptions cannot be further justified. An environment is a set of assumptions. Associated with each node is a set of environments called its label. The ATMS supports several operations, including adding a node, making an assumption, and adding a justification.

[de Kleer, 1986b] also proposed the use of a problem-solver-ATMS interface called the consumer architecture. A consumer is essentially a rule-like mechanism that is invoked by a set of nodes. It checks whether some precondition governing this set of nodes is satisfied, and if so, the consumer fires and performs some problem solving work. For example, first-order Horn-clause forward-chaining can be implemented using consumers as follows. Define a class to be an ATMS construct representing a set of nodes with some common characteristics. Let Class(C) denote the class representing all nodes whose data have the same predicate symbol as the atom C. For instance, Class(P(x)) = {P(a), P(f(b)), …}. Then a single consumer can implement a forward-chaining Horn-clause axiom such as P(x) ∧ Q(x) → R(x) by actively looking for nodes with matching arguments in the classes Class(P(x)) and Class(Q(x)).
If the consumer finds two such nodes, say P(a) and Q(a), then its precondition is satisfied. It then performs the forward-chaining work by inferring the node R(a), and adding the justification P(a) ∧ Q(a) → R(a) to the ATMS.

5 The AAA Algorithm

We now present our ATMS-based, first-order, Horn-clause abduction algorithm AAA. Basically, the algorithm is much like the implementation of a first-order Horn-clause forward-chaining system in the consumer architecture discussed above, except that the inference direction is now reversed. We also have the additional operation of unifying assumptions (factoring). Our goal is to construct an algorithm similar to SAA, but one that relies on caching justifications and sharing them among different explanations.

First of all, note that in SAA, an unsolved atom in a goal clause can either be assumed or backward-chained on by some rule. However, in an ATMS, a node, once assumed, can never be further justified by some Horn-clause rule. To get around this restriction, when we want to assume an atom D with node n, we create a similar assumption node n′ and add the justification n′ → n (Table 1). For every Horn-clause axiom A1 ∧ … ∧ An → C, we create a consumer that is attached to every node in Class(C). The body of this consumer is shown in Table 1. We assume there is some preset backward-chain depth bound so as to prevent infinite backward-chaining on recursive rules. We also create a fact consumer for each fact.

    Assume an atom D
        If D is assumable then
            Let n be the node with datum D
            Let the predicate symbol of D be P
            Create another node n′ whose datum D′ is the same as D
                except that the predicate symbol of D′ is A-P
            Make node n′ an ATMS assumption node
            Add the justification n′ → n

    Backward-chain consumer (encode a backward-chaining axiom A1 ∧ …
∧ An → C)
        For every node n newly added to Class(C)
            If n is not an assumption node
               and the backward-chain depth bound is not exceeded
               and n unifies with C with a mgu σ
               and σ does not instantiate any variable in n
            then
                A1′ := A1σ, …, An′ := Anσ, C′ := Cσ
                Add the justification A1′ ∧ … ∧ An′ → C′
                Assume the atoms A1′, …, An′

    Fact consumer (encode resolution with a fact F)
        For every node n newly added to Class(F)
            If n unifies with F with a mgu σ
               and σ does not instantiate any variable in n
            then
                Make n a fact (i.e., let its label be {{}})

    Algorithm AAA
        Add the justification O1 ∧ … ∧ Om → GOAL
        Assume the atoms O1, …, Om
        Run the fact consumers, backward-chain consumers, and forward-chain consumers
        For every environment e in the label of GOAL
            If factoring of assumptions has not been performed on e then
                For all pairs of assumptions ai and aj in e
                    If ai and aj are unifiable with a mgu σ then
                        e′ := eσ
                        Run forward-chain consumers, restricted to
                            forward-chaining on the assumptions in e′
                            (to check its consistency)
        Eliminate environments that are variant-supersets of other environments

Table 1: The AAA Algorithm

For simplicity, we assume in the construction of AAA that any inconsistent assumption set can be detected by forward-chaining on a single axiom of the form A1 ∧ … ∧ An → FALSITY. Forward-chain consumers as described in the last section are used to encode such forward-chaining axioms.

The AAA algorithm first computes all explanations that are obtained by resolution with facts, resolution with backward-chaining rules, and making assumptions. The last step in the algorithm performs factoring of assumptions in explanations. The resulting environments in the label of the GOAL node are all the minimal explanations of the input atoms O1 ∧ … ∧ Om.

The AAA algorithm is incomplete in the sense that some explanations that will be computed by SAA will not be computed by AAA.
Specifically, such missed explanations are those that are obtained when, during resolution of an atom in a goal clause with a fact or the consequent of a rule, the most general unifier is such that variables in the goal clause get instantiated. However, for all the problems that we have tested AAA on, this incompleteness does not pose a problem. This is because the explanations we seek are constructed by chaining together general rules to explain specific ground facts. Aside from this incompleteness, we believe that our AAA algorithm computes all other explanations that the SAA algorithm computes. We are currently working on a formal proof of this equivalence. We also plan to explore the construction of a version of the AAA algorithm that is complete.

We have implemented and tested both algorithms. The empirical results section presents some data comparing their performance. The actual performance of both algorithms indicates very clearly that computing all minimal explanations is simply too explosive to be practically useful.

6 The Heuristic Algorithms

In this section, we present our heuristic algorithm AAA/H, which uses beam search to cut down on the search space. To facilitate comparison, we also constructed a heuristic version of SAA called SAA/H. The differences between SAA/H and SAA are:

1. SAA/H is an incremental algorithm that constructs explanations one input atom at a time.

2. Instead of searching the entire search space, we employ a form of beam search. We restrict the number of goal clauses to βintra during the incremental processing of an input atom. The search terminates when there are βintra number of final goal clauses. The number of final goal clauses carried over to the processing of the next input atom is βinter, where βinter ≤ βintra.

3. Each goal clause is assigned a simplicity metric defined as E/A, where E is the number of input atoms explained (i.e., atoms marked as proved) and A is the number of assumptions in a goal clause.
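The beam pruning used by both heuristic variants can be sketched as follows; the explanation records here are hypothetical stand-ins for goal clauses.

```python
def prune_beam(explanations, beta):
    """Rank partial explanations by the simplicity metric E/A and keep the
    best beta of them.  Each record is (explained, assumed, payload)."""
    def simplicity(e):
        explained, assumed, _ = e
        return explained / assumed if assumed else float("inf")
    return sorted(explanations, key=simplicity, reverse=True)[:beta]

beam = prune_beam([(2, 4, "g1"), (3, 2, "g2"), (1, 5, "g3")], 2)
print([g for _, _, g in beam])  # ['g2', 'g1']
```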
For each input atom O_i, i = 1, ..., m
  Add the justification GOAL_{i-1} ∧ O_i → GOAL_i (or O_1 → GOAL_1 if i = 1)
  Loop
    Run forward-chaining consumers
    Let L(GOAL_i) be the label of GOAL_i
    Order the environments in L(GOAL_i)
    Retain the best (simplest) β_intra environments in L(GOAL_i)
    Unify the assumptions of the environments in L(GOAL_i)
    If the number of environments in L(GOAL_i) ≥ β_intra then exit the loop
    Execute backward-chaining and resolution with facts
    If no backward-chaining occurs then exit the loop
  End of loop
  Reduce the number of environments in L(GOAL_i) to the best β_inter environments

Table 2: The AAA/H Algorithm

Goal clauses with the highest simplicity metric values are processed first.

Analogous changes are made to the AAA algorithm such that AAA/H is an incremental algorithm that uses beam search to restrict the search space explored. To achieve incrementality, the AAA/H algorithm adds the justification O_1 → GOAL_1 when processing the first input atom O_1. Subsequently, adding the input atom O_i results in adding the justification GOAL_{i-1} ∧ O_i → GOAL_i. By doing so, explanations of the input atoms O_1 ∧ ... ∧ O_i are exactly the environments in the label of the node GOAL_i.

To implement beam search, the algorithm AAA/H uses the idea of focusing [Forbus and de Kleer, 1988; Dressler and Farquhar, 1990] to restrict the amount of work done by the ATMS. There are two main uses of focusing: to discard unwanted environments and to only search for the interesting (simplest) environments. When the algorithm decides to keep only β_inter environments, all other environments with higher simplicity metric values are removed from the labels of all nodes. To ensure that only the simplest environments are searched, each time a node is assumed, it is added to a list of focus assumptions, and label propagation is such that only the focused environments get propagated.
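As an illustration of the beam pruning described above, the following sketch (our own Python, not the authors' Lisp implementation; GoalClause and the beta names are hypothetical stand-ins) keeps only the simplest goal clauses under the E/A metric:

```python
# Illustrative sketch of beam pruning with the simplicity metric E/A.
# GoalClause and the beta values are our stand-ins, not the paper's code.
from dataclasses import dataclass

@dataclass(frozen=True)
class GoalClause:
    explained: int    # E: input atoms marked as proved
    assumptions: int  # A: assumptions made so far

    def simplicity(self):
        # Higher E/A means more atoms explained per assumption made.
        return self.explained / self.assumptions if self.assumptions else float("inf")

def prune(clauses, beam_width):
    """Retain only the beam_width simplest goal clauses."""
    return sorted(clauses, key=lambda c: c.simplicity(), reverse=True)[:beam_width]

# The frontier is capped at beta_intra while processing one input atom;
# beta_inter survivors are carried to the next atom (beta_inter <= beta_intra).
beta_intra, beta_inter = 20, 4
frontier = prune([GoalClause(3, 1), GoalClause(2, 3),
                  GoalClause(1, 1), GoalClause(4, 5)], beta_intra)
carried = prune(frontier, beta_inter)
```

Sorting in descending order of E/A makes the clauses with the highest simplicity metric values come first, mirroring the processing order described for SAA/H.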
Since we intend AAA/H to be an efficient and practical algorithm, we no longer assume in the construction of AAA/H that any inconsistency can be detected by forward-chaining on one axiom. That is, an assumption set may imply FALSITY via forward-chaining on several axioms, with the last axiom having FALSITY as its consequent. Once we do away with this assumption, we are faced with the problem that any axiom of the form A_1 ∧ ... ∧ A_n → C in the knowledge base can potentially be used in two ways: backward-chaining during abduction and forward-chaining during consistency checking. To minimize duplicated work, we now implement backward-chaining by having the consequent node "suggest" the assertion of the antecedent nodes, and letting the forward-chaining consumers add the actual justification linking the antecedent nodes to the consequent node. The algorithm AAA/H is given in Table 2.

NG & MOONEY 497

Table 3: Empirical results comparing SAA and AAA

Table 4: Empirical results comparing SAA/H and AAA/H

(For each problem the tables report runtimes, numbers of justifications, ratios, and, for the full-search algorithms, the total number of minimal explanations #E; the problems are hare, snake, fly, bird, shop1, shop2, work, court, move, copy, replace, backup, paper, graph, ckt1, ckt2, ckt3, adder, and their mean.)

7 Empirical Results

In this section, we present our experimental results on running the various algorithms SAA, AAA, SAA/H, and AAA/H on a range of abduction problems from such diverse tasks as text understanding, plan recognition, and device diagnosis.
We measure the performance and the amount of computation expended by the algorithms using two metrics: the runtime and the number of justifications (i.e., number of rule invocations) made. The results are presented in Tables 3 and 4.³ The full-search data in Table 3 also gives the total number of minimal explanations (#E) for each problem. All runtimes are actual execution times on a Texas Instruments Explorer/2 Lisp machine. Note that the beam widths for SAA/H and AAA/H are set to the minimum values such that the best explanation is formed for every problem in the set. For the current set of problems, β_inter = 4, β_intra = 20 for SAA/H, and β_inter = 4, β_intra = 16 for AAA/H.⁴

The text understanding problems include expository text examples about explaining the coloration of various animals (hare, snake, fly, bird) and narrative text examples about understanding the intention of someone entering a supermarket (shop1, shop2, work, court). The plan recognition problems (move, copy, replace, backup, paper, graph) involve recognizing UNIX users' plans from a sequence of primitive file commands. The device diagnosis problems include several simple logic circuit examples (ckt1, ckt2, ckt3) and a full adder example (adder). In the diagnosis problems, we restricted the assumable atoms to be those with one of the two predicates norm (the component is normal) or ab (the component is abnormal) (i.e., we use predicate-specific abduction [Stickel, 1988]). In the rest of the problems, all atoms are assumable. To give a sense of the size of our problems and the knowledge base used, there is a total of 46 KB facts, 163 KB rules, and 110 taxonomy-sort symbols.⁵

³The "+" signs in Tables 3 and 4 indicate that the corresponding examples take much longer to run and we only recorded the time taken and the number of justifications
The average number and maximum number of antecedents per rule are 2.8 and 6 respectively, and the average number of input atoms per problem is 5.6. (See [Ng and Mooney, 1991] for a complete listing of the knowledge base and the examples used.)

The empirical results indicate that AAA outperforms SAA and AAA/H outperforms SAA/H on the set of problems we ran. AAA runs about 3 times as fast as SAA on the few problems we were able to test. Due to the intractability of computing all minimal explanations, we were unable to run the full-search algorithms on the other problems, which clearly require heuristic search. Even for the simple problems for which we have comparative data, the heuristic versions are about 10 times faster than the full-search versions while still finding the simplest explanation (SAA/H is 5 times faster than SAA; AAA/H is 16 times faster than AAA). Comparing AAA/H with SAA/H, we see that there is on average at least an order of magnitude speedup on the problems that we have tested.

We believe our results are particularly significant because, to the best of our knowledge, this is the first empirical validation that an ATMS-based first-order

made at the time the program was aborted.

⁴Due to the different ordering in which explanations are generated in both algorithms, the minimum beam widths for which the best explanations are found in both algorithms need not be the same.

⁵Every taxonomy-sort symbol p will add an axiom (in addition to the 163 KB rules) of the form (inst ?x p) → (inst ?x supersort-of-p)

498 TRUTH MAINTENANCE SYSTEMS
We also want to stress that although the AAA algorithm is in- complete compared to the SAA algorithm, this does not affect our comparison since on all the problems that we tested, the situation in which a most general unifier actually instantiates the datum of a node does not arise. 8 Related Work As mentioned in the introduction, previous algo- rithms for automated abduction have been either too restrictive or too inefficient. Previous research on the ATMS [de Kleer, 1986a, Levesque, 19891 and its use in device diagnosis [de Kleer and Williams, 19871 has been propositional in nature. In particu- lar, it has not dealt with the problem of uni- fying assumptions which occurs in general first- order abduction. There has been some previ- ous work on focusing the ATMS on a subset of interesting environments [Forbus and de Kleer, 1988, Dressler and Farquhar, 19901; however, this work was not in the context of general first-order ab- duction and did not specifically involve using fo- cusing to perform beam search. Also, although [Dressler and Farquhar, 19901 has empirical results comparing focused ATMS performance with that of non-focused ATMS (thus demonstrating that limited search is better than complete search), there has been no previous work comparing limited search ATMS im- plementation with a limited search non-ATMS alter- native. Finally, other systems have not been systemat- ically tested on such a wide range of abduction prob- lems from text understanding, plan recognition, and device diagnosis. 9 Conclusion In this paper, we have presented a new algorithm for first-order Horn-clause abduction called AAA. The AAA algorithm uses an ATMS to avoid redundant computation by caching and reusing partial explana- tions. By comparison, previous abduction algorithms are either less general or less efficient. 
Since computing all minimal explanations is intractable, we also developed a heuristic beam-search version of AAA, which computes a subset of the simplest explanations. In order to evaluate AAA and AAA/H, we performed a comprehensive set of experiments using a broad range of abduction problems from text understanding, plan recognition, and device diagnosis. The results conclusively demonstrate that our algorithm is at least an order of magnitude faster than an abduction algorithm which does not employ an ATMS.

Acknowledgements

Thanks to Adam Farquhar for allowing us to use his ATMS code and for discussing technical details of the ATMS in the early stages of this research. Thanks to Siddarth Subramanian for writing some of the axioms for the diagnosis examples.

References

[de Kleer and Williams, 1987] Johan de Kleer and Brian C. Williams. Diagnosing multiple faults. Artificial Intelligence, 32:97-130, 1987.

[de Kleer, 1986a] Johan de Kleer. An assumption-based TMS. Artificial Intelligence, 28:127-162, 1986.

[de Kleer, 1986b] Johan de Kleer. Problem solving with the ATMS. Artificial Intelligence, 28:197-224, 1986.

[Dressler and Farquhar, 1990] Oskar Dressler and Adam Farquhar. Putting the problem solver back in the driver's seat: Contextual control of the ATMS. In Proceedings of the Second Model-Based Reasoning Workshop, Boston, MA, 1990.

[Forbus and de Kleer, 1988] Kenneth D. Forbus and Johan de Kleer. Focusing the ATMS. In Proceedings of the National Conference on Artificial Intelligence, pages 193-198, St. Paul, Minnesota, 1988.

[Levesque, 1989] Hector J. Levesque. A knowledge-level account of abduction. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1061-1067, Detroit, MI, 1989.

[Ng and Mooney, 1990] Hwee Tou Ng and Raymond J. Mooney. On the role of coherence in abductive explanation. In Proceedings of the National Conference on Artificial Intelligence, pages 337-342, Boston, MA, 1990.
[Ng and Mooney, 1991] Hwee Tou Ng and Raymond J. Mooney. An efficient first-order abduction system based on the ATMS. Technical Report AI91-151, Artificial Intelligence Laboratory, Department of Computer Sciences, The University of Texas at Austin, January 1991.

[Pople, 1973] Harry E. Pople, Jr. On the mechanization of abductive logic. In Proceedings of the Third International Joint Conference on Artificial Intelligence, pages 147-152, 1973.

[Selman and Levesque, 1990] Bart Selman and Hector J. Levesque. Abductive and default reasoning: A computational core. In Proceedings of the National Conference on Artificial Intelligence, pages 343-348, Boston, MA, 1990.

[Stickel, 1988] Mark E. Stickel. A prolog-like inference system for computing minimum-cost abductive explanations in natural-language interpretation. Technical Note 451, SRI International, September 1988.
Inequality reasoning in a TMS-based analog diagnosis system

David Jerald Goldstone*
MIT A.I. Laboratory and Xerox PARC
stone@ai.mit.edu

Abstract

A system, called Skordos, has been implemented for model-based diagnosis of analog circuits. One of the difficulties of model-based diagnosis for analog circuits is managing the tremendous number of predictions which may be generated by a constraint propagation system. Fortunately, not all of those predictions are valuable for the diagnostic process. A process called hibernation, which is used in Skordos to prevent generation of useless predictions, is introduced and described here. Another technique is introduced and described which further assists in controlling the inequality reasoning by exploiting hibernation. This technique involves changing the structure in which values are combined. It uses hibernation as an early filter to reduce the number of interactions resulting from Kirchhoff's current law from exponential to quadratic in the number of interacting variables.

Introduction

Skordos (Goldstone 1991) is a model-based diagnosis system for analog circuits. It is designed in the fashion of GDE (de Kleer and Williams 1987): it uses a constraint propagation system with a truth maintenance system (de Kleer 1986). In the analog domain, however, the values of state variables are not known exactly, so the constraint propagation system makes predictions of the form v > 3.2, rather than v = 4. It turns out that the constraint propagation system generates an overwhelming number of predictions for each state variable. In cases where predictions consist of assignments of values to variables, two distinct predictions for the same variable result in a contradiction. However, when predictions involve inequalities,
two predictions for the same variable can support each other. For example, v < 3.2 supports v < 4.3.

*This paper is based on a thesis submitted on January 22, 1991 to the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology in partial fulfillment of the requirements for the degrees of Master of Science and Bachelor of Science.

Various levels of precision for the same state variable result from propagating constraints through different paths in the circuit. Although multiple values for a variable are allowed, it is not obvious that each must be maintained. Why not keep track of only the strongest inequality? For example, given v < 5 and v < 6, why not ignore the latter, less precise prediction? The diagnostician must guarantee to keep track of each predicted value for a variable which promises to contribute even the slightest bit of additional information. In addition, every prediction is accompanied by sets of assumptions under which it holds true: v < 5 may depend on the correctness of component A; v < 6 may depend on the correctness of component B. The less precise prediction is important, for if A were discovered to be faulty, then the less precise prediction remains, whereas the more precise prediction is no longer valid. More generally:

- Given two predictions, Ψ1 and Ψ2, where the inequality associated with Ψ1 is more precise than the inequality associated with Ψ2, Ψ2 must not be ignored, in case the set of assumptions associated with Ψ1 turns out to be unsound.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

... then they are reactivated, as needed.
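The point about retaining the less precise bound can be sketched as follows (a toy representation we introduce for illustration, not Skordos itself):

```python
# Each upper bound on v carries the set of components it assumes correct.
predictions = [
    (5.0, {"A"}),   # v < 5, valid only while component A is believed good
    (6.0, {"B"}),   # v < 6, valid only while component B is believed good
]

def surviving_bounds(preds, faulty):
    """Bounds whose assumption sets mention no faulty component."""
    return [(bound, deps) for bound, deps in preds if not (deps & faulty)]

# With no known faults, the tighter bound v < 5 is available...
assert (5.0, {"A"}) in surviving_bounds(predictions, set())
# ...but once A is discovered faulty, only the weaker v < 6 remains,
# which is why the less precise prediction must not be discarded.
assert surviving_bounds(predictions, {"A"}) == [(6.0, {"B"})]
```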
Hibernation will be described in more detail, and that description will show that hibernation does not lose any diagnostic information. Hibernation simply takes advantage of diagnostic conditions which allow the troubleshooter to diminish the predictions per value which must be maintained. Another technique is used to deal with the explosive number of computations which result from interactions among the remaining active predictions for the variables. This is shown later in the paper. It basically involves restructuring the problem to exploit filters in the hibernation process. First, a few definitions:

- Σ is used to denote a set of assumptions, where each assumption describes a component in a mode of operation. A is used to denote the assumption that component A is in a normal mode of operation.

- Φ is used to denote an inequality between a state variable and a number, of the form v > 5.

- A prediction, Ψ, consists of an inequality and the sets of assumptions on which it is based, of the form Φ {Σ1, ..., Σn}. In the example above, one prediction is v < 5 {(A)}. If it were shown, by another path, that v < 5 assuming some C were working, then the prediction would have two sets of assumptions: {(A), (C)}.

- A prediction Ψ1 subsumes another prediction Ψ2 if the inequality associated with Ψ1 is mathematically stronger than the one associated with Ψ2, and if every set of assumptions associated with Ψ2 is subsumed logically by some set of assumptions associated with Ψ1. For example, output < 16.5 {(C)} subsumes output < 17.5 {(A, C)}.

- Hibernation is the process by which selected predictions either invoke or are disabled from invoking the constraint propagation system of which they are a part.
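The subsumption relation just defined can be written down directly; the sketch below (our own representation of predictions as a bound plus assumption sets, for upper bounds of the form v < bound) checks the example from the definitions:

```python
def subsumes(psi1, psi2):
    """psi = (bound, assumption_sets) for a prediction 'v < bound'.

    psi1 subsumes psi2 when its bound is at least as strong (smaller,
    for an upper bound) and every assumption set of psi2 is subsumed
    by -- i.e. is a superset of -- some assumption set of psi1.
    """
    bound1, sets1 = psi1
    bound2, sets2 = psi2
    if bound1 > bound2:  # psi1's inequality must be at least as strong
        return False
    return all(any(s1 <= s2 for s1 in sets1) for s2 in sets2)

# The example above: output < 16.5 {(C)} subsumes output < 17.5 {(A, C)}.
psi_strong = (16.5, [{"C"}])
psi_weak = (17.5, [{"A", "C"}])
assert subsumes(psi_strong, psi_weak)   # the weaker prediction may hibernate
assert not subsumes(psi_weak, psi_strong)
```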
The selection is based on attempts to decide whether invoking the constraint propagation system will probably generate no new diagnostic information.

- When a prediction is active, it is used normally by the constraint propagation system.

- When a prediction is hibernated, it does not lead to any new predictions. The constraint propagation system stores up the constraints which operate on it, in case it is ever activated.

Overview of the hibernation process

As introduced above, hibernation is a process by which some less precise predictions are stored, and are not used to generate other predictions. These predictions are chosen as the ones which provably will not generate new useful diagnostic information. But what is useful information? And how may it be decided properly whether a prediction will provide that useful information? The diagnostic process centers around symptoms, which are differences between observed values and predicted values. Within the framework of GDE (de Kleer and Williams, 1987), those symptoms are used to isolate conflicts, which are sets of component behavioral modes which are inconsistent with the system description and some observation. A prediction is considered to provide useful information if it might lead to a conflict to which no other active prediction would lead. Therefore, in order to hibernate a prediction, Skordos must prove that, at least for the time being, that prediction will generate no new such conflicts.

A straightforward circumstance which leads to hibernation is the subsumption of a prediction. If subsumes(Ψ1, Ψ2), then any minimal conflict to which Ψ2 would lead, Ψ1 would also lead to. Thus, upon discovery of both, Ψ2 may be properly put into hibernation.
Hibernation in the case of subsumption can help ease the strains of managing many levels of precision. In fact, this simple sort of hibernation is needed just to get the system to work. For example, consider a resistor, R1, with 100 < R1 < 110, where it has been observed that 11 < v1 < 12. Call this observation Ψ1. The constraint propagation system infers that 0.1 < i < 0.12 [(R1)], and then goes back to infer that 10 < v1 < 13.2 [(R1)]. Call this prediction Ψ2. If this process continued, then the propagator would go into an infinite loop. However, Ψ2 clearly provides no new information: it is subsumed by Ψ1. Therefore, Ψ2 is hibernated. Such subsumptive situations fit nicely into the hibernation framework.

Unfortunately, subsumptive hibernation is not nearly enough to stem the tide of information spewing from the predictive engine. More generally, hibernation may be used with many predictions which are not subsumed. These predictions must be proven not to lead to a conflict, typically by comparison with stronger active predictions which do not lead to a conflict. For example, if a prediction like v > 5.95 does not lead to a conflict, then a prediction which it is mathematically stronger than, like v > 5, cannot lead to a conflict either. As another example, suppose Ψ1 consists of the inequality v > 5.95 and the single assumption set (A, B); Ψ2 consists of v > 5 and the assumption set (A); and Ψ3 consists of v > 4 and (B). If there are no conflicts, then Ψ1 should be the only one active. Considering it is mathematically stronger than the other predictions, and it, in particular, did not lead to a conflict, those weaker predictions will not lead to conflicts either.
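The resistor loop above can be sketched with simple interval arithmetic (our own helper names; the paper does not give code):

```python
# Interval propagation through Ohm's law (v = i * R) widens the bound on
# each round trip; subsumption is what stops the loop.
def current_interval(v_lo, v_hi, r_lo, r_hi):
    # i = v / R is smallest for low v and high R, largest for high v, low R.
    return v_lo / r_hi, v_hi / r_lo

def voltage_interval(i_lo, i_hi, r_lo, r_hi):
    # v = i * R, with all quantities positive here.
    return i_lo * r_lo, i_hi * r_hi

def interval_subsumes(a, b):
    """Interval a subsumes b when a is at least as tight."""
    return b[0] <= a[0] and a[1] <= b[1]

observed_v = (11.0, 12.0)                # the observation, Psi_1
r = (100.0, 110.0)
i = current_interval(*observed_v, *r)    # about (0.1, 0.12)
derived_v = voltage_interval(*i, *r)     # about (10.0, 13.2), Psi_2

# Psi_2 is subsumed by the observation, so it hibernates instead of
# feeding yet another round of propagation.
assert interval_subsumes(observed_v, derived_v)
```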
GOLDSTONE 513

conflict, it should be activated. It might seem that because B, another member of Ψ1's assumption set, was not implicated in the conflict, Ψ1 could not have contributed to the conflict. Consider the case that two conflicts had arisen: (A, B, C) (because of this prediction) and (A, C); then the minimal conflict would be (A, C). Therefore, the hibernated predictions must be considered for activation. Ψ2, the next strongest prediction mathematically, should not be activated, because it depends on A, which was the only component on which v > 5.95 depended which was implicated. It would only come up with redundant conflicts, if any. Ψ3, although it is much weaker mathematically, should be activated, because it is dependent only on B, on which Ψ2 is not dependent, and would provide useful information if it were involved in a conflict.

A careful description

In order to maintain correctness of the diagnostic process, a new prediction, Ψi, may be put into hibernation only when it is guaranteed to create no new conflicts. Similarly, after having been hibernated for some time, if the guarantee no longer holds, the prediction must be activated. The emphasis is on the creation of conflicts, because the diagnostic process used in Skordos bases its reasoning on conflicts. If no new conflicts are created, then no useful information has been learned. Conflicts are considered to be part of the state of the system. A conflict is of the form Γk → F, where Γk denotes a set of assumptions and F denotes false. Γ is used, rather than Σ, to easily differentiate conflicting sets of assumptions from non-conflicting ones. Under what conditions may Skordos conclude that it will generate no new conflicts from a certain new prediction, Ψi?
Certainly, consider the set of predictions Ψi and Ψj that are bounds on the same variable, where Φj implies Φi. If for every Σi,a there is some Ψj with some Σj,b that implies Σi,a, then Ψi may be hibernated. Furthermore, for every Σi,a, a subtly weaker condition will suffice: given the same set of predictions mathematically stronger than Φi, if there is some Γk and some Σj,b such that Γk ∩ Σj,b implies Σi,a, then Ψi may be hibernated. A condition which is not any weaker but suggests an efficient implementation follows: for every Γk, for every Σi,a, if there exists some activated prediction Ψj whose inequality is mathematically stronger than Ψi's, and Γk ∩ Σj,b implies Σi,a, then Ψi may be hibernated. The invariant which implements this behavior is:

- For each Γk, the Ψi with the strongest inequality which has some Σi,a such that Γk does not imply Σi,a should be activated.

In extending these techniques to diagnostic systems which consider faulty modes of behavior, it can be useful to make use of the term diagnosis, with the following definition:

- A diagnosis is an assignment of modes (good or faulty), consistent with the given observations, for all the components in the system (de Kleer and Williams 1989).

Because they describe assumptions which are consistent with the observations, diagnoses bear an inverse relationship to conflicts. A prediction Ψi is valid under a diagnosis if there is some Σi,a such that the diagnosis implies Σi,a. Under each diagnosis, the valid prediction with the strongest inequality should be activated. This invariant will not necessarily lead to maximal hibernation, i.e. hibernating all predictions which won't lead to conflicts.
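The diagnosis-based invariant can be sketched as follows (an assumed representation, with names of our own: a prediction is a lower bound plus assumption sets, and a diagnosis is the set of components taken to be normal):

```python
# Under each diagnosis, only the strongest valid prediction stays active.
def valid_under(prediction, diagnosis):
    _, assumption_sets = prediction
    return any(s <= diagnosis for s in assumption_sets)

def recompute_active(predictions, diagnoses):
    active = set()
    for diag in diagnoses:
        valid = [p for p in predictions if valid_under(p, diag)]
        if valid:
            active.add(max(valid, key=lambda p: p[0]))  # strongest bound v > x
    return active

# The three predictions for v from the example above:
psi1 = (5.95, (frozenset({"A", "B"}),))
psi2 = (5.0, (frozenset({"A"}),))
psi3 = (4.0, (frozenset({"B"}),))
preds = [psi1, psi2, psi3]

# With no conflicts, the all-normal diagnosis keeps only psi1 active.
assert recompute_active(preds, [{"A", "B", "C"}]) == {psi1}
# Once a conflict implicates A, a diagnosis excluding A activates psi3.
assert recompute_active(preds, [{"B", "C"}]) == {psi3}
```

The number of active predictions is bounded by the number of diagnoses passed in, matching the upper bound stated in the text.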
Maximal hibernation requires an omniscience of knowing in advance which prediction would or would not lead to a conflict. Without such omniscience, we resort to implementing the invariant above. This condition will allow hibernation of many predictions. It is guaranteed not to over-hibernate: if a stronger Ψ1 were valid, then because it did not lead to any conflict, a weaker Ψ2 certainly won't lead to any conflict. This invariant does put an upper bound on the number of active predictions: the number of valid diagnoses. That invariant can be used to generate an algorithm for deciding whether a new prediction should be hibernated, denoted HIB?(Ψi):

1. When a new prediction Ψi arises for a node in the circuit,
2. For each diagnosis under which Ψi is valid,
3. For each active prediction Ψj for that node which is valid under that diagnosis,
...

However, when a new conflict arises, the set of diagnoses changes. Consequently, the active predictions might deserve hibernation, and the hibernated predictions might need activation. The algorithm for recomputing the hibernating set is somewhat expensive:

1. At each value,
2. For each of the new diagnoses,
3. Go through all the predictions, hibernated and active, and find the strongest one which holds. It can be helpful to keep the predictions sorted by strength for this step.
4. When done with all the new diagnoses at a value, go through each of the active predictions and check HIB? on them.

By using the algorithms above, hibernation may be executed in order to help keep the number of predictions manageable without losing any useful information.

Exploiting the hibernation

In spite of the gains which hibernation allows, Skordos must manage multiple predictions for many values. The computation is exponential in the number of interacting variables, as described in the introduction. When many values interact, e.g. computations associated with Kirchhoff's current law, the actual costs become prohibitive. As it turns out, these combinations are often fruitless, or their results frequently are subsumed and hibernate immediately. Executing the exponential number of possible combinations, when only a few ever will prove useful, is wasteful. By restructuring the problem to use hibernation to enforce early pruning, the interactions can be reduced from exponential to quadratic. Methods for restructuring this problem and an analysis of the resources used are presented in this section.

Because it is a central example for much of the discussion in this section, the reader should be reminded of Kirchhoff's current law: the algebraic sum of currents entering any node must be zero. In terms of Skordos, this law is realized by summing n−1 predictions for values of terminals to a circuit node, and concluding with a prediction of the value for the last terminal equal to the opposite of the sum. For example, in figure 1, the predictions at converging terminals A, B and C allow a conclusion for the terminal D. Namely, if at least one milli-ampere of current were entering each of A, B, and C, then at least three milli-amperes must be leaving through D. (The convention is that current is assumed to be entering the circuit node through the terminal, so the conclusion is written as i_D < −3mA.) Such predictions may be made for any terminal of a circuit node, given predictions for all the other terminals. However, such a situation is computationally
Howe\-er, such a sit ua t ion is conlpu t at ionall ) iA> 1mA i B>lmA ic> 1mA i Dc-3n1A C D Figure 1: Iiirchhoff’s current law at a node esplosive, 1,ecause it iilvolves coml~ining every com- hinat ioii of one predict ion from each terminal. LYit li iiiaiiy act i\-e prfvlic t.ioiis alit1 niany t.erininals, t,lie sit- iiatiou I)ortlrrs 011 disaster. In order t.0 guarautfw cor- rect iless m-er~ coiiil~ination of ineyualit~ies shoultl I)0 t.rietl. However, many of those conibinat.ions simply yield iiseless informat ion. This 1)rol)lr~n1 ih esacerhatecl ill conlputatiom wit11 may. iupu tb. For esalnpleq consider t lie case of r1 ter- niiilals con\*erging on a circuit node, ant1 )?) act ivtb pw- clictioiis per 1.aluc. Iii appl3*ing Iiirclilioff’s current. law. 11) )I- 1 coilyii t a t ions will 1~ require<1 in order t 0 iiialw a coiiil)lcte set of preclict,ioiis ahorlt the first, tfmiCilal. Because this colnputatiou occurs for each terlnina.1, the total coht of computation is 11 x w”-~. All but about 1)? will 1~ liil~ematetl. Of course, a few of the coiiiput a- tions will rcsiilt in useful information, wlliclt will caiise a few tiiore coull)ritations. unt.il thf ’ sr\-stt~ni reaches st a- bility. Still. the cost of t,lie coni1)ut,atioil is approsi- mateI>- 0( II x )))” ). For a tc-pica1 circuit node of 11 = 6 inputs aiitl w = 6 act i\-e preclict ioils per terliiil~al, f he tot al iiuiulwr of coinput at ions would lw -1B.6r>cj. The 1~rol~lenl basically require a11al\-sis of the ha- sis of tllc low yieltl of active predict ions. TJ.pically. 1)rmlict ions froin two of the teriiiiiials will coiihine to l)rotluw a iih,clcsh internw&atc piwlict iou, m-llicli nlirst 1~ conil)ine(l wit Ii all conhinat iouh of prfdictions froiii all the otllc>r tc~rii~inals. The wsult ing predict ioil \vill Iw irselt~ss as well. in a11 cases. ( ‘onwqumtly, tlie resrilt iug pretlictioii will he hilwrilatfvl. I‘lli~ 1,rocesh, which 118~s liilwruation as a final filtcbr. 
filters the predictions too late. It is frequently clear, when two active predictions from two values are combined, that the combination is not useful. The solution, then, is to use hibernation as an early filter as well as a final filter. The intuition is to create a new system value, which does not correspond to a particular node in the circuit, but corresponds to a symbolic combination of a subset of the values which are combined at a circuit node. The intermediate combinations are retained at the new system value explicitly. If a new one is discovered, it is hibernated or activated, in the normal fashion. Because hibernation acts as such a powerful filter on prediction sets, it severely cuts down on the combinatorics of the system. Also, the exponential explosion is limited because the number of combinations depends directly on the number of activated values, and the values are activated in sets whose size is bounded above by the number of diagnoses under analysis.

For example, in Figure 2, the network on the left would be converted, with an intermediate value, to the network on the right. For figure 2, i_m will denote the current into the circuit through term-m and i_5 will denote the current from left to right through inter-term-5. In order to use Kirchhoff's current law to compute i_3, the straightforward structure of the circuit on the left would try every combination of active predictions for i_1, i_2 and i_4. However, in the circuit on the right, the filtered results of combining i_2 and i_4 would be in i_5. Consequently, predictions for i_3 would be a combination of predictions of i_1 and i_5.
By restructuring the interactions, the equations are revised as follows: i_1 + i_2 + i_3 + i_4 = 0 is replaced by

i_1 + i_3 - i_5 = 0
i_2 + i_4 + i_5 = 0

Loosely, if each terminal had m active predictions associated with it, then the calculation of predictions for term-3 on the left would take m^3 computations, as contrasted with m^2 on the right. In fairness, i_5 also probably started off with m active predictions, which probably lead to another m^2 computations. Reasoning about these systems is difficult, because computations will create a few new active predictions, which lead to more computations. However, the important lesson is that the order of growth of the computations was reduced, in this simple case, from cubic to quadratic. The basis for the problem itself provides the intuition that this sort of restructuring maintains the integrity of the system. These terminals represent terminals in an analog electrical circuit which connect in a node. An electrical node represents an equipotential: it is simply a set of terminals connected by wires. But these wires do not actually connect at a point: they connect in pairs, fusing their currents and reaching an equilibrium with the other terminals. This restructuring simply mirrors not joining at a single point. For the restructuring to take place, n - 3 new intermediate terms must be introduced and arranged in a network such that nodes converge in groups of three. In Figure 3, for example, each intermediate value progressively represents the sum of one more terminal value.

516 TRUTH MAINTENANCE SYSTEMS

[Figure 2: Restructuring the interactions of Kirchhoff's current law]

For Figure 3, i_m will denote the current into the circuit through term-m, and i_n will denote the current directed down through inter-term-n.
Each intermediate value is summed with the next input to create the next intermediate value, until all the inputs are connected, with the end conditions handled appropriately. The restructuring of the equations in this case is: i_1 + i_2 + i_3 + i_4 + i_5 + i_6 = 0 is replaced by

i_1 + i_2 - i_7 = 0
i_7 + i_3 - i_8 = 0
i_8 + i_4 - i_9 = 0
i_9 + i_5 + i_6 = 0

Because each node has three terminals, m^2 computations are needed to compute the predictions for each terminal. Therefore, the computation at each node takes 3 x m^2 time. There are n - 2 of these nodes, and the whole computation takes about (n - 2) x 3 x m^2 time, which is O(n x m^2), which represents a substantial improvement. For a typical circuit node of n = 6 terminals and m = 6 predictions, about 432 computations are necessary, a savings of a factor of one hundred over the original problem.

[Figure 3: restructured network of terminals term-1 through term-6 connected through intermediate terms up to inter-term-9]

Related Work

The Sophie project (Brown, Burton and de Kleer 1982) includes a stand-alone circuit diagnoser which was used as a starting point for this work. Sophie greatly exploits the assumption that only one component in the system is failing at a time. Sophie included simple versions of many of the ideas which were further developed in a more recent diagnostic framework (de Kleer and Williams 1987). Another model-based system for diagnosis of analog circuits is called DEDALE (Dague, Raiman, and Deves 1987). Diagnosis is based on "the fundamental assumption that a defect in a component leads to significant changes in the behavior of the circuit." For instance, to describe
Hyacinth S. Nwana
Department of Computer Science
University of Liverpool
P. O. Box 147, Liverpool L69 3BX, U.K.
nwanahs@and.cs.liv.ac.uk

Abstract

This paper presents FITS - an Intelligent Tutoring System (ITS) for the domain of addition of fractions. It was developed with the aim of improving on many of the shortcomings of existing tutors in the mathematical domain. The paper largely describes its functioning. In order to give the reader a better 'feel' of the tutor's capabilities than obtained from its description, an actual student-tutor protocol extract is given. More significantly, the tutor has also been evaluated in several ways with seemingly very encouraging results so far; however, due to length restrictions they are not reported in this paper. The paper concludes by briefly highlighting some of FITS's improved features over other existing tutors in the domain as well as some of its shortcomings.

Introduction

Overview of the Project

Mathematics has provided a very suitable domain for intelligent tutoring systems (ITS) research, with the result that probably more tutoring systems have been constructed for the domain than any other, some examples being BUGGY (Brown & Burton 1978) and the Geometry tutor (Anderson, Boyle, & Yost 1985). Lovell (1980) explains that there are two reasons for this:

- Mathematics is highly structured and its algorithms are well defined, making it easier to concentrate on the features of the ITS itself, rather than those of the domain.
- Mathematics is an important educational area because it under-pins most of science and engineering.

Current tutors can be grouped under two main approaches: the malrule approach, whose steps are noted in Sleeman (1983) and which BUGGY typifies; and the model-tracing approach, which is described in Anderson (1987) and typified by the Geometry tutor. However, these tutors suffer from various shortcomings which stem from the approaches to their construction. We have previously highlighted using empirical data (Nwana & Coxhead 1988a 1988b) the inadequacies of at least one of these approaches, especially for more complex domains, and proposed alterations to address some of them. Due to length restrictions, the approaches and their shortcomings are not given (for details, see (Nwana 1991a)). The aim of the project was to construct a system for a more 'complex' domain which improved on some of these inadequacies of current systems. The fractions domain was chosen amongst other reasons (see Nwana & Coxhead 1988b) because it probably provides a more 'complex' domain in that it requires many other subskills in order to execute an addition or subtraction procedure completely and successfully. Some of the skills needed are the addition, subtraction and multiplication of integers, and finding the lowest common denominator. Most of the other diagnostic systems (e.g. DEBUGGY) work on more 'primitive' domains. Therefore, building a tutoring system for the fractions domain promises to provide a more realistic picture of the intricacies of building tutoring systems, since most real domains are complex.

Overview of FITS

Language and Environment

FITS is a Quintus Prolog-based system. FITS was developed on a SUN-360 workstation running under UNIX. It also runs on a DEC VAX 8650. It consumes 380Kbytes of memory but requires 680Kbytes when the necessary library predicates are imported. It takes 3 to 4 minutes of processor time to be compiled on both machines.

Description of FITS

Figure 1 shows FITS's architecture. It was designed to have a clear separation between domain-dependent and domain-independent parts, in order to facilitate the incorporation of a new domain. The system thus has a clear and basically simple architecture.
Conceptually, each student-tutor interaction cycle involves the flow of data through a number of components. All these components are, as far as is practical, rule-based to permit ease of subsequent enhancement. A full anatomy of FITS (which describes inputs, outputs and basic algorithms for those interested in replicating the work) is described in Nwana (1990).

NWANA 49
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

[Figure 1 - FITS's Architecture (also showing Control and Data Flow); arrows distinguish flow of control from flow of data; rectangles: active components; circles: data]

The major modules of FITS's architecture (they are also briefly described) include:

User Interface Module: controls the interaction between the student and FITS. It comprises the Problem Solving Monitor and Student Modelling Modules.
Problem Solving Monitor Module: the student monitoring (i.e. model-tracing) component of FITS. It is also capable of solving all fraction addition problems.
Student Modelling Module: dynamically maintains a student model.
Tutoring Strategy Module: decides the next teaching action.
Domain Expert Rulebase: contains all the correct rules for solving fraction addition problems.
Bug Catalogue: a library of the common bugs for the various operations involved in fraction addition.
Tutoring Knowledge: what is to be communicated to the student.
Student-Tutor History: an explicit and dynamic chronological record of the interaction between the student and FITS.
Ideal Student Model: a representation of what the desired model of the ideal student should look like.
Student Model: contains beliefs concerning the student's knowledge state.

Figure 1 also shows FITS's control and data flow. The functionality of all the various modules that constitute the architecture has already been noted above, albeit very briefly. This section details the operation of the architecture as a whole, i.e. it explains how FITS works.
The discussion here thus concerns control and data flow, and how FITS's modules intercommunicate in operation. Conceptually, the control flow within FITS's architecture (see Figure 1) is between the User Interface module (i.e. Problem Solving Monitor + Student Modelling modules) and the Tutoring Strategy module. Each student-tutor cycle could involve several control flows between these two components; it also usually involves much flow of data within the architecture. These will be discussed in the ensuing paragraphs.

FITS normally commences tutoring a new student by analysing his/her results to a carefully-designed pre-test (it expects that the student's responses to these pre-test questions had previously been stored and so can be retrieved on request). The User Interface Module (the Student Modelling module to be exact) uses these responses to perform pre-modelling, i.e. it creates a unique student model for the particular student which it initialises according to the performance of the student on the pre-test. The vital role of a student model to truly individualised instruction cannot be overstated. Control is now passed to the Tutoring Strategy module which will have to decide on the next tutoring action. This module makes use of much data to arrive at such decisions, as shown in Figure 1. At this stage, it mainly compares (overlays) the just-initialised Student Model with the Ideal Student Model; many beliefs can be inferred from this comparison, which include the following:

A. Some prerequisite skills are unknown to the student, i.e. they are deemed missing from his/her repertoire.
B. All prerequisite skills have been acquired but at least one has not yet been mastered, i.e. the student is deemed not proficient in at least one of these skills.
C. All prerequisite skills have been mastered but the student has not yet mastered the overall skill of fraction addition.
D. The student is an expert at fraction addition.
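One way the overlay comparison might be operationalised is sketched below. The prerequisite skill names come from the paper's own list (integer arithmetic and the lowest common denominator), but the three-level skill states and the function are assumptions for illustration, not FITS's actual representation.

```python
# Each skill is marked with the highest state the student has shown:
# "missing" < "acquired" < "mastered".
PREREQUISITES = ["add_integers", "subtract_integers",
                 "multiply_integers", "lowest_common_denominator"]
OVERALL = "fraction_addition"

def overlay_decision(student_model):
    """Compare a student model against the ideal model (everything
    mastered) and return one of the four tutoring cases A-D."""
    states = [student_model.get(s, "missing") for s in PREREQUISITES]
    if "missing" in states:
        return "A"  # teach the missing prerequisite skills
    if "acquired" in states:
        return "B"  # drill and practice on unmastered prerequisites
    if student_model.get(OVERALL, "missing") != "mastered":
        return "C"  # drill and practice on fraction addition itself
    return "D"      # expert: terminate tutoring

print(overlay_decision({}))  # A
```

The ordering of the checks mirrors the cases: missing prerequisites dominate unmastered ones, which dominate the overall skill.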
Naturally, in the case where the Tutoring Strategy module believes the student an expert (i.e. D above), it immediately returns control to the Problem Solving Monitor module instructing it to terminate tutoring the student. If A applies, the Tutoring Strategy module normally instructs the Problem Solving Monitor module to sequentially teach the prerequisite skills that are believed missing from the student's repertoire. The latter finds an appropriate skill, presents a piece of exposition on the skill, provides adequate examples and tests the student's mastery of the skill. The Tutoring Strategy module will only be reactivated if the student has finished with the current piece of exposition. During all interaction, every input from the student is processed by the User Interface module.

50 EDUCATION

Its parser makes use of the fraction addition grammar to report immediately on any incorrect entries, e.g. syntactic errors. In the case of an incorrect solution, the Tutoring Strategy module is invoked for a decision. It may instruct the Problem Solving Monitor module to diagnose the error (using data from the Bug Catalogue). Failing this, it may instruct the latter to provide a hint and another chance for the student to try again. These decisions are made on the basis of various pieces of information, e.g. the number of attempts the student has made at this task (using data from the Student-Tutor History), the type of malrule that was manifested (using data from the Bug Catalogue), etc. When the User Interface module is in control, the Student Modelling module also updates the student's Student-Tutor History accordingly, e.g. noting in it the mal-rules that have been manifested, the problems he/she has attempted successfully/unsuccessfully, etc. After a session, the Student Modelling module uses the data in the Student-Tutor History to update the Student Model so as to reflect the student's emerging knowledge of fraction addition.
A similar procedure is pursued until all the skills believed missing from the student's repertoire are taught. However, the student can quit after any successful/unsuccessful tutoring of a skill. The Student Model will reflect his/her current knowledge state, and the Tutoring Strategy module will ensure that tutoring resumes from where he/she quitted. However, suppose that after pre-modelling, B applies (this may also apply after the tutoring of prerequisite skills that were deemed unknown to the student). Here, all the skills are believed acquired, but at least one is believed not yet mastered. The by-then-invoked Tutoring Strategy module may decide in this case that the Problem Solving Monitor module should provide appropriate examples, drill and practice until the student is believed proficient in all the prerequisite skills. Once again, similar processes to those described in the previous paragraph may take place; the student is also free to quit anytime he/she wishes. In the case after pre-modelling where C applies (it may also apply after the process described in the previous paragraph), the invoked Tutoring Strategy module normally instructs the Problem Solving Monitor module to provide drill and practice on the overall fraction addition skill; at least at this stage FITS is 'sure' that the student has mastered the required prerequisite skills. Hence, he/she is not expected to solve fraction addition problems without knowledge of them. From here normally ensue the key and interesting interactions that FITS is capable of. The Problem Solving Monitor module generates and presents a problem of a particular difficulty level for the student to do. The student is expected to solve the problem in a step-by-step fashion; however, he/she is not constrained to do so. A good student (judging from his/her Student Model) is allowed to skip steps.
In so doing, if he/she makes too many errors (judging from the data in the Student-Tutor History), this privilege of skipping steps is withdrawn by the Problem Solving Monitor module, i.e. the appropriate menu option is taken away. This menu allows the student to communicate his/her intention prior to performing it. If the choice is legal for the stage, the Problem Solving Monitor module normally praises the student and stays quiet. Meanwhile, the Student Modelling module records an instance in the student's Student-Tutor History of correct manifestation of where and when to use the chosen operation. If on the other hand the choice is illegal, the Tutoring Strategy module is as usual invoked for a decision. Normally, the latter examines the Student-Tutor History. If it is judged good, the Problem Solving Monitor module is instructed to give another chance (it may have been a slip or a misentry). Otherwise (i.e. he/she is likely to get it wrong again, judging from his/her Student-Tutor History), the Problem Solving Monitor module will provide a hint or an example along with another chance for the student to retry. However, if the choice is again illegal, the Student Modelling module notes in the Student-Tutor History an instance of the manifestation of the student not knowing when to apply the skill that he/she chose, as well as that which should have been chosen. In this way, those skills which the student has demonstrated to possess but which he/she does not know when and where to apply are highlighted (this was noted as a major cause of the bugs observed (see Nwana & Coxhead 1989)). The Tutoring Strategy module also instructs the Problem Solving module to inform the student as to the correct operation to perform at the stage. When the student chooses this operation from the menu, he/she is now required to perform the operation so as to produce the result of the next stage.
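The legality check behind the operations menu (the tutor's "Oh no! There is no fraction to cancel" response) might be sketched as simple predicates over the current step. The state representation and the particular rules below are assumptions for illustration, not FITS's rulebase.

```python
from math import gcd

def legal_operations(whole_numbers, fractions):
    """Return the menu operations that are legal for the current step.
    `fractions` is a list of (numerator, denominator) pairs."""
    ops = set()
    if len(whole_numbers) >= 2:
        ops.add("sum whole numbers")
    if any(gcd(n, d) > 1 for n, d in fractions):
        ops.add("cancel fraction")
    denominators = {d for _, d in fractions}
    if len(fractions) >= 2 and len(denominators) == 1:
        ops.add("add equivalent fractions")
    if len(fractions) >= 2 and len(denominators) > 1:
        ops.add("find equivalent fractions")
    return ops

# At the opening step of 1:1/4 + 2:1/2 there is nothing to cancel, so
# choosing "cancel fraction" would draw the tutor's hint.
print(sorted(legal_operations([1, 2], [(1, 4), (1, 2)])))
# ['find equivalent fractions', 'sum whole numbers']
```

An illegal choice would then route control to the Tutoring Strategy module, as the text describes.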
In the case where the result is correct, the Problem Solving module praises the student and redisplays the menu for the student's choice of the next operation to be performed. The Student Modelling module also notes in the Student-Tutor History an instance of that skill having been correctly performed. If on the other hand the answer to the stage is wrong, the Tutoring Strategy module may decide that the Problem Solving Monitor module should diagnose, hint, provide an example, etc. If the student's second attempt is again wrong, the Problem Solving Monitor module will be instructed to provide the answer in order to avoid bringing tutoring to a complete halt. Naturally, the Student-Tutor History is updated to reflect an instance of the particular skill not being correctly performed. When the fraction addition problem is eventually solved, FITS displays the student's solution and comments on its efficiency. It gives extra praise to the student who solves the problem in fewer steps than it would have done. If however the student's solution is long-winded, it also displays its more optimal solution so as to enable the student to improve on his/her efficiency. Each session requires the student to attempt three problems of the same difficulty level so as to ensure adequate skill refinement. After the session, the Student Model is updated with the contents of the Student-Tutor History as mentioned earlier. At this stage the model may reflect one or more of the following beliefs:

1. Some skills are still not acquired by the student.
2. Some skills are still not well mastered.
3. The student still does not know when to apply certain skills in his/her repertoire.

If, e.g., (3) applies, the Problem Solving module would provide a Vanlehn-type teaching session (Vanlehn 1987) which aims at communicating knowledge of where and when to apply the acquired fraction skills. Examples and testing ensue as described previously.
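The diagnosis step might be sketched as a bug-catalogue lookup over the student's wrong answer to a stage. This is a minimal sketch, assuming malrules are stored as functions from the operands to the buggy result; the two malrules shown are the classic fraction errors the paper's protocol exhibits (adding both numerators and denominators, and multiplying instead of adding).

```python
# A malrule maps two fractions with a common denominator d to the buggy
# result a student applying that malrule would produce for a/d + b/d.
BUG_CATALOGUE = {
    "add-numerators-and-denominators":
        lambda a, b, d: (a + b, d + d),
    "multiply-instead-of-add":
        lambda a, b, d: (a * b, d * d),
}

def diagnose(a, b, d, student_num, student_den):
    """Return the malrules consistent with the student's answer to a/d + b/d."""
    return [name for name, rule in BUG_CATALOGUE.items()
            if rule(a, b, d) == (student_num, student_den)]

# 2/8 + 4/8 answered as 6/16: numerators and denominators were both added.
print(diagnose(2, 4, 8, 6, 16))  # ['add-numerators-and-denominators']
# 2/8 + 4/8 answered as 8/64: the fractions were multiplied.
print(diagnose(2, 4, 8, 8, 64))  # ['multiply-instead-of-add']
```

A diagnosis that comes back empty would correspond to the fallback the text describes: provide a hint and another chance rather than a targeted explanation.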
After appropriate reteaching and successful testing, the Problem Solving Monitor module is instructed to present the same session to the student again (i.e. problems of the same difficulty level that he/she failed to do, and hence warranted some reteaching). In the case where the student progressed successfully (which may only be after some reteaching), the Tutoring Strategy module would now decide that the Problem Solving Monitor module should present problems of a one-higher difficulty level, i.e. move to a one-higher session. Once again, the student can quit after any session but on resumption, tutoring will continue from where he/she quitted. If the student goes through the six sessions successfully, his/her Student Model will ultimately become identical to the Ideal Student Model, at which stage FITS's tutoring of the student terminates (however, a student could still go through all the sessions without attaining 'perfection'). (It must also be noted that there is no psychological reason for using three problems (questions) per session and for having six sessions. Three just 'seemed' the least number of problems which should be attempted before any meaningful conclusions could be drawn; addition of fraction problems were also analysed into six difficulty levels.)

Some Comments on FITS's Architecture

It is important, after this discussion of how FITS works, to highlight the clear separation of domain-dependent and domain-independent parts achieved with the architecture of Figure 1. In this respect, it is envisaged that modifying the present tutor to teach another domain, say fraction subtraction, should not be much of a problem. The modules that require changing are evident. It is clear that the grammar for fraction addition will have to be substituted by that for fraction subtraction, while the parser will remain intact. The Student Modelling and the Tutoring Strategy modules will also require no changing.
Admittedly, the Problem Solving Monitor module requires more work, but it also is largely domain-independent. Naturally, data modules such as the Domain Expert Rulebase, the Bug Catalogue and much of the Tutoring Knowledge will need substitution. The form of the Student Model, Ideal Student Model and Student-Tutor History will largely remain the same; however, it is clear that fraction subtraction concepts will have to replace the present fraction addition ones. FITS's architecture largely subsumes Anderson's (Anderson, Boyle, & Yost 1985) as well as O'Shea et al's (1984) five ring architecture. It also appears to subsume Hartley & Sleeman's (1973) design except for the means-ends guidance rules (i.e. knowledge of how to apply tutoring knowledge). It is very interesting that this architecture resulted, considering that it was by no means pre-designed to be as such; rather, the principles which constitute FITS's basis led to such a design. It must be conceded that individually the components of the architecture are not novel, but collectively the functioning of the architecture seems to capture the underlying principles which form the basis of the approach to FITS described in Nwana (1991a).

Example FITS Protocol Extract

In order to give the reader a better 'feel' of the tutor's capabilities than obtained from the description of its functioning, an example interaction with the tutor is given. An annotated and slightly modified example excerpt from an interaction between the tutor and an actual student, Rose, is illustrated by the protocol of Figure 2, as O'Shea does with his classic Quadratic tutor in O'Shea (1982). Comments are in italics. The protocol is also slightly stylised, e.g. in the way fractions such as 1/4 are displayed on screen.

[The system would begin with pre-testing and appropriate pre-teaching].
Tutor: Solve 1:1/4 + 2:1/2. What operation do you want to perform?
List of Operations
1. Add equivalent fractions
2. Cancel fraction
3. Find equivalent fractions
4. Sum whole numbers
5. Change improper fraction to a mixed number
6. Change mixed numbers to improper fraction
7. Rewrite fraction
8. Provide some intermediate/final answer
WHICH <1..8>? [Demanding the student's intention/plan]
Student: 2.
Tutor: Oh no! There is no fraction to cancel. Try again. [Hint]. What operation do you want to perform? ...
Student: 4. [Could equally have chosen to find equivalent fractions or to change mixed numbers to improper fractions. Therefore the system can support various idiosyncratic ways of solving the same problem, hence no one 'correct' solution].
Tutor: Fine Rose. Perform operation.
Student: 3 + 1/4 + 1/2. [Could have typed in 3:1/4 + 1/2 or 1/4 + 3:1/2 or 1/4 + 3 + 1/2. Note these various possible idiosyncratic inputs are possible due to explicit fractions knowledge representation.]
Tutor: Correct. What operation do you want to perform? ...
Student: 3.
Tutor: Fine. Perform operation.
Student: 3 + 2/8 + 4/8. [Note these idiosyncratic inputs: could also have typed in 3:4/8 + 2/8, 3 + 2/8 + 2/4 or 3:8/16 + 4/16, etc.]
Tutor: Correct. What operation do you want to perform? ...
Student: 1.
Tutor: Fine Rose. Perform operation.
Student: 3 + 6/16.
Tutor: Not Correct. You seem to have added both numerators and denominators to provide the numerator and the denominator of the fraction respectively. [Classic fraction error]. Remember you add equivalent fractions by just adding the numerators and placing this sum over the denominator. Study this example to help you. 2 + 1/4 + 2/4 -> 2 + 3/4. [Hint]. Remember your current step is 3 + 2/8 + 4/8. The operation to perform is adding equivalent fractions.
Student: 3 + 8/64.
Tutor: Still not correct. You seem to have multiplied the fractions together instead of adding. WELL I THINK YOU CAN DO WITH SOME HELP. Add the numerators and place the sum over the denominator for the problem 2/8 + 4/8.
Student: '/8.
Tutor: Not Correct. Add the numerators: 2 + 4 = ?
Student: 6.
Tutor: Correct. Place this value 6 over the common denominator 8.
Student: 6/8.
Tutor: Correct. Now add this fraction sum 6/8 to the whole number 3.
Student: 3 + 6/8.
Tutor: Correct. WE RETURN TO WHERE YOU WERE. Remember you were at the stage: 3 + 2/8 + 4/8. You chose to perform adding equivalent fractions. Now with the help I have given you, try again.
Student: 3 + 6/8. [Note the use of the explicit knowledge represented in the system. Also note the different tutoring strategy which FITS adopts here. The system could therefore switch to tutoring a different aspect of mathematics, if necessary, before returning to tutoring fractions.]
Tutor: Correct. What operation do you want to perform? ...
Student: 4.
Tutor: There are no whole numbers to sum Rose. Try again.
Student: 1.
Tutor: There are also no equivalent fractions to add. Try cancelling the fraction Rose. [Strategy here: tell the student what to do]. Remember your current step is 3 + 6/8. [The system notes in the student model that Rose has shown evidence of a lack of understanding of when to cancel a fraction].
Student: 2.
Tutor: Fine. Perform operation.
Student: 3 + 3/4.
Tutor: Correct. What operation do you want to perform? ...
Student: 7.
Tutor: Fine. Perform operation.
Student: 3:3/4.
Tutor: Congratulations Rose. You have arrived at the answer. [After this the system would display the student's full answer, with a comment on its efficiency, followed by its 'optimal' answer.]

Figure 2 - Extract from protocol of FITS in use

Evaluation and Conclusions

FITS, a fraction tutor, has been developed and demonstrated to be functional. FITS was evaluated in four ways, including against real students, to reveal its limitations and potential future usability. This will enable researchers to identify what modifications will have to be made to FITS to make it more usable. Due once more to length restrictions the evaluations are not reported in this paper; they are fully reported in Nwana (1991b).
Suffice it to say that the results so far are seemingly very encouraging. FITS was shown to have largely improved on many of the shortcomings of existing ITSs in mathematics (e.g. BUGGY (Brown & Burton 1978) or the Geometry tutor (Anderson, Boyle, & Yost 1985)). Some of its improved features over such existing tutors include the abilities to (these subsume its design principles of Nwana (1991a)):

- Pre-model the student.
- Preteach prerequisite skills.
- Provide tutoring sequences which satisfy Vanlehn's felicity conditions.
- Model the student throughout all his/her interactions with the tutor.
- More explicitly represent knowledge, e.g. of the student or tutoring strategies.
- Support improved remedial strategies.
- Test the student's understanding.
- Provide more interaction on the part of the student.
- Support various idiosyncratic ways which the student might choose to solve the problem.
- Motivate and support a more flexible style of tutoring.
- Provide an environment in which the interaction with the student is as close as possible to what they are likely to encounter in reality.

Not all these features have been demonstrated here; but for length restrictions, it would probably have been more useful to report on the evaluation of the system using small examples with good comments. This has been done, as was earlier pointed out, and is reported in Nwana (1991b). FITS's value lies in its architecture, which though not exactly new, seems to support the novel collection of useful principles which were drawn from this research for constructing tutors in the domain of mathematics (see Nwana 1991a). FITS itself is a contribution as a pioneer tutor for the domain of fractions. It is currently being used to introduce postgraduate students to intelligent tutoring concepts in at least two AI/Computing departments. FITS has limitations: there is no system without them.
After evaluations of the tutor, the following key deficiencies were observed which may be addressed in later projects:

- Its explanations are not tailored to the particular student.
- Its outputs are not natural.
- It tends to dominate the interaction with the student.
- It lacks state-of-the-art graphic capabilities.
- It does not learn or improve its strategies over time.
- Its student modelling is inadequate.

There is naturally much more not reported in this paper, e.g. how the student modelling is done, how the tutoring knowledge works, etc. For all further details on FITS, see Nwana (1990 1991b 1991c).

Acknowledgments. The author sincerely acknowledges the invaluable comments of his supervisor, Dr Peter Coxhead of Aston University (Dept. of Computer Science) and one of his examiners, Dr Peter Ross of the University of Edinburgh (Dept. of Artificial Intelligence). The financial support by the Cameroonian Government for this work is also acknowledged.

References

Anderson, J. R.; Boyle, D. F.; and Yost, G. 1985. The geometry tutor. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Los Angeles, Calif.: International Joint Conferences on Artificial Intelligence, Inc.
Anderson, J. R. 1987. Production Systems, Learning and Tutoring. In Klahr, D., Langley, P., and Neches, R. eds. Production System Models of Learning and Development. London: MIT Press, 437-458.
Brown, J. S., and Burton, R. R. 1978. Diagnostic models for procedural bugs in basic mathematical skills. Cognitive Science 2:155-192.
Hartley, J. R., and Sleeman, D. H. 1973. Towards more intelligent teaching systems. The International Journal on Man-Machine Studies 5:215-236.
Lovell, K. 1980. Intelligent teaching systems and the teaching of mathematics. AISB Quarterly 38:26-30.
Nwana, H. S. 1990. The Anatomy of FITS: A Mathematics Tutor. Intelligent Tutoring Media 1(2):82-95.
Nwana, H. S. 1991a.
An approach to developing intelligent tutoring systems in mathematics. In Proceedings of the Sixth International PEG Conference: Knowledge Based Environments for Teaching and Learning. Rapallo, Italy.
Nwana, H. S. 1991b. The evaluation of an intelligent tutoring system. Intelligent Tutoring Media. Forthcoming.
Nwana, H. S. 1991c. User modelling and user adapted interaction in an intelligent tutoring system. User Modelling and User-Adapted Interaction 1(1):1-33.
Nwana, H. S., and Coxhead, P. 1988a. Towards an Intelligent Tutoring System for Fractions. In Proceedings of the First International Conference on Intelligent Tutoring Systems, 403-408. Montreal, Canada.
Nwana, H. S., and Coxhead, P. 1988b. Towards an Intelligent Tutoring System for a 'complex' mathematical domain. Expert Systems 5(4):290-299.
Nwana, H. S., and Coxhead, P. 1989. Fraction bugs: explanations, bug theories and implications on intelligent tutoring systems. Cognitive Systems 2(3):275-289.
O'Shea, T. 1982. A self-improving quadratic tutor. In Sleeman, D. H., and Brown, J. S. eds. Intelligent Tutoring Systems. London: Academic Press, 283-307.
O'Shea, T.; Bornat, R.; Du Boulay, B.; Eisenstadt, M.; and Page, I. 1984. Tools for creating intelligent computer tutors. In Elithorn, A., and Banerji, R. eds. Artificial and Human Intelligence. North Holland: Elsevier, 181-199.
Sleeman, D. H. 1983. Intelligent Tutoring Systems: A Review. In Proceedings of the EdCompCon meeting, 95-101. IEEE Computer Society.
Vanlehn, K. 1987. Learning one subprocedure per lesson. Artificial Intelligence 31(1):1-40.
The Common Order-theoretic Structure of Version Spaces and ATMS's* (Extended Abstract)
Carl A. Gunter, University of Pennsylvania; Teow-Hin Ngair, University of Pennsylvania
1. Introduction
This paper arose out of the observation that the version space algorithm and the ATMS label-update algorithms operate on very similar structures. The version space algorithm learns concept descriptions from examples. Central to this algorithm is the notion of the set of all concept descriptions consistent with a given set of positive and negative examples. The assumption-based truth maintenance system, which records dependencies during reasoning, maintains for each proposition a label encoding all environments in which that proposition is true. This gives rise to two questions: first, what is the precise nature of the relationship between the structures employed by these algorithms, taken from two different problem areas? Second, are the computations performed by the two algorithms related, and what special properties of these structures do they depend on? Our aim is to find a common mathematical basis in order to determine applicability conditions for the algorithms, and to cast the computations in a form that reveals new, more efficient implementations.
We first show that the version space of a concept is a special case of a convex space. The mathematics of these structures is then brought to bear on the question of ensuring that finite representability is preserved under the merging operation. For this, we derive a necessary and sufficient condition, called the MW property. This can help in determining the class of admissible concept languages for which finite-observation version spaces are finitely representable. We present the first result on admissibility that captures both finite and infinite concept languages. We then recast the label-update algorithms in the ATMS as operations on a convex space.
Prakash Panangaden, McGill University; Devika Subramanian, Cornell University
*Gunter's work was supported in part by NSF grant CCR-8912778 and by a Young Investigator Award from the Office of Naval Research. Ngair's work was supported by the Institute of Systems Science, Singapore. Panangaden's work was supported by NSF grant CCR-8818979 and by NSERC. Subramanian's work was supported by NSF grant IRI-8902721.
500 TRUTH MAINTENANCE SYSTEMS
The chief result is a new algorithm for computing labels that handles complex disjunctions such as choose({A, B, C}, {D, E, F}), which stands for "either A, B, and C are true, or D, E, and F are true." This algorithm is a natural extension of de Kleer's original ATMS algorithm and does not require hyper-resolution rules to compute minimal, consistent, sound and complete labels.
The paper is structured as follows. In Section 2 we introduce the mathematics of convex spaces: the finite representability and MW properties. The version space is formulated in terms of convex spaces in Section 3, and we present an admissibility result for concept languages. In Section 4, we perform a similar analysis of the ATMS algorithm and show that the label computation performed by the disjunction-free ATMS is akin to the boundary-set updates of the version space algorithm. We extend the class of disjunctions expressible within the ATMS and derive a new label-update algorithm which is more efficient in general and does not rely on hyper-resolution rules.
2. Convex Spaces
Our goal is to develop the basic mathematical theory of convex spaces with a view to extracting the common order-theoretic structure of version spaces and ATMSs. This order-theoretic structure formally captures one's intuitions of "consistent" partially ordered knowledge. The basic issues that we address can be roughly described as follows: (i) for what sorts of concept languages can one have a compact (finite) representation of the relevant knowledge?
and (ii) what operations can one do on these representations while preserving representability? The first issue motivates the singling out of convex spaces. If a subset of a poset is convex, we have a hope of representing it by its "fringes". Convexity by itself, however, is far from enough, and we need a deeper analysis in order to describe the situations in which finite representability holds. With regard to the second issue, we show that for languages satisfying some simple properties, the lattice-theoretic operations of join and meet preserve finite representability.
We consider a set P with a partial order ≤ defined on it. In relation to the version space theory, P represents the set of all patterns/concepts, and in relation to the ATMS, P is the set of all possible environments which a problem solver may consider. In both of these cases, the elements of P are sets of individuals and the partial order on P is defined by set inclusion. It is worth noting, however, that the theory in this section can be applied to any domain as long as there is a partial ordering defined on P.
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
Given an arbitrary subset of P, one can define the least convex space containing it. This and many other operations which interest us here are examples of closure operations on a set. Let us denote the power-set of any set X by P(X). A function f : P(P) → P(P) is called a closure operation if it is monotone and f(f(C)) = f(C) ⊇ C for any C ∈ P(P). The elements of CL(f) = {f(C) | C ∈ P(P)} are called the closed sets of f.
The properties of closure operations have been studied by many researchers [Bir84] and some of their results are relevant to our discussion here. In particular, it is worth noting that the range of a closure operation (i.e. CL(f)) is the same as its set of fixed points. Furthermore, one can recover the closure operation from its set of fixed points.
Thus one can often think of a closure operation in these two ways. In particular, there is an operation c : P(P) → P(P) defined by
c(C) = {p ∈ P | ∃p1, p2 ∈ C s.t. p1 ≤ p ≤ p2},
called the convex closure. Its fixed points are called convex spaces. It can be shown that the set of closed sets of a closure operation forms a complete lattice under set inclusion.
In actual applications of convex spaces, it is usually not practical to represent and perform computation on entire spaces, due to their sizes. A solution to this problem is to represent the spaces by their boundary sets and to restrict the computations to these sets only. This is the basic data-representation insight in the work which we will discuss below. Given any subset C ∈ P(P), let us denote the set of minimal elements of C by MIN(C) and the set of maximal elements of C by MAX(C). We say that C is representable by boundary sets if
∀p ∈ C, ∃s ∈ MIN(C), ∃g ∈ MAX(C), s.t. s ≤ p ≤ g.
Furthermore, if MIN(C) and MAX(C) are both finite, then C is said to be finitely representable.
The most common operations applied to version spaces and convex spaces are the set union, set intersection, join and meet operations. Our goal is to present a criterion for the pattern language so that finite representability is preserved under some of these commonly used operations. For simplicity, we restrict ourselves to posets P which are finitely representable.
Definition: Given p1, p2 ∈ P and S, G ⊆ P both finite, we define:
∨(p1, p2, G) = MIN({p ∈ P | p1 ≤ p, p2 ≤ p, ∀g ∈ G, g ≰ p}),
∧(p1, p2, S) = MAX({p ∈ P | p ≤ p1, p ≤ p2, ∀s ∈ S, p ≰ s}).
The poset P is said to have the MW property if, for all p1, p2 ∈ P:
1. ∨(p1, p2, φ) and ∧(p1, p2, φ) are finite,
2. ∀p, p ≥ p1 and p ≥ p2 ⇒ ∃p' ∈ ∨(p1, p2, φ) s.t. p ≥ p',
3. ∀p, p ≤ p1 and p ≤ p2 ⇒ ∃p' ∈ ∧(p1, p2, φ) s.t. p ≤ p'.
Theorem 1 A poset P has the MW property iff the finitely representable convex spaces form a sublattice of the lattice of convex spaces.
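The boundary-set idea above can be sketched in a few lines of code. This is an illustrative toy, not the paper's implementation: the names `minimal`, `maximal`, and `convex_closure` are ours, and the poset is the power set of {1, 2, 3} ordered by inclusion.

```python
# Boundary sets and convex closure over a small finite poset.
# `leq` is the partial order; elements are frozensets ordered by inclusion.

def minimal(C, leq):
    """MIN(C): elements of C with no strictly smaller element in C."""
    return {p for p in C if not any(leq(q, p) and q != p for q in C)}

def maximal(C, leq):
    """MAX(C): elements of C with no strictly larger element in C."""
    return {p for p in C if not any(leq(p, q) and q != p for q in C)}

def convex_closure(C, P, leq):
    """c(C) = {p in P | there exist p1, p2 in C with p1 <= p <= p2}."""
    return {p for p in P
            if any(leq(p1, p) for p1 in C) and any(leq(p, p2) for p2 in C)}

# Toy poset: subsets of {1,2,3} under inclusion.
P = [frozenset(s) for s in ([], [1], [2], [3], [1, 2], [1, 3], [2, 3], [1, 2, 3])]
leq = lambda a, b: a <= b  # set inclusion

C = {frozenset([1]), frozenset([1, 2, 3])}
cc = convex_closure(C, P, leq)
# cc contains exactly the sets between {1} and {1,2,3};
# MIN(cc) = {{1}} and MAX(cc) = {{1,2,3}} recover the boundary.
```

Here the whole convex space `cc` is recoverable from its two boundary sets, which is the representation the paper exploits.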
For a poset P, let F(P) be the set of finitely representable convex subsets of P, ordered by superset inclusion, that is, C ≤ C' iff C ⊇ C'. It is possible to provide an abstract representation of F(P) in terms of pairs of finite sets, using some order-theoretic concepts taken from work on operators which are called powerdomains in the semantics of programming languages (for example, see [Gun91] and the references there). Suppose U and V are finite subsets of P. We define three binary relations as follows:
- U ⊑♭ V iff for every x ∈ U there is a y ∈ V such that x ≤ y,
- U ⊑♯ V iff for every y ∈ V there is an x ∈ U such that x ≤ y,
- U ⊑ V iff U ⊑♭ V and U ⊑♯ V.
In a poset P, an anti-chain A is a subset of P with the property that, whenever p, q ∈ A and p ≤ q, then p = q.
Definition: Let P be a poset. A fringe is a pair (S, G) such that
1. S, G are finite anti-chains of P, and
2. S ⊑♭ G.
We define the poset G(P) to be the set of fringes of P under the ordering (S, G) ≤ (S', G') iff S ⊑♯ S' and G' ⊑♭ G, and we have the following:
Theorem 2 For any poset P, the poset F(P) of finitely representable convex subsets is isomorphic to the poset G(P) of fringes,
showing that it is possible to represent the operations on convex spaces via corresponding operations on the space of fringes.
GUNTER, ET AL. 501
3. Version Spaces
The version space algorithm, which was introduced by Mitchell [Mit78], can be formulated using the ideas of the previous section. Our goal in this section is to characterize the order-theoretic conditions under which the version space representation is legitimate. Since it is not the case that every concept space supports the version space learning technique, it is desirable to provide some simple condition which will certify, for a given concept space, that the algorithm is sound. Such conditions have been proposed in several discussions of version spaces, including the original work [Mit78] and a more recent textbook account [GN87].
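The two half-orderings and the fringe ordering can be sketched concretely. Caveat: the ♭/♯ direction assignments below are reconstructed from garbled source text using the standard powerdomain conventions, and the function names are ours.

```python
# Half-orderings on finite antichains and the derived fringe ordering.
# `leq` is the order on the underlying poset P (here: set inclusion).

def leq_flat(U, V, leq):
    """U [=b V: every x in U lies below some y in V."""
    return all(any(leq(x, y) for y in V) for x in U)

def leq_sharp(U, V, leq):
    """U [=# V: every y in V lies above some x in U."""
    return all(any(leq(x, y) for x in U) for y in V)

def fringe_leq(f1, f2, leq):
    """(S, G) <= (S', G') iff S [=# S' and G' [=b G."""
    (S, G), (S2, G2) = f1, f2
    return leq_sharp(S, S2, leq) and leq_flat(G2, G, leq)

# Subsets of {1,2,3} under inclusion: the fringe ({1}, {{1,2,3}})
# represents the whole interval between {1} and {1,2,3}; shrinking the
# interval to ({1,2}, {{1,2}}) moves *up* in the fringe ordering, since
# F(P) is ordered by superset inclusion (larger convex set = smaller).
incl = lambda a, b: a <= b
big = ({frozenset({1})}, {frozenset({1, 2, 3})})
small = ({frozenset({1, 2})}, {frozenset({1, 2})})
fringe_leq(big, small, incl)   # the larger convex set sits below
```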
However, the admissibility conditions which have been given are sufficient conditions that are too weak to support many of the examples of concept spaces for which the version space algorithm is sound (and, indeed, efficient).
For the rest of this note, we write P for the collection of sets of individual data (observations), and U_P for the set of individual data. The partial ordering on P is defined by set inclusion ⊆. The notion of a version space was invented to facilitate the learning of a concept (pattern) when we know a set of its positive instances and a set of its negative instances. The notations given above can be interpreted in this context by taking P as the set of concepts; U_P is then the set of all possible instances. We define an operation κ_P (the subscript P is omitted when P is obvious from the context) by taking
κ(Γ, Δ) = {p ∈ P | Γ ⊆ p ⊆ Δ̄}
where Γ, Δ ⊆ U_P and Δ̄ is the complement of Δ in U_P. Here Γ represents the positive instances and Δ the negative instances.
Definition: A subset C of P is called a version space if there exist Γ, Δ ⊆ U_P such that C = κ(Γ, Δ). The two subsets P and φ are version spaces, which arise respectively when Γ = Δ = φ and when Γ ∩ Δ ≠ φ.
A version space is an instance of a convex space. Moreover, if we define the mapping d : P(P) → P(P) taking each C to the smallest version space containing it, then we have the following:
Theorem 3 d is a closure operation whose fixed points are exactly the version spaces.
Since it can be shown that version spaces are convex spaces, and the meet operation on version spaces is the same as that on convex spaces, several results that are true for convex spaces are also true for version spaces. In particular, we have the following:
Theorem 4 A poset P has the MW property if and only if the meet of every two finitely representable version spaces is finitely representable.
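The operation κ is direct to compute when P is small and finite. A minimal sketch (toy concept space of our own choosing, not from the paper):

```python
# kappa(Gamma, Delta) = {p in P | Gamma ⊆ p ⊆ complement(Delta)},
# over a tiny finite concept space whose concepts are sets of instances.

U = {1, 2, 3, 4}                                          # all instances, U_P
P = [frozenset(s) for s in ([], [1], [1, 2], [1, 2, 3], U)]  # a chain of concepts

def kappa(gamma, delta):
    """Concepts consistent with positives `gamma` and negatives `delta`."""
    co_delta = U - delta
    return {p for p in P if gamma <= p <= co_delta}

vs = kappa({1}, {4})
# every concept in P that contains instance 1 and excludes instance 4
```

With no observations at all, `kappa(set(), set())` is the whole concept space, matching the case Γ = Δ = φ above.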
In a concept learning system using the version space representation, the new version space after the addition of some new observations is the same as the intersection (merging) of the current version space with the version space representing the new observations. Thus, the MW property is a necessary and sufficient condition for ensuring the preservation of finite representability in version space merging.
Note, however, that the MW property does not imply that the version spaces we are interested in are finitely representable. It is possible that there exist observations that cannot be represented by finitely representable version spaces, in which case the preservation of finite representability under merging does not help at all. Therefore, we need another important property of a concept language: whether or not every set of finite observations can be captured by a finitely representable version space. We call this the admissibility property. In this subsection, we give the formal definition of admissibility and show two different criteria for ensuring the admissibility of a concept language.
Definition: A given P is said to have the S property if for all x ∈ U_P, κ({x}, φ) and κ(φ, {x}) are finitely representable. Furthermore, P is said to be admissible if, given any Γ, Δ ⊆ U_P with Γ ∪ Δ finite and nonempty, κ(Γ, Δ) is finitely representable.
The following lemma allows us to check the admissibility of a pattern language by verifying that when the observations are all positive or all negative, the version space is finitely representable.
Lemma 5 A poset P is admissible if and only if for all non-empty finite Γ, Δ ⊆ U_P, both κ(Γ, φ) and κ(φ, Δ) are finitely representable.
The next theorem allows one to check for admissibility by checking finite representability in some special cases.
Theorem 6 Given a P, if P has the MW and S properties, then P is admissible.
Note that every finite P has the MW and S properties.
Hence, every finite P is admissible. Furthermore, every convex space in such a P is finitely representable.
In some problem domains, the notion of a concept consistent with an observation may be defined differently from the instance-inclusion definition (see [Hir90]). However, with the S property suitably modified, we can derive the same sufficient conditions as those of Theorem 6 to guarantee that the set of concepts consistent with any non-empty finite set of observations is finitely representable. This latter property can be viewed as the generalized definition of admissibility.
To appreciate the point of having a condition for verifying admissibility, we consider an example adapted from [Mit78]. Let I = (0,1) × (0,1) be the open unit rectangle in the two-dimensional real plane. A real interval is defined to be a set of real numbers having one of the following forms:
[l, u] = {x | l ≤ x ≤ u}    (l, u) = {x | l < x < u}
[l, u) = {x | l ≤ x < u}    (l, u] = {x | l < x ≤ u}
We define the rectangular concept space P as the set of subsets p ⊆ I such that p is the product of a pair of intervals (a rectangle) or p is the empty set φ. The set of observations is the subset of P of the form [x, x] × [y, y] (points). Given a set of positive and negative observations, the learning task is to find the set of concepts in P that are consistent with the observations. For each positive observation [x, x] × [y, y], its version space is the convex closure of {I} and {[x, x] × [y, y]}. For each negative observation [x, x] × [y, y], its version space is the convex closure of {(x, 1) × (0, 1), (0, x) × (0, 1), (0, 1) × (y, 1), (0, 1) × (0, y)} and {φ}. Hence, P has property S. Given a pair of rectangles p1 = [lx, ux] × [ly, uy] and p2 = [lx', ux'] × [ly', uy'], we have:
∨(p1, p2, φ) = {[min(lx, lx'), max(ux, ux')] × [min(ly, ly'), max(uy, uy')]},
∧(p1, p2, φ) = {[max(lx, lx'), min(ux, ux')] × [max(ly, ly'), min(uy, uy')]}.
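The min/max boundary formulas for the rectangle example translate directly to code. A sketch restricted to closed intervals for brevity (representation and names are ours):

```python
# A rectangle [lx,ux] x [ly,uy] is a 4-tuple (lx, ux, ly, uy).

def join(p1, p2):
    """The single most general rectangle covering both p1 and p2."""
    (lx, ux, ly, uy), (lx2, ux2, ly2, uy2) = p1, p2
    return (min(lx, lx2), max(ux, ux2), min(ly, ly2), max(uy, uy2))

def meet(p1, p2):
    """The single most specific rectangle contained in both p1 and p2."""
    (lx, ux, ly, uy), (lx2, ux2, ly2, uy2) = p1, p2
    return (max(lx, lx2), min(ux, ux2), max(ly, ly2), min(uy, uy2))

p1 = (0.1, 0.4, 0.2, 0.5)   # [0.1,0.4] x [0.2,0.5]
p2 = (0.3, 0.6, 0.1, 0.3)
join(p1, p2)   # smallest rectangle containing both
meet(p1, p2)   # largest rectangle inside both
```

Both operations return a single rectangle, which is the finiteness condition (1) of the MW property holding with singleton sets.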
Besides being finite, these also satisfy the second and third conditions of the MW property. Similar results can be derived for the other combinations of interval types. Hence, P has the MW property and is therefore admissible. Note that the version spaces of P typically include infinite (and even uncountable) chains, so an admissibility condition which precludes such properties in the concept space will fail to cover this example.
4. Assumption-based TMS's
In this section we examine Johan de Kleer's formulation of Truth Maintenance Systems known as Assumption-based TMS's [dK86a, dK86b]. The formulation involves the inclusion, in the nodes of a TMS, of labels indicating the minimal sets of assumptions which would make the datum of the node true. These sets of assumptions must be recalculated as new information is acquired, so they must be represented in a way which makes this recalculation efficient. Sets of assumptions form a finite lattice (under set inclusion), and it is possible to show that the sets of assumptions making the datum of a node true form a convex space. Hence such sets of assumptions may be represented using their boundary sets, that is, the sets of greatest and least elements of the convex space. In fact, the existence of a common lower bound of inconsistent sets of assumptions makes it possible to simplify this representation so that only the greatest elements are required.
Our goal in this section is to look first at the basic ATMS and show how the calculations with label sets can be performed using computations on the boundaries of the convex spaces that we have considered in the previous sections. Expressing the basic ATMS in this way is not difficult, but the concept of an extended ATMS [dK86b] provides a more interesting challenge. We show how to recalculate a label using a formula involving closure operations. These closure operations can be computed using the meet and join algorithms for convex spaces.
The final part of the section discusses how this approach compares to the use of hyper-resolution rules in [dK86b].
Basic ATMS. The basic ATMS [dK86a] consists of a set of nodes representing the problem-solving data, a set of Horn clause justifications which describe how some nodes can be derived from other nodes, and a set of assumptions, which are positive literals. Assumptions form the foundation to which every datum's support can be traced. The main purpose of the ATMS is to discover, for each node, the set of environments that will derive the node, where an environment is any set of assumptions. Note that the set of all environments forms a finite boolean lattice, called the environment lattice.
De Kleer classifies the set of nodes into premises, assumptions, assumed and derived nodes. Each of them is represented by the basic ATMS data structure of the form <datum, label, justifications>, where label is a set of environments and justifications is a set of Horn clauses of the form X1, ..., Xn ⇒ datum. Usually, there is a special node in the ATMS, denoted ⊥, which represents falsity. For the sake of convenience, we will adopt the convention of using the same symbol to denote a datum and the node representing it.
Given any node X that is not ⊥, we want to capture the set of environments that will derive X but not ⊥, i.e. the set of consistent environments that derive X. An important observation by de Kleer is that, given an environment that derives X, every superset of that environment will also derive X. Furthermore, given an inconsistent environment, every superset of that environment is also inconsistent. This implies that the set of consistent environments deriving X is a convex space under the ordering defined by set inclusion. In view of the convex space theory, we take U_P to be the set of all assumptions and P to be the set of all subsets of U_P. However, the partial ordering ≤ on P is defined to be the reverse of set inclusion, i.e.
≤ is ⊇. This partial order captures the notion of the generality of an environment. Since the set of assumptions is finite, the boolean lattice of environments is also finite and the MW property is clearly satisfied. Furthermore, convex spaces can always be represented by their (finite) boundaries. We use the following notation in our discussion: X is a node, V⊥ is the set of inconsistent environments, and V_X, where X ≠ ⊥, is the set of consistent environments that derive X.
Observe that each V_X is a convex space, hence it can be represented by finite boundary sets. Note that an environment p is in V_X, for X ≠ ⊥, iff there does not exist p' ∈ MAX(V⊥) such that p ≤ p' and there exists p'' ∈ MAX(V_X) such that p ≤ p''. Therefore, to encode the information in V_X, we need only store MAX(V_X) and MAX(V⊥). Since the latter is shared by every X, we can maintain it separately. In an ATMS, the label of X, denoted L_X, stores MAX(V_X), and the common MAX(V⊥) is stored in the label of the node ⊥, denoted L⊥ and called the nogoods.
Given a convex space C, it is said to be representable by label if C = {p ∈ P | (∀p' ∈ L⊥, p ≰ p') and (∃p'' ∈ MAX(C), p ≤ p'')}. It can easily be verified that, given any two convex spaces X and Y that are representable by label, their intersection and union are convex, representable by label, and equivalent to their meet and join respectively. Hence, if S and T are the upper boundary sets of X ∩ Y and X ∪ Y respectively, then by specializing the algorithms for the meet and join of convex spaces [GNPS90], we have the following formulas for computing S and T using only the labels of X and Y:
S = MAX({px ∪ py | px ∈ L_X, py ∈ L_Y, s.t. ¬∃p' ∈ L⊥, px ∪ py ≤ p'})   (1)
T = MAX(L_X ∪ L_Y)   (2)
where L_X = MAX(X) and L_Y = MAX(Y).
One of the most significant operations in a basic ATMS is the addition or retraction of a justification, which triggers a recalculation of the label for each affected node.
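Formulas (1) and (2) can be sketched over environments represented as frozensets of assumptions. The helper names are ours; recall that ≤ is reverse inclusion, so unions are more specific and MAX keeps the ⊆-minimal sets.

```python
# Labels are sets of environments (frozensets of assumption names).

def maximal_envs(envs):
    """Most general environments: those with no proper subset present."""
    return {e for e in envs if not any(f < e for f in envs)}

def label_meet(Lx, Ly, nogoods):
    """Formula (1): upper boundary of the intersection of two labels.
    An environment is pruned if it subsumes (is a superset of) a nogood."""
    candidates = {px | py for px in Lx for py in Ly
                  if not any(px | py >= n for n in nogoods)}
    return maximal_envs(candidates)

def label_join(Lx, Ly):
    """Formula (2): upper boundary of the union of two labels."""
    return maximal_envs(Lx | Ly)

Lx = {frozenset({"A"})}
Ly = {frozenset({"B"}), frozenset({"A", "C"})}
nogoods = {frozenset({"A", "C"})}
label_meet(Lx, Ly, nogoods)   # {A} u {A,C} is pruned by the nogood
label_join(Lx, Ly)            # {A,C} is subsumed by the more general {A}
```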
Let V_i^k be the set of consistent environments that derive the ith node (in the antecedent) of the kth justification of a node X. To recalculate the label of X, we first compute the intersection of the V_i^k for the antecedent nodes of a justification (i.e. with k fixed), followed by taking the union of these results over all k: ∪_k ∩_i V_i^k = ∨_k ∧_i V_i^k. We will generally need to recompute the above many, but finitely many, times before we arrive at the correct label. Since each application of the intersection and union operations derives a convex space representable by label, we can make use of formulas (1) and (2) to calculate the new label of X.
Extended ATMS. In [dK86b], the basic ATMS is extended with the notion of a primitive disjunction of assumptions, written choose(C1, ..., Cn), where each Ci, 1 ≤ i ≤ n, is a distinct assumption. The interpretation of the primitive disjunction is that at least one of the Ci must be true in the ATMS. Primitive disjunctions can be used to encode negated assumptions; hence, all propositional expressions can be encoded using Horn clause justifications and primitive disjunctions only. With the introduction of primitive disjunctions, the set V⊥ of inconsistent environments is expanded to include any environment p such that all supersets of p which satisfy every primitive disjunction will derive ⊥. Similarly, the consistent environments deriving a node X, i.e. V_X, are now expanded to include any consistent environment p such that all the consistent supersets of p which satisfy every primitive disjunction will derive X.
With the introduction of primitive disjunctions, the label of a node calculated by the basic ATMS algorithm may no longer be correct. To see why, consider the following example from [dK86b]. Suppose the set of justifications is {A ⇒ a; B ⇒ b; C ⇒ c; c, a ⇒ ⊥; c, b ⇒ ⊥}. The label for the proposition ⊥ is {{A, C}, {B, C}}.
Adding choose({A}, {B}) causes this label to change to {{C}}, because one of A or B holds in the new ATMS state. The label propagation algorithm described in the previous section fails to make this correction because it handles Horn clause justifications only, and the choose statement is non-Horn. To solve this problem, de Kleer extended his algorithm to correct the labels using two similar hyper-resolution rules, one for the ⊥ node and one for the rest of the nodes.
In the following, we reformulate the problem using convex spaces, as we did for the basic ATMS. However, we extend the choose operation to allow encoding of complex disjunctions, which can have any set of assumptions as an argument, i.e. any DNF formula of assumptions. This allows greater flexibility and efficiency in the encoding of knowledge in an ATMS.
First we need to define an important operation needed in the calculation of the label in an extended ATMS. Let (P, ≤) be any lattice. A subset C ⊆ P is downward closed if p ∈ C and p' ≤ p implies p' ∈ C. For any subset S ⊆ P, the downward closure of S is the set ↓S = {p ∈ P | ∃p' ∈ S, p ≤ p'}. Now, given any subset T ∈ P(P), the operation Φ_T is defined on downward closed sets C ∈ P(P) as:
Φ_T(C) = {p ∈ P | ∀t ∈ T, p ∧ t ∈ C}.
An important result relating to the Φ operation is:
Theorem 7 Given a set of disjunctions {choose(t_1^i, ..., t_{n_i}^i) | 1 ≤ i ≤ n}, the algorithm for the corresponding extended ATMS can be described in three steps: (1) apply the basic ATMS algorithm to the set of justifications; (2) let C_X = V_X ∪ V⊥ = V_X ∨ V⊥; (3) let V⊥ = Φ_{T_1} ∘ ... ∘ Φ_{T_n}(V⊥) and V_X = Φ_{T_1} ∘ ... ∘ Φ_{T_n}(C_X) − V⊥, where T_i = {t_1^i, ..., t_{n_i}^i}.
To calculate Φ, we use the following result:
Theorem 8 If P is an environment lattice, then given any downward closed C with MAX(C) = {s_1, ..., s_m}, and any T = {t_1, ..., t_n}, then:
Φ_T(C) = ∧_{1≤j≤n} ∨_{1≤i≤m} E_i^j,   (3)
where E_i^j = {p ∈ P | p ∧ t_j ≤ s_i}.
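The effect of Φ_T on the running example can be checked by brute force. This is an illustrative sketch (enumerating all environments, which the paper's boundary-based algorithm specifically avoids); a downward-closed set is represented by its boundary of ⊆-minimal sets, since ≤ is reverse inclusion.

```python
from itertools import chain, combinations

def member(p, boundary):
    """p is in the downward-closed set iff it contains a boundary element."""
    return any(p >= b for b in boundary)

def phi(T, boundary, universe):
    """Boundary of Phi_T(C) = {p | for all t in T, p ∪ t ∈ C},
    computed naively over every subset of the assumption universe."""
    subsets = [frozenset(s) for s in
               chain.from_iterable(combinations(sorted(universe), r)
                                   for r in range(len(universe) + 1))]
    inside = {p for p in subsets if all(member(p | t, boundary) for t in T)}
    return {p for p in inside if not any(q < p for q in inside)}

universe = {"A", "B", "C"}
nogoods = {frozenset({"A", "C"}), frozenset({"B", "C"})}
T = [frozenset({"A"}), frozenset({"B"})]     # choose({A}, {B})
phi(T, nogoods, universe)                    # the corrected nogood set
```

The result is {{C}}: exactly the label correction that the basic Horn-only propagation misses in the example above.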
It can be shown that every E_i^j is downward closed and that its upper boundary set is {s_i − t_j}. Furthermore, the label set of Φ_T(C) can be calculated using the modified convex space meet and join procedures, i.e. formulas (1) and (2). This means that we can calculate, for each node X in the extended ATMS, the new label set L_X that takes disjunctions into consideration. Note that if Σ is the set of all functions with domain {1, ..., n} and codomain {1, ..., m}, we can rewrite formula (3) as:
Φ_T(C) = ∨_{σ∈Σ} ∧_{1≤j≤n} E_{σ(j)}^j.   (4)
In the case where only primitive disjunctions are considered, it can be shown that de Kleer's extended ATMS algorithm in effect calculates the label of a node using formula (4), with the hyper-resolution rules used to prune away some of the terms that will not produce any new environments. However, in our implementation of the extended ATMS,¹ we make use of an optimized algorithm that is based on formula (3).
There are several advantages to using this new algorithm over de Kleer's hyper-resolution approach. The absence of resolution contributes substantially to the performance of our algorithm in comparison to a hyper-resolution based approach. This advantage is similar to the way ATMS's were an improvement over the earlier Truth Maintenance Systems [Doy79], because the labeling eliminates the need to reevaluate some computations multiple times during backtracking. In our case, we eliminate many redundant pattern-matching operations. Another advantage is the easier encoding and potential improvement in efficiency, because a formula in disjunctive normal form can be asserted as a single choose statement.
5. Conclusions
We have formulated a theory of convex spaces of partially ordered sets which includes algorithms for basic operations on finitely representable convex spaces in the presence of a simple assumption on the partial order.
Using this theory, we can also describe conditions that ensure the admissibility of the version space representation of a concept description language.
We then show how convex spaces can be used to describe the label-update algorithm in de Kleer's basic assumption-based truth maintenance systems. This idea suggests a new approach to the label-update algorithm for the extended ATMS. Our approach generalizes the extended ATMS choose operations to allow the use of disjunctions such as choose({A, B, C}, {D, E, F}). This provides additional flexibility in expressing constraints and also contributes to the efficiency of label updating. Our new label-update algorithm does not require the introduction of hyper-resolution rules. Instead, we use an approach similar to that employed in the version space algorithm to recalculate labels. This simplifies the description of how labels are updated and makes the extended ATMS label-update algorithm more consistent with the algorithm used for the basic ATMS.
The convex space treatment of the ATMS is more than just a new algorithm for the ATMS. In a forthcoming paper [Nga91], we show that it leads to a formal semantics for both the basic and extended ATMS. Furthermore, we can prove that if negation is introduced into the ATMS architecture, we can apply the Φ operation to calculate the prime implicants [RdK87, KT90] of any set of DNF propositional formulas. This algorithm has been implemented on top of our ATMS implementation.
¹This is currently implemented on top of de Kleer's basic ATMS.
It is especially our hope that the abstraction of the convex space algorithms which we have discussed will lead to new insights in other areas which do, or could, employ similar structures for the representation of knowledge.
References
[Bir84] Garrett Birkhoff. Lattice Theory. American Mathematical Society, 1984.
[dK86a] Johan de Kleer. An assumption-based TMS. Artificial Intelligence, 28:127-162, 1986.
[dK86b] Johan de Kleer. Extending the ATMS. Artificial Intelligence, 28:163-196, 1986.
[Doy79] Jon Doyle. A truth maintenance system. Artificial Intelligence, 12:231-272, 1979.
[FMH90] Yasushi Fujiwara, Yumiko Mizushima, and Shinichi Honiden. On logical foundations of the ATMS. Proc. of ECAI-90 Workshop on Truth Maintenance Systems, 1990.
[GN87] M. R. Genesereth and N. J. Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann Publishers, 1987.
[GNPS90] Carl A. Gunter, Teow-Hin Ngair, Prakash Panangaden, and Devika Subramanian. The Common Order-theoretic Structure of Version Spaces and ATMS's. Technical Report MS-CIS-90-86, University of Pennsylvania, 1990.
[Gun91] Carl A. Gunter. The mixed powerdomain. Theoretical Computer Science, 1991. In press.
[Hir90] Haym Hirsh. Incremental Version Space Merging: A General Framework for Concept Learning. Kluwer Academic Publishers, 1990.
[KT90] Alex Kean and George Tsiknis. An incremental method for generating prime implicants/implicates. J. Symbolic Computation, 185-206, 1990.
[Mit78] Tom Mitchell. Version Spaces: An Approach to Concept Learning. PhD thesis, Stanford University, 1978.
[Mit82] Tom Mitchell. Generalization as search. Artificial Intelligence, 18:203-226, 1982.
[Nga91] Teow-Hin Ngair. ATMS and its extensions. Forthcoming, 1991.
[RdK87] Raymond Reiter and Johan de Kleer. Foundations of assumption-based truth maintenance systems: preliminary report. Proc. of AAAI-87, 183-188, 1987.
Constructive Induction On Domain Information*
James P. Callan and Paul E. Utgoff
Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003
callan@cs.umass.edu, utgoff@cs.umass.edu
Abstract
It is well-known that inductive learning algorithms are sensitive to the way in which examples of a concept are represented. Constructive induction reduces this sensitivity by enabling the inductive algorithm to create new terms with which to describe examples. However, new terms are usually created as functions of existing terms, so an extremely poor initial representation makes the search for new terms intractable. This work considers inductive learning within a problem-solving environment. It shows that information about the problem-solving task can be used to create terms that are suitable for learning search control knowledge. The resulting terms describe the problem-solver's progress in achieving its goals. Experimental evidence from two domains is presented in support of the approach.
'by hand' can take a long time, because it is essentially a manual search of the space of possible vocabularies. One solution, known as constructive induction, is to enable the learning program to construct new terms for its vocabulary. Most constructive induction algorithms define new terms as Boolean or arithmetic functions of previously known terms. There are infinitely many such functions, so attention must be restricted to a small subset. Most systems search the space of possible vocabularies heuristically, using either the structure of the evolving concept (e.g. CITRE [Matheus & Rendell, 1989]) or the behavior of the learning algorithm (e.g. STAGGER [Schlimmer & Granger, 1986]) to guide search. In both cases, an extremely poor initial vocabulary makes the search intractable.
Introduction One reason that inductive learning algorithms are not more widely used in problem-solving systems is their sensitivity to the representation of information. Some of the best-known successes in machine learning have been due in part to careful or fortunate choices of repre- sentations (e.g. AM [Lenat & Brown, 19841). When the same algorithms are applied with less-carefully chosen vocabularies, they may fail to learn anything useful. This paper addresses the problem of generating an initial vocabulary for inductive learning. The prob- lem arises, and is naturally addressed, when inductive learning is considered in the context of a problem- solving system. Many problem-solving methods, for example Hillclimbing, require an evaluation function that maps each search state to a number. Such a func- tion is not normally provided with a problem specifica- tion. This paper shows that available domain knowl- edge can be converted to a set of numeric features, for which an evaluation function can be trained via ex- isting methods. It is assumed that such features will be relevant to the domain, because they are derived directly from domain knowledge. The difficulty of constructing an appropriate vo- cabulary is sometimes underestimated, because many tasks appear to have obvious representations. For ex- ample, one might expect to describe a chess board by listing the location of each piece. However, obvious representations are sometimes inadequate. Quinlan spent two person-months designing a vocabulary that enabled ID3 to classify certain chess positions [Quin- lan, 19831. In general, constructing a good vocabulary Feature Construction There is wide variation in what is considered domain knowledge. It is desirable to require a minimum of information, so that the resulting methods are widely applicable. This work requires that the domain knowl- edge describe, in first order predicate calculus, the problem-solving operators and the goal state. 
First order predicate calculus was chosen because it is general and well-known.

*This research was supported by a grant from the Digital Equipment Corporation, and by the Office of Naval Research through a University Research Initiative Program, under contract N00014-86-K-0764.

The specification of the goal state is neither the desired evaluation function nor a good vocabulary for learning the desired evaluation function, because it does not distinguish among states that are not goals.

614 LEARNING AND EVALUATION FUNCTIONS
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

One approach to creating a better vocabulary is to increase its resolution by decomposing the description of goal states into a set of functions, each of which maps search states to feature values. The additional functions, or features, may each change value at a different point, thereby making explicit the problem-solver's progress toward a goal. If this decomposition can be applied recursively, resolution is further increased. Ideally, the problem-solver's progress from state to state is reflected in a change of at least one feature value.

Statements in first order predicate calculus may or may not be quantified, so it makes sense to develop some transformations that operate only upon Boolean expressions, and other transformations that operate only upon quantifiers. A set of four transformations, developed as part of this research and defined below, is sufficient to cover each type of Boolean expression and each type of quantifier.

LE: applies to Boolean expressions in which a logical operator (∧, ∨, ¬) has the highest precedence.

AE: applies to Boolean expressions in which an arithmetic operator (=, ≠, <, ≤, >, ≥) has the highest precedence.

UQ: applies to universally quantified expressions.

EQ: applies to existentially quantified expressions.

LE, AE, and UQ transformations are presented below. An EQ transformation is under development.
The LE Transformation

The LE transformation applies to Boolean expressions in which a logical operator (∧, ∨, ¬) has the highest precedence. Its purpose is to transform a Boolean function into a set of Boolean functions, each of which represents some component of the original function. The resulting functions, called LE terms, map search states to Boolean values. LE terms are sometimes also called subgoals, or subproblems [Callan, 1989], because they represent part of the original problem specification.

The LE transformation creates new statements by eliminating the n-ary logical operator with the highest precedence. Its definition is:

LE((Qi)^m (b1 ∧ ... ∧ bn)) ⇒ {(Qi)^m b1, ..., (Qi)^m bn}
LE((Qi)^m (b1 ∨ ... ∨ bn)) ⇒ {(Qi)^m b1, ..., (Qi)^m bn}

Qi is one of m quantifiers. It matches either ∀vi∈Si or ∃vi∈Si. bj is one of n Boolean statements. After the new Boolean statements are created, they are optimized by removing unnecessary quantifiers.

For example, the statement ∃a∈A, ∃b∈B, ∃c∈C, p1(a,b) ∧ p2(a,c) is transformed into the statements ∃a∈A, ∃b∈B, p1(a,b) and ∃a∈A, ∃c∈C, p2(a,c).

The statement to which the LE transformation is first applied is the specification of a goal state. The resulting statements can be considered subgoals. The LE transformation can be applied recursively, yielding subgoals of subgoals. If the transformation is applied wherever possible, the result is a subgoal hierarchy, with the goal at the root and atomic statements at the leaves. In general, a subgoal hierarchy contains between c + 1 and 2c subgoals, where c is the number of connectives in the specification of the goal state.

The LE transformation explicitly represents subgoals, but it does not explicitly represent the dependencies among them. This characteristic occurs because variable bindings are not forced to be consistent across all subgoals. For example, the statement ∃a∈A, p1(a) ∧ p2(a) is decomposed into the statements ∃a∈A, p1(a) and ∃a∈A, p2(a).
The conjunction of the two statements is ∃a∈A, p1(a) ∧ ∃a∈A, p2(a), which is not equivalent to the original statement. Thus the subgoals are represented as independent, when in fact they are interdependent. This characteristic is acceptable for two reasons. First, the dependency information remains available, although implicit, in the statement from which the subgoals were generated (i.e. subgoals that occur higher in the subgoal hierarchy). Second, given a set of LE terms that explicitly describe subgoals, the inductive learning algorithm can infer the existence of, and then represent explicitly, those dependencies necessary to learn the concept.

The AE Transformation

The AE transformation applies to Boolean expressions in which an arithmetic operator (=, ≠, <, ≤, >, ≥) has the highest precedence. The AE transformation creates a function, called an AE term, that calculates the difference between the operands of the arithmetic operator. If the original statement contains quantifiers, then the AE transformation calculates the average difference between the operands, over all permutations of variable bindings. The AE transformation could conceivably calculate other relationships between the operands, for example total difference, minimum difference or maximum difference. The average difference was chosen because it summarizes the relationships among a population of objects, rather than one relationship between a single pair of objects (as would the minimum or maximum difference).

If one operand of an expression is a constant, the resulting AE term measures the average distance to a threshold along some domain-dependent dimension. If neither operand is a constant, it measures the average distance between pairs of objects whose locations vary along the dimension.
The problem-solver's task is either to drive the distance to zero (when the operator is =), to maintain a distance greater than zero (when the operator is ≠), or to enforce a particular ordering (when the operator is >, ≥, <, or ≤). In each case, an explicit representation of the average distance is useful. For example, a subgoal for a circuit layout problem might require that all signal paths be less than one inch long (∀p∈S, length(p) < 1). The corresponding AE term indicates how close an average signal path is to the threshold.

The UQ Transformation

The UQ transformation applies to universally quantified expressions. Expressions with universal quantifiers require that every member of a set of objects satisfy the Boolean expression. The UQ transformation produces a function, called a UQ term, that calculates the percentage of permutations of variable bindings satisfying the Boolean expression.

UQ terms are useful because they indicate 'how much' of the subgoal is satisfied in a given state. For example, if a subgoal for a blocks-world problem requires that all blocks be on a table (∀b∈B, on(b, Table)), then the corresponding UQ term indicates the percentage of blocks on the table.

Use of Problem-Solving Operators

The previous sections assume that all of the information about the goal state resides in its description, but that may not be the case in practice. It is usually desirable to move as many of the goal state requirements as possible into the preconditions of the problem-solving operators, so that fewer search states are generated. The advantage of doing so is that search speed may increase dramatically [Mostow & Bhatnagar, 1987].

The practice of moving goal state requirements into the preconditions of problem-solving operators requires that the previously described transformations also be applied to the preconditions of each operator.
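The AE and UQ terms described above admit a direct sketch. The state encodings and function names below are our own illustrative assumptions, not CINDI's generated output:

```python
# Sketch of an AE term and a UQ term (our own encodings, not CINDI output).

def ae_path_length(paths, length, threshold=1.0):
    """AE term for 'forall p in S, length(p) < threshold': the average
    difference between each path's length and the threshold."""
    return sum(length[p] - threshold for p in paths) / len(paths)

def uq_on_table(blocks, on):
    """UQ term for 'forall b in B, on(b, Table)': the percentage of
    variable bindings (blocks) satisfying the expression."""
    return 100.0 * sum(1 for b in blocks if on[b] == "Table") / len(blocks)

# Invented example states:
assert ae_path_length(["p1", "p2"], {"p1": 0.5, "p2": 2.5}) == 0.5
assert uq_on_table(["A", "B", "C", "D"],
                   {"A": "B", "B": "Table", "C": "Table", "D": "Table"}) == 75.0
```

Because both functions return numbers that move as the problem-solver makes partial progress, they are usable directly as features for training an evaluation function.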
At worst, doing so allows the creation of some useless terms, many of which will be identified and deleted by the syntactic pruning rules (discussed below). The additional cost is offset by the guarantee that all of the information about the goal state will be found, whether it resides in the description of the goal state or in the preconditions of the operators.

Pruning

The LE transformation creates terms by extracting embedded expressions from a statement in first order predicate calculus. Sometimes taking an expression out of context causes it to become constant, or to duplicate other such expressions. Constant and redundant terms are undesirable because they needlessly increase the dimensionality of the space searched by the inductive algorithm. They also violate the assumption of linear independence among features, upon which some inductive algorithms depend [Young, 1984]. A set of three syntactic pruning rules recognize and delete constant or redundant LE terms. They are:

- Duplicate pruning: Deletes terms that are syntactically equivalent under canonical variable naming and quantifier ordering.

- Quantified variable relation pruning: Deletes expressions that contain only equality or inequality relations between quantified variables (e.g. ∀v1∈S, ∀v2∈S, v1 = v2).

- Static relation pruning: The adjacency relationship between squares of a chessboard is an example of a constant relation. Constant relations are currently identified manually, although an automatic identification algorithm is under development. Once constant relations are identified, terms that involve only constant relations are deleted automatically.

The remaining terms are potentially useful. They must be evaluated using other methods, for example based upon feedback from the learning algorithm.
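The LE transformation, including the removal of quantifiers that a subgoal no longer uses, can be sketched concretely. The tuple encoding of statements below is our own assumption, not CINDI's internal representation, and the sketch handles connectives over atoms only:

```python
# Sketch of the LE transformation: eliminate the top-level n-ary logical
# operator, copying onto each operand only the quantifiers it actually uses.
# Statements are encoded as nested tuples (our own assumed encoding):
#   ("exists"|"forall", var, set_name, body), ("and"|"or", b1, ..., bn),
#   ("atom", predicate_name, (arg1, ..., argk)).

def strip_quantifiers(stmt):
    """Split a statement into its quantifier prefix and quantified body."""
    prefix = []
    while stmt[0] in ("exists", "forall"):
        kind, var, dom, body = stmt
        prefix.append((kind, var, dom))
        stmt = body
    return prefix, stmt

def requantify(prefix, body):
    """Re-wrap a body in a quantifier prefix, innermost quantifier last."""
    for kind, var, dom in reversed(prefix):
        body = (kind, var, dom, body)
    return body

def vars_of(stmt):
    """Variables mentioned in an atom or connective tree (sketch only)."""
    if stmt[0] == "atom":
        return set(stmt[2])
    return set().union(*(vars_of(s) for s in stmt[1:]))

def le_transform(stmt):
    """Split a conjunction or disjunction into one subgoal per operand,
    keeping only the quantifiers each operand needs."""
    prefix, body = strip_quantifiers(stmt)
    if body[0] not in ("and", "or"):
        return [stmt]
    subgoals = []
    for operand in body[1:]:
        used = vars_of(operand)
        kept = [q for q in prefix if q[1] in used]
        subgoals.append(requantify(kept, operand))
    return subgoals

# The paper's example: exists a in A, b in B, c in C, p1(a,b) and p2(a,c).
goal = ("exists", "a", "A", ("exists", "b", "B", ("exists", "c", "C",
        ("and", ("atom", "p1", ("a", "b")), ("atom", "p2", ("a", "c"))))))
subs = le_transform(goal)
assert len(subs) == 2
assert subs[0] == ("exists", "a", "A",
                   ("exists", "b", "B", ("atom", "p1", ("a", "b"))))
```

Applying `le_transform` recursively to each result would yield the subgoal hierarchy described in the text.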
Estimation of a Term's Cost

It is straightforward to calculate a lower bound on the cost of each term, where cost is defined as the time needed to assign the term a value. A lower bound can be found by considering the number of quantifiers in the term's definition, and the sizes of the sets to which they apply. This bound does not also serve as an upper bound, because it assigns O(1) cost to recursive functions and functions whose definitions are not known. Lower bound estimates of a term's cost enable one to supply terms to the learning algorithm incrementally, beginning with the cheapest, or to reject a term whose cost exceeds some threshold.

The transformations, syntactic pruning rules, and cost estimation described above are implemented in a computer program called CINDI. Its input is a problem specification expressed in first order predicate calculus. Its output is a set of new terms and estimates of their costs, in either C or a Lisp-like language.

The terms generated by CINDI have been tested in two domains. One domain, the blocks-world, was selected because its search space is small enough that the effectiveness of the terms could be determined precisely. The second domain, the game of OTHELLO, has a larger search space, so effectiveness in that domain was determined empirically.

In both experiments, qualitative information was used to train the evaluation functions. Qualitative information indicates that one state is preferred to another (h(si) > h(sj)), but does not assign either state a value. The principal advantage of qualitative information is that it is more easily obtained than is precise information about a state's value.

The evaluation functions were implemented by single linear threshold units (LTUs) [Nilsson, 1965]. The LTU was chosen because it handles real-valued terms, and because of its well-known limitations [Minsky & Papert, 1972].
The intent was to study the effect of the representation, rather than the power of a particular learning algorithm. More complex learning algorithms use multiple hyperplanes to classify examples, while an LTU uses just one. Therefore the vocabularies that an LTU prefers might also be useful to more complex learning algorithms.

[Figure 1: Learning curves for the 4 blocks problem. Classification accuracy (75-100%) versus the percentage of examples in the training set (0-50%), for term sets B1-B16 + T1-T12, B1-B16, L1-L4 + T1-T12, and T1-T12.]

Given two states si and sj, each described by a vector of numeric features, and an LTU, described by a weight vector W, qualitative training information is encoded as follows:

h(si) > h(sj)            (1)
W^T · si > W^T · sj      (2)
W^T · (si − sj) > 0      (3)
h(si − sj) > 0           (4)

The Recursive Least Squares (RLS) learning rule [Young, 1984] was used to adjust the weight vector W, because it does not require that examples be linearly separable [Young, 1984]. However, the RLS algorithm does require that h(si − sj) be assigned a value. In these experiments, h(si − sj) = 1. We are considering a switch to a Thermal Perceptron [Frean, 1990], because it can use Equation 4 directly and it does not require linearly separable examples.

A Blocks-World Problem

The blocks-world is a simple domain that is often studied in Artificial Intelligence. The problem studied here contained four blocks, labelled A through D, and a table. The problem-solver's goal was to stack A on B, B on C, C on D, and D on the table. The starting state was generated randomly.

CINDI generated ten LE terms from the specification of the goal state and the preconditions of the operators. The syntactic pruning rules eliminated two terms. No AE terms were generated, but four UQ terms were generated, for a total of twelve terms. The terms were arbitrarily labelled T1-T12.
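The preference encoding of Equations 1-4 can be sketched in a few lines. For brevity, a simple delta-rule loop stands in for the incremental RLS update used in the experiments, and the feature vectors below are invented for illustration:

```python
# Training an LTU from qualitative preferences: each pair where state si is
# preferred to sj becomes one example with input (si - sj) and target value
# h(si - sj) = 1. A delta-rule loop stands in for the paper's RLS update;
# the feature vectors are invented for illustration.

prefs = [  # (features of preferred state, features of less-preferred state)
    ([1.0, 0.0, 3.0], [0.0, 1.0, 1.0]),
    ([2.0, 1.0, 0.0], [1.0, 1.0, 2.0]),
]

def diff(si, sj):
    return [a - b for a, b in zip(si, sj)]

W = [0.0, 0.0, 0.0]
for _ in range(200):                      # delta rule: W += lr * error * x
    for si, sj in prefs:
        x = diff(si, sj)
        error = 1.0 - sum(w * v for w, v in zip(W, x))
        W = [w + 0.1 * error * v for w, v in zip(W, x)]

# After training, every preference satisfies W . (si - sj) > 0, so the
# learned evaluation function ranks each preferred state higher.
assert all(sum(w * v for w, v in zip(W, diff(si, sj))) > 0 for si, sj in prefs)
```

Note that the training examples are difference vectors, so learning from pairs of states costs no more per example than learning from single states.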
For example, a precondition of the stack-block operator required that there be a block with an empty top. Two terms were created from that precondition. Term T6, an LE term, indicated whether some block had an empty top. Term T10, a UQ term, indicated the percentage of blocks with empty tops.

Terms T1-T12 were compared to two hand-coded representations, labelled B1-B16 and L1-L4. The hand-coded representations were intended to be the kind of representations that a problem-solver might use. Terms B1-B16 were Boolean terms that indicated the positions of the blocks. B1 indicated whether A was on B, B2 indicated whether A was on C, and so on. Terms L1-L4 were numeric. Each term indicated the position of one block. A term's value was 0 if its block was on top of A, 1 if its block was on B, and so on; its value was 5 if its block was on the table.

Figure 1 shows the percentage of examples on which an LTU has to be trained for it to reach a given level of accuracy in identifying which of two randomly selected search states is closer to a goal state. The values shown are accurate to within ±1%, with 99% confidence. The figure shows the effects of both augmenting and replacing the problem-solver's vocabulary with the terms generated by CINDI. Performance improves when terms L1-L4 are replaced, but degrades when terms B1-B16 are replaced. However, performance improves when either the L1-L4 or B1-B16 representations are augmented with terms T1-T12. Representation L1-L4 shows more dramatic improvement.

Othello

OTHELLO is a two-player game that is easy to learn, but difficult to master. Players alternate placing discs on an 8x8 board, subject to certain restrictions, until neither player can place a disc. When the game is over,
[Figure 2: Cumulative average difference between scores of winners and losers in two OTHELLO tournaments over 500 games: F1-F24 vs. S1-S128, and F1-F24+S1-S128 vs. S1-S128.]

the player with more discs has won.

CINDI generated 38 LE terms from the specification of the goal state. The syntactic pruning rules eliminated 15 terms. One AE term was created and 21 UQ terms were created, for a total of 45 terms. CINDI's cost estimates were used to eliminate any terms whose cost exceeded a manually-provided threshold. The remaining 24 terms were arbitrarily labelled F1-F24.

Several terms created by CINDI correspond to well-known OTHELLO concepts. One LE term indicated whether the player had wiped out (completely eliminated from the board) its opponent. The AE term measured the difference between the number of discs owned by each player (the disc differential); it can be used to play maximum and minimum disc strategies [Rosenbloom, 1982]. One UQ term measured the number of moves available to a player (mobility); it is highly correlated with winning positions [Rosenbloom, 1982].

Terms F1-F24 were compared with a set of terms, S1-S128, used in two independently developed, performance-oriented, OTHELLO programs. Terms S1-S64 are binary terms, one per square, that indicate whether the player owns the square. Terms S65-S128 are the corresponding terms for the opponent. A third representation, consisting of terms F1-F24 and terms S1-S128, was also considered.

Performance was measured by allowing LTUs with differing representations to compete for 500 games. After each move, an 'oracle' program¹ indicated the best move. The evaluation functions were then trained so that the oracle's move was preferred to all other moves. Figure 2 shows the average disc-differential between pairs of players as the tournaments progressed. The disc-differential, a standard measure for OTHELLO, is the difference between the winner's score and the loser's score.
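The three OTHELLO terms highlighted above (wipe-out, disc differential, mobility) can be sketched on a simple board representation. The encoding and function names are ours, not CINDI's generated output:

```python
# Sketch of three generated OTHELLO terms, on a board encoded as a dict
# mapping occupied squares to their owner ("me" or "opp"). Names are ours.

def le_wiped_out(board):
    """LE term: true if the opponent has no discs left on the board."""
    return not any(v == "opp" for v in board.values())

def ae_disc_differential(board):
    """AE term: my disc count minus the opponent's (supports maximum and
    minimum disc strategies)."""
    mine = sum(1 for v in board.values() if v == "me")
    theirs = sum(1 for v in board.values() if v == "opp")
    return mine - theirs

def uq_mobility(legal_moves, all_squares):
    """UQ-style term: fraction of squares that are currently legal moves."""
    return len(legal_moves) / len(all_squares)

board = {(0, 0): "me", (0, 1): "me", (1, 0): "opp"}
assert le_wiped_out(board) is False
assert ae_disc_differential(board) == 1
```

Each function returns a value that an LTU can weight directly, which is how such terms enter the learned evaluation function.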
Both tournaments show that the terms generated by CINDI are effective for learning OTHELLO evaluation functions.

¹A program that uses handcrafted features, a handcrafted evaluation function, and 3-ply search. It switches to exhaustive search in the last 12 moves of the game.

The tournament between the combined representation (F1-F24+S1-S128) and terms S1-S128 demonstrates that augmenting the problem-solver's vocabulary improves performance. The tournament between the new terms (F1-F24) and terms S1-S128 demonstrates that replacing the problem-solver's vocabulary results in a greater improvement.

The RLS algorithm may not converge if features are not linearly independent (i.e. if some features are linear functions of other features) [Young, 1984]. Terms S1-S128 are linearly independent, but terms F1-F24 are not. The decay in the evaluation function based upon terms F1-F24 is most likely caused by linear dependencies among the features. These dependencies probably also harm the performance of the evaluation function that uses the combined representation (F1-F24+S1-S128). However, in spite of this handicap, evaluation functions that use terms F1-F24 consistently outperform evaluation functions that use only terms S1-S128.

Related Work

Abstractions of an expression are generated by eliminating some combination of its predicates [Mostow & Prieditis, 1989; Pearl, 1984; Gaschnig, 1979]. The number of such expressions is combinatorially explosive, so human guidance is often used to restrict attention to a small subset of the possible abstractions. Each abstraction represents a simpler problem whose solution guides search in the original problem. However, unless care is taken, the additional cost of solving these simpler problems makes the overall approach more expensive than blind search [Valtorta, 1983; Mostow & Prieditis, 1989].
The LE transformation generates abstractions, but it does not generate the full set. It only generates between c + 1 and 2c abstractions, where c is the number of connectives in the problem specification. Therefore it avoids both combinatorial explosion and the need for human guidance. Once created, LE terms are evaluated in the context of a single search state, which avoids the expense of searching for a solution that is associated with abstractions.

The UQ transformation is a restricted form of Michalski's (1983) counting arguments rules. The counting arguments rules are applied to concept descriptions that contain quantified variables. Each rule counts the permutations of objects that satisfy a condition. However, the source of the condition is not specified. In contrast, the UQ transformation is applied to expressions generated by the LE transformation.

This paper describes a method for transforming easily available domain information into terms with which to train an inductive learning algorithm. The method, called knowledge-based feature generation [Callan, 1989], includes pruning rules, and lower bound cost estimates for each feature. The number of features produced is a linear function of the number of connectives in the domain knowledge, so the method will scale to complex problems. The method has been tested on two problems of differing size. In each case, the features it generated were more effective for inductive learning than were the problem-solver's features.

The features created by knowledge-based feature generation are intended to augment or replace the problem-solver's vocabulary as an initial vocabulary for inductive learning. Other methods of constructive induction can also be used, in conjunction with the inductive learning algorithm, to provide additional improvement in the vocabulary.

A number of topics remain for further research.
One of these is the development of an EQ transformation, to apply to existential quantifiers. Another is an investigation into control rules for supplying features to an inductive learning algorithm. The subgoal hierarchy and the cost estimates enable one to create a variety of incremental algorithms for supplying terms to the inductive algorithm. None of these have been investigated. A final problem is to characterize the situations in which knowledge-based feature generation is likely to improve a problem-solver's vocabulary.

Acknowledgements

We thank Jeff Clouse for the oracle OTHELLO program, and Tom Fawcett for his OTHELLO domain theory. We thank John Buonaccorsi and Chengda Yang for assistance in the design of experiments. We thank Chris Matheus, Andy Barto, Tom Fawcett, Sharad Saxena, Carla Brodley, Margie Connell, Jeff Clouse, David Haines, David Lewis, and Rick Yee for their comments.

References

Callan, J. P. (1989). Knowledge-based feature generation. Proceedings of the Sixth International Workshop on Machine Learning (pp. 441-443). Ithaca, NY: Morgan Kaufmann.

Frean, M. (1990). Small nets and short paths: Optimising neural computation. Doctoral dissertation, Center for Cognitive Science, University of Edinburgh.

Gaschnig, J. (1979). A problem similarity approach to devising heuristics: First results. Proceedings of the Seventh International Joint Conference on Artificial Intelligence (pp. 301-307). Tokyo, Japan.

Lenat, D. B., & Brown, J. S. (1984). Why AM and EURISKO appear to work. Artificial Intelligence, 23, 269-294.

Matheus, C. J., & Rendell, L. A. (1989). Constructive induction on decision trees. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (pp. 645-650). Detroit, Michigan: Morgan Kaufmann.

Michalski, R. S. (1983). A theory and methodology of inductive learning. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach.
San Mateo, CA: Morgan Kaufmann.

Minsky, M., & Papert, S. (1972). Perceptrons: An introduction to computational geometry (expanded edition). Cambridge, MA: MIT Press.

Mostow, J., & Bhatnagar, N. (1987). Failsafe - A floor planner that uses EBG to learn from its failures. Proceedings of the Tenth International Joint Conference on Artificial Intelligence (pp. 249-255). Milan, Italy: Morgan Kaufmann.

Mostow, J., & Prieditis, A. E. (1989). Discovering admissible heuristics by abstracting and optimizing: A transformational approach. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (pp. 701-707). Detroit, Michigan: Morgan Kaufmann.

Nilsson, N. J. (1965). Learning machines. New York: McGraw-Hill.

Pearl, J. (1984). Heuristics: Intelligent search strategies for computer problem solving. Reading, MA: Addison-Wesley.

Quinlan, J. R. (1983). Learning efficient classification procedures and their application to chess end games. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach. San Mateo, CA: Morgan Kaufmann.

Rosenbloom, P. (1982). A world-championship-level Othello program. Artificial Intelligence, 19, 279-320.

Schlimmer, J. C., & Granger, R. H., Jr. (1986). Incremental learning from noisy data. Machine Learning, 1, 317-354.

Valtorta, M. (1983). A result on the computational complexity of heuristic estimates for the A* algorithm. Proceedings of the Eighth International Joint Conference on Artificial Intelligence (pp. 777-779). Karlsruhe, West Germany: William Kaufmann.

Young, P. (1984). Recursive estimation and time-series analysis. New York: Springer-Verlag.
Two Kinds of Training Information for Evaluation Function Learning

Paul E. Utgoff and Jeffery A. Clouse
Department of Computer and Information Science
University of Massachusetts
Amherst, MA 01003 U.S.A.

Abstract

This paper identifies two fundamentally different kinds of training information for learning search control in terms of an evaluation function. Each kind of training information suggests its own set of methods for learning an evaluation function. The paper shows that one can integrate the methods and learn simultaneously from both kinds of information.

Introduction

This paper focuses on the problem of learning search control knowledge in terms of an evaluation function. The conclusion is that one can and should seek to learn from all kinds of training information, rather than be concerned with which kind is better than another. Many kinds of information are often available, and there is no point in ignoring any of them.

An evaluation function provides a simple mechanism for selecting a node for expansion during search. An evaluation function maps each state to a number, thereby defining a surface over the state space that can be used to guide search. If the number represents a reward, then one can search for a sequence of control decisions that will lead to the highest foreseeable payoff. Similarly, if the number represents a cost or penalty, then one searches for a minimizing sequence.

Sources of Training Information

There are currently two known fundamental sources of training information for learning an evaluation function. The first is the future payoff that would be achieved by executing a sequence of control decisions from a particular starting point (Samuel, 1963; Lee & Mahajan, 1988). Sutton (1988) has illustrated via his temporal difference (TD) methods that one can learn to predict the future value for a state by repeatedly correcting an evaluation function to reduce the error between the local evaluation of a state and the backed-up value that is determined by forward search.
This is similar to an idea of Samuel (1963), but Sutton has broadened it considerably and related it to several other lines of thought.

The second source of training information is identification of the control decision made by an expert, given a particular state. In the literature, such an instance of an expert choice is typically called a book move (Samuel, 1967), but it need not have been recorded in a book. Instead, one can simply watch an expert in action, or ask an expert what to do in a particular situation, and thereby obtain the control decision that the expert would make. Whenever an expert's choice is available, one would like to be able to learn from it. Such a choice is the result of the expert's prior learning, and therefore should be quite informative. Indeed, learning to make the same choices as an expert is a sensible approach to building an expert system.

State Preference Methods

When making a control decision based on the value of each successor state, the exact value of a state is irrelevant with respect to making the choice. Only the relationship of two values is needed for the purpose of identifying the one with the higher value. The objective is to identify the most preferred state and then move to it. Given that a control decision does not depend on the particular values returned by an evaluation function, one does not need to learn an exact value for each state. One needs only to learn a function in which the relative values for the states are correct.

Whenever one infers, or is informed correctly, that state a is preferable to state b, one has obtained information regarding the slope for part of a correct evaluation function. Any surface that has the correct sign for the slope between every pair of points is a perfect evaluation function. An infinite number of such evaluation functions exist, under the ordinary assumption that state preference is transitive.
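The observation that infinitely many evaluation functions suffice can be demonstrated directly: any order-preserving transform of a correct evaluation function yields identical control decisions, because only the relative values of the successors matter. A small sketch, with invented state values:

```python
import math

# Any monotone (order-preserving) transform of an evaluation function
# yields the same control decision, since only relative values matter.

h = {"B": 2.0, "C": 5.0, "D": 1.0}              # invented state values
g = {s: math.exp(v) + 7 for s, v in h.items()}  # a monotone transform of h

best_h = max(h, key=h.get)
best_g = max(g, key=g.get)
assert best_h == best_g == "C"
```

This is why learning only the sign of the slope between pairs of states, rather than exact values, is enough for search control.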
One would expect the task of finding any one of these evaluation functions to be easier than the task of finding a particular evaluation function.

Because one wants to learn to select a most preferred state from a set of possible successors, one should be able to learn from examples of such choices (Utgoff & Saxena, 1987; Utgoff & Heitman, 1988). By stating the problem of selecting a preferred state formally, and expanding the definitions, a procedure emerges for converting examples of state preference to constraints on an evaluation function. One can then search for an evaluation function that satisfies the constraints, using standard methods. We refer to a method that learns from such constraints as a state preference (SP) method.

[Figure 1: One ply of search - root state A with successor states B, C, and D.]

Assume that a state x is described by a conjunction of d numerical features, represented as a d-dimensional vector F(x). Also assume that the evaluation function H(x) is represented as a linear combination W^T · F(x), where W is a column vector of weights, and W^T is the transpose of W. Then one would compare the value of a state C to a state B by evaluating the expression H(C) > H(B). In general, one can define a predicate P(x, y) that is true if and only if H(x) > H(y), similar to Huberman's (1968) hand-crafted better and worse predicates.

One can convert each instance of state preference to a constraint on the evaluation function by expanding its definitions. For example, as shown in Figure 1, if state C is identified as best, one would infer constraints P(C, B) and P(C, D). Expanding P(C, B), for example, leads to:

P(C, B)
H(C) > H(B)
W^T · F(C) > W^T · F(B)
W^T · (F(C) − F(B)) > 0

The difference between the two feature vectors is known, leaving W as the only unknown.
By expanding all instances of state preference in the above manner, one obtains a system of linear inequalities, which is a standard form of learning task for a variety of pattern recognition methods (Duda & Hart, 1973), including perceptron learning and other more recent connectionist learning methods. Note that these instances of state preference are expressed as d-dimensional vectors, meaning that learning from pairs of states is no more complex than learning from single states. This is in contrast to Tesauro (1989), where both states are given as input to a network learner.

It is worth noting that Samuel's (1963, 1967) method for learning from book moves is an SP method. When learning from book moves, Samuel computed a correlation coefficient as a function of the number of times L (H) that the feature value in a nonpreferred move was lower (higher) than the feature value of the preferred move. The correlation coefficient for each feature was (L - H)/(L + H), and was used directly as the weight in his evaluation function. The divisor L + H is constant for all features, serving only as a normalizing scalar. Thus the total L - H is a crude measure of how important the feature is in identifying a state preferred by an expert.

Illustration

This section illustrates a TD method and an SP method, applied individually to the same problem. The purpose is to ground the discussion of the previous sections, and to provide some indication of the relative strength of the two kinds of training information. One already expects that state preference information is stronger than temporal difference information, so the point of interest is really how much stronger. The comparison is not a contest because there is no need to pick a winner. Each method learns from a different kind of training information, which raises the issue of how to learn from both sources. One can and should strive to learn from all the training information, regardless of its source.
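Samuel's book-move correlation can be sketched as follows, using invented preference pairs; the per-feature guard against L + H = 0 (all ties) is an added assumption beyond the description above.

```python
def samuel_weights(preference_pairs):
    """Crude SP weights in the style of Samuel's book-move learning.

    For each feature j, L[j] counts the times the nonpreferred state's
    feature value was lower than the preferred state's, and H[j] the
    times it was higher; the weight is (L - H) / (L + H).
    """
    d = len(preference_pairs[0][0])
    L, H = [0] * d, [0] * d
    for preferred, nonpreferred in preference_pairs:
        for j in range(d):
            if nonpreferred[j] < preferred[j]:
                L[j] += 1
            elif nonpreferred[j] > preferred[j]:
                H[j] += 1
    return [(L[j] - H[j]) / max(L[j] + H[j], 1) for j in range(d)]

# Invented (preferred, nonpreferred) pairs: feature 0 tracks the
# expert's preference, feature 1 opposes it.
print(samuel_weights([([1, 1], [0, 2]), ([2, 0], [1, 1])]))
```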
For the TD method and the SP method described below, it is important to note that we have instantiated TD and SP in a particular way, and have coupled each one with the best-first search algorithm. TD and SP methods are generic, and are independent of any particular search algorithm that makes use of the learned evaluation function. To keep this distinction in mind, we refer to the two example programs below as Tl and Sl.

The Task Domain

Although we are experimenting with TD and SP methods in larger domains, as mentioned in the final section, we have selected the smaller Towers-of-Hanoi domain for pedagogical purposes. The main reason for this choice is that the domain is characterized by a small state space, which can be controlled by varying the number of disks. The small domain makes it possible to implement a simple expert, which serves as a source of state preference information for the Sl program.

The semantics of this problem makes it more natural to think of the value of a state as a measure of remaining cost. Accordingly, the goal state has a cost of 0. The problem-solving program will be looking for a state with a low cost, and one state will be preferred to another if its cost is lower. Thus, for any two instances x and y, expanding P(x, y) <-> H(x) < H(y) leads to a constraint of the form W^T (F(x) - F(y)) < 0.

UTGOFF & CLOUSE 597

Table 1: Tl as Best-First Search
1. open ← (start), closed ← nil
2. s ← cheapest(open)
3. if solution(s), stop
4. put s onto closed, c ← successors(s)
5. if training, do TD adjustment
6. open ← append(c - closed, open)
7. re-evaluate all nodes on open
8. goto 2

Table 2: Sl as Best-First Search
1. open ← (start), closed ← nil
2. if training, [s ← expertbest(open), do SP adjustments] else s ← cheapest(open)
3. if solution(s), stop
4. put s onto closed, c ← successors(s)
5. open ← append(c - closed, open)
6. re-evaluate all nodes on open
7. goto 2

Table 3: Results for Tl and Sl

                 3 dsks  4 dsks  5 dsks
Tl  expansions      8      16      56
Sl  adjustments     5       9      11
    trials          1       2       1
    queries         7      30      31
    halt           opt     opt     opt
    expansions      8      16      32

A Temporal Difference Method

The Tl program learns from temporal differences, as part of the best-first search algorithm shown in Table 1. The value backed up from the children of the node just expanded is the value of the lowest cost child plus δ, with δ = 1. This backed-up value is the desired value of the parent with respect to the children, and the learning mechanism adjusts the weights W so that the evaluation for the parent state is closer to this backed-up value. Because the value of the goal state is defined to be 0, the evaluation function is being trained to predict the distance remaining from a state to the goal.

The error correction rule is a form of the well-known absolute error correction rule (Nilsson, 1965; Duda & Hart, 1973), which calculates the amount of correction needed to remove the error. One solves

    (W + cF(x))^T F(x) = backed-up value

for c and then adjusts W by

    W ← W + η c F(x)

so that W^T F(x) is closer to the intended value. The learning rate η is 0.1. Over time, a series of such corrections to W should result in an evaluation function that is predictive of minimum cost to the goal state, assuming such a fit can be approximated well in the given feature space.

A State Preference Method

The Sl program learns from state preferences, as part of the best-first search algorithm shown in Table 2. For training, the expert's choice is simulated by brute force search for the optimal move. From the expert's choice, the algorithm infers that the selected state is to be preferred to each of the nonselected states. From each such pair of states, Sl infers a constraint on the weight vector W expressed as a linear inequality. If the constraint is not satisfied, then the weight vector W is adjusted.

As with Tl, the correction rule is a form of the absolute error correction rule. One solves

    (W + c(F(x) - F(y)))^T (F(x) - F(y)) = -1

for c and then adjusts W by

    W ← W + c(F(x) - F(y)).
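Both updates are instances of the same absolute-error-correction step, which might be sketched as follows; the vectors and target below are invented, and eta = 1 is used so the correction removes the error exactly.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def absolute_error_correction(W, v, target, eta=1.0):
    """One absolute-error-correction step: solve (W + c v)^T v = target
    for c, then update W <- W + eta * c * v.

    For Tl, v = F(x) with target the backed-up value; for Sl,
    v = F(x) - F(y) with target -1.
    """
    c = (target - dot(W, v)) / dot(v, v)
    return [w + eta * c * vi for w, vi in zip(W, v)]

# A Tl-style step: with eta = 1 the post-update evaluation W^T v
# lands on the target.
W = absolute_error_correction([0.0, 0.0, 0.0], [1.0, 2.0, 0.0], target=4.0)
print(dot(W, [1.0, 2.0, 0.0]))   # approximately 4.0
```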
Adjusting the weights so that the weighted difference is -1 corresponds to wanting the selected state to evaluate to at least one less than a nonselected state, but any negative value will suffice in order to become correct for the inequality.

Discussion

Each program was trained repeatedly until either it was able to solve the Towers of Hanoi problem optimally or it had a sequence of weight vectors that was cycling. For each program, the cost of training was measured three ways: by the total number of adjustments to the weight vector W, by the number of times the program was trained on the problem (a trial), and by the number of times the expert was queried for its control decision. Table 3 shows Sl requires fewer weight adjustments and fewer trials than Tl, but at the expense of querying the expert. For the 5-disk problem, Sl learned to solve the problem optimally, but Tl was unable to do so. "Expansions" is the number of node expansions that occurred when the program solved the problem after it had completed its training.

The problem faced by Tl is to learn an exact value for each state, which is an impossible task in this case because the desired values are not co-planar in the given feature space. It is for this reason that one needs best-first search instead of simple hill-climbing. Sl needs only to learn a value for each state that causes the relationships of the values of the states to be correct. This too is an impossible task in the given feature space, but it appears easier for a learning algorithm to try to satisfy the less demanding constraints of relative values than exact values.

Table 4: Features for the 3-Disk Problem
Is Disk 3 on Peg 3?
Is Disk 2 at its desired location?
Is Disk 1 at its desired location?
Is Disk 2 on Disk 3?
Is Disk 1 on Disk 3?
Is Disk 1 on Disk 2?
Is Disk 2 on Peg 3?
Is Disk 1 on Peg 3?
Is Disk 3 clear?
Is Peg 3 empty?
Threshold constant (always 1)
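A possible encoding of the Table 4 features is sketched below; the state representation (a tuple of disks, bottom to top, per peg) and the reading of "at its desired location" as requiring the disks beneath to be in place are assumptions made for illustration.

```python
def features(pegs):
    """The ten Boolean features of Table 4, plus the threshold constant.

    `pegs` is a hypothetical state encoding: a tuple of three tuples
    listing disk numbers bottom-to-top on pegs 1-3; disk 3 is the
    largest and peg 3 is the goal peg.
    """
    def on_peg(disk, peg):
        return disk in pegs[peg - 1]

    def on_disk(top, bottom):
        for peg in pegs:
            if top in peg and bottom in peg:
                return peg.index(top) == peg.index(bottom) + 1
        return False

    def clear(disk):
        return any(disk in peg and peg[-1] == disk for peg in pegs)

    desired2 = on_peg(3, 3) and on_disk(2, 3)   # disk 2 at desired location
    desired1 = desired2 and on_disk(1, 2)       # disk 1 at desired location
    return [
        int(on_peg(3, 3)),
        int(desired2),
        int(desired1),
        int(on_disk(2, 3)),
        int(on_disk(1, 3)),
        int(on_disk(1, 2)),
        int(on_peg(2, 3)),
        int(on_peg(1, 3)),
        int(clear(3)),
        int(len(pegs[2]) == 0),
        1,                                      # threshold constant
    ]

print(features(((), (), (3, 2, 1))))   # the goal state
```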
The features for describing a state are a function of the number of disks. For example, Table 4 shows the ten Boolean features for the 3-disk problem. Disk 3 is the largest, and Peg 3 is the goal peg. In general, for the n-disk problem, there are O(3^n) states and, in the representation for Tl and Sl, O(n^2) features.

Integrating TD and SP Methods

This section discusses the relationship between TD and SP methods, and shows that both kinds of methods can work together in learning one evaluation function.

Relationship of TD and SP Methods

TD methods learn to predict future values, whereas SP methods learn to identify preferred states. For TD methods, training information propagates vertically up the search tree. For SP methods, the training information propagates horizontally among siblings. Semantically, the two kinds of methods are compatible because the evaluation function is of the same form, and serves the same purpose of allowing identification of a best state. One can adjust the weights W so that the value of a state is predictive of its eventual payoff, and one can also adjust W so that the relative values among the states become correct. Thus, in terms of the semantics of the learning, one can simply apply both kinds of error correction to the same evaluation function simultaneously without fear that they are incongruous. However, a practical problem that can arise is that the expert might be fallible, putting the two sources of training information in conflict to some degree. This issue is discussed below.

An Integrated Method

In the same way that TD and SP are each a class of methods, there are many combinations of methods that would produce an integrated TDSP method. We present one such method here, instantiated in a program that we refer to as Il.

Table 5: Il as Best-First Search
1. open ← (start), closed ← nil
2. lasterror ← 0.0
3. if training and lasterror > β, [s ← expertbest(open), do SP adjustments] else s ← cheapest(open)
4. if solution(s), stop
5. put s onto closed, c ← successors(s)
6. if training, [lasterror ← |TD error|, do TD adjustment]
7. open ← append(c - closed, open)
8. re-evaluate all nodes on open
9. goto 3

Table 6: Results for Il

                 3 dsks  4 dsks  5 dsks
Il  adjustments    35     131     409
    trials          1       1       1
    queries         6      14      24
    halt           opt     opt     opt
    expansions      8      16      32

As noted above, it is permissible to apply a TD method and an SP method to the same evaluation function. Thus, the Il program, shown in Table 5, is the union of the Tl and Sl programs, with the addition of a dynamic test for when to ask the expert for its control decision.

A TD method can be employed very easily in an unsupervised manner whenever a node is expanded. An SP method relies on an expert, which can be a human or a search procedure. At issue is when to obtain state preference information from the expert. If one can simply observe the expert passively, then there is no apparent expense in obtaining such information. For Il however, we assume that one must query the expert to obtain state preference information, and that one would like to make such queries as seldom as possible. As an extreme, one could avoid querying the expert altogether, and learn only from the TD information. However, expert preferences provide strong training information and should be considered when available. The Il program queries the expert whenever the magnitude of the previous TD error is above β, with β = 0.9. The effect is that the expert exerts great influence early in the training, but is progressively ignored as the evaluation function becomes more accurate.

Table 6 shows the same measures for Il as those given for Tl and Sl. Il learned to solve all three versions of the problem optimally, with fewer weight adjustments than Tl, and fewer queries to the expert than Sl. For the 4-disk problem, Il learned the task in one trial, which is fewer than for either Sl or Tl.
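The gating idea of Table 5 can be sketched as follows on a toy domain; note this simplifies best-first search to greedy descent, and the one-dimensional domain (state s has true cost-to-go s), the features, and the brute-force "expert" are all invented for illustration.

```python
def F(s):
    return (float(s), 1.0)          # features: state index and a constant

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def correct(W, v, target):
    # Absolute error correction: solve (W + c v)^T v = target for c.
    c = (target - dot(W, v)) / dot(v, v)
    return tuple(w + c * vi for w, vi in zip(W, v))

def run_il(n=10, beta=0.9, trials=5):
    W, queries = (0.0, 0.0), 0
    for _ in range(trials):
        s, last_error = n, 0.0
        for _ in range(200):                     # step budget per trial
            if s == 0:
                break
            children = [c for c in (s - 1, s + 1) if 0 <= c <= n]
            if abs(last_error) > beta:           # SP: query the expert
                queries += 1
                best = min(children)             # expert knows lower is better
                for other in children:
                    v = tuple(a - b for a, b in zip(F(best), F(other)))
                    if other != best and dot(W, v) >= 0:
                        W = correct(W, v, -1.0)  # enforce H(best) < H(other)
            else:                                # otherwise act greedily
                best = min(children, key=lambda c: dot(W, F(c)))
            backed_up = 1.0 + min(dot(W, F(c)) for c in children)
            last_error = backed_up - dot(W, F(s))
            W = correct(W, F(s), backed_up)      # TD adjustment
            s = best
    return W, queries

W, queries = run_il()
print(W, queries)
```

As in Il, the expert is consulted only while the previous TD error is still large, so its influence fades as the learned evaluation becomes accurate.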
The Il program increasingly ignores the expert as the evaluation function is learned. This is a desirable characteristic in terms of gaining autonomy, but it is also desirable if the expert is imperfect, e.g. human. One can learn rapidly from the expert, and then let TD training correct any flaws that may have crept in from believing the expert. However, it may happen that TD error might temporarily increase without expert input, causing the expert to be drawn back into the training, thereby preventing the improvement that might occur otherwise. The Il program illustrates just one scheme for integrating TD and SP methods. We are continuing to examine the issue of how to integrate these sources of training information profitably.

Conclusion

We have identified two different kinds of training information for learning evaluation functions, and described their relationship. For state preference, we have shown that one can convert instances of state preference to constraints on an evaluation function, and that one can learn an evaluation function from such information alone. We have taken the view that one should be able to learn from all sources of training information, and not be diverted by arguments that one is to be favored over another. We have observed that it is semantically correct to apply a TD method and an SP method simultaneously to the learning of one evaluation function. Finally, we presented a specific method that integrates both approaches, and demonstrated that the two can indeed work together profitably.

Although we have chosen a simple problem for illustration, the issues that motivated this work arose while studying the effectiveness of TD and SP methods in the game of Othello. The program was able to learn from either source of information, but it was unclear whether or how one could learn simultaneously from both sources.
We are in the process of finishing the integration of the methods in Othello, and are in the early stages of experimenting with an integrated approach in learning to control air traffic.

Acknowledgments

This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NCC 2-658, and by the Office of Naval Research through a University Research Initiative Program, under contract number N00014-86-K-0764. We thank Rich Sutton, Andy Barto, Sharad Saxena, Jamie Callan, Tom Fawcett, Carla Brodley, and Margie Connell for helpful comments and discussion.

References

Duda, R. O., & Hart, P. E. (1973). Pattern Classification and Scene Analysis. New York: Wiley & Sons.

Huberman, B. J. (1968). A program to play chess end games. Doctoral dissertation, Department of Computer Sciences, Stanford University.

Lee, K. F., & Mahajan, S. (1988). A pattern classification approach to evaluation function learning. Artificial Intelligence, 36, 1-25.

Nilsson, N. J. (1965). Learning machines. New York: McGraw-Hill.

Samuel, A. (1963). Some studies in machine learning using the game of checkers. In E. A. Feigenbaum, & J. Feldman (Eds.), Computers and Thought. New York: McGraw-Hill.

Samuel, A. (1967). Some studies in machine learning using the game of checkers II: Recent progress. IBM Journal of Research and Development, 11, 601-617.

Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine Learning, 3, 9-44.

Tesauro, G. (1989). Connectionist learning of expert preferences by comparison training. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems. Morgan Kaufmann.

Utgoff, P. E., & Saxena, S. (1987). Learning a preference predicate. Proceedings of the Fourth International Workshop on Machine Learning (pp. 115-121). Irvine, CA: Morgan Kaufmann.

Utgoff, P. E., & Heitman, P. S. (1988). Learning and generalizing move selection preferences.
Proceedings of the AAAI Symposium on Computer Game Playing (pp. 36-40). Palo Alto, CA.
A Complexity Analysis of Cooperative Mechanisms in Reinforcement Learning

Steven D. Whitehead
Department of Computer Science
University of Rochester
Rochester, NY 14627
email: white@cs.rochester.edu

Abstract

Reinforcement learning algorithms, when used to solve multi-stage decision problems, perform a kind of online (incremental) search to find an optimal decision policy. The time complexity of this search strongly depends upon the size and structure of the state space and upon a priori knowledge encoded in the learner's initial parameter values. When a priori knowledge is not available, search is unbiased and can be excessive.

Cooperative mechanisms help reduce search by providing the learner with shorter latency feedback and auxiliary sources of experience. These mechanisms are based on the observation that in nature, intelligent agents exist in a cooperative social environment that helps structure and guide learning. Within this context, learning involves information transfer as much as it does discovery by trial-and-error.

Two cooperative mechanisms are described: Learning with an External Critic (or LEC) and Learning By Watching (or LBW). The search time complexity of these algorithms, along with unbiased Q-learning, is analyzed for problem solving tasks on a restricted class of state spaces. The results indicate that while unbiased search can be expected to require time moderately exponential in the size of the state space, the LEC and LBW algorithms require at most time linear in the size of the state space and, under appropriate conditions, are independent of the state space size altogether, requiring time proportional to the length of the optimal solution path. While these analytic results apply only to a restricted class of tasks, they shed light on the complexity of search in reinforcement learning in general and the utility of cooperative mechanisms for reducing search.
Introduction

When reinforcement learning is used to solve multi-stage decision problems, learning can be viewed as a search process in which the agent, by executing a sequence of actions, searches the world for states that yield reward. For real-world tasks, the state space may be large and rewards may be sparse. Under these circumstances the time required to learn a control policy may be excessive. The detrimental effects of search manifest themselves most at the beginning of the task, when the agent has an initially unbiased control strategy, and in the middle of a task, when changes occur in the environment that invalidate an existing control policy.

Two cooperative learning algorithms are proposed to reduce search and decouple the learning rate from state-space size. The first algorithm, called Learning with an External Critic (or LEC), is based on the idea of a mentor, who watches the learner and generates immediate rewards in response to its most recent actions. This reward is then used temporarily to bias the learner's control strategy. The second algorithm, called Learning By Watching (or LBW), is based on the idea that an agent can gain experience vicariously by relating the observed behavior of others to its own. While LEC algorithms require interaction with knowledgeable agents, LBW algorithms can be effective even when interaction is with equally naive peers.

The principal idea being advocated in both LEC and LBW is that, in nature, intelligent agents do not exist in isolation, but are embedded in a benevolent society that is used to guide and structure learning. Humans learn by watching others, by being told, and by receiving criticism and encouragement. Learning is more often a transfer than a discovery. Similarly, intelligent robots cannot be expected to learn complex real-world tasks in isolation by trial-and-error alone.
Instead, they must be embedded in cooperative environments, and algorithms must be developed to facilitate the transfer of knowledge among them. Within this context, trial-and-error learning continues to play a crucial role: for pure discovery purposes and for refining and elaborating knowledge acquired from others.

The search time complexity is analyzed for pure unbiased Q-learning, LEC, and LBW algorithms for an important class of state spaces.

WHITEHEAD 607

Generally, the results indicate that unbiased Q-learning can have a search time that is exponential in the depth of the state space, while the LEC and LBW algorithms require at most time linear in the state space size and, under appropriate conditions, time independent of the state space size and proportional to the length of the optimal solution path. In the analysis that follows, only definitions and theorems are given. Proofs can be found in the appendix.

Characterizing state spaces

Naturally, the scaling properties of any reinforcement learning algorithm strongly depend upon the structure of the state space and the details of the algorithm itself. Therefore, it is difficult to obtain results for completely general situations. However, by making some simplifications we can obtain interesting results for a representative class of tasks and we can gain insight into more general situations. Below we define a number of properties that are useful when talking about classes of state spaces. We assume that actions are deterministic.

Definition 1 (1-step invertible) A state space is 1-step invertible if every action has an inverse. That is, if in state x, action a causes the system to enter state y, there exists an action a^-1 that when executed in state y causes the system to enter state x. [1]

Definition 2 (uniformly k-bounded) A state space is uniformly k-bounded with respect to a state x if:
1. The maximum number of steps needed to reach x from anywhere in the state space is k.
2. All states whose distance to x is less than k have b- actions that decrease the distance to x by one, b+ actions that increase the distance to x by one, and b= actions that leave the distance to x unchanged.
3. All states whose distance to x is k have b- actions that decrease the distance by one and b= + b+ actions that leave the distance unchanged. [2]

Definition 3 (homogeneous) A state space is homogeneous with respect to state x if it is 1-step invertible and uniformly k-bounded with respect to x.

Definition 4 (polynomial width) A homogeneous state space (of depth k) has polynomial width if the size of the state space is a polynomial function of its depth (k). For example, 2- and 3-dimensional grids have polynomial width since the sizes of their state spaces scale as O(k^2) and O(k^3) respectively.

[1] k-step invertibility can be defined analogously and results similar to those described below can be obtained.
[2] That is, at the boundaries, actions that would normally increase the distance to x are folded into actions that leave the distance unchanged.

Homogeneous state spaces are useful for studying the scaling properties of reinforcement learning algorithms because they are analytically tractable. They represent an idealization of the state spaces typically studied in AI; in particular, the boundaries of the state space are smooth and equally distant from the "center" state x, and the interior states share the same local connectivity pattern. Nevertheless, we expect the complexity results obtained for homogeneous state spaces to be indicative of the scaling properties of more general state spaces.

Q-learning Analysis

To study the time complexity of unbiased systems we have chosen to analyze Q-learning [Watkins, 1989] as a representative reinforcement learning algorithm.
Although a variety of other reinforcement algorithms have been described in the literature [Barto et al., 1983; Holland et al., 1986; Watkins, 1989; Jordan and Rumelhart, 1990], most are similar to Q-learning in that they use temporal difference methods [Sutton, 1988] to estimate a utility function that is used to determine the system's decision policy. Thus, even though our analysis is for Q-learning, we expect our results to apply to other algorithms as well.

In Q-learning, the system's objective is to learn a control policy π, which maps states into actions, that maximizes the discounted cumulative reward:

    R_t = Σ_{n=0}^{∞} γ^n r_{t+n}    (1)

where R_t is the discounted cumulative reward (also called the return), γ is the discount rate (0 ≤ γ < 1), and r_t is the reward received after executing an action at time t.

The system maintains an action-value function, Q, that maps state-action pairs into expected returns. That is, Q(x, a) is the system's estimate of the return it expects to receive given that it executes action a in state x and follows the optimal policy thereafter. Given Q, the system's policy π is determined by the rule:

    π(x) = a such that Q(x, a) = max_{b∈A} [Q(x, b)]    (2)

where A is the set of possible actions.

The estimates represented by the action-value function are incrementally improved through trial-and-error by using updating rules based on temporal difference (TD) methods [Sutton, 1988]. In 1-step Q-learning only the action-value of the most recent state-action pair is updated. The rule used is

    Q(x_t, a_t) ← (1 - α) Q(x_t, a_t) + α [r_t + γ U(x_{t+1})]    (3)

where

    U(x) = max_{b∈A} [Q(x, b)]    (4)

and where x_t, a_t, r_t are the state, action, and reward at time t respectively, and where α is the learning rate parameter.

1. x ← the current state
2. Select an action a that is usually consistent with π(x), but occasionally an alternate.
For example, one might choose a according to the Boltzmann distribution: P(a|x) ∝ e^{Q(x,a)/T}, where T is a temperature parameter that adjusts the degree of randomness.
3. Execute action a, and let y be the next state and r the reward received.
4. Update the action-value function: Q(x, a) ← (1 - α) Q(x, a) + α [r + γ U(y)], where U(y) = max_{b∈A} [Q(y, b)].
5. Go to 1.

Figure 1: The 1-step Q-learning algorithm

Other Q-learning algorithms use different rules for updating the action-value function. Figure 1 summarizes the steps for 1-step Q-learning.

Definition 5 (zero-initialized) A Q-learning system is zero-initialized if all its action-values are initially zero.

Definition 6 (problem solving task) A problem solving task is defined as any learning task where the system receives a reward only upon entering a goal state G. That is,

    r_t = 1 if x_{t+1} = G; 0 otherwise    (5)

Definition 7 (homogeneous problem solving task) A task is a homogeneous problem solving task if it is a problem solving task and its associated state space is homogeneous with respect to the goal state G.

Given the above definitions and descriptions we have the following result.

Theorem 8 In a homogeneous problem solving task, the expected time needed by a zero-initialized Q-learning system to learn the actions along an optimal solution path is bounded below by the expression

    c1 ( c2 [P+/P-]^(k-i) [ (P+/P-)^i - 1 ] - i/(P+ - P-) )    (6)

where

    c1 = 1/(1 - P=),
    c2 = P-/(P+ - P-)^2,
    P= = b=/(b= + b+ + b-),
    P+ = b+/(b+ + b-),  and  P- = 1 - P+,

and where i is the length of the optimal solution, and k is the depth bound on the state space (with respect to the goal).
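The 1-step Q-learning loop of Figure 1 might be sketched as follows on a toy chain with goal state 0; the domain, parameter values, and episode structure are invented for illustration, not taken from the paper.

```python
import math
import random

# A sketch of the loop of Figure 1 on a toy 5-state chain; entering the
# goal state 0 yields reward 1, as in Equation 5.
random.seed(0)

N = 5
ACTIONS = ("left", "right")
GAMMA, ALPHA, T = 0.9, 0.5, 0.3

def step(x, a):
    # Deterministic transitions; reward only upon entering the goal.
    y = max(x - 1, 0) if a == "left" else min(x + 1, N - 1)
    return y, (1.0 if y == 0 else 0.0)

Q = {(x, a): 0.0 for x in range(N) for a in ACTIONS}

def select(x):
    # Step 2: Boltzmann exploration, P(a|x) proportional to exp(Q(x,a)/T).
    weights = [math.exp(Q[(x, a)] / T) for a in ACTIONS]
    return random.choices(ACTIONS, weights=weights)[0]

for _ in range(200):                  # training episodes from the far end
    x = N - 1
    for _ in range(20):
        a = select(x)
        y, r = step(x, a)
        U = max(Q[(y, b)] for b in ACTIONS)        # Eq. 4: U(y)
        Q[(x, a)] = (1 - ALPHA) * Q[(x, a)] + ALPHA * (r + GAMMA * U)  # Eq. 3
        if y == 0:
            break
        x = y

policy = {x: max(ACTIONS, key=lambda a: Q[(x, a)]) for x in range(1, N)}
print(policy)
```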
Space does not permit a detailed derivation of Equation 6; however, the key to the proof is to recognize that the time needed by a zero-initialized Q-learning system to first solve the task is exactly the time needed by a process to perform a random walk on a one-dimensional state space which begins in state i and ends in state 0, where the states are numbered left to right from 0 to k - 1 and the walk takes a leftward step with probability P-(1 - P=), a rightward step with probability P+(1 - P=), and a step having no effect with probability P=. That is, when solving the task for the first time the zero-initialized system performs an unbiased random walk over the state space. It chooses actions randomly until it stumbles into the goal state. By constraining our analysis to homogeneous problem solving tasks we are able to analyze this walk. In particular, it reduces to a random walk on a 1-dimensional state space. A detailed derivation is given in [Whitehead and Ballard, 1991].

Corollary 9 For state spaces of polynomial width (see Definition 4), when P+ > 1/2, the expected search time is moderately exponential in the state space size.

In Equation 6, P= is the probability that the system, when choosing actions randomly, selects an action that leaves the distance to the goal unchanged, and P+ (and P-) is the conditional probability that the system chooses an action that increases (decreases) the distance to the goal given that it chooses one that changes the distance. Figure 2 shows a series of plots of expected solution time (Equation 6) versus maximum distance k for i = 10 and P+ ∈ [0.45, 0.55]. When P+ > 1/2, the solution time scales exponentially in k, where the base of the exponent is the ratio P+/P-. When P+ = 1/2, the solution time scales linearly in k, and when P+ < 1/2 it scales sublinearly.

The case where P+ > 1/2 is important for two reasons. First, for many interesting problems it is likely that P+ > 1/2.
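The random walk underlying the bound can be simulated directly; the parameter values below are illustrative and are not taken from Figure 2.

```python
import random

# A direct simulation of the one-dimensional random walk described above:
# start at state i, absorb at state 0, reflect at state k-1. Leftward
# steps occur with probability P-(1-P=), rightward with P+(1-P=), and
# no-ops with P=.
random.seed(1)

def walk_time(i, k, p_plus, p_eq=0.0):
    s, t = i, 0
    while s != 0:
        t += 1
        u = random.random()
        if u < p_eq:
            continue                       # distance unchanged
        if u < p_eq + (1 - p_eq) * p_plus:
            s = min(s + 1, k - 1)          # away from goal; reflect at k-1
        else:
            s -= 1                         # toward the goal
    return t

def mean_time(i, k, p_plus, trials=300):
    return sum(walk_time(i, k, p_plus) for _ in range(trials)) / trials

# With P+ slightly above 1/2, mean search time grows rapidly with depth k.
for k in (10, 15, 20):
    print(k, round(mean_time(5, k, p_plus=0.55)))
```

Even with a fixed solution length, deepening the state space blows up the mean first-solution time when P+ > 1/2, which is the qualitative behavior the theorem formalizes.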
For example, if a robot attempts to build an engine by randomly fitting parts together, it is much more likely to take actions that are useless or move the system further from the goal than towards it. This follows since engine assembly follows a fairly sequential ordering. Similarly, a child can be expected to take time exponential (in the number of available building blocks) to build a specific object when combining them at random. Of course the state spaces for building engines and assembling blocks are not homogeneous, but they may reasonably be approximated as such.

[Figure 2: Search time complexity (lower bound) as a function of k. Axes: k (maximum distance to G), 0 to 100; lower bound on search time (in steps), up to 1000; one curve per value of P+, e.g. 0.49.]

Second, when P+ is only slightly greater than 1/2, it doesn't take long before the exponent leads to unacceptably long searches. Figure 2 illustrates this point dramatically; even when P+ is as small as 0.51 the solution time diverges quickly. When P+ = 0.55 (i.e. the system is only 10% more likely to take a "bad" action than a "good" action), the search time diverges almost immediately.

Theorem 8 applies only to zero-initialized Q-learning systems. However, we expect these results to carry over to any learning system/algorithm that relies on unbiased search to initially solve tasks.

Cooperation for faster learning

Theorem 8 suggests that in the absence of a priori task knowledge, pure trial-and-error learning does not scale well with domain complexity (state-space size). Fortunately, a number of techniques can be used within the reinforcement learning paradigm to improve the learning rate.

One approach is to avoid the problem altogether by providing the agent with an approximate controller a priori. This controller, by encoding a priori knowledge about the task, defines an initial policy that positively biases the search.
In this case, trial-and-error experience is used primarily to compensate for modeling errors in the approximate controller [Franklin, 1988]. While this approach is useful for initial learning, its drawbacks are that it requires knowledge of the task a priori and it is less useful for adapting to change in the environment (i.e. low overall adaptability).

A second approach is to augment the agent with a predictive model and use it to perform hypothetical trial-and-error experiments [Sutton, 1990a; Sutton, 1990b; Whitehead and Ballard, 1989a; Whitehead and Ballard, 1989b]. This technique can improve the learning rate even when the predictive model itself is learned [Sutton, 1990a; Whitehead, 1989; Lin, 1990; Riolo, 1990]. The principal shortcoming of this approach is that it leads to agents who are only as good as their internal predictive models. That is, because multiple trials may be required before an accurate model can be learned, the model is useless during that first crucial trial when the agent first performs a task or first adapts to a change. Once the agent has learned an accurate model (including learning the locations of rewards), hypothetical experiments begin to contribute.

A third approach, the focus of this paper, is to embed the agent in a cooperative social environment and develop algorithms for transferring knowledge. This approach is based on the idea that in nature, intelligent agents exist in a social structure that supports knowledge transfer between agents. Individuals learn from one another through social and cultural interactions: by watching each other, by imitating role-models, by receiving criticism from knowledgeable critics, and by receiving direction from supervisors. Knowledge transfer and trial-and-error learning interact synergistically.
On the level of the individual, mechanisms for knowledge transfer dominate, and because of its inherent complexity, trial-and-error learning plays a lesser role, being used primarily to refine/elaborate knowledge already gained from others. On the level of the group, trial-and-error learning becomes an important tool for increasing the cumulative knowledge of the society as a whole. That is, even though agents, as individuals, cannot make discoveries at a useful rate, the inherent parallelism in the population can overcome the complexity of search and the group can accumulate knowledge and adapt at an acceptable rate.

The following sections describe and analyze two cooperative mechanisms for improving the adaptability of reinforcement learning systems. We call these Learning with an External Critic (LEC) and Learning By Watching (LBW) respectively.

LEC Analysis

The principal idea behind learning with an external critic is that the learner, while attempting to solve problems, is observed by a helpful critic, who analyzes the learner's actions and provides immediate feedback on its performance. LEC algorithms achieve faster learning by reducing the delay between an action and its evaluation (i.e. feedback), mitigating the temporal credit assignment problem. LEC algorithms require only modest changes to the learner, since no interpretation is required. However, some interpretation skills are required of the external critic.

610 LEARNING AND EVALUATION FUNCTIONS

We have studied several LEC algorithms [Whitehead and Ballard, 1991]. In this paper we focus on one, called Biasing-Binary LEC. In the Biasing-Binary-LEC (or BB-LEC) algorithm, reward from the environment, $r$, and reward from the external critic, $r^e$, are treated separately. Reward from the environment is treated according to standard Q-learning (it is used to learn the action-value function), while the critic's reward is used to learn a biasing function, $B$, over state-action pairs.
The biasing function is a simple weighted average of the immediate reward received from the critic. It is estimated using the rule:

$$B_{t+1}(x_t, a_t) \leftarrow (1 - \phi)\, B_t(x_t, a_t) + \phi\, r^e(t) \qquad (7)$$

where $r^e(t)$ is the reward generated by the external critic in response to the agent's action at time $t$ and $\phi$ is a decay factor between 0 and 1. We assume that at each time step, the critic generates a non-zero reward with probability $P_{critic}$ according to the rule:

$$r^e(t) = \begin{cases} +R_c, & \text{if } a_t \text{ is optimal} \\ -R_c, & \text{otherwise} \end{cases} \qquad (8)$$

where $R_c$ is a positive constant.

The decaying average in Equation 7 is used to allow the agent to "forget" old advice that has not recently been repeated. Without it, the agent may have difficulty adapting to changes in the task once advice is extinguished.

The decision making rule for BB-LEC is simple. The agent sums the action-value and bias-value for each possible decision and chooses the decision with the largest total. That is,

$$\pi(x) = a \text{ such that } Q(x, a) + B(x, a) = \max_b\, [Q(x, b) + B(x, b)].$$

Given the above LEC algorithm, we have the following weak upper bound on the search time.

Theorem 10: The expected time needed by a zero-initialized BB-LEC system to learn the actions along an optimal path for a homogeneous problem solving task of depth k is bounded above by

$$\frac{|S| \cdot b \cdot k}{P_{critic}} \qquad (9)$$

where $P_{critic}$ is the probability that on a given step the external critic provides feedback, $|S|$ is the total number of states in the state space, $b$ is the branching factor (or total number of possible actions per state) and $k$ is the depth of the state space.

This upper bound is somewhat disappointing because it is expressed in terms of the state space size, $|S|$, and the branching factor, $b$. Our goal is to find algorithms that depend only upon task difficulty (i.e. length of optimal solution) and are independent of state space size and depth. Nevertheless the result is interesting for two reasons.
First, it shows that when P+ > 1/2, BB-LEC is an improvement over pure zero-initialized Q-learning, since the search time grows at most linearly in k whereas Q-learning grows at least exponentially in k. Second, because the upper bound is inversely proportional to $P_{critic}$, the theorem shows that even infrequent feedback from the critic is sufficient to achieve the linear upper bound. This has been observed in empirical studies, where even infrequent feedback from the critic substantially improves performance [Whitehead and Ballard, 1991].

The trouble with the LEC algorithm, as we've described it so far, is that the critic's feedback arrives late. That is, by the time the learner receives the critic's evaluation it finds itself in another (neighboring) state, where the feedback is of no value. If the learner had a means of returning to previously encountered states, it could make better use of the critic's feedback. This idea has led to the following results, which show that under appropriate conditions the search time depends only upon the solution length and is independent of state space size.

Theorem 11: If a zero-initialized Q-learning system using BB-LEC uses an inverse model to "undo" non-optimal actions (as detected based on feedback from the external critic), then the expected time needed to learn the actions along an optimal path for a homogeneous problem solving task is linear in the solution length $l$, independent of state space size, and is bounded above by the expression

$$\left[ \frac{2}{P_+(1 - P_+)} - 1 \right] \cdot l. \qquad (10)$$

Similarly, if the task is structured so that the system can give up on problems after some time without success, or if the system is continually presented with opportunities to solve new instances of a problem, then previously encountered situations can be revisited without much delay and the search time can be reduced.
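As a concrete illustration, the BB-LEC update rules (Equations 7 and 8) and the decision rule can be sketched in tabular form. This is a minimal sketch under stated assumptions, not the authors' implementation; the parameter names (alpha, gamma, phi, R_c) and the tabular representation are illustrative.

```python
from collections import defaultdict

class BBLEC:
    def __init__(self, actions, alpha=0.1, gamma=0.9, phi=0.2, R_c=1.0):
        self.Q = defaultdict(float)   # zero-initialized action values
        self.B = defaultdict(float)   # bias values learned from the critic
        self.actions = actions
        self.alpha, self.gamma, self.phi, self.R_c = alpha, gamma, phi, R_c

    def act(self, x):
        # Decision rule: maximize Q(x,a) + B(x,a).
        return max(self.actions, key=lambda a: self.Q[(x, a)] + self.B[(x, a)])

    def update_env(self, x, a, r, x_next):
        # Standard Q-learning on environmental reward.
        best_next = max(self.Q[(x_next, b)] for b in self.actions)
        self.Q[(x, a)] += self.alpha * (r + self.gamma * best_next - self.Q[(x, a)])

    def update_critic(self, x, a, optimal):
        # Critic reward: +R_c if the action was optimal, -R_c otherwise (Eq. 8),
        # folded into a decaying average (Eq. 7) so old advice can be forgotten.
        r_e = self.R_c if optimal else -self.R_c
        self.B[(x, a)] = (1 - self.phi) * self.B[(x, a)] + self.phi * r_e
```

Even a single piece of critic feedback biases the very next decision in that state, which is the source of the speedup the theorems quantify.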
Theorem 12: A zero-initialized Q-learning system using BB-LEC that quits a trial and starts anew if it fails to solve the task after $n_q$ ($n_q \geq l$) steps has, for a homogeneous problem solving task, an expected solution time that is linear in $l$, independent of state space size, and is bounded from above by the expression

$$\left[ \frac{1}{P_+(1 - P_+)} \right] n_q\, l. \qquad (11)$$

Corollary 13: A zero-initialized Q-learning system using BB-LEC that quits a trial and starts anew upon receiving negative feedback from the external critic has an expected solution time that is bounded from above by the expression

$$\left[ \frac{1}{P_{critic} + (1 - P_{critic})\, P_+} \right] \cdot l. \qquad (12)$$

The crucial assumption underlying the above theorems is that the learner has some mechanism for quickly returning to the site of feedback; however, for some tasks returning to a previous state may not be explicitly necessary to decouple search time from the state space size. In particular, if the optimal decision surface is smooth (i.e., optimal actions for neighboring states are similar), then action-value estimators that use approximators that locally interpolate (e.g. CMACs, or neural nets) can immediately use the critic's feedback to bias the impending decision. Similarly, if Q is approximated with overlapping hypercubes (e.g. classifier systems [Holland et al., 1986]), then the critic's feedback can be expected to transfer to other situations as well. Although not reflected in the above theorems, we suspect that this observation is the basis of the ultimate power of LEC algorithms and will enable them to be useful even when explicit mechanisms for inversion are not available.

LBW Analysis

LEC algorithms are sensitive to naive critics. That is, if the critic provides poor feedback, the learner will bias its policy incorrectly. This limits the use of LEC algorithms to cases where the external critic is skilled and attentive. Learning By Watching, on the other hand, does not rely on a skilled, attentive critic. Instead, the learner gains additional experience by interpreting the behavior of others. If the observed behavior is skilled, so much the better, but an LBW system can learn from naive behavior too.

In reinforcement learning, all adaptation is based on a sequence of state-action-reward triples that characterize the system's behavior. In LBW, the agent gets state-action-reward sequences not only by its own hand, but also by observing others perform similar tasks. In the analysis that follows, we assume that, at each time step, the learner can correctly recognize the state-action-reward triple of any agent it watches. This sequence is then used for learning just as if it were the learner's own personal experience. We also assume that the learner can observe the behavior of everyone in the population. Although these assumptions are overly simplistic and ignore many important issues, they are reasonable considering our goal: to illustrate the potential benefits of integrating reinforcement learning with cooperative mechanisms like "learning-by-watching."³

Given the above description of LBW, we can make the following observations.

Theorem 14: The expected time required for a population of naive (zero-initialized) Q-learning agents using LBW to learn the actions along an optimal path decreases to the minimum required learning time at a rate that is O(1/n), where n is the size of the population.

Without help by other means, a population of naive LBW agents may still require time exponential in the state space depth; however, search time can be decoupled from state space size by adding a knowledgeable role-model.

Theorem 15: If a naive agent using LBW and a skilled (optimal) role-model solve identical tasks in parallel, and if the naive agent quits its current task after failing to solve it in $n_q$ steps, then an upper bound on the time needed by the naive agent to first solve the task (and learn the actions along the optimal path) is given by

$$\left[ \frac{2}{1 - l/n_q} \right] \cdot l. \qquad (13)$$

As with the LEC results, Theorem 15 relies on the agent having a mechanism for returning to previously encountered states. Intuitively this follows since when a naive agent and a skilled agent perform similar tasks in parallel, it is possible for the naive agent to move off the optimal solution path and find itself in parts of the state space that are never visited by the skilled agent. Starting over is a means for efficiently returning to the optimal solution path. Again, we expect LBW systems to perform well on tasks that have decision surfaces that are smooth or can be represented by generalizing function approximators.

³Results similar to those described below can be obtained when the assumptions are relaxed.

Conclusions

When used to solve multi-stage decision problems, reinforcement learning algorithms perform a kind of on-line, incremental search in order to find an optimal decision policy. The time complexity of this search strongly depends upon the size and structure of the state space and upon any knowledge encoded in the system a priori. When a priori knowledge is not available, or when the system must adapt to a change in the environment, search can be excessive.

An analysis of the search time complexity for zero-initialized Q-learning systems indicates that for a restricted, but representative, set of tasks, the search time scales at least exponentially in the depth of the state space. For polynomial width state spaces, this implies search times that are moderately exponential in state space size.

Learning with an External Critic (LEC) and Learning By Watching (LBW) are two cooperative mechanisms that can substantially reduce search. LEC algorithms rely on feedback from an external critic to reduce feedback (reward) latency. LBW algorithms use observations of others as an auxiliary source of experience. Both algorithms reduce search and increase overall adaptability.

The LEC algorithm, in its purest form, has a search time complexity that is at most linear in the state space size. Even though this is an improvement over pure Q-learning, the bound continues to depend on the state space size. The trouble with pure LEC is that, because the critic's evaluation is received after the fact, the learner may find itself in another (neighboring) state, where the feedback has little value. When means exist for efficiently returning to states that have been previously visited (or states that are functionally equivalent), the search time can be decoupled from the state space size. This can be achieved either explicitly or implicitly. Explicit mechanisms include allowing the learner to use an inverse model and allowing the agent to restart (or pick a new instance of) a task if it fails to solve it after some time. Critic evaluation can immediately be made use of implicitly (or automatically) when the decision surface is smooth, so that neighboring states share the same optimal action, or when the policy can be represented by generalizing function approximators.

The advantage of LBW over LEC is that it doesn't necessarily rely on an attentive, knowledgeable critic. In particular, the search time complexity of a population of naive LBW agents scales as the inverse of the population size. When a knowledgeable (not necessarily attentive) role-model is available, a naive agent, under appropriate conditions, can learn the actions along the optimal solution path in time linear in the path length, independent of state space size.
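A minimal sketch of the LBW sharing idea, assuming tabular Q-learning agents and a shared stream of observed (state, action, reward, next-state) triples; the function names and the single shared experience list are illustrative, not from the paper.

```python
from collections import defaultdict

def q_update(Q, actions, x, a, r, x_next, alpha=0.1, gamma=0.9):
    # Standard one-step Q-learning backup on a tabular Q.
    best_next = max(Q[(x_next, b)] for b in actions)
    Q[(x, a)] += alpha * (r + gamma * best_next - Q[(x, a)])

def lbw_step(agents, experiences, actions):
    """Each agent applies the same update to its own experience and to
    the observed experiences of everyone else in the population."""
    for Q in agents:
        for (x, a, r, x_next) in experiences:
            q_update(Q, actions, x, a, r, x_next)

# Three naive (zero-initialized) agents all learn from one agent's discovery.
agents = [defaultdict(float) for _ in range(3)]
lbw_step(agents, [("s0", 0, 1.0, "s1")], actions=[0, 1])
```

Because observed triples are processed exactly like first-person experience, one agent's discovery propagates to the whole population in a single step, which is the intuition behind the O(1/n) scaling of Theorem 14.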
Although our results are for zero-initialized Q-learning systems solving homogeneous problem solving tasks, we expect them to apply equally to other reinforcement learning algorithms that depend on search. Our simulation studies [Whitehead and Ballard, 1991] support this hypothesis and also show LEC and LBW algorithms to be robust with respect to feedback noise and dropout.

Finally, it might be argued that what we are doing here is moving toward supervised learning, so why not abandon reinforcement learning altogether and focus on supervised learning? It is true that LEC and LBW algorithms take steps toward supervised learning by exploiting richer sources of feedback. Indeed, supervised learning (in its classic sense) could be incorporated into the LEC algorithm by expanding the vocabulary between the system and the external critic from its present "yes" or "no" to a list of possible actions. However, we want to retain reinforcement learning because it is such an autonomous (if weak) learning algorithm. Our philosophy is that an autonomous learning system should be able to exploit extra information available when learning (e.g., feedback from an outside critic), but it should not rely on it completely. A framework with reinforcement learning as its basic mechanism provides for such autonomy.

References

[Barto et al., 1983] Andrew G. Barto, Richard S. Sutton, and Charles W. Anderson. Neuron-like elements that can solve difficult learning control problems. IEEE Trans. on Systems, Man, and Cybernetics, SMC-13(5):834-846, 1983.

[Franklin, 1988] Judy A. Franklin. Refinement of robot motor skills through reinforcement learning. In Proceedings of the 27th IEEE Conference on Decision and Control, Austin, TX, December 1988.

[Holland et al., 1986] John H. Holland, Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard. Induction: Processes of Inference, Learning, and Discovery. MIT Press, 1986.

[Jordan and Rumelhart, 1990] Michael I.
Jordan and David E. Rumelhart. Supervised learning with a distal teacher. Technical report, MIT, 1990.

[Lin, 1990] Long-Ji Lin. Self-improving reactive agents: Case studies of reinforcement learning frameworks. In Proceedings of the First International Conference on the Simulation of Adaptive Behavior, September 1990.

[Riolo, 1990] Rick L. Riolo. Lookahead planning and latent learning in classifier systems. In Proceedings of the First International Conference on the Simulation of Adaptive Behavior, September 1990.

[Sutton, 1988] Richard S. Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3(1):9-44, 1988.

[Sutton, 1990a] Richard S. Sutton. First results with DYNA, an integrated architecture for learning, planning, and reacting. In Proceedings of the AAAI Spring Symposium on Planning in Uncertain, Unpredictable, or Changing Environments, 1990.

[Sutton, 1990b] Richard S. Sutton. Integrating architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, Austin, TX, 1990. Morgan Kaufmann.

[Watkins, 1989] Chris Watkins. Learning from delayed rewards. PhD thesis, Cambridge University, 1989.

[Whitehead and Ballard, 1989a] Steven D. Whitehead and Dana H. Ballard. Reactive behavior, learning, and anticipation. In Proceedings of the NASA Conference on Space Telerobotics, Pasadena, CA, 1989.

[Whitehead and Ballard, 1989b] Steven D. Whitehead and Dana H. Ballard. A role for anticipation in reactive systems that learn. In Proceedings of the Sixth International Workshop on Machine Learning, Ithaca, NY, 1989. Morgan Kaufmann.

[Whitehead and Ballard, 1991] Steven D. Whitehead and Dana H. Ballard. A study of cooperative mechanisms for faster reinforcement learning. TR 365, Computer Science Dept., University of Rochester, February 1991. (A shorter version to appear in AAAI-91.)

[Whitehead, 1989] Steven D. Whitehead.
Scaling in reinforcement learning. Technical Report TR 304, Computer Science Dept., University of Rochester, 1989.
Adaptive Pattern-Oriented Chess

Robert Levinson and Richard Snyder
Department of Computer and Information Sciences
University of California Santa Cruz
Santa Cruz, CA 95064
levinson@cis.ucsc.edu and snyder@cis.ucsc.edu

Abstract

Psychological evidence indicates that human chess players base their assessments of chess positions on structural/perceptual patterns learned through experience. Morph is a computer chess program that has been developed to be more consistent with these cognitive models. The learning mechanism used by Morph combines weight-updating, genetic, explanation-based and temporal-difference learning to create, delete, generalize and evaluate chess patterns. An associative pattern retrieval system organizes the database for efficient processing. The main objectives of the project are to demonstrate the capacity of the system to learn, to deepen our understanding of the interaction of knowledge and search, and to build bridges in this area between AI and cognitive science. To strengthen connections with the cognitive literature, limitations have been placed on the system, such as restrictions to 1-ply search, to little domain knowledge, and to no supervised training.

    Although it is apparently effective to discover tactical issues by searches, isn't it dull to "forget" them immediately after use instead of "learning" something for a later reuse? There is no serious attempt known to make a chess program really learn from its experience for future games itself. (Kaindl, 1989)

Despite the recognition of the criticality of search and the high costs that are paid to achieve it, little effort has been applied to getting chess systems to utilize previous search experiences in future searches. Thus, excluding random factors from the system (or human intervention), one can expect a chess system to play exactly the same way against the same series of moves, whether it has won or lost, and take the same amount of time to do so!
There do now exist some systems that recall positions that they found promising, but from which they later lost material (Scherzer et al., 1990; Slate, 1987). This is certainly a step in the right direction, but much more important than dealing with exact replication of positions is to handle situations that are analogous, but not identical, to previous situations.

*Both authors supported in part by NSF Grant IRI-8921291.

The main characteristic of the current model of chess programming is the use of brute-force alpha-beta minimax search with selective extensions for special situations such as forcing variations. This has been further enhanced by special-purpose hardware. This model has been so successful that little else has been tried. Alternative AI approaches to chess have not fared well due to the expense of applying the "knowledge" that had been supplied to the system. In those cases in recent years where chess has been applied as a testbed (Flann and Dietterich, 1989; Quinlan, 1983; Michalski and Negri, 1977; Niblett and Shapiro, 1981; O'Rorke, 1981; Shapiro, 1987; Tadepalli, 1989; Pitrat, 1976; Minton, 1984), only a small sub-domain of the game was used.

However, we feel that there is a third approach that relies on neither search nor the symbolic computation approach of knowledge-oriented AI: this we shall call the "pattern-oriented approach." In this approach, configurations of interaction between squares and pieces are stored along with their significance. A uniform (and hence efficient) method is used to combine the significances in a given position to reach a final evaluation for that position.

Morph¹ is a system developed over the past 3 years that implements the pattern-oriented approach (Levinson, 1989b; Levinson, 1989a). It is not conceivable that the detailed knowledge required to evaluate positions in this way could be supplied directly to the system; thus learning is required.
A learning mechanism has been developed that combines weight-updating, genetic, and temporal-difference learning modules to create, delete, generalize and evaluate graph patterns. An associative pattern retrieval system organizes the database for efficient processing.

To strengthen the connections with the cognitive literature, the system's knowledge is to come from its own playing experience; no sets of pre-classified examples are given, and beyond its chess pattern representation scheme little chess knowledge, such as the fact that having pieces is valuable (let alone their values), has been provided to the system. Further, the system is limited to using only 1-ply of search.²

¹The name "Morph" comes from the Greek morph meaning form and from the chess great, Paul Morphy.

LEVINSON & SNYDER 601
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

System Design

Morph makes a move by generating all legal successors of the current position, evaluating each position using the current pattern database and choosing the position that is considered least favorable to the opponent. The system is designed so that after each game patterns are created, deleted and generalized, and weights are changed, to make its evaluations more accurate based on the outcome of the game.

Patterns and their Representation

The basic unit of knowledge to be stored is a pattern. Patterns may represent an entire position or represent a boolean feature that has occurred in one or more positions and is expected to occur again in the future. Along with each pattern is stored a weight in [0,1] that is an estimate of the expected true minimax evaluation of states that satisfy the pattern. In Morph, patterns come in two different types: graph patterns and material patterns. Material patterns and graph patterns are processed identically by the system (in separate databases) except that the matching and generalization operations are simpler for material patterns.
The graph patterns in Morph represent positions as unique directed graphs in which both nodes and edges are labeled. Nodes are created for all pieces that occur in a position and for all squares that are immediately adjacent to the kings. The nodes are labeled with the type and color of piece (or square) they represent, and for kings and pawns (and pieces of urity 0) the exact rank and file on the board in which they occur. Edges represent attacks and defends relationships: direct attack, indirect attack, or discovered attack. Graphs are always oriented with white to move. Patterns come from subgraphs of position graphs (see Figures 1a and 1b) and may generalize rank and file square designations. Many chess positions that are on face value dissimilar have a large subgraph (see Figure 1) in common in this representation. A similar representation scheme has successfully been used to represent chess generalizations (Zobrist and Carlson, 1973) and to produce a similarity metric for chess positions (Levinson, 1989b; Botvinnik, 1984).

Morph's material patterns are vectors that give the relative material difference between the players, e.g. "up 2 pawns and down 1 rook," "even material," etc. The material patterns provide useful subgoals or signposts that are used to enhance the temporal-difference learning process, since they can occur at any point in a game sequence. Morph tends to learn proper values for these early on in its training (see Table 1), thus providing a sound basis for learning the significance of graph patterns.

²Though nothing in the method, except perhaps efficiency, prevents deeper search.

Associative Pattern Database

The pattern database expedites associative retrieval by storing the patterns in a partially-ordered hierarchy based on the relation "subgraph-of" ("more-general-than") (Levinson, 1991). Only the immediate predecessors and immediate successors of each pattern are stored in the partial ordering.
At one end of the hierarchy are simple and general patterns such as "white bishop can take black pawn" and at the other end are complex and specific patterns such as those associated with full positions. Empirically, it has been shown that on typical databases only a small fraction of the patterns in the database need be considered to answer a given query. For example, on a database of 600 patterns rarely are more than 10 to 20 comparisons required (Levinson, 1991).

Evaluation Function

The evaluation function takes a position and returns a value in [0,1] that represents an estimate of the expected outcome of the game from that position (0 = loss, 0.5 = draw, 1.0 = win). The value returned by the evaluation function is to be a function of the patterns that are immediate predecessors of the state in the database and their weights. The contribution of other predecessors is in principle reflected in the weights of the immediate predecessors, since they are most-specific.

Determining the exact method by which the pattern values should be combined is a difficult and critical problem and is a major issue being explored. Some progress has been made: the "extremeness" of pattern probabilities induces the prioritization structure. Specifically, let $w_1, \ldots, w_n$ be the weights of patterns (material or graph) that apply to a position $P$, and $ex(w)$ be the "extremeness" of that weight. Clearly we want to "listen" to the more extreme patterns because they are an indication that something important is happening. To be more formal, the extremeness value of a pattern weight $w$ is defined as $ex(w) \equiv |w - 0.5|$. The evaluation function for $P$ is:

$$eval(P) = \frac{\sum_{i=1}^{n} w_i \cdot ex(w_i)^\beta}{\sum_{i=1}^{n} ex(w_i)^\beta}$$

If $\sum_{i=1}^{n} ex(w_i)^\beta = 0$, $eval(P) = 0.5$. $\beta$ is a tunable parameter that indicates how much we want our evaluation function to be skewed by extreme patterns.
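A minimal sketch of the extremeness-weighted evaluation function above; the function names and the default value of beta are illustrative, not taken from the system.

```python
def extremeness(w):
    # ex(w) = |w - 0.5|: distance of a pattern weight from "no information".
    return abs(w - 0.5)

def evaluate(weights, beta=4.5):
    """Combine the weights of matching patterns into one value in [0, 1]."""
    denom = sum(extremeness(w) ** beta for w in weights)
    if denom == 0:
        return 0.5  # all patterns are uninformative
    return sum(w * extremeness(w) ** beta for w in weights) / denom
```

With a large beta, a single extreme pattern (say, weight 0.9) dominates many near-neutral ones, which is exactly the prioritization the text describes.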
Learning System

Positional Credit Assignment and Weight-Updating. Each game provides feedback to the system about the accuracy of its evaluations. The first step is to use the outcome of the game to improve the evaluations assigned to positions during the game. The method used by the system to assign new evaluations to states is temporal-difference (TD) learning (Sutton, 1988). In TD learning, the idea is to update weights so that one position's evaluation corresponds more closely to the evaluation of the following position (as opposed to the evaluation of the final or goal state). Temporal-difference learning makes the feedback loop more local. Specifically, the system assigns the final position its true value (0, 0.5, or 1.0) and then iteratively assigns each preceding position $P$ (working backwards) the value $(1 - \alpha)(\text{current value of } P) + \alpha(1 - \text{new value of succeeding position})$. $\alpha$ (now 1/2) is a parameter in [0,1] that can be adjusted to reflect how much importance should be given to new versus old information.

Once new positions have been given evaluations, the weights of patterns that were used to evaluate the positions are moved in the direction of the desired or "target" evaluation.

Figure 1: A generalization derived from two different chess positions. (a) is the subgraph derived from the board on the left and (b) is the subgraph from the board on the right. The solid edges correspond to direct edges between pieces and the dashed edges correspond to indirect edges. Graph (c) is the generalized graph derived from (a) and (b), in which the dotted edges have been generalized to generic "attacks."

Pattern Creation. The system must have a method for developing and storing new patterns. During weight updating, for each pair of successive positions $s_i$ and $s_{i+1}$ the system constructs two new patterns, $P_{before}$ and $P_{after}$, through a form of explanation-based learning (EBG) (Mitchell et al., 1986).
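The backward value-assignment rule above can be sketched as follows; this sketch assumes, as the rule's "1 minus" term suggests, that successive positions alternate sides (graphs are always oriented with white to move), and the function name is illustrative.

```python
def td_targets(values, outcome, alpha=0.5):
    """values: the game's position evaluations in order; outcome in {0, 0.5, 1}."""
    targets = list(values)
    targets[-1] = outcome                      # final position gets its true value
    for i in range(len(targets) - 2, -1, -1):  # work backwards through the game
        # Blend the old evaluation with (1 - successor's new value).
        targets[i] = (1 - alpha) * targets[i] + alpha * (1 - targets[i + 1])
    return targets
```

The resulting targets are what the pattern weights are then nudged toward.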
These new patterns are to represent the changes between two positions that occur with a single move and hence must contribute to the difference in their evaluations. A domain-dependent rule adds to the patterns a context in which these changes may occur. With graph patterns, $P_{before}$ is the smallest connected subgraph made up of those edges in $s_i$ that are not in $s_{i+1}$, and $P_{after}$ is the smallest connected subgraph made up of those edges in $s_{i+1}$ that are not in $s_i$. $P_{before}$ and $P_{after}$ are then augmented (for context) by adding to them all edges adjacent to their initial set of nodes.

Pattern Deletion. As in genetic algorithms (Goldberg, 1989), there must be a mechanism for insignificant, incorrect or redundant patterns to be deleted (forgotten) by the system. A pattern should contribute to making the evaluations of positions it is part of more accurate. The utility of a pattern can be measured as a function of many factors, including age, number of updates, uses, size, extremeness and variance. Using a utility function (Minton, 1984), patterns below a certain level of utility can be deleted.

Performance Results

Morph's main trainer and opponent is GnuChess Level One, which is rated at least 1600 (better than 63% of tournament players). For a 1-ply program, starting from scratch, beating such an opponent would demonstrate a very significant amount of learning. To date Morph has been unable to defeat GnuChess even once, though it has obtained several draws when GnuChess has inadvertently stalemated it. The main difficulty seems to be that, due to insufficient training and/or imprecision in the weight-updating and evaluation mechanisms, removing all "bugs" from Morph's database has not been possible. In chess, one bad move can ruin an otherwise reasonable game, especially against a computer. Morph's main trouble is "overgeneralization": attributing too much significance to small features of the board.
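The edge-difference step of pattern creation described above can be sketched as follows, with positions simplified to flat sets of labeled edges; the real system extracts the smallest connected subgraphs and then augments them with adjacent context edges, which this hypothetical sketch omits.

```python
def create_patterns(s_i, s_next):
    """Return (P_before, P_after) as the edges destroyed and created by a move."""
    p_before = s_i - s_next   # edges present before the move but not after
    p_after = s_next - s_i    # edges present after the move but not before
    return p_before, p_after

# Illustrative positions: a move that removes a defender and creates an attack.
s_i = {("wB", "attacks", "bP"), ("bN", "defends", "bP")}
s_next = {("wB", "attacks", "bP"), ("bR", "attacks", "wB")}
```

Because the differenced patterns capture exactly what the move changed, they are natural candidates for explaining the change in evaluation between the two positions.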
To achieve an even record against GnuChess is a primary objective of the current project. The longest game has been 48 moves (for each player), with an average game length after 50 games of training of 26 moves. Despite the lack of success against GnuChess, there have been many encouraging signs in the three months since Morph was fully implemented:

- After 30 or more games of training, Morph's material patterns are consistent and credible (see Table 1 for a database after 106 games), even though no information about the relative values of the pieces, or that pieces are valuable at all, has been given to the system. The weights correspond very well to the traditional values assigned to those patterns (except the queen is undervalued). These results reconfirm Korf and Christensen's efforts (Christensen and Korf, 1986) and perhaps go beyond them by providing a finer grain size for material.

- After 50 games of training, Morph begins to play reasonable sequences of opening moves and even the beginnings of book variations, despite the fact that no information about development, center control and king safety has been directly given to the system and that neither Morph nor GnuChess uses an opening book.

- Morph's database contains many patterns that are recognizable by human players, and Morph has given most of them reasonable values. The patterns include mating patterns, mates-in-one, the castled king and related defenses and attacks on this position, pawn structures in the center, doubled rooks, developed knights, attacked and/or defended pieces, and more.

- Morph's games against GnuChess improve with training. Since Morph has not yet been victorious, we use another performance metric besides win-loss record: the total amount of the opponent's material captured by Morph, using the traditional piece values (queen=9, rook=5, bishop=3, knight=3, pawn=1). Of course, Morph is "unaware" of this metric. Figure 2 gives a typical plot of Morph's performance over time (with β = 4.5).
We have discovered that adding patterns too frequently "overloads" Morph so that it can't learn proper weights. In the experiment graphed in Figure 3, weights were updated after every game but patterns were added only every 7 games. This allowed the system to display a steady learning curve, as depicted.

Relationship to Other Approaches

Clearly, the chess system combines threads of a variety of machine-learning techniques that have been successful in other settings. It is this combination, and exactly what is done to achieve it, that is the basis for Morph's contributions. The learning-method areas and their involvement in Morph include genetic algorithms (Goldberg, 1989; Grefenstette, 1987; Holland, 1987), neural nets (weight updating) (Rumelhart and McClelland, 1986), temporal-difference learning, explanation-based generalization (EBG), and similarity-based learning. To combine these methods, some design constraints usually associated with them are relaxed. With genetic algorithms, structured patterns rather than bit strings are used. In contrast to neural networks, the nodes in Morph's hierarchy are assigned particular semantic/structural values. Temporal-difference learning is usually applied to methods with fixed evaluation function forms (in which the features are known but not their weights), but here the features change and the hierarchical database organization produces atypical discontinuities in the function.

We feel the integration of these diverse techniques would not be possible without the uniform, syntactic processing provided by the pattern-weight formulation of search knowledge. To appreciate this, it is useful to understand the similarities and differences between Morph and other systems for learning control or problem-solving knowledge. For example, consider Minton's explanation-based Prodigy system (Minton, 1984).
The use of explanation-based learning is one similarity: Morph's pattern creation specifically creates patterns that are "responsible" for future favorable or unfavorable patterns being created. Also similar is the use of "utility" by Morph's deletion routine to determine if it is worthwhile to continue to store a pattern, basing the decision on the accuracy and significance of the pattern versus matching or retrieval costs. A major difference between the two approaches is the simplicity and uniformity of Morph's control structure: no "meta-level control" rules are constructed or used, nor are goals or subgoals explicitly reasoned about. Another difference is that actions are never explicitly mentioned in the system. Yee et al. (Yee et al., 1990) have combined explanation-based learning and temporal-difference learning in a manner similar to Morph and other APS systems, applying the technique to Tic-Tac-Toe.

Now let's compare Morph to other adaptive game-playing systems. In earlier systems, the system is given a set of features and asked to determine the weights that go with them. These weights are usually learned through some form of TD learning (Tesauro and Sejnowski, 1989). Morph extends the TD approaches by exploring and selecting from a very large set of possible features in a manner similar to genetic algorithms. Lee and Mahajan (Lee and Mahajan, 1988) also improve on these approaches by using Bayesian learning to determine inter-feature correlation.

A modicum of AI and machine learning techniques, in addition to heuristic search, has been applied directly to chess. The inductive-learning endgame systems (Michie and Bratko, 1987; Muggleton, 1988) have relied on pre-classified sets of examples or on examples that could be classified by a complete game-tree search from the given position (Thompson and Roycroft, 1983). The symbolic learning work by Flann (Flann and Dietterich, 1989) has occurred on only a very small subdomain of chess.
The concepts capable of being learned by this system correspond to graphs of two or three nodes in Morph. Such concepts are learned naturally by Morph's generalization mechanism. Tadepalli's work (Tadepalli, 1989) on hierarchical goal structures for chess is promising. We suspect that such high-level strategic understanding may be necessary in the long run to bring Morph beyond an intermediate level (the goal of the current project) to an expert or master level. Minton (Minton, 1984), building on Pitrat's work (Pitrat, 1976), applied constraint-based generalization to learning forced mating plans. This method can be viewed as a special case of our pattern creation system.

[Table 1: A portion of an actual Morph material database after 106 games. The columns headed by pieces (Pawn, Knight, Bishop, Queen) denote relative quantity. The weight column is the learned weight of the pattern in [0,1]. Updates is the number of times that this weight has been changed. Variance is the sum of the weight changes. Age is how many games this pattern has been in the database. Value is the traditional value assigned to this pattern. Note that a weight of 0.5 corresponds to a traditional value of 0. The entire database contained 575 patterns.]
Perhaps the most successful application of AI to chess was Wilkins' Paradise (PAttern Recognition Applied to DIrecting Search) system (Wilkins, 1980), which, also building on Pitrat's work, used pattern knowledge to guide search in tactical situations. Paradise was able to find combinations as deep as 19-ply. It made liberal use of planning knowledge in the form of a rich set of primitives for reasoning and thus can be characterized as a "semantic approach." This difference, plus the use of search to check plans and the restriction to tactical positions, distinguishes it from Morph. Also, Paradise is not a learning program: patterns and planning knowledge are supplied by the programmer. Epstein's Hoyle system (Epstein, 1990) also applies a semantic approach, but to multiple simultaneous game domains.

Conclusions

To our knowledge, this is the first major project to apply recent learning results to computer chess. The early results with the chess system are highly encouraging. We are excited about this project's relevance to some growing debates on fundamental issues in artificial intelligence and cognitive science research.

In addition to Morph's unique combination of methods, what distinguishes it are:

• A uniform representation of search knowledge.
• A syntactic approach to playing and learning.
• An attempt to play a complete game of chess rather than a small subdomain.
• Responsibility for feature discovery given to the system.
• Non-reliance on search (though at some point a small amount of guided search may be incorporated, bringing us even closer to the cognitive model).
• Rejection of a learning-by-examples framework for an experiential framework that is more cognitively-inspired.

[Figure 2: Graph plotting Morph's progress versus GnuChess (opponent material captured over the number of games played).]

References

(Botvinnik, 1984) M. Botvinnik. Computers in Chess. Springer-Verlag, 1984.
(Christensen and Korf, 1986) J. Christensen and R. Korf. A unified theory of heuristic evaluation. In AAAI-86, 1986.
(Epstein, 1990) S. L. Epstein. Learning plans for competitive domains. In Proceedings of the Seventh International Conference on Machine Learning, June 1990.
(Flann and Dietterich, 1989) N. S. Flann and T. G. Dietterich. A study of explanation-based methods for inductive learning. Machine Learning, 4:187-226, 1989.
(Goldberg, 1989) D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
(Grefenstette, 1987) J. J. Grefenstette, editor. Genetic Algorithms and Their Applications, Hillsdale, NJ, 1987. L. Erlbaum Associates.
(Holland, 1987) J. H. Holland. Genetic algorithms and classifier systems: Foundations and future directions. In J. J. Grefenstette, editor, Second International Conference on Genetic Algorithms, Hillsdale, NJ, 1987. L. Erlbaum Associates.
(Kaindl, 1989) H. Kaindl. Towards a theory of knowledge. In Advances in Computer Chess 5, pages 159-185. Pergamon, 1989.
(Lee and Mahajan, 1988) K. F. Lee and S. Mahajan. A pattern classification approach to evaluation function learning. Artificial Intelligence, 36:1-25, 1988.
(Levinson, 1989a) R. Levinson. Pattern formation, associative recall and search: A proposal. Technical Report UCSC-CRL-89-22, University of California at Santa Cruz, 1989.
(Levinson, 1989b) R. Levinson. A self-learning, pattern-oriented chess program. International Computer Chess Association Journal, 12(4):207-215, December 1989.
(Levinson, 1991) R. Levinson. A self-organizing pattern retrieval system and its applications. International Journal of Intelligent Systems, 1991. To appear.
(Michalski and Negri, 1977) R. S. Michalski and P. Negri. An experiment on inductive learning in chess end games. In E. W. Elcock and D.
Michie, editors, Machine Representation of Knowledge, Machine Intelligence, volume 8, pages 175-192. Ellis Horwood, 1977.
(Michie and Bratko, 1987) D. Michie and I. Bratko. Ideas on knowledge synthesis stemming from the KBBKN endgame. Journal of the International Computer Chess Association, 10(1):3-13, 1987.
(Minton, 1984) S. Minton. Constraint-based generalization: Learning game-playing plans from single examples. In Proceedings of AAAI, 1984.
(Mitchell et al., 1986) T. M. Mitchell, R. Keller, and S. Kedar-Cabelli. Explanation-based generalization: A unifying view. In Machine Learning 1, pages 47-80. Kluwer Academic Publishers, Boston, 1986.
(Muggleton, 1988) S. H. Muggleton. Inductive acquisition of chess strategies. In D. Michie, J. E. Hayes, and J. Richards, editors, Machine Intelligence 11, pages 375-389. Oxford University Press, Oxford, 1988.
(Niblett and Shapiro, 1981) T. Niblett and A. Shapiro. Automatic induction of classification rules for chess endgames. Technical Report MIP-R129, Machine Intelligence Research Unit, University of Edinburgh, 1981.
(O'Rorke, 1981) P. O'Rorke. A comparative study of inductive learning systems AQ11 and ID3. Intelligent Systems Group Report No. 81-14, Department of Computer Science, University of Illinois at Urbana-Champaign, 1981.
(Pitrat, 1976) J. Pitrat. A program for learning to play chess. In Pattern Recognition and Artificial Intelligence. Academic Press, 1976.
(Quinlan, 1983) J. R. Quinlan. Learning efficient classification procedures and their application to chess end games. In R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, editors, Machine Learning. Morgan Kaufmann, San Mateo, CA, 1983.
(Rumelhart and McClelland, 1986) D. E. Rumelhart and J. L. McClelland. Parallel Distributed Processing, volumes 1-2. MIT Press, 1986.
(Scherzer et al., 1990) T. Scherzer, L. Scherzer, and D. Tjaden. Learning in Bebe. In T. A. Marsland and J. Schaeffer, editors, Computers, Chess, and Cognition, chapter 12, pages 197-216. Springer-Verlag, 1990.
(Shapiro, 1987) A. D. Shapiro. Structured Induction in Expert Systems. Turing Institute Press with Addison-Wesley, 1987.
(Slate, 1987) D. J. Slate. A chess program that uses its transposition table to learn from experience. Journal of the International Computer Chess Association, 10(2):59-71, 1987.
(Sutton, 1988) R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9-44, 1988.
(Tadepalli, 1989) P. Tadepalli. Lazy explanation-based learning: A solution to the intractable theory problem. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, 1989. Morgan Kaufmann.
(Tesauro and Sejnowski, 1989) G. Tesauro and T. J. Sejnowski. A parallel network that learns to play backgammon. Artificial Intelligence, 39:357-390, 1989.
(Thompson and Roycroft, 1983) K. Thompson and A. J. Roycroft. A prophesy fulfilled. EndGame, 5(74):217-220, 1983.
(Wilkins, 1980) D. Wilkins. Using patterns and plans in chess. Artificial Intelligence, 14(2):165-203, 1980.
(Yee et al., 1990) R. C. Yee, Sharad Saxena, Paul E. Utgoff, and Andrew G. Barto. Explaining temporal differences to create useful concepts for evaluating states. In Proceedings of the Eighth National Conference on AI, Menlo Park, 1990. American Association for Artificial Intelligence, AAAI Press/The MIT Press.
(Zobrist and Carlson, 1973) A. L. Zobrist and D. R. Carlson. An advice-taking chess computer. Scientific American, 228:92-105, June 1973.

606 LEARNING AND EVALUATION FUNCTIONS
ANALYSIS OF THE INTERNAL REPRESENTATIONS IN NEURAL NETWORKS

Lai-Wan CHAN
Computer Science Department
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
email: lwchan@cucsd.cuhk.hk (bitnet)

Abstract

The internal representation of the training patterns of multi-layer perceptrons was examined, and we demonstrate that the connection weights between layers effectively transform the representation format of the information from one layer to another in a meaningful way. The internal code, which can be in analog or binary form, is found to depend on a number of factors, including the choice of an appropriate representation of the training patterns, the similarities between the patterns, as well as the network structure, i.e. the number of hidden layers and the number of hidden units in each layer.

In supervised neural networks, such as multi-layer perceptrons [Rumelhart, Hinton & Williams 1986], information is acquired by presenting some training examples to the network in the training process. These examples are pairs of input and output patterns. A set of connection weights is then found iteratively using the generalised delta rule and is reserved for the classification process in the recalling phase. At present, there are no explicit guidelines for either the choice of the size of the network or the representation format of the training examples. Trial and error has been used to decide the number of hidden layers and the number of hidden units in each layer. Previous studies have shown that the number of hidden units in a multi-layer perceptron affects the performance of the network. For example, the convergence speed and the recognition rate vary with the number of hidden units [Burr 1988]. In this paper, the approach of regarding the hidden layers as a transformation process in the hyper-space was used.
We illustrate that a back-propagation network with internal layers solves some classification problems intelligently by using this transformation idea. In addition, we show that the internal representation of information can be affected by some characteristics of the training patterns and by the architecture of the network.

2

In previous studies, it has been pointed out that multi-layer perceptron networks store information distributively [Rumelhart & McClelland 1986]. The distribution of information may be uneven among the hidden units; thus, some hidden units may be more important than others and some may carry no information at all. In this respect, care has to be taken to choose the appropriate hidden units for study. In our study examples, we used low-dimensional hidden spaces so that all information has to be packed into limited dimensions and the distributive representation of information can be avoided. All experiments described in this section involved training a network to perform a particular task until the total error dropped below 0.1%. The relation between the states of the hidden units and the training patterns was studied and displayed in the form of diagrams. From these diagrams, we are able to visualize how the training patterns are transformed and encoded in the hidden layer by the connection weights. This also enables us to tell the distribution of the encoded information in the hidden space. This training procedure was repeated with different initial settings to exhaust all other possibilities of the hidden space patterns and to check the reproducibility of the results. The initial settings include the random initial weights, the gradient and the momentum coefficients, and the use of other training algorithms such as the

578 CONNECTIONIST REPRESENTATIONS
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
adaptive training [Chan & Fallside 1987].

3.1 The 4-2-4 encoder

We started the investigation with the 4-2-4 encoder problem [Hinton, Sejnowski & Ackley 1984]. A three-layered network in which the input and output layers have four units each and the hidden layer has two units was used. Table 1 shows the input and target states of the training patterns.

[Table 1: The training patterns used in the encoder problem (input states and output states).]

After a successful training process, the resulting hidden space pattern of this 4-2-4 encoder is shown in Fig. 1. Each axis of the space represents the state value of one of the hidden units, and the crosses indicate the values of the hidden units when one of the training patterns is presented to the network. Similar diagrams are obtained when we repeat the experiment with different initial settings. The hidden space pattern indicates that the hidden units encoded the four input patterns in a more or less binary fashion, and this encoding schema is the same as what we would expect from the traditional encoding method in a multiplexing system.

3.2 The 2^n - n - 2^n encoder

When the 4-2-4 encoder problem was further extended into 2^n - n - 2^n encoder problems, the hidden space pattern distribution was observed to be quite different. Theoretically, the back-propagation network is able to encode the training patterns into a binary representation as in the case of the 4-2-4 encoder problem. However, as n increases, the hidden space becomes less likely to encode patterns in a binary-valued representation. The total error of the network during the training process dropped to a very low value without the formation of a binary coding system inside the hidden space. Fig. 2 shows the hidden space patterns of the 8-3-8 encoder problem with different initial conditions. The binary-valued pattern shown in Fig.
2a had the highest occurrence, whereas the patterns in Fig. 2b and Fig. 2c, where the locations of the patterns had shifted from the corners of the space to the edges, appeared with a much lower probability. In the 16-4-16 encoder problem, almost none of the trials gave a binary-form representation in the hidden space. This demonstrates that a back-propagation network can provide encoding schemata in both binary and analog forms in the hidden space. This schema is not unique and depends on the initial setting of the training parameters. One suggested explanation of this phenomenon is that the number of hidden units has been increased and, therefore, the training patterns have more freedom to be distributed in the hidden space. In the next section, we show that the analog encoding schema of the multi-layer perceptron depends on the architecture of the network.

[Figure 1: The hidden space of the 4-2-4 encoder problem. The x and y axes show the ranges of the two hidden states from 0 to 1 respectively. The hidden states of the four training patterns are represented by the four crosses.]

[Figure 2: Three hidden spaces of the 8-3-8 encoder problem. The x, y and z coordinates of the cubes are the ranges of the three hidden units respectively. The encodings of the eight training patterns in the hidden space are represented by eight crosses in each cube.]

CHAN 579

The previous section showed that information can be encoded in either a binary or an analog manner in the hidden layer. We now concentrate on the study of the internal representation using analog values. In order to force the use of an analog representation in the hidden space, we constructed networks to encode more than four patterns in two hidden units.

4.1 One hidden layer

A n-2-n network (i.e. a network with n input units, 2 hidden units and n output units) was used to encode n training patterns. In this section, the orthogonal training patterns used in Section 3 were used, and they were extended to n orthogonal patterns in
n-dimensions. These training patterns force the hidden units to encode the binary input and output in an analog manner. Experimental results showed that the patterns in the hidden layer were arranged in an orderly manner, lying approximately on a circle (Fig. 3). This particular arrangement has a special feature: each cross can be separated from the others by a single straight line, and this enables each pattern to be distinguished from the others by using a decision boundary in a 2-dimensional space. Therefore, the presence of the hidden layer provides a transformation of the input patterns into another dimensional space, which enables the classification to take place more easily.

[Figure 3: The hidden space pattern of (a) the 6-2-6 encoder problem and (b) the 8-2-8 encoder problem.]

4.2 More hidden layers

A different hidden space pattern was obtained when more hidden layers were included in the back-propagation network. A network with 8 input units, 3 hidden layers with 4 units, 2 units and 4 units respectively, and an output layer with 8 output units was used to encode 8 orthogonal patterns. Fig. 4 shows the hidden space pattern of the middle hidden layer with two hidden units. Patterns were no longer arranged in a circle as in Fig. 3b, but spread all over the space. These patterns were organised in a different way because the presence of extra hidden layers between this middle hidden layer and the input/output layers brought an extra transformation between them.

[Figure 4: The middle-layer hidden space of the 8-4-2-4-8 encoder problem.]
Again, the extra transformation allows the network more freedom to form the coding methodology.

4.3 Non-orthogonal patterns

The above hidden space patterns were not obtainable when we altered the training patterns. When a different set of training patterns was used, the resultant hidden space diagram was different. In this section, we study the encoding property of a 4-2-4 network using non-orthogonal training patterns. Table 2 shows the six non-orthogonal patterns used in the experiments.

Input States | Output States
1100 | 1100
0110 | 0110
0011 | 0011
1010 | 1010
0101 | 0101
1001 | 1001

Table 2: The non-orthogonal training patterns.

When any four of the patterns were chosen and included in the training domain, the hidden space arrangement was the same as that of the orthogonal 4-2-4 encoder problem shown in Fig. 2. With the addition of an extra training pattern, the network was able to encode all training patterns successfully, and this training process took about 254 cycles. The resultant hidden space pattern of this network is shown in Fig. 5. It can be seen that one of the training patterns no longer resides on the circle but is located at the centre of the space. When six input patterns were presented, the network was unable to encode all patterns successfully.

The difference between the arrangements of the patterns in the hidden space is suggested to be due to the use of a different set of training patterns. In the experiments using orthogonal patterns, only one unit is active in each pattern, whereas in the non-orthogonal case, more than one unit is active. In the latter experiment, the connection weights were found to form the decision boundaries as shown in Fig. 5a. This allows the patterns to be separated by the decision boundaries and satisfies the input/output requirements as stated in the training sets.
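All of the encoder experiments above use ordinary back-propagation on a small autoencoder. A minimal sketch of the 4-2-4 case follows (batch-mode generalised delta rule; the learning rate, epoch count, and random seed are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def add_bias(a):
    # Append a threshold unit whose activation is always 1.
    return np.hstack([a, np.ones((a.shape[0], 1))])

# The four one-hot patterns of the 4-2-4 encoder (target == input).
X = np.eye(4)

# Small random initial weights, bias row included.
W1 = rng.uniform(-0.5, 0.5, (5, 2))   # input -> hidden
W2 = rng.uniform(-0.5, 0.5, (3, 4))   # hidden -> output

lr, losses = 1.0, []
for _ in range(5000):
    h = sigmoid(add_bias(X) @ W1)      # 2-d hidden code for each pattern
    y = sigmoid(add_bias(h) @ W2)
    err = y - X
    losses.append(np.square(err).sum())
    d2 = err * y * (1 - y)             # generalised delta rule, batch form
    d1 = (d2 @ W2[:-1].T) * h * (1 - h)
    W2 -= lr * add_bias(h).T @ d2
    W1 -= lr * add_bias(X).T @ d1

codes = sigmoid(add_bias(X) @ W1)      # one 2-d point per pattern, as in Fig. 1
```

The four rows of `codes` are the 2-dimensional hidden representations that the diagrams in this section plot as crosses.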
We can deduce from the above experiments that back-propagation networks classify different input patterns by transforming the pattern in one layer to another location in the hidden space of a higher layer. The rule of the transformation depends on the distribution of the training patterns. The number of hidden layers can affect the arrangements of the training patterns in each hidden layer. For patterns with both distinct input and distinct output states, their locations in the hidden space are far apart, so that their separations are large enough for discrimination to take place. Their distributions depend on the number of hidden layers and the distribution of training input and output states. On the other hand, for patterns with different input patterns but the same output states, their hidden space patterns are more difficult to describe. The network will organise itself in such a way that these patterns are grouped together so that they can be separated from the others by hyperplanes in the higher layers, e.g. the XOR problem, Fig. 6 [Rumelhart & McClelland 1986].

Acknowledgement. Part of the work in this paper was carried out in the Cambridge University Engineering Department, and the author thanks Prof. Frank Fallside for his guidance during that period.

References

[Burr 1988] Burr, D.J. 1988. Experiments on Neural Net Recognition of Spoken and Written Text. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(7):1162-1168.
[Chan & Fallside 1987] Chan, L-W. and Fallside, F. 1987. An Adaptive Training Algorithm for Back Propagation Networks. Computer Speech and Language, 2:205-218.
[Hinton, Sejnowski & Ackley 1984] Hinton, G.E.; Sejnowski, T.J. and Ackley, D.H. 1984. Boltzmann Machines: Constraint satisfaction networks that learn. Technical Report CMU-CS-84-119, Carnegie-Mellon University.
[Rumelhart & McClelland 1986] Rumelhart, D.E.
and McClelland, J.L., eds. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations. Bradford Books/MIT Press.
[Rumelhart, Hinton & Williams 1986] Rumelhart, D.E.; Hinton, G.E. and Williams, R.J. 1986. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, ed. by Rumelhart, D.E. & McClelland, J.L. Bradford Books/MIT Press.

[Figure 5: The hidden space patterns of the 4-2-4 network when five non-orthogonal training patterns were used.]

[Figure 6: The XOR problem (input layer pattern and hidden layer pattern).]
Lorien Y. Pratt
Computer Science Department
Rutgers University
New Brunswick, NJ 08903

Abstract

A touted advantage of symbolic representations is the ease of transferring learned information from one intelligent agent to another. This paper investigates an analogous problem: how to use information from one neural network to help a second network learn a related task. Rather than translate such information into symbolic form (in which it may not be readily expressible), we investigate the direct transfer of information encoded as weights. Here, we focus on how transfer can be used to address the important problem of improving neural network learning speed. First we present an exploratory study of the somewhat surprising effects of pre-setting network weights on subsequent learning. Guided by hypotheses from this study, we sped up back-propagation learning for two speech recognition tasks. By transferring weights from smaller networks trained on subtasks, we achieved speedups of up to an order of magnitude compared with training starting with random weights, even taking into account the time to train the smaller networks. We include results on how transfer scales to a large phoneme recognition problem.

Introduction

Recently, many empirical comparisons (surveyed in [Shavlik et al., 1991]) have been performed between neural network and symbolic machine learning methods on a variety of tasks. Back-propagation neural networks [Rumelhart et al., 1987] have been shown to perform competitively. At present, one reason to prefer symbolic representations is that they are more readily portable: we can simply copy axioms, rules, definitions, etc. between intelligent agents for reuse on new tasks. At least in principle, information learned by one agent is easy to share with others. It is less clear how to transfer information encoded in neural networks.
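One way to picture the direct transfer of weights that this paper investigates is to copy trained weight matrices from smaller source networks into part of a larger target network's weight matrix, leaving the remaining weights at the usual small random values. The sketch below is purely illustrative; the block layout and all names are our own, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

def transfer_init(source_blocks, n_in, n_hidden):
    """Initialise a target input-to-hidden weight matrix from smaller
    trained networks.

    Each source block is an (n_in, k) weight matrix taken from a sub-task
    network; its columns are copied onto the target's first hidden units,
    and any remaining hidden units keep small random initial weights."""
    W = rng.uniform(-0.5, 0.5, (n_in, n_hidden))
    col = 0
    for block in source_blocks:
        k = block.shape[1]
        W[:, col:col + k] = block
        col += k
    return W

src_a = np.full((4, 2), 1.0)    # stand-ins for trained sub-task weights
src_b = np.full((4, 1), -1.0)
W = transfer_init([src_a, src_b], n_in=4, n_hidden=5)
```

Training then proceeds from this non-random starting point instead of from scratch, which is where the reported speedups come from.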
A brute-force approach to information transfer in neural networks is to store and communicate the entire set of data used to train the network. Besides the obvious storage costs, this approach suffers from requiring the network to be retrained from scratch every time additional data is received. An indirect approach is to extract information in symbolic form from one network and insert it into another. While both the extraction [Fu, 1990] and insertion [Towell et al., 1990] processes are receiving some research attention, this indirect approach is limited by the fact that information encoded in the original network may be infeasible to express in symbolic form. In contrast, this paper investigates how information encoded as learned network weights can be directly transferred between neural networks, short-cutting the indirect approach, as shown in Figure 1. By exploiting previous learning, such a process has the potential to increase network performance and also to improve learning speed, which has been found to be much slower in general for neural networks than for symbolic methods.

We use the back-propagation algorithm for neural network learning. This algorithm iteratively modifies a set of initial weights, with the goal of building a network that gives correct answers on a corpus of training examples. Usually, the starting weights are chosen randomly. Here, we show how using previously learned information to initialize weights non-randomly can speed up network learning. Following a brief introduction to back-propagation, we present a pilot study that explores transfer from a single source to a single target network, as shown in the center of Figure 1. Then we describe two studies on more complex tasks that demonstrate transfer from multiple source networks to different portions of the target network.

Back-propagation is an inductive algorithm for concept learning and regression. A back-propagation neural network contains several layers of computational units, connected via
A back-propagation neural network contains several layers of computational units, connected via 584 CONNECTIONIST REPRESENTATIONS From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved. weighted arcs. A typical network has an input layer, a single hidden layer, and an output layer of units, with full connec- tivity between adjacent layers. Each unit has an associated state, called its activation, typically a real number in the in- terval [0, 11. A distinguished unit called a threshold is usu- ally included in the input and hidden layers. Its activation is always I, and its weight is called a bias. When using a learned network, input unit activations are set externally and used to calculate activations in subsequent layers (this process is mlled feedforward). The network’s solution to the problem represented by the input activations is read from output layer activations. For a given connec- tion topology, network behavior is determined by the arc weight values. To determine its activation, each unit first calculates its input, via an inputfunction. This is typically a linear combination of incoming unit activations times con- necting weights. The input is fed through a squashingfunc- tion, usually sigmoidal in shape, which maps the input value into [0, 11. Learning in back-propagation networks consists of mod- ifying weight values in response to a set of training data patterns, each of which specifies an activation for each in- put unit and the desired target value for each output unit. Network weights are initialized randomly, usually in some small-magnituderange such as [-.5, .5]. Then the first train- ing data item is used to set input unit activations, and feed- forward calculates output unit activations. They are com- pared to target values, and an error is determined for each output unit. This error is used to determine the change to each network weight. 
This process is repeated for all training patterns - one pass through them all is an epoch. Typically, learning requires thousands of epochs, during which weights are updated by small increments until output vectors are close to target vectors. Weight increment size is determined in part by the user-supplied parameters η (learning rate) and α (momentum). For more details, see [Rumelhart et al., 1987].

To use back-propagation for concept learning tasks, a unary encoding is usually used for output units, each of which represents a different category. Target vectors contain a 1 in the position corresponding to the desired category, and 0's elsewhere. When the network is used to classify an input vector, the highest output unit activation is used to determine the network's chosen category.

In this section, we present an exploratory study of the dynamics of networks that have some of their initial weights pre-set. Consider a single-hidden-layer network that is fully connected in the input-to-hidden (IH) and hidden-to-output (HO) layers, and trained for concept learning using unary encoding. Let the input function be linear, so the input to unit j is I_j = Σ_i y_i w_ij, where y_i is the activation of incoming unit i, and w_ij is the weight between units i and j. Let the squashing function be the step function:

    a_j = 1 if I_j + bias > 0
          0 if I_j + bias < 0

This function is a simplification of the sigmoid. It is commonly used for analysis, since for high weight values it approximates the sigmoid. Consider a space of n dimensions, where n is the number of input units. As shown in Figure 2, training data define points in this space, which can be labelled by corresponding target values. Weights leading from input units to a particular hidden unit, along with the hidden unit bias, determine a hyperplane-bounded decision region in this space. For n = 2, the hyperplane is a line.
Input vectors on one side of this hyperplane cause a hidden unit activation of 0; vectors on the other side cause an activation of 1. One condition for successful back-propagation learning is that IH hyperplanes separate the training data such that no decision region contains training data items with different target values. Another condition for learning is that IH hyperplanes are placed such that there is a correct configuration of HO hyperplanes possible (i.e. that hidden layer activations are linearly separable).

[Figure 2: Two examples of hyperplane sets that separate training data in a small network. Each panel plots the two input features on [0.0, 1.0] axes.]

Since separating training data items for different targets by IH hyperplanes is one condition for learning, it is reasonable to expect that pre-setting IH weights to produce such hyperplanes will lead to faster learning. We performed a series of experiments to test this hypothesis.

All networks studied had a 2-3-1 (2 input units, 3 hidden units, 1 output unit) topology. They were trained on a small set of hand-chosen training data that was not linearly separable and that contained 6 patterns (Figure 2). Each training data target was either 0 or 1. A manual search for values of η and α (10 pairs tried, on networks with random initial weights) resulted in locally optimal values of η = 1.5, α = .9, which were used for all experiments. Standard back-propagation [Rumelhart et al., 1987] was used, with a sigmoidal activation function, training after every pattern presentation, and training data presented sequentially.

A number of experiments were performed, summarized in Table 1. Each row of this table shows results from a set of 30 trials (each with different random initial weights) of the conditions shown. Every network was trained for 2000 epochs.
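The hyperplane view of a step-function hidden unit can be sketched directly (a minimal illustration with hand-picked weights, not values from the paper):

```python
def step_hidden_activation(x, w, bias):
    """Step-function hidden unit: the weights w and bias define a hyperplane
    w . x + bias = 0. Inputs on one side activate the unit (1); inputs on
    the other side do not (0)."""
    I = sum(xi * wi for xi, wi in zip(x, w))  # linear input function
    return 1 if I + bias > 0 else 0

# For n = 2 input units the hyperplane is a line; here it is x1 + x2 = 1.
w, bias = [1.0, 1.0], -1.0
assert step_hidden_activation([0.9, 0.9], w, bias) == 1  # above the line
assert step_hidden_activation([0.1, 0.1], w, bias) == 0  # below the line
```

Back-propagation learning can then be read as moving these lines until every decision region contains training points with only one target value.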
Experiments fell into the following five categories:

Random low-magnitude initial weights (Exp. 1): In this control experiment, 30 networks were initialized with random weights in the range [-0.5, 0.5] (average IH, HO magnitudes of 0.25). 28 trials converged to a solution. Convergence time was measured as the number of epochs of training before the total squared error over all patterns (TSS) dropped below 0.1. As shown, the mean time to converge for the 28 networks was 528 epochs.

PRATT, MOSTOW, & KAMM 585

Table 1: Conditions and results for the pilot study.

  Exp# / IH source   IH avg.  HO avg.  # conv-  Mean epochs  Signif. diff.
                     mag.     mag.     erging   to converge  from #1?
  1: Random          .25      .25      28       528          -
  2: Preset          .25      .25      26       501          N
  3: Preset          2.5      .00625   26       228          Y
  4: Preset          5.0      .00625   23       151          Y
  5: 0.6-perturbed   2.5      .00625   24       199          Y
  6: Centroid        .25      .25      28       591          N
  7: Centroid        1.25     .00625   30       859          Y
  8: Centroid        2.5      .00625   20       898          Y
  9: Random          2.5      .00625   23       329          N

Pre-set separating IH weights, random HO weights (Exps. 2, 3, 4): 30 converging networks were generated, and their separating IH weights were extracted and used to initialize a new set of 30 networks (not all of which converged). Note that since a hyperplane position is determined by the ratio, not the magnitude, of its defining weights, it is possible to adjust hyperplane magnitudes arbitrarily while retaining their positions. IH and HO magnitudes were rescaled in a variety of ways, as shown. As the IH magnitude was raised, fewer networks converged, but those that did had shorter training times (significantly shorter in studies 3 and 4, with p < .0001, df = 50, 44, according to a t-test).¹

Perturbed (Exp. 5): The same weights as in the previous experiments were used, but each IH weight was modified to be w = w + r * w, where r was a random variable in [-0.6, 0.6]. This produced hyperplanes that didn't separate training data completely, but were in the proximity of effective final positions.
Perturbed networks trained significantly faster (p < .0001, df = 46) than randomly initialized networks.

Centroid initialization (Exps. 6, 7, 8): To test the generality of the idea of placing hyperplanes near training data, IH hyperplanes were initialized to pass through the training data centroid (median value of each input dimension). As shown, at best this produced training times no different from the random case. At worst, learning time was longer.

Random high-magnitude initial weights (Exp. 9): This experiment verified that the speed-up found with high weight magnitudes was due to both their positions and their magnitudes, instead of just their magnitudes. As shown, the high-magnitude random networks did not converge significantly faster (df = 44) than low-magnitude initialized networks.

¹In order to perform a fair significance test in comparisons with randomly initialized networks, when only some subset k < 28 of networks converged, the (30 - k) largest times were removed. For Experiment 7, the two worst centroid-initialized scores were removed, since it converged more often than the random case.

Discussion

These results yield several observations that can be used as hypotheses for investigation on more complex tasks:

- Learning was faster in networks with pre-set weights than when weights were randomly initialized. In Experiments 3, 4, and 5, learning speed was significantly faster (p < .0001, df = 50, 44, 46) than when weights were initialized randomly.

- Learning was faster in networks with pre-set weights specifying near-correct hyperplane positions than when weights were randomly initialized. This was shown in Experiment 5.

- Surprisingly, correctly pre-set hyperplanes moved out of position. This happened when IH magnitudes were too low, as in Experiment 2. Hyperplanes were observed to diverge rapidly in early epochs, losing their preset positions.

- Raising weight magnitudes made hyperplanes more retentive. In Experiment 4, hyperplanes moved out of position to a much smaller degree than in Experiment 2.
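The rescaling used in Experiments 2-4 exploits the fact noted above: a hyperplane's position depends only on the ratios of its defining weights, so multiplying a hidden unit's incoming weights and bias by the same positive constant raises their magnitudes without moving the hyperplane. A minimal sketch (helper names are ours):

```python
def hyperplane_side(x, w, bias):
    """Which side of the hyperplane w . x + bias = 0 the point x lies on."""
    return sum(xi * wi for xi, wi in zip(x, w)) + bias > 0

def rescale(w, bias, factor):
    """Scale weight magnitudes by a positive factor while preserving the
    hyperplane position (only the weight ratios determine the position)."""
    return [wi * factor for wi in w], bias * factor

w, bias = [0.2, -0.4], 0.1
w_hi, bias_hi = rescale(w, bias, 10.0)  # raise magnitudes as in Exps. 3-4
for x in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    # every point stays on the same side of the rescaled hyperplane
    assert hyperplane_side(x, w, bias) == hyperplane_side(x, w_hi, bias_hi)
```

The perturbation of Experiment 5 (w + r * w with r in [-0.6, 0.6]) is the opposite operation: it changes the weight ratios, so the hyperplanes move, but only into the neighborhood of their original positions.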
The mean number of training data patterns crossed by moving hyperplanes during learning was 24.6 in Experiment 4, compared to 36.4 for Experiment 2. The difference was significant with p < 0.01, df = 58.

- Fastest learning was obtained when weights were pre-set in the correct positions and weight magnitudes were raised to make them retentive. This was shown in Experiment 4.

- Networks with pre-set hyperplanes tended not to converge as often. This was shown by Experiments 2-5 and 8.

Additional small experiments (only 10 runs, no significance testing) were performed on this same task to determine whether these hypotheses are also supported when a different learning rate is used: in this case η = 0.1 was tried (instead of 1.5). The results of these experiments were consistent with the above observations.

In the previous section, we showed that hyperplanes initialized near correct positions may move into position, thus speeding up learning. Here we extend those results to a more complex task. We demonstrate a technique for pre-setting network weights that produces faster learning, even taking into account the time to learn pre-set weights. This is because the original problem is decomposed into subproblems, thereby reducing search combinatorics. In this and the next section we borrow a decomposition technique introduced by [Waibel et al., 1989], who showed improved learning speed and performance in networks for consonant recognition.

Using data from [Robinson, 1989], we trained a network to perform a vowel recognition task. It had 10 input units (representing a preprocessed speech signal) and 11 output units, each representing a vowel sound. Training was on 528 vowels pronounced by 8 speakers, and the test set was 462 vowels from 7 speakers not included in that set.
Four baseline networks were trained, each with 26 hidden units and no decomposition. We next studied the performance of networks in which IH and HO hyperplanes were pre-set through problem decomposition. The architecture used is illustrated in Figure 3. The number of weights in these networks was approximately the same as that in the non-decomposed networks.

[Figure 3: Architecture of the decomposed vowel recognition network. Arrows delimit regions of full connectivity. Numbers indicate unit counts.]

The training methodology used by [Waibel et al., 1989] for decomposed networks included these steps:

1. Subnetwork training: Divide the network into subsets; train each individually.

2. Glue training: Combine subnetworks into a larger network using additional "glue" units. These are trained while subnetwork weights are frozen.

3. Fine tuning: All weights are modified during further training.

Although fine tuning may be useful in some problems, there are a few reasons to question whether it will always improve network performance. First, for fine tuning to be useful, it would seem to require that, as in Experiment 5 of the previous section, subnetwork training develops a set of roughly positioned hyperplanes, and fine-tuning moves them into place. This won't necessarily happen in all decomposed networks - there may be local minima that are optimal for subproblems but are not near an optimum solution for the full problem. Secondly, weights often grow during back-propagation learning, so it may be important to set relative magnitudes between glue and subnetworks systematically. This was not part of the fine-tuning step in [Waibel et al., 1989]. Finally, for some networks, if glue training is stopped before overfitting begins, fine-tuning has the potential to cause overfitting, reducing network performance.
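The three-stage schedule can be summarized in a small control-flow sketch. All names here are hypothetical placeholders (the paper does not specify an implementation); `train_subnet` and `train_weights` stand in for whatever back-propagation routines are used:

```python
def train_decomposed(subnets, glue, train_subnet, train_weights):
    """Three-stage decomposed training: subnetwork training, glue training
    with subnetwork weights frozen, then fine tuning of all weights."""
    # 1. Subnetwork training: each subnetwork learns its subproblem alone.
    for net in subnets:
        train_subnet(net)
    # 2. Glue training: only glue weights are trainable; subnet weights frozen.
    frozen = [w for net in subnets for w in net["weights"]]
    train_weights(glue["weights"], frozen=frozen)
    # 3. Fine tuning: every weight in the combined network is trainable.
    all_weights = frozen + glue["weights"]
    train_weights(all_weights, frozen=[])
    return all_weights
```

The design question the paper raises about step 3 shows up clearly here: fine tuning hands the frozen subnetwork weights back to the optimizer, so any mismatch between subproblem optima and the full-problem optimum, or between glue and subnetwork magnitudes, can undo the work of the first two stages.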
We sought both to test whether problem decomposition was useful in decreasing training time and to explore whether fine tuning did indeed improve performance on this task. Five different decomposed networks were trained, each with different initial random weights; they were compared to the four non-decomposed networks. We used an "oversized" network training technique [Weigend et al., 1990] to control for overfitting: networks were trained until their errors on the test set ceased to improve. We trained each subnetwork on all input vectors in the training set, with all-zero target vectors for input patterns in classes outside of those corresponding to a subnetwork's output units. We performed t-tests of significance on the learning time and performance difference between the two populations of decomposed and non-decomposed networks. Although space limitations preclude reporting all experimental details, major results are outlined below.

Generalization: No significant difference in test set score was found between performance of the decomposed and non-decomposed networks (p > 0.05, df = 7). Performance scores, in percent correct on the test data, were 53, 58, 55, 56, 55 for the decomposed and 58, 61, 59, 54 for the non-decomposed networks.

Learning speed: Decomposed networks learned faster than mono networks. The mean decomposed time was 40% of the mean non-decomposed time (learning significantly faster with p < .01, df = 7). The mean decomposed learning time was 1.348 x 10^9 operations (3096 epochs).

Fine tuning: For one arbitrarily chosen set of subnetwork weights, four different networks were trained, starting with different random glue weights. Generalization scores were calculated every 10 epochs. The best scores observed were 47%, 35%, 43%, and 52%. These are markedly lower than the scores at the end of glue training. Substituting further glue training for fine tuning did not cause such a drop in performance.
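The stopping rule used with the oversized-network technique (train until held-out error ceases to improve) can be sketched as follows; the function names and the `patience` parameter are our own generalization, not from the paper:

```python
def train_until_no_improvement(train_epoch, held_out_error, patience=1):
    """Train until error on a held-out set stops improving.

    train_epoch()     : runs one epoch of training
    held_out_error()  : returns current error on the held-out set
    patience          : how many non-improving epochs to tolerate
    """
    best, stale, epochs = float("inf"), 0, 0
    while stale < patience:
        train_epoch()
        epochs += 1
        err = held_out_error()
        if err < best:
            best, stale = err, 0
        else:
            stale += 1
    return best, epochs
```

For example, a simulated error sequence 5.0, 4.0, 3.0, 3.5, 3.6 with patience 2 stops after the fifth epoch with a best error of 3.0.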
Note that this study differs from [Waibel et al., 1989] in that it used a network with one hidden layer instead of two. It also uses different training data than their consonant recognition task. Under these different conditions, our experiments also show improved learning speed from problem decomposition. We have also explored the utility of fine tuning and found, in contrast to the previous work, that it may not always be useful. Finally, we have established that the network training time speedup is statistically significant.

In the following section, we explore a further modification of the problem decomposition model. As in the vowel recognition task, this network is decomposed into subnetworks which are trained separately. However, we also decompose the set of input units.

[Kamm and Singhal, 1990] describe a neural network for learning part of an acoustic-to-phoneme mapping task. This network maps sequences of a spectral representation of a speech signal into sequences of phoneme classes. Although further high-level processing is necessary to determine the exact word pronounced (by disambiguating phonemes via context and other constraints), this initial phoneme determination is a critical phase of speech recognition.

The training set for this experiment was the DARPA acoustic-phonetic corpus [Fisher et al., 1987], which contains recordings of continuous utterances from a large number (630) of talkers. This corpus is used extensively in phoneme recognition research. The decomposition strategy and training methodology described in the previous section were applied to this more complex task. A network was built with the architecture shown in Figure 4.
Here, each of three subnetworks views input data spanning a different duration of a speech signal.

[Figure 4: Architecture of the decomposed phoneme classification network. Three subnetworks of 147 input nodes each span 35ms, 65ms, and 125ms of the signal; the short- and intermediate-duration phoneme subnetworks are combined with glue units.]

[Kamm and Singhal, 1990] reported the construction of three non-decomposed networks, one for each input duration. Their results showed that networks with input durations of 35ms and 65ms had much higher performance for phonemes with short average duration than for the longest phonemes (diphthongs). Based on this finding, we decomposed the problem on the output units so that the two subnetworks with short-duration input spans "specialized" on subsets of phonemes with short and intermediate average durations. The three subnetworks were trained individually on a 200-sentence training set (108983 patterns), and then their weights were placed into the combined network, along with glue weights that were randomly initialized in [-0.3, 0.3]. These low magnitudes were chosen to give the network flexibility in moving glue hyperplanes. Glue training and fine-tuning were then performed.

As described in more detail in [Pratt and Kamm, 1991], network performance was evaluated by calculating a normalized average hit rate (AHR) and false alarm rate (AFA) among the top 3 phonemes out of the 46 target classes. Learning time was measured as the number of arithmetic operations required to calculate unit activations and weight update values. Table 2 shows training conditions and results (as tested on an independent 200-sentence set), as well as those for [Kamm and Singhal, 1990]'s best network (with 125ms-duration input). The second and third columns in Table 2 show values of η and α selected (by training several networks to a couple of epochs) for training.
If we measure overall performance as the top-3 AHR minus top-3 AFA, then the final network obtained after fine tuning achieves slightly more than 100% of that of the [Kamm and Singhal, 1990] network, after approximately 10% of the operations.

Table 2: Conditions and results for the phoneme recognition network.

  Network          η     α     Epochs  Total op's  top-3  top-3
                                       x 10^9      AHR    AFA
  K&S net.         .1    .1    300     10.62       61.7   3.1
  35ms subnet      2.5   .1    7       .06         70.1   7.4
  65ms subnet      1.2   .2    3       .04         65.2   5.8
  125ms subnet     3.0   .01   10      .35         33.1   4.3
  glue training    2.0   .5    9       .46         56.5   4.0
  fine tuning      2.0   .5    2       .19         62.4   3.5
  Decomposed
  network total                31      1.10

This is a substantial speed-up over the previous training time. More details about the acoustic-phonetic network can be found in [Pratt and Kamm, 1991].

The vowel recognition and phoneme recognition problem decomposition experiments demonstrated some of the same major findings as the less complex pilot study, despite fundamental differences in the nature of the task decomposition among the studies. In all three studies, time to convergence was shorter when some of the weights in the network were pre-set than when all weights were set to random initial values. Furthermore, when the pre-set weights had relatively high magnitudes, the networks were able to retain them during further training. In addition, the vowel task indicated that fine-tuning of a decomposed network may not always improve performance.

Note that, by considering transfer of partially incorrect weights, we are addressing a more complex issue than if multiple network weight sets were simply glued together in a modular system at run time. In contrast, we have explored how back-propagation learning uses both error-free and errorful pre-set weight subsets.

These experiments leave many open questions for further research, including the following topics.
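The headline comparison can be checked directly from the Table 2 figures (overall score = top-3 AHR minus top-3 AFA; speed ratio = total decomposed operations over K&S operations):

```python
# Overall performance and operation-count comparison from Table 2.
ks_score = 61.7 - 3.1          # K&S network: AHR minus AFA
decomposed_score = 62.4 - 3.5  # decomposed network after fine tuning
ops_ratio = 1.10 / 10.62       # decomposed ops vs. K&S ops (both x 10^9)

assert decomposed_score > ks_score   # slightly more than 100% of K&S performance
assert round(ops_ratio, 2) == 0.10   # about 10% of the operations
```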
- More systematic studies of back-propagation dynamics on complex tasks should be done in order to further explore the pilot study hypotheses.

- More work is necessary to establish how problem decomposition can best be combined with the oversized training methodology used on the vowel recognition task.

- The speed-up observed in decomposed vs. non-decomposed networks might be due not to the decomposed training methodology, but to the fact that the networks were trained using a constrained topology. This possibility should be explored empirically.

- It is interesting that weight magnitudes from prior training worked so well in the problem decomposition tasks. This may have been due to the high network weights generated by subnetwork training. For example, the average subnetwork weight magnitude in the vowel study was 15.5. For source and target network tasks that differ substantially, careful magnitude tuning may be necessary, to avoid local minima. Furthermore, for a technique like that described in [Towell et al., 1990] (which uses weight sets obtained by means other than prior network training), more attention to weight magnitudes may be helpful in dealing with potentially incorrect initial weights.

- The network decomposition on the vowel recognition task was chosen arbitrarily. It is important to characterize the nature of decompositions for which speedup occurs. Careful analysis of subproblem interactions should aid in this endeavor. Also, further experiments with different arbitrary decompositions should indicate sensitivity to particular decompositions. It should also be fruitful to explore automated methods for guiding problem decomposition using domain knowledge.

- When training data is impoverished (noisy, incorrect, incomplete), it may be possible to achieve a performance improvement by using pre-set weights.
Although this question has been explored in related contexts [Towell et al., 1990], [Waibel et al., 1989], an important open issue is whether direct network transfer produces significant performance improvement over randomly initialized networks.

- The model of transfer used here decouples initial weight determination from learning. Therefore, the learning algorithm can probably be changed (for example to Conjugate Gradient [Barnard and Cole, 1989]) without changes to the transfer process. A study should be performed to verify that transfer remains effective with this and other learning algorithms.

- Finally, our most active current area of research explores transfer between networks trained on different but related populations of training data, for source and target networks with the same topology. For example, speaker-dependent training may be sped up by transferring weights from a network trained on multiple speakers. The effectiveness of transfer should be evaluated under conditions of different relationships between source and target training data (i.e. superset → subset, subset → superset, disjoint but related populations, etc.).

We have addressed the question of how information stored in one neural network may be transferred to another network for a different task. We explored the behavior of back-propagation when some weights in a network are pre-set, and we studied the effect of using weights from pre-trained subnets on learning time for a larger network. Our results demonstrated that the relative magnitudes of the pre-set weights (compared to the untrained weights) are important for retaining the locations of pre-trained hyperplanes during subsequent learning, and we showed that learning time can be reduced by a factor of 10 using these task decomposition techniques. Techniques like those described here should facilitate the construction of complex networks that address real-world problems.
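The core transfer operation - initializing a target network's weights from a trained source network, rescaling magnitudes if desired, and falling back to small random values where no source weight exists - can be sketched as follows (a hypothetical helper, not an implementation from the paper):

```python
import random

def transfer_init(source_weights, shape, scale=1.0):
    """Initialize a target weight matrix of the given (rows, cols) shape
    from a trained source network's weight matrix. Source weights are
    copied (optionally rescaled); positions with no source counterpart
    get small random values in [-0.5, 0.5], as in ordinary initialization."""
    rows, cols = shape
    return [[source_weights[i][j] * scale
             if i < len(source_weights) and j < len(source_weights[0])
             else random.uniform(-0.5, 0.5)
             for j in range(cols)]
            for i in range(rows)]
```

Setting `scale` above 1 corresponds to the magnitude-raising trick from the pilot study, which makes the transferred hyperplanes more retentive during subsequent training.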
Acknowledgments

Thanks to Steve and Bellcore for financial support during 1990. Thanks also to David Ackley, Sharad Singhal, and especially the anonymous reviewers, who provided helpful comments on previous drafts. Also, Michiel Noordewier and Haym Hirsh provided critical support and encouragement for the research program on which this paper is based.

References

Barnard, Etienne and Cole, Ronald A. 1989. A neural-net training program based on conjugate-gradient optimization. Technical Report CSE 89-014, Oregon Graduate Center.

Fisher, W. M.; Zue, V.; Bernstein, J.; and Pallett, D. 1987. An acoustic-phonetic data base. J. Acoust. Soc. Am. Suppl. 81(1):S92.

Fu, Li-Min 1990. Recognition of semantically incorrect rules: A neural-network approach. In Proceedings of the Third International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE '90. Association for Computing Machinery.

Kamm, C. A. and Singhal, S. 1990. Effect of neural network input span on phoneme classification. In Proceedings of the International Joint Conference on Neural Networks, 1990, volume I, 195-200, San Diego. IEEE.

Pratt, L. Y. and Kamm, C. A. 1991. Improving a phoneme classification neural network through problem decomposition. In Proceedings of the International Joint Conference on Neural Networks (IJCNN-91). IEEE. Forthcoming.

Robinson, Anthony John 1989. Dynamic Error Propagation Networks. Ph.D. Dissertation, Cambridge University, Engineering Department.

Rumelhart, D. E.; Hinton, G. E.; and Williams, R. J. 1987. Learning internal representations by error propagation. In Rumelhart, David E. and McClelland, James L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1. MIT Press: Bradford Books. 318-362.

Shavlik, J. W.; Mooney, R. J.; and Towell, G. G. 1991. Symbolic and neural net learning algorithms: An experimental comparison. Machine Learning 6(2):111-143.
Towell, Geoffrey G.; Shavlik, Jude W.; and Noordewier, Michiel O. 1990. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of AAAI-90, 861-866. AAAI, Morgan Kaufmann.

Waibel, Alexander; Sawai, Hidefumi; and Shikano, Kiyohiro 1989. Modularity and scaling in large phonemic neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing 37(12):1888-1898.

Weigend, Andreas S.; Huberman, Bernardo A.; and Rumelhart, David E. 1990. Predicting the future: A connectionist approach. Technical Report Stanford-PDP-90-01, PARC-SSL-90-20, Stanford PDP Research Group, Stanford, California 94305-2130.
Error-Correcting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs

Thomas G. Dietterich and Ghulum Bakiri
Department of Computer Science
Oregon State University
Corvallis, OR 97331-3202

Abstract

Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying large collections of training examples of the form (x_i, f(x_i)). Existing approaches to this problem include (a) direct application of multiclass algorithms such as the decision-tree algorithms ID3 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output codes such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which BCH error-correcting codes are employed as a distributed output representation. We show that these output representations improve the performance of ID3 on the NETtalk task and of backpropagation on an isolated-letter speech-recognition task. These results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.

Introduction

The task of learning from examples is to find an approximate definition for an unknown function f(x) given training examples of the form (x_i, f(x_i)). For cases in which f takes only the values {0, 1} - binary functions - there are many algorithms available. For example, the decision tree methods, such as ID3 (Quinlan, 1983, 1986b) and CART (Breiman, Friedman, Olshen & Stone, 1984), can construct trees whose leaves are labelled with binary values.
Most artificial neural network algorithms, such as the perceptron algorithm (Rosenblatt, 1958) and the error backpropagation (BP) algorithm (Rumelhart, Hinton & Williams, 1986), are best suited to learning binary functions. Theoretical studies of learning have focused almost entirely on learning binary functions (Valiant, 1984; COLT 1988, 1989, 1990).

In many real-world learning tasks, however, the unknown function f takes on values from a discrete set of "classes": {c_1, ..., c_k}. For example, in medical diagnosis, the function might map a description of a patient to one of k possible diseases. In digit recognition, the function maps each hand-printed digit to one of k = 10 classes.

Decision-tree algorithms can be easily generalized to handle these "multiclass" learning tasks. Each leaf of the decision tree can be labelled with one of the k classes, and internal nodes can be selected to discriminate among these classes. We will call this the direct multiclass approach.

Connectionist algorithms are more difficult to apply to multiclass problems, however. The standard approach is to learn k individual binary functions f_1, ..., f_k, one for each class. To assign a new case x to one of these classes, each of the f_i is evaluated on x, and x is assigned the class j of the function f_j that returns the highest activation (Nilsson, 1965). We will call this the one-per-class approach, since one binary function is learned for each class.

Finally, a third approach is to employ a distributed output code. Each class is assigned a unique binary string of length n; we will refer to these as "codewords." Then n binary functions are learned, one for each bit position in these binary strings. These binary functions are usually chosen to be meaningful, and often independent, properties in the domain.
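The one-per-class decision rule is a simple argmax over the k learned binary functions (a minimal sketch; the function name is ours):

```python
def one_per_class_predict(x, binary_fns):
    """One-per-class approach: evaluate the k learned binary functions on x
    and assign x to the class whose function returns the highest activation."""
    activations = [f(x) for f in binary_fns]
    return max(range(len(binary_fns)), key=lambda j: activations[j])

# Three classes; the class-1 function fires most strongly on this input.
fns = [lambda x: 0.1, lambda x: 0.9, lambda x: 0.4]
assert one_per_class_predict("example", fns) == 1
```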
For example, in the NETtalk system (Sejnowski & Rosenberg, 1987), a 26-bit distributed code was used to represent phonemes and stresses. The individual binary functions (bit positions) in this code corresponded to properties of phonemes and stresses, such as "voiced," "labial," and "stop." By representing enough distinctive properties of phonemes and stresses, each phoneme/stress combination can have a unique codeword.

For distributed output codes, training is accomplished as follows. For an example from class i, the desired outputs of the n binary functions are specified by the codeword for class i. With artificial neural networks, these n functions can be implemented by the n output units of a single network. With decision trees, n separate decision trees are learned, one for each bit position in the output code.

New values of x are classified by evaluating each of the n binary functions to generate an n-bit string s. This string is then compared to each of the k codewords, and x is assigned to the class whose codeword is closest, according to some distance measure, to the generated string s.

This review of methods for handling multiclass problems raises several interesting questions. First, how do the methods compare in terms of their ability to classify unseen examples correctly? Second, are some of the methods more difficult to train than others (i.e., do they require more training examples to achieve the same level of performance)? Third, are there principled methods for designing good distributed output codes?

To answer these questions, this paper begins with a study in which the decision-tree algorithm ID3 is applied to the NETtalk task (Sejnowski & Rosenberg, 1987) using three different techniques: the direct multiclass approach, the one-per-class approach, and the distributed output code approach. The results show that the multiclass and distributed output code approaches generalize much better than the one-per-class approach.
It is helpful to visualize the output code of a learning system as a matrix whose rows are the classes and whose columns are the n binary functions corresponding to the bit positions in the codewords. In the one-per-class approach, there are k rows and k columns, and the matrix has 1's only on the diagonal. In the distributed output code approach, there are k rows and n columns, and the rows of the matrix give the codewords for the classes.

From this perspective, the two methods are closely related. A codeword is assigned to each class, and new examples are classified by decoding to the nearest of the codewords. This perspective suggests that a better distributed output code could be designed using error-correcting code methods. Good error-correcting codes choose the individual codewords so that they are well-separated in Hamming distance. The potential benefit of such error correction is that the system could recover from errors made in learning the individual binary functions. If the minimum Hamming distance between any two codewords is d, then ⌊(d - 1)/2⌋ errors can be corrected.

The "code" corresponding to the one-per-class approach has a minimum Hamming distance of 2, so it cannot correct any errors. Similarly, many distributed output codes have small Hamming distances, because the columns correspond to meaningful orthogonal properties of the domain. In the Sejnowski-Rosenberg code, for example, the minimum Hamming distance is 1, because there are phonemes that differ only in whether they are voiced or unvoiced. These observations suggest that error-correcting output codes could be very beneficial.

On the other hand, unlike either the one-per-class or distributed-output-code approaches, the individual bit positions of error-correcting codes will not be meaningful in the domain. They will constitute arbitrary disjunctions of the original k classes.
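Nearest-codeword decoding, and the error correction it buys, can be sketched directly (a minimal illustration; the codewords below are hand-chosen, not a BCH code):

```python
def hamming(a, b):
    """Number of bit positions in which two equal-length strings differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def decode(bits, codewords):
    """Classify by mapping the n learned bit predictions to the class whose
    codeword is nearest in Hamming distance. A code with minimum pairwise
    distance d corrects floor((d - 1) / 2) bit errors."""
    return min(range(len(codewords)), key=lambda c: hamming(bits, codewords[c]))

# Three classes with pairwise Hamming distance 4: any single-bit error
# in the learned binary functions is still decoded to the right class.
codewords = [(0,0,0,0,0,0), (1,1,1,1,0,0), (0,0,1,1,1,1)]
assert decode((1,1,1,1,0,1), codewords) == 1  # one bit flipped, still class 1
```

By contrast, the one-per-class "code" (the identity matrix) has minimum distance 2, so a single flipped bit already produces a tie or a misclassification.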
If these functions are difficult to learn, then they may negate the benefit of the error correction. We investigate this approach by employing BCH error-correcting codes (Bose & Chaudhuri, 1960; Hocquenghem, 1959). The results show that while the individual binary functions are indeed more difficult to learn, the generalization performance of the system is improved. Furthermore, as the length of the code n is increased, additional performance improvements are obtained. Following this, we replicate these results on the ISOLET isolated-letter speech recognition task (Cole, Muthusamy & Fanty, 1990) using a variation on the back propagation algorithm. Our error-correcting codes give the best performance attained so far by any method on this task. This shows that the method is domain-independent and algorithm-independent.

A Comparison of Three Multiclass Methods

In Sejnowski and Rosenberg's (1987) NETtalk system, the task is to map from English words (i.e., strings of letters) into strings of phonemes and stresses. For example, f("lollypop") = ("lal-ipap", ">1<>0>2<"), where "lal-ipap" is a string of phonemes, and ">1<>0>2<" is a string of stress symbols. There are 54 phonemes and 6 stresses in the NETtalk formulation of this task. Note that the phonemes and stresses are aligned with the letters of the original word. As defined, f is a very complex discrete mapping with a very large range. Sejnowski and Rosenberg reformulated f to be a mapping g from a seven-letter window to a phoneme/stress pair representing the pronunciation of the letter at the center of the window.
For example, the word "lollypop" would be converted into 8 separate seven-letter windows (blanks shown as "_"):

g("___loll") = ("l", ">")
g("__lolly") = ("a", "1")
g("_lollyp") = ("l", "<")
g("lollypo") = ("-", ">")
g("ollypop") = ("i", "0")
g("llypop_") = ("p", ">")
g("lypop__") = ("a", "2")
g("ypop___") = ("p", "<")

The function g is applied to each of these 8 windows, and then the results are concatenated to obtain the phoneme and stress strings. This mapping function g now has a range of 324 possible phoneme/stress pairs. This is the task that we shall consider in this paper.

DIETTERICH & BAKIRI 573

The Data Set

Sejnowski and Rosenberg provided us with a dictionary of 20,003 words and their corresponding phoneme and stress strings. From this dictionary we drew at random (and without replacement) a training set of 1000 words and a testing set of 1000 words. It turns out that of the 324 possible phoneme/stress pairs, only 126 appear in the training set, because many phoneme/stress combinations make no sense (e.g., consonants rarely receive stresses). Hence, in all of the experiments in this paper, the number of output classes is only 126.

Input and Output Representations

In all of the experiments in this paper, the input representation scheme introduced by Sejnowski and Rosenberg for the seven-letter windows is employed. In this scheme, the window is represented as the concatenation of seven 29-bit strings. Each 29-bit string represents a letter (one bit for each letter, period, comma, and blank), and hence, only one bit is set to 1 in each 29-bit string. This produces a string of 203 bits (i.e., 203 binary features) for each window. Experiments by Shavlik, Mooney, and Towell (1990) showed that this representation was better than treating each letter in the window as a single feature with 29 possible values. Of course, many other input representations could be used.
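The seven-letter windowing scheme can be sketched as follows. The choice of "_" to stand for the blank padding character is an assumption for display purposes.

```python
def windows(word, width=7, pad="_"):
    """Slide a fixed-width window over a word, padding both ends with
    blanks so that each letter in turn sits at the window's center."""
    half = width // 2
    padded = pad * half + word + pad * half
    return [padded[i:i + width] for i in range(len(word))]

for w in windows("lollypop"):
    print(w)  # ___loll, __lolly, _lollyp, lollypo, ollypop, llypop_, ...
```

Each of the 8 windows is then fed to g, and the 8 phoneme/stress outputs are concatenated to recover the pronunciation of the whole word.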
Indeed, in most applications of machine learning, high performance is obtained by engineering the input representation to incorporate prior knowledge about the task. However, an important goal for machine learning research is to reduce the need to perform this kind of "representation engineering." In this paper, we show that general techniques for changing the output representation can also improve performance. The representation of the output classes varies, of course, from one multiclass approach to another. For the direct multiclass method, the output class is represented by a single variable that can take on 126 possible values (one for each phoneme/stress pair that appears in the training data). For the one-per-class approach, the output class is represented by 126 binary variables, one for each class. For the distributed output code approach, we employ the code developed by Sejnowski and Rosenberg. We used the Hamming distance between two bit-strings to measure distance. Ties were broken in favor of the phoneme/stress pair that appeared more frequently in the training data. In Dietterich, Hild, and Bakiri (1990a), we called this "observed decoding."

The ID3 Learning Algorithm

ID3 is a simple decision-tree learning algorithm developed by Ross Quinlan (1983, 1986b). In our implementation, we did not employ windowing, chi-square forward pruning (Quinlan, 1986a), or any kind of reverse pruning (Quinlan, 1987). Experiments reported in Dietterich, Hild, and Bakiri (1990b) have shown that these pruning methods do not improve performance.
We did apply one simple kind of forward pruning to handle inconsistencies in the training data: If at some point in the tree-growing process all training examples agreed on the values of all features, and yet disagreed on the class, then growth of the tree was terminated in a leaf and the class having the most training examples was chosen as the label for that leaf (ties were broken arbitrarily for multiclass ID3; ties were broken in favor of class 0 for binary ID3). In the direct multiclass approach, ID3 is applied once to produce a decision tree whose leaves are labelled with one of the 126 phoneme/stress classes. In the one-per-class approach, ID3 is applied 126 times to learn a separate decision tree for each class. When learning class i, all training examples in other classes are considered to be "negative examples" for this class. When the 126 trees are applied to classify examples from the test set, ties are broken in favor of the more-frequently-occurring phoneme/stress pair (as observed in the training set). In particular, if none of the trees classifies a test case as positive, then the most frequently occurring phoneme/stress pair is guessed. In the distributed output code approach, ID3 is applied 26 times, once for each bit position in the output code.

Results

Table 1 shows the percent correct (over the 1000-word test set) for words, letters, phonemes, and stresses. A word is classified correctly if each letter in the word is correct. A letter is correct if the phoneme and stress assigned to that letter are both correct. For the one-per-class and distributed output code methods, the phoneme is correct if all bits coding for the phoneme are correct (after mapping to the nearest legal codeword and breaking ties by frequency). Similarly, the stress is correct if all bits coding for the stress are correct. There are several things to note. First, the direct multiclass and distributed output codes performed equally well.
Indeed, the statistical test for the difference of two proportions cannot distinguish them. Second, the one-per-class method performed markedly worse, and all differences in the table between this method and the others are significant at or below the .01 level.

Error Correcting Codes

The satisfactory performance of distributed output codes prompted us to explore the utility of good error-correcting codes. We applied BCH methods (Lin & Costello, 1983) to design error-correcting codes of varying lengths. These methods guarantee that the rows of the code (i.e., the codewords) will be separated from each other by some minimum Hamming distance d. Table 2 shows the results of training ID3 with distributed error-correcting output codes of varying lengths. Phonemes and stresses were encoded separately, although this turns out to be unimportant.

574 CONNECTIONIST REPRESENTATIONS

[Table 1: Comparison of Three Multiclass Methods. Percent correct (1000-word test set) at the word, letter, phoneme, and stress levels of aggregation, with tree statistics (average leaves and depth) for each method.]

[Table 2: Performance of Error-Correcting Output Codes. Columns headed n show the length of the code, and columns headed d show the minimum Hamming distance between any two codewords. The table reports percent correct (1000-word test set) and tree statistics for codes of increasing length.]

The first thing to note is that the performance of even the simplest (19-bit) BCH code is superior to the 26-bit Sejnowski-Rosenberg code at the letter and word levels. Better still, performance improves monotonically as the length (and error-correcting power) of the code increases. The long codes perform much better than either the direct multiclass or Sejnowski-Rosenberg approaches at all levels of aggregation (e.g., 74.4% correct at the letter level versus 70.8% for direct multiclass). Not surprisingly, the individual bits of these error-correcting codes are much more difficult to learn than the bits in the one-per-class approach or the Sejnowski-Rosenberg distributed code. Specifically, the average number of leaves in each tree in the error-correcting codes is roughly 665, whereas the one-per-class trees had only 35 leaves and the Sejnowski-Rosenberg trees had 270 leaves. Clearly distributed output codes do not produce results that are easy to understand! The fact that performance continues to improve as the code gets longer suggests that we could obtain arbitrarily good performance if we used arbitrarily long codes. Indeed, this follows from information theory under the assumption that the errors in the various bit positions are independent. However, because each of the bits is learned using the same body of training examples, it is clear that the errors are not independent. We have measured the correlation coefficients between the errors committed in each pair of bit positions for our BCH codes. All coefficients are positive, and many of them are larger than 0.30. Hence, there must come a point of diminishing returns where further increases in code length will not improve performance. An open problem is to predict where this breakeven point will occur.

Error-correcting Codes and Small Training Sets

Given that the individual binary functions require much larger decision trees for the error-correcting codes than for the other methods, it is important to ask whether error-correcting codes can work well with smaller sample sizes. It is well-established that small training samples cannot support very complex hypotheses. To address this question, Figure 1 shows learning curves for the distributed output code and for the 93-bit error-correcting code (63 phoneme bits, 30 stress bits).
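The correlation measurement described above can be sketched as a Pearson correlation between the per-example error indicator vectors of two bit positions. The error vectors below are made up for illustration; the reported coefficients came from the actual BCH experiments.

```python
# Sketch: Pearson correlation between the error indicators of two learned
# bit positions. A positive value, as reported above, means the two bits
# tend to err on the same test examples, so their errors are not independent.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 1 = that bit position erred on the example, 0 = it was correct (made up)
errors_bit1 = [1, 0, 1, 1, 0, 0, 1, 0]
errors_bit2 = [1, 0, 1, 0, 0, 0, 1, 0]
print(round(correlation(errors_bit1, errors_bit2), 3))  # prints 0.775
```

When such correlations are large, the independence assumption behind the information-theoretic argument fails, which is why lengthening the code eventually stops helping.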
At all sample sizes, the performance of the error-correcting configuration is better than the Sejnowski-Rosenberg distributed code. Hence, even for small samples, error-correcting codes can be recommended.

Replication

To test whether error-correcting output codes provide a general method for boosting the performance of inductive learning algorithms, we applied them in a second domain and with a different learning algorithm. Specifically, we studied the domain of isolated letter speech recognition and the back propagation learning algorithm. In the isolated-letter speech-recognition task, the "name" of a single letter is spoken by an unknown speaker and the task is to assign this to one of 26 classes corresponding to the letters of the alphabet.

[Table 3: Parameter Values Selected via Cross Validation. Number of hidden units and best parameter values for each configuration.]

Ron Cole has made available to us his ISOLET database of 7,797 training examples of spoken letters (Cole, Muthusamy & Fanty, 1990). The database was recorded from 150 speakers balanced for sex and representing many different accents and dialects. Each speaker spoke each of the 26 letters twice (except for a few cases). The database is subdivided into 5 parts (named ISOLET1, ISOLET2, etc.) of 30 speakers each. Cole's group has developed a set of 617 features describing each example. Each feature has been scaled to fall in the range [-1,+1]. We employed the opt (Barnard & Cole, 1989) implementation of backpropagation with conjugate gradient optimization in all of our experiments. In our experiments, we compared the one-per-class approach to a 30-bit (d = 15) BCH code and a 62-bit (d = 31) BCH code. In each case, we used a standard 3-layer network (one input layer, one hidden layer, and one output layer). In the one-per-class method, test examples are assigned to the class whose output unit gives the highest activation.
In the error-correcting code case, test examples are assigned to the class whose output codeword is the closest to the activation vector produced by the network as measured by the following distance metric: sum_i |activation_i - code_i|. One advantage of conjugate-gradient optimization is that, unlike backpropagation with momentum, it does not require the user to specify a learning rate or a momentum parameter. There are, however, three parameters that must be specified by the user: (a) the starting random-number seed (used to initialize the artificial neural network), (b) the number of hidden units, and (c) the total summed squared error at which training should be halted (this avoids over-training). To determine good values for these parameters, we followed the "cross-validation" training methodology advocated by Lang, Waibel, and Hinton (1990). The training data were broken into three sets:

- The training set, consisting of the 3,120 letters spoken by 60 speakers. (These are the examples in Cole's files ISOLET1 and ISOLET2.)
- The cross-validation set, consisting of 3,118 letters spoken by 60 additional speakers. (These are the examples in files ISOLET3 and ISOLET4.)
- The test set, consisting of 1,559 letters spoken by 30 additional speakers. (These are the examples in file ISOLET5.)

[Figure 1: Learning curves showing % phonemes correct for the distributed output code and for the 93-bit error-correcting code (63 phoneme bits with d = 31, 30 stress bits with d = 15), for training sets of up to 1600 examples.]

[Table 4: Performance in the Isolated Letter Domain.]

The idea is to vary the parameters while training on the training set and testing on the cross-validation set. The parameter values giving the best performance on the cross-validation set are then used to train a network using the union of the training and cross-validation sets, and this network is then tested against the test set.
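The decoding rule stated at the start of this passage can be sketched directly. The codewords and activation values below are illustrative, not taken from the ISOLET networks.

```python
# Sketch of decoding real-valued network outputs against a code: assign
# the class whose codeword minimizes sum_i |activation_i - code_i|.
def l1_distance(activations, codeword):
    return sum(abs(a - c) for a, c in zip(activations, codeword))

def decode(activations, codewords):
    return min(codewords, key=lambda cls: l1_distance(activations, codewords[cls]))

codewords = {"A": [1, 0, 1, 1, 0], "B": [0, 1, 1, 0, 1]}  # hypothetical code
print(decode([0.9, 0.2, 0.8, 0.6, 0.1], codewords))  # prints A
```

Unlike thresholding each output to 0 or 1 before Hamming decoding, this metric lets a confident activation (say 0.9) outweigh an ambiguous one (say 0.6) when choosing the nearest codeword.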
We varied the number of hidden units between 35 and 182, and, for each number of hidden units, we tried four different random seeds. Table 3 shows the parameter values that were found by cross validation to give the best results. Table 4 shows the results of training each configuration on the combined training and cross-validation sets and testing on the test set. Both error-correcting configurations perform better than the one-per-class configuration. The results are not statistically significant (according to the test for the difference of two proportions), but this could be fixed by using a larger test set. The results are very definitely significant from a practical standpoint: The error rate has been reduced by more than 20%. This is the best known error rate for this task.

Conclusions

The experiments in this paper demonstrate that error-correcting output codes provide an excellent method for applying binary learning algorithms to multiclass learning problems. In particular, error-correcting output codes outperform the direct multiclass method, the one-per-class method, and a domain-specific distributed output code (the Sejnowski-Rosenberg code for the NETtalk domain). Furthermore, the error-correcting output codes improve performance in two very different domains and with two quite different learning algorithms. We have investigated many other issues concerning error-correcting output codes, but, due to lack of space, these could not be included in this paper. Briefly, we have demonstrated that codes generated at random can act as excellent error-correcting codes. Experiments have also been conducted that show that training multiple neural networks and combining their outputs by "voting" does not yield as much improvement as error-correcting codes.
Acknowledgements

The authors thank Terry Sejnowski for making available the NETtalk dictionary and Ron Cole and Mark Fanty for making available the ISOLET database. The authors also thank NSF for its support under grants IRI-86-57316 and CCR-87-16748. Ghulum Bakiri was supported by Bahrain University.

References

Barnard, E. & Cole, R. A. (1989). A neural-net training program based on conjugate-gradient optimization. Rep. No. CSE 89-014. Beaverton, OR: Oregon Graduate Institute.
Bose, R. C., & Ray-Chaudhuri, D. K. (1960). On a class of error-correcting binary group codes. Inf. Cntl., 3, 68-79.
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and Regression Trees. Monterey, CA: Wadsworth and Brooks.
Cole, R., Muthusamy, Y. & Fanty, M. (1990). The ISOLET spoken letter database. Rep. No. CSE 90-004. Beaverton, OR: Oregon Graduate Institute.
COLT (1988). Haussler, D. & Pitt, L. (Eds.) COLT '88. Cambridge, MA: Morgan Kaufmann.
COLT (1989). Rivest, R. L., Haussler, D., and Warmuth, M. K. (Eds.) COLT '89: Proceedings of the Second Annual Workshop on Computational Learning Theory. Santa Cruz, CA: Morgan Kaufmann.
COLT (1990). Fulk, M. A., and Case, J. (Eds.) COLT '90: Proceedings of the Third Annual Workshop on Computational Learning Theory. Rochester, NY: Morgan Kaufmann.
Dietterich, T. G., Hild, H., Bakiri, G. (1990a). A comparative study of ID3 and backpropagation for English text-to-speech mapping. 7th Int. Conf. on Mach. Learn. (pp. 24-31). Austin, TX: Morgan Kaufmann.
Dietterich, T. G., Hild, H., Bakiri, G. (1990b). A comparison of ID3 and backpropagation for English text-to-speech mapping. Rep. No. 90-30-4. Corvallis, OR: Oregon State University.
Hocquenghem, A. (1959). Codes correcteurs d'erreurs. Chiffres, 2, 147-156.
Lang, K. J., Waibel, A. H., & Hinton, G. E. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, 3, 33-43.
Lin, S., & Costello, D. J. Jr. (1983).
Error Control Coding: Fundamentals and Applications. Englewood Cliffs: Prentice-Hall.
Nilsson, N. J. (1965). Learning Machines. New York: McGraw Hill.
Quinlan, J. R. (1983). Learning efficient classification procedures and their application to chess endgames. In Michalski, R. S., Carbonell, J. & Mitchell, T. M. (eds.), Machine Learning, Vol. I, Palo Alto: Tioga Press. 463-482.
Quinlan, J. R. (1986a). The effect of noise on concept learning. In Michalski, R. S., Carbonell, J. & Mitchell, T. M. (eds.), Machine Learning, Vol. II, Palo Alto: Tioga Press. 149-166.
Quinlan, J. R. (1986b). Induction of decision trees. Machine Learning, 1(1), 81-106.
Quinlan, J. R. (1987). Simplifying decision trees. Int. J. Man-Mach. Stud., 27, 221-234.
Rosenblatt, F. (1958). The perceptron. Psych. Rev., 65(6), 386-408.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation. In Rumelhart, D. E. & McClelland, J. L. (eds.), Parallel Distributed Processing, Vol. 1. 318-362.
Sejnowski, T. J., and Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text. Complex Syst., 1, 145-168.
Shavlik, J. W., Mooney, R. J., and Towell, G. G. (1990). Symbolic and neural learning algorithms: An experimental comparison. Mach. Learn., 6, 111-144.
Valiant, L. G. (1984). A theory of the learnable. CACM, 27, 1134-1142.
LE LEARNING
Dept. of Computer and Information Sciences
University of Florida
Gainesville, Florida 32611

Abstract

If the backpropagation network can produce an inference structure with high and robust performance, then it is sensible to extract rules from it. The KT algorithm is a novel algorithm for generating rules from an adapted net efficiently. The algorithm is able to deal with both single-layer and multi-layer networks, and can learn both confirming and disconfirming rules. Empirically, the algorithm is demonstrated in the domain of wind shear detection by infrared sensors with success.

Introduction

Recently, the backpropagation network has been compared with symbolic machine learning algorithms (Fisher and McKusick, 1989; Mooney, Shavlik, Towell, and Gove, 1989; Weiss and Kapouleas, 1989). There is some indication that backpropagation is more advantageous on noisy data. Symbolic machine learning has found many practical applications in building knowledge-based systems. It is natural for one to ask whether backpropagation is suitable for this kind of applications as well. The backpropagation technique uses numerical computation whereas knowledge-based systems use symbolic reasoning. Learning symbolic knowledge by backpropagation must rely on a mechanism to translate or abstract the numerical knowledge of the backpropagation network into symbolic form in an accurate and efficient way. Gallant (1988) describes connectionist expert systems which can explain conclusions with an if-then rule applicable to the case at hand, but he argues that the procedure of producing every if-then rule from a connectionist network would work only if the network was very small; the number of implicitly encoded if-then rules can grow exponentially with the number of inputs.
In this paper, however, we show that there exists an efficient algorithm referred to as KT which is able to generate a set of rules satisfying certain desired properties from an adapted net (a net means a connectionist network). There are reasons for learning rules from adapted nets. In the first place, there are cognitive advantages. Rules are easier to explain the net behavior than the net itself does, and are easier to be accepted and memorized by human users. The modularity of rules increases the system maintainability. The knowledge of different nets in the same domain can be combined easily in rule form. In addition, this process is analogous to forming conditional reflex. The inference or action time for using the abstracted rules would be less than using the original network if there would be fewer connections traversed. However, there are circumstances where it is inappropriate to translate the net knowledge into rules. For example, if each input node encodes information of a pixel in an image, it will not be useful to treat it as an attribute for rules. Therefore, rule learning described here deals with high-level cognitive tasks rather than low-level perceptual tasks. To convert knowledge structures from one form to another is necessary in order to achieve advantages in different respects. For example, Quinlan (1987) describes a technique for generating rules from decision trees; Berzuini (1988) describes a procedure for turning a statistical regression model into a decision tree. This paper presents the KT algorithm which is a novel algorithm for translating a trained neural network into a rule-based system with equivalent performance.

This work is in part supported by FHTIC (Florida High Technology and Industry Councils).

The Learning Method

The learning method consists of two sequential phases. In the first phase, a network is trained using backpropagation (Rumelhart, Hinton, and Williams, 1986).
In the second phase, the KT algorithm is applied to the trained net to obtain a set of production rules, each represented as

premise (antecedent) -> conclusion (consequent)

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

The KT Algorithm

In a net, the output of a node in hidden or output layers is given by

OUT = g(Σ w_i x_i - θ)    (1)

where w_i and x_i are input weights and inputs respectively, θ is the threshold, and function g is the squashing function (sometimes called a transfer function or activation function) and is often chosen to be sigmoid:

g(net) = 1 / (1 + e^(-λ net))    (2)

where λ determines the steepness of the function. In the following descriptions, we use sigmoid-sum to denote the sigmoid of the sum minus θ, that is, sigmoid-sum = g(sum - θ). We restrict that the activation level at each node in the network range between 0 and 1. A value of 0 means "no" (or absent) while a value of 1 means "yes" (or present). We define parameters α and β such that activation smaller than the α value is treated as no, and activation greater than the β value is treated as yes. The β value must be greater than the α value in order to avoid ambiguities. The KT algorithm recognizes two important kinds of attributes known as pos-atts and neg-atts (pos-att and neg-att in singular form). For a given concept, pos-atts contribute to confirming (making the activation approach 1) it and neg-atts contribute to disconfirming (making the activation approach 0) it. An input node (or unit) encodes an attribute and an output node encodes a final or target concept. In a single-layer network, attributes directly connect to final concepts. We define that an attribute is a pos-att (neg-att) for a concept if and only if it links to the concept through a connection with a positive (negative) weight.
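The node output and sigmoid-sum defined by equations (1) and (2) can be sketched as follows; the weight and input values are illustrative.

```python
# Sketch of equations (1) and (2): OUT = g(sum_i w_i * x_i - theta),
# with the sigmoid g(net) = 1 / (1 + exp(-lambda * net)).
import math

def sigmoid_sum(weights, inputs, theta, lam=1.0):
    """Sigmoid of the weighted input sum minus the threshold theta."""
    net = sum(w * x for w, x in zip(weights, inputs)) - theta
    return 1.0 / (1.0 + math.exp(-lam * net))

print(sigmoid_sum([2.0, -1.0], [1, 1], 0.0))  # g(1.0), approximately 0.731
```

Because g is monotonically increasing, making the weighted sum larger always pushes the activation toward 1 and making it smaller pushes it toward 0, which is the fact the pos-att/neg-att definitions below rely on.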
Note that this definition is based on the fact that the sigmoid function is a monotonically increasing function and the restriction of activation to the range between 0 and 1. Attributes with zero weights can safely be eliminated without changing the activation level of the concept in any case. In a multi-layer network (say, a two-layer network), pos-atts and neg-atts for a final concept remain undefined because an attribute may support a hidden concept or attribute (corresponding to a hidden unit) which is a pos-att for the final concept and at the same time support another hidden concept which is a neg-att for the final concept. It may turn out that in this case, the type of contribution of an attribute to a final concept is conditioned on the values of other attributes. For this reason, in a multi-layer network, pos-atts and neg-atts are only defined within each layer. The KT algorithm is structured as follows:

KT
FORM-CONFIRM-RULE
EXPLORE-POS
NEGATE-NEG
FORM-DISCONFIRM-RULE
EXPLORE-NEG
NEGATE-POS
REWRITE

The procedure FORM-CONFIRM-RULE searches for rules each of which is able to confirm the given concept independently. In simulating the rule firing by presenting the network with any input matching the antecedent of any such rule, the activation of the concept should be greater than the β value. This procedure calls EXPLORE-POS and NEGATE-NEG. The output of EXPLORE-POS is a set of all combinations of at most k pos-atts each of which can confirm the concept if all neg-atts are absent. Then, NEGATE-NEG is applied to each such combination in an attempt to find all rules with rule size limited to k each of which can confirm the concept in the absence of some or no neg-atts. In our formalism, a rule has one or multiple conditions, and one conclusion. The size of a rule is determined by the number of conditions. KT learns rules of size up to k. Both procedures EXPLORE-POS and NEGATE-NEG involve heuristic search.
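The single-layer pos-att/neg-att partition described above is just a sign test on the connection weights. A minimal sketch, with made-up attribute names and weights:

```python
# Sketch of the pos-att / neg-att definition: an attribute is a pos-att
# (neg-att) for a concept iff its connection weight to the concept is
# positive (negative); zero-weight attributes are safely dropped.
def partition_attributes(weights):
    pos = [a for a, w in weights.items() if w > 0]
    neg = [a for a, w in weights.items() if w < 0]
    return pos, neg

weights = {"voiced": 1.7, "labial": -0.9, "stop": 0.0, "nasal": 2.3}  # made up
print(partition_attributes(weights))  # (['voiced', 'nasal'], ['labial'])
```

This separation is what makes the pruning heuristics below possible: when searching for confirming rules, only pos-atts need to be added to a candidate premise, and the neg-atts only enter when checking worst-case activation.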
Their heuristics are basically derived from numerical constraints associated with pos-atts and neg-atts. Due to space limitation, only their pseudocodes are given without detailed descriptions. The procedure FORM-DISCONFIRM-RULE is similar to FORM-CONFIRM-RULE. The difference is that the roles of pos-atts and neg-atts are exchanged. It searches for rules each of which produces an activation level less than the α value for the given concept as long as its premise is true. The pruning heuristics are changed accordingly. The details of EXPLORE-NEG and NEGATE-POS are omitted here. We keep rules in the most general form. In terms of the cost and the efficiency of using rules, general rules are more desirable than specific rules. Some machine learning programs perform maximally specific generalization when no sufficient number of counter-examples is available for learning. To deal with multi-layer networks, KT learns rules on a layer-by-layer basis, then rewrites rules to obtain rules which link attributes directly to a final (target) concept. In forming rules between a hidden layer and a concept, each hidden unit in the layer is treated as an attribute. Since this is not an original attribute, we call it a hidden attribute. The hidden attributes are then categorized into pos-hidden-atts and neg-hidden-atts. The KT algorithm treats pos-hidden-atts and neg-hidden-atts the same as pos-atts and neg-atts respectively. This rewriting process is backward in the sense that the antecedent of a rule is rewritten on the basis of rules whose consequents deal with its antecedent. Every time, REWRITE rewrites rules of a layer in terms of rules of the next layer closer to the input of the net. Rewriting starts with the output layer and repeats until rules which associate attributes with final concepts result.
After each rewriting, if a rule contains an attribute and its negation, then delete the rule; if a rule contains conflicting attributes, then delete the rule; if a rule contains redundant attributes, then delete the redundant ones; remove redundant rules; in addition, remove a rule whose premise part is a superset of that of any other rule (that is, remove subsumption). A rule which cannot be rewritten will be discarded. There are other possible ways to search for rules. However, problems exist with these alternatives. We address these problems by posing relevant questions as follows: Can we mix all the attributes together without making distinctions between pos-atts and neg-atts in the search? If this is the case, then it will amount to exhaustive search. When we search for confirming rules, we do not consider negated pos-atts because if a confirming rule contains a negated pos-att in its premise part, the rule is still correct if that negated pos-att is deleted (which means that it does not matter whether that pos-att is present or absent or unknown in the environment). Likewise, we do not consider negated neg-atts in forming disconfirming rules. As seen in the KT algorithm, separation of attributes into pos-atts and neg-atts provides efficient pruning strategies.

Fu 591

EXPLORE-POS
1. allnodes <- nil.
2. open <- {{}}.
3. closed <- nil.
lp:
4. if open = nil, then return allnodes.
5. head <- the first element of open.
6. if head is in closed, then open <- open - {head} and go lp.
7. closed <- closed in union with {head}.
8. if the sigmoid-sum of the associated weights of all attributes in head plus k - 1 other strongest, non-conflicting pos-atts is not greater than the beta value, then open <- open - {head} and go lp.
9. if the sigmoid-sum of the associated weights of all attributes in head is greater than the beta value, then allnodes <- allnodes in union with {head}.
10. if head has k attributes, then open <- open - {head} and go lp.
11. if the sigmoid-sum of the associated weights of all attributes in head plus all neg-atts (for mutually exclusive attributes, take the strongest one) is greater than the beta value, then open <- open - {head} and go lp.
12. successors <- a set of the sets formed by adding a new, non-conflicting pos-att to head in all possible ways.
13. successors <- remove the elements from successors which are in open or closed.
14. open <- open - {head}.
15. open <- open in union with successors.
16. go lp.

NEGATE-NEG
1. allnodes <- nil.
2. open <- a singleton set containing an element of the output set from EXPLORE-POS.
3. closed <- nil.
lp:
4. if open = nil, then return allnodes.
5. head <- the first element of open.
6. if head is in closed, then open <- open - {head} and go lp.
7. head-neg <- head in union with all neg-atts whose negations are not in head (special attention for mutually exclusive attributes).
8. if the sigmoid-sum of the associated weights of all non-negated attributes in head-neg is greater than the beta value, then
8.1. allnodes <- allnodes in union with {head};
8.2. closed <- closed in union with {head};
8.3. open <- open - {head};
8.4. go lp.
9. else
9.1. closed <- closed in union with {head};
9.2. if head has k attributes, then open <- open - {head} and go lp;
9.3. successors <- a set of the sets formed by adding to head the negation of an appropriate neg-att which has not been in head in all possible ways;
9.4. successors <- remove the elements from successors which are in open or closed;
9.5. open <- open - {head};
9.6. open <- open in union with successors;
9.7. go lp.

Can we take the decision tree approach? In such an approach, we select one attribute at a time according to some criterion. It may miss the case when multiple attributes are weakly predictive separately but become strongly predictive in combination.
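The goal of EXPLORE-POS can be illustrated with a much-simplified, brute-force sketch: find combinations of at most k pos-atts whose sigmoid-sum exceeds β when all neg-atts are absent. The real procedure prunes the search with the heuristics in steps 8 and 11; this version only enumerates, and the weights, θ, and β below are made up.

```python
# Simplified sketch of the EXPLORE-POS idea (not the pruned algorithm):
# enumerate subsets of pos-atts, keeping minimal ones that confirm the
# concept, i.e. whose sigmoid-sum exceeds beta with all neg-atts absent.
import math
from itertools import combinations

def sigmoid_sum(ws, theta, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * (sum(ws) - theta)))

def explore_pos(pos_atts, theta, beta, k):
    found = []
    for size in range(1, k + 1):
        for combo in combinations(pos_atts, size):
            if any(set(f) <= set(combo) for f in found):
                continue  # a smaller confirming set exists; keep rules general
            if sigmoid_sum([pos_atts[a] for a in combo], theta) > beta:
                found.append(combo)
    return found

pos_atts = {"a1": 3.0, "a2": 2.5, "a3": 0.5}  # hypothetical weights
print(explore_pos(pos_atts, theta=1.0, beta=0.9, k=2))
# [('a1', 'a2'), ('a1', 'a3')]: no single pos-att clears beta = 0.9 alone
```

Keeping only minimal confirming combinations mirrors the paper's preference for maximally general rules; the heuristic bounds in the actual pseudocode prune whole branches of this enumeration.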
Learning considering the utility of a single attribute at a time is referred to as "monothetic" while learning considering multiple attributes simultaneously is called "polythetic" (Fisher and McKusick, 1989). In this terminology, the decision tree approach is monothetic whereas the KT algorithm is polythetic. In general, the monothetic approach traverses the rule space less completely than the polythetic approach.

Can we search for rules associating attributes with a final concept without exploring rules involving hidden units in the case of multi-layer networks? For multi-layer networks, it is difficult to define pos-atts and neg-atts for a final concept directly (bypassing intermediate layers). An attribute may contribute to confirming and/or disconfirming the final concept depending on what other attributes it is in conjunction with. Given no distinction between pos-atts and neg-atts, expanding the search space directly using the attributes simply results in an exhaustive search. Even if we limit the number of attributes in the premise part, we still need to try every possible combination of other attributes absent in the premise part to make sure that a rule makes the same conclusion whether these other attributes are present or absent or unknown (in the environment); and the complexity is still exponential. The identification of pos-atts and neg-atts can avoid this problem. For example, to learn a confirming rule we can always assume that pos-atts absent in the premise part are absent (in the environment) and neg-atts absent in the premise part are present, and if the rule confirms a thing under this assumption, it still confirms it even if this assumption does not hold. In other words, a rule should reach its conclusion as long as its premise is true; information about attributes outside the premise should not change its conclusion (note that KT only learns rules with certainty).
If a rule can conclude in the worst case concerning these outside attributes, it still concludes in any other case. The recognition of pos-atts and neg-atts allows KT to define what the worst case is right away. In multi-layer networks, since it is difficult to recognize pos-atts and neg-atts for a final concept, KT forms rules on a layer-by-layer basis and then rewrites rules to obtain rules linking attributes directly to a final concept, as an alternative to exhaustive search.

Properties

It may well be true that learning based on a statistical model implied by the data is more noise-tolerant than learning based on the data directly. The abstraction mechanism of KT can be viewed as a process of generalization over the implicit sample space defined by the input-output relationships of the net. Thus, the capability of generalization in this learning system is primarily derived from the net and secondarily from KT. As a consequence, the performance of KT is limited by that of the net.

KT translates an adapted net into a set of rules. In single layers, when a rule confirms (disconfirms) a concept, it means that the network will reach an activation level close to 1 (0) for the concept (this is straightforward from the algorithm).

592 CONNECTIONIST REPRESENTATIONS

Thus, the rule will be correct if the network performance is taken as a standard. A confirming rule and a disconfirming rule cannot fire simultaneously since there is no overlap between corresponding activation ranges. This guarantees the consistency of the rule set. The strategy for rule rewriting can be likened to backward chaining. We can show that a rewritten rule containing no conflicting attributes in its premise part is correct by forward-chaining modus ponens. However, we expect some loss in performance due to rectification of output values. An output value smaller than the α value is treated as 0 while an output value greater than the β value is treated as 1.
No rule is formed for the input which corresponds to an output between α and β. This loss may be accumulated from layer to layer in the rewriting process. The situation is more serious in the case of a small λ value, when the gap between α and β in the output of a node corresponds to a large zone in the input because of a gentle slope of the sigmoid function. This problem can be avoided or ameliorated by choosing a large λ value, which makes the sigmoid approach a threshold function with 0 and 1 as two possible values. Another possible loss in performance is due to the limitation on the number of attributes in the premise part. Imposing this limitation has cognitive advantages, but some rules may be missed. This problem arises partly because KT only learns rules with certainty. If a big rule can be broken up into multiple small rules with less certainty and there exists a scheme to combine the confidence levels of different rules, then such loss in performance could be lessened. We do not know yet whether this approach is feasible. On the other hand, KT can learn rules based on a single case by focusing on those attribute combinations applicable to that case (the case-based mode). Because of a smaller search space, more complete search is permitted.

The learning problem addressed here is to learn multiple rules rather than a single rule. It is worth mentioning that the potential number of rules is exponential in the number of attributes involved if no preference criterion is imposed. Assume that there are n attributes under consideration. Without recognizing pos-atts and neg-atts, each attribute may be present in either a positive or a negated form or absent. So, there are three possibilities for each attribute in rules. Exhaustive search for rules may need to explore 3^n combinations of all attributes in three possible forms. This is the cost of generating possible solutions.
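The effect of the steepness parameter λ on the unclassified gap between α and β can be quantified by inverting the sigmoid at the two thresholds. The sketch below uses α = 0.2 and β = 0.8 (the values used in the experiments later in the paper); the rest is illustrative.

```python
from math import exp, log

ALPHA, BETA = 0.2, 0.8   # rectification thresholds

def sigmoid(x, lam):
    return 1.0 / (1.0 + exp(-lam * x))

def dead_zone_width(lam):
    """Width of the input interval whose output lies strictly between
    alpha and beta - the zone for which no rule is formed."""
    lo = -log(1.0 / ALPHA - 1.0) / lam   # sigmoid(lo, lam) == ALPHA
    hi = -log(1.0 / BETA - 1.0) / lam    # sigmoid(hi, lam) == BETA
    return hi - lo

# A gentle slope (lambda = 1) leaves a wide unclassified zone; a steep
# slope (lambda = 20) makes the sigmoid approach a threshold function.
print(round(dead_zone_width(1.0), 3))    # 2.773
print(round(dead_zone_width(20.0), 4))   # 0.1386
```

The gap shrinks in direct proportion to λ, which is why a steep transfer function keeps the rectification loss negligible.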
To evaluate each possible rule, suppose the exhaustive approach considers all combinations of attributes absent in the rule to ensure that the rule can conclude the same under all these circumstances. The worst cost of evaluating a solution is O(3^n) (or O(3^(n-1)) if rules without antecedents are not considered). Thus, the worst overall cost of exhaustive search is O(3^(2n)). Next, we analyze the KT algorithm. Suppose the n attributes include p pos-atts and q neg-atts. Thus, n >= p + q. Since the cost analysis for learning confirming rules and for learning disconfirming rules will be similar, we just consider learning confirming rules here. In addition, we will first consider the case of single-layer networks. Each pos-att will be either present in positive form or absent, and each neg-att will be either present in negated form or absent, in confirming rules (the reason has been stated earlier). Thus, there are two possibilities for each attribute, corresponding to a search space of size O(2^n) (which is already much better than O(3^n)). To guarantee the feasibility of the KT algorithm for any n, we further limit the number of attributes in the premise part to a constant k. The worst cost of generation will be O(p^k1 q^k2), where k1 and k2 are non-negative integers and k1 + k2 = k; this is bounded by O(n^k). As to the cost of evaluating a possible rule, KT only needs to evaluate the circumstance in which all pos-atts absent in the premise part are absent and all neg-atts absent in the premise part are present, for the reason given earlier. Thus, the cost of evaluation for each solution is O(1). It follows that the worst overall cost is O(p^k1 q^k2), which is a polynomial cost. The practical cost depends on how efficient the pruning heuristics employed by EXPLORE-POS and NEGATE-NEG are. In the best case, EXPLORE-POS and NEGATE-NEG just generate one node and the cost is O(1).
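The gap between the exhaustive cost O(3^(2n)) and KT's polynomial cost O(p^k1 q^k2) can be illustrated numerically. The sizes below (n, k, p, q) are arbitrary choices for illustration, not figures from the paper.

```python
# Illustrative attribute counts: n attributes split into p pos-atts and
# q neg-atts, with the premise size capped at k.
n, k = 20, 7
p, q = 12, 8

# Exhaustive search: O(3^n) generation times O(3^n) evaluation.
exhaustive = 3 ** (2 * n)

# KT: worst generation cost over all splits k1 + k2 = k, with O(1)
# evaluation per candidate rule.
kt_worst = max(p ** k1 * q ** (k - k1) for k1 in range(k + 1))

print(f"exhaustive: {exhaustive:.2e}")   # ~1.22e+19
print(f"KT worst:   {kt_worst:.2e}")     # ~3.58e+07
```

Even for these modest sizes the exhaustive count is roughly twelve orders of magnitude larger, which is the point of the analysis.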
Notice that exhaustive search (without identifying pos-atts and neg-atts) under the limitation of the number of attributes in the premise part still incurs an exponential cost, because the cost of generation is O(n^k) but the cost of evaluating a solution is O(3^n). In the case of multi-layer networks, we need to take into account the cost of rewriting rules. The cost of the procedure REWRITE depends on the number of ways to rewrite each attribute in the premise part. Suppose the rules of layer 1 are rewritten in terms of the rules in layer 2, and the numbers of rules for layer 1 and layer 2 are r1 and r2 respectively. Assume that there are at most k attributes involved in a rule. Then, the cost of rewriting rules is O(r1 r2^k). If rules are too many, one may reduce k and/or use the case-based mode.

Comparisons with Related Work

The principal difference between the KT algorithm and other works (Gallant, 1988; Saito and Nakano, 1988) on rule extraction from trained nets is that KT is intended to translate an adapted net into a rule-based system with equivalent performance, whereas the other related algorithms only extract part of the knowledge of a net as rules, which are then used in conjunction with rather than in replacement of the net (a system consisting of a neural network and a rule-based component is called a hybrid intelligent system). Gallant's method is able to find a single rule to explain the conclusion reached by the neural network for a given case. His method involves the ordering of available attributes based upon inference strength (the absolute magnitude of weights). To form a rule, the attribute with the greatest strength among attributes remaining to be considered will be picked. The process continues until the conjunction of picked attributes is sufficiently strong to conclude the concept concerned. Saito and Nakano's method can find multiple rules.
Their method performs exhaustive search in the rule space spanned by attributes selected according to given instances. Rules extracted in this way may not cover unseen instances. Their method is empirical, treating the network as a black box without mathematical understanding of its internal behavior, a fundamental difference from KT. In their method, if affirmative attributes are present and if the presence of some other attributes will change the conclusion, then a rule is formed using the affirmative attributes in conjunction with the negation of these other attributes. However, this conjunction is only a necessary rather than a sufficient condition for the conclusion because it is possible that the presence of other attributes will deny the conclusion. For this reason, rules found by their method are formulated in causal form rather than in diagnostic form, as follows: If a disease, then symptom(s). In fact, they showed how to use extracted rules for confirmation of users' inputs rather than for the diagnostic purpose.

Future Work

For the case where there are a large number of hidden units, since information may be spread widely over hidden units, it seems that a large k (k is the maximum number of attributes allowed in the if-part) may be necessary. To make search efficient, it would be useful to remove combinations of hidden units which rarely occur. A smaller k would be possible by removing such combinations. Hidden layer analysis in this aspect is currently underway. Furthermore, it was observed that rules missed with a reasonable rule size were often rules applicable to some exceptional or ambiguous instances. It would not be cost-effective if the rule size is increased just for learning such rules. Applying the case-based mode to these minority instances could allow one to discover applicable rules without incurring too much cost.
More research will be conducted in order to understand the behavior of the KT algorithm in dealing with large nets.

Application

KT is a domain-independent algorithm. Here, we demonstrate it in the domain of wind shear detection by infrared sensors. Microburst phenomena have been identified as the most severe form of low-altitude wind shear, which poses a tremendous threat to aviation. An observed relationship between wind velocity differential and temperature drop leads to the development of onboard infrared wind shear detection systems, which recognize a microburst as a significant temperature drop in a narrow window. Detecting signals (microbursts) is complicated by noisy and dynamically changing environments. In our approach, the signals to be detected are modeled in two forms: reverse triangles and reverse rectangles, to various degrees of completeness. Each noise sample obtained empirically is a one-dimensional finite image. Noisy signals are generated by superimposing various signal forms on noise samples. Each raw sample is converted into a feature vector (the vector of the values of extracted features), which along with a label denoting either presence or absence of signals is taken as an instance for training or testing the network. The extracted features include: Depth; Polarity (1 is taken as the value for negative polarity and 0 for positive polarity); Maximum gradient; Slope; Summation of gradients. Feature extraction takes into account both rectangular and triangular signals. Except for the second feature, analog values are further converted to binary values (high and low) according to sample statistics. "High" means "above the average" while "low" means otherwise. 32 training instances and 32 test instances were gathered from different sets of weather conditions. The training data and the test data were digitized using the same thresholds.
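The digitization step just described can be sketched as follows. The per-feature threshold is the sample average, as stated above; the raw feature vectors are fabricated for illustration.

```python
def digitize(samples):
    """Map analog feature values to 'high'/'low' using each feature's
    average over the samples as threshold.  Feature order follows the
    paper (depth, polarity, max gradient, slope, summation of
    gradients); polarity (index 1) is already binary and is kept as-is."""
    n_feats = len(samples[0])
    means = [sum(v[i] for v in samples) / len(samples)
             for i in range(n_feats)]
    return [tuple(v[i] if i == 1 else ("high" if v[i] > means[i] else "low")
                  for i in range(n_feats))
            for v in samples]

# Two made-up raw vectors: (depth, polarity, max gradient, slope, sum).
raw = [(3.0, 1, 0.9, 0.2, 4.0),
       (1.0, 0, 0.1, 0.8, 1.0)]
print(digitize(raw))
# [('high', 1, 'high', 'low', 'high'), ('low', 0, 'low', 'high', 'low')]
```

Because each analog feature collapses to two values, distinct raw samples can map to the same digitized pattern, which is why duplicate and ambiguous instances appear in the database.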
After digitization, the same instance may appear more than once in the database, the frequency reflecting the distribution of feature patterns; ambiguous instances may be produced (i.e., the same feature pattern may be labeled differently), reflecting noise. Note that there should be no more than 32 distinct feature patterns after digitization.

The results presented here involve two kinds of networks: a single-layer network with 10 input units and 2 output units, and a two-layer network with 7 additional hidden units. In rule search, the parameters α and β were set to 0.2 and 0.8 respectively. The rule size was limited to seven. The steepness of the transfer function depends on the parameter λ; two values were tried out: 1 and 20.

The KT algorithm was applied to each network after training to generate a rule set. The performance of a network and a rule set was evaluated against the training and the test instances. All networks and their corresponding rule sets yielded the same performance level: 93.7% accuracy against the training set and 84.4% accuracy against the test set, except for a two-layer network with λ set to 1, which gave 90.6% accuracy against the training set and 82.8% accuracy against the test set for both the network and the rule versions. However, in this case, there were a few instances which were misconcluded by the network but unconcluded by the rule set.

Whether the rule set translated from an adapted net is able to maintain the same level of performance as the original network can be evaluated by comparing their predictions (conclusions) on a case-by-case basis over both training and test sets. In this way, we arrive at the coincidence rate in prediction for each network, as shown in Table 1. This rate reflects the degree of success of the KT algorithm. As seen, rules obtained from single-layer networks did not sacrifice the network performance to any extent, with a 100% coincidence rate.
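The coincidence rate just introduced is a simple case-by-case comparison; the prediction lists below are fabricated (62 agreements out of 64 instances) purely to show the mechanics.

```python
def coincidence_rate(net_preds, rule_preds):
    """Percentage of instances on which the rule set's conclusion
    coincides with the network's, compared case by case."""
    agree = sum(n == r for n, r in zip(net_preds, rule_preds))
    return 100.0 * agree / len(net_preds)

# 64 instances (32 training + 32 test) with two disagreements.
net  = ["present"] * 62 + ["present", "absent"]
rule = ["present"] * 62 + ["absent", "present"]
print(round(coincidence_rate(net, rule), 1))   # 96.9
```

Note that the rate compares the two systems against each other, not against the true labels, so it measures fidelity of the translation rather than accuracy.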
For multi-layer networks, a value of 1 for λ gave a 96.8% coincidence rate while a value of 20 for λ gave 100%. As mentioned earlier, rectification of output values may incur some loss in performance. The experimental results indicate that the loss in performance is negligible if λ is set high (say 20).

Table 2 displays the rules formed from the single-layer and the two-layer networks with a value of 20 for λ. When λ was set to 1, there was no change in rules derived from the single-layer network, but one rule was missing (namely, TR3 in Table 2) in the rule set obtained from the two-layer network. This loss of rules can be connected to the loss of performance in the above discussion.

The rules formed just correspond to a small part of the network, creating efficient reflexive arcs and providing a performance level equivalent to the network. There are two possible values for each feature and a feature can be absent in a rule. The combination of five features results in 3^5 = 243 possibilities to form rules. As seen, the rules generated by KT are a small fraction of the rule space. For example, only three confirming rules out of these 243 possibilities were found using a two-layer network with a steep transfer function.

Table 1: Prediction coincidence rates between a network and the correspondingly derived rule set over both training and test instances. λ is the parameter determining the steepness of the network transfer function.

    Network characteristics    Coincidence rate
    Single-layer, λ = 1        100%
    Single-layer, λ = 20       100%
    Two-layer, λ = 1           96.8%
    Two-layer, λ = 20          100%

Another observation is that more rules were learned from the single-layer network than from the two-layer network, and a former rule on average has more conditions in the premise part than a latter rule, in the case of confirming rules.
This observation is likely to be accounted for by hidden layers, which perform additional abstraction and generalization. In terms of efficiency, a small rule is better than a big rule, and a smaller rule set is better than a big one, for the same level of performance. The merit of individual rules can be evaluated with respect to generality (how many instances a single rule covers), the false prediction rate, and the number of attributes in the premise part. The merit of a rule-based system can be evaluated along the dimensions of prediction accuracy and the total number of rules.

The timing study showed that generating rules from an adapted net is quite fast in comparison with training it. For example, it took 7.50 sec. CPU time (on a SUN Sparc station, KT written in Common Lisp) to generate rules from a two-layer network (λ = 20), while it took 928 sec. to train it.

Conclusion

The growing interest in applying backpropagation to machine learning leads to the question of whether we can generate rules from an adapted net. This paper presents a novel algorithm referred to as KT which is an efficient algorithm for this purpose. This algorithm is able to deal with both single-layer and multi-layer networks, and can learn both confirming and disconfirming rules. The steepness of the net transfer function has been found critical for the algorithmic behavior. Analytically, we have shown that the algorithm is tractable. However, tractability may be purchased at the expense of possibly missing big rules (rules with a large number of conditions in the IF-part). In the domain of wind shear detection by infrared sensors, we have demonstrated that KT can efficiently produce a small set of rules from an adapted net with a nearly 100% mutual coincidence rate in prediction.

Acknowledgments

The author wishes to thank Dr. Brian J. Gallagher of Delco Electronics in Wisconsin for providing valuable atmospheric data.
The experiment could not have been carried out if these data had not been available.

Table 2: Rules generated by the KT algorithm. A1: Depth; A2: Polarity; A3: Maximum gradient; A4: Slope; A5: Summation of gradients.

Single-layer network:
SR1: A2=1 and A3=high and A5=high -> signal is present.
SR2: A2=1 and A4=low and A5=high -> signal is present.
SR3: A2=1 and A3=high and A4=low -> signal is present.
SR4: A1=high and A2=1 and A5=high -> signal is present.
SR5: A1=high and A2=1 and A4=low -> signal is present.
SR6: A1=high and A2=1 and A3=high -> signal is present.
SR7: A2=0 -> signal is absent.
SR8: A1=low and A3=low and A5=low -> signal is absent.
SR9: A1=low and A4=high and A5=low -> signal is absent.
SR10: A1=low and A3=low and A4=high -> signal is absent.

Two-layer network:
TR1: A2=1 and A5=high -> signal is present.
TR2: A2=1 and A3=high -> signal is present.
TR3: A1=high and A2=1 -> signal is present.
TR4: A2=0 -> signal is absent.
TR5: A1=low and A3=low and A5=low -> signal is absent.
TR6: A1=low and A4=high and A5=low -> signal is absent.
TR7: A1=low and A3=low and A4=high -> signal is absent.

References
1. Berzuini, C. 1988. Combining symbolic learning techniques and statistical regression analysis. In Proceedings of AAAI-88, Minneapolis, 612-617.
2. Fisher, D.H. and McKusick, K.B. 1989. An empirical comparison of ID3 and back-propagation. In Proceedings of IJCAI-89, Detroit, 788-793.
3. Gallant, S.I. 1988. Connectionist expert systems. Communications of the ACM, 31(2), 152-169.
4. Mooney, R., Shavlik, J., Towell, G., and Gove, A. 1989. An experimental comparison of symbolic and connectionist learning algorithms. In Proceedings of IJCAI-89, Detroit, 775-780.
5. Quinlan, J.R. 1987. Simplifying decision trees. International Journal of Man-Machine Studies, 27, 221-234.
6. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. 1986. Learning internal representation by error propagation.
In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge.
7. Saito, K. and Nakano, R. 1988. Medical diagnostic expert system based on PDP model. In Proceedings of the IEEE International Conference on Neural Networks, 255-262.
8. Weiss, S.M. and Kapouleas, I. 1989. An experimental comparison of pattern recognition, neural nets, and machine learning classification methods. In Proceedings of IJCAI-89, Detroit, 781-787.
Integrating Abstraction and Explanation-Based Learning in PRODIGY

Craig Knoblock
Carnegie Mellon University
School of Computer Science
Pittsburgh, PA 15213
cak@cs.cmu.edu

Steven Minton
Sterling Federal Systems
NASA Ames Research Center
Mail Stop: 244-17
Moffett Field, CA 94035
minton@pluto.arc.nasa.gov

Oren Etzioni
University of Washington
Department of Computer Science and Engineering, FR-35
Seattle, WA 98195
etzioni@cs.washington.edu

Abstract

This paper describes the integration of abstraction and explanation-based learning (EBL) in the context of the PRODIGY system. PRODIGY's abstraction module creates a hierarchy of abstract problem spaces, so problem solving can proceed in a more directed fashion. The EBL module acquires search control knowledge by analyzing problem-solving traces. When the two modules are integrated, they tend to complement each other's capabilities, resulting in performance improvements that neither system can achieve independently. We present empirical results showing the effect of combining the two modules and describe the factors that influence the overall performance of the integrated system.

Introduction

Artificial intelligence has traditionally favored a reductionistic approach to the study of intelligent systems, where methods for reasoning, learning, knowledge representation, etc., are studied separately. More recently, researchers have begun to seriously consider the issues involved in integrating the individual components into a single system, uncovering issues that were not considered when the components were designed. In this paper, we study the integration of abstraction and explanation-based learning (EBL) in the context of the PRODIGY system [Carbonell et al., 1990]. PRODIGY consists of a central planner and a set of distinct modules for abstraction generation, explanation-based learning, static problem-space analysis, analogical reasoning, etc.
PRODIGY's EBL module acquires search control knowledge by analyzing problem-solving traces. ALPINE, PRODIGY's abstraction module, creates a hierarchy of abstract problem spaces so that problem solving can proceed in a more directed fashion. The purpose of both modules is to increase the efficiency of the problem solver's search, but they do so using different methods.

EBL and abstraction are natural candidates for integration. ALPINE creates a hierarchy of abstract problem spaces but provides no control guidance within each space, a role tailor-made for EBL's control rules. EBL, on the other hand, can often produce more effective control rules in an abstract problem space than in the original space. This occurs because the abstract space is generally simpler for EBL to analyze, so the resulting control rules are both more general and less expensive to match.¹

The first part of this paper reviews the abstraction and EBL mechanisms in PRODIGY. Then we describe the integration of the two modules and illustrate the effect of combining the modules using the Tower of Hanoi puzzle. On this example, abstraction alone only provides some reduction in search, and EBL alone is ineffective in the original problem space due to the difficulties in analyzing Tower of Hanoi traces. Yet, the combination of the two modules completely eliminates search in this problem space. This example illustrates the synergistic effect that can occur when the two methods are combined. We then describe an empirical study in a machine-shop scheduling domain, which illustrates the synergy achieved by the combined system in a more complex problem space over a large set of examples. Finally, we compare the sources of power used by the two mechanisms in order to describe why this synergistic effect occurs, and identify when we can expect the two modules to work well together.

*The first author was supported by an Air Force Laboratory Graduate Fellowship through the Human Resources Laboratory at Brooks AFB. The third author was supported by an AT&T Bell Labs Ph.D. Scholarship. This research was sponsored in part by the Avionics Laboratory, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, OH 45433-6543 under Contract F33615-90-C-1465, Arpa Order No. 7597.

¹PRODIGY also includes a problem space compiler, STATIC [Etzioni, 1990], that carries out a similar, but more restricted, version of EBL's analysis. While we have not yet explored the integration of STATIC and abstraction, we would expect the use of abstraction to similarly benefit STATIC.
We then describe an empirical study in a machine-shop scheduling domain, which illustrates the synergy achieved by the combined system in a more complex problem space over a large set of examples. Finally, we compare the sources of power used by the two mechanisms in order to describe why this synergistic effect occurs, and ident,ify when we can expect the two modules to work well together. 'PRODIGY also includes a problem space compiler, STATIC [Etzioni, 19901, that carries out a similar, but more restricted, version of EBL'S analysis. While we have not yet explored the integration of STATIC and abstraction, we would expect the use of abstraction to similarly benefit STATIC. KNOBLOCK, MINTON, & ETZIONI 541 The PRODIGY System PRODIGY is an integrated system for planning and learning. PRODIGY'S basic reasoning engine is a general-purpose means-ends analysis problem solver that searches for sequences of operators to accomplish a set of goals. In order to solve problems in a par- ticular problem space, PRODIGY must first be given a specification of that problem space, consisting of a set of operators and inferences rules. Search in PRODIGY can be guided by conirol rules that apply at its de- cision points: selection of which node to expand, of which goal to work on, of which operator to apply, and of which objects (operator bindings) to use. At each decision point, the control rules can select or reject alternatives or prefer some alternatives over others. In this paper the Tower of Hanoi puzzle is used for illustration. The picture at the top of Figure 1 shows the state space for the three-disk puzzle. As shown in the picture, the problem is to find a sequence of operators to move the three disks from the first peg to the third peg. Abstraction in PRODIGY PRODIGY'S abstraction module, ALPINE, takes an ini- tial problem-space specification and automatically gen- erates a hierarchy of abstraction spaces [Knoblock, 1990, Knoblock, 19911. 
Each abstraction space in the hierarchy is formed by dropping conditions from the original problem space. An abstraction space is defined by a set of abstract operators and states. In the Tower of Hanoi, for example, an abstract problem space can be formed by dropping all of the conditions referring to the smallest disk from both the operators and states. The resulting abstract space can be further simplified by dropping all the conditions referring to the medium-sized disk. Figure 1 shows the original state space and the two reduced state spaces for the Tower of Hanoi.

PRODIGY uses the abstraction hierarchies produced by ALPINE to perform hierarchical problem solving. The use of the abstractions in problem solving allows PRODIGY to focus on the more difficult aspects of a problem first and thereby reduce the overall search. Given a problem, PRODIGY first solves it at the highest level of abstraction. To do this, a problem is mapped into an abstract problem by dropping the conditions that are not relevant to that abstraction level. The abstract problem is then solved in the abstract problem space. The resulting abstract solution then serves as a skeleton for the solution at the next abstraction level, where additional operators are inserted into the plan to achieve the conditions that were ignored in the higher-level space. The problem solver refines the plan at each successive level in the hierarchy and ultimately produces a plan that solves the original problem.

ALPINE forms abstraction hierarchies based on the ordered monotonicity property, which requires that the truth value of a condition introduced at one level is never changed at a lower level.

542 LEARNING SEARCH CONTROL

Figure 1: Original and Reduced State Spaces for the Tower of Hanoi.

This property guarantees that the preconditions that are achieved in an abstract plan will not be deleted (clobbered) while refining that plan. To construct abstraction hierarchies with this property, ALPINE analyzes the preconditions and postconditions of the problem-space operators to determine potential interactions. If a plan for achieving condition C1 can change the truth-value of condition C2, then condition C1 cannot be in a lower abstraction level than condition C2. The potential interactions define a set of constraints on the final abstraction hierarchy that are sufficient to guarantee the ordered monotonicity property. The detailed algorithm is described in [Knoblock, 1991].

ALPINE automatically constructs an abstraction hierarchy for the Tower of Hanoi by partitioning the conditions for each of the three different-sized disks into three abstraction levels. The largest disk is placed in the most abstract level, the medium disk is added at the second level, and the third level contains all three disks. This hierarchy has the ordered monotonicity property, since once an abstract plan is produced for a given disk, one can plan how to move the smaller disks without clobbering the conditions achieved in the abstract plan.

Explanation-Based Learning in PRODIGY

PRODIGY's explanation-based learning module produces search control knowledge by analyzing the planner's experiences [Minton, 1988]. After each planning episode, the EBL module examines the control choices that were made, as recorded in a trace produced by the planner. By explaining why a control choice was appropriate or inappropriate, the system can learn a general search control rule. The learned rule enables the planner to make the right choice if a similar situation arises during subsequent problem solving. The EBL module is able to analyze several types of experiences, including success, failure and goal interference, as defined below. In EBL terminology these are referred to as the target concepts.

1. SUCCESS: A control choice succeeds if it leads to a solution.

2. FAILURE: A choice fails if there is no solution consistent with that choice.

3. GOAL-INTERFERENCE: A choice results in goal interference if in every solution consistent with that choice, a precondition in the solution is clobbered. (A precondition is clobbered if it is asserted, deleted, and then re-asserted.)

After each problem solving episode, the system examines the problem-solving trace to find examples of the target concepts. The system uses a set of user-provided heuristics to pick out examples that it estimates would result in useful rules being learned. For each example, the system explains why the target concept is satisfied by the example. For instance, given an example of a failure where the wrong operator is chosen, the system explains why that operator failed. Explanations are constructed using a theory that describes the problem-space operators and the relevant aspects of the PRODIGY planner. To form a control rule, the system finds the weakest preconditions of the explanation, and these become the antecedent of the rule. The antecedent specifies the conditions under which the rule is applicable. The consequent of the control rule is determined by the target concept and specifies the effect of the rule.

Consider a two-disk Tower of Hanoi problem that illustrates how PRODIGY learns from an example of goal interference. The problem is to move the two disks (a large disk and a small disk) from the first peg to the third peg. In order to move the large disk, the subgoal of removing the small disk from the first peg must first be achieved. There are two available alternatives, since the small disk can be moved to either the second or third pegs. However, the third peg is a bad choice since the large disk cannot then be moved there (unless the small disk is moved again).
FAILURE: A choice fails if there is no solution con- sistent with that choice. 3. GOAL-INTERFERENCE: A choice results in goal interference if in every solution consistent with that choice, a precondition in the solution is clobbered. (A precondition is clobbered if it is asserted, deleted, and then re-asserted). After each problem solving episode, the system ex- amines the problem-solving trace to find examples of the target concepts. The system uses a set of user- provided heuristics to pick out examples that it esti- mates would result in useful rules being learned. For each example, the system explains why the target con- cept is satisfied by the example. For instance, given an example of a failure where the wrong operator is chosen, the system explains why that operator failed. Explanations are constructed using a theory that de- scribes the problem-space operators and the relevant aspects of the PRODIGY planner. To form a control rule, the system finds the weakest preconditions of the explanation, and these become the antecedent of the rule. The antecedent specifies the conditions under which the rule is applicable. The consequent of the control rule is determined by the target concept and specifies the effect of the rule. Consider a two-disk tower of Hanoi problem that il- hStm%teS how PRODIGY learns from an example of goal interference. The problem is to move the two disks (a large disk and a small disk) from the first peg to the third peg. In order to move the large disk, the sub- goal of removing the small disk from the first peg must first be achieved. There are two available alternatives, since the small disk can be moved to either the second or third pegs. However, the third peg is a bad choice since the large disk cannot then be moved there (un- less the small disk is moved again). 
PRODIGY explains that moving the small disk to the third peg clobbers a precondition of moving the large disk to that peg, and it learns a control rule to make the correct control decision in the future. The control rule states that when the current goal is to move the small disk from a peg, and this is a subgoal of moving the large disk to some peg x, then the system should prefer moving the small disk to a peg other than peg x.

Integrating Abstraction and EBL

While both of the learning methods described in the previous sections are effective at reducing search, each approach has some limitations. ALPINE takes the initial problem space and forms a hierarchy of abstract problem spaces. However, the use of abstraction provides no guidance about how to solve a problem within each of the abstract problem spaces, so search is still required. The EBL module learns search control rules by analyzing the problem-solving traces; however, as the complexity of the training problems increases, the EBL module tends to produce less effective control rules.

A natural approach to addressing these limitations is to integrate the use of abstraction and explanation-based learning. The EBL module can be applied within each abstract problem space to learn search-control rules. This provides the guidance needed to solve problems within an abstraction space. In addition, the abstract problem spaces provide simpler problems and a simpler theory for the EBL module, enabling it to learn better control rules. The learned rules can also be partitioned across the abstraction hierarchy, which results in fewer, more general rules to consider at each node in the search space.

Implementation

The use of abstraction in PRODIGY divides a problem into a number of simpler subproblems. To combine the use of abstraction and EBL, the EBL module is used to learn control rules on each of these subproblems.
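The preference rule learned from the two-disk example above can be sketched as match-and-prefer code. The goal tuples, context keys, and peg names below are our own assumptions about how such a decision point might be represented, not PRODIGY's rule syntax.

```python
def prefer_small_disk_destination(context):
    """Learned preference (sketch): when moving the small disk is a
    subgoal of moving the large disk to peg x, prefer destination pegs
    other than peg x.  Otherwise leave the candidates unchanged."""
    goal = context["goal"]                # e.g. ("move", "small")
    parent = context.get("parent_goal")   # e.g. ("move", "large", "peg3")
    if (goal[:2] == ("move", "small") and parent
            and parent[:2] == ("move", "large")):
        avoid = parent[2]                 # peg x of the parent goal
        preferred = [p for p in context["candidates"] if p != avoid]
        if preferred:                     # a preference, not a hard filter
            return preferred
    return context["candidates"]
```

Given the two-disk situation (small disk must leave the first peg, large disk is headed for peg3), the rule keeps only peg2 as a preferred destination.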
This is implemented by first solving a problem using hierarchical problem solving and recording all of the individual subproblems that arise, and then running the EBL module on each of the subproblems to produce a set of control rules.

Instead of applying the learned control rules at every level of abstraction, the learned control rules are only considered in the most abstract space in which all the conditions of the rule are relevant. The ordered monotonicity property partitions the possible goals and subgoals into distinct levels in the abstraction hierarchy. Since each control rule is specific to a goal, it follows that once a rule is considered at a given level, it need not be considered at any lower levels. Thus, before problem solving begins, each control rule is associated with a particular abstraction level.

Integration in the Tower of Hanoi

The Tower of Hanoi problem space illustrates the potential synergy between abstraction and EBL.

KNOBLOCK, MINTON, & ETZIONI 543

If abstraction is used alone, solving the Tower of Hanoi still requires some search. At each abstraction level the system must determine where to move the disk introduced at that level so that it does not interfere with the disk that was moved at the next higher level. While the use of abstraction reduces the total amount of search, it by no means eliminates it. In addition, the use of abstraction produces shorter solutions, but not optimal ones. Similarly, if EBL is used alone, the system will not learn the control rules required for efficiently solving the three-disk problem or any larger problem. The problem is that the EBL module avoids learning about an observed goal interference if all ways to solve the problem involve interferences, as is the case with the Tower of Hanoi when three or more disks must be moved. (Every solution involves at least one goal that is clobbered and then reachieved.)
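The assignment of each learned rule to a single abstraction level, as described in the implementation above, can be sketched as follows. The hierarchy and rule names are illustrative assumptions; the essential idea is that a condition is relevant from its introduction level downward, so a rule belongs at the deepest introduction level among its conditions.

```python
def assign_rule_levels(rules, hierarchy):
    """rules: {rule_name: set of goal conditions the rule mentions}.
    hierarchy: levels listed most abstract first (index 0).
    Returns {rule_name: index of the single level at which the rule
    should be matched}."""
    # Level at which each condition is introduced into the hierarchy.
    intro = {c: i for i, level in enumerate(hierarchy) for c in level}
    # A rule is indexed at the deepest (least abstract) introduction
    # level among its conditions; above that, some condition is absent.
    return {name: max(intro[c] for c in conds)
            for name, conds in rules.items()}
```

With the three-level Tower of Hanoi hierarchy, a rule mentioning both the large and medium disks is matched only at the middle level, while a rule about the small disk is deferred to the lowest level.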
This heuristic is used to prevent the system from learning conflicting preference rules at decision points where all choices result in goal interference. Conflicting preferences would provide no guidance and merely slow the system down.

Combining abstraction and EBL in the Tower of Hanoi reduces the problem to one that can be solved completely deterministically (i.e., the correct decision is made at every choice point). As explained previously, the abstraction simplifies the problem such that the only search involves moving a disk out of the way of another disk that needs to be moved. There are only two places to move the disk, one of which is the "right" place and the other of which will interfere with the placement of another disk. Thus, within each abstraction level, there is an interference-free solution. Consequently, EBL can learn the required control rules within each abstraction level, and in fact, this is exactly the type of learning situation illustrated in the last section for the two-disk example. Thus, the EBL system can learn a set of rules that allow the problem solver to make the correct choices at each level in the hierarchy.

Figure 2: Integration in the Tower of Hanoi.

As shown in Figure 2, the combination of the two techniques in the Tower of Hanoi produces performance improvements that neither system can achieve independently. The use of ALPINE's abstractions alone produces only a small reduction in problem-solving search.

544 LEARNING SEARCH CONTROL

The EBL module alone is unable to learn a useful set of rules, so it performs the same as PRODIGY without any control knowledge. In contrast, the combination of the two approaches eliminates all search from the problem and produces optimal solutions.

This section compares the performance of PRODIGY individually and in combination with the abstraction and EBL modules in a machine-shop scheduling domain [Minton, 1988].
The various configurations were each run on one hundred randomly generated problems that were originally used for testing the EBL system. Comparing the results of the different configurations on the set of test problems is complicated by the fact that some of the problems cannot be solved within the 600 CPU second time limit. Since the choice of a time bound can affect the relative performance of the different configurations [Segre et al., 1991], Figure 3 shows the total time expended for each configuration as the time bound increases. The slope at each point in a given curve indicates the relative portion of the problems that remain unsolved. A slope of zero, for example, means that all of the problems have been solved (no more time is required to solve any of the problems). As shown by the graph, PRODIGY combined with either abstraction or EBL performs better than the basic PRODIGY system, and the combination of abstraction and EBL performs better than either system alone.2 These results appear to be robust across different time bounds.

Figure 3: Integration in the Machine-Shop Domain.

Discussion

Methods for integrating abstraction and EBL have only been explored in a few other systems. Sacerdoti [1974] briefly discussed the potential uses for learned macro operators in the ABSTRIPS system. More recent work has explored the use of simplified or incomplete explanations to address the intractable theory problem in EBL [Chien, 1989, Tadepalli, 1989, Bhatnagar and Mostow, 1990]. The most closely related work is by Unruh and Rosenbloom [1989], who developed an automatic abstraction mechanism in SOAR.

2 Minor syntactic variations on the problem space specification, required to integrate the two modules, degrade ALPINE's performance slightly relative to the results reported in [Knoblock, 1990].
While their method for generating abstractions is quite different from the method used in PRODIGY, their overall approach is similar in that SOAR employs an EBL-like "chunking" process to learn search-control knowledge. Unruh and Rosenbloom showed an example in which abstraction increased the generality of a learned chunk, but they did not explore in detail how their abstraction method affects the utility of learning.

In the integration of the abstraction and EBL systems in PRODIGY, there are several complex factors that determine the overall performance of the integrated system. It is relatively clear why abstraction and EBL can perform better than abstraction alone: EBL can learn control rules to guide search within each abstraction level. A more interesting question is why abstraction can help the EBL module to learn a more useful set of rules than it would in the original problem space. There are two different ways in which the abstraction module provides an advantage for EBL. First, the use of abstraction can simplify EBL's explanations. Second, abstraction can increase the utility of EBL's rules by reducing their match cost. The following sections explore these issues in more detail.

The Impact of Abstraction on Explanation Generation

In general, the main difficulty for EBL in PRODIGY is that the problem-solving trace may be very long and complex. The EBL module relies on the problem-solving trace for learning; if the trace is too complex, then it can lead the EBL module astray. This can happen in two ways. First, the heuristics that the EBL module uses to identify useful training examples may be confounded, as in our Tower of Hanoi example. In the three-disk Tower of Hanoi, PRODIGY cannot identify how to solve the problem without a goal interference, so it does not learn to avoid goal interferences.
Second, the EBL module may identify a training example that appears useful, but the explanation may be overly complex, and thus overly specific.

Consider a simplified example from the machine-shop domain, where Part1 must be painted red at Time-Slot 1. There are two possible operators: Spray-Paint and Immersion-Paint. Suppose that when trying Spray-Paint, PRODIGY finds that the Spray-Paint machine is unavailable at Time-Slot 1, so the operation fails. When trying the alternative, Immersion-Paint, PRODIGY finds that there is no red paint left, so that choice fails too. The resulting explanation lists the observed reasons for failure, that is, painting an object fails if the Spray-Paint machine is unavailable at that time and there is no paint of that color left. However, this explanation is overly specific because, in fact, Spray-Paint will also fail if there is no red paint left, regardless of whether the Spray-Paint machine is busy or not. The EBL system produces an overspecific explanation because it relies on the failure conditions identified by the problem solver, and the problem solver simply reports the failure conditions it finds first, not the most concise or general set of failure conditions. Thus, while a human can easily see that a simpler and more succinct explanation for the failure exists, EBL does not.

Abstraction can improve EBL's analysis because there tends to be less search in an abstract space than in the original problem space. Consequently, EBL's explanations will have fewer conditions, so the resulting control rules will be both more general and less expensive to match. Consider explaining the painting failure described above. Since ALPINE automatically separates scheduling from process planning in the machine-shop domain by dropping the conditions that involve scheduling, PRODIGY first orders the machining operations and only then assigns time slots to its operations.
When time is abstracted away, painting will still fail, but the sole reason for the failure is that the required paint is not available, yielding the more succinct explanation we are after.

One important special case of this occurs when abstraction partitions the problem space so that recursion is restricted to certain partitions. Because the cost of matching control rules learned from recursive explanations is exponential in the depth of the recursions encountered, and because such rules are recursion-depth specific, EBL is often ineffective when its explanations are recursive [Etzioni, 1990]. Abstraction can help EBL overcome this "problem of recursive explanations" by partitioning a recursive problem space into a nonrecursive abstract space and a lower-level recursive space. Thus, EBL will at least perform well at the abstract space. In the machine-shop scheduling domain, for example, recursion (or, more precisely, iteration) arises when considering successive time slots in a schedule. Since the abstractions push time slot considerations to the lowest abstract space, the more abstract problem spaces are nonrecursive and hence more manageable for EBL. In the Tower of Hanoi problem space, recursion arises in explaining goal interference. Since ALPINE's abstraction hierarchies separate the placement of each disk into distinct abstract problem spaces, the recursion is completely removed from the traces EBL analyzes.

Abstraction will not necessarily help EBL. Specifically, ALPINE may sometimes remove conditions critical to explaining failure and goal interference. Although ALPINE guarantees the ordered monotonicity property, so that the conditions established by the abstract plan will never be undone by problem solving at a lower level, it does not guarantee that every abstract plan is realizable.
This means that an abstract plan may be produced that cannot be refined into a plan in the original problem space. Thus, PRODIGY can produce an abstract solution and find itself unable to completely refine this solution, in which case it must backtrack across abstraction levels to find a different abstract solution. In such cases EBL will not be able to form useful control rules in the abstract problem space. The information necessary to explain the appropriate control choices is "hidden" by the abstraction. Since ALPINE usually produces "good" abstractions in the problem spaces studied [Knoblock, 1991], this scenario is atypical. Nonetheless, to the extent that the planner does backtrack across abstraction levels, it can cause EBL to perform poorly (relative to its performance in the original space). One solution to this problem is to enable EBL to explain failures, etc., across abstraction levels; this is a topic for future work.

The Impact of Abstraction on the Utility of Control Rules

Once EBL acquires a learned rule, it is indexed so that during subsequent problem solving, it is considered only at the most abstract space in which it could possibly be relevant. The partitioning of control rules into different abstraction levels reduces the cost of matching the learned control rules for several straightforward reasons. First, since each control rule is only matched at the abstraction level in which all of its conditions are relevant, the control rules are matched less frequently than in standard problem solving (where they are matched at every decision point). Second, the matching process itself is slightly faster at abstract levels because the reduced states may contain fewer literals that are relevant to the rule. On the other hand, abstraction can also reduce the potential benefit of the learned rules.
Since less time is spent at any given abstraction level, the amount of search that is saved by the application of a control rule will be reduced. However, our empirical results show that the benefits outweigh the costs.

Conclusion

In this paper we have shown that explanation-based learning and abstraction can be integrated so that each complements the other's performance. In some cases, as in the Tower of Hanoi, the individual modules may be relatively ineffective by themselves and yet the combined system can produce significant performance improvements due to their synergistic interaction. The integrated system improves on the individual components in more complex domains as well, such as the machine-shop scheduling domain. As VanLehn [1989] points out, human learners can utilize a variety of different learning methods in the course of a single learning episode. The integration of abstraction and EBL brings PRODIGY one step closer towards this goal.

References

[Bhatnagar and Mostow, 1990] Neeraj Bhatnagar and Jack Mostow. Adaptive search by explanation-based learning of heuristic censors. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 895-901, 1990.

[Carbonell et al., 1990] Jaime G. Carbonell, Craig A. Knoblock, and Steven Minton. PRODIGY: An integrated architecture for planning and learning. In Kurt VanLehn, editor, Architectures for Intelligence. Erlbaum, Hillsdale, NJ, 1990. Available as Technical Report CMU-CS-89-189.

[Chien, 1989] Steve A. Chien. Using and refining simplifications: Explanation-based learning of plans in intractable domains. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 590-595, 1989.

[Etzioni, 1990] Oren Etzioni. A Structural Theory of Explanation-Based Learning. Ph.D. Thesis, School of Computer Science, Carnegie Mellon University, 1990. Available as Technical Report CMU-CS-90-185.

[Knoblock, 1990] Craig A. Knoblock.
Learning abstraction hierarchies for problem solving. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 923-928, 1990.

[Knoblock, 1991] Craig A. Knoblock. Automatically Generating Abstractions for Problem Solving. Ph.D. Thesis, School of Computer Science, Carnegie Mellon University, 1991. Available as Technical Report CMU-CS-91-120.

[Minton, 1988] Steven Minton. Learning Search Control Knowledge: An Explanation-Based Approach. Kluwer, Boston, MA, 1988.

[Sacerdoti, 1974] Earl D. Sacerdoti. Planning in a hierarchy of abstraction spaces. Artificial Intelligence, 5(2):115-135, 1974.

[Segre et al., 1991] Alberto Segre, Charles Elkan, and Alex Russell. A critical look at experimental evaluations of EBL. Machine Learning, 6(2), 1991.

[Tadepalli, 1989] Prasad Tadepalli. Lazy explanation-based learning: A solution to the intractable theory problem. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 694-700, 1989.

[Unruh and Rosenbloom, 1989] Amy Unruh and Paul S. Rosenbloom. Abstraction in problem solving and learning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 681-687, 1989.

[VanLehn, 1989] Kurt VanLehn. Discovering problem solving strategies: What humans do and machines don't (yet). In Proceedings of the Sixth International Workshop on Machine Learning, pages 215-217. Morgan Kaufmann, Los Altos, CA, 1989.
Joël Courtois
Institut Supérieur d'Electronique de Paris
21, rue d'Assas
75270 Paris cedex 06
FRANCE
and
LAFORIA, Université PARIS VI
4, place Jussieu
75252 Paris cedex 05
FRANCE

Abstract

This paper shows how a new approach in the use of AI techniques has been successfully used for the design of an effective ITS in the domain of diagnosis training. The originality of this approach was to take into account three complex problems simultaneously: teaching diagnosis methods to students, giving the teachers the means to maintain the system by themselves, and providing a tool easy to insert in the context of university laboratories. The architecture of the system is based on a distinct use of two kinds of knowledge representation. All the knowledge liable to modifications is gathered within libraries under descriptive forms easily maintained by the educational staff. General diagnosis knowledge, independent of hardware, circuits and even application fields, is described with basic production rules and control metarules. The development of the system was based on the precise analysis of the expert's behaviour and of the user's needs, with the aim of making extensive use of the descriptive forms in order to minimize the static knowledge embedded in the rules. The system can work on a microcomputer and is used in an engineering school.

I. Introduction

Troubleshooting is a well-known domain of application for AI researchers. The challenge is to identify elementary physical components responsible for the malfunction of a more complex structure. The definition of an elementary physical component depends on the kind of maintenance that can be done. Thus, to estimate the troubleshooting complexity of a device it is always necessary to take into account the level of structural decomposition required for this purpose.
Diagnosing a network of microcomputers is quite an easy task, while troubleshooting analog circuits (De Kleer 89), (Dague 87) is a very complex problem not yet solved. Nowadays, having regard for a real life cycle (Courtois 90b), the insertion of a diagnosis expert system in an industrial context gives efficient results.

At the same time, many research works were conducted in the domain of Intelligent Tutoring Systems. The results of these works are the following: first of all, it is necessary to be an expert in the studied field of application; a student model can be useful to adapt the behaviour of the system; teaching strategies must be studied; and a user-friendly environment plays a significant part in the efficiency of the system. Teaching diagnostic skills includes the complexity of the field of application, as explained in Section II. However, even if these points require much more study, our aim is to prove that, nowadays, the state of the art allows the design of efficient ITS in a field of application where significant needs exist: Practical Works aid. Section III shows how, in order to overcome difficulties, the different points previously explained were taken into account for the SIAM system design.

Another problem that is sometimes forgotten by ITS designers is the teacher's point of view. It is a mistake to think that, today or tomorrow, ITS will replace teachers in any situation. The development of ITS implies the participation of the teachers, and the first step in the process of integration in the educational environment is to conceive ITS as new technology tools. Consequently, Section IV explains how a true involvement from the teachers requires maintenance facilities, in order to allow them to improve the system by themselves, and how technical aspects must be taken into account for effective use in university laboratories. Section V describes the architecture of the SIAM system and the use of the different types of knowledge representations.
The various models of description and expert modules used are shown with their intercommunications. An evaluation of the SIAM system is possible thanks to a real experimentation conducted over a period of one year. Section VI gives some findings of this experiment.

At the same time as diagnosis systems were designed, research works on ITS were conducted within the same field of application. Three significant projects show us some steps of the evolution of the state of the art in the domain of teaching diagnosis:

- SOPHIE I, II and III (Brown 81): In SOPHIE I, the student could take decisions and the inference procedure was good, but the level of the explanations was weak. In SOPHIE II, the student was strongly directed and the inference procedure was limited, but the level of the explanations was very good. The objectives of SOPHIE III were to improve these points by the design of three modules: an electronic expert, a troubleshooter and a coach.

COURTOIS 55
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

- GUIDON I and II (Clancey 81-86): GUIDON I was a general teaching system able to work with any knowledge base using EMYCIN. The pedagogical knowledge is clearly separated from the domain-specific knowledge. The difficulties came from the design of the technical knowledge bases. Only one diagnostic strategy was possible, and the way it was conceived did not allow the student to memorize the expert rules. GUIDON II, with GUIDON-WATCH, GUIDON-DEBUG and GUIDON-MANAGE, improves the level of the explanations and focuses on the teaching of the diagnosis strategies.

- SOCRATE (Moustafiades 90): SOCRATE is an expert system with a double vocation: SOCRATE-DIAGNOSTIC helps a technician on the spot when there is a breakdown, and SOCRATE-PEDAGOGUE teaches a rigorous diagnosis method by putting the student in an active learning context thanks to exercises drawn from real breakdowns. The domain concerns automatized production equipment.
The aim is to teach a diagnosis method in an interactive and progressive environment thanks to various pedagogical and technical strategies.

These works show that to teach diagnosis strategies the following problems must be tackled:
- Design of an Efficient Diagnosis System.
- Explicit Identification of the Diagnosis Strategies.
- Explicit Identification of the Pedagogical Strategies.

III. Specificities of the P.W. Context

The domain chosen for experimentation is that of tutorials in higher education. This context is interesting from several points of view. The technical level of the devices used is high, but the complexity of the circuits built by the students is never really considerable. This allows one to use model-based and symptom-based approaches very efficiently. In this context, troubleshooting can be done with a good expert system. The teaching of diagnosis strategies is one of the aims of the Practical Works, so these strategies are available from the "expert teachers". In the same way, the various levels of explanation that should be given to the students, in accordance with their level of knowledge, are known by the teachers. In this context the design of a student model is not possible (and not necessary), knowing that the interactions with the student are short and not regular. Practical Works sessions make the student progress from the stage of beginner to that of expert while becoming aware of the links that exist between theory and practice. Practical assistance takes up most of the teachers' time since its aim is to solve the numerous problems that crop up during the manipulations.
This practical assistance is relatively complex, and we can distinguish three principal sources of problems:
- a misunderstanding of the work expected; this can come from incomplete knowledge of the underlying theory or of one of the materials involved;
- an error in the carrying out of the manipulation; this is the most frequent type of problem, ranging from an assembly error to confusion about materials or to any possible mistaken use;
- a total breakdown or malfunctioning of a material; this is also quite frequent because of the bad handling to which the materials are frequently submitted.

Lastly, it is essential not to solve the problems for the students but rather to give them the knowledge and the know-how that will make them autonomous in a short time.

IV. Teachers'

The use of an Intelligent Tutoring System is a useful complement to the human tutors' work, for correct learning, in a practical context, needs very close surveillance to be efficient in a short time. But to be used in a laboratory the system must work on a large number of domains (physics, electronics, optics, etc.) and with several types of materials. Secondly, it needs the pedagogical skill which will allow the students to be autonomous as soon as possible. Lastly, it must be designed within a reasonable lapse of time because of the frequent modifications of the tutorial subjects linked to the evolution of the programs.
With these conditions the following constraints appear:
- necessity for the system to be able to solve the problem with which it is confronted; this point requires the creation of a real expert in technical diagnosis;
- possibility of using the system for numerous tutorials in the same domain but also possibility of changing domain; the diagnostic knowledge supplied must be of a high enough level of generality to allow adaptation to several contexts;
- possibility of doing the maintenance of the system without AI technicians or computer experts;
- a pedagogical approach in the search for solutions; the system must be able to explain itself and to react in an interactive way with the learner. It is not a question of solving the problem but rather of guiding the student towards the solution by the presentation of the methodology of research followed by the "expert teacher".

The most important consequence for the design of the system is that priority must be given to the definition of descriptives rather than to the writing of rules, in order to dissociate the context-specific data from the procedures of data processing. The justifications of this approach will be found in the genericity of the technical diagnosis and the facilities of maintenance. The technical expertise can be focused on the individual diagnosis of the materials. This choice can be discussed, but we must not forget that we find ourselves in prepared sessions "that should work" and so we know the principal interactions between the materials. These interactions are therefore introduced as possible malfunctions at the individual level.

56 EDUCATION

Finally the system must not be conceived to take the place of the human assistant but rather to work as a computerized complement.

V. Architecture

Introduction

The practical implementation of all the previously defined constraints is the SIAM system.
SIAM is an expert system designed for diagnosis aid which guides the student, providing him with explanations of the key steps of the expert diagnosis method. The system works on the spot, in a laboratory during Practical Works in Higher Education, and when real failures happen. The SIAM development focuses on the problems of reusability and maintainability of knowledge and on its use with a pedagogical aim. Genericity of knowledge is an essential point in the reusability of expertise for the development of new applications. It also allows one to improve maintenance by separating general knowledge from knowledge specific to an application context. The pedagogical objective was taken into account right from the design phase of SIAM, so that it has all the expert's behaviour justifications.

The SIAM knowledge and know-how are described either with descriptive sheets designed using predefined models, or with production rules. All knowledge, related to domains or to hardware, liable to modifications, is gathered within libraries under descriptive forms easily maintained by the educational staff. Basic rules and control metarules which describe general diagnosis knowledge are independent of hardware, circuits and even application fields.

The general architecture of the SIAM system is shown in Figure 1. This architecture is based on the use of Models (a) and Descriptives (b) conceived from the models. The necessary descriptives collected together define the Context of Work (c) for the system. SIAM technical knowledge can be divided into two principal categories: on the one hand, the descriptive knowledge, and on the other hand, the expert modules. Most of the system knowledge is represented in a descriptive way. The Production Rules (d), in the expert modules, are very general ones, able to work with any descriptive made with the predefined models. The Inference Engine (e) has no specific knowledge. It only activates the rules and metarules of the knowledge base.
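The division of labor described above, with a domain-free engine applying generic rules to whatever descriptives form the context of work, can be sketched as follows. This is our own minimal illustration, not the SIAM implementation; the rule, the descriptive fields and the problem encoding are all invented for the example.

```python
def apply_rules(rules, descriptives, problem):
    """Domain-free engine (sketch of component (e)): fire every generic
    rule against every descriptive in the context of work and collect
    the advice produced.  The engine itself knows nothing about any
    domain or material."""
    advice = []
    for rule in rules:
        for desc in descriptives:
            result = rule(desc, problem)
            if result is not None:
                advice.append(result)
    return advice

def rule_role_relevance(desc, problem):
    """One generic rule (illustrative): flag any descriptive whose
    declared role mentions the reported problem area."""
    if problem in desc.get("Role", ""):
        return f'examine {desc["Name"]} (role: {desc["Role"]})'
    return None
```

Swapping in a different library of descriptives changes the system's competence without touching the rules or the engine, which is the maintainability property the architecture aims for.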
Figure 1: General architecture of the SIAM system.

Interfaces (f) allow understanding and generation of natural language for communications between the Student and SIAM. The Menu technique is also used by the system.

Descriptive Knowledge

Before presenting the models used by the system it is necessary to indicate the kind of information they are able to represent. The models' attributes were designed according to the needs of representation. Descriptive knowledge given to the system through descriptives contains different types of information:
- conceptual descriptions;
- functional and relational descriptions;
- physical descriptions.

The first category supplies the most general knowledge, the highest abstraction level that the expert's behaviour obeys. The extraction of these cognitive processes is often difficult because it comes up against the expert's judgement "it's obvious" or "it's logical". It is, nevertheless, at that level that we find the most useful knowledge, both to obtain a greater efficiency of the diagnosis system and a greater relevance of the pedagogical method being implemented. From the diagnosis point of view, identifying all the justifications of the expert's behaviour allowed us to conceive a general diagnosis strategy, independent of the technical field of application, the domain specificities being described within declarative descriptives. From the pedagogical point of view, these justifications were used to define steps of the diagnosis reasoning and to give explanations to the students. For example, we may find the conceptual description of a tutorial (what a student is going to do in a tutorial) and the conceptual description of a measure and a signal (we also had to describe the different types of signals which could be studied: light, electricity, motion).
The second category allows us to determine the function of the different elements which are used in the manipulations, as well as the main possible interactions due to the propagation of other elements' problems. Here the whole technical knowledge of the expert is thoroughly involved. In this category, we find the functional and relational descriptions of the devices. These descriptions of the various materials are made within the same frame (the Material Model); this descriptive constraint has the advantage of making it possible to write diagnosis rules quite dissociated from the materials and their application fields. Up to now, this structure has enabled us to describe all types of materials, from the oscilloscope to the collimator as well as the lens and the simple wire. The third category, the easiest to carry out, enables us to represent the various elements and their kinds of connections physically. Here we find the physical description of the materials (so as to detect the risks of confusion between some of them), the physical and functional description of the circuits (a list of the materials and the links between them in the circuit, as well as the way the circuit is carried out and the way it can be controlled) and the description of the precise context of a tutorial problem. The models which define the structure of representation of this knowledge are based on the following frames:
- Practical Work Environment Model: allows the representation of the objectives of practical work sessions. The principal attributes are PW-Success-Conditions, Measurement-Success-Conditions, Signal-Success-Conditions and ProblemDescription. For example: to succeed in a PW session it is necessary to do correct measurements, good calculations and to understand the theory; to be able to do a correct measurement it is first necessary to have a correct signal.
- Domain Model: used for information related to the domain, the signal description in particular.
As for the signal description, we have attributes like Name, Detection-Complexity, Measurement-Means, Origin and Transit.
- Material Model: used for all kinds of material from all domains. The most important attributes are Role, Directions-for-Use (with Setting, Non-Setting and Consequence), Test (with Symptom and Consequence), Limit (with Non-Respect and Consequence) and Physical-description. For each value of the attribute Directions-for-Use there is another attribute Setting; for each of its values, another attribute Non-Setting; and for each of those values, another attribute Consequence.
- Circuit Model: on the one hand, this model describes the physical circuit and, on the other hand, its strategy of checking, which is specific to the domain of application. No descriptives are made through this model, the corresponding information being included in the domain descriptives for the strategy of checking, and in the practical work part descriptives for the physical description.
- Practical Work Part Model: allows the description of each of the minor manipulations which constitute a session (e.g., materials, circuit, actions).
Libraries of descriptives were conceived through those models in 3 domains of application, for 50 materials and 40 parts of practical work. An example of representation of an electronic circuit is given in Figure 2. A specific context of work for SIAM is made by gathering the required descriptives. With all these data, the system just needs the student's problem description to be able to begin its investigations thanks to the expert modules.

Expert Modules

The descriptive knowledge is used by a set of general expert modules thoroughly independent of the context, the fields of application and the materials. These knowledge bases are managed by a large quantity of metaknowledge integrated into the various modules.
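The nested Material Model structure described above (Directions-for-Use, each with a Setting, its Non-Setting, and the resulting Consequence) can be sketched as a simple frame. This is a minimal illustration with hypothetical attribute values; the paper does not give SIAM's actual internal representation.

```python
# A minimal sketch, with hypothetical attribute values, of a Material Model
# descriptive and its nested Directions-for-Use -> Setting -> Non-Setting ->
# Consequence structure.

oscilloscope = {
    "Role": "display and measure signals",
    "Directions-for-Use": {
        "coupling": {
            "Setting": "DC",
            "Non-Setting": "AC coupling selected",
            "Consequence": "continuous component of the signal is lost",
        },
        "trigger": {
            "Setting": "trigger on the studied channel",
            "Non-Setting": "trigger on the wrong channel",
            "Consequence": "unstable trace on the screen",
        },
    },
    "Test": {"Symptom": "no trace at all", "Consequence": "device suspect"},
    "Limit": {"Non-Respect": "input voltage above rating",
              "Consequence": "damaged input stage"},
    "Physical-description": "grey box with two BNC inputs",
}

def consequences_of_missettings(material):
    """Every Consequence reachable through a Non-Setting of the material."""
    return [use["Consequence"]
            for use in material["Directions-for-Use"].values()]

print(consequences_of_missettings(oscilloscope))
```

Because diagnosis rules only traverse this fixed frame shape, the same rules apply unchanged to any material described this way, which is the dissociation between rules and materials the text emphasizes.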
On the one hand, the metaknowledge must control the local reasoning within the modules and, on the other hand, it must provide the control transfer between the various modules. The metaknowledge levels are not detailed and do not exist in a fixed way; the system itself chooses, at any given time, to which level the control is transmitted, dynamically and without any pre-existing order. The principal expert modules are the following ones:
- Tutorial Control Expertise: this high-level expertise tries to verify that each of the necessary conditions for a correct carrying out of the practical work is present. It principally uses the conceptual descriptions. From this level, metaknowledge can transfer the control to the theory control expert module, to the calculus control expert module or to the measurement control expert module;
- Problem Analysis Expertise: from information supplied by the student, this expertise tries to deduce the largest possible quantity of data on the real state of the manipulation. From this level, the control is transmitted to the tutorial control expertise;
- Signal Control Expertise: this high-level expertise is activated when the functioning of the studied signal seems to be in doubt. From this level, metaknowledge can guide investigations along three axes: signal existence problem; signal state problem; signal value problem.
- Material Diagnosis Expertise: this works following several steps directed by metaknowledge: finding the whole set of defective settings, organizing these causes according to their complexity, dialogue with the student making checks, troubleshooter tests, and control of the borderline cases of use.
The main rule applied in this expertise is:

KNOWING: the problem is X; there exists a device A, whose Consequence of the Non-Setting of a Setting of the Directions-for-Use is X;
THEN: the device A may be the source of the problem.

- Circuit Checking Expertise: this expertise enables the student to examine and thus to control his circuit. The way of control to be followed is explicitly found, step by step, in the domain description thanks to the circuit description attributes.
- Malfunctioning Checking Expertise: the order of control of the devices is deduced from their rate of breakdowns and from their difficulty of testing.

Some of these modules contain several levels of metaknowledge. The various expert modules must operate as if they were autonomous agents able to transmit the control to each other, each time it is necessary.

Interfaces

Different interfaces are necessary to give more user-friendliness to the relations between the student and the system. The first contact is made with a natural language interface, based on an ATN (Woods 70), that allows the student to explain the problem which occurs. In order to avoid a situation of misunderstanding, if some trouble appears with this interface, another one based on the technique of keywords and menus is used as a back-up.

Figure 2: Part of the circuit descriptive (the relaxation oscillator and its SIAM representation)

During the session, the explanations given by the system and the questions asked of the student are dynamically conceived thanks to a specific interface that uses predefined pieces of sentences linked to the objects manipulated by the system: rules and attributes.
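The main Material Diagnosis rule quoted above (a device A is a candidate source of problem X when the Consequence of the Non-Setting of one of its settings is X) amounts to a simple search over the descriptives. The sketch below uses hypothetical device data; it is not SIAM's actual rule engine.

```python
# A sketch of the KNOWING/THEN diagnosis rule over hypothetical descriptives:
# device A is a candidate source of problem X when the Consequence of the
# Non-Setting of one of its Directions-for-Use is X.

devices = {
    "oscilloscope": {"Directions-for-Use": {
        "coupling": {"Non-Setting": "AC coupling selected",
                     "Consequence": "no continuous component visible"}}},
    "generator": {"Directions-for-Use": {
        "amplitude": {"Non-Setting": "amplitude knob left at zero",
                      "Consequence": "no signal"}}},
}

def candidate_sources(problem, devices):
    """All (device, setting) pairs whose mis-setting would explain the problem."""
    return [(name, setting)
            for name, dev in devices.items()
            for setting, use in dev["Directions-for-Use"].items()
            if use["Consequence"] == problem]

print(candidate_sources("no signal", devices))  # → [('generator', 'amplitude')]
```

The Material Diagnosis Expertise would then order these candidates by complexity before dialoguing with the student, as the module description indicates.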
The general structure of the sentences is defined with some special actions contained in the rules. The specificities of a context are found in the messages linked to the descriptions of the domain, circuit, materials, etc. The possible answers to the questions asked by the system are given in a menu which is also dynamically conceived. At the end of a session, the student can give some comments which contain technical indications or appreciations concerning the SIAM system behaviour. The system presented above has been evaluated over one year (1988-1989) in one of the laboratories of the Institut Superieur d'Electronique de Paris (France), where students can use it in three domains: optics, electronics and electrotechnology. This real experimentation has been very valuable for the final touches to the system. The way the system works is satisfactory and corresponds well enough to the expert's behaviour. The student is able to catch the whole cognitive process followed by the expert, which enables him to understand his future problems better (examples of diagnosis sessions can be found in (Courtois 90a)). It is important to note that the student is normally seldom supplied with this kind of information in tutorials, because the assistant usually explains why it did not work once the solution is found out, but does not systematically specify how he has detected the origin of the problem (whence the experts' magical aspect). The amount of explanations, which do not depend on any artificial model of the student, must be precisely adapted so that the students should not consider the system as too talkative or too fast. During the system development, it was necessary to condense some descriptions which appeared to be too detailed or, on the contrary, to expand others, so that the precise origin of the problem could be found faster. The natural language interface must contain no errors because users would get tired of it very quickly.
A graphic interface could improve the user-friendliness even more for the circuit checking. Finally, the system fulfils its last aim, which is to create new tutorial subjects rapidly. As a matter of fact, modification or creation of a practical work subject, pedagogical aspects included, can be done and tested in a very short time: if all the devices needed are already described, it will take about one hour of work to describe a new part of a practical work session. The whole system's knowledge, descriptions and expertise, is stored in a single format close to SNARK's (Lauriere 88), that is to say, a production-rule-based system in predicate logic. This single representation enables the system to handle, in the same way, independent facts, structured descriptions, rules and metarules. SIAM can work on microcomputers and can thus be easily introduced in university laboratories where microcomputers are available. In this thesis research work, the adopted point of view consisted in giving the maximum of explicit knowledge to the system, so that it could control the development of its actions itself thanks to a large quantity of metaknowledge. The model-based representation of the most important part of this knowledge and the conception of general production rules are the principal characteristics of this work. They both offer the opportunity of reusability of knowledge and a facility of maintenance that allows the teachers to monitor the evolutions of the system themselves. The obtained results have justified these choices both in the system's efficiency, possible evolutions and adaptations, and also pedagogically speaking. In a domain where real needs exist, this work shows that the development of ITS based on the SIAM architecture is a workable answer.

References

Brown, J.S., Burton, R.R., De Kleer, J., 1981, Pedagogical, natural language and knowledge engineering techniques in SOPHIE I, II and III. Xerox Palo Alto Research Center.
Clancey, W.J., Letsinger, R., 1981, NEOMYCIN: Reconfiguring a rule-based expert system for application to teaching, 7th IJCAI 81, p. 829-836.
Clancey, W.J., 1986, From Guidon to Neomycin and Heracles in Twenty Short Lessons, ONR Final Report 1979-1985. Stanford Knowledge Systems Laboratory.
Courtois, J., 1990a, SIAM: a System that Adapts Easily to New Domains and which Teaches its Method, These de Doctorat d'Universite. Universite Paris VI.
Courtois, J., Moustafiades, J., 1990b, Human Being and Expert System: Which One Uses the Other?, COGNITIVA 90. Madrid.
Dague, P., Deves, P., Raiman, O., 1987, Troubleshooting when modeling is the trouble, AAAI 1987. Seattle.
De Kleer, J., Williams, B.C., 1989, Diagnosis with Behavioral Modes, 11th IJCAI 89, p. 1324-1330.
Lauriere, J.L., 1988, Intelligence Artificielle (tome 2): Representation des connaissances. Eyrolles. Paris.
Moustafiades, J., 1990, Formation au diagnostic technique: l'apport de l'intelligence artificielle. Masson. Paris.
Pitrat, J., 1990, Metaconnaissance, Futur de l'Intelligence Artificielle. Hermes. Paris.
Woods, W.A., 1970, Transition network grammars for natural language analysis, Communications of the ACM, Vol. 13.
STATIC: A Problem-Space Compiler for PRODIGY

Oren Etzioni
Department of Computer Science and Engineering, FR-35
University of Washington
Seattle, WA 98195
etzioni@cs.washington.edu

Abstract

Explanation-Based Learning (EBL) can be used to significantly speed up problem solving. Is there sufficient structure in the definition of a problem space to enable a static analyzer, using EBL-style optimizations, to speed up problem solving without utilizing training examples? If so, will such an analyzer run in reasonable time? This paper demonstrates that for a wide range of problem spaces the answer to both questions is "yes." The STATIC program speeds up problem solving for the PRODIGY problem solver without utilizing training examples. In Minton's problem spaces [1988], STATIC acquires control knowledge from twenty-six to seventy-seven times faster than, and speeds up PRODIGY up to three times as much as, PRODIGY/EBL. This paper presents STATIC's algorithms, derives a condition under which STATIC is guaranteed to achieve polynomial-time problem solving, and contrasts STATIC with PRODIGY/EBL.

Introduction

Prieditis [1988] and van Harmelen & Bundy [1988] have pointed to the affinity between Partial Evaluation (PE) [van Harmelen, 1989] and Explanation-Based Learning (EBL) [Dejong and Mooney, 1986, Mitchell et al., 1986], suggesting that an EBL-like process could be performed without utilizing training examples. These papers raise a number of interesting questions. The training example given to EBL helps to focus its learning process. Can PE, which does not utilize training examples, be performed in a reasonable amount of time on standard EBL domain theories? van Harmelen & Bundy's illustrative partial evaluator will not terminate on recursive programs. Yet standard EBL theories (e.g., the ones used by PRODIGY/EBL [Minton, 1988]) are recursive. Can PE handle recursion appropriately? EBL is widely used for acquiring search control knowledge [Minton et al., 1989a].
How will control knowledge obtained via PE compare with that acquired by EBL? In the past, PE has been applied to inference tasks. Can PE be extended to analyze the goal interactions (e.g., goal clobbering) found in planning tasks? Finally, what guarantees can we make regarding STATIC's impact on problem-solving time? This paper addresses these questions by describing STATIC, a problem-space analyzer which utilizes PE, and comparing its performance with that of PRODIGY/EBL, a state-of-the-art EBL system. The following section provides some background on PRODIGY and PRODIGY/EBL. The subsequent sections present STATIC, and derive a condition under which STATIC is guaranteed to achieve polynomial-time problem solving. STATIC's performance is compared with that of PRODIGY/EBL, and STATIC is contrasted with standard partial evaluators and other static analyzers. The paper concludes by considering the lessons learned from the STATIC case study.

The PRODIGY System

PRODIGY is a domain-independent means-ends analysis problem solver [Minton et al., 1989a] that is the substrate for EBL, STATIC, and a variety of learning mechanisms [Carbonell and Gil, 1990, Joseph, 1989, Knoblock, 1991, Knoblock et al., 1991, Veloso and Carbonell, 1990]. A complete description of PRODIGY appears in [Minton et al., 1989b]. The bare essentials follow. PRODIGY's default search strategy is depth first. The search is carried out by repeatedly executing the following decision cycle: choose a search-tree node to expand, choose a subgoal at the node, choose an operator relevant to achieving the subgoal, and choose bindings for the operator. Control rules can direct PRODIGY's choices at each step in this decision cycle. A sample control rule appears in Table 1. PRODIGY matches the control rule against its current state. If the antecedent of the control rule matches, PRODIGY abides by the recommendation in the consequent.

PRODIGY/EBL

PRODIGY/EBL acquires control rules for PRODIGY by analyzing its problem-solving traces. PRODIGY/EBL's primary target concepts are success, failure, and goal interaction. PRODIGY/EBL finds instances of these concepts in PRODIGY's traces and derives operational sufficient conditions for the concepts via EBL.

ETZIONI 533

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

(REJECT-UNSTACK
  (if (and (current-node N)
           (current-goal N (holding Block))
           (candidate-op N unstack)
           (known N (not (on Block Block2)))))
  (then (reject operator unstack)))

Table 1: A Blocksworld control rule telling PRODIGY to reject the operator UNSTACK when the block to be unstacked is not on any other block.

These conditions become the antecedents of PRODIGY/EBL's control rules, and the target concept determines the recommendation expressed in the consequent of the rule. The rule in Table 1, for example, is formed by analyzing the failure of the Blocksworld operator UNSTACK.

STATIC

STATIC is a program that acquires search control knowledge for PRODIGY by statically analyzing PRODIGY's problem space definitions. Like PRODIGY/EBL, STATIC analyzes success, failure and goal interaction. Unlike PRODIGY/EBL, STATIC only learns from nonrecursive explanations and does not require training examples, utility evaluation, or rule compression. This paper briefly reviews STATIC's graph representation and algorithms for analyzing failure and success (introduced in [Etzioni, 1990b]) and focuses on STATIC's algorithms for analyzing goal interaction and on a detailed comparison with PRODIGY/EBL [1]. The main new result presented is a comparison of the time required to generate control knowledge via STATIC with the time required to train PRODIGY/EBL. In PRODIGY/EBL's problem spaces, STATIC generates control knowledge from twenty-six to seventy-seven times faster than PRODIGY/EBL.
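A control rule like the one in Table 1 can be read as a simple predicate over PRODIGY's current search state. The following is an illustrative sketch of how such a reject rule fires, not PRODIGY's actual implementation; the fact representation is an assumption.

```python
# Illustrative sketch (not PRODIGY's actual code) of a Table-1-style reject
# rule: reject UNSTACK when the current goal is (holding Block) and Block is
# known not to be on any other block. Facts are tuples like ("on", "A", "B").

def reject_unstack(current_goal, candidate_op, known_facts):
    """Return True when the rule fires, i.e. UNSTACK should be rejected."""
    if candidate_op != "unstack" or current_goal[0] != "holding":
        return False
    block = current_goal[1]
    on_something = any(f[0] == "on" and f[1] == block for f in known_facts)
    return not on_something  # fire when the block is not on anything

facts = {("on", "A", "B"), ("clear", "C")}
print(reject_unstack(("holding", "C"), "unstack", facts))  # → True (C not on anything)
print(reject_unstack(("holding", "A"), "unstack", facts))  # → False (A is on B)
```

Whether such a rule is learned from a trace (PRODIGY/EBL) or derived statically (STATIC), the matching cost at problem-solving time is what determines its utility, a point the comparison below returns to.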
STATIC's input is a problem space definition; its output is a set of control rules that are intended to speed up PRODIGY's problem solving. STATIC's operation can be divided into four phases. First, it maps the problem space definition to a set of Problem Space Graphs (PSGs), one for each potential subgoal [2]. Second, it labels each PSG node and computes the conditions under which subgoaling or back-chaining on the node would lead to failure. Third, it creates control rules whose antecedents are composed of these conditions (or their negation). The rules guide PRODIGY's operator and bindings choices. Finally, STATIC searches for goal clobbering and prerequisite violation by analyzing PSG pairs, and forms goal ordering and operator preference rules based on this analysis.

[1] STATIC was introduced (in approximately one page) in [Etzioni, 1990b] to demonstrate PRODIGY/EBL's reliance on nonrecursive problem space structure.
[2] A potential subgoal is an uninstantiated literal found in the effects of an operator. Potential subgoals are enumerated by scanning the problem space definition.

Problem Space Graphs (PSGs)

A PSG is an AND/OR graph that represents the goal/subgoal relationships in a problem space. STATIC constructs PSGs by partially evaluating the problem space definition; PSGs are independent of any state information. This section describes PSGs in more detail. A pseudo-code description of STATIC's algorithm for constructing PSGs appears in Table 2. A PSG consists of disjoint subgraphs, each of which is rooted in a distinct subgoal literal. Each subgraph is derived by symbolically back-chaining on the problem space's operators from the root. The root literal is connected via OR-links to all the operators whose effects match the literal, and each operator is connected via AND-links to its preconditions. Thus, the PSG nodes are an alternating sequence of (sub)goals and operators; the PSG edges are either AND-links or OR-links.
Figure 1 depicts the Blocksworld PSG subgraph rooted in (holding V). The graph is directed and acyclic. Two operators that share a precondition have AND-links to the same node, so the graph is not a tree.

Figure 1: The holding PSG.

A successful match between an operator's effects and a subgoal imposes codesignation constraints between the arguments to the subgoal literal and the operator's variables. The operator's preconditions are partially instantiated to reflect these constraints. For example, since the goal is (holding V), UNSTACK's first precondition is partially instantiated to (on V V2). The expansion of the graph is terminated under well-defined criteria discussed in [Etzioni, 1990b]. The most important criterion is the termination of PSG expansion whenever predicate repetition (on a path from a node to the root) is detected. This criterion suffices to prove that every PSG is finite [Etzioni, 1990a].

Input: operators, constraints on legal states, and an uninstantiated goal literal g.
Output: A PSG for g (e.g., Figure 1).
The goal stack of a node n in the PSG, goal-stack(n), is the unique set of subgoal nodes on a path from n to the root. The function holds(p, goal-stack(p)) determines whether the precondition p necessarily holds given its goal stack and the constraints on legal states. ops(g) refers to the set of operators that can potentially achieve g.
Algorithm:
1. Create a subgoal node for g.
2. For each operator o in ops(g):
(a) Partially instantiate o to og.
(b) Create an operator node for og and OR-link it to g.
(c) If any of og's preconditions appears on its goal stack, then create a subgoal node corresponding to that precondition, label it gs-cycle, and AND-link it to og.
(d) Else, for each precondition p in precs(og):
i. If a previously expanded sibling operator shares p, then a node for p already exists. AND-link og to the existing node.
ii. Otherwise, create a new node p and AND-link it to og.
iii.
If holds(p, goal-stack(p)), then label p holds.
iv. If p's predicate appears on og's goal stack, then label p unknown (i.e., recursion could occur here).
v. If no operator matches p, then label p unachievable.
vi. Else, return to step 1 with p as the current subgoal g.

Table 2: Constructing a PSG.

Having constructed a set of PSGs corresponding to a problem space, STATIC proceeds to label each internal node in the PSGs with one of the labels failure, success, or unknown, indicating the eventual outcome of problem solving when PRODIGY reaches a corresponding node in its search. The labeling is performed using a three-valued "label logic" where the label of an operator node is the conjunction of its preconditions' labels and the label of a precondition node is the disjunction of the labels of the operators whose effects match the precondition. PRODIGY control rules that guide operator and bindings choices are constructed based on this labeling process. See [Etzioni, 1990a] for more details.

Analyzing Goal Interactions in STATIC

Goal interactions such as goal clobbering and prerequisite violation cause PRODIGY to backtrack. STATIC anticipates these goal interactions by analyzing PSG pairs, and generates goal ordering and operator preference control rules that enable PRODIGY to avoid goal interactions. STATIC is not complete in that it will not anticipate all possible goal interactions, but it is correct in that the interactions that STATIC anticipates would in fact occur, if it were not for STATIC's control rules. STATIC anticipates goal interactions that will necessarily occur. That is, STATIC reports that two goals interact, under certain constraints, only if the goals interact in every state that obeys the constraints. An alternative optimization strategy (taken by Knoblock's Alpine [1991], for example) is to anticipate possible goal interactions.
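The three-valued "label logic" used in STATIC's labeling phase can be sketched in a few lines. This is an illustrative reading, assuming the labels combine like Kleene three-valued truth values under the ordering failure < unknown < success; the node names and data are hypothetical.

```python
# Illustrative three-valued label propagation over a tiny PSG fragment:
# an operator node takes the conjunction (worst label) of its preconditions,
# a goal/precondition node the disjunction (best label) of its achieving
# operators, under failure < unknown < success.

ORDER = {"failure": 0, "unknown": 1, "success": 2}

def conj(labels):
    """Operator node: conjunction of its preconditions' labels."""
    return min(labels, key=ORDER.get) if labels else "success"

def disj(labels):
    """Precondition node: disjunction of its achieving operators' labels."""
    return max(labels, key=ORDER.get) if labels else "failure"

# operator -> labels already computed for its precondition nodes (hypothetical)
op_precs = {"unstack": ["success", "failure"], "pickup": ["success", "unknown"]}

def label_goal(ops):
    """Label a goal node from the operators that can achieve it."""
    return disj([conj(op_precs[o]) for o in ops])

print(label_goal(["unstack", "pickup"]))  # → unknown
```

A goal labeled failure under every achieving operator yields a reject rule; here the unknown label reflects the possible recursion noted in step iv of Table 2.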
While STATIC analyzes both goal clobbering and prerequisite violation, this section considers only goal clobbering. The section begins by formally defining goal clobbering. The section then presents STATIC's algorithm for anticipating goal clobbering by analyzing PSGs. Two important simplifications are made in the presentation for the sake of brevity and clarity. First, the presentation is restricted to propositional goals even though STATIC actually analyzes variablized goals and codesignation constraints between them. Second, although necessary goal interactions are sometimes "conditional" (that is, occur necessarily only when the set of possible states is constrained appropriately), the discussion does not describe how STATIC derives these constraints. See [Etzioni, 1990a] for a complete discussion. A plan for achieving a goal g is said to clobber a protected goal pg when pg is deleted in the process of executing the plan. In Minton's machine-shop scheduling problem space (the Schedworld), for example, punching a hole through an object damages any polish on the object's surface. Thus, the goal of having a polished surface is clobbered by punching a hole through the object. The notation g|s denotes that the subgoal g holds in the state s, and c(s) denotes the state that results from applying the operator sequence c to the state s. The notation □C(g, pg) denotes that the goal g necessarily clobbers the protected goal pg.

□C(g, pg) ≡ ∀s s.t. pg|s, and ∀c s.t. g|c(s), we have that ¬pg|c(s).

Let E(g, pg) denote the necessary effects of achieving the subgoal g from any state in which the protected subgoal pg holds. The necessary effects are the set of literals that necessarily hold in the current state after g is achieved by an arbitrary plan from any state in which pg holds.

E(g, pg) = {e | ∀s s.t. pg|s, and ∀c s.t. g|c(s), we have that e|c(s)}

E(g, pg) is precisely the set of interest when considering whether one goal will clobber another.
g clobbers pg if and only if pg is negated as a necessary effect of achieving g.

Lemma 1: □C(g, pg) iff ∃e ∈ E(g, pg) s.t. e ⊃ ¬pg. [3]

STATIC anticipates goal interactions by computing a subset of E(g, pg) denoted by Ê(g). The necessary effects of achieving a goal are the necessary effects shared by all the operators that could potentially achieve the goal. The necessary effects (computed by STATIC) of an operator o that achieves g are denoted by Ê_o(g).

Ê(g) = ∩_{o ∈ ops(g)} Ê_o(g).

Thus, a goal is always one of its necessary effects (i.e., ∀g, we have that g ∈ Ê(g)). The necessary effects of an operator are, in essence, the union of its effects and the necessary effects of its preconditions. When two effects are potentially contradictory, only the one that would occur later during problem solving is retained.

Ê_o(g) = ∪_{p ∈ precs(o)} Ê(p) ∪ effects(o).

STATIC's computation is correct. That is,

Theorem 1: Ê(g) ⊆ E(g, pg).

To illustrate the algorithm's operation, consider the Schedworld goal (shape Obj cylindrical). Only two operators can potentially achieve this goal: LATHE and ROLL. Both operators delete the surface condition of the object they are applied to. As a result, achieving (shape Obj cylindrical) necessarily clobbers (surface Obj polished). STATIC detects this necessary side-effect by intersecting Ê_lathe and Ê_roll. The rule formed based on the analysis appears in Table 3.

(PREFER-SHAPE
  (if (and (current-node N)
           (candidate-goal N (shape V2 cylindrical))
           (candidate-goal N (surface V2 polished))))
  (then (prefer goal (shape V2 cylindrical)
                     (surface V2 polished))))

Table 3: A goal ordering rule produced by STATIC.

[3] Proofs of the results are in [Etzioni, 1990a].

Comparing STATIC with PRODIGY/EBL

This section compares the impact, the time required to generate control knowledge, and the scope of STATIC and PRODIGY/EBL [4]. Although STATIC outperforms PRODIGY/EBL in the problem spaces studied, this will not always be the case.
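The necessary-effects computation above (Ê(g) as the intersection of Ê_o(g) over all achieving operators, and Ê_o(g) as the union of o's effects with its preconditions' necessary effects) can be sketched directly. The Schedworld operator data below is illustrative, and the retain-the-later-of-two-contradictory-effects refinement is omitted.

```python
# Illustrative sketch of STATIC's necessary-effects computation (hypothetical
# Schedworld data; the "retain the later of two contradictory effects" rule
# is omitted for brevity). Negated literals are written ("not", literal).

effects = {  # operator -> its effects
    "lathe": {("shape", "cylindrical"), ("not", ("surface", "polished"))},
    "roll":  {("shape", "cylindrical"), ("not", ("surface", "polished"))},
}
achievers = {("shape", "cylindrical"): ["lathe", "roll"]}  # ops(g)
precs = {"lathe": [], "roll": []}  # preconditions, empty in this tiny example

def op_necessary_effects(o):
    """Ê_o(g): the operator's effects plus its preconditions' necessary effects."""
    eff = set(effects[o])
    for p in precs[o]:
        eff |= necessary_effects(p)
    return eff

def necessary_effects(goal):
    """Ê(g): intersection of Ê_o(g) over every operator that can achieve g."""
    ops = achievers.get(goal, [])
    if not ops:
        return set()
    result = op_necessary_effects(ops[0])
    for o in ops[1:]:
        result &= op_necessary_effects(o)
    return result

def clobbers(goal, protected):
    """Goal clobbers the protected goal iff its negation is a necessary effect."""
    return ("not", protected) in necessary_effects(goal)

print(clobbers(("shape", "cylindrical"), ("surface", "polished")))  # → True
```

Because both LATHE and ROLL delete the polish, the negated surface literal survives the intersection, which is exactly the condition Lemma 1 requires for a necessary clobbering and the basis for the PREFER-SHAPE rule in Table 3.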
PRODIGY/EBL can exploit the distribution of problems encountered by PRODIGY whereas STATIC cannot and, unlike STATIC, PRODIGY/EBL can be guided by a carefully chosen training-example sequence.

Experimental Methodology

In his thesis [1988], Minton tested PRODIGY/EBL on one hundred randomly generated problems in three problem spaces (the Blocksworld, an extended version of the STRIPS problem space [Fikes et al., 1972], and a machine-shop scheduling problem space) and showed that PRODIGY/EBL is able to significantly speed up PRODIGY. These problem spaces, as well as a modified version of the Blocksworld (the ABworld) constructed to foil PRODIGY/EBL, are used to compare STATIC and PRODIGY/EBL. PRODIGY is run on Minton's test-problem sets under three experimental conditions: guided by no control rules, guided by PRODIGY/EBL's rules, and guided by STATIC's rules. A bound of 150 CPU seconds was imposed on the solution time for each problem. The times reported below are slightly smaller than the times in [Etzioni, 1990b] due to minor improvements to the systems involved.

Impact

As Table 4 demonstrates, STATIC is able to speed up PRODIGY more than PRODIGY/EBL in each of Minton's problem spaces and in the ABworld. This section reports on problem-solving time, number of nodes expanded, and average time to expand a node. No significant differences were found in the length of the solutions produced by the systems. Since neither system attempts to reduce solution length, this is not surprising.

Table 4: Total problem-solving time in CPU seconds.

In the Schedworld, STATIC was unable to solve one problem within the 150-CPU-second time bound, compared with eight problems for PRODIGY/EBL and twenty-three for PRODIGY.
The total number of nodes expanded by PRODIGY/EBL and STATIC is fairly close, compared with the number of nodes expanded by PRODIGY given no control rules (Table 5). The number of nodes expanded indicates the ability of each system to curtail PRODIGY's search. As the table shows, STATIC actually expanded more nodes than PRODIGY/EBL in the Blocksworld and the Stripsworld. Since STATIC's rules are cheaper-to-match than PRODIGY/EBL's (Table 6), however, STATIC's overall search time was consistently lower than PRODIGY/EBL's. STATIC's rules are relatively cheap-to-match for two reasons. First, STATIC does not learn from recursive explanations, which tend to yield control rules that are both more specific and more expensive-to-match than their nonrecursive counterparts. Second, PRODIGY/EBL's control rules often retain incidental features of its training examples that do not appear in STATIC's rules. Although EBL only retains the aspects of its training example that appear in the weakest preconditions of its explanation, often there is a large set of possible explanations associated with any example, explanations whose weakest preconditions differ greatly. Frequently, STATIC's "explanations" are more compact than those produced by PRODIGY/EBL even when both are nonrecursive.

[4] It is important to note that PRODIGY/EBL predates STATIC by several years and that STATIC is based on an in-depth study of PRODIGY/EBL.

Table 7: Learning time in CPU seconds.

Table 8 decomposes PRODIGY/EBL's running time into three components: time to solve training examples, time to construct proofs based on the training examples' traces, and time to perform utility evaluation. It's interesting to note that utility evaluation is quite costly, and that STATIC's total running time is consistently smaller than PRODIGY/EBL's proof construction time.

Table 5: Total number of nodes expanded.

Table 6: Average CPU seconds per node.
The cen- ter and right columns reflect the cost of matching PRODIGY/EBL and STATIC'S rule sets respectively. Cost of Learning The time required for learning is an important as- pect of a learning method because, when learning time scales badly with problem size, learning can become prohibitively expensive on problems of interest. In the problem spaces studied, STATIC was able to gen- erate control knowledge twenty six to four hundred and sixty three times faster than PRODIGY/EBL. Ta- ble 7 compares the learning time for the two systems. PRODIGY/EBL was “trained” by following the learning procedure outlined in Minton’s thesis. PRODIGY/EBL'S learning time includes the time re- quired to solve the training examples whose problem- solving traces are analyzed by PRODIGY/EBL. Table 8 Table 8: PRODIGY/EBL'S learning time decomposed into components: time to solve training examples, time to construct proofs based on the problem-solving trace, and time to perform utility evaluation. Why did STATIC run so much faster than PRODIGY/EBL? STATIC traverses the PSGs for the problem space, whereas PRODIGY/EBL traverses PRODIGY'S problem solving trace for each of the train- ing examples. Since STATIC's processing consists of several traversals over its PSGs (construction, label- ing, analysis of necessary effects, etc.), its running time is close to linear in the number of nodes in its PSGs. Since PRODIGY/EBL analyzes PRODIGY'S traces, it learning time scales with the number of nodes in PRODIGY's traces. We can predict, therefore, that the number of trace nodes visited by PRODIGY/EBL is much larger than the number of nodes in STATIC'S PSGs. Table 9 confirms this prediction. In fact, the ratio of trace nodes to PSG nodes is almost per- fectly correlated (correlation=0.999) with the ratio of PRODIGY/EBL's n.KIning hrM? t0 STATIC'% Table 9: PSG nodes versus trace nodes. 
It is reasonable to believe, therefore, that STATIC will continue to be significantly faster than PRODIGY/EBL when the ratio between trace size, summed over PRODIGY/EBL's training examples, and PSG size remains large. PSGs are more compact than traces because the PSGs are partially instantiated, and because they do not contain nodes corresponding to recursive expansions.

ETZIONI 537

Scope

STATIC utilizes target concepts based on those in PRODIGY/EBL, but only learns from nonrecursive explanations. Thus, STATIC will not acquire control knowledge that can only be learned from recursive explanations. Fortunately for STATIC, the knowledge required to control search in the problem spaces studied can usually be derived from nonrecursive explanations using STATIC's target concepts. The target concepts can frequently explain PRODIGY's behavior in multiple ways, some of which are recursive and some of which are not. PRODIGY/EBL acquires control rules from each of these explanations, causing it to generate redundant control knowledge. STATIC's more conservative policy of only learning from nonrecursive explanations is advantageous in this case; STATIC learns fewer redundant rules than PRODIGY/EBL.

Because it performs static analysis, the range of proofs utilized by STATIC is narrower than that of PRODIGY/EBL. STATIC does not analyze state cycles, for example, because, in contrast to PRODIGY's trace, no unique world state is associated with the PSG's nodes. Thus, merely detecting potential state cycles would be more difficult for STATIC than for PRODIGY/EBL.

STATIC only analyzes binary goal interactions as opposed to N-ary ones. The decision to analyze binary goal interactions is analogous to STATIC's policy of analyzing only nonrecursive explanations. In both cases additional coverage can be obtained by analyzing a broader class of explanations, but STATIC chooses to focus on the narrower class to curtail the costs of acquiring and utilizing control knowledge.
In the problem spaces studied, STATIC acquires more effective control knowledge than PRODIGY/EBL despite (or, perhaps, due to) STATIC's narrower scope.

Polynomial-Time Problem Solving

By definition, polynomial-time problem solving can only be achieved for problem spaces that are in the complexity class P. No learning method will work for all problems in P, because finding a polynomial-time algorithm for an arbitrary problem in P is undecidable. A distribution-free sufficient condition for the success of STATIC (and EBL) can be derived, however, based on the notion of a fortuitous recursion. A recursion is said to be fortuitous when any plan that contains the recursion will succeed if the nonrecursive portion of the plan succeeds. When all problem-space recursions are fortuitous, nonrecursive proofs of failure suffice to eliminate backtracking, and STATIC can achieve polynomial-time problem solving.

Three additional definitions are required to state this idea precisely. Let a Datalog problem solver be a problem solver whose operator preconditions and effects are restricted to Datalog literals.5 A goal interaction between two subgoals is said to be binary when it occurs independent of other subgoals. A goal interaction is said to be nonrecursive if it can be detected using a nonrecursive proof.

Theorem 2 STATIC will achieve polynomial-time problem solving for a Datalog problem solver when:
1. All recursions are fortuitous.
2. All goal interactions are binary and nonrecursive.
3. Solution length is polynomial in state size.

When the above conditions hold in a given problem space, STATIC's exhaustive generation of "nonrecursive control knowledge" guarantees that it will eliminate backtracking. Since, by assumption, solution length is polynomial in state size, it follows that no more than a polynomial number of search-tree nodes will be expanded.
Since the cost of matching STATIC's control knowledge at each node is polynomial in the state size (see [Etzioni, 1990a, Chapter 3]), STATIC will achieve polynomial-time problem solving. EBL will achieve polynomial-time problem solving when it has the appropriate target concepts (failure and goal interaction) and it encounters the appropriate training examples; namely, the ones that enable EBL to eliminate backtracking using nonrecursive proofs.

Related Work

Although STATIC utilizes partial evaluation to construct its PSGs, STATIC goes beyond standard partial evaluators in several important ways. First, STATIC analyzes failure and goal interaction whereas standard partial evaluators do not. Second, STATIC derives various subgoal-reordering rules from this analysis. Such reordering transformations are not part of the usual arsenal of partial evaluators. Third, STATIC introduces a novel criterion for choosing between multiple target concepts: in the absence of knowledge about problem distribution, learn only from the target concepts that yield nonrecursive proofs. This criterion addresses the problem of recursive explanations for STATIC, a topic of active research for the PE community [van Harmelen, 1989, Sestoft, 1988].

Unlike STATIC, most partial evaluators do not analyze goal interactions. The remainder of this section compares STATIC with other problem space analyzers that do attempt to anticipate and avoid goal interactions.

5Datalog is the function and negation free subset of pure Prolog [Ullman, 1989]. A Datalog problem solver represents a slight idealization of the PRODIGY problem solver.

REFLECT Dawson and Siklossy [1977] describe REFLECT, an early system that engaged in static problem space analysis. REFLECT ran in two phases: a preprocessing phase in which conjunctive goals were analyzed and macro-operators were built, followed by a problem-solving phase in which REFLECT engaged in a form of backward-chaining.
During its preprocessing phase REFLECT analyzed operators to determine which pairs of achievable literals (e.g., (holding X) and (arm-empty)) were incompatible. This was determined by symbolically backward-chaining on the operators up to a given depth bound. A graph representation, referred to as a goal-kernel graph, was constructed in the process. The nodes of the goal-kernel graph represent sets of states, and its edges represent operators. REFLECT also constructed macro-operators using heuristics such as "requiring that at least one variable in both operators correspond at all times."

Although the actual algorithms employed by REFLECT and STATIC are different, the spirit of STATIC is foreshadowed by this early system. In particular, STATIC's PSG is reminiscent of the goal-kernel graph, both systems analyze successful paths, and both systems analyze pairs of achievable literals. One of the main advances found in STATIC is its theoretically motivated and well-defined criteria for terminating symbolic back-chaining, particularly its avoidance of recursion. Another advance is the formal specification and computation of necessary effects and prerequisites to facilitate goal reordering. Finally, unlike STATIC, REFLECT does not actually reorder goals. REFLECT merely terminates backward-chaining when an incompatible set of goals is reached.

ALPINE Knoblock's ALPINE [1991] generates hierarchies of abstract problem spaces by statically analyzing goal interactions. ALPINE maps the problem space definition to a graph representation in which a node is a literal, and an edge from one literal to another means that the first literal can potentially interact with the latter. Knoblock's guarantees on ALPINE's behavior are analogous to my own: I compute a subset of the necessary goal interactions, and Knoblock computes a superset of the possible goal interactions. Both algorithms are correct and conservative.
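As a toy illustration of this style of analysis (not Knoblock's actual algorithm), one crude over-approximation records a potential interaction from literal A to literal B whenever some operator that adds A also adds or deletes B as a side effect. The operator encoding and literal names below are invented for this sketch:

```python
# Toy sketch of an ALPINE-style potential-interaction graph. Each operator
# is a pair (adds, deletes) of literal sets; an edge (a, b) records that
# achieving a can potentially interact with b. This is a deliberately crude
# over-approximation, not Knoblock's criterion.

def interaction_graph(operators):
    """Return the set of directed potential-interaction edges."""
    edges = set()
    for adds, deletes in operators:
        for a in adds:
            # every other literal the operator touches is a potential
            # interaction for a
            for b in (adds | deletes) - {a}:
                edges.add((a, b))
    return edges

# one Blocksworld-flavored operator: picking up x adds holding-x and
# deletes arm-empty and on-table-x
ops = [({"holding-x"}, {"arm-empty", "on-table-x"})]
g = interaction_graph(ops)
assert ("holding-x", "arm-empty") in g
```

Like ALPINE, such an analysis computes a superset of the possible interactions: it may report interactions that never actually occur, but it misses none of this form, which is what makes it safe to use conservatively.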
The ABSOLVER system [Mostow and Prieditis, 1989] contains a transformation that computes a superset of possible goal interactions as well.

Universal Plans Schoppers [1989] derives "universal plans" from problem space descriptions. Universal plans are, essentially, sets of rules that tell an agent what to do in any given state. Schoppers partially evaluates the problem space definition to derive universal plans. Like STATIC, Schoppers only considers pairwise goal interactions. Unlike STATIC, he ignores the issues of recursion and binding-analysis for partially instantiated goals by only considering tightly circumscribed problem spaces such as the 3-block Blocksworld, where recursion is bounded and variables are fully instantiated. Instead, Schoppers focuses on handling conditional effects, nonlinear planning, and incorrect world models.

Problem-Specific Methods Whereas STATIC analyzes the problem space definition, some algorithms analyze individual problems. Whereas STATIC is only run once per problem space, the algorithms described below are run anew for each problem. Since the algorithms use problem-specific information, they are often able to make strong complexity and completeness guarantees, presenting a tradeoff between problem-space analysis and problem-specific analysis. This tradeoff has not been studied systematically.

Chapman's TWEAK [1987] utilizes an algorithm for determining the necessary/possible effects of a partially specified plan. The input to the algorithm is an initial state and the plan. The output of the algorithm is a list of the plan's necessary (occurring in all completions of the plan) and possible (occurring in some completion of the plan) effects. The algorithm runs in time that is polynomial in the number of steps in the plan. Cheng and Irani [1989] describe an algorithm that, given a goal conjunction, computes a partial order on the subgoals in the goal conjunction.
If a problem solver attacks the problem's subgoals according to the specified order, it will not have to backtrack across subgoals. The algorithm runs in O(n^3) time, where n is the number of subgoals. Finally, Knoblock's ALPINE performs some initial preprocessing of the problem space definition, but then develops considerably stronger constraints on possible goal interactions by analyzing individual problems.

Conclusion

This paper addresses the questions posed at the outset by means of a case study. Partial evaluation à la STATIC can be performed in reasonable time on PRODIGY/EBL's domain theories. The problem of recursion is overcome by learning only from nonrecursive explanations utilizing multiple target concepts. The control knowledge derived by STATIC is better than that acquired by PRODIGY/EBL, and was generated considerably faster. Finally, STATIC's analysis of goal interactions demonstrates that partial evaluation can be applied to planning as well as inference tasks.

As the previous section indicates, STATIC is only one point in the space of static problem space analyzers. Consider Knoblock's ALPINE [1991] as a point of contrast. STATIC is run once per problem space whereas ALPINE preprocesses individual problems. STATIC analyzes necessary goal interactions whereas ALPINE analyzes possible goal interactions. STATIC outputs control rules whereas ALPINE outputs abstraction hierarchies. The differences between the two systems suggest that the space of static analyzers is large and diverse. The demonstrated success of the two systems indicates the potential of static problem space analysis as a tool for deriving control knowledge. Ultimately, this success is not surprising. Powerful optimizing compilers have been developed for most widely-used programming languages. There is no reason to believe that problem-space-specification languages will be an exception.
Acknowledgments

This paper describes research done primarily at Carnegie Mellon University's School of Computer Science. The author was supported by an AT&T Bell Labs Ph.D. Scholarship. Thanks go to the author's advisor, Tom Mitchell, and the members of his thesis committee, Jaime Carbonell, Paul Rosenbloom, and Kurt VanLehn, for their contributions to the ideas herein. Special thanks go to Steve Minton, whose thesis work made studying PRODIGY/EBL possible, and to Ruth Etzioni for her insightful comments.

References

Carbonell, J. G. and Gil, Y. 1990. Learning by experimentation: The operator refinement method. In Machine Learning, An Artificial Intelligence Approach, Volume III. Morgan Kaufmann, San Mateo, CA. 191-213.

Chapman, David 1987. Planning for conjunctive goals. Artificial Intelligence 32(3):333-378.

Cheng, Jie and Irani, Keki B. 1989. Ordering problem subgoals. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, Michigan. Morgan Kaufmann. 931-936.

Dawson, Clive and Siklossy, Laurent 1977. The role of preprocessing in problem-solving systems. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence.

DeJong, G. F. and Mooney, R. J. 1986. Explanation-based learning: An alternative view. Machine Learning 1(1).

Etzioni, Oren 1990a. A Structural Theory of Explanation-Based Learning. Ph.D. Dissertation, Carnegie Mellon University. Available as technical report CMU-CS-90-185.

Etzioni, Oren 1990b. Why Prodigy/EBL works. In Proceedings of the Eighth National Conference on Artificial Intelligence.

Fikes, R.; Hart, P.; and Nilsson, N. 1972. Learning and executing generalized robot plans. Artificial Intelligence 3(4).

Joseph, Robert L. 1989. Graphical knowledge acquisition. In Proceedings of the Fourth Knowledge Acquisition For Knowledge-Based Systems Workshop, Banff, Canada.

Knoblock, Craig; Minton, Steve; and Etzioni, Oren 1991.
Integrating abstraction and explanation-based learning in Prodigy. In Proceedings of the Ninth National Conference on Artificial Intelligence.

Knoblock, Craig A. 1991. Automatically Generating Abstractions for Problem Solving. Ph.D. Dissertation, Carnegie Mellon University.

Minton, Steven; Carbonell, Jaime G.; Knoblock, Craig A.; Kuokka, Daniel R.; Etzioni, Oren; and Gil, Yolanda 1989a. Explanation-based learning: A problem-solving perspective. Artificial Intelligence 40:63-118. Available as technical report CMU-CS-89-103.

Minton, Steven; Knoblock, Craig A.; Kuokka, Daniel R.; Gil, Yolanda; Joseph, Robert L.; and Carbonell, Jaime G. 1989b. Prodigy 2.0: The manual and tutorial. Technical Report CMU-CS-89-146, Carnegie Mellon University.

Minton, Steven 1988. Learning Effective Search Control Knowledge: An Explanation-Based Approach. Ph.D. Dissertation, Carnegie Mellon University. Available as technical report CMU-CS-88-133.

Mitchell, Tom M.; Keller, Rich; and Kedar-Cabelli, Smadar 1986. Explanation-based generalization: A unifying view. Machine Learning 1(1).

Mostow, Jack and Prieditis, Armand E. 1989. Discovering admissible heuristics by abstracting and optimizing: a transformational approach. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence.

Prieditis, A. E. 1988. Environment-guided program transformation. In Proceedings of the AAAI Spring Symposium on Explanation-Based Learning.

Schoppers, Marcel Joachim 1989. Representation and Automatic Synthesis of Reaction Plans. Ph.D. Dissertation, University of Illinois at Urbana-Champaign. Available as technical report UIUCDCS-R-89-1546.

Sestoft, Peter 1988. Automatic call unfolding in a partial evaluator. In Bjorner, D.; Ershov, A. P.; and Jones, N. D., editors 1988, Partial Evaluation and Mixed Computation. Elsevier Science Publishers. Workshop Proceedings.

Ullman, Jeffrey D. 1989. Database and Knowledge-base Systems, Volume I. Computer Science Press.
van Harmelen, Frank and Bundy, Alan 1988. Explanation-based generalisation = partial evaluation. Artificial Intelligence 36. Research Note.

van Harmelen, Frank 1989. The limitations of partial evaluation. In Logic-Based Knowledge Representation. MIT Press, Cambridge, MA. 87-112.

Veloso, Manuela M. and Carbonell, Jaime G. 1990. Integrating analogy into a general problem-solving architecture. In Intelligent Systems. Ellis Horwood Limited.
SteppingStone: An Empirical and Analytical Evaluation

David Ruby and Dennis Kibler
Department of Information & Computer Science
University of California, Irvine
Irvine, CA 92717 U.S.A.
druby@ics.uci.edu

Abstract

Decomposing a difficult problem into simpler subproblems is a classic problem solving technique. Unfortunately, the most difficult subproblems can be as difficult, if not more difficult, than the original problem. This is not an obstacle to problem solving if the difficult subproblems recur in other problems. If a difficult subproblem recurs often, then its solution need only be learned once and reused. SteppingStone is a learning problem solver that decomposes a problem into simple and difficult-but-recurring subproblems. It solves the simple subproblems with an inexpensive constrained problem solver. To solve the difficult subproblems, SteppingStone uses an unconstrained problem solver. Once it solves a difficult subproblem, it uses the solution to generate a sequence of subgoals, or steppingstones, that can be used by the constrained problem solver to solve this difficult subproblem when it occurs again. In this paper we provide analytical evidence for SteppingStone's capabilities as well as empirical results from our work with the domain of logic synthesis.

Introduction

In this paper we describe recent work with SteppingStone. In previous work with SteppingStone [Ruby and Kibler, 1989] we demonstrated its capabilities in the classic tile-sliding domain. In this paper we introduce an analytical model for its behavior. We also demonstrate its ability to operate on optimization problems with empirical results from the logic synthesis domain.

SteppingStone

SteppingStone is broken into five components: (1) goal ordering, (2) constrained search, (3) memory, (4) unconstrained search, and (5) learning. Each of these components operates largely independently. Figure 1 outlines how these components interface with each other.
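As a concrete, if highly simplified, illustration of how the five components interact, the following Python sketch runs the cycle with stub solvers. The function names, the representation of subgoals as strings, and the exact-context memory keys are invented for this example; they are not the authors' implementation, and in particular real SteppingStone memory binds variables rather than matching exact contexts.

```python
# Toy sketch of SteppingStone's five-component cycle (hypothetical API,
# not the authors' code): subgoals are strings, moves are omitted, and
# memory is keyed by the exact (subgoal, protected-set) context.

def stepping_stone(goals, can_solve_directly, memory, unconstrained_solve):
    """Solve subgoals in order; on an impasse, consult memory, then fall
    back to unconstrained search and learn a new steppingstone record."""
    solved = []                                   # protected subgoals
    learned = []                                  # records learned this run
    for goal in sorted(goals):                    # (1) goal ordering (stub)
        if can_solve_directly(goal, solved):      # (2) constrained search
            solved.append(goal)
            continue
        key = (goal, frozenset(solved))           # impasse: build the context
        if key not in memory:                     # (3) memory lookup
            unconstrained_solve(goal, solved)     # (4) unconstrained search
            memory[key] = ["stone-for-" + goal]   # (5) learn new steppingstones
            learned.append(key)
        solved.append(goal)  # assume the steppingstones can be followed
    return solved, learned
```

On a second run over the same impasse, the record is found in memory and nothing new needs to be learned, which is the behavior the architecture is designed to exploit.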
*This work was partially supported by a grant from the National Science Foundation.

SteppingStone operates on problems defined with a state space representation consisting of a set of goals, a set of operators, and an initial state. The goal orderer takes as input a set of goals. It orders these goals so that the constrained search method will likely solve them. It does this by ordering them so as to reduce the likelihood of subgoal interactions, using a domain-independent heuristic we call openness [Ruby and Kibler, 1989]. It produces an ordered set of subgoals as output.

The constrained search component takes as input an ordered set of subgoals and produces a solution for the subgoals. It attempts to solve the subgoals in the prescribed order and is constrained to protect each solved subgoal. An impasse occurs when constrained search is unable to solve a subgoal. When reaching an impasse, memory is called.

Memory takes as input a context. A context consists of the subgoal currently being solved, the currently protected (or solved) subgoals, and the current state. Memory consists of a set of steppingstone records. Each of these records consists of a context and a new block of ordered subgoals. Note that the current-state slot is empty for the context in a steppingstone record. The protected subgoals of the context in a steppingstone record represent those solved subgoals undone while resolving the impasse when the steppingstone record was learned. If the input context matches the context of a steppingstone record, then the block of ordered subgoals (steppingstones) contained in the record is returned to the constrained search component. For the input context to match the context of a steppingstone record, the subgoal solved by the steppingstone record must bind to the subgoal currently being solved, and the protected subgoals of the steppingstone record must also be protected in the input context.
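The matching criterion just described can be sketched as follows: the record's subgoal must correspond to the subgoal being solved, and its protected subgoals must all be protected in the input context. This is an illustration only; subgoals here are plain strings and subset testing stands in for the variable binding the real system performs. All names are invented for the sketch:

```python
# Sketch of steppingstone-record matching: a record matches when its
# subgoal equals the current subgoal and its protected subgoals are a
# subset of the context's protected subgoals. (Real SteppingStone records
# also bind variables; that is omitted here.)

class SteppingstoneRecord:
    def __init__(self, subgoal, protected, steppingstones):
        self.subgoal = subgoal                # subgoal the record resolves
        self.protected = set(protected)       # subgoals undone when learned
        self.steppingstones = steppingstones  # ordered block of subgoals

def lookup(memory, current_subgoal, protected_now):
    """Return the steppingstone blocks of all records matching the context."""
    return [rec.steppingstones
            for rec in memory
            if rec.subgoal == current_subgoal
            and rec.protected <= set(protected_now)]
```

For example, a record learned while re-solving place-1 and place-2 still matches a context in which additional subgoals happen to be protected, but not one in which place-2 is no longer protected.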
When a new sequence of subgoals is returned, constrained search follows these subgoals in order to solve the current subgoal as well as all of the protected ones. If constrained search is unable to follow all of the subgoals, or the final state arrived at after following all of the subgoals does not improve upon the impasse, the system reverts to the original impasse state.

RUBY & KIBLER 527

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

Figure 1: Overview of SteppingStone (from a set of goals to a solution).

When memory fails to return any useful steppingstones, the constrained search component calls the unconstrained search component. The unconstrained search component takes as input a context, just as the memory component did. Unconstrained search relaxes the protection on the solved subgoals in its search for a solution. If it resolves the impasse, it returns the sequence of moves found to the constrained search component. The unconstrained search component also sends its impasse solution, along with the context, to the learner.

The learner takes as input a context and a solution. It uses the context to aid in generalizing over the sequence of moves making up the solution to generate a new block of ordered subgoals. These ordered subgoals are then passed to memory for reuse on other problems. We'll now examine these components in more detail.

Steppingstones as Plans

SteppingStone learns plans for solving recurring subproblems. SteppingStone represents its plans as sequences of subgoals (steppingstones) for resolving an impasse. A sequence of subgoals consists of an ordered set of partial state descriptions (subgoals). The constrained problem solver uses these subgoals as steppingstones to lead it through the impasse. These steppingstones are indexed by the subgoal they reduce and the previously solved subgoals that are undone and resolved.
After following a sequence of subgoals, any previously solved subgoals remain solved and the subgoal difference generating the impasse is reduced. Figure 2 gives an example of a sequence of subgoals for solving an impasse from the 8-puzzle domain. For one of these subgoals to be true in a state, the tiles listed in the subgoal must be in the same position as they are in the state. The blank squares in these subgoals are allowed to match any tile. The subgoal sequence provides a method for correctly placing the 3-tile when the 1-tile and 2-tile have already been correctly placed. These subgoals can be followed by the constrained problem solver to lead to a state where the 1-tile, 2-tile and 3-tile are all correctly placed.

Figure 2: Steppingstones from Memory

Note the blank is not included in any of the subgoals because it was not protected when the subgoal sequence was learned.

Steppingstones as a representation of a plan differ in several ways from macro-operators [Laird et al., 1987]. With macro-operators it can be difficult to represent some types of generalizations. For example, the sequence of operations needed to move from the first subgoal in Figure 2 to the last one will be dependent upon where the blank is initially. Any reuse of a plan for this solution based on operations will also be dependent upon the position of the blank. The sequence of subgoals provided is independent of the blank.

Steppingstones allow for the use of heuristic generalization while still guaranteeing that any state generated is a legal state. This is not possible with macro-operators. If new macro-operators are ever learned that produce results not originally possible with the initial domain operators, an illegal state might be produced. This limits the types of generalizations that can be allowed with macros. Steppingstones do not have this limitation.
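The wildcard matching used by subgoals like those in Figure 2 can be sketched as follows. The encoding is invented for this illustration: a board is a flat list of nine cells, and None stands for a "blank" slot in the subgoal that matches any tile.

```python
# Sketch of how a partial-state subgoal (a steppingstone) matches an
# 8-puzzle state. A subgoal fixes some tiles' positions; None positions
# match anything. Encoding (flat list of 9 cells) is for illustration.

def matches(subgoal, state):
    """True when every tile the subgoal fixes sits in the same position
    in the state; None positions in the subgoal match any tile."""
    return all(want is None or want == got
               for want, got in zip(subgoal, state))

# subgoal: tiles 1 and 2 correctly placed, everything else free
subgoal = [1, 2, None, None, None, None, None, None, None]
assert matches(subgoal, [1, 2, 3, 4, 5, 6, 7, 8, 0])
assert not matches(subgoal, [2, 1, 3, 4, 5, 6, 7, 8, 0])
```

Because a subgoal constrains only a few cells, it can match many concrete states, which is exactly what lets a learned sequence be reused regardless of where the unconstrained tiles (and the blank) happen to be.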
Since they can only be used if they can be followed using the initial set of domain operators, they can be overgeneral and still guarantee that any state generated is legal. Steppingstones pay for their increased expressiveness with a higher application cost: steppingstones must be instantiated with search. Steppingstones demonstrate how search can be used to compensate for representation limitations of a given formulation of a domain.

Learning New Steppingstones

If the system has no knowledge concerning how to resolve an impasse, SteppingStone resorts to its unconstrained problem solver, localized brute-force search [Ruby and Kibler, 1989]. Once a sequence of moves for resolving an impasse is found, the learner generalizes it to derive a new subgoal sequence. The first step in this process is to translate the sequence of moves to the sequence of states generated by the moves. This sequence of states can be regarded as a sequence of very specific subgoals. Since these subgoals solve an impasse, we assume that only those portions of the state involved in the impasse context are relevant. Here, the impasse context includes the protected subgoals that were undone and the subgoal that was being solved. This allows the subgoals to be generalized by including in each subgoal only those portions of the state involved with the subgoal being solved and the protected subgoals that needed to be undone to solve the impasse. The removed portion of the state is variablized. For the example subgoals given in Figure 2, tile-1 and tile-2 are the portions of the state involved with previously solved subgoals. Tile-3 is the part of the state involved with the subgoal being solved. The rest of the state is variablized and allowed to match anything.

Only the moves used to resolve an impasse and the impasse context are used to generate the new steppingstones.
This allows any method for finding the moves to resolve an impasse to be used to generate new steppingstones. SteppingStone can use an impasse solution provided by an expert as easily as the system uses the results of its brute-force search procedure.

Analytic Models

In this section we present an analysis of the effectiveness of the SteppingStone approach. In order to perform this analysis, we will make strong assumptions about the domain and the problem solving process. In particular we will model the problem solvers in SteppingStone as bounded breadth-first search or hill-climbing search. The goal of this analysis is to analytically evaluate the value of memory in SteppingStone. In our analysis we will compare two situations: SteppingStone with no memory and SteppingStone with a complete memory.

Breadth-First Model

In both of our models, we assume that the branching factor is the constant b. In our first model, we assume that the constrained problem solver does a breadth-first search to depth d and the unconstrained problem solver does a search to depth k * d, where k > 1. We also assume that SteppingStone can solve all problems without the use of memory.

Before any learning takes place, SteppingStone will oscillate between two search processes defined by the constrained and unconstrained problem solving process. Suppose the constrained problem solver is called c times while the unconstrained one is called u times. Then the total computation cost for a memoryless SteppingStone is bounded above by:

(c + u) * b^d + u * b^(k*d).    (1)

Now suppose that sufficient learning takes place such that the unconstrained problem solver need never be called. We define b_s as the number of steppingstone sequences that will match an impasse, and assume that it is the same for all impasses. Since all but the last of the steppingstones that match might fail to resolve the impasse, resolving an impasse might require trying every steppingstone that matches.
The resulting search cost is bounded above by:

(c + u) * b^d + u * b_s * k * b^d.    (2)

Roughly speaking, the effect of memory is to replace the term b^(k*d) by the term b_s * k * b^d. As long as b_s is not large, this demonstrates how an appropriate memory of past difficulties can yield an exponential decrease in computation cost. In one regard, this analysis has been too pessimistic. The factor b_s can be large without negatively affecting the search as long as the likelihood of the usefulness of a returned steppingstone sequence is high. In particular, if s is the likelihood a matched subgoal sequence from memory will succeed, then b_s should be replaced by 1/s.

Hill-Climbing Model

A better model of the constrained problem solver used by SteppingStone is an incremental hill-climbing algorithm. This algorithm operates under the constraint that each solved subgoal must remain solved. It then attempts to solve the current subgoal by hill climbing towards it using some measure of that subgoal's completion. By incremental we mean the hill climbing occurs between subgoals and not from a state to the final goal. Note that unlike previous search-based approaches like that of MacLearn [Iba, 1989], the heuristic measure used is only over the current subgoal, not all subgoals. Both previously solved subgoals and future subgoals are ignored by the heuristic. The heuristic used is only a measure of the degree to which the single subgoal being solved is completed.

As above, let us first assume that a memoryless SteppingStone is sufficient to solve a problem. Let m be the length of a solution to a subgoal from hill climbing. Also let l be the length of a solution to a subgoal by the unconstrained problem solver. In this case equation 1 becomes:

(c + u) * m * b + u * b^l.    (3)
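To make the breadth-first bounds (1) and (2) concrete, here is a small numeric sketch; the parameter values are arbitrary illustrations, not measurements from the paper:

```python
# Numeric sketch of breadth-first bounds (1) and (2): parameter values
# are arbitrary illustrations, not taken from the paper.
b, d, k = 3, 4, 2      # branching factor, constrained depth, depth multiplier
c, u = 10, 5           # calls to constrained / unconstrained search
b_s = 4                # steppingstone sequences matching an impasse

memoryless = (c + u) * b**d + u * b**(k * d)       # bound (1)
with_memory = (c + u) * b**d + u * b_s * k * b**d  # bound (2)

# memory replaces the exponential term b**(k*d) by b_s * k * b**d
assert memoryless > with_memory
```

With these values the exponential term u * b^(k*d) is 32,805 while its replacement u * b_s * k * b^d is 3,240, cutting the overall bound from 34,020 to 4,455; the gap widens rapidly as k or d grows.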
Now, after learning is complete, a bound on the computational cost is:

(c + u) * m * b + u * b_s * l * b.    (4)

As before, the factor b_s can be replaced by 1/s, where s is the likelihood of success. In any case, the major effect is to replace the exponential factor b^l by the factor (1/s) * l * b.

RUBY & KIBLER 529

Figure 3: Steppingstones for Optimizing Critical Path (panels: the impasse, the states leading to improvement, and the resulting steppingstones).

Steppingstones for Optimization

Problems where the goal is to find a structure that attempts to optimize one or more parameters form an important class of difficult real-world problems. To demonstrate SteppingStone can operate effectively on problems of this type, we conducted a series of experiments with the logic synthesis task of VLSI design.

Logic Synthesis

One important domain that requires optimizing real-valued constraints as well as meeting a set of Boolean constraints is the synthesis of digital logic. In logic synthesis, a functional specification of a circuit is mapped into combinational logic using a library of available components. These components are taken from a technology-specific library. These libraries vary depending upon the technology and particular manufacturer chosen. The synthesized circuit is optimized for a set of constraints.

Operating SteppingStone on the logic synthesis task requires a state space representation of the problem. Logic synthesis can be represented with a start state defined by a functional description of a circuit, along with a set of constraints. Boolean algebra provides a good language for the functional description of a circuit. The goal is a realizable circuit using components from an available library that satisfies a set of hard constraints and optimizes a set of soft constraints. Operators for this domain map parts of the functional description to components from the technology-specific library. These mappings are well-defined and ensure the correctness of the resulting design.
Mapping a functional description to a realizable design is a simple task. Finding a realizable design that satisfies a set of hard and soft constraints is much more difficult. To ensure global optimality requires an exhaustive enumeration of the design space.

Figure 3 gives an example of how steppingstones for optimizing critical path delay time can be learned. The initial state to this problem is the Boolean equation a ∧ b ∧ c. The goal is a realizable circuit that is optimized for its critical path delay time. A realizable circuit is found by mapping Boolean subexpressions of the circuit into actual components. Assume the following components are available: inverters, 2-input nand-gates, and 2-input nor-gates. One mapping for the Boolean expression X ∧ Y is a nor-gate, with ¬X and ¬Y for inputs. An alternative mapping is to a nand-gate with X and Y as inputs and an inverter on its output. Using mappings like these in a depth-first fashion, SteppingStone generates a circuit that is realizable, but that is unlikely to be optimal for critical path delay time.

The impasse state presents a circuit for a ∧ b ∧ c that is realizable. Initially, the system has no knowledge of how to optimize a circuit, so an impasse occurs. Unconstrained search is used to find a circuit with an improved critical path delay time. The states shown are those generated by the sequence of moves leading from the impasse state to the improved state. The steppingstones are generated by removing from these states all but those portions involved in the previously solved subgoal (realizable) that were modified while generating the improved state. These final steppingstones appear at the bottom of Figure 3.

Note that the steppingstones presented in Figure 3 are goals that can match many states. The only requirement is that the variables X, Y, and Z are bound consistently in each of the subgoals in the sequence.
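The two mappings for X ∧ Y described above can be sketched as rewrite operators on expression trees. The tuple encoding and function names below are illustrative assumptions, not Steppingstone's actual representation:

```python
# Two operator sketches that map the Boolean expression (and X Y) to
# gate-level structures, following the two mappings described in the text.
# The nested-tuple circuit encoding is an illustrative assumption.

def map_to_nor(expr):
    """(and X Y) -> nor-gate with inverted inputs: nor(not X, not Y)."""
    _, x, y = expr
    return ("nor2", ("inv", x), ("inv", y))

def map_to_nand(expr):
    """(and X Y) -> nand-gate followed by an inverter: inv(nand(X, Y))."""
    _, x, y = expr
    return ("inv", ("nand2", x, y))

print(map_to_nor(("and", "a", "b")))    # ('nor2', ('inv', 'a'), ('inv', 'b'))
print(map_to_nand(("and", "a", "b")))   # ('inv', ('nand2', 'a', 'b'))
```

Both rewrites preserve the Boolean function (by De Morgan's laws, nor(¬X, ¬Y) = X ∧ Y), which is what makes the mappings "well-defined" in the sense used above.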
Since steppingstones are used heuristically and only if grounded operations can achieve them, this type of generalization is sound.

530 LEARNING SEARCH CONTROL

Figure 4: Average Performance on Problems of Size 30 (critical path delay in ns vs. size of training problems; curves for search and SteppingStone)

Steppingstones for Logic Synthesis

To demonstrate Steppingstone's ability to learn optimization knowledge we conducted a series of experiments. A component library was created with components available from the LSI Logic Corporation. The components chosen and the critical path delay time/gates required were: 3-input nand = 4.2ns/2 gates, 2-input nand = 2.9ns/1 gate, 3-input nor = 2.4ns/2 gates, 2-input nor = 2.2ns/1 gate, inverter = 2.9ns/1 gate.

Steppingstone was initially trained on random Boolean equations small enough for unconstrained search to produce close to optimal designs. These equations used the connectives and, or, and not. There are 2^(2^n) different equations of this type with n inputs. With this library of components, there are approximately three ways of implementing an and or an or. Thus, for a problem of size n there are at least of order 3^(n-1) different possible solutions. This large search space makes this problem difficult for brute-force methods. The subgoals described earlier are both highly interacting and different in character from those traditionally used, making the problem difficult for goal-based approaches.

Steppingstone was trained on four successive sets of problems. Each set of problems differed in the number of inputs. The first training set had 2-input problems. The number of inputs increased until the last training set had 5-input problems. Training in a set continued until ten successive problems were solved without learning any additional knowledge. Testing was done after finishing each set of training problems.
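The critical path delay that Steppingstone optimizes can be computed directly from a gate tree and the library delays quoted above. A minimal sketch, assuming a tuple-based circuit encoding (our own illustration, not the system's data structure):

```python
# Critical path delay of a gate tree, using the LSI Logic library delays
# quoted in the text (ns). The tuple-based circuit representation is an
# illustrative assumption, not Steppingstone's internal format.
DELAY = {
    "nand3": 4.2, "nand2": 2.9,
    "nor3": 2.4, "nor2": 2.2,
    "inv": 2.9,
}

def critical_path(node):
    """node is a primary input name (str) or (gate, child, ...)."""
    if isinstance(node, str):
        return 0.0                      # primary input: zero delay
    gate, *children = node
    return DELAY[gate] + max(critical_path(c) for c in children)

# Two realizations of a AND b AND c:
# (1) 3-input nand followed by an inverter: 4.2 + 2.9 = 7.1 ns
c1 = ("inv", ("nand3", "a", "b", "c"))
# (2) chain of 2-input nands, each followed by an inverter:
#     2.9 + 2.9 + 2.9 + 2.9 = 11.6 ns
c2 = ("inv", ("nand2", ("inv", ("nand2", "a", "b")), "c"))

print(critical_path(c1))
print(critical_path(c2))
```

The gap between the two realizations (7.1 ns vs. 11.6 ns) is exactly the kind of difference the learned steppingstones steer the search toward.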
The system was tested on three sets of twenty-five random problems. These sets were drawn from problems with 10, 20, and 30 inputs respectively. Learning and unconstrained search were turned off during testing.

Figure 4 shows how the average critical path delay time of the circuits synthesized decreased as learning increased for the random problems with thirty inputs. Figure 4 also shows how the space required for the circuits decreased as well with learning. Similar results were found for the other test problems. In order to judge the difficulty of these problems and the quality of the solutions generated by Steppingstone, we used the existing logic synthesis application system MisII [Brayton et al., 1987] to optimize the problems for their critical path delay time. Although MisII had capabilities not available to Steppingstone, it served to provide a good lower bound on the solution quality. The averaged results of MisII on the test problems are also plotted in Figure 4. Note that Steppingstone almost matched the critical path delay time performance of MisII. Steppingstone did not perform as well at optimizing for the space required because it lacked opportunities for learning space optimization knowledge. Opportunities for learning this knowledge occurred only in those parts of the circuit off of the critical path. In the small training problems, few opportunities for space optimization occurred off of the critical path.

To further estimate the difficulty of these problems a simple brute-force approach was also tried. The best solution found using the brute-force approach with a cutoff of 500,000 search tree nodes was recorded for each of the test problems. The averaged results are also plotted in Figure 4.

After training on all four training sets, 34 subgoal sequences were learned. Given that the number of random Boolean functions with n inputs is 2^(2^n), or 2^32 for problems of size five, the amount of learning is extremely small.
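The search-space arithmetic behind these claims is easy to check. A small sketch (the helper names are ours):

```python
# Search-space arithmetic from the text: the number of distinct Boolean
# functions with n inputs, and the stated lower bound on the number of
# distinct gate-level implementations of a problem of size n.

def num_boolean_functions(n):
    """There are 2^(2^n) Boolean functions of n inputs."""
    return 2 ** (2 ** n)

def min_solutions(n):
    """~3 ways to implement each and/or gives at least 3^(n-1) solutions."""
    return 3 ** (n - 1)

print(num_boolean_functions(5))   # 4294967296, i.e. 2^32 for size five
print(min_solutions(30))          # already astronomically large at n = 30
```

Against these numbers, the 34 learned subgoal sequences are indeed an extremely small amount of stored knowledge.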
As with the tile-sliding domain [Ruby and Kibler, 1989], this small amount of learning is due to Steppingstone's decision to learn only when its constrained problem solver is unsuccessful and to the recurrence of these subproblems. After learning these 34 steppingstones, the amount of search required to find the solutions to the problems with thirty inputs averaged 2,841 nodes expanded.

Comparison of Analytic Models and Empirical Results

To validate our analytic models of Steppingstone we used them to analyze the empirical results in the logic synthesis domain. Some adjustments of the general model will be made to better fit some specific characteristics of the logic synthesis domain. In addition, because the hill-climbing model best matches the approach used in our current implementation, we use it for the analyses.

For the random problems with thirty inputs the average length of a solution was 94 moves. The average number of subgoals solved by the constrained problem solver without generating an impasse, defined as c in our models, was 1. The average length of the solution to these subgoals, m in our hill-climbing model, was 41.9 moves.

The branching factor for the random problems with thirty inputs was computed indirectly from the total amount of search required by the unconstrained problem solver to find an impasse solution. The branching factor, b, when searching for critical path optimizations on the thirty input problems averaged approximately 7. The average number of times memory was used when solving a problem was 21.7. This corresponds to the average number of subgoals per problem that unconstrained search would have to solve, u in our models. The average length of a solution to a subgoal found using memory was 2.4 moves. This corresponds to the average length of a solution that before learning must be found by unconstrained search. Unfortunately, the length of these solutions varied from 1 move to 9 moves.
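The consequence of this spread in subgoal-solution lengths is that pre-learning search cost, roughly b^l nodes for a subgoal of length l, is dominated by the longest subgoal. A quick numeric sketch with the measured branching factor:

```python
# Pre-learning search cost grows as b^l for a subgoal whose solution is
# l moves long (b = 7 from the thirty-input problems). With lengths
# ranging from 1 to 9 moves, the longest subgoal dominates the total.
b = 7
for l in (1, 2, 4, 9):
    print(l, b ** l)
# 7^9 = 40353607 nodes for the hardest subgoal alone
```

This is why a single 9-move subgoal, not the many short ones, sets the cost of problem solving before learning.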
Because the amount of search before learning is exponential in the length of the subgoal solution, performance before learning is dominated by the cost of finding the longest solution. With the longest solution being 9 moves and a branching factor of 7, the amount of search required would be 7^9, or in excess of 40,000,000 nodes. With a search cutoff below this our model predicts the solutions found will be of lower quality. We conducted an experiment with an empty memory and a search cutoff for unconstrained search of 30,000 nodes and, as predicted by our model, the quality of the solutions found was not as high as that produced after learning.

After learning, the model assumes that the constrained problem solver must search about as far when failing on a subgoal as when it succeeds. For the logic synthesis domain this was not the case, as failure occurred with no search since constrained search could not hill climb from an impasse state. Thus, we can replace (c + u) in equation 4 by c. We also replace the branching factor of memory, b_m, by the better estimate of 1/s where s is the probability that a subgoal sequence returned from memory will succeed. Thus, the amount of work after learning previously modeled by equation 4 is better modeled by:

c * m * b + u * (1/s) * l * b.   (5)

For the steppingstones learned, the average success rate, s, on the random problems with thirty inputs was 0.0594, so 1/s is 16.8. For l we use the average length of a solution to a subgoal found using memory, 2.4 moves. Thus, the amount of work predicted by the model after learning is 6,418. The actual average amount of search after learning was 2,841. Given the assumptions of the model, the accuracy of its predictions greatly increases our confidence in it.

Summary

SteppingStone gains its power through the integration of several techniques. It decomposes a problem into subproblems and solves the simple subproblems with an inexpensive constrained problem solver.
The more difficult subproblems are initially solved with an expensive unconstrained problem solver. The solutions to these more difficult problems are used to learn further decompositions. These new learned decompositions break the difficult subproblems into simpler subproblems that can be solved by the constrained problem solver. We provided analytical results indicating that a significant decrease in the amount of search required can be expected from this type of learning. We provided empirical evidence from a difficult real-world domain that the search required for problem solving was significantly reduced after learning. In addition, we demonstrated that the analytical model successfully predicted the general results. We intend to continue to explore Steppingstone's capabilities through a combination of empirical and analytical methods.

References

[Brayton et al., 1987] R. K. Brayton, R. Rudell, A. Sangiovanni-Vincentelli, and A. R. Wang. MIS: A multiple-level logic optimization system. IEEE Transactions on Computer-Aided Design, 6:1062-1081, 1987.

[Iba, 1989] G. A. Iba. A heuristic approach to the discovery of macro-operators. Machine Learning, 3:285-317, 1989.

[Laird et al., 1987] J. Laird, A. Newell, and P. S. Rosenbloom. SOAR: An architecture for general intelligence. Artificial Intelligence, 33:1-64, 1987.

[Ruby and Kibler, 1989] D. Ruby and D. Kibler. Learning subgoal sequences for planning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 609-614, Detroit, MI, 1989.
Sanjay Bhansali and Mehdi T. Harandi
Department of Computer Science
University of Illinois at Urbana-Champaign
1304 W. Springfield Avenue, Urbana, IL 61801
bhansali@cs.uiuc.edu harandi@cs.uiuc.edu

Abstract

The feasibility of derivational analogy as a mechanism for improving problem-solving behavior has been shown for a variety of problem domains by several researchers. However, most of the implemented systems have been empirically evaluated in the restricted context of an already supplied base analog, or on a few isolated examples. In this paper, we address the utility of a derivational analogy based approach when the cost of retrieving analogs from a sizable case library, and the cost of retrieving inappropriate analogs, is factored in.

Introduction

The process of derivational analogy [Carbonell, 1983] consists of storing derivation histories of problems in an episodic memory, retrieving an appropriate case from the memory that shares "significant" aspects with a current problem, and transferring the derivation of the retrieved solution to the new problem by replaying relevant parts of the solution and modifying the inapplicable portions in light of the new context. In recent years, several systems based on the reuse of a design process have been developed, for domains including program synthesis ([Baxter, 1990, Goldberg, 1989, Mostow and Fischer, 1989, Steier, 1987, Wile, 1983]), circuit design ([Huhns and Acosta, 1987, Mostow et al., 1989, Steinberg and Mitchell, 1985]), mathematical reasoning ([Carbonell and Veloso, 1988]), human interface design ([Blumenthal, 1990]) and blocks-world ([Kambhampati, 1989]). Though these systems have demonstrated the effectiveness of a reuse-based paradigm for improving efficiency of search-intensive tasks, most of them have been tested in the restricted context of an already supplied base analog, or on a few isolated examples, or for "toy" problem domains.
Whether the derivational analogy approach would scale up for real-world problems, and how factoring in the cost of retrieving analogs from a sizable episodic memory, as well as the cost of retrieving inappropriate analogs, would affect the system's performance, remain open questions. In this paper, we address the last two issues in the context of a system, APU, that synthesizes UNIX shell scripts from a formal problem specification [Bhansali, 1991, Bhansali and Harandi, 1990, Harandi and Bhansali, 1989].

We describe experiments designed to determine whether automatic detection of appropriate analogies in APU is cost-effective, and whether using derivational analogy does speed up APU's overall problem-solving performance. Following in the style of Minton [Minton, 1988], we assess APU's performance on a population of real-world problems, generated randomly by fixed procedures. The results of the experiment point to criteria that may be used to determine the viability of using derivational analogy, and also suggest ways of building up a case library.

Overview of APU

The input to APU is a problem specification in the form of pre- and post-conditions, written in a functional, lisp-like notation, augmented with logical and set-theoretic quantifiers. This is transformed by a hierarchical planner into a shell-script by a top-down goal decomposition process employing a knowledge-base of rules and a component library of subroutines and cliches (or program templates), which represent the primitive operators for the planner.

A solved problem is stored together with its derivation history in a derivation history library. When a new problem is encountered, an analogical reasoner is used to retrieve an analogous problem from the derivation history library, and to synthesize a solution for the new problem by replaying the derivation history of the retrieved problem.
A potential speedup is achieved by the elimination of search for the right sequence of rules to apply. Details of the program synthesis process are given elsewhere [Bhansali, 1991, Bhansali and Harandi, 1990].

BHANSALI & HARANDI 521
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

A concept dictionary contains the definitions of and relationships between various domain-dependent concepts including objects, predicates, and functions. These concepts are organized in an abstraction hierarchy. Rules are written in terms of the most abstract concepts and apply to all instances of the concept in the hierarchy. For example, lines and words are instances of the abstract concept text-object, and kill (a process) and delete (a file) are instances of the abstract function remove. Thus, e.g., a rule to remove an object would be applicable to both killing a process as well as removing a file. The abstraction hierarchies in the concept dictionary and the polymorphic rules form the basis of the analogical reasoning.

Derivation History

A derivation history of a problem is essentially a tree showing the top-down decomposition of a goal into sub-goals terminating in a UNIX command or subroutine. With each node in the tree the following information is stored:

1. The sub-goal to be achieved. This forms the basis for determining whether the sub-plan below it could be replayed in order to derive a solution for a new problem. The sub-goal is used to derive a set of keys, which are used to index the derivation tree rooted at that node. If these keys match those of a new sub-problem, the corresponding sub-tree is considered a plausible candidate for replay.

2. The sub-plan used to solve it (i.e. a pointer to the sub-tree rooted at that node).

3. The rule applied to decompose the problem, and the set of other applicable rules. This is responsible for some speed-up during replay by eliminating the search for an applicable rule.

4.
The types of the various arguments. These are used to determine the best analog for a target problem, by comparing them with the types of the corresponding variables in the target problem (next section). If the types of all corresponding variables are identical, it represents a perfect match, and the solution can be copied - a much faster operation than replay.

5. The binding of variables occurring in the goal to the variables in the rule. This is used to establish correspondence between variables in the target and source - expressions bound to the same rule variable are assumed to correspond.

Retrieval of Candidate Analogs

When derivations are stored, they are indexed and retrieved using a set of four heuristics.

Solution Structure Heuristic (H1). This heuristic is used to detect analogies based on the abstract solution structure of two programs, as determined by the top-level strategies used in decomposing the problem. The top-level strategies usually correspond to the outer-level constructs in the post-condition of a problem specification. These constructs include the various quantifiers and the logical connectives - not, and, or and implies. As an example an outermost construct of the form:

(¬(EXIST (?x : ...) :ST (and ?constr1 ?constr2)))

is suggestive of a particular strategy for solving the problem: find all ?x that satisfy the given constraints and delete them. Therefore, all problems with such a post-condition might be analogous. The other quantifiers and logical connectives result in analogous strategies for writing programs. The Solution Structure heuristic creates a table of such abstract keys and uses them to index and retrieve problems.

Systematicity Heuristic (H2).
This heuristic is loosely based on the systematicity principle proposed by Gentner [Gentner, 1983] and states that: if the input and output arguments of two problem specifications are instances of a common abstraction in the abstraction hierarchy, then the two problems are more likely to be analogous.

The principle is based on the intuition that analogies are about relations, rather than simple features. The target objects do not have to resemble their corresponding base objects, but are placed in correspondence due to corresponding roles in a common relational structure.

To implement this heuristic, APU looks at each of the conjunctive constraints in the postcondition of a problem and forms a key for it by abstracting the constants, input-variables, output-variables, and the predicates. The constants and variables in the problem are treated as black-boxes, and unary functions are converted to a uniform binary predicate, since they are deemed unimportant, and the primary interest is in detecting the higher order abstract relations that hold between these objects. This is done by replacing a function or predicate by climbing one step up the abstraction hierarchy. The detection of these higher order relations also establishes the correspondences between the input/output variables of the source and target problem, which is used to prune the set of plausible candidates (using the Argument Abstraction heuristic, discussed shortly) as well as in replay. As an example, the constraints (≥ (cpu-time ?p) ?t) and (< ?n (size ?f)), where ?p, ?f are input variables and ?t, ?n are output variables, would result in the formation of the following key:

(order-rel-op (attribute inp-var) out-var)

suggesting that the two problems might be analogous.

Syntactic Structure Heuristic (H3). The third heuristic relies on certain syntactic features of a problem definition to detect analogies.
The syntactic features include the structure of a specification (e.g. if two problems include a function definition that is defined recursively), as well as certain keywords in the specification (e.g. the keyword :WHEN in a specification indicates that the program is asynchronous, involving a periodic wait or suspension of a process). APU creates a table of such syntactic features which are used to index and retrieve problems.

Argument Abstraction Heuristic (H4). This heuristic uses the abstraction hierarchy of objects to determine how "close" the corresponding objects in two problem specifications are. Closeness is measured in terms of the number of links separating the objects in the concept dictionary - the shorter the link, the better are the chances that the two problems will have analogous solutions. For example, lines and words are closer to each other than, say, to a process. Therefore, the problem of counting lines in a file is closer to the problem of counting words in a file than to the problem of counting processes in the system.

Interaction of Heuristics

The retrieval algorithm resolves conflicts between candidate choices suggested by the above heuristics by using H2, H1, and H3 as the primary, secondary, and tertiary indices respectively. Further ties are broken by using H4, and then choosing one of the candidates at random [Bhansali, 1991].

Experimental Design

In order to address the issues mentioned in the introduction, we need experimental results that provide answers to the following questions about APU's retrieval and replay techniques:

• How good are the heuristics in determining appropriate base analogs?
• How does the time taken to synthesize programs using analogy compare with the time taken to synthesize programs without analogy?
• How does the retrieval time depend on the size of the derivation history library?
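The conflict-resolution order described under "Interaction of Heuristics" above can be sketched as a lexicographic ranking: an H2 match outranks an H1 match, which outranks an H3 match; remaining ties go to the candidate with the smaller H4 hierarchy distance, and then to chance. The candidate encoding and function names below are our own illustrative assumptions, not APU's implementation:

```python
import random

# Rank candidate analogs: H2 (systematicity) is the primary index, H1
# (solution structure) secondary, H3 (syntactic) tertiary; remaining ties
# are broken by H4's concept-hierarchy distance, then randomly. The
# dict-of-match-results candidate encoding is an illustrative assumption.
def best_analog(candidates, rng=random):
    def rank(c):
        # In a tuple comparison, False (a match, since we negate) sorts
        # before True, so matched heuristics win; smaller distance wins next.
        return (not c["h2"], not c["h1"], not c["h3"],
                c["h4_distance"], rng.random())
    return min(candidates, key=rank)

candidates = [
    {"name": "count-words", "h2": True,  "h1": True, "h3": False, "h4_distance": 1},
    {"name": "count-procs", "h2": True,  "h1": True, "h3": False, "h4_distance": 4},
    {"name": "list-files",  "h2": False, "h1": True, "h3": True,  "h4_distance": 2},
]
print(best_analog(candidates)["name"])   # count-words
```

Here count-words wins: it ties count-procs on H2/H1/H3 but sits one link away in the hierarchy instead of four, so H4 decides.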
As mentioned earlier, it is not enough to show results on isolated examples; the system must be tested on a population of problems that is representative of real-world problems. However, the limited knowledge-base of our prototype system precluded testing on a truly representative sample of the space of UNIX programs. Therefore, we decided to restrict ourselves to a subset of the problem domain, consisting of file and process manipulation programs; problems were constructed randomly from this subset by fixed procedures.

Generating the Data Set

We began by constructing a rule-base for 8 problems that are typical of the kind of problems solved using shell scripts in this problem space. The problems included in the set were: 1) List all descendant files of a directory, 2) Find the most/least frequent word in a file, 3) Count all files, satisfying certain constraints, in a directory, 4) List duplicate files under a directory, 5) Generate an index for a manuscript, 6) Delete processes with certain characteristics, 7) Search for certain words in a file, and 8) List all the ancestors of a file.

To generate the sample set, we first created a list of the various high-level operations which can be used to describe the top-level functionality of each of the above problems - count, list, delete, etc. - and a list of objects which could occur as arguments to the above operations - directory, file, system, line, word, etc. Then we created another list of the predicates and functions in our concept dictionary which relate these objects, e.g., occurs, owned, descendant, size, cpu-time, word-length, line-number, etc.

Next, we used the definitions of the top-level predicates in the concept dictionary to generate all legal combinations of operations and argument types.
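The combination step just described can be sketched as a product over a legality table; the table contents below are a small illustrative subset of the combinations listed in the text, and the constraint list and function names are our own assumptions:

```python
from itertools import product

# Generate legal (operation, object, container) combinations and pair each
# with up to a few conjunctive constraints, mimicking the sample-set
# construction described in the text. The tables are illustrative subsets.
LEGAL = {  # operation -> list of (object, container) argument types
    "count": [("file", "system"), ("file", "directory"), ("word", "file"),
              ("character", "file"), ("line", "file"), ("string", "file"),
              ("process", "system")],
    "list":  [("file", "directory"), ("process", "system")],
}
CONSTRAINTS = [("occurs", "file", "directory"),
               ("descendant", "directory", "directory")]

def generate_problems():
    for op, args in LEGAL.items():
        # each problem: a top-level operation plus 0..len(CONSTRAINTS)
        # of the available conjunctive constraints
        for (obj, container), k in product(args, range(len(CONSTRAINTS) + 1)):
            yield (op, obj, container, tuple(CONSTRAINTS[:k]))

problems = list(generate_problems())
print(len(problems))   # 9 argument pairs x 3 constraint counts = 27
```

A random sample drawn from such an enumeration, plus the 8 seed problems, is what forms the experimental population described next.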
For example, for the count operation, the following instances were generated: (count file system), (count file directory), (count word file), (count character file), (count line file), (count string file), (count process system). In a similar fashion, a list of all legal constraints was generated, using the second list of predicates and functions. Examples of constraints generated are (occurs file directory), (descendant directory directory), (= int (line-number word file)), and (= string (owner file)). Constraints that were trivial or uninteresting were pruned away, e.g. (= int int).

Next we combined these constraints with the top-level operations to create a base set of problems. We restricted each problem to have a maximum of three conjunctive constraints. From this set a random number generator was used to select 37 problems, which together with the initial set formed our sample population.

The final step consisted of translating the high-level description of the problems into a formal specification. This was done manually, in a fairly mechanical manner. The only non-mechanical step was in assigning the input and output arguments for each program. This was done using the most 'natural' or likely formulation of the problem.

Experiment 1: Feasibility of Automatic Retrieval

We stored 15 randomly chosen problems from the sample set in the derivation history library. Then, for each of the 45 problems, we ran the retrieval algorithm to determine the best base analog. This was done for various combinations of the heuristics.

To evaluate the heuristics, we compared APU's choice against a human expert's, namely ourselves. To ensure that our choices were not biased by APU's, we compiled our own list of the best analogs for each problem before running APU's retrieval algorithm.

The result of the experiment is summarized in Figure 1. The first column shows which heuristics were turned on during the experiment.
The combinations tried were - all heuristics working, all but one heuristic working, and each heuristic working separately.¹

¹The argument abstraction heuristic cannot be used independently, since it is not used to index problems, but simply to prune the set of candidates retrieved by the other heuristics.

Heuristics   Mismatches   Inferior choices   Score
H2           14           8                  37
H3           41           41                 4

Figure 1: Performance of APU's retrieval heuristics against a human expert's

The second column shows the number of problems for which APU's choice did not match ours. However, it would not be fair to judge APU's performance simply on the number of mismatches, since that would imply that the human choices are always the best. Since we could not be confident of the latter, after obtaining the mismatches, we again carefully compared APU's choice against ours to judge their respective merits. We discovered that in a few instances APU's choices were clearly inferior to ours, while in others, it was not clear which of the mismatched choices was better. The former were marked as inferior choices (column 3), and an overall score for each heuristic combination was determined by subtracting the number of inferior choices from the total number of problems (column 4).

Discussion. The experiment clearly indicates that using all 4 heuristics, APU's retrieval algorithm performed almost as well as a human. There were only two cases in which APU's choice of an analog was clearly inferior. The reason why APU failed to pick the correct choice became obvious when we looked at the two cases.

Consider the first case, which was to delete all directories that are descendants of a particular sub-directory. This was specified using the post-condition

(¬(EXIST (?sd: dir) :ST (descendant ?sd ?d)))

where ?d is an input directory-variable.
The best analog for this problem was the problem of listing all the descendant sub-directories of a directory, since both of them involve recursive traversal of the directory structure in UNIX. However, the analog picked by APU was the problem of deleting all files under a given directory, specified with the post-condition:

(¬(EXIST (?f: file) :ST (occurs ?f ?d)))

where ?d is again an input directory-variable. The reason APU picked this analogy was because occurs and descendant are grouped under a common abstraction contained in APU's concept dictionary. Thus, the systematicity heuristic abstracted both occurs and descendant to (contained OUTPUT-VAR INPUT-VAR), and considered both to be equally good analogs for the target; the solution-structure heuristic then picked the delete-files problem because its outer-level constructs were closer to the target's.

At a more abstract level, APU's inability to pick the right analog can be explained by the fact that APU's estimation of the closeness of two problems in the implementation domain is based solely on its assessment of the closeness of the two problems in the specification domain. A better organization of the concept dictionary, so that the distance between concepts in the specification domain reflects the corresponding distance in the implementation domain, might avoid some of these missed analogies.

The experiment also shows that H1 and H2 are the two most important heuristics - as expected. Rows 4 and 5 show the number of missed analogs when one of the two is turned off. Though the table doesn't show it, the problems for which the analogies were missed were also different, indicating that neither heuristic is redundant.

The result in Row 2 was unexpected, since it seems to indicate that the argument abstraction heuristic is unimportant. This was contrary to our experience when we tried the heuristics on isolated examples.
In particular, when the base set of analogs contained several similar analogs, the argument abstraction heuristic was important to select the closest one. The reason we got this result is because of the small set of base analogs - there weren't two analogs sufficiently close as to be indistinguishable without using the argument abstraction heuristic.

Finally, H3 doesn't seem to contribute much to the effectiveness of the retrieval. This is again due to the nature of the sample space, where most problem descriptions did not have syntactic cues like keywords and recursion.

We remark that, although the heuristics are specialized for the domain of synthesizing UNIX programs, we believe that the general strategy of using the abstract solution structure (H1), the problem formulation (H2 and H3), and generic objects (H4) should be adaptable to other domains.

Experiment 2: Speed-up using Derivational Analogy

For this experiment, we selected 10 examples at random from our sample set to form the set of source analogs. From the same sample set, we selected another set of 10 examples (again using a random number generator) and measured the times taken to synthesize a program for each of them, once with the analogical reasoner off, and once with the analogical reasoner turned on. This was repeated with 20 different sets of base analogs.

Figure 2 shows the result of one typical run and Figure 3 shows the speed-up achieved on the first 5 runs.

Figure 2: Sample data showing the speedup of program synthesis using derivational analogy (synthesis times with and without analogy, problem numbers 1-10)

Figure 3: Speed-up obtained on 5 different sets of source analogs

Discussion. Figure 3 shows that using derivational analogy, the average time to synthesize programs is reduced by half. This is not as great as we had expected based on our experience on isolated examples.
Nevertheless, the result is significant, because it demonstrates that derivational analogy is an effective technique for improving problem-solving performance, not only on isolated examples, but on populations of problems too.

There are several factors that affect the generality of these results. First, it is based on the assumption that problems are drawn from the set of basic concepts and constraints with a uniform distribution. However, in practice, we expect a small set of concepts and constraints to account for a large share of the problems encountered in real life. In that case, with a judicious choice of which problems to store in the derivation history library, the number of problems for which close analogs can be found will be much larger than the number of problems without analogs. Consequently, the benefits of using derivational analogy would increase.

Figure 4: The time taken to retrieve analogs as a function of library size (retrieval time for all problems, in sec, vs. derivation library size)

There are two potential ways in which replay can improve problem-solving performance. One is by avoidance of failure paths encountered by a planner during initial plan-synthesis. This is the primary source in domains where there are a few general purpose rules and the planner has to search considerably in order to find a solution. The other (less substantial) speed-up is obtained by identifying solution spaces which are identical to a previously encountered one. In such situations, instead of traversing the solution space, the problem-solver can copy the solution directly. This is the primary source of speed-up in systems like APU, where there are a large number of focussed rules. The planner does not spend much time in backtracking; however, there are common sub-problems that arise frequently. (Again, we expect the 80-20 rule to apply in most domains, i.e.
80% of the problems would be generated from 20% of the solution space, implying that there would be a large number of problems that are analogous, compared to non-analogous ones.)

A promising observation from figure 2 is that, when target problems do not match analogs in the library, the degradation in performance is small (problems 2, 6, 7), compared to the improvement in performance when problems match (problems 4, 5). This suggests that unless the number of mismatches is much larger than the number of matches, derivational analogy would improve problem-solving.

Also, the speedup obtained for larger problems is generally greater than the speedup for smaller problems. This could be used to decide when it would be advantageous to use replay, based on an estimation of the size of a user-specified problem.

Experiment 3: Retrieval Time

Our third experiment was designed to measure the cost of retrieving analogs as a function of the size of the derivation history library. To measure this, we incrementally increased the size of the derivation history library (in steps of 5), and measured the time taken to retrieve analogs for all the 45 problems. Figure 4 shows the result of one typical run of this experiment.

Discussion. The figure shows that the retrieval time increases almost linearly with the number of problems in the library. The time taken to search for analogs essentially depends on the average number of problems indexed on each feature. For the retrieval time to converge, the average number of problems per feature should approach a constant value. For our sample set, we found that this was not true. The average number of problems per feature after each set of 5 problems were added to the library were: 1.88, 1.85, 2.11, 2.5, 2.2, 3.58, 3.2, and 3.28, which corresponds remarkably well with (the peaks and valleys in) the graph.

BHANSALI & HARANDI 525
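The problems-per-feature averages reported above are easy to compute directly from a feature index, and the same ratio can serve as an admission test for the library. The sketch below is a hypothetical illustration (the index layout and function names are our own, not APU's): admit a new derivation only if indexing it does not raise the ratio, i.e., only if it contributes new features.

```python
from statistics import mean

# Hypothetical sketch: compute the average problems-per-feature ratio of a
# derivation history library index, and admit a new derivation only if
# indexing it does not raise that ratio (i.e., it contributes new features).
# The index layout and names are our own assumptions, not APU's.

def problems_per_feature(index):
    """index maps each feature to the list of problems indexed under it."""
    if not index:
        return 0.0
    return mean(len(problems) for problems in index.values())

def should_store(index, new_problem, new_features):
    # Simulate adding the new problem to a copy of the index.
    trial = {f: list(ps) for f, ps in index.items()}
    for f in new_features:
        trial.setdefault(f, []).append(new_problem)
    return problems_per_feature(trial) <= problems_per_feature(index) or not index

index = {"recursion": ["p1"], "directory": ["p1", "p2"]}
print(should_store(index, "p3", {"recursion", "directory"}))    # redundant problem
print(should_store(index, "p3", {"permissions", "ownership"}))  # novel features
```

On this toy index the redundant problem is rejected (the ratio would rise from 1.5 to 2.5) while the problem with novel features is admitted.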
This provides a clue as to when problems should be stored in the derivation history library: if adding a set of problems to the library increases the ratio problems/feature, it suggests that the new problems are quite similar to the problems already existing in the library, and hence their utility would be low. On the other hand, if the ratio decreases or remains the same, the problems are different from the ones in the library and should probably be added.

Finally, the figure shows that the retrieval time itself is small (less than 4 seconds on average) compared to the time to synthesize programs.

Conclusion

The significance of the experiment reported here lies in elucidating some of the criteria that determine the viability of using derivational analogy for improving problem-solving behavior, specifically when the cost of retrieving the right source analogs and the cost of applying incorrect analogs (called globally divergent situations [Veloso and Carbonell, 1991]) is factored in. To summarize, the derivational analogy approach is likely to be cost-effective if:

1) The distribution of problems in a domain is such that a large proportion of the problems are analogous compared to non-analogous ones.

2) It is possible to abstract features of a problem specification, so that the distance between problems in the abstracted feature space reflects the distance between problems in the implementation space.

3) The ratio problems/feature of problems in the case library converges.

4) The average complexity of solution derivation is large compared to the overhead of analogical detection.

The last two conditions also suggest when a solution derivation should be incorporated in a case library. This, in conjunction with an empirical analysis [Minton, 1988] of the utility of stored derivations, can be used to maintain a case library that would maximize the benefit/cost ratio of using the derivation history library.

References

[Baxter, 1990] Ira D. Baxter.
Transformational Maintenance by Reuse of Design Histories. PhD thesis, University of California, Irvine, December 1990.

[Bhansali and Harandi, 1990] Sanjay Bhansali and Mehdi T. Harandi. APU: Automating UNIX programming. In Tools for Artificial Intelligence 90, pages 410-416, Washington, D.C., November 1990.

[Bhansali, 1991] Sanjay Bhansali. Domain-based program synthesis using planning and derivational analogy. PhD thesis, University of Illinois at Urbana-Champaign, 1991. (Forthcoming).

[Blumenthal, 1990] Brad Blumenthal. Empirical comparisons of some design replay algorithms. In Proceedings Eighth National Conference on Artificial Intelligence, pages 902-907, Boston, August 1990. AAAI.

[Carbonell and Veloso, 1988] Jaime Carbonell and Manuela Veloso. Integrating derivational analogy into a general problem solving architecture. In Proceedings Case-based Reasoning Workshop, pages 104-124, Clearwater Beach, Florida, May 1988.

[Carbonell, 1983] Jaime G. Carbonell. Derivational analogy and its role in problem solving. In AAAI, pages 64-69, 1983.

[Gentner, 1983] D. Gentner. Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2):155-170, 1983.

[Goldberg, 1989] Allen Goldberg. Reusing software developments. Technical report, Kestrel Institute, Palo Alto, California, 1989. Draft.

[Harandi and Bhansali, 1989] Mehdi T. Harandi and Sanjay Bhansali. Program derivation using analogy. In IJCAI, pages 389-394, Detroit, August 1989.

[Huhns and Acosta, 1987] M.N. Huhns and R.D. Acosta. Argo: An analogical reasoning system for solving design problems. Technical Report AI/CAD-092-87, MCC, Microelectronics and Computer Technology Corporation, Austin, TX, 1987.

[Kambhampati, 1989] S. Kambhampati. Flexible reuse and modification in hierarchical planning: a validation structure based approach. PhD thesis, University of Maryland, College Park, October 1989.

[Minton, 1988] Steven Minton.
Learning Effective Search Control Knowledge: An Explanation-Based Approach. PhD thesis, Carnegie Mellon University, March 1988.

[Mostow and Fischer, 1989] Jack Mostow and Greg Fischer. Replaying transformational derivations of heuristic search algorithms in DIOGENES. In Proc. of the DARPA case-based reasoning workshop, Peniscola, Florida, May 1989.

[Mostow et al., 1989] Jack Mostow, Michael Barley, and Timothy Weinrich. Automated reuse of design plans. International Journal for Artificial Intelligence and Engineering, 4(4):181-196, October 1989.

[Steier, 1987] David Steier. CYPRESS-Soar: a case study in search and learning in algorithm design. In IJCAI, pages 327-330, August 1987.

[Steinberg and Mitchell, 1985] L. I. Steinberg and T. M. Mitchell. The redesign system: a knowledge-based approach to VLSI CAD. IEEE Design & Test, 2:45-54, 1985.

[Veloso and Carbonell, 1991] Manuela Veloso and Jaime G. Carbonell. Learning by analogical replay in PRODIGY: first results. In Proceedings of the European Working Session on Learning. Springer-Verlag, March 1991.

[Wile, 1983] D. S. Wile. Program developments: formal explanations of implementations. Communications of the ACM, 26(11):902-911, 1983.
Mark Derthick
MCC
3500 West Balcones Center Drive
Austin, TX 78759
derthick@mcc.com

Abstract

This paper discusses unsupervised learning of orthogonal concepts on relational data. Relational predicates, while formally equivalent to the features of the concept-learning literature, are not a good basis for defining concepts. Hence the current task demands a much larger search space than traditional concept learning algorithms, the sort of space explored by connectionist algorithms. However the intended application, using the discovered concepts in the Cyc knowledge base, requires that the concepts be interpretable by a human, an ability not yet realized with connectionist algorithms. Interpretability is aided by including a characterization of simplicity in the evaluation function. For Hinton's Family Relations data, we do find cleaner, more intuitive features. Yet when the solutions are not known in advance, the difficulty of interpreting even features meeting the simplicity criteria calls into question the usefulness of any reformulation algorithm that creates radically new primitives in a knowledge-based setting. At the very least, much more sophisticated explanation tools are needed.

Introduction

This research is being carried out in the context of the Cyc project, a ten year effort to build a program with common sense [Lenat and Guha, 1990]. Much of the effort is devoted to building a knowledge base of unprecedented size. In such a large KB there will inevitably be important concepts, relations, and assertions left out, even within areas that have been largely axiomatized. Inductive learning algorithms that can discover some of this missing information would be helpful, but it is crucial to be able to explain new concepts in order to properly integrate the discovered knowledge.

In this paper, concept learning is cast as assigning features to individuals based on training examples consisting of tuples of those individuals.
Each possible value of each feature represents a concept. For instance, the feature COLOR generates the concepts Red, Blue, etc. Hopefully such an algorithm can be used directly to discover useful new concepts in the Cyc KB, and can be used indirectly in analogical reasoning in Cyc. Solving Hinton's [1986] family relations problem in a more pleasing manner has been the first important milestone.

The Family Relations problem is described in the next section. Then the Minimum Description Length (MDL) principle [Rissanen, 1989], in which theories are judged to be good in direct proportion to how small they are, is very briefly described, followed by its particular use in feature discovery on the family relations problem. Finally, some implications of this work on the general problem of reformulation are discussed in light of experience on real Cyc data. The algorithm is called MDL/OC, for Minimum Description Length/Orthogonal Clustering. A more detailed description, including results on a diagnosis problem also used by Michalski and Chilausky [1980], real Cyc data, and an analogical mapping also used by Falkenhainer et al. [1989], is given in Derthick [1990].

Problem Representation

For the purposes of this paper, the goal of a learning knowledge representation system is to develop a domain theory from a set of ground atomic assertions that allows it to decide the truth or falsity of other ground assertions. The Family Relations problem's assertions use only binary predicates, which are usually called features in the concept learning literature. MDL/OC can handle discrete predicates of arbitrary arity.

The family relations training data consists of the 112 3-tuples representing true family relationships in the family trees shown in Figure 1 (top). The elements of the tuples include the 24 people and the 12 predicates husband, wife, son, daughter, father, mother, brother, sister, nephew, niece, uncle, aunt.
These predicates make for poor features; the set of people that have the value Pierro for the feature father, for instance, has only two members, so it is not very good for generalizing over. A decision tree algorithm that does not redescribe the input by constructing new terms would construct a shallow tree that has a large branching factor. And previous constructive induction algorithms, which only consider simple boolean combinations of existing features or values, would not do much better. Another way to see the poor match of this data to previous concept learning algorithms is to look at the feature-vector representation of the problem in Figure 1 (bottom). There are many null values and some multi-values. The number of values of each feature is large.

DERTHICK 565

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

[Figure 1 here: two family trees (one English, one Italian), a few training tuples, and the feature-vector table induced by the given predicates. Example training tuples:

(Christopher Wife Penelope)
(Christopher Son Arthur)
(Christopher Daughter Victoria)]

Figure 1: Top, the family trees from which the 112 training tuples are derived. "=" means "spouse," and lines mean "child." Middle, a few examples of the training tuples for the family relations problem, in the syntax accepted by MDL/OC. It is possible to use a feature-value representation of this domain (Bottom), in which the given predicates Husband, Wife, Son, Daughter, Father, Mother, Brother, Sister, Nephew, Niece, Uncle, Aunt induce a feature vector on each individual.

Much more desirable would be to construct new predicates that make for more natural features, such as Sex, Nationality, or Generation.
These features would still take a person as an argument, but their values would be constructed: Male, Female, English, Italian, Old, Middle, and Young. Of course the algorithm cannot come up with English names for its features and values. But this paper would be even less readable using gensyms, so it is important to remember that only the 24 people and 12 given predicates appear in the input, and that other features and values are names I attached to partitions and their elements constructed by the algorithm. Somewhat confusingly, MDL/OC does not distinguish between predicates and arguments. It interprets the training set strictly as a set of tuples, and discovers new ways to categorize the elements. To emphasize this, the given predicates and people are sometimes collectively called individuals. The feature that distinguishes them is useful to the algorithm, so Type is another constructed feature, with values Person and Given-Predicate.

In algorithms that represent an individual by its feature-vector, all assertions about one individual are present in a single training example, and testing is accomplished by completing one or a few missing feature values of a new individual. In MDL/OC, a training example consists of only a single assertion, of the form (PERSON1 GIVEN-PREDICATE PERSON2). Testing generalization on new ground assertions (termed "completion") is done by a separate system. The intent is to use the discovered features to help extend a knowledge base with a subtheory of the training domain, and to have completion done by the knowledge-based system. However a simple-minded algorithm that does not require a human to integrate the features with existing concepts in a KB is described in the evaluation of results section.

Feature Discovery

Coding Scheme

MDL is a very powerful and general approach that can be applied to any inductive learning task involving sufficient data.
It appeals to Occam's razor: the intuition that the simplest theory that explains the data is the best one. The simplicity of the theory is judged by its length under some encoding scheme chosen subjectively by the experimenter. Its ability to explain the data is measured by the number of bits required to describe the data given the theory. There is a tradeoff between having a large theory that requires few additional data bits to explain the particular training set, and having a small theory that requires much of the information to be left as data. To ensure that the encoding scheme is information-preserving, the task is often thought of as transmitting the training set to a receiver using the fewest bits. There must be an algorithm for the receiver to reconstruct the data from the compressed form.

This subsection assumes that useful features like Sex have already been found, and the concern is only to describe how they are used to encode and decode the training data, and to measure the resulting encoding length. This length serves as a declarative evaluation function over sets of features. The next section describes how a weak method is used to search for a set of features that minimizes the evaluation function.

MDL/OC's encoding scheme reflects two characteristics I believe intuitively good features should have: they should enable simple expression of the domain constraints, and they should not be redundant. Redundancy is minimized by any MDL scheme; simplicity is enforced by constraining each feature separately. For instance, the Sex feature of the GIVEN-PREDICATE element of a tuple perfectly predicts the Sex feature of the PERSON2 element, independent of the Type, Nationality, Generation, and branch of the family. For instance, Mother has the value Female, and any motherhood assertion will have a Female PERSON2. The other features listed have their own independent regularities.
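The theory-versus-data tradeoff described above can be made concrete with a toy example. This is not MDL/OC itself; the fixed 8-bit cost for transmitting the model parameter is an arbitrary assumption. A regular binary string is cheaper to send as a small Bernoulli "theory" plus residual data bits than literally.

```python
import math

# Toy MDL illustration (not MDL/OC): encode a binary string either
# literally (no theory, p = 0.5) or with a Bernoulli model whose
# parameter costs a fixed, assumed number of theory bits. The better
# "theory" minimizes theory bits + data bits given the theory.

def data_bits(seq, p_one):
    """Shannon code length of seq under a Bernoulli(p_one) model."""
    p_one = min(max(p_one, 1e-9), 1 - 1e-9)  # guard against log(0)
    return sum(-math.log2(p_one if b else 1 - p_one) for b in seq)

def description_length(seq, p_one, theory_bits):
    return theory_bits + data_bits(seq, p_one)

seq = [1] * 90 + [0] * 10                       # highly regular data
literal = description_length(seq, 0.5, 0.0)     # no theory: 1 bit/symbol
modeled = description_length(seq, 0.9, 8.0)     # 8 theory bits + data
print(round(literal, 1), round(modeled, 1))     # → 100.0 54.9
```

The modeled encoding wins because the data is regular; on an incompressible string the extra theory bits would not pay for themselves, which is exactly the tradeoff the text describes.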
Once each feature of a missing testing tuple element has been found, the individuals with these feature-values can be looked up. MDL/OC gets its power from discovering features that make this kind of underlying regularity apparent. The convenience afforded by assumptions of independence is embraced almost universally in Bayesian inference. It is surprising that there have been few attempts to discover orthogonal concepts.

566 LEARNING THEORY AND MDL

[Figure 2 here: a worked example encoding the training tuple (Penelope husband Christopher), showing the given data, the constructed feature assignments (Type, Sex, Nationality, Generation) for each tuple element, and the resulting feature-tuples.]

Figure 2: The encoding scheme used by MDL/OC. To transmit the training example (Penelope husband Christopher), the encoder looks up the feature-vectors for each component of the tuple (columns). Each row then represents a feature-tuple to be transmitted, using code-words that have previously been sent to the receiver. The feature value names in this figure are attached by the experimenter as an aid to understanding. When applied to a given predicate such as husband, the label "Italian" is in parentheses to indicate that, while husband formally has the same value for this feature as Emilio, it is not really meaningful; no pairs of rules differ only in their treatment of the Nationality feature of the GIVEN-PREDICATE of a tuple. "2nd-Gen" is in parentheses because it really means "same generation" when applied to a GIVEN-PREDICATE. This overloading doesn't cause problems because the subset of the domain that can appear as a GIVEN-PREDICATE is disjoint from that which can appear as a PERSON1 or PERSON2.

One straightforward encoding scheme would assign each individual a code-word and transmit the elements of each training tuple in order.
However this scheme ignores the regularities that hold across each training tuple. Since we are looking for regularities across orthogonal features, MDL/OC's coding scheme (Figure 2) transmits tuples of feature-values across individuals (called a feature-tuple) for each feature, as opposed to a tuple of feature-values across features (called a feature-vector) for each individual. At the receiver, the process is just the reverse. The code-words representing feature-tuples are decoded and rearranged into feature-vectors, which are decoded into individuals. In case multiple individuals have the same feature vector, the message is augmented with disambiguation code-words. For instance, using the features in Figure 2, both Penelope and Christine have the feature-vector PFE1, so every training tuple in which one of them appears will require an extra bit.

Using the notation in Figure 3, what ends up being transmitted for each training tuple, therefore, is d (= 4 in the figure) feature tuples, rather than n (= 3 for Family Relations) feature vectors. This is a win if the codes for feature tuples are shorter than codes for feature vectors by at least n/d, on average. For Family Relations, the average code length for feature vectors is H(f(T)) = 5.1 bits. The average feature tuple code lengths, H(f_i(S)), are: Type = 0.0, Sex = 1.9, Nationality = 1.0, Generation = 2.6. So this domain has regularities allowing feature tuple codes that are indeed much shorter than feature vector codes.

By Shannon's coding theorem, and assuming optimal codes, the number of bits required to transmit each symbol is the negative log of the symbol's probability of occurrence. Summing this for all the feature-tuple and disambiguation codewords gives the total number of bits for the data term. (Any real code will require an integral number of bits.
However, all the codes in this paper are "simulated," and the lengths are calculated using information-theoretic formulas whose range is the non-negative real numbers.)

The theory description consists of the number of features (d), individuals (q), training examples (l), training tuple size (n), the number of values for each feature (c_i), the assignment of individuals to feature vectors (f), and the feature-tuple and disambiguation codes. There is insufficient space here to derive the evaluation function in any detail. In principle, it is just turning the crank. A code table can be transmitted as a histogram of symbol frequencies; both transmitter and receiver can apply a previously agreed-on algorithm to derive an optimal code from this. Since the total number of symbols and the alphabet size (Σ c_i) are known to the receiver, straightforward counting arguments lead to a formula for the number of bits to transmit the code tables.

In practice, however, this efficient but abstract approach does not produce a smooth evaluation function. Although it takes about twice as many bits, the search works much better with an evaluation function based on transmitting the code tables entry by entry. Each entry specifies the code for one symbol.

c_i = 2: Feature arities
S: Random variable ranging over training tuples. Uniform probabilities.
T: Random variable ranging over individuals. Probabilities match the training data.
f(T): Feature vector for individual T
f_i(T): ith feature value for individual T
f_i(S): ith feature tuple for training example S
H(.): Entropy of a random variable, i.e., the expected number of bits to encode a value of the variable: -Σ Pr(.) log Pr(.)

Figure 3: Notation used in specifying the MDL/OC evaluation function. Variable values are given for the Family Relations problem and for the search parameters given in the search algorithm section.
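The per-feature bookkeeping above can be sketched in a few lines: transpose training tuples into feature-tuples, then measure each feature's stream by its entropy, the average optimal code length -Σ p log2 p. The tiny feature assignment and tuples below are hypothetical stand-ins, not the paper's actual data; note that the Type stream costs 0.0 bits, just as in the text.

```python
import math
from collections import Counter

# Sketch of MDL/OC-style coding-length bookkeeping on hypothetical data:
# transpose training tuples into per-feature "feature tuples" and compute
# each stream's entropy (average optimal code length in bits per tuple).

features = {
    "Penelope":    {"Type": "Person", "Sex": "F"},
    "Christopher": {"Type": "Person", "Sex": "M"},
    "husband":     {"Type": "Pred",   "Sex": "M"},
    "wife":        {"Type": "Pred",   "Sex": "F"},
}

train = [("Penelope", "husband", "Christopher"),
         ("Christopher", "wife", "Penelope")]

def feature_tuple(tup, feat):
    """One feature's values across the elements of a training tuple."""
    return tuple(features[x][feat] for x in tup)

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

for feat in ("Type", "Sex"):
    stream = Counter(feature_tuple(t, feat) for t in train)
    print(feat, dict(stream), round(entropy(stream), 2))
```

Every tuple has the same Type pattern (Person, Pred, Person), so that stream carries no information, while the Sex stream needs one bit per tuple on this toy data.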
Quinlan (personal communication) found the same factor of two worsening in total number of bits, along with improved search performance, when he used a less abstract coding scheme in his MDL approach to learning decision trees [Quinlan and Rivest, 1989]. Here this effect is apparently due to smoothness, because the intuitive feature sets are local optima in both models.

Several approximations are made to further enhance the smoothness of the evaluation function. The average number of bits to specify the codes in the code tables is approximated by the entropy of the random variable being coded (or the conditional entropy of the random variable given the feature-vector in the case of the disambiguation code). This corresponds to weighting the average using the observed probabilities in the data. The number of bits to transmit an integer z is approximated by log z. Finally, the log of the arity of a feature is approximated by the entropy of its probability distribution over all occurrences of all individuals in the training set, log c_i ≈ H(f_i(T)). When the feature partitions the individuals so that each value occurs equally often in the training set, this approximation is exact. As the partition becomes more uneven, the approximation varies smoothly down towards the next lower arity. I found no smooth approximation for d, which is expected to be very small, so it is simply dropped.

The evaluation function to minimize, including both theory and data terms, and all the approximations, is

E(f) = Σ_{i=1..d} [ (q + 1) H(f_i(T)) + H(f_i(S)) (2^{n H(f_i(T))} + l) ] + (nl + q) (H(T) - H(f(T))) + log q/n

The constant terms, H(T) and log q/n, can be ignored by the optimization algorithm. If there are no features, f contains no information, and this reduces to (nl + q) H(T) + log q/n. This is a more precise calculation for the straightforward encoding scheme mentioned above in which each tuple element is transmitted separately. As f becomes more informative, H(T) is reduced by H(f(T)), but the cost of encoding f increases. In the other limit, where everything is included in the theory, there is a single feature with q = 36 values, one for each individual. The feature-tuple code table will contain q^n = 46,656 entries, so this evaluation function, based on transmitting the table explicitly, blows up (Table 1).

Features      Theory    Feature tuples   Disambiguation   Total
(T S N G B)   298       625              339              1262
…             349       922              78               1348
…             265       296              899              1460
…             215       218              1066             1500
…             215       113              1179             1507
…             238       296              1036             1571
(T)           199       0                1402             1601
…             197       0                1711             1908
(RANDOM)      222       335              1375             1933
(IDENTITY)    317,808   762              0                318,570

Table 1: The value of the evaluation function for several sets of features that I hand-constructed to solve the problem. The units are bits. T=Type, S=Sex, N=Nationality, and RANDOM are 2-valued; G=Generation and B=Branch are 3-valued; IDENTITY is 36-valued. When constrained to find five binary features, MDL/OC achieved a slightly better score (1260) using a 2-valued version of Generation, and a feature distinguishing the parents from the non-parents instead of Branch.

Search Algorithm

By virtue of the above declarative specification of what constitutes a good set of features, the learning task has been cast as a well-defined optimization problem: minimize E(f) over all possible sets of partitions. Initially the individuals are assigned random values for each feature. This constitutes the initial feature-set hypothesis. The search procedure thus requires the maximum number of features sought, d_max, and the maximum number of values for each, c_i,max, to be fixed in advance.

The search is then carried out by simulated annealing.
This is a generalization of hill-climbing, and like hill-climbing the search proceeds by evaluating the evaluation function for a configuration of the search space that neighbors the currently hypothesized configuration. Depending on this value, we may "move" to the neighboring configuration as the new hypothesis. In MDL/OC, neighboring feature sets in the search space are those for which the value of a single feature of a single individual differs. For instance, a move might consist of changing the value of Pierro's Nationality feature from Italian to English. Since so little is changed between neighboring feature sets, it is expected that the evaluation function will change smoothly. This allows the search to descend gradually and hopefully find the optimal feature set. I have tried more complicated kinds of moves, but this topology has worked best.

Simulated annealing differs from hill climbing in that uphill moves are sometimes made, in an effort to avoid becoming stuck in local optima. The greater the improvement in the evaluation function, the greater the chances of accepting a move. The amount of randomness in the decision is controlled by the temperature, T, which is gradually decreased during search. Numerically,

P(move) = 1 / (1 + e^{(E_new - E_current)/T})

The parameters used for the Family Relations problem were as follows: anneal from an initial temperature of 500.0 and gradually decrease until the probability of accepting any move falls below 0.001. This happens around a temperature of 1.0 for this problem. Each time a move is considered, the temperature is multiplied by .999999. This takes about four hours on a Symbolics 3630.

Before running any experiments, I constructed what I expected would be the best set of features for this problem. Table 1 shows that the evaluation function ranks combinations of these features in a reasonable way. However MDL/OC finds slightly different features.
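The annealing loop just described can be sketched as a minimal skeleton: single-change moves, the logistic acceptance rule given above, and geometric cooling. The toy energy function, the fast cooling rate, and the overflow clamp are placeholders of our own, not MDL/OC's actual settings.

```python
import math
import random

# Minimal skeleton of the annealing search described above: evaluate a
# neighbor, accept with probability 1 / (1 + exp((E_new - E_current) / T)),
# then cool geometrically. Downhill moves are favored; uphill moves
# survive at high temperature. Toy settings, not MDL/OC's.

def anneal(energy, neighbor, state, t=500.0, cooling=0.999, t_min=1.0):
    e = energy(state)
    while t > t_min:
        candidate = neighbor(state)
        e_new = energy(candidate)
        # Clamp the exponent to avoid floating-point overflow on very
        # bad uphill candidates (an assumption added for robustness).
        arg = min((e_new - e) / t, 50.0)
        if random.random() < 1.0 / (1.0 + math.exp(arg)):
            state, e = candidate, e_new
        t *= cooling
    return state, e

random.seed(0)
# Toy problem: minimize x^2 over the integers with +/-1 moves.
best, e = anneal(lambda x: x * x, lambda x: x + random.choice((-1, 1)), 40)
print(best, e)
```

At high temperature the acceptance probability is near 0.5 either way, so the search wanders; as T approaches t_min it behaves like hill-climbing, which mirrors the schedule described in the text.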
This is acceptable as long as they at least equal the expected ones according to both the evaluation function and intuition. Finding five binary features, the results obtained over 20 trials were as follows: it always found Type, Sex, and Nationality. It found Parental-Status (with two values, Parent and Non-Parent) 19 times, and a skewed version the other time. It found a 2-valued version of Generation (with values Middle-Generation and Old-or-Young-Generation) 13 times, and a skewed version the other 7 times. Thus a total of 92 of 100 discovered features were ones to which intuitive English descriptions can be attached, and the remaining ones were recognizable variants. Annealing ten times as fast, 83% met this criterion; with another factor of 10 reduction in search time the result was 51%. Hill climbing gives only 15%. A greedy algorithm that begins with the identity feature and successively combines the pair of feature values that results in the shortest message length also did very poorly. Noise immunity was quite good, achieving 92% when 10% of the 112 correct patterns were replaced by tuples in which each element was randomly selected from the 36 individuals, and 64% when 25% of the patterns were replaced. For these features the encoding length is 1260, marginally better than for my solution. In retrospect, Parental-Status is more intuitive than my attempt to define Branch-of-the-Family.

Scaling

There are q individuals and d features, so the search space size is Π_{i=1..d} c_i^q. Each point has q Σ_{i=1..d} (c_i - 1) neighbors. The time to calculate the change in evaluation function due to a move is proportional to the number of training examples in which the moving individual appears. Assuming individuals rarely appear multiple times in a training tuple, this can be approximated by nl/q. The number of moves necessary to find a good solution is difficult to estimate.
The best theoretical bound that can be proven for simulated annealing is proportional to the search space size, which is multiply exponential. In practice, simulated annealing often works reasonably fast, so useful scaling estimates can only be found empirically. But it is hard to compare search time across domains, because the difficulty of finding the regularities in a domain is hard to quantify. The best approach to equating solution quality seems to be to adjust the annealing rate until it is just slow enough to give a smooth, non-monotonic energy versus temperature plot. Using this criterion, the largest problem I have tried has 30 times more training examples, 200 times more individuals, and the same values for d, n, and the c_i. It requires between three and four orders of magnitude more CPU time. Without more experiments, all that can be said is that it is worse than linear and much better than exponential. One way to fight this scaling is to shrink the search space by finding only a few features and then "freezing" them while more are sought.

Evaluation of Results

Just finding features is of no use unless they can be used to infer new facts, and the ultimate goal of this research is to do this by extending the Cyc knowledge base with new concepts and rules based on the discovered features. But even without understanding the features and extending the KB by hand, it is possible to do completion by considering the feature-tuple frequencies as simple kinds of rules. Note that the following procedure is not part of MDL/OC, but only an attempt to render its results in a form suitable for quantitative comparison with other algorithms. For example, if the assertion (Jennifer brother James) had been left out of the training set, the completion for (Jennifer brother ?) could still be found from the five binary features found by MDL/OC as follows: the feature-tuples that occur in the training data for Sex are FMM, MMM, MFF, and FFF.
Only FMM matches the values of this feature for the two known elements of the testing tuple, so the missing element must have the value Male. By similar reasoning, the missing element must have Type=Person, Nationality=English, and Generation=Middle.
DERTHICK 569
For Parental-Status, Jennifer has the value NonParent, and brother has the value Parent, and both the patterns NPP and NPN occur in the training data. However, the former occurs 19 times (leaving out the current assertion) and the latter only 4. Therefore we guess that the missing element is a Parent. James is the only individual with the guessed feature vector. However, using these five features, even when the feature-tuple frequencies are based on all 112 assertions, completion percentage on those same 112 assertions is only 62%. This decreases to 47% when the frequencies are based on half the assertions and completion is tested on the other half. In contrast, with a leave-four-out testing paradigm, Hinton achieved 88% on unseen assertions, and Quinlan 96%. Since few algorithms have been applied to the family relations data, MDL/OC was also run on the large soybean data in spite of the fact that it is not relational. Completion was 69% on the training set and 74% on the test set, in contrast to others who have attained close to 100% [Michalski and Chilausky, 1980]. Primarily this poor completion performance is because the MDL principle limits discovered knowledge to that which can be expressed more compactly than the data, and MDL/OC's "rule language" is so simple that only a small class of knowledge can be captured concisely. Better completion was achieved on Family Relations with an encoding scheme that assumed the receiver already knew PERSON1 and RELATION, and so only rewarded features relevant to predicting PERSON2. When features with up to five values were sought, a solution was found that gave 89% and 66% completion on the two testing methods. However, the features were a mess.
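The completion procedure described above (which, as noted, is not part of MDL/OC itself) can be rendered as a short sketch: filter the observed feature-tuples to those matching the known elements, then guess the most frequent completing value. The helper name `complete` and the toy feature assignment and triples below are mine, not the paper's training data.

```python
from collections import Counter

def complete(feature, tuples, known):
    """Guess the feature value of the missing third element of a triple.

    feature: dict mapping individual/relation -> its value on one feature
    tuples:  training triples (person1, relation, person2)
    known:   (person1, relation) of the query, with person2 unknown
    """
    # Frequencies of feature-tuples observed in the training data.
    counts = Counter((feature[a], feature[r], feature[b]) for a, r, b in tuples)
    v1, v2 = feature[known[0]], feature[known[1]]
    # Keep only feature-tuples whose first two slots match the known elements.
    candidates = {ft[2]: n for ft, n in counts.items() if ft[:2] == (v1, v2)}
    return max(candidates, key=candidates.get) if candidates else None

feature = {"Jennifer": "F", "James": "M", "brother": "M",
           "Arthur": "M", "sister": "F", "Victoria": "F"}
tuples = [("Jennifer", "brother", "Arthur"),
          ("Victoria", "brother", "James"),
          ("James", "sister", "Victoria")]
print(complete(feature, tuples, ("Jennifer", "brother")))  # -> M
```

Repeating this for each discovered feature, and intersecting the resulting constraints, identifies the missing individual, as in the (Jennifer brother ?) example.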
I believe this indicates that the encoding scheme described here does a good job of rewarding only simple features, which are likely to be most easily interpretable. It certainly indicates that completion performance does not always mirror feature interpretability. Given that the goal is enhancing a knowledge base, rather than doing reasoning directly, this is an appropriate tradeoff. Interpretation is hard enough even when learning is very selective. Indeed, when run on real data from the Cyc KB, the algorithm found features that were largely uninterpretable. Having rejected completion percentage as a measure of feature intuitiveness, and unable to interpret and judge this subjectively, it is still possible to evaluate them on the basis of the amount of compression achieved. For a naive control encoding, I use the MDL/OC scheme with exactly one feature, which segregates the predicates from the individuals. Referring to Figure 1, the ratio for the Family Relations problem is 1262/1601 = 79%. For one run on Cyc data involving countries' trading partners, it is 95%, which, considering how regular the family relations data is, I think is significant. With considerable effort, involving automated search for Cyc predicates correlating with the features, I determined that one of them segregated the countries according to economy size. I believe this indicates that even in natural domains, MDL/OC's restricted class of orthogonal features is useful, and that even good features are hard to interpret.

Related Work

Many hierarchical clustering algorithms have been developed, but as mentioned in the problem representation section, only those that construct new features are suited to the Family Relations problem, and none exist that can form features as far removed from the given ones as is required here. Additionally, any hierarchical algorithm will suffer from data fragmentation.
Near the leaves they will have so few training instances that observed statistical regularities are almost entirely spurious. Pagallo and Haussler [1990] suggest a way to recombine identically structured subtrees after the tree is learned. This can be thought of as orthogonalizing. Because orthogonal features are not context sensitive, they should be ideal for analogical reasoning, which after all is just extending regularities beyond their customary context. It is appealing that in this sense, doing completion with MDL/OC is doing analogical reasoning without explicitly going through the several stages customary in the literature. Back-propagation approaches to feature discovery [Hinton, 1986; Miikkulainen and Dyer, 1987] have suffered from the asymmetric treatment of input and output units, and the use of indirect methods like bottlenecks for encouraging good representations, rather than incorporating an explicit declarative characterization of conciseness. This is not an inherent limitation; a back-propagation version of MDL/OC is possible. Declarative evaluation functions are a more appealing a priori way to prevent overfitting than the post hoc stopping or pruning criteria using multiple partitions of the data into training set, stop-training set, and test set. And these techniques are based on the questionable assumption that completion performance is a good measure of feature quality. Information-theoretic algorithms [Lucassen, 1983; Becker and Hinton, 1989; Galland and Hinton, 1990] avoid the asymmetry of back propagation, but no other algorithm has directly addressed the goals of reducing redundancy or generating a simple completion function. FOIL [Quinlan, 1990] is really not comparable. Quinlan used it to learn intensional definitions of the given predicates in the Family Relations problem. This is much better than the extensional definitions learned here, but he did not construct any new predicates.
CIGOL [Muggleton and Buntine, 1988] learns new concepts from relational data, but does not search a large enough space to construct the kinds of features that MDL/OC uses to constrain the tuple elements. On the other hand, it is unnaturally confining for MDL/OC to rely solely on constraints, such as 'the sex of a female-relation must be female.' Much more informative, when available, is data about other relationships in which the people participate. So an algorithm combining MDL/OC's ability to learn radically new concepts with FOIL or CIGOL's ability to learn rules involving binary predicates would be much more powerful than either approach alone.
570 LEARNING THEORY AND MDL
This paper has described a new approach to unsupervised discovery of orthogonal features and described its strengths and weaknesses. It is based on a well-motivated declarative description of what good concepts are. The only subjectivity that entered into the derivation is the decision to use an MDL approach at all, and the encoding scheme. The resulting evaluation function has no parameters to adjust. Actually finding solutions that optimize the evaluation function, however, is an intractable problem. The search algorithm used in this paper, simulated annealing, does require empirical parameter setting to work well, and the search is slow. Although scaling was briefly examined, more experience with real-life problems will be necessary to evaluate whether good solutions can be found in practice. If orthogonal features exist for a domain, they are better than hierarchical ones, because they allow maximally general inferences. Algorithms to find them should also be more robust, since data fragmentation is avoided. It is disappointing that completion percentage is not competitive with other algorithms; however, this may be a necessary trade-off when seeking simple features.
No other algorithm has discovered such clean, intuitive features for problems like Family Relations that are not naturally represented as feature-vectors. Therefore I believe the MDL approach, which hopefully can be extended to more expressive and sophisticated encoding schemes, is a promising way to deal with the interpretation problem for learning new representation primitives in a knowledge-based setting. Still, the difficulty of interpreting feature vectors learned from real data is surprising and frustrating. Although this problem has been recognized and mostly just accepted as inevitable for connectionist learning systems, there was certainly reason to hope that an algorithm designed to find interpretable features would do so. Blame on MDL/OC is misplaced, I believe. Rather, new kinds of interpretation tools will be required, either acting in concert with a learning algorithm, or post hoc. Searching for correlations between the discovered concepts and existing ones has been of some help in interpreting features learned from real Cyc data. This problem will doubtless come up for other systems as they evolve beyond local modifications to existing representations.

Acknowledgments

I am indebted to Geoff Hinton for advice and encouragement through several active stages of my obsession with the family relations problem, and to Wei-Min Shen for help during this last stage reported here. I am also grateful for tutoring in machine learning and suggestions about MDL/OC and the presentation of this paper to Andy Baker, Sue Becker, Guha, Eric Hartman, Jim Keeler, Doug Lenat, Ken Murray, Jeff Rickel, and an anonymous reviewer.

References

[1] Suzanna Becker and Geoffrey E. Hinton. Spatial coherence as an internal teacher for a neural network. Technical Report CRG-TR-89-7, University of Toronto, December 1989.

[2] Mark Derthick. The minimum description length principle applied to feature learning and analogical mapping.
Technical Report ACT-AI-234-90, MCC, June 1990.

[3] Brian Falkenhainer, Kenneth D. Forbus, and Dedre Gentner. The structure-mapping engine: Algorithms and examples. Artificial Intelligence, 41:1-63, 1989.

[4] Conrad C. Galland and Geoffrey E. Hinton. Experiments on discovering high order features with mean field modules. Technical Report CRG-TR-90-3, University of Toronto, February 1990.

[5] Geoffrey E. Hinton. Learning distributed representations of concepts. In Proceedings of the Eighth Annual Cognitive Science Conference, pages 1-12, Amherst, Massachusetts, 1986. Cognitive Science Society.

[6] Douglas B. Lenat and R. V. Guha. Building Large Knowledge-Based Systems. Addison-Wesley, Reading, MA, 1990.

[7] John M. Lucassen. Discovering phonemic base forms automatically: An information theoretic approach. Technical Report RC 9833, IBM, February 1983.

[8] Ryszard S. Michalski and R. L. Chilausky. Learning by being told and learning from examples: An experimental comparison of the two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis. International Journal of Policy Analysis and Information Systems, 4(2), 1980.

[9] Risto Miikkulainen and Michael Dyer. Building distributed representations without microfeatures. Technical Report UCLA-AI-87-17, UCLA, 1987.

[10] Stephen Muggleton and Wray Buntine. Machine invention of first-order predicates by inverting resolution. In EWSL-88, pages 339-352, Tokyo, 1988.

[11] Giulia Pagallo and David Haussler. Boolean feature discovery in empirical learning. Machine Learning, 5(1):71-99, 1990.

[12] J. Ross Quinlan and Ronald L. Rivest. Inferring decision trees using the Minimum Description Length principle. Information and Computation, 80:227-248, 1989.

[13] J. Ross Quinlan. Learning logical definitions from relations. Machine Learning, 5(3), 1990.

[14] Jorma Rissanen. Stochastic Complexity in Statistical Inquiry.
World Scientific Publishing Company, Singapore, 1989.
Marc K. Albert
Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92717 U.S.A.
albert@ics.uci.edu

David W. Aha
The Turing Institute
36 North Hanover Street
Glasgow G1 2AD Scotland
aha@turing.ac.uk

Abstract

This paper presents PAC-learning analyses for instance-based learning algorithms for both symbolic and numeric-prediction tasks. The algorithms analyzed employ a variant of the k-nearest neighbor pattern classifier. The main results of these analyses are that the IB1 instance-based learning algorithm can learn, using a polynomial number of instances, a wide range of symbolic concepts and numeric functions. In addition, we show that a bound on the degree of difficulty of predicting symbolic values may be obtained by considering the size of the boundary of the target concept, and a bound on the degree of difficulty in predicting numeric values may be obtained by considering the maximum absolute value of the slope between instances in the instance space. Moreover, the number of training instances required by IB1 is polynomial in these parameters. The implications of these results for the practical application of instance-based learning algorithms are discussed.

1 Introduction and Style of Analysis

Several instance-based learning (IBL) algorithms based on variants of the k-nearest neighbor function (k-NN) have performed well in challenging learning tasks (Bradshaw, 1987; Stanfill & Waltz, 1986; Kibler & Aha, 1987; Salzberg, 1988; Aha & Kibler, 1989; Aha, 1989; Moore, 1990; Tan & Schlimmer, 1990; Waltz, 1990; Cost & Salzberg, 1990). However, these investigations contained only empirical evaluations. This paper generalizes our previous mathematical analyses, which restricted the values of k and the instance space's dimensionality (Kibler, Aha, & Albert, 1989; Aha, Kibler, & Albert, 1991).
Early mathematical analyses of k-NN investigated its asymptotic capabilities by comparing it against the Bayes decision rule, a strategy that is given all of the instances' joint probability densities and minimizes the probability of classification error. Cover and Hart (1967) showed that 1-NN's error rate was less than twice that of the Bayes rule and, thus, less than twice the error rate of any decision algorithm. Cover (1968) showed that, as k increases, k-NN's error rate quickly converges to the optimal Bayes rate. However, these analyses assume an infinite number of training instances, which are rarely available in practice. They did not determine how many instances k-NN requires to ensure that it will yield acceptably good classification decisions. Therefore, our analysis employs Valiant's (1984) PAC-learning (probably, approximately correct) model for investigating learnability issues, which states that a class of concepts is polynomially learnable from examples if at most a polynomial number¹ of instances is required to generate, with a certain level of confidence, a relatively accurate approximation of the target concept. This definition of learnability is more relaxed than those used in earlier studies, which required that the algorithms have perfect accuracy with 100% confidence. It is formalized as follows.

Definition 1.1 A class of concepts C is polynomially learnable iff there exists a polynomial p and a learning algorithm A such that, for any 0 < ε, δ < 1, if at least p(1/ε, 1/δ) positive and negative instances of a concept C ∈ C are drawn according to an arbitrary fixed distribution, then, with confidence at least (1 − δ), A will generate an approximation C' ∈ C whose probability of error is less than ε. Moreover, A will halt in time bounded by a polynomial in the number of instances.

This definition states that the approximation generated may be imperfect, but must have its inaccuracy bounded by ε.
Also, A is not required to always generate sufficiently accurate concept descriptions (i.e., within ε), but must do so only with probability at least (1 − δ). The polynomial time bound is automatic for IB1, since the amount of time it takes to generate a prediction is polynomial in the number of training instances. Thus, we will not mention time bounds further in this article. Valiant's model trades off accuracy for generality; it applies to any fixed distribution. Learnability is indicative only of the concept in C that is most difficult to learn and the probability distribution that results in the slowest possible learning rate. Thus, the number of instances required to PAC-learn concepts is a loose upper bound that can be tightened by restricting the set of allowable probability distributions. Therefore, many researchers adapt this model to their own needs by adding restrictions on probability distributions (Li & Vitanyi, 1989; Kearns, Li, Pitt, & Valiant, 1987). Many investigations also analyze a specific set of learning algorithms (Littlestone, 1988; Valiant, 1985; Haussler, 1987; Rivest, 1987; Angluin & Laird, 1988). Our analyses do both. Only the capabilities of a specific k-NN-based learning algorithm (i.e., IB1) are investigated, rather than the learnability of concept classes in general.² Also, predictor attributes are assumed to be numeric-valued and noise-free. The analyses in Section 3.1 address symbolic learning tasks and those in Section 3.2 address the learnability of numeric functions. Since few researchers have investigated learnability for numeric functions, this is a relatively novel contribution to computational learning theory.

¹That is, polynomial with respect to the parameters for confidence and accuracy.
ALBERT & AHA 553
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
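The IB1 prediction rule analyzed in this paper, as described later in the text (a k-NN variant using a majority vote for symbolic targets and a similarity-weighted average for numeric ones, with similarity the inverse of Euclidean distance), can be sketched roughly as follows; the function name and toy data are mine, not the authors' implementation.

```python
import math
from collections import Counter

def ib1_predict(training, query, k=3, symbolic=True):
    """k-NN prediction in the style of IB1: majority vote for symbolic
    targets, similarity-weighted average for numeric ones. `training` is
    a list of (attribute_vector, target_value) pairs."""
    dist = lambda a, b: math.dist(a, b)
    nearest = sorted(training, key=lambda inst: dist(inst[0], query))[:k]
    if symbolic:
        return Counter(t for _, t in nearest).most_common(1)[0][0]
    # Similarity = inverse Euclidean distance (guarded against zero distance).
    weights = [1.0 / max(dist(x, query), 1e-9) for x, _ in nearest]
    return sum(w * t for w, (_, t) in zip(weights, nearest)) / sum(weights)

train = [((0.0, 0.0), "neg"), ((0.1, 0.1), "neg"), ((1.0, 1.0), "pos"),
         ((0.9, 1.0), "pos"), ((1.0, 0.9), "pos")]
print(ib1_predict(train, (0.95, 0.95), k=3))  # -> pos
```

Prediction time is polynomial in the number of training instances, which is why the time bound in Definition 1.1 is automatic for IB1.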
2 Coverage Lemmas

This section presents two lemmas that are used to prove the theorems in Section 3, which concern the PAC-learning capabilities of IB1. These lemmas establish how large a set of training instances S is required to give, with high confidence, a "good coverage" of an instance space. That is, they ensure that, for all instances x in the space, except for those in regions of low probability, there is an instance (or set of k instances) in S that is sufficiently similar to x (i.e., their similarity is above a threshold). This information will be used in the learnability theorems to bound IB1's amount of prediction error. The first lemma applies to the case when k = 1 (where only the single most similar training instance is used to predict target values) while the second lemma allows for k ≥ 1. First, we need to define the notion of an (α, γ)-net.

Definition 2.1 Let X ⊆ ℜ^d have an arbitrary but fixed probability distribution. Then S ⊆ X is an (α, γ)-net for X if, for all x in X, except for a set with probability less than γ, there exists an s ∈ S such that distance(s, x) < α.

The following proof shows that a sufficiently large random sample from a bounded subset of ℜ^d will probably be an (α, γ)-net.

²IB1 performed well in previous empirical evaluations (Aha, Kibler, & Albert, 1991). An attribute-value representation was used for instances, where all attributes are used for predictions (i.e., predictors) except the target attribute, whose value is to be predicted. The k-NN algorithm predicts a given instance's target attribute value from those of its k most similar previously processed instances. IB1 uses a majority vote to predict symbolic values and a similarity-weighted prediction for numeric values, where similarity is defined as the inverse of Euclidean distance (or some monotonically increasing function of that).

Lemma 2.1 Let X be a bounded subset of ℜ^d.
Then there exists a polynomial p such that, for any 0 < α, γ, δ < 1, a random sample S containing N ≥ p(1/α, 1/γ, 1/δ) instances from X, drawn according to any fixed probability distribution, will form an (α, γ)-net with probability at least (1 − δ).

Proof 2.1 Without loss of generality we may assume that X is [0, 1]^d (i.e., the unit hypercube in ℜ^d). This lemma is proven by partitioning [0, 1]^d into m^d disjoint hyper-squares, each with diagonal of length less than α. Thus, pairs of instances lying in a single hyper-square are less than α apart. The desired value for m is found using the Pythagorean Theorem, which shows that m = ⌈√d/α⌉. (We assume, without loss of generality, that ⌈√d/α⌉ ≥ √d/α.) Let S<α be the subset of hyper-squares with probability greater than or equal to γ/m^d. Let S≥α be the set of remaining hyper-squares, which will therefore have summed probability less than (γ/m^d)·m^d = γ. The probability that an arbitrary instance i ∈ [0, 1]^d will not lie in some selected hyper-square in S<α is at most 1 − γ/m^d. The probability that none of the N sample instances will lie in a selected hyper-square in S<α is at most (1 − γ/m^d)^N. The probability that any hyper-square in S<α is excluded by all N sample instances is at most m^d (1 − γ/m^d)^N. Since m^d (1 − γ/m^d)^N < m^d e^(−Nγ/m^d), we can force this probability to be small by setting m^d e^(−Nγ/m^d) < δ. N can then be solved for to yield N > (m^d/γ) ln(m^d/δ). Consequently, with probability at least (1 − δ), each hyper-square in S<α contains some sample instance of S. Also, the total probability of all the hyper-squares in S≥α is less than γ. Since each instance of [0, 1]^d, except for a set of probability less than γ, is in some hyper-square of S<α, then, with confidence at least (1 − δ), an arbitrary instance of [0, 1]^d is within α of some instance of S (except for a set of small probability).

The next lemma extends Lemma 2.1 to k ≥ 1. The following generalization of an (α, γ)-net will be used.
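Assuming the sample-size bound in the proof of Lemma 2.1, N > (m^d/γ) ln(m^d/δ) with m = ⌈√d/α⌉, its growth in the dimension d can be checked numerically; this small calculator is mine, not the authors'.

```python
import math

def net_sample_bound(d, alpha, gamma, delta):
    """Training-set size sufficient for an (alpha, gamma)-net per Lemma 2.1:
    N > (m^d / gamma) * ln(m^d / delta), with m = ceil(sqrt(d) / alpha)."""
    m = math.ceil(math.sqrt(d) / alpha)
    cells = m ** d            # number of hyper-squares in the partition
    return math.ceil((cells / gamma) * math.log(cells / delta))

# The bound grows exponentially with the dimension d:
for d in (1, 2, 3):
    print(d, net_sample_bound(d, alpha=0.25, gamma=0.1, delta=0.05))
```

This makes concrete the practical observation in Section 3.1 that the expected number of required instances increases exponentially with the number of predictor attributes.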
Definition 2.2 Let X ⊆ ℜ^d have an arbitrary but fixed probability distribution. S ⊆ X is a k-(α, γ)-net for X if, for all x ∈ X, except for a set with probability less than γ, there exist at least k instances s ∈ S such that distance(s, x) < α.

Lemma 2.2 Let X be a bounded subset of ℜ^d. Then there exists a polynomial p such that, for any 0 < α, γ, δ < 1, a random sample S containing N ≥ p(1/α, 1/γ, 1/δ) instances from X, drawn according to any fixed probability distribution, will form a k-(α, γ)-net with probability at least (1 − δ).

Proof 2.2 This proof ensures that, with high confidence, at least k of the N sample instances lie in each hyper-square of sufficient probability. The process of drawing instances described in Lemma 2.1 needs to be repeated k times for this lemma. Since the probability distribution is fixed, the set S<α is the same for each repetition. This yields the following inequality for assuring that k training instances are in each hyper-square of S<α:

N ≥ k (m^d/γ) ln(m^d/δ'),

if the desired level of confidence that a single repetition will produce an (α, γ)-net is (1 − δ'). We will get a k-(α, γ)-net if each of the k repetitions produces an (α, γ)-net. A lower bound on the probability of this occurring is (1 − δ')^k. Thus, if we are required to produce a k-(α, γ)-net with confidence (1 − δ), then we should set (1 − δ')^k = (1 − δ). This yields δ' = 1 − (1 − δ)^(1/k). Substituting this expression for δ' above yields

N ≥ k (m^d/γ) ln( m^d / (1 − (1 − δ)^(1/k)) ).

However, since 1 − (1 − δ)^(1/k) ≥ δ/k for small values of δ, the bound is polynomial in all parameters, and this completes the proof. Thus we are guaranteed that, by picking enough random samples, we will probably get a good coverage of any instance space.

3 Convergence Theorems

This section shows that IB1 can PAC-learn a large class of concepts and numeric functions with a polynomial bound on its number of required training instances.
Nonetheless, IB1 cannot learn some target concepts, including those whose predictor attributes are logically inadequate (e.g., the concept of even numbers given positive and negative instances whose only attribute is their integer value). Section 3.1 describes two theorems concerning IB1's ability to predict symbolic values. The second theorem makes a statistical assumption on the distributions of training sets, which requires a small extension of Valiant's model. Both theorems make geometric assumptions to constrain the target concept. Rosenblatt (1962) demonstrated that if a concept is an arbitrary hyper-half-plane (i.e., the set of instances on one side of a hyper-plane), then the perceptron learning algorithm is guaranteed to converge. The proofs analyzing IB1's learning abilities use a more general geometric assumption: that a target concept's boundary is a finite union of closed hyper-curves of finite length.³ Section 3.2 presents a theorem concerning IB1's ability to predict numeric values. The proof shows that IB1 can learn the class of continuous, real-valued numeric functions with bounded slope in polynomial time. Since the PAC-learning model has rarely been used to address the learning of numeric-valued functions, a substantially different definition of polynomial learnability is used in Section 3.2, although it preserves the spirit of Valiant's original model.

³This class has infinite VC dimension (Vapnik & Chervonenkis, 1971). Blumer, Ehrenfeucht, Haussler, and Warmuth (1986) proved that a concept class C is learnable with respect to the class of all probability distributions iff C has a finite VC dimension. We can show that IB1 can learn a class with an infinite VC dimension because our theorem restricts the class of allowable probability distributions.

3.1 Convergence Theorems: Predicting Symbolic Values

This section details convergence theorems for the IB1 algorithm.
The following definition for polynomial learnability will be used. It modifies Valiant's (1984) model by constraining the class of allowable probability distributions P.

Definition 3.1 A class of concepts C is polynomially learnable with respect to a class of probability distributions P iff there exists a polynomial p and an algorithm A such that, for any 0 < ε, δ < 1, if at least p(1/ε, 1/δ) positive and negative instances of C ∈ C are drawn according to any fixed probability distribution P ∈ P, then, with confidence at least (1 − δ), A will generate an approximation C' ∈ C that differs from C on a set of instances with probability less than ε.

By definition, the instances predicted by IB1 to belong to a concept C are those for which at least ⌈k/2⌉ of the set of k nearest training instances are in C. However, the proofs in this section also apply if a similarity-weighted vote among the k nearest training instances is used instead. Theorem 3.1 describes the relationship, for a particular class of concepts, between a target concept C and the concept description approximation C' converged on by IB1 for d, k ≥ 1. Theorem 3.2 then shows that, by restricting the class of allowable probability distributions P to those representable by bounded probability density functions, Theorem 3.1 can be used to prove polynomial learnability for this class of concepts. Figure 1 on the following page illustrates a few more definitions needed for the analysis.

Definition 3.2 For any α > 0, the α-core of a set C is the set of instances of C that are at least a distance α from any instance not in C.

Definition 3.3 The α-neighborhood of C is the set of instances that are within α of some instance of C.

Definition 3.4 A set of instances C' is an (α, γ)-approximation of C if, ignoring some set S≥α with probability less than γ, it contains the α-core of C and is contained in the α-neighborhood of C.
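The α-core and α-neighborhood of Definitions 3.2 and 3.3 can be illustrated for a simple one-dimensional concept, an interval; the example (and the integer grid, used to avoid floating-point edge effects) is mine, not the paper's.

```python
def in_alpha_core(x, lo, hi, alpha):
    """x is in the interval [lo, hi] and at least alpha from its complement."""
    return lo + alpha <= x <= hi - alpha

def in_alpha_neighborhood(x, lo, hi, alpha):
    """x is within alpha of some point of [lo, hi]."""
    return lo - alpha <= x <= hi + alpha

# Work in integer hundredths: C = [30, 70], alpha = 10, grid 0, 5, ..., 100.
grid = range(0, 101, 5)
core = [x for x in grid if in_alpha_core(x, 30, 70, 10)]
nbhd = [x for x in grid if in_alpha_neighborhood(x, 30, 70, 10)]
print(core)  # [40, 45, 50, 55, 60]
print(nbhd)  # [20, 25, ..., 80]
```

Theorem 3.1 then says the learned C' is squeezed between these two sets, up to a set of probability less than γ.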
The following theorem describes, in a geometric sense, how accurately IB1's derived concept description approximates the target concept. In particular, the IB1 algorithm converges (with probability at least (1 − δ)) to a concept that is an (α, γ)-approximation of the target concept.

Figure 1: Exemplifying some terms used for analyzing learnability. This instance space has two numeric-valued attributes.

Theorem 3.1 Let C be any region bounded by a closed curve in a bounded subset of ℜ^d. Given 0 < α, δ, γ < 1, the IB1 algorithm with k ≥ 1 converges with a polynomial number of training instances to C', where (α-core(C) − S≥α) ⊆ (C' − S≥α) ⊆ (α-neighborhood(C) − S≥α) with probability at least (1 − δ), where S≥α is a set with probability less than γ.

Proof 3.1 We may assume, without loss of generality, that the bounded subset of ℜ^d is [0, 1]^d. Let 0 < α, δ, γ < 1. Lemma 2.2 states that, if N ≥ k (m^d/γ) ln( m^d / (1 − (1 − δ)^(1/k)) ), then any N randomly-selected training instances will form a k-(α, γ)-net (with probability at least (1 − δ)). Let S≥α be the set of instances in [0, 1]^d that are not within α of k of the N training instances. Two inclusions need to be proven. The first must show that, excluding the instances of S≥α, the α-core of C is contained in C' (and thus in C' − S≥α). Let p be an arbitrary instance in the α-core of C not in S≥α, and let K be its set of k nearest (i.e., most similar) training instances. Since the distance between each s ∈ K and p is less than α and p is in the α-core, each s is also in C. Thus K correctly predicts that p is a member of C. Equivalently, this shows that p is a member of C'. Consequently, (α-core(C) − S≥α) ⊆ (C' − S≥α). The second inclusion states that C' − S≥α is contained in the α-neighborhood of C. This can be proven by showing that, if p is outside the α-neighborhood of C, then p is outside of C' − S≥α. Let p be an arbitrary instance outside the α-neighborhood of C, and let K be its set of k most similar neighbors.
If p is not in S≥α, then each s ∈ K is within α of p, so each s is outside of C. In this case, K correctly predicts that p is not a member of C. Since no instance outside the α-neighborhood of C, excluding instances in S≥α, is predicted by C' to be a member of C, (C' − S≥α) ⊆ (α-neighborhood(C) − S≥α).

Notice that Theorem 3.1 does not specify what the probability is of the set on which C' and C differ. Rather, it only shows where and how prediction errors could occur in terms of the geometry of C within the instance space. Some constraints on both the length of the boundary of C and the probability of regions of a given area are needed to bound this probability of error. Theorem 3.2 adds these constraints. For reasons of simplicity, it arbitrarily delegates half of the allowed prediction error ε to each way that errors can arise (i.e., (1) when p ∈ α-neighborhood of C and p ∉ α-core of C, or (2) when p ∈ S≥α). The proof shows that, for a large class of probability distributions, IB1 will, with high probability, converge to an approximately correct definition of the target concept for a large class of concepts in a bounded subset of ℜ^d with d ≥ 1.

Theorem 3.2 Let C be the class of all concepts in a bounded subset of ℜ^d that consist of a finite set of regions bounded by closed hyper-curves of total hyper-length less than L. Let P be the class of probability distributions representable by probability density functions bounded from above by B. Then C is polynomially learnable from examples with respect to P using IB1.

Proof 3.2 Again we may assume that the bounded region is [0, 1]^d. In Theorem 3.1, if the length of the boundary of C is less than L, then the total area between the α-core and the α-neighborhood of C is less than 2Lα. Then 2LBα is an upper bound on the probability of that area. Therefore, the total error made by C' in Theorem 3.1 is less than 2LBα + γ. If we fix γ = 2LBα = ε/2, then α = ε/(4LB), and the theorem follows by substituting
these expressions for γ and α into the inequality derived in Lemma 2.2. This yields

N ≥ (2k m^d/ε) ln( m^d / (1 − (1 − δ)^(1/k)) ), where m = ⌈4LB√d/ε⌉.

This proof has several practical implications. First, the number of instances required by IB1 to learn this class of concepts is also polynomial in L and B, which suggests that IB1 will perform best when the target concept's boundary size is minimized. Second, C' could be any subset of the α-neighborhood of C when the α-core is empty, which could occur when C's shape is extremely thin and α is chosen to be too large. The IBL approximation of C could be poor in this case. Third, IB1 cannot distinguish a target concept from anything containing its α-core and contained in its α-neighborhood; small perturbations in the shape of a target concept are not captured by IB1. Fourth, except for a set of size less than γ, the set of false positives is contained in the "outer ribbon" (the α-neighborhood of C excluding C) and the set of false negatives is contained in the "inner ribbon." Fifth, as the number of predictor attributes increases, the expected number of instances required to learn concepts will increase exponentially. Space transformations that reduce this dimensionality or reduce L will significantly increase the efficiency of IBL algorithms. Finally, no assumptions about the convexity of the target concept, its number of disjuncts, nor their relative positions were made.

3.2 Convergence Theorems: Predicting Numeric Values

This section defines PAC-learnability for predicting numeric values and proves that IB1 can PAC-learn the class of continuous functions with bounded slope.

Definition 3.5 The error of a real-valued function f' in predicting a real-valued function f, for an instance x, is |f(x) − f'(x)|.

Definition 3.6 Let f be a real-valued target function. Let B_f be the least upper bound of the absolute value of the slope between any two instances on the curve of f.
If B_f is finite, then we say that f has bounded slope. If C is a class of functions in which each f ∈ C has bounded slope, and B_f < B for some number B, then we say that C has bounded slope (bounded by B). Continuously differentiable functions on [0,1] have bounded slope. As a counterexample, the function sin(1/x) does not have bounded slope.

Definition 3.7 Let C be a class of functions for which each f ∈ C is a function from the unit hypercube in R^d to R. C is polynomially learnable if there exists an algorithm A and a polynomial p such that, for any 0 < γ, α, δ < 1, given an f ∈ C, if p(1/γ, 1/α, 1/δ) or more examples are chosen according to any fixed probability distribution on [0,1]^d, then, with confidence at least (1 − δ), A will output an approximation of f with error less than α everywhere except on a set of instances with probability less than γ.

In this definition γ is a bound on the size of the set on which "significant" prediction errors can occur.

By definition, IB1 computes a new instance's similarity to all instances in a concept description and predicts that the target value is the similarity-weighted average derived from the k most similar instances. Thus, it generates a piecewise linear approximation to the target function.

The next theorem demonstrates that continuous, real-valued functions with bounded slope in the unit hypercube in R^d are polynomially learnable by IB1. Note that the class of functions with slope bounded by B includes the large class of differentiable functions with derivative bounded by B.

Theorem 3.3 Let C be the class of continuous, real-valued functions on the unit hypercube in R^d with slope bounded by B. Then C is polynomially learnable by IB1 with k ≥ 1.

Proof 3.3 Let f be a continuous function on [0,1]^d. Let 0 < α, γ, δ < 1. The bound B ensures that f will not vary much on a small interval. Let α′ = α/B. Draw N training instances in accordance with Lemma 2.2, with α replaced by α′.
Let f′ be the approximation that IB1 generates for f, and let x be an arbitrary instance not in S_{>α′}. The point is to ensure that the error of f′ at x is small (i.e., less than α). That is, it must be shown that |f(x) − f′(x)| < α. Let K be the set of x's k most similar training instances. Since f′(x) is a weighted average of the target values of each x′ ∈ K, it suffices to show that |f(x) − f(x′)| < α. Because N is sufficiently large, the k most similar neighbors of x must all be within α′ of x. Also, we know that B is an upper bound on the slope between x and x′. Thus, since |f(x) − f(x′)| = |slope(x, x′)| × distance(x, x′), then |f(x) − f(x′)| < B × α′ = α. Therefore, IB1 will yield a prediction for x's target value that is within α of x's actual target value if at least N training instances are provided, where N satisfies the bound of Lemma 2.2 with α replaced by α′ = α/B. Thus, given any f ∈ C, if at least that many training instances are provided, IB1 will, with confidence at least (1 − δ), yield an approximation f′ with error less than α for all instances except those in a set of probability less than γ. Thus, the number of required training instances is also polynomial in B.

Many functions have bounded slope. For example, any piecewise linear curve and any function with a continuous derivative defined on a closed and bounded region in R^d has a bounded slope. Therefore, IB1 can accurately learn a large class of numeric functions using a polynomial number of training instances. However, it may not be able to PAC-learn numeric functions whose maximum absolute slope is unbounded, such as sin(1/x): as x approaches 0, the derivative of this function is unbounded.

This paper detailed PAC-learning analyses of IB1, a simple instance-based learning algorithm that performed well on a variety of supervised learning tasks (Aha, Kibler, & Albert, 1991). The analyses show that IB1 can PAC-learn large classes of symbolic concepts and numeric functions.
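The prediction rule analyzed in Theorem 3.3, a similarity-weighted average over the k most similar stored instances, can be sketched as follows. The function name and the inverse-distance weighting are illustrative assumptions; the analysis only requires that the prediction be a weighted average of the target values of the k nearest neighbors.

```python
import math

def ib1_predict(train, x, k=3):
    """Similarity-weighted k-nearest-neighbor prediction (a sketch of the
    piecewise-linear approximation analyzed in Theorem 3.3).

    train -- list of (instance, target) pairs; each instance is a tuple in [0,1]^d
    x     -- query instance (tuple)
    """
    # Rank stored instances by Euclidean distance to x, most similar first.
    ranked = sorted(train, key=lambda item: math.dist(item[0], x))
    neighbors = ranked[:k]
    # Inverse-distance weights; an exact match decides the prediction alone.
    weights = []
    for inst, y in neighbors:
        d = math.dist(inst, x)
        if d == 0.0:
            return y
        weights.append((1.0 / d, y))
    total = sum(w for w, _ in weights)
    return sum(w * y for w, y in weights) / total
```

On a sample of a line such as f(x) = 2x, the rule interpolates between the two nearest stored points, illustrating the piecewise linear behavior the proof relies on.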
These analyses help to explain IB1's capabilities and complement our earlier empirical studies. However, we did not address their average-case behavior, an important topic of future research. Analyses for more elaborate IBL algorithms, such as those that tolerate noisy instances, tolerate irrelevant attributes, or process symbolic-valued attributes, would also improve our understanding of these practical learning algorithms' capabilities and limitations.

ALBERT & AHA 557

Acknowledgments

Thanks to Dennis Kibler who initiated this line of research and was a collaborator on the initial analyses (published elsewhere). Thanks also to Dennis Volper and our reviewers for comments and suggestions, and to Caroline Ehrlich for her assistance in preparing the final draft.

References

Aha, D. W. (1989). Incremental, instance-based learning of independent and graded concept descriptions. In Proceedings of the Sixth International Workshop on Machine Learning (pp. 387-391). Ithaca, NY: Morgan Kaufmann.
Aha, D. W., & Kibler, D. (1989). Noise-tolerant instance-based learning algorithms. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (pp. 794-799). Detroit, MI: Morgan Kaufmann.
Aha, D. W., Kibler, D., & Albert, M. K. (1991). Instance-based learning algorithms. Machine Learning, 6, 37-66.
Angluin, D., & Laird, P. (1988). Learning from noisy examples. Machine Learning, 2, 343-370.
Blumer, A., Ehrenfeucht, A., Haussler, D., & Warmuth, M. (1986). Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension. In Proceedings of the Eighteenth Annual Association for Computing Machinery Symposium on Theory of Computing (pp. 273-282). Berkeley, CA: Association for Computing Machinery.
Bradshaw, G. (1987). Learning about speech sounds: The NEXUS project. In Proceedings of the Fourth International Workshop on Machine Learning (pp. 1-11). Irvine, CA: Morgan Kaufmann.
Cost, S., & Salzberg, S. (1990).
A weighted nearest neighbor algorithm for learning with symbolic features (Technical Report JHU-90/11). Baltimore, MD: The Johns Hopkins University, Department of Computer Science.
Cover, T. M. (1968). Estimation by the nearest neighbor rule. IEEE Transactions on Information Theory, 14, 50-55.
Cover, T. M., & Hart, P. E. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13, 21-27.
Haussler, D. (1987). Bias, version spaces and Valiant's learning framework. In Proceedings of the Fourth International Workshop on Machine Learning (pp. 324-336). Irvine, CA: Morgan Kaufmann.
Kearns, M., Li, M., Pitt, L., & Valiant, L. G. (1987). On the learnability of Boolean formulae. In Proceedings of the Nineteenth Annual Symposium on the Theory of Computer Science (pp. 285-295). New York, NY: Association for Computing Machinery.
Kibler, D., & Aha, D. W. (1987). Learning representative exemplars of concepts: An initial case study. In Proceedings of the Fourth International Workshop on Machine Learning (pp. 24-30). Irvine, CA: Morgan Kaufmann.
Kibler, D., Aha, D. W., & Albert, M. (1989). Instance-based prediction of real-valued attributes. Computational Intelligence, 5, 51-57.
Li, M., & Vitanyi, P. M. B. (1989). A theory of learning simple concepts under simple distributions and average case complexity for universal distribution (preliminary version) (Technical Report CT-89-07). Amsterdam, Holland: University of Amsterdam, Centrum voor Wiskunde en Informatica.
Littlestone, N. (1988). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2, 285-318.
Moore, A. W. (1990). Acquisition of dynamic control knowledge for a robotic manipulator. In Proceedings of the Seventh International Conference on Machine Learning (pp. 244-252). Austin, TX: Morgan Kaufmann.
Rivest, R. (1987). Learning decision lists. Machine Learning, 2, 1-20.
Rosenblatt, F. (1962). Principles of neurodynamics. New York, NY: Spartan.
Salzberg, S. (1988). Exemplar-based learning: Theory and implementation (Technical Report TR-10-88). Cambridge, MA: Harvard University, Center for Research in Computing Technology.
Stanfill, C., & Waltz, D. (1986). Toward memory-based reasoning. Communications of the ACM, 29, 1213-1228.
Tan, M., & Schlimmer, J. C. (1990). Two case studies in cost-sensitive concept acquisition. In Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 854-860). Boston, MA: American Association for Artificial Intelligence Press.
Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27, 1134-1142.
Valiant, L. G. (1985). Learning disjunctions of conjunctions. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 560-566). Los Angeles, CA: Morgan Kaufmann.
Vapnik, V. N., & Chervonenkis, A. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16, 264-280.
Waltz, D. (1990). Massively parallel AI. In Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 1117-1122). Boston, MA: American Association for Artificial Intelligence Press.
Learning With Many Irrelevant Features

Hussein Almuallim and Thomas G. Dietterich
303 Dearborn Hall
Department of Computer Science
Oregon State University
Corvallis, OR 97331-3202
almualh@cs.orst.edu  tgd@cs.orst.edu

Abstract

In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ((1/ε) ln(1/δ) + (1/ε)[2^p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial time algorithm, FOCUS, which implements MIN-FEATURES. Experimental studies are presented that compare FOCUS to the ID3 and FRINGE algorithms. These experiments show that, contrary to expectations, these algorithms do not implement good approximations of MIN-FEATURES. The coverage, sample complexity, and generalization performance of FOCUS are substantially better than those of either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. This suggests that, in practical applications, training data should be preprocessed to remove irrelevant features before being given to ID3 or FRINGE.

Introduction

Historically, the development of inductive learning algorithms has been a two-step process: (i) select a representation scheme (e.g., decision trees), (ii) develop an algorithm to find instances of the scheme that are consistent with given collections of training examples. A shortcoming of this approach is that there is no separation between a specification of the desired learning behavior of the algorithm and its implementation. Specifically, the bias of the algorithm is adopted implicitly, typically as a side-effect of the second step. Often, it is difficult even to state the bias in any simple way.
Consequently, it is difficult to tell in advance whether the bias is appropriate for a new learning problem.

Recently, a few authors (Buntine 1990, Wolpert 1990) have advocated a different procedure: (i) adopt a bias over some space of hypotheses (or, equivalently, select a prior probability distribution over the space), (ii) select a scheme for representing hypotheses in the space, and (iii) design an algorithm that implements this bias, at least approximately.

The goal of this paper is to pursue this second procedure. We consider the space of all binary functions defined over n Boolean input features. We adopt the following bias, which we call the MIN-FEATURES bias: if two functions are consistent with the training examples, prefer the function that involves fewer input features (break ties arbitrarily). This is a bias in favor of simplicity, but not mere syntactic simplicity. Functions over fewer variables are semantically simpler than functions over more variables.

We begin by adopting a straightforward representation for binary functions defined over n input features. We then analyze the sample complexity of any probably-approximately-correct (PAC) learning algorithm that implements the MIN-FEATURES bias. It is proved that

Θ((1/ε) ln(1/δ) + (1/ε)[2^p + p ln n])

training examples are required to PAC-learn a binary concept involving p input features (out of a space of n input features) with accuracy parameter ε and confidence parameter δ. Note in this bound that the total number of available features n appears only logarithmically. Hence, if there are k irrelevant features, it only costs us a factor of ln k training examples to detect and eliminate them from consideration.

Following this analysis, a simple, quasi-polynomial time algorithm that implements the MIN-FEATURES bias is described and analyzed. The algorithm, called FOCUS, first identifies the p features that are needed to define the binary function.
It then applies a straightforward learning procedure that focuses on just those p features.

At first glance, it may appear that there are already many algorithms that approximate this bias. For example, ID3 (Quinlan 1986) has a bias in favor of small decision trees, and small trees would seem to test only a subset of the input features. In the final section of the paper, we present experiments comparing FOCUS

ALMUALLIM & DIETTERICH 547
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

to ID3 and FRINGE (Pagallo & Haussler 1990). These demonstrate that ID3 and FRINGE are not good implementations of the MIN-FEATURES bias: these algorithms often produce hypotheses as output that are much more complex (in terms of the number of input features used) than the hypotheses found by FOCUS. Indeed, there are some cases in which ID3 and FRINGE miss extremely simple hypotheses.

These results suggest that the FOCUS algorithm will require fewer training examples and generalize more correctly than ID3 in domains where the MIN-FEATURES bias is appropriate. We believe there are many such domains. For example, in many practical applications, it is often not known exactly which input features are relevant or how they should be represented. The natural response of users is to include all features that they believe could possibly be relevant and let the learning algorithm determine which features are in fact worthwhile.

Another situation in which many irrelevant features may be present is when the same body of training data is being used to learn many different binary functions. In such cases, one must ensure that the set of features measured in the data is sufficient to learn all of the target functions. However, when learning each individual function, it is likely that only a small subset of the features will be relevant.
This applies, for example, to the task of learning diagnosis rules for several different diseases from the medical records of a large number of patients. These records usually contain more information than is actually required for describing each disease. Another example (given in Littlestone, 1988) involves pattern recognition tasks in which feature detectors automatically extract a large number of features for the learner's consideration, not knowing which might prove useful.

Notation

For each n ≥ 1, let {x_1, x_2, ..., x_n} denote a set of n Boolean features and X_n denote the set {0,1}^n of all assignments to these features (the set of instances). A binary concept c is a subset of X_n. A binary function f represents the concept c if f(x) = 1 for all x ∈ c and f(x) = 0 otherwise. Of course, binary functions can be represented as Boolean formulas.

A feature x_i, for 1 ≤ i ≤ n, is said to be relevant to a concept c if x_i appears in every Boolean formula that represents c, and irrelevant otherwise. The complexity of a concept, denoted s(c), is defined to be the minimum number of bits needed to encode the concept with respect to some encoding scheme. The encoding scheme we use in this work will be introduced in Section 3. We let C_{n,s} denote the set of all binary concepts of complexity at most s defined on {x_1, x_2, ..., x_n}.

We assume an arbitrary probability distribution D on X_n. For 0 < ε < 1, a concept h is said to be ε-close to a concept c with respect to D if the sum of the probability of all the instances in the symmetric difference of h and c is at most ε.

Let f be a function that represents the concept c. Then, for x ∈ X_n, the value f(x) is said to be the class of x. A pre-classified example of c is a pair of the form (x, f(x)). A sample of a concept c is a multi-set of examples of c drawn randomly (with replacement) according to D. The size of the sample is just the number of instances drawn.
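Under the uniform distribution, ε-closeness of two concepts (viewed as subsets of X_n) reduces to counting their symmetric difference; a minimal sketch, with the function name my own assumption:

```python
def epsilon_close(h, c, n, eps):
    """Check that h is eps-close to c under the uniform distribution on
    {0,1}^n: the probability mass of the symmetric difference of the two
    concepts (sets of instance tuples) is |h XOR c| / 2**n."""
    diff = len(h.symmetric_difference(c))
    return diff / 2 ** n <= eps
```

For an arbitrary distribution D, the count would be replaced by the sum of D's probabilities over the instances in the symmetric difference.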
In this work, we adopt the notion of Probably Approximately Correct (PAC) learning as defined by Blumer et al. (1987a). With respect to parameters ε and δ, 0 < ε, δ < 1, we say that a learning algorithm PAC learns (or simply, learns) a concept c using a sample of size m if, with probability at least (1 − δ), this algorithm returns as an hypothesis a concept that is ε-close to c when the algorithm is given a sample of c of size m, for all fixed but unknown D.

Formal Analysis

In this section, we first define the MIN-FEATURES bias. We then investigate the sample complexity of any algorithm that implements MIN-FEATURES. Finally, we present the FOCUS algorithm that implements this bias, and we analyze its computational complexity.

The MIN-FEATURES bias can be stated simply. Given a training sample S for some unknown binary function f over X_n, let V be the set of all binary functions over X_n consistent with S. (V is sometimes called the version space; Mitchell, 1982.) Let H be the subset of V whose elements have the fewest relevant features. The MIN-FEATURES bias chooses its guess, f̂, from H arbitrarily.

Given that we wish to implement and analyze the MIN-FEATURES bias, the first step is to choose a representation for hypotheses. We will represent a concept c by the concatenation of two bit vectors, R_c and T_c. R_c is an n-bit vector in which the ith bit is 0 if and only if x_i is irrelevant to c. T_c is the rightmost column of the truth table of a Boolean function f that represents c, defined only on those features in {x_1, x_2, ..., x_n} whose corresponding bits in R_c are set to 1.

With this definition, we can now analyze the sample complexity of MIN-FEATURES, that is, the number of training examples required to ensure PAC learning. We must first define a complexity measure corresponding to our bias. Following Blumer et al. (1987b), we will define the complexity s(c) for concept c to be the number of bits needed to encode c using our bit-vector representation.
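The bit-vector representation can be sketched in code. The helper name and the truth-table row ordering are my assumptions; the sketch also computes the resulting complexity measure s(c) = n + 2^p.

```python
from itertools import product

def encode_concept(n, relevant, f):
    """Encode a Boolean concept as the bit-vector pair (R_c, T_c).

    n        -- total number of features
    relevant -- sorted 1-based indices of the relevant features
    f        -- Boolean function of len(relevant) arguments
    Returns (R_c, T_c, s) where s = n + 2**p is the complexity of the concept.
    """
    # R_c: bit i is 1 iff feature x_i is relevant to the concept.
    R = "".join("1" if i in relevant else "0" for i in range(1, n + 1))
    # T_c: rightmost truth-table column of f over the relevant features,
    # rows enumerated in the standard binary-counting order.
    p = len(relevant)
    T = "".join("1" if f(*bits) else "0" for bits in product((0, 1), repeat=p))
    return R, T, n + 2 ** p

# The paper's worked example: n = 5, concept represented by x_1 OR x_3.
R, T, s = encode_concept(5, [1, 3], lambda a, b: a or b)
```

With this row ordering the call reproduces the paper's example: R_c = 10100, T_c = 0111, and complexity 9.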
This measure has the property that s(c_1) < s(c_2) iff the number of relevant features of c_1 is less than the number of relevant features of c_2. Specifically, if c has p relevant features then s(c) = n + 2^p. Section 3.1 of Blumer et al. (1987a) gives three properties to be satisfied by a reasonable representation of concepts. The reader may verify that these are satisfied by our method of encoding.

Example: Let n = 5 and let c be the concept represented by x_1 ∨ x_3. Then R_c = 10100 and T_c = 0111. Hence, the complexity of c is 9.

The following theorem gives an upper bound on the sample complexity of any algorithm implementing the MIN-FEATURES bias.

Theorem 1 Let C_{n,s} be the class of concepts defined on n features with complexity at most s. Then, under any probability distribution D, any n ≥ 1, any ε and δ such that 0 < ε, δ < 1, and any concept c ∈ C_{n,s}, a sample of size

(1/ε) ln(1/δ) + (1/ε)[log_2(s − n) ln n + s − n]

is sufficient to guarantee that any algorithm implementing the MIN-FEATURES bias will return an hypothesis that is ε-close to c with probability at least 1 − δ.

Proof (Sketch): For any target concept of complexity at most s, the hypothesis space for any algorithm that implements the MIN-FEATURES bias is contained in C_{n,s}. We argue that |C_{n,s}| ≤ C(n, ⌈log_2(s − n)⌉) · 2^{s−n}, where C(n, k) denotes the binomial coefficient. The result follows immediately by applying the lemma of Blumer et al. (1987b).

It is interesting to note that the number of examples sufficient for learning grows only logarithmically in the number of irrelevant features and linearly in the complexity of the concept.

We now show that this bound is tight by exhibiting an identical lower bound using the methods developed by Blumer et al. (1987a) exploiting the Vapnik-Chervonenkis dimension (VC-dimension).

Figure 1: The FOCUS Learning Algorithm
Algorithm FOCUS(sample)
1. For i = 0, 1, 2, ... do:
   1.1 For all A ⊆ {x_1, x_2, ..., x_n} of size i:
       1.1.1 If there exist no two examples in the sample that agree on all the features in A but do not agree on the class, then go to 2.
2. Return any concept h consistent with the sample, such that only those features in A are relevant to h.

The VC-dimension of a class of concepts C is defined to be the largest integer d such that there exists a set of d instances that can be labelled by the concepts in C in all the 2^d possible ways. It is shown that the number of examples needed for learning any class of concepts strongly depends on the VC-dimension of the class. Specifically, Ehrenfeucht et al. (1988) prove the following:

Theorem 2 Let C be a class of concepts and 0 < ε, δ < 1. Then, any algorithm that PAC learns all the concepts in C with respect to ε, δ, and any probability distribution must use a sample of size

Ω((1/ε) ln(1/δ) + VCdim(C)/ε).

To apply this result, we must first determine the VC-dimension of C_{n,s}, the set of Boolean concepts over n features having complexity less than or equal to s.

Lemma 1 Let C_{n,s} be as in Theorem 1. Then VCdim(C_{n,s}) ≥ max(log_2(s − n) · log_2 n, s − n).

The proof of this result is lengthy and is omitted for lack of space. We can now state the lower bound:

Theorem 3 Under the same conditions as Theorem 1, any algorithm that PAC-learns C_{n,s} must use a sample of size

Ω((1/ε) ln(1/δ) + (1/ε)[ln(s − n) ln n + s − n]).

These results show that the presence of many irrelevant features does not make the learning task substantially more difficult, at least in terms of the number of examples needed for learning, since the sample complexity grows only logarithmically in the number of irrelevant features.

Now that we have analyzed the sample complexity of the MIN-FEATURES bias, we exhibit an algorithm that implements this bias.
The algorithm given in Figure 1 searches for and returns a consistent hypothesis using a minimal set of attributes, and hence it implements the desired bias.

To determine the computational complexity of FOCUS, suppose that it is given a sample of size m for a concept of complexity s. The condition in the inner loop can be tested by maintaining an array of length 2^{|A|} with an entry for each possible assignment of the features in A. For each example in the sample, we check the values of the features in A as given in the example and label the corresponding entry in the array using the class of the example. If, during this process, a label of any entry has to be reversed, then the result of the test is false. Otherwise, the result is true. This will cost time O(m · 2^{|A|}). Since the target concept has complexity s, the value of |A| will reach at most log_2(s − n). The outer loop is then executed at most O(n^{log_2(s−n)}) times. The computational complexity of this algorithm is dominated by the two nested loops, and, therefore, the algorithm will terminate in time O((2n)^{log_2(s−n)} m). This is quasi-polynomial in n and s, but clearly it will be impractical for large values of s.

According to the definition of learnability given in Blumer et al. (1987a), this says that the class of Boolean concepts, under our complexity measure, is learnable using a polynomial number of examples in quasi-polynomial time. An analogous result is given in Verbeurgt (1990) where, taking the minimum number of terms needed to encode the concept as a DNF formula as the complexity measure, they obtain a learnability result using a polynomial sample size and quasi-polynomial time.
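The procedure of Figure 1 can be transcribed directly. This sketch uses 0-based feature indices, a dictionary in place of the 2^{|A|} array from the complexity argument, and returns the minimal sufficient feature subset A rather than constructing the final hypothesis h, which is the straightforward remaining step:

```python
from itertools import combinations

def focus(sample, n):
    """FOCUS (Figure 1): find a minimum-size feature subset A such that
    no two examples agree on all features in A yet disagree on the class.

    sample -- list of (x, label) pairs, x a tuple of n Boolean values
    Returns the 0-based indices of a minimal sufficient feature subset.
    """
    for i in range(n + 1):                    # outer loop: subset size i
        for A in combinations(range(n), i):   # all feature subsets of size i
            projections = {}                  # projection of x onto A -> label
            consistent = True
            for x, label in sample:
                key = tuple(x[j] for j in A)
                if projections.setdefault(key, label) != label:
                    consistent = False        # conflict: A is insufficient
                    break
            if consistent:
                return set(A)                 # step 2: learn using only A
    return set(range(n))                      # all features needed
```

On a full truth table of x_1 XOR x_2 padded with two irrelevant random-valued features, the search rejects every subset of size 0 and 1 and stops at the first sufficient pair, the two relevant features.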
However, their result is only shown for the uniform distribution case, while ours applies to all distributions.

Experimental Work

Several learning algorithms appear to have biases similar to the MIN-FEATURES bias. In particular, algorithms related to ID3 (Quinlan 1986) attempt to construct "small" decision trees. These algorithms construct the decision tree top-down (i.e., starting at the root), and they terminate as soon as they find a tree consistent with the training examples. Features tested at each node are chosen according to their estimated relevance to the target concept, measured using the mutual information criterion. In this section, we test these algorithms to see how well they implement the MIN-FEATURES bias.

In particular, we compare three algorithms: (i) ID3: as described in Quinlan (1986), but resolving ties randomly when two or more features look equally good. (ii) FRINGE: as given in Pagallo and Haussler (1990), with the maximum number of iterations set to 10. (iii) FOCUSed-ID3: first, a minimum set of features sufficient to produce a consistent hypothesis is obtained as in FOCUS. After finding a minimal-size subset of relevant features, the training examples are filtered to remove all irrelevant features. The filtered examples are then given to ID3 to construct a decision tree.

We consider three evaluation criteria: coverage, sample complexity, and error rate. The coverage of a learning algorithm, L, is a measure of the number of distinct concepts that can be learned from a training sample of size m. More precisely, consider the collection of all training samples containing m distinct examples for a concept c, and suppose we give each of these samples to algorithm L. If, for a fraction 1 − δ of the training samples, L outputs a function that is ε-close to the correct concept, then we say that L frequently-approximately-correctly (FAC) learns c (Dietterich 1989). The coverage of an algorithm, given m, ε, and δ, is the number of concepts that can be FAC-learned by the algorithm.

The sample complexity of an algorithm L for a space of concepts C is estimated as the smallest sample size sufficient to enable L to FAC-learn every concept in C. This is equivalent to the sample complexity of PAC-learning, except that it is measured only for the uniform distribution and instances are drawn without replacement.

Finally, the error rate for an algorithm on a given concept is measured as the probability that a randomly chosen example would be misclassified by the hypothesis output by the algorithm, assuming the uniform distribution over the space of examples.

Since our objective is to evaluate the learning performance with respect to the MIN-FEATURES bias, we have specialized the above criteria in the following manner. First, concepts with i + 1 relevant features are not counted in the coverage of an algorithm unless all concepts of i or fewer features are FAC-learned as well. If this condition is not included, there exist trivial algorithms that can attain high coverage while learning only very uninteresting concepts. Second, for the sample complexity measurement, we choose C to be a class of concepts with only p or fewer relevant features. Finally, in measuring the error rates of the three algorithms, the target concept is chosen randomly from those concepts having p or fewer features.

One technical problem in performing our experiments is the immense amount of computation involved in the exact measurement of coverage and sample complexity when the number of features is large. Therefore, we employed two techniques to reduce the computational costs of these measurements. First, we exploited the fact that each of the three algorithms is symmetric with respect to permutations and/or negations of input features. More precisely, if an algorithm FAC-learns a concept represented by a Boolean function f(x_1, x_2, ..., x_i, ..., x_j, ..., x_n), then the same algorithm also learns the concepts represented by f(x_1, ..., x_j, ..., x_i, ..., x_n),
f(x_1, x_2, ..., ¬x_i, ..., x_j, ..., x_n), and so on for all functions obtained by permuting and/or negating the features in f. These symmetry properties partition the space of concepts into equivalence classes such that it suffices to test one representative concept in each equivalence class to determine FAC-learnability for all concepts in the class.¹ Second, we measured FAC-learning statistically by running each algorithm on a large number of randomly-chosen samples (10,000 or 100,000, depending on the experiment). This number was observed to be large enough to reliably determine the FAC-learnability of concepts.

¹For counting techniques that can be employed to find the number of equivalence classes and the number of concepts in a given equivalence class, see Harrison (1965) and Slepian (1953).

Experimental Results

EXPERIMENT 1: Coverage. In this experiment, we measured the MIN-FEATURES-based coverage of each of the three algorithms. For each algorithm, we counted the number of concepts learned as a function of the size m of the sample and the total number of features n. The learning parameters were n = 5, 6, 7, and 8, ε = 0.1, δ = 0.1, and m varying. The number of samples tested per concept was 10,000. Figure 2 shows the result for n = 8. The results for n = 5, 6, 7 were similar.

Figure 2: Coverage of the three algorithms for n = 8.

EXPERIMENT 2: Sample Complexity. In this experiment, we estimated the sample size needed to learn all concepts having 3 or fewer relevant features out of a total of 8, 10, or 12 available features. As before, ε and δ were 0.1. The number of samples tested per concept was 100,000. The results are given in the following table:

No. features   ID3    FRINGE   FOCUS
8              194    52       34
10             648    72       40
12             2236   94       42

EXPERIMENT 3: Error rates.
In the previous experiments we were looking at the "worst case" performance of the learning algorithms. That is, given a reasonable sample size, an algorithm may learn all the concepts under consideration with the exception of a few that require a substantial increase in the sample size. Such an algorithm could exhibit poor performance in the previous two experiments. The purpose of this experiment is to perform a kind of "average case" comparison between the three algorithms. The procedure is to plot the learning curve for randomly chosen concepts with few relevant features.

We randomly selected 50 concepts, each having at most 5 (out of n) relevant features. For each of these concepts, we measured the accuracy of the hypotheses returned by the three algorithms while successively increasing the sample size. For each value of m, the accuracy rate is averaged over 100 randomly chosen samples. This experiment was performed for n = 8, 12, and 16; 50 randomly chosen concepts of no more than 5 relevant features were tested for each value of n. Figure 3 shows a pattern typical of all learning curves that we observed. Over all 50 concepts, after 60 examples, the mean difference in accuracy rate between FOCUS and ID3 was 0.24 (variance 0.0022). The mean difference between FOCUS and FRINGE was 0.21 (variance 0.0020).

Figure 3: Learning curve (accuracy vs. number of samples) for a randomly chosen concept f(x1, ..., x16) given as a 10-term DNF, which has 5 relevant features out of 16.

EXPERIMENT 4: Irrelevant Features. The goal of this experiment was to see how the three algorithms are influenced by the introduction of additional (irrelevant) features whose values are assigned at random. For this purpose, we choose a concept-sample pair at random, and measure the accuracy of the hypothesis returned by each algorithm while adding more and more irrelevant features to the sample. The concepts chosen have 5 relevant features out of 8. The sample size was chosen such that all the three algorithms are reasonably accurate when tested using only the 8 starting features. A sample of such a size is chosen randomly and then augmented by successively adding random features to bring the total number of features up to n = 16. For each value of n, the accuracy is averaged over 100 runs. This experiment was repeated on more than 50 concept-sample pairs. A typical result of these runs is shown in Figure 4.

Figure 4: Accuracy of the three algorithms on a randomly chosen concept-sample pair as more irrelevant random features are introduced. The sample size was 64.

ALMUALLIM & DIETTERICH 551

Discussion

These experiments show conclusively that the biases implemented by ID3 and FRINGE, though they may be interesting and appropriate in many domains, are not good approximations of the MIN-FEATURES bias. The final experiment shows this most directly. Using the MIN-FEATURES bias, FOCUS maintains a constant, high level of performance as the number of irrelevant features is increased. In contrast, the performance of ID3 and FRINGE steadily degrades. This occurs because ID3 and FRINGE are proposing hypotheses that involve many extra features (or perhaps different features) than those identified by FOCUS. This also explains the results of Experiments 1, 2, and 3. In Experiment 2, we see that many more training examples are required for ID3 and FRINGE to find good hypotheses. These extra training examples are needed to force the algorithms to discard the irrelevant features.
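For intuition, the MIN-FEATURES bias that FOCUS implements can be sketched as an exhaustive search for a smallest feature subset on which the training sample is functionally consistent; irrelevant features are discarded by construction. This is a minimal illustration, not the published FOCUS implementation, and the names are ours:

```python
from itertools import combinations, product

def min_features(sample, n):
    """Smallest feature subset on which the sample is functionally consistent.

    sample: list of (x, y) with x a tuple of n bits and y a 0/1 label.
    Returns (subset, table) where table maps projected inputs to labels.
    Exhaustive search, which is fine for the small n used in these experiments."""
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            table, consistent = {}, True
            for x, y in sample:
                key = tuple(x[i] for i in subset)
                if table.setdefault(key, y) != y:
                    consistent = False
                    break
            if consistent:
                return subset, table
    raise ValueError("inconsistent sample")

# Target concept x0 AND x1; features 2 and 3 are irrelevant.
sample = [(x, x[0] & x[1]) for x in product((0, 1), repeat=4)]
subset, table = min_features(sample, 4)
print(subset)  # only the two relevant features are kept
```

Because the search tries subsets in order of increasing size, random irrelevant features never enter the hypothesis unless they are needed to explain the labels, which mirrors the flat curve FOCUS shows in Figure 4.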
This also means that, for a fixed sample size, ID3 and FRINGE can learn many fewer concepts (with respect to the MIN-FEATURES bias), as shown in Experiment 1. Experiment 3 shows that if the MIN-FEATURES bias is appropriate, then FOCUS will give much better generalization performance than either ID3 or FRINGE.

Conclusion

This paper defined and studied the MIN-FEATURES bias. Section 3 presented a tight bound on the number of examples needed to guarantee PAC-learning for any algorithm that implements MIN-FEATURES. It also introduced the FOCUS algorithm, which implements MIN-FEATURES, and calculated its computational complexity. Finally, Section 4 demonstrated empirically that the ID3 and FRINGE algorithms do not provide good implementations of the MIN-FEATURES bias. As a consequence, ID3 and FRINGE do not perform nearly as well as FOCUS in problems where the MIN-FEATURES bias is appropriate. These results suggest that one should not rely on ID3 or FRINGE to filter out irrelevant features. Instead, some technique should be employed to eliminate irrelevant features and focus ID3 and FRINGE on the relevant ones.

There are many problems for future research. First, we need to develop and test efficient heuristics for finding the set of relevant features in a learning problem. Analysis must be performed to ensure that the heuristics still have near-optimal sample complexity. Second, we need to address the problem of determining relevant features when the training data are noisy. Third, some efficient variant of FOCUS should be tested in real-world learning problems where the MIN-FEATURES bias is believed to be appropriate.

Acknowledgements

The authors gratefully acknowledge the support of the NSF under grant number IRI-86-57316. Hussein Almuallim was supported by a scholarship from the University of Petroleum and Minerals, Saudi Arabia. Thanks also to Nick Flann for helpful comments.

References

Buntine, W. L. 1990.
Myths and Legends in Learning Classification Rules. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), 736-742. Boston, MA: Morgan Kaufmann.

Blumer, A.; Ehrenfeucht, A.; Haussler, D.; and Warmuth, M. 1987a. Learnability and the Vapnik-Chervonenkis Dimension. Technical Report UCSC-CRL-87-20, Department of Computer and Information Sciences, University of California, Santa Cruz, Nov. 1987. Also in Journal of the ACM, 36(4):929-965.

Blumer, A.; Ehrenfeucht, A.; Haussler, D.; and Warmuth, M. 1987b. Occam's Razor. Information Processing Letters, 24:377-380.

Dietterich, T. G. 1989. Limitations on Inductive Learning. In Proceedings of the Sixth International Workshop on Machine Learning, 124-128. Ithaca, NY: Morgan Kaufmann.

Ehrenfeucht, A.; Haussler, D.; Kearns, M.; and Valiant, L. G. 1988. A General Lower Bound on the Number of Examples Needed for Learning. In Proceedings of the First Workshop on Computational Learning Theory, 139-154. Boston, MA: Morgan Kaufmann.

Harrison, M. 1965. Introduction to Switching and Automata Theory. McGraw-Hill, Inc.

Littlestone, N. 1988. Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm. Machine Learning, 2:285-318.

Mitchell, T. M. 1982. Generalization as Search. Artificial Intelligence, 18:203-226.

Pagallo, G.; and Haussler, D. 1990. Boolean Feature Discovery in Empirical Learning. Machine Learning, 5(1):71-100.

Quinlan, J. R. 1986. Induction of Decision Trees. Machine Learning, 1(1):81-106.

Slepian, D. 1953. On the Number of Symmetry Types of Boolean Functions of n Variables. Can. J. Math., 5(2):185-193.

Verbeurgt, K. 1990. Learning DNF Under the Uniform Distribution in Quasi-polynomial Time. In Proceedings of the Third Workshop on Computational Learning Theory, 314-326. Rochester, NY: Morgan Kaufmann.

Wolpert, D. 1990. A Mathematical Theory of Generalization: Parts I and II. Complex Systems, 4(2):151-249.
Regularity and Structure

Alexander Botta
New York University - Courant Institute of Mathematical Sciences
Author's current address: 1310 Dickerson Road, Teaneck, N.J. 07666

Abstract

We present an approach to unsupervised concept formation, based on accumulation of partial regularities. Using an algorithmic complexity framework, we define regularity as a model that achieves a compressed coding of data. We discuss induction of models. We present induction of finite automata models for regularities of strings, and induction of models based on vector translations for sets of points. The concepts we develop are particularly appropriate for natural spaces - structures that accept a decomposition into recurrent, recognizable parts. They are usually hierarchical and suggest that a vocabulary of basic constituents can be learned before focusing on how they are assembled. We define and illustrate: Basic regularities, as algorithmically independent building blocks of structures. They are identifiable as local maxima of compression as a function of model complexity. Stepwise induction consists of finding a model, using it to compress the data, then applying the same procedure on the code. It is a way to induce, in polynomial time, structures whose basic components have bounded complexity. Libraries are sets of partial regularities, a theoretical basis for clustering and concept formation. Finally, we use the above concepts to present a new perspective on explanation based generalization. We prove it to be a language independent method to specialize the background knowledge.

1. Introduction

Unsupervised learning aims at the discovery of laws that govern the structure of data. It succeeds only if the data is constrained by some regularities, that is, not totally random. In this case, the data can be encoded in a shorter representation.
The algorithmic information theory [4,5,22], and the related concept of minimal length encoding [1,3,12,19,20], will provide the framework for defining regularity. Our main concern is learning the structures of those spaces that can be decomposed into recurrent, recognizable parts. Expressing the whole depends on being able to name the parts. For example, the syntax of a sentence is based on the intermediate concept of word. The concept of rectangle illustrated by vectors of pixels depends on the intermediate concepts of pixel adjacency and line segment. Such structures are usually hierarchical, involving several levels of vocabularies. They suggest learning first a dictionary of building blocks, and only then concentrating on how they are assembled. We call them natural spaces. In [1] we looked at a robot that can move around a room and apply simple actions (push, rotate, etc.) to simple-shaped blocks. By capturing important regularities in the flow of its perceptions, the robot constructs an ontology of its universe. We shall investigate here ways to capture and combine regularities in natural spaces. This may allow the proverbial blind men to put together an accurate representation of their elephant.

In this section we review some definitions from [4,5,22]. Let s, t be strings. P_U(s) is the probability that a universal Turing machine U will produce s if it starts with any self-delimiting program string on its program tape. K_U(s) is the length of the shortest such program. Since a fixed size routine enables another machine V to simulate U, we drop 'U', assuming that the equations involving K(s) are correct up to a constant term. K and P are related through P(s) = 2^-K(s). K(s) measures the algorithmic complexity, or the algorithmic information, of s. K(s,t), the joint complexity, is the length of a minimal program for the string "s,t". K(s:t) = K(s) + K(t) - K(s,t) is the mutual information of s and t.
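Since K(s) is not computable, any concrete illustration must substitute a real compressor for it. The sketch below is an illustration only, not a construction from the paper; it approximates complexities by zlib-compressed lengths (an assumption of convenience) and mimics the identity K(s:t) = K(s) + K(t) - K(s,t):

```python
import zlib

def C(s: bytes) -> int:
    """Crude, compressor-based stand-in for the uncomputable K(s)."""
    return len(zlib.compress(s, 9))

def mutual_info(s: bytes, t: bytes) -> int:
    """Mimics K(s:t) = K(s) + K(t) - K(s,t) using compressed lengths."""
    return C(s) + C(t) - C(s + t)

s = b"ab" * 500           # highly regular: compresses far below its length
print(C(s), len(s))
print(mutual_info(s, s))  # a string shares information with itself
```

The same substitution of compressed length for K underlies the normalized compression distance literature; here it only serves to make the definitions tangible.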
K(s|t) = K(s,t) - K(t), the conditional complexity, is the length of a minimal program that has to be added to a generator of t in order to obtain a program for s. An infinite string s is k-random iff, beyond a certain length, none of its prefixes can be compressed more than k bits.

Random data has no regularity, thus nothing to learn from it. What we expect to discover in an apparently random universe are constraints that prove that certain rational predictions are still possible. But there is almost nothing to learn from trivially regular data either. Take, for example, two infinite strings: s1 ∈ (a+b)* is random and s2 = a...a; s1 has no structure and K(s1) is infinite, s2 has a trivial one and its K(s2) is almost null. They are equally uninteresting since they have simple explanations: s1 is random, s2 is a string of a's. They both lack sophistication, a concept formalized in [14]. He assumes that observations derived from a physical universe are produced by processes that have both a deterministic structure and a random component. Let us review some of his definitions.

D, S are sets of strings, s_n is the length-n prefix of s, and d1 ⊂ d2 denotes that d1 is a prefix of d2. A function p: D -> S is a process if d1 ⊂ d2 => p(d1) ⊂ p(d2). Processes provide generative explanations for strings. If a minimum size process p and an infinite random sequence d are found such that p(d) = s, then p contains the structure of s and d contains the random choices needed to instantiate p to s. If p(d) = s, then (p,d) is a description of s. The algorithmic complexity can be redefined as K(s) = min { |p| + |d| : (p,d) is a description of s }. If p(d) = s, then p and (p,d) are c-minimal iff |p| + |d| < K(s) + c. A compression program is a process that is c-minimal for every prefix of s. Sophistication(s) = min { |p| : p is a compression program for s }. Let Λ be the 0-length program.

BOTTA 559
From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.
A string a is random iff (Λ,a) is a minimal description for it, thus its sophistication is 0. [14] shows that the definition of compression program is independent of the universal Turing machine. Therefore, if it exists, it does determine the structure of a string. Consequently, a theory currently held valid need not be refuted by new data in order to be replaced. A more complex one may be a better explanation, if it achieves higher compression.

3. Regularities and Models

We shall use the above concepts for their expressive power, but, since K(s) is not computable, we shall operate with constrained versions of them. Here are two precedents. [16] introduced a definition of K restricted to a class of finite state machines instead of universal Turing machines. [12] defined a similar complexity for the elements of a language L. A grammar G for L can be seen as a machine executing a string of decisions that control the application of G's productions, and derives an element x ∈ L. K(x) is approximated by Σ log(n_i), where the i-th step is a decision among n_i alternatives. Therefore, we accept a restricted class of computations as an implicit bias, and use plex(x) to denote an approximation of K(x).

We shall claim the following points: (a) A short but non-minimal description (p,d) of a string still captures a fragment of its structure. A residual of the string's structure should be inducible by finding compressing descriptions for p and d (section 8). (b) An agent executing a random walk through a well structured universe should be able to induce a minimal description of its infinite string of sensorial information. This description should separate the randomness introduced by the exploratory walk from the fixed structure of that universe. [1] (c) A Bayesian argument shows that the most promising generalization of a single string s is its compression program p. This will provide a new perspective on explanation-based generalization (section 10).
Intuitively, regularity can be used for prediction. Given partial knowledge, data can be reduced to a shorter code that embodies the missing knowledge. Let S, C be sets of strings. S contains descriptions of objects, and C contains codes. M is a class of processes m: C -> S. We introduce now our main definitions:

(i) A model m for a string s is a process that is able to compress some fragments {wi} of the string s into the code strings {ci}. It captures a regularity of s: m(ci) = wi & Σ|wi| > plex(m) + Σ|ci|
(ii) parses(m,w) <=> ∃ c ∈ C such that m(c) = s and w ⊂ s
(iii) code(m,w) = min { c : w ⊂ m(c) }
(iv) gain_s(m) = |w| - ( |code(m,w)| + plex(m) )
(v) press_s(m) = gain_s(m) / |w| (compression factor)
(vi) A model that parses an entire string is a complete model for that string.
(vii) A partial model is one that can account for only some fragments of the string. For example, the rule: aab is always followed by bbab or bbb. Also, an agent exploring a partially known territory will alternate among confusion, recognition of a familiar space, and prediction within that space. A partial model can be easily extended to a complete one by using the identity function on unparsable fragments.
(viii) A complete model that achieves the maximum compression represents a complete regularity. It is a compression program and, consequently, it reduces the data s to a random code c. All other models are partial regularities.

Example 3.1 [1] Let s be a string in {0,1}* where the i-th position is 1 if i is prime and 0 otherwise. Here are three models: m1 enumerates the prime numbers. It can generate the entire string. It is a complete model, but also a complete regularity since it captures the entire structure of s. There is no random component to s, thus m1 reduces s to a null code. m2 produces a 0 in position i, if i is even, and uses a code string in {0,1}* to determine what to produce in odd positions.
This is a complete model that captures only a partial regularity. m3 only predicts that every 1 in s is followed by a 0, except for the first 2 positions. This is a partial model. It cannot parse the entire string but only the 10 sequences.

4. Induction of Models

We focus now on incremental induction of complete models for an infinite string s. A fixed order enumeration would discard a hypothesis as soon as it fails to parse a prefix of s, losing information on initial successes. Therefore we start with a small set of simple models as hypotheses and refine them when they fail. Let m be a model that parsed the prefix w but failed on the next symbol a. Assume a function refine that takes m, a, and some information on how m failed, and returns a set of models m' that can parse wa. An incremental refinement procedure requires press_wa(m') to be computable from press_w(m). In [1] we gave sufficient conditions for this approach. We also showed that:

(i) If h_n is the hypothesis with the highest press_sn(h_n) after reading the prefix s_n (the best at step n), and s = m(c) where m is a bijective process of finite complexity and c is a k-random code, then lim_{n->∞} [ press_sn(h_n) - press_sn(m) ] = 0

[1] presents an incremental refinement algorithm to induce finite automata models. It maintains a set H of hypotheses (A,q,g) where A is a transition graph, q the current state, and g the current gain. Initially H = {(A,q,0)} where A has one state and no transitions. Let a be the current symbol read from s. Parsing fails when there is no a-transition from the current state. An a-transition is added to the failed hypothesis, in all possible ways. Maintaining, for every state q, the number of times q was reached in parsing, allows us to compute press incrementally. (i) insures that, if the right hypothesis is in H, it is likely to emerge as the best one.

(Table: models induced in one run, e.g. m2: (aab+abbb(aab+abbb))*.)

Example 4.1 [1] We applied this algorithm on a string generated by taking random decisions for the '+' in the regular expression E = (aab (aab + abbbabbb) abbb)*.
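The gain and press of a simple dictionary model on strings drawn from E can be computed directly. A minimal sketch (ours, not the automata-refinement algorithm of [1]; the plex value is an arbitrary illustrative charge for the model):

```python
def encode(w, rules):
    """Greedy left-to-right parse of w into code symbols; None if unparsable."""
    out, i = [], 0
    while i < len(w):
        for pattern, symbol in rules.items():
            if w.startswith(pattern, i):
                out.append(symbol)
                i += len(pattern)
                break
        else:
            return None
    return "".join(out)

def press(w, rules, plex):
    """press_w(m) = (|w| - |code(m,w)| - plex(m)) / |w|, per definitions (iv)-(v)."""
    code = encode(w, rules)
    if code is None:
        return None
    return (len(w) - len(code) - plex) / len(w)

m1 = {"aab": "a", "abbb": "b"}  # the model that names the two building blocks
# Two blocks of E = (aab (aab + abbbabbb) abbb)*:
w = "aab" + "aab" + "abbb" + "aab" + "abbbabbb" + "abbb"
print(encode(w, m1))             # code over the block alphabet
print(press(w, m1, plex=7))      # positive: the model compresses w
```

The parse is unambiguous because no position can start both an aab and an abbb occurrence; a model that fails to cover a fragment simply returns no code, mirroring a partial model before the identity-function extension.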
Complexity of models was measured by the number of transitions. Two less performant complete models, m1 and m2, emerged before the one equivalent to E, m3, was found. One of the runs is summarized in the above table.

5. Basic Regularities

Searching for partial models is particularly appropriate when the target structure is based on a finite set of smaller substructures. [6] presents a related idea. He defines the d-diameter complexity of a spatial structure X (a container of gas, a crystal, a live cell, etc.) as:

K_d(X) = min [ K(o) + Σ_i K(X_i) ]

where { X_i : diameter(X_i) ≤ d } is a partition of X. It is a minimum length description of X as an assemblage of parts, without cross-references, each not greater than d. When d grows from 0 to diameter(X), K_d(X) decreases since it can take advantage of more complex substructures and more distant correlations. Changes occur at the points where d exceeds the diameters of important substructures of X. Therefore K_d(X) provides a kind of spectrogram of the object X.

An ideal vocabulary of building blocks would contain elements as different from each other as possible, since any substantial amount of shared information among blocks could support a smaller vocabulary. Therefore they should be algorithmically independent. This allows us to define a criterion for identification of good building blocks:

(i) If a structure is composed from highly algorithmically dissimilar constituents, then those constituents can be identified, during model refinement, by local maxima of compressibility.

Here is an argument for our claim. Let w1, w2, w3 be fragments of a string s. Let m be a model for w1 but not for w2 or w3. We assume that w1 and w2 are algorithmically independent (dissimilar). Their mutual information is very small. Thus we can expand a minimal program for w1 in order to produce w2, but the addition will be almost as large as a minimal program for w2 alone. Also, let w1 and w3 have a similar structure.
Thus a program for w1 needs only a small addition to generate w3, since the mutual information of w1 and w3 is almost the same as the total information of w3. The two assumptions can be expressed as:

(a) K(w1 : w2) < ε <=> K(w2 | w1) > K(w2) - ε
(b) K(w3 | w1) < ε <=> K(w3 : w1) > K(w3) - ε

Let m' be a minimal refinement of m that parses w1w2. Unless plex(m') = plex(m) + K(w2|w1), that is, unless m' is substantially more complex than m, m' will not be able to maximally compress w2. But (a) makes such a refinement unlikely. Let us look at one of the smallest additions that could take m out of the impasse. This is the copy program, that prints its input to the output, coding every string by itself. Let m' be m endowed with copy, and let us estimate its performance on w1w2. If w2 is coded by itself, then |code(m',w1w2)| = |code(m,w1)| + |w2|. It follows that:

press_w1w2(m') = (|w1w2| - |code(m',w1w2)|) / |w1w2|
  = (|w1w2| - |code(m,w1)| - |w2|) / |w1w2|
  = (|w1| - |code(m,w1)|) / |w1w2|
  = (|w1| / |w1w2|) press_w1(m) < press_w1(m)

Therefore m' can parse anything but can compress less. Its performance decreases with the length of w2, and assumption (a) forces w2 to be a relatively long string. (If it were a short one, its complexity would be small, hence it would have been derivable by a program for w1 with a small addition.) To parse w3, m has to be refined too. But (b) shows a small addition to m yields a high compression performance on w1w3. Certain classes of models may not allow expression of that small addition as a refinement. For example, a recursive invocation is a small addition not expressible in a finite automata class. Nevertheless this discussion shows that there is a trade-off point beyond which models should not be refined any more. If the encountered novelty w2 is small (that is, similar to the model), then it is likely to be incorporated into the model without a serious loss in compressibility.
If the novelty is large, all small additions to the model will degrade compressibility; looking for a different model is a better solution.

This principle justifies the following definition of basic regularity. Let us consider a refinement that increases complexity by a small quantum. Let m << m' denote that m' is obtained from m through one or more refinement steps. Let 0 < h < 1.

(ii) We define an h-basic regularity of s to be a model m with the property: if m' >> m & press_s(m') ≥ press_s(m), then ∃ m'' such that m' >> m'' >> m & press_s(m'') < press_s(m) - h

The parameter h affects the separation between regularities; a higher h forces a higher separation.

Example 5.1 A similar criterion is used in [2]. It describes an algorithm (Procrustes) that partitions a text into fragments. It starts with 1-letter fragments and refines them by concatenation with continuations encountered in the text.
Indeed, they reflect the main components of the string: the sequences aab and abbb.

Example 5.4 [1] Let S be a set of 36 points in a two-dimensional space (fig. 1.a), given as a set of vectors [xi, yi] of 8-bit integers. A routine r = {p1, ..., pk} ∈ R is a set of vectors. It has one argument p of type point. Let r(p0) = {p0+p1, ..., p0+pk}. Also plex(r) = 16k. A routine translates its points to different positions. We shall look at models that are sets of routines of the same complexity (see next section). For this structure we shall study gain(m) by considering, in turn, routines with an increasing number of points. Every vector in S is the sum of 3 points, from s1, s2, s3 respectively (fig. 1.d). Look now at the local maxima of gain (fig. 1.e). The routine with 4 points (fig. 1.b) identifies s3. The one with 9 points (fig. 1.c) identifies s1+s2.

Induction of basic regularities can assist a shift in representation bias. They provide a vocabulary for a new, more compact representation of data. The next section establishes a connection between such a vocabulary and the emergence of categories in conceptual clustering.

6. Libraries

Libraries represent a set of partial regularities. Given M, a class of models, let L(M) be the power set of M. A model L ∈ L(M) can be seen as a library of routines, each of which can generate, with appropriate input data, some fragments of the string s. If the regularities in L do not cover the entire string, we can always add to L the identity function, to turn the partial model into a complete one. The string s will be coded by the model into a sequence of invocations and routine parameters. The performance of L will take into account the size of reference, that is, the information necessary to distinguish among the routines. If s is a concatenation of fragments wij = ri(codeij), then

(i) gain_s(L) = Σ_i ( Σ_j ( |wij| - |reference_i| - |codeij| ) - plex(ri) )

Example 6.1 [3] defines two types of library models for finite strings: sequences and classifications.
Classifications are partitions of the alphabet into classes of symbols. A library corresponding to a classification would have a routine for each class. Routine arguments would distinguish among symbols within a class. Sequences and classifications can be composed in both directions to generate more complex routines: classes of sequences and sequences of classes.

Example 6.2 [1] Let S = {x1, ..., xn} be a set of vectors of natural numbers. Given a fixed point x0, S can be expressed as S' = {y1, ..., yn} where yi = xi - x0. This is equivalent to defining a routine r_x0 such that xi = r_x0(yi). If |xi| is the sum of representation lengths for its numeric coordinates, the gain of representing S by r_x0 is

gain_S(r_x0) = |S| - |S'| - |reference(r_x0)| = Σ_i |xi| - Σ_i |yi| - |x0|

A partition of S into the sets Cj, j = 1, ..., k, together with a set of points {x01, ..., x0k} such that (∀ j = 1, ..., k) gain_Cj(r_x0j) > 0, is in fact a clustering of S. This is not surprising, since finding any good clustering for S is equivalent to discovering a part of its structure. Let H(xi|Cj) be the information needed to retrieve xi ∈ Cj, knowing Cj. A good clustering would minimize Σ_{j=1..k} ( lg |Cj| + Σ_{xi ∈ Cj} H(xi|Cj) ), which is equivalent to maximizing the gain of representing S by the library of routines {C1, ..., Ck}. Clusters defined by a central point require simple routines. Libraries allow more complex examples. Let S = {x1, ..., xn} be a cloud of points arranged around a certain curve C. Each xi can be expressed as xi = yi + zi, where zi is a proximal point on C. But zi can be reduced to the equations for C and a scalar parameter ci. Thus, a more complex routine can reduce each xi to r(ci) + yi.

Assuming that we have a parsing procedure for each routine, we have to decide which routine in the library should parse the current string fragment. Some spaces have well defined fragments. For example, attribute-value vectors are classified one at a time.
Texts, visual images, speech, or handwriting, however, do not have well defined fragments. Routines represent regularities. Fragmentation is the problem of recognizing an instance of a regularity and deciding its boundaries. A Bayesian approach (based on algorithmic probability) yields a minimal description length criterion: given the fragment w, choose r to minimize K(r) + K(w|r). Given a class of models, and incompressible codes, this amounts to min (plex(r) + |code(r,w)|) <=> max press_w(r).

A library operates a classification on fragments. Every routine in a library defines a class (or cluster) that contains all fragments parsed by it. The routine captures the common algorithmic information shared by those fragments - ideally, their structure. Therefore libraries are theoretical models for classification. A pattern induced from a limited amount of data may turn out not to be a basic regularity in view of further data. Research in conceptual clustering (for example, the COBWEB and CLASSIT algorithms surveyed in [9]) encountered the problem and described an evolution of clusters through (a) merging or (b) splitting.

(a) Two clusters that were initially well separated may merge if new points are added in the space between them. Routines should adapt as well. For some representations (vectors, first order formulae, etc.) a merging can be defined as a minimal common generalization; for other representations this is an open problem. If the past fragments were available, a common model could be induced from them.

(b) A routine r maps all fragments {xi | i ≤ l} parsed by it into the codes {ci}. If a regularity allows the set {ci} itself to be compressed by a library {p, q}, that is, ci = p(di) for some of the fragments and ci = q(ei) for the others, then r can be split into the pair r1 = p@r and r2 = q@r (where @ stands for composition of functions). This operation can be interpreted as specialization, since the classes defined by r1 and r2 are contained in the class defined by r. In many approaches to learning from examples, specialization of hypotheses is prompted by negative examples.
Here the more specific r1 and r2 are the result of positive examples! (see this distinction in section 9).

7. Stepwise Induction

An experiment reported by [10] showed that human subjects try to recognize a regularity, and then focus on the residual parts of the data that are not accounted for by that regularity. Our approach supports this view. Once the compressing description s = m0(s0) was found, the residual information is contained both in the model m0 and the code s0. A second induction step can be attempted on each:

(a) find m1, s1 such that s0 = m1(s1) & plex(m1) + |s1| < |s0|
(b) find m', s' such that (∀x)( m'(s',x) = m0(x) ) & plex(m') + |s'| < plex(m0)

While (a) is induction on strings (this time code strings), (b) requires the availability of a model class appropriate for descriptions of objects of class M0. (For example, let M be the set of finite automata, and s be a string of balanced parentheses like ((())()()(((())))(())(). M will never capture the recursive structure of the parentheses. That will be visible only to a model class able to recognize similar parts in the automaton and code them as subroutines.)

Let stepwise induction denote this approach. It provides a way to induce hierarchical structures one level at a time. Different levels of a hierarchy might share a large amount of information. (For example, structures that accept models like m@m@m@m.) This suggests that the sequence of levels itself may become an object for induction. Many interesting learning problems are intractable. However, for an l-level hierarchical structure, where each level has complexity less than C, a stepwise induction algorithm will take polynomial time.

Example 7.1 Let us stop the algorithm in example 4.1 at a basic regularity: m1. Let m1 code aab by a and abbb by b. The same algorithm applied on the code will find m1 again, because the string s is a hierarchy: m3 = m1@m1.

Example 7.2 [1] The set of points in example 5.4 has a hierarchical structure too. The code produced by the 4-point routine (fig. 1.b) is a set, c1, of 9 points. By applying the same procedure to it, we find the triangle (fig. 1.d) as a 2nd-level basic regularity. The 2nd model will code c1 into a 3-point set identical to s1.
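Stepwise induction on example 7.1 can be sketched directly: code the string with m1, then apply the same coder to the resulting code. A minimal illustration (ours; the greedy parse is an assumption of convenience):

```python
def encode(w, rules):
    """Greedy left-to-right parse of w into code symbols; None if unparsable."""
    out, i = [], 0
    while i < len(w):
        for pattern, symbol in rules.items():
            if w.startswith(pattern, i):
                out.append(symbol)
                i += len(pattern)
                break
        else:
            return None
    return "".join(out)

m1 = {"aab": "a", "abbb": "b"}
# A string of E = (aab (aab + abbbabbb) abbb)*: block 1 chooses aab,
# block 2 chooses abbbabbb.
s = ("aab" + "aab" + "abbb") + ("aab" + "abbbabbb" + "abbb")
code1 = encode(s, m1)      # first level: building blocks renamed a and b
code2 = encode(code1, m1)  # the code is itself an (aab+abbb)* string
print(code1, code2)
```

The second parse succeeds because each outer block of E, once its constituents are renamed, again spells aab or abbb; this is exactly the hierarchy m3 = m1@m1 that makes the same algorithm applicable one level up.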
[2,3] apply a similar procedure on texts.

How is background knowledge used in learning? In [1] we defined analysis as the part of the learning process that relates the new datum (w) to the background knowledge (B) in order to separate two components: K(B:w), the part of w that is recognized as known, and K(w|B), the part of w that is really new. Analysis is well defined when B is given as a formal system that generates the set of the learner's expectations. Explanation based generalization (EBG) [8,17] is such a case. Let us see how it looks in our framework.

Take a formal system T with axioms A that generates a language L. For every object e ∈ L there is at least one sequence of operations d that derives e from A. Some arbitrary choices (among applicable rewriting rules, etc.) are made in order to build d, rather than some other derivation. Let c be this set of choices. Then c contains that part of the structure of e that is not determined by the fact that e is an element of L. In our terms, T is a model of e and c = code(T,e). The proof c represents the novelty K(e|T). Assume now that examples e1, ..., en from L illustrate an unknown concept X ⊂ L. How can we use in the induction of X the knowledge that we already have in T? We derive each ei from T, that is, we use A as a model to parse them, and we extract their codes {ci}. If there are any common features among the e's that distinguish them from other elements of L, they must be represented in
Consider the proof p(a,b) => p(f(a),b) => p(f(f(a)),b) => p(f(f(f(a))),b) => q(f(f(f(a))),b) => q(f(f(a)),g(b)) => q(f(a),g(g(b))) => q(a,g(g(g(b)))). A possible model for the proof structure is m(N) = A1(X,Y)^N A2(X,Y) A3(X,Y)^N.

When A cannot derive e, the failed attempts contain the novelty. They can be used to refine the model [11,18]. Traditional approaches to learning need negative examples to specialize a theory. We show now how EBG can do this with positive examples. The theory is a model m: C -> E. Endowing a learner with m as background knowledge is equivalent to constraining its expectations to m(C), the range of m. Let e be an example of a target concept. Therefore we assume that e is parsable and e = m(c). EBG generalizes c. In the absence of other constraints, the natural generalization of c is its compression program. Short of that, any model f would do. Let c = f(c'). Let C' be the range of f. The only constraint on generalizing c to f is C' ⊂ C, so that m can apply! The new model, m' = f⊕m, is a specialization of m since it constrains m(C) to m(C'). Our perspective shows EBG to be a valid method beyond the usual first order logic. [15] reached a similar conclusion by a different path.

Example 8.2 [1] Let G0 be a grammar with the following productions: {(1) S -> ε, (2) S -> aSa, (3) S -> bSb}. L(G0) = { w w̄ | w ∈ (a+b)*, w̄ = reverse(w) }. Productions are numbered to allow G0 to code a string in L(G0) by a string of production numbers, in the order of their application. Let us examine the string e = aabbaaaabbaabbbbbbbbaabbaaaabbaa = G0(c), where c = 22332222332233331. Assume we found in c the following regularity: c is made up of pairs 22 and 33, except for the last character. This enables us to compress it into c = G1(c'), where c' = 232232331 and G1 = {(1) S -> 1, (2) S -> 22S, (3) S -> 33S}. The structure of e is contained in the new theory G2 = G1⊕G0. We operate this composition by noting that G1 forces productions 2 and 3 in G0 to be applied twice in a row.
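Example 8.2's coding of a string by production numbers, and the pair regularity that compresses that code, can be sketched as follows. The function name and the greedy peeling strategy are illustrative assumptions, and the string e is chosen to be consistent with the code c given in the text:

```python
# A small sketch of example 8.2: G0's numbered productions code a string
# e = w reverse(w) as a string of production numbers, and the regularity
# "productions come in identical pairs" compresses that code under G1.

def g0_code(e):
    """Peel matching outer symbols: 'a' is production 2, 'b' is production 3,
    and the final empty string is production 1."""
    code = ""
    while e:
        assert e[0] == e[-1], "not in L(G0)"
        code += "2" if e[0] == "a" else "3"
        e = e[1:-1]
    return code + "1"

e = "aabbaaaabbaabbbbbbbbaabbaaaabbaa"
c = g0_code(e)
print(c)                                   # 22332222332233331

# Regularity: productions 2 and 3 always occur in identical pairs.
body = c[:-1]
pairs = [body[i:i + 2] for i in range(0, len(body), 2)]
assert all(p in ("22", "33") for p in pairs)
c1 = "".join(p[0] for p in pairs) + "1"    # the code under G1
print(c1)                                  # 232232331
```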
Thus G2 = {(1) S -> ε, (2) S -> aaSaa, (3) S -> bbSbb}. In conclusion, G0 was specialized to G2 while the set {e} was generalized to L(G2) = { w w̄ | w ∈ (aa+bb)*, w̄ = reverse(w) } ⊂ L(G0).

REFERENCES
[1] Botta, A. 1991. A Theory of Natural Learning. Ph.D. Thesis, NYU, Courant Institute of Mathematical Sciences.
[2] Caianiello, E.R.; Capocelli, R.M. 1971. On Form and Language: The Procrustes Algorithm for Feature Extraction. Kybernetik 8, Springer Verlag.
[3] Caianiello, P. 1989. Learning as Evolution of Representation. Ph.D. Thesis, NYU, Courant Institute of Mathematical Sciences.
[4] Chaitin, G.J. 1975. A Theory of Program Size Formally Identical to Information Theory. J. of the ACM 22(3).
[5] Chaitin, G.J. 1977. Algorithmic Information Theory. IBM Journal of Research and Development.
[6] Chaitin, G.J. 1978. Toward a Mathematical Definition of "Life". In Levine, R.D.; Tribus, M.: The Maximum Entropy Formalism, M.I.T. Press.
[7] Cohen, W.W. 1988. Generalizing Number and Learning from Multiple Examples in Explanation Based Learning. Proceedings of the 5th International Conference on Machine Learning.
[8] DeJong, G.F.; Mooney, R. 1986. Explanation Based Learning - An Alternative View. Machine Learning 1.
[9] Gennari, J.H.; Langley, P.; Fisher, D. 1989. Models of Incremental Concept Formation. Artificial Intelligence 40.
[10] Gerwin, D.G. 1974. Information Processing, Data Inferences, and Scientific Generalization. Behavioral Sciences 19.
[11] Hall, R. 1988. Learning by Failing to Explain: Using Partial Explanations to Learn in Incomplete or Intractable Domains. Machine Learning 3(1).
[12] Hart, G.W. 1987. Minimum Information Estimation of Structure. Ph.D. Thesis, MIT, LIDS-TH-1664.
[13] Holder, L.B. 1989. Empirical Substructure Discovery. Proceedings of the 6th International Conference on Machine Learning.
[14] Koppel, M. 1988. Structure. In Herken, R. (ed.): The Universal Turing Machine - a Half-Century Survey. Oxford University Press.
[15] Laird, Ph.; Gamble, E.
1990. Extending EBG to Term-Rewriting Systems. Proceedings of the 8th National Conference on AI.
[16] Lempel, A.; Ziv, J. 1976. On the Complexity of Finite Sequences. IEEE Trans. on Information Theory IT-22(1).
[17] Mitchell, T.; Keller, R.; Kedar-Cabelli, S. 1986. Explanation Based Learning - A Unifying View. Machine Learning 1.
[18] Mostow, J.; Bhatnagar, N. 1987. Failsafe - A Floor Planner That Uses EBG To Learn From Its Failures. Proceedings of the 10th IJCAI.
[19] Pednault, E.P.D. (ed.) 1990. Proceedings of the Symposium on the Theory and Applications of Minimal Length Encoding. Preprint.
[20] Rissanen, J. 1986. Stochastic Complexity and Modeling. The Annals of Statistics 14.
[21] Shavlik, J.W. 1990. Acquiring Recursive Concepts with Explanation Based Learning. Machine Learning 5(1).
[22] Solomonoff, R.J. 1964. A Formal Theory of Inductive Inference (Parts I and II). Information and Control 7.
564 LEARNING THEORY AND MDL
An Algorithm for Real-Time Tracking of Non-Rigid Objects

John Woodfill and Ramin Zabih
Computer Science Department
Stanford University
Stanford, California 94305

Abstract

We describe an algorithm for tracking an unknown object in natural scenes. We require that the object's approximate initial location be available, and further assume that it can be distinguished from the background by motion or stereo. No constraint is placed on the object's shape, other than that it not change too rapidly from frame to frame. Objects are tracked and segmented cross-temporally using a massively parallel bottom-up approach. Our algorithm has been implemented on a Connection Machine, and runs in real time (15-20 frames per second).

1 Introduction

Recently there has been a great deal of emphasis in AI on the construction of autonomous artifacts, such as Brooks' proposed mobile robots [Brooks, 1986]. Perception will be a vital part of such robots, as the primary source of information about the dynamic environment. Due to the difficulty of real-time vision, however, mobile robots have often had to rely on other modalities (such as touch or sonar). This in turn has restricted the tasks that such robots can perform. Our goal is to construct visual capabilities suitable for autonomous robots. Real-time performance in unstructured domains is a critical step towards this end. In this paper, we concentrate on moving objects. In particular, we are interested in tracking objects whose shape is neither fixed nor known a priori. Such a capability will be important for navigation, interacting with humans, visual servoing and possibly model construction. Work on motion tracking has concentrated on rigid objects of known 3-dimensional shape [Gennery, 1982; Verghese et al., 1990]. These approaches have difficulty dealing with unstructured environments, such as office buildings or parks.
A robot that required a model of every object that could appear in an office, or every animal that might wander by in a park, would not be very useful. Even if such a model database were available, vision techniques that assume rigidity would have trouble dealing with curtains, people, kittens and other highly deformable objects.

718 VISION AND SENSOR INTERPRETATION

Many vision researchers have attempted to produce detailed information about the 3-D motion of objects, and have concentrated on rigid objects to simplify the task. Our approach is to produce less information about a broader class of objects. In particular, we can handle objects of unknown shape undergoing arbitrary non-rigid motions, as long as the object is intermittently separable from the background along some modality (currently either motion or stereo). The principal restrictions are that the object's shape and position not change too drastically from frame to frame.¹ Under these conditions, we can determine the 2-D motion of the object. The output of our algorithm can be combined with a depth map (from stereo, for example) to produce a useful description of the object's behaviour. There are several reasons to believe that this approach is worthwhile. First, as noted above, many environments are full of non-rigid objects and objects that one would like to avoid having to model. Second, there is evidence that knowing the 2-dimensional motion of an object is sufficient for many tasks (especially if augmented with stereo depth). Horswill [Horswill and Brooks, 1988], for instance, has constructed a robot which follows moving objects based solely on two-dimensional tracking. Finally, our approach is computationally tractable and yields reasonable results. We have a parallel implementation of our algorithm which runs in real time on a Connection Machine, and we are currently exploring serial implementations on standard hardware.
We begin with some basic definitions and an overview of our algorithm. The algorithm has two parts: a motion computation, and a tracking and resegmenting phase. The motion computation (and a related stereo computation) are described in section 3, and the tracking phase is discussed in section 4. We then describe our Connection Machine implementation and present some results. Finally, in section 6 we survey related work. Pictures of the results of our algorithm are included at the end.

¹We will elaborate on these requirements at the end of section 4.2.

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

2 Overview

Since we are concerned with real-time performance, the representations used in our algorithm consist entirely of retinotopic maps at the same scale as the image. We have worked on 128 by 128 8-bit gray level images. The object being tracked, for instance, is represented as a boolean bitmap (sometimes called an "intrinsic image" [Barrow and Tenenbaum, 1978]). Such representations are ideal for massively parallel implementation. Our basic scheme operates on two images at a time. Denote the set of image pixels (points) by P, the gray-level intensity map of the first image by I_t(P), and the second by I_{t+1}(P). Assume that we have distinguished some object in the first image, and wish to determine its location in the second image. Let O_t : P -> {0,1} be a binary predicate (equivalently, a function from points to truth values) that holds at those points in the image I_t that are part of the object. The inputs to our algorithm are I_t, I_{t+1} and O_t, while the output is O_{t+1}. In other words, given a pair of images together with the position of an object in the first image, we compute the position of the object in the second image.² The first step of the algorithm is to compute a dense motion field, which can be viewed as a map M_t : P -> P.
M_t determines which point in the second image corresponds to each point in the first image. The method we use to compute M_t is detailed in section 3. The second step of the algorithm is to compute O_{t+1} from M_t and O_t. To a first approximation, O_{t+1} will hold at those points in I_{t+1} that correspond to points in I_t where O_t holds. Our approach, described in section 4, applies M_t to O_t and adjusts the result towards motion or stereo boundaries, while also smoothing its shape.

3 Computing Motion

The job of the motion computation is to efficiently produce a dense field of discrete motion displacements. We take a sequential pair of images (I_t and I_{t+1}) as input, and produce a motion map M_t as output. The resulting motion map is a "goes to" map, indicating where each pixel on the old image is likely to have moved to on the new image. The computation we use has two steps: an initial motion estimate, and a smoothing step.³ Our initial motion estimation scheme uses SSD (Sum of Squared Differences) correlation in a small local area. The smoothing step is similar to that used by Spoerri and Ullman [Spoerri and Ullman, 1987]. Initial motion estimation is intensity based. Motion displacements are bounded by a fixed radius r_m. Motions faster than this will not be detected. This parameter cannot be arbitrarily increased, since the initial motion estimation takes time O(r_m²).

²We will assume that the object's approximate initial position O_1 is supplied in some task-specific manner. This will be discussed briefly in section 5.

³Our algorithm has no commitment to the underlying motion computation. Any procedure that produces such displacements, such as [Horn and Schunk, 1981], could be used instead.

The SSD matching scheme determines, for each pixel p on the old image, what the most likely displacement for p is. The similarity measure is an SSD match on p and its 4-connected neighbors.
More precisely, for each possible displacement Δ, we compute the degree of mismatch by

E(Δ) = Σ_{||δ|| ≤ 1} (I_t(p + δ) - I_{t+1}(p + δ + Δ))².   (1)

The displacement for p is argmin_{||Δ|| ≤ r_m} E(Δ). Usually this will assign each pixel a unique displacement. After the initial estimation step, each pixel has a displacement. However, these displacements tend to be quite noisy. We assume that the scene's actual motions exhibit spatial coherence in order to smooth the initial estimates. We determine the most likely displacement for each pixel p, i.e. the most popular displacement in a region of fixed size r_s surrounding p. If a tie occurs, a displacement is chosen randomly from among the front runners. The smoothing step has two important properties. First, polling the potential displacements of the neighborhood tends to weed out noisy motions. Second, it results in larger regions of coherent motion, hence improving the motion boundaries. When two adjacent regions of the image move differently, a motion boundary occurs. Near a motion boundary many of the pixels polled will lie on the other side. As a result the most popular displacement may not get many more votes than the runner up. Various schemes have attempted to detect this situation using statistical tests [Black and Anandan, 1990; Spoerri and Ullman, 1987]. The problem appears to be difficult, however, and we take a different approach. Our motion computation produces motion boundaries that are not precise, but our tracking scheme attempts to compensate for this imprecision. Our motion computation produces fairly good results provided that several conditions are met. Intensity levels must vary sufficiently; otherwise the initial matching step will give ambiguous results. Objects must neither move too fast, nor too slowly. Objects moving further than r_m between a pair of frames cannot produce reasonable motion displacements. Objects moving too slowly can also fail to produce reasonable motion displacements.
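The two-step motion computation (SSD matching over p and its 4-connected neighbors, then winner-takes-all smoothing) can be sketched in NumPy. This is a slow serial reference sketch under assumed parameters, not the authors' Connection Machine code, and the smoothing below breaks ties deterministically rather than randomly:

```python
import numpy as np

def ssd_motion(I_t, I_t1, r_m=2):
    """Per-pixel integer displacement (dy, dx) minimizing the SSD of
    eq. (1) over p and its 4-connected neighbors, within radius r_m."""
    H, W = I_t.shape
    offsets = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # p and neighbors
    best_err = np.full((H, W), np.inf)
    best = np.zeros((H, W, 2), dtype=int)
    pad = r_m + 1
    A = np.pad(np.asarray(I_t, float), pad, mode="edge")
    B = np.pad(np.asarray(I_t1, float), pad, mode="edge")
    for dy in range(-r_m, r_m + 1):
        for dx in range(-r_m, r_m + 1):
            err = np.zeros((H, W))
            for oy, ox in offsets:
                a = A[pad + oy:pad + oy + H, pad + ox:pad + ox + W]
                b = B[pad + oy + dy:pad + oy + dy + H,
                      pad + ox + dx:pad + ox + dx + W]
                err += (a - b) ** 2
            better = err < best_err
            best_err[better] = err[better]
            best[better] = (dy, dx)
    return best

def smooth(disp, r_s=2):
    """Winner-takes-all smoothing: the most popular displacement in a
    (2*r_s+1)-square window (ties broken deterministically here)."""
    H, W, _ = disp.shape
    out = disp.copy()
    for y in range(H):
        for x in range(W):
            win = disp[max(0, y - r_s):y + r_s + 1,
                       max(0, x - r_s):x + r_s + 1].reshape(-1, 2)
            vals, counts = np.unique(win, axis=0, return_counts=True)
            out[y, x] = vals[np.argmax(counts)]
    return out

# Demo: a bright square shifted right by 2 pixels is recovered.
I_t = np.zeros((16, 16)); I_t[4:8, 4:8] = 1.0
I_t1 = np.zeros((16, 16)); I_t1[4:8, 6:10] = 1.0
d = ssd_motion(I_t, I_t1)
print(int(d[5, 5, 0]), int(d[5, 5, 1]))   # 0 2
```

Note the O(r_m²) candidate loop, which is why the paper says the radius cannot be increased arbitrarily.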
No sub-pixel displacements are computed. Hence if an object moves less than a pixel between frames, no motion may be apparent.

3.1 Stereo

In order to produce a stereo displacement map, we perform a computation very similar to the motion one. In our current implementation, we assume that the cameras are aligned so that the epipolar lines are camera scan lines, and also that the principal camera axes are parallel. As a consequence, we only consider unidirectional horizontal displacements. We have obtained good results using an SSD initial estimation phase followed by a smoothing step. In the stereo SSD estimation phase the sum in equation 1 is done for δ ∈ {(1,0), (0,1)}. Figure 2 shows the result of our stereo computation applied to the outdoor scene seen in figure 1. The computed displacements are shown; pale intensities in the figure correspond to large displacements, and hence to nearby objects.

Figure 1: An outdoor scene.

Figure 2: Stereo results for an outdoor scene.

WOODFILL & ZABIH 719

4 Tracking and resegmentation

In a scene with a single uniform motion, and ignoring the edge of the image, O_{t+1} = O_t ∘ M_t⁻¹ is correct. However, real image sequences contain multiple non-uniform motions, and in general M_t is not an automorphism.⁴ Imagine a scene with a square moving uniformly against a stationary background. In the second image I_{t+1}, immediately behind the square there will be pixels to which no pixel in I_t corresponds. Furthermore, immediately in front of the leading edge there will be pixels in I_t that have been occluded in I_{t+1}. These pixels in I_t will nevertheless determine pixels in I_{t+1} to which they appear to have gone, resulting in pixels in I_{t+1} to which more than one pixel in I_t corresponds. We first compute Ô_{t+1}, which is a generalization of O_t ∘ M_t⁻¹ to the case where M_t is not an automorphism. We then produce O_{t+1} by adjusting Ô_{t+1} towards motion or stereo boundaries.

⁴Recall that an automorphism is a bijection from a set to itself.
4.1 Tracking

Formally, define M_t⁻¹ : P -> 2^P as the inverse of M_t, i.e. M_t⁻¹(p) = { p' | M_t(p') = p }. In general M_t⁻¹(p) cannot be guaranteed to be a singleton set, so we define O'_{t+1}, an intermediate result, by

O'_{t+1}(p) = ⊥, if M_t⁻¹(p) = ∅;
              1, if there exists p' ∈ M_t⁻¹(p) where O_t(p') = 1;
              0, otherwise.

Then Ô_{t+1} will be O'_{t+1} except at those points where O'_{t+1} = ⊥. At those undecided pixels we use a winner-takes-all solution, taking a poll of the pixels in a neighborhood around p. Ô_{t+1} holds when O'_{t+1} = 1 at the majority of these pixels. This tie-breaking scheme has the effect of filling holes caused by non-rigid motion and by the discretization of motion vectors.

4.2 Resegmentation

As noted in section 3, our method for computing M_t is somewhat inaccurate near motion boundaries. Consequently, Ô_{t+1} will tend to have inaccurate boundaries. Since we assume objects are distinguishable from the background by some labeling (such as motion or stereo displacements), we adjust Ô_{t+1} towards the boundaries induced by the labeling to produce O_{t+1}. This adjustment is accomplished by applying a winner-takes-all local algorithm that adds and deletes pixels near the boundaries of Ô_{t+1}. We will describe the adjustment of Ô_{t+1} under the assumption that motion boundaries are used, but the same scheme can be used for stereo, or for other modalities.

As the motion displacements are discrete, M_{t+1} (produced from I_{t+1} and I_{t+2}) defines a set of connected components of pixels with the same label (motion displacement). These connected components are separated by motion boundaries. The resulting O_{t+1} ought to have two properties: its perimeter should be consistent with these motion boundaries, and it should diverge from Ô_{t+1} as little as possible. The first of these properties can be naturally measured component by component. A given pixel p is part of exactly one connected component.
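The tracking step defined in section 4.1 can be sketched as follows. This is an assumed serial reconstruction, not the authors' parallel implementation, and the poll radius is an illustrative parameter:

```python
import numpy as np

def track(O_t, M_t, poll_radius=2):
    """Push O_t forward through the motion map M_t. Pixels with an empty
    preimage under M_t are undecided, and are filled by a local
    winner-takes-all poll, as in section 4.1."""
    H, W = O_t.shape
    hit = np.zeros((H, W), bool)   # p has at least one preimage under M_t
    obj = np.zeros((H, W), bool)   # some preimage of p lies in the object
    for y in range(H):
        for x in range(W):
            dy, dx = M_t[y, x]
            ty, tx = y + dy, x + dx
            if 0 <= ty < H and 0 <= tx < W:
                hit[ty, tx] = True
                obj[ty, tx] |= bool(O_t[y, x])
    O_next = obj.copy()
    for y, x in zip(*np.nonzero(~hit)):   # undecided pixels
        win = obj[max(0, y - poll_radius):y + poll_radius + 1,
                  max(0, x - poll_radius):x + poll_radius + 1]
        O_next[y, x] = win.mean() > 0.5   # majority poll of the neighborhood
    return O_next
```

The hole-filling behaviour described in the text falls out of the majority poll: pixels left uncovered by the discrete motion vectors inherit the locally dominant object/background decision.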
Ô_{t+1} will hold at some of the pixels in that component; in particular, Ô_{t+1} will either hold at the majority of the pixels or will not. We define p to be a minority pixel if Ô_{t+1} holds at p but not at the majority of pixels in p's component, or if Ô_{t+1} does not hold at p but holds at the majority of pixels in p's component. The number of minority pixels is a natural measure of how inconsistent Ô_{t+1} is with the boundaries. In particular, if there are no minority pixels, then the boundaries of Ô_{t+1} line up precisely with the motion boundaries. We would like to produce a result O_{t+1} which minimizes the number of minority pixels. The second desired property is also easy to measure. We would like to produce a result O_{t+1} which agrees with Ô_{t+1} wherever possible. Define p to be a changed pixel if O_{t+1}(p) ≠ Ô_{t+1}(p). The number of changed pixels (the Hamming distance) is a natural measure of how different O_{t+1} is from Ô_{t+1}. In particular, if there are no changed pixels, O_{t+1} and Ô_{t+1} are identical. We would like to produce a result O_{t+1} which minimizes the number of changed pixels. Ideally, one would like to satisfy both these constraints at once. But if we start with Ô_{t+1}, reducing the number of minority pixels introduces changed pixels. Similarly, trying to keep the number of changed pixels at zero cannot reduce the number of minority pixels. Thus it is impossible to minimize both measures simultaneously. Rather, we must minimize some combination of the two measures. Many combinations are possible; our approach is to minimize the number of minority pixels first, and only secondarily to minimize the number of changed pixels. We now describe a simple algorithm which is provably optimal under this criterion. To produce O_{t+1} from Ô_{t+1}, consider each connected component in turn. If Ô_{t+1} holds at the majority of pixels in the component, then O_{t+1} will hold at every pixel in the component. Otherwise, O_{t+1} will hold at no pixel in the component.
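This component-majority rule can be sketched with an explicit connected-component labeling. The serial reconstruction below is illustrative only; it is the kind of global labeling the paper goes on to note is slow on parallel machines, which motivates the local approximation:

```python
import numpy as np
from collections import deque

def resegment(O_hat, disp):
    """Component-majority resegmentation. disp is an HxWx2 array of
    integer motion displacements; O_hat is the boolean object estimate.
    Each 4-connected component of constant displacement is kept exactly
    when the majority of its pixels carry O_hat."""
    H, W = O_hat.shape
    seen = np.zeros((H, W), bool)
    out = np.zeros((H, W), bool)
    for sy in range(H):
        for sx in range(W):
            if seen[sy, sx]:
                continue
            # BFS over the 4-connected component of equal displacement.
            comp, q = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < H and 0 <= nx < W and not seen[ny, nx]
                            and (disp[ny, nx] == disp[y, x]).all()):
                        seen[ny, nx] = True
                        q.append((ny, nx))
            vote = sum(O_hat[y, x] for y, x in comp) > len(comp) / 2
            for y, x in comp:
                out[y, x] = vote
    return out
```

Whole components are turned on or off at once, which is exactly what makes the rule optimal for minority pixels but occasionally too drastic, as discussed next.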
The resulting segmentation O_{t+1} is the union of the set of connected components in which the majority of pixels are in Ô_{t+1}. Unfortunately this simple algorithm has two undesirable properties. Both these properties result from the definition of minority pixels, which is non-local in nature.⁵ First, the adjustments can be too drastic. If I_{t+1} and I_{t+2} are identical, M_{t+1} will be the identity map. There will be one connected component, and hence either Ô_{t+1} will consist of less than half the pixels, and O_{t+1} will be empty, or Ô_{t+1} will be a majority and O_{t+1} will be all pixels. Second, the labeling of connected components is slow on parallel machines such as the Connection Machine. To ameliorate these shortcomings we use a purely local computation that is intended to approximate the notion of minority pixels in a local area around each pixel. If there is no motion boundary within a local region around p, O_{t+1}(p) is defined to be Ô_{t+1}(p). This condition on the adjustment phase allows us to track objects that are only intermittently separable from the background. So, for example, we can track an object which moves, and then stops, and then moves again, such as a pendulum. The adjustment phase of the tracking algorithm is intended to recover from inaccuracies in Ô_{t+1}. If the distance between the boundaries of Ô_{t+1} and the motion boundaries gets too large, local adjustment cannot recover. In particular, when a large expanse of an object comes into view suddenly, the motion boundary on the periphery of the object may be far from the boundaries of Ô_{t+1}. Thus the algorithm does not deal well with objects which change shape drastically from frame to frame. An additional limitation comes from internal motion boundaries (in other words, motion boundaries that occur within the object we are tracking, rather than between the object and the background).
If Ô_{t+1} ends up nearer to an internal motion boundary than the external motion boundary, local adjustment may actually force O_{t+1} to be further from the boundaries of the object than Ô_{t+1}.

⁵Finding the minority pixels requires computing the connected components of pixels with the same motion displacement. This can involve communication between pixels that are arbitrarily far apart.

5 Our Implementation

Our algorithm has been implemented on a 16-kiloprocessor Connection Machine at Xerox PARC. The algorithm runs in real time at a rate of 15-20 frames per second on 128 by 128 images (subsampled from the 512 by 512 output of a camera). This includes the overhead for image digitization and the shipment of image data into the Connection Machine, which occupies about a third of our running time. The basic motion computation described in section 3 is very fast; we can compute M_t at a rate of over 50 frames per second. The stereo computation shown in figure 2 is comparably fast. Our implementation has been tested on several dozen image sequences, which are typically 100 frames each. We have tried to make these sequences as different from each other as practical. Some results are shown at the very end of this paper. Each column consists of four images. The top image is the original image. The second is O_1, the initial location of the object in that image. The third image is a later image in the sequence, and the fourth image is the location of the object in the later image. All the images are 128 by 128 except the kitten sequence, which is 256 by 256. The initial segmentation is done on a case-by-case basis. In the data shown none of the initial segmentations are precise (they always include some of the background, and exclude some of the object). Empirically our algorithm is not very sensitive to the exact initial segmentation.

6 Related Work

Probably the closest approach to ours is the use of active contour models (also called "snakes") for tracking, first suggested in [Kass et al., 1987]. If supplied with an appropriate potential function (such as one with a local minimum along motion or stereo boundaries), snake-based tracking might work for our purposes. However, there are important differences. Snake-based tracking does not take advantage of any bottom-up motion estimates. Instead, the snake from image I_t is placed on image I_{t+1} and relaxed into place. In the event that the background also contains motion or stereo boundaries near where the object was, the snake will not necessarily track the object, even if the motion of the object is clear and unambiguous. In addition, one needs to design a potential function that will enable the snake to seek motion or stereo boundaries, a problem that is not necessarily trivial. Finally, our approach has produced real-time performance on real data; we are not aware of any similar results for snake-based tracking.

Horswill's work [Horswill and Brooks, 1988] has somewhat similar goals to our own, in that he wants to deal with people and other non-rigid objects. However, his approach is highly specialized to the environment that his robot inhabits and to the objects he wishes to follow. His robot can follow highly textured objects against a background that does not have much texture, at the very low resolution that his system uses. In the examples that interest us, both the objects of interest and the background have a lot of texture. More generally we do not believe that there are simple bottom-up primitives (such as amount of texture) which can segment an arbitrary scene correctly.

7 Future Work

Most of the work we have done has assumed that the object to be tracked can be separated from the background by motion. We hope that stereo boundaries will prove a better modality for segmentation than motion. We have an initial implementation of tracking with stereo boundaries, which is fast but not yet real-time. A longer term goal is to use our system for robotic applications. An implementation of our algorithm on standard serial hardware is also currently underway. Our algorithm currently yields only 2-D positional information about the object. While this can produce some 3-D information when combined with stereo, additional descriptions of the object's structure will undoubtedly prove necessary for some applications.

Acknowledgements

We wish to thank Harlyn Baker for his advice and guidance. John Lamping provided useful insights on our algorithm. Jon Goldman of Thinking Machines and Hal Moroff of Datacube supplied technical support. We also received useful comments from David Chapman, and help from numerous people at Xerox PARC. John Woodfill is supported by a Shell Doctoral fellowship. Ramin Zabih is supported by a fellowship from the Fannie and John Hertz Foundation. Additional financial support was provided by SRI, Xerox PARC, and Hewlett-Packard. We also wish to thank the Center for Integrated Facility Engineering.

References

Barrow, H. and Tenenbaum, J. M. 1978. Recovering intrinsic scene characteristics from images. In Hanson, A. and Riseman, E., editors, Computer Vision Systems. Academic Press. 3-26.
Black, Michael and Anandan, P. 1990. Constraints for the early detection of discontinuity from motion. In Proceedings of AAAI-90, Boston, MA. 1060-1066.
Brooks, Rod 1986. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation.
Gennery, Donald 1982. Tracking known three-dimensional objects. In Proceedings of AAAI-82, Pittsburgh, PA. 13-17.
Horn, Berthold and Schunck, Brian 1981. Determining optical flow. Artificial Intelligence 17.
Horswill, Ian and Brooks, Rod 1988. Situated vision in a dynamic world: Chasing objects. In Proceedings of AAAI-88, St. Paul, MN. American Association for Artificial Intelligence, Morgan Kaufman. 796-800.
Kass, Michael; Witkin, Andrew; and Terzopoulos, Demetri 1987. Snakes: Active contour models for machine vision. In International Conference on Computer Vision. IEEE. 259-268.
Spoerri, Anselm and Ullman, Shimon 1987. The early detection of motion boundaries. In International Conference on Computer Vision. 209-218.
Verghese, Gilbert; Gale, Karey; and Dyer, Charles 1990. Real-time, parallel motion tracking of three dimensional objects from spatiotemporal sequences. In Kumar, Vipin, editor, Parallel Algorithms for Machine Intelligence and Vision. Springer-Verlag. 310-339.
AUTOMATIC

R. L. Cromwell and A. C. Kak
Robot Vision Lab
School of Electrical Engineering, Purdue University
West Lafayette IN 47907 USA
cromwell@ecn.purdue.edu, kak@ecn.purdue.edu

Abstract

Object recognition requires complicated domain-specific rules. For many problem domains, it is impractical for a programmer to generate these rules. A method for automatically generating the required object class descriptions is needed; this paper presents a method to accomplish this goal. In our approach, the supervisor provides a series of example scene descriptions to the system, with accompanying object class assignments. Generalization rules then produce object class descriptions. These rules manipulate non-symbolic descriptors in a symbolic framework; the resulting class descriptions are useful both for object recognition and for providing clear explanations of the decision process. We present a simple method for maintaining an optimal description set as new examples (possibly of previously unseen classes) become available, providing needed updates to the description set. Finally, the system's performance is shown as it learns object class descriptions from realistic scenes: video images of electronic components.

Introduction

Consider the following unsolved problem in model-based computer vision: Suppose a system can discriminate between N different object classes in a library, possibly using one of the many modern graph-theoretic approaches (Bolles and Horaud 1986; Chen & Kak 1989; Faugeras & Hebert 1986; Grimson & Lozano-Perez 1984; Kim & Kak 1991; Oshima and Shirai 1983). Each model might be a graph whose nodes correspond to surfaces and whose arcs signify adjacency, each node and arc represented by a frame of attribute-value pairs. Now suppose we add a new object class to the library. Can the system be designed in such a manner that it would automatically figure out how to discriminate between the now N+1 classes? This problem is not the same as that addressed in (Hansen & Henderson 1988; Ikeuchi & Kanade 1988); those efforts focus on automatic generation of efficient single-class object recognition strategies rather than on multi-class discriminations. A recognition strategy is more concerned with sequencing various tests to quickly establish the identity and pose of an object, and less so with automatically generating descriptions that discriminate between different classes.

This work was supported by the National Science Foundation under Grant CDR 8803017 to the Engineering Research Center for Intelligent Manufacturing Systems.

To illustrate the problem of multi-class discriminations, suppose the system can already discriminate between red blocks and red cylinders. A computationally efficient system would seek a minimal set of features, and simply ignore color. If the system must now also recognize blue cylinders as a distinct class, the system should automatically add color to its model graph descriptions. In this paper, we show how such a system can be built using the methods of symbolic learning, methods that are perhaps best exemplified by the work of Michalski and his colleagues (Dietterich & Michalski 1979; Dietterich & Michalski 1981; Michalski 1980). Central to these methods is generalization, a concept also investigated by other researchers (Baim 1988; Hayes-Roth & McDermott 1977; Hayes-Roth & McDermott 1978; Vere 1975; Vere 1977). Stated superficially, generalization examines a set of symbolic descriptions to find statements that are true of all members. Generalization is not merely a search for common symbolic descriptors, but also draws conclusions such as "all objects are polygonal" when told "type-1 objects are square" and "type-2 objects are triangular". To form such "higher level" generalizations, the system must know hierarchical relationships between attribute values.
The methods proposed in the cited literature on symbolic learning addressed the problem of learning pattern descriptions, especially 2D patterns in simple line drawings, but their application in computer vision has not been easy. Since these contributions focused more on the process of learning generalizations, the authors assumed that perfect symbolic descriptions of the patterns were available. In practical computer vision problems, object descriptions are rarely purely symbolic. Even in 2D vision, segmentation problems may preclude categorical assessments of object shape. There may additionally be models that have, at a purely symbolic level, identical descriptions, although they differ with respect to numerical attribute values. Contrary to what might appear to be the case at first thought, the main difficulty with using numerical attribute values is not their usual associated uncertainty, but the large number of possible generalizations. Suppose we show two cylinders, of lengths 10 cm and 12 cm, to a learning vision system. The system may form any of the following five generalizations:

From: AAAI-91 Proceedings. Copyright ©1991, AAAI (www.aaai.org). All rights reserved.

1) All lengths are equal to or shorter than 12 cm.
2) All lengths are equal to or longer than 10 cm.
3) All lengths are between 10 cm and 12 cm.
4) All lengths are shorter than 10 cm or longer than 12 cm.
5) All lengths take one of two integer values: 10 cm or 12 cm.

Given several possible generalizations for an object class, one of them should be chosen to represent the class on the basis of the following principle: the generalization chosen should be maximally general while possessing sufficient discriminatory power for all the required interclass distinctions. If a human told the system that there existed a different class of objects consisting of two cylinders of lengths 14 cm and 18 cm, it should discard generalizations #2 and #4.
If at a later time a cylinder of length 10.5 cm were added to the first class, the system should discard (or modify) generalization #5.

In this paper, we have embedded these ideas in a working vision system that evaluates the different generalizations that are possible when numerically valued attributes are used and discards those violating the above principle. As new examples become available, possibly of previously unseen classes, the system automatically updates the object class descriptions as needed. In the experiments described at the end of this paper, the system first learned generalized descriptions for two object classes. Examples of a third class were then introduced, and the system updated the set of descriptions. The updated descriptions were used to classify previously unseen objects, and the error performance was measured. The error rates are clearly a function of the number of examples used for the formation of descriptions. Our preliminary results are encouraging in the sense that even with very few training samples, the system correctly identifies most of the previously unseen objects.

A learning system in our context must be capable of four critical tasks. The first is automatically forming graph-theoretic or first-order predicate-calculus generalized descriptions of objects, the two description types being isomorphic. The second is evaluating descriptions to test their discriminatory power. The third is modifying object class descriptions with available information until the desired discriminatory power is attained. The fourth is using new information as it becomes available to form new descriptions that possess greater discriminatory power and/or are more computationally efficient.

Even a learning system must possess what may be called generic knowledge, provided by the supervisor and possibly context-dependent.
The system must know that an entity has measurable qualities that can be described by attribute-value pairs (henceforth called AV-pairs). It must also be conversant in the language of object descriptions, meaning that it knows about all attributes measurable by available sensors, the ranges of values of these attributes, and how these AV-pairs can be embedded in a graph representation. Before explaining how the system can combine explicit example descriptions into generalizations, in the process discovering which attributes are useful for the desired discrimination, we must first formally define an object description.

Definition 1: Given n measurable qualities of an object, the ith quality expressed as AV-pair (Ai: Vi), a description D is a WFF with AV-pairs as sentence symbols,

D = ((A1: V1) ∧ (A2: V2) ∧ ... ∧ (An: Vn)),

and is thus a logical statement about the values taken by a set of attributes. Sensory information for an example object j provides an assignment function vj specifying explicit values for the attributes; the question "Does the description D hold for object j?" is answered by the truth value of vj(D).

Definition 2: For a single AV-pair, vj((Ai: Vi)) = TRUE if and only if any of the following is true: (a) Ai was measured for object j and found to have value Vi, or (b) Vi is a variable, or (c) Vi is an expression satisfied by the measured value (e.g., if vj specifies a length of at least 8, then vj((length: {X: 8 ≤ X})) = TRUE). If description D is a conjunction of more than one AV-pair, then vj(D) = TRUE if and only if vj((Ai: Vi)) = TRUE for all AV-pairs (Ai: Vi).

With a formal definition of a description and a method for determining if that description holds for an example object, we can now define a generalization.

Definition 3: Given N descriptions, each corresponding to one of N example objects, a generalization is a description that holds for all N objects. Given example descriptions (D1, D2, ...,
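Definitions 1 and 2 translate directly into executable form. The following is a minimal sketch (all names are invented, not from the paper) in which a description is a conjunction of AV-pairs and an AV-pair may carry an explicit value, a free variable, or an expression on the measured value:

```python
# Sketch of Definitions 1 and 2: a description is a conjunction of AV-pairs.
FREE = object()  # sentinel standing in for a free-variable value

def av_holds(measured, value):
    """v_j((A_i: V_i)) for a single AV-pair (cases (a), (b), (c) of Definition 2)."""
    if value is FREE:           # (b) V_i is a variable
        return True
    if callable(value):         # (c) V_i is an expression on the measured value
        return bool(value(measured))
    return measured == value    # (a) explicit value

def holds(description, example):
    """v_j(D): TRUE iff every AV-pair in the conjunction holds for the example."""
    return all(av_holds(example[attr], val) for attr, val in description.items())

jeep = {"color": "blue", "size": 20, "shape": "boxy", "wheels": 4}
D = {"shape": "boxy", "size": lambda x: 8 <= x, "color": FREE, "wheels": 4}
print(holds(D, jeep))   # True
```

The lambda plays the role of the set expression {X: 8 ≤ X}; a free variable makes the system ignore that attribute entirely.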
DN), the generalization is denoted by

∀ Di ∈ {D1, D2, ..., DN} : Di ⊨ D.

If vj(Di) = TRUE for any Di in the example set, then vj(D) = TRUE also. This definition all by itself is too weak to be useful. Consider the following three descriptions for each of the object classes jeep and car:

Djeep_1 = ((color: blue) ∧ (size: 20) ∧ (shape: boxy) ∧ (wheels: 4))
Djeep_2 = ((color: red) ∧ (size: 27) ∧ (shape: boxy) ∧ (wheels: 4))
Djeep_3 = ((color: white) ∧ (size: 30) ∧ (shape: boxy) ∧ (wheels: 4))
Dcar_1 = ((color: red) ∧ (size: 22) ∧ (shape: sleek) ∧ (wheels: 4))
Dcar_2 = ((color: blue) ∧ (size: 30) ∧ (shape: sleek) ∧ (wheels: 4))
Dcar_3 = ((color: white) ∧ (size: 32) ∧ (shape: sleek) ∧ (wheels: 4))

In accordance with the definition of a generalization, identical generalized descriptions could be found for the classes jeep and car:

Djeep = ((wheels: 4)), and Dcar = ((wheels: 4)).

Such generalizations are clearly useless for discriminating between jeeps and cars, as they are far too general. At the other extreme, the system could instead construct the following, which are disjunctions of the example descriptions:

D'jeep = ( ((color: blue) ∧ (size: 20) ∧ (shape: boxy) ∧ (wheels: 4))
        ∨ ((color: red) ∧ (size: 27) ∧ (shape: boxy) ∧ (wheels: 4))
        ∨ ((color: white) ∧ (size: 30) ∧ (shape: boxy) ∧ (wheels: 4)) )

Either a new class description is needed or an existing one should be modified. To make that decision, the high level control examines both the current set of class descriptions, as gleaned from the output of module 5, and all the example descriptions, as available from the current and any previous outputs of module 2. If new generalizations are needed for some class, the rules in module 3 produce them and provide them as input to module 5, where the high level control helps select the most discriminating generalizations by testing them against all the known example descriptions.
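The two useless extremes of the jeep/car example can be made concrete with a small sketch (the data is transcribed from the text; the helper style is invented):

```python
# The too-general and too-specific extremes of generalization.
jeeps = [
    {"color": "blue",  "size": 20, "shape": "boxy",  "wheels": 4},
    {"color": "red",   "size": 27, "shape": "boxy",  "wheels": 4},
    {"color": "white", "size": 30, "shape": "boxy",  "wheels": 4},
]
cars = [
    {"color": "red",   "size": 22, "shape": "sleek", "wheels": 4},
    {"color": "blue",  "size": 30, "shape": "sleek", "wheels": 4},
    {"color": "white", "size": 32, "shape": "sleek", "wheels": 4},
]

d_jeep = lambda ex: ex["wheels"] == 4    # D_jeep = ((wheels: 4)): too general
d_jeep_prime = lambda ex: ex in jeeps    # D'_jeep: disjunction of the raw examples

# The too-general description also holds for every car: zero discrimination.
print(all(d_jeep(e) for e in cars))      # True
# The disjunction discriminates perfectly but cannot cover any unseen jeep.
print(d_jeep_prime({"color": "green", "size": 25, "shape": "boxy", "wheels": 4}))  # False
```

The useful descriptions lie between these two extremes, which is what the control strategies below are designed to find.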
These selected generalizations then serve as classification rules.

[Figure 1: Flow of control of the learning system. Sensory data and human supervision yield example descriptions; module 2 groups example descriptions into classes; module 3 forms generalized descriptions; module 5 selects class descriptions, which serve as classification rules.]

Low Level Control of Generalization

The goal of what we call the low level control strategy is to limit the formation of generalizations that are devoid of any discriminatory power. For example, since the earlier generalizations Djeep and Dcar possess no discriminatory power, their formation would be blocked. Our low level control strategy, since it is tightly integrated with the computational process required to form generalizations, is best explained by first describing the steps required to form a generalization from example descriptions. This we do by returning to the car-jeep example. However, in order to simplify the explanation, we shorten the descriptions by including only the size and shape attributes. Say we have Nd descriptions of Na attributes each (here Nd = 3, Na = 2), and assume that Nr rule applications are possible for each AV-pair. For the size, the rules GCI, GPUB, GGLB, and GDC can be used. For the shape, whose values obey the hierarchy of Fig. 2, the rules GCGT and GDC can be used.

[Figure 2: The hierarchy of shape values: boxy generalizes to vehicle, and vehicle to generic.]

Multi-conjunct descriptions are generalized by a split-generalize-merge sequence. In order to both control and maintain a trace of the generalization process, a network is formed for each class. The first state of the jeep network is Nd disconnected nodes, one for each example description. It is important that these descriptions have equivalent structures. By that we mean that if all values in each description were replaced by free variables, then all descriptions would be logically equivalent. If the descriptions are not structurally equivalent, additional AV-pairs with free variables for values are added as needed.
The Nd descriptions are split into Na attribute clusters, linking the NdNa new nodes to the initial descriptions as in Fig. 3.

[Figure 3: Splitting initial descriptions into clusters. Each initial description, e.g. ((size: 20) ∧ (shape: boxy)), is linked to one node per attribute: (size: 20), (size: 27), (size: 30), and (shape: boxy).]

Nr generalization rules are applied to each attribute cluster, and the NaNr cluster generalization nodes are linked to the cluster nodes, as in Fig. 4.

[Figure 4: Cluster generalizations. The size cluster yields (size: {X: 20 ≤ X}), (size: {X: X ≤ 30}), (size: {X: 20 ≤ X ≤ 30}), and (size: X); the shape cluster yields (shape: boxy), (shape: vehicle), (shape: generic), and (shape: X).]

All possible generalizations of the initial set of descriptions may be formed by selecting one descriptor from each set of cluster generalizations. This adds Nr^Na new nodes linked to cluster generalizations, as in Fig. 5.

[Figure 5: All possible generalizations, formed by pairing each size cluster generalization with each shape cluster generalization.]

A total of NdNa + NaNr + Nr^Na nodes were added to the network by this process.
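The split-generalize-merge sequence can be sketched as follows (function names are invented; the rule labels in comments follow the text). Each attribute cluster produces its generalizations, and the final level is the product of one choice per cluster, which is where the Nr^Na blow-up comes from:

```python
from itertools import product

# Split-generalize-merge: cluster generalizations, then their product.
def size_cluster_gens(values):
    lo, hi = min(values), max(values)
    return [
        f"{{X: {lo} <= X}}",          # one-sided lower bound
        f"{{X: X <= {hi}}}",          # one-sided upper bound
        f"{{X: {lo} <= X <= {hi}}}",  # closing-interval rule (GCI)
        "X",                          # dropping-condition rule (GDC)
    ]

size_gens = size_cluster_gens([20, 27, 30])          # the jeep sizes
shape_gens = ["boxy", "vehicle", "generic", "X"]     # climbing the Fig. 2 tree, plus GDC

final_level = [f"((size: {s}) AND (shape: {sh}))"
               for s, sh in product(size_gens, shape_gens)]
print(len(final_level))   # N_r^N_a = 4^2 = 16 candidate generalized descriptions
```

With Nd = 3, Na = 2, and Nr = 4, the network gains 6 + 8 + 16 = 30 nodes; the pruning strategy described next keeps the expensive final level much smaller.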
While the network does contain the desired generalization ((size: X) ∧ (shape: boxy)), many other generalizations added in the expensive final step are useless, as they also hold for most (or even all) examples of cars. We will now describe a strategy for pruning nodes at the level of cluster generalizations, the result being far fewer generalizations at the final level.

The philosophy that guides this pruning is as follows. If a cluster generalization has absolutely no discriminatory power of its own, it cannot contribute to any description's discriminatory power. Therefore it is pointless to construct generalizations containing such AV-pairs. The system discards these AV-pairs, except for the ones whose values are free variables; they are needed to allow the system to ignore particular attributes. In the car-jeep example, no discriminatory power is gained by retaining information about the color attribute. The low level control allows the system to ignore color by including the cluster generalization (color: X) (for the sake of intelligibility, such variable-instantiated AV-pairs were omitted from Djeep and Dcar).

To detect cluster generalizations with no discriminatory power, a WFF is formed for each. These WFFs are structurally equivalent to the initial descriptions, except that all attribute values unspecified by the cluster generalization under test are free variables. For the generalizations of the size attribute cluster, the four test descriptions ((size: {X: 20 ≤ X}) ∧ (shape: Y)), ((size: {X: X ≤ 30}) ∧ (shape: Y)), ((size: {X: 20 ≤ X ≤ 30}) ∧ (shape: Y)), and ((size: X) ∧ (shape: Y)) hold for 100%, 67%, 67%, and 100% of the car examples, respectively. Since the AV-pair (size: {X: 20 ≤ X}) has no discriminatory power, it is discarded.
Turning to the generalizations of the shape attribute cluster, the test descriptions ((size: X) ∧ (shape: boxy)), ((size: X) ∧ (shape: vehicle)), ((size: X) ∧ (shape: generic)), and ((size: X) ∧ (shape: Y)) hold for 0%, 100%, 100%, and 100% of the car examples, respectively. The cluster generalizations (shape: vehicle) and (shape: generic) may be discarded, as they have no discriminatory power. The required network reduces to that of Fig. 6 (boxes enclose pruned nodes).

Two interesting facts may be gleaned from this analysis. The first is that Nd has a very limited effect on the total number of nodes in the network. The second is that the presence of examples of other classes may greatly reduce the number of generalized description nodes needed. Pruning some fraction of the cluster generalization nodes effectively prunes an even greater fraction of the generalized description nodes.

Evaluation of a Generalized Description

Selecting a set of class descriptions in module 5 of Fig. 1 requires the system to select, from the output of module 3, those descriptions with optimal discriminatory power and maximal generality. To measure a description's discriminatory performance with respect to a single class, we measure the fraction of the examples of that class for which it holds. The set of examples of class i for which D holds is {j ∈ Ci : vj(D)}, where j ∈ Ci indicates that the supervisor assigned example j to class i. If there are Ni examples of class i, and |x| is the cardinality of x, the fraction for which D holds is

ηi(D) = |{j ∈ Ci : vj(D)}| / Ni .

Representing the union of all classes other than i as class ī, our idealized goal is an object class description D for which a predicate Gi(D) = TRUE. If ηi(D) = 1.0 and ηī(D) = 0.0, then Gi(D) = TRUE. If this goal is unattainable, the "best" description must be chosen.
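The pruning test above reduces to computing, for each cluster generalization, the fraction of the other class's examples its test description holds for, and discarding it at 100% unless it is a free variable. A small sketch (structure invented) reproduces the size-cluster percentages from the text:

```python
# Pruning cluster generalizations by their discriminatory power.
car_sizes = [22, 30, 32]
size_tests = {
    "{X: 20 <= X}":       lambda x: 20 <= x,
    "{X: X <= 30}":       lambda x: x <= 30,
    "{X: 20 <= X <= 30}": lambda x: 20 <= x <= 30,
    "X":                  lambda x: True,      # free variable: always retained
}
for name, pred in size_tests.items():
    frac = sum(pred(s) for s in car_sizes) / len(car_sizes)
    keep = (name == "X") or frac < 1.0
    print(f"(size: {name}) holds for {frac:.0%} of cars -> {'keep' if keep else 'discard'}")
```

This prints 100%, 67%, 67%, and 100%, matching the text; only (size: {X: 20 ≤ X}) is discarded.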
A simple evaluation function estimating how close an expression comes to satisfying Gi(D) is

hi(D) = αi ηi(D) + βi (1.0 − ηī(D)),

where αi and βi reflect domain-specific false-negative and false-positive error costs for class i. Predicate Ai(D) = TRUE if D has an acceptable error rate; for example, if hi(D) were greater than some threshold, then Ai(D) = TRUE. Gi(D) and Ai(D) evaluate the error performance of a description, allowing the system to meet the primary goal of finding an expression with sufficient accuracy. Of all generalizations with adequate discriminatory power, as measured by Gi(D) and Ai(D), the system should select the one that is, in some sense, most general. It is beyond the scope of this paper to detail how this is done; suffice it to say that heuristics estimate the generality of each conjunct, and the estimated generality of a description is the sum of the generalities of its conjuncts. In this way, if the two descriptions below have equal discriminatory power, the system would choose D2:

D1 = ((size: {X: 8 ≤ X ≤ 12}) ∧ (shape: X) ∧ (wheels: 4))
D2 = ((size: {X: X ≤ 12}) ∧ (shape: X) ∧ (wheels: Y))

[Figure 6: Pruning cluster generalizations. The pruned network retains (size: {X: X ≤ 30}), (size: {X: 20 ≤ X ≤ 30}), and (size: X) in the size cluster and (shape: boxy) and (shape: X) in the shape cluster.]

High Level Control of Generalization

As shown in Fig. 1, high level control orchestrates the overall learning process. Returning again to the car-jeep example, let us assume that some initial examples were used to find generalized descriptions for each class, and some set of these was selected as the current class descriptions; Di is the description for class i. Assume ((size: X) ∧ (shape: boxy)) in Fig. 6 was designated as the current Djeep. Let us assume that a new jeep example description Dn+1 now becomes available.
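The evaluation function hi(D) above is straightforward to compute; the sketch below uses invented placeholder weights for the domain-specific costs αi and βi:

```python
# h_i(D) = alpha_i * eta_i(D) + beta_i * (1 - eta_ibar(D)),
# with eta the fraction of a class's examples for which D holds.
def eta(holds_for, examples):
    return sum(1 for e in examples if holds_for(e)) / len(examples)

def h(holds_for, own_examples, other_examples, alpha=0.5, beta=0.5):
    return alpha * eta(holds_for, own_examples) + beta * (1.0 - eta(holds_for, other_examples))

jeeps = [{"shape": "boxy"}] * 3
cars = [{"shape": "sleek"}] * 3
D = lambda e: e["shape"] == "boxy"
print(h(D, jeeps, cars))   # 1.0: eta_jeep(D) = 1.0 and eta_jeepbar(D) = 0.0
```

A description scoring 1.0 satisfies the idealized predicate Gi(D); raising α relative to β penalizes false negatives more heavily than false positives.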
The high level control becomes aware of this, and immediately checks the current description set to see if any updating is required. If Djeep does not hold for this new example, the high level control causes module 3 to form new generalizations that take this new information into account. From these new generalizations, module 5 then selects a new D'jeep, a generalization of the initial Djeep and the new example. When this description is used to classify a previously unseen object, the human supervisor may wish to know how the system produced it. Since a network is built when generalizations are formed, links can be traced back to the original examples, as in Fig. 7 (note that attribute cluster and cluster generalization nodes are omitted for clarity). The network formed for class i is called Qi.

[Figure 7: Generalization D'jeep extracted from the network Qjeep, with links back to the example descriptions D1 through Dn+1.]

When the new example description Dn+1 becomes available, the high level control really has two choices. It can form the new description D'jeep by generalizing the current Djeep with Dn+1, as shown in Fig. 7, or it can effectively restart the process by forming the generalizations of all examples D1 through Dn+1. The most efficient choice, in terms of the effort required for generalization, is to generalize with Dn+1. However, this can lead to overly general descriptions, which would force the system to repeat much previous work by generalizing the set {D1, ..., Dn, Dn+1}.

This is conceptually illustrated in Fig. 8a. Assume that circles and squares represent two classes for which four descriptions each, in terms of attributes a and b, are initially seen by a learning system.
Although the value of b is the true distinguishing factor, the system may have initially found a description for the class circle that specified a ≥ a1. Similarly, the class square might have been described by the generalization a ≤ a2. But if a new example of circle were added as in Fig. 8b, and its description generalized with the initial circle description, the new Dcircle would specify a ≥ a3. This new description would also describe every single example of square! In this case, the system must re-examine the original examples to discover that b ≤ β1 is true for all examples of circle and that b ≥ β2 is true for all examples of square.

[Figure 8: Result of incorrect early generalizations; 8(a) shows the initial examples along attributes a and b, 8(b) shows one new example.]

Returning now to the car-jeep example, let us further assume that a third set of example objects was originally considered to form a distinct class truck. After generalizations for truck were formed and a description selected, the human supervisor may decide that jeeps and trucks should not really be separate classes, but members of a single class jeep2. An external disjunction of Djeep and Dtruck could form the generalized description of this new class, as in Fig. 9. This is the reason that rule GED is needed, and its use is limited to situations of this nature.

[Figure 9: Network with class truck added; jeep2 is formed as the external disjunction of Djeep and Dtruck.]

Maintaining Descriptions

In order to maintain an updated set of object class descriptions, the high level control must monitor both the available examples and the current set of object descriptions; in other words, module 4 of Fig. 1 monitors the outputs of modules 2 and 5. We have already shown, in an informal fashion, how the high level control monitors these outputs; in Fig. 7 a new example of class jeep was added. At that time, although it was not expanded in the discussion, the high level control first had to determine if the new example represented a previously known class, and also which of the current class descriptions held for it.
There are six possible cases that describe the state of the system when a new example becomes available. The new object may or may not represent a currently known class. If it does not, current descriptions for other classes may or may not incorrectly hold for it. If it is of a known class, current descriptions for other classes may or may not incorrectly hold for it; additionally, the current description for its own class may or may not hold for it. To illustrate these six cases, we denote the current set of object classes known to the system by C = {C1, C2, ..., Cn}; initially, C = ∅. This six-case strategy specifies the action required whenever a new example description Di, classified by the supervisor as an instance of class Cj, is presented to the system. Recall that vi(Dj) = TRUE if Dj, the current class description for class Cj, holds for example description Di.

Case 1: Cj ∈ C, vi(Dj) = TRUE, ∀k ≠ j (vi(Dk) ≠ TRUE). The new example belongs to a previously known class, and no classification errors were caused. Add node Di to Qj.

Case 2: Cj ∈ C, vi(Dj) ≠ TRUE, ∀k ≠ j (vi(Dk) ≠ TRUE). The example is of a known class, with no false positives, but Dj does not hold. If no useful substitutes are available to module 5, module 3 finds further generalizations of class Cj. One of those generalizations is chosen as the new Dj.

Case 3: Cj ∈ C, vi(Dj) = TRUE, ∃k ≠ j (vi(Dk) = TRUE). Class Cj is known, and Dj is still valid. But some other class description also holds for the new example. Again, module 5 examines the output of module 3, with the possible discovery of additional generalizations by module 3, to select a new Dk.

Case 4: Cj ∈ C, vi(Dj) ≠ TRUE, ∃k ≠ j (vi(Dk) = TRUE). A new Dj is chosen, as in case 2 above. Additionally, the erroneous descriptions for other classes are replaced, as in case 3.

Case 5: Cj ∉ C, ∀k ≠ j (vi(Dk) ≠ TRUE). The fifth case is novel, as the new example is of an unknown class.
The new class description Dj is simply Di, and C is augmented to include Cj.

Case 6: Cj ∉ C, ∃k ≠ j (vi(Dk) = TRUE). The new class is added as in case 5. Additionally, the false positive classification of Di as Ck must be handled as in cases 3 and 4.

This learning system is implemented in Prolog. We will discuss the results obtained for two domains: symbolic descriptions of vehicles and numerical descriptions obtained from images of electronic components. In each domain, the system first uses examples of two classes to find generalized class descriptions. The third class is then added, and the system automatically updates the class descriptions. For the vision domain, we used the generalizations discovered to classify further examples not yet seen by the system, to test the error performance.

Symbolic Data: Vehicle Domain

The first experiment shown here is in the symbolic vehicle world we used for illustration throughout this paper. As expected, the system produced the following when given the car and jeep example descriptions presented earlier in the paper:

Djeep = ((size: X) ∧ (shape: boxy) ∧ (color: Y) ∧ (wheels: Z))
Dcar = ((size: X) ∧ (shape: sleek) ∧ (color: Y) ∧ (wheels: Z))

We then added examples of the new class motorcycle:

Dcycle1 = ((size: 14) ∧ (shape: boxy) ∧ (color: green) ∧ (wheels: 2))
Dcycle2 = ((size: 12) ∧ (shape: boxy) ∧ (color: blue) ∧ (wheels: 2))
Dcycle3 = ((size: 10) ∧ (shape: boxy) ∧ (color: black) ∧ (wheels: 2))
Dcycle4 = ((size: 19) ∧ (shape: boxy) ∧ (color: red) ∧ (wheels: 2))

The system's state was then described by Case 6: an existing class description (jeep) incorrectly held for examples of a previously unknown class. The system dropped Djeep, and specified two potential replacements.
D1 = ((size: 20 or more) ∧ (shape: boxy) ∧ (color: X) ∧ (wheels: Z))
D2 = ((size: X) ∧ (shape: boxy) ∧ (color: Y) ∧ (wheels: 4 or more))

The system then generalized the examples of motorcycle and found two potential Dmotorcycle descriptions:

D3 = ((size: 19 or less) ∧ (shape: X) ∧ (color: Y) ∧ (wheels: Z))
D4 = ((size: X) ∧ (shape: Y) ∧ (color: Z) ∧ (wheels: 2 or less))

Dcar remained valid; no changes were needed.

Real Data: Component Domain

For the second experiment, an image analysis system produced descriptions of electronic components seen by a CCD camera. Each image was segmented into contiguous regions, and several attributes were measured for each region; of these, the number of pixels, the greylevel variance, the border length, and the ratio of the variance to the mean of the 2-D radius were retained (called np, gv, bl, and r, respectively). Figs. 10, 11, and 12 show examples of the classes capacitor, transistor, and resistor. The system first used only the capacitor and transistor examples; the following descriptions were used (note that not all the available examples were used):

[Figure 10: Capacitors.] [Figure 11: Transistors.] [Figure 12: Resistors.]
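The six-case update strategy exercised in these experiments amounts to a small decision table over three booleans. A sketch (function name invented) makes the dispatch explicit:

```python
# Dispatch for the six-case strategy: given whether the example's class C_j
# is already in C, whether D_j holds for the example, and whether any other
# class description D_k holds for it, return the case number from the text.
def update_case(class_known, own_holds, any_other_holds):
    if class_known:
        if own_holds and not any_other_holds:
            return 1   # no errors: just add node D_i to Q_j
        if not own_holds and not any_other_holds:
            return 2   # re-generalize class C_j to get a new D_j
        if own_holds and any_other_holds:
            return 3   # D_j is fine, but some offending D_k must be replaced
        return 4       # both repairs: new D_j and replacement D_k's
    return 5 if not any_other_holds else 6   # new class; case 6 also fixes D_k's

print(update_case(True, True, False))   # 1
print(update_case(False, False, True))  # 6: the motorcycle situation above
```

The vehicle experiment landed in case 6: motorcycles were a new class, and the old Djeep (boxy, any number of wheels) incorrectly held for them.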
The Geometry of Visual Coordination

Jean-Yves Hervé, Rajeev Sharma, Peter Cucka
Computer Vision Laboratory, Center for Automation Research and Department of Computer Science
University of Maryland, College Park, MD 20742
herve@cvl.umd.edu rajeev@cvl.umd.edu cucka@cvl.umd.edu

Abstract

We present a new model for the perceptual reasoning involved in hand/eye coordination, and we show how this model can be developed into a control mechanism for a robot manipulator with a visual sensor. This new approach overcomes the high computational cost, the lack of robustness, and the need for precise calibration that plague traditional approaches. At the heart of our model is the Perceptual Kinematic Map (PKM), a direct mapping from the control space of the manipulator onto a space defined by a set of measurable image parameters. By exploring its workspace, the robot learns, qualitatively, the topology of its PKM and thus acquires the dexterity for future tasks, in a striking parallel to biological systems.

1 Introduction

Despite considerable progress in the development of formalisms for symbolic reasoning, reasoning about actions using perceptual input is still not well understood. Consider the problem of hand/eye coordination: we perform it effortlessly every time we write, lift a glass, etc., yet we have only begun to understand the complex mechanisms involved, let alone to be able to emulate human performance with mechanical systems.

1.1 Human Hand/Eye Coordination

In the study of human hand/eye coordination we roughly distinguish two approaches. The biomechanical approach is based on the assumption that the representations and controls used by the brain can be inferred from external observations of human performance.
In the case that interests us, the external observations are arm trajectories, but the biomechanical approach has also been applied to the study of legged motion and of the visual saccadic and smooth tracking systems (Robinson, Gordon & Gordon 1986). Arm trajectory observations have been compiled for various types of tasks (writing, pointing, etc.) and movement constraints, generally with vision as the main source of sensory feedback. The measurements obtained are interpreted in terms of a model of the musculoskeletal system (Hogan 1985), with the intent of determining the control variables that achieve these trajectories (Hollerbach & Atkeson 1987). But although the results of current simulations are qualitatively very similar to the observed data, we still lack a model for which the control patterns can actually be detected in the brain.

In this paper we present a new model for the perceptual reasoning involved in hand/eye coordination (specifically, in positioning the hand using only visual feedback), and we show how this model can be developed into a control mechanism for a robot manipulator with a visual sensor. At the heart of the model is the Perceptual Kinematic Map (PKM), a direct mapping from the control space of the manipulator onto a space defined by a set of parameters that are directly observable by the visual sensor. In a striking parallel to biological systems, the robot acquires dexterity by exploring its workspace, i.e. by learning, qualitatively, the properties of its PKM.

[Figure 1: A grasping task: initial and subsequent positions.]

The support of the Defense Advanced Research Projects Agency (ARPA Order No. 6989) and U.S. Army Engineer Topographic Laboratories under Contract DACA76-89-C-0019 is gratefully acknowledged.
The psychological approach favors the study of external behaviors and phenomena for their own sake: the reaction time of the hand/eye system (Fischer & Rogal 1986) and its relation to the accuracy of movement (Fitts 1954), the fixation process (Ballard 1989), the planning of hand movements with visual feedback (Fleischer 1986), etc. Results of this type are of particular interest to the segment of the robotics community that attempts to mimic global behaviors without necessarily having to reproduce the control actually implemented in the brain.

1.2 Design of Robotic Hand/Eye Systems

The classical robotic hand/eye system is described by two modules: the control module, which, given the initial position of the robot's hand and the goal position, generates the sequence of movements that lead the hand to the goal, and the vision module, which provides information about the position and orientation of the robot relative to the goal.

1.2.1 Mechanics of the manipulator

[Figure 2: (a) A five-jointed robot arm; (b) its schematic representation.]

A robot manipulator, such as the five-jointed arm shown in Figure 2(a), can be modeled as a sequence of rigid links connected by either revolute (turning) joints or prismatic (sliding) joints. Figure 2(b) gives the schematic for our example manipulator, all of whose joints are revolute. The state of a manipulator at any given time can be represented by the joint configuration, which is the vector of angles between consecutive links, measured at each joint: q = (q1, q2, ..., qn)^T. The set of all such n-tuples defines the joint space of the manipulator, and to a given joint configuration of the robot there will correspond a unique point in the joint space. Any change in a joint angle will affect the position p and the orientation w of the robot's hand.
This relationship between the joint space 𝒥 and the task space 𝒦 of possible positions and orientations of the hand is expressed by the kinematic map K : 𝒥 → 𝒦. Given an adequate mathematical model that describes the positions and orientations of the joints relative to each other, an analytic expression for the kinematic map can be derived. A good survey of the models and techniques involved was done by Hollerbach (1989).

Moving the hand to a given position and orientation involves the solution of an inverse kinematics problem, that is, the determination of the appropriate joint configuration. Most of the points in the robot's joint space are regular points, where the kinematic map is well-behaved and the Inverse Function Theorem guarantees locally the existence of a smooth inverse mapping. However, the analytic determination of this mapping is usually prohibitively difficult to compute, so in order to solve the problem one has to resort to algorithms that compute fast, numerical approximations. At singular points of the joint space, on the other hand, the theoretical inversion of the kinematic map is impossible, and the numerical algorithms fail, so a good trajectory planner for the robot should be able to avoid such points. The general study of singularities of the kinematic map is almost as complex as the inverse kinematics problem itself, and has no known general solution: most articles on the subject concentrate on one particular type of manipulator (Borrel & Liegeois 1986).

1.2.2 Visual calibration

A robot's visual system generally consists of a camera, from whose input estimates of the world coordinates and orientations of the components of the robot relative to one another are made. Tsai (1989) gives a synopsis of the important results in this domain. Using this information, the robot should in theory be able to plan a path and proceed, blindly, to its goal.
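The forward direction of the kinematic map is easy to write down. The sketch below uses a planar two-link arm with revolute joints (link lengths invented; the paper's arm has five joints): each joint configuration q maps to a unique hand position p and orientation w.

```python
import math

# Kinematic map K for a planar two-link arm with revolute joints.
def kinematic_map(q, links=(1.0, 0.8)):
    x = y = theta = 0.0
    for qi, li in zip(q, links):
        theta += qi                   # revolute joints accumulate angle
        x += li * math.cos(theta)
        y += li * math.sin(theta)
    return (x, y), theta              # hand position p and orientation w

p, w = kinematic_map((math.pi / 2, -math.pi / 2))
print(p, w)   # hand near (0.8, 1.0), orientation 0.0, up to floating-point rounding
```

Inverting this map is the hard direction: for this arm, configurations with q2 = 0 (the arm fully stretched) are the singular points where a smooth local inverse ceases to exist.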
Unfortunately, the mathematical models used to compute the kinematic map are either too simple to take into account the limitations of the manipulators, such as defects in the alignment of axes, or too complex to actually be useful, partly because of their effects on the stability of the inversion computations. Furthermore, existing vision algorithms are not robust enough to provide data at the accuracy required by this strategy (Aloimonos 1990). Consequently, systems based on this approach have so far failed in general, three-dimensional tasks, despite their success in constrained, two-dimensional, industrial applications.

1.2.3 Quantitative visual feedback

A new approach is emerging in which visual feedback is closely integrated with the control of robot manipulators (Weiss, Sanderson & Neuman 1987), with the result that the accuracy of the trajectories is greatly increased. For example, the measurement errors compiled over a sequence of images tend to "average out," so that their effect on the control is considerably reduced. A now classical example of what can be done when visual feedback is carefully integrated into the control scheme is the robot Ping-Pong player (Anderson 1988). But even this approach requires good initial calibration of the system and complex inverse kinematics computations, and if our goal is to comprehend human hand/eye coordination, this approach does not provide any new insight. It seems unlikely, after all, that the brain needs to know the exact angle of each of the arm's joints before it can plan a trajectory, and we know from experience that we can very quickly adapt to significant changes in the properties (bounds on acceleration, limits of the degrees of freedom, etc.) of our arms, for example when we carry heavy objects or when a joint is immobilized due to arthritis.
2 The Perceptual Kinematic Map

Having argued for the integration of visual feedback with manipulator control, we proceed to introduce the Perceptual Kinematic Map, a mapping from the space of joint parameters of a manipulator to the space formed by the values of a set of parameters that are directly observable by the visual sensor. In what follows, we will refer to a robotic hand/eye system, but the principles are equally applicable to biological systems, as will be noted in a later section.

Consider a coordination problem in which the goal is to maneuver the hand into position next to an object of known size so that the object can be readily grasped. (Though it is unlikely that one would ever be required to grasp a totally foreign object, if the size is in fact unknown, it could be estimated through stereo disparity analysis or in the course of the exploration phase of a moving observer.) We will examine how changes in the image plane of the eye reflect the motion of the hand. For example, if a robot rotates a single joint Ji while holding its other joints stationary, we expect that a point on its hand will trace an arc of an ellipse in the image plane.

Let s = (s1, s2, ..., sn)^T be an array of measurable image parameters, considered as a function of the joint parameters of the robot. If S is the space of such image parameter arrays, then we can consider s as a point of S and define a mapping π : J → S. We call this the Perceptual Kinematic Map (PKM). For our five-degree-of-freedom robot manipulator, we have identified five independent features that can be readily extracted from any image of the robot: the image coordinates of two points on the hand and the area of a rectangle inscribed on the hand. The PKM for this manipulator gives rise to a five-dimensional parametric "hypersurface," which we call the control surface. The image processing is purposely kept simple.
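Under simplified assumptions of our own (a planar two-link arm with unit links, an overhead pinhole camera at height cam_z, and only two tracked hand points rather than the five features above), a PKM can be sketched as:

```python
import math

def hand_points(q):
    """Two points on the hand of a planar two-link arm: the wrist and a
    fingertip offset along the last link (toy geometry, ours)."""
    x1 = math.cos(q[0]) + math.cos(q[0] + q[1])
    y1 = math.sin(q[0]) + math.sin(q[0] + q[1])
    x2 = x1 + 0.2 * math.cos(q[0] + q[1])
    y2 = y1 + 0.2 * math.sin(q[0] + q[1])
    return (x1, y1), (x2, y2)

def pkm(q, focal=500.0, cam_z=5.0):
    """Perceptual Kinematic Map: joint angles -> directly observable image
    parameters (pixel coordinates of the two hand points under a pinhole
    camera looking down the z axis)."""
    s = []
    for (x, y) in hand_points(q):
        s.append(focal * x / cam_z)   # u pixel coordinate
        s.append(focal * y / cam_z)   # v pixel coordinate
    return s
```

The image of pkm over all joint configurations is this toy model's control surface: a parametric surface in the space S of image parameter arrays.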
A white rectangle is inscribed on the hand in order to facilitate the tracking of the image parameters, its position and area in the image giving the five parameters needed for the control (Figure 3). Through experiments in which we have tracked the five image features, we have found that their trajectories agree closely with the predicted ones, and the mappings are quite smooth. For example, Figure 4 shows the action of one of the wrist parameters on the image of a feature point, for different settings of the other joints. In particular, it should be noted that the slopes vary little from curve to curve, a property we exploit in our control strategy.

Figure 3: (a) An image of the hand (b) The image parameters

We do not deal here with the problem of fine motion, so the goal of a grasping task is to superimpose the hand on an object (Figure 3). Both the initial and goal positions of the hand correspond to points in S that lie on the control surface. In fact, for any configuration of the manipulator there is a corresponding point on the control surface, so the grasping task is reduced to a problem of constrained path planning on the control surface.

There is little point in attempting to invert the PKM, since the computations are even more complex than those for the original kinematic map, and small discretization and detection errors would invalidate the results. Qualitative decisions, however, such as whether or not the current configuration is close to a singularity of the control surface, are unaffected by small errors. Our experimental results confirm that the robust detection of these singularities through image measurements is feasible.

Figure 4: Experimental results: x coordinate vs. wrist joint

The study of the perceptual kinematic map tells us what the control surface can look like and what kinds of singularities can be expected.
The ability to predict events in the image, and thereby the nature of the control surface, allows us to devise a control strategy that is easy to describe algorithmically. At the same time, we can avoid undue stress on the manipulator and the unnecessary movement that is incurred when it is made to maneuver blindly across a singularity of its control surface.

3 Vision-Based Control

The control surface comprises regular points, where the surface can be locally approximated by its tangent hyperplane, and singular points, where the Jacobian of π is zero. Over a neighborhood of regular points one can apply a simple gradient descent method to bring the hand closer to the desired configuration (using a cost function on S that measures the "distance" between the current and goal points). In our current implementation we track the hand and the Jacobian using a Kalman filter, a choice made possible by the smoothness of the PKM.

On the other hand, when the robot reaches or crosses a singularity it may need to choose a new path to the goal. The decision depends upon the type of the singular point, which the robot determines by exploring the neighborhood of the point. Ideally, singularities are identified by zeros of the Jacobian, but since we are dealing with discrete images and manipulator displacements, we can rely only on qualitative information. As such, the robot detects that it has crossed a singularity by noting a change in the sign of the Jacobian.

Even in its simplest form, the regular gradient descent strategy poses interesting problems. For example, the choice of a good gain in the regular control is closely related to the debate in the psychology literature about multiple correction (Keele 1968) versus single correction models (Howarth, Beggs, & Bowden 1971) for the visual control of movement. The former can be modeled in our case by the use of a (small) constant gain, the latter by a function of the distance to the goal.
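A gradient-descent step of this kind can be sketched as follows. The toy PKM is our own (a planar two-link arm whose "image features" are just the hand coordinates), and the finite-difference tangents stand in for the Kalman-filtered Jacobian of the actual implementation:

```python
import math

def pkm(q):
    """Observable parameters for a planar two-link unit arm: here simply
    the hand position, standing in for tracked image features (toy model)."""
    return [math.cos(q[0]) + math.cos(q[0] + q[1]),
            math.sin(q[0]) + math.sin(q[0] + q[1])]

def descent_step(q, s_goal, gain=0.2, h=1e-5):
    """One gradient-descent step on the image-space cost
    0.5 * ||pkm(q) - s_goal||^2, using finite differences for the local
    tangents of the control surface. A small constant gain corresponds to
    the multiple-correction model; making the gain a function of the
    remaining distance would give the single-correction model."""
    s = pkm(q)
    e = [si - gi for si, gi in zip(s, s_goal)]
    new_q = list(q)
    for j in range(len(q)):
        qp = list(q)
        qp[j] += h
        sp = pkm(qp)
        # Gradient of the cost w.r.t. joint j: (ds/dq_j) . e
        grad_j = sum((spi - si) / h * ei
                     for spi, si, ei in zip(sp, s, e))
        new_q[j] -= gain * grad_j
    return new_q
```

Iterating this step walks the current point on the control surface downhill toward the goal point, without ever inverting the PKM.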
Another interesting possibility, which is both computationally efficient and psychologically motivated, is to decompose the planning into a gross motion phase during which the manipulator only attempts to position one endpoint close to the goal (thus reducing the control surface to a 3D surface), and a more precise phase, in which the control surface in its complete dimensionality is considered.

3.1 Exploration of the Control Surface

At the beginning of a new grasping task, or when a singularity is encountered, the robot needs to explore its neighborhood, meaning that it computes the local tangents to the control surface. The exploration of a configuration's neighborhood proceeds from the current configuration in fixed steps for each joint, with a measurement being made after each step. As an example, in Figure 5 we show how these measurements are made in the neighborhood of a regular point and in the neighborhood of a singular point.

The list of possible types of singularities of a generic R^5 → R^5 mapping is well known (Golubitsky & Guillemin 1973). Singularities of the control surface will typically be folds, which are degenerate along one direction of the control surface and locally look like five-dimensional cylinders. The direction of a fold can be directly determined from the measurements made, and it qualitatively identifies the singularity. After deciding on which side of the singularity the goal lies, the robot determines its new course and returns to the regular control mode.
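The exploration step can be sketched for a toy planar two-link arm (ours, not the paper's five-jointed manipulator): probing each joint by a fixed step yields the local tangents, and the sign of the resulting Jacobian determinant flips exactly when a fold is crossed.

```python
import math

def fk(q):
    """Hand position of a planar two-link unit-link arm, standing in for
    the image measurements made during exploration (toy model)."""
    return (math.cos(q[0]) + math.cos(q[0] + q[1]),
            math.sin(q[0]) + math.sin(q[0] + q[1]))

def explore(q, step=1e-4):
    """Probe the neighborhood of q in a fixed step per joint and return
    the sign of the Jacobian determinant. A sign change between two
    visited configurations signals that a fold of the control surface
    was crossed."""
    cols = []
    base = fk(q)
    for j in range(2):
        qp = list(q)
        qp[j] += step
        p = fk(qp)
        cols.append(((p[0] - base[0]) / step, (p[1] - base[1]) / step))
    det = cols[0][0] * cols[1][1] - cols[0][1] * cols[1][0]
    return 1 if det > 0 else (-1 if det < 0 else 0)
```

For this arm the determinant reduces to sin(q2), so configurations on either side of full extension (the fold) report opposite signs, which is exactly the qualitative singularity detection described above.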
Figure 5: Exploration of the control surface in the neighborhood of a regular point and a singular point (s(q) is a regular point of the control surface; s(q+Δq) is a measurement made in the neighborhood of q; the degenerate direction marks the singularity)

3.2 Qualitative Learning of the Environment

The ability to learn a control strategy for a particular environment or task is of utmost importance to both biological and artificial systems and has long been a subject of psychological research (Schmidt 1975). More recently, this problem has been addressed in the robotics community as either a parameter recovery problem (Skaar, Brockman, & Jang 1990) or through connectionist techniques (Mel 1987). Despite promising results, these models have so far been applied only to low-dimensional problems and very constrained environments (Miller 1987), conditions under which, as we mentioned earlier, classical reconstruction methods have been quite successful. In the following sections we address the learning problem using a geometrical, qualitative approach.

3.2.1 Learning the control surface geometry

We have shown that, for a fixed position of the eye, the control surface so completely defines the possible arm movements that we can identify knowledge of the surface's geometry with the actual coordination of the hand/eye system. We would now like to investigate the possibility of being able to learn (and perhaps to memorize) the geometry of the control surface. The high dimensionality of the problem may at first seem to be an obstacle, but consider a function of one variable whose singular points (extrema) have been detected (Figure 6(a)). There are obviously an infinity of curves admitting the marked points as singularities, but all these curves will be qualitatively similar (Figure 6(b)). This principle can be extended to higher dimensions, allowing us to say that a surface is qualitatively uniquely defined by its singularities.
Each new coordination task may lead to unexplored portions of the control surface and to intersections with as yet undetected singularities. The identification and localization of a new singularity helps in obtaining a better representation of the surface, and therefore in achieving more dextrous hand/eye coordination.

Figure 6: (a) A curve and its singularities (b) A qualitatively similar curve

3.2.2 The mobile observer

So far, we have only considered the case of a stationary eye, but our theory generalizes naturally to the case of a mobile observer. A limited degree of mobility can be provided by placing the camera and the robot on a mobile platform while keeping their relative positions fixed. In this case, movement of the observer will affect not the shape of the control surface but the location of the goal on it. Thus the robot is free to position itself so that the goal will lie in a regular portion of the control surface.

Figure 7: (a) Surface derived from a generalized PKM (b) Its cross sections are control surfaces (c) Derivative of the generalized PKM (d) Localization of the singularities

Naturally, the generalized control surface we will be considering will be of high dimension (typically higher than ten), and it may seem unlikely that the human brain is able to represent such a complex structure. However, the brain does not need to store a complete description of the surface: an incrementally updateable scale-based approximation would suffice.

A higher degree of mobility is achieved if the camera can be dynamically repositioned with respect to the robot. Then, the effect of eye movements can be assimilated into perturbations that alter the topology of the control surface. Most singular points will be structurally stable, that is, their positions may vary under small perturbations of the surface, but their topological type will remain unchanged.
However, a few points will undergo catastrophic changes of topological type. Although a thorough study of the PKM allows us to predict what topological changes can occur when a perturbation parameter, say the position of the eye, is modified in a particular direction, we cannot predict, since we do not have a perfectly calibrated system, the exact value of the parameter for which the change will occur. Neither can we predict the effect of an arbitrarily small change of parameter on the shape of the surface. However, we get a much better understanding of the topological changes on the control surface if we look at the generalized control surface lying in the much higher-dimensional space of the joint variables and perturbation parameters (here, the configuration of the eye), of which each control surface is a section. For example, Figure 7 shows the evolution of a planar control "surface" (b) and of the location of its singularities (d) under the action of parameter t. The resulting generalized control surface it describes is shown in (a).

4 Related Problems

Even when the eye remains stationary, the generalized control surface can be used to choose positions and orientations of the observer that make the task simpler, e.g. ones that minimize the number of singularities lying between the current and goal configurations.

An interesting coordination problem is one in which the goal point is dynamic, such as when a robot is required to catch a moving object. The added constraint of timing makes the problem more difficult: in terms of the PKM, the problem becomes a dynamic, constrained motion planning problem on the control surface (Sharma, Hervé & Cucka 1991).

For simplicity we have only presented the case in which the goal configuration of the hand can be represented as a point on the control surface. In general a coordination task will involve a set of goal points corresponding to multiple acceptable grasp positions.
In (Cucka, Sharma & Hervé 1991), we show how existing methods for the analysis of stable grasps can be exploited, in cooperation with model-based analysis of an object's pose, to directly determine a complex goal set on the control surface.

5 Conclusion

We have demonstrated that the analysis of the PKM and the control surface of a hand/eye system allows us to model and solve the coordination problem. The close integration of visual feedback with the decision process and our reliance on qualitative, topological information rather than precise, quantitative information allow us to robustly reason about spatial tasks. The principles of our approach are general, and the algorithm and system presented here represent only one of many possible ways to implement and test the approach.

Acknowledgement

The authors would like to thank Profs. Larry Davis, Yiannis Aloimonos, and Azriel Rosenfeld for their helpful comments.

References

Aloimonos, J. 1990. Purposive and qualitative active vision. In Proceedings of the 1990 DARPA Image Understanding Workshop, 816-828. Pittsburgh, Penn.

Anderson, R. L. 1988. A Robot Ping-Pong Player: Experiment in Real-Time Intelligent Control. Cambridge, Mass.: MIT Press.

Ballard, D. H. 1989. Reference frames for animate vision. In Proceedings of the International Joint Conference on Artificial Intelligence, 1635-1641. Detroit, Mich.

Borrel, P., and Liegeois, A. 1986. A study of multiple manipulator inverse kinematic solutions with applications to trajectory planning and workspace determination. In Proceedings of the 1986 IEEE International Conference on Robotics and Automation, 1180-1185. San Francisco, Calif.

Cucka, P., Sharma, R., and Hervé, J-Y. 1991. Grasping goals for visual coordination. Forthcoming.

Cutkosky, M. R. 1989. On grasp choice, grasp models, and the design of hands for manufacturing tasks. IEEE Trans. on Robotics and Automation 5(3):269-279.

Fischer, B., and Rogal, L. 1986.
Eye-hand coordination in man: a reaction time study. Biological Cybernetics 55(4):253-261.

Fitts, P. M. 1954. The information capacity of human motor system in controlling the amplitude of movement. Journal of Experimental Psychology 47:381-391.

Fleischer, A. G. 1989. Planning and execution of hand movements. Biological Cybernetics 60(4):311-321.

Golubitsky, M., and Guillemin, V. 1973. Stable Mappings and their Singularities. New York: Springer-Verlag.

Hervé, J-Y, Cucka, P., and Sharma, R. 1990. Qualitative visual control of a robot manipulator. In Proceedings of the 1990 DARPA Image Understanding Workshop, 895-908. Pittsburgh, Penn.

Hogan, N. 1985. The mechanics of multi-joint posture and movement control. Biological Cybernetics 52(5):315-331.

Hollerbach, J. M., and Atkeson, C. G. 1987. Deducing variables from experimental arm trajectories: pitfalls and possibilities. Biological Cybernetics 56(1/2):279-292.

Hollerbach, J. M. 1989. A survey of kinematic calibration. In The Robotics Review 1, 207-242. Cambridge, Mass.: MIT Press.

Howarth, C. I., Beggs, W. D. A., and Bowden, J. M. 1971. The relationship between speed and accuracy of movement aimed at a target. Acta Psychologica 35:207-218.

Keele, S. W. 1968. Motor control in skilled motor performance. American Journal of Physiology 47:381-391.

Mel, B. W. 1987. MURPHY: A robot that learns by doing. In Proceedings of the AIP Neural Information Processing System Conference, 544-553. Denver, Colo.

Miller, W. T. 1987. Sensor-based control of robotic manipulators using a general learning algorithm. IEEE Journal of Robotics and Automation 3(2):157-165.

Robinson, D. A., Gordon, J. L., and Gordon, S. E. 1986. A model of smooth pursuit eye movement systems. Biological Cybernetics 55(1):43-57.

Schmidt, R. A. 1975. A schema theory of discrete motor skill learning. Psychological Review 82:225-260.

Sharma, R., Hervé, J-Y, and Cucka, P. 1991.
Dynamic robot manipulation using continuous visual feedback and qualitative learning. Forthcoming.

Skaar, S. B., Brockman, W. H., and Jang, W. S. 1990. Three-dimensional camera-space manipulation. International Journal of Robotics Research 9(4):22-39.

Tsai, R. Y. 1989. Synopsis of recent progress on camera calibration for 3D machine vision. In The Robotics Review 1, 147-159. Cambridge, Mass.: MIT Press.

Weiss, L. E., Sanderson, A. C., and Neuman, C. P. 1987. Dynamic sensor-based control of robots with visual feedback. IEEE Journal of Robotics and Automation 5(3):404-417.
Gary C. Borchardt
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

This paper introduces the causal reconstruction task: the task of reading a causal description of a physical system, forming an internal model of the specified behavior, and answering questions demonstrating comprehension and reasoning on the basis of the input description. A representation called transition space is introduced, in which events are depicted as path fragments in a space of "transitions," or complexes of changes in the attributes of participating objects. By identifying partial matches between the transition space representations of events, a program called PATHFINDER is able to perform causal reconstruction on short causal descriptions presented in simplified English. Simple transformations applied to event representations prior to matching enable the program to bridge discontinuities arising from the writer's use of analogy or abstraction. The operation of PATHFINDER is illustrated in the context of a simple causal description extracted from the Encyclopedia Americana, involving exposure of film in a camera.

Introduction

Causal descriptions of the sort appearing in encyclopedias, reports and user manuals comprise an important source of knowledge about the behavior of physical systems. In circumstances where complex interactions, intuitive concepts or metaphorical understanding are involved, such descriptions often constitute the only way in which humans can express what they know about a causal situation. In this paper, I address the problem of getting programs to understand and reason on the basis of such descriptions.

Consider the following excerpt from the Encyclopedia Americana [1989]:

CAMERA. The basic function of a camera is to record a permanent image on a piece of film. When light enters a camera, it passes through a lens and converges on the film.
It forms a latent image on the film by chemically altering the silver halides contained in the film emulsion.

*This research was supported in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124.

2 Explanation and Tutoring

Given this description, a human previously unfamiliar with the operation of a camera should be able to answer non-trivial questions such as the following:

What happens to the distance between the light and the film? (This distance decreases, then disappears as the light converges on the film.)

How does the light "converging on the film" relate to the light "forming the image on the film"? (The former leads to chemical alteration of the silver halides, which change appearance, constituting the latter.)

How could a building reflecting light into the camera cause the light to converge on the film? (This event ends with light entering the camera, from which it passes through the lens and converges on the film.)

Does the light come into contact with the film emulsion? (Yes. The light contacts the silver halides as it chemically alters them, and as these are a part of the emulsion, it therefore contacts the emulsion.)

Answering such questions involves: (1) recognizing unstated associations between events in the description and questions, and (2) using these to model the activity at the level of time-varying attributes of objects.

Causal Reconstruction

I call this task causal reconstruction. Causal reconstruction involves reading a causal description, forming an internal model of the activity, and answering questions testing for comprehension of the description. This task differs in two ways from the related task of causal modeling [Pearl and Verma, 1991; Doyle, 1989]. First, an initial causal model is already known to the writer of the description. Second, the writer's communication of this model is governed by conversational constraints; e.g., from Grice [1975]: (1) (The Maxim of Quantity) provide enough information but not too much, (2) (The Maxim of Quality) be truthful, (3) (The Maxim of Relation) provide relevant information, and (4) (The Maxim of Manner) be perspicuous.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

From these considerations, we may state specific criteria for assessing success at causal reconstruction. Assuming that the comprehender's model must also be describable by the supplied description, we pose questions to the comprehender, evaluating its responses to see if Grice's maxims would forbid use of the initial description as an account of this new model of activity. From the Maxim of Quantity: (1) does the new model introduce objects/events not motivated in the description (posing the description as underinformative)? From the Maxim of Quality: (2) does the new model disagree in any way with the description (posing it as untruthful), or (3) is the new model physically unrealizable (posing the description as vacuous)? From the Maxim of Relation: (4) does the new model fail to incorporate any information in the description (posing the description as containing irrelevant information), or (5) is the new model not fully connected (posing the description as containing one or more unrelated events)? From the Maxim of Manner: (6) does the new model make any piece of information in the description redundant (such that it could be condensed)?

This task characterization suffices for human or machine comprehension. Note, however, that Grice's Maxims depend on the intended audience of an utterance; e.g., in describing a situation to a child, one must include more information. For automated comprehension of causal descriptions, it is perhaps simplest to stipulate an absence of relevant background knowledge on the part of the program.
Thus, we require definitions for events, static properties of objects, rules of inference and so forth to accompany an input description. This simplifies the task, but by no means trivializes it. Given the pieces of a puzzle, the program must still determine how these pieces fit together.

Transition Space

I now introduce a representation called transition space, supporting causal reconstruction in the program PATHFINDER. This representation describes events as paths in a space of "transitions," or complexes of changes in attributes for objects participating in a described scenario.¹ Such a representation finds motivation in the perceptual psychology literature [Michotte, 1946; Miller and Johnson-Laird, 1976], and is broadly consistent with research in qualitative reasoning [Forbus, 1984; de Kleer and Brown, 1984; Kuipers, 1986]. In contrast with the work in qualitative reasoning, transition space relies on language for attributes and their changes. Examples of assertions characterized by this representation appear below (attributes appear in boldface, indications of change in italics).

The contact between the steam and the metal plate appears. The concentration of the solution increases. The appearance of the film changes. The pin becomes a part of the structure. The water remains inside the tank.

¹This representation is related to and in part based on previous representations described in Waltz [1982] and Borchardt [1985].

Miller and Johnson-Laird [1976] enumerate a large number of such attributes and characterize them as typically quantitative or qualitative (including boolean), and typically unary or binary.
Assuming an "absent" or "false" value in the range of each attribute, then if we know whether or not a specific attribute of one or more objects is present at each of two time points, one of which follows the other, and we know the qualitative relationship between the values at these two time points, then the following ten change characterizations are exhaustive, though overlapping. (The accompanying mnemonic symbols are used in a graphic representation described below.)

APPEAR, NOT-APPEAR, DISAPPEAR, NOT-DISAPPEAR (presence versus absence, for boolean attributes); CHANGE, NOT-CHANGE (specializations of NOT-DISAPPEAR, for qualitative attributes); INCREASE, NOT-INCREASE, DECREASE, NOT-DECREASE (specializations of NOT-DISAPPEAR, for quantitative attributes).

These characterizations are depicted as predicates taking four arguments: an attribute of concern, an object or tuple of objects, and two time points. The assertions below correspond to each of the English statements appearing above.

APPEAR(contact, (the-steam, the-metal-plate), t1, t2)
INCREASE(concentration, the-solution, t3, t4)
CHANGE(appearance, the-film, t5, t6)
APPEAR(a-part-of, (the-pin, the-structure), t7, t8)
NOT-DISAPPEAR(inside, (the-water, the-tank), t9, t10)

Two primitive predicates, EQUAL and GREATER, plus their negations, form a basis for defining these ten change characterizations. Additionally, six predicates are defined for assertions at a single time point. The definitions are omitted here, but may be found in [Borchardt, 1990] and [Borchardt, 1992].

A transition is a set of assertions at and between two ordered time points. Events are sequences (or, more generally, directed acyclic graphs) of transitions. These representations are called event traces, as they correspond to simple paths in transition space. While the underlying representation remains propositional, a simplified graphic format will be used here.
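The ten characterizations are mechanical enough to sketch in code. The encoding below is our own illustrative Python, not PATHFINDER's Lisp implementation; it loosely treats the both-values-present case as the NOT-DISAPPEAR case in which CHANGE and, for numeric values, the quantitative characterizations apply:

```python
ABSENT = None  # stands for the assumed "absent"/"false" value in each range

def changes(v1, v2):
    """Change characterizations for one attribute's value between two
    ordered time points: v1 at the earlier point, v2 at the later one."""
    out = set()
    out.add("APPEAR" if v1 is ABSENT and v2 is not ABSENT else "NOT-APPEAR")
    out.add("DISAPPEAR" if v1 is not ABSENT and v2 is ABSENT
            else "NOT-DISAPPEAR")
    if v1 is not ABSENT and v2 is not ABSENT:
        # Qualitative characterizations apply when both values are present.
        out.add("CHANGE" if v1 != v2 else "NOT-CHANGE")
        if isinstance(v1, (int, float)) and isinstance(v2, (int, float)):
            # Quantitative characterizations apply to numeric attributes.
            out.add("INCREASE" if v2 > v1 else "NOT-INCREASE")
            out.add("DECREASE" if v2 < v1 else "NOT-DECREASE")
    return out
```

For example, an attribute going from absent to present yields APPEAR (and NOT-DISAPPEAR), while a numeric attribute rising from 3 to 5 yields CHANGE, INCREASE and NOT-DECREASE, illustrating that the characterizations overlap.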
The following diagram illustrates this format for an event trace depicting the event "push away." Only dynamic (across-time) information is portrayed, with the ten change characterizations coded using the mnemonic symbols specified above. Also, for ease of (our) visualization, a drawing is placed above each transition in the event. This event trace contains two transitions, the first corresponding to appearance of pressure between "object-1" and "object-2," the second, motion of "object-2" with respect to the background and parting of contact between the two objects. (The diagram tracks the object pairs object-1/object-2 and object-2/the-background across time points t1, t2 and t3.)

Matches Between Event Traces

Given a set of event traces for the events in a causal description, simple inter-event associations may be detected by identifying partial matches between the traces. As an example, suppose we are given two events, steam moving into contact with a metal plate, and steam condensing on the metal plate, represented by two traces: the first, "The steam moves into contact with the metal plate," over time points t11, t12 and t13, involving the-steam relative to the-background and the-metal-plate; the second, "The steam condenses on the metal plate," over time points t21, t22 and t23, involving the-steam relative to the-metal-plate and its classification as vapor or liquid.

Partial matches are grouped into two classes: (1) partial chaining matches, where a non-initial transition in one trace partially matches the initial transition in another, and (2) partial restatement matches, where the traces match in some other way.²

²This distinction is refined slightly in [Borchardt, 1992].

For the above traces, there are two possible partial chaining matches and three possible partial restatement matches (first transitions only, second only, both transitions). Of these possibilities, a single partial chaining match appears, leading from the first trace to the second and
involving a disappearance of distance and appearance of contact between the steam and the metal plate.³

The match identified here is a partial association, as the transitions in question do not match completely. By performing simple transformations on the event traces, we may bring them into a complete match. Here, we distinguish between two classes of transformations: information-preserving and non-information-preserving. The former are members of inverse pairs of transformations in transition space; the latter are not.

A set of event traces linked by complete associations (transformations and complete matches) comprises an association structure. Association structures are diagrammed using a more abstract visual format. Here, event traces appear as arrows (or more generally, DAGs), with associations represented by alignment in three dimensions: horizontal for complete chaining associations, vertical for non-information-preserving transformations, and depth-wise for information-preserving transformations. The following diagram illustrates an association structure elaborating the partial match discussed above (A: The steam moves into contact with the metal plate. B: The steam condenses on the metal plate.).

The chain of associations may be summarized as follows. First, we perform an information-preserving transformation on the second trace, B, replacing time points "t21" and "t22" with their matched equivalents "t12" and "t13" (see previous illustration). This produces trace D. Next, we remove information concerning the attribute "is-a" from trace D (a non-information-preserving transformation), producing E, and likewise remove assertions involving "position," "speed" and "heading" from A, producing C. Finally, C and E are linked by a complete chaining association.
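Ignoring time points (as the matching effectively does once equivalents are substituted), a partial chaining match can be sketched by intersecting transitions, each encoded as a set of (change, attribute, objects) tuples. The specific assertions below are a simplified rendering of the steam example, not PATHFINDER's full traces:

```python
def partial_chaining_matches(trace_a, trace_b):
    """Partial chaining matches: a non-initial transition of trace_a
    that shares change assertions with the initial transition of
    trace_b. Each transition is a set of (change, attribute, objects)
    tuples; time points are ignored in this sketch."""
    initial_b = trace_b[0]
    matches = []
    for i, transition in enumerate(trace_a[1:], start=1):
        shared = transition & initial_b   # set intersection
        if shared:
            matches.append((i, shared))
    return matches

# "The steam moves into contact with the metal plate."
move_into_contact = [
    {("APPEAR", "speed", ("the-steam",))},
    {("DISAPPEAR", "distance", ("the-steam", "the-metal-plate")),
     ("APPEAR", "contact", ("the-steam", "the-metal-plate"))},
]

# "The steam condenses on the metal plate."
condense = [
    {("DISAPPEAR", "distance", ("the-steam", "the-metal-plate")),
     ("APPEAR", "contact", ("the-steam", "the-metal-plate")),
     ("DISAPPEAR", "is-a", ("the-steam", "vapor"))},
    {("APPEAR", "is-a", ("the-steam", "liquid"))},
]
```

Here the single chaining match runs from the second transition of the first trace into the initial transition of the second, sharing exactly the disappearance of distance and the appearance of contact, as in the example above.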
Inference and Background Statements

Symmetric, transitive and other properties of the primitives EQUAL and GREATER, plus properties of particular attributes can be mechanized into an inference step augmenting event traces with relevant new assertions. This provides additional material for matching. Background statements supplied with a description (e.g., “The water is inside the tank.”) can assist in this process. Separately, inference can be used to test partial matches for logical consistency.

3Heuristics for ranking alternative partial matches are listed in the section entitled “PATHFINDER.”

4 Explanation and Tutoring

Exploratory Transformation of Traces

In addition, transformations of both varieties can be applied in an exploratory manner to form alternate characterizations of events at different levels of abstraction or in terms of different underlying metaphors. An event trace together with its transformed images forms a small “cluster” of traces, all of which participate in the association process. In this manner, we may bridge discontinuities arising from the writer's use of analogy or abstraction. The following traces depict a simple exploratory information-preserving transformation (of type substitution). If we are told that an object “slides to a stop,” it is natural to represent this by the first trace. For a rotating object like a wheel, however, a substitution of attributes taking us into the domain of spinning objects may be more appropriate. By including both traces in the association process, we can determine by matching which interpretation is correct.

[Trace diagrams: a sliding interpretation of the wheel stopping, for (the-wheel, the-axle) over time points t31, t32, t33, linked by a substitution transformation to a spinning interpretation over time points t41, t42, t43.]

Below is an exploratory non-information-preserving transformation (of type object composition). Suppose one object is specified as coming into contact with a second object, and the second object is a part of a third object.
An alternate, more abstract characterization of the event portrays the first object as simply coming into contact with the third object. Such a situation arises in processing the camera description (as discussed below): light comes into contact with the silver halides as part of chemically altering them, yet this must be matched with light “converging on the film” which contains the silver halides.

[Trace diagram: contact appearing between object-51 and object-52 over time points t51, t52, linked by an object composition transformation to contact between object-51 and object-53.]

Additional exploratory non-information-preserving transformations appear in [Borchardt, 1990] and [Borchardt, 1992]. These include: generalization, in which a reference term is replaced by a subsuming term; two additional varieties of composition (interval composition, attribute composition), and three varieties of reification (attribute-object reification, event-attribute reification, and event-object reification).

PATHFINDER

PATHFINDER is a 20,000 line program coded in Common Lisp and running on a Symbolics 3640. It consists of a parser operating on a simple, context free skeleton of English grammar, a simple language generation capability, a toolbox for representing, matching and conducting inference and transformations on events in transition space, and additional facilities for causal reconstruction. PATHFINDER has been applied to over 60 causal descriptions, most involving 2-4 events, in a wide range of physical domains including interaction between solid objects and liquids, condensation and melting, combustion, radio signals, light, chemical reactions and electric currents. All input to PATHFINDER consists of statements in simplified English. (A sample appears in Figure 1, below.) First, PATHFINDER is given a causal description, consisting of (1) event references (“The light enters the camera.”), (2) background statements (“The head is a part of the nail.”) and, in some cases, (3) explicit meta-level statements (“The device starting to move causes the lever to start to move.”).
Next, a set of supplementary information is provided, possibly including (1) additional background statements, (2) event definitions, (3) precedent events, which may be of use in reconstructing the activity, (4) rules of inference, and (5) rules of restatement, including specifications of analogical mappings and rules of abstraction. Given input in this form, PATHFINDER performs causal reconstruction in four phases. First, it uses the event definitions to form event traces for all events

(a) (the causal description in simplified English) The camera records the image on the film. The recording of the image is a function of the camera. The light enters the camera. The light passes through the lens. The light converges on the film. The light forms the image on the film. The light chemically alters the silver halides. The silver halides are contained in the emulsion. The emulsion is a part of the film.

(b) (an event definition for “entering,” involving physical objects) Object 11 entering object 12 translates to the following event. Concurrently, object 11 remains a physical object, object 12 remains a physical object, object 12 remains hollow, the position of object 11 changes, the speed of object 11 does not disappear, the heading of object 11 does not change, and object 11 becomes inside object 12.

(c) (a precedent event: change of appearance during chemical transformation) Object 61 changes appearance from chemical transformation. Object 61 changing appearance from chemical transformation translates to the following event. Concurrently, object 61 remains a physical object, object 61 becomes not made of substance 62, object 61 becomes made of substance 63, and the appearance of object 61 changes.

(d) (a restatement rule: light viewed as a physical object with respect to “contact”) Concurrently, quantity 141 is a beam of light, object 142 is a physical object, and the contact between quantity 141 and object 142 is present.
The following statement parallels the preceding statement. Concurrently, object 151 is a physical object, object 152 is a physical object, and the contact between object 151 and object 152 is present.

(e) (a restatement rule: contact with a part summarized as contact with whole) Concurrently, object 201 remains a part of object 202, the distance between object 203 and object 201 disappears, and the contact between object 203 and object 201 appears. The preceding statement is summarized by the following statement. Concurrently, the distance between object 203 and object

Figure 1: Input text for the camera description (partial).

referenced in the description. Second, it extends the traces through inference and applies exploratory transformations (these motivated by rules of restatement in the input) producing for each event a cluster of traces describing that activity in different ways. Third, it constructs an agenda of partial matches identified between traces in different clusters. Iteratively choosing the top-ranked partial match and elaborating it in the manner illustrated above, it associates the clusters together. Inference is used to check each selected partial match for consistency. Fourth, when all of the events have been associated, it answers questions (described in the next section).

Current status of the association structure. Penalties are introduced for matches providing a second antecedent or consequent for an event, matches between events already connected via associations, and matches involving hypothesized objects (e.g., a conjectured part of a physical object).

Meta-level considerations. Restatement matches are penalized slightly, and matches violating explicit meta-level statements are penalized heavily.

Processing the Camera Description

Figures 1 and 2 contain an abbreviated session transcript for PATHFINDER regarding the camera description introduced in the Introduction.
Input to PATHFINDER appears in Figure 1; question-answering appears in Figure 2. Several heuristics are used in choosing partial matches for elaboration. These heuristics are listed below. The first heuristic is given strongest influence over the selection process.

Matching between transitions. Definite changes (APPEAR, DISAPPEAR, CHANGE, INCREASE, DECREASE) are weighted most, other dynamic assertions next and static assertions least.

Proximity to description events. Penalties are introduced for matches involving precedent events or exploratory transformations of events.

Narrative ordering. Preference is given to chaining matches between events referenced consecutively in the description.

Details of processing for this example are given in [Borchardt, 1992]. The association phase executes six iterations, selecting from 120 candidate partial matches on the first iteration. The diagram below illustrates a part of the association structure generated by PATHFINDER, connecting “The light converges on the film.” with “The light chemically alters the silver halides.” Traces M, W and Z (outlined) have been formed by exploratory transformations. For M and W, an information-preserving transformation derived from the rule of restatement shown in Figure 1 (d) has recast activity involving a physical object with activity involving a beam of light. For Z, a non-information-preserving transformation derived from the rule of restatement shown in Figure 1 (e) has recast light contacting the silver halides as light contacting the film. A partial chaining match has been identified between traces M and Z, both of which specify the light contacting the film. Finally, traces Al, Bl and Cl have been produced by elaboration of this partial chaining match (involving an equivalence mapping from Z to Bl, removal of assertions from M to Al and from Bl to Cl, and chaining from Al to Cl).

[Association structure diagram: traces D (“The light converges on the film.”) and F (“The light chemically alters the silver halides.”), with transformed traces M, W, Z and elaboration traces Al, Bl, Cl linked by chaining, equivalence and substitution associations.]

(a) (a question of type 1: describing the time-varying status of a particular attribute) What happens to the distance between the light and the film? First, as the light passes through the lens, the distance between the light and the film decreases. Next, as the light converges on the film, the distance between the light and the film disappears. Next, as the camera records the image on the film, the distance between the light and the film does not appear.

(b) (a question of type 2: describing the interrelationship between two events) How does the light converging on the film relate to the light forming the image on the film? The light converging on the film causes the light to chemically alter the silver halides, which ends with the silver halides changing appearance from chemical transformation, which occurs at the end of the light forming the image on the film.

(c) (a question of type 3: identifying a plausible causal connection) How could the building reflecting the light into the camera cause the light to converge on the film? The building reflecting the light into the camera could end with the light entering the camera, which could cause the light to pass through the lens, which could cause the light to converge on the film.

(d) (a question of type 4: restating a portion of the activity) Does the light come into contact with the emulsion? Yes. The light coming into contact with the emulsion is a part of the light converging on the film.

PATHFINDER uses the association structure to answer four types of questions.
The first type concerns the time-varying status of particular attributes of objects; e.g., “What happens to the distance between the light and the film?” To answer this, PATHFINDER merges the traces in the association structure, overlapping where indicated, extracts the portion relevant to the question, and expresses it in simple English. For the above question, the relevant portion is as below. PATHFINDER's response appears in Figure 2 (a).

Figure 2: Question answering for the camera description.

[Trace excerpt: the distance between the-light and the-film across time points through t32.]

The second type of question concerns inter-event relationships; e.g., “How does the light converging on the film relate to the light forming the image on the film?” To answer this type of question, PATHFINDER extracts the relevant path in the association structure and describes this path in simple English, highlighting important associations (see Figure 2 (b)). The third and fourth types of questions also ask about inter-event relationships, but require PATHFINDER to do further association first. Supplementary information (e.g., event definitions) may be provided with these questions. The third type of question involves plausible causal associations; e.g., “How could the building reflecting the light into the camera cause the light to converge on the film?” (Figure 2 (c)). The fourth type asks if a new event may paraphrase part of the activity; e.g., “Does the light come into contact with the emulsion?” (Figure 2 (d)).

Discussion

This work is motivated by a range of research, as summarized in [Borchardt, 1992]. However, very little work has addressed the task of understanding written causal descriptions of physical systems. Rieger [1976] proposed a mechanism for such understanding, but his approach lacked an explicit notion of time and was never fully implemented and tested.
More recently, Sembugamoorthy and Chandrasekaran [1986] and Bylander and Chandrasekaran [1985] provide interesting accounts of causal physical behavior that are consistent with human intuition. Their research targets the task of reasoning using knowledge entered directly in a representational format, however, rather than the task of extracting causal knowledge from written material.

Several differences exist between the transition space representation and that used in qualitative physics, these arising primarily from differences in the problems being addressed. As noted above, representations for individual events are grounded in language, rather than a scientific model of a physical system. As a result, the transition space representation tends to be more macroscopic, with events often spanning several qualitative states of a device. Additionally, transition space explicitly represents differences in the description of events at alternate levels of abstraction or in terms of different underlying metaphors. Finally, the mechanism for reasoning is different, consisting of heuristic search rather than constraint propagation.

Work in spreading activation [Quillian, 1969; Alterman, 1985; Norvig, 1989] has addressed the problem of recognizing inter-event associations in natural language text. However, little of this work deals with reconstructing sequences of physical causation. I suspect that this is because inferring plausible causal links between physical events is very context dependent (e.g., a dropped object will fall, but only if it is not otherwise supported). Event traces in transition space can capture this context, whereas doing so in a semantic network would require considerable bifurcation of event nodes into sub-nodes depicting special cases. The transition space representation works best when the activities are not all of the same type (e.g., all translational motion).
In such cases, it is expected that incorporation of a more finely-grained spatial representation may be required. Other useful extensions include abstraction shifts for elaborating or summarizing repetitive events and feedback cycles, a means of estimating likelihood for causal sequences, and a means of classifying objects and events.

On the basis of descriptions processed by PATHFINDER, the heuristic of matching of transitions appears to be quite useful in causal reconstruction. The transition space representation is also easy to generate from simple, stylized verbal accounts of what happens during events. Since the representation is grounded in the variety of changes expressible in simple language, it is quite possible that this representation may find utility in other domains as well, beyond the current focus on physical systems.

Acknowledgements

I thank Patrick Winston, Randall Davis, David Waltz, Susan Carey and the members of the Learning Group at the MIT AI Laboratory (in particular, Rick Lathrop, Jintae Lee and Lukas Ruecker) for guidance and helpful criticism in the course of this research.

References

Alterman, R., “A Dictionary Based on Concept Coherence,” Artificial Intelligence 25:2, 1985, 153-186.

Borchardt, G. C., “Event Calculus,” Proc. Ninth International Joint Conference on Artificial Intelligence, 1985, 524-527.

Borchardt, G. C., “Transition Space,” A.I. Memo 1238, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 1990.

Borchardt, G. C., Causal Reconstruction: Understanding Causal Descriptions of Physical Systems, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 1992, forthcoming.

Bylander, T. and Chandrasekaran, B., “Understanding Behavior Using Consolidation,” Proc. Ninth International Joint Conference on Artificial Intelligence, 1985, 450-454.

de Kleer, J. and Brown, J.
S., “A Qualitative Physics Based on Confluences,” Artificial Intelligence 24:1-3, 1984, 7-83.

Doyle, R. J., “Reasoning About Hidden Mechanisms,” Proc. Eleventh International Joint Conference on Artificial Intelligence, 1989, 1343-1349.

The Encyclopedia Americana, International Edition, Grolier Inc., 1989.

Forbus, K. D., “Qualitative Process Theory,” Artificial Intelligence 24:1-3, 1984, 85-168.

Grice, H. P., “Logic and Conversation,” in Cole, P. and Morgan, J. L. (eds.) Syntax and Semantics, Volume 3: Speech Acts, Academic Press, 1975, 41-58.

Kuipers, B., “Qualitative Simulation,” Artificial Intelligence 29:3, 1986, 289-338.

Michotte, A., The Perception of Causality, Translated by T. and E. Miles from French edition, 1946, Methuen, London, 1963.

Miller, G. A. and Johnson-Laird, P. N., Language and Perception, Harvard University Press, 1976.

Norvig, P., “Marker Passing as a Weak Method for Text Inferencing,” Cognitive Science 13:4, 1989, 569-620.

Pearl, J. and Verma, T. S., “A Theory of Inferred Causation,” Proc. Second International Conference on Principles of Knowledge Representation and Reasoning, 1991, 441-452.

Quillian, M., “The Teachable Language Comprehender: A Simulation Program and Theory of Language,” Communications of the ACM 12:8, 1969, 459-476.

Rieger, C., “An Organization of Knowledge for Problem Solving and Language Comprehension,” Artificial Intelligence 7:2, 1976, 89-127.

Sembugamoorthy, V. and Chandrasekaran, B., “Functional Representation of Devices and Compilation of Diagnostic Problem-Solving Systems,” in Kolodner, J. and Riesbeck, C. (eds.) Experience, Memory, and Reasoning, Lawrence Erlbaum Associates, 1986, 47-73.

Waltz, D. L., “Event Shape Diagrams,” Proc. AAAI Second National Conference on Artificial Intelligence, 1982, 84-87.
Integrating Planning and Reacting in a Heterogeneous Asynchronous Architecture for Controlling Real-World Mobile Robots

Erann Gat
Jet Propulsion Lab, California Institute of Technology
4800 Oak Grove Drive
Pasadena, California 91109
gat@robotics.jpl.nasa.gov

ABSTRACT

This paper presents a heterogeneous, asynchronous architecture for controlling autonomous mobile robots which is capable of controlling a robot performing multiple tasks in real time in noisy, unpredictable environments. The architecture produces behavior which is reliable, task-directed (and taskable), and reactive to contingencies. Experiments on real and simulated real-world robots are described. The architecture smoothly integrates planning and reacting by performing these two functions asynchronously using heterogeneous architectural elements, and using the results of planning to guide the robot's actions but not to control them directly. The architecture can thus be viewed as a concrete implementation of Agre and Chapman's plans-as-communications theory. The central result of this work is to show that completely unmodified classical AI programming methodologies using centralized world models can be usefully incorporated into real-world embedded reactive systems.

1. Introduction

We have been investigating the problem of controlling autonomous mobile robots in real world environments in a way which is reliable, task-directed (and taskable), and reactive to contingencies. The result of our research is a control architecture called ATLANTIS1 which combines a reactive control mechanism with a traditional planning system. In this paper we describe a series of experiments using the architecture to control real-world and simulated real-world robots. We demonstrate that the architecture is capable of pursuing multiple goals in real time in a noisy, partially unpredictable environment.
The central result of the work is to show how a traditional symbolic planner can be smoothly integrated into an embedded system.

We begin by reviewing the difficulties associated with embedding AI systems into real-world robots. Controlling autonomous mobile robots is hard for three fundamental reasons. First, the time available to decide what to do is limited. A mobile robot must operate at the pace of its environment. (Elevator doors and oncoming trucks wait for no theorem prover.) Second, many aspects of the world are unpredictable, making it infeasible to plan a complete course of action in advance. Third, sensors cannot provide complete and accurate information about the environment. These are fundamental problems because they cannot ever be engineered away. No matter how powerful a computer we build, a finite amount of time will allow only a finite amount of computation. No matter how good a sensor we may build there is always information that it cannot deliver because the relevant situation is hidden behind a wall or across town. No matter how good our domain theory may be, many important aspects of the world simply cannot be predicted reliably.

1ATLANTIS is an acronym which stands for (among other things): A Three-Layer Architecture for Navigating Through Intricate Situations.

Related Work: Classically the problem of mobile robot control has been addressed within a framework of functional decomposition into sensing, planning and acting components. There is a vast literature on the traditional sense-plan-act architecture and its variations. A good recent example of a sense-plan-act approach to mobile robot control appears in [Stentz90]. An alternative approach spearheaded by Brooks advocates decomposition of the robot control problem into special-purpose task-achieving modules (often called behaviors) rather than into general-purpose functional modules [Brooks86].
One of the interesting results of this work is that useful behaviors can be built out of very simple computations with very little internal state. (This result is often mistakenly believed to imply that reactive control is one of the tenets of the behavior-based approach. Behavior-based control makes reactive control possible, but it does not mandate it.) A number of researchers have described systems which integrate these two approaches (e.g. [Connell91], [Kaelbling88], [Soldo90], [Arkin90], [Georgeff87]), or which start with one approach and try to push its capabilities towards that of the other (e.g. [Mataric90], [Simmons90]). Most of these systems are homogeneous, that is, they use basically the same computational structure throughout. ([Connell91] is a notable exception.)

Overview: This paper introduces ATLANTIS, a heterogeneous, asynchronous architecture for controlling mobile robots which combines a traditional AI planner with a reactive (not necessarily behavior-based) control mechanism. The next section describes the theory behind the approach. Section 3 describes the architecture. Section 4 describes experiments using the architecture to control real-world and simulated real-world mobile robots performing multiple tasks in real time in noisy, unpredictable environments. Section 5 summarizes, presents conclusions, and suggests directions for future research. This paper is constrained by space limits to be somewhat superficial. For more complete technical details see [Gat91a] and forthcoming papers.

Gat 809

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

2. Activities and Actions

The ATLANTIS architecture is based on an action model which is different from most traditional AI systems. This section presents a brief review of this model. For a more complete discussion see [Gat91a]. A majority of past work in AI on robot control architectures and planning systems is based fundamentally on a state-action model.
This model is based on the idea that the temporal evolution of the configuration of a system (or an environment) can be described as a sequence of discrete states. One state is transformed into a subsequent state by the performance of an action. The word action is variously used to denote both the physical action as well as a computational structure which represents the physical action. The latter is sometimes called an operator, a term which we will adopt here to avoid ambiguity. According to the classical paradigm, an action is produced by executing an operator. In order to distinguish between this technical notion of action as the physical activity produced by the execution of an operator, and the idea of physical activity in general, we will capitalize the former. Thus, executing an operator produces an Action, but an action might be produced in other ways. There is a very close correspondence between Actions and operators, which is one reason the terms are sometimes used interchangeably. The process of execution is atomic, resulting in a strict one-to-one correspondence between operators and Actions. This structure facilitates analysis, but makes it difficult to model simultaneous, interleaving, or overlapping actions. It also makes it impossible to model a process where the execution of an atomic action is abandoned in the middle in response to a contingency.

ATLANTIS is based on a continuous action model similar to those described in [Miller84], [Hogge88], and [Dean88]. The ATLANTIS action model is based on operators whose execution consumes negligible time, and thus do not themselves bring about changes in the world but instead initiate processes which then cause change. These processes are called activities, and the operators which initiate (and terminate) them are called decisions. A decision may initiate an activity which contains computational processes which initiate other activities.
If we assume that there are no circularities in this network of initiations then we may classify activities into a hierarchy. High-level activities contain computational processes which initiate low-level activities. The hierarchy bottoms out in primitive activities, reactive sensorimotor processes which contain no decision-making computations. Because there is no strict correspondence between decisions and changes in the world, an activity-based model of action is more difficult to analyze than a state-action model. Such an analysis is beyond the scope of this paper. Instead, we will use the activity model as an engineering tool to help us organize the computations required to produce robust behavior in our robots. The key observation is that activities consist of potentially overlapping sequences of primitive (physical) activities and computational activities. The results of the computations are used to guide the sequencing of primitives. Thus, to control a mobile robot we need three things: a control mechanism for controlling primitive activities, a computational mechanism for performing decision-making computations, and a sequencing system to control the interactions between the two. The next section describes such a system.

3. The ATLANTIS Architecture

This section briefly describes ATLANTIS, a heterogeneous asynchronous architecture for controlling mobile robots based on the activity model of action described in section 2. ATLANTIS consists of three components. The controller is a reactive control mechanism which controls primitive activities, i.e. activities with no decision-making computations. The sequencer is a special-purpose operating system which controls the initiation and termination of primitive activities, and of time-consuming deliberative computations like planning and world modelling which are performed in the deliberator.
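Assuming acyclic initiation, the classification of activities into a hierarchy can be sketched as a small graph computation. The activity names and data layout below are invented for illustration; this is not code from ATLANTIS. Primitives (which initiate nothing) sit at level 0, and each activity sits one level above the highest activity it initiates:

```python
def activity_levels(initiates):
    """Given a mapping from each activity to the activities it may
    initiate (assumed acyclic), assign each activity a hierarchy level."""
    levels = {}

    def level(a):
        if a not in levels:
            children = initiates.get(a, [])
            # Primitive activities initiate nothing and get level 0.
            levels[a] = 0 if not children else 1 + max(level(c) for c in children)
        return levels[a]

    for a in initiates:
        level(a)
    return levels

# Hypothetical navigation task for a mobile robot.
initiates = {
    "deliver-package": ["go-to-room", "announce-arrival"],
    "go-to-room": ["follow-wall", "avoid-obstacles"],
    "follow-wall": [],
    "avoid-obstacles": [],
    "announce-arrival": [],
}
levels = activity_levels(initiates)
print(levels["deliver-package"], levels["go-to-room"], levels["follow-wall"])  # 2 1 0
```

The level-0 activities here are exactly the primitive activities the controller would run; everything above them is sequenced rather than directly executed.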
3.1 The controller: This component is responsible for controlling primitive activities, that is, activities which are (mostly) reactive sensorimotor processes. It is possible to design the controller for a given application using nothing but classical control theory. However, in many cases control theory cannot be applied directly to the problem of controlling autonomous mobile robots because of the difficulty in constructing an adequate mathematical model of the environment. In such cases it is necessary to provide a framework wherein an appropriate control algorithm can be effectively engineered and empirically verified. There are a great many issues which must be addressed in the design of such a system, not the least of which are control-theoretical issues. For now we shall lay these aside, concentrating instead on the computational organization of the system that allows a designer to conveniently engineer systems that do the right thing. The sorts of transfer functions required to control reactive robots tend to be highly nonlinear, of high dimension, and often discontinuous. The design of the system must be such that we can easily describe the functions we need and, having defined them, actually implement them in a way that lets them be connected to actual hardware.

810 Robot Navigation

To support these requirements we have designed a new programming language called ALFA (A Language For Action) [Gat91b]. ALFA is similar in spirit to REX [Kaelbling87], but the sorts of abstractions the two languages provide are quite different. ALFA programs consist of computational modules which are connected to each other and to the outside world by means of communications channels. Both the computations performed and their interconnections are specified within module definitions, allowing modules to be inserted and removed without having to restructure the communications network. ALFA provides both dataflow and state-machine computational models.
It has a clean syntax and a realistic uniform interface to external hardware. ALFA is currently compiled into uniprocessor code, but the semantics of the language are such that it could be compiled onto a parallel processor or even analog hardware.

3.2 The sequencer: This component is responsible for controlling sequences of primitive activities and deliberative computations. Controlling sequences is difficult primarily because the sequencer must be able to deal effectively with unexpected failures. This requires the careful maintenance of a great deal of internal state information because the sequencer must be able to remember what actions have been taken in the past in order to decide what action should be taken now. The fundamental design principle underlying the sequencer is the notion of cognizant failure [Firby89]. A cognizant failure is a failure which the system can detect somehow. Rather than design algorithms which never fail, we instead use algorithms which (almost) never fail to detect a failure. There are two reasons for doing this. First, it is much easier to design navigation algorithms which fail cognizantly than ones which never fail. Second, if a failure is detected then corrective action can be taken to recover from that failure. Thus, algorithms with high failure rates can be combined into an algorithm whose overall failure rate is quite low provided that the failures are cognizant failures [Howe91].

The sequencer initiates and terminates primitive activities by activating and deactivating sets of modules in the controller. In addition, the sequencer can send parameters to the controller by means of channels. The progress of a primitive activity is monitored by examining the values of channels provided for this purpose. The sequencer is modelled after Firby's Reactive Action Package (RAP) system. The system maintains a task queue, which is simply a list of tasks that the system must perform.
Each task contains a list of methods for performing that task, together with annotations describing under what circumstances each method is applicable. A method is either a primitive action or a list of sub-tasks to be installed onto the task queue. The system works by successively expanding tasks on the queue until they either finish or fail. When a task fails, an alternate method is tried (cf. [Simmons90], [Noreils90]). The main difference between the original RAP system and the ATLANTIS sequencer is that the latter controls activities rather than atomic actions. This requires a few modifications to the original RAP system. First, the system must ensure that two activities which interfere with each other are not enabled simultaneously. This is accomplished by attaching to each activity a list of resources that the activity requires and using a set of semaphores to prevent conflicts. Second, if a primitive activity must be interrupted (for example, to take care of some unexpected contingency) then the system must ensure that the interrupted activity is properly terminated so that the modules which control that activity are disabled and any resources used by that activity are relinquished. The solution to this problem is to provide a mechanism similar to a LISP unwind-protect which allows an interrupted process to execute some clean-up procedures before relinquishing control.
3.3 The deliberator: This component is responsible for performing time-consuming computational tasks such as planning and maintaining world models. The deliberator performs computations under the control of the sequencer - all deliberative computations are initiated (and may be terminated before completion) by the sequencer. This allows the sequencer to direct scarce computational resources to the task at hand. Results of deliberative computations are placed in a database which can be accessed by the sequencer.
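The task-queue-and-methods machinery of section 3.2 can be sketched in a few lines. The following is an illustrative Python sketch, not code from ATLANTIS or RAP (which were written in ALFA and Lisp); all names (`Task`, `run`, `Failure`) are invented for this example, and the resource semaphores and unwind-protect-style cleanup described above are omitted for brevity:

```python
# Minimal sketch of a RAP-style task queue with cognizant failure.
# Names are illustrative, not from the actual ATLANTIS/RAP code.

class Failure(Exception):
    """Raised when a method detects its own failure (a cognizant failure)."""

class Task:
    def __init__(self, name, methods):
        self.name = name
        # methods: list of (applicable?, body) pairs; a body is either
        # a primitive action (a callable) or a list of sub-Tasks.
        self.methods = methods

def run(task, world):
    """Expand a task until it finishes or all applicable methods fail."""
    for applicable, body in task.methods:
        if not applicable(world):
            continue
        try:
            if callable(body):          # primitive action
                body(world)
            else:                       # list of sub-tasks: expand each
                for sub in body:
                    run(sub, world)
            return True                 # task finished
        except Failure:
            continue                    # cognizant failure: try next method
    raise Failure(task.name)            # all methods exhausted: fail upward
```

Because each method either succeeds or fails cognizantly (raises `Failure`), unreliable methods can be stacked behind one another, which is how high-failure-rate primitives combine into a low-failure-rate task.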
The deliberator has no restrictions on its computational structure except that the sequencer must be able to initiate and terminate its computations. It typically consists of a set of LISP programs implementing traditional AI algorithms. This is a central feature of the system. The function of the sequencer and controller is to provide an interface which connects to physical sensors and actuators on one end, and to classical AI algorithms implemented in traditional ways on the other. The following interesting question now arises: how does a planner based on a traditional AI state-action model interface with a control mechanism based on a continuous activity model? It turns out that this interface is quite straightforward. Because the output of the planner is used only as advice by the sequencer, it doesn't matter at all what the planner's internal representation is. The only requirement is that the output of the planner contains some information which the sequencer can effectively use. ATLANTIS can be considered an implementation of Agre and Chapman's theory of plans-as-communications (or plans-as-advice) [Agre90]. A concrete example is described in section 4.
3.4 Design methodology: ATLANTIS advocates a bottom-up design methodology (cf. [Simmons90]). Primitive activities are designed first, keeping in mind that they must be designed to fail cognizantly. The primitives are then used as fundamental building blocks for the construction of task schemas for the sequencer task library. Finally, deliberative computations are designed to support the sequencer in making choices among multiple tasks or task methods, or in supplying task method parameters. Although there are no restrictions on the computational structure of the deliberator, there is a caveat concerning the semantics of its computations. Deliberative computations by definition are time-consuming and maintain internal state information which contains implicit predictions about the world.
Thus it is critical that the information content of the internal state pertain to predictable aspects of the environment. One way to do this is to ensure that all deliberative computations are performed at a high level of abstraction, where unpredictable aspects of the environment are abstracted away to be dealt with at runtime by the rest of the architecture.
4 Experiments
ALFA, and the ATLANTIS architecture and design methodology, have been implemented on a variety of real robots operating in both indoor and outdoor environments. ALFA has been used to control Tooth, an extremely reliable indoor object-collecting robot [Gat91c]. The language has also been used to program the JPL outdoor microrover testbed Rocky III, the only example known to the author of an autonomous outdoor robot which collects and returns samples [Miller91]. A significant result of this work was that the control structures for these two robots were very similar, indicating that the abstractions used to program them may be more widely applicable. ALFA and a simple sequencer were also used to control an indoor robot performing a complex navigation task using very simple sensors [Gat91d] (cf. [Mataric90], [Connell91]). The ATLANTIS architecture has been used to control Robbie, the JPL Mars rover testbed [Wilcox92]. Robbie is a large outdoor testbed whose primary sensor is a pair of stereo cameras. A trace of a typical experiment is shown in figure 1. The robot's path is shown as a solid black line starting at the right. The light polygons are the areas scanned by the robot's stereo vision cameras. The shaded circles are obstacles detected by the robot during the traverse. The total length of the path is about forty meters. The robot moved at about two meters per minute. (The robot has now been retrofitted with a new drive mechanism which should significantly improve this performance.)
The sequencer in this experiment coordinated four active tasks running concurrently during the traverse: controlling the vehicle's direction, controlling the aiming of the stereo camera, and allocating processor time to the stereo processing and planning tasks running in the deliberator. The navigation task used an algorithm based on navigation templates (NaTs) [Slack90] to avoid obstacles. This algorithm used a symbolic map constructed by the vision system, together with the current strategic plan constructed by the deliberator, to quickly calculate a preferred direction of travel from the robot's current location about three times a minute. The robot was given three goals and no advance knowledge of the environment whatsoever. The robot initially planned to achieve goal B first, then goal A, then goal C. On the way to the second goal, however, it acquired new obstacle data which indicated that goal C would be easier to achieve next. The deliberator, running asynchronously, advised the sequencer to temporarily abandon goal A in favor of goal C.
Figure 1: An outdoor robot performing a complex navigation task.
Simulation results: In order to facilitate experimentation, a sophisticated simulation of the Robbie robot was constructed. The simulation operates in real time, and includes an accurate kinematic simulation of the robot, as well as a sensor model with tunable noise parameters which can be adjusted to yield performance very close to the real robot. (In most of our simulator experiments we adjusted the noise parameters to give significantly worse performance than the real robot.) The performance of the simulator was verified by reproducing the results obtained on the real robot. (See figure 2.) The code controlling the simulated robot was identical to the code which controlled the real robot. The world model built up by the real robot in the outdoor experiment was used as the simulator's internal world model (shown as shaded circles in the figure).
The simulated robot had no direct access to this model, but could access it only through the simulated vision system. The obstacles actually detected by the robot are shown as pairs of hollow circles (representing uncertainty ranges on the size of the obstacle). The results of the simulated run are qualitatively identical to the experiment on the real robot. The small quantitative differences are due to the random differences in the sensed world model due to sensor noise. While a single experiment does not warrant sweeping conclusions, these results do indicate that the performance of the simulator is not totally out of step with reality. (Extensive informal experience with the simulator also indicates that its performance is comparable to the real robot.)
Figure 2: A simulated recreation of the outdoor navigation experiment.
Multiple tasks: The simulator was used to perform an extensive series of experiments in an augmented environment far more complex than that available in reality. (See figure 3.) First, a set of random obstacle fields were generated with obstacle densities far higher than those on our actual test course (shown as shaded rectangles in the figure). Second, the noise parameters of the simulated vision system were set to produce data which were sparser and noisier by an order of magnitude than on the real robot. Third, the environment was augmented with a set of simulated martians which roamed about in semi-random trajectories (shown as small icons in the figure). Fourth, a set of sample sites and a home base were added to the world model (labelled "RED-SOURCE", "HOME-BASE", etc.). The robot was given three tasks in this augmented environment: to photograph as many martians as possible, to collect and deliver samples from the sample sites to various destinations according to orders which were given to the robot at runtime (an example of a meta-goal), and to keep itself refueled by visiting home-base periodically.
To support this experiment, a task planner was written based on the work of Miller ([Miller84]). The planner is a simple linear planner which performs a forward beam-search through a space of world states. The search is kept to manageable size through a set of powerful heuristics. The planner can deal with issues of limited resources, deadlines, and travel time. This planner was implemented entirely in Lisp.
Figure 3: A complex delivery task.
To use the plan, the sequencer looked at the current first step, which could be one of four things: collect or deliver a sample, refuel, photograph a martian, or go to a new destination. If the first step was to refuel or to collect or deliver a sample, this step was simply executed as if it were a classical operator (since the simulator has no real-time model of manipulation or refueling). However, if the next step of the plan was to photograph a martian or to go to a new location, the sequencer extracted the target martian or the goal location and initiated an activity to aim the camera in the direction of the martian, or to go to that new location. Because the code to control navigation and camera aiming had already been developed and debugged, the information from the planner could be seamlessly incorporated as inputs to parameters in the code for controlling those activities. Furthermore, because the planner interfaced to the rest of the system only through previously designated inputs, there was a high degree of confidence that the combined system would work properly without modification to existing code. (This was confirmed by a subsequent experiment - see the last paragraph of this section.) The rock-collecting martian-photographing system has logged over twenty hours of runtime, and over thirty kilometers of (simulated) traversed terrain. The longest single run to date lasted eight hours, and we have no reason to believe that the system would not run indefinitely.
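The dispatch on plan steps described above can be made concrete with a small sketch. This is hypothetical Python, not the actual Lisp/ALFA code; the step formats and all names (`interpret_step`, `RecordingSequencer`, the activity names) are invented for illustration:

```python
# Hypothetical sketch of the sequencer consuming the planner's current
# first step as advice. Step formats and names are invented.

class RecordingSequencer:
    """Stand-in for the real sequencer; records what it is asked to do."""
    def __init__(self):
        self.log = []
    def execute_discrete(self, step):
        # collect/deliver/refuel: executed as a classical operator
        self.log.append(("discrete", step))
    def start_activity(self, name, **params):
        # photograph/goto: initiate a continuous activity, with
        # parameters extracted from the plan step
        self.log.append((name, params))

def interpret_step(step, sequencer):
    kind = step[0]
    if kind in ("collect", "deliver", "refuel"):
        sequencer.execute_discrete(step)
    elif kind == "photograph":
        sequencer.start_activity("aim-camera", target=step[1])
    elif kind == "goto":
        sequencer.start_activity("navigate", goal=step[1])
    else:
        raise ValueError("unknown plan step: %r" % (kind,))
```

The point of the sketch is the narrow interface: the planner's internal representation never crosses it, only the extracted parameters do, which is why a different planner can be substituted without touching the activity code.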
Two snapshots of the system in action are shown in figures 3 and 4.
Figure 4: Recovering from failure.
The snapshot shown in figure 3 is particularly interesting because it shows an example of the planner interacting with the rest of the system. In this case the robot's goals were to collect a sample from each of the three sample sites and return them to the home base. The robot begins by collecting green and blue rocks. However, before collecting red rocks the robot returns to the home base. This is because the task planner, which had been running asynchronously, determined that there was not enough fuel to collect red rocks and safely return to the home base. Thus, the planner advised the sequencer to go back to refuel first. On the way, the robot encounters an obstacle blocking its intended route which was not detected by the vision system due to noise, forcing a detour. (A more dramatic example of this is shown in figure 4.) All of this is completely transparent to the task planner. To demonstrate the ease with which different sorts of strategic planners could be incorporated into the architecture, a different planner (a topological path planner) was written and installed. The resulting system worked with no modifications to previously existing code. Details of this experiment can be found in [Gat91a].
5. Comparison to related work
In this section we contrast ATLANTIS with other current work. The purpose of these comparisons is to clarify the operation of ATLANTIS; this is not a comprehensive review of the literature. Only those architectures which are most similar to ATLANTIS are reviewed here. One of the most similar architectures to ATLANTIS is Connell's recently introduced SSS architecture [Connell91]. SSS was developed independently of ATLANTIS at about the same time, and they share many of the same motivations and features.
The primary differences between the two are: 1) ATLANTIS provides a more complete framework for engineering the controller, 2) the middle layer of SSS is based on Brooks' subsumption architecture whereas the sequencer in ATLANTIS is based on Firby's RAP system, and, perhaps most important, 3) the symbolic layer of SSS is actually in the control loop, whereas the deliberator in ATLANTIS merely provides advice to the sequencer. Putting the symbolic layer in the control loop can adversely affect the real-time response of the system, requiring a special mechanism in SSS (the contingency table) to help circumvent the symbolic layer when speed is critical. In ATLANTIS, because the deliberator is not directly in the control loop, its performance in no way affects the system's ability to respond to contingencies with dispatch. In fact, the deliberator can be completely removed and the resulting decapitated architecture is still quite capable of controlling a robot [Gat91d]. In this way, ATLANTIS achieves one of the original aims of Brooks' subsumption architecture, namely, that the system should degrade gracefully with the failure of higher-level components. ATLANTIS is the direct intellectual descendent of a complete control architecture described in the original work on RAPs [Firby89]. The main difference between ATLANTIS and the RAP architecture is that in the latter, control flows top-down from the symbolic planner, which installs tasks in the task queue (although the possibility of controlling symbolic computations by the sequencer is also suggested). The original RAP system also assumed a discrete action model and an optimistic sensor interface. It is interesting to note that most of the ideas in the RAP work turn out to extend with little or no modification to real-world sensors and continuous actuations. An architecture developed by Bonasso is also very similar to ATLANTIS and shares many of its intellectual roots [Bonasso92].
6. Conclusions
We have shown that classical AI planning systems can be usefully embedded into a control mechanism for autonomous mobile robots operating in real-world environments. Special compilation and implementation techniques are not required. Instead, a classical planner should be operated asynchronously in conjunction with a reactive control mechanism, and the planner's output should be used to guide the robot's actions but not to control them directly. This work can be viewed as an implementation of Agre and Chapman's plans-as-communications theory. We have implemented a control architecture called ATLANTIS according to these principles on a variety of real-world and simulated real-world robots operating in both indoor and outdoor environments. We have demonstrated that ATLANTIS can control a robot pursuing multiple goals in real time in a noisy, partially unpredictable environment. The robot's performance is reliable, task-directed and taskable, and reactive to contingencies. We draw the following general conclusions: 1. Robot control architectures should be heterogeneous. Much effort has been expended trying to design architectures which perform strategic planning using the same computational structure which they use to do low-level motor control. There seems to be little to be gained by this. Using different computational mechanisms to perform different tasks is straightforward and it works. 2. Robot control architectures should be asynchronous. Slow computations should be performed in parallel with fast ones to allow fast reaction to contingencies. A continuous rather than a discrete action model should be used to allow actions to overlap or to be terminated before completing in response to unexpected situations. 3. Classical planning, abstraction, and centralized world models are at least useful, if not necessary, in real-world autonomous mobile robots.
While it is certainly possible to implement planners and world models in non-classical or distributed ways, it is not clear that there are advantages in doing so over using established, classical techniques. Abstraction can be a powerful tool for dealing with unpredictable aspects of the environment, and need not introduce large control errors if the plans-as-advice model is followed. 4. Plans should be used to guide, not control, action. This is the view put forth by Agre and Chapman. We consider this work experimental evidence in support of their position. Finally, we make one unsubstantiated conjecture: 5. Robot control systems should be designed bottom-up. This is a software engineering issue, and can probably be verified only with much more experience designing mobile robot control systems. (cf. [Simmons90])
Acknowledgements: This research was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Many of the ideas in this paper grew out of discussions with Rodney Brooks, Jim Firby, Marc Slack, Paul Viola and David Miller. Portions of the control software for Robbie were written by Jim Firby, Marc Slack, Brian Cooper, Tam Nguyen and Larry Matthies. Portions of the simulator software were written by Jim Firby and Marc Slack.
References
[Agre90] Phil Agre, "What are Plans For?", Robotics and Autonomous Systems, vol. 6, pp. 17-34, 1990.
[Arkin90] Ronald C. Arkin, "Integrating Behavioral, Perceptual and World Knowledge in Reactive Navigation," Robotics and Autonomous Systems, vol. 6, pp. 105-122, 1990.
[Bonasso92] R. Peter Bonasso, "Using Parallel Program Specifications For Reactive Control of Underwater Vehicles," to appear, Journal of Applied Intelligence, Kluwer Academic Publishers, Norwell, MA, June 1992.
[Brooks86] Rodney A. Brooks, "A Robust Layered Control System for a Mobile Robot", IEEE Journal on Robotics and Automation, vol. RA-2, no. 1, March 1986.
[Connell91] Jonathan Connell, "SSS: A Hybrid Architecture Applied to Robot Navigation," unpublished manuscript.
[Dean88] Tom Dean, R. James Firby, and David P. Miller, "Hierarchical Planning with Deadlines and Resources," Computational Intelligence 4(4), 1988.
[Firby89] R. James Firby, Adaptive Execution in Dynamic Domains, Ph.D. thesis, Yale University, 1989.
[Gat91a] Erann Gat, "Reliable Goal-directed Reactive Control for Real-world Autonomous Mobile Robots", Ph.D. Thesis, Virginia Polytechnic Institute and State University, Blacksburg, Virginia.
[Gat91b] Erann Gat, "ALFA: A Language for Programming Reactive Robotic Control Systems", IEEE Conference on Robotics and Automation, 1991.
[Gat91c] Erann Gat and David P. Miller, "Modular, Low-computation Robot Control for Object Acquisition and Retrieval," unpublished manuscript.
[Gat91d] Erann Gat, "Low-computation Sensor-driven Control for Task-directed Navigation," IEEE Conference on Robotics and Automation, 1991.
[Georgeff87] Michael Georgeff and Amy Lansky, "Reactive Reasoning and Planning", Proceedings of AAAI-87.
[Hogge88] John Hogge, "Prevention Techniques for a Temporal Planner," Proceedings of AAAI-88.
[Howe91] Adele E. Howe, "Failure Recovery: A Model and Experiments," Proceedings of AAAI-91.
[Kaelbling87] Leslie Pack Kaelbling, "REX: A Symbolic Language for the Design and Parallel Implementation of Embedded Systems," Proceedings of the AIAA Conference on Computers in Aerospace, 1987.
[Kaelbling88] Leslie Pack Kaelbling, "Goals as Parallel Program Specifications", Proceedings of AAAI-88.
[Mataric90] Maja Mataric, "A Distributed Model for Mobile Robot Environment Learning and Navigation", Technical Report 1228, MIT AI Laboratory, 1990.
[Miller84] David P. Miller, "Planning by Search Through Simulations", Technical Report YALEU/CSD/RR423, Yale University, 1984.
[Miller91] David P. Miller, et al., "Reactive Navigation through Rough Terrain: Experimental Results," Proceedings of AAAI-92.
[Noreils90] Fabrice Noreils, "Integrating Error Recovery in a Mobile Robot Control System," IEEE International Conference on Robotics and Automation, 1990.
[Simmons90] Reid Simmons, "An Architecture for Coordinating Planning, Sensing and Action," Proceedings of the DARPA Workshop on Innovative Approaches to Planning, Scheduling, and Control, 1990.
[Slack90] Marc G. Slack, "Situationally Driven Local Navigation for Mobile Robots", JPL Publication 90-17, California Institute of Technology, Jet Propulsion Laboratory, April 1990.
[Soldo90] Monnett Soldo, "Reactive and Preplanned Control in a Mobile Robot," IEEE International Conference on Robotics and Automation, 1990.
[Stentz90] Anthony Stentz, "The Navlab System for Mobile Robot Navigation", Ph.D. Thesis, Carnegie Mellon University School of Computer Science, 1990.
[Wilcox92] Brian Wilcox, et al., "Robotic Vehicles for Planetary Exploration", IEEE International Conference on Robotics and Automation, 1992.
James M. Crawford and David W. Etherington
AI Principles Research Department
AT&T Bell Laboratories
600 Mountain Ave.
Murray Hill, NJ 07974-0636
{jc,ether}@research.att.com
Abstract
The development of a formal logic for reasoning about change has proven to be surprisingly difficult. Furthermore, the logics that have been developed have found surprisingly little application in those fields, such as Qualitative Reasoning, that are concerned with building programs that emulate human common-sense reasoning about change. In this paper, we argue that a basic tenet of qualitative reasoning practice - the separation of modeling and simulation - obviates many of the difficulties faced by previous attempts to formalize reasoning about change. Our analysis helps explain why the QR community has been nonplussed by some of the problems studied in the nonmonotonic reasoning community. Further, the formalism we present provides both the beginnings of a formal foundation for qualitative reasoning, and a framework in which to study a number of open problems in qualitative reasoning.
Introduction
Formalizing reasoning about change has received much attention in the nonmonotonic reasoning community (NMR) [Ginsberg 1987]. The qualitative reasoning community (QR) has focused on the closely-related task of developing programs that emulate human common-sense reasoning about physical systems [Weld & de Kleer 1990]. Strangely, there has been little interaction between the two fields. This may be partially explained by the gulf between the formal methods used in NMR and the experimental methods used in QR. However, certain principles have recently begun to emerge in QR that are useful in guiding the formalization of reasoning about change.
In particular, it has become apparent that reasoning about change can be simplified if one separates the task of (formally) deriving a description of a particular system from general principles (the "modeling problem"),¹ and the problem of making predictions based on that description (the "simulation problem"). We describe a preliminary formalism incorporating this separation. We show that this formalism improves on previous work by correctly handling examples in which information is available about non-initial states (e.g., the "stolen car problem" [Kautz 1986] and its relatives).
Given a complete description of a system (expressed, for example, as a differential equation or a set of statements in first-order logic), it is relatively straightforward to predict the possible behaviors of the system (using a numerical simulator, a program such as QSIM [Kuipers 1986], or a formalism such as that presented in [Morgenstern & Stein 1988]). Such predictions, in general, take the form of state transition graphs that show the effects of each possible action in each possible state. Given the initial state of the system, one can then use the state transition graph to predict the behavior of the system. We refer to the task of deriving state transition graphs (and making behavioral predictions) as the simulation problem.
Things are more complex if one does not have a complete description of the system. A description may be incomplete, for example, because one wants to allow for the possibility that events may fail to have their expected effects (e.g., flipping the switch may fail to light the lamp) or because one wants to allow for the possibility of 'miracles' - changes in the state of the world that are not caused by any known actions. In such cases, one must hypothesize whether each event has its normal effects and which - if any - miracles occur at each time. Any such set of hypotheses induces a complete hypothetical description (a 'model') of the system. We refer to the task of choosing a set of hypotheses as model building. Each model can be simulated to derive behavioral predictions, which can then be compared against observations of the actual system. Sets of hypotheses that induce models whose predicted behavior does not match the observed behavior of the system can then be pruned away (see Figure 1).²
¹Unfortunately, the term model has very different meanings in QR and NMR. In QR, a model is a description of a component or system (e.g., the normal model of a light is that you turn it on and it glows, while the fault model is that you turn it on and nothing happens). In NMR, a model is an assignment of truth values. The sense of model we mean will (we hope) be clear from context (e.g., whenever we speak of "model building" we are using model in the QR sense). When the sense is not obvious we use "qualitative model" or "logical model".
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.
Figure 1: The Roles of Model Building and Simulation.
The nonmonotonic reasoning community has generally failed to distinguish the model building and simulation problems. As a result, formalisms have been developed that allow observations to corrupt the simulation machinery. This leads to difficulties when information about non-initial states is available: rather than allowing models to be pruned because they do not match observations, existing formalisms (e.g., [Baker 1989]) essentially change the behavior of the "simulator" so that any model is made to match the observations. An example of this problem is given after the next section. By explicitly separating the model-building task from the simulation task, we gain two advantages.
First, we are able to handle a class of problems that has caused difficulties for previous approaches. Second, we lay the groundwork for a formalization of qualitative reasoning practice. We are thus able to understand why problems such as the Yale Shooting Problem have not been of interest in QR, and suggest future work that could be of interest to both communities.
Nonmonotonic Formalisms for Reasoning About Change
In this section we briefly review the ideas from past work on formalizing change that are necessary to understand the discussion that follows. Much formal work on representing change is based on the situation calculus [McCarthy & Hayes 1969]. The situation calculus represents the world in terms of situations (states of the world), actions, and fluents (time-varying properties that may or may not hold in particular situations). The predicate Holds is a relation between situations and fluents that states that a fluent holds in a particular situation. The function result maps a situation and an action to the situation produced by performing the action in that situation. Axioms are usually given that detail the necessary preconditions for, and results of, the actions. Logical formalisms for actions are generally nonmonotonic for two reasons. First, one may want to state that an action typically has certain effects, but be unable or unwilling to explicitly list all combinations of circumstances in which the action may fail to have these effects. Second, actions often have indirect effects, so it is in general infeasible to axiomatize the exact effects of successful actions. (For example, painting a house probably changes the value of its color fluent, but it may also change the value of the fluent pretty.) To address this problem, persistence axioms are usually added that state that if a fluent value is not known to change as the result of an action, then one may assume that it persists.
Both types of nonmonotonicity are generally represented by the use of abnormality predicates. The extensions of these predicates are then minimized; logical models with fewer abnormalities (i.e., fewer violations of default assumptions) are preferred over those with more abnormalities. Thus, for example, a (simplified) axiom stating that if block y is initially clear, then block x will be on it after the action PutOn(x, y) might look like
∀s. ∀x, y. Holds(clear(y), s) ∧ ¬Abnormal(s, PutOn(x, y)) ⊃ Holds(On(x, y), result(PutOn(x, y), s)).
This axiom says that if y is clear, and the PutOn proceeds normally, then x will be on y in the resulting state. Of course, there is the potential for conflict between the statement that fluents normally persist and the statement that actions normally change certain fluents.³ We shall return to this conflict in the next section.
A Problematic Example
Consider the example shown in figure 2. Initially there is water in Tank1. After waiting for some period of time the valve is opened. We then ask whether there is water in Tank2.⁴ We can represent this problem using the fluents Full(Tank1) and Full(Tank2), and the actions Wait and Open (i.e., open the valve). We axiomatize Open as
∀s. Holds(Full(Tank1), s) ∧ ¬Abnormal(s, Open) ⊃ Holds(Full(Tank2), result(Open, s)),
thus allowing for the possibility that the valve may fail to open. Assume for the moment that we do not separate model building from simulation (e.g., that we use Baker's formalism [Baker 1989], which handles many of the problems studied in non-monotonic reasoning, including the original Yale Shooting Problem). Note that initially we have Full(Tank1) and ¬Full(Tank2). The persistence assumption is violated if Full(Tank2) becomes true after Open, but it violates the default that the valve usually works for Full(Tank2) to remain false. Although one or the other violation must occur, a straightforward axiomatization gives no basis for preferring one to the other. Such ambiguities will occur whenever the effects of actions are defeasible. The usual solution to such difficulties is to minimize persistence abnormalities with lower priority than other abnormalities (in effect preferring persistence failures to action failures). This allows Open to be normal and thus to change the value of Full(Tank2). Unfortunately such a prioritization leads to other difficulties. For example, consider what happens if we know that Tank2 remains empty after the valve is opened. The prioritized approach then allows us to conclude (unambiguously) that Tank1 became empty during the Wait action, since that involves only a violation of a persistence assumption (in particular the persistence of Full(Tank1)). Now, while this is possible, it is hardly the only reasonable conclusion (especially since we explicitly axiomatized the possibility that valves can fail). In general, such a prioritization can force any number of persistence assumptions to be dropped in order to avoid introducing a single non-persistence abnormality - the simulation is changed to make the model "work"! Things are better if we separately simulate the case in which the valve works, and the case in which it fails, and then check the resultant predictions against observations.
Figure 2: The tank problem.
²The idea of simulating all possible models and then pruning those that conflict with observations is the heart of the QR system MIMIC [Dvorak and Kuipers 1989].
³In fact, the question of how to resolve conflicts between defaults is one of the fundamental questions in the study of nonmonotonic reasoning.
⁴The astute reader may notice that this example is similar to an unnameable example often studied in NMR.
If we hypothesize that the valve works, we can simulate the expected behavior and predict that Tank2 fills. Similarly, if the valve fails then Tank2 remains empty. In either case, water remains in Tank1 during the Wait. We can then compare these predictions against the observation that Tank2 is empty, and rule out the case in which the valve works. Note that the key difference between this approach and the prioritized approach is that we explicitly separate consideration of each possible set of assumptions about the normality of the actions (which we will refer to as modeling assumptions) from the simulation process (which determines persistence abnormalities). In the next section we show how such a distinction can be formalized; later, we return to the tank problem and show how it is solved by our approach.

Formalizing the Modeling/Simulation Separation

In this section, we outline a formalism for reasoning about change that explicitly separates modeling and simulation. We describe the axiomatization, discuss the distinguished role of observations, and explain how inference is performed.

For concreteness and simplicity, we base our formalism on the situation calculus. However, the basic idea of separating modeling and simulation is a much more general notion, and does not depend on the situation calculus (e.g., QPE [Forbus 1984] and QPC [Crawford, Farquhar & Kuipers 1990] both make such a separation, but allow reasoning about non-instantaneous and overlapping processes, and describe physical devices in terms of continuously-varying parameters).

Axiomatization

Actions: We use MAb(action, time, fluent) to mean that the instance of action action at time time is subject to a modeling abnormality that affects fluent fluent. Thus, if Preconditions(Action, sitn) stands for a formula that expresses the preconditions for action Action in situation sitn, then

∀sitn.
Preconditions(Action, sitn) ∧ ¬MAb(Action, Time(sitn), Fluent) ⊃ Holds(Fluent, result(Action, sitn))

expresses the fact that the normal model of the action is one where Fluent holds after the action (if the preconditions hold before). The particular form of the axiom is relatively unimportant and can be changed to reflect the desired behavior; the key feature is the fact that modeling abnormalities are properties of the time at which a situation occurs, rather than of the situation itself.

Crawford and Etherington 579

Persistence: By contrast, we view persistence abnormalities as properties of situations (i.e., as properties of the underlying fabric of the representation), and hence deal with them during simulation. Following Baker [1989], axioms are added that force the existence of situations corresponding to each possible set of fluent values.5 We use Ab(action, sitn, fluent) to mean that the performance of action in state sitn changes the value of fluent (this is "abnormal" since fluents are normally assumed to persist). Persistence is thus postulated by the usual single axiom:

∀action, sitn, fluent. Holds(fluent, sitn) ∧ ¬Ab(action, sitn, fluent) ⊃ Holds(fluent, result(action, sitn))

Miracles: In order to represent rich worlds with apparently uncaused changes (such as the Stolen Car Problem discussed in the next section), it is useful to have a representation for so-called "miracles": changes that are not the result of any particular action (e.g., the disappearance of one's car during some Wait event) [Lifschitz & Rabinov 1989]. Miracles are a type of modeling abnormality. They are treated in a slightly peculiar way:

∀sitn. Preconditions(Action, sitn) ∧ Miracle(Action, Time(sitn), Fluent) ⊃ Holds(Fluent, result(Action, sitn)).
This axiom can be read as saying that, under certain circumstances (specifically, when Preconditions holds), in an abnormal model (one where a miracle occurs), Fluent will come to hold after the action Action.6 Thus, while action axioms say that actions normally have certain effects, miracle axioms say that miracles abnormally have certain effects: miracles are not the normal state of affairs. It should be noted that miracle axioms differ from typical axioms involving abnormalities in that Miracle appears positively in the antecedent. It might appear that there is thus no way to infer that a miracle has occurred. However, as we shall see when we discuss the stolen car example, models are pruned by observations, and in some cases all models without miracles may be rejected.

Observations

An observation is any formula that mentions a specific (or an existentially-quantified) time or situation. Thus observations include facts about initial, final, and intermediate conditions, statements about what actions occur when, and even conditional statements about actions that behave differently at "landmark" times.

Observations have a distinguished role: they do not participate in simulation, but are critical in pruning the set of possible models. This role is described in the next sub-section.

5[Crawford & Etherington 1992] details these axioms.
6The dependence of miracles on particular actions can, of course, be avoided by quantifying over actions.

Inference

The actual reasoning process proceeds in three stages, which correspond to (1) simulating all possible qualitative models, (2) comparing the results with observations to rule out inappropriate models, and (3) choosing preferred models from among those remaining. These three stages are realized by:

1. Minimizing Ab, varying result but holding MAb and Miracle fixed;
2. Augmenting the resulting theory with the observations; and
3. Minimizing MAb and Miracle in the augmented theory, varying Ab and result.
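The three stages can be sketched procedurally by brute force. This is our own illustration, not the circumscriptive definition: modeling abnormalities are encoded as a finite set of labels, and `simulate` stands in for the first (Ab-minimizing) minimization.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def infer(possible_mabs, simulate, observations):
    # Stage 1: run the simulation for every configuration of modeling
    # assumptions (MAb/Miracle held fixed; persistence handled in simulate).
    results = {mabs: simulate(set(mabs)) for mabs in powerset(possible_mabs)}
    # Stage 2: add the observations, pruning conflicting configurations.
    ok = {mabs: st for mabs, st in results.items()
          if all(st.get(f) == v for f, v in observations.items())}
    # Stage 3: minimize MAb/Miracle: keep set-inclusion-minimal configurations.
    return [set(m) for m in ok
            if not any(set(o) < set(m) for o in ok)]

# Demo on the tank problem: one possible modeling abnormality (a stuck valve).
def tank_sim(mabs):
    full2 = "stuck(Open)" not in mabs      # Open normally fills Tank2
    return {"Full(Tank1)": True, "Full(Tank2)": full2}

print(infer({"stuck(Open)"}, tank_sim, {"Full(Tank2)": False}))
# -> [{'stuck(Open)'}]
```

With no observations, the same call returns the configuration with no abnormalities, so one concludes that Tank2 fills; the observation that Tank2 is empty forces the stuck-valve configuration instead.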
These stages are formally defined using circumscription in [Crawford & Etherington 1992].

Recall from our discussion of persistence axioms that we include Baker's axioms to force the existence of all possible situations. The interior (first) minimization essentially runs the simulation (specifying the result of each action in each situation, while minimizing persistence abnormalities) for each qualitative model. In this minimization, MAb and Miracle are fixed in order to force consideration of all qualitative models. Thus, for each configuration of modeling assumptions (MAbs and Miracles), a set of Ab-minimal logical models is produced. These logical models correspond to the results of simulation under those qualitative modeling assumptions.7 The addition of observations then rules out configurations of modeling assumptions that, when simulated, yield predictions that conflict with actual observed values. Finally, MAb and Miracle are minimized to prefer those qualitative models that violate as few modeling assumptions as possible (while still satisfying the observations).8

In summary, the formalism simulates all possible qualitative models, and then prunes those that do not jibe with the observations. Among the remaining models, those that violate as few modeling assumptions as possible are preferred.

7The result is similar to a total envisionment [Forbus 1984] in QR.
8Ab is varied in this minimization to prevent the persistence violations induced by simulation from unduly influencing the choice of models by making models incomparable; this avoids artifacts induced by differing qualitative models having different simulations. Allowing Ab to vary is not problematic since the functional relationship between modeling and simulation assumptions has already been determined in the first minimization.

The formalism differs from past
work in that simulation (the minimization of Ab) is separated from model building (the minimization of MAb and Miracle). Further, observations are included only after the minimization of Ab (thus preventing them from affecting the simulation).

By taking a model-theoretic view, we can sharpen our understanding of this inference process. Models of the action and persistence axioms can be grouped into equivalence classes according to the sets of MAbs they contain. Within each equivalence class, models can be partially ordered by their sets of Abs. The first minimization then selects those models in each equivalence class that are Ab-minimal, and prunes the rest. The second step prunes models that are incompatible with the observations. Notice that this can result in entire equivalence classes being pruned (even though some members of the class, those that are not Ab-minimal, may have been consistent with the observations). The final step selects, from the models that remain, those minimal in MAb.

Since each step begins with the models remaining after the previous step, and since the observations are not considered until after the Abs for each configuration of MAbs are determined, it is easy to show that the formalism cannot accommodate observations by generating abnormality in the simulation. This is what we set out to guarantee.

The Two Tank Problem

We now return to the two tank problem shown in Figure 2.9 The first minimization results in two relevant Ab-minimal logical models,10 one with no MAbs, in which Open is effective (the valve works, and Tank2 fills), and another with one MAb, in which the valve sticks (and Tank2 remains empty). If there are no observations about the final state, both of these models are considered for minimization of MAb, and we conclude that Tank2 fills.

9The details, which are straightforward, are given in [Crawford & Etherington 1992].
10There are actually several other minimal models that are later eliminated and so do not materially affect the result.
On the other hand, if we know that Tank2 is empty after the Wait and Open, the first model is pruned, but the second model remains: we explain the lack of water in Tank2 by postulating that the valve stuck.

If one adds an axiom permitting the "miraculous" disappearance of water, there will be an additional Ab-minimal model, with one modeling abnormality, in which Tank2 remains empty because Tank1 emptied during the Wait. Given no observations about the final state, this model would be dropped in favor of the model with no modeling abnormalities. However, if one knows that Tank2 remains empty, then the possibility of miraculous emptyings introduces an ambiguity between the model in which the water disappears and the model in which the valve fails. Notice that in neither of these models does the water disappear simply due to a failure of persistence. Something (a miracle in this case) has to happen. One might wish to further prioritize the minimization of modeling abnormalities to stipulate that miracles are much less likely than other types of modeling abnormalities; this presents no difficulties, and would cause one to attribute the lack of water in Tank2 to the failure of the valve rather than to the disappearance of the water in Tank1.

In the so-called "Extended Stolen Car Problem", Fred parks his car and then waits two intervals. We are told that at the end of that time, his car has been stolen. The representation of this problem in our formalism is straightforward. Note, however, that if one's axiomatization does not include the possibility of miracles, then the observation prunes all models (thus indicating that the axiomatization is too restrictive to allow any adequate description of the world to be built).
If miracles are included in the axiomatization then two models will be produced: one in which the car is miraculously stolen during the first wait event, and one in which it is stolen during the second (with no preference between the two).

Now consider what happens if we enrich our representation so we can ask whether Fred is alive after the car is stolen. The provision of a fluent describing Fred's vitality seems a trivial enrichment of the problem. One would expect Alive to simply persist (since there is no reason to believe it changes). Problems arise in Baker's formalism, however, precisely because there is no distinction between persistence abnormalities and modeling abnormalities. In that formalism, abnormalities are functions of situations. Thus the disappearance of the car in a situation in which Fred is alive is a distinct (and incomparable) abnormality from its disappearance in a situation in which he is dead. This results in an extra, unwanted, minimal model, in which Fred dies and then his car is stolen. This leaves Baker agnostic about Fred's health after the two waits.

Since modeling abnormalities are a function of the time at which situations occur, this enriched problem presents no difficulty for our formalism. The abnormality of having the car stolen during the second wait is the same regardless of the other fluent values. Thus the model in which Fred dies and then has his car stolen has two modeling abnormalities, while the model in which he survives to suffer the indignity of losing his car has only one, and hence is preferred. We are thus left with the original pair of models (one in which the car is miraculously stolen during the first wait event, and one in which it is stolen during the second), and we predict unambiguously that Fred remains alive.
M1: alive → ¬alive → ¬alive;  ¬stolen → ¬stolen → stolen   {Miracle(wait, t1, alive), Miracle(wait, t2, stolen)}
M2: alive → alive → alive;  ¬stolen → ¬stolen → stolen   {Miracle(wait, t2, stolen)}
M3: alive → alive → ¬alive;  ¬stolen → ¬stolen → stolen   {Miracle(wait, t2, stolen), Miracle(wait, t2, alive)}
M4: alive → alive → alive;  ¬stolen → stolen → stolen   {Miracle(wait, t1, stolen)}
M5: alive → alive → ¬alive;  ¬stolen → stolen → stolen   {Miracle(wait, t1, stolen), Miracle(wait, t2, alive)}
M6: alive → ¬alive → ¬alive;  ¬stolen → stolen → stolen   {Miracle(wait, t1, stolen), Miracle(wait, t1, alive)}

Figure 3: Six possible models of the extended stolen car problem (before minimization of MAb).

Now consider the case in which we are told that Fred just cannot face the loss of his car: if the car is stolen during the first Wait then Fred is dead after the second Wait. Since this statement concerns a particular time, our formalism treats it as an observation and uses it to prune possible qualitative models. Without loss of generality, we can restrict our attention to models with two or fewer MAbs and ignore differences between models that only affect states unreachable from the given initial state. This leaves eleven possible models. If we simulate each of these, and prune on the observation that the car is stolen after the two Waits, then we are left with the six models shown in Figure 3 (the MAbs for each model are shown in braces). (Note that if we minimize MAb at this point we get M2 and M4, as expected.) We then prune using the observation that if the car is stolen during the first Wait then Fred is dead after the second Wait. This eliminates only M4. We can now minimize MAb, and eliminate M1 and M3 (whose MAbs are supersets of the MAb in M2). We are thus left with three possibilities: the car may be stolen during the second wait, in which case Fred survives, or it may be stolen during the first wait, in which case Fred dies during one of the waits.
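The pruning and minimization just traced can be checked mechanically. In the toy sketch below (our encoding, not the paper's circumscription), a miracle is a (wait-index, fluent) pair that flips that fluent during that wait.

```python
from itertools import chain, combinations

# Possible miracles: flipping a fluent during wait 1 or wait 2.
MIRACLES = [(t, f) for t in (1, 2) for f in ("stolen", "alive")]

def configs(max_size=2):
    # configurations with two or fewer MAbs, as in the text
    return chain.from_iterable(combinations(MIRACLES, r)
                               for r in range(max_size + 1))

def simulate(miracles):
    state = {"stolen": False, "alive": True}
    history = [dict(state)]
    for t in (1, 2):                      # two Wait events
        for f in state:                   # fluents persist unless a miracle
            if (t, f) in miracles:        # flips them during this wait
                state[f] = not state[f]
        history.append(dict(state))
    return history

def consistent(h):
    if not h[2]["stolen"]:                # the car is stolen by the end
        return False
    if h[1]["stolen"] and h[2]["alive"]:  # if stolen in wait 1, Fred is
        return False                      # dead after wait 2
    return True

ok = [set(c) for c in configs() if consistent(simulate(set(c)))]
minimal = [c for c in ok if not any(o < c for o in ok)]
print(minimal)   # three minimal models remain: M2, M5 and M6 from Figure 3
```

The three surviving configurations are exactly the three possibilities described above.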
All of these are reasonable behaviors since the observation only stipulated that if the car is stolen during the first wait then Fred is dead after the second wait; it did not require him to die after the car was stolen!

Conclusions and Future Work

We have shown that preserving qualitative reasoning's distinction between model building and simulation avoids many of the problems faced by current nonmonotonic formalisms for reasoning about change. In particular, the problems (such as unwanted multiple models and unexplained persistence violations) typically caused by "unexpected" observations are avoided in our approach.

The motivation for our work is much like that for causal minimization [Lifschitz 1987; Lin & Shoham 1991]: no change should occur that is not explained by the model. Further, the separation of simulation from observations can be justified on causal grounds: observations about the system should not be allowed to cause violations of persistence (rather, they should be explained by an appropriate model). However, our approach appears to have no difficulties with ramifications (which cause problems for causal minimization).

In retrospect, we can see why other earlier approaches worked on some problems and failed on others. Baker's formalism [Baker 1989] was the first to adequately axiomatize simulation (by axiomatizing the existence of all situations and then "simulating" by circumscriptively determining the connections between situations through result links). However, Baker did not recognize model building as a separate problem, and thus could not handle problems like the extended stolen car problem. Similarly, Morgenstern and Stein [1988] ignore the simulation problem by assuming a complete description of the effects of each action, but handle part of the model-building problem by showing how one can derive minimal sets of actions necessary to explain a given outcome.
The Yale Shooting Problem and its relatives have never been of much interest to the qualitative reasoning community. This is understandable since the model building/simulation distinction prevents them from occurring (in the normal model, waiting has no effect and the target dies, and in one possible abnormal model waiting unloads guns and he lives). However, the approach presented here provides a formal characterization of current QR practice, and hence a basis for formally studying extensions to that practice that address several outstanding QR problems.

First, dealing with region transitions (points at which a qualitative model ceases to describe a system and a new model must be built, e.g., because a pipe breaks or a tank overflows) has recently emerged as a tricky problem in QR [Forbus 1989; Sandewall, 1989; Rayner, 1991]. The semantics of region transitions is poorly understood (especially since, in commonsense reasoning, region transitions often involve instantaneous changes in the value of otherwise smooth functions), and may well require a formal theory of change.

Second, the paradigm of building the entire state transition graph (showing each possible state and the effect of each possible action thereon) roughly corresponds to the QR notion of a total envisionment [Forbus 1984]. In the interest of computational efficiency, one might want to modify the formalism so that the initial fluent values, and statements about which actions occur when, are given to the simulator. This would allow the simulator to build only a part of the state transition graph. Such an approach would correspond to the QR idea of an incremental envisionment [Crawford, Farquhar & Kuipers 1990]. It is generally believed that the only differences between total and incremental envisionments are computational, but the exact conditions under which the approaches are semantically equivalent have not been studied.
Further, showing that algorithms based on incremental envisionments produce the same simulations as algorithms based on total envisionments has proved difficult [Forbus 1992]. This is especially true in domains, such as qualitative physics, in which the existence of some conceptual objects may be conditioned on modeling assumptions (i.e., objects may exist in some qualitative models but not in others). Our formalism could provide a formal basis for studying such equivalences.

Finally, there is a debate in the qualitative reasoning community concerning whether approaches that separate model building and simulation need to be augmented with explicit representations of causality. This work indicates that one can get surprisingly far in a formalism without causality, but the role of and necessity for causal information remain open questions. One interesting avenue for further work is to study whether there is a fundamental difference between formalisms based on the modeling/simulation distinction and formalisms that explicitly axiomatize causality. Our experience with this formalism suggests that the separation of model building and simulation may capture many aspects of causation.

Acknowledgments

We would like to thank Ben Kuipers for several fruitful discussions on this topic, and for comments on an earlier draft of this paper.

References

Baker, A.B. (1989). A simple solution to the Yale Shooting Problem. Proc. First Int'l Conf. on Principles of Knowledge Representation and Reasoning. pp. 11-20.
Crawford, J.M. and Etherington, D.W. (1992). Formalizing Reasoning About Change: A Qualitative Reasoning Approach. In preparation.
Crawford, J., Farquhar, A., and Kuipers, B. (1990). QPC: A compiler from physical models into Qualitative Differential Equations. Proc. Eighth Nat'l Conf. on Artificial Intelligence. pp. 365-372.
Dvorak, D. and Kuipers, B.J. (1989). Model-based monitoring of dynamic systems. Proc. Eleventh Int'l Joint Conf.
on Artificial Intelligence. pp. 1238-1243.
Forbus, K.D. (1984). Qualitative process theory. Artificial Intelligence, 24:85-168.
Forbus, K.D. (1989). Introducing actions into qualitative simulation. Proc. Eleventh Int'l Joint Conf. on Artificial Intelligence. pp. 1273-1278.
Forbus, K.D. (1992). Personal communication.
Ginsberg, M.L. (ed.) (1987). Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA.
Kautz, H.A. (1986). The logic of persistence. Proc. Fifth Nat'l Conf. on Artificial Intelligence. pp. 401-405.
Kuipers, B.J. (1986). Qualitative simulation. Artificial Intelligence, 29:289-338.
Lifschitz, V. (1987). Formal theories of action. In Matthew L. Ginsberg, ed., Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA. pp. 410-432.
Lifschitz, V., and Rabinov, A. (1989). Things that change by themselves. Proc. Eleventh Int'l Joint Conf. on Artificial Intelligence. pp. 864-867.
Lin, F. and Shoham, Y. (1991). Provably correct theories of action (preliminary report). Proc. Ninth Nat'l Conf. on Artificial Intelligence. pp. 349-354.
McCarthy, J., and Hayes, P.J. (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer and D. Michie (eds.) Machine Intelligence 4. Edinburgh University Press, pp. 462-502.
Morgenstern, L., and Stein, L.A. (1988). Why things go wrong: A formal theory of causal reasoning. Proc. Seventh Nat'l Conf. on Artificial Intelligence. pp. 518-523.
Rayner, M. (1991). On the applicability of nonmonotonic logic to formal reasoning in continuous time. Artificial Intelligence, 49:345-360.
Sandewall, E. (1989). Combining logic and differential equations for describing real-world systems. Proc. First Int'l Conf. on Principles of Knowledge Representation and Reasoning. pp. 412-420.
Weld, D.S., and de Kleer, J. (1990). Readings in Qualitative Reasoning About Physical Systems. Morgan Kaufmann, Los Altos, CA.
Alvaro del Val
Robotics Lab
Stanford University
Stanford, CA 94305
delval@scottie.stanford.edu

Abstract

Two areas that have attracted much interest in recent years, belief update and reasoning about action, have so far been largely disjoint. Indeed, at first glance there appears to be little connection between them. In this paper we argue that this first impression is wrong; specifically, we show that the postulates for belief update recently proposed in [Katsuno and Mendelzon, 1991] can in fact be analytically derived, using the formal theory of action proposed in [Lin and Shoham, 1991].

Introduction

In this paper we tie together theories of belief change and theories of action, two areas that have attracted much interest in recent years. Theories of belief change address the following general question: Given an initial database Γ and a new piece of information μ to be incorporated into it, what should the new database be? Initial work concentrated on normative theories of belief revision, postulating a number of conditions that a 'rational' belief-revision operator should satisfy (cf. [Gardenfors, 1988; Alchourron et al., 1985]). These postulates aim to capture stability properties, eliminating unnecessary perturbations to the original database. For example, one postulate states that if μ is consistent with Γ then the new database is simply the addition of μ to Γ.

It has recently been proposed that the operation of incorporating a new piece of information into an existing database might take different meanings. In particular, it has been suggested to distinguish between belief revision and belief update; loosely speaking, the former says that the beliefs may have been wrong and in need of revision, whereas the latter says that the beliefs were correct, but the world has in the meanwhile evolved and the beliefs must be updated.
Yoav Shoham
Computer Science Department
Stanford University
Stanford, CA 94305
shoham@cs.stanford.edu

[Katsuno and Mendelzon, 1991] proposed a set of belief-update postulates, which are similar to, but distinct from, the belief-revision postulates; they also provide the model theory for these postulates, in the form of a representation theorem. We will describe this work in more detail later in the paper.

In a largely independent line of enquiry, researchers have been interested in formal theories of time and action. Of particular interest have been theories of nonmonotonic temporal reasoning, and associated problems such as the frame, qualification and ramification problems. The essential issue in nonmonotonic temporal reasoning is that fully specifying the conditions needed to make predictions (or other temporal inferences) might be impossible to do explicitly. For example, one would not want to have to explicitly state that after starting the engine of a yellow car, the car remains yellow; that should follow 'by default.' Research in this area consists primarily of formal methods for achieving these default conclusions. For an overview of the literature on nonmonotonic temporal reasoning, cf. [Shoham and Baker, 1992].

As we have said, these two research areas have been largely disjoint. In this paper we tie them together, and in particular show that the KM-postulates need not be postulated at all, but can instead be derived analytically. The basic idea is simple, we believe, and is as follows. Although update is supposed to reflect changes that have taken place in the world over time, the update problem (like that of belief revision) is formulated using a language incorporating no model of time or change. The sentences describe a single state of the world, a snapshot of it at a given situation. There is a clear computational advantage to this, since one does not need to store the whole history of the domain under consideration.
Time is only implicit in the succession of theories resulting from a series of updates, but old theories are simply discarded. However, the price of this conciseness is impoverished semantical content, in which the rules of update must be postulated from outside the theory. In order to recover the lost information, in this paper we translate the update problem into a richer language, which explicates the temporal information: The initial database is taken to describe a particular situation, and the update formula is taken to describe a particular action. A formal theory of action is then used to infer facts about the result of taking the particular action in the particular situation; the formal theory of action we will employ is that proposed in [Lin and Shoham, 1991], which is described later. Finally, anything inferred about the resulting situation can be backtranslated to the timeless framework of belief update. In this way the KM-postulates can be proved.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

This approach relies crucially on the meaning of update, and is not applicable in any straightforward way to belief revision. We note, however, that [Grahne et al., 1992] have recently proposed a connection between revision and update, which suggests that the revision postulates can nonetheless be derived; we return to this topic in the summary section.

The structure of this article is as follows. Section 2 briefly reviews the results of [Katsuno and Mendelzon, 1991] on belief update. Section 3 does the same for the theory of action proposed in [Lin and Shoham, 1991]. The main contribution of the paper lies in section 4, where the belief update problem is encoded as a theory of action, and the KM-postulates are derived as theorems.
In section 5 we consider the case of update in the presence of "integrity constraints", and in section 6 we provide a characterization of the propositional update operators determined by our construction. We discuss related work and open problems in the concluding section.

Update in propositional languages: Review

Katsuno and Mendelzon proposed eight postulates that should be satisfied by update operators. Let ∘ be an update operator for a propositional language with a finite number of propositional variables. The KM-postulates are the following:

(U1) ψ ∘ μ implies μ.
(U2) If ψ implies μ then ψ ∘ μ is equivalent to ψ.
(U3) If ψ and μ are satisfiable then ψ ∘ μ is also satisfiable.
(U4) If ⊨ ψ1 ≡ ψ2 and ⊨ μ1 ≡ μ2 then ψ1 ∘ μ1 is equivalent to ψ2 ∘ μ2.
(U5) (ψ ∘ μ) ∧ φ implies ψ ∘ (μ ∧ φ).
(U6) If ψ ∘ μ1 implies μ2 and ψ ∘ μ2 implies μ1 then ψ ∘ μ1 is equivalent to ψ ∘ μ2.
(U7) If ψ is complete then (ψ ∘ μ1) ∧ (ψ ∘ μ2) implies ψ ∘ (μ1 ∨ μ2).
(U8) (ψ1 ∨ ψ2) ∘ μ is equivalent to (ψ1 ∘ μ) ∨ (ψ2 ∘ μ).

Update operators satisfying these postulates can be characterized in terms of the following representation theorem. An update assignment is a function which assigns to each interpretation I a relation ≤_I over the set of interpretations of the language. We say that this assignment is faithful iff for any interpretation J, if I ≠ J then I ≤_I J and J ≰_I I. In what follows, we use Min(S, ≤), for any set S and preorder ≤ over S, to denote the set of elements of S that are minimal under ≤.

Theorem 1 (Katsuno and Mendelzon, 1990) An update operator ∘ satisfies conditions (U1)-(U8) iff there exists a faithful assignment that maps each interpretation to a partial preorder ≤_I such that:

Mods(ψ ∘ μ) = ⋃_{I ∈ Mods(ψ)} Min(Mods(μ), ≤_I).

Provably correct theories of action: Review

As mentioned in the introduction, in tying together the theory of belief update with theories of action, we will use a particular theory of action, which has been proposed in [Lin and Shoham, 1991].
Besides demonstrating that their formulation yields the desired results in particular examples that had been discussed in the literature previously, Lin and Shoham were the first to offer a formal justification for a theory of action. Specifically, they defined a formal criterion for the adequacy of theories of action (called "epistemological completeness"), and showed their formulation adequate relative to this criterion. We do not have the space to explain this criterion further here, but mention it by way of justifying our selection of a theory of action. In the remainder of the section we review the theory.

We use the situation calculus formalism. To be precise, our language S is a three-sorted predicate calculus language. The three sorts partition the terms of the language into situation, action and (propositional) fluent terms. In addition, there is a binary function result, whose first argument is of action sort and whose second argument and value are of situation sort; and a binary predicate holds, such that its first argument is of fluent sort and its second argument is of situation sort. The semantics of the language is the standard one for sorted predicate calculus.

Lin and Shoham consider a class of causal theories for deterministic actions, defined in the standard situation calculus language. Formally, a causal theory T for action A with the domain constraint C and the direct effects P1, ..., Pn under preconditions R1, ..., Rn is the set of causal axioms:

∀s. Ri(s) ⊃ holds(Pi, result(A, s))

and the constraint involving no situation terms other than s:

∀s. C(s)

The causal theory T for an action A tells us what changes as a result of action A. It does not tell us what does not change; for that we need either a set of frame axioms, or some way of non-monotonically specifying them. We describe the latter next.
We fix a set of fluent terms 𝒫, and, following [Lifschitz, 1990], use a predicate frame, whose extension is exactly the set of fluents denoted by some fluent term in 𝒫.

del Val and Shoham 585

We also use the predicate ab(p; s; a) as an abbreviation for:

frame(p) ∧ (holds(p, s) ≡ ¬holds(p, result(a, s))).

We assume frame to be explicitly defined by means of some axiom (F). Since we are going to use circumscription, we need unique names axioms for the set of fluents 𝒫 and for situations. We will denote the set of unique names axioms for fluents by (N1). For situations, we use the unique names axiom (N2):

∀a, s. earlier(s, result(a, s)) ∧
∀s, s', s''. earlier(s, s') ∧ earlier(s', s'') ⊃ earlier(s, s'') ∧
∀s, s'. earlier(s, s') ⊃ s ≠ s'.

In order to apply the circumscription policy, we consider the language S', which is the extension of S with a new predicate symbol holds', with the same sorts as holds for its arguments. Given a causal theory T, let W(s) be an abbreviation for the formula:

(∀p. holds(p, s) ≡ holds'(p, s)) ∧ (⋀T) ∧ N1 ∧ N2 ∧ F.

Finally, Comp(T) is an abbreviation for:

∀s, a. Circum(W(s); ab(p; s; a); holds),

where Circum(W(s); ab(p; s; a); holds) stands for the circumscription of ab in W with holds allowed to vary. Intuitively, what this circumscription policy does is to minimize changes one situation at a time. For any situation s, the minimization will allow holds to vary at any other point except at s, since holds' is kept fixed.

As a very simple example, suppose we have an action toggle, whose effect is to change the value of a fluent P1, formulated in a theory T0 with no constraints and the single causal axiom:

∀s. holds(P1, s) ≡ ¬holds(P1, result(toggle, s)).

Then Comp(T0) entails:

∀s, p. frame(p) ∧ p ≠ P1 ⊃ holds(p, s) ≡ holds(p, result(toggle, s)),

i.e. toggle causes no change in the value of any (frame) fluent other than P1.
The update problem in situation calculus

The update problem can be formulated in situation calculus as follows. The initial database is taken to describe some particular situation S. The update formula is taken to describe the occurrence of a special action, denoted by A_μ^S, whose intuitive reading is "that action which when taken in S causes μ." The updated database is taken to describe the situation result(A_μ^S, S).

As an illustration of our approach, suppose we are given an initial database (p ∨ (q ∧ r)), which we want to update with the formula ¬r. Using for example Winslett's "PMA" update operator (defined later), the updated database is then ((p ∨ q) ∧ ¬r). Our approach to obtain this result is to translate the database into the situation calculus formula holds(or(P, and(Q, R)), S), for some situation S, and to compute the set of consequences about the situation result(A_¬r^S, S) entailed by the circumscription of a theory similar to the one described in the previous section, containing the causal axiom holds(not(R), result(A_¬r^S, S)).

For simplicity, we consider only the finitary case, that is, our initial propositional language contains only a finite number of variables. In addition, we will assume that the set of frame fluents used below is finite.

We will be using the situation calculus, defined in the previous section. The situation calculus allows arbitrary terms. We will first fix the terms that we will be using in order to express the initial database and the update formula in situation calculus. Then we will specify the translation process. Finally, we encode the update problem as a causal theory.

We first introduce the sets of situation and action terms. For any situation term S and any satisfiable formula μ ∈ L, we will introduce an action constant A_μ^S, with the intuitive meaning described before.
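The PMA example above can be checked mechanically. Below is a small Python sketch of the PMA operator over the three atoms p, q, r; it is our own illustration, not part of the paper's formal machinery, and the helper names `models` and `pma_update` are invented. Interpretations are represented as frozensets of the atoms made true.

```python
from itertools import combinations

ATOMS = ("p", "q", "r")

def models(pred):
    """Interpretations over ATOMS (frozensets of true atoms) satisfying pred."""
    all_i = [frozenset(c) for n in range(len(ATOMS) + 1)
             for c in combinations(ATOMS, n)]
    return {i for i in all_i if pred(i)}

def pma_update(db_models, mu_models):
    """Winslett's PMA: for each model I of the database, keep those models of
    the update formula whose set of changed atoms (I ^ J) is minimal under
    set inclusion, then take the union over all I."""
    out = set()
    for i in db_models:
        out |= {j for j in mu_models
                if not any((i ^ k) < (i ^ j) for k in mu_models)}
    return out

# The example from the text: update p v (q & r) with -r.
db = models(lambda i: "p" in i or ("q" in i and "r" in i))
updated = pma_update(db, models(lambda i: "r" not in i))
```

Running this yields exactly the models of (p ∨ q) ∧ ¬r, as claimed: for instance, the database model {q, r} is mapped to its closest ¬r-model {q}.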
Situation terms consist of the constant S0, intuitively denoting the initial situation, and of terms of the form result(A_μ^{S'}, S') for any action term A_μ^{S'} and situation term S'.

The set of fluent terms is obtained quite directly from the propositional language. Consider a propositional language L with a set of primitive propositional symbols P_L, and closed under negation (¬) and disjunction (∨). The set 𝒫 of fluent terms of S is defined as follows: p is a (primitive) fluent term if p ∈ P_L. If P is a fluent term, then so is not(P). If P and Q are fluent terms, then so is or(P, Q). Non-primitive fluent terms are required to satisfy the following axioms:

∀p, s. holds(not(p), s) ≡ ¬holds(p, s);
∀p, q, s. holds(or(p, q), s) ≡ holds(p, s) ∨ holds(q, s).

Now that the terms of the language are fixed, we can translate the database into situation calculus. To translate a propositional formula ψ, we have to think of it as holding at a particular point of time or situation. This is quite natural, since the database is subject to change through updates, but it forces us to make a choice about what is the situation in which ψ holds, since this needs to be expressed in S. Thus, rather than defining "the" translation of a formula ψ into the language S, we define the translation of ψ at a situation S, denoted by ψ^S. The easiest way to do it is to first map ψ into a fluent term ψ̂ as follows:

p̂ = p if p ∈ P_L;
(¬ψ)̂ = not(ψ̂);
(ψ ∨ φ)̂ = or(ψ̂, φ̂).

586 Representation and Reasoning: Action and Change

We can now define ψ^S, for any formula ψ ∈ L and any situation term S in S, simply as: ψ^S = holds(ψ̂, S).

The causal theory for the actions we have introduced is given by the axiom schema:

holds(μ̂, result(A_μ^S, S)),

where μ̂ is the fluent term corresponding to a satisfiable propositional formula μ, and A_μ^S and S are closed terms of the appropriate sorts¹.

We will now apply the policy of the previous section. Fix first the fluent terms P for which frame(P) holds.
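The hat-translation and the definition of ψ^S are purely syntactic, and can be sketched as a short recursion. The tuple encoding of formulas and the function names below (`hat`, `at`) are our own assumptions, not the paper's notation.

```python
# Formulas and fluent terms as nested tuples: an atom is a string,
# negation is ("not", f) and disjunction is ("or", f, g).
def hat(f):
    """Map a propositional formula to the corresponding fluent term."""
    if isinstance(f, str):          # primitive symbol -> primitive fluent
        return f
    if f[0] == "not":
        return ("not", hat(f[1]))
    if f[0] == "or":
        return ("or", hat(f[1]), hat(f[2]))
    raise ValueError("unknown connective: %r" % (f[0],))

def at(f, s):
    """The translation of formula f at situation term s: holds(hat(f), s)."""
    return ("holds", hat(f), s)
```

Because primitive symbols translate to themselves and the connectives translate one-to-one, the fluent term mirrors the formula's structure exactly.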
The following results assume that the set of frame fluents is kept fixed and finite² but, unless explicitly noted, are independent of what choice we make in this regard. Second, let (N1') be the unique names axiom for the frame fluents. For situations, the matter is slightly more complex than before. (N2') stands for the conjunction of (N2) with:

∀s, s', A_μ^S, A_μ'^{S'}. (result(A_μ^S, s) = result(A_μ'^{S'}, s')) ≡
(s = s' ∧ ∀s''. holds(μ̂, s'') ≡ holds(μ̂', s'')).

Intuitively, this makes any two situations distinct except in the case in which they are the result of performing "equivalent" actions in identical situations. (The recursion to determine whether two situations are identical bottoms out in S0.)

Let W be as described earlier³, replacing (N1) and (N2) by (N1') and (N2'), respectively, and let Comp(T) be as before.

The circumscription of ab for each situation and action results in a set of strict partial orderings <_{ab,S',A_μ^{S'}} over the interpretations of S, such that I <_{ab,S',A_μ^{S'}} J iff I and J agree on everything except holds and ab, and the extension of ab(p)(s/S'; a/A_μ^{S'}) in I is a proper subset of its extension in J.

In order to keep the correspondence with the propositional case as close as possible, however, we choose to characterize the models of Comp(T) in terms of a different set of orderings. Formally, for any situation term S we say that M_S is a state of situation S iff there is some P' ⊆ 𝒫 such that M_S = {holds(P, S) | P ∈ P'} ∪ {¬holds(P, S) | P ∈ 𝒫 − P'}

¹Nothing in our results depends on actions being parametrized by situations. But as presented here, if S and S' denote different situations, there is no axiom characterizing the effect of A_μ^S in situation S', resulting in an "update" which leaves the database unchanged.
²This restriction, as well as the restriction to a finitary propositional language, can be lifted by requiring that the circumscriptive ordering resulting from choosing an infinite set of frame fluents be smooth in the sense of [Lehmann and Magidor, 1990]. Analogous results can then be proved for an infinitary version of the postulates. See [del Val, 1992].

³Taken as a set rather than as a conjunction, since T now consists of an infinite number of axioms.

and M_S is consistent. Intuitively, a state of a given situation is a complete specification of the values of all fluents at that situation.

We will use orderings over states rather than over interpretations. The intuition here is that the result of propositional update should only depend on the theory and the update formula, which in our framework means that it should only depend on the immediately preceding states and the action corresponding to the update formula.

Definition 1 Let I_{result(A_μ^{S'},S)} and J_{result(A_μ^{S'},S)} be two states of the situation result(A_μ^{S'}, S), and let M_S be a state of the situation S. I_{result(A_μ^{S'},S)} <_{M_S} J_{result(A_μ^{S'},S)} iff for any interpretation J, if J ∈ Mods(W(S)), J(result(A_μ^{S'}, S)) = J_{result(A_μ^{S'},S)} and J(S) = M_S, then there exists an interpretation I ∈ Mods(W(S)) such that I <_{ab,S',A_μ^{S'}} J and I(result(A_μ^{S'}, S)) = I_{result(A_μ^{S'},S)}.

In what follows, for any set of formulas Γ (in either language) we will use the notation Mods(Γ) for the set of models of Γ. Similarly, let Γ_S ⊆ S be a set of situation calculus formulas, with S as the only situation term occurring in Γ_S: we use States(Γ_S) for the set of states M_S of S such that M_S ⊨ φ_S for every φ_S ∈ Γ_S. Intuitively, the set of states of a set of situation calculus formulas containing a single situation term corresponds to the set of models of the translation of these formulas into L. The following lemma tells us the sense in which these orderings capture the result of the circumscription.
Lemma 2 For every M ∈ Mods(∀s. W(s)), M ∈ Mods(Comp(T)) iff for every S' and A_μ^{S'},

M(result(A_μ^{S'}, S')) ∈ Min(States(holds(μ̂, result(A_μ^{S'}, S'))), ≤_{M(S')}).

Suppose now that we are given an initial propositional database ψ. Let ψ^{S0} be the translation of ψ into situation calculus as holding at S0. We can then take the result of updating ψ with μ as the set of consequences about the situation result(A_μ^{S0}, S0) entailed by Comp(T) ∪ ψ^{S0}. To capture this, let

R_{ψ^{S0},A_μ^{S0}} = {φ | Comp(T) ∪ ψ^{S0} ⊨ φ and φ contains result(A_μ^{S0}, S0) as only situation term}.

The next lemma draws us very close to the representation theorem of Katsuno and Mendelzon.

Lemma 3 States(R_{ψ^{S0},A_μ^{S0}}) = ⋃_{M_S ∈ States(ψ^{S0})} Min(States(holds(μ̂, result(A_μ^{S0}, S0))), ≤_{M_S}).

As in the representation theorem for propositional update, this can be seen as selecting for each state of the original theory (for each model, in the propositional case) the set of closest states (models) satisfying the update formula.

Given this result and our translation, it is easy to see how we can derive the KM-postulates. For some fixed choice for the predicate frame, define the update operator ⋄ as follows, for any formula μ and database ψ:

Definition 2 ψ ⋄ μ ⊨ φ iff holds(φ̂, result(A_μ^{S0}, S0)) ∈ R_{ψ^{S0},A_μ^{S0}}.

We are now ready for the main results of this paper.

Theorem 4 The update operator ⋄ satisfies postulates (U1) and (U3)-(U8).

In order to satisfy (U2) we need a further condition.

Definition 3 (Frame completeness condition) A choice of frame fluents is complete iff for any situation s and states R and T of s consistent with W(s), if R and T agree on all frame fluents then R = T.

Intuitively, the frame completeness condition ensures that the values of the frame fluents are sufficient to completely characterize a state, and plays the same role as the faithfulness condition of section 2.

Theorem 5 The update operator ⋄ satisfies (U2) if and only if it satisfies the frame completeness condition.
Integrity constraints

In Lin and Shoham's proposal for reasoning about action, the ramification problem (roughly, the problem of specifying both direct and indirect effects of actions) is solved to a great extent by means of the constraint ∀s. C(s). Indirect effects of actions are simply those that follow from the direct effects by using this constraint and the frame axioms (or their non-monotonic equivalent).

Similarly, constraints can play a crucial role in the update problem. There is often a set of formulas which play the role of "integrity constraints", to use a term common in the database literature; these are formulas which the database should always satisfy. [Katsuno and Mendelzon, 1989] postulate, in the context of AGM revision rather than KM-update, that the revision operator ⋄_γ under constraints γ should be defined in terms of a standard revision operator ⋄ as: ψ ⋄_γ μ ≡ ψ ⋄ (μ ∧ γ). Our framework allows us, once again, to prove that the analogous approach for update under constraints is correct; there is no need to "postulate" it.

Constraints are easily handled in our framework. Suppose we are given, in addition to the initial database ψ, a constraint γ (we assume ψ ⊨ γ). Remove from the language all terms A_μ^S such that μ ∧ γ is unsatisfiable, and let T' be the theory obtained by restricting T to the new language and adding the constraint ∀s. γ^s. Let W' be the formula obtained by replacing T by T' in W, and similarly for Comp(T'). Let ≤'_{M_S} be the ordering obtained by replacing W(S) by W'(S) in definition 1. Finally, let R'_{ψ^{S0},γ,A_μ^{S0}} be the result of replacing Comp(T) by Comp(T') in the definition of R_{ψ^{S0},A_μ^{S0}}, and define the update operator ⋄_γ under constraints γ analogously. Then all the lemmas of section 4 still hold, after making the appropriate substitutions. We will therefore not repeat them here. Rather, we remark that satisfaction of the constraints is already built into the definition of the orderings ≤'_{M_S}.
Lemma 6 Min(States(holds(μ̂, result(A_μ^S, S))), ≤'_{M_S}) = Min(States(holds((μ ∧ γ)̂, result(A_μ^S, S))), ≤'_{M_S}).

As a result, lemma 3 can also be written as:

Lemma 7 States(R'_{ψ^{S0},γ,A_μ^{S0}}) = ⋃_{M_S ∈ States(ψ^{S0})} Min(States(holds((μ ∧ γ)̂, result(A_μ^{S0}, S0))), ≤'_{M_S}).

Corollary 8 ψ ⋄_γ μ ≡ ψ ⋄ (μ ∧ γ).

Notice however that constraints add an additional degree of freedom in the design of update operators satisfying (U2), by making it easier for the frame completeness condition to be satisfied.

Propositional update operators

The constraints imposed on update operators by our construction seem to be tighter than the KM-postulates, and thus it appears that the converse of theorem 4 does not hold. However, we can characterize the set of propositional update operators determined by our construction as follows.

Definition 4 A propositional update operator ⋄_γ (under constraints γ) is "action based" iff there exists a set Γ ⊆ L and an ordering ≤_M over interpretations for each interpretation M such that:

1. ψ ⋄_γ μ ⊨ σ iff ⋃_{M ∈ Mods(ψ)} Min(Mods(μ ∧ γ), ≤_M) ⊆ Mods(σ).
2. Γ is a set of "frame" formulas satisfying: if I ⊨ γ, J ⊨ γ, and for every θ ∈ Γ, I ⊨ θ iff J ⊨ θ, then I = J.
3. I ≤_M J iff Diff_Γ(I, M) ⊆ Diff_Γ(J, M), where Diff_Γ(I, M) = {θ ∈ Γ | I ⊨ θ iff M ⊭ θ}.

Theorem 9 The class of action-based update operators and the class of operators definable with our construction and satisfying the frame completeness condition are identical.

In the special case in which frame(p̂) iff p is a primitive propositional symbol, we have Winslett's (non-prioritized) "PMA" update operator. We do not consider in this paper the introduction of "priorities" in the definition of update operators, which would correspond in our framework to using prioritized circumscription. For epistemologically complete theories of action, prioritization has no role to play.
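Definition 4 generalizes the PMA by letting an arbitrary set Γ of frame formulas, rather than the primitive atoms, induce the orderings. A minimal Python sketch of clause 1 follows; the encoding is ours (Γ is given as named satisfaction tests, and `diff_gamma` and `action_based_update` are invented names), intended only to make the Diff_Γ ordering concrete.

```python
from itertools import combinations

ATOMS = ("p", "q")
WORLD = [frozenset(c) for n in range(len(ATOMS) + 1)
         for c in combinations(ATOMS, n)]

def diff_gamma(gamma, i, m):
    """Names of the frame formulas in gamma on which i and m disagree."""
    return frozenset(name for name, sat in gamma.items() if sat(i) != sat(m))

def action_based_update(gamma, db_models, mu_models):
    """Clause 1 of Definition 4: per database model M, keep the update models
    J whose disagreement set Diff_gamma(J, M) is minimal under inclusion."""
    out = set()
    for m in db_models:
        out |= {j for j in mu_models
                if not any(diff_gamma(gamma, k, m) < diff_gamma(gamma, j, m)
                           for k in mu_models)}
    return out

# With gamma = the primitive atoms, this is exactly the (unconstrained) PMA.
GAMMA = {"p": lambda i: "p" in i, "q": lambda i: "q" in i}
```

For example, updating the database p ∧ q with ¬p under GAMMA keeps q: only the update model that changes nothing besides p is inclusion-minimal.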
For non-epistemologically complete theories, however, it might be desirable to use it, to capture the information that some changes are more likely than others. We expect our results to extend easily to this case, including the "generalized prioritized circumscription" introduced in [Grosof, 1991]. (See [del Val, 1992] for details.)

Related work and conclusions

By analytically deriving the KM-postulates for update from a rigorous theory of action, we have linked two previously unrelated fields of research and provided a foundation for the KM update proposal. A formal connection between non-monotonic reasoning and update was first studied in [Winslett, 1989], for Winslett's update operator, but with no relation to theories of action. She has also suggested in [Winslett, 1988] to use update for reasoning about action; our results can be seen as providing formal support to this proposal.

Independently, [Reiter, 1992] has proposed an account of database update in terms of his recent proposal for solving the frame problem in [Reiter, 1991]. Though his proposal appears to be somewhat more limited in the type of updates that it allows and might run into limitations in dealing with the ramification problem, the connections between his work and ours are still unclear.

It would be desirable to obtain similar results for AGM revision, and to establish the connections between AGM revision and KM update. The work reported in [Grahne et al., 1992] is an important step in the direction of a solution to the second problem, but the first one remains open.

There are other issues that suggest themselves for further work. For example, updates can be seen as providing the expected changes in the domain as a result of a change. How should we deal with the case in which these expectations turn out to be wrong? Should we use AGM revision, or is there a more promising approach based on research in reasoning about action?
Theories of action provide an excellent framework in which to deal in a principled way with the persistence of facts, a topic which lies at the heart of the update problem. Of special interest in this context is the question of the persistence of derived information. The definition of parallel updates on the basis of a treatment of parallel actions is another open problem⁴. Finally, the framework of propositional update loses some of the information encoded by the non-monotonic approach for reasoning about action, especially information about the past. An interesting issue is whether hybrid representations could be defined to benefit from this information without incurring the full representational cost of keeping the whole history of the database encoded in situation calculus.

Acknowledgements

We thank John McCarthy and Ray Reiter for useful comments on previous versions of this paper.

⁴Cf. [Lin and Shoham, 1992].

References

Alchourrón, Carlos E.; Gärdenfors, Peter; and Makinson, David 1985. On the logic of theory change: Partial meet functions for contraction and revision. Journal of Symbolic Logic 50:510-530.

del Val, Alvaro 1992. Belief Revision and Update. Ph.D. Dissertation, Stanford University. In preparation.

Gärdenfors, Peter 1988. Knowledge in Flux. The MIT Press.

Grahne, Gösta; Mendelzon, Alberto; and Reiter, Raymond 1992. On the semantics of belief revision systems. In Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge.

Grosof, Benjamin 1991. Generalizing prioritization. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning.

Katsuno, Hirofumi and Mendelzon, Alberto O. 1989. A unified view of propositional knowledge base updates. In Proceedings of the 11th International Joint Conference on Artificial Intelligence.

Katsuno, Hirofumi and Mendelzon, Alberto O. 1991. On the difference between updating a knowledge database and revising it.
In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning.

Lehmann, Daniel and Magidor, Menachem 1990. What does a conditional knowledge base entail? Technical Report TR-90-10, CSD, Hebrew University.

Lifschitz, Vladimir 1990. Frames in the space of situations. Artificial Intelligence 46:365-376.

Lin, Fangzhen and Shoham, Yoav 1991. Provably correct theories of action (preliminary report). In Proceedings of the Tenth Conference of the AAAI.

Lin, Fangzhen and Shoham, Yoav 1992. Concurrent actions in situation calculus. In Proceedings of the AAAI-92.

Reiter, Raymond 1991. The frame problem in the situation calculus. In Working Notes of the AAAI Spring Symposium.

Reiter, Raymond 1992. On formalizing database updates: Preliminary report. In Proceedings of the 3rd International Conference on Extending Database Technology.

Shoham, Yoav and Baker, Andrew 1992. Non-monotonic temporal reasoning. In Gabbay, D., editor 1992, Handbook of Artificial Intelligence and Logic Programming. Forthcoming.

Winslett, Marianne 1988. Reasoning about action using a possible models approach. In Proceedings of the 7th Conference of the AAAI.

Winslett, Marianne 1989. Sometimes updates are circumscription. In Proceedings of the 11th International Joint Conference on Artificial Intelligence.
Concurrent Actions in the Situation Calculus

Fangzhen Lin and Yoav Shoham
Department of Computer Science
Stanford University
Stanford, CA 94305

Abstract

We propose a representation of concurrent actions; rather than invent a new formalism, we model them within the standard situation calculus by introducing the notions of global actions and primitive actions, whose relationship is analogous to that between situations and fluents. The result is a framework in which situations and actions play quite symmetric roles. The rich structure of actions gives rise to a new problem, which, due to this symmetry between actions and situations, is analogous to the traditional frame problem. In [Lin and Shoham 1991] we provided a solution to the frame problem based on a formal adequacy criterion called "epistemological completeness." Here we show how to solve the new problem based on the same adequacy criterion.

Introduction

In [Lin and Shoham 1991] we proposed a methodology for formalizing the effects of actions in the situation calculus. In this paper we extend this methodology to a framework which allows concurrent actions.

Recall that traditional situation calculus [McCarthy and Hayes 1969] is a many-sorted first-order logic with the following domain independent sorts: situation sort (s), propositional fluent sort (p), and action sort (a). There is a domain independent function Result(a, s), which represents the resulting situation when a is performed in s, and a domain independent predicate H(p, s), which asserts that p holds in s.

It is clear that there is an asymmetry between actions and situations in this picture. While situations are 'rich' objects, as manifested by the various fluents that are true and false in them, actions are 'poor,' primitive objects. In this paper, we propose to model concurrent actions in the situation calculus by correcting this asymmetry.
We introduce the notions of global actions, primitive actions, and the binary predicate In, whose roles will be completely analogous to those of situations, fluents, and the predicate H, respectively.

Intuitively, a global action is a set of primitive actions, and In expresses the membership relation between global actions and primitive actions. When a global action is performed in a situation, all of the primitive actions in it are performed simultaneously.

Formally, the extended situation calculus is a multi-sorted first-order logic with four domain-independent sorts: situation sort (s), propositional fluent sort (p), global action sort (g), and primitive action sort (a). We have a binary function Result with the intuitive meaning that Result(g, s) is the resulting situation of performing the global action g in the situation s. We have two binary predicates, H and In. Intuitively, H(p, s) means that the fluent p is true in the situation s, and In(a, g) means that the primitive action a is one of the actions in g. For any finite set of primitive actions A1, ..., An, we assume that {A1, ..., An} is a global action satisfying the following properties:

∀a. (In(a, {A1, ..., An}) ≡ (a = A1 ∨ ... ∨ a = An)), (1)

and

∀g. (∀a. (In(a, g) ≡ (a = A1 ∨ ... ∨ a = An)) ⊃ g = {A1, ..., An}). (2)

If A is a primitive action, then we shall write {A} as A. Most often, whether A is a primitive action or the corresponding global action {A} will be clear from the context. For example, A is a global action in Result(A, s), but a primitive action in In(A, g). We shall make it clear whenever there is a possibility of confusion.

To be sure, there are other proposals in the literature for extending the situation calculus to allow expressions for concurrent actions. Most introduce new operators on actions [cf. Gelfond et al. 1991, Schubert 1990]. For example, in [Gelfond et al.
1991] a new operator "+" is introduced, with the intuitive meaning that a + b is the action of executing a and b simultaneously. This approach is common also in the programming languages community. The relationships between our formalism and those with new operators for concurrent actions are delicate. It seems that our formalism is more convenient in expressing complicated actions such as the global action where every agent makes a move.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Another way to extend the situation calculus is to think of Result as a relation, rather than a function. For example, we can introduce RES(a, s1, s2) with the intuitive meaning that s2 is one of the situations resulting from executing a, along with possibly some other actions, in s1. This is essentially the approach taken in [Georgeff 1986, Peleg 1987, and others]. The drawback of this approach is that it does not explicitly list the additional actions that cause the transition from s1 to s2. Thus Georgeff (1986) had difficulty in formalizing the effect of a single action when performed exclusively.

A more radically different approach to concurrency is via temporal logic [Allen 1984, McDermott 1982]. For a comparison between the situation calculus approach to reasoning about action and that of temporal logic, see [Pelavin and Allen 1987, Shoham 1989].

The rest of the paper is organized as follows. In section 2 we introduce a formal criterion, called epistemological completeness, to evaluate theories of actions. This criterion was first introduced in [Lin and Shoham 1991] in the context of traditional situation calculus with respect to primitive actions; the extension to global actions is straightforward.
In section 3, we show that this criterion not only helps us clarify the traditional (fluent-oriented) frame problem, as we have done in [Lin and Shoham 1991], but also clarifies the new action-oriented frame problem. In section 4, we illustrate our solution to these problems using the well-known Stolen Car Problem. The solution is extended to a class of causal theories in section 5, and compared to others in the context of conflicting subactions in section 6. Finally, we conclude in section 7.

Epistemologically complete theories of action

In the situation calculus we formalize the effects of actions using first-order logic. However, as is well known, adopting classical semantics leads to problems such as the frame problem, the ramification problem, et cetera. In [Lin and Shoham 1991] we argued that the ultimate criterion against which to evaluate theories of (primitive) actions is what we called epistemological completeness. The various problems amount to achieving this completeness in a precise and concise fashion. We shall see that, when we introduce concurrency, new problems arise. However, the notion of epistemological completeness is relevant also to their definition and solution. This section therefore repeats some key definitions from [Lin and Shoham 1991], appropriately modified to the context of global actions.

In that paper, and here as well, we concern ourselves only with deterministic actions, which map a situation into a unique other one. Intuitively, a theory of a (deterministic) action is epistemologically complete if, given a complete description of the initial situation, the theory enables us to predict a complete description of the resulting situation when the action is performed. In order to formalize this intuition, we first introduce the notion of states, which are complete descriptions of situations with respect to the set of the fluents we are interested in.
In the following, let P be a fixed set of ground fluent terms in which we are interested. This fixed set of fluents plays a role similar to that of the Frame predicate in [Lifschitz 1990].

Definition 1 A set SS is a state of the situation S (with respect to P) iff there is a subset P' of P such that SS = {H(P, S) | P ∈ P'} ∪ {¬H(P, S) | P ∈ P − P'}.

Therefore, if SS is a state of S, then for any P ∈ P, either H(P, S) ∈ SS or ¬H(P, S) ∈ SS.

Thus we can say that a first-order theory T is epistemologically complete about the global action G (with respect to P) if it is consistent, and for any ground situation term S, any state SS of S, and any fluent P ∈ P, either T ∪ SS ⊨ H(P, Result(G, S)) or T ∪ SS ⊨ ¬H(P, Result(G, S)), where ⊨ is classical first-order entailment.

However, as is well known, classical entailment is not the only choice; we may also interpret our language nonmonotonically according to a nonmonotonic entailment. Indeed, the notion of epistemological completeness is not limited to monotonic first-order theories. In general, for any given monotonic or nonmonotonic entailment ⊨_C, we can define epistemological completeness as follows:

Definition 2 A theory T is epistemologically complete about the action G (with respect to P, and according to ⊨_C) if T ⊭_C False, and for any ground situation term S, any state SS of S, and any fluent P ∈ P, there is a finite subset SS' of SS such that either T ⊨_C ⋀SS' ⊃ H(P, Result(G, S)) or T ⊨_C ⋀SS' ⊃ ¬H(P, Result(G, S)).

We note that for any sets T, SS, and formula φ, T ∪ SS ⊨ φ iff there is a finite subset SS' of SS such that T ⊨ ⋀SS' ⊃ φ. Thus if we replace ⊨_C in Definition 2 by classical entailment ⊨, we get the same definition we had earlier for monotonic first-order theories.

The frame problems

There are a number of well-known problems in formalizing the effects of actions. The most famous one is the frame problem [McCarthy and Hayes 1969], which is best illustrated by an example.
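Epistemological completeness can be illustrated with a toy propositional check. In the sketch below, which is our own illustration rather than the paper's formalism, a theory about one deterministic action is encoded as a predicate over (pre-state, post-state) pairs, entailment is replaced by brute-force enumeration of states, and the fluent names P and Q are hypothetical.

```python
from itertools import combinations

FLUENTS = ("P", "Q")

def states():
    """All complete truth assignments to FLUENTS, as frozensets of true fluents."""
    return [frozenset(c) for n in range(len(FLUENTS) + 1)
            for c in combinations(FLUENTS, n)]

def epistemologically_complete(theory):
    """theory(pre, post) holds iff the pair is consistent with the axioms for
    the action.  The theory is epistemologically complete iff every complete
    pre-state fixes the truth value of every fluent in the resulting state."""
    for pre in states():
        posts = [post for post in states() if theory(pre, post)]
        for f in FLUENTS:
            if len({f in post for post in posts}) != 1:
                return False          # f undetermined, or no consistent post-state
    return True

# An effect axiom alone ("the action makes P true") is not complete: Q floats.
effect_only = lambda pre, post: "P" in post
# Adding the frame axiom "Q keeps its value" restores completeness.
with_frame = lambda pre, post: "P" in post and (("Q" in pre) == ("Q" in post))
```

This mirrors the paper's point that effect axioms by themselves leave the resulting situation underdetermined, and that frame axioms are exactly what closes the gap.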
Consider a primitive action Paint which paints Block10 blue:

∀s. H(C(Block10, Blue), Result(Paint, s)). (3)

The axiom tells us nothing about what happens to the color of a red block next to Block10 after Paint is performed. For that, we need a so-called frame axiom which says that the neighboring block will still be red:

∀s. (H(C(Block9, Red), s) ≡ H(C(Block9, Red), Result(Paint, s))). (4)

[Figure 1: The frame problems]

The frame problem is that of succinctly summarizing the frame axioms. Formally, we can say that the frame axiom is needed because, although {(3)} is a complete theory about Paint w.r.t. {C(Block10, Blue)}, it is not so w.r.t. P = {C(Block10, Blue), C(Block9, Red)}. It is easy to see that {(3), (4)} is complete w.r.t. P. The frame problem, then, is concerned with achieving epistemological completeness in a convenient way for a given set of fluents. In [Lin and Shoham 1991], we proposed a solution to the frame problem for a wide class of causal theories.

However, for concurrent actions, in addition to the traditional frame problem, there is a closely related problem. Suppose now that we have a new primitive action Close which closes the door. Consider the global action {Paint, Close}. Since (3) tells us nothing about {Paint, Close}, we need an "inheritance axiom" which says that {Paint, Close} can inherit the effect of Paint:

∀s. H(C(Block10, Blue), Result({Paint, Close}, s)). (5)

It is clear that, for any global action that includes Paint, and does not include a subaction that "interferes" with Paint, we need a similar axiom. Then, like the frame problem, we have the problem of how to succinctly summarize these "inheritance axioms." Again, in terms of epistemological completeness, we can say that the axiom (5) is needed because, although {(3)} is an epistemologically complete theory about Paint w.r.t. {C(Block10, Blue)}, it is not so about {Paint, Close}.
The new problem, then, is again about how to achieve epistemological completeness in a convenient way for a given set of global actions. It is clear that the two problems are symmetric (Fig. 1). The first one, called the fluent-oriented frame problem, involves a rich structure of propositional fluents, but only a single primitive action. The second one, called the action-oriented frame problem, involves a single fluent, but a rich structure of actions. The symmetry is exactly the same as the one between situations and actions in our framework.

In [Lin and Shoham 1991] we argued that a useful way to tackle the fluent-oriented frame problem is to consider a monotonic theory with explicit frame axioms first, and then to show that a succinct and provably equivalent representation using, for example, a nonmonotonic logic, captures the frame axioms concisely. We shall follow the same strategy here for the generalized frame problem. Let us illustrate it using a version of Kautz's stolen car problem [Kautz 1986].

The Stolen Car Problem revisited

The scenario is as follows. Initially, the car is present, but after two waiting periods it is gone. When was it stolen? Suppose we have two propositional fluents Stolen (the car is gone) and Returned (the car owner is back), and two primitive actions: Steal and Return. After Return, the car owner returns:

∀s.H(Returned, Result(Return, s)). (6)

If the owner of the car has not returned, then after Steal, the car would be stolen:

∀s.(¬H(Returned, s) ⊃ H(Stolen, Result(Steal, s))). (7)

If Return and Steal are performed simultaneously, then the effect of Steal would be canceled:

∀s.(¬H(Stolen, s) ⊃ ¬H(Stolen, Result({Steal, Return}, s))). (8)

Let P = {Stolen, Returned}. Then, of course, {(6), (7), (8)} is not an epistemologically complete theory of Return, Steal, and {Steal, Return}. Following the strategy in [Lin and Shoham 1991], we shall provide two ways to complete the theory.
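To make the intended dynamics concrete, the following is a minimal Python sketch (not from the paper) of the stolen-car causal rules (6)-(8). The function name `result` and the dictionary encoding of states are assumptions for illustration; persistence of unaffected fluents stands in procedurally for the frame axioms, and cancellation stands in for axiom (8).

```python
# A hedged sketch of the stolen-car transition semantics.
# A state is a dict mapping fluent names to booleans.

def result(global_action, state):
    """Apply a set of primitive actions performed concurrently."""
    new = dict(state)                      # frame: fluents persist by default
    canceled = set()
    # Axiom (8): performing Return cancels the effect of Steal.
    if 'Return' in global_action:
        canceled.add('Steal')
    if 'Return' in global_action and 'Return' not in canceled:
        new['Returned'] = True             # causal rule (6)
    if ('Steal' in global_action and 'Steal' not in canceled
            and not state['Returned']):
        new['Stolen'] = True               # causal rule (7)
    return new

s0 = {'Returned': False, 'Stolen': False}
print(result({'Steal'}, s0))               # the car is stolen
print(result({'Steal', 'Return'}, s0))     # theft canceled, owner is back
```

Running the sketch on the initial state shows the intended cancellation: performing Steal alone makes Stolen true, while performing Steal and Return together leaves Stolen false.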
One is to stick to first-order logic, and supply the necessary frame axioms and inheritance axioms explicitly. The other is to use a nonmonotonic logic. It is important that the two completions are equivalent.

A monotonic completion

We first explicitly supply the necessary inheritance axioms. This will give us a causal theory for each global action as defined in [Lin and Shoham 1991]. In this example, there is only one global action, {Steal, Return}, to consider. Because of (8), {Steal, Return} can only inherit the effect of Return:

∀s.H(Returned, Result({Steal, Return}, s)). (9)

Now we have a causal theory, {(6), (7), (9)}, about the three actions. We can use the technique in [Lin and Shoham 1991] to generate the necessary frame axioms.

592 Representation and Reasoning: Action and Change

For Steal, we have the frame axioms:

∀s.(H(Returned, s) ≡ H(Returned, Result(Steal, s))), (10)
∀s p.(H(Returned, s) ⊃ [H(p, s) ≡ H(p, Result(Steal, s))]). (11)

For Return, we have:

∀s.(H(Stolen, s) ≡ H(Stolen, Result(Return, s))). (12)

For {Steal, Return}, we have:

∀s.(H(Stolen, s) ≡ H(Stolen, Result({Steal, Return}, s))). (13)

Let T1 be the set of axioms (6)-(13). Then it is clear that T1 is an epistemologically complete theory of Steal, Return, and {Steal, Return}. Furthermore, if we assume that these are the only global actions:

∀g.(g = Steal ∨ g = Return ∨ g = {Steal, Return}), (14)

then

T1 ∪ {(14)} ⊢ ∀g1 g2 s.(¬H(Stolen, s) ∧ H(Stolen, Result(g2, Result(g1, s))) ⊃ ¬H(Returned, s) ∧ ¬H(Returned, Result(g1, s)) ∧ (g1 = Steal ∨ g2 = Steal)).

That is, the owner never returned, and Steal must have happened during one of the waits.

A nonmonotonic completion

Although T1 gives the right answers to the stolen car problem, it suffers from the frame problems since it appeals to the explicit inheritance and frame axioms. We now provide an equivalent nonmonotonic theory that avoids them.

We introduce two auxiliary predicates. We have ab(p, g, s), which is true if the truth value of p changes after g is performed in s:

∀p g s.(¬ab(p, g, s) ⊃ (H(p, s) ≡ H(p, Result(g, s)))). (15)

We also have Canceled(g1, g2, s), which is true if the "normal" causal effect of the global action g1 is canceled by some other actions in g2 when they are performed simultaneously in the situation s. Thus the causal rules (6) and (7) are rewritten as:

∀g s.(In(Return, g) ∧ ¬Canceled(Return, g, s) ⊃ H(Returned, Result(g, s))), (16)
∀g s.(In(Steal, g) ∧ ¬Canceled(Steal, g, s) ∧ ¬H(Returned, s) ⊃ H(Stolen, Result(g, s))), (17)

and (8) is replaced by

∀g s.(In(Return, g) ⊃ Canceled(Steal, g, s)). (18)

Now let T2 consist of the axioms (15)-(18), the following instances of (1):

∀a.(In(a, Steal) ≡ a = Steal), (19)
∀a.(In(a, Return) ≡ a = Return), (20)
∀a.(In(a, {Steal, Return}) ≡ a = Steal ∨ a = Return), (21)

and the unique names assumption:

Return ≠ Steal ∧ Returned ≠ Stolen. (22)

Then the circumscription of Canceled in T2 with H allowed to vary, written Circum(T2; Canceled; H), implies the causal rules (6), (7), and (9). Thus, in a sense, Circum(T2; Canceled; H) solves the action-oriented frame problem for the Stolen Car Problem.

We solve the fluent-oriented frame problem using the solution proposed in [Lin and Shoham 1991]. Specifically, we circumscribe ab in Circum(T2; Canceled; H) according to the policy in [Lin and Shoham 1991]. The results in [Lin and Shoham 1991] show that the circumscriptive theory is equivalent to T1 ∪ {(22)} in the sense that for any sentence φ in the language of T1, φ is a first-order consequence of the circumscriptive theory iff it is a first-order consequence of T1 ∪ {(22)}. In particular, T2 is a nonmonotonic theory that is epistemologically complete about Steal, Return, and {Steal, Return}.

We notice that the axiom (18) does not take into account the fact that for Return to override the effect of Steal, Return itself must not be overridden by something else such as "Murder." Thus a more appropriate axiom might be:

∀g s.(In(Return, g) ∧ ¬Canceled(Return, g, s) ⊃ Canceled(Steal, g, s)). (23)

But if we simply circumscribe Canceled in the above axiom, we shall have two minimal models, one in which Canceled(Return, {Steal, Return}, s) is true, and the other in which Canceled(Steal, {Steal, Return}, s) is true. It is clear that we should prefer the second one. Formally, this can be done by using prioritized subdomain circumscription [Lifschitz 1986]. But we suspect that the formal theory will be complicated, witness the result in [Lifschitz 1987]. Notice that axioms of the form (23) resemble rules in logic programs with negation-as-failure. Thus it would be natural to use default logic to capture Canceled, and pipe the result to circumscription. Although this is perfectly well-defined, some may find it odd to use two nonmonotonic logics at the same time. An alternative formulation of (18) in light of Murder is the following axiom:

∀g s.(In(Return, g) ∧ ¬In(Murder, g) ⊃ Canceled(Steal, g, s)). (24)

Although (24) is not as good as (23), it should suffice in many applications.

Causal theories

We notice that both our monotonic and nonmonotonic solutions to the Stolen Car problem are adequate in the sense that they are epistemologically complete. Furthermore, they are provably correct with respect to each other. In this section, we show how the solutions can be generalized to a class of causal theories.

In the following, let P be a fixed set of propositional fluents, and G be a fixed set of global actions. Let Def be the instantiations of (1) and (2) to the global actions in G.
Thus, for example, if {A1, A2} ∈ G, then

∀a.(In(a, {A1, A2}) ≡ a = A1 ∨ a = A2)

will be an axiom in Def. A causal theory of G consists of a domain constraint of the form

∀s.C(s), (25)

a set of causal rules of the form

∀s.(R(s) ⊃ H(P, Result(G, s))), (26)

and a set of cancellation axioms of the form

∀s.(K(s) ⊃ Canceled(G1, G2, s)), (27)

where C(s), R(s), and K(s) are formulas with s as their only free variable, P ∈ P, and G, G1, G2 ∈ G. We assume that for any pair of global actions G1, G2 ∈ G, there is at most one cancellation axiom (27) for it. As with the Stolen Car problem, a causal theory is usually not epistemologically complete. In the following, let T be a fixed causal theory of G.

A monotonic completion

For any global action G ∈ G, if there is a global action G1, a causal rule about G1 in T:

∀s.(R(s) ⊃ H(P, Result(G1, s)))

such that

{∀s.C(s)} ∪ Def ⊢ ∀a.(In(a, G1) ⊃ In(a, G)),

and a cancellation axiom about G1 in T:

∀s.(K(s) ⊃ Canceled(G1, G, s)),

then the following is a derived causal rule about G:

∀s.(R(s) ∧ ¬K(s) ⊃ H(P, Result(G, s))). (28)

For any G ∈ G, let TG be the set of the domain constraint (25), the causal rules about G in T, and the derived causal rules about G. Then TG is a causal theory of the action G in the sense of [Lin and Shoham 1991], and the procedure in that paper can be used to provide a monotonic completion of TG.

A nonmonotonic completion

We first rewrite (26) as

∀s g.(∀a.(In(a, G) ⊃ In(a, g)) ⊃ (R(s) ∧ ¬Canceled(G, g, s) ⊃ H(P, Result(g, s)))). (29)

Then the derived causal rule (28) is obtained from (29) by applying predicate completion on Canceled:

∀s.(Canceled(G1, G2, s) ≡ K(s)),

where K(s) is as in (27). Let W be the conjunction of the domain constraint (25), the axioms in Def, and the cancellation axioms in T. Then under certain conditions [Reiter 1982], minimizing Canceled in W with H allowed to vary, that is,

Circum(W; Canceled; H), (30)

will achieve this predicate completion.
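The construction of derived causal rules can be sketched symbolically. The following Python fragment is a hypothetical encoding (not the paper's syntax): global actions are sets of primitive actions, a causal rule is a triple of action, precondition text, and effect fluent, and a missing cancellation axiom is treated as K = False, which yields the plain inheritance case of (28).

```python
# A hedged sketch of the derived-causal-rule construction (28):
# if every subaction of G1 occurs in G, attach G1's rule to G,
# guarded by the negation of the cancellation condition K.

def derived_rules(G, causal_rules, cancellations):
    out = []
    for G1, R, P in causal_rules:
        if G1 <= G:  # In(a, G1) implies In(a, G) for every a
            K = cancellations.get((G1, frozenset(G)), 'False')
            out.append((frozenset(G), f'{R} and not {K}', P))
    return out

# The stolen-car instance: Steal's effect is canceled whenever
# Return is also performed; Return's effect is never canceled.
rules = [(frozenset({'Return'}), 'True', 'Returned'),
         (frozenset({'Steal'}), 'not Returned(s)', 'Stolen')]
cancel = {(frozenset({'Steal'}), frozenset({'Steal', 'Return'})): 'True'}

for rule in derived_rules({'Steal', 'Return'}, rules, cancel):
    print(rule)
```

The derived rule for Steal comes out guarded by `not True`, so it can never fire, mirroring the fact that {Steal, Return} inherits only the effect of Return, as in axiom (9).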
Let T2 be the conjunction of (30), the causal rules in T, (15), and some unique names assumptions. Then our nonmonotonic completion is the circumscription of ab in T2 according to the policy in [Lin and Shoham 1991].

If (30) captures the predicate completion for Canceled, then T2 would be "equivalent" to the union of TG for all G ∈ G. The results in [Lin and Shoham 1991] can then be used to show that under certain conditions, the monotonic and nonmonotonic completions are equivalent, and are both epistemologically complete.

Conflicts

Let T be a causal theory of G. We suppose that T is consistent. Let G ∈ G. If TG is inconsistent, then G contains unresolved conflicting subactions. For example, if we have the following causal rules about the primitive actions Close and Open, which close and open the door, respectively:

∀s.H(Opened, Result(Open, s)),
∀s.H(Closed, Result(Close, s)),

the following domain constraint:

∀s.¬(H(Opened, s) ∧ H(Closed, s)),

and no cancellation axioms, then we'll have

∀s.H(Opened, Result({Open, Close}, s)),
∀s.H(Closed, Result({Open, Close}, s)).

It is easy to see that these two axioms are contradictory under the domain constraint.

In some cases G may contain unresolved conflicting subactions even if TG is consistent. For example, suppose the agent must have a key in order to open the door:

∀s.(H(Key, s) ⊃ H(Opened, Result(Open, s))).

Then we'll have:

∀s.(H(Key, s) ⊃ H(Opened, Result({Open, Close}, s))).

It is easy to see that TG, where G = {Open, Close}, is consistent. But it is clear that the potential conflict between Open and Close is still there. We shall say that G contains unresolved potentially conflicting subactions if for some situation S, there exists a state SS of S such that T ∪ SS is consistent, but TG ∪ SS is inconsistent. Notice that the requirement that T ∪ SS be consistent means that the state SS satisfies the domain constraint.
Potential conflicts among actions can be resolved using cancellation axioms. For example, if we think that Close will always prevail, then we can write:

∀g s.(In(Close, g) ∧ In(Open, g) ⊃ Canceled(Open, g, s)).

If we think that Open and Close will cancel each other's effects, then we can write:

∀g s.(In(Close, g) ∧ In(Open, g) ⊃ Canceled(Open, g, s) ∧ Canceled(Close, g, s)).

In contrast to our approach, concurrency is modeled as so-called "interleaving concurrency" in [Gelfond et al. 1991]. In other words, in their system, concurrently performing two actions whose individual effects are contradictory amounts to performing the two actions sequentially, in one of the two orders. This is also the approach taken in [Pednault 1987].

Conclusions

We have proposed a formalization of concurrent actions in the situation calculus, and showed that the "provably-correct" approach proposed in [Lin and Shoham 1991] to formalizing the effects of actions can be extended to the new framework. Compared to other work on concurrent actions, our work is unique in that it is based on a formal adequacy criterion, and applies in a rigorous fashion to a well-defined class of causal theories.

Acknowledgements

We would like to thank Kave Eshghi, Michael Gelfond, and Vladimir Lifschitz for stimulating discussions related to the subject of this paper. We also thank the anonymous referees for comments on an earlier draft of this paper. The work of this paper was supported in part by a grant from the Air Force Office of Scientific Research.

References

Allen, J.F. (1984), Towards a general theory of action and time, Artificial Intelligence 23 (1984), 123-154.

Gelfond, M., V. Lifschitz, and A. Rabinov (1991), What are the limitations of the situation calculus? in the Working Notes of the AAAI Spring Symposium on the Logical Formalizations of Commonsense, 1991, Stanford, California.

Georgeff, M.P.
(1986), The representation of events in multiagent domains, in Proceedings of AAAI-86, pp. 70-75, Philadelphia, PA, 1986.

Kautz, H. (1986), The logic of persistence, in Proceedings of AAAI-86.

Lifschitz, V. (1986), Pointwise circumscription, in Proceedings of AAAI-86.

Lifschitz, V. (1987), On the declarative semantics of logic programs with negation, in Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, ed. by M. Ginsberg, pp. 337-350.

Lifschitz, V. (1990), Frames in the space of situations, Artificial Intelligence 46 (1990), 365-376.

Lin, F. and Y. Shoham (1991), Provably correct theories of action: Preliminary report, in Proceedings of AAAI-91.

McCarthy, J. and P. Hayes (1969), Some philosophical problems from the standpoint of artificial intelligence, in Machine Intelligence 4, Meltzer, B. and Michie, D. (eds), Edinburgh University Press.

McDermott, D. (1982), A temporal logic for reasoning about processes and plans, Cognitive Science 6(2) (1982), 101-155.

Pednault, E.P.D. (1987), Formulating multi-agent, dynamic world problems in the classical planning framework, in M. Georgeff and A. Lansky, editors, Reasoning about Actions and Plans, pp. 47-82, Morgan Kaufmann, San Mateo, CA, 1987.

Peleg, D. (1987), Concurrent dynamic logic, JACM 34 (1987), 450-479.

Pelavin, R.N. and J.F. Allen (1987), A model for concurrent actions having temporal extent, in Proceedings of AAAI-87, pp. 247-250, Seattle, WA, 1987.

Reiter, R. (1982), Circumscription implies predicate completion (sometimes), in Proceedings of AAAI-82, pp. 418-420.

Shoham, Y. (1989), Time for action, in Proceedings of IJCAI-89.

Schubert, L.K. (1990), Monotonic solution to the frame problem in the situation calculus: an efficient method for worlds with fully specified actions, in H.E. Kyburg, R.P. Loui, and G.N. Carlson, editors, Knowledge Representation and Defeasible Reasoning, Kluwer Academic Publishers, 1990.
Nonmonotonic Sorts for Feature Structures

Mark A. Young
Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Ave.
Ann Arbor, MI 48109
marky@caen.engin.umich.edu

Abstract

There have been many recent attempts to incorporate defaults into unification-based grammar formalisms. What these attempts have in common is that they all lose one of the most desirable properties of feature systems: namely, presentation order independence. This paper describes a method of dealing with defaults that retains order independence. The method works by making a strong distinction between strict and default information. The addition of nonmonotonic sorts allows default information to be carried in the feature structure while retaining a simple, deterministic unification operation. Monotonic feature structures are rederived through a satisfaction relation that is abstract in that it depends only on the ordering information for sorts.

Introduction

There have been many recent attempts to incorporate defaults into unification-based grammar formalisms. Shieber (1986; 1987) suggests add conservatively and overwrite as default mechanisms for PATR-II. (Kaplan 1987) describes priority union for LFG. (Bouma 1990) describes a default unification operation for a generic UG. (Carpenter 1991) compares skeptical and credulous methods for default unification, and describes a method of default inheritance.

None of these schemes retains the property of presentation order independence. That is, (F ⊓d G) ⊓d H may be very different from (F ⊓d H) ⊓d G (where ⊓d is the default unification operator). Since unification grammars are meant to be declarative, this is a big step backward. To make sure that the "right" answer comes out, these mechanisms are restricted in their use (usually to just the lexicon) and even there require that specific procedures be followed. This paper describes an effort to put defaults into a declarative framework.
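The order-dependence complaint is easy to reproduce in miniature. The following toy Python sketch (an illustration of overwrite-style default unification in general, not of any one cited system) merges feature bundles so that later defaults clobber earlier ones; the attribute name `past_suffix` is invented for the example.

```python
# A toy demonstration that overwrite-style default unification is
# order dependent: (F ⊓d G) ⊓d H differs from (F ⊓d H) ⊓d G.

def default_unify(f, g):
    """Overwrite semantics: on conflict, values from g win."""
    out = dict(f)
    out.update(g)
    return out

F = {'past_suffix': '+te'}
G = {'past_suffix': '0'}
H = {'past_suffix': '+en'}

left = default_unify(default_unify(F, G), H)
right = default_unify(default_unify(F, H), G)
print(left)   # {'past_suffix': '+en'}
print(right)  # {'past_suffix': '0'}
```

The two bracketings give different suffixes, so the result depends on the order in which the same three pieces of information arrive.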
Our object is to provide a theoretically "clean" mechanism, one that produces the right answers without any procedural entanglements. Feature structures should be simple bundles of information, and unification should combine them without any loss (or gain) of information. The method of nonmonotonic sorts meets these requirements.

This paper is organized as follows. We first describe our favorite way of defining feature structures. The method does not rely on this particular definition, but having a concrete system to work with will simplify the presentation. We then give a simple example that illustrates the way defaults are often used in unification systems. Thereafter we define nonmonotonic sorts and nonmonotonically sorted feature structures, and show how they apply to the example. We finish up with a slightly more involved example and a discussion of extensions to the method.

Notation

Feature structures are bundles of information. Each structure has a sort (drawn from a finite set, S), and a (possibly empty) set of attributes (drawn from a finite set F). Each attribute has a value, which is again a feature structure. One feature structure may be the value of more than one attribute, so the whole structure is best thought of as a rooted, labeled di-graph. More precisely:

Definition 1 A feature structure is a tuple (Q, q, δ, θ) where

- Q is a finite set of nodes,
- q ∈ Q is the root node,
- δ : Q × F → Q is a partial feature value function that gives the edges and their labels, and
- θ : Q → S is a sorting function that gives the labels of the nodes.

This structure must be connected.

Although we prefer this definition, the notion of nonmonotonic sorts is not tied to it: any similar notion of feature structures will do. It is usual to require that the graph be acyclic, and not unusual to require that θ be defined only for sink nodes.

We will assume that there is a partial order, ≺, defined on S.
This ordering is such that the greatest lower bound of any two sorts is unique, if it exists. In other words, (S ∪ {⊥}, ≺) is a meet-semilattice (where ⊥ represents inconsistency or failure). This allows us to define the most general unifier of two sorts as their greatest lower bound, which we write as a ⊓s b. We also assume that there is a most general sort, ⊤, called top. The structure (S, ≺) is called the sort hierarchy.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

VERB template
  <past tense suffix> isa +te
  <past participle prefix> isa ge+
  <past participle suffix> isa +t
spiel lex VERB
MIXED-VERB template VERB
  <past participle suffix> isa +en
mahl lex MIXED-VERB
STRONG-VERB template MIXED-VERB
  <past tense suffix> isa 0
zwing lex STRONG-VERB
  <past tense stem> isa zwang
  <past participle stem> isa zwung

Figure 1: Example Lexicon Entries

The main operation carried out on feature structures is unification. We write F ⊓ G for the most general unifier of F and G. Since we are working with sorted feature structures, we will require a method of sorted unification (Walther 1988). Note that the restriction on the sort hierarchy will ensure that most general unifiers are unique if they exist at all.

Defaults and the Lexicon

Defaults most commonly appear in the lexicon of natural language systems. The formation of verb tenses is particularly suited for the use of defaults, and we will be using examples of tense formation to illustrate our method.

We assume that our lexicon is made up of a system of templates and lexical entries. These objects are arranged in an inheritance hierarchy, with templates corresponding to non-terminals of the grammar and lexical entries to terminals. Figure 1 shows a simple hierarchy for a subset of German verbs. There are three kinds of verbs indicated: weak, mixed, and strong. The VERB template holds information about weak verbs.
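Sorted unification as greatest lower bound can be sketched directly. The Python fragment below is an illustration only: it hard-codes a small hierarchy in the style of the English-suffix example later in the paper (Fig. 4), represents each sort by its reflexive set of subsorts, and computes a ⊓s b as the unique maximal common subsort, with `None` playing the role of ⊥.

```python
# A hedged sketch of sort unification a ⊓s b in a meet-semilattice,
# assuming the hierarchy is given as an explicit subsort table.

TOP = 'T'
# subsorts[x] = all sorts at or below x (reflexive downward closure).
subsorts = {
    TOP:    {TOP, '+ed', '+en', '+d+', '+ded', '+den'},
    '+ed':  {'+ed', '+ded'},
    '+en':  {'+en', '+den'},
    '+d+':  {'+d+', '+ded', '+den'},
    '+ded': {'+ded'},
    '+den': {'+den'},
}

def glb(a, b):
    """Most general unifier of two sorts: the unique maximal common
    subsort, or None (failure) if the sorts are incompatible."""
    common = subsorts[a] & subsorts[b]
    # keep only elements with no strictly more general sort in common;
    # uniqueness of the glb is assumed, as in the paper's hierarchy
    maximal = [s for s in common
               if not any(s in subsorts[t] and s != t for t in common)]
    return maximal[0] if maximal else None

print(glb('+ed', '+d+'))   # '+ded'
print(glb('+ed', '+en'))   # None: incompatible sorts
```

Under this encoding, unifying a sort with ⊤ returns the sort unchanged, and unifying two incompatible sorts fails, exactly the behavior required of ⊓s.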
It specifies that the suffix for the past tense is +te, and that the past participle prefix and suffix are ge+ and +t respectively. The MIXED-VERB template indicates that it inherits information from VERB, and that the past participle suffix for mixed verbs is +en. The lexical entry mahl is an instance of the MIXED-VERB template.

It is intended that mahl will get +te as a past tense suffix and ge+ and +en as past participle affixes. However, if we use only strict unification, we will find that the information specified for mahl is inconsistent. In particular, we get +en as a past participle suffix from MIXED-VERB, and +t from VERB. Since strict unification cannot deal with this situation, we need a form of default unification to save us. (We could stick to strict unification if we simply replicated the required information in every template. This would not only lose meaningful generalizations, but would also lead to much larger lexicons.)

VERB template
  <past tense suffix> default +te
  <past participle prefix> isa ge+
  <past participle suffix> default +t
spiel lex VERB
MIXED-VERB template VERB
  <past participle suffix> isa +en
mahl lex MIXED-VERB
STRONG-VERB template MIXED-VERB
  <past tense suffix> isa 0
zwing lex STRONG-VERB
  <past tense stem> isa zwang
  <past participle stem> isa zwung

Figure 2: Example Lexicon with Defaults

Current methods to deal with defaults introduce an asymmetric "default unification" operation. Systems using these methods might start at the top of the hierarchy and work down toward the lexical entries, replacing inconsistent information as it is found. Or they might start at the lexical entries and work their way up, conserving information by ignoring conflicts. These conflict resolution methods are by their very nature sensitive to order of presentation. There is very much a notion of "what we knew before" and "what we are being told now." Such methods cannot hope to provide a declarative way of dealing with defaults.
For that we need a whole new way of looking at the matter.

Nonmonotonic Sorts

To allow defaults in our feature structures, we introduce the notion of nonmonotonic sorts (NSs). The intuition behind a nonmonotonic sort is that it encodes both strict and default information, and it keeps the distinction between the two kinds of information. We cannot be sure what sort a NS will finally turn out to be. That is, a NS is a description that may be satisfied by various sorts.

Looking back at Fig. 1, we can see that the suffixes +te and +t are both default information. Each is contradicted by information lower in the hierarchy. The suffixes +en and 0 and the prefix ge+ are, on the other hand, strict information. They are not contradicted by information lower in the hierarchy. We make this information explicit in Fig. 2.

Young 597

Nonmonotonic sorts are defined in terms of the monotonic sorts they describe. That is, we start with the same hierarchy of basic sorts, (S, ≺), with the same sort unification operation, ⊓s, defined on S. A nonmonotonic sort occurs when we say that a feature structure is "strictly of this sort" and "by default of that sort." The intent of Fig. 2 is that the past participle suffix of mahl will be strictly +en and only by default +t. Since the two are incompatible, +en will overrule +t, and mahl will get the correct suffix regardless of the order templates are scanned.

Once formed, NSs can be freely combined, leading to NSs with multiple strict and default sorts. The strict parts can be combined into a single sort through sort unification. Default parts cannot be so combined, however. This is because the two pieces of information, that this FS is by default of sort a and that it is also by default of sort b, are two independent pieces of information which may be independently overruled. The default parts must be kept separate, not only from the strict parts, but also from each other.
But default sorts can be made more specific by adding the information that they must be subsumed by the strict information.

Definition 2 A nonmonotonic sort is a pair (f, Δ) where f ∈ S, and Δ ⊆ S such that for each d ∈ Δ, d ≺ f. We write N for the set of nonmonotonic sorts.

Note that a nonmonotonic sort can be made from any sort f and subset D of S. Those elements of D that are inconsistent with f can be discarded. The remaining sorts can then be sort-unified with copies of f, discarding duplicates. Nonmonotonic sort unification uses a similar method to combine the information from two NSs without loss or gain.

Definition 3 The nonmonotonic sort unifier of nonmonotonic sorts (f1, Δ1) and (f2, Δ2) (written (f1, Δ1) ⊓N (f2, Δ2)) is the nonmonotonic sort (f, Δ) where

- f = f1 ⊓s f2, and
- Δ = {d ⊓s f | d ∈ Δ1 ∪ Δ2 ∧ (d ⊓s f) ≺ f}.

The nonmonotonic sort unifier is undefined if f1 ⊓s f2 is undefined.

The definition of ⊓N allows defaults to be eliminated in two ways (neither of which changes the information present). If a default sort is inconsistent with new strict information, it is dropped (we follow the convention that if d ⊓s f is undefined, then d ⊓s f ≺ f is false). Also, new strict information may be more specific than a default sort; the redundant default is then dropped.

Going back to our example above, the past participle suffix in VERB is of nonmonotonic sort (⊤, {+t}), while that of MIXED-VERB is (+en, {}). Their nonmonotonic sort unifier is (+en, {}), since +t ⊓s +en is undefined.

Theorem 1 Nonmonotonic sort unification is a commutative and associative operation.

Proof: The method is defined simply in terms of sort unification and set union, both of which are commutative and associative. The proof that NS unification retains these properties is straightforward (though tedious). □

Because of this, we may meaningfully speak of the nonmonotonic unifier of a set of nonmonotonic sorts, and not worry about the order that they are presented in.
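Definition 3 is short enough to transcribe directly. The Python sketch below assumes a flat hierarchy of atomic suffix sorts (any two distinct atoms are incompatible, `'T'` is top, `None` stands for failure); the function names are invented for the example.

```python
# A hedged sketch of Definition 3 over a flat sort hierarchy.

def glb(a, b):
    """Sort unification a ⊓s b in a flat hierarchy."""
    if a == 'T':
        return b
    if b == 'T':
        return a
    return a if a == b else None

def ns_unify(n1, n2):
    """Nonmonotonic sort unification (f1, Δ1) ⊓N (f2, Δ2)."""
    (f1, d1), (f2, d2) = n1, n2
    f = glb(f1, f2)
    if f is None:
        return None                   # strict parts clash: undefined
    # keep each default, specialized by f, unless it is inconsistent
    # with f or made redundant by f (i.e. d ⊓s f must be strictly below f)
    delta = {glb(d, f) for d in d1 | d2
             if glb(d, f) is not None and glb(d, f) != f}
    return (f, delta)

# Past-participle suffix of mahl: VERB contributes (⊤, {+t}),
# MIXED-VERB contributes (+en, {}); +t is dropped as inconsistent.
print(ns_unify(('T', {'+t'}), ('+en', set())))   # ('+en', set())
```

Because the body is just sort unification plus set union, swapping the two arguments gives the same answer, matching Theorem 1's commutativity claim.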
We said earlier that we could not be sure what sort a NS would turn out to be. We can, however, give the range of possible values. First we define a weaker notion of explanation. This notion is borrowed from Theorist (Poole et al. 1986). In that default reasoning system, a belief can be explained by showing that it follows from what is known together with a consistent set of hypotheses. In our case, what is known is the strict information, and the possible hypotheses are the default sorts. A set of hypotheses is consistent if its sort unification exists. Thus we have:

Definition 4 A set D is said to be a theory of a NS (f, Δ) if D ⊆ Δ and ⊓s D is defined. A theory D explains a sort t if t = f ⊓s (⊓s D).

The reason for including f in the definition of explanation is that D may be empty (in which case we take ⊓s D to be ⊤). We will say that a NS, n, explains a sort, t, if there is a theory of n which explains t. The only sort explained by (+en, {}) is +en itself. On the other hand, (⊤, {+t}) explains both ⊤ and +t.

Theorem 2 For each sort t that (f, Δ) explains, there is a unique maximal theory for t, equal to the union of all theories for t.

Proof: Given any two theories for t, say D1 and D2, t is the greatest lower bound of {f} ∪ D1 as well as the greatest lower bound of {f} ∪ D2. Thus t is the greatest lower bound of {f} ∪ D1 ∪ D2, and so D1 ∪ D2 is a theory for t. This is easily extended to finitely many theories. □

This allows us to associate a unique "best" theory with each explained sort.

The set of sorts that (f, Δ) explains is in general too large for our purposes. The idea of a default is that it is true unless proven false. Therefore, we want to restrict ourselves to sorts that include as many defaults as possible.

Definition 5 A sort t is a solution for a NS n = (f, Δ) if t is explained by a maximal theory of n. That is, there is an M ⊆ Δ such that t = f ⊓s (⊓s M), and for each D ⊆ Δ such that M ⊂ D, ⊓s D is not defined.
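For small sort sets, the solutions of a NS can be enumerated by brute force. The following Python sketch (invented names, flat hierarchy as before) collects all consistent theories, keeps the maximal ones, and returns the sort each maximal theory explains, in the spirit of Definitions 4 and 5.

```python
# A hedged sketch of Definition 5: enumerate solutions of a NS by
# searching for maximal consistent sets of defaults (flat hierarchy:
# 'T' is top, distinct atoms are incompatible, None stands for failure).
from itertools import combinations

def glb_set(sorts):
    """⊓s over a finite set of sorts; 'T' for the empty set."""
    out = 'T'
    for s in sorts:
        if out == 'T':
            out = s
        elif s != 'T' and s != out:
            return None
    return out

def solutions(ns):
    f, delta = ns
    delta = list(delta)
    # all consistent theories (Definition 4), including the empty one
    theories = [set(m) for k in range(len(delta) + 1)
                for m in combinations(delta, k)
                if glb_set(set(m) | {f}) is not None]
    # keep only maximal theories, then read off the explained sorts
    maximal = [m for m in theories if not any(m < d for d in theories)]
    return {glb_set(m | {f}) for m in maximal}

print(solutions(('T', {'+t'})))        # {'+t'}
print(solutions(('+en', set())))       # {'+en'}
print(solutions(('T', {'+ed', '0'})))  # two solutions, one per default
```

The last call shows how a single NS can have several solutions: +ed and 0 are each consistent with the strict part ⊤ but not with each other, so each maximal theory picks one of them.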
The solutions for (+en, {}) and (⊤, {+t}) are +en and +t respectively. There may be multiple solutions to a single NS, because default sorts can be individually consistent with the strict sort while being inconsistent with each other. We say that a sort s satisfies a NS n if s is subsumed by some solution for n. Thus a solution for n is also a most general satisfier for n.

Theorem 3 The solutions of a NS n are the informationally maximal sorts explained by n. That is, if t is a solution of n then there is no s ≺ t such that n explains s.

Proof: Let t be a solution for n = (f, Δ) and let T be the maximal theory for t. Assume there is an s ≺ t explained by n and let D be some theory of n that explains s. We then have:

f ⊓s (⊓s(D ∪ T)) = f ⊓s ((⊓s D) ⊓s (⊓s T)) = (f ⊓s (⊓s D)) ⊓s (f ⊓s (⊓s T)) = s ⊓s t

Obviously T ⊆ (D ∪ T) (equality would require s = t as well), so T fails to be a maximal theory of n. That means t is not a solution: a contradiction. □

Feature Structures with Defaults

We have now said just about everything we want to say about nonmonotonic sorts. Nonmonotonically sorted feature structures (NSFSs) are easy to define, given what has gone before. You can think of a NSFS as a FS that is decorated with NSs instead of sorts. While this view makes them easier to understand, it should be remembered that they are more properly thought of as descriptions of feature structures. In particular, each NSFS stands for its set of solutions.

Definition 6 A nonmonotonically sorted feature structure is a tuple (Q, q, δ, Ω) where Q, q, and δ are defined as for feature structures, and Ω is a nonmonotonic-sorting function, Ω : Q → N.

Unification of NSFSs is carried out as for monotonic FSs, but with nonmonotonic sort unification replacing sort unification. Since nonmonotonic sort unification is order-independent, nonmonotonic unification is also free of ordering effects.

The notions of explanation and solution are easily extended to NSFSs. A node's sort information is local to it, and so we only need to look at each node separately.

Definition 7 A NSFS, F = (Q, q0, δ, Ω), explains a feature structure, C, if and only if

- C = (Q, q0, δ, θ), and
- ∀q ∈ Q. Ω(q) explains θ(q).

C is a solution for F if θ(q) is a solution for Ω(q) for each q ∈ Q.

A feature structure is said to satisfy a nonmonotonically sorted feature structure, F, if and only if it is subsumed by some solution for F.

VERB template
  <past tense suffix> default +ed
  <past participle suffix> default +ed
call lex VERB
DDOUBLE template VERB
  <past tense suffix> default +d+
  <past participle suffix> default +d+
nod lex DDOUBLE
STRONG template VERB
  <past tense suffix> isa 0
  <past participle suffix> isa +en
beat lex STRONG
forbid lex DDOUBLE,STRONG
  <past tense stem> isa forbade

Figure 3: Multiple Defaults Lexicon

Given that we know how to find the solution(s) for a NSFS, it remains only to determine under what circumstances it is appropriate to do so. Since the purpose of the lexicon is to associate feature structures with lexical entries, it is clear that it is appropriate to take solutions at that level. Taking the solution at any other level would be an error. Nothing inherits from a lexical entry, so the information in a lexical entry cannot be overruled. Defaults appearing in templates, on the other hand, are likely to be overruled by information in lower templates and lexical entries.

Given these definitions, we can work out the full feature structures for the verbs given in Fig. 2. The lexical entry mahl gets from MIXED-VERB a strict value +en for the past participle suffix. From the VERB template it inherits a strict PP prefix, and a default PT suffix. The default PP suffix +t from VERB is incompatible with the strict PP suffix in MIXED-VERB, and so is dropped.
There is only one solution to the resulting description, one with PT mahlte and PP gemahlen. Similarly, we get spielte and gespielt for the VERB spiel, and zwang and gezwungen for the STRONG-VERB zwing.

Structures with Multiple Sorts
The example given above was very simple, involving as it did a very flat sort structure. Fig. 3 uses a set of sorts with some more order. In particular, it represents a doubled final d with the +d+ sort. (The example is only given to illustrate the method. It is not meant to represent an adequate linguistic analysis.) This combines with the +ed and +en sorts to make the +ded and +den sorts. The ∅ sort is incompatible with any other sort, and +ed is incompatible with +en. (See Fig. 4.)

The analysis for call is quite simple. It is a solution of the VERB template, and so takes the default endings to yield called and called. The value for beat is the solution to the STRONG template. Here the strict values for the suffixes overrule the defaults from VERB, and so the solution can be read directly from the template: beat and beaten.

Young 599

Figure 4: Simple Sort Hierarchy

The analysis for nod is not very complicated. The past tense suffix has two default values, +ed from VERB, and +d+ from DDOUBLE, giving NS (T, {+ed, +d+}). These are compatible, so both can be used in the solution, yielding a +ded suffix (nodded). The same reasoning exactly gives the (same) past participle, nodded.

The analysis for forbid is a little more complicated. Consider the past participle. The suffix here inherits three values: (T, {+ed}) from VERB, (T, {+d+}) from DDOUBLE, and (+en, {}) from STRONG. The +en is incompatible with the +ed, and so overrules it. The +d+ is still consistent with the +en, and so is retained. The resulting NS is (+en, {+den}) (remembering to unify +d+ with a copy of +en). The solution has +den as a suffix: forbidden.
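The analyses above can be reproduced mechanically. The following Python sketch is our own illustration, not code from the paper: it encodes the sort hierarchy of Fig. 4 as feature sets, defines sort unification as feature union when the result is a declared sort, and computes the solutions of a nonmonotonic sort (strict, defaults) as the informationally maximal results of unifying the strict sort with consistent subsets of the defaults, in the spirit of Theorem 3. The encoding and all names are assumptions made for illustration; '0' stands for the empty suffix ∅.

```python
from itertools import combinations

# Toy encoding of the sort hierarchy of Fig. 4: each sort is a set of
# features, and two sorts unify only when the union of their features
# is itself a declared sort.  '0' stands for the empty suffix.
SORTS = {
    'T':    frozenset(),
    '+ed':  frozenset({'ed'}),
    '+en':  frozenset({'en'}),
    '+d+':  frozenset({'d'}),
    '+ded': frozenset({'ed', 'd'}),
    '+den': frozenset({'en', 'd'}),
    '0':    frozenset({'zero'}),
}
BY_FEATURES = {v: k for k, v in SORTS.items()}

def meet(a, b):
    """Sort unification: the common subsort of a and b, or None."""
    return BY_FEATURES.get(SORTS[a] | SORTS[b])

def solutions(strict, defaults):
    """Solutions of the nonmonotonic sort (strict, defaults): the
    informationally maximal sorts obtained by unifying the strict
    sort with a consistent subset of the default sorts."""
    candidates = set()
    for r in range(len(defaults) + 1):
        for subset in combinations(defaults, r):
            t = strict
            for d in subset:
                t = meet(t, d) if t else None
            if t:
                candidates.add(t)
    # keep only the most specific (feature-maximal) candidates
    return {t for t in candidates
            if not any(u != t and SORTS[t] < SORTS[u] for u in candidates)}

print(solutions('T', ['+ed', '+d+']))    # nod: compatible defaults combine
print(solutions('+en', ['+ed', '+d+']))  # forbid PP: only +d+ survives
print(solutions('0', ['+ed', '+d+']))    # forbid PT: 0 overrules both
```

Run as is, the three calls yield {'+ded'}, {'+den'}, and {'0'}, matching nodded, forbidden, and forbade.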
For the past tense, however, the ∅ suffix overrules both the +ed and +d+, meaning that we get (∅, {}): forbade.

The analysis of forbid shows that nonmonotonic sorts can handle in a straightforward way what other systems require special methods for. Systems that conserve information have to inherit from STRONG before DDOUBLE so that the empty suffix for past tenses appears before the +d+ (otherwise it would get *forbaded as the past tense). Since there is no a priori reason to use this order, the system must give some way for the user to order templates other than simple inheritance order.

Systems that use replacement are even worse off for this example. These systems must start with VERB in order to correctly overrule the +ed suffixes. Given that, the information from STRONG must be added before that from DDOUBLE: otherwise the +ed and +d+ sorts would be combined to form +ded, and then +en would overwrite them both (yielding *forbiden). Conversely, the system must add the information from DDOUBLE before that from STRONG, lest the +d+ overwrite the ∅ (giving *forbaded). To solve this problem, the system must break one of DDOUBLE or STRONG into two parts, or must augment the default unification operator somehow.

Conclusion
In an effort to put defaults in a declarative framework, we have developed the concept of nonmonotonic sorts. Nonmonotonic sorts make a distinction between strict and default information. Maintaining this distinction allows us to define a simple unification operation that respects the information encoded in the nonmonotonic sorts. This operation allows us to use a single structure to keep track of multiple possible solutions in a completely deterministic way. Since the method is declarative, multiple inheritance with defaults can be carried out without regard to the order that information is presented. The method as described above only deals with atomic information.
The method can easily be extended to deal with path information (using a node instead of a sort). To say that two paths are by default equal means that the nodes at the ends of those paths are by default the same structure (as opposed to other systems, where it usually means that they have similar information by default). We are currently investigating the properties of such a system, including the question of how such unification tests would best be carried out (adding default equations leads to non-local effects in the NSFS).

We have already developed a version of the method that uses priorities on default sorts to choose between multiple solutions. The most natural system of priorities would mirror the structure of the lexicon. This extension allows for defaults appearing at lower levels of the lexicon to overrule those appearing at more general levels.

We are also investigating the effect of typing information (Carpenter 1992) on nonmonotonic sorts. Such information allows a limited form of inference to be carried out within a FS. As with default equations, this leads to non-local effects in the FS. In both cases, a solution FS may not be made up entirely of solution sorts, but will consist entirely of explained sorts.

References
Bouma, Gosse 1990. Defaults in unification grammar. In Proceedings of the 1990 Conference of the Association for Computational Linguistics. 165-172.
Carpenter, Bob 1991. Skeptical and credulous default unification with applications to templates and inheritance. In Proceedings of the Acquilex Workshop on Defaults and the Lexicon.
Carpenter, Bob 1992. The Logic of Typed Feature Structures. Cambridge University Press. To appear.
Kaplan, Ronald 1987. Three seductions of computational linguistics. In Linguistic Theory and Computer Applications. Academic Press, London. 149-188.
Poole, David; Goebel, Randy; and Aleliunas, Romas 1986. Theorist: A logical reasoning system for defaults and diagnosis.
Research Report CS-86-06, University of Waterloo.
Shieber, Stuart 1986. An Introduction to Unification-Based Approaches to Grammar, volume 4 of CSLI Lecture Notes. University of Chicago Press, Chicago.
Shieber, Stuart 1987. Separating linguistic analyses from linguistic theories. In Linguistic Theory and Computer Applications. Academic Press, London. 1-36.
Walther, Christoph 1988. Many-sorted unification. Journal of the ACM 35(1):1-17.
Ideal Introspective Belief
Kurt Konolige*
Artificial Intelligence Center, SRI International
333 Ravenswood Avenue, Menlo Park, CA 94025
konolige@ai.sri.com

Abstract
Autoepistemic (AE) logic is a formal system characterizing agents that have complete introspective access to their own beliefs. AE logic relies on a fixed point definition that has two significant parts. The first part is a set of assumptions or hypotheses about the contents of the fixed point. The second part is a set of reflection principles that link sentences with statements about their provability. We characterize a family of ideal AE reasoners in terms of the minimal hypotheses that they can make, and the weakest and strongest reflection principles that they can have, while still maintaining the interpretation of AE logic as self-belief. These results can help in analyzing metatheoretic systems in logic programming.

Introduction
What kind of introspective capability can we expect an ideal agent to have? This question is not easily answered, since it depends on what kind of model we take for the agent's representation of his own beliefs. Autoepistemic logic (Moore [10]) uses a sentential or list semantics: the agent's beliefs are pictured as a list of sentences, derivations operate over that list, and an arrow links the objective sentences to the real world they describe.

*The research reported in this paper was supported by the Office of Naval Research under Contract No. N00014-89-C-0095.

The beliefs of the agent are represented by sentences in a formal language. For simplicity, we consider just a propositional language L0, and a modal extension L1 which has modal atoms of the form Lφ, where φ is a sentence of L0. The arrow indicates that the intended semantics of the beliefs from L0 is given by the real world, e.g., the belief q is the agent's judgment that q is true in the real world. Of course an agent's beliefs may be false, so that in fact q may not hold in the world.
On the other hand, beliefs of the form Lφ refer to the agent's knowledge of his own beliefs, so the semantics is just the belief set itself. An agent starts with an initial set of beliefs, the premises. Through assumptions and derivations, he accumulates further beliefs, arriving finally at a belief set that is based on the premises. In order for an agent to be ideally introspective, the belief set Γ must satisfy the following equations:

The premises are in Γ.
φ ∈ Γ and φ ∈ L0 → Lφ ∈ Γ    (1)
φ ∉ Γ and φ ∈ L0 → ¬Lφ ∈ Γ

Any set Γ from L1 that satisfies these conditions, and is closed under tautological consequence, will be called L1-stable (or simply stable) for the premises. The definition and term "stable set" are from Stalnaker [13]. The beliefs are stable in the sense that an agent has perfect knowledge of his own beliefs according to the intended semantics of L, and cannot infer any more atoms of the form Lφ or ¬Lφ.

Although an ideal agent's beliefs will be a stable set containing his premises, not just any such set will do. For example, if the premises are {p ∨ q}, one stable set is {p ∨ q, p, Lp, L(p ∨ q), ...}. This set contains the belief p, which is unwarranted by the premises. The constraint of making the belief set stable guarantees that the beliefs will be introspectively complete, but it does not constrain them to be soundly based on the premises. Moore recognized this situation in formulating autoepistemic logic; his solution was to ground the belief set by making every element derivable from the premises and some assumptions about beliefs.

Konolige 635
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

The reason he needed a set of assumptions is that negative introspective atoms (of the form ¬Lφ) are not soundly derivable from the premises alone. For example, consider the premise set {¬Lp ⊃ q, p ∨ q}. We would like to conclude ¬Lp, since there is no reasonable way of coming to believe p.
But an inference rule that would allow us to conclude ¬Lp would have to take into account all possible derivations, including the results of its own conclusion. This type of circular reasoning can be dealt with by adding a set of assumptions about what we expect not to believe, and checking at the end of all derivations that these assumptions are still valid.

In autoepistemic logic, a belief set T is called grounded in premises A if all of its members are tautological consequences of A ∪ LT0 ∪ ¬LT̄0, where LT0 = {Lφ | φ ∈ T ∩ L0}, and ¬LT̄0 = {¬Lφ | φ ∈ L0 and φ ∉ T}. This concept of groundedness is fairly weak, since it relies not only on assumptions about what isn't believed (¬LT̄0), but also about what is (LT0). In this paper we consider belief sets that use only the assumptions ¬LT̄0 in forming the belief set T. Everything else in the belief set will follow deductively (and monotonically) from the premises A and the assumptions ¬LT̄0. In some sense ¬LT̄0 is the minimal set of assumptions that we can use in this manner; for every smaller set, we have to resort to nonmonotonic rules, such as negation-as-failure [6], in order to form a stable set. For this reason we call a belief set grounded in A and ¬LT̄0 ideally grounded.

Ideally grounded logics are similar to the modal nonmonotonic logics defined in [8, 12, 7], but allow an agent to make fewer assumptions about his own beliefs. The main difference is that ideally grounded logics are more grounded in the premises than modal nonmonotonic logics, and in general will have fewer unmotivated extensions (see Section ). In the rest of this paper we explore ideally grounded belief sets from the perspective of introspective reflection principles. We are able to characterize the minimal set of principles that will yield a stable set of beliefs, and also (once nested belief operators are introduced) the maximal ones.
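For finite propositional theories, ideally grounded extensions can be found by guess and check. The Python sketch below is our own illustration, not code from the paper: each modal atom Lφ is treated as an opaque propositional atom, a guess fixes which objective sentences are believed, only the negative assumptions ¬Lφ are added up front, the set is closed under rules corresponding to Taut and Reflective Up, and a guess is kept exactly when it reproduces itself. On Moore's example {¬Lp ⊃ q, p ∨ q} it finds the single extension, in which ¬Lp is assumed and q is believed.

```python
from itertools import product

def entails(premises, goal, atoms):
    """Brute-force propositional entailment; modal atoms are opaque."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(f(v) for f in premises) and not goal(v):
            return False
    return True

def rn_extensions(premises, modal, atoms):
    """Guess-and-check enumeration of ideally grounded extensions.
    modal maps an atom name standing for L(phi) to phi's formula."""
    extensions = []
    for bits in product([False, True], repeat=len(modal)):
        believed = dict(zip(modal, bits))
        # only negative assumptions -L(phi) are added up front
        gamma = list(premises) + [
            (lambda v, n=n: not v[n])
            for n, b in believed.items() if not b]
        proved, changed = set(), True
        while changed:                       # close under Reflective Up
            changed = False
            for n, phi in modal.items():
                if n not in proved and entails(gamma, phi, atoms):
                    proved.add(n)
                    gamma.append(lambda v, n=n: v[n])  # add L(phi) as a fact
                    changed = True
        if all((n in proved) == believed[n] for n in modal):
            extensions.append(believed)
    return extensions

# Moore's example: {-Lp -> q, p v q}
A = [lambda v: v['Lp'] or v['q'],    # -Lp -> q
     lambda v: v['p'] or v['q']]     # p v q
print(rn_extensions(A, {'Lp': lambda v: v['p']}, ['p', 'q', 'Lp']))
# [{'Lp': False}]
```

The single surviving guess assumes ¬Lp; within it q is entailed while p is not, which is exactly the grounded conclusion wanted above. The enumeration is exponential and is meant only to make the fixed point concrete.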
The resultant family of introspective logics fills in a hierarchy between strongly and moderately grounded autoepistemic logic [5], and suggests that the moderately grounded fixed-point is the best system for an ideal agent with perfect awareness of his beliefs.

Minimal ideal introspection
In this and the following section we restrict the language to L1, containing no nesting of the belief operator. This presents a simple system to explore the consequences of ideal introspection. In Section we relax this restriction and consider the fully nested modal language L.

An ideally grounded introspective agent determines his belief set using the following fixed-point equation:

T = {φ | A ∪ ¬LT̄0 ⊢S φ},    (2)

where S is some system of inference rules. Any set T that satisfies this equation will be called an ideally grounded extension of A. The set T0 = T ∩ L0 is the kernel of T.

In the remainder of this section we consider the minimal set of rules S that guarantees a stable belief set for T. Because a stable set is closed under tautological consequence, the rules S must contain a complete set of propositional rules. In addition, whenever φ is in the belief set, we want to infer Lφ. The following two rules fulfill these conditions.

Rule Taut. From the finite set of sentences X infer φ, if φ is a tautological consequence of X.
Rule Reflective Up. From φ infer Lφ, if φ ∈ L0.

Proposition 1 Let RN be the rules Taut and Reflective Up. Every RN-extension of A is an L1 stable set containing A.
Proof. Every extension is closed under tautological consequence by rule Taut, and the premises must be in it, by the properties of ⊢. The condition φ ∈ Γ and φ ∈ L0 → Lφ ∈ Γ holds because of rule Reflective Up. The condition φ ∉ Γ and φ ∈ L0 → ¬Lφ ∈ Γ holds since any proposition φ not in T will be part of the assumptions ¬LT̄0.

Proposition 2 If for every set A ⊆ L1, the S-extension of A is an L1 stable set containing A, then Taut and Reflective Up are admissible rules of S.
Proof. If Taut is not an admissible rule for some extension T, then it cannot be closed under tautological consequence, and is not a stable set. Similarly, if Reflective Up is not admissible, T will contain φ but will not contain Lφ for some proposition φ.

These two propositions show that the rules RN form the minimal logic for ideally grounded agents, in the sense that RN extensions produce stable belief sets, and they must be included in any system that produces such sets. Further, every RN extension of A is minimal for A: there is no stable set S containing A such that S0 ⊂ T0.

Proposition 3 Every RN extension of A is a minimal stable set for A.
Proof. Suppose there is a stable set U for A whose kernel is a proper subset of T's. Then U must also satisfy the fixed-point condition, since the rules Reflective Up and Taut are admissible for stable sets (Proposition 2). By hypothesis the set ¬LŪ0 contains ¬LT̄0, and so U0 must contain every element of T0, a contradiction.

The proof of this proposition points to a more general result for any class of rules that are sound with respect to the stable set conditions. An inference rule is sound with respect to stable sets if, whenever its antecedents are contained in a stable set, its consequent also must be (e.g., Reflective Up is sound because if φ is in a stable set, Lφ must be also).

Proposition 4 If the rules S are sound, then any S-extension of A is a minimal stable set for A.

636 Representation and Reasoning: Belief

Proof. Suppose there is a stable set U for A whose kernel is a proper subset of T's. Then U must also satisfy the fixed-point condition, since the rules S are admissible for stable sets. By hypothesis the set ¬LŪ0 contains ¬LT̄0, and so U0 must contain every element of T0, a contradiction.

Groundedness, autoepistemic, and default logic
In this section we relate ideally grounded extensions to their close relatives, default logic and AE extensions.
Ideal groundedness is somewhat weaker than default logic and strongly grounded AE extensions, but stronger than moderately grounded ones. Simple as it is, the system RN is almost equivalent to default logic [11]. It is not quite as strongly grounded as the latter; for while there exists a translation from DL to RN that preserves extensions, the inverse translation fails in a few cases. We will assume that the reader is familiar with DL. A default theory (W, D) consists of a set of first-order sentences W and a set of defaults D of the form

α : β1, ..., βn / γ

Here only the propositional case will be considered, but extending the results to first-order languages is straightforward (as long as no quantifying-in is allowed, e.g., sentences of the form ∀x.Lφ(x)). To get a translation to RN, simply take W and add a translation of each default rule, as follows:

L(α ∧ α) ∧ ¬L¬β1 ∧ ... ∧ ¬L¬βn ⊃ γ    (3)

Note the form of the first modal atom: L(α ∧ α), rather than Lα. Since the beliefs of an agent are closed under tautological consequence, this amounts to the same constraint on beliefs; however, the difference is important for finding extensions, as will be made clear shortly.

Proposition 5 U is the kernel of an RN extension of A iff it is a DL extension of (W, D).
Proof. Let A = W ∪ {L(α ∧ α) ∧ ¬L¬β1 ∧ ... ∧ ¬L¬βn ⊃ γ | α : β1, ..., βn/γ ∈ D}. We will show that the set Γ(U) = {φ ∈ L0 | A ∪ ¬LŪ ⊢RN φ} is the least set satisfying the properties:
1. W ⊆ Γ(U).
2. Γ(U) is closed under tautological consequence.
3. For α : β1, ..., βn/γ ∈ D, if α ∈ Γ(U) and each ¬βi ∉ U, then γ ∈ Γ(U).
The first two properties follow directly from the definition of Γ(U). The third property follows by simple propositional inference, given the form of A. To show Γ(U) is minimal, note that it is the set of tautological consequences of W and some set γi of conclusions of defaults. To make it smaller, we would have to eliminate some of the γi.
But it is clear from the discussion below that the only way a γi could be present is if the third condition defining Γ(U) holds; thus all γi must be present, and Γ(U) is minimal.

We can reduce the definition of extensions (2) to use only the kernel. This gives a fixed-point condition defining extensions as U = Γ(U), which is the same as for default logic. This is a simple translation of DL into a minimal AE logic. It is the same as the translation in [5] (except for the use of α ∧ α instead of α), but there it was necessary to limit the extensions of the AE logic to strongly grounded ones, a syntactic method based on the form of the premises. No such method is needed here.

The stipulation on the form of L(α ∧ α) is necessary to prevent derivations that arise from the interaction of modal atoms. Consider the two theories:

{¬Lp ⊃ p, Lp ⊃ p}
{¬Lp ⊃ p, L(p ∧ p) ⊃ p}

The first one has an RN extension Cn(p), because p is a tautological consequence of the initial constraints. On the other hand, it is not a consequence of the second set of constraints, because ¬Lp and L(p ∧ p) are consistent from the view of propositional logic. Since there is no way to derive p by any of the rules, Cn(p) cannot be an extension; yet assuming ¬Lp leads to the derivation of p and a contradiction. So the second set has no extensions.

To get autoepistemic logic, we need to include more assumptions about beliefs in the fixed point equation 2. Let us define open RN extensions as solutions of the equation

T = {φ | A ∪ LT0 ∪ ¬LT̄0 ⊢RN φ},    (4)

where LT0 is the set {Lφ | φ ∈ T0}. Actually, the presence of the Up rule is redundant here. From results in [5], it is easy to show the following proposition.

Proposition 6 T is an open RN extension of A iff it is the kernel of an AE extension of A.

The kernel of an AE extension is just the part of the extension from L0. The kernel completely determines the extension. So the basic difference between AE and default logic is based on the groundedness of the extensions; that is, AE logic lets an agent assume belief in a proposition α, and use that assumption to derive the very same proposition as part of the final set of beliefs. In default logic, all derivations must be ideally grounded, so that assumptions are of the form ¬Lφ. The circular reasoning possible in AE logic was noted in [5], and two increasingly stronger notions, moderate and strong groundedness, were defined as a means
So the basic difference between AE and default logic is based on the groundedness of the extensions, that is, AE logic lets an agent assume belief in a. proposition a, and use that assumption to derive the very same proposition as part of the filial set of beliefs. In default logic, all derivatious must be ideally grounded, so that, assumptions a.re of the form lL+. The circu1a.r reasoning possible in AE logic was noted in [5], and two increasingly stronger notions, moder- ate and strong groundedness, were defined as a means Konolige 637 of throwing out extensions that exhibit such reason- ing. Moderately grounded extensions of A are defined as those AE extensions are also minimal stable sets con- taining A. Strongly grounded extensions use a syntactic method to eliminate all inferences from facts to belief propositions, e.g., even with the premise set A = (La 3 a, TLa ZI a) (5) there is no derivation of a, because La and -1La are not allowed to interact. This means that different sets A, even if they are propositionally equivalent, can generate different extensions. Strongly grounded extensions are equivalent to default logic extensions under the simple translation of default rules: a : Pl, * * A/Y H LwI~~~~A--A~L~~ 3 y. (6) Note the difference with the translation of (3): Lo in- stead of L(a A a). Here, rather than defining restrictions on extensions, we have taken the approach of trying to find the min- imal reflective principles that will allow an agent full knowledge of his beliefs, at the same time trying to make them as grounded as possible. The result is a logic that is somewhere between moderately and strongly grounded AE extensions, and which can imitate the groundedness conditions of default logic. Let us define one fixed point logic Sl to be included in another S2 (Sl -+ S2) if for any premise set the extensions of Sl are always extensions of S2, and for some premise set there is an extension of S2 that is not an extension of Sl. 
S1 is the stronger nonmonotonic logic if we define φ as a consequence of a premise set just in case φ is in every extension of the premises. The relationship among the various AE logics can be diagrammed as follows: (7)

Nested belief
So far we have preferred to forego the complications of beliefs about beliefs, using the language L1 that contains no nesting of modal operators. This language and its semantics can be extended in a straightforward way. Let L be the propositional modal language formed from L0 by the recursive addition of atoms of the form Lμ, with μ ∈ L.

The semantic equations for a stable set (1) are modified to take away the restriction of beliefs being in L1:

The premises are in Γ.
φ ∈ Γ → Lφ ∈ Γ    (8)
φ ∉ Γ → ¬Lφ ∈ Γ

Any set from L that satisfies these conditions, and is closed under tautological consequence, will be called a stable set for A (in contrast to L1-stable, which does not consider nested modal atoms).

Consider a premise set A that is drawn from L1, as before. In every RN extension of A there is complete knowledge of what facts are believed or disbelieved, i.e., Lφ or ¬Lφ is present for every nonmodal φ. The addition of the nested modal atoms should make no difference to this picture, except to reflect the presence of the belief atoms in the correct way. So, for example, if La is in an RN extension S, then LLa should be in the extension when we consider L; and similarly L¬La should be present if ¬La is in S. This much is easily accomplished by removing the restriction on Reflective Up, and giving it its usual name from modal logic.

Rule Necessitation. From φ infer Lφ.

This rule will add positive modal atoms; but we need also to add negative ones. For example, if La is in an extension, and the extension is consistent, then ¬La is not in it, and this fact should be reflected in the presence of ¬L¬La.
In fact we want to infer ¬Lμ for every sentence μ that will not be in the extension, given that we have full knowledge of the belief atoms from L1. Suppose that there is a sentence La ∨ ¬Lb ∨ c that is not in S, where c is a nonmodal sentence. This implies that, for stable S, ¬La ∈ S, Lb ∈ S, and ¬Lc ∈ S. So from these latter sentences we should infer ¬L(La ∨ ¬Lb ∨ c). This is what the following rule does.

Rule Fill. From ¬Lαi, Lβj, ¬Lγ, and μ ⊃ (∨i Lαi ∨ ∨j ¬Lβj ∨ γ), infer ¬Lμ.

The system NRN consists of the rules Taut, Necessitation, and Fill. The basic properties of NRN extensions are that they are minimal stable sets, the rules are essential, and they are conservative extensions of RN fixed points.

Proposition 7 If for every set A ⊆ L, the S-extension of A is a stable set containing A, then Taut, Necessitation, and Fill are admissible rules of S.
Proof. Taut and Necessitation are the same as for Proposition 2. For Fill, note that every consistent stable set containing the premises to the rule cannot contain μ, and so must contain ¬Lμ.

Proposition 8 Every NRN extension of A is a stable set for A.
Proof. Assume that T is a consistent NRN extension of A. By rule Necessitation, the first part of the semantic definition is satisfied. For negative modal atoms, we proceed by induction on the level of nesting of L. By definition and the rule Necessitation, either Lφ or ¬Lφ is in T for every nonmodal φ. Suppose a sentence s = (∨i Lαi ∨ ∨j ¬Lβj ∨ γ) ∈ L1 is not in T. Then each of ¬Lαi, Lβj and ¬Lγ is in T. By rule Fill, ¬Lμ is in T for any μ such that μ ⊃ s. Hence for every sentence ν ∈ L1, the negative semantic rule is satisfied, and either Lν or ¬Lν is in T. By induction, it can be shown that the semantic rule is satisfied for all levels of nesting.

Extensions that are stable sets are also minimal, as for the nonnested language.
Proposition 9 If the rules S are sound with respect to stable sets, and the S-extension of A is a stable set, then it is a minimal stable set for A.
Proof. Same as for Proposition 4.

Proposition 10 If A ⊆ L1, then the kernel of every RN extension is the kernel of an NRN extension, and conversely, the kernel of every NRN extension is the kernel of an RN extension.
Proof. The converse is obvious, since the rules NRN include RN. For the original direction, assume we have an RN extension S, which contains Lφ or ¬Lφ for every φ ∈ L0. From the proof of Proposition 8, it is clear that the set T = {μ | S ⊢NRN μ} is a stable set for A, and further it is an NRN extension, since all elements of its kernel are derivable from A.

We can show that the Fill rule is redundant if the schema K ([Lφ ∧ L(φ ⊃ ψ)] ⊃ Lψ) is present.

Proposition 11 The rule Fill is admissible in any system containing K, Taut and Necessitation.
Proof. Suppose each of ¬Lαi, Lβj and ¬Lγ is in A, together with all instances of K. Let ρ = ∧i ¬Lαi ∧ ∧j Lβj. By Taut and Up, L[ρ ∧ (ρ ⊃ γ) ⊃ γ] is derivable, and from schema K and ¬Lγ we have ¬L[ρ ∧ (ρ ⊃ γ)]. Since we also have Lρ by Up, this gives (using K) ¬L(ρ ⊃ γ). Again by K and Taut, we could derive ¬Lν for any ν such that ν ⊃ (ρ ⊃ γ) is a tautology.

Because nested modal atoms are propositionally distinct from nonnested ones, it is possible to derive new translations from default logic to sentences of L such that all extensions are strongly grounded and hence equivalent to default logic extensions. There are many ways to do this; all that is required is to translate α : β/γ to a sentence in which α and β are put under different nestings of modal operators that correspond to the single nesting semantics.
For example, three such translations are:

a) LLα ∧ ¬L¬β ⊃ γ
b) Lα ∧ ¬LL¬β ⊃ γ    (9)
c) Lα ∧ L¬L¬β ⊃ γ

Reflective reasoning principles
The systems RN and NRN are minimal rules that might be used by an agent reasoning about its own beliefs. They have the nice characteristic of giving minimal stable sets, and so are somewhere between strongly and moderately grounded. But are there other reflective reasoning principles that could be incorporated? In this section we will give a partial answer to this question by examining several standard modal axiomatic schemata, and showing how some of them are appropriate as general reasoning principles, while others must be regarded as specific assumptions about the relation of beliefs to the world.

The most well-known modal schemata are the following.

K. L(φ ⊃ ψ) ⊃ (Lφ ⊃ Lψ)
T. Lφ ⊃ φ
D. Lφ ⊃ ¬L¬φ    (10)
4. Lφ ⊃ LLφ
5. ¬Lφ ⊃ L¬Lφ

The first question we could ask is: which of these schemata are sound with respect to the semantics of amalgamated belief sets? It should be clear that K, 4 and 5 are all sound, since if their antecedents are true of a stable set, then so are their consequents. The schema D is true only of consistent stable sets, as we might expect, since it says that a sentence can be in a belief set only if its negation is not. The schema T, on the other hand, is not semantically valid. It is possible for an agent to believe a fact φ, but that fact may not be true in the real world. Asserting T for a particular fact φ says something about the agent's knowledge of how his beliefs are related to the world, and causes different reasoning patterns to appear in an agent's inferences about his own beliefs.

Here is a short example of how the sentence Lp ⊃ p could be used by an agent. Consider the propositions:

p = The copier repairman has arrived
q = The copier is ok

Suppose an agent believes that if he has no knowledge that the repairman has arrived, the copier must be ok. Further he believes that the copier is broken.
We represent this as:

A = {¬q, ¬Lp ⊃ q}.    (11)

The premises A do not have any NRN or AE extension, because while Lp is derivable, p is not. One solution is to give the agent confidence in his own beliefs, e.g.,

A' = {¬q, ¬Lp ⊃ q, Lp ⊃ p}.    (12)

Now there is an NRN-extension in which p is true, since from Lp the agent can derive p. It is as if the agent says, "I believe that p, therefore p must be the case." Although one might not want to use this type of reasoning in a particular agent design, the point is that T sanctions a certain type of reasoning about the connection of beliefs to the world, and is thus a "nonlogical" axiom, similar to ¬Lp ⊃ q.

Different modal systems can be constructed by combining the different modal schemata with the inference rules Taut and Necessitation. Using our previous definition of inclusion, we show the following relations among the different versions of S-extensions.

Konolige 639
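Premise sets (11) and (12) above can be checked mechanically with a small entailment test; the sketch below is our own illustration, with Lp treated as an opaque propositional atom. It confirms that A' derives p outright (¬q forces Lp, and the schema T instance Lp ⊃ p then gives p), while the rival guess that p is not believed defeats itself: adding ¬Lp makes A' inconsistent, so p would be derived after all. Without Lp ⊃ p, premise set (11) cannot derive p at all.

```python
from itertools import product

def entails(premises, goal, atoms):
    """Brute-force propositional entailment over the listed atoms."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(f(v) for f in premises) and not goal(v):
            return False
    return True

ATOMS = ['p', 'q', 'Lp']
A_prime = [lambda v: not v['q'],             # -q
           lambda v: v['q'] or v['Lp'],      # -Lp -> q
           lambda v: v['p'] or not v['Lp']]  # Lp -> p (schema T for p)

# guess "p believed": no assumption added, and p follows from A' alone
print(entails(A_prime, lambda v: v['p'], ATOMS))              # True
# rival guess "p not believed": assuming -Lp contradicts -q, so p is
# entailed vacuously and the guess refutes itself
print(entails(A_prime + [lambda v: not v['Lp']],
              lambda v: v['p'], ATOMS))                       # True
# premise set (11) alone: p is simply not derivable
print(entails(A_prime[:2], lambda v: v['p'], ATOMS))          # False
```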
This theory has the single extension with kernel Cn(p). For the system Ii’, consider the pair Lp 3 p, lL(p A p) 3 p. This theory has no RN ex- tensions. But it does have a I<-extension, since in the system Ir’ one infers p. Hence K extensions and RN extensions are distinct. For the schema 4, consider the pair of sentences LLp 3 p, -L(p A p) > p. No I< or RN extensions exist; but there is a 1<4 extension, since in K4 the pair infers p. Similar pairs can be found for the other systems. I The top half are systems whose extensions are all subsets of AE logic. SG sta.nds for strongly grounded AE extensions, and MG for moderately grounded. The minimal ideally grounded system is NRN, and the max- imum is K45 or KD45, which is equivalent to MC (see [5]). An ideal introspective agent would use KD45 ex- tensions, which we call ideal extensions. Note that the schema D does not make any difference as far as ide- ally grounded extensions are concerned; in effect, the agent cannot use reasoning about self-belief to detect an incoherence in his beliefs. In fact all of the systems from NRN to KD45 are very similar. Their only difference comes from premise sets that contain sentences of the form ‘LPIP cu>P, where a > Lp is a theorem of the modal system. For example, in li’ we have L(p A p) > Lp, and a premise set as above with a = L(p A p) would distinguish Ii’ from NRN, in that the former would have an extension containing p. Similarly, a = 1LlLp could be used for 1<5. But the sentence ‘Lp 1 p is generally not one that captures a useful introspective reasoning pattern, and would probably not occur by design in an application. There thus seems to be no practical difference between NRN and KD45, since the additi0na.l axioms do not result in potentially interesting reasoning patterns. The second tier is present, for forma,1 completeness. 
The axiom schema T, we have argued, is a useful wa.y of characterizing a domain-dependent and proposition- dependent connection between the agent’s beliefs and Modal nonmonotonic fixed point equation: logics are based on the following where S is a modal system. McDermott [8] analyzed this equation for the systems T, S4, and S5. Subsequent investigations [12, 7] considered many other modal sys- tems, including most of those mentioned in this paper. The difference with ideally grounded extensions is the presence of assumptions containing nested atoms, e.g., -L- Lp. For an ideal agent, this amounts to an as- sumption of Lp, since any stable set not containing -Lp must contain Lp. In fact, modal nonmonotonic logics whose underlying modal system contains the schema 5 are all equivalent to AE logic. And as with AE logic, the schemas 5 and T combine to collapse the fixed point to monotonic S5. From the point of view of ideally grounded exten- sions, the assumption set -LT is too “la.rge.” The schema 5, which in ideally grounded extensions is just a principle of reasoning about derived beliefs, in modal nonmonotonic logic also interacts with nested negative assumptions to produce positive ones. The inclusion diagram for ideally grounded extensions is almost the same as that for the normal modal systems serving as a deductive base (see [2]), except for the schema D. But all modal nonmonotonic logics containing the I( and 5 schemas (but not T) are equivalent to weakly grounded AE logic because of their large assumption set, collapsing systems that are distinct in the ideally grounded case. Because of this, modal nonmonotonic logic misses the moderately grounded endpoint. In fact, no modal nonmonotonic logic produces only min- imal stable sets: in the simplest system N, containing only the necessitation rule and no logical axioms, the premises (Lp 3 p, 7LlLp > p) have two extensions, Cn() and Cn(p). Only the first of these is minimal. 
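Spelled out, the fixed-point construction for a modal deductive base S takes the standard form below. This is our transcription in the usual notation of the nonmonotonic-logic literature (cf. McDermott [8]), not a formula reproduced from this paper: a set of sentences T counts as an S-extension of the premises A iff

```latex
T \;=\; \mathrm{Cn}_{S}\bigl(\, A \;\cup\; \{\, \neg L\varphi \;:\; \varphi \notin T \,\}\,\bigr)
```

Here Cn_S is consequence in the modal system S; the nested negative assumptions such as ¬L¬Lp discussed above enter through the assumption set {¬Lφ : φ ∉ T}.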
Conclusion

We have presented the minimal logic (NRN) that an ideal introspective agent should use. It is minimal in the sense that the agent makes a minimal set of assumptions about his own beliefs, and employs a minimal set of rules necessary to guarantee that his beliefs are stable. An ideal introspective reasoner may enjoy more powerful rules of introspection, for example the modal schemas 4 and 5, but he should keep the assumptions about his beliefs to a minimum. The schema T is not a sound axiom for an introspective agent, but can be used to characterize a contingent connection between beliefs and the world.

The concept of ideally grounded extensions first appeared in [5], where the system KD45 was presented and proven equivalent to moderately grounded AE extensions.¹ Fixpoints of the systems T, S4 and S5 were introduced under the name of nonmonotonic ground logics in [14], and it was shown that the S5 logic was nondegenerate and consistent, i.e., does not reduce to monotonic S5, and always has an extension.

Ideally grounded logic might be employed in an analysis of metatheoretic systems, such as the DEMO and SOLVE predicates in logic programming [1, 3]. Using a predicate to represent provability can cause problems with syntax and consistency (see [4] for some comments). Instead, this research suggests using a modal operator, and defining a theory by the fixed point definition (2). Some appropriate notion of negation-as-failure would be used to generate the assumptions, and the rest of the fixed point could be calculated using the reflection rules.

640 Representation and Reasoning: Belief

References

[1] K. A. Bowen and R. A. Kowalski, Amalgamating language and metalanguage in logic programming, Computer and Information Science Report 4/81, Syracuse University (1981).
[2] B. F. Chellas, Modal Logic: An Introduction (Cambridge University Press, 1980).
[3] S.
Costantini, Semantics of a metalogic programming language, International Journal of Foundations of Computer Science 1 (3) (1990).
[4] J. des Rivieres and H. Levesque, The consistency of syntactical treatments of knowledge, in: J. Y. Halpern, ed., Conference on Theoretical Aspects of Reasoning about Knowledge (Morgan Kaufmann, 1986) 115-130.
[5] K. Konolige, On the relation between default and autoepistemic logic, Artificial Intelligence 35 (3) (1988) 343-382.
[6] J. W. Lloyd, Foundations of Logic Programming (Springer-Verlag, Berlin, 1987).
[7] W. Marek, G. F. Schwarz, and M. Truszczynski, Modal nonmonotonic logics: ranges, characterization, computation, in: Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, Cambridge, MA (1991).
[8] D. McDermott, Non-monotonic logic II, Journal of the ACM 29 (1982) 33-57.
[9] D. McDermott and J. Doyle, Non-monotonic logic I, Artificial Intelligence 13 (1-2) (1980) 41-72.
[10] R. C. Moore, Semantical considerations on nonmonotonic logic, Artificial Intelligence 25 (1) (1985).
[11] R. Reiter, A logic for default reasoning, Artificial Intelligence 13 (1-2) (1980).
[12] G. F. Schwarz, Autoepistemic modal logics, in: Conference on Theoretical Aspects of Reasoning about Knowledge, Asilomar, CA (1990).
[13] R. C. Stalnaker, A note on nonmonotonic modal logic, Department of Philosophy, Cornell University (1980).
[14] M. Tiomkin and M. Kaminski, Nonmonotonic default modal logics, in: Conference on Theoretical Aspects of Reasoning about Knowledge, Asilomar, CA (1990).

¹A slightly different fixed-point was used because of a technical difference in the form of monotonic inference in modal systems.
A Belief-Function Logic

Alessandro Saffiotti*
IRIDIA - Université Libre de Bruxelles
50 av. F. Roosevelt - CP 194/6 - 1050 Bruxelles - Belgium
asaffio@ulb.ac.be

Abstract

We present BFL, a hybrid logic for representing uncertain knowledge. BFL attaches a quantified notion of belief - based on Dempster-Shafer's theory of belief functions - to classical first-order logic. The language of BFL is composed of objects of the form F:[a,b], where F is a first-order sentence, and a and b are numbers in the [0,1] interval (with a ≤ b). Intuitively, a measures the strength of our belief in the truth of F, and (1−b) that in its falseness. A number of properties of first-order logic nicely generalize to BFL; in return, BFL gives us a new perspective on some important points of Dempster-Shafer theory (e.g., the role of Dempster's combination rule).

Introduction

Logic plays a central role in the task of representing knowledge in artificial intelligence. Much of logical tradition is concerned with two-valued logics, i.e. logics in which we can only talk about propositions being completely true or completely false (possibly, according to a given believer). This contrasts with the widely recognized fact that real world knowledge is almost invariably affected by uncertainty, and judgments about the truth of propositions are rarely categorical. Techniques for handling uncertainty have long been studied within artificial intelligence, and a number of formalisms developed - ranging from those based on probability theory (e.g. Pearl, 1988), to possibility theory (Zadeh, 1978; Dubois & Prade, 1988), to Dempster-Shafer's (D-S) theory of belief functions (Dempster, 1967; Shafer, 1976; Smets, 1988). However, these formalisms are normally grounded on some mathematical, rather than logical, model.
Recently, a few attempts at merging these formalisms with the logical tradition have been proposed in the AI literature (e.g., Nilsson, 1986; Ruspini, 1986; Bacchus, 1988; Dubois, Lang & Prade, 1989; Fagin & Halpern, 1989; Provan, 1990). While most of these approaches are based on the idea of defining some uncertainty measure over a set of possible worlds, the target is often different. Nilsson aims at (probabilistically) expressing uncertainty about truth of sentences: hence, he extends logic to have probability values as truth values (i.e., probabilities appear at the meta-level). From a different position, Bacchus focuses on the representation of probabilistic (but known with certainty) knowledge; accordingly, he puts probability values, and statements about them, inside the language of his logic (i.e., at the object level). Both Ruspini and Fagin & Halpern are more interested in investigating the foundations of uncertain reasoning: they conduct insightful possible-world analyses that, though grounded in probability theory, reach and scrutinize the theory of belief functions, and its representation and inference mechanisms. From a seemingly similar position, Provan analyzes D-S theory following a proof-theoretic approach.

In this paper, we take yet another position. We are interested in building a logic where partial belief can be represented and reasoned with. We follow what could be a usual schema for defining a first-order logic - going from language to semantics and to deduction procedures - but add a quantified notion of belief at each stage. The outcome is a "belief-function logic" (BFL, for short). BFL is similar in spirit to the logic proposed by Dubois, Lang and Prade (1989; also, Lang, 1991), but is based on belief functions rather than on possibility measures.

* Currently at SRI International, AI Center, 333 Ravenswood Ave., Menlo Park, CA 94025. E-mail: saffiotti@ai.sri.com.
The language of BFL is composed of objects of the form F:[a,b], where F is a first-order sentence, and a and b are numbers in the [0,1] interval (with a ≤ b). Intuitively, a measures the strength of our belief in the truth of F, and (1−b) that in its falseness. BFL is aimed at modelling partial and incomplete belief: belief is "partial" in that we can partly believe in the truth of a proposition (e.g., a can be strictly smaller than 1); it is "incomplete" in that we can remain completely non-committal about the truth status of some propositions (i.e., both a = 0 and b = 1). We give semantics to BFL in a way that makes it a "coherent" hybrid of first-order logic and standard D-S theory. Many formal properties of first-order logic generalize (in a "graded" form) to BFL - including properties of (partial) inconsistency. Also, BFL gives us a new perspective on some important points of D-S theory (e.g., the role of Dempster's combination rule). More concretely, BFL is a hybrid knowledge representation language that combines the power of first-order logic for representing knowledge with that of D-S theory for representing uncertainty about this knowledge. The desirability of such a tool has been advocated in (Saffiotti, 1990). Moreover, and differently from most of the above logics, BFL is equipped with a (non-standard) deduction procedure. Finally, though BFL is based on first-order logic and belief functions, it can be extended to other languages and/or uncertainty formalisms.

In the rest of this paper, we describe the syntax and semantics of BFL, and discuss some of its properties. We also analyze a particular class of models for BFL, called D-models, based on Dempster's combination rule, and outline a deduction procedure for BFL. A full treatment of BFL, and the proofs of the theorems, can be found in (Saffiotti, 1991a).

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Language

We consider a standard first-order (f.o.)
language, with the usual operators ¬, ∧ and ∀, plus the abbreviations F∨G for ¬(¬F ∧ ¬G); F⊃G for ¬F∨G; and ∃x.P(x) for ¬∀x.¬P(x). We define our language as:

ℒ = { F:[a,b] | F a first-order sentence, and 0 ≤ a ≤ b ≤ 1 }.

We call a formula of ℒ a bf-formula (for "belief-function formula"). A bf-formula F:[a,b] may be thought of as expressing how a unitary amount of belief is distributed among the propositions "F is true" (a), and "F is false" (1−b). Roughly, a and b play the role of the Bel and Pl measures in Dempster-Shafer theory. The quantity b−a may be read as the amount of belief we leave uncommitted (or our "ignorance" about the truth of F). In particular, F:[1,1] represents complete confidence in F's being true; F:[0,0] represents complete confidence in its being false; and F:[0,1] represents complete ignorance about its truth state. Notice that we impose (internal) consistency of single items of knowledge by requiring that a ≤ b.

Example 1. "Italian(alex):[0.6, 0.8]" is a bf-formula, with intended meaning "We believe to the extent 0.6 that Alex is Italian; also, we believe to the extent 0.2 that he is not". "¬Italian(alex):[0.2, 0.4]" represents the same information.

Example 2. "∀x.drinker(x) ⊃ smoker(x) : [0.7, 1]" is a bf-formula, with intended meaning: "we believe with strength 0.7 that all drinkers are also smokers". It is important to notice that we are concerned here with what we call "epistemic" uncertainty: what the 0.7 above measures is our partial belief about the (complete) truth of the given formula. Other readings are in principle possible: e.g., "we believe that the fact that all drinkers are also smokers is partially (0.7) true" (complete belief about partial truth); or "we believe that 70% of drinkers are also smokers" (complete belief about a statistical fact). BFL is meant to model the epistemic uncertainty reading.

Example 3. "∀x. (∃y.
alarm-report(y,x) ∧ reliable(y)) ⊃ alarm(x) : [0.95, 1]" is a bf-formula with intended meaning "We strongly believe (0.95) that, whoever is X, if some reliable person reports that X's alarm is ringing, then X's alarm is indeed ringing". Notice that the item of knowledge in the last example (adapted from Pearl, 1988) could not be expressed in a standard Dempster-Shafer formulation, if not by enumerating all the possible Xs who have an alarm, and, for each one of them, all the possible Ys who could report about her alarm ringing (similarly, Pearl must redefine his network every time a new element is added to his burglary example).

Given a set Φ of bf-formulae, we let Φ̂ = { F | F:[a,b] ∈ Φ } be the set of f.o. sentences obtained by dropping the uncertainty components from Φ, and we call it the "first-order component" of Φ. In particular, ℒ̂ denotes the f.o. language from which ℒ has been built. We use F, F1, F2, G, ... as metalinguistic variables ranging over f.o. formulae; φ, φ1, φ2, ψ, ... for bf-formulae; Φ, Ψ, ... for sets of bf-formulae.

We give our language a semantics in the model-theoretic style. As can be expected, this semantics makes use of concepts borrowed from both logical tradition and Dempster-Shafer theory. A similar construction, however, could be used to generate logics for partial belief based on different languages and/or different uncertainty formalisms.

The Interpretation Structures

To start with, we need to find a suitable class of mathematical structures to act as models of our logic. Given our language ℒ, we focus on its first-order component ℒ̂, and the set 𝒥 of standard f.o. interpretations for it;¹ we denote by ⊨ the standard f.o. truth relation. Each element of 𝒥 can be thought of as encoding a complete description of the state of the world. Given a f.o. formula F, ⟦F⟧ denotes the set of interpretations where F is true: ⟦F⟧ = { I ∈ 𝒥 | I ⊨ F }.
In order to model incompleteness of belief, we consider (non-empty) sets of interpretations, or "hyper-interpretations". We can think of a hyper-interpretation s as saying that the real "state of affairs" is one of those in s (but we do not know which one). The entailment relation can be extended to work on hyper-interpretations by:

s ⊨ F iff for each I ∈ s, I ⊨ F.

We draw a possible scenario in Fig. 1: there, s ⊨ F and s ⊭ ¬F; s′ ⊭ F and s′ ⊭ ¬F; s″ ⊭ F and s″ ⊨ ¬F.

In order to enter partiality of belief (or "uncertainty") into the picture, we consider functions Cr from ℘(𝒥) (the power set of 𝒥) to the unit interval [0,1]. Given a subset s of 𝒥, we read Cr(s) as the extent to which we believe that the real state of the world is one of the elements in s. Correspondingly, we read Cr(⟦F⟧) as the extent to which we believe that the real state of the world is one where F is true (i.e., our confidence in F). We may legitimately ask which class - if any - of the ℘(𝒥) → [0,1] functions constitutes a reasonable representation of (partial and incomplete) belief. The answer, of course, depends on our notion of what a "reasonable" representation of belief is. Though we do not mean to enter here the philosophical debate on this issue, we suggest three possible requirements R1-3 for a measure of belief Cr that agrees with f.o. entailment.

R1. (monotonicity) F ⊨ G implies Cr(⟦F⟧) ≤ Cr(⟦G⟧).
R2. (deductive closure) If Cr(⟦F⟧) > 0 and Cr(⟦F⊃G⟧) > 0, then Cr(⟦G⟧) > 0.
R3. (consistency) Cr(⟦F⟧) + Cr(⟦¬F⟧) ≤ 1.

We use belief functions as Cr functions: given any function f : ℘(𝒥) → [0,1], and any x ⊆ 𝒥, we let

Bel_f(x) = Σ_{y ⊆ x} f(y).

Because x ⊆ y implies Bel_f(x) ≤ Bel_f(y), the Bel_f functions satisfy R1. However, they do not satisfy R2 in general: Bel_f(⟦F⟧) = a and Bel_f(⟦F⊃G⟧) = b only imply Bel_f(⟦G⟧) ≥ max(0, a+b−1).

¹ To avoid unnecessary complications, we pass over the issue of the cardinality of 𝒥. However, we do assume to have countable domains.

Saffiotti 643
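The definition of Bel_f, and the possible failure of R2, are easy to check computationally. The following sketch is ours, not from the paper: interpretations are modelled as integers and sets of interpretations as frozensets, on a toy frame in which ⟦F⟧ = {1,2}, ⟦G⟧ = {2,3}, and hence ⟦F⊃G⟧ = ⟦¬F∨G⟧ = {2,3,4}.

```python
def bel(mass, x):
    """Bel_f(x) = sum of f(y) over all focal sets y contained in x."""
    return sum(m for y, m in mass.items() if y <= x)

W = frozenset({1, 2, 3, 4})       # the set of interpretations (the frame)
F = frozenset({1, 2})             # [[F]]
G = frozenset({2, 3})             # [[G]]
F_impl_G = frozenset({2, 3, 4})   # [[F > G]] = [[-F v G]]

# A mass function with two focal sets, half the mass on each premise.
mass = {F: 0.5, F_impl_G: 0.5}

print(bel(mass, F))         # 0.5
print(bel(mass, F_impl_G))  # 0.5
print(bel(mass, G))         # 0.0 -- both premises positive, conclusion not
print(bel(mass, W))         # 1.0 -- R1 (monotonicity) always holds
```

Note that Bel(⟦G⟧) = 0 still respects the bound max(0, a+b−1) = max(0, 0.5+0.5−1) = 0 stated in the text; R2 fails only in the sense that positive belief in the premises need not yield positive belief in the conclusion.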
We might drop R2 on the ground that it is too strong a requirement, and model a weaker notion of belief where we may have, e.g., P:[.4,1] and Q:[.3,1], and yet P∧Q:[0,1].² However, we opt here for a stronger notion, where R2 always holds. In particular, we will require that Bel_f(x∩y) ≥ Bel_f(x)·Bel_f(y).³ Finally, Bel_f satisfies R3 when f(∅) = 0: we do not enforce this condition, as we want to deal with partial inconsistency. We are now ready to define the interpretation structures for BFL.

Definition. A bf-interpretation for ℒ is a function 𝒜 : ℘(𝒥) → [0,1] such that:
(i) Bel_𝒜(𝒥) = 1;
(ii) if x ∪ y ≠ 𝒥 then Bel_𝒜(x∩y) ≥ Bel_𝒜(x)·Bel_𝒜(y).
A bf-interpretation 𝒜 is normal iff 𝒜(∅) = 0.

Normal bf-interpretations correspond to a particular class of basic probability assignments in usual D-S theory: those satisfying condition (ii) above. Given a bf-interpretation 𝒜, Bel_𝒜(⟦F⟧) measures the total amount of belief committed by 𝒜 to the proposition "F is true".

Entailment

We define satisfaction, validity and entailment for BFL.

Definition. Let 𝒜 be a bf-interpretation. Then
(i) 𝒜 is a bf-model of F:[a,b] (written 𝒜 ⊨ F:[a,b]) iff both Bel_𝒜(⟦F⟧) ≥ a and Bel_𝒜(⟦¬F⟧) ≥ 1−b;
(ii) 𝒜 is a bf-model of Φ (𝒜 ⊨ Φ) iff 𝒜 ⊨ φ for all φ ∈ Φ.

Example 4. Referring to Fig. 1, let 𝒜 be s.t. 𝒜(s) = 0.6, 𝒜(s′) = 0.2, 𝒜(s″) = 0.1, and 𝒜(𝒥) = 0.1. It is easy to check that 𝒜 is a bf-interpretation, and that: Bel_𝒜(⟦F⟧) = 0.6 and Bel_𝒜(⟦¬F⟧) = 0.1; i.e., 𝒜 ⊨ F:[.6,.9].

Example 5. For any subset s of 𝒥, the bf-interpretation Π_s given by: Π_s(s) = 1 and Π_s(x) = 0 otherwise, encodes a categorical but incomplete state of belief. In particular, Π_𝒥 encodes the empty state of belief, and Π_∅ the completely inconsistent one. If I is a f.o. interpretation, Π_{I} encodes a categorical and complete state of belief.

We say that 𝒜 satisfies φ if 𝒜 ⊨ φ. Notice that 𝒜 ⊨ F:[a,b] implies 𝒜 ⊨ F:[c,d] for any interval [c,d] that contains [a,b]. In Example 4, 𝒜 ⊨ F:[a,b] for any [a,b] ⊇ [.6,.9]. A bf-formula φ is said bf-valid (written ⊨ φ) iff 𝒜 ⊨ φ for all 𝒜. It is easy to see that all the bf-valid formulae take one of the forms: F:[0,1], for any F; F:[a,1], with F (f.o.) valid; or F:[0,b], with F unsatisfiable.

Definition. Φ bf-entails Ψ (written Φ ⊨ Ψ) iff every bf-model of Φ is a bf-model of Ψ.

Example 6. Let Φ = { ∀x.drinker(x)⊃smoker(x) : [.7,1], drinker(peter) : [.8,.9] }; we want to know whether or not Φ ⊨ smoker(peter):[.5,1]. In Fig. 2, we draw the set 𝒥 partitioned among the interpretations where "drinker(peter)" holds (upper part) and those where "smoker(peter)" holds (right part): ⟦drinker(peter)⟧ = A∪B, and ⟦smoker(peter)⟧ = B∪D. As ∀x.drinker(x)⊃smoker(x) ⊨ drinker(peter)⊃smoker(peter), ⟦∀x.drinker(x)⊃smoker(x)⟧ is a subset of ⟦drinker(peter)⊃smoker(peter)⟧ = B∪C∪D. Consider now any bf-model 𝒜 of Φ. By definition of bf-model, 𝒜 must be such that Bel_𝒜(A∪B) ≥ 0.8, Bel_𝒜(C∪D) ≥ 0.1, and Bel_𝒜(B∪C∪D) ≥ 0.7. Moreover, by definition of bf-interpretation, we must also have Bel_𝒜(B) ≥ 0.8·0.7 = 0.56 (and Bel_𝒜(C∪D) ≥ 0.07). Hence, Bel_𝒜(⟦smoker(peter)⟧) ≥ 0.56. As this is true for any 𝒜, Φ ⊨ smoker(peter):[.5,1].

We list some interesting properties of bf-entailment.

² A possible reading of F:[a,b] under this notion would be "F is true at least 100a% of times, and false at least 100(1−b)% of times".
³ This choice is clearly instrumental to have BFL behave according to Dempster-Shafer theory. Still other notions of belief can be captured by imposing different constraints. E.g., requiring Bel_f(x∩y) ≥ min(Bel_f(x), Bel_f(y)) (and replacing Σ by sup in the definition of Bel_f) would force belief to obey possibility theory. The situation is reminiscent of the one in Kripke-style semantics, where properties of the modal operators correspond to constraints on the accessibility relation in the models.

Theorem 1.
Let Φ be a set of bf-formulae, F, G f.o. formulae, t a ground term in ℒ̂, and a, b, c ∈ [0,1].
(a) Φ ⊨ F:[a,b] iff Φ ⊨ ¬F:[1−b, 1−a]
(b) Φ ⊨ F:[a,b] iff both Φ ⊨ F:[a,1] and Φ ⊨ F:[0,b]
(c) F ⊨ G if and only if, for all a, F:[a,1] ⊨ G:[a,1]
(d) ⊨ F if and only if ⊨ F:[1,1]
(e) If Φ ⊨ F⊃G:[a,1] and Φ ⊨ F:[c,1] then Φ ⊨ G:[ac,1]
(f) If Φ ⊨ F⊃G:[a,1] and Φ ⊨ G:[0,b] then Φ ⊨ F:[0,ab]
(g) If Φ ⊨ ∀x.F(x):[a,1] then Φ ⊨ F(t):[a,1]
(h) If Φ ⊨ F(t):[0,b] then Φ ⊨ ∀x.F(x):[0,b].

Point (a) shows that BFL treats negation according to D-S theory; (b) allows us to consider the a and b values separately wrt bf-entailment. (c) and (d) show that BFL is a conservative extension of standard f.o. logic. Also, they show that agents modelled by BFL are (partially) logically omniscient: they completely believe all the f.o. tautologies, and whenever a formula is believed to some extent, all its logical consequences must be believed to at least the same extent. (e) and (f) show that a graded modus ponens (and modus tollens) can be soundly used for performing deductions in BFL. (g) and (h) establish the relation between universal properties and single instances, and confirm that BFL models epistemic uncertainty. Believing, to some extent, a universal ∀x.F(x) means to partially believe that F is true for each individual; dually, the existence of one single counter-example ¬F(t) suffices for partially negating the validity of that universal.⁴

Example 7. Let Φ = { ∀x.drinker(x)⊃smoker(x):[0.7, 0.9], drinker(peter):[0.8, 1], drinker(mary):[0.6, 0.7] }. The reader can use Theorem 1 to check that the following are true:

Φ ⊨ ¬smoker(peter) : [0, 0.44]
Φ ⊨ smoker(peter) ∨ ¬smoker(peter) : [1, 1]
Φ ⊨ smoker(mary) : [0.42, 1]
Φ ⊨ ∃x. smoker(x) : [0.56, 1]
Φ ⊨ ∃x. ¬smoker(x) : [0.1, 1]

(Hint: use (e) and (a) for the first bf-formula; the last bf-formula is entailed by the (negation of the) universal in Φ.)
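The interval manipulations behind (a) and (e) are simple arithmetic. A small sketch of ours (the function names are not from the paper), reproducing the numbers of Example 7:

```python
def graded_mp(a_rule, c_antecedent):
    """Theorem 1(e): from F > G : [a,1] and F : [c,1], infer G : [a*c, 1]."""
    return a_rule * c_antecedent

def negate(a, b):
    """Theorem 1(a): F : [a,b] is equivalent to -F : [1-b, 1-a]."""
    return (1.0 - b, 1.0 - a)

smoker_peter = graded_mp(0.7, 0.8)            # ~0.56, as in Example 7
smoker_mary = graded_mp(0.7, 0.6)             # ~0.42
not_smoker_peter = negate(smoker_peter, 1.0)  # ~(0.0, 0.44)
print(smoker_peter, smoker_mary, not_smoker_peter)
```

The second bf-formula of Example 7 needs no arithmetic at all: smoker(peter) ∨ ¬smoker(peter) is a tautology, so it gets [1,1] by Theorem 1(d).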
Inconsistency

Unlike most approaches to belief functions, we allow a basic probability assignment to assign a non-zero value to the empty set, and have Bel_𝒜 "count" this value. Thanks to these peculiarities, BFL preserves (in a "graded" form) many of the properties that accompany inconsistency in standard f.o. logic.

Definition. Φ is bf-consistent if Φ has a normal bf-model. It is a-consistent (0 < a ≤ 1) if it has a bf-model 𝒜 such that 𝒜(∅) < a. We say that Φ is a-inconsistent if Φ is not a-consistent, and that Φ is bf-inconsistent if it is 1-inconsistent.

Example 8. The set Φ = { P∨Q:[.8,1], P:[.3,.6], ¬Q:[1,1] } is 0.2-inconsistent. In fact, any bf-model 𝒜 of Φ must be such that Bel_𝒜(⟦P∨Q⟧) ≥ 0.8, Bel_𝒜(⟦P⟧) ≥ 0.3, Bel_𝒜(⟦¬P⟧) ≥ 0.4, and Bel_𝒜(⟦¬Q⟧) ≥ 1. The last two conditions imply, by definition of bf-interpretation, Bel_𝒜(⟦¬P∧¬Q⟧) ≥ 0.4. As ⟦P∨Q⟧ and ⟦¬P∧¬Q⟧ are disjoint, the only way 𝒜 can satisfy all the above constraints is by having 𝒜(∅) ≥ 0.2 (remind that Bel_𝒜(𝒥) = 1). The lower bounds on the Bel_𝒜 values of P, ¬P, Q, ¬Q, P∨Q and ¬(P∨Q) for any bf-model of Φ are shown on the bottom left corner. Notice that, for any F, Bel_𝒜(⟦F⟧) ≥ 0.2, and hence Φ ⊨ F:[.2,1] (in particular, Φ ⊨ Q:[.2,1]).

a-inconsistency can be thought of as a "noise" that covers all our beliefs below the threshold a: if Φ is a-inconsistent, then Φ ⊨ F:[a,1] (and Φ ⊨ F:[0,1−a]) for any F (graded ex falso quodlibet). BFL can live with this "noise", and still express meaningful information, without degenerating into "logical chaos" (Rescher & Brandom, 1979; Lang, 1991):

Theorem 2. Let Φ be a-consistent. Then there is a f.o. formula G such that Φ ⊭ G:[a,1].

Finally, a-inconsistency can be used in proofs by refutation (recall that, in f.o. logic, Γ ⊨ F iff Γ ∪ {¬F} is inconsistent):

Theorem 3. For any Φ, F and a, Φ ⊨ F:[a,1] if and only if Φ ∪ {¬F:[1,1]} is a-inconsistent.

D-models

Bf-interpretations encode states of partial belief. We may wonder whether there is, for any given set Φ of bf-formulae, a bf-model of Φ that encompasses all and only the information encoded by Φ. I.e., we may want to seek a least informative bf-model of Φ. We show that, when Φ satisfies a condition called D-consistency, such a bf-model - called a D-model - exists, and can be constructively defined. Interestingly enough, D-models and D-consistency are both based on Dempster's rule of combination (hence the "D"), the pivot mechanism for aggregating knowledge in Dempster-Shafer theory. We start by making the notion of "least informative" precise.

Definition. Let 𝒜 and 𝒜′ be two bf-interpretations.
(i) 𝒜 is less informative than 𝒜′ (written 𝒜 ⊑ 𝒜′) if, for any φ, 𝒜 ⊨ φ implies 𝒜′ ⊨ φ.
(ii) For a given Φ, 𝒜 is a least informative bf-model of Φ if 𝒜 ⊨ Φ, and for any bf-model 𝒜′ of Φ, 𝒜 ⊑ 𝒜′.

It is easy to verify that ⊑ is a partial order, and that Π_𝒥 ⊑ 𝒜 ⊑ Π_∅ for any 𝒜.⁵ It follows from the above definition that least informative bf-models completely characterize bf-entailment: if 𝒜 is the least informative bf-model of Φ, then, for any bf-formula φ, Φ ⊨ φ iff 𝒜 ⊨ φ.

We now define D-models. We consider the single item of evidence represented by a bf-formula ψ = F:[a,b], and define the ψ-induced evidence to be the function ℰ_ψ, given by: ℰ_ψ(⟦F⟧) = a; ℰ_ψ(⟦¬F⟧) = 1−b; ℰ_ψ(𝒥) = b−a; and ℰ_ψ(x) = 0 otherwise.⁶ Intuitively, ℰ_ψ says that we believe to the extent a that the "true" state of the world is one where F holds, and to the extent (1−b) that it is one where ¬F does. We extend the idea of "induced evidence" to sets of bf-formulae as follows.

⁴ Hence, BFL cannot handle (nor is it meant to!) non-monotonicity.
⁵ Notice that 𝒜 ⊑ 𝒜′ iff Bel_𝒜(x) ≤ Bel_𝒜′(x) for all x; as a consequence, least informative bf-models are unique. Shafer, Dubois & Prade, and Smets independently defined equivalent orders.
⁶ ℰ_ψ slightly generalizes Shafer's (1976) simple support functions.

Definition. Let Φ be a set of bf-formulae.
The D-model of Φ is the function 𝓜_d(Φ) given by:

𝓜_d(Φ) = ⊗_{ψ∈Φ} ℰ_ψ,  where  (f1 ⊗ f2)(x) = Σ_{y∩z=x} f1(y)·f2(z).

The ⊗ is the usual (but un-normalized) Dempster's rule of combination. Intuitively, 𝓜_d(Φ) distributes our credibility over all the possible interpretations in such a way that all the information contained in Φ is considered - through its induced evidence. 𝓜_d(Φ) is indeed a bf-model of Φ.

Theorem 4. For any Φ, 𝓜_d(Φ) ⊨ Φ.

Example 9. Consider again the Φ in Example 6, and let B′∪C′∪D′ = ⟦∀x.drinker(x)⊃smoker(x)⟧. ℰ_{drinker(peter):[.8,.9]} assigns 0.8 to the set A∪B, 0.1 to C∪D, and 0.1 to 𝒥. ℰ_{∀x.drinker(x)⊃smoker(x):[.7,1]} assigns 0.7 to B′∪C′∪D′, and 0.3 to 𝒥. By combining these two functions through ⊗ we get:

x:           B′     C′∪D′   B′∪C′∪D′   A∪B    C∪D    𝒥      ∅
𝓜_d(Φ)(x):   0.56   0.07    0.07       0.24   0.03   0.03   0

The reader can verify that 𝓜_d(Φ) is a bf-model of Φ by computing the values of Bel_{𝓜_d(Φ)}. In the above example, 𝓜_d(Φ)(∅) = 0. However, because our ⊗ is not normalized, we have no guarantee in general that 𝓜_d(Φ) is normal. We give the following:

Definition. Φ is D-consistent iff 𝓜_d(Φ)(∅) = 0.

When Φ is D-consistent, its D-model has some very interesting properties.

Theorem 5. If Φ is D-consistent, then:
(i) 𝓜_d(Φ) is the least informative bf-model of Φ.
(ii) Φ ⊨ F:[a,b] if and only if 𝓜_d(Φ) ⊨ F:[a,b].
(iii) Φ ∪ {F:[1,1]} is a-inconsistent iff 𝓜_d(Φ ∪ {F:[1,1]})(∅) ≥ a.

From a D-S viewpoint, (i) says that the result of combining items of information by Dempster's rule is the least informative basic probability assignment that subsumes the given information, provided that it is D-consistent. From the viewpoint of BFL, (ii) guarantees that we can build a bf-model of any D-consistent set Φ that fully characterizes its entailment set. We will see in the next section the relevance of (iii) to automatic deduction.

Example 10.
The willing reader who has computed the Bel_{𝓜_d(Φ)} values in Example 9 can now verify that the corresponding 𝓜_d(Φ) is least informative, by comparing these values with the lower bounds for the Bel_𝒜 values of any bf-model of Φ computed in Example 6.

In order to conveniently use the results of Theorem 5, we need to find a more syntactical characterization of D-consistency. Notice that a bf-formula F:[a,b] "says something" about (the truth of) both F and ¬F. We write ±F to denote either F or ¬F. Given a set Φ of bf-formulae, we focus on the set of all the sets of sentences about which Φ says something: we let

Φ* = { {±F1, ..., ±Fn} | Fi ∈ Φ̂, i = 1, ..., n }.

We say that Φ is coherent if all the sets in Φ* are (f.o.) consistent. Two sets of bf-formulae Φ and Ψ are distinct if Φ∪Ψ is coherent. Intuitively, Φ and Ψ are distinct if Φ does not say anything about any formula that is entailed by some set in Ψ* (recall that a set Γ of f.o. formulae entails a formula F if and only if Γ∪{¬F} is inconsistent), and vice-versa: Φ and Ψ "speak about" different formulae. Notice that Φ is coherent iff all its subsets are mutually distinct.

Theorem 6. If Φ is coherent, then it is D-consistent.

Coherence may seem too strong a condition to be useful: e.g., even the propositional set {P, P⊃Q} is not coherent ({P, ¬(P⊃Q)} is inconsistent). The situation is however less extreme when we consider quantified knowledge: {∀x.P(x)⊃Q(x), P(a)} is coherent (intuitively, ∀x.P(x)⊃Q(x) and P(a) do speak about different things). Similarly, the Φ's in Examples 6 and 7 are coherent, and the one in Example 8 is not. Our definitions of coherence and distinctness play the role of the intuitive "distinctness" condition required by Shafer for applying Dempster's combination ⊗ (also, cf. the notion of evidential independence in Shafer, 1976, §7.4): if Φ has been built up from distinct items of information (hence, it is coherent), then our ⊗-based D-models "behave well" as representations of Φ.
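The un-normalized combination ⊗ defined above is straightforward to implement over focal sets. The sketch below is ours: it instantiates Example 9 with one interpretation per region (A = {'a'}, B = {'b'}, C = {'c'}, D = {'d'}) and, for simplicity, takes B′ = B, C′ = C, D′ = D (in the paper B′∪C′∪D′ is only a subset of B∪C∪D, so there the C′∪D′ and C∪D columns stay separate).

```python
from collections import defaultdict

def combine(m1, m2):
    """(m1 (x) m2)(x) = sum of m1(y)*m2(z) over all y, z with y & z == x.
    Un-normalized: mass falling on the empty set is kept, not redistributed."""
    out = defaultdict(float)
    for y, p in m1.items():
        for z, q in m2.items():
            out[y & z] += p * q
    return dict(out)

W = frozenset('abcd')
m_drinker = {frozenset('ab'): 0.8, frozenset('cd'): 0.1, W: 0.1}   # drinker(peter):[.8,.9]
m_rule = {frozenset('bcd'): 0.7, W: 0.3}                           # the universal:[.7,1]

m = combine(m_drinker, m_rule)
# m[{'b'}] = 0.8*0.7 = 0.56; m[{'a','b'}] = 0.24; m[{'b','c','d'}] = 0.07;
# m[{'c','d'}] = 0.07 + 0.03 = 0.10 (the C'uD' and CuD entries merge here);
# m[W] = 0.03; the total mass still sums to 1, and nothing falls on the empty set.
```

Bel of ⟦smoker(peter)⟧ = {'b','d'} under the combined mass is then just m[{'b'}] = 0.56, matching the lower bound derived in Example 6.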
Otherwise, D-models are not least informative (but they are still bf-models). Said differently, D-consistency formalizes a well known (but otherwise poorly understood) precondition in Dempster-Shafer theory.

Automatic Deduction

We outline a technique for performing automatic deduction in BFL (a full account, and an algorithm, are given in Saffiotti, 1991a). By automated deduction we mean here the ability to decide whether Φ ⊨ F:[a,b] holds. The approach taken here is peculiar in that it relies on the construction of an uncertainty network that corresponds - in a precise sense - to the given entailment problem. The technique is sound and complete under the hypothesis of D-consistency of Φ.

We answer the question Φ ⊨ F:[a,b] by answering the two questions Φ ⊨ F:[a,1] and Φ ⊨ F:[0,b] separately (Theorem 1). We test if Φ ⊨ F:[a,1] by testing if Φ∪{¬F:[1,1]} is a-inconsistent (Th. 3), and we test the latter by testing if 𝓜_d(Φ∪{¬F:[1,1]})(∅) ≥ a (Th. 5) (similarly for F:[0,b]). Uncertainty networks enter into play at this point: using Shafer and Shenoy's valuation system formalism,⁷ we construct a valuation system VS(Φ∪{¬F:[1,1]}) in such a way that evaluating it produces (for a certain variable) exactly the value of 𝓜_d(Φ∪{¬F:[1,1]})(∅). Rather than exposing the construction technique in general, we illustrate it by an example. We let Φ = { ∀x.P(x)⊃Q(x):[.7,.9], P(a):[.8,1], P(b):[.6,.7] } (cf. Example 7), and wonder if Φ ⊨ ∃x.Q(x):[.6,1]. We need to check if 𝓜_d(Φ∪{∀x.¬Q(x):[1,1]})(∅) ≥ 0.6. We take each set of (f.o.) formulae in Φ*, and translate it into Skolem clausal form; let Γ be the set of resulting sets of

⁷ Space precludes us from giving even a short introduction to valuation systems (e.g., Shenoy and Shafer, 1988). We want to emphasize that valuation systems are not limited to belief functions, but are intended as a general framework for local computation.
We also translate ∀x.¬Q(x) into clausal form, and look for resolution derivations of the empty clause ⊥ from it and any of the sets in Γ. Two derivations are found:

    P(a), ¬P(x)∨Q(x) ⊢ Q(a);   Q(a), ¬Q(x) ⊢ ⊥
    P(b), ¬P(x)∨Q(x) ⊢ Q(b);   Q(b), ¬Q(x) ⊢ ⊥

We then build a valuation system that "mirrors" these derivations. For the present goals, we can think of a valuation system as a network (X, V), where X is a set of nodes, each associated with a set of possible values, and V a set of basic probability assignments (called here valuations) that express information about the values taken by some (subsets of) nodes. A propagation mechanism (evaluation) can be used to compute the effect of the given valuations on the value of some "unknown" nodes.

We build a valuation system VS(Φ∪{∀x.¬Q(x):[1,1]}) = (X°, V°) as follows. For each clause C appearing in the above deductions, we put a node X(C) in X°, with possible values {in, out}. For each resolution step C1,C2/R, we put in V° a valuation Res on {X(C1), X(C2), X(R)} that encodes the relation "whenever both X(C1) and X(C2) are in, X(R) must be in". Finally, we put in V° valuations corresponding to the [a,b] values in Φ. The following picture shows the (X°, V°) for our example.⁸

We are almost done. All we have to do now is to evaluate this valuation system to find a value for the event X(⊥) = in: here, 0.644. This value is exactly the value given by the D-model of Φ∪{∀x.¬Q(x):[1,1]} to the empty set. The full paper proves this result in general. (Saffiotti & Umkehrer, 1991) applies this technique to the generation of uncertainty networks from first-order clauses.

Conclusion

BFL extends first-order logic with a notion of quantified belief based on the belief function formalism. From a dual viewpoint, BFL extends Dempster-Shafer theory with the ability of expressing belief about first-order (rather than propositional) statements.
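Returning to the worked example in the Automatic Deduction section above: the final evaluation step can be imitated with a small brute-force computation. This is a hypothetical sketch, not the paper's valuation-system algorithm: it treats each bf-formula as an independent simple support function (the lower bound is the mass committed to the formula, the rest goes to the vacuous state) and sums the mass of the joint configurations in which some resolution derivation of Q goes through. Because the two derivations share the single rule valuation, the result is 0.7 × (1 − 0.2 × 0.4) = 0.644, matching the value above, rather than the 0.7448 one would get by combining the two derivations as if they were independent.

```python
from itertools import product

# Assumed mass interpretation (lower bounds only; names are illustrative):
sources = {
    'rule': 0.7,   # Vx. P(x) -> Q(x) : [.7, .9]
    'Pa':   0.8,   # P(a)
    'Pb':   0.6,   # P(b)
}

def bel_exists_Q(sources):
    """Total mass of configurations in which some derivation of Q succeeds."""
    total = 0.0
    names = list(sources)
    for committed in product([True, False], repeat=len(names)):
        state = dict(zip(names, committed))
        mass = 1.0
        for n in names:
            mass *= sources[n] if state[n] else 1.0 - sources[n]
        # Q(a) needs rule + P(a); Q(b) needs rule + P(b)
        if state['rule'] and (state['Pa'] or state['Pb']):
            total += mass
    return total

print(round(bel_exists_Q(sources), 3))  # 0.644
```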
A number of results define the behavior of BFL as an integrated logic; as a by-product, these results reveal a new perspective on Dempster-Shafer theory. Though constructed from first-order logic and belief functions, BFL illustrates a general approach to coupling a logical language and an uncertainty formalism (discussed in Saffiotti, 1990). On the one hand, the properties of the measures of strength of belief can be changed, by imposing different constraints on the interpretation structures. On the other hand, any language can be used for representing the objects of belief, provided that a (recursively enumerable) entailment relation is defined for it. The network generation procedure described above can similarly be extended to other languages and/or uncertainty formalisms (Saffiotti, 1991b).

⁸ Rounds represent nodes, rectangles valuations. The real system is more complex: (Saffiotti, 1991a) reports the details. Arrows are suggestive of the propagated values: the actual propagation works differently.

Acknowledgements. Yen-Teh Hsia, Robert Kennes, Victor Poznanski, Enrique Ruspini, Philippe Smets, Elisabeth Umkehrer, and two anonymous referees provided very useful comments.

References

Bacchus, F. (1988) Representing and Reasoning with Probabilistic Knowledge. PhD Thesis (Univ. of Alberta).
Dempster, A.P. (1967) "Upper and Lower Probabilities Induced by a Multivalued Mapping". Annals of Mathematical Statistics 38.
Dubois, D. and Prade, H. (1988) Possibility Theory: An Approach to Computerized Processing of Uncertainty (Plenum Press, NY).
Dubois, D., Lang, J. and Prade, H. (1989) "Automated Reasoning Using Possibilistic Logic". Procs. of the 5th Workshop on Uncertainty in AI (Windsor, Ontario).
Fagin, R. and Halpern, J. Y. (1989) "Uncertainty, Belief and Probability". Procs. of IJCAI 89.
Lang, J. (1991) Logique Possibiliste: aspects formels, déduction automatique et applications. PhD Thesis (Toulouse, Fr).
Nilsson, N. J. (1986) "Probabilistic Logic". Artif. Intell. 28.
Pearl, J. (1988) Probabilistic Reasoning in Intelligent Systems (Morgan Kaufmann, CA).
Provan, G. M. (1990) "A Logic-Based Analysis of Dempster-Shafer Theory". Int. J. of Approximate Reasoning 4.
Rescher, N. and Brandom, R. (1979) The Logic of Inconsistency (Billings & Sons Ltd., GB).
Ruspini, E. H. (1986) "The Logical Foundations of Evidential Reasoning". Technical Note 408, SRI Int. (Menlo Park, CA).
Saffiotti, A. (1990) "A Hybrid Framework for Representing Uncertain Knowledge". Procs. of AAAI-90 (Boston, MA).
Saffiotti, A. (1991a) "A Belief-Function Logic". Technical Report TR/IRIDIA/91-25 (Université Libre de Bruxelles, Belgium).
Saffiotti, A. (1991b) "Dynamic Construction of Valuation Systems". Tech. Rep. TR/IRIDIA/91-18 (Univ. Libre de Bruxelles).
Saffiotti, A. and Umkehrer, E. (1991) "Automatic Construction of Valuation Systems from General Clauses". Procs. of IPMU-92. Also Tech. Rep. TR/IRIDIA/91-24 (Univ. Libre de Bruxelles).
Shafer, G. (1976) A Mathematical Theory of Evidence (Princeton University Press, Princeton).
Shenoy, P.P. and Shafer, G.R. (1988) "An Axiomatic Framework for Bayesian and Belief-Function Propagation". Procs. of the 4th Workshop on Uncertainty in AI (Minneapolis, MN).
Smets, Ph. (1988) "Belief Functions". In: Smets, Mamdani, Dubois, and Prade (Eds.) Non-Standard Logics for Automated Reasoning (Academic Press, London).
Zadeh, L. A. (1978) "Fuzzy Sets as a Basis for a Theory of Possibility". Fuzzy Sets and Systems 1.

Saffiotti 647
Extending Circumscription to Modal Logic

Jacques Wainer
Dept of Computer Science
University of Colorado
Boulder, CO 80309-0430
wainer@cs.colorado.edu

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Abstract

This paper discusses the logic LKM, which extends circumscription into an epistemic domain. This extension will allow us to define circumscription of predicates that appear within the context of a modal operator. In fact, LKM can be seen as a method of extending any first-order nonmonotonic logic whose semantic definition is based on a partial order among models into a new nonmonotonic logic defined for a modal language, whose modal operator (K) follows an underlying S5 or weak-S5 semantics. One interesting use of this nonmonotonic logic is to model nonmonotonic aspects of the communication between agents.

Introduction

McCarthy [McCarthy, 1986] suggests using a nonmonotonic logic to model conventions in communication. For example, if A tells B about a bird, and A says nothing about the bird's ability to fly, B should conclude that the bird does fly. McCarthy proposes that circumscription could account for this reasoning. We claim that the reasoning cannot be accounted for by circumscription alone (or any first-order nonmonotonic logic) and that a nonmonotonic logic that combines both circumscription and knowledge is necessary for the task.

We will assume, as McCarthy implicitly does, that the speaker always believes in what she said. Thus there is no need to represent Grice's maxim as a default and we can automatically assert that the speaker believes the content of her utterance. We will also avoid dealing with the last step in the reasoning: the incorporation of the speaker's beliefs into the hearer's beliefs. What is left to be modeled is the reasoning from a) the fact that the speaker knows a proposition, and b) the fact that the hearer can assume that the speaker knows that there is a default that applies to that proposition, to conclude that the speaker believes that the default holds. In a schematic way, and assuming that A's utterance was that an entity b is a bird:

(1) From K_A bird(b) and K_A [∀x. bird(x) ∧ ¬abn(x) → fly(x)] conclude K_A fly(b)

where K_A is the operator that represents the speaker's knowledge (or belief). The first formula above comes from the assumption that the speaker believes in what she said. The second formula comes from the fact that the default on birds' flying condition is mutually known to A and B, and thus A knows it.

Let us detail the defaults involved in the reasoning above. First, there is the need to represent the default that birds usually fly, or, at least, that if nothing is mentioned about a bird's flying condition, one should assume that it can fly. I will assume circumscription or some other first-order nonmonotonic logic can represent such a default.¹ Furthermore, there is the default that the speaker A is being truthful and she believes in what she says (Grice's conversational maxim of quality [Grice, 1975]). That is, if A utters a proposition α then B should assume that A believes α. Finally there is the default that if B knows that A believes α, and α does not contradict B's beliefs, then B should also believe α.

McCarthy's proposal implicitly assumes that one can remove the external knowledge operator from the assumptions, perform the default reasoning within a first-order framework (by circumscribing abn), and re-insert (if needed) the knowledge operator in the conclusions. In other words, the reasoning is performed within a single belief space: the propositions known by the speaker. We will call this methodology internal reasoning. This method seems reasonable, but it cannot account for all utterances. For example, if A had uttered:

(2) b is a bird, perhaps a penguin.

one would not like to conclude that b flies, or that the speaker believes that b flies. In fact, the speaker mentioned that she considers it possible that b cannot fly.

¹ One also has to represent the fact that this default is mutually known between A and B in order to characterize it as a convention. But I will leave this detail aside.

We will call utterances like (2) epistemic cancellations. Epistemic cancellations are expressions where an epistemic possibility construct is used to cancel a default. Or more specifically, by using epistemic cancellation the speaker is informing the hearer that a particular default proposition should not be attributed to her. Epistemic cancellations also occur in the domain of generalized implicatures [Wainer, 1991].

For the utterance above, internal reasoning will not work. The problem is that the result of removing the external knowledge operator will not be a first-order formula, but a formula that contains a modal operator, that is:

(3) bird(b) ∧ P penguin(b)²

In this case, it will not be possible to circumscribe abn because circumscription is not defined for modal formulas.

This paper will propose a way of extending circumscription so that the reasoning described in (1) can be performed with the K operators in place, as opposed to the internal reasoning method. The logic will allow for the circumscription of predicates that appear in modal formulas and will be able to deal correctly with examples like (2).

This extension of circumscription to modal domains will be accomplished by semantic means. A large family of first-order nonmonotonic logics, which includes circumscription, are semantically defined by a partial order "≤" among models [Lifschitz, 1985; Shoham, 1987]. For these logics, entailment is defined as satisfiability by the set of ≤-minimal models as in (4).
(4) α ⊨≤ β iff ∀M ∈ Γ₁(α), M ⊨ β

In the definition above, ⊨ is the standard first-order satisfiability operator, and Γ₁(α) is the set of all ≤-minimal models that satisfy α, or formally:

(5) Γ₁(α) = {M ∈ Γ₀(α) | ¬∃M′ ∈ Γ₀(α), M′ < M}

where the set Γ₀(α) is the set of all models that satisfy α:

(6) Γ₀(α) = {M | M ⊨ α}

The logic LKM (after "Logic of Knowledge and Minimization") described in this paper is a method of defining a new partial order "⊑" among modal models based on the ordering ≤ among first-order models. The definition of entailment in LKM is the same as (4), but where the set Γ₁(α) is defined in terms of "⊑" instead of "≤". In fact, nothing in the definition of LKM hinges on the fact that ≤ is the partial order defined for some form of circumscription. Any first-order nonmonotonic logic whose semantics is based on a partial order can be extended to a corresponding LKM logic.

² The operator P, for epistemic possibility, is the dual of K. From now on we will use K as the speaker's belief operator.

The next section will discuss the definition of the partial order ⊑ and show that it correctly extends ≤ to an epistemic domain.

Semantics of LKM

The language of LKM is that of a modal logic with quantification and equality, where the modal operator is K. There are no restrictions on the relative scope of quantifiers and the modal operator, and in particular quantifying into a modal context is allowed.

The propositional aspect of the semantics of the K operator should follow the S5 or weak-S5 semantics, and therefore the K operator refers to the knowledge of a perfect reasoner, with both positive and negative introspection.
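The minimal-model entailment of definitions (4)-(6) can be illustrated with a brute-force propositional toy. This is a sketch with invented atom names, not from the paper: the order minimizes the extension of abn, as in circumscription, and the formula α says that b is a bird and respects the default axiom bird ∧ ¬abn → fly.

```python
from itertools import chain, combinations

atoms = ['bird_b', 'abn_b', 'fly_b']

def models_of(formula):
    """Gamma_0 of (6): all truth assignments (as sets of true atoms) satisfying formula."""
    universe = [frozenset(s) for s in chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))]
    return [m for m in universe if formula(m)]

def alpha(m):
    # bird(b), and the default axiom forces fly(b) unless abn(b) holds
    return 'bird_b' in m and ('abn_b' in m or 'fly_b' in m)

def less(m1, m2):
    """Circumscription-style strict order: m1 has a smaller abn extension."""
    return 'abn_b' not in m1 and 'abn_b' in m2

def minimal_models(formula):
    """Gamma_1 of (5): models with no strictly smaller model of the formula."""
    g0 = models_of(formula)
    return [m for m in g0 if not any(less(m2, m) for m2 in g0)]

def entails(formula, beta):
    """Definition (4): beta holds in every minimal model of formula."""
    return all(beta(m) for m in minimal_models(formula))

print(entails(alpha, lambda m: 'fly_b' in m))  # True
```

Classical entailment fails here (there are models of α where b is abnormal and does not fly); only the restriction to ≤-minimal models yields the default conclusion.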
In particular, because the underlying propositional modal logic is weak-S5 or S5, instead of using an accessibility relation among possible worlds [Halpern and Moses, 19851 one can talk about a set of possible worlds, each one accessible to the others.4 The difference between S5 and weak-S5 is whether the real world belongs or not to this set of mutually accessible possible-worlds. Before discussing the formal aspects of the seman- tics for LKM, I will discuss some of the intuitive ideas behind these definitions. Intuitive view There are two main ideas in defining the semantics of LKM. The first idea is that a possible world, as defined for the semantics of modal logics, can be seen as a first order model, and thus the 5 relation can be used to order possible worlds. In propositional modal logics semantics, a possible world is just an index for the interpretation function ([Halpern and Moses, 19851 for example). For exam- ple, the truth value of a propositional symbol p in a possible world 201, is the value assigned to it by the interpretation function, at index wl. If I is the inter- pretation function, then the truth value of the proposi- tional symbol p in the world 201 is I(wi , p). The idea of possible-worlds as index is also used in the semantics of quantified modal logics [Garson, 1983, Wainer, 19881. But there is another view of possible worlds, one that agrees more with the intuitions behind the whole idea of possible-world semantics: each possible world can be seen a first-order model. A first-order model, in this context, is any structure that assigns consistent truth values to all first-order formulas of a language (formulas with no modal operators). In more details, a first-order model is a structure that assigns to each term in the language an object, to each unary pred- icate symbol a set of objects, and so on, and finally *This point is also made in [Levesque, 19901. 
Wainer 649 assigns truth value for each formula in the language according to the usual recursive rules. Formally, this view of possible-world as a first-order model can be easily accomplished by incorporating the interpretation function into the possible world itself. In the index view of possible-worlds I(uti , (u) denotes the truth value of CY in world 201. In the first-order model view, one defines the function Aa: I(wi, Z) and call it the possible-world ~1. Now, at each possible world, one can determine the truth value of all first- order formulas. This corresponds more closely to the philosophical intuitions of possible-words as “alterna- tive realities,” each one with its own domain and true propositions. Under this view of possible worlds, it is possible to use the relation 5 to compare possible worlds across different modal models. A modal model, under this view, is a 2-tuple M = (wc, IV), where 200 is a possible-world called the real world, and IV is a set of possible worlds. There is no need to define an interpretation function since all pos- sible worlds are themselves interpretation functions. The second idea is that of an elementary improve- ment between two different modal models. The ele- mentary improvement relation uses this ability of com- paring possible-worlds of different modal models using 5 to define a relation among modal models. A model Ml = (wcl, WI) is an elementary improve- ment of M2 = (wo2, W2) if the sets Wi and IV2 agree in all but a pair of possible worlds, one in each set. And the differing possible world that belongs to Wi is “smaller” (in the 5 sense) than its correspondent in Wz. In an informal language, if one sees the formula wi < w2 as stating that wr is “smaller” then ~2, then Ml % an elementary improvement of M2 if exactly one possible world in M-J got “smaller” in Ml. 
The elementary improvement relation is the bridge between the partial order ≤ among first-order models and the relation ⊑ among modal models (which will be defined as the transitive closure of the elementary improvement). Informally, a modal model M1 is "better" (in the ⊑ sense) than M2 if at least one world in M1 is "smaller" (in the ≤ sense) than its correspondent in M2.⁵ In fact, the definition of elementary improvement is necessary in order to formalize what "correspondent" means.

Formal definitions

The definition of a model M for a LKM formula is:

(7) M = (w0, W)

where w0 is called the real world, and W is a set of possible worlds.

The logic LKM itself places few constraints on the semantics of terms. So far, the only requirement is that each possible world should be a first-order model, so that possible worlds can be compared using the ≤ relation. This means that all terms in the language must have a denotation in each possible world. This rules out some quantified modal logics where a term may not have a denotation in some possible worlds in a model (see [Garson, 1983; Wainer, 1988]). But, to be able to prove the theorem in the next section, which states that the logic LKM based on the relation ⊑ is indeed an extension of the first-order nonmonotonic logic based on ≤, one needs further constraints. We will assume the stronger set of constraints for the semantics of terms, that is, that all terms are rigid designators. This means that for each model:

(8) • all possible worlds in the model have the same domain.
    • all terms in the language denote the same object in each possible world in the model.

The requirement of rigid designators seems appropriate when describing the knowledge state of a single agent (in this case the speaker), but is too restrictive when relating the knowledge of two different agents, or relating the knowledge of an agent to the reality [Maida, 1991].

We can now define satisfiability for LKM, in the usual way:

(9) (w, W) ⊨ α iff w ⊨_FOL α, where α contains no modal operators and "⊨_FOL" is the standard first-order satisfiability
    (w, W) ⊨ α ∧ β iff (w, W) ⊨ α and (w, W) ⊨ β
    (w, W) ⊨ ¬α iff (w, W) ⊭ α
    (w, W) ⊨ ∃x.α iff there exists an object o in the domain of w and (w, W) ⊨ α_x^o ⁶
    (w, W) ⊨ Kα iff for all w′ ∈ W, (w′, W) ⊨ α

We now define the elementary improvement relation "⊑e" between two models M1 = (w1, W1) and M2 = (w2, W2):

(10) M1 ⊑e M2 iff
     W1 = W2, or
     W1 − W2 = {w_a} and W2 − W1 = {w_b} and w_a ≤ w_b, or
     W1 − W2 = ∅ and W2 − W1 = {w_b} and there is w_c ∈ W1 with w_c ≤ w_b

⁵ Informally, I use the term "smaller" when comparing possible worlds using the relation ≤, and I use the term "better" when comparing modal models under the relation ⊑. This is also the intuition underlying the name "elementary improvement".

⁶ "α_x^o" is the formula α where all instances of x have been substituted by o. This introduces the new concept of formulas that contain references to objects of the model, but this can be dealt with in a natural way [Wainer, 1988].
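Definition (10) translates directly into a membership test. Below is a minimal sketch under invented representations: a modal model is a pair (w0, W) with W a set of hashable worlds, and the first-order order ≤ is passed in as a function leq.

```python
def elementary_improvement(M1, M2, leq):
    """Check M1 elementary-improves M2, following definition (10).

    M1, M2 are pairs (w0, W); leq(w, v) is the first-order order on worlds.
    """
    (_, W1), (_, W2) = M1, M2
    if W1 == W2:                                # case a): same set of worlds
        return True
    d1, d2 = W1 - W2, W2 - W1
    if len(d1) == 1 and len(d2) == 1:           # case b): one world got "smaller"
        (wa,), (wb,) = tuple(d1), tuple(d2)
        return leq(wa, wb)
    if not d1 and len(d2) == 1:                 # case c): M2 has one extra,
        (wb,) = tuple(d2)                       # non-minimal world
        return any(leq(wc, wb) for wc in W1)
    return False

# Toy order: worlds are frozensets of abnormal atoms, ordered by inclusion.
leq = lambda a, b: a <= b
M1 = (None, frozenset({frozenset()}))
M2 = (None, frozenset({frozenset(), frozenset({'abn_b'})}))
print(elementary_improvement(M1, M2, leq))  # True: M2's extra world is not minimal
```

The relation ⊑ itself would then be obtained as the transitive closure of this test over a given collection of models.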
.A rnetalogical result We can prove that the logic LKM (based on the rela- tion 51) is an extension of the corresponding first-order nonmonotonic logic (based on the relation 51). We will prove that if 0 entails p in the first-order (nonmono- tonic) logic defined by 51, then Kcu will entail KP in LKM (with partial relation &I). Or, formally: (13) a I=<, P iff * I=5 w where cv and p are first-order formulas. The claim above proves that any example of a phe- nomenon that is correctly modeled by using a first- order nonmonotonic logic and internal reasoning, will be correctly modeled by the LKM extension of that logic. And hopefully, LKM will also correctly model other examples for which internal reasoning cannot be applied, for example epistemic cancellations. In par- ticular we will show in the next section of this paper that a LKM extension of prioritized circumscription will correctly model inheritance reasoning for epistemic cancellation examples. Prcoof We will present the sketch of the proof of (13). The full proof can be found in [Wainer, 19911. The first step of the proof is to collect all possible-worlds that satisfy o in a single set Uo(cr). That is, (14) Uo(Q) = {u’ I (w,@) l= a) where “b” is satisfiability in the logic LKM and 0 is the empty set. The set u< 1 (o) collects all si-minimal models of Uo(Q). (15) U<,(cr) = {w E Uo(cy) 1 +lw’ E Uo(a), W’<lW) The next step is to prove that: (16) a l=rl P iff VW E &(4, (w, 0) I= P Under the view of possible-worlds as first-order models, all first-order models can be seen as a possible-word and vice versa. Then, the proof of (16) is simple: the set Us, (a) correspond to the set Ii(o) in the definition of l=gl in (4). Formula (16) is the bridge between the first-order nonmonotonic logic defined by j=s, and the logic LKM. The rest of the proof is done within the seman- tics of LKM itself. 
What is left to show is:

(17) ∀w ∈ U≤₁(α), (w, ∅) ⊨ β iff Kα ⊨⊑₁ Kβ

To prove that, one has to show that:

(18) M ∈ Λ₁(Kα) iff M = (z, W), W ⊆ U≤₁(α), and M satisfies the constraints in (8), where z is any possible world.

That can be easily accomplished by a proof by contradiction for each direction of the biconditional. This completes the proof of (13).

Applications of the logic LKM

The logic LKM seems suitable to be used in modeling some defeasible communication phenomena. Specifically, if a first-order nonmonotonic logic is suitable to model a defeasible phenomenon, the LKM extension of that logic will allow the model to deal with epistemic cancellations. For example, [Wainer, 1991] uses LKM to model generalized implicatures in natural language understanding. In that work, generalized implicatures (which can be understood as a non-conventional form of ambiguity that is derived from communicative principles, Grice's maxims [Grice, 1975]) are treated essentially as the preferred meaning of some linguistic expression, and circumscription is used to make this preferred meaning the default meaning of the expression. The use of LKM allows the model to account for epistemic cancellation expressions, that is, sentences of the form (2). Due to space constraints we will not further describe this model. But since the main use of LKM in that model is to account for epistemic cancellations, we will instead show that the logic does capture the correct conclusions of these expressions.

An example

Let us go back to the utterance (2) and let us assume the following axioms:

(19) K[∀x. bird(x) ∧ ¬abn1(x) → fly(x)]
     K[∀x. penguin(x) → bird(x)]
     K[∀x. penguin(x) ∧ ¬abn2(x) → ¬fly(x)]

which are the standard circumscription implementation of the penguin/bird defaults, except that the formulas refer to the speaker's knowledge of the defaults.
To obtain the defaults (for the formulas without the knowledge operator) one should circumscribe both abn1 and abn2, giving abn2 a higher priority. For simplification let us assume that all other predicates are allowed to vary. This circumscription policy defines a first-order partial order, denoted as ≤C, as follows:

(20) w1 ≤C w2 iff domain(w1) = domain(w2) and
     |abn2|_w1 ⊂ |abn2|_w2, or
     |abn2|_w1 = |abn2|_w2 and |abn1|_w1 ⊆ |abn1|_w2

In the formulas above the expression |abn2|_w1 denotes the extension of the predicate abn2 in the first-order model (or possible world) w1.

The statement that the speaker believes in the semantic content of (2) is:

(21) K[bird(b) ∧ P penguin(b)]

which is equivalent to (22):

(22) K bird(b) ∧ P penguin(b)⁷

Let us determine what is entailed by applying the circumscription policy above to the conjunction of (19) and (22). Given a model M = (w0, W) that satisfies the conjunction of (22) and (19), each possible world in W satisfies the formulas in (19) without the external knowledge operator, and also satisfies bird(b), and at least one possible world in W satisfies penguin(b). The set W can be divided into two nonintersecting sets A1 = {w ∈ W | w ⊨ penguin(b)} and A2 = {w ∈ W | w ⊨ ¬penguin(b)}, where at least A1 is not empty. Let us now consider the sets B1 and B2 of the ≤C-minimal worlds in A1 and A2 respectively. The worlds in B1 assign the singleton {b} to the extension of abn1 and assign the empty set to the extension of abn2, and the worlds in B2 assign the empty set to both extensions. The set of models of the form M′ = (w0, B1 ∪ B2) is exactly the set of all ⊑C-minimal models. That is, a model of the form M′ = (w0, B1 ∪ B2) satisfies the conjunction of (19) and (22) and none of its possible worlds can be "improved upon". Thus every ⊑C-minimal model satisfies (23), which seems to be the correct conclusion to derive from the utterance (2).
(23) K[(penguin(b) ∧ ¬fly(b)) ∨ (bird(b) ∧ ¬penguin(b) ∧ fly(b))]

Related work

Lin [Lin, 1988] also combines circumscription and modal logic, but with very different goals. The work shows that a form of minimal-knowledge logic can be obtained by "circumscribing" the knowledge operator in a propositional modal logic. The circumscription of the knowledge operator is defined as a syntactic analog of the circumscription formula for predicate circumscription. Thus, Lin's work differs from the research reported in this paper in both goals and methodology.

Lifschitz's work on introspective circumscription [Lifschitz, 1989] is also related to this research. Lifschitz's goal is to extend circumscription to include the equivalent of non-normal defaults. This is done by defining for each predicate symbol Pi an auxiliary predicate symbol L_Pi which is interpreted as epistemic. In other words, L_Pi(c) is interpreted as meaning that c is known to have the Pi property. Introspective circumscription provides a syntactic and semantic definition for the extension of L_Pi.

It is important to notice that introspective circumscription is a non-modal logic, and the concept of "known" there is very different from the concept of knowledge in modal logics, and therefore in LKM. In fact, "known" in introspective circumscription is closely related to the concept of "being identified as having the property". If c is a term in the language, the formula L_Pi(c) will be true if and only if the entity denoted by c is in the circumscribed extension of Pi. Thus, if one circumscribes the predicate P in the theory ∃x.P(x), the extension of P contains only one entity, but the extension of L_P is empty, since no specific entity can be proven to belong to the extension of P.

In some ways, LKM and introspective circumscription can be seen as complementary efforts.

⁷ KPφ ↔ Pφ is a tautology in S5 and weak-S5.
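The prioritized minimization in the bird/penguin example above can be checked mechanically. This is a hypothetical propositional rendering (atom names invented; bird(b) taken as fixed true): we enumerate the worlds satisfying the axioms of (19) without the K operator, split them on penguin(b), and keep the ≤C-minimal worlds in each half.

```python
from itertools import combinations

atoms = ['penguin', 'abn1', 'abn2', 'fly']   # all about the single entity b

def worlds():
    for r in range(len(atoms) + 1):
        for s in combinations(atoms, r):
            yield frozenset(s)

def satisfies_theory(w):
    """The axioms of (19) without the knowledge operator; bird(b) is true."""
    if 'abn1' not in w and 'fly' not in w:
        return False                          # bird & ~abn1 -> fly
    if 'penguin' in w and 'abn2' not in w and 'fly' in w:
        return False                          # penguin & ~abn2 -> ~fly
    return True                               # penguin -> bird holds trivially

def less_c(w1, w2):
    """Strict prioritized order: minimize abn2 first, then abn1."""
    a2 = ('abn2' in w1, 'abn2' in w2)
    a1 = ('abn1' in w1, 'abn1' in w2)
    if a2 == (False, True):
        return True
    return a2[0] == a2[1] and a1 == (False, True)

def minimal(ws):
    ws = list(ws)
    return [w for w in ws if not any(less_c(v, w) for v in ws)]

B1 = minimal(w for w in worlds() if satisfies_theory(w) and 'penguin' in w)
B2 = minimal(w for w in worlds() if satisfies_theory(w) and 'penguin' not in w)
print(sorted(map(sorted, B1)))   # [['abn1', 'penguin']]: b abnormal as a bird, no fly
print(sorted(map(sorted, B2)))   # [['fly']]: no abnormality, b flies
```

The two minimal sets reproduce the B1 and B2 described above, and hence the disjunction inside the K operator in (23).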
Introspective circumscription is interested in defining what is the circumscription of the predicate expression λz.KP(z) given some (first-order) theory that relates P with other predicates. And it proposes that only entities that can be "identified" should belong to the "extension" of λz.KP(z). The logic LKM, on the other hand, is interested in extending the definition of circumscription so that one can circumscribe the predicate P in theories that contain modal formulas like (3).

Finally, it seems that because of introspective circumscription's view of "known", it is not the correct tool to deal with epistemic cancellations. The content of the sentence (2) does not refer to "identity" in any way. The use of "perhaps" in (2) seems to indicate the lack of knowledge on the truth of the formula penguin(b) and not the lack of knowledge on the entities that can be identified as penguins.

Conclusions

This paper presented a method of defining the semantics of a family of logics that extends any first-order nonmonotonic logic whose semantics is based on a partial order into an epistemic domain. The logic seems to have application in modeling communication between agents, specially in cases where the speaker makes an explicit reference to her own belief state. We believe that these logics can find application in modeling other areas of communication where first-order nonmonotonic logics have been useful [Kautz, 1987; Wainer, 1991].

This paper also presented the intuition of possible worlds as first-order models, which allows one to use a first-order partial order to compare possible worlds within and across models. We believe that this intuition can be applied to other forms of logics based on possible-world semantics, like other non-S5 modal logics, modal temporal logics, conditional logics, and so on. That would allow us to extend first-order nonmonotonic logics into those domains.
But research in this direction is complicated by the fact that there would be two independent relations among possible worlds: the first-order partial order ≤, and the accessibility relation among possible worlds. These two relations must be combined in a yet unclear way to define the partial order ⊑ among the modal models.

Finally, the logic LKM opens some interesting possibilities on combining epistemic with minimization concepts. For example, a combination of circumscriptive and auto-epistemic reasoning could result from combining the concepts described in this paper with Levesque's [Levesque, 1990] possible-world semantics for autoepistemic logic (a tentative definition of entailment for such a logic is described in [Wainer, 1991]).

Acknowledgements

The author would like to thank Anthony Maida for helpful discussions and two anonymous referees for suggestions and corrections to the earlier version of this paper. The author is responsible for all remaining errors and omissions. This work was done while the author was at the Pennsylvania State University.

References

Garson, James W. 1983. Quantification in modal logics. In Gabbay, D. and Guenthner, F., editors, Handbook of Philosophical Logic, volume 2. D. Reidel, Dordrecht, Holland.
Grice, H. P. 1975. Logic and conversation. In Cole, P. and Morgan, J. L., editors, Syntax and Semantics 3: Speech Acts. Academic Press, New York.
Halpern, J. Y. and Moses, Y. O. 1985. A guide to the modal logics of knowledge and belief: Preliminary draft. In Proceedings of the 9th IJCAI. 480-490.
Kautz, Henry A. 1987. A formal theory of plan recognition. Technical Report TR215, Department of Computer Science, University of Rochester.
Levesque, Hector J. 1990. All I know: A study in autoepistemic logic. Artificial Intelligence 42:263-310.
Lifschitz, Vladimir 1985. Computing circumscription. In Proc. of the Ninth International Joint Conference on Artificial Intelligence. Morgan Kaufman. 121-127.
Lifschitz, Vladimir 1989. Between circumscription and autoepistemic logic. In Brachman, R.; Levesque, H.; and Reiter, R., editors, Proc. of the First Conference on Principles of Knowledge Representation and Reasoning, Los Altos, CA. Morgan Kaufman. 235-244.
Lin, F. 1988. Circumscription in a modal logic. In Vardi, Moshe Y., editor, Proc. of the Second Conference on Theoretical Aspects of Reasoning About Knowledge. Morgan Kaufman. 113-127.
Maida, Anthony S. 1991. Maintaining mental models of agents who have existential misconceptions. Artificial Intelligence 50(3):331-383.
McCarthy, John 1986. Applications of circumscription to formalizing common-sense reasoning. Artificial Intelligence 28:89-116.
Shoham, Yoav 1987. A semantical approach to nonmonotonic logics. In Proc. of the Tenth International Joint Conference on Artificial Intelligence. 388-392.
Wainer, Jacques 1988. Quantified modal logic: A guide. Technical Report CS-88-45, Penn State University, Department of Computer Science, University Park, PA 16802.
Wainer, Jacques 1991. Uses of nonmonotonic logic in natural language understanding: Generalized implicatures. Technical Report CS-91-17, Penn State University, Department of Computer Science, University Park, PA 16802.

Wainer 653
From Statistics to Beliefs*

Fahiem Bacchus, Computer Science Dept., University of Waterloo, Waterloo, Ontario, Canada N2L 3G1, fbacchus@logos.waterloo.edu
Adam Grove, Computer Science Dept., Stanford University, Stanford, CA 94305, grove@cs.stanford.edu
Joseph Y. Halpern, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099, halpern@almaden.ibm.com
Daphne Koller, Computer Science Dept., Stanford University, Stanford, CA 94305, daphne@cs.stanford.edu

Abstract

An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief to all basic "situations" consistent with the knowledge base. They differ because there are competing intuitions about what the basic situations are. Various natural patterns of reasoning, such as the preference for the most specific statistical data available, turn out to follow from some or all of the techniques. This is an improvement over earlier theories, such as work on direct inference and reference classes, which arbitrarily postulate these patterns without offering any deeper explanations or guarantees of consistency.

The three methods we investigate have surprising characterisations: there are connections to the principle of maximum entropy, a principle of maximal independence, and a "center of mass" principle. There are also unexpected connections between the three, that help us understand why the specific language chosen (for the knowledge base) is much more critical in inductive reasoning of the sort we consider than it is in traditional deductive reasoning.

1 Introduction

An intelligent agent must be able to use its accumulated knowledge to help it reason about the situation it is currently facing. Consider a doctor who has a knowledge base consisting of statistical and first-order information regarding symptoms and diseases, and some specific information regarding a particular patient. She wants to make an inference regarding the likelihood that the patient has cancer. The inference of such a likelihood, or degree of belief, is an essential step in decision making. We present here a general and principled mechanism for computing degrees of belief. This mechanism has a number of particular realizations which differ in the inferences they support. Through an analysis of these differences and of the principles which underlie the general mechanism, we are able to offer a number of important new insights into this form of reasoning.

[*The work of Fahiem Bacchus was supported by NSERC under their operating grants program and by IRIS. The work of Adam Grove, Joseph Halpern, and Daphne Koller was sponsored in part by the Air Force Office of Scientific Research (AFSC), under Contract F49620-91-C-0080. Adam Grove's work was also supported by an IBM Graduate Fellowship. The United States Government is authorized to reproduce and distribute reprints for governmental purposes.]

To illustrate some of the subtle issues that arise when trying to compute degrees of belief, suppose that the domain consists of American males, and that the agent is interested in assigning a degree of belief to the proposition "Eric (an American male) is (over six feet) tall," given some subset of the following database:

A 20% of American males are tall.
B 25% of Californian males are tall.
C Eric is a Californian male.

A traditional approach to assigning a degree of belief to Tall(Eric) is to find an appropriate class (called the reference class) which includes Eric and for which we have statistical information, and use the statistics for that class to compute an appropriate degree of belief for Eric.
Thus, if the agent's database consists solely of item A, then this approach would attach a quite reasonable degree of belief of 0.2 to Tall(Eric) using the reference class of American males. This general approach to computing degrees of belief goes under the name direct inference, and dates back to Reichenbach [Rei49], who used the idea in an attempt to reconcile his frequency interpretation of probability with the common practice of attaching probabilities to particular cases. He expounded a principle for direct inference, but did not develop a complete mechanism. Subsequently, a great deal of work has been done on formalizing and mechanizing direct inference [Bac90, Kyb61, Kyb74, Lev80, Pol90, Sal71].

If the database consists only of A, there is only one reference class to which Eric is known to belong, so applying direct inference is easy. In general, however, the particular individual or collection of individuals we are reasoning about will belong to many different classes. We might possess conflicting statistics for some of these classes, and for others we might not possess any statistical information at all. The difficulty with direct inference, then, is how to choose an appropriate reference class. There are a number of issues that arise in such a choice, but we focus here on three particular problems.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Specific Information: Suppose the knowledge base consists of all three items A-C. Now Eric is a member of two reference classes: Americans and Californians. Intuition suggests that in this case we should choose the more specific class, Californians. And indeed, all of the systems cited above embody a preference for more specific information, yielding 0.25 as the degree of belief in Tall(Eric) in this case. However, we must be careful in applying such a preference. For one thing, we must deal with the problem of disjunctive reference classes.
Consider the disjunctive class D consisting of Eric and all non-tall Californian males. Being a subset of Californian males, this is clearly a more specific reference class. If there are many Californians (and thus many non-tall Californians, since 75% of Californians are not tall), using D as the reference class gives a degree of belief for Tall(Eric) that is very close to 0. The answer 0.25 seems far more reasonable, showing that we must be careful about how we interpret the principle of preference for more specific information. We remark that two of the most well-developed systems of direct inference, Kyburg's [Kyb74] and Pollock's [Pol90], address this issue by the ad hoc device of simply outlawing disjunctive reference classes.

Irrelevant Information: Suppose that the knowledge base consists only of items A and C. In this case Eric again belongs to two reference classes, but now we do not have any statistics for the more specific class, Californians. The standard, and plausible, approach is to assign a degree of belief of 0.2 to Tall(Eric). That is, we use the statistics from the superclass, American males; this amounts to assuming that the extra information, that Eric is also Californian, is irrelevant. In the face of no knowledge to the contrary, we assume that the subclass has the same statistics as the superclass.

Sampling: Finally, suppose the knowledge base consists only of B. In this case we have statistical information about Californians, but all we know about Eric is that he is an American. We could assume that Californians are a representative sample of Americans when it comes to male tallness and adopt the statistics we have for the class of Californians, generating a degree of belief of 0.25 in Tall(Eric).
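The arithmetic behind the disjunctive-class pathology is easy to check. The sketch below uses a hypothetical population of 1000 Californian males (the figure is invented for illustration; the paper states no population size) and compares the statistic for Californian males against the statistic for the class D described above.

```python
# Hypothetical numbers (not from the paper): 1000 Californian males,
# 25% of whom are tall.
californians = 1000
tall = californians * 25 // 100      # 250 tall
non_tall = californians - tall       # 750 non-tall

# The disjunctive class D = {Eric} plus all non-tall Californian males.
# D is a genuine subset of the Californian males, yet at most one of its
# members (Eric himself) can possibly be tall.
d_size = non_tall + 1
tall_in_d_at_most = 1

print(tall / californians)           # 0.25 -- the sensible answer
print(tall_in_d_at_most / d_size)    # ~0.0013 -- what class D would dictate
```

The larger the population, the closer D's statistic gets to 0, which is why outlawing such classes by fiat, as Kyburg and Pollock do, leaves the underlying question unanswered.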
The process of finding the "right" reference class, and then assigning degrees of belief using the further assumption that the individual in question is randomly chosen from this class, is one way of going from statistical and first-order information to a degree of belief. But, as we have seen, choosing the right reference class is a complex issue. It is typically accomplished by positing some collection of rules for choosing among competing reference classes, e.g., [Kyb83]. However, such rules are not easy to formulate. More importantly, they do not provide any general principles which can help elucidate our intuitions about how statistics should influence degrees of belief. Indeed, the whole idea of reference class seems artificial; it does not occur at all in the statement of the problem we are trying to solve.

We present a different approach to computing degrees of belief here, one that does not involve finding appropriate reference classes at all. We believe it is a more general, high-level approach, that deals well with the three problems discussed above and, as we show in the full paper, many others besides.

The essential idea is quite straightforward. We view the information in the database as determining a set of situations that the agent considers possible. In order to capture the intuition that the information in our knowledge base is "all we know," we assign each of these possible situations equal probability. After all, our information does not give us any reason to give any of them greater probability than any other. Roughly speaking, the agent's degree of belief in a sentence such as Tall(Eric) is then the fraction of the set of situations where Tall(Eric) is true. The basic idea here is an old one, going back to Laplace [Lap20], and is essentially what has been called the principle of insufficient reason [Kri86] or the principle of indifference [Key21].
Our general method, then, revolves around applying indifference to some collection of possible situations. The method has a number of different realizations, as there are competing intuitions involved in defining a "possible situation." We focus on three particular mechanisms for defining situations, each of which leads to a different method of computing degrees of belief. The differences between the three methods reflect different intuitions about how degrees of belief should be generated from statistics. They also point out the important role of language in this process.

Although the approaches are different, they share some reasonable properties. For example, as we show in Section 3, they all generalize deductive inference, they all agree with direct inference in noncontroversial cases, and they all capture a preference for more specific information. Furthermore, since our general method does not depend on finding reference classes, the problem of disjunctive classes completely disappears. Despite these similarities, the methods differ in a number of significant ways (see Section 3).

So which method is "best"? Since all the methods are defined in terms of assigning equal probability to all possible situations, the question comes down to which notion of "situation" is most appropriate. As we show, that depends on what intuitions we are trying to capture. Our framework allows us to obtain an understanding of when each method is most appropriate. In addition, it gives us the tools to consider other methods, and hybrids of these methods. Because there is no unique "best" answer, this is a matter of major importance.

There has been a great deal of work that can be viewed as attempts to generate degrees of belief given a database. Besides the work on reference classes mentioned above, much of Jaynes's work on maximum entropy [Jay78] can be viewed in this light. Perhaps the
work closest in spirit to ours is that of Carnap [Car52], Johnson [Joh32], and the more recent work of Paris and Vencovska [PV89, PV91] and Goodwin [Goo92]. We compare our work with these others in some detail in the full paper; here we can give only brief hints of the relationship.

2 The Three Methods

We assume that the knowledge base consists of sentences written in a formal language that allows us to express both statistical information and first-order information. In particular, we use a simplified version of a probability logic developed in [Bac90] and [Hal90], which we describe very briefly here. To represent the statistical information, we augment first-order logic by allowing proportion formulas of the form ||ψ(x)||_x, which denotes the proportion of individuals in the domain satisfying ψ when instantiated for x. Notice that this proportion is well defined in any first-order model (over an appropriate vocabulary) if the model's domain is finite; in the following, this will always be the case. For example, ||Californian(x)||_x = .1 says that 10% of the domain elements are Californians, while ||Tall(x)|Californian(x)||_x = .25 says that 25% of Californians are tall, via the standard abbreviation for conditional probabilities (and thus represents assertion B from the introduction).[1]

We want to use the information in the knowledge base to compute a degree of belief. Note that there is an important distinction between statistical information such as "25% of Californian males are tall" and a degree of belief such as "the likelihood that Eric is tall is .25". The former represents real-world data, while the latter is attached by the agent, hopefully using a principled method, to assertions about the world that are, in fact, either true or false.[2] Following [Hal90], we give semantics to degrees of belief in terms of a set of first-order models or possible worlds, together with a probability distribution over this set.

[1] As pointed out in [PV89, GHK92b], we actually want to use "approximately equals" rather than true equality when describing statistical information. If we use true equality, the statement ||Californian(x)||_x = .1 would be false if the domain size were not a multiple of 10. In this paper, we ignore the subtleties involved with approximate equality, since they are not relevant to our discussion.
[2] This distinction between statistical information (frequencies) and degrees of belief has long been noted in studies of the foundations of probability. See, for instance, Carnap's discussion in [Car50].
The degree of belief in a sentence φ is just the probability of the set of worlds where φ is true. For our method the set of possible worlds is easily described: given a vocabulary Φ and domain size N, it is the collection of all first-order structures over the vocabulary Φ with domain {1, ..., N}.

The probability distribution is generated by applying the principle of indifference to equivalence classes of worlds ("situations"). We assign equal probability to every equivalence class, and then, applying the principle of indifference again, we divide up the probability assigned to each class equally among the individual worlds in that class. Alternate realizations of our method arise from different intuitions as to how to group the worlds into equivalence classes. We consider three natural groupings, which lead to the three methods mentioned in the introduction. (Of course, other methods are possible, but we focus on these three for now, deferring further discussion to the full paper.) Once we have the probability distribution on the worlds, we compute the degree of belief in φ given a database KB by using straightforward conditional probability: it is simply the probability of the set of worlds where φ ∧ KB is true divided by the probability of the set of worlds where KB is true.
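The conditional-probability step can be sketched by brute-force enumeration. The code below is an illustrative sketch written for this article, not the authors' implementation: it handles only unary predicates and constants, uses the random-worlds prior (every world equally likely) for concreteness, and checks the noncontroversial direct-inference case where the knowledge base fixes ||P(x)||_x = 0.2 on a domain of size 10.

```python
from itertools import product

def worlds(n, n_preds, n_consts):
    """Enumerate all first-order structures over domain {0, ..., n-1} with
    n_preds unary predicates and n_consts constants.  A world is a pair
    (preds, consts): preds[i] is predicate i's denotation as a frozenset,
    consts[j] is the domain element denoted by constant j."""
    domain = range(n)
    # Each predicate denotation is an arbitrary subset, encoded as a bitmask.
    for masks in product(range(2 ** n), repeat=n_preds):
        preds = [frozenset(d for d in domain if m >> d & 1) for m in masks]
        for consts in product(domain, repeat=n_consts):
            yield preds, consts

def degree_of_belief(phi, kb, n, n_preds, n_consts):
    """Fraction of the KB-worlds that also satisfy phi, with every world
    weighted equally (i.e., the random-worlds prior Pr_w^N)."""
    sat_kb = sat_both = 0
    for w in worlds(n, n_preds, n_consts):
        if kb(w):
            sat_kb += 1
            if phi(w):
                sat_both += 1
    return sat_both / sat_kb

# Direct inference in the noncontroversial case: the KB says that exactly
# 20% of a 10-element domain satisfies P, and we ask about P(c).
n = 10
kb = lambda w: len(w[0][0]) == 2    # ||P(x)||_x = 0.2
phi = lambda w: w[1][0] in w[0][0]  # P(c)
print(degree_of_belief(phi, kb, n, 1, 1))  # -> 0.2
```

The random-structures and random-propensities priors differ only in how worlds are weighted; instead of counting worlds one could sum each world's weight 1/(number of situations) × 1/(size of its situation), with the situation map chosen per method.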
In this paper we restrict attention to vocabularies having only constants and unary predicates. Our methods make perfect sense when applied to richer vocabularies (see the full paper), but the characterizations of these methods given in Section 3 hold only in the unary case.

In the first approach, which we call the random-worlds approach, we identify situations and worlds. Hence, by the principle of indifference, each world is assigned equal probability.

In the second approach, which we call the random-structures approach, we group into a single equivalence class worlds that are isomorphic with respect to the predicates in the vocabulary.[3] The intuition underlying this approach is that individuals with exactly the same properties are in some sense indistinguishable, so worlds where they are simply renamed should be treated as being equivalent. Suppose, for example, that our vocabulary consists of a unary predicate P and a constant c, and that the domain size is N. Since P can denote any subset, and c any member, of the domain, there will be N·2^N possible worlds, each with a distinct interpretation of the vocabulary. In the random-worlds approach each world is an equally likely situation with equal probability. In the random-structures approach, on the other hand, worlds in which the cardinality of P's denotation is the same are isomorphic, and thus will all be grouped into a single situation. Hence, there are only N + 1 equally likely situations, one for each possible size r of P's denotation. Each such situation is assigned probability 1/(N + 1) and that probability is divided equally among the N·C(N, r) worlds in that situation. So, according to random-worlds, it is much more likely that the number of individuals satisfying P is ⌊N/2⌋ than that it is 1, whereas for random-structures these two possibilities are equally likely.

More generally, suppose the vocabulary consists of the unary predicate symbols P_1, ..., P_k and the constants c_1, ..., c_ℓ. We can consider the 2^k atoms that can be formed from the predicate symbols, namely, the formulas of the form Q_1 ∧ ... ∧ Q_k, where each Q_i is either P_i or ¬P_i. If we have a domain of size N, there will be N^ℓ·(2^N)^k possible worlds, corresponding to all choices for the denotations of the ℓ constants and k predicates. Given two possible worlds w_1 and w_2, it is easy to see that they are isomorphic with respect to the predicates if and only if for every atom the number of individuals satisfying that atom in w_1 is the same as in w_2. This means that a random-structures situation is completely described by a tuple (d_1, ..., d_{2^k}) with d_1 + ... + d_{2^k} = N, specifying how many domain elements satisfy each atom. Using standard combinatorics, it can be shown that there are exactly C(N + 2^k − 1, 2^k − 1) such situations.

The third method we consider, which we call the random-propensities approach, attempts to measure the propensity of an individual to satisfy each of the predicates. If our vocabulary contains the unary predicates P_1, ..., P_k and the domain has size N, then a situation in this approach is specified by a tuple (e_1, ..., e_k); the worlds contained in this situation are all those where e_i of the domain elements satisfy P_i, for all i.[4] Intuitively, e_i/N is a measure of the propensity of an individual to have property P_i. It is not difficult to see that there are (N + 1)^k distinct situations.

Suppose, for example, that the vocabulary consists of the unary predicates P and Q and that the domain consists of three elements {a, b, c}. There are (2^3)^2 = 64 distinct possible worlds, one for each choice of denotation for P and Q. In the random-worlds approach each of these worlds will be assigned probability 1/64. In the random-structures approach there are C(3 + 2^2 − 1, 2^2 − 1) = C(6, 3) = 20 distinct situations. Each will be given probability 1/20 and that probability will then be divided equally among the worlds in the situation. For example, the world w that assigns P the denotation {a} and Q the denotation {a, c} belongs to the situation in which the atom P ∧ ¬Q has size 0 and all other atoms have size 1. There are 6 worlds in this situation, so w will be assigned probability 1/120. In the random-propensities approach there are (3 + 1)^2 = 16 distinct situations. Each will be given probability 1/16 to be divided equally among the worlds in the situation. For example, one of these situations is specified by the tuple (1, 2), consisting of all those worlds where one element satisfies P and two satisfy Q. This situation contains 9 worlds, including the world w specified above. Hence, under random-propensities w is assigned probability 1/144.

We remark that two of our three methods, the random-worlds method and the random-structures method, are not original to us. They essentially date back to Laplace [Lap20], and were investigated in some detail by Johnson [Joh32] and Carnap [Car50, Car52]. (These two methods correspond to Carnap's state-description and structure-description techniques, respectively.) We believe that the random-propensities method is new; as we shall show, it has some quite attractive properties.

[3] Note that we only consider the predicate denotations when deciding on a world's equivalence class, and ignore the denotations of constants. This is consistent with Carnap's approach [Car52], and is crucial for our results. See the full paper for further discussion of this point.
[4] Note that again we consider only the predicate denotations when deciding on a world's equivalence class.
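The counts in this example can be verified by enumeration. The sketch below (written for this article, not from the paper) reproduces the 64 worlds, the 20 structure situations, the 16 propensity situations, and the probabilities of the world w under each of the three methods.

```python
from fractions import Fraction
from collections import Counter

domain = ('a', 'b', 'c')

def subsets(s):
    """All subsets of s, as frozensets."""
    out = [frozenset()]
    for x in s:
        out += [t | {x} for t in out]
    return out

# A world here is a pair of denotations for P and Q; (2^3)^2 = 64 of them.
all_worlds = [(P, Q) for P in subsets(domain) for Q in subsets(domain)]
assert len(all_worlds) == 64

def structure(w):
    """random-structures situation: how many elements satisfy each atom."""
    P, Q = w
    return (len(P & Q), len(P - Q), len(Q - P), len(domain) - len(P | Q))

def propensity(w):
    """random-propensities situation: (|P|, |Q|)."""
    P, Q = w
    return (len(P), len(Q))

structures = Counter(structure(w) for w in all_worlds)
propensities = Counter(propensity(w) for w in all_worlds)
print(len(structures), len(propensities))  # -> 20 16

# The world w with P = {a}, Q = {a, c}, under each of the three methods.
w = (frozenset({'a'}), frozenset({'a', 'c'}))
pr_worlds = Fraction(1, 64)
pr_struct = Fraction(1, len(structures)) / structures[structure(w)]
pr_prop = Fraction(1, len(propensities)) / propensities[propensity(w)]
print(pr_worlds, pr_struct, pr_prop)  # -> 1/64 1/120 1/144
```

Note that, as in footnotes 3 and 4, the constant c plays no role in the grouping, so it is omitted from the world representation here.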
If KB is a formula describing the knowledge base, φ is a first-order sentence, and N is the domain size, we denote by Pr_w^N(φ|KB), Pr_s^N(φ|KB), and Pr_p^N(φ|KB) the degree of belief in φ given knowledge base KB according to the random-worlds, random-structures, and random-propensities methods, respectively. We write Pr_*^N(φ|KB) in those cases where the degree of belief is independent of the approach. We often do not know the precise domain size N, but do know that it is large. This leads us to consider the asymptotic behavior of the degree of belief as N gets large. We define Pr_w^∞(φ|KB) = lim_{N→∞} Pr_w^N(φ|KB); Pr_s^∞ and Pr_p^∞ are similarly defined.[5]

Our methods can also be viewed as placing different priors on the set of first-order structures. Viewed in this way, they are instances of Bayesian inference, since we compute degrees of belief by conditioning on this prior distribution, given our database. But the deepest problem when applying Bayesian inference is always finding the prior distribution, or, even more fundamentally, finding the appropriate space of possibilities. This is precisely the problem we address here.

3 Understanding the Methods

As a first step to understanding the three techniques, we look for general properties characterizing their behavior. Then we examine some specific properties which tell us how the techniques behave in various paradigmatic reasoning situations.

3.1 Characterizing the Methods

Recall that when the vocabulary consists of k unary predicates, these predicates define 2^k mutually exclusive and exhaustive atoms, A_1, ..., A_{2^k}. Every possible world defines a tuple (p_1, ..., p_{2^k}) where p_i is the proportion of domain individuals that satisfy the atom A_i. Given a database KB we can form the set of tuples defined by the set of worlds which satisfy KB; this set can be viewed as the set of proportions consistent with KB. Let S(KB) denote the closure of this set.
We can often find a single point in S(KB) that will characterize the degrees of belief generated by our different methods. In the random-worlds method this is the maximum entropy point of S(KB) (see [GHK92b, PV89]). In the random-structures method, the characteristic point is the center of mass of S(KB). Finally, in the random-propensities method, the characteristic point maximizes the statistical independence of the predicates in the vocabulary. We formalize these latter two characterizations and describe the conditions under which they hold in the full paper.[6] When applicable, the characteristic point determines the degree of belief in φ given KB; we construct a particular probability structure (described also in [GHK92b]) whose proportions are exactly those defined by the characteristic point. The probability of φ given KB is exactly the probability of φ given KB in this particular structure.

Suppose that the vocabulary consists only of {P, c}, and the database KB is simply ||P(x)||_x ∈ [α, β]. What does the above tell us about the degree of belief in P(c) under the three methods? In this case, there are only two atoms, P and ¬P, and S(KB) consists of all pairs (p_1, p_2) such that p_1 ∈ [α, β]. Since the random-worlds method tries to maximize entropy, it focuses on the pair (p_1, p_2) where p_1 is as close as possible to 1/2. The random-structures method considers the center of mass of the region of consistent proportions, which is clearly attained when p_1 = (α + β)/2. Since there is only one predicate in the vocabulary, the "maximum independence" characterization of the random-propensities method gives no useful information here. However, it can be shown that for this vocabulary, the random-propensities method and the random-structures method give the same answer.

[5] There is no guarantee that these limits exist; in complex cases, they may not. As our examples suggest, in typical cases they do (see [GHK92b]).
Thus, we get Pr_w^∞(P(c)|KB) = γ, where γ ∈ [α, β] minimizes |γ − 1/2|, and Pr_s^∞(P(c)|KB) = Pr_p^∞(P(c)|KB) = (α + β)/2.[7]

Notice also that we were careful to say that the vocabulary is {P, c} here. Suppose the vocabulary were larger, say {P, Q, c, d}. This change has no impact on the random-worlds and the random-propensities methods; we still get the same answers as for the smaller vocabulary. In general, the degree of belief in φ given KB does not depend on the vocabulary for these two methods. As shown in [GHK92a], this is not true in the case of the random-structures method. We return to this point in the next section.

[6] The conditions required vary. Roughly speaking, the maximum-entropy characterization of random-worlds almost always works in practice; the center-of-mass technique finds degrees of belief for a smaller class of formulas, although there are few restrictions on KB; maximum-independence works for most formulas, but is not sufficient to handle the fairly common case where S(KB) contains several points that maximize independence equally.
[7] All of our methods give point-valued degrees of belief. In examples like this it may be desirable to allow interval-valued degrees of belief; we defer discussion to the full paper.

3.2 Properties of the Methods

As we mentioned in the introduction, all of our methods share some reasonable properties.

1) Deductive inference: All three methods generalise deductive inference; any fact that follows from the database is given degree of belief 1.

Proposition 1: If ⊨ KB ⇒ φ then Pr_*^∞(φ|KB) = 1.

2) Direct inference: All three methods agree with direct inference in noncontroversial cases. To be precise, say the reference class C is specified by some formula ψ(x); we have statistical information about the proportion of C's that satisfy some property φ, e.g., the information ||φ(x)|ψ(x)||_x ∈ [α, β]. All we know about a constant c is that it belongs to the class C, i.e., we know only ψ(c).
In this case we have only one reference class, and direct inference would use the statistics from this class to generate a degree of belief in φ(c). In such cases, all three of our methods also reflect the statistics we have for C.

Proposition 2: Let c be a constant, and let φ(x), ψ(x) be formulas that do not mention c. Then Pr_*^∞(φ(c) | ||φ(x)|ψ(x)||_x ∈ [α, β] ∧ ψ(c)) ∈ [α, β].

Therefore, in the example from the introduction, if the database consists only of A, then we will obtain a degree of belief of 0.2 from all three methods.

3) Specific Information: Suppose we have statistics for φ relative to classes C_1 and C_2. If C_1 is more specific, then we generally prefer to use its statistics.

Proposition 3: Suppose KB has the form ||φ(x)|ψ_1(x)||_x ∈ [α_1, β_1] ∧ ||φ(x)|ψ_2(x)||_x ∈ [α_2, β_2] ∧ ψ_1(c) ∧ ∀x(ψ_1(x) ⇒ ψ_2(x)), where φ, ψ_1, and ψ_2 do not mention c. Then Pr_*^∞(φ(c)|KB) ∈ [α_1, β_1].

This result demonstrates that if the knowledge base consists of items A-C from the introduction, then all three methods generate a degree of belief of 0.25 in Tall(Eric), preferring the information about the more specific class, Californians.

4) Irrelevant information: Often, databases contain information that appears to be irrelevant to the problem at hand. We usually want the computed degree of belief to be unaffected by this extra information. This turns out to be the case for the random-worlds and the random-propensities methods, but not for the random-structures method. The proposition below formalizes one special case of this phenomenon.

Proposition 4: Let ψ(x) be a formula not mentioning c or P, let KB be (||P(x)|ψ(x)||_x ∈ [α, β]) ∧ ψ(c), and let φ be a formula not mentioning P.
Then Pr_w^∞(P(c)|KB) = Pr_w^∞(P(c)|KB ∧ φ) = Pr_p^∞(P(c)|KB) = Pr_p^∞(P(c)|KB ∧ φ) ∈ [α, β].

This result demonstrates that if our knowledge base consists of items A and C from the introduction, then we obtain a degree of belief of 0.2 in Tall(Eric) using either random-worlds or random-propensities; these methods allow us to inherit statistics from superclasses, thus treating subclasses for which we have no special statistical information as irrelevant. In contrast, the random-structures method assigns a degree of belief of 0.5 to Tall(Eric) in this example. This can be quite reasonable in certain situations, since if the subclass is worthy of a name, it might be special in some way, and our statistics for the superclass might not apply.

5) Sampling: Suppose ||Q(x)||_x = β and ||P(x)|Q(x)||_x = α. Intuitively, here we want to think of β as being small, so that Q defines a small sample of the total domain. We know the proportion of P's in this small sample is α. Can we use this information when the appropriate reference class is the entire domain? In a sense, this is a situation which is dual to the previous one, since the reference class we are interested in is larger than that for which we have statistics (Q). One plausible choice in this case is to use the statistics from the smaller class; i.e., treat it as sample data from which we can induce information relevant to the superset. This is what is done by the random-propensities method. The random-worlds method and the random-structures method enforce a different intuition; since we have no information whatsoever as to the proportion of P's among the individuals satisfying ¬Q, we assume by default that it is 1/2. Thus, on a fraction β of the domain, the proportion of P's is α; on the remaining fraction (1 − β) of the domain, the proportion of P's is 1/2. This says that the proportion of P's is αβ + (1 − β)/2. Formally:

Proposition 5: Let KB be ||P(x)|Q(x)||_x = α ∧ ||Q(x)||_x = β.
Then Pr_p^∞(P(c)|KB) = α and Pr_w^∞(P(c)|KB) = Pr_s^∞(P(c)|KB) = αβ + (1 − β)/2.

There are reasonable intuitions behind both answers here. The first, as we have already said, corresponds to sampling. For the second, we could argue that since the class Q is sufficiently distinguished to merit a name in our language, it might be dangerous to treat it as a random sample.

These propositions are just a small sample of the patterns of reasoning encountered in practice. But they demonstrate that the issues we raised in the introduction are handled well by our approach. Furthermore, in those cases where the methods differ, they serve to highlight competing intuitions about what the "reasonable inference" is. The fact that our techniques automatically give reasonable answers for these basic problems leads us to believe that our approach is a useful way to attack the problem.

4 Understanding the Alternatives

How do we decide which, if any, of our three techniques is appropriate in a particular situation? We do not have a universal criterion. Nevertheless, as we now show, the different methods make implicit assumptions about language and the structure of the domain. By examining these assumptions we can offer some suggestions as to when one method might be preferable to another.

Recall that random-structures groups isomorphic worlds together, in effect treating the domain elements as indistinguishable. If the elements are distinguishable, random-worlds may be a more appropriate model. We remark that this issue of distinguishability is of crucial importance in statistical physics and quantum mechanics. However, there are situations where it is not as critical. In particular, we show in [GHK92a] and in the full paper that, as long as there are "enough" predicates in the vocabulary, the random-worlds method and the random-structures method are essentially equivalent.
"Enough" here means "sufficient" to distinguish the elements in the domain; in a domain of size N, it turns out that 3 log N unary predicates suffice. Hence, the difference between distinguishability and indistinguishability can often be explained in terms of the richness of our vocabulary.

The random-propensities method gives the language an even more central role. It assumes that there is information implicit in the choice of predicates. To illustrate this phenomenon, consider the well-known "grue/bleen" paradox [Goo55]. A person who has seen many emeralds, all of which were green, might place a high degree of belief in "all emeralds are green." Now suppose that, as well as the concepts green and blue, we also consider "grue" (green before the year 2000, and blue after) and "bleen" (blue before 2000, and green after). All the evidence for emeralds being green that anyone has seen is also evidence for emeralds being grue, but no one believes that "all emeralds are grue." Inferring "grueness" seems unintuitive. This suggests that inductive reasoning must go beyond logical expressiveness to use judgements about which predicates are most "natural."

This intuition is captured by the random-propensities approach. Consider the following simplified version of the "grue/bleen" paradox. Let the vocabulary Φ consist of two unary predicates, G (for "green") and B (for "before the year 2000"), and a constant c. We identify "blue" with "not green" and so take "Grue" to be (G ∧ B) ∨ (¬G ∧ ¬B).⁴ The domain elements are observations of emeralds. If our database KB is ||G(x)|B(x)||_x = 1, then it can be shown that Pr_∞^p(G(c)|KB ∧ ¬B(c)) = 1 and Pr_∞^p(Grue(c)|KB ∧ ¬B(c)) = 0. That is, the method "learns" natural concepts such as "greenness" and not unnatural ones such as "grueness". By way of contrast, Pr_∞^w(G(c)|KB ∧ ¬B(c)) = Pr_∞^w(Grue(c)|KB ∧ ¬B(c)) = Pr_∞^s(G(c)|KB ∧ ¬B(c)) = Pr_∞^s(Grue(c)|KB ∧ ¬B(c)) = 0.5.
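The 0.5 value for the random-worlds method can likewise be checked by brute force on a small domain. The sketch below is my own illustration (not from the paper); it enumerates worlds where every B-element is G, conditions on ¬B(c), and counts worlds satisfying G(c) and Grue(c).

```python
from fractions import Fraction
from itertools import product

# Illustrative brute-force check (not from the paper) that random-worlds
# treats "green" and "grue" symmetrically: conditioned on KB (every B is
# G) and ~B(c), both G(c) and Grue(c) get degree of belief 1/2.
def degree(n, phi, c=0):
    total = sat = 0
    # a world assigns each of the n elements a pair (g, b)
    for world in product(product([True, False], repeat=2), repeat=n):
        if any(b and not g for (g, b) in world):   # violates ||G|B|| = 1
            continue
        if not any(b for (_, b) in world):         # ||G|B|| undefined; skipped
            continue                               # for simplicity
        g_c, b_c = world[c]
        if b_c:                                    # condition on ~B(c)
            continue
        total += 1
        if phi(g_c, b_c):
            sat += 1
    return Fraction(sat, total)

green = lambda g, b: g
grue = lambda g, b: (g and b) or (not g and not b)
assert degree(5, green) == Fraction(1, 2)
assert degree(5, grue) == Fraction(1, 2)
```

Both answers are exactly 1/2 because flipping G(c) (with B(c) false) is a bijection on the admitted worlds, and given ¬B(c), Grue(c) is simply ¬G(c).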
To understand this phenomenon, recall that the random-worlds and random-structures methods treat "grue" and "green" symmetrically; they are both the union of two atoms. The random-propensities method, on the other hand, gives "green" special status as a predicate in the vocabulary.

⁴This does not capture the full complexity of the paradox, since the true definition of "grue" requires the emerald to change color over time.

Bacchus, et al. 607

The importance of the choice of predicates in the random-propensities approach can be partially explained in terms of an important connection between it and the random-worlds approach. Suppose we are interested in the predicate Tall. A standard approach to defining the semantics of Tall is to order individuals according to height, and choose a cutoff point such that an individual is considered "tall" exactly if he is taller than the cutoff. It turns out that if we add this implicit information about the meaning of Tall to the knowledge base, and use the random-worlds approach, we obtain the random-propensities approach. Intuitively, the location of the cutoff point reflects the propensity of a random individual to be tall. Many predicates can be interpreted in a similar fashion, and random-propensities might be an appropriate method in these cases. However, many problems will include different kinds of predicates, requiring different treatment. Therefore, in most practical situations, a combination of the methods would almost certainly be used.

In conclusion, we believe that we have offered a new approach to the problem of computing degrees of belief from statistics. Our approach relies on notions that seem to be much more fundamental than the traditional notion of "choosing the right reference class." As should be clear from our examples, none of the three methods discussed here is universally applicable. Instead, they seem to represent genuine alternative intuitions applicable to different situations.
We feel that the elucidation of these alternative intuitions is in itself a useful contribution.

References

[Bac90] F. Bacchus. Representing and Reasoning with Probabilistic Knowledge. MIT Press, Cambridge, MA, 1990.
[Car50] R. Carnap. Logical Foundations of Probability. University of Chicago Press, Chicago, 1950.
[Car52] R. Carnap. The Continuum of Inductive Methods. University of Chicago Press, 1952.
[GHK92a] A. Grove, J. Y. Halpern, and D. Koller. Asymptotic conditional probabilities for first-order logic. In Proc. 24th ACM Symp. on Theory of Computing, 1992.
[GHK92b] A. Grove, J. Y. Halpern, and D. Koller. Random worlds and maximum entropy. In Proc. 7th IEEE Symp. on Logic in Computer Science, 1992.
[Goo55] N. Goodman. Fact, Fiction, and Forecast, chapter III. Harvard University Press, 1955.
[Goo92] S. D. Goodwin. Second order direct inference: A reference class selection policy. International Journal of Expert Systems: Research and Applications, 1992. To appear.
[Hal90] J. Y. Halpern. An analysis of first-order logics of probability. Artificial Intelligence, 46:311-350, 1990.
[Jay78] E. T. Jaynes. Where do we stand on maximum entropy? In R. D. Levine and M. Tribus, editors, The Maximum Entropy Formalism, pages 15-118. MIT Press, Cambridge, MA, 1978.
[Joh32] W. E. Johnson. Probability: The deductive and inductive problems. Mind, 41(164):409-423, 1932.
[Key21] J. M. Keynes. A Treatise on Probability. Macmillan, London, 1921.
[Kri86] J. von Kries. Die Principien der Wahrscheinlichkeitsrechnung. Freiburg, 1886.
[Kyb61] H. E. Kyburg, Jr. Probability and the Logic of Rational Belief. Wesleyan University Press, Middletown, Connecticut, 1961.
[Kyb74] H. E. Kyburg, Jr. The Logical Foundations of Statistical Inference. D. Reidel, Dordrecht, Netherlands, 1974.
[Kyb83] H. E. Kyburg, Jr. The reference class. Philosophy of Science, 50(3):374-397, 1983.
[Lap20] P. S. de Laplace.
Essai Philosophique sur les Probabilités. 1820. English translation: Philosophical Essay on Probabilities, Dover Publications, New York, 1951.
[Lev80] I. Levi. The Enterprise of Knowledge. MIT Press, Cambridge, MA, 1980.
[Pol90] J. L. Pollock. Nomic Probabilities and the Foundations of Induction. Oxford University Press, Oxford, 1990.
[PV89] J. B. Paris and A. Vencovska. On the applicability of maximum entropy to inexact reasoning. International Journal of Approximate Reasoning, 3:1-34, 1989.
[PV91] J. B. Paris and A. Vencovska. A method for updating justifying minimum cross entropy. Manuscript, 1991.
[Rei49] H. Reichenbach. Theory of Probability. University of California Press, Berkeley, 1949.
[Sal71] W. Salmon. Statistical Explanation and Statistical Relevance. University of Pittsburgh Press, Pittsburgh, 1971.

608 Representation and Reasoning: Belief
A Logic for Revision and Subjunctive Queries

Craig Boutilier
Department of Computer Science
University of British Columbia
Vancouver, British Columbia
CANADA, V6T 1Z2
email: cebly@cs.ubc.ca

Abstract

We present a logic for belief revision in which revision of a theory by a sentence is represented using a conditional connective. The conditional is not primitive, but rather defined using two unary modal operators. Our approach captures and extends the classic AGM model without relying on the Limit Assumption. Reasoning about counterfactual or hypothetical situations is also crucial for AI. Existing logics for such subjunctive queries are lacking in several respects, however, primarily in failing to make explicit the epistemic nature of such queries. We present a logical model for subjunctives based on our logic of revision that appeals explicitly to the Ramsey test. We discuss a framework for answering subjunctive queries, and show how integrity constraints on the revision process can be expressed.

Introduction

An important and well-studied problem in philosophical logic, artificial intelligence and database theory is that of modeling theory change or belief revision. That is, given a knowledge base KB, we want to characterize semantically the set that results after learning a new fact α.
However, the question of how to revise KB is important not just in the case of changing information or mistaken premises, but also when we want to investigate questions of the form "What if A were true?" A subjunctive conditional A > B is one of the form¹ "If A were the case then B would be true." Subjunctives have been widely studied in philosophy and it is generally accepted that (some variant of) the Ramsey test is adequate for evaluating the truth of such conditionals:

First add the antecedent (hypothetically) to your stock of beliefs; second make whatever adjustments are required to maintain consistency (without modifying the hypothetical belief in the antecedent); finally, consider whether or not the consequent is true. (Stalnaker 1968, p.44)

¹At least, in "deep structure."

The connection to belief revision is quite clearly spelled out in this formulation of the Ramsey test: to evaluate a subjunctive conditional A > B, we revise our beliefs to include A and see if B is believed. If we take KB to represent some initial state of knowledge, a characterization of subjunctive reasoning must include an account of how to revise KB with new information.

In this paper, we will develop a logic for belief revision using a conditional connective ⇒, where A ⇒ B is interpreted roughly as "If KB were revised by A, then B would be believed." The connective will not be primitive however; instead it is defined in terms of two unary modal operators, which refer to truth at accessible and inaccessible worlds. Our model of revision will satisfy the classic AGM postulates and will be general enough to represent any AGM revision function. However, our approach is based on a very expressive logical calculus rather than extra-logical postulates, and can be used to express natural constraints on the revision process. Furthermore, this is accomplished without reliance on the Limit Assumption.
We will then use this logic to develop a framework in which subjunctive queries of a knowledge base can be answered, and show that it improves on existing subjunctive logics and systems in several crucial respects. Finally, we provide a semantic characterization of integrity constraints suitable for this type of subjunctive reasoning.

Revision and the AGM Postulates

Recently, work on the logic of theory change has been adopted by the AI community for use in the task of belief revision. By far the most influential approach to revision has been that of Alchourron, Gärdenfors and Makinson (1985; 1988), which we refer to as the AGM theory of revision. We assume belief sets to be deductively closed sets of sentences, and for concreteness we will assume that the underlying logic of beliefs is classical propositional logic, CPL. We let ⊨ and Cn denote classical entailment and consequence, respectively, and use K to denote arbitrary belief sets. If K = Cn(KB) for some finite set of sentences KB, we say K is finitely specified by KB.

Revising a belief set K is required when new information must be accommodated with these beliefs. If K ⊭ ¬A, learning A is relatively unproblematic, as the new belief set Cn(K ∪ {A}) seems adequate for modeling this change. This process is known as expansion and the expanded belief set is denoted K+_A. More troublesome is the revision of K by A when K ⊢ ¬A. Some beliefs in K must be given up before A can be accommodated. The problem is in determining which part of K to give up, as there are a multitude of choices. Furthermore, in general, there are no logical grounds for choosing which of these alternative revisions is acceptable (Stalnaker 1984), the issue depending largely on context.

Boutilier 609
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.
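Representing a finitely specified belief set by its set of propositional models makes expansion trivial to compute, and also shows concretely why expansion alone cannot handle the case K ⊢ ¬A. The sketch below is my own illustration (the names and encoding are assumptions, not the paper's):

```python
from itertools import product

# Illustrative sketch (encoding is mine): a finitely specified belief set
# K = Cn(KB) is represented by the set of valuations satisfying KB.
ATOMS = ("A", "B")

def models(formula):
    """All valuations over ATOMS (as tuples) satisfying the predicate."""
    return {vals for vals in product([True, False], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, vals)))}

def expand(k_models, a):
    """Expansion K+_A: intersect the models of K with those of A."""
    return k_models & models(a)

def entails(k_models, phi):
    """K |- phi iff phi holds in every model of K (vacuously if none)."""
    return all(phi(dict(zip(ATOMS, vals))) for vals in k_models)

K = models(lambda v: v["B"])                 # KB = {B}
assert entails(expand(K, lambda v: v["A"]), lambda v: v["A"] and v["B"])
# Expanding by ~B leaves no models: the belief set collapses to Cn(_|_),
# which entails everything -- hence the need for genuine revision.
bad = expand(K, lambda v: not v["B"])
assert bad == set() and entails(bad, lambda v: v["A"])
```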
Fortunately, there are some logical criteria for reducing this set of possibilities, the main criterion for preferring certain choices being that of minimal change. Informational economy dictates that as "few" beliefs as possible from K be discarded to facilitate belief in A (Gärdenfors 1988), where by "few" we intend that, as much as possible, the informational content of K be kept intact. While pragmatic considerations will often enter into these deliberations, the main emphasis of the work of AGM is in logically delimiting the scope of acceptable revisions. To this end, the AGM postulates, given below, are maintained to hold for any reasonable notion of revision (Gärdenfors 1988). We use K*_A to denote the belief set that results from the revision of K by A, and ⊥ to denote falsity.

(R1) K*_A is a belief set.
(R2) A ∈ K*_A.
(R3) K*_A ⊆ K+_A.
(R4) If ¬A ∉ K then K+_A ⊆ K*_A.
(R5) K*_A = Cn(⊥) iff ⊨ ¬A.
(R6) If ⊨ A ≡ B then K*_A = K*_B.
(R7) K*_{A∧B} ⊆ (K*_A)+_B.
(R8) If ¬B ∉ K*_A then (K*_A)+_B ⊆ K*_{A∧B}.

Of particular interest are (R3) and (R4), which taken together assert that if A is consistent with K then K*_A should merely be the expansion of K by A. This seems to reflect our intuitions about informational economy, that beliefs should not be given up gratuitously.

Revision and Subjunctive Conditionals

Counterfactuals and subjunctives have received a great deal of attention in the philosophical literature, one classic work being that of Lewis (1973). A number of people have argued that these conditionals have an important role to play in AI, logic programming and database theory. Bonner (1988) has proposed a logic for hypothetical reasoning in which logic programs or deductive databases are augmented with hypothetical implications. Ginsberg (1986) has identified a number of areas in AI in which counterfactuals may play an important role in the semantic analysis of various tasks (e.g., planning, diagnosis).
He proposes a system for reasoning about counterfactuals based on the ideas of Lewis. Unfortunately, this model suffers from certain shortcomings, including a sensitivity to the syntactic structure of KB. Jackson (1989) considers the problems with this approach and presents a model-theoretic system for counterfactual reasoning based on the possible models approach to update of Winslett (1990). Again, this system is extra-logical in nature, and is committed to specific minimality criteria.

The systems of Ginsberg and Jackson both take very seriously the idea that counterfactuals are intimately tied to belief revision. However, this connection had not gone unappreciated by the revision community. Gärdenfors (1988) provides an explicit postulate for revision and conditional reasoning based on the Ramsey test. If we assume that conditionals can be part of our belief sets, a concise statement of the Ramsey test is

(RT) A > B ∈ K iff B ∈ K*_A.

Gärdenfors also describes a formal semantics for conditionals. Variants of postulates (R1) through (R8), together with (RT), determine a conditional logic based on a "revision style" semantics that corresponds exactly to Lewis's (1973) counterfactual logic VC.

A Conditional for Revision

The Modal Logic CO* We now present a modal logic for revision in which we define a conditional connective ⇒. A ⇒ B is read "If (implicit theory) KB is revised by A, then B will be believed." The modal logic CO is based on a standard propositional modal language (over variables P) augmented with an additional modal operator ⊟. The sentence ⊟α is read "α is true at all inaccessible worlds" (in contrast to the usual □α that refers to truth at accessible worlds). A CO-model is a triple M = ⟨W, R, φ⟩, where W is a set of worlds with valuation φ and R is an accessibility relation over W. We insist that R be transitive and connected.² Satisfaction is defined in the usual way, with the truth of a modal formula at a world defined as:
1. M ⊨_w □α iff for each v such that wRv, M ⊨_v α.
2. M ⊨_w ⊟α iff for each v such that not wRv, M ⊨_v α.

We define several new connectives as follows: ◇α =df ¬□¬α; ⟐α =df ¬⊟¬α; ⊡α =df □α ∧ ⊟α; and ⟠α =df ◇α ∨ ⟐α. It is easy to verify that these connectives have the following truth conditions: ◇α (⟐α) is true at a world iff α holds at some accessible (inaccessible) world; ⊡α (⟠α) holds iff α holds at all (some) worlds. The following set of axioms and rules is complete for CO (Boutilier 1991):

K  □(A ⊃ B) ⊃ (□A ⊃ □B)
K' ⊟(A ⊃ B) ⊃ (⊟A ⊃ ⊟B)
T  □A ⊃ A
4  □A ⊃ □□A
S  A ⊃ ⊟◇A
H  ⟠(□A ∧ ⊟B) ⊃ ⊡(A ∨ B)
Nes From A infer ⊡A.
MP From A ⊃ B and A infer B.

²R is (totally) connected if wRv or vRw for any v, w ∈ W (this implies reflexivity). CO was first presented in (Boutilier 1991) to handle the problem of irrelevance in default reasoning.

For the purposes of revision, we consider the extension of CO based on the class of CO-models in which all propositional valuations are represented in W; that is, {f : f maps P into {0,1}} ⊆ {w* : w ∈ W}.³ The logic CO*, complete for this class of structures, is the smallest extension of CO containing instances of the following schema:

LP ⟠α for all satisfiable propositional α.

We note that CO*-structures consist of a totally ordered set of clusters of mutually accessible worlds.

Revision as a Conditional A key observation of Grove (1988) is that revision can be viewed as an ordering on possible worlds reflecting an agent's preference on epistemically possible states of affairs. We take this as a starting point for our semantics, based on structures consisting of a set of possible worlds W and a binary accessibility relation R over W. Implicit in any such structure for revision will be some theory of interest or belief set K that is intended as the object of revision. We return momentarily to the problem of specifying K within the structure.
The interpretation of R is as follows: wRv iff v is as plausible as w given theory K. As usual, v is more plausible than w iff wRv but not vRw. Plausibility is a pragmatic measure that reflects the degree to which one would accept w as a possible state of affairs given that belief in K may have to be given up. If v is more plausible than w, loosely speaking, v is "more consistent" with our beliefs than w, and is a preferable alternative world to adopt. This view may be based on some notion of comparative similarity, for instance.⁴ We take as minimal requirements that R be reflexive and transitive.⁵ Another requirement we adopt in this paper is that of connectedness. In other words, any two states of affairs must be comparable in terms of similarity. If neither is more plausible than the other, then they are equally plausible. We also insist that all worlds be represented in our structures.

³For all w ∈ W, w* is defined as the map from P into {0,1} such that w*(A) = 1 iff w ∈ φ(A); in other words, w* is the valuation associated with w.
⁴See (Lewis 1973; Stalnaker 1984) on this notion.
⁵In (Boutilier 1992a) we develop this minimal logic in the context of preorder revision.

Given these restrictions, we can use CO*-models to represent the revision of a theory K. However, arbitrary CO*-models are inappropriate, for we must insist that those worlds consistent with our belief set K should be exactly those minimal in R. That is, vRw for all v ∈ W iff M ⊨_w K. This condition ensures that no world is more plausible than any world consistent with K, and that all K-worlds are equally plausible. Such a constraint can be expressed in our language as

⊡(KB ⊃ (□KB ∧ ⊟¬KB))    (1)

for any K that is finitely expressible as KB. This ensures that any KB-world sees every other KB-world (⊟¬KB), and that it sees only KB-worlds (□KB). All statements about revision are implicitly evaluated with respect to KB.
We abbreviate sentence (1) as O(KB) and intend it to mean we "only know" KB.⁶ Such models are called K-revision models.

Given this structure, we want the set of A-worlds minimal in R to represent the state of affairs believed when K is revised by A. These are the most plausible worlds, the ones we are most willing to adopt, given A. Of course, such a minimal set may not exist (consider an infinite chain of more and more plausible A-worlds). Still, we can circumvent this problem by adopting a conditional perspective toward revision. Often when revising a belief set, we do not care to characterize the entire new belief state, but only certain consequences of interest of the revised theory (i.e., conditionals). The sentence A ⇒ B should be true if, at any point on the chain of decreasing A-worlds, B holds at all more plausible A-worlds (hence, B is true at some hypothetical limit of this chain). We can define the connective as follows:

A ⇒ B =df ⊡¬A ∨ ⟠(A ∧ □(A ⊃ B)).    (2)

This sentence is true in the trivial case when A is impossible, while the second disjunct states that there is some world w such that A holds and A ⊃ B holds at all worlds still more plausible than w. Thus B holds at the most plausible A-worlds (whether this is a "hypothetical" or actual limit). In this manner we avoid the Limit Assumption (see below). It is important to note that ⇒ is a connective in the usual sense, not a family of connectives indexed by "KB". Given the Ramsey test, ⇒ is nothing more than a subjunctive conditional. We can define, for any propositional A ∈ L_CPL, the belief set resulting from revision of K by A as

K*_A^M = {B ∈ L_CPL : M ⊨ A ⇒ B}.    (3)

Theorem 1 If M is a K-revision model for any K, then *_M satisfies postulates (R1)-(R8).

Theorem 2 Let * be any revision operator satisfying (R1) through (R8). Then there exists a K-revision model M, for any theory K, such that * = *_M.

⁶This terminology is discussed in the next section.
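On a finite structure, the conditional just defined can be evaluated directly. The sketch below is my own illustration (the encoding is an assumption): a CO*-structure is represented as a totally ordered list of clusters of worlds, and A ⇒ B is checked by finding the earliest cluster containing an A-world, which is equivalent to the definition above when the domain is finite.

```python
from itertools import product

# Illustrative sketch (encoding is mine): a finite CO*-structure is a
# totally ordered list of clusters of worlds; earlier clusters are more
# plausible. Worlds are valuations over ATOMS; formulas are predicates.
ATOMS = ("A", "B", "C")

def worlds():
    return [dict(zip(ATOMS, vs)) for vs in product([True, False], repeat=3)]

def revision_model(kb):
    """One K-revision model: KB-worlds form the minimal cluster; all
    remaining worlds sit in a single less plausible cluster."""
    ws = worlds()
    return [[w for w in ws if kb(w)], [w for w in ws if not kb(w)]]

def conditional(clusters, a, b):
    """A => B: either no A-world exists (the box-not-A disjunct), or the
    A-worlds in the earliest cluster containing one all satisfy B."""
    for cluster in clusters:
        a_worlds = [w for w in cluster if a(w)]
        if a_worlds:
            return all(b(w) for w in a_worlds)
    return True

M = revision_model(lambda w: w["B"])            # KB = {B}
assert conditional(M, lambda w: w["A"], lambda w: w["A"])      # (R2)
assert conditional(M, lambda w: w["A"], lambda w: w["B"])      # (R3)/(R4)
assert not conditional(M, lambda w: w["A"], lambda w: w["C"])  # no gratuitous C
```

The asserts illustrate the (R2)-(R4) behavior in this model: revising {B} by the consistent A yields belief in A and B, but not in the unrelated C.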
Thus, CO* is an appropriate logical characterization of AGM revision and, in fact, is the first logical calculus of this type suited to the AGM theory. However, the modal approach suggests a number of generalizations of AGM revision, for example, by using CO or dropping connectedness (Boutilier 1992a). It also provides considerable expressive power with which we can constrain the revision process in novel ways.

The Limit Assumption Our approach to revision makes no assumption about the existence of minimal A-worlds, which Grove (1988) claims forms an integral part of any model of revision. As Lewis (1973) emphasizes, there is no justification for such an assumption other than convenience. Consider a KB that contains the proposition "I am 6 feet tall." Revising by A = "I am over 7 feet tall" would allow one to evaluate Lewis's classic counterfactual "If I were over 7 feet tall I would play basketball." However, it doesn't seem that there should exist a most plausible A-world, merely an infinite sequence of such worlds approaching the limit world where "I am 7 feet tall" is true. Our model allows one to assume the truth of the counterfactual A ⇒ B if the consequent B ("I play basketball") is strictly implied (i.e., if □(A ⊃ B) holds) at any world in this sequence of A-worlds. In this respect, we take Lewis's semantic analysis to be appropriate.

Models of revision that rely on the Limit Assumption (e.g., (Grove 1988; Katsuno and Mendelzon 1991b)) would also make A ⇒ B true, but for the wrong reasons. It holds vacuously since there are no minimal A-worlds. Of course, A ⇒ ¬B is true in this case too, which is strongly counterintuitive. How can this be reconciled with Theorems 1 and 2, which show that our notion of revision is equivalent to the AGM version, including those that make the Limit Assumption? In fact, this points to a key advantage of the modal approach, its increased expressive power.
We can easily state that worlds of decreasing height (ht), down to my actual height of 6 feet, are more plausible using⁷

⊡∀z∀y[6 ≤ y < z ⊃ (ht(Me) = z ⊃ ◇(ht(Me) = y))].    (4)

Not only can we constrain the revision process directly using conditionals, but also indirectly using such intensional constraints. Of course, there must be some AGM operator that has β ∈ K*_α exactly when our model satisfies α ⇒ β, including B ∈ K*_A (the basketball counterfactual). But models making the Limit Assumption do not reflect the same structure as our CO*-model. They cannot express nor represent the constraint relating plausibility to height. In order to ensure B ∈ K*_A they must violate sentence (4). The modal language also improves on AGM revision in general, where such a constraint can only be expressed by some infinite set of conditions β ∈ K*_α. In (Boutilier 1992a) we examine the Limit Assumption and intensional constraints in more detail. In particular, we show how constraints on the entrenchment and plausibility of sentences and beliefs can also be expressed at the object level.

⁷We assume the obvious first order extension of CO-models, a partial theory of < over (say) the rationals, etc.

When reasoning about the revision of a knowledge base KB, we require a background theory with the sentence O(KB), which implicitly constrains the conditional connective to refer to KB, and a set of conditional assertions from which we can derive new revision assertions. For instance, to take an example from default reasoning, if one asserts

{bird ⇒ fly, penguin ⇒ ¬fly, penguin ⇒ bird}

then the conclusion bird ∧ penguin ⇒ ¬fly can be derived. Indeed this should be the case, as penguins are a specific subclass of birds and properties associated with them should take precedence over those associated with birds. Beliefs in KB can influence revision as well. If we take KB to be {A ⊃ B, C ⊃ D} (where A, B, C, D are distinct atoms) then from O(KB) we can infer, for instance, A ⇒ B and A ∨ C ⇒ B ∨ D.
We can also derive ¬(A ⇒ C), since revision by A does not force acceptance of C. Other derived theorems include (see (Boutilier 1992a) for more examples):

• (A ⇒ B) ∧ (A ⇒ C) ⊃ (A ⇒ B ∧ C)
• (A ⇒ C) ∧ (B ⇒ C) ⊃ (A ∨ B ⇒ C)
• (A ⇒ B) ⊃ ((A ∧ B ⇒ C) ⊃ (A ⇒ C))
• (A ⇒ B) ∧ (A ⇒ C) ⊃ (A ∧ B ⇒ C)
• (A ⇒ C) ∧ ¬(A ∧ B ⇒ C) ⊃ (A ⇒ ¬B)

We now turn our attention to a systematic framework for representing knowledge about belief revision. Let KB be, as usual, a set of beliefs representing our knowledge of the world. We also expect there to be some conditional beliefs among these that constrain the manner in which we are willing to revise our (objective) beliefs. These take the form α ⇒ β (or α > β when we intend Lewis's connective), and will be referred to as subjunctive premises. By a subjunctive query we mean something of the form "If A were true, would B hold?" In other words, is A > B a consequence of our beliefs and subjunctive premises?

Given the connection between VC and belief revision, and assuming the Ramsey test is an appropriate truth test for subjunctives, it would appear that VC is exactly the logical calculus required for formulating subjunctive queries. However, we have misrepresented the Gärdenfors result to a certain degree; in fact, his
Similarly, l(A > C) should also be true of KB for any distinct atom 6. In VC there is no mechanism for drawing these types of conclusions. At most one could hope to assert B as a premise and derive A > B or -(A > C), but neither of B t-vc A > B or B I-vc ‘(A > C) is true, nor should they be. It should be the case that if A is consistent with our beliefs that A > B holds, but merely assert- ing B doesn’t carry this force. When B is a premise we mean ‘“B is believed;” but this does not preclude the possibility of A, iA, C, or anything else being be- lieved. When KB = {B} we intend something stronger, that “B is a/l that is believed.” Because B is the only sentence in KB, we convey the added information that, say, neither A nor IA is believed. In Levesque’s (1990) terminology, we only know KB. To only know some sentence is to both know (or be- lieve) B and to know nothing more than B. To know B is to restrict one’s set of epistemic possibilities to those states of affairs where B is true. If some lB- world were considered possible an agent could not be said to know B, for the possibility of 1B has not been ruled out. To know nothing more than B is is to in- clude all possible A-worlds among one’s set of epistemic possibilities. Adding knowledge to a belief set is just re- stricting one’s set of epistemic possibilities to exclude worlds where these new beliefs fail, so if some B-world were excluded from consideration, intuitively an agent would have some knowledge other than B that ruled out this world. In our logic CD* we have precisely the mechanism for stating that we only know a KB. consider the set of min”lma1 worlds to represent our knowledge of the actual world. Exactly those possible worlds consistent with our beliefs KB are minimal in any KB-revision model. This is precisely what the sentence O(KB) e serts, that KB is believed (since only K&worlds are minimal) and that KB is all that is believed (since only minimal worlds are K&worlds). 
I Returning to the query A % B, this analysis sug- gests that i I-CO* A 2 B is not the proper formula- tion of the query. This derivation is not valid (just as it is not in VC). Rather, we ought to ask if A -% 63 holds if we only know B. Hn fact, both O(B) l-co* A -% B and O(B) I-CO* ‘(A % C) are legitimate derivations. This leads to an obvious framework for subjunctive query answering, given a set of beliefs KB. Our knowl- edge of the world is divided into two components, a set KB of objective (propositional) facts or beliefs, and a set S of subjunctive conditionals acting as premises, or constraints on the manner in which we revise our be- liefs. To ask a subjunctive query $ of the form a % p is to ask if 0 would be true if we believed cy, given that our only current beliefs about the world are rep- resented by MB, and that our deliberations of revision are constrained by subjunctive premises S.8 The ex- pected answers YES, NO and UMK (unknown) to $ are characterized as follows. YES if {O(KB } US j=~o* & ASK(Q) = MO if {0( KB } U S ~CO* 1Q 1 UNK otherwise Objective queries about the actual state of affairs (or, more precisely, about our beliefs) can be phrased as T %,0 where ,8 is the objective query of interest. It’s easy to see that for such a-& - - ASK(&) = YES iff Fco* KB > j3. The ability to express that only a certain set of sen- tences is believed allows us to give a purely logical char- acterization of subjunctive queries of a knowledge base. The logic VC seems adequate for reasoning from sub- junctive premises and for deriving new conditionals, but it cannot account for the influence of factual informa- tion on the truth of conditionals in a completely satisfy- ing manner; for it lacks the expressive power to enforce compliance with postulate (R4).’ The approaches of Ginsberg and Jackson take VC to be the underlying counterfactual logic. Indeed, their approaches (under certain assumptions) satisfy the Lewis axioms. 
How- ever, they recognize that the ability to only know a knowledge base is crucial for revision and subjunctive reasoning, an expressive task not achievable in VC. Therein lies the motivation for their extra-logical char- acterizations, and the underlying idea that KB is rep- resentable as a set of sentences or set of possible worlds from which we construct new sets in the course of re- vision. CG* can be viewed as a logic in which one can capture just this process.1° 8We note t h a t S can be part of I(B, lying within the scope of 0, but prefer to keep them separate for this exposition (see (Boutilier 199213)). ‘In fact, it is not hard to verify that the axioms for VC are each valid in CO* if we replace nonsubjunctive informa- tion (say cy) by statements to the effect that o is believed, also expressible in CO*; see (Boutilier 199210)). “Other distinctions exist (see Section l), e.g., Ginsberg’s proposal is syntax-sensitive, while Jackson’s system is com- mitted to specific minima&y criteria. Boutilier 613 Integrity Constraints Often it is the case that only certain states of knowl- edge, certain belief sets, are permissible. The concept of integrity constraints, widely studied in database theory, is a way to capture just such conditions. For a database (or in our case, a belief set) to be considered a valid rep- resentation of the world, it must satisfy these integrity constraints. For instance, we may not consider feasi- ble any belief set in which certain commonsense laws of physics are violated; or a database in which there exists some student with an unknown student number may be prohibited. This apparently straightforward concept actually has several distinct interpretations. Reiter (1990) surveys these and proposes the definition we favor, which es- sentially asserts that an integrity constraint C should be entailed by KB. 
The distinguishing characteristic of Reiter's definition is that integrity constraints can be phrased using a modal knowledge operator, which refers to "what is known by the database." We will assume constraints are propositional and that KB satisfies C just when KB entails C.11

As emphasized in (Fagin, Ullman and Vardi 1983) and (Winslett 1990), integrity constraints are particularly important when updating a database. Any new database (or belief set) should satisfy these constraints; therefore any reasonable model of update or revision must explicitly account for integrity constraints.

Example Let the constraint C, that a department has only one chair, be expressed as

chair(x,d) ∧ chair(y,d) ⊃ x = y (5)

Suppose we update KB with chair(Ken,DCS) ∨ chair(Maria,DCS), and assume KB = {chair(Derek,DCS)}, so this new fact is inconsistent with the existing KB (assuming Unique Names). The constraint cannot be enforced in the updated KB', for nothing about C says it must be true in the revised state of affairs, even if it is an explicit fact in the original KB.

This example illuminates the need for integrity constraints to be expressed intensionally. They refer not only to the actual world, but to all (preferred) ways in which we view the world. We can ensure a revised belief set or database satisfies a constraint C by asserting □C as a premise in our background theory (on the same level as O(KB)). This has the effect of ensuring any possible worlds ever considered satisfy C (thus requiring the logic CO). However, this may be too strong an assertion in many applications. Such a statement will force any revision of KB by a fact inconsistent with C to result in the inconsistent belief set Cn(⊥).

11We can express constraints involving a knowledge modality (see (Boutilier 1992b) for details).

In certain (maybe most)
circumstances, we can imagine the violation of a constraint C ought not force us into inconsistency. Instead of abolishing ¬C-worlds outright, we'd like to say all C-worlds are "preferred" to any world violating the constraint. Such a condition is expressible as ⊡(C ⊃ □C). To see this, imagine some ¬C-world v is more plausible than some C-world w. Then wRv and M ⊭w C ⊃ □C. We denote this formula by WIC. Since we are often concerned with a set of constraints C = {C1, ..., Cn}, in such a case we use C to denote their conjunction, and

WIC = ⊡(C ⊃ □C) where C = ∧i≤n Ci.

Definition M is a revision model for K with weak integrity constraints C iff M is a revision model for K and M ⊨ ⊡(C ⊃ □C).

Theorem 3 Let M be a revision model for K with weak integrity constraints C. Then K*_A,M ⊨ C for all A consistent with C.

Thus we validate the definition of integrity constraint. If a sentence A is consistent with C, it must be the case that revising by A results in a belief set that satisfies C. Of course, this requires that the original belief set K must also satisfy the integrity constraints, not just revised belief sets K*_A,M (imagine revising by ⊤).

Example Let KB = {chair(Derek,DCS)} and C be the previous constraint (5). If we update KB with chair(Ken,DCS) ∨ chair(Maria,DCS), then from {O(KB)} ∪ WIC we can derive in CO*

chair(Ken,DCS) ∨ chair(Maria,DCS) ⇒ (chair(Ken,DCS) ≡ ¬chair(Maria,DCS)).

This definition of integrity constraint has the unappealing quality of being unable to ensure that as many constraints as possible be satisfied. For instance, if some update A violates some Ci of C, then revision by A is not guaranteed to satisfy the other constraints. In (Boutilier 1992a) we introduce strong constraints that accomplish this. We can also prioritize constraints, assigning unequal weight to some constraints in C. Fagin, Ullman and Vardi (1983) have argued that sentences in a database can have different priorities and that updates should respect these priorities by "hanging on to" sentences of higher priority whenever possible during the course of revision.
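On a finite ranked model (lower rank = more plausible), the weak-constraint condition has a direct reading: every C-world is strictly more plausible than every world violating C. A minimal sketch of that check, with the ranking representation being our own assumption rather than the paper's formal semantics:

```python
def satisfies_wic(worlds, ranks, C):
    """Check the weak-integrity-constraint condition on a finite ranked model.

    worlds: iterable of worlds; ranks[w]: plausibility rank (lower = more
    plausible); C(w): truth of the constraint at world w. Holds iff every
    C-world is strictly more plausible than every C-violating world.
    """
    c_ranks = [ranks[w] for w in worlds if C(w)]
    bad_ranks = [ranks[w] for w in worlds if not C(w)]
    if not c_ranks or not bad_ranks:
        return True  # vacuously satisfied
    return max(c_ranks) < min(bad_ranks)

# Worlds assign a department chair; "both" violates the one-chair constraint.
worlds = ["derek", "ken", "both"]
C = lambda w: w != "both"
ranks = {"derek": 0, "ken": 1, "both": 2}
assert satisfies_wic(worlds, ranks, C)       # violating world is least plausible
ranks["both"] = 0
assert not satisfies_wic(worlds, ranks, C)   # a violating world is most plausible
```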
Consider two constraints asserting that a department has one chair and that the chair of Computer Science (CS) is the only person without a course to teach. It might be that certain information cannot satisfy both of the constraints, but could satisfy either one singly; for example, when we learn that Maria is the chair and Ken has no course load. We may then prefer to violate the second constraint (that the chair is the only person who teaches no course), deferring to the fact that CS has only one chair.

614 Representation and Reasoning: Belief

Suppose that the set C = {C1, ..., Cn} is now an ordered set of integrity constraints with Ci having higher priority than Cj whenever i < j. We prefer Ci when a conflict arises with Cj. Let Pi denote the conjunction of the i highest priority integrity constraints. The set of prioritized integrity constraints is

ICP = {⊡(Pi ⊃ □Pi) : i ≤ n}.

Definition M is a revision model for K with prioritized integrity constraints C1, ..., Cn iff M is a revision model for K and M ⊨ ICP.

Theorem 4 Let M be a revision model for K with prioritized integrity constraints C1, ..., Cn. If A is consistent with the conjunction Pj of all constraints Ci, i ≤ j, then K*_A,M ⊨ Ci for all i ≤ j.

Example Let KB = {chair(Derek,DCS)} and C = {C1, C2}, where C1 is constraint (5) and

C2 = chair(x,DCS) ≡ teachnocourse(x).

From {O(KB)} ∪ ICP we can derive in CO*

teachnocourse(Ken) ∧ chair(Maria,DCS) ⇒ (chair(x,DCS) ≡ x = Maria).

Concluding Remarks

We have presented a modal logic for revision and subjunctive reasoning that, unlike current logics, can account for the effect of only knowing a knowledge base. CO* can be viewed as a logical calculus for AGM revision. Furthermore, it characterizes these processes without making the Limit Assumption, as advocated by Lewis (1973), and allows integrity constraints to be expressed naturally within the logic.
In (Boutilier 1992b) we show that CO*, with its ability to reason about knowledge, can be viewed as a generalization of autoepistemic logic, and that our subjunctive bears a remarkable similarity to the normative conditionals postulated for default reasoning. Indeed, we show that default reasoning can be viewed as a special case of revision by using our subjunctive. Furthermore, we show that the triviality result of Gärdenfors (1988) is avoided in our logic at the expense of postulate (R4), which we claim cannot be correct for belief sets that include conditionals. The triviality result has also led to a distinction between revision and update (Winslett 1990; Katsuno and Mendelzon 1991a), which has also been used to define subjunctives (Grahne 1991). An interesting avenue of research would be to pursue the connection between the two, examining the extent to which update can be captured by unary modal operators, and the extent to which either problem subsumes the other. The generalizations of revision afforded by the modal approach may also apply to update.

Acknowledgements

I would like to thank Peter Gärdenfors, Gösta Grahne, Hector Levesque, David Makinson, Alberto Mendelzon and Ray Reiter for their very helpful comments.

References

Alchourrón, C., Gärdenfors, P., and Makinson, D. 1985. On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50:510-530.
Bonner, A. J. 1988. A logic for hypothetical reasoning. In Proc. of AAAI-88, pages 480-484, St. Paul.
Boutilier, C. 1991. Inaccessible worlds and irrelevance: Preliminary report. In Proc. of IJCAI-91, pages 413-418, Sydney.
Boutilier, C. 1992a. Conditional logics for default reasoning and belief revision. Technical Report KRR-TR-92-1, University of Toronto, Toronto. Ph.D. thesis.
Boutilier, C. 1992b. Subjunctives, normatives and autoepistemic defaults. (submitted).
Fagin, R., Ullman, J. D., and Vardi, M. Y. 1983.
On the semantics of updates in databases: Preliminary report. In Proceedings of the SIGACT-SIGMOD Symposium on Principles of Database Systems, pages 352-365.
Gärdenfors, P. 1988. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press, Cambridge.
Ginsberg, M. L. 1986. Counterfactuals. Artificial Intelligence, 30(1):35-79.
Grahne, G. 1991. Updates and counterfactuals. In Proc. of KR-91, pages 269-276, Cambridge.
Grove, A. 1988. Two modellings for theory change. Journal of Philosophical Logic, 17:157-170.
Humberstone, I. L. 1983. Inaccessible worlds. Notre Dame Journal of Formal Logic, 24(3):346-352.
Jackson, P. 1989. On the semantics of counterfactuals. In Proc. of IJCAI-89, pages 1382-1387, Detroit.
Katsuno, H. and Mendelzon, A. O. 1991a. On the difference between updating a knowledge database and revising it. In Proc. of KR-91, pages 387-394, Cambridge.
Katsuno, H. and Mendelzon, A. O. 1991b. Propositional knowledge base revision and minimal change. Artificial Intelligence, 52:263-294.
Levesque, H. J. 1990. All I know: A study in autoepistemic logic. Artificial Intelligence, 42:263-309.
Lewis, D. 1973. Counterfactuals. Blackwell, Oxford.
Reiter, R. 1990. What should a database know? Technical Report KRR-TR-90-5, University of Toronto, Toronto.
Stalnaker, R. C. 1968. A theory of conditionals. In Harper, W., Stalnaker, R., and Pearce, G., editors, Ifs, pages 41-55. D. Reidel, Dordrecht. 1981.
Stalnaker, R. C. 1984. Inquiry. MIT Press, Cambridge.
Winslett, M. 1990. Updating Logical Databases. Cambridge University Press, Cambridge.
Lexical Imprecision in Fuzzy Constraint Networks

James Bowen, Robert Lai and Dennis Bahler
Department of Computer Science
North Carolina State University
Raleigh, NC 27695-8206
jabowen@adm.csc.ncsu.edu, lai@jim.csc.ncsu.edu, drb@adm.csc.ncsu.edu

Abstract

We define fuzzy constraint networks and prove a theorem about their relationship to fuzzy logic. Then we introduce Khayyam, a fuzzy constraint-based programming language in which any sentence in the first-order fuzzy predicate calculus is a well-formed constraint statement. Finally, using Khayyam to address an equipment selection application, we illustrate the expressive power of fuzzy constraint-based languages.

1 Introduction

We agree with Zadeh [13] that "the imprecision that is intrinsic in natural languages is, for the most part, possibilistic rather than probabilistic in nature ... the denotation of [an imprecise] word is generally a fuzzy ... subset of a universe of discourse." We also agree with Nilsson [10] that "For the most versatile machines, the language in which declarative knowledge is represented must be at least as expressive as first-order predicate calculus" (FOPC). So, we are interested in languages which support the expressive power of the full fuzzy FOPC. The inference engines for such languages cannot be sound, complete and terminating. Although termination is a sine qua non, neither of the other two attributes is indispensable. Unsound inference has long been accepted; in crisp logic, it has been codified as various forms of non-monotonic logic. Acceptance of incomplete inference is less widespread, but we argue that there are many situations in which the expressiveness of the full fuzzy FOPC is more important than complete inference. We have developed a fuzzy constraint-based programming language in which any sentence in fuzzy FOPC about an arbitrary universe of discourse is a well-formed constraint statement.
The inference engine for this language, called Khayyam, is sound and terminating. While not complete in general, it is refutation-complete for many applications. In Section 2, we introduce semantics in fuzzy logic, with reference to semantics in classical logic. Then, in Section 3, we give the theoretical basis of Khayyam, by presenting the relationship between fuzzy constraint networks and semantic models in fuzzy logic. In Section 4, we introduce Khayyam and give an example program, an expert system for computer selection. We give a comparative review and discussion in Section 5.

2 Semantics in Classical and Fuzzy Logic

A theory Γ in some classical FOPC language L = (P, F, K), where P is the vocabulary of predicate symbols, F the function symbols and K the constant symbols, is satisfiable iff there exists some model M = (U, I) of the language L under which every sentence in Γ is true. In the model M = (U, I), U is a universe of discourse while I is an interpretation function for the symbols of L, in terms of U. Using M{X↦u} to denote a model containing the extended interpretation function I ∪ {X ↦ u}, the semantic rules for truth are:

M ⊨ p(a1, ..., an) iff (I(a1), ..., I(an)) is in I(p).
M ⊨ ¬A iff M ⊭ A.
M ⊨ A ∧ B iff M ⊨ A and M ⊨ B.
M ⊨ A ∨ B iff M ⊨ A or M ⊨ B.
M ⊨ A ⊃ B iff M ⊭ A or M ⊨ B.
M ⊨ A ≡ B iff (M ⊨ A and M ⊨ B) or (M ⊭ A and M ⊭ B).
M ⊨ (∀X)A iff M{X↦u} ⊨ A for every u ∈ U.
M ⊨ (∃X)A iff M{X↦u} ⊨ A for some u ∈ U.

A fuzzy FOPC language contains one species of symbol not found in classical FOPC: hedge symbols. These are used to capture the meaning of linguistic hedges, such as very or fairly, which are used in sentences of the form h A, where h is a hedge and A is a sentence. In a fuzzy language L = (H, P, F, K), H is the vocabulary of hedge symbols, while P, F, and K are the predicate, function and constant symbol vocabularies as in classical logic.
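The classical satisfaction rules above translate directly into a recursive evaluator over a finite model; a sketch, in which the nested-tuple encoding of sentences is our own assumption:

```python
def holds(M, sentence, env=None):
    """Evaluate a classical sentence in a finite model M = (U, I).

    M: dict with M["U"] (finite universe) and M["I"] (interpretation:
    predicate name -> set of argument tuples). Sentences are nested tuples,
    e.g. ("forall", "X", ("p", "X")); env binds variables, mirroring the
    extended interpretation I | {X -> u}.
    """
    env = env or {}
    op = sentence[0]
    if op == "not":
        return not holds(M, sentence[1], env)
    if op == "and":
        return holds(M, sentence[1], env) and holds(M, sentence[2], env)
    if op == "or":
        return holds(M, sentence[1], env) or holds(M, sentence[2], env)
    if op == "implies":
        return (not holds(M, sentence[1], env)) or holds(M, sentence[2], env)
    if op in ("forall", "exists"):
        _, var, body = sentence
        quant = all if op == "forall" else any
        return quant(holds(M, body, {**env, var: u}) for u in M["U"])
    # atomic sentence: (predicate, term, ...); terms are variables or constants
    pred, *terms = sentence
    args = tuple(env.get(t, t) for t in terms)
    return args in M["I"][pred]

M = {"U": {1, 2, 3}, "I": {"even": {(2,)}}}
assert holds(M, ("exists", "X", ("even", "X")))
assert not holds(M, ("forall", "X", ("even", "X")))
```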
In fuzzy logic, truth can be either multi-valued, with truth-values in the range [0,1], or linguistic, with truth-values denoting fuzzy subsets of [0,1]. Here, we consider multi-valued fuzzy logic. As in classical logic, in a fuzzy model M = (U, I), U is a universe of discourse and I provides interpretations for each symbol in the various vocabularies of L. As in classical logic, for every κ ∈ K, I(κ) is an element of U. However, the interpretations for the predicate and function symbols of L are fuzzy relations over U. A symbol h in H modifies the truth-value of the sentence to which it is applied, so these hedge symbols are interpreted as crisp one-to-one relations over the segment [0,1] of the real number line. For example, if I(h) = {(X,Y) | LEQ(0,X) ∧ LEQ(X,1) ∧ EQUALS(Y, TIMES(X,X))} and the truth-value of a sentence A is d, then the truth of h A is d². A theory Γ in L is satisfiable iff there exists some fuzzy model M = (U, I) of L under which every sentence in Γ has a truth-value greater than zero.

3 Fuzzy Constraint Networks

Formally, a fuzzy constraint network can be defined as:

Fuzzy Constraint Network: A fuzzy constraint network is a triple (U, X, C) where U is a universe of discourse, X is a finite tuple of q parameters, and C is a finite set of r fuzzy constraints. Each constraint Ck(Tk) ∈ C imposes a restriction on the allowable values for the qk parameters in Tk, a subtuple of X, by specifying that some fuzzy subset of U^qk contains all acceptable combinations of values for the parameters. □

Fuzzy functions generalize crisp functions by being able to map onto fuzzy subsets of U as well as onto individual members. Consider the interpretation for some n-ary function symbol f ∈ F. In this interpretation, which is a fuzzy (n+1)-ary relation over U, there may be several tuples that have the same first n components but different (n+1)th components. Suppose, for example, that the constant symbols a and b are interpreted as 3 and 4, respectively, while the binary function symbol g has as its interpretation the fuzzy set {(1,5,6)@1.0, (3,4,7)@0.7, (3,4,8)@1.0, (3,4,9)@0.8}, where x @ y denotes that x has membership degree y in the fuzzy set. In this situation, the functional expression g(a,b) has three possible interpretations: 7@0.7, 8@1.0, and 9@0.8.

Because of this aspect of fuzzy functions, model theory for fuzzy logic is simplified if functional expressions are de-Skolemized. Taking a language L = (H, P, F, K), where F is non-empty, we produce a function-free language L' = (H, P ∪ F', ∅, K), by replacing each n-ary fuzzy function symbol f ∈ F by a new (n+1)-ary predicate symbol f' ∈ F' which is given the same interpretation as f had. From a theory Γ in L we generate a theory Γ' in L', by replacing each usage of f by an appropriate usage of f'; for example, the atomic sentence p(f(a), b) would be replaced by (∃X)(f'(a,X) ∧ p(X,b)).

To consider the model-theoretic rules for a function-free fuzzy logic language, let M ⊨d γ mean that the sentence γ has the truth-value d under the model M. Then, the model-theoretic rules, where p is a predicate symbol, h is a hedge symbol, and the κi are constant symbols, are as follows:

M ⊨d p(κ1, ..., κn) iff (I(κ1), ..., I(κn)) has membership degree d in I(p).
M ⊨d h A iff M ⊨e A and (e, d) is in I(h).
M ⊨d A ∧ B iff M ⊨a A and M ⊨b B and d is min({a, b}).
M ⊨d A ∨ B iff M ⊨a A and M ⊨b B and d is max({a, b}).
M ⊨d A ⊃ B iff M ⊨a A and M ⊨b B and d is min({1, 1 - a + b}).
M ⊨d A ≡ B iff M ⊨a A and M ⊨b B and d is min({1, 1 - a + b, 1 - b + a}).
M ⊨d (∀X)A, where X is free in A, iff d is min({e | u ∈ U ∧ M{X↦u} ⊨e A}).
M ⊨d (∃X)A, where X is free in A, iff d is max({e | u ∈ U ∧ M{X↦u} ⊨e A}).

A fuzzy constraint network is an intensional specification of a joint fuzzy possibility distribution for the values of the parameters in the network:

The Intent of a Fuzzy Constraint Network: The intent of a fuzzy constraint network (U, X, C) is Π_U,X,C = c1(X) ∩ ... ∩ cr(X), where, for each Ck(Tk) ∈ C, ck(X) is its cylindrical extension in the Cartesian space defined by X. □

The network intent is a fuzzy set of q-tuples, each tuple giving a valuation for the q parameters in X, the membership of the tuple in the intent being the degree to which the valuation satisfies all the constraints in C. A fuzzy constraint network is consistent iff the network intent is not the empty set. It can be shown that finding consistent values for the parameters in a fuzzy constraint network is the same as finding a model of a fuzzy FOPC language under which some theory is satisfied.

In what follows, let A↦B, where A and B are tuples of the same arity, be the mapping function from the components of A onto those of B in which each component of the tuple A is mapped onto the component in the corresponding position of tuple B; for example, if A = (x, y) and B = (4, 2), then A↦B = {x ↦ 4, y ↦ 2}.

Theorem 1 Given L = (H, P, F, K' ∪ K''), U, Ip and Γ, where Ip interprets, in terms of U, all and only the symbols in H ∪ P ∪ F ∪ K', K'' is finite, each sentence in Γ references at least one symbol in K'', and each symbol in K'' is referenced by at least one sentence γ ∈ Γ. Let (U, X, C) be a fuzzy constraint network such that X is the lexical ordering of the elements of K'' and such that |C| = |Γ|, with each sentence γ ∈ Γ having a corresponding constraint C(T) ∈ C such that T is the lexical ordering of those elements of K'' which appear in γ and (U, Ip ∪ T↦t) ⊨d γ for each tuple t having membership degree d in C(T). Then (U, Ip ∪ X↦τ) ⊨e Γ for each tuple τ having membership degree e in Π_U,X,C.
□

That is, the set of all X↦τ, where τ has membership

Bowen, Lai, and Bahler 617

degree d in Π_U,X,C, is the set of all J such that J interprets all symbols in K'' and (U, Ip ∪ J) ⊨d Γ. Thus, finding consistent values for the parameters in a fuzzy constraint network (U, X, C) may be regarded as semantic modeling in a fuzzy logic language L = (H, P, F, K' ∪ K''). The proof of this theorem depends on the following lemma.

Lemma Given L = (H, P, F, K' ∪ K''), U, Ip and γ, where Ip interprets, in terms of U, all and only the symbols in H ∪ P ∪ F ∪ K', K'' is finite, and the sentence γ references at least one symbol in K''. Let X be the lexical ordering of the elements of K'' and let C(T) be a constraint such that T is the lexical ordering of those elements of K'' which appear in γ and such that (U, Ip ∪ T↦t) ⊨d γ for each tuple t having membership degree d in C(T). Then (U, Ip ∪ X↦r) ⊨d γ for each tuple r having membership degree d in c(X), where c(X) is the cylindrical extension of C(T) into the space defined by X. □

Proof of Lemma Take any tuple r having membership degree d in c(X). The projection of the singleton set {r} on T will be a singleton set containing some tuple r'; that is, proj({r}, T) = {r'}. Then, by the definition of cylindrical extension, r' has membership degree d in C(T). The mapping function X↦r provides a mapping for each constant symbol in K''. However, the truth of γ depends only on the interpretation of those members of K'' which appear in γ. Thus, the truth of γ depends only on the mappings provided by T↦r', which is a subset of X↦r. Since r' has membership degree d in C(T), (U, Ip ∪ T↦r') ⊨d γ, and hence (U, Ip ∪ X↦r) ⊨d γ, because the additional mappings in (X↦r − T↦r') have no impact on the truth of γ. Since r is any tuple in c(X), (U, Ip ∪ X↦r) ⊨d γ for any tuple r having membership degree d in c(X). □
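The cylindrical-extension construction used in the lemma, and the intent as a min-intersection of extensions, can be made concrete for finite domains; a sketch, with the dictionary representation of fuzzy constraints as our own assumption:

```python
from itertools import product

def intent(U, X, constraints):
    """Intent of a finite fuzzy constraint network (U, X, C).

    X: tuple of parameter names. constraints: list of (T, fuzzy_set) pairs,
    where T is a subtuple of X and fuzzy_set maps value-tuples over T to
    membership degrees (missing tuples have degree 0). Each constraint is
    cylindrically extended to X (a tuple's degree is read off its projection
    on T) and the extensions are intersected by pointwise min.
    """
    result = {}
    for tup in product(U, repeat=len(X)):
        env = dict(zip(X, tup))
        degree = min(
            fuzzy_set.get(tuple(env[p] for p in T), 0.0)
            for T, fuzzy_set in constraints
        )
        if degree > 0.0:
            result[tup] = degree
    return result

U = {1, 2}
X = ("x", "y")
c1 = (("x",), {(1,): 1.0, (2,): 0.5})        # restricts x only
c2 = (("x", "y"), {(1, 2): 0.75, (2, 2): 1.0})
assert intent(U, X, [c1, c2]) == {(1, 2): 0.75, (2, 2): 0.5}
```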
Proof of Theorem 1 Since (U, Ip ∪ T1↦t) ⊨a γ1 for each tuple t with membership degree a in C1(T1) then, by the lemma, (U, Ip ∪ X↦r) ⊨a γ1 for each tuple r with membership degree a in c1(X). Similarly, (U, Ip ∪ X↦r) ⊨b γ2 for each tuple r with membership degree b in c2(X). Therefore, (U, Ip ∪ X↦r) ⊨d {γ1, γ2} for each tuple r with membership degree d in c1(X) ∩ c2(X), since the membership degree d of r in c1(X) ∩ c2(X) is the minimum min({a, b}) of its membership degrees a and b in c1(X) and c2(X), respectively. Similarly, (U, Ip ∪ X↦r) ⊨d {γ1, γ2, ..., γr} for each tuple r with membership degree d in c1(X) ∩ c2(X) ∩ ... ∩ cr(X). That is, (U, Ip ∪ X↦r) ⊨d Γ for each tuple r with membership degree d in Π_U,X,C. □

4 Khayyam

Khayyam is a programming language based on the relationship, discussed above, between fuzzy constraint networks and possibility distributions for semantic models of fuzzy logic theories. A program in Khayyam provides a declarative specification of a constraint network by specifying a fuzzy language L = (H, P, F, K), a theory Γ written in L, a universe of discourse U, and a partial interpretation function Ip for L. Of these, only Γ must always be specified explicitly. The Khayyam run-time system provides a default language Ls = (Hs, Ps, Fs, Ks) in which Ks contains the rational decimal numeric strings, Ps contains names of standard predicates (=, =<, etc.), Fs contains names of standard functions (*, +, etc.) and Hs contains names of standard hedges (very, fairly, etc.). The run-time system also provides a default universe of discourse Us = ℝ and a total interpretation function Is for Ls. This interpretation includes a bijection from the numeric strings in Ks onto the finitely expressible rationals Qf ⊂ Q ⊂ ℝ, providing the obvious mappings, such as 1.5 ↦ 1.5; note the distinction between 1.5, which is a symbol in Ks, and 1.5, which is a real number in ℝ.

The Khayyam run-time system computes the marginal possibility distribution [13] for each parameter in the network defined by a program. To do so, it transforms the language L defined by a program into an internal function-free language L' and de-Skolemizes all function-referring sentences in Γ to produce Γ'. The result is subjected to an inference algorithm called Fuzzy Compound Propagation (FCP). Although full details of the algorithm are beyond the scope of this paper, FCP involves interleaved application of three inference techniques: LP, local propagation of known states [7]; FAC, a fuzzy version of arc consistency [8], generalized to infinite domains and constraints of arbitrary arity, and based on particularization and projection [13]; and GPC, a form of path consistency [8], generalized to infinite domains and constraints of arbitrary arity, which is only applied to small portions of the network in certain very specific circumstances.

The algorithm is sound, in that, at any time, the real intensional marginal possibility distribution for a parameter is contained within whatever distribution the algorithm computes for that parameter. If the algorithm were complete, the intensional distribution for a parameter would be exactly that computed. But the algorithm is not guaranteed to be complete, so the intensional distribution for a parameter may sometimes be a proper fuzzy subset of the computed distribution. However, given the meaning of the term "possibility," a computed distribution which subsumes the intensional one is still correct; we merely lack full knowledge of the intensional possibility distribution function.

Khayyam programs are interactive. The user is allowed to augment the theory defined in a program with additional sentences and, whenever he does so, the Khayyam run-time system uses FCP to compute revised marginal possibility distributions.
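Extensionally, the marginal possibility distribution reported for a parameter is the max-projection of the joint distribution onto that parameter; a sketch of that projection (FCP itself computes these without enumerating the joint, so this is illustrative only):

```python
def marginal(joint, X, param):
    """Max-projection of a joint possibility distribution onto one parameter.

    joint: dict mapping value-tuples over the parameter tuple X to degrees.
    Returns a dict mapping each value of `param` to its marginal possibility.
    """
    i = X.index(param)
    out = {}
    for tup, degree in joint.items():
        out[tup[i]] = max(out.get(tup[i], 0.0), degree)
    return out

joint = {("model3", 19): 0.125, ("model3", 20): 0.5}
assert marginal(joint, ("chosen_model", "reqd_speed"), "chosen_model") == \
    {"model3": 0.5}
assert marginal(joint, ("chosen_model", "reqd_speed"), "reqd_speed") == \
    {19: 0.125, 20: 0.5}
```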
Whenever the intent of the network that results from a sequence of user assertions is the empty set, the system reports a constraint violation. If this happens, the user can recover by retracting one of his earlier assertions. The system maintains a set of dependency records and, at any time, the user can request a justification for the current marginal possibility distribution of a parameter in the network. By making assertions and retractions in any order he pleases, the user can explore various "what if" scenarios.

4.1 A Fuzzy Expert System

The following program defines a fuzzy language L = (Hs, P, F, K) in which P = Ps ∪ {big, cheap, fast}, F = Fs ∪ {price, slots, space, speed, sfunc} and K = Ks ∪ {chosen_model, reqd_disk, reqd_slots, reqd_speed}.

/*Function and Predicate Definitions*/
function speed =::= datafile(qdb,$SPEED$).
function price =::= datafile(qdb,$PRICES$).
function space =::= datafile(qdb,$DISKCPTY$).
function slots =::= datafile(qdb,$NUMSLOTS$).
function sfunc =::=
  {(X,A,B,G) -> 0 : X < A} union
  {(X,A,B,G) -> Y : A =< X < B and Y = 2*((X-A)/(G-A))^2} union
  {(X,A,B,G) -> Y : B =< X < G and Y = 1-2*((X-G)/(G-A))^2} union
  {(X,A,B,G) -> 1 : G =< X}.
predicate big =::= {X@Y : Y = sfunc(X,100,150,200)}.
predicate fast =::= {X@Y : Y = sfunc(X,16,20,24)}.
predicate cheap =::= {X@Y : Y = 1 - sfunc(X,1000,2000,3000)}.

/*Theory*/
space(chosen_model) >= reqd_disk.
slots(chosen_model) >= reqd_slots.
speed(chosen_model) >= reqd_speed.
not(exists X : space(X) >= reqd_disk and
    slots(X) >= reqd_slots and
    speed(X) >= reqd_speed and
    price(X) < price(chosen_model)).

The program specifies the following theory Γ written in the language L:

{slots(chosen_model) ≥ reqd_slots,
speed(chosen_model) ≥ reqd_speed,
space(chosen_model) ≥ reqd_disk,
¬(∃X)(slots(X) ≥ reqd_slots ∧ speed(X) ≥ reqd_speed ∧ space(X) ≥ reqd_disk ∧ price(X) < price(chosen_model))}.
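The definition of sfunc above is the standard piecewise-quadratic S-function; a quick Python translation (our own sketch, not Khayyam code) reproduces the membership degrees used in the discussion below:

```python
def sfunc(x, a, b, g):
    """Standard piecewise-quadratic S-function rising from 0 at a to 1 at g."""
    if x < a:
        return 0.0
    if x < b:
        return 2 * ((x - a) / (g - a)) ** 2
    if x < g:
        return 1 - 2 * ((x - g) / (g - a)) ** 2
    return 1.0

big = lambda size: sfunc(size, 100, 150, 200)
fast = lambda speed: sfunc(speed, 16, 20, 24)
cheap = lambda price: 1 - sfunc(price, 1000, 2000, 3000)

assert fast(16) == 0.0                      # 16 MHz is definitely not fast
assert fast(19) == 0.28125                  # the degree cited for 19 MHz
assert cheap(2500) == 0.125                 # the degree cited for $2500
assert min(fast(19), cheap(2500)) == 0.125  # model3's overall truth-value
```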
This program is an expert system which interacts with a user and selects a computer to provide, at a minimum price, whatever functionality the user specifies. The user inputs his specification by augmenting the theory Γ given in the program with additional assertions about the symbols reqd_disk (representing the amount of disk space he needs), reqd_slots (representing the number of slots he needs for attaching peripheral devices), and reqd_speed (representing the CPU speed he requires). The program reads the external files PRICES, DISKCPTY, NUMSLOTS and SPEED to get the extensions for the function symbols price, space, slots and speed. Thus, when selecting a computer, the program chooses from among those machines whose characteristics are described in the external database files.

The program defines a constraint network whose intent is a joint possibility distribution for the values of chosen_model, reqd_disk, reqd_slots and reqd_speed. Suppose that the contents of the external files PRICES, DISKCPTY, NUMSLOTS and SPEED are as shown in Table 1. Then the joint possibility distribution is the relation shown in Table 2.

Table 1
PRICES: model1 4000; model2 3500; model3 2500; model4 1500; model5 900; model6 500
NUMSLOTS: model1 7; model2 7; model3 5; model4 3; model5 3; model6 3
DISKCPTY: model1 628; model2 314; model3 192; model4 96

A user interacting with this program can specify his requirements, either crisply, or fuzzily, using one of the fuzzy predicates, big, fast or cheap, defined in the program. For example, in specifying speed, he could be crisp, as in reqd_speed > 18.7, or imprecise, as in very fast(reqd_speed), which uses a predefined hedge very and one of the application-specific fuzzy predicates whose interpretations are defined intensionally in the program. Alternatively, in expressing a budgetary limitation, he could be crisp, as in price(chosen_model) =< 2000, or fuzzy, as in cheap(price(chosen_model)).
Note that the fuzzy predicates are defined using a crisp application-specific function sfunc, the definition of which shows it to be the standard piecewise quadratic function found in the literature [13]. Suppose, for example, that the user asserts fast(reqd_speed). The definition of the fuzzy predicate fast shows that any speed equal to or less than 16 MHz is definitely not fast. Thus, model4, model5 and model6 are not possible values for chosen_model and the corresponding tuples are removed from the resultant joint possibility distribution.
Table 2: the joint possibility distribution, pairing each value of chosen_model with the intervals of reqd_disk, reqd_slots and reqd_speed it can satisfy.

The run-time system will compute and report the marginal possibility distributions for the network parameters chosen_model, reqd_disk, reqd_slots and reqd_speed that result from intersecting the joint possibility distribution specified by the program with the cylindrical extensions of the distributions specified by the user's assertions. Suppose that the user also asserts cheap(price(chosen_model)). The definition of the fuzzy predicate cheap indicates that any price equal to or greater than $3000 is definitely not cheap; thus, model1 and model2 are removed from the joint possibility distribution. The only computer still possible is model3, which only partially meets the specification: the truth-value is 0.125, which is the minimum of 0.28 (the extent to which 19 MHz is fast) and 0.125 (the extent to which $2500 is cheap). If the user is content with this level of truth, he can accept model3. Alternatively, he can try other scenarios by asserting and retracting different specifications. All inferences can be explained by the run-time system.

5 Comparative Discussion

The notion of a constraint permeates the fuzzy logic literature. Indeed, in fuzzy logic, "inference is viewed as a process of propagation of elastic constraints" [14]. In matrix-based fuzzy production systems [12], the matrix represents the intent of a constraint network, albeit of a restricted kind. In matrix-based fuzzy process controllers, for example, the matrix is a joint possibility distribution for the sensor readings and control settings. The large and growing literature on fuzzy mathematical programming can also be viewed as concerned with constraint processing. There is also a smaller body of literature, see for example [11], on consistent labeling, which involves networks where the universe of discourse is finite and the constraints correspond to ground atomic sentences involving unary or binary predicates.

Khayyam provides richer expressive power than any other constraint-based programming language (all of which handle only crisp constraints) because it allows the theory which is used to specify a constraint network to contain arbitrary first-order fuzzy logic sentences about a many-sorted universe of discourse that subsumes ℝ. Most constraint-based programming languages restrict the theory used for defining a network to ground sentences. In the crisp Prolog-based constraint languages (Prolog III [4], CLP(R) [6] and CHIP [5]), for example, a constraint network is treated as a compound top-level goal; all logic variables in a goal are existentially quantified, which makes them effectively, for query purposes, the same as uninterpreted constant symbols. CONSUL [3], the only other constraint language which does support arbitrarily quantified constraints, restricts the universe of discourse to the integers.
Although the expressive power of Khayyam means that well-formed programs can be beyond the inferential competence of the run-time system, this is true even of less expressive languages; for example, it is possible to express undecidable diophantine equations in CONSUL.

Khayyam differs from languages based on a fuzzy version of Prolog, such as Fril [1] and FProlog [9], in that these languages are based on the Horn clause subset of fuzzy logic. Khayyam, on the other hand, admits as well-formed, syntactically, arbitrary sentences from first-order fuzzy logic; this is based on a decision to trade inferential completeness for expressiveness. For further details on Khayyam, see [2].

References

[1] Baldwin J, and Martin T, 1992, "Fast Operations on Fuzzy Sets in the Abstract Fril Machine," Proc. IEEE Intnl. Conf. on Fuzzy Systems, 803-810.
[2] Bowen J, Lai R and Bahler D, 1992, "Fuzzy Semantics and Fuzzy Constraint Networks," Proc. IEEE Intnl. Conf. on Fuzzy Systems, 1009-1016.
[3] Chronaki C and Baldwin D, 1990, "A Front End for CONSUL," Tech. Report CS-TR-319, Dept. of Computer Science, Univ. of Rochester.
[4] Colmerauer A, 1987, "An Introduction to Prolog III," Tech. Report, Univ. Aix-Marseille II.
[5] Dincbas M, van Hentenryck P, Simonis H, Aggoun A, Graf T and Berthier F, 1988, "The Constraint Logic Programming Language CHIP," Proc. Fifth Gen. Comp. Sys. '88.
[6] Jaffar J and Michaylov S, 1986, "Methodology and Implementation of a CLP System," Proc. ICLP-4.
[7] Leler W, 1988, Constraint Programming Languages, Reading, MA: Addison-Wesley.
[8] Mackworth A, 1977, "Consistency in Networks of Relations," Artif. Intel., 8, 99-118.
[9] Martin T, Baldwin J and Pilsworth P, 1987, "The implementation of FProlog - A fuzzy Prolog interpreter," Fuzzy Sets and Systems, 23, 119-129.

620 Representation and Reasoning: Belief

[10] Nilsson N, 1991, "Logic and artificial intelligence," Artif. Intel., 47, 31-56.
[11] Snow P and Freuder E, 1990, "Improved Relaxation and Search Methods for Approximate Constraint Satisfaction with a Maximin Criterion," Proc. CSCSI-8.
[12] Whalen T, and Schott B, 1983, "Issues in Fuzzy Production Systems," Int. J. Man-Machine Stud., 19, 57-71.
[13] Zadeh L A, 1978, "PRUF - A meaning representation language for natural languages," Int. J. Man-Machine Studies, 10, 395-460.
[14] Zadeh L, 1989, "Knowledge Representation in Fuzzy Logic," IEEE Trans. on Knowledge and Data Engineering, 1(1), 89-100.
Landmark-Based Robot Navigation

Anthony Lazanas    Jean-Claude Latombe
lazanas@flamingo.stanford.edu    latombe@cs.stanford.edu
Robotics Laboratory, Department of Computer Science, Stanford University
Stanford, CA 94305, USA

Abstract

To operate in the real world robots must deal with errors in control and sensing. Achieving goals despite these errors requires complex motion planning and plan monitoring. We present a reduced version of the general problem and a complete planner that solves it in polynomial time. The basic concept underlying this planner is that of a landmark. Within the field of influence of a landmark, robot control and sensing are perfect. Outside any such field control is imperfect and sensing is null. In order to make sure that the above assumptions hold, we may have to specifically engineer the robot workspace. Thus, for the first time, workspace engineering is seen as a means to make planning problems tractable. The planner was implemented and experimental results are presented. An interesting feature of the planner is that it always returns a universal plan in the form of a collection of reaction rules. This plan can be used even when the input problem has no guaranteed solution, or when unexpected events occur during plan execution.

Introduction

To operate in the real world robots must deal with errors in control and sensing. Achieving goals despite these errors requires complex motion planning and plan monitoring (Latombe 1991). This problem has attracted a lot of interest recently, but many of the proposed approaches are based on unclear assumptions and/or are incomplete. The most rigorous approach so far is the LMT preimage backchaining approach (Lozano-Perez et al. 1984).
Several effective planning methods based on this approach have been proposed, but most of them require exponential time in the size of the input problem or its solution (Donald 1988a; Canny 1989), or they are incomplete with respect to the class of problems they attack (Latombe et al. 1991). Motion planning algorithms will not be applicable to real-world problems as long as they remain exponential or unreliable.

816 Robot Navigation

Since the general problem seems to be intrinsically hard (Canny & Reif 1987), a promising line of research is to identify a more restricted, but still interesting, subclass of problems that can be solved in polynomial time. This subclass can be obtained, for example, by engineering the robot's workspace. Thus, for the first time, workspace engineering is formally seen as a means to make planning problems tractable. Of course, workspace engineering has its own cost, and we should be careful not to specialize the class of problems too much.

In this paper we consider a class of planning problems in the context of the navigation of a mobile robot. We assume that landmarks are scattered across the robot's two-dimensional workspace. Each landmark is a physical feature of the workspace, or a combination of features, that the robot can sense and identify, if it is located in some appropriate subset of the workspace. This subset is the field of influence of the landmark. A landmark may be a natural feature of the workspace (e.g., the corner made by two walls) or an artificial one specifically provided to help robot navigation (e.g., a radio beacon or a magnetic device buried in the ground). Robot control and sensing are assumed to be perfect within the fields of influence of the landmarks; control is imperfect and sensing is null outside any such field.
Given an initial region in the workspace, where the robot is known to be, and a goal region, where we would like the robot to go, the planning problem is to generate motion commands whose execution guarantees that the robot will move into the goal and stop there, provided that our assumptions are satisfied.

We propose a planning method based on the LMT approach to solve the above problem. The method iteratively backchains non-directional preimages of the goal, until one encloses the set of possible initial positions of the robot. Each non-directional preimage is computed as a set of directional preimages for critical directions of motion. At every iteration, the intersection of the current non-directional preimage with the fields of influence of the landmarks defines the intermediate goal from which to backchain. The overall algorithm takes polynomial time in the total number of landmarks. It is complete with respect to the problems it attacks, that is, it produces a guaranteed plan (for input control uncertainty bounds) whenever one such plan exists, and returns failure otherwise. (A guaranteed plan is one whose execution is guaranteed to succeed if the actual errors are within the uncertainty bounds.) The polynomiality and completeness of the algorithm essentially derive from the combination of the two notions of a landmark and a non-directional preimage.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Another interesting aspect of the method is that, whether it returns success or failure, it always constructs a plan in the form of a non-ordered collection of reaction rules described as motion commands associated with regions of the workspace from which the goal can be reliably achieved. This is important in two ways. First, if the input problem has no solution, the robot may nevertheless try to enter one of the regions where a rule is available by performing an initial random motion.
Second, if an unexpected event occurs at execution time, the robot may attempt to reconnect to the plan in the same way. When the mean duration of a random motion before it enters one of the regions where reaction rules are available is small enough, i.e. when the total area of these regions is large relative to the workspace area, the idea of inserting random motions is a very attractive one.

In the following we present the planning problem we attack, our planning method and its use for navigation, experimental results, and possible extensions of the approach. A more detailed presentation of the method, along with several extensions, can be found in (Lazanas & Latombe 1992).

Related Work

The planning method we propose is an instance of the LMT preimage backchaining approach introduced in (Lozano-Perez et al. 1984; Mason 1984; Erdmann 1984). The complexity of the general problem addressed by the LMT approach is shown to be NEXPTIME-hard in three dimensions (Canny & Reif 1987), which strongly suggests that planning can take double exponential time in some measure of the size of the input problem. To our best knowledge no lower-bound time complexity result has been established for the two-dimensional problem, but there are several upper-bound results applying to this case. A rather general planning procedure based on algebraic decision techniques is described in (Canny 1989), which takes double exponential time in the number of steps of the motion plan. A less general algorithm obtained by restricting sensory feedback is given in (Donald 1988a), which is simply exponential in the number of steps. A perhaps more practical, but incomplete, algorithm is presented in (Latombe et al. 1991).

Part of the complexity of LMT, and, more generally, of motion planning under uncertainty, comes from the interaction between goal reachability and goal recognizability.
We not only want the robot to reach the goal despite uncertainty in control; we also want it to recognize goal achievement despite uncertainty in sensing. Erdmann suggested simplifying planning by assuming partial independence between these two notions (Erdmann 1984). This consists of extracting a subset of the goal that can be unambiguously recognized by the sensors independently of the way it has been achieved. This notion is central to the methods described in (Donald 1988a; Latombe et al. 1991). It is also related to the notion of landmarks used in (Levitt et al. 1987), and under different names in (Buckley 1986), (Christiansen et al. 1990) and (Donald & Jennings 1991). See also (Hutchinson 1991).

Our planning algorithm iteratively computes non-directional preimages of the goal. The notion of a non-directional preimage was already present in the original LMT, but its exact computation was first described in (Donald 1988a). Although our algorithm applies to a different setting, it uses several of the ideas introduced in (Donald 1988a; Briggs 1989), namely the fact that when the direction of motion varies continuously, the preimage of a goal remains qualitatively (i.e. topologically) the same, except at critical directions where it changes suddenly. Furthermore, the amount of change over all critical directions is bounded. These ideas, combined with a strong landmark notion, are the basis of our polynomial-time planning algorithm. A previous instantiation of LMT into a polynomial-time planner is described in (Friedman 1991).

LMT assumes bounded errors and produces guaranteed plans, that is, plans whose success is guaranteed as long as the actual errors during execution stay within these bounds. The concept of a plan that may fail recognizably is introduced in (Donald 1988b). The concept of a probabilistically guaranteed plan (a plan whose probability of success converges toward one when time grows to infinity) is developed in (Erdmann 1989).
Both concepts are related to the notion of a universal plan produced by our planner, when we add random motions. The notion of a universal plan was first proposed in (Schoppers 1989).

Planning Problem

The robot is a point moving in a plane called the workspace. There are no obstacles in the workspace. The robot can move in either one of two control modes, the perfect and the imperfect modes.

The perfect control mode is feasible only in some circular areas of the workspace called the landmark disks. When the robot is in a landmark disk, it recognizes the landmark without ambiguity, it knows its position exactly, and it has perfect control over its motions. In total, there are ℓ landmark disks scattered across the workspace. Some disks may intersect, creating larger areas through which the robot can move in the perfect control mode.

Lazanas and Latombe 817

The imperfect control mode can be used everywhere. In that mode the robot is requested to execute some motion command (d, L), where d ∈ S1 is a direction in the plane and L is a subset of landmark disks, called the termination set of the command. When it executes the command (d, L), the robot follows a path whose tangent at any point makes an angle with the direction d that is no greater than some angle θ called the directional uncertainty. The cone of angle 2θ whose axis points along d is the control uncertainty cone. The robot stops as soon as it enters a landmark disk in L. The robot has no sense of time, which means that the modulus of its velocity is irrelevant to the planning problem.

The robot is known to be anywhere in a specified initial region Z that consists of one or several disks. The goal region G is any subset (connected or not) of the workspace whose intersection with the landmark disks is easily computable.
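A motion command in the imperfect control mode can be mimicked with a toy simulation. Everything here (function name, step size, the uniform heading noise) is a hypothetical sketch rather than the paper's robot model, but it respects the two defining properties: the heading never deviates from d by more than θ, and the robot stops as soon as it enters a disk of the termination set L.

```python
import math
import random

def execute_command(pos, d, termination_disks, theta,
                    step=0.01, max_steps=100000, rng=None):
    """Toy simulation of an imperfect-mode command (d, L): at each step the
    robot advances along a heading within +/- theta of d, and stops on
    entering a disk (cx, cy, r) of L. Returns the stop position or None."""
    rng = rng or random.Random(0)
    x, y = pos
    for _ in range(max_steps):
        for (cx, cy, r) in termination_disks:
            if math.hypot(x - cx, y - cy) <= r:
                return (x, y)                    # entered a disk of L: stop
        h = d + rng.uniform(-theta, theta)       # directional uncertainty
        x += step * math.cos(h)
        y += step * math.sin(h)
    return None
```

For example, commanding direction d = 0 from the origin with θ = 0.1 toward a single termination disk of radius 0.5 centered at (1, 0) is guaranteed to terminate inside the disk, since the worst-case path stays inside the control uncertainty cone, which the disk's angular extent covers.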
The problem is to generate a motion plan (i.e., a sequence of motion commands in the perfect and imperfect control modes) that is guaranteed to attain G, if such a plan exists, and to return that no such plan exists otherwise. As we will see, our planner delivers more than that. The current planner assumes that there are no obstacles in the workspace and that influence fields are circular disks. However, both these assumptions can be relaxed, with the computational complexity of the method remaining polynomial. See (Lazanas & Latombe 1992).

Planning Method

Preimage of a Goal

Consider the goal region G. We define the kernel of G as the largest set of landmark disks such that, if the robot is in one of them, it can attain the goal by moving in the perfect control mode only. Let a landmark area be any maximal connected subset of landmark disks. The kernel of G is constructed as the union of all the landmark areas having non-zero intersection with G. We let K(G) denote the kernel of G.

The preimage of G, for any given direction d, is the largest subset of the workspace such that, if the robot executes a motion command (d, K(G)) in the imperfect control mode, starting anywhere in this subset, then it is guaranteed to reach K(G). (With our hypotheses, there is no more powerful condition to stop the motion than to recognize the entry into K(G).) From the entry point in the kernel, the robot can attain G in the perfect control mode. We let P(G, d) denote the preimage of G for d.

Figure 1: A preimage of a set of disks for d = x

A preimage P(G, d) consists of one or several connected subsets. Each connected subset is bounded by straight and circular edges. The circular edges are portions of the boundary of the landmark disks in the kernel of the goal. The straight edges are supported by lines parallel to the two sides of the control uncertainty cone and tangent to some landmark disks. All the straight edges in the boundary of a connected subset of the preimage end in circular edges, except two that end at the same vertex, called a spike of the preimage. Fig. 1 shows an example of a preimage with four connected subsets.

Each edge on the boundary of a preimage can be labeled with the name of a landmark disk: the straight edges with the name of the disk from which they start, and the arcs with the name of the disk to which they belong. If ℓ is the number of landmark disks, the total number of symbols is also ℓ. Note that if we describe the boundary of the preimage with a sequence of symbols, no two symbols can alternate more than twice. It is a well-known result that the size of such a sequence is linear in ℓ (Guibas & Stolfi 1989). Therefore the total size of a preimage is O(ℓ).

The construction of a preimage is rather straightforward and is not described here. Using a line-sweep algorithm our planner computes a preimage in O(ℓ log ℓ) time, after an initial (and performed only once) divide-and-conquer precomputation of the landmark areas that takes O(ℓ log² ℓ) time.

Preimage Backchaining

If we select d such that P(G, d) contains the initial region Z, then we have succeeded in producing a motion plan to achieve the goal. Indeed, from its initial position in Z, the robot can attain the kernel K(G) by executing the motion command (d, K(G)). Then, by switching to the perfect control mode, it can reach the goal without leaving K(G).

However, in general, such a one-step motion plan¹ does not exist. Of course, if K(G) is empty, so is P(G, d) for any d ∈ S1, and the planner returns failure. If K(G) is not empty and P(G, d), for the selected direction d, does not contain Z, we treat P(G, d) as an intermediate goal G1 and try to build a one-step motion plan to achieve it from Z. If we select a new direction, say d1, such that P(G1, d1) contains Z, we have a two-step motion plan to achieve G; otherwise, we consider P(G1, d1) as the new intermediate goal, and so on. The whole process is called preimage backchaining.

¹We measure the number of steps as the number of motion commands in the imperfect control mode.

There is still a major issue to address: how to choose a direction at every iteration of the preimage backchaining process? We could arbitrarily discretize the continuous set S1 into a finite set and try all possible combinations of directions in this finite set. But the planner would not be complete, even if we used a very fine discretization (which, by the way, would also lead to searching a huge graph). We solve this problem by using the notion of a non-directional preimage and slightly modifying the preimage backchaining process.

Non-Directional Preimage

Let us consider the preimage P(G, d). It turns out that when d moves around the circle S1, the answers to the questions "Does P(G, d) include Z?" and "What landmark disks does P(G, d) intersect?" change at a finite number of critical orientations. In order to detect these changes we must track the variation of P(G, d). Fortunately, P(G, d) varies continuously with the same topology, except at a finite number of other critical orientations where the topology of its boundary changes (e.g., new edges appear or old edges disappear). The open angular slice between any two consecutive critical orientations is called a regular interval. Let d1, ..., dp denote the critical orientations in counterclockwise order and I1, ..., Ip be the regular intervals between them (the endpoints of Ii are di and d(i+1 mod p)). For any interval Ii, let d'i be any orientation in Ii. In order to characterize the preimages of G and their relations with Z and the landmark disks, it suffices to compute P(G, d) for all d ∈ {d1, d'1, d2, d'2, ..., dp, d'p}.
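The paper only requires d'i to be some orientation in the regular interval Ii; a simple concrete choice is the interval midpoint. A minimal sketch of picking one representative per interval, where the only subtlety is the last interval wrapping around the circle:

```python
import math

def interval_representatives(critical, two_pi=2.0 * math.pi):
    """Given the critical orientations d1 < ... < dp (radians, sorted),
    return one representative d'i from each open regular interval
    Ii = (di, d(i+1 mod p)), taken here as the interval midpoint."""
    reps = []
    p = len(critical)
    for i in range(p):
        a = critical[i]
        b = critical[(i + 1) % p]
        if i == p - 1:
            b += two_pi          # last interval wraps around the circle
        reps.append(((a + b) / 2.0) % two_pi)
    return reps
```

For critical orientations at 0, π/2 and π, this yields the representatives π/4, 3π/4 and 3π/2, giving in total 2p = 6 directions for which the directional preimages must be computed.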
The set P(G) of all these preimages is called the non-directional preimage of G. Each of the 2p preimages in P(G) will now be called a directional preimage.

The events that give rise to critical orientations are caused by the motion of the straight edges of the preimage, as their positioning with respect to landmark or initial region disks changes. Fig. 2 shows the subset of events where the topology of the directional preimage changes. The events of Fig. 2 occur when a straight segment enters or exits a disk by becoming tangent to it (a, c, f, h), when segments appear or disappear (b, g), when segments cross the intersection of disks (d, i), and when a spike enters or exits a disk (e, j). The total number of potential events of each type is O(ℓ) or O(ℓ²), except for Spike (e) and Hidden Spike (j) events (and similar events not shown in Fig. 2) whose number is O(ℓ³). All events, except two, cause local changes in the preimage which can be computed in constant or logarithmic time. Only events (a) and (h) may cause O(ℓ) changes in the topology of the preimage (catastrophic events). After these events we recompute the preimage from scratch. Therefore, the complexity of our algorithm is O(ℓ³ log ℓ).

Figure 2: Events responsible for critical orientations (panel labels (a)-(j) are garbled in this scan; (d) Left Vertex and (e) Spike are legible)

Non-Directional Preimage Backchaining

Our planner first computes the non-directional preimage P(G0), with G = G0. (Again, if the kernel K(G0) is empty, the planner terminates with failure.) If P(G0) contains a directional preimage P(G0, d) that includes Z, then d determines a one-step motion plan to achieve G as we already described, and the planner terminates with success. Otherwise, the planner considers the union of the directional preimages in P(G0) as an intermediate goal G1. The kernel K(G1) consists of all the landmark areas that have a non-zero intersection with at least one of the directional preimages in P(G0).
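The backchaining over kernels can be sketched at the level of landmark-disk index sets. Here `reachable` is a hypothetical stand-in for the whole non-directional-preimage computation (it returns the disks from which some command is guaranteed to reach the current kernel); the grouping of disks into landmark areas, the union over areas touching the preimage, and the no-new-disk failure test follow the description in the text.

```python
import math
from itertools import combinations

def landmark_areas(disks):
    """Group landmark disks (x, y, r) into landmark areas, i.e. maximal
    connected subsets of mutually overlapping disks (simple union-find)."""
    parent = list(range(len(disks)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(disks)), 2):
        (x1, y1, r1), (x2, y2, r2) = disks[i], disks[j]
        if math.hypot(x1 - x2, y1 - y2) <= r1 + r2:
            parent[find(i)] = find(j)
    areas = {}
    for i in range(len(disks)):
        areas.setdefault(find(i), set()).add(i)
    return list(areas.values())

def backchain(disks, goal_disks, initial_disks, reachable):
    """Skeleton of non-directional preimage backchaining over disk-index
    sets. Succeeds when the initial disks fall inside the kernel or its
    preimage; fails when the kernel gains no new disk (monotone growth)."""
    areas = landmark_areas(disks)
    # kernel of the goal: union of the landmark areas touching the goal set
    kernel = set().union(*(a for a in areas if a & goal_disks)) if goal_disks else set()
    while kernel:
        if initial_disks <= kernel | reachable(kernel):
            return True, kernel
        grown = set().union(*(a for a in areas if a & (kernel | reachable(kernel))))
        if grown == kernel:
            return False, kernel   # no new landmark disk: guaranteed failure
        kernel = grown
    return False, kernel
```

On a chain of three disjoint disks with a `reachable` that lets each disk reach the next one toward the goal, the loop grows the kernel once and then reports success; with an empty `reachable` it detects the fixpoint and fails.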
By construction, the set of landmark disks in K(G1) is a superset of, or equal to, the set of landmark disks in K(G0). If the two sets are equal, the planner terminates with failure because it cannot compute a larger preimage than P(G0). Instead, if K(G1) \ K(G0) ≠ ∅, the non-directional preimage of G1 is computed. If it contains a directional preimage that includes Z, the planner terminates with success; otherwise it proceeds as above by treating the union of the directional preimages in P(G1) as the new intermediate goal G2, and so on.

During this process, the set of landmark disks in the kernels of the successive goals increases monotonically. At every iteration, either there is a new landmark disk in the kernel, and the planner proceeds, or there is no new disk, and the planner terminates with failure. The planner terminates with success whenever it has constructed a non-directional preimage P(GN) that includes Z, for some N ≥ 0. If s ∈ O(ℓ) denotes the total number of landmark areas in the workspace, the number of iterations is bounded by s. Hence, the total time complexity of the planner is O(s ℓ³ log ℓ).

In both cases (success and failure), the planner returns the sequence of non-directional preimages it has constructed.

Robot Navigation

Case where the planner returns success

Let P(GN) be the non-directional preimage that contains Z. The plan actually built by the planner is a set of reaction rules. The reaction rules are associated with the initial region Z and the landmark areas. The reaction rule associated with Z prescribes to execute the command (dN, LN) (in the imperfect control mode), where dN is such that P(GN, dN) is in P(GN) and contains Z, and LN is the set of landmark disks in the kernel K(GN). Executing this command guarantees the robot to move from its initial position to a landmark disk in LN.

The plan also provides one or several reaction rules per landmark area A in LN. Each such rule is constructed during the backchaining process when a landmark disk L in A intersects a constructed preimage for the first time. Let Gk be the goal whose preimage was being constructed when this happens. If k = 0, then Gk is the original goal G, and the reaction rule is simply to move to G in the perfect control mode. If k > 0, Gk is the union of the directional preimages in P(Gk-1). Then the reaction rule is defined by three parameters denoted by dk-1, Ek-1, and Lk-1. The direction dk-1 is any direction² such that P(Gk-1, dk-1) ∈ P(Gk-1) and P(Gk-1, dk-1) intersects L. Ek-1 is equal to L ∩ P(Gk-1, dk-1) and is called the exit region of L. Lk-1 is the set of landmark disks in K(Gk-1). The reaction rule says: from the point of entrance in A, move in the perfect control mode to the exit region Ek-1 of L; then switch to the imperfect control mode and execute the motion command (dk-1, Lk-1).

Since there may be several rules attached to a landmark area A (there are at most as many rules as there are disks in A), the navigation system must choose one. It chooses the one that is the closest to the point of entry in A, to avoid an unnecessarily long motion in the perfect control mode. By construction, no landmark disk in A is in the termination set Lk-1, and Lk-1 ⊂ Lk ⊂ ... ⊂ LN. Hence, the plan cannot loop. The robot is guaranteed to reach G in a finite number of steps that is at most N + 1 (but this number can be smaller).

Case where the planner returns failure

It nevertheless provides a set of reaction rules. Every landmark area in the last goal kernel has at least one such rule associated with it. The set of rules can be regarded as a universal plan in the sense that it provides an appropriate motion command for all recognizable subsets (landmark areas) in the workspace from which a guaranteed plan to the goal exists.
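When the planner returns failure, the rules can still be exploited by wandering until the robot enters a region that has a rule attached. A toy sketch of such a random control mode, with reflection on a square workspace boundary; all names and parameters here are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def random_mode(start, rule_regions, bounds=(0.0, 1.0),
                step=0.05, max_steps=100000, seed=0):
    """Toy random mode: a random walk with reflection on the workspace
    boundary, stopping when the robot enters a disk (cx, cy, r) that has
    a reaction rule attached. Returns the entry position, or None."""
    rng = random.Random(seed)
    lo, hi = bounds
    x, y = start
    for _ in range(max_steps):
        for (cx, cy, r) in rule_regions:
            if math.hypot(x - cx, y - cy) <= r:
                return (x, y)   # recognizable subset entered: resume the plan
        h = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(h)
        y += step * math.sin(h)
        # reflect on the workspace boundary
        x = 2.0 * lo - x if x < lo else (2.0 * hi - x if x > hi else x)
        y = 2.0 * lo - y if y < lo else (2.0 * hi - y if y > hi else y)
    return None
```

The walk terminates with probability approaching 1 as the step budget grows; the expected number of steps shrinks as the total area of the rule-equipped regions grows, which is exactly the trade-off discussed in the text.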
This universal plan can be used as follows. Let us assume that the workspace is bounded and the robot can detect that it attains the workspace boundary. Let us also define a third control mode for the robot, the random mode, which consists of executing a Brownian motion with reflection on the workspace boundary. The robot first executes a motion in the random mode until it reaches one of the recognizable subsets. Then it switches to using reaction rules in the universal plan. The probability that a Brownian motion will attain a recognizable subset converges toward 1 when time grows to infinity. The expected duration of the Brownian motion depends on the size of the landmark areas that are equipped with reaction rules. Obviously, this method is attractive only if this area is big enough, so that the expected duration of the motion is small.

²Actually, our implemented planner selects the median direction in the largest interval of directions that allow the robot to attain Gk from L. The intuition for this choice is that it is more robust to unmodelled control errors.

Unexpected event

In any of the previous two cases, imagine that an unexpected event occurs at execution time (e.g., the robot slipped and the error in control has been exceptionally large, or a landmark has been accidentally "turned off"). If this event leads the robot to miss all the landmark areas it was expected to attain, it will ultimately reach the workspace boundary. Then it may switch to a motion in the random mode, attain a landmark area with a reaction rule, and resume executing the planner's plan.

Experimental Results

We implemented the above planner, along with navigation techniques and a robot simulator, in C on a DECstation 5000. Below we present some examples of produced plans and their simulated execution. In the figures, white disks are landmark disks that intersect the last non-directional preimage computed by the planner.
Grey disks are landmark disks that have not been touched by any non-directional preimage; thus no plan exists for them. There is a single initial region disk marked Z and a single goal region disk marked G. Every time the robot enters a new landmark area with white disks, it executes an appropriate reaction rule, namely it moves into an exit region using the perfect control mode, and from there it executes the specified motion command in the imperfect control mode, until it reaches one of the landmark disks in the termination set of the rule. Each reaction rule is represented by the commanded direction of motion. The outline of exit regions is also shown, unless the exit region covers the entire disk. The termination sets of the rules are not represented. The directional uncertainty θ (see Section 3) is measured in radians.

We ran the algorithm on an example with 51 landmarks, one initial region disk and one goal disk. With θ = 0.1, a plan was returned in less than two seconds, after two backchaining steps. As we let uncertainty grow, the planner returned more and more complicated plans, leading the robot through many landmark disks in order to reduce uncertainty. We ran the algorithm on this same example with θ = 0.15, 0.2, 0.25, 0.29, 0.3, and 0.35, obtaining successful plans after 3, 4, 5, 6, 7, and 8 iterations, respectively. Each example was completed in less than one minute. Fig. 3 displays the case θ = 0.15. It took 3 iterations of the planner before the initial region got included in a preimage.

Figure 3: Successful planning and execution (θ = 0.15)

In the process the planner discovered guaranteed plans for many more landmark disks. A simulated plan (with three steps as well) is shown in the figure. Fig. 4 displays the case θ = 0.3. The resulting plan is clearly more complicated, as is evident from its simulated execution. Reaction rules are markedly different, as they now point towards neighboring rather than remote disks. Fig.
5 illustrates the use of the output of the planner in a simpler workspace (6 landmarks), when no guaranteed plan exists. With θ set to 0.35, even the preimage of all six landmark disks in this example fails to include the initial region disk. The universal plan produced by the planner is represented as a reaction rule attached to each landmark disk not intersecting the goal. The robot first executes a Brownian motion and it is lucky enough to enter the upper-left landmark disk in a relatively short amount of time. From there it reaches the goal safely.

Figure 4: Successful planning and execution (θ = 0.3)

Figure 5: Success using a Brownian motion

Conclusion

We have described a complete polynomial planning algorithm for landmark-based robot navigation. The algorithm addresses a rather simple class of problems, but it is fast and provides robust plans. In addition, the class of problems is strongly related to many real-life problems with mobile robots operating in open areas. The algorithm can be extended to deal with more complicated problems. Two relatively straightforward extensions are the use of more general landmarks with polygonal and/or circular fields of influence and the introduction of polygonal/circular obstacles in the workspace (Lazanas & Latombe 1992). Dealing with polygonal rather than circular landmark fields is actually easier. We just need to adapt our critical events to the polygonal structure of the areas. Also, obstacles can be handled by the preimage computation algorithms, by shading out the subsets of the preimage containing the points that have a chance to lead the robot to collide with an obstacle. The adaptive selection of a value for θ, as well as precomputing plans for faster planning in a stable workspace, are also discussed in (Lazanas & Latombe 1992).

So far, most algorithms to plan motion strategies under uncertainty were either exponential in the size of the input problem or its solution, or incomplete, or both.
Such algorithms may be interesting from a theoretical point of view, but their computational complexity or lack of reliability prevents them from being applied to real-world problems. Our work shows that it is possible to identify a restricted, but still realistic, subclass of planning problems that can be solved in polynomial time. This subclass is obtained through assumptions whose satisfaction may require prior engineering of the workspace and/or the robot. In our case, this implies the creation of adequate landmarks, either by taking advantage of the natural features of the workspace, or by introducing artificial beacons. We call this type of simplification engineering the workspace for planning tractability.

Engineering the workspace has its own cost and we would like to minimize it. This will lead us to investigate further extensions of our planner. For example, the task of creating landmarks would be considerably simplified if the planner could deal with small uncertainty in sensing and control within the landmark fields of influence, and/or in the location and the size of these fields, perhaps allowing influence fields with soft, rather than sharp, boundaries. Our goal will be to find a more general class of problems than the one solved by the current planner, but requiring less workspace/robot engineering and still solvable in polynomial time. A related issue will be to investigate the following "inverse" problem: given our planning method and the description of a family of tasks (e.g., the set of all possible initial and goal regions), how to minimally engineer the workspace, that is, what is the minimal number of landmarks that we should place, and where should we place them, so that every possible problem admits a guaranteed solution.

References

Briggs, A.J. 1989. An Efficient Algorithm for One-Step Planar Compliant Motion Planning with Uncertainty. In Proceedings of the Fifth ACM Symposium on Comp.
Geometry, 187-196. Saarbrücken, Germany.

Buckley, S.J. 1986. Planning and Teaching Compliant Motion Strategies. Ph.D. diss., Dept. of Electrical Engineering and Computer Science, MIT.

Canny, J.F., and Reif, J. 1987. New Lower Bound Techniques for Robot Motion Planning Problems. In Proceedings of the 28th IEEE Symposium on Foundations of Computer Science, 49-60. Los Angeles, CA.

Canny, J.F. 1988. The Complexity of Robot Motion Planning. Cambridge, MA: MIT Press.

Canny, J.F. 1989. On Computability of Fine Motion Plans. In Proceedings of the IEEE International Conference on Robotics and Automation, 177-182. Scottsdale, AZ.

Christiansen, A.; Mason, M.; and Mitchell, T.M. 1990. Learning Reliable Manipulation Strategies without Initial Physical Models. In Proceedings of the IEEE International Conference on Robotics and Automation, 1224-1230. Cincinnati, OH.

Donald, B.R. 1988. The Complexity of Planar Compliant Motion Planning Under Uncertainty. In Proceedings of the Fourth ACM Symposium on Computational Geometry, 309-318. Champaign, IL.

Donald, B.R. 1988. A Geometric Approach to Error Detection and Recovery for Robot Motion Planning with Uncertainty. Artificial Intelligence Journal 37(1-3):223-271.

Donald, B.R., and Jennings, J. 1991. Sensor Interpretation and Task-Directed Planning Using Perceptual Equivalence Classes. In Proceedings of the IEEE International Conference on Robotics and Automation, 190-197. Sacramento, CA.

Erdmann, M. 1984. On Motion Planning with Uncertainty. Technical Report 810, AI Laboratory, MIT.

Erdmann, M. 1989. On Probabilistic Strategies for Robot Tasks. Ph.D. diss., Dept. of Electrical Engineering and Computer Science, MIT.

Friedman, J. 1991. Computational Aspects of Compliant Motion Planning. Ph.D. diss., Dept. of Computer Science, Stanford Univ.

Guibas, L.J., and Stolfi, J. 1989. Ruler, Compass, and Computer: The Design and Analysis of Geometric Algorithms. Technical Report No. 37, Systems Research Center, Digital, Palo Alto.

Hutchinson, S. 1991. Exploiting Visual Constraints in Robot Motion Planning. In Proceedings of the IEEE International Conference on Robotics and Automation, 1722-1727. Sacramento, CA.

Latombe, J.C. 1991. Robot Motion Planning. Boston, MA: Kluwer Academic Publishers.

Latombe, J.C.; Lazanas, A.; and Shekhar, S. 1991. Robot Motion Planning with Uncertainty in Control and Sensing. Artificial Intelligence Journal 52(1):1-47.

Lazanas, A., and Latombe, J.C. 1992. Landmark-Based Robot Navigation. Technical Report, Dept. of Computer Science, Stanford Univ. Forthcoming.

Levitt, T.S.; Lawton, D.T.; Chelberg, D.M.; and Nelson, P.C. 1987. Qualitative Navigation. In Proceedings of the Image Understanding Workshop, 447-465. Los Angeles, CA.

Lozano-Perez, T.; Mason, M.T.; and Taylor, R.H. 1984. Automatic Synthesis of Fine-Motion Strategies for Robots. International Journal of Robotics Research 3(1):3-24.

Mason, M.T. 1984. Automatic Planning of Fine Motions: Correctness and Completeness. In Proceedings of the IEEE International Conference on Robotics and Automation, 492-503. Atlanta, GA.

Schoppers, M.J. 1989. Representation and Automatic Synthesis of Reaction Plans. Ph.D. diss., Dept. of Computer Science, University of Illinois at Urbana-Champaign.
A Symbolic Generalization of Probability Theory

Adnan Y. Darwiche and Matthew L. Ginsberg
Computer Science Department
Stanford University
Stanford, CA 94305

Abstract

This paper demonstrates that it is possible to relax the commitment to numeric degrees of belief while retaining the desirable features of the Bayesian approach for representing and changing states of belief. We first present an abstract representation of states of belief and an associated notion of conditionalization that subsume their Bayesian counterparts. Next, we provide some symbolic and numeric instances of states of belief and their conditionalizations. Finally, we show that patterns of belief change that make Bayesianism so appealing do hold in our framework.

Introduction

Representing states of belief and modeling their dynamics is an important area of research in AI that has many interesting applications. A number of formalisms for this purpose have been suggested in the literature [Aleliunas, 1988; Bonissone, 1987; Dubois and Prade, 1988; Ginsberg, 1988; Pearl, 1988; Shenoy, 1989; Spohn, 1990] but Bayesian formalisms [Pearl, 1988] seem to be among the best we know. Here, a state of belief is represented by a probability function over some set of propositions, and Bayes conditionalization is used to change a state of belief upon acquiring certain evidence.

The success and increasing popularity of Bayesian formalisms result largely from two factors. First, their admission of non-binary degrees of belief makes them more convenient than classical logic formalisms, for example, which support true and false propositions only. Second, the associated notion of Bayes conditionalization gives rise to many desirable patterns of belief change [Polya, 1954; Pearl, 1988]. Among these patterns are Polya's five patterns of plausible inference: examining a consequence, a possible ground, a conflicting conjecture, several consequences in succession, and circumstantial evidence [Polya, 1954].
For example, the first of these patterns says "The verification of a consequence renders a conjecture more credible" while the second says "Our confidence in a conjecture can only diminish when a possible ground for the conjecture has been exploded."

A significant problem with probability calculus,¹ however, is that it commits one to numeric degrees of belief. The reason why this commitment is problematic was clearly expressed by Jon Doyle [Doyle, 1990]:

One difficulty is that while it is relatively easy to elicit tentative propositional rules from experts and from people in general, it is considerably harder to get the commitment to particular grades of certainty ... Worse still, individual informants frequently vary in their answers to a repeated question depending on the day of the week, their emotional state, the preceding questions, and other extraneous factors ... Reported experiments show the numbers do not actually mean exactly what they mean, for the performance of most systems remains constant under all sort of small (< 30%) perturbations in the precise values used.

Nevertheless, AI practitioners continue to have mixed feelings about probability calculus and other numerical approaches to uncertainty:

Understandably, expert system designers have difficulty justifying their use of the numerical judgements in face of these indications of psychological and pragmatic unreality. Unfortunately, they have had to stick to their guns, since no satisfactory alternative has been apparent. [Doyle, 1990]

It is therefore of significant interest to the AI community to have a calculus that (1) does not commit to numbers, (2) admits non-binary degrees of belief, and (3) supports patterns of belief change that make probability calculus so successful. But is this possible?
This paper answers "Yes." In the following sections, we present a belief calculus that enjoys the above properties.²

¹We use the terms "probability calculus," "probability theory," and "Bayesianism" interchangeably in this paper.
²Proofs, omitted due to space limitations, can be found in the full version of this paper.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Representing states of belief

A state of belief can be viewed as an attribution of degrees of support to propositions.³ To formalize this intuition, however, we need to choose particular representations of propositions and degrees of support, and to constrain the mappings from propositions to degrees of support so that they correspond to coherent states of belief.

Propositions Our account of propositions is to identify them with sentences of a propositional language L with the usual connectives ¬, ∧, ∨, and ⊃. We use false to denote any contradictory sentence, and true to denote any tautologous sentence, in L. The symbols A, B, and C denote sentences in L, and ⊨ denotes logical entailment.

Degrees of support A degree of support is an abstract quantity. It is neither strictly numeric nor strictly symbolic. Degrees of support can be integers, rationals, and even logical sentences. We view a degree of support as a primitive concept that derives its meaning from the operations and relations that are defined on the set of degrees of support S. The symbols a, b, and c denote degrees of support in S.

States of belief A state of belief is a mapping Φ from a language L into degrees of support S.⁴ This definition, however, admits some incoherent states of belief. For example, if S = {true, false}, we may have a state of belief that assigns false to a proposition and to its negation. We would like to exclude such states. And we will do this by identifying and formalizing a set of intuitive properties about coherent states of belief.

³We use the term "degree of support" as a generalization of the term "degree of belief." Support could be for or against the proposition to which it is attributed.
⁴We assume that degrees of support are useful. That is, for all a in S, there is a state of belief that attributes a to some sentence in its domain.
The following are the properties we have identified:

(A0) Equivalent sentences have the same support in the same state of belief.
(A1) Contradictory sentences have the same support across all states of belief.
(A2) Tautologous sentences have the same support across all states of belief, which is different from the support for contradictory sentences.
(A3) The support for A ⊃ B is a function of the support for ¬A and the support for A ∧ B.
(A4) If A ⊨ B ⊨ C and A has the same support as C, then B has also the same support as A and C.

Formalizing the above properties constrains the degrees of support S, and the mappings from L to S, as shown by the following theorem.

Theorem 1 Let Φ : L → S be a state of belief, and let A and B be sentences in L. Properties (A0)-(A4) hold iff:
1. Φ(A) = Φ(B) if A is equivalent to B.
2. There exists a partial function ⊕ : S × S → S such that:⁵
   Φ(A ∨ B) = Φ(A) ⊕ Φ(B) if ⊨ ¬(A ∧ B), and
   a ⊕ b = b ⊕ a, (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c), and if (a ⊕ b) ⊕ c = a then a ⊕ b = a.
3. There exists a unique support 0 in S such that:
   Φ(false) = 0, and
   for all a, a ⊕ 0 = a.
4. There exists a unique support 1 ≠ 0 in S such that:
   Φ(true) = 1, and
   for all a, there exists b that satisfies a ⊕ b = 1.

The function ⊕ is called support summation and (S, ⊕) is called a partial support structure.

(S, ⊕)               0      1
({0,1}, max)         0      1
([0,1], +)           0      1
({0,1,...,∞}, min)   ∞      0
(sentences, ∧)       true   false

Table 1: Examples of partial support structures.

The full paper shows that the first three partial support structures of Table 1 induce states of belief that correspond to the following, respectively: classical logic, probability calculus, and nonmonotonic logic based on preferential models [Kraus et al., 1990].
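The algebraic requirements on ⊕ can be checked mechanically. Below is a small sketch of my own (not from the paper) instantiating two of the structures in Table 1, the probability-style structure ([0,1], +) and the kappa-style structure ({0,1,...,∞}, min), and verifying Theorem 1's commutativity, associativity, and identity conditions on a finite sample of S:

```python
# Sketch (not from the paper): two partial support structures (S, +) from Table 1,
# checked against the algebraic requirements of Theorem 1 on a finite sample.

def check_structure(elems, plus, zero):
    """Brute-force check of Theorem 1's conditions on a finite sample of S."""
    for a in elems:
        assert plus(a, zero) == a              # 0 is the identity: a (+) 0 = a
        for b in elems:
            assert plus(a, b) == plus(b, a)    # commutativity
            for c in elems:                    # associativity
                assert plus(plus(a, b), c) == plus(a, plus(b, c))
    return True

# Probability-style structure: S = [0,1], summation is numeric addition.
prob = dict(elems=[0.0, 0.25, 0.5, 1.0], plus=lambda a, b: a + b, zero=0.0)

# Kappa-style structure: S = {0,1,...,inf}, summation is min;
# the 0-element is infinity and the 1-element is 0.
INF = float("inf")
kappa = dict(elems=[0, 1, 2, INF], plus=min, zero=INF)

print(check_structure(**prob) and check_structure(**kappa))  # True
```

The sample values and helper name `check_structure` are illustrative choices, not notation from the paper.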
In probability calculus, we assess our support for a sentence by providing a number in the interval [0, 1]. If we have complete confidence in a sentence, we give it a support of one; otherwise, we give it a support of less than one. Another way to assess support for a sentence is to explicate the reason we have doubts about it. For example, given that Tweety is a bird, we may have doubts about its flying ability because "Tweety is wingless." This intuition motivates a class of states of belief where degrees of support are sentences, called objections, in a propositional language O.

The support summation function in objection-based states of belief is logical conjunction. That is, the objection to A ∨ B is equivalent to A's objection conjoined with B's objection. For example, considering Table 2, the objection to A ⊃ B = "Bird implies flies" can be computed by conjoining the objection to A ∧ B = "Bird and flies" with the objection to ¬A = "Not bird," which yields Φ(A ∧ B) ∧ Φ(¬A) = "Wingless and has feather."⁶

⁵a ⊕ b is defined iff a = Φ(A) and b = Φ(B) for some Φ, A, B where ⊨ ¬(A ∧ B).
⁶Note that A ⊃ B is equivalent to (A ∧ B) ∨ ¬A.

Table 2: A partial objection-based state of belief.

There is a close connection between objection-based states of belief and ATMS [Reiter and de Kleer, 1987], which rests on the following observation: the objection to a sentence can be viewed as an ATMS label for the negation of that sentence. For example, the objection to "Bird implies flies," "Wingless and has feather," can be viewed as a label for "Bird and does not fly."

Ordering degrees of support Degrees of support can be partially ordered using the support summation function. The intuition here is that the sum of two supports is at least as great as each of the summands.

Theorem 2 Define support a to be no greater than support b (written a ≤⊕ b) iff there is a support c satisfying a ⊕ c = b.
The relation ≤⊕ is a partial order on S, and for all a in S, 0 ≤⊕ a ≤⊕ 1. ≤⊕ is called a support order.

Table 3: Examples of support orders.

A sentence that has support 0 will be called rejected and its negation will be called accepted. Note that if a sentence is accepted, it must have support 1, but the converse is not true. To consider an example, let the degrees of support S be {possible, impossible}, and let support summation be defined as follows: a ⊕ b = possible unless a = b = impossible. This makes 1 = possible. Moreover, a state of belief Φ may be such that Φ("Bird") = Φ("Not bird") = possible. Hence, a sentence and its negation may both have support 1 and yet neither of them may be accepted.

Changing states of belief

This section is mainly concerned with the following question: how should a state of belief change as a result of accepting a non-rejected sentence? When we accept a sentence A in a state of belief Φ, we say that we have conditionalized Φ on A. Our goal in this section is to formalize this process.

Definition 1 Let Φ be a state of belief with respect to (S, L, ⊕). If A ∈ L is not rejected by Φ, then a conditionalization of Φ on A (written Φ_A) is a state of belief, with respect to (S, L, ⊕), in which A is accepted.

Given a state of belief Φ, there are many conditionalized states of belief Φ_A that satisfy the above definition. Some of these states correspond to plausible changes in a state of belief, but others do not. We would like to constrain conditionalization so that implausible belief changes are excluded. And we will do this by identifying and formalizing some intuitive properties of belief change. The following are the properties we have identified:

(A5) Accepting a non-rejected sentence retains all accepted sentences.
(A6) Accepting an accepted sentence leads to no change in a state of belief.
(A7) Accepting A ∨ B does not decrease the support for A.
(A8) If A's support after accepting C equals its support after accepting B ∧ C, then B's support after accepting C equals its support after accepting A ∧ C.
(A9) If A ∨ B is equally supported by two states of belief and A is unequally supported by these states, then A remains so after each state accepts A ∨ B.
(A10) The support for A after accepting A ∨ B is a function of the initial supports for A and A ∨ B.

Formalizing the above properties leads to a constructive definition of conditionalization as shown by the following theorem.

Theorem 3 Assume (A0)-(A4), let Φ be a state of belief with respect to (S, L, ⊕), and let A, B be two sentences in L where A is not rejected by Φ. Properties (A5)-(A10) hold iff there exists a partial function ⊖ : S × S → S such that:⁷
Φ_A(B) = Φ(A ∧ B) ⊖ Φ(A), and
0 ⊖ a = 0, a ⊖ 1 = a, (a ⊕ b) ⊖ c = (a ⊖ c) ⊕ (b ⊖ c), a ⊖ a = 1, a ⊖ b ≥⊕ a, if a ⊖ c = b ⊖ c then a = b, and if a ⊖ b = c ⊖ d then a ⊖ c = b ⊖ d.

The function ⊖ is called support scaling and (S, ⊕, ⊖) is called a support structure.

The full paper shows that the first three scaling functions in Table 4 give rise to conditionalization rules that correspond to the following, respectively: augmenting conclusions in classical logic, Bayes conditionalization in probability calculus, and augmenting/retracting conclusions in nonmonotonic logic based on preferential models [Kraus et al., 1990].

Conditionalization of objection-based states of belief roughly states the following: the objection to B after accepting A is the initial objection to A ∧ B minus the initial objection to A [Darwiche, 1992b].

⁷a ⊖ b is defined iff a ≤⊕ b and b ≠ 0.

(S, ⊕)               a ⊖ b
({0,1}, max)         min(a, b)
([0,1], +)           a/b
({0,1,...,∞}, min)   a − b
(sentences, ∧)       true, if a ≡ true; a ∧ ¬b, otherwise

Table 4: Examples of support scaling.
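In the probability instance of Table 4, support scaling is division, so Theorem 3's conditionalization rule reduces to Bayes' rule. A small numeric sketch of my own (made-up numbers, not from the paper):

```python
# Sketch (not from the paper): in the probability instance, support scaling is
# division, so Phi_A(B) = Phi(A & B) (-) Phi(A) is exactly Bayes conditionalization.

def scale(a, b):
    """Support scaling for ([0,1], +): a (-) b = a / b, defined when a <= b and b != 0."""
    assert 0 < b and a <= b, "a (-) b requires a <= b and b != 0"
    return a / b

# Made-up unconditional supports (probabilities):
p_A       = 0.5    # Phi(A)
p_A_and_B = 0.25   # Phi(A & B)

p_B_given_A = scale(p_A_and_B, p_A)  # conditionalized support Phi_A(B)
print(p_B_given_A)  # 0.5
```

The definedness guard mirrors footnote 7: scaling requires a ≤⊕ b and b ≠ 0, which for probabilities is just a ≤ b with b nonzero.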
To give an example, let us compute the objection to B = "Flies" conditioned on accepting A = "Bird." According to Theorem 3, this can be computed from the objection to A ∧ B = "Bird and flies," and the objection to A = "Bird," which are given in Table 2. The desired objection, Φ_A(B), is then computed by Φ(A ∧ B) ⊖ Φ(A) = "Wingless."

We conclude this section by noting that objection-based conditionalization is closely related to updating ATMS labels. A complete treatment of this connection, however, is not within the scope of this paper; the interested reader is referred to [Darwiche, 1992b].

Conditional and unconditional supports

For our framework to be useful in building artificial agents, the specification of a state of belief must be made intuitive enough so that a domain expert can naturally map his state of belief onto an artificial agent. This section discusses a function on degrees of support that helps achieve this goal.

A basic observation about human reasoning, claimed by Bayesian philosophers, is that it is more intuitive for people to specify their support for a sentence B (e.g., "The grass is wet") conditioned on accepting a relevant sentence A (e.g., "It rained") than to specify their unconditional support for B. It is therefore natural for domain experts to specify their states of belief by providing conditional supports. This is indeed the approach taken by most probabilistic representations, where a domain expert provides statements of the form "P(B|A) = p," which reads as "If I accept A, then my probabilistic support for B becomes p."

One should note, however, that conditional supports are most useful when they can tell us something about unconditional supports. For example, conditional probabilities can be easily mapped into unconditional probabilities: P(A ∧ B) = P(B|A) P(A). It is then important to ask whether the previous equality is an instance of a more general one that holds in our framework.
This question is answered positively by the following theorem, which states that for every support structure there is a function on degrees of support that plays the same role as that played by numeric multiplication in probability theory.

Theorem 4 Let (S, ⊕, ⊖) be a support structure, Φ be a state of belief with respect to (S, L, ⊕), and A, B be two sentences in L where A is not rejected by Φ. Properties (A0)-(A10) imply the existence of a partial function ⊗ : S × S → S such that:⁸
Φ(A ∧ B) = Φ_A(B) ⊗ Φ(A), and
(a ⊗ b) ⊖ b = (a ⊖ b) ⊗ b = a, 0 ⊗ a = 0, a ⊗ 1 = a, a ⊗ b ≤⊕ a, a ⊗ b = b ⊗ a, and (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c).

The function ⊗ is called support unscaling.

⁸a ⊗ b is defined iff there is c satisfying c ⊖ b = a.

Table 5: Examples of support unscaling.

The support unscaling function in objection-based states of belief is logical disjunction. We have previously computed the objection to B = "Flies" conditioned on accepting A = "Bird" to be Φ_A(B) = "Wingless." So let us now compute the objection to A ∧ B = "Bird and flies." Theorem 4 tells us that we need the objection to A for this computation, which is given in Table 2. The desired objection, Φ(A ∧ B), is then computed by Φ_A(B) ∨ Φ(A) = "Wingless."

Patterns of plausible reasoning

The ultimate objective of many works in AI, most notably nonmonotonic logics, is to capture patterns of plausible reasoning in nonnumerical terms. George Polya (1887-1985) was one of the first mathematicians to attempt a formal characterization of qualitative human reasoning. Polya identified five main patterns of plausible reasoning in [Polya, 1954, Chapter XV] and demonstrated that they can be formalized using probability theory. Pearl highlighted these patterns in his recent book [Pearl, 1988] and took them, along with other patterns such as nonmonotonicity, abduction, explaining-away and the hypothetical middle [Pearl, 1988, Page 19], as evidence for the indispensability of probability theory in formalizing plausible reasoning.
In his own words:

We take for granted that probability calculus is unique in the way it handles context-dependent information and that no competing calculus exists that closely covers so many aspects of plausible reasoning [Pearl, 1988, Page 20].

This section shows that four of Polya's patterns of plausible reasoning hold in our framework. First, however, we need to formally define certain terms that Polya used in stating his patterns. To "verify" or "prove" a proposition is to accept it. To "explode" a proposition is to reject it. And the "credibility of," "confidence in," and "belief in" a proposition are all equivalent terms.

Definition 2 A is no more supported than B in a state of belief Φ iff Φ(A) ≤⊕ Φ(B).

Given Theorem 2, it should be clear that the relation no-more-supported is a partial ordering.

Definition 3 A is no more believed than B in a state of belief Φ iff A is no more supported than B, and ¬B is no more supported than ¬A, in Φ.

The second part of the above definition may seem redundant, but it generally is not. Table 6 provides a counterexample.

Table 6: A state of belief with respect to the structure ({0,1}, max), where 0 ≤max 1. bird is no more supported than fly although it is more believed.

The reason for this asymmetry with probability calculus is that, in general, the support for a proposition does not determine the support for its negation, as is the case in probability calculus.

Theorem 5 No-more-believed is a partial ordering.

We are now ready to state and prove Polya's patterns.

■ Examining a consequence:

The verification of a consequence renders a conjecture more credible. [Polya, 1954, Page 120]

Theorem 6 If A ⊃ B is accepted, and B is not rejected, by a state of belief Φ, then Φ_B(A) ≥⊕ Φ(A) unless Φ(A) = 0 or Φ(B) = 1.
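In the probability instance, Theorem 6 is the familiar fact that verifying a consequence of a conjecture raises the conjecture's probability. A quick numeric sketch of my own (made-up numbers, not from the paper), using the observation that when A entails B, P(A ∧ B) = P(A):

```python
# Sketch (not from the paper): Theorem 6 in the probability instance.
# If A > B is accepted then P(A & B) = P(A), so conditioning on the
# consequence B gives P(A | B) = P(A) / P(B) >= P(A) whenever 0 < P(B) <= 1.

p_A = 0.2   # made-up prior support for the conjecture A
p_B = 0.5   # made-up prior support for its consequence B (A entails B, so p_A <= p_B)

p_A_given_B = p_A / p_B  # support for A after verifying the consequence B
print(p_A_given_B >= p_A)  # True: the conjecture became more credible
```

The inequality is strict here because P(B) < 1, matching the theorem's proviso that nothing is gained when Φ(B) = 1 or Φ(A) = 0.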
■ Examining several consequences in succession:

The verification of a new consequence enhances our confidence in the conjecture, unless the new consequence is implied by formerly verified consequences. [Polya, 1954, Page 125]

Theorem 7 If A ⊃ C1, ..., A ⊃ Cn are accepted, and C1 ∧ ... ∧ Cn is not rejected, by a state of belief Φ, then Φ_{C1∧...∧Cn}(A) ≥⊕ Φ_{C1∧...∧Cn−1}(A) unless Φ_{C1∧...∧Cn−1}(Cn) = 1.

The patterns of examining a possible ground and examining a conflicting conjecture are omitted for space limitations and can be found in the full version of this paper.

Discussion and related work

■ An important attraction of probabilistic states of belief is that they can be specified using Probabilistic Causal Networks (PCNs) [Pearl, 1988]. PCNs are easy to construct and serve as models for computing unconditional and conditional probabilities. There are parallel constructs for specifying abstract states of belief, called abstract causal networks (ACNs) [Darwiche and Ginsberg, 1992; Darwiche, 1992a]. ACNs are also easy to construct and serve as models for computing unconditional and conditional degrees of support.

■ The four belief calculi that we presented so far are not the only instances of our framework. Table 7, for example, depicts two more calculi.

                Improbability         Consequence
S               [0, 1]                A propositional language
a ⊕ b           a + b − 1             a ∨ b
a ⊖ b           (a − b)/(1 − b)       false, if a ≡ false; a ∨ ¬b, otherwise
0               1                     false
1               0                     true
a ≤⊕ b          a ≥ b                 a ⊨ b
a ⊗ b           a + b − ab            a ∧ b
Φ(A)            Improbability of A    Consequence of A

Table 7: Improbability and Consequence calculi.

If a domain expert is not satisfied with the calculi proposed in this paper, all he has to do is the following: choose a set of supports S that he feels more comfortable with and accept properties (A0)-(A10) with respect to this choice. The results of this paper show that there must exist a support structure, (S, ⊕, ⊖), which gives rise to a new belief calculus that shares with probability calculus its desirable properties.
■ It is typical of multivalued logics [Bonissone, 1987; Ginsberg, 1988] and generalizations of probability calculus [Aleliunas, 1988] to assume one of the following axioms: Φ(A ∧ B) is a function of Φ(A) and Φ(B), or Φ(¬A) is a function of Φ(A). None of the six calculi presented in this paper satisfy the first axiom, and only the probability and improbability calculi satisfy the second axiom.

■ One can define a notion of qualitative conditional influence that generalizes the probabilistic notion defined for Qualitative Probabilistic Networks [Wellman, 1988]. In a state of belief Φ, we say that A positively influences B given C if and only if Φ_{¬A∧C∧D}(B) ≤⊕ Φ_{A∧C∧D}(B), for all D where ¬A ∧ C ∧ D and A ∧ C ∧ D are not rejected by Φ. Negative and zero influences are similar.

■ One criticism of the work presented here may be that its conception of a state of belief is restrictive because of Property (A3). As a result of this property, states of belief such as those represented by Dempster's basic probability assignments [Shafer, 1976] do not fit in our framework.

■ It is not clear whether the pattern of circumstantial evidence holds in our framework. This pattern says that "If a certain circumstance is more credible with a certain conjecture than without it, the proof of that circumstance can only enhance the credibility of that conjecture." [Polya, 1954, Page 127]

■ Some important questions remain unanswered about our framework. For example, what additional properties of states of belief and belief change would commit us to Bayesianism? Moreover, what additional properties of belief change would force the uniqueness of support scaling, thus reducing a support structure into a pair (S, ⊕)? Finally, is there an abstract decision theory that subsumes probabilistic decision theory, in the same way that our framework subsumes Bayesianism?
Conclusion

We have presented an abstract framework for representing and changing states of belief, which subsumes the Bayesian framework. At the heart of the framework is a mathematical structure, (S, ⊕, ⊖), called a support structure, which contains all the information needed to represent and change states of belief. We have also presented symbolic and numeric instances of support structures, and have shown that our framework supports some patterns of plausible reasoning that have been considered unique to numeric formalisms.

Acknowledgements

The first author has benefited greatly from comments of Ahmad Darwiche, Don Geddis, Adam Grove, Jinan Hussain, Alon Levy, H. Scott Roy, Ross Shachter, Yoav Shoham, and the PRINCIPIA group at Stanford University. This work has been supported by the Air Force Office of Scientific Research under grant number 90-0363, by NSF under grant number IRI89-12188, and by DARPA/Rome Labs under grant number F30602-91-C-0036.

References

Aleliunas, Romas 1988. A new normative theory of probability logic. In Proceedings of the Canadian Artificial Intelligence Conference. Morgan Kaufmann Publishers, Inc., San Mateo, California. 67-74.

Bonissone, Piero 1987. Summarizing and propagating uncertain information with triangular norms. International Journal of Approximate Reasoning 1:71-101.

Darwiche, Adnan Y. and Ginsberg, Matthew L. 1992. Abstract causal networks. Submitted to UAI-92.

Darwiche, Adnan Y. 1992a. Objection-based causal networks. Submitted to UAI-92.

Darwiche, Adnan Y. 1992b. Objection calculus. (forthcoming).

Doyle, Jon 1990. Methodological simplicity in expert system construction: The case of judgements and reasoned assumptions. In Shafer, Glenn and Pearl, Judea, editors 1990, Readings in Uncertain Reasoning. Morgan Kaufmann Publishers, Inc., San Mateo, California. 689-693.

Dubois, Didier and Prade, Henri 1988. Possibility Theory: An Approach to Computerized Processing of Uncertainty. Plenum Press, New York.
Ginsberg, Matthew L. 1988. Multivalued logics: a uniform approach for reasoning in artificial intelligence. Computational Intelligence 4:265-316.

Kraus, S.; Lehmann, D.; and Magidor, M. 1990. Nonmonotonic reasoning, preferential models and cumulative logics. Artificial Intelligence 44(1-2):167-207.

Pearl, Judea 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., San Mateo, California.

Polya, George 1954. Patterns of Plausible Inference. Princeton University Press, Princeton, NJ.

Reiter, Ray and de Kleer, Johan 1987. Foundations of assumption-based truth maintenance systems: Preliminary report. In Proceedings of AAAI. AAAI. 183-188.

Shafer, Glenn 1976. A Mathematical Theory of Evidence. Princeton University Press, Princeton, NJ.

Shenoy, Prakash P. 1989. A valuation-based language for expert systems. International Journal of Approximate Reasoning 3(5):383-411.

Spohn, Wolfgang 1990. A general non-probabilistic theory of inductive reasoning. In Kanal, L.; Shachter, R.; Levitt, T.; and Lemmer, J., editors 1990, Uncertainty in Artificial Intelligence 4. Elsevier Science Publishers. 149-158.

Wellman, Michael P. 1988. Qualitative probabilistic networks for planning under uncertainty. In Lemmer, J. and Kanal, L., editors 1988, Uncertainty in Artificial Intelligence, volume 2. Morgan Kaufmann Publishers, Inc., San Mateo, California.
A Logic of Knowledge and Belief for Recursive Modeling: Preliminary Report*

Piotr J. Gmytrasiewicz and Edmund H. Durfee
Department of Electrical Engineering and Computer Science
University of Michigan
Ann Arbor, Michigan 48109

Abstract

To make informed decisions in a multiagent environment, an agent needs to model itself, the world, and the other agents, including the models that those other agents might be employing. We present a framework for recursive modeling that uses possible worlds semantics, and is based on extending the Kripke structure so that an agent can model the information it thinks that another agent has in each of the possible worlds, which in turn can be modeled with Kripke structures. Using recursive nesting, we can define the propositional attitudes of agents to distinguish between the concepts of knowledge and belief. Through the Three Wise Men example, we show how our framework is useful for deductive reasoning, and we suggest that it might provide a meeting ground between decision theoretic and deductive methods for multiagent reasoning.

Introduction

In this paper, we develop a preliminary framework for recursive modeling in multiagent situations based on logics of knowledge and belief. If an intelligent agent is engaged in an interaction with another agent, it will have to reason about the other's knowledge, beliefs, and view of the world in order to interact with the other agent effectively. Reasoning about knowledge and belief is thus important not only for philosophy, but also for distributed and multiagent systems.

Presently, there seems to be no consensus among philosophers and AI researchers as to what particular properties concepts like knowledge and belief should have. As a result, a whole family of logics have appeared, with basically the same formalism but with differing sets of axioms. We summarize this formalism in the first section.
After this, we go on to extend this formalism to the multiple agent case in a way that can be used for recursive modeling.

*This research was supported, in part, by the Department of Energy under contract DG-FG-86NE37969, and by the National Science Foundation under grant IRI-9015423 and PYI award IRI-9158473.

We describe how our framework can define propositional attitudes of agents in a way that provides for a natural distinction between the concepts of knowledge and belief. The intuition that we are able to formalize, suggested by Hintikka [Hintikka, 1962], is that statements about knowledge, unlike statements about belief, contain an element of commitment to this knowledge on the side of the agent making the statement. We then compare our framework to other approaches and discuss the practical issues of creating the recursive hierarchy of models. Finally, we outline our framework's application to nested deductive reasoning using as an example the Three Wise Men puzzle, and we suggest how it might also be applied to coordination and communication using decision theory.

Classical Model

This section largely follows the presentation in [Halpern and Moses, 1990; Halpern and Moses, 1991]. The classical model for reasoning about knowledge and belief is the possible worlds model. The basic intuition here is that an agent has a limited view of the world and cannot be sure in what state the world really is. Hence, there are several states of the world that an agent considers possible, called possible worlds.

A formal tool for reasoning about possible worlds is a Kripke structure. A Kripke structure, M, for an agent is (S, π, R), where S is a set of possible worlds, π is a truth assignment to the primitive propositions for each possible world (i.e., π(s)(p) ∈ {true, false} for each state s ∈ S and each primitive proposition p), and R is a binary relation on the possible worlds, called the possibility relation.
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

The truth assignment π can be used to define a relation, ⊨, between a proposition, p, and a possible world, s, of a structure M, as follows: (M, s) ⊨ p iff π(s)(p) = true. Let us examine an example (based on [Halpern and Moses, 1991]). Let p denote a primitive proposition; then π(s)(p) = true describes the situation in which p holds in state s of structure M. Let us take an example set of possible worlds consisting of s, t and u: S = {s, t, u}. Assume that proposition p is true in states s and u but false in t (so that π(s)(p) = π(u)(p) = true and π(t)(p) = false) and that a particular agent cannot tell the states s and t apart, so that R = {(s, s), (s, t), (t, s), (t, t), (u, u)}. This situation can be diagrammed, as in Figure 1, where the possibility relation between worlds is depicted as a vector, as between s and t, denoting that in the state s the agent considers state t possible.

Figure 1: Diagram of a Kripke Structure

Now, in state s, the agent is uncertain whether it is in s or in t, and since p holds in s and does not in t, we can conclude that the agent does not know p. In state u, the agent can be said to know that p is true, since the only state accessible from u is u itself and p is true in u. Considerations of this sort lead us to the modal operator K, denoting knowledge. According to the classical definition, an agent in some state is said to know a fact if this fact is true in all of the worlds that the agent considers possible.

In multiagent situations different agents might have different possibility relations. The model proposed in [Halpern and Moses, 1990; Halpern and Moses, 1991] for the case of multiple agents, named 1 through n, is a Kripke structure M = (S, π, R_1, R_2, ..., R_n). Thus, the possibility relation of each of the agents is included directly in M.
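The classical definition above can be illustrated with a short sketch. This is our own minimal encoding (not from the paper) of the example structure with worlds s, t, u, where p holds in s and u but not in t, and the agent cannot distinguish s from t:

```python
# Classical single-agent Kripke structure M = (S, pi, R) from the example:
# p is true in s and u, false in t; the agent cannot tell s and t apart.

S = {"s", "t", "u"}
pi = {"s": {"p": True}, "t": {"p": False}, "u": {"p": True}}      # truth assignment
R = {("s", "s"), ("s", "t"), ("t", "s"), ("t", "t"), ("u", "u")}  # possibility relation

def knows(world, prop):
    """Classical K: the agent knows prop in `world` iff prop is true in
    every world it considers possible from `world`."""
    return all(pi[w2][prop] for (w1, w2) in R if w1 == world)

print(knows("s", "p"))  # False: t is accessible from s and p fails in t
print(knows("u", "p"))  # True: only u is accessible from u and p holds there
```

Running this reproduces the conclusion drawn in the text: the agent does not know p in state s, but does know p in state u.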
While a straightforward extension of the single agent case, we have found this representation problematic when one wants to consider agents reasoning about other agents. Specifically, we would like the possibility relation that agent 1 ascribes to agent 2 to potentially differ from agent 2's true possibility relation. Thus, each agent might have many possibility relations associated with it, depending on whose perspective is being considered. As we detail next, our own approach for dealing with multiple agents involves a nesting, rather than an indexing, of possible worlds that permits different viewpoints to coexist and that allows a distinction between the concepts of knowledge and belief. After describing our approach in the next section, we compare it to related work in more detail.

Personal Recursive Kripke Structures

Our formalism views an agent's knowledge from its own, personal perspective so that the formalism can be used by an agent when interacting with others. Let us consider a set of n interacting agents, named 1 through n. Without loss of generality, we will consider the situation from the perspective of agent 1, which is in a world about which it has limited information. We will call the representation of this information that agent 1 has its view. Based on its view of the world, agent 1 can form a set of possible worlds that are consistent with its limited view, and represent them in its Kripke structure. Each of these worlds can be described by a set Φ of primitive propositions p.

Since there are other agents around, the agent should wonder about their views of the world. In the formalism we are proposing, agent 1 forms its model of the other agents' views in each of the worlds it considers possible. Thus, each of the possible worlds s_k is augmented with structures representing the knowledge the agent attributes to each of the other agents in this world. It is natural to postulate that these structures themselves be Kripke structures.
We thus get a recursively nested Kripke structure of agent 1: RM^1 = (S, π, R), where the elements of S are augmented possible worlds: s̄_k = (s_k, RM_2^k, ..., RM_i^k, ..., RM_n^k). The first element, s_k, is a classical possible world described by a set of primitive propositions; the other elements are recursively defined Kripke structures of the other agents, corresponding to their limited views of the possible world s_k. Thus, RM_i^k = (S_i, π_i, R_i), in which S_i is the set of augmented possible worlds of agent i in the world s_k. In the above formulation, the π relation is, as before, the truth assignment to the primitive propositions for each possible world, s_k. The binary relation R in RM^1 is a possibility relation defined over the set of augmented possible worlds S. The truth assignment π can be used to define a binary relation, ⊨, between a proposition, p, and a possible world, s_k, of a structure RM^1, as follows: (RM^1, s_k) ⊨ p iff π(s_k)(p) = true.

The personal recursive Kripke structure RM^1 defined above can serve to define a number of concepts useful in multiagent reasoning, in a manner analogous to one used in the case of classical Kripke structures (we follow the spirit of [Hintikka, 1962; Hughes and Cresswell, 1972]). These concepts are referred to as its propositional attitudes.

Propositional Attitudes of a Single Agent

Based on its recursive Kripke structure, RM^1 = (S, π, R), agent 1 can say that it knows that p holds, written as K_1p, iff (RM^1, s_k) ⊨ p for s_k in all s̄_k ∈ S, i.e., if p is true in all of the possible worlds consistent with agent 1's view of the world. In these circumstances, agent 1 can also say that it believes p, and thus, there is no distinction between the concepts of knowledge and belief when agent 1 reasons or communicates facts about its own view of the world. It is, then, the same for agent 1 to assert "I know p" as to assert "I believe p".
Our convention of equating the concepts of knowledge and belief in this case differs from some of the established conventions that differentiate between these two concepts based on the properties of the possibility relation R. In particular, "knowledge" is sometimes reserved only for assertions that an agent makes that are true in the actual world (as assessed by some correct and omniscient agent). The uniqueness of our approach stems from the fact that we consider the agent's knowledge from its own, personal perspective. Because the real world, and its complete description, cannot be known with certainty by the agent, it cannot be sure that the real world is among the worlds that it considers possible. Consequently, there is no way that the agent can tell its knowledge and belief apart.

Gmytrasiewicz and Durfee 629

In the remainder of this paper, therefore, we will use K_1p to denote agent 1's making a statement, p, based on its Kripke structure, RM^1, with the understanding that K_1p is always equivalent to B_1p. Later, however, we will show how the difference between knowledge and belief arises intuitively when an agent makes assertions about other agents. Now we continue with the propositional attitudes of a single agent:

Agent 1 can say that it knows whether p holds, written as W_1p, iff K_1p or K_1¬p.

Further, agent 1 can say that the proposition p is possible, written as P_1p, iff ∃s̄_k ∈ S such that (RM^1, s_k) ⊨ p, i.e., p is true in at least one of the worlds consistent with agent 1's view of the world.

And, agent 1 can say that the proposition p is contingent, written as C_1p, iff ¬W_1p, i.e., agent 1 does not know whether p.

Propositional Attitudes of Other Agents

To reason about the knowledge and beliefs of others, agent 1, with its structure RM^1 = (S, π, R), can inspect the structures of the other agents, RM_i^k = (S_i, π_i, R_i), in its augmented possible worlds. Thus, this kind of reasoning always pertains to what agent 1 thinks other agents are thinking.
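The four single-agent attitudes defined above can be sketched directly as quantifiers over agent 1's possible worlds. The representation below is our own (worlds as dictionaries of propositions), not the paper's:

```python
# Single-agent attitudes K_1, W_1, P_1, C_1 over the set of possible
# worlds consistent with agent 1's view (encoding is ours).

possible_worlds = [{"p": True, "q": True}, {"p": True, "q": False}]

def K(prop):   # agent 1 knows prop: prop is true in all possible worlds
    return all(w[prop] for w in possible_worlds)

def P(prop):   # prop is possible: prop is true in at least one world
    return any(w[prop] for w in possible_worlds)

def W(prop):   # agent 1 knows whether prop: K(prop) or K(not prop)
    return K(prop) or all(not w[prop] for w in possible_worlds)

def C(prop):   # prop is contingent: agent 1 does not know whether prop
    return not W(prop)

print(K("p"), W("p"), C("p"))  # True True False
print(K("q"), P("q"), C("q"))  # False True True
```

Here p holds in every world agent 1 considers possible, so agent 1 knows p (and knows whether p); q varies across worlds, so q is possible but contingent.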
A number of propositional attitudes describing other agents can be defined as follows.

Belief. Agent 1 can say that agent i believes p in a possible world s_k, written K_1B_i^{s_k}p, iff (RM_i^k, s_{k,l}) ⊨ p for s_{k,l} in all s̄_{k,l} ∈ S_i, i.e., p holds in all of the worlds that agent 1 thinks that agent i considers possible in s_k. Agent 1 can say that agent i believes a fact p, denoted as K_1B_ip, iff K_1B_i^{s_k}p for s_k in all s̄_k ∈ S, i.e., if agent i believes p in all of agent 1's possible states of the world. The definitions of possibility and contingency for other agents can be constructed analogously to belief.

Note that the definitions of the propositional attitude of belief of agent i above did not contain any reference to what agent 1 knows (believes) of the world. Therefore, we can say that if agent 1 makes statements about agent i's beliefs, the propositional attitude of agent 1 would not be revealed. This can be contrasted with agent 1 speaking about agent i in terms of knowledge, as we now see.

Knowledge. Agent 1 can say that agent i knows that p holds in possible world s_k, written as K_1K_i^{s_k}p, iff (RM^1, s_k) ⊨ p, i.e., p holds in s_k, and (RM_i^k, s_{k,l}) ⊨ p for s_{k,l} in all s̄_{k,l} ∈ S_i, i.e., p holds in all of the worlds that agent 1 thinks that agent i considers consistent with s_k. Agent 1 can say that agent i knows a fact p, written as K_1K_ip, iff K_1K_i^{s_k}p for s_k in all s̄_k ∈ S, i.e., agent i knows p in all of agent 1's possible states of the world. Let us note that the above also implies that agent 1 knows p.

Analogously, agent 1 can say that agent i knows whether a fact p holds in possible world s_k, written as K_1W_i^{s_k}p, iff K_1K_i^{s_k}p or K_1K_i^{s_k}¬p. And agent 1 can say that agent i knows whether a fact p holds, written K_1W_ip, iff K_1K_ip or K_1K_i¬p.

Relations Between Knowledge and Belief

It is important to note that the definitions agent 1 uses to characterize agent i in terms of knowledge involve a comparison between i's view of the world and agent 1's view.
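The contrast between the belief and knowledge definitions can be made concrete with a small sketch (our own encoding, with two agents and a single proposition). Each of agent 1's augmented worlds pairs a classical world with the worlds agent 1 thinks agent 2 considers possible there; belief about agent 2 quantifies only over the nested worlds, while knowledge about agent 2 additionally requires p to hold in the parent world itself:

```python
# Augmented worlds of agent 1: (parent world, agent 2's nested worlds
# as attributed by agent 1).  Encoding is ours, not the paper's.
S = [
    ({"p": True},  [{"p": True}]),   # s1: agent 2's view happens to be right
    ({"p": False}, [{"p": True}]),   # s2: agent 2's view is wrong about p
]

def K1_B2(prop):
    # K_1 B_2 p: in every augmented world, prop holds in all nested worlds
    return all(all(w[prop] for w in nested) for (_, nested) in S)

def K1_K2(prop):
    # K_1 K_2 p: additionally, prop must hold in each parent world
    return all(parent[prop] and all(w[prop] for w in nested)
               for (parent, nested) in S)

print(K1_B2("p"))  # True: agent 1 can say "agent 2 believes p"
print(K1_K2("p"))  # False: p fails in s2, so agent 1 cannot say "agent 2 knows p"
```

The gap between the two outputs is exactly the element of commitment discussed in the text: asserting K_1K_2p commits agent 1 to p itself, while asserting K_1B_2p does not.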
Thus, an agent that makes statements about the knowledge of other agents expresses its own commitment to this knowledge. Statements about others' beliefs, on the other hand, do not involve this commitment, and the notions of knowledge and belief differ. Our definitions, therefore, capture the "knowledge as a justified, true belief" paradigm of modal logic.

To investigate the relation between the concepts of knowledge and belief a little further, let us introduce some helpful notation. We will call the relation between the possible worlds s_k in the augmented worlds s̄_k = (s_k, ..., RM_i^k, ...) belonging to the set S of structure RM^1 = (S, π, R), and the possible worlds s_{k,l} in the augmented worlds s̄_{k,l} belonging to the set S_i of the structure RM_i^k = (S_i, π_i, R_i), a subordination¹ relation for agent i in the world s_k: Sub_i^{s_k} = {(s_k, s_{k,l})}. The worlds s_k and s_{k,l} that are connected via a subordination relation will be called a parent world and a child world, respectively. Thus, the subordination relation connects parent worlds to children, that themselves can be parents of other worlds, and so on.

Theorem 1. If the subordination relation in a personal recursive Kripke structure, RM^1, is reflexive for all agents, then the concepts of knowledge and belief that agent 1 uses are equivalent.

Proof: The definitions of K_1K_i^{s_k}p and K_1B_i^{s_k}p in the previous section ensure that K_1K_i^{s_k}p implies K_1B_i^{s_k}p. To establish the implication in the other direction, note that, if the subordination relation is reflexive, then the world s_k is also one of the s_{k,l} worlds. Since K_1B_i^{s_k}p demands that (RM^1, s_{k,l}) ⊨ p for all s_{k,l} worlds, it follows that (RM^1, s_k) ⊨ p. Thus, K_1B_i^{s_k}p implies K_1K_i^{s_k}p. The equivalence of K_1K_i^{s_k}p and K_1B_i^{s_k}p for all of the worlds s_k ensures the equivalence of K_1K_ip and K_1B_ip.

¹Our choice of this term is motivated by such relations investigated in [Hughes and Cresswell, 1972; Hughes and Cresswell, 1984].
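Theorem 1 can be sanity-checked mechanically on small structures. The sketch below (ours, not part of the paper's proof) enumerates tiny two-world structures over a single proposition in which the subordination relation is forced to be reflexive, i.e., each parent world is among the worlds nested under it, and checks that the knowledge and belief attitudes coincide; it also shows a non-reflexive counterexample where they differ:

```python
from itertools import product

def k1_b(structure, prop):
    # K_1 B_i p: prop holds in all nested worlds of every augmented world
    return all(all(w[prop] for w in nested) for (_, nested) in structure)

def k1_k(structure, prop):
    # K_1 K_i p: additionally, prop holds in every parent world
    return all(parent[prop] and all(w[prop] for w in nested)
               for (parent, nested) in structure)

worlds = [{"p": True}, {"p": False}]

# Reflexive subordination: each nested set contains its parent world.
for p1, p2, extra1, extra2 in product(worlds, repeat=4):
    structure = [(p1, [p1, extra1]), (p2, [p2, extra2])]
    assert k1_k(structure, "p") == k1_b(structure, "p")

# Non-reflexive case: the parent world need not be among the children,
# so belief can hold without knowledge.
s2 = [({"p": False}, [{"p": True}])]
print(k1_b(s2, "p"), k1_k(s2, "p"))  # True False
```

All reflexive cases sampled agree, mirroring the proof, while the non-reflexive case exhibits exactly the belief/knowledge gap the theorem rules out.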
Restated in terms of views, Theorem 1 says that, if an agent can be sure that another agent's view of any possible world is guaranteed to be correct, then the distinction between knowledge and belief ceases to exist. It is, therefore, the possibility of the other agent's view being an incorrect description of a world, as opposed to being only a partial description of it, that allows for the intuitively appealing distinction between knowledge and belief.

Given that our formulations have shown that distinctions between knowledge and belief arise when one agent reasons about another, we might ask what happens when an agent treats itself in this way. An agent doing so amounts to introspection. We call the corresponding concepts introspective knowledge, K_1K_1p, and introspective belief, K_1B_1p.

To enable introspection, we can formally modify the personal recursive Kripke structure, RM^1 = (S, π, R), of agent 1, and include in an augmented world, s̄_k, a structure, RM_1^k, representing the information contained in the view agent 1 would have in each of its possible worlds, s_k. The augmented worlds contained in S are now: s̄_k = (s_k, RM_1^k, RM_2^k, ..., RM_i^k, ..., RM_n^k). The fact that RM_1^k is to represent the view the agent would have in s_k suggests that RM_1^k describe a portion of RM^1 visible² from s_k. If this is taken to be the case, we obtain the following theorems:

Theorem 2. If the accessibility relation, R, in the personal recursive Kripke structure, RM^1 = (S, π, R), of agent 1 is reflexive, then the introspective knowledge of a proposition, K_1K_1p, is equivalent to introspective belief, K_1B_1p, for this agent.

Proof: Introspective knowledge, K_1K_1p, clearly implies introspective belief, K_1B_1p. If R is reflexive, then, using the notation above, every world s_k belongs to the set of s_{k,l} worlds in RM_1^k. Introspective belief, K_1B_1p, demands that p be true in all worlds s_{k,l}, and thus also in s_k, for every such s̄_k.
For reflexive R, therefore, K_1B_1p implies K_1K_1p.

Theorem 3. If the accessibility relation, R, in the personal recursive Kripke structure, RM^1 = (S, π, R), of agent 1 is universal, then the introspective knowledge and introspective belief of an agent are equivalent to the agent's knowledge (and, of course, belief).

Proof: Introspective knowledge, K_1K_1p, clearly implies knowledge, K_1p. Note that, for R universal, the set of possible worlds, s_k, in the set S is the same as the set of the worlds, s_{k,l}, accessible from s_k. Therefore, knowledge of a proposition, K_1p, demanding that p hold in all of the worlds s_k, implies that p holds in all of the worlds s_{k,l}. For a universal R, therefore, knowledge, K_1p, implies introspective knowledge, K_1K_1p. In this case, according to Theorem 2, introspective knowledge is also equivalent to introspective belief.

²We say that the world s_1 sees the world s_2 if (s_1, s_2) ∈ R.

The theorems above provide a certain amount of guidance as to the properties of relations holding among the possible worlds that one can reasonably postulate in practical situations. It seems that it may be desirable to be able to make a distinction between the concepts of knowledge and belief used by agents to describe other agents. Thus, we should not demand that the subordination relation, holding between the parent and the children possible worlds, be reflexive. On the other hand, it seems desirable to demand that the accessibility relations, R, holding among the sibling worlds themselves, be not only reflexive, but also universal. This property ensures that the introspective knowledge of an agent will be no different than its knowledge.

The nonreflexive subordination relation, together with a universal relation among the sibling worlds in a personal recursive Kripke structure, provides it with a unique composition.
It consists of clusters of the sibling worlds, interconnected via a universal accessibility relation, overlaid over a tree whose branches consist of the monodirectional subordination relation³. Of course, while the above composition does provide for a reasonable set of properties, other models may be equally interesting. Some of them might not provide for equivalence among introspective knowledge, introspective belief and knowledge, and it remains to be investigated whether they correspond to any realistic situations.

³Our nomenclature is again motivated by some of the models analyzed in [Hughes and Cresswell, 1984].

Comparison to Related Work

As we mentioned before, the Kripke structure suggested for n agents in [Halpern and Moses, 1991] is a tuple M = (S, π, R_1, R_2, ..., R_n). Unlike our definition, the possibility relation of each of the agents is included directly in M. Important consequences of this are revealed when the agents' knowledge about each other's knowledge is considered. It is suggested that, in order for the agents to be able to consider somebody else's knowledge, they have to have access to their possibility relation. So, in M = (S, π, R_1, R_2, ..., R_n), agent 1 can peek into R_2 and claim what agent 2 knows or not. Also, in order for agent 1 to find out what agent 2 knows about agent 1, the possibility relation R_1 has to be consulted, which is the one summarizing agent 1's knowledge itself. In general, one can say that viewing the knowledge of the agents via the structure M = (S, π, R_1, R_2, ..., R_n) amounts to taking an external view of their knowledge, in that it is an external observer that lists the agents' possible worlds in S and summarizes their knowledge about the real world in relations R_i. It is then counterintuitive to postulate that the agents themselves can inspect the possibility relation of the other agents.
It is also surprising that the agents, wondering how others view them, look into their own possibility relations. Our approach avoids the above drawbacks; the personal recursive Kripke structure represents the information an agent has about the world from its own perspective, and the information it has about the other agents' knowledge is represented as a model the agent has of the others. The model of the other agents may contain information the original agent has about how it is itself modeled by the other agents, but this may be quite different from the information the initial agent actually has.

The idea that the recursive nesting of knowledge levels is necessary for analyzing the interactions in multiagent systems has been present for quite a while in the area of game theory, and recently received attention in the AI literature, for instance from Fagin and others in [Fagin et al., 1991]. The most obvious difference between their approach and ours is that we use suitably modified Kripke structures, while the authors of [Fagin et al., 1991], after noting that the classical extension of the Kripke structures to the multiagent case (mentioned above) is inadequate, develop a complementary concept of knowledge structures. The motivation and basic intuitions behind knowledge structures are very similar to ours. Thus, knowledge structures represent recursive, potentially infinite, nesting of information that agents have about other agents, just as our recursive Kripke structures do. An important distinction is that knowledge structures, as defined in [Fagin et al., 1991], do not assume a personal perspective from an agent's point of view; they instead contain information of all of the agents in the environment, in addition to the description of the environment itself, and thus amount to an external view of the multiagent situation.
While the authors provide for the definition of an individual agent's view of the knowledge structure, which should correspond to our personal Kripke structure, its function is unspecified. Another difference is that we are able to provide a clear and intuitive distinction between the concepts of knowledge and belief within our single recursive framework. Our motivation here is very similar to one presented in [Shoham and Moses, 1989]. This work, although using quite a different approach, also attempts to derive the connections between knowledge and belief within a single framework.

The relation between the introspective knowledge and knowledge of agents has received attention in the AI literature, for example from Konolige in [Konolige, 1986], who provides a discussion of some of the properties of introspection: fulfillment and faithfulness. Although Konolige does not employ possible worlds semantics in his considerations, it seems that these properties can be arrived at using our formalism. Establishing further relations between these approaches is a goal of our future research. The issues of recursive nesting of beliefs are also of interest in [Wilks et al., 1991; Wilks and Bien, 1983], but we find this other work most relevant to the heuristic construction of the recursive models, described in the next section.

Construction of Kripke Structures

The personal recursive Kripke structure provides a formal model that the agents engaged in a multiagent interaction can use to reason about the other agents' knowledge and belief. Within this framework they can construct models of the other agents' knowledge. As we mentioned before, the concept that might be useful for constructing the models is an agent's view of the world. A view is essentially a partial description of the world that an agent has, given its knowledge base, its location in the environment, sensors it has, etc.
To illustrate what we mean by a view, let us consider an example of two agents, 1 and 2, facing each other. Imagine that each of the agents is wearing a hat and can see the hat of the other agent, while being unable to see its own hat. The problem the agents are facing is to determine whether their own hat is black or white. Assume that both hats are black, and that it is known that hats can be only black or white. Agent 1's view of this situation may be a partial description of the two hats; agent 1 knows that agent 2 is wearing a black hat, but the color of its own hat is unknown to agent 1 and might be represented by a "?", for instance. In a frame-like language this information may be represented as slots with values assigned to them:

Agent 1's view:
  Agent-1-hat - ?
  Agent-2-hat - black

Out of its incomplete description of the world, agent 1 can construct two possible worlds that are consistent with its view:

Possible World 1:          Possible World 2:
  Agent-1-hat - black        Agent-1-hat - white
  Agent-2-hat - black        Agent-2-hat - black

Agent 1 can also construct agent 2's views of these worlds:

Agent 2's view of PW1:     Agent 2's view of PW2:
  Agent-1-hat - black        Agent-1-hat - white
  Agent-2-hat - ?            Agent-2-hat - ?

These views lead, in turn, to agent 2's possible worlds in each case, as depicted in Figure 2, where the agents' views were also included.

Figure 2: Recursive Kripke Structure of Agent 1 (agent 1's view: 1-hat: ?, 2-hat: B; its possible worlds PW1 (1-hat: B, 2-hat: B) and PW2 (1-hat: W, 2-hat: B); and the worlds agent 1 thinks agent 2 considers possible: PW11 (1-hat: B, 2-hat: B), PW12 (1-hat: B, 2-hat: W), PW21 (1-hat: W, 2-hat: B), PW22 (1-hat: W, 2-hat: W))

Let us note a few things about constructing views and possible worlds. First, agent 1 chose to describe the world in terms of primitive propositions denoting the agents' hats being black or white. The reason it chose these particular propositions is that these are the relevant ones for this situation.
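The step from a partial view to the set of consistent possible worlds can be sketched mechanically. The encoding below is ours (slot names and the "?" convention follow the frame-like notation above):

```python
# Expanding a partial view (with "?" slots) into the complete possible
# worlds consistent with it, as in the hats example.
from itertools import product

COLORS = ["black", "white"]

def worlds_from_view(view):
    """Expand every '?' slot into all colors, yielding the complete
    worlds consistent with the view."""
    unknown = [slot for slot, v in view.items() if v == "?"]
    for combo in product(COLORS, repeat=len(unknown)):
        world = dict(view)
        world.update(zip(unknown, combo))
        yield world

view_1 = {"agent1_hat": "?", "agent2_hat": "black"}   # agent 1's view
for w in worlds_from_view(view_1):
    print(w)
# yields the two worlds PW1 and PW2: agent 1's hat black or white,
# agent 2's hat black in both
```

Applying the same expansion to agent 2's views of PW1 and PW2 reproduces the nested worlds PW11 through PW22 of Figure 2.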
Thus, agent 1 did not include propositions describing the number of hairs on their heads, because this information is clearly irrelevant. What made the colors of the hats relevant and the number of hairs irrelevant is, in the case of this example, clear in the statement of the problem they face, along with the fact that there is no apparent connection between the number of hairs and hats being black or white.

Now, let us assume for a minute that agent 1 received confidential information stating that the hat of the agent with more hairs is black. In this case, it would be advisable for agent 1 to include the information about the number of hairs in its view. This information, then, would find its way into agent 1's possible worlds, but clearly agent 1 would not be justified in including this information in agent 2's views, since the information agent 1 received was confidential and agent 2 is not aware of it.

The problems of relevance and awareness are difficult issues that have to be dealt with when one engages in recursive modeling. In the simple example above, they were easily and intuitively resolved, but in real life situations things may be more difficult. In these cases, strong heuristics for properly determining relevance and awareness are needed. It seems that the work by Yorick Wilks and his colleagues [Wilks et al., 1991; Wilks and Bien, 1983] on belief ascription addresses these issues. They propose a number of heuristics, including a relevance heuristic, a percolation heuristic, and pushing down environments. They capture the intuitive assumption that the other agents are aware of everything that I am aware of, unless my beliefs are atypical or confidential (as was the confidential information above).
Let us also note that the construction of the personal recursive Kripke structure described above does not bottom out, and the recursion describing the nesting of the knowledge and belief seems to go on forever. While this is an uncomfortable prospect, in the next section we will show that in many practical applications the agents can reach useful conclusions with the recursive Kripke structures cached down to a finite level.

Applications of Personal Kripke Structures

There are possibly a number of ways the information contained in a personal recursive Kripke structure can be used in multiagent reasoning. A class of problems that can be tackled deductively using this information includes the Three Wise Men problem, together with similar ones: Muddy Children, Cheating Husbands, etc., described in [Moses et al., 1983]. For brevity, we will sketch the solution of a scaled-down version of the Three Wise Men puzzle, easily generalizable to the rest of the problems. The Two Wise Men puzzle describes two "wise" agents that, as described before, wear hats so that they can see the other's hat but not their own. The ruler of the kingdom the agents live in, intent on testing their wisdom, announces: "At least one of you is wearing a black hat". Then, he asks agent 2: "Do you know whether your hat is black or white?". Agent 2's answer is "No". The King then asks agent 1 the same question, and 1's answer is "My hat is black".

To trace the reasoning of agent 1, its recursive Kripke structure, developed down to the second level of modeling, will be needed. We depict it in Figure 2, showing the state of knowledge of agent 1 before the King's announcement. PW1 and PW2 stand for the two relevant worlds agent 1 considers possible. They are described by propositions stating that the hat of agent 2 is black, p (or white, ¬p), and that the hat of agent 1 is black, q (or white, ¬q).
In each of these worlds, the views of agent 2 are created by agent 1, and these lead to two worlds agent 1 thinks agent 2 considers possible, described by the same set of propositions.

After the King announces "At least one of you is wearing a black hat", the state of knowledge of agent 1 changes, since agent 1 knows that agent 2 considers impossible all of the worlds in which both hats are white. By deduction, PW22 is impossible. Thus, in the possible world in which the hat of agent 1 is white, PW2, agent 2 knows that its own hat is black, K_1K_2^{PW2}p, which also implies that it knows whether p: K_1W_2^{PW2}p. In the possible world in which the hat of agent 1 is black, PW1, agent 2 does not know whether its hat is black or white: K_1¬W_2^{PW1}p. In this situation, the answer of agent 2 that it does not know whether its hat is black or white solves the puzzle for agent 1, since it deductively identifies PW2 as impossible and PW1 as the only possible world.

The solution of the Three Wise Men puzzle involves the use of the recursive structure developed down to the third level, while n muddy children require n levels, and the deduction is analogous. In the example problems discussed above, the crucial part of their solution is the definitions of propositional attitudes of other agents in various possible worlds. These concepts enable the reasoner to move upward in the tree of recursive models and deductively eliminate some of the possible worlds as new information warrants.

We have previously studied similar propagation of information upward in a recursive tree of payoff matrices in [Gmytrasiewicz et al., 1991a; Gmytrasiewicz et al., 1991b]. In fact, the recursive hierarchy of payoff matrices is a personal recursive Kripke structure, with the information describing the possible worlds cast in the form of payoff matrices. In this work, we applied decision and game theory to facilitate coordination, cooperation, and communication among autonomous agents.
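The Two Wise Men deduction amounts to elimination over the nested worlds of Figure 2, and can be sketched as follows. The encoding is ours: worlds are (hat1, hat2) pairs, the King's announcement prunes agent 2's nested worlds, and agent 2's "No" answer then eliminates PW2 at the top level:

```python
# Two Wise Men by elimination over a two-level recursive structure
# (our own encoding of Figure 2).  Agent 1 sees agent 2's black hat.

worlds_1 = [("black", "black"), ("white", "black")]   # PW1, PW2

def worlds_2(hat1):
    # in a world where agent 1's hat is hat1, agent 2 sees hat1 only
    return [(hat1, "black"), (hat1, "white")]

def at_least_one_black(w):
    return "black" in w          # the King's announcement

surviving = []
for pw in worlds_1:
    nested = [w for w in worlds_2(pw[0]) if at_least_one_black(w)]
    two_knows = all(w[1] == nested[0][1] for w in nested)  # would 2 know its hat?
    if not two_knows:            # agent 2 answered "No"
        surviving.append(pw)

print(surviving)   # [('black', 'black')]: agent 1 concludes its hat is black
```

In PW2 the announcement leaves agent 2 a single world, so agent 2 would have answered "Yes"; the "No" answer therefore leaves PW1 as the only possible world, exactly as in the text.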
Unlike the deductive reasoning used in the Three Wise Men puzzle, our previous work employed the intentionality principle and expected utility calculations. We have noticed that the two approaches actually complement each other. The decision-theoretic modeling is built within a formal framework of reasoning about knowledge and belief of other agents. Since the deductive powers of this formalism are capable of dealing only with a limited spectrum of problems (in the Three Wise Men puzzle family), they are complemented with the capabilities of decision-theoretic reasoning when it comes to predicting other agents' actions and to effective communication. Moreover, the decision-theoretic calculations require that probabilities be assigned to possible worlds [Halpern, 1989]. Our ongoing work includes tying these two approaches together more formally.

Conclusion

We have developed a preliminary framework based on a possible worlds semantics, modeled by the personal recursive Kripke structure, that autonomous agents can use to organize their knowledge. Our model can serve as a semantic model for a logic of knowledge and belief, creating a natural and intuitive distinction between these concepts. This logic can be used to deductively reason about the knowledge and beliefs of the other agents, as in the Three Wise Men puzzle. We suggest that our model can also be used as a basis for the type of decision-theoretic reasoning in multiagent environments that we have found useful for studying coordination, cooperation, and communication. Our future work will address extending the logical framework (axiomatization, completeness, consistency, and complexity of decision procedures), and will explore the relationships between our deductive and decision-theoretic recursive models.
Acknowledgments

The authors would like to thank Yoav Shoham, Joseph Halpern, and the anonymous reviewers for their helpful comments on many aspects of this work.

References

[Fagin et al., 1991] Ronald Fagin, Joseph Y. Halpern, and Moshe Y. Vardi. A model-theoretic analysis of knowledge. Journal of the ACM, (2):382-428, April 1991.

[Gmytrasiewicz et al., 1991a] Piotr J. Gmytrasiewicz, Edmund H. Durfee, and David K. Wehe. A decision-theoretic approach to coordinating multiagent interactions. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 62-68, August 1991.

[Gmytrasiewicz et al., 1991b] Piotr J. Gmytrasiewicz, Edmund H. Durfee, and David K. Wehe. The utility of communication in coordinating intelligent agents. In Proceedings of the National Conference on Artificial Intelligence, pages 166-172, July 1991.

[Halpern and Moses, 1990] Joseph Y. Halpern and Yoram Moses. A guide to the modal logics of knowledge and belief. Technical Report 74007, IBM Corporation, Almaden Research Center, 1990.

[Halpern and Moses, 1991] Joseph Y. Halpern and Yoram Moses. Reasoning about knowledge: a survey circa 1991. Technical Report 50521, IBM Corporation, Almaden Research Center, 1991.

[Halpern, 1989] Joseph Y. Halpern. An analysis of first-order logics of probability. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1375-1382, August 1989.

[Hintikka, 1962] Jaakko Hintikka. Knowledge and Belief. Cornell University Press, 1962.

[Hughes and Cresswell, 1972] G. E. Hughes and M. J. Cresswell. An Introduction to Modal Logic. Methuen and Co., Ltd., London, 1972.

[Hughes and Cresswell, 1984] G. E. Hughes and M. J. Cresswell. An Introduction to Modal Logic. Methuen and Co., Ltd., London, 1984.

[Konolige, 1986] Kurt Konolige. A Deduction Model of Belief. Morgan Kaufmann, 1986.

[Moses et al., 1983] Y. Moses, D. Dolev, and J. Y. Halpern.
Cheating husbands and other stories: a case study in common knowledge. Technical report, IBM, Almaden Research Center, 1983.
[Shoham and Moses, 1989] Yoav Shoham and Yoram Moses. Belief as defeasible knowledge. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1168-1172, Detroit, Michigan, August 1989.
[Wilks and Bien, 1983] Y. Wilks and J. Bien. Beliefs, points of view, and multiple environments. Cognitive Science, 7:95-119, April 1983.
[Wilks et al., 1991] Y. Wilks, J. Barden, and J. Wang. Your metaphor or mine: Belief ascription and metaphor interpretation. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 945-950, August 1991.
Generating

Kevin D. Ashley and Vincent Aleven
University of Pittsburgh
Intelligent Systems Program, School of Law, and Learning Research and Development Center
Pittsburgh, Pennsylvania 15260

Abstract

We identify and illustrate five important kinds of Dialectical Examples, standard configurations of cases which enable an arguer to justify rhetorical assertions effectively by example. Our computer program generates Argument Contexts, collections of cases that instantiate Dialectical Examples from an on-line database of cases according to a user's general specifications. The Argument Context generation program provides a human or automated tutor a stock of Dialectical Examples to teach novice advocates (first year law students) how to recognize, carry out and respond to the associated rhetorical moves. Although generating such examples is very hard for humans even when dealing with small numbers of cases, our program generates and organizes such examples quickly and effectively. In a preliminary experiment, we employed program-generated Argument Contexts manually to teach basic argument skills to first year law students with good results. Our ability to define such complex examples declaratively in terms of logical expressions of Loom concepts and relations affords a number of advantages over previous work.

Topic: argument from experience, law, education

Introduction

When an advocate justifies an assertion by referring to past cases or examples, she employs a variety of standard techniques involving recognized configurations of cases to make her rhetorical points effectively. We call such configurations "Dialectical Examples" and identify five important types that advocates employ as building blocks of more complex arguments.
While Dialectical Examples are valuable assets for expert advocates and teachers of argumentation skills, finding the right configurations of cases is hard, even when one uses current computerized information retrieval services. There are tight constraints on how the cases must be related and many combinations of cases to consider. Adopting a methodological viewpoint inspired by Clancey [Clancey1983], we have reexamined previous work on HYPO, a case-based legal expert system [Ashley1990, Ashley1991], to discover how to reorganize the knowledge to support a tutoring system that can teach novices to analyze problems and construct legal arguments. The tutoring task requires making explicit certain information that was previously implicit: an expert's knowledge of how to construct, employ, and respond to Dialectical Examples in support of argument positions.

*This work is supported by a NSF Presidential Young Investigator Award and a grant from the National Center for Automated Information Retrieval.

We have built a program that efficiently generates "Argument Contexts", graph-like configurations of cases that instantiate the five types of Dialectical Examples, according to the user's specifications. The program queries an argumentation knowledge base, implemented in Loom [MacGregor1988], comprising definitions for argumentation concepts and representations of aspects of 26 legal cases. We have conducted a preliminary experiment to evaluate whether program-generated Argument Contexts can be used to advantage in teaching basic argument moves to first year law students. This work advances AI research on example generation and argumentation [Rissland and Soloway1980, Rissland et al.1984, Suthers and Rissland1988, McGuire et al.1981, McCarty and Sridharan1981].
Argument Context generation facilitates teaching classification concepts (such as legal claims, defined below) which are not definable neatly in terms of necessary and sufficient conditions, unlike those dealt with by Collins and Winston [Collins and Stevens1982, Winston1975], and where the instances do not support the construction of isomorphic analogical mappings (see, e.g., [Gentner1983]).

Dialectical Examples

Arguing by analogy to past cases or examples can be modeled in terms of cases, outcomes and factors [Ashley1990, Ashley1991]. In a legal domain, cases are disputes between the plaintiff, the side that initiates a lawsuit, and the defendant. We will refer to a current dispute as the "current fact situation" or "cfs". Past cases are disputes in which a court has previously decided that a plaintiff won or lost.

654 Representation and Reasoning: Case-Based
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Figure 1: Four-case "conflict resolution" Argument Context

The plaintiff in the current fact situation asserts some legal claim against the defendant, such as breach of contract or trade secret misappropriation, the domain of this work, and makes arguments to convince a court that its claim is valid. We assume that in any dispute involving a legal claim, one may identify factors, collections of facts that strengthen the plaintiff's or the defendant's side. Typically, some of a dispute's factors favor the plaintiff and some the defendant. Although an advocate may argue that the factors in its favor outweigh any factors favoring the opponent, he needs to cite some legal authority in support of this conclusion. Unfortunately, however, the law has no authoritative scheme for assigning weights to factors. In general, it is not appropriate for a legal advocate to employ statistical arguments [Ashley and Rissland1988].
Instead, there are a variety of rhetorical techniques, employing Dialectical Examples, for justifying assertions about factors and cases.

We have identified five general kinds of Dialectical Examples that have utility in making and testing arguments, or in teaching these skills. Each involves a configuration of cases that enables an arguer to make a rhetorical point effectively by drawing conclusions from a symbolic comparison of cases. Each is illustrated below with an Argument Context generated on demand by our program from a database of cases. The different types of Dialectical Examples include:

1. Representative examples
2. Conflict Resolution examples
3. Refutational comparisons (i.e., counterexamples)
4. Ceteris paribus comparisons
5. Coherence examples

Before we introduce the various Dialectical Examples, we illustrate some of the basic elements of our model of case-based argument. Figure 1 shows an Argument Context in the form of a Claim Lattice [Ashley1990, Chapters 5, 8]. It is a graph, each node of which represents at least one case. The root node represents a current fact situation, the Motorola case, to which six factors apply, three of which favor the plaintiff (π) and three the defendant (δ). These factors all deal with a claim for trade secret misappropriation. In the Motorola case, the problem situation (or cfs), the plaintiff took certain security measures to protect its confidential information (F6), including securing nondisclosure agreements from its former employees (F4) involved in this suit. The employees left Motorola to work for the defendant corporation, allegedly bringing Motorola's secrets with them. These employees obtained substantial inducements like raises in making the move (F2), suggesting the possibility of a payoff for bringing plaintiff's secrets with them.
On the other hand, favoring the defendant, plaintiff had allowed some of its secrets to be disclosed to outsiders (F10), the allegedly secret information was known to competitors (F20) and the employee nondisclosure agreements did not make clear exactly what information the plaintiff regarded as secret (F5). In treating Motorola as a cfs, we assume that its outcome has yet to be decided (in fact the defendant won). Each of the other nodes represents a past legal case, or precedent, also represented as a set of factors, but one to which some authority, a judge, has assigned an outcome, that is, the plaintiff either won or lost its legal claim. All of the cases share at least one factor with Motorola and are ordered in terms of the inclusiveness of the sets of factors the case shares with the cfs. Data General is "more on point" relative to the cfs than Yokana because Yokana's set of factors shared with Motorola is a proper subset of that of Data General. Each case may also have factors it does not share with the cfs. These unshared factors are listed below the node.

1. Representative examples. One of the simplest justifications of a factor's importance is a representative example in which the factor clearly contributed to the outcome of the case. We have identified three kinds of representative cases: vanilla, simple conflict and packed cases. Vanilla and simple conflict cases are examples that clearly represent the effect of a factor. A vanilla case is one in which at most two factors apply, both of which favor the winner of the case; it emphasizes those particular factors. A simple conflict case is one where the factor of interest is consistent with the case's outcome (i.e., pro-winner) and all the other factors favor the side that lost. Such a case makes a dramatic demonstration of a factor's importance in overcoming all of the competing factors.
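The vanilla and simple conflict definitions above reduce to simple set tests over factors and their polarities. The snippet below is a minimal sketch, not the program's code; the factor ids, polarities, and example sets are hypothetical stand-ins for the paper's 20-factor database.

```python
# Toy factor model: each factor favors the plaintiff ("p") or defendant ("d").
# These assignments are illustrative, not the paper's actual factor list.
FAVORS = {"F2": "p", "F4": "p", "F6": "p",    # pro-plaintiff factors
          "F5": "d", "F10": "d", "F20": "d"}  # pro-defendant factors

def is_vanilla(factors, winner):
    """Vanilla case: at most two factors apply, all favoring the winner."""
    return 0 < len(factors) <= 2 and all(FAVORS[f] == winner for f in factors)

def is_simple_conflict(factors, winner, factor):
    """Simple conflict case: the factor of interest is pro-winner and
    every other applicable factor favors the side that lost."""
    if factor not in factors or FAVORS[factor] != winner:
        return False
    others = factors - {factor}
    return bool(others) and all(FAVORS[f] != winner for f in others)

print(is_vanilla({"F6"}, "p"))                             # True
print(is_simple_conflict({"F6", "F5", "F10"}, "p", "F6"))  # True
```

A packed-case test would be analogous: a larger factor set, all (or all but one) consistent with the outcome.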
Packed cases represent the effect of a collection of consistent factors; in other words, they are situations in which one of the parties had a very strong position. Such cases have a larger number of factors, all of which (or all but one of which) are consistent with the case's outcome. In tutoring, representative examples are useful to introduce students to their respective factors: vanilla cases because they are so uncomplicated, simple conflict cases because they so strongly emphasize the effect of a particular factor, and packed cases because they are an economical way to introduce lots of factors in a concentrated form.

Citing a parade of representative examples for each of a side's strengths is a basic, but important, kind of legal argument.

Ashley and Aleven 655

An advocate for the plaintiff in the Motorola case of Figure 1 could justify an assertion that the facts associated with factors F2, F4 and F6, all of which favor plaintiff, justify a victory for its client, by citing some representative examples. For instance, USM is representative of cases where the pro-plaintiff Security-Measures factor justifies a victory for the plaintiff. The plaintiff's attorney could make a simple, but reasonable argument in favor of its client as a representative example of the effect of factor F6: "Where the plaintiff took measures to protect the security of its alleged trade secrets, as plaintiff Motorola did, it should win a claim for trade secret misappropriation, just as the plaintiff won in the USM case".

Packed cases can also be cited as representative examples, but are more likely to be distinguishable. The plaintiff in Motorola can also cite Data General, a packed case, as a representative example of the effect of factor F6 (as we see below, citing Data General has certain other rhetorical advantages), but in so doing, the plaintiff opens itself to a response by the defendant.
Data General is a strongly pro-plaintiff packed case; three factors favoring plaintiff apply to it that do not apply to Motorola and offer an alternative explanation of the result. In responding to plaintiff's argument citing Data General as a counterexample, the defendant in Motorola could distinguish Data General by pointing out the unshared pro-plaintiff factors F12, F14 and Fl@.

2. Conflict Resolution examples support assertions that a set of factors all favoring one side is more important than another set of factors all favoring the opponent by showing a case with both sets whose outcome is consistent with the former. In Figure 1, Data General has the virtue that it accounts not only for some of the plaintiff's strengths in Motorola but also for some of the opponent's strengths; it resolves the conflict among factors F6 and F10. Though there is no legally authoritative weighting scheme to which the plaintiff can appeal to justify an assertion that Security-Measures is a more important factor than Secrets-Disclosed-[to]-Outsiders, Data General is an authoritative example that the conflict should be resolved in favor of the plaintiff.

For tutoring students about conflict resolution, Figure 1 may not be an ideal example; conflict resolution examples are more effective the more on point they are relative to the problem and the less distinguishable they are. Although Data General is a more persuasive case than Yokana in so far as it accounts for a more inclusive set of factors in Motorola, it is less persuasive to the extent that it can be distinguished from the cfs. On the other hand, since a tutor needs to teach students

1Within a particular context, certain differences among cases are salient and others not. Our model enables one to determine, in context, which differences are, in fact, distinctions. See [Ashley1989] for more information.
Figure 2: Three-case "trumping counterexample" Argument Context

to respond to a conflict resolver by distinguishing, Figure 1 may be a very good example. Our program enables a tutor to generate and select examples suitable for particular tutoring contexts. The user inputs the number of representative and conflict-resolving precedents that the Argument Context must contain. The program formulates an appropriate query and runs it. The user can then filter and rank the generated Argument Contexts according to criteria she deems useful, for instance that the conflict resolver has few distinctions.

3. Refutational comparisons, or counterexamples, are cases that refute an assertion. If an opponent asserts that a given set of factors necessitates victory for his side, one may refute that with a case where the factors applied but that side did not win. A given factor may have many refuting examples. Although statistical arguments are generally not acceptable, it would be appropriate to support an argument that a factor is not significant with a parade of refuting cases. The program searched for cases to refute the pro-defendant factors in Motorola. Only factor F5, the fact that the non-disclosure agreement was not specific, had more than one refutational example (it had two, not a dramatic argument).

A trumping counterexample is a more contextually specific kind of refutational example, because it refutes an opponent's assertion that the set of shared factors associated with the cited precedent necessitates the same outcome [Ashley1990, Chapters 8, 9]. In essence, plaintiff trumps the defendant's case and refutes its point by citing a counterexample that is more on point (i.e., a trumping or more-on-point counterexample).
For instance, in Figure 1, if the defendant in Motorola cited Yokana for the proposition that the plaintiff's disclosures to outsiders (F10) necessitate a victory for defendant, the plaintiff could refute the defendant's assertion by citing Data General as a counterexample. There, the plaintiff won despite the disclosures, where the plaintiff also took security measures (F6). As a trumping counterexample, Data General satisfies the additional constraint that it shares a more inclusive set of factors with the cfs than the cited case does.

Here, again, our program enables a tutor to generate and select instances of trumping counterexamples suitable for particular tutoring contexts. Like conflict resolvers, trumping counterexamples are more persuasive the more extra similarities they share with the cfs than the less on point case does and the less distinguishable they are from the cfs. For purposes of introducing students to the rhetorical uses of trumping, a better example is shown in Figure 2. Here, the program has retrieved 81 three-case trumping counterexamples and ranked them according to the criteria mentioned, enabling the tutor to select the example in the figure. The program is able to formulate queries for these Argument Contexts; the user simply specifies the number of cases.

Figure 3: Two-case "ceteris paribus" Argument Context (Amoco: F3 Employee-Sole-Developer, F4 Agreed-Not-To-Disclose, F5 Agreement-Not-Specific, F6 Security-Measures; Eastern Marble: F4 Agreed-Not-To-Disclose, F5 Agreement-Not-Specific, F6 Security-Measures, F15 Unique-Product)

From rhetorical and pedagogical viewpoints, the absence of refutational examples is interesting, too. If no case involving a particular factor had an outcome inconsistent with the factor, that makes an effective argument that the factor is very important.
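The trumping constraint described above, a counterexample that went the other way and shares a strictly more inclusive factor set with the cfs than the cited precedent does, can be sketched set-theoretically. The case data below is a hypothetical stand-in loosely modeled on the Figure 1 discussion, not the actual database.

```python
# Illustrative cfs and precedents: (applicable factors, outcome).
# Factor sets and outcomes here are assumptions for the sketch.
CFS = {"F2", "F4", "F5", "F6", "F10", "F20"}
CASES = {"yokana":       ({"F7", "F10"},              "d"),
         "data_general": ({"F4", "F6", "F10", "F12"}, "p")}

def is_trumping_counterexample(counter, cited):
    """True iff `counter` went the other way AND the cited case's shared
    factor set is a proper subset of the counterexample's (i.e., the
    counterexample is strictly more on point relative to the cfs)."""
    c_factors, c_outcome = CASES[counter]
    p_factors, p_outcome = CASES[cited]
    return (c_outcome != p_outcome
            and (p_factors & CFS) < (c_factors & CFS))  # proper subset

print(is_trumping_counterexample("data_general", "yokana"))  # True
```

The proper-subset test is what makes the relation asymmetric: swapping the two cases makes it fail.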
For instance, using the program, a student can discover that factor F20 in Motorola (the allegedly confidential information was known to competitors) has no refutational examples, while there are three cases in the database of 26 cases where factor F20 applied and defendants won. Although the effectiveness of such an argument depends on the size of the sample and exhaustiveness of the search, the rhetorical concept can be taught with the help of the program.2

4. Ceteris paribus comparisons provide another way to support assertions that a factor is important by showing the difference the factor makes to the outcome of two cases that are equivalent but for that factor. Pursuant to this Dialectical Example, one shows pairs of cases where all things are equal except for the presence of a certain factor which accounts for the difference in outcomes. For instance, suppose an advocate wanted to substantiate an assertion that a defendant employee should win where that employee, while working for the plaintiff, was primarily responsible for inventing and developing the confidential information the plaintiff seeks to protect (factor F3 - Employee-Sole-Developer). Two cases, differing by the addition of just factor F3 where the defendant won the latter case, would be ideal.

2As illustrated above, the nature of the refutation and the assertions refuted may be of a variety of different types. An opponent's assertion that the absence of particular factors necessitates losing a claim may be refuted by a case where the side won but no such factors were present. Similarly, rules may be "broadened" by cases where the rule applies though certain supposed prerequisites are absent [Rissland and Skalak1989].

Figure 4: Three-case "cover the bases" Argument Context
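Such ceteris paribus pairs can be searched for mechanically: find pairs of cases with opposite outcomes whose factor sets differ in only one or a few factors, then rank the pairs so the ones with the fewest extraneous differences come first. This is an illustrative sketch, not the Loom implementation; the two cases below are hypothetical stand-ins loosely modeled on the Amoco/Eastern Marble comparison.

```python
from itertools import combinations

# Illustrative cases: (applicable factors, outcome). Assumed data.
CASES = {"amoco":          ({"F3", "F4", "F5", "F6"},  "d"),
         "eastern_marble": ({"F4", "F5", "F6", "F15"}, "p")}

def cp_comparisons(cases, max_diffs=2):
    """Pairs with different outcomes whose factor sets differ by at most
    max_diffs factors; ranked so the cleanest comparisons come first."""
    pairs = []
    for a, b in combinations(cases, 2):
        fa, oa = cases[a]
        fb, ob = cases[b]
        if oa == ob:
            continue
        diffs = fa ^ fb  # symmetric difference: factors not shared
        if 1 <= len(diffs) <= max_diffs:
            pairs.append((a, b, sorted(diffs)))
    return sorted(pairs, key=lambda t: len(t[2]))  # fewest differences first

print(cp_comparisons(CASES))  # [('amoco', 'eastern_marble', ['F15', 'F3'])]
```

Allowing `max_diffs > 1` mirrors the program's tolerance for imperfect comparisons; the ranking then penalizes the extraneous differences.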
In Figure 3, a comparison of Amoco and Eastern Marble substantiates the effect of factor F3, which, arguably, explains why the defendant won in Amoco even though that case is much like Eastern Marble, where the plaintiff won.

The program enables a tutor to select all of the pairs of cases in the database suitable for making ceteris paribus comparisons or to search for comparisons for a particular factor. The presence of additional differences may spoil a ceteris paribus comparison. Here, Eastern Marble also differs from Amoco in that it has factor F15, plaintiff's product was unique. Since perfect comparisons are relatively rare, our program's definition of a ceteris paribus comparison admits pairs with more than one difference. The comparisons are ranked according to effectiveness as measured in part by the absence of extraneous differences.

5. Coherence examples are case examples which stick together in some sense. Cohesiveness may be supplied by substantive or tactical considerations. Substantive coherence can be supplied by citing cases having a common theme or analytical rationale (we are planning to represent some of these aspects of cases). For tactical reasons, an advocate may cite a set of conflict resolution cases that "cover the bases" in that they effectively counteract all of an opponent's strengths. That is, for all of the opponent's strengths, the advocate cites cases in which those strengths were overcome by as few of the advocate's strengths as possible. Such an argument shows that none of the opponent's strengths is fatal. For instance, the program generated the Argument Context of Figure 4 which covers the bases on behalf of the defendant in Motorola. Midland-Ross and Amoco were both won by the defendant, and together have all the pro-plaintiff factors that apply in Motorola.
This rhetorical device supports a conclusion that despite the plaintiff's strengths in Motorola, despite the employee bribe (F2), the nondisclosure agreement (F4) and the security measures (F6), defendant should still win. By presenting such program-generated examples, a tutor may teach students to seek a parsimonious cover, that is, one in which each case takes minimal advantage of an advocate's strengths while the cases as a group effectively cover the opponent's strengths. At the same time, a student needs to learn to cite cases that are more on point and less distinguishable. In Figure 4, for instance, both covering cases are distinguishable from the cfs.

Figure 5: Five-case "select best case" Argument Context

Argument Contexts in Tutoring

These Dialectical Example types are building blocks of more complicated arguments. An advocate attempts to assemble some set of these examples, tailored to the specific problem and its factors. If the advocate believes that a particular set of factors are key to his case, for instance, he may cite some relatively non-distinguishable representative cases focusing on those factors, and, if possible, make some ceteris paribus comparisons to establish the factors' significance. If the advocate seeks to blunt the effect of the opponent's strengths, he may cover the bases with cases that show those strengths are not fatal and cite as on point a favorable conflict resolving case as possible. In planning arguments, an advocate will always be on the lookout for his opponent's trumping counterexamples.

Students need to learn rhetorical skills like recognizing and employing different kinds of Dialectical Examples and making reasonable choices among possible argument moves.
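The cover-the-bases device described in the previous section reduces to a joint-coverage test over factor sets: do the cited cases, all won by one side, together contain every factor in the cfs favoring the opponent? The sketch below is illustrative only; the factor polarities and case data are hypothetical stand-ins loosely modeled on Figure 4.

```python
# Illustrative cfs, factor polarities, and precedents. Assumed data.
CFS = {"F2", "F4", "F5", "F6", "F10", "F20"}
FAVORS = {"F2": "p", "F4": "p", "F6": "p", "F5": "d", "F10": "d", "F20": "d"}
CASES = {"midland_ross": ({"F2", "F7"},             "d"),
         "amoco":        ({"F3", "F4", "F5", "F6"}, "d")}

def covers_the_bases(citations, side):
    """True iff every cited case was won by `side` and the cited cases
    jointly contain every cfs factor that favors the opponent."""
    opponent = "d" if side == "p" else "p"
    opp_strengths = {f for f in CFS if FAVORS[f] == opponent}
    if any(CASES[c][1] != side for c in citations):
        return False
    covered = set().union(*(CASES[c][0] for c in citations))
    return bool(opp_strengths) and opp_strengths <= covered

print(covers_the_bases(["midland_ross", "amoco"], "d"))  # True
```

A parsimony ranking could then prefer covers whose cases each use as few of the advocate's own strengths as possible, echoing the tutoring point above.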
We are designing a tutoring system to teach law students to learn such skills by presenting them with Argument Contexts, like those above, that present opportunities for employing the associated argument moves [Ashley and Aleven1991, Aleven and Ashley1992].

The tutoring method can be illustrated with Figure 5, which shows an Argument Context the program generated to teach an introductory lesson about: (1) recognizing and avoiding trumping counterexamples, (2) the minimum criteria for citing a case, and (3) the fact that the more on point a case, the better it is to cite. Given this Argument Context, a tutor can ask a student to select the best case to cite for the plaintiff. A student's answer reveals a lot about his or her understanding of arguing with cases. More specifically, although Analogic, Eastern and Schulenburg are all citable for plaintiff, Analogic is the best precedent. It is better than Eastern, because Eastern can be trumped by Amoco, whereas Analogic cannot be trumped (1). Analogic is better than Schulenburg because it is more on point, and according to (3), a more on point precedent is better. Analogic is better than Amoco, because Amoco cannot be cited on behalf of the plaintiff - it does not satisfy the minimum requirements (2). If the student misses the trump, the program can present instances of trumping counterexamples, such as the Argument Context of Figure 2, to follow up. To generate the Argument Context shown above, the user only needed to specify the Issues that she wanted the Argument Context to bring up; the program itself formulated and ran the query.

Generating such pedagogical examples, however, is a difficult task to perform by hand. A law school legal methods instructor agreed that the Argument Context of Figure 5 would be useful for pedagogical purposes but estimated that it would take hours or days to discover a set of such cases even using available on-line retrieval services [Saunders1991].
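The three lesson criteria can be combined into a simple selection procedure: keep the cases citable for the side, drop those an opponent precedent can trump, and among the survivors prefer the most on point. The sketch below approximates the more-on-point partial order by shared-set size and uses hypothetical case data loosely modeled on the Figure 5 discussion; it is not the tutoring system's code.

```python
# Illustrative cfs and precedents: (applicable factors, outcome). Assumed data.
CFS = {"F4", "F5", "F6", "F10"}
CASES = {"analogic":    ({"F4", "F6"}, "p"),
         "eastern":     ({"F6"},       "p"),
         "schulenburg": ({"F4"},       "p"),
         "amoco":       ({"F5", "F6"}, "d")}

def best_case_to_cite(side):
    opp = "d" if side == "p" else "p"
    def shared(c):
        return CASES[c][0] & CFS
    # (2) minimum criteria: won by `side` and shares at least one factor.
    citable = [c for c in CASES if CASES[c][1] == side and shared(c)]
    # (1) avoid cases a strictly more-on-point opponent precedent trumps.
    def trumped(c):
        return any(CASES[t][1] == opp and shared(c) < shared(t) for t in CASES)
    safe = [c for c in citable if not trumped(c)]
    # (3) prefer the most on point (shared-set size as a total-order proxy).
    return max(safe, key=lambda c: len(shared(c)), default=None)

print(best_case_to_cite("p"))  # analogic
```

With this data, Eastern is eliminated because Amoco trumps it, Amoco is not citable for plaintiff, and Analogic beats Schulenburg on pointness, matching the lesson's ranking.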
Even a human tutor already familiar with the opinions of twenty or thirty cases would find it very difficult mentally to construct Argument Contexts. The criteria are abstract and there are many combinations of cases to consider, few of which are satisfactory examples.

Generating Argument Contexts

We have designed a case representation and program that generates the Argument Contexts shown in this paper, and others, automatically in seconds. The user inputs specifications for the Argument Contexts by entering parameters for certain standard Argument Contexts or a query like the one illustrated below. The program outputs instantiated Argument Contexts satisfying the query and enables the user to inspect the generated Argument Contexts, filter and rank them according to various useful criteria, and save them to a file.

The program generates Argument Contexts by querying a knowledge base for case-based argumentation that we have implemented using the knowledge representation system Loom [MacGregor1988]. Loom is a structured inheritance system, or KL-ONE-style system [Woods and Schmolze1990]. To represent knowledge in Loom, one provides definitions for concepts and relations, and asserts facts about individuals, the instances of the concepts.

Loom provides a deductive query facility that allows one to retrieve most logical consequences of the facts and definitions in the knowledge base. Query expressions are, roughly speaking, first-order logic formulae. An example of a query is given below. Loom evaluates queries by a process of exhaustive search. The query language can also be used to state definitions of concepts and relations.
The knowledge base contains definitions, most of them in Loom's query language, of many important concepts and relations involved in case-based argumentation as well as some pedagogical concepts, including the following:

Primitives            Pedagogical concepts        Argument concepts
Case                  Vanilla Case                Shared Factor
Factor                Packed Case                 Citable
Side (π or δ)         Conflicting Factors Case    More On Point
Outcome               Unordered                   Best Case To Cite
Applicable Factor     Favors                      Trumping Counterexample
                                                  Relevant Difference

The program's knowledge base contains facts that represent important aspects of the 20 factors and 26 cases that we work with. For each factor, the side that it favors is recorded; for each case, its set of applicable factors and the side that won. Below, we display a query that retrieves two-case "cover the bases" Argument Contexts; the three-case Argument Context of Figure 4 was generated by a similar query. In English the query says: "Retrieve cases ?CFS and ?C1, such that ?C1 is won by side ?S and such that all factors in ?CFS that favor ?S's opponent are covered by ?C1 (that is, apply in ?C1); there must be at least one such factor."

(RETRIEVE (?CFS ?C1 ?S)
  (:AND (CASE ?CFS)
        (TSM-PRECEDENT ?C1)
        (NEQ ?C1 ?CFS)
        (OUTCOME ?C1 ?S)
        (:FOR-SOME ?F
          (:AND (FACTOR ?F)
                (APPLICABLE-FACTOR ?CFS ?F)
                (FAVORS ?F (OPPOSITE ?S))))
        (:FOR-ALL ?F
          (:IMPLIES (:AND (FACTOR ?F)
                          (APPLICABLE-FACTOR ?CFS ?F)
                          (FAVORS ?F (OPPOSITE ?S)))
                    (APPLICABLE-FACTOR ?C1 ?F)))))

Notice that the query refers to concepts and relations that are defined in our knowledge base. This query returned 11 Argument Contexts in 3.5 seconds. The four-case Argument Context of Figure 1, together with 8 similar ones, was generated by the program in 6 seconds. The query for the three-case Argument Contexts returned 4 Argument Contexts in 22.9 seconds. (Most queries run faster.) The program is described more fully in [Ashley and Aleven1991, Aleven and Ashley1992].
For pedagogical use, Argument Contexts need to satisfy special constraints that facilitate teaching students how to use them. They may present some especially clear cut examples of the above, or situations presenting some combination of the above and posing a choice of which argument move to make. One does not have to express such preferences as "hard" constraints in a query, but can use the program's filtering, ranking and sorting facilities to find the best pedagogical examples. In Section 2, we gave some brief illustrations of the filtering process.

Tutoring Experiment

We have conducted a preliminary experiment to assess empirically whether program-generated Argument Contexts make useful examples in tutoring argument skills.3 An "experimental group" of three first-year law students and a two-student "control group" took a one hour, written pre-test to provide a baseline indication of their case-based argumentation skills. Kevin Ashley manually conducted a seventy-five minute tutorial session for the experimental group. Then all five students took a one hour, written post-test to assess any changes in the argument-making skills of the experimental and control groups.

All of the exercises in the tutoring session and pre- and post-tests were based on program-generated Argument Contexts including some of those illustrated in the above figures. The pre- and post-tests, for instance, employed the Argument Contexts of Figures 1 and 5; students were asked to select the best case to cite for a side, employ it in an argument and to respond to the argument from the opponent's point of view. In all instances, students received textual descriptions of each of the cases involved in the various Argument Contexts, but they were not shown any of the graphic representations of the above figures.
For each question, the grader simply assessed whether the student had taken advantage of the argument-making opportunities implicit in the Argument Context associated with the problem and assigned a grade on a five point scale.4

The results, reported in the footnote table, support the conclusion that program-generated Argument Contexts can be used to improve law students' argument-making performance, at least in a manual tutoring context. After the tutoring session, the members of the experimental group performed better than the control group. The pre-test results indicate that, initially, the experimental group members were slightly better than the control group, but, in light of the narrow spread of pre-test scores, that difference does not appear to have been substantial. After the tutoring session, the experimental group students performed substantially better than those in the control group in both questions of the post-test. The performance difference was more pronounced for question 1 than question 2 of the post-test, an effect we attribute to the fact that question 2 presented the most complex Argument Context the students had yet seen. See [Aleven and Ashley1992] for more details.

3The advantages of integrating experimental evaluation with the initial phases of tutorial program design and implementation have been discussed in [Littman and Soloway1988].

4Since the criteria were objective, Kevin Ashley performed the grading; we are aware that an independent law professor, unconnected with this research effort, should grade the tests and are attempting to cajole one into cooperating. Here are the results:

Discussion and Conclusions

To summarize, we have identified five important kinds of Dialectical Examples, standard configurations of cases which enable an arguer to justify rhetorical assertions effectively by example.
Our computer program generates Argument Contexts, collections of cases that instantiate such Dialectical Example types, from an on-line database of cases according to a user's general specifications. We plan to incorporate the Argument Context generation program into a tutorial system to teach law students to argue with cases; it will provide a stock of instances of Dialectical Examples to teach novice advocates how to recognize, carry out and respond to the associated rhetorical moves. Although generating such examples is very hard for humans even when dealing with small numbers of cases, our program generates and organizes such examples quickly and effectively. In a preliminary experiment, we employed program-generated Argument Contexts manually to teach basic argument skills to first-year law students with good results. Our ability to define such complex examples declaratively in terms of logical expressions of Loom concepts and relations affords two advantages: (1) the concept definitions are modular and relatively understandable (which makes them easier to read, modify, and explain); (2) complex queries can incorporate quantified variables (e.g., a query does not have to specify a current fact situation but can designate all responsive Argument Contexts involving any case as cfs - an impossible query for HYPO to perform).

References

Aleven, Vincent and Ashley, Kevin D. 1992. Automated Generation of Examples for a Tutorial in Case-Based Argumentation. In Proceedings of the Second International Conference on Intelligent Tutoring Systems.

Ashley, Kevin D. and Aleven, Vincent 1991. Toward an Intelligent Tutoring System for Teaching Law Students to Argue with Cases. In Proceedings of the Third International Conference on Artificial Intelligence and Law, Oxford, England. 42-52.

Ashley, Kevin D. and Rissland, Edwina L. 1988. Waiting on Weighting: A Symbolic Least Commitment Approach. In Proceedings AAAI-88.
American Association for Artificial Intelligence. St. Paul.

Ashley, Kevin D. 1989. Defining Salience in Case-Based Arguments. In Proceedings IJCAI-89. International Joint Conferences on Artificial Intelligence. Detroit.

Ashley, Kevin D. 1990. Modeling Legal Argument: Reasoning with Cases and Hypotheticals. MIT Press, Cambridge. Based on Ashley's 1987 PhD Dissertation, University of Massachusetts, COINS Technical Report No. 88-01.

Ashley, Kevin D. 1991. Reasoning with Cases and Hypotheticals in HYPO. International Journal of Man-Machine Studies.

Clancey, W. J. 1983. The Epistemology of a Rule-based Expert System: A Framework for Explanation. Artificial Intelligence 20(3):215-251.

Collins, Allan and Stevens, Albert L. 1982. Goals and Strategies of Inquiry Teachers. In Glaser, Robert, editor 1982, Advances in Instructional Psychology, volume 2. Lawrence Erlbaum Associates, Hillsdale, NJ.

Gentner, D. 1983. Structure-Mapping: A Theoretical Framework for Analogy. Cognitive Science 7:155-170.

Littman, David and Soloway, Elliot 1988. Evaluating ITSs: The Cognitive Science Perspective. In Polson, Martha C. and Richardson, J. Jeffrey, editors 1988, Foundations of Intelligent Tutoring Systems. Lawrence Erlbaum Associates, Hillsdale, NJ.

MacGregor, Robert M. 1988. A Deductive Pattern Matcher. In Proceedings AAAI-88, Saint Paul, MN. American Association for Artificial Intelligence. 403-408.

McCarty, L. Thorne and Sridharan, N. S. 1981. The Representation of an Evolving System of Legal Concepts: II. Prototypes and Deformations. In Proceedings IJCAI-81, Vancouver, BC. International Joint Conferences on Artificial Intelligence.

McGuire, R.; Birnbaum, L.; and Flowers, M. 1981. Opportunistic Processing in Arguments. In Proceedings IJCAI-81, Vancouver, BC. International Joint Conferences on Artificial Intelligence. 58-60.

Rissland, Edwina L. and Skalak, David B. 1989. Combining Case-Based and Rule-Based Reasoning: A Heuristic Approach.
In Proceedings IJCAI-89. International Joint Conferences on Artificial Intelligence. Detroit.

Rissland, Edwina L. and Soloway, E. M. 1980. Overview of an Example Generation System. In Proceedings AAAI-80, Stanford, CA. American Association for Artificial Intelligence.

Rissland, Edwina L.; Valcarce, E. M.; and Ashley, Kevin D. 1984. Explaining and Arguing with Examples. In Proceedings AAAI-84, Austin, TX. American Association for Artificial Intelligence.

Saunders, Kurt M. 1991. Knowledge acquisition session re legal methods instruction. Recorded transcript. Assistant Professor of Law, University of Pittsburgh School of Law.

Suthers, D. and Rissland, E. 1988. Constraint Manipulation for Example Generation. Technical Report 88-71, Department of Computer and Information Sciences, University of Massachusetts, Amherst, MA.

Winston, Patrick H. 1975. Learning Structural Descriptions from Examples. In Winston, Patrick H., editor 1975, The Psychology of Computer Vision. McGraw-Hill, New York.

Woods, William A. and Schmolze, James G. 1990. The KL-ONE Family. Technical Report TR-20-90, Center for Research in Computing Technology, Harvard University, Cambridge, MA.

660 Representation and Reasoning: Case-Based
Common Sense

A. Julian Craddock
University of B.C.
Vancouver, B.C., V6T 1W5
craddock@cs.ubc.ca

Abstract

An important and readily available source of knowledge for common sense reasoning is partial descriptions of specific experiences. Knowledge bases (KBs) containing such information are called episodic knowledge bases (EKB). Aggregations of episodic knowledge provide common sense knowledge about the unobserved properties of 'new' experiences. Such knowledge is retrieved by applying statistics to a relevant subset of the EKB called the reference class. I study a manner in which a corpus of experiences can be represented to allow common sense retrieval which is: 1. Flexible enough to allow the common sense reasoner to deal with 'new' experiences, and 2. In the simplest case, reduces to efficient database look-up. I define two first order dialects, L and QL. L is used to represent experiences in an episodic knowledge base. An extension, QL, is used for writing queries to EKBs.1

The problem

A corpus of declarative knowledge consisting of general concepts and their associated properties is frequently assumed adequate for common sense reasoning. At odds with this assumption I suggest that knowledge about specific experiences is necessary for flexible and efficient common sense reasoning. An experience is a common sense reasoner's (CSR) observation of its domain which can be described in terms of the properties of a set of observed objects. The descriptions of a CSR's experiences form its episodic knowledge base (EKB).

The problem addressed in this paper is to use knowledge about the specific experiences described in an EKB to make inferences about the unobserved properties of 'new' experiences; inferences of the form 'Will the bird sitting on the lawn fly if I try to catch it?'.

1 This research was supported by a NSERC of Canada postgraduate scholarship.
Such inferences are made by retrieving common sense knowledge: general stable knowledge about the unobserved properties obtained by aggregating over a set of similar experiences in the EKB. Thus, the CSR might infer that the bird sitting on the lawn flies if flying is part of the common sense knowledge about similar birds.

In this document I provide a first order framework appropriate for: 1. Describing experiences, and 2. Making inferences by retrieving common sense knowledge directly from an EKB. The framework differs from many existing knowledge representation techniques as it provides information about specific experiences. I judge the effectiveness of my model by its ability to flexibly and efficiently retrieve common sense knowledge.

My solution

In order to solve the problem my model applies statistical knowledge obtained from old experiences to a new experience as follows:

INPUT: 1. A partial specification of the 'observed' properties of a new experience and some additional 'unobserved' properties, 2. An EKB of partial specifications of past experiences.

SELECT: a reference class [Kyburg, 1983] [Kyburg, 1988] of experiences described in the EKB relevant to testing the hypothesis that the unobserved properties are also true of the new experience.

MODIFY: the criteria for membership in the reference class if there are no directly relevant experiences described in the EKB. This process provides the model with the flexibility necessary to reason about 'new' experiences: experiences for which there are no similar experiences already described in the EKB.

AGGREGATE: over the members of the reference class using statistical techniques to retrieve common sense knowledge about the unobserved properties.

OUTPUT: a conditional probability representing a measure of the support provided by the experiences in the EKB for the hypothesis.

Craddock 661

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.
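As a rough illustration of this pipeline, the SELECT and AGGREGATE steps can be sketched over a toy knowledge base. This is only a sketch under simplifying assumptions: the paper's model is first order, while here each experience is a flat attribute dictionary, and all function names and data are hypothetical.

```python
# Sketch of the SELECT/AGGREGATE/OUTPUT steps over a toy EKB.
# Each experience is a partial description: a dict of feature -> value.

def select_reference_class(ekb, observed):
    """SELECT: experiences whose descriptions agree with the observed properties."""
    return [e for e in ekb
            if all(e.get(f) == v for f, v in observed.items())]

def aggregate(reference_class, feature, value):
    """AGGREGATE/OUTPUT: conditional probability that `feature == value`,
    counted over reference-class members that mention the feature."""
    relevant = [e for e in reference_class if feature in e]
    if not relevant:
        return None  # empty reference class: the MODIFY step would be needed
    return sum(e[feature] == value for e in relevant) / len(relevant)

ekb = [
    {"type": "bird", "colour": "red", "moves": "flys"},
    {"type": "bird", "colour": "red", "moves": "flys"},
    {"type": "bird", "colour": "red", "moves": "walks"},
    {"type": "dog",  "colour": "red", "moves": "walks"},
]
ref = select_reference_class(ekb, {"type": "bird"})
print(aggregate(ref, "moves", "flys"))  # 2 of the 3 birds fly
```

The returned fraction plays the role of the OUTPUT conditional probability: the support the past experiences lend to the hypothesis about the new one.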
Representing Experiences

In this section I briefly describe L, a first order language with the following properties:

1. Individual ground sentences of L provide partial descriptions of specific experiences. An EKB is a set of such sentences.

2. A distinguished set of distinct ground terms allows the number of observations of objects described in an EKB sharing the same properties to be counted.

Example 1 L allows a CSR to describe experiences such as 'two grad students drinking beer'. It also allows a CSR to count how many observations of 'two grad students drinking beer' it has described in its EKB.

Syntax

Objects are described using a distinguished set of distinct ground terms composed of features, values and labels. Individual features, values and labels are denoted by f(i), v(j) and l(k) respectively, for some natural numbers i, j, and k. I sometimes write 'colour', 'size', 'name', ..., 'julian', ... for f(i), f(j), f(k), ..., and 'red', 'large', ... for v(i), v(j), v(k), ....

Each feature has a set of possible values. For example, the feature 'colour' might have the set of possible values 'red', 'green', 'yellow', and 'blue'. I assign values to features using n-ary relations. For example, I can describe objects which are 'red' 'cars' as follows: R0(x, red, colour) ∧ R0(x, car, type). The result of assigning a value to a feature is called a primitive property. Complex properties are formed from primitive properties using the logical operators of L, which include the usual FOL operators.

In order for a CSR to count the number of observations of objects with the same property the EKB must be able to tell the observations apart. To allow this each object observed in an experience is denoted by an individual label which is unique to that object and that experience. For example, if an EKB contains only the ground sentence

1. R0(l(4), red, colour) ∧ R0(l(5), red, colour)

then I say that it knows of two observations of objects which are 'red' in 'colour'.
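This labelling scheme can be mimicked concretely: each ground atom R0(label, value, feature) becomes a tuple, and counting observations is counting distinct labels. A hypothetical encoding, not the paper's implementation:

```python
# Ground atoms R0(label, value, feature) encoded as tuples.
# Labels are unique per object per experience, so identical property
# descriptions of different objects remain individually countable.
ekb = {
    ("l4",  "red", "colour"),
    ("l5",  "red", "colour"),
    ("l36", "car", "type"),
    ("l36", "red", "colour"),
}

def count_observations(ekb, value, feature):
    """Number of distinct labelled objects with the given primitive property."""
    return len({lab for (lab, v, f) in ekb if v == value and f == feature})

print(count_observations(ekb, "red", "colour"))  # l4, l5 and l36 are all red
```

Without distinct labels the two atoms describing l4 and l5 would collapse into one fact and the count would be lost, which is exactly the co-referentiality problem the labels are introduced to avoid.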
If it contains the two ground sentences

1. R0(l(4), red, colour) ∧ R0(l(5), red, colour)
2. R0(l(36), car, type) ∧ R0(l(36), red, colour)

then I say that it knows of three observations of objects which are 'red' in 'colour' and of one observation of two objects which are both 'red' in 'colour'.

In addition to the usual FOL axioms I include axioms saying that there are distinct numbers and that distinct numbers are mapped to distinct individuals, features and values. This ensures that individual values, features, and labels are distinct, avoiding co-referentiality problems which the unique names assumption can not solve. I also add axioms to define the complementation operator and exclusivity relation:

o The complementation operator '-' describes set complementation, a restricted form of boolean negation [Craddock, 1992]. The usual axioms applicable to normal boolean negation apply to '-', with the addition of:

-Rn-1(l(i), ..., l(n), v(j), f(k)) ↔ (∃y)(Rn-1(l(i), ..., l(n), v(y), f(k)) ∧ ¬(y = j))

and

¬(∃y)(Rn-1(l(i), ..., l(n), v(y), f(k)) ∧ ¬(y = j)) ↔ (∃y)(Rn-1(l(i), ..., l(n), v(y), f(k)) ∧ (y = j))

for some natural numbers i, ..., n, j and k.

Example 2 The complement of 'red' with respect to 'colour' is the set of all possible values of 'colour' excluding 'red'. I write -R0(l(7), red, colour) if some individual has some colour other than red.

o The exclusive predicate defines exclusive features. A feature f(i) is exclusive if E(f(i)) is an axiom of L such that:

(∀x1) ... (∀xn)(∀y)(∀y')(∀z) [(Rn-1(l(x1), ..., l(xn), v(y), f(z)) ∧ E(f(z)) ∧ ¬(y = y')) ⊃ ¬Rn-1(l(x1), ..., l(xn), v(y'), f(z))]

Example 3 'Sex' is exclusive if it only makes sense to describe an individual by assigning a single value such as 'male' or 'female', but not both, to the feature 'sex'.

Example 4 Suppose I wish to say that all features are exclusive. I write: (∀x)(E(x)) in L without having to use second order quantification.
Example 5 For example, R0(l(34), red, colour) ∨ R0(l(34), yellow, colour) specifies that l(34) is either red or yellow in colour. If E(colour) is an axiom of L then (∃y)(R0(l(34), y, colour) ∧ ¬(y = blue)) is a theorem.

An episodic KB

An episodic KB (EKB) is a closed set of axioms. These include: 1. All the axioms of K (a subset of L containing the theorems), and 2. A finite set of ground sentences written in L. Each element of the latter set is a partial description of an experience. The following is a simple example of an episodic KB, excluding the axioms of K:

Example 6
1) R0(l(0), red, colour) ∧ R0(l(0), lrg, size)
2) R0(l(1), Ph.D., deg.) ∨ R0(l(1), MSc., deg.)
3) (∃y)(R0(l(2), y, colour) ∧ ¬(y = red))
4) ¬(R0(l(5), blue, colour) ∧ R0(l(5), lrg, size))
5) R0(l(6), red, colour) ∧ R0(l(7), red, colour)
...
n) R0(l(34), Ned, name) ∧ R0(l(34), red, colour)

Definition 1 I write EKB ⊢ α if the wff α is syntactically implied by the axioms of the EKB.

Example 7 Given the EKB in Example 6, I can write EKB ⊢ R0(x, red, colour) and EKB ⊬ R0(x, MSc., has degree).

Querying an episodic KB

In this section I extend L to form a more expressive language QL which allows the CSR to ask an EKB for more detailed information about particular experiences. In particular I define a probability term Prob(α | β)ekb such that α and β are wff of L and ekb denotes an episodic knowledge base as defined previously. Probability terms are interpreted as the conditional probability, with respect to an episodic knowledge base ekb, of α being true of a particular experience given that β is true.

Example 8 Consider the query 'What is the conditional probability of a particular lawyer being rich?'. I can write this in QL as Prob(α | β)ekb such that β is defined as R0(l(34), lawyer, occupation) and α is defined as R0(l(34), rich, financial status).
In QL conditional probabilities are calculated by counting the number of observations of objects with certain properties in the EKB. In order to count I include axioms for defining sets and set cardinalities in QL. For example, I define the cardinality of a wff α as the number of observations described in the EKB of objects for which the property α is true.

Definition 2 Let α be a wff of L such that (x1, ..., xn) is the n-tuple of labels denoting occurrences of individuals in α. Let (t1, ..., tn) be an n-tuple of terms in L. α(x1/t1, ..., xn/tn) is the result of substituting each occurrence of xi by ti. |{(t1, ..., tn) : EKB ⊢ α(x1/t1, ..., xn/tn)}| is the cardinality of a wff α, written |α|ekb, with respect to an EKB.

Example 9 |(R0(l(987), red, colour) ∧ R0(l(987), large, size))|ekb is the number of observations described in the EKB of objects with the property 'has colour red and size large'.

The set of all observations described in the EKB which are relevant to calculating the desired conditional probability is called the reference class of the probability term.

Definition 3 The reference class of Prob(α|β)ekb is

{(t1, ..., tn) : EKB ⊢ (α ∧ β)(x1/t1, ..., xn/tn)} ∪ {(t1, ..., tn) : EKB ⊢ (-α ∧ β)(x1/t1, ..., xn/tn)}

If the reference class of a probability term is not empty then I say that the probability term succeeds. Probability terms with non-empty reference classes are interpreted as:

Definition 4

Prob(α|β)ekb = |(α ∧ β)|ekb / (|(α ∧ β)|ekb + |(-α ∧ β)|ekb)

The denominator of each probability term is the cardinality of the reference class: a count of all the observations described in the EKB which are relevant to calculating the desired conditional probability.

Example 10 Consider the query 'What is the probability of a particular lawyer having a financial status other than rich?'. I can write this as Prob(-α|β)ekb such that β is defined as R0(l(987), lawyer, occupation) and α is defined as R0(l(987), rich, financial status).
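For ground conjunctions of primitive properties over a single label, Definition 4 reduces to plain counting. A sketch under the simplifying assumption that entailment is just membership of ground atoms, with complementation checked per the '-' operator; all names here are hypothetical:

```python
# Prob(alpha | beta)_ekb per Definition 4, for single-label primitive
# properties. An object satisfies -alpha if it has some value for
# alpha's feature other than alpha's value (set complementation).
ekb = {
    ("l1", "lawyer", "occupation"), ("l1", "rich", "fin-status"),
    ("l2", "lawyer", "occupation"), ("l2", "poor", "fin-status"),
    ("l3", "lawyer", "occupation"), ("l3", "rich", "fin-status"),
    ("l4", "doctor", "occupation"), ("l4", "rich", "fin-status"),
}

def holds(label, value, feature):
    return (label, value, feature) in ekb

def holds_complement(label, value, feature):
    return any(l == label and f == feature and v != value
               for (l, v, f) in ekb)

def prob(alpha, beta):
    """alpha, beta: (value, feature) pairs. Returns Prob(alpha|beta)_ekb."""
    labels = {l for (l, _, _) in ekb}
    pos = sum(holds(l, *alpha) and holds(l, *beta) for l in labels)
    neg = sum(holds_complement(l, *alpha) and holds(l, *beta) for l in labels)
    if pos + neg == 0:
        return None  # the probability term fails: empty reference class
    return pos / (pos + neg)

print(prob(("rich", "fin-status"), ("lawyer", "occupation")))  # 2/3
```

The denominator pos + neg is exactly the cardinality of the reference class of Definition 3, and the None return corresponds to a failed probability term (Definition 5, below).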
The numerator of the probability term is the cardinality of the set of all n-tuples of terms that satisfy (-α ∧ β). The reference class is the set of all n-tuples of terms that satisfy (α ∧ β), e.g. rich lawyers, and (-α ∧ β), e.g. non-rich lawyers, given the EKB. If E(financial status) is an axiom, observations of 'lawyers' with financial statuses such as 'poor' and 'middle class' will be counted in |(-α ∧ β)|ekb.

Modifying the reference class

In this section I discuss probability terms which fail:

Definition 5 A probability term Prob(α|β)ekb fails if its reference class is the empty set.

Probability terms fail because the knowledge in the knowledge base is incomplete: there are no experiences described in the EKB which are relevant to calculating the desired conditional probability. In this section I discuss two general mechanisms for identifying alternative, non-empty, reference classes using the episodic knowledge described in an EKB. I call these mechanisms generalization and chaining.

I argue that generalization and chaining are often more appropriate than techniques which depend upon the inclusion of extra-logical information [Asher and Morreau, 1991] in the KB, i.e., assumptions of irrelevance and principles of defeasibility [Touretzky et al., 1991]. Such techniques often 'cover up' deficiencies in the KB's underlying representational strategy. Thus, I don't agree with suggestions that the complexity of these techniques is necessarily indicative of the complexity of the underlying problem (see for example [Touretzky et al., 1991]).

Rather, I believe that the complexity is often an artifact of using an incorrect representational strategy. For example, knowledge representations such as taxonomic hierarchies, decision trees, and associative models of memory are founded on the basic premise that episodic knowledge should be parsimoniously organized on the basis of intuitively or statistically apparent structure.
However, there are two problems with this premise:

1. Given a reasonably complex domain, the observed data may be too sparse to allow the identification of any useful structure.

2. Even in the presence of adequate data, straightforward Bayesian arguments show that only external information about the likely mix of queries is relevant to this determination [Schaffer, 1991].

The premise not only fails to address the problem of identifying structure in the absence of adequate data, but as a result of bias [Schaffer, 1991] in the representational strategy the data may be 'underfitted'. As a result the information necessary for answering unforeseen queries may be missing. I therefore argue that retrieval from an EKB is:

1. Appropriate in the presence of sparse data, as no a priori structure needs to be identified.

2. Flexible, as episodic knowledge is structured in direct response to specific queries.

I now discuss how generalization and chaining apply the episodic knowledge in an EKB directly to the problem of identifying a new reference class. Both mechanisms are motivated by two knowledge structures frequently found in the knowledge representation literature: 1. Knowledge hierarchies, and 2. Associative chains.

Generalization

A probability term Prob(α|β)ekb is generalized by relaxing the 'membership requirements' of its reference class. However, generalization through arbitrary syntactic manipulation of the wffs α and β is inadequate. For example, the reference class of the query 'What is the probability that young lawyers are rich?' should not be arbitrarily relaxed to include observations of the financial status of 'young lawyers', 'dwarf elephants' and 'dead dogs'. Instead, generalization should be constrained by the episodic knowledge in the EKB. In particular, I suggest that this knowledge allows us to "... ignore special characteristics of the
event under consideration which are not known to be related to the property in question." [Kyburg, 1969] without relying upon the inclusion of additional knowledge by a knowledge base designer.

Properties of experiences can be 'ignored' by expanding feature values:

Definition 6 The value v(i) of a feature f(j) in Rn-1(x1, ..., xn, v(i), f(j)) is expanded by replacing i with an existentially quantified variable y to get (∃y)(Rn-1(x1, ..., xn, v(y), f(j))).

Example 11 Prob(α|β')ekb is a generalization of Prob(α|β)ekb if β' is the result of expanding one or more property predicates in β.

All the possible generalizations obtained by expanding feature values can be ordered in a partial lattice:

Lemma 1 Let Prob(α|β)ekb be a probability term. The set of all possible generalizations of Prob(α|β)ekb with non-empty reference classes formed by expanding the property predicates of β forms a partial lattice.

In [Craddock, 1992] I argue that the most relevant generalization of a probability term is the minimal element in the lattice of generalizations. This method adopts a principle similar to the principle of specificity described in the non-monotonic logic literature (see for example, Poole [Poole, 1990]). If there are several equally minimal elements, the one obtained by expanding the least statistically relevant feature is chosen, in accordance with Kyburg's [Kyburg, 1988] conditions of statistical epistemological relevance.

Example 12 Suppose Prob(R0(x, flys, moves) | R0(x, red, colour) ∧ R0(x, bird, type))ekb fails. There are two minimal elements in the lattice of generalizations:

(1) Prob(R0(x, flys, moves) | (∃y)(R0(x, y, colour)) ∧ R0(x, bird, type))
(2) Prob(R0(x, flys, moves) | R0(x, red, colour) ∧ (∃y)(R0(x, y, type)))

Let abs(r^p_(x,y)) be the absolute value of the correlation coefficient between two variables x and y given some property p. Suppose

abs(r^(type=bird)_(moves,colour)) < abs(r^(colour=yellow)_(moves,type))

then generalization (1) is
the appropriate generalization of the original query.

Given a probability term Prob(α|β)ekb, episodic knowledge contained in the EKB can be analyzed using well understood statistical techniques for measuring the association between features. These techniques can be used to identify the primitive properties described in β which are most relevant to predicting the truth of α.

Chaining

As a result of the partial nature of descriptions of experiences in probability terms and EKBs there are cases in which every generalization of a probability term will have an empty reference class. An intuitive solution is to chain rather than generalize the original probability term.

Example 13 Suppose the episodic knowledge contained in a particular EKB was collected using two experiments:

1. Experiment 1 recorded the observed properties of objects using only the features 'Virus' and 'Syndrome', and

2. Experiment 2 recorded the observed properties of objects using only the features 'Syndrome' and 'Cancer'.

Now, let α be defined as R0(l(6), skin, cancer), β as R0(l(6), HIV+, virus), and γ as R0(l(6), AIDS, syndrome). The probability term Prob(α|β)ekb fails. Furthermore, as the partial lattice of generalizations with non-empty reference classes is also empty, generalization also fails.

However, suppose that the conditional probability Prob(γ|β)ekb is high, e.g. if you are 'HIV+' then you have 'AIDS'. The reference class of the original probability term 'overlaps' the reference class of Prob(α|γ)ekb, e.g. if you have 'AIDS' then you have 'Skin cancer', and thus provides an estimate of the desired conditional probability (assuming 'noisy-or' relationships [Pearl, 1988]). In this example, retrieval has 'chained' from the property 'HIV+' to 'AIDS' to 'Skin cancer' in a manner similar to reasoning in associative models of human memory.
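The generalization step can be mimicked on the flat attribute encoding used earlier: when a query fails, expand (replace with a wildcard) the feature of β judged least statistically relevant to the target. This is only a sketch; the relevance ranking is stubbed out with fixed numbers rather than computed correlations, and all names and data are hypothetical.

```python
# Generalize a failed query by expanding the least relevant feature of beta.
ekb = [
    {"type": "bird", "colour": "yellow", "moves": "flys"},
    {"type": "bird", "colour": "yellow", "moves": "flys"},
    {"type": "dog",  "colour": "red",    "moves": "walks"},
]

def reference_class(beta):
    return [e for e in ekb if all(e.get(f) == v for f, v in beta.items())]

def generalize(beta, relevance):
    """Expand (drop) the least relevant feature of beta; return the new query."""
    least = min(beta, key=relevance)           # stand-in for a correlation rank
    return {f: v for f, v in beta.items() if f != least}

beta = {"type": "bird", "colour": "red"}       # fails: no red birds in this EKB
assert reference_class(beta) == []
relevance = {"type": 1.0, "colour": 0.1}.get   # stub: colour matters least
beta2 = generalize(beta, relevance)            # expands colour -> {"type": "bird"}
print(len(reference_class(beta2)))             # 2 birds, both of which fly
```

Repeatedly applying generalize walks down one path of the lattice of generalizations; choosing which feature to expand at each step is where the statistical relevance criteria do their work.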
Intuitively, if one set of observations overlaps another set then they are similar and related in some way. Depending on the strength of this similarity, predictions made from the new reference class should be representative of predictions made from the original reference class. The validity of this assumption depends upon how similar the new reference class is to the old and how well this similarity can be measured. For example, the metric for measuring similarity presented in the previous example is only one of many possibilities. Chaining can be defined recursively as follows:

Definition 7 Prob(α|γ)ekb is a chain on Prob(α|β)ekb iff

1. {(t1, ..., tm) : EKB ⊢ γ(x1/t1, ..., xm/tm)} ∩ {(t1, ..., tm) : EKB ⊢ β(x1/t1, ..., xm/tm)} ≠ ∅, i.e. overlap exists, and/or

2. ∃δ such that Prob(α|γ)ekb is a chain on Prob(α|δ)ekb and Prob(α|δ)ekb is a chain on Prob(α|β)ekb.

As with generalization a partial lattice of possible chainings can be defined. However, as there is a potentially large number of ways to chain a probability term the lattice may be very large. In [Craddock, 1992] I discuss several classes of heuristics for choosing the most appropriate chain. Some of these are concerned with the length or degree of the chain and are similar to those addressed by Touretzky et al. [Touretzky et al., 1991]. Others are concerned with assessing the similarity of reference classes using techniques found in machine learning [Aha et al., 1991]. In [Craddock, 1992] I examine the applicability of these heuristics using data obtained from the "UCI repository of machine learning databases and domain theories".

Relationship to other work

I assume that knowledge about specific past experiences is necessary for common sense reasoning. In [Craddock, 1992] I show that this assumption is supported by the psychological literature on human episodic memory.
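Clause 1 of Definition 7, the non-recursive overlap test, can be checked directly on the tuple encoding of ground atoms used earlier. A sketch with hypothetical names and data, following the HIV+/AIDS/skin-cancer example:

```python
# Two properties chain when the sets of labels satisfying them overlap,
# so an intermediate property gamma can stand in for an unmatched beta.
ekb = {
    ("l1", "HIV+", "virus"),    ("l1", "AIDS", "syndrome"),
    ("l2", "AIDS", "syndrome"), ("l2", "skin", "cancer"),
}

def labels_satisfying(value, feature):
    return {l for (l, v, f) in ekb if v == value and f == feature}

def chains(gamma, beta):
    """Clause 1 of Definition 7: gamma chains on beta iff their
    reference classes share at least one observation."""
    return bool(labels_satisfying(*gamma) & labels_satisfying(*beta))

beta  = ("HIV+", "virus")
gamma = ("AIDS", "syndrome")
print(chains(gamma, beta))  # True: l1 satisfies both properties
```

Clause 2 would extend this with a recursive search for an intermediate δ, which is where the lattice of possible chainings, and hence the need for the heuristics discussed above, arises.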
Of particular interest are results showing that humans make predictions using reference classes containing as few as one previous experience. Using such knowledge my model is able to treat common sense retrieval as an example of the reference class problem [Kyburg, 1983] [Kyburg, 1988]: the problem of identifying statistical knowledge about past experiences which is epistemically relevant to making an inductive inference about the unobserved properties of a new experience. I extend this problem to include the modification of empty reference classes. The extension provides the flexibility necessary to deal with incomplete knowledge.

My model assumes that a good basis on which to retrieve common sense knowledge is something like an associative database, where retrieval is merely look-up or pattern matching (the model is relevant to databases as statistics are computed from individuals and not from classes of unknown cardinality). This assumption forms the basis for efficient common sense reasoning in many models of reasoning, i.e., [Levesque, 1986], [Levesque, 1989], [Etherington et al., 1989], [Frisch, 1987], [Davis, 1990], [Davis, 1987], and [Stanfill and Waltz, 1986]. However, unlike many of these models and other machine learning techniques, i.e., [Aha et al., 1991], I reason directly with partial descriptions of experiences in probability terms and EKBs.

My model does not depend upon pre-defined common sense knowledge supplied in the form of defaults or preference orderings as in many representations of common sense knowledge, i.e., [Etherington, 1987], [Boutilier, 1991], [Poole, 1990], [Konolige, 1987]. Although such techniques are formally well understood they can be criticized as depending upon the inclusion of additional 'extralogical' knowledge [Asher and Morreau, 1991]. Nor does my model depend upon the episodic knowledge being 'pre-structured' into general concepts and relations.
Instead, episodic knowledge is manipulated in direct response to specific queries.

My model can use techniques similar to those used by machine learning in order to measure relevance and similarity. However, unlike machine learning my model does not apply the techniques to improve the predictability of properties from a static reference class; it applies the techniques in order to identify a new reference class from which the conditional probability of the properties can be predicted. As suggested by Kyburg [Kyburg, 1988], considerations as to the appropriate application of these techniques mirror considerations appropriate to the formalization of non-monotonic logics.

My model uses simple techniques to aggregate over sets of incomplete descriptions of experiences, obtaining common sense knowledge directly from an EKB. Furthermore, episodic knowledge is used directly to respond to the problem of incomplete knowledge. I argue that this approach is:

1. Potentially efficient as it reduces to database look-up in the simplest case, and

2. More flexible than existing models in dealing with experiences with which the common sense reasoner has no 'relevant' prior experience.

Discussion

The model presented in this paper is a principled way of incorporating specific knowledge about experiences into common sense reasoning. It provides:

An expressive and well understood framework for talking about experiences - knowledge which computers can readily obtain about the real world.

A potentially flexible and efficient mechanism for retrieving common sense knowledge: 1. Directly from past experiences, and 2. In the presence of sparse data.

A mechanism for mapping from inductive techniques for retrieving 'statistical' common sense knowledge to well understood deductive techniques for reasoning with it (see for example, Bacchus [Bacchus, 1990]).

A well defined, formal framework for talking about reference classes.
In particular, the framework allows:

1. The definition of a reference class of past experiences.

2. The modification of an empty reference class.

3. The retrieval of information from a reference class.

Common sense retrieval is not intended as a replacement for the powerful deductive techniques found in the common sense reasoning literature. Rather, I show how retrieval can be approached in a flexible and efficient manner. I believe that the inefficiency and inflexibility of many existing models of common sense reasoning is a direct result of not explicitly representing knowledge about specific past experiences.

There are two obvious problems with the model:

1. Its reliance on large sets of data. This is becoming more acceptable given recent advances in computational hardware. In [Craddock, 1992] I describe data reduction in which only distinct experiences - experiences with the same form - are represented as separate entries in the KB.

2. The methods for modifying the reference class briefly discussed in this paper are computationally expensive. However, I point out that it is reasonable to assume that a CSR will retrieve what it knows quickly and retrieve what it doesn't know much more slowly.

Ongoing work is currently addressing these problems.

Acknowledgements

I would like to acknowledge the help of P. Turney and D. Poole.

References

Aha, D.; Kibler, D.; and Albert, M. 1991. Instance-based learning algorithms. Machine Learning 6(1):37-66.

Asher, N. and Morreau, M. 1991. Commonsense entailment: A modal theory of nonmonotonic reasoning. In Proceedings of 12th IJCAI. 387-392.

Bacchus, F. 1990. Representing and reasoning with probabilistic knowledge: a logical approach to probabilities. MIT Press.

Boutilier, C. 1991. Inaccessible worlds and irrelevance: Preliminary report. In Proceedings of 12th IJCAI. 413-418.

Craddock, J. 1992. Access and retrieval of information from an episodic knowledge representation.
University of British Columbia. Forthcoming Ph.D. Thesis.
Davis, L. 1987. Genetic algorithms and simulated annealing. Morgan Kaufmann.
Davis, E. 1990. Partial information and vivid representations. In Representation in mental models.
Etherington, D.; Borgida, A.; Brachman, R.; and Kautz, H. 1989. Vivid knowledge and tractable reasoning: Preliminary report. In International Joint Conference on AI. 1146-1152.
Etherington, D. 1987. A semantics for default logic. In Proceedings of 8th IJCAI. 495-498.
Frisch, A. 1987. Knowledge retrieval as specialised inference. Technical Report 214, University of Rochester.
Konolige, K. 1987. On the relation between default theories and autoepistemic logic. In Proceedings of 8th IJCAI. 394-401.
Kyburg, H.E. 1969. Probability theory. Englewood Cliffs, N.J.: Prentice-Hall.
Kyburg, H.E. 1983. The reference class. Philosophy of Science 50:374-397.
Kyburg, H. 1988. Epistemological relevance and statistical knowledge. Technical Report 251, University of Rochester.
Levesque, H. 1986. Making believers out of computers. Artificial Intelligence 30:81-108.
Levesque, H. 1989. Logic and the complexity of reasoning. Technical Report KRR-TR-89-2, University of Toronto.
Pearl, J. 1988. Probabilistic reasoning in intelligent systems: Networks of plausible inference. Morgan Kaufmann.
Poole, D. 1990. Dialectics and specificity: conditioning in logic-based hypothetical reasoning (preliminary report). In CSCSI
Schaffer, C. 1991. Overfitting avoidance as bias. In Proceedings of the IJCAI Workshop on Evaluating and Changing Representation in Machine Learning, Sydney, Australia.
Stanfill, C. and Waltz, D. 1986. Toward memory-based reasoning. Communications of the ACM 29(12):1213-1228.
Touretzky, D.S.; Thomason, R.H.; and Horty, J.F. 1991. A skeptics menagerie: Conflictors, preemptors, reinstaters, and zombies in non-monotonic inheritance. In Proceedings of 12th IJCAI. 478-483.

666 Representation and Reasoning: Case-Based
Daniel C. Edelson
Institute for the Learning Sciences
Northwestern University
Evanston, IL 60208

Abstract

Case-based teaching systems, like good human teachers, tell stories in order to help students learn. A case-based teaching system engages a student in a challenging task and monitors his actions looking for opportunities to tell stories that will assist the learning process. In order to produce stories at the appropriate moment, a case-based teaching system must have a library of stories that are indexed according to how they should be used and a set of reminding strategies to retrieve stories when they are relevant. In this paper, I discuss CreANIMate, a biology tutor that uses stories to help teach elementary school students about animal morphology. In particular, I discuss the reminding strategies and indexing schemes that enable the system to achieve its educational objectives. These reminding strategies are example remindings, similarity-based remindings, and expectation violation remindings.

Introduction

Good teachers are good story tellers. This fact is the inspiration for a new architecture for computer-based educational systems known as case-based teaching1 (CBT) systems. A case-based teaching system presents stories to help a student learn. Like a good human story teller who, in addition to being a master of delivery, knows the right story to tell at the right moment, an effective case-based teaching system must be able to evaluate a student's situation and identify an appropriate story to help the student learn from that situation. In this paper, I describe the architecture of case-based teaching systems and discuss a system called CreANIMate, which uses stories to help teach animal morphology to elementary school children.

1The term Case-based Teaching has been used to describe any type of teaching that makes use of cases (Cognition and Technology Group, 1990; Gragg, 1940).
In this paper, CBT refers to a specific class of teaching systems described by Schank (1991a). In particular, I focus on the reminding strategies of the system.

The Case-Based Teaching Architecture

People learn well when they are engaged in a task that interests and challenges them. A case-based teaching system engages a student in a task that will provide him with rich opportunities to learn. The system capitalizes on these opportunities by presenting stories to the student that help him to learn from his situation. Thus, the student learns both from his interactions with his task and from stories that he encounters as a result of his interactions. To provide this learning environment, a case-based teaching system consists of two interdependent components, a task environment and a storyteller. The task environment presents the student with a motivating, challenging task. Typically, a task environment consists of a simulation, a problem-solving environment, or an interactive dialogue. While the student is interacting with the task environment, the storyteller monitors the student's actions looking for opportunities to present stories that will assist his learning. Stories can take the form of advice from experts, narratives describing personal experience, or depictions of actual situations from the domain under study (for more detail, see Schank, 1991a). Multimedia technology makes it possible for a case-based teaching system to present stories in a variety of forms, including video, animation, and text. A case-based teaching system can even improve on the capabilities of a human teacher because of its ability to instantly retrieve stories from very large story-bases on mass-media storage devices. The primary justification for teaching with stories, besides the observation that good teachers use them (Schank, 1991b), comes from the theory of case-based reasoning (Kolodner et al., 1985; Riesbeck & Schank, 1989).
Since people have been observed using case-based reasoning in a variety of situations as diverse as firefighting, medicine, and architecture (Kolodner, 1991), it follows that an important way to support this style of reasoning is to provide students with cases. Uncovering empirical evidence to support this claim is one aspect of the ongoing research in case-based teaching.

Edelson 667
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

When a person acquires a collection of cases for a domain, the value of this case library is limited by that person's ability to recall useful cases when they are relevant. Thus, a case-based teaching system must provide a student with an appropriate organization for his case library. The way an individual retains a story is influenced by the context in which he hears the story and by his previous knowledge and experience. A case-based teaching system assists the student's interpretation and integration process by presenting stories in context. That is, the system only presents a story to a student when the information in the story is directly relevant to the student's situation. This enables the student to index the story in his memory with respect to the context in which he sees the story. In building a case-based teacher capable of presenting stories in this way, two important research issues emerge:

Vocabulary. What is the appropriate vocabulary for indexing stories?
Reminding strategies. What are valuable strategies for retrieving stories?

An important advantage of the case-based teaching architecture is that it enables a system to present appropriate stories to the student without requiring that the system understand the contents of the stories fully.
In the same way that a case-based reasoner is able to retrieve cases before it understands exactly how it will use them, a case-based teaching system is able to present stories without understanding them as completely as a student will. Both case-based architectures are able to perform this way because they have cases that are indexed according to situations in which they are useful, as opposed to being indexed according to their content. Traditionally, intelligent tutoring systems (ITS's) have needed to understand all of the information that they want to impart to the student. This complete knowledge enables systems to identify "buggy" behavior, missing information, or misconceptions on the part of the student, e.g., MENO-II (Soloway et al. 1983), WEST (Burton & Brown 1976), GUIDON (Clancey 1987). A case-based teaching system sacrifices some of this ability in favor of ease of scale-up and a more expressive knowledge communication2 strategy. To summarize, a case-based teaching system uses a task environment which engages and challenges the student to allow the student to encounter situations that are rich in opportunities for learning. At the same time, the storyteller component of the system monitors the student's situation looking for opportunities to present stories that will assist the student's learning process. The context in which the student sees the stories helps that student to interpret and integrate those stories in a way that will assist him to reason from cases in the future.

2This term is from Wenger (1987).

The CreANIMate program is a system designed to teach elementary school children about animals, their physical features, and how they survive in the wild. It helps them to understand the important connections between the way that an animal looks, the way it behaves, and how it survives. To engage a student, CreANIMate invites him to create a new animal by taking an existing animal and modifying it in some way.
For example, a student might request a gerbil with large claws. The task of creating a new animal was selected because of its inherent appeal, because it encourages creativity, and because it provides many opportunities for learning. A student has the opportunity to learn about an animal by modifying it in much the same way that a scientist studies a system by perturbing it from its natural state and observing the ramifications of that perturbation. Once the student proposes an animal, the program initiates a dialogue in which the student considers the viability of his animal. For example, the program might conduct a discussion of why it might be helpful for a gerbil to have large claws, or how gerbils use the paws that they currently have. The discussion is accompanied by video clips of actual animals in the wild that illustrate relevant principles. The program might show clips illustrating how different animals use their claws or how certain types of claws are especially well suited for specific purposes. The underlying lesson of the interaction is the relationship between the physical features of animals and the ways in which the animals use those features to help them to survive in their environment. Which examples of these basic principles a student sees is determined entirely by the particular student's interests. The CreANIMate task environment consists of a question-and-answer dialogue in which the system helps the student to explore the ramifications of changing some aspect of an animal. Each CreANIMate dialogue is based around a question that is fundamental to understanding the ways in which animals survive in the wild. We call these questions explanation questions, in the terminology of Schank (1986). Explanation questions are the questions in any domain that a knowledgeable individual asks to construct an explanation for a phenomenon in that domain.
Some explanation questions in animal morphology are, "Why is it useful for this animal to perform a particular action?" and "How does this animal use a particular feature to help it survive?" We emphasize explanation questions because these questions provide the framework for representing a domain. The questions that underlie a domain provide the structure that is critical for indexing cases as well as for explaining new observations. When a student sees a story in the context of an explanation question, that question helps to emphasize the features that the student can use to index that story in his own memory. The goal of the CreANIMate task environment is to conduct dialogues that cover as many explanation questions as possible. The centrality of questions in CreANIMate follows in the tradition of GUIDON and WHY (Stevens & Collins 1977). Edelson (1991) discusses in greater detail the use of questions in an instructional dialogue. The following short transcript of the program's operation gives a sense of the dialogues it conducts:

Suppose you could create a new animal by taking an existing animal and changing it in some way... What would you make?
STUDENT> a butterfly that can fight
Here is your animal before we change it... [Shows a picture]
OK, but first we need a reason for your butterfly to fight. Why do you want your butterfly to fight?
STUDENT> so it can protect itself
Good idea. Kangaroo rats kick dirt to help them defend themselves against predators. Would you like to see that?
STUDENT> yes
[VIDEO: Kangaroo Rat Kicks Dirt at Snake]
I know of some other animals that defend themselves against predators, but they don't fight. For example, some schools of fish use the help of others so they can defend themselves against predators. Would you like to see that?
STUDENT> yes
[VIDEO: Shark Protects School of Fish]
Kangaroo rats are not the only animals that fight in order to defend themselves. For example, bees fight to defend themselves.
Would you like to see that?
STUDENT> no
So, one reason a butterfly might fight is to defend itself. Why would you like your butterfly to fight?
STUDENT> Show me more reasons.

As the transcript shows, the student begins the dialogue by selecting a modification for an animal, and the program responds by asking an important question about that animal, e.g. "Is there a reason you want your butterfly to fight?" In the ensuing dialogue, the program presents videos that illustrate potential answers to that question, finally giving the student the opportunity to commit to an answer to the question for his animal. The actual program has an appealing graphical interface in which the student responds to questions by selecting options with a pointing device such as a mouse and by typing in partial sentences. The current prototype, which contains more than 200 animals, 600 animal attributes, and 130 video clips, is being evaluated in use by fourth through seventh graders.

Reminding in CreANIMate

When a person thinks of a story to tell, we say that he or she is reminded of a story. We usually think of reminding as a passive process, something that happens to someone, not something a person does. Upon examination of the process, one finds a set of processes and representations devoted to actively extracting features from the world and using those features to index useful cases or stories (Schank et al., 1990). Since teaching is an expertise, teachers have a particular set of heuristics that enable them to observe their students and retrieve appropriate examples, explanations, and stories (three forms of instructional cases). In the CreANIMate system, we employ several reminding heuristics that enable the system to retrieve stories to achieve specific pedagogical objectives. Each reminding heuristic places specific demands on the information available in the indices of stories. In the remainder of this section, I describe three reminding strategies.
For each strategy, I give an example from the operation of the system, a description of the indexing information that enables this type of reminding, and a description of the reminding algorithm itself.

Example Remindings

The bread and butter reminding for the CreANIMate system, just as it is for any teacher, is the example. For instance, in a dialogue in which the student asked for a tortoise that could run fast, the system introduced the explanation question, "What features could help a tortoise to run fast?" When the student asked for suggestions, the program responded:

Cheetahs run fast. Do you know what cheetahs have to help them run fast? (I have an awesome video about that.)
STUDENT> They have long legs.
That's right. Cheetahs have long, muscular legs to help them to run fast. Would you like to see that?
STUDENT> Yes.

In response to the question asking what features could help a tortoise to run fast, the system retrieved a story that shows how the long, muscular legs of cheetahs enable them to run fast. The system suggests answers to questions in the form of example video clips. The strategy for identifying explanation questions and retrieving examples resembles the issues and examples strategy of Burton & Brown (1976). This dialogue demonstrates an advantage of the case-based architecture over a system that contained the same type of knowledge but no cases to use as examples, e.g. WHY, GUIDON3. Such a system could only respond by saying, "Long, muscular legs can be used to run fast."

3GUIDON uses cases, but not as examples.

To accomplish these remindings, the system relies on specific information being available in an index. An index must detail the explanation questions that the story can exemplify. The relationship between physical features and the functions they support is one of the central lessons of the system.
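As a rough illustration of the idea, the feature/function entries of an index, together with a feature abstraction hierarchy, are enough to support example reminding. The sketch below is mine, in Python, with invented names and toy data rather than the system's actual frames:

```python
# Illustrative sketch (not the system's code) of example reminding:
# a story indexed under a subtype feature also answers explanation
# questions about its supertype. All names and data are invented.

# Abstraction hierarchy: each feature maps to its immediate subtypes.
SUBTYPES = {
    "LONG-LEGS": ["LONG-MUSCULAR-LEGS"],
}

# Story indices: each records the feature/function pairs seen in the clip.
INDICES = [
    {"story": "CHEETAH-PURSUING-PREY",
     "feafuns": [("LONG-MUSCULAR-LEGS", "RUN-FAST")]},
    {"story": "FISHING-BAT",
     "feafuns": [("WINGS", "FLY")]},
]

def feature_and_subtypes(feature):
    """The feature itself plus everything below it in the hierarchy."""
    result, frontier = set(), [feature]
    while frontier:
        f = frontier.pop()
        result.add(f)
        frontier.extend(SUBTYPES.get(f, []))
    return result

def example_remindings(requested_feature):
    """Stories answering 'Why would it be useful to have <feature>?'"""
    matches = feature_and_subtypes(requested_feature)
    return [idx["story"] for idx in INDICES
            if any(feat in matches for feat, _ in idx["feafuns"])]

# A request for long legs retrieves the cheetah story via the subtype link.
print(example_remindings("LONG-LEGS"))  # prints ['CHEETAH-PURSUING-PREY']
```

The point of the sketch is only the subtype walk: a request phrased at one level of the hierarchy retrieves stories indexed at any level below it.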
Therefore, one part of an index may indicate the use of a physical feature, e.g., long, muscular legs, for a function, to run fast, in the story. For every feature/function pair that appears in a video clip, there is a corresponding entry in the index indicating the presence of that pair in the story. The same is true for actions that are used to achieve a survival behavior, for instance, to run fast to pursue prey. Thus, part of the index for the cheetah story is4:

[INDEX CHEETAH-PURSUING-PREY-CHEETAH
 :ANIMALS ([ANIMAL CHEETAH])
 :FEAFUNS ([FEAFUN :FEATURE [FEATURE LONG-MUSCULAR-LEGS]
                   :FUNKSHUN [FUNKSHUN RUN-FAST]]...)
 :PLANS ([PLAN :FUNKSHUN [FUNKSHUN RUN-FAST]
               :BEHAVIOR [BEHAVIOR PURSUE-PREY]])...]

For each feature/function pair or function/behavior pair in an index, the system is able to use it as an example for two different explanation questions. For example, the feature/function pair, long, muscular legs to run fast, can be used as an example in either a dialogue about "What features must an animal have in order to run fast?" or "What can long, muscular legs be used for?" The algorithm for example reminding is relatively straightforward. Example reminding is always triggered by a request from the student. Suppose the student had requested a tortoise with long legs. First, the system recognizes that the student's request gives rise to the explanation question, "Why would it be useful for a tortoise to have long legs?" This initiates the example reminding process, which searches for ways animals use long legs. Since all of the objects in the system are linked in abstraction hierarchies, the system is able to recognize that a story about a subtype is also a story about its supertype. The system knows that [FEATURE LONG-MUSCULAR-LEGS] is a subtype of [FEATURE LONG-LEGS] so it is able to recognize that the cheetah story above is an example of a reason that animals have long
It could then show the cheetah story as an example of one reason animals have long ‘This is a concise printed representation of a complex data structure. The first word in a [. . .] form is the type of object being represented. Feature means physical fea- ture, Funkshun refers to the action that a physical feature is used for. (It is spelled junkshun because junction al- ready names a Common Lisp data structure.) I also refer to funkshuns as actions in the text. Feafun refers to a feature/function pair. A Behavior is a high-level survival behavior and a Plan is a function/behavior pair. Each one of these objects is implemented in the system as a frame in a highly interconnected semantic network. legs-to run fast. To summarize, the example reminding algorithm simply consists of 1) identifying candidate explanation questions relevant to the student’s request and 2) searching down abstraction hierarchies starting from the student’s requested modification looking for indices that illustrate the explanation question. One of the risks of teaching with examples is that the student may draw overly specific conclusions from the examples that they see. Therefore, one objective of a case-based teacher is to assist the student in draw- ing inferences at the right level of abstraction. The strategy that CreANIMate uses to help the student form appropriate generalizations is called similarity- based reminding. In similarity-based reminding, the system retrieves a story that illustrates the same basic principle as a previous example, but is sufficiently dif- ferent to allow the student to form a generalization at an appropriate level of abstraction. The following ex- ample of a similarity-based reminding was initiated by a student’s request for a tortoise that could run fast: Cheetahs run fast. Do you know why cheetahs run fast? (I have an impressive video about that. ) STUDENT> So they can catch other animals. That is right. Would you like to see that? 
STUDENT> yes
[VIDEO: Cheetah Pursuing Prey]
That reminds me of a cool video. Fishing bats also move fast in order to get food. Only, instead of running fast to pursue their prey, they fly to pounce on their prey. Would you like to see that?
STUDENT> yes
[VIDEO: Fishing Bat]

In this example, the program presented a video of a cheetah that runs fast to pursue its prey. This story was produced as an example of a reason that animals run fast. However, to ward off the possibility of the student drawing an overly-specific conclusion, e.g., "Animals always pursue their prey by running fast," the program presents a similar story about an animal that moves fast to get its food, but instead of running fast, it flies. In order to perform similarity-based reminding, the system must be able to identify stories that are similar, but not identical, to the given example. This is done by adding abstraction information to every plan or feafun in an index. The following is a portion of the index for the cheetah story in the example above:

[INDEX CHEETAH-PURSUING-PREY-CHEETAH
 :PLANS ([PLAN :FUNKSHUN [FUNKSHUN RUN-FAST]
               :BEHAVIOR [BEHAVIOR PURSUE-PREY]
               :ABSTRACTION [PLAN :FUNKSHUN [FUNKSHUN MOVE-FAST]
                                  :BEHAVIOR [BEHAVIOR HUNT]]]...)...]

The plan, run fast to pursue prey, is annotated with the abstraction move fast to hunt. The system uses this information to help it identify stories that are suitable similarity-based remindings for the cheetah story. The system understands the abstraction move fast to hunt to mean that if another story contains a plan that falls under the abstraction move fast to hunt, then that story is an appropriate similarity-based reminding. The story about the fishing bat, which flies to pounce on its prey, is retrieved because move fast to hunt is an abstraction of fly to pounce on prey.
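The "falls under the abstraction" test described above can be sketched as follows. This is an illustrative Python rendering with invented names and toy data, not the system's implementation:

```python
# Hedged sketch of similarity-based reminding. A candidate story qualifies
# when its plan falls under the shown plan's abstraction but is not the
# identical plan the student just saw. Hierarchies and names are invented.

# Action/behavior abstraction hierarchies (child -> parent).
PARENT = {
    "RUN-FAST": "MOVE-FAST",
    "FLY": "MOVE-FAST",
    "PURSUE-PREY": "HUNT",
    "POUNCE-ON-PREY": "HUNT",
}

def falls_under(concept, abstraction):
    """True if `concept` is `abstraction` or lies below it in the hierarchy."""
    while concept is not None:
        if concept == abstraction:
            return True
        concept = PARENT.get(concept)
    return False

def similarity_remindings(shown_plan, stories):
    """Stories whose plan generalizes the shown plan without repeating it."""
    abs_action, abs_behavior = shown_plan["abstraction"]
    results = []
    for story in stories:
        action, behavior = story["plan"]
        if (action, behavior) == shown_plan["plan"]:
            continue  # identical plan: no generalization value
        if falls_under(action, abs_action) and falls_under(behavior, abs_behavior):
            results.append(story["name"])
    return results

cheetah = {"plan": ("RUN-FAST", "PURSUE-PREY"),
           "abstraction": ("MOVE-FAST", "HUNT")}
stories = [
    {"name": "FISHING-BAT", "plan": ("FLY", "POUNCE-ON-PREY")},
    {"name": "CHEETAH-2",   "plan": ("RUN-FAST", "PURSUE-PREY")},
]
print(similarity_remindings(cheetah, stories))  # prints ['FISHING-BAT']
```

The abstraction stored in the index thus acts as an upper bound on the search: anything under move fast to hunt is similar enough to generalize from, while the identical plan is excluded.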
In the current example, the explanation question is "Why might it be useful for a tortoise to run fast?" The cheetah's running fast to pursue prey is presented as a possible answer for that question. Other possible answers will follow. Since the student is at risk of concluding that the only way to pursue prey is to run fast, the similarity-based reminding tries to find things that are similar to running fast that will help the student form an appropriate generalization. To do so, the similarity-based reminding algorithm performs an ever-widening search starting with running fast looking for actions that are like running fast in support of behaviors like pursue prey. The search is restricted by the abstraction in the index so that it will only accept an index that is encompassed by the abstraction move fast to hunt. Thus, similarity-based reminding allows the program to present stories that help a student to generalize a principle up to an appropriate level of abstraction. In order to do so, stories are not indexed just according to the specific principle the story illustrates, but to an appropriate abstraction of that principle.

Expectation Violation

The greatest opportunity for learning takes place when an expectation that you have is violated by experience. This is what Schank (1982) calls failure-driven learning. In this case, failure refers to the failure of an expectation to explain an observation, not the failure of an individual to achieve a goal. The failure of an expectation to explain an observation triggers learning. A case-based teaching system attempts to capitalize on expectations that a student might have in order to promote failure-driven learning. Expectation failures do not just promote learning; they provoke interest. In general, stories are interesting to the extent that they challenge our expectations rather than confirm them. An experience that conforms exactly to expectation is boring.
Thus, while CreANIMate is unable to judge what the student's actual expectations are, it is still assured of capitalizing on the student's interests with expectation violation remindings. In order to perform expectation-violation remindings, the system must contain information about what expectations a student might have. An indexer adds this information when entering stories into the system. These expectations are entered as one of three types of rule: 1) Only-rules, e.g., "Only mammals have hair." 2) No-rules, e.g., "No mammals lay eggs." 3) All-rules, e.g., "All birds fly." For representational purposes, we categorize expectations as either exclusive or inclusive expectations. Exclusive expectations are based on the exclusion of some animal from some category and inclusive expectations are predicated on inclusion in the category. Only-rules and no-rules lead to exclusion violations and all-rules lead to inclusion violations. Because of space limitations, I am only able to show the inclusive reminding strategy here. While the rest of the information in the system's knowledge base is considered to be true under all circumstances, the information expressed in these rules is treated simply as likely student expectations.

5The idea for using abstractions this way originated with Richard Osgood (1990).

Inclusive Expectation Remindings

Inclusive expectation remindings are triggered by all-rules. In the following example, the reminding was triggered by the expectation that all birds flee predators by flying.

Quail fly to flee predators. I have a dramatic video about that. Would you like to see that?
STUDENT> yes
[VIDEO: Hawk chases quail]
Did you know that not all birds fly to flee predators? Do you know how ostriches flee predators? (I have an awesome video about that.)
STUDENT> They run.
Yes, that is right. Ostriches run fast to flee predators. Would you like to see that?
STUDENT> yes
[VIDEO: Ostrich runs fast]

This default expectation is entered in the system as the following all-rule:

[ALL-RULE :ANIMALS [ANIMAL BIRD]
 :VALUE [PLAN :FUNKSHUN [FUNKSHUN FLY]
              :BEHAVIOR [BEHAVIOR FLEE-PREDATOR]]
 :EXPECT-VIOL [FUNKSHUN FLY]
 :INDEX [INDEX OSTRICH-RUNS-FAST-OSTRICH]]

This rule reads, "Expect that all birds fly to flee predators. In the ostrich story, this expectation is violated by the substitution of something else for to fly." The reminding strategy is triggered by a story which exemplifies the expectation - a story of a quail flying to flee a hawk. When the system presents the first story, the inclusion reminding algorithm initiates a search for any inclusive expectations about flying and fleeing predators. In this case, it finds the above all-rule associated with the action to fly and finds that the animal in the initial story, the quail, fits the requirement of inclusion in the category bird. The storyteller then takes advantage of the opportunity to present the reminding.

To summarize, remindings from expectation violations are designed to capitalize on students' default expectations. They are triggered by rules. When the system presents a story, it determines whether there is a relevant rule by searching the hierarchy for rules that correspond to the elements of the current story. If there is such a rule, and the animal in the current story corresponds to the category specified by the rule, then the storyteller presents the story.

Conclusion

The case-based teaching architecture is designed to take advantage of the way people naturally learn and reason from stories. A case-based teacher presents a student with an engaging task that provides rich opportunities for learning. Like a good human teacher, a case-based teaching system is able to capitalize on a student's situation in order to present appropriate stories to further the student's learning.
Unlike a human, a computer-based system is able to draw instantly from an extremely large database of stories, recorded in a variety of media from graphics to video. To be effective, however, such a system must be able to retrieve the right story at the right time. In the course of developing CreANIMate, we have developed a knowledge representation and a collection of reminding strategies that enables the system to retrieve stories to achieve specific educational objectives. Currently, these strategies include: 1) example reminding, to provide examples; 2) similarity-based reminding, to assist in generalization; and 3) expectation violation, to challenge students' expectations. Each of these reminding strategies is activated by a particular context in order to provide the student with a story that is directly relevant to his situation. However, these reminding types are just the beginning. Future research will explore reminding strategies that include counterexamples, extremes, opposites, and others.

Acknowledgements

This work is supported in part by a grant from the IBM Program for Innovative Uses of Information Technology in K-12 education, and by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under contract N00014-91-54092. The Institute for the Learning Sciences was established in 1989 with the support of Andersen Consulting, part of the Arthur Andersen Worldwide Organization. The CreANIMate team includes, in addition to the author, the following indexer, programmers, and grad students: Bob Kaeding, Ken Greenlee, Riad Mohammed, Diane Schwartz, John Cleave, and Will Fitzgerald. I would like to thank Andy Fano, Vivian Choy Edelson, and the anonymous reviewers for helpful comments on earlier drafts of this paper.

References

Burton, R.R., & Brown, J.S. 1976. A Tutoring and Student Modeling Paradigm for Gaming Environments. In Colman, R., and Lorton, P. Jr. (Eds.) Computer Science and Education. ACM
SIGCSE Bulletin, 8(1), 236-246.
Clancey, W.J. 1987. Knowledge-Based Tutoring: The GUIDON Program. Cambridge, MA: MIT Press.
Cognition and Technology Group at Vanderbilt, The. 1990. Anchored Instruction and Its Relationship to Situated Cognition. Educational Researcher 19(6), 2-10.
Edelson, D.C. 1991. Why do Cheetahs Run Fast? Responsive Questioning in a Case-Based Teaching System. In Proceedings of the International Conference on the Learning Sciences, 138-144. Charlottesville, VA: Association for the Advancement of Computing in Education.
Gragg, C.I. 1940. Because Wisdom Can't Be Told. Harvard Alumni Bulletin, 78-84.
Kolodner, J.L. 1991. Improving Human Decision Making through Case-Based Decision Aiding. AI Magazine 12(2): 52-67.
Kolodner, J.L., Simpson, R.L., and Sycara-Cyranski, K. 1985. A Process Model of Case-Based Reasoning in Problem-Solving. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence.
Osgood, R. 1990. Personal Communication.
Riesbeck, C.K., & Schank, R.C. 1989. Inside Case-Based Reasoning. Hillsdale, NJ: Lawrence Erlbaum.
Schank, R.C. 1982. Dynamic Memory. Cambridge: Cambridge University Press.
Schank, R.C. 1986. Explanation Patterns: Understanding Mechanically and Creatively. Hillsdale, NJ: Lawrence Erlbaum Associates.
Schank, R.C., Osgood, R., et al. 1990. A Content Theory of Memory Indexing. Technical Report #2, Institute for the Learning Sciences, Northwestern University.
Schank, R.C. 1991a. Case-Based Teaching: Four Experiences in Educational Software Design. Technical Report #7, Institute for the Learning Sciences, Northwestern University.
Schank, R.C. 1991b. Tell Me a Story: A New Look at Real and Artificial Intelligence. New York: Simon and Schuster.
Soloway, E.M., Rubin, E., Woolf, B.P., Bonar, J., and Johnson, W.L. 1983. MENO-II: An AI-based Programming Tutor. Journal of Computer-Based Instruction 10(1): 20-34.
Stevens, A.L., & Collins, A. 1977.
The Goal Structure of a Socratic Tutor. Proceedings of the National ACM Conference, Seattle, Washington, 256-263. New York: Association for Computing Machinery.
Wenger, E. 1987. Artificial Intelligence and Tutoring Systems. Los Altos, CA: Morgan Kaufmann Publishers.

672 Representation and Reasoning: Case-Based
Model-Based Case Adaptation*

Eric K. Jones
Victoria University of Wellington
Wellington, New Zealand
eric.jones@comp.vuw.ac.nz

Abstract

In this paper, we demonstrate an important role for model-based reasoning in case adaptation. Model-based reasoning can allow a case-based reasoner to apply cases to a wider range of problems than would otherwise be possible. We focus on case adaptation in BRAINSTORMER, a planner that uses abstract advice to help it plan in the domain of political and military policy as it relates to terrorism. We show that by equipping a case adapter with an explicit causal model of the planning process, cases presented as advice can be flexibly applied to difficulties that arise at a variety of different stages of planning.

Introduction

Most knowledge-based systems cannot use their prior knowledge flexibly: they cannot use what they already know unless it exactly matches the needs of the current problem. Case-based reasoning has been proposed as a framework for addressing this limitation [Kolodner et al., 1985]. Traditional systems employ a knowledge-poor process of pattern matching or unification to relate prior knowledge to new problems. Case-based reasoning, in contrast, countenances a knowledge-intensive process of bringing prior knowledge to bear, thereby aiming to increase the range of problems that a given knowledge base can address [Kolodner et al., 1985]. A case-based reasoner proceeds by retrieving prior knowledge in the form of a case that may only partly fit the needs of a current problem, then adapting it to resolve any discrepancies with the problem. Because a case adapter can cope with a range of discrepancies, a given case can be applied to a wider range of problems than a conventional system employing only knowledge-poor methods such as unification. In this paper, we identify an important role for model-based reasoning in case adaptation. We focus on case adaptation in BRAINSTORMER [Jones, 1991b].
BRAINSTORMER is a planner that uses abstract advice to help it plan in the domain of political and military policy as it relates to terrorism.

*This research was conducted at the Institute for the Learning Sciences at Northwestern University, and was supported in part by the Air Force Office of Scientific Research (AFOSR). The Institute for the Learning Sciences was established in 1989 with the support of Andersen Consulting, part of The Arthur Andersen Worldwide Organization.

The planner first tries to solve problems it is given on its own; if it gets into trouble, it elicits advice from a user. The user responds with abstract planning advice in the form of a case, which BRAINSTORMER proceeds to adapt to fit the problem, by transforming it into specific, contextualized information that resolves the planner's difficulty. Adaptation in BRAINSTORMER is thus primarily a task of operationalization in the sense of [Mostow, 1983]: converting generic knowledge in an abstract vocabulary into specific useful knowledge in an operational vocabulary.

BRAINSTORMER's adapter works to resolve several kinds of discrepancies between advice and planning problems. In this paper we focus on just one of these, which we term mismatch in stage of the planning process. See [Jones, 1991b] for a discussion of several others. In the next two sections, we describe the inputs to adaptation and explain what we mean by "mismatch in stage of the planning process." We then outline BRAINSTORMER's approach to resolving mismatches of this kind, which involves reasoning with a causal model of the planning process.

Culturally-Shared Models of Planning

A major goal of our research is to develop instructable systems that can be advised in a high-level, human-like vocabulary [Jones, 1991c]. We start from the belief that people communicate advice about planning in terms of high-level culturally-shared models of the planning process.
Models describe planning actions, states they produce, plans, goals, and causal relations between the actions and the states. We have attempted to identify culturally-shared models and to construct a vocabulary sufficient to represent advice expressed in terms of these models. To this end, many of the examples of advice that BRAINSTORMER handles, including the ones in this paper, are representations of proverbs, encoded in BRAINSTORMER's high-level vocabulary of planning concepts. Proverbs are culturally-shared cases: they identify generic strategies that everyone uses to deal with commonly-occurring problems in planning and social interaction. As such, proverbs provide a rich source of data on culturally-shared models of planning [Schank, 1986; White, 1987]. Different proverbs implicitly presuppose different culturally-shared models. Representing a large number of proverbs has proved an effective strategy for developing and testing our representational vocabulary. It is, however, important to emphasize that we have no special commitment to proverbs other than as a source of data on culturally-shared models of planning.

Jones 673
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

As we will see, culturally-shared models of planning play two distinct but related roles in BRAINSTORMER: first, they provide a substrate for representing high-level advice; second, schemas encoded in this vocabulary form the declarative component of a model-based reasoning process for transforming high-level advice into operational planner data structures.

The Problem

BRAINSTORMER's cases embody abstract planning advice. It follows that adaptation in BRAINSTORMER presents a challenge that many other systems do not have to face: most cases can be made operational in a number of different ways, each of which impacts a different stage of the planning process.
As an example, suppose BRAINSTORMER is attempting to come up with plans for the goal of preventing terrorism and, asking for advice, is presented with the proverb an old poacher makes the best keeper. This proverb can be paraphrased as follows: in a stereotyped attack-defense situation, a former attacker is a good choice for the actor of a plan of defense. There are a number of different ways this advice might be made operational. Which one is appropriate depends upon what stage of the planning process the planner has reached at the time it requests advice.

Suppose, for example, that BRAINSTORMER is considering a plan of defense, and is searching for candidates for the actor. At this point, the proverb should be interpreted as suggesting try an ex-terrorist. Alternatively, suppose the planner is considering a plan of defense against terrorism, and is trying to decide between two plausible candidates for the actor of this plan. The proverb should then be interpreted as suggesting pick the candidate with the most experience or expertise. As a third scenario, suppose that the planner is stuck at the first stages of planning and has no concept of how to proceed towards its goal of preventing terrorism. In that event, the proverb could be interpreted as try a plan of defense with an ex-terrorist actor. BRAINSTORMER represents each of these interpretations as different operational planner data structures.

In short, the adapter faces a problem of mismatch in stage of planning process. Cases the adapter is handed are often initially represented in a form that can be used fairly directly by one stage of the planning process but that must be substantially transformed in order to assist other stages of planning. If BRAINSTORMER is to use cases it is handed as flexibly as possible, it has to be able to carry out the relevant inferences. This turns out to centrally involve reasoning with a causal model of the planning process, as we now describe.
Figure 1: Information flow in the adapter

Overview of the Approach

We begin by distinguishing three separate vocabularies for advice. First is the external vocabulary in which the user presents advice. This vocabulary puts as few constraints as possible on the form of the advice: cases can be expressed in terms of a variety of culturally-shared models of planning. Second, we distinguish a privileged subset or kernel of the external vocabulary called the canonical vocabulary. This is a vocabulary of planning actions consistent with a particular culturally-shared model of the planning process, the canonical model. The canonical model is so called because any action that can be represented in the external vocabulary can also be redescribed in terms of a planning action in the canonical model. The canonical model describes planning in terms of plan design and plan execution, building on Schank's idea of the goal-plan-action-state chain [Schank, 1986]. Third, there is the operational vocabulary of the planner's data structures, in which the outputs of the adapter are encoded.

BRAINSTORMER is equipped with three kinds of knowledge for transforming advice from one vocabulary to another:

1. Knowledge for translating actions expressed in the external vocabulary into the canonical vocabulary.
2. Knowledge for causal reasoning within the canonical model of the planning process.
3. Rules for translating expressions in the canonical vocabulary into the planner's operational vocabulary.

This knowledge gives rise to the information flow depicted in figure 1. The system first converts external advice into the vocabulary of the canonical model, then variously reexpresses it by causal reasoning within this model; finally, it translates canonical representations into the operational vocabulary of the planner.
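The three-stage flow just described can be sketched in miniature. The following Python toy is our own illustration, not BRAINSTORMER's actual code: all function names and the tiny knowledge base are invented. It shows external advice being translated into the canonical vocabulary, expanded by causal reasoning within the canonical model, and finally mapped to operational planner structures:

```python
# Illustrative sketch of figure 1's three-stage information flow.
# All names and the toy knowledge base below are our own assumptions.

def stage1_to_canonical(advice, translations):
    """Stage 1: map an external-vocabulary action to its canonical counterpart."""
    return translations.get(advice["type"], advice["type"])

def stage2_causal(canonical_action, causal_model):
    """Stage 2: relate the canonical action to other stages of planning."""
    return causal_model.get(canonical_action, [canonical_action])

def stage3_operational(canonical_actions, op_rules):
    """Stage 3: emit operational data structures for actions that have a rule."""
    return [op_rules[a] for a in canonical_actions if a in op_rules]

def adapt(advice, translations, causal_model, op_rules):
    c = stage1_to_canonical(advice, translations)
    return stage3_operational(stage2_causal(c, causal_model), op_rules)

# Toy knowledge base: "choose" is redescribed as a design-part, whose
# causal context includes the surrounding top-down design, which in turn
# has an operational translation rule.
translations = {"choose": "design-part"}
causal_model = {"design-part": ["design-part", "top-down-design"]}
op_rules = {"top-down-design": {"frame": "plan-for", "plan": "defend-plan"}}

print(adapt({"type": "choose"}, translations, causal_model, op_rules))
# -> [{'frame': 'plan-for', 'plan': 'defend-plan'}]
```

The point of the factoring mirrors the paper's: stage 1 funnels many external expressions into one canonical form, so the causal knowledge at stage 2 need only be written once.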
This three-stage organization of knowledge is well suited to the task of resolving mismatches between stages of the planning process. Causal reasoning within the canonical model (stage 2) allows the system to relate advice to different stages of the planning process. Translating external advice into the canonical vocabulary (stage 1) serves to minimize the size of the knowledge base needed for this causal reasoning. As the figure illustrates, different external expressions of advice can give rise to the same canonical representations, which can then be operated on by identical causal knowledge. Rules at stage 3 further simplify causal reasoning by encapsulating implementation details of BRAINSTORMER's planner.

(def-schema top-down-design =self
  object =obj
  partial-designs (design-part
                    input part-spec object =obj
                    output part-spec =pspec object =obj)
  output (spec object =obj
               part-specs (=pspec))
  &indices (partial-designs output))

Figure 2: Definition of a top-down-design schema.

We now describe how model-based case adaptation is implemented as a process of schema-based reasoning, then we describe the model-based case adaptation process in greater detail.

Modeling the Planning Process

Culturally-shared models of the planning process are represented as collections of schemas, which are structured descriptions of planning actions and information that those actions manipulate. Complex actions are represented as partially-ordered collections of sub-actions linked by key enabling conditions and results. Schemas are encoded in a frame-based representation language using a slot-filler notation. A typical schema definition is shown in figure 2. This schema is part of the canonical model of the planning process. It describes a process of top-down design, in which a specification of an artifact to be designed is built up by a sequence of design-part actions, each of which specifies the design of a component of the artifact.
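A frame-style schema in the spirit of figure 2 can be modeled compactly. The sketch below is our own reconstruction in Python, not BRAINSTORMER's representation language; it captures just the slot-filler structure and the &indices-style retrieval indexing:

```python
from dataclasses import dataclass, field

# Minimal frame-style schema: a type, a slot->filler map, and the names
# of the slots used to index the schema in memory (cf. &indices).
@dataclass
class Schema:
    type: str
    slots: dict = field(default_factory=dict)
    indices: tuple = ()

top_down_design = Schema(
    type="top-down-design",
    slots={
        "object": "=obj",
        "partial-designs": [  # a "list" slot: indefinitely many fillers
            Schema("design-part", {
                "input": Schema("part-spec", {"object": "=obj"}),
                "output": Schema("part-spec", {"object": "=obj"}),
            })
        ],
        "output": Schema("spec", {"object": "=obj", "part-specs": ["=pspec"]}),
    },
    indices=("partial-designs", "output"),
)

def index_types(schema):
    """Return the filler types under which this schema is stored in memory."""
    out = []
    for slot in schema.indices:
        filler = schema.slots[slot]
        fillers = filler if isinstance(filler, list) else [filler]
        out += [f.type for f in fillers if isinstance(f, Schema)]
    return out

print(index_types(top_down_design))  # -> ['design-part', 'spec']
```

As the paper notes for figure 2, the schema is indexed under design-part and spec, the types of the fillers of its partial-designs and output slots.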
This schema can be applied to any top-down design task that does not involve interactions between the design of sub-parts. In particular, simple hierarchical planning in the absence of goal interactions can be described using this schema, as we illustrate below. The &indices slot is treated specially; it specifies how the schema is to be indexed in memory.

Logically speaking, schemas are universally quantified implications that relate a schema type to a conjunction of slot-filler assertions. The fillers of slots are existentially quantified, except for "list" slots such as partial-designs in figure 2, which can take an indefinite number of fillers and are universally quantified. Top-down-design, for example, can be translated as the following first-order formula:

∀x Isa(x, top-down-design) ⊃
  ∃obj, sp [Object(x, obj) ∧ Output(x, sp) ∧ Isa(sp, spec) ∧ Object(sp, obj) ∧
    ∀dp [Partial-designs(x, dp) ⊃
      [Isa(dp, design-part) ∧
       ∃p1, p2 [Input(dp, p1) ∧ Isa(p1, part-spec) ∧ Object(p1, obj) ∧
                Isa(p2, part-spec) ∧ Object(p2, obj) ∧ Part-specs(sp, p2)]]]]

Implementing Model-Based Adaptation

The input to adaptation is a query from the planner and a culturally-shared case that a user has presented as advice. The task of adaptation is to transform the initial representation of the case into an operational planner data structure that answers the query. Model-based case adaptation is implemented as a process of forward reasoning from advice to queries, and employs a schema-based inference engine that operates over the three kinds of knowledge outlined above. We now describe relevant aspects of the schema-based inference engine, then sketch how it is used to implement the three stages of model-based reasoning.

Schema-Based Reasoning in BRAINSTORMER

Schema-based reasoning involves activating schemas to "explain" a user's advice in terms of planning actions that the advice can assist.
The resulting explanations are abstract descriptions of planning actions that the planner could carry out using the advice; the advice fills a slot of the schema. This explanation process is similar in essence to motivational analysis as described in [Charniak, 1988], with several extensions.

Four basic inference mechanisms are required for schema-based reasoning: schema activation, slot filling, redescription inference, and "if-added" rules. A schema is activated by retrieving it from memory and instantiating it. Schemas are stored in memory in terms of the types of other schemas that they can be plausibly retrieved to explain, as specified by the &indices slot of the schema. The top-down-design schema shown in figure 2 above, for example, is indexed in terms of the schemas design-part and spec, corresponding to the partial-designs and output slots of the schema. A schema is instantiated by creating a new constant, abductively asserting that the schema's type holds of that constant, and then creating skolem terms as prototypical fillers of each non-"list" slot. Similarly, functions for generating prototypical slot fillers on demand are also associated with each "list" slot.

Schemas are activated with the aim of explaining advice from the user. To form an explanation, however, the advice has to be filled into the slot of the instantiated schema that corresponds to the index used to retrieve it. For example, if an instance of a design-part is present in the advice, a top-down-design schema will be activated to explain it. The explanation is formed by filling the design-part into the partial-designs slot of the instantiated schema. Slot filling is accomplished by a process of abductive unification, in which the representation to be explained is hypothesized to be equal to a prototypical filler of the appropriate slot of the schema that explains it, if this equality is consistent with the system's knowledge of them both [Charniak, 1988].
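A toy version of this slot-filling step can make the idea concrete. The sketch below is our own simplification (property dictionaries standing in for schema instances): the advice is hypothesized equal to a prototypical slot filler unless the two carry contradictory known properties:

```python
# Toy abductive unification over property dictionaries (our own sketch,
# not BRAINSTORMER's mechanism).

def consistent(a, b):
    """Two descriptions conflict only if they assign different values
    to the same known property."""
    return all(b.get(k) == v for k, v in a.items() if k in b)

def abductive_unify(advice, prototype):
    if advice.get("isa") != prototype.get("isa"):
        return None                   # wrong type: cannot unify
    if not consistent(advice, prototype):
        return None                   # contradiction: cannot unify
    return {**prototype, **advice}    # hypothesize the equality, merge

proto = {"isa": "design-part", "object": "defend-plan"}
advice = {"isa": "design-part", "parameter": "actor"}
print(abductive_unify(advice, proto))
# -> {'isa': 'design-part', 'object': 'defend-plan', 'parameter': 'actor'}

# A contradictory description fails to unify:
print(abductive_unify({"isa": "design-part", "object": "attack-plan"}, proto))
# -> None
```

The "abductive" part is that the equality is merely hypothesized when consistent, rather than proved, so the unification can be retracted if the resulting explanation later fails.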
As an additional complication, a schema can be retrieved to explain advice that does not abductively unify with any of its slot fillers, if the advice can be redescribed in a different way that does abductively unify. BRAINSTORMER uses a specialized inference mechanism called redescription inference for this purpose; see [Jones, 1991a] for details. A third inference mechanism, if-added inference, allows one to write forward-chaining rules that trigger upon filling a slot.

Using Schema-Based Reasoning

We have now sketched the inferential machinery that BRAINSTORMER uses to implement model-based case adaptation. Adaptation entails a search through a space of hypotheses established by schema instantiation, abductive unification, and redescription inference. Adaptation is successful if a small set of schema instances can be found that link an initial representation of advice to an operational planner data structure that abductively unifies with a query from the planner.

As we discussed above, adaptation proceeds in three stages: conversion to canonical form, causal reasoning, and conversion to operational form (see figure 1). These stages are not distinguished in procedural terms so much as in terms of the content of the schemas and inference rules that they manipulate.

At the start of stage 1, if the advice is not already in the form of a recommended action (it might, for example, supply information relevant to performing a planner action), then schemas representing planner actions are activated to explain the advice. Next, if the resulting schema instances are not part of the canonical model, redescription inference is invoked to transform them into canonical representations.

During stage 2 (causal reasoning), schemas in the canonical model of the planning process are activated to provide further explanatory context for the representations produced at stage 1.
These schemas describe larger chunks of the planning process as complexes of smaller planning actions linked by key enabling conditions and results. Variable bindings represented using the notation =<symbol> provide constraints between the actions and their enabling conditions and results. For example, a variable binding =pspec in the top-down-design schema of figure 2 constrains the output slot to reflect the results of any sub-actions filled into the partial-designs slot.

When a slot of a schema instance is filled by advice (or by a schema instance that explains some advice), variable bindings associated with the slot are used to propagate causal implications of the advice to representations of earlier and later phases of the planning process. Suppose, for example, that a design-part instance is filled into the partial-designs slot of a top-down-design. The partial design, or part-spec, produced by the design-part will be propagated to a representation of the complete design, or spec, stored in the output slot of the top-down-design. Propagation occurs automatically, as a side effect of slot filling.

Stage 3 of adaptation (conversion to operational form) is implemented using "if-added" rules that trigger upon filling slots of schemas in the canonical model.

An Example

In this section, we present an example of case adaptation, in which BRAINSTORMER uses model-based reasoning to resolve a mismatch in stage of the planning process. We elaborate on the third of the scenarios described above, in which the planner gets stuck early in planning, while attempting to retrieve a plan for the goal of preventing terrorism.

?plan-for
  plan ?plan
  goal prevent-goal
         state terrorist-attack
                 actor Islamic-fundamentalist

Figure 3: A query for a plan

optimal-evaluation
  object part-spec
           object defend-plan
           parameter actor
           value attacker-stereotype isa =type
  context goal-conflict
            actor1 attacker-stereotype isa =type

Figure 4: Initial representation of the case.
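The variable-binding propagation described above can be sketched as a simple substitution over nested structures. The code below is our own illustration (the =variable convention is from the paper's figures; the data and function names are invented): filling one slot binds =pspec, and the binding then shows up automatically wherever else the variable occurs:

```python
# Sketch of constraint propagation via =variable bindings: replace every
# =variable in a nested structure with its bound value.

def propagate(bindings, expr):
    if isinstance(expr, str) and expr.startswith("="):
        return bindings.get(expr, expr)   # substitute if bound
    if isinstance(expr, dict):
        return {k: propagate(bindings, v) for k, v in expr.items()}
    if isinstance(expr, list):
        return [propagate(bindings, v) for v in expr]
    return expr

# Filling the partial-designs slot binds =pspec (and =obj); propagation
# then completes the enclosing schema's output spec, as in figure 2.
schema_output = {"isa": "spec", "object": "=obj", "part-specs": ["=pspec"]}
bindings = {
    "=obj": "defend-plan",
    "=pspec": {"isa": "part-spec", "parameter": "actor",
               "value": "attacker-stereotype"},
}
print(propagate(bindings, schema_output))
```

In the real system this happens as a side effect of slot filling, so causal implications of advice flow to representations of earlier and later planning phases without extra inference steps.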
At that point, the planner issues a query for information to resolve its difficulty; the adapter's task is to transform the advice it is handed into an answer to this query. The planner's query asks for a plan for the goal of preventing terrorism, as shown in figure 3. Note that if instead the planner were stuck at a different stage of the planning process, its information requirements would be different, so it would issue a different query.

Once the planner issues the query, a user provides BRAINSTORMER with advice in the form of a case represented in the flexible external vocabulary. In the current example, the user supplies a representation of the proverb an old poacher makes the best keeper. We represent this proverb as an evaluation that can inform a choice, since the proverb's central point is that old poachers are best. See figure 4. This representation can be paraphrased as follows: "in the context of a goal conflict involving a stereotyped attacker, the optimal actor for a plan of defense is someone who satisfies the attacker's stereotype."

The adapter's job is to transform this case representation into an answer to the query of figure 3. The resulting representation is shown in figure 5. It is completely different from the original representation, primarily because it informs an entirely different stage of the planning process. The initial representation of the case provides information relevant to choosing the actor of a plan, an action that takes place during plan refinement. The transformed representation, in contrast, provides information that BRAINSTORMER uses during plan retrieval, an earlier stage in the planning process in which the planner uses features of goals to retrieve appropriate plan schemas.

choose =ch
  options (=pspec)
  evaluations (optimal-evaluation
                object part-spec =pspec
                        object defend-plan
                parameter actor
                value attacker-stereotype)

Figure 6: An instance of a choose schema.
In the remainder of this section, we sketch the reasoning required to effect this transformation.

In the first stage of adaptation, the adapter converts the external advice (the initial representation of the proverb as an optimal-evaluation) into planner actions in the canonical vocabulary. The advice is transformed in two steps. First, the adapter activates schemas that represent planning actions, to explain the advice. One schema that is retrieved is choose, which represents the idea of choosing between options on the basis of evaluations of alternatives. The schema is indexed under evaluation (among other indices), as the filler of its evaluations slot is a schema of this type. The choose schema is retrieved using the optimal-evaluation in the advice as a retrieval cue, as this is a subtype of evaluation. The resulting explanation is shown in figure 6. The second step is to redescribe the choose action as an action in BRAINSTORMER's canonical vocabulary, in which plan formulation is viewed as a design task. The result is an instance of a design-part schema, which feeds into the next stage of adaptation.

During stage 2 of adaptation (causal reasoning within the canonical model), the adapter first activates an overarching top-down-design schema to explain the design-part schema, then activates a goal-plan-action-state schema to explain the top-down-design. Each explanation is a schema instance one of whose slots is filled with another schema instance it explains. As a side effect of slot filling, features of the explained schema instance are propagated within the new explanation, establishing key enabling conditions and results. The result of this process is an explanation of the advice in terms of a plausible history of plan construction that starts with the adoption of a hypothetical defend-goal and ends when the planner uses the knowledge in the advice to specify the actor of a defend-plan for this goal. (See figure 7.)
The final step of adaptation produces an operational planner data structure that answers the planner's query and allows it to continue planning. Recall that the planner originally requested a plan-for frame nominating a particular plan for an existing goal to prevent terrorism. A rule indexed on the plan and goal slots of the goal-plan-action-state frame of figure 7 generates the desired plan-for frame (figure 5, above), which describes the connection between the fillers of these slots using the operational vocabulary of the planner. This plan-for frame answers the planner's original query, and adaptation is complete.

goal-plan-action-state
  goal defend-goal
  plan defend-plan =obj
  select-plan
    -- top-down-design
         object defend-plan =obj
         partial-designs
           -- design-part
                input part-spec
                        object defend-plan =obj
                        parameter actor
                output part-spec =pspec
                        object defend-plan =obj
                        parameter actor
                        value attacker-stereotype
         output spec
                  object =obj
                  part-specs (=pspec)

Figure 7: An instance of goal-plan-action-state.

Discussion and Related Work

One of the key aims of case-based reasoning is to improve the flexibility with which prior knowledge can be used as compared to traditional (e.g. schema-based) approaches. Flexibility is measured in terms of the number of different kinds of problems that given prior knowledge can solve within fixed time constraints. If knowledge can be used more flexibly, a smaller knowledge base will provide equivalent problem-solving power.

It follows that the amount of knowledge that adaptation itself requires must be less than the decrease in knowledge base size entailed by adopting a case-based approach. Thus, a theory of adaptation must identify generally-applicable kinds of knowledge for resolving discrepancies between problems and cases retrieved to solve them, and must in particular show that the amount of knowledge required for adaptation increases less than linearly with the size of the case base.
Model-based reasoning in BRAINSTORMER meets this condition. When building BRAINSTORMER, we factored out knowledge for reasoning about the planning process from knowledge the planner needs to solve planning problems. As we have seen, the former is represented as an explicit abstract causal model of the planning process; the latter remains in the case base. The two kinds of knowledge are combined dynamically during adaptation.

The knowledge about the planning process, if not factored out, would have to have been represented over and over again throughout the case base. A traditional schema-based system, for example, would require multiple representations of each of BRAINSTORMER's cases: one (or sometimes several) representations would be required for each stage of the planning process that could conceivably benefit from the case's content. This would substantially increase the size of the overall knowledge base.

Of course, representing the content of cases requires making implicit or explicit reference to some aspect of the model of the planning process: for example, an old poacher makes the best keeper is represented in terms of information relevant to the planning action of choosing a component of a plan. However, the case representer is free to represent cases in terms of any aspect of planning that facilitates exposition. BRAINSTORMER allows this kind of flexibility, because model-based reasoning can resolve discrepancies between the stage of planning referenced by the advice and the needs of any particular planning problem.

The utility of explicit self models has long been recognized in the field of knowledge acquisition. Starting with Teiresias [Davis, 1982], a number of knowledge acquisition systems (e.g. ASK [Gruber, 1989] and MOLE [Eshelman et al., 1986]) have employed explicit models of the problem solver under construction to simplify knowledge entry.
Knowledge is typically added in response to the system's failing to handle some new example. Unfortunately, new knowledge must be entered in a form that is very close to fully operational. Moreover, failure diagnosis typically requires the user to have a detailed knowledge of the reasoning process of the underlying system. (See [Birnbaum et al., 1990] and [Smith et al., 1985], however, for attempts to automate diagnosis completely.) Existing systems employ model-based reasoning to help users manage the complexity of entering data at the operational level, as opposed to facilitating an informal, high-level dialog that sidesteps this complexity.

BRAINSTORMER, in contrast, uses model-based reasoning to relieve the user of the burden of spelling out details of how the advice is to be used, permitting higher-level, more flexible interactions between the user and the planner. [Jones, 1991c] describes BRAINSTORMER's relation to other knowledge acquisition and advice-taking systems in greater detail.

Conclusion

BRAINSTORMER is an exploratory prototype and should be evaluated as such. A complete, practical system that can flexibly employ high-level advice remains a distant prospect; nevertheless, it is important to design experimental systems that explore the hard problems that lie en route.

In this paper, we have discussed one such problem: the need to flexibly relate abstract advice to a wide range of problem-solving situations. A central contribution of our research is to demonstrate an important role for model-based reasoning. Reasoning with an explicit model of the planning process allows given abstract advice to be brought to bear on a variety of different stages of planning. It follows that a case-based reasoner equipped with a model-based reasoning component can adapt abstract knowledge more flexibly.
More generally, any system that needs to operationalize advice to assist ongoing problem solving can benefit from reasoning with an explicit causal model of its own problem-solving process.

What are the limitations of our approach? Any conclusions are necessarily tentative. The current system only works on a small number of examples, so the amount of additional knowledge the adapter would need to handle a much wider range of advice is uncertain. Nevertheless, in light of our discussion in the previous section, we suspect that the knowledge required will increase sublinearly in the size of the case base. Might model-based reasoning become intractable as we add more vocabulary? The current system uses a very simple forward-chaining control regimen, which may have to be replaced with some form of means-ends analysis to scale acceptably. These and other issues await further research.

References

Birnbaum, Lawrence; Collins, G.; Freed, M.; and Krulwich, B. 1990. Model-based diagnosis of planning failures. In Proceedings AAAI-90, Eighth National Conference on Artificial Intelligence, Boston. AAAI. 318-323.
Charniak, E. 1988. Motivation analysis, abductive unification, and nonmonotonic equality. Artificial Intelligence 34:275-295.
Davis, Randall 1982. Teiresias: Applications of meta-level knowledge. In Davis, Randall and Lenat, D.B., editors 1982, Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill, New York. 229-484.
Eshelman, Larry and McDermott, J. 1986. MOLE: A knowledge acquisition tool that uses its head. In Proceedings AAAI-86, Fifth National Conference on Artificial Intelligence, Philadelphia. AAAI. 950-955.
Gruber, Thomas R. 1989. Exemplar-Based Knowledge Acquisition. Academic Press, San Diego.
Jones, Eric K. 1991a. Adapting abstract knowledge. In Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, Chicago, IL. Lawrence Erlbaum Associates.
Jones, Eric K. 1991b.
The Flexible Use of Abstract Knowledge in Planning. Ph.D. Dissertation, Yale University.
Jones, Eric K. 1991c. Knowledge refinement using a high-level, non-technical vocabulary. In Machine Learning: Proceedings of the Eighth International Workshop, Chicago, IL. Morgan Kaufmann.
Kolodner, Janet L.; Simpson, R.L.; and Sycara-Cyranski, K. 1985. A process model of case-based reasoning in problem-solving. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA. IJCAI, Inc.
Mostow, David J. 1983. Machine transformation of advice into a heuristic search procedure. In Michalski, Ryszard S.; Carbonell, J.G.; and Mitchell, T.M., editors 1983, Machine Learning: An Artificial Intelligence Approach. Tioga Publishing Company, Cambridge, MA. 367-404.
Schank, Roger C. 1986. Explanation Patterns: Understanding Mechanically and Creatively. Lawrence Erlbaum Associates, Hillsdale, NJ.
Smith, Reid; Winston, H.; Mitchell, T.; and Buchanan, B. 1985. Representation and use of explicit justifications for knowledge-base refinement. In Proceedings Ninth International Joint Conference on Artificial Intelligence, Los Angeles. IJCAI, Inc. 675-680.
White, Geoffrey M. 1987. Proverbs and cultural models. In Holland, Dorothy and Quinn, N., editors 1987, Cultural Models in Language and Thought. Cambridge University Press, New York. 151-172.

678 Representation and Reasoning: Qualitative
Qualitative Simulation Based on a Logical Formalism of Space and Time*

Z. Cui, A.G. Cohn and D.A. Randell
Division of Artificial Intelligence
School of Computer Studies
University of Leeds, Leeds, LS2 9JT, England
{cui,agc,dr}@dcs.leeds.ac.uk

Abstract

We describe an envisionment-based qualitative simulation program. The program implements part of an axiomatic, first order theory that has been developed to represent and reason about space and time. Topological information from the modelled domain is expressed as sets of distinct topological relations holding between sets of objects. These form the qualitative states in the underlying theory and simulation. Processes in the theory are represented as paths in the envisionment tree. The algorithm is illustrated with an example of a simulation of phagocytosis and exocytosis - two processes used by unicellular organisms for garnering food and expelling waste material respectively.

Introduction

Envisionment-based simulation programs used in Qualitative Reasoning (QR) are now well established. The notion of an envisionment originated in de Kleer's NEWTON program, but now appears as a central program design feature in many QR simulation programs - see Weld and de Kleer (1990). An envisionment takes a set of predetermined qualitative states, and expresses them in the form of a graph or a tree. This represents a temporal partial ordering of all the qualitative states a modelled physical system can evolve into given some indexed state. The term "envisionment" refers to the generated tree of possible states of a modelled system, the term "envisioning" to the actual process of deriving this tree. Envisionments can be attainable or total. Attainable envisionments generate the tree from some particular initial state of the modelled system; total envisionments are generated from all possible states - see Weld and de Kleer (1990) for examples of both types. Our simulation program currently produces an attainable envisionment.
The simulation program described below shares many of its general design features with Kuipers' (1986) QSIM approach to qualitative simulation. QSIM uses a set of symbols that represent physical parameters of a modelled system, together with a set of constraint equations (which are taken to be qualitative analogues of standard differential equations commonly used in mathematics and physics). The qualitative simulation starts with a structural description of the modelled domain (being the description of the parameters and constraint equations which relate the parameters to each other) and an initial state. The program produces a tree which represents the initial state of the system as the root node, and possible behaviours of the modelled system as paths in the tree from the root node to its leaf nodes.

In our simulation program, QSIM's physical parameters map to a mutually exhaustive and pairwise disjoint set of dyadic relations that can hold between pairs of regions. Similarly, QSIM's set of transition rules map to a set of transition rules in our theory (which determine the manner in which pairs of objects can change their degree of connectivity over time), and QSIM's constraint model maps to domain independent and dependent constraints that apply to states, and between adjacent states. While both QSIM and our simulation program take particular physical systems as a model, unlike QSIM, our simulation program first requires the user to abstract out a logical description of the physical model in terms of a set of topological relationships holding between the set of objects in the modelled domain. An analogue of QSIM's consistency filtering also appears in our simulation program.

The structure of the rest of the paper is as follows. First we outline that part of the underlying theory upon which the present simulation program is based.

*The support of the SERC under grant no. GR/G36852 is gratefully acknowledged.
Then we discuss the simulation program. We give an example model and resulting envisionment, and finally we discuss related and future work.

Overview of the Spatial Theory

The formal theory which underpins the simulation program (see Randell and Cohn 1989, Randell, Cohn and Cui 1992 and Randell 1991) is based upon Clarke's (1981, 1985) calculus of individuals based on "connection" and is expressed in the many sorted logic LLAMA (Cohn 1987). The theory supports regions having either a spatial or temporal interpretation. Informally, these regions may be thought to be infinite in number, and any degree of connection from external contact to identity is allowed in the intended model.

Cui, Cohn, and Randell 679
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

The basic part of the formal theory assumes a primitive dyadic relation: C(x, y) read as 'x connects with y' which is defined on regions. C(x, y) is reflexive and symmetric. In terms of points incident in regions, C(x, y) holds when regions x and y share a common point. Using the relation C(x, y), a basic set of dyadic relations are defined. These relations are DC (is disconnected from), P (is a part of), PP (is a proper part of), = (is identical with), O (overlaps), DR (is discrete from), PO (partially overlaps), EC (is externally connected with), TP (is a tangential part of), NTP (is a nontangential part of), TPP (is a tangential proper part of), NTPP (is a nontangential proper part of), TPI (is the identity tangential part of), and NTPI (is the identity nontangential part of). The relations P, PP, TP, NTP, TPP and NTPP support inverses. Of the defined relations, the set DC, EC, PO, TPP, NTPP, TPI, NTPI, and the inverses for TPP and NTPP form a mutually exhaustive and pairwise disjoint set. From now on we shall refer to this particular set as the set of base relations defined solely in terms of the primitive relation C.
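To make the base relations concrete, here is a small illustrative sketch (not the authors' implementation): regions are modelled as closed intervals on the real line, where C(x, y) holds exactly when the intervals share a point, and the base relations (excepting NTPI, which requires open regions) are classified from the interval endpoints. All names below are ours.

```python
# Illustrative sketch: the base relations for regions modelled as closed
# intervals [a, b] on the real line.  C(x, y) holds when the intervals
# share a point; 'i' marks an inverse relation.

def classify(x, y):
    """Return the base relation holding between closed intervals x and y."""
    (a1, b1), (a2, b2) = x, y
    if b1 < a2 or b2 < a1:
        return "DC"                      # disconnected: no shared point
    if x == y:
        return "TPI"                     # identical regions
    if min(b1, b2) == max(a1, a2):       # intersection is a single point
        return "EC"                      # externally connected
    if a2 <= a1 and b1 <= b2:            # x is a proper part of y
        return "TPP" if a1 == a2 or b1 == b2 else "NTPP"
    if a1 <= a2 and b2 <= b1:            # y is a proper part of x
        return "TPPi" if a1 == a2 or b1 == b2 else "NTPPi"
    return "PO"                          # partial overlap

print(classify((0, 2), (2, 4)))   # EC
print(classify((0, 4), (1, 2)))   # NTPPi
print(classify((0, 3), (1, 4)))   # PO
```

Because the cases are tried in order (disconnection, identity, external contact, parthood, overlap), exactly one label is returned for every pair, mirroring the mutually exhaustive and pairwise disjoint character of the base set.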
A pictorial model for this set of base relations (excepting the relation NTPI) is given in Figure 1.¹ Atomic formulae whose predicate symbol is a base relation will be called basic atoms. Note that all the relations described above can be expressed as disjunctions of sets of base relations.

For the temporal part of the theory assumed by the simulation program, we first introduce temporal regions into our ontology, which we call periods. Periods are subdivided into intervals and moments, where a moment is defined as a period that has no constituent parts such that one part is before another. In addition to periods, a new primitive relation of temporal precedence 'B(x, y)' read as 'x is before y' is added to the formalism and axiomatised to be irreflexive and transitive. A set of 13 dyadic temporal relations are then defined - see Randell (1991). These may be viewed as analogues of all the 13 interval relations common to interval logics - see e.g. Allen and Hayes (1983). However, for the purposes of this paper, only the relation Meets(x, y), which is irreflexive and transitive, is needed. Two periods x and y are then said to meet if and only if x is before y and no other period z exists such that x is before z, and z is before y.

In the general theory, an ontological distinction is made between physical objects (bodies) and the regions of space they occupy. Bodies and regions are represented in the formal theory as disjoint sorts. The mapping between the two is done by introducing a transfer function 'space(x, y)' read as 'the space occupied by x at y', that takes a body at a given moment in time, and maps this to the region of space it occupies.

¹In this paper we make the assumption that all the regions are topologically closed (i.e. include their boundaries). The relation NTPI is only satisfied if the regions it is predicated on are topologically open. Thus we ignore NTPI here.
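The defining condition for Meets can be checked mechanically over a finite universe of periods. The sketch below is illustrative only: periods are modelled as intervals (start, end), and the primitive B is given one concrete reading ('x ends no later than y starts'), which is our assumption rather than the paper's axiomatisation.

```python
# Sketch of the paper's definition of Meets, checked over a finite universe
# of periods.  Periods are modelled as intervals (start, end); B(x, y),
# 'x is before y', is read here as: x ends no later than y starts.

def B(x, y):
    return x[1] <= y[0] and x != y

def meets(x, y, universe):
    """x meets y iff x is before y and no period z lies strictly between."""
    return B(x, y) and not any(B(x, z) and B(z, y) for z in universe)

universe = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(meets((0, 1), (1, 2), universe))   # True: nothing fits in between
print(meets((0, 1), (2, 3), universe))   # False: (1, 2) lies between them
```

Note that the check is only as good as the universe supplied: in dense time any gap between x and y contains a witness z, but a finite universe must actually contain one for the second conjunct to fire.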
The transfer function is used in the theory to define a set of ternary relations of the form Φ(x, y, z) which are used in a set of envisioning axioms, meaning that body x is in relation Φ to body y during period z. However, in this paper, the temporal parameters in formulae used in the simulation program remain implicit, e.g. the formula NTPP(n, a) abbreviates the temporally indexed formula NTPP(n, a, t) - where t denotes a specific period during which the state obtains.

The general theory contains a set of envisioning axioms and encodes a set of theorems (derivable from the part of the theory described above) in the form of a transitivity table - cf. Allen's (1983) transitivity table. The envisioning axioms describe direct topological transitions that can be made between pairs of regions. Thus, for example, given two regions that DC in one state, a direct transition to EC is allowed, and from EC back to DC or on to PO, and so on. These axioms rule out certain transitions - for example no direct transition between DC and PO is allowed. A pictorial representation of the envisioning axioms is illustrated in Fig. 1.

Figure 1: A pictorial representation of the base relations and their direct topological transitions.

The theory also uses a precomputed transitivity table for the set of dyadic base relations described above - for details see Randell, Cohn and Cui (1992). Each R3(a, c) entry in the table represents a disjunction of all the possible dyadic relations holding between regions a and c, for each R1(a, b) and R2(b, c) conjunction - where R1, R2, R3 are elements of the set of base relations in the theory. The transitivity table is used in the simulation program for checking consistency of state descriptions in the envisioning process.
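For readers without the figure, the direct transitions can be tabulated as an adjacency table. The table below is our reading of Figure 1 (an assumption, since the figure itself is not reproduced here); self-transitions are included because a relation may persist into the next state, and 'i' marks an inverse relation.

```python
# A plausible encoding of the direct topological transitions of Figure 1
# (our reading of the figure, not the authors' data).  Self-transitions are
# included because a relation may persist into the next state.
NEXT = {
    "DC":    {"DC", "EC"},
    "EC":    {"EC", "DC", "PO"},
    "PO":    {"PO", "EC", "TPP", "TPPi"},
    "TPP":   {"TPP", "PO", "NTPP", "TPI"},
    "NTPP":  {"NTPP", "TPP", "TPI"},
    "TPPi":  {"TPPi", "PO", "NTPPi", "TPI"},
    "NTPPi": {"NTPPi", "TPPi", "TPI"},
    "TPI":   {"TPI", "TPP", "TPPi", "NTPP", "NTPPi"},
}

# Direct transitions are symmetric: r1 -> r2 implies r2 -> r1.
assert all(r1 in NEXT[r2] for r1, succ in NEXT.items() for r2 in succ)

# Equality has the maximum branching rate of 5, as the paper notes later.
print(max(len(s) for s in NEXT.values()))   # 5
```

The ruled-out transitions mentioned in the text (e.g. DC directly to PO) correspond exactly to the missing edges in this table.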
The general theory also includes an additional primitive function 'conv(x)' read as 'the convex hull of x', which is axiomatised and is used to generate a further set of dyadic relations. These additional relations are used to describe regions that are either inside, partially inside or outside other regions - see Randell, Cohn and Cui (1992). As with the set of relations defined solely in terms of C, the extended theory including the new set of inside and outside relations also admits the possibility of constructing several further sets of base relations, depending upon the degree of representational detail required by the user. For the basic extension to the theory, the set of base relations extends from 9 to 23. However, here we simply concentrate upon the set of base relations defined solely in terms of C, which turns out to be sufficient to demonstrate the general utility of our approach.

The Simulation Program

State descriptions in the simulation program are represented as conjunctions of ground atomic formulae. The program first of all takes an initial state description, then evolves successive states according to the restrictions imposed by direct topological transitions encoded in the envisioning axioms, by sets of constraints that apply within a state or between states, and by any sets of add or delete rules that sanction the introduction and deletion of named entities in the modelled domain respectively. A consistency check is made for each state, first for the initial state, and then for all potential evolved states generated in the envisioning process. The envisioning process terminates when, for each path generated in the envisionment tree, the last state repeats an earlier one. Each path of states S1, S2, ... corresponds to a sequence of periods t1, t2, ... such that Meets(ti, ti+1), and the state description of Si obtains during ti.
Each complete path corresponds to a possible behaviour of the physical model as predicted by the program. However, because the transition rules always allow the possibility that the relationship between two entities continues indefinitely, each initial subpath also corresponds to a predicted behaviour. The program requires a complete n-clique as the initial state, i.e. n(n - 1)/2 atomic formulae. This requirement is needed for consistency and constraint checking by the program to function correctly.²

Constraints

The simulation program supports intrastate and interstate constraints. Intrastate constraints are constraints that apply within a state, and interstate constraints between adjacent states - that is to say, between consecutive states, or states which meet. For example, in the physical system which is used to illustrate this simulation program below - namely modelling phagocytosis of unicellular organisms - an intrastate constraint would be the assertion that the cell's nucleus is always part of the cell, and an interstate constraint would be the fact that once the food is ingested during phagocytosis and becomes a part of the amoeba, it will remain so. Formally, both types of constraints assume the following forms:

Intrastate constraint: Φ, where Φ is a quantifier free formula, and all terms are variables or constants (in this case all variables are implicitly quantified). Note here that in the current implementation of the theory, Φ must be composed of basic atoms.

Interstate constraints: Φ → (R0 ⇒ (R1 ∨ ... ∨ Rn)) and Φ → (R0 ⇒ (¬R1 ∧ ... ∧ ¬Rn)), where Φ is as above, and the Ri are basic atoms predicating the same terms. In the first case, where Φ holds, if R0 then in any next state the disjunction R0 ∨ R1 ∨ ... ∨ Rn holds, while in the second case the disjunction R0 ∨ R1' ∨ ... ∨ Rm' must hold, where each Ri' is a base atom predicating the same terms as R0 and Ri' ≠ Rj for any i, j.
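Operationally, the two interstate-constraint forms act as filters on the candidate next relation for a pair of objects: the positive form whitelists successors of R0, the negative form blacklists them. A minimal sketch (illustrative names, not the authors' code):

```python
# Minimal sketch of the two interstate-constraint forms as transition
# filters.  Positive: R0 => (R1 v ... v Rn), the next relation must be R0
# or one of the Ri.  Negative: R0 => (~R1 ^ ... ^ ~Rn), the next relation
# must not be any Ri.

def allowed(current, nxt, positive=(), negative=()):
    """Is the transition current -> nxt permitted?  Each constraint is a
    pair (r0, {r1, ..., rn}) applying when the current relation is r0."""
    for r0, rs in positive:
        if current == r0 and nxt != r0 and nxt not in rs:
            return False
    for r0, rs in negative:
        if current == r0 and nxt in rs:
            return False
    return True

# E.g. once the food touches the amoeba it cannot disconnect again:
neg = [("EC", {"DC"})]
print(allowed("EC", "PO", negative=neg))   # True
print(allowed("EC", "DC", negative=neg))   # False
```

Note that neither form forces a change: staying in R0 is always permitted, matching the paper's remark that an interstate constraint never forces a transition.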
The presence of an interstate constraint does not force a transition to take place.

Add and Delete rules

In addition to the set of constraint rules described above, the simulation program also supports add and delete rules. Both sets of rules can be viewed as a special kind of interstate constraint. Add rules simply sanction the introduction of new objects into the domain at the next state, and delete rules the elimination of particular objects in the next state. In the model used to illustrate our program, an example of an add rule is where, having enveloped the food, a vacuole is formed in the amoeba, while an example of a delete rule is where the vacuole containing waste material passes out of existence as it opens up and discharges its contents into its environment.

Add and delete rules assume the following forms: add O1, ..., On with Ψ1 when Ψ2, and delete O1, ..., On when Ψ2. Ψ1 is a conjunction of basic atoms, and Ψ2 is a quantifier free Boolean composition of atoms. O1, ..., On must be ground terms (at least in the current implementation). An add rule is fired when the 'when' condition is true for some instantiation of any free variables in the condition, and will add O1, ..., On to all next states with the specified relations. Similarly, delete rules will be fired when the 'when' condition is true and will delete all the specified objects in all next states.

The Algorithm

The algorithm first of all takes an initial state of the modelled physical system, then proceeds to generate the envisionment. Each state in the envisioning process is checked for intrastate consistency before the next state in the envisionment is generated. The completed tree representing the envisionment has the initial state as the root node, and paths tracing to leaf nodes as distinct sequences of transitions undergone by the set of modelled objects.
The algorithm first puts the initial state S0 in a set S of unexpanded states, and then executes the following steps:

1. If S is empty then stop.
2. Select and remove a state Si from S.
3. If Si is inconsistent, then go to 1.
4. Select applicable transition rules by applying interstate constraints.
5. Apply all the selected transition rules to produce a set of possible next states.
6. Apply add and delete rules.
7. Check intrastate constraints.
8. Add remaining states generated to S; go to 1.

We discuss the details of steps 3 to 7 in the subsections below.

Consistency checking

In step 3 the algorithm uses a simple consistency checking step to filter out sets of atomic formulae (being a potential state in the simulation and thus in the physical model) whose conjunction is inconsistent in the underlying theory, and thus supports no model. In this instance, we use the results encoded in the transitivity table. Given n objects in the modelled domain, there are exactly n(n - 1)/2 atomic formulae in a state. In particular, for each tuple of objects x, y, z, there are three atomic formulae of the form R1(x, y), R2(y, z) and R3(x, z). Consistency checking simply consists of checking that each R3(x, z) formula is logically implied by R1(x, y) and R2(y, z) for each y ∉ {x, z}. In use this is effectively the same as Allen's (1983) constraint satisfaction algorithm, except that our algorithm can be simplified since we have no "disjunctive labels", i.e. we have restricted state descriptions to predicating a single base relation to any pair of objects.

Generating next states

In steps 4 through to 7, the algorithm takes a state produced in step 3, and proceeds to generate a new state. The selected state Si is a set of basic atoms. For each atom there are between 1 and 5 applicable transition rules - see Figure 1. In step 4 possible transitions for each atom which violate an interstate constraint are filtered out.
In step 5 the remaining transitions are applied in all possible combinations to yield a set of possible next states. In step 6 the add and delete rules are then applied in that order. Finally, in step 7 any next states which violate an intrastate constraint are deleted.

Correctness

The program terminates when, for each path generated in the envisioning process, the last state repeats an earlier one. It should be evident that the algorithm will terminate if there are no add rules, but the same applies if there are finitely many add rules. This follows from the syntactic restriction on add rules, that the objects must be ground terms, so only finitely many new objects can ever be introduced.

It is important to show that all the behaviours predicted by the simulation correspond to possible behaviours of the physical system being modelled. This issue brings to the fore the question whether or not the simulation can be proved to be "sound" and "complete". In our case, by "soundness" we need to show that every frontier of the envisionment tree (viewed as a disjunction) generated in the simulation is a provable consequence in the underlying theory, and by "completeness", to show that, given an initial state, every 'minimal' provable disjunction of conjoined basic atoms in the underlying theory will be expressed in the envisionment. Whereas Kuipers has proved the correctness of QSIM relative to ordinary differential equations, our gold standard is the logical formalism presented in Randell (1991). We have shown the system to be sound and we conjecture the system is complete, but have still to complete the proof of this.

Complexity

The critical point about the algorithm (and its complexity) is that states are complete, i.e., all relations between all objects are explicitly given in terms of base atoms and there is no disjunctive or indefinite information. This means that all constraints and add/delete rules can be considered individually.
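The loop of steps 1-8 can be sketched for the simplest case: no add/delete rules, and a user-supplied predicate standing in for both the transitivity-table check and the intrastate constraints. This is a toy reconstruction for exposition, not the authors' program; the transition table is our reading of Figure 1.

```python
from itertools import product

# Toy envisioning loop (steps 1-8, minus add/delete rules).  A state assigns
# one base relation to each object pair; `consistent` stands in for the
# transitivity-table and intrastate-constraint filters.

TRANSITIONS = {
    "DC": {"DC", "EC"}, "EC": {"EC", "DC", "PO"},
    "PO": {"PO", "EC", "TPP", "TPPi"},
    "TPP": {"TPP", "PO", "NTPP", "TPI"}, "NTPP": {"NTPP", "TPP", "TPI"},
    "TPPi": {"TPPi", "PO", "NTPPi", "TPI"}, "NTPPi": {"NTPPi", "TPPi", "TPI"},
    "TPI": {"TPI", "TPP", "TPPi", "NTPP", "NTPPi"},
}

def envision(initial, consistent=lambda state: True):
    """Return the distinct states of an attainable envisionment.

    `initial` maps each object pair to a base relation; successor states
    whose dict fails `consistent` are pruned (step 7)."""
    pairs = sorted(initial)
    seen, agenda = set(), [tuple(initial[p] for p in pairs)]
    while agenda:                                   # steps 1-2: pick a state
        state = agenda.pop()
        if state in seen:                           # path repeats: stop here
            continue
        seen.add(state)
        options = [TRANSITIONS[r] for r in state]   # step 5: all combinations
        for nxt in product(*options):
            if consistent(dict(zip(pairs, nxt))):
                agenda.append(nxt)
    return [dict(zip(pairs, s)) for s in seen]

print(len(envision({("a", "b"): "DC"})))   # 8: every relation is reachable
```

With two objects and no constraints, every base relation is eventually reached; supplying a real `consistent` predicate prunes the exponential set of candidate successors, as the paper notes happens dramatically in practice.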
The complexity of step 3 is O(n³) because there are n³ - 3n² + 2n different triples given n objects in a state. For step 4, suppose there are c interstate constraints, each constraint contains at most v variables and there are n objects; then each constraint can be applied in at most n^v ways. This is polynomial of degree v. Applying a constraint is linear in the number of connectives in it. For step 5, if there are n objects there are (n² - n)/2 relations. The maximum branching rate in the graph for direct topological transitions is 5 (from equality), so there are at most 5^((n² - n)/2) successor states (but more likely 2^((n² - n)/2), which is of course still exponential). This compares to the situation in QSIM. In practice, consistency checking will prune the number of next states dramatically. The complexity of steps 6 and 7 is the same as step 4, i.e. O(n^v).

An Example

By way of a simple example, we shall demonstrate the simulation program by modelling cellular behaviour - in particular, the processes known as phagocytosis and exocytosis. Phagocytosis is the process by which cells surround, engulf and then digest food particles. It is the feeding method used by some unicellular organisms, of which the amoeba is an example, and which is adopted here. The same process is used by white blood cells in an attempt to deal with invading micro-organisms. Exocytosis is the name given to a similar inverse process where waste material originally contained in a cell is subsequently expelled from the cell.

In the proposed model, an amoeba is depicted in a fluid environment containing other organisms which are its food. Each amoeba is credited with vacuoles (being fluid filled spaces) containing either enzymes or food which the animal has ingested. The enzymes are used by the amoeba to break down the food into nutrient and waste. This is done by routing the enzymes to the food vacuole.
Figure 2: A pictorial representation of two paths generated in the envisionment.

Upon contact the enzyme and food vacuoles fuse together and the enzymes merge into the fluid containing the food. After breaking down the food into nutrient and waste, the nutrient is absorbed into the amoeba's protoplasm, leaving the waste material in the vacuole ready to be expelled. The waste vacuole passes to the exterior of the protozoan's body, which opens up, letting the waste material pass out of the amoeba and into its environment.

The formal description of the physical model is as follows. We assume seven physical objects: a, f, n, e, nt, w and v, standing for the amoeba, its food, the amoeba's nucleus, a packet of enzymes, a body of nutrient, a body of waste material, and a vacuole respectively. In the simulation, the vacuole, the nutrient and the waste are generated dynamically as the process is undergone. The initial state is represented by the conjunction of the following atomic formulae: DC(a, f), NTPP(n, a), NTPP(e, a), DC(n, e) and DC(e, f).³

Next we introduce our set of domain constraints for the physical model. First the interstate constraints:

1) EC(f, a) ⇒ ¬DC(f, a)
2) PO(f, a) ⇒ ¬EC(f, a)
3) TPP(f, a) ⇒ ¬PO(f, a)
4) TPP(f, a) ⇒ ¬TPI(f, a)
5) DC(nt, v) ⇒ DC(nt, v)
6) EC(w, a) ⇒ ¬PO(w, a)
7) PO(w, a) ⇒ EC(w, a)
8) NTPP(nt, v) ⇒ ¬TPI(nt, v)
9) EC(nt, v) ⇒ ¬PO(nt, v)
10) PO(nt, v) ⇒ EC(nt, v)
11) TPP(nt, v) ⇒ ¬TPI(nt, v)
12) NTPP(f, a) ⇒ ¬TPI(f, a)
13) EC(e, f) ⇒ ¬DC(e, f)
14) PO(e, f) ⇒ TPP(e, f)

Constraints 1 to 3, 6 and 7, and 13 and 14 respectively impose a unidirectionality of movement between the food and the amoeba, between the waste material and the amoeba, and between the enzyme packet and the food. In the first case, when the food is in contact with the amoeba it is always ingested to become a proper

³In the initial state, since there are 5 objects, there are really 10 relationships to be specified.
As mentioned earlier, the program expands a user supplied partial description to a complete description. In fact, although the formula DC(e, f) is formally derivable in the general theory from the first four atomic formulae, it is represented explicitly in the input language here, otherwise no relation between e and f will be generated in subsequent states in the envisioning process - see earlier footnote.

part of the animal; in the second case once the waste material is in external contact with the animal, it will never be reingested; and in the last case once the enzyme packet contacts the food, it will always pass into it, becoming a part. Constraints 4 and 12, and 8 and 11, respectively impose the conditions that once the food is ingested (and is thus a proper part) it will remain a proper part of the animal, and that nutrient once produced (being a proper part of the vacuole) remains a proper part. Without these constraints the transition from being a proper part to being identical, which is sanctioned by the envisioning axioms, would not be ruled out; this would simply result in a possible state being generated in the envisionment with the amoeba being part of the food, and the vacuole part of the nutrient!

The intrastate constraints are all straightforward to understand and just impose the obvious static topological constraints between the domain entities: NTPP(e, a), PP(nt, a), PP(v, a), DR(n, v), DR(n, e), NTPP(n, a), PP(w, v), PP(f, v), PP(w, a) → PP(v, a).

In the simulation, two add rules are given. The first rule introduces nutrient and waste into the food vacuole when the enzyme packet is a proper part of the food, while the second rule sees the creation of the vacuole when the food is a proper part of the amoeba. The delete rules govern the deletion of the enzyme and food, and vacuole respectively. Since the first add rule below contains no basic atoms in the 'with relations' component, it is actually schematic for 4 rules in which only basic atoms are used.
add nt, w with PP(nt, v) ∧ PP(w, v) when TPP(e, f)
add v with TPP(v, a) ∧ TPP(f, v) when TPP(f, a)
delete e, f when P(e, f)
delete v when TPP(v, a) ∧ PP(w, v) ∧ DR(nt, v)

The simulation program produces an envisionment with 76 distinct states. Our constraints are sufficiently strong because each complete path corresponds to the English description of phagocytosis and exocytosis given above. A pictorial representation of two paths generated in the envisionment is given in Fig. 2. In both paths generated we can see that the food is ingested by the amoeba, a vacuole is formed which then contains that food, digestion takes place transforming the food into nutrient and waste, and finally the waste is expelled. Note that in one path the enzyme packet begins to be absorbed into the food before the food is completely enveloped by the amoeba, while in another path the vacuole is formed before the enzyme packet is similarly absorbed. Altogether there are 6 terminal states, although there are 264 paths leading from the initial state to these final states representing different orderings of the topological transformations. However, all the complete paths predict that phagocytosis and exocytosis will be undergone. Some of the paths exhibit oscillatory behaviour.

Related and Further Work

For a detailed discussion of the ontology and formalism used in the simulation see Randell (1991). We have already discussed the relationship between this simulation program and Kuipers' QSIM above. The volume (Weld and de Kleer 1990) contains several papers on qualitative spatial simulation, Forbus (1980) reports on a simulator called FROB, and Gardin and Meltzer (1989) describe an analogical spatial simulator. Forbus et al. (1991), in the context of the CLOCK project, give a general framework for qualitative reasoning concerning mechanisms, but all use very different ontologies to our work.
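The add and delete rules of the example model can be encoded as simple records whose 'when' part is matched against the atoms of the current state. The structures below are illustrative (our encoding, not the authors' input syntax), shown for the first add rule and the first delete rule:

```python
# Sketch (illustrative structures): add/delete rules as records whose
# 'when' part is a list of atoms tested against the current state's atoms.

ADD_RULES = [
    # add nt, w with PP(nt, v) ^ PP(w, v) when TPP(e, f)
    {"add": ["nt", "w"], "with": [("PP", "nt", "v"), ("PP", "w", "v")],
     "when": [("TPP", "e", "f")]},
]
DELETE_RULES = [
    # delete e, f when P(e, f)
    {"delete": ["e", "f"], "when": [("P", "e", "f")]},
]

def fires(rule, atoms):
    """A rule fires when every atom of its 'when' part holds in the state."""
    return all(a in atoms for a in rule["when"])

state = {("TPP", "e", "f"), ("TPP", "f", "a")}
print([r["add"] for r in ADD_RULES if fires(r, state)])       # [['nt', 'w']]
print([r["delete"] for r in DELETE_RULES if fires(r, state)]) # []
```

In the full algorithm the objects named in a firing add rule would be introduced into every next state with the relations of the 'with' part, and a firing delete rule would strip its objects (and their atoms) from every next state.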
We mentioned above how further dyadic relations can be defined describing bodies that are either inside, partially inside or outside each other. This could be exploited in the amoeba example to give a richer and more realistic model where the food can be made to pass from being outside to being inside the animal, and then options would be available, once the food has been engulfed, as to whether the food is modelled as forming a part of the animal or not. However, in order to do this an extended transitivity table needs to be built, containing at least 529 cells. This is a formidable task which we have not yet completed; we have recently constructed a transitivity table via a program which reasons about bitmap representations of space, but the resulting transitivity table has not yet been verified with respect to the modelling theory. It should also be pointed out here that in addition to the set of inside and outside relations mentioned, a set of containment relations can also be defined and exploited, in which one body completely wraps around another - see Randell (1991).

At present the modelling primitives simply capture topological information. These could be extended to include metric information, capturing for example notions of relative size and distances between objects. The possibility of introducing a metric extension to the theory is outlined in Randell (1991). Further envisaged extensions would add a subtheory of motion to the modelling language, for at present motion is represented implicitly by specified topological transitions between sets of objects. Other useful extensions would include explicit information about causality and processes, the latter including teleological accounts of a physical system's behaviour.

In the present implementation, constraints and objects have to be individually specified.
It would be useful to generalise this restriction to allow for generic constraints and typed objects in the program's description language, relating individuals of particular types. The syntax of constraints described in this paper is not necessarily the most liberal that could be efficiently implemented, but corresponds to the current version of the system. We intend to investigate this expressiveness/efficiency trade-off.

References

Allen, J. F. 1983. Maintaining knowledge about temporal intervals. CACM, Vol. 26, pp. 832-843.
Clarke, B. L. 1981. A Calculus of Individuals Based on Connection. Notre Dame Journal of Formal Logic 22(3): 204-218.
Clarke, B. L. 1985. Individuals and Points. Notre Dame Journal of Formal Logic 26(1): 61-75.
Cohn, A. G. 1987. A More Expressive Formulation of Many Sorted Logic. Journal of Automated Reasoning 3(2): 113-200.
Forbus, K. D. 1980. Spatial and Qualitative Aspects of Reasoning about Motion. In Proc. AAAI-80, Morgan Kaufmann, San Mateo, Ca.
Forbus, K. D.; Nielsen, P.; and Faltings, B. 1991. Qualitative spatial reasoning: the CLOCK project. Artificial Intelligence 51: 417-471, Elsevier.
Gardin, F., and Meltzer, B. 1989. Analogical Representations of Naive Physics. Artificial Intelligence 38: 139-159.
Kuipers, B. 1986. Qualitative Simulation. Artificial Intelligence 29: 289-338.
Randell, D.; Cohn, A. G.; and Cui, Z. 1992. Naive Topology: modeling the force pump. In Recent Advances in Qualitative Reasoning, ed. B. Faltings and P. Struss, MIT Press, in press, 1992.
Randell, D. A. 1991. Analysing the Familiar: Reasoning about space and time in the everyday world. Ph.D. Thesis, Dept. of Comp. Sci., University of Warwick, UK.
Randell, D., and Cohn, A. G. 1989. Modeling Topological and Metrical Properties in Physical Processes. In Proc. of the 1st Int. Conference on Principles of Knowledge Representation and Reasoning, ed. R. J. Brachman et al., Morgan Kaufmann, San Mateo, Ca.
Randell, D. A.; Cohn, A. G.; and Cui, Z. 1992.
Computing Transitivity Tables: a Challenge for Automated Theorem Provers. To appear in Proceedings of CADE-11, Springer Verlag.
Weld, D., and de Kleer, J. 1990. Readings in Qualitative Reasoning about Physical Systems. Morgan Kaufmann, San Mateo.
Self-Explanatory Simulations: Scaling Up to Large Models

Kenneth D. Forbus
Qualitative Reasoning Group
The Institute for the Learning Sciences
Northwestern University
1890 Maple Avenue, Evanston, IL, 60201

Abstract

Qualitative reasoners have been hamstrung by the inability to analyze large models. This includes self-explanatory simulators, which tightly integrate qualitative and numerical models to provide both precision and explanatory power. While they have important potential applications in training, instruction, and conceptual design, a critical step towards realizing this potential is the ability to build simulators for medium-sized systems (i.e., on the order of ten to twenty independent parameters). This paper describes a new method for developing self-explanatory simulators which scales up. While our method involves qualitative analysis, it does not rely on envisioning or any other form of qualitative simulation. We describe the results of an implemented system which uses this method, and analyze its limitations and potential.

Introduction

While qualitative representations seem useful for real-world tasks (cf. [1; 15]), the inability to reason qualitatively with large models has limited their utility. For example, using envisioning or other forms of qualitative simulation greatly restricts the size of model which can be analyzed [14; 4]. Yet the observed use of qualitative reasoning by engineers, scientists, and plain folks suggests that tractable qualitative reasoning techniques exist. This paper describes one such technique: a new method for building self-explanatory simulators [10] which has been successfully tested on models far larger than previous qualitative reasoners can handle. A self-explanatory simulation combines the precision of numerical simulation with the explanatory power of qualitative representations.
Brian Falkenhainer
System Sciences Laboratory, Xerox Palo Alto Research Center
3333 Coyote Hill Road, Palo Alto CA 94304

They have three advantages: (1) Better explanations: By tightly integrating numerical and qualitative models, behavior can be explained as well as predicted, which is useful for instruction and design. (2) Improved self-monitoring: Typically most modeling assumptions underlying today's numerical simulators remain in their author's heads. By incorporating an explicit qualitative model, the simulator itself can help ensure that its results are consistent. (3) Increased automation: Explicit domain theories and modeling assumptions allow the simulation compiler to shoulder more of the modeling burden (e.g., [7]).

Applying these ideas to real-world tasks requires a simulation compiler that can operate on useful-sized examples. In [10], our account of self-explanatory simulators required a total envisionment of the modeled system. Since envisionments tend to grow exponentially with the size of the system modeled, our previous technique would not scale.

This paper describes a new technique for building self-explanatory simulations that provides a solution to the scale-up problem. It does not rely on envisioning, nor even qualitative simulation. Instead, we more closely mimic what an idealized human programmer would do. Qualitative reasoning is still essential, both for orchestrating the use of numerical models and providing explanations. Our key observation is that in the task of simulation writing, reification of global state is unnecessary. This suggests developing more efficient local analysis techniques. While there is room for improvement, SIMGEN.MK2 can already write self-explanatory simulations for physical systems which no existing envisioner can handle. The next section outlines the computational requirements of simulation writing, highlighting related research.
The section after uses this decomposition to describe our new method for building self-explanatory simulations. Later sections discuss empirical results. We use MK1 below to refer to the old method and implementation and MK2 to refer to the new.

The task of simulation writing

Forbus and Falkenhainer 685
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

We focus here on systems that can be described via systems of ordinary differential equations without simultaneities. Writing a simulation can be decomposed into several subtasks:

1. Qualitative Modeling. The first step is to identify how an artifact is to be described in terms of conceptual entities. This involves choosing appropriate perspectives (e.g., DC versus high-frequency analysis) and deciding what to ignore (e.g., geometric details, capacitive coupling). Existing engineering analysis tools (e.g., NASTRAN, SPICE, DADS) provide little support for this task. Qualitative physics addresses this problem by the idea of a domain theory (DT) whose general descriptions can be instantiated to form models of specific artifacts (e.g., [7]). Deciding which domain theory fragments should be applied in building a system can require substantial reasoning.

2. Finding relevant quantitative models. The conceptual entities and relationships identified in qualitative analysis guide the search for more detailed models. Choosing to include a flow, for instance, requires the further selection of a quantitative model for that flow (e.g., laminar or turbulent). Current engineering analysis tools sometimes supply libraries of standard equations and approximations. However, each model must be chosen by hand, since they lack the deductive capabilities to uncover non-local dependencies between modeling choices. Relevant AI work includes [3; 7; 17].

3. From equations to code. The selected models must be translated into an executable program. Relevant AI work includes [2; 21].

4.
Self-Monitoring. Hand-built numerical simulations are typically designed for narrow ranges of problems and behaviors, and rarely provide any indication when their output is meaningless (e.g., negative masses). Even simulation toolkits tend to have this problem, relying on the intuition and expertise of a human user to detect trouble. Forcing a numerical model to be consistent with a qualitative model can provide automatic and comprehensive detection of many such problems [10].

5. Explanations. Most modern simulation toolkits provide graphical output, but the burden of understanding still rests on the user. Qualitative physics work on complex dynamics [19; 16; 20] can extract qualitative descriptions from numerical experiments. But since they require the simulator (or equations) as input and so far are limited to systems with few parameters, they are inappropriate for our task. The tight integration of qualitative and numerical models in self-explanatory simulators provides better explanations for most training simulators and many design and analysis tasks.

Simulation-building by local reasoning

Clearly envisionments contain enough information to support simulation-building; the problem is they contain too much. The author of a FORTRAN simulator never enumerates the qualitatively distinct global states of a complex artifact. Instead she identifies distinct behavior regimes for pieces of the artifact (e.g., whether a pump is on or off, or if a piping system is aligned) and writes code for each one. Our new simulation-building method works much the same way. Here we describe the method and analyze its complexity and trade-offs. We use ideas from assumption-based truth maintenance (ATMS) [6], Qualitative Process theory [8], Compositional Modeling [7], and QPE [9] as needed.

Qualitative analysis

Envisioning was the qualitative analysis method of MK1. The state of a self-explanatory simulator was defined as a pair (N, Q), with N a vector of continuous parameters (e.g., mass(B)) and booleans corresponding to preconditions (e.g., Open(Valve2)), and Q ranged over envisionment states. Envisioning tends to be exponential in the size of the artifact A. Many of the constraints applied are designed to ensure consistent global states using only qualitative information. For example, all potential violations of transitivity in ordinal relations must be enumerated. The computational cost of such constraints can be substantial. For our task such effort is irrelevant; the extra detail in the numerical model automatically prevents such violations.

The domain theory DT consists of a set of model fragments, each with a set of antecedent conditions controlling their use and a set of partial equations defining influences [8] on quantities. The directly influenced quantities are defined as a summation of influences on their derivative, dQ0/dt = Σ Qi over the Qi in Inf(Q0), and the indirectly influenced quantities are defined as algebraic functions of other quantities, Qc = f(Q1, ..., Qn). The qualitative analysis identifies relevant model fragments, sets of influences, and transitions where the set of applicable model fragments changes. The algorithm is:

1. Establish a dependency structure by instantiating all applicable model fragments into the ATMS. The complexity is proportional to the sizes of DT and A.

2. Derive all minimal, consistent sets of assumptions (called local states) under which each fragment holds (i.e., their ATMS labels). The labels enumerate the operating conditions (ordinal relations and other propositions) in which each model fragment is active.

3. For each quantity, compute its derivative's sign in each of its local states when qualitatively unambiguous (QPT influence resolution). This information is used in selecting numerical models and in limit analysis below. The complexity for processing each quantity is exponential in the number of influences on it. Typically there are less than five, so this step is invariably cheap in practice.

4. Find all limit hypotheses involving single inequalities (from QPT limit analysis). These possible transitions are used to derive code that detects state transitions. This step is linear in the number of ordinal comparisons.

This algorithm is a subset of what an envisioner does. No global states are created and exponential enumeration of all globally consistent states is avoided (e.g., ambiguous influences are not resolved in step 3 and no limit hypothesis combinations are precomputed in step 4). Only Step 2 is expensive: worst case exponential in the number of assumptions due to ATMS label propagation. We found two ways to avoid this cost in practice. First, we partially rewrote the qualitative analysis routines to minimize irrelevant justifications (e.g., transitivity violations). This helped, but not enough.

The second method (which worked) uses the fact that for our task, there is a strict upper bound on the size of relevant ATMS environments. Many large environments are logically redundant [5]. We use labels for two purposes: (1) to determine which model fragments to use and (2) to derive code to check logical conditions at run-time. For (1), having a non-empty label suffices, and for (2), shorter, logically equivalent labels produce better code. By modifying the ATMS to never create environments over a fixed size E_max, we reduced the number of irrelevant labels.
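To make the E_max idea concrete, the following is a small illustrative sketch (ours, not the authors' code) of ATMS-style label propagation in which any candidate environment larger than a fixed bound is simply discarded before it can combine further. The justification network, function names, and data layout are invented for the example, and cyclic justifications are assumed absent.

```python
from itertools import product

def propagate_labels(justifications, assumptions, e_max):
    """justifications: dict mapping a node to a list of justifications,
    each a set of antecedents (assumptions or other nodes).
    Returns a dict mapping each node/assumption to its label:
    a list of minimal environments (frozensets of assumptions)."""
    labels = {a: [frozenset([a])] for a in assumptions}

    def label_of(node):
        if node in labels:
            return labels[node]
        envs = set()
        for antecedents in justifications.get(node, []):
            # Union one environment from each antecedent's label.
            for combo in product(*(label_of(a) for a in antecedents)):
                env = frozenset().union(*combo)
                if len(env) <= e_max:  # the E_max cut-off: drop big environments
                    envs.add(env)
        # Keep only minimal environments (discard strict supersets).
        labels[node] = [e for e in envs if not any(o < e for o in envs)]
        return labels[node]

    for node in list(justifications):
        label_of(node)
    return labels
```

With e_max large enough the minimal labels come out intact; lowering it below the size a node's environments actually need empties that node's label, which is the trade-off the text describes.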
The appropriate value for E_max can be ascertained by analyzing the domain theory's dependency structure.¹ Thus, while Step 2 is still exponential, the use of E_max greatly reduces the degree of combinatorial explosion.²

A new definition of state for self-explanatory simulators is required because without an envisionment, Q is undefined. Let N be a vector of numerical parameters, and let B be a vector of boolean parameters representing the truth value of the non-comparative propositions which determine qualitative state. That is, B includes parameters representing propositions and the status of each model fragment, but not comparisons. (Ordinal information can be computed directly from N as needed.) The state of a self-explanatory simulator is now defined as the pair (N, B). In effect, each element of Q can be represented by some combination of truth values for B.

Finding relevant quantitative models

The qualitative analysis has identified the quantities of interest and provided a full causal ordering on the set of differential and algebraic equations. However, because the influences on a quantity can change over time, a relevant quantitative model must be found for each possible combination.

This aspect of simulation-building is identical with MK1. The derivative of a directly influenced parameter is the sum of its active influences. For indirectly influenced parameters, a quantitative model must be selected for each consistent combination of qualitative proportionalities which constrain it. For instance, when

¹Empirically, setting E_max to double the maximum size of the set of preconditions and quantity conditions in DT always provides accurate labels for the relevant subset of the ATMS. The factor of two ensures accurate labels when computing limit hypotheses.
²Under some tradeoffs non-exponential algorithms may be possible: see the Analysis section.
a liquid flow is occurring its rate might depend on the source and destination pressures and the conductance of the path. The numerical model retrieved would be

    FluidConductance(?path) × (Pressure(?source) − Pressure(?dest))

If N qualitative proportionalities constrain a quantity there are at most 2^N distinct combinations. This worst case never arises: typically there are exactly two consistent combinations: no influences (i.e., the quantity doesn't exist) and the conjunction of all N possibilities (i.e., the model found via qualitative analysis). N is always small, so the only potentially costly aspect here is selecting between alternate quantitative models (see the Analysis section).

The only potential disadvantage with using B over Q in this computation is the possibility that a combination of qualitative proportionalities might be locally consistent, but never part of any consistent global state. This would result in the simulator containing dead code, which does not seem serious.

Code Generation

The simulation procedures in a self-explanatory simulator are divided into evolvers and transition procedures. An evolver produces the next state, given an input state and time step dt. A transition procedure takes a pair of states and determines whether or not a qualitatively important transition (as indicated by a limit hypothesis) has occurred between them.³ In MK1 each equivalence class of qualitative states (i.e., same processes and Ds values) had its own evolver and transition procedure. In MK2 simulators have just one evolver and one transition procedure.

An evolver looks like a traditional numerical simulator. It contains three sections: (1) calculate the derivatives of independent parameters and integrate them; (2) update values of dependent parameters; (3) update values of boolean parameters marking qualitative changes.
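As an illustration of this three-section structure, here is a minimal hand-written sketch in Python of one evolver step for a hypothetical draining tank. The parameter names, the state layout, and the simple Euler integration step are our assumptions for illustration; SIMGEN's actual generated code is Lisp.

```python
# Sketch of an evolver's three sections for a hypothetical draining
# tank: (1) derivatives + integration, (2) dependent parameters in a
# fixed order, (3) boolean parameters marking qualitative conditions.

def evolve(state, dt):
    """state: dict of numeric parameters and booleans; returns next state."""
    nxt = dict(state)
    # Section 1: derivatives of independent parameters, then Euler step.
    d_amount = 0.0
    if state['flow-active']:          # only active influences contribute
        d_amount -= state['flow-rate']
    nxt['amount'] = state['amount'] + d_amount * dt
    # Section 2: dependent parameters, in a fixed global order
    # (amount -> level -> pressure, following the influence graph).
    nxt['level'] = nxt['amount'] / state['area']
    nxt['pressure'] = state['rho-g'] * nxt['level']
    # Section 3: booleans marking qualitative changes.
    nxt['flow-active'] = nxt['pressure'] > state['outlet-pressure']
    return nxt
```

The fixed amount → level → pressure ordering in section 2 is exactly the kind of global update order the unidirectional assumption below makes possible.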
Let the influence graph be the graph whose nodes are quantities and whose arcs are the influences (direct or indirect) implied by a model (note that many can't co-occur). We assume that the subset of the influence graph consisting of indirect influence arcs is loop-free. This unidirectional assumption allows us to update dependent parameters in a fixed global order. While we may have to check whether or not to update a quantity (e.g., the level of a liquid which doesn't exist) or calculate which potential direct influences are relevant (e.g., which flows into and out of a container are active), we never have to change the order in which we update a pair of parameters (e.g., we never have to update level using pressure at one time and update pressure using level at another within one simulator). The code generation algorithm is:

³Transition procedures also enforce completeness of the qualitative record by signalling when the simulator should "roll back" to find a skipped transition [10].
1. Analyze the influence graph to classify parameters as directly or indirectly influenced, and establish a global order of computation.

2. Generate code for each directly influenced quantity. Update order is irrelevant because the code for each summation term is independent.

3. Generate code to update indirectly influenced quantities using the quantitative models found earlier. Updates are sequential, based on the ordering imposed by the influence graph.

4. Generate code to update B, using label and dependency information.

Figure 1 shows part of an evolver produced this way.

Sample of direct influence update code, from the Heat-Flow model fragment:

    (defprocess (Heat-Flow ?src ?dst ?path)
      Individuals ((?src :conditions (Quantity (Heat ?src)))
                   (?dst :conditions (Quantity (Heat ?dst)))
                   (?path :type Heat-Path
                          :conditions (Heat-Connection ?path ?src ?dst)))
      Preconditions ((heat-aligned ?path))
      QuantityConditions ((greater-than (A (temperature ?src))
                                        (A (temperature ?dst))))
      Relations ((quantity flow-rate)
                 (Q= flow-rate (- (temperature ?src) (temperature ?dst))))
      Influences ((I+ (heat ?dst) (A flow-rate))
                  (I- (heat ?src) (A flow-rate))))

    (SETF (VALUE-OF (D (HEAT (C-S WATER LIQUID F))) AFTER) 0.0)
    (WHEN (EQ (VALUE-OF (ACTIVE PI0) BEFORE) ':TRUE)
      (SETF (VALUE-OF (D (HEAT (C-S WATER LIQUID F))) AFTER)
            (- (VALUE-OF (D (HEAT (C-S WATER LIQUID F))) AFTER)
               (VALUE-OF (A (HEAT-FLOW-RATE PI0)) BEFORE))))
    (WHEN (EQ (VALUE-OF (ACTIVE PI1) BEFORE) ':TRUE)
      (SETF (VALUE-OF (D (HEAT (C-S WATER LIQUID F))) AFTER)
            (+ (VALUE-OF (D (HEAT (C-S WATER LIQUID F))) AFTER)
               (VALUE-OF (A (HEAT-FLOW-RATE PI1)) BEFORE))))
    (SETF (VALUE-OF (A (HEAT (C-S WATER LIQUID F))) AFTER)
          (+ (VALUE-OF (A (HEAT (C-S WATER LIQUID F))) BEFORE)
             (VALUE-OF (D (HEAT (C-S WATER LIQUID F))) AFTER)))

[Figure 1 additionally shows sample indirect influence update code generated from the Contained-Liquid entity definition, and sample boolean update code generated from the Liquid-Flow process definition.]

Figure 1: Code fragments produced by MK2. The relevant model fragments are shown on the left, the corresponding sample code fragments are shown on the right.
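The flavor of step 2 can be sketched with a toy generator that emits one independent, guarded summation term per potential influence, mirroring the WHEN-guarded terms in Figure 1's generated Lisp. Here the emitted target code is Python rather than Lisp, and all names (process identifiers, the before/after/rate tables) are invented for the example.

```python
# Sketch of step 2: emit an update for a directly influenced quantity
# as independent guarded terms, one per potential influence.

def gen_direct_update(qty, influences):
    """influences: list of (process-name, sign) pairs. Each pair becomes
    an independent guarded term, so emission order does not matter."""
    lines = [f"d['{qty}'] = 0.0"]
    for proc, sign in influences:
        op = '+' if sign > 0 else '-'
        lines.append(f"if active['{proc}']: d['{qty}'] {op}= rate['{proc}']")
    # Integrate the accumulated derivative over the time step.
    lines.append(f"after['{qty}'] = before['{qty}'] + d['{qty}'] * dt")
    return "\n".join(lines)
```

Executing the emitted text against before/after value tables plays the role of the BEFORE/AFTER slots in the generated Lisp.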
Step 1 is quadratic in the number of quantities and the rest is linear, so the algorithm is efficient. The code generation algorithm for transition procedures is linear in the number of comparisons:

1. Sort limit hypotheses into equivalence classes based on what they compare. For instance, the hypothesis that two pressures become unequal and the hypothesis that they become equal both concern the same pair of numbers and so are grouped together.

2. For each comparison, generate code to test for the occurrence of the hypotheses and for transition skip (see [10] for details). To avoid numerical problems, place tests for equality first whenever needed.

Explanation generation

Explanations in MK1 were cheap to compute because the envisionment was part of the simulator. The value of Q at any time provided a complete causal structure and potential state transitions. In MK2 every self-explanatory simulator now maintains instead a concise history [18] for each boolean in B. The temporal bounds of each interval are the time calculated for that interval in the simulation. Elements of N can also be selected for recording as well, but these are only necessary to provide quantitative answers. A compact structured explanation system, which replicates the ontology of the original QP model, is included in the simulator to provide a physical interpretation for elements of B in a dependency network optimized for explanation generation. Surprisingly, almost no explanatory power is lost in moving from envisionments to concise histories. For instance, histories suffice to determine what influences and what mathematical models hold at any time.

Table 1: MK2 on small examples. All times in seconds. The envisioning time is included for comparison purposes.

    Example          Qualitative Analysis   Code Generation   Envisioning
    Two containers   19.4                   3.4               40.2
    Boiling water    21.8                   3.4               45.6
    Spring-Block     4.9                    1.5               6.2
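The concise-history bookkeeping described above can be sketched as follows; the episode representation (closed [start, end] intervals per boolean value) is our assumption for illustration, not the authors' data structure.

```python
# Sketch of a concise history for a boolean in B: record an episode
# only when the value changes, rather than one entry per simulated state.

def record(history, t, value):
    """history: list of [start, end, value] episodes, mutated in place."""
    if history and history[-1][2] == value:
        history[-1][1] = t              # extend the current episode
    else:
        history.append([t, t, value])   # open a new episode

def value_at(history, t):
    """Look up the recorded value at time t, or None if unrecorded."""
    for start, end, value in history:
        if start <= t <= end:
            return value
    return None
```

A long run of unchanged states collapses into a single episode, which is why the history stays small even for long simulations.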
What is lost is the ability to do cheap counterfactuals: e.g., asking "what might have happened instead?". Envisionments make such queries cheap because all alternate state transitions are precomputed. Such queries might be supported in MK2's simulators by incorporating qualitative reasoning algorithms that operated over the structured explanation system.

Self-Monitoring

In MK1 clashes between qualitative and quantitative models were detected by a simulator producing an inconsistent state: i.e., when N could not satisfy Q. This stringent self-monitoring is impossible to achieve without envisioning. To scale up we must find a good compromise between stringency and performance. Our compromise is to search the nogood database generated by the ATMS during the qualitative analysis phase for useful local consistency tests. These tests are then proceduralized into a nogood checker which becomes part of the simulator. Empirically, few nogoods are useful since they rule out combinations of beliefs which cannot hold, given that B is computed from N. Thus so far nogood checkers have tended to be small. How much self-monitoring do we lose? At worst MK2 produces no extra internal consistency checks, making it no worse than many hand-written simulators. This is a small price to pay for the ability to produce code for large artifacts.

Examples

These examples were run on an IBM RS/6000, Model 530, with 128MB of RAM running Lucid Common Lisp. Table 1 reports the MK2 runtimes on the examples of [10]. Here, MK2 is much faster than human coders. To explore MK2's performance on large problems we tested it on a model containing nine containers connected by twelve fluid paths (i.e., a 3 × 3 grid). The liquid in each container (if any) has two independent variables (mass and internal energy) and three dependent variables (level, pressure, and temperature). 24 liquid flow processes were instantiated, each including rates for transfer of mass and energy. We estimate a total envisionment for this situation would contain over 10^12 states, clearly beyond explicit generation. The qualitative analysis took 16,189 seconds (over four hours), which is slow but not forever. Generating the code took only 97.3 seconds (under two minutes), which seems reasonably fast.

Analysis

The examples raise two interesting questions: (1) why is code generation so fast and (2) can the qualitative analysis be made even faster?

Code generation is fast for two reasons. First, in programming, framing the problem takes a substantial fraction of the time. This job is done by the qualitative analysis. Transforming the causal structure into a procedure given mathematical models is easy; deriving the causal structure to begin with is not. The second reason is that our current implementation does not reason about which mathematical model to use. So far our examples included only one numerical model per combination of qualitative proportionalities.⁴ This will not be unusual in practice, since typically each approximation has exactly one quantitative model (e.g., laminar flow versus turbulent flow). Thus the choice of physical model typically forces the choice of quantitative model. On the other hand, we currently require the retrieved model to be executable as is, and do not attempt to optimize for speed or numerical accuracy (e.g. [2; 17]).

The qualitative analysis for large examples could be sped up in several ways. First, our current routines are culled from QPE, hence are designed for envisioning, not this task. Just rewriting them to minimize irrelevant dependency structure could result in substantial speedups. Second, using an ATMS designed to avoid internal exponential explosions could help [5].

A more radical possibility is to not use an ATMS. Some of the jobs performed using ATMS labels in the qualitative analysis section can be done without them.
Consider the problem of finding quantitative models for indirectly influenced parameters, which requires combining labels for qualitative proportionalities. For some applications it might be assumed that if no quantitative model is known for a combination of qualitative proportionalities then that combination cannot actually occur. Computing the labels of influences is unnecessary in such cases. Sometimes ignoring labels might lead to producing code which would never be executed (e.g., boiling iron in a steam plant). At worst, speed in qualitative analysis can be traded off against larger (and perhaps buggy) simulation code; at best, faster reasoning techniques can be found to provide the same service as an ATMS but with less overhead for this task.

⁴If there are multiple quantitative models, the current MK2 selects one at random.

Discussion

SIMGEN.MK2 demonstrates that qualitative reasoning techniques can scale up. Building self-explanatory simulators requires qualitative analysis, but does not require calculating even a single global state. By avoiding envisioning and other forms of qualitative simulation, we can build simulators for artifacts that no envisionment-based system would dare attempt. Although our implementation is not yet optimized, already it outspeeds human programmers on small models and does reasonably well on models within the range of utility for certain applications in instruction, training, and design. Our next step is to build a version of MK2 which can support conceptual design and supply simulators for procedures trainers and integration into hypermedia systems.

Acknowledgements

We thank Greg Siegle for intrepid CLIM programming. This work was supported by grants from NASA JSC, NASA LRC, Xerox PARC, and IBM.

References

[1] Abbott, K. "Robust operative diagnosis as problem-solving in a hypothesis space", Proceedings of AAAI-88, August, 1988.

[2] Abelson, H. and Sussman, G. J.
The Dynamicist's Workbench: I. Automatic preparation of numerical experiments, MIT AI Lab Memo No. 955, May, 1987.

[3] Addanki, S., Cremonini, R., and Penberthy, J.S. "Graphs of Models", Artificial Intelligence, 51, 1991.

[4] Collins, J. and Forbus, K. "Building qualitative models of thermodynamic processes", unpublished manuscript.

[5] DeCoste, D. and Collins, J. "CATMS: An ATMS which avoids label explosions", Proceedings of AAAI-91, Anaheim, CA, 1991.

[6] de Kleer, J. "An assumption-based truth maintenance system", Artificial Intelligence, 28, 1986.

[7] Falkenhainer, B. and Forbus, K. D. Compositional modeling: Finding the right model for the job. Artificial Intelligence, 51(1-3):95-143, October 1991.

[8] Forbus, K. D. Qualitative process theory. Artificial Intelligence, 24:85-168, 1984.

[9] Forbus, K. The qualitative process engine: A study in assumption-based truth maintenance. International Journal for Artificial Intelligence in Engineering, 3(3):200-215, 1988.

[10] Forbus, K. D. and Falkenhainer, B. Self-Explanatory Simulations: An integration of qualitative and quantitative knowledge. AAAI-90, July, 1990.

[11] Franks, R.E. Modeling and simulation in chemical engineering, John Wiley and Sons, New York, 1972.

[12] Haug, E.J. Computer-Aided Kinematics and Dynamics of Mechanical Systems Volume I: Basic Methods, Allyn and Bacon, 1989.

[13] Hayes, P. "The naive physics manifesto" in Expert systems in the micro-electronic age, D. Michie (Ed.), Edinburgh University Press, 1979.

[14] Kuipers, B. and Chiu, C. "Taming intractable branching in qualitative simulation", Proceedings of IJCAI-87, Milan, Italy, 1987.

[15] LeClair, S., Abrams, F., and Matejka, R. "Qualitative Process Automation: Self-directed manufacture of composite materials", AI EDAM, 3(2), pp 125-136, 1989.

[16] Sacks, E. "Automatic qualitative analysis of dynamic systems using piecewise linear approximations", Artificial Intelligence, 41, 1990.

[17] Weld, D.
"Approximation reformulations", Proceedings of AAAI-90, August, 1990.

[18] Williams, B. "Doing time: putting qualitative reasoning on firmer ground", Proceedings of AAAI-86, Philadelphia, PA, 1986.

[19] Yip, K. "Understanding complex dynamics by visual and symbolic reasoning", Artificial Intelligence, 51, 1991.

[20] Zhao, F. "Extracting and representing qualitative behaviors of complex systems in phase spaces", Proceedings of IJCAI-91, Sydney, Australia, 1991.

[21] Zippel, R. Symbolic/Numeric Techniques in Modeling and Simulation. In Symbolic and Numerical Computations - Towards Integration. Academic Press, 1991.
Gordon Skorstad
Beckman Institute, University of Illinois
405 North Mathews Street, Urbana, Illinois 61801
g-skorstad@uiuc.edu

Abstract

Choosing between multiple ontological perspectives is crucial for reasoning about the physical world. Choosing the wrong perspective can make a reasoning task impossible. This paper introduces a Lagrangian plug flow ontology (PF) for reasoning about thermodynamic fluid flow. We show that this ontology captures continuously changing behaviors of flowing fluids not represented in currently implemented ontologies. These behaviors are essential for understanding thermodynamic applications such as power cycles, refrigeration, liquefaction, throttling and flow through nozzles. We express the ontology within the framework of Qualitative Process (QP) theory. To derive our QP theory for plug flow, we use the method of causal clustering to find causal interpretations of thermodynamic equations. We also incorporate qualitative versions of standard thermodynamic relations, including the second law of thermodynamics and Clapeyron's equation.

Introduction

The choice of ontology critically affects reasoning in a domain. In general, no single theory of a domain is adequate for every task because the underlying ontological choices may sometimes be inappropriate. Thus it has become widely recognized that the ability to switch ontological perspectives is crucial for expert physical reasoning (Collins & Forbus 1987), (Rajamoney & Koo 1990), (Amador & Weld 1990). This paper outlines a new ontology for fluid flow which captures behaviors essential for understanding many important thermodynamic phenomena such as power cycles, refrigeration, liquefaction, and throttling.

To demonstrate the importance of ontological choice, consider the phenomenon of fluid flow in a boiler tube (Figure 1). The thin slices in the figure represent fluid samples at different points along the fluid path.
Subcooled liquid (i.e., liquid below its boiling temperature) enters the tube at point a. As liquid flows through the tube, heat flows into the liquid, warming it from its initial temperature at a to its boiling point at c, boiling it until e, and then superheating the gas above its boiling point until it exits at g. Fluid properties vary from inlet a to outlet g. Temperature T, for example, rises smoothly from a to c, is constant from c to e during boiling, and rises again from e to g.

In many thermodynamic fluid flow problems, the spatial distribution of the fluid's properties must be recognized before an appropriate equation can be chosen or before a simplifying assumption can be made. For example, if our task is to calculate the heat absorbed by the boiler fluid based on values of fluid entropy S, it would be necessary to recognize the varying temperature and entropy profile of the fluid along the tube. Because of the spatial distribution, the appropriate equation (derived from an energy balance and the fundamental property relation) is

    Q = ∫_{Sa}^{Sg} T dS    (1)

where Sa and Sg are the entropies of the entering and leaving fluid, respectively. A reasoner needs to know that temperature T varies along the integration path and so cannot be brought outside the integral sign.

If our modeling ontology was incapable of representing spatially distributed properties, we could (as will be demonstrated) derive an oversimplified, inaccurate equation. In general, a thermodynamic reasoning system which cannot represent smooth, continuously changing behaviors through space lacks the wherewithal to make informed decisions about many commonly occurring fluid flow phenomena. The focus of this paper is how such spatially distributed phenomena can be represented within the framework of Qualitative Process (QP) theory (Forbus 1984).

Eulerian Viewpoint

Several ontologies have been proposed for modeling liquids and gases.
At the macroscopic level, the contained-stuff (Forbus 1984) and molecular collection (Collins & Forbus 1987) ontologies describe fluid entities large enough to possess the traditional thermodynamic properties (temperature, pressure, etc.).

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

[Figure 1: Water flow in a boiler tube]

At the microscopic level, the ontologies of (Rajamoney & Koo) and (Amador & Weld) describe the behavior of elementary fluid particles. In this paper we focus on macroscopic theories of fluids.

The contained-stuff (CS) ontology, a generalization of Hayes' contained liquid ontology (Hayes 1985), describes gases and multiple substances as well as liquids. The CS ontology uses the Eulerian viewpoint (Shapiro 1953), (Falkenhainer & Forbus 1991) which focuses on a particular region of space and specifies for each instant of time the properties of the fluid which happens to occupy the region. In the CS ontology, fluids are partitioned into individuals occupying regions delimited by containers.

A contained-stuff is a homogeneous lumped parameter (Coughanowr & Koppel 1965), (Throop 1989) model of fluid. Properties distributed over space (e.g., temperature) are "lumped" into a single value. In many cases, this modeling idealization is too inaccurate. For example, modeling a boiling tube with contained-stuffs is basically equivalent to viewing it as a well stirred boiling tank (Figure 2). Liquid enters the tank, boils and leaves as saturated gas (i.e., gas at its boiling temperature). The contained-liquid and contained-gas are both at the boiling temperature of the fluid. A thermodynamic reasoning system equipped with only the contained-stuff ontology can miss important distinctions needed to control further analyses.
For example, to calculate heat flow Q, a contained-stuff model of the boiler tube suggests (with its constant temperature) that an appropriate equation is

Q = T ΔS    (2)

where ΔS is the change in entropy from liquid to gas. In contrast to equation (1), this equation is seriously inaccurate for the scenario shown in Figure 1.

Lagrangian Viewpoint Models

To model the changing conditions along a fluid path, we need a different perspective than that of a contained-stuff. A more promising approach is to use the Lagrangian viewpoint (Shapiro 1953), (Falkenhainer & Forbus 1991) in which a particular fluid particle is modeled.

[Figure 2: Contained-Stuff view of a boiler tube]

As the particle travels through a system, it adjusts itself continuously to the new conditions encountered. Choosing a Lagrangian viewpoint means we view the fluid flow from the vantage point of a moving particle. In contrast, using the Eulerian viewpoint means picking a fixed path location to observe flow.

As a possible candidate ontology, we consider the molecular collection (MC) ontology developed by (Collins & Forbus 1987). The MC ontology is a tractable specialization of Hayes' piece-of-stuff (PoS) ontology. In the theory, an MC is a very small piece-of-stuff viewed as a collection of molecules. The ontology is said to be parasitic on the contained-stuff ontology because a contained-stuff description must be generated before an MC description is possible. When an MC enters a contained-stuff, equilibrating processes influence its properties to match those of the contained-stuff. For a phenomenon like fluid flow into a well mixed tank, this model is usually accurate enough. But, for phenomena where continuously changing properties along a path are important, the MC ontology suffers from the same problem as the contained-stuff ontology. Consider again the example of fluid flow in a boiler tube.
Figure 3 shows the envisionment describing MC's history. The MC enters the contained-liquid and begins equilibrating with it (state a).

[Figure 3: MC envisionment for a boiler tube]

Once equilibrium is reached, MC's properties match those of the boiling contained-liquid (state b). After loitering in the contained-liquid (to allow for MC's implicit motion through the contained-liquid), MC boils and enters the gas contained-stuff (state c) and eventually leaves. Because MC is parasitic on the piece-of-stuff ontology, its temperature is constant along the flow path.¹ Also, because MC is so small, the gradual vaporization along the fluid path is not represented. Instead, MC jumps from one state where it is completely liquid to another where it is completely vapor. This makes it impossible to directly represent scenarios where the exiting fluid is partially vaporized. Clearly, the MC ontology cannot represent the spatially distributed nature of flowing fluids.

Plug Flow Ontology

In this section, we introduce a new, nonparasitic plug flow ontology (PF) for reasoning about fluid flow. Like MC, the plug flow ontology is a macroscopic specialization of Hayes's piece-of-stuff ontology. The plug flow ontology describes a narrow slice or "plug" of fluid which travels along paths (see Figure 1). Unlike an MC which simply equilibrates with fluids it encounters, a plug is large enough to directly participate in the processes affecting the fluid. For example, when a plug is in a heat exchanger tube, it exchanges heat with entities outside the tube. When it is in a turbine, it pushes against the turbine blades and performs work.

Our plug flow ontology is restricted enough to avoid many of the problems associated with piece-of-stuff ontologies as described by Hayes (Hayes 1985). For example, a distinguishing feature of a plug is its physical contiguity. A plug completely fills a simply-connected region of space.
In other words, it exists in one piece and has no holes. Thus we remove from consideration scenarios in which the fluid breaks into separate parts as it moves to different locations and encounters varied conditions. Following the individual parts in such scenarios is impractical, both for human engineers and automated reasoners, and we make no allowances for it in our ontology.

¹That is, temperature is constant after equilibrating.

Another distinguishing feature of a plug is its single thermodynamic state. A plug has a single set of thermodynamic properties (e.g., a single temperature). However, this does not mean a plug is homogeneous. A plug is large enough to contain more than one phase. For example, a plug can be part water and part steam, as demonstrated by the partially vaporized plug at point d in Figure 1. Still, the plug has a single overall state which can be represented by a point in a thermodynamic diagram.

Although the plug flow ontology itself says nothing about when its viewpoint is appropriate or useful, it is intended for scenarios where fluid flow is orderly and steady-state. By orderly, we mean that no piece of fluid overtakes or mixes with any other fluid ahead or behind it. By steady-state, we mean that at any point along the fluid's path, conditions are constant with respect to time. In steady-state flow scenarios, all pieces of fluid act the same, allowing us to characterize their behaviors with a single representative plug.

Unlike MC, the plug flow ontology does not depend on a particular alternate ontology for deciding its appropriateness or for setting up the plug's global environment. Like all ontologies, the plug flow ontology is "pseudo-parasitic" in some sense. It is parasitic on an assumed state of the world. Only when an appropriate fluid flow is observed or inferred is PF appropriate.
It is unimportant from PF's perspective whether the achievement of that flow was inferred using the piece-of-stuff ontology, an alternate ontology describing fluid pressure waves and gradients, or a set of partial differential equations.

An important class of entities in the plug flow ontology are paths. Paths are logical entities inferred from the structure of a scenario. For example, pipes, heat exchangers, turbines, pumps and nozzles provide paths for plug flow. It is worth noting that a path does not require the existence of a solid physical conduit. For example, a path can correspond to a stream tube (Shapiro 1953) of air flowing around an airplane wing. Such an application is a clear example of how parasitism on a contained-stuff ontology can be inappropriate for a piece-of-stuff ontology.

As a plug flows through a path, the plug's width in the direction of flow is small enough that it can be considered to have a single location or position along the path. In QP theory terms, we define a quantity-type (position ?plug ?path) which defines a one-dimensional coordinate system for a ?plug along the arc of a ?path. When a plug's position in a particular path has a value in the range between zero and the length of the path (length ?path) inclusive, a QP view (in ?plug ?path) becomes active, indicating that the plug is in the path. When a plug is in a path, a quantity (velocity ?plug ?path) exists. If the velocity of a plug in a path is nonzero, a Path-Motion process becomes active, influencing the plug's position. If a plug's position quantity for a particular ?path is nonexistent, the (in ?plug ?path) view for that plug and path is inactive, indicating that the plug is not in the path. If a series of paths are connected end to end, continuous flow from one to another can be modeled.
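The path mechanics above can be sketched procedurally. This is not the paper's QPE encoding, only an illustrative simulation of the (in ?plug ?path) view and the Path-Motion process; all names and numbers are assumptions:

```python
# Sketch: a plug has a position quantity along the path it occupies; nonzero
# velocity advances position (Path-Motion), and reaching the end of a path
# transfers the plug to the next connected path.

class Path:
    def __init__(self, name, length, next_path=None):
        self.name, self.length, self.next_path = name, length, next_path

class Plug:
    def __init__(self, path):
        self.path, self.position = path, 0.0  # position quantity exists

    def in_path(self, path):
        """The (in ?plug ?path) view: active while position is in [0, length]."""
        return self.path is path and 0.0 <= self.position <= path.length

    def path_motion(self, velocity, dt):
        """Path-Motion: velocity influences position; past the end of a path,
        the old position quantity is replaced by one for the next path."""
        self.position += velocity * dt
        while self.position > self.path.length and self.path.next_path:
            self.position -= self.path.length   # new coordinate in next path
            self.path = self.path.next_path

boiler = Path("boiler-tube", 2.0, Path("turbine", 3.0))
plug = Plug(boiler)
plug.path_motion(velocity=1.0, dt=2.5)   # moves 2.5 units along the flow
print(plug.path.name, plug.position)     # turbine 0.5
```

The hand-off at the end of a path corresponds to the transitions that appear in the plug's total envisionment.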
As a plug leaves one path and enters another, its previous position quantity ceases to exist and a new position quantity for the next path becomes defined. These motions from place to place appear as transitions in the plug's total envisionment.

Modeling Thermodynamic Properties

To make the plug flow ontology useful for thermodynamic applications, we must represent the plug's thermodynamic properties and the processes that affect them. The processes we model are heat-flow, work-flow and its accompanying volume expansion/contraction, boiling, condensation, and throttling. Conspicuously absent from our list is mass-flow. Since a plug is a closed system, no mass enters or leaves it. In an Eulerian ontology, mass-flow at a location can be directly represented, but not in a Lagrangian ontology. The closest concept to mass-flow in the Lagrangian viewpoint is Path-Motion.

The nine basic thermodynamic plug properties we model are pressure P, temperature T, volume V, vapor pressure Vp, internal energy U, enthalpy H, entropy S, vapor fraction y, and mass m. To model the interactions between these properties, we use QP theory's causally directed connectives (basic knowledge of QP theory is assumed): direct influences (I), and qualitative proportionalities (qprops). Because these connectives have a fixed causal direction, we are compelled to decide a priori which way causation will flow in all circumstances. Using the causal clustering technique (Skorstad 1992), an extension of Iwasaki and Simon's causal ordering procedure (Iwasaki & Simon 1986), we can uncover stable causal interpretations for some thermodynamic equations. As shown in (Skorstad 1990), the Ideal Gas Law PV = RT and Joule's temperature-internal energy relation U = c_v T have a stable causal interpretation. These causal dependencies among P, V, T, and U are shown in the influence diagram of Figure 4. This diagram represents our causal theory for thermodynamics.
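A minimal sketch of how such a causal theory supports qualitative sign propagation. The encoding below is an illustrative assumption, not the paper's QPE representation, and it ignores the ambiguity that arises when several influences disagree:

```python
# Sketch: sign propagation through qualitative proportionalities of the
# causal theory (Figure 4): U determines T, T and V determine P, and T
# determines vapor pressure Vp. +1 is qprop+, -1 is qprop-.
QPROPS = [("T", "U", +1),   # Joule's relation: causally U -> T
          ("P", "T", +1),   # ideal gas law: causally T -> P ...
          ("P", "V", -1),   # ... and V -> P (inversely)
          ("Vp", "T", +1)]  # vapor pressure rises with temperature

def propagate(signs):
    """Fixpoint pass deriving signs of influenced quantities from their
    causes; the first applicable qprop wins (ambiguity is ignored here)."""
    signs = dict(signs)
    changed = True
    while changed:
        changed = False
        for target, source, sgn in QPROPS:
            if source in signs and target not in signs:
                signs[target] = sgn * signs[source]
                changed = True
    return signs

# Heat flows into a constant-volume plug: U rises, V is constant.
result = propagate({"U": +1, "V": 0})
print(result)  # T, P and Vp all rise
```

Because the connectives are causally directed, the derivation runs only one way: a rising U explains a rising P, but asserting a sign for P derives nothing about T.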
Notice the causal dependency of pressure on temperature. A change in temperature causes a change in pressure, but not vice versa. This asymmetry may seem odd. However, experiments done since Joule's time have shown that a drop in pressure without an accompanying work or heat flow has no effect on temperature.

[Figure 4: Causal theory for thermodynamic properties]

Although the causal dependencies between P, V, and T were derived from the ideal gas law, these same dependencies also hold for liquids. This is true even though the ideal gas law does not hold for liquids; the properties P, V, and T are linked together in some equation of state which also describes liquids. It is an article of faith in thermodynamics that the relationship between these properties can be described mathematically. (Even van der Waals' equation does a qualitatively good job modeling liquids and gases.) The causal clustering technique does not require knowing the exact form of an equation, only which variables are involved.

Boiling and Condensing

Traditionally, qualitative models of boiling/condensing have relied on the concept of a boiling point constant. The temperature of a liquid must be raised to its boiling point before boiling can begin. We build a more general model based on the observation that the boiling point of a liquid is the temperature at which the liquid's vapor pressure, Vp, equals the pressure P distributed throughout the fluid.² The vapor pressure of a liquid is the pressure exerted by the vapor molecules that escape from the liquid. When a substance is in the liquid state, its vapor pressure is less than the liquid's pressure. As the liquid's temperature rises, more molecules escape and vapor pressure rises. A liquid must be heated until its vapor pressure reaches the opposing pressure P before boiling can begin.
Alternately, if the liquid's pressure is decreased (e.g., by lowering the external confining pressure), boiling can begin once the liquid's vapor pressure is reached. The causal dependencies for boiling are represented in the top half of Figure 4. Vapor pressure Vp is causally dependent on temperature T. @R represents the rate of a boiling (or condensation) process. Hvap is the heat of vaporization, which is the energy expended when a liquid boils. In our theory, boiling and condensing can only become active when Vp and P are equal. In reality, there is evidence that vapor pressure must exceed fluid pressure, if only infinitesimally, in order to provide a driving force for boiling (Bohren 1989). Normally this disequilibrium is imperceptible and is ignored. In our theory, we follow the example of thermodynamic texts and idealize boiling/condensing by assuming that Vp and P are in equilibrium.³

²We ignore the fact that pressure increases with depth.

Since our model excludes a disequilibrium driving force for boiling, what is to be the cause of boiling? This problem is more difficult than may be realized at first. Boiling does not occur simply because a liquid at its boiling point is heated. In fact, if a liquid at its boiling point (Vp = P) is heated at constant volume (e.g., in a sealed container), it moves away from its boiling point. To model boiling, we make use of a qualitative version of Clapeyron's equation: dP/dT = ΔH/(TΔV). Clapeyron's equation constrains the behavior of temperature T and pressure P during boiling and condensation. ΔH and ΔV are the change in enthalpy and volume, respectively, that occur when a liquid boils. Both are positive numbers. Thus, if pressure increases during boiling, temperature must also increase, and vice versa. It can also be shown that if pressure is constant during boiling, temperature must also be constant.
In QP terminology we have Ds[P] = Ds[T], where Ds is the sign of a derivative. Using this qualitative Clapeyron constraint, we state our boiling conditions: Boiling (or condensation) occurs when y (gas fraction) < 1 (or 0 < y for condensation), Vp = P, and it is required to satisfy Clapeyron's constraint.

To complete our theory of boiling/condensing, we must describe its effect on other fluid properties. It is well known that when a liquid converts to the higher energy vapor form (i.e., when it boils), the fluid cools.⁴ We represent this with a qprop from boiling's heat of vaporization Hvap to temperature T, as shown in Figure 4. Also, it is known that when a liquid vaporizes, the more energetic vapor molecules increase the force of collisions in the entire fluid.⁵ In other words, pressure increases. We represent this with a qprop from gas fraction y to pressure P.

Thermodynamic Constraints

The qualitative influence diagram of Figure 4 will sometimes yield behavioral ambiguity that can be reduced by adding global thermodynamic constraints. One useful constraint is a corollary of the second law of thermodynamics. The entropy S of an adiabatic system (i.e., engaged in no heat flow) must increase or remain constant. To further constrain behavior, there are a number of qualitative constraints which can be gleaned by examining thermodynamic diagrams of substances. For example, it can be shown that for fluids, (∂S/∂V)_P > 0. In other words, when a fluid's pressure is constant, volume V and entropy S change in the same direction: Ds[V] = Ds[S]. Similarly, it can be shown that (∂T/∂V)_S < 0.

³Nonequilibrium boiling models built by the author also yielded very complex behaviors.
⁴For a microscopic level explanation of this, see (Rajamoney & Koo 1990) and (Amador & Weld 1990).
⁵This can be demonstrated by boiling a liquid in a cylinder fitted with a piston. As the liquid boils, the piston rises.
That is, when a fluid's entropy is constant, temperature T and volume V change in opposite directions: Ds[T] = -Ds[V]. For a justification of these constraints, see (Skorstad 1992).

Example

Using the plug flow theory outlined above, QPE (an implementation of QP theory) produces envisionments which explicitly describe the continuously changing nature of fluid flow along paths. For example, in a typical boiler tube scenario, QPE generates the 11 state total envisionment shown in Figure 5. The QPE input for this scenario includes a description of the entities involved (a plug, pipe and furnace) and constraints imposed to limit the size of the envisionment. In this scenario, we constrain behavior by asserting that the plug enters as subcooled liquid. Also, motion through the tube is asserted to be in the positive direction, pressure is constant,⁶ and the furnace is always hotter than the plug.

The plug history from state a to g corresponds to a fluid behavior already described (see Figure 1). In addition, the envisionment describes alternative histories. For example, the plug may exit the boiler partially vaporized as shown at point i. Many aspects of the fluid's behavior have been captured which are impossible to represent in a contained-stuff or MC ontology. The plug flow envisionment makes explicit the fact that: (i) gas exiting the boiler may be superheated, as shown at state g, (ii) fluid temperature rises from a to c, and from e to g, (iii) fluid temperature is constant from c to e, and (iv) internal energy U, entropy S and enthalpy H rise continuously from inlet a to outlet g.

Discussion

Recognizing the spatially distributed behaviors of fluid flow is important for many reasoning tasks in thermodynamics. Our work in this area is most similar to (Throop 1989), who describes extensions to Kuipers' QSIM that permit spatial qualitative simulation. While his work focuses on the qualitative simulation task, ours deals mainly with model building.
Spatially distributed behaviors are so important that diagrams of them in the form of state trajectories are standard thermodynamic tools in the analysis of many systems. As a piece of fluid travels through a system, its changing properties describe a continuous trajectory through thermodynamic space. Engineers routinely use these thermodynamic trajectories to qualitatively describe and reason about power cycles, refrigeration, liquefaction, throttling, flow through nozzles and many other phenomena. For example, by comparing the trajectory of a simple steam plant with a hypothetical Carnot cycle, an engineer can easily see why the steam plant is less efficient. We view the plug flow ontology as a first step towards building a qualitative representation of fluid flow trajectories. Besides the heat exchanger example shown above, we have also used the plug flow ontology to model the trajectory of fluid flow through a turbine. We believe that qualitative representations of such behaviors will provide an important framework for automated reasoning about thermodynamic systems.

⁶This is a standard assumption in boiler analyses.

Acknowledgments: Thanks to Ken Forbus, John Collins and Janice Skorstad for insightful discussions. This work is supported by the Office of Naval Research.

References

Amador, F. G. and Weld, D., "Multi-level Modeling of Populations", in Proceedings of the 4th Int. Qualitative Physics Workshop, Lugano, Switzerland, 1990.
Bohren, C., "Boil and Bubble, Toil and Trouble", Weatherwise, April 1989, pp. 104-108.
Collins, J. W. and Forbus, K. D., "Reasoning about Fluids via Molecular Collections", Proceedings AAAI-87, 1987, pp. 590-594, Seattle, WA.
Coughanowr, D. R. and Koppel, L. B., Process Systems Analysis and Control, McGraw-Hill, 1965.
Falkenhainer, B. and Forbus, K., "Compositional modeling: finding the right model for the job", Artificial Intelligence, 51, 1991.
Forbus, K., "Qualitative process theory", Artificial Intelligence, 24, 1984.
Hayes, P., "Naive Physics I: Ontology for Liquids", in J. R. Hobbs and R. C. Moore, editors, Formal Theories of the Commonsense World, chapter 3, 1985.
Iwasaki, Y. and Simon, H. A., "Causality in Device Behavior", Artificial Intelligence, 29, July 1986.
Rajamoney, S. and Koo, S., "Qualitative Reasoning with Microscopic Theories", Proceedings AAAI-90.
Shapiro, A. H., The Dynamics and Thermodynamics of Compressible Fluid Flow, John Wiley & Sons, 1953.
Skorstad, G., "Finding Stable Causal Interpretations of Equations", in B. Faltings and P. Struss, Recent Advances in Qualitative Physics, MIT Press, 1992.
Skorstad, G., "Clustered Causal Ordering", in Proceedings of the 4th Int. Qualitative Physics Workshop, Lugano, Switzerland, 1990.
Skorstad, G., "Towards a Qualitative Lagrangian Theory of Fluid Flow", to appear as a technical report, University of Illinois, 1992.
Smith, J. M. and Van Ness, H. C., Introduction to Chemical Engineering Thermodynamics, McGraw-Hill, 1975.
Throop, D., "Spatial Unification: Qualitative Spatial Reasoning about Steady State Mechanisms. An Overview of Current Work.", Proceedings, 3rd Int. Qualitative Physics Workshop, Palo Alto, CA, 1989.
Qualitative Structure of a Mechanical Assembly

Randall H. Wilson* Jean-Claude Latombe
rwilson@cs.stanford.edu latombe@cs.stanford.edu
Robotics Laboratory, Department of Computer Science, Stanford University
Stanford, CA 94305, USA

Abstract

A mechanical assembly is usually described by the geometry of its parts and the spatial relations defining their positions. This model does not directly provide the information needed to reason about assembly and disassembly motions. We propose another representation, the non-directional blocking graph, which describes the qualitative internal structure of the assembly. This representation makes explicit how the parts prevent each other from being moved in every possible direction of motion. It derives from the observation that the infinite set of motion directions can be partitioned into a finite arrangement of subsets such that over each subset the interferences among the parts remain qualitatively the same. We describe how this structure can be efficiently computed from the geometric model of the assembly. The (dis)assembly motions considered include infinitesimal and extended translations in two and three dimensions, and infinitesimal rigid motions.

Introduction

Many application tasks, such as design, process planning, and robot programming, require reasoning about mechanical assemblies. Typical questions that arise in such tasks are: In which order can a physical device A be assembled or disassembled? How many hands are required? What are all the parts or subassemblies that can be directly removed from A? Is it possible to extract a given subassembly without previously removing some other part? What is the minimal number of parts that should be removed prior to the extraction of a specified subassembly?
These are, for example, the kind of questions an autonomous maintenance robot in a space station would have to routinely answer in order to plan (dis)assembly operations for diagnosing failures and repairing hardware equipment on board the station. In a different domain, a smart CAD system would help design products that are easier to manufacture and maintain by answering such questions. For instance, while a new product is being designed, it could verify that this product (at its current stage of definition) can be assembled with simple motions and that all critical parts are easily removable for inspection and/or repair.

*This research was funded by DARPA contract N00014-88-K-0620 (Office of Naval Research), the Stanford Integrated Manufacturing Association, and Digital Equipment Corporation.

An assembly A is usually represented as a geometric model, which describes the individual parts composing A and the spatial relations among them. However, this model does not directly provide the information that would allow us to easily answer the above questions. We propose another representation of A, in the form of a qualitative structure that explicitly describes how the parts prevent each other from being removed from A along every possible direction of motion. This representation, the non-directional blocking graph, or NDBG, makes it possible to efficiently answer these questions. We show how it can be computed from the original geometric model of the assembly.

The notion of an NDBG derives from the remark that the infinite set of motion directions can be partitioned into an arrangement of subsets, called regular regions, such that over each region the interferences among the parts remain qualitatively the same.
The boundary between two regular regions consists of critical directions where one or more interferences change suddenly; for example, a part P1 may block the motion of a part P2 along any direction in one region, but along none in the other region. In each regular region, a directed graph, called the directional blocking graph, or DBG, represents the interference relations among the parts of the assembly for any direction of motion in the region. The set of all regular regions, their adjacency relation, and the corresponding DBGs together form the NDBG of the assembly.

The NDBG is a good example of a qualitative representation of a physical device obtained by discretizing a continuous set (the set of motion directions for an object) into a finite number of classes (the regular regions) based on the identification of physical criticalities (the boundaries of the regular regions), such that one can associate a constant symbolic structure (the DBG) to every class. We will show that the size of the NDBG is polynomial in the number of parts in the assembly and the complexity of these parts, and that it can be computed in polynomial time from the original geometric model of the assembly. Our algorithm computes the DBG for one regular region, and then uses this DBG to derive the DBGs in the adjacent regions by applying a simple crossing rule. The transitive closure of this crossing rule yields the full NDBG.

In the next section we review previous related work. In succeeding sections we define the notions of DBG and NDBG for the simplest case of infinitesimal translations, and describe the computation of the NDBG. Finally, we extend the notion of an NDBG (and its computation) to infinitesimal motions in translation and rotation, and to extended translations.
This paper combines and extends previous results presented in [Wilson, 1991; Wilson and Matsui, 1992; Wilson and Schweikard, 1992].

Related Work

Reasoning about mechanical assemblies has attracted the interest of AI researchers for a long time. E.g., see BUILD [Fahlman, 1974], NOAH [Sacerdoti, 1977], and RAPT [Popplestone et al., 1980]. It plays a key role in assembly sequence planning, a topic which has recently been under active study. See [Homem de Mello and Lee, 1991] for a collection of papers on this topic. The concept of local freedom of translation used in the definition of a DBG was previously introduced in this context [Homem de Mello, 1989]. The constraints on the feasible infinitesimal motions in translation and rotation of a part, imposed by a contact with another part, are analyzed in [Hirukawa et al., 1991; Ohwovoriole, 1980], yielding the extension of the NDBG to infinitesimal generalized motions.

Related work in Computational Geometry includes methods for separating sets [Toussaint, 1985]. Given an assembly A in the plane, the problem of finding a subassembly S ⊂ A removable from the rest of A by a single translation is addressed in [Arkin et al., 1989]. An algorithm to construct a sequence of translations separating two polygons is given in [Pollack et al., 1988]. The minimum number of hands needed to (dis)assemble a given device is investigated in [Natarajan, 1988]. See also the work in motion planning [Latombe, 1991].

The notion of an "aspect graph" used in Computer Vision has the same qualitative flavor as the NDBG. The aspect graph of an object describes the appearance of the object from all possible viewpoints. Aspect graphs were first computed by discretizing the set of orientations. Recent algorithms take advantage of the fact that, except at critical orientations, the occluding contours of an object remain qualitatively (i.e. topologically) the same for small changes in the viewpoint (e.g., [Kriegman and Ponce, 1990]).
[Figure 1: A simple assembly and two DBGs]

Qualitative reasoning in continuous domains is a major topic in AI [de Kleer and Williams, 1991]. It requires discretizing a space into regions that are then treated as single entities. Often this discretization is not based on any sort of criticality (discontinuity, singularity, or event) between regions. When criticalities are identified, as in this paper, they yield a more meaningful discretization. A subdomain, qualitative kinematics [Joskowicz and Sacks, 1991], studies the internal motions of parts in an operational device.

Directional Blocking Graph

We consider an assembly A made of n parts P1, ..., Pn. We assume that each part is described as a bounded connected regular¹ subset of R2 (2D case) or R3 (3D case). The interiors of any two parts in A are disjoint. If the boundaries of two parts intersect, the two parts are said to be in contact. The assembly may or may not be connected.

Suppose that we wish to remove one part, say Pi, by translating it along a direction defined by the unit vector d. A part Pj of A (j ≠ i) blocks the translation of Pi along d iff an arbitrarily small translation of Pi along d leads the interiors of Pi and Pj to intersect. Hence, if Pj blocks Pi, the two parts are necessarily in contact. A subassembly S of A, i.e. a subset of its parts, is locally free in direction d iff no part in A\S blocks the translation of any part of S along d.

The directional blocking graph of A for an infinitesimal translation along d, denoted by G(d, A), is defined as follows. The nodes of G(d, A) represent the parts P1 through Pn. An arc of G(d, A) connects Pi to Pj iff Pj blocks the translation of Pi along d. Fig. 1 shows a simple assembly of 4 polygonal parts and its DBGs for two directions d1 and d2. A subassembly S of A is locally free in direction d iff no arcs in G(d, A) connect parts in S to parts in A\S.
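A DBG and the locally-free test are easy to sketch as an arc list. The arcs below are hypothetical, chosen to be consistent with the d1 example discussed in the text; the actual DBG of Figure 1 may differ:

```python
# Sketch: a DBG as a list of arcs (i, j), meaning part Pj blocks part Pi's
# translation along d. S is locally free along d iff no arc leaves S.

def locally_free(subassembly, blocking_arcs):
    """S is locally free iff every arc starting at a part of S ends inside S
    (i.e., no part outside S blocks a part of S)."""
    s = set(subassembly)
    return all(j in s for (i, j) in blocking_arcs if i in s)

# Hypothetical G(d1, A) for a four-part assembly:
# P2 blocks P3, P3 blocks P4, P1 blocks P2 along d1.
g_d1 = [(3, 2), (4, 3), (2, 1)]

print(locally_free({1, 2}, g_d1))      # True: {P1, P2} can translate along d1
print(locally_free({1, 2, 3}, g_d1))   # True
print(locally_free({3, 4}, g_d1))      # False: P2 blocks P3
```

Representing the DBG as arcs also makes the strong-component observation of the next paragraph directly computable with any standard strongly-connected-components routine.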
If G(d, A) is strongly connected, no such subassembly exists. Otherwise, at least one strong component of G(d, A) must have no outgoing arcs. For example, in Fig. 1 the subassemblies {P1, P2} and {P1, P2, P3} are locally free in direction d1; only the subassembly {P3, P4} is locally free in direction d2. For a subassembly S to be removable by a translation along d without previously removing other parts of A, it must be locally free in direction d. This condition is necessary but not sufficient. Also, the removal of S may require rotation. For those reasons, we will extend the notion of an (N)DBG to infinitesimal motions with rotation and non-infinitesimal motions below.

¹A subset of a topological space is regular iff it is equal to the closure of its interior.

[Figure 2: Blocking directions for two parts in contact]

From now on, we will assume that all the parts in the assembly A are polygonal (2D case) or polyhedral (3D case). We will also assume that the intersection of any two parts of A either is empty or consists of straight contact segments of non-zero lengths (2D case) or pieces of planar contact surfaces of non-zero areas (3D case). Both assumptions could be retracted, but at the cost of lengthening our presentation without bringing much additional insight. For the first assumption, we could accept parts bounded by algebraic curves (or surfaces) of any degree; this would seriously complicate the computation of the NDBGs. For the second, we could allow two parts to touch, say, at an isolated point; this would require us to explicitly consider additional, mostly straightforward, particular cases. These cases are treated in [Wilson, 1992].

2D case. We represent the set of all translation directions by the unit circle S1. This circle is the locus of the extremity of the vector d when its origin is attached at the center of S1. Let Pi and Pj be two parts in contact, whose intersection consists of the contact segments E1, ..., Eq (Fig.
2). For each segment Ek, we draw the diameter of S1 parallel to Ek. The two endpoints of this diameter partition S1 into an open half-circle Ik and a closed half-circle S1 \ Ik, with the open half-circle Ik containing the direction pointed by the outer normal to Pi in Ek. Hence, for all directions in Ik, Pj blocks Pi. The union J = I1 ∪ ... ∪ Iq forms one open connected circular arc, or two non-intersecting open half-circles, or the full circle. The complement of J in S1 is a closed circular arc, or a single point, or two antipodal points, or the empty set. It contains all the directions d along which Pi is locally free to translate relative to Pj.

For every contact segment in the intersection of two parts of A, we draw the diameter parallel to that segment. The endpoints of all the diameters and the open circular arcs between them form a partition of S1. We regard each element of this partition as a subset (possibly, a singleton) of directions.

Figure 3: Part of the NDBG of the assembly of Fig. 1

The directional blocking graph G(d, A) remains constant when d varies over any such subset. Indeed, by construction of the partition, if a part Pj blocks another part Pi for any direction in the subset, then Pj also blocks Pi for any other direction in the same subset. Any subset R in the partition of S1 is called a regular region and we denote the DBG of A for any direction in R by G(R, A). Let (R1, ..., Rp) be the list of regular regions on S1, with Rk and Rk+1 (mod p) (k ∈ [1, p]) adjacent. The non-directional blocking graph of A is Γt(A) ≡ ((R1, G(R1, A)), ..., (Rp, G(Rp, A))). It represents the blocking structure of A for infinitesimal translations. Part of the NDBG of the assembly of Fig. 1 is shown in Fig. 3. The partition of S1 in this example consists of 20 regular regions.

3D case We represent the set of translation directions by the unit sphere S2.
With each contact surface between two parts Pi and Pj we associate a plane parallel to this surface and passing through the center of S2. This plane cuts S2 along a great circle that partitions the sphere into an open half-sphere and a closed one; the open half-sphere contains the direction of the outer normal to Pi in the contact surface. For every planar contact surface in the intersection of two parts of A, we draw the corresponding great circle on S2. The set of obtained circles determines an arrangement of regions of three types: vertices lie at the intersection of two or more great circles; edges are maximal open connected arcs in great circles not including vertices; and faces are maximal open connected pieces of spherical surface not intersecting edges and vertices. By construction, each region of the arrangement is regular in the sense that the directional blocking graph G(d, A) remains constant when d varies over the region.

Let G(R, A) denote the DBG of A for any direction in R. Let R = {R1, ..., Rp} be the set of regular regions partitioning S2. Let Λ = (R, L) be the non-directed graph representing the adjacency of these regions. A link of L connects any two regions Ri and Rj such that the boundary of one contains the other.

Wilson and Latombe 699

The non-directional blocking graph of A is Γt(A) ≡ ((R1, G(R1, A)), ..., (Rp, G(Rp, A)), L).

Computation of Blocking Graph We assume that the input consists of the geometric models of the parts in the assembly and the specification of the contact segments (or surfaces) between parts. In many situations the latter information is not given explicitly and must be extracted from input spatial relations among the parts, as in [Wilson, 1992]. Let n be the number of parts in A and c the total number of contact segments (or surfaces). We represent each DBG as an n x n adjacency matrix.

2D case The partition of S1 contains O(c) regular regions and is easily computed in O(c log c) time.
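As a concrete illustration, the 2D partition step can be sketched in a few lines. The representation is an assumption of this sketch: each contact segment is reduced to its orientation angle, and the regular regions are recovered as the endpoint directions of the parallel diameters plus the open arcs between consecutive ones.

```python
import math

def region_boundaries(contact_angles, eps=1e-9):
    """Boundary directions of the regular regions of S1.

    contact_angles: orientation (radians) of each contact segment.
    Each segment contributes a diameter with endpoints at theta and
    theta + pi; the regular regions are these singleton directions
    plus the open arcs between consecutive ones.
    """
    points = []
    for theta in contact_angles:
        for end in (theta % (2 * math.pi), (theta + math.pi) % (2 * math.pi)):
            # merge endpoints that coincide up to numerical tolerance
            if not any(abs(end - p) < eps for p in points):
                points.append(end)
    return sorted(points)

# One horizontal contact segment: two singletons and two open half-circles.
print(region_boundaries([0.0]))  # [0.0, 3.141592653589793]
```

The quadratic duplicate check is only for brevity; sorting the O(c) endpoints is what yields the O(c log c) bound quoted above.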
For each regular region R, we select a direction d in R and compute the DBG G(d, A). After clearing the adjacency matrix for G(d, A), each contact is considered separately. For a contact between parts Pi and Pj, the inner product of d with the outer normal to Pi in the contact segment with Pj is computed. If the inner product is strictly positive, an arc from Pi to Pj is added to G(d, A) (if it does not already exist). The computation of G(d, A) takes O(n2 + c) time. By repeating this computation for all regular regions, the NDBG is constructed in O(cn2 + c2) time.

This computation can be reduced by noticing that there is little or no change between the DBGs of two adjacent regions. This leads to computing the DBG for one region, call it R1, and then incrementally modifying this graph to get the DBG for the next region in the NDBG list, and so on, until all the regions have been considered. To that purpose, we slightly modify the DBG of a region by attaching a weight to each arc of the graph. In G(R1, A), this weight is the number of inner products that were strictly positive in the above computation. The absence of an arc from Pi to Pj is treated as an arc of weight 0, and conversely.

Let R1 be a circular arc. The next region R2 in the (circular) NDBG list is necessarily a singleton. Let D be the diameter of S1 that ends at R2, and {E1, ..., Es}, s ≥ 1, be the set of all contact segments in A parallel to D. G(R2, A) can be derived from G(R1, A) by applying the following crossing rule: "Initialize G to G(R1, A). For every contact segment Ek, let Pi and Pj be the two parts sharing this segment. If the inner product of any direction in R1 and the outgoing normal to Pi in Ek is strictly positive, then subtract 1 from the weight of the arc connecting Pi to Pj in G." The graph G obtained at the end of the loop is G(R2, A). (Again, every arc weighted by 0 is interpreted as no arc.)
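The per-region computation above can be sketched as follows. The contact representation (ordered part pairs carrying the outer normal to the first part) is an assumption of this sketch, and weights are kept from the start so that the same matrix can later be updated by the crossing rule.

```python
def blocking_graph(n, contacts, d):
    """Weighted DBG G(d, A) as an n x n matrix of arc weights.

    contacts: (i, j, normal) triples, one per ordered pair of parts
    sharing a contact segment, where `normal` is the outer normal to
    part i at that contact (so each physical contact appears twice,
    with opposite normals).  w[i][j] > 0 means part j blocks part i
    along d; weight 0 is read as "no arc", as in the text.
    """
    w = [[0] * n for _ in range(n)]
    for i, j, (nx, ny) in contacts:
        # part j blocks part i along d iff d has a strictly positive
        # component along the outer normal to part i at the contact
        if d[0] * nx + d[1] * ny > 0:
            w[i][j] += 1
    return w

# Part 0 resting on part 1; its outer normal at the contact points down.
contacts = [(0, 1, (0.0, -1.0)), (1, 0, (0.0, 1.0))]
print(blocking_graph(2, contacts, (0.0, -1.0)))  # [[0, 1], [0, 0]]
```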
If R1 is a singleton and R2 a circular arc, the crossing rule is: "Initialize G to G(R1, A). For every contact segment Ek, let Pi and Pj be the two parts sharing this segment. If the inner product of any direction in R2 and the outgoing normal to Pi in Ek is strictly positive, then add 1 to the weight of the arc connecting Pi to Pj in G."

The cost of computing the NDBG is the sum of: O(c log c), the cost of partitioning S1; O(n2 + c), the cost of computing the first DBG; O(cn2), the cost of copying the remaining O(c) DBGs; O(c), the cost of computing the remaining DBGs. [Note: The cost of computing a DBG by applying the crossing rule is proportional to the number s of contact edges involved in the computation. This number is in O(c), but throughout the computation of the entire NDBG, each edge is considered only twice. Hence, the time complexity of the computation of all the remaining DBGs is only O(c).] Therefore, the total cost of computing Γt(A) is O(c log c + cn2). The size of Γt(A) is O(cn2).

3D case The main difference with the 2D case is in the computation of the arrangement of great circles on the sphere. Each great circle is derived from a contact surface between two parts, so there are c great circles. We project them into a plane tangent to the sphere and not parallel to any of the great circles, using the central projection from the center of the sphere. We obtain an arrangement of c lines in the plane that intersect at O(c2) points, producing O(c2) regions. These regions and their adjacency relations can be computed in optimal O(c2) time using a topological sweep [Chazelle et al., 1985; Edelsbrunner, 1987] and in O(c2 log c) time using a simpler line sweep [Preparata and Shamos, 1985]. The arrangement on the sphere is a direct by-product of the computed arrangement in the plane and has the same size. The rest of the computation is similar to the 2D case. The DBG is computed in an arbitrarily selected region of the arrangement.
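Once a DBG is available for some region, the criterion stated at the beginning of the paper (a strong component of G(d, A) with no outgoing arcs is locally free) can be checked directly. A sketch on the weighted adjacency matrix, using a textbook Kosaraju pass; the function name is illustrative:

```python
def removable_candidates(w):
    """Strong components of a DBG with no outgoing arcs.

    w: n x n matrix of arc weights, w[i][j] > 0 meaning part j blocks
    part i.  Each returned component is a locally free subassembly; if
    the graph is strongly connected the only component is the whole
    assembly, i.e. no proper subassembly is locally free.
    """
    n = len(w)
    order, seen = [], [False] * n

    def dfs(v, adj, out):
        stack = [v]
        while stack:
            u = stack.pop()
            if u >= 0:
                if seen[u]:
                    continue
                seen[u] = True
                stack.append(~u)  # marker: emit u in post-order
                stack.extend(x for x in range(n) if adj(u, x) and not seen[x])
            else:
                out.append(~u)

    for v in range(n):  # Kosaraju, pass 1: finish-time order
        if not seen[v]:
            dfs(v, lambda a, b: w[a][b] > 0, order)
    seen = [False] * n
    comps = []
    for v in reversed(order):  # pass 2: sweep the transposed graph
        if not seen[v]:
            comp = []
            dfs(v, lambda a, b: w[b][a] > 0, comp)
            comps.append(set(comp))
    # keep the components with no arc leaving them
    return [c for c in comps
            if not any(w[i][j] > 0 for i in c for j in range(n) if j not in c)]

# Part 1 blocks part 0 (arc 0 -> 1): only {1} is locally free.
print(removable_candidates([[0, 1], [0, 0]]))  # [{1}]
```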
The DBG in an adjacent region is computed using a straightforward adaptation of the crossing rule stated in the 2D case. Γt(A) is computed in O(c2n2) time. Note that the size of Γt(A) is O(c2n2), and thus this algorithm is optimal in the 3D case.

Incremental modification Assume that we modify A by the addition (or deletion) of δn new parts generating (resp., suppressing) δc contact edges. If δn << n and δc << c, then rather than computing the NDBG of the new assembly from scratch, we can compute it by modifying Γt(A) at a relatively low cost. For lack of space we will not describe this computation here.

Property The crossing rule between two regions entails the following property of Γt(A): "Let R1 and R2 be two regular regions such that R1 is in the boundary of R2. If there exists an arc from Pi to Pj in G(R1, A), this arc also exists in G(R2, A). In particular, if G(R1, A) is strongly connected, so is G(R2, A)." This is useful to more efficiently exploit the NDBG.

Implementation Methods similar to the above (specifically, those described in [Wilson, 1991; Wilson, 1992]) were implemented in LISP on a DECstation 5000 using floating-point arithmetic. The software has been used to plan assembly sequences for several 2D and 3D products, including a 22-part electric bell and the 42-part engine shown in Fig. 4.

700 Representation and Reasoning: Qualitative

Figure 4: An industrial example

Figure 5: The need for rotation

Extension and Variant

Infinitesimal rigid motions One important extension to the NDBG allows motions in rotation. Fig. 5 shows a simple case where one part blocks another for any infinitesimal translation, while a rotation is feasible. Let us consider the 3D case only (the 2D case is just simpler). The direction of an infinitesimal rigid motion is described by a unit vector in a 6D space. Hence, we represent the set of all possible directions of motion by the unit sphere S5 in R6.
The definition of an NDBG for infinitesimal translations extends to infinitesimal generalized motions in the straightforward way. We denote this new NDBG by Γr(A).

Consider parts Pi and Pj sharing a piece of planar surface. For each vertex v of the convex hull of the contacting area, the set of all directions of motion that make Pi slide in contact with Pj at v is the intersection of S5 with a 5-dimensional hyperplane passing through the center of S5 [Wilson and Matsui, 1992]. This hyperplane partitions S5 into two half-spheres. The set of hyperplanes determined by the contact surfaces among the parts of A determines an arrangement of regular regions of dimensions 0 (vertex), 1, ..., 5 on S5. The DBG is constant over each regular region. Since a vertex in the arrangement arises at the intersection of five hyperplanes, the arrangement contains O(c5) regions, where c is the number of vertices bounding the convex hulls of contacts. It is constructed in O(c5) time by a multi-dimensional topological sweep [Edelsbrunner, 1987]. Constructing a DBG in any region is done in time O(cn2). A crossing rule similar to the pure translational case can be established, yielding an O(c5n2) time algorithm to build the NDBG for a 3D assembly. Again, since the NDBG is of size O(c5n2), this method is optimal.

A necessary condition for a subassembly S to be directly removable from A is that there exists a DBG G in Γr(A) such that no arcs in G connect parts in S to parts in A\S. However, this condition is not sufficient.

Extended translations A useful variant of the NDBG is to consider non-infinitesimal motions. The variant is relatively simple if we restrict motions to extended translations. An extended translation is an infinite translation along a single direction d ∈ S1 (2D case) or S2 (3D case).
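For convex polygonal parts, the extended-translation blocking relation reduces to a ray test against a Minkowski difference; this anticipates the construction described next. The convexity restriction and all names below are assumptions of this sketch:

```python
from itertools import product

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    # Andrew's monotone-chain convex hull
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def ray_hits_segment(d, p, q, eps=1e-9):
    # solve t*d = p + u*(q - p) with t >= 0 and 0 <= u <= 1
    rx, ry = q[0] - p[0], q[1] - p[1]
    det = rx * d[1] - ry * d[0]
    if abs(det) < eps:  # ray parallel to the edge: ignored here
        return False
    t = (rx * p[1] - ry * p[0]) / det
    u = (d[0] * p[1] - d[1] * p[0]) / det
    return t >= -eps and -eps <= u <= 1 + eps

def blocks(Pj, Pi, d):
    """Does (convex) Pj block an extended translation of (convex) Pi
    along d?  True iff the ray {t*d : t >= 0} meets the convex hull of
    the Minkowski difference of Pj and Pi (parts have disjoint
    interiors, so the ray cannot start strictly inside the hull)."""
    diff = [(ax - bx, ay - by) for (ax, ay), (bx, by) in product(Pj, Pi)]
    poly = hull(diff)
    return any(ray_hits_segment(d, poly[k], poly[(k + 1) % len(poly)])
               for k in range(len(poly)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
above = [(x, y + 2) for x, y in square]
print(blocks(above, square, (0.0, 1.0)), blocks(above, square, (1.0, 0.0)))  # True False
```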
Then, given any two parts Pi and Pj in A (these two parts need not be in contact), we say that Pj blocks the translation of Pi along d iff the area that Pi sweeps out when it translates along d from its initial position in A to infinity intersects Pj. The notions of a DBG and an NDBG for extended translations follow from this new blocking relation. We denote the new NDBG by Γe(A).

Again, consider the 3D case. Let Pi and Pj be any two parts in A. The set B of directions d along which Pj blocks Pi is identical to the set of directions along which the "grown" object Pj ⊖ Pi = {aj − bi | aj ∈ Pj, bi ∈ Pi} (i.e., the Minkowski difference of the two sets of points Pj and Pi at their position in A) blocks the translation of the origin O [Lozano-Perez, 1983]. It can be shown that if Pi and Pj are polyhedra, then Pj ⊖ Pi is also a polyhedron [Latombe, 1991; Lozano-Perez, 1983]. Hence, the set B is the intersection of S2 and the polygonal cone of all rays erected from O and intersecting Pj ⊖ Pi. This intersection is a region of S2 bounded by segments of great circles. The great circles supporting these segments form an arrangement of regions (vertices, edges, and faces). Each region is regular in the sense that the DBG for extended translations remains constant when d varies in it. The arrangement and the associated DBGs form the NDBG of the assembly for extended translations. See [Wilson and Schweikard, 1992] for an algorithm to compute the NDBG for extended translations.

A sufficient condition for a subassembly S to be directly removable from A is that there exists a DBG G in Γe(A) such that no arcs in G connect parts in S to parts in A\S. This condition is not necessary.

The property stated in the previous section for Γt(A) also holds for Γr(A) and Γe(A).

Conclusion

The classical geometric model of a physical device contains enough information to answer questions about the (dis)assembly of the device. However, this information is not explicitly given.
We have developed another representation, the non-directional blocking graph, which explicitly describes the internal qualitative blocking structure of the device. We have defined this representation for three types of motions: infinitesimal translations (Γt(A)), infinitesimal generalized motions (Γr(A)), and extended translations (Γe(A)). We have shown how it can be derived from the original geometric model of the assembly.

Γt(A), Γr(A), and Γe(A) can be used to efficiently answer various questions about the (dis)assembly of A. For instance, Γe(A) can be used to determine whether A can be assembled (or disassembled) with extended translations only. This information can help design for manufacturability, since the cost of manufacturing a product depends critically on the complexity of the motions required by its assembly. If extended translations are not sufficient, Γt(A) and Γr(A) can then be used to check whether the device may be (dis)assembled with more complex motions. These two NDBGs only provide necessary conditions. If the product satisfies them, a more sophisticated path planner such as the one described in [Barraquand and Latombe, 1991] is needed to give a definitive answer. Such a planner is costly to run. The NDBGs, acting as low-cost filters, considerably reduce the number of times it need be called. In fact, in our experiments on industrial assemblies, NDBGs have supplied the vast majority of part motion calculations required for assembly planning.

The notion of an NDBG as presented in this paper still has a number of shortcomings. For instance, it treats parts as if they were free to fly with arbitrary precision. Grasping, fixturing, tolerancing, and uncertainty are important problems not addressed in our NDBGs. They also assume that every (dis)assembly operation involves the motion of a single subassembly relative to the rest of the assembly.
This corresponds to having only two hands (the assembly table, if any, being one). Although a "well-designed" product usually satisfies this constraint, the (dis)assembly of an arbitrary device with n parts may require up to n hands (i.e., it may require every part to move relative to every other part simultaneously) [Natarajan, 1988]. Despite these limitations, we think that NDBGs are clean, useful structures on top of which one may build more complicated ones.

References

Arkin, E. M.; Connelly, R.; and Mitchell, J. S. B. 1989. On monotone paths among obstacles, with applications to planning assemblies. In Proc. of the ACM Symp. on Computational Geometry. 334-343.

Barraquand, J. and Latombe, J-C. 1991. Robot motion planning: A distributed representation approach. Int. Journal of Robotics Research 10(6):628-649.

Chazelle, B.; Guibas, L. J.; and Lee, D. T. 1985. The power of geometric duality. BIT 25:76-90.

de Kleer, J. and Williams, B. C. (guest editors) 1991. Special volume, Qualitative Reasoning about Physical Systems II. Artificial Intelligence 51(1-3).

Edelsbrunner, H. 1987. Algorithms in Combinatorial Geometry. Springer, Heidelberg.

Fahlman, S. E. 1974. A planning system for robot construction tasks. Artificial Intelligence 5(1):1-49.

Hirukawa, H.; Matsui, T.; and Takase, K. 1991. A general algorithm for derivation and analysis of constraint for motion of polyhedra in contact. In Proc. of the Int. Workshop on Intelligent Robots and Systems. 38-43.

Homem de Mello, L. S. and Lee, S., editors 1991. Computer-Aided Mechanical Assembly Planning. Kluwer Academic Publishers, Boston.

Homem de Mello, L. S. 1989. Task Sequence Planning for Robotic Assembly. Ph.D. Dissertation, Carnegie Mellon University.

Joskowicz, L. and Sacks, E. 1991. Computational kinematics. Artificial Intelligence 51(1-3):381-416.

Kriegman, D. J. and Ponce, J. 1990. Computing exact aspect graphs of curved objects: Solids of revolution. Int.
Journal of Computer Vision 5(2):119-135.

Latombe, J-C. 1991. Robot Motion Planning. Kluwer Academic Publishers, Boston.

Lozano-Perez, T. 1983. Spatial planning: A configuration space approach. IEEE Transactions on Computers C-32(2):108-120.

Natarajan, B. K. 1988. On planning assemblies. In Proc. of the ACM Symp. on Computational Geometry. 299-308.

Ohwovoriole, M. S. 1980. An Extension of Screw Theory and its Application to the Automation of Industrial Assemblies. Ph.D. Dissertation, Stanford University.

Pollack, R.; Sharir, M.; and Sifrony, S. 1988. Separating two simple polygons by a sequence of translations. Discrete and Computational Geometry 3:123-136.

Popplestone, R. J.; Ambler, A. P.; and Bellos, I. M. 1980. An interpreter for a language for describing assemblies. Artificial Intelligence 14:79-107.

Preparata, F. P. and Shamos, M. I. 1985. Computational Geometry: An Introduction. Springer-Verlag.

Sacerdoti, E. 1977. A Structure for Plans and Behavior. American Elsevier.

Toussaint, G. T. 1985. Movable separability of sets. In Toussaint, G. T., editor 1985, Computational Geometry. Elsevier, North Holland.

Wilson, R. H. and Matsui, T. 1992. Partitioning an assembly for infinitesimal motions in translation and rotation. In Proc. of the Int. Conf. on Intelligent Robots and Systems. To appear.

Wilson, R. H. and Schweikard, A. 1992. Assembling polyhedra with single translations. In Proc. of the IEEE Int. Conf. on Robotics and Automation. To appear.

Wilson, R. H. 1991. Efficiently partitioning an assembly. In Homem de Mello, L. S. and Lee, S., editors 1991, Computer-Aided Mechanical Assembly Planning. Kluwer Academic Publishers, Boston. 243-262.

Wilson, R. H. 1992. On Geometric Assembly Planning. Ph.D. Dissertation, Stanford University.
Reactive Navigation through Rough Terrain: Experimental Results

David P. Miller, Rajiv S. Desai, Erann Gat, Robert Ivlev and John Loch
Jet Propulsion Laboratory / California Institute of Technology
4800 Oak Grove Drive
Pasadena, CA 91109

Abstract

This paper describes a series of experiments that were performed on the Rocky III robot.1 Rocky III is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior-control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly less computational resources than more traditional mobile robot control systems. The code for Rocky III runs on an 8-bit processor and uses about 10k of memory.

Introduction

The research described in this paper is motivated by NASA's planetary rover program. A planetary rover would be used on missions to deploy instruments and collect samples outside of the immediate area surrounding a lander. As science instruments get smaller and more sensitive, the size and strength demands placed on the rover by the instruments are commensurately reduced. The major constraints on the robot's size are then determined by the robot's ability to maneuver through the terrain, and the robot's ability to carry its own power, communications and computation.

Most of the pieces of a rover system scale well with reduced size. Communications, though, is an exception. As the size of the robot is reduced, it becomes more and more difficult to maintain a high bandwidth communications system. As communications capacity is reduced the rover must either operate with a reduced level of performance, or with greater autonomy [Miller90]. The computation system onboard a rover is also limited by the rover's size.
The power requirements of the computers, and the mass of the power subsystem, limit how much computation a rover can carry.

1This research was performed at the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration.

In the past, autonomy has required enormous computational capabilities. It has been suggested [Brooks89, Miller91] that a reactive, behavior-based control system can reduce these computational requirements and still produce robust autonomous performance. This paper describes a working robot which confirms this theory.

The remainder of this paper describes Rocky III. Rocky is a six-wheeled robot massing about fifteen kilograms (see Figure 1). The robot carries all its own power (batteries that will run the robot for approximately ten hours), communications (a 9600 baud radio modem) and computation (a single 6811 microcontroller). An operator designates a sample site, and an optional set of intermediate way-points, and then sends the robot a signal to start its run. Once the start signal is received, the robot requires no further communications. The robot makes its way to the sample site via the way-points (if any were designated). When the robot reaches the sample site, it searches for and collects a sample of soft soil. It then returns to its starting point and deposits the sample in the collection container. If the robot should encounter any untraversable terrain during its travels, it modifies its course to go around those areas.

The next section describes Rocky III. Its mobility system, sensors, and computation system are detailed. Section three describes the software and algorithms we created for the robot. Section four describes the experiments that have been performed with the robot and their results. The final section presents some conclusions about the role of sensing and internal representation that can be drawn from these experiments.
Rocky III - The Hardware

The chassis of Rocky III is a six-wheeled springless suspension system called the "rocker-bogie" which consists of two pairs of rocker arms or "bogies". Each pair consists of a main rocker arm and a secondary arm whose pivot point is at the front end of the main arm. Adjustable mechanical stops limit the rotation of the secondary arms. The two rocker-arm assemblies are connected through a differential gear at the center of gravity. The main arms pivot relative to the body and each other at the CG (see Figure 2). The main body of the robot is mounted on the differential. The pitch of the main body is thus the average pitch of the two rocker-arm assemblies, providing a more stable mount for instruments and sensors. An electronics enclosure mounted on the main body houses the robot's power distribution and conditioning system, fans, and a computer and interface boards.

Miller et al. 823

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Figure 1: The Rocky III Robot

The six thirteen-centimeter-diameter wheels are independently driven by motors inside the wheels. The front and rear wheels are independently steered. For these experiments, Ackerman steering2 was used to maneuver the vehicle, i.e., normals drawn from each wheel met at a common point. (See Figure 3.) The batteries were mounted in a pan below the differential. The mass of the batteries and the wheels and motors keeps the center of gravity at the differential or slightly below. There is a full wheel diameter of ground clearance below the battery pan.

2Ackerman steering assumes all wheels are in the ground plane. This assumption is not always valid for Rocky III; however, the errors induced by turning over uneven ground were insignificant and almost entirely removed when the robot servoed to a compass or beacon heading.
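The Ackerman condition (wheel normals meeting at a common point) fixes each steering angle once a turn center is chosen. A sketch follows; the symmetric front/rear layout, the frame conventions, and all names are assumptions for illustration, not taken from the Rocky III code:

```python
import math

def ackerman_angles(turn_radius, axle_offset, half_track):
    """Steering angles (radians, 0 = straight ahead) for the four
    steered corner wheels so that all wheel normals meet at a single
    point on the line through the fixed middle axle.

    turn_radius: signed distance from the vehicle center to the turn
    center (positive = center to the left); axle_offset: middle axle
    to front (and rear) axle; half_track: half the left-right spacing.
    """
    angles = {}
    for end, y in (("front", axle_offset), ("rear", -axle_offset)):
        for side, x in (("left", half_track), ("right", -half_track)):
            # wheel at (x, y); its driving direction must be
            # perpendicular to the line from (turn_radius, 0) to it
            angles[f"{end}_{side}"] = math.atan2(y, turn_radius - x)
    return angles

a = ackerman_angles(5.0, 0.3, 0.2)
# the inner front wheel steers harder; rear wheels mirror the front ones
assert a["front_left"] > a["front_right"] > 0
assert abs(a["rear_left"] + a["front_left"]) < 1e-12
```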
Figure 2: The Rocker-Bogie Mobility System

Figure 3: Ackerman Steering (top view)

At the front of the robot is mounted a three-degree-of-freedom arm. The end effector is a soft soil scoop. The arm can reach approximately five centimeters below the plane of the wheels in front of the robot, and folds to rest on top of the electronics enclosure when the robot is in motion.

The computation system is significant because of its limited capability. The only computer on board is an eight-bit Motorola 6811 processor with 32Kbytes of memory (though only about 10k bytes were needed for any of the experiments described in this paper). No mass storage is used. The processor and interface boards (for communicating with the sensors and motors) are all contained in a stack of five 2×4-inch commercially available boards. A 9600 baud radio modem is used to download programs and commands to the robot.

The sensors used on the robot are also very simple. A flux gate compass is mounted on a mast above the main body of the robot (to keep it away from the motors). (See Figures 4 and 5.) The compass element is mounted on a float so that changes in roll and pitch up to twenty degrees do not affect its heading reading. The compass is accurate to approximately one degree of arc.

824 Robot Navigation

Roll and pitch clinometers in the main body of the robot are accurate to about half a degree. Magnetic reed switches installed on the front-bogie pivots indicate when one of the rocker arms is at one of its limit positions. The two middle wheels are instrumented with one-count-per-revolution encoders. Since the robot would start its runs with its wheels at an arbitrary point in their rotation, it was possible to have a dead reckoning error (without slip) of up to a wheel circumference (approximately forty centimeters). Finally, there are mechanical contact sensors underneath the robot's bottom panel, and at the front of the robot.
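Given the coarse encoders and compass just described, a minimal dead-reckoning update might look like the following. Names, units, and the heading convention are assumptions of this sketch; only the roughly 40 cm-per-count resolution comes from the text:

```python
import math

class DeadReckoner:
    """X-Y position estimate from wheel-revolution counts and a
    compass heading."""

    METERS_PER_COUNT = 0.40  # approx. one wheel circumference per count

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def update(self, counts, heading_deg):
        """Advance the estimate by `counts` encoder ticks travelled at
        the given compass heading (degrees, measured here from +X)."""
        dist = counts * self.METERS_PER_COUNT
        self.x += dist * math.cos(math.radians(heading_deg))
        self.y += dist * math.sin(math.radians(heading_deg))
        return self.x, self.y

dr = DeadReckoner()
dr.update(5, 0.0)          # about 2 m along +X
x, y = dr.update(5, 90.0)  # then about 2 m along +Y
```

With one count per revolution, the estimate is quantized to about 40 cm of travel, which is consistent with the worst-case dead-reckoning error quoted above.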
Figure 4: Side view of Rocky III

Rocky's arm has eight-bit position encoders on each of the three joints. Inside the end-effector are two contact switches used to determine the hardness of the soil. One switch has a spike attached to it, the other has a flat plate. When the gripper is opened and pressed against the ground, the spike makes contact before the plate. If the soil is hard the switch with the spike will close on contact. If the soil is soft, then the spike penetrates the ground and the switch with the flat plate closes first.

Rocky also has an infrared beacon detector mounted on a rotating platform which senses a three-phase beacon mounted above the sample receptacle. The beacon is mounted on top of the lander (located at the starting position for each experiment). The single beacon consists of three sets of infrared LEDs operating at distinct frequencies which can be discriminated by the detector. Each set of LEDs is aimed in a different direction. By noting what frequency is received the robot can tell whether it is to the right, left, or aligned with the center of the beacon. The beacon detector can also determine the angle to the beacon using an eight-bit encoder on the platform, although this information was not used in the experiments. The beacon has a range of approximately five meters. The beacon is used only for end-point homing during the final phase of an experiment.

Figure 5: Front view of Rocky III

Rocky III - The Software

Rocky III is controlled using a software paradigm which has come to be called behavior control, which can be characterized by the following features:

- Behavior control tightly couples sensors to actuators through fairly simple computational mechanisms.
- Complexity in behavior control mechanisms is managed by decomposing the problem according to tasks (e.g. collecting a soil sample) rather than functions (e.g. building a world model).
- Behavior control mechanisms tend to manifest themselves as layered systems.
Exactly what the layers do and how they should interact is the subject of much disagreement.
- Behavior control can be applied in situations where classical control theory is not applicable, either because the plant (i.e. the environment) cannot be modelled with sufficient precision, or because the complexity and dimensionality of the transfer function is too high to allow the mathematics to be carried through.
- Behavior control employs transfer functionals rather than the transfer functions of classical control theory; that is, the output of a behavior-based control mechanism can contain stored internal state (i.e. memory).

Rocky III is programmed in ALFA, a behavior language for designing reactive control mechanisms for autonomous mobile robots [Gat91a]. ALFA programs consist of networks of computational modules which communicate with each other and with the outside world by means of communications channels. Modules and channels are nominally dataflow devices, that is, they continuously recompute their outputs as a function of their current inputs. Modules can also contain stored internal state, allowing sequential computations to be embedded within ALFA's dataflow framework.

ALFA was designed to support a bottom-up hierarchical layered design methodology. In contrast to subsumption [Brooks86] where layers interact by suppressing communications in other layers, ALFA is designed to support an architecture where higher layers provide information to lower layers through interfaces which operate at progressively higher levels of abstraction.

The structure of the control software for Rocky III is shown in Figure 6. The lowest layer reads two input channels, which output a nominal vehicle speed and steering direction, and computes the settings for the individual drive and steering motors.

Figure 6: The Structure of the Control Software

The second layer performs two functions, moving to a commanded heading and moving around obstacles. Moving to a commanded heading simply involves computing the difference between the desired heading and the current vehicle heading as reported by the compass, and generating an appropriate steering command. Moving around obstacles is currently accomplished by backing away from the obstacle and turning to one side. This is a fairly simplistic strategy and could be improved, but field tests have shown this approach to be quite robust.

The third layer is the master sequencer which performs high-level control over the mission. On Rocky III the sequencer moves the robot through a series of way-points (goal locations), stops the vehicle, collects a soil sample, and returns to the lander.

The master sequencer collects input from the original task description (goal and way-points) and from the beacon. The closest thing to a map in Rocky III is manipulated by the master sequencer. The "map" is simply a list of X,Y points that give the position of the lander, the goal point, and the way-points. The master sequencer uses these points one at a time to compute the robot's current goal heading. When returning to the beacon, the signal received from the beacon is used to compute the input to the goal heading channel. Depending on the signal from the beacon the robot will maneuver directly towards the beacon, at right angles to it (so it can line up on the center line), or head towards the best estimate of the beacon's position (when the beacon is out of range or occluded).

Experiments

The experiments performed with Rocky III were done to verify that it could meet certain requirements needed to autonomously carry out a planetary rover mission.
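The second layer's heading servo can be sketched as a single function. The gain, limit, and names are illustrative assumptions; in ALFA this would be a module continuously recomputing its output channel:

```python
def steering_command(goal_heading_deg, compass_heading_deg,
                     gain=1.0, limit=30.0):
    """Steer proportionally to the signed difference between the
    desired heading and the current compass heading, clamped to the
    steering range."""
    # wrap the error into (-180, 180] so the robot turns the short way
    error = (goal_heading_deg - compass_heading_deg + 180.0) % 360.0 - 180.0
    return max(-limit, min(limit, gain * error))

# the error wraps correctly across the 0/360 boundary
assert steering_command(10.0, 350.0) == 20.0
assert steering_command(350.0, 10.0) == -20.0
```

Because the function is recomputed continuously, an overshoot simply produces a new error of the opposite sign, which matches the self-correcting turning behavior reported in the experiments.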
These requirements include: being able to navigate to a designated area; being able to acquire a suitable sample; being able to negotiate obstacles (either by going over or around the obstacles); being able to return precisely to the lander and deposit the sample there; being able to operate with no real-time communication; and being able to carry all power, computation, communications, etc.

Dozens of tests were performed to verify the robot's abilities. The experiments took place in a large indoor laboratory, and outdoors in the Arroyo Seco outside of JPL (a dry wash strewn with rocks, boulders, sand, and hard-packed soil). All of the experiments had the same basic format, though the details of the robot's starting position and orientation, the positions of obstacles, the sample site, and the way-points differed from test to test.

At the start of an experiment, the operator downloads the sample site and way-points (if any) to Rocky. Each point requires four bytes. The positions are given in X-Y coordinates with the X-axis aligned to the centerline of the lander, and the origin at the front of the lander. The robot is given its starting location and the compass orientation of the lander. The operator then tells the robot to start.

826 Robot Navigation

As Rocky starts moving forward, it compares its current heading to the heading needed to get to the first way-point (or the sample site if no way-point has been specified) from its current location. It calculates the proper direction of turn and turns in that direction. This behavior is continuously repeated so that, should the turn overshoot, Rocky will automatically correct its orientation. The robot keeps track of its position by using the wheel encoders and current compass heading to update its X-Y estimate. As Rocky travels, it comes across rocks, ledges, and slopes of various sizes and degrees. Any ledge or rock smaller than 13 cm (one wheel diameter) is traversed by the mobility system.
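The heading and position bookkeeping described above (steer toward the heading to the next way-point, dead-reckon from wheel encoders and compass) can be sketched as follows. The function names and angle conventions are assumptions for illustration, not the flight code:

```python
import math

def update_position(x, y, heading_deg, distance):
    """Dead-reckoning update: advance the X-Y estimate using the distance
    reported by the wheel encoders and the current compass heading."""
    h = math.radians(heading_deg)
    return x + distance * math.cos(h), y + distance * math.sin(h)

def heading_to(x, y, wx, wy):
    """Heading (degrees) from the current position estimate to a way-point."""
    return math.degrees(math.atan2(wy - y, wx - x)) % 360.0

def turn_direction(current_deg, desired_deg):
    """Sign of the shorter turn toward the desired heading
    (+1 one way, -1 the other, 0 if already aligned)."""
    err = (desired_deg - current_deg + 180.0) % 360.0 - 180.0
    return 0 if err == 0 else (1 if err > 0 else -1)
```

Repeating `turn_direction` on every cycle gives the self-correcting behavior described in the text: an overshoot simply flips the sign of the commanded turn.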
Ledges or rocks greater than a wheel diameter in size that are first contacted by a wheel trigger one of the bogie switches. Rocks larger than a wheel diameter that go between the front wheels are detected by the skid plate or front contact switches. Severe slopes are detected by the roll and pitch clinometers.

When a section of untraversable terrain is detected, the robot executes an avoidance maneuver. It backs up, turns ninety degrees to the left or right (the opposite direction from the side of the vehicle at which the obstacle was first detected), moves forward a vehicle length (approximately a half meter), and then resumes its normal behavior. These obstacle detection and avoidance behaviors are active at all times and can override any other active behaviors.

When Rocky reaches the sample area, it deploys the sampling arm and tests the ground ahead of it for soft soil. If the ground is unsuitable, it lifts the arm, moves forward a few centimeters, and tests again. If suitable soil is not found within five trials, the mission is aborted, and the rover returns to the lander. When soft soil is found, the gripper closes around the sample, and the arm is retracted and stowed.

Using its current position estimate, the robot turns to a heading to bring it back to the lander. At the same time, it starts scanning for the lander beacon. The beacon consists of two side sectors approximately forty degrees in angle, and a twenty degree center sector. Each sector's signal is modulated at its own characteristic frequency. When the beacon is detected, Rocky heads straight towards the beacon if it detects the center sector; if a side sector is detected, it turns ninety degrees to the direction of the beacon and moves towards the center sector, until the center sector is detected. The beacon is then followed all the way to the lander.
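The avoidance maneuver and beacon-homing rules above can be summarized as a small decision sketch. The step names are illustrative labels invented for this example, not Rocky III's actual commands:

```python
def avoidance_maneuver(obstacle_side):
    """Sketch of the avoidance sequence described above: back up, turn
    ninety degrees away from the side where the obstacle was detected,
    move forward about a vehicle length, then resume."""
    turn = "turn_90_right" if obstacle_side == "left" else "turn_90_left"
    return ["back_up", turn, "forward_half_meter", "resume"]

def beacon_steering(sector):
    """Homing rule: head straight on the center sector, turn toward the
    center line on a side sector, and fall back to dead-reckoning when
    the beacon is out of range or occluded (sector is None)."""
    if sector == "center":
        return "head_straight_to_beacon"
    if sector == "side":
        return "turn_90_toward_center_sector"
    return "dead_reckon_to_lander"
```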
If sight of the beacon is ever lost (from occlusion), then Rocky reverts to navigating back to the lander by dead-reckoning. The robot uses its front contact sensors to detect when it has docked with the lander. At this point it deploys the arm and deposits the sample in the collection container. Figure 7 shows key events during a typical experiment.

Figure 7: A Typical Run

After the software was debugged and tuned, dozens of runs were performed with only a few failures. Each of these failures involved hardware (a drive motor gave out, the beacon failed, a contact sensor was stuck on). In all cases the avoidance software succeeded in getting the robot through the obstacles and to the destination. For the outdoor runs, the vegetation was usually removed from the test area. But even during a run where the sample area was designated in a heavily vegetated spot, the robot eventually made its way around the large weeds and to the sample area.

Results and Conclusions

We have described Rocky III, a small mobile robot capable of autonomously and reliably navigating through rough outdoor terrain, acquiring a soil sample, and depositing the sample in a receptacle marked by a beacon. Such a capability has applications for planetary exploration. Planetary rover research has often emphasized innovative design to reduce power (e.g., [Simmons91]). Rocky III's architecture allows the robot to be made small, and therefore lower power in all of its subsystems. Small robots are cheaper to launch, but because they cannot support high-bandwidth communications, they must possess some level of autonomy. Rocky III is the only example known to us at this time of an autonomous robot which operates off-road and performs both navigation and manipulation. The robot's performance has been demonstrated in dozens of tests in both an indoor laboratory setting and outdoor rough-terrain environments. Rocky III is controlled by a small 8-bit processor using about 10K of memory.
This is made possible through the use of a reactive behavior-control approach, where sensors are coupled directly to actuators through relatively simple computations. The control structure for Rocky III is extremely similar to that used on Tooth, our indoor object-collecting micro-rover [Miller90].

This work adds to the body of evidence for the claim that complex symbolic computations and world models are not necessary to produce robust, intelligent behavior in autonomous mobile robots. However, it must be pointed out that an absence of necessity does not imply undesirability. In other work [Bonasso91, Gat91b, Miller89, Payton86] the integration of reactive control mechanisms with symbolic reasoning has been shown to be able to increase the complexity of behavior over that which is capable from a straightforward reactive control implementation.

The success of the Rocky III experiments makes another important point. Compared to most other reactive robots (e.g., Herbert [Connell90]), Rocky III is sensory impoverished. The proprioceptive sensors (compass, encoders, and clinometers) tell the robot its orientation (and to some extent location) in space. However, all the information about the terrain that the robot is crossing comes from the bogie limit switches and the four contact switches on the front of the robot: eight single-bit sensors. Rocky cannot sense the environment until it literally runs into it! Despite this handicap, Rocky is very capable of making its way through realistic and hazardous terrain. This capability is not connected to the robot's size (the roughness of natural rocky/lava terrain is independent of scale from a few centimeters to dozens of meters [Taylor91]). The robot's success is due to several factors. First, the mobility system is capable of handling most minor hazards.
Second, the sensors, while very limited, will virtually always detect a hazard that the robot cannot go over (though they might also generate false alarms). Finally, natural terrain is seldom a maze. Terrain is rich with paths, and it is not necessary for the robot to select the optimal path, only a path that works. If placed in a maze, Rocky might never make it out, but in real terrain it will succeed in almost all circumstances (it has not failed yet!). By limiting the scope of the robot to those environments it will realistically encounter, we have been able to simplify the sensing and computation systems of Rocky beyond those typical even for reactive robots.

Given the simplicity of the sensors and the programming, the question arises: "where is the intelligence?" The capabilities exhibited by this robot are a result of the entire robot system interacting with its environment. The sensors are simple, but they are the right sensors for this robot and this class of activities. By mixing the sensing and reactive capabilities appropriately with the mobility hardware's capabilities and the class of tasks assigned to the robot, we have a robot that operates intelligently over its domain. The intelligence is just as much hardwired into the selection and placement of the sensors and the actuators as it is in the executed code, but it works just as well. The experiments described above show that an intelligently acting system can be created where the intelligence is in large part encoded in the device structure, rather than totally in the control/planning system.

Acknowledgements: The authors wish to thank David Atkinson, Donald Bickler, Jack Fraisier, Nora Mainland, Jim Tran, and Brian Yamauchi for their significant contributions to this research.
References

[Bonasso91] Peter Bonasso, Underwater Experiments Using a Reactive System for Autonomous Vehicles, in the Proceedings of the 1991 National Conference on Artificial Intelligence, AAAI, pp. 794-800, July 1991.

[Brooks86] Rodney A. Brooks, A Robust Layered Control System for a Mobile Robot, IEEE Journal on Robotics and Automation, vol. RA-2, no. 1, March 1986.

[Brooks89] Brooks, R.A., Flynn, A.M., Fast, Cheap, and out of Control: A Robot Invasion of the Solar System, Journal of the British Interplanetary Society, vol. 42, no. 10, pp. 478-485, October 1989.

[Connell90] Jonathan Connell, A Colony Architecture for an Artificial Creature, MIT Artificial Intelligence Laboratory TR #1151, 1990.

[Gat91a] Erann Gat, ALFA: A Language for Programming Robotic Control Systems, in Proceedings of the IEEE Conference on Robotics and Automation, May 1991.

[Gat91b] Erann Gat, Reliable Goal-directed Reactive Control for Mobile Robots, Ph.D. thesis, Virginia Tech, May 1991.

[Miller91] David P. Miller, Autonomous Rough Terrain Navigation: Lessons Learned, paper #AIAA-91-3813-CP in the Proceedings of Computing in Aerospace 8, AIAA, pp. 748-753, October 1991.

[Miller90] David P. Miller, The Real-Time Control of Planetary Rovers Through Behavior Modification, in the Proceedings of the 1990 SOAR Conference, Albuquerque, NM, June 1990.

[Miller89] David P. Miller, Execution Monitoring for a Mobile Robot System, in the Proceedings of the 1989 SPIE Conference on Intelligent Control and Adaptive Systems, vol. 1196, pp. 36-43, Philadelphia, PA, November 1989.

[Payton86] David Payton, An Architecture for Reflexive Autonomous Vehicle Control, Proceedings of the 1986 IEEE Conference on Robotics and Automation, May 1986.

[Simmons91] Reid Simmons, Erik Krotkov & John Bares, A Six-Legged Rover for Planetary Exploration, paper #AIAA-91-3812-CP in the Proceedings of Computing in Aerospace 8, AIAA, pp. 739-747, October 1991.
[Taylor91] Jeff Taylor, Personal communication, Planetary Geology Department, University of Hawaii.
Palo Alto, CA 94304. nayak@cs.stanford.edu

Abstract

Adequate problem representations require the identification of abstractions and approximations that are well suited to the task at hand. In this paper we introduce a new class of approximations, called causal approximations, that are commonly found in modeling the physical world. Causal approximations support the efficient generation of parsimonious causal explanations, which play an important role in reasoning about engineered devices. The central problem to be solved in generating parsimonious causal explanations is the identification of a simplest model that explains the phenomenon of interest. We formalize this problem and show that it is, in general, intractable. In this formalization, simplicity of models is based on the intuition that using more approximate models of fewer phenomena leads to simpler models. We then show that when all the approximations are causal approximations, the above problem can be solved in polynomial time.

Introduction

One of the earliest important ideas in AI is that effective problem solving requires the use of adequate problem representations [Amarel, 1968]. Adequate problem representations incorporate abstractions and approximations that are well suited to the problem solving task. Different types of abstractions and approximations have been identified for a variety of tasks: abstractions in ABSTRIPS [Sacerdoti, 1974] speed up planning by dropping select operator preconditions; approximations in mathematical domains simplify equation solving by ignoring negligible quantities [Bennett, 1987; Raiman, 1991]; PLR [Sacks, 1987] analyzes dynamic engineering systems using piecewise linear approximations of ordinary differential equations; fitting approximations support efficient model sensitivity analysis [Weld, 1990; Weld, 1991]; Horn approximations of a logical theory allow efficient inference [Selman and Kautz, 1991]. In this paper we introduce a new class of approximations, called causal approximations, that are commonly found in modeling the physical world. Causal approximations support the efficient generation of parsimonious causal explanations.

(Footnote: Pandurang Nayak was supported by an IBM Graduate Technical Fellowship. Additional support was provided by the Defense Advanced Research Projects Agency under NASA Grant NAG 2-581 (under ARPA order number 6822), by NASA under NASA Grant NCC 2-537, and by IBM under agreement number 14780042.)

Parsimonious causal explanations play an important role in reasoning about engineered devices [Weld and de Kleer, 1990]: (a) they are a vehicle for explaining phenomena of interest to a human user; and (b) they can be used to focus subsequent reasoning: in design, causal explanations allow the identification of changes to be made to a device to create a better design; in diagnosis, causal explanations focus the reasoning on just what could have caused a symptom; and causal explanations can guide quantitative analysis. Causal explanations are usually generated from underlying device models [de Kleer and Brown, 1984; Forbus, 1984; Williams, 1984; Iwasaki and Simon, 1986; Iwasaki, 1988]. Hence, to generate parsimonious explanations, the underlying device models must be as simple as possible.

Device models can introduce irrelevant details into causal explanations either by modeling irrelevant phenomena, or by including needlessly complex models of relevant phenomena. Consider the temperature gauge in Figure 1, consisting of a battery, a wire, a bimetallic strip, a pointer, and a thermistor. A thermistor is a semiconductor device; an increase in its temperature causes a decrease in its resistance.
A bimetallic strip has two strips made of different metals welded together. Temperature changes cause the two strips to expand by different amounts, causing the bimetallic strip to bend. The following is a causal explanation of how the gauge works: the thermistor's temperature determines its resistance. This determines the circuit current, which determines the heat generated in the wire, and hence the bimetallic strip's temperature. This determines the bimetallic strip's deflection, which determines the pointer's angular position.

Nayak 703

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Figure 1: A temperature gauge

To generate the above explanation, we model the wire as a resistor that dissipates heat due to current flow. Modeling irrelevant phenomena (e.g., the electromagnetic field generated by the wire) is unnecessary. Approximating the wire's resistance by assuming it is constant is adequate; more accurate models that include the dependence of the wire's resistance on its temperature or length are unnecessary.

No single device model is adequate for generating parsimonious explanations for all phenomena: every model will be either unnecessarily complex or too simple to explain some phenomenon. For example, unlike in the previous case, to explain how the length of the wire affects the functioning of the temperature gauge, the dependence of the wire's resistance on the wire's length must be modeled. Hence, a system that generates parsimonious causal explanations for a variety of device phenomena must be able to generate a simplest model that explains the phenomenon of interest.

In this paper we formalize the problem of constructing a simplest device model that explains a phenomenon. Device models are constructed by composing model fragments, which are partial descriptions of phenomena. We define the simplicity of a device model based on a primitive approximation relation between model fragments.
We show that the problem of finding a simplest device model that explains a phenomenon is intractable. We then introduce causal approximations. We show that when all the approximation relations between model fragments are causal approximations, the causal relations grow monotonically as models become more accurate. As a consequence, the above problem becomes tractable. A number of examples demonstrate that causal approximations are commonly found in modeling the physical world.

Formalizing the problem

In this section we give a formal description of the problem of constructing a simplest device model that explains some phenomenon. Models of engineered devices are usually expressed as sets of equations. Hence, we start this section with a discussion of causal ordering: the process of generating causal explanations from equation models. Next, we introduce model fragments, which are partial descriptions of phenomena. Device models are constructed by composing an appropriate set of model fragments. We then define coherent device models, and a simpler than relation between models. Finally, we give a formal statement of the problem and a theorem stating its intractability.

Equations and causal ordering

In this paper we will concentrate on algebraic equations. Differential equations are discussed later. Equations can be viewed as acausal representations of mechanisms. To have a causal import, equations must be causally oriented so that one of the parameters of the equation is causally dependent on the other parameters of the equation. The dependent parameter is said to be causally determined by the equation. The causal orientation of an equation can be fixed a priori [Forbus, 1984], or it can be inferred from the equations of a device model [de Kleer and Brown, 1984; Williams, 1984; Iwasaki and Simon, 1986; Iwasaki, 1988]. Fixing the causal orientation a priori is overly restrictive, since different causal orientations are often possible.
However, not all causal orientations fit our intuitions about causality. For example, the equation V = iR, representing electrical conduction in a resistor, can be causally oriented in one of two ways: either V can be causally dependent on i and R, or i on V and R. The third possibility, R being causally dependent on V and i, makes no sense.

The set of allowed causal orientations of an equation, e, can be represented by the set, P_c(e), of parameters that can be causally determined by e. As a typographical aid, parameters that can be causally determined by an equation will be typeset in boldface, e.g., V = iR. Extend P_c to a set, E, of equations: P_c(E) = U_{e in E} P_c(e). The causal relations entailed by a set of independent equations is the causal ordering of the parameters used in the equations. (See [Iwasaki and Simon, 1986; Iwasaki, 1988] for a discussion of causal ordering.) The causal ordering can be generated efficiently by (a) causally orienting each equation such that each parameter is causally determined by exactly one equation; and (b) taking the transitive closure of the causal dependencies entailed by the causal orientations.¹

Definition 1 (Causal mapping) Let E be a set of equations. A function F : E -> P_c(E) is a causal mapping if and only if (a) F is 1-1; and (b) for each e in E, F(e) in P_c(e).

¹This algorithm is due to Serrano and Gossard [Serrano and Gossard, 1987]. They are interested in evaluating a set of constraints. The parameter dependencies that they generate are identical to the causal ordering, and their algorithm can be viewed as causally orienting each equation.

704 Representation and Reasoning: Qualitative Model Construction

Hence, a causal mapping causally orients each equation such that each parameter is causally determined by at most one equation.
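Finding a causal mapping is a bipartite matching problem: each equation must be matched to one parameter from its P_c set, with no parameter determined twice. A sketch using augmenting-path matching (this is illustrative code, not code from the paper):

```python
def causal_mapping(pc):
    """Search for a causal mapping per Definition 1.  `pc` maps each
    equation name to P_c(e), the set of parameters it may causally
    determine.  Returns a 1-1 dict equation -> parameter, or None if
    no causal mapping exists."""
    match = {}  # parameter -> equation currently determining it

    def assign(e, seen):
        for p in pc[e]:
            if p in seen:
                continue
            seen.add(p)
            # take a free parameter, or re-route its current equation
            if p not in match or assign(match[p], seen):
                match[p] = e
                return True
        return False

    for e in pc:
        if not assign(e, set()):
            return None
    return {e: p for p, e in match.items()}
```

The mapping is onto (i.e., the set of equations is complete) exactly when every parameter appears in the result.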
A set of equations is complete if and only if there is an onto causal mapping, i.e., if each parameter is causally determined by some equation. Feedback manifests itself as a cycle in the causal dependencies generated by the causal mapping.

Device models are constructed by composing model fragments [Falkenhainer and Forbus, 1991; Nayak et al., 1992]. A model fragment is a set of independent equations that (partially) describe some physical phenomena in the device, e.g., a component or process model instance. Different model fragments can describe different phenomena, or can be different descriptions of the same phenomena. Model fragments provide an appropriate level of description: (a) unlike device models, they are reusable; and (b) not all meaningful physical phenomena can be represented by a single equation.

Model fragments usually have structural and behavioral applicability conditions on their use [Falkenhainer and Forbus, 1991; Nayak et al., 1992]. In this paper we do not explicitly model these applicability conditions. Rather, we assume that the set of model fragments under consideration are precisely the ones whose applicability conditions are satisfied.

Let m1, m2 be model fragments. m2 is said to contradict m1 (written con(m1, m2)) if they make contradictory assumptions about the domain. m2 is said to be an approximation of m1 (written app(m1, m2)) if it is a domain-dependent fact that m1 is a more accurate model of some phenomenon than m2. It is important to note that con and app are primitive, domain-dependent relations between model fragments, i.e., these relations cannot, in general, be derived from the equations of the model fragments.
However, one can see that con is irreflexive and symmetric, that app is irreflexive, anti-symmetric, and transitive, and that approximations are also contradictory:

¬con(m1, m1) (1)
con(m1, m2) ⇒ con(m2, m1) (2)
¬app(m1, m1) (3)
app(m1, m2) ⇒ ¬app(m2, m1) (4)
app(m1, m2) ∧ app(m2, m3) ⇒ app(m1, m3) (5)
app(m1, m2) ⇒ con(m1, m2) (6)

Model fragments are organized into assumption classes [Addanki et al., 1991; Falkenhainer and Forbus, 1991; Nayak et al., 1992], which are sets of mutually contradictory model fragments that can be viewed as different descriptions of the same phenomena. In the rest of the paper we assume that the con relation partitions the set of model fragments into a set of mutually consistent assumption classes, i.e., model fragments are in the same assumption class if and only if they are contradictory. This assumption is reasonable since there is little reason for model fragments describing different phenomena to be mutually contradictory.

ideal-cond(wire): V_w = 0
ideal-ins(wire): i_w = 0
const-res(wire): V_w = i_w R_w; exogenous(R_w)
dep-res(wire): V_w = i_w R_w; R_w = rho_w l_w / A_w
app(const-res(wire), ideal-cond(wire))
app(const-res(wire), ideal-ins(wire))
app(dep-res(wire), const-res(wire))

Figure 2: Electrical conduction model fragments

Figure 2 shows the model fragments in an assumption class describing electrical conduction in the wire, and the approximation relations between them. (exogenous(x) is a shorthand for x = c, for some constant c.) Note that const-res(wire) and dep-res(wire) are not complete descriptions of electrical conduction. Note also that there is nothing inherently contradictory about the equations of the first two model fragments; the contradiction between them is a domain fact.

Models

A device model is a set of model fragments describing various phenomena in the device. A coherent device model satisfies the following three properties. First, coherent models must be consistent. A model is consistent if no two model fragments in it are mutually contradictory, i.e., the model contains at most one model fragment from each assumption class. Second, coherent models must be complete. A model is complete if the set of equations in the model fragments of the model is complete. All parameters of a complete model can be determined by the model's equations. Third, coherent models must satisfy any domain-independent, domain-dependent, and device-dependent constraints on the assumption classes used in them [Falkenhainer and Forbus, 1991; Nayak et al., 1992]. For example, because of the linkage between the pointer and the bimetallic strip, any coherent model of the temperature gauge that includes a model fragment describing the pointer's rotation must also include a model fragment describing the bimetallic strip's deflection. We capture these additional constraints using the requires relation on assumption classes. If M is a coherent model, and A1, A2 are assumption classes, requires(A1, A2) says that whenever M contains a model fragment from A1, M must also contain a model fragment from A2.²

²Other types of requires relations are also possible, e.g., requires(m, A) says that every coherent model that includes model fragment m must include a model fragment from assumption class A. However, we will not discuss these variations here.

Note that requires
statements typically have preconditions. In this paper, instead of explicitly modeling these preconditions, we assume that the only requires statements under consideration are the ones with true preconditions.

linkage(bms,ptr): theta_p = k1 x_b
thermal-bms(bms): x_b = k2 T_b
heat-flow(bms,atm): f_ba = k3 (T_b - T_a)
heat-flow(wire,bms): f_wb = k4 (T_w - T_b)
const-temp(atm): exogenous(T_a)
thermal-equil(bms): f_ba = f_wb
thermal-equil(wire): f_wb = f_w
const-res(wire): V_w = i_w R_w; exogenous(R_w)
thermal-res(wire): f_w = V_w i_w
elec-therm(thermistor): V_t = i_t R_t; R_t = k5 e^(k6/T_t)
const-voltage(battery): exogenous(V_s)
kvl-kcl: V_s = V_w + V_t; i_w = i_t; i_t = i_s
Input: exogenous(T_t)

theta_p: Pointer angle; x_b: Bms deflection; R_t: Thermistor resistance; V_t: Thermistor voltage; V_w: Wire voltage; V_s: Battery voltage; R_w: Wire resistance; T_a: Atm temperature; T_t: Thermistor temperature; T_b: Bms temperature; T_w: Wire temperature; i_t: Thermistor current; i_w: Wire current; i_s: Battery current; f_ba: Heat flow (bms to atm); f_wb: Heat flow (wire to bms); f_w: Heat generated in wire; k_j: Exogenous constants

Figure 3: A possible model of the temperature gauge

We use the app relation between model fragments to define a simpler than partial ordering on models. This definition of simplicity is based on the following two intuitions: (a) a model is simpler if it models fewer phenomena; and (b) approximate descriptions are simpler to use than more accurate ones.

Definition 2 (Simplicity of models) A model M2 is simpler than a model M1 (written M2 <= M1) if for each model fragment m2 in M2 either (a) m2 in M1; or (b) there is a model fragment m1 in M1 such that m2 is an approximation of m1. M2 is strictly simpler than M1 (written M2 < M1) if M2 <= M1 and M1 is not simpler than M2.

Figure 3 shows the model fragments and equations in a model of the temperature gauge. Replacing const-res(wire) by ideal-cond(wire) results in a simpler model, while replacing it with dep-res(wire) results in a more complex model.
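Definition 2 can be checked directly. A sketch, assuming the app relation is given as a set of (more-accurate, approximation) pairs that is transitively closed, as property (5) allows:

```python
def simpler_than(m2, m1, app):
    """Definition 2: M2 <= M1 if every fragment of M2 is in M1 or is an
    approximation of some fragment of M1."""
    return all(f2 in m1 or any((f1, f2) in app for f1 in m1) for f2 in m2)

def strictly_simpler(m2, m1, app):
    """M2 < M1: simpler in one direction only."""
    return simpler_than(m2, m1, app) and not simpler_than(m1, m2, app)
```

On the wire assumption class of Figure 2, for example, the model using ideal-cond(wire) comes out strictly simpler than the one using const-res(wire).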
Finally, we represent the phenomenon to be explained by a query, causes?(p1, p2), which requests a causal explanation for how p1 causally determines p2. We use the query to define a causal model:

Definition 3 (Causal model) A coherent device model is a causal model with respect to a query causes?(p1, p2) if and only if p1 causally determines p2 in the causal ordering of the parameters generated using the model's equations. A causal model is a minimal causal model if and only if every strictly simpler coherent model is not a causal model.

For example, the query causes?(T_t, theta_p) requests an explanation for how the thermistor's temperature causally determines the pointer's angular position. Figure 4 shows the causal ordering of the parameters generated from the model in Figure 3. (The bracketed parameters form a feedback loop.) Since T_t causally determines theta_p in this causal ordering, the model in Figure 3 is a causal model with respect to this query.

Problem statement

Using the formalization developed above, we now give a formal statement of the problem of finding a simplest model that explains the phenomenon of interest. We call this the MINIMAL CAUSAL MODEL problem:

Definition 4 (MINIMAL CAUSAL MODEL) Let M be a set of model fragments. Let con and app be binary relations on model fragments that satisfy constraints 1-6. Let con partition M into a set A of mutually consistent assumption classes and let requires be a binary relation on assumption classes. Let p1 and p2 be parameters, and let causes?(p1, p2) be the query. Find a minimal causal model, i.e., a minimal, coherent model that is able to answer the query.

We call the corresponding decision problem ("Does there exist a causal model?") the CAUSAL MODEL problem, and state the following theorem:

Theorem 1 The CAUSAL MODEL problem is NP-complete.

The proof of this theorem is based on a reduction from ONE-IN-THREE 3SAT.
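Note where the hardness lies: once a coherent model has been chosen and causally oriented, answering causes?(p1, p2) is easy, because the causal ordering is just the transitive closure of the oriented dependencies. A sketch (the data representation is assumed for illustration, not taken from the paper):

```python
from itertools import chain

def causal_ordering(oriented):
    """`oriented` maps each equation to (determined_parameter, other_parameters)
    under some causal mapping.  Returns, for each parameter, the set of
    parameters it (transitively) causally determines."""
    direct = {}
    for det, others in oriented.values():
        for p in others:
            direct.setdefault(p, set()).add(det)
    closed = {p: set(s) for p, s in direct.items()}
    changed = True
    while changed:  # naive transitive-closure fixpoint
        changed = False
        for s in closed.values():
            extra = set(chain.from_iterable(closed.get(q, ()) for q in s)) - s
            if extra:
                s |= extra
                changed = True
    return closed

def causes_query(oriented, p1, p2):
    """Answer causes?(p1, p2) against one causally oriented model."""
    return p2 in causal_ordering(oriented).get(p1, set())
```

The intractability therefore comes entirely from searching the space of coherent models, not from evaluating any single one.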
ONE-IN-THREE 3SAT, which is exactly like 3SAT except that each clause is required to have exactly one true literal, is shown to be NP-complete in [Schaefer, 1978]. Briefly, the reduction introduces a model fragment for each literal in an instance of ONE-IN-THREE 3SAT, with model fragments corresponding to complementary literals being placed in the same assumption class. The mapping between truth assignments and models is straightforward: a literal is true if and only if the corresponding model fragment is in the model. Equations are assigned to model fragments to ensure that a model is a causal model if and only if the corresponding truth assignment assigns exactly one true literal to each clause. See [Nayak, 1991] for details of this and other proofs. The intractability of the MINIMAL CAUSAL MODEL problem is an immediate corollary:

Corollary 1 The MINIMAL CAUSAL MODEL problem is NP-hard.

Figure 4: Causal ordering of the parameters

Causal approximations

In this section we introduce causal approximations and the ownership of parameters by assumption classes. We show that if an instance of MINIMAL CAUSAL MODEL is such that (a) all the approximation relations between model fragments are causal approximations; and (b) whenever two assumption classes own the same parameter they require each other, then the causal relations entailed by coherent models increase monotonically as models become more accurate. An important consequence of this is that whenever a model is not a causal model, no strictly simpler model is a causal model. This leads to a polynomial time algorithm to solve instances of the MINIMAL CAUSAL MODEL problem that satisfy the above properties.

In what follows, let

I = (M, con, app, A, requires, p1, p2) (7)

be an arbitrary instance of MINIMAL CAUSAL MODEL, where the elements of the tuple on the right-hand side are as in Definition 4.
We start by defining local parameters:

Definition 5 (Local parameters) A parameter p is said to be local to a model fragment m ∈ M if and only if p can be causally determined by the equations of m, but not by the equations of any model fragment that does not contradict m:

p ∈ Pc(m) ∧ (∀m' ∈ M) p ∈ Pc(m') ⇒ con(m', m)

The above definition implies that local parameters can be causally determined only by equations of model fragments of a single assumption class. Next, we define causal approximations. The idea underlying this definition is that if m2 is a causal approximation of m1, then any causal orientation of the equations of m2 can be extended to a causal orientation of the equations of m1 in a straightforward manner:³

Definition 6 (Causal approximation) A model fragment m2 is said to be a causal approximation of a model fragment m1 if and only if:

1. m2 is an approximation of m1;
2. There exists a 1-1 mapping G : m2 → m1 such that for each e ∈ m2, P(e) ⊆ P(G(e)), and Pc(e) ⊆ Pc(G(e)). G(e) and e are said to be corresponding equations; and
3. Let E* denote the equations of m1 that have no corresponding equations in m2, and let P* denote the set of parameters that are local to m1, but not used in m2. Then there exists an onto causal mapping L : E* → P*.

³P(e) returns the set of all parameters in equation e.

Condition 1 ensures that causal approximations are approximations. Condition 2 ensures that for any causal orientation of an equation e ∈ m2, there exists a causal orientation of G(e) ∈ m1 which entails a superset of causal relations. Condition 3 ensures that newly introduced local parameters can be causally determined. The approximations in figure 2 are causal approximations if we let Rw be a local parameter.
For example, ideal-cond(wire) is a causal approximation of const-res(wire) because the equation Vw = 0 can be matched to Vw = iw Rw, and const-res(wire) introduces Rw, a new local parameter not used in ideal-cond(wire), which can be causally determined by exogenous(Rw). The following lemma tells us that when all the approximations are causal approximations, if we simplify a model without dropping any assumption classes, the causal relations entailed by the new model are a subset of the causal relations entailed by the original model:

Lemma 1 Let I be an instance of MINIMAL CAUSAL MODEL such that all the "app" relations are causal approximations. Let M1, M2 ⊆ M be coherent models such that (a) M1 ⊆ M2; and (b) for every assumption class A ∈ A, either both or neither of M1 and M2 contain a model fragment from A. The causal relations entailed by M1 are a subset of the causal relations entailed by M2.

This lemma's proof is based on the fact that a causal mapping of the equations of M1 can be extended in a straightforward manner to a causal mapping of the equations of M2 using conditions 2 and 3 in definition 6. To extend this lemma to simplifications involving the dropping of assumption classes, we introduce the ownership of parameters by assumption classes. These are the parameters that might be causally determined by equations of model fragments of that class:

Definition 7 (Parameter ownership) The parameters owned by an assumption class A, denoted by owns(A), are the parameters that can be causally determined by the equations of model fragments of A: owns(A) = ∪m∈A Pc(m).

One can view an assumption class as being possibly relevant to the parameters it owns. For example, the assumption class in figure 2 owns Vw, iw, and Rw, but not ρw, lw, and Aw.
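Definition 7 is a straightforward union once each model fragment carries its set Pc of causally determinable parameters. A minimal sketch (our own rendering; fragment and parameter names loosely follow figure 2 and are illustrative only):

```python
# owns(A) = union over fragments m in A of Pc(m): the parameters
# causally determinable by some fragment of assumption class A.
# Pc sets here are hypothetical, after the wire example.
Pc = {
    "perfect-conductor": {"Vw"},
    "constant-resistor": {"Vw", "iw", "Rw"},
}
elec_conductor_class = ["perfect-conductor", "constant-resistor"]

def owns(assumption_class, Pc):
    """Definition 7: parameters the assumption class is possibly relevant to."""
    return set().union(*(Pc[m] for m in assumption_class))

print(sorted(owns(elec_conductor_class, Pc)))  # ['Rw', 'Vw', 'iw']
```

This matches the paper's example: the electrical-conductor assumption class owns Vw, iw, and Rw, but not the parameters (such as ρw, lw, Aw) that no fragment of the class can causally determine.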
function find-minimal-causal-model(M, I)
  if M is not a causal model then
    /* Since no simpler model can be a causal model */
    return nil
  else
    for each M' ∈ simplifications(M, I) do
      result := find-minimal-causal-model(M', I)
      if result ≠ nil then
        /* A simpler causal model has been found */
        return result
      endif
    endfor
    /* No simplification is a causal model, but M is */
    return M
  endif
end

Figure 5: Function find-minimal-causal-model

If we stipulate that whenever two assumption classes own a common parameter they also require each other (i.e., a coherent model must include all assumption classes possibly relevant to all its parameters), we can extend lemma 1 to all simplifications:

Lemma 2 Let I be an instance of MINIMAL CAUSAL MODEL such that all the "app" relations are causal approximations. Let the "requires" relation be such that if two assumption classes own the same parameter then they "require" each other. Let M1, M2 ⊆ M be coherent models such that M1 ≤ M2. The causal relations entailed by M1 are a subset of the causal relations entailed by M2.

The importance of lemma 2 is that instances of MINIMAL CAUSAL MODEL that satisfy the lemma's conditions can be solved in polynomial time using the function find-minimal-causal-model shown in figure 5. Find-minimal-causal-model takes two inputs: (a) I, an instance of MINIMAL CAUSAL MODEL; and (b) a coherent model M. It returns a minimal causal model that is simpler than M. If there is more than one minimal causal model, it returns the first one it finds. If no such model exists, it returns nil.

Simplifications(M), used in find-minimal-causal-model, returns the set of coherent models that are just simpler than M. A coherent model M' is just simpler than M if M' < M and there does not exist a coherent model M'' such that M' < M'' < M.
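Figure 5's recursion translates directly into a general-purpose language. In the sketch below (our illustration, not the paper's implementation), a model is a frozenset of model fragment names; is_causal_model and simplifications are supplied as functions, and the toy simplifications just drop one fragment at a time, standing in for the coherence-preserving simplifications of the paper:

```python
def find_minimal_causal_model(M, is_causal_model, simplifications):
    """Return a minimal causal model no more complex than M, else None.

    Relies on the monotonicity property (lemma 2): if M is not a
    causal model, no strictly simpler model is, so that branch is pruned.
    """
    if not is_causal_model(M):
        return None  # no simpler model can be a causal model
    for M_prime in simplifications(M):
        result = find_minimal_causal_model(M_prime, is_causal_model, simplifications)
        if result is not None:
            return result  # a simpler causal model has been found
    return M  # no simplification is a causal model, but M is

# Toy instance: a model is causal iff it retains the fragment "res"
# (purely illustrative stand-ins for the paper's fragments).
is_causal = lambda M: "res" in M
simplify = lambda M: [M - {f} for f in M]  # drop one fragment at a time

most_accurate = frozenset({"res", "thermal", "magnetic"})
print(find_minimal_causal_model(most_accurate, is_causal, simplify))
```

On this toy instance every search path that keeps "res" bottoms out at the singleton model, so the call returns frozenset({"res"}) regardless of iteration order.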
Find-minimal-causal-model(M, I) works by systematically searching the simplifications of M, until it finds a causal model M' such that all the simplifications of M' are not causal models. Lemma 2 then assures us that M' is a minimal causal model. More precisely, we have the following theorem:

Theorem 2 Let I be an instance of MINIMAL CAUSAL MODEL such that all the "app" relations are causal approximations. Let the "requires" relation be such that if two assumption classes own the same parameter then they "require" each other. Let each assumption class in A have a single most accurate model fragment, and let the set of these model fragments be M, the most accurate model of I. Find-minimal-causal-model(M, I) finds a minimal causal model of I if one exists, and returns nil otherwise, in time polynomial in the size of I.

This theorem's proof is based on lemma 2 and the fact that under the conditions of the theorem every coherent model has a polynomial number of simplifications.

Massless objects; Non-relativistic mass; Rigid bodies; Frictionless motion; Elastic collisions; Ideal gases; Carnot engines; Inviscid fluid flows; Constant gravity; Constant resistivity; Infinite heat sources/sinks; Ideal thermal insulators and conductors; Ideal electrical insulators and conductors

Figure 6: Examples of causal approximations

Discussion

Causal approximations are relatively common in modeling the physical world. Figure 6 shows some commonly used approximations, all of which are causal approximations (see [Nayak, 1991] for the equations). We have constructed a library of about 150 model fragments as part of an automated model selection system [Nayak et al., 1992] in which all the approximations are causal approximations.

The results of this paper extend in a straightforward manner to models involving differential equations, as long as approximating a differential equation does not convert it into an algebraic equation.
In [Iwasaki, 1988], Iwasaki identifies two ways of approximating a differential equation, exogenizing and equilibrating, and defines self-regulating equations. The results of this paper continue to apply if exogenizing is allowed and if the equilibration of self-regulating equations is allowed (see [Nayak, 1991] for details).

One problem with our definition of a minimal causal model is that it makes no mention of the causal strength of an explanation. For example, another "explanation" of the working of the temperature gauge in figure 1 is based on heat flow from the thermistor to the wire via thermal conduction at the junction between them. However, the strength of this "explanation" is insignificant compared to the explanation given earlier because of the large value of the thermal resistance at the junction. To properly quantify the strength of a causal explanation, one needs the values of parameters, i.e., one needs the device behavior. In [Nayak et al., 1992] we describe an implemented system that uses device behavior and a modified version of find-minimal-causal-model to find a simplest model that provides significant explanations of a phenomenon.

Related work

Weld [Weld, 1990] introduces an interesting class of approximations called fitting approximations. Briefly, a model M2 is a fitting approximation of a model M1 if M1 contains an exogenous parameter, called a fitting parameter, such that the predictions using M1 approach the predictions using M2 as the fitting parameter approaches a limit. Fitting approximations and causal approximations are fundamentally incomparable, since the former talks about behavior and the latter about causal dependencies. Neither class subsumes the other. However, in practice, it seems that most fitting approximations are also causal approximations, e.g., most of the fitting approximations in [Weld, 1991] are also causal approximations.
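To make the behavioral flavor of fitting approximations concrete, consider the wire example from earlier: with Vw = iw Rw, the constant-resistor prediction approaches the ideal-conductor prediction Vw = 0 as the fitting parameter Rw approaches 0, so ideal-cond(wire) is a fitting approximation of const-res(wire). A small numeric check (our illustration; the values are arbitrary):

```python
# Fitting approximation, illustrated behaviorally: as the fitting
# parameter Rw -> 0, the constant-resistor prediction Vw = iw * Rw
# approaches the ideal-conductor prediction Vw = 0.
def v_const_res(i_w, R_w):
    return i_w * R_w  # constant-resistor model of the wire

def v_ideal_cond(i_w):
    return 0.0        # ideal-conductor model of the wire

i_w = 2.0
for R_w in [1.0, 0.1, 0.001]:
    gap = abs(v_const_res(i_w, R_w) - v_ideal_cond(i_w))
    print(f"Rw={R_w}: prediction gap {gap}")
# The gap shrinks toward 0 in the limit Rw -> 0.
```

Note that this says nothing about causal dependencies, which is precisely why the two classes of approximations are incomparable.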
In [Williams, 1991], Williams introduces critical abstractions, which are similar to our minimal causal models. His emphasis is on automatically creating a critical abstraction from a base description, whereas our emphasis has been on constructing appropriate models from a prespecified space of possible model fragments.

In conclusion, constructing adequate problem representations involves the identification of abstractions and approximations that are particularly suited for the problem solving task. In this paper we introduced a new class of approximations, called causal approximations, that are commonly found in modeling the physical world. Causal approximations support the efficient generation of parsimonious causal explanations. Parsimonious causal explanations are derived from a simplest model that explains the phenomenon of interest. We formalized the problem of finding a simplest model that explains the phenomenon of interest and showed that, in general, it is intractable. We then showed that when (a) all the approximation relations between model fragments are causal approximations; and (b) the requires relation is such that whenever two assumption classes own the same parameter they require each other, then the causal relations entailed by coherent models increase monotonically as models become more accurate. This led to a polynomial time algorithm for finding a simplest model that explains a phenomenon. A number of examples demonstrate that causal approximations are common in modeling the physical world. We believe that causal approximations will play an important role in reasoning about engineered devices.

Acknowledgments

I would like to thank Alon Levy, Leo Joskowicz, Yumi Iwasaki, Tom Gruber, Andy Golding, Brian Falkenhainer, Richard Fikes, John Mohammed, Judea Pearl, and the anonymous reviewers.

References

Addanki, S.; Cremonini, R.; and Penberthy, J. S. 1991. Graphs of models. Artificial Intelligence 51:145-177.

Amarel, S. 1968.
On representations of problems of reasoning about actions. Machine Intelligence 3:131-171.

Bennett, S. W. 1987. Approximation in mathematical domains. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence. 239-241.

de Kleer, J. and Brown, J. S. 1984. A qualitative physics based on confluences. Artificial Intelligence 24:7-83.

Falkenhainer, B. and Forbus, K. D. 1991. Compositional modeling: Finding the right model for the job. Artificial Intelligence 51:95-143.

Forbus, K. D. 1984. Qualitative process theory. Artificial Intelligence 24:85-168.

Iwasaki, Y. 1988. Causal ordering in a mixed structure. In Proceedings of the Seventh National Conference on Artificial Intelligence.

Nayak, P. P.; Joskowicz, L.; and Addanki, S. 1992. Automated model selection using context-dependent behaviors. In Proceedings of the Tenth National Conference on Artificial Intelligence.

Nayak, P. P. 1991. Causal approximations: The details. Technical Report KSL 91-48, Knowledge Systems Laboratory, Stanford University.

Raiman, O. 1991. Order of magnitude reasoning. Artificial Intelligence 51:11-38.

Sacerdoti, E. 1974. Planning in a hierarchy of abstraction spaces. Artificial Intelligence 5:115-135.

Sacks, E. 1987. Piecewise linear reasoning. In Proceedings of the Sixth National Conference on Artificial Intelligence. 655-659.

Schaefer, T. J. 1978. The complexity of satisfiability problems. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing. 216-226.

Selman, B. and Kautz, H. 1991. Knowledge compilation using horn approximations. In Proceedings of the Ninth National Conference on Artificial Intelligence. 904-909.

Serrano, D. and Gossard, D. C. 1987. Constraint management in conceptual design. In Sriram, D. and Adey, R. A., editors 1987, Knowledge Based Expert Systems in Engineering: Planning and Design. Computational Mechanics Publications. 211-224.

Weld, D. S. and de Kleer, J., editors 1990. Readings in Qualitative Reasoning About Physical Systems. Morgan Kaufmann Publishers, San Mateo, California.

Weld, D. S. 1990.
Approximation reformulations. In Proceedings of the Eighth National Conference on Artificial Intelligence. 407-412.

Weld, D. S. 1991. Reasoning about Model Accuracy. Technical Report 91-05-02, University of Washington, Department of Computer Science and Engineering.

Williams, B. C. 1984. Qualitative analysis of MOS circuits. Artificial Intelligence 24:281-346.

Williams, B. C. 1991. Critical abstraction: Generating simplest models for causal explanation. In Proceedings of the Fifth International Workshop on Qualitative Reasoning about Physical Systems.
Automated Model Selection Using Context-Dependent Behaviors

P. Pandurang Nayak*, Knowledge Systems Lab., 702 Welch Road, Bldg. C, Palo Alto, CA 94304.
Leo Joskowicz, IBM, Watson Res. Ctr., P.O. Box 704, Yorktown Heights, NY 10598.
Sanjaya Addanki, IBM, Watson Res. Ctr., P.O. Box 704, Yorktown Heights, NY 10598.

Abstract

Effective reasoning about complex engineered devices requires device models that are both adequate for the task and computationally efficient. This paper presents a method for constructing simple and adequate device models by selecting appropriate models for each of the device's components. Appropriate component models are determined by the context in which the device operates. We introduce context-dependent behaviors (CDBs), a component behavior model representation for encapsulating contextual modeling constraints. We show how CDBs are used in the model selection process by exploiting constraints from three sources: the structural and behavioral contexts of the components, and the expected behavior of the device. We describe an implemented program for selecting a simplest adequate model. The inputs are the structure of the device, the expected device behavior, and a library of CDBs. The output is a set of component CDBs forming a structurally and behaviorally consistent device model that achieves the expected behavior.

Introduction

Effective reasoning about complex engineered devices requires device models that are adequate for the task and computationally efficient. Producing such models requires identifying relevant device features and applicable simplifications. In most existing applications, the user constructs the device model appropriate for the task. This is a difficult, error-prone, and time-consuming activity requiring skilled and experienced engineers. Automating the model construction process overcomes these drawbacks and provides future intelligent programs with a useful modeling tool.

*Pandurang Nayak was supported by an IBM Graduate Technical Fellowship.
Additional support for this research was provided by the Defense Advanced Research Projects Agency under NASA Grant NAG 2-581 (under ARPA order number 6822), by NASA under NASA Grant NCC 2-537, and by IBM under agreement number 14780042.

Model-based reasoning systems embody an important advance in automating the construction of device models: the device model is automatically constructed from a description of the structure of the device. However, currently these systems have a single model for each component, thus limiting both the modeling scope and the problems that can be solved. Allowing multiple component models adds flexibility and extends the scope. Producing a simplest, adequate device model then consists of selecting component models that are mutually compatible, globally consistent, and incorporate appropriate simplifying assumptions.

In this paper we present a method for constructing simple and adequate device models by selecting appropriate models for each of the device's components. The key insight is that adequate component models are determined by the context in which they operate. We have identified three types of contexts: the structural and behavioral contexts of the components, and the expected behavior of the device. The structural context of a component consists of its physical properties and the components to which it is structurally related. The behavioral context of a component consists of its behavior and the behavior of related components. Expected behaviors are abstract descriptions of device behavior.

We introduce context-dependent behaviors (CDBs), a behavior model representation for encapsulating contextual modeling constraints. We describe an implemented program that uses the structural and behavioral contexts of components, and the expected behavior of the device, to select adequate component CDBs. The inputs are the structure of the device, the expected device behavior, and a library of CDBs.
The output is a set of component CDBs forming a structurally and behaviorally consistent device model that minimally achieves the expected behavior.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Example: a temperature gauge

This section presents an example of a device with multiple models for its components, and defines the properties of a good model. Figure 1 shows the schematic of a temperature gauge from [Macaulay, 1988], consisting of a battery, a wire, a bimetallic strip, a pointer, and a thermistor. A thermistor is a semiconductor device; a small increase in its temperature causes a large decrease in its resistance. A bimetallic strip has two strips made of different metals welded together. Temperature changes cause the two strips to expand by different amounts, causing the bimetallic strip to bend.

Figure 1: A temperature gauge (the thermistor sits in a container of water)

The temperature gauge works as follows: the thermistor senses the water temperature. The thermistor temperature determines the thermistor resistance, which determines the circuit current. This determines the amount of heat dissipated in the wire, which determines the bimetallic strip's temperature. This determines the bimetallic strip's deflection, which determines the pointer's angular position.

To model the temperature gauge, we use a component model library that contains multiple models for each component. Figure 2 shows some of the wire's possible models. For example, the wire can be modeled as an electrical-conductor, which can be a perfect-conductor or a resistor. The resistor can be modeled as a constant-resistor, a temp-dep-resistor, or a thermal-resistor (which models the heat generated in the resistor). All components can be modeled as physical-things with various thermal, mass, and motion models.

Figure 2: The possible models of a wire (wire; perfect-conductor; electrical-conductor; constant-resistor; electromagnet; temp-dep-resistor; inductor; thermal-resistor; expanding-wire; thermally-expanding-wire; axially-rotating-wire; rigid-rotating-wire; torsion-spring; arrows denote possible-model links)

The simplest model that explains the workings of the temperature gauge models the wire both as a thermal-resistor and a constant-resistor, the bimetallic strip as a thermal-bms (which models the bending of the bimetallic strip due to temperature changes), the pointer as a rotating-object, the battery as a voltage-source, and the thermistor as a thermal-thermistor (which models the temperature dependence of the thermistor's resistance).

This model satisfies the two important properties of a good model: adequacy and simplicity. Adequacy guarantees that the model correctly captures the device's behavior. For example, ignoring the thermal properties of the wire (i.e., modeling it only as a constant-resistor, which does not model the heat generated due to current flow) fails to account for the bimetallic strip bending and consequently the pointer's displacement. If the pointer does not move, the device does not measure temperature changes. Simplicity guarantees that the model captures only the physical phenomena necessary for explaining the device's behavior. For example, modeling the wire's magnetic and kinematic properties in addition to its thermal and electrical properties produces a consistent but needlessly complicated model with respect to its main function of measuring temperature changes. Selecting an appropriate subset of component models, from a space of possible models, guarantees both properties.

Context-dependent behaviors

Component models in our system are encapsulations of component behavior and contextual modeling constraints. We call these models context-dependent behaviors (CDBs). Behavioral information is represented with qualitative or quantitative time-varying equations.
Contextual modeling constraints are represented with structural and behavioral constraints.

CDBs are represented as classes that inherit properties to their instances. A component is modeled as a CDB by making it an instance of the corresponding class. CDBs are organized in three ways: (a) in a generalization hierarchy, representing the "subset-of" relation between CDBs; (b) in a "possible-models" hierarchy (e.g., Figure 2); and (c) into assumption classes [Addanki et al., 1991; Falkenhainer and Forbus, 1991]. The possible models of a CDB are the set of additional CDBs that can be used to model instances of that CDB. The "subset-of" and "possible-models" relations between CDBs may overlap, but are not identical. For example, thermal-thermistor is a specialization, but not a possible model, of thermal-object: not all objects can be modeled as thermal-thermistors, only thermistors can.

An assumption class is a set of CDBs that can be viewed as different, mutually contradictory descriptions of the same phenomenon. CDBs within an assumption class can be related to each other by a primitive approximation relation which captures the relative accuracy with which the CDBs describe the phenomenon.
We assume that each assumption class contains a single most accurate CDB.

(defcdb resistor (electrical-conductor)
  ((parameters
    (resistance :range resistance-parameter
                :doc "The resistor's resistance"))
   (equations
    (= (voltage-difference ?object)
       (* (resistance ?object)
          (current (elec-term-1 ?object))))
    (> (resistance ?object) 0))
   (possible-models constant-resistor temp-dep-resistor
                    thermal-resistor)
   (possible-model-of electrical-conductor)
   (assumption-class elec-conductor-class)
   (approximations perfect-conductor)
   (req-assumption-classes resistance-class)
   (structural-constraints)
   (behavioral-constraints
    (implies (>= (* (voltage-difference ?object)
                    (current (elec-term-1 ?object)))
                 (elec-power-threshold ?object))
             (model-as ?object thermal-resistor)))))

Figure 3: The resistor CDB.

Figure 3 shows the definition of the resistor CDB.¹ It is a specialization of the electrical-conductor CDB, and it defines the resistance parameter for its instances. The equations clause describes behavior with equations relating parameters defined for instances of resistor. The possible-models and possible-model-of clauses describe the resistor's position in the "possible models" hierarchy. The assumption-class clause identifies the resistor's assumption class, and the approximations clause identifies CDBs in that assumption class that are approximations of resistor. The resistor CDB provides a partial description of electrical resistance; the req-assumption-classes clause identifies assumption classes that must be included to complete the description.

The structural-constraints and behavioral-constraints clauses define the CDB's contextual modeling constraints and are stated in a first-order constraint language. There are two types of constraints: model-as constraints and preconditions. Model-as constraints are implication constraints with a model-as literal (or a conjunction of model-as literals) in the consequent.² They are rules that specify conditions under which particular components must be modeled by particular CDBs. Preconditions are constraints that are not model-as constraints. They are

¹Symbols starting with "?" are variables. "?object" is bound to the CDB instance under consideration.
²(model-as ?x ?y) says that ?x should be modeled as an instance of ?y.
2 They are rules that specify conditions under which particular components must be modeled by particular CDBs. Preconditions are con- straints that are not model-as constraints. They are ‘Symbols starting with “?” are variables. “?object” is bound to the CDB instance under consideration. 2 (model-as ?x ?y) says that ?x should be modeled as an instance of ?y. necessary (but not sufficient) conditions for a compo- nent to be modeled by the CDB. CDBs describing different aspects of a component’s behavior can be combined to produce a component model. We use a set of rules to obtain the equa- tions of such a component model from the equations of the individual CDBS.~ For example, consider a hot wire under tension. To model the dependence of the wire’s length on both tension and temperature, we would model the wire both as an elastic-wire and a thermally-expanding-wire. The relationship between components and CDBs is a many-to-many mapping: a single component can be modeled by different CDBs, and a single CDB can model different components. For example, a wire can be modeled as an electrical-conductor or an elec- tro et. The electrical-conductor CDB can be used to model a wire, or a metallic pipe, This many- to-many relation between components and CDBs gives modeling flexibility for different reasoning tasks. For example, device analysis consists of finding the appro- priate CDBs for a given set of components. Device design consists of finding components for a given set of desired behaviors described as CDBs. Structural context The structural context of a component consists of its physical properties (e.g., its shape, mass, and material composition), the structural relations that it partici- pates in, and the components to which it is structurally related. 
Structural relations describe the structure of a device and include relations such as connected-to (indicating that two component terminals are connected), coiled-around (indicating that a wire is coiled around a component), and meshed (indicating that a pair of gears mesh). The structural context constrains the space of possible device models: the physical properties of components constrain the space of possible component models, while the structural relations constrain the space of possible component interactions.

The structural preconditions of a CDB are constraints on the structural context of a component that must be satisfied if the component is to be modeled by that CDB. For example, the precondition:

(and (composition ?object ?material)
     (metal ?material))

in the electrical-conductor CDB indicates that a component must be metallic for it to be modeled as an electrical-conductor. Structural preconditions are similar to process preconditions in QP theory [Forbus, 1984]. However, unlike process preconditions, they are only necessary conditions. Hence, the above constraint does not require that every metallic object be modeled as an electrical-conductor.

Model-as structural constraints are heuristic constraints on the structural context of a component that enforce the selection of compatible CDBs for structurally related components. Compatible CDBs allow structurally related components to interact with each other. For example, the model-as constraint:

(implies (and (terminals ?object ?term1)
              (voltage-terminal ?term1)
              (connected-to ?term1 ?term2))
         (model-as ?term2 voltage-terminal))

in the electrical-component CDB says that if a component is modeled as an electrical-component, then terminals connected to that component's voltage terminals must be modeled as voltage terminals.

³Similar to combining influences in [Forbus, 1984].
This allows the components corresponding to the connected terminals to interact by sharing voltages at those terminals.

Behavioral context

The behavioral context of a component consists of its behavior and the behavior of related components. The behavior of a component is the values, and variations over time of the values, of parameters used to model the component. A component's behavioral context can provide modeling information not explicitly available in the structural context. Behavior generation explicates information that is implicit in equations. Consider modeling an air gap: if the voltage drop across it is large enough (as in a properly functioning spark plug), then it should be modeled as an electrical conductor; if the voltage drop across it is not large enough (as in a common electrical switch), it should be modeled as an electrical insulator. The value of the voltage drop across the air gap (a behavioral property) determines the appropriate model for it.

Behavioral preconditions in a CDB are constraints on the behavioral context of a component that must be satisfied if the component is to be modeled by that CDB. For example, the precondition:

(< (voltage-difference ?object)
   (voltage-diff-threshold ?object))

in the perfect-conductor CDB indicates that a component can be modeled as a perfect-conductor only if the voltage drop across it is less than some threshold. Behavioral preconditions are used to decide which CDBs in an assumption class can be used to model a component. This is in contrast to quantity conditions in processes [Forbus, 1984] which only control the activity of a process, but not the existence of the process.

Model-as behavioral constraints are constraints on the behavioral context of a component that enforce the selection of additional component CDBs.
For example, the model-as constraint:

(implies (>= (* (voltage-difference ?object)
                (current (elec-term-1 ?object)))
             (elec-power-threshold ?object))
         (model-as ?object thermal-resistor))

in the resistor CDB states that if the dissipated power exceeds a threshold, then it must be explicitly modeled by modeling the component as a thermal-resistor. Behavioral constraints can be viewed as enforcing the selection of significant CDBs. Significance is measured with appropriately set thresholds. Thresholds can be either preset or computed dynamically. A threshold of 2300 for Reynolds number, which distinguishes laminar flow from turbulent flow, is a widely used preset threshold. Other thresholds can be preset by an engineer from common practice. Thresholds can also be set dynamically based on the evolving device model and knowledge of acceptable tolerances on certain parameters (see [Shirley and Falkenhainer, 1990; Nayak, 1991] for some initial work).

Expected behavior

The expected behavior of a device is an abstract, possibly incomplete description of what the device does (but not how it does it). We use expected behaviors to capture, in part, what is commonly referred to as the function of a device. For example, stating that the device in Figure 1 is a temperature gauge indicates that the device model must explain how the temperature of the thermistor determines the angular position of the pointer. Expected behaviors can also provide abstract descriptions of device behaviors that would not normally be considered the device's primary function. For example, to assist a design engineer select dimensions for the temperature gauge wire, the device model must explain how the wire's length and cross-sectional area affect the angular position of the pointer. The most common expected behavior descriptions are input/output descriptions of device behavior.
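A behavioral model-as constraint like the power-threshold rule in the resistor CDB amounts to a simple test over behavior values. A minimal sketch (our own illustration; the threshold and parameter values are hypothetical):

```python
# Evaluate the resistor CDB's behavioral model-as constraint:
# if dissipated power V * i reaches the threshold, the component
# must additionally be modeled as a thermal-resistor.
def additional_cdbs(voltage_difference, current, elec_power_threshold):
    extra = set()
    if voltage_difference * current >= elec_power_threshold:
        extra.add("thermal-resistor")
    return extra

# The gauge's wire dissipates enough power to heat the bimetallic strip:
print(additional_cdbs(voltage_difference=2.0, current=0.5,
                      elec_power_threshold=0.1))  # {'thermal-resistor'}
# A low-power wire needs no thermal model:
print(additional_cdbs(voltage_difference=0.01, current=0.001,
                      elec_power_threshold=0.1))  # set()
```

The threshold thus decides whether a phenomenon is significant enough to be represented in the device model at all.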
Knowledge of the expected behavior is commonplace and almost always available, either directly from the user, from the description of the problem to be solved, or from the context in which the device operates. For example, device names such as light bulb, vacuum cleaner, and disk drive are widely used, and all are associated with expected behaviors. Or suppose we want to know if a disk drive can be used as a door stop. The expected behavior, to stop the door from shutting, suggests that the disk drive model should focus on its kinematic and dynamic properties as an object, not its information retrieval properties! Expected behaviors are an essential component of a device description and play an important role in focusing the model selection process. Without them, all consistent device models are equally plausible: the disk drive as an information retrieval device, a heating device, or a door stop.

We specify expected behaviors with causal relations between parameters. Hence, expected behaviors specify which component parameters must appear in the device model, and the causal relations between them. For example, the temperature gauge's expected behavior, representing its primary function, is:

(causes (temperature thermistor)
        (angular-position pointer))

which says: the thermistor model must include a temperature parameter; the pointer model must include an angular-position parameter; and the model must explain how the thermistor's temperature determines the pointer's angular-position.

Nayak, Joskowicz, and Addanki 713

Modeling program

In this section we describe an implemented polynomial-time algorithm to find a simplest adequate model using structural, behavioral, and expected behavior constraints.
A device model is said to be adequate when (a) it explains the expected behavior; (b) the structural and behavioral preconditions and the behavioral model-as constraints of each component CDB are satisfied;⁴ and (c) each component model includes exactly one CDB from each required assumption class. A device model M1 is said to be simpler than a device model M2 if M1 models fewer phenomena, more approximately: for each CDB C selected for component I in M1, either C is selected for I in M2, or C' is selected for I in M2 and C is an approximation of C'.

Finding a simplest model that satisfies the expected behavior is in general intractable. However, Nayak [Nayak, 1992a] shows that this problem can be solved efficiently if all the approximation relations between CDBs are causal approximations. Briefly, when all the approximations are causal, the causal relations entailed by a simpler model are a subset of the causal relations entailed by a more complex model. Causal approximations are both natural and common in modeling the physical world. Our program assumes that all approximations are causal approximations.

The program's inputs are a device description, an expected behavior, and threshold values. The device description specifies the device's structure, its components and the structural relations between them, and any user-selected CDBs for each component. The program produces a simplest adequate model by first identifying an adequate model and then simplifying it. We describe these two phases next.

Identifying an adequate model

An adequate model is identified in four steps. The first step augments the initial device description to include all expected behavior parameters. The second and third steps augment the device model using the model-as structural and behavioral constraints, respectively. The fourth step checks the expected behavior. We now describe these steps and the flow of control between them.
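The simpler-than ordering defined above can be sketched directly. The dictionary representation of a model (component name mapped to a set of selected CDB names) is our own stand-in, and the approximation relation is assumed to be given in closed, transitive form:

```python
# Sketch of the "simpler than" test between device models.
# m1, m2: dict mapping component -> set of selected CDB names.
# approx: set of (simpler, more_complex) pairs, assumed transitively closed.
def simpler_than(m1, m2, approx):
    for comp, cdbs in m1.items():
        for c in cdbs:
            if c in m2.get(comp, set()):
                continue
            # otherwise c must approximate some CDB chosen for comp in m2
            if not any((c, c2) in approx for c2 in m2.get(comp, set())):
                return False
    return True

approx = {("constant-resistance", "temp-dep-resistor")}
m1 = {"wire": {"electrical-conductor", "constant-resistance"}}
m2 = {"wire": {"electrical-conductor", "temp-dep-resistor"}}
# m1 is simpler than m2, but not the other way around.
```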
⁴Structural model-as constraints are based on the heuristic that every component interaction that can take place must be modeled, even if the interaction is irrelevant. Hence, we do not require that adequate models satisfy structural model-as constraints.

In the first step, the program checks if the initial device model contains all expected behavior parameters. If a component parameter is missing, the program searches the possible-models of that component for the most general CDB that provides the required parameter.⁵ Only CDBs whose structural and behavioral preconditions are satisfied are considered in this search. The resulting CDB is added to the component's model, together with the most accurate CDBs of any req-assumption-classes. For example, the expected behavior of the temperature gauge requires a temperature parameter for the thermistor and an angular-position parameter for the pointer; this is achieved by adding the appropriate CDB to the thermistor model and the appropriate CDB to the pointer model.

In the second step, the program checks the structural model-as constraints of each component model. If a constraint is violated, then it means that the device model does not include a required CDB for some component. The program searches the possible-models of that component for the most general CDB that is a specialization of the required CDB.⁵ The resulting CDB is added to the component's model, together with the most accurate CDBs of any req-assumption-classes. This can result in additional model-as structural constraints being violated. Hence, this step is repeated until all structural model-as constraints are satisfied. For example, modeling the thermistor as a thermal-thermistor requires electrical models for the battery and the wire. Hence, the battery is modeled as a voltage-source and the wire as an electrical-conductor. The wire model is further augmented with the resistor and temp-dep-resistor CDBs to satisfy the req-assumption-classes constraints.
Modeling the pointer as a rotating-object requires a kinematic interaction with the bimetallic strip, which can be satisfied by modeling the latter as a thermal-bms.

In the third step, the program uses the above device model to generate the behavior. It uses this behavior to check the behavioral model-as constraints of each component model. This check is exactly analogous to step two. If CDBs are added to any component model, the program repeats steps two and three until all structural and behavioral model-as constraints are satisfied.

For behavior generation the program computes the order of magnitude of each parameter using a novel technique described in [Nayak, 1992b]. Briefly, the order of magnitude, om(q), of a quantity q is defined as: om(q) = ⌊log₁₀ |q|⌋. A set of rules propagates exogenous orders of magnitude to dependent parameters. For example, the rule:

om(a) + om(b) ≤ om(a * b) ≤ om(a) + om(b) + 1

propagates orders of magnitude to a product. The order of magnitude behavior is at the right level of detail: being qualitative, it does not require exact numerical values for exogenous parameters. In addition, it provides valuable numerical information not available from purely qualitative behaviors. For example, assuming that the orders of magnitude of the resistance of the wire and of the thermistor is 2, and of the emf of the battery is 1, the order of magnitude of the heat generated in the wire is predicted to be between -3 and 1. If the order of magnitude of the elec-power-threshold of the wire is set to be less than or equal to 1, the model-as behavioral constraint in the resistor CDB (Figure 3) requires that the wire be modeled as a thermal-resistor.

⁵Ties are broken arbitrarily.

714 Representation and Reasoning: Qualitative Model Construction
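The order-of-magnitude definition and the product-propagation rule above translate directly into code; the function names are our own:

```python
import math

# Order of magnitude of a nonzero quantity: om(q) = floor(log10 |q|).
def om(q):
    return math.floor(math.log10(abs(q)))

# Interval propagation for a product, following the rule
# om(a) + om(b) <= om(a * b) <= om(a) + om(b) + 1.
def om_product(a_lo, a_hi, b_lo, b_hi):
    return (a_lo + b_lo, a_hi + b_hi + 1)

# e.g. om(250) == 2 and om(0.03) == -2; the product of two om-2
# quantities has om in [4, 5] (250 * 250 = 62500, with om == 4).
```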
In the fourth step, the program determines if the expected behavior is satisfied by first computing the causal ordering [Iwasaki and Simon, 1986; Iwasaki, 1988] of the device model parameters, using the device model equations. The causal ordering is used to check if the expected behavior is satisfied. If it is satisfied, the model is adequate. Otherwise, the program augments the device model with an additional CDB for some component, and repeats steps two through four, until it finds an adequate model. The additional CDB is selected from the more specialized CDBs that could have been used to satisfy constraints in the above steps, but were not. If no additional CDB can be added, there is no adequate model, and the program reports failure. For example, the model generated above does satisfy the expected behavior and hence is adequate.

Simplifying the model

The adequate model identified above can be more complex than necessary for one of three reasons: (a) for each required assumption class, the program adds in the most accurate CDB, even though a more approximate (simpler) CDB might do; (b) since the structural model-as constraints are heuristic, the CDBs added to satisfy them may not be necessary; and (c) unnecessary CDBs may have been added in step four. A model can be simplified by applying one of the following two simplification operators: (a) replace a CDB by one of its immediate approximations; and (b) altogether remove a CDB. The program simplifies the adequate model found above by applying a sequence of simplification operators, while preserving adequacy. When no simplification of a model is adequate, the program terminates and returns that model as a simplest adequate model. For example, temp-dep-resistor can be replaced by constant-resistance in the wire model.

The application of different sequences of simplification operators can result in different models.
However, the different models differ in features deemed to be insignificant by the behavioral constraints and thresholds, and hence the program returns the first simplest adequate model it finds.

The program is implemented in Common Lisp and has been tested on ten electromechanical devices drawn from [Artobolevsky, 1980; Macaulay, 1988; van Amerongen, 1967]. The devices range in complexity from 10 to 54 components per device, and include different types of temperature gauges, thermostats, relays, and workpiece inspection devices. In all cases the program constructs a model in 0.5 to 8 minutes on an Explorer II. The choice of devices and the scenarios of our experiments illustrate a number of points:

1. Device descriptions can include irrelevant information. For example, in our representation of the temperature gauge, we include the atmosphere, which can thermally interact with the wire and with the battery. The program correctly disregards the thermal interaction between the atmosphere and the battery.

2. Similar components in different devices are modeled differently. For example, in the temperature gauge the coil of wire is modeled as a resistor generating heat, while in a galvanometer the coil of wire is modeled as an electromagnet. In all cases our program correctly identifies the relevant models.

3. Models of differing complexity are built depending on the threshold settings. For example, if the voltage-diff-threshold of an electrical-conductor is low enough, the program models it as a resistor, even if modeling it as a perfect-conductor will explain the expected behavior.

4. The same device can have different expected behaviors, e.g., the effect of the wire's dimensions on the angular position of the pointer. The program correctly constructs different models to explain different expected behaviors.
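The simplification phase described earlier (apply operators while preserving adequacy, stop when no simplification is adequate) can be sketched as a greedy loop. The set-of-CDB-names model representation and the adequacy predicate below are toy stand-ins for the paper's device models:

```python
# Greedy sketch of the simplification phase: keep applying a simplification
# operator (approximate a CDB, or drop one) as long as the result stays
# adequate; terminate when no simplification is adequate.
def simplify(model, operators, adequate):
    changed = True
    while changed:
        changed = False
        for op in operators:
            for candidate in op(model):
                if adequate(candidate):
                    model = candidate
                    changed = True
                    break
            if changed:
                break
    return model

def drop_one(m):
    # one simplification operator: remove a single CDB
    return [m - {c} for c in m]

adequate = lambda m: "resistor" in m  # toy adequacy test
result = simplify(frozenset({"resistor", "temp-dep-resistor"}),
                  [drop_one], adequate)
```

Here the loop drops temp-dep-resistor (the result is still "adequate") but refuses to drop resistor, returning the singleton model.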
We have constructed a library of 25 component types, including wires, bimetallic strips, springs, and permanent magnets. The CDB library consists of approximately 150 CDBs, including descriptions of electricity, magnetism, heat, and the kinematics and dynamics of one-dimensional motion (including both rotation and translation). Each component has an average of 23 CDBs describing different aspects of its behavior.

Currently, the model selection program has two main limitations. First, behavior generation does not involve any integration over time. Instead, the order of magnitude values at the beginning of an interval are used as the behavior throughout the interval. A consequence is that expected behaviors have to be specified for each operating region. Of course, this means that different models can be built for each operating region, rather than requiring a single model for all operating regions. Second, expected behavior constraints are limited to causal relations between parameters.

Related work and conclusions

Falkenhainer and Forbus [Falkenhainer and Forbus, 1991] select models by compositional modeling. Each model is conditioned on a set of simplifying and operating assumptions. A set of constraints governs the use of simplifying assumptions. These constraints are similar to our structural constraints (though the latter are only heuristics used to find an initial adequate model). In addition, we have identified a useful source of these constraints: the observation that components must be modeled in a compatible way. Operating assumptions are similar to behavioral constraints. An important difference is that while we generate an order of magnitude behavior, they generate either a purely qualitative or a purely quantitative behavior. As with our use of the expected behavior, they use a user query to generate an initial set of simplifying assumptions.
However, in addition, the expected behavior provides feedback on the choice of models: an adequate model's equations must subsume the expected behavior. Finally, their model selection algorithm is based on dynamic constraint satisfaction [Mittal and Falkenhainer, 1990], while we exploit causal approximations to develop a polynomial time model selection algorithm.

Addanki et al. [Addanki et al., 1991] discuss techniques for selecting models of acceptable accuracy. The accuracy of a model is determined either by asking a user, or by using consistency rules. Consistency rules are similar to our behavioral preconditions. If a model's accuracy is unacceptable, a set of domain-dependent parameter change rules selects a more accurate model. They start the analysis with the simplest model, and switch models until a model of acceptable accuracy is found. While we have not addressed model switching, our techniques can be used to make a more intelligent choice of an initial model, using contextual modeling constraints that are absent in their system.

In [Weld, 1990], Weld shows that, when models can be formalized as fitting approximations of one another, the parameter change rules used above can be replaced by a domain-independent technique for model switching. Most fitting approximations are also causal approximations, and hence his model switching techniques can be incorporated into our system.

In conclusion, having multiple models for individual components is necessary to account for the different assumptions, perspectives, and purposes that determine the adequate device model. This paper shows how the context in which the device and its components operate provides a powerful guide for the model selection process. We introduced context-dependent behaviors (CDBs), a component behavior model representation for encapsulating contextual modeling constraints.
We showed how CDBs are used in the automated selection of device models by exploiting constraints from three sources: the structural and behavioral contexts, and the expected behavior of the device. We tested our ideas with an implementation.

We believe that our modeling paradigm will prove to be useful for a variety of tasks including analysis, design, and tutoring. The compositionality of CDBs and the many-to-many relationship between components and CDBs provide modeling flexibility, and expected behaviors allow teleological reasoning. CDBs provide a uniform mechanism to represent and reason about the structure, behavior, and function of a device. Future work will involve handling a wider range of expected behaviors, including behaviors over time, and the dynamic setting of thresholds.

Acknowledgements

We thank Edward Feigenbaum, Richard Fikes, Brian Falkenhainer, Renate Fruchter, Andy Gold[...], [...]oyd, Yumi Iwasaki, Rich Keller, Alon Levy, [...]wamy, Rich Washington, Dan Weld, Michael [...], and the anonymous reviewers for useful discussions and for comments on earlier drafts.

References

Addanki, S.; Cremonini, R.; and Penberthy, J. S. 1991. Graphs of models. Artificial Intelligence 51:145-177.

Artobolevsky, I. I. 1980. Mechanisms in Modern Engineering Design, volume 5. Mir Publishers, Moscow.

Bobrow, D., editor 1984. Qualitative Reasoning About Physical Systems. North-Holland.

Falkenhainer, B. and Forbus, K. D. 1991. Compositional modeling: Finding the right model for the job. Artificial Intelligence 51:95-143.

Forbus, K. D. 1984. Qualitative process theory. Artificial Intelligence 24:85-168.

Iwasaki, Y. and Simon, H. A. 1986. Causality in device behavior. Artificial Intelligence 29:3-32.

Iwasaki, Y. 1988. Causal ordering in a mixed structure. In Proceedings of the Seventh National Conference on Artificial Intelligence. 313-318.

Macaulay, D. 1988. The Way Things Work. Houghton Mifflin Company, Boston.

Mittal, S. and Falkenhainer, B. 1990. Dynamic constraint satisfaction.
In Proceedings of the Eighth National Conference on Artificial Intelligence. 25-32.

Nayak, P. P. 1991. Validating approximate equilibrium models. In Proceedings of the 1991 Model-Based Reasoning Workshop.

Nayak, P. P. 1992a. Causal approximations. In Proceedings of the Tenth National Conference on Artificial Intelligence.

Nayak, P. P. 1992b. Order of magnitude reasoning using logarithms. Technical Report KSL 92-29, Knowledge Systems Laboratory, Stanford University.

Shirley, M. and Falkenhainer, B. 1990. Explicit reasoning about accuracy for approximating physical systems. In Working Notes of the Automatic Generation of Approximations and Abstractions Workshop. 153-162.

van Amerongen, C. 1967. The Way Things Work. Simon and Schuster.

Weld, D. S. 1990. Approximation reformulations. In Proceedings of the Eighth National Conference on Artificial Intelligence. 407-412.
Bharat Rao and Stephen C-Y. Lu
Knowledge-Based Engineering Systems Laboratory
University of Illinois at Urbana-Champaign

Abstract

This paper discusses the discovery of mathematical models from engineering data sets. KEDS, a Knowledge-based Equation Discovery System, identifies several potentially overlapping regions in the problem space, each associated with an equation of different complexity and accuracy. The minimum description length principle, together with the KEDS algorithm, is used to guide the partitioning of the problem space. The KEDS-MDL algorithm has been tested on discovering models for predicting the performance efficiencies of an internal combustion engine.

Engineering phenomena are often non-homogeneous; namely, the relationship between the domain variables varies across the problem space. However, many discovery techniques make the underlying assumption that the phenomenon being modeled is homogeneous, and are unsuitable for engineering problems. KEDS, in addition to being a model-driven empirical discovery system, can also be viewed as a conceptual clustering system, which partitions the data into regions based upon the mathematical relationships that it discovers between the domain variables. The intertwining of the discovery and partitioning phases enables KEDS to overcome many of the problems involved in learning relationships from engineering data.

It is well known that to achieve the objective of best classifying unseen data, it is not always best to construct a perfect model, which predicts every single data point in the training set without any error. Greater accuracy in the prediction of new data is often achieved by using an imperfect, smaller model [Breiman et al., 1984], rather than one that may be over-sensitive to statistical irregularities and noise in the training set. While cross-validation techniques may help to avoid over-fitting, they are fairly expensive. The minimum description length (MDL) principle [Rissanen, 1985] is an elegant and powerful theory that balances model complexity with model error. In combination with KEDS, the Knowledge-based Equation Discovery System [Rao and Lu, 1992; Rao et al., 1991], the KEDS-MDL algorithm is able to discover accurate and comprehensible models in engineering domains.

*This research was partially supported by a research contract from the Digital Equipment Corporation. For further information please contact the first author at Beckman Institute for Advanced Science and Technology, 405 N. Mathews, Urbana, IL 61801, bharat@cs.uiuc.edu.

The output of KEDS is a single region Ri, which is associated with a mathematical equation fi that describes the behavior of the variables within the region. In earlier research [Rao and Lu, 1992], KEDS was invoked multiple times until all available resources were exhausted to produce a collection of overlapping regions. These regions were then combined to obtain a model that covered the entire problem space. This approach was very wasteful of resources. It is more efficient to run KEDS for a limited time, select a single region-equation pair from the available candidates, run KEDS on the remainder of the problem space, and so on. However, while a number of metrics can be used to select an (Ri, fi) pair, many metrics (for example, accuracy or comprehensibility) are unsatisfactory because the result of evaluating a candidate (Ri, fi) pair provides a local, rather than a global, measure of how well the candidate models the entire data set. The chosen metric should be able to select between two (Ri, fi) pairs, where the regions are of different sizes and cover distinct (potentially overlapping) regions within the problem space, and the equations have different complexity and accuracy. The MDL principle is an ideal metric for discriminating between alternate candidates. The selection of the next region in the problem space is made globally by choosing the candidate (from those created by running KEDS for a limited period of time) that minimizes the description length of the entire data set, rather than selecting a candidate that optimizes a local metric (for example, selecting the most accurate (R, f) pair). As empirically demonstrated in this paper, the models discovered by using MDL as an evaluation metric with KEDS outperform those created by KEDS with other metrics. The KEDS-MDL algorithm presented in this paper is a multi-dimensional extension of the MDL surface reconstruction algorithm presented in [Pednault, 1989] (which applied to functions of a single variable).

Rao and Lu 717

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Figure 1: KEDS: (a) Sampling the data set, (b) Computing covers, and (c) Partitioning the problem space. (A and B are regions in discriminant space.)

A variety of tools have been applied to form engineering models (see [Finger and Dixon, 1989]). Statistical techniques like CART [Breiman et al., 1984] and MARS [Friedman, 1991], and machine learning techniques like ID3 [Quinlan, 1986] and PLS [Rendell, 1983], may be characterized as split-and-fit systems, which first partition (split) the problem space and then successively model (fit) each region. KEDS is a fit-and-split system which builds up hypotheses in a bottom-up fashion and partitions the problem space to find the enclosing region. Techniques like those in [Kadie, 1990] can be used to combine multiple overlapping hypotheses. Some traditional empirical discovery systems [Langley et al., 1987; Falkenhainer and Michalski, 1986] perform well when the equations that best describe the data have relatively small integer coefficients and the problem space does not have to be partitioned into several regions.
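The global MDL criterion for choosing among candidate (R, f) pairs can be illustrated with a toy scoring function. The encodings below (a parameter cost plus Gaussian code lengths for covered points and exceptions) are simplified stand-ins for the ones developed later in the paper, and all names and numbers are our own:

```python
import math

# Toy MDL score for a candidate (R, f) pair: parameter cost, a Gaussian
# code length for the n_in covered points (variance var_in), and a code
# length for the n_out uncovered "exception" points (variance var_out).
def description_length(k, n_in, var_in, n_out, var_out, n_total):
    model_bits = 0.5 * k * math.log2(n_total)
    def gauss_bits(n, var):
        return n * math.log2(math.sqrt(2 * math.pi * math.e * var)) if n else 0.0
    return model_bits + gauss_bits(n_in, var_in) + gauss_bits(n_out, var_out)

# A tight fit covering many points wins over a loose fit covering few,
# even though it spends more bits on its extra parameters.
a = description_length(k=3, n_in=80, var_in=0.01, n_out=20, var_out=4.0, n_total=100)
b = description_length(k=1, n_in=30, var_in=1.0, n_out=70, var_out=4.0, n_total=100)
```

Choosing the candidate with the smaller total is the global selection step: the comparison accounts for the whole data set, not just the points a candidate happens to cover.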
AIMS (Adaptive and Interactive Modeling System [Lu and Tcheng, 1991]) integrates machine learning and optimization techniques to aid engineering decision making. Conceptual clustering programs [Stepp, 1984; Fisher, 1985; Cheeseman et al., 1988] produce partitions superior in comprehensibility to those discovered via numerical clusterings. However, the mathematical relationships that exist between attributes of the events are not used to partition the events. The MDL principle has been used to infer decision trees [Quinlan and Rivest, 1989], for image processing [Pednault, 1989; Pentland, 1989; Leclerc, 1989], and for learning concepts from relational data [Derthick, 1991].

The KEDS System

Given: A training data set with n data points, E = (e1, e2, ..., en). Each data point e has a response (or dependent) attribute y, and P predictor (or independent) attributes, x = x1, x2, ..., xP. All of y and the xi are real-valued and continuous variables (although nominal variables are acceptable as well).

Goal: From E, build a model Q that predicts the value of y from any given set of values of x: ŷ = Q(x).

In addition to the above data set, KEDS is provided with some generalized knowledge about the relationships that we expect to find in engineering domains. KEDS generates models of the form y = F(x), where F is a model chosen from a collection of parameterized polynomial models provided to KEDS. For the experiments in this paper we have considered five different families of parameterized models: y = a, y = ax + b, y = ax² + bx + c, y = ax1 + bx2 + c, and y = ax1 + bx2 + cx3 + d, where the x's are the nominal parameters of F (the names of the real-valued predictor variables xi that are assigned to F), and a, b, ... are the real-valued parameters (coefficients discovered by KEDS). The equation templates provided to KEDS correspond to the basis functions used in statistical methods like linear regression.
Since the domain can be non-homogeneous, the underlying function that generated the data is assumed to be of the form {R1: y = F1(x) + μ(σ), R2: y = F2(x) + μ(σ), ...}, where μ(σ) represents a 0-mean Gaussian process and Ri is a region in predictor space.

The KEDS algorithm consists of two phases: discovery and partitioning. The partitioning is model driven and is based upon the relationships that are discovered from the data, while the discovery process is restricted within the boundaries of the regions created by the partitioning. In the initial discovery phase, an equation template F is chosen and the data set E is sampled to determine the coefficients of a candidate equation, f. This is illustrated in Figure 1(a) for the template y = ax + b, where two separate candidate equations are created (based on two distinct instantiations of the initial discovery phase). Note that the horizontal axis in Figure 1 is multi-dimensional, representing the P-dimensional space of predictor variables.

In the partitioning phase, the deviation Δyi between the actual and predicted value for each data point ei is converted into a positivity score via a score function. This is illustrated in Figure 1(b) for a very simple score function, where all data points with deviation Δyi ≤ ε are assumed to be positive examples of f with a score of 1, and all other data points are negative examples with a score of 0. The training data set E is partitioned over the space of predictor variables to isolate regions with a low error (i.e., high score), as in Figure 1(c).

1. For a template F with k unknowns, repeat Step 2 below N(F, k, δ) (see Equation 1) times.
2. Initialize S to k random samples from the data set E.
3. Discovery: determine the k coefficients of f from S.
4. Partitioning:
   (a) For all examples in E, compute positivity-score(f, ei) = exp(-(Δyi)²/2σ²).
   (b) Partition E to return regions R with high score.
   (c) For all Ri (subject to m): set S to the events e ∈ Ri.
5. Return to Step 3 and refine f, until f ceases to improve.

Figure 2: The KEDS algorithm

The score for a region is the average of the scores of the data points in the region. The discovery phase is then invoked in each region to refine the candidate equation. Since we are using polynomial equations, we can perform multi-variable regression over the events in the region to improve the values of the coefficients. The KEDS algorithm is summarized in Figure 2.

KEDS uses three parameters to control its search for formulae. The accuracy parameter is a score function that converts the deviation into a positivity score. The score function creates a transformation of the data set, wherein the score associated with each data point measures the likelihood that the data point is correctly predicted by the candidate equation. The simple score function shown in Figure 1(b) was used in [Rao et al., 1991]. The score function used here, exp(-(Δyi)²/2σ²), is the normalized probability density function for a normal distribution (see Figure 2).

The size parameter, m, is the minimum fraction of the events in the data set that must be described by an equation. For the two regions A and B found in Figure 1(c), region B is rejected because it covers fewer events than permitted by m. The cover of the formula f is modified to (the set of events in) region A. Note that the regions A and B are hypercubes in the P-dimensional predictor space. The choice of PLS [Rendell, 1983] as the partitioning algorithm imposes the constraint that these regions be defined by conjunctive concepts over the discriminant attributes.

The confidence parameter, δ, controls the number of times, N(F), that the KEDS algorithm is called for a given template F.
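The Gaussian score function used in Figure 2, and the region score as an average over the covered points, can be written directly; the function names are our own:

```python
import math

# The score function from Figure 2: exp(-(dy)^2 / (2 * sigma^2)).
# A zero deviation scores 1; larger deviations decay toward 0.
def positivity_score(dy, sigma):
    return math.exp(-(dy ** 2) / (2 * sigma ** 2))

# A region's score is the average score of the data points it contains.
def region_score(deviations, sigma):
    return sum(positivity_score(dy, sigma) for dy in deviations) / len(deviations)
```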
If all k data points are chosen from a region associated with an equation, experiments with synthetic data sets with added Gaussian noise have shown that KEDS quickly converges to the correct equation and region. KEDS finds regions that cover an m fraction of the events with a (high) probability (1 - δ). If a given template F has k unknown coefficients, then it follows that the total number of times, N(F), that the KEDS algorithm needs to be instantiated for F (Step 1 in Figure 2) is:

N(F) ≥ log δ / log(1 - m^k)    (1)

According to the Minimum Description Length principle [Rissanen, 1983], the theory that best accounts for a collection of observations E = {e1, e2, ..., en} is the one that yields the shortest description [Rissanen, 1985; Barron, 1984]. Therefore, the best theory to induce from a set of data will be the one that minimizes the sum of the length of the theory and the length of the data when encoded using the theory as a predictor for the data. Let Q be a model created from the region-equation pairs discovered by KEDS. Then the model that will be the best predictor of yet unseen data will be the model Q that minimizes the length:

L(Q, E) = L(Q) + L(E|Q)    (2)

where L(Q) is the number of bits required to encode the model Q, and L(E|Q) is the number of bits needed to encode the difference between the training data set and the values predicted for the events in E by Q. The L(Q) term in Equation 2 corresponds to a complexity penalty (increasingly complex models will require a greater number of bits to encode) and the L(E|Q) term corresponds to an error penalty. These two terms create a balance between modeling the data accurately and overfitting the data.

There are different encoding techniques that can be used to encode the total code length L(Q, E). It is important that these coding techniques be efficient.
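The restart bound of Equation 1 is straightforward to evaluate: each k-point sample falls entirely inside a target region with probability m^k, so N(F) trials fail with probability at most (1 - m^k)^N ≤ δ. A minimal sketch (function name ours):

```python
import math

# Equation 1: restarts needed so that, with probability (1 - delta), at
# least one k-point sample lands entirely inside a region covering an
# m fraction of the data (each trial succeeds with probability m**k).
def num_instantiations(m, k, delta):
    return math.ceil(math.log(delta) / math.log(1 - m ** k))

# m = 0.5, k = 2: success probability 0.25 per trial, so 11 restarts
# suffice for 95% confidence.
```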
An inefficient method of coding the model will penalize larger models too heavily, and will result in the selection of smaller models with large error. Similarly, an inefficient method for coding the difference term L(E|Q) will result in complex, overly large models being selected.

Encoding the Model - L(Q)

Consider the simple case where the model is composed of exactly one equation f (i.e., Q = f). As we are working with a fixed family of models, such that the description of the family does not depend upon E or the parameters used in the models, L(f) is the cost of encoding the family F of f, plus the cost of encoding the values of the parameters in f. For the experiments in this paper, we have considered five families of models (Section 2). Therefore, the cost of encoding F is log 5 bits (all logarithms in this paper are base 2). If F takes w predictor variables and all the P predictor variables can be assigned to F with equal probability, then encoding the nominal parameters of f costs log C(P, w) bits. Each of the v real-valued coefficients of f requires ½ log n bits [Rissanen, 1983] to encode. An additional ½ log n bits are needed to encode σ², the value of the variance used later in Equation 5 to calculate the difference term.

L(f) = log 5 + log C(P, w) + ((v + 1)/2) log n    (3)

As the domain is non-homogeneous, Q is a piecewise-continuous model that is a collection of region-equation pairs. If Q is divided into r regions [R1, ..., Rr] and a unique equation fj is associated with each region Rj, then the cost of encoding Q is

L(Q) = λ(r) + Σ_{j=1}^{r} [L(Rj) + L(fj)]    (4)

where λ(r) is the number of bits required to encode the integer r. According to the Elias-Cover-Rissanen prior for integers, the number of bits needed to encode r is approximately log*(r + 1) [Rissanen, 1983]. However, this encoding is optimal only for relatively large integers, and is inefficient for the small number of regions that occur in Q.
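Equation 3 can be evaluated directly, using the base-2 logarithms the text specifies; the function name is our own:

```python
import math

# Equation 3 (sketch): bits to encode one equation drawn from the five
# template families, using w of the P predictors, with v real-valued
# coefficients, fit to n data points; the "+1" covers the encoded variance.
def equation_bits(P, w, v, n):
    return (math.log2(5)                      # which of the 5 families
            + math.log2(math.comb(P, w))      # which predictors are assigned
            + 0.5 * (v + 1) * math.log2(n))   # coefficients plus variance
```

For example, y = ax + b over one of 4 predictors (w = 1, v = 2) fit to 1024 points costs log 5 + log 4 + (3/2) * 10 bits.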
It is more efficient to use a fixed number of bits to encode r. Note that when comparing two models, the λ(r) term cancels out and can be dropped from Equation 4.

A region R is a hypercube in the space of the predictor variables and is defined by a collection of I intervals on some, none, or all of the P predictor variables. Instead of using the Elias-Cover-Rissanen prior to encode I, it is more efficient (for small P) to use P bits to indicate the presence or absence of each of the predictor variables. An interval over the variable zj can be written as [loj < zj < hij]. While it is possible to encode both the interval end points loj and hij, we note that for an arbitrary interval either loj or hij could be the boundary value for variable zj. We can take advantage of this and significantly reduce the code length for intervals that begin or end at a boundary value, while increasing it slightly for other intervals. To encode the three different possibilities (i.e., either loj or hij is a boundary value, or neither is) costs log 3 bits. If L is the number of bits to encode an interval end-point that is not a boundary value, an additional L or 2L bits are needed to encode the interval completely. Encoding the interval end-points with full precision costs log n bits, but this is obviously too costly as we only use (1/2) log n bits to encode the parameters for f. Encoding with reduced precision is used to get a smaller cost for L.

Encoding the Difference Term - L(E|Q). Again, consider the case when Q is composed of a single function f. To encode the data points in E, the difference between the actual value of the response variable y and the value ŷ predicted by the model Q is analyzed statistically by viewing the difference as a random process. This random process, along with the model Q, induces a probability distribution p on the data points, where p is the maximum likelihood function.
From the Shannon coding measure in information theory [Gallager, 1968], we can encode the data points using a number of bits equal to the negative log likelihood (-log p(E)). When Q is a collection of regions, Equation 5 (below) is applied to each region.

L(E|Q) = (n/2) log(2πσ²) + (log e / 2σ²) Σ_{ei ∈ E} (yi - ŷi)²    (5)

Encoding the Exceptions. An individual (R, f) pair can be encoded by Equations 4 and 5. Our goal is to evaluate the entire model, and yet somehow select equations one at a time rather than having to evaluate the model only after it is completely formed. We do this by dividing Q into two models: Qeqn, consisting of all the equations chosen in Q, and Qexc, consisting of all the exceptions in Q (i.e., the events in E not covered by the equations in Qeqn). Qexc is encoded by fitting the exceptions to an averaging model (the mean value of the exceptions). The code length of the averaging model is computed using Equation 4, while Equation 5 is used to calculate the difference term for the exceptions. The code length L(Qexc) is a measure of the randomness of the exceptions.

Estimating the Variance. The nearest neighbor variance is calculated by using the nearest neighbor of a point in a normalized predictor variable space as the best estimate ŷ. The true variance of E is assumed to be σ² = σ²_nn/2 [Stone, 1977; Cover, 1968]. Note that σ must be estimated separately for each region (the only assumption made is that the variance is constant within a region). Encoding the value of σ accounts for the extra (1/2) log n bits in Equation 3.

Once a region has been modeled by an equation f, the deviation from f itself can be used to calculate σ. Then the cost of encoding the difference term for a region (from Equation 5) reduces to L(Ri|fi) = ni log(σ√(2πe)). This seems counter-intuitive at first; for low values of σ, the code length L(Ri|fi) is negative. However, the σ² term in Equation 5 should actually be a σ²/γ² term, where γ is the precision used in representing the data.
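The two code lengths of Equations 3 and 5 can be sketched directly, in base-2 logarithms; the function names and the explicit sigma2 argument are illustrative choices, not from the paper:

```python
import math

def model_cost_bits(P, v, u, n):
    """L(f), Equation 3: log 5 bits for the family, log(P choose v) for
    the choice of predictor variables, and (u+1)/2 * log n for the u
    real-valued coefficients plus the variance sigma^2."""
    return math.log2(5) + math.log2(math.comb(P, v)) \
         + (u + 1) / 2 * math.log2(n)

def difference_cost_bits(residuals, sigma2):
    """L(E|Q), Equation 5: Gaussian negative log-likelihood of the
    residuals y_i - yhat_i, expressed in bits."""
    n = len(residuals)
    return (n / 2) * math.log2(2 * math.pi * sigma2) \
         + (math.log2(math.e) / (2 * sigma2)) * sum(r * r for r in residuals)
```

When sigma2 is set to the maximum-likelihood estimate (the mean squared residual), difference_cost_bits reduces to n·log(σ√(2πe)), the compact per-region form used in the text.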
Obviously γ < σ, and log(σ/γ) will always be positive. Note that when comparing two candidate equations, the log(1/γ) term cancels out and therefore can be ignored. The value for the code length L(E|Q) is simply a relative rather than an absolute value.

The KEDS-MDL Algorithm. The KEDS-MDL algorithm (summarized in Figure 3) calls KEDS iteratively and uses MDL as an evaluation metric to select the best candidate at the end of each loop. The total available resources N(F) for each template F are calculated via Equation 1, and are divided among I iterations. At the end of each iteration, the candidate that minimizes the description length L(Q) of the data set is added to Qeqn, and the data points within the region are removed from Qexc. This continues until no exceptions remain or the description length cannot be reduced. For the purposes of this paper, the available resources N(F) are divided equally among the I iterations.

720 Representation and Reasoning: Qualitative Model Construction

Table 1: Predictor & response variables in OTTONET (IC engine simulator)

1. Qeqn = {}; Qexc = {E}; determine Ni(F), i ≤ I.
2. For each F, call KEDS Ni(F) times.
3. For each (R, f): Qexc = Qexc - R, Qeqn = Qeqn + f; compute L(Q).
4. Select the (R, f) that minimizes L(Q).
5. Return to (2) unless Qexc = {} or I iterations are done.

Figure 3: The KEDS-MDL algorithm

Ottonet [Assanis, 1990], a simulator for an internal combustion engine, was used to provide the data for KEDS. The predictor (input) and response (output) variables for the internal combustion engine domain are shown in Table 1. The input variables were randomly varied over the ranges shown in Table 1 to generate 300 events, 50 of which were randomly set aside to create a test set. The remaining 250 events were used as a training data set for the two sets of experiments described below.

Experiment I: In the first series of experiments, the parameters were set at m = 0.3 and δ = 0.1. Three separate experiments were run.
(a) KEDS was run in a breadth-first fashion and the results were combined at the end to produce a model. (b) KEDS-MDL was run with the same resources as in I.a. (c) KEDS-MDL was run again with the available resources reduced by a factor of 10. In the KEDS-MDL experiments (I.b and I.c), the available resources were divided equally among four iterations (i.e., Ni(F) = N(F)/4, i ≤ 4).

The models were used to predict the attribute values of the test events (the test events were not part of the data set seen by KEDS). The models were evaluated in terms of the predictive error and the model creation time (in seconds on a DECstation 5000). The results are summarized in Table 2. Note that even with the same available resources, KEDS-MDL (Experiment I.b) took less time than Experiment I.a. This is because the limitation on available resources refers to the maximum number of times N(F) that the KEDS algorithm may be called. Although on the surface the KEDS algorithm appears to be independent of the number of data points n, the discovery and partitioning steps (Steps 3 and 4 in Figure 2) depend heavily on n. As KEDS-MDL discovers regions and adds them to Qeqn, n decreases on each iteration.

What is also very interesting is that even when provided with limited resources (one-tenth that available in I.a and I.b), KEDS-MDL learned models that were extremely competitive with those that had been learned using full resources. This indicates that the MDL metric is effective even with extremely limited resources.

Experiment II: This series of experiments was designed to compare the performance of different metrics for model discovery.
Using the same limited amount of resources available in I.c above, a second series of experiments was run with the following four metrics for model selection: (a) MDL (identical to I.c); (b) accuracy, where the most accurate equation was chosen; (c) size, where the region that covered the maximum number of events was chosen; and (d) goodness (= accuracy × size). In each experiment the appropriate metric was used in place of MDL in Steps 3 and 4 in Figure 3 to choose a region-equation pair in each iteration.

The generated models were evaluated for predictive error for all the response variables. The results are presented in Table 3. As can be seen, KEDS-MDL outperformed the KEDS algorithm using the other metrics. Below is the model created for the response variable ηvol in Experiment II.c (using KEDS-MDL with limited resources):

[CR > 7.2] [PI > 0.573] ⇒ ηvol = 0.63 CR + 18.78 PI + 75.3
ELSE [PI < 0.573] ⇒ ηvol = 1.88 CR + 50.03 PI - 1.68 Φ + 48.88

Conclusions. In this paper, we have defined an encoding schema for the models discovered by KEDS, and demonstrated that MDL can be used as an evaluation metric to efficiently acquire models from complex non-homogeneous domains. In the future we intend to apply KEDS-MDL to other engineering domains, such as modeling the delay of a VLSI circuit. KEDS will be enhanced to consider domain knowledge other than equation templates (for example, analyzing the topology of a VLSI circuit to determine various possible critical paths). The KEDS-MDL algorithm was motivated by our overall goal of developing a methodology to support engineering decision making. Under this methodology, called inverse engineering, the models discovered by KEDS-MDL will be used to directly support synthesis activities in engineering decision-making. This will greatly reduce design iterations and take advantage of the design expertise already captured in computer-based analysis tools.
        (a) Breadth-first       (b) KEDS-MDL            (c) KEDS-MDL (1/10)
Var     Error     RunTime (s)   Error     RunTime (s)   Error     RunTime (s)
ηt      0.00639   4701          0.00312   1411          0.00685   89
ηcet    0.00765   1956          0.00724   572           0.00838   237
ηvol    0.00511   5151          0.00365   1222          0.00395   168

Table 2: Experiment I: Comparing KEDS (breadth-first) with KEDS-MDL

Metric    Error: ηt    Error: ηcet    Error: ηvol

Table 3: Experiment II: Predictive error for the different metrics

Acknowledgments

We are grateful to Jorma Rissanen for his comments and suggestions. Our thanks also go to Dennis Assanis for the OTTONET data, to Robert Stepp and Bradley Whitehall for help while developing KEDS, and to Edwin Pednault for introducing us to minimum length encoding.

References

Assanis, D.N. 1990. OTTONET. Technical Report, Department of Mechanical and Industrial Engineering, University of Illinois, Urbana, IL.
Barron, A.R. 1984. Predicted squared error: A criterion for automatic model selection. In Farlow, S.J. (ed), Self-Organizing Methods in Modeling. Marcel-Dekker. 87-103.
Breiman, L.; Friedman, J.H.; Olshen, R.A.; and Stone, C.J. 1984. Classification and Regression Trees. Wadsworth.
Cheeseman, P. et al. 1988. Bayesian classification. In Proceedings of the Seventh National Conference on Artificial Intelligence, St. Paul, MN. 607-611.
Cover, T.M. 1968. Estimation by the nearest neighbor rule. IEEE Transactions on Information Theory IT-14:50-55.
Derthick, M. 1991. A minimal encoding approach to feature discovery. In Proceedings of the Ninth National Conference on Artificial Intelligence. 565-571.
Falkenhainer, B.C. and Michalski, R.S. 1986. Integrating quantitative and qualitative discovery: The ABACUS system. Machine Learning 1(4):367-401.
Finger, S. and Dixon, J.R. 1989. A review of research in mechanical engineering design. Part I: Descriptive, prescriptive, and computer-based models of design processes. Research in Engineering Design 1(1):51-67.
Fisher, D.H. 1985.
A proposed method of conceptual clustering for structured and decomposable objects. In Proceedings of the Second International Workshop on Machine Learning. 38-40.
Friedman, J.H. 1991. Multivariate adaptive regression splines. Annals of Statistics.
Gallager, R.G. 1968. Information Theory and Reliable Communication. John Wiley & Sons.
Kadie, C.M. 1990. Conceptual set covering: Improving fit-and-split algorithms. In Proceedings of the Seventh International Conference on Machine Learning, Austin. 40-48.
Langley, P.; Simon, H.A.; Bradshaw, G.L.; and Zytkow, J.M. 1987. Scientific Discovery: Computational Explorations of the Creative Processes. MIT Press.
Leclerc, Y.G. 1989. Constructing simple stable descriptions for image partitioning. International Journal of Computer Vision 3(1):73-102.
Lu, S.C-Y. and Tcheng, D.K. 1991. Building layered models to support engineering decision making: A machine learning approach. Journal of Engineering for Industry, ASME Transactions 113(1):1-9.
Pednault, E.P.D. 1989. Some experiments in applying inductive inference principles to surface reconstruction. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. 1603-1609.
Pentland, A. 1989. Part segmentation for object recognition. Neural Computation 1:82-91.
Quinlan, J.R. and Rivest, R.L. 1989. Inferring decision trees using the minimum description length principle. Information and Computation 80:227-248.
Quinlan, J.R. 1986. Induction of decision trees. Machine Learning 1(1):81-106.
Rao, R.B. and Lu, S.C-Y. 1992. KEDS: A Knowledge-based Equation Discovery System for engineering problems. In Proceedings of the Eighth IEEE Conference on Artificial Intelligence for Applications. 211-217.
Rao, R.B.; Lu, S.C-Y.; and Stepp, R.E. 1991. Knowledge-based equation discovery in engineering domains. In Proceedings of the Eighth International Workshop on Machine Learning. 630-634.
Rendell, L.A. 1983.
A new basis for state-space learning systems and a successful implementation. Artificial Intelligence 20(4):369-392.
Rissanen, J. 1983. A universal prior for integers and estimation by minimum description length. Annals of Statistics 11(4):416-431.
Rissanen, J. 1985. Minimum description length principle. In Kotz, S. and Johnson, N.L. (eds), Encyclopedia of Statistical Sciences, Vol. 5. John Wiley & Sons. 523-532.
Stepp, R.E. 1984. Conjunctive Conceptual Clustering: A Methodology and Experimentation. Ph.D. Dissertation, Department of Computer Science, University of Illinois, Urbana, IL.
Stone, C.J. 1977. Consistent nonparametric regression (with discussion). The Annals of Statistics 5:595-645.
Automatic Abduction of Qualitative Models

Bradley L. Richards
Dept. of Computer Sciences
University of Texas
Austin, Texas 78712
bradley@cs.utexas.edu

Ina Kraan
Dept. of Artificial Intelligence
University of Edinburgh
80 South Bridge
Edinburgh EH1 1HN, Scotland, UK
inak@ai.ed.ac.uk

Benjamin Kuipers
Dept. of Computer Sciences
University of Texas, Austin
Austin, Texas 78712
kuipers@cs.utexas.edu

Abstract

We describe a method of automatically abducing qualitative models from descriptions of behaviors. We generate, from either quantitative or qualitative data, models in the form of qualitative differential equations suitable for use by QSIM. Constraints are generated and filtered both by comparison with the input behaviors and by dimensional analysis. If the user provides complete information on the input behaviors and the dimensions of the input variables, the resulting model is unique, maximally constrained, and guaranteed to reproduce the input behaviors. If the user provides incomplete information, our method will still generate a model which reproduces the input behaviors, but the model may no longer be unique. Incompleteness can take several forms: missing dimensions, values of variables, or entire variables.

1 Introduction

Qualitative simulation of physical systems provides researchers with insights by giving an overview of system behaviors without the deluge of detail inherent in quantitative simulation. Perhaps even more important, it may be possible to develop a qualitative simulation where developing a quantitative one would be impossible due to inexact knowledge of the system's internal workings. But even with the power of qualitative simulation systems like QSIM ([Kuipers, 1986, 1989]), developing qualitative models remains something of an art. For this reason, many researchers are investigating automatic model building. The most common approach is to construct models from given model fragments ([Forbus, 1984], [Forbus, 1986], [de Kleer and Brown, 1984], and [Crawford, Farquhar, and Kuipers, 1990]).
In this paper, we present MISQ, a method for building models purely from behavioral information. Given some or all of the behaviors exhibited by a particular system, we abduce a model which reproduces those behaviors. If the user provides sufficient information on the input behaviors, the resulting model is unique, maximal, and correct in the sense that it reproduces the input behaviors. The models we build are qualitative differential equations (QDEs) suitable for use by QSIM. The QSIM framework provides explicit functions, landmarks, and corresponding values, all of which are critical to the success of MISQ. A weaker framework such as confluences would be inadequate.

MISQ was first implemented as a special-purpose system, and a preliminary description appeared in [Kraan, Richards, and Kuipers, 1991]. Since then, MISQ has been reimplemented as a domain theory within the general-purpose learning system Forte [Richards and Mooney, 1991]. Relational pathfinding, described in [Richards and Mooney, 1992], allows MISQ to infer the existence of missing variables.

The remainder of this paper is organized as follows: Section 2 provides an overview of our method of model building. Section 3 proves the theorem central to the correctness of our approach. Section 4 gives detailed examples of models we constructed automatically. Section 5 presents our method for introducing new variables into a model. Section 6 summarizes related work. Finally, Section 7 gives our conclusions and suggests directions for further research.

2 Overview

The model generation process is broken into three major phases. In the first phase, if we are given quantitative data, we convert it into qualitative behaviors. It is also possible to input qualitative behaviors directly. In the second phase, we generate and test individual constraints, creating constraints consistent with the input behaviors. In the third phase, we construct models (QDEs) from the set of constraints generated in the second phase.
If the models are not connected, i.e., they consist of independent sub-models, we use relational pathfinding to search for variables that connect the sub-models.

Conversion of quantitative data. We can execute on two forms of quantitative input: high-resolution sensor data or hand-generated quantitative behaviors. If the input is high-resolution sensor data, we convert the data to the required numeric precision and align events which occur in different variables at insignificantly different times. We then discard all but the "interesting" points in the data, i.e., points where some variable reaches a maximum, a minimum, or zero. Hand-generated quantitative behaviors are the analog of processed sensor data: quantitative behaviors which include only interesting time points.

Richards, Kuipers, and Kraan 723
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

The quantitative behaviors are converted into qualitative behaviors. For each variable at each time point, the quantitative value is turned into a qualitative value consisting of a qualitative magnitude and a direction-of-change. The qualitative magnitude is constructed by generating a landmark value and, if it is new, inserting it into the qspace constructed so far. The direction-of-change is determined by comparing the numeric value of the variable at the current time point with those of the preceding and subsequent time points. Further, we add qualitative states to behaviors as needed. If, for example, a variable is at a minimum at one time point and at a maximum at the next, the qualitative state for the interval during which the variable is increasing is added to the behavior.

Constraint generation. The input to the second phase is a set of consistent qualitative behaviors, the landmark values (qspaces) of the initial state, and dimensional information.
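The conversion that produces these qualitative behaviors from a numeric trace can be sketched roughly as follows; this is a hypothetical rendering of the interesting-point and direction-of-change steps described above, not MISQ's actual code:

```python
def interesting_points(values):
    """Indices of 'interesting' time points: local extrema, zeros, and
    the two endpoints of the trace."""
    keep = [0]
    for i in range(1, len(values) - 1):
        prev, cur, nxt = values[i - 1], values[i], values[i + 1]
        is_extremum = (cur - prev) * (nxt - cur) < 0  # slope changes sign
        if is_extremum or cur == 0:
            keep.append(i)
    keep.append(len(values) - 1)
    return keep

def direction_of_change(values, i):
    """Direction at point i, from the preceding and subsequent values."""
    prev = values[i - 1] if i > 0 else values[i]
    nxt = values[i + 1] if i + 1 < len(values) else values[i]
    if nxt > values[i] >= prev:
        return "inc"
    if nxt < values[i] <= prev:
        return "dec"
    return "std"   # constant, or momentarily steady at an extremum

# A variable that rises to a maximum and falls back through zero:
trace = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0]
print(interesting_points(trace))   # → [0, 2, 4, 5]
```

Each retained value then becomes a landmark in the variable's qspace, paired with its direction-of-change.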
In the first step of this phase, we select an arbitrary behavior and generate all constraints satisfied by any combination of variables. This is done by generating tuples of variables and testing their values against the satisfaction conditions for each constraint type. We do not generate tuples that lead to immediately redundant constraints, e.g., (M+ x y) and (M+ y x). We currently implement the following constraint types: arithmetic constraints (add, mult, and minus), differential constraints (d/dt), functional constraints (M+ and M- for strictly monotonically increasing and decreasing functions), and direction-of-change constraints (constant). These constraints are a subset of the constraints provided in QSIM. Nevertheless, they are expressive enough to build many interesting qualitative models.

The satisfaction conditions are similar to those in QSIM, though somewhat simpler. Since the input behaviors are assumed to be correct, we need not check the continuity criteria from a state to its successor. And, since the satisfaction criteria within a state are the same for time points and intervals, we need not distinguish between time points and intervals. The constraint satisfaction criteria are based on the magnitudes, signs, directions of change, and corresponding values of the variables; these criteria are defined in detail in [Kuipers, 1986]. For example, the constraint (M+ x y) is satisfied if the directions of change of x and y (expressed as increasing, decreasing, or steady) are always identical, and there are no conflicting corresponding values. If, for instance, there are corresponding values at (x1 y1) and (x1 y2), and y1 and y2 are known to be distinct values, the relationship between x and y cannot possibly represent a function. The constraint (d/dt x y) is satisfied if the direction-of-change of x is increasing, decreasing, or steady and the sign of y is +, -, or 0, respectively.
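A minimal sketch of these two satisfaction tests, with each state of a variable represented as a (sign, direction) pair; this is a hypothetical rendering (the corresponding-value check for M+ is omitted), not MISQ's code:

```python
# Each behavior is a list of states; a state holds (sign, dir) for one
# variable, with sign and dir in {-1, 0, +1}
# (negative/zero/positive magnitude, decreasing/steady/increasing).

def satisfies_m_plus(xs, ys):
    """(M+ x y): x and y must have identical directions of change in
    every state (the corresponding-value check is omitted here)."""
    return all(dx == dy for (_, dx), (_, dy) in zip(xs, ys))

def satisfies_ddt(xs, ys):
    """(d/dt x y): in every state, the direction of change of x must
    match the sign of y."""
    return all(dx == sy for (_, dx), (sy, _) in zip(xs, ys))

# Bathtub-like behavior: amount rises then levels off while netflow
# is positive and decreasing, finally reaching zero.
amount  = [(0, 1), (1, 1), (1, 0)]
netflow = [(1, -1), (1, -1), (0, 0)]
print(satisfies_ddt(amount, netflow))     # → True: (d/dt amount netflow)
print(satisfies_m_plus(amount, netflow))  # → False: directions conflict
```

A candidate constraint survives only if such a test succeeds on every state of every input behavior.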
Finally, we ensure that the dimensions of the variables in each constraint are compatible. If, for example, the constraint (d/dt x y) has been generated, MISQ will test whether the dimensions of x can be the dimensions of y divided by time. This ensures that the constraints are abstractions of equations potentially representing real physical systems. Functional constraints impose no a priori restrictions on the dimensions of their arguments.

Since we are working with qualitative data, dimensions are generally stated in terms of fundamental types like mass, time, or length rather than in units of measurement such as meters or grams. Since MISQ is only interested in the relationship between the dimensions of variables, users are free to define their own types, except for time. In some cases, the user may be able to reduce the number of constraints in the final QDE by making dimensions more specific. For example, if a system contains both oxygen and water, and the user knows that it makes no sense to combine amounts of oxygen and water, the user can use different dimensions, such as amount-of-oxygen and amount-of-water.

After generating all possible constraints from a single behavior, we test all remaining behaviors against these constraints, eliminating any constraint that violates any satisfaction condition.

Model generation. If the user provided complete information on the input behaviors, the set of constraints from the second phase forms a unique model guaranteed to reproduce the input behaviors (see Section 3). If, on the other hand, the user provides incomplete information, the set of constraints from the second phase may not form a valid model. In this case, further processing is required (see Section 4).
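The dimensional compatibility test can be sketched with exponent maps over fundamental types; this is a hypothetical rendering of the idea, not MISQ's internals:

```python
# Dimensions as {base_type: exponent} maps,
# e.g. mass/time -> {"mass": 1, "time": -1}.

def canon(d):
    """Drop zero exponents so equivalent dimensions compare equal."""
    return {k: v for k, v in d.items() if v != 0}

def ddt_compatible(dim_x, dim_y):
    """(d/dt x y): dim(x) must equal dim(y) * time, i.e. x is the
    integral of y over time."""
    expected = dict(dim_y)
    expected["time"] = expected.get("time", 0) + 1
    return canon(dim_x) == canon(expected)

mass_per_time = {"mass": 1, "time": -1}
mass = {"mass": 1}
print(ddt_compatible(mass, mass_per_time))   # → True, as in (d/dt amount netflow)
print(ddt_compatible(mass_per_time, mass))   # → False
```

An add or minus constraint would instead require all of its argument dimensions to be identical under the same canonical comparison.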
Variable    Initial qspace        Dimensions
Inflow      0, in1, ∞             mass/time
Outflow     0, ∞                  mass/time
Netflow     -∞, 0, net1, ∞        mass/time
Amount      0, ∞                  mass

State 0 (initial state)
Variable    Magnitude    Direction-of-Change
Inflow      in1          steady
Outflow     0            increasing
Netflow     net1         decreasing
Amount      0            increasing
successors: state 1

State 1
Variable    Magnitude    Direction-of-Change
Inflow      in1          steady
Outflow     (0, ∞)       increasing
Netflow     (0, net1)    decreasing
Amount      (0, ∞)       increasing
successors: state 2

State 2
Variable    Magnitude    Direction-of-Change
Inflow      in1          steady
Outflow     out1         steady
Netflow     0            steady
Amount      amount1      steady
successors: none

Figure 1: Qualitative behavior of the bathtub (this information can be automatically generated from high-resolution sensor data)

As an example of model generation with complete information, consider an empty bathtub with a finite capacity, a constant inflow, and a constant drain opening. This bathtub exhibits three behaviors: reaching equilibrium at a level below the top of the tub, reaching equilibrium exactly at the top, and overflowing. We simulated the bathtub using QSIM, and presented MISQ with a complete qualitative description of the behavior with an equilibrium point less than full (see Figure 1). MISQ abduced the exact QDE used to produce the behavior, with the addition of two redundant constraints:

(constant inflow)
(add outflow netflow inflow)
(M- outflow netflow)
(M+ outflow amount)
(M- netflow amount)
(d/dt amount netflow)

The redundant constraints are added since MISQ generates maximal QDEs. For example, since the M+ constraint is transitive, if M+ constraints hold between variables a and b and between b and c, MISQ would also include a redundant M+ constraint between variables a and c.

3 Correctness

A central feature of our method is that, given sufficient information on the input behaviors, it will generate a unique maximal QDE which is guaranteed to reproduce the input behaviors.
This further implies that, if the user presents all system behaviors as input, we will produce a correct system model. This section presents the essential definitions and proves this central feature.

Consistent set of behaviors. A set of behaviors is consistent if it (potentially) represents a real physical system. This can be summarized by two criteria. First, relationships among variables must be qualitatively consistent among behaviors. In other words, if two variables are related by some constraint, then this constraint must be the same in all behaviors (e.g., not M+ in one behavior and M- in another). Second, dimensions must make sense (as they must in a real system). We might imagine a system in which a variable and its derivative have the same dimensions, and create behaviors for the system, but such a system could never actually exist.

Complete description. A description of a behavior is complete if three criteria are met. First, all variables in the system are identified. Second, values of the variables are given for all time points and intervals. Finally, dimensions are given for all variables. We do not require that all behaviors of a system be given. However, specifying too few behaviors may result in a model which is too constrained to produce behaviors of the system which were not given as input. The more behaviors that are given, the more constraints may be eliminated, thus making it less likely that the resulting model will be overconstrained and more likely that it is the intended model.

Theorem. Given a complete description of a consistent set of behaviors, we will produce the most constrained QDE which reproduces those behaviors. Furthermore, this QDE is unique.

Proof of Theorem. Given a fixed set of variables, two sets of constraints on these variables C1 and C2, and the behaviors consistent with these constraints Beh(C1) and Beh(C2), the set of behaviors consistent with both sets of constraints is given by the relation

Beh(C1 ∪ C2) = Beh(C1) ∩ Beh(C2)    (1)

Given a complete and consistent set of input behaviors, we exhaustively generate all constraints which are individually consistent with the behaviors. If we combine these constraints using (1), the intersection of their behavior sets will include all input behaviors. Thus, a correct model exists. Since we generate all consistent constraints, the resulting QDE is maximally constrained and unique.

4 Model Building with Incomplete Information

Overview. The user may not always provide complete information. There are two ways in which behavioral information may be incomplete. First, entire variables may be missing from the behaviors. Second, information on variables may be partial in that their dimensions or some of their qualitative values are not given. Entire variables may be missing if the user does not know what set of variables is important to a system. Qualitative values may be omitted either when a variable is difficult to measure or when measurements are not available for all time points. Dimensions will generally be given for all variables specified by the user, but will be unavailable for variables created by MISQ.

If we have only partial information on some variables, constraints generated during the second phase of model building may be mutually inconsistent. We must eliminate these inconsistencies in order to generate a final model. Once we have a consistent model, we check whether the model forms a connected graph. If it does not, either the behaviors describe independent processes or an essential system variable is missing. In this case, we consider adding new variables to the model.

Missing Qualitative Values and Dimensions. When qualitative values or dimensions are left unspecified, some generated constraints may make incompatible assumptions about the missing values. MISQ resolves this incompatibility by dropping one or more of the conflicting constraints.
Since there is a choice of which constraints to delete, the resulting model is no longer unique. However, the model still reproduces the input behaviors.

One type of incompatibility arises with qualitative values. For example, suppose we have the constraints:

(d/dt a b)
(M+ a c)

At a particular time point, let the direction-of-change of a be unknown, the sign of b positive, and the direction-of-change of c decreasing. The constraints are mutually inconsistent, since the derivative constraint assumes the direction-of-change of a to be increasing, while the M+ constraint assumes it to be decreasing.

Incompatibilities arising from missing dimensions are detected by an analysis which ensures that a set of constraints makes sense as a model of a physical system. For example, even without any dimensional information, we know that the following constraints are inconsistent:

(d/dt a b)
(add a b c)

They are inconsistent because variables in an add constraint must have identical dimensions, but the dimensions of a variable and its derivative differ by a factor of 1/time.

As an example, we presented MISQ with the bathtub behavior in Figure 1, but with no dimensional information. MISQ generates six models. One of these is the desired model shown earlier. The others reflect the fact that, without dimensional information, MISQ is no longer able to distinguish between outflow and amount, as they are qualitatively indistinguishable in the specified behavior.

Missing Variables. Once we have a consistent set of constraints, we check to see whether they form a connected graph. If so, we consider our model complete. If not, there are two possibilities: we may be missing one or more variables, where the constraints associated with those variables would connect the model, or the behaviors may describe multiple independent processes. These two possibilities define a spectrum of choices.
At one extreme, we can choose to always consider the processes independent. At the other extreme, we can always generate some sequence of intermediate variables to connect any set of processes. In this spectrum, we have chosen the following position: we assume that the user has omitted only a small number of variables, and therefore only connect isolated parts of the model if we can do so by introducing at most one intermediate variable for each connection. Any portions of the model which cannot be connected in this way we consider independent.

New variables are added by a method called relational pathfinding, which is part of the general-purpose learning system Forte. We give a brief description of this method here; a complete description may be found in [Richards and Mooney, 1992].

Relational pathfinding provides a natural way to introduce new variables into a model. It is based on the assumption that relational concepts can be represented by one or more fixed paths between the constants that define an instance of the relation. In the case of qualitative modeling, we are looking for paths, composed of constraints, which will join model fragments into a coherent whole. The pathfinding method seeks to find these paths by successively expanding the paths leading from each known system variable. To expand a path, we try adding all possible constraints involving one new variable. The added constraint and existing variables restrict the possible behaviors of the new variable. We take the set of new variables generated for each model fragment and look for an intersection between them. An intersection occurs when two new variables have consistent restrictions placed on their behaviors. When we find an intersection, the intersection point becomes a new system variable and the constraints leading to it are added to the model.

While relational pathfinding potentially amounts to exhaustive exponential search, it is generally successful for two reasons.
First, by searching from all model fragments simultaneously, we greatly reduce the total number of paths explored before we reach an intersection. Second, we limit the length of the missing paths and hence the depth of search. An example of a model containing variables added by relational pathfinding is included in the following section.

5 Examples

We have run MISQ on a variety of common models, including the U-tube modeled by GOLEM [Bratko, Muggleton, and Vargek, 1991], a nonlinear pendulum, a system of two cascaded tanks, and a system of two independent bathtubs. The latter two are discussed in detail below. The U-tube consists of two tanks connected by a pipe at the bottom. GOLEM required one positive behavior, one hand-tailored positive timepoint, and six hand-generated negative timepoints. MISQ produced a correct model using only the positive behavior given to GOLEM. The nonlinear pendulum is a simple second-order system. MISQ produces a correct model given the first few states of a single damped behavior.

Cascaded tanks. Cascading two tanks so that the drain from one provides the inflow to the next provides a more complex system than the U-tube. We ran MISQ on various types of input:
-- qualitative, quantitative, and high-resolution data
-- with and without missing variables
A graph of the high-resolution data for the amount variables is shown in Figure 2. In all cases with complete dimensional information, MISQ produced the model in Figure 3, which is exactly the one we would expect. The constraints are:

(constant inflow-a)
(add outflow-a netflow-a inflow-a)
(add outflow-b netflow-b outflow-a)
(d/dt amount-b netflow-b)
(d/dt amount-a netflow-a)
(M+ amount-a outflow-a)
(M- amount-a netflow-a)
(M+ amount-b outflow-b)
(M- outflow-a netflow-a)

When we omitted system variables, we selected those that a user might realistically forget.
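For concreteness, a behavior like the high-resolution cascaded-tank data could be generated by a simple forward-Euler simulation consistent with the constraints above. The rate constants and step size here are our own illustrative choices, not the authors' data generator.

```python
# Minimal Euler simulation consistent with the cascaded-tank constraints
# (hypothetical parameters; outflow is taken proportional to amount,
# a simple instance of the M+ relations above).
def simulate_cascade(inflow_a=1.0, k_a=0.5, k_b=0.3, dt=0.01, steps=3000):
    amount_a, amount_b = 0.0, 0.0
    trace = []
    for _ in range(steps):
        outflow_a = k_a * amount_a          # (M+ amount-a outflow-a)
        outflow_b = k_b * amount_b          # (M+ amount-b outflow-b)
        netflow_a = inflow_a - outflow_a    # (add outflow-a netflow-a inflow-a)
        netflow_b = outflow_a - outflow_b   # (add outflow-b netflow-b outflow-a)
        amount_a += dt * netflow_a          # (d/dt amount-a netflow-a)
        amount_b += dt * netflow_b          # (d/dt amount-b netflow-b)
        trace.append((amount_a, amount_b))
    return trace

# Both amounts rise monotonically toward equilibrium
# (amount_a -> inflow_a/k_a, amount_b -> inflow_a/k_b).
trace = simulate_cascade()
```

A trace like this, converted to qualitative values, plays the role of the quantitative input behaviors discussed above.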
We supposed the user measured all the flows and amounts but did not realize that the calculated netflow for each tank would be important. We therefore provided MISQ with the same qualitative behaviors as above, but omitted the netflow variables.

726 Representation and Reasoning: Qualitative Model Construction

The standard model generation process, before relational pathfinding, produces the constraints:

(constant inflow-a)
(M+ amount-a outflow-a)
(M+ amount-b outflow-b)

Note that these constraints are not connected. Relational pathfinding finds the missing two variables and six constraints, and again produces the correct model.

Two tubs. As a test of our ability to identify independent processes, we presented MISQ with two behaviors of a system containing two independent bathtubs. The standard model generation process produces the model:

(M+ amount-a outflow-a)
(M+ amount-b outflow-b)
(d/dt amount-a netflow-a)
(d/dt amount-b netflow-b)
(add netflow-a outflow-a inflow-a)
(add netflow-b outflow-b inflow-b)

This model includes all constraints needed for the two tubs (note that neither inflow is constant). The model is not connected, and relational pathfinding tries to add new variables. It is unable to connect the two bathtubs with one intermediate variable, and the model remains unchanged.

6 Related Work

Machine learning. Our approach is similar to the generalizing half of the Version Space algorithm described in [Mitchell, 1982]. Mitchell presents a method of deriving logical descriptions from a series of examples. Given a set of examples of the concept of interest, Version Space constructs the most specific conjunctive expression which includes those examples. We construct the most constrained model (essentially a conjunction of constraints) which reproduces all the input behaviors.

Model building. GENMODEL [Coiera, 1989] is a system which constructs maximally constrained qualitative models from completely specified qualitative behaviors.
MISQ uses the same method to generate its initial set of constraints. However, MISQ generates fewer constraints, since it performs dimensional analysis. GENMODEL does not process quantitative behaviors, work with incomplete information, or perform dimensional analysis. In [Bratko, Muggleton, and Vargek, 1991], the learning system GOLEM is used to abduce qualitative models. Their method requires hand-generated negative information (i.e., examples of behaviors which the system does not exhibit), it does not completely implement the QSIM constraints (e.g., corresponding values are ignored), and it does not use dimensional information. The dimensional analysis MISQ performs is similar to [Bhaskar and Nigam, 1990], which uses dimensions to derive qualitative relations. However, [Bhaskar and Nigam, 1990] requires dimensions to be stated in terms of predefined fundamental types, whereas we allow dimensions to be user-defined or even to remain unspecified.

Figure 2. High-resolution quantitative data (amount in tanks: Amount A, Amount B).

Figure 3. Correct model for cascaded tanks.

[DeCoste, 1990] presents a system for maintaining a qualitative understanding of a dynamic system from continuous quantitative inputs, but begins with a qualitative model. [Hellerstein, 1990] discusses the process of obtaining quantitative predictions of system performance in the absence of exact knowledge of the target system. And [Forbus and Falkenhainer, 1990] combines quantitative and qualitative models to produce "self-explanatory simulations," which produce quantitative predictions along with qualitative explanations of overall system behavior. But, again, Forbus and Falkenhainer require system models as input and exploit the relationship between the quantitative and qualitative models, rather than deriving the qualitative model from the input data.

7 Conclusions

Model building can be a difficult and time-consuming task.
It can be simplified by automating some steps of the process. In this paper, we presented a method for automatically producing models from known behaviors. This approach is useful both in design and diagnosis. In design, researchers often want models to produce specified quantitative or qualitative behaviors; our method can eliminate the need to handcraft these models. In diagnosis, our method can derive a model which reproduces a faulty behavior. Comparing the model of the faulty behavior with the correct model may show where the system fault lies. The fact that we can work directly with the available quantitative information is particularly helpful in this context.

There are several promising directions for further research. First, our approach can be extended to include other types of constraints like the QSIM S and U constraints. Second, when MISQ is given incomplete information and generates many potential models, additional filters could eliminate some of the proposed models. These filters could make use of behaviors which should not be produced by the model. Forte is already capable of using this type of negative information. Third, inconsistent input behaviors may represent a system which is crossing a transition. Modeling such a system would require constructing multiple models connected by well-defined transitions. Lastly, MISQ represents an extreme, knowledge-free approach to model-building. If more knowledge is available, for example in the form of a view-process library, this knowledge should be usable to restrict the set of possible constraints. Similarly, MISQ could be integrated with qualitative systems which work with partial quantitative information; rather than converting quantitative inputs to a purely qualitative model, we could retain the quantitative information and pass it, along with the model, to a system like Q2 ([Kuipers and Berleant, 1988]).
Acknowledgements

This work has taken place in the Qualitative Reasoning Group at the Artificial Intelligence Laboratory, The University of Texas at Austin. Research of the Qualitative Reasoning Group is supported in part by NSF grants IRI-8905494, IRI-8904454, and IRI-9017047, by NASA grant NAG 9-512, and by the Texas Advanced Research Program under grant 003658-175.

References

R. Bhaskar and A. Nigam, "Qualitative Physics Using Dimensional Analysis," Artificial Intelligence, 45:73-111, 1990.
E. Coiera, "Generating Qualitative Models from Example Behaviors," Technical Report DCS 8901, Department of Computer Science, University of New South Wales, May 1989.
J. Crawford, A. Farquhar, and B. Kuipers, "QPC: A Compiler from Physical Models into Qualitative Differential Equations," Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp. 365-372, 1990.
D. DeCoste, "Dynamic Across-Time Measurement Interpretation," Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp. 373-379, 1990.
J. de Kleer and J. Brown, "A Qualitative Physics Based on Confluences," Artificial Intelligence, 24:7-83, 1984.
B. Falkenhainer and K. Forbus, "Compositional Modeling: Finding the Right Model for the Job," unpublished draft, 1990.
K. Forbus, "Qualitative Process Theory," Artificial Intelligence, 24:85-168, 1984.
K. Forbus, "The Qualitative Process Engine," Technical Report, Department of Computer Science, University of Illinois, 1986.
K. Forbus and B. Falkenhainer, "Setting Up Large-Scale Qualitative Models," Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI-88), pp. 301-306, 1988.
K. Forbus and B. Falkenhainer, "Self-Explanatory Simulations: An integration of qualitative and quantitative knowledge," Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp. 380-387, 1990.
J. Hellerstein, "Obtaining Quantitative Predictions from Monotone Relationships," Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp. 388-394, 1990.
I. C. Kraan, B. L. Richards, and B. Kuipers, "Automatic Abduction of Qualitative Models," Proceedings of the Fifth International Workshop on Qualitative Reasoning about Physical Systems, pp. 295-301, 1991.
B. Kuipers, "Qualitative Simulation," Artificial Intelligence, 29:289-338, 1986.
B. Kuipers, "Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge," Automatica, 25:571-585, 1989.
B. Kuipers, Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge, unpublished draft, 1990.
B. Kuipers and D. Berleant, "Using Incomplete Quantitative Knowledge in Qualitative Reasoning," Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI-88), pp. 324-329, 1988.
T. M. Mitchell, "Generalization as Search," Artificial Intelligence, 18:203-226, 1982.
B. L. Richards and R. J. Mooney, "Learning Relations by Pathfinding," Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI-92), 1992.

Notes

1. A behavior is a continuous time-ordered sequence of variable values.
2. We do not address issues of precision or noise. Our emphasis is qualitative model building, and in a realistic application we would expect sensor data to be pre-processed by a system designed to deal with these problems.
3. A qspace is a totally ordered set of landmarks. Landmarks are values which break the domain of a variable into qualitatively distinct intervals. For example, the qspace of the temperature of a pot of water might be (absolute-zero, freezing, boiling, infinity).
4. The constraint (add x y z) means x + y = z, (d/dt x y) means dx/dt = y, (M+ x y) means a strictly increasing monotonic function holds between x and y, and so forth.
Tom Bylander
Laboratory for Artificial Intelligence Research
Department of Computer and Information Science
The Ohio State University
Columbus, Ohio 43210
email: byland@cis.ohio-state.edu

Abstract

Korf (1985) presents a method for learning macro-operators and shows that the method is applicable to serially decomposable problems. In this paper I analyze the computational complexity of serial decomposability. Assuming that operators take polynomial time, it is NP-complete to determine if an operator (or set of operators) is not serially decomposable, whether or not an ordering of state variables is given. In addition to serial decomposability of operators, a serially decomposable problem requires that the set of solvable states is closed under the operators. It is PSPACE-complete to determine if a given "finite state-variable problem" is serially decomposable. In fact, every solvable instance of a PSPACE problem can be converted to a serially decomposable problem. Furthermore, given a bound on the size of the input, every problem in PSPACE can be transformed to a problem that is nearly serially-decomposable, i.e., the problem is serially decomposable except for closure of solvable states or a unique goal state.

Introduction

Korf (1985) presents a method of learning macro-operators (hereafter called MPS for the Macro Problem Solver system that applied the method), defines a property of problems called serial decomposability, and demonstrates that serial decomposability is a sufficient condition for MPS's applicability. However, several questions were left unanswered. How difficult is it to determine whether a problem is serially decomposable? How difficult is it to solve serially decomposable problems? Can MPS's exponential running time be improved? What kinds of problems can be transformed into serially decomposable problems? Subsequent work has not addressed these questions in their full generality. Chalasani et al.
(1991) analyze the "general permutation problem" and discuss how it is related to serial decomposability. In a permutation problem, the values of the state variables after applying an operator are a permutation of the previous values. Chalasani et al. show that this problem is in NP, but NP-completeness is open. Tadepalli (1991a, 1991b) shows how macro tables are polynomially PAC-learnable if the examples are generated using a macro table. In each case, a strong assumption (that operators are permutations or that a macro table generates the examples) is made. My results hold for serially decomposable problems in general.

Let a "finite state-variable problem" be a problem defined by a finite number of state variables, each of which has a finite number of values; a finite number of operators; and a unique goal state, i.e., an assignment of values to all state variables. A serially decomposable problem is a finite state-variable problem in which each operator is serially decomposable (the new value of a state variable depends only on its previous value and the previous values of the state variables that precede it), and in which the set of solvable states is closed under the operators (operator closure).

Assuming that operators take polynomial time, it is NP-complete to determine if an operator (or set of operators) is not serially decomposable, whether or not an ordering of state variables is given. It is PSPACE-complete to determine if a given finite state-variable problem is serially decomposable, primarily due to the difficulty of determining operator closure.¹ In fact, every solvable instance of a PSPACE problem can be transformed to a serially decomposable problem. Thus, MPS applies to all solvable instances of PSPACE problems, and MPS's exponential running time is indeed comparable to solving a single instance of a problem, assuming P ≠ PSPACE. Thus, any large improvement of MPS's running time is unlikely.
Furthermore, given a bound on the size of the input, every problem in PSPACE can be transformed to a problem that is "nearly serially-decomposable," i.e., serially decomposable except for either operator closure or a unique goal state. That is, the representation of any PSPACE problem with limited-size input can be changed to one in which the operators are serially decomposable, but some other property of serial decomposability is not satisfied. It should be noted that MPS is not guaranteed to work without operator closure or without a unique goal state. An open issue is determining the class of problems that can be efficiently transformed into "fully" serially-decomposable problems.

¹PSPACE is the class of problems solvable in polynomial space. PSPACE-complete is the set of hardest problems in PSPACE. All evidence to date suggests that PSPACE-complete is harder than NP-complete, which in turn is harder than P (polynomial time).

Bylander 729
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

These results show that serial decomposability is a nontrivial property to detect and achieve. Of special interest is that detecting serial decomposability of operators is relatively easy in comparison to detecting operator closure. Also, except for special cases (Chalasani et al., 1991), it is not clear how to transform a problem so that all properties of serial decomposability are attained. Another open question is the difficulty of determining whether an instance of a serially decomposable problem is solvable.

The above summarizes the technical results. The remainder of the paper recapitulates Korf's definition of serial decomposability, gives an example of how blocks-world operators can be transformed into serially decomposable operators, and demonstrates the new results.
Serial Decomposability

The following definitions are slightly modified from Korf (1985).²

A finite state-variable problem is a tuple (S, V, O, g) where: S is a set of n state variables {s1, s2, ..., sn}; V is a set of h values; each state s ∈ V^n is an assignment of values to S; O is a set of operators, each operator being a function from V^n to V^n; and g is the goal state, which is an element of V^n.

A serially decomposable problem is a finite state-variable problem (S, V, O, g) where each operator is serially decomposable, and the set of solvable states is closed under the operators (operator closure). Both conditions are defined below.

²I avoid Korf's definition of "problem" because every such "problem" is finite, and so conflicts with the usage of "problem" in computational complexity theory. His "problem" is equivalent to my "finite state-variable problem" with the additional constraint of operator closure. He also includes all solvable states in his definition of "problem," which could lead to a rather large problem description. I include "state variables" and their "values," instead. Finally, he assumes that every instance of a serially decomposable problem has a solution. I modify this assumption to operator closure.

An operator o is serially decomposable if there exist functions fi, 1 ≤ i ≤ n, such that

o(s1, s2, s3, ..., sn) = (f1(s1), f2(s1, s2), ..., fn(s1, s2, s3, ..., sn))

That is, the new value of each si is only dependent on the previous values of s1 through si.

An instance of a serially decomposable problem specifies an initial state s_init. A solution to such an instance is a finite sequence of operators (o1, o2, ..., ok) such that ok(...(o2(o1(s_init)))) = g. The same operator can be used multiple times within a sequence. A state s is solvable if there is a solution when s is the initial state.
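The definition above suggests a direct (exponential) test: the new value of variable i must be determined by the old values of variables 1 through i alone. A brute-force Python sketch for small problems, our illustration rather than Korf's or this paper's code:

```python
from itertools import product

def serially_decomposable(op, n, values):
    """Check an operator against a given variable ordering: for every i,
    any two states agreeing on variables 1..i must agree on the new
    value of variable i after the operator is applied."""
    states = list(product(values, repeat=n))
    for i in range(n):
        seen = {}  # prefix (s1..si) -> new value of variable i
        for s in states:
            prefix = s[:i + 1]
            new_val = op(s)[i]
            if seen.setdefault(prefix, new_val) != new_val:
                return False
    return True
```

For example, the swap operator (s1, s2) -> (s2, s1) fails the test because the new s1 depends on s2, while any operator whose ith component reads only variables 1..i passes.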
A serially decomposable problem satisfies operator closure if the set of solvable states is closed under the operators, i.e., o(s) is always a solvable state whenever s is.

Korf (1985) shows that a macro table exists for every serially decomposable problem, and describes the Macro Problem Solver (MPS) for learning macro tables. The definitions of macros, macro tables, and MPS are not essential to the results of this paper, though I should note that without operator closure or a unique goal state, MPS is not guaranteed to work. See Korf (1985) for more details.

Every serially decomposable problem is finite. With a finite number of state variables and a finite number of values, there are a finite number of states. So, to consider the asymptotic complexity of serial decomposability, it is necessary to consider the set of all serially decomposable problems. After an example of how the blocks-world problem relates to serial decomposability, I demonstrate the complexity of determining whether an operator is serially decomposable, the complexity of determining whether a finite state-variable problem is serially decomposable, and the transformation from PSPACE problems with limited-size inputs to problems that are nearly serially-decomposable.

Blocks-World Example

This section provides an example of the nature of my results and the proofs underlying them. The blocks-world problem is an example of a problem that does not have serially decomposable operators. This is because the dependencies from pre- to postconditions in operators have circularities. Any solvable blocks-world instance, however, can be transformed into a serially decomposable problem. The trick is to assert inconsistency if an operator is applied when its preconditions are false, and to permit the effects of an operator whether or not its normal preconditions are true.
Unfortunately, it is difficult to transform a set of blocks-world instances into a single problem without giving up operator closure or a unique goal state.

Consider the Sussman anomaly. In this blocks-world instance, there are three blocks A, B, and C. Initially C is on A, A is on the table, and B is on the table. The goal is to have A on B and B on C.

730 Representation and Reasoning: Temporal

For every block, let there be two variables to indicate whether the block is clear or on the table, respectively, e.g., clear-A and A-on-table. For every pair of blocks, let there be a variable to indicate whether they are on one another, e.g., A-on-B. Finally, include another variable ok. The initial state sets the block variables to 0 and 1 as appropriate. The goal is A-on-B = 1 and B-on-C = 1. As is well-known, the goal implies specific values for all other state variables. Now consider the following operator to stack A from B to C:

if A-on-B = 1 ∧ clear-A = 1 ∧ clear-C = 1
then A-on-B ← 0
     clear-B ← 1
     clear-C ← 0
     A-on-C ← 1

As given, this operator is not serially decomposable because both A-on-B and clear-C appear in the preconditions and are changed, i.e., they depend on each other. The complexity result for determining serial decomposability of operators depends on a nondeterministic way to find this kind of dependency. However, with an additional variable, this operator can be transformed into a serially decomposable operator. Let ok be the new state variable with ok = 1 in the initial and goal state. The transformed operator is:

if A-on-B = 0 ∨ clear-A = 0 ∨ clear-C = 0 then ok ← 0
A-on-B ← 0
clear-B ← 1
clear-C ← 0
A-on-C ← 1

If the original preconditions are true, then this operator has the effect of the previous operator; otherwise ok = 0 and the effect is an inconsistent state representation.
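In Python, the transformed operator might look like this. This is a hypothetical encoding (the state as a dictionary is our choice); the effects fire unconditionally, and only the ok flag is conditional on the original preconditions.

```python
# Hypothetical encoding of the transformed stack-A-from-B-to-C operator:
# instead of refusing to fire when preconditions fail, it fires anyway
# and clears the 'ok' flag, marking the state inconsistent.
def stack_A_from_B_to_C(state):
    s = dict(state)
    if not (s["A-on-B"] and s["clear-A"] and s["clear-C"]):
        s["ok"] = 0                 # preconditions violated
    s["A-on-B"] = 0                 # effects are applied unconditionally
    s["clear-B"] = 1
    s["clear-C"] = 0
    s["A-on-C"] = 1
    return s
```

Because ok reads the precondition variables but no effect reads ok, ordering ok after all other variables makes the dependencies flow in one direction, as the text explains next.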
The ordering that makes this serially decomposable is that ok comes after all other variables. If the other operators are similarly transformed, then the Sussman anomaly can be transformed into a serially decomposable problem. The unique goal state is the goal state of the Sussman anomaly. Operator closure can be achieved by including an operator that resets the other variables to the initial state, so that any state can reach the goal state. However, if this transformation is applied to an unsolvable blocks-world instance, then the reset operator ensures that solvable states can reach the unsolvable initial state, i.e., ensures that operator closure, and thus serial decomposability, is not satisfied. The complexity result for determining serial decomposability of a finite state-variable problem depends on this kind of transformation.

Can more than one blocks-world instance (i.e., with different initial states and goal states) fit into a single serially decomposable problem? No simple fix to the above formulation appears to work. If a set of state variables is used for remembering the initial state and encoding the goal state, a unique goal state can be achieved only if these variables are "erased." However, it is not possible to be sure that the goal state is reached before erasing these variables; that would violate serial decomposability. For example, the variables encoding the goal state would be used to set another state variable indicating that the goal state has been achieved, so the goal state variables must be ordered before the goal-state-achieved variable. However, serial decomposability means that erasing the goal state variables cannot depend on the value of the goal-state-achieved variable. The problem transformation result meets a similar fate. It is possible to ensure serially decomposable operators and either operator closure or a unique goal state, but not both.
Serial Decomposability of Operators

Let SD-OPERATOR-GIVEN be the problem of determining whether a given operator of a given finite state-variable problem is serially decomposable for a given ordering of state variables. Let SD-OPERATOR-ANY be the problem of whether a given operator is serially decomposable for any ordering of state variables. I assume that applying an operator takes polynomial time.

Theorem 1 SD-OPERATOR-GIVEN is coNP-complete.

Proof: First, I show that SD-OPERATOR-GIVEN is in coNP. To demonstrate that an operator is not serially decomposable, nondeterministically choose two states in which state variables s1 up to si have the same values, but after applying the operator, the two new states have different values for s1 to si, implying that some variable in s1 to si depends on some variable in si+1 to sn. If no such two states exist, then it must be the case that no s1 to si depends on si+1 to sn.

To show that this problem is coNP-hard, I reduce from SAT to SD-OPERATOR-GIVEN so that a yes (no) answer for the SD-OPERATOR-GIVEN instance implies a no (yes) answer for the SAT instance. Let F be a boolean formula. Let {u1, ..., um} be the m boolean variables used in F. Create an SD-OPERATOR-GIVEN instance as follows. Let S = {s0, s0', s1, ..., sm} be m + 2 state variables in order with V = {0, 1} as possible values. Now define an operator o as:

if ui = si, 1 ≤ i ≤ m, satisfies F
then exchange values of s0 and s0'

If F is not satisfiable, then o is simply the identity function, which is serially decomposable. Otherwise, the new values for s0 and s0' depend on each other and on whether the values of the other variables satisfy F; hence, o is not serially decomposable for this ordering of state variables. Thus, SD-OPERATOR-GIVEN is coNP-hard. Because it is also in coNP, it follows that SD-OPERATOR-GIVEN is coNP-complete. □

Note the above reduction works for total decomposability as well.
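The reduction operator in the proof can be sketched concretely. This is our illustration, not code from the paper; the formula F is represented as a Python predicate over the values of s1..sm.

```python
# Hypothetical sketch of the Theorem 1 reduction: given a boolean formula
# F over m variables, the operator over state (s0, s0', s1, ..., sm)
# swaps s0 and s0' exactly when (s1..sm) satisfies F, and is the
# identity otherwise.
def reduction_operator(F):
    def op(state):
        s0, s0p, rest = state[0], state[1], state[2:]
        if F(rest):                 # the assignment s1..sm satisfies F
            return (s0p, s0) + rest
        return state
    return op
```

If F is unsatisfiable the operator is the identity (trivially serially decomposable); if F is satisfiable, s0 and s0' each depend on the other, so no f1 over s0 alone can exist.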
Therefore, it follows that:

Corollary 2 It is coNP-complete to determine whether a given operator of a given finite state-variable problem is totally decomposable.

Theorem 3 SD-OPERATOR-ANY is coNP-complete.

Proof: The reduction in Theorem 1 shows that SD-OPERATOR-ANY is coNP-hard because if the formula F in the proof is satisfiable, then s0 and s0' depend on each other, and so no ordering of the state variables can satisfy serial decomposability.

It remains to show that SD-OPERATOR-ANY is in coNP. To demonstrate that si cannot be ordered before sj, choose two states in which all state variables have the same values except for sj, but after applying the operator, the two new states have different values for si. If two such states exist, then it must be the case that si depends on sj, i.e., sj must be ordered before si. If no such pair of states exists, then si must be a function of the other variables, i.e., si can be ordered before sj. To demonstrate that no ordering exists, choose two states for each (i, j), i ≠ j, and based on the pairs that cannot be ordered, show that no partial ordering exists. This requires O(n³) nondeterministic choices (2n choices for each of O(n²) pairs of state variables), and so SD-OPERATOR-ANY is in coNP. Because it is also coNP-hard, it follows that SD-OPERATOR-ANY is coNP-complete. □

The above results easily generalize to determining serial decomposability of a set of operators. For Theorem 1, this just involves a nondeterministic choice of an operator. For Theorem 3, a choice of operator must be made for each ordered pair of state variables. The interesting consequence is that detecting serial decomposability of a set of operators for any ordering of state variables is not much harder than detecting serial decomposability of a single operator for a given ordering. The difference is only a polynomial factor rather than a more dramatic change in complexity class.

Serial Decomposability of Problems

Let SDSAT be the problem of determining whether a given finite state-variable problem is serially decomposable. That is, an instance of SDSAT is a tuple (S, V, O, g) with definitions as given above. I again assume that applying an operator takes polynomial time.

Theorem 4 SDSAT is PSPACE-complete.

Proof: To show this, first I shall show that SDSAT is in PSPACE, and then I shall show that the set of Turing machines whose space is polynomially bounded can be reduced to SDSAT.

Lemma 5 SDSAT is in PSPACE.

Proof: Theorems 1 and 3 show that determining serial decomposability of an operator is in coNP. Determining operator closure is in PSPACE if determining whether a state is solvable is in PSPACE. To show that an instance of SDSAT does not have operator closure, find two states s and s' such that s' and the goal state g can be reached from s, but g cannot be reached from s'. This would show that an unsolvable state s' can be reached from a solvable state s.

Determining whether a state s is solvable is in PSPACE. If there are n variables and h values, and there is a solution, then the length of the smallest solution path must be less than h^n. Any solution of length h^n or larger must have "loops," i.e., there must be some state that it visits twice. Such loops can be removed, resulting in a solution of length less than h^n. Hence, no more than h^n nondeterministic choices are required. Each operator only requires polynomial time, which implies polynomial space. Once an operator has been applied, its space can be reused for the next operator. Thus, determining whether a state is solvable is in NPSPACE. Because NPSPACE = PSPACE, this problem is also in PSPACE.

Because each property of serial decomposability can be determined in PSPACE, it follows that SDSAT is in PSPACE. □

Lemma 6 SDSAT is PSPACE-hard.

Proof: Let M be a Turing machine with input x1x2...xn, and whose space is bounded by a polynomial p(n). Without loss of generality, I assume that M is a one-way Turing machine with a single tape. Because NPSPACE = PSPACE, it does not matter whether the machine is deterministic or not.

Create state variables with possible values V = {0, 1} as follows:

s_{i,x} equals 1 if the ith tape square contains symbol x, for 0 ≤ i < p(n) and x ∈ Σ, where Σ is the tape alphabet of M
s_q equals 1 if M is in state q, for q ∈ Q, where Q is the set of states of M
s_i equals 1 if the tape head is at the ith tape square
s* equals 1 if M accepts the input
s# equals 1 if the other variables encode a coherent configuration of M

Create operators as follows. Let o_init set the state variables so they correspond to the initial configuration of M. For 1 ≤ i ≤ n, s_{i,x_i} = 1. For i = 0 and n < i < p(n), s_{i,b} = 1, where b ∈ Σ is the blank symbol. s_{q0} = 1, where q0 is the start state. Also, s_1 = 1 and s# = 1. All other state variables are set to 0, including s*.

A state transition of M is specified by (q, x, y, q', d), i.e., if q is the current state and x is the current symbol, then change x to y, make q' the current state, and move in direction d (-1 or +1). For each transition t and tape square i, create an operator o_{t,i} that performs the following code:
Note that all the O_{t,i} operators are serially decomposable if s_# is ordered after all other state variables (except for s_* because of an operator described below).

The reader might be uncomfortable with the large number of operators (number of tape squares times number of transitions) that are created. This can be avoided if the s_i variables are allowed to range from 0 to p(n) − 1, and modifications are made as follows. O_init sets s_i = i, 0 ≤ i < p(n), and only one O_t operator is created for each transition. O_t looks for the s_i equal to 1 to determine which s_{i,x}, s_{i,y}, and s_{i+d} variables to change. To move the tape head, O_t either adds 1 to or subtracts 1 from all the s_i variables (modulo p(n)) depending on the direction. This modification requires that all the s_i variables be ordered first.

Finally, for M's accepting states, create an operator O_final that sets all state variables to 0, except that s_* is set to 1, if s_# = 1 and an accepting configuration has been reached. The goal state is then s_* = 1 and all other variables equal to 0. All operators are serially decomposable and there is a unique goal state.

Suppose that M halts in an accepting configuration. Then the goal state can be reached from any other state by applying O_init, followed by operators corresponding to the transitions of M, followed by O_final. Thus operator closure is satisfied, and the finite state-variable problem is serially decomposable.

Suppose M does not halt in an accepting configuration. Then the goal state cannot be reached after O_init is applied. Because the goal state is a solvable state, and applying O_init leads to an unsolvable state, operator closure is not satisfied, and the finite state-variable problem is not serially decomposable.

By inspection, this reduction takes polynomial time. Thus, an algorithm for SDSAT could be used to solve any problem in PSPACE with polynomial additional time. It follows that SDSAT is PSPACE-hard. □
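For concreteness, the transition operators O_{t,i} of the reduction can be sketched in code. The dictionary-based state encoding and the variable names below are assumptions for illustration, not the paper's notation:

```python
def make_op(q, x, y, q2, d, i):
    """Operator O_{t,i} for transition (q, x, y, q', d) at tape square i."""
    def op(s):
        s = dict(s)  # apply the operator functionally
        # If M is not in state q at square i reading x, the result is
        # an incoherent configuration: s_# <- 0.
        if not (s.get(('sq', q)) and s.get(('pos', i)) and s.get(('tape', i, x))):
            s['#'] = 0
        # In any case, set variables as if M were in state q at square i reading x.
        s[('sq', q)] = 0
        s[('tape', i, x)] = 0
        s[('tape', i, y)] = 1
        s[('sq', q2)] = 1
        s[('pos', i)] = 0
        s[('pos', i + d)] = 1
        return s
    return op

# A coherent step of a hypothetical transition (q0, 'a', 'b', q1, +1) at square 0.
step = make_op('q0', 'a', 'b', 'q1', +1, 0)
state = {('sq', 'q0'): 1, ('pos', 0): 1, ('tape', 0, 'a'): 1, '#': 1}
after = step(state)
print(after['#'], after[('tape', 0, 'b')], after[('pos', 1)])  # 1 1 1
```

Applying `step` a second time finds the machine no longer in state q0, so the coherence flag drops to 0, as in the proof.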
Because SDSAT is in PSPACE and is PSPACE-hard, SDSAT is PSPACE-complete. □

It is interesting to note that if the PSPACE Turing machine instance is satisfiable, then the SDSAT instance is not only serially decomposable, but also every state is solvable.

Problem Transformation

The above proof transforms a solvable instance of a PSPACE Turing machine to a serially decomposable problem, but different solvable instances would be transformed to different serially decomposable problems. Of course, transforming a single infinite problem into a finite serially decomposable problem is out of the question, but it might be possible to transform finite subsets of an infinite problem to a serially decomposable problem.

There is an unsatisfying answer based on boolean circuit complexity. It is possible to encode a limited-size version of any polynomial-time problem into a polynomial-size boolean circuit. Such circuits have no circularities, and so can easily be transformed into a serially decomposable problem. This might be considered uninteresting because only one operator is required to propagate values from one level of the circuit to the next.

In the remainder of this section, I show how a PSPACE Turing machine, given a limit on the size of the input, can be transformed in polynomial time to a problem that is nearly serially decomposable, i.e., serially decomposable except for either operator closure or a unique goal state. It is an open question whether or not a transformation to "fully" serially decomposable problems is possible, but I shall suggest some reasons why such a transformation probably does not exist. For a Turing machine M, let M_n denote M with size of input limited by n.

Theorem 7 For any PSPACE Turing machine M and positive integer n, it takes time polynomial in n to reduce M_n to a problem that is serially decomposable except for either operator closure or a unique goal state.

Proof: M_n uses at most polynomial space p(n).
Transform M for an arbitrary input x1 ... xn of size n into a finite state-variable problem as in Lemma 6 with the following modifications.

Add s'_{i,x} variables, 1 ≤ i ≤ n, such that s'_{i,x_i} = 1 and 0 otherwise.

Modify the O_init operator so it copies the s'_{i,x} variables to the s_{i,x} variables.

Modify the O_final operator so it changes the s'_{i,x} variables to zero.

Different inputs for M_n can now be accommodated by initializing the s'_{i,x} variables to different values. The operators remain serially decomposable as long as the s'_{i,x} variables are ordered before the s_{i,x} variables. However, operator closure is not satisfied. The O_final operator irrevocably erases the s'_{i,x} variables, and the Turing machine can no longer be initialized to any proper configuration. Thus, this finite state-variable problem is serially decomposable except for operator closure.

Suppose that the finite state-variable problem is changed as follows: any state in which s_* = 1 is a goal state, and O_final does not change any of the s'_{i,x} variables. Now the problem has operator closure because no operator erases the initial state; however, there is no longer a unique goal state. Thus, this problem is serially decomposable except for a unique goal state. □

It is very unlikely that the above transformation results in a "natural" representation. However, it suggests that many PSPACE problems are likely to have a "natural" counterpart with serially decomposable operators, which might take some effort to find.

There is a good argument against the existence of a transformation from PSPACE or even NP to "fully" serially decomposable problems. Operator closure suggests that either operators never make a decision that requires backtracking, or that it is always possible to restore the initial state.
NP-hard problems appear to require backtracking, so that leaves restoring the initial state as the only possibility, which implies that the state variables always remember the initial state. But a unique goal state means that there is no memory of any specific state when the goal is reached. In addition, serial decomposability of operators means that information can only "flow" in one direction, from s_1 to s_n. The variables used to restore the initial state must be before all other variables, including any that indicate whether a solution has been found. As a result, erasing the initial state cannot depend on whether a solution has been found. So restoring the initial state is not a viable possibility, either. Somehow, applying an operator never loses the information that the instance is solvable. This is the opposite of what seems necessary for NP-hard problems.

For example, consider the problem of satisfying a boolean formula. To transform this problem to a serially decomposable problem, there must be operators that erase the original formula in order to reach a common goal state from any formula. However, if other operators decide on assignments to variables, and if incorrect assignments have been made, then erasing the original formula results in an unsolvable state.

Conclusion

I have shown that it is hard (coNP-complete) to determine serial decomposability of operators, and even harder (PSPACE-complete) to determine serial decomposability of problems. Thus, it is nontrivial to test for serial decomposability, especially to test for operator closure. I have also shown that an efficient transformation from PSPACE Turing machines to nearly serially decomposable problems exists, given a limit on the size of the input. That is, it is easy to achieve serial decomposability of operators, albeit without operator closure or a unique goal state.
A problem instance can be transformed into a serially decomposable problem, but it is not clear when a set of problem instances can be transformed into a single serially decomposable problem. Another major question that is left open is: if a problem is known to be serially decomposable, how difficult is it to determine whether a given instance is solvable?

Acknowledgments

The reviewers made valuable suggestions. This research has been supported in part by DARPA/AFOSR contract F49620-89-C-0110.

References

Chalasani, P.; Etzioni, O.; and Mount, J. 1991. Integrating efficient model-learning and problem-solving algorithms in permutation environments. In Proc. Second Int. Conf. on Principles of Knowledge Representation and Reasoning, Cambridge, Massachusetts. 89-98.

Korf, R. E. 1985. Macro-operators: A weak method for learning. Artificial Intelligence 26(1):35-77.

Tadepalli, P. 1991a. A formalization of explanation-based macro-operator learning. In Proc. Twelfth Int. Joint Conf. on Artificial Intelligence, Sydney. 616-622.

Tadepalli, P. 1991b. Learning with inscrutable theories. In Proc. Eighth Int. Workshop on Machine Learning, Evanston, Illinois. 544-548.

734 Representation and Reasoning: Temporal
1992
Technical University Vienna, Christian Doppler Laboratory for Expert Systems, Paniglgasse 16, A-1040 Vienna, Austria
email: dom@vexpert.dbai.tuwien.ac.at

Abstract

Temporal reasoning is widely used in AI, especially for natural language processing. Existing methods for temporal reasoning are extremely expensive in time and space, because complete graphs are used. We present an approach to temporal reasoning for expert systems in technical applications that reduces the amount of time and space by using sequence graphs. A sequence graph consists of one or more sequence chains and other intervals that are connected only loosely with these chains. Sequence chains are based on the observation that in technical applications many events occur sequentially. The uninterrupted execution of technical processes over a long time is characteristic of technical applications; relating the first intervals in the application to the last ones makes no sense. In sequence graphs only those relations are stored that are needed for further propagation. In contrast to other algorithms which use incomplete graphs, no information is lost, and the reduction of complexity is significant. Additionally, the representation is more transparent, because the "flow" of time is modelled.

Introduction

In many AI applications reasoning about time is essential, and therefore several techniques for the explicit representation and processing of time have been developed. Most of these techniques use graph-theoretic models, with time entities as nodes and temporal relations as edges. The application area we have in view is the control of technical processes, which involves planning, scheduling, monitoring, and diagnosis. In contrast to areas like NLP, special characteristics exist in this domain that require appropriate techniques, but these characteristics may also be used to improve the processing.
Existing methods for temporal reasoning are extremely expensive in time and space, because general constraint propagation techniques are applied. In our approach, the characteristic of most real-time applications is considered. In these applications programs run for a very long time without interruption. A controlling program loops forever, and some temporal constraints are used seldom and others more often. Moreover, in scheduling and planning we have to tackle uncertainty about the future, which implies the necessity to represent this uncertainty efficiently.

Before introducing the representation and the propagation based on this model, we show why temporal reasoning is useful in this application area and which objectives should be achieved with a new technique. Additionally, we discuss other approaches that have similar objectives.

Temporal reasoning is used to assure or to prove consistency between a set of temporally qualified propositions. If a proposition is added that is not consistent with the existing knowledge base, either the new proposition is invalid or some of the old propositions are wrong. This decision cannot be supported by temporal reasoning; it has to be decided with causal reasoning of some kind.

The temporal consistency mechanism is used for different tasks. In planning (Allen 1991) an inconsistency indicates that a chosen action is not appropriate for a given goal. Either the action is inconsistent with the goals, or the set of propositions describing other actions and facts in the planning environment is inconsistent with the chosen action. It is also possible that a new goal is inconsistent with the knowledge base. This states that it is impossible to achieve this goal, and replanning is needed. In scheduling of production processes (Dorn 1991) temporal reasoning is used to represent temporal constraints like delivery dates, durations of operations, slack times, and the temporal description of process plans.
Usually the inconsistency indicates that a needed resource is used by another operation at the same time. In process control and diagnosis (Nökel 1989) temporal reasoning can be used to recognize deviations between the expected course of the process and the course that actually happens.

Another purpose of temporal reasoning is the computation of new knowledge. New knowledge about temporal constraints can be deduced with intersection and transitive conclusions. In planning, the sequence of actions can be deduced and the start times for actions can be computed. In diagnosis, a new hypothesis may be concluded, or timeouts for supervision can be computed through temporal constraints.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

One of the first described applications that used some kind of temporal reasoning was that of (Kahn & Gorry 1977). The system was not based on a graph-theoretic model and was therefore more transparent for a user of this system, but was also restricted to a small application area. A concept called before/after chains was used, which influenced our idea for the propagation of intervals.

Most popular, and also a basis for our representation, is the model of (Allen 1983). Unfortunately, this model is not very transparent and does not show the "flow" of time, because every interval in the interval graph is uniform and all intervals are connected with each other. Moreover, the space requirements for the representation of the complete graphs and the time needed for the propagation are very high.

Often time point calculus instead of interval calculus is proposed to reduce the amount of work needed to achieve a consistent graph. In (Vilain, Kautz, & van Beek 1990) it was shown that global consistency for the time point calculus is achievable in polynomial time, but this advantage must be paid for by a lower expressiveness.
In planning and scheduling a usual constraint is to rule out that two intervals overlap. In Allen's model this is expressed by I1 {<, m, mi, >} I2, but in the time point calculus such a constraint cannot be expressed.

In order to reduce time and space requirements, reference intervals were proposed in (Allen 1983). Since he has not given any rules on the generation of reference intervals, information may be lost in this model. Hence, in (Koomen 1989) rules were given to construct reference intervals automatically by a program. Here, a reference interval must contain its intervals, and therefore no information is lost and the computation of the relation between two intervals that are part of different reference intervals is easier. However, for the applications that we have in view, reference intervals are not the adequate representation, because a hierarchical representation is seldom used.

In (Dechter & Pearl 1988) heuristic ordering for constraint graphs was proposed to improve the general constraint satisfaction problem. Such a kind of ordering could be the "flow" of time. In (Ghallab & Alaoui 1989) an algorithm was proposed to order intervals temporally, and they detected that the propagation process can be sped up with this technique. Their model consists of two graphs: one graph with all intervals which can be ordered definitely, and one graph with intervals that cannot be ordered, because their relations are uncertain.

We will now present a model of representation and propagation that uses some of these ideas. We use the concept of "flow" of time in a graph-theoretic model and thus obtain a kind of ordered constraint graph.

Sequence Graphs

We have mentioned that the uninterrupted execution of technical processes for a long time is typical for our applications. A controlling program loops forever, but the involved intervals and their constraints may differ.
Arranging intervals of the process along a time axis, we obtain a figure that is stretched along a hypothetical time axis. The parallelism in the process is comparatively small in contrast to the number of intervals over the whole lifetime of the process.

The following example is typical for a technical process. It is a simplification of a set of intervals from a scheduling expert system in a flow shop (Dorn & Shams 1991). The processes described by intervals are a simplification of the treatments for one charge. The set of intervals and their temporal constraints can be interpreted as a process plan. In the following discussion we use only this process plan, but the reader should keep in mind that many process plans must be combined in order to get one schedule. Important temporal constraints will be between intervals of different process plans, and therefore it would not make much sense to use a reference interval for a charge. The scheduling expert system has to combine approximately 200 process plans for one week.

Figure 1: Process Plan Described by Intervals

These intervals and their relations can be represented in a complete graph with 21 edges. Sequence graphs are based on the observation that a complete interval graph contains a path where all intervals are constrained to occur one after another. This path is called a sequence chain. Obviously, several chains may exist in one sequence graph. Applying sequence graphs, not every constraint is represented, because the transitivity property of the sequence chain is used. The complete graph of the process plan can be reduced to the following graph:

Figure 2: A Sequence Graph

One of the sequence chains, emphasized by a bold line, consists of the intervals i2, i3, i4, i5, and i6. It is a special subgraph with uniform edges.
We can deduce that i2 is before i4, because i2 is before i3 and i3 is before i4. No explicit transitivity rule is needed, because the relation is obtained from the position of both intervals in the chain.

The other intervals have to be connected explicitly to the sequence chain. But only relations to intervals which occur simultaneously have to be represented. The advantage of transitive chains is the reduction of edges in the graph and thereby of the amount of work and space. But we have to show that no information about the interval constraints is lost and every inconsistency is found.

In (Hrycej 1987) it was described how transitivity chains may be used to reduce the complexity of the interval algebra. He used Allen's algorithm for transitive closure, but changed the procedure "comparable". After the insertion of a new edge, superfluous edges are deleted. We improve this algorithm by deleting edges earlier. Thus, our algorithm is faster than that described by Hrycej. Furthermore, we use a stronger criterion to eliminate edges. Thus, we also obtain graphs with fewer edges.

Representation

We represent temporal knowledge by intervals and constraints between these intervals. Sequence graphs are integrated into a tool called TIMEX (Dorn 1990) that uses Allen's relations. These are 13 mutually exclusive simple relations between intervals that are abbreviated by the following symbols: =, <, >, m, mi, o, oi, d, di, s, si, f, fi. Through disjunction of simple relations, more complex relations can be formulated. These are interpreted as edges of an interval graph, and they are represented as triples: R = (I1, C, I2). To simplify theorems later on, we introduce predicates for some complex relations.

unknown(C) ⟺ C = {=, <, >, m, mi, o, oi, d, di, s, si, f, fi}
unknown(I1, I2) ⟺ R = (I1, C, I2) ∧ unknown(R)

Two intervals are connected if a path exists between them in the interval graph.
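The representation of simple and complex relations as sets lends itself to direct coding; a minimal sketch, assuming Python frozensets for the relation sets (the constant names are illustrative, not from TIMEX):

```python
# The 13 mutually exclusive simple relations; a complex relation is a
# disjunction, i.e., a set of simple relations.
ALLEN = frozenset('= < > m mi o oi d di s si f fi'.split())

SEQUENCE       = frozenset({'<', 'm'})
STARTS_BEFORE  = frozenset({'<', 'm', 'o', 'di', 'fi'})
FINISHES_AFTER = frozenset({'>', 'mi', 'di', 'si', 'oi'})

def unknown(c):
    """The 'unknown' constraint is the disjunction of all 13 relations."""
    return frozenset(c) == ALLEN

# A constraint R = (I1, C, I2) is then a triple with a relation set.
r = ('i1', SEQUENCE, 'i2')
print(unknown(r[1]), SEQUENCE <= STARTS_BEFORE)  # False True
```

The subset test confirms that every sequence relation is also a starts-before relation, which is why sequence chains respect the "flow" of time.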
Such a path may also contain "unknown" relations.

sequence(C) ⟺ C = {<, m}
sequence(I1, I2) ⟺ R = (I1, C, I2) ∧ sequence(R)

starts-before(C) ⟺ C = {<, m, o, di, fi}
starts-before(I1, I2) ⟺ R = (I1, C, I2) ∧ starts-before(R)

finishes-after(C) ⟺ C = {>, mi, di, si, oi}
finishes-after(I1, I2) ⟺ R = (I1, C, I2) ∧ finishes-after(R)

Definition 1: Complex Interval Relations

connected(I1, I2) ⟺ ∃ (ℑ, ℜ) [I1, I2 ∈ ℑ ∧ [(I1, C, I2) ∈ ℜ ∨ (I2, C, I1) ∈ ℜ ∨ ∃ I3 [connected(I1, I3) ∧ connected(I3, I2)]]]

Definition 3: Connection of Intervals

A sequence graph is an incomplete interval graph, because the properties of a sequence chain are used to reduce the number of edges in the graph. A sequence chain is a subgraph where all constraints are sequence constraints.

sequenceChain((ℑ, ℜ)) ⟺ ∀ (I1, C, I2) ∈ ℜ : sequence(C)

Definition 4: Sequence Chain

If an edge between two intervals I1 and I2 exists, then there is no knowledge about an interval I3 that occurs between both intervals. In other words, there is no interval between two explicitly connected intervals.

sequenceGraph((ℑ, ℜ)) → ∀ I1, I2 ∈ ℑ ∧ (I1, C1, I2) ∈ ℜ : ¬∃ I3 ∈ ℑ [sequence(I1, I3) ∧ sequence(I3, I2)]

Axiom 1: Sequence Graph

Query for Interval Constraints

Since sequence chains are used, the query for interval constraints is not as easy as in complete graphs. If the relation between two intervals that are not connected explicitly is asked for, we must search for a path in the graph between the two intervals. However, this graph search is straightforward, because it must be searched only in one direction of the sequence chain. The relation can only be before or after.

A transitive relation can be defined in a table as introduced by Allen. We use the function trans(C1, C2) to denote transitive relations. The "unknown" relation will be represented in graphs, but we cannot generate further knowledge out of it.
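The one-directional chain search described above can be sketched as a forward traversal; a minimal sketch, assuming a hypothetical successor map over explicit sequence edges (the helper name is illustrative):

```python
def before_in_chain(succ, a, b):
    """Is a before b, following only explicit sequence edges ({<, m})?

    succ maps an interval to the intervals it explicitly precedes.
    The search goes in one direction only, so no backtracking over
    the opposite direction is needed.
    """
    seen, stack = set(), [a]
    while stack:
        for c in succ.get(stack.pop(), ()):
            if c == b:
                return True
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return False

# Chain i2 -> i3 -> i4 -> i5 -> i6: i2 is before i4 with no explicit i2-i4 edge.
succ = {'i2': ['i3'], 'i3': ['i4'], 'i4': ['i5'], 'i5': ['i6']}
print(before_in_chain(succ, 'i2', 'i4'), before_in_chain(succ, 'i4', 'i2'))  # True False
```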
By truth tables it can be shown that the transitive relation of two relations is the "unknown" relation whenever one of them is unknown.

unknown(C1) ∨ unknown(C2) → unknown(trans(C1, C2))

Theorem 1: Transitivity of the "unknown" Relation

An interval graph is a pair (ℑ, ℜ) consisting of a finite set of intervals ℑ and a finite set of interval relations ℜ. All intervals of such a graph must be connected.

intervalGraph((ℑ, ℜ)) → ∀ (I1, C, I2) ∈ ℜ [I1 ∈ ℑ ∧ I2 ∈ ℑ] ∧ ∀ I1, I2 ∈ ℑ [connected(I1, I2)]

Definition 2: Interval Graph

Suppose we are looking for the constraint between the intervals I1 and I2. If there is no explicit edge between the two corresponding nodes in the graph, we have two possibilities: I1 is before I2 or vice versa. If we have decided on one direction and this direction is right, we never backtrack. The queried interval must be part of the sequence chain, or it is connected explicitly to another interval in the sequence chain.

before-in-sequence-chain(I1, I2) ⟺ ∃ (I1, C1, I2) ∨ ∃ I3 [before(I1, I3) ∧ [∃ (I3, C, I2) ∨ before-in-sequence-chain(I3, I2)]]

Theorem 2: Searching an Interval Relation

For planning or scheduling applications it seems to be a good heuristic to search into the future, because we suppose that the sequence chain into the past is longer for real-time applications.

Monotony in Sequence Chains

Some transitivity computations are not needed in sequence graphs, because no stronger constraint can be deduced. This property is called monotony.

monotone(C1, C2) ⟺ ∀ C3 : trans(C1, trans(C2, C3)) ⊆ trans(trans(C1, C2), C3) ∧ trans(trans(C3, C1), C2) ⊆ trans(C3, trans(C1, C2))

Definition 5: Monotony in Sequence Chains

In (Hrycej 1987) it was shown that two "sequence" relations are monotone. The monotony states that the constraint obtained via the two relations is always less constraining than successive constraint propagation via them.
If a new constraint C3 is added to a sequence chain (C1, C2), it is not necessary to use the transitive computation via the sequence chain.

We formulate a stronger criterion that will be used to reduce the edges in a sequence graph further. We only demand a theoretical point, instead of an interval, to lie between two intervals to make an edge between them superfluous. Suppose an interval I1 is during an interval I2, and it is known that interval I3 is after I2. Our theorem says that we need not represent the edge between the intervals I1 and I3.

(C1 = m ∧ C2 = m) ∨ (C1 ∈ {<, m, o, s, di} ∧ C2 = <) ∨ (C2 ∈ {<, m, o, s, di} ∧ C1 = <) → monotone(C1, C2)

Theorem 3: Strong Monotony in Sequence Chains

The theorem is provable by truth tables, and it captures the property described in (Koomen 1989) that an interval that is during a reference interval need not be connected explicitly with another interval that is before or after its reference interval.

Propagation in Sequence Graphs

The basic algorithm for propagation is similar to the algorithm for transitive closure in graphs described in (Allen 1983). A relation between two intervals is added to the graph and also to an agenda. All tasks of the agenda are performed in a loop. In the process of executing these tasks, other tasks may be produced. If a constraint is added to the graph, we distinguish whether the new constraint is a sequence constraint or not. If it is one, some constraints are deleted and some are inserted. Otherwise, normal propagation is done.

For a task (an edge in the graph) a set of "comparable" intervals is generated. For every such interval the transitivity rule is applied in two directions. If the newly computed interval constraint is stronger than the old one, the new constraint is added to the graph and results in a new task. The propagation algorithm terminates if no further task exists. Termination is safe, because tasks are only added when stronger constraints are added.
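The strong monotony condition of Theorem 3 is a direct case analysis over simple relations, so it transcribes almost literally into code; a sketch under the assumption that the reconstructed condition above is read with C standing for single simple relations:

```python
# Relations whose composition with '<' yields no stronger constraint
# than '<' itself, per the strong monotony criterion (Theorem 3).
S = frozenset({'<', 'm', 'o', 's', 'di'})

def monotone(c1, c2):
    """Strong monotony test for two simple relations."""
    return (c1 == 'm' and c2 == 'm') or \
           (c1 in S and c2 == '<') or \
           (c2 in S and c1 == '<')

print(monotone('m', 'm'), monotone('o', '<'), monotone('d', '>'))  # True True False
```

When `monotone` holds, the propagation loop may skip the transitive computation along the chain, which is where the edge reduction comes from.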
In the next subsection we describe how the set of comparable intervals is computed in sequence graphs. In the second subsection we suppose that an existing edge is constrained more strongly so that other edges can be deleted. In the last subsection we describe the case where an edge to a new interval is added.

Comparable Intervals

The function "comparable" computes all intervals for which the transitivity rule is applied. In the general case, these are all intervals that are connected explicitly with one of the intervals of the new edge. All intervals that are connected via an "unknown" relation can be suppressed. If the new constraint C1 and the constraint C2 that connects an interval I3 with the new edge are monotone, then the transitivity rule need not be applied for I3.

comparable((I1, C1, I2), Set) ⟺ Set = {I3 | [(I3, C2, I1) ∨ (I2, C2, I3)] ∧ ¬monotone(C1, C2)}

Theorem 4: Comparable Intervals in Sequence Graphs

Constraining Two Intervals Stronger

If the relation between two intervals is constrained more strongly and the new constraint C1 is a sequence constraint, two sets are generated. One set ℑ1 (the "before" set) includes all intervals connected explicitly through sequence constraints with I1 so that they are before I1, and the other set ℑ2 (the "after" set) includes all intervals connected explicitly with I2 through sequence constraints so that they are after I2. Suppose we have the following constellation:

Figure 3: Constraining Two Intervals Stronger

The explicit constraints between these intervals are described in the following graph:

Figure 4: Sequence Graph

Suppose we constrain i3 to be before i4. Then ℑ1 consists of i1 and i2, and ℑ2 of i5.
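Theorem 4's "comparable" set can be sketched directly over a list of explicit edges; a simplified sketch in which single simple relations stand in for relation sets, and the monotony test is passed in as a parameter (all names are illustrative):

```python
def comparable(new_edge, edges, monotone):
    """Intervals for which the transitivity rule must still be applied
    after adding new_edge = (I1, C1, I2); edges holds the existing
    explicit (A, C, B) triples of the graph."""
    i1, c1, i2 = new_edge
    result = set()
    for a, c2, b in edges:
        if b == i1 and not monotone(c1, c2):
            result.add(a)        # A --C2--> I1 is explicitly connected to the new edge
        elif a == i2 and not monotone(c1, c2):
            result.add(b)        # I2 --C2--> B is explicitly connected to the new edge
    return result

edges = [('i0', '<', 'i1'), ('i2', 'o', 'i3')]
mono = lambda c1, c2: c1 == '<' and c2 == '<'   # toy monotony test
print(comparable(('i1', '<', 'i2'), edges, mono))  # {'i3'}
```

The monotone neighbor `i0` is suppressed, so no transitivity task is generated for it, which is exactly the saving that sequence chains provide.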
Now all edges between members of ℑ1 and ℑ2, edges between i3 and members of ℑ2, and edges between i4 and members of ℑ1 are deleted.

Figure 5: Reduced Graph

Insertion of New Intervals

Now we describe the case where a new interval is inserted and therefore new constraints are inserted. Suppose the following sequence graph exists:

Figure 6: Insertion of a New Interval

Suppose the interval i7 is inserted and the interval i3 is constrained to meet the new interval. Between i1 resp. i2 and i7 no edges have to be added. But between i7 and i4, i5, and i6, "starts-before" relations have to be established, stating that these intervals start after i7. These interval constraints may be added without generating new tasks, because we know that the interval constraints between the intervals i4, i5, and i6 do not change. We get the following graph:

Figure 7: Insertion of a New Interval

If the new constraint were a "before" constraint, nothing about the relations between i7 and the other intervals could be concluded, and therefore no further edges would have to be added.

The complexity reduction through sequence graphs is hard to quantify for the general case, but we assume special characteristics for our applications. In the next section we create a theoretical framework to estimate the reduction of complexity. In the following section we show the advantage on a more complex example.

Theoretical Considerations

To examine the complexity of our new technique, we define some new properties of sequence graphs. The width of a graph is the maximal number of concurrent intervals in the graph and correlates with the concept in (Dechter and Pearl 1988). The length of a sequence graph is the length of the longest possible sequence chain. The length of a sequence chain is the number of intervals in the chain.

sequenceGraph((ℑ, ℜ)) → max(|ℜ|) < width((ℑ, ℜ))² · length((ℑ, ℜ)) − 1

This formula is nice, but the reduction depends heavily on the application.
The width of a sequence graph cannot be determined so easily. Suppose all intervals of an application are arranged in an "overlaps" chain; then all intervals could be concurrent.

Practical Considerations

To get a feeling for the reduction of complexity in technical applications, we take a scheduling example from the application described in (Dorn & Shams 1991). The first digit of an interval name indicates the number of a charge. The constraints for one charge are: ix1 m ix2 m ix3 m ix4 m ix5 m ix6 ∧ ix7 m ix5. The following constraints can be deduced by propagation with sequence graphs: finishes-after(ix7, ix1) ∧ finishes-after(ix7, ix2) ∧ finishes-after(ix7, ix3) ∧ ix7 {=, f, fi} ix4. These are ten constraints. In a complete graph we would have 21 constraints for one process plan.

One important constraint connecting process plans demands that two charges that use the same resource in the first treatment are scheduled one after another. We introduce the interval ix0, which describes the delay between two charges. Furthermore, we assume that the last treatment must again be sequential, so we constrain i15 < i25 < i35 < i45. Now we have 32 intervals connected in one sequence graph. We have inserted 30 constraints. The propagation algorithm deduces 143 constraints. The number of constraints in a complete graph would be 32 · (32 − 1) / 2 = 496. Figure 8 describes the intervals of four charges.
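The edge counts quoted in this example can be checked directly; a quick sanity check of the complete-graph figures (arithmetic only):

```python
def complete_edges(n):
    """Edges in a complete interval graph on n intervals: n(n-1)/2."""
    return n * (n - 1) // 2

print(complete_edges(7))    # 21 constraints for the 7-interval process plan
print(complete_edges(32))   # 496 constraints for the combined example
```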
But if we suppose that the average of concurrent intervals is 1 and the length of the longest chain is k intervals we estimate 21k edges. For a com- plete graph it would be n * (n - 1) / 2 edges for n inter- vals. Unfortunately, if all constraints given are “over- laps”-relations, we will obtain no reduction of space. Our method has one disadvantage. If there is a query for a constraint we have to search for a path between both in- tervals. In the complete graph we would find this con- straint immediately as a constraint. We have tested our representation with a number of ex- amples from real world scheduling problems and in most cases a significant reduction of time and space require- ments were determined in comparison with pure interval calculus. The reduction occurs preferably with large amounts of intervals. Nevertheless, for most technical applications the pre- sented model is insufficient, because also quantitative re- presentation and propagation of time is needed. Further- more, concepts for inexact reasoning with quantitative time is necessary and this reasoning must be combined with the qualitative reasoning. In (Dorn 1990) we have integrated both, and the reasoning from qualitative to the quantitative representation is quite easy and can be per- formed with linear effort. Unfortunately, we have not yet found good algorithms to conclude from quantitative to qualitative representation. eferences Allen, J. F. 1983. Maintaining Knowledge about Tempo- ral Intervals. CACM, 26( 11): 823-843. Allen, J. F. 1991. Planning as Temporal Reasoning. In Proceedings of the 2nd KR, Cambridge, Massachu- setts, 3-8. Dechter, R.; Pearl, J. 1988. Network-Based Heuristics for Constraint-Satisfaction Problems. Artificial InteIZi- gence 34, 1-38. Dom, J. 1990. TIMEX - A Tool for Interval-Based Re- presentation for Technical Applications.In Proceedings of the 2nd Conference on Tools for Artificial Intelli- gence, Washington, DC, 501-506. Dorn, J. 1991. 
Qualitative Modeling of Time in Technical Applications. In Proceedings of the International GI Conference on Knowledge Based Systems, München, Springer Verlag, 310-319.
Dorn, J.; Shams, R. 1991. An Expert System for Scheduling in a Steelmaking Plant. In Proceedings of the World Congress on Expert Systems, 395-404.
Ghallab, M.; Alaoui, A. M. 1989. Managing Efficiently Temporal Relations Through Indexed Spanning Trees. In Proceedings of the 11th IJCAI, Detroit, 1297-1303.
Kahn, K.; Gorry, G. A. 1977. Mechanizing Temporal Knowledge. Artificial Intelligence 9, 87-108.
Koomen, J. A. G. M. 1989. Localizing Temporal Constraint Propagation. In Proceedings of the 1st KR, Toronto, 198-202.
Hrycej, T. 1987. An Efficient Algorithm for Reasoning about Time Intervals. In Proceedings Expertensysteme '87, Nürnberg, Germany, 327-340.
Nökel, K. 1989. Temporal Matching: Recognizing Dynamic Situations from Discrete Measurements. In Proceedings of the 11th IJCAI, Detroit, 1255-1260.
Vilain, M.; Kautz, H.; van Beek, P. 1990. Constraint Propagation Algorithms for Temporal Reasoning. In Readings in Qualitative Reasoning about Physical Systems, Daniel S. Weld and Johan de Kleer, eds., 373-381.

740 Representation and Reasoning: Temporal
1992
Complexity and Algorithms for Reasoning about Time: A Graph-Theoretic Approach

Martin Charles Golumbic
IBM Israel Scientific Center, Technion City, Haifa, Israel;
and Bar-Ilan University, Ramat Gan, Israel
golumbic@israearn.bitnet

Ron Shamir
Computer Science Dept., Tel Aviv University, Tel-Aviv 69978, Israel
shamir@math.tau.ac.il

Abstract

Interval consistency problems deal with events, each of which is assumed to be an interval on the real line or on any other linearly ordered set. This paper deals with problems in reasoning about such intervals when the precise topological relationships between them are unknown or only partially specified. This work unifies notions of interval algebras for temporal reasoning in artificial intelligence with those of interval orders and interval graphs in combinatorics, obtaining new algorithmic and complexity results of interest to both disciplines.

Several versions of the satisfiability, minimum labeling and all consistent solutions problems for temporal (interval) data are investigated. The satisfiability question is shown to be NP-complete even when restricting the possible interval relationships to subsets of the relations intersection and precedence only. On the other hand, we give efficient algorithms for several other restrictions of the problem. Many of these problems are also important in molecular biology, archaeology, and resolving mutual-exclusion constraints in circuit design.

1 Introduction

Reasoning about time is essential for applications in artificial intelligence and in many other disciplines. Given certain explicit relationships between a set of events, we would like to have the ability to infer additional relationships which are implicit in those given. For example, the transitivity of "before" and "contains" may allow us to infer information regarding the sequence of events. Such inferences are essential in story understanding, planning and causal reasoning.
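The kind of transitivity inference meant here can be made concrete with a tiny sketch (my own illustration, not machinery from the paper); the table `COMPOSE` below contains only a hand-picked fragment of the possible compositions, and the relation names are informal:

```python
# A toy sketch of qualitative inference by composing relations.
# Only a tiny, hand-verified fragment of a composition table is given:
# e.g. "x before y" and "y before z" imply "x before z".
COMPOSE = {
    ("before", "before"): {"before"},
    ("contains", "contains"): {"contains"},
    ("before", "contains"): {"before"},   # z lies inside y, which is after x
}

def infer(rel_xy, rel_yz):
    """Possible relations between x and z, given x rel_xy y and y rel_yz z."""
    return COMPOSE.get((rel_xy, rel_yz), {"unknown"})

print(infer("before", "before"))   # {'before'}
```

In a full system every pair of relations would have an entry; here pairs outside the fragment deliberately return "unknown".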
There are a great number of practical problems in which one is interested in constructing a time line where each particular event or phenomenon corresponds to an interval representing its duration. These include seriation in archaeology [23, 24], behavioral psychology [9], temporal reasoning [1], scheduling [30], circuit design [38, p. 184] and combinatorics [31]. Indeed, it was the intersection data of time intervals that led Hajós [21] to define and ask for a characterization of interval graphs, and which provides the clues for solving the "Berge mystery story" [16, p. 20]. Other applications arise in non-temporal contexts: for example, in molecular biology, arrangement of DNA segments along a linear DNA chain involves similar problems [6].

In this paper, we relate the two notions of interval algebra from the temporal reasoning community and interval graphs from the combinatorics community, obtaining new algorithmic complexity results of interest to both disciplines. Allen [1] defined a fundamental model for temporal reasoning where the relative position of two time intervals is expressed by the relations (less than, equal or greater than) of their four endpoints, generating thirteen primitive relations (see Table 1). We call this 13-valued interval algebra A13. Our approach has been to simplify Allen's model in order to study its complexity using graph theoretic techniques.

The first of the two lines of specialization which we study in this paper is macro relations. Macro relations refers to partitioning the 13 primitive relations into coarser relations by regarding a subset of primitive relations as a new basic relation.
We let

∩ = {m, m⁻¹, o, o⁻¹, s, s⁻¹, f, f⁻¹, d, d⁻¹, =},
⋖ = {m, o},  ⋖⁻¹ = {m⁻¹, o⁻¹},
⊂ = {s, f, d},  ⊂⁻¹ = {s⁻¹, f⁻¹, d⁻¹}.

From these we define the 3-valued and 7-valued interval algebras A3 and A7, whose elements are called their atomic relations, respectively:

A3: {≺, ≻, ∩}
A7: {≺, ≻, ⋖, ⋖⁻¹, ⊂, ⊂⁻¹, =}

For certain applications it is convenient to assume that all interval endpoints are distinct. This simplification generates the 6-valued algebra

A6: {≺, ≻, o, o⁻¹, d, d⁻¹}

The choice of which of these algebras Ai to use depends on the nature of the application, data, constraints and on the type of complexity result being proved.

Golumbic and Shamir 741

Table 1: The 13-valued interval algebra A13. (The pictorial interpretation column of the original, drawing the x interval as a thin line and the y interval as a thick line, is not reproducible here.)

RELATION                            NOTATION
x before y / y after x              ≺
x meets y / y met-by x              m
x overlaps y / y overlapped-by x    o
x starts y / y started-by x         s
x during y / y includes x           d
x finishes y / y finished-by x      f
x equals y                          =

We use the term "algebra" here since the set of atomic relations in each Ai forms a Boolean algebra, and in certain cases, together with an additional composition operation, forms a relation algebra in the sense defined by Tarski [34]; see [27].

In Section 2, we present the background on constraint satisfiability problems as they relate to the study of interval algebras. General results on the relative complexity of the problems of interval satisfiability, minimum labeling and finding all consistent solutions are presented in Section 3. Several new NP-completeness results are given in Section 4, specifically the interval satisfiability problem for the 3-valued interval algebra and the interval graph sandwich problem. Section 5 deals with polynomial time solutions to restricted domain problems. Our conclusions are given in Section 6.
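The thirteen primitive relations and their collapse into A3 can be sketched directly from the endpoint comparisons (a minimal illustration of my own, not code from the paper; the function names `allen` and `macro3` are ad hoc):

```python
# Derive the primitive Allen relation of x = (lx, rx) and y = (ly, ry)
# from endpoint comparisons, then collapse it into the 3-valued
# macro algebra A3 = {before, after, intersects}.

def allen(x, y):
    """Return the primitive Allen relation of interval x w.r.t. y."""
    (lx, rx), (ly, ry) = x, y
    if rx < ly: return "<"            # x before y
    if ry < lx: return ">"            # x after y
    if rx == ly: return "m"           # x meets y
    if ry == lx: return "m-1"         # x met-by y
    if lx == ly and rx == ry: return "="
    if lx == ly: return "s" if rx < ry else "s-1"    # starts / started-by
    if rx == ry: return "f" if lx > ly else "f-1"    # finishes / finished-by
    if lx > ly and rx < ry: return "d"               # x during y
    if lx < ly and rx > ry: return "d-1"             # x includes y
    return "o" if lx < ly else "o-1"                 # overlaps

def macro3(rel):
    """Collapse a primitive relation into A3."""
    if rel == "<": return "before"
    if rel == ">": return "after"
    return "intersects"   # the 11 remaining primitives form the macro relation

print(allen((0, 2), (2, 5)))            # 'm'
print(macro3(allen((0, 2), (2, 5))))    # 'intersects'
```

Note that under the distinct-endpoints simplification only `<`, `>`, `o`, `o-1`, `d`, `d-1` can occur, which is exactly the 6-valued algebra A6.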
2 Temporal reasoning as constraint satisfiability

Temporal reasoning problems can be defined in the context of constraint satisfiability, where the variables correspond to pairs of temporal events (i.e., intervals) and take on values which represent the qualitative relationship between them (i.e., intersect, overlap, contain, less than, etc.). For each pair of events x and y, let D(x, y) be a set of atomic relations in the algebra Ai. The semantics here is that we do not know precisely the relationship between x and y, but it must be one of those in the set D(x, y). (In the language of constraint satisfiability, there is a variable V(x, y) representing the relation between x and y, and its value must be taken from those in the set D(x, y) corresponding to its domain.) For example, we read D(x, y) = {≺, ⊂} as "x is either before or contained in y".

The interval satisfiability problem (ISAT), called consistent singleton labeling in [35], is determining the existence of (and finding) one interval representation that is consistent with the input data D(x, y). The minimal labeling problem (MLP) is to determine the minimal sets D′(x, y) ⊆ D(x, y) such that every remaining atomic relation participates in some solution.

Example. x{≺, m, o}y, y{≻, s, ≺}z, z{f, s}x. Here x o y, y ≻ z, z s x and x ≺ y, y ≻ z, z f x are both consistent with the input, as shown in Figure 1. On the other hand, y s z and y ≺ z are impossible. The minimum labeling for this problem is x{≺, m, o}y, y ≻ z, z{f, s}x.
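ISAT for A3 can be checked by brute force on small instances (an exponential sketch of my own for illustration only; Section 4 shows the general problem is NP-complete, so no such enumeration scales):

```python
# Brute-force ISAT for the 3-valued algebra A3 ('<' before, '>' after,
# 'n' intersects): try every assignment of small integer endpoints.
from itertools import product

def holds(x, y, rel):
    (lx, rx), (ly, ry) = x, y
    if rel == "<": return rx < ly
    if rel == ">": return ry < lx
    return lx <= ry and ly <= rx        # 'n' (intersects)

def isat_a3(n, constraints):
    """constraints: dict {(i, j): set of relations from {'<', '>', 'n'}}.
    Returns a satisfying list of intervals, or None."""
    coords = range(2 * n)               # 2n values suffice for 2n endpoints
    for ends in product(coords, repeat=2 * n):
        ivs = [(ends[2 * i], ends[2 * i + 1]) for i in range(n)]
        if any(l > r for l, r in ivs):  # skip ill-formed intervals
            continue
        if all(any(holds(ivs[i], ivs[j], r) for r in rs)
               for (i, j), rs in constraints.items()):
            return ivs                  # a satisfying realization
    return None

# x before-or-intersects y, y before z, z intersects x: satisfiable?
print(isat_a3(3, {(0, 1): {"<", "n"}, (1, 2): {"<"}, (2, 0): {"n"}}) is not None)  # True
```

The same enumeration, restricted to singleton relation sets, also decides which atoms survive in a minimal labeling.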
Figure 1: Two interval realizations for the above example. I: x o y, y ≻ z, z s x. II: x ≺ y, y ≻ z, z f x.

The all consistent solutions problem (ACSP), which we define here, is that of determining a polynomial representation structure C requiring O(p(n)) space and from which k distinct combinations of atomic relations consistent with the input can be produced in O(q(n, k)) time, where n is the number of variables, p and q are polynomial functions, and k is any number less than or equal to the number of solutions. The structure C thus represents all possible combinations of atomic relations consistent with the given data. Our contention is that the all consistent solutions problem is a more faithful closure problem for interval algebras than the minimal labeling problem, since not all tuples of the cross product of a minimal labeling are solutions. Consider the following simple example.

Example. a{≺, ≻}b, b{≺, ≻}c, a{≺, ≻}c. This is a minimal labeling, and yet only 6 of the possible 8 instantiations have interval representations.

The closely related endpoint sequence problem (ESP) is that of enumerating all the distinct interval realizations which are consistent with the given data. For A13 and A6, ACSP and ESP are equivalent, but in A7 and A3 there may be several (or many) distinct endpoint sequences which realize the same combination of atomic relations.

An application in archaeology: The seriation problem in archaeology attempts to place a set of artifact types in their proper chronological order. This problem was formulated by Flinders Petrie, a well-known archaeologist at the turn of the century, while studying 800 types of pottery found in 900 Egyptian graves. To each artifact type there corresponds a time interval (unknown to us) during which it was in use.
To each grave there corresponds a point in time (also unknown) when its contents were interred. Each grave provides the intersection data for the intervals corresponding to its contents.

3 Relative complexity of the problems

In this section we show that the polynomiality of the ISAT problem implies the polynomiality of MLP and ACSP. These results are valid in the general context of constraint satisfaction problems, if the maximum number of labels in a domain is bounded by a constant. We present here the results for the algebras Ai, i = 3, 6, 7, 13, and they apply also to any restricted domain in these algebras.

Proposition 3.1 The minimum labeling problem and the interval satisfiability decision problem are polynomially equivalent for each of the algebras Ai, i = 3, 6, 7, 13.

Proof. Clearly a solution to the MLP gives an answer to the ISAT decision problem. For the converse, one can use an oracle for ISAT to solve MLP as follows: replace one relation set by one of the atomic relations it contains, and keep the rest of the problem input unchanged. ISAT is satisfiable for the resulting problem if and only if that atomic relation is part of a minimum labeling of the original problem. Hence MLP can be solved by a number of calls to the ISAT oracle which equals the number of atomic relations in the input.

Proposition 3.2 In any of the algebras Ai, i = 3, 6, 7, 13, if the interval satisfiability problem is polynomial, then there exists a polynomial representation structure for the corresponding all consistent solutions problem.

Proof. (Sketch) By Proposition 3.1, the polynomiality of ISAT implies the polynomiality of MLP. The algorithm for solving ACSP is an exhaustive depth-first search on the solution space defined by the cross product of the relation sets of the MLP solution. In each level of the search one more relation set is replaced by a singleton it contains, and the modified problem is checked for consistency using the ISAT oracle. This allows pruning partially restricted solutions which are already inconsistent at the root of their subtree without traversing it, and is the reason for the polynomiality of the algorithm.

4 NP-completeness results

Allen [1] originally provided a heuristic approach for solving the MLP in A13. That algorithm is polynomial but does not always provide a minimal solution, and may give a false positive answer to ISAT. Vilain and Kautz [37] have shown that MLP is in fact NP-complete for A13. Their proof relies on relations in which endpoints are equal, such as in the meets relation. We obtain a stronger result using macro relations to reduce the number of atomic relations from thirteen to three. Our first main result is to show that even in A3, the interval satisfiability problem is NP-complete. Consequently, all four problems ISAT, MLP, ACSP and ESP are intractable for all four interval algebras Ai, i = 3, 6, 7, 13.

To prove that ISAT is NP-complete for A3, we introduce a new combinatorial problem, called the interval graph sandwich problem, which we prove NP-complete and show to be a special case of ISAT. An undirected graph G = (V, E) is called an interval graph if its vertices can be represented by intervals on the real line such that two vertices are adjacent if and only if their intervals intersect (see [12, 13, 15, 16, 17]). The interval graph sandwich (IGS) problem is the following:

Interval Graph Sandwich problem:
INPUT: Two disjoint edge-sets, E1 and E2, on the same vertex set V.
QUESTION: Is there a graph G = (V, E) satisfying E1 ⊆ E ⊆ E1 ∪ E2 which is an interval graph?

When E1 = ∅ or E1 ∪ E2 is the complete graph on V, the answer is trivially yes. When E2 = ∅, the problem is polynomial by the algorithm of Booth and Lueker [7]. The following new result shows that the general case of the problem is NP-complete.

Theorem 4.1 The interval graph sandwich problem is NP-complete.

Proof. The proof (omitted in this abstract) relies on a reduction from the Not-All-Equal 3-Satisfiability problem (Schaefer [32]), and the observation by Lekkerkerker and Boland [28] that an interval graph cannot contain an asteroidal triple of vertices. The detailed proof is given in [19]; see also [20].

Theorem 4.2 ISAT is NP-complete for A3.

Proof. Define F = {V × V} − {E1 ∪ E2}. For a given instance of the IGS problem, construct an instance of ISAT on A3 as follows: for each edge (x, y) ∈ E1, E2 or F, let D(x, y) = {∩}, {≺, ∩, ≻} or {≺, ≻}, respectively. It is clear that this ISAT problem has a solution if and only if the IGS has one.

Corollary 4.3 ISAT, MLP, ACSP and ESP are NP-hard for A3, A6, A7 and A13.

Proof. This follows from the observation that the algebra A3 is contained in Ai, and that for any i = 3, 6, 7, 13, ISAT has a solution if and only if MLP has a non-empty solution, if and only if ACSP has a non-empty solution, if and only if ESP has a non-empty solution.

An application in molecular biology: In physical mapping of DNA, information on intersection or non-intersection of pairs of segments originating from a certain DNA chain is known from experiments. The goal is to find out how the segments can be arranged as intervals along a line (the DNA chain), so that their pairwise intersections in that arrangement match the experimental data. In the graph presentation, vertices correspond to segments and two vertices are connected by an E1-edge (resp., F-edge) if their segments are known to intersect (resp., not to intersect). E2-edges correspond to the case where the experimental information on the intersections is inconclusive, or simply unavailable. The decision problem is thus equivalent to the IGS problem.
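The reduction behind Theorem 4.2 is purely mechanical and can be sketched in a few lines (an illustration of my own; the function name `igs_to_isat` and the string encoding of relations are assumptions, not the paper's notation):

```python
# Turn an interval graph sandwich instance (E1 = forced edges,
# E2 = optional edges) into an ISAT instance over A3, following the
# construction in the proof of Theorem 4.2.
def igs_to_isat(vertices, e1, e2):
    """Return D(x, y) for every vertex pair: forced edges must intersect,
    optional edges are unconstrained, and all remaining pairs (the set F)
    must be disjoint."""
    d = {}
    for i, x in enumerate(vertices):
        for y in vertices[i + 1:]:
            pair = frozenset((x, y))
            if pair in e1:
                d[(x, y)] = {"n"}             # must intersect
            elif pair in e2:
                d[(x, y)] = {"<", "n", ">"}   # don't care
            else:
                d[(x, y)] = {"<", ">"}        # must be disjoint
    return d

e1 = {frozenset(("a", "b"))}
e2 = {frozenset(("b", "c"))}
print(igs_to_isat(["a", "b", "c"], e1, e2))
```

By construction, the resulting ISAT instance has a solution exactly when some sandwich graph between E1 and E1 ∪ E2 is an interval graph.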
5 Restricted domain problems

Because of the intractability of the general versions of ISAT, MLP, ACSP and ESP, attention has been focused on the work of several authors who have studied polynomial time heuristic algorithms for MLP on A13 [1, 35, 36]. Solutions to several restricted cases of the interval satisfiability problem have been known for a long time. These will be extended by the new results presented in this section.

By suitably restricting the input domain of an NP-complete problem, one can often obtain a special class which admits a polynomial time algorithm. In the general case for an interval algebra Ai, each relation set D(x, y) may take any of 2^i − 1 possible values (the empty subset of relations is not allowed). In this section, we restrict this by designating Δ to denote a particular family of relation sets in Ai and requiring that each set D(x, y) be a member of Δ. To simplify notation we shall represent each relation set in A3 by a concatenation of its atomic relations, omitting braces; hence ≺∩ = {≺, ∩}, ≺ = {≺}, etc. The seven possible relation sets in A3 in this notation are: ≺, ≻, ∩, ≺∩, ∩≻, ≺≻, ≺∩≻.

ISAT(Δ) will denote the ISAT problem where all the relation sets are restricted to the set Δ. The proof of Theorem 4.2 shows that even when all relation sets are restricted to be from Δ0 = {∩, ≺≻, ≺∩≻} (meaning intersect, disjoint or don't care), ISAT(Δ0) remains NP-complete.

A number of well-known recognition problems in graph theory and partially ordered sets may be viewed as restricted interval satisfiability problems. Five of these, all of which have polynomial time solutions, are given in Table 2 along with their appropriate Δ; see [12, 16, 31].
Table 2: Polynomial interval satisfiability problems in graph theory. (The restricted-domain column of the original table is illegible in this copy.)

Class                         Reference
Interval orders               [11]
Interval graphs               [15, 13, 7, 26]
Circle (or overlap) graphs    [14, 8]
Interval containment graphs   [18]
Posets of dimension 2         [10, 2, 18]

Previous to the work of Belfer and Golumbic [3, 5, 4], we do not know of any study which has investigated the ESP. Their results demonstrate polynomial time solutions for the ESP in A3 restricted to (i) Δ = {≺, ≻, ∩} (interval orders), using the so-called Π structure and its associated construction algorithms, and (ii) Δ = {≺≻, ∩} (interval graphs), using the endpoint-tree structure and its construction algorithms.

We describe here the results of a systematic study on the complexity of the restricted domains in A3. Since we can assume that two converse relation sets (≺ and ≻, or ≺∩ and ∩≻) always appear together in a restricted domain, there are 31 possible restrictions. We classify 27 of them as either polynomial or NP-complete, leaving open a conjecture that would settle the remaining four. For certain restricted domains special polynomial algorithms are devised.

5.1 Algorithms for the domain A3 − ≺≻

In this section we deal with problems in the domain Δ1 = {≺, ≻, ∩, ≺∩, ∩≻, ≺∩≻}, that is, A3 − ≺≻. We shall give efficient algorithms for ISAT, MLP and ACSP on this domain, and they will apply immediately to any subdomain of Δ1. Hence by excluding just a single relation set from A3 the problems become tractable.

We first prove that ISAT is polynomial. Construct a graph with vertices corresponding to the endpoints of event intervals and arcs representing the relative order of endpoints. The key observation is that every relation in Δ1 is equivalent to a certain order requirement between a pair (or two pairs) of such endpoints. This observation is stated below:

Lemma 5.1 Let i and j be the event intervals [l_i, r_i] and [l_j, r_j], respectively. In each of the following cases, the intervals satisfy the set of relations if and only if their endpoints satisfy the corresponding inequalities:

i ≺ j ⇔ r_i < l_j;  i ≻ j ⇔ r_j < l_i   (1)
i ≺∩ j ⇔ l_i ≤ r_j;  i ∩≻ j ⇔ l_j ≤ r_i   (2)
i ∩ j ⇔ l_i ≤ r_j and l_j ≤ r_i   (3)

If i ≺∩≻ j, no constraint is imposed.

The graph is now constructed as follows. For an instance J of ISAT with n events, form a directed graph G(J) = G(V, E) with vertex set V = {r_1, ..., r_n, l_1, ..., l_n}. The arc set E consists of two disjoint subsets, E_0 and E_1. The former will represent weak orders and the latter strict orders between pairs of endpoints. The arcs are defined as follows:

(l_i, r_i) ∈ E_0   for all i = 1, ..., n   (4)
(r_i, l_j) ∈ E_1   for all i, j s.t. i ≺ j   (5)
(l_i, r_j) ∈ E_0   for all i, j s.t. i ≺∩ j   (6)
(l_i, r_j) ∈ E_0 and (l_j, r_i) ∈ E_0   for all i, j s.t. i ∩ j   (7)

For pairs i, j with the relation i ≺∩≻ j, no arc is introduced. Define now E = E_0 ∪ E_1. We call the arcs in E_0 (resp., E_1) the weak arcs (resp., strict arcs). Note that the graph G is bipartite. Denote the two parts of the vertex set by R = {r_1, ..., r_n} and L = {l_1, ..., l_n}, and call an arc an RL-arc (resp., LR-arc) if it is directed from R to L (resp., from L to R). In the graph G all the RL-arcs are strict and all the LR-arcs are weak. Hence we need not record explicitly the type of each arc, since it is implied by its direction. Since G is bipartite, a cycle must contain vertices both from R and from L. In particular, it must contain an RL-arc, so we obtain the following.

Lemma 5.2 Every cycle in G(J) contains a strict arc.

Lemma 5.3 Suppose G(J) = (V, E) is acyclic. Then a linear order on V is consistent with the partial order G(J) if and only if it is a realization of J.

Proof. Take any linear order P which extends the partial order G. P is an ordering of all the endpoints which, by Lemma 5.1, satisfies all the relations in the input, so P gives a realization for J. On the other hand, every realization of J gives a linear order of the endpoints in which each of the input relations must be satisfied, by Lemma 5.1. Hence the linear order must be consistent with the partial order G(J).

An algorithm for solving ISAT(Δ1) constructs G(J) according to rules (4)-(7) and checks if it is acyclic. J is determined to be satisfiable if and only if G(J) is acyclic.

Theorem 5.4 The above algorithm correctly recognizes in linear time whether an instance of ISAT(Δ1) is satisfiable.

Proof. Validity: By Lemma 5.1, each arc reflects the order relation of a pair of interval endpoints as prescribed by the input relations. If G contains a cycle, then by Lemma 5.2 that cycle contains a strict arc. Hence the endpoints on that cycle must satisfy a chain of inequalities beginning and ending with the same endpoint and containing a strict inequality, which implies that the input relations cannot be satisfied. In case G is acyclic, it represents a partial order on the vertices. By Lemma 5.3, it has a realization and hence J is satisfiable.

Complexity: Constructing G(J) requires O(m) steps, where m is the number of input relation sets, since the effort is constant per relation. Checking if G is acyclic can be done, for example, by depth-first search, in time linear in |E|, the number of arcs [33]. Since m ≤ n(n − 1)/2 and |E| ≤ 2m + n, the algorithm is linear.

Remark. Since all the constraints defining an instance of ISAT(Δ1) are linear inequalities, the satisfiability problem can be reformulated as a feasibility question for a system of linear inequalities. This can be solved by linear programming algorithms, and in fact by specialized algorithms using the fact that only two variables appear in each inequality [29, 22]. While this is less efficient than the method described above, it allows the natural introduction of additional linear constraints, outside the scope of A3 or even A13, like lengths of intervals, fixing endpoints to specific time values, etc.

Our next result solves the Minimum Labeling Problem efficiently for the domain Δ1. Once G(J) has been constructed and shown to be acyclic, the MLP can be solved by forming the transitive closure of G(J), deducing from it additional (weak and strict) orders of endpoints, and then using the equivalence established in Lemma 5.1 between these orders and interval relations to create the minimum labeling. The total complexity of the procedure is O(mn) steps. A complete description appears in [19].

We finish this section by sketching a simple procedure to solve the ESP for Δ1. The transitive closure of the graph G(J) generated in the previous paragraph corresponds to a partially ordered set. The ESP thus reduces to constructing all the linear orders consistent with a partial order. This can be done by placing a minimal element in all possible positions with respect to a previously ordered subset of elements and repeating recursively. Using this technique we can show that all the realizations consistent with an instance on Δ1 can be computed in O(n) steps per endpoint sequence produced. The distinction between strict and weak inequalities (following from the original distinction between strict and weak arcs) can also be maintained in such a procedure.

5.2 The domain {≺, ≻, ∩, ≺∩≻}

Graph theoretic techniques provide a proof of the next result. An undirected graph is chordal if every cycle of length greater than or equal to 4 has an edge (chord) connecting two vertices which are not consecutive in the cycle. The complement of G is the undirected graph whose edges are the non-edges of G; a graph is transitively orientable (TRO) if each undirected edge can be assigned a direction so that the resulting orientation satisfies the usual transitivity property. A classical characterization due to Gilmore and Hoffman [15] is that G is an interval graph if and only if G is a chordal graph and its complement is transitively orientable.

Theorem 5.5 ISAT is solvable in O(n³) time for Δ2 = {≺, ≻, ∩, ≺∩≻}.

Proof.
Without loss of generality, we may assume that for each pair of elements x and y, the relation sets D(x, y) and D(y, x) given as input are already consistent, i.e., for each atomic relation R, R ∈ D(x, y) ⇔ R⁻¹ ∈ D(y, x). Otherwise, it is a simple matter to restrict the relation sets further so that they satisfy these properties or are shown to be unsatisfiable. Thus, for each pair of elements x and y, exactly one of the following holds: (i) x ∩ y, (ii) x ≺∩≻ y, (iii) x ≺ y, (iv) x ≻ y.

We construct two complementary graphs G and H as follows. The graph G = (V, E) has undirected edges, where {x, y} ∈ E ⇔ x ∩ y. The graph H = (V, E′) has both directed and undirected edges, where

{x, y} ∈ E′ ⇔ x ≺∩≻ y
(x, y) ∈ E′ ⇔ x ≺ y

(An undirected edge between x and y is denoted by {x, y}; a directed edge from x to y is denoted by (x, y).) It is an easy consequence of the Gilmore-Hoffman theorem that ISAT has a solution if and only if G is chordal and H has a transitive orientation. Testing whether G is chordal can be done in O(|V| + |E|) time [7], and obtaining a transitive orientation for H can be achieved in O(|V||E|) time by a variant of the TRO algorithm [16, p. 124] for undirected graphs.

An alternative polynomial solution to this problem has been given in [25].

5.3 The domain {≺, ≻, ≺≻, ≺∩≻}

Theorem 5.6 ISAT is polynomial for Δ3 = {≺, ≻, ≺≻, ≺∩≻}.

Proof. Form a directed graph G(V, E) with vertices corresponding to events, and (u, v) ∈ E if u ≺ v. If G contains a cycle, then the instance is clearly not satisfiable. If G is acyclic, then one can create an interval realization of G in which (1) all intervals are disjoint, and (2) (u, v) ∈ E implies u ≺ v. This can be done by taking any linear extension of G and ordering the intervals so that they are disjoint and ordered according to that order. In the resulting realization, (1) and (2) are satisfied: (2) guarantees that all ≺ and ≻ relations in the input are satisfied, and (1) guarantees that all other relations (≺≻ and ≺∩≻) are satisfied.

5.4 The domain {≺∩, ∩≻, ≺≻, ≺∩≻}

The final theorem obtained here uses a reduction from the interval sandwich problem to show the following intractability result.

Theorem 5.7 ISAT is NP-complete for the restricted domain Δ4 = {≺∩, ∩≻, ≺≻, ≺∩≻}.

6 Conclusion

In this paper we have dealt with the consistency of assertions about the relations between intervals. We have investigated three basic problems in temporal reasoning: determining satisfiability (ISAT), maximum strengthening of a satisfiable assertion (MLP), and producing all the consistent solutions via a polynomial representation structure (ACSP and ESP). ACSP and ESP were shown to be tractable whenever ISAT is. We have shown that even a major simplification of Allen's interval algebra, from thirteen relations to three only, leaves ISAT intractable. On the positive side, we have shown that in this simplified algebra A3 many restricted domain problems are efficiently solvable. Of the 31 possible restrictions, we have classified 27 as either polynomial or NP-complete. A summary of the possible restrictions is presented in Figure 2. We conjecture that ISAT is NP-complete on Δ = {≺∩, ∩≻, ≺≻}. A proof of this conjecture will resolve the remaining four cases.

The tools we have used have been mainly from graph theory and complexity theory. We have hoped to demonstrate that the interconnection between these disciplines and reasoning problems in AI can be quite rich, and that its investigation benefits both fields.

Figure 2: The complexity of ISAT on restricted domains of A3. Top sets are minimal NP-complete, bottom sets are maximal polynomial sets.

References

[1] J. F. Allen. Maintaining knowledge about temporal intervals. Comm. ACM, 26:832-843, 1983.
[2] K. R. Baker, P. C. Fishburn, and F. S. Roberts. Partial orders of dimension 2. Networks, 2:11-28, 1972.
[3] A. Belfer and M. C. Golumbic. Counting endpoint sequences for interval orders and interval graphs. Discrete Math. (1992).
[4] A. Belfer and M. C. Golumbic. The role of combinatorial structures in temporal reasoning. Proc. AAAI Workshop on Constraint Satisfaction, Boston, Mass., July 1990.
[5] A. Belfer and M. C. Golumbic. A combinatorial approach to temporal reasoning. In Proc. Fifth Jerusalem Conf. on Information Technology, pp. 774-780. IEEE Computer Society Press, 1990.
[6] S. Benzer. On the topology of the genetic fine structure. Proc. Nat. Acad. Sci. USA, 45:1607-1620, 1959.
[7] K. S. Booth and G. S. Lueker. Testing for the consecutive ones property, interval graphs, and planarity using PQ-tree algorithms. J. Comput. Sys. Sci., 13:335-379, 1976.
[8] A. Bouchet. Reducing prime graphs and recognizing circle graphs. Combinatorica, 7:243-254, 1987.
[9] C. H. Coombs and J. E. K. Smith. On the detection of structures in attitudes and developmental processes. Psych. Rev., 80:337-351, 1973.
[10] B. Dushnik and E. W. Miller. Partially ordered sets. Amer. J. Math., 63:600-610, 1941.
[11] P. Fishburn. Intransitive indifference with unequal indifference intervals. J. Math. Psych., 7:144-149, 1970.
[12] P. Fishburn. Interval Orders and Interval Graphs. Wiley, New York, 1985.
[13] D. R. Fulkerson and O. A. Gross. Incidence matrices and interval graphs. Pacific J. Math., 15:835-855, 1965.
[14] C. P. Gabor, K. J. Supowit, and W-L Hsu. Recognizing circle graphs in polynomial time. J. ACM, 36:435-473, 1989.
[15] P. C. Gilmore and A. J. Hoffman. A characterization of comparability graphs and of interval graphs. Canad. J. Math., 16:539-548, 1964.
[16] M. C. Golumbic. Algorithmic Graph Theory and Perfect Graphs. Academic Press, New York, 1980.
[17] M. C. Golumbic. Interval graphs and related topics. Discrete Math., 55:113-121, 1985.
[18] M. C. Golumbic and E. R. Scheinerman. Containment graphs, posets and related classes of graphs. Ann. N. Y. Acad. Sci., 555:192-204, 1989.
[19] M. C. Golumbic and R. Shamir. Complexity and algorithms for reasoning about time: A graph-theoretic approach. Technical Report 91-54, DIMACS Center, Rutgers University, NJ, 1991. Submitted for publication.
[20] M. C. Golumbic and R. Shamir. Complexity and algorithms for reasoning about time: A graph-theoretic approach. Proc. Israel Symp. on Theory of Computing and Systems, Lecture Notes in Computer Science, Springer-Verlag, 1992.
[21] G. Hajós. Über eine Art von Graphen. Intern. Math. Nachr., 11, 1957, problem 65.
[24] D. G. Kendall. Some problems and methods in statistical archaeology. World Archaeol., 1:68-76, 1969.
[25] N. Korte and R. H. Möhring. Transitive orientation of graphs with side constraints. In H. Noltemeier, editor, Proc. of the International Workshop on Graphtheoretic Concepts in Computer Science (WG '85), pp. 143-160, Linz, 1985. Universitätsverlag Rudolf Trauner.
[26] N. Korte and R. H. Möhring. An incremental linear time algorithm for recognizing interval graphs. SIAM J. Comput., 18:68-81, 1989.
[27] P. B. Ladkin and R. Maddux. The algebra of constraint satisfaction problems and temporal reasoning. Technical report, Kestrel Institute, Palo Alto, 1988.
[28] C. G. Lekkerkerker and J. Ch. Boland. Representation of a finite graph by a set of intervals on the real line. Fundam. Math., 51:45-64, 1962.
[29] N. Megiddo. Towards a genuinely polynomial algorithm for linear programming. SIAM J. Comput., 12:347-353, 1983.
[30] C. Papadimitriou and M. Yannakakis. Scheduling interval ordered tasks. SIAM J. Comput., 8:405-409, 1979.
[31] F. S. Roberts. Discrete Mathematical Models, with Applications to Social, Biological and Environmental Problems. Prentice-Hall, Englewood Cliffs, New Jersey, 1976.
[32] T. J. Schaefer. The complexity of satisfiability problems. In Proc. 10th Annual ACM Symp. on Theory of Computing, pp. 216-226, 1978.
[33] R. E. Tarjan. Depth-first search and linear graph algorithms.
Acad. Sci., 555:192-204, 1989. M. C. Golumbic and R. Shamir. Complexity and algo- rithms for reasoning about time: A graph-theoretic ap- proach. Technical Report 91-54, DIMACS Center, Rut- gers University, NJ, 1991. Submitted for publication. M. C. Golumbic and R. Shamir. Complexity and al- gorithms for reasoning about time: A graph-theoretic approach. Proc. Israel Symp. on Theory of Comput- ing and Systems, Lecture Notes in Computer Science, Springer-Verlag, 1992. G. Hajiis. Uber eine art von graphen. Intern. Math. Nachr., 11, 1957. problem 65. [24] D. 6. Kendall. S ome problems and methods in statistical archaeology. World Archaeol., 1:68-76, 1969. [25] N. Korte and R. H. Mijhring. Transitive orientation of graphs with side constraints. In H. Noltemeier, editor, Proc. of the International workshop on Graphtheoretic concepts in computer Science (WG ‘85), pp. 143-160, Linz, 1985. Universitktsverlag Rudolf Trauner. [26] N. Korte and R. H. Mijhring. An incremental linear time algorithm for recognizing interval graphs. SIAM J. Comput., 18:68-81, 1989. [27] P. B. Ladkin and R. Maddux. The algebra of constraint satisfaction problems and temporal reasoning. Technical report, Kestrel Institute, Palo Alto, 1988. Kw Pgl PO1 WI WI WI c34 WI WI WI WI C. G. Lekkerkerker and J. Ch. Boland. Representation of a finite graph by a set of interval on the real line. Fundam. Math., 51:45-64, 1962. N. Megiddo. Towards a genuinely polynomial algorithm for linear programming. SIAM J. Comput., 12:347-353, 1983. C. Papadimitriou and M. Yannakakis. Scheduling inter- val ordered tasks. SIAM J. Comput., 8:405-409, 1979. F. S. Roberts. Discrete Mathematical Models, with Ap- plications to Social Biological and Environmental Prob- lems. Prentice-Hall, Englewood Cliffs, New Jersey, 1976. T. J. Schaefer. The complexity of satisfiability prob- lems. In Proc. 10th Annual ACM Symp. on Theory of Computing, pp. 216-226, 1978. R. E. Tarjan. Depth-first search and linear graph algo- rithms. 
SIAM JourPzal on Computing, 1:146-160, 1972. A. Tarski. On the calculus of relations. Journal of Sym- bolic Logic, 6:73-89, 1941. P. vanBeek. Approximation algorithms for temporal rea- soning. In Proc. Eleventh Int?. Joint Copzf. on Artificial Intelligence (IJCAI-89), pp. 1291-1296, August 1989. P. vanBeek. Reasoning about qualitative temporal in- formation. In Proc. Eighth Nat’l. Conf. on Artificial In- telligence (AAAI-90), pp. 728-734, August 1990. M. Vilain and H. Kautz. Constraint propagation al- gorithms for temporal reasoning. In Proc. Fifth Nat ‘1. Conf. on Artificial Intelligence (AAAI-86), pp. 337-382, August 1986. S. A. Ward and R. H. Halstead. Computation Structures. MIT Press, 1990. [22] D. S. Hochb aum and J. Naor. Simple and fast algorithms for linear and integer programs with two variables per inequality. Proc. 2nd Conf. on Integer Programming and Combinatorial Optimization Carnegie Mellon University Press, 1992, to appear. Golumbic and Shamir 747
On the Computational Complexity of Temporal Projection and Plan Validation*

Bernhard Nebel
German Research Center for Artificial Intelligence (DFKI)
Stuhlsatzenhausweg 3
D-6600 Saarbrücken 11, Germany
nebel@dfki.uni-sb.de

Abstract

One kind of temporal reasoning is temporal projection: the computation of the consequences of a set of events. This problem is related to a number of other temporal reasoning tasks such as story understanding, planning, and plan validation. We show that one particular simple case of temporal projection on partially ordered events turns out to be harder than previously conjectured. However, given the restrictions of this problem, story understanding, planning, and plan validation appear to be easy. In fact, we show that plan validation, one of the intended applications of temporal projection, is tractable for an even larger class of plans.

Introduction

The problem of temporal projection is to compute the consequences of a set of events. Dean and Boddy [1988] analyze this problem for sets of partially ordered events, assuming a propositional STRIPS-like [Fikes and Nilsson, 1971] representation of events. They investigate the computational complexity of a number of restricted problems and conclude that even severely restricted cases of the problem are NP-hard, which motivates them to develop a tractable and sound but incomplete decision procedure for the temporal projection problem. Among the restricted problems they analyze, there is one they conjecture to be solvable in polynomial time. As it turns out, however, even in this case temporal projection is NP-hard, as is shown in this paper. The result is somewhat surprising, because planning, plan validation, and story understanding seem to be easily solvable given the restrictions of this temporal projection problem.
Christer Bäckström
Department of Computer and Information Science, Linköping University
S-581 83 Linköping, Sweden
cba@ida.liu.se

*This work was supported by the German Ministry for Research and Technology (BMFT) under contract ITW 8901 8 and the Swedish National Board for Technology Development (STU) under grant STU 90-1675.

This observation casts some doubts on whether temporal projection is indeed the problem underlying plan validation and story understanding, as suggested by Dean and Boddy [1988]. It seems natural to assume that the validation of plans is not harder than planning. Our NP-hardness result for the simple temporal projection problem seems to suggest the contrary, though.

One of the most problematic points in the definition of the temporal projection problem by Dean and Boddy seems to be that event sequences are permitted to contain events that do not affect the world because their preconditions are not satisfied. If we define the plan validation problem in a way such that all possible event sequences have to contain only events that affect the world, plan validation is tractable for the class of plans containing only unconditional events, a point already suggested by Chapman [1987]. In fact, deciding a conjunction of temporal projection problems that is equivalent to the plan validation problem appears to be easier than deciding each conjunct in isolation.

Temporal Projection

Given a description of the state of the world and a description of which events will occur, we are usually able to predict what the world will look like. This kind of reasoning is called temporal projection. It seems to be the easiest and most basic kind of temporal reasoning. Depending on the representation, however, there are subtle difficulties hidden in this reasoning task.
The formalization of the temporal projection problem for partially ordered events given below closely follows the presentation by Dean and Boddy [1988, Sect. 2].

Definition 1 A causal structure is given by a tuple Φ = (P, E, R), where
- P = {p1, ..., pn} is a set of propositional atoms, the conditions,
- E = {ε1, ..., εm} is a set of event types,
- R = {r1, ..., rq} is a set of causal rules of the form ri = (εi, φi, αi, δi), where
  - εi ∈ E is the triggering event type,
  - φi ⊆ P is a set of preconditions,
  - αi ⊆ P is the add list,
  - and δi ⊆ P is the delete list.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

In order to give an example, assume a toy scenario with a hall, a room A, and another room B. Room A contains a public phone, and room B contains an electric outlet. The robot Robby can be in the hall (denoted by the atom h), in room A (a), or in room B (b). Robby can have a phone card (p) or coins (c). Additionally, when Robby uses the phone, he can inform his master on the phone that everything is in order (i). Robby can be fully charged (f), almost empty (e), or, in unlucky circumstances, his batteries can be damaged (d). Summarizing, the set of conditions for our tiny causal structure is the following:

P = {a, b, h, p, c, i, d, e, f}.

Robby can do the following. He can move from the hall to either room (εh→a, εh→b) and vice versa (εa→h, εb→h). Provided he is in room a and he has a phone card or coins, he can call his master (εcall). Additionally, if Robby is in room b, he can recharge himself (εcharge). However, if Robby is already fully charged, this results in damaging his batteries. Summarizing, we have the following set of event types:

E = {εh→a, εh→b, εa→h, εb→h, εcall, εcharge},

and the following set of causal rules:

R = {(εh→a, {h}, {a}, {h}),
     (εh→b, {h}, {b}, {h}),
     (εa→h, {a}, {h}, {a}),
     (εb→h, {b}, {h}, {b}),
     (εcall, {a, p}, {i}, ∅),
     (εcall, {a, c}, {i}, {c}),
     (εcharge, {b, e}, {f}, {e}),
     (εcharge, {b, f}, {d}, {f})}.
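To make the toy scenario concrete, the causal structure can be written down directly as data. The following is our own illustrative encoding, not part of the paper; the event-type names and the `fire` helper are our choices. Each causal rule is a triple (preconditions, add list, delete list), and deletes are applied before adds, as in the paper's definition of Res below.

```python
# A hypothetical encoding of the toy causal structure (names are ours).
# Each rule is (preconditions, add list, delete list), keyed by event type.
RULES = {
    "h->a": [({"h"}, {"a"}, {"h"})],
    "h->b": [({"h"}, {"b"}, {"h"})],
    "a->h": [({"a"}, {"h"}, {"a"})],
    "b->h": [({"b"}, {"h"}, {"b"})],
    # calling works with a phone card (p) or coins (c);
    # only the coin variant consumes its resource
    "call": [({"a", "p"}, {"i"}, set()),
             ({"a", "c"}, {"i"}, {"c"})],
    # recharging from almost-empty (e) gives a full charge (f);
    # recharging when already full (f) damages the batteries (d)
    "charge": [({"b", "e"}, {"f"}, {"e"}),
               ({"b", "f"}, {"d"}, {"f"})],
}

def fire(state, event_type):
    """Apply every applicable rule of one event type to a state."""
    applicable = [r for r in RULES[event_type] if r[0] <= state]
    deletes = set().union(*[d for _, _, d in applicable])
    adds = set().union(*[a for _, a, _ in applicable])
    return (state - deletes) | adds

print(sorted(fire({"h", "e", "c"}, "h->a")))  # ['a', 'c', 'e']
```

An event whose preconditions fail simply leaves the state unchanged, which is exactly the subtlety the paper exploits later.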
In order to talk about sets of concrete events and temporal constraints over them, the notion of a partially ordered event set is introduced.¹

Definition 2 Assuming a causal structure Φ = (P, E, R), a partially ordered event set (POE) over Φ is a pair ∆Φ = (A, ≺) consisting of a set of actual events A = {e1, ..., ep} such that type(ei) ∈ E, and a strict partial order² ≺ over A.

Continuing our example, we assume a set of six actual events A = {A, B, C, D, E, F}, such that

type(A) = εh→a, type(B) = εcall, type(C) = εa→h,
type(D) = εh→b, type(E) = εcharge, type(F) = εb→h,

and A ≺ B ≺ C, D ≺ E ≺ F.

¹This notion is similar to the notion of a nonlinear plan.
²A strict partial order is a transitive and irreflexive relation.

POEs denote sets of possible sequences of events satisfying the partial order. A partial event sequence of length m over such a POE (A, ≺) is a sequence f = (f1, ..., fm) such that (1) {f1, ..., fm} ⊆ A, (2) fi ≠ fj if i ≠ j, and (3) for each pair fi, fj of events appearing in f, if fi ≺ fj then i < j. For instance, (A, B, C) is a partial event sequence of length three over the POE given above, while (A, C, B) is not. If the event sequence is of length |A|, it is called a complete event sequence over the POE. The sequences (A, B, C, D, E, F) and (A, D, B, E, C, F) are complete event sequences, for instance. The set of all complete event sequences over a POE ∆ is denoted by CS(∆).

If f = (f1, ..., fk, ..., fm) is an event sequence, then (f1, ..., fk) is the initial sequence of f up to fk, written f/fk. Similarly, f\fk denotes the initial sequence (f1, ..., fk−1) consisting of all events before fk. Further, we write f; g to denote (f1, ..., fm, g).

Each event maps states (subsets of P) to states. Let S ⊆ P denote a state and let e be an event. Then we say that the causal rule r is applicable in state S iff r = (type(e), φ, α, δ) and φ ⊆ S. Given e and S, app(S, e) denotes the set of all applicable rules for e in state S.
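The set CS(∆) for the example POE can be enumerated by brute force: a complete event sequence is just a permutation of the six events that respects the two chains A ≺ B ≺ C and D ≺ E ≺ F. A small sketch (our own, not from the paper):

```python
from itertools import permutations

EVENTS = ["A", "B", "C", "D", "E", "F"]
# the strict partial order A < B < C and D < E < F, as covering pairs
ORDER = {("A", "B"), ("B", "C"), ("D", "E"), ("E", "F")}

def complete_sequences(events, order):
    """All complete event sequences over the POE, i.e. the set CS."""
    seqs = []
    for perm in permutations(events):
        pos = {e: i for i, e in enumerate(perm)}
        if all(pos[x] < pos[y] for x, y in order):
            seqs.append(perm)
    return seqs

cs = complete_sequences(EVENTS, ORDER)
# interleavings of two chains of length 3: C(6, 3) = 20 sequences
print(len(cs))  # 20
```

Already for this tiny example CS contains 20 sequences; in general the set grows exponentially, which is the source of the complexity results below.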
An event e is said to affect the world in a state S iff app(S, e) ≠ ∅. In order to simplify notation, we write φ(r), α(r), δ(r) to denote the sets φ, α, and δ, respectively, appearing in the rule r = (ε, φ, α, δ). If there is only one causal rule associated with the event type type(e), we will also use the notation φ(e), α(e), and δ(e). Based on this notation, we define what we mean by the result of a sequence of events relative to a state S.

Definition 3 The function "Res" from states and event sequences to states is defined recursively by:³

Res(S, ()) = S
Res(S, (f; g)) = Res(S, f) − ∪{δ(r) | r ∈ app(Res(S, f), g)} ∪ ∪{α(r) | r ∈ app(Res(S, f), g)}.

It is easy to verify that the following equation holds for our example scenario:

Res({h, e, c}, (A, B, C, D, E, F)) = {h, f, i}.

The definition of the function Res permits sequences of events where events occur that do not affect the world. For instance, it is possible to ask what the result of (A, D, B, E, C, F) in state {h, e, c} will be:

Res({h, e, c}, (A, D, B, E, C, F)) = {h, e, i}.

³Note that it can happen that two rules are applicable in a state, one adding and one deleting the same atom p. In this case, we follow [Dean and Boddy, 1988] and assume that p holds after the event, as reflected by the definition of Res.

Nebel and Bäckström 749

Although perfectly well-defined, this result seems strange because the events D, E, and F occurred without having any effect on the state of the world. Given a state S, we will often restrict our attention to event sequences such that all events affect the world. These sequences are called admissible event sequences relative to the state S. The set of all complete event sequences over ∆ that are admissible relative to S is denoted by ACS(∆, S). In the following, we will often talk about which consequences a POE will have on some initial state. For this purpose, the notion of an event system is introduced.
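Definition 3 translates directly into code. The sketch below (our own encoding; rule and event names are assumptions, not the paper's) implements app and Res for the Robby scenario and reproduces both example computations, including the second one, where D, E, and F fail to affect the world:

```python
# Our encoding of the toy causal structure: rules per event type,
# each rule being (preconditions, add list, delete list).
RULES = {
    "h->a": [({"h"}, {"a"}, {"h"})],
    "h->b": [({"h"}, {"b"}, {"h"})],
    "a->h": [({"a"}, {"h"}, {"a"})],
    "b->h": [({"b"}, {"h"}, {"b"})],
    "call": [({"a", "p"}, {"i"}, set()), ({"a", "c"}, {"i"}, {"c"})],
    "charge": [({"b", "e"}, {"f"}, {"e"}), ({"b", "f"}, {"d"}, {"f"})],
}
TYPE = {"A": "h->a", "B": "call", "C": "a->h",
        "D": "h->b", "E": "charge", "F": "b->h"}

def app(state, event):
    """app(S, e): the causal rules of e's type applicable in state S."""
    return [r for r in RULES[TYPE[event]] if r[0] <= state]

def res(state, seq):
    """Res(S, f): per Definition 3, deletes are removed and adds inserted."""
    for event in seq:
        rules = app(state, event)
        deletes = set().union(*[d for _, _, d in rules])
        adds = set().union(*[a for _, a, _ in rules])
        state = (state - deletes) | adds
    return state

print(sorted(res({"h", "e", "c"}, "ABCDEF")))  # ['f', 'h', 'i']
print(sorted(res({"h", "e", "c"}, "ADBECF")))  # ['e', 'h', 'i']
```

In the second run, D and E occur while their preconditions are false, so Robby never recharges: the state keeps e instead of gaining f.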
Definition 4 An event system Θ is a pair (∆Φ, I), where ∆Φ is a POE over the causal structure Φ = (P, E, R), and I ⊆ P is the initial state.

In order to simplify notation, the functions CS and ACS are extended to event systems with the obvious meaning, i.e., CS((∆, I)) = CS(∆) and ACS((∆, I)) = ACS(∆, I). Further, if CS(Θ) = ACS(Θ), Θ is called coherent.

The problem of temporal projection as formulated by Dean and Boddy [1988] is to determine whether some condition holds, possibly or necessarily, after a particular event of an event system.

Definition 5 Given an event system Θ, an event e ∈ A, and a condition p ∈ P:

p ∈ Poss(e, Θ) iff ∃f ∈ CS(Θ): p ∈ Res(I, f/e)
p ∈ Nec(e, Θ) iff ∀f ∈ CS(Θ): p ∈ Res(I, f/e).

Continuing our example, let us assume the initial state I = {h, e, c}. Then the following can be easily verified:

i ∈ Poss(B, Θ)    i ∉ Nec(B, Θ)
d ∉ Poss(E, Θ)    d ∉ Nec(E, Θ).

In plain words, Robby is only possibly but not necessarily successful in calling his master. On the positive side, however, we know that Robby's batteries will not be damaged, regardless of the order in which the events happen.

Given a set of conditions S and a sequence f, Res(S, f) can easily be computed in polynomial time. Since the set CS(Θ) may contain exponentially many sequences, however, it is not obvious whether p ∈ Poss(e, Θ) and p ∈ Nec(e, Θ) can be decided in polynomial time.

A "Simple" Temporal Projection Problem

In the general case, temporal projection is quite difficult. Dean and Boddy [1988] show that the decision problems p ∈ Poss(e, Θ) and p ∈ Nec(e, Θ) are NP-complete and co-NP-complete, respectively, even under some severe restrictions, such as restricting α or δ to be empty for all rules, or requiring that there is only one causal rule associated with each event type.

Definition 6 An event system is called unconditional iff for each ε ∈ E, there exists only one causal rule with the triggering event type ε.
An event system is called simple iff it is unconditional, I is a singleton, and for each causal rule r = (ε, φ, α, δ), the sets φ, α, and δ are singletons and φ = δ.

Dean and Boddy conjecture that temporal projection is a polynomial-time problem for simple event systems [Dean and Boddy, 1988, p. 379]. As it turns out, however, this problem is also computationally difficult.

Theorem 1 For simple event systems Θ, deciding p ∈ Poss(e, Θ) is NP-complete and deciding p ∈ Nec(e, Θ) is co-NP-complete.

Proof Sketch. First we show NP-completeness of p ∈ Poss(e, Θ). Membership in NP is obvious: guess an event sequence f and verify in polynomial time that f ∈ CS(Θ) and p ∈ Res(I, f/e).

In order to prove NP-hardness, we give a polynomial transformation from path with forbidden pairs (PWFP) to the temporal projection problem. The former problem is defined as follows: Given a directed graph G = (V, A), two vertices s, t ∈ V, and a collection C = {{a1, b1}, ..., {an, bn}} of pairs of arcs from A, is there a directed path from s to t in G that contains at most one arc from each pair in C? This problem is NP-complete, even if the graph is acyclic and all pairs are disjoint [Garey and Johnson, 1979, p. 203].

We now construct an instance of the simple temporal projection problem from a given instance of the PWFP problem, assuming that the graph is acyclic and the forbidden pairs are all disjoint. Let G = (V, A) be a DAG, where V = {v1, ..., vk}, and let C be a collection of "forbidden pairs" of arcs from A. Further, let s and t be two vertices from V and assume without loss of generality that there is no arc (t, vi) ∈ A. Then define

P = {v1, ..., vk} ∪ {*},
E = {εi,j | (vi, vj) ∈ A} ∪ {ε*},
R = {(εi,j, {vi}, {vj}, {vi}) | (vi, vj) ∈ A} ∪ {(ε*, {*}, {*}, {*})},
the set of actual events is {ei,j | εi,j ∈ E} ∪ {e*}, with
type(ei,j) = εi,j for all ei,j ≠ e*,
type(e*) = ε*,
e ≺ e* for all actual events e ≠ e*,
ek,l ≺ ei,j iff {(vi, vj), (vk, vl)} ∈ C and there is a path from vj to vk, and
I = {s}.
Note that Θ can be constructed in polynomial time and that Θ is a simple event system. Further note that since the forbidden pairs are pairwise disjoint, there is no set of three events {f1, f2, f3} among the events other than e* such that f1 ≺ f2 ≺ f3. It is now easy to verify that there is a path from s to t in G that contains at most one arc from each pair in C if, and only if, t ∈ Poss(e*, Θ).

The co-NP-hardness result for the second problem follows by a slight modification of the above transformation. Membership in co-NP is again obvious.⁴

This result is somewhat surprising because one might suspect that story understanding and planning are easy under the restrictions imposed on the structure of event systems. In fact, a highly abstract form of story understanding is a polynomial-time problem under these restrictions [Nebel and Bäckström, 1991; Bäckström and Nebel, 1992]. Also planning is an easy problem in this context. Planning can usually be transformed to the problem of finding a shortest path in a graph, which is a polynomial-time problem. In the general case, the size of the graph is exponential in the size of the problem, but it turns out that the simple problem corresponds to a linearly sized graph. Hence, the problem can be solved in polynomial time. Similar tractability results have been obtained by Bylander [1991], Erol et al. [1991] and Bäckström and Klein [1991] for more complicated planning problems. Some relations between these results and the complexity results for temporal projection are discussed in the full paper [Nebel and Bäckström, 1991].

One reason for analyzing the temporal projection problem is that it seems to constitute the heart of plan validation. If we now consider the restrictions placed on the simple temporal projection problem, we have already noted that planning itself, a problem one would expect to be harder than validation, is quite easy.
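The reduction in the proof of Theorem 1 can be exercised on a tiny PWFP instance. The sketch below is our own brute-force illustration (function names and the toy graph are assumptions, not the paper's): a diamond graph with paths s→a→t and s→b→t, where forbidding one whole path still leaves t possible, while forbidding both does not.

```python
from itertools import permutations

def reduce_pwfp(vertices, arcs, forbidden_pairs, s):
    """Build the simple event system of the Theorem 1 reduction (our sketch).
    Each arc (u, v) yields an event with rule ({u}, {v}, {u}); e* is last;
    for a forbidden pair, e_(k,l) must precede e_(i,j) whenever the head of
    (i,j) can reach the tail of (k,l)."""
    reach = {v: {v} for v in vertices}          # reflexive reachability
    changed = True
    while changed:
        changed = False
        for (u, v) in arcs:
            new = reach[v] - reach[u]
            if new:
                reach[u] |= new
                changed = True
    events = {("e", a): ({a[0]}, {a[1]}, {a[0]}) for a in arcs}
    events["e*"] = ({"*"}, {"*"}, {"*"})
    order = {(e, "e*") for e in events if e != "e*"}
    for (i, j), (k, l) in forbidden_pairs:
        if k in reach[j]:
            order.add((("e", (k, l)), ("e", (i, j))))
        if i in reach[l]:
            order.add((("e", (i, j)), ("e", (k, l))))
    return events, order, {s}

def possible(goal, events, order, init):
    """Brute-force t ∈ Poss(e*): e* is last, so f/e* is the whole sequence."""
    for perm in permutations(list(events)):
        pos = {e: i for i, e in enumerate(perm)}
        if not all(pos[x] < pos[y] for x, y in order):
            continue
        state = set(init)
        for e in perm:
            phi, add, dele = events[e]
            if phi <= state:
                state = (state - dele) | add
        if goal in state:
            return True
    return False

arcs = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
one_pair = [(("s", "a"), ("a", "t"))]
both_pairs = one_pair + [(("s", "b"), ("b", "t"))]
ev, od, init = reduce_pwfp({"s", "a", "b", "t"}, arcs, one_pair, "s")
print(possible("t", ev, od, init))   # True: the path s -> b -> t survives
ev, od, init = reduce_pwfp({"s", "a", "b", "t"}, arcs, both_pairs, "s")
print(possible("t", ev, od, init))   # False: every s-t path uses a whole pair
```

The precedence constraints do the work: when both arcs of a path form a forbidden pair, the downstream arc's event is forced to occur before the token can reach it, so it never affects the world.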
One explanation for this apparently paradoxical situation could be that a planner could create the complicated structure we used in the proof of Theorem 1, but it never would do so. Hence, the theoretical complexity never shows up in reality. This explanation is unsatisfying, however. If this were really the case, we should be able to characterize the structure of the nonlinear plans planning systems create and validate. The real reason is more subtle, as will be shown below.

Temporal Projection and Plan Validation

Dean and Boddy [1988, p. 378] suggest that temporal projection is the basic underlying problem in plan validation:

A nonlinear plan is represented as a set of actions {e1, ..., en} partially ordered by ≺. Each action has some set of intended effects: Intended(ei) ⊆ P. A nonlinear plan is said to be valid just in case Intended(ei) ⊆ Necessary(ei), for 1 ≤ i ≤ n.

⁴Complete proofs can be found in the full paper [Nebel and Bäckström, 1991].

Although this definition sounds reasonable, there are some points which are arguable. We use a slightly different definition of plan validation in the following.

Definition 7 A POE ∆Φ over a causal structure Φ = (P, E, R) is called a valid nonlinear plan with respect to an initial state I ⊆ P and a goal G ⊆ P iff ∆Φ achieves its goal, i.e., G ⊆ Res(I, f) for all f ∈ CS(∆Φ), and (∆Φ, I) is coherent.

Note that our definition coincides with Chapman's [1987, p. 340] definition of when a plan solves a problem. In contrast to Dean and Boddy's formulation, our definition does not refer to the intended effects of particular events but to the effects of the overall plan and to the state before particular events. Further note that plan validation can be reduced to deciding coherence of an event system in linear time. If ∆Φ is a POE and G is the goal state, ∆*Φ shall denote the POE ∆Φ extended by an event e* such that e* has to occur last and there is exactly one causal rule associated with e* such that φ(e*) = G.
Proposition 2 A POE ∆Φ is a valid nonlinear plan with respect to I and G iff (∆*Φ, I) is a coherent event system.

In what follows, we show that coherence, and, hence, the validity of nonlinear plans, can be decided in polynomial time, provided the event system is unconditional. Although the restriction may sound severe, it shows that plan validation is tractable for a considerably larger class of plans than temporal projection. In the full paper [Nebel and Bäckström, 1991] we argue that the restriction to unconditional actions is not very severe given the formalism used in this paper.

First of all, we note that coherence cannot be easily reduced to temporal projection as defined by Dean and Boddy since coherence refers to the state before an event occurs. For this reason, we define a variant of the temporal projection problem.

Definition 8 Given an event system Θ, an event e ∈ A, and a condition p ∈ P:

p ∈ Possb(e, Θ) iff ∃f ∈ CS(Θ): p ∈ Res(I, f\e)
p ∈ Necb(e, Θ) iff ∀f ∈ CS(Θ): p ∈ Res(I, f\e).

Deciding p ∈ Necb(e, Θ) instead of p ∈ Nec(e, Θ) does not simplify anything. All the NP-hardness proofs for Nec can be easily used to show NP-hardness for Necb. Nevertheless, using this variant of temporal projection we can decide coherence for unconditional event systems.

Proposition 3 An unconditional event system Θ is coherent iff ∀e ∈ A: φ(e) ⊆ Necb(e, Θ).

In order to simplify the following discussion, we will restrict ourselves to consistent unconditional event systems, which have to meet the restriction that
Definition 9 Given a consistent unconditional event system 0, an atom p f P, and an event e E d, Muybeb(e, 0) is defined as follows: p E Maybe&, 0) Q-7 (1) p E Z V 3e’ E A: [e’ + e A p E (r(e’)]A (2) de’ E d - {e}: [e’ # e A e # e’ A p E b(e’)]A (3)Ve’Ed: [(e’-(eApEb(e’))+ 3e” E d: (e’ 4 e” 4 e A p E cx(e”))]. This definition resembles Chapman’s [1987] modal truth criterion. The first condition states that p has to be established before e. The second condition makes sure that there is no event unordered w.r.t. e that could delete p, and the third condition enforces that for all events that could delete p and that occur before e, some other event will reestablish p. It is obvious that this criterion can be checked in polynomial time. Maybeb is nei ther sound nor complete w.r.t. Necb in the general case because we do not know whether the events referred to in the definition actually affect the world. However, Maybeb coincides with Necb in the important special case that the event system is coherent. Lemma 4 Let 0 be an consistent system. If 0 is coherent, then unconditional event ve E d: Necb(e, 0) = Maybeb(e, 0). Proof Sketch. “C”: Suppose that the first condi- tion does not hold for some event e and atom p E Necb(e, 0). S ince 0 is coherent, we can construct an admissible complete event sequence f = (fi , . . . , e, . . .) such that g = f\e contains only events ga such that gi 4 e. By induction over the length of f\e, we get p $! Res(Z, f\e), which is a contradiction. Suppose that the second condition does not hold, i.e., there exists an event e’ unordered with respect to e such that p E s(e’). Then there exists a complete event sequence f = (fi , . . . , e’ , e, . . .) . Since 0 is coherent, and thus e’ affects the world, it is obvious that p 4 Res(Z,f/e’) = Res(Z,f\e), which is a contradiction. 
Suppose the third condition is not satisfied, i.e., there exists p ∈ Necb(e, Θ) and an event e′ ≺ e such that p ∈ δ(e′), but there is no e″ such that e′ ≺ e″ ≺ e and p ∈ α(e″). Consider a complete event sequence f such that the only events ei between e′ and e are those that have to occur between them. Because p ∉ Res(I, f/e′) and because by assumption p ∉ α(ei) for all events ei occurring between e′ and e, we can infer p ∉ Res(I, f\e), which is again a contradiction.

"⊇": Assume p ∈ Maybeb(e, Θ). Consider any complete event sequence g ∈ CS(Θ). We want to show that p ∈ Res(I, g\e). By condition (1) of the definition of Maybeb and the fact that all complete event sequences are admissible, we know that there exists gi ∈ A such that |g\gi| ≤ |g\e| and p ∈ Res(I, g\gi). Consider the latest such event, i.e., gi with a maximal i. Since all event sequences are finite, such an event must exist. If gi = e, we are ready. Otherwise, because of conditions (2) and (3), i cannot be maximal. □

Now we can give a necessary and sufficient condition for coherence of consistent unconditional event systems.

Theorem 5 A consistent unconditional event system Θ is coherent iff ∀e ∈ A: φ(e) ⊆ Maybeb(e, Θ).

Proof Sketch. "⇒": Follows immediately from Lemma 4.

"⇐": For the converse direction, we use induction on the number of preconditions over the entire event system: k = Σe∈A |φ(e)|. For the base step, k = 0, the claim holds trivially. For the induction step, assume an event system Θ with k+1 preconditions such that φ(e) ⊆ Maybeb(e, Θ) for all e ∈ A. Consider an event system Θ′ that is identical to Θ except that for one event f such that φ(f) ≠ ∅ we set φ′(f) = ∅. Since Σe∈A |φ′(e)| ≤ k, we can apply our induction hypothesis and conclude that Θ′ is coherent. By Lemma 4, we have φ(f) ⊆ Maybeb(f, Θ) = Maybeb(f, Θ′) = Necb(f, Θ′).
Hence, any sequence g ∈ CS(Θ′) that contains f is an admissible sequence even if we would have φ′(f) = φ(f). Since CS(Θ) = CS(Θ′), it follows that Θ is coherent. □

Since plan validation can be reduced to coherence in linear time, it is a polynomial-time problem if the causal structure is unconditional.

Theorem 6 Plan validation for unconditional causal structures is a polynomial-time problem.

Proof Sketch. Follows from Proposition 2, from Theorem 5, the fact that any unconditional event structure can be transformed into a consistent one in linear time, and the fact that Maybeb can be decided in polynomial time. □

One interesting point to note about this result is that it appears to be easier to decide a big conjunction of the form

⋀e∈A φ(e) ⊆ Necb(e, Θ)

than to decide one of the conjuncts. In other words, the claim by Dean and Boddy [1988] that temporal projection (in some form) is the underlying problem of plan validation is conceptually correct. However, it turns out that solving the subproblems is harder than solving the original problem (assuming NP ≠ P).

Intuitively, temporal projection is difficult because we cannot avoid considering all elements of CS(Θ), as demonstrated in the proof of Theorem 1. Plan validation for unconditional causal structures is easy, on the other hand, since satisfaction of all preconditions can be reduced to a local syntactic property.

Although perhaps surprising, the result is not new. Chapman [1987] used a similar technique to prove plan validation to be a polynomial-time problem for a slightly different formalism. It should be noted, however, that Chapman's [1987, p. 368] proof of the correctness and soundness of the modal truth criterion is correct only if we make the assumption that the plan is already coherent, a property we want to decide. In fact, it seems to be the case that Chapman failed to prove the second half of our Theorem 5.
Discussion

Reconsidering the problem of temporal projection for sets of partially ordered events as defined by Dean and Boddy [1988], we noted that one special case conjectured to be tractable turned out to be NP-complete. Although this result does not undermine the arguments of Dean and Boddy [1988] that temporal projection is a quite difficult problem, it leads to a counter-intuitive conclusion, namely, that planning is easier than temporal projection in this special case.

Further, we showed that plan validation, if defined appropriately, is tractable for a more general problem, namely validation of unconditional nonlinear plans. This means that the problem of validating a plan as a whole is easier than validating all its actions separately. In other words, what might look like a divide-and-conquer strategy at first glance is rather the opposite.

These two observations lead to the question of whether the formalization of temporal projection [Dean and Boddy, 1988] really captures one of the intended applications, namely, validation of nonlinear plans. In particular, one may ask whether the incomplete decision procedure for temporal projection developed by Dean and Boddy [1988] is based on the right assumptions. It turns out that the incomplete decision procedure fails on plans that could be validated in polynomial time using the techniques described above [Nebel and Bäckström, 1991; Bäckström and Nebel, 1992].

As a final remark, it should be noted that the criticisms expressed in this paper are possible only because Dean and Boddy [1988] made their ideas and claims very explicit and formal.

Acknowledgements

We would like to thank Gerd Brewka, Bart Selman, and the anonymous referees, who provided helpful comments on an earlier version of this paper.

References

Bäckström, Christer and Klein, Inger 1991. Parallel non-binary planning in polynomial time. In Mylopoulos and Reiter [1991]. 268-273.
An extended version of this paper is available as Research Report LiTH-IDA-R-91-11, Department of Computer and Information Science, Linköping University, Linköping, Sweden.

Bäckström, Christer and Nebel, Bernhard 1992. On the computational complexity of planning and story understanding. In Neumann, B., editor 1992, Proceedings of the 10th European Conference on Artificial Intelligence, Vienna, Austria. Wiley. To appear.

Bylander, Tom 1991. Complexity results for planning. In Mylopoulos and Reiter [1991]. 274-279.

Chapman, David 1987. Planning for conjunctive goals. Artificial Intelligence 32(3):333-377.

Dean, Thomas L. and Boddy, Mark 1988. Reasoning about partially ordered events. Artificial Intelligence 36(3):375-400.

Erol, Kutluhan; Nau, Dana S.; and Subrahmanian, V. S. 1991. Complexity, decidability and undecidability results for domain-independent planning. Technical Report CS-TR-2797, UMIACS-TR-91-154, Department of Computer Science, University of Maryland, College Park, MD.

Fikes, Richard E. and Nilsson, Nils 1971. STRIPS: A new approach to the application of theorem proving as problem solving. Artificial Intelligence 2:189-208.

Garey, Michael R. and Johnson, David S. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, CA.

Mylopoulos, John and Reiter, Ray, editors 1991. Proceedings of the 12th International Joint Conference on Artificial Intelligence, Sydney, Australia. Morgan Kaufmann.

Nebel, Bernhard and Bäckström, Christer 1991. On the computational complexity of temporal projection and some related problems. Research Report RR-91-34 (DFKI) and LiTH-IDA-R-91-34 (Univ. Linköping), German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany, and Department of Computer and Information Science, Linköping University, Linköping, Sweden, 1991.
A Non-Well-Founded Approach to Terminological Cycles

Robert Dionne, Eric Mays, Frank J. Oles
IBM T. J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598

Abstract

In this paper, we propose a new approach to intensional semantics of term subsumption languages. We introduce concept algebras, whose signatures are given by sets of primitive concepts, roles, and the operations of the language. For a given set of variables, standard results give us free algebras. We next define, for a given set of concept definitions, a term algebra, as the quotient of the free algebra by a congruence generated by the definitions. The ordering on this algebra is called descriptive subsumption. We also construct a universal concept algebra, as a non-well-founded set given by the greatest fixed point of a certain equation. The ordering on this algebra is called structural subsumption. We prove there are unique mappings from the free algebras to each of these, and establish that our method for classifying cycles in a term subsumption language, K-REP, consists of constructing accessible pointed graphs, representing terms in the universal concept algebra, and checking a simulation relation between terms.

Introduction

Classification of cycles in term subsumption languages has thus far been avoided, for a variety of sound and perhaps not so sound reasons. In this paper we will discuss how cycle classification is handled in K-REP [Mays et al., 1991], a KL-ONE [Brachman and Schmolze, 1985] style of language. Using ideas from universal algebra and the theory of non-well-founded sets, a new framework, that of concept algebras, is presented. These algebras elucidate the structural comparisons that are actually made when testing subsumptions. The motivation is to view terms (referred to as concepts) in these languages as intensional descriptions, and to view subsumption as a process of structural comparison between terms.
In this sense this framework differs from existing treatments that make use of model theory, in that it closely corresponds to the actual implementation of the classifier in K-REP.

In a recent paper Bill Woods [Woods, 1991] has suggested that a more intensional view of concept descriptions should be taken. Concepts might describe things that may or may not exist in the world. Different concept descriptions can have the same meanings, yet still may be regarded as distinct concepts ("the morning star" vs. "the evening star"). To date most of the work in semantics of term subsumption languages takes an extensional view. Concepts are interpreted as sets of objects from some universe. Roles of concepts are interpreted as binary relations over the universe. These languages are thus seen as some subset of first-order logic. Concept descriptions are complex predicates and one asks whether or not given instances satisfy those predicates.

In this paper we would like to pursue Woods' suggestion and try to look at concepts intensionally. We will consider a small subset of the K-REP language, which has primitive concepts, concepts formed by conjunctions, and roles of concepts whose value restrictions are other concepts. This subset is chosen not only to simplify the presentation, but because it is not yet clear how to extend these ideas to more complex constructs like disjunctions and negations. This subset is roughly the same subset handled in [Baader, 1990]. A knowledge base is seen as a set of possibly mutually recursive equations, involving terms of this concept language.

The outline of this paper is as follows. The next section is a brief overview of the K-REP language. Next will be a general discussion of cycles, the type that are of use, how they arise, and how they are handled in K-REP. As pointed out by Bernhard Nebel [Nebel, 1990], the type of cycles of interest appear through role chains.
What he refers to as descriptive semantics comes closest to capturing our intuitive understanding of them. Descriptive semantics, as well as least and greatest fixed point semantics, are all based on the view of modeling concept descriptions as subsets of some universe. This is very appealing, as we think of more general concepts as describing larger classes of objects. However, implementations reason with the descriptions of the classes; subsumption is determined by structural comparison and not by subset inclusion of sets of objects.

Dionne, Mays, and Oles 761
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

We then present a new model for this subset of the K-REP language. It makes use of basic tools from universal algebra [Jacobson, 1989], and Aczel's theory of non-well-founded sets [Aczel, 1988]. What will be modeled are the descriptions of the concepts. First we give a signature and a set of axioms, for which we can discuss a class of algebras called concept algebras. Standard results will give us free algebras generated from a given set of variables. Terms in the algebras will correspond to concept descriptions. Next a Knowledge Base (hereafter KB) is defined as a set of possibly mutually recursive definitions over terms of a free algebra. These equations generate a congruence on the free algebra, and the quotient algebra is then the concept algebra generated from this given KB. Any KB thus gives rise to a quotient algebra in this way. We will refer to the ordering on terms of this algebra as descriptive subsumption. This algebra can be uniquely mapped to a certain non-well-founded set C that arises as the greatest fixed point solution of a certain equation. Suitable operators are defined on C to make it, too, into a concept algebra. We will see that in some sense C is the most abstract model for our language in that it captures concepts as intensional descriptions.
The subsumption ordering on this algebra will be called structural subsumption. The reason that C is interesting is that its ordering captures the essence of the implementation of subsumption in K-REP, even when cycles occur. We are hopeful that we can extend this model to also include disjunctions of concepts, since they can be viewed as sets of descriptions, one of which a given instance might satisfy.

The Representation Language

K-REP is in the class of languages known as term subsumption languages. Terms in K-REP are called concepts. Concepts are meant to describe classes of objects in some universe, both by defining them in terms of other concepts, and by describing the attributes a given class has. There are two types of concepts, primitive and defined. Primitive concepts are like natural kinds: their attributes are necessarily true of instances belonging to the concept, but are not sufficient to determine membership in the class represented by the concept. Defined concepts on the other hand are normally defined in terms of other more general primitive concepts. Their attributes are both necessary and sufficient for instances belonging to the class. Attributes are called roles, and since they have values, they can be viewed as binary relations over the classes. Terms are constructed by a few concept-forming operators. Using set-theoretic semantics, concepts are interpreted as subsets of some universe U, and roles as binary relations on U. Table 1 shows the operators together with their abstract form and semantics for the subset of K-REP we will be considering in this paper. This is the standard set-theoretic semantics often seen in the literature. It is included here for purposes of comparison with the semantics we propose in this paper.

In order to simplify the technical details of our model introduced in the next section, we will assume that introduced primitives are defined only in terms of top.
Since other defined primitives can be expressed in terms of these and role definitions, this results in no loss of expressivity in our language. A knowledge base (KB) is a collection of concept terms. Since we are not considering instances, we make no distinctions between T-boxes and A-boxes. A given set of concept definitions will define a KB. Consider the following KB:

    C ≡ (and P (R W))
    W ≡ (and P (S C))

where P is a primitive concept, R and S are roles, and C and W are defined concepts. Notice that these two concepts contain a simple cycle. In the first implementation of K-REP forward references were not allowed, so that these could not be defined without some hackery. We will write (R W) as an abbreviation for (∀∃R : W), in order to motivate the view of roles as meet-homomorphisms on concepts, which we will see in our model. Since conjunction in the language is just set-theoretic intersection in the semantics, this is just the statement that (and (R C) (R W)) ≡ (R (and C W)).

The subsumption relation produces an ordering on the concepts of a given KB. One concept subsumes another, written C1 ⪰ C2, if all of C1's primitives subsume a primitive of C2, and for each role R of C1, C2 has that role and the value restriction of R on C1 subsumes the value restriction of R on C2. Note that if C1 ⪰ C2, then (and C1 C2) must be equivalent to C2, in the sense that each subsumes the other.

Cycles

When cycles are allowed through role chains, one can see that a straightforward approach to subsumption testing can lead to infinite looping. In the original implementation of K-REP, concepts were classified one at a time and forward references were not allowed. This prevented the occurrences of cycles, other than those of length one, which caused no harm and were more or less ignored. A first attempt at handling cycles was to classify all the concepts involved in a cycle at the same time.
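The subsumption test defined in the previous paragraph can be phrased as a simple recursive comparison. The following Python sketch is our own illustration, not K-REP code: a concept is a pair of a primitive set and a role map, primitives subsume only themselves, and definitions are assumed acyclic (the cyclic case is exactly what the rest of this section is about).

```python
def subsumes(c1, c2):
    """c1 subsumes c2: every primitive of c1 occurs among c2's primitives,
    and every role of c1 is a role of c2 whose value restriction is
    subsumed as well."""
    prims1, roles1 = c1
    prims2, roles2 = c2
    if not prims1 <= prims2:
        return False
    return all(r in roles2 and subsumes(v, roles2[r])
               for r, v in roles1.items())

# An acyclic variant of the running example, for illustration:
P = (frozenset({"P"}), {})
W = (frozenset({"P"}), {})
C = (frozenset({"P"}), {"R": W})
C_more = (frozenset({"P", "Q"}), {"R": W, "R2": P})  # extra primitive and role

assert subsumes(C, C_more)       # the more general concept subsumes
assert not subsumes(C_more, C)
```

On cyclic definitions this naive recursion would loop forever, which motivates the strategy described next.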
Each concept in the cycle is classified as far as possible, and one loops through them all until no further relationships are discovered. This appeared to work well and only required that, given a collection of concepts, one can detect the cycles syntactically before they are classified. However, it missed the following case:

    C  ≡ (and P (R1 W))
    W  ≡ (and P (S1 C))
    C' ≡ (and P (R1 W') (R2 P'))
    W' ≡ (and P (S1 C') (S2 P'))

762 Representation and Reasoning: Terminological

Table 1: K-REP Language

    Concept Forming Operators
    Concrete Form          Abstract Form     Semantics
    top                    ⊤                 U
    (and C1 ... Cn)        C1 ∧ ... ∧ Cn     C1^I ∩ ... ∩ Cn^I
    (allsome R C)          ∀∃R:C             {d ∈ U | R^I(d) ⊆ C^I ∧ R^I(d) ≠ ∅},
                                             where R^I ⊆ U × U
    Terminological Axioms
    (defconcept N C)       N ≡ C             N^I = C^I
    (defprimconcept N C)   N ⊑ C             N^I ⊆ C^I

C' and W' are the same as C and W except that they have additional roles R2 and S2, and therefore C should subsume C' and W should subsume W'. If we were to classify C and W together and then C' and W', we would not detect that C subsumes C'. That test would involve checking if W subsumes W', which would not be known at that point.

The solution to this is to begin testing the first concept in a cycle. If any subsumption questions arise that involve other concepts in that cycle, then recursively invoke the next subsumption question. At some point this process terminates, or a subsumption question arises that one has already visited, at which point one stops. This is tantamount to just assuming it is true. Applying this technique to the previous example, we see that the question C ⪰ C' leads to W ⪰ W', which leads back to C ⪰ C'.

In order to see this more closely one can draw certain graphs to represent the concepts of a given KB. These graphs are actually encoded forms of accessible pointed graphs (apg's), used in non-well-founded set theory. Each one has a root node labelled by the concept name, an epsilon labelled arc that points to the conjunction of the primitives in the concept, and an R labelled arc for each role R in the definition. Let us call the collection of these D.
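Before turning to the graph encoding, the recursive strategy just described (stop and assume true when a subsumption question repeats) can be sketched directly. This is our illustrative Python rendering, not K-REP code; the KB maps concept names to a pair of a primitive set and a role map, so the cyclic example above can be written down as-is.

```python
def subsumes(kb, n1, n2, asked=()):
    """Does concept n1 subsume n2?  A subsumption question that reappears
    along the current chain of questions is assumed to hold, which is what
    terminates the test on cyclic definitions."""
    if (n1, n2) in asked:
        return True                      # previously visited question
    asked = asked + ((n1, n2),)
    prims1, roles1 = kb[n1]
    prims2, roles2 = kb[n2]
    if not prims1 <= prims2:
        return False
    return all(r in roles2 and subsumes(kb, v, roles2[r], asked)
               for r, v in roles1.items())

# The cyclic example from the text:
kb = {
    "C":  (frozenset({"P"}), {"R1": "W"}),
    "W":  (frozenset({"P"}), {"S1": "C"}),
    "C'": (frozenset({"P"}), {"R1": "W'", "R2": "P'"}),
    "W'": (frozenset({"P"}), {"S1": "C'", "S2": "P'"}),
    "P'": (frozenset({"P"}), {}),
}
assert subsumes(kb, "C", "C'")    # C >= C' asks W >= W', which asks C >= C' again
assert subsumes(kb, "W", "W'")
assert not subsumes(kb, "C'", "C")
```

The run of subsumes(kb, "C", "C'") is exactly the chain described in the text: it recursively raises W ⪰ W', which raises C ⪰ C' again, and the repeated question is assumed true.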
Using these graphs we check if C1 ⪰ C2 by first checking that the primitive pointed to by the epsilon arc of C1 subsumes the primitive pointed to by the epsilon arc of C2. Then for each R labelled arc of C1 we check that C2 has an arc of the same label, and that the nodes pointed to by them are also in the subsumption relation. If either of these nodes is in a cycle, we arrive at the recursive step. Take the graphs corresponding to the concept names of these two nodes and paste them down at these nodes. However, change any labelled arcs that point to concepts whose graphs are already present, to point to those nodes. This may introduce loops in the graphs. Continue testing the nodes where the new graphs were pasted on. Since the number of concepts is finite, the number of cycles must be also, and this process terminates. Recognition of previously asked subsumption questions occurs when loops with the same labels are introduced that point to pairs of previous nodes. This process actually constructs a Hoare simulation between the two apgs, from the top down. We are certain that it terminates because our KBs are finite.

The Models

Consider the signature Σ containing a set P of constants, a set ℜ of unary operators, a binary operator ∧, and a constant ⊤. Let E be the following set of axioms with respect to this signature:

    x ∧ x = x                  (idempotence)
    x ∧ y = y ∧ x              (commutativity)
    (x ∧ y) ∧ z = x ∧ (y ∧ z)  (associativity)
    x ∧ ⊤ = x                  (⊤ is a unit)
    R(x ∧ y) = R(x) ∧ R(y)     for all R ∈ ℜ

We now consider the class of algebras for this signature Σ that satisfy the axioms E. We will call such algebras concept algebras. Using ∧, a partial order ⪰ can be imposed on a concept algebra (p ⪰ q iff p ∧ q = q). The constants in P are meet-irreducible primitives and each R ∈ ℜ defines a meet homomorphism on the algebra. Given X = {x1, ..., xn}, A[X] = A[x1, ..., xn] is the free concept algebra generated by X.
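The axioms E put every variable-free term into a normal form: a set of primitives together with a role map, with the last axiom merging repeated roles. A small Python sketch can make this concrete; the term encoding here (("top",), ("prim", p), ("and", t1, t2), ("role", R, t)) is our own illustrative choice, not anything from K-REP.

```python
def meet(n1, n2):
    """Combine two normal forms: union the primitives, merge role maps,
    recursively combining fillers of shared roles (R(x) /\ R(y) = R(x /\ y))."""
    (p1, f1), (p2, f2) = n1, n2
    return (p1 | p2,
            {r: meet(f1[r], f2[r]) if r in f1 and r in f2
                else f1.get(r, f2.get(r))
             for r in f1.keys() | f2.keys()})

def normalize(t):
    """Rewrite a term to (primitive-set, {role: normalized filler})."""
    if t[0] == "top":
        return (frozenset(), {})          # x /\ top = x: contributes nothing
    if t[0] == "prim":
        return (frozenset({t[1]}), {})
    if t[0] == "role":                    # ("role", R, filler)
        return (frozenset(), {t[1]: normalize(t[2])})
    return meet(normalize(t[1]), normalize(t[2]))   # ("and", t1, t2)

# R(x /\ y) = R(x) /\ R(y): both sides have the same normal form.
x, y = ("prim", "x"), ("prim", "y")
lhs = ("role", "R", ("and", x, y))
rhs = ("and", ("role", "R", x), ("role", "R", y))
assert normalize(lhs) == normalize(rhs)
# The ACI and unit laws likewise collapse:
assert normalize(("and", x, ("and", x, ("top",)))) == normalize(x)
```

This pair-of-primitives-and-role-map shape is exactly the shape the elements of the universal concept algebra will take below.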
A[] = A[∅] is the initial concept algebra.

A KB is a set of n possibly mutually recursive definitions Δ = {x1 ≡ t1, ..., xn ≡ tn} where ti ∈ A[x1, ..., xn]. Δ gives rise to ≡_Δ, the least congruence on A[x1, ..., xn] containing Δ. For each KB Δ, we will be interested in three relations on A[X]:

    s1 ⊒_Δ s2   (descriptive subsumption)
    s1 ≥_Δ s2   (structural subsumption)
    s1 ⊵_Δ s2   (extensional subsumption)

A congruence on A[X] is an equivalence relation that is also a subalgebra of A[X] × A[X]. For any relation T on A[X], let E(T) be the smallest equivalence relation on A[X] that contains T, and let F(T) be the smallest subalgebra of A[X] × A[X] that contains T. Let S_0 = {(xi, ti) | xi ≡ ti ∈ Δ} and let S_{k+1} = F(E(S_k)) for k ≥ 0. This generates an increasing sequence, and we define ≡_Δ as ∪_n S_n. This is a congruence on A[X] that contains Δ, and if B is any other congruence that contains Δ, then an induction on k shows that B contains S_k and therefore contains ≡_Δ, so it is the smallest.

Let A_Δ[x1, ..., xn] hereafter refer to the quotient algebra A[x1, ..., xn]/≡_Δ. Given a set of concept definitions Δ we can view A_Δ[x1, ..., xn] as the algebra of congruence classes of concept terms with respect to this set of equations. Then

Theorem 1  A_Δ[x1, ..., xn] is a conservative extension of A[], i.e., the unique homomorphism A[] → A_Δ[x1, ..., xn] is one-one.

Proof. Since the only nontrivial identifications of terms of A[x1, ..., xn] implied by ≡_Δ involve the free variables x1, ..., xn, and since A[] has no terms involving x1, ..., xn, the unique homomorphism from A[] into A_Δ[x1, ..., xn] is one-to-one. □

We are now ready to define descriptive subsumption.

Definition 1  Given two terms s1, s2 ∈ A[X], we say s1 descriptively subsumes s2, written s1 ⊒_Δ s2, iff π_Δ s1 ⪰_{A_Δ[X]} π_Δ s2 (π_Δ is the canonical projection map from A[X] to A_Δ[X]).
We next wish to construct an algebra that provides semantics for these terms, so that each term can be mapped to an element of the semantic algebra. There will be no variables in this algebra, and we will show that even sets of equations with cycles in them have unique solutions in this semantic algebra. Keep in mind that the elements of this algebra, which we call the universal concept algebra, are modeling the concepts as descriptions or intensions, rather than extensions.

The universal concept algebra is the greatest fixed point solution of the equation

    C = P_{<ω}(P) × (ℜ ⇀_{<ω} C)

where P_{<ω}(P) is the collection of finite subsets of P, and (ℜ ⇀_{<ω} C) is the collection of partial functions from ℜ to C with finite domain. A concept definition is composed of a collection of primitives conjoined together with a collection of role definitions. Each element of C is an ordered pair whose first component is a set of primitives from P, and whose second component is a set of ordered pairs, each arising from one of the role definitions. This set can just be represented as a partial function on ℜ, defined for each role in the concept's definition. Note that C is not a set in the normal ZFC sense, but in the sense of Aczel's theory of non-well-founded sets (roughly, sets that can contain themselves as members). This is how circularity of concept definitions can be allowed. To see that C is a concept algebra:

    ⊤_C = (∅, ∅)
    p_i^C = ({p_i}, ∅)
    R_C(x) = (∅, {(R, x)})

Given C1 = (Φ1, f_{C1}), where Φ1 ⊆ P and f_{C1} ∈ (ℜ ⇀_{<ω} C), and similarly for C2 = (Φ2, f_{C2}), we define:

    C1 ∧ C2 = (Φ1 ∪ Φ2, f_{C1∧C2})

where f_{C1∧C2} ∈ (ℜ ⇀_{<ω} C) is defined as

    f_{C1∧C2}(R) = f_{C1}(R)              if R ∈ dom(f_{C1}) − dom(f_{C2})
                   f_{C2}(R)              if R ∈ dom(f_{C2}) − dom(f_{C1})
                   f_{C1}(R) ∧ f_{C2}(R)  if R ∈ dom(f_{C1}) ∩ dom(f_{C2})

Technically, to define the meet operation on C, one must do a bit more than write down this recursive definition, because it is not entirely clear that this definition leads to a well defined function.
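For elements of C that happen to be well-founded (finite trees), the three-case definition can be written down directly as a recursive function; the cyclic elements are exactly where this naive recursion would fail to terminate. A Python sketch under that well-foundedness assumption (the pair encoding is ours):

```python
def meet(c1, c2):
    """Meet in the universal concept algebra, on well-founded elements:
    union the primitive sets; keep roles occurring in only one role map,
    and recursively meet the fillers of shared roles."""
    prims1, f1 = c1
    prims2, f2 = c2
    roles = {}
    for r in f1.keys() | f2.keys():
        if r in f1 and r in f2:
            roles[r] = meet(f1[r], f2[r])    # R in dom(f_C1) and dom(f_C2)
        else:
            roles[r] = f1.get(r, f2.get(r))  # R in exactly one domain
    return (prims1 | prims2, roles)

top = (frozenset(), {})
p = (frozenset({"p"}), {})
q = (frozenset({"q"}), {})
c1 = (frozenset({"p"}), {"R": p})
c2 = (frozenset({"q"}), {"R": q, "S": p})

assert meet(c1, top) == c1            # top is a unit
assert meet(c1, c1) == c1             # idempotence
assert meet(c1, c2) == meet(c2, c1)   # commutativity
assert meet(c1, c2) == (frozenset({"p", "q"}),
                        {"R": (frozenset({"p", "q"}), {}), "S": p})
```

The asserts check that the concept-algebra axioms hold for this operation on the well-founded fragment.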
However, the recursive definition can readily be translated into a simultaneous system of equations, to which the Solution Lemma [Aczel, 1988] can be applied, and this allows us to assert the existence and uniqueness of the meet operation.

With respect to Δ, let us define what is meant by a good set of definitions.

Definition 2  A set of definitions Δ is good if each xi appears only once on the left hand side, and each equation xi ≡ ti is of the form

    xi ≡ (∧_j P_{i,j}) ∧ (∧_k (R_{i,k} s_{i,k}))

where P_{i,j} ∈ P and s_{i,k} ∈ A[X] for all i, j, k. In other words each equation is composed of a conjunction of a conjunction of primitives, and a conjunction of role terms that may or may not contain variables.

Theorem 2  There exists a unique concept algebra homomorphism ψ_Δ : A_Δ[x1, ..., xn] → C if Δ is a good set of definitions.

Proof. Without loss of generality, we may take the generators {x1, ..., xn} as atoms (or Urelemente), i.e., objects that are not sets and not in any way composed of sets. Using the interpretations in C of the operators in Σ, the KB Δ can be interpreted as a system of equations for which we seek a solution,

    x1 = Y1, ..., xn = Yn,

where each Yi is a pure set (a set involving no atoms) in C. The existence and uniqueness of this solution follows immediately from Aczel's Solution Lemma [Aczel, 1988], p. 13. Hence there exists a unique homomorphism ρ from the free algebra A[X] to C, such that ρ(xi) = Yi for all i.

The fundamental homomorphism theorem then implies that there exists a unique mapping, call it ρ_Δ, from A[X]/Ker ρ to C such that the following diagram commutes.

    A[X] ──ρ──> C
      │         ↑
      └──> A[X]/Ker ρ ──ρ_Δ──┘

Provided that we can show that Ker ρ contains ≡_Δ, we can apply Theorem 2.3 in [Jacobson, 1989] (page 64) to obtain a unique mapping ψ from A_Δ[X] to A[X]/Ker ρ such that the following commutes.
    A[X] ────> A[X]/Ker ρ
      │π_Δ         ↑ψ
      └──> A_Δ[X] ─┘

But since ρ was defined on the generators xi, we see that for (xi, ti) ∈ ≡_Δ, ρ(xi) = ρ(ti), and therefore ≡_Δ ⊆ Ker ρ. The unique map from A_Δ[X] to C is the composition of ρ_Δ with ψ, i.e., ψ_Δ = ρ_Δ ∘ ψ. □

Now we have the following picture:

    A[X] ──π_Δ──> A_Δ[X] ──ψ_Δ──> C

Given two terms s1 and s2 in A[X], descriptive subsumption says that s1 ⊒_Δ s2 iff π_Δ s1 ⪰_{A_Δ[X]} π_Δ s2, and now we can define structural subsumption.

Definition 3  Given two terms s1, s2 ∈ A[X], we say s1 structurally subsumes s2, written s1 ≥_Δ s2, iff (ψ_Δ ∘ π_Δ) s1 ≥_C (ψ_Δ ∘ π_Δ) s2.

Structural subsumption is given by the ordering on C. We now explicitly state the ordering on C, to relate it both to ∧ and to the actual implementation in K-REP. Given C1 = (Φ1, f_{C1}), where Φ1 ⊆ P and f_{C1} ∈ (ℜ ⇀_{<ω} C), and similarly for C2 = (Φ2, f_{C2}), we say that C1 ≥_C C2 if Φ1 ⊆ Φ2, and dom(f_{C1}) ⊆ dom(f_{C2}), and for all R ∈ dom(f_{C1}), f_{C1}(R) ≥_C f_{C2}(R). This is exactly the test for subsumption of concepts stated in the previous section on the language. Notice also that when C1 ≥_C C2, if one inspects C1 ∧ C2, then Φ1 ∪ Φ2 = Φ2 and, since dom(f_{C1}) ⊆ dom(f_{C2}), f_{C1∧C2} reduces to the second and third cases. The third case corresponds to f_{C1}(R) ≥_C f_{C2}(R), and thus C1 ≥_C C2 iff C1 ∧ C2 = C2.

The algorithm discussed in the previous section for coping with cycles is essentially constructing objects in C, by first constructing an apg-like object in D for each term of A[X] in which we are interested. This construction corresponds to the map Imp_Δ in the diagram below. Each apg in D then describes a unique set in C via the unnamed arrow. The testing of subsumption is actually testing the presence of a Hoare simulation between the objects in C. With respect to our implementation we now have the following commutative diagram.

    A[X] ──π_Δ──> A_Δ[X] ──ψ_Δ──> C
      │                           ↑
      └───────Imp_Δ───────> D ────┘

(Our new implementation of K-REP actually creates two spaces of objects representing concepts. Our definition space corresponds to A_Δ[X], and our semantic space corresponds to C.
Thus the definition space allows for multiple definitions that might map to the same object in the semantic space.)

The fact that ψ_Δ preserves order proves the following claim:

Theorem 3  Descriptive subsumption implies structural subsumption.

This shows us that descriptive subsumption is weaker than structural subsumption. Proposition 5.2 (page 133) of [Nebel, 1990] states that subsumption with respect to descriptive semantics is weaker than subsumption with respect to a least or greatest fixed point semantics. Let us now define extensional subsumption.

Definition 4  Given two terms s1, s2 ∈ A[X], we say s1 extensionally subsumes s2, written s1 ⊵_Δ s2, iff s1^M ≥ s2^M for all extensional greatest fixed point models M.

It appears that our structural subsumption is equivalent to subsumption with respect to all extensional greatest fixed point models. Stated more succinctly,

Conjecture 1  s1 ≥_Δ s2 iff s1 ⊵_Δ s2.

Conclusion

In this paper we have developed concept algebras as a new approach to semantics for term subsumption languages. We have shown that for a subset of the K-REP language, these algebras accurately model the process of subsumption testing, even in the presence of cycles. We feel this approach is somewhat simpler, as these algebras model the concepts as descriptions, without reference to subsets of some universe. These algebras also allow for multiple definitions, i.e., concepts with different names that semantically represent the same concept. We are incorporating both these ideas into a new version of K-REP. Though we have only dealt with a subset of K-REP, it appears that cardinality restrictions do not add much more complexity to the model. However, role value maps certainly do, and they warrant further investigation. One immediate extension appears to be disjunction.
Since one can view a disjunction as a collection of concept descriptions, one of which a given instance might satisfy, it seems that the finite power set of the algebra C under a suitable ordering (perhaps the Hoare), could provide an algebra that also allows for disjunctions.

References

Aczel, Peter 1988. Non-Well-Founded Sets, volume 14 of CSLI Lecture Notes. CSLI/Stanford.

Baader, Franz 1990. Terminological cycles in KL-ONE-based knowledge representation languages. In Proceedings of AAAI, Boston, Mass.

Brachman, R. J. and Schmolze, J. G. 1985. An overview of the KL-ONE knowledge representation system. Cognitive Science 9:171-216.

Jacobson, Nathan 1989. Basic Algebra II. W. H. Freeman and Company.

Mays, E.; Dionne, R.; and Weida, R. 1991. K-REP system overview. SIGART Bulletin 2.

Nebel, Bernhard 1990. Reasoning and Revision in Hybrid Representation Systems, volume 422 of Lecture Notes in Artificial Intelligence. Springer-Verlag.

Woods, William A. 1991. Understanding Subsumption and Taxonomy: A Framework for Progress. Morgan Kaufmann.
Jochen Heinsohn, Daniel Kudenko, Bernhard Nebel, and Hans-Jürgen Profitlich
German Research Center for Artificial Intelligence (DFKI)
Stuhlsatzenhausweg 3
W-6600 Saarbrücken, Germany
e-mail: (last name)@dfki.uni-sb.de

Abstract

The family of terminological representation systems has its roots in the representation system KL-ONE. Since the development of this system more than a dozen similar representation systems have been developed by various research groups. These systems vary along a number of dimensions. In this paper, we present the results of an empirical analysis of six such systems. Surprisingly, the systems turned out to be quite diverse, leading to problems when transporting knowledge bases from one system to another. Additionally, the runtime performance between different systems and knowledge bases varied more than we expected. Finally, our empirical runtime performance results give an idea of what runtime performance to expect from such representation systems. These findings complement previously reported analytical results about the computational complexity of reasoning in such systems.

Introduction

Terminological representation systems support the taxonomic representation of terminology for AI applications and provide reasoning services over the terminology. Such systems may be used as stand-alone information retrieval systems [Devanbu et al., 1991] or as components of larger AI systems. Assuming that the application task is the configuration of computer systems [Owsnicki-Klewe, 1988], the terminology may contain concepts such as local area network, workstation, disk-less workstation, file server, etc. Further, these concepts are interrelated by specialization relationships and the specification of necessary and sufficient conditions. A disk-less workstation may be defined as a workstation that has no disk attached to it, for example.
The main reasoning service provided by terminological representation systems is checking for inconsistencies in concept specifications and determining the specialization relation between concepts, the so-called subsumption relation.

*This work has been carried out in the WIP project which is supported by the German Ministry for Research and Technology BMFT under contract ITW 8901 8.

The first knowledge representation system supporting this kind of representation and reasoning was KL-ONE [Brachman and Schmolze, 1985]. Meanwhile, the underlying framework has been adopted by various research groups, and more than a dozen terminological representation systems have been implemented [Patel-Schneider et al., 1990]. These systems vary along a number of important dimensions, such as implementation status, expressiveness of the underlying representation language, completeness of the reasoning services, efficiency, user interface, interface functionality, and integration with other modes of reasoning.

Nowadays, it seems reasonable to build upon an existing terminological representation system instead of building one from scratch. Indeed, this was the idea in our project WIP, which is aimed at knowledge-based, multi-modal presentation of information such as operating instructions [Wahlster et al., 1991]. However, it was by no means clear which system to choose. For this reason, we analyzed a subset of the available systems empirically. It turned out that the effort we had to invest could well have been used to implement an additional prototypical terminological representation system. However, we believe that the experience gained is worthwhile, in particular concerning the implementation of future terminological representation systems and standard efforts in the area of terminological representation systems.
One of the main results of our study is that the differences in expressiveness between the existing systems are larger than one would expect, considering the fact that all of them are designed using a common semantic framework. These differences led to severe problems when we transported knowledge bases between the systems. Another interesting result is the runtime performance data we obtained. These findings indicate (1) that the structure of the knowledge base can have a significant impact on the performance, (2) that the runtime grows faster than linearly in all systems, and (3) that implementations ignoring efficiency issues can be quite slow.

Heinsohn et al. 767

Additionally, the performance data gives an idea of what performance to expect from existing terminological representation systems. These results complement the various analytical results on the computational complexity of terminological reasoning.

The Experiment

The empirical analysis can be roughly divided into two parts (see Figure 1).¹ The first part covers qualitative facts concerning system features and expressiveness. In order to describe the latter aspect, we first developed a "common terminological language" that covers a superset of all terminological languages employed in the systems we considered. The analysis of the expressiveness shows that the intersection over all representation languages used in the systems is quite small.

Figure 1: Experiment design (manuals; expressiveness; problematical cases (20); obvious inferences (20); real KBs (6), 100-400 concepts; random KBs (30))

In the second part we ran different test cases on the systems in order to check out the performance, completeness, and the handling of problematical cases. We designed five different groups of experiments.
The first group consists of tests dealing with cases that are not covered by the common semantic framework of terminological representation systems. The second group explores the degree of the inferential completeness of the systems for "easy" (i.e., polynomial) inferences. It should be noted that we did not try to design these tests in a systematic fashion by trying out all possible combinations of language constructs, though. The third group consists of problems which are known to be "hard" for existing systems. They give an impression of the runtime performance under worst-case conditions.

For the fourth group of experiments we used existing knowledge bases to get an idea of the runtime performance under "realistic" conditions. First, we manually converted the knowledge bases into the "common terminological language" mentioned above. Then, we implemented a number of translators that map the knowledge bases formulated using the "common terminological language" into system specific knowledge bases.

¹The details of the experiment are given in a technical report [Heinsohn et al., 1992].

Although the results of the fourth group of experiments give some clues of what the behavior of the systems may be in applications, we had not enough data points to confirm some of the conjectures that resulted from this initial test under realistic conditions. Additionally, it was not evident in how far the translation, which is only approximate, influenced the performance. For this reason, a fifth group of experiments was designed. A number of knowledge bases were generated randomly with a structure similar to the structure of the realistic knowledge bases.

In general, we concentrated on the terminological representation part (also called TBox) of the systems.
This means that we ignored other representation and reasoning facilities, such as facilities for maintaining and manipulating databases of objects (also called ABox) that are described by using the concepts represented in the terminological knowledge base. This concentration on the terminological component is partly justified by the fact that the terminological part is the one which participates in most reasoning activities of the entire system. Thus, runtime performance and completeness of the terminological part can be generalized to the entire system, to a certain degree. However, the systems may (for efficiency reasons) use different algorithms for maintaining a database of objects, which may lead to a different behavior in this case. Nevertheless, even if the generalization is not valid in general, we get at least a feeling how the terminological parts perform.

As a final note, we want to emphasize that our empirical analysis was not intended to establish a ranking between the systems. For this purpose, it would be necessary to assign weights to the dimensions we compared, and this can only be done if the intended application has been fixed. Despite the fact that we analyzed only the terminological subsystems, the tests are not intended to be complete in any sense, and there may be more dimensions that could be used to analyze the systems. Further, the results apply, of course, only to the system versions explicitly mentioned in the following section. The system developers of a number of systems have improved their systems since we made our experiment. So, the runtime performance may have changed.

Systems

There are a large number of systems which could have been included in an empirical analysis, e.g., KL-ONE, LILOG, NIKL, K-REP, KRS, KRYPTON, and YAK (see e.g. [Patel-Schneider et al., 1990; Nebel et al., 1991]). However, we concentrated on a relatively small number of systems.
This does not mean that we feel that the systems we did not include (or mention) are not worthwhile to be analyzed. The only reason not to include all the systems was the limited amount of time available. We hope, however, that our investigation can serve as a starting point for future empirical analyses. The systems we picked for the experiment are: BACK [Peltason, 1991] (Version 4.2, pre-released), CLASSIC [Patel-Schneider et al., 1991] (Version 1.02, released), KRIS [Baader and Hollunder, 1991] (Version 1.0, experimental), LOOM [MacGregor, 1991] (Version of May 1990), MESON [Owsnicki-Klewe, 1988] (Version 2.0, released), and SB-ONE [Kobsa, 1991] (Version of January 1990, released).

The BACK system has been developed at the Technical University of Berlin by the KIT-BACK group as part of the Esprit project ADKMS. The main application is an information system about the financial and organizational structure of a company [Damiani et al., 1990]. It is the only system among the ones we tested that is written in PROLOG. We tested the system on a Solbourne 601/32 using SICStus Prolog 2.1.

CLASSIC has been developed in the AI Principles Research Department at AT&T Bell Laboratories. It supports only a very limited terminological language, but turned out to be very useful for a number of applications [Devanbu et al., 1991]. As all other systems except for BACK, it is written in COMMONLISP, and we tested it on a MacIvory.

KRIS has been developed by the WINO project at DFKI. In contrast to other systems, it provides complete inference algorithms for very expressive languages. Efficiency considerations have played no role in the development of the system.

LOOM has been developed at USC/ISI and supports a very powerful terminological logic (in an incomplete manner, though) and offers the user a very large number of features. In fact, LOOM can be considered as a programming environment.
MESON has been developed at the Philips Research Laboratories, Hamburg, as a KR tool for different applications, e.g., computer configuration [Owsnicki-Klewe, 1988]. Although it is also written in COMMONLISP, we tested it not on a MacIvory but on a Solbourne 601/32 in order to take advantage of its nice X-Window interface.

SB-ONE has been developed in the XTRA project at the University of Saarland as the knowledge representation tool for a natural language project. One of the main ideas behind the design of the system was the possibility of direct graphical manipulations of the represented knowledge.

Qualitative Results

The main qualitative result of our experiment is that although the systems were developed with a common framework in mind, they are much more diverse than one would expect. First of all, the terminological languages that are supported by the various systems are quite different. While three of the six systems use a similar syntactic scheme (similar to the one first used by Brachman and Levesque [Brachman and Levesque, 1984]), and one system adapted this syntactic scheme for PROLOG, i.e., infix instead of prefix notation, the remaining two systems use quite different syntactic schemes. Furthermore, there are not only superficial differences in the syntax, but the set of (underlying) term-forming operators varies as well. In fact, the common intersection of all languages we considered is quite small. It contains only the concept-forming operators concept conjunction, value restriction, and number restriction.²

These differences led to severe problems when we designed automatic translators from the "common terminological language" to the languages supported by the different systems.
Because of the differences in expressiveness, the translations could only be approximate, and because of the differences in the syntax we used a translation schema that preserved the meaning (as far as possible) but introduced a number of auxiliary concepts. Using the translated knowledge bases, we noticed that the introduction of auxiliary concepts influences the runtime performance significantly, a point we will return to.

Discounting the differences in syntax and expressiveness, one might expect that the common semantic framework (as spelled out by Brachman and Levesque [Brachman and Levesque, 1984]) leads to identical behavior on inputs that have identical meaning and match the expressiveness of the systems. However, this is unfortunately wrong. When a formal specification is turned into an implemented system, there are a number of areas that are not completely covered by the specification. One example is the order of the input. So, some systems allow for forward references in term definitions and some do not. Furthermore, some systems support cyclic definitions (without handling them correctly according to one of the possible semantics [Nebel, 1991], however, or permitting cyclic definitions only in some contexts), and some give an error message. Also, redefinitions of terms are either marked as errors, processed as revisions of the terminology, or treated as incremental additions to the definition. Finally, there are different rules for determining the syntactic category of an input symbol.

Another area where designers of terminological systems seem to disagree is what should be considered as an error by the user. So, some systems mark the definitions of semantically equivalent concepts as an error or refuse to accept semantically empty (inconsistent) concepts, for instance.

These differences between the systems made the translation from the "common terminological language" to system-specific languages even more complicated.
In fact, some of the problems mentioned above were only discovered when we ran the systems on the translated knowledge bases. We solved that problem by putting the source form of the knowledge base into the most unproblematical form, if possible, or ignored problematical constructions (such as cyclic definitions) in the translation process.

²The technical report [Heinsohn et al., 1992] contains tables specifying precisely the expressiveness of the different systems.

Heinsohn et al. 769

Summarizing, these results show that the ongoing process of specifying a common language for terminological representation and reasoning systems [Neches et al., 1991, p. 50-51] will probably improve the situation in so far as the translation of knowledge bases between different systems will become significantly easier. One main point to observe, however, is the area of pragmatics we touched above, such as permitting forward references.

Finally, we should mention a point which all systems had in common. In each system we discovered at least one deviation from the documentation, such as missing an obvious inference or giving a wrong error message. This is, of course, not surprising, but shows that standard test suites should be developed for these systems.

There are a number of other dimensions where the systems differ, such as the integration with other reasoning services, the functionality of graphical user interfaces, ease of installation, and user friendliness, but these are issues which are very difficult to evaluate.

Quantitative Results

One important feature of a representation and reasoning system is, of course, its runtime performance. In the case of terminological representation systems, the time to compute the subsumption hierarchy of concepts (a process that is often called classification) is an interesting parameter.
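Classification is built from repeated subsumption tests between concept terms. Purely as an illustration (our own sketch, not any of the tested systems' algorithms or syntax), a structural subsumption test for the small common intersection noted earlier (concept conjunction, value restriction, and number restriction) might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A concept term in the small common intersection: a conjunction of
    primitive names, value restrictions (all r . C), and number
    restrictions (at-least / at-most n r), one entry per role."""
    prims: frozenset = frozenset()
    value: tuple = ()       # ((role, Concept), ...)
    at_least: tuple = ()    # ((role, n), ...)
    at_most: tuple = ()     # ((role, n), ...)

def subsumes(c, d):
    """True if c subsumes d (c is at least as general as d).
    Structural test: every constraint of c must be enforced by d."""
    if not c.prims <= d.prims:
        return False
    dv = dict(d.value)
    for role, restr in c.value:
        # d must restrict the same role at least as tightly
        if role not in dv or not subsumes(restr, dv[role]):
            return False
    dmin = dict(d.at_least)
    for role, n in c.at_least:
        if dmin.get(role, 0) < n:
            return False
    dmax = dict(d.at_most)
    for role, n in c.at_most:
        if role not in dmax or dmax[role] > n:
            return False
    return True

person = Concept(prims=frozenset({"person"}))
parent = Concept(prims=frozenset({"person"}),
                 value=(("child", person),),
                 at_least=(("child", 1),))
assert subsumes(person, parent)      # the more general concept subsumes
assert not subsumes(parent, person)
```

Real systems additionally normalize terms (merging multiple restrictions on the same role) and handle far richer constructs; that engineering gap is precisely where the runtime differences discussed below arise.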
In order to get a feeling for the runtime behavior of the systems we designed several tests to explore how the systems behave under different conditions. Since most of the systems are still under development, the runtime data we gathered is most probably not an accurate picture of the performance of the most recent versions of the systems. In particular, new (and faster) versions of BACK, CLASSIC, and LOOM are available.

Computational complexity results show that subsumption determination between terms is NP-hard [Donini et al., 1991] or even undecidable [Schmidt-Schauß, 1989] for reasonably expressive languages. Even assuming that term-subsumption can be computed in polynomial time (e.g., for restricted languages), subsumption determination in a terminology is still NP-hard [Nebel, 1990]. In order to explore this issue, we designed some tests to determine the behavior of the systems under conditions that are known to be hard.

One test exploits the NP-hardness result for term-subsumption for languages that contain concept conjunction, value restrictions, and qualified existential restrictions [Donini et al., 1992]. It turned out that three systems could not express this case, one system reported an internal error, one system missed the inference (but exhibited a polynomial runtime behavior), and only one system handled the case, but with a very rapid growth in runtime.

Three other tests exploit the NP-hardness result for subsumption in terminologies [Nebel, 1990]. The first two tests show that only one of the six systems uses a naive way of performing subsumption in a terminology by expanding all concept definitions before checking subsumption [Nebel, 1990, p. 239]. The third test was designed in a way such that also clever subsumption algorithms are bound to use exponential time [Nebel, 1990, p. 245]. The results of the latter test are given in Figure 2.³
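The naive strategy just mentioned, expanding all concept definitions before checking subsumption, is easy to see as a source of exponential behavior. The following toy sketch is our own illustration (the list-of-conjuncts representation and the names C1, ..., Cn are ours, not any system's): each definition mentions its predecessor twice, so the fully expanded term doubles in size at every level.

```python
def expand(name, defs):
    """Fully unfold a concept name against acyclic definitions.
    defs maps a name to a list of conjuncts; a conjunct is either a
    primitive name or a value restriction ('all', role, name)."""
    out = []
    for conjunct in defs.get(name, [name]):  # undefined names are primitive
        if isinstance(conjunct, tuple) and conjunct[0] == "all":
            _, role, filler = conjunct
            out.append(("all", role, expand(filler, defs)))
        else:
            out.append(conjunct)
    return out

def size(term):
    """Number of symbols in a fully expanded term."""
    return sum(size(t[2]) if isinstance(t, tuple) else 1 for t in term)

# C_i is defined as  P  and  (all r . C_{i-1})  and  (all s . C_{i-1})
defs = {}
for i in range(1, 11):
    defs[f"C{i}"] = ["P", ("all", "r", f"C{i-1}"), ("all", "s", f"C{i-1}")]

print(size(expand("C10", defs)))  # prints 2047 (= 2**11 - 1)
```

A terminology whose written-out size is linear in the number of definitions thus denotes expanded terms of exponential size, which is the blow-up that lazier, structural approaches try to avoid.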
They clearly indicate that the systems indeed exhibit a very rapid growth in the runtime.

[Figure 2: Runtime performance for hard cases]

Despite their theoretical intractability, terminological reasoning systems have been used for quite a while and the literature suggests that the knowledge bases involved were larger than just toy examples (i.e., more than 40 concepts). Hence, one would assume that the knowledge bases that have been used in applications are of a form that permits easy inferences, or the systems are incomplete and ignore costly inferences. In any case, it is questionable whether the runtime performance for worst-case examples gives us the right idea of how systems will behave in applications.

In order to get a feeling of the runtime performance under "realistic" conditions, we asked other research groups for terminological knowledge bases they use in their projects. Doing so, we obtained six different knowledge bases. As mentioned above, these were first manually translated into the "common terminological language" and then translated to each target language using our (semi-) automatic translators.

³The runtimes of BACK and MESON are not directly comparable with the other systems because BACK and MESON were tested on a Solbourne 601/32, which is two to three times faster than a MacIvory with respect to the execution of COMMONLISP programs, a remark that applies also to the other runtime performance tests. Additionally, it is not clear to us in how far the performance of BACK is influenced by the fact that it is implemented in PROLOG.

[Figure 3: Runtime performance for realistic cases; legend: BACK, CLASSIC, KRIS (×20), LOOM, MESON, SB-ONE (×20)]

[Figure 4: Runtime performance for small random KBs]
In Figure 3, the runtime for the systems is plotted against the number of concepts defined in the different knowledge bases. There are a number of interesting points to note here.

First of all, two systems, namely, KRIS and SB-ONE, were too slow to be plotted together with the other systems using the same scale. For this reason, we divided the runtimes by a factor of 20 before plotting them.

Second, the diagram indicates that the runtime ratio between the slowest system (KRIS) and the fastest system (CLASSIC) in case of the largest knowledge base is extreme, namely, 45,000/56 ≈ 800. Considering that KRIS was developed as an experimental testbed for different complete subsumption algorithms and CLASSIC was designed as an efficient system for an expressively limited language to be used in different applications, this result is actually not completely surprising. It would be of course desirable to explain this and other differences in performance on the level of algorithms and implementation techniques. However, these issues are not described in the literature and a source code analysis was beyond the scope of our analysis.

Third, the knowledge base with 210 concepts seems to be somehow special because the runtime curve shows a peak at this point. Inspecting this knowledge base, we discovered that one concept is declared to be super-concept (i.e., mentioned literally in the definition) of 50% of all other concepts. Removing this concept led to a smoother curve. Hence, the structure of a knowledge base can severely influence the runtime. Although this should have been obvious already from the first diagram showing the runtime behavior under worst-case conditions, it is an indication that under realistic conditions the runtime behavior can be unexpectedly influenced by the structure of the knowledge base.

[Figure 5: Runtime performance for large random KBs]
Summarizing the curves in Figure 3, it seems to be the case that most of the systems, except for SB-ONE, are similar in their runtime behavior in that the same knowledge bases are considered as "difficult" or "easy" to a similar degree. However, it is not clear whether the system runtimes differ only by a constant factor or not. Further, because of the approximative nature of the translations and the introduction of auxiliary concepts, it is not clear to us how reliable the data is.

For these reasons, we generated knowledge bases randomly in the intersection of all languages, avoiding the translation problem. The structure of these generated knowledge bases resembles the structure of the six real knowledge bases (percentage of defined concepts, average number of declared super-concepts, average number of role restrictions, etc.). The results of this test are given in Figures 4, 5, and 6.

[Figure 6: Runtime performance for very large random KBs]

Comparing the curves in these three figures with the curves in Figure 3, it seems to be the case that the structure of the randomly generated knowledge bases is indeed similar to the structure of realistic knowledge bases in so far as they lead to a similar runtime performance. However, we do not claim that the knowledge bases are realistic with respect to all possible aspects. In fact, too few facts are known about which structural properties can influence the performance of terminological representation systems. Bob MacGregor, for instance, reported that the number of distinct roles heavily influences the performance. He observed that the runtime decreases when the number of distinct roles is increased and all other parameters are held constant (same number of concepts and role restrictions).

These curves indicate that the runtime grows faster than linearly with the number of concepts.
We conjecture that in general the runtime of terminological representation systems is at least quadratic in the number of concepts. This conjecture is reasonable because identifying a partial order over a set of elements that are ordered by an underlying partial order is worst-case quadratic (if all elements are incomparable), and there is no algorithm known that is better for average cases. In fact, average case results are probably very hard to obtain because it is not known how many partial orders exist for a given number of elements [Aigner, 1988, p. 271].

From this, we conclude that designing efficient terminological representation systems is not only a matter of designing efficient subsumption algorithms, but also a matter of designing efficient classification algorithms, i.e., fast algorithms that construct a partial order. The main point in this context is to minimize the number of subsumption tests.

Another conclusion of our runtime tests could be that the more expressive and complete a system is, the slower it is, with KRIS as a system supporting complete inferences for a very expressive language and CLASSIC with almost complete inferences for a comparably simple language at the extreme points. However, we do not believe that this is a necessary phenomenon. A desirable behavior of such systems is that the user would have "to pay only as s/he goes," i.e., only if the full expressive power is used, the system is slow. In fact, at DFKI together with the WINO group we are currently working on identifying the performance bottlenecks in the KRIS system. First experiences indicate that it is possible to come close to the performance of LOOM and CLASSIC for the knowledge bases used in our tests.

Conclusions

We have analyzed six different terminological representation and reasoning systems from a qualitative and quantitative point of view.
The empirical analysis of the different terminological languages revealed that the common intersection of the languages supported by the systems is quite small. Together with the fact that the systems behave differently in areas that are not covered by the common semantic framework, sharing of knowledge bases between the systems does not seem to be easily achievable. In fact, when we tried to translate six different knowledge bases from a "common terminological language" into the system-specific languages we encountered a number of problems.

Testing the runtime performance of the systems, we noted that the structure of the knowledge base can have a significant impact on the performance, even if we do not consider artificial worst-case examples but real knowledge bases. Further, the systems varied considerably in their runtime performance. For instance, the slowest system was approximately 1000 times slower than the fastest in one case. The overall picture suggests that for all systems the runtime grows at least quadratically with the size of the knowledge base. These findings complement the various analyses of the computational complexity, providing a user of terminological systems with a feeling of how much he can expect from such a system in reasonable time.

Acknowledgments

We would like to thank the members of the research groups KIT-BACK at TU Berlin, AI Principles Research Department at Bell Labs., WINO at DFKI, LOOM at USC/ISI, MESON at Philips Research Laboratories, and XTRA at the University of Saarland, for making their systems and/or knowledge bases available to us, answering questions about their systems, and providing comments on an earlier version of this paper. Additionally we want to express our thanks to Michael Gerlach, who made one of the knowledge bases available to us.

References

Aigner, Martin 1988. Combinatorial Search. Teubner, Stuttgart, Germany.
Baader, Franz and Hollunder, Bernhard 1991. KRIS: Knowledge representation and inference system. SIGART Bulletin 2(3):8-14.

Brachman, Ronald J. and Levesque, Hector J. 1984. The tractability of subsumption in frame-based description languages. In Proceedings of the 4th National Conference of the American Association for Artificial Intelligence, Austin, TX. 34-37.

Brachman, Ronald J. and Schmolze, James G. 1985. An overview of the KL-ONE knowledge representation system. Cognitive Science 9(2):171-216.

Damiani, M.; Bottarelli, S.; Migliorati, M.; and Peltason, C. 1990. Terminological Information Management in ADKMS. In ESPRIT '90 Conference Proceedings, Dordrecht, Holland. Kluwer.

Devanbu, Premkumar T.; Brachman, Ronald J.; Selfridge, Peter G.; and Ballard, Bruce W. 1991. LaSSIE: a knowledge-based software information system. Communications of the ACM 34(5):35-49.

Donini, Francesco M.; Lenzerini, Maurizio; Nardi, Daniele; and Nutt, Werner 1991. The complexity of concept languages. In Allen, James A.; Fikes, Richard; and Sandewall, Erik, editors 1991, Principles of Knowledge Representation and Reasoning: Proceedings of the 2nd International Conference, Cambridge, MA. Morgan Kaufmann. 151-162.

Donini, Francesco M.; Lenzerini, Maurizio; Nardi, Daniele; Hollunder, Bernhard; Nutt, Werner; and Spaccamela, Alberto Marchetti 1992. The complexity of existential quantification in concept languages. Artificial Intelligence 53(2-3):309-327.

Heinsohn, Jochen; Kudenko, Daniel; Nebel, Bernhard; and Profitlich, Hans-Jürgen 1992. An empirical analysis of terminological representation systems. DFKI Research Report RR-92-16, German Research Center for Artificial Intelligence (DFKI), Saarbrücken.

Kobsa, Alfred 1991. First experiences with the SB-ONE knowledge representation workbench in natural-language applications. SIGART Bulletin 2(3):70-76.

MacGregor, Robert 1991. Inside the LOOM description classifier. SIGART Bulletin 2(3):88-92.
Nebel, Bernhard; von Luck, Kai; and Peltason, Christof, editors 1991. International workshop on terminological logics. DFKI Document D-91-13, German Research Center for Artificial Intelligence (DFKI), Saarbrücken. Also published as KIT Report, TU Berlin, and IWBS Report, IBM Germany, Stuttgart.

Nebel, Bernhard 1990. Terminological reasoning is inherently intractable. Artificial Intelligence 43:235-249.

Nebel, Bernhard 1991. Terminological cycles: Semantics and computational properties. In Sowa, John F., editor 1991, Principles of Semantic Networks. Morgan Kaufmann, San Mateo, CA. 331-362.

Neches, Robert; Fikes, Richard; Finin, Tim; Gruber, Thomas; Patil, Ramesh; Senator, Ted; and Swartout, William R. 1991. Enabling technology for knowledge sharing. The AI Magazine 12(3):36-56.

Owsnicki-Klewe, Bernd 1988. Configuration as a consistency maintenance task. In Hoeppner, Wolfgang, editor 1988, Künstliche Intelligenz. GWAI-88, 12. Jahrestagung, Eringerfeld, Germany. Springer-Verlag. 77-87.

Patel-Schneider, Peter F.; Owsnicki-Klewe, Bernd; Kobsa, Alfred; Guarino, Nicola; MacGregor, Robert; Mark, William S.; McGuinness, Deborah; Nebel, Bernhard; Schmiedel, Albrecht; and Yen, John 1990. Term subsumption languages in knowledge representation. The AI Magazine 11(2):16-23.

Patel-Schneider, Peter F.; McGuinness, Deborah L.; Brachman, Ronald J.; Alperin Resnick, Lori; and Borgida, Alex 1991. The CLASSIC knowledge representation system: Guiding principles and implementation rationale. SIGART Bulletin 2(3):108-113.

Peltason, Christof 1991. The BACK system - an overview. SIGART Bulletin 2(3):114-119.

Schmidt-Schauß, Manfred 1989. Subsumption in KL-ONE is undecidable. In Brachman, Ron J.; Levesque, Hector J.; and Reiter, Ray, editors 1989, Principles of Knowledge Representation and Reasoning: Proceedings of the 1st International Conference, Toronto, ON. Morgan Kaufmann. 421-431.
Wahlster, Wolfgang; André, Elisabeth; Bandyopadhyay, Som; Graf, Winfried; and Rist, Thomas 1991. WIP: the coordinated generation of multimodal presentations from a common representation. In Ortony, Andrew; Slack, John; and Stock, Oliviero, editors 1991, Computational Theories of Communication and their Applications. Springer-Verlag, Berlin, Heidelberg, New York. To appear. Also available as DFKI Research Report RR-91-08.
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3890
Robert.Doorenbos@CS.CMU.EDU, Milind.Tambe@CS.CMU.EDU, and Allen.Newell@CS.CMU.EDU

Abstract

This paper describes an initial exploration into large learning systems, i.e., systems that learn a large number of rules. Given the well-known utility problem in learning systems, efficiency questions are a major concern. But the questions are much broader than just efficiency, e.g., will the effectiveness of the learned rules change with scale? This investigation uses a single problem-solving and learning system, Dispatcher-Soar, to begin to get answers to these questions. Dispatcher-Soar has currently learned 10,112 new productions, on top of an initial system of 1,819 productions, so its total size is 11,931 productions. This represents one of the largest production systems in existence, and by far the largest number of rules ever learned by an AI system. This paper presents a variety of data from our experiments with Dispatcher-Soar and raises important questions for large learning systems.¹

Introduction

The machine learning community has a strong view that it is infeasible simply to learn new rules, because the cost of matching them would soon devour all the gains. This is known as the utility problem (Minton, 1988a) - the cumulative benefits of a rule should exceed its cumulative computational cost. There is data that supports this, as well as a special phenomenon of expensive chunks (Tambe et al., 1990). Thus, when considering systems that are to learn indefinitely, efficiency issues are a critical concern. But the issues are broader than just efficiency. As we grow large systems via learning, what sort of systems will emerge? Will they be able to keep learning? What will happen to the usefulness of the learned rules, e.g., will only a few rules provide all the action? Will there be mutual interference? What scales and what doesn't?
It is not even clear these are the right questions, because we do not know what such systems will be like.

¹The research was sponsored by the Avionics Laboratory, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U. S. Air Force, Wright-Patterson AFB, OH 45433-6543 under Contract F33615-90-C-1465, ARPA Order No. 7597. The first author was sponsored by the National Science Foundation under a graduate fellowship award. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. government.

830 Scaling Up

This paper reports on an empirical exploration to begin to get answers to these questions, both the pressing ones of efficiency and the myriad others that should interest us. It reports on a single problem-solving and learning system, Dispatcher-Soar, performing a sequence of randomly generated tasks, all the while continuously learning. At the time of this paper, Dispatcher-Soar has learned 10,112 new rules (called chunks), on top of an initial system of 1,819 rules, so its total size is 11,931 rules. This represents one of the largest rule-based systems in existence, and by far the largest number of rules ever learned by an AI system.²

This effort is only an initial probe into the unknown of large learning systems. We have no intention of stopping at 10,000 chunks. Will matters look the same at 50,000 chunks? Dispatcher-Soar is only a single system working in a single task domain. What is idiosyncratic to this system and task domain, and what is more general? Indeed, what are the right questions to be asked of these systems and task domains to better understand these issues? Thus, although we present data from a world none of us has yet glimpsed, the paper mostly focuses on formulating the right questions.
Dispatcher-Soar

A brief description of Dispatcher-Soar will suffice. The system was not designed with these experiments in mind, but to explore how to use databases. Its task is to dispatch messages for a large organization, represented in a database containing information about the people in the organization, their properties, and their ability to intercommunicate. Given a specification of the desired set of message recipients (e.g., "everyone involved in marketing for project 2"), the system must find a way to get the message to the right people. This problem is different from a simple network routing problem because both communication links and desired recipients are specified in terms of properties of people - for example, a communication link might be specified by "John can talk to all the marketing people at headquarters." Also, the data is available only indirectly in a database, which is external to Dispatcher-Soar, rather than part of it. Whenever Dispatcher-Soar needs some piece of information from the database, it must formulate a query using the SQL database-query language, send the query off to a database system, and interpret the database-system's response.

²The R1/XCON system at Digital is the only production system we know that exceeds 10,000 rules (Barker & O'Connor, 1989). Most production systems still have only hundreds of productions, though there are some with a few thousand. Systems that have substantially more rules are specialized to have only constants in their patterns (Brainware, 1990), thus avoiding the main source of computational cost, matching of pattern variables.

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Dispatcher-Soar is implemented using Soar, an integrated problem-solving and learning system, already well-reported in the literature (Laird, Newell, & Rosenbloom, 1987; Rosenbloom et al., 1991). Soar is based on formulating every task as search in problem spaces.
Each step in this search - the selection of problem spaces, states and operators, plus the immediate application of operators to states to generate new states - is a decision cycle. The knowledge necessary to execute a decision cycle is obtained from Soar's knowledge base, implemented as a production system (a form of rule-based system). If this knowledge is insufficient to reach a decision, Soar makes use of recursive problem-space search in subgoals to obtain more knowledge. Soar learns by converting this subgoal-based search into chunks, productions that immediately produce comparable results under analogous conditions (Laird, Rosenbloom, & Newell, 1986). Chunking is a form of explanation-based learning (EBL) (DeJong & Mooney, 1986; Mitchell, Keller, & Kedar-Cabelli, 1986; Rosenbloom & Laird, 1986).

Dispatcher-Soar is implemented using 20 problem spaces. Figure 1 shows these problem spaces as triangles, arranged in a tree to reflect the structure of the system: one problem space is another's child if the child space is used to solve a subgoal of the parent space. (The boxed numbers will be explained later.) The 8 problem spaces in the upper part of the figure are involved with the dispatching task: three basic methods of performing the task, together with a method for selecting between them. The 12 lower problem spaces are involved with using the external database.

Dispatcher-Soar begins with 1,819 initial (unlearned) productions, and uses a database describing an organization of 61 people. The details of this organization were created by a random generator, weighted to make the organization modestly realistic, e.g., people in the same region are more likely to have a communication link between them than people in different regions. We generated 200 different base problems for Dispatcher-Soar, using a random problem generator with weighted probabilities.
(A problem is just an instance of the dispatching task, e.g., "Send this message to everyone involved in marketing for project 2.") We then had Dispatcher-Soar solve each base problem in turn, learning as it went along. Starting from the initial 1,819-production system, Dispatcher-Soar required 3,143 decision cycles to solve the first problem, and built 556 chunks in the process. Then, starting with the initial productions plus the chunks from the first problem, the system solved the second problem. It continued in this fashion, eventually solving problem 200 using the initial productions together with all the chunks it had learned on the first 199 problems.

After solving 200 base problems, the system had learned a total of 10,112 chunks (an average of 506 per problem space), bringing the total number of productions to 11,931. Figure 1, (1) in the box, gives the number of these chunks in each problem space and Figure 1 (2) gives their distribution over the spaces.³ Approximately 30% of the chunks are search-control rules (the selection space), used to choose between the different methods the system has available to it. Another 32% of the chunks help implement various operators (many spaces). The remaining 38% contain memorized database query results (the memory and database-query spaces), and are based on the memorization technique introduced in (Rosenbloom & Aasman, 1990). Whenever Dispatcher-Soar has to go out to the database to find some piece of information, it memorizes it; if the information is needed again, the database will not have to be consulted. These chunks transfer to other problems - Dispatcher-Soar solved 42 of the 200 base problems without consulting the database. However, it did not memorize the entire database - it still had to consult it during problem 200.

Figure 2 plots the rate at which chunks are built. The horizontal axis in the figure gives both cumulative decision cycles and cumulative production firings.
(We will show later that these two numbers are proportional.) The vertical axis gives the number of chunks built. The figure shows that the rate of chunking remains remarkably constant over time - on average, one chunk is built every 6.3 decision cycles. The rate of chunking was indeed expected to stay roughly the same throughout the run (Newell, 1990). However, the constant rate is intriguing because it holds up over a long period of time, despite the different problem spaces employed and the variety of chunks learned (see Figure 1).

³The results reported here are for Dispatcher-Soar RW, and Soar version 5.2.2 using all-goals chunking. Times are user cpu time on a DECstation 5000, excluding Lisp garbage collection time. In addition to the chunks reported here, Soar also builds internal chunks, which have only constants (no variables) in their conditions and are excised following the completion of each problem. Internal chunks appear to affect our results only slightly and are ignored here. Two problem spaces build no chunks. The top space cannot build chunks because it has no parent. The send-query-and-get-response (SQAGR) space builds only internal chunks. An additional problem space is used to wait for results to be returned from the external database. It does no problem solving and is omitted here.

Doorenbos, Tambe, and Newell 831

[Figure 1: Dispatcher-Soar problem spaces and associated chunks.]

Even more intriguing, the
For example, the figure indicates that 87% of the chunks from the pathfinder space were built during the first half of the run. So the rate at which pathjkder chunks are built drops significantly over the course of the run. For the find-all-matches space, on the other hand, only 34% of the chunks were built during the first half of the run, so the chunking rate for this space is increasing over the course of the run. 00-v I 4 0 10000 30000 50000 70000 D = cumulative decision cycles I I 0 200 400 600 800 1000 1200 1400 Cumulative production firings (thousands) Figure 2: Chunks built on 200 base problems: C=D/6.3. As more and more chunks are built in a problem space, the space may become inactive: the space is never used 832 Scaling Up again, because in every situation where it would previously have been used, some chunk fires to handle the situation instead. In Dispatcher-Soar, some spaces become inactive very quickly: the table-list-text space was used during the first problem, built one chunk, and was never needed again - from then on, that single chunk transferred to all the situations where it was needed. Other spaces take longer to become inactive: the where-conditions-list-text space became inactive during problem 19, and the pat?finder space became inactive during problem 164. Many spaces show no sign of becoming inactive, even after 200 problems. As mentioned above, Figure 2 has horizontal scales for both cumulative decision cycles and cumulative production firings. In this paper we work primarily in terms of decision cycles, the natural unit of cognitive action in Soar. However, production firings are another familiar base unit. While the number of production firings that make up a decision cycle varies from one decision cycle to the next, Figure 3 shows that the number of cumulative production firings is proportional to the number of cumulative decision cycles, with a conversion factor of approximately 20.5 production firings per decision cycle. 
Figures 2 and 3 together show that chunks built, cumulative decision cycles, and cumulative production firings are all proportional to one another: 6.3 decision cycles per chunk and 20.5 production firings per decision cycle (hence, about 130 production firings per chunk). Our graphs will have extra scales, based on these conversion factors.

Figure 3: Production firings vs. decision cycles: P = 20.5 x D. (Horizontal axis: D = cumulative decision cycles.)

The Cost of 10,000 Chunks

Data from existing EBL systems (Etzioni, 1990; Minton, 1988a; Cohen, 1990), including some small Soar systems (Tambe et al., 1990), predicts that a large number of chunks will exact a heavy match cost. This increase in match cost due to the accumulation of a large number of chunks is referred to as the average growth effect (AGE).4 The prediction of a high AGE in Dispatcher-Soar is bolstered by static measurements of the Rete algorithm (Forgy, 1982) employed in Soar for production match: with the addition of 10,000 chunks, the Rete network size (interpreted code size) increases eight-fold.

4While (Tambe et al., 1990) reported on the AGE phenomenon, their main focus was on expensive chunks, i.e., individual chunks that consume a large amount of match effort. Our focus is on the impact of large numbers of individually inexpensive chunks.

The mystery that emerges from Dispatcher-Soar is the absence of any average growth effect! Figure 4 illustrates this. It plots the number of cumulative decision cycles over the 200 base problems on the horizontal axis, and the number of token changes per decision cycle on the vertical axis. A token is a partial instantiation of a production, indicating what conditions have matched with what variable bindings.
The number of tokens generated by the matcher during a decision cycle is a commonly used implementation-independent measure of the amount of match effort in that decision cycle (Gupta, 1986; Minton, 1988b; Nayak, Gupta, & Rosenbloom, 1988; Tambe et al., 1990). In Figure 4, each point indicates the number of token changes per decision cycle during a 10-decision-cycle interval. The figure shows that over the course of about 65,000 decision cycles and the addition of 10,112 chunks, there is no increase in token changes per decision cycle. In fact, a least-squares fit indicates a slightly decreasing trend.

Figure 4: Token changes per decision cycle. (Horizontal scales: cumulative decision cycles and cumulative production firings in thousands; top scale: number of chunks built.)

A variety of hypotheses have so far failed to explain this lack of average growth effect. One hypothesis was that the chunks were simply sharing existing parts of the Rete network, thus not adding to the match activity (Tambe et al., 1990). But this is not the case, as the network size turns out to grow in rough proportion to the number of productions in the system. Another hypothesis was that a finer-grained analysis at a level below the decision cycles would reveal the missing average growth effect. Figure 5 presents such a finer-grained analysis: it plots the number of token changes per change to working memory during the 200 base problems.5 But it shows no AGE either. Yet another hypothesis was that the chunks do not incur match cost, because they don't match variables or because they never match. But the chunks test many variables in their conditions, not just constants.

5As noted earlier, each decision cycle involves about 20 production firings. Each of these involves executing right-hand-side actions to add or delete working-memory elements.
And they often match, as the next section indicates.

Figure 5: Token changes per action. (Horizontal scales: cumulative decision cycles and cumulative production firings in thousands; top scale: number of chunks built.)

Interestingly, the absence of AGE is consonant with the expectations of a different section of the AI community. One of the key experimental results in the area of parallel production systems is that the match activity in a production system does not increase with the number of productions (Gupta, 1986; Miranker, 1987; Oflazer, 1987). However, this experimental result is based on non-learning OPS5 production systems (Brownston et al., 1985) with substantially smaller numbers of productions than Dispatcher-Soar (~100 to ~1000 productions). The relation between the results in this paper and those in (Gupta, 1986; Miranker, 1987; Oflazer, 1987) remains unclear.

A second unexpected phenomenon in Dispatcher-Soar is that, in spite of the lack of increase in token changes, the total execution time per decision cycle shows a gradual increase as chunks are added to the system. Figure 6 plots this time per decision cycle over the course of the 200 base problems. Again, each point indicates the time per decision cycle during a 10-decision-cycle interval. A linear fit indicates that the time per decision cycle is initially 1.28 seconds, and increases by 0.47 seconds per 10,000 chunks. This is certainly a growth effect, but its source is unclear. The total execution time includes the time spent in matching, chunk building, tracing and other activities of the Soar system. A new, highly instrumented version of Soar is currently under development (Milnes et al., 1992), which should allow us to understand this phenomenon.
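The trend lines quoted for Figures 4 and 6 are ordinary least-squares fits; a minimal sketch of such a fit, checked against synthetic data that follows the reported Figure 6 line (the data values here are illustrative, not measurements from the paper):

```python
def least_squares_fit(xs, ys):
    """Ordinary least-squares line ys ~ a + b*xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Synthetic data on the reported line: 1.28 s + 0.47 s per 10,000 chunks.
chunks = [0, 2000, 4000, 6000, 8000, 10000]
secs = [1.28 + 0.47 * c / 10000.0 for c in chunks]
a, b = least_squares_fit(chunks, secs)
assert abs(a - 1.28) < 1e-9          # intercept: initial time per decision cycle
assert abs(b - 0.47 / 10000.0) < 1e-12  # slope: growth per chunk
```

The same routine applied to the Figure 4 token-change data would return a slightly negative slope, which is the "slightly decreasing trend" reported above.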
The Effectiveness of Learning

To explore the effectiveness of Dispatcher-Soar's learning, Figure 7 plots the number of decision cycles for each of the 200 base problems in sequence. The horizontal axis plots the problem number and the vertical axis plots the problem duration in decision cycles. The figure shows more long problems at the beginning than at the end and that almost all the later problems are short. As Dispatcher-Soar continuously accumulates chunks, it is able to solve problems faster, i.e., in fewer decision cycles, which is direct evidence for effective learning.

Figure 6: Time per decision cycle. (Horizontal scales: cumulative decision cycles and cumulative production firings in thousands; top scale: number of chunks built.)

Figure 7: Durations of the base problems. (Horizontal axis: base problem number, 0-200.)

For a continuously learning system, we can ask whether all the useful learning occurs early on, or whether later learning is still effective. Are the rules learned during the second 10 problems as effective as those learned during the first 10 problems? Unfortunately, a fine-grained analysis of Figure 7 is useless because of the variance in the problem durations. (This has two causes: varying difficulty of the base problems and varying amounts of chunk transfer.) We could smooth out the variance by taking an average every 50 problems - if the average problem-solving time of the last 50 problems were less than that of the previous 50, we could conclude that the learning in later problems is useful. But averaging prevents getting fine-grained information on the effectiveness of learning. Averaging 50 problems, we will not get information about the effectiveness of the learning from the first 10 problems as compared to the second 10 problems. Instead of averaging, we used a set of probe problems.
We selected 10 probe problems, generated by the same random generator as the base problems, but disjoint from the base problems.6 To ensure that this small set adequately covered the space of problems, we selected problems of varying difficulty: 3 easy, 3 medium, and 4 hard. The difficulty of a problem was judged by giving it to the initial Dispatcher-Soar system (no chunks) and seeing how many decision cycles the system needed to solve it. The system's performance was then tested on each probe problem, starting with the initial productions plus the chunks from a varying number of base problems. In trial 1, we ran each probe problem starting with just the initial 1,819-production system.7 In trial 2, we ran each probe problem starting with the initial productions plus the chunks from base problem 1. And so on, at selected intervals, up to the last trial, in which we ran each probe problem starting with the initial productions plus all the chunks from the 200 base problems.

6All the problems must be different. If Soar were presented with the same problem twice, the second attempt would take just 4 decision cycles by firing a top-level chunk.

Figure 8 shows how long the system took to solve the probe problems, plotted against the trial number. The three lines (top to bottom) represent the average number of decision cycles needed for the hard, medium, and easy probe problems. The number of decision cycles goes down sharply at first. Hence, the chunks learned in the early base problems are very useful. The decision cycles keep decreasing, but less sharply, throughout the trials. Hence, chunks learned in the later base problems are still useful, but not as useful as the ones from earlier base problems.

Figure 8: Probe-problem durations, grouped by difficulty. (Horizontal axis: trial number, 0-200.)
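The trial procedure just described can be sketched as a loop: learn on base problems one at a time, and after each one run every probe fresh from the accumulated rule set. This is a hedged sketch under our own simplified interface; `solve` is a hypothetical stand-in for running Soar on a problem, returning the decision cycles used and the chunks learned:

```python
def run_probe_trials(initial_rules, base_problems, probes, solve):
    """After learning on the first k base problems, time every probe.
    Probe runs start from the accumulated rules, and the probes' own
    chunks are discarded, so probes never learn from each other."""
    results = []
    learned = []                       # chunks accumulated from base problems
    for k, problem in enumerate(base_problems, start=1):
        _, new_chunks = solve(initial_rules + learned, problem)
        learned = learned + new_chunks
        trial = [solve(initial_rules + learned, p)[0] for p in probes]
        results.append((k, trial))
    return results

# Toy `solve` in which more rules mean fewer decision cycles:
def fake_solve(rules, problem):
    return max(4, 100 - len(rules)), ["chunk-%s-%d" % (problem, len(rules))]

out = run_probe_trials([], ["b1", "b2", "b3"], ["p1"], fake_solve)
durations = [trial[0] for _, trial in out]
assert durations == [99, 98, 97]       # probe gets faster as chunks accumulate
```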
Figure 9 shows the average number of decision cycles the system needed to solve a probe problem (of any difficulty) plotted against the trial number on a log-log scale. The points fall roughly in a straight line: Dispatcher-Soar's learning follows a power law. This power law of practice shows up throughout human skill acquisition, and has been previously observed in Soar (Newell, 1990). That the probe-problem durations follow such a simple rule is remarkable in itself, and strengthens our conviction that the use of probe problems is a crucial technique for the analysis of continuously learning systems.

7The system was allowed to learn as it solved a probe problem. However, it was started fresh for each of the 10 probes - none of the other probes' chunks were used. The durations given here exclude a handful of decision cycles spent waiting for the database to respond to queries.

Figure 9: Probe-problem durations, log-log: T = 1683 x N^(-...). (Horizontal axis: N = trial number.)

Taking the ratio of the probe-problem durations in the last and first trials, we see that the overall speedup of the probe problems due to the learning on the 200 base problems was a factor of 7. This gives us an analogue to the one-shot learning analysis done in many studies of learning systems (including Soar) (Minton, 1988a; Rosenbloom et al., 1985), in which the performance of a system is compared before and after learning; i.e., a set of problems is run before any learning takes place; then learning occurs; then the problems are rerun after learning is finished.

Overall, 10.3% of the chunks built during the base problems ever transferred to later base problems. This number seems low at first, but upon further examination it is quite reasonable. First, chunks built in later problems have little opportunity to transfer, since few other problems were run while those chunks were in the system. In fact, the chunks built during problem 200 never get a chance to transfer.
Such chunks are hardly useless - with more base problems, the fraction of chunks from the first 200 problems that transfer would increase. Second, some chunks in child spaces never get a chance to transfer, because in all the situations where they would, some chunk in a parent space transfers first and cuts off the subgoaling before the child space arises. The rrep-for-ent space is a good example. It built 55 chunks, but none of them ever transferred - for each one, a corresponding chunk from the build-relational-rep space always transferred instead.

The effectiveness of learning also varies between problem spaces. Figure 1 (4) shows, for each problem space, what percent of the chunks built from that space in one base problem transferred to some later base problem. For those chunks that did transfer, Figure 1 (5) indicates the average number of transfers per chunk. There is wide variance in the effectiveness of learning by problem space - the percent of chunks that transfer varies all the way from 0% to 100%. Learning was most effective in those spaces where a small number of total chunks were built, but those chunks transferred often.

Questions

This investigation is an initial exploration into very large learning systems - systems that acquire a very large number of rules with diverse functionality. The single system used so far, Dispatcher-Soar, has learned over 10,000 rules at the time of this paper. We expect to grow Dispatcher-Soar much further and to grow additional systems. As phenomena and data accumulate, we expect to answer some significant questions about such systems - about what it is like out there. Discovering the right questions is a major part of the enterprise. The general ones posed in the introduction certainly remain relevant. But the results presented here already permit us to formulate some sharper ones.

1. Why is there no average growth effect? Will that remain true further out?
Can we have learning systems with 100,000 rules? Does this apply to EBL systems in general?

2. Why does the learning rate stay constant? Is this an architectural constant, independent of task and application system? Will different systems have different (but constant) rates? What is the relationship between the constant total rate and the strongly varying rates by problem space?

3. Does the power law of practice hold further out? Does it apply for different tasks and systems? What is behind it?

4. What is the nature of the rules contributing to early learning (which is very effective) and of those contributing to late learning (which is less effective)?

5. What characterizes spaces that become inactive? How do they affect the mix of rules that are being learned? Can they become active again?

6. What about the structure of Dispatcher-Soar (and other systems and tasks) determines its long-term growth and learning properties?

References

Barker, V., O'Connor, D. 1989. Expert systems for configuration at Digital: XCON and beyond. Communications of the ACM, 32(3), 298-318.

Brainware. Spring 1990. The BMT expert system. Pragmatica: Bulletin of the Inductive Programming Special Interest Group (IPISG).

Brownston, L., Farrell, R., Kant, E. and Martin, N. 1985. Programming expert systems in OPS5: An introduction to rule-based programming. Reading, Massachusetts: Addison-Wesley.

Cohen, W. W. 1990. Learning Approximate Control Rules of High Utility. Proceedings of the Sixth International Conference on Machine Learning, 268-276.

DeJong, G. F. and Mooney, R. 1986. Explanation-based learning: An alternative view. Machine Learning, 1(2), 145-176.

Etzioni, O. 1990. A structural theory of search control. Ph.D. diss., School of Computer Science, Carnegie Mellon University.

Forgy, C. L. 1982. Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19(1), 17-37.

Gupta, A. 1986. Parallelism in production systems. Ph.D.
diss., Computer Science Department, Carnegie Mellon University. Also a book, Morgan Kaufmann, (1987).

Laird, J. E., Newell, A. and Rosenbloom, P. S. 1987. Soar: An architecture for general intelligence. Artificial Intelligence, 33(1), 1-64.

Laird, J. E., Rosenbloom, P. S. and Newell, A. 1986. Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1(1), 11-46.

Milnes, B.G., Pelton, G., Hucka, M., and the Soar6 Group. 1992. Soar6 specification in Z. Soar project, Carnegie Mellon University, Unpublished.

Minton, S. 1988. Quantitative results concerning the utility of explanation-based learning. Proceedings of the National Conference on Artificial Intelligence, 564-569.

Minton, S. 1988. Learning Effective Search Control Knowledge: An explanation-based approach. Ph.D. diss., Computer Science Department, Carnegie Mellon University.

Miranker, D. P. 1987. Treat: A better match algorithm for AI production systems. Proceedings of the Sixth National Conference on Artificial Intelligence, 42-47.

Mitchell, T. M., Keller, R. M., and Kedar-Cabelli, S. T. 1986. Explanation-based generalization: A unifying view. Machine Learning, 1(1), 47-80.

Nayak, P., Gupta, A. and Rosenbloom, P. 1988. Comparison of the Rete and Treat production matchers for Soar (A summary). Proceedings of the Seventh National Conference on Artificial Intelligence, 693-698.

Newell, A. 1990. Unified Theories of Cognition. Cambridge, Massachusetts: Harvard University Press.

Oflazer, K. 1987. Partitioning in Parallel Processing of Production Systems. Ph.D. diss., Computer Science Department, Carnegie Mellon University.

Rosenbloom, P.S. and Aasman, J. 1990. Knowledge level and inductive uses of chunking (EBL). Proceedings of the National Conference on Artificial Intelligence, 821-827.

Rosenbloom, P. S. and Laird, J. E. 1986. Mapping explanation-based generalization onto Soar. Proceedings of the Fifth National Conference on Artificial Intelligence, 561-567.

Rosenbloom, P. S., Laird, J.
E., Newell, A., and McCarl, R. 1991. A preliminary analysis of the Soar architecture as a basis for general intelligence. Artificial Intelligence, 47(1-3), 289-325.

Rosenbloom, P. S., Laird, J. E., McDermott, J., Newell, A., and Orciuch, E. 1985. R1-Soar: An experiment in knowledge-intensive programming in a problem-solving architecture. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7, 561-569.

Tambe, M. 1991. Eliminating combinatorics from production match. Ph.D. diss., School of Computer Science, Carnegie Mellon University.

Tambe, M., Newell, A., and Rosenbloom, P. S. 1990. The problem of expensive chunks and its solution by restricting expressiveness. Machine Learning, 5(3), 299-348.
Recognition Algorithms for the Loom Classifier

Robert M. MacGregor and David Brill

USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
macgregor@isi.edu, brill@isi.edu

Abstract

Most of today's terminological representation systems implement hybrid reasoning architectures wherein a concept classifier is employed to reason about concept definitions, and a separate recognizer is invoked to compute instantiation relations between concepts and instances. Whereas most of the existing recognizer algorithms are designed to maximally exploit the reasoning supplied by the concept classifier, LOOM has experimented with recognition strategies that place less emphasis on the classifier, and rely more on the abilities of LOOM's backward chaining query facility. This paper presents the results of experiments that test the performance of the LOOM algorithms. These results suggest that, at least for some applications, the LOOM approach to recognition is likely to outperform the classical approach. They also indicate that for some applications, much better performance can be achieved by eliminating the recognizer entirely, in favor of a purely backward chaining architecture. We conclude that no single recognition algorithm or strategy is best for all applications, and that an architecture that offers a choice of inference modes is likely to be more useful than one that offers only a single style of reasoning.

Introduction

LOOM (MacGregor and Burstein, 1991; MacGregor, 1991b) belongs to the family of terminological representation systems descended from the language KL-ONE (Brachman and Schmolze, 1985). A characteristic feature of these systems is their ability to reason with definitional and descriptive knowledge.
A specialized reasoner called a classifier enables these systems to compute subsumption relationships between definitions (to determine when one definition implies another) and to test the internal consistency of definitions and constraints.* The kind of reasoning obtainable with a classifier has proved to be useful in a wide variety of applications (Patel-Schneider et al., 1990).

*This research was sponsored by the Defense Advanced Research Projects Agency under contract MDA903-87-C-0641.

A recognizer is a reasoner that complements the abilities of a classifier. Recognition (also called "realization" (Nebel, 1990)) refers to a process that computes instantiation relationships between instances and concepts in a knowledge base. An instantiation relationship holds between an instance and a concept if the instance satisfies (belongs to) that concept. We have observed that LOOM applications use the recognizer much more than they do the classifier (i.e., they spend more of their time reasoning about instances than about definitions), and we believe that this trait extends as well to applications based on other terminological representation systems. Thus, it is of critical importance that the recognition process be as efficient as possible.

Most classifier-based representation systems treat recognition as a variation of concept classification, and implement recognizers that rely on the classifier to perform most of the necessary inferences. LOOM is perhaps the only terminological representation system to experiment with recognition algorithms that differ significantly from what has become the standard or classical approach to building a recognizer. In this paper, we present the results of experiments that suggest that, for at least some applications, the LOOM algorithms perform better than the classical algorithm.
LOOM also supports an alternative mode for computing instantiation relationships that substitutes a backward chaining prover in place of the recognizer. This backward chaining mode has proven to be much more efficient for some applications. Thus, the adoption of more flexible (and more complex) recognition algorithms has led to improved performance for some LOOM applications.

The Classical Approach to Recognition

Terminological representation systems partition their knowledge bases into a terminological component (TBox) and an assertional component (ABox) (MacGregor, 1990; Nebel and von Luck, 1988). The TBox contains a set of descriptions (one-variable lambda expressions) organized into a subsumption hierarchy. A named description is called a concept. The ABox contains facts about individuals. In LOOM a fact is represented either as an assertion that an instance (individual) satisfies a particular description (e.g., "The individual Fred satisfies the concept Person.") or as an assertion that a role relates two individuals (e.g., "50 is a filler of the role 'age of Fred'."). A knowledge base can also contain constraint axioms of the form "description1 implies description2", having the meaning "Instances that satisfy description1 necessarily satisfy description2."

774 Representation and Reasoning: Terminological
From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Feature              Interpretation
all(R, C)            λx. ∀y. R(x, y) → C(y)
atLeast(k, R)        λx. ∃ k distinct y_i. ∧_i R(x, y_i)
atMost(k, R)         λx. ¬∃ (k+1) distinct y_i. ∧_i R(x, y_i)
filledBy(R, f)       λx. R(x, f)
(role subsumption)   λx. ∀y. R1(x, y) → R2(x, y)

Table 1: Some LOOM Features

A description is either atomic or composite. We call atomic descriptions features - Table 1 lists some LOOM features. A composite description consists of a conjunction of two or more features.1 An instance satisfies a description if and only if it satisfies each of its features.
The LOOM recognizer finds an instantiation relationship between an instance I and a concept C by proving that I satisfies each of the features of C.

Example 1. Suppose we have concepts A and B, role relations R and S, and an individual I with definitions and assertions as follows:

concept A = atMost(1,S).
concept B = atLeast(1,R) and atMost(1,S).
assert A(I), R(I,3), R(I,4).

The fact R(I,3) implies that I satisfies the feature atLeast(1,R), and the fact A(I) implies that I satisfies the feature atMost(1,S). Hence, I is an instance of the concept B.

A classifier computes subsumption relationships between a new description and the already classified descriptions in a description hierarchy. The final step in the classification process is to link the newly-classified description into the hierarchy. A description hierarchy is constructed by starting with an empty network, and classifying descriptions one at a time until all of them have found their place relative to each other in the hierarchy. In Example 1 above, the subsumption test invoked by the classifier would determine that A subsumes (is more general than) B. Classifying A first and then B or vice-versa would result in the same description hierarchy, with A above B.

1LOOM descriptions can also contain disjunctive expressions, which this paper ignores for the sake of simplicity.

The classical approach to classifying (recognizing) instances utilizes an abstraction/classification (A/C) strategy that relies on the classifier machinery to compute instantiation relationships (Kindermann, 1990; Quantz and Kindermann, 1990; Nebel, 1990). Given an instance I, the A/C algorithm operates by (i) computing a description AI representing an abstraction of I, and (ii) classifying AI. I is an instance of each concept that subsumes AI. For example, the description

atLeast(2,R) and filledBy(R,3) and filledBy(R,4) and atMost(1,S)

represents a possible abstraction of the individual I in Example 1.
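The abstraction step (i) can be sketched mechanically. This is a hedged sketch, not LOOM's implementation; the tuple encoding of features is our own:

```python
def abstract_description(role_facts, inherited_features):
    """Build an abstract description from an individual's asserted role
    fillers plus the features inherited from its directly asserted
    concepts (a simplified reading of step (i) of the A/C strategy)."""
    by_role = {}
    for role, filler in role_facts:
        by_role.setdefault(role, []).append(filler)
    features = []
    for role, fs in sorted(by_role.items()):
        features.append(("atLeast", len(fs), role))       # count restriction
        features.extend(("filledBy", role, f) for f in fs)  # explicit fillers
    return features + list(inherited_features)

# Example 1: R(I,3) and R(I,4) asserted, atMost(1,S) inherited from A.
desc = abstract_description([("R", 3), ("R", 4)], [("atMost", 1, "S")])
assert desc == [("atLeast", 2, "R"), ("filledBy", "R", 3),
                ("filledBy", "R", 4), ("atMost", 1, "S")]
```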
This abstraction is subsumed by concepts A and B.

The abstract description of an instance has the potential to be much larger (take up more space) than the originally asserted facts about that instance. Some classical classifiers are turning to an incremental approach wherein successively more detailed partial abstractions of an instance are generated during a recognition cycle (Kindermann, 1991). However, because of concerns with possible performance problems, the LOOM choice is to implement a completely different recognition algorithm. We see two potential areas where performance of the classic A/C algorithm may deteriorate:

(1a) Instances in a knowledge base are typically pairwise distinguishable. Hence, their abstractions, if sufficiently detailed, will also be pairwise distinct. Hence, the number of descriptions in a system that utilizes A/C recognition may grow to be proportional to the number of instances. We expect that for knowledge bases containing thousands or tens of thousands of individuals, the A/C strategy will cause the size of the TBox to become unmanageably large.

(1b) With the A/C algorithm, every update to an instance necessitates generating a new abstraction. Unless old abstractions are deleted, frequent updates may cause the cumulative number of abstractions in the net to be much larger than "just" the number of instances.2

(2) In the classical approach to recognition, all instantiation relationships are continuously cached and kept up to date. Thus, every update to an instance necessitates recomputing instantiation relationships between the instance and every description that it satisfies. As the size of a net grows, this computation becomes increasingly expensive.

To mitigate performance problems (1a) and (1b), LOOM has experimented with recognition strategies designed to reduce the number of additional abstract descriptions generated during the recognition process.
2A strategy that deletes each "old" abstraction risks having to generate that same abstraction over and over.

MacGregor and Brill 775

Figure 1: Recognition Speed over 5 Iterations. (Horizontal axis: trial number, 1-5.)

Query-based Recognition

The LOOM recognition algorithm is designed to minimize the number of new abstractions generated as a by-product of the recognition cycle. To compare an instance I with a description D, the A/C algorithm generates an abstract description AI of I, and then compares AI with D using a standard subsumption algorithm. The LOOM strategy is to implement a specialized subsumption test that allows it to directly compare I against D, thus avoiding the necessity of generating an abstract description. Further details of the LOOM recognition algorithm can be found in (MacGregor, 1988) and (MacGregor, 1991a).

This section presents LOOM's "query-based" recognition strategy, wherein calls to a backward chaining query facility are substituted in place of searches for features within an instance's abstract description. To cope with problem (2), LOOM supports an inference mode wherein instantiation relationships are computed only on demand; few or none of the instantiation relationships for an instance are cached. This is discussed in a later section.

Before looking at experiments that compare different recognition algorithms, we first present the results of an experiment performed using a single recognizer (in this case, the recognizer implemented for LOOM version 1.4). Figure 1 shows the results using a LOOM-based application called DRAMA (Harp et al., 1991). DRAMA is a system that provides intelligent assistance to analysts of logistics databases.
The portion of DRAMA that we used in our tests analyzes data for anomalies, such as inconsistencies and noteworthy changes, and logs and categorizes any data anomalies it encounters. In this experiment, the knowledge base was initially loaded with 672 concept definitions, 481 relation definitions, and 677 constraint axioms. We made five trials over the same sequence of instance creations and updates (25 instances were created), clearing the ABox portion of the knowledge base between trials.

Observe that the performance of the recognizer improves significantly on the second and following trials. We attribute the improved performance of the 1.4 recognizer to the fact that the recognizer generates significantly fewer augmentations to the description network during the second and subsequent trials. We have observed this "learning" effect in a variety of applications.3 Because of this effect, the performance figures we present in the sequel include timings for repeated trials over the same data, to provide indications of LOOM's behavior at both ends of the performance spectrum.

3However, we have also observed situations where an excess of descriptions, created by prior invocations of the recognizer, can cause system performance to degrade.

Query-based Recognition - an algorithm for computing the set of concepts satisfied by an instance I:

(i) Inherit all features from concepts that I satisfies by direct assertion, and inherit all features implied by those concepts via constraint axioms.

(ii) Compute a normalized set of features representing the unification of the set of inherited features.

(iii) Mark all directly asserted concepts; mark all features in the normalized set; and recursively mark the superiors of marked descriptions.

(iv) Classify I, substituting the query-based satisfaction test described below in place of the classical subsumption test. Mark each newly satisfied description.

(v) Inherit all features implied by the most specific descriptions that I has been proved to satisfy in step (iv).

(vi) Repeat steps (ii) through (v) until closure is achieved.
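The (i)-(vi) loop can be sketched as a fixed-point computation. This is a deliberately simplified sketch, not LOOM code; `satisfies(instance, concept, marked)` stands in for the per-feature queries of step (iv), with access to the marks accumulated by steps (i)-(iii) and (v):

```python
def query_based_recognize(instance, asserted_concepts, concepts, satisfies):
    """Repeatedly test unmarked concepts with the query-based
    satisfaction test until no new instantiations are found."""
    marked = set(asserted_concepts)            # step (iii): mark assertions
    changed = True
    while changed:                             # step (vi): repeat to closure
        changed = False
        for concept in concepts:
            if concept not in marked and satisfies(instance, concept, marked):
                marked.add(concept)            # step (iv): newly satisfied
                changed = True
    return marked

# Example 1: B = atLeast(1,R) and atMost(1,S); atMost(1,S) follows from A.
facts = {("R", "I", 3), ("R", "I", 4)}
def satisfies(inst, concept, marked):
    if concept == "B":
        has_r = any(r == "R" and a == inst for (r, a, _) in facts)
        return has_r and "A" in marked   # atLeast(1,R) query + inherited atMost(1,S)
    return False

assert query_based_recognize("I", {"A"}, ["B"], satisfies) == {"A", "B"}
```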
Query-based Satisfaction Test (assumes concepts and features have been marked prior to the test): Instance I satisfies a concept C if I satisfies all unmarked features of C. Iterate over the unmarked features, and execute a query for each one to test its satisfaction by I. For example, if the feature is atLeast(k,R), then retrieve the fillers of the role "R of I", count them, and return true if the sum is at least k. If the feature is atMost(k,R), then return true if the role "R of I" is closed and if the cardinality of the set of fillers of that role is at most k. If the feature is all(R,B), then return true if the role "R of I" is closed and if each filler of that role satisfies the concept B.

Let us apply query-based recognition to the instance in Example 1. In step (i) instance I inherits the feature atMost(1,S) from the concept A. The normalized set computed in step (ii) is just the singleton set containing that feature. In step (iii) we mark the concept A and the feature atMost(1,S). In step (iv) we visit the unmarked concept B, and test for satisfaction of its unmarked features - in this case, we test the feature atLeast(1,R). The feature satisfaction test involves retrieving the filler set ({3,4}) of the role "R of I", computing the cardinality (2 in this case), noting that 2 is at least 1, and returning true. We have proved that I satisfies B, so we mark it. In step (v) we inherit two features from B. Repeating steps (ii) through (v) reveals no new instantiation relationships for I, so the algorithm terminates.

A key difference between query-based recognition and the abstraction/classification strategy is that the former algorithm tends to generate fewer new features. For example, in performing recognition for instance I in Example 1, the A/C algorithm generates the feature atLeast(2,R) as a part of the abstraction of I, while the query-based algorithm generates no new features.
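The per-feature queries just described can be sketched as follows. This is a hedged sketch under our own tuple encoding of features, with `fillers` standing in for LOOM's backward chaining query facility:

```python
def fillers(facts, role, x):
    """Asserted fillers of role `role` of individual `x`."""
    return {y for (r, a, y) in facts if r == role and a == x}

def satisfies_feature(facts, closed_roles, x, feature, instance_test=None):
    """Test one feature of a concept against individual `x` by query."""
    if feature[0] == "atLeast":          # atLeast(k,R): count the fillers
        _, k, role = feature
        return len(fillers(facts, role, x)) >= k
    if feature[0] == "atMost":           # atMost(k,R): role must be closed
        _, k, role = feature
        return role in closed_roles and len(fillers(facts, role, x)) <= k
    if feature[0] == "all":              # all(R,B): closed role, each filler in B
        _, role, concept = feature
        return role in closed_roles and all(
            instance_test(y, concept) for y in fillers(facts, role, x))
    raise ValueError("unknown feature: %r" % (feature,))

# Example 1: R(I,3), R(I,4); atLeast(1,R) succeeds by counting fillers.
facts = {("R", "I", 3), ("R", "I", 4)}
assert satisfies_feature(facts, set(), "I", ("atLeast", 1, "R"))
assert not satisfies_feature(facts, set(), "I", ("atMost", 1, "S"))  # S not closed
assert satisfies_feature(facts, {"S"}, "I", ("atMost", 1, "S"))
```

Note that the atMost and all tests only succeed when the role is known to be closed, mirroring the closed-role requirement stated above.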
We now discuss an extension to the query-based recognition algorithm that was implemented starting with version 1.4 of LOOM. In versions up through LOOM 1.3, the set of instantiation relationships for each recognized instance was cached by recording a list of the most specific descriptions satisfied by that instance. This list is called the "type" of the instance. In Example 1, the type of I is the singleton list containing B (concept B is more specific than concept A). Starting with version 1.4, we changed the recognition algorithm so that whenever the type of an instance (computed at the end of step (iv)) contains more than one description, a new description is created representing the conjunction of the descriptions in that instance's type. This new description then replaces the list of descriptions. Thus, the LOOM 1.4 recognizer occasionally generates new partial abstractions of instances, and hence bears a greater resemblance to the classical A/C recognizer than does its predecessor.

Why did we, as LOOM implementors, make this change, given that generating new abstractions tends to slow things down? In a LOOM-based parsing application (Kasper, 1989) we observed that unification operations over the same sets of descriptions were being performed repeatedly during step (ii) of the recognition cycle (see (MacGregor, 1991b) for a discussion of description unification). Creation of a new description D representing the conjunction of descriptions in step (iv) has the effect of caching the unification operation for that set of descriptions, because the LOOM data structure representing D stores within it the unification of all features inherited by (or implied by) D. The creation of these additional conjunction descriptions can have the effect of increasing the likelihood of triggering an optimization in step (ii) wherein the unification operation is eliminated whenever all inherited features derive from a single description.
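The caching idea can be illustrated with a small sketch. This is our own illustration, not LOOM's data structures: the dict-based descriptions, the cache keyed on description names, and the simplified union-style "unification" are all assumptions.

```python
# Illustrative sketch (not LOOM's implementation) of the caching idea:
# when an instance's type holds several descriptions, build a single
# conjunction description once, so the unification of their feature
# sets can be reused on later recognition cycles.

unification_cache = {}

def unify_features(feature_sets):
    """Normalized feature set; simplified here to a plain union."""
    out = set()
    for fs in feature_sets:
        out |= fs
    return frozenset(out)

def conjunction_of(type_descriptions):
    """Return a cached conjunction description for a set of descriptions."""
    key = frozenset(d["name"] for d in type_descriptions)
    if key not in unification_cache:
        unification_cache[key] = {
            "name": "&".join(sorted(key)),
            "features": unify_features(d["features"] for d in type_descriptions),
        }
    return unification_cache[key]
```

Repeated calls over the same type set hit the cache, which is the effect the text attributes to the 1.4 conjunction descriptions.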
When we modified the 1.4 recognizer to generate these additional conjunctive descriptions, we observed speed-ups of up to 20 percent in the parsing application. Unfortunately, we observed an opposite effect when we tested the effects of this modification on the DRAMA application.

Figure 2: 1.3 Recognizer vs. 1.4 Recognizer

As illustrated in Figure 2, the effect of generating additional descriptions was a degradation in performance of around 30 percent. This is due primarily to the fact that the 1.4 recognizer generates 83 new descriptions during the first trial, while the 1.3 recognizer generates only 6 new descriptions. However, the degradation persists in the second and subsequent trials, when neither version generates a significant number of new descriptions. In the DRAMA application we found a large variation across the different ABox instances created during the test run, causing the leverage derived from caching unifications to be low. Hence, the 1.4 recognizer ran more slowly because it had to wade through a larger number of descriptions in the hierarchy. We conjecture that the performance of DRAMA using a fully classical recognizer would be even worse than that exhibited by the version 1.4 recognizer.

Backward Chaining Tests

LOOM's query-based recognizer algorithm is designed to address performance problems (1a) and (1b) above, which are concerned with the creation of excessive numbers of descriptions during recognition. The results in Figure 2 indicate that for some applications, these concerns are real. However, the query-based recognition algorithm described above does not address performance problem (2). We addressed that problem by implementing a purely backward chaining algorithm for computing instantiation relationships between instances and descriptions.
Because the query part of the LOOM recognition algorithm is itself a backward chainer, construction of the new algorithm was much simpler than would have been the case if we had originally implemented a classical recognizer.

MacGregor and Brill 777

Figure 3: Accelerated vs. Non-accelerated Recognition (x-axis: Trial Number 1-5; series: LOOM 1.4.1 and LOOM 1.4.1 Accel.)

Figure 3 shows the results when the DRAMA application was run in an accelerated mode where the percentage of concepts for which instantiation relationships were computed by the recognizer was reduced from 79 percent to 36 percent. Computation of the other 64 percent of the instantiation relationships was performed (only on demand) by the backward chaining facility. We observed a decrease of between 46 and 72 percent in the total amount of time spent computing instantiation relationships. Future modifications to LOOM should enable us to completely eliminate the use of the recognizer for the DRAMA application, hopefully resulting in further improvements in that application's performance.

Given the positive results obtained with the accelerated mode, why don't we run all LOOM applications in that mode? The answer is that inference using the LOOM backward chainer is strictly weaker than inference obtained using the LOOM recognizer, because the backward chainer does not implement an analogue of the constraint propagation process (Nebel and von Luck, 1988; Nebel, 1990) that is interleaved between invocations of the recognition cycle. Constraint propagation involves propagating the effects of features satisfied by an instance to adjacent instances in the ABox. For example, suppose a knowledge base contains the axiom

A implies all(R,B)

where A and B are concepts, and R is a role, and suppose the LOOM recognizer proves that an instance I satisfies A. I will then inherit the feature all(R,B).
During the next constraint propagation phase, the implication implicit in that feature is used as the basis for inferring that all fillers of the role "R of I" necessarily satisfy B, i.e., the constraint of satisfying B propagates to each of the filler instances.

A backward chaining procedure for evaluating the above axiom would look as follows: "To prove that an instance x satisfies B, try to prove that some filler of the role '(inverse of R) of x' satisfies A." In our estimation, extending the capabilities of the backward chainer to include inferences of this sort would be likely to significantly degrade its performance, possibly negating its utility. Instead, LOOM has made the following architectural decision: "An axiom of the form 'description1 implies description2' can be applied during backward chaining inference only in the case that description2 is a concept." In effect, during backward chaining LOOM utilizes only those axioms that are syntactic analogues of Horn clauses. Thus, for example, the axiom "all(R,B) implies A" would be applicable during backward chaining, but the axiom "A implies all(R,B)" would not.

Other classes of constraint propagation that are applied during recognition, but not during backward chaining, include merging pairs of instances that are inferred to be equivalent, and generating Skolem individuals to fill roles in cases where a role filler is known to exist, but where the identity of the filler is not known. Some LOOM applications, notably natural language parsing, explicitly depend on this kind of constraint propagation (Kasper, 1989); these applications cannot profit from the accelerated mode. The DRAMA application provides an existence proof of a real application that can execute correctly (and more efficiently) using the weakened form of inference provided by LOOM's backward chaining facility.
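The architectural restriction quoted above can be sketched as a simple predicate. The tuple encoding of descriptions is our own assumption for illustration; only the stated rule (the consequent must be a bare concept) comes from the text.

```python
# Sketch of the stated restriction (encoding is ours): an axiom
# "antecedent implies consequent" is usable by the backward chainer only
# when its consequent is a bare concept name, i.e., a Horn-clause
# analogue. Compound descriptions are nested tuples like ("all", "R", "B").

def usable_in_backward_chaining(axiom):
    _antecedent, consequent = axiom
    return isinstance(consequent, str)   # a concept name, not a compound term

# The two examples from the text:
horn_like = (("all", "R", "B"), "A")     # all(R,B) implies A
non_horn  = ("A", ("all", "R", "B"))     # A implies all(R,B)
```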
Discussion and Conclusions

One characteristic that differentiates the recognizer algorithms we have considered is the number of abstract descriptions that they generate as a side-effect of recognition. The 1.3 recognizer generates relatively few, the classical algorithm generates relatively many, and the 1.4 recognizer lies somewhere in between. The instrumentation we performed on our algorithms suggests that when 1.3 outperformed 1.4, the difference in speed was due to the fact that 1.3 generated fewer abstract descriptions. Hence, while the jury is still out as to whether 1.3 or 1.4 is the better all-around performer, we interpret our tests as casting serious doubt as to the viability of a fully classical recognizer. However, the results of our experiments should be regarded as suggestive rather than as definitive. It is clear that we could have formulated a test application that would produce very different (i.e., opposite) results. The DRAMA application we used has the virtue that it is both real and non-trivial.

With respect to modes of inference, we observe that no single inference strategy can deliver acceptable performance across a spectrum of domain applications. This lesson is of course no surprise to AI practitioners. In many reasoning systems inference is controlled by explicitly marking rules as either forward or backward chaining. We consider embedding control information within individual rules to be an ultimately bankrupt approach to performance enhancement, for at least two reasons: First, we believe that the task of tuning such rules will become increasingly untenable as the size of rule bases increases. Second, we believe that it will eventually become commonplace that a rule base will be shared by two or more applications that need to apply different modes of inference to that same set of rules. In LOOM, the application chooses which mode of inference is most suitable.
Thus, for example, a LOOM-based diagnosis system might choose to run in backward chaining mode, while a natural language explanation system running against the same rule base might choose to invoke the recognizer to assist its own reasoning processes.

The most general conclusion indicated by our experiments is that complex, multi-modal classifier architectures appear to be faster than simple (more elegant) architectures, at least for uniprocessor-based systems. This is basically a negative result, since it increases our estimation of the difficulty involved in building a knowledge representation system that is both general purpose and efficient.

Acknowledgement

The authors wish to thank Tom Russ for his help in producing the graphs used in this paper, and Craig Knoblock and Bill Swartout for their criticisms of an earlier draft of this paper.

References

Brachman, R.J. and Schmolze, J.G. 1985. An overview of the KL-ONE knowledge representation system. Cognitive Science 9(2):171-216.

Harp, B.; Aberg, P.; Neches, R.; and Szekely, P. 1991. DRAMA: An application of a logistics shell. In Proceedings of the Annual Conference on Artificial Intelligence Applications for Military Logistics, Williamsburg, Virginia. American Defense Preparedness Association. 146-151.

Kasper, Robert 1989. Unification and classification: An experiment in information-based parsing. In Proceedings of the International Workshop on Parsing Technologies, Pittsburgh, PA.

Kindermann, Carsten 1990. Class instances in a terminological framework: an experience report. In Marburger, H., editor, GWAI-90: 14th German Workshop on Artificial Intelligence, Berlin, Germany. Springer-Verlag. 48-57.

Kindermann, Carsten 1991. Personal communication.

MacGregor, Robert and Burstein, Mark H. 1991. Using a description classifier to enhance knowledge representation. IEEE Expert 6(3):41-46.

MacGregor, Robert 1988. A deductive pattern matcher.
In Proceedings of AAAI-88, The National Conference on Artificial Intelligence, St. Paul, Minnesota. AAAI. 403-408.

MacGregor, Robert 1990. The evolving technology of classification-based knowledge representation systems. In Sowa, John, editor, Principles of Semantic Networks: Explorations in the Representation of Knowledge. Morgan Kaufmann. Chapter 13.

MacGregor, Robert 1991a. Inside the LOOM description classifier. SIGART Bulletin 2(3):88-92.

MacGregor, Robert 1991b. Using a description classifier to enhance deductive inference. In Proceedings of the Seventh IEEE Conference on AI Applications, Miami, Florida. IEEE. 141-147.

Nebel, Bernhard and von Luck, Kai 1988. Hybrid reasoning in BACK. Methodologies for Intelligent Systems 3:260-269.

Nebel, Bernhard 1990. Reasoning and Revision in Hybrid Representation Systems, volume 422 of Lecture Notes in Artificial Intelligence. Springer-Verlag, Berlin, Germany.

Patel-Schneider, P.F.; Owsnicki-Klewe, B.; Kobsa, A.; Guarino, N.; MacGregor, R.; Mark, W.S.; McGuinness, D.; Nebel, B.; Schmiedel, A.; and Yen, J. 1990. Term subsumption languages in knowledge representation. The AI Magazine 11(2):16-23.

Quantz, Joachim and Kindermann, Carsten 1990. Implementation of the BACK system version 4. KIT Report 78, Department of Computer Science, Technische Universität Berlin, Berlin, Germany.
Computing Least Common Subsumers in Description Logics

William W. Cohen (AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974; wcohen@research.att.com), Alex Borgida* (Dept. of Computer Science, Rutgers University, New Brunswick, NJ 08903; borgida@cs.rutgers.edu), and Haym Hirsh (Dept. of Computer Science, Rutgers University, New Brunswick, NJ 08903; hirsh@cs.rutgers.edu)

Abstract

Description logics are a popular formalism for knowledge representation and reasoning. This paper introduces a new operation for description logics: computing the "least common subsumer" of a pair of descriptions. This operation computes the largest set of commonalities between two descriptions. After arguing for the usefulness of this operation, we analyze it by relating computation of the least common subsumer to the well-understood problem of testing subsumption; a close connection is shown in the restricted case of "structural subsumption". We also present a method for computing the least common subsumer of "attribute chain equalities", and analyze the tractability of computing the least common subsumer of a set of descriptions, an important operation in inductive learning.

Introduction and Motivation

Description logics (DLs) or terminological logics are a family of knowledge representation and reasoning systems that have found applications in several diverse areas, ranging from database interfaces [Beck et al., 1989], to software information bases [Devanbu et al., 1991], to financial management [Mays et al., 1987]. They have also received considerable attention from the research community (e.g., [Woods and Schmolze, 1992]).

DLs are used to reason about descriptions, which describe sets of atomic elements called individuals. Individuals can be organized into primitive classes, which denote sets of individuals, and are related through binary relations called roles (or attributes when the relation is functional).
For example, the individuals Springsteen and BorntoRun might be related by the sings role, and Springsteen might be an instance of the primitive class PERSON. Descriptions are composite terms that denote sets of individuals, and are built from primitive classes (such as PERSON) and restrictions on the properties an individual may have, such as the kinds or number of role fillers (e.g., "persons that sing at least 5 things").

*On sabbatical leave.

For example, the statement "all songs sung by Springsteen (and there are at least 5) are set in New Jersey" could be expressed by attaching to the individual Springsteen the description (AND (AT-LEAST 5 sings) (ALL sings (FILLS setting NJ))).

Knowledge base management systems (KBMS) based on DLs perform a number of basic operations on descriptions: for example, checking if a description is incoherent, or determining if two descriptions are disjoint. An especially important operation on descriptions is testing subsumption: D1 subsumes D2 iff it is more general than D2. Efficient implementation of such operations allows a KBMS to organize knowledge, maintain its consistency, answer queries, and recognize conditions that trigger rule firings.

This paper introduces a new operation for description logics: computing the least common subsumer (LCS) of a pair of concepts (i.e., finding the most specific description in the infinite space of possible descriptions that subsumes a pair of concepts1). This operation can also be thought of as constructing a concept that describes the largest set of commonalities between two other concepts. In logic programming, similar operations called "least general generalization" and "relative least general generalization" have been extensively studied [Frisch and Page, 1990; Plotkin, 1969; Buntine, 1988]; in the context of DLs, there are a number of circumstances in which the LCS operation is useful.

Learning from examples.
Finding the least general concept that generalizes a set of examples is a common operation in inductive learning. For example, [Valiant, 1984] proved that k-CNF (a class of Boolean functions) can be probabilistically learned by computing the LCS of a set of positive examples; also, many experimental learning systems make use of LCS-like operations [Flann and Dietterich, 1989; Hirsh, 1992]. A companion paper explores the learnability of DLs via LCS operations [Cohen and Hirsh, 1992]; DLs are useful for inductive learning because they are more expressive than propositional logic, but still (in some cases) tractably learnable.

1 This should not be confused with the "most specific subsumer" operation, which searches the (finite) space of named concepts to find the most specific named concept(s) that subsumes a single concept [Woods and Schmolze, 1992].

From: AAAI-92 Proceedings. Copyright ©1992, AAAI (www.aaai.org). All rights reserved.

Knowledge-base "vivification" (e.g., [Borgida and Etherington, 1989; Levesque, 1986]). Reasoning with a knowledge base that contains disjunctive facts is often intractable. Vivification is a way of logically weakening a knowledge base to avoid disjunction: for example, rather than encoding in the knowledge base the fact that PIANIST(Jill) ∨ ORGANIST(Jill), one might elect to encode some conjunctive approximation to this disjunction, such as KEYBOARD-PLAYER(Jill). In DLs that reason tractably with non-disjunctive descriptions, replacing a disjunction D1 ∨ … ∨ Dn with the LCS of D1, …, Dn is a vivification operation, as it leads to an approximation that can be reasoned with effectively.

Other reasoning tasks. Frisch and Page [1990] argue that certain types of abduction and analogy can also be performed using LCS operations. Also, many specific applications of LCS operations have been described.
For example, Schewe [1989] proposes an approach to the important problem of developing variants of existing software using the Meson DL; a key notion in his approach is the use of the LCS of pairs of descriptions, denoting the desired and existing portions of code respectively.

In the remainder of this paper, we first precisely define the LCS of two descriptions. We then develop an understanding of the LCS operation by analyzing the relation of LCS to the well-studied and well-understood problem of subsumption. A close connection is shown for the special case of "structural subsumption": for example, given a recursive structural subsumption algorithm and definitions of LCS for the base cases, one can mechanically construct a recursive LCS algorithm. Finally, we analyze the complexity of computing the LCS of attribute chain equalities, an important construct for DLs that normally requires a non-structural subsumption algorithm. Our results are not specific to any particular DL; however, we will present our examples in a specific DL, namely CLASSIC [Borgida et al., 1989], the syntax of which is summarized in an appendix.

The lcs Operator

Let L be a description logic and let ⇒ be its subsumption relationship; i.e., if description D1 subsumes D2, then we will write D2 ⇒ D1. By definition, ⇒ must be a partial order on (denotations of) descriptions. It is possible that D1 ⇒ D2 and D2 ⇒ D1 without D1 and D2 being syntactically identical; in this case, we will say that D1 and D2 are semantically equivalent, and write D1 ≡ D2. C ∈ L is a least common subsumer (LCS) of D1 and D2 iff (a) C subsumes both D1 and D2, and (b) no other common subsumer of D1 and D2 is strictly subsumed by C. Notice that in general, a least common subsumer may not exist, or there may be many semantically different least common subsumers.
We define the lcs operator to return a set containing exactly one representative of the equivalence class (under ≡) of every least common subsumer of D1 and D2. More precisely:

Definition 1 If D1 and D2 are concepts in a description language L, then lcs(D1,D2) is a set of concepts {C1, …, Ci, …} such that (a) each Ci is a least common subsumer of D1 and D2, (b) for every C that is a least common subsumer of D1 and D2, there is some Ci in lcs(D1,D2) such that Ci ≡ C, and (c) for i ≠ j, Ci ≢ Cj.

However, all extant DLs provide some description constructor (usually called AND) which builds a description whose denotation is the intersection of the denotations of its arguments. If such a constructor exists, then it can be shown that if any least common subsumer exists, it is unique up to equivalence under ≡. We have the following proposition.

Proposition 1 For any DL with the description intersection constructor (AND), if lcs(D1,D2) is not empty, then it is a singleton set.

(Proof by contradiction: if two incomparable LCSs exist, their intersection is a more specific common subsumer, which could not be equivalent to them.) In the remainder of the paper, we will consider only DLs equipped with a constructor AND.

Relating lcs and Subsumption

The choice of constructors available for building descriptions clearly affects the expressive power of the DL and the complexity of reasoning with it, as well as affecting the computation of the lcs operator. Since much research has been devoted to determining subsumption in various DLs (especially its tractability), it would be useful to find a correlation between computing subsumption and computing lcs. The following two theorems show that this is not likely to be straightforward in the general case:

Theorem 1 There exists a DL such that the subsumption problem is polynomial-time decidable but computing the lcs operator cannot be done in polynomial time.
Proof: Consider the DL Bitn, where descriptions have the form (AND (PRIM ⟨vec⟩)*), where ⟨vec⟩ is a bit vector y of length n such that y = 10…0, y = 0…01, or y has n/2 1's. (The following would then be a description in Bit4: (AND (PRIM 1100) (PRIM 0101) (PRIM 1000)).) The semantics of the PRIM constructor are that it represents a primitive concept (i.e., membership in its extension is explicitly asserted) but with the constraint that (PRIM v1) must be a superset of (PRIM v2) if v1 ∧ v2 = v1.

Cohen, Borgida, and Hirsh 755

A simple polynomial-time algorithm for determining whether some C1 subsumes C2 is to check, for every primitive (PRIM v) appearing in C1, whether C2 has a primitive (PRIM w) such that v ∧ w = v. On the other hand, the computation of lcs((AND (PRIM 10…0)), (AND (PRIM 0…01))) requires exponential time, since the answer (the conjunction of all the other bit-vector primitives in the language) is of exponential size.2

Theorem 2 There exists a DL for which the lcs operator can be computed in linear time, while subsumption is co-NP-hard.

Proof: Consider the DL containing the constructors PRIM, AND, and OR (where OR denotes disjunction). For this language lcs(C,D) = {(OR C D)} is a correct implementation. However, subsumption is co-NP-hard; see, for example, [Borgida and Etherington, 1989] for a proof. (We note, however, that for this DL testing semantic equivalence is also intractable, and thus the lcs as we compute it is perhaps less useful for reasoning.)

However, although lcs and subsumption appear to be unrelated in the general case, they are closely related in the restricted setting described below.

Relating lcs and Structural Subsumption
Many, though not all, DLs reason by first reducing descriptions to some normal form, where im- plicit facts are explicated, inconsistencies (such as the one above) are detected, and so on. Subsumption tests and other operations are then performed on normal- form representations of the description via relatively simple algorithms-algorithms which need not con- sider the possible interactions among various construc- tors. In particular, subsumption testing can be done on normal-form descriptions by independently consid- ering the parts of descriptions built with the same con- structor. We will write normalized descriptions (as opposed to CLASSIC descriptions) as first-order terms without vari- ables, where the various concept constructors appear as functors, with integers, tuples, lists, other terms, etc, as arguments (e.g., a.nd( [atleast (5, sings) s all(sings,fills(setting,NJ))]). In such anormal 2Note that the problem of actually computing lcs can be made less trivial by having Bit, contain only some suf- ficiently large random subset of the bit vectors with p l’s, so that the subclass hierarchy must be traversed to find the common (primitive) ancestors. form, the subsumption algorithm3 Subsumes? would return false on the invo- cation Subsumes?(atmost(foo) , atleast(bar)) be- cause the constructors involved are different. We will call such a subsumption relationship structural sub- sumption4: Defiaition 2 Subsumes? is a structural subsumption relationship i$ Subsumes?(kl (a), k@)) is false when- ever ICI # k2. The list of DLs for which structural subsumption is used include Kandor [Patel-Schneider, 19841, Krypton [Brachman et al., 19831, Meson [Edelman and Ows- nicki, 19861, ENTITY-SITUATION [Bergamaschi et al., 1988], CLASSIC (without the SAME-AS constructor) and the logic of [Patel-Schneider, 19891. 
Observe however that a structural subsumption relationship need not be tractable: it may be that putting a description into normal form is difficult, or that for some specific constructor subsumption is difficult to compute.

Given that Subsumes? is a structural subsumption relationship, Subsumes?(k(α), k(β)) can depend only on the specifics of α and β. Thus a structural subsumption relationship can be fully defined by a table specifying Subsumes?(k(α), k(β)) for the various term constructors k; this means that subsumption imposes a partial order (which we denote ≤k) on the space Vk of possible arguments to constructor k. More precisely, we define α ≤k β iff Subsumes?(k(α), k(β)).

As an example, let N denote the natural numbers and Role the set of possible roles. For the constructor AT-MOST in CLASSIC, VAT-MOST is N × Role, and (n1,r1) ≤AT-MOST (n2,r2) is true iff n1 ≤ n2 and r1 = r2. The lcs function for this constructor can be defined as follows:

lcs(atmost(n1,r1), atmost(n2,r2)) = if r1 = r2 then atmost(max(n1,n2), r1) else THING

This definition is based on the definition of the least upper bound (lub) operation in the space VAT-MOST: ignoring role identity, we have ≤ on integers as the partial order, and max as the lub.5 This of course is

3 Notice that we use ⇒ to denote the partial order on descriptions, and Subsumes? to denote the algorithm that implements this partial order.

4 Informally, this term was introduced in [Patel-Schneider, 1989].

5 To be precise, when we say that constructor k has partial order ≤k′ and lub ⊔k′ "ignoring role identity", we mean that the domain is Vk′ × Role, the actual partial order is (x1,r1) ≤k (x2,r2) = (if r1 = r2 then x1 ≤k′ x2 else false), and the actual lub is (x1,r1) ⊔k (x2,r2) = (if r1 = r2 then x1 ⊔k′ x2 else THING), where THING denotes the set of all individuals. Most of the common constructors that deal with roles have relatively simple lubs and partial orders if role identity is ignored.
no accident: in general, if ≤k is being used to define the partial order used for subsumption, then the corresponding lub operation (if it exists) will define the lcs. This observation provides a connection between structural subsumption and lcs computation.

Theorem 3 Suppose D1 and D2 are terms in normal form, so that every constructor k has its arguments taken from a join/upper semi-lattice6 (Vk, ≤k, ⊔k). If subsumption is computed structurally based on ≤k, then lcs(D1,D2) is non-empty, unique, and is specified by lcs(k(α), k(β)) = {k(α ⊔k β)}. Further, if both Subsumes? and the computation of each ⊔k take polynomial time, then the computation of lcs(D1,D2) also takes polynomial time.

The power of this theorem is most apparent when one considers constructors that can be composed together. For example, consider CLASSIC's ALL constructor, which has the associated partial order (ignoring role identity) of ⇒. From the theorem, this definition, and the fact that lcs is the lub operator for the ⇒ partial order, we can derive a recursive definition of lcs for the ALL constructor (see Table 1).7 In general, given a recursive structural Subsumes? algorithm and definitions of lcs for the base cases, one can immediately construct a corresponding recursive lcs algorithm.

Observe also that the theorem allows a concise specification of lcs for an arbitrary set of constructors; this is again illustrated by the table, which provides such a specification for all of the constructors in the CLASSIC DL except for SAME-AS (which we will discuss in the next section). The table can also be readily extended as new constructors are added. For example, if we introduce a concept constructor RANGE to represent Pascal-like integer intervals, then VRANGE is N × N, (n1,n2) ≤RANGE (m1,m2) iff (m1 ≤ n1) ∧ (n2 ≤ m2), and (n1,n2) ⊔RANGE (m1,m2) = (min(n1,m1), max(n2,m2)).
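Theorem 3's recipe can be sketched as a recursive lcs over normalized terms. The tuple encoding is our own assumption; the per-constructor lubs (min for AT-LEAST, max for AT-MOST, the covering interval for RANGE, recursion through ALL) follow the text.

```python
# Sketch of Theorem 3's recipe (our own term encoding): compute lcs
# constructor-by-constructor from each constructor's lub, recursing
# through ALL, and including the RANGE interval example from the text.

THING = ("thing",)            # the top description: no commonality below it

def lcs(c1, c2):
    if c1[0] != c2[0]:
        return THING          # structural: different constructors share only THING
    k = c1[0]
    if k == "atleast":        # lub is min (role identity must match)
        (_, n1, r1), (_, n2, r2) = c1, c2
        return ("atleast", min(n1, n2), r1) if r1 == r2 else THING
    if k == "atmost":         # lub is max
        (_, n1, r1), (_, n2, r2) = c1, c2
        return ("atmost", max(n1, n2), r1) if r1 == r2 else THING
    if k == "range":          # lub is the smallest covering interval
        (_, lo1, hi1), (_, lo2, hi2) = c1, c2
        return ("range", min(lo1, lo2), max(hi1, hi2))
    if k == "all":            # recurse on the value restriction
        (_, r1, d1), (_, r2, d2) = c1, c2
        return ("all", r1, lcs(d1, d2)) if r1 == r2 else THING
    raise ValueError(f"unhandled constructor {k}")
```

As the theorem promises, the recursive case for ALL falls out mechanically once the base-case lubs are fixed.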
Computing lcs for Attribute Chain Equalities

Some DLs support a constructor for testing the equality of attribute chains; an example is CLASSIC's SAME-AS constructor. Attribute chain equalities are useful in representing relationships between parts and subparts of an object; to take an example from natural language processing, one might define a sentence to be "reflexive" if its subject is the same as the direct object of the verb phrase, i.e., if (SAME-AS (subj) (vp direct-obj)). In this section, we will consider computing lcs for the SAME-AS constructor.

6 A join/upper semi-lattice is a poset (Vk, ≤k) with an associated operation ⊔k that returns a unique least upper bound for each pair of elements α and β.

7 Where Ind = {individuals}, Func = {host language functions}, Atom = {atoms}, and Desc = {descriptions}; 2^X denotes the power set (e.g., 2^Ind is the set of all sets of individuals) and X* denotes the set of sequences (of arbitrary length) of members of X. A single sequence is denoted x̄ = (x1, …, xn). Role identity is ignored for AT-MOST, AT-LEAST, and FILLS.

The semantics of SAME-AS can be concisely stated by noting that it is an equality relationship (i.e., it is reflexive, symmetric, and transitive) such that the equality of Ā and B̄ is preserved by appending identical suffixes.8 Consider the sublanguage DL_SAME-AS, containing only conjunctions of SAME-AS terms. Each such description D will partition the space Attr* of all attribute sequences into equivalence classes; let us denote the partition induced by these equivalence classes as π(D). A description D1 subsumes D2 iff π(D1) is a refinement of π(D2) (i.e., all equivalences in D1 also hold in D2). The lcs operator must therefore generate a description of the coarsest partition that is a refinement of both π(D1) and π(D2). Unfortunately, such partitions cannot be represented directly, as there may be infinitely many equivalence classes, each of which may be infinitely large.
Ait-Kaci [Ait-Kaci, 1984] has described a finite representation for these partitions called a ψ-type. Following Ait-Kaci, we will represent a ψ-type (and hence a partition) as a rooted directed graph (V, E, v0) in which each edge is labeled with an attribute; such a graph will be called here an equality graph (EQG). A partition is represented by constructing an EQG such that attribute sequence A is SAME-AS attribute sequence B iff there are two paths in the EQG with labels A and B that lead from the root to the same vertex, or if A and B are the result of adding identical suffixes to two shorter sequences A' and B' where (SAME-AS A' B'). Ait-Kaci [1984] shows that ψ-types form a lattice, and presents efficient algorithms for constructing ψ-types, testing the partial order (⊑), and constructing the glb (AND) of two ψ-types. In the remainder of this section, we present additional algorithms for constructing a ψ-type from a conjunction of SAME-AS restrictions, and for computing the lub (lcs) of two ψ-types.

To construct an EQG that encodes a description D, first add (to an initially empty graph) enough vertices and edges to create distinct paths labeled A and B for each restriction (SAME-AS A B) in D. Then, merge9 the final vertices of each of these pairs of paths. Finally, make the resulting graph "deterministic" by repeatedly merging edges that exit from the same vertex and have the same label.10 When this process is finished, every vertex v in the constructed EQG will represent an equivalence class of attribute sequences, namely those sequences that label paths to v from the root of the EQG. (See Figure 1 for examples.)

LCS computation can be done by performing an analog of the "cross-product" construction for finite-automaton intersection [Hopcroft and Ullman, 1979]: one forms a new graph with vertex set V1 × V2, root (v01, v02), and edge set {((v1, v2), (w1, w2), a) : (v1, w1, a) ∈ E1 and (v2, w2, a) ∈ E2}; here a tuple (v, w, a) denotes an edge from v to w with label a. This construction can be performed in polynomial time, and yields an EQG of size no greater than the product of the sizes of the two input EQG's.

By the above comments, DL_SAME-AS is a DL for which subsumption and lcs are tractable. However, although by Table 1 CLASSIC without SAME-AS is also tractable in the same sense, the SAME-AS and ALL constructors interact in subtle ways, making a straightforward integration of the two DLs impossible (more precisely, the interaction makes it impossible to normalize in a manner that preserves the semantics for ALL presented in Table 1). The technique of this section can, however, be extended to the full CLASSIC language, as is detailed in [Cohen et al., 1992].11

8 I.e., (SAME-AS A B) iff (SAME-AS A·s B·s) for any suffix s. Of course, attribute sequences not mentioned in the SAME-AS condition are in different equivalence classes.

9 To merge two edges, one replaces their destination vertices with a single new vertex.

10 Making the graph deterministic makes it possible to tractably test subsumption, as one can tractably check if one deterministic graph is a subgraph of another.

Cohen, Borgida, and Hirsh 757

Table 1: Structural subsumption and lcs rules for common constructors.

Constructor k | Domain V_k   | Partial Order ≤_k                                    | Upper Bound ⊔_k
ALL           | Role × Desc  | λ((r1,D1),(r2,D2)). if (r1 = r2) D1 ⇒ D2 else false  | λ((r1,D1),(r2,D2)). if (r1 = r2) lcs(D1,D2) else THING
AT-LEAST      | N × Role     | ≥                                                    | min
AT-MOST       | N × Role     | ≤                                                    | max
FILLS         | 2^Ind × Role | ⊇                                                    | ∩
ONE-OF        | 2^Ind        | ⊆                                                    | ∪
TEST          | 2^Fcn        | ⊇                                                    | ∩
PRIM          | Atom         | =                                                    | λ(p1,p2). if (p1 = p2) p1 else THING
AND           | Desc*        | λ(C,D). ∀j ∃i. Ci ⇒ Dj                               | λ(C,D). AND_{i,j} lcs(Ci, Dj)

Computing lcs for Sets of Descriptions

In some applications, notably inductive learning, it is necessary to find the commonalities between relatively large sets of objects. For this reason, we will now consider the problem of computing the lcs of a set of
objects, rather than merely a pair. In the following discussion, |D| denotes the size of a description D.

11 Briefly, one can distribute information about the other constructors through an EQG, tagging the vertices of the EQG with ONE-OF and PRIM constraints and the edges of the EQG with role-related restrictions; efficient (but non-structural) Subsumes?, AND, and lcs algorithms can then be defined on this data structure.

Observe that by definition lcs is commutative and associative: thus it is always possible to compute lcs(D1, ..., Dn) by a series of pairwise lcs computations. However, problems may arise even if the lcs of a pair of descriptions is tractably computable:

Theorem 4 There exists a polynomial function p(n) so that for all n there are D1, ..., Dn ∈ DL_SAME-AS such that |Di| = O(n) for each Di, but p(|lcs(D1, ..., Dn)|) ≥ 2^n.

Proof: Consider the descriptions D1, D2, ..., Dn, with Di defined as

Di = (AND (AND_{j≠i} (SAME-AS () (aj)))
          (AND_{j≠i} (SAME-AS (ai) (ai aj)))
          (SAME-AS () (ai ai)))

where the ai's are all distinct attributes. The partition π(Di) has two equivalence classes: two attribute sequences are equal iff they contain the same number of occurrences (modulo 2) of attribute ai. Recall that for D_lcs = lcs(D1, ..., Dn), π(D_lcs) will be the coarsest partition that is a refinement of each π(Di); this means that two attribute sequences will be considered equal by D_lcs iff they are considered equal by all of the Di's, i.e., if they contain the same number of occurrences (modulo 2) of all n attributes. Thus π(D_lcs) contains at least 2^n equivalence classes, and hence its equality graph must contain at least 2^n nodes. Finally, since converting a description D to an EQG requires time only polynomial in |D|, no equivalent description D can be more than polynomially smaller.

In the theorem, exponential growth occurs because |lcs(D1, D2)| is bounded only by |D1| · |D2|; hence lcs(D1, ..., Dn) can be large.
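The parity construction in the proof can be checked mechanically. In this small sketch (our own modelling, not code from the paper), the equivalence class of an attribute sequence under the common refinement is its vector of attribute-occurrence parities, and we count the classes reachable from the empty sequence.

```python
# Our own check of Theorem 4's blowup: under D_i two sequences are equal
# iff they agree on the parity of occurrences of a_i, so the coarsest
# common refinement classifies a sequence by its full parity vector.

def refinement_classes(n):
    """Count the equivalence classes of the lcs partition by exploring
    all parity vectors reachable from the empty attribute sequence."""
    start = (0,) * n
    seen = {start}
    stack = [start]
    while stack:
        state = stack.pop()
        for i in range(n):                     # appending a_i flips bit i
            nxt = state[:i] + (1 - state[i],) + state[i + 1:]
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen)

for n in range(1, 6):
    print(n, refinement_classes(n))  # 2, 4, 8, 16, 32: 2**n classes
```

Since every parity vector is reachable, the equality graph of the lcs needs at least 2^n vertices, matching the count in the proof.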
We note that exponential behavior cannot occur without this multiplicative growth:

Proposition 2 Under the conditions of Theorem 3, if |α ⊔_k β| ≤ |α| + |β|, then lcs(D1, ..., Dn) can be computed in polynomial time.

Figure 1: Two EQG's and their lcs. (The figure shows the EQG's for the descriptions (AND (SAME-AS (LoanApplicant) (Student)) (SAME-AS (Cosigner) (Student Father))) and (AND (SAME-AS (LoanApplicant) (HouseBuyer)) (SAME-AS (Cosigner) (HouseBuyer Father))); their lcs is (SAME-AS (Cosigner) (LoanApplicant Father)).)

From Table 1, we see that for CLASSIC without SAME-AS, lcs is tractable in this strong sense.

Conclusions

This paper has considered a new operation for description logics: computing the least common subsumer (LCS) of a pair of descriptions. This operation is a way of computing the largest set of commonalities between two descriptions, and has applications in areas including inductive learning and knowledge-based vivification. We have analyzed the LCS operation, primarily by analyzing its relationship to the well-studied and well-understood problem of subsumption. First, we showed that no close relationship holds in the general case. We then defined the class of structural subsumption relations (which apply to a large portion of extant DLs) and showed that, in this restricted setting, the relationship between LCS and subsumption is quite close; making use of this relationship, we also developed a very general and modular implementation of LCS. Finally, we presented a method for computing the LCS of attribute chain equalities, and presented some additional results on the tractability of computing the LCS of a set of descriptions; the latter problem is important in learning.

We have seen that constructors such as SAME-AS whose subsumption is not easily described in a "structural" manner can cause complications in the computation of LCS.
For this reason, we have postponed consideration of generalizations of SAME-AS which are known to make subsumption undecidable, such as SUBSET-OF and SAME-AS applied to role (as opposed to attribute) chains. We also leave as further work analysis of LCS for DLs allowing recursive concept definitions, role hierarchies, or role constructors such as inversion and transitive closure.

Acknowledgements

The authors would like to thank Ron Brachman, Peter Patel-Schneider, and Bart Selman for comments on a draft of the paper; we are also grateful to Tony Bonner and Michael Kifer for pointing out a technical error in an earlier version of the paper. Other colleagues at Bell Labs contributed through many helpful discussions.

Appendix: The CLASSIC 1.0 DL

The following are the description constructors for CLASSIC, and their syntax.

(AT-LEAST n r): the set of individuals with at least n fillers of role r.

(AT-MOST n r): the set of individuals with at most n fillers of role r.

(ALL r D): the set of individuals for which all of the fillers of role r satisfy D.

(AND D1 ... Dn): the set of individuals that satisfy all of the descriptions D1, ..., Dn.

(FILLS r I1 ... In): the set of individuals for which individuals I1, ..., In fill role r.

(ONE-OF I1 ... In): denotes the set {I1, ..., In}.

(TEST T1 ... Tn): tests (arbitrary host language predicates) T1, ..., Tn are true of the instances of the concept. This is essentially an escape-hatch to the host language, which enables a user to encode procedural sufficiency conditions.

(PRIM i): denotes primitive concept i. A primitive concept is a name given to a "natural kind", a set of objects for which no sufficient definition exists.

(SAME-AS (p1 ... pn) (q1 ... qm)): the set of individuals for which the result of following the first chain of attributes is the same as the result of following the second chain of attributes.

References

(Ait-Kaci, 1984) Hassan Ait-Kaci. A lattice theoretic approach to computation based on a calculus of partially ordered type structures. PhD thesis, University of Pennsylvania, 1984.

(Beck et al., 1989) H. Beck, H. Gala, and S. Navathe. Classification as a query processing technique in the CANDIDE semantic model. In Proceedings of the Data Engineering Conference, pages 572-581, Los Angeles, California, 1989.

(Bergamaschi et al., 1988) S. Bergamaschi, F. Bonfatti, and C. Sartori. Entity-situation: a model for the knowledge representation module of a KBMS. In Advances in Database Technology: EDBT'88. Springer-Verlag, 1988.

(Borgida and Etherington, 1989) A. Borgida and D. Etherington. Hierarchical knowledge bases and efficient disjunctive reasoning. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, Toronto, Ontario, 1989.

(Borgida et al., 1989) A. Borgida, R. J. Brachman, D. L. McGuinness, and L. Resnick. CLASSIC: A structural data model for objects. In Proceedings of SIGMOD-89, Portland, Oregon, 1989.

(Brachman et al., 1983) R. J. Brachman, R. E. Fikes, and H. J. Levesque. Krypton: A functional approach to knowledge representation. IEEE Computer, 16(10):67-73, 1983.

(Buntine, 1988) Wray Buntine. Generalized subsumption and its application to induction and redundancy. Artificial Intelligence, 36(2):149-176, 1988.

(Cohen and Hirsh, 1992) W. Cohen and H. Hirsh. Learnability of description logics. In preparation, 1992.

(Cohen et al., 1992) W. Cohen, H. Hirsh, and A. Borgida. Learning in description logics using least common subsumers. In preparation, 1992.

(Devanbu et al., 1991) P. Devanbu, R. J. Brachman, P. Selfridge, and B. Ballard. LaSSIE: A knowledge-based software information system.
Communications of the ACM, 35(5), May 1991.

(Edelman and Owsnicki, 1986) J. Edelman and B. Owsnicki. Data models in knowledge representation systems. In Proceedings of GWAI-86. Springer-Verlag, 1986.

(Flann and Dietterich, 1989) Nicholas Flann and Thomas Dietterich. A study of explanation-based methods for inductive learning. Machine Learning, 4(2), 1989.

(Frisch and Page, 1990) A. Frisch and C. D. Page. Generalization with taxonomic information. In Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, Massachusetts, 1990. MIT Press.

(Hirsh, 1992) Haym Hirsh. Polynomial-time learning with version spaces. In Proceedings of the Tenth National Conference on Artificial Intelligence, San Jose, California, 1992. MIT Press.

(Hopcroft and Ullman, 1979) John E. Hopcroft and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 1979.

(Levesque, 1986) Hector Levesque. Making believers out of computers. Artificial Intelligence, 30:81-108, 1986. (Originally given as the "Computers and Thought" lecture at IJCAI-85.)

(Mays et al., 1987) E. Mays, C. Apte, J. Griesmer, and J. Kastner. Organizing knowledge in a complex financial domain. IEEE Expert, pages 61-70, Fall 1987.

(Patel-Schneider, 1984) P. F. Patel-Schneider. Small can be beautiful in knowledge representation. In Proceedings of the IEEE Workshop on Principles of Knowledge-Based Systems, pages 11-16, Denver, Colorado, 1984.

(Patel-Schneider, 1989) P. F. Patel-Schneider. A four-valued semantics for terminological logics. Artificial Intelligence, 38:319-351, 1989.

(Plotkin, 1969) G. D. Plotkin. A note on inductive generalization. In Machine Intelligence 5, pages 153-163. Edinburgh University Press, 1969.

(Schewe, 1989) Klaus Dieter Schewe. Variant construction using constraint propagation techniques over semantic networks. In Proceedings of the 5th Austrian AI Conference, Innsbruck, 1989.

(Valiant, 1984) L. G. Valiant.
A theory of the learnable. Communications of the ACM, 27(11), November 1984.

(Woods and Schmolze, 1992) W. A. Woods and J. G. Schmolze. The KL-ONE family. Computers And Mathematics With Applications, 23(2-5), March 1992.